
Frequency and Spatial Oriented Hybrid Deep Network for Lung Cancer Screening

 

Abstract: Lung cancer is the second most common cancer and one of the deadliest diseases worldwide, and it is very challenging to diagnose in its early stages. In this paper, a unique deep learning architecture is proposed to detect lung cancer at an early stage. The proposed deep learning structure analyzes the input in depth and produces better feature maps to identify the variation between healthy and diseased samples. The proposed H-DNN (Hybrid Deep Neural Network) architecture is the fusion of two different deep neural networks. The first neural network (DNN-1 or SDNN) analyzes spatial information, and the second neural network (DNN-2 or FDNN) analyzes frequency-related features. Finally, this paper combines both neural networks to produce a better result. The first network is trained on texture information computed by the local binary pattern, and the second network is trained on frequencies calculated by wavelets. This method provides 98.8% classification accuracy, which is better than that of conventional CNN and SVM classifiers.

Keywords: Convolutional Neural Network (CNN), Frequency Deep Neural Network (FDNN), Spatial Deep Neural Network (SDNN), Lung Cancer, Rectified Linear Unit, Deep Neural Network, Support Vector Machine Classifier.

 

1. INTRODUCTION

Worldwide, cancer accounts for a large share of deaths, and lung cancer is among the most predominant. In 2012, an estimated 1.6 million people died of lung cancer, and another 1.8 million new cases were diagnosed [1]. Screening for malignant growth in the lung is critical for early diagnosis and treatment, with improved screening methods promoting better patient outcomes. However, lung cancer screening is prone to false positives, increasing costs through unnecessary treatment and causing needless anxiety for patients [2]. Automated computer-aided methods for continuous analysis of lung cancer help to increase diagnosis at the earlier screening stages and eventually lead to a decreased false positive rate. Our objective therefore narrows down to a classification problem: recognizing lung cancer in patient CT scans that contain healthy and diseased lungs. We use techniques from digital image processing and deep learning, especially 2D convolutional neural networks, to develop a precise classifier. The proposed classifier reduces the time and cost of lung cancer screening, which permits more wide-ranging early detection and an improved survival rate. In this paper, the novel deep learning classifier predicts whether a patient suffers from lung cancer when the patient's chest CT scan is given as input.

 

2.  LITERATURE SURVEY

Support Vector Machines (SVM), Artificial Neural Networks (ANN), and Nearest Neighbour methods are the supervised machine learning classifiers most widely used for classification and recognition in recent times. These classifiers must be trained with specific features to obtain proper classification performance. The existing machine learning techniques focused on lung cancer classification are reviewed here. In an ANN, the input features play a critical role; in addition, the model is complex and requires a large amount of computation. Zhou et al. [3] proposed an Artificial Neural Network (ANN) based ensemble to obtain high accuracy in the classification of carcinoma. Zhu et al. [3] selected 25 features based on the textures of Solitary Pulmonary Nodules (SPNs) obtained from CT images, which were classified using an SVM classifier. Likewise, Shao et al. [5] proposed an automated system to recognize SPNs effectively using the SVM classifier. Rao et al. [6] and Kurniawan et al. [7] used CNNs in a simplified manner for the detection of lung cancer in CT scans.

Song et al. [8] studied the classification performance of CNNs in detail, along with related models such as the deep neural network and the stacked autoencoder, which acts as a multiple-layer sparse autoencoder for a complicated neural network. They concluded that, among all the models compared, the CNN had the highest accuracy. Chen et al. [9] experimented with nodule enhancement and segmentation before performing any nodule detection task.

Hosny et al. [10] and Xu et al. [11] used feature learning and transfer learning along with data augmentation on CNN models. Both studies used flipping and rotation for augmentation, along with translation. The authors leveraged the LUNA16 dataset for the training and detection of nodules. In these studies, the trained detector was then applied to the KDCB17 dataset, which has the potential to offer global features; when the local features are combined with these, they form an independent nodule classifier that shows higher accuracy for the detection of lung cancer.

3. PROPOSED SYSTEM

A fusion of two different convolutional neural networks is proposed to analyze both spatial-related and frequency-related features.

 

3.1. Network architecture

This network architecture is made up of convolutional layers, Rectified Linear Units (ReLU), and max-pooling layers. Convolutional layers help to preserve the spatial dimensions; they perform filtering on the input image with multiple filter kernels. The most common filter size is 3×3.

Fig. 1. Proposed architecture for the HDNN

 

ReLU is the non-linear activation function used at the end of the convolution layer. The activation function used by the neurons in the convolution grids is represented as f(x) = max(0, x). This non-linearity added to the network aids the convergence of gradients, unlike conventional saturating activation functions such as sigmoids. Max-pooling layers downsample the output of the previous convolutional layers; this is achieved by applying a max filter with a stride of 2 to the obtained feature maps. It provides translational invariance and reduces the computational overhead.
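For illustration, a minimal sketch of this conv–ReLU–max-pool building block is given below. PyTorch is assumed here, and the channel count and input size are illustrative choices rather than the exact values of the proposed network.

```python
import torch
import torch.nn as nn

# conv -> ReLU -> max-pool block as described above
block = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1),  # 3x3 filters, padding preserves spatial size
    nn.ReLU(),                               # f(x) = max(0, x)
    nn.MaxPool2d(kernel_size=2, stride=2),   # downsample by 2 with a max filter
)

x = torch.randn(1, 1, 128, 128)   # one grayscale CT patch (batch, channel, H, W)
y = block(x)
print(y.shape)                    # torch.Size([1, 8, 64, 64])
```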

3.2. Zero center normalization

            Standardization of the input images in the training dataset is a common requirement for machine learning. It is common to make the individual features resemble a standard normal distribution with zero mean. Hence, in this paper, the mean is subtracted and the non-constant features are then normalized by dividing them by their standard deviation [12-16].
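A minimal sketch of this zero-center normalization step is shown below, assuming the training images are stacked in a NumPy array; using per-pixel statistics is one reasonable interpretation of "individual features".

```python
import numpy as np

def zero_center_normalize(images):
    """images: array of shape (N, H, W) of training images; returns a standardized copy."""
    images = images.astype(np.float32)
    mean = images.mean(axis=0)     # per-pixel mean over the training set
    std = images.std(axis=0)       # per-pixel standard deviation
    std[std == 0] = 1.0            # leave constant features un-scaled
    return (images - mean) / std
```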

                                          

3.3. Lung area segmentation

Segmentation of the lung region using adaptive thresholding [13] and morphological operations helps to separate the lung region from the chest wall. Mathematically, each pixel is thresholded with a different threshold value, so the operation adaptively discriminates foreground from background; it is expressed by

                                                             (1)                                           

                                                            (2)                                     

where TH(p,q) refers to the threshold, Id(x,y) represents the dilation, and F''(x,y) indicates the complement of the input image. Otsu-based adaptive thresholding [16-20] is adopted here for segmentation. The intensities of the chest wall and lung region pixels take distinct values, so the adaptive algorithm separates the foreground region with a different threshold and therefore gives better segmentation results. After the lung region segmentation, morphological operations are used to remove unwanted portions and fill the holes inside the lung region.
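The sketch below illustrates one way such Otsu-style thresholding plus morphological clean-up could be implemented with scikit-image and SciPy; the structuring-element size and minimum object size are illustrative assumptions, not the exact pipeline of this paper.

```python
from scipy.ndimage import binary_fill_holes
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, disk, remove_small_objects

def segment_lungs(ct_slice):
    """ct_slice: 2-D grayscale CT slice -> boolean lung mask."""
    th = threshold_otsu(ct_slice)                    # Otsu threshold for this slice
    air = ct_slice < th                              # lung/air pixels are darker than the chest wall
    air = binary_closing(air, disk(3))               # smooth ragged borders
    air = remove_small_objects(air, min_size=500)    # drop small noisy regions
    return binary_fill_holes(air)                    # fill vessels/nodules inside the lung field
```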

 

                                                        

                                                     Fig. 6. Segmentation of lung regions

 

3.4 Deep Network Layers

Convolutional layer

The essential building blocks of a CNN are the convolution filters. These filters are able to learn features specific to the input image and its corresponding output. The resulting feature maps are passed on to the subsequent layers. Convolution has the property of translational invariance, which helps to detect the different features in the input image.

 

Max pooling

          Pooling is used to decrease the filters' sensitivity to noise and other illumination effects. It essentially performs subsampling and smooths the image by averaging or taking the maximum over a masked region.

 

Rectified Linear Unit

        The activation function controls the firing of neurons in the CNN so that it learns specific features. Finally, the fully connected network completes the designed neural network to perform the classification or segmentation tasks. In this layer, non-linearity is added to make the model better suited to real-world cases. It is defined as

                        R = max(0, Z)                     (3)

           where Z is the input feature value and R is the output.

 

3.5. Spatial Deep Network

The first deep neural network consists of the following base layers, and the local binary pattern image is used to train the network model.

3.5.1. Local Binary Pattern

Local Binary Pattern (LBP) is used to extract texture information from the image for classification problems. Every generated pattern conveys information about the local variation in the image. This feature information is the input of the first deep network, which is therefore called the spatial network. The texture information helps to differentiate the normal and abnormal areas in the lung CT image.

The spatial information is given to the first deep neural network, which learns the changes in the spatial information; here the paper uses the local binary pattern to analyze the spatial information. Image patterns are simply regular or repeated spatial arrangements, and these patterns differ between normal and abnormal CT lung images. The texture patterns are represented by LBP (Local Binary Patterns); this pattern information is most useful to discriminate the areas of abnormality, and it is expressed as

                 LBP(P, R) = Σ_{p=0}^{P−1} s(Zp − Zq) · 2^p,   where s(x) = 1 if x ≥ 0 and 0 otherwise            (4)

where Zq and Zp are the intensities of the central pixel and the neighborhood pixels respectively, P is the number of neighborhood samples, and R is the radius.
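A minimal sketch of computing the LBP image that is fed to the spatial network is given below, using scikit-image's local_binary_pattern; P = 8 neighbours and radius R = 1 are illustrative parameter choices.

```python
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1                                # 8 neighbours on a radius-1 circle
ct_slice = np.random.rand(128, 128)        # stand-in for a preprocessed CT slice
lbp_image = local_binary_pattern(ct_slice, P, R, method="default")
# lbp_image has the same size as the slice; each pixel holds its binary-pattern code,
# and this LBP image is what the spatial deep network (SDNN) is trained on.
```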

 

                   Fig. 7. Local binary pattern of a lung image

The LBP image gives the pattern information that helps to identify normal and abnormal samples with high accuracy.

 

3.5.2. DNN1 Architecture

 

               The spatial deep network consists of four convolution layers and two different pooling layers. The resulting feature map is 8 channels deep, generated by combining the four convolutional layers, and classification is performed with a cross-entropy loss, so the network accumulates the receptive fields of all layers.

                         Fig. 8. The architecture of the Spatial Deep Neural Network (SDNN)
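A minimal PyTorch sketch in the spirit of this four-convolution, two-pooling SDNN is shown below; the input size, channel widths, and fully connected size are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SDNN(nn.Module):
    """Illustrative 4-conv / 2-pool network; widths and input size are assumptions."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 128 -> 64
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
        )
        self.classifier = nn.Linear(8 * 32 * 32, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))      # logits for nn.CrossEntropyLoss

model = SDNN()
logits = model(torch.randn(1, 1, 128, 128))       # one 128x128 LBP image
```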

 

3.6. DNN2 Architecture

The frequency deep network consists of five convolution layers, two different pooling layers, and one dropout layer. The resulting feature map is 4 channels deep, generated by combining two convolutional layers, and classification is performed with a cross-entropy loss, so the network accumulates the receptive fields of all layers.

Fig. 10. The architecture of the Frequency Deep Neural Network (FDNN)
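The sketch below illustrates how the frequency-domain input for the FDNN could be prepared with a single-level 2-D wavelet decomposition (PyWavelets with a Haar wavelet is assumed); stacking the four sub-bands gives a 4-channel input consistent with the 4-channel-deep feature description above. The FDNN layers themselves would mirror the SDNN sketch, with five convolution layers and a dropout layer.

```python
import numpy as np
import pywt

ct_slice = np.random.rand(128, 128)               # stand-in for a preprocessed CT slice
cA, (cH, cV, cD) = pywt.dwt2(ct_slice, "haar")    # approximation + detail sub-bands
freq_input = np.stack([cA, cH, cV, cD], axis=0)   # shape (4, 64, 64): 4-channel frequency input
```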

4. FUSION OF NETWORKS

The fusion of the networks helps to expand the receptive field in terms of feature vectors: both trained networks share their receptive fields to improve the overall receptive field, and in the final stage the two outputs are combined by a mean (averaging) technique.
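A minimal sketch of this mean-based fusion is given below, assuming the two trained networks each output class logits for a test image.

```python
import torch

def fuse_predictions(sdnn_logits, fdnn_logits):
    """Average the class probabilities of the two networks and pick the final label."""
    p_spatial = torch.softmax(sdnn_logits, dim=1)
    p_frequency = torch.softmax(fdnn_logits, dim=1)
    p_fused = (p_spatial + p_frequency) / 2.0    # mean of the two networks' outputs
    return p_fused.argmax(dim=1)                 # final class label per image
```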

 

5.  RESULTS AND DISCUSSION

5.1. Data Augmentation

This work uses a total of 180 sample images, of which 90 are normal and 70 are abnormal. For training, 600 and 500 images respectively were generated by rotation and translation; after flipping and rotating all training images, there are in total 1100 normal and 1350 abnormal images for training the network. For testing, 100 normal and 50 abnormal samples respectively were taken.
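The augmentation described above (rotation, translation, and flipping) could be implemented, for example, with torchvision transforms as in the sketch below; the rotation and translation ranges are illustrative assumptions.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1)),  # small rotation and shift
])
# 'augment' is applied to each training image (PIL image or tensor) to enlarge the training set.
```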

The experimental results show that the proposed hybrid technique outperforms other conventional methods such as CNN and SVM. The performance metrics are formulated below.

Accuracy: Accuracy is the system's ability to discriminate correctly between normal and abnormal samples. To estimate the accuracy of this framework, the proportion of true positives and true negatives among all evaluated samples is computed. Mathematically, it is expressed as:

                   Accuracy = (TP + TN) / (TP + TN + FP + FN)               (5)

The sensitivity and specificity are also mathematically expressed as

                 Sensitivity = TP / (TP + FN)                                                         (6)

                 Specificity = TN / (TN + FP)                                                         (7)

where TP, TN, FP, and FN are true positives, true negatives, false positives, and false negatives, respectively.
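A minimal sketch of computing Eqs. (5)–(7) from predicted and true labels (1 = abnormal, 0 = normal) is shown below.

```python
import numpy as np

def screening_metrics(y_true, y_pred):
    """y_true, y_pred: arrays of 0/1 labels (1 = abnormal). Returns Eqs. (5)-(7)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```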

               

                       Fig. 11. ROC of the proposed system vs. DNN-1 vs. DNN-2

The ROC curve for the proposed system is shown in Figure 11; its AUC is 98.95.

        Fig. 12. ROC of the proposed system vs. CNN vs. SVM

The ROC curves in Figure 12 compare the proposed framework with conventional CNN and SVM; the AUC of the proposed framework is 98.8, which is higher than that of CNN and SVM.
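For reference, ROC curves and AUC values such as those in Figures 11 and 12 can be produced from the fused probability scores with scikit-learn, as in the sketch below (the labels and scores shown are stand-in values, not the paper's data).

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([0, 0, 1, 1, 1])                    # stand-in ground-truth labels
fused_scores = np.array([0.1, 0.4, 0.35, 0.8, 0.9])   # stand-in fused P(abnormal) per test image
fpr, tpr, _ = roc_curve(y_true, fused_scores)
print("AUC:", auc(fpr, tpr))
```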

 

Fig. 14. Training progress plot

 

Table 1. Proposed system performance

 

Method      Accuracy (%)   Sensitivity (%)   Specificity (%)
DNN1        91             93                93
DNN2        93             94                92
Proposed    98.3           97                96.5

 

Table 2. Layers distribution

 

Architecture   Conv-2d kernel   Fully connected   Dropout
SDNN           3×3              2000              0.3
SDNN           5×5              1000              0.5
SDNN           7×7              1000              0.7
FDNN           3×3              2000              0.3
FDNN           5×5              2000              0.5
FDNN           7×7              2000              0.7
Hybrid DNN     3×3              2000              0.3
Hybrid DNN     5×5              1000              0.5
Hybrid DNN     7×7              1000              0.7

 

Table 3. Performance of the proposed system with dropout 0.5

Method      Accuracy (%)   Sensitivity (%)   Specificity (%)
SDNN        94             93.3              93
FDNN        95.5           94                95.7
Proposed    98.8           96.7              97.5

 

6. CONCLUSION AND FUTURE SCOPE

The proposed deep learning structure deeply analyzes the input and produces better feature maps to identify the difference between normal and abnormal samples. The combined spatial- and frequency-based networks analyze the input data separately in the spatial and frequency domains, so the misclassification rate is significantly reduced. This method provides 98.8% classification accuracy, which is better than that of other methods such as CNN, NN, and SVM classifiers. In the future, different types of features could be used to train the deep network to obtain a reasonable improvement in the detection rate.

REFERENCES

 

  1. Torre, Lindsey A., et al. “Global cancer statistics, 2012.” CA: a cancer journal for clinicians 65.2 (2015): 87-108.
  2. Firmino, Macedo, et al. “Computer-aided detection system for lung cancer in computed tomography scans: review and future prospects.” Biomedical engineering online 13.1 (2014): 41.
  3. Demir, Cigdem, and Bülent Yener. “Automated cancer diagnosis based on histopathological images: a systematic survey.” Rensselaer Polytechnic Institute, Tech. Rep (2005).
  4.  Jaffar, M. Arfan, et al. “GA and morphology based automated segmentation of lungs from Ct scan images.” 2008 International Conference on Computational Intelligence for Modelling Control & Automation. IEEE, 2008.
  5. Messay, Temesguen, Russell C. Hardie, and Steven K. Rogers. “A new computationally efficient CAD system for pulmonary nodule detection in CT imagery.” Medical image analysis 14.3 (2010): 390-406.
  6. Gomathi, M., and P. Thangaraj. “A computer-aided diagnosis system for lung cancer detection using support vector machine.” American Journal of Applied Sciences 7.12 (2010): 1532.
  7. Taher, Fatma, and Rachid Sammouda. “Lung cancer detection by using artificial neural network and fuzzy clustering methods.” 2011 IEEE GCC Conference and Exhibition (GCC). IEEE, 2011.
  8. Hashemi, Atiyeh, Abdol Hamid Pilevar, and Reza Rafeh. “Mass Detection in Lung CT Images Using Region Growing Segmentation and Decision Making Based on Fuzzy Inference System and Artificial Neural Network.” International Journal of Image, Graphics & Signal Processing 5.6 (2013).
  9. Patz, Edward F., et al. “Overdiagnosis in low-dose computed tomography screening for lung cancer.” JAMA internal medicine 174.2 (2014): 269-274.
  10. Kumar, Devinder, Alexander Wong, and David A. Clausi. “Lung nodule classification using deep features in CT images.” 2015 12th Conference on Computer and Robot Vision. IEEE, 2015.
  11. Devi, T. Arumuga Maria, et al. “Meyer controlled Watershed segmentation on Schistosomiasis in HyperSpectral data analysis.” 2015 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT). IEEE, 2015.
  12. Jeyalakshmi Aruna, Parasuraman Kumar, Arumuga Maria Devi. “Graph Cut Based Method for Automatic Lung Segmentation for Tuberculosis by using Screening Method in Chest Radiographs”. Digital Image Processing 7.9,(2015): 285-291
  13. Devi, T. Arumuga Maria, VI Mebin Jose, and P. Kumar Parasuraman. “A novel approach for automatic detection of non-small cell lung carcinoma in ct images.” 2016 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT). IEEE, 2016.
  14. V.i.mebin Jose, Dr.t.arumuga Maria Devi. “A non invasive computer aided diagnosis system for early detection of lung carcinoma in ct medical images “. International Journal of Latest Trends in Engineering and Technology8. 4 (2017): 125-130
  15. Sakthi, T. S., K. Parasuraman, and A. Maria Devi. “Implementation of lung cancer nodule feature extraction using threshold technique.” International Advanced Research Journal in Science, Engineering and Technology 3.8 (2016): 29-33.
  16. S. Mohamed Vijithan, Kumar Parasuraman, T. Arumuga Maria Devi, “A Novel Approach for MRI Brain Image Segmentation using Local Independent Projection Model”, CIIT International Journal of Digital Image Processing, ISSN 0974 – 9691 & Online: ISSN 0974 – 9586, Vol 8 No 07, July 2016.
  17. D.Muthukumar, Dr.T.Arumuga Maria Devi, Performance Comparison on Various Bio Electrical Signals of MRI, CT and HSI in Human Abnormal conditions using Hyperspectral Signal Analysis based 3D Visualization”, International Journal for Research in Engineering Application and Management Scopus and UGC approved Journal, ISSN 2454-9150, Volume 04, Issue 04, July 2018.
  18. Dr. T. Arumuga Maria Devi, D. Muthukumar, “Visualization on Bio Electrical Signals through MRI, CT and HSI Analysis in Normal Conditions of Human Body”, International Journal of New Technologies in Science and Engineering (IJNTSE), ISSN: 2349-0780, Volume 6, Issue 1, Jan. 2019.
  19. V.I. Mebinjose and T. Arumuga Maria Devi, “Three Stream Network Model for Lung Cancer Classification In Ct Images”, International Journal of Advanced Research in Engineering and Technology (IJARET), ISSN Print: 0976-6480.
  20. T. Arumuga Maria Devi, V. I. Mebin Jose, P. Kumar Parasuraman, “A Novel Approach for Automatic Detection of Non Small Cell Lung Carcinoma in CT Images”, International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), ISBN: 978-1-5090-5240-0, 16th and 17th of 2016.
  21. T.Arumuga Maria Devi, D. Muthukumar, P. Kumar Parasuraman, “Graphical Representation of Voltage and Current measurements in series or parallel RLC resonant circuits for Magnification using Hyperspectral Analysis”, IEEE International Conference on Intelligent Techniques in Control, Optimization and Signal Processing (INCOS’17), March, 23-25,2017.
  22. Ms.R.Hepzibai, Dr.T.Arumuga Maria Devi, Mr.D.Muthukumar, ”Mr.P.Darwin, Detection of Normal and Abnormalities from Diabetics Patient’s Foot on Hyperspectral Image Processing”, 7 – International Conference on Innovations in Computer Science & Engineering (ICICSE – 2019), ISBN 978-981-15-2042-6.
  23. Dr.T.Arumuga Maria Devi,Mrs. N. Rekha, “Hyperspectral Image Classification using Spatial and Spectral Features”, International Journal of Scientific and Engineering Research, ISSN 2229-5518,July 2013.
  24. Dr.T. Arumuga Maria Devi , V.Senthilkumar, “Hyperspectral Video Data across WiMax Networks”, International Journal of Scientific and Engineering Research, ISSN 2229-5518,July 2014.
  25. Dr.T. Arumuga Maria Devi, “Video Transcoding of Temporal Hyperspectral Images on Web Browsing”, International Journal of Computer Application and Engineering Technology, ISSN 2277-7962, Volume 3, Issue3, July 2014.

 

  26. T. Arumuga Maria Devi, G. Arumugaraj, “A Modified MSRCR Technique for Hyper Spectral Images on Various Levels of Resolution Enhancement”, International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, ISSN (Print): 2320-3765, ISSN (Online): 2278-8875, Vol. 4, Issue 7, July 2015.
  27. T. Arumuga Maria Devi, I. Rajeswari, “Hyperspectral Band Clustering on EBCOT Pre Encoding Technique”, International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, ISSN (Print): 2320-3765, ISSN (Online): 2278-8875, Vol. 4, Issue 7, July 2015.
  28. V. Senthilkumar, T. Arumuga Maria Devi, “Hyper spectral Image Processing in Coal Detection”, Transactions on Engineering and Sciences, ISSN 2347-1964 (Online), ISSN 2347-1875, Volume 3, Issue 5, July-September 2015.
  29. Dr. T. Arumuga Maria Devi, G. Arumugaraj, “A Novel Technique of Resolution Enhancement in Hyperspectral Images on Proposed CHLAE Technique”, Journal of Chemical and Pharmaceutical Sciences (JCPS), ISSN 0974-2115, ISSN 0975-7384, Volume 9, Issue 1, January-March 2016.
  30. Dr. T. Arumuga Maria Devi, I. Rajeswari, “Comparison between the parameter values of EBCOT Pre Encoding Technique on Hyperspectral Band Clustering with EBCOT Post Encoding Technique”, Journal of Chemical and Pharmaceutical Sciences (JCPS), ISSN 0974-2115, ISSN 0975-7384, Volume 9, Issue 1, January-March 2016.

 

 

              

 

 

 
