Review

Deep Learning Applications in Computed Tomography Images for Pulmonary Nodule Detection and Diagnosis: A Review

1 College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China
2 Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen 518060, China
3 College of Applied Sciences, Shenzhen University, Shenzhen 518060, China
* Author to whom correspondence should be addressed.
Submission received: 15 November 2021 / Revised: 21 January 2022 / Accepted: 22 January 2022 / Published: 25 January 2022
(This article belongs to the Special Issue Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging)

Abstract

Lung cancer has one of the highest mortality rates of all cancers and poses a severe threat to people’s health. Therefore, diagnosing lung nodules at an early stage is crucial to improving patient survival rates. Numerous computer-aided diagnosis (CAD) systems have been developed to detect and classify such nodules in their early stages. Currently, CAD systems for pulmonary nodules comprise data acquisition, pre-processing, lung segmentation, nodule detection, false-positive reduction, nodule segmentation, and nodule classification. A number of review articles have considered various components of such systems, but this review focuses on the segmentation and classification components. Specifically, it categorizes the segmentation literature by lung nodule type and by network architecture, i.e., general neural network and multiview convolutional neural network (CNN) architectures. Moreover, this work organizes the classification literature according to whether candidates are classified as nodule or non-nodule and as benign or malignant. The essential CT lung datasets and evaluation metrics used in the detection and diagnosis of lung nodules are also systematically summarized. Thus, this review provides a baseline understanding of the topic for interested readers.

1. Introduction

Lung cancer is one of the deadliest forms of cancer worldwide and represents a significant threat to human health and life. Over the past 50 years, the incidence and mortality rate of lung cancer have increased significantly in many countries. For example, the American Cancer Society estimated approximately 1,898,160 new cancer cases and approximately 608,570 cancer deaths in the United States for 2021.
In general, the detection of lung cancer begins with the diagnosis of lung nodules, which are a leading radiological indicator for early diagnosis. The degree of malignancy of the nodule depends on its diameter. In most cases, nodules are small, rounded opacities within the pulmonary interstitium [1], which is a collection of support tissues within the lung that includes the alveolar epithelium, pulmonary capillary endothelium, basement membrane, and perivascular and perilymphatic tissues [2]. Lung nodules vary widely in terms of their shapes, sizes, and types [3]. Some nodules are spherical, with diameters from <2 mm to 30 mm [4], whilst other nodules have complex vascular attachments located in regions with large vessels and are challenging to detect. For instance, solid nodules (SN) and sub-solid nodules (SSNs) have densities only slightly above that of the surrounding lung parenchyma [5]. SNs are the most common type of nodule and contain the core functional lung tissues, whereas SSNs are pulmonary tumors with restricted ground-glass opacity (GGO). SSNs can be further categorized into part-solid nodules and pure ground-glass nodules [6]. These nodules show opacifications of a greater density than the nearby tissues and do not obscure underlying broncho-vascular structures [7].
As nodule size is related to malignancy, accurately measuring the diameter of nodules is critical to diagnosis. Several studies [1,8,9] have provided guidelines to this end. For example, the Early Lung Cancer Action Program (ELCAP) database [3] suggests a 1% malignancy rate for nodules smaller than 5 mm in diameter, 24% for nodules between 6 mm and 10 mm, 33% for nodules between 11 mm and 20 mm, and 80% for nodules of more than 20 mm [10]. However, errors may arise while measuring the diameter of very small nodules.
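The size bands quoted above translate directly into a simple triage rule. Purely as an illustration (the function name and the handling of the boundary between the quoted bands are our own choices, and real guidelines weigh many additional factors), such a lookup might be written as:

```python
def elcap_malignancy_rate(diameter_mm: float) -> float:
    """Approximate malignancy rate (%) for a nodule of the given diameter,
    restating the ELCAP-derived figures quoted above. Illustrative only;
    the 5-6 mm gap between the quoted bands is resolved with a simple cut."""
    if diameter_mm < 5:
        return 1.0
    elif diameter_mm <= 10:
        return 24.0
    elif diameter_mm <= 20:
        return 33.0
    else:
        return 80.0


print(elcap_malignancy_rate(7.5))   # 24.0
print(elcap_malignancy_rate(22.0))  # 80.0
```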
The treatment of lung cancer nodules is relatively complex. Nearly 70% of lung cancer patients require radiation therapy as part of their treatment, but such therapy can cause radiation-induced lung injury, which is a limiting toxicity that may decrease cure rates and increase morbidity and mortality. To extract additional information from nodules and improve the accuracy of nodule classification, computer-aided diagnosis (CAD) systems have become critical tools for radiologists. CAD systems are designed to overcome observational errors, reduce false-negative rates [11], and provide a second opinion for medical image interpretation and diagnosis [12]. Several studies have suggested that incorporating a CAD system into the diagnostic process can improve the performance of image diagnosis by decreasing inter-observer variation [13]. Likewise, CAD systems: provide quantitative support for clinical decisions such as biopsy recommendations [14]; assist in the performance of diagnostic checkups; reduce unnecessary false-positive biopsies [15] and thoracotomies [12]; and can be used to differentiate between malignant and benign tumors [16,17].
Positive results in clinical studies have led to an upsurge in lung cancer detection using CAD models. Adopting and using such systems can improve survival rates through the diagnosis of lung nodules at early stages. The current computed tomography (CT) CAD applications search for pulmonary densities with specific physical characteristics (e.g., sphericity) that are representative of lung nodules [11]. Thus, CT CAD applications for lung nodule screening have become an active area of research.
Initially, lung nodule diagnosis was heavily dependent on approaches that did not incorporate machine learning [18,19,20,21,22,23,24]. Later, machine learning-based approaches [25,26,27,28,29,30] were introduced to build the optimal boundary using data [31]. Recently, much research has been devoted to deep learning (DL)-inspired methods due to the accuracy of their predictions. DL-based models are different from conventional CAD systems, as they can be easily optimized and applied to a large amount of data [32]. DL relies on convolutional neural networks (CNNs) and has made a significant contribution to lung nodule diagnosis and management [33,34,35,36]. DL has been applied to pulmonary nodule diagnosis in three modules: nodule detection, segmentation, and classification. The detection module is responsible for localizing the nodule, the segmentation module aims to contour the nodule voxels, and the classification module predicts the nodule type (i.e., benign or malignant) [31].
Previous studies have reviewed research on pulmonary nodule detection techniques [31,32,37,38,39,40,41,42,43], with a range of objectives. The objective of this review article, however, is to focus on the segmentation and classification modules of the pulmonary CAD system. The segmentation and classification tasks are the core components of a CAD system and underpin the final decisions regarding whether a candidate is a nodule or non-nodule, as well as its type and size. Furthermore, this work organizes the lung nodule segmentation literature by network architecture (general neural network and multiview CNN architectures), which should give new researchers in the field a clear understanding of the landscape and intuition for future work. To this end, this review describes recent and earlier publications from reputable databases, including IEEE Xplore, Web of Science, PubMed, ScienceDirect, and Scopus, that have addressed the difficulties involved in diagnosing lung nodules.
The rest of the paper is organized as follows: Section 2 provides an overview of a CAD framework; Section 3 and Section 4 describe the primary datasets and various evaluation metrics; and Section 5 and Section 6 outline the research on segmentation and classification approaches, respectively.

2. General CAD Framework for Detection and Diagnosis of Pulmonary Nodules

Different CAD systems comprise different elements. The most common components of a CAD system are data acquisition, pre-processing, lung segmentation, lung nodule detection, false positive (FP) reduction, lung nodule segmentation, and lung nodule classification [42,44]. In the data acquisition step, the images used by the CAD system are collected. CT is a preferred choice for early nodule screening for this purpose, due to its high sensitivity and relatively low cost [45]. The pre-processing step involves the elimination of noise, artifacts, and other useless information from the images, thus improving the image quality for the subsequent steps. During lung segmentation, the lungs are delineated from the surrounding thoracic tissue in the CT images [46]. The detection step involves the localization of the lung nodule or mass and is followed by the FP reduction step [47]. FP reduction is an essential process and involves identifying true lung nodules among the detected candidate nodules. In the lung nodule segmentation step, each nodule is segmented from the lung parenchyma. Then, in the feature extraction step, the characteristics of the nodule are quantified. These features are further used in the nodule classification step. Nodule classification is the final, and most vital, component of a CAD system and involves the differentiation of benign and malignant nodules. A typical pipeline of a lung nodule CAD system is depicted in Figure 1.
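To make the flow of these stages concrete, the sketch below chains deliberately crude, rule-based stand-ins for the first few components and runs them on a synthetic volume. It is a toy illustration of the pipeline structure only; every threshold and helper here is our own placeholder, not part of any published CAD system.

```python
import numpy as np
from scipy import ndimage


def preprocess(vol_hu):
    """Clip to a lung window and rescale to [0, 1]."""
    return (np.clip(vol_hu, -1000, 400) + 1000) / 1400


def segment_lungs(vol_hu):
    """Crude lung mask: threshold low-attenuation voxels and fill holes.
    Real systems also exclude air outside the body and apply morphology."""
    return ndimage.binary_fill_holes(vol_hu < -500)


def detect_candidates(vol_hu, lung_mask):
    """Label dense blobs inside the lungs as candidate nodules and
    return their centroid voxel coordinates."""
    blobs = (vol_hu > -300) & lung_mask
    labels, n = ndimage.label(blobs)
    return ndimage.center_of_mass(blobs, labels, range(1, n + 1))


# Synthetic scan: soft-tissue background, an air-filled "lung" block,
# and one solid blob standing in for a nodule.
vol = np.full((32, 64, 64), 60.0)      # soft tissue, ~60 HU
vol[4:28, 8:56, 8:56] = -800.0         # lung field
vol[14:18, 30:34, 30:34] = 40.0        # fake solid nodule

_ = preprocess(vol)
candidates = detect_candidates(vol, segment_lungs(vol))
print(f"{len(candidates)} candidate(s) at voxel coordinates: {candidates}")
```

In a real system each placeholder would be replaced by a learned component (e.g., a detection network followed by an FP-reduction classifier), but the data flow between stages remains the one shown above.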

3. Datasets

DL models rely heavily on datasets, as advanced learning algorithms only perform well when trained on high-quality data. However, high-quality, labeled training sets are usually complicated and expensive to produce. As a result, few public databases are available to support the development of lung nodule CAD systems. Indeed, whilst some organizations have made significant contributions to the formation of public datasets to facilitate research on the diagnosis of lung nodules using CT, these datasets do not adopt a standardized format for the storage of nodule information, and their labeling procedures also vary. For instance, some datasets label nodules using the coordinates of polygon vertices, while others record the center and radius of each nodule. A selection of databases used for lung nodule diagnosis research is briefly described below.
LIDC-IDRI: The Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) contains 1018 CT scans with marked-up annotated lesions in DICOM format. The diagnostic annotations are provided in XML format. For each CT scan, four experienced thoracic radiologists performed a two-stage annotation and revision procedure. In the first phase, the radiologists independently assessed each CT scan and labeled lesions as “nodule ≥ 3 mm,” “nodule < 3 mm,” or “non-nodule ≥ 3 mm”. After that, each radiologist independently reviewed the labels assigned by the other radiologists and gave their final diagnosis [48].
LUNA16: The Lung Nodule Analysis 2016 (LUNA16) dataset is derived from LIDC-IDRI. It includes 36,378 annotations by radiologists on 888 selected CT scans. The authors only considered annotations categorized as nodules that were ≥3 mm as relevant lesions; nodules that were <3 mm and non-nodule lesions were not regarded as relevant for lung cancer screening protocols. A total of 1186 nodules were considered to be positive examples (i.e., the lesions that the algorithms should detect). Other nodules, i.e., those with different diameters, were regarded as irrelevant findings, and the marks on such locations were not counted as either FPs or true positives; rather, the irrelevant findings were excluded from the evaluation altogether [49].
Ali Tianchi: This dataset was developed by the Ali Tianchi Medical AI Competition and includes information on the nodules of 1000 patients. All the nodules were marked and confirmed by three doctors, except for the nodules analyzed by pathology. Nodules of 5–10 mm account for 50% of the dataset, and nodules of 10–30 mm account for the remaining 50%. Information on the position and size of the marked nodules is stored in CSV format.
NSCLC: The Non-Small Cell Lung Cancer (NSCLC) dataset contains information from 211 patients and 1355 CT images, with the images of 144 patients being obtained from axial CT imaging using an automatic segmentation algorithm. All the obtained segmentations were reviewed by thoracic radiologists, each of whom had more than five years of experience. The nodule annotations are stored in AIM format [50].
ELCAP: The Early Lung Cancer Action Program (ELCAP) database consists of a dataset of 50 low-dose documented whole-lung CT scans for detection purposes. The CT scans were obtained in a single breath-hold with a 1.25 mm slice thickness. The dataset also provides the locations of nodules between 2 mm and 5 mm in diameter that were detected by a radiologist [3].
ANODE09: This dataset belongs to the 2009 Automated Nodule Detection (ANODE09) challenge, which contains 55 CT scans, each with a slice thickness of approximately 1.0 mm. However, only five scans of 512 × 512 pixels were annotated; the other 50 scans were left unmarked to test the models. The annotated scans comprise 39 nodules and 31 non-nodules [51].
Table 1 summarizes the cited datasets and their detailed composition.
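As a concrete example of the bookkeeping these datasets require, the sketch below reads a LUNA16-style annotation (assuming the usual annotations.csv columns: seriesuid, coordX/Y/Z in millimetres, and diameter_mm) and converts a nodule centre from world coordinates to a voxel index with SimpleITK. File paths are placeholders for a local copy of the data.

```python
import pandas as pd
import SimpleITK as sitk

# Hypothetical local paths; adjust to wherever the data was downloaded.
annotations = pd.read_csv("annotations.csv")     # seriesuid, coordX/Y/Z (mm), diameter_mm
row = annotations.iloc[0]

image = sitk.ReadImage(f"{row.seriesuid}.mhd")   # MetaImage volume for that series

# Annotations are given in world (physical) coordinates; convert the nodule
# centre to a voxel index using the volume's origin, spacing and direction.
world = (float(row.coordX), float(row.coordY), float(row.coordZ))
voxel = image.TransformPhysicalPointToContinuousIndex(world)

# Note: arrays obtained via sitk.GetArrayFromImage are indexed (z, y, x).
print(f"Nodule of {row.diameter_mm:.1f} mm at voxel (x, y, z) = {voxel}")
```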

4. Lung Nodule Evaluation Metrics

Existing research has recognized the critical role played by evaluation metrics, which are used to measure the performance of the developed diagnostic models. The models for the detection and classification of lung nodules are mostly assessed according to sensitivity (SEN), specificity (SPEC), accuracy (ACC), precision (PPV), F1-score, receiver operating characteristic (ROC) curve, free-response operating characteristic (FROC), and area under the ROC curve (AUC), but the competition performance metric (CPM) can also be used to assess their performance [27]. The various metrics used to evaluate the performance of lung cancer algorithms are summarized in Table 2.
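Most of these metrics reduce to simple counts over a confusion matrix, with CPM additionally averaging sensitivity over several FPs-per-scan operating points on the FROC curve. A minimal NumPy/scikit-learn sketch of the most common ones:

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity, accuracy, precision and F1 from binary labels."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    sen = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    ppv = tp / (tp + fp)
    f1 = 2 * ppv * sen / (ppv + sen)
    return dict(SEN=sen, SPEC=spec, ACC=acc, PPV=ppv, F1=f1)


def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return 2 * np.sum(a & b) / (np.sum(a) + np.sum(b))


# Toy example: 8 candidates with ground-truth labels and model scores.
labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.1, 0.8, 0.3])
print(confusion_metrics(labels, scores >= 0.5))
print("AUC =", roc_auc_score(labels, scores))
```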

5. Lung Nodule Segmentation

The accurate segmentation of lung nodules is challenging due to their small size, especially at the edge of the lung and near the blood vessels. Lung nodule segmentation is relatively broad and varies in terms of architecture, image pre-processing, and training strategy [52]. For instance, some DL-based nodule segmentation approaches are based on multiview neural network architecture, whilst others are based on general neural network architecture. Approaches that are based on multiview neural network architecture consider multiple views of lung nodules and combine them as an input to neural networks. General neural network architecture, on the other hand, builds on traditional CNN networks by changing or adding some blocks.
Thus, this section categorizes the segmentation literature into two key approaches to pulmonary nodule segmentation: general neural network architectures and multiview architectures (which also address multiscale problems). Moreover, the type and shape of a lung nodule significantly affect the choice of segmentation method. Therefore, this section also covers the segmentation methods proposed for different types and sizes of nodules.

5.1. General Neural Network Architecture

Most published studies combine a conventional convolutional neural network (CNN) architecture with additional network blocks for lung nodule segmentation. The U-Net and fully convolutional network (FCN) architectures are two frequently used base structures. Numerous works have shown that CNN architectures can significantly improve the performance of lung segmentation [53,54,55,56,57,58,59,60], especially semantic segmentation networks such as FCN [61] and U-Net [62]. Such networks implement two key steps. First, image feature maps are extracted through a down-sampling path that filters out unnecessary information whilst retaining the important information. Second, the resulting feature maps are amplified through an up-sampling path to recover a higher-resolution output.
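As a reference point for the architectures discussed below, the following is a minimal two-level U-Net in PyTorch. It illustrates only the down-sampling/up-sampling structure and skip connections and is not the configuration used by any particular study cited here.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    """Two-level U-Net: the down-sampling path extracts features, the
    up-sampling path restores resolution, and skip connections re-inject
    fine detail lost during pooling."""

    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)                       # full resolution
        e2 = self.enc2(self.pool(e1))           # 1/2 resolution
        b = self.bottleneck(self.pool(e2))      # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                    # per-pixel nodule logits


x = torch.randn(1, 1, 64, 64)                   # one 64 x 64 CT patch
print(TinyUNet()(x).shape)                      # torch.Size([1, 1, 64, 64])
```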
Inspired by these networks, many segmentation studies modified and fine-tuned their models by leveraging the basic CNN architecture or changing or adding blocks to that CNN architecture. Huang et al. [57] proposed a system with four major modules: candidate nodule detection with faster regional-CNN (R-CNN), candidate merging, FP reduction using a CNN, and nodule segmentation using a customized FCN. Their model was trained and validated on the LIDC-IDRI dataset and achieved an average of 0.793 DSC. Tong et al. [59] utilized the U-Net architecture to perform the lung nodule segmentation. Their proposed method improved network performance by combining the U-Net with a residual block. Moreover, the lung parenchyma was extracted using a morphological method and the image was cropped to 64 × 64 pixels as an input to their improved network. The proposed model was trained and validated on the LUNA16 dataset and achieved 0.736 DSC. Usman et al. [56] proposed a dynamic modification region of interest (ROI) algorithm. This approach used Deep Res-UNet as the foundation for locating the input lung nodule volumes and improving lung nodule segmentation. Their method was divided into two stages. In the first stage, the Deep Res-Net was used for training and predicting the axial axis of the CT images. The second stage then focused on the new ROI in the CT image and used the deep Res-UNet architecture for the coronal and sagittal axes to train the network. The second stage also integrated the prediction results from the first stage into the final 3D results. Ultimately, the proposed method achieved 87.55% average DSC, 91.62% SEN, and 88.24% PPV.
Zhao et al. [60] proposed the implementation of a patch-based 3D U-Net and contextual CNN to automatically segment and classify lung nodules. This process began with a 3D U-Net architecture being used to segment the lung nodules, before generative adversarial networks (GANs) [63] were used to enhance the 3D U-Net, and, finally, the contextual CNN was used to reduce the lung nodule segmentation FPs and improve the benign and malignant classification. This method achieved good results in segmenting lung nodules and classifying nodule types. Kumar et al. [55] utilized V-Net [64] for their lung nodule segmentation model. The proposed architecture adopted a 3D CNN model, using only the convolutional layers and ignoring the pooling layers. This model was evaluated on the LUNA16 dataset and achieved a DSC of 0.9615. Pezzano et al. [65] proposed a lung nodule segmentation network that added Multiple Convolutional Layer (MCL) blocks to the U-Net. The proposed network architecture was also divided into two phases, as opposed to being an end-to-end network. In the first phase, the trained model produced the initial segmentation results; then, in the second phase, a morphological method was used for post-processing in order to highlight the nodules at the lung edges. The proposed architecture was trained and validated on the LIDC-IDRI database. The model achieved an 85.9% sensitivity, a 76.7% IoU, and an 86.1% F1-score. Keetha et al. [54] proposed a resource-efficient U-Det architecture by integrating U-Net with Bi-FPN (implemented in EfficientDet). The proposed network was trained and tested on the LUNA16 dataset and achieved an average DSC of 82.82%, an average SEN of 92.25%, and an average PPV of 78.92%.
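Several of these networks are trained with Dice-based objectives (e.g., the Dice coefficient loss listed for PN-SAMP in Table 3). A minimal soft Dice loss in PyTorch might look as follows; the smoothing constant is an illustrative choice, not a value taken from any cited paper.

```python
import torch


def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss (1 - DSC) for binary nodule segmentation.
    `logits` are raw network outputs; `target` is the binary ground-truth mask."""
    probs = torch.sigmoid(logits).flatten(1)      # (batch, voxels)
    target = target.flatten(1).float()
    intersection = (probs * target).sum(dim=1)
    dice = (2 * intersection + eps) / (probs.sum(dim=1) + target.sum(dim=1) + eps)
    return 1 - dice.mean()


# Example: loss for a batch of two 64 x 64 predictions against random masks.
pred = torch.randn(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.98).float()
print(soft_dice_loss(pred, mask).item())
```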

5.2. Multiview CNN Architecture

Many research studies have proposed new architectures by taking multiple views of lung nodules as inputs to neural networks to achieve improved results [53,66,67,68,69,70]. These segmentation methods are primarily based on CNN networks and combine multiscale or multiview methods to train the neural networks. The structures of the different networks are depicted in Figure 2.
For instance, Zhang et al. [71] used a conventional method for nodule segmentation with a multiscale Laplacian of Gaussian filter to detect nodules. The proposed method was evaluated on the LUNA16 dataset and achieved a detection score of 0.947. Similarly, Shen et al. [70] considered different scales at feature levels in a single network and proposed the use of a multicrop CNN (MC-CNN) to automatically extract salient nodule information by employing a novel multicrop pooling strategy. Dong et al. [67] proposed a multiview secondary input residual (MV-SIR) CNN model for 3D lung nodule segmentation. Their approach achieved good results, with a 0.926 average DSC and a 0.936 PPV.
Cao et al. [72] constructed a dual-branch residual network (DB-ResNet) and obtained improved lung segmentation results, with an average SEN of 89.35% and an average DSC of 82.74%. The proposed method employed two newly integrated schemes. First, it used the multiview and multiscale features of different nodules in CT images; second, it combined the intensity features with a CNN. Recently, Wu et al. [53] developed an interpretable, multitask learning CNN–joint learning for pulmonary nodule segmentation attributes and malignancy prediction (PN-SAMP) based on the U-Net architecture. The model achieved an average DSC of 73.89% and an average SEN of 97.58%. Finally, Wang et al. [69] proposed a central-focused CNN (CF-CNN) to segment lung nodules. Their architecture comprised two stages that used the same neural network to extract features and then merge those features. In addition, the authors used central pooling to preserve more of the features of the lung nodules. The model was tested on the LIDC-IDRI dataset and achieved an average DSC of 82.15% and an average SEN of 92.75%.
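The common pattern behind these multiview models is a 2D encoder (often with shared weights) applied to orthogonal or multiscale patches whose features are then fused for the final prediction. The sketch below shows that generic pattern only and does not reproduce any specific published architecture.

```python
import torch
import torch.nn as nn


class ViewEncoder(nn.Module):
    """Small 2D CNN applied independently to each orthogonal view."""

    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)


class MultiViewNet(nn.Module):
    """Shared encoder over axial/coronal/sagittal patches; the fused
    features feed a small head (here a single nodule score)."""

    def __init__(self, feat=32):
        super().__init__()
        self.encoder = ViewEncoder(feat)          # weights shared across views
        self.head = nn.Sequential(nn.Linear(3 * feat, 64), nn.ReLU(inplace=True), nn.Linear(64, 1))

    def forward(self, axial, coronal, sagittal):
        fused = torch.cat([self.encoder(v) for v in (axial, coronal, sagittal)], dim=1)
        return self.head(fused)


# Three orthogonal 32 x 32 patches extracted around the same candidate voxel.
views = [torch.randn(4, 1, 32, 32) for _ in range(3)]
print(MultiViewNet()(*views).shape)               # torch.Size([4, 1])
```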

5.3. Segmentation Based on Lung Nodule Type

As mentioned above, pulmonary nodules have different types, shapes, and clinical features. Thus, the procedures used to detect nodules, as well as the associated challenges of such procedures, vary from case to case. In this section, we begin to address this issue by summarizing some of the works that have considered variations based on type. Table 3 shows the related methods and summarizes their key highlights.
Generally, lung nodules that lie close to blood vessels and the pleura are the most challenging to detect. Thus, capturing fine detail at nodule boundaries is a core requirement for all models. Pezzano et al. [65] proposed a U-Net-based network to segment lung nodules. The authors developed a multiple convolutional layers (MCL) module to refine the detail of boundary nodules and post-processed the segmentation results with morphological methods to strengthen the detail of nodules at the lung edges. Dong et al. [67], meanwhile, proposed a model that incorporated features of voxel heterogeneity (VH) and shape heterogeneity (SH): VH reflects differences in voxel gray values, while SH reflects the shape characteristics of the nodule. The authors found that VH primarily captures gray-level information, whereas SH better captures boundary information.
In addition, for juxta-pleural and small nodules, Cao et al. [72] proposed DB-ResNet with a central intensity-pooling (CIP) layer, which preserves the intensity features of the central target voxel. For well-circumscribed nodules, Huang et al. [57] proposed a system that included both segmentation and classification; in the segmentation step, the authors demonstrated performance superior to that of competing models.
However, the above methods exhibited slightly worse performance for juxta-pleural, juxta-vascular, and ground-glass opacity nodules. Thus, to improve performance on these nodule types, Al-Shabi et al. [73] proposed using residual blocks with a 3 × 3 kernel size to extract local features and non-local blocks to extract global features. The proposed method kept the number of parameters low while maintaining strong performance. The LIDC-IDRI dataset was used for training and testing, and the model outperformed DenseNet and ResNet trained with transfer learning, scoring an AUC of 95.62%. Recently, Aresta et al. [58] constructed iW-Net, which combines automatic nodule segmentation with optional user intervention. The architecture performed well on large nodules without any user intervention; when user intervention was incorporated, the quality of nodule segmentation for non-solid and sub-solid abnormalities improved significantly.
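Several of the works above rely on morphological post-processing to sharpen boundary detail and suppress spurious responses. A generic clean-up routine of this kind (not the specific procedure of any cited paper) can be written with SciPy as follows:

```python
import numpy as np
from scipy import ndimage


def postprocess_mask(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Generic morphological clean-up of a predicted nodule probability map:
    threshold, close small gaps, fill holes and keep the largest component."""
    mask = prob_map >= threshold
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3, 3)))
    mask = ndimage.binary_fill_holes(mask)
    labels, n = ndimage.label(mask)
    if n > 1:                                   # keep only the largest blob
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)
    return mask


# Toy 3-D probability map with one strong blob and low-level noise.
prob = np.random.default_rng(0).random((32, 32, 32)) * 0.3
prob[12:20, 12:20, 12:20] = 0.9
print(postprocess_mask(prob).sum(), "voxels retained")
```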
Table 3. Deep learning-based lung nodule segmentation architectures and their key information.
Pezzano et al. [65], 2021. Architecture: CoLe-CNN. Dataset: LIDC-IDRI. Approach: 2D; U-Net based; Inception-v4 architecture; mean-square-error loss. Performance: F1 = 86.1; IoU = 76.6.
Dong et al. [67], 2020. Architecture: MV-SIR. Dataset: LIDC-IDRI. Approach: 2D/3D; residual block; secondary input; multiple views; voxel heterogeneity (VH); shape heterogeneity (SH). Performance: ASD = 7.2 ± 3.3; HSD = 129.3 ± 53.3; DSC = 92.6 ± 3.5; PPV = 93.6 ± 2.2; SEN = 98.1 ± 11.3.
Keetha et al. [54], 2020. Architecture: U-Det. Dataset: LUNA16. Approach: 2D; U-Net based; Bi-FPN; EfficientDet; Mish activation function. Performance: DSC = 82.82 ± 11.71; SEN = 92.24 ± 14.14; PPV = 78.92 ± 17.52.
Cao et al. [72], 2020. Architecture: DB-ResNet. Dataset: LIDC-IDRI. Approach: 2D/3D; ResNet; multiview; multiscale; central intensity-pooling (CIP). Performance: DSC = 82.74 ± 10.19; ASD = 19 ± 21; SEN = 89.35 ± 11.79; PPV = 79.64 ± 13.34.
Kumar et al. [55], 2020. Architecture: V-Net. Dataset: LUNA16. Approach: 3D; V-Net; PReLU; only fully convolutional layers. Performance: DSC = 96.15.
Usman et al. [56], 2020. Architecture: adaptive ROI with multi-view residual learning. Dataset: LIDC-IDRI. Approach: 2D/3D; deep residual U-Net; adaptive ROI; multiview. Performance: SEN = 91.62; PPV = 88.24; DSC = 87.55.
Tang et al. [74], 2019. Architecture: NoduleNet. Dataset: LIDC-IDRI. Approach: 3D; multitask; residual block; detection, FP reduction, and segmentation; different loss functions. Performance: DSC = 83.10; CPM = 87.27.
Huang et al. [57], 2019. Architecture: Faster R-CNN. Dataset: LUNA16. Approach: 2D; Faster R-CNN; merging of overlapping candidates; FP reduction; FCN based. Performance: ACC = 91.4; DSC = 79.3.
Aresta et al. [58], 2019. Architecture: iW-Net. Dataset: LIDC-IDRI. Approach: 3D; U-Net based; two points on the nodule boundary; no heavy pre-processing steps; augmentation. Performance: IoU = 55.
Hesamian et al. [75], 2019. Architecture: atrous convolution. Dataset: LIDC-IDRI. Approach: 2D; atrous convolution; residual network; weighted loss; normalization to [0, 255]. Performance: DSC = 81.24; Precision = 79.75.
Liu et al. [76], 2018. Architecture: Mask R-CNN. Dataset: LIDC-IDRI. Approach: 2D; ResNet101 + FPN backbone; transfer learning; RPN; FCN. Performance: mAP = 73.34 and 79.65.
Khosravan et al. [77], 2018. Architecture: semi-supervised multitask learning. Dataset: LUNA16. Approach: 3D; data augmentation; semi-supervised; FP reduction. Performance: SEN = 98; DSC = 91.
Wu et al. [53], 2018. Architecture: PN-SAMP. Dataset: LIDC-IDRI. Approach: 3D; 3D U-Net; WW/WC; Dice coefficient loss; segmentation and classification. Performance: DSC = 73.98.
Tong et al. [59], 2018. Architecture: improved U-Net. Dataset: LUNA16. Approach: 2D; U-Net; modified residual block; lung parenchyma extraction. Performance: DSC = 73.6.
Zhao et al. [60], 2018. Architecture: 3D U-Net and contextual CNN. Dataset: LIDC-IDRI. Approach: 3D; 3D U-Net; GAN; morphological methods; residual block; Inception structure. Performance: not reported.
Wang et al. [66], 2017. Architecture: MV-CNN. Dataset: LIDC-IDRI. Approach: 2D/3D; multiview; multiscale patch strategy. Performance: SEN = 83.72; PPV = 77.59; DSC = 77.67.
Wang et al. [69], 2017. Architecture: CF-CNN. Dataset: LIDC-IDRI/GDGH. Approach: 2D/3D; central pooling; 3D patch; 2D views; sampling method; two datasets. Performance (LIDC): DSC = 82.15 ± 10.76; SEN = 92.75 ± 12.83; PPV = 75.84 ± 13.14. Performance (GDGH): DSC = 80.02 ± 11.09; SEN = 83.19 ± 15.22; PPV = 79.30 ± 12.09.

6. Classification

In addition to the above works on nodule detection, other studies have focused on the classification of candidate nodules according to the following categories: benign, primary cancer, and metastatic cancer. Classification tasks also incorporate FP screening or reduction, which can improve the accuracy of classification. Therefore, nodule classification is essential because it assists doctors in diagnosing benign and malignant nodules, thus improving the overall efficiency of diagnoses. For this purpose, deep neural networks are used to analyze various characteristics of lung nodules, such as their shape and location. Neural networks are further used to analyze the input lesion area and to predict the final results. This section therefore provides a brief overview of the existing literature to illustrate the various techniques that have been developed to support malignancy detection and the classification of lung nodules.

6.1. Classification as Nodule or Non-Nodule

When searching for candidate nodules, the presence of blood vessels and other soft tissues can lead to the detection of FPs. Adopting effective classification techniques can reduce FP detection and, thus, significantly improve the accuracy of nodule identification and reduce the difficulty of subsequent tasks. For example, Wu et al. [78] developed a deep residual network to classify lung nodules. For this, the authors adopted a transfer-learning-based 50-layer Res-Net structure with global average pooling. The Principal Component Analysis (PCA) technique was used to reduce the feature size and number of parameters. The architecture was trained and tested on the LIDC-IDRI dataset. It achieved an accuracy of 98.23%, a sensitivity of 97.7%, a specificity of 98.35%, and an F1-score of 98.06%. Tran et al. [79] proposed an LdcNet model to improve the accuracy of the classification of pulmonary nodules. They used a 15-layer 2D CNN and employed Focal loss to classify the pulmonary candidates as nodule or non-nodule. They also used data extracted from the LIDC-IDRI and LUNA16 data sets. Their model achieved an accuracy of 97.2%, a sensitivity of 96.0%, and a specificity of 97.3%.
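The focal loss used by Tran et al. down-weights the abundant, easily classified non-nodule candidates so that training concentrates on hard examples. A minimal binary focal loss in PyTorch is sketched below; the α and γ values are common defaults rather than the exact settings of [79].

```python
import torch
import torch.nn.functional as F


def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
    Down-weights easy negatives, which dominate candidate screening."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                         # p_t = p if y = 1 else 1 - p
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()


# Example: scores for six candidates, only one of which is a true nodule.
logits = torch.tensor([2.5, -1.0, -3.0, -0.5, -2.0, -4.0])
labels = torch.tensor([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
print(focal_loss(logits, labels).item())
```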
Li et al. [80] used CNN and a fully connected layer to classify pulmonary nodules. They divided the input patch into 32 × 32 and 64 × 64 sections using the nodule size and trained two identical networks. The proposed network was trained on 62,492 ROI samples, including 40,772 nodules and 21,720 non-nodules, from the LIDC-IDRI dataset. It achieved an accuracy of 86.4% and a sensitivity of 89.0%. Mastouri et al. [81] proposed a network to classify lung nodules, known as bilinear CNN (BCNN). Their network used VGG-16 and VGG-19 as the training network to extract the relevant features, while an SVM classifier was utilized for the nodule classification. The additional ablation experiments conducted by the authors on the LIDC-IDRI dataset demonstrated that the SVM classifier performed better than soft-max, KNN, and other classifiers. Additionally, using VGG-16 + VGG-19 was shown to be superior to using a single network (e.g., VGG-16 + VGG-16). Ultimately, the proposed network achieved an accuracy of 91.99% and an AUC of 95.9%. Finally, Yang et al. [82] suggested using a two-stage CNN (2S-CNN) to classify the candidates into nodules/non-nodules, leading to a classification accuracy of 89.6%.

6.2. Classification as Benign or Malignant

Given that lung nodules have various types and shapes, radiologists find it challenging to classify them as benign or malignant based on CT images—a topic that numerous studies have focused on. Table 4 summarizes the methods adopted in each study.
For example, Ali [83] proposed using a transferable texture CNN to improve the classification of pulmonary nodules in CT scans. The proposed approach used the LIDC-IDRI and LUNGx datasets for training and validation. Furthermore, a transfer learning technique was used: the weights of selected layers of the model trained on LIDC-IDRI were reused to retrain the proposed architecture on the LUNGx dataset. The trained model achieved an accuracy of 97.69%, an error rate of 3.3%, an AUC of 99.11%, and a sensitivity of 97.19% on the LIDC-IDRI dataset. It achieved excellent results, with a 90.91% accuracy, 91.37% sensitivity, and 94.14% AUC on the LUNGx dataset.
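The transfer-learning pattern used here (reuse pretrained weights, replace and retrain the classification head) is straightforward to reproduce. The sketch below applies it to a torchvision ResNet-18 purely for illustration; it is not the texture CNN of [83].

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone (torchvision >= 0.13 API;
# older versions use `pretrained=True` instead of `weights=...`).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only the new head trains at first.
for param in model.parameters():
    param.requires_grad = False

# Replace the 1000-class ImageNet head with a 2-class benign/malignant head.
model.fc = nn.Linear(model.fc.in_features, 2)

# CT patches are single-channel, so either repeat the slice to 3 channels
# (as below) or swap the first convolution for a 1-channel one.
patch = torch.randn(8, 1, 64, 64).repeat(1, 3, 1, 1)
logits = model(patch)                  # shape (8, 2)
print(logits.shape)
```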
Al-Shabi et al. [73] proposed combining a deep local-global network with residual and non-local blocks to extract the global features with few parameters. Their proposed network successfully analyzed the shape and size of the nodules. The architecture training and testing were performed using the LIDC-IDRI dataset. Their results produced an AUC of 95.62%, an accuracy of 88.46%, a sensitivity of 88.66%, and a precision of 87.38%. In another study [84], also led by Al-Shabi, gated dilated networks were used to classify nodules as malignant or benign. The proposed network incorporated a context-aware sub-network that analyzed the input features and guided the features to a suitably dilated convolution to reduce the parameters. The framework achieved better recognition of medium-sized nodules, based on an evaluation using the LIDC-IDRI dataset. Similarly, another model, MoDenseNet [85], used a two-pathway 3D CNN architecture with dense blocks to classify nodules as malignant or benign. Patches of the same nodule at different scales (e.g., 50 × 50 × 5, 100 × 100 × 10) were used as the network’s inputs. Finally, the intermediate and final feature maps were concatenated and classified into the prediction results. The proposed model was trained on 686 lung nodules from the LIDC-IDRI dataset. The testing results produced a TPR of 90.47%, a TNR of 90.33%, a PPV of 90.55%, an AUC of 95.48%, and an accuracy of 90.40%.
Wu et al. [53] developed a classification network to concatenate the feature map of the third layer of their proposed lung nodule segmentation network with the feature map of the final result of the lung nodule segmentation network so as to classify nodules as benign or malignant. Their model was evaluated on the LIDC-IDRI dataset and achieved an accuracy of 89.33%. Zhao et al. [60] proposed using a contextual CNN to reduce FPs when classifying nodules as benign or malignant. Moreover, they concatenated the feature map of the contextual CNN with the feature map of the sixth layer of the CNN. The proposed framework performed the required classification accurately and efficiently. In another study, a three-pathway multiview CNN model [86] was used to classify lung nodules as benign or malignant, with a curriculum learning strategy also being adopted to enable the network to learn more precise weightings. The model was evaluated on the LIDC-IDRI dataset and achieved a sensitivity of 91.07%, a specificity of 88.64%, a precision of 89.35%, an accuracy of 89.90%, and an AUC of 94.59%. Liu et al. [87] developed an MV-CNN framework to classify pulmonary nodules. They used multichannel CT images to improve the extraction of feature information, incorporating seven patches at different scales, and proposed both binary and ternary classification. Furthermore, they performed multiple experiments to prove that multichannel input performed better than single-channel input and achieved better quantitative results compared with other models.
Shen et al. [70] proposed a multicrop pooling layer technique based on a spatial pyramid pooling network (SPPNet) that captured nodule-centric visual features without requiring a nodule segmentation step prior to classification. The proposed model was evaluated on the LIDC-IDRI dataset and achieved an accuracy of 87.14%, an AUC of 93%, a sensitivity of 77%, and a specificity of 93%. Other methods have also been proposed to classify lung nodules as benign or malignant. For example, one study used an autoencoder together with a binary decision tree [88]: a fully connected (FC) layer was used to extract features, and the binary decision tree then classified the nodules; the model was evaluated on the LIDC-IDRI dataset.
In another study [89], the authors developed an unsupervised method based on a deep sparse autoencoder (SAE) to extract the robust features from the lung. They then used a linear support vector machine (SVM) to classify the nodules. Nasrullah et al. [90] utilized a gradient boosting machine (GBM) and MixNet [91] to learn the complex features of nodules. Their approach combined the advantages of dual-path networks (DPN) with residual networks (ResNet) and densely connected networks (DenseNet) to perform the lung nodule classification tasks. Akila et al. [92] proposed an efficient deep neural network for automatic pulmonary nodule classification with a supervised approach. The authors employed several deep neural networks, including RNN, LSTM, and CNN, and produced results showing that the RNN was less suitable for learning the relevant patterns than the LSTM and CNN models, with the CNN-based classifiers achieving a 25% higher accuracy than the RNN model. Similarly, another study [93] presented a novel interpretable deep hierarchical semantic CNN (HSCNN) for pulmonary nodule classification. The proposed model was trained and tested on the LIDC dataset and achieved better results than common 3D CNN approaches. Ge et al. [94] proposed a model with DenseNet-based architecture together with 3D filters and pooling kernels. The model achieved a classification accuracy of 92.4% on the LUNA16 dataset. Guobin [95] used a squeeze-and-excitation network (SENet) and aggregated residual transformations (SE-ResNeXt) to perform the required classification task. The model was evaluated on the LUNA16 dataset and achieved an accuracy of 91.67%. Yang [96] developed a novel two-stage CNN (2S-CNN) to classify lung CT images in an approach that consisted of two CNNs: one a basic network and the other a simplified version of GoogLeNet. The experimental results of this study produced an accuracy of 89.6%.
In another study [97], the authors proposed a multiscale synchronized deep supervision technique using an AlexNet network and synchronized deep supervision (SDS). The multiscale spatial pyramid strategy was used to extract the features from lung nodules at different scales. Another recent work [98] applied a 3D CNN (MMEL-3DCNN) to classify different types of lung nodules. The proposed approach was built on a multimodel network architecture and applied ensemble learning to improve the robustness of the nodule classification model. The experimental results were verified on the LIDC-IDRI dataset, and the model obtained satisfactory classification results. Kai [99] first constructed a residual attention network (RAN) and SENet to extract the spatial and contextual features of the nodules. Next, they introduced a novel multiscale attention network (MSAN) to capture the multiscale attention features and used a GBM algorithm to differentiate benign and malignant nodules. Experiments on the LIDC-IDRI database achieved an accuracy of 91.9%, a sensitivity of 91.3%, an FP rate of 8.0%, and an F1-score of 91.0%. Detailed results for the segmentation and classification models are compared in Table 5 and Table 6.
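A recurring pattern in these works [90,99] is to feed deep features into a gradient-boosting classifier for the final benign/malignant decision. The sketch below illustrates that pattern with synthetic feature vectors standing in for CNN embeddings; it reproduces the idea only, not any published model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Stand-in for CNN embeddings: in practice these would be penultimate-layer
# features extracted from nodule patches; here random vectors with a weak
# class-dependent shift keep the example self-contained.
rng = np.random.default_rng(42)
features = rng.normal(size=(600, 128))
labels = rng.integers(0, 2, size=600)            # 0 = benign, 1 = malignant
features[labels == 1, :10] += 0.8                # injected separability

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
gbm.fit(X_train, y_train)
print("held-out accuracy:", gbm.score(X_test, y_test))
```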
Table 4. Overview of lung nodule classification architectures and their key information.
2021, Ge Zhang [94]. Method: 3D DenseNet; dense block; transition layer. Task: malignant vs. benign. Performance: ACC = 92.4%; SEN = 87.0%; SPEC = 96.0%.
2020, Akila Agnes [92]. Method: CNN; 2D. Task: malignant vs. benign. Performance: SEN = 81%; SPEC = 91.9%; Precision = 87.8%; ACC = 87.26%; AUC = 0.944.
2020, Rekka Mastouri [81]. Method: BCNN; 3D; VGG16; VGG19; SVM. Task: nodule vs. non-nodule. Performance: ACC = 91.99%; SEN = 91.85%; SPEC = 92.27%; F1-score = 93.76%; FPR = 7.72%.
2020, Hong Liu [98]. Method: MMEL-3DCNN; VggNet; ResNet; InceptionNet; multinetwork. Task: malignant vs. benign. Performance: SEN = 0.837; SPEC = 0.939; ACC = 0.906; AUC = 0.939.
2020, Kai Xia [99]. Method: residual learning; dense learning; MSAN; GBM; 3D attention; dual-path. Task: malignant vs. benign. Performance: ACC = 91.9%; SEN = 91.3%; FP rate = 8.0%; F1-score = 91.0%.
2020, Wu et al. [78]. Method: 2D; transfer learning; ResNet50; PCA. Task: nodule vs. non-nodule. Performance: ACC = 98.23%; SEN = 97.7%; SPEC = 98.35%; F1 = 98.06%; Precision = 98.64%; FPR = 1.65%.
2020, Ali et al. [83]. Method: 2D; energy layer; transfer learning. Task: malignant vs. benign. Performance: ACC = 96.69% ± 0.72%; error rate = 3.3% ± 0.72%; AUC = 99.11% ± 0.45%; SEN = 97.19% ± 0.57%.
2019, Yang An [96]. Method: 2S-CNN; Inception CNN. Task: nodule vs. non-nodule. Performance: ACC = 89.6%.
2019, Zhang Li [97]. Method: AlexNet; SDS; MPPS. Task: malignant vs. benign. Performance: ACC = 93.68%; SEN = 95.17%; SPEC = 93.92%.
2019, Tran et al. [79]. Method: 2D; focal loss. Task: nodule vs. non-nodule. Performance: ACC = 97.2%; SEN = 96.0%; SPEC = 97.3%.
2019, Al-Shabi et al. [73]. Method: 2D; residual block; non-local block; self-attention. Task: malignant vs. benign. Performance: AUC = 95.62%; ACC = 88.46%; Precision = 87.38%; SEN = 88.66%.
2019, Al-Shabi et al. [84]. Method: 2D; multiple dilated convolutions; context-aware sub-network; mid-range-sized nodules. Task: malignant vs. benign. Performance: AUC = 93.15%; ACC = 92.57%; Precision = 91.85%; SEN = 92.21%.
2019, Guobin Zhang [95]. Method: SE-ResNeXt; SENet; ResNet. Task: malignant vs. benign. Performance: AUC = 0.9563; ACC = 91.67%.
2018, Shiwen Shen [93]. Method: HSCNN; 3D; sub-tasks. Task: malignant vs. benign. Performance: AUC = 0.856; ACC = 0.842; SEN = 0.705; SPEC = 0.889.
2018, Dey et al. [85]. Method: 3D; multiscale; multioutput; dense block. Task: malignant vs. benign. Performance: TPR = 90.47%; TNR = 90.33%; PPV = 90.55%; AUC = 95.48%; ACC = 90.40%.
2018, Wu et al. [53]. Method: 3D; 3D U-Net; WW/WC. Performance: ACC = 97.58%.
2018, Zhao et al. [60]. Method: 3D; CNN; Inception structure. Performance: not reported.
2017, Nibali et al. [86]. Method: 2D; ResNet18; transfer learning; curriculum learning. Task: malignant vs. benign. Performance: SEN = 91.07%; SPEC = 88.64%; Precision = 89.35%; AUC = 94.59%; ACC = 89.90%.
2017, Liu et al. [87]. Method: 2D; multiscale; multichannel; binary/ternary classification. Task: malignant vs. benign. Performance: SEN = 90.18%; SPEC = 100%; error rate = 5.41%; AUC = 0.981.
2016, Li et al. [80]. Method: 2D; CNN; multiscale; two networks. Task: nodule vs. non-nodule. Performance: ACC = 86.4%; SEN = 89.0%.
2016, Shen et al. [70]. Method: 3D; CNN; multi-crop pooling layer; malignancy score. Performance: ACC = 87.14%; AUC = 0.93; SEN = 0.77; SPEC = 0.93.
2015, Kumar et al. [88]. Method: 2D; CNN; binary decision tree. Task: malignant vs. benign. Performance: ACC = 75.01%; SEN = 83.35%; FP = 0.39 FP/patient.

7. Challenges and Future Perspectives

Despite recent breakthroughs in deep learning for diagnosing pulmonary nodules, analyzing large volumes of chest scan data still presents several challenges, including feature extraction, nodule detection, false-positive reduction, and benign-malignant classification [100]. For effective feature extraction and benign-malignant classification, recurrent neural networks (RNNs) [101], deep belief networks (DBNs) [102], and autoencoders [103] could be used. Similarly, advances in graphics processing units (GPUs) have positively impacted the use of deep learning; through the parallelization of CNNs, better feature extraction and classification may be achieved [104].
The majority of a CAD system's decision-making for nodule detection relies heavily on supervised learning approaches. However, supervised learning is costly and time-consuming because it relies on massive labeled datasets. Furthermore, a model trained on limited data has a higher risk of overfitting and convergence problems. Semi-supervised, unsupervised, and transfer learning approaches may be better suited to such situations. Balanced datasets, such as those used by [105], could also be used to limit false positives.
Aside from the challenges mentioned above, one of the most significant obstacles to deep learning pulmonary nodule analysis is the scarcity of well-annotated, large-scale labeled datasets. Therefore, large, high-quality datasets are also a pressing need for building an efficient deep learning CAD system to detect pulmonary nodules. From this review, it has also been noted that the leading solutions employ CNNs and make use of the provided sets of nodule candidates.
In the future, substantial resources and strong regulatory criteria will be required for lung nodule screening in order to ensure significant advantages and to reduce the number of false negatives and false positives. Future research should also focus on developing and validating simpler nodule evaluation algorithms by incorporating emerging diagnostic modalities such as molecular signatures, biomarkers, and liquid biopsies [106]. For this purpose, deep learning and machine learning algorithms are a natural choice, allowing more accurate automatic characterization and classification of nodules and potentially leading to revolutionary changes in radiology.

8. Conclusions

Recent research has incorporated DL strategies, achieving promising results in relation to the detection of lung nodules using CT images. However, segmenting and classifying lung nodules for the purposes of detection and diagnosis remains challenging.
Many lung nodule segmentation approaches are based on either general or multiview neural network architecture. Most studies using multiview neural networks incorporated some new architecture by taking multiple views of the lung nodules and using those views as inputs to the neural networks. In contrast, the general neural network-based methods were primarily based on U-Net architecture. Likewise, different lung nodule segmentation methods were adopted for different types of lung nodules, including boundary, juxta-pleural, small, well-circumscribed, and large nodules.
In terms of classification methods, many techniques have been proposed for the classification of lung nodules (e.g., whether they are benign or malignant). Most of the conducted works focused on supervised, as opposed to semi-supervised, learning. It is also noted that limited datasets are available for training and testing the models, with the LIDC-IDRI dataset being the most used. To compare the different approaches and their results, a series of tables was arranged to summarize the important findings. Different models used different metrics to verify their results on a range of datasets. Overall, the best-performing models vary greatly according to their data type, annotation conditions, and experimental aims. Therefore, comparing their performance is not straightforward. In spite of this limitation, our analysis of the literature highlights the importance of building robust DL architectures to segment and classify lung nodules accurately and efficiently. Finally, future research should also pursue new guidelines and easy-to-use computer-based algorithms, which would greatly aid both researchers and medical practitioners.

Author Contributions

Conceptualization, R.L., H.H. and B.H.; writing—original draft preparation, R.L., C.X. and Y.H.; supervision, H.H. and B.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the School-Enterprise Graduate Student Cooperation Fund of Shenzhen Technology University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bankier, A.A.; MacMahon, H.; Goo, J.M.; Rubin, G.; Schaefer-Prokop, C.M.; Naidich, D. Recommendations for Measuring Pulmonary Nodules at CT: A Statement from the Fleischner Society. Radiology 2017, 285, 584–600. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Hansell, D.M.; Bankier, A.A.; MacMahon, H.; McLoud, T.C.; Müller, N.L.; Remy, J. Fleischner Society: Glossary of Terms for Thoracic Imaging. Radiology 2008, 246, 697–722. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. ELCAP Public Lung Image Database. Available online: http://www.via.cornell.edu/lungdb.html (accessed on 21 January 2022).
  4. Choi, W.J.; Choi, T.S. Automated pulmonary nodule detection based on three-dimensional shape-based feature descriptor. Comput. Methods Programs Biomed. 2014, 113, 37–54. [Google Scholar] [CrossRef] [PubMed]
  5. Peloschek, P.; Sailer, J.; Weber, M.; Herold, C.J.; Prokop, M.; Schaefer-Prokop, C. Pulmonary Nodules: Sensitivity of Maximum Intensity Projection versus That of Volume Rendering of 3D Multidetector CT Data. Radiology 2007, 243, 561–569. [Google Scholar] [CrossRef] [PubMed]
  6. Kim, H.; Park, C.M.; Koh, J.M.; Lee, S.M.; Goo, J.M. Pulmonary subsolid nodules: What radiologists need to know about the imaging features and management strategy. Diagn. Interv. Radiol. 2013, 20, 47–57. [Google Scholar] [CrossRef] [PubMed]
  7. Radiopedia, Pulmonary Nodule. 2020. Available online: https://radiopaedia.org/articles/pulmonary-nodule-1 (accessed on 21 January 2022).
  8. Revel, M.P.; Bissery, A.; Bienvenu, M.; Aycard, L.; Lefort, C.; Frija, G. Are two-dimensional CT measurements of small noncalcified pulmonary nodules reliable? Radiology 2004, 231, 453–458. [Google Scholar] [CrossRef]
  9. Han, D.; Heuvelmans, M.A.; Oudkerk, M. Volume versus diameter assessment of small pulmonary nodules in CT lung cancer screening. Transl. Lung Cancer Res. 2017, 6, 52. [Google Scholar] [CrossRef] [Green Version]
  10. Henschke, C.I.; McCauley, D.I.; Yankelevitz, D.F.; Naidich, D.P.; McGuinness, G.; Miettinen, O.S.; Libby, D.M.; Pasmantier, M.W.; Koizumi, J.; Altorki, N.K.; et al. Early Lung Cancer Action Project: Overall design and findings from baseline screening. Lancet 1999, 354, 99–105. [Google Scholar] [CrossRef]
  11. Castellino, R.A. Computer aided detection (CAD): An overview. Cancer Imaging 2005, 5, 17–19. [Google Scholar] [CrossRef] [Green Version]
  12. McCarville, M.B.; Lederman, H.M.; Santana, V.M.; Daw, N.C.; Shochat, S.J.; Li, C.-S.; Kaufman, R.A. Distinguishing benign from malignant pulmonary nodules with helical chest CT in children with malignant solid tumors. Radiology 2006, 239, 514–520. [Google Scholar] [CrossRef]
  13. Singh, S.; Maxwell, J.; Baker, J.A.; Nicholas, J.L.; Lo, J.Y. Computer-aided Classification of Breast Masses: Performance and Interobserver Variability of Expert Radiologists versus Residents. Radiology 2011, 258, 73–80. [Google Scholar] [CrossRef] [PubMed]
  14. Giger, M.L.; Karssemeijer, N.; Schnabel, J.A. Breast image analysis for risk assessment, detection, diagnosis, and treatment of cancer. Annu. Rev. Biomed. Eng. 2013, 15, 327–357. [Google Scholar] [CrossRef] [PubMed]
  15. Joo, S.; Yang, Y.S.; Moon, W.K.; Kim, H.C. Computer-aided diagnosis of solid breast nodules: Use of an artificial neural network based on multiple sonographic features. IEEE Trans. Med. Imaging 2004, 23, 1292–1300. [Google Scholar] [CrossRef]
  16. Way, T.W.; Sahiner, B.; Chan, H.-P.; Hadjiiski, L.; Cascade, P.N.; Chughtai, A.; Bogot, N.; Kazerooni, E. Computer-aided diagnosis of pulmonary nodules on CT scans: Improvement of classification performance with nodule surface features. Med. Phys. 2009, 36, 3086–3098. [Google Scholar] [CrossRef] [Green Version]
  17. Way, T.W.; Hadjiiski, L.M.; Sahiner, B.; Chan, H.P.; Cascade, P.N.; Kazerooni, E.A.; Zhou, C.; Bogot, N. Computer-aided diagnosis of pulmonary nodules on CT scans: Segmentation and classification using 3D active contours. Med. Phys. 2006, 33, 2323–2337. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Giger, M.L.; Ahn, N.; Doi, K.; MacMahon, H.; Metz, C.E. Computerized detection of pulmonary nodules in digital chest images: Use of morphological filters in reducing false-positive detections. Med. Phys. 1990, 17, 861–865. [Google Scholar] [CrossRef]
  19. Ying, W.; Cunxi, C.; Tong, J.; Xinhe, X. Segmentation of regions of interest in lung CT images based on 2-D OTSU optimized by genetic algorithm. In Proceedings of the 2009 Chinese Control and Decision Conference, Guilin, China, 17–19 June 2009; pp. 5185–5189. [Google Scholar]
  20. Helen, R.; Kamaraj, N.; Selvi, K.; Raman, V.R. Segmentation of pulmonary parenchyma in CT lung images based on 2D Otsu optimized by PSO. In Proceedings of the International Conference on Emerging Trends in Electrical and Computer Technology, Nagercoil, India, 23–24 March 2011; pp. 536–541. [Google Scholar]
  21. Liu, Y.; Wang, Z.; Guo, M.; Li, P. Hidden conditional random field for lung nodule detection. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 3518–3521. [Google Scholar]
  22. John, J.; Mini, M. Multilevel Thresholding Based Segmentation and Feature Extraction for Pulmonary Nodule Detection. Procedia Technol. 2016, 24, 957–963. [Google Scholar] [CrossRef] [Green Version]
  23. Teramoto, A.; Fujita, H.; Yamamuro, O.; Tamaki, T. Automated detection of pulmonary nodules in PET/CT images: Ensemble false-positive reduction using a convolutional neural network technique. Med. Phys. 2016, 43, 2821–2827. [Google Scholar] [CrossRef] [PubMed]
  24. Mastouri, R.; Neji, H.; Hantous-Zannad, S.; Khlifa, N. A morphological operation-based approach for Sub-pleural lung nodule detection from CT images. In Proceedings of the 2018 IEEE 4th Middle East Conference on Biomedical Engineering (MECBME), Tunis, Tunisia, 28–30 March 2018; pp. 84–89. [Google Scholar]
  25. Santos, A.M.; Filho, A.O.D.C.; Silva, A.C.; de Paiva, A.C.; Nunes, R.A.; Gattass, M. Automatic detection of small lung nodules in 3D CT data using Gaussian mixture models, Tsallis entropy and SVM. Eng. Appl. Artif. Intell. 2014, 36, 27–39. [Google Scholar] [CrossRef]
  26. Orozco, H.M.; Villegas, O.O.V.; Sánchez, V.G.C.; Domínguez, H.D.J.O.; Alfaro, M.D.J.N. Automated system for lung nodules classification based on wavelet feature descriptor and support vector machine. Biomed. Eng. Online 2015, 14, 9. [Google Scholar] [CrossRef] [Green Version]
  27. Lu, L.; Tan, Y.; Schwartz, L.H.; Zhao, B. Hybrid detection of lung nodules on CT scan images. Med. Phys. 2015, 42, 5042–5054. [Google Scholar] [CrossRef] [PubMed]
  28. Farahani, F.V.; Ahmadi, A.; Zarandi, M.F. Lung nodule diagnosis from CT images based on ensemble learning. In Proceedings of the 2015 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), Niagara Falls, ON, Canada, 12–15 August 2015; pp. 1–7. [Google Scholar]
  29. Klik MA, J.; v Rikxoort, E.M.; Peters, J.F.; Gietema, H.A.; Prokop, M.; v Ginneken, B. Improved classification of pulmonary nodules by automated detection of benign subpleural lymph nodes. In Proceedings of the 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro; IEEE: Arlington, VA, USA, 6–9 April 2006; pp. 494–497. [Google Scholar]
  30. Froz, B.R.; Filho, A.O.D.C.; Silva, A.C.; Paiva, A.; Nunes, R.A.; Gattass, M. Lung nodule classification using artificial crawlers, directional texture and support vector machine. Expert Syst. Appl. 2017, 69, 176–188. [Google Scholar] [CrossRef]
  31. Wu, J.; Qian, T. A survey of pulmonary nodule detection, segmentation and classification in computed tomography with deep learning techniques. J. Med. Artif. Intell. 2019, 2, 1–12. [Google Scholar] [CrossRef]
  32. Liu, K.; Li, Q.; Ma, J.; Zhou, Z.; Sun, M.; Deng, Y.; Tu, W.; Wang, Y.; Fan, L.; Liu, S.; et al. Evaluating a fully automated pulmonary nodule detection approach and its impact on radiologist performance. Radiol. Artif. Intell. 2019, 1, e180084. [Google Scholar] [CrossRef]
  33. Shen, W.; Zhou, M.; Yang, F.; Yang, C.; Tian, J. Multiscale convolutional neural networks for lung nodule classification. In Proceedings of the International Conference on Information Processing in Medical Imaging 2014, Isle of Skye, UK, 28 June–3 July 2014; Springer: Cham, Switzerland, 2015; pp. 588–599. [Google Scholar]
  34. Ciompi, F.; Chung, K.; Van Riel, S.J.; Setio, A.A.A.; Gerke, P.K.; Jacobs, C. Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Sci. Rep. 2017, 7, 46479. [Google Scholar] [CrossRef] [PubMed]
  35. Causey, J.; Zhang, J.; Ma, S.; Jiang, B.; Qualls, J.A.; Politte, D.G.; Prior, F.W.; Zhang, S.; Huang, X. Highly accurate model for prediction of lung nodule malignancy with CT scans. Sci. Rep. 2018, 8, 9286. [Google Scholar] [CrossRef]
  36. Hua, K.L.; Hsu, C.H.; Hidayati, S.C.; Cheng, W.H.; Chen, Y.J. Computer-aided classification of lung nodules on computed tomography images via deep learning technique. OncoTargets Ther. 2015, 8, 2015–2022. [Google Scholar]
  37. Dhara, A.K.; Mukhopadhyay, S.; Khandelwal, N. Computer-aided detection and analysis of pulmonary nodule from CT images: A survey. IETE Tech. Rev. 2012, 29, 265. [Google Scholar] [CrossRef]
  38. Sluimer, I.; Schilham, A.; Prokop, M.; Van Ginneken, B. Computer analysis of computed tomography scans of the lung: A survey. IEEE Trans. Med. Imaging 2006, 25, 385–405. [Google Scholar] [CrossRef] [PubMed]
  39. Valente, I.R.S.; Cortez, P.C.; Neto, E.C.; Soares, J.M.; Albuquerque, V.H.C.; Tavares, J.M.R. Automatic 3D pulmonary nodule detection in CT images: A survey. Comput. Methods Programs Biomed. 2015, 124, 91–107. [Google Scholar] [CrossRef] [Green Version]
  40. Halder, A.; Dey, D.; Sadhu, A.K. Lung Nodule Detection from Feature Engineering to Deep Learning in Thoracic CT Images: A Comprehensive Review. J. Digit. Imaging 2020, 33, 655–677. [Google Scholar] [CrossRef] [PubMed]
  41. Zhang, G.; Jiang, S.; Yang, Z.; Gong, L.; Ma, X.; Zhou, Z.; Bao, C.; Liu, Q. Automatic nodule detection for lung cancer in CT images: A review. Comput. Biol. Med. 2018, 103, 287–300. [Google Scholar] [CrossRef]
  42. Gu, Y.; Chi, J.; Liu, J.; Yang, L.; Zhang, B.; Yu, D.; Zhao, Y.; Lu, X. A survey of computer-aided diagnosis of lung nodules from CT scans using deep learning. Comput. Biol. Med. 2021, 137, 104806. [Google Scholar] [CrossRef] [PubMed]
  43. Monkam, P.; Qi, S.; Ma, H.; Gao, W.; Yao, Y.; Qian, W. Detection and classification of pulmonary nodules using convolutional neural networks: A survey. IEEE Access 2019, 7, 78075–78091. [Google Scholar] [CrossRef]
  44. El-Regaily, S.A.; Salem, M.A.M.; Aziz, M.H.A.; Roushdy, M.I. Lung nodule segmentation and detection in computed tomography. In Proceedings of the 2017 Eighth International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 5–7 December 2017; pp. 72–78. [Google Scholar]
  45. Nithila, E.E.; Kumar, S.S. Segmentation of lung nodule in CT data using active contour model and Fuzzy C-mean clustering. Alex. Eng. J. 2016, 55, 2583–2588. [Google Scholar] [CrossRef] [Green Version]
  46. Mansoor, A.; Bagci, U.; Foster, B.; Xu, Z.; Papadakis, G.Z.; Folio, L.R.; Udupa, J.K.; Mollura, D.J. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends. RadioGraphics 2015, 35, 1056–1076. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Zhang, J.; Xia, Y.; Cui, H.; Zhang, Y. Pulmonary nodule detection in medical images: A survey. Biomed. Signal Process. Control 2018, 43, 138–147. [Google Scholar] [CrossRef]
  48. Armato, S.G., III; McLennan, G.; Bidaut, L.; McNitt-Gray, M.F.; Reeves, A.P.; Aberle, D.R.; Zhao, B.; Henschke, C.I. The lung image database consortium (LIDC) and image database resource initiative (IDRI): A completed reference database of lung nodules on CT scans. Med. Phys. 2011, 38, 915–931. [Google Scholar] [CrossRef] [PubMed]
  49. Setio, A.A.A.; Traverso, A.; De Bel, T.; Berens, M.S.; Van Den Bogaard, C.; Cerello, P. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge. Med. Image Anal. 2017, 42, 1–13. [Google Scholar] [CrossRef] [Green Version]
  50. Zarogoulidis, K.; Zarogoulidis, P.; Darwiche, K.; Boutsikou, E.; Machairiotis, N.; Tsakiridis, K. Treatment of non-small cell lung cancer (NSCLC). J. Thorac. Dis. 2013, 5, S389. [Google Scholar] [PubMed]
  51. Van Ginneken, B.; Armato, S.G., III; de Hoop, B.; van Amelsvoort-van de Vorst, S.; Duindam, T.; Niemeijer, M. Comparing and combining algorithms for computer-aided detection of pulmonary nodules in computed tomography scans: The ANODE09 study. Med. Image Anal. 2010, 14, 707–722. [Google Scholar] [CrossRef] [Green Version]
  52. Riquelme, D.; Akhloufi, M.A. Deep Learning for Lung Cancer Nodules Detection and Classification in CT Scans. AI 2020, 1, 28–67. [Google Scholar] [CrossRef] [Green Version]
  53. Wu, B.; Zhou, Z.; Wang, J.; Wang, Y. Joint learning for pulmonary nodule segmentation, attributes and malignancy prediction. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 1109–1113. [Google Scholar]
  54. Keetha, N.V.; Annavarapu, C.S.R. U-Det: A Modified U-Net architecture with bidirectional feature network for lung nodule segmentation. arXiv 2020, arXiv:2003.09293. [Google Scholar]
  55. Kumar, S.; Raman, S. Lung Nodule Segmentation Using 3-Dimensional Convolutional Neural Networks. In Soft Computing for Problem Solving; Springer: Singapore, 2020; pp. 585–596. [Google Scholar]
  56. Usman, M.; Lee, B.-D.; Byon, S.-S.; Kim, S.-H.; Lee, B.-I.; Shin, Y.-G. Volumetric lung nodule segmentation using adaptive ROI with multi-view residual learning. Sci. Rep. 2020, 10, 2839. [Google Scholar] [CrossRef] [PubMed]
  57. Huang, X.; Sun, W.; Tseng, T.-L.; Li, C.; Qian, W. Fast and fully-automated detection and segmentation of pulmonary nodules in thoracic CT scans using deep convolutional neural networks. Comput. Med. Imaging Graph. 2019, 74, 25–36. [Google Scholar] [CrossRef] [PubMed]
  58. Aresta, G.; Jacobs, C.; Araújo, T.; Cunha, A.; Ramos, I.; van Ginneken, B.; Campilho, A. iW-Net: An automatic and minimalistic interactive lung nodule segmentation deep network. Sci. Rep. 2019, 9, 1591. [Google Scholar] [CrossRef] [Green Version]
  59. Tong, G.; Li, Y.; Chen, H.; Zhang, Q.; Jiang, H. Improved U-NET network for pulmonary nodules segmentation. Optik 2018, 174, 460–469. [Google Scholar] [CrossRef]
  60. Zhao, C.; Han, J.; Jia, Y.; Gou, F. Lung nodule detection via 3D U-Net and contextual convolutional neural network. In Proceedings of the 2018 International Conference on Networking and Network Applications (NaNA), Xi’an, China, 12–15 October 2018; pp. 356–361. [Google Scholar]
  61. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  62. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  63. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27. Available online: https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf (accessed on 21 January 2022).
  64. Milletari, F.; Navab, N.; Ahmadi, S.A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 4th International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar]
  65. Pezzano, G.; Ripoll, V.R.; Radeva, P. CoLe-CNN: Context-learning convolutional neural network with adaptive loss function for lung nodule segmentation. Comput. Methods Programs Biomed. 2021, 198, 105792. [Google Scholar] [CrossRef] [PubMed]
  66. Wang, S.; Zhou, M.; Gevaert, O.; Tang, Z.; Dong, D.; Liu, Z.; Jie, T. A multi-view deep convolutional neural networks for lung nodule segmentation. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju Island, Korea, 11–15 July 2017; pp. 1752–1755. [Google Scholar]
  67. Dong, X.; Xu, S.; Liu, Y.; Wang, A.; Saripan, M.I.; Li, L.; Zhang, X.; Lu, L. Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation. Cancer Imaging 2020, 20, 53. [Google Scholar] [CrossRef] [PubMed]
  68. Ning, W.; Lei, S.; Yang, J.; Cao, Y.; Jiang, P.; Yang, Q.; Zhang, J.; Wang, X.; Chen, F.; Geng, Z.; et al. Open resource of clinical data from patients with pneumonia for the prediction of COVID-19 outcomes via deep learning. Nat. Biomed. Eng. 2020, 4, 1197–1207. [Google Scholar] [CrossRef] [PubMed]
  69. Wang, S.; Zhou, M.; Liu, Z.; Liu, Z.; Gu, D.; Zang, Y.; Dong, D.; Gevaert, O.; Tian, J. Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation. Med. Image Anal. 2017, 40, 172–183. [Google Scholar] [CrossRef]
  70. Shen, W.; Zhou, M.; Yang, F.; Yu, D.; Dong, D.; Yang, C.; Zang, Y.; Tian, J. Multi-crop Convolutional Neural Networks for lung nodule malignancy suspiciousness classification. Pattern Recognit. 2017, 61, 663–673. [Google Scholar] [CrossRef]
  71. Zhang, J.; Xia, Y.; Zeng, H.; Zhang, Y. NODULe: Combining constrained multi-scale LoG filters with densely dilated 3D deep convolutional neural network for pulmonary nodule detection. Neurocomputing 2018, 317, 159–167. [Google Scholar] [CrossRef]
  72. Cao, H.; Liu, H.; Song, E.; Hung, C.-C.; Ma, G.; Xu, X.; Jin, R.; Lu, J. Dual-branch residual network for lung nodule segmentation. Appl. Soft Comput. 2019, 86, 105934. [Google Scholar] [CrossRef]
  73. Al-Shabi, M.; Lan, B.L.; Chan, W.Y.; Ng, K.H.; Tan, M. Lung nodule classification using deep local–global networks. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1815–1819. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  74. Tang, H.; Zhang, C.; Xie, X. Nodulenet: Decoupled false positive reduction for pulmonary nodule detection and segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; Springer: Cham, Switzerland, 2019; pp. 266–274. [Google Scholar]
  75. Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P.J. Atrous convolution for binary semantic segmentation of lung nodule. In Proceedings of the ICASSP 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1015–1019. [Google Scholar]
  76. Liu, M.; Dong, J.; Dong, X.; Yu, H.; Qi, L. Segmentation of lung nodule in CT images based on mask R-CNN. In Proceedings of the 2018 9th International Conference on Awareness Science and Technology (iCAST), Fukuoka, Japan, 19–21 September 2018; pp. 1–6. [Google Scholar]
  77. Khosravan, N.; Bagci, U. Semi-supervised multi-task learning for lung cancer diagnosis. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 710–713. [Google Scholar]
  78. Wu, P.; Sun, X.; Zhao, Z.; Wang, H.; Pan, S.; Schuller, B. Classification of Lung Nodules Based on Deep Residual Networks and Migration Learning. Comput. Intell. Neurosci. 2020, 2020, 8975078. [Google Scholar] [CrossRef] [PubMed]
  79. Tran, G.S.; Nghiem, T.P.; Nguyen, V.T.; Luong, C.M.; Burie, J.-C. Improving Accuracy of Lung Nodule Classification Using Deep Learning with Focal Loss. J. Healthc. Eng. 2019, 2019, 5156416. [Google Scholar] [CrossRef]
  80. Li, W.; Cao, P.; Zhao, D.; Wang, J. Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images. Comput. Math. Methods Med. 2016, 2016, 6215085. [Google Scholar] [CrossRef]
  81. Mastouri, R.; Khlifa, N.; Neji, H.; Hantous-Zannad, S. A bilinear convolutional neural network for lung nodules classification on CT images. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 91–101. [Google Scholar] [CrossRef]
  82. Bhavanishankar, K.; Sudhamani, M.V. Classification of Lung Nodules into Benign or Malignant and Development of a CBIR System for Lung CT Scans. In Proceedings of the International Conference on Computational Vision and Bio Inspired Computing, Coimbatore, India, 25–26 September 2019; Springer: Cham, Switzerland, 2019; pp. 563–575. [Google Scholar]
  83. Ali, I.; Muzammil, M.; Haq, I.U.; Khaliq, A.A.; Abdullah, S. Efficient lung nodule classification using transferable texture convolutional neural network. IEEE Access 2020, 8, 175859–175870. [Google Scholar] [CrossRef]
  84. Al-Shabi, M.; Lee, H.K.; Tan, M. Gated-Dilated Networks for Lung Nodule Classification in CT Scans. IEEE Access 2019, 7, 178827–178838. [Google Scholar] [CrossRef]
  85. Dey, R.; Lu, Z.; Hong, Y. Diagnostic classification of lung nodules using 3D neural networks. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 774–778. [Google Scholar]
  86. Nibali, A.; He, Z.; Wollersheim, D. Pulmonary nodule classification with deep residual networks. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 1799–1808. [Google Scholar] [CrossRef] [PubMed]
  87. Liu, K.; Kang, G. Multiview convolutional neural networks for lung nodule classification. Int. J. Imaging Syst. Technol. 2017, 27, 12–22. [Google Scholar] [CrossRef] [Green Version]
  88. Kumar, D.; Wong, A.; Clausi, D.A. Lung nodule classification using deep features in CT images. In Proceedings of the 2015 12th Conference on Computer and Robot Vision, Halifax, NS, Canada, 3–5 June 2015; pp. 133–138. [Google Scholar]
  89. Jia, T.; Zhang, H.; Bai, Y.K. Benign and Malignant Lung Nodule Classification Based on Deep Learning Feature. J. Med. Imaging Health Inform. 2015, 5, 1936–1940. [Google Scholar] [CrossRef]
  90. Sang, J.; Alam, M.S.; Xiang, H. Automated detection and classification for early stage lung cancer on CT images using deep learning. In Proceedings of Pattern Recognition and Tracking XXX, International Society for Optics and Photonics, Baltimore, MD, USA, 15–16 April 2019; p. 109950S. [Google Scholar]
  91. Wang, W.; Li, X.; Yang, J.; Lu, T. Mixed link networks. arXiv 2018, arXiv:1802.01808. [Google Scholar]
  92. Agnes, S.A.; Anitha, J. Automatic 2D Lung Nodule Patch Classification using Deep Neural Networks. In Proceedings of the 2020 Fourth International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 8–10 January 2020. [Google Scholar]
  93. Shen, S.; Han, S.X.; Aberle, D.R.; Bui, A.A.; Hsu, W. An interpretable deep hierarchical semantic convolutional neural network for lung nodule malignancy classification. Expert Syst. Appl. 2019, 128, 84–95. [Google Scholar] [CrossRef] [Green Version]
  94. Zhang, G.; Lin, L.; Wang, J. Lung Nodule Classification in CT Images Using 3D DenseNet. J. Phys. Conf. Ser. 2021, 1827, 012155. [Google Scholar] [CrossRef]
  95. Zhang, G.; Yang, Z.; Gong, L.; Jiang, S.; Wang, L.; Zhang, H. Classification of lung nodules based on CT images using squeeze-and-excitation network and aggregated residual transformations. La Radiol. Med. 2020, 125, 374–383. [Google Scholar] [CrossRef] [PubMed]
  96. An, Y.; Hu, T.; Wang, J.; Lyu, J.; Banerjee, S.; Ling, S.H. Lung Nodule Classification using A Novel Two-stage Convolutional Neural Networks Structure. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 6259–6262. [Google Scholar]
  97. Zhang, L.; Qiang, Y.; Zhang, X.; Wang, X. Classification of multi-scale lung nodules based on synchronized deep supervision. Comput. Appl. Softw. 2019, 36, 214–219. [Google Scholar] [CrossRef]
  98. Liu, H.; Cao, H.; Song, E.; Ma, G.; Xu, X.; Jin, R.; Liu, C.; Hung, C.-C. Multi-model Ensemble Learning Architecture Based on 3D CNN for Lung Nodule Malignancy Suspiciousness Classification. J. Digit. Imaging 2020, 33, 1242–1256. [Google Scholar] [CrossRef]
  99. Xia, K.; Chi, J.; Gao, Y.; Jiang, Y.; Wu, C. Adaptive Aggregated Attention Network for Pulmonary Nodule Classification. Appl. Sci. 2021, 11, 610. [Google Scholar] [CrossRef]
  100. Yang, Y.; Feng, X.; Chi, W.; Li, Z.; Duan, W.; Liu, H. Deep learning aided decision support for pulmonary nodules diagnosing: A review. J. Thorac. Dis. 2018, 10, S867. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  101. Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef] [Green Version]
  102. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef]
  103. Baldi, P. Autoencoders, unsupervised learning, and deep architectures. In Proceedings of the ICML Workshop on Unsupervised and Transfer Learning, Bellevue, WA, USA, 27 June 2012; pp. 37–49. [Google Scholar]
  104. Navamani, T.M. Efficient deep learning approaches for health informatics. In Deep Learning and Parallel Computing Environment for Bioengineering Systems; Elsevier: Amsterdam, The Netherlands, 2019; pp. 123–137. [Google Scholar]
  105. Liang, J.; Ye, G.; Guo, J.; Zhang, S.; Huang, Q. Reducing False-Positives in Lung Nodules Detection Using Balanced Datasets. Front. Public Health 2021, 9, 517. [Google Scholar] [CrossRef]
  106. Gaga, M.; Loverdos, K.; Fotiadis, A.; Kontogianni, C.; Iliopoulou, M. Lung nodules: A comprehensive review on current approach and management. Ann. Thorac. Med. 2019, 14, 226–238. [Google Scholar] [CrossRef]
Figure 1. A general pipeline of a lung nodule CAD system.
Figure 2. Network architecture of lung nodule segmentation.
Table 1. Cited datasets and their composition.
| Dataset | Number of CT Scans | Number of Nodules | Annotation |
| LIDC-IDRI | 1018 | 36,378 | |
| LUNA16 | 888 | 13,799 | |
| Ali Tianchi | 1000 | 1000 | |
| NSCLC | 211 | - | |
| ELCAP | 50 | - | |
| ANODE09 | 55 (only 5 CT scans) | 39 | |
Table 2. Various evaluation metrics used for lung cancer/nodule diagnosis.
| Metric | Brief | Expression |
| Sensitivity (SEN) | Measures the proportion of positives that are correctly identified | SEN = TP / (TP + FN) |
| Accuracy (ACC) | Overall classification accuracy of the classifier | ACC = (TP + TN) / (TP + TN + FP + FN) |
| Positive predictive value (PPV) | The proportion of positive results that are truly positive | PPV = TP / (TP + FP) |
| Dice similarity coefficient (DSC) | A statistic used to gauge the similarity of two samples | DSC = 2TP / (2TP + FP + FN) |
| Intersection over union (IoU) | Measures the overlap between the predicted region and the ground-truth region | IoU = TP / (TP + FP + FN) |
| F1-score | The harmonic mean of PPV and SEN, used to measure the accuracy of a binary classification model | F1 = 2 × SEN × PPV / (SEN + PPV) |
| Receiver operating characteristic (ROC) | A curve depicting the trade-off between sensitivity and specificity (Y-axis is the TP rate, X-axis is the FP rate) | - |
| Free-response receiver operating characteristic (FROC) | Similar to the ROC curve, differing only in the X-axis, which is the number of FPs per image (or per scan) | - |
| Area under the curve (AUC) | Total area under the ROC curve | - |
| Competition performance metric (CPM) | Average sensitivity at seven predefined FP rates on the FROC curve: 1/8, 1/4, 1/2, 1, 2, 4, and 8 FPs/scan | - |
| Mean average precision (mAP) | The mean of the average precision over all classes (or queries) | - |
Note: TP: true positive; TN: true negative; FN: false negative; FP: false positive.
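
All of the count-based metrics above are simple functions of the confusion-matrix entries, and the CPM is an average of FROC sensitivities at fixed false-positive rates. The following Python sketch is illustrative only: the function names (confusion_counts, table2_metrics, cpm), the use of NumPy, and the linear interpolation of the FROC curve are our assumptions rather than part of any cited CAD system.

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    """Return (TP, TN, FP, FN) for binary arrays (1 = nodule / positive class)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = int(np.sum(y_true & y_pred))
    tn = int(np.sum(~y_true & ~y_pred))
    fp = int(np.sum(~y_true & y_pred))
    fn = int(np.sum(y_true & ~y_pred))
    return tp, tn, fp, fn

def table2_metrics(tp, tn, fp, fn):
    """Compute the count-based metrics of Table 2; assumes non-zero denominators."""
    sen = tp / (tp + fn)                   # sensitivity (recall)
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    ppv = tp / (tp + fp)                   # positive predictive value (precision)
    dsc = 2 * tp / (2 * tp + fp + fn)      # Dice similarity coefficient
    iou = tp / (tp + fp + fn)              # intersection over union (Jaccard index)
    f1 = 2 * sen * ppv / (sen + ppv)       # harmonic mean of SEN and PPV
    return {"SEN": sen, "ACC": acc, "PPV": ppv, "DSC": dsc, "IoU": iou, "F1": f1}

def cpm(fp_per_scan, sensitivity):
    """Competition performance metric: mean sensitivity at 1/8, 1/4, 1/2, 1, 2, 4
    and 8 FPs/scan, linearly interpolated from a FROC curve given as sorted arrays."""
    targets = [1 / 8, 1 / 4, 1 / 2, 1, 2, 4, 8]
    return float(np.mean(np.interp(targets, fp_per_scan, sensitivity)))

# Toy example: a small binary "segmentation" compared against its ground truth.
gt = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
pred = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 0]])
print(table2_metrics(*confusion_counts(gt, pred)))
```

Note that when the positive class is defined at the voxel level, F1 and DSC coincide, which is one reason segmentation papers usually report only the Dice score.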
Table 5. Results comparison of nodule segmentation models.
| Year | Author | Dataset | PPV (%) | SEN (%) | DSC (%) | IoU | Architecture | Approach |
| 2020 | Dong et al. [67] | LIDC-IDRI | 93.6 | 98.10 | 92.6 | - | | Multiview |
| 2020 | Cao et al. [72] | LIDC-IDRI | 79.64 | 89.35 | 82.74 | - | | Multiview |
| 2017 | Wang et al. [66] | LIDC-IDRI | 77.59 | 83.72 | 77.67 | - | | Multiview |
| 2017 | Wang et al. [69] | LIDC-IDRI/GDGH | 75.84 | 92.75 | 82.15 | - | | Multiview |
| 2017 | Shen et al. [70] | Random datasets | 87.14 | 0.77 | - | - | MC-CNN | Multiview |
| 2021 | Pezzano et al. [65] | LIDC-IDRI | - | - | - | 76.6 | Nodule type | General |
| 2020 | Keetha et al. [54] | LUNA16 | 78.92 | 92.24 | 82.82 | - | U-Net et al. | General |
| 2020 | Kumar et al. [55] | LUNA16 | - | - | 96.15 | - | U-Net et al. | General |
| 2020 | Usman et al. [56] | LIDC-IDRI | 88.24 | 91.62 | 87.55 | - | | General |
| 2019 | Huang et al. [57] | LUNA16 | - | - | 79.3 | - | U-Net et al. | General |
| 2018 | Wu et al. [53] | LIDC-IDRI | - | - | 73.98 | - | U-Net et al. | General |
| 2018 | Tong et al. [59] | LUNA16 | - | - | 73.6 | - | U-Net et al. | General |
| 2018 | Zhao et al. [60] | LIDC-IDRI | - | - | - | - | U-Net et al. | General |
| 2018 | Liu et al. [76] | LIDC-IDRI | - | - | - | - | FCN | General |
| 2019 | Aresta et al. [58] | LIDC-IDRI | - | - | - | 55 | | Nodule type |
| 2019 | Hesamian et al. [75] | LIDC-IDRI | - | - | 81.24 | - | | - |
| 2018 | Khosravan et al. [77] | LUNA16 | - | 98 | 91 | - | | Semi-supervised |
| 2019 | Tang et al. [74] | LIDC-IDRI | - | - | 83.10 | - | | - |
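
Because some of the models above report DSC and others report IoU, a direct comparison requires converting between the two. The relation below follows from the Table 2 definitions (with TP the overlap and FP + FN the disagreement); for example, under this conversion the 76.6% IoU reported for CoLe-CNN [65] corresponds to a DSC of roughly 86.8%, and the 55% IoU of iW-Net [58] to roughly 71%.

```latex
\mathrm{DSC} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}
             = \frac{2\,\mathrm{IoU}}{1 + \mathrm{IoU}},
\qquad
\mathrm{IoU} = \frac{\mathrm{DSC}}{2 - \mathrm{DSC}}.
```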
Table 6. Results comparison of nodule classification models.
| Year | Reference | Dataset | SEN (%) | AUC (%) | ACC (%) | Classification |
| 2021 | Ge Zhang [94] | LUNA16 | 87.00 | - | 92.40 | MOB |
| 2020 | Akila Agnes [92] | LIDC-IDRI | 81.00 | 94.40 | - | MOB |
| 2020 | Hong Liu [98] | LIDC-IDRI | 83.7 | 93.90 | 90.60 | MOB |
| 2020 | Kai Xia [99] | LIDC-IDRI | 91.30 | - | 91.90 | MOB |
| 2020 | Ali et al. [83] | LIDC-IDRI | 98.10 | 99.11 | 96.69 | MOB |
| 2019 | Zhang Li [97] | LIDC-IDRI | 95.17 | - | 93.68 | MOB |
| 2019 | Guobin Zhang [95] | LUNA16 | - | 95.63 | 91.67 | MOB |
| 2019 | Al-Shabi et al. [73] | LIDC-IDRI | 88.66 | 95.62 | 88.46 | MOB |
| 2019 | Al-Shabi et al. [84] | LIDC-IDRI | 92.21 | 93.15 | 92.57 | MOB |
| 2018 | Shiwen Shen [93] | LIDC-IDRI | 70.5 | 85.6 | 84.2 | MOB |
| 2018 | Dey et al. [85] | LIDC-IDRI/authors’ own dataset | - | 95.48 | 90.40 | MOB |
| 2017 | Nibali et al. [86] | LIDC-IDRI | 91.07 | 94.59 | 89.90 | MOB |
| 2016 | Shen et al. [70] | LIDC-IDRI | 77.00 | 93.00 | 87.14 | MOB |
| 2015 | Kumar et al. [88] | LIDC-IDRI | 83.35 | - | 75.01 | MOB |
| 2020 | Wu et al. [78] | LIDC-IDRI | 97.70 | - | 98.23 | NON |
| 2019 | Yang An [96] | LIDC-IDRI | - | - | 89.60 | NON |
| 2019 | Tran et al. [79] | LIDC-IDRI | 96.00 | - | 97.20 | NON |
| 2020 | Rekka Mastouri [81] | LUNA16 | 91.85 | - | 91.99 | NON |
| 2018 | Wu et al. [53] | LIDC-IDRI | - | - | 97.58 | NON |
| 2018 | Zhao et al. [60] | LIDC-IDRI | - | - | - | NON |
| 2017 | Liu et al. [87] | LIDC-IDRI | 90.18 | 98.10 | - | Others |
| 2016 | Li et al. [80] | LIDC-IDRI | 89.0 | - | 86.40 | Others |
MOB: malignant or benign; NON: nodule or non-nodule; Others: classification tasks other than MOB and NON.