Article

Pre-Trained Deep Neural Network-Based Features Selection Supported Machine Learning for Rice Leaf Disease Classification

1 Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
2 Department of Computer Science and Engineering, School of Engineering and Technology, Central University of Haryana, Mahendragarh 123031, Haryana, India
3 Higher Polytechnic School, Universidad Europea del Atlántico, C/Isabel Torres 21, 39011 Santander, Spain
4 Department of Engineering, Universidad Internacional Iberoamericana, Arecibo, PR 00613, USA
5 Uttaranchal Institute of Technology, Uttaranchal University, Dehradun 248007, Uttarakhand, India
6 Computer Science Department, Community College, King Saud University, Riyadh 11437, Saudi Arabia
7 Engineering Research & Innovation Group, Universidad Europea del Atlántico, C/Isabel Torres 21, 39011 Santander, Spain
* Author to whom correspondence should be addressed.
Submission received: 8 April 2023 / Revised: 16 April 2023 / Accepted: 21 April 2023 / Published: 24 April 2023
(This article belongs to the Section Digital Agriculture)

Abstract
Rice is a staple food for roughly half of the world’s population. Some farmers prefer rice cultivation to other crops because rice can thrive in a wide range of environments. Several studies have found that about 70% of India’s population relies on agriculture in some way and that agribusiness accounts for about 17% of India’s GDP. In India, rice is one of the most important crops, but it is vulnerable to a number of diseases throughout the growing process. Manual identification of these diseases by farmers is highly inaccurate because most farmers lack expertise in plant pathology. Recent advances in deep learning models show that automatic image recognition systems can be extremely useful in such situations. In this paper, we propose a suitable and effective system for predicting diseases in rice leaves using a number of different deep learning techniques. Images of rice leaf diseases were gathered and processed to fulfil the algorithmic requirements. Features were first extracted using 32 pre-trained models, and images of rice leaf diseases such as bacterial blight, blast, and brown spot were then classified with numerous machine learning and ensemble learning classifiers, and the results were compared. The proposed procedure works better than other methods that are currently used, achieving 90–91% identification accuracy, along with strong values for the other performance parameters, namely precision, recall rate, F1-score, Matthews Coefficient, and Kappa Statistics, on the normal data set. After the segmentation process, accuracy reaches 93–94% for model EfficientNetV2B3 with the ET and HGB classifiers. The proposed model efficiently recognises rice leaf diseases with an accuracy of 94%. The experimental results show that the proposed procedure is valid and effective for identifying rice diseases.

1. Introduction

Globally, rice serves as a fundamental food source for over 3.5 billion individuals [1]. Rice, wheat, and maize are the three most widely produced cereal grains. Rice is a highly self-sufficient crop that is widely consumed as a primary food source in various regions worldwide [2], and for many people it constitutes a complete meal in itself. Due to its low cost, starchy nature, and high caloric value, rice is an affordable and easily accessible food for everyone [3]. Rice crops are also an important source of employment on the Asian continent and help, to some extent, to reduce poverty. Rice requires hot, humid weather because it is grown in standing water, and production depends on effective irrigation, which includes building dams and having good soil. India is the second-largest rice producer, producing approximately 116.42 million tonnes [4].
A number of factors, including soil quality, environmental conditions, the choice of unfavourable crops, pest weeds, poor manure, and various pathogens, can cause diseases and infections in plants. Plant diseases have a major impact on agricultural production [5]. Contagious plant diseases are brought about by viruses, fungi, and bacteria, and their impact can vary from minor harm to fruits or leaves to the death of the plant [6]. Infected leaves can cause significant damage to rice crops and lower productivity. Once infected, diseases spread quickly, and the rice crop is susceptible to a number of them, including blast, brown spot, bacterial blight, tungro, sheath rot, false smut, and hispa [7]. The symptoms of these diseases are typically visible on rice leaves and can be recognised by circular or oval spots coloured orange or greenish-grey. Blast is identified by a greyish-green lesion with a dark green outline; a brown to purple oval spot on the leaves is an indication of brown spot; and bacterial blight is identified by a greenish-white lesion on the leaves. These diseases lower both the quality and the quantity of the harvest. Table 1 provides a brief explanation of the diseases’ key characteristics [8,9]. Different rice plant diseases can occur, which adversely affects crop growth and, if they are not identified in time, could have disastrous effects on food security [10].
Leaf diseases have a direct impact on a country’s rice production because the plants are not consistently monitored. Farmers may not always be aware of these diseases and their occurrence periods, so diseases can appear unexpectedly on any plant and ultimately affect overall rice production [11]. In the conventional method, a knowledgeable expert capable of spotting slight variations in leaf colour detects disease visually. This method is labour-intensive and time-consuming, and it makes accurate assessment of disease severity and affected areas impossible in large-scale farming. Predicting and forecasting diseases affecting rice leaves is crucial for maintaining the quantity and quality of rice production. Detecting plant diseases at an early stage is crucial in agriculture, as it enables prompt intervention to prevent their spread, promote healthy plant growth, and increase rice production and supply [12]. Therefore, the identification of plant diseases is currently a significant requirement in agriculture.
A large portion of India’s population works in agriculture, and the sector accounts for about 17 percent of the country’s GDP. India holds the position of second-largest producer of rice globally, with a yield of 116.42 million tonnes [4]. Automated, non-destructive methods for spotting leaf diseases have emerged as a result of recent advancements in farming technology, and farmers can benefit greatly from a rapid leaf disease detection tool [13]. In order to diagnose diseases of the rice leaf, advanced automated techniques such as image processing and machine learning must be used. Machine learning (ML), a relatively new branch of data mining, enables a program to predict outcomes more accurately without having to be explicitly programmed. ML algorithms are frequently divided into supervised and unsupervised categories [14]. Classification refers to the process of mapping a given set of instances to a designated set of attributes or labels, commonly referred to as target attributes. DT classifiers, NN, K-NN classifiers, RF, and SVM are all used in a number of applications. DL is an extension of ML that effectively trains on huge data sets, automatically learns the input features, and produces results based on predetermined rules.
A CNN that has already been trained can be transferred to a different problem; as a result, the model performs better than one created from scratch, and the training time can be reduced [15]. Transfer learning can be utilised to create a model that acts as a fixed feature extractor for a particular dataset, either by fine-tuning the last few layers of the model or by removing the fully connected layers, which allows the model to perform efficiently on the given dataset. Recently, DL techniques have also expanded into the agricultural sector. Many researchers have conducted extensive work on the early detection of paddy leaf diseases. For example, the authors of Ref. [16] used the Minimum Distance Classifier (MDC) and the K-NN classifier to accurately classify the SR, blast, and BS rice leaf diseases. A similar idea is presented in [17], which compares two classifiers, the Minimum Distance Classifier and a Naïve Bayes classifier, for identifying rice crop disease with the R2016 tool. The authors used a dataset of 200 digital images featuring diseased rice leaves, achieving an accuracy rate of 69% with the Bayes classifier and 81.06% with the MDC.
In Ref. [18], a technique for detecting rice diseases using a DCNN is proposed. The authors trained CNNs to recognize ten distinct rice leaf diseases, achieving an accuracy of 95.48% with 10-fold cross-validation. The authors of Ref. [19] propose an INC-VGGN module that combines the Inception and VGGNet modules to identify plant diseases: a pooling layer is added, the activation function is modified, and an ImageNet-trained VGGNet is pooled with the Inception module. The proposed module achieves an average accuracy of 91.83% on public datasets and 92% in complex conditions. The authors of Ref. [20] introduced a two-layer detection method based on the RCNN algorithm for detecting Brown Rice Planthoppers (BRPH) in images. The method showed good performance in identifying BRPH, with accuracy and recall rates of 94.5% and 88.0%, respectively. The study also compared the results of this method with the YOLO v3 algorithm.
The study found that the performance of the BRPH detection algorithm was consistent and that it outperformed the YOLO v3 algorithm. The authors also introduced a client-server architecture-based technique with three components: a mobile phone client that allows users to upload photographs to the server; a server-side program that analyses the images and displays the results to the user; and a database on the server that stores all the relevant results. The authors of Ref. [14] created a dataset of 5932 field images of rice leaf diseases such as tungro, BB, and BS and assessed the performance of 11 CNN models in deep learning approaches based on various parameters, including accuracy, F1-score, FPR, and training time; the results indicated that SVM outperformed the transfer learning methods. The authors of Ref. [21] proposed a model for detecting rice leaf diseases such as BS, LS, and BB using hue threshold segmentation, integrating a gradient boosting decision tree classification algorithm to improve performance and achieving an accuracy of 86.58%. Finally, Ref. [5] proposed two CNN architectures, namely a simple CNN and Inception ResNetV2, along with their hyper-parameters. In Inception ResNetV2, transfer learning was used for feature extraction, and the model aggregated the data for experimentation; with its parameters optimised for the categorisation task, it achieved an accuracy of 95.67%. Table 2 summarises the different ML/DL algorithms that can be used to find diseases on rice leaves; their accuracy ranges from 86% to 95%.
The objective is to present a model for the identification of rice leaf diseases that helps farmers identify these diseases in a timely manner and thereby improve production. The proposed method employs pre-trained models whose ImageNet weights are reused for the feature extraction process through a transfer learning technique. For classification, machine learning and ensemble learning approaches are used, and the outcomes are compared using different performance metrics.
The major contributions of this study are as follows:
  • Implementation of the pre-trained deep learning-based feature selection techniques on segmented images.
  • Implementation analysis of machine and ensemble learning classification techniques using pre-trained deep learning models based on selected features.
  • The experimental results show the effectiveness of the proposed procedure, which outperforms existing techniques on the classification of rice leaf diseases across several performance metrics.
Further, this paper is organized as follows. In Section 2, the overall procedure of the proposed model is discussed. Section 3 gives the experimental results and comparative analysis between normal images and segmented images of rice leaves, and finally, a conclusion and future scope are discussed in Section 4.

2. Materials and Methods

The overall procedure of our proposed methodology for identifying rice leaf diseases is as follows: first, a collection of rice disease images is gathered and properly labelled based on expert knowledge; then, various image processing techniques, such as image resizing, reshaping, and greyscale conversion, are performed on the acquired dataset, and segmentation techniques are used to enhance it; finally, both segmented and normal images are fed into the model for feature extraction, and the extracted features are used to train the model. The trained model is subsequently utilised in the analysis to obtain the final results. The proposed model was trained on the basis of Algorithm 1.
Algorithm 1: Proposed Algorithm for Pre-trained Deep Neural Network-Based Features Selection Supported Machine Learning for Rice Leaf Disease Classification
Input: Infected rice leaf images {(X1, Y1), …, (Xm, Ym)}
Output: Class of rice leaf disease
  • For each K := 1 → P, where P is the total number of input leaf images, do
  • Convert the Kth image into an RGB leaf image.
  • Read the Kth RGB leaf image.
  • Resize the Kth image to (h × w).
  • Apply the segmentation technique to each image.
    For each T := 1 → t, where t is the number of pre-trained models, do
    Load each model by initialising ImageNet weights and extract features from the second-last layer.
    Update the weights as w_k = w_{k−1} − α · m / (√v_k + ε), where k is the class index, w denotes the weights, α the learning rate, and m and v_k the first- and second-moment bias estimates.
    Store the extracted features in F_pt = i × FV, where i is the number of sample images and FV is the feature vector.
    End for
  • Feed the extracted features (F_pt) into the classification function y = f(x).
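As an illustration of Algorithm 1, the sketch below pairs frozen ImageNet backbones from Keras with a scikit-learn classifier. It is a minimal sketch under stated assumptions: the variables train_images, train_labels, and test_images (lists of RGB arrays) and the two chosen backbones are illustrative and not part of the original implementation.
```python
import cv2
import numpy as np
from tensorflow.keras.applications import EfficientNetB3, EfficientNetV2B3
from sklearn.ensemble import ExtraTreesClassifier

H = W = 224  # uniform input size used in Section 2.1

def extract_features(images, backbone):
    """Resize images and return the penultimate-layer feature matrix F_pt (i x FV)."""
    batch = np.stack([cv2.resize(img, (W, H)) for img in images]).astype("float32")
    # Keras EfficientNet models rescale raw pixel values internally.
    return backbone.predict(batch)

# Frozen ImageNet weights; pooling="avg" exposes the layer before the classifier head.
backbones = {
    "EfficientNetB3": EfficientNetB3(weights="imagenet", include_top=False, pooling="avg"),
    "EfficientNetV2B3": EfficientNetV2B3(weights="imagenet", include_top=False, pooling="avg"),
}

for name, backbone in backbones.items():
    F_pt = extract_features(train_images, backbone)           # i x FV feature matrix
    clf = ExtraTreesClassifier(random_state=42).fit(F_pt, train_labels)
    y_pred = clf.predict(extract_features(test_images, backbone))
```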
Further, this section divides the proposed work’s process into several steps for identifying rice leaf diseases, as shown in Figure 1.

2.1. Data Acquisition and Pre-Processing

In the experiments, we collected 551 images of rice leaf diseases from the internet [31]. The collection covers the three different types of diseases that affect rice leaves: BB, BS, and blast. Figure 2 displays a few sample images of leaf diseases. All images are properly labelled and saved in JPG format. Of the 551 images in total, 192 depict bacterial leaf blight, 159 depict blast, and 200 depict brown spot, as detailed in Table 3. Each image contains only one disease. The data set is divided into training and test sets in an 80:20 ratio: the model is trained on the 80 percent training split, and the remaining 20 percent is used to validate the trained model.
In order to transform the raw data into useful data and improve the effectiveness and accuracy of a model, image pre-processing is required. Pre-processing is important because it makes it possible for input images to be processed more smoothly [8]. Pre-processing is thought to be a crucial step in processing the images so they are suitable for the detection process [32]. In order to enhance the quality of the images in the collected dataset, various pre-processing techniques such as image resizing, reshaping, and converting to greyscale are applied, resulting in sharper images. Each image is uniformly processed and resized to 224 × 224 pixels. Formal methods can be used to verify the correctness and safety of AI-based solutions, including data collection and processing, by providing rigorous mathematical models and techniques for verification [33]. This can involve using techniques such as formal proof or program analysis to check that the algorithms are correct and do not have any unintended behaviours [34].
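A minimal sketch of this pre-processing stage is given below, assuming the images sit in one folder per disease class (the folder names and dataset root are hypothetical) and that OpenCV and scikit-learn are available.
```python
import os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split

CLASSES = ["bacterial_blight", "blast", "brown_spot"]  # assumed folder names

images, labels = [], []
for label, cls in enumerate(CLASSES):
    folder = os.path.join("rice_leaf_dataset", cls)    # assumed dataset root
    for fname in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, fname))  # BGR uint8
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)     # convert to RGB
        img = cv2.resize(img, (224, 224))              # uniform 224 x 224 size
        images.append(img)
        labels.append(label)

X, y = np.stack(images), np.array(labels)
# 80:20 split, stratified so each disease class keeps its proportion.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
```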

2.2. Segmentation

The pre-processed images of rice leaves are fed into the segmentation module to provide high-dimensional data segmentation. Segmentation is used to divide an image into areas that are homogeneous in terms of one or more characteristics or features (also known as classes or subsets). It is a crucial tool for image processing and has numerous applications such as feature extraction, image measurement, and display [7]. As a crucial stage in the image processing pipeline, segmentation enables us to locate and extract desired features from a given image. However, no single standard segmentation method delivers satisfactory results for all imaging applications [25]. Depending on the classification scheme, segmentation techniques can be categorised in numerous ways: manual, automatic, and semi-automatic; region- and global-based approaches; low-level thresholding; model-based thresholding; etc. Each method has its own pros and cons. The segments of an image are represented as

S = {S1, S2, …, Sd, …, Sn}

where n is the total number of segments in the image and Sd is the dth segment of the image.
In this study, we separated disease spots from images of rice leaf disease, as shown in Figure 3. In order to extract features from the images, we used watershed and graph cut techniques for segmentation as mentioned in Ref. [35]. Compared to the conventional threshold segmentation method, this method produces better segmentation results. The two main goals of the image segmentation algorithm are as follows:
(1) It can improve image quality and reduce background noise in the lesion image, which increases recognition accuracy.
(2) It can decrease the volume of data, which shortens the program’s runtime and increases its recognition effectiveness.
When applied to complex images, the watershed and graph cut algorithms perform better than thresholding and contour detection. The background and foreground regions are first separated, and the markers are then used to locate precise edges. Generally speaking, this algorithm aids in the detection of touching and overlapping objects in images. Markers can be defined manually, by clicking to obtain their coordinates, or derived automatically using noise-reducing operations such as thresholding or morphological operations [36].
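The following sketch shows marker-based watershed segmentation in the standard OpenCV style; the threshold fraction and kernel size are illustrative assumptions, not values reported here.
```python
import cv2
import numpy as np

def watershed_segment(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding separates leaf/lesion pixels from the background.
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
    sure_bg = cv2.dilate(opening, kernel, iterations=3)          # certain background
    dist = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.4 * dist.max(), 255, 0)   # certain foreground
    sure_fg = np.uint8(sure_fg)
    unknown = cv2.subtract(sure_bg, sure_fg)                     # ambiguous border band
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1            # shift labels so background is 1, not 0
    markers[unknown == 255] = 0      # watershed resolves the zero-labelled band
    markers = cv2.watershed(img_bgr, markers)
    img_bgr[markers == -1] = [0, 0, 255]   # boundaries drawn in red
    return markers
```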

2.3. Feature Extraction Using Pre-Trained Models

Feature extraction is a quick and effective way to reuse the representations that a neural network has already learned. It is the crucial component of a deep learning network, which contains numerous pooling and convolutional layers and aids in the extraction of image features useful for target positioning and identification [37]. To improve our model(s), we could experiment with various configurations, such as adding more layers, changing the learning rate, or adjusting the number of neurons per layer. Fortunately, using pre-trained models speeds up this process [27] and saves computational resources and time. Pre-trained models, i.e., neural networks that have been trained on large-scale datasets, can be reused for subsequent tasks and employed for extracting features. Choosing the right feature extractor improves system performance. Numerous pre-trained models performing different tasks are available, such as Xception, VGG16, VGG19, ResNet101V2, InceptionResNetV2, DenseNet, EfficientNetV2, NASNet, MobileNet, and ResNet50.
In this work, we used 32 pre-trained models for feature extraction. Having been trained on large datasets such as ImageNet, pre-trained models such as VGG, ResNet, and Inception have learned to extract meaningful visual features from images. These features can serve as inputs to subsequent processes such as object recognition, segmentation, and classification. Each segment is adapted for the feature extraction process in order to increase the accuracy of the features in identifying rice leaf diseases. This frequently yields excellent results with less data.
The approach described speeds up the training process and improves the accuracy by using a specific model architecture consisting of various layers, including reshape, flatten, dense, dropout, and activation functions. A global pooling layer such as max global pooling or average global pooling can be utilised to summarise the activations for use in a classifier or as a feature vector representation of the input [38]. In this study, a new classifier model is constructed using the output of a layer in the model that precedes the output layer responsible for classifying rice leaf disease images. Table 4 presents the feature extractors used in this study, along with their input shape, number of parameters, size, and feature layer.
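As a sketch of this construction, the snippet below takes the output of the layer preceding the classification head of EfficientNetV2B3 (per Table 4, a dropout layer with a 1536-dimensional output); the negative layer index and the variable batch_of_images are illustrative assumptions.
```python
from tensorflow.keras import Model
from tensorflow.keras.applications import EfficientNetV2B3

base = EfficientNetV2B3(weights="imagenet")     # full model with classifier head
# Penultimate layer: the top dropout that follows global average pooling.
extractor = Model(inputs=base.input, outputs=base.layers[-2].output)
# batch_of_images must match the model's (300, 300, 3) input shape.
features = extractor.predict(batch_of_images)   # shape: (n_samples, 1536)
```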

2.4. Classification

Classification is a supervised learning technique in which input data are mapped to a specific class. It is critical for performing data mining and classifying data obtained from a database. Combinations of several classifiers sometimes give more reliable and accurate results than a single classification model [39]. Various machine learning and ensemble learning algorithms were applied to detect rice leaf diseases; in this work, ten classification algorithms were used. On both the normal and segmented data sets, we applied DT, QDA, K-NN, AB, GNB, LR, RF, ET, HGB, and MLP as base classifiers. We used 32 pre-trained models to extract features of different shapes and the numerous classifiers above to classify the different disease classes. Further details are given in Figure 4.
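A minimal sketch of this classification stage with two of the ten classifiers (ET and HGB) follows; the hyper-parameters are scikit-learn defaults rather than values reported in this paper, and train_features/test_features stand for the feature matrices produced in Section 2.3.
```python
from sklearn.ensemble import ExtraTreesClassifier, HistGradientBoostingClassifier

classifiers = {
    "ET": ExtraTreesClassifier(random_state=42),
    "HGB": HistGradientBoostingClassifier(random_state=42),
}
predictions = {}
for name, clf in classifiers.items():
    clf.fit(train_features, y_train)            # train_features: F_pt from Section 2.3
    predictions[name] = clf.predict(test_features)
```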

2.5. Experimental Setup and Evaluation Metrics

For this experiment, a Windows 10 PC, a Jupyter notebook, 8 GB of storage space on Google Drive, and a 64-bit operating system were utilised. The Keras 2.4.3 framework with the TensorFlow backend was employed for the training and validation of the deep neural network. The crucial phase in the proposed model is the evaluation stage, which enables the calculation of the discrepancy between the predicted and actual values. This inference helps us achieve a consistently reliable model for identifying rice diseases. A number of parameters are used, such as accuracy, precision, recall rate, F1-Score, Matthews Coefficient, and Kappa Statistics, as described in Table 5 [40]. In the next section, we discuss and compare the various results achieved during the implementation process.
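A sketch of the evaluation stage using the scikit-learn implementations of the Table 5 metrics is given below; macro averaging over the three disease classes is an assumption, not a setting stated in the paper.
```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef, cohen_kappa_score)

def evaluate(y_true, y_pred):
    """Compute the six performance parameters listed in Table 5."""
    return {
        "Accuracy": accuracy_score(y_true, y_pred),
        "Precision": precision_score(y_true, y_pred, average="macro"),
        "Recall rate": recall_score(y_true, y_pred, average="macro"),
        "F1-Score": f1_score(y_true, y_pred, average="macro"),
        "Matthews Coefficient": matthews_corrcoef(y_true, y_pred),
        "Kappa Statistics": cohen_kappa_score(y_true, y_pred),
    }
```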

3. Results

This section demonstrates and discusses the outcome obtained using the suggested methods. By using a dataset on rice leaf diseases, this section describes in detail the accuracy, precision, recall rate, F1-Score, Matthews Coefficient, and Kappa Statistics of the evaluation of the proposed technique with respect to conventional strategies. Comparative analysis of different machine learning and deep learning approaches with normal and segmented datasets is discussed in this section. In case 1, results are discussed on the basis of a normal image set. In case 2, an analysis of the segmented image set is discussed, and in case 3, a comparative analysis of results from the normal and segmented image sets is discussed.

3.1. Analysis of Normal Data

An analysis of the normal data on the basis of accuracy is presented in Table 6: the maximum accuracy achieved was 91%, for the pre-trained model EfficientNetB3 with the ET and HGB classifiers, and 90% for EfficientNetV2B3 with the ET classifier. Table 7 shows the corresponding precision analysis. The HGB classifier gave the highest precision value, 92%, with model EfficientNetV2B3, while values of 91% were obtained for the EfficientNetV2B3 and EfficientNetB3 models with the ET classifier.
Following that, the recall rate analysis is presented in Table 8: EfficientNetB3 with the ET and HGB classifiers achieved the highest recall rate of 89%, and model EfficientNetV2B3 with the HGB classifier achieved a recall rate of 86%. Moreover, the pre-trained models EfficientNetB5, EfficientNetB6, and EfficientNetV2S with the ET and HGB classifiers gave recall rates of 84%. Next, analysis was conducted on the basis of the F1-Score; from Table 9, model EfficientNetB3 with ET and HGB gave the maximum F1-Score of 90%, and EfficientNetV2B3 with the HGB classifier gave 89%.
Analysis on the basis of the Matthews Coefficient and the Kappa Statistics is presented in Table 10 and Table 11: the maximum value is 86% for both metrics, obtained with model EfficientNetB3 and the ET and HGB classifiers. For model EfficientNetV2B3 with the ET classifier, the Matthews Coefficient is 85% and the Kappa Statistics value is 84%.
Discussion. From the case 1 analysis, it is observed that pre-trained models EfficientNetV2B3 and EfficientNetB3 gave better results with classifiers such as ET and HGB.

3.2. Analysis on Segmented Data

To enhance the performance of our model, we applied the segmentation technique to the same data set and repeated the analysis of the various parameters. We observed that the results improved after segmentation. Table 12 presents the accuracy of the proposed pre-trained models with machine learning and ensemble learning classifiers: the most accurate model was EfficientNetV2B3 with the HGB and ET classifiers, at 94% and 93% accuracy, respectively. Similarly, the RF and HGB classifiers each gave an accuracy of 91% with model EfficientNetB3. The precision analysis is shown in Table 13: EfficientNetV2B3 achieved 93% precision with the ET classifier and 92% with the HGB classifier, and EfficientNetB3 with the HGB classifier also achieved a 92% precision value.
Next, analysis on the basis of recall rate and F1-Score is presented in Table 14 and Table 15. As with precision, EfficientNetV2B3 with the HGB classifier achieved a 92% recall rate and F1-Score.
Analysis on the basis of Matthews Coefficient and Kappa Statistics is represented in Table 16 and Table 17, and it is noted that the maximum values were 90% for both Matthews Coefficient and Kappa Statistics with EfficientNetV2B3 for HGB classifiers.
From the case 2 analysis, it is observed that the pre-trained models EfficientNetV2B3, EfficientNetB3, and EfficientNetB4 gave better results with classifiers such as ET and HGB.

3.3. Comparative Discussion

Among all the mentioned algorithms, the maximum accuracy on the normal data set, near 91 percent, was obtained by EfficientNetV2B3, EfficientNetB3, EfficientNetV2S, and EfficientNetB6 with classifiers such as RF, ET, and HGB. With the segmented dataset, however, the highest accuracies were 94 and 93 percent, achieved by the EfficientNetV2B3 model with the HGB and ET classifiers, respectively. Similarly, the other performance parameters, namely precision, recall rate, F1-Score, and the Matthews and Kappa coefficients, achieved their highest values with the same models and classifiers, as discussed in Table 18.

4. Conclusions

Rice leaf diseases have a devastating effect on global food security and are a primary threat to agricultural progress around the world. There may be no harvest at all if the leaf disease is severe [41]. To ensure the productivity of rice products, the prompt and precise identification of rice leaf diseases is essential. For this reason, it is very important to look for quick, inexpensive, and accurate ways to identify rice leaf disease cases. Pre-trained transfer learning algorithms have demonstrated excellent performance in solving the majority of the technological issues related to the classification of leaf diseases. In this study, we presented an analysis of various pre-trained models with different classifiers for the detection of rice leaf diseases. The three major rice leaf diseases, BB, BS, and blast, are considered in this research. An image-based rice leaf disease data set was collected and pre-processed according to the algorithmic requirements. Initially, 32 pre-trained models were used to extract features, and the images were then classified using various machine and ensemble learning classifiers. Images were enhanced by the segmentation process, and the results were compared on various performance parameters, namely accuracy, precision, recall rate, F1-Score, Matthews Coefficient, and Kappa Statistics. Experiments were performed on both the normal image data set and the segmented image data set. With the pre-trained models EfficientNetB3, EfficientNetB6, EfficientNetV2S, and EfficientNetV2B3 combined with the Extra Tree and HGB classifiers, the proposed model achieves 91% accuracy on the normal data set and 94% accuracy on the segmented data set. In the future, we will deploy these results on mobile devices to recognise rice leaf diseases automatically, and this model could also be used to classify diseases of other related crops in agriculture.

Author Contributions

Study conception and design: M.A., V.K., N.G.; data collection: M.A., V.K., N.G.; methodology: M.A., V.K., N.G., A.T., A.S., E.B.T., S.K.; analysis and interpretation of results: M.A., V.K., N.G.; draft manuscript preparation: M.A., V.K., N.G., A.T., A.S., E.B.T., S.K.; supervision: V.K., N.G., A.T., A.S.; project administration: V.K., N.G., A.T.; funding acquisition: E.B.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Researchers Supporting Project Number (RSPD2023R681), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was funded by the Researchers Supporting Project Number (RSPD2023R681), King Saud University, Riyadh, Saudi Arabia. The authors are grateful to King Saud University, Riyadh, Saudi Arabia, for supporting this research work.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

| Abbreviation | Definition | Abbreviation | Definition |
|---|---|---|---|
| ML | Machine Learning | GNB | Gaussian Naïve Bayes |
| DL | Deep Learning | K-NN | K-Nearest Neighbour |
| DCNN | Deep CNN | LR | Logistic Regression |
| FS | False Smut | SVM | Support Vector Machine |
| BS | Brown Spot | DT | Decision Tree |
| SB | Sheath Blight | RF | Random Forest |
| SB | Stem Borer | QDA | Quadratic Discriminant Analysis |
| LS | Leaf Smut | AB | Ada-Boost |
| SR | Sheath Rot | ET | Extra Tree |
| FS | False Smut | HGB | Histogram Gradient Boosting |
| BB | Bacterial Blight | GB | Gradient Boosting |
| MLP | Multi-Layer Perceptron | FN | False Negative |
| TP | True Positive | MC | Matthews Coefficient |
| TN | True Negative | KP | Kappa Statistics |
| FP | False Positive | YOLO | You Only Look Once |

References

  1. Stanley, M. Food Staple. Available online: https://education.nationalgeographic.org/resource/food-staple (accessed on 26 December 2022).
  2. Kodama, T.; Hata, Y. Development of Classification System of Rice Disease Using Artificial Intelligence. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 3699–3702. [Google Scholar] [CrossRef]
  3. Chen, J.; Zhang, D.; Zeb, A.; Nanehkaran, Y.A. Identification of Rice Plant Diseases Using Lightweight Attention Networks. Expert Syst. Appl. 2021, 169, 114514. [Google Scholar] [CrossRef]
  4. Wani, J.A.; Sharma, S.; Muzamil, M.; Ahmed, S.; Sharma, S.; Singh, S. Machine Learning and Deep Learning Based Computational Techniques in Automatic Agricultural Diseases Detection: Methodologies, Applications, and Challenges. In Archives of Computational Methods in Engineering; Springer: Amsterdam, The Netherlands, 2022; Volume 29, pp. 641–677. ISBN 0123456789. [Google Scholar]
  5. Krishnamoorthy, N.; Prasad, L.N.; Kumar, C.P.; Subedi, B.; Abraha, H.B.; Sathishkumar, V.E. Rice Leaf Diseases Prediction Using Deep Neural Networks with Transfer Learning. Environ. Res. 2021, 198, 111275. [Google Scholar] [CrossRef]
  6. Prajapati, H.B.; Shah, J.P.; Dabhi, V.K. Detection and Classification of Rice Plant Diseases. Intell. Decis. Technol. 2017, 11, 357–373. [Google Scholar] [CrossRef]
  7. Gunawan, P.A.; Kencana, E.N.; Sari, K. Classification of Rice Leaf Diseases Using Artificial Neural Network. J. Phys. Conf. Ser. 2021, 1722, 012013. [Google Scholar] [CrossRef]
  8. Narmadha, R.P.; Arulvadivu, G. Detection and Measurement of Paddy Leaf Disease Symptoms Using Image Processing. In Proceedings of the 2017 International Conference on Computer Communication and Informatics (ICCCI), Tamilnadu, India, 5–7 January 2017; pp. 5–8. [Google Scholar] [CrossRef]
  9. Liu, L.W.; Hsieh, S.H.; Lin, S.J.; Wang, Y.M.; Lin, W.S. Rice Blast (Magnaporthe Oryzae) Occurrence Prediction and the Key Factor Sensitivity Analysis by Machine Learning. Agronomy 2021, 11, 771. [Google Scholar] [CrossRef]
  10. Sharma, T.; Kaur, P.; Chahal, J.; Sharma, H. Classification of Rice Leaf Diseases Based on the Deep Convolutional Neural Network Architectures: Review. In Proceedings of the AIP Conference Proceedings, Himachal Pradesh, India, 2–3 July 2022; Volume 2451. [Google Scholar]
  11. Maniyath, S.R.; Vinod, P.V.; Niveditha, M.; Pooja, R.; Prasad Bhat, N.; Shashank, N.; Hebbar, R. Plant Disease Detection Using Machine Learning. In Proceedings of the 2018 International Conference on Design Innovations for 3Cs Compute Communicate Control (ICDI3C), Bangalore, India, 25–28 April 2018; pp. 41–45. [Google Scholar] [CrossRef]
  12. Bari, B.S.; Islam, M.N.; Rashid, M.; Hasan, M.J.; Razman, M.A.M.; Musa, R.M.; Nasir, A.F.A.; Majeed, A.P.P.A. A Real-Time Approach of Diagnosing Rice Leaf Disease Using Deep Learning-Based Faster R-CNN Framework. PeerJ Comput. Sci. 2021, 7, e432. [Google Scholar] [CrossRef] [PubMed]
  13. Daniya, T.; Vigneshwari, S. Deep Neural Network for Disease Detection in Rice Plant Using the Texture and Deep Features. Comput. J. 2022, 65, 1812–1825. [Google Scholar] [CrossRef]
  14. Sethy, P.K.; Barpanda, N.K.; Rath, A.K.; Behera, S.K. Deep Feature Based Rice Leaf Disease Identification Using Support Vector Machine. Comput. Electron. Agric. 2020, 175, 105527. [Google Scholar] [CrossRef]
  15. Rautaray, S.S.; Pandey, M.; Gourisaria, M.K.; Sharma, R.; Das, S. Paddy Crop Disease Prediction—A Transfer Learning Technique. Int. J. Recent Technol. Eng. 2020, 8, 1490–1495. [Google Scholar] [CrossRef]
  16. Jadhav, A.A.J.B.D. Monitoring and Controlling Rice Diseases Using Image Processing Techniques. In Proceedings of the International Conference on Computing, Analytics and Security Trends (CAST), Pune, India, 19–21 December 2016; pp. 471–476. [Google Scholar]
  17. Sharma, V.; Mir, A.A.; Sarwr, D.A. Detection of Rice Disease Using Bayes’ Classifier and Minimum Distance Classifier. J. Multimed. Inf. Syst. 2020, 7, 17–24. [Google Scholar] [CrossRef]
  18. Lu, Y.; Yi, S.; Zeng, N.; Liu, Y.; Zhang, Y. Identification of Rice Diseases Using Deep Convolutional Neural Networks. Neurocomputing 2017, 267, 378–384. [Google Scholar] [CrossRef]
  19. Jiang, F.; Lu, Y.; Chen, Y.; Cai, D.; Li, G. Image Recognition of Four Rice Leaf Diseases Based on Deep Learning and Support Vector Machine. Comput. Electron. Agric. 2020, 179, 105824. [Google Scholar] [CrossRef]
  20. He, Y.; Zhou, Z.; Tian, L.; Liu, Y.; Luo, X. Brown Rice Planthopper (Nilaparvata Lugens Stal) Detection Based on Deep Learning. Precis. Agric. 2020, 21, 1385–1402. [Google Scholar] [CrossRef]
  21. Azim, M.A.; Islam, M.K.; Rahman, M.M.; Jahan, F. An Effective Feature Extraction Method for Rice Leaf Disease Classification. Telkomnika (Telecommun. Comput. Electron. Control) 2021, 19, 463–470. [Google Scholar] [CrossRef]
  22. Suresha, M.; Shreekanth, K.N.; Thirumalesh, B.V. Recognition of Diseases in Paddy Leaves Using Knn Classifier. In Proceedings of the 2nd International Conference for Convergence in Technology, I2CT, Mumbai, India, 7–9 April 2017; pp. 663–666. [Google Scholar]
  23. Sardogan, M.; Tuncer, A.; Ozen, Y. Plant Leaf Disease Detection and Classification Based on CNN with LVQ Algorithm. In Proceedings of the UBMK—3rd International Conference on Computer Science and Engineering, Sarajevo, Bosnia and Herzegovina, 20–23 September 2018; pp. 382–385. [Google Scholar]
  24. Liang, W.-J.; Zhang, H.; Zhang, G.F.; Cao, H.-X. Rice Blast Disease Recognition Using a Deep Convolutional Neural Network. Sci. Rep. 2019, 9, 2869. [Google Scholar] [CrossRef] [PubMed]
  25. Nidhis, A.D.; Pardhu, C.N.V.; Reddy, K.C.; Deepa, K. Cluster Based Paddy Leaf Disease Detection, Classification and Diagnosis in Crop Health Monitoring Unit. In Lecture Notes in Computational Vision and Biomechanics; Springer International Publishing: New York, NY, USA, 2019; Volume 31, pp. 281–291. ISBN 9783030040611. [Google Scholar]
  26. Anami, B.S.; Malvade, N.N.; Palaiah, S. Deep Learning Approach for Recognition and Classification of Yield Affecting Paddy Crop Stresses Using Field Images. Artif. Intell. Agric. 2020, 4, 12–20. [Google Scholar] [CrossRef]
  27. Ghosal, S.; Sarkar, K. Rice Leaf Diseases Classification Using CNN with Transfer Learning. In Proceedings of the IEEE Calcutta Conference, CALCON–Proceedings, Kolkata, India, 28–29 February 2020; pp. 230–236. [Google Scholar]
  28. Li, D.; Wang, R.; Xie, C.; Liu, L.; Zhang, J.; Li, R.; Wang, F.; Zhou, M.; Liu, W. A Recognition Method for Rice Plant Diseases and Pests Video Detection Based on Deep Convolutional Neural Network. Sensors 2020, 20, 578. [Google Scholar] [CrossRef] [PubMed]
  29. Matin, M.M.H.; Khatun, A.; Moazzam, M.G.; Uddin, M.S. An Efficient Disease Detection Technique of Rice Leaf Using AlexNet. J. Comput. Commun. 2020, 8, 49–57. [Google Scholar] [CrossRef]
  30. Temniranrat, P.; Kiratiratanapruk, K.; Kitvimonrat, A.; Sinthupinyo, W.; Patarapuwadol, S. A System for Automatic Rice Disease Detection from Rice Paddy Images Serviced via a Chatbot. Comput. Electron. Agric. 2021, 185, 106156. [Google Scholar] [CrossRef]
  31. Francisco, A.K.G. Rice Disease Data Set. Available online: https://github.com/aldrin233/RiceDiseases-DataSet (accessed on 12 October 2022).
  32. Sethy, P.K.; Barpanda, N.K.; Rath, A.K.; Behera, S.K. Image Processing Techniques for Diagnosing Rice Plant Disease: A Survey. Procedia Comput. Sci. 2020, 167, 516–530. [Google Scholar] [CrossRef]
  33. Krichen, M.; Mihoub, A.; Alzahrani, M.Y.; Adoni, W.Y.H.; Nahhal, T. Are Formal Methods Applicable To Machine Learning And Artificial Intelligence? In Proceedings of the International Conference of Smart Systems and Emerging Technologies (SMARTTECH), Riyadh, Saudi Arabia, 9–11 May 2022; pp. 48–53. [Google Scholar]
  34. Seshia, S.A.; Sadigh, D.; Sastry, S.S. Toward Verified Artificial Intelligence. Commun. ACM 2022, 65, 46–55. [Google Scholar] [CrossRef]
  35. Anantrasirichai, N.; Hannuna, S.; Canagarajah, N. Automatic Leaf Extraction from Outdoor Images. arXiv 2017, arXiv:1709.06437. [Google Scholar] [CrossRef]
  36. Watershed Algorithm and Application. Available online: https://www.aegissofttech.com/articles/watershed-algorithm-and-limitations.html (accessed on 25 December 2022).
  37. Dara, S.; Tumma, P. Feature Extraction by Using Deep Learning: A Survey. In Proceedings of the 2nd International Conference on Electronics, Communication and Aerospace Technology, ICECA 2018, Coimbatore, India, 29–31 March 2018; pp. 1795–1801. [Google Scholar]
  38. Rallapalli, S.M.; Saleem Durai, M.A. A Contemporary Approach for Disease Identification in Rice Leaf. Int. J. Syst. Assur. Eng. Manag. 2021, 1–11. [Google Scholar] [CrossRef]
  39. Sengupta, S.; Dutta, A.; Abdelmohsen, S.A.M.; Alyousef, H.A.; Rahimi-Gorji, M. Development of a Rice Plant Disease Classification Model in Big Data Environment. Bioengineering 2022, 9, 758. [Google Scholar] [CrossRef] [PubMed]
  40. Chen, J.; Chen, J.; Zhang, D.; Sun, Y.; Nanehkaran, Y.A. Using Deep Transfer Learning for Image-Based Plant Disease Identification. Comput. Electron. Agric. 2020, 173, 105393. [Google Scholar] [CrossRef]
  41. Kaur, A.; Guleria, K.; Trivedi, N. A Deep Learning Based Model for Rice Leaf Disease Detection. In Proceedings of the 10th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 13–14 October 2022; pp. 1–5. [Google Scholar]
Figure 1. The overall flow of rice leaf disease prediction.
Figure 2. Images of rice leaf diseases.
Figure 3. Segmented image samples.
Figure 4. Classification process.
Table 1. Classification of various rice diseases with symptoms.

| Disease | Stage | Symptoms | Important Season | Factors for Infection |
|---|---|---|---|---|
| Blast | Growing stage | Green-grey spot with dark green outline; more difficult to detect with grey centre and green outline | Rain showers and cool temperature | High humidity and nitrogen level |
| Sheath Blight | At tillering | Greenish-grey irregular spot between water and leaf blade | Rainy season | High temperature and humidity with high level of nitrogen |
| False Smut | Flowering to maturity | Follicles are orange and at maturity turn greenish yellow or black | Periodic rainfall | Extreme nitrogen and high humidity |
| Brown Spot | Flowering to maturity | Brown to purple-brown oval spot on leaves | Periodic rain | High humidity, soil deficiency and high temperature |
| Bacterial Blight | Tillering to heading | Tan-greyish to white lesions | Wet season | High temperature and humidity |
Table 2. Comparative analysis of ML/DL techniques.

| References | ML/DL Technique | Disease Type | Data Set Size (Images) | Improved Technique | Performance Measure/Score | Limitation |
|---|---|---|---|---|---|---|
| [22] | K-NN classifier with global threshold | Blast, BS | 330 | Segmentation | Accuracy = 0.76 | Lower accuracy |
| [18] | Deep CNN | Blast, FS, BS, SB | 500 | None | Mean accuracy = 0.95 | Time-consuming because deep learning architectures contain several layers |
| [23] | LVQ with CNN | BS | 500 | None | Accuracy = 0.86 | Only one class is used |
| [24] | DCNN | Rice blast | 5808 | None | Mean accuracy = 0.89; AUC = 0.95 | Only one rice disease discussed |
| [25] | Image processing | BB, BS, blast | - | Segmentation | Accuracy = 0.91 | Back-propagation method was not discussed |
| [26] | DCNN (VGG-16) | BB, FS | 6000 | None | Accuracy = 0.95 | Feature extraction technique was not accurate |
| [21] | Extreme Gradient Boosting | BB, LS, BS | 120 | Segmentation | Accuracy = 0.86; F1-Score = 0.87 | Small data set |
| [27] | CNN with transfer learning (VGG 16) | Blight, BS, LB | 1649 | Augmentation | Accuracy = 0.92 | Augmentation approach is not appropriate |
| [20] | RCNN | Brown rice planthopper | 4600 | None | Accuracy = 0.94; recall rate = 0.88 | Feature extraction technique was not appropriate |
| [28] | VGG16, ResNet50, ResNet101, and YOLOv3 | SB, BS | 5320 | None | Mean F1-score = 0.74; recall rate = 0.77; precision = 0.74 | Performance parameters are low |
| [29] | AlexNet neural network | BS, BB, LS | 900 | Augmentation | Accuracy = 0.9 | Augmentation technique was appropriate |
| [7] | ANN | BS, LS | 96 | Segmentation | Accuracy = 0.79 | Small data set |
| [9] | Probabilistic Neural Network (PNN) | Rice blast | 1800 | None | Accuracy = 0.91; F1-Score = 0.92 | Only one rice leaf disease was discussed |
| [5] | CNN and InceptionResNetV2 | Blast, BB, BS | 5200 | Augmentation | Accuracy = 0.95 | Feature extraction technique was not appropriate |
| [30] | Neural network with YOLOv3 | Blast, BS, streak | 6538 | None | TPR = 0.78 | Performance parameters are not enough |
Table 3. Details of images present in dataset.

| Disease Name | No. of Images | Images for Training (80%) | Images for Validation (20%) |
|---|---|---|---|
| Bacterial leaf blight | 192 | 154 | 38 |
| Brown Spot | 200 | 160 | 40 |
| Blast | 159 | 127 | 32 |
| Total | 551 | 441 | 110 |
Table 4. Feature extractors with ImageNet weights used in this work.

| Model | Input Shape | Selected Features Size | Number of Parameters | Memory (in Bytes) | Feature Layer |
|---|---|---|---|---|---|
| Xception | (229, 229, 3) | 2048 | 20,861,480 | 272,784,948 | Global Average Pooling 2D |
| VGG16 | (224, 224, 3) | 4096 | 134,260,544 | 195,307,328 | Dense |
| VGG19 | (224, 224, 3) | 4096 | 139,570,240 | 205,835,328 | Dense |
| ResNet50 | (224, 224, 3) | 2048 | 23,587,712 | 172,064,560 | Global Average Pooling 2D |
| ResNet50V2 | (224, 224, 3) | 2048 | 23,564,800 | 149,085,616 | Global Average Pooling 2D |
| ResNet101 | (224, 224, 3) | 2048 | 42,658,176 | 266,198,320 | Global Average Pooling 2D |
| ResNet101V2 | (224, 224, 3) | 2048 | 42,626,560 | 247,667,120 | Global Average Pooling 2D |
| ResNet152 | (224, 224, 3) | 2048 | 58,370,944 | 374,636,336 | Global Average Pooling 2D |
| ResNet152V2 | (224, 224, 3) | 2048 | 58,331,648 | 361,348,528 | Global Average Pooling 2D |
| InceptionV3 | (229, 229, 3) | 2048 | 21,802,784 | 152,016,332 | Global Average Pooling 2D |
| InceptionResNetV2 | (229, 229, 3) | 1536 | 54,336,736 | 379,140,364 | Global Average Pooling 2D |
| MobileNet | (224, 224, 3) | 1000 | 4,253,864 | 71,638,760 | Reshape |
| DenseNet121 | (224, 224, 3) | 1024 | 7,037,504 | 206,739,952 | Global Average Pooling 2D |
| DenseNet169 | (224, 224, 3) | 1664 | 12,642,880 | 253,015,536 | Global Average Pooling 2D |
| DenseNet201 | (224, 224, 3) | 1920 | 18,321,984 | 327,486,960 | Global Average Pooling 2D |
| NASNetMobile | (224, 224, 3) | 1056 | 4,269,716 | 115,028,536 | Global Average Pooling 2D |
| NASNetLarge | (331, 331, 3) | 4032 | 84,916,818 | 1,247,153,502 | Global Average Pooling 2D |
| EfficientNetB0 | (224, 224, 3) | 1280 | 4,049,571 | 105,116,063 | Dropout |
| EfficientNetB1 | (240, 240, 3) | 1280 | 6,575,239 | 167,863,763 | Dropout |
| EfficientNetB2 | (260, 260, 3) | 1408 | 7,768,569 | 212,211,693 | Dropout |
| EfficientNetB3 | (300, 300, 3) | 1536 | 10,783,535 | 361,891,419 | Dropout |
| EfficientNetB4 | (380, 380, 3) | 1792 | 17,673,823 | 739,756,747 | Dropout |
| EfficientNetB5 | (456, 456, 3) | 2048 | 28,513,527 | 1,464,166,467 | Dropout |
| EfficientNetB6 | (528, 528, 3) | 2304 | 40,960,143 | 2,466,985,915 | Dropout |
| EfficientNetB7 | (600, 600, 3) | 2560 | 64,097,687 | 4,252,866,467 | Dropout |
| EfficientNetV2B0 | (224, 224, 3) | 1280 | 5,919,312 | 70,588,560 | Dropout |
| EfficientNetV2B1 | (240, 240, 3) | 1280 | 6,931,124 | 107,709,828 | Dropout |
| EfficientNetV2B2 | (260, 260, 3) | 1408 | 8,769,374 | 143,981,142 | Dropout |
| EfficientNetV2B3 | (300, 300, 3) | 1536 | 12,930,622 | 228,894,934 | Dropout |
| EfficientNetV2S | (384, 384, 3) | 1280 | 20,331,360 | 512,960,320 | Dropout |
| EfficientNetV2M | (480, 480, 3) | 1280 | 53,150,388 | 1,301,769,700 | Dropout |
| EfficientNetV2L | (480, 480, 3) | 1280 | 117,746,848 | 2,317,398,688 | Dropout |
Table 5. Performance Parameters.

| Metric | Definition | Formula |
|---|---|---|
| Accuracy | Agreement between actual and predicted values | Accuracy = (TP + TN)/(TP + FP + TN + FN) |
| Precision | Proportion of positive predictions that are actually correct | Precision = TP/(TP + FP) |
| Recall rate | Proportion of actual positives that are correctly predicted | Recall = TP/(TP + FN) |
| F1-score | Single value combining precision and recall rate | F1-Score = 2TP/(2TP + FP + FN) |
| MC (Matthews Coefficient) | Measures the quality of binary and multiclass classification | MC = (TP × TN − FP × FN)/√((TP + FP)(TP + FN)(TN + FP)(TN + FN)) |
| KP (Kappa Statistics) | Measures the inter-rater reliability for categorical items | K = (po − pe)/(1 − pe) |
Table 6. Accuracy on Normal Data.

| Pre-Trained Model | DT | QDA | KNN | AB | GNB | LR | RF | ET | HGB | MLP |
|---|---|---|---|---|---|---|---|---|---|---|
| Xception | 0.7 | 0.35 | 0.63 | 0.66 | 0.66 | 0.65 | 0.79 | 0.8 | 0.86 | 0.69 |
| VGG19 | 0.67 | 0.39 | 0.67 | 0.72 | 0.59 | 0.64 | 0.82 | 0.83 | 0.83 | 0.79 |
| VGG16 | 0.61 | 0.36 | 0.64 | 0.66 | 0.69 | 0.56 | 0.82 | 0.81 | 0.78 | 0.73 |
| ResNet152V2 | 0.63 | 0.38 | 0.56 | 0.66 | 0.57 | 0.54 | 0.76 | 0.78 | 0.8 | 0.65 |
| ResNet152 | 0.74 | 0.48 | 0.72 | 0.77 | 0.56 | 0.66 | 0.83 | 0.83 | 0.84 | 0.7 |
| ResNet101V2 | 0.56 | 0.41 | 0.56 | 0.67 | 0.63 | 0.59 | 0.74 | 0.71 | 0.79 | 0.7 |
| ResNet101 | 0.71 | 0.44 | 0.68 | 0.71 | 0.62 | 0.7 | 0.84 | 0.81 | 0.81 | 0.77 |
| ResNet50V2 | 0.54 | 0.38 | 0.53 | 0.71 | 0.65 | 0.53 | 0.76 | 0.78 | 0.73 | 0.66 |
| ResNet50 | 0.72 | 0.47 | 0.7 | 0.71 | 0.6 | 0.65 | 0.84 | 0.81 | 0.8 | 0.73 |
| NASNetMobile | 0.61 | 0.45 | 0.62 | 0.67 | 0.51 | 0.57 | 0.77 | 0.77 | 0.78 | 0.66 |
| NASNetLarge | 0.7 | 0.44 | 0.62 | 0.66 | 0.36 | 0.73 | 0.81 | 0.78 | 0.8 | 0.72 |
| MobileNet | 0.61 | 0.4 | 0.55 | 0.61 | 0.61 | 0.69 | 0.79 | 0.81 | 0.8 | 0.7 |
| InceptionV3 | 0.65 | 0.39 | 0.64 | 0.66 | 0.77 | 0.61 | 0.79 | 0.83 | 0.77 | 0.76 |
| InceptionResNetV2 | 0.64 | 0.48 | 0.68 | 0.65 | 0.79 | 0.67 | 0.79 | 0.82 | 0.8 | 0.72 |
| EfficientNetV2S | 0.72 | 0.4 | 0.76 | 0.69 | 0.71 | 0.69 | 0.91 | 0.89 | 0.89 | 0.73 |
| EfficientNetV2M | 0.66 | 0.38 | 0.47 | 0.71 | 0.67 | 0.57 | 0.77 | 0.79 | 0.79 | 0.66 |
| EfficientNetV2L | 0.66 | 0.38 | 0.66 | 0.56 | 0.78 | 0.64 | 0.82 | 0.87 | 0.87 | 0.74 |
| EfficientNetV2B3 | 0.68 | 0.43 | 0.72 | 0.81 | 0.78 | 0.76 | 0.85 | 0.9 | 0.89 | 0.85 |
| EfficientNetV2B2 | 0.69 | 0.39 | 0.61 | 0.69 | 0.61 | 0.56 | 0.8 | 0.78 | 0.81 | 0.66 |
| EfficientNetV2B1 | 0.71 | 0.41 | 0.61 | 0.73 | 0.6 | 0.63 | 0.78 | 0.8 | 0.82 | 0.69 |
| EfficientNetV2B0 | 0.63 | 0.33 | 0.63 | 0.67 | 0.6 | 0.57 | 0.76 | 0.76 | 0.74 | 0.77 |
| EfficientNetB7 | 0.83 | 0.46 | 0.65 | 0.73 | 0.79 | 0.71 | 0.86 | 0.86 | 0.83 | 0.74 |
| EfficientNetB6 | 0.71 | 0.43 | 0.72 | 0.63 | 0.67 | 0.6 | 0.9 | 0.86 | 0.86 | 0.74 |
| EfficientNetB5 | 0.64 | 0.44 | 0.69 | 0.79 | 0.71 | 0.63 | 0.85 | 0.88 | 0.83 | 0.74 |
| EfficientNetB4 | 0.69 | 0.46 | 0.76 | 0.78 | 0.71 | 0.65 | 0.81 | 0.83 | 0.85 | 0.74 |
| EfficientNetB3 | 0.76 | 0.4 | 0.7 | 0.81 | 0.87 | 0.69 | 0.89 | 0.91 | 0.91 | 0.86 |
| EfficientNetB2 | 0.74 | 0.5 | 0.61 | 0.65 | 0.67 | 0.66 | 0.77 | 0.82 | 0.78 | 0.71 |
| EfficientNetB1 | 0.68 | 0.4 | 0.67 | 0.72 | 0.63 | 0.69 | 0.8 | 0.83 | 0.8 | 0.71 |
| EfficientNetB0 | 0.65 | 0.36 | 0.7 | 0.73 | 0.63 | 0.57 | 0.8 | 0.83 | 0.82 | 0.71 |
| DenseNet201 | 0.6 | 0.34 | 0.56 | 0.73 | 0.64 | 0.63 | 0.79 | 0.77 | 0.81 | 0.65 |
| DenseNet169 | 0.59 | 0.36 | 0.64 | 0.7 | 0.53 | 0.66 | 0.81 | 0.78 | 0.77 | 0.67 |
| DenseNet121 | 0.72 | 0.4 | 0.71 | 0.74 | 0.4 | 0.62 | 0.79 | 0.81 | 0.83 | 0.72 |
Table 7. Precision on Normal Data.

| Pre-Trained Model | DT | QDA | KNN | AB | GNB | LR | RF | ET | HGB | MLP |
|---|---|---|---|---|---|---|---|---|---|---|
| Xception | 0.67 | 0.23 | 0.52 | 0.69 | 0.64 | 0.44 | 0.72 | 0.77 | 0.83 | 0.64 |
| VGG19 | 0.62 | 0.23 | 0.6 | 0.69 | 0.59 | 0.62 | 0.77 | 0.78 | 0.79 | 0.74 |
| VGG16 | 0.55 | 0.21 | 0.62 | 0.67 | 0.67 | 0.39 | 0.77 | 0.75 | 0.73 | 0.67 |
| ResNet152V2 | 0.59 | 0.23 | 0.49 | 0.59 | 0.55 | 0.36 | 0.66 | 0.7 | 0.76 | 0.61 |
| ResNet152 | 0.69 | 0.3 | 0.67 | 0.72 | 0.61 | 0.61 | 0.78 | 0.78 | 0.79 | 0.67 |
| ResNet101V2 | 0.51 | 0.25 | 0.5 | 0.63 | 0.62 | 0.5 | 0.67 | 0.62 | 0.74 | 0.63 |
| ResNet101 | 0.66 | 0.3 | 0.65 | 0.66 | 0.71 | 0.47 | 0.79 | 0.75 | 0.76 | 0.7 |
| ResNet50V2 | 0.48 | 0.24 | 0.45 | 0.66 | 0.69 | 0.54 | 0.69 | 0.73 | 0.67 | 0.63 |
| ResNet50 | 0.67 | 0.28 | 0.57 | 0.67 | 0.62 | 0.58 | 0.79 | 0.75 | 0.74 | 0.7 |
| NASNetMobile | 0.57 | 0.29 | 0.57 | 0.63 | 0.44 | 0.53 | 0.71 | 0.71 | 0.72 | 0.61 |
| NASNetLarge | 0.66 | 0.23 | 0.57 | 0.64 | 0.6 | 0.5 | 0.77 | 0.74 | 0.78 | 0.6 |
| MobileNet | 0.59 | 0.15 | 0.52 | 0.56 | 0.69 | 0.64 | 0.75 | 0.74 | 0.76 | 0.65 |
| InceptionV3 | 0.62 | 0.22 | 0.53 | 0.69 | 0.73 | 0.43 | 0.75 | 0.78 | 0.71 | 0.71 |
| InceptionResNetV2 | 0.6 | 0.28 | 0.63 | 0.65 | 0.76 | 0.47 | 0.75 | 0.79 | 0.76 | 0.67 |
| EfficientNetV2S | 0.67 | 0.24 | 0.7 | 0.63 | 0.7 | 0.62 | 0.89 | 0.85 | 0.85 | 0.68 |
| EfficientNetV2M | 0.61 | 0.24 | 0.4 | 0.66 | 0.63 | 0.58 | 0.68 | 0.72 | 0.7 | 0.55 |
| EfficientNetV2L | 0.59 | 0.21 | 0.58 | 0.6 | 0.73 | 0.66 | 0.77 | 0.85 | 0.85 | 0.67 |
| EfficientNetV2B3 | 0.62 | 0.26 | 0.66 | 0.75 | 0.72 | 0.84 | 0.81 | 0.91 | 0.92 | 0.81 |
| EfficientNetV2B2 | 0.66 | 0.27 | 0.56 | 0.67 | 0.62 | 0.48 | 0.74 | 0.72 | 0.75 | 0.62 |
| EfficientNetV2B1 | 0.67 | 0.21 | 0.57 | 0.71 | 0.53 | 0.46 | 0.73 | 0.76 | 0.78 | 0.66 |
| EfficientNetV2B0 | 0.59 | 0.22 | 0.59 | 0.63 | 0.68 | 0.62 | 0.71 | 0.7 | 0.68 | 0.72 |
| EfficientNetB7 | 0.79 | 0.28 | 0.56 | 0.69 | 0.75 | 0.65 | 0.88 | 0.9 | 0.79 | 0.7 |
| EfficientNetB6 | 0.71 | 0.27 | 0.68 | 0.66 | 0.7 | 0.43 | 0.88 | 0.82 | 0.82 | 0.69 |
| EfficientNetB5 | 0.61 | 0.28 | 0.66 | 0.74 | 0.69 | 0.42 | 0.82 | 0.86 | 0.81 | 0.71 |
| EfficientNetB4 | 0.65 | 0.28 | 0.67 | 0.74 | 0.7 | 0.46 | 0.78 | 0.79 | 0.8 | 0.7 |
| EfficientNetB3 | 0.72 | 0.24 | 0.66 | 0.78 | 0.84 | 0.72 | 0.89 | 0.91 | 0.9 | 0.84 |
| EfficientNetB2 | 0.67 | 0.32 | 0.57 | 0.65 | 0.64 | 0.63 | 0.7 | 0.76 | 0.74 | 0.66 |
| EfficientNetB1 | 0.64 | 0.25 | 0.61 | 0.72 | 0.66 | 0.62 | 0.75 | 0.78 | 0.77 | 0.66 |
| EfficientNetB0 | 0.63 | 0.18 | 0.63 | 0.68 | 0.63 | 0.53 | 0.74 | 0.76 | 0.76 | 0.65 |
| DenseNet201 | 0.55 | 0.19 | 0.53 | 0.71 | 0.63 | 0.43 | 0.72 | 0.7 | 0.75 | 0.61 |
| DenseNet169 | 0.56 | 0.2 | 0.57 | 0.66 | 0.58 | 0.55 | 0.75 | 0.71 | 0.71 | 0.62 |
| DenseNet121 | 0.69 | 0.26 | 0.69 | 0.71 | 0.55 | 0.61 | 0.73 | 0.76 | 0.78 | 0.68 |
Table 8. Recall Rate on Normal Data.

| Pre-Trained Model | DT | QDA | KNN | AB | GNB | LR | RF | ET | HGB | MLP |
|---|---|---|---|---|---|---|---|---|---|---|
| Xception | 0.67 | 0.39 | 0.53 | 0.67 | 0.63 | 0.52 | 0.69 | 0.76 | 0.84 | 0.63 |
| VGG19 | 0.63 | 0.38 | 0.6 | 0.68 | 0.56 | 0.55 | 0.78 | 0.78 | 0.78 | 0.74 |
| VGG16 | 0.55 | 0.36 | 0.62 | 0.68 | 0.68 | 0.45 | 0.78 | 0.76 | 0.72 | 0.66 |
| ResNet152V2 | 0.47 | 0.41 | 0.45 | 0.66 | 0.66 | 0.48 | 0.68 | 0.72 | 0.66 | 0.64 |
| ResNet152 | 0.67 | 0.47 | 0.58 | 0.67 | 0.56 | 0.58 | 0.79 | 0.75 | 0.74 | 0.7 |
| ResNet101V2 | 0.58 | 0.39 | 0.49 | 0.59 | 0.49 | 0.43 | 0.66 | 0.7 | 0.74 | 0.61 |
| ResNet101 | 0.68 | 0.45 | 0.68 | 0.72 | 0.52 | 0.61 | 0.79 | 0.78 | 0.8 | 0.68 |
| ResNet50V2 | 0.51 | 0.41 | 0.5 | 0.61 | 0.57 | 0.49 | 0.66 | 0.62 | 0.71 | 0.62 |
| ResNet50 | 0.66 | 0.52 | 0.67 | 0.67 | 0.58 | 0.56 | 0.8 | 0.76 | 0.77 | 0.69 |
| NASNetMobile | 0.57 | 0.51 | 0.56 | 0.62 | 0.43 | 0.52 | 0.7 | 0.71 | 0.7 | 0.61 |
| NASNetLarge | 0.66 | 0.38 | 0.57 | 0.63 | 0.46 | 0.59 | 0.76 | 0.73 | 0.74 | 0.6 |
| MobileNet | 0.61 | 0.32 | 0.51 | 0.55 | 0.54 | 0.6 | 0.72 | 0.72 | 0.75 | 0.62 |
| InceptionV3 | 0.61 | 0.36 | 0.54 | 0.66 | 0.71 | 0.48 | 0.72 | 0.75 | 0.71 | 0.69 |
| InceptionResNetV2 | 0.59 | 0.48 | 0.63 | 0.63 | 0.76 | 0.54 | 0.76 | 0.77 | 0.74 | 0.67 |
| EfficientNetV2S | 0.68 | 0.38 | 0.71 | 0.61 | 0.72 | 0.62 | 0.87 | 0.84 | 0.84 | 0.69 |
| EfficientNetV2M | 0.62 | 0.39 | 0.4 | 0.65 | 0.62 | 0.48 | 0.68 | 0.71 | 0.68 | 0.55 |
| EfficientNetV2L | 0.59 | 0.35 | 0.58 | 0.59 | 0.73 | 0.52 | 0.74 | 0.8 | 0.85 | 0.66 |
| EfficientNetV2B3 | 0.62 | 0.43 | 0.65 | 0.76 | 0.72 | 0.62 | 0.78 | 0.85 | 0.86 | 0.79 |
| EfficientNetV2B2 | 0.68 | 0.47 | 0.56 | 0.67 | 0.58 | 0.49 | 0.73 | 0.71 | 0.74 | 0.6 |
| EfficientNetV2B1 | 0.68 | 0.35 | 0.57 | 0.73 | 0.5 | 0.5 | 0.72 | 0.76 | 0.77 | 0.61 |
| EfficientNetV2B0 | 0.57 | 0.35 | 0.59 | 0.63 | 0.64 | 0.56 | 0.71 | 0.69 | 0.68 | 0.73 |
| EfficientNetB7 | 0.79 | 0.45 | 0.56 | 0.69 | 0.72 | 0.58 | 0.83 | 0.8 | 0.78 | 0.67 |
| EfficientNetB6 | 0.73 | 0.43 | 0.67 | 0.66 | 0.67 | 0.47 | 0.86 | 0.83 | 0.84 | 0.68 |
| EfficientNetB5 | 0.6 | 0.48 | 0.68 | 0.74 | 0.69 | 0.5 | 0.82 | 0.84 | 0.78 | 0.71 |
| EfficientNetB4 | 0.64 | 0.45 | 0.67 | 0.75 | 0.71 | 0.52 | 0.8 | 0.81 | 0.81 | 0.71 |
| EfficientNetB3 | 0.73 | 0.39 | 0.65 | 0.8 | 0.85 | 0.59 | 0.87 | 0.89 | 0.89 | 0.84 |
| EfficientNetB2 | 0.67 | 0.5 | 0.57 | 0.63 | 0.65 | 0.61 | 0.69 | 0.77 | 0.75 | 0.66 |
| EfficientNetB1 | 0.63 | 0.42 | 0.6 | 0.73 | 0.64 | 0.62 | 0.75 | 0.78 | 0.79 | 0.66 |
| EfficientNetB0 | 0.64 | 0.31 | 0.61 | 0.69 | 0.61 | 0.49 | 0.74 | 0.75 | 0.74 | 0.63 |
| DenseNet201 | 0.55 | 0.32 | 0.52 | 0.71 | 0.58 | 0.5 | 0.72 | 0.7 | 0.76 | 0.61 |
| DenseNet169 | 0.56 | 0.31 | 0.57 | 0.65 | 0.56 | 0.55 | 0.75 | 0.7 | 0.7 | 0.62 |
| DenseNet121 | 0.7 | 0.45 | 0.68 | 0.71 | 0.46 | 0.58 | 0.73 | 0.76 | 0.79 | 0.69 |
Table 9. F1-Score on Normal Data.

| Pre-Trained Model | DT | QDA | KNN | AB | GNB | LR | RF | ET | HGB | MLP |
|---|---|---|---|---|---|---|---|---|---|---|
| Xception | 0.67 | 0.28 | 0.52 | 0.64 | 0.62 | 0.47 | 0.7 | 0.77 | 0.83 | 0.63 |
| VGG19 | 0.62 | 0.29 | 0.6 | 0.68 | 0.55 | 0.54 | 0.78 | 0.78 | 0.78 | 0.74 |
| VGG16 | 0.55 | 0.27 | 0.62 | 0.64 | 0.66 | 0.4 | 0.78 | 0.76 | 0.73 | 0.66 |
| ResNet152V2 | 0.47 | 0.3 | 0.44 | 0.66 | 0.62 | 0.47 | 0.68 | 0.72 | 0.66 | 0.63 |
| ResNet152 | 0.66 | 0.35 | 0.55 | 0.66 | 0.53 | 0.58 | 0.79 | 0.75 | 0.74 | 0.69 |
| ResNet101V2 | 0.58 | 0.29 | 0.49 | 0.59 | 0.49 | 0.39 | 0.65 | 0.7 | 0.74 | 0.6 |
| ResNet101 | 0.68 | 0.35 | 0.67 | 0.72 | 0.47 | 0.61 | 0.79 | 0.78 | 0.79 | 0.66 |
| ResNet50V2 | 0.51 | 0.31 | 0.5 | 0.61 | 0.57 | 0.48 | 0.66 | 0.62 | 0.71 | 0.62 |
| ResNet50 | 0.66 | 0.36 | 0.65 | 0.66 | 0.57 | 0.51 | 0.79 | 0.75 | 0.76 | 0.69 |
| NASNetMobile | 0.57 | 0.36 | 0.55 | 0.62 | 0.42 | 0.52 | 0.71 | 0.71 | 0.7 | 0.61 |
| NASNetLarge | 0.66 | 0.28 | 0.57 | 0.62 | 0.37 | 0.54 | 0.76 | 0.73 | 0.75 | 0.59 |
| MobileNet | 0.59 | 0.21 | 0.51 | 0.54 | 0.54 | 0.61 | 0.73 | 0.73 | 0.75 | 0.63 |
| InceptionV3 | 0.6 | 0.27 | 0.53 | 0.64 | 0.72 | 0.44 | 0.73 | 0.76 | 0.71 | 0.7 |
| InceptionResNetV2 | 0.59 | 0.35 | 0.63 | 0.62 | 0.76 | 0.5 | 0.75 | 0.78 | 0.74 | 0.67 |
| EfficientNetV2S | 0.67 | 0.29 | 0.7 | 0.6 | 0.69 | 0.62 | 0.88 | 0.85 | 0.85 | 0.68 |
| EfficientNetV2M | 0.61 | 0.29 | 0.4 | 0.65 | 0.63 | 0.46 | 0.68 | 0.71 | 0.68 | 0.54 |
| EfficientNetV2L | 0.59 | 0.26 | 0.58 | 0.55 | 0.73 | 0.5 | 0.75 | 0.81 | 0.85 | 0.66 |
| EfficientNetV2B3 | 0.62 | 0.32 | 0.66 | 0.76 | 0.72 | 0.58 | 0.79 | 0.87 | 0.89 | 0.8 |
| EfficientNetV2B2 | 0.66 | 0.32 | 0.56 | 0.65 | 0.57 | 0.48 | 0.73 | 0.71 | 0.74 | 0.6 |
| EfficientNetV2B1 | 0.68 | 0.25 | 0.57 | 0.7 | 0.48 | 0.45 | 0.72 | 0.76 | 0.77 | 0.62 |
| EfficientNetV2B0 | 0.57 | 0.26 | 0.59 | 0.62 | 0.59 | 0.53 | 0.7 | 0.69 | 0.68 | 0.72 |
| EfficientNetB7 | 0.79 | 0.34 | 0.55 | 0.68 | 0.72 | 0.56 | 0.85 | 0.83 | 0.78 | 0.68 |
| EfficientNetB6 | 0.7 | 0.33 | 0.67 | 0.62 | 0.65 | 0.42 | 0.87 | 0.82 | 0.83 | 0.68 |
| EfficientNetB5 | 0.6 | 0.34 | 0.67 | 0.74 | 0.68 | 0.46 | 0.82 | 0.85 | 0.79 | 0.7 |
| EfficientNetB4 | 0.64 | 0.34 | 0.67 | 0.73 | 0.69 | 0.48 | 0.78 | 0.8 | 0.8 | 0.7 |
| EfficientNetB3 | 0.72 | 0.29 | 0.65 | 0.78 | 0.84 | 0.59 | 0.87 | 0.9 | 0.9 | 0.84 |
| EfficientNetB2 | 0.67 | 0.39 | 0.57 | 0.62 | 0.63 | 0.61 | 0.69 | 0.77 | 0.74 | 0.66 |
| EfficientNetB1 | 0.63 | 0.31 | 0.6 | 0.7 | 0.59 | 0.61 | 0.75 | 0.78 | 0.77 | 0.66 |
| EfficientNetB0 | 0.62 | 0.23 | 0.61 | 0.68 | 0.58 | 0.48 | 0.74 | 0.76 | 0.75 | 0.63 |
| DenseNet201 | 0.55 | 0.24 | 0.52 | 0.69 | 0.58 | 0.46 | 0.72 | 0.7 | 0.75 | 0.61 |
| DenseNet169 | 0.55 | 0.24 | 0.57 | 0.65 | 0.49 | 0.54 | 0.75 | 0.7 | 0.7 | 0.62 |
| DenseNet121 | 0.69 | 0.32 | 0.69 | 0.7 | 0.4 | 0.56 | 0.73 | 0.76 | 0.79 | 0.68 |
Table 10. Matthews Coefficient Value.

| Pre-Trained Model | DT | QDA | KNN | AB | GNB | LR | RF | ET | HGB | MLP |
|---|---|---|---|---|---|---|---|---|---|---|
| Xception | 0.53 | 0.1 | 0.4 | 0.53 | 0.49 | 0.43 | 0.66 | 0.68 | 0.78 | 0.51 |
| VGG19 | 0.49 | 0.09 | 0.48 | 0.58 | 0.39 | 0.42 | 0.71 | 0.73 | 0.73 | 0.66 |
| VGG16 | 0.38 | 0.03 | 0.43 | 0.52 | 0.53 | 0.28 | 0.71 | 0.7 | 0.64 | 0.57 |
| ResNet152V2 | 0.27 | 0.11 | 0.25 | 0.54 | 0.49 | 0.26 | 0.61 | 0.64 | 0.58 | 0.46 |
| ResNet152 | 0.57 | 0.23 | 0.52 | 0.56 | 0.39 | 0.44 | 0.74 | 0.69 | 0.68 | 0.59 |
| ResNet101V2 | 0.43 | 0.06 | 0.3 | 0.46 | 0.33 | 0.23 | 0.6 | 0.64 | 0.67 | 0.44 |
| ResNet101 | 0.6 | 0.23 | 0.57 | 0.64 | 0.36 | 0.46 | 0.73 | 0.73 | 0.75 | 0.55 |
| ResNet50V2 | 0.31 | 0.11 | 0.3 | 0.48 | 0.41 | 0.33 | 0.59 | 0.54 | 0.66 | 0.52 |
| ResNet50 | 0.54 | 0.26 | 0.51 | 0.55 | 0.43 | 0.51 | 0.75 | 0.7 | 0.7 | 0.62 |
| NASNetMobile | 0.38 | 0.24 | 0.4 | 0.5 | 0.21 | 0.32 | 0.62 | 0.63 | 0.64 | 0.46 |
| NASNetLarge | 0.53 | 0.12 | 0.39 | 0.49 | 0.24 | 0.57 | 0.7 | 0.65 | 0.68 | 0.55 |
| MobileNet | 0.4 | 0.02 | 0.28 | 0.39 | 0.4 | 0.5 | 0.66 | 0.69 | 0.68 | 0.52 |
| InceptionV3 | 0.46 | 0.1 | 0.41 | 0.53 | 0.63 | 0.36 | 0.66 | 0.73 | 0.63 | 0.61 |
| InceptionResNetV2 | 0.44 | 0.25 | 0.5 | 0.49 | 0.67 | 0.46 | 0.66 | 0.71 | 0.68 | 0.56 |
| EfficientNetV2S | 0.57 | 0.06 | 0.61 | 0.52 | 0.58 | 0.5 | 0.86 | 0.83 | 0.83 | 0.58 |
| EfficientNetV2M | 0.47 | 0.12 | 0.14 | 0.54 | 0.47 | 0.34 | 0.62 | 0.66 | 0.66 | 0.44 |
| EfficientNetV2L | 0.46 | 0.02 | 0.45 | 0.38 | 0.65 | 0.45 | 0.71 | 0.8 | 0.8 | 0.59 |
| EfficientNetV2B3 | 0.49 | 0.16 | 0.55 | 0.7 | 0.65 | 0.61 | 0.76 | 0.85 | 0.83 | 0.76 |
| EfficientNetV2B2 | 0.53 | 0.19 | 0.38 | 0.54 | 0.43 | 0.3 | 0.67 | 0.64 | 0.69 | 0.47 |
| EfficientNetV2B1 | 0.55 | 0.05 | 0.38 | 0.6 | 0.36 | 0.42 | 0.64 | 0.68 | 0.71 | 0.52 |
| EfficientNetV2B0 | 0.43 | 0.06 | 0.41 | 0.49 | 0.47 | 0.36 | 0.62 | 0.62 | 0.6 | 0.63 |
| EfficientNetB7 | 0.73 | 0.19 | 0.43 | 0.6 | 0.66 | 0.53 | 0.78 | 0.78 | 0.73 | 0.59 |
| EfficientNetB6 | 0.58 | 0.13 | 0.56 | 0.47 | 0.54 | 0.36 | 0.85 | 0.78 | 0.79 | 0.59 |
| EfficientNetB5 | 0.43 | 0.23 | 0.52 | 0.67 | 0.56 | 0.38 | 0.76 | 0.81 | 0.73 | 0.6 |
| EfficientNetB4 | 0.53 | 0.19 | 0.61 | 0.66 | 0.57 | 0.43 | 0.71 | 0.74 | 0.76 | 0.6 |
| EfficientNetB3 | 0.63 | 0.13 | 0.52 | 0.71 | 0.8 | 0.54 | 0.83 | 0.86 | 0.86 | 0.78 |
| EfficientNetB2 | 0.59 | 0.28 | 0.38 | 0.48 | 0.5 | 0.47 | 0.63 | 0.71 | 0.65 | 0.54 |
| EfficientNetB1 | 0.51 | 0.09 | 0.47 | 0.6 | 0.48 | 0.5 | 0.68 | 0.73 | 0.7 | 0.54 |
| EfficientNetB0 | 0.47 | 0 | 0.52 | 0.59 | 0.45 | 0.31 | 0.68 | 0.73 | 0.71 | 0.54 |
| DenseNet201 | 0.36 | −0.03 | 0.3 | 0.6 | 0.45 | 0.39 | 0.66 | 0.63 | 0.7 | 0.44 |
| DenseNet169 | 0.37 | 0.06 | 0.43 | 0.54 | 0.35 | 0.44 | 0.69 | 0.64 | 0.63 | 0.49 |
| DenseNet121 | 0.57 | 0.15 | 0.54 | 0.61 | 0.22 | 0.42 | 0.66 | 0.7 | 0.73 | 0.57 |
Table 11. Kappa Statistics values.

Pre-Trained Model | DT | QDA | KNN | AB | GNB | LR | RF | ET | HGB | MLP
Xception | 0.53 | 0.08 | 0.39 | 0.5 | 0.48 | 0.4 | 0.65 | 0.68 | 0.78 | 0.51
VGG19 | 0.48 | 0.08 | 0.47 | 0.57 | 0.38 | 0.39 | 0.71 | 0.73 | 0.73 | 0.66
VGG16 | 0.38 | 0.03 | 0.42 | 0.49 | 0.52 | 0.25 | 0.71 | 0.7 | 0.64 | 0.57
ResNet152V2 | 0.27 | 0.09 | 0.24 | 0.54 | 0.46 | 0.23 | 0.6 | 0.64 | 0.58 | 0.46
ResNet152 | 0.57 | 0.18 | 0.5 | 0.55 | 0.35 | 0.43 | 0.74 | 0.69 | 0.68 | 0.58
ResNet101V2 | 0.43 | 0.05 | 0.3 | 0.46 | 0.3 | 0.22 | 0.6 | 0.64 | 0.67 | 0.44
ResNet101 | 0.6 | 0.15 | 0.56 | 0.63 | 0.3 | 0.46 | 0.73 | 0.73 | 0.75 | 0.54
ResNet50V2 | 0.31 | 0.09 | 0.3 | 0.47 | 0.39 | 0.32 | 0.58 | 0.54 | 0.65 | 0.52
ResNet50 | 0.54 | 0.21 | 0.5 | 0.55 | 0.37 | 0.49 | 0.75 | 0.7 | 0.7 | 0.62
NASNetMobile | 0.38 | 0.19 | 0.39 | 0.49 | 0.2 | 0.32 | 0.62 | 0.63 | 0.64 | 0.46
NASNetLarge | 0.53 | 0.08 | 0.39 | 0.48 | 0.17 | 0.55 | 0.69 | 0.64 | 0.67 | 0.54
MobileNet | 0.39 | 0.01 | 0.28 | 0.38 | 0.34 | 0.49 | 0.65 | 0.69 | 0.68 | 0.51
InceptionV3 | 0.45 | 0.08 | 0.41 | 0.5 | 0.62 | 0.33 | 0.66 | 0.72 | 0.63 | 0.61
InceptionResNetV2 | 0.44 | 0.2 | 0.49 | 0.47 | 0.66 | 0.44 | 0.66 | 0.71 | 0.67 | 0.56
EfficientNetV2S | 0.57 | 0.04 | 0.61 | 0.51 | 0.56 | 0.5 | 0.86 | 0.83 | 0.83 | 0.58
EfficientNetV2M | 0.47 | 0.1 | 0.14 | 0.54 | 0.47 | 0.28 | 0.62 | 0.66 | 0.65 | 0.43
EfficientNetV2L | 0.46 | 0.01 | 0.45 | 0.36 | 0.65 | 0.38 | 0.71 | 0.79 | 0.8 | 0.59
EfficientNetV2B3 | 0.49 | 0.13 | 0.55 | 0.7 | 0.64 | 0.58 | 0.76 | 0.84 | 0.83 | 0.76
EfficientNetV2B2 | 0.53 | 0.15 | 0.38 | 0.53 | 0.41 | 0.29 | 0.67 | 0.64 | 0.69 | 0.46
EfficientNetV2B1 | 0.55 | 0.03 | 0.38 | 0.59 | 0.33 | 0.36 | 0.64 | 0.68 | 0.71 | 0.49
EfficientNetV2B0 | 0.43 | 0.05 | 0.41 | 0.49 | 0.42 | 0.33 | 0.62 | 0.61 | 0.6 | 0.63
EfficientNetB7 | 0.73 | 0.14 | 0.42 | 0.59 | 0.65 | 0.51 | 0.78 | 0.77 | 0.73 | 0.58
EfficientNetB6 | 0.57 | 0.1 | 0.55 | 0.45 | 0.51 | 0.3 | 0.85 | 0.78 | 0.78 | 0.59
EfficientNetB5 | 0.42 | 0.19 | 0.52 | 0.67 | 0.56 | 0.36 | 0.76 | 0.81 | 0.72 | 0.59
EfficientNetB4 | 0.52 | 0.14 | 0.61 | 0.66 | 0.56 | 0.4 | 0.71 | 0.74 | 0.76 | 0.6
EfficientNetB3 | 0.62 | 0.1 | 0.52 | 0.7 | 0.8 | 0.48 | 0.83 | 0.86 | 0.86 | 0.78
EfficientNetB2 | 0.59 | 0.2 | 0.38 | 0.47 | 0.49 | 0.46 | 0.62 | 0.71 | 0.65 | 0.54
EfficientNetB1 | 0.51 | 0.07 | 0.47 | 0.58 | 0.45 | 0.5 | 0.68 | 0.73 | 0.69 | 0.54
EfficientNetB0 | 0.46 | 0 | 0.51 | 0.58 | 0.43 | 0.29 | 0.68 | 0.73 | 0.71 | 0.53
DenseNet201 | 0.36 | -0.02 | 0.29 | 0.59 | 0.41 | 0.36 | 0.66 | 0.63 | 0.7 | 0.44
DenseNet169 | 0.36 | 0.05 | 0.43 | 0.54 | 0.32 | 0.44 | 0.69 | 0.64 | 0.63 | 0.48
DenseNet121 | 0.57 | 0.12 | 0.54 | 0.61 | 0.18 | 0.39 | 0.66 | 0.7 | 0.73 | 0.57
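Cohen's kappa likewise corrects raw accuracy for chance agreement, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance; the near-zero QDA entries therefore indicate predictions barely better than guessing. A minimal scikit-learn sketch on the same illustrative labels:

```python
# Minimal sketch: Cohen's kappa for the same illustrative labels.
# kappa = (p_o - p_e) / (1 - p_e), with p_o the observed agreement and
# p_e the agreement expected by chance.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 0]

print(f"Kappa: {cohen_kappa_score(y_true, y_pred):.2f}")
```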
Table 12. Accuracy on Segmented Data Set.

Pre-Trained Model | DT | QDA | KNN | AB | GNB | LR | RF | ET | HGB | MLP
Xception | 0.73 | 0.37 | 0.68 | 0.74 | 0.81 | 0.65 | 0.85 | 0.86 | 0.89 | 0.82
VGG19 | 0.66 | 0.48 | 0.76 | 0.76 | 0.68 | 0.69 | 0.76 | 0.77 | 0.81 | 0.72
VGG16 | 0.63 | 0.4 | 0.72 | 0.68 | 0.68 | 0.62 | 0.76 | 0.79 | 0.85 | 0.76
ResNet152V2 | 0.62 | 0.37 | 0.52 | 0.64 | 0.52 | 0.59 | 0.74 | 0.71 | 0.78 | 0.61
ResNet152 | 0.69 | 0.4 | 0.56 | 0.76 | 0.67 | 0.46 | 0.77 | 0.79 | 0.84 | 0.76
ResNet101V2 | 0.68 | 0.37 | 0.56 | 0.66 | 0.51 | 0.48 | 0.69 | 0.72 | 0.7 | 0.67
ResNet101 | 0.66 | 0.43 | 0.68 | 0.71 | 0.66 | 0.59 | 0.84 | 0.82 | 0.87 | 0.79
ResNet50V2 | 0.63 | 0.35 | 0.52 | 0.64 | 0.64 | 0.51 | 0.73 | 0.73 | 0.76 | 0.65
ResNet50 | 0.69 | 0.36 | 0.69 | 0.72 | 0.79 | 0.7 | 0.8 | 0.81 | 0.81 | 0.7
NASNetMobile | 0.64 | 0.39 | 0.59 | 0.64 | 0.64 | 0.67 | 0.77 | 0.8 | 0.73 | 0.71
NASNetLarge | 0.65 | 0.45 | 0.52 | 0.68 | 0.38 | 0.46 | 0.81 | 0.83 | 0.79 | 0.69
MobileNet | 0.55 | 0.43 | 0.52 | 0.74 | 0.57 | 0.52 | 0.77 | 0.76 | 0.73 | 0.67
InceptionV3 | 0.78 | 0.45 | 0.7 | 0.74 | 0.82 | 0.69 | 0.86 | 0.87 | 0.85 | 0.82
InceptionResNetV2 | 0.77 | 0.38 | 0.66 | 0.73 | 0.79 | 0.68 | 0.83 | 0.82 | 0.87 | 0.8
EfficientNetV2S | 0.72 | 0.38 | 0.77 | 0.74 | 0.72 | 0.71 | 0.84 | 0.85 | 0.89 | 0.82
EfficientNetV2M | 0.66 | 0.48 | 0.6 | 0.76 | 0.78 | 0.56 | 0.8 | 0.85 | 0.85 | 0.78
EfficientNetV2L | 0.74 | 0.38 | 0.64 | 0.77 | 0.67 | 0.62 | 0.8 | 0.84 | 0.87 | 0.71
EfficientNetV2B3 | 0.71 | 0.44 | 0.71 | 0.8 | 0.87 | 0.77 | 0.9 | 0.93 | 0.94 | 0.84
EfficientNetV2B2 | 0.71 | 0.4 | 0.64 | 0.77 | 0.63 | 0.59 | 0.8 | 0.83 | 0.85 | 0.77
EfficientNetV2B1 | 0.72 | 0.32 | 0.68 | 0.68 | 0.59 | 0.68 | 0.82 | 0.81 | 0.77 | 0.77
EfficientNetV2B0 | 0.67 | 0.46 | 0.66 | 0.74 | 0.65 | 0.53 | 0.79 | 0.79 | 0.79 | 0.7
EfficientNetB7 | 0.71 | 0.47 | 0.68 | 0.78 | 0.81 | 0.77 | 0.86 | 0.87 | 0.89 | 0.8
EfficientNetB6 | 0.82 | 0.47 | 0.69 | 0.76 | 0.74 | 0.71 | 0.86 | 0.87 | 0.9 | 0.81
EfficientNetB5 | 0.71 | 0.41 | 0.72 | 0.77 | 0.74 | 0.7 | 0.76 | 0.8 | 0.81 | 0.77
EfficientNetB4 | 0.79 | 0.45 | 0.74 | 0.79 | 0.72 | 0.67 | 0.87 | 0.9 | 0.83 | 0.74
EfficientNetB3 | 0.73 | 0.43 | 0.8 | 0.8 | 0.9 | 0.76 | 0.91 | 0.88 | 0.91 | 0.85
EfficientNetB2 | 0.69 | 0.49 | 0.68 | 0.72 | 0.73 | 0.71 | 0.84 | 0.82 | 0.85 | 0.76
EfficientNetB1 | 0.59 | 0.41 | 0.7 | 0.74 | 0.72 | 0.68 | 0.83 | 0.84 | 0.84 | 0.73
EfficientNetB0 | 0.79 | 0.37 | 0.67 | 0.71 | 0.74 | 0.61 | 0.82 | 0.84 | 0.77 | 0.74
DenseNet201 | 0.68 | 0.46 | 0.66 | 0.78 | 0.72 | 0.57 | 0.83 | 0.83 | 0.82 | 0.73
DenseNet169 | 0.71 | 0.43 | 0.54 | 0.74 | 0.68 | 0.67 | 0.81 | 0.84 | 0.82 | 0.7
DenseNet121 | 0.71 | 0.45 | 0.67 | 0.74 | 0.71 | 0.67 | 0.76 | 0.78 | 0.85 | 0.77
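Tables 12–17 repeat the evaluation on the segmented data set, where isolating the leaf from the background lifts the best accuracy to 0.94 (EfficientNetV2B3 with HGB). The paper's own segmentation code is not reproduced here; the sketch below illustrates one common colour-threshold approach in OpenCV, and its HSV bounds and file names are assumptions for demonstration, not the authors' values:

```python
# Illustrative sketch only: HSV colour-threshold segmentation of a leaf
# image with OpenCV. The file name and threshold values are assumptions
# for demonstration; they are not taken from the paper.
import cv2
import numpy as np

img = cv2.imread("rice_leaf.jpg")                 # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep the green-to-yellow hue range typical of rice leaves and lesions.
lower = np.array([20, 40, 40], dtype=np.uint8)
upper = np.array([90, 255, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)

segmented = cv2.bitwise_and(img, img, mask=mask)  # background zeroed out
cv2.imwrite("rice_leaf_segmented.jpg", segmented)
```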
Table 13. Precision value on Segmented Data Set.

Pre-Trained Model | DT | QDA | KNN | AB | GNB | LR | RF | ET | HGB | MLP
Xception | 0.66 | 0.23 | 0.58 | 0.72 | 0.75 | 0.45 | 0.84 | 0.85 | 0.87 | 0.78
VGG19 | 0.63 | 0.29 | 0.71 | 0.72 | 0.65 | 0.63 | 0.72 | 0.73 | 0.77 | 0.7
VGG16 | 0.61 | 0.23 | 0.68 | 0.65 | 0.63 | 0.46 | 0.72 | 0.75 | 0.82 | 0.7
ResNet152V2 | 0.55 | 0.2 | 0.4 | 0.61 | 0.59 | 0.56 | 0.68 | 0.61 | 0.76 | 0.54
ResNet152 | 0.68 | 0.26 | 0.5 | 0.67 | 0.54 | 0.42 | 0.68 | 0.69 | 0.78 | 0.69
ResNet101V2 | 0.66 | 0.23 | 0.53 | 0.67 | 0.62 | 0.45 | 0.6 | 0.63 | 0.64 | 0.63
ResNet101 | 0.59 | 0.24 | 0.62 | 0.67 | 0.65 | 0.52 | 0.82 | 0.78 | 0.84 | 0.77
ResNet50V2 | 0.6 | 0.22 | 0.46 | 0.61 | 0.59 | 0.35 | 0.66 | 0.65 | 0.71 | 0.59
ResNet50 | 0.65 | 0.27 | 0.63 | 0.67 | 0.78 | 0.7 | 0.76 | 0.78 | 0.76 | 0.64
NASNetMobile | 0.57 | 0.25 | 0.54 | 0.67 | 0.59 | 0.6 | 0.7 | 0.74 | 0.7 | 0.66
NASNetLarge | 0.6 | 0.27 | 0.56 | 0.63 | 0.58 | 0.37 | 0.77 | 0.81 | 0.73 | 0.73
MobileNet | 0.51 | 0.15 | 0.39 | 0.71 | 0.54 | 0.48 | 0.7 | 0.7 | 0.65 | 0.56
InceptionV3 | 0.72 | 0.27 | 0.65 | 0.73 | 0.76 | 0.47 | 0.81 | 0.82 | 0.81 | 0.76
InceptionResNetV2 | 0.7 | 0.24 | 0.62 | 0.71 | 0.76 | 0.63 | 0.78 | 0.79 | 0.85 | 0.74
EfficientNetV2S | 0.7 | 0.23 | 0.73 | 0.69 | 0.69 | 0.68 | 0.81 | 0.81 | 0.86 | 0.78
EfficientNetV2M | 0.62 | 0.29 | 0.54 | 0.71 | 0.74 | 0.51 | 0.76 | 0.8 | 0.8 | 0.74
EfficientNetV2L | 0.69 | 0.25 | 0.59 | 0.72 | 0.63 | 0.66 | 0.74 | 0.78 | 0.81 | 0.66
EfficientNetV2B3 | 0.68 | 0.29 | 0.65 | 0.8 | 0.87 | 0.84 | 0.88 | 0.93 | 0.92 | 0.81
EfficientNetV2B2 | 0.68 | 0.24 | 0.61 | 0.73 | 0.65 | 0.54 | 0.76 | 0.81 | 0.81 | 0.73
EfficientNetV2B1 | 0.67 | 0.27 | 0.65 | 0.68 | 0.65 | 0.67 | 0.76 | 0.75 | 0.69 | 0.73
EfficientNetV2B0 | 0.63 | 0.26 | 0.6 | 0.71 | 0.62 | 0.66 | 0.71 | 0.71 | 0.72 | 0.64
EfficientNetB7 | 0.66 | 0.3 | 0.63 | 0.75 | 0.74 | 0.67 | 0.84 | 0.84 | 0.89 | 0.72
EfficientNetB6 | 0.78 | 0.27 | 0.64 | 0.69 | 0.67 | 0.48 | 0.83 | 0.85 | 0.88 | 0.77
EfficientNetB5 | 0.66 | 0.26 | 0.66 | 0.72 | 0.71 | 0.73 | 0.69 | 0.77 | 0.76 | 0.7
EfficientNetB4 | 0.75 | 0.28 | 0.7 | 0.76 | 0.65 | 0.45 | 0.84 | 0.88 | 0.79 | 0.69
EfficientNetB3 | 0.7 | 0.26 | 0.76 | 0.77 | 0.93 | 0.85 | 0.9 | 0.88 | 0.92 | 0.82
EfficientNetB2 | 0.66 | 0.29 | 0.65 | 0.68 | 0.7 | 0.7 | 0.79 | 0.78 | 0.83 | 0.72
EfficientNetB1 | 0.55 | 0.24 | 0.66 | 0.69 | 0.7 | 0.65 | 0.77 | 0.8 | 0.8 | 0.68
EfficientNetB0 | 0.73 | 0.22 | 0.62 | 0.64 | 0.69 | 0.49 | 0.77 | 0.79 | 0.7 | 0.68
DenseNet201 | 0.62 | 0.27 | 0.63 | 0.73 | 0.68 | 0.5 | 0.79 | 0.78 | 0.76 | 0.7
DenseNet169 | 0.66 | 0.27 | 0.48 | 0.7 | 0.62 | 0.64 | 0.77 | 0.8 | 0.81 | 0.63
DenseNet121 | 0.7 | 0.24 | 0.59 | 0.69 | 0.51 | 0.63 | 0.69 | 0.71 | 0.81 | 0.71
Table 14. Recall Rate on Segmented Data Set.

Pre-Trained Model | DT | QDA | KNN | AB | GNB | LR | RF | ET | HGB | MLP
Xception | 0.67 | 0.38 | 0.58 | 0.73 | 0.73 | 0.51 | 0.79 | 0.79 | 0.87 | 0.78
VGG19 | 0.65 | 0.49 | 0.71 | 0.75 | 0.65 | 0.6 | 0.74 | 0.75 | 0.81 | 0.73
VGG16 | 0.61 | 0.38 | 0.63 | 0.65 | 0.64 | 0.48 | 0.74 | 0.76 | 0.84 | 0.68
ResNet152V2 | 0.55 | 0.33 | 0.42 | 0.62 | 0.59 | 0.48 | 0.66 | 0.61 | 0.71 | 0.54
ResNet152 | 0.7 | 0.44 | 0.5 | 0.67 | 0.55 | 0.4 | 0.67 | 0.68 | 0.78 | 0.65
ResNet101V2 | 0.67 | 0.38 | 0.52 | 0.68 | 0.57 | 0.43 | 0.6 | 0.63 | 0.64 | 0.63
ResNet101 | 0.59 | 0.41 | 0.61 | 0.67 | 0.61 | 0.5 | 0.8 | 0.78 | 0.84 | 0.76
ResNet50V2 | 0.58 | 0.38 | 0.44 | 0.6 | 0.59 | 0.4 | 0.66 | 0.65 | 0.71 | 0.58
ResNet50 | 0.64 | 0.46 | 0.63 | 0.69 | 0.71 | 0.58 | 0.76 | 0.78 | 0.76 | 0.64
NASNetMobile | 0.57 | 0.43 | 0.53 | 0.64 | 0.58 | 0.59 | 0.7 | 0.74 | 0.71 | 0.67
NASNetLarge | 0.61 | 0.4 | 0.5 | 0.64 | 0.47 | 0.35 | 0.78 | 0.78 | 0.73 | 0.6
MobileNet | 0.5 | 0.33 | 0.42 | 0.73 | 0.49 | 0.44 | 0.68 | 0.68 | 0.65 | 0.56
InceptionV3 | 0.73 | 0.45 | 0.64 | 0.74 | 0.76 | 0.55 | 0.79 | 0.82 | 0.84 | 0.76
InceptionResNetV2 | 0.7 | 0.41 | 0.62 | 0.73 | 0.72 | 0.55 | 0.78 | 0.75 | 0.83 | 0.73
EfficientNetV2S | 0.73 | 0.39 | 0.73 | 0.7 | 0.7 | 0.62 | 0.82 | 0.81 | 0.86 | 0.8
EfficientNetV2M | 0.62 | 0.51 | 0.54 | 0.7 | 0.75 | 0.48 | 0.77 | 0.81 | 0.81 | 0.74
EfficientNetV2L | 0.69 | 0.43 | 0.59 | 0.73 | 0.63 | 0.54 | 0.74 | 0.78 | 0.79 | 0.66
EfficientNetV2B3 | 0.7 | 0.49 | 0.66 | 0.83 | 0.79 | 0.62 | 0.87 | 0.89 | 0.92 | 0.77
EfficientNetV2B2 | 0.7 | 0.41 | 0.59 | 0.74 | 0.65 | 0.53 | 0.76 | 0.78 | 0.81 | 0.7
EfficientNetV2B1 | 0.68 | 0.44 | 0.65 | 0.66 | 0.6 | 0.67 | 0.75 | 0.75 | 0.69 | 0.74
EfficientNetV2B0 | 0.63 | 0.43 | 0.59 | 0.73 | 0.61 | 0.51 | 0.71 | 0.69 | 0.72 | 0.65
EfficientNetB7 | 0.66 | 0.51 | 0.63 | 0.78 | 0.72 | 0.65 | 0.78 | 0.82 | 0.81 | 0.73
EfficientNetB6 | 0.79 | 0.46 | 0.64 | 0.7 | 0.66 | 0.56 | 0.83 | 0.86 | 0.9 | 0.75
EfficientNetB5 | 0.66 | 0.43 | 0.66 | 0.73 | 0.71 | 0.58 | 0.69 | 0.74 | 0.77 | 0.7
EfficientNetB4 | 0.76 | 0.48 | 0.69 | 0.79 | 0.64 | 0.53 | 0.84 | 0.88 | 0.8 | 0.69
EfficientNetB3 | 0.72 | 0.44 | 0.76 | 0.78 | 0.83 | 0.61 | 0.89 | 0.87 | 0.89 | 0.79
EfficientNetB2 | 0.67 | 0.49 | 0.66 | 0.68 | 0.72 | 0.66 | 0.8 | 0.79 | 0.8 | 0.71
EfficientNetB1 | 0.53 | 0.4 | 0.64 | 0.69 | 0.72 | 0.61 | 0.77 | 0.78 | 0.82 | 0.68
EfficientNetB0 | 0.74 | 0.37 | 0.61 | 0.63 | 0.7 | 0.49 | 0.77 | 0.8 | 0.7 | 0.67
DenseNet201 | 0.62 | 0.46 | 0.61 | 0.74 | 0.7 | 0.48 | 0.8 | 0.79 | 0.76 | 0.72
DenseNet169 | 0.66 | 0.44 | 0.48 | 0.71 | 0.61 | 0.62 | 0.74 | 0.76 | 0.75 | 0.63
DenseNet121 | 0.66 | 0.4 | 0.59 | 0.7 | 0.57 | 0.6 | 0.69 | 0.71 | 0.81 | 0.72
Table 15. F1-Score on Segmented Data Set.

Pre-Trained Model | DT | QDA | KNN | AB | GNB | LR | RF | ET | HGB | MLP
Xception | 0.66 | 0.28 | 0.58 | 0.7 | 0.73 | 0.47 | 0.8 | 0.81 | 0.87 | 0.77
VGG19 | 0.63 | 0.37 | 0.71 | 0.72 | 0.64 | 0.6 | 0.72 | 0.73 | 0.78 | 0.7
VGG16 | 0.59 | 0.28 | 0.63 | 0.64 | 0.63 | 0.44 | 0.72 | 0.74 | 0.82 | 0.69
ResNet152V2 | 0.54 | 0.24 | 0.4 | 0.6 | 0.52 | 0.46 | 0.67 | 0.6 | 0.73 | 0.54
ResNet152 | 0.67 | 0.31 | 0.5 | 0.67 | 0.53 | 0.38 | 0.68 | 0.68 | 0.78 | 0.66
ResNet101V2 | 0.65 | 0.28 | 0.52 | 0.64 | 0.5 | 0.42 | 0.6 | 0.63 | 0.64 | 0.62
ResNet101 | 0.59 | 0.3 | 0.61 | 0.66 | 0.6 | 0.5 | 0.81 | 0.78 | 0.84 | 0.76
ResNet50V2 | 0.58 | 0.27 | 0.44 | 0.6 | 0.59 | 0.36 | 0.66 | 0.65 | 0.71 | 0.58
ResNet50 | 0.64 | 0.3 | 0.63 | 0.67 | 0.72 | 0.57 | 0.76 | 0.78 | 0.76 | 0.64
NASNetMobile | 0.57 | 0.31 | 0.53 | 0.62 | 0.57 | 0.59 | 0.7 | 0.74 | 0.69 | 0.66
NASNetLarge | 0.6 | 0.3 | 0.52 | 0.63 | 0.39 | 0.26 | 0.77 | 0.79 | 0.73 | 0.61
MobileNet | 0.49 | 0.2 | 0.4 | 0.71 | 0.47 | 0.42 | 0.68 | 0.69 | 0.65 | 0.55
InceptionV3 | 0.72 | 0.34 | 0.64 | 0.71 | 0.76 | 0.5 | 0.8 | 0.82 | 0.82 | 0.76
InceptionResNetV2 | 0.7 | 0.29 | 0.61 | 0.7 | 0.73 | 0.52 | 0.78 | 0.76 | 0.84 | 0.73
EfficientNetV2S | 0.7 | 0.29 | 0.73 | 0.69 | 0.69 | 0.62 | 0.81 | 0.81 | 0.86 | 0.79
EfficientNetV2M | 0.61 | 0.37 | 0.54 | 0.7 | 0.75 | 0.48 | 0.76 | 0.81 | 0.81 | 0.74
EfficientNetV2L | 0.68 | 0.3 | 0.59 | 0.72 | 0.62 | 0.54 | 0.74 | 0.78 | 0.8 | 0.66
EfficientNetV2B3 | 0.68 | 0.35 | 0.66 | 0.78 | 0.81 | 0.59 | 0.87 | 0.9 | 0.92 | 0.79
EfficientNetV2B2 | 0.68 | 0.3 | 0.6 | 0.73 | 0.6 | 0.53 | 0.76 | 0.79 | 0.81 | 0.71
EfficientNetV2B1 | 0.67 | 0.27 | 0.65 | 0.65 | 0.57 | 0.64 | 0.75 | 0.75 | 0.69 | 0.73
EfficientNetV2B0 | 0.63 | 0.33 | 0.59 | 0.71 | 0.6 | 0.42 | 0.71 | 0.7 | 0.72 | 0.64
EfficientNetB7 | 0.66 | 0.37 | 0.63 | 0.75 | 0.73 | 0.65 | 0.8 | 0.83 | 0.82 | 0.73
EfficientNetB6 | 0.78 | 0.34 | 0.64 | 0.69 | 0.66 | 0.52 | 0.83 | 0.85 | 0.89 | 0.76
EfficientNetB5 | 0.66 | 0.32 | 0.66 | 0.72 | 0.7 | 0.58 | 0.69 | 0.75 | 0.76 | 0.7
EfficientNetB4 | 0.75 | 0.35 | 0.69 | 0.76 | 0.64 | 0.48 | 0.84 | 0.88 | 0.79 | 0.69
EfficientNetB3 | 0.7 | 0.32 | 0.76 | 0.76 | 0.85 | 0.58 | 0.9 | 0.87 | 0.9 | 0.8
EfficientNetB2 | 0.65 | 0.36 | 0.65 | 0.67 | 0.7 | 0.67 | 0.8 | 0.78 | 0.81 | 0.71
EfficientNetB1 | 0.53 | 0.3 | 0.65 | 0.68 | 0.69 | 0.61 | 0.77 | 0.79 | 0.81 | 0.68
EfficientNetB0 | 0.73 | 0.27 | 0.61 | 0.63 | 0.69 | 0.46 | 0.77 | 0.8 | 0.7 | 0.67
DenseNet201 | 0.62 | 0.34 | 0.6 | 0.73 | 0.69 | 0.48 | 0.79 | 0.79 | 0.76 | 0.7
DenseNet169 | 0.66 | 0.32 | 0.48 | 0.7 | 0.61 | 0.62 | 0.75 | 0.78 | 0.76 | 0.62
DenseNet121 | 0.67 | 0.3 | 0.59 | 0.69 | 0.53 | 0.6 | 0.69 | 0.71 | 0.81 | 0.71
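Precision, Recall Rate, and F1-score in Tables 13–15 are per-class quantities reduced to a single number per model/classifier pair. The tables do not state the averaging mode, so the macro average used in the sketch below is an assumption:

```python
# Minimal sketch: precision, recall and F1 reduced to one number per
# model/classifier pair. The macro average weights the three disease
# classes equally; this averaging mode is an assumption, as the tables
# do not state it.
from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 0]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```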
Table 16. Matthews Coefficient on Segmented Data Set.

Pre-Trained Model | DT | QDA | KNN | AB | GNB | LR | RF | ET | HGB | MLP
Xception | 0.57 | 0.08 | 0.48 | 0.62 | 0.69 | 0.43 | 0.76 | 0.78 | 0.83 | 0.72
VGG19 | 0.48 | 0.24 | 0.62 | 0.63 | 0.52 | 0.5 | 0.63 | 0.64 | 0.71 | 0.58
VGG16 | 0.44 | 0.1 | 0.54 | 0.51 | 0.49 | 0.4 | 0.61 | 0.67 | 0.77 | 0.6
ResNet152V2 | 0.39 | -0.02 | 0.23 | 0.45 | 0.34 | 0.3 | 0.58 | 0.53 | 0.64 | 0.37
ResNet152 | 0.54 | 0.17 | 0.29 | 0.6 | 0.47 | 0.1 | 0.62 | 0.65 | 0.74 | 0.6
ResNet101V2 | 0.52 | 0.08 | 0.3 | 0.52 | 0.36 | 0.15 | 0.5 | 0.55 | 0.52 | 0.49
ResNet101 | 0.45 | 0.15 | 0.48 | 0.56 | 0.49 | 0.32 | 0.74 | 0.71 | 0.79 | 0.66
ResNet50V2 | 0.44 | 0.06 | 0.24 | 0.45 | 0.42 | 0.17 | 0.57 | 0.57 | 0.61 | 0.44
ResNet50 | 0.53 | 0.18 | 0.5 | 0.57 | 0.66 | 0.51 | 0.67 | 0.69 | 0.69 | 0.52
NASNetMobile | 0.43 | 0.15 | 0.34 | 0.5 | 0.44 | 0.46 | 0.63 | 0.68 | 0.59 | 0.56
NASNetLarge | 0.45 | 0.12 | 0.22 | 0.5 | 0.25 | 0.09 | 0.7 | 0.72 | 0.66 | 0.5
MobileNet | 0.31 | -0.01 | 0.22 | 0.61 | 0.33 | 0.21 | 0.62 | 0.6 | 0.58 | 0.46
InceptionV3 | 0.65 | 0.17 | 0.54 | 0.63 | 0.71 | 0.49 | 0.78 | 0.79 | 0.77 | 0.71
InceptionResNetV2 | 0.63 | 0.11 | 0.47 | 0.6 | 0.66 | 0.48 | 0.73 | 0.71 | 0.8 | 0.67
EfficientNetV2S | 0.58 | 0.1 | 0.62 | 0.6 | 0.57 | 0.53 | 0.74 | 0.76 | 0.83 | 0.71
EfficientNetV2M | 0.48 | 0.25 | 0.35 | 0.6 | 0.65 | 0.29 | 0.68 | 0.76 | 0.76 | 0.64
EfficientNetV2L | 0.6 | 0.15 | 0.42 | 0.64 | 0.48 | 0.4 | 0.68 | 0.74 | 0.79 | 0.54
EfficientNetV2B3 | 0.56 | 0.25 | 0.55 | 0.72 | 0.8 | 0.62 | 0.85 | 0.88 | 0.9 | 0.74
EfficientNetV2B2 | 0.56 | 0.11 | 0.41 | 0.64 | 0.48 | 0.33 | 0.67 | 0.72 | 0.76 | 0.62
EfficientNetV2B1 | 0.57 | 0.16 | 0.49 | 0.54 | 0.44 | 0.52 | 0.71 | 0.69 | 0.62 | 0.63
EfficientNetV2B0 | 0.47 | 0.18 | 0.45 | 0.62 | 0.47 | 0.32 | 0.65 | 0.65 | 0.66 | 0.53
EfficientNetB7 | 0.55 | 0.29 | 0.5 | 0.67 | 0.69 | 0.62 | 0.78 | 0.79 | 0.83 | 0.68
EfficientNetB6 | 0.71 | 0.21 | 0.51 | 0.61 | 0.58 | 0.53 | 0.78 | 0.8 | 0.85 | 0.69
EfficientNetB5 | 0.55 | 0.18 | 0.55 | 0.63 | 0.6 | 0.53 | 0.61 | 0.67 | 0.69 | 0.62
EfficientNetB4 | 0.67 | 0.19 | 0.6 | 0.69 | 0.55 | 0.45 | 0.8 | 0.85 | 0.73 | 0.59
EfficientNetB3 | 0.58 | 0.17 | 0.67 | 0.7 | 0.85 | 0.61 | 0.86 | 0.81 | 0.86 | 0.76
EfficientNetB2 | 0.52 | 0.26 | 0.5 | 0.57 | 0.59 | 0.53 | 0.75 | 0.71 | 0.76 | 0.61
EfficientNetB1 | 0.37 | 0.12 | 0.51 | 0.6 | 0.59 | 0.48 | 0.73 | 0.74 | 0.75 | 0.57
EfficientNetB0 | 0.67 | 0.06 | 0.47 | 0.55 | 0.59 | 0.35 | 0.71 | 0.75 | 0.62 | 0.59
DenseNet201 | 0.49 | 0.2 | 0.48 | 0.65 | 0.56 | 0.32 | 0.73 | 0.73 | 0.71 | 0.59
DenseNet169 | 0.54 | 0.2 | 0.28 | 0.6 | 0.5 | 0.47 | 0.69 | 0.74 | 0.71 | 0.53
DenseNet121 | 0.54 | 0.16 | 0.47 | 0.6 | 0.53 | 0.46 | 0.61 | 0.64 | 0.76 | 0.63
Table 17. Kappa Statistics on Segmented Data Set.

Pre-Trained Model | DT | QDA | KNN | AB | GNB | LR | RF | ET | HGB | MLP
Xception | 0.57 | 0.06 | 0.47 | 0.61 | 0.69 | 0.39 | 0.75 | 0.77 | 0.83 | 0.71
VGG19 | 0.47 | 0.19 | 0.61 | 0.62 | 0.51 | 0.49 | 0.62 | 0.64 | 0.7 | 0.57
VGG16 | 0.43 | 0.08 | 0.54 | 0.51 | 0.49 | 0.34 | 0.61 | 0.67 | 0.76 | 0.6
ResNet152V2 | 0.39 | -0.02 | 0.22 | 0.45 | 0.31 | 0.29 | 0.58 | 0.53 | 0.63 | 0.37
ResNet152 | 0.53 | 0.13 | 0.29 | 0.6 | 0.46 | 0.09 | 0.62 | 0.65 | 0.74 | 0.59
ResNet101V2 | 0.51 | 0.06 | 0.3 | 0.5 | 0.31 | 0.14 | 0.5 | 0.55 | 0.52 | 0.48
ResNet101 | 0.45 | 0.12 | 0.48 | 0.55 | 0.46 | 0.31 | 0.74 | 0.71 | 0.79 | 0.65
ResNet50V2 | 0.43 | 0.05 | 0.24 | 0.44 | 0.42 | 0.16 | 0.57 | 0.57 | 0.61 | 0.44
ResNet50 | 0.52 | 0.13 | 0.5 | 0.57 | 0.65 | 0.49 | 0.67 | 0.69 | 0.69 | 0.52
NASNetMobile | 0.43 | 0.12 | 0.33 | 0.47 | 0.42 | 0.46 | 0.63 | 0.68 | 0.59 | 0.55
NASNetLarge | 0.45 | 0.08 | 0.21 | 0.5 | 0.18 | 0.04 | 0.69 | 0.72 | 0.66 | 0.48
MobileNet | 0.3 | 0 | 0.22 | 0.6 | 0.29 | 0.19 | 0.61 | 0.6 | 0.58 | 0.45
InceptionV3 | 0.65 | 0.13 | 0.53 | 0.61 | 0.71 | 0.47 | 0.78 | 0.79 | 0.77 | 0.71
InceptionResNetV2 | 0.63 | 0.09 | 0.46 | 0.59 | 0.65 | 0.45 | 0.73 | 0.7 | 0.79 | 0.67
EfficientNetV2S | 0.57 | 0.08 | 0.62 | 0.6 | 0.56 | 0.52 | 0.74 | 0.76 | 0.83 | 0.71
EfficientNetV2M | 0.48 | 0.2 | 0.35 | 0.6 | 0.64 | 0.28 | 0.68 | 0.76 | 0.76 | 0.64
EfficientNetV2L | 0.59 | 0.12 | 0.42 | 0.63 | 0.48 | 0.35 | 0.68 | 0.74 | 0.79 | 0.54
EfficientNetV2B3 | 0.56 | 0.2 | 0.55 | 0.7 | 0.79 | 0.6 | 0.85 | 0.88 | 0.9 | 0.74
EfficientNetV2B2 | 0.55 | 0.09 | 0.41 | 0.64 | 0.45 | 0.32 | 0.67 | 0.72 | 0.76 | 0.62
EfficientNetV2B1 | 0.57 | 0.11 | 0.49 | 0.52 | 0.4 | 0.5 | 0.71 | 0.69 | 0.62 | 0.63
EfficientNetV2B0 | 0.47 | 0.14 | 0.45 | 0.61 | 0.46 | 0.23 | 0.65 | 0.65 | 0.66 | 0.52
EfficientNetB7 | 0.55 | 0.23 | 0.5 | 0.66 | 0.69 | 0.61 | 0.77 | 0.79 | 0.82 | 0.68
EfficientNetB6 | 0.71 | 0.17 | 0.5 | 0.61 | 0.58 | 0.5 | 0.78 | 0.8 | 0.85 | 0.69
EfficientNetB5 | 0.55 | 0.15 | 0.55 | 0.63 | 0.59 | 0.49 | 0.61 | 0.67 | 0.69 | 0.62
EfficientNetB4 | 0.67 | 0.16 | 0.59 | 0.67 | 0.55 | 0.43 | 0.8 | 0.85 | 0.73 | 0.59
EfficientNetB3 | 0.58 | 0.13 | 0.67 | 0.69 | 0.84 | 0.58 | 0.86 | 0.81 | 0.86 | 0.76
EfficientNetB2 | 0.52 | 0.2 | 0.5 | 0.57 | 0.59 | 0.53 | 0.74 | 0.71 | 0.76 | 0.6
EfficientNetB1 | 0.36 | 0.09 | 0.51 | 0.6 | 0.58 | 0.47 | 0.73 | 0.74 | 0.75 | 0.57
EfficientNetB0 | 0.66 | 0.05 | 0.47 | 0.54 | 0.59 | 0.33 | 0.71 | 0.74 | 0.62 | 0.58
DenseNet201 | 0.49 | 0.16 | 0.47 | 0.65 | 0.56 | 0.31 | 0.73 | 0.73 | 0.71 | 0.59
DenseNet169 | 0.54 | 0.16 | 0.27 | 0.6 | 0.5 | 0.46 | 0.69 | 0.74 | 0.7 | 0.52
DenseNet121 | 0.53 | 0.12 | 0.47 | 0.6 | 0.52 | 0.45 | 0.61 | 0.64 | 0.76 | 0.63
Table 18. Comparative analysis on normal and segmented data.

Data | Pre-Trained Model | Classifier | Accuracy | Precision | Recall | F1-Score | Matthews Coefficient | Kappa Statistics
Normal Data | EfficientNetB3 | HGB | 0.91 | 0.90 | 0.89 | 0.90 | 0.86 | 0.86
Normal Data | EfficientNetV2B3 | HGB | 0.89 | 0.92 | 0.86 | 0.89 | 0.83 | 0.83
Segmented Data | EfficientNetV2B3 | HGB | 0.94 | 0.92 | 0.92 | 0.92 | 0.90 | 0.90
Segmented Data | EfficientNetV2B3 | ET | 0.93 | 0.93 | 0.89 | 0.90 | 0.88 | 0.88
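Table 18 summarises the best configurations: EfficientNetB3 features with HGB on the normal data (accuracy 0.91), and EfficientNetV2B3 features with HGB or ET on the segmented data (accuracy 0.94 and 0.93). As a minimal end-to-end sketch of the strongest configuration, the snippet below feeds features from a frozen EfficientNetV2B3 backbone to a histogram-based gradient boosting classifier; the data placeholders, image size, and hyper-parameters are illustrative assumptions, not the authors' exact pipeline:

```python
# Sketch of the best-performing configuration in Table 18: features from a
# frozen, pre-trained EfficientNetV2B3 backbone classified with histogram-
# based gradient boosting. Data loading and settings are illustrative
# assumptions, not the authors' exact setup.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Frozen backbone with global average pooling as a fixed feature extractor.
backbone = tf.keras.applications.EfficientNetV2B3(
    include_top=False, weights="imagenet", pooling="avg")

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: float array of shape (n, 300, 300, 3) in [0, 255]."""
    x = tf.keras.applications.efficientnet_v2.preprocess_input(images)
    return backbone.predict(x, verbose=0)

# `images` and `labels` stand in for the (segmented) rice-leaf data set.
images = np.random.rand(12, 300, 300, 3).astype("float32") * 255.0  # placeholder
labels = np.array([0, 1, 2] * 4)                                    # placeholder

features = extract_features(images)
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=42, stratify=labels)

clf = HistGradientBoostingClassifier(random_state=42).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```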