Article

Comparative Evaluation of Color Correction as Image Preprocessing for Olive Identification under Natural Light Using Cell Phones

by David Mojaravscki * and Paulo S. Graziano Magalhães
School of Agricultural Engineering, Campinas State University (UNICAMP), Campinas 13083-875, Brazil
* Author to whom correspondence should be addressed.
Submission received: 27 November 2023 / Revised: 20 December 2023 / Accepted: 12 January 2024 / Published: 16 January 2024
(This article belongs to the Special Issue Big Data Analytics in Agriculture)

Abstract

Integrating deep learning for crop monitoring presents opportunities and challenges, particularly in object detection under varying environmental conditions. This study investigates the efficacy of image preprocessing methods for olive identification using mobile cameras under natural light. The research is grounded in the broader context of enhancing object detection accuracy in variable lighting, which is crucial for practical applications in precision agriculture. The study primarily employs the YOLOv7 object detection model and compares various color correction techniques, including histogram equalization (HE), adaptive histogram equalization (AHE), and color correction using the ColorChecker. Additionally, the research examines the role of data augmentation methods, such as image and bounding box rotation, in conjunction with these preprocessing techniques. The findings reveal that while all preprocessing methods improve detection performance compared to non-processed images, AHE is particularly effective in dealing with natural lighting variability. The study also demonstrates that image rotation augmentation consistently enhances model accuracy across different preprocessing methods. These results contribute significantly to agricultural technology, highlighting the importance of tailored image preprocessing in object detection models. The conclusions drawn from this research offer valuable insights for optimizing deep learning applications in agriculture, particularly in scenarios with inconsistent environmental conditions.

1. Introduction

Olive oil production is a significant economic activity in many countries, with a global market value estimated at over €14 billion in 2022 [1]. To ensure high-quality olive oil production, it is essential to identify and classify olives accurately according to their ripeness. Establishing the “right time” to harvest is of the utmost significance [2].
In agriculture, accurate detection and evaluation of olive ripeness is crucial for determining the quality and yield of extracted oil [3]. Traditional methods of assessing olive ripeness rely on experts’ visual inspection, which can be subjective and time-consuming [4].
The literature includes similar initiatives for a range of crops. In fact, the literature has computer vision proposals for in-the-field fruit recognition for a variety of cases, including orchard crops like apples [5], mangoes [6], sweet peppers [7], almonds [8], and tomatoes [9], and in vineyards [10,11].
Regarding olive fruit, there have also been earlier studies [12,13] that addressed determining the size and mass of the fruits, as well as others [14,15,16] that addressed categorizing the fruits according to their surface condition. While Ponce et al. [17] achieved this distinction by examining the fruits’ external features, the authors of [4,18] used endocarp image analysis to identify different varieties of olive fruit. A neural-network-based image analysis algorithm for identifying olive fruits in tree branches was demonstrated by Gatica et al. [19]. Post-harvest research is also available [20,21,22], in which maturity is determined in-line, right before milling [23]. The work of Aljaafreh et al. [24], who used a CNN in natural light, is one noteworthy exception.
In our study, we explored image preprocessing techniques like histogram equalization and adaptive histogram equalization, widely recognized in precision agriculture [25]. Notably, we observed their efficacy in controlled environments, such as watershed preprocessing and HSV in olive identification [12], and histogram equalization for plant disease detection [26]. Additionally, the usage of Gaussian filters, HSV color space [12], OTSU [17], LAB color space [18], thin-plate spline interpolation [20], and Kuwahara filters [22] were noted for assessing maturity indexes. Under varying lighting conditions, adaptive histogram equalization (AHE) has shown promise [27,28].
Computer vision and image processing techniques have emerged as promising solutions for automating olive detection and ripeness assessment. Object detection in agricultural settings faces unique challenges. Variations in lighting conditions, camera optics, and image quality can significantly hinder the accuracy of detection systems. Traditional image processing methods and earlier iterations of object detection algorithms, including the YOLO (You Only Look Once) series, have shown limitations in effectively addressing these challenges [29]. The accuracy of fruit detection based on color is affected by variations in fruit color due to its maturity level, fruit variety, uncertain and varying background features, and variable lighting conditions [30].
To overcome these limitations, this study aims to enhance YOLOv7’s performance in detecting olive fruits through advanced image preprocessing techniques. A key focus is color correction using the ColorChecker: since each image includes the color chart, precise calibration and correction are possible [31]. This method is compared with images without any color correction and with other color correction methods, namely adaptive histogram equalization and histogram equalization, to assess their efficacy in addressing color imbalances and variations in imaging conditions. Additionally, data augmentation methods, including image rotation and bounding box rotation, are utilized to improve the model’s adaptability to diverse olive fruit orientations and backgrounds.
The primary objective is to assess the impact of these image-preprocessing techniques on the accuracy of YOLOv7 for olive fruit detection. The hypothesis is that implementing these preprocessing and data augmentation techniques will significantly enhance the mean average precision (mAP) of YOLOv7-based object detection systems in diverse imaging conditions [32].

2. Materials and Methods

2.1. Experimental Set-Up

The study utilized the built-in cameras of various cell phone models, namely the iPhone 6™ (8-megapixel), iPhone X™ (12-megapixel), and iPhone SE™ (12-megapixel) (all manufactured by Apple, Zhengzhou, China) and the Motorola e4™ (8-megapixel) (manufactured by Motorola, Tianjin, China), along with the X-Rite ColorChecker Passport™ color chart (Figure 1). The investigation focused on the Arbequina olive variety, captured under natural light between 10:00 a.m. and 4:00 p.m. at four distinct locations across Brazil: Farm São Sepé Prosperado, São Sepé, RS (30°21′56.68″ S, 53°30′41.98″ W), notable for producing the acclaimed Prosperato olive oil; Azeite Don Jose, Caçapava do Sul, RS (30°37′25.00″ S, 53°20′44.19″ W), recognized for the production of Don Jose olive oil; Olivas do Sul—Pomar, Cachoeira do Sul, RS (30°00′33.23″ S, 52°51′59.51″ W), known for producing Olivas do Sul olive oil; and Fazenda Oliq, São Bento do Sapucaí, SP (22°37′19.70″ S, 45°41′25.72″ W), home to the producer of Oliq olive oil.
The image collection occurred in February 2023 on different days, resulting in a comprehensive dataset comprising approximately 2400 images for training and 180 for validation (90 for cross-validation 1 and 90 for cross-validation 2). It is important to note that all images were meticulously acquired with a color-checker integrated into the frame, ensuring consistency and accuracy in the dataset.

2.2. Image Color Correction

2.2.1. Color Correction Based on ColorChecker

Color correction based on the ColorChecker follows best practices by using the ColorChecker reference information, including D65 illuminant details. Each image is processed with ColorChecker detection techniques as a key step in color decoding [33,34], ensuring precise calibration and adherence to industry standards for consistent and accurate color representation. Crucially, the CAT02 chromatic adaptation transform was used to convert between different illuminants, an advanced feature of color management systems that is critical for maintaining color consistency across different lighting conditions [35]. This is performed during the conversion from RGB to xyY color spaces, ensuring accurate color representation [36]. In the following step, Finlayson’s 2015 root-polynomial method [37], a well-regarded technique in color constancy, is used for color correction. After correcting the colors, the images were converted back to sRGB.
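For illustration, the sketch below shows how such a chart-based correction might be applied with the open-source colour-science package, assuming the 24 chart patches have already been located in each photo. It is a simplified stand-in for the full pipeline (the CAT02 chromatic adaptation step and the chart-detection code are omitted), and the function and variable names are ours, not the authors’.

```python
# A minimal sketch (not the authors' exact pipeline), assuming the 24 chart
# patches have already been located in the photo. `reference_swatches` would
# hold the chart's published linear-RGB patch values (placeholder here).
import numpy as np
import colour  # colour-science package


def correct_image(img_srgb, measured_swatches, reference_swatches):
    """Apply Finlayson 2015 root-polynomial color correction to an sRGB image.

    img_srgb           : float array (H, W, 3), sRGB-encoded values in [0, 1]
    measured_swatches  : (24, 3) linear RGB values sampled from the chart in the photo
    reference_swatches : (24, 3) linear RGB reference values of the chart patches
    """
    # Decode sRGB to linear RGB before fitting the correction.
    img_lin = colour.cctf_decoding(img_srgb)
    # Fit and apply the root-polynomial correction (Finlayson et al., 2015).
    corrected_lin = colour.colour_correction(
        img_lin, measured_swatches, reference_swatches, method="Finlayson 2015"
    )
    # Re-encode to sRGB for storage and training.
    return np.clip(colour.cctf_encoding(corrected_lin), 0.0, 1.0)
```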

2.2.2. Adaptive Histogram Equalization (AHE)

The idea behind AHE is inspired by the human visual system, which adapts to the local context of a scene when evaluating its contents. To accomplish this, the image is divided into a grid of rectangular contextual regions, and the optimal contrast is computed within each region. The optimal number of contextual regions depends on the type of input image.
The histogram of the contained pixels is calculated for each of these contextual regions. Calculating the corresponding cumulative histogram yields an assignment table that optimizes the contrast in each contextual region [38]. AHE thus enhances contrast by compensating for the differences between the various regions of the image. Unlike global methods, it operates locally, taking the pixel’s location into account when performing the enhancement. This allows it to adapt to the various features of an image, preserving details and enhancing local contrast. One disadvantage is that it can over-enhance noise in the image’s nearly uniform regions [33,39].
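A minimal sketch of this kind of local equalization, using OpenCV’s CLAHE (the contrast-limited variant of AHE) on the lightness channel, is shown below; the clip limit and tile grid size are illustrative choices, not the study’s exact settings.

```python
# A minimal sketch of adaptive histogram equalization using OpenCV's CLAHE
# (the contrast-limited variant); parameter values are illustrative only.
import cv2


def apply_ahe(bgr_image, clip_limit=2.0, grid_size=(8, 8)):
    # Work on the lightness channel so colors are not distorted.
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=grid_size)
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
```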

2.2.3. Histogram Equalization

HE is an image processing technique that redistributes the intensity values of an image to enhance contrast [40]. The method operates by transforming the image’s intensity distribution: the resulting histogram becomes relatively flat because the contrast of the peak portion is enhanced while the contrast of the valley portions on both sides is reduced. The fundamental concept is to make each grey level appear with approximately the same frequency, so that the probability of each grey level is evenly distributed, leading to a flat histogram and a clearer image [41]. In essence, histogram equalization aims to achieve a balanced representation of intensity levels, thereby improving the overall visual quality of the image (Figure 2).
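A minimal sketch of global histogram equalization with OpenCV follows; equalizing the luma channel of YCrCb (rather than each of B, G, R separately) is one common way to avoid hue shifts, and is an assumption here rather than the study’s documented procedure.

```python
# A minimal sketch of global histogram equalization on the luma channel.
import cv2


def apply_he(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y_eq = cv2.equalizeHist(y)
    return cv2.cvtColor(cv2.merge((y_eq, cr, cb)), cv2.COLOR_YCrCb2BGR)
```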

2.3. Data Augmentation

Data augmentation is a crucial approach in object detection: it modifies the images that contain objects in order to increase the size of the dataset, which improves the generality of a model and its applicability to practical problems. This study employed two types of geometric [42] data augmentation: bounding box rotation and image rotation [43]. Bounding box rotation rotates the bounding boxes of the olive fruits in three ways: vertical flipping, horizontal flipping, and combined vertical and horizontal flipping, which allows the neural network to learn the objects in different orientations [44]. The same geometric transformations were also applied at the image level (vertical flipping, horizontal flipping, and combined vertical and horizontal flipping), changing the olive fruits’ perspective and background for object learning [45]. The dataset was therefore augmented with both bounding-box-rotated and image-rotated samples, ensuring that the olive fruits are captured under different scenes.
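The sketch below illustrates flip-style augmentation applied consistently to an image and its YOLO-format labels (class, normalized center coordinates, width, height); the function is an illustration of the idea, not the augmentation code used in the study.

```python
# A minimal sketch of the flip-style augmentations described above for
# YOLO-format labels (class, x_center, y_center, width, height, normalized).
# Function and variable names are illustrative, not from the original code.
import cv2


def flip_sample(image, boxes, horizontal=True, vertical=False):
    """Flip an image and its normalized YOLO boxes consistently."""
    out = image.copy()
    if horizontal:
        out = cv2.flip(out, 1)
    if vertical:
        out = cv2.flip(out, 0)
    new_boxes = []
    for cls, xc, yc, w, h in boxes:
        if horizontal:
            xc = 1.0 - xc
        if vertical:
            yc = 1.0 - yc
        new_boxes.append((cls, xc, yc, w, h))
    return out, new_boxes


# Example: generate the three flipped variants of one training sample.
# variants = [flip_sample(img, lbls, h, v)
#             for h, v in [(True, False), (False, True), (True, True)]]
```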

2.4. Model

Deep learning methods have significantly advanced the object detection field, with the YOLO algorithm emerging as a notable development [32]. YOLO, an acronym for “You Only Look Once”, works by dividing the input image into a grid and predicting object classes and bounding box coordinates for each grid cell. This approach enables YOLO to detect multiple objects in a single pass, making it more efficient than other object detection algorithms [46]. YOLOv7 represents a continuation and refinement of the YOLO series, YOLOv2 [47], YOLOv3 [48], YOLOv4 [49], and YOLOv5, and is renowned for its innovative approach to object detection.
This approach, involving a single-stage process for the simultaneous prediction of bounding boxes and object classification, marks a significant departure from conventional methods that adapted classifiers for object detection. YOLOv7 is celebrated for its state-of-the-art (SOTA) performance [50]; it achieved the highest mAP when compared to YOLOv3, YOLOv5, and Faster RCNN in the detection of Camellia oleifera fruit in field environments. Also, for marine creature detection, YOLOv7 outperformed previous YOLO versions [51]. This advancement has been well documented in several key publications [52,53,54]. Wang et al. [32] and Liu et al. [55] describe YOLOv7 as comprising an input module, a backbone network, a head network, and a prediction network, as explained below (Figure 3):
Input module: To ensure that the input color images are uniformly scaled to a 640 × 640 size and meet the requirements for the input size of the backbone network, the preprocessing stage of the YOLOv7 model uses mosaic and hybrid data enhancement techniques. It also uses the adaptive anchor frame calculation method established by YOLOv5.
Backbone network: The three primary parts of the YOLOv7 backbone are MP1, E-ELAN, and CBS. The CBS module comprises convolution, batch normalization, and SiLU activation functions. The E-ELAN module preserves the original gradient path and helps the network learn more features by directing various feature group computational blocks to learn more varied features, improving the network’s capacity for learning. MP1 comprises CBS and MaxPool and is split into upper and lower branches. The upper branch halves the image’s length and width using MaxPool and halves the image’s channels using a CBS with 128 output channels. The lower branch halves the image channels using a CBS with a 1 × 1 kernel and stride and halves the image’s length and width; a concatenation (Cat) operation then combines the features extracted from both branches. The network’s capacity to extract features is enhanced by MaxPool, which extracts the maximum value information from small local areas, and CBS, which extracts all of the value information from small local areas.
Head network: The head network of YOLOv7 is organized using the PANet-based feature pyramid network (FPN) architecture. It includes the extended efficient layer aggregation network (E-ELAN), MaxPool-2 (MP2), several convolution, batch normalization, and SiLU activation (CBS) blocks, and a spatial pyramid pooling and convolutional spatial pyramid pooling (Sppcspc) structure. The Sppcspc structure enhances the network’s perceptual field by adding a convolutional spatial pyramid (CSP) structure within the spatial pyramid pooling (SPP) structure, together with a large residual edge that facilitates optimization and feature extraction. The ELAN-H layer, a combination of multiple E-ELAN-based feature layers, further improves feature extraction. The MP2 block’s structure is similar to that of the MP1 block, with a minor change to the number of output channels.
Prediction network: YOLOv7’s prediction network uses a Rep structure to modify how many image channels the head network’s features output should have. Then, it applies a 1 × 1 convolution to predict the confidence, category, and anchor frame. RepVGG [56] serves as the model for the Rep structure, which adds a unique residual design to facilitate training. In practical predictions, this special residual structure reduces to simple convolution, which reduces network complexity without compromising predictive performance.
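The following PyTorch sketch illustrates the CBS block and the two-branch MP1 downsampling module described above. It is a simplified illustration only; channel counts, padding, and exact wiring are assumptions and may differ from the official YOLOv7 implementation.

```python
# A simplified PyTorch sketch of the CBS block and MP1 downsampling module.
import torch
import torch.nn as nn


class CBS(nn.Module):
    """Convolution + BatchNorm + SiLU."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


class MP1(nn.Module):
    """Two-branch downsampling: a MaxPool branch and a strided-conv branch, concatenated."""
    def __init__(self, c_in):
        super().__init__()
        c_half = c_in // 2
        self.branch_a = nn.Sequential(nn.MaxPool2d(2, 2), CBS(c_in, c_half, k=1))
        self.branch_b = nn.Sequential(CBS(c_in, c_half, k=1), CBS(c_half, c_half, k=3, s=2))

    def forward(self, x):
        # Both branches halve the spatial size; concatenation restores the channel count.
        return torch.cat((self.branch_a(x), self.branch_b(x)), dim=1)
```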

2.5. YOLOv7 Training

The training of YOLOv7 was conducted on the Google Colab platform with an NVIDIA A100 GPU with 40 GB of memory (manufactured by NVIDIA, Hsinchu, Taiwan). Unlike the original broader configuration, this setup was specifically tailored to identify two classes: “olive fruit” and “not olive fruit”. The training was set to run for 100 epochs, consistent with the range of epoch counts reported in the literature, such as 100, 50, and 45 [57,58,59]. The input image resolution was set to 1280 × 1280 pixels, adapting the YOLOv7-E6 model through transfer learning. This decision was based on optimizing the model’s performance for the specific task at hand, without any adjustments to the default configuration of hyp.scratch.p5.yaml [32].

2.6. Metrics

2.6.1. Precision

Precision reflects the model’s ability to ensure that its detections are correct, i.e., how many of the detected objects in the images are actually olives. It is calculated by dividing the true positives (TP) by the sum of TP and the false positives (FP), as expressed by Equation (1):

$\mathrm{Precision} = \dfrac{TP}{TP + FP}$ (1)

where a true positive ($TP$) is an olive that is present in the image and correctly predicted by the model, and a false positive ($FP$) is a detection of an olive that is not actually in the image.

2.6.2. Recall

Recall reflects the system’s ability to detect all actual olives in the images, i.e., how many of the olives present in an image were correctly identified. It is calculated as the ratio of the true positives (TP) to the sum of TP and the false negatives (FN), as expressed by Equation (2):

$\mathrm{Recall} = \dfrac{TP}{TP + FN}$ (2)

where a true positive ($TP$) is an olive present in the image and detected by the model, and a false negative ($FN$) is an olive present in the image that the model fails to detect.
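As a minimal illustration of Equations (1) and (2), the snippet below computes precision and recall from raw detection counts; the counts are invented for the example.

```python
# A minimal sketch of the precision and recall definitions in Equations (1) and (2).
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall


# Example: 83 correct detections, 17 false alarms, 12 missed olives.
p, r = precision_recall(tp=83, fp=17, fn=12)
print(f"precision={p:.3f}, recall={r:.3f}")  # precision=0.830, recall=0.874
```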

2.6.3. mAP

The mean average precision (mAP) is a metric used in object detection tasks that measures the system’s accuracy in detecting objects, such as olives, across images. It is derived from the precision and recall values at a certain threshold or over a range of thresholds and is typically computed as the area under the precision–recall curve, as expressed by Equation (3):

$\mathrm{mAP} = \dfrac{1}{C} \sum_{i=1}^{C} AP_i$ (3)

where $AP_i$ is the average precision for class $i$ and $C$ is the number of classes (or image sets) in which olives are to be detected. In the case of a single category, such as olives, the mAP is equal to the average precision ($AP$) for that category [60].
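A minimal sketch of how average precision can be computed as the area under the precision–recall curve is given below; it assumes detections have already been matched to ground truth (e.g., by an IoU threshold) and flagged as TP or FP, which is how mAP@0.5 is typically obtained.

```python
# A minimal sketch of average precision as the area under the precision-recall
# curve (all-point interpolation); IoU matching is assumed to have produced
# per-detection TP/FP flags already.
import numpy as np


def average_precision(scores, is_tp, n_ground_truth):
    order = np.argsort(-np.asarray(scores))          # sort detections by confidence
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / n_ground_truth
    precision = cum_tp / (cum_tp + cum_fp)
    # Make precision monotonically non-increasing, then integrate over recall.
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    recall = np.concatenate(([0.0], recall))
    precision = np.concatenate(([precision[0]], precision))
    return np.trapz(precision, recall)


# With a single class (olives), mAP equals this AP value.
```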

2.6.4. Paired t-Test

A paired t-test was used to compare the mAP of two related treatments. A p-value below 0.05 indicates a statistically significant difference between the control group (no treatment) and the corresponding treatment group [61]. The paired t-test is appropriate because the two mAP groups are related [62], as expressed by Equation (4):

$t = \dfrac{\bar{d}}{s_d / \sqrt{n}}$ (4)

where $t$ is the t-value, $\bar{d}$ is the mean of the differences between the paired observations, $s_d$ is the standard deviation of the differences, and $n$ is the number of pairs.
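The snippet below sketches Equation (4) both manually and with SciPy’s ttest_rel; the two mAP arrays are illustrative values taken from Table 1 (H1 and H3 over the two cross-validation folds), not the full set of paired observations used in the study.

```python
# A minimal sketch of the paired t-test in Equation (4); values are illustrative.
import numpy as np
from scipy import stats

map_control = np.array([0.641, 0.686])   # e.g., H1 mAP over the two CV folds
map_treated = np.array([0.780, 0.840])   # e.g., H3 mAP over the two CV folds

d = map_treated - map_control
t_manual = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
t_scipy, p_value = stats.ttest_rel(map_treated, map_control)
print(t_manual, t_scipy, p_value)        # the two t statistics agree
```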

2.6.5. ANOVA

In the case of augmentation and preprocessing, we used ANOVA (analysis of variance), a statistical method used to determine whether there are statistically significant differences between the means of two or more groups [63]. The total sum of squares ($TSS$) is the sum of the squared differences between each individual observation and the overall mean:

$TSS = \sum_{i=1}^{k} \sum_{j=1}^{n_i} (Y_{ij} - \bar{Y})^2$ (5)

where $Y_{ij}$ is an individual observation, $\bar{Y}$ is the overall mean, $k$ is the number of groups, and $n_i$ is the number of observations in the $i$th group.
The sum of squares between groups ($SSB$) is obtained by computing, for each observation, the squared difference between its group mean and the grand mean:

$SSB = \sum_{i=1}^{k} n_i (\bar{Y_i} - \bar{Y})^2$ (6)

where $\bar{Y_i}$ is the mean of the $i$th group, $\bar{Y}$ is the overall mean, $k$ is the number of groups, and $n_i$ is the number of observations in the $i$th group. The sum of squares within groups ($SSW$) is the sum of the squared differences between each value and its group mean:

$SSW = \sum_{i=1}^{k} \sum_{j=1}^{n_i} (Y_{ij} - \bar{Y_i})^2$ (7)

where $\bar{Y_i}$ is the mean of the observations within the $i$th group.
$MSB$ is the mean sum of squares between groups:

$MSB = \dfrac{SSB}{k - 1}$ (8)

where $k - 1$ is the degrees of freedom for groups. $MSW$, the mean sum of squares within groups, is calculated by dividing the sum of squares within groups by the error degrees of freedom:

$MSW = \dfrac{SSW}{N - k}$ (9)

where $N$ is the total number of observations and $k$ is the number of groups.
To obtain the F-statistic, the “average” variability between the groups is compared to the “average” variability within the groups, i.e., the ratio of the between-groups mean sum of squares to the error mean sum of squares:

$F = \dfrac{MSB}{MSW}$ (10)

where $MSB$ and $MSW$ are the mean squares between and within groups.
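The sketch below computes the ANOVA quantities defined above (SSB, SSW, MSB, MSW, and F) directly and cross-checks them against SciPy’s f_oneway; the group values are illustrative mAP numbers from Table 1, not the complete experimental data.

```python
# A minimal sketch of the one-way ANOVA quantities in Equations (5)-(10).
import numpy as np
from scipy import stats

groups = [np.array([0.641, 0.686]),   # e.g., H1 mAP over the CV folds
          np.array([0.840, 0.848]),   # e.g., H1.1
          np.array([0.832, 0.839])]   # e.g., H4.1

all_y = np.concatenate(groups)
grand_mean = all_y.mean()
ssb = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
k, n_total = len(groups), len(all_y)
msb, msw = ssb / (k - 1), ssw / (n_total - k)
f_manual = msb / msw
f_scipy, p_value = stats.f_oneway(*groups)
print(f_manual, f_scipy, p_value)     # manual and SciPy F statistics agree
```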

2.6.6. Tukey

The Tukey test, also known as Tukey’s honest significant difference (HSD) test, is a post hoc test used after an ANOVA has been conducted and found to be significant. It is a multiple comparison procedure which controls the family-wise error rate (FWER) [64], as expressed by Equation (11):

$Q = q_\alpha(k, df) \times \sqrt{\dfrac{MSE}{n}}$ (11)

where $q_\alpha(k, df)$ is the studentized range statistic for a given alpha level $\alpha$, number of groups $k$, and degrees of freedom $df$; $MSE$ is the mean square error from the ANOVA; and $n$ is the number of observations in each group.
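A minimal sketch of Tukey’s HSD using statsmodels follows; the mAP values and group labels are illustrative placeholders, and the output columns mirror those reported in Table 3.

```python
# A minimal sketch of Tukey's HSD after ANOVA; data are illustrative placeholders.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

map_scores = np.array([0.641, 0.686, 0.840, 0.848, 0.832, 0.839])
labels = np.array(["H1", "H1", "H1.1", "H1.1", "H4.1", "H4.1"])

result = pairwise_tukeyhsd(endog=map_scores, groups=labels, alpha=0.05)
print(result.summary())   # columns mirror Table 3: meandiff, p-adj, lower, upper, reject
```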

3. Results

This study evaluated four preprocessing treatments, each combined with three augmentation settings, when building the dataset, namely: H1—no preprocessing and no augmentation; H1.1—no preprocessing and image rotation augmentation; H1.2—no preprocessing and bounding box augmentation; H2—color correction based on ColorChecker preprocessing and no augmentation; H2.1—color correction based on ColorChecker preprocessing and image rotation augmentation; H2.2—color correction based on ColorChecker preprocessing and bounding box augmentation; H3—AHE preprocessing and no augmentation; H3.1—AHE preprocessing and image rotation augmentation; H3.2—AHE preprocessing and bounding box augmentation; H4—HE preprocessing and no augmentation; H4.1—HE preprocessing and image rotation augmentation; H4.2—HE preprocessing and bounding box augmentation (Table 1).
As described in Section 2.6.4, a paired t-test was used to compare the mAP of two related treatments, with a p-value below 0.05 indicating a statistically significant difference between the control group (no treatment) and the corresponding treatment group [61,62]. It was therefore used to compare the mAP metric of H1 against H2, H3, and H4 to test which color adjustment improves the mAP metrics (Table 2); in these comparisons, the groups contain the same number of images. The study comprehensively evaluates color correction as a preprocessing step for olive identification using cell phones under natural light, and the results align with previous findings highlighting the challenges and importance of color correction in agricultural image analysis [65,66].
The paired t-test results revealed that AHE preprocessing (hypothesis H3) was the only method statistically significantly different from the control, providing empirical evidence for its efficacy in this specific application. AHE separates an image into several sub-blocks, and each sub-block is processed by histogram equalization. This creates a local equalization, making it more suitable for natural light images, in which different regions of the same image may be darker or lighter. In contrast, HE performs the equalization over the entire image without considering each region [67]. Color correction, even when using a sophisticated matrix-based method to adjust the colors in an image, often applies corrections uniformly across the entire image [68]. For the combined augmentation and preprocessing comparisons, we used ANOVA. The ANOVA p-value was approximately 1.33 × 10⁻⁵, showing statistically significant differences in the mean mAP scores among the different color correction methods and augmentations. To investigate this further, we proceeded with the Tukey test (Table 3).
The results of the Tukey HSD test provide a detailed comparison of the mean differences between the various groups using different augmentation techniques. Here is a summary of the key findings:
H1 vs. Others:
  • Significant differences were found between H1 and several other groups (H1.1, H1.2, H2.1, H2.2, H3.2, H4.1, H4.2), indicating that the use of any augmentation or preprocessing method generally improves performance compared to no augmentation/preprocessing.
H1.1 vs. Others:
  • H1.1 (no preprocessing with image rotation augmentation) significantly differs from H3.1, H3.2, and H4.2, indicating differences in performance when comparing image rotation augmentation with various preprocessing methods.
H1.2 vs. H3.1 and H1.2 vs. H3.2:
  • These comparisons show significant differences, suggesting that the type of preprocessing used with bbox rotation augmentation can impact performance.
H2.1 vs. H2.2:
  • A significant difference exists, indicating that the choice between image rotation and bbox rotation in color correction preprocessing can impact the results.
H3.1 vs. H4.1:
  • This comparison reveals a significant difference, suggesting that histogram equalization preprocessing combined with image rotation performs better than adaptive histogram equalization preprocessing with the same augmentation.
H4.1 vs. H4.2:
  • A significant difference is noted here, highlighting the impact of the type of augmentation (image rotation vs. bbox rotation) in histogram equalization preprocessing.
Based on the Tukey HSD test results, the best treatment in terms of mAP@0.5 performance among the ones tested can be identified by looking for the groups that consistently showed superior performance. The key is to find the group that was significantly better than most others and was not significantly outperformed by any other group. Here is a summary based on our results:
  • H1.2 (no preprocessing with bounding box rotation): While it showed significant improvements over H1, it was outperformed by H3.1, H3.2, and H4.2.
  • H2.1 (color correction with image rotation): This group did not show significant differences when compared to H1.1 and H4.1, and it was only significantly better than H2.2, H3.1, H3.2, and H4.2.
  • H4.1 (Histogram equalization with image rotation): H4.1 stands out as no other group significantly outperformed it in our test, and it showed significant improvements over several groups, including H3.1 and H3.2.

4. Discussion

Aljaafreh et al. [24] conducted a similar test using an RGB camera to collect data from 10 olive farms in Jordan under natural light. Methodologically, they employed YOLOv5 and tested four hyperparameters to determine the best olive fruit detection performance. Hyperparameter D (with four anchors, a learning rate of 0.01, and a weight decay of 0.05) used with YOLOv5x, which accepts images of 640 × 640 pixels, achieved the highest mAP of 0.7708. All the mAP metrics are available in Table 4.
In this study, the mAP (mean average precision) metrics are compared with those reported by Aljaafreh et al. [24]. Our results demonstrate a significant improvement: an increase of 7.94% in CV1 (H4.1: 0.832 compared to 0.7708 mAP; Equation (12)) and an increase of 8.85% in CV2 (H4.1: 0.839 compared to 0.7708 mAP; Equation (13)).
$\dfrac{0.832 - 0.7708}{0.7708} \times 100 \approx 7.94\%$ (12)

$\dfrac{0.839 - 0.7708}{0.7708} \times 100 \approx 8.85\%$ (13)
We observed a significant improvement when comparing our H2.1 results with the best mAP achieved by Aljaafreh et al. [24]. Specifically, in CV1, H2.1 surpassed their results by 9.9% (Equation (14)), and in CV2, the improvement was even more pronounced at 11.57% (Equation (15)).
$\dfrac{0.847 - 0.7708}{0.7708} \times 100 \approx 9.90\%$ (14)

$\dfrac{0.86 - 0.7708}{0.7708} \times 100 \approx 11.57\%$ (15)
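As a small worked check of Equations (12)–(15), the snippet below computes the relative improvements over the 0.7708 baseline.

```python
# A minimal sketch of the relative-improvement calculation in Equations (12)-(15).
def relative_gain(ours, baseline=0.7708):
    return (ours - baseline) / baseline * 100.0


for name, value in [("H4.1 CV1", 0.832), ("H4.1 CV2", 0.839),
                    ("H2.1 CV1", 0.847), ("H2.1 CV2", 0.860)]:
    print(f"{name}: +{relative_gain(value):.2f}%")
# prints approximately 7.94%, 8.85%, 9.89%, and 11.57%
```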
This finding would allow farmers to use mobile devices to identify olives for counting, and later for research on maturity index and disease identification. Such an approach does not require experience or skills with a camera for image acquisition, therefore becoming an ally for farmers. Since the lighting condition is a challenge, artificial light can be used to improve the results. In [69], which used YOLOv5 with 500 epochs for peach recognition, it was concluded that artificial light gave better results than natural light, with an F1 score of 0.81, highlighting that natural light presents issues such as excessive sunlight, shade, and sun-fleck conditions. Aquino et al. [18], using artificial light to identify olives at night with convolutional neural network (CNN)-based models, reached F1 scores of 0.83 and 0.84 for precision using Inception-ResNetV2 with 64 epochs.
In this study, which focused on using deep learning for olive fruit detection, particularly employing the YOLOv7 model, notable limitations emerge that warrant attention in future research. Firstly, the study’s findings are inherently tied to specific conditions, notably the use of cell phones for image capture under natural lighting, which may not be representative of indoor areas and controlled-light environments. Additionally, while exploring a range of preprocessing techniques, including histogram equalization and image rotation, the research may not have encompassed the entire spectrum of available or potentially more effective methods. This limited scope may have resulted in overlooking other innovative techniques that might enhance detection accuracy. Furthermore, the reliance on the YOLOv7 model as the sole CNN tool raises questions about the generalizability of the findings across different object detection models, each potentially responding differently to the same preprocessing and augmentation techniques. These limitations highlight the need for more extensive research to validate and extend these findings under varying conditions, with different olive varieties, with diverse resolutions, and across multiple deep learning models.
The study comprehensively evaluated color correction as a preprocessing technique for olive identification using cell phones in natural light conditions. Various methods were compared, including histogram equalization (HE), adaptive histogram equalization (AHE), and color correction based on use of ColorChecker. The findings indicated that any preprocessing or augmentation method generally improved performance compared to scenarios without such treatments.

5. Conclusions

Based on the results of the Tukey test, it can be concluded that using histogram equalization preprocessing with image rotation augmentation (H4.1) performs best in terms of mAP@0.5 compared to the other tested preprocessing methods and augmentation techniques. The results in terms of image augmentation agree with [70], in which the method also performed better at the image level than at the box level.
Notably, the AHE preprocessing method (H3) demonstrated statistically significant differences over the other methods, highlighting its suitability for image capture under natural light with diverse lighting conditions. The study revealed that different preprocessing and augmentation techniques impact the performance of the YOLOv7 model in olive detection.
Among the treatments, the combination of histogram equalization preprocessing and image rotation (H4.1) emerged as the most effective. While sophisticated color correction methods were effective, they were less impactful than AHE preprocessing. This study highlights the importance of carefully selecting image preprocessing and augmentation techniques, with AHE preprocessing and image rotation augmentation identified as key contributors to enhancing olive detection accuracy using YOLOv7 in natural lighting conditions.
In addition, our comprehensive analysis and comparison with Aljaafreh et al. (2023) [24] demonstrate notable advancements in olive fruit detection. We achieved significant improvements in mean average precision (mAP) relative to their YOLOv5 results [24], with increases of 7.94% and 8.8% in CV1 and CV2 for H4.1, and even more substantial gains of 9.9% and 11.57% in CV1 and CV2 for H2.1, respectively. These results demonstrate the efficacy of our methodological enhancements and provide valuable insights for future research in precision agriculture using deep learning techniques.

Author Contributions

Conceptualization, D.M.; methodology, D.M.; software, D.M.; validation, D.M.; formal analysis, D.M.; investigation, D.M.; resources, D.M.; data curation, D.M.; writing—original draft preparation, D.M.; writing—review and editing, D.M. and P.S.G.M.; visualization, D.M.; supervision, P.S.G.M.; project administration, D.M. and P.S.G.M. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001.

Data Availability Statement

The data, code, and material are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AHE	Adaptive Histogram Equalization
CV	Cross-Validation
CNN	Convolutional Neural Network
HE	Histogram Equalization
YOLO	You Only Look Once
MSE	Mean Squared Error
mAP	Mean Average Precision

References

  1. The Brainy Insights. Olive Oil Market Size by Type (Extra Virgin, Virgin, Pure/Refined, and Others), By End-user (Foodservice/HoReCa, Household/Retail, Food Manufacturing, and Others), Regions, Global Industry Analysis, Share, Growth, Trends, and Forecast 2023 to 2032. 2023. Available online: https://www.thebrainyinsights.com/report/olive-oil-market-13494 (accessed on 12 November 2023).
  2. Rodrigues, N.; Casal, S.; Rodrigues, A.I.; Cruz, R.; Pereira, J.A. Impact of Frost on the Morphology and Chemical Composition of cv. Santulhana Olives. Appl. Sci. 2022, 12, 1222. [Google Scholar] [CrossRef]
  3. Khosravi, H.; Saedi, S.I.; Rezaei, M. Real-time recognition of on-branch olive ripening stages by a deep convolutional neural network. Sci. Hortic. 2021, 287, 110252. [Google Scholar] [CrossRef]
  4. Martínez, S.S.; Gila, D.M.M.; Beyaz, A.; Ortega, J.G.; García, J.G. A computer vision approach based on endocarp features for the identification of olive cultivars. Comput. Electron. Agric. 2018, 154, 341–346. [Google Scholar] [CrossRef]
  5. Roy, P.; Kislay, A.; Plonski, P.A.; Luby, J.; Isler, V. Vision-based preharvest yield mapping for apple orchards. Comput. Electron. Agric. 2019, 164, 104897. [Google Scholar] [CrossRef]
  6. Qureshi, W.; Payne, A.; Walsh, K.; Linker, R.; Cohen, O.; Dailey, M. Machine vision for counting fruit on mango tree canopies. Precis. Agric. 2017, 18, 224–244. [Google Scholar] [CrossRef]
  7. Bac, C.; Hemming, J.; Van Henten, E. Robust pixel-based classification of obstacles for robotic harvesting of sweet-pepper. Comput. Electron. Agric. 2013, 96, 148–162. [Google Scholar] [CrossRef]
  8. Underwood, J.P.; Hung, C.; Whelan, B.; Sukkarieh, S. Mapping almond orchard canopy volume, flowers, fruit and yield using lidar and vision sensors. Comput. Electron. Agric. 2016, 130, 83–96. [Google Scholar] [CrossRef]
  9. Zhao, Y.; Gong, L.; Zhou, B.; Huang, Y.; Liu, C. Detecting tomatoes in greenhouse scenes by combining AdaBoost classifier and colour analysis. Biosyst. Eng. 2016, 148, 127–137. [Google Scholar] [CrossRef]
  10. Nuske, S.; Wilshusen, K.; Achar, S.; Yoder, L.; Narasimhan, S.; Singh, S. Automated visual yield estimation in vineyards. J. Field Robot. 2014, 31, 837–860. [Google Scholar] [CrossRef]
  11. Aquino, A.; Millan, B.; Diago, M.P.; Tardaguila, J. Automated early yield prediction in vineyards from on-the-go image acquisition. Comput. Electron. Agric. 2018, 144, 26–36. [Google Scholar] [CrossRef]
  12. Ponce, J.M.; Aquino, A.; Millán, B.; Andújar, J.M. Olive-fruit mass and size estimation using image analysis and feature modeling. Sensors 2018, 18, 2930. [Google Scholar] [CrossRef]
  13. Ponce, J.M.; Aquino, A.; Millan, B.; Andújar, J.M. Automatic counting and individual size and mass estimation of olive-fruits through computer vision techniques. IEEE Access 2019, 7, 59451–59465. [Google Scholar] [CrossRef]
  14. Diaz, R.; Gil, L.; Serrano, C.; Blasco, M.; Moltó, E.; Blasco, J. Comparison of three algorithms in the classification of table olives by means of computer vision. J. Food Eng. 2004, 61, 101–107. [Google Scholar] [CrossRef]
  15. Hassan, H.; El-Rahman, A.A.; Attia, M. Color Properties of olive fruits during its maturity stages using image analysis. In Proceedings of the AIP Conference Proceedings, Omaha, NE, USA, 3–4 August 2011; American Institute of Physics: College Park, MD, USA, 2011; Volume 1380, pp. 101–106. [Google Scholar] [CrossRef]
  16. Puerto, D.A.; Martínez Gila, D.M.; Gámez García, J.; Gómez Ortega, J. Sorting olive batches for the milling process using image processing. Sensors 2015, 15, 15738–15754. [Google Scholar] [CrossRef]
  17. Ponce, J.F.; Aquino, A.; Andújar, J.M. Olive-fruit variety classification by means of image processing and convolutional neural networks. IEEE Access 2019, 7, 147629–147641. [Google Scholar] [CrossRef]
  18. Aquino, A.; Ponce, J.M.; Andújar, J.M. Identification of olive fruit, in intensive olive orchards, by means of its morphological structure using convolutional neural networks. Comput. Electron. Agric. 2020, 176, 105616. [Google Scholar] [CrossRef]
  19. Gatica, G.; Best, S.; Ceroni, J.; Lefranc, G. Olive fruits recognition using neural networks. Procedia Comput. Sci. 2013, 17, 412–419. [Google Scholar] [CrossRef]
  20. Figorilli, S.; Violino, S.; Moscovini, L.; Ortenzi, L.; Salvucci, G.; Vasta, S.; Tocci, F.; Costa, C.; Toscano, P.; Pallottino, F. Olive fruit selection through ai algorithms and RGB imaging. Foods 2022, 11, 3391. [Google Scholar] [CrossRef]
  21. Avila, F.; Mora, M.; Oyarce, M.; Zuñiga, A.; Fredes, C. A method to construct fruit maturity color scales based on support machines for regression: Application to olives and grape seeds. J. Food Eng. 2015, 162, 9–17. [Google Scholar] [CrossRef]
  22. Sola-Guirado, R.R.; Bayano-Tejero, S.; Aragón-Rodríguez, F.; Bernardi, B.; Benalia, S.; Castro-García, S. A smart system for the automatic evaluation of green olives visual quality in the field. Comput. Electron. Agric. 2020, 179, 105858. [Google Scholar] [CrossRef]
  23. Aguilera Puerto, D.; Cáceres Moreno, Ó.; Martínez Gila, D.M.; Gómez Ortega, J.; Gámez García, J. Online system for the identification and classification of olive fruits for the olive oil production process. J. Food Meas. Charact. 2019, 13, 716–727. [Google Scholar] [CrossRef]
  24. Aljaafreh, A.; Elzagzoug, E.Y.; Abukhait, J.; Soliman, A.H.; Alja’Afreh, S.S.; Sivanathan, A.; Hughes, J. A Real-Time Olive Fruit Detection for Harvesting Robot Based on Yolo Algorithms. Acta Technol. Agric. 2023, 26, 121–132. [Google Scholar] [CrossRef]
  25. Sharmila, G.; Rajamohan, K. A Systematic Literature Review on Image Preprocessing and Feature Extraction Techniques in Precision Agriculture. In Proceedings of the Congress on Intelligent Systems: CIS 2021, Bengaluru, India, 4–5 September 2021; Springer: Singapore, 2022; Volume 1, pp. 333–354. [Google Scholar] [CrossRef]
  26. Kiran, S.; Chandrappa, D. Plant Leaf Disease Detection Using Efficient Image Processing and Machine Learning Algorithms. J. Robot. Control 2023, 4, 840–848. [Google Scholar]
  27. Ojo, M.O.; Zahid, A. Improving Deep Learning Classifiers Performance via Preprocessing and Class Imbalance Approaches in a Plant Disease Detection Pipeline. Agronomy 2023, 13, 887. [Google Scholar] [CrossRef]
  28. Nugroho, B.; Yuniarti, A. Performance of contrast-limited AHE in preprocessing of face recognition with training image under various lighting conditions. In Proceedings of the 2020 6th Information Technology International Seminar (ITIS), Surabaya, Indonesia, 14–16 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 167–171. [Google Scholar] [CrossRef]
  29. Wosner, O.; Farjon, G.; Bar-Hillel, A. Object detection in agricultural contexts: A multiple resolution benchmark and comparison to human. Comput. Electron. Agric. 2021, 189, 106404. [Google Scholar] [CrossRef]
  30. Gongal, A.; Amatya, S.; Karkee, M.; Zhang, Q.; Lewis, K. Sensors and systems for fruit detection and localization: A review. Comput. Electron. Agric. 2021, 116, 8–19. [Google Scholar] [CrossRef]
  31. Liu, Z.; Liu, Y.X.; Gao, G.A.; Yong, K.; Wu, B.; Liang, J.X. An integrated method for color correction based on color constancy for early mural images in Mogao Grottoes. Front. Neurosci. 2022, 16, 1024599. [Google Scholar] [CrossRef]
  32. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar] [CrossRef]
  33. Zimmerman, J.; Cousins, S.; Hartzell, K.; Frisse, M.; Kahn, M. A psychophysical comparison of two methods for adaptive histogram equalization. J. Digital Imaging 1989, 2, 82–91. [Google Scholar] [CrossRef]
  34. Khan, F.S.; van Weijer, J.; Vanrell, M. Top-down color attention for object recognition. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 27 September–4 October 2010. [Google Scholar] [CrossRef]
  35. Luo, M.R.; Cui, G.; Rigg, B. The development of the CIE 2000 colour-difference formula: CIEDE2000. Color Res. Appl. 2001, 26, 340–350. [Google Scholar] [CrossRef]
  36. Fairchild, M.D. Color Appearance Models; Wiley: Hoboken, NJ, USA, 2013. [Google Scholar] [CrossRef]
  37. Finlayson, G.D.; Mackiewicz, M.; Hurlbert, A. Color Correction Using Root-Polynomial Regression. IEEE Trans. Image Process. 2015, 24, 1460–1470. [Google Scholar] [CrossRef]
  38. Heckbert, P. Graphics Gems IV (IBM Version); Elsevier: Amsterdam, The Netherlands, 1994; Chapter 5. [Google Scholar]
  39. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vision Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
  40. Cheng, H.D.; Shi, X. A simple and effective histogram equalization approach to image enhancement. Digit. Signal Process. 2004, 14, 158–170. [Google Scholar] [CrossRef]
  41. Xiong, J.; Yu, D.; Wang, Q.; Shu, L.; Cen, J.; Liang, Q.; Chen, H.; Sun, B. Application of histogram equalization for image enhancement in corrosion areas. Shock Vib. 2021, 2021, 1–13. [Google Scholar] [CrossRef]
  42. Khalifa, N.E.; Loey, M.; Mirjalili, S. A comprehensive survey of recent trends in deep learning for digital images augmentation. Artif. Intell. Rev. 2022, 55, 2351–2377. [Google Scholar] [CrossRef]
  43. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 1–48. [Google Scholar] [CrossRef]
  44. Quiroga, F.; Ronchetti, F.; Lanzarini, L.; Bariviera, A.F. Revisiting data augmentation for rotational invariance in convolutional neural networks. In Modelling and Simulation in Management Sciences: Proceedings of the International Conference on Modelling and Simulation in Management Sciences (MS-18), Girona, Spain, 28–29 June 2018; Springer: Cham, Switzerland, 2020; pp. 127–141. [Google Scholar] [CrossRef]
  45. Simard, P.Y.; Steinkraus, D.; Platt, J.C. Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis; IEEE: Piscataway, NJ, USA, 2003. [Google Scholar] [CrossRef]
  46. Badeka, E.; Karapatzak, E.; Karampatea, A.; Bouloumpasi, E.; Kalathas, I.; Lytridis, C.; Tziolas, E.; Tsakalidou, V.N.; Kaburlasos, V.G. A Deep Learning Approach for Precision Viticulture, Assessing Grape Maturity via YOLOv7. Sensors 2023, 23, 8126. [Google Scholar] [CrossRef]
  47. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. arXiv 2017, arXiv:1612.08242. [Google Scholar] [CrossRef]
  48. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
  49. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar] [CrossRef]
  50. Wu, D.; Jiang, S.; Zhao, E.; Liu, Y.; Zhu, H.; Wang, W.; Wang, R. Detection of Camellia oleifera fruit in complex scenes by using YOLOv7 and data augmentation. Appl. Sci. 2022, 12, 11318. [Google Scholar] [CrossRef]
  51. Shankar, R.; Muthulakshmi, M. Comparing YOLOV3, YOLOV5 & YOLOV7 Architectures for Underwater Marine Creatures Detection. In Proceedings of the 2023 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE), Dubai, United Arab Emirates, 9–10 March 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 25–30. [Google Scholar] [CrossRef]
  52. Gallo, I.; Rehman, A.U.; Dehkordi, R.H.; Landro, N.; La Grassa, R.; Boschetti, M. Deep object detection of crop weeds: Performance of YOLOv7 on a real case dataset from UAV images. Remote Sens. 2023, 15, 539. [Google Scholar] [CrossRef]
  53. Zeng, Y.; Zhang, T.; He, W.; Zhang, Z. Yolov7-uav: An unmanned aerial vehicle image object detection algorithm based on improved yolov7. Electronics 2023, 12, 3141. [Google Scholar] [CrossRef]
  54. Fu, X.; Wei, G.; Yuan, X.; Liang, Y.; Bo, Y. Efficient YOLOv7-Drone: An Enhanced Object Detection Approach for Drone Aerial Imagery. Drones 2023, 7, 616. [Google Scholar] [CrossRef]
  55. Liu, K.; Sun, Q.; Sun, D.; Peng, L.; Yang, M.; Wang, N. Underwater target detection based on improved YOLOv7. J. Mar. Sci. Eng. 2023, 11, 677. [Google Scholar] [CrossRef]
  56. Ding, X.; Zhang, X.; Ma, N.; Han, J.; Ding, G.; Sun, J. Repvgg: Making vgg-style convnets great again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13733–13742. [Google Scholar]
  57. Dang, F.; Chen, D.; Lu, Y.; Li, Z. YOLOWeeds: A novel benchmark of YOLO object detectors for multi-class weed detection in cotton production systems. Comput. Electron. Agric. 2023, 205, 107655. [Google Scholar] [CrossRef]
  58. Ariza-Sentís, M.; Baja, M.; Martín, S.V.V.; Valente, J.R.P. Object detection and tracking on UAV RGB videos for early extraction of grape phenotypic traits. Comput. Electron. Agric. 2023, 211, 108051. [Google Scholar] [CrossRef]
  59. Fazari, A.; Pellicer-Valero, O.; Gómez-Sanchís, J.; Bernardi, B.; Cubero, S.; Benalia, S.; Zimbalatti, G.; Blasco, J. Application of deep convolutional neural networks for the detection of anthracnose in olives using VIS/NIR hyperspectral images. Comput. Electron. Agric. 2021, 187, 106252. [Google Scholar] [CrossRef]
  60. Padilla, R.; Netto, S.L.; da Silva, E.A.B. A Survey on Performance Metrics for Object-Detection Algorithms; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar] [CrossRef]
  61. Chen, B.; Wang, X.; Qiu, B.; Jia, B.; Li, X.; Wang, Y. An unsafe behavior detection method based on improved YOLO framework. Electronics 2022, 11, 1912. [Google Scholar] [CrossRef]
  62. Hsu, H.; Lachenbruch, P. Paired t test. In Wiley StatsRef: Statistics Reference Online; Wiley: Hoboken, NJ, USA, 2014. [Google Scholar] [CrossRef]
  63. Fisher, R. The Design of Experiments; Oliver & Boyd: Edinburgh, UK, 1949. [Google Scholar]
  64. Tukey, J.W. Comparing individual means in the analysis of variance. Biometrics 1949, 5, 99. [Google Scholar] [CrossRef]
  65. Guzmán, E.; Baeten, V.; Pierna, J.A.F.; García-Mesa, J.A. Determination of the olive maturity index of intact fruits using image analysis. J. Food Sci. Technol. 2013, 52, 1462–1470. [Google Scholar] [CrossRef]
  66. Ortenzi, L.; Figorilli, S.; Costa, C.; Pallottino, F.; Violino, S.; Pagano, M.; Imperi, G.; Manganiello, R.; Lanza, B.; Antonucci, F. A Machine Vision Rapid Method to Determine the Ripeness Degree of Olive Lots. Sensors 2021, 21, 2940. [Google Scholar] [CrossRef] [PubMed]
  67. Guo, J.; Ma, J.; García-Fernández, Á.F.; Zhang, Y.; Liang, H.N. A survey on image enhancement for low-light images. Heliyon 2023, 9, e14558. [Google Scholar] [CrossRef] [PubMed]
  68. Finlayson, G.D.; Darrodi, M.M.; Mackiewicz, M. The alternating least squares technique for nonuniform intensity color correction. Color Res. Appl. 2014, 40, 232–242. [Google Scholar] [CrossRef]
  69. Bortolotti, G.; Piani, M.; Mengoli, D.; Corelli Grappadelli, L.; Manfrini, L. Pilot study of a computer vision system for in-field peach fruit quality evaluation. Acta Hortic. 2022, 1352, 315–322. [Google Scholar] [CrossRef]
  70. Chen, Y.; Zhang, P.; Kong, T.; Li, Y.; Zhang, X.; Qi, L.; Sun, J.; Jia, J. Scale-aware automatic augmentations for object detection with dynamic training. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 2367–2383. [Google Scholar] [CrossRef]
Figure 1. Image acquisition sample.
Figure 2. (a) Original image, (b) Color correction based on ColorChecker, (c) Adaptive histogram equalization, (d) Histogram equalization.
Figure 3. Yolov7 architecture [55].
Table 1. Cross-validation (CV) evaluation of each treatment.

Treatment | CV1 Precision | CV1 Recall | CV1 mAP | CV2 Precision | CV2 Recall | CV2 mAP
No preprocessing
H1 | 0.593 | 0.672 | 0.641 | 0.572 | 0.756 | 0.686
H1.1 | 0.762 | 0.786 | 0.84 | 0.738 | 0.794 | 0.848
H1.2 | 0.651 | 0.872 | 0.821 | 0.669 | 0.828 | 0.814
ColorChecker preprocessing
H2 | 0.6651 | 0.872 | 0.821 | 0.663 | 0.759 | 0.726
H2.1 | 0.718 | 0.842 | 0.847 | 0.78 | 0.78 | 0.86
H2.2 | 0.69 | 0.757 | 0.785 | 0.662 | 0.835 | 0.774
AHE preprocessing
H3 | 0.74 | 0.81 | 0.78 | 0.772 | 0.79 | 0.84
H3.1 | 0.633 | 0.729 | 0.713 | 0.66 | 0.736 | 0.744
H3.2 | 0.701 | 0.669 | 0.725 | 0.675 | 0.738 | 0.745
HE preprocessing
H4 | 0.676 | 0.793 | 0.775 | 0.664 | 0.814 | 0.785
H4.1 | 0.72 | 0.829 | 0.832 | 0.723 | 0.817 | 0.839
H4.2 | 0.647 | 0.785 | 0.752 | 0.661 | 0.809 | 0.784
Table 2. Paired t-test of H1 vs. H2, H3, and H4.

Comparison | p-value
H1 vs. H2 | 0.3608
H1 vs. H3 | 0.0326
H1 vs. H4 | 0.0949
Table 3. Tukey HSD test.

Group 1 | Group 2 | Mean diff | p-adj | Lower | Upper | Reject
H1 | H1.1 | 0.1805 | 0 | 0.1155 | 0.2455 | True
H1 | H1.2 | 0.154 | 0.0001 | 0.089 | 0.219 | True
H1 | H2.1 | 0.19 | 0 | 0.125 | 0.255 | True
H1 | H2.2 | 0.116 | 0.0011 | 0.051 | 0.181 | True
H1 | H3.1 | 0.065 | 0.0501 | 0 | 0.13 | False
H1 | H3.2 | 0.0715 | 0.0294 | 0.0065 | 0.1365 | True
H1 | H4.1 | 0.172 | 0 | 0.107 | 0.237 | True
H1 | H4.2 | 0.1045 | 0.0024 | 0.0395 | 0.1695 | True
H1.1 | H1.2 | -0.0265 | 0.7794 | -0.0915 | 0.0385 | False
H1.1 | H2.1 | 0.0095 | 0.9993 | -0.0555 | 0.0745 | False
H1.1 | H2.2 | -0.0645 | 0.0522 | -0.1295 | 0.0005 | False
H1.1 | H3.1 | -0.1155 | 0.0012 | -0.1805 | -0.0505 | True
H1.1 | H3.2 | -0.109 | 0.0018 | -0.174 | -0.044 | True
H1.1 | H4.1 | -0.0085 | 0.9997 | -0.0735 | 0.0565 | False
H1.1 | H4.2 | -0.076 | 0.0205 | -0.141 | -0.011 | True
H1.2 | H2.1 | 0.036 | 0.4779 | -0.029 | 0.101 | False
H1.2 | H2.2 | -0.038 | 0.4197 | -0.103 | 0.027 | False
H1.2 | H3.1 | -0.089 | 0.0074 | -0.154 | -0.024 | True
H1.2 | H3.2 | -0.0825 | 0.0123 | -0.1475 | -0.0175 | True
H1.2 | H4.1 | 0.018 | 0.9609 | -0.047 | 0.083 | False
H1.2 | H4.2 | -0.0495 | 0.1785 | -0.1145 | 0.0155 | False
H2.1 | H2.2 | -0.074 | 0.024 | -0.139 | -0.009 | True
H2.1 | H3.1 | -0.125 | 0.0006 | -0.19 | -0.06 | True
H2.1 | H3.2 | -0.1185 | 0.001 | -0.1835 | -0.0535 | True
H2.1 | H4.1 | -0.018 | 0.9609 | -0.083 | 0.047 | False
H2.1 | H4.2 | -0.0855 | 0.0097 | -0.1505 | -0.0205 | True
H2.2 | H3.1 | -0.051 | 0.1583 | -0.116 | 0.014 | False
H2.2 | H3.2 | -0.0445 | 0.2634 | -0.1095 | 0.0205 | False
H2.2 | H4.1 | 0.056 | 0.1053 | -0.009 | 0.121 | False
H2.2 | H4.2 | -0.0115 | 0.9974 | -0.0765 | 0.0535 | False
H3.1 | H3.2 | 0.0065 | 1 | -0.0585 | 0.0715 | False
H3.1 | H4.1 | 0.107 | 0.002 | 0.042 | 0.172 | True
H3.1 | H4.2 | 0.0395 | 0.379 | -0.0255 | 0.1045 | False
H3.2 | H4.1 | 0.1005 | 0.0032 | 0.0355 | 0.1655 | True
H3.2 | H4.2 | 0.033 | 0.5719 | -0.032 | 0.098 | False
H4.1 | H4.2 | -0.0675 | 0.0408 | -0.1325 | -0.0025 | True
Table 4. Aljaafreh et al., 2023 [24], YOLOv5 olive fruit object detection under natural light mAP.

Name | Hyperparameter | mAP
YOLOv5x | D | 0.7708
YOLOv5s | D | 0.7265
YOLOv5x | C | 0.7116
YOLOv5s | C | 0.6827
YOLOv5x | B | 0.733
YOLOv5s | B | 0.7384
YOLOv5x | A | 0.7559
YOLOv5s | A | 0.7413