Improved Mask R-CNN Combined with Otsu Preprocessing for Rice Panicle Detection and Segmentation

1 School of Information and Computer, Anhui Agricultural University, Hefei 230036, China
2 Anhui Rural Comprehensive Economic Information Center, Hefei 230031, China
* Author to whom correspondence should be addressed.
Submission received: 20 September 2022 / Revised: 4 November 2022 / Accepted: 14 November 2022 / Published: 17 November 2022

Abstract

Rice yield is closely related to the number and proportional area of rice panicles. Currently, rice panicle information is acquired by manual observation, which is inefficient and subjective. To solve this problem, we propose an improved Mask R-CNN combined with Otsu preprocessing for rice panicle detection and segmentation. This method first constructs a dataset of rice images in a large field environment, expands the dataset using data augmentation, and then uses LabelMe to label the rice panicles. The optimized Mask R-CNN is used as the rice panicle detection and segmentation model. Actual rice panicle images are preprocessed by the Otsu algorithm and input into the model, which yields accurate rice panicle detection and segmentation results, using the structural similarity and perceptual hash value as the measurement criteria. The results show that the proposed method has the highest detection and segmentation accuracy for rice panicles among the compared algorithms. When further calculating the number and relative proportional area of the rice panicles, the average error of the number of rice panicles is 16.73% with a minimum error of 5.39%, and the error of the relative proportional area of rice panicles does not exceed 5%, with a minimum error of 1.97% and an average error of 3.90%. The improved Mask R-CNN combined with Otsu preprocessing for rice panicle detection and segmentation proposed in this paper operates well in a large field environment, making it highly suitable for rice growth monitoring and yield estimation.

1. Introduction

Rice is one of the most important food crops worldwide, as more than half of the world’s population depends on rice as its main food source. China is the second largest rice-growing country, accounting for more than 30% of total global rice production, and rice is the second most important crop in China [1]. Rice yield is closely related to the number and proportional area of rice panicles. Traditional monitoring methods mainly rely on manual observation, which is tedious, inefficient and subjective, and has difficulty meeting real-time, rapid and nondestructive monitoring requirements. With improvements in agricultural informatization and the development of computer vision, traditional production management methods are gradually shifting to an artificial intelligence basis, and the detection and assessment of rice pests, diseases and fertility can be achieved with the help of images taken by field cameras in real time, allowing timely prevention and management [2,3,4,5,6,7]. Accurate rice panicle segmentation is a key step in achieving intelligent rice monitoring and yield assessment. However, segmenting rice panicles is very difficult due to the complex natural environment in rice fields, for example, occlusion between panicles and leaves, easily confused panicle and leaf colors, and uneven illumination changes.
Researchers have applied image detection technology to detect and segment field crops such as rice panicles and wheat ears, effectively improving detection efficiency and reducing costs. By leveraging color and texture features at the pixel level, Zhou, C. selected red, green and blue images acquired by manned ground vehicles according to the light intensity for superpixel segmentation and used the relevant features to count the number of wheat ears, but the accuracy varied greatly with the number of features and the illumination intensity [8]. Fernandez-Gallego, J. proposed an automatic counting algorithm for measuring the density of ears of wheat using RGB images acquired by handheld cameras; however, this method was only suitable for counting unripe wheat [9]. Lu fused a simple linear iterative clustering (SLIC) method to generate superpixels and segment ears of corn, but the hardware facilities that this method relies on could not meet the requirements of real-time segmentation [10]. Xiong, X. proposed a method based on SLIC superpixel region generation and convolutional neural network (CNN) classification to segment rice panicles [11]. However, this method was time-consuming and cumbersome, and the segmentation results were easily affected by many factors. Fan, M. collected images of wheat ears in a field environment with a camera and accurately extracted the outlines of the ears based on their color and texture features, finally obtaining the number of ears in the image; however, illumination had a great influence on color and texture features in the real environment [12]. Li, H. used the LAB color model and the Otsu thresholding segmentation (Otsu) method to extract rice seedling information and combined it with skeleton extraction to detect the number of machine-planted seedlings [13]. Cao, Y. used digital images of experimental rice fields taken by unmanned aerial vehicles (UAV), the number of rice spikes in ground plot samples and other measured data, and applied the feature parameters extracted by the best subset selection algorithm to construct a rice spike segmentation model and obtain the number of rice spikes [14]. Nevertheless, the dataset was difficult to prepare, the color features were unstable, and the segmentation effect was closely related to the shooting conditions. Li, Q. generated spike candidate regions using Laws’ texture energy, classified each candidate region as spike or background, and counted the remaining candidate regions to obtain the number of spikes for spike detection; interference from noise was reduced by applying area and height thresholds, but the method was only applied to individual plants in a laboratory setting [15]. Olsen, P. proposed a computer vision-based method for detecting and counting panicles in UAV-acquired sorghum images [16]. In summary, the above methods have a variety of limitations: color and texture alterations caused by uneven changes in light and noise; fuzzy boundaries caused by similarities between the panicle and leaf colors; and serious panicle adhesion in field environments, which leads to low detection accuracy or poor segmentation results.
Developments in computer vision have led to the widespread adoption of object detection, semantic segmentation and instance segmentation in agriculture [17,18,19,20,21,22,23,24,25,26,27,28]. Recent studies have applied CNNs to recognizing, counting and segmenting the spikes of grain crops in the field. Zhang, L. constructed a wheat spike recognition model based on the characteristics of winter wheat images collected at the flowering stage in a large field environment and trained the model for winter wheat spike counting using the gradient descent method [29]. Madec, S. estimated the spike density of wheat based on the Faster R-CNN algorithm with 91% counting accuracy [30]. Yang, M. used a deep learning image processing method to construct a semantic segmentation network for UAV images to estimate the number of rice crops in large-area rice fields and experimentally found that, combined with excess green factor retraining, this method yielded better results [31]. Duan, L. applied a rice image dataset and data augmentation techniques to train three models for rice crop segmentation offline and found that a SegNet-based fully convolutional network effectively improved accuracy and efficiency in rice spike segmentation [32]. However, this method required a large amount of image edge filling and manual annotation in Photoshop when preparing the dataset. Kong, H. proposed a Mask R-CNN-based feature extraction and 3D recognition method for rice spike computed tomography (CT) images, extracting 3D spike grain features based on the Euclidean distance to calculate the maturing rate and determine rice spike fullness or dryness; however, this method was not suitable for practical field environments, and the equipment was difficult to operate [33]. Theoretically, CNN-based methods are capable of both detection and segmentation, but various factors in real-time field environments can interfere with these processes and have not been fully considered in the literature, and only separate studies on rice panicle detection or segmentation have been conducted.
Typical object detection algorithms such as Faster R-CNN and the YOLO (you only look once) series mainly obtain target location information through rectangular boxes [18,20]. Semantic segmentation can distinguish individual pixels in the image but cannot distinguish between different individuals of the same target class [22,23]. In contrast, instance segmentation can determine both the location information and the semantic information of the target. However, the common instance segmentation algorithm SOLOv2 requires excessive training, and the boundary information from the PolarMask algorithm can be unclear [26,27]. Mask R-CNN, another common instance segmentation algorithm, can be used to detect and segment rice panicles [34]. Compared with other algorithms, it does not require a fixed input image scale and has a faster detection speed. Using a constructed dataset to directly train the model for detection, we can obtain information on the quantity, location and area occupancy of rice panicles without worrying about false detections. However, the rice panicle distribution can be relatively tight, and the colors of the panicles and leaves vary unevenly, which leads to missed detections.
Therefore, this paper proposes an improved Mask R-CNN combined with Otsu preprocessing for rice panicle detection and segmentation [35]. The dataset is first expanded through data augmentation and annotated with LabelMe, and then the base network training is optimized by combining the KL divergence and the Soft-DIoU-NMS algorithm [36,37]. The real-time field rice images are preprocessed using the Otsu algorithm and then fed into the trained model for detection and segmentation. This detection and segmentation method is easy to operate and has high accuracy, making it highly suitable for monitoring rice growth and estimating yield. The following section introduces the dataset and methods in detail; the third section presents the results and analysis of the experimental data; the fourth section compares rice panicle image detection and segmentation methods; and the last section concludes.

2. Dataset and Method

2.1. Dataset and Preprocessing

The dataset was obtained from the experimental rice fields at the Anhui Agricultural Meteorological Center Hefei Branch experimental base, Hefei, China (117°03′26″ E, 31°57′20″ N) [7]. Images of the whole process from transplanting to harvesting of four varieties of one-season rice (Dangyujing 10, Xuanjing Nuo 1, Chuangliangyou 699 and Liangyou 631) were collected in 2019 and 2020. The rice was planted in six different sowing periods in 12 plots of equal area and proportion (each plot was 12 m × 5 m). As shown in Figure 1 and Figure A1, network cameras were set up at the diagonal points of the rice field. Camera 1 took images of the whole growth period of the rice in areas 1–6, and camera 2 took images of the whole growth period of the rice in areas 7–12; the blurred images taken in areas 6 and 7 were not used. A Hikvision iDS-2DF8825IX-A(T5) camera (made in Hangzhou, Zhejiang, China) was used to acquire a total of 300 images for creating a rice milk ripening stage image dataset. The camera outputs 3840 × 2160 video at 25 fps with 2100 lines of resolution and 37× optical zoom, supports a maximum of 300 preset positions and 18 cruise paths, and was mounted on a riser 2.5 m above the ground [7]. Ten shooting points were set in each sowing period block of each experimental field, and the images were automatically uploaded to the FTP server after fixed-time, fixed-point collection, so as to avoid, as far as possible, substantial changes in light intensity caused by direct sunlight. Because the original images were too large for segmentation, they were randomly cropped into regions of 573 × 880 pixels; each image was cropped only once to avoid training overfitting. To improve the algorithm robustness, the dataset was expanded using random cropping, mosaicking, and scaling to simulate images obtained at different angles and distances and in different environments. The expanded dataset consisted of a total of 1000 images and was divided into a training set, test set, and validation set at a ratio of 8:1:1.
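For concreteness, the following minimal Python sketch illustrates the random 573 × 880 cropping and the 8:1:1 train/test/validation split described above; the crop orientation and file handling are illustrative assumptions rather than the authors' exact pipeline.

```python
# Illustrative sketch of the cropping and 8:1:1 split described above; the crop
# orientation (573 rows x 880 columns) and the file handling are assumptions.
import random

CROP_H, CROP_W = 573, 880  # crop size reported in the paper


def random_crop(img, h=CROP_H, w=CROP_W):
    """Crop one random h x w region from a large field image (numpy array)."""
    H, W = img.shape[:2]
    y, x = random.randint(0, H - h), random.randint(0, W - w)
    return img[y:y + h, x:x + w]


def split_dataset(files, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle file names and split them into train/test/validation lists."""
    files = list(files)
    random.Random(seed).shuffle(files)
    n = len(files)
    n_train, n_test = int(n * ratios[0]), int(n * ratios[1])
    return files[:n_train], files[n_train:n_train + n_test], files[n_train + n_test:]
```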

2.2. Method

The principle of the original Mask R-CNN algorithm is shown in Figure 2. Mask R-CNN is an algorithm for the detection and segmentation of targets in complex backgrounds. Feature pyramid networks (FPN) and anchor technologies are used to optimize the detection of targets at different scales, and fully convolutional networks (FCN) are combined to achieve accurate segmentation of the target [35]. Here, we propose a method combining an improved Mask R-CNN with Otsu preprocessing, called Panicle-Mask, to further improve detection accuracy. The flowchart is shown in Figure 3. The original Mask R-CNN algorithm is first improved and optimized for this dataset by adjusting the anchors and the detection box handling (steps labelled B, C, and D in the flowchart), and the rice images are preprocessed by the Otsu algorithm (step A) and then input into the trained model for detection and segmentation.

2.2.1. Otsu Preprocessing

Otsu is a traditional adaptive thresholding algorithm for grayscale image segmentation [35]. In this case, rice panicles can be segmented based on the grayscale pixel distribution in an image, because the rice panicle in a rice image shows a clear contrast with the background. First, the excess green index (ExG) is calculated based on the dataset [38]. Then, the collected image is converted to grayscale using the ExG, and the Otsu algorithm is used to calculate the threshold that transforms the grayscale image into a binary image, allowing segmentation without leakage. For the collected rice color images, the values of the rice panicle and background in the R, G, and B color components have different characteristics. In creating the ExG grayscale images, the original image is first separated into three independent primary color planes, and different color feature combinations are selected. Then, each pixel in the image is transformed to enhance the contrast between the target and the background. The ExG is obtained by automatically exhausting combinations of the color components and adding human supervised judgement. Equation (1) is the general RGB linear combination formula, and the best combination for this dataset, that is, the ExG, is calculated from the data. This combination is robust to light and color changes and can more completely extract the features of the part of the image corresponding to the ears of rice. Compared to the original images, the ExG-based extraction of green plant images is better; shadows, withered grass, and soil elements can be substantially suppressed, and areas representing the plant are more prominent. For crop or weed recognition, the most commonly used graying method is the ExG method. Equation (2) represents a common ExG expression. For segmentation based on the Otsu algorithm, the ExG is calculated based on the dataset and then used to perform feature extraction to segment the rice panicles. Equation (3) shows the ExG expression used in this paper, where R and G are the red and green color components of the image, respectively.
T(i,j) = rR(i,j) + gG(i,j) + bB(i,j)
ExG = 2G(i,j) − R(i,j) − B(i,j)
ExG′ = 255 − G(i,j) + R(i,j)
where T(i,j) is the resultant feature after the linear operation, R(i,j), G(i,j), and B(i,j) are the grayscale values of the red, green and blue color components, respectively, of the image at (i,j), r, g, and b are the linear coefficients of the color components R(i,j), G(i,j), and B(i,j), and (i,j) denotes the two-dimensional array variables of the color components. Equation (4) is the Otsu algorithm formula:
I = 1, I ≥ threshold
I = 0, I < threshold
where I is the gray value of image pixels, and threshold is the threshold that maximizes the variance between all pixel classes in the gray map. When the pixel gray value in the image is greater than the threshold, the pixel corresponds to a rice panicle; otherwise, it belongs to the background area.
According to the underlying principle of the Otsu algorithm, to obtain a better segmentation effect, the variance between the classes needs to be maximized while minimizing the within-class variances of the target and background. The between-class variance is thus a measure of the uniformity of the grayscale distribution; the larger the value is, the greater the difference between the target and the background after segmentation at a given threshold, and the more favorable the feature extraction [35]. When part of the target is misclassified as background or part of the background is misclassified as target, the difference between the two parts of the image decreases; thus, by choosing the threshold with the largest between-class variance, the probability of misclassification is minimized. This method can segment out the ears of rice without mistaking them for background. However, a field environment is complex: the color of the panicles and leaves is greatly affected by light, the segmented rice panicles may be fragmented and incomplete, and some leaves or stalks may be misdetected. Therefore, this method uses the improved Mask R-CNN in combination with Otsu preprocessing for detecting and segmenting rice panicles.
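As an illustration of the preprocessing step, the sketch below computes the ExG′ gray image of Equation (3) and applies Otsu thresholding (Equation (4)) with OpenCV; clipping the ExG′ values to the 8-bit range before thresholding is an implementation assumption.

```python
# Sketch of ExG'-based graying (Equation (3)) and Otsu thresholding (Equation (4));
# clipping the ExG' values to [0, 255] before thresholding is an assumption.
import cv2
import numpy as np


def otsu_panicle_mask(bgr_image):
    """Return a binary mask whose white pixels are candidate rice panicle regions."""
    r = bgr_image[..., 2].astype(np.int16)
    g = bgr_image[..., 1].astype(np.int16)
    exg = np.clip(255 - g + r, 0, 255).astype(np.uint8)   # ExG' = 255 - G + R
    # Otsu picks the threshold that maximizes the between-class variance
    _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```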

2.2.2. Adjustment of the RPN Anchor Box

The region proposal network (RPN), a lightweight neural network, was first proposed in the Faster R-CNN algorithm; Mask R-CNN follows its basic structure and adds a segmentation branch [24]. The RPN generates all possible target candidate regions by sliding fixed windows over the feature maps extracted by the feature extraction network, and then maps these candidate regions back to the original image through the region of interest alignment (ROI Align) operation. Finally, it performs category classification, box regression, and mask generation on these candidate regions [25]. In the Mask R-CNN algorithm, the RPN generates anchor boxes at five different scales, that is, five initial anchor boxes with set areas and aspect ratios; by further fine-tuning their position and size, the best detection box containing the target can be selected. Whether the initial anchor boxes are set appropriately affects the prediction box accuracy. In this study, the anchor box sizes were appropriately increased and decreased from the original sizes under otherwise identical conditions, and the optimal initial anchor box sizes were obtained by comparing the mean average precision (mAP).
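The paper does not list the corresponding code; as a sketch, the anchor scales can be adjusted through a configuration subclass if the widely used Matterport Keras implementation of Mask R-CNN is assumed (the mrcnn package and the attribute names below come from that implementation, not from this paper).

```python
# Hypothetical configuration assuming the Matterport Keras Mask R-CNN (mrcnn package).
from mrcnn.config import Config


class PanicleConfig(Config):
    NAME = "panicle"
    NUM_CLASSES = 1 + 1                          # background + rice panicle
    RPN_ANCHOR_SCALES = (16, 32, 64, 128, 256)   # smaller anchors suit small panicles
    RPN_ANCHOR_RATIOS = [0.5, 1, 2]              # default aspect ratios left unchanged
    IMAGES_PER_GPU = 1
```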

2.2.3. Bounding Box Adjustment

In the traditional Mask R-CNN algorithm, a candidate box is represented by a four-dimensional vector (x, y, w, h). By learning the deformation ratio between the real and predicted boxes, the final prediction boxes are obtained by step-by-step fine-tuning. During the training process, the outputs are a series of probability distributions converging to a Gaussian distribution. Mask R-CNN defines a multitask loss function consisting of three components, as shown in Equation (5), which specifies the calculation of the classification error and detection error.
L = L_{cls} + L_{reg} + L_{mask} = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*) + L_{mask}
where Lcls and Lreg are the classification error and detection error, respectively, and Lmask is the semantic segmentation error. Ncls and Nreg are normalization parameters, i is the index of an anchor in a mini-batch, pi is the predicted probability that anchor i contains an object, and pi* is the corresponding ground-truth label. The ti and ti* represent the coordinates of the prediction box and the real box, respectively.
However, the bounding box may not be clear when the target is partially obscured, leading to inaccurate labelling and a blurred bounding box, which directly affects the detection error. To solve the problem of blurred bounding boxes, this paper uses the KL divergence loss function instead of the traditional loss function. The KL loss function still consists of three parts; the classification error and semantic segmentation error remain unchanged and the detection error is Lreg’, which is calculated using Equation (6). The KL divergence, called information gain or information divergence in statistical model inference, measures the difference between two probability distributions [39]. According to the KL divergence theorem, the boundary prediction box and the real box are modeled as Gaussian distributions and Dirac delta functions, respectively, and the KL loss function is the KL distance between the distribution area of the prediction box and that of the real box [36]. Therefore, it is only necessary to calculate the similarity between the Gaussian distribution of the prediction box and the probability distribution of the real box. When the KL divergence of the two distributions is smaller, the probability distributions of the two are closer; that is, the prediction box is closer to the real box.
Lreg′ = e^(−α)(|xe − xg| − 1/2) + α/2
where xg is the coordinate of the real box, xe is the coordinate of the predicted box, and α = log(σ2), where σ is the standard deviation used to measure the difference between the predicted box and the real box. As σ → 0, the predicted box approaches the real box, and the prediction is more accurate.
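A direct transcription of Equation (6) as a standalone function might look as follows; treating α as a value predicted per coordinate alongside the box offsets follows He et al. [36] and is an assumption here.

```python
# Sketch of the KL-based regression loss of Equation (6); alpha = log(sigma^2).
import numpy as np


def kl_reg_loss(x_e, x_g, alpha):
    """Per-coordinate KL regression loss between predicted (x_e) and real (x_g) boxes."""
    return np.exp(-alpha) * (np.abs(x_e - x_g) - 0.5) + alpha / 2.0
```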

2.2.4. Prediction Box Deletion and Selection

Mask R-CNN uses the non-maximum suppression (NMS) algorithm to rank the predictions of a given category by confidence level and filters them by calculating the intersection-over-union (IoU) to obtain the best box for each object. However, with this algorithm the detection box with the second highest confidence may be mistakenly deleted due to its high overlap with the detection box with the highest confidence. The improved Soft DIoU-NMS can effectively solve this problem [37].
The Soft DIoU-NMS algorithm first sorts the detection boxes by confidence and selects the highest-scoring box as the benchmark; the confidence scores of the remaining boxes that overlap it strongly are decayed linearly, while boxes with little overlap keep their confidence unchanged. The benchmark box is retained, the box with the next highest confidence becomes the new benchmark, and the process repeats until all boxes have been processed; finally, low-scoring boxes are culled to achieve the desired effect. The improved rule is shown in Equation (7). Compared with the original NMS algorithm, the complexity of the updated algorithm is practically unchanged, and the implementation is equally simple.
Si = Si, if IoU − RDIoU(μ, Bi) < ε
Si = Si(1 − IoU(μ, Bi)), if IoU − RDIoU(μ, Bi) ≥ ε
where Si is the confidence score of the current category, RDIoU is the penalty term of the DIoU loss function, Bi denotes all the compared detection boxes in the current category, and μ denotes the detection box with the highest confidence among all the prediction boxes, generally taking ε = 0.5.
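As a reference sketch of Equation (7), the following NumPy code decays the scores of boxes whose IoU minus the DIoU penalty with the current benchmark exceeds ε; the final score threshold used for culling low-scoring boxes is an assumption.

```python
# Minimal Soft DIoU-NMS sketch following Equation (7). Boxes are (x1, y1, x2, y2).
import numpy as np


def diou_penalty(a, b):
    """R_DIoU: squared center distance normalized by the enclosing-box diagonal."""
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    return ((ax - bx) ** 2 + (ay - by) ** 2) / (cw ** 2 + ch ** 2 + 1e-9)


def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def soft_diou_nms(boxes, scores, eps=0.5, score_thresh=0.001):
    """Return kept indices and the decayed scores (score_thresh is an assumption)."""
    boxes, scores = np.asarray(boxes, float), np.asarray(scores, float)
    keep, idxs = [], list(np.argsort(-scores))
    while idxs:
        m = idxs.pop(0)                      # highest-scoring box is the benchmark
        keep.append(m)
        for i in idxs:
            if iou(boxes[m], boxes[i]) - diou_penalty(boxes[m], boxes[i]) > eps:
                scores[i] *= 1.0 - iou(boxes[m], boxes[i])   # linear decay
        idxs = [i for i in sorted(idxs, key=lambda j: -scores[j])
                if scores[i] > score_thresh]
    return keep, scores
```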

2.3. Evaluation Indicators

2.3.1. Precision, Recall, F1-Score, and IoU

Precision and recall are often used as model evaluation indicators in target detection. Precision, also known as the accuracy rate, reflects the proportion of correct classifications among the samples classified as positive by the model. Recall, also known as the completeness rate, reflects the proportion of correctly predicted positive samples among the actual positive samples. The area under the precision–recall (P–R) curve gives the average precision (AP) of the model, and the AP values of all categories are averaged to obtain the mean average precision (mAP); precision and recall are calculated with Equation (8). The F1-score is the weighted harmonic mean of precision and recall. The IoU is the overlap rate between the candidate box and the ground-truth box generated in target detection, that is, the ratio of their intersection to their union; the F1-score and IoU are calculated as in Equation (9).
P = TP/(TP + FP)
R = TP/(TP + FN)
Fa-score = (1 + a²) × P × R/(a² × P + R)
F1-score = 2 × P × R/(P + R), when a = 1
IoU = (A ∩ B)/(A ∪ B)
where TP is the number of correctly predicted positive samples, FP is the number of negative samples incorrectly predicted as positive, and FN is the number of positive samples incorrectly predicted as negative. In Equation (9), a is the weight balancing precision and recall (a = 1 gives the F1-score), A is the area of the detected box, and B is the area of the real box.
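The indicators in Equations (8) and (9) reduce to a few lines of code; in this sketch the TP/FP/FN counts are assumed to come from matching predictions to ground truth at a fixed IoU threshold.

```python
# Sketch of the evaluation indicators in Equations (8) and (9).
def precision(tp, fp):
    return tp / (tp + fp)


def recall(tp, fn):
    return tp / (tp + fn)


def f1_score(p, r):
    return 2 * p * r / (p + r)


def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union
```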

2.3.2. SSIM

The structural similarity (SSIM) is an indicator used to measure the similarity of two images, consisting of comparisons of brightness l, contrast c, and structure s [40,41]. The two images must be the same size during the calculation and are first converted to grayscale; the SSIM value of each corresponding sub-image is then obtained by sliding a window of a certain size across the images pixel by pixel. Finally, the similarity of the two images is obtained by averaging the values over all windows. The SSIM is a number between 0 and 1, calculated as shown in Equation (10); the larger the value is, the smaller the difference between the two images.
SSIM (x, y) = [l (x, y)]α[c (x, y)]β[s (x, y)]γ
where l (x, y) is used as an estimate of luminance, c (x, y) is used as an estimate of contrast, and s (x, y) is used as a measure of structural similarity, calculated as in Equation (11). The α, β, and γ are weights.
l (x, y) = (2μxμy + C1)/(μx2 + μy2 + C1)
c (x, y) = (2σxσy + C2)/(σx2+ σy2 + C2)
s (x, y) = (σxy + C3)/(σxσy + C3)
where μx and μy are the means of the pixel values of images x and y, σx and σy are the standard deviations of the pixel values, σxy is the covariance of x and y, and C1, C2, and C3 are constants.
Finally, letting α = β = γ = 1 and C3 = C2/2 to simplify the calculation, we obtain the final Equation (12).
SSIM (x, y) = ((2μxμy + C1)(2σxy + C2))/((μx² + μy² + C1)(σx² + σy² + C2))
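In practice the SSIM of Equation (12) can be computed with scikit-image, as in the sketch below; converting both images to grayscale first follows the procedure described above, while the window size is left at the library default.

```python
# Sketch: SSIM between a segmentation result and its manually labelled reference.
import cv2
from skimage.metrics import structural_similarity


def ssim_to_reference(result_bgr, reference_bgr):
    a = cv2.cvtColor(result_bgr, cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    return structural_similarity(a, b)   # closer to 1 means more similar
```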

2.3.3. pHash

Hash algorithms include the mean hash, the difference hash, and the perceptual hash (pHash); the pHash is commonly used to determine the similarity of two images irrespective of their height, width, brightness or color [42]. The pHash algorithm reduces the image to its low-frequency content through a discrete cosine transform (DCT) and encodes a unique hash fingerprint to form the hash sequence. The similarity is then calculated by comparing the values in the two sequences position by position. Compared with the other two algorithms, this algorithm has better robustness and higher accuracy. The higher the pHash value is, the less similar the two images are; if the value is less than 5, the two images are very similar. Treating the input image as a two-dimensional signal, the low-frequency part contains most of the image information with small brightness variation, and the high-frequency part contains the image details with large brightness variation. The DCT is calculated as shown in Equation (13).
E(u) = c(u) \sum_{i=0}^{N-1} f(i) \cos\left[\frac{(i + 0.5)\pi}{N} u\right], \quad F(u,v) = c(v) \sum_{j=0}^{N-1} E(u) \cos\left[\frac{(j + 0.5)\pi}{N} v\right], \quad c(u) = \begin{cases} \sqrt{1/N}, & u = 0 \\ \sqrt{2/N}, & u \neq 0 \end{cases}
where E(u) and F(u,v) are the coefficients of the one-dimensional and two-dimensional transformations, respectively, N is the number of points in the original signal, f(i) is the initial input signal, c(u) is the normalization coefficient of the orthogonal transformation, and u and v are the frequency indices in each dimension of the transform.
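A minimal pHash sketch consistent with this description is shown below: the grayscale image is resized, transformed with the DCT of Equation (13), reduced to its low-frequency 8 × 8 block, and binarized against the block median; the resize size and the median binarization are implementation assumptions.

```python
# Minimal pHash sketch: DCT of the resized image, low-frequency 8x8 block,
# median binarization, and Hamming-distance comparison.
import cv2
import numpy as np


def phash(gray_image, hash_size=8, dct_size=32):
    img = cv2.resize(gray_image, (dct_size, dct_size)).astype(np.float32)
    dct = cv2.dct(img)                          # 2-D discrete cosine transform
    low = dct[:hash_size, :hash_size]           # low-frequency block holds most information
    return (low > np.median(low)).flatten()     # 64-bit fingerprint


def phash_distance(img_a, img_b):
    """Hamming distance between fingerprints; values below 5 indicate very similar images."""
    return int(np.count_nonzero(phash(img_a) != phash(img_b)))
```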

3. Experimental Results and Analysis

3.1. Experimental Process

The Keras deep learning framework was run on a computer with an Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30 GHz, 12 GB of memory, an NVIDIA GeForce GTX 1080 Ti GPU, and the Windows 10 Professional operating system to perform the experiments in this study. Python was used to train the rice panicle detection model. First, LabelMe was used to label the dataset, applying a different label to each rice panicle. Then, the boundary of each rice panicle was obtained from the generated json-format file to form a mask image. The Mask R-CNN and improved Mask R-CNN algorithms were trained separately. Under the same conditions, five sets of different fixed anchor box sizes were used for training. After the experiments, the parameters of the improved algorithm were set as follows: learning_rate = 1e-5, RPN_ANCHOR_SCALES = (16,32,64,128,256), epochs = 100, and steps_per_epoch = 200. The improved model converged after 20,000 training iterations, and the corresponding loss and P–R curves are shown in Figure 4 and Figure 5, respectively. As seen in Figure 4, the loss function converged, and the model finished training.
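The training call itself is not listed in the paper; assuming the Matterport Keras implementation again (and reusing the PanicleConfig sketch from Section 2.2.2), it could look like the following, where the dataset arguments are placeholders for the LabelMe-derived dataset objects.

```python
# Hypothetical training sketch assuming the Matterport mrcnn package; dataset_train
# and dataset_val are placeholders for the LabelMe-derived dataset objects.
import mrcnn.model as modellib


def train_panicle_model(dataset_train, dataset_val, log_dir="./logs"):
    config = PanicleConfig()            # anchor scales (16, 32, 64, 128, 256)
    config.LEARNING_RATE = 1e-5
    config.STEPS_PER_EPOCH = 200
    model = modellib.MaskRCNN(mode="training", config=config, model_dir=log_dir)
    model.train(dataset_train, dataset_val,
                learning_rate=config.LEARNING_RATE,
                epochs=100,
                layers="all")
    return model
```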

3.2. Results and Analysis

3.2.1. Experimental Results

Table 1 shows the detection results of the model with different anchor box sizes. The accuracy of the original Mask R-CNN model is 68.58% using the original prior box sizes of {32,64,128,256,512}. Under the same conditions, the model was trained with different anchor box sizes, producing the RPN-panicle1, RPN-panicle2, RPN-panicle3, and RPN-panicle4 models. The optimal model was RPN-panicle2, which showed an accuracy improvement of 11.67% over the original model with an optimal RPN anchor box size of {16,32,64,128,256}. Then, the detection box accuracy was improved by improving the loss function and the NMS algorithm, and the final improved model, named Panicle-Mask, achieved a mAP of 89.10% after training, with a detection box IoU of 84.42%. The IoU value obtained by training the most basic Mask R-CNN model was 59.73%, while the IoU value obtained using the method in this paper was 87.42%.
Figure 6 shows the detection results for four randomly selected images. Otsu, Mask R-CNN, and Panicle-Mask denote, respectively, the segmentation method based on Otsu thresholding, detection and segmentation based on the original Mask R-CNN, and detection and segmentation based on the improved Mask R-CNN combined with the Otsu thresholding algorithm. As shown in Figure 6, all three methods could segment the rice panicles. The latter two methods label different detected rice panicles with different colors, where each rectangular box is the smallest box enclosing a detected rice panicle and the attached number indicates the detection confidence. Unlike the Otsu algorithm, the Mask R-CNN-based algorithms both detect and segment the rice panicles, and their number and relative positions are obtained from the number and coordinate information of the rectangular boxes. The background interfered greatly with the Otsu algorithm: when the color difference between the rice panicles and the background was relatively large, a good segmentation effect was achieved, but when the image contained similar colors, the segmentation was incomplete and there were many false detections. Because the Otsu algorithm is used to preprocess the image, the influence of the background on detection is reduced, and the model is improved and optimized according to the characteristics of the dataset; therefore, Panicle-Mask has higher accuracy.
Figure 7 shows examples of the binary images obtained with the three methods after segmentation. The similarity of the images can be further measured from the grayscale images, and the proportion of rice panicles in the image can be calculated from the binary images. As seen in Figure 7, the binarized images obtained by the Otsu threshold segmentation algorithm were correctly segmented overall, but a large amount of background was also segmented as rice panicles, with gaps. The results detected directly using the original Mask R-CNN algorithm showed many missed detections, while detection using the combination of the Otsu thresholding segmentation algorithm and the improved Mask R-CNN algorithm made up for the respective shortcomings of the individual methods. Further analysis of the examples in Figure 6 reveals that the results obtained by the Otsu algorithm were generally on the high side and those obtained by the original Mask R-CNN algorithm were generally on the low side. The results obtained by the combined and improved Panicle-Mask algorithm were the most similar to the real results.

3.2.2. Error Analysis

Twenty randomly selected rice images at the milk ripening stage were detected and segmented using the three methods. The results were converted into grayscale images, and the SSIM and pHash values were calculated with respect to reference grayscale images to comprehensively measure the detection accuracy. The reference grayscale images were generated from the manually labelled rice panicle images. Some example images are shown in Figure 8. In some of the images, such as those in Figure 8a–c, the contrast between the rice panicles and the background is obvious, while in others, such as those in Figure 8d–f, the contrast between the two is poor.
Table 2 shows the SSIM results obtained using the different methods. Among the three methods, the SSIM values obtained by direct detection with the Mask R-CNN algorithm were the lowest, the values obtained with the Otsu algorithm were the second highest, with a maximum of 92.31%, and the values obtained using Otsu preprocessing with the improved model were the highest, with minimum values above 88% and a maximum of 95.86%. The results of the Otsu algorithm show that segmentation was better for rice images 9, 12, and 13, where the color difference between the rice panicles and the background was larger, and worse for rice images 3, 5, 19, and 20, where the color difference was smaller. When images 3 and 20 were first preprocessed using the Otsu algorithm to remove interference from the background region and then detected and segmented using the improved Mask R-CNN model, the obtained SSIM values improved by approximately 10%.
Table 3 shows the pHash results obtained using the different methods; the pattern of the data was approximately the same as that in Table 2. As seen in Table 3, the accuracy of the results obtained by directly using the Mask R-CNN algorithm for detection was low, with a maximum pHash value of 12, which may be related to the different growth characteristics of rice with different panicle shapes, sizes, and colors. The accuracy of the results obtained using the Otsu algorithm showed an overall improvement; specifically, the segmentation results for images 12 and 13 were more similar to the real results. The segmentation accuracy was further improved by combining the Otsu algorithm and Mask R-CNN in the Panicle-Mask algorithm: the perceptual hash value of all the detected rice images was no more than 5, and the perceptual hash values of images 12 and 13 were reduced to 2, which indicates that the results obtained by this method are very similar to the real results.

3.3. Detection and Segmentation Results

From the above analysis, it is clear that detection using the Otsu algorithm combined with the improved Mask R-CNN produces results with the highest similarity to the reference images. This method first removes most of the background interference and then uses the trained model for detection and segmentation, which yields both the number and the specific locations of the rice panicles and allows their proportional area to be calculated. Because the rice panicles are densely distributed and partly stuck together, the count of rice panicles carries a relatively large error, whereas the proportional area is little affected. Figure 9 and Figure 10 show a comparison between the reference number and proportional area of the rice panicles and those obtained using the proposed method for 40 randomly selected rice images at milk maturity. The horizontal axis represents the image number, and the vertical axes in Figure 9 and Figure 10 represent the number of panicles and the proportional area of the panicles in the corresponding image, respectively. The reference value is the number of rice panicles and the proportional rice panicle area obtained by manual counting and program statistics after the corresponding image was manually segmented, and the test value is the corresponding result obtained using the method in this paper. The first 20 images were from the validation set, i.e., the 2019 and 2020 rice images, and the last 20 images were randomly selected from the 2018 milk-ripening rice images to further verify the generalizability of this method. If a panicle was completely blocked by leaves in places, the separated visible parts were counted as different panicles. Analysis of Figure 9 and Figure 10 shows that the results obtained by the method in this paper were all lower than the reference values, and the error for the number of panicles was slightly higher than that for the proportional area, with an average error of 16.73% versus 3.90%; the corresponding minimum errors were 2.02% and 1.97%, respectively.
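For reference, the panicle count and proportional area can be read off a detection result as in the sketch below; the result dictionary layout (rois, masks) follows the Matterport detection output and is an assumption, as is the form of the relative error.

```python
# Sketch: panicle count, proportional area, and relative error versus a reference.
import numpy as np


def count_and_area(result, image_shape):
    """Number of detected panicles and their proportional area in the image."""
    n_panicles = result["rois"].shape[0]        # one bounding box per detected instance
    merged = np.any(result["masks"], axis=-1)   # union of all instance masks (H x W)
    area_ratio = merged.sum() / float(image_shape[0] * image_shape[1])
    return n_panicles, area_ratio


def relative_error(measured, reference):
    return abs(measured - reference) / reference
```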

3.4. Comparison with Rice Panicle Image Detecting and Segmenting Methods

The proposed method was compared with existing rice panicle detection and segmentation methods, as shown in Table 4. The table lists the specific algorithms used and their evaluation indicators and summarizes the advantages and disadvantages of each method. In addition to the evaluation indicators already listed above, the root mean square error (RMSE), also called the standard error, measures how close the calculated value is to the true value; the smaller the value, the higher the accuracy of the model. Accuracy (acc) refers to the proportion of correct results out of the total. As can be seen from Table 4, the method proposed in this paper is clearly better than the method proposed by Xiong, X. and slightly worse than that proposed by Duan, L. [11,32]. However, the images of rice in reference [32] were taken from the top with a Nikon D40 digital camera, and the number of rice panicles in each image was small. In this paper, rice images were taken from the side with field cameras, and there were many rice panicles in each image, which is more complex but more practical. Compared with the method proposed by Cao, Y. in reference [14], the RMSE of the number of rice panicles is slightly lower; moreover, that method has higher requirements for the flight height of the UAV when collecting images and is not suitable for small-range monitoring in the rice field environment [14]. The method proposed by Kong, H. in reference [33] was used for counting the grains per rice panicle in a laboratory environment. Although the counting accuracy in this paper (83.27% for the number of rice panicles) was lower than that of reference [33], the proposed method is more practical for a real field environment. It can be seen from Table 4 that the method proposed in this paper gives good results in an actual field environment.

4. Conclusions

Based on the basic Mask R-CNN instance segmentation algorithm, this research proposes an improved Mask R-CNN combined with Otsu preprocessing for detecting and segmenting rice panicles and compares the results among the three methods. Compared to the images produced by the component methods alone, the binary image obtained using the proposed method after detection, transformation and calculation is closer to the real image, and information on the number and positions of rice panicles can be obtained with less interference from the background. After the rice image is segmented, the number of rice panicles and their relative proportional area can be calculated, and the rice panicle quality can be judged by color, texture and other features. Therefore, this method is of great value for monitoring rice growth and estimating yield. Compared with prior methods, the present method has several advantages:
(1)
The classical Mask R-CNN model has been improved and optimized by combining the KL divergence and the soft NMS algorithm with the best RPN anchor box sizes, which makes the model more accurate and efficient in detecting rice panicles and alleviates the long training time, low detection accuracy and fuzzy boundaries of the original algorithm;
(2)
Before the actual images are inputted into the detection model, the ExG is calculated based on the dataset features. Then, the traditional Otsu threshold segmentation method is used for preprocessing, which reduces the influence of background interference and improves the model detection accuracy to a certain extent;
(3)
This method can operate well in a field environment and is of great value for monitoring rice growth and estimating yield.
The detection and segmentation accuracy of this method needs to be further improved; the next steps will involve both dataset improvement and model optimization. In terms of the dataset, we plan to improve its diversity with an automatic data enhancement algorithm to better match the statistics of actual field environments. For model optimization, the network structure will be changed to automatically search for the best modules for training to improve the accuracy of model detection and segmentation.

Author Contributions

Conceptualization, S.H. and Z.J.; methodology, Z.J., J.W. and S.H.; software, S.H.; validation, S.H. and Z.J.; formal analysis, S.H.; investigation, S.H. and Z.J.; resources, Z.J.; data curation, S.H., L.Z. and Z.J.; writing—original draft preparation, S.H.; writing—review and editing, Z.J. and S.H.; visualization, S.H.; supervision, Z.J., L.L. and J.X.; project administration, Z.J.; funding acquisition, Z.J. All authors have read and agreed to the published version of the manuscript.

Funding

The research is sponsored by the Natural Science Major Project for Anhui Provincial University (No. KJ2019ZD20), the Independent Innovation Research Fund of Anhui Provincial Key Laboratory of Smart Agricultural Technology and Equipment (APKLSATE2019X002), and the Key Research and Development Project of Anhui Province (202104a06020012 and 202204c06020022).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

A partial dataset for the offline training of the detection model used in this article can be found online at https://pan.baidu.com/s/15IN7S6V6_4rD7CTsDbBimA?pwd=3ltx (accessed on 19 September 2022) with the extraction code 3ltx. Further data are available on request from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1 shows how data were collected in the rice field. The photograph was not taken during the milk ripening stage, but the collection equipment and locations are the same.
Figure A1. Capturing real-time images of a rice field with a web camera.

References

  1. Guo, T.; Liang, G.; Zhou, W.; Liu, D.; Wang, X.; Sun, J.; Li, S.; Hu, C. Effect of fertilizer management on greenhouse gas emission and nutrient status in paddy soil. J. Plant Nutr. 2016, 22, 337–345. [Google Scholar]
  2. Mique, E.; Palaoag, T. Rice pest and disease detection using convolutional neural network. In Proceedings of the 2018 International Conference on Information Science and Applications, Hong Kong, China, 25–27 June 2018. [Google Scholar]
  3. Chen, J.; Zhang, D.; Nanehkaran, Y.; Li, D. Detection of rice plant diseases based on deep transfer learning. J. Sci. Food Agric. 2020, 100, 3246–3256. [Google Scholar] [CrossRef] [PubMed]
  4. Zhang, Y.; Liu, M.; Dannenmann, M.; Tao, Y.; Yao, Z.; Jing, R.; Zheng, X.; Klaus, B.; Lin, S. Benefit of using biodegradable film on rice grain yield and N use efficiency in ground cover rice production system. Field Crop Res. 2017, 201, 52–59. [Google Scholar] [CrossRef]
  5. Bai, X.; Cao, Z.; Zhao, L.; Zhang, J.; Lv, C.; Xie, J. Rice heading stage automatic observation by multi-classifier cascade-based rice spike detection method. Agric. For. Meteorol. 2018, 259, 260–270. [Google Scholar] [CrossRef]
  6. Xu, J.; Wang, J.; Xu, X.; Ju, S. Image recognition for different developmental stages of rice by RAdam deep convolutional neural networks. Trans. CSAE 2021, 37, 143–150. [Google Scholar]
  7. Guo, W.; Fukatsu, T.; Ninomiya, S. Automated characterization of flowering dynamics in rice using field-acquired time-series RGB images. Plant Methods 2015, 11, 7. [Google Scholar] [CrossRef] [Green Version]
  8. Zhou, C.; Liang, D.; Yang, X.; Yang, H.; Yue, J.; Yang, G. Wheat Ears Counting in Field Conditions Based on Multi-Feature Optimization and TWSVM. Front. Plant. Sci. 2018, 9, 1024–1040. [Google Scholar] [CrossRef] [PubMed]
  9. Fernandez-Gallego, J.; Kefauver, S.; Gutiérrez, N.; Nieto-Taladriz, M.; Araus, J. Wheat ear counting in-field conditions: High throughput and low-cost approach using RGB images. Plant Methods 2018, 14, 22–34. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Lu, H.; Cao, Z.; Xiao, Y.; Fang, Z.; Zhu, Y.; Xian, K. Fine-grained maize tassel trait characterization with multi-view representations. Comput. Electron. Agric. 2015, 118, 143–158. [Google Scholar] [CrossRef]
  11. Xiong, X.; Duan, L.; Liu, L.; Tu, H.; Yang, P.; Wu, D.; Chen, G.; Xiong, L.; Yang, W.; Liu, Q. Panicle-SEG: A robust image segmentation method for rice panicles in the field based on deep learning and superpixel optimization. Plant Methods 2017, 13, 104–119. [Google Scholar] [CrossRef] [Green Version]
  12. Fan, M.; Ma, Q.; Liu, J.; Wang, Q.; Wang, Y.; Duan, X. Counting Method of Wheatear in Field Based on Machine Vision Technology. Trans. CSAM 2015, 46 (Suppl. S1), 234–239. [Google Scholar]
  13. Li, H.; Li, Z.; Dong, W.; Cao, X.; Wen, Z.; Xiao, R.; Wei, Y.; Zeng, H.; Ma, X. An automatic approach for detecting seedlings per hill of machine-transplanted hybrid rice utilizing machine vision. Comput. Electron. Agric. 2021, 185, 106178–106192. [Google Scholar] [CrossRef]
  14. Cao, Y.; Liu, Y.; Ma, D.; Li, A.; Xu, T. Best Subset Selection Based Rice Panicle Segmentation from UAV Image. Trans. CSAM 2020, 8, 1000–1298. [Google Scholar]
  15. Li, Q.; Cai, J.; Bettina, B.; Okamoto, M.; Miklavcic, S. Detecting spikes of wheat plants using neural networks with Laws texture energy. Plant Methods 2017, 13, 83–96. [Google Scholar]
  16. Olsen, P.; Ramamurthy, K.; Ribera, J.; Chen, Y.; Thompson, A.; Luss, R.; Tuinstra, M.; Abe, N. Detecting and counting panicles in sorghum images. In Proceedings of the 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), Turin, Italy, 1–3 October 2018. [Google Scholar]
  17. Liu, L.; Ouyang, W.; Wang, X.; Fieguth, P.; Chen, J.; Liu, X.; Pietikäinen, M. Deep Learning for Generic Object Detection: A Survey. Int. J. Comput. Vision 2020, 128, 261–318. [Google Scholar] [CrossRef]
  18. Zhao, L.; Li, S. Object detection algorithm based on improved YOLOv3. Electronics 2020, 9, 537. [Google Scholar] [CrossRef] [Green Version]
  19. Luo, Y.; Wang, B.; Chen, X. Research progresses of target detection technology based on deep learning. Semicond. Optoelectron. 2020, 41, 1–10. [Google Scholar]
  20. Hu, X.; Liu, Y.; Zhao, Z.; Liu, J.; Yang, X.; Sun, C.; Chen, S.; Li, B.; Zhou, C. Real-time detection of uneaten feed pellets in underwater images for aquaculture using an improved YOLO-V4 network. Comput. Electron. Agric. 2021, 185, 106135–106146. [Google Scholar] [CrossRef]
  21. Wu, D.; Wu, D.; Feng, H.; Duan, L.; Dai, G.; Liu, X.; Wang, K.; Yang, P.; Cheng, G.; Gay, A.; et al. A deep learning-integrated micro-CT image analysis pipeline for quantifying rice lodging resistance-related traits. Plant Commun. 2021, 2, 100165–100177. [Google Scholar] [CrossRef]
  22. Gu, X.; Li, S.; Ren, S.; Zheng, H.; Fan, C.; Xu, H. Adaptive enhanced swin transformer with U-net for remote sensing image segmentation. Comput. Electr. Eng. 2022, 102, 108223–108234. [Google Scholar] [CrossRef]
  23. Chen, L.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. 2017, 40, 834–848. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Zhang, Y.; Xiao, D.; Chen, H.; Liu, Y. Rice Panicle Detection Method Based on Improved Faster R-CNN. Trans. CSAM 2021, 52, 231–240. [Google Scholar]
  26. Sun, X.; Fang, W.; Gao, C.; Fu, L.; Majeed, Y.; Liu, X.; Gao, F.; Yang, R.; Li, R. Remote estimation of grafted apple tree trunk diameter in modern orchard with RGB and point cloud based on SOLOv2. Comput. Electron. Agric. 2022, 199, 107209–107221. [Google Scholar] [CrossRef]
  27. Xie, E.; Sun, P.; Song, X.; Wang, W.; Liang, D.; Shen, C.; Luo, P. Polarmask: Single shot instance segmentation with polar representation. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  28. Zhang, X.; Cao, J. Contour-Point Refined Mask Prediction for Single-Stage Instance Segmentation. Acad. Accel. 2020, 40, 113–121. [Google Scholar]
  29. Zhang, L.; Chen, Y.; Li, Y.; Mang, L.; Du, K. Detection and Counting System for Winter Wheat Ears Based on Convolutional Neural Network. Trans. CSAM 2019, 50, 144–150. [Google Scholar]
  30. Madec, S.; Jin, X.; Lu, H.; Solan, B.; Liu, S.; Duyme, F.; Heritier, E.; Baret, F. Ear density estimation from high resolution RGB imagery using deep learning technique. Agric. For. Meteorol. 2019, 264, 225–234. [Google Scholar] [CrossRef]
  31. Yang, M.; Tseng, H.; Hsu, Y.; Tsai, H. Semantic Segmentation Using Deep Learning with Vegetation Indices for Rice Lodging Identification in Multi-date UAV Visible Images. Remote Sens. 2020, 12, 633. [Google Scholar] [CrossRef] [Green Version]
  32. Duan, L.; Xiong, X.; Liu, Q.; Yang, W.; Huang, C. Field rice panicle segmentation based on deep full convolutional neural network. Trans. CSAE 2018, 34, 202–209. [Google Scholar]
  33. Kong, H.; Chen, P. Mask R-CNN-based feature extraction and three-dimensional recognition of rice panicle CT images. Plant Direct 2021, 5, e00323. [Google Scholar] [CrossRef]
  34. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
  35. Yang, P.; Song, W.; Zhao, X.; Zhang, R. An improved Otsu threshold segmentation algorithm. Int. J. Comput. Sci. Eng. 2020, 22, 146–153. [Google Scholar] [CrossRef]
  36. He, Y.; Zhu, C.; Wang, J.; Savvides, M.; Zhang, X. Bounding box regression with uncertainty for accurate object detection. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  37. Bodla, N.; Singh, B.; Chellappa, R.S.; Davis, L. Soft-NMS improving object detection with one line of code. In Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017.
  38. Zhang, L.; Zhang, H.; Han, W.; Niu, Y.; Chávez, J.; Ma, W. The mean value of gaussian distribution of excess green index: A new crop water stress indicator. Agric. Water Manag. 2021, 251, 106866–106877. [Google Scholar] [CrossRef]
  39. Chen, J.; Matzinger, H.; Zhai, H.; Zhou, M. Centroid estimation based on symmetric KL divergence for Multinomial text classification problem. In Proceedings of the 2018 IEEE International Conference on Machine Learning and Applications, Orlando, FL, USA, 17–20 December 2018. [Google Scholar]
  40. Huang, X.; Jiang, Z.; Lu, L.; Tan, C.; Jiao, J. The study of illumination compensation correction algorithm. In Proceedings of the 2011 IEEE International Conference on Electronics, Communications and Control (ICECC), Ningbo, China, 9–11 September 2011. [Google Scholar]
  41. Tang, Y.; Ren, F.; Pedrycz, W. Fuzzy C-Means clustering through SSIM and patch for image segmentation. Appl. Soft Comput. 2020, 87, 105928. [Google Scholar] [CrossRef]
  42. Huang, Z.; Liu, S. Robustness and Discrimination Oriented Hashing Combining Texture and Invariant Vector Distance. In Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Korea, 22–26 October 2018. [Google Scholar]
Figure 1. Distribution diagram of 12 rice experimental fields.
Figure 2. Mask R-CNN model structure.
Figure 3. Panicle-Mask rice panicle detection and segmentation algorithm flowchart.
Figure 4. Loss function curve of Panicle-Mask.
Figure 5. Precision–recall curve of Panicle-Mask.
Figure 6. Examples of detection results using different methods.
Figure 7. Results from different methods and examples of binarized images: (a) Original image; (b) Otsu; (c) Mask R-CNN; (d) Panicle-Mask.
Figure 8. Example images from the test set: (a) No. 9; (b) No. 12; (c) No. 13; (d) No. 5; (e) No. 19; (f) No. 20.
Figure 9. Comparison of the results of the number of rice panicles.
Figure 10. Comparison of the results of the proportional area of rice panicles.
Table 1. Detection results of the models for different anchor box sizes.

Model | Anchor_Size | mAP (%) | F1-Score (%) | IoU (%)
Mask R-CNN | {32,64,128,256,512} | 68.58 | 71.31 | 61.42
RPN-panicle1 | {8,16,32,64,128} | 64.57 | 80.49 | 60.36
RPN-panicle2 | {16,32,64,128,256} | 80.25 | 74.78 | 75.73
RPN-panicle3 | {64,128,256,512,1024} | 60.41 | 69.06 | 58.75
RPN-panicle4 | {128,256,512,1024,2048} | 55.42 | 67.31 | 52.73
Panicle-Mask | {16,32,64,128,256} | 89.10 | 81.95 | 84.42
Table 2. SSIM values for the different methods.

No. | Otsu (%) | Mask R-CNN (%) | Panicle-Mask (%)
1 | 87.69 | 78.51 | 93.75
2 | 83.59 | 74.39 | 89.86
3 | 78.20 | 72.49 | 88.55
4 | 84.75 | 74.75 | 88.74
5 | 79.70 | 79.46 | 89.09
6 | 81.11 | 77.37 | 92.68
7 | 80.16 | 72.42 | 89.13
8 | 84.86 | 72.97 | 88.24
9 | 89.20 | 76.87 | 90.10
10 | 88.58 | 79.54 | 91.50
11 | 87.21 | 83.31 | 92.85
12 | 89.20 | 73.37 | 95.86
13 | 92.31 | 81.22 | 94.75
14 | 85.62 | 75.01 | 89.52
15 | 84.60 | 74.31 | 89.97
16 | 85.84 | 75.21 | 88.96
17 | 85.36 | 84.88 | 90.67
18 | 80.40 | 74.10 | 88.53
19 | 79.92 | 76.06 | 89.05
20 | 79.28 | 72.37 | 89.74
Table 3. pHash values for the different methods.

No. | Otsu | Mask R-CNN | Panicle-Mask
1 | 5 | 10 | 3
2 | 7 | 11 | 4
3 | 6 | 12 | 5
4 | 7 | 11 | 5
5 | 9 | 9 | 5
6 | 7 | 10 | 3
7 | 8 | 12 | 4
8 | 7 | 12 | 5
9 | 5 | 11 | 4
10 | 5 | 9 | 4
11 | 5 | 7 | 3
12 | 4 | 12 | 2
13 | 3 | 8 | 2
14 | 6 | 11 | 4
15 | 6 | 11 | 4
16 | 5 | 10 | 5
17 | 6 | 6 | 4
18 | 8 | 11 | 5
19 | 9 | 10 | 5
20 | 10 | 12 | 5
Table 4. Comparison between Panicle-Mask and other methods for rice panicle detection and segmentation.

Paper | Algorithm | Evaluation Indicators | Performance
Xiong X. et al., 2017 [11] | SLIC, SegNet, entropy rate superpixel optimization | p = 0.82, R = 0.73, F1-score = 76.73% | Segmentation and nondestructive estimation; time-consuming processing and training
Cao Yingli et al., 2020 [14] | Best subset selection, multiple linear regression, BP neural network | RMSE = 11.11 | Accurate extraction of rice panicle number; difficulty in dataset preparation, unstable colour features, and shooting height affects the segmentation result
Duan Lingfeng et al., 2018 [32] | Deep full CNN, SegNet | p = 0.83, R = 0.83, F1-score = 83% | High segmentation accuracy and fast processing speed for rice panicles in the field; verbose image edge filling and manual annotation in Photoshop
Kong Huihua et al., 2021 [33] | 3-D recognition, Mask R-CNN, Euclidean distance | Count accuracy (grain) ≥ 99% | Effectively identifies and counts individual rice panicles photographed at close range; inapplicable to actual field environments
This article | An improved Mask R-CNN; Otsu preprocessing | p = 0.84, R = 0.80, F1-score = 81.95%, count accuracy (panicle) = 83.27%, RMSE = 11.08 | Detection and segmentation of rice panicles in an actual field environment, rice growth monitoring and yield estimation; verbose labelling process

Hong, S.; Jiang, Z.; Liu, L.; Wang, J.; Zhou, L.; Xu, J. Improved Mask R-CNN Combined with Otsu Preprocessing for Rice Panicle Detection and Segmentation. Appl. Sci. 2022, 12, 11701. https://0-doi-org.brum.beds.ac.uk/10.3390/app122211701
