Article

Active Laser-Camera Scanning for High-Precision Fruit Localization in Robotic Harvesting: System Design and Calibration

1 Department of Mechanical Engineering, Michigan State University, East Lansing, MI 48824, USA
2 Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA
3 United States Department of Agriculture Agricultural Research Service, East Lansing, MI 48824, USA
* Author to whom correspondence should be addressed.
Submission received: 6 December 2023 / Revised: 27 December 2023 / Accepted: 29 December 2023 / Published: 31 December 2023
(This article belongs to the Special Issue Advanced Automation for Tree Fruit Orchards and Vineyards)

Abstract

Robust and effective fruit detection and localization is essential for robotic harvesting systems. While extensive research efforts have been devoted to improving fruit detection, less emphasis has been placed on fruit localization, which is a crucial yet challenging task due to the limited depth accuracy of existing sensor measurements in the natural orchard environment, with its variable lighting conditions and foliage/branch occlusions. In this paper, we present the system design and calibration of an Active LAser-Camera Scanner (ALACS), a novel perception module for robust and high-precision fruit localization. The hardware of the ALACS mainly consists of a red line laser, an RGB camera, and a linear motion slide, which are integrated into an active scanning scheme based on a dynamic-targeting laser-triangulation principle. A high-fidelity extrinsic model is developed to pair the laser illumination and the RGB camera, enabling precise depth computation when the target is captured by both sensors. A random sample consensus-based robust calibration scheme is then designed to calibrate the model parameters from collected data. Comprehensive evaluations are conducted to validate the system model and calibration scheme. The results show that the proposed calibration method can detect and remove data outliers to achieve robust parameter computation, and that the calibrated ALACS system achieves high-precision localization, with maximum depth measurement errors of less than 4 mm at distances ranging from 0.6 to 1.2 m.

1. Introduction

With the growing global population, the agriculture industry has been pushing to adopt mechanization and automation to increase sustainable food production at lower economic and environmental costs. While such technologies have been deployed for field crops such as corn and wheat, the fruit sector (e.g., apple, citrus, and pear) still relies heavily on seasonal, manual labor. In many advanced economies, the availability of farm labor has been on a steady decline, while the cost of labor has increased significantly. Moreover, tasks like manual harvesting involve extensive repetitive body motions and awkward postures (especially when picking fruits at high places or deep in the canopy and repeatedly ascending and descending ladders with heavy loads), which put workers at risk of ergonomic injuries and musculoskeletal pain [1]. Considering these issues, robotic harvesting is thus considered a promising solution for sustainable fruit production and has received increasing attention in recent years.
Research on robotic harvesting technology has been ongoing for several decades, and different robotic systems have been attempted for semi-automated or fully automated fruit harvesting [2,3,4,5,6,7,8,9,10,11,12,13,14,15]. A typical robotic harvesting system consists of a perception module, a manipulator, and an end-effector. Specifically, the perception module exploits onboard sensors (e.g., cameras and LiDARs) to detect and localize the fruit. Once the fruit position is determined by the perception system, the manipulator is controlled to reach the target fruit, and then a specialized end-effector (e.g., gripper or vacuum tube) is actuated to detach the fruit. Therefore, the development of a robotic harvesting system requires multi-disciplinary advancements to enable a variety of synergistic functionalities. Among the various tasks, fruit detection and localization is the first and foremost task needed to support robotic manipulation and fruit detachment. Specifically, the fruit detection function aims at segmenting fruits from the complex background, while localization is to calculate the spatial positions of the detected fruits. Due to variable lighting conditions, color variations of fruits with different degrees of ripeness and varietal differences, and fruit occlusions by foliage and branches, developing sensing modules and perception algorithms capable of robust and effective fruit detection and localization in the real orchard environment poses significant technical challenges.
To date, extensive studies have been devoted to efficient and robust fruit detection, which is most commonly accomplished using color images captured by RGB cameras. In general, these approaches can be classified into two categories: feature-based and deep learning-based. The feature-based methods [16,17,18,19,20,21] use differences among predefined features (e.g., color, texture, and geometric shape) to identify the fruit, and various conventional computer vision techniques (e.g., Hough transform-based circle detection, the optical flow method, and Otsu adaptive threshold segmentation) are used for feature extraction. Such methods perform well in certain simple harvesting scenarios but are susceptible to varying lighting conditions and heavy occlusions. This is because the extracted features are defined artificially; they are not universally adaptable and may lack generalization capabilities in distinguishing target fruits when the harvesting scene changes [22]. Different from feature-based methods, deep learning-based methods exploit convolutional neural networks to extract abstract features from color images, making them suitable for complex recognition problems. Deep learning-based object recognition algorithms have seen tremendous success in recent years, and a variety of network structures, e.g., the region-based convolutional neural network (RCNN) [23], Faster RCNN [24], Mask RCNN [25,26], You Only Look Once (YOLO) [27,28,29], and the Single Shot Detector (SSD) [30], have been studied and extended for fruit detection. Specifically, RCNN-based approaches employ a two-stage network architecture, in which a region proposal network (RPN) is used to search for regions of interest and a classification network is used to conduct bounding-box regression. As opposed to two-stage networks, YOLO- and SSD-based one-stage networks merge the RPN and classification branch into a single convolutional network architecture, which offers improved computational efficiency.
Once the fruits are recognized and a picking sequence is determined (see, e.g., [12]), three-dimensional (3D) localization needs to be conducted to compute the spatial coordinates of a target fruit. Accurate fruit localization is crucial since erroneous localization will cause the manipulator to miss the target and subsequently degrade the harvesting performance of the robotic system. Various sensor configurations and techniques have been used for fruit localization [31,32,33,34,35]. One example is (passive) stereo vision systems, which exploit a two-camera layout and the triangulation optical measurement principle to obtain depth information. For such systems, the relative geometric pose of the two cameras needs to be carefully designed and calibrated, and sophisticated algorithms are required to search for common features in two dense RGB images for stereo matching. Therefore, the main disadvantages of stereo vision systems are that the generation of depth information is computationally expensive and that the performance of stereo matching is inevitably affected by occluded pixels and the varying lighting conditions that are common in the natural orchard environment.
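To make the triangulation principle concrete: once a stereo pair is rectified and calibrated, depth follows directly from the disparity between matched pixels. The minimal Python sketch below illustrates this relation; the function name and numerical values are illustrative assumptions, not parameters of any particular system.

```python
def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a rectified stereo pair: z = f * B / d, where f is the focal length
    in pixels, B is the baseline between the two cameras, and d is the disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid stereo match")
    return focal_length_px * baseline_m / disparity_px

# Example with assumed values: 900 px focal length, 10 cm baseline, 75 px disparity.
print(stereo_depth(900.0, 0.10, 75.0))  # -> 1.2 m
```

The same relation also shows why occluded or mismatched pixels immediately corrupt the depth estimate: any error in the disparity propagates directly into the computed depth.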
Consumer RGB-D cameras are another type of depth measurement sensor that has recently been employed to localize fruits [36,37,38,39]. Different from passive stereo-vision systems that rely purely on natural light, RGB-D sensors include a separate artificial illumination source to aid the depth computation. Based on how the depth measurements are computed, RGB-D cameras can be divided into three categories: structured light (SL), time of flight (ToF), and active infrared stereo (AIRS) [33]. An SL-based RGB-D sensor usually consists of a light source and a camera system. The light source projects a series of light patterns onto the workspace, and the depth information can then be extracted from the images based on the deformation of the light pattern. So far, consumer sensors that operate with SL have been utilized in different agricultural applications [40,41,42]. The ToF-based RGB-D sensors use an infrared light emitter to emit light pulses onto the scene. The distance between the sensor and the object is calculated based on the known speed of light and the round-trip time of the light signal. One important feature of ToF systems is that their depth measurement precision does not deteriorate with distance, which makes them suitable for harvesting applications requiring a long perception range. Moreover, the AIRS-based RGB-D sensors are an extension of the conventional passive stereo-vision system. They combine an infrared stereo camera pair with an active infrared light source to improve the depth measurement in low-texture environments. Despite some successes, the sensors mentioned above may have limited and unstable performance in the natural orchard environment. For example, the SL-based sensors are sensitive to the natural light condition and to the interference of multiple patterned light sources. The ToF systems are vulnerable to scattered light and multi-path interference and usually provide lower-resolution depth images than other RGB-D cameras. Similar to passive stereo-vision systems, the AIRS-based sensors encounter stereo matching issues, which can lead to flying pixels or over-smoothing around contour edges [33]. In addition, the performance of these sensors can deteriorate significantly when target fruits are occluded by leaves and branches due to the low or limited density of the illuminating light patterns or point cloud.
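For the ToF principle mentioned above, the depth computation itself is a one-line relation between the round-trip time of the light pulse and the speed of light. The short Python sketch below is purely illustrative (the example timing value is an assumption):

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_depth(round_trip_time_s: float) -> float:
    """Time-of-flight depth: the pulse travels to the object and back,
    so the object distance is half of the round-trip path length."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# An assumed round-trip time of 8 ns corresponds to roughly 1.2 m.
print(tof_depth(8e-9))
```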
It is thus clear that both stereo vision systems and RGB-D sensors have inherent depth measurement limitations in providing the precise fruit localization information that is necessary for effective robotic harvesting systems. Inaccurate fruit localization stands out as a primary cause of failure in robotic harvesting, as reported by several recent works [10,11]. For instance, the field experiment detailed in [11] shows that about 70% of the total failed attempts are attributed to inaccurate fruit localization. Towards this end, we devise a novel perception module, called the Active LAser-Camera Scanner (ALACS), to improve fruit localization accuracy and robustness for ready deployment in apple harvesting robots. In this paper, we present the system design and calibration scheme of the ALACS, and the main contributions of this paper are highlighted as follows.
  • A hardware system consisting of a red line laser, an RGB camera, and a linear motion slide, coupled with an active scanning scheme, is developed for fruit localization based on the laser-triangulation principle.
  • A high-fidelity extrinsic model is developed to capture 3D measurements by matching the laser illumination source with the RGB pixels. A robust calibration scheme is then developed to calibrate the model parameters by leveraging random sample consensus (RANSAC) techniques to detect and remove data outliers.
  • The effectiveness of the developed model and calibration scheme is evaluated through comprehensive experiments. The results show that the calibrated ALACS system can achieve high-precision localization with millimeter-level accuracy.
To the best of our knowledge, this is the first effort that combines a line laser with a camera to accomplish millimeter-level localization performance. Our focus has predominantly centered on the development of an effective automated apple harvesting system, with the ALACS being specifically tailored and validated for this application. Nonetheless, the ALACS possesses inherent adaptability and can be extended and adapted for the harvesting of other tree fruits.
The rest of the paper is organized as follows. Section 2 provides an overview of our newly developed robotic apple harvesting system. Section 3 presents the system design of the ALACS. The extrinsic model for 3D measurement characterization and the corresponding robust calibration scheme are introduced in Section 4. Simulation and experimental results are presented in Section 5. Finally, conclusions are drawn in Section 6.

2. Overview of the Robotic Apple Harvesting System

In this section, we first briefly introduce our robotic apple harvesting platform, into which the ALACS is integrated. As shown in Figure 1, the robotic platform consists of four main components: a perception module, a four-degree-of-freedom manipulator, a soft vacuum-based end-effector, and a dropping module. The robotic system is mounted on a trailer base to facilitate movement in the orchard environment. An industrial computer is utilized to coordinate the perception module, the manipulator, and all communication devices. The entire software stack is integrated using the robot operating system (ROS), where the different software components communicate primarily via custom messages.
The following introduces the steps that our system takes to harvest an apple. At the beginning of each harvesting cycle, the perception module is activated to detect and localize the fruits within the manipulator’s workspace. Given the 3D apple location, the planning algorithm is used to generate a reference trajectory, and the control module then actuates the manipulator to follow this reference trajectory to approach the fruit. After the fruit is successfully attached to the end-effector, a rotation mechanism is triggered to rotate the end-effector by a certain angle, and the manipulator is then driven to pull and detach the apple. Finally, the manipulator retracts to a dropping spot and releases the fruit. From the aforementioned picking procedure, it can be seen that fruit detection and localization is a key task in automated apple harvesting. Our previous system prototypes [10,12] utilized RGB-D cameras to facilitate fruit detection and localization. However, laboratory and field tests found that the commercial RGB-D cameras could not provide accurate depth information of the target fruits under leaf/branch occlusions and/or challenging lighting conditions. Inaccurate apple localization has been identified as one of the primary causes of harvesting failure. To enhance the apple localization accuracy and robustness, we designed a new perception unit (called the ALACS), which seamlessly integrates a line laser with an RGB camera for active sensing.
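The picking procedure described above amounts to a fixed sequence of stages executed once per fruit. The Python sketch below summarizes that sequence as a simple state list; it is purely illustrative (the stage names and the callback interface are our own assumptions and do not reflect the actual ROS node structure of the system):

```python
from enum import Enum, auto

class HarvestStage(Enum):
    DETECT_AND_LOCALIZE = auto()   # perception module detects and localizes fruits
    PLAN_TRAJECTORY = auto()       # planner generates a reference trajectory to the target
    APPROACH_FRUIT = auto()        # controller drives the manipulator along the trajectory
    ROTATE_AND_DETACH = auto()     # end-effector rotates, then the arm pulls the apple off
    RETRACT_AND_DROP = auto()      # arm retracts to the dropping spot and releases the fruit

HARVEST_CYCLE = list(HarvestStage)  # Enum iteration preserves the definition order

def run_cycle(execute_stage) -> None:
    """Run one picking cycle by invoking a user-supplied callback for each stage."""
    for stage in HARVEST_CYCLE:
        execute_stage(stage)
```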

3. Design of the Active Laser-Camera Scanner

As shown in Figure 2, the perception module of the robotic apple harvesting system includes an Intel RealSense D435i RGB-D camera (Intel Corp., Santa Clara, CA, USA) and a custom ALACS unit. The RGB-D camera is mounted on a horizontal frame above the manipulator to provide a global view of the scene. The ALACS unit comprises a red line laser (Laserglow Technologies, North York, ON, Canada), a FLIR RGB camera (Teledyne FLIR, Wilsonville, OR, USA), and a linear motion slide. The line laser is mounted on top of the linear motion slide, which enables the laser to move left and right horizontally with a full stroke of 20 cm. Meanwhile, the FLIR RGB camera is installed at the rear end of the linear motion slide with a relative angle to the laser. The hardware configuration of the ALACS is designed to facilitate depth measurements using the principle of laser triangulation. The laser triangulation-based technique captures depth measurements by pairing a laser illumination source with a camera, and it has been widely used in industrial applications for precision 3D object profiling. It should be noted that the ALACS unit is different from conventional laser triangulation sensors. For conventional laser triangulation sensors, the relative position between the laser and the camera is fixed (i.e., both of them are either stationary or moving simultaneously). For the ALACS, the camera is fixed while the laser position can be adjusted with the linear motion slide.
The RGB-D camera and the ALACS unit are fused synergistically to achieve apple detection and localization. Specifically, the fusion scheme includes two steps. In the first step, the images captured by the RGB-D camera are fed into a deep learning approach for fruit detection (see [43]), and the target apple location is then roughly calculated with the depth measurements provided by the RGB-D camera. In the second step, using the rough apple location, the ALACS unit is triggered to actively scan the target apple, and a refined apple position is obtained. As shown in Figure 3, the basic working principle of the ALACS is to project the laser line onto the target fruit and then use the image information and the triangulation technique to localize the fruit. The perception strategy of the ALACS unit is designed as follows (an illustrative sketch of this scanning procedure is provided after the list):
  • Initialization. The linear motion slide is actuated to regulate the laser towards an initial position, ensuring that the red laser line is projected on the left half region of the target apple. The initial laser position is obtained by transforming the rough target apple location provided by the RGB-D camera into the coordinate frame of the ALACS unit.
  • Interval scanning. When the laser reaches the initial position, the FLIR camera is activated to capture an image. The linear motion slide then travels to the right by four centimeters in one-centimeter increments, pausing at each increment to allow the FLIR camera to take an image. A total of five images are acquired through this scanning procedure, with the laser line projected at a different position in each image. The purpose of such a scanning strategy is to mitigate the impact of occlusion, since the laser line provides high spatial-resolution localization information for the target fruit. More precisely, when the target apple is partially occluded by foliage, moving the laser to multiple positions reduces the likelihood that the laser lines will be entirely blocked by the obstacle.
  • Refinement of 3D position. For each image captured by the FLIR camera, the laser line projected on the target apple surface is extracted and then used to generate a 3D location candidate. Computer vision approaches and laser triangulation-based techniques are exploited to accomplish laser line extraction and position candidate computation, respectively. Five position candidates will be generated as a result, and a holistic evaluation function is used to select one of the candidates as the final target apple location.
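The three steps above can be summarized as a short scanning loop. The sketch below is illustrative only: all callbacks (to_slide_coordinate, move_slide_to, capture_image, extract_laser_line, triangulate, score_candidate) are hypothetical placeholders for the hardware drivers and algorithms referenced in the text, not actual APIs of our system.

```python
STEP_CM = 1.0     # slide increment between images (interval scanning)
NUM_IMAGES = 5    # images captured over the 4 cm scan

def alacs_scan(rough_apple_position, to_slide_coordinate, move_slide_to,
               capture_image, extract_laser_line, triangulate, score_candidate):
    """One ALACS scan: initialize the laser from the rough RGB-D estimate, sweep the
    slide in 1 cm steps, extract the laser line in each image, and pick the best
    3D position candidate with a holistic evaluation function."""
    # Initialization: place the laser line on the left half of the target apple.
    start_cm = to_slide_coordinate(rough_apple_position)

    candidates = []
    for k in range(NUM_IMAGES):
        move_slide_to(start_cm + k * STEP_CM)          # interval scanning
        image = capture_image()
        line_pixels = extract_laser_line(image)        # may fail if the line is occluded
        if line_pixels is not None:
            candidates.append(triangulate(line_pixels, laser_offset_cm=k * STEP_CM))

    # Refinement: select one of the (up to five) candidates as the final apple location.
    return max(candidates, key=score_candidate) if candidates else None
```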
To accomplish the aforementioned fruit localization scheme, laser line extraction and position candidate computation are two key tasks. The laser line extraction is achieved by leveraging computer vision techniques, and a detailed description on the extraction algorithm can be found in our recent work [44]. To facilitate the computation of fruit 3D positions, a high-fidelity model is derived based on the principle of laser triangulation, and a robust calibration scheme is designed. The following will detail the development of the high-fidelity model and calibration scheme.

4. Extrinsic Model and Calibration

4.1. Modeling of the ALACS Unit

The basic idea of the laser triangulation-based technique is to capture depth measurements by pairing a laser illumination source with a camera. Both the laser beam and the camera are aimed at the target object, and, based on the extrinsic parameters between the laser source and the camera sensor, the depth information can be recovered with trigonometry. As shown in Figure 4, $F_l$ and $F_c$ denote the laser frame and the camera frame, respectively. $\alpha \in \mathbb{R}$ is the rotation angle about the $y_l$-axis between $F_l$ and $F_c$, $L \in \mathbb{R}$ is the horizontal distance (i.e., the translation along the $x_l$-axis) between $F_l$ and $F_c$, and $\beta \in \mathbb{R}$ is the angle between the laser plane and the $(y_l, z_l)$ plane of $F_l$. $\alpha$, $L$, and $\beta$ are considered the extrinsic parameters between the laser illumination source and the camera, which are essential for deriving the high-fidelity model of the ALACS unit. In the following, we first introduce the pin-hole model of the camera and then present the model of the ALACS.
Let $p_i$ be a point located at the intersection of the laser line and the object. The 3D position of $p_i$ in the camera frame $F_c$ is denoted by $p_{c,i} = [x_{c,i}, y_{c,i}, z_{c,i}]^\top \in \mathbb{R}^3$. The corresponding normalized coordinate $\bar{p}_{c,i} \in \mathbb{R}^3$ is defined by

$\bar{p}_{c,i} = \left[ \bar{u}_{c,i}, \bar{v}_{c,i}, 1 \right]^\top = \left[ \dfrac{x_{c,i}}{z_{c,i}}, \dfrac{y_{c,i}}{z_{c,i}}, 1 \right]^\top. \quad (1)$
Denote $m_{c,i} = [u_{c,i}, v_{c,i}, 1]^\top \in \mathbb{R}^3$ as the pixel coordinate of $p_i$ on the image plane. Then, the following pin-hole camera model can be used to describe the projection from $\bar{p}_{c,i}$ to $m_{c,i}$:

$m_{c,i} = \varpi\!\left( K \bar{p}_{c,i} \right), \quad (2)$
where $\varpi(\cdot)$ is the camera distortion model and $K \in \mathbb{R}^{3 \times 3}$ is the camera intrinsic matrix. Both $\varpi(\cdot)$ and $K$ can be obtained via standard calibration approaches, and thus, once $m_{c,i}$ is detected from the image, the normalized coordinate $\bar{p}_{c,i}$ can be calculated by

$\bar{p}_{c,i} = K^{-1} \varpi^{-1}(m_{c,i}). \quad (3)$
We now derive the high-fidelity model for the ALACS unit. Denote $p_{l,i} = [x_{l,i}, y_{l,i}, z_{l,i}]^\top \in \mathbb{R}^3$ as the 3D position of $p_i$ in the laser frame $F_l$. According to the relative pose between $F_l$ and $F_c$ (see Figure 4), it can be concluded that

$\begin{bmatrix} x_{c,i} \\ y_{c,i} \\ z_{c,i} \end{bmatrix} = \begin{bmatrix} \cos(\alpha) & 0 & \sin(\alpha) \\ 0 & 1 & 0 \\ -\sin(\alpha) & 0 & \cos(\alpha) \end{bmatrix} \begin{bmatrix} x_{l,i} \\ y_{l,i} \\ z_{l,i} \end{bmatrix} + \begin{bmatrix} -L\cos(\alpha) \\ 0 \\ L\sin(\alpha) \end{bmatrix}. \quad (4)$
In addition, as there is an angle, i.e., $\beta$, between the laser plane and the $(y_l, z_l)$ plane of $F_l$, we have

$x_{l,i} = -y_{l,i} \tan(\beta). \quad (5)$
Based on (4) and (5), the following expression can be derived:

$\tan(\alpha) = \dfrac{x_{c,i} + L\cos(\alpha) + y_{c,i}\cos(\alpha)\tan(\beta)}{z_{c,i} - L\sin(\alpha) - y_{c,i}\sin(\alpha)\tan(\beta)}. \quad (6)$
It can be concluded from (1) that $x_{c,i} = z_{c,i}\bar{u}_{c,i}$ and $y_{c,i} = z_{c,i}\bar{v}_{c,i}$. After substituting these two relations into (6), we can derive that

$z_{c,i} = \dfrac{L}{\sin(\alpha) - \bar{u}_{c,i}\cos(\alpha) - \bar{v}_{c,i}\tan(\beta)}. \quad (7)$
Using (7) and the facts that $x_{c,i} = z_{c,i}\bar{u}_{c,i}$ and $y_{c,i} = z_{c,i}\bar{v}_{c,i}$, we have

$x_{c,i} = \dfrac{L\,\bar{u}_{c,i}}{\sin(\alpha) - \bar{u}_{c,i}\cos(\alpha) - \bar{v}_{c,i}\tan(\beta)}, \qquad y_{c,i} = \dfrac{L\,\bar{v}_{c,i}}{\sin(\alpha) - \bar{u}_{c,i}\cos(\alpha) - \bar{v}_{c,i}\tan(\beta)}. \quad (8)$
Equations (7) and (8) constitute the high-fidelity model that reveals the 3D measurement mechanism of the ALACS unit. Specifically, given the pixel coordinate $m_{c,i}$, the normalized coordinate $\bar{p}_{c,i}$, i.e., $\bar{u}_{c,i}$ and $\bar{v}_{c,i}$, can be computed via (3). Then, the model (7) and (8) can be exploited to calculate the 3D position $p_{c,i} = [x_{c,i}, y_{c,i}, z_{c,i}]^\top$, provided that the extrinsic parameters $\alpha$, $L$, and $\beta$ are well calibrated.
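As a concrete illustration of how (3), (7), and (8) are used together, the Python sketch below maps one extracted laser pixel to a 3D point in the camera frame. OpenCV's undistortPoints stands in for the inverse distortion/intrinsic mapping in (3), and the numerical values are placeholders of the same order as the calibrated parameters reported in Section 5, not the actual calibration results:

```python
import numpy as np
import cv2

def pixel_to_point(u_px, v_px, K, dist_coeffs, alpha, L, beta):
    """Pixel -> 3D point using Equations (3), (7), and (8)."""
    # Equation (3): remove distortion and intrinsics to get normalized coordinates.
    pts = np.array([[[u_px, v_px]]], dtype=np.float64)
    u_bar, v_bar = cv2.undistortPoints(pts, K, dist_coeffs)[0, 0]

    # Equation (7): depth from the laser-camera extrinsic model.
    denom = np.sin(alpha) - u_bar * np.cos(alpha) - v_bar * np.tan(beta)
    z = L / denom
    # Equation (8): lateral coordinates via x = z * u_bar, y = z * v_bar.
    return np.array([z * u_bar, z * v_bar, z])

# Placeholder intrinsics and extrinsics (illustrative only).
K = np.array([[1400.0, 0.0, 720.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
p = pixel_to_point(640.0, 560.0, K, dist,
                   alpha=np.deg2rad(19.0), L=0.38, beta=np.deg2rad(0.7))
print(p)  # [x, y, z] in meters; roughly 1.0 m depth for these made-up inputs
```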

4.2. Robust Calibration Scheme

The extrinsic parameters $\alpha$, $L$, and $\beta$ play a crucial role in facilitating the 3D measurement of the ALACS unit. In this subsection, we focus on how to perform robust calibration of the extrinsic parameters $\alpha$, $L$, and $\beta$. Note that $\alpha$ and $\beta$ are constants, while $L$ is variable as the linear motion slide can move to different positions. During the calibration procedure, the linear motion slide is fixed at an initial position, and the corresponding horizontal distance between the laser and the camera is denoted by $L_0 \in \mathbb{R}$. $\alpha$, $\beta$, and $L_0$ (i.e., the initial value of $L$) are obtained via offline calibration. Then, when the linear motion slide moves, $L$ can be updated online based on its initial value $L_0$ and the movement distance of the linear motion slide.
The calibration procedure includes two steps. In the first step, multiple data samples $s_i = [\bar{u}_{c,i}, \bar{v}_{c,i}, z_{c,i}]^\top \in \mathbb{R}^3$ ($i = 1, 2, \ldots, n$) are collected from recorded images. The second step then formulates an optimization problem using the collected data and the model (7) to compute the extrinsic parameters. The following details these two steps in sequence.
The hardware setup for image and data collection is shown in Figure 5, where a planar checkerboard is placed in front of the ALACS unit so that the laser line is projected onto it. We use the planar checkerboard as the calibration pattern to facilitate the data collection. Specifically, given an image that covers the whole checkerboard, the pixel coordinates of the laser points projected on the checkerboard are extracted based on the color feature. Once the pixel coordinate $m_{c,i}$ is obtained, the corresponding normalized coordinate $\bar{p}_{c,i}$, i.e., $\bar{u}_{c,i}$ and $\bar{v}_{c,i}$, is calculated with (3). Furthermore, we leverage the following scheme to calculate $z_{c,i}$ (see Figure 6):
  • Corner Detection. The checkerboard corners are detected from the image by using the algorithm developed in [45].
  • Pose Reconstruction. Based on the detected checkerboard corners and the prior knowledge of the checkerboard square size, the relative pose between the planar checkerboard and the camera is reconstructed [46]. The pose is described by the rotation matrix $R_b \in SO(3)$ and the translation vector $t_b \in \mathbb{R}^3$.
  • Computation of $z_{c,i}$. Based on the relative pose $(R_b, t_b)$ and the normalized coordinate $\bar{p}_{c,i}$, $z_{c,i}$ is calculated with projective geometry [46].
To obtain multiple data samples $s_i = [\bar{u}_{c,i}, \bar{v}_{c,i}, z_{c,i}]^\top$ ($i = 1, 2, \ldots, n$), the planar checkerboard is moved to different positions, and an image is recorded at each position. For each image, several laser points are selected, and the corresponding data samples $s_i$ are computed using the aforementioned strategy. A total of $n$ data samples are collected and then used for the calibration of the extrinsic parameters.
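A minimal OpenCV sketch of this three-step scheme is given below. It is an illustrative assumption on our part: corner detection uses OpenCV's findChessboardCorners rather than the detector of [45], and the final step computes $z_{c,i}$ by intersecting the laser pixel's viewing ray with the reconstructed checkerboard plane.

```python
import numpy as np
import cv2

def laser_point_depth(image_bgr, laser_pixel, pattern_size, square_size_m, K, dist):
    """Estimate z_{c,i} for one laser pixel: (i) corner detection, (ii) pose
    reconstruction (R_b, t_b) via PnP, (iii) ray-plane intersection."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        return None

    # Checkerboard corner coordinates in the board frame (the board is the z = 0 plane).
    cols, rows = pattern_size
    obj_pts = np.zeros((cols * rows, 3), np.float64)
    obj_pts[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square_size_m

    # Pose reconstruction: rotation R_b and translation t_b of the board in the camera frame.
    _, rvec, t_b = cv2.solvePnP(obj_pts, corners, K, dist)
    R_b, _ = cv2.Rodrigues(rvec)

    # Normalized coordinates of the laser pixel, as in Equation (3).
    u_bar, v_bar = cv2.undistortPoints(
        np.array([[laser_pixel]], dtype=np.float64), K, dist)[0, 0]
    ray = np.array([u_bar, v_bar, 1.0])

    # The board plane has normal n = R_b[:, 2] and passes through t_b; intersecting
    # the viewing ray z * [u_bar, v_bar, 1] with that plane yields the depth z_{c,i}.
    n = R_b[:, 2]
    return float(n @ t_b.ravel()) / float(n @ ray)
```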
In the second step, the extrinsic parameters are identified based on the model (7) and the collected data samples $s_i = [\bar{u}_{c,i}, \bar{v}_{c,i}, z_{c,i}]^\top$ ($i = 1, 2, \ldots, n$). In the ideal case, each data sample $s_i$ should satisfy the relation (7). According to this observation, the extrinsic parameters $\alpha$, $L_0$, and $\beta$ can be estimated by solving the following optimization problem:
$\min_{\hat{\alpha}, \hat{L}_0, \hat{\beta}} \; f = \sum_{i=1}^{n} \left( z_{c,i} - \hat{z}_{c,i} \right)^2, \quad \text{s.t.} \quad \hat{z}_{c,i} = \dfrac{\hat{L}_0}{\sin(\hat{\alpha}) - \bar{u}_{c,i}\cos(\hat{\alpha}) - \bar{v}_{c,i}\tan(\hat{\beta})}, \; i = 1, 2, \ldots, n, \quad (9)$
where $\hat{\alpha}$, $\hat{L}_0$, and $\hat{\beta} \in \mathbb{R}$ are the estimated values of $\alpha$, $L_0$, and $\beta$, respectively. Note that the minimization problem (9) directly applies all data samples to compute the extrinsic parameters, which is not robust in the presence of data outliers. In general, the data samples $s_i = [\bar{u}_{c,i}, \bar{v}_{c,i}, z_{c,i}]^\top$ ($i = 1, 2, \ldots, n$) are corrupted with noise and may contain outliers that do not satisfy the relation (7). These outliers can severely degrade the calibration accuracy and thus need to be removed. To that end, we adopt the random sample consensus (RANSAC) methodology [47,48] to extract credible data from $S = \{s_1, s_2, \ldots, s_n\}$. The RANSAC-based robust calibration scheme is detailed in Algorithm 1. Specifically, the calibration scheme is divided into three steps. First, subsets of $S$ are randomly selected to calculate different possible solutions to problem (9); each of these possible solutions is called a hypothesis in the RANSAC algorithm. Second, the hypotheses are scored using the data points in $S$, and the hypothesis that obtains the best score is returned as the solution. Finally, the data points that voted for this solution are categorized as the set of inliers and are used to calculate the final solution.
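Problem (9) is a small nonlinear least-squares problem in three unknowns and can be solved with any generic solver; which solver was used is not stated here, so the sketch below simply uses scipy.optimize.least_squares as an assumed stand-in (the initial guess is a placeholder):

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_depth(params, u_bar, v_bar):
    """Depth predicted by the high-fidelity model (7) for extrinsics (alpha, L0, beta)."""
    alpha, L0, beta = params
    return L0 / (np.sin(alpha) - u_bar * np.cos(alpha) - v_bar * np.tan(beta))

def solve_problem_9(samples, x0=(np.deg2rad(20.0), 300.0, 0.0)):
    """Fit (alpha, L0, beta) to samples s_i = (u_bar_i, v_bar_i, z_i) by minimizing the
    sum of squared depth residuals in (9)."""
    u_bar, v_bar, z = (np.asarray(col, dtype=float) for col in zip(*samples))
    residuals = lambda p: predicted_depth(p, u_bar, v_bar) - z
    return least_squares(residuals, np.asarray(x0, dtype=float)).x
```

Fixing $\hat{\beta} = 0$ in the residual recovers the low-fidelity variant used for comparison in Section 5.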
Algorithm 1 RANSAC-based robust calibration
Input: $S = \{s_1, s_2, \ldots, s_n\}$, $k_{max}$, $\epsilon$
Output: $\hat{\alpha}$, $\hat{L}_0$, $\hat{\beta}$
$k = 0$, $I_{max} = 0$
while $k < k_{max}$ do
     1. Hypothesis generation
     Randomly select 4 data samples from $S$ to construct the subset $S_k = \{s_{k_1}, s_{k_2}, s_{k_3}, s_{k_4}\}$, where $k_1, k_2, k_3, k_4 \in \{1, 2, \ldots, n\}$
     Estimate the parameters $\hat{\alpha}_k$, $\hat{L}_{0,k}$, $\hat{\beta}_k$ based on $S_k$ and (9)
     2. Verification
     Initialize the inlier set $I_k = \{\}$
     for $i = 1, 2, \ldots, n$ do
         if $\left| z_{c,i} - \dfrac{\hat{L}_{0,k}}{\sin(\hat{\alpha}_k) - \bar{u}_{c,i}\cos(\hat{\alpha}_k) - \bar{v}_{c,i}\tan(\hat{\beta}_k)} \right| \leq \epsilon$ then
             Add $s_i$ to the inlier set $I_k$
         end if
     end for
     if $|I_k| > I_{max}$ then
          $I^{*} = I_k$, $I_{max} = |I_k|$
     end if
     $k = k + 1$
end while
Estimate the parameters $\hat{\alpha}$, $\hat{L}_0$, $\hat{\beta}$ based on $I^{*}$ and (9)
The developed calibration scheme leverages RANSAC techniques to iteratively estimate the model parameters and select the solution with the largest number of inliers. Therefore, it is able to robustly identify the model parameters when some data samples are corrupted or noisy.
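A compact Python sketch of Algorithm 1 is given below, building on the same least-squares fit as in the previous sketch (repeated here so the block is self-contained). The default values of k_max and eps are illustrative assumptions (eps is an inlier threshold in the same units as z, e.g., millimeters), not the settings used in our experiments.

```python
import numpy as np
from scipy.optimize import least_squares

def model_depth(params, u_bar, v_bar):
    alpha, L0, beta = params
    return L0 / (np.sin(alpha) - u_bar * np.cos(alpha) - v_bar * np.tan(beta))

def fit_extrinsics(u_bar, v_bar, z, x0=(np.deg2rad(20.0), 300.0, 0.0)):
    """Least-squares fit of (alpha, L0, beta) as in problem (9)."""
    res = least_squares(lambda p: model_depth(p, u_bar, v_bar) - z, np.asarray(x0))
    return res.x

def ransac_calibrate(samples, k_max=2000, eps=1.0, seed=0):
    """Algorithm 1: RANSAC-based robust calibration.
    samples: array of shape (n, 3) with rows (u_bar_i, v_bar_i, z_i)."""
    samples = np.asarray(samples, dtype=float)
    u_bar, v_bar, z = samples[:, 0], samples[:, 1], samples[:, 2]
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(samples), dtype=bool)

    for _ in range(k_max):
        # 1. Hypothesis generation: fit the model to four randomly selected samples.
        idx = rng.choice(len(samples), size=4, replace=False)
        params_k = fit_extrinsics(u_bar[idx], v_bar[idx], z[idx])

        # 2. Verification: samples whose depth residual is within eps vote as inliers.
        inliers = np.abs(z - model_depth(params_k, u_bar, v_bar)) <= eps
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers

    # Final estimate from the largest inlier set.
    return fit_extrinsics(u_bar[best_inliers], v_bar[best_inliers], z[best_inliers])
```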

5. Experiments

5.1. Calibration Methods and Results

As shown in Figure 5, the experimental setup mainly consists of the specially designed ALACS unit and a planar checkerboard. To collect data samples for calibration, the planar checkerboard is placed in sequence at 10 different positions between 0.6 and 1.2 m from the ALACS unit, and at each position the FLIR camera is triggered to capture an image. For each image, three laser points are selected and the corresponding data samples $s_i = [\bar{u}_{c,i}, \bar{v}_{c,i}, z_{c,i}]^\top$ are computed using the strategy introduced in Section 4.2. A total of $n = 30$ data samples are collected and then used for the calibration of the extrinsic parameters.
To better evaluate the effectiveness of the developed high-fidelity model and robust calibration scheme, four different methods are implemented and tested on the same data samples. These four methods are as follows:
  • Method 1: This method utilizes the low-fidelity model to conduct the calibration. Specifically, the low-fidelity model only considers two extrinsic parameters, $\alpha$ and $L$, and assumes that $\beta = 0$. In this case, the depth measurement mechanism of the ALACS unit degenerates into

    $z_{c,i} = \dfrac{L}{\sin(\alpha) - \bar{u}_{c,i}\cos(\alpha)}. \quad (10)$
    The model (10) and all collected data samples are used to estimate the extrinsic parameters α and L.
  • Method 2: Both the low-fidelity model (10) and RANSAC techniques are used for calibration. Compared with Method 1, this method leverages RANSAC to remove outlier data.
  • Method 3: This method computes the extrinsic parameters α , L 0 , and β by solving the optimization problem (9), which is designed based on the high-fidelity model (7) and all data samples.
  • Method 4: This is our developed method which combines the high-fidelity model with RANSAC techniques for calibration. The method is detailed in Algorithm 1.
The mean error $|z_{c,i} - \hat{z}_{c,i}|$ is computed to evaluate the performance of these four methods. The calibration results are summarized in Table 1. Both Methods 1 and 2 use model (10) for calibration, while Methods 3 and 4 rely on model (7). From Table 1, it can be seen that Methods 3 and 4 achieve better calibration performance than Methods 1 and 2, indicating that the high-fidelity model (7) can properly pair the laser with the RGB camera for depth measurements. Moreover, by comparing Method 3 with Method 4, it can be concluded that the RANSAC technique is effective in removing outlier data and that the developed calibration method can reliably determine the extrinsic parameters of the ALACS unit. The precision of the extrinsic model and the existence of outlier data are the two main contributors to calibration errors. The extrinsic model plays a crucial role in establishing the correlation between the raw data captured by the ALACS and the corresponding 3D measurements. An extrinsic model that fails to faithfully represent the characteristics of the ALACS results in improper corrections and, consequently, substantial calibration errors. Additionally, outliers within the data samples can introduce significant distortions in the estimation of the extrinsic parameters, further contributing to calibration inaccuracies. Removing outliers is therefore imperative to ensure that the calibration is based on the majority of reliable and accurate data points.
All four methods are implemented in MATLAB 2022a on a laptop equipped with an Intel i7-10710U CPU (6 cores, 1.6 GHz clock rate) and 16 GB of RAM. The computation time for each method is presented in Table 2. It can be seen that the computation times for Methods 2 and 4 are much longer than those for Methods 1 and 3. Both Methods 2 and 4 employ the RANSAC technique to remove outlier data. As introduced in Section 4.2, RANSAC involves the random selection of data point subsets to form candidate solutions, and this process is repeated for a predetermined number of iterations or until a termination condition is satisfied. The inherent randomness and iterative nature of RANSAC contribute significantly to the computational cost. Therefore, Methods 2 and 4 exhibit longer computation times than Methods 1 and 3.

5.2. Localization Accuracy

As mentioned in Section 4.2, the parameters $\alpha$ and $\beta$ are constants, while $L$ is variable since the laser position can be adjusted via the linear motion slide. The linear motion slide is fixed at an initial position (i.e., $L$ is fixed to $L_0$) during the calibration procedure. To fully evaluate the localization accuracy of the ALACS unit, we change the value of $L$ by moving the laser to different positions and collect data samples at each of them. More precisely, the laser is moved from its initial position towards the camera side by $d$ cm, where $d$ is selected in turn as

$d = 0, 5, 10, 15, 20.$

Given $L_0$ and $d$, $L$ can be computed as $L = L_0 - d$. For each laser position (i.e., for each $L$ value), 10 images are collected with the planar checkerboard placed at different positions between 0.6 and 1.2 m from the ALACS unit. Three laser points are randomly chosen from each image, so at each laser position a total of 30 data samples are used to evaluate the localization accuracy of the ALACS unit. The 3D measurements of the collected data, i.e., $p_{c,j} = [x_{c,j}, y_{c,j}, z_{c,j}]^\top$ ($j = 1, 2, \ldots, 30$), are obtained with the aid of the checkerboard setup. Meanwhile, the extrinsic parameters calculated with the developed robust calibration scheme (see Table 1) are used to determine the estimated 3D measurements $\hat{p}_{c,j} = [\hat{x}_{c,j}, \hat{y}_{c,j}, \hat{z}_{c,j}]^\top$.
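The per-axis error statistics summarized in Figure 7 can be obtained from the reference and estimated positions with a few lines of NumPy; the helper below is a generic sketch (the names and output structure are our own assumptions):

```python
import numpy as np

def localization_error_stats(p_true, p_est):
    """Per-axis absolute localization errors (x, y, z) and summary statistics.
    p_true, p_est: arrays of shape (N, 3) in millimeters."""
    errors = np.abs(np.asarray(p_est, dtype=float) - np.asarray(p_true, dtype=float))
    return {
        "median": np.median(errors, axis=0),
        "p25": np.percentile(errors, 25, axis=0),
        "p75": np.percentile(errors, 75, axis=0),
        "max": errors.max(axis=0),
    }
```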
The localization results are shown in Figure 7. Specifically, Figure 7a shows the localization error distribution of the ALACS with the laser placed at five different positions, and Figure 7b depicts the corresponding statistical metrics. The results show that the ALACS unit achieves precise localization in the x (horizontal), y (vertical), and z (depth) directions. In most instances, the localization errors along the x, y, and z directions are within 0.4 mm, 0.8 mm, and 3 mm, respectively. Even in the worst-case scenarios, the largest localization errors along these three directions are less than 0.6 mm, 1.2 mm, and 4 mm, respectively, when the distance between the planar checkerboard and the ALACS is within 0.6∼1.2 m. Note that our robotic harvesting system uses a vacuum-based end-effector to grasp and detach fruits, and the end-effector is able to attract fruits within a distance of about 1.5 cm. Therefore, according to the evaluation results, the ALACS unit can meet the accuracy requirements for fruit localization and can be integrated with other hardware modules for automated apple harvesting.
The RealSense D435i RGB-D camera was used in our previous apple harvesting robot prototypes to localize the fruit [10,12]. According to the manufacturer’s datasheet [49], this camera offers a measurement accuracy of less than 2% of the depth range. This suggests that the maximum localization error along the depth direction is estimated to be less than 24 mm within the distance range of 0.6 to 1.2 m between the target and the camera. On the other hand, the ALACS unit demonstrates a maximum depth measurement error of 4 mm at distances ranging from 0.6 to 1.2 m. These results indicate that the ALACS unit has promising potential for achieving precise and reliable fruit localization. Meanwhile, it is worth noting that, in contrast to passive stereo-vision systems and RGB-D cameras, which are capable of generating dense depth information across their entire workspace, the ALACS unit is specifically tailored for fruit localization and can only provide precise 3D measurements for the target fruits.

6. Conclusions

This paper has reported the system design and calibration scheme of a new perception module, called the Active LAser-Camera Scanner (ALACS), for fruit localization. A red line laser, an RGB camera, and a linear motion slide were fully integrated as the main components of the ALACS unit. A high-fidelity model was established to reveal the localization mechanism of the ALACS unit. Then, a robust scheme was proposed to calibrate the model parameters in the presence of data outliers. Experimental results demonstrated that the proposed calibration scheme can achieve accurate and robust parameter computation, and that the ALACS unit can be exploited for localization with maximum errors of less than 0.6 mm, 1.2 mm, and 4 mm in the horizontal, vertical, and depth directions, respectively, when the distance between the target and the ALACS is within 0.6∼1.2 m. Future work will focus on enhancing the efficiency and scalability of the scanner so that it can provide faster measurements to support the multiple arms planned for the next version of our harvesting robot. Additionally, improvements will be made to the localization algorithm to address diverse environmental conditions (e.g., variations in lighting). The system will also be extended to facilitate the localization and harvesting of other tree fruits. Last but not least, we will design comprehensive experiments to compare the measurement accuracy of the ALACS unit and consumer depth cameras.

Author Contributions

Conceptualization, R.L. and Z.L.; methodology, K.Z. and P.C.; software, K.Z., P.C. and K.L.; validation, K.Z., P.C. and K.L.; formal analysis, K.Z.; investigation, K.Z., P.C. and K.L.; resources, Z.L. and R.L.; data curation, K.Z., P.C. and K.L.; writing—original draft preparation, K.Z.; writing—review and editing, P.C., K.L., Z.L. and R.L.; visualization, K.Z. and K.L.; supervision, Z.L. and R.L.; project administration, Z.L. and R.L.; funding acquisition, Z.L. and R.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the U.S. Department of Agriculture Agricultural Research Service (No. 5050-43640-003-000D) and the National Science Foundation (No. ECCS-2024649). The findings and conclusions in this paper are those of the authors and should not be construed to represent any official USDA or U.S. Government determination or policy. Mention of commercial products in the paper does not imply endorsement by the USDA over those not mentioned.

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fathallah, F.A. Musculoskeletal disorders in labor-intensive agriculture. Appl. Ergon. 2010, 41, 738–743. [Google Scholar] [CrossRef] [PubMed]
  2. Zhao, D.A.; Lv, J.; Ji, W.; Zhang, Y.; Chen, Y. Design and control of an apple harvesting robot. Biosyst. Eng. 2011, 110, 112–122. [Google Scholar] [CrossRef]
  3. Mehta, S.; Burks, T. Vision-based control of robotic manipulator for citrus harvesting. Comput. Electron. Agric. 2014, 102, 146–158. [Google Scholar] [CrossRef]
  4. De Kleine, M.E.; Karkee, M. A semi-automated harvesting prototype for shaking fruit tree limbs. Trans. ASABE 2015, 58, 1461–1470. [Google Scholar] [CrossRef]
  5. Silwal, A.; Davidson, J.R.; Karkee, M.; Mo, C.; Zhang, Q.; Lewis, K. Design, integration, and field evaluation of a robotic apple harvester. J. Field Robot. 2017, 34, 1140–1159. [Google Scholar] [CrossRef]
  6. Xiong, J.; He, Z.; Lin, R.; Liu, Z.; Bu, R.; Yang, Z.; Peng, H.; Zou, X. Visual positioning technology of picking robots for dynamic litchi clusters with disturbance. Comput. Electron. Agric. 2018, 151, 226–237. [Google Scholar] [CrossRef]
  7. Williams, H.A.; Jones, M.H.; Nejati, M.; Seabright, M.J.; Bell, J.; Penhall, N.D.; Barnett, J.J.; Duke, M.D.; Scarfe, A.J.; Ahn, H.S.; et al. Robotic kiwifruit harvesting using machine vision, convolutional neural networks, and robotic arms. Biosyst. Eng. 2019, 181, 140–156. [Google Scholar] [CrossRef]
  8. Hohimer, C.J.; Wang, H.; Bhusal, S.; Miller, J.; Mo, C.; Karkee, M. Design and field evaluation of a robotic apple harvesting system with a 3D-printed soft-robotic end-effector. Trans. ASABE 2019, 62, 405–414. [Google Scholar] [CrossRef]
  9. Zhang, X.; He, L.; Karkee, M.; Whiting, M.D.; Zhang, Q. Field evaluation of targeted shake-and-catch harvesting technologies for fresh market apple. Trans. ASABE 2020, 63, 1759–1771. [Google Scholar] [CrossRef]
  10. Zhang, K.; Lammers, K.; Chu, P.; Li, Z.; Lu, R. System design and control of an apple harvesting robot. Mechatronics 2021, 79, 102644. [Google Scholar] [CrossRef]
  11. Bu, L.; Chen, C.; Hu, G.; Sugirbay, A.; Sun, H.; Chen, J. Design and evaluation of a robotic apple harvester using optimized picking patterns. Comput. Electron. Agric. 2022, 198, 107092. [Google Scholar] [CrossRef]
  12. Zhang, K.; Lammers, K.; Chu, P.; Dickinson, N.; Li, Z.; Lu, R. Algorithm Design and Integration for a Robotic Apple Harvesting System. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyoto, Japan, 23–27 October 2022; pp. 9217–9224. [Google Scholar] [CrossRef]
  13. Meng, F.; Li, J.; Zhang, Y.; Qi, S.; Tang, Y. Transforming unmanned pineapple picking with spatio-temporal convolutional neural networks. Comput. Electron. Agric. 2023, 214, 108298. [Google Scholar] [CrossRef]
  14. Wang, C.; Li, C.; Han, Q.; Wu, F.; Zou, X. A Performance Analysis of a Litchi Picking Robot System for Actively Removing Obstructions, Using an Artificial Intelligence Algorithm. Agronomy 2023, 13, 2795. [Google Scholar] [CrossRef]
  15. Ye, L.; Wu, F.; Zou, X.; Li, J. Path planning for mobile robots in unstructured orchard environments: An improved kinematically constrained bi-directional RRT approach. Comput. Electron. Agric. 2023, 215, 108453. [Google Scholar] [CrossRef]
  16. Bulanon, D.; Kataoka, T.; Ota, Y.; Hiroma, T. AE—Automation and emerging technologies: A segmentation algorithm for the automatic recognition of Fuji apples at harvest. Biosyst. Eng. 2002, 83, 405–412. [Google Scholar] [CrossRef]
  17. Zhao, J.; Tow, J.; Katupitiya, J. On-tree fruit recognition using texture properties and color data. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 263–268. [Google Scholar] [CrossRef]
  18. Wachs, J.P.; Stern, H.; Burks, T.; Alchanatis, V. Low and high-level visual feature-based apple detection from multi-modal images. Precis. Agric. 2010, 11, 717–735. [Google Scholar] [CrossRef]
  19. Zhou, R.; Damerow, L.; Sun, Y.; Blanke, M.M. Using colour features of cv. ‘Gala’ apple fruits in an orchard in image processing to predict yield. Precis. Agric. 2012, 13, 568–580. [Google Scholar] [CrossRef]
  20. Nguyen, T.T.; Vandevoorde, K.; Wouters, N.; Kayacan, E.; De Baerdemaeker, J.G.; Saeys, W. Detection of red and bicoloured apples on tree with an RGB-D camera. Biosyst. Eng. 2016, 146, 33–44. [Google Scholar] [CrossRef]
  21. Lin, G.; Tang, Y.; Zou, X.; Xiong, J.; Fang, Y. Color-, depth-, and shape-based 3D fruit detection. Precis. Agric. 2020, 21, 1–17. [Google Scholar] [CrossRef]
  22. Li, T.; Feng, Q.; Qiu, Q.; Xie, F.; Zhao, C. Occluded Apple Fruit Detection and localization with a frustum-based point-cloud-processing approach for robotic harvesting. Remote. Sens. 2022, 14, 482. [Google Scholar] [CrossRef]
  23. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar] [CrossRef]
  24. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  25. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 386–397. [Google Scholar] [CrossRef] [PubMed]
  26. Chu, P.; Li, Z.; Lammers, K.; Lu, R.; Liu, X. Deep learning-based apple detection using a suppression mask R-CNN. Pattern Recognit. Lett. 2021, 147, 206–211. [Google Scholar] [CrossRef]
  27. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
  28. Tian, Y.; Yang, G.; Wang, Z.; Wang, H.; Li, E.; Liang, Z. Apple detection during different growth stages in orchards using the improved YOLO-V3 model. Comput. Electron. Agric. 2019, 157, 417–426. [Google Scholar] [CrossRef]
  29. Kang, H.; Chen, C. Fast implementation of real-time fruit detection in apple orchards using deep learning. Comput. Electron. Agric. 2020, 168, 105108. [Google Scholar] [CrossRef]
  30. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar] [CrossRef]
  31. Gongal, A.; Amatya, S.; Karkee, M.; Zhang, Q.; Lewis, K. Sensors and systems for fruit detection and localization: A review. Comput. Electron. Agric. 2015, 116, 8–19. [Google Scholar] [CrossRef]
  32. Gené-Mola, J.; Gregorio, E.; Guevara, J.; Auat, F.; Sanz-Cortiella, R.; Escolà, A.; Llorens, J.; Morros, J.R.; Ruiz-Hidalgo, J.; Vilaplana, V.; et al. Fruit detection in an apple orchard using a mobile terrestrial laser scanner. Biosyst. Eng. 2019, 187, 171–184. [Google Scholar] [CrossRef]
  33. Fu, L.; Gao, F.; Wu, J.; Li, R.; Karkee, M.; Zhang, Q. Application of consumer RGB-D cameras for fruit detection and localization in field: A critical review. Comput. Electron. Agric. 2020, 177, 105687. [Google Scholar] [CrossRef]
  34. Neupane, C.; Koirala, A.; Wang, Z.; Walsh, K.B. Evaluation of depth cameras for use in fruit localization and sizing: Finding a successor to kinect v2. Agronomy 2021, 11, 1780. [Google Scholar] [CrossRef]
  35. Kang, H.; Wang, X.; Chen, C. Accurate fruit localisation using high resolution LiDAR-camera fusion and instance segmentation. Comput. Electron. Agric. 2022, 203, 107450. [Google Scholar] [CrossRef]
  36. Xiong, Y.; Peng, C.; Grimstad, L.; From, P.J.; Isler, V. Development and field evaluation of a strawberry harvesting robot with a cable-driven gripper. Comput. Electron. Agric. 2019, 157, 392–402. [Google Scholar] [CrossRef]
  37. Tian, Y.; Duan, H.; Luo, R.; Zhang, Y.; Jia, W.; Lian, J.; Zheng, Y.; Ruan, C.; Li, C. Fast recognition and location of target fruit based on depth information. IEEE Access 2019, 7, 170553–170563. [Google Scholar] [CrossRef]
  38. Arad, B.; Balendonck, J.; Barth, R.; Ben-Shahar, O.; Edan, Y.; Hellström, T.; Hemming, J.; Kurtser, P.; Ringdahl, O.; Tielen, T.; et al. Development of a sweet pepper harvesting robot. J. Field Robot. 2020, 37, 1027–1039. [Google Scholar] [CrossRef]
  39. Kang, H.; Zhou, H.; Chen, C. Visual perception and modeling for autonomous apple harvesting. IEEE Access 2020, 8, 62151–62163. [Google Scholar] [CrossRef]
  40. Lehnert, C.; English, A.; McCool, C.; Tow, A.W.; Perez, T. Autonomous sweet pepper harvesting for protected cropping systems. IEEE Robot. Autom. Lett. 2017, 2, 872–879. [Google Scholar] [CrossRef]
  41. Liu, J.; Yuan, Y.; Zhou, Y.; Zhu, X.; Syed, T.N. Experiments and analysis of close-shot identification of on-branch citrus fruit with realsense. Sensors 2018, 18, 1510. [Google Scholar] [CrossRef]
  42. Milella, A.; Marani, R.; Petitti, A.; Reina, G. In-field high throughput grapevine phenotyping with a consumer-grade depth camera. Comput. Electron. Agric. 2019, 156, 293–306. [Google Scholar] [CrossRef]
  43. Chu, P.; Li, Z.; Zhang, K.; Chen, D.; Lammers, K.; Lu, R. O2RNet: Occluder-Occludee Relational Network for Robust Apple Detection in Clustered Orchard Environments. Smart Agric. Technol. 2023, 5, 100284. [Google Scholar] [CrossRef]
  44. Zhang, K.; Lammers, K.; Chu, P.; Li, Z.; Lu, R. An Automated Apple Harvesting Robot—From System Design to Field Evaluation. J. Field Robot. 2023; in press. [Google Scholar] [CrossRef]
  45. Geiger, A.; Moosmann, F.; Car, Ö.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 3936–3943. [Google Scholar] [CrossRef]
  46. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  47. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  48. Raguram, R.; Chum, O.; Pollefeys, M.; Matas, J.; Frahm, J.M. USAC: A Universal Framework for Random Sample Consensus. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2022–2038. [Google Scholar] [CrossRef] [PubMed]
  49. Intel. Intel RealSense Product Family D400 Series Datasheet. 2023. Available online: https://www.intelrealsense.com/wp-content/uploads/2023/07/Intel-RealSense-D400-Series-Datasheet-July-2023.pdf?_ga=2.51357024.85065052.1690338316-873175694.1690172632 (accessed on 1 October 2023).
Figure 1. The developed robotic apple harvesting system. (a) Image of the whole system operating in the orchard environment. (b) Main components of the robotic system.
Figure 2. CAD model of the perception module.
Figure 3. Fundamental working principle of the ALACS unit.
Figure 4. Coordinate frames and extrinsic parameters of the ALACS unit.
Figure 5. Hardware setup for extrinsic parameter calibration.
Figure 6. Scheme to compute $z_{c,i}$. (a) Corner detection. (b) Pose reconstruction. (c) Computation of $z_{c,i}$.
Figure 7. Localization accuracy of the ALACS when the laser is adjusted to different positions (i.e., $d = 0, 5, 10, 15, 20$ cm). (a) Localization error distribution at the 5 laser positions. (b) Statistical summary of the localization error distribution. On each box, the central red mark is the median, the edges of the box are the 25th and 75th percentiles, and the whiskers extend to the most extreme data points.
Table 1. Calibration results for the four methods used.
Method | $\alpha$ (deg) | $L_0$ (mm) | $\beta$ (deg) | Mean Error $|z_{c,i} - \hat{z}_{c,i}|$ (mm)
Method 1 (Low-fidelity model + All data) | 19.03 | 382.83 | / | 4.91
Method 2 (Low-fidelity model + RANSAC) | 19.28 | 386.37 | / | 3.80
Method 3 (High-fidelity model + All data) | 19.01 | 381.09 | 0.73 | 1.84
Method 4 (High-fidelity model + RANSAC) | 19.07 | 381.98 | 0.69 | 0.39
Table 2. Computation time for four methods.
Method | Computation Time (s)
Method 1 (Low-fidelity model + All data) | 0.015
Method 2 (Low-fidelity model + RANSAC) | 1.741
Method 3 (High-fidelity model + All data) | 0.023
Method 4 (High-fidelity model + RANSAC) | 1.824
