Research on convergence technologies based on information and communication technology (ICT) has recently been active, driven by the need to improve the competitiveness of agriculture and to respond to rapidly intensifying climate change. With the development of such convergence technologies for crop analysis, plant phenomics, which observes and analyzes the entire phenotype including crop shape and biochemical characteristics, is growing in importance [1]. Plant phenotyping, which expresses complex crop characteristics, extends both the accuracy of analysis and the range of applications by employing a variety of sensing techniques such as visible-light, thermal, hyperspectral, and depth cameras [2].
In addition to sensor technology, a variety of algorithms, machine learning in particular, have enabled diverse analyses of plant phenotypes. Leaf area and nutrient concentration have been estimated from plant images using pattern recognition methods [3], and such studies have been developed with automation in mind. Using deep learning, one class of machine learning algorithms, growth status and environmental information are being analyzed for observations in real fields [5], and research on identifying crop features and locations is in progress [6]. On the data collection side, several machine learning algorithms are being applied to automated analyses such as disease severity rating, lodging assessment, and growth monitoring over large crop areas using unmanned aerial vehicles (UAVs) [7].
One important factor in determining the plant phenotype is crop shape, referred to as morphology, which includes crop height, leaf area, fruit volume, weight, and growth direction [10]. These factors reflect both environmental conditions such as temperature, humidity, and soil, and the genetic traits of the crop itself [11]. For example, crops may grow toward the light, or environmental stress combined with genetic factors may reduce fruit yield or produce fruits smaller than their natural size. Two-dimensional (2D) images have commonly been used to analyze these crop shapes. However, observing a three-dimensional (3D) object through 2D cross-sectional images is difficult, and occlusion by leaves limits the analysis.
To overcome this problem, crop analysis using 3D images has been attempted, and various studies are underway. Three-dimensional images are created through a process called 3D reconstruction, methods of which can be divided into passive and active methods. Passive methods are slower than active ones, but data acquisition is easier and high-resolution analysis is possible. Meanwhile, active methods have a lower resolution than passive ones, but real-time analysis can be performed as a result of the acquisition of accurate depth information.
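The depth information that active sensors measure directly is what passive stereo methods must infer from pixel correspondences. As a minimal illustration of that inference, the standard pinhole-stereo relation Z = f·B/d converts a disparity map into depth; the focal length and baseline below are arbitrary example values, not parameters from this study.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Depth from a rectified stereo pair: Z = f * B / d (pinhole model).

    disparity  : array of pixel disparities (0 means no match found)
    focal_px   : focal length in pixels
    baseline_m : distance between the two cameras in meters
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.zeros_like(disparity)
    valid = disparity > 0            # unmatched pixels stay at depth 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# A camera with f = 700 px and a 10 cm baseline:
# a 35 px disparity corresponds to 700 * 0.1 / 35 = 2.0 m.
d = np.array([[35.0, 70.0], [0.0, 14.0]])
print(depth_from_disparity(d, 700.0, 0.1))  # [[2. 1.] [0. 5.]]
```

The zero-disparity guard reflects the practical weakness noted later for passive stereo: where no correspondence can be found (textureless regions), no depth can be computed.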
Passive methods are image-based 3D modeling approaches that use only images acquired by an ordinary RGB camera, without a distance-measuring sensor. Structure from motion (SfM) is typically used, and related approaches include shape from silhouettes and shape from shading. Snavely et al. [12] used the SfM method in their Photo Tourism work to reconstruct historic sites from photographs taken by many people under widely varying conditions. Li et al. [13] briefly introduced and summarized algorithms for multi-image 3D reconstruction. Remondino et al. [14] compared the results of dense image matching algorithms (SURE, MicMac, PMVS, PhotoScan) on various data sets captured with a DSLR camera. Some studies have applied such algorithms to crops. Lou et al. [15] estimated camera poses using several feature-matching algorithms on DSLR images and generated point clouds using the multi-view stereo (MVS) method. Similarly, Lou et al. [16] determined that 3D laser/LiDAR and structured-light scanners are not well suited to plants and compared their pipeline, combining SfM, stereo matching, depth optimization, and merging, against the patch-based multi-view stereo (PMVS) and CMPMVS [17] algorithms. Ni et al. [18] performed 3D reconstruction using the VisualSFM software and used voxels to estimate volume. Nguyen et al. [19] photographed sunflowers with a stereo camera in an outdoor environment, performed 3D reconstruction using the SfM method, and compared the results with other algorithms in terms of error.
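At the core of the SfM and MVS pipelines cited above is triangulation: once camera poses are known, each scene point is recovered from its projections in two or more views. A minimal sketch of linear (DLT) triangulation from two views follows; the intrinsics and poses are hypothetical, chosen only to demonstrate the method.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel observations of the same 3D point
    Returns the 3D point in Euclidean coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X; the solution is the null vector of A.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical cameras: identity pose, and a 1 m baseline along x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

# Project a known 3D point into both views, then recover it.
X_true = np.array([0.2, -0.1, 3.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # approximately [0.2, -0.1, 3.0]
```

With noise-free observations the recovery is exact up to numerical precision; real SfM pipelines refine such estimates with bundle adjustment.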
Active methods are depth-based 3D modeling approaches that employ sensors capable of direct distance measurement. Structured-light, laser-scanning, and time-of-flight (TOF) cameras have been used; although such equipment has traditionally been expensive, the development of inexpensive RGB with depth (RGB-D) sensors has led to many studies. Q. Yang et al. [20] introduced a depth image upsampling algorithm, noting that passive stereo methods fail in non-textured, featureless regions. Zhu et al. [21] and Lee and Ho [22] improved depth map accuracy and created multi-view video sequences with depth maps by combining a TOF range sensor with a stereo sensor. Paulus et al. [23] measured sugar beet using relatively inexpensive equipment, the DAVID laser scanning system, and showed through comparison with reference values that it can substitute for expensive equipment. Wasenmüller and Stricker [24] and Yang et al. [25] analyzed the accuracy and precision of the Kinect sensor, a type of RGB-D camera, with respect to temperature effects, color, and depth values. Lachat et al. [26] performed 3D reconstruction of a small object using a Kinect v2 and compared the result with those of a Kinect v1 and DSLR-based photogrammetry. Gai et al. [27] and Gai et al. [28] attempted plant detection and localization of crops such as broccoli and lettuce using a Kinect sensor for weed management, applying several clustering and classification algorithms to localize and discriminate plants at different growth stages. This type of sensor has also been used to acquire various plant growth-related information [29].
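The depth images produced by such RGB-D sensors become usable 3D data only after back-projection through the camera intrinsics. A minimal sketch of converting a depth image to a point cloud under the pinhole model follows; the intrinsics and the toy depth values are placeholders, not calibrated Kinect parameters.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into a 3D point cloud (pinhole model).

    depth          : HxW array of depth values in meters (0 = no reading)
    fx, fy, cx, cy : camera intrinsics in pixels
    Returns an Nx3 array of (X, Y, Z) points for valid pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx            # lateral offset scales with depth
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # drop pixels with no depth reading

# A toy 2x2 depth image with assumed (not calibrated) intrinsics:
depth = np.array([[1.0, 0.0], [2.0, 1.0]])
print(depth_to_points(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5))
```

Filtering out zero-depth pixels mirrors the invalid readings that real RGB-D sensors produce at reflective or out-of-range surfaces.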
In this study, several steps were performed to obtain more accurate 3D crop reconstruction images in the laboratory, and various methods were applied for automatic analysis. After acquiring depth information with a Kinect v2, an RGB-D sensor, we assisted the 3D reconstruction process by registering the depth data with a high-resolution RGB image. To use the color and depth images together, the color image must be matched to the depth image, which has a much lower resolution; much of the color information is then discarded, making precise analysis difficult. Accordingly, an algorithm that increases the resolution of the depth image was used to reduce the amount of discarded color information. A procedure to increase the accuracy of 3D reconstruction during image acquisition was also added. The results were compared with those of existing 3D reconstruction algorithms, and model accuracy was evaluated against ground-truth morphological measurements of the crops. Automatic analysis is as important as 3D image generation itself: 2D image analysis has been automated in many areas, but analysis technology for 3D images remains insufficient. Accordingly, an automated analysis pipeline for the reconstructed 3D crop images was also developed. Through classification and segmentation of the image, the desired crop part was identified and divided based on specific parts, and automatic analysis was performed on major crop factors, including growth direction, number of leaves, leaf length, width, and area, and crop height. In this way, 3D images could be reconstructed from various types of images and the accuracy of the model could be analyzed. In addition, for automatic analysis, an extraction algorithm suitable for 3D crop images was selected and various segmentation approaches were attempted.
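The resolution mismatch described above can be made concrete with the simplest possible baseline: nearest-neighbor replication of each depth pixel into a block so the depth map matches the color resolution. This sketch is only the naive baseline, not the upsampling algorithm used in this study, which must additionally preserve depth edges.

```python
import numpy as np

def upsample_depth_nearest(depth, scale):
    """Nearest-neighbor upsampling of a depth map by an integer factor.

    Each depth pixel is replicated into a scale x scale block so the
    depth map reaches the color image's resolution. Edge-aware methods
    refine this, but the resolution matching itself is just replication.
    """
    return np.kron(depth, np.ones((scale, scale), dtype=depth.dtype))

# The Kinect v2 depth (512x424) and color (1920x1080) streams differ by
# roughly a factor of 3-4 per axis; here a toy 2x2 map is upsampled by 2.
d = np.array([[1.0, 2.0], [3.0, 4.0]])
print(upsample_depth_nearest(d, 2))
```

Without such upsampling, mapping color onto the raw depth grid discards most color pixels, which is exactly the information loss the study's higher-resolution depth map is meant to reduce.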
This study contributes to broadening the scope of plant phenotypic analysis by demonstrating various plant characteristics that can be measured and the applicability of sensors and algorithms for obtaining 3D information.