
3D Modelling and Mapping for Precision Agriculture

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing in Agriculture and Vegetation".

Deadline for manuscript submissions: closed (15 March 2023) | Viewed by 53775

Special Issue Editors


Guest Editor
Department of Agricultural, Forestry and Food Sciences (DiSAFA), University of Turin, 10124 Torino, Italy
Interests: precision agriculture; robotics; remote sensing; image processing; machine learning

Guest Editor
Universitat de Lleida, Lleida, Spain
Interests: agricultural machinery; sensors; precision agriculture

Guest Editor
Department of Agricultural, Forestry and Food Sciences (DiSAFA), University of Turin, Turin, Italy
Interests: precision agriculture; precision viticulture; UAV; renewable energy; machine learning

Special Issue Information

Dear Colleagues,

An effective precision agriculture (PA) management approach relies on accurate knowledge of the agricultural environment, with the aim of performing site-specific operations in a timely and appropriate manner. Recent solutions for PA are based on unmanned vehicles, both ground (UGVs) and aerial (UAVs), which can profitably perform crop scouting and monitoring tasks and even carry out several management operations autonomously.

In this context, the contribution of 3D crop models to the improvement of PA practices is rapidly growing. Indeed, point clouds of agricultural environments can be profitably exploited to retrieve information on crop status, geometry, field yield, and other valuable agronomic indices. In addition, 3D models are proving to be an effective input for robust control and navigation algorithms of autonomous vehicles in complex scenarios, such as agricultural ones, allowing for enhanced obstacle and target detection. To mine valuable information for agricultural purposes from 3D point clouds, however, specific computing frameworks are usually required, many of which are based on artificial intelligence (AI) and machine learning (ML) methods.

The goal of this Special Issue is to present an up-to-date overview of the recent achievements in the field of 3D modelling and mapping in agriculture, as well as to identify the obstacles still ahead. Review and research papers on, but not limited to, the following topics are welcome:

  • Time of flight (ToF) and structured light (SL) technologies for PA
  • Structure from motion (SfM) methods for PA
  • 3D point cloud processing
  • Machine learning and artificial intelligence
  • Crop 3D modelling
  • Field 3D mapping
  • Navigation and control based on 3D point clouds
  • Agricultural UAVs
  • Agricultural UGVs
  • Agricultural robots

Dr. Lorenzo Comba
Dr. Jordi Llorens
Dr. Alessandro Biglia
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Precision agriculture
  • 3D point clouds
  • Sensing for automation
  • Crop monitoring
  • Drones and robotics
  • UAVs and UGVs
  • Feature extraction
  • Machine learning
  • Semantic segmentation

Published Papers (18 papers)


Research


22 pages, 10595 KiB  
Article
A Method for Predicting Canopy Light Distribution in Cherry Trees Based on Fused Point Cloud Data
by Yihan Yin, Gang Liu, Shanle Li, Zhiyuan Zheng, Yongsheng Si and Yang Wang
Remote Sens. 2023, 15(10), 2516; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15102516 - 10 May 2023
Cited by 2 | Viewed by 1446
Abstract
A proper canopy light distribution in fruit trees can improve photosynthetic efficiency, which is important for improving fruit yield and quality. Traditional methods of measuring light intensity in the canopy of fruit trees are time consuming, labor intensive and error prone. Therefore, a method for predicting canopy light distribution in cherry trees was proposed based on a three-dimensional (3D) cherry tree canopy point cloud model fused from multiple sources. First, to quickly and accurately reconstruct the 3D cherry tree point cloud model, we propose a global cherry tree alignment method based on a binocular depth camera vision system. For the point cloud data acquired by the two cameras, a RANSAC-based ORB calibration method is used to externally calibrate the cameras, and the point cloud is coarsely aligned using the pose transformation matrix between the cameras. For the point cloud data collected at different stations, a coarse point cloud alignment method based on intrinsic shape signature (ISS) key points is proposed. In addition, an improved iterative closest point (ICP) algorithm based on a bidirectional KD-tree is proposed to precisely align the coarse-aligned cherry tree point cloud data, achieving point cloud data fusion and a complete 3D cherry tree point cloud model. Finally, to reveal the pattern between the fruit tree canopy structure and the light distribution, a GBRT-based model for predicting the cherry tree canopy light distribution is proposed based on the established 3D cherry tree point cloud model, which takes the relative projected area, relative surface area and relative volume characteristics of the minimum bounding box of the point cloud model as inputs and the relative light intensity as output. The experimental results show that the GBRT-based model for predicting the cherry tree canopy illumination distribution is feasible.
The coefficient of determination between the predicted and actual values is 0.932, with a MAPE of 0.116, and the model can provide technical support for scientific and reasonable cherry tree pruning. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
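The alignment pipeline summarized in this abstract rests on two standard building blocks: nearest-neighbour correspondence search with a KD-tree and a closed-form rigid transform estimate. A minimal sketch of classic single-direction point-to-point ICP follows (the paper's variant additionally uses a bidirectional KD-tree check); the function names and parameters are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Kabsch/SVD solution for the rotation R and translation t
    minimizing ||R @ src_i + t - dst_i|| over paired points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(source, target, iters=30):
    """Point-to-point ICP: KD-tree correspondences + Kabsch update."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)               # nearest neighbours in target
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    dist, _ = tree.query(src)
    return src, float(dist.mean())
```

A bidirectional variant would also query target-to-source and keep only mutually consistent pairs, which makes the correspondences more robust on partially overlapping station scans.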

16 pages, 20442 KiB  
Article
Geomatic Data Fusion for 3D Tree Modeling: The Case Study of Monumental Chestnut Trees
by Mattia Balestra, Enrico Tonelli, Alessandro Vitali, Carlo Urbinati, Emanuele Frontoni and Roberto Pierdicca
Remote Sens. 2023, 15(8), 2197; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15082197 - 21 Apr 2023
Cited by 6 | Viewed by 2186
Abstract
In recent years, advancements in remote and proximal sensing technology have driven innovation in environmental and land surveys. The integration of various geomatics devices, such as reflex cameras, UAVs equipped with RGB cameras, and mobile laser scanners (MLS), allows detailed and precise surveys of monumental trees. With this data fusion method, we reconstructed three monumental 3D tree models, allowing the computation of tree metric variables such as diameter at breast height (DBH), total height (TH), crown basal area (CBA), crown volume (CV) and wood volume (WV), even providing information on the tree shape and its overall condition. We processed the point clouds in software such as CloudCompare, 3D Forest, R and MATLAB, whereas the photogrammetric processing was conducted with Agisoft Metashape. Three-dimensional tree models enhance accessibility to the data and allow for a wide range of potential applications, including the development of a tree information model (TIM), providing detailed data for monitoring tree health, growth, biomass and carbon sequestration. The encouraging results provide a basis for extending the virtualization of these monumental trees to a larger scale for conservation and monitoring. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)

16 pages, 7427 KiB  
Article
Cotton Growth Modelling Using UAS-Derived DSM and RGB Imagery
by Vasilis Psiroukis, George Papadopoulos, Aikaterini Kasimati, Nikos Tsoulias and Spyros Fountas
Remote Sens. 2023, 15(5), 1214; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15051214 - 22 Feb 2023
Cited by 1 | Viewed by 2005
Abstract
Modeling cotton plant growth is an important aspect of improving cotton yields and fiber quality and optimizing land management strategies. High-throughput phenotyping (HTP) systems, including those using high-resolution imagery from unmanned aerial systems (UAS) combined with sensor technologies, can accurately measure and characterize phenotypic traits such as plant height, canopy cover, and vegetation indices. However, manual assessment of plant characteristics is still widely used in practice. It is time-consuming, labor-intensive, and prone to human error. In this study, we investigated the use of a data-processing pipeline to estimate cotton plant height using UAS-derived visible-spectrum vegetation indices and photogrammetric products. Experiments were conducted at an experimental cotton field in Aliartos, Greece, using a DJI Phantom 4 UAS in five different stages of the 2022 summer cultivation season. Ground Control Points (GCPs) were marked in the field and used for georeferencing and model optimization. The imagery was used to generate dense point clouds, which were then used to create Digital Surface Models (DSMs), while specific Digital Elevation Models (DEMs) were interpolated from RTK GPS measurements. Three (3) vegetation indices were calculated using visible spectrum reflectance data from the generated orthomosaic maps, and ground coverage from the cotton canopy was also calculated by using binary masks. Finally, the correlations between the indices and crop height were examined. The results showed that vegetation indices, especially Green Chromatic Coordinate (GCC) and Normalized Excessive Green (NExG) indices, had high correlations with cotton height in the earlier growth stages and exceeded 0.70, while vegetation cover showed a more consistent trend throughout the season and exceeded 0.90 at the beginning of the season. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
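The visible-band indices named in this abstract can be computed directly from the RGB channels of an orthomosaic. A sketch using the usual chromatic-coordinate formulations follows; the Excess Green form shown here (2g - r - b on chromatic coordinates) is one common reading of the "NExG" family, and the 0.1 masking threshold is illustrative, not taken from the paper.

```python
import numpy as np

def visible_indices(rgb):
    """Per-pixel visible-spectrum indices from an RGB array
    (H x W x 3, any positive scale). Returns GCC and Excess Green maps."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-9                       # avoid division by zero
    gcc = g / total                                # Green Chromatic Coordinate
    exg = (2 * g - r - b) / total                  # Excess Green on chromatic coords
    return gcc, exg

def ground_cover(exg, threshold=0.1):
    """Canopy cover fraction from a binary vegetation mask obtained by
    thresholding Excess Green (threshold is a hypothetical choice)."""
    return float((exg > threshold).mean())
```

Ground coverage computed this way is the "binary mask" quantity the study correlates with crop height across the season.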

21 pages, 117905 KiB  
Article
Quality Analysis of a High-Precision Kinematic Laser Scanning System for the Use of Spatio-Temporal Plant and Organ-Level Phenotyping in the Field
by Felix Esser, Lasse Klingbeil, Lina Zabawa and Heiner Kuhlmann
Remote Sens. 2023, 15(4), 1117; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15041117 - 18 Feb 2023
Cited by 2 | Viewed by 1484
Abstract
Spatio–temporal determination of phenotypic traits, such as height, leaf angles, and leaf area, is important for the understanding of crop growth and development in modern agriculture and crop science. Measurements of these parameters for individual plants so far have been possible only in greenhouse environments using high-resolution 3D measurement techniques, such as laser scanning or image-based 3D reconstruction. Although aerial and ground-based vehicles equipped with laser scanners and cameras are more and more used in field conditions to perform large-scale phenotyping, these systems usually provide parameters more on the plot level rather than on a single plant or organ level. The reason for this is that the quality of the 3D information generated with those systems is mostly not high enough to reconstruct single plants or plant organs. This paper presents the usage of a robot equipped with a high-resolution mobile laser scanning system. We use the system, which is usually used to create high-definition 3D maps of urban environments, for plant and organ-level morphological phenotyping in agricultural field conditions. The analysis focuses on the point cloud quality as well as the system’s potential by defining quality criteria for the point cloud and system and by using them to evaluate the measurements taken in an experimental agricultural field with different crops. Criteria for evaluation are the georeferencing accuracy, point precision, spatial resolution, and point cloud completeness. Additional criteria are the large-scale scan efficiency and the potential for automation. Wind-induced plant jitter that may affect the crop point cloud quality is discussed afterward. To show the system’s potential, exemplary phenotypic traits of plant height, leaf area, and leaf angles for different crops are extracted based on the point clouds. 
The results show a georeferencing accuracy of 1–2 cm, a point precision on crop surfaces of 1–2 mm, and a spatial resolution of just a few millimeters. Point clouds become incomplete in the later stages of growth since the vegetation is denser. Wind-induced plant jitters can lead to distorted crop point clouds depending on wind force and crop size. The phenotypic parameter extraction of leaf area, leaf angles, and plant height from the system’s point clouds highlight the outstanding potential for 3D crop phenotyping on the plant-organ level in agricultural fields. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
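The "point precision on crop surfaces of 1–2 mm" quoted above is the kind of figure typically estimated by fitting a local plane to a small surface patch and taking the RMS of the point-to-plane residuals. A minimal sketch of that check (illustrative, not the authors' exact procedure):

```python
import numpy as np

def plane_fit_rms(points):
    """RMS distance of 3D points to their least-squares plane.
    The plane normal is the right singular vector of the centered
    cloud with the smallest singular value."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    normal = Vt[-1]                    # direction of least variance
    residuals = centered @ normal      # signed point-to-plane distances
    return float(np.sqrt(np.mean(residuals ** 2)))
```

Applied to patches extracted from flat crop or soil surfaces, the returned RMS gives a direct, unit-consistent estimate of point precision.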

16 pages, 5126 KiB  
Article
Geometrical Characterization of Hazelnut Trees in an Intensive Orchard by an Unmanned Aerial Vehicle (UAV) for Precision Agriculture Applications
by Alessandra Vinci, Raffaella Brigante, Chiara Traini and Daniela Farinelli
Remote Sens. 2023, 15(2), 541; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15020541 - 16 Jan 2023
Cited by 10 | Viewed by 2387
Abstract
Knowledge of tree size is of great importance for the precision management of a hazelnut orchard. In fact, it has been shown that site-specific crop management allows for the best possible management and efficiency of the use of inputs. Generally, measurements of tree parameters are carried out using manual techniques that are time-consuming, labor-intensive and not very precise. The aim of this study was to propose, evaluate and validate a simple and innovative procedure using images acquired by an unmanned aerial vehicle (UAV) for canopy characterization in an intensive hazelnut orchard. The parameters considered were the radius (Rc), the height of the canopy (hc), the height of the tree (htree) and of the trunk (htrunk). Two different methods were used for the assessment of the canopy volume using the UAV images. The performance of the method was evaluated by comparing manual and UAV data using the Pearson correlation coefficient and root mean square error (RMSE). High correlation values were obtained for Rc, hc and htree while a very low correlation was obtained for htrunk. The method proposed for the volume calculation was promising. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)

26 pages, 30996 KiB  
Article
Analysis of UAS-LiDAR Ground Points Classification in Agricultural Fields Using Traditional Algorithms and PointCNN
by Nadeem Fareed, Joao Paulo Flores and Anup Kumar Das
Remote Sens. 2023, 15(2), 483; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15020483 - 13 Jan 2023
Cited by 8 | Viewed by 3991
Abstract
Classifying bare earth (ground) points from Light Detection and Ranging (LiDAR) point clouds is well-established research in the forestry, topography, and urban domains using point clouds acquired by Airborne LiDAR Systems (ALS) at average point densities (≈2 points per square meter (pts/m2)). The paradigm of point cloud collection has shifted with the advent of unmanned aerial systems (UAS) carrying affordable laser scanners of commercial utility (e.g., the DJI Zenmuse L1 sensor) and the unprecedented repeatability of UAS-LiDAR surveys. Therefore, there is an immediate need to investigate existing methods, and to develop new ground classification methods, for UAS-LiDAR. In this paper, for the first time, traditional ground classification algorithms and modern machine learning methods were investigated to filter ground from point clouds of high-density UAS-LiDAR data (≈900 pts/m2) over five agricultural fields in North Dakota, USA. To this end, we tested frequently used ground classification algorithms (Cloth Simulation Function (CSF), Progressive Morphological Filter (PMF), Multiscale Curvature Classification (MCC), and the ArcGIS ground classification algorithm) and trained the PointCNN deep learning model. We investigated two aspects of the ground classification algorithms and PointCNN: (a) classification accuracy of the optimized ground classification algorithms (i.e., with fine adjustment of user-defined parameters) and PointCNN over the training site, and (b) transferability potential over four diverse test agricultural fields. The well-established evaluation metrics of omission error, commission error, and total error, along with kappa coefficients, showed that deep learning outperforms the traditional ground classification algorithms in both aspects: (a) overall classification accuracy, and (b) transferability over diverse agricultural fields. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
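The omission, commission, and total errors and the kappa coefficient used in this evaluation all derive from the 2 x 2 ground/non-ground confusion matrix. A small sketch with hypothetical counts (not the paper's data):

```python
def ground_filter_metrics(tp, fn, fp, tn):
    """Error metrics for binary ground classification.
    tp: ground labelled ground        fn: ground labelled non-ground
    fp: non-ground labelled ground    tn: non-ground labelled non-ground
    """
    n = tp + fn + fp + tn
    omission = fn / (tp + fn)      # ground points missed (Type I error)
    commission = fp / (tp + fp)    # non-ground wrongly kept as ground (Type II error)
    total = (fn + fp) / n
    observed = (tp + tn) / n                                     # overall agreement
    expected = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2
    kappa = (observed - expected) / (1 - expected)               # chance-corrected agreement
    return omission, commission, total, kappa
```

For example, 40 correctly filtered ground points, 10 missed, 5 false ground points and 45 correct non-ground points give a total error of 0.15 and a kappa of 0.7.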

20 pages, 17374 KiB  
Article
Accuracy Evaluation and Branch Detection Method of 3D Modeling Using Backpack 3D Lidar SLAM and UAV-SfM for Peach Trees during the Pruning Period in Winter
by Poching Teng, Yu Zhang, Takayoshi Yamane, Masayuki Kogoshi, Takeshi Yoshida, Tomohiko Ota and Junichi Nakagawa
Remote Sens. 2023, 15(2), 408; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15020408 - 09 Jan 2023
Cited by 1 | Viewed by 1708
Abstract
In the winter pruning operation of deciduous fruit trees, the number of pruned branches and the structure of the main branches greatly influence the future growth of the fruit trees and the final harvest volume. Terrestrial laser scanning (TLS) is considered a feasible method for the 3D modeling of trees, but it is not suitable for large-scale inspection. The simultaneous localization and mapping (SLAM) technique makes it possible to move the lidar on the ground and model quickly, but its accuracy is not sufficient for plant detection. Therefore, in this study, we used UAV-SfM and 3D lidar SLAM techniques to build 3D models of peach trees for winter pruning. We then compared and analyzed these models and further proposed a method to distinguish branches in 3D point clouds by spatial point cloud density. The results showed that the 3D lidar SLAM technique had a shorter modeling time and higher accuracy than UAV-SfM for the winter pruning period of peach trees. Compared against the fresh weight of the pruned branches, the method achieved the smallest RMSE of 3084 g with an R2 = 0.93. In the branch detection part, branches with diameters greater than 3 cm were differentiated successfully, both before and after pruning. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)

23 pages, 8412 KiB  
Article
Vine Canopy Reconstruction and Assessment with Terrestrial Lidar and Aerial Imaging
by Igor Petrović, Matej Sečnik, Marko Hočevar and Peter Berk
Remote Sens. 2022, 14(22), 5894; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14225894 - 21 Nov 2022
Cited by 4 | Viewed by 1718
Abstract
For successful dosing of plant protection products, the characteristics of the vine canopies, on which the spray amount should be based, must be known. In the field experiment, we compared two optical experimental methods, terrestrial lidar and aerial photogrammetry, with manual defoliation of selected vines. In agreement with other authors, our results show that both terrestrial lidar and aerial photogrammetry were able to represent the canopy well, with correlation coefficients around 0.9 between the measured variables and the number of leaves. We found that in the case of aerial photogrammetry, significantly more points were found in the point cloud, but this depended on the choice of the ground sampling distance. Our results show that in the case of aerial UAS photogrammetry, subdividing the vine canopy into 5 × 5 cm segments gives the best representation of the volume of vine canopies. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)

19 pages, 5977 KiB  
Article
Virtual Laser Scanning Approach to Assessing Impact of Geometric Inaccuracy on 3D Plant Traits
by Michael Henke and Evgeny Gladilin
Remote Sens. 2022, 14(19), 4727; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14194727 - 21 Sep 2022
Cited by 2 | Viewed by 1961
Abstract
In recent years, 3D imaging has become an increasingly popular screening modality for high-throughput plant phenotyping. The 3D scans provide a rich source of information about architectural plant organization which cannot always be derived from multi-view projection 2D images. On the other hand, 3D scanning is associated with an inherent inaccuracy in the assessment of geometrically complex plant structures, for example, due to the loss of geometrical information on reflective, shadowed, inclined and/or curved leaf surfaces. Here, we aim to quantitatively assess the impact of geometrical inaccuracies in 3D plant data on phenotypic descriptors of four different shoot architectures, including tomato, maize, cucumber, and Arabidopsis. For this purpose, virtual laser scanning of synthetic models of these four plant species was used. This approach was applied to simulate different scenarios of 3D model perturbation, as well as the inherent loss of geometrical information in shadowed plant regions. Our experimental results show that different plant traits exhibit different and, in general, plant-type-specific dependency on the level of geometrical perturbations. However, some phenotypic traits tend to be more or less correlated with the degree of geometrical inaccuracies in assessing 3D plant architecture. In particular, integrative traits, such as plant area, volume, and physiologically important light absorption, show stronger correlation with the effectively visible plant area than linear shoot traits, such as total plant height and width, across different scenarios of geometrical perturbation. Our study addresses an important question of the reliability and accuracy of 3D plant measurements and suggests solutions for consistent quantitative analysis and interpretation of imperfect data by combining measurement results with computational simulation of synthetic plant models. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)

18 pages, 6515 KiB  
Article
3D Distance Filter for the Autonomous Navigation of UAVs in Agricultural Scenarios
by Cesare Donati, Martina Mammarella, Lorenzo Comba, Alessandro Biglia, Paolo Gay and Fabrizio Dabbene
Remote Sens. 2022, 14(6), 1374; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14061374 - 11 Mar 2022
Cited by 5 | Viewed by 3280
Abstract
In precision agriculture, remote sensing is an essential phase in assessing crop status and variability when considering both the spatial and the temporal dimensions. To this aim, the use of unmanned aerial vehicles (UAVs) is growing in popularity, allowing for the autonomous performance of a variety of in-field tasks which are not limited to scouting or monitoring. To enable autonomous navigation, however, a crucial capability lies in accurately locating the vehicle within the surrounding environment. This task becomes challenging in agricultural scenarios where the crops and/or the adopted trellis systems can negatively affect GPS signal reception and localisation reliability. A viable solution to this problem can be the exploitation of high-accuracy 3D maps, which provide important data regarding crop morphology, as an additional input of the UAVs’ localisation system. However, the management of such big data may be difficult in real-time applications. In this paper, an innovative 3D sensor fusion approach is proposed, which combines the data provided by onboard proprioceptive (i.e., GPS and IMU) and exteroceptive (i.e., ultrasound) sensors with the information provided by a georeferenced 3D low-complexity map. In particular, the parallel-cuts ellipsoid method is used to merge the data from the distance sensors and the 3D map. Then, the improved estimation of the UAV location is fused with the data provided by the GPS and IMU sensors, using a Kalman-based filtering scheme. The simulation results prove the efficacy of the proposed navigation approach when applied to a quadrotor that autonomously navigates between vine rows. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
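The final fusion step described in this abstract, merging the map-corrected position with the GPS/IMU prediction through a Kalman-based filtering scheme, reduces in the scalar case to the standard measurement update. A minimal sketch (the numbers in the usage note are illustrative, not from the paper):

```python
def kalman_update(x_prior, p_prior, z, r):
    """Scalar Kalman measurement update: fuse a prior state estimate
    (mean x_prior, variance p_prior) with a measurement z of variance r."""
    k = p_prior / (p_prior + r)            # Kalman gain
    x_post = x_prior + k * (z - x_prior)   # corrected state
    p_post = (1.0 - k) * p_prior           # reduced uncertainty
    return x_post, p_post
```

For instance, fusing a GPS/IMU-predicted position of 10.0 m (variance 0.5) with a map-aided fix of 10.4 m (variance 0.2) yields an estimate between the two whose posterior variance, 1/(1/0.5 + 1/0.2), is smaller than either input, which is exactly why the map-derived correction tightens the UAV's localisation.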

20 pages, 91121 KiB  
Article
Comparison of Aerial and Ground 3D Point Clouds for Canopy Size Assessment in Precision Viticulture
by Andrea Pagliai, Marco Ammoniaci, Daniele Sarri, Riccardo Lisci, Rita Perria, Marco Vieri, Mauro Eugenio Maria D’Arcangelo, Paolo Storchi and Simon-Paolo Kartsiotis
Remote Sens. 2022, 14(5), 1145; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14051145 - 25 Feb 2022
Cited by 19 | Viewed by 3204
Abstract
In precision viticulture, the intra-field spatial variability characterization is a crucial step to efficiently use natural resources by lowering the environmental impact. In recent years, technologies such as Unmanned Aerial Vehicles (UAVs), Mobile Laser Scanners (MLS), multispectral sensors, Mobile Apps (MA) and Structure from Motion (SfM) techniques enabled the possibility to characterize this variability with low efforts. The study aims to evaluate, compare and cross-validate the potentiality and the limits of several tools (UAV, MA, MLS) to assess the vine canopy size parameters (thickness, height, volume) by processing 3D point clouds. Three trials were carried out to test the different tools in a vineyard located in the Chianti Classico area (Tuscany, Italy). Each test was made of a UAV flight, an MLS scanning over the vineyard and a MA acquisition over 48 geo-referenced vines. The Leaf Area Index (LAI) were also assessed and taken as reference value. The results showed that the analyzed tools were able to correctly discriminate between zones with different canopy size characteristics. In particular, the R2 between the canopy volumes acquired with the different tools was higher than 0.7, being the highest value of R2 = 0.78 with a RMSE = 0.057 m3 for the UAV vs. MLS comparison. The highest correlations were found between the height data, being the highest value of R2 = 0.86 with a RMSE = 0.105 m for the MA vs. MLS comparison. For the thickness data, the correlations were weaker, being the lowest value of R2 = 0.48 with a RMSE = 0.052 m for the UAV vs. MLS comparison. The correlation between the LAI and the canopy volumes was moderately strong for all the tools with the highest value of R2 = 0.74 for the LAI vs. V_MLS data and the lowest value of R2 = 0.69 for the LAI vs. V_UAV data. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
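The cross-validation statistics reported in this abstract (R2 and RMSE between pairs of tools) are straightforward to reproduce once one series is chosen as the reference. A sketch, assuming the coefficient-of-determination form of R2 with the first tool treated as ground truth (some studies instead report the squared Pearson correlation):

```python
import numpy as np

def agreement(reference, estimate):
    """R^2 (coefficient of determination, `reference` as ground truth)
    and RMSE between two series of canopy measurements."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    resid = estimate - reference
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    ss_res = float(np.sum(resid ** 2))                          # residual sum of squares
    ss_tot = float(np.sum((reference - reference.mean()) ** 2)) # total sum of squares
    return 1.0 - ss_res / ss_tot, rmse
```

Run per variable (volume, height, thickness) and per tool pair (UAV vs. MLS, MA vs. MLS, and so on), this yields tables of the form quoted above.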

17 pages, 8356 KiB  
Article
Determination of the Optimal Orientation of Chinese Solar Greenhouses Using 3D Light Environment Simulations
by Anhua Liu, Demin Xu, Michael Henke, Yue Zhang, Yiming Li, Xingan Liu and Tianlai Li
Remote Sens. 2022, 14(4), 912; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14040912 - 14 Feb 2022
Cited by 2 | Viewed by 2296
Abstract
With the continuous depletion of conventional resources, solar energy is expected to become the most widely used sustainable energy source. To improve solar energy efficiency in Chinese Solar Greenhouses (CSG), the effect of CSG orientation on intercepted solar radiation was systematically studied. Using a 3D CSG model and a detailed crop canopy model, the light environment within the CSG was optimized. Taking the most widely used Liao-Shen type Chinese solar greenhouse (CSG-LS) as the prototype, the simulation was fully verified. The solar radiation intercepted by the maintenance structures and crops was used as the evaluation index. The results showed that the maintenance structures intercepted the most solar radiation at CSG orientations of 4–6° south to west (S-W) in the 36.8° N and 38° N areas, 8–10° S-W in the 41.8° N area, and 2–4° south to east (S-E) in the 43.6° N area. The solar radiation intercepted by the crop canopy was highest at an orientation of 2–4° S-W in the 36.8° N, 38° N and 43.6° N areas, and 4–6° S-W in the 41.8° N area. Furthermore, the proposed model could provide scientific guidance for greenhouse crop modelling.
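The idea of sweeping candidate orientations can be illustrated with a single-surface, clear-sky toy model; this is not the paper's full 3D greenhouse simulation, and all angles and irradiance values below are illustrative assumptions.

```python
import numpy as np

def intercepted(orientation_deg, sun_azimuth_deg, sun_elevation_deg,
                area=1.0, direct=800.0):
    """Direct-beam radiation (W) intercepted by a vertical, nominally
    south-facing surface rotated by `orientation_deg` (positive = S-W).
    Azimuths are degrees clockwise from north."""
    az = np.radians(180.0 + orientation_deg)        # surface-normal azimuth
    normal = np.array([np.sin(az), np.cos(az), 0.0])
    sa, se = np.radians(sun_azimuth_deg), np.radians(sun_elevation_deg)
    sun = np.array([np.sin(sa) * np.cos(se),
                    np.cos(sa) * np.cos(se),
                    np.sin(se)])                     # unit vector toward sun
    return max(0.0, direct * area * float(normal @ sun))

# sweep candidate orientations for one afternoon sun position
orients = np.arange(-10, 11, 2)   # degrees: negative = S-E, positive = S-W
best = max(orients, key=lambda o: intercepted(o, sun_azimuth_deg=210,
                                              sun_elevation_deg=30))
```

In the real study this inner evaluation is replaced by ray tracing over the full greenhouse geometry and crop canopy, integrated over the season.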

21 pages, 8373 KiB  
Article
High-Throughput Legume Seed Phenotyping Using a Handheld 3D Laser Scanner
by Xia Huang, Shunyi Zheng and Ningning Zhu
Remote Sens. 2022, 14(2), 431; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14020431 - 17 Jan 2022
Cited by 5 | Viewed by 2461
Abstract
High-throughput phenotyping involves many samples and diverse trait types. With the goal of automatic measurement and batch data processing, a novel method for high-throughput legume seed phenotyping is proposed. A pipeline of automatic data acquisition and processing is proposed, including point cloud acquisition, single-seed extraction, pose normalization, three-dimensional (3D) reconstruction, and trait estimation. First, a handheld laser scanner is used to obtain the legume seed point clouds in batches. Second, a combined segmentation method using the RANSAC method, the Euclidean segmentation method, and the dimensionality of the features is proposed for single-seed extraction. Third, a coordinate rotation method based on PCA and the table normal is proposed for pose normalization. Fourth, a fast symmetry-based 3D reconstruction method is built to reconstruct a 3D model of the single seed, and the Poisson surface reconstruction method is used for surface reconstruction. Finally, 34 traits, including 11 morphological traits, 11 scale factors, and 12 shape factors, are automatically calculated. A total of 2500 samples of five kinds of legume seeds were measured. Experimental results show that the average accuracies of scanning and segmentation are 99.52% and 100%, respectively. The overall average reconstruction error is 0.014 mm. The average morphological trait measurement accuracy is submillimeter, and the average relative percentage error is within 3%. The proposed method provides a feasible approach to batch data acquisition and processing, which will facilitate automation in high-throughput legume seed phenotyping.
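The PCA-based pose-normalization step can be sketched as follows, assuming only that each seed has already been extracted as its own point cloud; the random elongated cloud below stands in for a scanned seed.

```python
import numpy as np

def normalize_pose(points):
    """Rotate a single-seed point cloud so its principal axes align with
    x (longest) through z (shortest), with the centroid at the origin.
    A minimal PCA sketch of the pose-normalization idea."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    # eigen-decomposition of the covariance gives the principal axes
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(eigvals)[::-1]       # longest axis first
    R = eigvecs[:, order]
    if np.linalg.det(R) < 0:                # keep a right-handed frame
        R[:, -1] *= -1
    return centered @ R

# synthetic elongated "seed": 500 points stretched along one axis
rng = np.random.default_rng(0)
seed_pts = rng.normal(size=(500, 3)) * np.array([5.0, 2.0, 1.0])
aligned = normalize_pose(seed_pts)
```

After normalization, length, width and thickness measurements reduce to extents along the coordinate axes, which is what makes batch trait extraction automatic.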

21 pages, 24842 KiB  
Article
Four-Dimensional Plant Phenotyping Model Integrating Low-Density LiDAR Data and Multispectral Images
by Manuel García Rincón, Diego Mendez and Julian D. Colorado
Remote Sens. 2022, 14(2), 356; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14020356 - 13 Jan 2022
Cited by 5 | Viewed by 2551
Abstract
High-throughput platforms for plant phenotyping usually demand expensive high-density LiDAR devices with computationally intensive methods for characterizing several morphological variables. In fact, most platforms require offline processing to achieve a comprehensive plant architecture model. In this paper, we propose a low-cost plant phenotyping system based on the sensory fusion of low-density LiDAR data with multispectral imagery. Our contribution is twofold: (i) an integrated phenotyping platform with embedded processing methods capable of providing real-time morphological data, and (ii) a multi-sensor fusion algorithm that precisely matches the 3D LiDAR point-cloud data with the corresponding multispectral information, aiming at the consolidation of four-dimensional plant models. We conducted extensive experimental tests on two plants with different morphological structures, demonstrating the potential of the proposed solution for enabling real-time plant architecture modeling in the field based on low-density LiDARs.
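At the core of any LiDAR-to-multispectral matching is a camera projection; a minimal pinhole sketch (ignoring lens distortion and time synchronization, with made-up intrinsics, not the authors' algorithm) might look like:

```python
import numpy as np

def project_points(points, K, R, t):
    """Project 3D points (world frame) into pixel coordinates of a camera
    with intrinsic matrix K and pose (R, t). Each projected (u, v) can then
    be used to sample the multispectral image for the matching point."""
    pts = np.asarray(points, float)
    cam = R @ pts.T + t.reshape(3, 1)   # world -> camera frame
    uvw = K @ cam                       # camera frame -> image plane
    return (uvw[:2] / uvw[2]).T         # perspective divide -> (u, v)

K = np.array([[800.0,   0.0, 320.0],    # hypothetical intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)           # camera at world origin
pix = project_points([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0]], K, R, t)
```

Attaching the sampled spectral values to each 3D point over successive acquisitions is what yields the "four-dimensional" (3D + spectral/time) plant model.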

20 pages, 7939 KiB  
Article
Canopy Volume Extraction of Citrus reticulate Blanco cv. Shatangju Trees Using UAV Image-Based Point Cloud Deep Learning
by Yuan Qi, Xuhua Dong, Pengchao Chen, Kyeong-Hwan Lee, Yubin Lan, Xiaoyang Lu, Ruichang Jia, Jizhong Deng and Yali Zhang
Remote Sens. 2021, 13(17), 3437; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13173437 - 30 Aug 2021
Cited by 13 | Viewed by 4092
Abstract
Automatic acquisition of the canopy volume parameters of the Citrus reticulate Blanco cv. Shatangju tree is of great significance to precision management of the orchard. This research combined a point cloud deep learning algorithm with a volume calculation algorithm to segment the canopy of Citrus reticulate Blanco cv. Shatangju trees. The 3D (Three-Dimensional) point cloud model of a Citrus reticulate Blanco cv. Shatangju orchard was generated from UAV tilt photogrammetry images. The segmentation effects of three deep learning models, PointNet++, MinkowskiNet and FPConv, on Shatangju trees and the ground were compared. Three volume algorithms, namely convex hull by slices, a voxel-based method and the 3D convex hull, were applied to calculate the volume of Shatangju trees. Model accuracy was evaluated using the coefficient of determination (R2) and Root Mean Square Error (RMSE). The results show that the overall accuracy of the MinkowskiNet model (94.57%) is higher than that of the other two models, indicating the best segmentation effect. The 3D convex hull algorithm achieved the highest R2 (0.8215) and the lowest RMSE (0.3186 m3) for the canopy volume calculation, and thus best reflects the real volume of Citrus reticulate Blanco cv. Shatangju trees. The proposed method is capable of rapid and automatic acquisition of the canopy volume of Citrus reticulate Blanco cv. Shatangju trees.
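Of the three volume algorithms, the voxel-based method is the simplest to sketch; the cube-shaped "canopy" below is synthetic, chosen so the true volume (1 m3) is known in advance.

```python
import numpy as np

def voxel_volume(points, voxel=0.1):
    """Estimate canopy volume by counting occupied voxels: quantize each
    point to a grid cell of edge length `voxel` (metres) and sum the
    volumes of the unique occupied cells."""
    idx = np.floor(np.asarray(points, float) / voxel).astype(int)
    occupied = np.unique(idx, axis=0)       # one row per occupied cell
    return occupied.shape[0] * voxel ** 3

# synthetic canopy: points filling a 1 m cube, so the true volume is 1 m^3
rng = np.random.default_rng(1)
vol = voxel_volume(rng.uniform(0.0, 1.0, size=(20000, 3)), voxel=0.1)
```

The voxel size trades off bias in opposite directions: too coarse overestimates volume around sparse foliage, too fine leaves internal gaps uncounted, which is one reason the 3D convex hull performed differently in the study.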

48 pages, 48288 KiB  
Article
How to Build a 2D and 3D Aerial Multispectral Map?—All Steps Deeply Explained
by André Vong, João P. Matos-Carvalho, Piero Toffanin, Dário Pedro, Fábio Azevedo, Filipe Moutinho, Nuno Cruz Garcia and André Mora
Remote Sens. 2021, 13(16), 3227; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163227 - 13 Aug 2021
Cited by 8 | Viewed by 4282
Abstract
The increased development of camera resolution, processing power, and aerial platforms has helped to create more cost-efficient approaches to capturing and generating point clouds to assist scientific fields. The continuous development of methods to produce three-dimensional models from two-dimensional images, such as Structure from Motion (SfM) and Multi-View Stereopsis (MVS), has allowed the resolution of the produced models to be improved significantly. Taking inspiration from the free and accessible workflow made available by OpenDroneMap, a detailed analysis of the processes is presented in this paper. As of the writing of this paper, no literature was found that described in detail the steps and processes needed to create digital models in two or three dimensions from aerial images. With this in mind, and based on the workflow of OpenDroneMap, a detailed study was performed. The digital model reconstruction process takes the initial aerial images obtained from the field survey and passes them through a series of stages. Each stage produces a product that is used by the following stage; for example, the initial stage produces a sparse reconstruction, obtained by extracting and matching image features, which the following step densifies. Additionally, from the analysis of the workflow, adaptations were made to the standard workflow in order to increase the compatibility of the developed system with different types of image sets. In particular, adaptations focused on thermal imagery were made. Because thermal images contain few strong features and are therefore difficult to match, a modification was implemented so that thermal models could be produced alongside the already implemented processes for multispectral and RGB image sets.
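One central stage of such a pipeline, turning matched image features into sparse 3D points, is linear triangulation. The numpy-only sketch below uses two hypothetical camera poses and is not OpenDroneMap's implementation.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from a feature matched
    in two views. P1, P2 are 3x4 projection matrices; uv1, uv2 are the
    pixel coordinates of the match in each image."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

def uv(P, X):
    """Project a 3D point with projection matrix P (for building test data)."""
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# two hypothetical cameras: identity pose and a 1 m baseline along x
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
X_hat = triangulate(P1, P2, uv(P1, X_true), uv(P2, X_true))
```

Running this over every feature match yields the sparse point cloud that the densification (MVS) stage then refines.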

20 pages, 10510 KiB  
Article
EasyIDP: A Python Package for Intermediate Data Processing in UAV-Based Plant Phenotyping
by Haozhou Wang, Yulin Duan, Yun Shi, Yoichiro Kato, Seishi Ninomiya and Wei Guo
Remote Sens. 2021, 13(13), 2622; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13132622 - 03 Jul 2021
Cited by 14 | Viewed by 5742
Abstract
Unmanned aerial vehicle (UAV) and structure from motion (SfM) photogrammetry techniques are now widely used for field-based, high-throughput plant phenotyping, but some of the intermediate processes in the workflow remain manual. For example, geographic information system (GIS) software is used to manually assess the 2D/3D field reconstruction quality and to crop regions of interest (ROIs) from the whole field. In addition, extracting phenotypic traits from raw UAV images can be more effective than extracting them directly from the digital orthomosaic (DOM). Currently, no easy-to-use tools are available to perform these tasks for commonly used commercial SfM software such as Pix4D and Agisoft Metashape. Hence, an open source software package called easy intermediate data processor (EasyIDP; MIT license) was developed to decrease the workload in the intermediate data processing mentioned above. The functions of the proposed package include (1) an ROI cropping module, assisting in reconstruction quality assessment and cropping ROIs from the whole field, and (2) an ROI reversing module, projecting ROIs onto the relevant raw images. The results showed that both the cropping and reversing modules work as expected. Moreover, the effects of ROI height selection and of the reversed ROI position on the raw images were discussed. This tool shows great potential for decreasing the workload of data annotation for machine learning applications.
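The ROI-cropping idea can be illustrated with a minimal rectangular crop of an (x, y, z) point set; EasyIDP itself handles arbitrary polygons and georeferenced rasters through its own API, so this numpy snippet is only a conceptual stand-in, not the package's interface.

```python
import numpy as np

def crop_roi(points, xmin, ymin, xmax, ymax):
    """Keep only the points whose (x, y) coordinates fall inside a
    rectangular plot ROI; z (height) is carried along unchanged."""
    pts = np.asarray(points, float)
    mask = ((pts[:, 0] >= xmin) & (pts[:, 0] <= xmax) &
            (pts[:, 1] >= ymin) & (pts[:, 1] <= ymax))
    return pts[mask]

# hypothetical field points (easting, northing, height)
field = np.array([[0.5, 0.5, 1.2],
                  [3.0, 1.0, 0.9],
                  [0.2, 0.9, 1.1]])
plot = crop_roi(field, 0.0, 0.0, 1.0, 1.0)   # hypothetical 1 m x 1 m plot
```

The "reversing" module does the inverse mapping: given a field ROI and the SfM camera poses, it finds where that ROI lands in each raw image, which is what makes per-image trait extraction and annotation possible.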

Other

20 pages, 14723 KiB  
Technical Note
Technical Challenges for Multi-Temporal and Multi-Sensor Image Processing Surveyed by UAV for Mapping and Monitoring in Precision Agriculture
by Alessandro Lambertini, Emanuele Mandanici, Maria Alessandra Tini and Luca Vittuari
Remote Sens. 2022, 14(19), 4954; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14194954 - 04 Oct 2022
Cited by 7 | Viewed by 1900
Abstract
Precision Agriculture (PA) is an approach to maximizing crop productivity in a sustainable manner. PA requires up-to-date, accurate and georeferenced information on crops, which can be collected by different sensors from ground, aerial or satellite platforms. The use of optical and thermal sensors from an Unmanned Aerial Vehicle (UAV) platform is an emerging solution for mapping and monitoring in PA, yet many technological challenges are still open. This technical note discusses the choice of UAV type and its scientific payload for surveying a sample area of 5 hectares, as well as the procedures for replicating the study on a larger scale. This case study is an ideal opportunity to test best practices for combining the requirements of PA surveys with the limitations imposed by local UAV regulations. To follow crop development at various stages, nine flights over a period of four months were planned and executed in the field area. The use of ground control points for optimal georeferencing and accurate alignment of maps created by multi-temporal processing is analyzed. Output maps are produced in both visible and thermal bands, after appropriate strip alignment, mosaicking, sensor calibration, and processing with Structure from Motion techniques. The discussion of strategies, checklists, workflow, and processing is backed by data from more than 5000 optical and radiometric thermal images taken during five hours of flight time across nine flights throughout the crop season. The geomatics challenges of a georeferenced survey for PA using UAVs are the key focus of this technical note. Accurate maps derived from these multi-temporal and multi-sensor surveys feed Geographic Information Systems (GIS) and Decision Support Systems (DSS) to benefit PA in a multidisciplinary approach.
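A standard way to quantify the map alignment that ground control points provide is the horizontal RMSE of GCP residuals; the residual values below are invented for illustration, not results from the survey.

```python
import numpy as np

def gcp_rmse(measured, reference):
    """Horizontal RMSE between GCP coordinates read off a georeferenced
    mosaic and their surveyed reference positions. Inputs are (n, 2)
    arrays of easting/northing in metres; used per epoch to check that
    multi-temporal maps stay co-registered."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))

# hypothetical surveyed GCPs and their positions measured on one mosaic
ref = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
mos = ref + np.array([[0.02, -0.01], [0.01, 0.02], [-0.02, 0.0], [0.0, 0.01]])
rmse = gcp_rmse(mos, ref)
```

Computing this per flight makes drift between the nine epochs visible before the maps are handed to GIS or DSS layers.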