Image Processing in Agriculture and Forestry

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (31 December 2016) | Viewed by 102257

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Prof. Dr. Gonzalo Pajares Martinsanz
Guest Editor
Department of Software Engineering and Artificial Intelligence, Faculty of Informatics, Complutense University of Madrid, 28040 Madrid, Spain
Interests: computer vision; image processing; pattern recognition; 3D image reconstruction; spatio-temporal image change detection and tracking; fusion and registration from imaging sensors; super-resolution from low-resolution image sensors

Prof. Dr. Francisco Rovira-Más
Guest Editor
Agricultural Robotics Laboratory, Polytechnic University of Valencia, 46022 Valencia, Spain
Interests: agricultural robotics and automation; intelligent vehicles; artificial intelligence; machine vision; mechatronics; control systems; autonomous navigation; stereoscopic vision; fluid power; automatic steering; off-road equipment; precision agriculture

Special Issue Information

Dear Colleagues,

Agriculture and forestry are areas in which imaging-based systems play an important role: they enable a more efficient use of resources while facilitating tasks that are often difficult or dangerous.

Image acquisition, processing and interpretation are oriented toward the efficiency of agricultural activities.

The following is a list of the main topics covered by this Special Issue. The issue will, however, not be limited to these topics:

  • Image acquisition devices and systems in outdoor environments.
  • Image processing techniques: color, segmentation, texture analysis, image fusion.
  • Computer vision-based approaches: pattern recognition, 3D structure and motion.
  • Applications: autonomous agricultural vehicles, obstacle avoidance, crop row detection, yield and quality estimation, plant health, tree monitoring, crown height, bark thickness, communications.

Prof. Dr. Gonzalo Pajares Martinsanz
Prof. Dr. Francisco Rovira-Más
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (12 papers)


Research


Article
Early Yield Prediction Using Image Analysis of Apple Fruit and Tree Canopy Features with Neural Networks
by Hong Cheng, Lutz Damerow, Yurui Sun and Michael Blanke
J. Imaging 2017, 3(1), 6; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3010006 - 19 Jan 2017
Cited by 85 | Viewed by 12711
Abstract
(1) Background: Since early yield prediction is relevant for resource requirements of harvesting and marketing in the whole fruit industry, this paper presents a new approach of using image analysis and tree canopy features to predict early yield with artificial neural networks (ANN); (2) Methods: Two back-propagation neural network (BPNN) models were developed, one for the early period after natural fruit drop in June and one for the ripening period. Within the same periods, images of apple cv. “Gala” trees were captured from an orchard near Bonn, Germany. Two sample sets were developed to train and test the models; each set included 150 samples from the 2009 and 2010 growing seasons. For each sample (each canopy image), pixels were segmented into fruit, foliage, and background using image segmentation. Four features were extracted from each canopy image and used as inputs: total cross-sectional area of fruits, fruit number, total cross-sectional area of small fruits, and cross-sectional area of foliage. With the actual weighed yield per tree as a target, the BPNN was employed to learn their mutual relationship as a prerequisite for developing the prediction; (3) Results: For the BPNN model of the early period after June drop, the correlation coefficient (R2) between the estimated and the actual weighed yield, mean forecast error (MFE), mean absolute percentage error (MAPE), and root mean square error (RMSE) were 0.81, −0.05, 10.7%, and 2.34 kg/tree, respectively. For the model of the ripening period, these measures were 0.83, −0.03, 8.9%, and 2.3 kg/tree, respectively. In 2011, the two previously developed models were used to predict apple yield. The RMSE and R2 values between the estimated and harvested apple yield were 2.6 kg/tree and 0.62 for the early period (small, green fruit) and improved near harvest (red, large fruit) to 2.5 kg/tree and 0.75 for a tree with ca. 18 kg yield per tree. For further method verification, cv. “Pinova” apple trees were used as another variety in 2012 to develop the BPNN prediction model for the early period after June drop. The model was used in 2013 and gave similar results to those found with cv. “Gala”; (4) Conclusion: Overall, the results of this research showed that the proposed estimation models, based on canopy and fruit features extracted with image analysis algorithms, performed accurately.
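
As an illustration of the modelling step described above, here is a minimal back-propagation network in NumPy mapping four canopy features to per-tree yield. All data, layer sizes and learning rates are invented for the sketch; the paper's actual BPNN architecture and training data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's data: 150 "trees", four canopy
# features, and a yield that depends on them plus noise.
X = rng.uniform(0.0, 1.0, size=(150, 4))
true_w = np.array([12.0, 6.0, -2.0, 3.0])
y = X @ true_w + rng.normal(0.0, 0.5, size=150)

# One hidden layer of tanh units, trained by plain back-propagation.
W1 = rng.normal(0.0, 0.5, size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, size=8)
b2 = 0.0
lr = 0.01

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                  # predicted yield (kg/tree)
    err = pred - y                      # residuals drive the gradients
    gW2 = h.T @ err / len(y)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h ** 2)   # back-propagate through tanh
    gW1 = X.T @ dh / len(y)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2
    b2 -= lr * gb2
    W1 -= lr * gW1
    b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

The fit should at least beat a mean-only predictor, i.e. the RMSE ends up below the standard deviation of the target.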
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)

Article
Peach Flower Monitoring Using Aerial Multispectral Imaging
by Ryan Horton, Esteban Cano, Duke Bulanon and Esmaeil Fallahi
J. Imaging 2017, 3(1), 2; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3010002 - 06 Jan 2017
Cited by 34 | Viewed by 9462
Abstract
One of the tools for optimal crop production is regular monitoring and assessment of crops. During the growing season of fruit trees, the bloom period has increased photosynthetic rates that correlate with the fruiting process. This paper presents the development of an image processing algorithm to detect peach blossoms on trees. Aerial images of peach (Prunus persica) trees were acquired from both experimental and commercial peach orchards in the southwestern part of Idaho using an off-the-shelf unmanned aerial system (UAS) equipped with a multispectral camera (near-infrared, green, blue). The image processing algorithm included contrast stretching of the three bands to enhance the image and a thresholding segmentation method to detect the peach blossoms. Initial results showed that the image processing algorithm could detect peach blossoms with an average detection rate of 84.3% and demonstrated good potential as a monitoring tool for orchard management.
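
The two processing steps named in the abstract (contrast stretching, then thresholding) can be sketched as follows; the image, band values and threshold are synthetic stand-ins, not the authors' data.

```python
import numpy as np

def contrast_stretch(band, low_pct=2, high_pct=98):
    """Linearly rescale a band so the given percentiles map to 0..1."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    return np.clip((band - lo) / (hi - lo + 1e-9), 0.0, 1.0)

rng = np.random.default_rng(1)
img = rng.uniform(0.2, 0.5, size=(64, 64, 3))   # dull canopy background
img[20:28, 30:38, :] = 0.95                     # bright "blossom" patch

# Stretch each band independently, then threshold the band mean:
# blossom pixels saturate all three stretched bands.
stretched = np.dstack([contrast_stretch(img[..., b]) for b in range(3)])
mask = stretched.mean(axis=2) > 0.95
```

A high threshold is needed because the stretch maps the background itself across the full 0..1 range.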

Article
Automated Soil Physical Parameter Assessment Using Smartphone and Digital Camera Imagery
by Matt Aitkenhead, Malcolm Coull, Richard Gwatkin and David Donnelly
J. Imaging 2016, 2(4), 35; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging2040035 - 13 Dec 2016
Cited by 19 | Viewed by 6609
Abstract
Here we present work on using different types of soil profile imagery (topsoil profiles captured with a smartphone camera and full-profile images captured with a conventional digital camera) to estimate the structure, texture and drainage of the soil. The method is adapted from earlier work on developing smartphone apps for estimating topsoil organic matter content in Scotland and uses an existing visual soil structure assessment approach. Colour and image texture information was extracted from the imagery. This information was linked, using geolocation information derived from the smartphone GPS system or from field notes, with existing collections of topography, land cover, soil and climate data for Scotland. A neural network model was developed that was capable of estimating soil structure (on a five-point scale), soil texture (sand, silt, clay), bulk density, pH and drainage category using this information. The model is sufficiently accurate to provide estimates of these parameters from soils in the field. We discuss potential improvements to the approach and plans to integrate the model into a set of smartphone apps for estimating health and fertility indicators for Scottish soils. Full article
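
As a rough illustration of the kind of colour and image-texture descriptors the abstract mentions, the sketch below extracts per-band statistics and a simple gradient-energy texture proxy from synthetic patches; the specific descriptor set is an assumption, not the paper's feature list.

```python
import numpy as np

def patch_features(patch):
    """Per-band mean/std plus a crude texture term (gradient energy)."""
    feats = []
    for b in range(patch.shape[2]):
        band = patch[..., b]
        feats += [float(band.mean()), float(band.std())]
    gray = patch.mean(axis=2)
    gy, gx = np.gradient(gray)
    feats.append(float(np.hypot(gx, gy).mean()))   # texture proxy
    return np.array(feats)

rng = np.random.default_rng(2)
# Two invented soil patches: one smooth, one visually "rough".
smooth = np.full((32, 32, 3), 0.4) + rng.normal(0.0, 0.01, (32, 32, 3))
rough = np.full((32, 32, 3), 0.4) + rng.normal(0.0, 0.15, (32, 32, 3))

f_smooth = patch_features(smooth)
f_rough = patch_features(rough)
```

Vectors like these would then feed a model such as the neural network described in the abstract.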

Article
3D Reconstruction of Plant/Tree Canopy Using Monocular and Binocular Vision
by Zhijiang Ni, Thomas F. Burks and Won Suk Lee
J. Imaging 2016, 2(4), 28; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging2040028 - 29 Sep 2016
Cited by 19 | Viewed by 8689
Abstract
Three-dimensional (3D) reconstruction of a tree canopy is an important step in measuring canopy geometry, such as height, width, volume, and leaf cover area. In this research, binocular stereo vision was used to recover the 3D information of the canopy. Multiple images were taken from different views around the target. The structure-from-motion (SfM) method was employed to recover the camera calibration matrix for each image, and the corresponding 3D coordinates of the feature points were then calculated. Through this method, a sparse projective reconstruction of the target was realized. Subsequently, a ball-pivoting algorithm was used for surface modeling to realize dense reconstruction. Finally, this dense reconstruction was transformed to a metric reconstruction through ground-truth points obtained from camera calibration of the binocular stereo cameras. Four experiments were completed: one for a box of known geometry, and three for plants (a croton plant with big leaves and salient features, a jalapeno pepper plant with medium leaves, and a lemon tree with small leaves). A whole-view reconstruction of each target was realized. The comparison of the reconstructed box's size with the real box's size confirms that the 3D reconstruction is metric.
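
One building block of this pipeline, recovering a 3D point from two views with known camera matrices, can be illustrated with a linear (DLT) triangulation; the cameras and point below are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two views (linear DLT)."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A = homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with a 3x4 camera matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two simple cameras: identity pose, and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free projections the recovered point matches the original to numerical precision.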

Article
Estimating Mangrove Biophysical Variables Using WorldView-2 Satellite Data: Rapid Creek, Northern Territory, Australia
by Muditha K. Heenkenda, Stefan W. Maier and Karen E. Joyce
J. Imaging 2016, 2(3), 24; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging2030024 - 08 Sep 2016
Cited by 15 | Viewed by 7308
Abstract
Mangroves are among the most productive coastal communities in the world. Although their ecological significance is acknowledged, mangroves are under natural and anthropogenic pressures at various scales; understanding biophysical variations of mangrove forests is therefore important. Extensive field surveys are impractical within mangroves. WorldView-2 multi-spectral images with a 2-m spatial resolution were used to quantify above-ground biomass (AGB) and leaf area index (LAI) in the Rapid Creek mangroves, Darwin, Australia. Field measurements, vegetation indices derived from WorldView-2 images, and a partial least squares regression algorithm were combined to produce LAI and AGB maps. LAI maps with 2-m and 5-m spatial resolutions showed root mean square errors (RMSEs) of 0.75 and 0.78, respectively, compared to validation samples. Correlation coefficients between field samples and predicted maps were 0.7 and 0.8, respectively. RMSEs obtained for AGB maps were 2.2 kg/m2 and 2.0 kg/m2 for the 2-m and 5-m spatial resolutions, and the correlation coefficients were 0.4 and 0.8, respectively. We suggest implementing the transect method for field sampling and establishing the end points of these transects with a highly accurate positioning system. The study demonstrated the possibility of assessing biophysical variations of mangroves using remotely-sensed data.
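
The regression step can be sketched with a from-scratch NIPALS-style PLS1 fit on synthetic predictors; the predictor values and response below are invented, and with as many components as features the fit reduces to ordinary least squares.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """NIPALS PLS1: return coefficients B plus centring terms."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)        # weight vector
        t = Xc @ w                       # scores
        tt = float(t @ t)
        p = Xc.T @ t / tt                # X loadings
        qk = float(yc @ t) / tt          # y loading
        Xc = Xc - np.outer(t, p)         # deflate X
        yc = yc - qk * t                 # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, x_mean, y_mean

def pls1_predict(Xnew, B, x_mean, y_mean):
    return (Xnew - x_mean) @ B + y_mean

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 3))                 # three index-like predictors
y = X @ np.array([1.5, -2.0, 0.5])           # noiseless linear target

B, xm, ym = pls1_fit(X, y, n_components=3)
pred = pls1_predict(X, B, xm, ym)
```

With fewer components than features, PLS instead projects onto the directions most covariant with the response, which is why it suits many correlated spectral indices.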

Article
Viewing Geometry Sensitivity of Commonly Used Vegetation Indices towards the Estimation of Biophysical Variables in Orchards
by Jonathan Van Beek, Laurent Tits, Ben Somers, Tom Deckers, Pieter Janssens and Pol Coppin
J. Imaging 2016, 2(2), 15; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging2020015 - 09 May 2016
Cited by 3 | Viewed by 6416
Abstract
Stress-related biophysical variables of capital intensive orchard crops can be estimated with proxies via spectral vegetation indices from off-nadir viewing satellite imagery. However, variable viewing compositions affect the relationship between spectral vegetation indices and stress-related variables (i.e., chlorophyll content, water content and Leaf Area Index (LAI)) and could obstruct change detection. A sensitivity analysis was performed on the estimation of biophysical variables via vegetation indices for a wide range of viewing geometries. Subsequently, off-nadir viewing satellite imagery of an experimental orchard was analyzed, while all influences of background admixture were minimized through vegetation index normalization. Results indicated significant differences between nadir and off-nadir viewing scenes (∆R2 > 0.4). The Photochemical Reflectance Index (PRI), Normalized Difference Infrared Index (NDII) and Simple Ratio Pigment Index (SRPI) showed increased R2 values for off-nadir scenes taken perpendicular compared to parallel to row orientation. Other indices, such as Normalized Difference Vegetation Index (NDVI), Gitelson and Merzlyak (GM) and Structure Insensitive Pigment Index (SIPI), showed a significant decrease in R2 values from nadir to off-nadir viewing scenes. These results show the necessity of vegetation index selection for variable viewing applications to obtain an optimal derivation of biophysical variables in all circumstances. Full article
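
For reference, one of the indices compared above, NDVI, is computed per pixel as (NIR − Red)/(NIR + Red); the reflectance values below are made up.

```python
import numpy as np

# Per-pixel NDVI on a tiny synthetic scene: two vegetated pixels,
# one sparse pixel, one bare-soil-like pixel.
nir = np.array([[0.45, 0.50],
                [0.30, 0.05]])
red = np.array([[0.08, 0.10],
                [0.12, 0.04]])
ndvi = (nir - red) / (nir + red)
```

Dense vegetation pushes NDVI toward 1, bare surfaces toward 0.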

Article
Using Deep Learning to Challenge Safety Standard for Highly Autonomous Machines in Agriculture
by Kim Arild Steen, Peter Christiansen, Henrik Karstoft and Rasmus Nyholm Jørgensen
J. Imaging 2016, 2(1), 6; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging2010006 - 15 Feb 2016
Cited by 53 | Viewed by 10111
Abstract
In this paper, an algorithm for obstacle detection in agricultural fields is presented. The algorithm is based on an existing deep convolutional neural net, which is fine-tuned for detection of a specific obstacle. In ISO/DIS 18497, which is an emerging standard for safety of highly automated machinery in agriculture, a barrel-shaped obstacle is defined as the obstacle which should be robustly detected to comply with the standard. We show that our fine-tuned deep convolutional net is capable of detecting this obstacle with a precision of 99.9% in row crops and 90.8% in grass mowing, while simultaneously not detecting people and other very distinct obstacles in the image frame. As such, this short note argues that the obstacle defined in the emerging standard is not capable of ensuring safe operations when imaging sensors are part of the safety system.
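
A detection precision figure of this kind is typically computed by matching detections to ground truth via intersection-over-union (IoU); a minimal sketch with invented boxes:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision(detections, truths, thr=0.5):
    """Fraction of detections that overlap some ground-truth box."""
    tp = sum(any(iou(d, t) >= thr for t in truths) for d in detections)
    return tp / len(detections) if detections else 0.0

dets = [(10, 10, 50, 50), (60, 60, 80, 80)]   # one hit, one false alarm
gts = [(12, 11, 48, 52)]
```

Here one of two detections matches, so precision is 0.5.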

Article
Imaging for High-Throughput Phenotyping in Energy Sorghum
by Jose Batz, Mario A. Méndez-Dorado and J. Alex Thomasson
J. Imaging 2016, 2(1), 4; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging2010004 - 26 Jan 2016
Cited by 9 | Viewed by 5887
Abstract
The increasing energy demand in recent years has resulted in a continuous growing interest in renewable energy sources, such as efficient and high-yielding energy crops. Energy sorghum is a crop that has shown great potential in this area, but needs further improvement. Plant phenotyping—measuring physiological characteristics of plants—is a laborious and time-consuming task, but it is essential for crop breeders as they attempt to improve a crop. The development of high-throughput phenotyping (HTP)—the use of autonomous sensing systems to rapidly measure plant characteristics—offers great potential for vastly expanding the number of types of a given crop plant surveyed. HTP can thus enable much more rapid progress in crop improvement through the inclusion of more genetic variability. For energy sorghum, stalk thickness is a critically important phenotype, as the stalk contains most of the biomass. Imaging is an excellent candidate for certain phenotypic measurements, as it can simulate visual observations. The aim of this study was to evaluate image analysis techniques involving K-means clustering and minimum-distance classification for use on red-green-blue (RGB) images of sorghum plants as a means to measure stalk thickness. Additionally, a depth camera integrated with the RGB camera was tested for the accuracy of distance measurements between camera and plant. Eight plants were imaged on six dates through the growing season, and image segmentation, classification and stalk thickness measurement were performed. While accuracy levels with both image analysis techniques needed improvement, both showed promise as tools for HTP in sorghum. The average error for K-means with supervised stalk measurement was 10.7% after removal of known outliers. Full article
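
The K-means clustering step can be illustrated on synthetic pixel colours; the cluster colours, counts and seeding strategy below are assumptions for the sketch.

```python
import numpy as np

def kmeans(points, centers, iters=20):
    """Plain Lloyd's algorithm with user-supplied initial centres."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)              # nearest-centre assignment
        for j in range(len(centers)):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

rng = np.random.default_rng(3)
stalk = rng.normal([0.55, 0.45, 0.25], 0.02, size=(200, 3))   # brownish
leaf = rng.normal([0.20, 0.60, 0.20], 0.02, size=(200, 3))    # green
pixels = np.vstack([stalk, leaf])

# Seed one centre in each colour region (first and last pixel).
labels, centers = kmeans(pixels, pixels[[0, -1]])
```

With well-separated colour clusters, all stalk pixels end up in one cluster and all leaf pixels in the other; stalk width could then be measured on the stalk-labelled pixels.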

Article
Non-Parametric Retrieval of Aboveground Biomass in Siberian Boreal Forests with ALOS PALSAR Interferometric Coherence and Backscatter Intensity
by Martyna A. Stelmaszczuk-Górska, Pedro Rodriguez-Veiga, Nicolas Ackermann, Christian Thiel, Heiko Balzter and Christiane Schmullius
J. Imaging 2016, 2(1), 1; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging2010001 - 25 Dec 2015
Cited by 18 | Viewed by 6679
Abstract
The main objective of this paper is to investigate the effectiveness of two recently popular non-parametric models for aboveground biomass (AGB) retrieval from Synthetic Aperture Radar (SAR) L-band backscatter intensity and coherence images. An area in Siberian boreal forests was selected for this study. The results demonstrated that relatively high estimation accuracy can be obtained at a spatial resolution of 50 m using the MaxEnt and the Random Forests machine learning algorithms. Overall, the AGB estimation errors were similar for both tested models (approximately 35 t·ha⁻¹). The retrieval accuracy slightly increased, by approximately 1%, when the filtered backscatter intensity was used. Random Forests underestimated the AGB values, whereas MaxEnt overestimated the AGB values.
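
The Random Forests side of the comparison can be caricatured with its core ingredient, bootstrap aggregation of simple trees; the sketch below bags depth-1 regression stumps on one synthetic SAR-like feature and is far simpler than a real Random Forest (no deep trees, no feature subsampling).

```python
import numpy as np

def fit_stump(x, y):
    """Best single-split regressor on one feature (threshold, left, right)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (np.inf, xs[0], ys.mean(), ys.mean())
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        sse = (((left - left.mean()) ** 2).sum()
               + ((right - right.mean()) ** 2).sum())
        if sse < best[0]:
            best = (sse, (xs[i - 1] + xs[i]) / 2, left.mean(), right.mean())
    return best[1:]

def forest_predict(x_new, stumps):
    """Average the stump predictions (the 'aggregation' in bagging)."""
    preds = [np.where(x_new < t, lv, rv) for t, lv, rv in stumps]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 1.0, 300)
y = 100.0 * x + rng.normal(0.0, 5.0, 300)     # biomass grows with feature

stumps = []
for _ in range(50):
    idx = rng.integers(0, len(x), len(x))      # bootstrap resample
    stumps.append(fit_stump(x[idx], y[idx]))

pred = forest_predict(x, stumps)
```

Averaging many resampled stumps smooths the single-split predictions into a usable regression curve.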

Article
Precise Navigation of Small Agricultural Robots in Sensitive Areas with a Smart Plant Camera
by Volker Dworak, Michael Huebner and Joern Selbeck
J. Imaging 2015, 1(1), 115-133; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging1010115 - 13 Oct 2015
Cited by 5 | Viewed by 6123
Abstract
Most of the relevant technology related to precision agriculture is currently controlled by Global Positioning Systems (GPS) and uploaded map data; however, in sensitive areas with young or expensive plants, small robots are becoming more widely used in exclusive work. These robots must follow the plant lines with centimeter precision to protect plant growth. For cases in which GPS fails, a camera-based solution is often used for navigation because of the system cost and simplicity. The low-cost plant camera presented here generates images in which plants are contrasted against the soil, thus enabling the use of simple cross-correlation functions to establish high-resolution navigation control in the centimeter range. Based on the foresight provided by images from in front of the vehicle, robust vehicle control can be established without any dead time; as a result, off-loading the main robot control and overshooting can be avoided. Full article
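
The cross-correlation idea can be sketched in one dimension: estimate the lateral offset between a reference scan line and the current one; the signals below are synthetic.

```python
import numpy as np

def lateral_shift(ref, cur):
    """Pixel shift that best aligns cur with ref, via cross-correlation."""
    ref0 = ref - ref.mean()
    cur0 = cur - cur.mean()
    corr = np.correlate(cur0, ref0, mode="full")
    return int(corr.argmax()) - (len(ref) - 1)

line = np.zeros(200)
line[90:110] = 1.0                 # plant row centred near pixel 100
shifted = np.roll(line, 7)         # simulate a 7-pixel lateral drift

shift = lateral_shift(line, shifted)
```

The recovered shift would feed the steering controller as a lateral-error signal.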

Article
Land Cover Change Image Analysis for Assateague Island National Seashore Following Hurricane Sandy
by Heather Grybas and Russell G. Congalton
J. Imaging 2015, 1(1), 85-114; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging1010085 - 05 Oct 2015
Cited by 3 | Viewed by 6008
Abstract
The assessment of storm damages is critically important if resource managers are to understand the impacts of weather pattern changes and sea level rise on their lands and develop management strategies to mitigate these effects. This study was performed to detect land cover change on Assateague Island as a result of Hurricane Sandy. Several single-date classifications were performed on the pre- and post-hurricane imagery using both a pixel-based and an object-based approach with the Random Forest classifier. Univariate image differencing and a post-classification comparison were used to conduct the change detection. This study found that the addition of the coastal blue band to the Landsat 8 sensor did not improve classification accuracy, and there was also no statistically significant improvement in classification accuracy using Landsat 8 compared to Landsat 5. Furthermore, no significant difference was found between object-based and pixel-based classification. Change totals estimated on Assateague Island following Hurricane Sandy were found to be minimal, occurring predominantly in the most active sections of the island in terms of land cover change; however, the post-classification comparison detected significantly more change, mainly due to classification errors in the single-date maps used.
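
Post-classification comparison reduces to comparing per-pixel class labels between dates and tabulating the class transitions; a minimal sketch with invented label maps:

```python
import numpy as np

# Tiny label maps for two dates; classes 0, 1, 2 are arbitrary covers.
before = np.array([[0, 0, 1],
                   [1, 2, 2],
                   [2, 2, 0]])
after = np.array([[0, 1, 1],
                  [1, 2, 0],
                  [2, 2, 0]])

changed = before != after                      # per-pixel change mask

# Transition matrix: entry (i, j) counts pixels going from class i to j.
n_classes = 3
transition = np.zeros((n_classes, n_classes), dtype=int)
np.add.at(transition, (before.ravel(), after.ravel()), 1)
```

The diagonal of the transition matrix holds the unchanged pixels, so its trace plus the change count equals the total pixel count.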

Other


Technical Note
Machine-Vision Systems Selection for Agricultural Vehicles: A Guide
by Gonzalo Pajares, Iván García-Santillán, Yerania Campos, Martín Montalvo, José Miguel Guerrero, Luis Emmi, Juan Romeo, María Guijarro and Pablo Gonzalez-de-Santos
J. Imaging 2016, 2(4), 34; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging2040034 - 22 Nov 2016
Cited by 46 | Viewed by 13326
Abstract
Machine vision systems are becoming increasingly common onboard agricultural vehicles (autonomous and non-autonomous) for different tasks. This paper provides guidelines for selecting machine-vision systems for optimum performance, considering the adverse conditions of these outdoor environments, with high variability in illumination, irregular terrain conditions, and different plant growth states, among others. In this regard, three main topics have been addressed for the best selection: (a) spectral bands (visible and infrared); (b) imaging sensors and optical systems (including intrinsic parameters); and (c) geometric visual system arrangement (considering extrinsic parameters and stereovision systems). A general overview, with detailed description and technical support, is provided for each topic, with illustrative examples focused on specific applications in agriculture, although they could also be applied in other contexts. A case study is provided from research in the RHEA (Robot Fleets for Highly Effective Agriculture and Forestry Management) project, funded by the European Union, for effective weed control in maize fields (wide-row crops), where the machine vision system onboard the autonomous vehicles was the most relevant part of the full perception system. Details and results about crop row detection, weed patch identification, autonomous vehicle guidance and obstacle detection are provided, together with a review of methods and approaches on these topics.
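
The intrinsic parameters discussed under topic (b) can be illustrated with the pinhole camera matrix K; the focal lengths, principal point, baseline and disparity below are arbitrary example values.

```python
import numpy as np

# Pinhole intrinsics: fx, fy (focal lengths in pixels), cx, cy
# (principal point). Values are invented for the example.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# A 3D point in the camera frame (metres, z in front of the camera)
X_cam = np.array([0.2, -0.1, 2.0])
uvw = K @ X_cam
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]   # pixel coordinates

# Stereo counterpart (extrinsics): for a rectified pair with baseline b,
# depth follows from disparity d as Z = fx * b / d.
baseline, disparity = 0.12, 9.6
Z = K[0, 0] * baseline / disparity
```

This is the projection geometry underlying both single-camera arrangements and the stereovision systems the guide discusses.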
