Machine Learning in Plant Identification and Phenological, Anatomical, and Morphological Research

A special issue of Plants (ISSN 2223-7747). This special issue belongs to the section "Plant Development and Morphogenesis".

Deadline for manuscript submissions: closed (31 October 2021) | Viewed by 26679

Special Issue Editors


Dr. Pierre Bonnet
Guest Editor
CIRAD, BIOS Department, Joint Research Unit Amap (botAny and Modelling of Plant Architecture and vegetation), 34398 Montpellier, France
Interests: plant ecology; plant identification; biodiversity informatics; digital agriculture

Dr. Alexis Joly
Guest Editor
INRIA Sophia-Antipolis, ZENITH team, LIRMM, 34095 Montpellier, France
Interests: machine learning; biodiversity informatics; multimedia information retrieval; scientific data management; computer vision; active learning; crowdsourcing; high-dimensional data

Prof. Dr. Susan J. Mazer
Guest Editor
Professor of Ecology & Evolutionary Biology, Department of Ecology, Evolution & Marine Biology, University of California, Santa Barbara, CA 93106, USA
Interests: botany; ecology; evolution; evolutionary ecology; evolutionary genetics; organismal biology; population and community ecology

Special Issue Information

Dear Colleagues, 

Recent advances in imaging and information technology have led to the massive production of digital images of plant specimens and of living plants around the world. This new and rich material, directly produced in the field from digital cameras, smartphones, mobile or aerial autonomous robots, or from the digitization of herbarium specimens, offers new opportunities to study plant phenology (the seasonal timing of plant life cycle events) and to identify wild plant species and domesticated varieties. As plant phenology is strongly influenced by recent environmental changes, a better understanding of phenological shifts is essential to predict potential changes in species distribution, plant-based resource availability, ecosystem productivity, and community structure. These changes, together with the growing need to develop a much more resilient and responsible society, are forcing us to rethink our capacity to study plant biology, and to leverage the very large number of digital images available, in the context of both applied and basic research.

Computer vision and machine learning approaches are highly promising technologies with which to investigate and interpret digitized images of wild and domesticated taxa. Deep learning technologies, in particular, have recently been shown to achieve impressive performance on a variety of predictive tasks such as automated species identification, trait detection, and organ counting, measurement, and recognition. Nevertheless, their use to support innovative phenological studies and to identify taxa below the species level remains under-developed, as does our application of the full potential of convolutional neural networks (CNNs), recurrent CNNs, and transfer learning strategies and techniques to address these challenges.
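
To make the transfer learning strategy mentioned above more concrete, the following minimal sketch fine-tunes an ImageNet-pretrained CNN for plant species identification. The dataset path, number of species, and choice of a ResNet-50 backbone are illustrative assumptions, not a prescription for any particular study.

```python
# Minimal transfer-learning sketch: freeze a pretrained backbone, retrain only the head.
# DATA_DIR and NUM_SPECIES are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

DATA_DIR = "plant_images/train"   # assumed layout: one sub-folder per species
NUM_SPECIES = 100                 # hypothetical number of classes

# Standard ImageNet preprocessing, reused because the backbone was pretrained on ImageNet.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder(DATA_DIR, transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Load an ImageNet-pretrained backbone and replace only its classification head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                               # freeze convolutional features
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)       # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:      # one pass shown; real training runs several epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```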

In this Special Issue, we welcome the submission of scientific articles focused on the development of new machine learning techniques applied to phenological, anatomical, or morphological features of plants, particularly those that focus on new types of data produced or analysed with machine learning. We hope to increase the visibility of machine learning tools and promote scientific research at the frontiers of environmental/life science and computer science.


Dr. Pierre Bonnet
Dr. Alexis Joly
Prof. Dr. Susan J. Mazer
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Plants is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • plant phenology
  • phenological response
  • plant reproductive structure
  • deep learning
  • multimedia data
  • automated visual data classification

Published Papers (6 papers)


Research

22 pages, 9258 KiB  
Article
Can Artificial Intelligence Help in the Study of Vegetative Growth Patterns from Herbarium Collections? An Evaluation of the Tropical Flora of the French Guiana Forest
by Hervé Goëau, Titouan Lorieul, Patrick Heuret, Alexis Joly and Pierre Bonnet
Plants 2022, 11(4), 530; https://doi.org/10.3390/plants11040530 - 16 Feb 2022
Cited by 5 | Viewed by 2553
Abstract
A better knowledge of tree vegetative growth phenology and its relationship to environmental variables is crucial to understanding forest growth dynamics and how climate change may affect them. Less studied than reproductive structures, vegetative growth phenology focuses primarily on the analysis of growing shoots, from buds to leaf fall. In temperate regions, low winter temperatures impose a cessation of vegetative shoot growth and lead to a well-known annual growth cycle pattern for most species. The humid tropics, on the other hand, have less seasonality and contain many more tree species, leading to a diversity of patterns that is still poorly known and understood. The work in this study aims to advance knowledge in this area, focusing specifically on herbarium scans, as herbaria offer the promise of tracking phenology over long periods of time. However, such a study requires a large number of shoots to be able to draw statistically relevant conclusions. We propose to investigate the extent to which the use of deep learning can help detect and type-classify these relatively rare vegetative structures in herbarium collections. Our results demonstrate the relevance of using herbarium data in vegetative phenology research as well as the potential of deep learning approaches for growing shoot detection.
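
As a rough illustration of how detection of vegetative structures on herbarium scans could be set up, the sketch below runs an object detector over a single scan and counts detections per class. The class names, file name, score threshold, and choice of a Faster R-CNN detector are assumptions for demonstration only and do not reproduce the authors' pipeline.

```python
# Count candidate vegetative structures on one herbarium scan with a generic detector.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

CLASSES = ["background", "growing_shoot", "bud", "leaf_scar"]  # hypothetical label set
SCORE_THRESHOLD = 0.5                                          # assumed confidence cut-off

# Detector sized for the hypothetical classes above; in practice its weights would
# come from fine-tuning on annotated herbarium sheets (not shown here).
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=len(CLASSES))
model.eval()

image = convert_image_dtype(read_image("herbarium_sheet.jpg"), torch.float)  # assumed file
with torch.no_grad():
    detections = model([image])[0]   # dict with "boxes", "labels", "scores"

# Count detected structures per class above the confidence threshold.
counts = {name: 0 for name in CLASSES[1:]}
for label, score in zip(detections["labels"], detections["scores"]):
    if score >= SCORE_THRESHOLD:
        counts[CLASSES[label]] += 1
print(counts)
```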

20 pages, 1965 KiB  
Article
Investigating Explanatory Factors of Machine Learning Models for Plant Classification
by Wilfried Wöber, Lars Mehnen, Peter Sykacek and Harald Meimberg
Plants 2021, 10(12), 2674; https://doi.org/10.3390/plants10122674 - 5 Dec 2021
Cited by 5 | Viewed by 3055
Abstract
Recent progress in machine learning and deep learning has enabled the implementation of plant and crop detection using systematic inspection of leaf shapes and other morphological characters for identification systems in precision farming. However, the models used for this approach tend to become black-box models, in the sense that it is difficult to trace the characters that form the basis for the classification. Their interpretability is therefore limited, and the explanatory factors may not be based on reasonable, visible characters. We investigate the explanatory factors of recent machine learning and deep learning models for plant classification tasks. Based on a Daucus carota and a Beta vulgaris image data set, we implement plant classification models and compare those models by their predictive performance as well as their explainability. For comparison, we implemented a feed-forward convolutional neural network as a default model. To evaluate performance, we trained an unsupervised Bayesian Gaussian process latent variable model as well as a convolutional autoencoder for feature extraction and relied on a support vector machine for classification. The explanatory factors of all models were extracted and analyzed. The experiments show that the feed-forward convolutional neural network (98.24% and 96.10% mean accuracy) outperforms the Bayesian Gaussian process latent variable pipeline (92.08% and 94.31% mean accuracy) as well as the convolutional autoencoder pipeline (92.38% and 93.28% mean accuracy) in terms of classification accuracy, although the difference is not significant for the Beta vulgaris images. Additionally, we found that the neural network used biologically uninterpretable image regions for the plant classification task. In contrast, the unsupervised learning models rely on explainable visual characters. We conclude that supervised convolutional neural networks must be used carefully to ensure biological interpretability. We recommend unsupervised machine learning, careful feature investigation, and statistical feature analysis for biological applications.
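
The following sketch illustrates the general shape of an "unsupervised features + SVM" pipeline of the kind compared above: a small convolutional autoencoder is trained without labels, and a support vector machine classifies the encoded representations. The layer sizes, image resolution, and random stand-in data are assumptions; this is not the authors' implementation.

```python
# Unsupervised feature extraction (convolutional autoencoder) followed by a supervised SVM.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),                           # 64-dimensional code
        )
        self.decoder = nn.Sequential(
            nn.Linear(64, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Hypothetical stand-in data: 64x64 RGB leaf images and binary class labels.
images = torch.rand(200, 3, 64, 64)
labels = np.random.randint(0, 2, size=200)

model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                   # short unsupervised reconstruction training
    reconstruction, _ = model(images)
    loss = nn.functional.mse_loss(reconstruction, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    features = model.encoder(images).numpy()         # learned, label-free features
classifier = SVC(kernel="rbf").fit(features, labels) # supervised step on those features
print(classifier.score(features, labels))
```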

17 pages, 2430 KiB  
Article
Machine Learning Undercounts Reproductive Organs on Herbarium Specimens but Accurately Derives Their Quantitative Phenological Status: A Case Study of Streptanthus tortuosus
by Natalie L. R. Love, Pierre Bonnet, Hervé Goëau, Alexis Joly and Susan J. Mazer
Plants 2021, 10(11), 2471; https://doi.org/10.3390/plants10112471 - 16 Nov 2021
Cited by 5 | Viewed by 1577
Abstract
Machine learning (ML) can accelerate the extraction of phenological data from herbarium specimens; however, no studies have assessed whether ML-derived phenological data can be used reliably to evaluate ecological patterns. In this study, 709 herbarium specimens representing a widespread annual herb, Streptanthus tortuosus, were scored both manually by human observers and by a Mask R-CNN object detection model to (1) evaluate the concordance between ML- and manually derived phenological data and (2) determine whether ML-derived data can be used to reliably assess phenological patterns. The ML model generally underestimated the number of reproductive structures present on each specimen; however, when these counts were used to provide a quantitative estimate of the phenological stage of plants on a given sheet (i.e., the phenological index or PI), the ML- and manually derived PIs were highly concordant. Moreover, herbarium specimen age had no effect on the estimated PI of a given sheet. Finally, including ML-derived PIs as predictor variables in phenological models produced estimates of the phenological sensitivity of this species to climate, temporal shifts in flowering time, and the rate of phenological progression that are indistinguishable from those produced by models based on data provided by human observers. This study demonstrates that phenological data extracted using machine learning can be used reliably to estimate the phenological stage of herbarium specimens and to detect phenological patterns.
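
For readers unfamiliar with count-based phenological indices, the toy function below shows one plausible way to turn per-sheet organ counts into a quantitative index. The stage scores (buds = 1, flowers = 2, fruits = 3) are purely illustrative assumptions and are not the PI formula used in the study above.

```python
# Illustrative computation of a phenological index (PI) from per-sheet organ counts.
def phenological_index(n_buds: int, n_flowers: int, n_fruits: int) -> float:
    """Count-weighted mean stage score; higher values indicate later phenological stages."""
    stage_scores = {"bud": 1, "flower": 2, "fruit": 3}   # assumed scoring, for demonstration
    total = n_buds + n_flowers + n_fruits
    if total == 0:
        raise ValueError("No reproductive structures counted on this sheet.")
    weighted = (n_buds * stage_scores["bud"]
                + n_flowers * stage_scores["flower"]
                + n_fruits * stage_scores["fruit"])
    return weighted / total

# Example: a sheet with 4 buds, 10 flowers, and 2 fruits is mid-flowering (PI = 1.875).
print(phenological_index(4, 10, 2))
```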

17 pages, 3738 KiB  
Article
Automated Grapevine Cultivar Identification via Leaf Imaging and Deep Convolutional Neural Networks: A Proof-of-Concept Study Employing Primary Iranian Varieties
by Amin Nasiri, Amin Taheri-Garavand, Dimitrios Fanourakis, Yu-Dong Zhang and Nikolaos Nikoloudakis
Plants 2021, 10(8), 1628; https://doi.org/10.3390/plants10081628 - 8 Aug 2021
Cited by 32 | Viewed by 6366
Abstract
Extending over millennia, grapevine cultivation encompasses several thousand cultivars. Cultivar (cultivated variety) identification is traditionally dealt with by ampelography, requiring repeated observations by experts along the growth cycle of fruiting plants. For on-time evaluation, molecular genetic methods have been successfully employed, though in many instances they are limited by the lack of reference data or by cost. This paper presents a convolutional neural network (CNN) framework for automatic identification of grapevine cultivars by using leaf images in the visible spectrum (400–700 nm). The VGG16 architecture was modified by a global average pooling layer, dense layers, a batch normalization layer, and a dropout layer. The obtained model was able to distinguish the intricate visual features of diverse grapevine varieties and to recognize them according to these features. A five-fold cross-validation was performed to evaluate the uncertainty and predictive efficiency of the CNN model. The modified deep learning model was able to recognize different grapevine varieties with an average classification accuracy of over 99%. The obtained model offers rapid, low-cost, high-throughput grapevine cultivar identification. The ambition of the obtained tool is not to substitute for, but to complement, ampelography and quantitative genetics and, in this way, to assist cultivar identification services.
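
A minimal sketch of the kind of modification described above (a VGG16 backbone followed by global average pooling, dense, batch normalization, and dropout layers) is shown below in Keras. The number of cultivars, layer widths, and dropout rate are assumptions, not the authors' exact configuration.

```python
# Sketch of a VGG16 backbone extended with GAP, dense, batch-norm, and dropout layers.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CULTIVARS = 6   # hypothetical number of grapevine varieties

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False          # keep the pretrained convolutional blocks fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Dense(NUM_CULTIVARS, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```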

21 pages, 3559 KiB  
Article
Image-Based Wheat Fungi Diseases Identification by Deep Learning
by Mikhail A. Genaev, Ekaterina S. Skolotneva, Elena I. Gultyaeva, Elena A. Orlova, Nina P. Bechtold and Dmitry A. Afonnikov
Plants 2021, 10(8), 1500; https://doi.org/10.3390/plants10081500 - 21 Jul 2021
Cited by 48 | Viewed by 7511
Abstract
Diseases of cereals caused by pathogenic fungi can significantly reduce crop yields, and many crops are susceptible to them. These diseases are difficult to control on a large scale; thus, one relevant approach is crop field monitoring, which helps to identify the disease at an early stage and to take measures to prevent its spread. One effective control method is disease identification based on the analysis of digital images that can be obtained under field conditions using mobile devices. In this work, we propose a method for the recognition of five fungal diseases of wheat shoots (leaf rust, stem rust, yellow rust, powdery mildew, and septoria), both separately and in cases of multiple infection, with the possibility of identifying the stage of plant development. A set of 2414 images of wheat fungal diseases (WFD2020) was generated, for which expert labeling by disease type was performed. More than 80% of the images in the dataset correspond to single-disease labels (including seedlings), more than 12% represent healthy plants, and 6% of the images are labeled with multiple diseases. In the process of creating this set, a method based on an image hashing algorithm was applied to reduce redundancy in the training data. The disease-recognition algorithm is based on a convolutional neural network with the EfficientNet architecture. The best accuracy (0.942) was achieved by a network trained with a strategy based on augmentation and image style transfer. The recognition method was implemented as a bot on the Telegram platform, which allows users to assess plant lesions under field conditions.
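
The sketch below outlines a multi-label EfficientNet classifier with simple augmentation, in the spirit of the approach described above. The EfficientNet variant, image size, augmentation operations, and sigmoid multi-label head are assumptions for illustration; the style-transfer augmentation used in the paper is not reproduced here.

```python
# Multi-label wheat disease classifier on an EfficientNet backbone with basic augmentation.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0

DISEASES = ["leaf_rust", "stem_rust", "yellow_rust", "powdery_mildew", "septoria"]

augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

base = EfficientNetB0(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

inputs = layers.Input(shape=(224, 224, 3))
x = augment(inputs)
x = base(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
# Sigmoid outputs: each disease is predicted independently, so an image can carry
# several labels (multiple diseases) or none at all (healthy plant).
outputs = layers.Dense(len(DISEASES), activation="sigmoid")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])
model.summary()
```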

14 pages, 3149 KiB  
Article
Automated In Situ Seed Variety Identification via Deep Learning: A Case Study in Chickpea
by Amin Taheri-Garavand, Amin Nasiri, Dimitrios Fanourakis, Soodabeh Fatahi, Mahmoud Omid and Nikolaos Nikoloudakis
Plants 2021, 10(7), 1406; https://doi.org/10.3390/plants10071406 - 9 Jul 2021
Cited by 36 | Viewed by 3498
Abstract
On-time seed variety recognition is critical to limit qualitative and quantitative yield loss and asynchronous crop production. The conventional method is a subjective and error-prone process, since it relies on human experts and usually requires accredited seed material. This paper presents a convolutional neural network (CNN) framework for automatic identification of chickpea varieties by using seed images in the visible spectrum (400–700 nm). Two low-cost devices were employed for image acquisition, and lighting and imaging conditions (background, focus, angle, and camera-to-sample distance) were variable. The VGG16 architecture was modified by a global average pooling layer, dense layers, a batch normalization layer, and a dropout layer. The obtained model was able to distinguish the intricate visual features of the diverse chickpea varieties and to recognize them according to these features. A five-fold cross-validation was performed to evaluate the uncertainty and predictive efficiency of the CNN model. The modified deep learning model was able to recognize different chickpea seed varieties with an average classification accuracy of over 94%. In addition, the proposed vision-based model was very robust in seed variety identification and independent of the image acquisition device, light environment, and imaging settings. This opens the avenue for extension to novel applications that use mobile phones to acquire and process information in situ. The proposed procedure offers possibilities for deployment in the seed industry and in mobile applications for fast and robust automated seed identification.
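
The five-fold cross-validation protocol mentioned above can be sketched as follows. The stratified splitting, placeholder classifier, and synthetic data are assumptions used only to illustrate how mean accuracy and its spread would be estimated; the actual study evaluates a CNN on real seed images.

```python
# Five-fold cross-validation sketch for estimating mean accuracy and its spread.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

def build_model():
    # Placeholder for the image classifier; any estimator with fit/predict
    # works for illustrating the protocol itself.
    from sklearn.linear_model import LogisticRegression
    return LogisticRegression(max_iter=1000)

X = np.random.rand(500, 64)              # hypothetical per-image feature vectors
y = np.random.randint(0, 4, size=500)    # hypothetical variety labels

scores = []
splitter = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in splitter.split(X, y):
    model = build_model()                # fresh model per fold to avoid leakage
    model.fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

print(f"mean accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```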
