Deep Learning Applications for Fauna and Flora Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (15 July 2022) | Viewed by 26039

Special Issue Editor


Prof. Dr. Mark Shortis
Guest Editor
School of Science, RMIT University, GPO Box 2476, Melbourne 3001, Australia
Interests: close range photogrammetry applications; precise optical metrology; camera calibration

Special Issue Information

Dear Colleagues,

Species identification is one of the most important topics in the field of image and video analysis. Both in air and underwater, and for fauna and flora, species identification and measurement are essential tools for estimating biomass or population distributions. Evident changes within ecosystems can be used to inform important management decisions, especially for vulnerable species. The automation of species identification has been under development for many years, based on computer vision and image processing techniques, achieving good identification success rates. More recently, deep learning has produced even higher levels of success for species identification, in the region of 95% accuracy in the best cases.

The main objective of this Special Issue is to demonstrate the effectiveness of deep learning applied to species identification across a range of fauna, flora and environments. A secondary aim is to evaluate the use of different sensors, specifically imaging and video systems using different spectral sensitivities, such as thermal infrared imagers.

Prof. Dr. Mark Shortis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep learning
  • Ecosystem management
  • Flora and fauna identification
  • RGB images
  • Species recognition
  • Thermal infra-red images
  • Video capture

Published Papers (6 papers)


Research

22 pages, 15365 KiB  
Article
Automatic Fungi Recognition: Deep Learning Meets Mycology
by Lukáš Picek, Milan Šulc, Jiří Matas, Jacob Heilmann-Clausen, Thomas S. Jeppesen and Emil Lind
Sensors 2022, 22(2), 633; https://0-doi-org.brum.beds.ac.uk/10.3390/s22020633 - 14 Jan 2022
Cited by 14 | Viewed by 8946
Abstract
The article presents an AI-based fungi species recognition system for a citizen-science community. The system's real-time identification tool, FungiVision, with a mobile application front-end, led to increased public interest in fungi, quadrupling the number of citizens collecting data. FungiVision, deployed with a human-in-the-loop, reaches nearly 93% accuracy. Using the collected data, we developed a novel fine-grained classification dataset, Danish Fungi 2020 (DF20), with several unique characteristics: species-level labels, a small number of errors, and rich observation metadata. The dataset enables testing of the ability to improve classification using metadata (e.g., time, location, habitat and substrate), facilitates classifier calibration testing and, finally, allows the study of the impact of device settings on classification performance. The continual flow of labelled data supports improvements of the online recognition system. Finally, we present a novel method for the fungi recognition service, based on a Vision Transformer architecture. Trained on DF20 and exploiting the available metadata, it achieves a recognition error that is 46.75% lower than that of the current system. By providing a stream of labelled data in one direction, and an accuracy increase in the other, the collaboration creates a virtuous cycle that helps both communities.
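
As a rough illustration of the metadata-aware classifier described above, the sketch below fuses a Vision Transformer image embedding with a small encoder for tabular observation metadata via late fusion. It is not the authors' DF20 pipeline; the metadata encoding, layer sizes and class count are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class MetadataFusionClassifier(nn.Module):
    """Late fusion of a ViT image embedding with tabular metadata (illustrative only)."""
    def __init__(self, num_species: int, num_meta_features: int):
        super().__init__()
        self.backbone = vit_b_16(weights=None)  # pretrained weights would normally be loaded
        self.backbone.heads = nn.Identity()     # expose the 768-dim class-token embedding
        self.meta_encoder = nn.Sequential(      # hypothetical encoder for time/habitat/substrate features
            nn.Linear(num_meta_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.classifier = nn.Linear(768 + 64, num_species)

    def forward(self, image: torch.Tensor, metadata: torch.Tensor) -> torch.Tensor:
        img_feat = self.backbone(image)          # (B, 768)
        meta_feat = self.meta_encoder(metadata)  # (B, 64)
        return self.classifier(torch.cat([img_feat, meta_feat], dim=1))

# Placeholder species count and metadata width; DF20's actual label space differs.
model = MetadataFusionClassifier(num_species=1000, num_meta_features=16)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 1000])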

15 pages, 3772 KiB  
Article
Research on Lightweight Citrus Flowering Rate Statistical Model Combined with Anchor Frame Clustering Optimization
by Jianqiang Lu, Weize Lin, Pingfu Chen, Yubin Lan, Xiaoling Deng, Hongyu Niu, Jiawei Mo, Jiaxing Li and Shengfu Luo
Sensors 2021, 21(23), 7929; https://0-doi-org.brum.beds.ac.uk/10.3390/s21237929 - 27 Nov 2021
Cited by 3 | Viewed by 1690
Abstract
At present, citrus blossom recognition models based on deep learning are highly complicated and have a large number of parameters. In order to estimate citrus flower quantities in natural orchards, this study proposes a lightweight citrus flower recognition model based on an improved YOLOv4. To compress the backbone network, we use MobileNetv3 as the feature extractor, combined with depthwise separable convolutions for further acceleration. The Cutout data augmentation method is also introduced to simulate citrus flowers under natural conditions. The test results show that the improved model achieves an mAP of 84.84%, is 22% smaller than YOLOv4, and is approximately two times faster. Compared with Faster R-CNN, the improved citrus flowering rate statistical model proposed in this study offers lower memory usage and faster detection while maintaining acceptable accuracy. Therefore, our solution can serve as a reference for detecting citrus flowering on edge devices.
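
The sketch below shows a minimal Cutout-style augmentation of the kind mentioned above: a single square patch of the image is zeroed out to mimic occlusion. The patch size, masking value and the 416 x 416 input resolution are illustrative assumptions, not the paper's settings.

import torch

def cutout(image: torch.Tensor, patch_size: int = 32) -> torch.Tensor:
    """Zero out one randomly placed square patch in a (C, H, W) image tensor."""
    _, h, w = image.shape
    cy = torch.randint(0, h, (1,)).item()
    cx = torch.randint(0, w, (1,)).item()
    y1, y2 = max(0, cy - patch_size // 2), min(h, cy + patch_size // 2)
    x1, x2 = max(0, cx - patch_size // 2), min(w, cx + patch_size // 2)
    out = image.clone()
    out[:, y1:y2, x1:x2] = 0.0
    return out

augmented = cutout(torch.rand(3, 416, 416))  # 416 x 416 is a common YOLO input size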

15 pages, 9515 KiB  
Article
Automated Quantification of Brittle Stars in Seabed Imagery Using Computer Vision Techniques
by Kazimieras Buškus, Evaldas Vaičiukynas, Antanas Verikas, Saulė Medelytė, Andrius Šiaulys and Aleksej Šaškov
Sensors 2021, 21(22), 7598; https://0-doi-org.brum.beds.ac.uk/10.3390/s21227598 - 16 Nov 2021
Cited by 4 | Viewed by 2113
Abstract
Underwater video surveys play a significant role in marine benthic research. Usually, surveys are filmed in transects, which are stitched into 2D mosaic maps for further analysis. Due to the massive amount of video data and the time-consuming analysis, the need for automatic image segmentation and quantitative evaluation arises. This paper investigates such techniques on annotated mosaic maps containing hundreds of instances of brittle stars. By harnessing a deep convolutional neural network with pre-trained weights and post-processing the results with a common blob detection technique, we investigate the effectiveness and potential of such a segment-and-count approach by assessing segmentation and counting success. Among the marker variants tested, disc markers could be recommended over full shape masks for brittle stars because they are faster to annotate. Underwater image enhancement techniques could not noticeably improve the segmentation results, but some might be useful for augmentation purposes.
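
To make the segment-and-count idea concrete, the sketch below counts connected components in a binary segmentation mask and filters them by a minimum area, a stand-in for the blob detection step; the area threshold and OpenCV-based implementation are assumptions, not the paper's exact post-processing.

import cv2
import numpy as np

def count_instances(binary_mask: np.ndarray, min_area: int = 50) -> int:
    """Count blobs in a 0/255 uint8 mask whose pixel area exceeds min_area."""
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary_mask, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]  # label 0 is the background and is skipped
    return int((areas >= min_area).sum())

# Toy mask with two synthetic "brittle stars".
mask = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(mask, (64, 64), 10, 255, -1)
cv2.circle(mask, (180, 200), 12, 255, -1)
print(count_instances(mask))  # 2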

23 pages, 37844 KiB  
Article
Camera Assisted Roadside Monitoring for Invasive Alien Plant Species Using Deep Learning
by Mads Dyrmann, Anders Krogh Mortensen, Lars Linneberg, Toke Thomas Høye and Kim Bjerge
Sensors 2021, 21(18), 6126; https://0-doi-org.brum.beds.ac.uk/10.3390/s21186126 - 13 Sep 2021
Cited by 4 | Viewed by 3921
Abstract
Invasive alien plant species (IAPS) pose a threat to biodiversity as they propagate and outcompete natural vegetation. In this study, a system for monitoring IAPS on the roadside is presented. The system consists of a camera that acquires images at high speed, mounted on a vehicle that follows the traffic. Images of seven IAPS (Cytisus scoparius, Heracleum, Lupinus polyphyllus, Pastinaca sativa, Reynoutria, Rosa rugosa, and Solidago) were collected on Danish motorways. Three deep convolutional neural networks for classification (ResNet50V2 and MobileNetV2) and object detection (YOLOv3) were trained and evaluated at different image sizes. The results showed that the performance of the networks varied with the input image size and with the size of the IAPS in the images. Binary classification of IAPS vs. non-IAPS showed increased performance compared to the classification of individual IAPS. This study shows that automatic detection and mapping of invasive plants along the roadside is possible at high driving speeds.
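
As a minimal sketch of the binary IAPS vs. non-IAPS formulation, the snippet below puts a two-class head on a MobileNetV2 backbone and runs it at a chosen input resolution. The 512-pixel resolution, the use of torchvision's MobileNetV2 and the two-class head are illustrative assumptions rather than the study's exact training setup.

import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

model = mobilenet_v2(weights=None)                      # pretrained weights would normally be used
model.classifier[1] = nn.Linear(model.last_channel, 2)  # class 0: non-IAPS, class 1: IAPS (hypothetical)

image_batch = torch.randn(4, 3, 512, 512)               # larger inputs can help with small plants
with torch.no_grad():
    probs = model(image_batch).softmax(dim=1)
print(probs.shape)  # torch.Size([4, 2])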

22 pages, 6401 KiB  
Article
An Instance Segmentation-Based Method to Obtain the Leaf Age and Plant Centre of Weeds in Complex Field Environments
by Longzhe Quan, Bing Wu, Shouren Mao, Chunjie Yang and Hengda Li
Sensors 2021, 21(10), 3389; https://0-doi-org.brum.beds.ac.uk/10.3390/s21103389 - 13 May 2021
Cited by 9 | Viewed by 2943
Abstract
Leaf age and plant centre are important phenotypic information of weeds, and their accurate identification plays an important role in understanding the morphological structure of weeds, guiding precise targeted spraying and reducing the use of herbicides. In this work, a weed segmentation method based on BlendMask is proposed to obtain the phenotypic information of weeds under complex field conditions. This study collected images from different angles (front, side, and top views) of three kinds of weeds (Solanum nigrum, barnyard grass (Echinochloa crus-galli), and Abutilon theophrasti Medicus) in a maize field. Two datasets (with and without data augmentation) and two backbone networks (ResNet50 and ResNet101) were compared to improve model performance. Finally, seven evaluation indicators were used to evaluate the segmentation results of the model for the different viewing angles. The results indicated that data augmentation and using ResNet101 as the backbone network enhanced model performance. The F1 value for the plant centre is 0.9330, the recognition accuracy of leaf age reaches 0.957, and the mIoU value for the top view is 0.642. Therefore, deep learning methods can effectively identify weed leaf age and plant centre, which is of great significance for variable-rate spraying.
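
The mIoU figure quoted above can be computed from per-pixel class labels as in the short sketch below; the class count and random masks are placeholders for illustration only.

import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union across classes present in either mask."""
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both prediction and ground truth
            ious.append(intersection / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 4, (256, 256))    # placeholder predicted label mask
target = np.random.randint(0, 4, (256, 256))  # placeholder ground-truth label mask
print(mean_iou(pred, target, num_classes=4))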

17 pages, 63921 KiB  
Article
U-Infuse: Democratization of Customizable Deep Learning for Object Detection
by Andrew Shepley, Greg Falzon, Christopher Lawson, Paul Meek and Paul Kwan
Sensors 2021, 21(8), 2611; https://0-doi-org.brum.beds.ac.uk/10.3390/s21082611 - 08 Apr 2021
Cited by 4 | Viewed by 4960
Abstract
Image data are one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is time- and resource-intensive, particularly in the context of camera trapping. Deep learning models have been used to achieve this task but are often not suited to specific applications due to their inability to generalise to new environments and their inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to achieve this are a key barrier to the accessibility of this technology to ecologists. Thus, there is a strong need to democratize access to deep learning technologies by providing an easy-to-use software application that allows non-technical users to train custom object detectors. U-Infuse addresses this issue by providing ecologists with the ability to train customised models using publicly available images and/or their own images without specific technical expertise. Auto-annotation and annotation-editing functionalities minimize the burden of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multiclass and single-class training and object detection, allowing ecologists to access deep learning technologies usually only available to computer scientists, on their own device, customised for their application, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily perform object detection within a user-friendly GUI, generating a species distribution report and other useful statistics, (ii) custom-train deep learning models using publicly available and custom training data, and (iii) perform supervised auto-annotation of images for further training, with the option of editing annotations to ensure quality datasets. Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and the use of transfer learning mean that domain-specific models can be trained rapidly and frequently updated without the need for computer science expertise or data sharing, protecting intellectual property and privacy.
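
U-Infuse itself is a GUI application, but the transfer-learning idea it relies on can be illustrated with a short torchvision sketch: a detector pretrained on a generic dataset has its head replaced for a user-defined set of classes and is then fine-tuned on a small custom dataset. This is not U-Infuse's actual implementation; the detector, weights and class count are assumptions for illustration.

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 3  # background + two hypothetical camera-trap species
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# The new head can now be fine-tuned on a small, user-annotated dataset while the
# pretrained backbone supplies general visual features (transfer learning).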
