Convolutional Neural Networks for Object Detection

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Environmental Remote Sensing".

Deadline for manuscript submissions: closed (31 May 2022) | Viewed by 45261

Special Issue Editors

Department of Computer Technology and Communications, Polytechnic School of Cáceres, University of Extremadura, avenida de la Universidad s/n, 10003 Cáceres, Spain
Interests: hyperspectral remote sensing; deep learning; Graphics Processing Units (GPUs); High Performance Computing (HPC) techniques
Department of Computer Technology and Communications, Polytechnic School of Cáceres, University of Extremadura, 10003 Cáceres, Spain
Interests: hyperspectral image analysis; machine (deep) learning; neural networks; multisensor data fusion; high performance computing; cloud computing

Special Issue Information

Dear Colleagues,

Object detection is a fundamental problem in remote sensing image analysis. Recent advances in hardware and software have enabled the development of powerful machine-learning-based object detection techniques. In particular, deep learning models have attracted increasing interest due to their potential for extracting highly abstract and descriptive feature representations from raw inputs. In contrast to the widely used shallow architectures and traditional handcrafted feature processing, deep learning methods offer a great variety of very deep architectures based on stacked layers, which extract increasingly complex and abstract features from the input data in a successive, hierarchical way. In this context, convolutional neural models have demonstrated strong generalization power coupled with automatic feature extraction, achieving outstanding performance and positioning themselves as the current state of the art in many computer vision tasks, particularly image classification.
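The successive, hierarchical feature extraction described above can be sketched with a toy example (pure NumPy; the hand-chosen kernels stand in for learned filters and are purely illustrative):

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

# A toy 8x8 "image" with a bright square in the centre.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

# Layer 1 responds to vertical edges; layer 2 aggregates layer-1 responses.
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])
pool_kernel = np.ones((2, 2)) / 4.0

feat1 = relu(conv2d(img, edge_kernel))    # low-level features: edges
feat2 = relu(conv2d(feat1, pool_kernel))  # higher-level: aggregated edges

print(feat1.shape, feat2.shape)  # each stacked layer shrinks the spatial extent
```

Stacking such layers, with the kernels learned by backpropagation instead of fixed by hand, is the core mechanism behind the deep architectures discussed in this Special Issue.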

Object detection goes a step further, as it involves not only classifying images but also locating objects of different classes, positions, orientations, and sizes within them, with the aim of providing a more complete image understanding. This requires detailed processing of the large amount of geometric and spatial information contained in a remotely sensed image. Despite the extensive literature devoted to this topic, developing powerful and efficient deep learning models remains challenging, due to the limitations of convolutional architectures (handling of rotations, spatial relations, black-box nature, etc.) and the characteristics of remotely sensed images (varying spatial/spectral resolutions, atmospheric noise, sensor limitations, etc.). Much remains to be understood about deep learning models for object detection in the remote sensing field.

This Special Issue aims to foster the application of advanced deep learning algorithms to perform accurate object detection applied within the remote sensing field, and it is an excellent opportunity for the dissemination of recent results and cooperation for further innovations.

For this Special Issue, we welcome contributions including, but not limited to, the following topics:

  • Deep learning, convolutional neural networks, hybrid architectures, etc. for object detection;
  • Improvements in deep learning model capabilities for extracting and learning features of interest within object detection tasks, such as context- and attention-based mechanisms, among others;
  • Detection of small or occluded objects and/or in challenging conditions;
  • Real-time or fast models for object detection;
  • Improvements in localization accuracy.

Dr. Mercedes E. Paoletti
Dr. Juan M. Haut
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Remote sensing
  • Machine learning
  • Deep learning
  • Transfer learning
  • Convolutional neural network
  • Recurrent neural network
  • Object detection

Published Papers (8 papers)


Research

Jump to: Other

19 pages, 2494 KiB  
Article
Water Body Extraction in Remote Sensing Imagery Using Domain Adaptation-Based Network Embedding Selective Self-Attention and Multi-Scale Feature Fusion
by Jiahang Liu and Yue Wang
Remote Sens. 2022, 14(15), 3538; https://doi.org/10.3390/rs14153538 - 23 Jul 2022
Cited by 6 | Viewed by 2121
Abstract
Water bodies are common objects in remote sensing images, and high-quality water body extraction is important for further applications. With the development of deep learning (DL) in recent years, semantic segmentation based on deep convolutional neural networks (DCNNs) offers a new way to extract water bodies from remote sensing images automatically and with high quality. Although several methods have been proposed, two major problems remain in water body extraction, especially for high-resolution remote sensing images. One is that it is difficult for DCNN-based methods to effectively detect both large and small water bodies simultaneously and to accurately predict the edge position of water bodies; the other is that DL methods need a large number of labeled samples, which are often insufficient in practical applications. In this paper, a novel SFnet-DA network, based on a domain adaptation (DA) framework embedding a selective self-attention (SSA) mechanism and a multi-scale feature fusion (MFF) module, is proposed to deal with these problems. Specifically, the SSA mechanism selectively enhances spatial detail and semantic information in the bottom-up branches of the network, which improves the detection of water bodies with drastic scale changes and prevents the prediction from being affected by confounding factors such as roads and green algae. Furthermore, the MFF module accurately acquires edge information by changing the number of channels of the advanced feature branches with a unique fusion method. To avoid labeling work, SFnet-DA reduces the difference in feature distribution between labeled and unlabeled datasets by building an adversarial relationship between the feature extractor and the domain classifier, so that parameters trained on labeled datasets can be used directly to predict unlabeled images. Experimental results demonstrate that the proposed SFnet-DA outperforms state-of-the-art methods on water body segmentation.
(This article belongs to the Special Issue Convolutional Neural Networks for Object Detection)

18 pages, 3725 KiB  
Article
High Quality Object Detection for Multiresolution Remote Sensing Imagery Using Cascaded Multi-Stage Detectors
by Binglong Wu, Yuan Shen, Shanxin Guo, Jinsong Chen, Luyi Sun, Hongzhong Li and Yong Ao
Remote Sens. 2022, 14(9), 2091; https://doi.org/10.3390/rs14092091 - 27 Apr 2022
Cited by 5 | Viewed by 1847
Abstract
Deep-learning-based object detectors have substantially improved state-of-the-art object detection in remote sensing images in terms of precision and degree of automation. Nevertheless, the large variation in object scales makes it difficult to achieve high-quality detection across multiresolution remote sensing images, where quality is defined by the Intersection over Union (IoU) threshold used in training. In addition, the imbalance between positive and negative samples across multiresolution images worsens detection precision. Recently, it was found that the Cascade region-based convolutional neural network (Cascade R-CNN) can achieve higher-quality detection by introducing a cascaded three-stage structure with progressively increased IoU thresholds. However, the performance of Cascade R-CNN degrades when a fourth stage is added. We investigated the cause and found that a mismatch between the region of interest (RoI) features and the classifier could be responsible for the degradation. Herein, we propose a Cascade R-CNN++ structure to address this issue and extend the three-stage architecture to multiple stages for general use. Specifically, for cascaded classification, we propose a new ensemble strategy for the classifier and RoI features to improve classification accuracy at inference. For localization, we modified the loss function of the bounding box regressor to obtain higher sensitivity around zero. Experiments on the DOTA dataset demonstrate that Cascade R-CNN++ outperforms Cascade R-CNN in both precision and detection quality. We conducted further analysis on multiresolution remote sensing images to verify model transferability across object scales.
(This article belongs to the Special Issue Convolutional Neural Networks for Object Detection)
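Since detection quality in this entry is defined via the IoU threshold, a minimal sketch of how a proposal is scored against a ground-truth box may help (the [x1, y1, x2, y2] corner format and the example thresholds are illustrative assumptions, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes [x1, y1, x2, y2]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A cascade labels a proposal positive only if it clears the stage threshold,
# with thresholds raised progressively, e.g. 0.5 -> 0.6 -> 0.7.
proposal = [10, 10, 50, 50]
truth = [20, 10, 60, 50]
score = iou(proposal, truth)
for threshold in (0.5, 0.6, 0.7):
    label = "positive" if score >= threshold else "negative"
    print(f"IoU={score:.2f}, stage threshold {threshold}: {label}")
```

Raising the threshold stage by stage is what lets later stages train on progressively better-localized proposals.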

18 pages, 1717 KiB  
Article
Revise-Net: Exploiting Reverse Attention Mechanism for Salient Object Detection
by Rukhshanda Hussain, Yash Karbhari, Muhammad Fazal Ijaz, Marcin Woźniak, Pawan Kumar Singh and Ram Sarkar
Remote Sens. 2021, 13(23), 4941; https://doi.org/10.3390/rs13234941 - 05 Dec 2021
Cited by 33 | Viewed by 2956
Abstract
Recently, deep learning-based methods, especially those using fully convolutional neural networks, have shown extraordinary performance in salient object detection. Despite this success, clean boundary detection of salient objects remains a challenging task. Most contemporary methods rely on dedicated edge detection modules to avoid noisy boundaries. In this work, we propose extracting finer semantic features from multiple encoding layers and attentively re-utilizing them when generating the final segmentation result. The proposed Revise-Net model is divided into three parts: (a) a prediction module, (b) a residual enhancement module (REM), and (c) reverse attention modules. First, we generate a coarse saliency map through the prediction module, which is refined in the enhancement module. Finally, multiple reverse attention modules at varying scales are cascaded between the two networks to guide the prediction module using the intermediate segmentation maps generated at each downsampling level of the REM. Our method efficiently classifies boundary pixels using a combination of binary cross-entropy, similarity index, and intersection over union losses at the pixel, patch, and map levels, thereby effectively segmenting the salient objects in an image. Compared with several state-of-the-art frameworks, the proposed Revise-Net outperforms them by a significant margin on three publicly available datasets, DUTS-TE, ECSSD, and HKU-IS, on both regional and boundary estimation measures.
(This article belongs to the Special Issue Convolutional Neural Networks for Object Detection)
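The pixel- and map-level loss combination mentioned in this abstract can be illustrated with a simplified NumPy sketch (binary cross-entropy plus a soft IoU term only; the paper's patch-level similarity term is omitted, and the equal weighting below is an assumption for illustration):

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Pixel-level binary cross-entropy between a saliency map and its mask."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def iou_loss(pred, target, eps=1e-7):
    """Map-level soft IoU loss: 1 - |pred ∩ target| / |pred ∪ target|."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return float(1.0 - (inter + eps) / (union + eps))

def hybrid_loss(pred, target):
    # Equal weighting is an illustrative choice, not the paper's.
    return bce_loss(pred, target) + iou_loss(pred, target)

# Toy 4x4 mask with a salient 2x2 region.
target = np.zeros((4, 4))
target[1:3, 1:3] = 1.0
good = np.where(target == 1.0, 0.9, 0.1)  # confident, mostly correct map
bad = np.full((4, 4), 0.5)                # uninformative map
print(hybrid_loss(good, target), hybrid_loss(bad, target))
```

The IoU term penalizes region-level mismatch that per-pixel BCE alone underweights, which is the intuition behind combining losses at several granularities.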

22 pages, 6871 KiB  
Article
Lightweight Underwater Object Detection Based on YOLO v4 and Multi-Scale Attentional Feature Fusion
by Minghua Zhang, Shubo Xu, Wei Song, Qi He and Quanmiao Wei
Remote Sens. 2021, 13(22), 4706; https://doi.org/10.3390/rs13224706 - 21 Nov 2021
Cited by 74 | Viewed by 7407
Abstract
Underwater object detection is a challenging and attractive task in computer vision. Although object detection techniques have achieved good performance on general datasets, low visibility and color bias in complex underwater environments lead to generally poor image quality; in addition, small targets and target aggregation leave less extractable information, making satisfactory results difficult to achieve. Past research on deep-learning-based underwater object detection has mainly focused on improving detection accuracy with large networks; lightweight underwater object detection has rarely received attention, resulting in large model sizes and slow detection speeds, whereas object detection in marine environments requires better real-time and lightweight performance. In view of this, a lightweight underwater object detection method based on MobileNet v2, the You Only Look Once (YOLO) v4 algorithm, and attentional feature fusion is proposed to strike a balance between accuracy and speed for target detection in marine environments. In our work, a combination of MobileNet v2 and depth-wise separable convolution is used to reduce the number of model parameters and the model size. A Modified Attentional Feature Fusion (AFFM) module is proposed to better fuse semantic and scale-inconsistent features and improve accuracy. Experiments show that the proposed method obtains a mean average precision (mAP) of 81.67% on the PASCAL VOC dataset and 92.65% on the Brackish dataset, and reaches a processing speed of 44.22 frames per second (FPS) on the Brackish dataset. Moreover, the number of model parameters and the model size are compressed to 16.76% and 19.53% of those of YOLO v4, respectively, achieving a good tradeoff between time and accuracy for underwater object detection.
(This article belongs to the Special Issue Convolutional Neural Networks for Object Detection)
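The parameter saving behind the depth-wise separable convolutions used in lightweight models like this one can be checked with simple arithmetic (the 3x3, 128-to-256-channel layer shape below is a hypothetical example, not taken from the paper):

```python
def standard_conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then 1x1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

# Illustrative layer shape: 3x3 kernel, 128 -> 256 channels.
std = standard_conv_params(3, 128, 256)
sep = depthwise_separable_params(3, 128, 256)
print(std, sep, f"{sep / std:.1%}")  # the separable version keeps ~11.5% of the weights
```

The roughly order-of-magnitude reduction per layer is what makes the MobileNet-style backbone and the compressed model sizes reported above plausible.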

20 pages, 10445 KiB  
Article
Advancing Tassel Detection and Counting: Annotation and Algorithms
by Azam Karami, Karoll Quijano and Melba Crawford
Remote Sens. 2021, 13(15), 2881; https://doi.org/10.3390/rs13152881 - 23 Jul 2021
Cited by 13 | Viewed by 2913
Abstract
Tassel counts provide valuable information related to flowering and yield prediction in maize but are expensive and time-consuming to acquire via traditional manual approaches. High-resolution RGB imagery acquired by unmanned aerial vehicles (UAVs), coupled with advanced machine learning approaches, including deep learning (DL), provides a new capability for monitoring flowering. In this article, three state-of-the-art DL techniques, CenterNet, based on point annotation, and task-aware spatial disentanglement (TSD) and DetectoRS (detecting objects with recursive feature pyramids and switchable atrous convolution), based on bounding box annotation, are modified to improve their performance for this application and evaluated for tassel detection relative to TasselNetV2+. The dataset for the experiments comprises RGB images of maize tassels from plant breeding experiments, which vary in size, complexity, and overlap. Results show that point annotations are more accurate and simpler to acquire than bounding boxes, and that bounding-box-based approaches are more sensitive to bounding box size and background than point-based approaches. Overall, CenterNet achieves high accuracy in comparison to the other techniques, but DetectoRS better detects early-stage tassels. The results of these experiments were more robust than those of TasselNetV2+, which is sensitive to the number of tassels in the image.
(This article belongs to the Special Issue Convolutional Neural Networks for Object Detection)

21 pages, 60660 KiB  
Article
Small Object Detection in Remote Sensing Images with Residual Feature Aggregation-Based Super-Resolution and Object Detector Network
by Syed Muhammad Arsalan Bashir and Yi Wang
Remote Sens. 2021, 13(9), 1854; https://doi.org/10.3390/rs13091854 - 10 May 2021
Cited by 34 | Viewed by 6373
Abstract
This paper deals with detecting small objects in remote sensing images from satellites or aerial vehicles by utilizing image super-resolution for resolution enhancement with a deep-learning-based detection method. The paper provides a rationale for applying image super-resolution to small objects, improving the current super-resolution (SR) framework by incorporating a cyclic generative adversarial network (GAN) and residual feature aggregation (RFA) to improve detection performance. The novelty of the method is threefold: first, the proposed framework is independent of the final object detector, i.e., YOLOv3 could be replaced with Faster R-CNN or any other detector; second, a residual feature aggregation network is used in the generator, which significantly improves detection performance as the RFA network captures complex features; and third, the whole network is transformed into a cyclic GAN. The image super-resolution cyclic GAN with RFA and YOLO as the detection network is termed SRCGAN-RFA-YOLO, and its detection accuracy is compared with that of other methods. Rigorous experiments on both satellite and aerial images (ISPRS Potsdam, VAID, and Draper Satellite Image Chronology datasets) show that detection performance increases when super-resolution is used for spatial resolution enhancement; for an IoU of 0.10, an AP of 0.7867 was achieved at a scale factor of 16.
(This article belongs to the Special Issue Convolutional Neural Networks for Object Detection)

26 pages, 10839 KiB  
Article
YOLO-Fine: One-Stage Detector of Small Objects Under Various Backgrounds in Remote Sensing Images
by Minh-Tan Pham, Luc Courtrai, Chloé Friguet, Sébastien Lefèvre and Alexandre Baussard
Remote Sens. 2020, 12(15), 2501; https://doi.org/10.3390/rs12152501 - 04 Aug 2020
Cited by 82 | Viewed by 14566
Abstract
Object detection from aerial and satellite remote sensing images has been an active research topic over the past decade. Thanks to increases in computational resources and data availability, deep learning-based object detection methods have achieved numerous successes in computer vision, and more recently in remote sensing. However, the ability of current detectors to deal with (very) small objects remains limited. In particular, fast detection of small objects in a large observed scene is still an open question. In this work, we address this challenge and introduce an enhanced one-stage deep learning-based detection model, called You Only Look Once (YOLO)-fine, which is based on the structure of YOLOv3. Our detector is designed to detect small objects with high accuracy and high speed, enabling real-time applications in operational contexts. We also investigate its robustness to the appearance of new backgrounds in the validation set, thus tackling the domain adaptation issue that is critical in remote sensing. Experimental studies conducted on both aerial and satellite benchmark datasets show significant improvements of YOLO-fine over other state-of-the-art object detectors.
(This article belongs to the Special Issue Convolutional Neural Networks for Object Detection)

Other

Jump to: Research

17 pages, 4690 KiB  
Technical Note
Smart Count System Based on Object Detection Using Deep Learning
by Jiwon Moon, Sangkyu Lim, Hakjun Lee, Seungbum Yu and Ki-Baek Lee
Remote Sens. 2022, 14(15), 3761; https://doi.org/10.3390/rs14153761 - 05 Aug 2022
Cited by 4 | Viewed by 4567
Abstract
Object counting is an indispensable task in manufacturing and management. Recently, advances in image-processing techniques and deep learning object detection have achieved excellent performance in object-counting tasks. Accordingly, we propose a novel small-size smart counting system composed of a low-cost hardware device and a cloud-based object-counting software server, implementing an accurate counting function while overcoming the limited computing power of local hardware. The cloud-based object-counting software consists of a model adapted to the object-counting task through a novel DBC-NMS (our own technique) and hyperparameter tuning of deep-learning-based object-detection methods. With DBC-NMS and hyperparameter tuning, the cloud-based software is competitive on commonly used public datasets (CARPK and SKU110K) and our custom dataset of small pills, achieving a mean absolute error (MAE) of 1.03 and a root mean squared error (RMSE) of 1.20 on the Pill dataset. These results demonstrate that the proposed smart counting system accurately detects and counts objects in densely distributed scenes. In addition, the proposed system shows a reasonable and efficient cost-performance ratio by combining low-cost hardware with cloud-based software.
(This article belongs to the Special Issue Convolutional Neural Networks for Object Detection)
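DBC-NMS is the authors' own technique and is not specified in this abstract; for orientation, the standard greedy non-maximum suppression that such counting pipelines build on can be sketched as follows (the box format, scores, and threshold are illustrative assumptions):

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping rivals."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two near-duplicate detections of one pill plus a distinct one.
boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [30, 30, 40, 40]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]
```

In dense scenes such as pill trays, plain greedy suppression can merge genuinely distinct neighboring objects, which is the kind of failure a modified NMS aims to avoid.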
