
Computer Vision and Machine Learning Application on Earth Observation

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 55191

Special Issue Editor


Dr. Juan Ignacio Arribas
Guest Editor
Department of Electrical Engineering, University of Valladolid, Valladolid, Spain
Interests: computer-aided diagnosis; computer vision; machine learning; expert systems

Special Issue Information

Dear Colleagues,

With the rapid development of computing, the interest in, power of, and advantages of automatic computer-aided processing techniques in science and engineering have become clear. In particular, computer vision (CV) techniques combined with machine learning (ML, also known as computational intelligence or machine intelligence) systems can reach both a very high degree of automation and high accuracy. CV in conjunction with ML may be applied to a wide range of problems of interest in remote Earth sensing, mainly through remote imaging and remote video processing of various kinds. These approaches have been made possible by the very rapid development and growth of high-resolution, high-SNR, low-cost imaging sensors and devices of many types, including single- and multiple-sensor arrays, visible-range CCD/CMOS, hyper-spectral, multi-spectral, infrared, ultraviolet, and thermal sensors, to name a few.

At the same time, autonomous ML systems, including expert systems, neural networks, and genetic algorithms, among others, have recently seen very rapid development, enabling computer-aided diagnosis, automatic classification, pattern recognition, and regression using learning algorithms under supervised, unsupervised, reinforcement, or deep learning paradigms.

For these reasons, the application of CV and ML to remote Earth observation and sensing is becoming highly attractive and popular, making it possible to reach a very high degree of autonomous functioning and accuracy with promising results. Applications of interest include the following, among others:

  • Aerial imaging systems;
  • Agriculture field and aquaculture open-air automatic image classification systems;
  • Air traffic, airways, and plane pathways observation;
  • Climate and atmospheric/tropospheric observation, prediction, classification, and sensing systems;
  • Crops, crop yield, vegetation, and forest remote imaging sensing systems;
  • Deep-space star sensing;
  • Earth-surface remote sensing;
  • Earth-surface traffic, street, road, and highway detection, classification, and sensing systems;
  • Ecology, ecosystems, wildlife, and migration remote observation and monitoring;
  • Electrical power lines and power supply system remote imaging;
  • Fire detection and monitoring systems;
  • Hybrid automatic sensing systems;
  • Hyper-spectral imaging remote sensing systems;
  • Maritime/ship traffic observation, classification, or estimation;
  • Multi-sensor array remote sensing systems;
  • Multi-spectral automatic remote imaging systems;
  • Navigation, GPS, and other Earth-surface geodesic and localization systems;
  • Open-air orchard/vineyard imaging sensing;
  • Population, people, and crowd remote imaging estimation or counting;
  • Railway traffic lines remote observation;
  • Satellite Earth observation;
  • Storm, cloud, rainfall, and water diffraction sensing;
  • Time-lapse and seasonal Earth observation;
  • UAV/drone imaging systems;
  • Water, river, lake, sea, and flooding remote observation and monitoring.

Dr. Juan Ignacio Arribas
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • classification
  • computer vision
  • detection and estimation
  • expert systems
  • imaging
  • learning systems
  • machine learning
  • neural networks
  • optimization
  • pattern recognition
  • receiver operating characteristic
  • remote sensing applications
  • segmentation
  • video processing

Published Papers (19 papers)


Research


25 pages, 65047 KiB  
Article
On the Robustness and Generalization Ability of Building Footprint Extraction on the Example of SegNet and Mask R-CNN
by Muntaha Sakeena, Eric Stumpe, Miroslav Despotovic, David Koch and Matthias Zeppelzauer
Remote Sens. 2023, 15(8), 2135; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15082135 - 18 Apr 2023
Cited by 2 | Viewed by 1260
Abstract
Building footprint (BFP) extraction focuses on the precise pixel-wise segmentation of buildings from aerial photographs such as satellite images. BFP extraction is an essential task in remote sensing and represents the foundation for many higher-level analysis tasks, such as disaster management, monitoring of city development, etc. Building footprint extraction is challenging because buildings can have different sizes, shapes, and appearances both in the same region and in different regions of the world. In addition, effects, such as occlusions, shadows, and bad lighting, have to also be considered and compensated. A rich body of work for BFP extraction has been presented in the literature, and promising research results have been reported on benchmarking datasets. Despite the comprehensive work performed, it is still unclear how robust and generalizable state-of-the-art methods are to different regions, cities, settlement structures, and densities. The purpose of this study is to close this gap by investigating questions on the practical applicability of BFP extraction. In particular, we evaluate the robustness and generalizability of state-of-the-art methods as well as their transfer learning capabilities. Therefore, we investigate in detail two of the most popular deep learning architectures for BFP extraction (i.e., SegNet, an encoder–decoder-based architecture and Mask R-CNN, an object detection architecture) and evaluate them with respect to different aspects on a proprietary high-resolution satellite image dataset as well as on publicly available datasets. Results show that both networks generalize well to new data, new cities, and across cities from different continents. They both benefit from increased training data, especially when this data is from the same distribution (data source) or of comparable resolution. Transfer learning from a data source with different recording parameters is not always beneficial. Full article
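Segmentation quality in studies like this is typically scored per pixel. As a hedged illustration (the function name and toy masks below are invented for this sketch, not taken from the paper), the intersection-over-union of two binary footprint masks can be computed as:

```python
import numpy as np

def footprint_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Pixel-wise intersection-over-union of two binary building masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:  # both masks empty: define IoU as 1.0
        return 1.0
    inter = np.logical_and(pred, truth).sum()
    return inter / union
```

A mean of this score over a held-out city gives one simple view of the cross-region generalization the authors evaluate.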
(This article belongs to the Special Issue Computer Vision and Machine Learning Application on Earth Observation)

17 pages, 106711 KiB  
Article
Enhancing Contrast of Dark Satellite Images Based on Fuzzy Semi-Supervised Clustering and an Enhancement Operator
by Nguyen Tu Trung, Xuan-Hien Le and Tran Manh Tuan
Remote Sens. 2023, 15(6), 1645; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15061645 - 18 Mar 2023
Cited by 1 | Viewed by 1397
Abstract
Contrast enhancement of images is a crucial topic in image processing that improves the quality of images. The methods of image enhancement are classified into three types, including the histogram method, the fuzzy logic method, and the optimal method. Studies on image enhancement are often based on the rules: if it is bright, then it is brighter; if it is dark, then it is darker, using a global approach. Thus, it is hard to enhance objects in all dark and light areas, as in satellite images. This study presents a novel algorithm for improving satellite images, called remote sensing image enhancement based on cluster enhancement (RSIECE). First, the input image is clustered by the algorithm of fuzzy semi-supervised clustering. Then, the upper bound and lower bound are estimated according to the cluster. Next, a sub-algorithm is implemented for clustering enhancement using an enhancement operator. For each pixel, the gray levels for each channel (R, G, B) are transformed with this sub-algorithm to generate new corresponding gray levels because after clustering, pixels belong to clusters with the corresponding membership values. Therefore, the output gray level value will be aggregated from the enhanced gray levels by the sub-algorithm with the weight of the corresponding cluster membership value. The test results demonstrate that the suggested algorithm is superior to several recently developed approaches. Full article
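The membership-weighted aggregation step can be sketched roughly as below. The linear stretch here is only a stand-in for the paper's enhancement operator, and all names and bounds are illustrative:

```python
def enhance_pixel(v, lowers, uppers, memberships):
    """Aggregate per-cluster contrast stretches, weighted by fuzzy membership.

    lowers/uppers: hypothetical per-cluster gray-level bounds estimated from
    the clustering step; memberships: fuzzy membership of this pixel in each
    cluster (assumed to sum to 1). The stretch is a plain linear rescale, a
    stand-in for the paper's enhancement operator.
    """
    v = float(v)
    out = 0.0
    for lo, up, u in zip(lowers, uppers, memberships):
        stretched = (v - lo) / (up - lo) * 255.0
        out += u * min(255.0, max(0.0, stretched))
    return out
```

Applied per channel (R, G, B), this mirrors the idea that each pixel's output gray level is a membership-weighted blend of cluster-specific enhancements.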

18 pages, 10560 KiB  
Article
Local Convergence Index-Based Infrared Small Target Detection against Complex Scenes
by Siying Cao, Jiakun Deng, Junhai Luo, Zhi Li, Junsong Hu and Zhenming Peng
Remote Sens. 2023, 15(5), 1464; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15051464 - 06 Mar 2023
Cited by 5 | Viewed by 1742
Abstract
Infrared small target detection (ISTD) plays a crucial role in precision guidance, anti-missile interception, and military early-warning systems. Existing approaches suffer from high false alarm rates and low detection rates when detecting dim and small targets in complex scenes. A robust scheme for automatically detecting infrared small targets is proposed to address this problem. First, a gradient weighting technique with high sensitivity was used for extracting target candidates. Second, a new collection of features based on local convergence index (LCI) filters with a strong representation of dim or arbitrarily shaped targets was extracted for each candidate. Finally, the collective set of features was inputted to a random undersampling boosting classifier (RUSBoost) to discriminate the real targets from false-alarm candidates. Extensive experiments on public datasets NUDT-SIRST and NUAA-SIRST showed that the proposed method achieved competitive performance with state-of-the-art (SOTA) algorithms. It is also important to note that the average processing time was as low as 0.07 s per frame with low time consumption, which is beneficial for practical applications. Full article
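The class-imbalance handling inside RUSBoost, random undersampling of the majority class at each boosting round, can be sketched as follows (a simplified two-class stand-in, not the authors' implementation):

```python
import random

def random_undersample(features, labels, seed=0):
    """Randomly drop majority-class samples so both classes are balanced,
    as done inside each RUSBoost boosting iteration (two-class sketch)."""
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    major, minor = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    rng = random.Random(seed)
    kept = rng.sample(major, len(minor)) + minor
    kept.sort()
    return [features[i] for i in kept], [labels[i] for i in kept]
```

In the full algorithm, a weak learner is fit on each balanced subsample and the learners are combined with boosting weights; `imbalanced-learn`'s `RUSBoostClassifier` offers a ready-made version of this scheme.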

24 pages, 5822 KiB  
Article
ANLPT: Self-Adaptive and Non-Local Patch-Tensor Model for Infrared Small Target Detection
by Zhao Zhang, Cheng Ding, Zhisheng Gao and Chunzhi Xie
Remote Sens. 2023, 15(4), 1021; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15041021 - 12 Feb 2023
Cited by 7 | Viewed by 1518
Abstract
Infrared small target detection is widely used for early warning, aircraft monitoring, ship monitoring, and so on, which requires the small target and its background to be represented and modeled effectively to achieve their complete separation. Low-rank sparse decomposition based on the structural features of infrared images has attracted much attention among many algorithms because of its good interpretability. Based on our study, we found some shortcomings in existing baseline methods, such as redundancy in tensor construction and fixed compromising factors. A self-adaptive low-rank sparse tensor decomposition model for infrared dim small target detection is proposed in this paper. In this model, the entropy of image blocks is used for fast matching of non-local similar blocks to construct a better sparse tensor for small targets. An adaptive strategy of low-rank sparse tensor decomposition is proposed for different background environments, which adaptively determines the weight coefficient to achieve effective separation of background and small targets in different background environments. Tensor robust principal component analysis (TRPCA) was applied to achieve low-rank sparse tensor decomposition to reconstruct small targets and their backgrounds separately. Extensive experiments on various types of datasets show that the proposed method is competitive. Full article
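The two shrinkage operators at the core of low-rank sparse decomposition can be illustrated in matrix form (a simplification of the paper's tensor setting; the function names are ours):

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise shrinkage, used for the sparse (small-target) component."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(m, tau):
    """Singular value thresholding, used for the low-rank (background)
    component: shrink the singular values, then reassemble the matrix."""
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u @ np.diag(soft_threshold(s, tau)) @ vt
```

TRPCA-style solvers alternate these two steps (on tensor unfoldings in the paper's case) until the background/target split converges.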

24 pages, 4189 KiB  
Article
AutoML-Based Neural Architecture Search for Object Recognition in Satellite Imagery
by Povilas Gudzius, Olga Kurasova, Vytenis Darulis and Ernestas Filatovas
Remote Sens. 2023, 15(1), 91; https://doi.org/10.3390/rs15010091 - 24 Dec 2022
Cited by 4 | Viewed by 2119
Abstract
Advancements in optical satellite hardware and lowered costs for satellite launches raised the high demand for geospatial intelligence. The object recognition problem in multi-spectral satellite imagery carries dataset properties unique to this problem. Perspective distortion, resolution variability, data spectrality, and other features make it difficult for a specific human-invented neural network to perform well on a dispersed type of scenery, ranging data quality, and different objects. UNET, MACU, and other manually designed network architectures deliver high-performance results for accuracy and prediction speed in large objects. However, once trained on different datasets, the performance drops and requires manual recalibration or further configuration testing to adjust the neural network architecture. To solve these issues, AutoML-based techniques can be employed. In this paper, we focus on Neural Architecture Search that is capable of obtaining a well-performing network configuration without human manual intervention. Firstly, we conducted detailed testing on the top four performing neural networks for object recognition in satellite imagery to compare their performance: FastFCN, DeepLabv3, UNET, and MACU. Then we applied and further developed a Neural Architecture Search technique for the best-performing manually designed MACU by optimizing a search space at the artificial neuron cellular level of the network. Several NAS-MACU versions were explored and evaluated. Our developed AutoML process generated a NAS-MACU neural network that produced better performance compared with MACU, especially in a low-information intensity environment. The experimental investigation was performed on our annotated and updated publicly available satellite imagery dataset. We can state that the application of the Neural Architecture Search procedure has the capability to be applied across various datasets and object recognition problems within the remote sensing research field. 
Full article
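The simplest Neural Architecture Search baseline, random search over a discrete search space, conveys the idea behind such procedures; the search space and all names below are invented for illustration and are far smaller than a real NAS-MACU cell space:

```python
import random

SEARCH_SPACE = {            # hypothetical cell-level choices
    "kernel": [3, 5, 7],
    "width": [16, 32, 64],
    "activation": ["relu", "gelu"],
}

def random_search(score_fn, n_trials=20, seed=0):
    """Sample architectures at random and keep the best-scoring one,
    the weakest NAS baseline; real NAS strategies search far more
    efficiently, but the interface is the same."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        s = score_fn(cfg)  # in practice: train briefly, return val. accuracy
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score
```

In practice `score_fn` would train a candidate network briefly on a validation split; here it is any callable scoring a configuration.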

19 pages, 1615 KiB  
Article
Comparison of Classic Classifiers, Metaheuristic Algorithms and Convolutional Neural Networks in Hyperspectral Classification of Nitrogen Treatment in Tomato Leaves
by Brahim Benmouna, Raziyeh Pourdarbani, Sajad Sabzi, Ruben Fernandez-Beltran, Ginés García-Mateos and José Miguel Molina-Martínez
Remote Sens. 2022, 14(24), 6366; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14246366 - 16 Dec 2022
Cited by 7 | Viewed by 1587
Abstract
Tomato is an agricultural product of great economic importance because it is one of the most consumed vegetables in the world. The most crucial chemical element for the growth and development of tomato is nitrogen (N). However, incorrect nitrogen usage can alter the quality of tomato fruit, rendering it undesirable to customers. Therefore, the goal of the current study is to investigate the early detection of excess nitrogen application in the leaves of the Royal tomato variety using a non-destructive hyperspectral imaging system. Hyperspectral information in the leaf images at different wavelengths of 400–1100 nm was studied; they were taken from different treatments with normal nitrogen application (A), and at the first (B), second (C) and third (D) day after the application of excess nitrogen. We investigated the performance of nine machine learning classifiers: two classic supervised classifiers, i.e., linear discriminant analysis (LDA) and support vector machines (SVMs); three hybrid artificial neural network classifiers, namely, artificial neural networks combined with independent component analysis (ANN-ICA), harmony search (ANN-HS), and the bees algorithm (ANN-BA); and four classifiers based on deep convolutional neural networks (CNNs). The results showed that the best classifier was a CNN method, with a correct classification rate (CCR) of 91.6%, compared with an average of 85.5%, 68.5%, 90.8%, 88.8% and 89.2% for LDA, SVM, ANN-ICA, ANN-HS and ANN-BA, respectively. This shows that modern CNN methods should be preferred for spectral analysis over other classical techniques. These CNN architectures can be used in remote sensing for the precise detection of the excessive use of nitrogen fertilizers over large areas. Full article
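For intuition, a minimal spectral classifier and the CCR metric might look like the sketch below: a nearest-centroid baseline, far simpler than any of the nine classifiers compared in the paper, with all names ours:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Per-class mean spectra: a minimal baseline classifier."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(X, classes, centroids):
    """Assign each spectrum to the class with the closest mean spectrum."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def ccr(y_true, y_pred):
    """Correct classification rate, the figure of merit used in the paper."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))
```

Each row of `X` stands for a leaf spectrum sampled across the 400–1100 nm bands; the real study replaces the centroid rule with LDA, SVM, hybrid ANN, or CNN classifiers.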

24 pages, 1318 KiB  
Article
Supervised Segmentation of NO2 Plumes from Individual Ships Using TROPOMI Satellite Data
by Solomiia Kurchaba, Jasper van Vliet, Fons J. Verbeek, Jacqueline J. Meulman and Cor J. Veenman
Remote Sens. 2022, 14(22), 5809; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14225809 - 17 Nov 2022
Cited by 7 | Viewed by 1659
Abstract
The shipping industry is one of the strongest anthropogenic emitters of NOx—a substance harmful both to human health and the environment. The rapid growth of the industry causes societal pressure on controlling the emission levels produced by ships. All the methods currently used for ship emission monitoring are costly and require proximity to a ship, which makes global and continuous emission monitoring impossible. A promising approach is the application of remote sensing. Studies showed that some of the NO2 plumes from individual ships can visually be distinguished using the TROPOspheric Monitoring Instrument on board the Copernicus Sentinel 5 Precursor (TROPOMI/S5P). To deploy a remote-sensing-based global emission monitoring system, an automated procedure for the estimation of NO2 emissions from individual ships is needed. The extremely low signal-to-noise ratio of the available data, as well as the absence of the ground truth makes the task very challenging. Here, we present a methodology for the automated segmentation of NO2 plumes produced by seagoing ships using supervised machine learning on TROPOMI/S5P data. We show that the proposed approach leads to more than a 20% increase in the average precision score in comparison to the methods used in previous studies and results in a high correlation of 0.834 with the theoretically derived ship emission proxy. This work is a crucial step towards the development of an automated procedure for global ship emission monitoring using remote sensing data. Full article

19 pages, 41109 KiB  
Article
Multiscale Normalization Attention Network for Water Body Extraction from Remote Sensing Imagery
by Xin Lyu, Yiwei Fang, Baogen Tong, Xin Li and Tao Zeng
Remote Sens. 2022, 14(19), 4983; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14194983 - 07 Oct 2022
Cited by 4 | Viewed by 1418
Abstract
Extracting water bodies is an important task in remote sensing imagery (RSI) interpretation. Deep convolution neural networks (DCNNs) show great potential in feature learning; they are widely used in the water body interpretation of RSI. However, the accuracy of DCNNs is still unsatisfactory due to differences in the many hetero-features of water bodies, such as spectrum, geometry, and spatial size. To address the problem mentioned above, this paper proposes a multiscale normalization attention network (MSNANet) which can accurately extract water bodies in complicated scenarios. First of all, a multiscale normalization attention (MSNA) module was designed to merge multiscale water body features and highlight feature representation. Then, an optimized atrous spatial pyramid pooling (OASPP) module was developed to refine the representation by leveraging context information, which improves segmentation performance. Furthermore, a head module (FEH) for feature enhancing was devised to realize high-level feature enhancement and reduce training time. The extensive experiments were carried out on two benchmarks: the Surface Water dataset and the Qinghai–Tibet Plateau Lake dataset. The results indicate that the proposed model outperforms current mainstream models on OA (overall accuracy), f1-score, kappa, and MIoU (mean intersection over union). Moreover, the effectiveness of the proposed modules was proven to be favorable through ablation study. Full article

20 pages, 33405 KiB  
Article
Diffusion Model with Detail Complement for Super-Resolution of Remote Sensing
by Jinzhe Liu, Zhiqiang Yuan, Zhaoying Pan, Yiqun Fu, Li Liu and Bin Lu
Remote Sens. 2022, 14(19), 4834; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14194834 - 28 Sep 2022
Cited by 21 | Viewed by 7705
Abstract
Remote sensing super-resolution (RSSR) aims to improve remote sensing (RS) image resolution while providing finer spatial details, which is of great significance for high-quality RS image interpretation. The traditional RSSR is based on the optimization method, which pays insufficient attention to small targets and lacks the ability of model understanding and detail supplement. To alleviate the above problems, we propose the generative Diffusion Model with Detail Complement (DMDC) for RS super-resolution. Firstly, unlike traditional optimization models with insufficient image understanding, we introduce the diffusion model as a generation model into RSSR tasks and regard low-resolution images as condition information to guide image generation. Next, considering that generative models may not be able to accurately recover specific small objects and complex scenes, we propose the detail supplement task to improve the recovery ability of DMDC. Finally, the strong diversity of the diffusion model makes it possibly inappropriate in RSSR, for this purpose, we come up with joint pixel constraint loss and denoise loss to optimize the direction of inverse diffusion. The extensive qualitative and quantitative experiments demonstrate the superiority of our method in RSSR with small and dense targets. Moreover, the results from direct transfer to different datasets also prove the superior generalization ability of DMDC. Full article

21 pages, 28968 KiB  
Article
Fine-Grained Classification of Optical Remote Sensing Ship Images Based on Deep Convolution Neural Network
by Yantong Chen, Zhongling Zhang, Zekun Chen, Yanyan Zhang and Junsheng Wang
Remote Sens. 2022, 14(18), 4566; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14184566 - 13 Sep 2022
Cited by 3 | Viewed by 1698
Abstract
Marine activities occupy an important position in human society. The accurate classification of ships is an effective monitoring method. However, traditional image classification has the problem of low classification accuracy, and the corresponding ship dataset also has the problem of long-tail distribution. Aimed at solving these problems, this paper proposes a fine-grained classification method of optical remote sensing ship images based on deep convolution neural network. We use three-level images to extract three-level features for classification. The first-level image is the original image as an auxiliary. The specific position of the ship in the original image is located by the gradient-weighted class activation mapping. The target-level image as the second-level image is obtained by threshold processing the class activation map. The third-level image is the midship position image extracted from the target image. Then we add self-calibrated convolutions to the feature extraction network to enrich the output features. Finally, the class imbalance is solved by reweighting the class-balanced loss function. Experimental results show that we can achieve accuracies of 92.81%, 93.54% and 93.97%, respectively, after applying the proposed method on different datasets. Compared with other classification methods, this method has a higher accuracy in optical aerospace remote sensing ship classification. Full article
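The thresholding step that turns a class activation map into a target-level crop can be sketched as follows (a hedged toy version; the paper's actual pipeline uses gradient-weighted class activation mapping on learned features):

```python
import numpy as np

def cam_to_bbox(cam, thresh=0.5):
    """Bounding box of activation-map cells above `thresh` (map assumed
    normalized to [0, 1]), mirroring the threshold step that crops the
    target-level ship image from the original image."""
    mask = cam >= thresh
    if not mask.any():
        return None
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return rows[0], rows[-1], cols[0], cols[-1]  # top, bottom, left, right
```

Cropping the original image to this box gives the second-level image; the midship third-level image is cut from that crop in turn.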

19 pages, 6072 KiB  
Article
A Novel Method of Ship Detection under Cloud Interference for Optical Remote Sensing Images
by Wensheng Wang, Xinbo Zhang, Wu Sun and Min Huang
Remote Sens. 2022, 14(15), 3731; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14153731 - 04 Aug 2022
Cited by 2 | Viewed by 1641
Abstract
In this paper, we propose a novel method developed for detecting incomplete ship targets under cloud interference and low-contrast ship targets in thin fog based on superpixel segmentation, and outline its application to optical remote sensing images. The detection of ship targets often requires the target to be complete, and the overall features of the ship are used for detection and recognition. When the ship target is obscured by clouds, or the contrast between the ship target and the sea-clutter background is low, there may be incomplete targets, which reduce the effectiveness of recognition. Here, we propose a new method combining constant false alarm rate (CFAR) and superpixel segmentation with feature points (SFCFAR) to solve the above problems. Our newly developed SFCFAR utilizes superpixel segmentation to divide large scenes into many small regions, which include target regions and background regions. In remote sensing images, the target occupies a small proportion of pixels in the entire image. In our method, we use superpixel segmentation to divide remote sensing images into meaningful blocks. The target regions are identified using the characteristics of clusters of ship texture features and the texture differences between the target and background regions. This step not only detects the ship target quickly, but also detects ships with low contrast and under cloud cover. In optical remote sensing, ships at sea under thin clouds are not common in practice, and the sample size generated is relatively small, so the problem is ill-suited to deep learning algorithms that require large training sets, while the SFCFAR algorithm does not require data training to complete the detection task. Experiments show that the proposed SFCFAR algorithm enhances the detection of obscured ship targets under clouds and low-contrast targets in thin fog, compared with both traditional target detection methods and deep learning algorithms, further complementing existing ship detection methods.
(This article belongs to the Special Issue Computer Vision and Machine Learning Application on Earth Observation)

38 pages, 12580 KiB  
Article
A Machine Learning Strategy Based on Kittler’s Taxonomy to Detect Anomalies and Recognize Contexts Applied to Monitor Water Bodies in Environments
by Maurício Araújo Dias, Giovanna Carreira Marinho, Rogério Galante Negri, Wallace Casaca, Ignácio Bravo Muñoz and Danilo Medeiros Eler
Remote Sens. 2022, 14(9), 2222; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14092222 - 06 May 2022
Cited by 3 | Viewed by 1749
Abstract
Environmental monitoring, such as the analysis of water bodies to detect anomalies, is recognized worldwide as a task necessary to reduce the impacts arising from pollution. However, the large amount of data to be analyzed in different contexts, such as an image time series acquired by satellites, still poses challenges for anomaly detection, even when using computers. This study describes a machine learning strategy based on Kittler’s taxonomy to detect anomalies related to water pollution in an image time series. We propose this strategy to monitor environments, detecting unexpected conditions that may occur (i.e., detecting outliers) and identifying those outliers in accordance with Kittler’s taxonomy (i.e., detecting anomalies). According to our strategy, contextual and non-contextual image classifications are semi-automatically compared to find any divergence that indicates the presence of one of the anomaly types defined by the taxonomy. In our strategy, models built to classify a single image are used to classify an image time series by means of domain adaptation. Our strategy achieved 99.07% accuracy, 99.99% precision, 99.07% recall, and a 99.53% F-measure. These results suggest that our strategy allows computers to recognize contexts and enhances their capability to solve contextualized problems. Therefore, our strategy can be used to guide computational systems to make different decisions to solve a problem in response to each context. The proposed strategy is relevant for improving machine learning, as its use gives computers a more organized learning process. Our strategy is presented with respect to its applicability to help monitor environmental disasters. A minor limitation was found in the results, caused by the use of domain adaptation. This type of limitation is fairly common when using domain adaptation and is therefore not a significant concern. Even so, future work should investigate other techniques for transfer learning.
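The core comparison step — flagging pixels where a contextual and a non-contextual classification diverge, then scoring the flags against known anomalies — can be sketched as follows. The label maps, the anomaly ground truth, and the metric computation are toy assumptions for illustration, not the authors' data or full pipeline.

```python
import numpy as np

# hypothetical per-pixel class maps from a contextual and a
# non-contextual classifier over the same scene (0 = water, 1 = land)
contextual     = np.array([0, 0, 0, 1, 1, 0, 0, 1])
non_contextual = np.array([0, 0, 1, 1, 1, 0, 1, 1])

# divergence between the two classifications signals a candidate anomaly
divergence = contextual != non_contextual
anomaly_truth = np.array([0, 0, 1, 0, 0, 0, 1, 0], dtype=bool)

tp = np.sum(divergence & anomaly_truth)     # true detections
fp = np.sum(divergence & ~anomaly_truth)    # false alarms
fn = np.sum(~divergence & anomaly_truth)    # missed anomalies
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f_measure = 2 * precision * recall / (precision + recall)
print(precision, recall, f_measure)
```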

29 pages, 24138 KiB  
Article
Deep Learning Based Electric Pylon Detection in Remote Sensing Images
by Sijia Qiao, Yu Sun and Haopeng Zhang
Remote Sens. 2020, 12(11), 1857; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12111857 - 08 Jun 2020
Cited by 9 | Viewed by 4226
Abstract
The working condition of the power network can significantly influence urban development. Among all power facilities, electric pylons have an important effect on the normal operation of the electricity supply. Therefore, the working status of electric pylons requires continuous and real-time monitoring. Considering the low efficiency of manual inspection, in this paper we propose to utilize deep learning methods for electric pylon detection in high-resolution remote sensing images. To verify the effectiveness of deep-learning-based electric pylon detection, we tested and compared the comprehensive performance of 10 state-of-the-art deep-learning-based detectors with different characteristics. Extensive experiments were carried out on a self-built dataset containing 1500 images. Moreover, 50 relatively complicated images were selected from the dataset to test and evaluate the adaptability to actual complex situations and resolution variations. Experimental results show the feasibility of applying deep learning methods to electric pylon detection. The comparative analysis can serve as a reference for selecting a specific deep learning model for actual electric pylon detection tasks.
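Detector comparisons of this kind typically match predicted boxes to ground-truth boxes by intersection over union (IoU). The minimal sketch below uses the common `(x1, y1, x2, y2)` box convention and the customary 0.5 matching threshold; neither is taken from the paper, whose exact evaluation protocol is not given in the abstract.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# a prediction usually counts as a true positive if IoU >= 0.5
pred = (10, 10, 30, 30)
gt = (12, 12, 32, 32)
print(box_iou(pred, gt))
```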

31 pages, 8300 KiB  
Article
Infrared Small Target Detection via Non-Convex Tensor Rank Surrogate Joint Local Contrast Energy
by Xuewei Guan, Landan Zhang, Suqi Huang and Zhenming Peng
Remote Sens. 2020, 12(9), 1520; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12091520 - 09 May 2020
Cited by 55 | Viewed by 3892
Abstract
Small target detection is a crucial technique that restricts the performance of many infrared imaging systems. In this paper, a novel infrared small target detection model via non-convex tensor rank surrogate joint local contrast energy (NTRS) is proposed. To improve the latest infrared patch-tensor (IPT) model, a non-convex tensor rank surrogate merging the tensor nuclear norm (TNN) and the Laplace function is utilized as the low-rank background patch-tensor constraint; it has the useful property of adaptively allocating a weight to every singular value and can better approximate the ℓ0-norm. Considering that the local prior map can be treated as a saliency map, we introduce a local contrast energy feature into the IPT detection framework to weight the target tensor, which can efficiently suppress the background and preserve the target simultaneously. In addition, to remove structured edges more thoroughly, we add a structured sparse regularization term using the ℓ1,1,2-norm of a third-order tensor. To solve the proposed model, an efficient optimization scheme based on the alternating direction method of multipliers, with fast computation of the tensor singular value decomposition, is designed. Finally, an adaptive threshold is utilized to extract real targets from the reconstructed target image. A series of experimental results shows that the proposed method has robust detection performance and outperforms other advanced methods.
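The adaptive-weighting property claimed for the Laplace surrogate — large (structural) singular values are barely shrunk while small (noise-like) ones are shrunk hard — can be illustrated on an ordinary matrix. This is a plain-SVD sketch, not the paper's tensor (t-SVD) formulation, and the `tau`/`eps` values are illustrative assumptions.

```python
import numpy as np

def laplace_weighted_svt(M, tau=0.5, eps=0.5):
    """One weighted singular-value shrinkage step.  The Laplace surrogate
    phi(s) = 1 - exp(-s/eps) induces weights w_i = exp(-s_i/eps)/eps, so
    small singular values are shrunk strongly while large ones are left
    nearly untouched."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = np.exp(-s / eps) / eps
    s_shrunk = np.maximum(s - tau * w, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(1)
low_rank = 5.0 * np.outer(rng.normal(size=20), rng.normal(size=20))
noisy = low_rank + rng.normal(scale=0.02, size=(20, 20))
recovered = laplace_weighted_svt(noisy)
```

On this toy example, the noise singular values fall below the shrinkage threshold and are zeroed, while the single large structural value passes through almost unchanged.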

14 pages, 1624 KiB  
Article
Multi-Label Remote Sensing Image Classification with Latent Semantic Dependencies
by Junchao Ji, Weipeng Jing, Guangsheng Chen, Jingbo Lin and Houbing Song
Remote Sens. 2020, 12(7), 1110; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12071110 - 31 Mar 2020
Cited by 12 | Viewed by 3834
Abstract
Deforestation in the Amazon rainforest results in reduced biodiversity, habitat loss, climate change, and other destructive impacts. Hence, obtaining location information on human activities is essential for scientists and governments working to protect the Amazon rainforest. We propose a novel remote sensing image classification framework that provides the key data needed to more effectively manage deforestation and its consequences. We introduce an attention module to separate, by channel, the features extracted by a CNN (Convolutional Neural Network), and then send the separated features to an LSTM (Long Short-Term Memory) network to predict labels sequentially. Moreover, we propose a loss function that calculates the co-occurrence matrix of all labels in the dataset and assigns different weights to each label. Experimental results on the satellite image dataset of the Amazon rainforest show that our model obtains a better F2 score compared to other methods, which indicates that our model is effective in utilizing label dependencies to improve the performance of multi-label image classification.
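The co-occurrence statistics underlying such a loss can be sketched as follows. The label matrix is hypothetical, and the inverse-frequency weighting is an illustrative assumption — the abstract does not specify the paper's exact weight formula.

```python
import numpy as np

# hypothetical multi-hot label matrix: 5 images x 4 labels
Y = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]])

cooc = Y.T @ Y                  # cooc[i, j]: images containing labels i and j
freq = np.diag(cooc)            # per-label frequency on the diagonal
weights = freq.sum() / (len(freq) * freq)   # inverse-frequency label weights
print(cooc[0, 1], weights)
```

Rare labels receive larger weights, so the loss penalizes their misclassification more heavily.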

18 pages, 5257 KiB  
Article
Comparison of the Remapping Algorithms for the Advanced Technology Microwave Sounder (ATMS)
by Jun Zhou and Hu Yang
Remote Sens. 2020, 12(4), 672; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12040672 - 18 Feb 2020
Cited by 12 | Viewed by 2791
Abstract
One of the limitations in using spaceborne microwave radiometer data for atmospheric remote sensing is the nonuniform spatial resolution. Remapping algorithms can be applied to the data to ameliorate this limitation. In this paper, two remapping algorithms widely used in the operational data preprocessing of the Advanced Technology Microwave Sounder (ATMS), the Backus–Gilbert inversion (BGI) technique and the filter algorithm (AFA), are investigated. The algorithms are compared using simulations and actual ATMS data. Results show that both algorithms can effectively enhance or degrade the resolution of the data. The BGI has a higher remapping accuracy than the AFA. It outperforms the AFA by producing less bias around coastlines and hurricane centers, where the signal changes sharply, and it shows no obvious bias around the scan ends, where the AFA has a noticeable positive bias in the resolution-enhanced image. However, the BGI achieves the resolution enhancement at the expense of increasing the noise by 0.5 K. The use of the antenna pattern instead of the point spread function in the AFA causes the persistent bias found in the AFA-remapped image, leading not only to an inaccurate antenna temperature expression but also to the neglect of the geometric deformation of the along-scan fields of view.
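A Backus–Gilbert-style weight solve can be sketched as a noise-regularized least-squares fit of neighbouring antenna patterns to a target pattern under a unit-gain constraint. The 1-D Gaussian footprints and the `gamma` parameter below are illustrative assumptions; the operational BGI works with 2-D antenna patterns and a more elaborate noise trade-off.

```python
import numpy as np

def bgi_weights(F, f_target, gamma=1e-3):
    """Backus-Gilbert-style remapping weights: combine neighbouring
    antenna patterns (rows of F) so their weighted sum matches a target
    pattern, under the unit-gain constraint sum(a) = 1.  gamma trades
    pattern fit against noise amplification."""
    n = F.shape[0]
    G = F @ F.T + gamma * np.eye(n)
    gv = np.linalg.solve(G, F @ f_target)
    g1 = np.linalg.solve(G, np.ones(n))
    lam = (1.0 - np.sum(gv)) / np.sum(g1)   # Lagrange multiplier for the constraint
    return gv + lam * g1

# toy 1-D scene: nine broad Gaussian footprints, one narrower target pattern
x = np.linspace(-5, 5, 201)
centers = np.linspace(-2, 2, 9)
F = np.exp(-0.5 * ((x - centers[:, None]) / 1.5) ** 2)
f_target = np.exp(-0.5 * (x / 0.8) ** 2)
a = bgi_weights(F, f_target)
print(a.sum())   # exactly 1 up to round-off
```

The synthesized pattern `F.T @ a` fits the narrower target better than any single broad footprint, which is the sense in which remapping enhances resolution.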

18 pages, 6450 KiB  
Article
Remote Sensing Image Ship Detection under Complex Sea Conditions Based on Deep Semantic Segmentation
by Yantong Chen, Yuyang Li, Junsheng Wang, Weinan Chen and Xianzhong Zhang
Remote Sens. 2020, 12(4), 625; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12040625 - 13 Feb 2020
Cited by 15 | Viewed by 3153
Abstract
Under complex sea conditions, ship detection from remote sensing images is easily affected by sea clutter, thin clouds, and islands, resulting in unreliable detection results. In this paper, an end-to-end method is introduced that combines a deep convolutional neural network with a fully connected conditional random field. Based on the ResNet architecture, a deep convolutional neural network first produces a rough segmentation of the input remote sensing image. Using the Gaussian pairwise potential method and the mean field approximation theorem, the conditional random field is then formulated as a recurrent neural network, thus achieving an end-to-end connection. We compared the proposed method with other state-of-the-art methods on a dataset built from Google Earth and NWPU-RESISC45. Experiments show that the proposed method improves target detection accuracy and the ability to capture fine image details. The mean intersection over union reaches 83.2%, an obvious advantage over the other models. The proposed method is also fast enough to meet the needs of ship detection in remote sensing images.
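The mean intersection over union reported for segmentation results is conventionally computed per class and then averaged; a minimal sketch on a toy label map (the arrays and class count are illustrative, not the paper's data):

```python
import numpy as np

def mean_iou(pred, truth, n_classes):
    """Mean intersection-over-union across classes for integer label maps."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (truth == c))
        union = np.sum((pred == c) | (truth == c))
        if union:
            ious.append(inter / union)
    return np.mean(ious)

truth = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1]])
pred = np.array([[0, 0, 1, 0],
                 [0, 0, 1, 1]])
print(mean_iou(pred, truth, 2))
```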

25 pages, 19135 KiB  
Article
Cirrus Detection Based on RPCA and Fractal Dictionary Learning in Infrared Imagery
by Yuxiao Lyu, Lingbing Peng, Tian Pu, Chunping Yang, Jun Wang and Zhenming Peng
Remote Sens. 2020, 12(1), 142; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12010142 - 01 Jan 2020
Cited by 18 | Viewed by 2827
Abstract
In Earth observation systems, especially for the detection of small and weak targets, the detection and recognition of long-distance infrared targets play a vital role in the military and civil fields. However, there are a large number of high-radiation areas on the Earth’s surface, among which cirrus clouds, as high-radiation areas or abnormal objects, will interfere with military early warning systems. In order to improve the performance of such systems and the accuracy of small target detection, the method proposed in this paper uses the suppression of cirrus clouds as an auxiliary means of small target detection. An infrared image was modeled and decomposed into sparse parts, such as cirrus clouds, noise, and clutter, and a low-rank background part. In order to describe cirrus clouds more accurately, robust principal component analysis (RPCA) was used to obtain the sparse components of the cirrus clouds, and only the sparse components of the infrared image were studied. The texture of cirrus clouds was found to have fractal characteristics, and a random-fractal-based dictionary of infrared image signal components was constructed. The k-cluster singular value decomposition (KSVD) algorithm was used to train a sparse representation of the sparse components to detect cirrus clouds. Simulation tests showed that the algorithm proposed in this paper performed better on the receiver operating characteristic (ROC) and Precision–Recall (PR) curves, achieved higher precision at the same recall, and obtained F-measure and Intersection-over-Union (IOU) values greater than those of other algorithms, which shows that it has a better detection effect.
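The RPCA decomposition step can be sketched with a minimal inexact augmented-Lagrangian solver. This is standard RPCA, not the paper's full fractal-dictionary pipeline; the `lam` default and the `mu` schedule follow common conventions, and the toy data are illustrative.

```python
import numpy as np

def rpca(D, lam=None, iters=100):
    """Minimal RPCA sketch via an inexact augmented Lagrangian: split D
    into a low-rank part L (background) and a sparse part S (cirrus,
    targets, outliers)."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(D).sum())
    soft = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    Y = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft(s, 1.0 / mu)) @ Vt   # singular-value threshold
        S = soft(D - L + Y / mu, lam / mu)        # sparse shrinkage
        Y = Y + mu * (D - L - S)                  # dual ascent
        mu = min(mu * 1.2, 1e7)                   # gradually tighten
    return L, S

rng = np.random.default_rng(2)
L0 = np.outer(rng.normal(size=30), rng.normal(size=30))   # rank-1 background
S0 = np.zeros((30, 30))
S0[5, 5], S0[20, 8] = 10.0, -8.0                          # two sparse "targets"
L, S = rpca(L0 + S0)
```

On this easy rank-1-plus-two-spikes instance, the sparse component S recovers the two outliers while L recovers the smooth background.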

Review


43 pages, 7184 KiB  
Review
A Review of the Challenges of Using Deep Learning Algorithms to Support Decision-Making in Agricultural Activities
by Khadijeh Alibabaei, Pedro D. Gaspar, Tânia M. Lima, Rebeca M. Campos, Inês Girão, Jorge Monteiro and Carlos M. Lopes
Remote Sens. 2022, 14(3), 638; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14030638 - 28 Jan 2022
Cited by 28 | Viewed by 5572
Abstract
Deep learning has been successfully applied to image recognition, speech recognition, and natural language processing in recent years, so there has been an incentive to apply it in other fields as well. Agriculture is one of the most important fields in which the application of deep learning still needs to be explored, as it has a direct impact on human well-being. In particular, there is a need to explore how deep learning models can be used as tools for optimal planting, land use, yield improvement, production/disease/pest control, and other activities. The vast amount of data received from sensors in smart farms makes it possible to use deep learning as a model for decision-making in this field. In agriculture, no two environments are exactly alike, which makes testing, validating, and successfully implementing such technologies much more complex than in most other industries. This paper reviews some recent scientific developments in the field of deep learning that have been applied to agriculture, and highlights some challenges and potential solutions for using deep learning algorithms in agriculture. The results indicate that by employing new deep learning methods, higher accuracy and lower inference time can be achieved, and the models can be made useful in real-world applications. Finally, some opportunities for future research in this area are suggested.
