

Artificial Intelligence and Remote Sensing Datasets

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (1 February 2022) | Viewed by 23126

Special Issue Editors


Guest Editor
Department of Civil Engineering, National Chung Hsing University, 250 Kuokuang Rd., Taichung 402, Taiwan
Interests: remote sensing; image processing; AI; UAVs; environmental monitoring; disaster damage assessment

Guest Editor
Department of Civil Engineering, National Chung Hsing University, Taichung, Taiwan
Interests: remote sensing; GIS; UAV photogrammetry; image processing; water vapor

Guest Editor
National Space Organisation, Hsinchu, Taiwan
Interests: image processing; photogrammetry; earth observation

Special Issue Information

Dear Colleagues,

Nowadays, AI (artificial intelligence) is developing rapidly and is being applied across a variety of remote sensing areas. Among the various AI models, supervised and semi-supervised learning techniques are the most widely adopted, and they require large amounts of training data, especially for deep learning approaches. Training data, usually labeled data used to train AI models or machine learning algorithms to make proper inferences, is paramount to the success of an AI model or project. Labeled data is a set of samples that have been tagged with one or more labels. However, labeling typically takes great effort: experts must make judgments about a given set of unlabeled data and annotate it with informative tags. Accordingly, data labeled through a manual or semi-automatic process is significantly more expensive than raw unlabeled data. A proper data-sharing mechanism is therefore the keystone that allows the AI remote sensing community to establish various AI models and algorithms.

The purpose of this Special Issue is to provide a platform for training data sharing by making labeled and unlabeled data findable and accessible through domain-specific repositories. All kinds of remote sensing data are welcome, such as images, videos, and sensor data. Articles should take the form of data descriptors containing a description of a dataset, including the methods used to collect or produce the data, where the dataset may be found, and how to use the data, with usage information or a showcase.

Prof. Dr. Ming-Der Yang
Dr. Huiping Tsai
Dr. Ming-Chih Cheng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • training data
  • label
  • data sharing

Published Papers (5 papers)


Research

Jump to: Other

17 pages, 7002 KiB  
Article
Accessing the Impact of Meteorological Variables on Machine Learning Flood Susceptibility Mapping
by Heather McGrath and Piper Nora Gohl
Remote Sens. 2022, 14(7), 1656; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14071656 - 30 Mar 2022
Cited by 7 | Viewed by 3011
Abstract
Machine learning (ML) algorithms have emerged as competent tools for identifying areas that are susceptible to flooding. The primary variables considered in most of these works include terrain models, lithology, river networks and land use. While several recent studies include average annual rainfall and/or temperature, other meteorological information such as snow accumulation and short-term intense rain events that may influence the hydrology of the area under investigation have not been considered. Notably, in Canada, most inland flooding occurs during the freshet, due to the melting of an accumulated snowpack coupled with heavy rainfall. Therefore, in this study the impact of several climate variables along with various hydro-geomorphological (HG) variables was tested to determine the effect of their inclusion. Three tests were run on five study areas across Canada: HG variables only, the addition of annual average temperature and precipitation (HG-PT), and the inclusion of six other meteorological datasets (HG-8M). In HG-PT, both precipitation and temperature were selected as important in every study area, while in HG-8M a minimum of three meteorological datasets were considered important in each study area. Notably, as the meteorological variables were added, many of the initial HG variables were dropped from the selection set. The accuracy, F1, true skill statistic and Area Under the Curve (AUC) were marginally improved when the meteorological data were added to a parallel random forest algorithm (parRF). When the model is applied to new data, the estimated accuracy of the prediction is higher in HG-8M, indicating that inclusion of relevant, local meteorological datasets improves the result. Full article
(This article belongs to the Special Issue Artificial Intelligence and Remote Sensing Datasets)
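The feature-set comparison described in the abstract (HG only vs. HG plus meteorological variables, scored by AUC) can be sketched as follows. This is a minimal illustration on synthetic data, assuming scikit-learn is available; the feature names and data are hypothetical, not the authors' actual pipeline.

```python
# Sketch: compare a random forest trained on hydro-geomorphological (HG)
# features alone vs. HG plus meteorological features, in the spirit of the
# HG / HG-8M experiments. Data here is synthetic; names are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
hg = rng.normal(size=(n, 4))      # e.g. slope, elevation, river distance, land use
meteo = rng.normal(size=(n, 6))   # e.g. snow accumulation, intense-rain events
# The synthetic flood label depends on both groups, so adding meteo should help.
logit = hg[:, 0] + 0.8 * meteo[:, 0] + 0.5 * meteo[:, 1]
y = (logit + rng.normal(scale=0.5, size=n) > 0).astype(int)

def auc_for(features):
    """Train a parallel random forest (n_jobs=-1) and return held-out AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
    rf.fit(X_tr, y_tr)
    return roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])

auc_hg = auc_for(hg)
auc_hg8m = auc_for(np.hstack([hg, meteo]))
print(f"HG only AUC:  {auc_hg:.3f}")
print(f"HG+meteo AUC: {auc_hg8m:.3f}")
```

On this synthetic construction the HG+meteo model should score higher, mirroring the paper's finding that relevant meteorological inputs improve the result.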

21 pages, 7680 KiB  
Article
UnityShip: A Large-Scale Synthetic Dataset for Ship Recognition in Aerial Images
by Boyong He, Xianjiang Li, Bo Huang, Enhui Gu, Weijie Guo and Liaoni Wu
Remote Sens. 2021, 13(24), 4999; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13244999 - 09 Dec 2021
Cited by 9 | Viewed by 4240
Abstract
As a data-driven approach, deep learning requires a large amount of annotated data for training to obtain a sufficiently accurate and generalized model, especially in the field of computer vision. However, when compared with generic object recognition datasets, aerial image datasets are more challenging to acquire and more expensive to label. Obtaining a large amount of high-quality aerial image data for object recognition and image understanding is an urgent problem. Existing studies show that synthetic data can effectively reduce the amount of training data required. Therefore, in this paper, we propose the first synthetic aerial image dataset for ship recognition, called UnityShip. This dataset contains over 100,000 synthetic images and 194,054 ship instances, including 79 different ship models in ten categories and six different large virtual scenes with different time periods, weather environments, and altitudes. The annotations include environmental information, instance-level horizontal bounding boxes, oriented bounding boxes, and the type and ID of each ship. This provides the basis for object detection, oriented object detection, fine-grained recognition, and scene recognition. To investigate the applications of UnityShip, the synthetic data were validated for model pre-training and data augmentation using three different object detection algorithms and six existing real-world ship detection datasets. Our experimental results show that for small-sized and medium-sized real-world datasets, the synthetic data achieve an improvement in model pre-training and data augmentation, showing the value and potential of synthetic data in aerial image recognition and understanding tasks. Full article
(This article belongs to the Special Issue Artificial Intelligence and Remote Sensing Datasets)

Other

Jump to: Research

20 pages, 6842 KiB  
Technical Note
Spatial Negative Co-Location Pattern Directional Mining Algorithm with Join-Based Prevalence
by Guoqing Zhou, Zhenyu Wang and Qi Li
Remote Sens. 2022, 14(9), 2103; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14092103 - 27 Apr 2022
Cited by 9 | Viewed by 2690
Abstract
It is usually difficult to mine and calculate prevalent negative co-location patterns. This paper proposes a join-based prevalent negative co-location mining algorithm that can quickly and effectively mine all the prevalent negative co-location patterns in spatial data. Firstly, this paper verifies the monotonically non-decreasing property of the negative co-location participation index (PI) value as the size increases. Secondly, using this property, it is deduced that any prevalent negative co-location pattern of size n can be generated by connecting a prevalent co-location pattern of size 2 with an (n − 1)-size candidate negative co-location pattern or an (n − 1)-size prevalent positive co-location pattern. Finally, the experimental results demonstrate that, with other conditions fixed, the proposed algorithm is highly efficient: it eliminates up to 90% of the useless negative co-location patterns, and 40% on average. Full article
(This article belongs to the Special Issue Artificial Intelligence and Remote Sensing Datasets)
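For readers unfamiliar with the participation index (PI) central to the abstract above, here is a minimal sketch for a size-2 pattern. The point data and distance threshold are illustrative assumptions; this is the textbook definition of PI, not the paper's join-based algorithm.

```python
# Sketch: participation index (PI) of a size-2 co-location pattern {A, B}.
# Two instances are neighbors if their Euclidean distance is <= d. The
# participation ratio of a feature is the fraction of its instances that
# take part in at least one pattern instance; PI is the minimum ratio.
import math

def participation_index(a_pts, b_pts, d):
    a_in, b_in = set(), set()
    for i, (ax, ay) in enumerate(a_pts):
        for j, (bx, by) in enumerate(b_pts):
            if math.hypot(ax - bx, ay - by) <= d:
                a_in.add(i)   # this A instance participates
                b_in.add(j)   # this B instance participates
    pr_a = len(a_in) / len(a_pts)
    pr_b = len(b_in) / len(b_pts)
    return min(pr_a, pr_b)

A = [(0.0, 0.0), (10.0, 10.0)]
B = [(0.0, 1.0), (20.0, 20.0)]
pi = participation_index(A, B, d=2.0)
print(pi)  # 0.5: one of two A instances and one of two B instances participate
```

A pattern is prevalent (positive) when its PI meets a user threshold; negative patterns are those whose PI falls below it, which is what makes exhaustive mining expensive.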

11 pages, 2298 KiB  
Data Descriptor
iVS Dataset and ezLabel: A Dataset and a Data Annotation Tool for Deep Learning Based ADAS Applications
by Yu-Shu Ni, Vinay M. Shivanna and Jiun-In Guo
Remote Sens. 2022, 14(4), 833; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14040833 - 10 Feb 2022
Cited by 3 | Viewed by 2054
Abstract
To overcome the limitations of standard datasets, which lack data at the wide variety of scales and conditions necessary to train neural networks to yield efficient results in ADAS applications, this paper presents a self-built, open-to-free-use ‘iVS dataset’ and a data annotation tool entitled ‘ezLabel’. The iVS dataset comprises various objects at different scales as seen in and around real driving environments. The data in the iVS dataset were collected by employing a camcorder in vehicles driving under different conditions, e.g., light, weather and traffic, and driving scenarios ranging from city traffic during peak and normal hours to freeway traffic during busy and normal conditions. Thus, the collected data are wide-ranging and capture all possible objects at various scales appearing in real driving situations. The data collected to build the dataset have to be annotated before use in training the CNNs, so this paper also presents an open-to-free-use data annotation tool, ezLabel, for data annotation purposes. Full article
(This article belongs to the Special Issue Artificial Intelligence and Remote Sensing Datasets)

17 pages, 11337 KiB  
Data Descriptor
A UAV Open Dataset of Rice Paddies for Deep Learning Practice
by Ming-Der Yang, Hsin-Hung Tseng, Yu-Chun Hsu, Chin-Ying Yang, Ming-Hsin Lai and Dong-Hong Wu
Remote Sens. 2021, 13(7), 1358; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13071358 - 01 Apr 2021
Cited by 22 | Viewed by 7725
Abstract
Recently, unmanned aerial vehicles (UAVs) have been broadly applied in the remote sensing field. With the great number of UAV images now available, deep learning has been reinvigorated and has produced many results in agricultural applications. The popular image datasets for deep learning model training are generated for general-purpose use, in which the objects, views, and applications reflect ordinary scenarios. However, UAV images exhibit different patterns, mostly from a look-down perspective. This paper provides a verified annotated dataset of UAV images, described in terms of data acquisition, data preprocessing, and a showcase of a CNN classification. The dataset was collected with one multi-rotor UAV platform flying a planned scouting route over rice paddies. This paper introduces a semi-automatic annotation method with the ExGR index to generate the training data of rice seedlings. For demonstration, this study modified a classical CNN architecture, VGG-16, to run patch-based rice seedling detection. K-fold cross-validation was employed to obtain an 80/20 division of training/test data. The accuracy of the network increases with the number of epochs, and all the divisions of the cross-validation dataset achieve 0.99 accuracy. The rice seedling dataset provides the training-validation dataset, patch-based detection samples, and the ortho-mosaic image of the field. Full article
(This article belongs to the Special Issue Artificial Intelligence and Remote Sensing Datasets)
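The ExGR index used in the abstract above is commonly defined as excess green minus excess red over normalized chromatic coordinates (ExG = 2g − r − b, ExR = 1.4r − g). A minimal sketch follows; the zero threshold and array layout are illustrative assumptions, not the paper's exact annotation procedure.

```python
# Sketch: excess green minus excess red (ExGR) vegetation index for an RGB
# image, with a zero threshold to separate rice seedlings from background.
import numpy as np

def exgr_mask(rgb):
    """rgb: float array of shape (H, W, 3) with values in [0, 255]."""
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                        # avoid division by zero
    r, g, b = np.moveaxis(rgb / s, -1, 0)  # normalized chromatic coordinates
    exg = 2.0 * g - r - b                  # excess green
    exr = 1.4 * r - g                      # excess red
    exgr = exg - exr
    return exgr > 0                        # True where vegetation is likely

# One green (vegetation) pixel and one brownish (soil) pixel:
img = np.array([[[50.0, 200.0, 50.0], [150.0, 100.0, 60.0]]])
print(exgr_mask(img))  # [[ True False]]
```

Thresholding ExGR at zero is a standard way to get an initial vegetation mask, which can then be refined manually, matching the semi-automatic labeling workflow the abstract describes.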
