Artificial Intelligence for 3D Big Spatial Data Processing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (31 August 2020) | Viewed by 10,998

Special Issue Editors


Guest Editor
Data Platform Research Team, Artificial Intelligence Research Center (AIRC), National Institute of Advanced Industrial Science and Technology (AIST), Koto-ku, Tokyo 135-0064, Japan
Interests: geo-enabled computing frameworks based on GIS; location-based services; spatiotemporal databases; big data analysis; cyber-physical cloud computing

Guest Editor
Artificial Intelligence Research Center (AIRC), National Institute of Advanced Industrial Science and Technology (AIST), Koto-ku, Tokyo 135-0064, Japan
Interests: point cloud data management, processing, and analysis; spatial data processing, big data management, processing, and mining; real-time data processing (stream data processing); transaction processing; data warehousing and data mining; online analytical processing (OLAP); parallel and distributed data processing; statistical modeling; uncertain data processing

Special Issue Information

Dear Colleagues,

The Second International Workshop on Artificial Intelligence for 3D Big Spatial Data Processing (AI3D 2019, https://sites.google.com/view/ai3d2019/home), co-located with IEEE ISM 2019, will be held from 9 to 11 December 2019 in San Diego, California, USA. AI3D 2019 is intended to provide a common forum for researchers, scientists, engineers, and practitioners throughout the world to present their latest research findings, developments, and applications in the area of artificial intelligence for 3D big spatial data processing.

Recent advances in laser technology have resulted in the generation of massive amounts of 3D spatial data. These data offer a useful source of information for natural resource management, urban planning, autonomous driving, and many other applications. Artificial intelligence (AI) has the potential to drive hyper-growth in 3D spatial data processing. Today, 3D indoor mapping systems, terrestrial mobile mapping systems (MMS), and airborne LiDAR systems can collect terabytes of data (including images and point clouds) in a single scan or trip. Existing approaches and tools, however, cannot efficiently manage, process, and analyze such 3D spatial data. A promising solution lies in using AI and deep learning to redefine the workflow. This workshop focuses on the use of AI and deep learning to improve the processing, management, and analysis of 3D big spatial data, and aims to provide a platform for researchers working in this direction to share their work, exchange ideas, and solve research problems.

To enable researchers to publish extended versions of their work in a journal in a timely manner, we have planned a Special Issue for the AI3D 2019 workshop. The Special Issue will contain a selection of papers submitted to, accepted at, and presented at AI3D 2019. We warmly invite researchers to submit their contributions to this Special Issue. Topics of interest include, but are not limited to:

  • AI and the 3D scanning technologies and devices;
  • Use of AI in 3D view registration and surface modeling;
  • Intelligent LiDAR data processing, management, and analysis;
  • Point cloud data processing and analysis;
  • 3D modeling and depth image processing;
  • Spatiotemporal data processing and analysis;
  • Geospatial data processing and analysis;
  • Distributed, parallel, and peer-to-peer approaches to index, search, and process 3D big spatial data;
  • AI-based object detection from point cloud and images;
  • Supervised/unsupervised object annotation in 3D data;
  • Point cloud data indexing and querying;
  • 3D big spatial data architectures;
  • 3D big spatial data visualization and analytics;
  • 3D big spatial data cleaning, compression, and integration;
  • Geographic information retrieval;
  • Indoor and outdoor mapping;
  • 3D mapping;
  • Urban planning;
  • Spatial data applications;
  • User interfaces and visualization.

Prof. Dr. Sisi Zlatanova
Dr. Kyoung-Sook Kim
Dr. Salman Ahmed Shaikh
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

16 pages, 5072 KiB  
Article
Fusion of Hyperspectral CASI and Airborne LiDAR Data for Ground Object Classification through Residual Network
by Zhanyuan Chang, Huiling Yu, Yizhuo Zhang and Keqi Wang
Sensors 2020, 20(14), 3961; https://doi.org/10.3390/s20143961 - 16 Jul 2020
Cited by 5 | Viewed by 2468
Abstract
Modern satellite and aerial imagery exhibits increasingly complex types of ground objects as land resources continuously develop and change. A single remote-sensing modality is not sufficient for the accurate and satisfactory extraction and classification of ground objects. Hyperspectral imaging has been widely used in the classification of ground objects because of its high resolution, multiple bands, and abundant spatial and spectral information. Moreover, airborne light detection and ranging (LiDAR) point-cloud data contain unique high-precision three-dimensional (3D) spatial information, which can enrich ground object classifiers with height features that hyperspectral images lack. Therefore, the fusion of hyperspectral image data with airborne LiDAR point-cloud data is an effective approach for ground object classification. In this paper, the effectiveness of such a fusion scheme is investigated and confirmed on an observation area in the middle reaches of the Heihe River in China. By combining the characteristics of hyperspectral compact airborne spectrographic imager (CASI) data and airborne LiDAR data, we extracted a variety of features for data fusion and ground object classification. First, we used the minimum noise fraction transform to reduce the dimensionality of the hyperspectral CASI images. Then, spatio-spectral and textural features of these images were extracted based on the normalized vegetation index and gray-level co-occurrence matrices. Further, canopy height features were extracted from the airborne LiDAR data. Finally, a hierarchical fusion scheme was applied to the hyperspectral CASI and airborne LiDAR features, and the fused features were used to train a residual network for high-accuracy ground object classification. The experimental results showed that the proposed hierarchical-fusion multiscale dilated residual network (M-DRN) reached an overall classification accuracy of 97.89%, which is 10.13% and 5.68% higher than those of the convolutional neural network (CNN) and the dilated residual network (DRN), respectively. Spatio-spectral and textural features of hyperspectral CASI images can complement the canopy height features of airborne LiDAR data; together, these complementary features provide richer and more accurate information for ground object classification than features based on a single remote-sensing modality.
(This article belongs to the Special Issue Artificial Intelligence for 3D Big Spatial Data Processing)
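As a rough illustration of the fusion strategy described in the abstract, the following PyTorch sketch stacks hyperspectral (MNF), textural (GLCM), and LiDAR canopy-height feature maps along the channel axis and feeds them to a small dilated residual network. All layer widths, dilation rates, band counts, and the class count are illustrative assumptions; this is a minimal sketch, not the authors' M-DRN implementation.

```python
# Minimal sketch: feature-level fusion of hyperspectral, textural, and
# LiDAR-derived channels into a small dilated residual network.
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # residual (skip) connection

class FusionClassifier(nn.Module):
    def __init__(self, n_spectral, n_texture, n_height, n_classes):
        super().__init__()
        in_ch = n_spectral + n_texture + n_height  # early (feature-level) fusion
        self.stem = nn.Conv2d(in_ch, 64, 3, padding=1)
        # multiscale context via increasing dilation rates
        self.blocks = nn.Sequential(
            DilatedResidualBlock(64, dilation=1),
            DilatedResidualBlock(64, dilation=2),
            DilatedResidualBlock(64, dilation=4),
        )
        self.head = nn.Conv2d(64, n_classes, 1)  # per-pixel class logits

    def forward(self, spectral, texture, height):
        x = torch.cat([spectral, texture, height], dim=1)  # stack along channels
        return self.head(self.blocks(self.stem(x)))

# toy patch: 10 MNF components, 4 GLCM texture bands, 1 canopy-height band
model = FusionClassifier(10, 4, 1, n_classes=8)
logits = model(torch.randn(1, 10, 64, 64), torch.randn(1, 4, 64, 64), torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 8, 64, 64])
```

The sketch uses simple early fusion for brevity; per the abstract, the paper's actual scheme fuses the feature groups hierarchically.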

20 pages, 5353 KiB  
Article
FWNet: Semantic Segmentation for Full-Waveform LiDAR Data Using Deep Learning
by Takayuki Shinohara, Haoyi Xiu and Masashi Matsuoka
Sensors 2020, 20(12), 3568; https://doi.org/10.3390/s20123568 - 24 Jun 2020
Cited by 15 | Viewed by 4021
Abstract
In the computer vision field, many 3D deep learning models that directly process 3D point clouds (proposed after PointNet) have been published. These deep learning-based techniques have demonstrated state-of-the-art performance on supervised learning tasks for 3D point cloud data, such as the classification and segmentation of open competition datasets. Furthermore, many researchers have attempted to apply these techniques to 3D point clouds observed by aerial laser scanners (ALSs). However, most of these studies addressed 3D point clouds without radiometric information. In this paper, we investigate the possibility of using deep learning to solve the semantic segmentation task for airborne full-waveform light detection and ranging (lidar) data, which consist of geometric information and radiometric waveform data. We propose a data-driven semantic segmentation model called the full-waveform network (FWNet), which handles full-waveform lidar data without any conversion process, such as projection onto a 2D grid or the calculation of handcrafted features. FWNet is based on a PointNet-based architecture, which extracts the local and global features of each input waveform, along with its corresponding geographic coordinates. A classifier consisting of 1D convolutional layers then predicts the class vector for each input waveform from the extracted local and global features. Our trained FWNet achieved higher recall, precision, and F1 scores on unseen test data than previously proposed methods in the full-waveform lidar data analysis domain: a mean recall of 0.73, a mean precision of 0.81, and a mean F1 score of 0.76. We further performed an ablation study assessing the contribution of each component of our method to these metrics, and investigated the effectiveness of our PointNet-based local and global feature extraction by visualizing the feature vectors. In this way, we show that our network enables training for semantic segmentation without requiring expert knowledge of full-waveform lidar data or translation into 2D images or voxels.
(This article belongs to the Special Issue Artificial Intelligence for 3D Big Spatial Data Processing)
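The per-point local/global feature design described in the abstract can be sketched in PyTorch as follows: shared 1D convolutions extract a local feature from each point's coordinates and waveform, a global max-pool summarizes the scene, and the concatenated local-plus-global vector is classified per point. The channel widths, waveform length, and class count are illustrative assumptions, not the published FWNet configuration.

```python
# Minimal PointNet-style sketch for per-point semantic segmentation of
# full-waveform lidar returns (one recorded waveform per 3D point).
import torch
import torch.nn as nn

class FWNetSketch(nn.Module):
    def __init__(self, waveform_len, n_classes):
        super().__init__()
        # shared per-point MLP over (xyz + waveform), implemented as 1x1 convs
        self.local = nn.Sequential(
            nn.Conv1d(3 + waveform_len, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Conv1d(128 + 128, 128, 1), nn.ReLU(),
            nn.Conv1d(128, n_classes, 1),
        )

    def forward(self, xyz, waveform):
        # xyz: (B, 3, N); waveform: (B, L, N)
        x = torch.cat([xyz, waveform], dim=1)
        local = self.local(x)                                # (B, 128, N) per-point features
        global_feat = local.max(dim=2, keepdim=True).values  # (B, 128, 1) scene summary
        fused = torch.cat([local, global_feat.expand(-1, -1, local.size(2))], dim=1)
        return self.classifier(fused)                        # (B, n_classes, N) per-point logits

model = FWNetSketch(waveform_len=60, n_classes=6)
logits = model(torch.randn(2, 3, 1024), torch.randn(2, 60, 1024))
print(logits.shape)  # torch.Size([2, 6, 1024])
```

Note how the waveform samples enter the network unchanged as extra per-point channels; nothing is projected to a 2D grid or summarized into handcrafted features beforehand, which is the key property the abstract highlights.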

15 pages, 4453 KiB  
Article
Identifying Informal Settlements Using Contourlet Assisted Deep Learning
by Rizwan Ahmed Ansari, Rakesh Malhotra and Krishna Mohan Buddhiraju
Sensors 2020, 20(9), 2733; https://doi.org/10.3390/s20092733 - 11 May 2020
Cited by 7 | Viewed by 3292
Abstract
As the global urban population grows due to the influx of migrants from rural areas, many cities in developing countries face the emergence and proliferation of unplanned and informal settlements. However, even though the rise of unplanned development influences the planning and management of residential land use, reliable and detailed information about these areas is often scarce. While formal settlements in urban areas are easily mapped due to their distinct features, this does not hold true for informal settlements because of their microstructure, instability, and variability of shape and texture. Detecting and mapping these areas therefore remains a challenging task. This research contributes to the development of tools for identifying such informal built-up areas through an integrated multiscale deep learning approach. The authors propose a composite architecture for semantic segmentation based on the U-net architecture, aided by information obtained from a multiscale contourlet transform. The work also analyzes the effects of wavelet and contourlet decompositions in the U-net architecture. Performance was evaluated in terms of precision, recall, F-score, mean intersection over union, and overall accuracy. The proposed method was found to have better class-discriminating power than existing methods, with an overall classification accuracy of 94.9–95.7%.
(This article belongs to the Special Issue Artificial Intelligence for 3D Big Spatial Data Processing)
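Since the contourlet transform has no standard Python implementation, the sketch below uses the related wavelet decomposition (which the paper also analyzes) via the PyWavelets package to stack multiscale detail subbands as extra input channels for a segmentation network such as a U-net. The Haar wavelet, two decomposition levels, and nearest-neighbor upsampling are illustrative assumptions.

```python
# Minimal sketch: augmenting a segmentation network's input with multiscale
# wavelet detail subbands, as a stand-in for a contourlet decomposition.
import numpy as np
import pywt  # PyWavelets

def wavelet_feature_stack(image, wavelet="haar", levels=2):
    """Stack an image with its upsampled detail subbands as extra channels."""
    channels = [image]
    current = image
    for _ in range(levels):
        approx, (horiz, vert, diag) = pywt.dwt2(current, wavelet)
        for band in (horiz, vert, diag):
            # resize each subband back to the original grid (nearest-neighbor repeat)
            ry = image.shape[0] // band.shape[0]
            rx = image.shape[1] // band.shape[1]
            channels.append(np.kron(band, np.ones((ry, rx)))[: image.shape[0], : image.shape[1]])
        current = approx  # recurse on the approximation for the next scale
    return np.stack(channels, axis=0)  # (1 + 3*levels, H, W) input channels for a U-net

patch = np.random.rand(256, 256).astype(np.float32)
features = wavelet_feature_stack(patch)
print(features.shape)  # (7, 256, 256)
```

A contourlet decomposition would replace the three fixed orientations per scale with a richer, directionally adaptive set of subbands, which is what gives the paper's method its edge on the irregular textures of informal settlements.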
