Special Issue "3D Reconstruction and Visualization of Dynamic Object/Scenes Using Data Fusion"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 December 2021.

Special Issue Editors

Dr. Kyungeun Cho
Guest Editor
Department of Multimedia Engineering, Dongguk University, Republic of Korea
Interests: 3D reconstruction; artificial intelligence for games and robots; virtual reality; NUI/NUX; human–robot interactions
Dr. Pradip Kumar Sharma
Guest Editor
Department of Computing Science, University of Aberdeen, UK
Interests: edge computing; IoT security; blockchain; software-defined networking; social networking
Dr. Wei Song
Guest Editor
College of Computer Science and Technology, North China University of Technology, Beijing 100144, China
Interests: environment perception; unmanned ground vehicle; 3D reconstruction; object recognition

Special Issue Information

Dear Colleagues,

Knowledge of the 3D structure of a scene provides valuable information for an in-depth analysis and understanding of the contextual environment. 3D virtual reconstruction recovers the geometric structure of a scene from a collection of images, using the camera positions and internal parameters. Data fusion-based 3D reconstruction with 3D sensors such as RGB-D cameras, LiDAR, and radar has been used in applications such as autonomous things, robotics, remote sensing, and VR/AR. In particular, deep learning methods for multi-modal 3D data fusion, using images alone or heterogeneous sensor data such as images and point clouds, are actively used for 3D reconstruction in both research and industry. Complexity, occlusions, the variety of structures, and inaccessible locations are serious issues that affect the capture of all geometric details of 3D structures. It is therefore necessary to collect a large amount of data from different stations, which must be accurately registered and integrated together.
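As a concrete illustration of how camera internal parameters enter reconstruction, the sketch below back-projects a depth map into a 3D point cloud with a standard pinhole model. The intrinsic values and toy depth map are illustrative assumptions, not taken from any work in this issue:

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metres) to an N x 3 point cloud
    using pinhole camera intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Stack into an (H*W) x 3 array and drop invalid (zero-depth) pixels
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Toy example: a 2 x 2 depth map with one invalid pixel
depth = np.array([[1.0, 2.0], [0.0, 4.0]])
pts = backproject_depth(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts.shape)  # (3, 3)
```

Fusing such back-projected clouds from several viewpoints requires the extrinsic pose of each camera in addition to the intrinsics shown here.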

This Special Issue on “3D Reconstruction and Visualization of Dynamic Objects/Scenes Using Data Fusion” will focus on robust methods for uncontrolled environments, including 3D scene modeling, autonomous exploration of unknown scenes, and autonomous obstacle avoidance systems. We welcome novel research, reviews, and opinion articles covering all related topics.

Dr. Kyungeun Cho
Dr. Pradip Kumar Sharma
Dr. Wei Song
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multi-view 3D reconstruction
  • 3D remote sensing
  • Multi-modal data fusion of 3D sensors
  • Depth map fusion
  • Point cloud analysis
  • Deep learning and statistical computing
  • Procedural modeling

Published Papers (5 papers)


Research


Article
Reflective Noise Filtering of Large-Scale Point Cloud Using Multi-Position LiDAR Sensing Data
Remote Sens. 2021, 13(16), 3058; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163058 - 04 Aug 2021
Abstract
Signals such as point clouds captured by light detection and ranging sensors are often affected by highly reflective objects, including specular opaque and transparent materials such as glass, mirrors, and polished metal, which produce reflection artifacts, thereby degrading the performance of associated computer vision techniques. Traditional noise filtering methods for point clouds detect noise by considering the distribution of neighboring points. However, noise generated by reflective areas is quite dense and cannot be removed on the basis of point distribution alone. Therefore, this paper proposes a noise removal method that detects dense noise points caused by reflective objects by comparing multi-position sensing data. The proposed method consists of three steps. First, the point cloud data are converted to range images of depth and reflective intensity. Second, the reflective area is detected using a sliding window on the two converted range images. Finally, noise is filtered by comparing neighbor sensor data between the detected reflective areas. Experimental results demonstrate that, unlike conventional methods, the proposed method can filter dense, large-scale noise caused by reflective objects. In future work, we will attempt to add RGB images to improve the accuracy of noise detection.
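The first step of the pipeline described in the abstract, converting a point cloud into a range image, can be sketched with a spherical projection. This is a minimal illustration, not the authors' implementation; the image size, fields of view, and collision handling are simplifying assumptions:

```python
import numpy as np

def to_range_image(points, h=64, w=1024, fov_up=15.0, fov_down=-15.0):
    """Project an N x 3 LiDAR point cloud into an H x W range image
    (spherical projection); each written pixel stores the point's range."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)          # range per point
    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-9))  # elevation
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fu - pitch) / (fu - fd) * (h - 1)).clip(0, h - 1).astype(int)
    img = np.zeros((h, w))
    img[v, u] = r   # last point wins on collision in this naive sketch
    return img

pts = np.array([[10.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
img = to_range_image(pts)
print(img.shape)  # (64, 1024)
```

An intensity range image, as used in the paper, would be built the same way but storing each point's reflective intensity instead of its range.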

Article
Deep Learning-Based Point Upsampling for Edge Enhancement of 3D-Scanned Data and Its Application to Transparent Visualization
Remote Sens. 2021, 13(13), 2526; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13132526 - 28 Jun 2021
Abstract
Large-scale 3D-scanned point clouds enable the accurate and easy recording of complex 3D objects in the real world. The acquired point clouds often describe both the surface and internal 3D structure of the scanned objects. The recently proposed edge-highlighted transparent visualization method is effective for recognizing the whole 3D structure of such point clouds. This visualization adjusts the degree of opacity to highlight edges of the 3D-scanned objects, realizing clear transparent viewing of entire 3D structures. However, for 3D-scanned point clouds, the quality of any edge-highlighting visualization depends on the distribution of the extracted edge points. Insufficient density, sparseness, or partial defects in the edge points can lead to unclear edge visualization. Therefore, in this paper, we propose a deep learning-based upsampling method that focuses on the edge regions of 3D-scanned point clouds to generate more edge points during the 3D-edge upsampling task. The proposed upsampling network dramatically improves the point-distributional density, uniformity, and connectivity in the edge regions. Results on synthetic and scanned edge data show that our method improves the percentage of edge points by more than 15% compared with the existing point cloud upsampling network. Our upsampling network works well for both sharp and soft edges, and combined use with a noise-eliminating filter also works well. We demonstrate the effectiveness of our upsampling network by applying it to various real 3D-scanned point clouds, and we show that the improved edge point distribution improves the visibility of the edge-highlighted transparent visualization of complex 3D-scanned objects.
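For readers unfamiliar with point cloud upsampling, the following naive geometric baseline densifies a point set by inserting midpoints between each point and its nearest neighbours. It is a hypothetical stand-in for comparison purposes, not the deep learning network the paper proposes:

```python
import numpy as np

def midpoint_upsample(points, k=2):
    """Naive upsampling baseline: add the midpoint between each point
    and each of its k nearest neighbours."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-distance
    idx = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours
    mids = (points[:, None, :] + points[idx]) / 2
    return np.vstack([points, mids.reshape(-1, 3)])

pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.]])
dense = midpoint_upsample(pts, k=1)
print(dense.shape)  # (6, 3)
```

Such interpolation only fills gaps along existing geometry; a learned network can instead move new points toward the underlying edge surface, which is what makes the paper's approach attractive for sparse edge regions.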

Article
DeepLabV3-Refiner-Based Semantic Segmentation Model for Dense 3D Point Clouds
Remote Sens. 2021, 13(8), 1565; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13081565 - 17 Apr 2021
Abstract
Three-dimensional virtual environments can be configured as test environments for autonomous things, and remote sensing with 3D point clouds collected by light detection and ranging (LiDAR) can be used to detect virtual human objects by segmenting the collected 3D point clouds in a virtual environment. A traditional encoder-decoder model, such as DeepLabV3, improves the quality of low-density 3D point clouds of human objects, where the density is determined by the measurement gap of the LiDAR lasers. However, when a human object together with its surrounding environment in a 3D point cloud is processed by a traditional encoder-decoder model, it is difficult to increase the density fit of the human object. This paper proposes a DeepLabV3-Refiner model, which refines the fit of human objects whose density has been increased through DeepLabV3. An RGB image containing a segmented human object is defined as a dense segmented image. DeepLabV3 is used to predict dense segmented images and 3D point clouds for human objects in 3D point clouds. In the Refiner model, the results of DeepLabV3 are refined to fit the human objects, and a dense segmented image fitted to the human objects is predicted. The dense 3D point cloud is then calculated from the dense segmented image provided by the DeepLabV3-Refiner model. Experiments verified that the 3D point clouds analyzed by the DeepLabV3-Refiner model had a 4-fold increase in density. The proposed method achieved a 0.6% increase in density accuracy compared with DeepLabV3 and a 2.8-fold increase in the density corresponding to the human object, providing 3D point clouds whose density fits the human object. The proposed method can thus provide an accurate 3D virtual environment based on the improved 3D point clouds.

Article
DGCB-Net: Dynamic Graph Convolutional Broad Network for 3D Object Recognition in Point Cloud
Remote Sens. 2021, 13(1), 66; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13010066 - 26 Dec 2020
Abstract
3D (three-dimensional) object recognition is a hot research topic that benefits environment perception, disease diagnosis, and the mobile robot industry. Point clouds collected by range sensors are a popular data structure for representing 3D object models. This paper proposes a 3D object recognition method named Dynamic Graph Convolutional Broad Network (DGCB-Net) to realize feature extraction and 3D object recognition from point clouds. DGCB-Net adopts edge convolutional layers constructed from weight-shared multi-layer perceptrons (MLPs) to automatically extract local features from the point cloud graph structure. Features obtained from all edge convolutional layers are concatenated to form a feature aggregation. Instead of stacking many layers in depth, our DGCB-Net employs a broad architecture that extends the point cloud feature aggregation flatly. The broad architecture is structured as a flat combining architecture with multiple feature layers and enhancement layers, both of which are concatenated to further enrich the feature information of the point cloud. All features contribute to the recognition results, so our DGCB-Net shows better recognition performance than other 3D object recognition algorithms on ModelNet10/40 and our scanned point cloud dataset.
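The edge convolution described in the abstract operates on edge features built from each point's k nearest neighbours. A minimal NumPy sketch of that feature construction might look like the following; the learned, weight-shared MLP is omitted (noted in a comment), so this is a structural illustration rather than the paper's layer:

```python
import numpy as np

def edge_features(points, feats, k=4):
    """EdgeConv-style local features: for each point i and each of its
    k nearest neighbours j, build the edge feature [f_i, f_j - f_i],
    then max-pool over the neighbours."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-distance
    idx = np.argsort(d, axis=1)[:, :k]          # N x k neighbour indices
    fi = np.repeat(feats[:, None, :], k, axis=1)  # N x k x C (centre)
    fj = feats[idx]                               # N x k x C (neighbours)
    edge = np.concatenate([fi, fj - fi], axis=-1)  # N x k x 2C
    # A weight-shared MLP would transform each 2C edge feature here
    return edge.max(axis=1)                        # pool over neighbours

pts = np.random.rand(10, 3)
out = edge_features(pts, pts, k=4)
print(out.shape)  # (10, 6)
```

Because the neighbour graph is rebuilt from the current features at each layer, stacking (or, as in DGCB-Net, broadly combining) such layers lets the receptive field grow beyond the initial Euclidean neighbourhood.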

Other


Technical Note
LiDAR Data Enrichment by Fusing Spatial and Temporal Adjacent Frames
Remote Sens. 2021, 13(18), 3640; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13183640 - 12 Sep 2021
Abstract
In autonomous driving scenarios, the point cloud generated by LiDAR is usually considered an accurate but sparse representation. To enrich the LiDAR point cloud, this paper proposes a new technique that combines spatially adjacent frames and temporally adjacent frames. To eliminate the “ghost” artifacts caused by moving objects, a moving point identification algorithm is introduced that compares range images. Experiments are performed on the publicly available SemanticKITTI dataset. Experimental results show that the proposed method outperforms most previous approaches and is the only one among them that can run in real time for online usage.
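The moving-point identification idea, comparing range images of aligned frames, can be illustrated with a crude per-pixel threshold test. The threshold value and validity handling below are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def moving_mask(range_a, range_b, thresh=0.5):
    """Flag pixels whose range differs between two aligned range images
    by more than `thresh` metres; such pixels likely belong to moving
    objects and would produce "ghost" artifacts if naively fused."""
    valid = (range_a > 0) & (range_b > 0)   # both frames hit something
    return valid & (np.abs(range_a - range_b) > thresh)

# Toy 2 x 2 range images from two aligned frames
a = np.array([[10.0, 5.0], [0.0, 7.0]])
b = np.array([[10.1, 2.0], [3.0, 7.2]])
m = moving_mask(a, b)
print(m)  # [[False  True] [False False]]
```

A real system would also handle pixels observed in only one frame (here simply marked invalid) and occlusion ordering before discarding points from the fused cloud.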
