
DM-SLAM: A Feature-Based SLAM System for Rigid Dynamic Scenes

by Junhao Cheng 1,2, Zhi Wang 3,*, Hongyan Zhou 4, Li Li 1 and Jian Yao 1,2
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430070, China
Shenzhen Jimuyida Technology Co., Ltd., Shenzhen 518000, China
School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China
School of Resource and Environment Sciences, Wuhan University, Wuhan 430070, China
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2020, 9(4), 202;
Received: 25 February 2020 / Revised: 18 March 2020 / Accepted: 23 March 2020 / Published: 27 March 2020
(This article belongs to the Special Issue 3D Indoor Mapping and Modelling)
Most Simultaneous Localization and Mapping (SLAM) methods assume that environments are static. This strong assumption limits the applicability of most visual SLAM systems, because dynamic objects cause many incorrect data associations during the SLAM process. To address this problem, this paper proposes DM-SLAM, a novel visual SLAM method that follows the pipeline of feature-based methods. DM-SLAM combines an instance segmentation network with optical flow information to improve localization accuracy in dynamic environments, and supports monocular, stereo, and RGB-D sensors. It consists of four modules: semantic segmentation, ego-motion estimation, dynamic point detection, and a feature-based SLAM framework. The semantic segmentation module obtains pixel-wise segmentation of potentially dynamic objects, and the ego-motion estimation module calculates an initial pose. The third module applies two different strategies to detect dynamic feature points, one for the RGB-D/stereo case and one for the monocular case. In the first case, feature points with depth information are reprojected into the current frame, and the reprojection offset vectors are used to distinguish dynamic points. In the monocular case, the epipolar constraint is used instead. The remaining static feature points are then fed into the fourth module. Experimental results on the public TUM and KITTI datasets demonstrate that DM-SLAM outperforms standard visual SLAM baselines in terms of accuracy in highly dynamic environments.
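The two dynamic-point detection strategies described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function names, the pixel thresholds, and the assumption that an initial ego-motion (R, t) or fundamental matrix F is already available are all hypothetical simplifications of the pipeline the abstract describes.

```python
import numpy as np

def backproject(pts, depths, K):
    """Lift 2D pixels to 3D camera coordinates using depth and intrinsics K."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (pts[:, 0] - cx) * depths / fx
    y = (pts[:, 1] - cy) * depths / fy
    return np.stack([x, y, depths], axis=1)

def dynamic_mask_rgbd(prev_pts, depths, tracked_pts, R, t, K, thresh=2.0):
    """RGB-D/stereo case: reproject previous-frame points into the current
    frame using the initial ego-motion (R, t); points whose reprojection
    offset from their optical-flow-tracked position exceeds `thresh`
    pixels are flagged as dynamic."""
    P = backproject(prev_pts, depths, K)   # 3D points in previous camera frame
    P_cur = (R @ P.T).T + t                # transform into current camera frame
    proj = (K @ P_cur.T).T
    proj = proj[:, :2] / proj[:, 2:3]      # perspective division to pixels
    offsets = np.linalg.norm(proj - tracked_pts, axis=1)
    return offsets > thresh

def dynamic_mask_mono(prev_pts, tracked_pts, F, thresh=1.0):
    """Monocular case: a static point tracked into the current frame must lie
    near its epipolar line l = F @ x_prev; a large point-to-line distance
    marks the point as dynamic."""
    ones = np.ones((len(prev_pts), 1))
    x1 = np.hstack([prev_pts, ones])       # homogeneous coordinates
    x2 = np.hstack([tracked_pts, ones])
    lines = (F @ x1.T).T                   # epipolar lines in the current frame
    num = np.abs(np.sum(lines * x2, axis=1))
    den = np.linalg.norm(lines[:, :2], axis=1)
    return num / den > thresh
```

In both cases, points flagged as dynamic (together with points inside segmented dynamic-object masks) would be discarded, and only the surviving static features passed to the feature-based SLAM back end.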
Keywords: visual SLAM; deep learning; dynamic scenes; Mask R-CNN; optical flow; ORB-SLAM2
MDPI and ACS Style

Cheng, J.; Wang, Z.; Zhou, H.; Li, L.; Yao, J. DM-SLAM: A Feature-Based SLAM System for Rigid Dynamic Scenes. ISPRS Int. J. Geo-Inf. 2020, 9, 202.

AMA Style

Cheng J, Wang Z, Zhou H, Li L, Yao J. DM-SLAM: A Feature-Based SLAM System for Rigid Dynamic Scenes. ISPRS International Journal of Geo-Information. 2020; 9(4):202.

Chicago/Turabian Style

Cheng, Junhao, Zhi Wang, Hongyan Zhou, Li Li, and Jian Yao. 2020. "DM-SLAM: A Feature-Based SLAM System for Rigid Dynamic Scenes" ISPRS International Journal of Geo-Information 9, no. 4: 202.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
