ISVD-Based Advanced Simultaneous Localization and Mapping (SLAM) Algorithm for Mobile Robots
Round 1
Reviewer 1 Report
The authors developed a novel ISVD-based method for displacement estimation using keypoints detected with SURF and ORB features.
The work is presented in a clear way; however, I recommend improving the following points:
- The authors need to clearly show the motivation for developing this approach.
- In the Related Work section, I suggest adding a comparison table to show the differences between the existing approaches: for instance, the sensors employed, the experimental testbed area, accuracy, and so on.
- Add a legend to Figure 7; there are two colored lines (orange and blue).
- In Section 5, "Evaluation and Results", I recommend comparing your proposed system with recently developed systems.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
The paper presents an open-loop SLAM method based on progressive RGB-D motion estimation through feature matching and a one-step point-cloud-based optimization. The paper reads more like a comparative study between the ICP and ISVD methods. The novelty is not well defined, and the results are poorly presented.
The authors should consider reworking the research statement and clarify why studying methods that were introduced ten years ago is worth mentioning. Loop-closure optimization through BoWs and semantic learning, which significantly improves the accuracy of SLAM, is missing. This should also be justified in the introduction.
The abstract and introduction are not exciting. The knowledge provided is very well established, and a novelty statement is missing.
In the introduction, a summary of the contributions should be provided.
SURF and ORB feature-based tracking is not state of the art. An explanation for their usage should be provided.
ISVD should be briefly (1 sentence) explained in the introduction and abstract.
“line is painted on the route” — the authors need to express this statement more formally.
Related Work
The related work is a little bit obsolete. Ref 16 is too old. There are alternatives, such as:
For stereo:
1. Y. Zhou, G. Gallego, and S. Shen, "Event-based stereo visual odometry," IEEE Transactions on Robotics, 37(5), 2021, pp. 1433-1450.
2. I. Kostavelis et al., "Stereo-based visual odometry for autonomous robot navigation," International Journal of Advanced Robotic Systems, 13(1), 2016, p. 21.
For RGB-D, the conducted work with the Kinect sensor closely resembles the following:
1. I. Kostavelis and A. Gasteratos, "Learning spatially semantic representations for cognitive robot navigation," Robotics and Autonomous Systems, 61(12), 2013, pp. 1460-1475.
For LiDAR, there are geometrical approaches and learning-based ones:
1. A. Amanatiadis et al., "Avert: An autonomous multi-robot system for vehicle extraction and transportation," 2015 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2015.
2. X. Chen, T. Läbe, A. Milioto, T. Röhling, O. Vysotska, A. Haag, et al., "OverlapNet: Loop closing for LiDAR-based SLAM," arXiv preprint arXiv:2105.11344, 2021.
In line 112, the calibration information is trivial; this part can be omitted.
The RGB-D sensor is noisy for distant measurements. What the community typically does is truncate the point cloud beyond 5 meters. Such a filtering step is not mentioned.
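The range-truncation filter the reviewer refers to can be sketched in a few lines; this is a minimal NumPy illustration (the function name and the 5 m default are illustrative conventions, not something taken from the paper under review):

```python
import numpy as np

def truncate_cloud(points, max_range=5.0):
    """Discard points farther than max_range (meters) from the sensor origin.

    points: (N, 3) array of XYZ coordinates in the sensor frame.
    Returns the subset of points within max_range.
    """
    distances = np.linalg.norm(points, axis=1)
    return points[distances <= max_range]
```

In practice, depth cameras express points in the optical frame, so filtering on the z (depth) coordinate alone is an equally common variant.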
In addition, progressive visual odometry accumulates error due to the reprojection error from the depth to the color camera and mismatches in the detected features (SURF, ORB). An outlier-discarding step is typically necessary. How do the authors handle such issues?
The experimental results are not well consolidated: some comparisons are referred to in line 206 and others in the experimental section.
Authors should compare their work with other geometrical methods.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 2 Report
All the comments have been addressed.