Article
Peer-Review Record

LunarSim: Lunar Rover Simulator Focused on High Visual Fidelity and ROS 2 Integration for Advanced Computer Vision Algorithm Development

by Dominik Pieczyński, Bartosz Ptak, Marek Kraft * and Paweł Drapikowski
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 28 September 2023 / Revised: 9 November 2023 / Accepted: 14 November 2023 / Published: 16 November 2023
(This article belongs to the Section Robotics and Automation)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This is an interesting paper that presents a simulator for a lunar rover. The simulator is based on several common platforms and can be linked to other libraries to enable the simulation of a lunar rover in conditions similar to the lunar surface, according to the authors' claim.

However, a major concern is its claimed "high fidelity". As discussed in the manuscript, "A large part of the functionality of both of the aforementioned simulators is focused on accurately modelling the physical properties of the rover and the environment, such as rover-surface soil interactions...". This is critically important for such simulators: their main purpose is to stand in for real experiments, which are very difficult to conduct, so the simulator must guarantee accuracy. However, it is not clear how "high fidelity" is guaranteed in the simulator introduced in the manuscript. It is simply mentioned that "The physical forces of the lunar environment are simulated and controlled by the NVIDIA PhysX engine", without any details of the physical model parameters or justification that the model setup can accurately represent the lunar environment.


For model validation, the simulator may also be used to simulate some scenarios on Earth, for which detailed data may be available.

Comments on the Quality of English Language

Moderate editing of English language required

Author Response

We thank the Reviewer for pointing out this issue. The main purpose of the developed simulator is to ensure visual fidelity and produce ground truth for vision-based algorithms, and this was the main development focus. The physical simulation is not as complex as in complete terramechanics simulators, but it is realistic enough to allow testing of the intended class of algorithms. Phenomena such as slip, inertia, gravity levels, collisions, and traces left by the moving robot are simulated, so that robot traversal is realistic enough to test mapping and navigation algorithms. We do not model soil compaction or settlement, since these are not crucial for vision-based situational awareness. Conversely, terramechanics simulators do not provide configurable, photorealistic rendering, hardware-in-the-loop capabilities, or standard software integration.


We have provided additional clarifications in Section 1, along with citations to a paper clarifying fidelity levels for digital twins. Additional tests performed using neural network feature extractors prove that the data collected using the presented simulator is closer to real-world lunar surface data than the data contained in the closest alternative solution. The results of this additional experiment, confirming the gains in terms of visual fidelity, are added in Section 4.

While the simulator does not yet provide access to mission scenarios and environments on Earth, it can be extended to do so, as acknowledged in the last section of the article.

Reviewer 2 Report

Comments and Suggestions for Authors

The authors presented a new space rover simulator and toolkit achieving a high degree of rendered scene realism, ease of use through the adoption of a popular game engine, and familiar, industry-standard ROS interface compatibility, with integrated support for a wide range of heterogeneous computational hardware. The presented LunarSim is a space simulator that is tightly integrated with ROS 2, space-ROS and micro-ROS, capable of performing simulations using a wide range of hardware (programmable logic, embedded microcomputers and microcontrollers). The simulator displays favorable performance in terms of generated image fidelity, providing a valuable tool for researchers developing lunar rover systems. It can also accommodate other devices. In brief, the presented simulator significantly extends the capabilities for replicating critical components of space robotics applications in perception and autonomy. In general, the manuscript is well written and may be accepted for publication. Some important remarks on the paper are as follows.

 

1. The authors provided a brief introduction about related simulation tools, following an introductory note on the present objective.

2. The implemented simulation solution is well explained with the necessary block diagram. Sample results are then provided: object detection on an embedded device, image segmentation on an FPGA platform, and the rover trajectory calculated by visual odometry on an x64-based mini-PC.

3. Finally, in the conclusion, they state that the simulator significantly extends the capabilities for replicating critical components of space robotics applications in perception and autonomy.

Author Response

The authors would like to express their gratitude to the Reviewer for their valuable time and effort dedicated to the review.

Reviewer 3 Report

Comments and Suggestions for Authors

This paper introduces a novel and highly valuable contribution to the field of autonomous lunar exploration and robotics. The development of a lunar-oriented robotic simulation environment using the Unity game engine, integrated with Robot Operating System 2 (ROS 2), provides a cutting-edge tool for researchers to test their algorithms for lunar rover perception and navigation. The simulator's versatility is showcased through its deployment on various hardware platforms, including FPGA and Edge AI devices, demonstrating its ability to evaluate vision-based algorithms for lunar exploration. The paper emphasizes the significance of this simulator, as it offers a sophisticated and realistic rendering environment with compatibility for ROS 2, making it a valuable resource for developing and testing perception and control algorithms for lunar and exoplanetary rovers. Furthermore, the simulator's exceptional performance in generating high-fidelity images, surpassing existing datasets in transfer learning experiments, underscores its potential for advancing space robotics applications. The paper should be published following the responses.

 

Comments that require addressing include the following:

1. The growth of the space exploration sector may encounter challenges related to the scalability of current technology. It is essential to specify the reasons for this potential slowdown and whether they are relevant to the content of this paper.

2. "Although many such solutions have been developed over the years, most of them are not up to the task of addressing the challenges of modern-day space autonomy." Detailing the shortcomings and challenges faced by these previous methods will help contextualize the significance of the research presented in this article. Additionally, it is crucial to clearly articulate which specific solutions or aspects this article improves upon or advances.

3. The article mentions the impact of external factors on algorithmic performance, such as lighting conditions, visibility limitations, and terrain textures. To enhance clarity, it is important to explain how these external factors affect the algorithms and, if applicable, how the proposed solution mitigates them.

4. Highlighting the significance of the article's applicability to multiple hardware platforms is important. Clarifying how this versatility benefits actual detection and the practical implications of using different hardware platforms will provide a more comprehensive understanding of the research.

5. In Figure 7, it would be beneficial to reposition the difference plot below the ground truth and estimated data to improve the visual representation.

Author Response

The authors would like to thank the Reviewer for the time spent on the review. We have prepared responses to the comments:

1. The introduction section has been extended to expand upon the problems with the scalability of human-controlled equipment as applied to complex space missions, the number of which will most probably grow. We have also added a few citations supporting our claims.

2. The introduction has been expanded to emphasize the importance of the research compared to previous simulations. We have emphasized the motivation for the creation of our simulator and its benefits for the development of computer vision algorithms. The areas of research where the simulator can be used have been pointed out.

3. The capability to simulate different lighting conditions is important to assure the necessary level of robustness. Extraterrestrial bodies might differ from the Earth in terms of atmospheric conditions and surface cover. For example, the simulated lunar environment displays no atmospheric scattering, and the surface is characterised by relatively high reflectance, leading to the formation of sharp shadows. We added the necessary explanations to Section 3, along with citations emphasizing the impact of lighting conditions on computer vision applications. Textures also play an important role in computer vision and navigation. Feature detection and tracking is a prerequisite and first step in almost all visual odometry and SLAM algorithms, and a lack of texture might not only degrade the faithfulness of the simulation; it can make testing vision-based navigation algorithms impossible. As suggested, we added additional explanations to Section 3 of the manuscript, along with related citations.

4. The importance and benefits of supporting multiple hardware platforms have been discussed at the end of the first paragraph in Section 4. We focused on the changes occurring with the New Space paradigm, in which the use of commercial off-the-shelf devices becomes viable and requires the creation of universal tools. Appropriate citations have been added.

5. To generate a trajectory comparison and calculate the error between trajectories, we utilized a tool commonly employed in the field of robotics. The included figures are standard plots commonly used for this purpose.
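The response does not name the tool (evo is a common choice in robotics for this purpose, though that is an assumption here). As a minimal, illustrative sketch of the underlying metric, the absolute trajectory error (ATE) RMSE between a ground-truth and an estimated trajectory can be computed as below; note it uses a simplified translation-only alignment rather than the full rigid-body (Umeyama) alignment such tools typically perform:

```python
import numpy as np

def ate_rmse(gt, est):
    """Absolute trajectory error (RMSE) between corresponding positions.

    gt, est: (N, 3) arrays of ground-truth and estimated positions.
    Alignment is translation-only (centroid removal) for simplicity.
    """
    gt = np.asarray(gt, dtype=float)
    est = np.asarray(est, dtype=float)
    # Remove the mean offset from each trajectory, then compare point-wise
    diff = (est - est.mean(axis=0)) - (gt - gt.mean(axis=0))
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Toy trajectories: the estimate drifts 0.1 m sideways at the middle pose
gt = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]
est = [[0, 0, 0], [1, 0.1, 0], [2, 0, 0]]
print(round(ate_rmse(gt, est), 4))  # → 0.0471
```

A dedicated evaluation tool additionally handles timestamp association, rotational alignment, and the standard error plots mentioned in the response.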

Reviewer 4 Report

Comments and Suggestions for Authors

1. The parameter-matching process of the simulator is not sufficiently described.

2. More content about the simulator could be added to the main text.

3. Check all formatting. For example, whether a "," should be used after 51 and 54.

4. The data in Figure 5 are not clear enough, and there is not a significant contrast between the images of the dataset. In addition, the characteristics of smaller rock blocks are not reflected in the synthetic dataset images, which is not explained in the paper.

5. Would a comparison of F1 scores between the image segmentation results and data synthesized by other simulators highlight the advantages of the simulator presented in this paper?

6. In the visual odometry part, the explanation of the results is insufficient.

Author Response

The authors would like to thank the Reviewer for the time spent on the review. We have prepared responses to the comments:

1. To deal with the parameter-matching process during the simulation's creation, we addressed several of its aspects:
- Environmental properties. This encompasses physical properties such as lunar gravity, the absence of atmospheric effects, and the simulation of the world's terrain geometry, size, and shape, which have been informed by existing research and imagery obtained from the Moon's surface and satellite images.
- Lighting properties. Realistic lighting conditions, such as illumination angles and lighting temperature, were employed to maximize the representativeness of the images rendered in the simulation.
- Rover model properties. The physical properties of the robot, including its mass, dimensions, joint dynamics, and friction coefficients, were represented in the simulation using Perseverance rover parameters.
- Sensor parameters. The simulation model's sensor parameters, such as field of view, resolution, noise characteristics, and delay, were set to match the real sensors as closely as possible, imitating consumer-grade devices.
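As a hedged illustration of the sensor-parameter matching described in the last item, the sketch below bundles a few consumer-grade-like camera parameters and applies additive Gaussian intensity noise to a rendered frame. All names and values are hypothetical placeholders, not LunarSim's actual settings:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class CameraModel:
    # Illustrative placeholder values, not taken from the paper
    h_fov_deg: float = 69.4   # horizontal field of view
    width: int = 1280         # image resolution
    height: int = 720
    noise_sigma: float = 2.0  # std. dev. of Gaussian intensity noise (8-bit scale)
    delay_s: float = 0.033    # simulated one-frame transport delay

    def apply_noise(self, image, rng=None):
        """Add zero-mean Gaussian noise and clip back to the valid 8-bit range."""
        if rng is None:
            rng = np.random.default_rng(0)
        noisy = image.astype(float) + rng.normal(0.0, self.noise_sigma, image.shape)
        return np.clip(noisy, 0, 255).astype(np.uint8)

cam = CameraModel()
frame = np.full((cam.height, cam.width), 128, dtype=np.uint8)  # flat grey test frame
noisy = cam.apply_noise(frame)
print(noisy.shape, noisy.dtype)  # (720, 1280) uint8
```

In a simulator, parameters like these would be fed to the rendering and sensor pipeline so that the synthetic data stream matches the target hardware as closely as possible.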

In addition, a description of the rover and sensor parameters has been added to the Section "Overview of the solution". Further examinations conducted with neural network feature extractors confirm that the data generated by the simulator described in this study closely resembles real-world lunar surface data, surpassing the data from the nearest alternative solution. The outcomes of this supplementary experiment, which validate the improvements in visual accuracy, are incorporated in Section 4.

2. The Introduction section has been substantially enhanced and extended to encompass the significance of utilising simulators in research, the driving motivation behind the creation of our simulator, and a thorough comparison of its characteristics and features concerning other existing solutions.

3. We have enhanced the document's formatting, including the standardization of the item list format.

4. The example from the synthetic dataset has been updated with a more illustrative one that includes rocks of different sizes, and the caption has been improved (Figure 5). Information on rock size characteristics has been added (Section 4.1). Please note that the end user can easily create different scenarios and scene layouts for the simulator using industry-standard tools.

5. In the article, we conducted a comparison between the Artificial Lunar Landscape dataset and the dataset generated through our simulation. It is worth noting that, unfortunately, other simulations do not offer ground truth annotations for the segmentation task. To provide clarity on this point, we have appended this information at the end of the Related Work section.

6. The explanation of the visual odometry results has been extended. We added the sentence: "This outcome signifies the capability of the VO algorithm to accurately reconstruct the trajectory, even in the face of challenging environmental conditions and a limited number of features. Furthermore, the level of error implies that the simulation environment is just as demanding as real-world outdoor scenarios \cite{openvslam2019}".
