Article
Peer-Review Record

Using Auto-ML on Synthetic Point Cloud Generation

by Moritz Hottong 1, Moritz Sperling 1 and Christoph Müller 1,2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 7 December 2023 / Revised: 4 January 2024 / Accepted: 12 January 2024 / Published: 15 January 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The file is attached.

Comments for author File: Comments.pdf

Comments on the Quality of English Language

The file is attached.

Author Response

Dear Reviewer,

Thank you for your valuable input. In our current revision, we were able to incorporate the changes you suggested. In particular, we have tried to explain our approach and our process better and to present the results and their application potential more clearly. In addition, we expanded the references. Although the previous version had already been checked for linguistic correctness by human and AI-based reviewers, we also made minor linguistic adjustments.

Here is a detailed list of your comments and suggestions and how we addressed them:

  1. The paper also has some typos and language issues which need to be checked and corrected in the revision.

    • --> Re-checked language issues using AI-based, rule-based, and human-assisted correction processes
  2. Introduction part should be polished. Grammar mistakes should be checked and corrected.

    • --> See 1.
  3. More detailed descriptions are desired to explain the simulation example.

    • --> Added process and architecture diagrams and descriptions
    • --> Expanded and clarified discussion of results and future work
  4. Possible comparisons should be included.

    • --> Added comparison to other methods in paragraph 4
  5. In practical example, It is suggested that to compare the existing method?

    • --> See 4. Otherwise, please clarify your question
  6. Check the notations part.

    • --> There is no "notations" part in our manuscript. Please clarify.
  7. Some remarks need to be added to explain the main differences with the existing results.

    • --> Added more references citing existing approaches and comparing them to our approach.
  8. The literature review is somewhat incomplete; there are some key references missing from the list. For example, see: Neural Comput & Applic 32, 9699–9712 (2020). https://doi.org/10.1007/s00521-019-04497-y.

    • --> Cited more references at relevant sections.

Thanks again, best regards,

the team of authors.

Reviewer 2 Report

Comments and Suggestions for Authors

Dear authors,

Please see notes in the file attached.

Comments for author File: Comments.pdf

Author Response

Dear Reviewer,

Thank you for your valuable input. In our current revision, we were able to incorporate the changes you suggested. In particular, we have tried to explain our approach and our process better and to present the results and their application potential more clearly. In addition, we expanded the references.

Here is a detailed list of your comments and suggestions and how we addressed them:

  1. L54: The main idea of this sentence has already been stated before.

    • --> Deleted the repeated sentence
  2. L59: Here it would be interesting to include some references that explain these techniques in more detail. For example, Ferreira, André, et al. "GAN-based generation of realistic 3D data: A systematic review and taxonomy." arXiv preprint arXiv:2207.01390 (2022). for GANs with volumetric data, including point clouds.

    • --> Added this reference (among others) as suggested
  3. Table 3: Additionally, another synthetic data strategy or data augmentation mechanism should be used to compare with the results of your approach.

    • --> Added comparison to other methods in paragraph 4 (Conclusion)
  4. L215: The manuscript should contain a more comprehensive explanation of possible applicability and how to implement it.

    • --> Added paragraph describing possible applications. Added diagrams and explanation of the process and the architecture of this approach.

Thanks again, best regards,

the team of authors.

Reviewer 3 Report

Comments and Suggestions for Authors

The following are my comments on this paper.

The work is interesting, but the following points need to be addressed:

1.  In this paper, the authors state that the system is built on existing 3D-ML functionalities and communication interfaces. So, what is the main contribution or innovation of this research?

2. It is recommended that the authors provide a diagram describing the workflow of this system.

3.  “The system stops when the optimizer reaches the stopping criterion or …” — how is the stopping criterion determined?

4.   “Auto-ML is not a guarantee success and may not always produce the best results” — so how did the authors solve this problem in this method?

5.  “During the experiments, a random optimizer is used that randomly chooses domain parameter settings from the search space.” I think the domain parameters are important in this method. Why did the authors choose the three domain parameters (Objects in Environment, Angular Loss, Camera Trajectory) in this experiment?

6. The method should be compared with other domain adaptation techniques mentioned in this paper.

7. A detailed introduction of Open3D and Open3D-ML is desired in Section 1.1.

Author Response

Dear Reviewer,

Thank you for your valuable input. In our current revision, we were able to incorporate the changes you suggested. In particular, we have tried to explain our approach and our process better by adding workflow/architecture diagrams. We also put some effort into presenting the results and their application potential more clearly. In addition, we added some insight into Open3D/ML and expanded the references.

Here is a detailed list of your comments and suggestions and how we addressed them:

  1. In this paper, the authors state that the system is built on existing 3D-ML functionalities and communication interfaces. So, what is the main contribution or innovation of this research?

    • --> Added an overview diagram and an explanatory subsection to paragraph 1
    • --> Added a process diagram showing the contributing packages and an explanatory subsection on the process to paragraph 2
  2. It is recommended that the authors provide a diagram describing the work flow of this system.

    • --> See 1
  3. "The system stops when the optimizer reaches the stopping criterion or …" — how is the stopping criterion determined?

    • --> Added a section describing how the stopping criterion is determined after line 116.
  4. "Auto-ML is not a guarantee success and may not always produce the best results" — so how did the authors solve this problem in this method?

    • --> Should now become clear from the added overview/process diagram (See 1)
  5. "During the experiments, a random optimizer is used that randomly chooses domain parameter settings from the search space." I think the domain parameters are important in this method. Why did the authors choose the three domain parameters (Objects in Environment, Angular Loss, Camera Trajectory) in this experiment?

    • --> Added a paragraph describing how the parameters were chosen to represent a broad diversity.
  6. The method should be compared with other domain adaptation techniques mentioned in this paper.

    • --> Added comparison to other methods in paragraph 4 (conclusion)
  7. A detailed introduction of Open3D and Open3D-ML is desired in Section 1.1.

    • --> Added paragraph describing those technologies and how they are utilized

Thanks again, best regards,

the team of authors.

Reviewer 4 Report

Comments and Suggestions for Authors

The paper is relevant and well written. 


More details about the concrete workflow and implementation of the pipeline are needed to get an idea of the tools and the complexity of the setting: in Sect. 2.1 the description is purely verbal (no architecture or workflow diagram); this is too little information for readers interested to "learn" from your approach.

More insight is desirable for what you learned that is useful to interested readers:
- Is your tool available? Where?
-  Or just some components? 
- Apart from the "nice to know" that you tried it out and succeeded, what else is directly useful to readers interested in either producing a similar tool (if yours is not shared) or using that or similar tools?

More info on the runtime for these results is needed:
- on which machines did you run the experiments? How much memory was needed?
- What are the runtimes? (in times of green computing...), and in comparison with other methods as well? 

Specifically on the pictures: in Fig. 8, do the green/blue colours have the same meaning as in the earlier pictures? There is no explanation there.

On Fig. 4: why is (b) better than (a)? The car itself seems, from this image, to be recognized equally well. It seems that the other green object in (a) is more easily dismissable as "this is not part of the car" (if that's the objective) than the small green areas on the ground quite close to the car in (b).
More explanation of the errors and why something is better than something else is actually needed.


Author Response

Dear Reviewer,

Thank you for your valuable input. In particular, we have tried to explain our approach and our process better by adding workflow/architecture diagrams. We also added clarifying information to several images and image captions. In addition, we expanded the references.

Here is a detailed list of your comments and suggestions and how we addressed them:

  1. More details about the concrete workflow and implementation of the pipeline are needed to get an idea of the tools and the complexity of the setting: in Sect. 2.1 the description is purely verbal (no architecture or workflow diagram); this is too little information for readers interested to "learn" from your approach.

    • --> Added an overview diagram and an explanatory subsection to paragraph 1
    • --> Added a process diagram showing the contributing packages and an explanatory subsection on the process to paragraph 2
  2. More insight is desirable for what you learned that is useful to interested readers:

    • Is your tool available? Where?
    • Or just some components?
    • Apart from the "nice to know" that you tried it out and succeeded, what else is directly useful to readers interested in either producing a similar tool (if yours is not shared) or using that or similar tools?
    • --> Unfortunately, at the time of writing, our tool cannot be made publicly available due to funding restrictions. We are looking for a way to at least publish the data generated with the tool.
    • --> Added a paragraph describing possible applications. Added diagrams and an explanation of the process and the architecture of this approach.
  3. More info on the runtime for these results is needed:

    • on which machines did you run the experiments? How much memory was needed?
    • What are the runtimes? (in times of green computing...), and in comparison with other methods as well?
    • --> As this manuscript describes the general applicability and potential of the approach at an early stage, we feel that a systematic (and inarguably valuable) evaluation of the runtime and energy-consumption aspects should be carried out in future work.
  4. Specifically on the pictures: in Fig. 8, do the green/blue colours have the same meaning as in the earlier pictures? There is no explanation there.

    • --> Added text explaining the colors in Fig. 6 (not Fig. 8)
  5. On Fig. 4: why is (b) better than (a)? The car itself seems, from this image, to be recognized equally well. It seems that the other green object in (a) is more easily dismissable as "this is not part of the car" (if that's the objective) than the small green areas on the ground quite close to the car in (b). More explanation of the errors and why something is better than something else is actually needed.

    • --> Added an explanation in the caption of Fig. 4: the roof part in (a) is mislabeled and thus the IoU is poor.

Thanks again, best regards,

the team of authors.

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Dear authors,

I would like to thank you for considering my suggestions and amending the paper accordingly. I believe that this paper will contribute to the field of Auto-ML in the area of synthetic generation.


Best regards.

Reviewer 3 Report

Comments and Suggestions for Authors

The authors have improved the manuscript to the standard of this journal. I have no further comments on this manuscript.