Article
Peer-Review Record

An Atomic Force Acoustic Microscopy Image Fusion Method Based on Grayscale Inversion and Selection of Best-Fit Intensity

by Zhaozheng Chen, Xiaoqing Li and Mingyue Ding *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 16 October 2020 / Revised: 19 November 2020 / Accepted: 20 November 2020 / Published: 3 December 2020
(This article belongs to the Section Applied Biosciences and Bioengineering)

Round 1

Reviewer 1 Report

Comments and edits are found in the attached PDF.

Comments for author File: Comments.pdf

Author Response

We want to thank you for reviewing our manuscript and for the constructive comments and suggestions.

Response to your comments: Thank you for pointing out our language mistakes. We have corrected them in the article as you suggested, and we have also fixed other incorrect English usage to make the article easier to read.

Reviewer 2 Report

The article "An Atomic Force Acoustic Microscopy Image Fusion Method Based on Grayscale Inversion and Selection of Best-Fit Intensity" by Chen, Li and Ding describes a method to register surface morphology and internal structure (acoustic) images generated by an atomic force acoustic microscope (AFAM). The authors propose a method that uses several image processing techniques, including image inversion and the "max rule", to combine the two images.

Overall, the authors have provided a good description of their methodology. My biggest concern is that the motivation seems to be missing.

I have a number of concerns about this manuscript.

1. Despite reading this manuscript several times, I have to concede that I do not understand the purpose of the proposed method. The authors seem to be trying to *merge* the morphology and structure images into a single image, but I do not understand why this is necessary. The resulting image is, at best, only useful for visualization purposes. Based on my understanding, the resulting image would no longer be suitable for quantitative analysis since information has been changed to get the final image.

A better approach would be to encode the different information using different colors, which the same authors have already described in their other papers (see Cells 2019, 8, 314).
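
To illustrate the alternative I have in mind, here is a minimal sketch of a two-channel colour composite in Python with NumPy; the function name, variable names and colour assignment are my own assumptions, not the authors' code:

    import numpy as np

    def color_composite(morphology, acoustic):
        """Overlay two single-channel images: morphology in green, acoustic in magenta."""
        def normalize(img):
            img = img.astype(np.float64)
            span = img.max() - img.min()
            return (img - img.min()) / span if span > 0 else np.zeros_like(img)

        m = normalize(morphology)
        a = normalize(acoustic)
        # Acoustic drives the red and blue channels (magenta); morphology drives the green channel.
        rgb = np.stack([a, m, a], axis=-1)
        return (rgb * 255).astype(np.uint8)

Such a composite shows both signals in one picture while leaving each underlying measurement untouched and separately recoverable.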

2. The second major concern I have is that the authors have provided no ground truth dataset, so there is simply no way of knowing whether the final fused images are accurate or not. I also find it quite concerning that these authors appear to have published a number of other articles using essentially the same dataset (see citation above and also Appl. Sci. 2020, 10, 7424). If this is the case, why has a ground truth dataset not been produced?

3. It is unclear to me what the purpose of inverting the morphology image is. The authors say that a "larger space is offered in the fusion step". What "space" is this referring to? Bit depth? Contrast?

4. The authors also say that "the inverted image may be beyond the bounds of gray level. To solve this problem, we truncate the image gray level..." This is poor practice, as it means that the raw information has now been changed. In this case, the authors have thrown information away, and it is unclear why this is necessary or justified.
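
For clarity, this is how I read the inversion and truncation described in points 3 and 4; the sketch below is my own reading, and the 8-bit range and the reference level are assumptions rather than values taken from the manuscript:

    import numpy as np

    def invert_and_truncate(morphology, reference=255):
        """Invert an 8-bit morphology image around a reference level, then clip to [0, 255]."""
        inverted = reference - morphology.astype(np.int16)  # can leave [0, 255] when reference != 255
        return np.clip(inverted, 0, 255).astype(np.uint8)   # truncation: out-of-range values are discarded

Whatever falls outside the valid range is silently discarded by the clipping step, which is exactly the information loss I am concerned about.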

5. According to the "max rule", the authors compare the pixel values of the inverted morphological image and acoustic image and keep the maximum intensity. This is, again, very odd to me as the final image will simply become a combination of two completely different types of data - the morphology and the acoustic image. What do the final grayscale values represent?
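
Expressed in code, the operation described above would appear to be simply the following (a sketch only; variable names are my own):

    import numpy as np

    def max_rule_fusion(inverted_morphology, acoustic):
        """Keep, at every pixel, the larger of the two intensities."""
        return np.maximum(inverted_morphology, acoustic)

The output therefore mixes morphology-derived and acoustic intensities pixel by pixel, and it remains unclear what physical quantity a given gray value in the fused image represents.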

6. In the "selection" step, the authors use grayscale morphological opening to try to compute the cell and background image. However, opening cannot accurately estimate the background unless the cell occupies a much smaller area than the background in the image. In the example image, the cell appears to take up close to 50% of the total image area, so this technique would not work.

7. What is the size of the structuring element used for the opening? 
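
To make points 6 and 7 concrete, here is a sketch of background estimation by grayscale opening using scipy.ndimage; the structuring-element size is a placeholder, since the manuscript does not state the value actually used:

    from scipy import ndimage

    def estimate_background(image, selem_size=15):
        """Grayscale opening with a square structuring element of side selem_size (placeholder value)."""
        # The opening only flattens bright features narrower than the structuring element,
        # so an object covering roughly half the field of view cannot be separated from the
        # background by any practical element size.
        return ndimage.grey_opening(image, size=(selem_size, selem_size))

Reporting the element size, and its relation to the cell size in pixels, would make it possible to judge whether this step can work on the example image.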

8. Finally, there are numerous language errors. This manuscript would benefit from a professional editing service.

Author Response

We thank you for reviewing our manuscript and for the comments. Our response to the comments is attached.

Author Response File: Author Response.pdf

Reviewer 3 Report

Please see attachment

Comments for author File: Comments.pdf

Author Response

Thank you for the constructive suggestions. Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

I'd like to thank the authors for addressing my comments. For the most part, they have improved the quality of the manuscript.

Two issues remain for me:

  1. I still do not understand the purpose of fusing the images. Quantitatively, it is still unclear what the final image represents as it is a combination of two distinct types of data. However, I can see some merit as a qualitative way to visualize both morphological and structure information.

  2. The word "acoustic" is still spelled incorrectly in Figure 1 ("Acousric"). This should be corrected before publication.

Author Response

Please see the attached file.

Author Response File: Author Response.pdf
