Article
Peer-Review Record

Mobile Plant Disease Classifier, Trained with a Small Number of Images by the End User

by Nikos Petrellis 1,*, Christos Antonopoulos 1, Georgios Keramidas 2 and Nikolaos Voros 1
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 15 June 2022 / Revised: 13 July 2022 / Accepted: 18 July 2022 / Published: 22 July 2022
(This article belongs to the Special Issue Machine Vision Systems in Digital Agriculture)

Round 1

Reviewer 1 Report

The manuscript presents a mobile application for plant disease classification trained by the user. The manuscript is clear, relevant for the field and presented in a well-structured manner. However, many datasets for plant disease classification have been proposed in the literature. Please cite and use at least three more, including the RoCoLe and LeLePhid datasets. Also, correct the use of references according to scientific practice. Avoid citing websites such as Kaggle.

Author Response

We thank the reviewers for the time they spent reading our manuscript and making valuable comments. In this revised manuscript we have addressed all the reviewer comments and we believe the paper meets the high-quality criteria of the MDPI Agronomy journal. The modifications are highlighted in yellow in the revised manuscript. Please find below our responses to each reviewer comment, along with references to the section, paragraph, sentence or lines where the reviewer can find the additional or modified text that addresses their concerns.

We remain at the disposal of the reviewers for any other modifications required.

Best Regards

Nikos Petrellis

On behalf of all the authors.

 

Reviewer 1

 

Comment 1.1

The manuscript presents a mobile application for plant disease classification trained by the user. The manuscript is clear, relevant for the field and presented in a well-structured manner. However, many datasets for plant disease classification have been proposed in the literature. Please cite and use at least three more, including the RoCoLe and LeLePhid datasets.

Author Response 1.1

Thank you for your comment. The additional datasets RoCoLe and LeLePhid have been cited. 

Please see

  • the new 4th sentence of the 1st paragraph of subsection 2.1 (lines 147-149)
  • the new references [30] and [31]

Moreover, a private set of photographs displaying pear diseases has also been analyzed to measure the accuracy achieved with photographs that are not segmented or normalized.

Please see

  • the new subsection 3.3.
  • the new paragraphs 5 and 6 in section 4 (lines 542-562)
  • the new Fig. 2, and updated Tables 12 and 13

 

Comment 1.2

Also, correct the use of references according to scientific practice. Avoid citing websites such as Kaggle.

Author Response 1.2

Thank you for your suggestion. The references have been reorganized. The old references [2], [4], [7], [10], [11] and [20] have been removed since they were less relevant and older, while [10-11], [13-16] and [30-31] were added. General website references such as Kaggle were removed.

Reviewer 2 Report

The author proposed a mobile application to train and classify plant diseases with a small number of training images. In their study, 1,400 images were randomly selected from the Kaggle and PlantVillage datasets. The author used the color, segmented tomato disease images with a black background for the training and test sets. Many features were extracted, including the number of spots, the area of the spots, the average gray level of the spots, etc. (see Table 1). The tomato diseases were then graded and classified.

However, the author should address the following comments.

  • What is the difference between disease grading and disease classification?
  • Are equations 1-3 designed for classification or grading? Could the author describe this more clearly, add more examples, or show the calculation process?
  • Could the author explain Figure 2 in more detail? What is the 'Rule selection'?
  • More theoretical detail is required in this manuscript. For example, how are the features computed, and what is the feature range? As shown in Figure 5, how does the author define the ten stars?
  • As shown in Table 13, are all the accuracies of references [6, 12, 13, 24, and 26] evaluated on the same dataset and with the same number of training and test images? If not, the author should not compare the experimental results with those of other methods.

Author Response

We thank the reviewers for the time they spent reading our manuscript and making valuable comments. In this revised manuscript we have addressed all the reviewer comments and we believe the paper meets the high-quality criteria of the MDPI Agronomy journal. The modifications are highlighted in yellow in the revised manuscript. Please find below our responses to each reviewer comment, along with references to the section, paragraph, sentence or lines where the reviewer can find the additional or modified text that addresses their concerns.

We remain at the disposal of the reviewers for any other modifications required.

Best Regards

Nikos Petrellis

On behalf of all the authors.

Reviewer 2

Comment 2.1

    What is the difference between disease grading and disease classification?

Author Response 2.1

Thank you for your comment. The developed application grades each disease according to the extracted feature values, comparing them with the reference feature ranges of each disease. The photograph is then classified to the disease that receives the highest grade.

Please see the modified sentences 5-7 of Abstract (lines 15-18)
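For illustration, a minimal sketch of this grade-then-classify logic is given below. The simple in-range scoring, feature names and range values are assumptions for demonstration only; the actual grading performed by the application follows equations (1)-(3) of the paper.

```python
# Minimal sketch of the grade-then-classify idea described above.
# The in-range scoring below is an assumed stand-in for the paper's
# grading equations (1)-(3); feature names and ranges are hypothetical.

def grade(features, ranges):
    """Grade one disease: fraction of extracted features that fall inside
    the reference range learned for that disease during training."""
    inside = sum(1 for name, value in features.items()
                 if ranges[name][0] <= value <= ranges[name][1])
    return inside / len(features)

def classify(features, rules):
    """Classify the photograph to the disease with the highest grade."""
    grades = {disease: grade(features, ranges) for disease, ranges in rules.items()}
    return max(grades, key=grades.get), grades

# Hypothetical feature values extracted from one photograph
features = {"spot_count": 12, "spot_area": 0.08, "spot_gray_mean": 131}

# Hypothetical reference ranges (min, max) per disease, read from the Rules file
rules = {
    "early_blight": {"spot_count": (8, 30), "spot_area": (0.05, 0.20), "spot_gray_mean": (110, 150)},
    "healthy":      {"spot_count": (0, 2),  "spot_area": (0.00, 0.01), "spot_gray_mean": (60, 90)},
}

print(classify(features, rules))  # -> ('early_blight', {'early_blight': 1.0, 'healthy': 0.0})
```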

 

Comment 2.2

    Are equations 1-3 designed for classification or grading? Could the author describe this more clearly, add more examples, or show the calculation process?

Author Response 2.2

Thank you for your comment. As stated in the last sentence above Eq. (1), these equations perform the grading. The photograph is then classified to the disease with the highest grade.

Please see

  • the Author Response 2.1 and the new last sentence of subsection 2.2 (line 355)
  • lines 113-155 in the 3rd paragraph before the end of Introduction
  • the last 2 sentences of the 5th paragraph of section 2.2 (lines 291-293)

Details about how the feature ranges used in grading are defined as well as arithmetical examples can be found in:

  • the extended paragraphs 6, 7 and 8 of section 2.2 (lines 322-349)
  • paragraphs 3,4 of section 2.3 (lines 401-423)

 

Comment 2.3

    Could the author explain Figure 2 more in detail? What is the 'Rule selection'?

Author Response 2.3

Thank you for your comment. “Rule selection” in Figure 3 (old Fig. 2) was changed to “Rules File Selection”. The Rules file contains the reference ranges of the features in the different diseases. These reference ranges are the limits obtained after statistically processing (finding the min/max of) the feature values extracted from the training photographs. The reference feature ranges of a single disease are generated by the Disease Classification Trainer from the feature values extracted from the training photos of this disease. These feature values have been extracted into a Ranges file by the Disease Classifier app. The Disease Classification Trainer combines these feature ranges in a single “Rules” file, assigning to each set of feature ranges the name and the description of the disease it corresponds to. The Rules file is then returned to the Disease Classifier in order to classify new photographs in real time.

Please see:

  • The modified Fig. 3 (old Fig. 2)
  • The new paragraph and modified 2nd paragraph at the beginning of Section 2.2 (lines 203-225)
  • Author Response 2.1
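As a rough illustration of how the Trainer could merge the per-disease ranges into one Rules file, a sketch follows. The JSON layout and the field names are assumptions for illustration only; the real file format used by the two applications is not reproduced in this response.

```python
# Sketch of merging per-disease feature ranges into a single Rules file.
# File layout and field names are assumed for illustration only.
import json

def build_rules_file(per_disease_ranges, out_path="rules.json"):
    """Combine the reference ranges of every disease into one structure,
    attaching the disease name and description to each set of ranges."""
    rules = [{"disease": name,
              "description": entry["description"],
              "feature_ranges": entry["ranges"]}     # {feature: [min, max]}
             for name, entry in per_disease_ranges.items()]
    with open(out_path, "w") as f:
        json.dump(rules, f, indent=2)
    return rules

# Hypothetical ranges extracted from the training photos of two diseases
per_disease_ranges = {
    "late_blight": {"description": "Dark, water-soaked lesions",
                    "ranges": {"spot_count": [5, 25], "spot_area": [0.03, 0.15]}},
    "leaf_mold":   {"description": "Yellow patches on the upper leaf side",
                    "ranges": {"spot_count": [3, 18], "spot_area": [0.02, 0.10]}},
}
build_rules_file(per_disease_ranges)
```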

 

Comment 2.4

    More theoretical detail is required in this manuscript. For example, how are the features computed, and what is the feature range?

Author Response 2.4

Thank you very much for your comment. The feature extraction process and histogram creation were briefly described in the original paper in order to avoid repeating too much information that is available in [27]. However, the 3rd and 4th paragraphs of section 2.2 (lines 226-283) were extended to describe how the image is scanned to perform segmentation and feature extraction.

The feature ranges and the rules that are based on the statistical processing of these ranges are described in more detail in the revised manuscript. Please see the previous Author Responses 2.1-2.3.

 

Comment 2.5

As shown in Figure 5, how does the author define the ten stars?

Author Response 2.5

Thank you very much for your comment. The 10 stars correspond to example values of the same feature measured in 10 sample training images. The minimum and maximum of these values define the loose range. The 50% of the values around the median defines 50% of the loose range, which is called the strict range. Other ranges can be defined in reference to the 100% loose range.

Please see the extended/modified 7th paragraph (lines 322 - 341) of section 2.2.

These reference ranges are compared with the feature values in real time to determine the grade (please see Author Responses 2.1 and 2.2).
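A minimal sketch of one possible reading of this rule follows; the exact statistics computed by the Disease Classification Trainer may differ, and the ten sample values are hypothetical.

```python
# Sketch of deriving the loose and strict ranges of one feature from the
# values measured on the training images (the "stars" of Figure 5).
import statistics

def loose_and_strict(values):
    """Loose range: min/max of the sample values.
    Strict range: the middle ~50% of the values around the median (Q1..Q3)."""
    q1, _median, q3 = statistics.quantiles(values, n=4)
    return (min(values), max(values)), (q1, q3)

samples = [4, 6, 7, 7, 8, 9, 9, 11, 12, 15]   # hypothetical values from 10 training images
loose, strict = loose_and_strict(samples)
print("loose:", loose, "strict:", strict)
```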

 

Comment 2.6

    As shown in Table 13, are all the accuracies of references [6, 12, 13, 24, and 26] evaluated on the same dataset and with the same number of training and test images? If not, the author should not compare the experimental results with those of other methods.

Author Response 2.6

Thank you very much for your comment. Indeed, no common dataset has been used and thus it is not possible to make a strictly fair comparison; still, we think it is useful to give an indication of the accuracy achieved by similar applications, even if different datasets are used. Moreover, Table 15 (old Table 13) has been extended with additional references. We clarify that this table is presented for informative purposes and not for strict comparison in

  • the new sentences 2 and 3 of the 8th paragraph of Section 4 (lines 582-584).

 

Reviewer 3 Report

In the present work, the authors designed some mobile applications for plant disease classification trained by the end user. The work is very meaningful and interesting. Some comments to further improve the work are listed below.

1. The current title is too general. Please rename the manuscript with a new title that can specify the main content.

2. In the literature review, please investigate more recent work. Here, I suggest using a table to clearly summarize them.

3. In the introduction part, please list the motivations of the work in a single paragraph. The motivations shall relate to the literature review.

4. In the experiment, please update parameter settings for each benchmark.

5. In the experiment, please show the distributions of the responses for the training set and the test set, respectively. How did the authors overcome the problem of imbalance in real practice?

6. In the experiment, authors shall report the computational cost for each algorithm.

7. In the conclusion, please list some limitations of the proposed algorithm.

8. Please carefully check the English writing.

9. Please check your reference format. There are many typos.

Author Response

We thank the reviewers for the time they spent reading our manuscript and making valuable comments. In this revised manuscript we have addressed all the reviewer comments and we believe the paper meets the high-quality criteria of the MDPI Agronomy journal. The modifications are highlighted in yellow in the revised manuscript. Please find below our responses to each reviewer comment, along with references to the section, paragraph, sentence or lines where the reviewer can find the additional or modified text that addresses their concerns.

We remain at the disposal of the reviewers for any other modifications required.

Best Regards

Nikos Petrellis

On behalf of all the authors.

 

Reviewer 3

 

Comment 3.1

The current title is too general. Please rename the manuscript with a new title that can specify the main content.

Author Response 3.1

Thank you for your suggestion. The title was changed to “Mobile plant disease classifier, trained with a small number of images by the end user”. This title emphasizes the main features of the proposed applications:

  • the aim of the proposed system: to classify images to plant diseases
  • it is trained by the end user
  • a small training set is sufficient
  • it is appropriate for mobile platforms

 

Comment 3.2

In the literature review, please investigate more recent work. Here, I suggest using a table to clearly summarize them.

Author Response 3.2

Thank you for your suggestion. The references have been reorganized. The old references [2], [4], [7], [10], [11] and [20] have been removed since they were less relevant and older, while [10-11], [13-16] and [30-31] were added.

Please also see the restructured Introduction section (especially paragraphs 2-4) and the Discussion section (the paragraph before the last one in section 4) where the description of the new references has been incorporated.

 

Comment 3.3

In the introduction part, please list the motivations of the work in a single paragraph. The motivations shall relate to the literature review.

Author Response 3.3

Thank you for your suggestion. The 5th paragraph of the Introduction (lines 90-109) describes the weaknesses of the referenced deep learning approaches (large training sets and resources required, not extendible). Our approach attempts to offer an alternative that overcomes these weaknesses, as described in the same paragraph. A brief description of the concept follows in the next paragraph. Then, in paragraph 7 of the Introduction (lines 124-135), the differences from our previous works are listed.

 

Comment 3.4

In the experiment, please update parameter settings for each benchmark.

Author Response 3.4

Thank you for your comment.

  • The general features of the datasets are described in 2.1 that has been expanded to cover the new pear disease dataset in the last paragraph (lines 167-176)
  • The 1st paragraph of Section 3 has been modified to describe more clearly what Ra, Rb and Rc configurations are (how many reference ranges will be used for grading each disease) and the training datasets T40 and T100.
  • All the tables presented in section 3 are based on the combinations that can be made between Ra, Rb, Rc and T40, T100.
  • The experimental results are discussed in the first 5 paragraphs of Section 4
  • The thresholds used in each disease can be found in the modified Table 12 and a discussion about the thresholds can be found in the extended 7th paragraph of section 4 (lines 563-578)
  • A discussion about the complexity and the speed of the classification/training can be found in the last paragraph of section 4 (lines 599-615)

 

Comment 3.5

In the experiment, please show the distributions of the responses for training set and test set, respectively. How did author overcome the problem of imbalance in real practices?

Author Response 3.5

Thank you for your comment. In the tomato dataset, when T40 training is employed, 28.5% of the images are used for training and 71.5% for testing. The training set was also used as a test set, with the results presented in Tables 2 and 3. Then, the test set of T40 was evaluated for the Ra, Rb and Rc range configurations, with the results presented in Tables 4-9.

In T100, the distribution is reversed: 71.5% of the images are used for training and 28.5% for testing. Please see the first 5 sentences of the extended 1st paragraph of Section 3 (lines 427-432).
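(For illustration, assuming the full tomato dataset of 1,400 images used in the study, T40 corresponds to roughly 400 training and 1,000 test images, and T100 to roughly the reverse.)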

The size of the training dataset in the new pear images ranges between 16.4% and 50% of the whole dataset per disease. This intentional imbalance has been employed to study its effect on the statistical quality metrics. Please see

  • The modified 1st and the new 2nd paragraph of subsection 2.1 (lines 144-176)
  • the new 3.3 subsection
  • The discussion in Section 4 and especially paragraphs 4 (lines 531-541) and 6 (lines 552-562)

 

Comment 3.6

In the experiment, authors shall report the computational cost for each algorithm.

Author Response 3.6

Thank you very much for your suggestion. Image segmentation and feature extraction are the only computational costs of the employed classification method. All images are internally resized to a constant size of 324×182 and the image pixels are scanned in two passes: one for segmentation and one for histogram creation. During segmentation, the pixels classified as spots may be visited again during adjacent-spot merging. Consequently, the processing time of a single image is constant and short. During training, the processing time of a single image is again very small compared to the time needed by the end user to interactively select a training image, decide whether its segmentation is satisfactory, etc. Unfortunately, training images cannot be processed in batch mode in the present version, since the user has to approve the segmentation performed with the specific thresholds. However, this drawback is counterbalanced by the fact that the number of training photographs needed is small: 20-100 photographs per disease were used in the conducted experiments.

Concerning the complexity of the classification and training process please see the following:

  • Lines 105-109 in the Introduction
  • The new last paragraph of the Discussion (Section 4, lines 599-615).
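A rough sketch of this constant-cost, two-pass processing follows. The 324×182 resize matches the description above, but the gray-level thresholds, the pixel test and the handling of adjacent-spot merging are simplified assumptions rather than the application's actual code.

```python
# Sketch of the two-pass image processing described above: resize to a fixed
# size, then one pass for segmentation and one pass for histogram creation.
# Thresholds and the spot test are placeholders; the real app uses tunable
# RGB thresholds and also merges adjacent spots after the first pass.
from PIL import Image

def process(path, spot_gray_threshold=100, background_threshold=220):
    img = Image.open(path).convert("RGB").resize((324, 182))
    pixels = img.load()
    w, h = img.size

    # Pass 1: segmentation - label every pixel as background, spot or healthy leaf
    labels = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b = pixels[x, y]
            gray = (r + g + b) // 3
            if gray > background_threshold:      # bright (white-sheet) background
                labels[y][x] = "background"
            elif gray < spot_gray_threshold:     # dark lesion/spot pixel
                labels[y][x] = "spot"
            else:
                labels[y][x] = "leaf"

    # Pass 2: histogram creation - gray-level histogram of the spot pixels
    hist = [0] * 256
    for y in range(h):
        for x in range(w):
            if labels[y][x] == "spot":
                r, g, b = pixels[x, y]
                hist[(r + g + b) // 3] += 1
    return labels, hist
```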

 

 

Comment 3.7

In the conclusion, please list some limitations of the proposed algorithm.

Author Response 3.7

Thank you very much for your comment. The most important limitations concern the automatic separation of a complex background and the dependency on thresholds. These are mentioned in the new last paragraph of the Conclusions section.

 

Comment 3.8

Please carefully check the English writing.

Author Response 3.8

Thank you very much for your comment. The whole manuscript was thoroughly revised, as you can see from the highlighted modifications, paying attention to the correction and improvement of the English language used.

 

Comment 3.9

Please check your reference format. There are many typos.

Author Response 3.9

Thank you very much for your comment. The whole reference section has been reorganized based on the format of the template. Please also see Author Response 3.2.

Reviewer 4 Report

The authors propose a light vision-based method to detect tomato leaf diseases, deployed in a mobile app. I have some important issues regarding the paper:

 

I miss some existing works related to tomato leaf disease detection and a comparison with them, as they are related to this topic and should be included:

 

https://0-ieeexplore-ieee-org.brum.beds.ac.uk/document/8807737

https://0-www-sciencedirect-com.brum.beds.ac.uk/science/article/pii/S1877050920306906

https://0-www-sciencedirect-com.brum.beds.ac.uk/science/article/pii/S1877050918310159

https://www.intechopen.com/chapters/76494

https://0-ieeexplore-ieee-org.brum.beds.ac.uk/document/9231174

http://fs.unm.edu/ScArt/SmartMobileApplication.pdf

 

There are even some guided examples:

https://towardsdatascience.com/crop-plant-disease-identification-using-mobile-app-aef821d1a9bc

 

As your work uses an existing dataset and does not acquire images with IoT devices or UAVs, the paragraph at lines 38-52 is not related to the proposed work (in addition, if you keep it, it should be rewritten in order to organize it better, as it now goes from sensors to UAVs, precision livestock, NLP… too messy).

 

Citations 16 and 18: what is the difference?

 

Some sentences in the Introduction should be rewritten using connectors and giving more information about the papers you cite, in order to better understand the existing work, i.e. “A mobile application for maize diseases is discussed in [23]. A mobile application with user friendly interface that performs fungal disease detection is presented in [24]. The diagnosis is assisted with meteorological historic data and chat with the end user. Another mobile application for the segmentation of tomato images in order to estimate the damage of the plant from Tuta Absoluta pest is described in [25]. A mobile CNN approach for the diagnosis of 26 diseases from 14 crop species is presented in [26].” ------> What methods do they use?

 

As you consider a dataset where the leaves are segmented from the background, it is a much easier task to recognize the diseases, as the hard part of identifying where there is a leaf and where there is not is already done. Also, the images are normalized, so the challenge is lower in the whole procedure, for example when picking the image thresholds to obtain the ROIs. A comparison with real images should be done; as your mobile app is easy to use, this can be performed quite fast. Execution times for training with the different sizes of the training set and the time a prediction takes should also be provided.

 

The vision-based approach you propose is too dependent on the dataset. If images are normalized, segmented, isolated, etc., it is quite easy to determine their features manually. If you try the same procedure with real images, acquired by the mobile device under different illumination conditions and orientations and with a background, the accuracy is going to be much lower. I could consider the paper interesting if it had been trained and tested with a dataset collected by the mobile device, as this is the main contribution. If not, it is too specific an approach that will only work under fixed conditions, so it is not an important contribution to the field.

Author Response

We thank the reviewers for the time they spent reading our manuscript and making valuable comments. In this revised manuscript we have addressed all the reviewer comments and we believe the paper meets the high-quality criteria of the MDPI Agronomy journal. The modifications are highlighted in yellow in the revised manuscript. Please find below our responses to each reviewer comment, along with references to the section, paragraph, sentence or lines where the reviewer can find the additional or modified text that addresses their concerns.

We remain at the disposal of the reviewers for any other modifications required.

Best Regards

Nikos Petrellis

On behalf of all the authors.

 

Reviewer 4

Comment 4.1

The authors propose a light vision-based method to detect tomato leaf diseases, deployed in a mobile app. I have some important issues regarding the paper:

I miss some existing works related to tomato leaf disease detection and a comparison with them, as they are related to this topic and should be included:

https://0-ieeexplore-ieee-org.brum.beds.ac.uk/document/8807737

https://0-www-sciencedirect-com.brum.beds.ac.uk/science/article/pii/S1877050920306906

https://0-www-sciencedirect-com.brum.beds.ac.uk/science/article/pii/S1877050918310159

https://www.intechopen.com/chapters/76494

https://0-ieeexplore-ieee-org.brum.beds.ac.uk/document/9231174

http://fs.unm.edu/ScArt/SmartMobileApplication.pdf

There are even some guided examples:

https://towardsdatascience.com/crop-plant-disease-identification-using-mobile-app-aef821d1a9bc

Author Response 4.1

Thank you very much for your detailed guidance and suggestions. Most of the suggested papers have been referenced and compared in this revised paper. All the information in these links will also be taken into consideration in our future work. Please see:

  • The new references [13]-[16] from the list suggested by the reviewer
  • Their descriptions can be found in the 3rd paragraph of Introduction (lines 54-71)
  • Two additional new references have been added for pear diseases ([10]-[11])
  • Their comparison can be found in Section 4 and especially in paragraph 8 (lines 581-598).

 

Comment 4.2

As your work uses an existing dataset and does not acquire images with IoT devices or UAVs, the paragraph at lines 38-52 is not related to the proposed work (in addition, if you keep it, it should be rewritten in order to organize it better, as it now goes from sensors to UAVs, precision livestock, NLP… too messy).

Author Response 4.2

Thank you very much for your comment. The Introduction section was reorganized and the most irrelevant references have been removed (the old [2], [4], [7], [10], [11], [20] and [29]), while 8 new references that are more related to our work have been added. The structure of the Introduction section is now the following:

  • Brief reference to approaches that require advanced tools and hardware for plant monitoring (precision agriculture, IoT sensors); this part of the Introduction is now shorter than in the original paper (references [1]-[8])
  • Referenced plant disease diagnosis approaches (e.g., deep learning), emphasizing tomato/pear diseases; these approaches are also used for comparison ([9]-[16])
  • Other Mobile apps used for plant monitoring ([17]-[26])
  • Motivation of our work.
  • Overview of our work
  • Our contribution in comparison with our older work
  • Paper structure

 

 

Comment 4.3

Citations 16 and 18: what is the difference?

Author Response 4.3

These references are now [17] and [19]. More general agriculture tasks are performed by the approaches reviewed in [17], while [19] reviews applications for plant disease signature detection, i.e., for disease diagnosis.

Please see the modified sentences 2-4 of the 4th paragraph in the Introduction section (lines 73-75)

 

Comment 4.4

Some sentences in the Introduction should be rewritten using connectors and giving more information about the papers you cite in order to understand better the existing work. i.e. “A mobile application for maize diseases is discussed in [23]. A mobile application with user friendly interface that performs fungal disease detection is presented in [24]. The diagnosis is assisted with meteorological historic data and chat with the end user. Another mobile application for the segmentation of tomato images in order to estimate the damage of the plant from Tuta Absoluta pest is described in [25]. A mobile CNN approach for the diagnosis of 26 diseases from 14 crop species is presented in [26]. " ------> What methods do they use?

Author Response 4.4

Thank you very much for your comment. We tried to improve the whole paper, clarifying expressions that were ambiguous in the previous version. In the revised manuscript we give brief information about the background of the referenced approaches (implementation type, supported plants, etc.) in the Introduction. More details are given for the references that are compared in Section 4 (training:test set ratio, classification method, diseases treated), and especially in the Notes column of Table 15. Please see the revised Sections 1 and 4.

 

Comment 4.5

As you consider a dataset where the leaves are segmented from the background, it is a much easier task to recognize the diseases, as the hard part of identifying where there is a leaf and where there is not is already done. Also, the images are normalized, so the challenge is lower in the whole procedure, for example when picking the image thresholds to obtain the ROIs. A comparison with real images should be done; as your mobile app is easy to use, this can be performed quite fast.

Author Response 4.5

We have tested an additional dataset of pear diseases that consists of photographs that are neither segmented nor normalized. Separation of a complicated background is not currently supported, but the leaves in this dataset were placed on a white sheet of paper to create a background that is brighter than the leaves. In some other cases the background can also be separated by the application without this trick (e.g., yellow citrus fruits or red tomatoes against the green leaf background). The photographs in the pear disease dataset are also not normalized. The accuracy results are comparable to the ones achieved on the segmented photographs of the tomato dataset. Please see

  • the new last sentence of the Abstract (lines 20-22)
  • the last 2 sentences of the paragraph before the last one in the Introduction (lines 134-135)
  • the new last sentence of the 2nd paragraph of section 2.2 (lines 223-225)
  • the new subsection 3.3.
  • the new paragraphs 5 and 6 in section 4 (lines 542-562)
  • Please also pay attention to the new Fig. 2, Tables 12 and 13

 

Comment 4.6

Execution times for training with the different sizes of the training set and the time a prediction takes should also be provided.

Author Response 4.6

Thank you very much for your suggestion. Image segmentation and feature extraction are the only computational costs of the employed classification method. All images are internally resized to a constant size of 324×182 and the image pixels are scanned in two passes: one for segmentation and one for histogram creation. During segmentation, the pixels classified as spots may be visited again during adjacent-spot merging. Consequently, the processing time of a single image is constant and short. During training, the processing time of a single image is again very small compared to the time needed by the end user to interactively select a training image, decide whether its segmentation is satisfactory, etc. Unfortunately, training images cannot be processed in batch mode in the present version, since the user has to approve the segmentation performed with the specific thresholds. However, this drawback is counterbalanced by the fact that the number of training photographs needed is small: 20-100 photographs per disease were used in the conducted experiments.

Concerning the complexity of the classification and training process please see the following:

  • Lines 105-109 in the Introduction
  • The new last paragraph of the Discussion (Section 4, lines 599-615).

 

 

 

Comment 4.7

The vision-based approach you propose is too dependent on the dataset. If images are normalized, segmented, isolated, etc., it is quite easy to determine their features manually. If you try the same procedure with real images, acquired by the mobile device under different illumination conditions and orientations and with a background, the accuracy is going to be much lower. I could consider the paper interesting if it had been trained and tested with a dataset collected by the mobile device, as this is the main contribution. If not, it is too specific an approach that will only work under fixed conditions, so it is not an important contribution to the field.

Author Response 4.7

Thank you very much for your valuable comment. As we already explained in our response to Comment 4.5, segmenting a complicated background is indeed the only limiting factor of our approach. However, as with the pear disease dataset that we tested, placing a leaf on a white sheet or a white bench creates, in a very simple way, an artificial white background that makes its separation easy. Moreover, photographs of fruits with a distinct color, such as red tomatoes or yellow citrus against a green leaf background, do not even need such an artificial background and can be separated with the available RGB thresholds. Orientation and size do not affect the employed classification method at all.

Please see:

  • The modified 1st paragraph of section 2.1 (lines 144-163)
  • The new 2nd paragraph of section 2.1 (lines 167-176)
  • New Fig. 2
  • The modified 3rd sentence of the 4th paragraph of section 4 (lines 537-538)
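A minimal sketch of the kind of RGB-threshold separation referred to above follows. The threshold values and the choice between per-channel limits and channel ratios are assumptions for illustration; they are not the application's built-in defaults.

```python
# Sketch of simple RGB-threshold background separation: either drop the bright
# pixels of an artificial white-sheet background, or keep only clearly red
# pixels (e.g., a ripe tomato against green foliage). Values are illustrative.
from PIL import Image

def foreground_mask(path, mode="white_sheet", white_threshold=200, red_dominance=1.3):
    img = Image.open(path).convert("RGB")
    px = img.load()
    w, h = img.size
    mask = Image.new("L", (w, h))        # 255 = foreground, 0 = background
    mpx = mask.load()
    for y in range(h):
        for x in range(w):
            r, g, b = px[x, y]
            if mode == "white_sheet":
                keep = min(r, g, b) <= white_threshold       # not bright white
            else:  # "red_fruit"
                keep = r > red_dominance * g and r > red_dominance * b
            mpx[x, y] = 255 if keep else 0
    return mask
```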

Round 2

Reviewer 2 Report

The revised manuscript has changed substantially and is acceptable. The manuscript is ready for publication in the journal. I accept the revised manuscript.

Reviewer 3 Report

I have no more comments.

Reviewer 4 Report

The authors have significantly improved the paper, although the images are still too nice for real-world applications.
