Article
Peer-Review Record

Boundary Loss-Based 2.5D Fully Convolutional Neural Networks Approach for Segmentation: A Case Study of the Liver and Tumor on Computed Tomography

by Yuexing Han 1,2,†, Xiaolong Li 2,†, Bing Wang 2 and Lu Wang 2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 7 April 2021 / Revised: 26 April 2021 / Accepted: 28 April 2021 / Published: 30 April 2021

Round 1

Reviewer 1 Report

The manuscript is interesting and well organized. However, the use of boundary loss is not novel, and many studies have used it to segment medical images. The authors should add an ablation study that discusses different loss functions and possible configurations of the proposed method.

As shown in Table 4, the improvements in the results are modest when compared with those of the other methods. Recently published studies suggest different techniques to enlarge the receptive field of segmentation models and increase the learning ability of the model without information loss, for instance ‘’LungINFseg: Segmenting COVID-19 Infected Regions in Lung CT Images Based on a Receptive-Field-Aware Deep Learning Framework’’, to name but a few. Please use the Receptive-Field-Aware module in your model or discuss such methods in the manuscript. Other evaluation metrics, such as IoU and distance measures, should be used to demonstrate the efficacy of the proposed method. The authors are recommended to visualize true positive, true negative, false positive and false negative segmentations on the segmented images using different colors. The authors should also add a statistical analysis (e.g., Bland–Altman plots and standard deviations).
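For illustration, the overlap metrics and colored error visualization requested above could be computed as in the following minimal NumPy sketch (function names and masks are hypothetical; this is not the authors' evaluation code):

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Compute IoU (Jaccard index) and Dice coefficient from binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    return iou, dice

def error_overlay(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """RGB overlay: true positives green, false positives red,
    false negatives blue, true negatives black."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    rgb = np.zeros(pred.shape + (3,), dtype=np.uint8)
    rgb[np.logical_and(pred, gt)] = (0, 255, 0)    # true positive
    rgb[np.logical_and(pred, ~gt)] = (255, 0, 0)   # false positive
    rgb[np.logical_and(~pred, gt)] = (0, 0, 255)   # false negative
    return rgb
```

For example, a prediction that covers the ground truth plus one extra pixel yields IoU 0.5 and Dice 2/3 on a two-pixel image, and the overlay marks the extra pixel in red.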

Also, the following minor issues should be considered:

The first paragraph of the introduction could be removed or shortened.

The expressions ‘’complicated images’’ or ‘’complex images’’ are improper and unclear. Please use a more suitable description.

The English of the manuscript needs major revisions. Some expressions should be changes, for instance ‘’so on’’.

In line 76 of page 2, the explanation of the histogram of the images based on the Hounsfield Unit should be improved.

In the caption of Figure 1, ‘’of the two’’ could be rephrased. 

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper entitled „Boundary Loss-based 2.5D Fully Convolutional Neural Networks Approach for Segmentation: A Case Study of the Liver and Tumor” by Han et al. deals with a 2.5D deep learning method for image segmentation. The whole process is divided into two stages: the first stage segments the liver shape from the image, and the second stage extracts the tumor shape based on the results of the first stage.

The experimental results of the segmentation stage are compared with recent studies on two image datasets, 3DIRCADb and LiTS.

In order to explore boundary features in the segmentation stage, the authors propose a cascading 2.5D FCN-based framework. The proposed method is verified using liver cancer images as experimental cases. The segmentation method is designed so as to avoid overlapping areas.

In contrast to other studies, the neural network training was performed with three loss functions: (i) the cross-entropy loss function, (ii) the similarity coefficient loss function and (iii) the contour loss function.
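The three loss families mentioned here could be sketched as follows (a hedged NumPy illustration of the standard formulations; the exact definitions and weighting used in the paper may differ, and `signed_dist` is an assumed precomputed signed distance map to the ground-truth boundary, negative inside the object):

```python
import numpy as np

def cross_entropy_loss(p, y, eps=1e-8):
    """Binary cross-entropy over predicted probabilities p and labels y."""
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def dice_loss(p, y, eps=1e-8):
    """Soft Dice (similarity-coefficient) loss: 1 - Dice overlap."""
    inter = np.sum(p * y)
    return 1 - (2 * inter + eps) / (np.sum(p) + np.sum(y) + eps)

def boundary_loss(p, signed_dist):
    """Boundary (contour) loss: probabilities weighted by the signed
    distance to the ground-truth boundary; lower when mass stays inside."""
    return np.mean(p * signed_dist)
```

In the common boundary-loss setup, the signed distance map is computed once from the ground-truth mask, so the extra cost at training time is a single element-wise product per batch.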

A few observations:

The relevant and recent studies are enumerated in the introduction. This section includes the methods with which the authors compared their results. However, there are a number of undefined acronyms, e.g., COCO, VOC, GPU and so on. Please spell out the meaning of all acronyms.

In row 110, the word “mothed” should be replaced with “method”.

Milletari et al. [18,36] corresponds to reference [43]; please separate the references into Li et al. [18] and Milletari et al. [36].

In row 358, the authors state „we dilate each slice once as the final segmentation result”; which method did you use?
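One common reading of "dilate each slice once" is a single binary morphological dilation applied per 2D slice; a possible implementation (purely illustrative, since the authors' actual method is unspecified) is:

```python
import numpy as np

def dilate_once(mask: np.ndarray) -> np.ndarray:
    """One binary dilation with a 3x3 structuring element (pure NumPy):
    each output pixel is True if any 8-neighbor (or itself) is True."""
    padded = np.pad(mask.astype(bool), 1)  # pad border with False
    out = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out
```

Equivalently, `scipy.ndimage.binary_dilation` with its default structuring element performs the same operation; stating which structuring element and connectivity were used would answer the question.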

Please improve the quality of Figures 5, 6 and 7; the values on the axes are unreadable.

 

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 3 Report

Dear Editors,

the manuscript is overall interesting but needs major improvements.

Please find my detailed review report.

Many thanks for your consideration and take care.

This paper presents a deep learning approach for liver and liver tumor segmentation on Computed Tomography (CT). The method is based on a Hybrid V-Net. The experiments were performed on the 3DIRCADb and Liver Tumor Segmentation (LiTS) 2019 databases and a comparison against the state-of-the-art is provided.

The work is overall interesting but the clinical background needs to be extended. In addition, the manuscript needs careful proofreading to further improve the English language.

In conclusion, the Authors should carefully address the following critical points to be further processed for publication.

1) Title: 'Computed Tomography' or 'CT' should appear in the title.

2) Keywords: 'CT' should be replaced with 'Computed Tomography'.

3) The Abstract has to be substantially improved. The novelties have to be pointed out, and final remarks are missing.

4) The use of a 2.5D model needs to be clarified and justified.

5) Figure 1: The CT slices have to be rotated in an anterior-posterior manner.
Also, the label for the 3D volumes might be revised in '3D rendering' and also rotated accordingly.

6) The name of 'Section 2.3. The Methods Based Loss Optimization' should be refined and clarified.

7) Section 3.1: The Authors refer to 'The brightness value of the CT data'. It is more correct to use the term 'density' for CT in terms of Hounsfield units.
Please consider introducing and discussing this interesting tissue-specific sub-segmentation approach based on CT density: Rundo L., et al. (2020) Tissue-specific and interpretable sub-segmentation of whole tumour burden on CT images by unsupervised fuzzy clustering. Computers in Biology and Medicine, 103751. DOI: 10.1016/j.compbiomed.2020.103751.

8) Figure 3: the CT slices have to be correctly rotated like in Figure 8.
Besides, the red and green colored ROIs might be displayed in alpha-blending transparency.

9) Conditional Random Fields (CRFs) have been successfully combined with deep learning techniques in medical imaging:
Zormpas-Petridis, K., Failmezger, H., Raza, S. E. A., Roxanis, I., Jamin, Y., Yuan, Y. (2019) Superpixel-based Conditional Random Fields (SuperCRF): Incorporating global and local context for enhanced deep learning in melanoma histopathology. Frontiers in Oncology, 9, 1045. DOI: 10.3389/fonc.2019.01045.
Interestingly, CRFs as a Recurrent Neural Network (CRF-RNN) [Zheng, S., Jayasumana, S., Romera-Paredes, B., et al. (2015). Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1529-1537). DOI: 10.1109/ICCV.2015.179] have been recently applied to prostate cancer detection:
Lapa, P., Castelli, M., Gonçalves, I., Sala, E., Rundo, L. (2020). A hybrid end-to-end approach integrating conditional random fields into CNNs for prostate cancer detection on MRI. Applied Sciences, 10(1), 338. DOI: 10.3390/app10010338
Please consider discussing these relevant CRF applications.

10) All the acronyms have to be defined. For instance: 'FCN', 'RNNs'.

11) Section 4: a statistical treatment for performance comparisons would be beneficial.

12) The literature background on liver cancer CT image segmentation might be improved by discussing alternative approaches [Almotairi, S., Kareem, G., Aouf, M., Almutairi, B., & Salem, M. A. M. (2020) Liver tumor segmentation in CT scans using modified SegNet. Sensors, 20(5), 1516. DOI: 10.3390/s20051516] and other loss functions [Ma, Jun, Jianan Chen, Matthew Ng, Rui Huang, Yu Li, Chen Li, Xiaoping Yang, and Anne L. Martel. "Loss Odyssey in Medical Image Segmentation." Medical Image Analysis (2021): 102035. DOI: 10.1016/j.media.2021.102035].

13) Section 5: Conclusions should be extended. Moreover, future work should be supported also by a feasibility plan.

14) The English language needs improvements. Colloquial expressions should be avoided, such as 'and so on'.
Other examples of inaccuracies:
'fundamental problem in computer vision' -> 'fundamental task in computer vision'; 'image’s processing targets' -> 'image processing targets'; 'artificially defined features' -> 'hand-crafted features'; 'region growth' -> 'region growing'; 'threshold mothed' -> 'threshold method'; 'Post-process' -> 'Post-processing' (also in the labels in Figures 2 and 5); 'to segment complicated images' -> 'complicated segmentation tasks'.

15) The Reference style should be strictly compliant with the MDPI guidelines. For instance, MDPI requires that "Cited journals should be abbreviated according to ISO 4 rules".
Please refer to the following link and abbreviate all the journal names accordingly:
https://0-www-mdpi-com.brum.beds.ac.uk/authors/references

 

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors have addressed most of my comments. Thanks.
