Article
Peer-Review Record

Efficient Binarized Convolutional Layers for Visual Inspection Applications on Resource-Limited FPGAs and ASICs

by Taylor Simons and Dah-Jye Lee *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 28 May 2021 / Revised: 15 June 2021 / Accepted: 17 June 2021 / Published: 23 June 2021

Round 1

Reviewer 1 Report

This paper presents a new binarized convolutional layer, called the Neural Jet Features layer, that combines the power of deep learning with classic computer vision kernels. The authors show that Neural Jet Features perform comparably to standard BNN convolutional layers and tend to be more stable when training small models, and thus may provide an efficient solution for resource-limited systems. However, there are still some problems to be addressed:
1. Please add the number of calculation instructions and the actual test speed for the three algorithms in the experimental part.
2. Avoid references in abstracts.
3. Some grammatical errors exist; please correct them.

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

Dear authors,

You have proposed a binarized convolutional layer for efficiency-critical onboard applications. You have experimented on multiple datasets. The paper is overall well written and nicely structured. We have the following concerns:

  1. Instead of only showing the accuracy in figures, we would suggest that the authors also summarize and report the results in some tables for comparison.
  2. The authors aim for defect detection applications. Do you have any qualitative defect detection results for illustration?
  3. ImageNet should be referenced when mentioning it.
  4. The authors aim for efficient networks. We would suggest that the authors present more computation complexity results like FLOPs/MACs, memory requirements, power consumptions, parameters, running time, etc.
  5. This paper does not have a related work section. Please consider discussing and comparing with more state-of-the-art works, and if it is possible, conducting more ablation studies to verify the effectiveness of the proposed components in your method.
  6. How about recent attention- and transformer-based architectures like ACNet and ViT, and state-of-the-art efficient networks like GhostNet? Please discuss these. Would your method be suitable for these architectures? [*] "ACNet: Attention based network to exploit complementary features for RGBD semantic segmentation." ICIP 2019. [*] "An image is worth 16x16 words: Transformers for image recognition at scale." ICLR 2021. [*] "GhostNet: More features from cheap operations." CVPR 2020.

For these reasons, a revision is recommended.

Sincerely,

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The revised version answers all the questions raised.

Author Response

Thank you so much for your help.  

Reviewer 2 Report

Dear authors,

        Thank you very much for your responses and revision. Most of the concerns have been addressed. We would suggest that this paper be accepted. 

        Just two minor points that the authors are suggested to incorporate in the final version:

  1. Many figures and the texts in the figures are very blurry. Please improve the quality.
  2. We would suggest that the authors discuss some future research directions in the conclusion section, e.g., applying attention-based methods.

Author Response

We appreciate all your suggestions and positive comments.

  1. We have improved a few figures to make the text clear. Please review and let us know if they are acceptable. We have noticed that the PDF sometimes loses quality after going through the submission system, perhaps due to compression. All figures look good now.
  2. We have added a sentence and two new references at the end of the Conclusion section.