Peer-Review Record

In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM

by Jun-Ying Huang 1, Jing-Lin Syu 2, Yao-Tung Tsou 2,*, Sy-Yen Kuo 1 and Ching-Ray Chang 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 7 March 2022 / Revised: 9 April 2022 / Accepted: 11 April 2022 / Published: 14 April 2022
(This article belongs to the Special Issue Advances of Future IoE Wireless Network Technology)

Round 1

Reviewer 1 Report

In the present manuscript, the authors report an improved in-memory computing architecture, based on spin-orbit torque MRAM, for the training and computation of convolutional neural networks (CNNs). As the authors explain, current architectures suffer from numerous issues, most notably high energy consumption, limited algorithmic efficiency, and the large number of read/write operations performed while computing a CNN. By combining the new CIM architecture with a distributed arithmetic algorithm, the authors report a reduction of 43.3% in read times and 22.7% in write times.
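As context for this summary, the sketch below illustrates distributed arithmetic (DA) in its textbook form: a dot product against a fixed weight kernel is computed bit-serially, replacing multipliers with reads from a precomputed lookup table, which is why DA pairs naturally with in-memory lookup structures. This is a minimal, hypothetical model for illustration only; the weights, inputs, and bit width below are invented here, and it is not the authors' CIM implementation.

```python
# Minimal sketch of distributed arithmetic (DA) for a fixed-weight dot
# product. Illustrative only; not the authors' SOT-MRAM CIM design.
from itertools import product

def build_da_lut(weights):
    """Precompute sum(w_k for selected k) for every input bit pattern."""
    k = len(weights)
    return [sum(w for w, bit in zip(weights, pattern) if bit)
            for pattern in product((0, 1), repeat=k)]  # index = bit pattern

def da_dot(weights, inputs, bits=8):
    """Bit-serial dot product: one LUT read per input bit plane."""
    lut = build_da_lut(weights)
    acc = 0
    for b in range(bits):
        # Gather bit b of every input into one LUT address (MSB = inputs[0]).
        addr = 0
        for x in inputs:
            addr = (addr << 1) | ((x >> b) & 1)
        acc += lut[addr] << b  # shift-accumulate the partial sum
    return acc

weights = [3, -1, 4, 2]    # hypothetical fixed kernel weights
inputs = [17, 5, 200, 33]  # hypothetical unsigned 8-bit activations
assert da_dot(weights, inputs) == sum(w * x for w, x in zip(weights, inputs))
```

Note that the multiply stage disappears entirely; only table reads and shift-adds remain, which is the property that lets a memory array do the heavy lifting.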

 

I believe the manuscript is well written and important for the community. However, I have some comments that I believe will improve the current manuscript:

 

  • The authors did a great job of summarizing important tools in the field; however, they should strengthen the link between their work and the current literature. In many instances in the introduction, for example, the authors make arguments that are connected to other studies but fail to provide the respective citations. One example is their discussion of the von Neumann architecture, but there are many others. The authors should make a more consistent connection with the literature.

  • The authors focus on reporting the reduction in read/write times and in power consumption. Their results are consistent, but have the authors observed any differences in the accuracy of the algorithm?

  • In addition, the authors should give more details about what the CNN is doing. Is this a standard machine-learning experiment against which the accuracy can be compared? What is the input image? What is being classified? This information is important so that others can make additional comparisons.

 

  • Some figures could be merged into a single one; examples are Figs. 20, 21, and 22, or Figs. 24, 25, and 26. Please try to reduce the number of figures so that the paper does not become too long.

 

  • Page 3: please correct “windowalong”.

 

  • Fig 3: please improve the quality of the table.

 

  • Fig. 18: in the figure, the file should be States.txt and not “Stas.txt”; please make this consistent.

Author Response

REVISION STATEMENT

Manuscript ID electronics-1648214, entitled “In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM”

We would like to express our gratitude to the reviewers for their valuable comments, which were very constructive and helpful in clarifying the manuscript. We also thank the editor for the effort in handling our manuscript.

We have taken the reviewers’ comments into careful consideration in revising our manuscript. Our responses to the reviewers’ comments are described in detail in the attached file.

Author Response File: Author Response.pdf

Reviewer 2 Report

In this paper, the authors propose a computing-in-memory (CIM) architecture for CNNs to overcome memory bottlenecks. The authors claim that their method can achieve shorter clock periods and reduce read times by up to 43.3% without the need for additional circuits. Please see my detailed feedback below.

 

  1. The authors state that the CIM architecture can achieve low memory-access latency, parallel operation, and ultra-low power consumption, and that close access to its arithmetic logic unit can overcome the bottlenecks of the von Neumann architecture. Can the authors quantify each of these aspects separately?
  2. How can on-chip memory support CNN-based real-time applications? Please elaborate.
  3. How does the proposed solution compare with, for example, Intel’s Movidius Neural Compute Stick? What has Intel done, and how can the proposed solution be compared with it? Please quantify and explain the merits/demerits.
  4. Can the proposed in-memory architecture be used for CPUs, GPUs, FPGAs, and ASICs? If yes, how can it be used? If not, what are the limitations of the design? Please critically analyse.
  5. How will the bus interface work with slower peripherals? A very efficient in-memory method fed by a slower peripheral input to the CNN will lose much of the gain it achieves. How would you analyse this aspect in the context of the solution offered?
  6. What were the simulation parameters for each circuit block presented? These should be explicitly stated in the paper for reproducibility.
  7. How did the authors quantify the changing magnetic field of the MRAM? Please explain.
  8. The proposed methods were tested with very basic circuits. I want to see the proposed method actually implemented in a CNN-based application and benchmarked.
  9. The MRAM circuit is the same as that in the paper below. May I know what the differences are between your work and the work presented in that paper? This aspect is crucial; without clarity on it, the contribution of the paper cannot be ascertained.

 

  M. Kazemi, G. E. Rowlands, E. Ipek, R. A. Buhrman, and E. G. Friedman, “Compact model for spin–orbit magnetic tunnel junctions,” IEEE Transactions on Electron Devices, vol. 63, no. 2, pp. 848–855, 2016.

  10. My overall impression of the paper is that several entities were cherry-picked from different sources (for example, the StrongARM latch, or the MRAM circuit from the above-mentioned paper) and simulated. Could the authors draw a flow chart/workflow of their work in which they explicitly state which blocks were reproduced from other papers/sources and what the actual contribution of this work is? It is not clear at present.
  11. The conclusion section is very sketchy; please clearly state the contributions of this paper and how they differ from what already exists.

Author Response

REVISION STATEMENT

Manuscript ID electronics-1648214, entitled “In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM”

We would like to express our gratitude to the reviewers for their valuable comments, which were very constructive and helpful in clarifying the manuscript. We also thank the editor for the effort in handling our manuscript.

We have taken the reviewers’ comments into careful consideration in revising our manuscript. Our responses to the reviewers’ comments are described in detail in the attached file.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Thank you for addressing my comments. Regarding the questions raised in my previous round, I was expecting the authors to address the comments in the revised manuscript and to tell the reviewer explicitly where (page numbers, line numbers) the changes had been incorporated. The authors did address my comment #6 by stating that it was incorporated in Section 4.1; however, no other comments were explicitly referred to in the manuscript. Could the authors reply and state exactly where (page numbers, line numbers) the changes have been incorporated in the manuscript so that I can track them? This is not possible at present. The reason for raising the questions was not only for the authors to respond; it was rather for them to address the points explicitly in the manuscript so as to improve its quality and readability.

In the Conclusion section, please rephrase the following sentence:

"In addition, we evaluated that our CIM model applied to Arm A9 CPU
can reduce CPU power consumption in different manners".

 

Author Response

REVISION STATEMENT

Manuscript ID electronics-1648214-R2, entitled “In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM”

We would like to express our gratitude to the reviewers for their valuable comments, which were very constructive and helpful in clarifying the manuscript. We also thank the editor for the effort in handling our manuscript.

We have taken the reviewers’ comments into careful consideration in revising our manuscript. In particular, we have explicitly stated exactly where (paragraph, section, and page number) the changes have been incorporated. Moreover, we revised the sentence “In addition, we evaluated that our CIM model applied to Arm A9 CPU can reduce CPU power consumption in different manners” to “Additionally, we evaluated that a CIM model running on an Arm A9 CPU can significantly reduce power consumption.” Please see the attachment.

Author Response File: Author Response.pdf

Round 3

Reviewer 2 Report

In the revised version, in Section 1, the references now start with [25], [26], and then [4]. The authors are rushing through without paying careful attention to the details. Could the authors take their time, look into this carefully, and make their article presentable? Overall, I appreciate the authors’ work and hope my suggestions have helped them improve their manuscript. I am happy to accept the article after the authors have made minor corrections and arranged all references in the order in which they are cited.

Author Response

REVISION STATEMENT

Manuscript ID electronics-1648214-R3, entitled “In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM”

We would like to express our gratitude to the reviewers for their valuable comments, which were very constructive and helpful in clarifying the manuscript. We also thank the editor for the effort in handling our manuscript.

We have taken the reviewers’ comments into careful consideration in revising our manuscript. Our responses to the reviewers’ comments are described in detail as follows.

Question:

In the revised version, in Section 1, the references now start with [25], [26], and then [4]. The authors are rushing through without paying careful attention to the details. Could the authors take their time, look into this carefully, and make their article presentable? Overall, I appreciate the authors’ work and hope my suggestions have helped them improve their manuscript. I am happy to accept the article after the authors have made minor corrections and arranged all references in the order in which they are cited.

Reply:

Thank you for the kind reminder. We have fixed the out-of-order references and arranged all references in order of citation, per the reviewer’s suggestion. Please refer to the newest version of our manuscript.
