Article
Peer-Review Record

Hypernetwork Representation Learning with the Set Constraint

by Yu Zhu 1 and Haixing Zhao 2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 31 January 2022 / Revised: 25 February 2022 / Accepted: 28 February 2022 / Published: 4 March 2022
(This article belongs to the Special Issue Deep Learning with Differential Equations)

Round 1

Reviewer 1 Report

In this work, the authors present a hypernetwork representation learning method with a set constraint; the work is well organized and the approach is promising. However, I have some comments to enhance this work.

  • In both the abstract and the conclusions, I am missing a quantitative assessment of the extent to which this architecture improves the basic parameters compared with the work of other researchers.

  • The contributions of this work are not clear. I went through the abstract and the introduction and did not grasp the main contribution. I suggest the authors spend significant effort on clarifying the main contribution of this research.

  • The language has some typos and should be improved to ensure better readability for the readers.

  • In the literature review, the authors only list and introduce articles one by one. The research gaps between the reviewed studies and the problem formulation addressed by the proposed work are not discussed sufficiently.

  • In the first paragraph of the literature review, references are missing. Furthermore, I recommend the authors add a table comparing the different approaches (e.g., Skip-Gram, LINE, SDNE) with the proposed approach, including the pros and cons of each.

  • Check the mathematical notation and explain what the following symbols mean: g in Eq. (6), and the first symbol and alpha in Eq. (10).

  • In the last paragraph before the conclusion, the discussion of results needs to include the strengths and weaknesses of the proposed algorithm.

  • Update the references with recent references.

Author Response

Please see the attachment

Author Response File: Author Response.docx

Reviewer 2 Report

The paper investigates a very interesting problem related to machine learning strategies that capture everyday-life situations more realistically. To this end, a hypernetwork representation learning framework is proposed, and its superiority with respect to other existing techniques is demonstrated through case examples. For this reason I think the work is of interest to the scientific community and worth publishing. The paper is almost perfect, but there are a few small issues that need to be addressed before it can be published.

 

  • Lines 164-166

“…the hypernetwork topology structure has the characteristic of preserving the hypergraph topology structure better than other transformation strategies.”

So maybe a reference is missing here. Why is that? What is the clear or sought advantage w.r.t. the other strategies? Please explain this clearly.

  • Figure 2 – Lines 257-258

The figure presents an interesting and well-drawn scheme. However, there is no legend explaining the blue and red colors. While their meaning can be inferred, I believe it is worth making it explicit. Please revise.

  • Lines 319-320

“Compared to DeepWalk, the computational efficiency of HRSC is improved.” While this appears clear from the discussion, it would be good to list the reasons why the computational efficiency is improved, leaving to the later experiments the task of proving the improvement quantitatively.

  • Line 383

The sentence “HRSC outperforms other best baseline method (i.e., DeepWalk) by about 1%” explains the superiority of the proposed method. Personally, I think a larger discussion is needed on why a 1% improvement over the previous best result justifies adopting the methodology. In engineering, 1% may or may not be significant depending on the context. Please add a couple of comments about why it matters here. Also, is there any relationship with computational time, or is this irrelevant? A higher accuracy (1% or so) at the cost of a higher computational time might or might not be justified.

Comments for author File: Comments.pdf

 
