Article
Peer-Review Record

Grade Prediction Modeling in Hybrid Learning Environments for Sustainable Engineering Education

by Zoe Kanetaki 1, Constantinos Stergiou 1, Georgios Bekas 1, Sébastien Jacques 2,*, Christos Troussas 3, Cleo Sgouropoulou 3 and Abdeldjalil Ouahabi 4
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Sustainability 2022, 14(9), 5205; https://doi.org/10.3390/su14095205
Submission received: 8 March 2022 / Revised: 19 April 2022 / Accepted: 21 April 2022 / Published: 26 April 2022

Round 1

Reviewer 1 Report

The general impression of the paper is positive. The grade prediction model can be successfully used by teachers and university managers.

Besides, there are some weak points that I would like to note:

The structure of the paper should be improved. The Introduction contains a part of the literature analysis that, from my point of view, should be transferred to a separate section "Theoretical background", or at least placed under an additional subheading with the same title. Also, the literature analysis should be expanded.

The results of the validity test of collected data are not presented.

In the Conclusion section, it would be great to emphasize the theoretical and practical implications of the study in a separate paragraph, which would enhance the contribution of your research. Also, the limitations of the study are absent.

 

Author Response

Thank you for your thorough review of our paper.

We are sorry for the delay in resubmitting the article, but we have made sure to respond to all of your comments and suggestions, both on the substance and form of the manuscript.

Please find below our responses given point-by-point.

Please also refer to the revised manuscript whose changes are highlighted in yellow.

 

Reviewer#1, Concern#1: The general impression of the paper is positive. The grade prediction model can be successfully used by teachers and university managers.

Besides, there are some weak points that I would like to note:

The structure of the paper should be improved. The Introduction contains a part of the literature analysis that, from my point of view, should be transferred to a separate section "Theoretical background", or at least placed under an additional subheading with the same title. Also, the literature analysis should be expanded.

Author response: Thank you very much for appreciating our work. The introduction has been revised very carefully and clearly states the objectives and expectations of the manuscript. The literature review is now addressed in section 2 of the revised manuscript. It is therefore independent of the general introduction.

Author action: In the revised version of the manuscript, section 1 (general introduction) has been separated and section 2 (related works) has been added and expanded. In total, 15 related articles were added to the references section [1,7,9,11,13,18,42,43,44,45,48,49,50,51,52].

 

Reviewer#1, Concern#2: The results of the validity test of collected data are not presented.

Author response: In response to your comment, we have implemented a statistical method that is very common in the field of operations research and education to determine the validity of the data collected: the calculation of Cronbach's alpha.

Author action: In the revised version of the manuscript, subchapter "4.3 Validity of collected data" has been added to Chapter 4 (please see lines 410-434).
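For illustration only, a minimal sketch of how Cronbach's alpha can be computed from a matrix of survey item responses (hypothetical data and variable names; not the authors' actual implementation):

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
        k = items.shape[1]                          # number of survey items
        item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
        total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical example: 5 respondents answering 4 Likert-scale items.
    responses = np.array([
        [4, 5, 4, 4],
        [3, 3, 2, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 2],
        [4, 4, 3, 4],
    ])
    print(round(cronbach_alpha(responses), 3))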

 

Reviewer#1, Concern#3: In the Conclusion section, it would be great to emphasize the theoretical and practical implications of the study in a separate paragraph, which would enhance the contribution of your research. Also, the limitations of the study are absent.

Author response: The practical implications of the research are now stated, and the limitations of the work are outlined in the concluding section, just before the prospects for future work.

Author action: In the revised version of the manuscript, please refer to lines 700-718.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors propose a case study on grade prediction modeling in a hybrid learning environment.

The work is well written and well organized. Experiments have been conducted on real data to show the effectiveness of the proposed approach. 

However, my major concern regards the novelty of this paper. It is not clear what the new contribution of the paper is with respect to the two previously published works [26] and [31]. The authors must clearly state the differences from the previous works and explain how the proposed work is novel.

Moreover, the predictive values on the two considered datasets show that the GLAR model is not able to correctly identify (and predict) the fail class. The data are unbalanced, which could be one of the causes of these results. However, this is a big pitfall in the results, since the fail class is the one we are mostly interested in. The authors should clarify this point.

Moreover, no comparison with other methods has been proposed. Is there a reason that prevents such a comparison? In my opinion, in order to obtain general results, a comparison with (at least) different algorithms and different data should be added.

 

Some minor comments:

  • Figure 1 should be described in the text. Please check it, since there is a typo in the first block (exclussive).
  • Page 8, row 319 mentions 36.62%. It is not clear what this value refers to.

Author Response

Thank you for your thorough review of our paper.

We are sorry for the delay in resubmitting the article, but we have made sure to respond to all of your comments and suggestions, both on the substance and form of the manuscript.

Please find below our responses given point-by-point.

Please also refer to the revised manuscript whose changes are highlighted in yellow.

 

Reviewer#2, Concern#1: The authors propose a case study on grade prediction modeling in a hybrid learning environment.

The work is well written and well organized. Experiments have been conducted on real data to show the effectiveness of the proposed approach.

However, my major concern regards the novelty of this paper. It is not clear what the new contribution of the paper is with respect to the two previously published works [26] and [31]. The authors must clearly state the differences from the previous works and explain how the proposed work is novel.

Author response: There is a major difference between the two GLAR models. The one established in 2020-2021, i.e., at the very beginning of the health crisis, takes into account exclusively online learning modes. We refined the existing model to better account for the online situations of hybrid learning modes. To do so, we identified and implemented new factors based on the results of surveys conducted in 2021-2022. It seemed fundamental to test the 2020-2021 model as is on the 2021-2022 data in order to demonstrate the need to identify not only the common variables, but more importantly the new variables that affect students' academic performance in the new meta-Covid-19 learning environments (this term is explained in the introduction). The originality of our work therefore lies in the fact that the hybrid learning spaces on which the study was conducted are based on emerging technologies introduced and applied during the pandemic. These environments are new: they characterize the beginning of the meta-Covid-19 period in education and have not yet been studied. To our knowledge, no studies have yet been published on real students, identified by the identification numbers provided in their survey responses, in the newly applied hybrid learning spaces.

Author action: In the revised manuscript, we have added a new diagram (see Figure 1) and modified the diagram in Figure 2 to clearly indicate the difference between the previous models. The methodology of the study has been further elaborated (lines 74-89). The differences are now clearly stated and diagrammed.
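As an illustration of the comparison described above, here is a minimal sketch, using synthetic data and assumed feature names, of applying a previously fitted linear regression model as is to a new cohort and then refitting it with newly identified variables (this is not the actual GLAR implementation):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for the survey-derived features: variables common to both
    # cohorts, plus new variables identified only in the 2021-2022 (hybrid) survey.
    X_old = rng.normal(size=(100, 5))        # 2020-2021 cohort, common variables
    y_old = X_old @ rng.normal(size=5) + rng.normal(scale=0.5, size=100)

    X_new_common = rng.normal(size=(80, 5))  # same common variables, new cohort
    X_new_extra = rng.normal(size=(80, 3))   # new hybrid-learning variables
    y_new = (X_new_common @ rng.normal(size=5)
             + X_new_extra @ rng.normal(size=3)
             + rng.normal(scale=0.5, size=80))

    # Step 1: apply the old (online-only) model as is to the new cohort.
    old_model = LinearRegression().fit(X_old, y_old)
    mae_old = mean_absolute_error(y_new, old_model.predict(X_new_common))

    # Step 2: refit with the newly identified hybrid-learning variables included.
    X_new_full = np.hstack([X_new_common, X_new_extra])
    refined_model = LinearRegression().fit(X_new_full, y_new)
    mae_refined = mean_absolute_error(y_new, refined_model.predict(X_new_full))

    print(f"MAE of the 2020-2021 model on the new cohort: {mae_old:.2f}")
    print(f"MAE of the refined model:                     {mae_refined:.2f}")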

 

Reviewer#2, Concern#2: Moreover, the predictive values on the two considered datasets show that the GLAR model is not able to correctly identify (and predict) the fail class. The data are unbalanced, which could be one of the causes of these results. However, this is a big pitfall in the results, since the fail class is the one we are mostly interested in. The authors should clarify this point.

Author response: Because the group of failing students was the most sensitive, a qualitative analysis was added to the discussion. Specifically, this subsection focuses on failing students for whom the model predicted success, but who ultimately failed. Ten failing students, corresponding to the extreme outliers with the highest errors in their grade estimates, were studied individually. By carefully examining their survey responses and extracting additional variables from the survey before filtering, we were able to explain the reasons for their failure.

Author action: In the revised version of the manuscript, subchapter 5.2. "Qualitative analysis of the failing class" has been added, enhanced with Figure 10 and Table 5 for validation of the results (please refer to lines 543-597).
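A minimal sketch of how such extreme cases can be isolated for individual review, assuming hypothetical column names and a pass mark of 5 out of 10 (not the authors' actual analysis):

    import pandas as pd

    # Hypothetical columns: predicted and final grades on a 0-10 scale, pass mark at 5.
    df = pd.DataFrame({
        "student_id": ["s01", "s02", "s03", "s04", "s05"],
        "predicted":  [6.2, 5.8, 7.1, 6.5, 4.1],
        "final":      [3.0, 4.5, 6.9, 2.8, 4.9],
    })

    # Failing students for whom the model predicted success.
    misclassified = df[(df["final"] < 5) & (df["predicted"] >= 5)].copy()

    # Rank by prediction error and keep the most extreme cases for individual review.
    misclassified["error"] = misclassified["predicted"] - misclassified["final"]
    worst_cases = misclassified.sort_values("error", ascending=False).head(10)
    print(worst_cases)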

 

Reviewer#2, Concern#3: Moreover, no comparison with other methods has been proposed. Is there a reason that prevents such a comparison? In my opinion, in order to obtain general results, a comparison with (at least) different algorithms and different data should be added.

Author response: The manuscript focuses on the comparison of two models from two different learning environments: an online model established in 2020-2021, at the very beginning of the health crisis, when distance learning was exclusive; and a hybrid model established in 2021-2022. The GLAR model discussed here is based on the 2020-2021 data, which proved insufficient to correctly predict student performance in online hybrid learning situations. Therefore, we enriched the existing model with new variables to truly account for this mode of learning. The goal is for the model, which should be as simple as possible and easy to implement, to predict students' results and warn them early enough to avoid failure. It is important to note that we have tested this model in both Greece and France, with strictly identical working methods, and the conclusions are the same. A comparison with different algorithms has not been made at this stage, as it is not the objective of the manuscript. However, this point could be studied in the near future.

Author action: In the revised version of the manuscript, the abstract, general introduction, and conclusions have been expanded to reflect the above.

 

Reviewer#2, Concern#4: Some minor comments:

    Figure 1 should be described in the text. Please check it, since there is a typo in the first block (exclussive).

Author response: These minor comments have been reviewed and corrected.

Author action: In the revised version of the manuscript, Figure 1 has been added and discussed. Figure 2 replaces Figure 1 in the original version of the manuscript. Thus, Figures 1 and 2 are now analyzed together. Finally, typos have been corrected.

 

Reviewer#2, Concern#5: Some minor comments:

    Page 8, row 319 mentions 36.62%. It is not clear what this value refers to.

Author response: These minor comments have been reviewed and corrected.

Author action: In the revised version of the manuscript, please refer to lines 340-342.

Author Response File: Author Response.pdf

Reviewer 3 Report

The present work on a grade prediction model in hybrid learning environments for education in university settings is of great interest since the Covid-19 pandemic has forced the academic community around the world to rapidly implement new methods of learning using information and communication technologies through distance and online learning. In this sense, it is essential to draw lessons from this experience on student performance, in order to build a sustainable future. This work aims to identify the main factors that affect student performance through this hybrid model.
The theoretical foundation is solid, the methodological approach has a coherent structure and the results are of great interest, showing conclusions that highlight the value of the applicability of said model in all aspects of academic life.

Author Response

Thank you for your thorough review of our paper.

We are sorry for the delay in resubmitting the article, but we have made sure to respond to all of your comments and suggestions, both on the substance and form of the manuscript.

Please find below our responses given point-by-point.

Please also refer to the revised manuscript whose changes are highlighted in yellow.

 

Reviewer#3, Concern#1: The present work on a grade prediction model in hybrid learning environments for education in university settings is of great interest since the Covid-19 pandemic has forced the academic community around the world to rapidly implement new methods of learning using information and communication technologies through distance and online learning. In this sense, it is essential to draw lessons from this experience on student performance, in order to build a sustainable future. This work aims to identify the main factors that affect student performance through this hybrid model.

The theoretical foundation is solid, the methodological approach has a coherent structure and the results are of great interest, showing conclusions that highlight the value of the applicability of said model in all aspects of academic life.

Author response: Thank you very much for appreciating our work.

Author action: During the first round of the review process, the following changes were made to further improve the quality of the manuscript:

  • The introduction has been revised very carefully and clearly states the objectives and expectations of the manuscript. The originality of our work compared to what already exists in the literature is clearly specified in this section.
  • The literature review is now addressed in section 2 of the revised manuscript. It is therefore independent of the general introduction.
  • We added a new diagram (see Figure 1) and modified the diagram in Figure 2 to clearly indicate the difference between the previous models.
  • The methodology of the study has been expanded. The differences are now clearly stated and diagrammed.
  • We have implemented a statistical method that is very common in the field of operations research and education to determine the validity of the data collected: the calculation of Cronbach's alpha.
  • Subchapter 5.2 entitled “Qualitative Analysis of the Failing Class” was added, augmented with Figure 10 and Table 5 for validation of results.
  • In the conclusions, practical implications were added, as well as limitations of the work.
  • Typing errors have been corrected.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Authors have strongly improved the quality of their manuscript by addressing all my comments.

Author Response

Thank you for your thorough review of our paper.

In this second round of revisions, we have taken care to respond to all your comments and suggestions, both on the substance and form of the manuscript.

Please find below our responses given point-by-point.

Please also refer to the revised manuscript whose changes are highlighted in yellow.

 

Reviewer#2, Concern#1: Authors have strongly improved the quality of their manuscript by addressing all my comments.

 

Author response: All of the authors are pleased to read that you appreciated the amount of work that was done during the first round of revisions. We therefore sincerely thank you for your kind words.

Author Response File: Author Response.pdf
