Article
Peer-Review Record

A Multi-Module Information-Optimized Approach to English Language Teaching and Development in the Context of Smart Sustainability

Sustainability 2023, 15(20), 14977; https://0-doi-org.brum.beds.ac.uk/10.3390/su152014977
by Shiyuan Gan 1, Xuejing Yang 2,* and Bilal Alatas 3,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 4 September 2023 / Revised: 11 October 2023 / Accepted: 16 October 2023 / Published: 17 October 2023
(This article belongs to the Special Issue Sustainable Information Systems)

Round 1

Reviewer 1 Report

This paper studies the problems of English teaching in the context of intelligent sustainable development and proposes a framework for learning interest identification and intelligent assessment based on an autoencoder and a GRU, which can complete the tasks of modeling students' interests and multi-module joint intelligent assessment, providing new ideas and technical references for future research on the sustainable development of English teaching. However, the paper needs to address the following shortcomings:

(1) I would advise the authors to adjust the title to better fit the topic of the article;

(2) Regarding the course design references in Section 3.1 about automatic coding, I suggest the authors include them in the related works section along with a suitable comparison;

(3) The author needs to supplement the process description in Figure 1 for the reader to understand. The story should be consistent and comprehend the idea being presented in the figures;

(4) The model training process in Figure 2 lacks an introduction of the model methods and the rationale for using only those models for training and further evaluation purposes;

(5) In the experiment, the training loss of GRU lacks comparison with that of other models;

(6) The experiment in Figure 6 tests the system operation efficiency of students' online learning. What are the evaluation criteria for this test?

(7) It seems that the multi-module combination method is not reflected in the experiment. The author needs to explain;

(8) The conclusion section does not emphasize the outstanding contributions, and some prospects for the future can be added.

The paper needs a thorough proofread to improve the language and the consistency of the presented story.

Author Response

Reply to Reviewer 1:

 

(1) I would advise the authors to adjust the title to better fit the topic of the article;

 

Thanks for your comment, we have revised the title as follows:

A multi-module information-optimized approach to English language teaching and development in the context of smart sustainability

 

(2) Regarding the course design references in Section 3.1 about automatic coding, I suggest the authors include them in the related works section along with a suitable comparison;

 

We have given more information about the autoencoder in the Related Works section as follows:

 

The autoencoder (AE) is one of the most representative models in unsupervised learning and has received widespread attention. It can automatically learn potential features from a large amount of unlabeled sample data, reconstruct the input sample data, and train the model through the reconstruction error to obtain more accurate sample features. With in-depth research, many variants of autoencoders have emerged, which are made more robust by adding certain constraints to the model's hidden layer. Using autoencoders for unsupervised clustering and classification of unlabeled training data can reduce the cost and time of data labeling. Moreover, the autoencoder can automatically learn useful representations and features of the input data, which helps to discover patterns and structures in the data and thereby cluster it better. Therefore, autoencoding methods can greatly improve efficiency when analyzing a large amount of learning-indicator-related data.
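For illustration only, a minimal autoencoder of this kind can be sketched as follows (a hedged example in PyTorch; the layer sizes and the random batch standing in for students' learning features are hypothetical, not the settings used in the paper):

```python
# Minimal autoencoder sketch: learn to reconstruct unlabeled feature vectors;
# the reconstruction error trains the model and the bottleneck code can then
# be clustered to obtain interest groups. All sizes here are hypothetical.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=16, n_latent=4):
        super().__init__()
        self.encoder = nn.Linear(n_features, n_latent)  # compress the input
        self.decoder = nn.Linear(n_latent, n_features)  # reconstruct the input

    def forward(self, x):
        code = torch.relu(self.encoder(x))
        return self.decoder(code), code                 # reconstruction + latent code

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()                                # reconstruction error

x = torch.randn(32, 16)                                 # stand-in for unlabeled learning features
for _ in range(100):
    reconstruction, code = model(x)
    loss = criterion(reconstruction, x)                 # train on the reconstruction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# `code` can now be fed to a clustering step (e.g., k-means) to obtain interest groups.
```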

 

(3) The author needs to supplement the process description in Figure 1 for the reader to understand. The story should be consistent and comprehend the idea being presented in the figures;

 

We have added a description of the framework below Figure 1 as follows:

As shown in Figure 1, after completing the collection of relevant data, we feed it into the autoencoder model and push relevant courses based on the students' needs information and the collected learning features. After students complete the four modules of listening, speaking, reading, and writing pushed on the basis of these data, the GRU method is used to evaluate their learning data and predict their academic performance.
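To make the flow in Figure 1 easier to follow, the sketch below strings the two stages together with simple stand-in functions (all names, rules, and numbers are hypothetical illustrations, not the implementation described in the paper):

```python
# Hypothetical stand-ins for the Figure 1 pipeline: classify a student's interest,
# push matching courses, then evaluate the collected module records.
from typing import Dict, List

MODULES = ["listening", "speaking", "reading", "writing"]

def classify_interest(learning_features: List[float]) -> str:
    """Stand-in for the autoencoder-based interest classifier."""
    return MODULES[learning_features.index(max(learning_features))]

def push_courses(interest: str) -> List[str]:
    """Stand-in for course push driven by the recognized interest."""
    return [f"{interest} course {i}" for i in (1, 2, 3)]

def predict_score(module_records: Dict[str, List[float]]) -> float:
    """Stand-in for the GRU-based evaluation of the learning data."""
    values = [v for series in module_records.values() for v in series]
    return sum(values) / len(values)

features = [3.2, 1.5, 4.0, 2.1]                          # hypothetical learning features
print(push_courses(classify_interest(features)))          # pushed courses
records = {m: [70.0, 75.0, 80.0] for m in MODULES}        # hypothetical module records
print(predict_score(records))                             # predicted performance
```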

 

 

(4) The model training process in Figure 2 lacks an introduction of the model methods and the rationale for using only those models for training and further evaluation purposes;

 

Thanks for your comment, Figure 2 illustrates the precision and the loss of the model. We have added more information in Section 4.1 as follows:

The training process of the model is shown in Figure 2, where the red line represents the recognition accuracy of the model and the black line represents the loss function output of the model. Due to the inherent characteristics of the data, we conducted research on classification and clustering, and analyzed the data based on its quantitative features. At the input end of the model, we used quantified student questionnaire analysis results and specified clustering data. Then, we compared the results of the interest categories against the real data collected, thus completing the evaluation of the model's accuracy.

 

 

(5) In the experiment, the training loss of GRU lacks comparison with that of other models;

 

In the experiment, the data were stored on the server and we did not save the other models' training information, so we cannot provide their training losses. The comparison result is shown in Figure 5, where the MSE results are reported.

 

(6) The experiment in Figure 6 tests the system operation efficiency of students' online learning. What are the evaluation criteria for this test?

 

We have described the process and criteria more specifically in Section 4.4 as follows:

For the personalized development of the course, in addition to accurately identifying students' interests in each module, it is necessary to fully understand students' learning habits and to ensure the system's efficiency during multi-user use. Therefore, this article conducted a system operation efficiency test in which students learned online simultaneously. During the testing process, students completed the interest test pushed by the system in the shortest possible time. Based on this result, we observed the efficiency of the system in pushing first-hand information in the shortest possible time, as well as students' satisfaction. If the result is consistent with the students' interest classification obtained before the experiment, it is marked as valid. Our test results after several days are shown in Figure 6:

 

 

 

(7) It seems that the multi-module combination method is not reflected in the experiment. The author needs to explain;

 

The modules in this paper refer to the listening, speaking, reading, and writing parts. We used a two-step framework to realize the recognition of students' interests and the course push.

 

(8) The conclusion section does not emphasize the outstanding contributions, and some prospects for the future can be added.

 

We have revised the Conclusion as follows:

 

This study delves into the issues of English teaching in the context of intelligent sustainable development and proposes a method with significant contributions. We have constructed a learning interest recognition and intelligent evaluation framework based on autoencoders and GRUs. This framework can accurately classify students' interests based on their learning history data, achieving a 93.1% classification accuracy. This high-precision interest classification provides multi-dimensional information for automatic course push, significantly improving learning satisfaction. Furthermore, we combined the GRU model to complete a joint analysis of the four typical modules of listening, speaking, reading, and writing in English teaching, achieving the evaluation of intelligent learning effectiveness. It is worth noting that the mean square error between our evaluation results and the teacher's final score is only 0.63, indicating that our method has excellent accuracy and reliability in evaluating students' learning outcomes. Compared with traditional methods such as RNN and LSTM, our GRU method is not only more efficient but also achieves a significantly lower MSE, further demonstrating its superiority. In summary, the framework proposed in this article performs well in modeling student user interests and in the multi-module joint intelligent assessment task, providing new ideas and technical references for future research on the sustainable development of English teaching and marking an important research breakthrough in the field of intelligent education.

However, there are also some limitations in this research. This paper evaluates a relatively small amount of students' learning history data and is therefore limited by the sample size, and the interest modeling only identifies and classifies the general modules in the learning process. In future research, expanding the sample size to improve the user portrait will be the focus.

Reviewer 2 Report

With the advancement of information technology, smart, sustainable development has become a part of people's daily lives. In this context, the continuity of English teaching is challenged. The method proposed in this paper solves the problem of students' interest classification in online English teaching and promotes the sustainable development of intelligent English teaching. However, there are still the following shortcomings in this paper:

1. The description of the problem solved in this paper is not clear and not enough for readers to quickly understand;

2. The method cited in the introduction section is not highly related to sustainable development and English teaching, so it is suggested that the author modify it;

3. In section 2.2, about the research on intelligent assessment technology of deep learning, the methods cited in the literature are too few, and the authors need to increase them appropriately;

4. The author chooses the GRU method for multi-module data analysis and discusses the advantages of this method;

5. The selection of basic parameters is lacking in the experiment;

6. How is the autoencoder method in Figure 3 implemented to identify the four types of modules?

7. The logic of the discussion part is not clear enough and the author needs to sort it out again;

8. Good articles from recent years should be added to the references to improve the level.

Dear Authors,

It is suggested that the authors proofread the paper in order to avoid any language mistakes.

Author Response

Reply to Reviewer 2

 

  1. The description of the problem solved in this paper is not clear and not enough for readers to quickly understand;

 

Thanks for your comment, we have revised the part at the end of the Introduction as follows:

By adopting an intelligent and sustainable approach to English language teaching, students can not only improve their language skills but also acquire sustainable knowledge and skills, which can help them better adapt to the future needs of society (Sun et al., 2021). With the widespread application of multimedia and internet technology, it is crucial to analyze and guide students' learning interests, abilities, and methods through more intelligent means, and to provide more personalized learning plans for their comprehensive development. Specifically, multi-module joint optimization technology based on deep learning is an effective means of achieving intelligent and sustainable English teaching. Multi-module joint optimization technology can use intelligent methods to analyze relevant data based on students' interests, thereby improving learning efficiency and ensuring teaching effectiveness. The specific contributions of this article are as follows:

 

 

2. The method cited in the introduction section is not highly related to sustainable development and English teaching, so it is suggested that the author modify it;

 

We have modified the reference as follows:

[4] Ramalingam S, Yunus M M, Hashim H. Blended learning strategies for sustainable English as a second language education: a systematic review[J]. Sustainability, 2022, 14(13): 8051.

[5] Kwee C T T. I want to teach sustainable development in my English classroom: A case study of incorporating sustainable development goals in English teaching[J]. Sustainability, 2021, 13(8): 4195.

 

3. In section 2.2, about the research on intelligent assessment technology of deep learning, the methods cited in the literature are too few, and the authors need to increase them appropriately;

 

We have revised Section 2.2 as follows:

In recent years, computers and artificial intelligence have flourished and have made important progress in numerous fields. Deep network models are widely used in a variety of evaluation fields because they can handle multimodal information quickly and accurately. Mangalathu (Mangalathu et al., 2018) proposed multi-parameter fragility using a neural network to establish the relationship between structural parameters and structural demand parameters; Karbassi et al. (2014) proposed a decision tree CART algorithm-based seismic vulnerability analysis method; Sainct et al. (2020) proposed a support vector machine (SVM)-based method to assess the seismic fragility of reinforced concrete structures; in the risk assessment of emergency accidents, Jacek Skorupski proposed a fuzzy risk matrix to obtain the probability and severity of accident consequences, derived the probability of accidents by Petri nets, and applied this assessment method to achieve risk assessment of air traffic accidents (Skorupski, 2016); Brandon Johnson proposed a Monte Carlo-based approach to assess the impact of earthquakes on large and complex power systems (Johnson et al., 2020). In the industrial field, Farzad Piadeh et al. proposed a combined tree analysis method and applied it to the risk assessment of ATUs for industrial wastewater treatment (Piadeh et al., 2018). The autoencoder (AE) is one of the most representative models in unsupervised learning and has received widespread attention. It can automatically learn potential features from a large amount of unlabeled sample data, reconstruct the input sample data, and train the model through the reconstruction error to obtain more accurate sample features. With in-depth research, many variants of autoencoders have emerged, which are made more robust by adding certain constraints to the model's hidden layer. Using autoencoders for unsupervised clustering and classification of unlabeled training data can reduce the cost and time of data labeling. Moreover, the autoencoder can automatically learn useful representations and features of the input data, which helps to discover patterns and structures in the data and thereby cluster it better. Therefore, autoencoding methods can greatly improve efficiency when analyzing a large amount of learning-indicator-related data.

 

4. The author chooses the GRU method for multi-module data analysis and discusses the advantages of this method;

 

We revised that as follows:

 

Considering the data computation requirements, this paper uses the GRU method for collaborative data analysis across multiple modules. The GRU is a modified recurrent neural network that controls the flow of information by adding two gating units to reduce the occurrence of the gradient vanishing and exploding problems. The GRU contains a reset gate and an update gate to control the flow of information in the memory cell: the reset gate controls how much past information is used, and the update gate controls the current flow of information. Compared to the traditional RNN and LSTM, the GRU has higher efficiency and fewer parameters, making it the preferred choice in many sequence modeling tasks. The gating mechanism of the GRU is simpler, but it can still effectively capture long-distance dependencies while reducing gradient vanishing issues, making training more stable. This makes the GRU widely used in fields such as natural language processing and time series analysis, providing high-performance sequence modeling capabilities.
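A minimal sketch of such a GRU-based sequence regression is given below (PyTorch; the dimensions and random tensors are hypothetical placeholders for the real learning time series, and nn.GRU handles the reset and update gating internally):

```python
# GRU sequence regression sketch: a per-student time series over the four
# modules is regressed onto a final course score. Shapes are hypothetical.
import torch
import torch.nn as nn

class ScoreGRU(nn.Module):
    def __init__(self, n_modules=4, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=n_modules, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                 # map the last hidden state to a score

    def forward(self, x):                                # x: (batch, time, modules)
        _, h_n = self.gru(x)                             # h_n: (layers, batch, hidden)
        return self.head(h_n[-1]).squeeze(-1)

model = ScoreGRU()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

x = torch.randn(8, 20, 4)                                # 8 students, 20 time steps, 4 modules
y = torch.rand(8) * 100                                  # final grades on a percentage scale
for _ in range(200):
    loss = criterion(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```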

5. The selection of basic parameters is lacking in the experiment;

 

The parameters were all optimized with the grid search method.
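As a rough illustration of what such a grid search can look like (the parameter values and the train_and_validate helper below are hypothetical placeholders, not the settings reported in the paper):

```python
# Exhaustive grid search sketch: try every parameter combination and keep the
# one with the lowest validation MSE. train_and_validate is a hypothetical stub.
from itertools import product

def train_and_validate(lr: float, batch_size: int, hidden: int) -> float:
    """Stub standing in for training a model and returning its validation MSE."""
    return lr * 100 + 1.0 / batch_size + 1.0 / hidden    # placeholder metric

param_grid = {"lr": [1e-2, 1e-3, 1e-4], "batch_size": [16, 32, 64], "hidden": [16, 32]}

best_cfg, best_mse = None, float("inf")
for lr, batch_size, hidden in product(*param_grid.values()):
    mse = train_and_validate(lr, batch_size, hidden)
    if mse < best_mse:
        best_cfg, best_mse = {"lr": lr, "batch_size": batch_size, "hidden": hidden}, mse

print("best configuration:", best_cfg)
```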

 

6. How is the autoencoder method in Figure 3 implemented to identify the four types of modules?

 

We have revised Section 4.1 as follows:

The training process of the model is shown in Figure 2, where the red line represents the recognition accuracy of the model and the black line represents the loss function output of the model. Due to the inherent characteristics of the data, we conducted research on classification and clustering and analyzed the data based on its quantitative features. At the input end of the model, we used quantified student questionnaire analysis results and specified clustering data. Then, we compared the results of the interest categories against the real data collected, thus completing the evaluation of the model's accuracy.

7. The logic of the discussion part is not clear enough and the author needs to sort it out again;

 

We have revised the Discussion as follows:

With the continuous evolution of artificial intelligence technology, English teaching models in a sustainable context are gradually showing diverse prospects. In this constantly changing educational landscape, we can further expand the model proposed in this article to better meet the individual needs of students. This extension includes intelligently recommending various learning resources, such as textbooks, questions, exercises, etc., based on students' learning history and preferences, to achieve extensive and in-depth personalized recommendations, thereby improving learning outcomes. In addition, based on massive learning data, we can also achieve the vision of adaptive learning. By analyzing students' learning performance, the system can dynamically adjust learning content, difficulty, and teaching methods to ensure that each student can better understand and master knowledge, achieving a more efficient learning process. This personalized educational method is expected to play a key role in sustainable English teaching (Gupta et al, 2023). In the future, with the popularization of online learning, intelligent assisted education will become the mainstream. The application of deep learning technology will make natural language processing, intelligent dialogue, and question answering more advanced and intelligent. Students will be able to interact and learn with virtual robots through intelligent teaching platforms, and gain personalized educational experiences. This vision represents the future development direction of sustainable English education, providing students with richer and more attractive learning opportunities, and promoting the continuous improvement of education quality and sustainability.

 

8. Good articles from recent years should be added to the references to improve the level.

 

Thanks for your comment, we have added more references from MDPI and IEEE.

 

Reviewer 3 Report

This paper focuses on classifying students' interests and jointly assessing the listening, reading, and writing modules in the process of online English teaching. It presents several intriguing ideas and insights, but there are some notable issues with both the introduction and discussion sections.

Regarding the introduction, the final part currently highlights the paper's specific contribution. Instead, it is recommended to clearly state the study's specific objectives. This will provide readers with a more focused understanding of what the research aims to achieve.

Furthermore, the second part, which discusses related works, exhibits a significant amount of word repetition that should be addressed for improved readability and clarity.

The discussion section stands out as the weakest part of the paper. In its current form, it lacks depth and brevity. What's missing is a thorough exploration of how the results relate to existing literature and what unique insights the current study has contributed to our knowledge in the field.

In summary, while this paper contains valuable points and ideas, refining the introduction, addressing word repetition in the related works section, and expanding the discussion's depth are essential steps to enhance its overall quality and impact.

The use of English is satisfactory, but there is room for improvement in terms of reducing word repetition to enhance the text's cohesiveness and readability.

Author Response

Reply to Reviewer 3:

1. Regarding the introduction, the final part currently highlights the paper's specific contribution. Instead, it is recommended to clearly state the study's specific objectives. This will provide readers with a more focused understanding of what the research aims to achieve.

Thanks for your comment, we have revised that and added the following content before the contribution introduction:

The objective of this paper is to achieve intelligent and rapid evaluation of multimedia English teaching through data fusion and multi-module learning. The specific contributions of this article are as follows: 1. user interest recognition based on the autoencoder is implemented for the four typical modules of the English teaching process: listening, speaking, reading, and writing; 2. a joint multi-module intelligent assessment of English learning is implemented using the GRU, with an MSE of only 0.63 between its predictions and the actual scores; 3. practical testing was conducted on the proposed intelligent learning and evaluation framework, and the actual operating efficiency of the system exceeded 80%.

2. Furthermore, the second part, which discusses related works, exhibits a significant amount of word repetition that should be addressed for improved readability and clarity.

We have adjusted the expression as follows:

2.1 Research on content pushing based on learning habits

Recommendation systems have been developed from the field of e-commerce to various fields, and with the change and development of digital education, recommendation systems have gradually become involved (Farzan & Brusilovsky, 2006). Khribi et al., based on the history of active learners and the similarities and differences between learners' preferences and the content of learning resources, provide them with learning resource recommendations (Khribi et al., 2009). In 2011, Katuk et al. recommended appropriate learning paths for learners based on their learning experience and emotional engagement from the perspective of the "optimal learning experience" (Katuk & Ryu, 2011). With the development of information technology, researchers have found differences between the recommendation of learning resources and the recommendation of commodities; the recommendation of learning resources also needs to consider learners' preferences, learning styles, learning levels, cognitive abilities, etc. Klasnja et al. discovered the existence of implicit labels for learning resources and clustered and ranked the labels to provide personalized learning resources for learners with different preferences based on the labels (Klasnja et al., 2018). Dascalu et al. designed and developed a personalized recommendation system for learning resources and made recommendations in the virtual environment U-learn. The system recommends suitable learning resources for learners based on the similarity calculation of their learning styles; for the recommended resources, learners can choose whether to accept the system's recommendations (Dascalu et al., 2015). Yau et al. proposed a context-aware personalized m-learning application for the characteristics of different learners' learning preferences, with six learning scenario preferences designed to recommend suitable learning objects for learners (Yau & Joy, 2011).

Through the above research, it is easy to see that, with the development of artificial intelligence technology, the personalized pushing of courses has become more intelligent, and the recommendation methods for different scenarios and preferences are becoming increasingly mature, providing more convenient help for students' learning.

2.2 Research on intelligent evaluation techniques in the context of machine learning and deep learning

Computers and artificial intelligence have flourished in recent years and have made important progress in numerous fields. Deep network models are widely used in various evaluation fields because they can handle multimodal information quickly and accurately. Mangalathu (Mangalathu et al., 2018) proposed multi-parameter fragility using a neural network to establish the relationship between structural parameters and structural demand parameters; Karbassi et al. (2014) proposed a decision tree CART algorithm-based seismic vulnerability analysis method; Sainct et al. (2020) proposed a support vector machine (SVM)-based method to assess the seismic fragility of reinforced concrete structures; in the risk assessment of emergency accidents, Jacek Skorupski proposed a fuzzy risk matrix to obtain the probability and severity of accident consequences, derived the probability of accidents by Petri nets, and applied this assessment method to achieve risk assessment of air traffic accidents (Skorupski, 2016); Brandon Johnson proposed a Monte Carlo-based approach to assess the impact of earthquakes on large and complex power systems (Johnson et al., 2020). In the industrial field, Farzad Piadeh et al. proposed a combined tree analysis method and applied it to the risk assessment of ATUs for industrial wastewater treatment (Piadeh et al., 2018). The autoencoder (AE) is one of the most representative models in unsupervised learning and has received widespread attention. It can automatically learn potential features from a large amount of unlabeled sample data, reconstruct the input sample data, and train the model through the reconstruction error to obtain more accurate sample features. With in-depth research, many variants of autoencoders have emerged, which are made more robust by adding certain constraints to the model's hidden layer. Using autoencoders for unsupervised clustering and classification of unlabeled training data can reduce the cost and time of data labeling. Moreover, the autoencoder can automatically learn useful representations and features of the input data, which helps to discover patterns and structures in the data and thereby cluster it better. Therefore, autoencoding methods can greatly improve efficiency when analyzing a large amount of learning-indicator-related data.

Through the above intelligent evaluation systems based on artificial intelligence methods, it is easy to see that the performance evaluation of most systems can be completed based on the existing data and the integration of multimodal information. The same is true for English teaching. The daily teaching of English is often divided into four modules: listening, speaking, reading, and writing. The learning of each module is often done independently, so completing an overall learning assessment for students' different learning situations, after personalized recommendation of learning materials based on their habits, is of great significance for future English teaching.

3. The discussion section stands out as the weakest part of the paper. In its current form, it lacks depth and brevity. What's missing is a thorough exploration of how the results relate to existing literature and what unique insights the current study has contributed to our knowledge in the field.

In this paper, we aimed to provide a framework for English teaching evaluation, and the content has also been polished.

4. In summary, while this paper contains valuable points and ideas, refining the introduction, addressing word repetition in the related works section, and expanding the discussion's depth are essential steps to enhance its overall quality and impact.

We have refined the paper and checked the grammar.

Reviewer 4 Report

The authors attempt to bridge intelligent technology with the field of English teaching, creating a framework for personalized learning plans utilizing Autoencoder and GRU. They focus on the integration of sustainability, artificial intelligence, and education. While this research offers an innovative approach, it has several significant weaknesses and limitations, making its reliability, validity, and generalizability questionable. The authors must revise following the presented comments and suggestions:

·         The study’s generalizability is highly compromised due to the restricted sample size. The limited amount of students' learning history data provided significantly limits the scope and applicability of the findings to a wider population.

·         The paper lacks a comprehensive and detailed description of the methodology used, especially on the actual implementation of the Autoencoder and GRU models, the process of collecting students' learning histories, and the approach for evaluating and interpreting the results. This omission raises questions about the replicability of the study.

·         The study acknowledges the limitations of the autoencoder model in handling high-dimensional sparse data but fails to justify the choice of this algorithm over others potentially better-suited to managing such data. A more critical reflection on the choice of models in relation to the nature of the data is necessary.

·         The paper claims a 93.1% classification accuracy in recognizing students’ interests but does not sufficiently explain the criteria and process for classification, leaving questions on the reliability and validity of the classification results.

·         While it mentions the superiority of GRU over RNN and LSTM, the paper does not provide substantial empirical evidence or clear comparative analysis to support this assertion, reducing the credibility of the claim.

·         The paper overly emphasizes the potential applicability and benefits of the proposed framework but lacks a critical evaluation of its actual effectiveness and feasibility in real-world educational settings.

·         The claim of a mean squared error of 0.63 between the comprehensive assessment and the teacher's given grade under GRU lacks context and clarification on its implications, and on whether this is statistically significant. This leaves ambiguity in understanding the real-world efficacy of the proposed model.

·         The terms like “sustainable development of English teaching intelligence” are vague and not adequately defined, leaving them open to multiple interpretations and reducing the clarity and coherence of the paper.

·         The paper fails to provide substantial information regarding how interests were modeled and classified based on learning history data, which is crucial for the reliability and validity of the results and the feasibility of the proposed method in practical settings.

·         The paper does not address potential biases in collecting and interpreting students’ learning history data and does not discuss the ethical implications of using such data, leaving significant concerns unaddressed.

·         The concluding remarks appear to overreach, labeling the framework as a “vital research breakthrough” without adequate justification or comparison to existing methodologies in intelligent education. This may give a misleading impression of the actual impact and novelty of the research.

·         Multiple cited references are not related to the content and topic of the paper, such as refs. [13-19]. The paper uses a confusing citation scheme, which makes it difficult to relate citations to the reference list. The authors should be consistent with the journal template requirements. The context of the citation is different from the cited references. For example, Lines 208-209 – “RNN method to evaluate multi-module course information (Huang et al., 2023).”, but ref. [23] has nothing to do with courses or education. The authors must carefully check and align. On the other hand, the article lacks proper contextualization and discussion of the current state of the art on sustainability-based teaching practices. I would refer the authors to the works of eminent professor Dr. Swacha and his research group, such as Introducing Sustainable Development Topics into Computer Science Education: Design and Evaluation of the Eco JSity Game (2021), Sustainability, 13(8), 4244; An Interactive Serious Mobile Game for Supporting the Learning of Programming in JavaScript in the Context of Eco-Friendly City Management (2020), Computers, 9(4), 102.

Recommendations for Improvement:

·         A rigorous and detailed methodology section is indispensable to address ambiguities related to data collection, model implementation, and result interpretation.

·         Explicitly addressing and justifying the choice of algorithms in relation to the nature and characteristics of the data used is essential.

·         A more nuanced and critical discussion on the limitations, potential biases, and ethical considerations of the proposed framework is warranted.

·         Clarity and precision in defining terms and elaborating on the classification and evaluation criteria and processes are crucial for the credibility and coherence of the paper.

·         A balanced discussion on the potential and actual effectiveness of the proposed method, substantiated with empirical evidence, will enhance the paper’s reliability and validity.

 

Conclusion: While the research attempts to contribute to the field of intelligent education through an innovative approach, the numerous limitations, lack of clarity, detail, and critical reflection severely undermine its reliability, validity, and contribution to the field. Significant revisions and additions are essential to address the mentioned issues and to substantiate the claims made in the paper.

Author Response

Reply to Reviewer 4:

1. The study’s generalizability is highly compromised due to the restricted sample size. The limited amount of students' learning history data provided significantly limits the scope and applicability of the findings to a wider population.

Thanks for your comment. This work aims to provide a framework for English teaching evaluation concerning the different modules, so we did not employ a large amount of data to train the model.

2. The paper lacks a comprehensive and detailed description of the methodology used, especially on the actual implementation of the Autoencoder and GRU models, the process of collecting students' learning histories, and the approach for evaluating and interpreting the results. This omission raises questions about the replicability of the study.

We have given more details about the datasets and the model information as follows:

The training process of the AutoREC model designed in this article is as follows:

Algorithm 1 The course push process for the autoencoder model
1. Input the user learning data, which is the statistical time series of the four modules of listening, speaking, reading, and writing for each user;
2. Obtain the interest prediction output h of the model;
3. Train according to the error between the real and predicted user interest vectors shown in equation (2) to minimize the loss function and obtain the final model parameters θ;
4. Input the feature data of the learning user in sequence to obtain the user's interest classification;
5. Obtain the ratings of user n's interest in the courses, sort them, and obtain a recommendation list for user n.

After recommending courses based on the user's interests, we conducted data analysis based on the recommended student data. The data input of the GRU model is also the user's learning time series. Based on this, we fitted and output the actual final grades of the student users, thus completing the data training. For the autoencoder and GRU models used, both use a single-layer network structure, and the number of units is determined based on batch-size optimization. Based on this, we completed the training of the model. Therefore, in this paper, the automatic English learning assessment process is completed for multi-module English teaching using an autoencoder for interest recognition and pushing, as shown in Figure 1:

Figure 1. Framework for the intelligent course push and assessment.

3. The study acknowledges the limitations of the autoencoder model in handling high-dimensional sparse data but fails to justify the choice of this algorithm over others potentially better-suited to managing such data. A more critical reflection on the choice of models in relation to the nature of the data is necessary.

More information about that is given as follows:

Autoencoders have significant advantages in processing time series data. Firstly, they can automatically learn key features and patterns in the data without the need for manual feature engineering, which is crucial given the complexity and hard-to-capture trends of time series data. Secondly, autoencoders can compress time series data and extract the most important information, thereby reducing the dimensionality of the data and helping to reduce the impact of noise and redundant information. In addition, autoencoders can also be used for anomaly detection and reconstruction tasks, helping to detect outliers or missing data in time series and perform interpolation or repair. Finally, the deep structure and recursive variants of autoencoders can handle long-term dependencies, which are crucial in time series modeling. In summary, autoencoders are powerful tools for processing time series data, which can improve the performance of tasks such as feature learning, dimensionality reduction, anomaly detection, and reconstruction. For the student learning time and related operational data extracted from class, using the autoencoder method can better reveal the internal data correlations.

4. The paper claims a 93.1% classification accuracy in recognizing students’ interests but does not sufficiently explain the criteria and process for classification, leaving questions on the reliability and validity of the classification results.

We have added more information to improve the paper's reliability as follows:

Due to the inherent characteristics of the data, we conducted research on classification and clustering and analyzed the data based on its quantitative features. At the input end of the model, we used quantified student questionnaire analysis results and specified clustering data. Then, we compared the results of the interest categories against the actual data collected, thus completing the evaluation of the model's accuracy. In this experiment, more than 350 students from the school were surveyed and their English learning data were analyzed. We have provided a detailed explanation of the overall data training process in Section 3.

5. While it mentions the superiority of GRU over RNN and LSTM, the paper does not provide substantial empirical evidence or clear comparative analysis to support this assertion, reducing the credibility of the claim.

We have compared the MSE of these methods in Figure 5 as follows:

Figure 5. The comparison of the different methods.

6. The paper overly emphasizes the potential applicability and benefits of the proposed framework but lacks a critical evaluation of its actual effectiveness and feasibility in real-world educational settings.

Thanks for your comment, we have added that in the Discussion as follows:

At present, we have conducted practical application tests on the designed framework, as shown in Chapter 4. In addition to testing the accuracy of the proposed algorithm framework, we have also conducted system efficiency tests on the relevant frameworks. Through real-time testing by nearly a hundred volunteers, the system has achieved over 80% efficiency in pushing while ensuring student satisfaction, indicating that the system framework has broad prospects in practical applications.

7. The claim of a mean squared error of 0.63 between the comprehensive assessment and the teacher's given grade under GRU lacks context and clarification on its implications, and on whether this is statistically significant. This leaves ambiguity in understanding the real-world efficacy of the proposed model.

We have revised that as follows:

In the model training, we first used the students' final grades of this semester as the true values of the GRU model sequence regression, completed the model training, and then conducted the relevant sequence analysis. From the model training loss shown in Figure 4, it can be seen that the final average MSE of this article is 0.63. For the percentage grading system, the final error is already less than 1%, which can effectively evaluate the course score for the current sample. In order to better illustrate the course evaluation under multi-module writing, we conducted a comparison of typical neural network methods.

8. The terms like “sustainable development of English teaching intelligence” are vague and not adequately defined, leaving them open to multiple interpretations and reducing the clarity and coherence of the paper.

We have added that as follows:

Intelligent teaching evaluation is crucial for sustainable development, as it plays a role through multiple channels such as personalized education support, resource optimization, increased student participation, improved education quality, and adaptation to future needs. Personalized education support ensures that every student fully unleashes their potential; resource optimization improves the efficiency of the education system, increases student participation, and cultivates learning interest. Improving education quality helps ensure that students receive high-quality education. In addition, intelligent assessment also helps the education system adapt to constantly changing social and economic needs, thereby supporting the sustainable development of human resources. The comprehensive effects of these aspects not only enable education to meet current needs, but also help shape a more sustainable future.

9. The paper fails to provide substantial information regarding how interests were modeled and classified based on learning history data, which is crucial for the reliability and validity of the results and the feasibility of the proposed method in practical settings.

We have added the details in Sections 3 and 4 as follows:

After recommending courses based on the user's interests, we conducted data analysis based on the recommended student data. The data input of the GRU model is also the user's learning time series. Based on this, we fitted and output the actual final grades of the student users, thus completing the data training. For the autoencoder and GRU models used, both use a single-layer network structure, and the number of units is determined based on batch-size optimization. Based on this, we completed the training of the model. Therefore, in this paper, the automatic English learning assessment process is completed for multi-module English teaching using an autoencoder for interest recognition and pushing, as shown in Figure 1. The training process of the model is shown in Figure 2, where the red line represents the recognition accuracy of the model and the black line represents the loss function output of the model. Due to the inherent characteristics of the data, we conducted research on classification and clustering and analyzed the data based on its quantitative features. At the input end of the model, we used quantified student questionnaire analysis results and specified clustering data. Then, we compared the results of the interest categories against the actual data collected, thus completing the evaluation of the model's accuracy. In this experiment, more than 350 students from the school were surveyed and their English learning data were analyzed. We have provided a detailed explanation of the overall data training process in Section 3.

10. The paper does not address potential biases in collecting and interpreting students’ learning history data and does not discuss the ethical implications of using such data, leaving significant concerns unaddressed.

Informed consent was obtained from the volunteers, and we described that as follows:

In this experiment, more than 350 students from the school were surveyed and their English learning data were analyzed, and informed consent was obtained from the subjects. We have provided a detailed explanation of the overall data training process in Section 3.

11. The concluding remarks appear to overreach, labeling the framework as a “vital research breakthrough” without adequate justification or comparison to existing methodologies in intelligent education. This may give a misleading impression of the actual impact and novelty of the research.

We have deleted that expression.

12. Multiple cited references are not related to the content and topic of the paper, such as refs. [13-19]. The paper uses a confusing citation scheme, which makes it difficult to relate citations to the reference list. The authors should be consistent with the journal template requirements. The context of the citation is different from the cited references. For example, Lines 208-209 – “RNN method to evaluate multi-module course information (Huang et al., 2023).”, but ref. [23] has nothing to do with courses or education. The authors must carefully check and align. On the other hand, the article lacks proper contextualization and discussion of the current state of the art on sustainability-based teaching practices. I would refer the authors to the works of eminent professor Dr. Swacha and his research group, such as Introducing Sustainable Development Topics into Computer Science Education: Design and Evaluation of the Eco JSity Game (2021), Sustainability, 13(8), 4244; An Interactive Serious Mobile Game for Supporting the Learning of Programming in JavaScript in the Context of Eco-Friendly City Management (2020), Computers, 9(4), 102.

Thanks for your comment. Ref. [23] introduces the clustering algorithm, which is important for the establishment of the framework. Furthermore, we have added related content about the discussion of sustainability-based teaching practices according to the recommended references:

Intelligent teaching evaluation is crucial for sustainable development, as it plays a role through multiple channels such as personalized education support, resource optimization, increased student participation, improved education quality, and adaptation to future needs. Personalized education support ensures that every student fully unleashes their potential; resource optimization improves the efficiency of the education system, increases student participation, and cultivates learning interest (Swacha et al., 2021). Improving education quality helps ensure that students receive high-quality education. In addition, intelligent assessment also helps the education system adapt to constantly changing social and economic needs, thereby supporting the sustainable development of human resources. The comprehensive effects of these aspects not only enable education to meet current needs, but also help shape a more sustainable future (Maskeliūnas et al., 2020).

Recommendations for Improvement:

1. A rigorous and detailed methodology section is indispensable to address ambiguities related to data collection, model implementation, and result interpretation.

We have added more details about the experiment in Section 3 and Section 4.

2. Explicitly addressing and justifying the choice of algorithms in relation to the nature and characteristics of the data used is essential.

Thanks for your comment, more explanation about the choice of methods is given in Section 3.

3. A more nuanced and critical discussion on the limitations, potential biases, and ethical considerations of the proposed framework is warranted.

We have given these details in the Discussion and Method sections.

4. Clarity and precision in defining terms and elaborating on the classification and evaluation criteria and processes are crucial for the credibility and coherence of the paper.

We have carefully checked the information.

5. A balanced discussion on the potential and actual effectiveness of the proposed method, substantiated with empirical evidence, will enhance the paper's reliability and validity.

We have given more descriptions about the effectiveness and application in the Discussion.

Round 2

Reviewer 4 Report

Rewrite Algorithm 1 using pseudocode notation.

Author Response

RESPONSE: We revised Algorithm 1 using pseudocode notation as follows:

 

Algorithm 1: Course push using the autoencoder model

Step 1: Input the user learning data xt = {x1, x2, x3, x4}, where xi represents the listening, speaking, reading, and writing data, respectively.

Step 2: Obtain h, the interest prediction of the model.

Step 3: Train the model using the MSE between the real and predicted interest vectors in Eq. (2) to minimize the loss function and obtain the final model parameters θ.

Step 4: Input the feature data of the learning user in sequence to obtain the user's interest classification.

Step 5: Obtain the ratings of user n's interest in the courses, sort them, and obtain a recommendation list for user n.
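For readers who prefer runnable code, a minimal Python rendering of Steps 1-5 could look as follows (the stand-in model, data values, and course list are hypothetical, and the training in Step 3 is assumed to have already been carried out):

```python
# Hypothetical sketch of the course push: a trained autoencoder-style model
# returns an interest vector h, which is scored per course, sorted, and
# turned into a recommendation list for user n.
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    """Stand-in for the trained autoencoder; forward returns (reconstruction, h)."""
    def __init__(self, n_in=4, n_latent=4):
        super().__init__()
        self.enc = nn.Linear(n_in, n_latent)
        self.dec = nn.Linear(n_latent, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))                  # Step 2: interest prediction h
        return self.dec(h), h

def recommend_courses(model, x_user, courses):
    """x_user: listening/speaking/reading/writing statistics for one user (Step 1)."""
    model.eval()
    with torch.no_grad():
        _, h = model(x_user.unsqueeze(0))
    scores = h.squeeze(0)                               # Step 5: one interest score per course
    order = torch.argsort(scores, descending=True)
    return [courses[i] for i in order.tolist()]         # ranked push list for user n

courses = ["listening course", "speaking course", "reading course", "writing course"]
x_user = torch.tensor([3.2, 1.5, 4.0, 2.1])             # hypothetical learning-time statistics
print(recommend_courses(TinyAE(), x_user, courses))
```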

Author Response File: Author Response.docx
