Article

Adaptive Knowledge Control in Digital Learning as a Factor in Improving the Quality of Education

Russian State Agrarian University—Moscow Timiryazev Agricultural Academy, Moscow 127550, Russia
* Author to whom correspondence should be addressed.
Submission received: 15 August 2022 / Revised: 10 September 2022 / Accepted: 16 September 2022 / Published: 22 September 2022
(This article belongs to the Special Issue Quality Education on Digital Learning Environment)

Abstract

Ensuring the quality of knowledge is the most important task of a learning system at any level and stage of education. The COVID-19 pandemic made its own adjustments to the organization of education, requiring a transition from traditional to distance learning in the shortest possible time. Under the new conditions, adaptive knowledge control, which takes into account the individual knowledge level of each trainee, has become especially relevant. The study is devoted to the analysis of the features of adaptive testing, the conditions of its application and the possibilities of web technologies for its organization. The article presents the results of a study aimed at organizing and conducting adaptive knowledge control as one of the means of implementing an individual learning trajectory, and describes algorithms for constructing an individual adaptive testing trajectory for each user. Web technologies and learning management systems currently in use are analyzed in terms of their capabilities for implementing an adaptive testing module. An adaptive testing module has been developed for integration into the LMS Moodle learning management system, built to match the modular organization of the system. The module is based on a Markov random process with discrete states and continuous and discrete time, which makes it possible to implement a test-completion condition together with a function for viewing detailed statistics of each attempt. The problem of developing and implementing an adaptive testing module in an online learning system is considered, and the results of experimental work confirming the effectiveness of the module are presented. Studying this technology and applying it in practice will help future teachers introduce it ever more widely into their professional activities.

1. Introduction

Distance learning finds more and more applications every year, and its introduction has become especially active in recent years in connection with the coronavirus pandemic [1,2,3]. As a result, in many countries of the world, including Russia, many educational institutions switched to distance learning, which revealed problems both in teaching methods and in the control of students’ knowledge. The COVID-19 pandemic, with its forced self-isolation, effectively set up an experiment on changing the way of life of every family: students switched to distance learning, and the functions that parents had delegated to school teachers returned to the home. The result was both a general decline in school performance and a widening of the performance gap within classrooms and between schools. In other words, most students cannot solve, or do not have time to solve, in the allotted time the tasks that their predecessors solved successfully.
According to UNESCO estimates, the closure of educational institutions and the shift to distance learning caused by the COVID-19 pandemic affected 1 billion students [4]. Such a transition places the education system in new conditions associated with determining students’ involvement in the process of acquiring knowledge, mastering new mental processes and skills, and organizing and conducting control and self-control.
The influence of distance learning on the quality of learning has been examined in a number of studies.
L.V. Mammadova and V.V. Nikiforova determined how the transition to distance learning affects the formation of personal qualities in children [5]. The lack of live communication, and of life situations that can arise only at school and that provide a basis for the formation of a child’s personality, may have a negative impact on that personality in the future. The publication considers this risk, as well as the possibility of exerting a positive influence on the formation of personal qualities in younger schoolchildren during distance learning.
N.Y. Marchuk, investigating the psychological and pedagogical features of distance learning, argues that it is able to overcome the shortcomings of traditional education [6]. From the author’s point of view, modern technologies contribute to the development of students’ creative abilities and their desire to learn new things, because more sources of information are available to them.
Problems have been identified in the field of teacher training for the organization and control of students’ knowledge [7]. The control should be carried out taking into account the requirements of differentiated learning, the construction of individual learning trajectories and the organization of adaptive learning.
One of the most important aspects is the control of students’ knowledge.
Control provides feedback between the teacher and the students, which makes it possible to determine the dynamics of mastering the educational material, as well as the level of proficiency in subject knowledge, skills and abilities. The general purpose of knowledge control is to manage the educational and cognitive activities of students. At the same time, three key tasks can be identified:
  • management of the pedagogical process within the framework of repetition, consolidation and generalization of knowledge;
  • management of the pedagogical process based on the assessment of the quality and volume of knowledge received by students;
  • management of the process of formation and development of skills of independent and mutual control of students.
Knowledge assessment in e-learning is most often implemented with the help of computer testing. The simplest implementation of computer testing is a linear form, in which the test is assembled from a certain set of test questions, each of which carries a certain score or weight in the final assessment. The main disadvantage of this form of testing is the relative bias of the resulting evaluation, even though such testing is generally expected to ensure objectivity. In particular, the generated test may turn out to be too difficult or too easy, so that the test results do not reflect the real knowledge of the test taker.
Nevertheless, the development of computer testing and the qualitative characteristics of its linear form have led to the emergence of a new form of testing: adaptive testing. A number of researchers [7,8,9,10] describe the theoretical and methodological foundations of adaptive testing.
From the analysis of this literature, it can be concluded that the advantage of adaptive testing is the possibility of a more complete and accurate determination of the level of knowledge of the subject.
Thus, study [7] describes the methodological system of online training of future teachers of mathematics in Zambia. Using cluster analysis, the authors show that students have different levels of both technical and subject training. Adaptive learning and knowledge control for different clusters will help to increase the effectiveness of online learning.
Mohammed Yakin and Kelly Linden in their work investigated the impact of adaptive online lessons on improving perceived and measurable academic performance, motivation and perception of students in dental education [8]. The authors conclude that adaptive lessons have demonstrated significant potential for improving student engagement, motivation, perceived knowledge, and assessed exam performance.
Russian scientists from Volgograd State Technical University have developed a system that is able to select a problem from the database of problems in accordance with the student’s academic performance, solve it using programmatic reasoning, evaluate answers, determine the causes of errors, show explanatory feedback, ask clarifying questions and show elaborated examples [9]. The authors note that the evaluation of the system showed that its use led to moderate learning success, which was higher among students with low academic performance.
Researchers A. Christodoulou and A. Charoula examined the design and use of an adaptive electronic environment for self-study designed to promote the development of teachers’ knowledge about technological pedagogical content [10]. The authors conclude that the development of adaptive learning technologies should take into account the previous knowledge of students and changing levels of knowledge in the process of constant interaction with a computer system.
Thus, adaptive testing determines the methods of selecting test tasks with specified characteristics (difficulty level, maximum score, etc.), the procedure for presenting tasks to the test subject, and the subsequent assessment of the subject’s level of readiness based on the results of each of the presented components. Consequently, there is a need for design based on the principles of the development of adaptive teaching practices in various subjects [11].
The relationship between the difficulty level of a test task and the readiness of the test subject can be described by models based on fuzzy mathematics [12], the logistic models of Rasch and Birnbaum [13], Bayesian networks [14,15], Markov networks [16], Petri nets [17], and finite automata [18].
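For orientation (the formula itself is not given in the source), the one-parameter Rasch model expresses the probability of a correct answer through the difference between the subject’s ability θ and the task difficulty b, both measured in logits:

$$P(\text{correct} \mid \theta, b) = \frac{e^{\theta - b}}{1 + e^{\theta - b}}$$

When θ = b the probability equals 0.5, so tasks whose difficulty is close to the current ability estimate are the most informative; this is the general rationale behind adaptive task selection.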
The task of organizing and conducting adaptive testing arises. To do this, it is necessary to develop a high-quality educational product that will automate the process. Teachers need to understand the basic principles of adaptive learning and the development of adaptive tests.

2. Materials and Methods

2.1. Adaptive Testing

Adaptive testing is a set of processes for forming an adaptive test and monitoring the results of its execution. An adaptive test is a test in which each subsequent task is selected, based on the response to the previous task, from a predefined set (called, for example, a bank of questions). An attempt at an adaptive test continues until the specified accuracy of the assessment of the subject’s knowledge is achieved [19].
The use of adaptive testing makes it possible to increase the effectiveness of the assessment of the test subject due to [18]:
  • a better-matched selection of task characteristics corresponding to the ability level of the test subject;
  • providing tasks in sufficient quantity;
  • the order and speed of presentation of tasks.
Adaptive testing is built using a strategy that determines the algorithm for choosing the next task. It is customary to distinguish two strategies for selecting tasks: two-step and multi-step. With a two-step strategy, at the first stage a group of test takers is given the same input test, and on the basis of its results an adaptive test is formed at the second stage. Multi-step strategies are divided into fixed-choice and variable-choice strategies. In a fixed-choice strategy, a fixed set of tasks with an individual trajectory of movement through them is generated for the test takers. A variable-choice strategy assumes that tasks are selected from a certain set according to an algorithm that predicts the difficulty of the next task more accurately, based on the result of the previous one.
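A rough sketch of the first stage of a two-step strategy (illustration only; the function name, the mapping rule and the 0/1 answer encoding are assumptions, not part of the source): the shared input test is scored, and the score is mapped to the starting difficulty level for the subsequent adaptive stage.

```python
def starting_level_from_input_test(input_test_results, n_levels):
    """Two-step strategy, stage 1: map the share of correct answers on a common
    input test (1 = correct, 0 = incorrect) to a starting difficulty level
    for the adaptive stage."""
    share_correct = sum(input_test_results) / len(input_test_results)
    return min(int(share_correct * n_levels), n_levels - 1)

# Example: 7 of 10 input-test answers correct -> start the adaptive stage at level 3 of 0..4.
print(starting_level_from_input_test([1, 1, 1, 0, 1, 1, 0, 1, 0, 1], 5))
```

The variable-choice multi-step strategy, by contrast, re-estimates the difficulty after every answer; the Markov model described next formalizes exactly this re-estimation.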
To describe adaptive testing, appropriate mathematical models can be used; they also serve to estimate the probabilities of the subject’s ability level and to choose the difficulty of the next task during testing. One such model is a Markov random process with discrete states and continuous and discrete time [16].
Within the Markov random process model, the difficulty of the test being performed is measured in logits. A logit is a unit of measurement of both the difficulty of test tasks and the abilities of the subjects.
The logit takes one of the admissible values of a certain range, and these values correspond to individual states x_k, where k = 0, 1, ..., n. The test subject is in one state or another with some probability, and during the testing process transitions occur, according to certain rules, from one state to another.
When the obtained value is used to assess the difficulty of a question, it characterizes the possibility of performing that task for the entire set of subjects; when it is used to assess ability, it characterizes the results of a particular subject over the entire set of admissible tasks.
The basic idea of adaptive testing is the sequential presentation of tasks to the test subject, with the difficulty level determined by networks or Markov chains [16]. If the subject solves the task while in state x_k, a transition occurs to the next state, x_{k+1}; otherwise, to the previous state, x_{k-1}. At the end of the test, the subject is in some state x* that corresponds to the level of his or her training. This approach makes it possible to select tasks matched to the subject’s level and to differentiate subjects by ability level in the best possible way.
The identification of Markov models is carried out on samples of test subjects, with each ability level considered separately. For each ability level, a unique set of estimates of the model parameters is fixed, which subsequently makes it possible to determine the knowledge index of each subject most accurately. The probability and intensity of transitions between levels depend on characteristics such as the ability level and the difficulty of the task. The number of ability levels is chosen taking into account the sample size of the test subjects and the required accuracy of the results of each applied problem being solved; this number is discrete and sets the resolution of the evaluation of the presented applied problems.
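A minimal sketch of this random walk over difficulty states (an illustration only: the names, the question-bank structure and the termination rule based on a per-level visit limit are assumptions; the authors’ actual implementation is a PHP module for LMS Moodle described below):

```python
import random

def run_adaptive_attempt(question_bank, answer_fn, start_level=2, max_visits_per_level=3):
    """Illustrative attempt over discrete difficulty states x_0 .. x_{n-1}.

    question_bank: dict {difficulty_level: [question, ...]}  (hypothetical structure)
    answer_fn(question, level) -> bool: presents the task and grades the answer (stubbed below)
    Returns the trajectory of (level, correct) pairs and the final state x*.
    """
    n_levels = len(question_bank)
    visits = {level: 0 for level in question_bank}      # how many times each state was entered
    level = start_level
    trajectory = []

    # Stop when the limit of visits to the current difficulty level is exhausted.
    while visits[level] < max_visits_per_level:
        visits[level] += 1
        question = random.choice(question_bank[level])  # variable-choice selection within the level
        correct = answer_fn(question, level)
        trajectory.append((level, correct))
        # Markov-chain transition: x_k -> x_{k+1} on success, x_k -> x_{k-1} on failure.
        level = min(level + 1, n_levels - 1) if correct else max(level - 1, 0)

    return trajectory, level  # the final state estimates the subject's level of training


# Toy usage: a simulated learner who reliably solves tasks up to difficulty level 2.
bank = {k: [f"question {k}.{i}" for i in range(10)] for k in range(5)}
trajectory, final_state = run_adaptive_attempt(bank, lambda q, lvl: lvl <= 2)
print(final_state, trajectory)
```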

2.2. Learning Management System LMS Moodle

Distance learning in modern education requires further development [20]. A special place is occupied by individualization, which can be implemented on the basis of adaptive learning with the use of adaptive testing [21,22,23]. The development of adaptive tests is currently being carried out using various course management systems.
A number of studies have been devoted to the implementation of learning management systems.
In a study by Dusan et al. [24], a detailed review of learning management systems and the best-known LMS platforms is carried out. The authors conclude that e-learning is progressing and constantly improving and can make the educational process more effective.
The study by Husár and Dupláková [25] is devoted to assessing students’ knowledge in LMS conditions using the facility and discrimination indices in foreign language teaching. The authors claim that after evaluating the facility index and the discrimination index, questions and tests for students can be modified, which will improve the quality of training.
Lukanova and Hristova in their work [26] present a pedagogical study on the need to use a “Handbook of Reference Points” for the theoretical material of midwifery students in a digital learning environment. The authors conclude that with the use of this handbook in the virtual room, students increase their practical learning competencies.
Indonesian scientists Yon Rizal, Widya Hestiningtyas and Albet Maydiantoro improved the quality of online English language learning using a contextual learning model [27]. Based on the results of their research, it can be argued that the use of a contextual learning model in the study of professional English contributes to the quality of knowledge.
Thus, it can be argued that the use of learning management systems in the educational process contributes to improving the quality of education in various fields.
As of 2022, the most popular LMS in Russian education is Moodle.
Moodle is an open course management system (or learning management system), freely distributed under the GNU GPL license. The main purpose of Moodle is to create websites on the Internet for various ways of organizing training: distance learning, blended learning and self-study.
The Moodle system uses a block-modular learning principle. Its essence lies in the fact that the content of training is organized into modules, which are autonomous organizational and methodological blocks. The content and scope of the modules vary depending on the didactic goals, the differentiation of students and the individual trajectory of the course.
The Moodle system has a number of advantages:
  • the ability to customize the system for the features of an educational institution or project;
  • modularity, which allows users to implement their own modules;
  • the possibility of obtaining complete information about learners’ activity;
  • variety of course elements;
  • the communicativeness of the environment, expressed in the possibility of communication between a course participant and fellow students (via a forum, personal messages, etc.);
  • user-friendly interface.
The disadvantages include:
  • the lack of high-quality technical support, which requires having a Moodle specialist on staff;
  • lack of proctoring.
Despite these shortcomings, the Moodle system has become widespread around the world. The system is divided into relatively independent parts, called modules, which have well-defined functionality and a set of interfaces.
The main functionality of the system is provided in the form of plug-ins. The module is one of the most important types of plug-in, since it is with the help of modules that the activities in courses are provided. A module defines the activity of all participants in the educational process: with its help, students perform the teacher’s tasks and provide the results of their work, while teachers can study the necessary material, evaluate students and obtain statistics on them. Examples are the modules “Test” (quiz), “Forum” (forum), “Lecture” (lesson) and others.
An analysis of the “Lecture” module revealed the possibility of defining different testing trajectories and individual completion points that take into account the answers to test questions, which allows the module to be regarded as a special case of adaptive testing. This feature is implemented using so-called “clusters”, which group questions into sets. Depending on the correctness of the answer, a question is randomly selected from one or another cluster, and questions are not repeated within an attempt. The limiting factor is that insufficiently complete statistics are kept on the subjects’ attempts, primarily because the main purpose of the module is training, not control.
In the course of analyzing the works presented above, existing software implementations of adaptive testing for LMS Moodle were reviewed. Each implementation satisfies the comparison criteria (individual trajectory, high accuracy of knowledge assessment, detailed statistics, compatibility with LMS Moodle, usability) to a different degree, as shown in Table 1.
The table shows that the “Test” module does not meet one of the main criteria: it does not allow an individual trajectory of movement through the test questions to be formed for the user. The standard “Lecture” module meets fewer criteria than the third-party solutions considered, which indicates the expediency of using the latter. Of the third-party implementations, only one provides detailed statistics, and for the last implementation considered it is not known in what volume and form it provides statistics.

2.3. The Main Processes of Adaptive Testing Organization

For the development of the Adaptive Testing module, the main processes performed by this module were identified and described.
Based on the user ID and the user’s role in the course, the system determines the functionality available to the user (Figure 1):
  • The process of creating and filling the test takes place if the available functionality allows this action. After these actions are performed, the test can be taken.
  • The test is available for taking if its restrictions allow it.
The process of creating and filling the test is presented in detail in Figure 2. The basic test settings are determined by the teacher. After that, questions from the question bank are added to the test. For each added question, its difficulty is specified and, if necessary, a maximum score, initially equal to the default score for that question. The output is the generated test. For the “Teacher” role, functionality is defined that allows the following actions:
  • define basic settings;
  • develop a bank of questions;
  • perform the setup and generate the test.
The input data are the test ID and the allowed actions. If the allowed actions and the test settings permit, the user starts a test attempt. The system selects a question and waits for an answer from the user, after which the answer is checked for correctness and the difficulty of the next question is determined. The current state of the attempt parameters is checked and compared with the test settings, and depending on the result of the comparison a transition is made either to the process of completing the attempt or to the process of selecting a question, with the new difficulty passed on. The process of taking the test is considered in detail in Figure 3.
Testing provides different capabilities for each participant in the process. For example, the system settings allow a trainee to start a test in accordance with the permitted actions, and an identifier is assigned to the attempt. Further actions are performed under the control of the system. The test settings determine the choice of questions and the checking of the answers to the questions posed. The next step determines not only the correctness of the answers but also the difficulty of the next question. The adaptive test checks the system settings, then chooses the direction of the transition by difficulty and compares the resulting trajectory of the assessment of the student’s level with his or her answers. At the final stage, the attempt is completed, and the assessment of each stage and the final assessment are formed.
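A compact sketch of this per-answer control flow (the type and function names are assumptions introduced for illustration; the actual module implements this logic in PHP inside LMS Moodle): after each answer the attempt state is updated and compared with the test settings to decide whether to select the next question or to complete the attempt.

```python
from dataclasses import dataclass, field

@dataclass
class AttemptState:
    """Hypothetical per-attempt record: current difficulty, score and trajectory."""
    difficulty: int
    score: float = 0.0
    trajectory: list = field(default_factory=list)
    transitions: dict = field(default_factory=dict)  # transitions made to each difficulty level

@dataclass
class TestSettings:
    """Mirrors the advanced settings described later: starting and maximum difficulty
    and the maximum number of transitions to a question of each difficulty."""
    starting_difficulty: int
    maximum_difficulty: int
    max_transitions: int

def process_answer(state: AttemptState, settings: TestSettings,
                   correct: bool, question_score: float) -> str:
    """Update the attempt after one answer; return 'continue' or 'finish'."""
    state.trajectory.append((state.difficulty, correct))
    if correct:
        state.score += question_score
        state.difficulty = min(state.difficulty + 1, settings.maximum_difficulty)
    else:
        state.difficulty = max(state.difficulty - 1, 0)
    visits = state.transitions.get(state.difficulty, 0) + 1
    state.transitions[state.difficulty] = visits
    # Completion condition: the limit of transitions to the new difficulty is exhausted.
    return "finish" if visits > settings.max_transitions else "continue"

# Usage: one correct answer moves the attempt from the starting difficulty one level up.
settings = TestSettings(starting_difficulty=2, maximum_difficulty=4, max_transitions=3)
state = AttemptState(difficulty=settings.starting_difficulty)
print(process_answer(state, settings, correct=True, question_score=1.0))  # 'continue'
```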
The defining stage is working with the test and possible available actions, which are presented in Figure 4.
The adaptive test compares each step with the difficulty level reached by the test subject and corrects the trajectory of movement. As a result, the test is adapted to the individual achievements of the trainees. The diagram of entities and the relationships between them is presented in Figure 5.
The ER diagram of the database for the adaptive testing module was prepared in MySQL Workbench 6.3 CE using the IDEF1X notation and is presented in Figure 6.
The tables located on the “Tables for Adaptive Quiz” layer belong to the module being developed, the rest of the tables belong to the standard tables of the Moodle database.
Adaptive testing allows the teacher to change the test questions, add and edit questions, and adjust the transitions between difficulty levels, tuning the system to an individual trajectory for assessing the trainees’ knowledge.
The developed software module makes it possible to organize the creation and taking of adaptive tests according to the Markov model in LMS Moodle 2.8, with the option of subsequently viewing detailed statistics of the attempts. The module was implemented on a server running Apache 2.4 and PHP 5.6 with the MariaDB DBMS version 5.5.5, and its CSS follows the basic requirements of LMS Moodle 2.8.
During the development of the module, standard Moodle software interfaces were used [20], providing secure methods for working with the system:
  • Form API: an interface for creating forms;
  • Page API: an interface for setting up a page and connecting JS scripts;
  • Output API: an interface for generating the HTML code of a page;
  • String API: an interface for localized text display;
  • Access API: an interface for managing existing access rights and creating new ones;
  • Availability API: an interface for managing access to course elements;
  • Data manipulation API: an interface for working with the Moodle database;
  • Question API: an interface for working with questions from the question bank;
  • Gradebook API: an interface for working with user grades in the gradebook.
The developed module includes 6 forms:
  • form of the main settings of the module;
  • form of advanced module settings;
  • form of the main page of the module;
  • form of passing the test;
  • form for viewing the test attempt;
  • form for viewing the statistics report on all testing attempts of selected user groups.
The file system structure of the adaptive testing module for LMS Moodle is presented in Figure 7.
The developed adaptive testing module supports work by several types of user: a teacher, a programmer and an administrator. A user with the “teacher” role can add the module to the course page by performing the following actions: turn editing on, add an activity or resource, add the “Adaptive Test” module, configure the module and save the changes. Further settings are made in accordance with the capabilities implemented in the system. As a result, the adaptive testing module, with the name specified in the settings, will be displayed on the course page in the specified topic. Further configuration is performed by each user according to his or her requirements.
The module settings are edited in the editing mode as a teacher by clicking on the “Edit” link and selecting the “Edit settings” item in the menu that appears.
On the editing page that opens, the teacher can:
  • Change the name of the test and the maximum score for the test in the “Test name” and “Maximum score” fields, respectively.
  • Set the opening and closing dates of the test by activating the “Enable” checkbox for the “Test opens” and “Test closes” fields.
  • Select the number of decimal places displayed in the test scores and in the question scores using the “Decimal places in Grades” and “Decimal places in Question Scores” fields, respectively.
  • Specify other settings that are standard according to the official LMS Moodle documentation.
The test settings can be saved with a return to the current course page using the “Save and return to course” button, or saved with a transition to the main test page using the “Save and Show” button. Changes are cancelled with the “Cancel” button.
A page with the advanced module settings form is available to the teacher; it contains three areas:
  • The first area allows the teacher to change the maximum and starting difficulty and the maximum score by filling in the fields “Maximum difficulty”, “Starting difficulty” and “Maximum score”.
  • The second area contains fields for specifying the maximum number of transitions to a question of each complexity.
  • The third area includes a button for adding questions and a table with the test results.
Preparing teachers to work with the adaptive testing module will solve the important task of constructing an individual trajectory for testing trainees’ knowledge. To do this, the teacher needs to be able to work with the module: enter lists of trainees, fill in the content of the tests taking adaptivity into account, and configure the module to work with the corresponding test.

3. Results

The adaptive testing module for online training was tested at the Russian State Agrarian University—Moscow Timiryazev Agricultural Academy and at Mari State University.
The pedagogical experiment was aimed at confirming or refuting the following hypothesis: introducing an adaptive testing module that takes into account the individual characteristics of trainees/students into the educational process during online learning has a positive effect on the quality of learning.
The authors also considered the possibility of assessing the impact of the module on the quality of training in a separate lesson and when checking knowledge after studying a certain topic or section.
The experiment was conducted both during the explanation of new material and during seminars and intermediate control in the discipline “Artificial Intelligence” taught online.
The quality of training was measured in an online exam in the discipline “Artificial Intelligence”. In total, more than 100 students took part in the experiment. The experimental group (25 students) used the adaptive testing module in the educational process during online training. The control group (23 students) did not use adaptive testing.
At the initial stage of the pedagogical experiment (before the introduction of the adaptive testing system), it was verified that the two groups did not differ statistically and that their grades followed a normal distribution. For this purpose, the results of the intermediate certification in the disciplines “Mathematical Analysis”, “Probability Theory and Statistics” and “Programming” were used. These disciplines were selected as the most important prerequisites for the discipline “Artificial Intelligence”.
The results of the intermediate attestation in these disciplines are presented in Table 2. The results are presented on a three-point scale.
To check for normal distribution, the 3σ rule was used: if a random variable is normally distributed, the absolute value of its deviation from the mathematical expectation does not exceed three times the standard deviation.
The following values were obtained for our samples: for the experimental group, X̄_e = 3.96, σ = 0.73, interval (X̄_e − 3σ; X̄_e + 3σ) = (1.76; 6.16); for the control group, X̄_c = 3.96, σ = 0.70, interval (X̄_c − 3σ; X̄_c + 3σ) = (1.83; 6.07).
All elements of the samples fall within these intervals. Therefore, we can say that the samples have normal distributions with a probability of 0.9973.
To show that the samples are statistically indistinguishable, Student’s t-test is used. To do this, it is first necessary to determine whether the samples have equal or different variances using Fisher’s F-test. If the empirical value Femp is less than the critical value Fcrit at the significance level α = 0.05, the samples have statistically equal variances. All calculations were carried out using the MS Excel Data Analysis tool.
For our samples, we obtain Femp = 1.08 and Fcrit = 2.02. Since Femp < Fcrit, the samples have equal variances.
Student’s t-test for samples with equal variances is now applied. If the empirical value temp is less than the critical value tcrit at the significance level α = 0.05, the samples are statistically indistinguishable.
For our samples, we obtain temp = 0.02 and tcrit = 0.98. Since temp < tcrit, the samples before the introduction of the adaptive testing system are statistically indistinguishable.
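For illustration only (the authors report using the MS Excel Data Analysis tool; the critical values below may differ in the last decimal place because of rounding), the same pre-experiment check can be reproduced in Python with SciPy using the grades listed in Table 2:

```python
import numpy as np
from scipy import stats

# Grades from Table 2 ("Probability Theory and Statistics"): experimental (e) and control (c) groups.
e = [int(d) for d in "4435454334455433344445345"]
c = [int(d) for d in "44434554453343344554344"]

# 3-sigma rule: every grade should lie within mean +/- 3 * sample standard deviation.
for name, g in (("experimental", e), ("control", c)):
    m, s = np.mean(g), np.std(g, ddof=1)
    print(name, round(m, 2), round(s, 2), (round(m - 3 * s, 2), round(m + 3 * s, 2)))

# Fisher's F-test for equality of variances: compare F_emp with the critical value at alpha = 0.05.
f_emp = np.var(e, ddof=1) / np.var(c, ddof=1)
f_crit = stats.f.ppf(0.95, len(e) - 1, len(c) - 1)
print("F:", round(f_emp, 2), "critical:", round(f_crit, 2))

# Student's t-test for independent samples with equal variances.
t_emp, p_value = stats.ttest_ind(e, c, equal_var=True)
print("t:", round(abs(t_emp), 2), "p-value:", round(p_value, 2))
```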
Similar results were obtained in the disciplines of “Mathematical Analysis” and “Programming”.
Further, the experimental group was trained using an adaptive learning system, while the control group used traditional methods.
At the end of the training, an intermediate attestation was carried out.
Applying Fisher’s test to compare the variances showed that the groups have statistically identical variances. Student’s t-test for comparing the mean values of the control and experimental groups (for data with equal variances) gave temp (2.53) > tcrit (2.01). Thus, the samples differ statistically significantly, and it can be concluded that introducing an adaptive testing module that takes into account the individual characteristics of trainees/students into the educational process during online training has a positive effect on the quality of training.
The features of adaptive learning and adaptive testing were also studied in the preparation of students, future teachers of mathematics and computer science, in the discipline “Theory and Methodology of Teaching Computer Science”. After becoming acquainted with the example of the developed module, each student was given the task of developing his or her own adaptive testing module. The development was carried out in accordance with the standard stages of software development.
During the development of their adaptive testing modules, students worked as programmer, administrator and teacher. Each of them needed to write program code, create a database and establish connections between the data, creating code for adding the module to the course page by performing the following actions: turn editing on, add an activity or resource, add the Adaptive Test module, configure the module and save the changes. Provision had to be made for each user to configure the test according to his or her requirements.
The main requirement for the implemented adaptive test was advanced module settings allowing the teacher to set the maximum and starting difficulty and the maximum score. The next requirement was the ability to specify the maximum number of transitions to a question of each difficulty. Another requirement was the possibility of adding test questions.
In addition, each student was given the task, in accordance with the work program, of preparing adaptive tests for the sections of mathematics and computer science in which they would undergo pedagogical practice. Adaptive tests were prepared on the topics “Number systems”, “Information and its properties”, “Algorithms”, “Basic algorithmic structures”, “Loops”, “Modeling in spreadsheets”, “Databases”, etc.
The application of the developed modules in practice during online training has shown the effectiveness of adaptive testing to improve the quality of training.
The introduction of adaptive testing into knowledge control and the assessment of the quality of students’ knowledge will allow the teacher to take the individual abilities of children into account. This approach is aimed at obtaining a more objective assessment and eliminating psychological discomfort for the trainees.

4. Discussion

Thus, the developed module provides the following functions:
  • The ability to create a test, specify and further change its basic settings: specifying the name of the test, the maximum score for the test, the date and time of opening and closing the test, standard settings determined by the system.
  • The ability to specify and further change the breakpoints of the test attempt (advanced settings): starting difficulty, maximum difficulty, maximum number of transitions to each difficulty level.
  • The ability to add questions to the test, specify the difficulty and maximum score for them, remove questions from the test.
  • The ability to take a test attempt, preview an attempt, and continue the last unfinished attempt.
  • The ability to view one user’s attempt with an indication of the start and end time of the attempt, the duration of execution, the number of points scored, the score received, the maximum difficulty achieved, the complexity of the last question, the number of questions of starting complexity, the trajectory of movement by difficulty levels.
  • The ability to view a report on all attempts of a selected group of users, presented in the form of a table and a histogram.
  • The ability to display the final assessment of the adaptive test in the standard gradebook module.
There are ready-made tools for adaptive testing available on the Internet, and each solution has its own qualitative characteristics and properties. Building a testing system using the adaptive testing method makes it possible to obtain a very flexible and customizable system that takes into account the individual characteristics of the test takers. It is important that the system meet the following criteria:
  • The implementation should provide flexible settings for forming an individual trajectory of movement through the questions in accordance with the selected model. For example, such settings may include the difficulty level of the first question in an attempt, restrictions on the number of transitions to difficulty levels and others, depending on the adaptive testing model. These settings determine how closely the level of knowledge calculated after the test attempt corresponds to the subject’s real level of knowledge, that is, how accurate the qualitative assessment of the subject’s knowledge is.
  • Adaptive testing assumes greater accuracy in the qualitative assessment of knowledge. To analyze the test results, it must be possible to view detailed statistics on the results; the availability of such statistics is also a criterion.
  • The possibility of integrating the software in question into the Moodle system.
The application of the module in practice has shown its effectiveness among students and trainees.

5. Conclusions

Digital learning will be introduced into the education system more and more every year. In many programs and courses, the main part of educational activity is transferred from the classroom to individual work, the interaction between student and teacher is reduced, and students must regulate their own learning.
The traditional model of full-time education is no longer able to meet all the educational needs of society. To meet this diversity of needs, new, more effective and flexible forms of learning and assessment need to be introduced. In the study presented above, a successful attempt was made to improve the quality of education in the digital environment.
The authors solved two important pedagogical tasks:
  • We have developed a software product that has improved the quality of learning in a digital environment.
  • Based on the developed module of adaptive testing, we have developed and implemented a training program for future teachers of mathematics and computer science to create similar products and introduce them into the educational process.
This research makes a significant contribution to the education system and contributes to the practical understanding and implementation of adaptive learning systems in the educational process. Firstly, the authors show that it is possible to improve learning management systems, in particular LMS Moodle, making them more effective for learning at all levels of education. This is confirmed by the pedagogical experiment. Secondly, the methodology of developing adaptive testing systems is implemented in the training of future teachers of mathematics and computer science. This will significantly improve the quality of online learning for a wider range of trainees. The authors argue that new information technologies, such as artificial intelligence technologies, must be introduced into the education system. At the same time, it is important to note that educational products that improve the quality of education should not only be used in the education system, but also serve as demonstration examples. Their architecture and program codes should be studied in the preparation of future teachers. This will significantly improve the quality of education in general.

Author Contributions

Conceptualization, I.B. and P.N.; methodology, P.N.; software, P.N.; validation, I.B.; formal analysis, I.B. and P.N.; data curation, I.B.; writing—original draft preparation, I.B. and P.N.; writing—review and editing, P.N.; visualization, I.B.; supervision, P.N.; project administration, I.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Iglesias-Pradas, S.; Hernández-García, Á.; Chaparro-Peláez, J.; Prieto, J.L. Emergency Remote Teaching and Students’ Academic Performance in Higher Education during the COVID-19 Pandemic: A Case Study. Comput. Hum. Behav. 2021, 119, 106713. [Google Scholar] [CrossRef] [PubMed]
  2. Scherer, R.; Howard, S.K.; Tondeur, J.; Siddiq, F. Profiling Teachers’ Readiness for Online Teaching and Learning in Higher Education: Who’s Ready? Comput. Hum. Behav. 2021, 118, 106675. [Google Scholar] [CrossRef]
  3. Daumiller, M.; Rinas, R.; Hein, J.; Janke, S.; Dickhäuser, O.; Dresela, M. Shifting from Face-To-Face to Online Teaching during COVID-19: The Role of University Faculty Achievement Goals for Attitudes Towards This Sudden Change, and Their Relevance for Burnout/Engagement and Student Evaluations of Teaching Quality. Comput. Hum. Behav. 2021, 118, 106677. [Google Scholar] [CrossRef]
  4. UNESCO. Global Education Coalition: A Response to COVID-19 Education. 2021. Available online: https://en.unesco.org/covid19/educationresponse/globalcoalition (accessed on 14 August 2022).
  5. Mammadova, L.; Nikiforova, V. The Influence of Distance Learning on the Formation of Personal Qualities of Younger Schoolchildren. Mod. Pedagog. Educ. 2020, 10, 100–104. [Google Scholar]
  6. Marchuk, N. Psychological and Pedagogical Features of Distance Learning. Pedagog. Educ. Russ. 2013, 4, 73–85. [Google Scholar]
  7. Mulenga, E.M.; Marbán, J.M. Prospective Teachers’ Online Learning Mathematics Activities in the Age of COVID-19: A Cluster Analysis Approach. Eurasia J. Math. Sci. Technol. Educ. 2020, 16, 1872. [Google Scholar] [CrossRef]
  8. Yakin, M.; Linden, K. Adaptive E-learning Platforms Can Improve Student Performance and Engagement in Dental Education. Eur. J. Dent. Educ. 2021, 85, 1309–1315. [Google Scholar] [CrossRef]
  9. Sychev, O.; Penskoy, N.; Anikin, A.; Denisov, M.; Prokudin, A. Improving Comprehension: Intelligent Tutoring System Explaining the Domain Rules When Students Break Them. Educ. Sci. 2021, 11, 719. [Google Scholar] [CrossRef]
  10. Christodoulou, A.; Charoula, A. Adaptive Learning Techniques for a Personalized Educational Software in Developing Teachers’ Technological Pedagogical Content Knowledge. Front. Educ. 2022, 7, 789397. [Google Scholar] [CrossRef]
  11. Hong, H.-Y.; Chai, C.S. Principle-Based Design: Development of Adaptive Mathematics Teaching Practices and Beliefs in a Knowledge Building Environment. Comput. Educ. 2017, 115, 38–55. [Google Scholar] [CrossRef]
  12. Chrysafiadi, K.; Troussas, C.; Virvou, M. Combination of Fuzzy and Cognitive Theories for Adaptive E-assessment. Expert Syst. Appl. 2020, 161, 113614. [Google Scholar] [CrossRef]
  13. Sumin, V.; Kravchenko, A.; Riabinin, V. Adaptive Testing. Logic Models of Rash and Birnbaum. Bull. Voronezh State Tech. Univ. 2012, 6, 35–38. [Google Scholar]
  14. Plajner, M.; Vomlel, J. In Proceedings of the Eighth International Conference on Probabilistic Graphical Models. Proc. Mach. Learn. Res. 2016, 52, 403–414. [Google Scholar]
  15. Plajner, M.; Vomlel, J. Monotonicity in Bayesian Networks for Computerized Adaptive Testing. In Proceedings of the 14th European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty, Lugano, Switzerland, 10–14 July 2017. [Google Scholar] [CrossRef]
  16. Carlon, M.K.J.; Cross, J.S. Knowledge Tracing for Adaptive Learning in a Metacognitive Tutor. Open Educ. Stud. 2022, 4, 206–224. [Google Scholar] [CrossRef]
  17. Dai, J.-Y.; Yeh, K.-L.; Kao, M.-T.; Yuan, Y.-H.; Chang, M.-W. Applying Petri-Net to Construct Knowledge Graphs for Adaptive Learning Diagnostics and Learning Recommendations. J. Res. Educ. Sci. 2021, 66, 61–105. [Google Scholar] [CrossRef]
  18. Nikitin, P. Automation of Quality Management of Students Education in Conditions of Level Differentiation. In Proceedings of the 30th International Business Information Management Association Conference, Madrid, Spain, 8–9 November 2017; pp. 5533–5538. [Google Scholar]
  19. Shukla, A.; Singh, P.; Vardhan, M. An Adaptive Inertia Weight Teaching-Learning-Based Optimization Algorithm and Its Applications. Appl. Math. Model. 2020, 77, 309–326. [Google Scholar] [CrossRef]
  20. Vachkova, S.N.; Vachkov, I.V.; Klimov, I.A.; Petryaeva, E.Y.; Salakhova, V.B. Lessons of the Pandemic for Family and School—The Challenges and Prospects of Network Education. Sustainability 2022, 14, 2087. [Google Scholar] [CrossRef]
  21. Vadivel, B.; Mathuranjali, M.; Khalil, N.R. Online Teaching: Insufficient Application of Technology. Mater. Today Proc. 2021; in press. [Google Scholar] [CrossRef]
  22. Delgado-Gómez, D.; Laria, J.C.; Ruiz-Hernández, D. Computerized Adaptive Test and Decision Trees: A Unifying Approach. Expert Syst. Appl. 2019, 117, 358–366. [Google Scholar] [CrossRef]
  23. Čisar, S.M.; Čisar, P.; Pinter, R. Evaluation of Knowledge in Object Oriented Programming Course with Computer Adaptive Tests. Comput. Educ. 2016, 92–93, 142–160. [Google Scholar] [CrossRef]
  24. Dusan, M.; Dupláková, D.; Duplák, J.; Mitaľová, Z.; Radchenko, S. Implementation of Industry 4.0 Using E-learning and M-learning Approaches in Technically-Oriented Education. TEM J. 2021, 10, 368–375. [Google Scholar] [CrossRef]
  25. Husár, J.; Dupláková, D. Evaluation of Foreign Languages Teaching in LMS Conditions by Facility and Discrimination Index. TEM J. 2016, 5, 44–49. [Google Scholar]
  26. Lukanova, Y.; Hristova, T. Pedagogical Study on the Necessity of a “Handbook of Reference Points” in the Digital Learning Environment. SAR J. 2022, 5, 46–51. [Google Scholar] [CrossRef]
  27. Rizal, Y.; Hestiningtyas, W.; Maydiantoro, A. Implementation of Contextual Learning Model Efforts to Improve the Quality of Online Learning of Professional English Students. SAR J. 2021, 4, 167–174. [Google Scholar] [CrossRef]
Figure 1. Generalized scheme of the system operation process.
Figure 2. Diagram of the process of creating and filling the test.
Figure 3. The scheme of passing the test.
Figure 4. Generalized algorithm of the system.
Figure 5. Entities and connections between them.
Figure 6. ER-database diagram for the adaptive testing module.
Figure 7. The structure of the file system of the adaptive testing module.
Table 1. Criteria for comparison.
Implementation | Individual Trajectory | High Accuracy of Knowledge Assessment | Detailed Statistics | Compatibility with LMS Moodle | Usability
Standard module “Test” | − | +/− | + | + | +
Standard module “Lecture” | +/− | − | − | + | +
Implementation on the model of neural networks | + | + | +/− | + (as with the accompanying system) | +/−
Blocks “Statistics” and “Library” | + | + | + | + | −
Module for various adaptive testing models | + | Depends on the model used | ? | + | ?
Table 2. Results of intermediate attestation in the discipline “Probability Theory and Statistics”.
Experimental group (e): 4, 4, 3, 5, 4, 5, 4, 3, 3, 4, 4, 5, 5, 4, 3, 3, 3, 4, 4, 4, 4, 5, 3, 4, 5
Control group (c): 4, 4, 4, 3, 4, 5, 5, 4, 4, 5, 3, 3, 4, 3, 3, 4, 4, 5, 5, 4, 3, 4, 4
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
