1. Introduction
Computer-Based Learning Environments (CBLEs) have emerged as a ubiquitous source of education, able to overcome the spatiotemporal constraints of classroom education [1], and institutions of higher education have incorporated CBLEs as a means of expanding their activity [2]. Initially, virtual university campuses were complementary to institutional activities, but in recent years they have become a core component of university work [3]. Nowadays, virtually every university has an e-campus, not only to complement its educational activity, but also to manage relationships within the academic community and to serve as a source of research data [4]. Digital literacy has therefore gone from being a helpful skill to being a compulsory requirement for effective participation in higher education [5].
Within this context, diverse learning experiences take place, presenting challenges not only to students, but also to education providers [6]. CBLEs encompass all applications and services, based on Advanced Learning Technologies, that use computers as cognitive tools and technology-rich learning environments to facilitate the learning process [7]. However, learning in a CBLE places a great deal of responsibility on the student. In general, students have to decide when, what, how, and for how long they learn, because they are often asked to complete learning tasks with little or no support. In other words, CBLEs require students to make additional effort to self-regulate their learning [8,9,10,11]. There is abundant empirical evidence suggesting that learners do not successfully adapt their behavior to the self-regulatory demands of CBLEs [3,12,13,14], and that students lacking self-regulation skills may experience cognitive overload, usability problems, and distractions, potentially resulting in lower learning gains [15]. In this scenario, little is known about the specific difficulties faced by Students with Learning Disabilities (SLDs) in learning processes involving CBLEs, and there are no specific intervention actions designed to reduce the effect of these self-regulatory requirements [16], bearing in mind that such disabilities share some commonality in terms of metacognitive and self-regulatory malfunctioning [17,18,19,20]. This is worth addressing given the increasing numbers of SLDs who are accessing post-compulsory education (vocational training, university, etc.), and specifically higher education [21,22,23]. It is essential to recognize that if self-regulation is crucial for learning in virtual learning environments for typical students, it is even more important for students who have some kind of learning disability [24,25]. Research has shown that although they might overcome some of their difficulties, most continue to manifest behaviors characteristic of learning difficulties as adults [26].
In this regard, the pedagogical design of virtual learning environments plays a key role in student performance, but as the European Commission has noted, not all of these environments are properly designed [27]. Research is essential for producing standards and guidelines for designing effective CBLEs, together with theoretically grounded and empirically based aids [28].
Within this context, research on self-regulation during learning is one of the most notable trends in the field, together with work on the CBLE design features that can support self-regulated learning processes [15,28,29], although SLDs have largely been left outside this line of research.
Research findings highlight the fact that metacognitive and self-regulation skills can be developed by using scaffolding embedded in virtual environments, helping students to maintain motivated and self-regulated behavior [3,30,31]. In this sense, one current line of study focuses on the learning process that takes place in virtual environments in which some kind of scaffolding to improve self-regulated learning (SRL) behavior is provided.
Scaffolding consists of providing students with specific guidance that, acting in their zone of proximal development [32], can assist them in autonomously carrying out certain activities in pursuit of a learning goal, making it possible for them to control and regulate their cognitive processes [29]. Scaffolding was the term used by Wood, Bruner, and Ross to describe support and control over the learning process [33]. Narrowing the subject down to CBLEs, many researchers have included scaffolding as a teaching strategy in virtual learning environments, and research findings highlight the specific beneficial role of dynamic adaptive scaffolding, as providing more accurately targeted support leads students to more effective use of learning strategies and to better learning gains [9,34,35].
One of the most popular forms of scaffolding in Virtual Learning Environments (VLEs) is prompts [36]. A prompt is any kind of stimulus presented to the learner to increase the likelihood of producing a response [37]. Prompts are very popular in the SRL research field, since a systematic review by Devolder, Van Braak, and Tondeur [15] concluded that they are the most effective type of SRL training in CBLEs such as hypermedia.
In this study, we implemented an Intelligent Tutoring System (ITS) designed to model, trace, and foster students’ self-regulated learning in CBLEs. The system provides conceptual, procedural, metacognitive, and strategic scaffolds through prompts delivered by animated pedagogical agents. Pedagogical agents are virtual characters equipped with artificial intelligence whose purpose is to facilitate students’ learning processes in CBLEs [38]. They can take various forms, from 2D characters to 3D human-like characters, with the latter being more effective in promoting learner engagement [39] and the form adopted in the current ITS. We used 3D human-like avatar technology and an indirect approach in which pedagogical agents track students’ SRL behavior and provide prompts on that basis [40,41,42].
This study examines the use of SRL strategies in two groups of students learning with an ITS: a normative group of university students, for whom self-regulatory scaffolding is known to be beneficial, and a group of university SLDs. Although the ITS was not specifically designed for the latter population, our intention was to explore the gap in remedial action for this group, with the goal of promoting self-regulatory processes in CBLEs. Research has consistently documented SLDs’ deficits in deploying effective self-regulation. In recent reviews of the research on reading, writing, mathematics, and subject-area learning for SLDs [43], authors emphasize how these students struggle not just with “basic processing” problems, but also with the higher-order self-regulatory processes that are so essential to successful performance. When these students enter higher education, they face significant difficulties: their impaired executive functioning hinders planning, inhibition, and time organization and management [44,45,46]. They exhibit low self-regulation and self-efficacy [47,48,49], apply ineffective learning strategies [50], and hold negative self-perceptions [51,52]. Remedial intervention with this group is feasible, because research has shown that high-achieving students with disabilities compensate for their difficulties by applying self-regulated learning strategies [53] and could benefit significantly from remedial education provided through CBLEs [54]. Paradoxically, unlike at earlier educational levels and with younger students, there is hardly any evidence-based self-regulatory scaffolding intervention for SLDs learning in CBLEs [55]; we therefore consider them a group of particular interest for the results of ITS implementation.
At the same time, a recent literature review found that most of the information collected about SLDs in higher education comes from interviews and self-report questionnaires [56]. Both techniques, although very valuable, are not sufficient to accurately assess self-regulation. The importance of scales and interview methodology for measuring these processes is undeniable [57], but so are the associated problems of validity [57,58] and their incongruence with more innovative assessment methods, such as the one used in this study, which is designed to assess learning as it unfolds.
Specifically, we intend to respond to the following research questions:
Does the Intelligent Tutoring System help students to self-regulate their learning process?
Does the Intelligent Tutoring System help SLDs to self-regulate their learning process even though it is not specifically designed for this purpose?
Additionally, is there any difference between Students with a Learning Disability (SLD) and Students with No Learning Disabilities (SNLD) in terms of the use of SRL strategies during learning with the ITS?
Based on previous research, we hypothesize that the ITS will increase the deployment of SRL strategies, but only in the group of SNLD, due to the additional self-regulatory demands that hypermedia virtual environments involve for students and because SLDs struggle with essential self-regulatory strategies in general.
3. Methodology
3.1. Procedure
Participants attended two sessions carried out individually in the educational psychology laboratory.
In the first session, the researcher explained to each participant the ethical and confidentiality aspects of the study and asked them to read and sign the individual informed consent form. Participants also completed a sociodemographic questionnaire and a pretest on the hypermedia content. Moreover, for students who suspected or were aware of their LD condition, the learning disabilities assessment protocol described in the sample section was applied.
In the second session, we reminded the participants that the session would last approximately 2 h and that they would work in the learning environment while several devices recorded their performance throughout the session. Participants were randomly assigned to the experimental or control condition and completed the learning session, followed by the posttest. The learning session lasted between two and three hours: participants had to study for 90 min, but the timer paused whenever they applied self-regulation strategies. During the learning session, the agents guided the processes in both conditions, but only students in the experimental condition received prompts from the agents to use self-regulation strategies, together with adaptive feedback on that use.
As stated before, each of the four pedagogical agents plays a different role during the session. Guille, the guide, informs participants about the system characteristics and interface in order to help them navigate the learning environment; Guille is also in charge of administering the pretest and posttest knowledge assessments and the self-report measures. Nora, the planner, helps learners to set appropriate subgoals and manage them. Mery, the monitor, helps students to assess their understanding of the content they read during the learning session (for instance, through judgments of learning or feeling-of-knowing expressions). Finally, Ortega, the strategizer, is in charge of supporting students’ use of learning strategies.
In the control condition, the activity of the agents was limited to responding to student-selected actions (depending on the type of action, one agent or another intervenes). In contrast, in the experimental condition, agents appear at the students’ request or on the basis of the adaptive rules embedded in the system. One example of the difference in treatment between the control and experimental groups concerns the Goal Setting strategy: when a student sets a goal that does not correspond to the pretest questions on which they scored lowest, the system does not correct control group students, whereas it suggests to experimental group students that they work on a goal matching those low-scoring areas. Another example occurs when a student takes a quick look to monitor progress towards their goals: the system provides feedback only for the experimental group, not the control group. The descriptive statistics for each group and variable studied are given in Table 2.
These rules provide adaptive scaffolding through the action of the pedagogical agents based on learners’ behavior and responses, since they are designed to scaffold students’ self-regulatory learning processes and understanding of content. In addition, by replying to student responses, the pedagogical agents—in the experimental condition only—provide students with immediate directive feedback about their SRL strategies.
Once a pedagogical agent finishes interacting with the student, it remains visible until a new interaction begins with either the same pedagogical agent or a different one. It is worth noting that learners cannot choose which pedagogical agent they want to interact with, as this is a consequence of the kind of actions they perform.
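As an illustration, an adaptive Goal Setting rule of the kind described above could be sketched as follows. This is a hypothetical sketch only, not MetaTutor’s actual rule engine, and all identifiers (`Condition`, `suggest_goal`, the topic names) are invented for illustration:

```python
# Illustrative sketch of an adaptive Goal Setting rule: the agent suggests
# a goal only for experimental-condition students, steering them toward
# the pretest topic on which they scored lowest. Hypothetical names only.

from enum import Enum


class Condition(Enum):
    CONTROL = "control"
    EXPERIMENTAL = "experimental"


def suggest_goal(condition, pretest_scores, chosen_topic):
    """Return an agent suggestion, or None if the system stays silent.

    pretest_scores maps topic -> score (0-100).
    """
    if condition is Condition.CONTROL:
        return None  # control group receives no corrective feedback
    weakest = min(pretest_scores, key=pretest_scores.get)
    if chosen_topic != weakest:
        return f"Consider working on '{weakest}', your lowest pretest topic."
    return None  # chosen goal already matches the weakest area


scores = {"circulatory system": 40, "heart anatomy": 80}
print(suggest_goal(Condition.EXPERIMENTAL, scores, "heart anatomy"))
```

The same student action thus triggers different system behavior depending only on the assigned condition, which is what makes the two treatments comparable.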
3.2. Study Design and Data Analysis
In this study, we analyzed four continuous random response variables (self-initiated surface SRL strategies, agent-initiated surface SRL strategies, self-initiated deep SRL strategies, and agent-initiated deep SRL strategies) using a multivariate two-way factorial design (participants with and without learning difficulties, and with and without prompts).
One of the most commonly used statistical procedures for examining the relationship between several response variables and one or more categorical predictor variables is multivariate analysis of variance (MANOVA). One of its limitations is that the ordinary least squares estimates, and therefore the subsequent hypothesis tests, are very sensitive to deviations from the underlying assumptions of multivariate normality and homogeneity of the variance–covariance matrices. Therefore, before applying MANOVA, we used tests based on multivariate skewness and kurtosis [73] and Box’s M test [74] for homogeneity of covariance matrices to examine the suitability of the analysis. Given that these assumptions were not satisfied by our data and that sample sizes were very unequal across groups, we required robust procedures for testing the two main effects and the interaction effect on the combination of the four response variables in this non-orthogonal multivariate two-way factorial design.
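For readers wishing to reproduce this kind of assumption checking, the following sketch implements the standard formulas for Mardia’s multivariate skewness and kurtosis tests and Box’s M test in Python. It is an illustrative implementation, not the exact software used in the study:

```python
# Sketch of the two MANOVA assumption checks described above:
# Mardia's multivariate skewness/kurtosis and Box's M test.
import numpy as np
from scipy import stats


def mardia(X):
    """Return p-values for Mardia's multivariate skewness and kurtosis."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))  # ML covariance
    D = Xc @ S_inv @ Xc.T                      # n x n Mahalanobis cross-products
    b1 = (D ** 3).sum() / n ** 2               # multivariate skewness
    b2 = (np.diag(D) ** 2).sum() / n           # multivariate kurtosis
    skew_p = stats.chi2.sf(n * b1 / 6, p * (p + 1) * (p + 2) // 6)
    kurt_z = (b2 - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
    kurt_p = 2 * stats.norm.sf(abs(kurt_z))
    return skew_p, kurt_p


def box_m(groups):
    """Box's M test p-value for homogeneity of covariance matrices."""
    g = len(groups)
    p = groups[0].shape[1]
    ns = np.array([x.shape[0] for x in groups])
    covs = [np.cov(x, rowvar=False) for x in groups]
    pooled = sum((n - 1) * c for n, c in zip(ns, covs)) / (ns.sum() - g)
    M = (ns.sum() - g) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(c)) for n, c in zip(ns, covs))
    c1 = (np.sum(1 / (ns - 1)) - 1 / (ns.sum() - g)) * (
        2 * p ** 2 + 3 * p - 1) / (6 * (p + 1) * (g - 1))
    df = p * (p + 1) * (g - 1) / 2
    return stats.chi2.sf((1 - c1) * M, df)
```

Small p-values from either procedure indicate a violated assumption, which is the situation that motivated the robust MBF procedure described below.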
To avoid the negative impact of the violation of MANOVA assumptions on the multivariate test criteria, in this study we used a multivariate version of the modified Brown–Forsythe (MBF) test statistic developed by Vallejo and Ato [75] to compare the multivariate mean vectors (centroids). Practical implementation of the MBF procedure requires estimating the degrees of freedom of the approximate central multidimensional Wishart distribution, which can be derived by equating the first two moments of the quadratic form associated with the source of variation of interest in the multivariate linear model to those of the central Wishart distribution. In addition, to deal with missing observations in the response variables for one or more subjects, we used the combining rules developed by Vallejo and colleagues [76] to obtain multiple imputation inferences.
On the other hand, semiparametric methods such as generalized linear models (GLMs) provide an attractive alternative to fully parametric methods when the response is bounded, data are missing at random, and the assumption of multivariate normality is deemed untenable [77]. A GLM with a binomial error structure was conducted to assess the ability of the treatment conditions to predict the proportion of ocular fixations (proportions ranged between 0.02 and 0.67). This method allowed us to estimate and directly test the interaction between learning difficulties (with/without) and prompts (with/without).
5. Discussion
This study aims to contribute to SRL research by exploring the use of different kinds of SRL strategies in students with different characteristics. Deekens and colleagues concluded in their literature review that research in the area has mainly focused on the predictive capacity of these processes in isolation, ignoring their interrelationships [80].
The literature has shown how “the depth of strategy learners employ, ranging from surface strategies (e.g., re-reading) to more complex deep strategies (e.g., knowledge elaboration), is also predictive of learning outcomes across contexts and academic domains” [80] (p. 64). Greene and colleagues obtained results suggesting that the depth of strategy use predicts different academic results, with poorer results corresponding to surface strategy use [81]. Similarly, Dinsmore and Alexander found that applying deeper strategies, such as prior knowledge activation, is more effective than applying surface strategies such as re-reading [82].
The analysis performed for this study examined the use of surface and deep SRL strategies in university students with and without LD. We also compared the results of the Experimental Group (EG) with those of the Control Group (CG). The data suggest that the differences observed between groups (SLD vs. SNLD and EG vs. CG) depended on the depth of the strategies used (deep vs. surface) and on whether the strategies were deployed with or without ITS scaffolding.
In general terms, we saw that when surface strategies were used, the differences between SLD and SNLD groups were only significant when their use depended on the help of pedagogical agents, but not if they were self-initiated. SNLD deployed more agent-initiated surface strategies (moderate effect). Something similar happened in the comparison between Experimental and Control groups. The differences were only significant when they were related to agent-initiated surface strategies, with the Experimental Group demonstrating a higher rate (large effect).
The picture changed for deep SRL strategies. Here, the differences between SLD and SNLD were statistically significant, and of moderate size, only when the behavior was self-initiated (without the help of the pedagogical agents), with the SNLD group displaying more deep SRL strategies. These differences disappeared when the external agent initiated and directed the use of the strategies, which means that the training provided by MetaTutor helps SLD students to develop a study process similar to that of SNLD in terms of using a high number of deep SRL strategies. In contrast, when the Experimental Group and the Control Group were compared, differences appeared only when pedagogical agent help was available (with a large effect size), but not when it was absent. The actions of the pedagogical agents are therefore useful in promoting deep SRL strategy use, as the Experimental Group deployed more deep SRL strategies.
Addressing the three initial research questions: First, does the Intelligent Tutoring System help students to self-regulate their learning process? The results show that students in the experimental condition deployed more SRL strategies. Therefore, the answer to this first research question is yes, the ITS helps students to self-regulate their learning process. Similar results have been obtained using the same software in different educational contexts (universities in North America) [41,83,84,85,86]. In addition, researchers testing other tools have also verified that SRL training through CBLEs is effective in increasing and improving SRL strategy use [3,87,88,89,90,91,92,93,94].
Our second question was “Does the ITS help SLDs to self-regulate their learning process even though it is not specifically designed for this purpose?” The results show that SLDs increase their use of deep SRL strategies when prompted by the pedagogical agents. Therefore, the answer to this second question also seems to be yes. These results agree with those of Reed and colleagues [51], who developed a preparation course for freshman students with and without LD (so that course was not specifically designed for students with LD either) and compared the results of SLD with those of SNLD. They found that both groups benefited from the intervention, showing increased attentiveness and increased academic and general resourcefulness after the course.
Our last question was “Is there any difference between SLD and SNLD in terms of the use of SRL strategies during learning with an Intelligent Tutoring System?” We found that SLD relied more on surface strategies and used fewer self-initiated deep SRL strategies than SNLD. Our results agree with the findings of Chevalier and colleagues, who compared metacognitive study and learning strategy use in SLD vs. SNLD and found that the two groups had different profiles of strategy use, which was predictive of their GPA [53]; SLD demonstrated lower strategy use and less metacognitive study than SNLD. Similar results were found by Andreassen and colleagues, who compared students with and without dyslexia and found that students with dyslexia used more visual and social strategies, and found them more useful, than students without dyslexia [47]. Nevertheless, in our study, when prompted, SLD used more deep SRL strategies, engaging in a learning process more similar to that of SNLD. Other studies have shown that SLD have difficulties in areas such as reading, writing, processing information, and organizing content; thus, deploying surface learning strategies such as reading or re-reading is not enough for them to learn particular content [95]. SLD students need to develop other cognitive and metacognitive learning strategies; therefore, an intervention designed to foster deep learning strategies helps SLD to engage in a more complex learning process that leads them towards better academic achievement.
Finally, we must reject our hypothesis; contrary to our expectation that the Intelligent Tutoring System would increase the use of SRL strategies only in the SNLD group, we found that it also increased the use of SRL strategies in the SLD group. In this regard, findings from Reaser and colleagues [96] and Chatzara and colleagues [97] also support the idea that SLD students can benefit greatly from training in the development of SRL processes.
6. Conclusions
We can draw two main conclusions from the results discussed above. On the one hand, the software is effective at providing SRL scaffolding, as it led participants in the experimental condition to use more SRL strategies than participants in the control condition. This agrees with results from other authors [98,99,100] and supports the conclusion of Graesser and colleagues that, to develop effective SRL strategies, most students need some kind of scaffolding [42].
On the other hand, our results agree with other studies showing the suitability of using prompts to support and promote SRL processes in VLEs. Prompts have been proven to be the most effective SRL training method in VLEs [15]. Many other authors have successfully used them in different ways; for instance, Moos and Bonde embedded SRL prompts in videos in a flipped classroom context [99]. Other authors, such as Bannert and colleagues, gave students the chance to design their own metacognitive prompts before learning in a hypermedia learning environment [100]; they found that students in the experimental condition visited relevant pages more often and devoted more time to those pages than students in the control condition.
In our study, the SRL training was provided by human-like pedagogical agents that led students in the experimental condition to apply more SRL strategies, not only in response to prompts but also on their own initiative (self-initiated SRL strategies), compared with students in the control condition. This agrees with results from Azevedo and colleagues, who found, using a CBLE that supported SRL during learning, that participants in the externally regulated condition applied more, and more diverse, SRL strategies than participants in the self-regulated condition [28]; participants in the externally regulated condition were prompted to use SRL strategies, whereas those in the self-regulated condition were not. In a similar study using the same CBLE, Azevedo and colleagues showed that participants in the externally regulated condition achieved greater learning efficacy than participants in the self-regulated condition [40]. Nonetheless, although these kinds of prompts have been shown to be effective, other authors believe that combining this kind of scaffolding with other types, for instance procedural scaffolds or metacognitive feedback, could lead to better results [15].
To summarize, we can conclude that our intervention seems to be appropriate for SNLD students.
In addition, the evidence presented leads us to state that this kind of learning environment is even more helpful for SLD (second conclusion). Our results show that when SLD have tools that facilitate applying SRL strategies, they do so even more than SNLD. This result is all the more significant if we bear in mind the results of Goroshit and colleagues, who, in the context of face-to-face education, found lower levels of self-regulation in SLD [48]. Along similar lines, Andreassen and colleagues, in a study with a sample of 34 SLD and 34 SNLD, used web-based diaries to record study activities and SRL behaviors [47]; their results showed a restricted repertoire of SRL strategies in SLD compared with SNLD.
Those authors worked in traditional learning environments, which leads us to suspect that CBLEs can be helpful for SLD when they are digitally literate, since in our study these students increased their use of deep SRL strategies with the help of the pedagogical agents. In this regard, as highlighted by Cromby, Standen, and Brown [101], CBLEs have three characteristics that make them particularly suitable for SLD: 1. They allow students to make mistakes without public consequences; 2. Students can manipulate the learning environment in ways that are not possible in traditional learning environments; and 3. The rules of the system can be inferred without symbolic systems.
Other authors have obtained results similar to ours. For instance, Erikson and Larwin carried out a meta-analysis of teaching methods for SLD [102]; comparing online methods with traditional methods, they found that students’ performance in online learning environments is significantly better than with offline methods. In addition, Chatzara and colleagues examined the influence of virtual pedagogical agents on SLD [97], carrying out a study which found that students in the experimental condition achieved better results in learning assessments than students in the control condition.
Since authors such as Reed and colleagues have shown that SLDs often feel unprepared for higher education, remedial actions such as ITSs for promoting SRL are strongly recommended [51].
7. Limitations and Future Research Directions
The results of this study must be considered in light of its limitations, the main one being that the sample of SLD was quite small. To gather the SLD sample, we contacted the University Office for People with Specific Needs, which collaborated on the project by informing students in its SLD database about the study. In addition, we posted leaflets advertising the study in most university buildings. Although several attempts were made to increase the response rate, both samples (SLD and SNLD) remained small. Following these attempts, participation in the study was offered in exchange for course credit in two subjects in two different study programs: Educational Psychology in the Teaching program and Social Development in the Psychology program. This allowed us to collect a moderately sized sample, but with few SLD participants.
It would also be very interesting to broaden the variables studied. Motivation, self-efficacy, approach to learning, personal epistemology, and prior knowledge of the learning content are variables that have been shown to play a role in the deployment of SRL strategies. In the near future, we plan to extend the scope of the study to gain a broader perspective on the use of SRL strategies during learning with an ITS.
Our findings have some implications for future research. On the one hand, it is necessary to run the same studies with larger samples (both SLD and SNLD), so that the results presented here can be compared with a more heterogeneous group, improving generalizability. On the other hand, similar studies are also needed in other contexts, including other academic domains (such as law, languages, and engineering), to assess whether students’ performance is domain-general or domain-specific.