Article

Effects of Signaling and Practice Types in Video-Based Software Training

by Vasiliki Ragazou * and Ilias Karasavvidis
Department of Early Childhood Education, University of Thessaly, 38221 Volos, Greece
* Author to whom correspondence should be addressed.
Submission received: 9 May 2023 / Revised: 2 June 2023 / Accepted: 7 June 2023 / Published: 13 June 2023
(This article belongs to the Topic Advances in Online and Distance Learning)

Abstract

Video tutorials are a popular means of learning software applications, but their design and effectiveness have received little attention. This study investigated the effectiveness of video tutorials for software training. In addition, it examined whether two multimedia design principles, signaling and practice types, contribute to task performance, mental effort, and self-efficacy. The study participants were 114 undergraduate students from a nursing department. A 2 (no signals vs. signals) × 2 (video–practice vs. video–practice–video) mixed factorial design was used to test the main study hypotheses. The analysis revealed unique contributions of signaling and practice types to task performance and self-efficacy. Contrary to expectations, however, no combined effect of signaling and practice types was found. The paper concludes with a discussion of the findings and implications for future research.

1. Introduction

There is sufficient evidence that video tutorials for software training can be engaging and enjoyable for users, who often have to become familiar with many software applications [1]. A video tutorial, a type of instructional video, is a powerful tool for learning complex processes in software applications. Typically, it consists of a digital recording of a computer screen, usually accompanied by audio narration [2]. Users seem to prefer video tutorials over paper-based instructions [3]. Moreover, there is increased interest in video tutorials and their effects in educational settings [4]. However, many of the studies evaluating the value of video tutorials report contradictory results, indicating the need for more systematic research in this field. To address this problem, such research should draw on instructional theories grounded in human cognitive architecture [5].
The cognitive theory of multimedia learning (CTML) [6] and the cognitive load theory (CLT) [7] are two frameworks that propose several research-based principles for designing video tutorials. The focus is on how novice users process visuospatial information in relation to working and long-term memory [8]. A basic recommendation to practitioners is to design multimedia materials that take into consideration the limited resources of working memory [9]. This is particularly relevant when the learning materials are complex (high element interactivity), in which case working memory is likely to be overloaded [10].
In dynamic visualizations such as video tutorials, the “transient information effect” is a common phenomenon associated with visuospatial processing [11]. The transient information effect occurs when dynamic visualizations provide a constant flow of information faster than users can allocate their cognitive resources, so they might miss the most relevant aspects of the learning material [1]. Multimedia design principles have been proposed as a means to support learning from video tutorials [12].
One such design principle of potential value is signaling (or cueing). The term denotes visual cues that indicate relevant visuospatial information so that users know where to direct their attention [13]. Another design consideration is practice, here a guided practice sequence in which the demonstration is segmented [14]. This technique helps users retain the incoming information and then apply it in meaningful contexts without exceeding their working memory capacity, which would otherwise lead to cognitive overload [15]. Empirical evidence has shown that these design principles are helpful when (a) the users have relatively low domain knowledge [12], and (b) the main goal is the acquisition of conceptual knowledge [16].
The present study aims to address this research gap by exploring evidence-based design principles for the acquisition of procedural knowledge through video tutorials. In particular, it examines how signaling and practice affect novices’ learning of software applications with the aid of video tutorials. Additionally, this study investigates the cognitive (i.e., mental effort) and motivational (i.e., flow, self-efficacy) effects of signaling and practice in software training. The remainder of this paper presents an experiment that tests the added value of signaling and practice for enhancing task performance with video tutorials.

2. Literature Review

2.1. Learning Software Applications with Video Tutorials

Published studies differ in focus. Some experimental studies compare video against text for the acquisition of conceptual knowledge [17], yet very little is known about the step-by-step acquisition of procedural knowledge. This is reflected in a recent meta-analysis [18], which reported that only four studies had targeted software instruction; all other studies covered concepts from a variety of disciplines unrelated to software training. Learning software applications through video tutorials requires thorough design because a series of actions needs to be performed. Users must therefore engage in the cognitive processes of selecting, organizing, and integrating information, which demand more mental effort from users with low levels of expertise [12].
The effectiveness of video tutorials in software training has not been fully established, as the evidence is inconsistent [19]. Some studies have reported mixed results when examining the learning of common software applications such as word processing or statistics [20,21,22]. Other studies report positive effects of video tutorials for software training [23]. One possible explanation for such inconsistent results may be the functional complexity of software interfaces. According to Leutner [24], a complex interface demands a large number of operations, complex workflows, and time spent visuospatially searching for menus, panels, and tools. As a result, novice users often experience fear and stress when using unfamiliar software applications. Therefore, instructional designers should optimize software training through video tutorials by incorporating practice-based features.

2.2. The Role of Signaling in Instructional Videos

Signaling is a fundamental principle in the CTML [12]. It is used to direct users’ attention to specific elements, as inexperienced users might not automatically recognize which information in a video tutorial is essential and which is of secondary importance. Signaling modes include colored devices (e.g., arrows, geometrical shapes), labeling, flashing, zooming, or eye-tracking [25].
Three recent meta-analyses have corroborated the influence of signaling on knowledge and cognitive load (i.e., the cognitive resources the learner uses to learn or accomplish a task). The first, by Alpizar et al. [26], reported a moderate effect of signaling on learning (d = 0.38). Schneider et al. [27] found a medium effect for retention (g = 0.53), a small to medium effect for transfer (g = 0.33), and a small effect for cognitive load (g = 0.25). Lastly, Richter et al. [28] found a small to medium overall effect size (r = 0.17). In sum, all three meta-analyses confirmed that signaling benefits learning while reducing perceived cognitive load.
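Note that the three meta-analyses report their results on different effect size metrics (Cohen’s d, Hedges’ g, and the correlation r). For a rough comparison, the standard conversions can be applied (a sketch using the usual approximations, not values reported in the cited papers):

```latex
d = \frac{2r}{\sqrt{1 - r^{2}}}, \qquad
g \approx d \left( 1 - \frac{3}{4(n_1 + n_2) - 9} \right)
```

Under this conversion, r = 0.17 corresponds to d ≈ 0.35, which is broadly in line with the other two estimates.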
In the context of software training, a growing number of studies have explored the effectiveness of signaling techniques in combination with other design features [19,29,30]. Even though their findings support the effectiveness of signaling coupled with other features, the unique contribution of signaling has not been tested in software training. The study by Jamet and Fernandez [31] was an exception, as they systematically studied the role of signaling in learning to use a web-based form. The authors reported that signaling served as an aid to draw users’ attention. In addition, participants in the signaling condition reported more positive appraisals than those in the control condition. However, there was no direct effect of signaling on task performance.
All in all, while signaling appears to positively affect attention, its impact on task performance and motivation is inconsistent. Consequently, it is difficult to determine when and in which learning environments signaling should be employed [26].

2.3. The Role of Practice in Learning from Instructional Videos

The aim of software training is to enhance procedural knowledge [32]. Practice is a classic design approach in education with two main learning affordances: (a) it enables retention [33], and (b) it enhances learning through the construction of mental models [34]. According to Grossman et al. [35], practice also gives users the opportunity to apply the demonstrated models in various settings, thus supporting the transfer of knowledge to new situations.
In the field of software training, there is little experimental research on the effects of practice. A comprehensive literature search returned only five recent studies that have explored the influence of different practice opportunities in software training. In the context of learning a desktop publishing program, Ertelt [29] explored practice in the form of guided exploration cards (self-paced content) after each video demonstration. Compared to the non-practice control condition, the study found a main effect of practice on task performance in terms of procedural knowledge.
In a series of studies, van der Meij and his colleagues [36,37,38] investigated the effectiveness of practice for novice users (e.g., elementary students) performing formatting tasks in MS Word. The first study, by van der Meij et al. [37], compared three conditions: video–practice, practice–video, and video only. As performance across the three conditions was similar, embedding practice in video tutorials did not appear to enhance learning. In a follow-up replication study [36], the participants were randomly assigned to four conditions: video–practice, practice–video, practice–video–practice, and video. Contrary to expectations, the findings indicated no main effects of practice on procedural knowledge. Still, practice was found to be beneficial for the transfer of knowledge. A recent study by van der Meij and Maseland [38] compared a guided practice sequence with an interleaved practice sequence for learning MS Word. The findings showed no differences between the two conditions, although performance scores were slightly higher in the guided condition than in the interleaved one, both before and after training. Another study, which focused on learning a statistical software application [21], investigated the effects of reviews and practice using a mixed factorial design with four conditions (control, practice, review, review and practice). In contrast to the findings of previous studies by the same research group, this study found that practice had a positive effect on procedural knowledge, yet there was no practice effect on transfer.
Overall, the aforementioned studies showed null or mixed results regarding the influence of practice on the acquisition of procedural knowledge. Some of them used a classic coupling of instruction followed by practice, whereas others used a mixed practice sequence. In terms of task performance, research has shown that a practice schedule favors novices more than experienced users [39]. However, the influence of practice on task performance when the target software application is complex remains unknown.

2.4. The Role of Cognitive Load in Learning from Instructional Videos

When instructional designers produce video tutorials for teaching complex learning materials, the users’ ability to process the incoming information should be considered because of the limits of human working memory [40]. The CLT [41] identifies three main types of cognitive load. Intrinsic cognitive load is due to the material’s inherent difficulty and increases with element interactivity. Extraneous cognitive load is generated by the way the material is presented. Germane cognitive load refers to the effort invested in constructing a schema; it is not additive to the other two types. As an instructional framework, the CLT uses these three load types to interpret the effects of multimedia design on learning [42].
A typical way to measure total perceived load is mental effort, which reflects people’s subjective appraisals of the effort that they invest in a task [43]. A systematic review by Scheiter et al. [8] highlights that mental effort scales are more popular than other cognitive measures for two reasons: (a) they are valid subjective rating scales, and (b) they are good predictors of cognitive outcomes.
Regarding signaling, some multimedia studies have measured mental effort using either a single item or an aggregate of multiple effort ratings, yielding positive outcomes [44,45]. In contrast, other studies investigating the effects of signaling on mental effort reported no statistically significant results [31,46]. Regarding practice, only a handful of video-based studies [36,38] have measured cognitive load alongside other constructs (e.g., flow), with inconsistent results.

2.5. The Role of Self-Efficacy in Instructional Videos

The concept of self-efficacy reflects users’ belief in their capability to succeed in a given task [47]. It can be seen as a user’s evaluation of what he or she is capable of achieving in a future task. A recent extension of the CTML incorporates self-efficacy, positing that students’ self-efficacy beliefs can improve their learning from multimedia lessons [48].
In the last few years, self-efficacy beliefs have been targeted by several software training studies [21,23,36,37]. These studies examined ensembles of CTML design features and reported positive effects on self-efficacy. Yet, to date, no study has systematically examined how signaling and practice types might jointly influence self-efficacy.

2.6. Rationale of the Study and Research Questions

The preceding literature review indicates that, as design principles for video tutorials, signaling and practice could potentially improve task performance. Regarding signaling, the evidence suggests that it is beneficial for learning software applications, though its effects have been examined only in conjunction with other features; thus, its unique contribution to learning from video tutorials has not been documented. Regarding practice, empirical evidence that it yields better learning outcomes in software training is still limited. Several studies have explored various practice types (e.g., blocked practice, mixed practice) [21,29,37,49], reporting inconsistent results.
In the context of video-based software training, both signaling and practice types are among the design principles utilized [21,31]. This study aims, first, to examine the unique contribution of signaling to task performance with video tutorials targeting complex software training; second, to examine whether a specific configuration of practice types (video–practice–video) facilitates video-based software training; and third, to examine the combined effect of signaling and practice. In addition to how signaling and practice types directly influence task performance, their potential impact on mental effort and self-efficacy is also considered.
All in all, most studies and meta-analyses underline prior knowledge as an important moderator of how signaling and practice affect learning. The research-validated CTML design principles have mostly focused on users with no prior domain knowledge [50]. Although empirical research has confirmed the effect of each design principle separately, little is known about the combined effect of signaling and practice types for novice users.

2.7. Research Questions

The following research questions are addressed:
  • RQ1: Does signaling promote task performance through video-based software training?
    • Based on the literature review, it was hypothesized that signaling would lead to higher learning performance [12];
  • RQ2: Do practice types promote task performance through video-based software training?
    • According to the CTML [51], we hypothesized that the interpolation of practice types would yield better retention and recall [36];
  • RQ3: Does the combination of signaling and practice types enhance task performance through video-based software training?
    • It was expected that the combination of signaling and practice types would have a positive effect on learning performance [14];
  • RQ4: What is the influence of signaling and practice types on mental effort?
    • Complex software applications might demand more mental effort from inexperienced users, who need the most support while they are working on tasks [8]. Hence, it was hypothesized that signaling and practice types would mitigate the mental effort invested by the users;
  • RQ5: What is the influence of signaling and practice types on self-efficacy?
    • According to the CTML [48], self-efficacy is a crucial factor in developing a positive attitude toward task performance. Both signaling and practice types were expected to enhance the participants’ self-confidence (self-efficacy).

3. Methodology

3.1. Participants and Research Design

The study participants were 114 undergraduate students from the nursing department of a university in mainland Greece who were enrolled in a mandatory ICT course. However, only 82 participants (Mage = 20 years, SD = 3.27; 45 females, 37 males) were included in the data analysis, because technical issues prevented the collection of task performance scores for 32 participants. The students had little prior familiarity with software applications. They were randomly allocated to one of the four treatment conditions of a 2 (signaling: no signals vs. signals) × 2 (practice types: video–practice (VP) vs. video–practice–video (VPV)) factorial between-subjects design. Students received one course credit point for their participation in the study.
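For illustration, random allocation to the four cells of such a 2 × 2 design can be sketched as follows (a minimal Python sketch; the participant IDs and seed are illustrative assumptions, not taken from the study):

```python
import random

# The four cells of the 2 (signaling) x 2 (practice type) design.
conditions = [(sig, prac) for sig in ("no_signals", "signals")
                          for prac in ("VP", "VPV")]

participants = list(range(1, 115))  # 114 enrolled students
random.seed(42)                     # illustrative seed for reproducibility
random.shuffle(participants)

# Cycle through the cells so that group sizes stay approximately balanced.
assignment = {pid: conditions[i % len(conditions)]
              for i, pid in enumerate(participants)}
```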

3.2. Instructional Materials

Three video tutorials were specifically developed for the purposes of this study. All three demonstrated how to perform common video editing tasks in Blender’s Video Sequence Editor [52], a complex nonlinear editor (NLE) that is bundled with the 3D content creation suite. More specifically, the video tutorials covered fundamental video-editing operations such as navigating the interface, manipulating clips, and translating the positions of images and video clips (Appendix A).
The videos were developed following the eight guidelines for instructional videos [14]. Each video had a pertinent title for easy location (G1) and conveyed procedural information (G5). The videos were short (G7). Each began with a brief outline (G4) and demonstrated simple and clear tasks (G6) in a stepwise manner (G3). Moreover, the video instructions were coupled with practice to strengthen student performance (G8). In all videos, a female human narration in a conversational style was used (G2).
Video #1 demonstrated the interface of Blender’s VSE and introduced simple clip operations (e.g., selecting a clip or changing a clip’s position in the timeline or channel). Video #2 was more complex and demonstrated how to add filters (e.g., zoom or rotation). Finally, Video #3 covered even more complex topics, such as creating a picture-in-picture effect using the actions demonstrated in the previous video.

3.3. Operationalization

3.3.1. Signaling

In the signaling conditions, the video tutorials included two different signaling types: animated red shapes (e.g., arrows, rectangles, or circles) and contrast (using luminosity masks). The first signaling type was displayed when the narrator referred to the relevant onscreen menu items and panel options (Figure 1a). The second signaling type was applied to highlight the result on the video editor’s preview screen after two or three steps of the demonstrated procedure (Figure 1b). Even though the meta-analysis by Alpizar et al. [26] revealed significant effects for visual cues when the instructional materials cover concepts, little attention has been given to video tutorials demonstrating complex workflows within complex software interfaces.
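To make this operationalization concrete, an overlay of this kind can be produced programmatically. The following is a minimal sketch, not the authors’ actual production pipeline, assuming moviepy 1.x and hypothetical asset files tutorial.mp4 and arrow.png:

```python
from moviepy.editor import VideoFileClip, ImageClip, CompositeVideoClip

screencast = VideoFileClip("tutorial.mp4")

# Animated red arrow cue, shown while the narrator mentions a menu item.
arrow = (ImageClip("arrow.png")
         .set_start(12)                                  # cue onset (seconds)
         .set_duration(4)                                # cue offset after 4 s
         .set_position(lambda t: (300 + 10 * t, 120)))   # slow drift animates the cue

# Composite the cue over the screencast and export the signaled version.
signaled = CompositeVideoClip([screencast, arrow])
signaled.write_videofile("tutorial_signaled.mp4")
```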

3.3.2. Practice Type Conditions

In both practice types, a practice file was provided. Depending on the condition, the participants followed the instructions and practiced either in a step-by-step manner or at the end of the video tutorial.
In the VP condition, the participants first watched the video tutorial and then practiced on the corresponding practice file.
In the VPV condition, a static slide was inserted into the video tutorial after a sequence of two to three steps. This slide instructed the participants to pause the video and apply the demonstrated steps to a similar task. After practicing, the students were instructed to return to the video tutorial, press play, and watch the next part of the video. The rationale behind this choice was to help the participants conceptualize the functionality of the demonstrated procedure.

3.4. Measures

Task performance, demographics, ICT experience, mental effort, and self-efficacy were the main measures. All research instruments were administered electronically and adapted to Greek following standard procedures: translation into Greek, back-translation into English, and piloting with subjects who did not participate in the study.

3.4.1. Task Performance

Three main tasks were used for evaluating performance. Each task comprised a full test incorporating two declarative knowledge items, one procedural knowledge item, and one transfer knowledge item. The declarative knowledge items aimed to capture conceptual knowledge. The procedural knowledge items asked the participants to apply a set of demonstrated steps to a given file. The transfer knowledge item asked participants to apply their knowledge to a novel task (Appendix B).
Binary coding was used for scoring all performance tasks, with each item given 1 point if correct and 0 points otherwise. The reliability scores for the three performance tasks were satisfactory (task 1: α = 0.63; task 2: α = 0.64; task 3: α = 0.62).
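For binary (0/1) items such as these, Cronbach’s alpha reduces to the KR-20 formula. A minimal sketch of the computation follows, using randomly generated stand-in data rather than the study’s actual scores:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) score matrix;
    equivalent to KR-20 when items are scored 0/1."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Stand-in data: 82 participants x 4 items per task
# (2 declarative, 1 procedural, 1 transfer).
rng = np.random.default_rng(0)
demo_scores = rng.integers(0, 2, size=(82, 4))
print(f"alpha = {cronbach_alpha(demo_scores):.2f}")
```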

3.4.2. Demographics and ICT Experience

A questionnaire was used to collect data related to demographics and ICT experience. The questionnaire asked participants to fill in demographic data (e.g., age, gender, ease of Internet access) and to rate their prior ICT experience, for example: ‘How familiar are you with the following software categories? (a) image processing (e.g., Gimp, Photoshop); (b) web development (e.g., FrontPage, Dreamweaver); (c) online video editing services (e.g., YouTube, Vimeo)’, etc. The self-reported ICT questionnaire comprised nineteen items on a five-point Likert scale, ranging from one (‘not at all’) to five (‘very much’).

3.4.3. Mental Effort

The mental effort that the users invested in constructing new schemas was measured with a single-item instrument (Cronbach’s alpha across the three administrations = 0.64). The item was adapted from [43] and asked participants to estimate the amount of mental effort that they had invested in processing each video tutorial on a seven-point scale ranging from one (‘very low mental effort’) to seven (‘very high mental effort’). Although a subjective measure may appear questionable, it has been widely used in studies assessing the mental effort associated with learning instructional materials [8].

3.4.4. Self-Efficacy

The participants were asked to rate their knowledge based on how well they could perform the actions that had been demonstrated in each video. The scale we utilized ranged from 0 to 100% and was based on an instrument proposed by [53]. Three items targeted Video #1 (Cronbach’s alpha = 0.99), five items targeted Video #2 (Cronbach’s alpha = 0.96), and six items targeted Video #3 (Cronbach’s alpha = 0.98).

3.5. Procedure

The experiment was conducted in a single session, which took place in a computer laboratory on the university campus. The total duration of the experiment was approximately 90 min. At the beginning, the experimenter informed the students about the procedures of the study (5 min). Before the intervention, the participants then filled out the demographic and ICT questionnaires. Next, the participants logged in to the course’s LMS and, depending on the condition to which they had been randomly assigned, followed a specific learning path. The participants were instructed to wear headphones during training, to work individually, and to request help only when experiencing technical problems.
The participants followed a classic training schedule: they watched the first video tutorial, completed the mental effort and self-efficacy instruments, and then took the task performance tests. The same procedure was followed for the other two video tutorials. The participants were not allowed to consult the video tutorials while carrying out the performance tasks. Upon completion, the participants were debriefed.

3.6. Analysis

A 2 × 2 mixed ANOVA was used to determine both the main and interaction effects of signaling and practice on task performance, mental effort, and self-efficacy. Signaling (no signals vs. signals) and practice types (VP vs. VPV) were the between-subjects factors, and time (tasks 1–3) was the within-subjects factor. As the participants were unfamiliar with the specific software used in the study, a pre-test was deemed impractical; consequently, the first task was used as a reference for the subsequent two. An alpha level of 0.05 was used for all statistical tests. Bonferroni’s correction was applied to multiple comparisons. When the sphericity assumption was violated, the Greenhouse–Geisser correction was applied.
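As an illustration of how a design of this kind can be analyzed in code (a sketch only, not the authors’ actual analysis script; the long-format file and column names are assumptions), a linear mixed model on long-format data yields comparable main-effect and interaction tests:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long format: one row per participant x task, with columns
# subject, signaling ("no"/"yes"), practice ("VP"/"VPV"),
# task ("t1"/"t2"/"t3"), and score (percent correct).
df = pd.read_csv("performance_long.csv")

# A random intercept per participant accounts for the repeated measures.
model = smf.mixedlm("score ~ signaling * practice * task",
                    data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```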

4. Results

4.1. Task Performance

Performance scores were aggregated and converted to percentages for each task. Table 1 presents the descriptive statistics for the mean success rates for the tasks in the four conditions.
The two-way mixed ANOVA did not reveal a significant practice type by signaling interaction: F(1, 77) = 1.03, p = 0.314, ηp2 = 0.010. Consequently, task performance did not depend on the combination of signaling and practice types, and the null hypothesis of no interaction could not be rejected. However, further analyses indicated main effects for practice types, F(1, 77) = 27.48, p < 0.001, ηp2 = 0.26, power = 0.999, and for signaling, F(1, 77) = 8.66, p = 0.004, ηp2 = 0.10. The magnitudes of the effect sizes indicate a large difference for practice types and a moderate one for signaling. Therefore, both signaling and practice types appear to be conducive to performance. This finding is in line with the first and second hypotheses that signaling and practice types would yield higher learning gains compared to the respective reference conditions, namely no signals and VP.
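For interpreting the effect sizes reported here, recall the standard definition of partial eta squared (a general formula, not one specific to this study):

```latex
\eta_p^{2} = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
```

Thus, ηp2 = 0.26 means that practice types account for roughly 26% of the variance not attributable to other effects in the model.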

4.2. Mental Effort

Table 2 presents the findings for mental effort. The two-way mixed ANOVA indicated no interaction of signaling and practice types: F(1, 77) = 2.83, p = 0.097, ηp2 = 0.035. Thus, mental effort did not depend on the combination of signaling and practice types.
However, further analysis indicated a significant main effect of time: F(1.63, 125.23) = 21.79, p < 0.001, ηp2 = 0.221, indicating that the mental effort invested in each task increased over time (see Table 2).

4.3. Self-Efficacy

Table 3 presents the findings for self-efficacy. The two-way mixed ANOVA revealed significant main effects for signaling, F(1, 77) = 12.22, p = 0.001, ηp2 = 0.14, and practice types, F(1, 77) = 34.06, p < 0.001, ηp2 = 0.31, the latter indicating that 31% of the variance in self-efficacy appraisals can be explained by practice types. The medium-to-large effect sizes for both factors are particularly noteworthy. However, no interaction was found between signaling and practice types: F(1, 77) = 0.64, p = 0.425, ηp2 = 0.008.

5. Discussion

The present study investigated the effects of signaling and practice types on task performance and their indirect effects on mental effort and self-efficacy. The results indicated a main effect of signaling on task performance: the participants in the signaling groups (MSignals = 68%) scored higher than the participants in the no-signaling groups (MNo signals = 55.3%). Consistent with this finding, other studies have also reported that signaling benefits novices by helping them select, organize, and integrate the necessary information into a contextual mental model [13,28,54]. However, this finding is not consistent with the study by Jamet and Fernandez [31], which found no influence of signaling on task performance. The discrepancy might be explained by the fact that the current study focused on a complex video editing application, whereas the simple application used in that study (filling out a university web form) might have rendered signaling redundant.
Regarding the second research question, a significant effect of practice types was found. The students in the VPV conditions achieved higher mean success rates (MVPV = 73%) than those in the VP conditions (MVP = 50.4%). The magnitude of the effect size (ηp2 = 0.26) indicates a sizeable difference. This finding is consistent with those of other studies [29], which indicate that practice can be beneficial for task performance when users have no domain-specific prior knowledge. It appears that pausing gave the students the opportunity to reflect on small chunks of information, and practicing on the specially provided files allowed them to apply this information to an authentic task [14]. Considering that the study subjects had low expertise levels, it can be concluded that these practice arrangements (instruction followed by practice, or practice interleaved with instruction) are most helpful for novices [55].
Even though main effects were detected for signaling and practice types, no interaction effects were found, a finding that is not in the hypothesized direction. While both signaling and practice types are among the recommended design principles in various conceptualizations [5,32], the present study did not find a combined influence. Thus, signaling and practice types did not appear to jointly help students perform better than either feature alone. CTML research [51] indicates that the signaling principle holds mainly for novices; likewise, practice types have been proposed mainly for users with low levels of expertise [14]. This outcome is particularly puzzling considering that novices would, ideally, be the learner group that benefits most from a combination of signaling and practice. Given that each design feature fosters task performance on its own, the findings suggest that combining different design features (such as signaling and practice types) should not be assumed to lead to better performance than either feature alone.
With respect to mental effort, our initial hypothesis that signaling and practice types would decrease mental effort was not supported. We offer two main explanations for this finding. First, the one-item instrument used to assess overall cognitive load might have been suboptimal. Although this is a typical instrument for measuring mental effort as an index of total cognitive load [43], it may not have assessed all three types of cognitive load (intrinsic, extraneous, and germane) equally well [7]. A recent review [8] suggested that mixed results related to this construct may be due to individual differences in how people interpret questions about their effort investment: users with growth mindsets may see effort as positively related to performance, whereas users with fixed mindsets might believe that trying harder does not improve task performance, and these mindset differences likely also influence how users rate their effort. More refined measures, such as mental effort operationalized as response time, might therefore be more appropriate [56]. Second, empirical studies in multimedia learning have shown positive effects of signaling on learning mainly when the materials are static rather than dynamic [33,57]. Therefore, the dynamic nature of the video tutorials might also have played a part in the mental effort that the users perceived investing.
The present study indicated that both signaling and practice types positively influence self-efficacy. VPV seemed to be a catalyst for self-efficacy, which increased from the first to the last video. The fact that VPV improved students’ self-efficacy regardless of signaling is important, as it suggests the relative value of executing a procedure stepwise. Signaling also had a positive effect on users’ self-efficacy: incorporating signals into instructional materials such as video tutorials increased self-efficacy appraisals. The presence of signaling may have influenced self-efficacy by reducing stress when using software with which users have no prior experience. These findings are consistent with the CTML and indicate the potentially vital role of instructional design in improving self-efficacy and learning outcomes [48].

5.1. Practical Implications

The practical contribution of this study lies in providing evidence of the effects of adding signaling or VPV to facilitate learning through video tutorials. The results can help instructional designers and practitioners select signaling or practice-type techniques according to their learning objectives. This study shows that both cueing and practice can support novices in learning complex software applications through video tutorials. Considering that the main goal of software training is to learn how to carry out specific operations, VPV is highly recommended, as it helps novices rehearse the presented information and construct a corresponding mental model of the sequence of required steps. Moreover, if the users are known or expected to have low self-efficacy, then either signaling or VPV will likely be beneficial for boosting learner confidence.

5.2. Limitations and Future Directions

In conclusion, this study examined the effects of signaling and practice types on learning a complex software application. While a handful of studies have focused on software training, empirical research on the unique and combined effects of signaling and practice types is sparse. The present study attempted to fill this gap by shedding light on two design principles for video-based training and their direct (i.e., performance) and indirect (i.e., mental effort, self-efficacy) effects. The results confirm the validity of signaling and practice as design guidelines for software training [14]. Contrary to expectations, however, no combined influence was detected; in the case of complex software training, signaling and VPV made unique, independent contributions to learning.
One limitation of this study is that no post-test was administered immediately after the intervention. Such a measure would furnish data related to the degree of retention of the procedures demonstrated. Another study limitation is that the participants were undergraduate students who had low levels of ICT expertise. It is not known whether users from different demographics and levels of expertise would exhibit similar responses to signaling or practice types. Future research should replicate these findings with students from different fields (e.g., science, technology, engineering, and mathematics).
Overall, the present study shows the potential of signaling and practice types for learning complex software applications through video tutorials. Considering the lack of studies in the field and the importance of learning software applications in contemporary societies, more systematic research is required. Promising routes involve tutor-based supports (e.g., practice with feedback) for novice users and self-regulatory solutions (e.g., mixed practice sequences) for expert users [5,57].

Author Contributions

Conceptualization, V.R.; methodology, V.R. and I.K.; validation, V.R. and I.K.; formal analysis, V.R. and I.K.; investigation, V.R.; writing—original draft preparation, V.R.; writing—review and editing, I.K. and V.R.; supervision, I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the Ethics Committee of the University of Thessaly, Greece.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Topics of Video Tutorials

| Video ID | Title | Duration | Topics | Description |
| --- | --- | --- | --- | --- |
| 1 | Introduction | 3:26 | Introduction to the video editing graphical user interface | Presentation of the interface (workspaces, menus, panels, etc.), video clip placement, and video clip manipulation. |
| 2 | Transform tool | 3:28 | Controlling the rotation, location, and scale of a video clip | This video introduced how to apply the transform tool to video clips. |
| 3 | Video overlay effect | 4:00 | Side-by-side picture-in-picture effect | This video introduced how to set up and apply a complex video-effect pipeline. |

Appendix B

  • Sample of Task Performance Test
  • Declarative knowledge test
  • Q.1 Which mouse button is used for video clip selection? [Choose the correct answer]
    • Left mouse button
    • Right mouse button
  • Q.2 Which colour corresponds to the audio clip?
    • Purple
    • Blue
    • Cyan
  • Procedural knowledge test
  • Q.3 Open the file file1.blend.
    • Move the image clip along the horizontal axis to frame 35.
    • Save the changes and submit the file.
  • Q.4 Open the file file2.blend.
    • Move the image clip along the vertical axis to channel 3.
    • Save the changes and submit the file.
  • Transfer knowledge test
  • Q.5 Open the file file3.blend.
    [Screenshot accompanying Q.5: clip layout and target result (right)]
    • Place image clips in different channels so that they overlap with each other.
    • Then create the corresponding result as provided in the right screenshot.
    • Save the changes and submit the file.

References

  1. Bétrancourt, M.; Benetos, K. Why and when does instructional video facilitate learning? A commentary to the special issue ‘developments and trends in learning with instructional video’. Comput. Human Behav. 2018, 89, 471–475.
  2. Lloyd, S.A.; Robertson, C.L. Screencast Tutorials Enhance Student Learning of Statistics. Teach. Psychol. 2012, 39, 67–71.
  3. Höffler, T.N.; Leutner, D. Instructional animation versus static pictures: A meta-analysis. Learn. Instr. 2007, 17, 722–738.
  4. Kilinç, H.; Firat, M.; Yüzer, T.V. Uzaktan eğitimde video kullanim eğilimleri: Bir araştirma sentezi [Video use trends in distance education: A research synthesis]. Pegem Egit. Ogretim Derg. 2017, 7, 55–82.
  5. Castro-Alonso, J.C.; de Koning, B.B.; Fiorella, L.; Paas, F. Five Strategies for Optimizing Instructional Materials: Instructor- and Learner-Managed Cognitive Load. Educ. Psychol. Rev. 2021, 33, 1379–1407.
  6. Mayer, R.E. Cognitive Theory of Multimedia Learning; Cambridge University Press: Cambridge, UK, 2005.
  7. Sweller, J.; van Merrienboer, J.J.G.; Paas, F.G.W.C. Cognitive Architecture and Instructional Design. Educ. Psychol. Rev. 1998, 10, 251–296.
  8. Scheiter, K.; Ackerman, R.; Hoogerheide, V. Looking at Mental Effort Appraisals through a Metacognitive Lens: Are they Biased? Educ. Psychol. Rev. 2020, 32, 1003–1027.
  9. Oberauer, K.; Lewandowsky, S.; Awh, E.; Brown, G.D.A.; Conway, A.; Cowan, N.; Donkin, C.; Farrell, S.; Hitch, G.J.; Hurlstone, M.J.; et al. Benchmarks for models of short-term and working memory. Psychol. Bull. 2018, 144, 885–958.
  10. Ashman, G.; Kalyuga, S.; Sweller, J. Problem-solving or Explicit Instruction: Which Should Go First When Element Interactivity Is High? Educ. Psychol. Rev. 2020, 32, 229–247.
  11. Ayres, P.; Paas, F. Making instructional animations more effective: A cognitive load approach. Appl. Cogn. Psychol. 2007, 21, 695–700.
  12. Mayer, R.E. Thirty years of research on online learning. Appl. Cogn. Psychol. 2019, 33, 152–159.
  13. van Gog, T. The signaling (or cueing) principle in multimedia learning. In The Cambridge Handbook of Multimedia Learning, 2nd ed.; Mayer, R., Ed.; Cambridge University Press: Cambridge, UK, 2014; pp. 263–278.
  14. van der Meij, H.; van der Meij, J. Eight guidelines for the design of instructional videos for software training. Tech. Commun. 2013, 60, 205–228.
  15. Castro-Alonso, J.C.; Ayres, P.; Sweller, J. Instructional Visualizations, Cognitive Load Theory, and Visuospatial Processing. In Visuospatial Processing for Education in Health and Natural Sciences; Springer: Berlin/Heidelberg, Germany, 2019; pp. 111–143.
  16. Wong, M.; Castro-Alonso, J.C.; Ayres, P.; Paas, F. Investigating gender and spatial measurements in instructional animation research. Comput. Human Behav. 2018, 89, 446–456.
  17. Stebner, F.; Kühl, T.; Höffler, T.N.; Wirth, J.; Ayres, P. The role of process information in narrations while learning with animations and static pictures. Comput. Educ. 2017, 104, 34–48.
  18. Rey, G.D.; Beege, M.; Nebel, S.; Wirzberger, M.; Schmitt, T.H.; Schneider, S. A Meta-analysis of the Segmenting Effect. Educ. Psychol. Rev. 2019, 31, 389–419.
  19. van der Meij, H. Developing and testing a video tutorial for software training. Tech. Commun. 2014, 61, 110–122.
  20. Alexander, K.P. The usability of print and online video instructions. Tech. Commun. Q. 2013, 22, 237–259.
  21. van der Meij, H.; Dunkel, P. Effects of a review video and practice in video-based statistics training. Comput. Educ. 2020, 143, 103665.
  22. Worlitz, J.; Stabler, A.; Peplowsky, S.; Woll, R. Video tutorials: An appropriate way of teaching quality management tools applied with software. Qual. Innov. Prosper. 2016, 20, 169–184.
  23. van der Meij, H.; van der Meij, J. A comparison of paper-based and video tutorials for software learning. Comput. Educ. 2014, 78, 150–159.
  24. Leutner, D. Double-fading support: A training approach to complex software systems. J. Comput. Assist. Learn. 2000, 16, 347–357.
  25. Castro-Alonso, J.C.; Ayres, P.; Paas, F. Dynamic visualisations and motor skills. In Handbook of Human Centric Visualization; Springer: New York, NY, USA, 2014; pp. 551–580.
  26. Alpizar, D.; Adesope, O.O.; Wong, R.M. A meta-analysis of signaling principle in multimedia learning environments. Educ. Technol. Res. Dev. 2020, 68, 2095–2119.
  27. Schneider, S.; Beege, M.; Nebel, S.; Rey, G.D. A meta-analysis of how signaling affects learning with media. Educ. Res. Rev. 2018, 23, 1–24.
  28. Richter, J.; Scheiter, K.; Eitel, A. Signaling text-picture relations in multimedia learning: A comprehensive meta-analysis. Educ. Res. Rev. 2016, 17, 19–36.
  29. Ertelt, A. On-Screen Videos as an Effective Learning Tool: The Effect of Instructional Design Variants and Practice on Learning Achievements, Retention, Transfer, and Motivation. 2007. Available online: http://www.freidok.uni-freiburg.de/volltexte/3095/ (accessed on 20 April 2023).
  30. van der Meij, H.; van der Meij, J. Demonstration-based training (DBT) in the design of a video tutorial for software training. Instr. Sci. 2016, 44, 527–542.
  31. Jamet, E.; Fernandez, J. Enhancing interactive tutorial effectiveness through visual cueing. Educ. Technol. Res. Dev. 2016, 64, 631–641.
  32. Brar, J.; van der Meij, H. Complex software training: Harnessing and optimizing video instruction. Comput. Human Behav. 2017, 70, 475–485.
  33. Leppink, J.; Paas, F.; van Gog, T.; van der Vleuten, C.P.M.; van Merriënboer, J.J.G. Effects of pairs of problems and examples on task performance and different types of cognitive load. Learn. Instr. 2014, 30, 32–42.
  34. Murthy, N.N.; Challagalla, G.N.; Vincent, L.H.; Shervani, T.A. The impact of simulation training on call center agent performance: A field-based investigation. Manag. Sci. 2008, 54, 384–399.
  35. Grossman, R.; Salas, E.; Pavlas, D.; Rosen, M.A. Using instructional features to enhance demonstration-based training in management education. Acad. Manag. Learn. Educ. 2013, 12, 219–243.
  36. van der Meij, H. Cognitive and motivational effects of practice with videos for software training. Tech. Commun. 2018, 65, 265–279.
  37. van der Meij, H.; Rensink, I.; van der Meij, J. Effects of practice with videos for software training. Comput. Human Behav. 2018, 89, 439–445.
  38. van der Meij, H.; Maseland, J. Practice schedules in a video-based software training arrangement. Soc. Sci. Humanit. Open 2021, 3, 100133.
  39. Abel, M.; Roediger, H.L. Comparing the testing effect under blocked and mixed practice: The mnemonic benefits of retrieval practice are not affected by practice format. Mem. Cognit. 2017, 45, 81–92.
  40. Castro-Alonso, J.C.; Uttal, D.H. Science Education and Visuospatial Processing. In Visuospatial Processing for Education in Health and Natural Sciences; Springer International Publishing: Cham, Switzerland, 2019; pp. 53–79.
  41. Sweller, J. Cognitive load theory and educational technology. Educ. Technol. Res. Dev. 2020, 68, 729–749.
  42. van Gog, T.; Hoogerheide, V.; van Harsel, M. The Role of Mental Effort in Fostering Self-Regulated Learning with Problem-Solving Tasks. Educ. Psychol. Rev. 2020, 32, 1055–1072.
  43. Paas, F.G.W.C. Training Strategies for Attaining Transfer of Problem-Solving Skill in Statistics: A Cognitive-Load Approach. J. Educ. Psychol. 1992, 84, 429–434.
  44. Wang, J.; Antonenko, P.D. Instructor presence in instructional video: Effects on visual attention, recall, and perceived learning. Comput. Human Behav. 2017, 71, 79–89.
  45. Yilmaz, R.M. Effects of using cueing in instructional animations on learning and cognitive load level of elementary students in science education. Interact. Learn. Environ. 2020, 31, 1727–1741.
  46. Arslan-Ari, I.; Crooks, S.M.; Ari, F. How Much Cueing Is Needed in Instructional Animations? The Role of Prior Knowledge. J. Sci. Educ. Technol. 2020, 29, 666–676.
  47. Bandura, A. Self-Efficacy: The Exercise of Control; W.H. Freeman: New York, NY, USA, 1997.
  48. Huang, X.; Mayer, R.E. Benefits of adding anxiety-reducing features to a computer-based multimedia lesson on statistics. Comput. Human Behav. 2016, 63, 293–303.
  49. Helsdingen, A.; van Gog, T.; van Merriënboer, J. The Effects of Practice Schedule and Critical Thinking Prompts on Learning and Transfer of a Complex Judgment Task. J. Educ. Psychol. 2011, 103, 383–398.
  50. Kalyuga, S. The expertise reversal principle in multimedia learning. In The Cambridge Handbook of Multimedia Learning, 2nd ed.; Cambridge University Press: Cambridge, UK, 2014; pp. 576–597.
  51. Mayer, R.E. Multimedia instruction. In Handbook of Research on Educational Communications and Technology, 4th ed.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 385–399.
  52. Bandura, A. Guide for constructing self-efficacy scales. Self-Effic. Beliefs Adolesc. 2006, 5, 307–337.
  53. de Koning, B.; Tabbers, H.K.; Rikers, R.M.J.P.; Paas, F. Attention cueing as a means to enhance learning from an animation. Appl. Cogn. Psychol. 2007, 21, 731–746.
  54. Reisslein, J.; Atkinson, R.K.; Seeling, P.; Reisslein, M. Encountering the expertise reversal effect with a computer-based environment on electrical circuit analysis. Learn. Instr. 2006, 16, 92–103.
  55. Baars, M.; Wijnia, L.; de Bruin, A.; Paas, F. The Relation Between Students’ Effort and Monitoring Judgments During Learning: A Meta-analysis. Educ. Psychol. Rev. 2020, 32, 979–1002.
  56. van Gog, T.; Paas, F. Instructional efficiency: Revisiting the original construct in educational research. Educ. Psychol. 2008, 43, 16–26.
  57. Mayer, R.E.; Fiorella, L.; Stull, A. Five ways to increase the effectiveness of instructional video. Educ. Technol. Res. Dev. 2020, 68, 837–852.
Figure 1. Sample Screenshot of Signaling Techniques: (a) adding a red animated arrow to point to an object in the VSE, and (b) adding a bright rectangle to highlight the result of the edited media clips in the preview window.
Table 1. Task performance by condition: mean ¹ (M) and standard deviation (SD).

| Condition | Task 1 M | Task 1 SD | Task 2 M | Task 2 SD | Task 3 M | Task 3 SD |
| --- | --- | --- | --- | --- | --- | --- |
| No signals—VP (n = 18) | 44.44 | 27.06 | 42.22 | 24.63 | 38.89 | 25.18 |
| Signals—VP (n = 18) | 52.22 | 26.69 | 58.89 | 27.84 | 65.56 | 23.57 |
| No signals—VPV (n = 22) | 71.82 | 28.05 | 59.09 | 33.51 | 75.45 | 19.45 |
| Signals—VPV (n = 23) | 82.61 | 13.89 | 72.17 | 23.92 | 76.52 | 24.61 |

¹ The means were converted to percentages.
Table 2. Mental effort by condition: mean ¹ (M) and standard deviation (SD).

| Condition | Task 1 M | Task 1 SD | Task 2 M | Task 2 SD | Task 3 M | Task 3 SD |
| --- | --- | --- | --- | --- | --- | --- |
| No signals—VP (n = 18) | 3.00 | 0.30 | 3.11 | 0.32 | 3.94 | 0.87 |
| Signals—VP (n = 18) | 2.72 | 0.75 | 2.89 | 0.76 | 3.17 | 0.92 |
| No signals—VPV (n = 23) | 3.27 | 0.94 | 3.50 | 0.96 | 3.82 | 1.30 |
| Signals—VPV (n = 22) | 3.57 | 1.31 | 3.57 | 1.27 | 4.13 | 1.32 |

¹ Scale values range from one to seven, with higher values indicating higher effort.
Table 3. Self-efficacy by condition: mean ¹ (M) and standard deviation (SD).

| Condition | Task 1 M | Task 1 SD | Task 2 M | Task 2 SD | Task 3 M | Task 3 SD |
| --- | --- | --- | --- | --- | --- | --- |
| No signals—VP (n = 18) | 65.43 | 29.74 | 74.31 | 19.71 | 75.74 | 21.44 |
| Signals—VP (n = 18) | 73.80 | 27.90 | 84.72 | 13.97 | 77.96 | 23.47 |
| No signals—VPV (n = 23) | 79.41 | 19.36 | 82.85 | 14.81 | 92.48 | 8.77 |
| Signals—VPV (n = 22) | 98.84 | 2.58 | 94.91 | 8.71 | 94.49 | 9.66 |

¹ Scale values range from 0% to 100%, with higher values meaning a more positive rating.


