Article

Exploring Gender Differences in the Instructor Presence Effect in Video Lectures: An Eye-Tracking Study

1 Bilingual Cognition and Development Lab, Center for Linguistics and Applied Linguistics, Guangdong University of Foreign Studies, Guangzhou 510420, China
2 School of International Studies, Zhejiang University, Hangzhou 310058, China
* Author to whom correspondence should be addressed.
Submission received: 27 May 2022 / Revised: 7 July 2022 / Accepted: 16 July 2022 / Published: 19 July 2022

Abstract:
The instructor’s presence on the screen has become a popular feature of video lectures in online learning and has drawn increasing research interest. Studies on the instructor presence effect in video lectures have mainly focused on features of the instructor, and few have taken learners’ differences, such as gender, into consideration. The current study examined whether male and female learners differed in their learning performance and eye movement features when learning from video lectures with and without the instructor’s presence. All participants (N = 64) were asked to watch three different types of video lectures: audio-video without instructor presence (AV), picture-video with instructor presence (PV), and video-video with instructor presence (VV). They watched nine videos, three in each condition, and completed a comprehension test after each video. Their eye movement data were collected while they watched these videos. Results showed that learners gained better outcomes after watching the videos with a talking instructor (VV) than those with the instructor’s picture (PV) or without the instructor (AV). This finding suggests that the dynamic presence of the instructor in video lectures could enhance learning through increased social presence and agency. Gender differences were found in attention allocation, but not in behavioral learning performance. When watching the videos with a talking instructor (VV), female learners dwelt longer on the instructor, while males transited more between the instructor and the text. Our results highlight the value of instructor presence in video lectures and call for more comprehensive explorations of gender differences in online learning outcomes and attention distribution.

1. Introduction

Online learning is popular and widespread, especially during the COVID-19 pandemic, when many schools experienced lockdowns. Online learning, often delivered through video lectures, provides access to high-quality multimedia education resources without time and space constraints. However, it lacks the face-to-face interaction between instructor and learners, who may feel disconnected and less engaged in online courses. Do learners learn better when an on-screen instructor is present? How might instructor presence promote online learning?
According to the personalization principle of social agency theory [1], the instructor’s on-screen presence in a multimedia instructional message provides social cues (such as eye gaze, facial expressions, body orientation, and gestures) that can trigger a social response in students and create social presence, a sense of partnership between the students and the instructor. Students try harder to make sense of the presented learning materials when they feel they are in a social partnership with the instructor. Their increased interest and motivational commitment could thus lead to deeper cognitive processing of the learning materials and better learning performance [1,2]. However, according to the image principle [1], the instructor’s physical image on the screen (such as a one-shot talking head or a picture of a cartoon character) does not substantially improve students’ learning outcomes. As a social agent, the on-screen instructor needs to engage in real, human-like gestures to foster learners’ interest, motivation, engagement, and learning performance.
In contrast, cognitive load theory [3,4] regards the instructor’s presence in video lectures as a source of interference. Continual access to the instructor’s face and gestures during the lecture may divert learners’ limited attention from the learning content and create split attention between the instructor and the learning materials on the screen [5]. Frequent switches between the instructor and the learning content might also overload learners, because their limited working memory capacity has to be devoted to additional extraneous processing that is not directly related to the instructional objective [4]. Thus, the interference caused by the instructor’s presence in videos might offset the advantages of the social presence it brings and, at worst, even hamper learning.
Many empirical studies have examined the impact of instructor presence on students’ learning outcomes in video lectures, and the results are mixed. Some studies comparing students’ learning performance in instructor-present and instructor-absent video lectures support a significant role of instructor presence in improving learning performance [6,7,8,9,10] and enhancing positive affective responses, i.e., learning satisfaction and situational interest [11,12]. The instructor’s facial expression, eye gaze, body orientation, gestures, and image size have been shown to have various consequences for learners’ performance and attention allocation [13,14,15,16,17]. However, some researchers have argued that the instructor’s presence might capture attention that would otherwise go to the learning materials and impose a higher cognitive load on students [18,19,20].
According to Mayer [21], the instructor presence effect is subject to boundary conditions, including the instructional content, the instructional context, and individual differences. Many of the controversies might arise from learners’ differences, which have not been closely examined. For example, gender differences have been reported in perceptions of social presence during e-learning, with females experiencing stronger perceptions of social presence than males [22,23]. However, little is known about gender-based sensitivity to instructor presence during video lectures, which is essential for understanding individual differences in online learning outcomes. Considering that the relevant literature has involved primarily female participants, investigating gender differences in the effect of instructor presence in video lectures is necessary.

1.1. Instructor Presence Effect: Now You See It, Now You Do Not

The primary debate in this line of research concerns the facilitation effect of instructor presence in video lectures. For example, Kizilcec and his colleagues [24] revealed that 75% of students in their study preferred to learn from video lectures showing an instructor’s face. These students reported a better learning experience than those who did not see an instructor’s face in the video lectures. Another study found that students viewing videos with the instructor and PPT slides had better learning performance than those watching video podcasts with only PPT slides [25]. More recently, Hew and Lo [26] demonstrated that secondary school students achieved the highest scores on recall and application questions when the video lectures included the teacher’s talking head. However, other studies using similar paradigms failed to find such an instructor presence effect [19,20,27,28]. For example, Homer and his colleagues [20] asked adult participants to view video lectures showing the speaker or a no-video lecture with only the audio and slides, and assessed learning, cognitive load, and social presence in the two groups. The two groups did not differ in learning performance or social presence, but the video group experienced a greater cognitive load. Another study reported that instructor presence acted as a distractor and impaired learning performance [28], even though learners preferred this condition and believed it was the most effective.
Hong and his colleagues [29] provided one way to reconcile the conflicting results. They revealed that instructor presence increased learners’ cognitive load when they learned procedural knowledge; adding the instructor to a video lecture only facilitated declarative knowledge learning. In another study involving procedural knowledge learning, the authors tested the impact of a teacher’s continuous vs. intermittent presence in instructional video lectures [30]. They found that the teacher’s intermittent presentation improved learning achievement and satisfaction and caused less cognitive load than continuous presentation. So far, the inconsistent findings suggest that on-screen instructor presence only plays an important role under certain presentation modes. Fiorella and his colleagues [18] compared two instructional methods: a talking instructor with static diagrams, or dynamically drawn diagrams without the instructor. Students were asked to adopt one of three learning strategies (explain, draw, or rewatch). They found that what mattered for learning outcomes was the alignment of instructional method with learning strategy, rather than instructor presence.
Clearly, researchers are interested in whether adding an instructor to a video lecture improves learners’ learning outcomes. In this line of exploration, two main concerns are usually involved: instructional methods and learning outcomes. However, Mayer [21] suggested inserting a focus on the learning process between instructional methods and learning outcomes. The interviews, behavioral tests, and self-reports that have mostly been used can only support indirect inferences about information processing during online learning.
Eye-tracking technology is one measure that can shed light on the underlying attentional dynamics during learning, and it has been adopted in several recent studies on instructor presence [8,9,10,11,12,14,19]. The eye–mind hypothesis postulates that the learner’s fixation and visual attention are linked [31]: the more fixation time is attributed to an item, the more visual attention is allocated to that stimulus. So far, the most commonly used eye-tracking measures in the relevant literature are fixation count and dwell time, with more fixations and longer dwell time indicating more attention to an object or area. Using these measures, previous researchers showed that the on-screen instructor did divert some of the learners’ attention from the learning content: there was a shorter dwell time on the learning materials in instructor-present videos as compared to instructor-absent ones [7,9,10,19,27]. Apart from examining participants’ attention to the instructor and other content on the screen, some studies also considered the number of transitions between the instructor and the content [11,12]. This measure has been regarded as an index of the split attention caused by the instructor’s presence in the video; learners have been shown to make more transitions between the instructor and content areas in instructor-present videos [11,12].
The growing number of eye-tracking studies on the instructor presence effect has mainly focused on its impact on students’ attention allocation and learning. For example, Wang and Antonenko [11] had 26 participants view 10-min mathematics videos on easy and difficult topics with the instructor either present or absent. Although there were no significant group differences in learning transfer, instructor presence improved recall for the easy topic and decreased self-reported mental effort for the difficult topic. In contrast, Pi and Hong [9] revealed that participants allocated more visual attention to the instructor than to the slides in a video podcast on attachment given by a psychologist, and the condition combining the talking instructor with the slides led to the best learning performance. The topic of the video lecture might thus influence how much visual attention learners allocate to the instructor.
To identify the conditions under which instructor presence works and the underlying visual attention processes, many researchers have examined the social cues or features of the presented instructor. These features include, but are not limited to, the instructor’s eye gaze [14,15,19], facial expression [32,33], body orientation [14], gestures [34,35], image size [36], and position on the screen [17]. Researchers investigated the instructor’s eye gaze mostly to test whether continual access to an instructor’s eye gaze can guide and improve learning. Van Gog and his colleagues [10] revealed that the instructor’s face in a problem-solving modeling video was beneficial to participants’ learning performance. van Wermeskerken and van Gog [19] compared similar demonstration videos with the instructor’s gaze guidance (i.e., staring straight into the camera) present or absent. They failed to find any facilitating or hindering effect of the instructor’s face or eye gaze on learning performance; still, both affected visual attention allocation when participants viewed the videos. In another study, students viewed organic chemistry video lectures with the instructor’s direct gaze (the instructor looked into the camera in a transparent blackboard setting) or the instructor’s gaze guidance (the instructor looked at and wrote on the blackboard) [37]. The two groups did not differ in learning performance or engagement. Finally, Pi and her colleagues [14] extended the work on eye gaze by examining the effect of the instructor’s eye gaze and body orientation on attention allocation and learning in video lectures. Learners who viewed the instructor’s guided gaze paid more visual attention to the slides, while learners who viewed the instructor’s direct gaze spent more attention on her face; the former group had better retention and transfer outcomes. Body orientation did not play any significant role. These explorations help answer the key question that educators and researchers care about: how to optimize the design of video lectures to improve students’ learning.
Compared with the efforts devoted to instructor features and learning materials, learners’ own differences in the instructor presence effect have not been closely examined. In Kokoç et al. [8], participants with different sustained attention levels watched three types of video lectures (picture-in-picture, voiceover presentation, and screencast) that differed in instructor presence, while their eye movements were recorded. Due to the heterogeneity in content and multimedia elements across the video types, the authors conducted separate analyses of the eye-tracking measures for each type. Results demonstrated that only the picture-in-picture type with instructor presence produced different eye movement features between learners of high and low sustained attention.
Kokoç and his colleagues noted that modeling individual differences in the design of video lectures is still at an early stage in the literature [8]. Learner characteristics matter, as they are thought to be among the most important issues to consider when designing effective e-learning environments [38,39]. Unfortunately, it remains unclear whether presenting an instructor on the video screen has the same effect on learners of different genders, ages, and cognitive abilities.

1.2. Gender Differences in the Perceptions of Online Social Presence

According to the gender similarities hypothesis [40], males and females are similar on most, but not all, psychological variables. For example, meta-analyses have reliably found gender differences in cognitive skills such as attention, memory, and spatial ability [41,42,43]. These cognitive differences could influence how males and females process and learn information [44]. For example, it was found that females with lower spatial ability benefited more from animated instructional presentations than males [45]. Thus, the same instructional interventions could affect the two groups differently [44,46].
Gender differences have been examined in online learning environments. One of the issues that interest researchers is whether males and females differ in their perceptions of social presence in e-learning settings. Online social presence refers to “the subjective feeling of being connected and together with others during computer-mediated communication” [47] (p. 1739). It has been assumed to be crucial to the success of online learning. Previous studies have demonstrated that as a positive experience, social presence positively influences online learners’ satisfaction [48,49,50] and performance [51,52]. However, the same e-learning environment can result in different subjective experiences of presence between male and female learners, with females having greater perceptions of social presence than males [22,23].
In a web-based introductory information systems course, Johnson reported that women communicated more, experienced higher social presence, and performed better than men [22]. Johnson attributed females’ stronger perceptions of social presence to gender-related differences in communication, as females were found to be more attuned to the socially oriented aspects of communication. Unlike Johnson, Rodríguez-Ardura and Meseguer-Artola explored the cognitive and emotional factors that contribute to the experience of social presence and considered the moderating role of gender [23]. They found that gender moderated the relationship between emotion and presence, with women more sensitive to emotion in their presence formation than men; in other words, the greater the emotional effort women experienced, the more intense their experience of presence. Although the relevant empirical studies have been limited, the potential differences between males and females indicated by the existing evidence show the necessity for future presence-related research to take gender into account.
If instructor-present video lectures activate a higher level of social presence than instructor-absent ones, the instructor presence effect might not be the same for male and female learners, who differ in their perceptions of social presence. Indeed, males and females differ significantly in social brain function when making social decisions from faces [53]. Moreover, studies of the instructor presence effect have included primarily female participants, who made up around 70% of the samples [8,10,14,19,20,34]; such a gender imbalance could have skewed the results. Therefore, examining gender differences in online learning, especially in video lectures with an instructor’s presence, is necessary.

1.3. The Present Study

As far as we know, our research is the first to look into gender differences in the instructor presence effect in video lectures. We explored whether men and women differed in attention allocation and learning performance in video lectures with or without instructor presence. There are mainly two types of video lectures in the literature: lecture videos with slides, and modeling videos in which an instructor provides a step-by-step demonstration of how to perform a task or solve a problem [10,19]. We selected the lecture video, the most common type for learning, and included three different formats. The first is the audio-video presentation (AV), which contains presentation slides accompanied by the instructor’s narration without visual presence; this type of video lecture has been widely used in e-learning due to its cost efficiency [54]. The second is the picture-video presentation (PV), which features the instructor’s static image on the presentation slides; the image provides the instructor’s social presence but is less interactive and distracting than a talking head [26]. The third comprises a synchronized video of the instructor explaining the content alongside the corresponding presentation slides (VV); high media richness is characteristic of this type [6]. The instructor’s image or video was continually displayed in the top-right corner, similar to the default talking-head view in Zoom meetings. The instructor in the VV condition looked at the camera and spoke naturally, without deliberate facial expressions or eye gaze, as in most online lectures and courses. In each of the three conditions (AV, PV, VV), every participant watched three videos while their eye movements were recorded. The comprehension test following each video measured their learning performance.
We hypothesized that the instructor presence effect would be significant in both male and female learner groups. As females have been suggested to be more sensitive to social presence than males, they might allocate more visual attention to the instructor, which should be reflected in fixation counts and dwell times. Meanwhile, as males are assumed to be less sensitive to social cues, their attention allocation might be more distributed than that of female participants. Since both genders can draw on compensatory online learning strategies, they could achieve similar learning performance.
Therefore, using the eye-tracking technique, the current study investigated the eye movement patterns of male and female students who learned from video lectures with or without the instructor’s presence. Our findings should advance the current understanding of social agency theory with regard to learners’ differences and thus help improve the effectiveness of online education.

2. Materials and Methods

2.1. Participants and Design

Sixty-six undergraduates (34 males; age range: 18–21) from a Chinese university participated in the study. All participants had normal or corrected-to-normal vision and hearing. They provided written informed consent before the experiment and were paid for their participation. Two male participants were excluded from the data analysis because of problems during the eye movement calibration phase, leaving 64 participants (32 females; mean age = 19.72 ± 1.02) for the data analysis.
This study adopted a two-factor mixed design. Male and female participants watched nine videos in three instructor conditions (audio-video, AV; picture-video, PV; video-video, VV) and made true or false judgments about a series of statements following each video. Their scores on this comprehension task indexed their learning performance. All audio and video stimuli were in Chinese, the participants’ first language. The study was approved by the ethical committee of the Bilingual Cognition and Development Lab at the Guangdong University of Foreign Studies, China.

2.2. Apparatus and Eye Movement Data Analysis

Participants’ eye movement data were collected with an EyeLink 1000 eye tracker (SR Research Ltd., Mississauga, ON, Canada) in desktop-mounted mode at a sampling rate of 1000 Hz. Participants were seated approximately 60 cm from the screen, and a chin rest was used to minimize head movements. Each video had three areas of interest (AOIs): the text area, the topic-related picture area, and the instructor area. Instructor-absent videos (the AV condition) did not contain an instructor; for these, we created a corresponding AOI with the same size and location as the instructor AOI in the instructor-present videos [12]. Within each AOI, we collected the participants’ fixation count (average number of total fixations on a particular AOI), fixation count percentage (average percentage of all fixations on a specific AOI), dwell time (average sum of all fixation durations on a specific AOI), dwell time percentage (average percentage of trial time spent on a specific AOI), and the number of transitions between different AOIs. All data were collected at the Bilingual Cognition and Development Lab at the Guangdong University of Foreign Studies. The eye movement data were preprocessed in Data Viewer (SR Research), and unsuccessful trials (tracking ratio below 90%) were discarded.
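As an illustration of how these measures can be computed, the sketch below derives per-AOI fixation counts, dwell times, percentages, and transitions from a fixation report. It is a minimal example rather than the authors' actual pipeline; the file name and column names (participant, video, aoi, start_time, duration, trial_duration) are hypothetical.

```r
# Minimal sketch (not the authors' pipeline) of computing per-AOI measures
# from a fixation report. Hypothetical columns: participant, video,
# aoi ("text", "picture", "instructor"), start_time (ms), duration (fixation
# duration, ms), and trial_duration (total video duration, ms).
library(dplyr)

fixations <- read.csv("fixation_report.csv")  # hypothetical file name

aoi_measures <- fixations %>%
  group_by(participant, video, aoi) %>%
  summarise(fixation_count = n(),
            dwell_time     = sum(duration),
            trial_duration = first(trial_duration),
            .groups = "drop") %>%
  group_by(participant, video) %>%
  mutate(fixation_count_pct = 100 * fixation_count / sum(fixation_count),
         dwell_time_pct     = 100 * dwell_time / trial_duration) %>%
  ungroup()

# Transitions: count AOI changes between consecutive fixations; the analyses
# in Section 3.2 further break these down by AOI pair (e.g., text <-> instructor).
transitions <- fixations %>%
  arrange(participant, video, start_time) %>%
  group_by(participant, video) %>%
  summarise(n_transitions = sum(aoi != lag(aoi), na.rm = TRUE),
            .groups = "drop")
```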

2.3. Materials

Eleven video lectures were used in the current study: nine as experimental stimuli and two for the practice session. The videos introduced topics in science, history, and literature; details are presented in Table 1. We downloaded the original passages from the Chinese version of Wikipedia (https://zh.wikipedia.org/wiki/Wikipedia (accessed on 2 May 2021)) and revised each text into a 400-word script. We asked 20 Chinese students from the same university to rate each topic’s familiarity (1 = not familiar at all, 5 = very familiar) and the difficulty of its content (1 = not difficult at all, 5 = very difficult) on two five-point Likert scales. On average, they reported low familiarity with the topics (mean = 2.03 ± 1.22) and rated the content as not difficult (mean = 2.23 ± 0.93).
Based on those scripts, the 11 videos were recorded by the same instructor (a young female native Chinese speaker with a standard Putonghua accent) in three instructor conditions: audio-video without instructor presence (AV), picture-video with instructor presence (PV), and video-video with instructor presence (VV) (Figure 1). Each video lasted about two minutes. In the AV condition, only the learning content was shown, accompanied by the instructor’s narration and a topic-related picture. In the PV condition, a static image of the instructor appeared in the screen’s upper-right corner, with the same text and pictures as in the AV condition. In the VV condition, the static image was replaced by a video of the instructor giving the talk, synchronized with the slides. All videos were identical in the size of the text area (454 × 630 pixels), the topic-related picture area (220 × 246 pixels), and the instructor area (260 × 260 pixels). The nine experimental videos were presented in randomized order for all participants.

2.4. Measurements

Comprehension test: After watching each video, participants completed eight true or false judgments on visually presented statements based on what they had learned from the video. Each question appeared in the same window as the lecture video (Figure 1d), and participants pressed the Yes/No buttons to indicate whether a statement was true or false. Comprehension scores were calculated by assigning one point for a correct response and zero for an incorrect response. Each participant completed 72 questions across the nine videos (average accuracy was 85%), so the maximum score for learning performance in each condition was 24. Before the eye-tracking experiment, all comprehension questions were reviewed and optimized for clarity, accuracy, and content validity with the help of 20 matched participants who did not take part in the main study.
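For clarity, the snippet below illustrates how such condition-level scores could be computed; it is a hypothetical sketch, and the data frame and column names (responses, participant, condition, correct) are assumed rather than taken from the authors' scripts.

```r
# Hypothetical scoring sketch: one point per correct true/false judgment,
# summed over the 3 videos (8 questions each) within each condition,
# giving a maximum of 24 points per condition.
library(dplyr)

scores <- responses %>%
  group_by(participant, condition) %>%   # condition: AV, PV, or VV
  summarise(score = sum(correct),        # correct coded as 1/0
            .groups = "drop")
```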
After answering all questions for a video during the eye-tracking experiment, participants rated the familiarity of the topic and the difficulty of the video content. The instructions clarified that familiarity referred to their prior knowledge of each topic. As shown in Table 1, like the raters in the pilot study, participants in the eye-tracking experiment were unfamiliar with the topics and regarded the video content as low to moderate in difficulty.

2.5. Procedure

The participants were tested individually, seated at a desk facing the eye tracker. Before each video, the participant’s gaze was calibrated and validated with a 9-point calibration procedure. Following calibration, participants were given basic instructions and then watched the video. Immediately after each video, they answered the eight comprehension questions (by pressing the Yes/No buttons on the keyboard) and then rated the familiarity and difficulty of the video (by pressing keys from 1 to 5). The nine videos were presented in a randomized order for each participant, who could take a break after every three videos. The total duration of the experiment was approximately 40 min.

2.6. Data Analysis

We used linear mixed-effects models (LMMs) [55] implemented with the lme4 package in the R environment (version 4.1.0) [56]. In our analyses, we adopted the maximal random-effects structure [57], with instructor (AV, PV, and VV) and gender (male vs. female) as fixed factors, and participants and videos (items) as crossed random factors. A random slope was kept only if its inclusion significantly improved the model’s goodness of fit. Because instructor was a three-level categorical predictor, we adopted treatment coding and decomposed it into two contrasts [58], with the first contrast comparing the AV and PV conditions and the second comparing the AV and VV conditions.
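To make the model specification concrete, the sketch below shows how such an analysis could be set up in lme4. It is a hedged illustration rather than the authors' exact script: the data frame and variable names (dat, score, instructor, gender, participant, video) are hypothetical, and lmerTest is only one possible way to obtain the reported p-values.

```r
# Minimal sketch (not the authors' exact script) of the comprehension-score
# model. Assumed long-format data frame `dat` with hypothetical columns:
# score, instructor (AV/PV/VV), gender (male/female), participant, video.
library(lme4)
library(lmerTest)  # one option for p-values (Satterthwaite approximation)

dat$instructor <- factor(dat$instructor, levels = c("AV", "PV", "VV"))
contrasts(dat$instructor) <- contr.treatment(3)  # AV as baseline:
                                                 # contrast 1 = AV vs. PV
                                                 # contrast 2 = AV vs. VV

# Start from the maximal random-effects structure and drop random slopes
# that do not significantly improve model fit.
m_max <- lmer(score ~ instructor * gender +
                (1 + instructor | participant) + (1 + gender | video),
              data = dat)

# The final model reported in Table 2 retained only by-participant and
# by-item (video) random intercepts.
m_final <- lmer(score ~ instructor * gender +
                  (1 | participant) + (1 | video),
                data = dat)
summary(m_final)
```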

3. Results

3.1. Effects of Gender and Instructor on Learning Performance

We first examined whether gender and instructor exerted any influence on learners’ comprehension performance. The LMM results (Table 2) did not reveal a significant effect of gender [β = 0.01, SE = 0.16, p = 0.949]. However, we found a significant main effect of instructor: participants obtained higher scores in the VV condition than in the AV condition [β = 0.43, SE = 0.14, p = 0.002], whereas comprehension scores did not differ significantly between the AV and PV conditions. Thus, the static image of the instructor did not significantly boost participants’ learning performance in the video lectures. Additionally, there were no significant interactions between the instructor contrasts and gender, indicating that the effect of instructor condition on learning outcomes did not differ between male and female learners (Figure 2).

3.2. Effects of Gender and Instructor on Visual Attention Allocation

We analyzed the eye movement data to examine gender differences in visual attention allocation during learning. Each condition involved three AOIs: the text, the topic-related picture, and the instructor. For each AOI, we obtained learners’ fixation count (and percentage), dwell time (and percentage), and the number of transitions between AOIs. Table 3 presents the descriptive results. Fixation count, dwell time, and the number of transitions were further examined across instructor conditions and gender using linear mixed-effects models; Table 4 presents the LMM results for each measure.
Learners’ attention to the text AOI. The significant effects of instructor indicate that learners spent less time on the text in the PV and VV conditions than in the AV condition; in other words, adding an instructor (either as a static picture or a video) diverted some of the learners’ attention from the learning content. However, we did not find any effect of gender or any significant interaction between the instructor contrasts and gender, suggesting that males and females distributed their attention to the text similarly across conditions.
Learners’ attention to the picture AOI. The results showed a significant effect of instructor: learners paid more attention to the topic-related picture in the PV condition. No significant gender differences were found in fixation count or dwell time.
Learners’ attention to the instructor AOI. The significant effects of instructor indicate that the added instructor in both the PV and VV conditions attracted much of the learners’ attention. Moreover, for dwell time, a significant interaction between instructor and gender was found: follow-up tests showed that female learners dwelt longer on the presented instructor than males in the VV condition (p < 0.05) (Figure 3).
We also examined the number of transitions between different AOIs, taking the direction of transitions into account. For the transitions between the text AOI and the instructor AOI, there were significant effects of instructor, showing that participants made more transitions between these two AOIs in the PV and VV conditions than in the AV condition. We also found significant interactions between instructor and gender: follow-up tests showed that males transited more between these two AOIs than females (ps < 0.05) (Figure 4). Significant effects of instructor were also found for the transitions between the instructor AOI and the picture AOI, indicating that learners transited more between these two AOIs in the PV and VV conditions than in the AV condition. Finally, for the transitions between the text AOI and the picture AOI, significant effects of instructor showed that learners transited less between these two AOIs in the PV and VV conditions than in the AV condition. In summary, the significant effects of instructor generally found for the number of transitions across AOIs suggest that the added instructor in the PV and VV conditions split learners’ attention while they were watching the video lectures.
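The follow-up (simple-effects) tests reported above could be implemented in several ways; the sketch below shows one possibility using the emmeans package, illustrated for dwell time on the instructor AOI. The package choice, data frame, and column names are assumptions, not details reported in the paper.

```r
# Hypothetical sketch of follow-up tests for a significant instructor x gender
# interaction, using estimated marginal means. `aoi_instructor` and its
# columns (dwell_time, instructor, gender, participant, video) are assumed names.
library(lme4)
library(emmeans)

m_dwell <- lmer(dwell_time ~ instructor * gender +
                  (1 | participant) + (1 | video),
                data = aoi_instructor)

# Compare male vs. female within each instructor condition
emmeans(m_dwell, pairwise ~ gender | instructor)
```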

4. Discussion

Online learning boomed during the pandemic, and this shift requires empirical research to verify effective teaching methods in online learning environments. The present study aimed to assess gender differences in the instructor presence effect in video lectures, a central component of the online learning experience. Using the eye-tracking technique, we examined the learning outcomes and learning process of male and female Chinese adult learners who learned from video lectures. In both groups, we found the instructor presence effect: males and females learned better from the videos with a talking instructor than from those without the instructor or with the instructor’s static image. In addition, we revealed some gender differences in attention allocation during video lecture learning.

4.1. The Instructor’s Active Engagement in Video Lectures Facilitates Learning Performance in Both Male and Female Adult Learners

An initial objective of this project was to identify the instructor presence effect regardless of gender. We found a significant main effect of instructor: video lectures in which the instructor explained the content of the slides boosted the learning performance of both male and female learners. Our results support the claim that instructor presence facilitates online learning performance [6,7,8,9,10,12,18]. This observation is consistent with social agency theory [1]: the instructor, acting as a social cue in the video (VV) condition only, might prime a feeling of social presence in learners. The dynamic presence of the instructor might make them more committed to actively processing the presented information and thus improve learning performance.
The instructor’s static presence in the PV condition benefited learning only numerically, not significantly, compared with the baseline condition without an instructor (Figure 2). This might be because the static image conveyed limited nonverbal cues, lacking the mutual eye gaze and active engagement present in the VV condition. Without these embodiment cues, learners might have perceived much less instructor engagement and, therefore, interacted less with the instructor when they only saw the instructor’s picture on the screen. They did pay some attention to the instructor’s picture (3% dwell time, Table 3), but this was limited compared with the VV condition (over 10% dwell time). Our findings suggest that active instructor engagement in video lectures is crucial for the facilitation effect of instructor presence.
Our results also shed light on the instructor’s embodiment. The instructor’s static picture represented low embodiment (with no movement at all) and proved ineffective in improving learning outcomes, whereas the instructor’s video represented an intermediate level of embodiment. Unlike the highly embodied instructors in most previous studies [14,19,32,33,34,35], the instructor in our VV condition did not show any deliberate facial expressions, eye gaze, or gestures; this simulated the natural presence of an instructor in everyday online learning settings, such as Zoom meetings. In our study, such a natural dynamic presence was also effective in improving learning performance. Therefore, we suggest that teachers turn on their cameras and show their presence in daily online courses.

4.2. Males and Females Achieve the Same Performance via Different Attention Allocation Processes

To the best of our knowledge, this is the first study to explore gender differences in the instructor presence effect during video lectures. We compared the learning performance of male and female learners in terms of their comprehension scores after video lectures, and we examined their attention allocation during the learning process using eye-tracking technology. Contrary to expectations, this study did not find a significant gender difference in learning performance. However, in the VV condition (featuring a video of the instructor), we found significant gender differences in attention allocation during the learning process: male and female learners differed in dwell time on the instructor AOI and in the number of transitions between the text and instructor AOIs.
Gender, as a personal and fundamental characteristic of learners, has been associated with differences in the perception of social presence in web courses [22], online learning strategies [59], and communication efficacy [60]. For example, female learners seem more sensitive to social presence [23] and experience a higher level of social presence during online learning [22]. In the current study, gender differences in attention allocation emerged when learners watched the video lectures with the instructor’s video present. Specifically, females spent a longer time on the instructor AOI, while males transited more frequently between the instructor and the text. As a social cue intended to increase social presence, the on-screen instructor prompted males to switch between content areas more frequently rather than sustaining their attention on the instructor. This might reflect males’ distinct approach to social cues; as a previous study pointed out, men tend to miss social cues and have difficulty processing them in social tasks [53]. In contrast, female learners tended to dwell longer on the instructor, which, as previous studies suggest, might be due to females’ sensitivity to social presence in online learning environments.
Unfortunately, we did not collect data on participants’ perception of the instructor’s social presence, which makes it hard to explain the findings comprehensively. It is unclear whether females’ attentional preference for the instructor indicates actual liking and interest. Moreover, despite the different attention allocation processes, males and females in our study achieved the same learning outcomes after watching the instructor-present videos. This dissociation between attention allocation and learning outcomes also indicates the need to include subjective measures that reveal learners’ actual perceptions. In other words, gender differences in the instructor presence effect should be examined thoroughly in terms of learners’ perceptions, learning outcomes, and attention distribution, and possible relationships among these three dimensions should be considered when discussing the moderating role of gender in the instructor presence effect.
Last, as mentioned before, the presented instructor in our study did not show salient social cues such as facial expression, body orientation, and gestures. It remains unknown whether males and females would react to the instructor presence differently when presented with an instructor with high embodiment social cues. In that situation, gender differences in learning performance and attention allocation might be more evident. Future research on gender differences could tap into high embodiment settings.

4.3. Implications, Limitations, and Future Study

In sum, this study revealed positive effects of instructor presence on participants’ learning outcomes. It highlighted that the positive instructor presence effect held true for both male and female learners and suggests including a talking instructor during online lecturing. The learning process, as reflected in eye movement patterns, showed some differences between the two groups, with females attending more to the instructor’s video. Future work should measure learners’ perception of social presence to better account for females’ attentional preference for the instructor. Finally, our results also caution against the problem of imbalanced gender ratios in online learning research and call for the consideration of gender when exploring the effectiveness of online instruction.
This study has a few limitations that future research should consider and address. First, our participants self-reported being generally unfamiliar with the topics covered in this study, but a prior knowledge test would have gauged their pre-existing knowledge of each topic more accurately. Second, we did not use subjective measures to reveal learners’ perception of social presence in video lectures with or without instructor presence; future research should consider learners’ social presence perception when discussing potential gender differences in the instructor presence effect. Finally, this study assessed learning only through retention; a transfer test would better reveal how well learners understand the material [21]. It therefore remains an open question whether males and females differ in transferring what they have learned from video lectures with (or without) the instructor’s presence. Future research on gender differences in the instructor presence effect should adopt both retention and transfer tests to assess learning outcomes.

5. Conclusions

In response to the call for more attention to individual differences in instructor presence research, the current study focused on gender differences. Using eye-tracking technology, we examined male and female learners’ attention allocation and learning outcomes in video lectures with the instructor present or absent. The video with a talking instructor facilitated learning in both genders, who achieved similar performance. However, male and female learners showed different attention allocation patterns: females dwelt longer on the talking instructor, and males switched more between the instructor and the learning content. Future research should comprehensively explore potential gender differences in the instructor presence effect by examining learners’ perceptions, learning outcomes, and attention allocation in high embodiment settings.

Author Contributions

J.Y. and Y.Z. designed the study; Y.Z. conducted the experiments and collected data; Y.Z. analyzed data; Y.Z. and J.Y. discussed the results; Y.Z. and J.Y. wrote and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by a grant (BCD202105) from the Bilingual Cognition and Development Lab, Center for Linguistics and Applied Linguistics, Guangdong University of Foreign Studies, and the Fundamental Research Funds for the Central Universities, China.

Institutional Review Board Statement

The study was approved by the ethical committee of the Bilingual Cognition and Development Lab at the Guangdong University of Foreign Studies, China (approval code: BCDL_202106_001).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy and ethical restrictions.

Acknowledgments

The authors thank the participants for their collaboration and the Language Learning and Brain (LLaB) research team members for data collection and discussion.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mayer, R.E. Principles based on social cues in multimedia learning: Personalization, voice, image, and embodiment principles. In The Cambridge Handbook of Multimedia Learning, 2nd ed.; Mayer, R., Ed.; Cambridge University Press: New York, NY, USA, 2014; pp. 345–368.
2. Mayer, R.E.; Fennell, S.; Farmer, L.; Campbell, J. A personalization effect in multimedia learning: Students learn better when words are in conversational style rather than formal style. J. Educ. Psychol. 2004, 96, 389–395.
3. Paas, F.; Renkl, A.; Sweller, J. Cognitive load theory and instructional design: Recent developments. Educ. Psychol. 2003, 38, 1–4.
4. Sweller, J.; Ayres, P.; Kalyuga, S. Cognitive Load Theory; Springer: New York, NY, USA, 2011.
5. Ayres, P.; Sweller, J. The split-attention principle in multimedia learning. In The Cambridge Handbook of Multimedia Learning, 2nd ed.; Mayer, R., Ed.; Cambridge University Press: New York, NY, USA, 2014; pp. 206–226.
6. Chen, C.; Wu, C. Effects of different video lecture types on sustained attention, emotion, cognitive load, and learning performance. Comput. Educ. 2015, 80, 108–121.
7. Colliot, T.; Jamet, E. Understanding the effects of a teacher video on learning from a multimedia document: An eye-tracking study. Educ. Technol. Res. Dev. 2018, 66, 1415–1433.
8. Kokoç, M.; Ilgaz, H.; Altun, A. Effects of sustained attention and video lecture types on learning performances. Educ. Technol. Res. Dev. 2020, 68, 3015–3039.
9. Pi, Z.; Hong, J. Learning process and learning outcomes of video podcasts including the instructor and PPT slides: A Chinese case. Innov. Educ. Teach. Int. 2016, 53, 135–144.
10. Van Gog, T.; Verveer, I.; Verveer, L. Learning from video modeling examples: Effects of seeing the human model’s face. Comput. Educ. 2014, 72, 323–327.
11. Wang, J.; Antonenko, P. Instructor presence in instructional video: Effects on visual attention, recall, and perceived learning. Comput. Hum. Behav. 2017, 71, 79–89.
12. Wang, J.; Antonenko, P.; Dawson, K. Does visual attention to the instructor in online video affect learning and learner perceptions? An eye-tracking analysis. Comput. Educ. 2020, 146, 103779.
13. Liew, T.W.; Zin, N.A.M.; Sahari, N. Exploring the affective, motivational and cognitive effects of pedagogical agent enthusiasm in a multimedia learning environment. Hum.-Cent. Comput. Inf. Sci. 2017, 7, 9.
14. Pi, Z.; Xu, K.; Liu, C.; Yang, J. Instructor presence in video lectures: Eye gaze matters, but not body orientation. Comput. Educ. 2020, 144, 103713.
15. Beege, M.; Schneider, S.; Nebel, S.; Rey, G.D. Look into my eyes! Exploring the effect of addressing in educational videos. Learn. Instr. 2017, 49, 113–120.
16. Igualada, A.; Esteve-Gibert, N.; Prieto, P. Beat gestures improve word recall in 3- to 5-year-old children. J. Exp. Child Psychol. 2017, 56, 99–112.
17. Zhang, Y.; Xu, K.; Pi, Z.; Yang, J. Instructor’s position affects learning from video lectures in Chinese context: An eye-tracking study. Behav. Inf. Technol. 2021, 1–10.
18. Fiorella, L.; Stull, A.T.; Kuhlmann, S.; Mayer, R.E. Instructor presence in video lectures: The role of dynamic drawings, eye contact, and instructor visibility. J. Educ. Psychol. 2019, 111, 1162–1171.
19. Van Wermeskerken, M.; Van Gog, T. Seeing the instructor’s face and gaze in demonstration video examples affects attention allocation but not learning. Comput. Educ. 2017, 113, 98–107.
20. Homer, B.D.; Plass, J.L.; Blake, L. The effects of video on cognitive load and social presence in multimedia-learning. Comput. Hum. Behav. 2008, 24, 786–797.
21. Mayer, R.E. Multimedia Learning, 3rd ed.; Cambridge University Press: New York, NY, USA, 2021.
22. Johnson, R. Gender differences in e-learning: Communication, social presence, and learning outcomes. J. Organ. End User Comput. 2011, 23, 79–94.
23. Rodríguez-Ardura, I.; Meseguer-Artola, A. Presence in personalised e-learning—The impact of cognitive and emotional factors and the moderating role of gender. Behav. Inf. Technol. 2016, 35, 1008–1018.
24. Kizilcec, R.F.; Bailenson, J.N.; Gomez, C.J. The instructor’s face in video instruction: Evidence from two large-scale field studies. J. Educ. Psychol. 2015, 107, 724–739.
25. Wang, J.; Hao, Y.H.; Lu, J.L. The effect of presenting mode of teaching video on self-directed learning effectiveness: An experimental study. E-Educ. Res. 2014, 251, 93–105. (In Chinese)
26. Hew, K.F.; Lo, C.K. Comparing video styles and study strategies during video-recorded lectures: Effects on secondary school mathematics students’ preference and learning. Interact. Learn. Environ. 2018, 28, 847–864.
27. Van Wermeskerken, M.; Ravensbergen, S.; van Gog, T. Effects of instructor presence in video modeling examples on attention and learning. Comput. Hum. Behav. 2018, 89, 430–438.
28. Wilson, K.; Martinez, M.; Mills, C.; D’Mello, S.; Smilek, D.; Risko, E. Instructor presence effect: Liking does not always lead to learning. Comput. Educ. 2018, 122, 205–220.
29. Hong, J.; Pi, Z.; Yang, J. Learning declarative and procedural knowledge via video lectures: Cognitive load and learning effectiveness. Innov. Educ. Teach. Int. 2018, 55, 74–81.
30. Yi, T.; Yang, X.; Pi, Z.; Huang, L.; Yang, J. Teachers’ continuous vs. intermittent presence in procedural knowledge instructional videos. Innov. Educ. Teach. Int. 2019, 56, 481–492.
31. Just, M.A.; Carpenter, P.A. Using eye fixations to study reading comprehension. In New Methods in Reading Comprehension Research; Kieras, D.E., Just, M.A., Eds.; Erlbaum: Hillsdale, NJ, USA, 1984; pp. 151–182.
32. Wang, Y.; Liu, Q.; Chen, W.; Wang, Q.; Stein, D. Effects of instructor’s facial expressions on students’ learning with video lectures. Br. J. Educ. Technol. 2019, 50, 1381–1395.
33. Pi, Z.; Chen, M.; Zhu, F.; Yang, J.; Hu, W. Modulation of instructor’s eye gaze by facial expression in video lectures. Innov. Educ. Teach. Int. 2022, 59, 15–23.
34. Pi, Z.; Zhang, Y.; Zhu, F.; Xu, K.; Yang, J.; Hu, W. Instructors’ pointing gestures improve learning regardless of their use of directed gaze in video lectures. Comput. Educ. 2019, 128, 345–352.
35. Pi, Z.; Zhang, Y.; Yu, Q.; Zhang, Y.; Yang, J.; Zhao, Q. Neural oscillations and learning performance vary with an instructor’s gestures and visual materials in video lectures. Br. J. Educ. Technol. 2022, 53, 93–113.
36. Pi, Z.; Hong, J.; Yang, J. Does instructor’s image size in video lectures affect learning outcomes? J. Comput. Assist. Learn. 2017, 33, 347–354.
37. Stull, A.; Fiorella, L.; Mayer, R. An eye-tracking analysis of instructor presence in video lectures. Comput. Hum. Behav. 2018, 88, 263–272.
38. Asoodar, M.; Vaezi, S.; Izanloo, B. Framework to improve e-learner satisfaction and further strengthen e-learning implementation. Comput. Hum. Behav. 2016, 63, 704–716.
39. Liaw, S.S. Considerations for developing constructivist web-based learning. Int. J. Instr. Media 2004, 31, 309.
40. Hyde, J.S. The gender similarities hypothesis. Am. Psychol. 2005, 60, 581–592.
41. Hyde, J.S. Gender similarities and differences. Annu. Rev. Psychol. 2014, 65, 373–398.
42. Maccoby, E.E.; Jacklin, C.N. The Psychology of Sex Differences; Stanford University Press: Stanford, CA, USA, 1974.
43. Zell, E.; Krizan, Z.; Teeter, S.R. Evaluating gender similarities and differences using metasynthesis. Am. Psychol. 2015, 70, 10–20.
44. Bevilacqua, A. Commentary: Should gender differences be included in the evolutionary upgrade to cognitive load theory. Educ. Psychol. Rev. 2017, 29, 189–194.
45. Sánchez, C.A.; Wiley, J. Sex differences in science learning: Closing the gap through animations. Learn. Individ. Differ. 2010, 20, 271–275.
46. Castro-Alonso, J.; Wong, A.; Adesope, O.O.; Ayres, P.; Paas, F. Gender imbalance in instructional dynamic versus static visualizations: A meta-analysis. Educ. Psychol. Rev. 2019, 31, 361–387.
47. Sung, E.; Mayer, R.E. Five facets of social presence in online distance education. Comput. Hum. Behav. 2012, 28, 1738–1747.
48. Joo, Y.J.; Lim, K.Y.; Kim, E.K. Online university students’ satisfaction and persistence: Examining perceived level of presence, usefulness and ease of use as predictors in a structural model. Comput. Educ. 2011, 57, 1654–1664.
49. Lim, J.; Rosenthal, S.; Sim, Y.; Lim, Z.; Oh, K. Making online learning more satisfying: The effects of online-learning self-efficacy, social presence and content structure. Technol. Pedagog. Educ. 2021, 30, 543–556.
50. Richardson, J.C.; Maeda, Y.; Lv, J.; Caskurlu, S. Social presence in relation to students’ satisfaction and learning in the online environment: A meta-analysis. Comput. Hum. Behav. 2017, 71, 402–417.
51. Joksimović, S.; Gašević, D.; Kovanović, V.; Riecke, B.; Hatala, M. Social presence in online discussions as a process predictor of academic performance. J. Comput. Assist. Learn. 2015, 31, 638–665.
52. Vrieling-Teunter, E.; Henderikx, M.; Nadolski, R.; Kreijns, K. Facilitating peer interaction regulation in online settings: The role of social presence, social space and sociability. Front. Psychol. 2022, 13, 793798.
53. Hall, J.; Philip, R.; Marwick, K.; Whalley, H.; Romaniuk, L.; McIntosh, A.; Lawrie, S. Social cognition, the male brain and the autism spectrum. PLoS ONE 2012, 7, e49033.
54. Köster, J. Design of instructional videos. In Video in the Age of Digital Learning; Köster, J., Ed.; Springer International Publishing: Cham, Switzerland, 2018; pp. 49–55.
55. Baayen, R.; Davidson, D.; Bates, D. Mixed-effects modeling with crossed random effects for subjects and items. J. Mem. Lang. 2008, 59, 390–412.
56. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021.
57. Barr, D.J.; Levy, R.; Scheepers, C.; Tily, H.J. Random effects structure for confirmatory hypothesis testing: Keep it maximal. J. Mem. Lang. 2013, 68, 255–278.
58. Cai, Z.; Sun, Z.; Zhao, N. Interlocutor modelling in lexical alignment: The role of linguistic competence. J. Mem. Lang. 2021, 121, 104278.
59. Wu, J.Y.; Cheng, T. Who is better adapted in learning online within the personal learning environment? Relating gender differences in cognitive attention networks to digital distraction. Comput. Educ. 2019, 128, 312–329.
60. Rafique, G.M.; Mahmood, K.; Warraich, N.F.; Rehman, S.U. Readiness for online learning during COVID-19 pandemic: A survey of Pakistani LIS students. J. Acad. Libr. 2021, 47, 102346.
Figure 1. Screenshots of the videos in the (a) audio-video condition (AV), (b) picture-video condition (PV), (c) video-video condition (VV), and (d) one comprehension question following a video.
Figure 2. Learning performance of male and female learners in three instructor conditions. AV, the audio-video condition; PV, the picture-video condition; VV, the video-video condition. ** p < 0.01.
Figure 3. Learners’ dwell time on the instructor AOI. AV, the audio-video condition; PV, the picture-video condition; VV, the video-video condition. * p < 0.05.
Figure 4. Number of transitions between the text and the instructor in the VV condition. * p < 0.05.
Table 1. Descriptives of 11 lecture videos.

| No. | Topic | Area | Condition | Familiarity a | Difficulty a | Familiarity b | Difficulty b |
|---|---|---|---|---|---|---|---|
| 1 * | Venus | Science | PV | 3.21 | 2.47 | 2.29 | 2.70 |
| 2 * | Volcano | Science | VV | 3.63 | 2.47 | 2.92 | 2.65 |
| 3 | Rosetta stone | History | VV | 1.37 | 2.16 | 1.38 | 2.94 |
| 4 | Medici | History | PV | 1.95 | 2.05 | 1.91 | 2.18 |
| 5 | Copper age | History | AV | 1.58 | 2.63 | 1.56 | 2.59 |
| 6 | The sound and the fury | Literature | PV | 2.42 | 2.26 | 1.56 | 3.06 |
| 7 | Isabel Allende | Literature | AV | 1.42 | 2.05 | 1.58 | 2.41 |
| 8 | Malin Kundang c | Literature | VV | 1.26 | 1.58 | 1.42 | 2.00 |
| 9 | Rhizanthella gardneri d | Science | PV | 1.31 | 2.15 | 1.50 | 2.56 |
| 10 | Balinese tiger | Science | AV | 1.95 | 1.79 | 1.76 | 2.30 |
| 11 | Permafrost | Science | VV | 2.16 | 2.42 | 2.53 | 2.67 |

Note. AV, the audio-video condition; PV, the picture-video condition; VV, the video-video condition; * used as practice; a rated by 20 Chinese students who did not participate in the study; b rated by the 64 participants in the eye-tracking experiment; c a folk tale in Southeast Asia; d an entirely subterranean mycoheterotrophic orchid.
Table 2. Results of linear mixed-effects models for comprehension scores.

| Effect | β | SE | t | p |
|---|---|---|---|---|
| Intercept | 6.69 | 0.19 | 36.05 | <0.001 *** |
| Instructor 1: AV vs. PV | 0.16 | 0.14 | 1.14 | 0.257 |
| Instructor 2: AV vs. VV | 0.43 | 0.14 | 3.10 | 0.002 ** |
| Gender | 0.01 | 0.16 | 0.06 | 0.949 |
| Instructor 1 × Gender | −0.06 | 0.19 | −0.32 | 0.748 |
| Instructor 2 × Gender | −0.08 | 0.19 | −0.43 | 0.669 |

Note. The final LMM included both by-participant and by-item intercepts. ** p < 0.01; *** p < 0.001.
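For readers who prefer the specification in equation form, the model described in the note can be sketched as follows. The notation here is illustrative rather than taken from the original text: p indexes participants, i indexes videos, Instructor 1 and Instructor 2 are the two contrasts comparing PV and VV against the AV baseline (matching the rows above), and u and w denote the by-participant and by-item random intercepts.

```latex
\text{Score}_{pi} = \beta_0
  + \beta_1\,\text{Instructor1}_{i}
  + \beta_2\,\text{Instructor2}_{i}
  + \beta_3\,\text{Gender}_{p}
  + \beta_4\,(\text{Instructor1}_{i}\times\text{Gender}_{p})
  + \beta_5\,(\text{Instructor2}_{i}\times\text{Gender}_{p})
  + u_{p} + w_{i} + \varepsilon_{pi},
\qquad
u_{p}\sim\mathcal{N}(0,\sigma_{u}^{2}),\;
w_{i}\sim\mathcal{N}(0,\sigma_{w}^{2}),\;
\varepsilon_{pi}\sim\mathcal{N}(0,\sigma_{\varepsilon}^{2}).
```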
Table 3. Visual attention distribution statistics for the videos in the AV, PV, and VV conditions.

| AOI | Measure | AV Male | AV Female | PV Male | PV Female | VV Male | VV Female |
|---|---|---|---|---|---|---|---|
| Text | Fixation count | 244.90 (30.60) | 257.67 (44.81) | 217.96 (30.17) | 221.92 (45.63) | 211.33 (35.14) | 221.41 (49.45) |
| Text | Fixation count (%) | 87.61 (0.04) | 87.42 (0.07) | 80.62 (0.06) | 80.54 (0.08) | 78.37 (0.08) | 79.14 (0.10) |
| Text | Dwell time a | 77.47 (8.51) | 79.82 (11.64) | 70.55 (9.43) | 71.44 (11.54) | 66.94 (11.10) | 67.48 (14.90) |
| Text | Dwell time (%) | 87.14 (0.05) | 86.59 (0.08) | 80.69 (0.07) | 79.65 (0.10) | 75.80 (0.11) | 74.86 (0.14) |
| Picture | Fixation count | 30.64 (11.64) | 32.56 (18.50) | 38.83 (17.80) | 39.82 (21.22) | 29.74 (15.74) | 30.18 (17.78) |
| Picture | Fixation count (%) | 10.84 (0.04) | 11.03 (0.06) | 14.16 (0.06) | 14.37 (0.07) | 10.98 (0.06) | 10.67 (0.06) |
| Picture | Dwell time | 10.46 (4.86) | 10.97 (6.41) | 12.99 (6.76) | 13.98 (6.81) | 10.38 (6.54) | 9.89 (5.68) |
| Picture | Dwell time (%) | 11.66 (0.05) | 12.17 (0.07) | 14.86 (0.07) | 15.76 (0.08) | 11.60 (0.07) | 11.30 (0.07) |
| Instructor | Fixation count | 0.25 (0.63) | 0.29 (0.92) | 8.85 (6.18) | 8.83 (7.95) | 23.65 (19.88) | 23.82 (20.02) |
| Instructor | Fixation count (%) | 0.09 (0.20) | 0.11 (0.44) | 3.20 (0.02) | 3.21 (0.03) | 8.53 (0.07) | 8.79 (0.07) |
| Instructor | Dwell time | 0.07 (0.18) | 0.09 (0.39) | 2.67 (1.95) | 2.75 (2.47) | 9.55 (7.28) | 11.58 (10.58) |
| Instructor | Dwell time (%) | 0.08 (0.20) | 0.10 (0.45) | 3.07 (0.02) | 3.13 (0.03) | 10.91 (0.08) | 12.83 (0.11) |

Note. AOI, area of interest; AV, the audio-video condition; PV, the picture-video condition; VV, the video-video condition; a the unit of dwell time is seconds.
Table 4. LMM statistics for eye movement measures. Each cell shows β (SE), p.

| AOI | Measure | Instructor 1: AV vs. PV | Instructor 2: AV vs. VV | Gender | Instructor 1 × Gender | Instructor 2 × Gender |
|---|---|---|---|---|---|---|
| Text | Fixation count a | −38.04 (5.58), <0.001 *** | −39.43 (5.58), <0.001 *** | −16.17 (10.34), 0.121 | 12.02 (7.89), 0.129 | 5.85 (7.89), 0.459 |
| Text | Dwell time a | −9.42 (1.69), <0.001 *** | −13.09 (1.69), <0.001 *** | −3.32 (2.93), 0.259 | 2.39 (2.39), 0.317 | 2.47 (2.39), 0.303 |
| Picture | Fixation count a | 6.23 (2.65), 0.019 * | −3.01 (2.65), 0.256 | −2.14 (4.36), 0.625 | 2.12 (3.74), 0.572 | 2.15 (3.75), 0.567 |
| Picture | Dwell time a | 2.62 (1.02), 0.011 * | −1.31 (1.02), 0.199 | −0.55 (1.58), 0.727 | −0.15 (1.44), 0.919 | 1.20 (1.44), 0.404 |
| Instructor | Fixation count b | 8.19 (1.59), <0.001 *** | 23.39 (1.59), <0.001 *** | −0.04 (2.43), 0.986 | 0.39 (2.26), 0.861 | −1.04 (2.26), 0.645 |
| Instructor | Dwell time b | 2.59 (0.76), <0.001 *** | 11.41 (0.76), <0.001 *** | −0.02 (1.07), 0.983 | 8.19 (1.07), 0.993 | −2.23 (1.07), 0.038 * |
| Number of transitions | | | | | | |
| | Instructor→Text b | 3.03 (1.39), 0.031 * | 5.64 (1.39), <0.001 *** | −0.02 (1.53), 0.989 | 0.21 (1.98), 0.916 | 4.05 (1.98), 0.041 * |
| | Text→Instructor b | 2.39 (0.48), <0.001 *** | 5.50 (0.48), <0.001 *** | <0.001 (0.75), 1.000 | 0.39 (0.67), 0.569 | 1.77 (0.67), 0.009 ** |
| | Picture→Instructor b | 1.19 (0.17), <0.001 *** | 1.56 (0.17), <0.001 *** | 0.02 (0.23), 0.931 | −0.06 (0.25), 0.800 | 0.16 (0.25), 0.518 |
| | Instructor→Picture b | 0.77 (0.18), <0.001 *** | 1.57 (0.18), <0.001 *** | <0.001 (0.23), 1.000 | 0.39 (0.26), 0.129 | 0.38 (0.26), 0.151 |
| | Picture→Text c | −1.69 (0.75), 0.027 * | −3.32 (0.82), <0.001 *** | 0.56 (1.39), 0.689 | 1.48 (1.06), 0.166 | −0.73 (1.15), 0.529 |
| | Text→Picture a | −0.98 (0.63), 0.120 | −2.98 (0.63), <0.001 *** | 0.79 (1.19), 0.508 | 0.69 (0.88), 0.433 | −0.98 (0.89), 0.271 |

Note: a The final LMM included both by-participant and by-item intercepts; b the final LMM included a by-participant intercept; c the final LMM included a by-participant random slope for condition, in addition to by-participant and by-item intercepts. AOI, area of interest; AV, the audio-video condition; PV, the picture-video condition; VV, the video-video condition. * p < 0.05, ** p < 0.01, *** p < 0.001.
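As a sketch under the same illustrative notation as for Table 2, the three random-effects structures in the note differ only in their random terms. Structure (c), used for the Picture→Text transitions, adds by-participant random slopes for the two instructor contrasts to the crossed random intercepts:

```latex
y_{pi} = \beta_0 + \sum_{k}\beta_{k}\,x_{k,pi}
  + u_{0p} + u_{1p}\,\text{Instructor1}_{i} + u_{2p}\,\text{Instructor2}_{i}
  + w_{i} + \varepsilon_{pi},
```

where the x_{k,pi} are the fixed-effect predictors listed in the column headers. Structure (a) drops the random slopes u_{1p} and u_{2p}, and structure (b) additionally drops the by-item intercept w_{i}.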
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
