Article

Predict Students’ Attention in Online Learning Using EEG Data

Abeer Al-Nafjan and Mashael Aldayel
1 Computer Science Department, College of Computer and Information Sciences, Imam Muhammad Bin Saud University, Riyadh 11432, Saudi Arabia
2 Information Technology Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(11), 6553; https://0-doi-org.brum.beds.ac.uk/10.3390/su14116553
Submission received: 14 April 2022 / Revised: 24 May 2022 / Accepted: 25 May 2022 / Published: 27 May 2022

Abstract

In education, it is critical to monitor students’ attention, measure the extent to which they participate, and account for differences in their levels and abilities. The overall goal of this study was to increase the quality of distance education. In particular, to craft an approach that effectively augments online learning with objective measures of brain activity, we propose a brain–computer interface (BCI) system that uses electroencephalography (EEG) signals to detect students’ attention during online classes. This system will aid teachers in objectively assessing student attention and engagement. To this end, experiments were conducted on a public dataset: we extracted power spectral density (PSD) features using a fast Fourier transform and calculated different attention indexes. We then built three classification algorithms: k-nearest neighbors (KNN), support vector machine (SVM), and random forest (RF). Our proposed random forest classifier achieved a higher accuracy (96%) than KNN and SVM. Moreover, we compared our results with state-of-the-art attention-detection systems on the same dataset. Our findings reveal that the proposed RF approach can be used to effectively distinguish the attention state of a user.

1. Introduction

Attention refers to the brain’s capability to choose one aspect of the environment on which to concentrate while ignoring everything else. Attention is the first step in the learning process: to perceive and respond, students need active attention to plan or preview their thoughts and actions and to monitor and regulate them.
Distance learning is defined as a “system of education in which students study at home and communicate with their teachers over the internet” [1]. A student’s attention during a class, whether the class is an in-person class or a distance learning class, plays an important role in the class’s effectiveness [2]. Detecting and monitoring students’ attention is vital for enhancing student engagement and learning quality.
The brain–computer interface (BCI), a technology that acts as a channel between the brain and a computer system, can detect humans’ emotional states. EEG-based BCI has gained widespread attention in research due to its portability, usability, and safety. Accordingly, an increasing amount of research on EEG-based BCI systems reflects the scientific community’s interest in EEG-based BCI technology and its application in different contexts [3].
Due to the COVID-19 pandemic, many schools had to switch to remote learning. Students across the globe were, and still are, forced to adapt to this new style of learning. However, the response has not been entirely positive; many have complained that the lack of student–teacher interaction prevents students from focusing and paying attention as they would in an in-person class. Moreover, there are many distracting factors at home, so students are less focused, especially those who lack self-regulation skills. This raises the question of how we can detect students’ attention levels. To help teachers effectively observe students’ states of attention in online learning environments, we propose a solution based on an EEG feedback approach; in essence, an automatic attention-monitoring system to help manage students in e-learning.
Advances in EEG-based BCI technology have made it possible to observe activities and abnormalities within the human brain without invasive neurosurgery. EEG-based BCI can be used in online and distance learning in various ways. The aim of this study was to design and develop a real-time attention-detection system that can detect attention levels using physiological data. Our objectives included: (i) reviewing and comparing similar previous studies, (ii) investigating different computational methods and classification approaches to achieve an acceptable accuracy, and (iii) designing and developing an attention-detection system efficient enough to detect attention in real time and thus usable in real-life e-learning contexts.
The remainder of this paper is arranged as follows: Section 2 introduces the main concepts of this study with background details; Section 3 presents the related works; Section 4 describes the research methodology, i.e., the experiments with EEG data; Section 5 discusses the evaluation results; and finally, Section 6 presents the conclusions.

2. Background

To clarify our research problem and BCI’s use, the following subsections present online learning, passive BCI, and the neural correlation of attention in EEG signals, respectively.

2.1. Online Learning

Distance learning, or online learning, refers to a teacher–learner separation by space, time, or both and the use of media and technology to enable communication and exchanges during the learning process despite this separation [1].
In response to the COVID-19 outbreak, governments ordered schools to close and switch to online learning. Across the world, students have been affected by partial or full school closures, during which teacher–student communication has been conducted through web-based learning platforms. Factors such as a lack of student motivation, insufficient teacher training, and social isolation have affected the engagement and productivity of students and teachers. Distance learning requires a degree of learner self-direction and study skills, as well as new teaching and guidance strategies.
There are different perspectives on the issues associated with distance learning. Technology-related issues include the capabilities and capacities of digital learning platforms, internet connectivity, and digital devices. Student-related issues include a lack of motivation, difficulty understanding materials, a lack of communication among students and with their teachers, and social issues arising from the absence of regular school activities. Parent-related issues include difficulty monitoring children, which is much more challenging for parents with multiple children or working parents. Teacher-related issues include a lack of access to traditional teaching materials and a lack of the technical skills and expertise needed to access and teach using online platforms [1].

2.2. Passive Brain–Computer Interface

Affective computing is an interdisciplinary field concerned with the study and development of systems that can detect, interpret, recognize, and simulate human affects. Affective BCI (aBCI) is an emerging area of research within affective computing [3].
Passive BCI uses BCI technology as an implicit communication channel between a user and a device while the user makes no attempt to control their brain activity; its outputs are derived from spontaneous brain activity arising without any voluntary control. Levels of meditation, engagement, frustration, excitement, and workload are examples of cognitive feedback in passive BCI. Emotion detection based on BCI is considered a passive/involuntary control modality [3,4].
In BCI systems, in general, the reliability and accuracy of sensory interfacing and translation algorithms are key challenges that limit the usage of these technologies. Moreover, engineering challenges have concentrated on processing a low signal-to-noise ratio embedded in non-invasive electroencephalography (EEG) signals. Computational challenges include the optimal placement of a reduced number of electrodes and the robustness of BCI algorithms with respect to a smaller set of recording sites [4].

2.3. Neural Correlates of Attention in EEG Signals

Attention involves two areas of the brain, as shown in Figure 1. The first is the prefrontal cortex, located behind the forehead and spanning the left and right sides of the brain, which handles willful concentration; as part of the motivational system, it helps a person focus attention on a goal. The second is the parietal cortex, located behind the ear, which responds to sudden events that require action.
Brain waves are patterns of electrical activity occurring in the brain. Five brain waves are widely identified, and the main frequency bands of human EEG are delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), and gamma (above 30 Hz). Neuroscientists studying these brain waves have reported that EEG frequencies can provide insight into an individual’s moods and emotional states, such as anxiety, surprise, happiness, and frustration. Moreover, EEG measures are sensitive to cognitive states, including task engagement/attention, working memory, and the perception of user/machine errors [5].
Many BCI applications were proposed in the literature for the purpose of attention monitoring and adaptation [3]. One important application of BCIs is in monitoring human alertness while performing critical and security tasks, such as driving and surveillance. For example, researchers in [6] reported that car accidents can be avoided by monitoring transitions of the brain state for changes from an alert or awake state to a sleeping or drowsy state.
Numerous studies have examined the neural correlates of attention in humans. One of the earliest was by Xiaowei et al. [2], who contributed to improving the quality of distance education by creating a website with various learning content to simulate a distance-education environment. Their approach was based on the observation that the alpha-wave amplitude becomes smaller when a person concentrates on learning materials; as a result, they discovered a relationship between changes in learners’ emotions and the material being taught. Other studies considered the delta pattern because it is related to deep sleep, which indicates a low level of awareness in subjects [7,8].
Fahimi et al. [9] investigated the neural index of attention. They extracted information from a single frontal channel EEG recorded during an attention-demanding task as a neural marker and explored how such EEG features correlate with the elderly’s response time as a behavioral marker of attention.
Wang et al. [10] computed the power-ratio features β/θ, β/α, and β/(α + θ) using Welch’s power spectral density estimation method (Welch, 1967). These features were seen as classical EEG features for attention recognition in the study.

3. Related Work

Various studies investigated the possibility of utilizing EEG signals to detect individual attention [9,11]. To understand our research problem and the use of EEG-based attention detection in an e-learning context, we conducted a comprehensive literature review to compare our study with similar previous studies. The review explored published research to provide insights to practitioners and researchers for attention detection in education and, more specifically, in the distance-learning experience.
Table 1 summarizes related studies on user-attention detection in learning and cognitive memory functions using EEG signals. For each study, we examined the computational methods for feature extraction and the classification approaches used to detect different labels of mental attention states.
Namita and Ajitkumar [12] aimed to enhance traditional e-learning by proposing a system that predicts a learning video based on emotion. They used six classification methods and measured their accuracy: neural network (71.6%), linear discriminant analysis (LDA; 75.25%), least-squares SVM (74.97%), Naive Bayes (75.02%), k-nearest neighbors (KNN; 88.46%), and random forest (RF), which had the highest accuracy at 97.03%.
Myrden and Chau [13] aimed to detect three mental states, namely fatigue, frustration, and attention. In their experiment, 11 participants completed a series of challenging mental tasks. PSD features were extracted, and different classification methods were applied, such as LDA, SVM, and Gaussian Naive Bayes. The attention classification achieved an accuracy of 84.8 ± 7.4%. Their findings showed that the alpha band expressed in the larynx predicts changes in attention levels.
Alirezaei et al. [14] looked into how attention can be detected by using EEG signals. Data were gathered from 12 participants. Features were extracted using Fisher criteria and forward feature selection. An SVM classifier achieved a classification accuracy of 92.8%.
Nuamah and Seong [15] proposed an attention-monitoring system to be used in educational environments. In their experiment, EEG signals were recorded from 12 participants using eight forehead channels. PSD features were extracted, and four classifiers, namely KNN (k = 9), c-SVM, LDA, and Bayesian, were used to determine attention states; the c-SVM and LDA classifiers were more accurate than the other methods, with accuracies of 92.8% and 92.4%, respectively.
Aci et al. [11] proposed a passive EEG-based BCI for monitoring a set of the mental states of students. They aimed to detect three mental attention states: passive attention, disengagement, and drowsiness. They collected an EEG signals dataset comprising a total of 25 h from five participants engaged in a low-intensity control task. The EEG data were acquired using an Emotiv headset. In the feature extraction stage, they used a short-time Fourier transform and the Blackman window to calculate the power spectra of the EEG signals in each EEG channel. Different classification methods were applied, and the SVM-based method achieved the best performance (accuracy = 96.70% (best) and 91.72% (avg.)) compared with an adaptive neuro-fuzzy system and KNN.
Ludi et al. [16] proposed an EEG emotion-recognition system to distinguish between the positive and negative emotional states of a learner. They used the preprocessed version of the SEED dataset and applied the wavelet transform to decompose the EEG signals, extract the frequency bands, and calculate the sample entropy. They proposed a recurrent neural network with long short-term memory to classify the emotions in EEG signals. Their classification method achieved a final accuracy of 90.12%.
Nandi et al. [17] proposed a method for a real-time emotion classification in e-learning contexts using EEG signals. They developed a logistic-regression and stochastic-gradient descent-based method. They used the DEAP dataset to validate the proposed method’s performance. They reported that their proposed method outperforms other offline and online approaches.
Djamal et al. [18] developed attention-recognition software to be used to evaluate the student learning process and employee development. They extracted brain frequency bands using the wavelet method. Then, the cognitive states were classified into only two classes, attentive and non-attentive, using an SVM binary classifier. The best-obtained accuracy was 83%.

4. Methodology

This section outlines our methodology and provides implementation details related to the proposed system for EEG-based attention recognition. We begin with an explanation of the used dataset. We explain the signal-preprocessing, feature-extraction, and attention-computation phases. Finally, we discuss classification algorithms for attention detection.

4.1. Dataset Description

We used a publicly available EEG dataset [11] for the attention-detection experiments (Table 2). The original dataset includes 25 h of EEG recordings gathered from 5 subjects involved in a low-intensity control task. We preprocessed the dataset and selected the first 5 min of each trial. Our study aimed to detect a positive state of attention; therefore, we used only the focused-state trials from the original dataset as the positive class and marked the other trials (i.e., the unfocused and drowsy states) as negative. The focused state corresponded to the passive supervision of a simulated train while preserving concentration; steady concentration was required throughout these trials. A sketch of this labeling step is shown below.
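The following is a minimal sketch, not the authors’ code, of how the binary labels described above could be constructed, assuming each trial is available as a NumPy array of shape (n_channels, n_samples) together with its state label; the `trials` structure, the sampling-rate constant, and the function name are illustrative assumptions.

```python
import numpy as np

SFREQ = 128          # sampling rate after resampling (Hz); see Section 4.2
WINDOW = 5 * 60      # keep only the first 5 minutes of each trial (in seconds)

def build_binary_dataset(trials):
    """trials: list of (eeg, state) pairs, where eeg has shape (n_channels, n_samples)
    and state is one of 'focused', 'unfocused', or 'drowsy'."""
    X, y = [], []
    for eeg, state in trials:
        segment = eeg[:, : SFREQ * WINDOW]        # first 5 minutes only
        label = 1 if state == "focused" else 0    # focused = positive attention class
        X.append(segment)
        y.append(label)
    return np.stack(X), np.array(y)
```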

4.2. Data Preprocessing

We first averaged the EEG signals and then resampled them to 128 Hz per channel. Based on prior knowledge of EEG, the signal frequencies produced by the brain during attention states are mainly concentrated below 40 Hz, so the useful frequency band in EEG data lies between 2 and 40 Hz. We therefore applied a band-pass filter from 2 to 40 Hz and subsequently used ICA filters to eliminate artifacts. Electrodes were selected based on the mapping of attention to brain areas (the prefrontal and parietal cortices). Figure 2 shows the selected electrodes in our study, i.e., F7, F3, P7, O1, O2, P8, and AF4. A sketch of this pipeline is shown below.
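The following is a minimal sketch of the preprocessing pipeline described above using the MNE-Python library, assuming each trial has been wrapped in an mne.io.Raw object; the ICA settings and the helper name are illustrative assumptions rather than the authors’ exact configuration.

```python
import mne

CHANNELS = ["F7", "F3", "P7", "O1", "O2", "P8", "AF4"]  # attention-related electrodes

def preprocess(raw: mne.io.BaseRaw) -> mne.io.BaseRaw:
    raw = raw.copy()
    raw.set_eeg_reference("average")        # average the EEG signals
    raw.resample(128)                       # resample to 128 Hz per channel
    raw.filter(l_freq=2.0, h_freq=40.0)     # band-pass filter, 2–40 Hz
    ica = mne.preprocessing.ICA(random_state=42)
    ica.fit(raw)                            # decompose into independent components
    # Components judged to be artifacts (eye blinks, muscle) would be added to
    # ica.exclude here before applying the ICA back to the recording.
    raw = ica.apply(raw)
    raw.pick(CHANNELS)                      # keep only the selected electrodes
    return raw
```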

4.3. Feature Extraction

Feature extraction plays a crucial role in building EEG-based BCI applications. PSD indicates the power of a signal as a function of frequency. It is one of the most common feature-extraction approaches used in EEG-based research and is based on frequency-domain analysis. The PSD approach transforms data from the time domain to the frequency domain; this conversion is based on the fast Fourier transform (FFT), which computes the discrete Fourier transform and its inverse. We extracted EEG frequency bands using a PSD approach known as Welch’s method (Figure 3). Algorithm 1 presents the pseudo-code for feature extraction with PSD, and a runnable sketch follows the algorithm. Then, we stacked the features computed by PSD into a single array over the raw EEG channels.
Algorithm 1: Feature extraction with PSD.
  • Input
  •         X                    Filtered EEG signals
  • Output
  •         bands            Frequency bands
  •         avg_power   Average power of signals
  •         
  • power ← tfr_array_morlet (X)                               ▹ Calculate frequency transform (FFT)
  •         
  • avg_power ← mean_over_time (power)
  • theta ← avg_power (4–8 Hz)
  • alpha ← avg_power (8–13 Hz)
  • beta ← avg_power (13–30 Hz)
  • gamma ← avg_power (30–40 Hz)
  •         
  • bands ← Vector (theta, alpha, beta, gamma)
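Below is a minimal sketch of Algorithm 1, assuming the band powers are computed with Welch’s method (scipy.signal.welch) rather than the Morlet transform named in the pseudo-code; the band edges follow Algorithm 1, while the function name and window length are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 40)}

def psd_band_features(x, sfreq=128):
    """x: filtered EEG of shape (n_channels, n_samples).
    Returns a 1-D vector of average band powers, stacked over channels."""
    freqs, psd = welch(x, fs=sfreq, nperseg=2 * sfreq, axis=-1)
    features = []
    for low, high in BANDS.values():
        mask = (freqs >= low) & (freqs < high)
        features.append(psd[:, mask].mean(axis=-1))   # average power per band and channel
    return np.concatenate(features)
```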

4.4. Attention Index

This research computed six attention indexes for attention detection: the alpha–beta–theta ratio (ABTR), alpha–gamma ratio (AGR), theta–beta ratio (TBR), beta–theta ratio (BTR), beta–alpha ratio (BAR), and alpha–beta ratio (ABR).
The alpha–beta–theta ratio (ABTR), reported by Szafir and Mutlu [5] and Wang et al. [10], was computed from the powers of the theta, alpha, and beta bands so that attention could be detected using Equation (1).
ABTR = β/(α + θ)  (1)
The alpha–gamma ratio (2) and theta–beta ratio (3), reported by Fahimi et al. [9], are correlated with the elderly’s response time as a behavioral marker of attention.
AGR = α/γ  (2)
TBR = θ/β  (3)
Wang et al. [10] computed the power-ratio features beta–theta ratio (BTR = β/θ) and beta–alpha ratio (BAR = β/α) using Welch’s PSD estimation method (Welch, 1967). This study saw these features as classical EEG features for attention recognition.
The alpha–beta ratio (ABR), reported by Liu et al. [19], was used for calculating attention, as shown in Equation (4):
ABR = E_α/E_β  (4)
where E_α = ∑_{freq=8}^{13} P_freq, E_β = ∑_{freq=14}^{30} P_freq, and P_freq is the energy value at frequency freq.
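The following is a small sketch, not the authors’ code, of how these six indexes could be computed from the band powers produced in the feature-extraction step; the function name and the epsilon guard against division by zero are illustrative assumptions.

```python
def attention_indexes(theta, alpha, beta, gamma, eps=1e-12):
    """Compute the six attention indexes from per-channel (or scalar) band powers."""
    return {
        "ABTR": beta / (alpha + theta + eps),   # Equation (1)
        "AGR":  alpha / (gamma + eps),          # Equation (2)
        "TBR":  theta / (beta + eps),           # Equation (3)
        "BTR":  beta / (theta + eps),           # beta–theta ratio
        "BAR":  beta / (alpha + eps),           # beta–alpha ratio
        "ABR":  alpha / (beta + eps),           # Equation (4), E_alpha / E_beta
    }
```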

4.5. Classification

EEG-based attention classification normally involves the spectral conversion of waveforms into features that machine-learning algorithms can exploit. These algorithms are trained on labeled data to determine whether attention is present. Attention classification based on EEG signals may involve binary labels (e.g., focused/unfocused or most focused/least focused) or multiple ordinal labels (e.g., 9-point or 5-point ranks). This section compares the performance of classification algorithms, including SVM, RF, and KNN. In a literature review of the classifiers used in BCI systems, the authors of [20] categorized classifiers into the following types: linear, nonlinear Bayesian, neural network, nearest neighbors, and ensemble classifiers. Linear classifiers, such as LDA and SVM, distinguish classes using linear functions and are the most commonly applied algorithms in EEG classification. The following paragraphs provide a brief overview of the classification algorithms used in this research.

4.5.1. K-Nearest Neighbors (KNN)

Due to KNN’s sensitivity to the curse of dimensionality, it is rarely used in BCI applications [20]. It can, however, be used efficiently with a small number of features because of its relatively high convergence speed and simplicity. There are several disadvantages associated with KNN classifiers. The main one is the large memory requirement needed to store the entire training set; if the training set is large, the response time will also be large, leading to poor runtime performance. Despite this, the KNN algorithm generally performs effectively in classification problems. KNN is also highly sensitive to irrelevant and redundant attributes, which influence classification accuracy; hence, a dataset should be carefully preprocessed using a suitable attribute-selection technique. Another disadvantage is the selection of k: if the value of k is too small, the result can be sensitive to noise, whereas if it is too large, the neighborhood may include too many points from other classes [21].

4.5.2. Random Forest (RF)

RF is a classification and regression technique that uses ensemble learning. It consists of numerous decision trees, with the final class being the mode of the classes output by the individual trees. RF adds an additional layer of randomization to bootstrap aggregation and integrates the results of all randomized trees into a single classifier that decides a class by majority vote. This reduces error rates and improves resilience against overfitting while maintaining computational efficiency [4,22].

4.5.3. Support Vector Machine (SVM)

SVM works by determining the optimal N-dimensional hyperplane that separates the training examples into classes with the largest margin and the fewest classification mistakes. Maximizing the margin can improve generalization, but this benefit is lost if the resulting model cannot be extended to another dataset (i.e., the overfitting problem) [23,24]. SVM has been successfully utilized in several BCI systems with a reasonable level of generalization, allowing it to handle overtraining and high-dimensional data efficiently [20]. A sketch of the three classifiers used in this work is given below.
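As a minimal sketch, the three classifiers compared in this work could be instantiated with scikit-learn as follows; the hyperparameter values (number of trees, k, SVM kernel and C) are illustrative assumptions, since this section does not fix them.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Feature scaling is included for the distance- and margin-based classifiers.
classifiers = {
    "RF":  RandomForestClassifier(n_estimators=100, random_state=42),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
```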

5. Results

Attention-detection experiments were performed on the benchmark dataset as explained in Section 4.1. Previous research implemented a subject-specific paradigm, leading to a poor generalization of the algorithms because one classifier was usually trained for each subject independently [25]. Such classifiers each achieve a good performance, but it is difficult to train a classifier for each person in real life. Therefore, we adopted a common-subject paradigm to generalize the methodology and train the same classifier for all the subjects.
To evaluate the performance of the classification algorithms, we used two cross-validation methods: holdout validation, i.e., splitting the data into a 70% training set and a 30% testing set, and k-fold cross-validation with k set to five. Table 3 and Table 4 present the results for RF, KNN, and SVM under each validation method. RF achieved the best accuracy, at 96% in holdout validation and 87% in five-fold cross-validation. A sketch of this evaluation protocol is shown below.
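The following is a minimal sketch of the two validation protocols described above using scikit-learn, where X and y are the pooled feature matrix and binary labels from all subjects (the common-subject paradigm); the function name and random seed are illustrative assumptions.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import cross_validate, train_test_split

def evaluate(clf, X, y):
    # Holdout validation: 70% training, 30% testing
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=42, stratify=y)
    clf.fit(X_tr, y_tr)
    y_pred = clf.predict(X_te)
    holdout = {
        "accuracy": accuracy_score(y_te, y_pred),
        "recall": recall_score(y_te, y_pred),
        "precision": precision_score(y_te, y_pred),
        "f1": f1_score(y_te, y_pred),
    }
    # 5-fold cross-validation on the full dataset
    scores = cross_validate(clf, X, y, cv=5,
                            scoring=["accuracy", "recall", "precision", "f1"])
    kfold = {m: scores[f"test_{m}"].mean()
             for m in ["accuracy", "recall", "precision", "f1"]}
    return holdout, kfold
```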

6. Discussion

To compare our model with the traditional approaches used in Aci et al. [11], we used the same holdout cross-validation, in which the data were split randomly into training and testing sets. Our model achieved a better accuracy (96%) than traditional algorithms such as KNN and SVM. Table 5 presents a comparison with related work on recognizing different labels of attentive states based on classification accuracy.
Liu et al. [19] aimed to enhance the class-learning process by proposing a system that objectively assesses students’ attentiveness during instruction. They used a fast Fourier transform (FFT) as a frequency-domain analysis to extract PSD features and calculate energy values, which were summed according to the waveband dispersion of the EEG data. Moreover, the alpha–beta activation ratio was calculated and used as the feature for assessing the level of mental attentiveness. In the classification stage, an SVM classifier was used to identify whether students were attentive, achieving a classification accuracy of 76%.
Ke et al. [26] proposed an attention-detection system to detect three states, namely attention, no attention, and rest. They investigated the effects of different features of linear parameters, such as power spectrum, and nonlinear parameters, such as approximate entropy, sample entropy, and multiscale entropy. The extracted features were fed to an SVM classifier. The results showed that sample entropy outperformed the power spectrum in the two experiments with accuracies of 76.19% and 85.24%, respectively.
Peng et al. [27] proposed an attentiveness-detection system that helps identify a person’s mental state and focus when performing specified activities. They used the Hilbert–Huang transform (HHT) to analyze single-channel EEG data recorded from the frontal region at various degrees of attention. Band powers and spectral-entropy features were fed into an SVM classifier, and the results showed that attentive and relaxed states could be discriminated with an average classification accuracy of 84.80%.
Suhail [28] proposed a neurofeedback system that aimed to assess a cognitive state based on EEG signals. They experimented on 20 subjects during a learning task to detect two states: attentive and idle. In the feature-extraction phase, they used time-domain techniques, such as Hjorth parameters, together with wavelet-based features, spectral entropy, the attention index AITABG, and a combination of EEG band ratios. They used the Fisher ratio and correlation analysis to select the most discriminating features in a feature-selection phase. In the classification phase, they used three classifiers: SVM, KNN, and LDA. The results showed that the SVM classifier achieved the highest recognition performance, with an accuracy of 92.9%.
We compared our proposed model and other existing studies in the same attention-detection experiment. From Table 5, we can conclude that our proposed RF model effectively distinguishes the attention state of a subject with a high classification accuracy of 96%.

7. Conclusions

The self-assessment of attention by teachers and students can enable them to make necessary adjustments during classes. We propose an attention-detection system aimed at monitoring and analyzing the attention levels of online learners in real time. Compared with traditional approaches, such as KNN and SVM, we achieved a better result, with an accuracy of 96%, using an RF model.
Future directions for this research include using a large EEG dataset to fine-tune the model to improve its performance. More alternative ML approaches, such as deep learning, could be explored to improve the detection results.

Author Contributions

M.A. conceived, designed, and performed the experiment; analyzed and interpreted the data; and drafted the manuscript. A.A.-N. co-supervised the analysis, reviewed and edited the manuscript, and contributed to the discussion. All authors have read and agreed to the published version of the manuscript.

Funding

This research project was supported by a grant from the “Research Center of College of Computer and Information Sciences”, Deanship of Scientific Research, King Saud University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

A publicly available dataset was analyzed in this study. This data can be found here: https://www.kaggle.com/code/mashaelaldayel/feature-extraction-and-comparison-of-eeg-bcn (accessed on 20 March 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. UNESCO. Distance learning strategies in response to COVID-19 school closures. In UNESCO COVID-19 Education Response Education Sector Issue Notes; UNESCO: Paris, France, 2020; Volume 2, pp. 1–8. [Google Scholar]
  2. Li, X.; Hu, B.; Zhu, T.; Yan, J.; Zheng, F. Towards affective learning with an EEG feedback approach. In Proceedings of the First ACM International Workshop on Multimedia Technologies for Distance Learning, Beijing, China, 23 October 2009; pp. 33–38. [Google Scholar]
  3. Al-Nafjan, A.; Hosny, M.; Al-Ohali, Y.; Al-Wabil, A. Review and Classification of Emotion Recognition Based on EEG Brain-Computer Interface System Research: A Systematic Review. Appl. Sci. 2017, 7, 1239. [Google Scholar] [CrossRef] [Green Version]
  4. Al-Nafjan, A.; Hosny, M.; Al-Wabil, A.; Al-Ohali, Y. Classification of Human Emotions from Electroencephalogram (EEG) Signal using Deep Neural Network. Int. J. Adv. Comput. Sci. Appl. 2017, 8, 419–425. [Google Scholar] [CrossRef]
  5. Szafir, D.; Mutlu, B. Pay attention! Designing adaptive agents that monitor and improve user engagement. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 11–20. [Google Scholar]
  6. Tiwari, R.K.; Giripunje, S. Design approach for EEG-based human computer interaction driver monitoring system. Int. J. Latest Trends Eng. Technol. IJLTET 2014, 3, 250–255. [Google Scholar]
  7. Zhou, S.; Gao, T. Brain activity recognition method based on attention-based rnn mode. Appl. Sci. 2021, 11, 425. [Google Scholar] [CrossRef]
  8. Harmony, T.; Fernández, T.; Silva, J.; Bernal, J.; Díaz-Comas, L.; Reyes, A.; Marosi, E.; Rodríguez, M.; Rodríguez, M. EEG delta activity: An indicator of attention to internal processing during performance of mental tasks. Int. J. Psychophysiol. 1996, 24, 161–171. [Google Scholar] [CrossRef]
  9. Fahimi, F.; Goh, W.B.; Lee, T.S.; Guan, C. Neural Indexes of Attention Extracted from EEG Correlate with Elderly Reaction Time in response to an Attentional Task. In Proceedings of the 3rd International Conference on Crowd Science and Engineering, Singapore, 28–31 July 2018; pp. 1–6. [Google Scholar]
  10. Wan, W.; Cui, X.; Gao, Z.; Gu, Z. Frontal EEG-based multi-level attention states recognition using dynamical complexity and extreme gradient boosting. Front. Hum. Neurosci. 2021, 15, 673955. [Google Scholar] [CrossRef] [PubMed]
  11. Aci, Ç.I.; Kaya, M.; Mishchenko, Y. Distinguishing mental attention states of humans via an EEG-based passive BCI using machine learning methods. Expert Syst. Appl. 2019, 134, 153–166. [Google Scholar] [CrossRef]
  12. Tambe, N.R.; Khachane, A. Mood based E-learning using EEG. In Proceedings of the 2016 International Conference on Computing Communication Control and automation (ICCUBEA), Pune, India, 12–13 August 2016; pp. 1–4. [Google Scholar]
  13. Myrden, A.; Chau, T. A Passive EEG-BCI for Single-Trial Detection of Changes in Mental State. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 345–356. [Google Scholar] [CrossRef] [PubMed]
  14. Alirezaei, M.; Hajipour Sardouie, S. Detection of Human Attention Using EEG Signals. In Proceedings of the 2017 24th Iranian Conference on Biomedical Engineering and 2017 2nd International Iranian Conference on Biomedical Engineering, ICBME, Tehran, Iran, 30 November–1 December 2017. [Google Scholar] [CrossRef]
  15. Nuamah, J.K.; Seong, Y. Support vector machine (SVM) classification of cognitive tasks based on electroencephalography (EEG) engagement index. Brain-Comput. Interfaces 2018, 5, 1–12. [Google Scholar] [CrossRef]
  16. Bai, L.; Guo, J.; Xu, T.; Yang, M. Emotional Monitoring of Learners Based on EEG Signal Recognition. Procedia Comput. Sci. 2020, 174, 364–368. [Google Scholar] [CrossRef]
  17. Nandi, A.; Xhafa, F.; Subirats, L.; Fort, S. Real-time emotion classification using eeg data stream in e-learning contexts. Sensors 2021, 21, 1589. [Google Scholar] [CrossRef] [PubMed]
  18. Djamal, E.C.; Pangestu, D.P.; Dewi, D.A. EEG-based recognition of attention state using wavelet and support vector machine. In Proceedings of the International Seminar on Intelligent Technology and Its Applications (ISITIA), Lombok, Indonesia, 28–30 July 2016; pp. 139–144. [Google Scholar] [CrossRef]
  19. Liu, N.H.; Chiang, C.Y.; Chu, H.C. Recognizing the degree of human attention using EEG signals from mobile sensors. Sensors 2013, 13, 10273–10286. [Google Scholar] [CrossRef] [PubMed]
  20. Ramadan, R.A.; Refat, S.; Elshahed, M.A.; Ali, R.A. Brain-Computer Interfaces; Intelligent Systems Reference Library; Springer International Publishing: Cham, Switzerland, 2015; Volume 74, pp. 31–50. [Google Scholar] [CrossRef] [Green Version]
  21. Aldayel, M.S. K-Nearest Neighbor classification for glass identification problem. In Proceedings of the 2012 International Conference on Computer Systems and Industrial Informatics, ICCSII 2012, Sharjah, United Arab Emirates, 18–20 December 2012. [Google Scholar] [CrossRef]
  22. Teo, J.; Chew, L.H.; Chia, J.T.; Mountstephens, J. Classification of Affective States via EEG and Deep Learning. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 132–142. [Google Scholar] [CrossRef] [Green Version]
  23. Vega-Escobar, L.; Castro-Ospina, A.; Duque-Munoz, L. DWT-based feature extraction for motor imagery classification. In Proceedings of the 6th Latin-American Conference on Networked and Electronic Media (LACNEM 2015), Medellin, Colombia, 23–25 September 2015; p. 3. [Google Scholar] [CrossRef]
  24. Moon, J.; Kim, Y.; Lee, H.; Bae, C.; Yoon, W.C. Extraction of user preference for video stimuli using eeg-based user responses. ETRI J. 2013, 35, 1105–1114. [Google Scholar] [CrossRef]
  25. Zhang, D.; Cao, D.; Chen, H. Deep Learning Decoding of Mental State in Non-invasive Brain Computer Interface. In Proceedings of the ACM International Conference Proceeding Series, Sanya, China, 19–21 December 2019; pp. 15–19. [Google Scholar] [CrossRef] [Green Version]
  26. Ke, Y.; Chen, L.; Fu, L.; Jia, Y.; Li, P.; Zhao, X.; Qi, H.; Zhou, P.; Zhang, L.; Wan, B.; et al. Visual attention recognition based on nonlinear dynamical parameters of EEG. Biomed. Mater. Eng. 2014, 24, 349–355. [Google Scholar] [CrossRef] [PubMed]
  27. Park, S.; Ha, J.; Kim, L.; Peng, C.J.; Chen, Y.C.; Chen, C.C.; Chen, S.J.; Cagneau, B.; Chassagne, L. An EEG-Based Attentiveness Recognition System Using Hilbert–Huang Transform and Support Vector Machine. J. Med. Biol. Eng. 2020, 40, 230–238. [Google Scholar] [CrossRef] [Green Version]
  28. Suhail, T.A.; Indiradevi, K.P.; Suhara, E.M.; Suresh, P.A.; Anitha, A. Electroencephalography based detection of cognitive state during learning tasks: An extensive approach. Cogn. Brain Behav. Interdiscip. J. 2021, 25, 157–178. [Google Scholar] [CrossRef]
Figure 1. Neural correlates of attention in the brain.
Figure 2. International EEG 10–20 system with 14 channels.
Figure 3. PSD feature extraction.
Table 1. Related work on EEG-based attention detection.

| Ref. | Channels | Feature Extraction | Classifier | Mental States (Class Labels) |
| Namita and Ajitkumar, 2016 [12] | – | PSD features | SVM, LDA, Naive Bayes, KNN, and neural network | Two states (low and high concentration) |
| Myrden and Chau, 2017 [13] | Fz, F1, F2, F3, F4, Cz, C1, C2, C3, C4, CPz, Pz, POz, P1, and P2 | PSD features | LDA, SVM, and Gaussian Naive Bayes | Three states (fatigue, frustration, and attention) |
| Alirezaei and Sardouie, 2018 [14] | AF3, F3, FC5, FC6, F4, AF4, F7, and F8 | Power spectrum and entropy | SVM and LDA | Two states (attentive and inattentive) |
| Nuamah and Seong, 2018 [15] | C3, C4, P3, P4, O1, and O2 | PSD features | SVM | Cognitive engagement during five tasks |
| Aci et al., 2019 [11] | F3, F4, Fz, C3, C4, Cz, and Pz | PSD features | SVM | Three states (focused, unfocused, and drowsy) |
| Ludi et al., 2016 [16] | AF3, AF4, F3, F4, F7, F8, T7, and T8 | Sample entropy | RNN and long short-term memory | Two states (positive and negative) |
| Nandi, 2021 [17] | 48 channels | Wavelet-based features | Logistic regression-based classifier | Emotion model (low/high valence and arousal) |
| Djamal et al., 2016 [18] | O1–O2, C3–C4, and F3–F4 | Wavelet-based features | SVM | Two states (attentive and inattentive) |
Table 2. Mental Attention dataset description.

| Attention model | Binary (focused–unfocused) |
| Stimuli | Visual-based stimuli (10 min per mental state) |
| Protocol | Virtual, passive control task, i.e., controlling a computer-simulated train over a main shapeless route for 35–55 min |
| Participants | Five subjects (five trials for each subject) |
| Time | EEG recording of 25 h; each trial took 35–55 min |
| EEG device | Twelve-channel Emotiv EPOC |
| Experimental method | Each user controlled and engaged in the focused control of a simulated train during the first 10 min of each trial. During the remaining time, the user stopped following the simulator and became unfocused with respect to changes on their computer screen |
Table 3. Results for attention recognition using k-fold cross-validation (k = 5) with the RF, SVM, and KNN classifiers.

| Metric | RF | SVM | KNN |
| Accuracy | 87% | 69% | 83% |
| Recall | 14% | 14% | 62% |
| Precision | 83% | 23% | 82% |
| F-measure | 77% | 17% | 68% |
Table 4. Results for attention recognition using holdout validation with the RF, SVM, and KNN classifiers.

| Metric | RF | SVM | KNN |
| Accuracy | 96% | 78% | 74% |
| Recall | 96% | 78% | 74% |
| Precision | 96% | 77% | 76% |
| F-measure | 96% | 78% | 75% |
Table 5. Comparison of classification methods’ accuracies in detecting attentive states.

| Ref. | Classifier | Average Accuracy |
| Liu et al., 2013 [19] | SVM | 76.82% |
| Ke et al., 2014 [26] | SVM | 85.24% |
| Peng et al., 2020 [27] | SVM | 84.80% |
| Suhail, 2021 [28] | SVM | 92.98% |
| Our proposed system | RF | 96% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

