Deep Learning for Electroencephalography (EEG) Data Analysis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 May 2023) | Viewed by 11951

Special Issue Editors


Guest Editor
Department of Electrical Engineering and Information Technology, University of Naples Federico II, 80125 Naples, Italy
Interests: machine learning; deep learning; neural networks; XAI; BCI

Guest Editor
Department of Electrical Engineering and Information Technology, University of Naples Federico II, 80125 Naples, Italy
Interests: machine learning; deep learning; computer vision; XAI; BCI

Guest Editor
Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR), San Martino della Battaglia 44, 00185 Roma, Italy
Interests: machine learning; deep learning; active inference; BCI

Special Issue Information

Dear Colleagues,

Brain–computer interfaces (BCIs) enable human beings to communicate with electronic systems through a connection to their neural systems, typically established via electroencephalography (EEG). They have essential applications in the biomedical domain: for locked-in patients, for example, a BCI can be a way to interact with the external world.

BCIs are applied in many fields beyond the medical one, including:

  1. Neuromarketing, for instance to evaluate the impact of an advertising campaign
  2. Education, where neurofeedback can be used to improve performance
  3. Security, where EEG could be used for biometric authentication/recognition
  4. Games and entertainment

Deep learning methods have been successfully applied in several research fields, such as computer vision and natural language processing, and researchers are trying to replicate that success in the analysis and interpretation of EEG signals. However, some difficulties need to be overcome. First, unlike in those fields, the input signals (EEG) vary considerably across subjects and sessions. Moreover, the available datasets are not as large as those in, for instance, the computer vision domain, which makes the use of deep learning architectures less straightforward.

This Special Issue aims to provide an assorted and complementary collection of contributions showing new advancements and applications of deep learning methods in analyzing EEG signals. The ultimate objective is to promote research and advancement by publishing high-quality research articles and reviews in this rapidly growing interdisciplinary field.

Dr. Roberto Prevete
Dr. Francesco Isgrò
Dr. Francesco Donnarumma
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • EEG data analysis
  • domain generalization/adaptation
  • feature extraction and selection
  • transfer learning
  • deep learning
  • motor imagery
  • emotion recognition
  • robotic systems

Published Papers (7 papers)


Editorial


2 pages, 160 KiB  
Editorial
Special Issue on Deep Learning for Electroencephalography (EEG) Data Analysis
by Roberto Prevete, Francesco Isgrò and Francesco Donnarumma
Appl. Sci. 2023, 13(20), 11475; https://0-doi-org.brum.beds.ac.uk/10.3390/app132011475 - 19 Oct 2023
Viewed by 623
Abstract
Brain–computer interfaces (BCI) have emerged as a groundbreaking and transformative technology enabling communication between humans and computers through neural systems, primarily electroencephalography (EEG) [...]
(This article belongs to the Special Issue Deep Learning for Electroencephalography (EEG) Data Analysis)

Research


22 pages, 744 KiB  
Article
Distributional Representation of Cyclic Alternating Patterns for A-Phase Classification in Sleep EEG
by Diana Laura Vergara-Sánchez, Hiram Calvo and Marco A. Moreno-Armendáriz
Appl. Sci. 2023, 13(18), 10299; https://0-doi-org.brum.beds.ac.uk/10.3390/app131810299 - 14 Sep 2023
Cited by 1 | Viewed by 727
Abstract
This article describes a detailed methodology for the A-phase classification of the cyclic alternating patterns (CAPs) present in sleep electroencephalography (EEG). CAPs are a valuable EEG marker of sleep instability and represent an important pattern with which to analyze additional characteristics of sleep processes, and A-phase manifestations have been linked to some specific conditions. CAP phase detection and classification are not commonly carried out routinely due to the time and attention this problem requires (and if present, CAP labels are user-dependent, visually evaluated, and hand-made); thus, an automatic tool to solve the CAP classification problem is presented. The classification experiments were carried out using a distributional representation of the EEG data obtained from the CAP Sleep Database. For this purpose, data symbolization was performed using the one-dimensional symbolic aggregate approximation (1d-SAX), followed by the vectorization of symbolic data with a trained Doc2Vec model and a final classification with ten classic machine learning models for two separate validation strategies. The best results were obtained using a support vector classifier with a radial basis kernel. For hold-out validation, the best F1 score was 0.7651; for stratified 10-fold cross-validation, the best F1 score was 0.7611 ± 0.0133. This illustrates that the proposed methodology is suitable for CAP classification.
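As a rough illustration of the symbolization step described above, here is a minimal plain-SAX sketch in NumPy. This is not the authors' implementation: the paper uses 1d-SAX, which additionally encodes each segment's slope, and the segment count, alphabet size, and breakpoints below are illustrative assumptions.

```python
import numpy as np

def sax_symbolize(signal, n_segments=8, alphabet="abcd"):
    """Plain SAX: z-normalize, piecewise-aggregate, map segments to symbols.

    Simplified stand-in for the 1d-SAX step (1d-SAX also encodes the
    slope of each segment, which is omitted here).
    """
    x = (signal - signal.mean()) / (signal.std() + 1e-12)
    usable = len(x) - len(x) % n_segments            # trim so segments divide evenly
    paa = x[:usable].reshape(n_segments, -1).mean(axis=1)
    breakpoints = np.array([-0.6745, 0.0, 0.6745])   # N(0,1) quartiles for a 4-letter alphabet
    return "".join(alphabet[i] for i in np.searchsorted(breakpoints, paa))
```

Each EEG segment becomes a short symbolic "word"; the paper then embeds such symbol sequences with a trained Doc2Vec model before classification.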

13 pages, 2103 KiB  
Article
DABaCLT: A Data Augmentation Bias-Aware Contrastive Learning Framework for Time Series Representation
by Yubo Zheng, Yingying Luo, Hengyi Shao, Lin Zhang and Lei Li
Appl. Sci. 2023, 13(13), 7908; https://0-doi-org.brum.beds.ac.uk/10.3390/app13137908 - 06 Jul 2023
Cited by 1 | Viewed by 1361
Abstract
Contrastive learning, as an unsupervised technique, has emerged as a prominent method in time series representation learning tasks, serving as a viable solution to the scarcity of annotated data. However, the application of data augmentation methods during training can distort the distribution of raw data. This discrepancy between the representations learned from augmented data in contrastive learning and those obtained from supervised learning results in an incomplete understanding of the information contained in the real data from the trained encoder. We refer to this as the data augmentation bias (DAB), representing the disparity between the two sets of learned representations. To mitigate the influence of DAB, we propose a DAB-aware contrastive learning framework for time series representation (DABaCLT). This framework leverages a raw features stream (RFS) to extract features from raw data, which are then combined with augmented data to create positive and negative pairs for DAB-aware contrastive learning. Additionally, we introduce a DAB-minimizing loss function (DABMinLoss) within the contrasting module to minimize the DAB of the extracted temporal and contextual features. Our proposed method is evaluated on three time series classification tasks: sleep staging classification (SSC) and epilepsy seizure prediction (ESP) based on EEG, and human activity recognition (HAR) based on sensor signals. The experimental results demonstrate that our DABaCLT achieves strong performance in self-supervised time series representation: a 0.19% to 22.95% accuracy improvement for SSC, 2.96% to 5.05% for HAR, and 1.00% to 2.46% for ESP, achieving comparable performance to the supervised approach. The source code for our framework is open-source.
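To make the contrastive objective concrete, here is a generic InfoNCE-style loss in NumPy. This is the standard textbook form, not the paper's DABMinLoss or its pair-construction scheme, and the temperature value is an illustrative assumption.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.5):
    """Generic InfoNCE contrastive loss on L2-normalized embeddings:
    pull the positive toward the anchor, push the negatives away."""
    unit = lambda v: v / np.linalg.norm(v)
    a = unit(anchor)
    logits = np.array([a @ unit(positive)] + [a @ unit(n) for n in negatives])
    logits = logits / temperature
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

DABaCLT builds its positive/negative pairs from both the raw features stream and augmented views; the DAB-minimizing term the paper adds on top of the contrastive objective is omitted here.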

14 pages, 289 KiB  
Article
Transposed Convolution as Alternative Preprocessor for Brain-Computer Interface Using Electroencephalogram
by Kenshi Machida, Isao Nambu and Yasuhiro Wada
Appl. Sci. 2023, 13(6), 3578; https://0-doi-org.brum.beds.ac.uk/10.3390/app13063578 - 10 Mar 2023
Cited by 3 | Viewed by 1013
Abstract
The implementation of a brain–computer interface (BCI) using electroencephalography typically entails two phases: feature extraction and classification utilizing a classifier. Consequently, there are numerous disordered combinations of feature extraction and classification techniques that apply to each classification target and dataset. In this study, we employed a neural network as a classifier to address the versatility of the system in converting inputs of various forms into outputs of various forms. As a preprocessing step, we utilized a transposed convolution to augment the width of the convolution and the number of output features, which were then classified using a convolutional neural network (CNN). Our implementation of a simple CNN incorporating a transposed convolution in the initial layer allowed us to classify the BCI Competition IV Dataset 2a Motor Imagery Task data. Our findings indicate that our proposed method, which incorporates a two-dimensional CNN with a transposed convolution, outperforms the accuracy achieved without the transposed convolution. Additionally, the accuracy obtained was comparable to conventional optimal preprocessing methods, demonstrating the effectiveness of the transposed convolution as a potential alternative for BCI preprocessing.
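The core operation can be illustrated with a minimal one-dimensional transposed convolution in NumPy. The paper uses a two-dimensional transposed convolution inside a CNN; the kernel and stride below are arbitrary illustrative choices.

```python
import numpy as np

def conv_transpose1d(x, kernel, stride=2):
    """Minimal 1-D transposed convolution: each input sample stamps a
    scaled copy of the kernel into the output at stride-spaced offsets,
    so the output is wider than the input (the 'width augmentation'
    the paper exploits as a preprocessing step)."""
    k = len(kernel)
    out = np.zeros(stride * (len(x) - 1) + k)
    for i, v in enumerate(x):
        out[i * stride : i * stride + k] += v * kernel
    return out
```

For example, a length-2 input with a length-3 kernel and stride 2 produces a length-5 output, with overlapping kernel copies summed where they meet.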

15 pages, 1706 KiB  
Article
Emotional Stress Recognition Using Electroencephalogram Signals Based on a Three-Dimensional Convolutional Gated Self-Attention Deep Neural Network
by Hyoung-Gook Kim, Dong-Ki Jeong and Jin-Young Kim
Appl. Sci. 2022, 12(21), 11162; https://0-doi-org.brum.beds.ac.uk/10.3390/app122111162 - 04 Nov 2022
Cited by 4 | Viewed by 2164
Abstract
The brain is more sensitive to stress than other organs and can develop many diseases under excessive stress. In this study, we developed a method to improve the accuracy of emotional stress recognition using multi-channel electroencephalogram (EEG) signals. The method combines a three-dimensional (3D) convolutional neural network with an attention mechanism to build a 3D convolutional gated self-attention neural network. Initially, the EEG signal is decomposed into four frequency bands, and a 3D convolutional block is applied to each frequency band to obtain EEG spatiotemporal information. Subsequently, long-range dependencies and global information are learned by capturing prominent information from each frequency band via a gated self-attention mechanism block. Using frequency band mapping, complementary features are learned by connecting vectors from different frequency bands, which is reflected in the final attentional representation for stress recognition. Experiments conducted on three benchmark datasets for assessing the performance of emotional stress recognition indicate that the proposed method outperforms other conventional methods. The performance analysis of the proposed method confirms that EEG pattern analysis can be used for studying human brain activity and can accurately distinguish the state of stress.
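The first step, splitting the EEG into frequency bands, can be sketched with an ideal FFT-mask filter in NumPy. The band edges below are common EEG conventions, not necessarily the paper's, and real pipelines typically use FIR/IIR filters rather than hard spectral masks.

```python
import numpy as np

# Conventional EEG band edges in Hz (illustrative; the paper's bands may differ)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def split_bands(eeg, fs, bands=BANDS):
    """Ideal band splitting: zero out FFT bins outside each band and
    invert, yielding one band-limited signal per frequency band."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    spec = np.fft.rfft(eeg)
    return {name: np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), spec, 0),
                               n=len(eeg))
            for name, (lo, hi) in bands.items()}
```

In the paper's architecture, each band-limited signal then feeds its own 3D convolutional block before the gated self-attention stage.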

21 pages, 2747 KiB  
Article
A Novel Approach for Emotion Recognition Based on EEG Signal Using Deep Learning
by Awf Abdulrahman, Muhammet Baykara and Talha Burak Alakus
Appl. Sci. 2022, 12(19), 10028; https://0-doi-org.brum.beds.ac.uk/10.3390/app121910028 - 06 Oct 2022
Cited by 9 | Viewed by 2691
Abstract
Emotion can be defined as a voluntary or involuntary reaction to external factors. People express their emotions through actions, such as words, sounds, facial expressions, and body language. However, emotions expressed in such actions are sometimes manipulated by people and real feelings cannot be conveyed clearly. Therefore, understanding and analyzing emotions is essential. Recently, emotion analysis studies based on EEG signals have come to the foreground, due to the more reliable data collected. In this study, emotion analysis based on EEG signals was performed and a deep learning model was proposed. The study consists of four stages. In the first stage, EEG data were obtained from the GAMEEMO dataset. In the second stage, EEG signals were transformed with both VMD (variational mode decomposition) and EMD (empirical mode decomposition), and a total of 14 IMFs (nine from EMD, five from VMD) were obtained from each signal. In the third stage, statistical features were obtained from the IMFs, using the maximum, minimum, and average values. In the last stage, both binary-class and multi-class classifications were made. The proposed deep learning model was compared with kNN (k-nearest neighbors), SVM (support vector machines), and RF (random forest). At the end of the study, an accuracy of 70.89% in binary-class classification and 90.33% in multi-class classification was obtained with the proposed deep learning method.
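The third stage, statistical features from the IMFs, reduces to a few lines. The sketch below assumes the 14 IMFs have already been computed by EMD/VMD; the decompositions themselves are elided, and the array shapes are illustrative.

```python
import numpy as np

def imf_statistics(imfs):
    """Per the abstract: maximum, minimum, and mean of each IMF,
    concatenated into one feature vector (3 features per IMF)."""
    return np.concatenate([[imf.max(), imf.min(), imf.mean()] for imf in imfs])
```

With 14 IMFs per signal (nine from EMD, five from VMD), each EEG signal maps to a 42-dimensional feature vector for the classification stage.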

20 pages, 4425 KiB  
Article
Application of Deep Learning and WT-SST in Localization of Epileptogenic Zone Using Epileptic EEG Signals
by Sani Saminu, Guizhi Xu, Zhang Shuai, Isselmou Abd El Kader, Adamu Halilu Jabire, Yusuf Kola Ahmed, Ibrahim Abdullahi Karaye and Isah Salim Ahmad
Appl. Sci. 2022, 12(10), 4879; https://0-doi-org.brum.beds.ac.uk/10.3390/app12104879 - 11 May 2022
Cited by 7 | Viewed by 1892
Abstract
Focal and non-focal electroencephalogram (EEG) signals have proved effective for identifying areas in the brain that are affected by epileptic seizures, known as the epileptogenic zones. The location of focal EEG signals and the time of seizure occurrence are vital information that help doctors treat focal epileptic seizures surgically. This paper proposes a computer-aided detection (CAD) system for detecting and classifying focal and non-focal EEG signals, as the manual process is time-consuming, prone to error, and tedious. The proposed technique employs time-frequency, statistical, and nonlinear approaches to form a robust feature extraction technique. Four detection and classification techniques for focal and non-focal EEG signals were proposed: (1) combined hybrid features with a Support Vector Machine (Hybrid-SVM); (2) Discrete Wavelet Transform with a Deep Learning Network (DWT-DNN); (3) combined hybrid features with a DNN (Hybrid-DNN), as an optimized DNN model; and (4) a newly proposed technique using the Wavelet Synchrosqueezing Transform with a Deep Convolutional Neural Network (WTSST-DCNN). Prior to feeding the features to the classifiers, statistical analyses, including t-tests, were deployed to obtain relevant and significant features for each approach. The proposed feature extraction and classification techniques proved effective and suitable for smart Internet of Medical Things (IoMT) devices, as the performance parameters of accuracy, sensitivity, and specificity are higher than in recently related works, with values of 99.7%, 99.5%, and 99.7%, respectively.
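The statistical screening step, using t-tests to keep significant features, can be sketched as a ranking by absolute Welch t-statistic. The feature counts and class sizes below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def rank_by_ttest(X_focal, X_nonfocal, top_k=5):
    """Rank feature columns by |Welch t-statistic| between the focal
    and non-focal classes; return the indices of the top_k features."""
    m1, m2 = X_focal.mean(axis=0), X_nonfocal.mean(axis=0)
    v1, v2 = X_focal.var(axis=0, ddof=1), X_nonfocal.var(axis=0, ddof=1)
    t = (m1 - m2) / np.sqrt(v1 / len(X_focal) + v2 / len(X_nonfocal))
    return np.argsort(-np.abs(t))[:top_k]
```

The retained feature columns would then be passed to the downstream classifier (SVM or DNN in the paper's four configurations); a full pipeline would convert the t-statistics to p-values and threshold on significance rather than taking a fixed top_k.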
