EEG Signature Decoding towards Brain-Computer Interface Practice in Real World

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (15 November 2021) | Viewed by 22894

Special Issue Editors


Guest Editor
Institute of Medical Science and Technology, National Sun Yat-sen University, Taiwan
Interests: big EEG data; affective computing; wearable brain-computer interfaces

Guest Editor
Department of Biomedical Engineering, Tianjin University, Tianjin, China
Interests: brain-computer interfaces; neural signal processing; neuromodulation

Guest Editor
Swartz Center for Computational Neuroscience, University of California San Diego, San Diego, CA, USA
Interests: biosignal processing; machine learning; neuromodulation

Guest Editor
Swartz Center for Computational Neuroscience, University of California San Diego, San Diego, CA, USA
Interests: brain-computer interfaces; biological signal processing; machine learning

Special Issue Information

Dear Colleagues,

The increasing availability of cost-efficient, commercialized wearable electroencephalogram (EEG) headsets in recent years has greatly promoted the application of brain-computer interfaces (BCIs) in daily life. However, sensing and mining EEG signatures from a BCI user in real-world practice raises new challenges. For example, naturalistic movements often severely corrupt EEG signal quality. Constant changes in behavioral and/or psychophysiological states may prevent successful decoding of task-related EEG signatures previously learned by a machine-learning model. Both the corrupted EEG signals and their non-stationary relationship to the task considerably degrade BCI performance and thereby frustrate the user. In addition, intrinsic differences in brain anatomy and function across users may lead to substantial inter-individual variability, posing a demanding obstacle to developing user-friendly wearable BCI applications.

The aim of this Special Issue is to present and discuss how recent advances in EEG signal processing, machine-learning frameworks, and user-calibration scenarios can effectively cope with the ecological sources of variability in sensing and mining EEG signals. We welcome original work offering theoretical, analytical, and empirical demonstrations on humans performing naturalistic movements and behaviors that can advance the decoding of EEG signatures towards BCI practice in the real world.

Prof. Dr. Yuan-Pin Lin
Dr. Minpeng Xu
Dr. Sheng-Hsiou Hsu
Dr. Masaki Nakanishi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • EEG signal processing
  • Intra- and inter-individual variability
  • Artifact removal
  • Machine learning
  • Brain-computer interface

Published Papers (6 papers)

Research

18 pages, 8462 KiB  
Article
Artifacts in EEG-Based BCI Therapies: Friend or Foe?
by Eric James McDermott, Philipp Raggam, Sven Kirsch, Paolo Belardinelli, Ulf Ziemann and Christoph Zrenner
Sensors 2022, 22(1), 96; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010096 - 24 Dec 2021
Cited by 6 | Viewed by 3246
Abstract
EEG-based brain–computer interfaces (BCI) have promising therapeutic potential beyond traditional neurofeedback training, such as enabling personalized and optimized virtual reality (VR) neurorehabilitation paradigms where the timing and parameters of the visual experience are synchronized with specific brain states. While BCI algorithms are often designed to focus on whichever portion of a signal is most informative, in these brain-state-synchronized applications it is of critical importance that the resulting decoder is sensitive to physiological brain activity representative of various mental states, and not to artifacts, such as those arising from naturalistic movements. In this study, we compare the relative classification accuracy with which different motor tasks can be decoded from both extracted brain activity and artifacts contained in the EEG signal. EEG data were collected from 17 chronic stroke patients while performing six different head, hand, and arm movements in a realistic VR-based neurorehabilitation paradigm. Results show that the artifactual component of the EEG signal is significantly more informative than brain activity with respect to classification accuracy. This finding is consistent across different feature extraction methods and classification pipelines. While informative brain signals can be recovered with suitable cleaning procedures, we recommend that features should not be designed solely to maximize classification accuracy, as this could select for remaining artifactual components. We also propose the use of interpretable machine learning approaches to verify that classification is driven by physiological brain states. In summary, whereas informative artifacts are a helpful friend in BCI-based communication applications, they can be a problematic foe in the estimation of physiological brain states.
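
As a rough illustration of the comparison described above (not the authors' implementation), the sketch below assumes two hypothetical epoch arrays, epochs_brain and epochs_artifact, already separated by a cleaning step such as ICA; it extracts simple per-channel log-variance features and compares cross-validated LDA accuracy for the six movement classes.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: 120 epochs, 32 channels, 500 samples, 6 movement classes.
# In practice these arrays would come from separating brain vs. artifact components (e.g., ICA).
n_epochs, n_channels, n_samples = 120, 32, 500
epochs_brain = rng.standard_normal((n_epochs, n_channels, n_samples))
epochs_artifact = rng.standard_normal((n_epochs, n_channels, n_samples))
labels = rng.integers(0, 6, size=n_epochs)

def log_variance_features(epochs):
    """Per-channel log-variance, a deliberately simple EEG feature."""
    return np.log(epochs.var(axis=2))

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

for name, epochs in [("brain", epochs_brain), ("artifact", epochs_artifact)]:
    acc = cross_val_score(clf, log_variance_features(epochs), labels, cv=5)
    print(f"{name:8s} component: {acc.mean():.2f} +/- {acc.std():.2f}")
```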

19 pages, 1858 KiB  
Article
Evaluating the Effect of Stimuli Color and Frequency on SSVEP
by Xavier Duart, Eduardo Quiles, Ferran Suay, Nayibe Chio, Emilio García and Francisco Morant
Sensors 2021, 21(1), 117; https://0-doi-org.brum.beds.ac.uk/10.3390/s21010117 - 27 Dec 2020
Cited by 23 | Viewed by 3548
Abstract
Brain–computer interfaces (BCI) can extract information about the subject’s intentions by registering and processing electroencephalographic (EEG) signals to generate actions on physical systems. Steady-state visual-evoked potentials (SSVEP) are produced when the subject stares at flashing visual stimuli. By means of spectral analysis and by measuring the signal-to-noise ratio (SNR) of its harmonic content, the observed stimulus can be identified. Stimulus color matters: some authors have proposed red because of its ability to capture attention, while others reject it because it might induce epileptic seizures. Green has also been proposed, and it is claimed that white may generate the best signals. Regarding frequency, middle frequencies are claimed to produce the best SNR, although high frequencies have not been thoroughly studied and might be advantageous due to the lower spontaneous cerebral activity in this frequency band. Here, we presented white, red, and green stimuli at three frequencies, 5 (low), 12 (middle), and 30 (high) Hz, to 42 subjects and compared them in order to find which one produces the best SNR. We aimed to know whether the response to white is as strong as the one to red, and also whether the response to high frequency is as strong as the one triggered by lower frequencies. Attention was measured with the Conners’ Continuous Performance Test version 2 (CPT-II) in order to search for a potential relationship between attentional capacity and the SNR previously obtained. An analysis of variance (ANOVA) shows the best SNR with the middle frequency, followed by the low, and finally the high one. White gives as good an SNR as red at 12 Hz and so does green at 5 Hz, with no differences at 30 Hz. These results suggest that middle frequencies are preferable and that the red color can be avoided. Correlation analysis also shows a correlation between attention and the SNR at the low frequency, suggesting that for low frequencies, more attentional capacity leads to better results.
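
For readers unfamiliar with the SNR measure used in this kind of SSVEP analysis, the following is a minimal, hypothetical sketch (not the authors' code): it estimates the power spectrum with Welch's method and defines the SNR at a stimulus frequency as the power in the target bin divided by the mean power of the neighboring bins, summed over the first few harmonics.

```python
import numpy as np
from scipy.signal import welch

def ssvep_snr(x, fs, f_stim, n_harmonics=3, n_neighbors=5):
    """Narrow-band SNR at f_stim and its harmonics: target-bin power over
    the mean power of surrounding bins (excluding the target bin itself)."""
    freqs, psd = welch(x, fs=fs, nperseg=4 * fs)
    snrs = []
    for h in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - h * f_stim))
        lo, hi = max(idx - n_neighbors, 0), min(idx + n_neighbors + 1, len(psd))
        neighbors = np.delete(psd[lo:hi], idx - lo)
        snrs.append(psd[idx] / neighbors.mean())
    return float(np.sum(snrs))

# Toy example: a 12 Hz SSVEP-like sinusoid buried in noise, 4 s at an assumed 250 Hz rate.
fs, f_stim = 250, 12.0
t = np.arange(0, 4, 1 / fs)
x = 0.5 * np.sin(2 * np.pi * f_stim * t) + np.random.randn(t.size)
print(f"SNR at {f_stim} Hz: {ssvep_snr(x, fs, f_stim):.1f}")
```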

20 pages, 6522 KiB  
Article
Data Augmentation for Motor Imagery Signal Classification Based on a Hybrid Neural Network
by Kai Zhang, Guanghua Xu, Zezhen Han, Kaiquan Ma, Xiaowei Zheng, Longting Chen, Nan Duan and Sicong Zhang
Sensors 2020, 20(16), 4485; https://0-doi-org.brum.beds.ac.uk/10.3390/s20164485 - 11 Aug 2020
Cited by 69 | Viewed by 6471
Abstract
As an important paradigm of spontaneous brain-computer interfaces (BCIs), motor imagery (MI) has been widely used in the fields of neurological rehabilitation and robot control. Recently, researchers have proposed various methods for feature extraction and classification based on MI signals. Decoding models based on deep neural networks (DNNs) have attracted significant attention in the field of MI signal processing. Due to the strict requirements for subjects and experimental environments, it is difficult to collect large-scale, high-quality electroencephalogram (EEG) data, yet the performance of a deep learning model depends directly on the size of the dataset. Therefore, decoding MI-EEG signals with a DNN has proven highly challenging in practice. Motivated by this, we investigated the performance of different data augmentation (DA) methods for the classification of MI data using a DNN. First, we transformed the time-series signals into spectrogram images using a short-time Fourier transform (STFT). Then, we evaluated and compared the performance of different DA methods on these spectrogram data. Next, we developed a convolutional neural network (CNN) to classify the MI signals and compared the classification performance before and after DA. The Fréchet inception distance (FID) was used to evaluate the quality of the generated data (GD), and the classification accuracy and mean kappa values were used to identify the best CNN-DA method. In addition, analysis of variance (ANOVA) and paired t-tests were used to assess the significance of the results. The results showed that the deep convolutional generative adversarial network (DCGAN) provided better augmentation performance than traditional DA methods: geometric transformation (GT), autoencoder (AE), and variational autoencoder (VAE) (p < 0.01). Public datasets of BCI Competition IV (datasets 1 and 2b) were used to verify the classification performance. Improvements in classification accuracy of 17% and 21% (p < 0.01) were observed after DA for the two datasets. In addition, the hybrid network CNN-DCGAN outperformed the other classification methods, with average kappa values of 0.564 and 0.677 for the two datasets.
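
To make the spectrogram-based pipeline concrete, here is a minimal sketch under assumed data shapes and sampling rate (not the paper's implementation): it converts single-channel MI trials into STFT magnitude spectrograms with SciPy and runs one training step of a small PyTorch CNN as a smoke test. The DCGAN augmentation stage is omitted; generated spectrograms would simply be appended to the training set before fitting.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

fs = 250  # assumed sampling rate (Hz)

def to_spectrogram(trial):
    """STFT magnitude spectrogram of a single-channel trial (1D array)."""
    _, _, Z = stft(trial, fs=fs, nperseg=64, noverlap=48)
    return np.abs(Z).astype(np.float32)

# Toy data: 64 single-channel trials of 2 s, two MI classes.
rng = np.random.default_rng(0)
trials = rng.standard_normal((64, 2 * fs))
y = torch.tensor(rng.integers(0, 2, size=64))
X = torch.tensor(np.stack([to_spectrogram(tr) for tr in trials]))[:, None]  # (N, 1, F, T)

class SmallCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SmallCNN()
loss = nn.CrossEntropyLoss()(model(X), y)  # one forward/backward pass
loss.backward()
print("spectrogram batch shape:", tuple(X.shape), "loss:", float(loss))
```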

13 pages, 2450 KiB  
Article
The Effect of Static and Dynamic Visual Stimulations on Error Detection Based on Error-Evoked Brain Responses
by Rui Xu, Yaoyao Wang, Xianle Shi, Ningning Wang and Dong Ming
Sensors 2020, 20(16), 4475; https://0-doi-org.brum.beds.ac.uk/10.3390/s20164475 - 10 Aug 2020
Viewed by 2030
Abstract
Error-related potentials (ErrPs) have provided technical support for brain-computer interfaces. However, different visual stimulations may affect the ErrPs and, furthermore, affect error recognition based on ErrPs. Therefore, this study aimed to investigate how people respond to different visual stimulations (static and dynamic) and to find the best time window for each type of stimulation. Nineteen participants were recruited for ErrP-based tasks with static and dynamic visual stimulations. Five ErrP components were statistically compared, and classification accuracies were obtained through linear discriminant analysis (LDA) with nine different time windows. The results showed that P3, N6, and P8 differed significantly between correct and error trials in both stimulations, while N1 did so only in the static condition. Differences between dynamic and static errors existed in N1 and P2. The highest accuracy was obtained in the time window related to N1, P3, N6, and P8 for the static condition, and in the time window related to P3, N6, and P8 for the dynamic condition. In conclusion, the early components of ErrPs may be affected by the stimulation mode, and the late components are more sensitive to errors. Error recognition with static stimulation requires information from the entire epoch, while the later windows should receive more focus in the dynamic case.
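
The window-wise LDA evaluation can be sketched as follows (an illustrative example with synthetic data and made-up window boundaries, not the study's code): each candidate time window is sliced from the epochs, turned into a downsampled feature vector, and scored with cross-validated shrinkage LDA.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 200  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)

# Hypothetical epochs: 200 trials x 8 channels x 1.0 s; labels: 0 = correct, 1 = error.
epochs = rng.standard_normal((200, 8, fs))
labels = rng.integers(0, 2, size=200)

# Candidate windows in seconds after feedback onset (illustrative values only).
windows = [(0.0, 0.3), (0.2, 0.5), (0.4, 0.7), (0.0, 1.0)]

for start, stop in windows:
    sl = slice(int(start * fs), int(stop * fs))
    # Feature vector: each channel's samples in the window, downsampled by 10.
    feats = epochs[:, :, sl][:, :, ::10].reshape(len(epochs), -1)
    acc = cross_val_score(LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
                          feats, labels, cv=5).mean()
    print(f"window {start:.1f}-{stop:.1f} s: accuracy {acc:.2f}")
```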

17 pages, 4253 KiB  
Article
Optimizing SSVEP-Based BCI System towards Practical High-Speed Spelling
by Jiabei Tang, Minpeng Xu, Jin Han, Miao Liu, Tingfei Dai, Shanguang Chen and Dong Ming
Sensors 2020, 20(15), 4186; https://0-doi-org.brum.beds.ac.uk/10.3390/s20154186 - 28 Jul 2020
Cited by 17 | Viewed by 3573
Abstract
Brain–computer interface (BCI) spellers based on steady-state visual evoked potentials (SSVEPs) have recently been widely investigated for their high information transfer rates (ITRs). This paper aims to improve the practicability of SSVEP-BCIs for high-speed spelling. The system acquired electroencephalogram (EEG) data from a self-developed dedicated EEG device, and the stimuli were arranged as a keyboard. The task-related component analysis (TRCA) spatial filter was modified (mTRCA) for target classification and showed significantly higher performance than the original TRCA in offline analysis. In the online system, a dynamic stopping (DS) strategy based on Bayesian posterior probability was used to realize an alterable stimulation time. In addition, the temporal filtering process and the programs were optimized to facilitate online DS operation. Notably, the online ITR reached 330.4 ± 45.4 bits/min on average, which is significantly higher than that of the fixed stopping (FS) strategy, and the peak value of 420.2 bits/min is the highest online spelling ITR reported with an SSVEP-BCI to date. The proposed system, with portable EEG acquisition, friendly interaction, and an alterable command-output time, provides more flexibility for SSVEP-based BCIs and is promising for practical high-speed spelling.
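
The paper's mTRCA variant is not reproduced here, but the original TRCA filter it builds on can be sketched briefly under the standard formulation (an illustrative reimplementation, not the authors' code): for each stimulus class, the filter maximizes the inter-trial covariance S against the total covariance Q and keeps the leading generalized eigenvector.

```python
import numpy as np
from scipy.linalg import eigh

def trca_filter(trials):
    """Original TRCA spatial filter (not the paper's mTRCA variant).
    trials: array (n_trials, n_channels, n_samples) for one SSVEP stimulus class."""
    n_trials, n_channels, _ = trials.shape
    trials = trials - trials.mean(axis=2, keepdims=True)
    # Q: covariance of the trials concatenated in time.
    concat = np.hstack([trials[i] for i in range(n_trials)])
    Q = concat @ concat.T
    # S: sum of cross-covariances over all pairs of distinct trials.
    S = np.zeros((n_channels, n_channels))
    for i in range(n_trials):
        for j in range(n_trials):
            if i != j:
                S += trials[i] @ trials[j].T
    # Leading generalized eigenvector of S with respect to Q.
    _, eigvecs = eigh(S, Q)
    return eigvecs[:, -1]

# Toy usage: 10 trials, 9 channels, 1 s at an assumed 250 Hz rate.
rng = np.random.default_rng(0)
w = trca_filter(rng.standard_normal((10, 9, 250)))
print("filter shape:", w.shape)  # (9,)
```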

15 pages, 2661 KiB  
Article
Separable EEG Features Induced by Timing Prediction for Active Brain-Computer Interfaces
by Jiayuan Meng, Minpeng Xu, Kun Wang, Qiangfan Meng, Jin Han, Xiaolin Xiao, Shuang Liu and Dong Ming
Sensors 2020, 20(12), 3588; https://0-doi-org.brum.beds.ac.uk/10.3390/s20123588 - 25 Jun 2020
Cited by 11 | Viewed by 2778
Abstract
Brain–computer interfaces (BCIs) have witnessed rapid development in recent years. However, the active BCI paradigm is still underdeveloped and lacks variety. It is imperative to adopt more voluntary mental activities that can induce separable electroencephalography (EEG) features for active BCI control. This study aims to demonstrate that the brain function of timing prediction, i.e., the expectation of upcoming time intervals, is accessible to BCIs. Eighteen subjects were selected for this study. They were trained to form a precise idea of two sub-second time intervals, i.e., 400 ms and 600 ms, and were asked to measure a time interval of either 400 ms or 600 ms in their mind after a cue onset. The EEG features induced by timing prediction were analyzed and classified using combined discriminative canonical pattern matching and common spatial pattern analysis. It was found that the low-frequency (0–4 Hz) ERPs and high-frequency (20–60 Hz) energy were separable for distinct timing predictions. The classification accuracy for 400 ms vs. 600 ms timing reached a peak of 93.75%, with an average of 76.45%. This study is the first to demonstrate that the cognitive EEG features induced by timing prediction are detectable and separable, making them feasible for active BCI control and broadening the category of BCIs.
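
The common-spatial-pattern half of the feature pipeline can be illustrated with MNE's decoding module (a generic sketch with synthetic data and assumed shapes; the discriminative canonical pattern matching stage is not shown): CSP spatial filters are fitted on the epochs and their log-variance features are classified with LDA.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical epochs: 100 trials x 16 channels x 1 s at an assumed 200 Hz rate;
# labels: 0 = "400 ms" prediction, 1 = "600 ms" prediction.
X = rng.standard_normal((100, 16, 200))
y = rng.integers(0, 2, size=100)

# CSP learns spatial filters whose projected log-variance separates the two classes;
# in the actual study this would be applied to band-passed (e.g., 20-60 Hz) data.
pipeline = make_pipeline(CSP(n_components=4, log=True), LinearDiscriminantAnalysis())
acc = cross_val_score(pipeline, X, y, cv=5)
print(f"CSP + LDA accuracy: {acc.mean():.2f}")
```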