EEG-Based Brain–Computer Interface for a Real-Life Appliance

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Biomedical Sensors".

Viewed by 53745

Editors


Dr. Hyungmin Kim
Collection Editor
Center for Bionics, Biomedical Research Institute, Korea Institute of Science and Technology (KIST), Seoul 02792, Korea
Interests: brain–computer interface; neuromodulation; focused ultrasound

Dr. Song Joo Lee
Collection Editor
Center for Bionics, Biomedical Research Institute, Korea Institute of Science and Technology (KIST), Seoul 02792, Korea
Interests: brain–computer interface; neuromechanics; rehabilitation and bionic engineering

Topical Collection Information

Dear Colleagues,

Over the past several decades, brain–computer interface (BCI) technology has developed rapidly, yielding a variety of applications that directly connect the brain to external devices. In particular, EEG-based BCIs can non-invasively monitor neural activity at high temporal resolution, opening the possibility of controlling real-life devices. The focus of BCI research has therefore steadily broadened: from replacing and restoring lost function in people with neuromuscular disorders, to augmenting the functional performance of people with movement disorders and of athletes, to offering the general public a convenient means of controlling everyday appliances and external devices.

This Topical Collection aims to bring researchers together to share recent developments and findings on BCIs for controlling real-life appliances and external devices such as wheelchairs, drones, and robots. Topics include, but are not limited to, the development of sensors, sensing principles, and sensor interfaces; algorithms for decoding user intention; and the application of existing and novel BCI methods to real-life appliance control and to assistive and rehabilitation technologies for people with disabilities. Particular emphasis is placed on BCIs for real-life appliances and activities of daily living.

We look forward to receiving your contribution to this Topical Collection.

Dr. Hyungmin Kim
Dr. Song Joo Lee
Collection Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • electroencephalography
  • brain–computer interface
  • non-invasive
  • real-life appliance

Published Papers (15 papers)

2023


18 pages, 3838 KiB  
Article
Transfer Learning in Trajectory Decoding: Sensor or Source Space?
by Nitikorn Srisrisawang and Gernot R. Müller-Putz
Sensors 2023, 23(7), 3593; https://doi.org/10.3390/s23073593 - 30 Mar 2023
Cited by 2 | Viewed by 1510
Abstract
In this study, across-participant and across-session transfer learning was investigated to minimize the calibration time of the brain–computer interface (BCI) system in the context of continuous hand trajectory decoding. We reanalyzed data from a study with 10 able-bodied participants across three sessions. A leave-one-participant-out (LOPO) model was utilized as a starting model. Recursive exponentially weighted partial least squares regression (REW-PLS) was employed to overcome the memory limitation due to the large pool of training data. We considered four scenarios: generalized with no update (Gen), generalized with cumulative update (GenC), and individual models with cumulative (IndC) and non-cumulative (Ind) updates, with each one trained with sensor-space features or source-space features. The decoding performance in generalized models (Gen and GenC) was lower than the chance level. In individual models, the cumulative update (IndC) showed no significant improvement over the non-cumulative model (Ind). The performance showed the decoder’s incapability to generalize across participants and sessions in this task. The results suggested that the best correlation could be achieved with the sensor-space individual model, despite additional anatomical information in the source-space features. The decoding pattern showed a more localized pattern around the precuneus over three sessions in Ind models.

17 pages, 2445 KiB  
Article
Robust Motor Imagery Tasks Classification Approach Using Bayesian Neural Network
by Daily Milanés-Hermosilla, Rafael Trujillo-Codorniú, Saddid Lamar-Carbonell, Roberto Sagaró-Zamora, Jorge Jadid Tamayo-Pacheco, John Jairo Villarejo-Mayor and Denis Delisle-Rodriguez
Sensors 2023, 23(2), 703; https://doi.org/10.3390/s23020703 - 8 Jan 2023
Cited by 3 | Viewed by 1669
Abstract
The development of brain–computer interfaces based on motor imagery (MI) tasks is a relevant research topic worldwide. The design of accurate and reliable BCI systems remains a challenge, mainly in terms of increasing performance and usability. Classifiers based on Bayesian neural networks are proposed in this work, using variational inference to analyze the uncertainty during MI prediction. An adaptive threshold scheme is proposed here for MI classification with a reject option, and its performance on both datasets 2a and 2b from BCI Competition IV is compared with other threshold-based approaches. The results using subject-specific and non-subject-specific training strategies are encouraging. From the uncertainty analysis, considerations for reducing computational cost are proposed for future work.
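The reject option rests on a simple mechanism: run several stochastic forward passes of the Bayesian network, and withhold the command when the predictive distribution is too uncertain. A minimal sketch using predictive entropy with a fixed threshold (the paper's adaptive thresholding and the network itself are not reproduced; the threshold and sample counts are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def predictive_entropy(mc_probs):
    """Entropy (nats) of the mean predictive distribution over T stochastic
    forward passes; mc_probs has shape (T, n_classes)."""
    p = mc_probs.mean(axis=0)
    return float(-np.sum(p * np.log(p + 1e-12)))

def classify_with_reject(mc_probs, threshold):
    """Return the predicted class, or None when uncertainty is too high --
    deferring instead of issuing an unreliable MI command."""
    if predictive_entropy(mc_probs) > threshold:
        return None
    return int(np.argmax(mc_probs.mean(axis=0)))

# Confident trial: the 20 sampled networks agree on class 1.
confident = np.tile([0.05, 0.90, 0.05], (20, 1))
# Ambiguous trial: the samples disagree, so the mean distribution is nearly flat.
ambiguous = rng.dirichlet(np.ones(3), size=20)

pred_confident = classify_with_reject(confident, threshold=0.5)
pred_ambiguous = classify_with_reject(ambiguous, threshold=0.5)
```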

2021


20 pages, 2081 KiB  
Article
Enhancing EEG-Based Mental Stress State Recognition Using an Improved Hybrid Feature Selection Algorithm
by Ala Hag, Dini Handayani, Maryam Altalhi, Thulasyammal Pillai, Teddy Mantoro, Mun Hou Kit and Fares Al-Shargie
Sensors 2021, 21(24), 8370; https://doi.org/10.3390/s21248370 - 15 Dec 2021
Cited by 14 | Viewed by 3238
Abstract
In real-life applications, electroencephalogram (EEG) signals for mental stress recognition require a conventional wearable device. This, in turn, requires an efficient number of EEG channels and an optimal feature set. This study aims to identify an optimal feature subset that can discriminate mental stress states while enhancing the overall classification performance. We extracted multi-domain features in the time domain, frequency domain, and time–frequency domain, together with network connectivity features, to form a prominent feature vector space for stress. We then proposed a hybrid feature selection (FS) method using minimum redundancy maximum relevance with particle swarm optimization and support vector machines (mRMR-PSO-SVM) to select the optimal feature subset. The performance of the proposed method was evaluated and verified using four datasets, namely EDMSS, DEAP, SEED, and EDPMSC. To further validate its effectiveness, the proposed method was compared with state-of-the-art metaheuristic methods. The proposed model reduced the feature vector space by an average of 70% compared with the state-of-the-art methods while significantly increasing overall detection performance.
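The PSO-SVM stage of the hybrid selector can be sketched as binary PSO with a sigmoid transfer function and an SVM cross-validation fitness. The mRMR pre-filtering step is omitted, the data are synthetic, and every hyperparameter below is an illustrative assumption rather than the authors' configuration:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Synthetic stand-in: 3 informative features out of 20.
n, d = 200, 20
X = rng.standard_normal((n, d))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

def fitness(mask):
    """SVM cross-validated accuracy, lightly penalizing large feature subsets."""
    mask = mask.astype(bool)
    if not mask.any():
        return 0.0
    acc = cross_val_score(SVC(kernel="linear"), X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.mean()

def binary_pso(n_particles=10, n_iters=15, w=0.7, c1=1.5, c2=1.5):
    pos = (rng.random((n_particles, d)) < 0.5).astype(float)
    vel = 0.1 * rng.standard_normal((n_particles, d))
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_f)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # Sigmoid transfer: velocity sets the probability of keeping each feature.
        pos = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-vel))).astype(float)
        f = np.array([fitness(p) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)].copy()
    return gbest.astype(bool)

selected = binary_pso()
```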

16 pages, 2477 KiB  
Article
Multi-Domain Convolutional Neural Networks for Lower-Limb Motor Imagery Using Dry vs. Wet Electrodes
by Ji-Hyeok Jeong, Jun-Hyuk Choi, Keun-Tae Kim, Song-Joo Lee, Dong-Joo Kim and Hyung-Min Kim
Sensors 2021, 21(19), 6672; https://doi.org/10.3390/s21196672 - 7 Oct 2021
Cited by 7 | Viewed by 2757
Abstract
Motor imagery (MI) brain–computer interfaces (BCIs) have been used for a wide variety of applications due to their intuitive matching between the user’s intentions and the performance of tasks. Applying dry electroencephalography (EEG) electrodes to MI BCI applications can resolve many constraints and achieve practicality. In this study, we propose a multi-domain convolutional neural networks (MD-CNN) model that learns subject-specific and electrode-dependent EEG features using a multi-domain structure to improve the classification accuracy of dry electrode MI BCIs. The proposed MD-CNN model is composed of learning layers for three domain representations (time, spatial, and phase). We first evaluated the proposed MD-CNN model using a public dataset to confirm 78.96% classification accuracy for multi-class classification (chance level accuracy: 30%). After that, 10 healthy subjects participated and performed three classes of MI tasks related to lower-limb movement (gait, sitting down, and resting) over two sessions (dry and wet electrodes). Consequently, the proposed MD-CNN model achieved the highest classification accuracy (dry: 58.44%; wet: 58.66%; chance level accuracy: 43.33%) with a three-class classifier and the lowest difference in accuracy between the two electrode types (0.22%, d = 0.0292) compared with the conventional classifiers (FBCSP, EEGNet, ShallowConvNet, and DeepConvNet) that used only a single domain. We expect that the proposed MD-CNN model could be applied for developing robust MI BCI systems with dry electrodes.
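As a rough, non-CNN illustration of the "multi-domain" idea, one can compute hand-crafted descriptors from the same three domains the MD-CNN learns (time, spatial, phase) and concatenate them into a single feature vector. This stand-in is our assumption for illustration only, not the authors' network:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(8)
n_ch, n_samp = 4, 200
trial = rng.standard_normal((n_ch, n_samp))   # one synthetic EEG trial

def multi_domain_features(trial):
    """Concatenate hand-crafted descriptors from three domains, standing in
    for the three learned branches (time, spatial, phase) of the MD-CNN."""
    time_feats = np.concatenate([trial.mean(axis=1), trial.var(axis=1)])
    cov = trial @ trial.T / trial.shape[1]            # spatial covariance
    spatial_feats = cov[np.triu_indices(len(cov))]
    phase = np.angle(hilbert(trial, axis=1))          # instantaneous phase
    # Phase-locking value for every channel pair (always in [0, 1])
    plv = np.abs(np.exp(1j * (phase[:, None] - phase[None, :])).mean(axis=2))
    phase_feats = plv[np.triu_indices(len(plv), k=1)]
    return np.concatenate([time_feats, spatial_feats, phase_feats])

feats = multi_domain_features(trial)          # 8 time + 10 spatial + 6 phase = 24
```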

11 pages, 3483 KiB  
Article
A Hybrid Brain–Computer Interface for Real-Life Meal-Assist Robot Control
by Jihyeon Ha, Sangin Park, Chang-Hwan Im and Laehyun Kim
Sensors 2021, 21(13), 4578; https://doi.org/10.3390/s21134578 - 4 Jul 2021
Cited by 10 | Viewed by 3196
Abstract
Assistive devices such as meal-assist robots aid individuals with disabilities and support the elderly in performing daily activities. However, existing meal-assist robots are inconvenient to operate due to non-intuitive user interfaces, requiring additional time and effort. Thus, we developed a hybrid brain–computer interface-based meal-assist robot system using three features that can be measured with scalp electrodes for electroencephalography. The following three procedures comprise a single meal cycle. (1) Triple eye-blinks (EBs) from the prefrontal channel are treated as activation for initiating the cycle. (2) Steady-state visual evoked potentials (SSVEPs) from occipital channels are used to select the food per the user’s intention. (3) Electromyograms (EMGs) are recorded from temporal channels as the user chews the food, marking the end of the cycle and indicating readiness for the next one. The accuracy, information transfer rate (ITR), and false positive rate (FPR) in experiments on five subjects were as follows: accuracy (EBs/SSVEPs/EMGs): 94.67/83.33/97.33%; FPR (EBs/EMGs): 0.11/0.08 times/min; ITR (SSVEPs): 20.41 bit/min. These results demonstrate the feasibility of the assistive system. The proposed system allows users to eat on their own more naturally. Furthermore, it can increase the self-esteem of disabled and elderly people and enhance their quality of life.
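The three-step meal cycle maps naturally onto a small state machine. The sketch below is a hypothetical illustration of that control flow (class and event names are ours, not the authors'); events arriving out of order are ignored, so a stray chewing artifact cannot start a cycle on its own:

```python
class MealAssistFSM:
    """Minimal state machine for one meal cycle: IDLE -> SELECT -> EATING -> IDLE."""

    def __init__(self):
        self.state = "IDLE"
        self.selected = None

    def on_event(self, event, payload=None):
        if self.state == "IDLE" and event == "triple_blink":
            self.state = "SELECT"            # EBs activate a new meal cycle
        elif self.state == "SELECT" and event == "ssvep_choice":
            self.selected = payload          # SSVEP picks the food item
            self.state = "EATING"
        elif self.state == "EATING" and event == "chewing_emg":
            self.state = "IDLE"              # chewing EMG marks the end of the cycle
            done, self.selected = self.selected, None
            return done                      # report the completed selection
        return None                          # out-of-order events are ignored
```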

22 pages, 2566 KiB  
Review
Brain-Computer Interfaces Systems for Upper and Lower Limb Rehabilitation: A Systematic Review
by Daniela Camargo-Vargas, Mauro Callejas-Cuervo and Stefano Mazzoleni
Sensors 2021, 21(13), 4312; https://doi.org/10.3390/s21134312 - 24 Jun 2021
Cited by 24 | Viewed by 5754
Abstract
In recent years, various studies have demonstrated the potential of electroencephalographic (EEG) signals for the development of brain-computer interfaces (BCIs) in the rehabilitation of human limbs. This article is a systematic review of the state of the art and of opportunities in the development of BCIs for the rehabilitation of the upper and lower limbs. The review covered database studies that used EEG signals, proposed interfaces for rehabilitating upper or lower limbs through motor intention or movement assistance, and employed virtual environments for feedback. Studies that did not specify which processing system was used were excluded, as were analyses of the design process and other reviews. Of the included studies, 11 addressed rehabilitation of the upper limbs, six the lower limbs, and one both. Likewise, six combined visual and auditory feedback, two haptic and visual, and two visual, auditory, and haptic. In addition, four used fully immersive virtual reality (VR), three semi-immersive VR, and 11 non-immersive VR. In summary, the studies demonstrate that EEG signals combined with user feedback offer benefits in cost, effectiveness, training quality, and user motivation, and that there is a need to continue developing interfaces that are accessible to users and that integrate feedback techniques.

25 pages, 2964 KiB  
Article
Brain–Computer Interface (BCI) Control of a Virtual Assistant in a Smartphone to Manage Messaging Applications
by Francisco Velasco-Álvarez, Álvaro Fernández-Rodríguez, Francisco-Javier Vizcaíno-Martín, Antonio Díaz-Estrella and Ricardo Ron-Angevin
Sensors 2021, 21(11), 3716; https://doi.org/10.3390/s21113716 - 26 May 2021
Cited by 20 | Viewed by 5430
Abstract
Brain–computer interfaces (BCIs) are a type of assistive technology that uses the brain signals of users to establish a communication and control channel between them and an external device. BCI systems may be a suitable tool for restoring communication skills in severely motor-disabled patients, as BCIs do not rely on muscular control. The loss of communication is one of the most negative consequences reported by such patients. This paper presents a BCI system for controlling four mainstream messaging applications running on a smartphone: WhatsApp, Telegram, e-mail, and short message service (SMS). Control of the BCI is achieved through the well-known visual P300 row-column paradigm (RCP), allowing the user to select control commands as well as spell characters. To control the smartphone, the system sends synthesized voice commands that are interpreted by a virtual assistant running on the smartphone. Four tasks related to the four messaging services were tested with 15 healthy volunteers, most of whom were able to accomplish the tasks, which included sending free-text e-mails to an address proposed by the subjects themselves. The online performance results, as well as the results of subjective questionnaires, support the viability of the proposed system.
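The row-column paradigm reduces target selection to finding the one row and one column whose flashes evoke a P300. A minimal sketch of that decoding step on synthetic single-channel epochs (all sizes and the scoring window are illustrative assumptions, not the authors' parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

# 6x6 speller matrix: stimuli 0-5 are row flashes, 6-11 are column flashes,
# each repeated 10 times. Epochs are single-channel for simplicity.
n_stim, n_reps, n_samp = 12, 10, 100
epochs = rng.standard_normal((n_stim, n_reps, n_samp))
target_row, target_col = 2, 9                 # ground truth for this synthetic trial
p300 = np.zeros(n_samp)
p300[30:60] = 1.0                             # idealized P300 deflection ~300 ms post-flash
epochs[target_row] += p300
epochs[target_col] += p300

def decode_rcp(epochs, window=slice(30, 60)):
    """Average over repetitions (raising SNR by ~sqrt(reps)), score each stimulus
    by its mean amplitude in the P300 window, and intersect best row and column."""
    scores = epochs.mean(axis=1)[:, window].mean(axis=1)
    return int(np.argmax(scores[:6])), 6 + int(np.argmax(scores[6:]))

row, col = decode_rcp(epochs)
```

The selected character is the cell where the winning row and column intersect; a trained classifier score usually replaces the raw mean amplitude.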

18 pages, 17148 KiB  
Article
Effect of Static Posture on Online Performance of P300-Based BCIs for TV Control
by Dojin Heo, Minju Kim, Jongsu Kim, Yun-Joo Choi and Sung-Phil Kim
Sensors 2021, 21(7), 2278; https://doi.org/10.3390/s21072278 - 24 Mar 2021
Cited by 3 | Viewed by 2036
Abstract
To implement a practical brain–computer interface (BCI) for daily use, the changing postures that occur during daily tasks must be considered in the design of BCIs. To examine whether BCI performance depends on posture, we compared the online performance of P300-based BCIs built to select TV channels while subjects took sitting, recline, supine, and right lateral recumbent postures during BCI use. Subjects self-reported the degrees of interference, comfort, and familiarity after BCI control in each posture. We found no significant difference among the four postures in BCI performance or in the amplitude and latency of P300 and N200. However, when we compared BCI accuracy outcomes normalized within individuals between the cases where subjects reported relatively more positively or more negatively about using the BCI in a particular posture, we found higher BCI accuracy in the postures for which individual subjects reported more positively. Thus, although the change of postures did not affect the overall performance of P300-based BCIs, BCI performance varied with the degree of postural comfort felt by individual subjects. Our results suggest considering the postural comfort felt by individual BCI users when using a P300-based BCI at home.

12 pages, 3590 KiB  
Article
Development of a Guidance System for Motor Imagery Enhancement Using the Virtual Hand Illusion
by Hojun Jeong and Jonghyun Kim
Sensors 2021, 21(6), 2197; https://doi.org/10.3390/s21062197 - 21 Mar 2021
Cited by 5 | Viewed by 2520
Abstract
Motor imagery (MI) is widely used to produce input signals for brain–computer interfaces (BCIs) due to the similarities between MI-BCI and the planning–execution cycle. Despite its usefulness, MI tasks can be ambiguous to users, and MI produces weaker cortical signals than motor execution. Existing MI guidance systems, which have been reported to provide visual guidance for MI and to enhance MI, still have limitations: insufficient immersion for MI or poor expandability to MI for other body parts. We propose a guidance system for MI enhancement that can immerse users in MI and that is easy to extend to other body parts and target motions with few physical constraints. To make the guidance system easily extendable, the virtual hand illusion is applied with a motion tracking sensor. MI enhancement was evaluated in 11 healthy people by comparison with another guidance system and with conventional motor commands for BCI. The results showed that the proposed MI guidance system produced an amplified cortical signal compared to pure MI (p < 0.017), and a cortical signal similar to those produced by both actual execution (p > 0.534) and an MI guidance system with the rubber hand illusion (p > 0.722) in the contralateral region. Therefore, we believe that the proposed MI guidance system with the virtual hand illusion is a viable alternative to existing MI guidance systems in various applications with MI-BCI.

19 pages, 705 KiB  
Article
Self-Relative Evaluation Framework for EEG-Based Biometric Systems
by Meriem Romaissa Boubakeur and Guoyin Wang
Sensors 2021, 21(6), 2097; https://doi.org/10.3390/s21062097 - 17 Mar 2021
Cited by 5 | Viewed by 2100
Abstract
In recent years, electroencephalogram (EEG) signals have been used as a biometric modality, and EEG-based biometric systems have received increasing attention. However, due to the sensitive nature of EEG signals, processing techniques for extracting identity information may lose some of that information, which can reduce the distinctiveness between subjects in the system. In this context, we propose a new self-relative evaluation framework for EEG-based biometric systems. The proposed framework aims at selecting more accurate identity information when the biometric system is open to the enrollment of novel subjects. The experiments were conducted on publicly available EEG datasets collected from 108 subjects in a resting state with closed eyes. The results show that the openness condition is useful for selecting more accurate identity information.

16 pages, 4290 KiB  
Article
Enhancing SSVEP-Based Brain-Computer Interface with Two-Step Task-Related Component Analysis
by Hyeon Kyu Lee and Young-Seok Choi
Sensors 2021, 21(4), 1315; https://doi.org/10.3390/s21041315 - 12 Feb 2021
Cited by 4 | Viewed by 3226
Abstract
Among the various methods for frequency recognition in steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) studies, task-related component analysis (TRCA), which extracts discriminative spatial filters for classifying electroencephalogram (EEG) signals, has gathered much interest. The TRCA-based SSVEP method yields lower computational cost and higher classification performance than existing SSVEP methods. Despite its utility, the TRCA-based SSVEP method still suffers from a degraded frequency recognition rate when EEG signals with a short window length are used. To address this issue, we propose an improved strategy for decoding SSVEPs that is insensitive to window length, carried out as a two-step TRCA. The proposed method reuses the spatial filters corresponding to the target frequencies generated by the TRCA, and then accentuates features for the target frequencies by correlating individual templates and test data. To evaluate the performance of the proposed method, we used a benchmark dataset with 35 subjects and confirmed significantly improved performance compared with other existing SSVEP methods. These results imply its suitability as an efficient frequency recognition strategy for SSVEP-based BCI applications.
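For orientation, here is a compact sketch of the underlying (one-step) TRCA: learn, per stimulation frequency, the spatial filter that maximizes inter-trial covariance, then classify a test trial by its correlation with each class template. The paper's two-step refinement (reusing filters across frequencies and re-correlating with individual templates) is not reproduced, and the data are synthetic:

```python
import numpy as np
from scipy.linalg import eigh

def trca_filter(trials):
    """trials: (n_trials, n_channels, n_samples). Return the spatial filter that
    maximizes inter-trial (task-related) covariance of the projected signal."""
    n_trials, n_ch, _ = trials.shape
    X = trials - trials.mean(axis=2, keepdims=True)
    S = np.zeros((n_ch, n_ch))
    for i in range(n_trials):
        for j in range(n_trials):
            if i != j:
                S += X[i] @ X[j].T
    cat = np.concatenate(list(X), axis=1)        # channels x (trials * samples)
    Q = cat @ cat.T
    return eigh(S, Q)[1][:, -1]                  # eigenvector of the largest eigenvalue

def classify(trial, templates, filters):
    """Correlate the spatially filtered trial with each class's averaged template."""
    rhos = [np.corrcoef(w @ trial, w @ tmpl)[0, 1]
            for tmpl, w in zip(templates, filters)]
    return int(np.argmax(rhos))

# Synthetic SSVEP stand-in: two stimulation frequencies, a fixed mixing vector.
rng = np.random.default_rng(4)
fs, n_samp, n_ch = 250, 250, 4
t = np.arange(n_samp) / fs
mix = rng.standard_normal(n_ch)

def make_trial(f):
    return np.outer(mix, np.sin(2 * np.pi * f * t)) + 0.5 * rng.standard_normal((n_ch, n_samp))

train = [np.stack([make_trial(f) for _ in range(6)]) for f in (10.0, 12.0)]
templates = [tr.mean(axis=0) for tr in train]
filters = [trca_filter(tr) for tr in train]
pred0 = classify(make_trial(10.0), templates, filters)
pred1 = classify(make_trial(12.0), templates, filters)
```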

14 pages, 727 KiB  
Article
Induction of Neural Plasticity Using a Low-Cost Open Source Brain-Computer Interface and a 3D-Printed Wrist Exoskeleton
by Mads Jochumsen, Taha Al Muhammadee Janjua, Juan Carlos Arceo, Jimmy Lauber, Emilie Simoneau Buessinger and Rasmus Leck Kæseler
Sensors 2021, 21(2), 572; https://doi.org/10.3390/s21020572 - 15 Jan 2021
Cited by 12 | Viewed by 3705
Abstract
Brain-computer interfaces (BCIs) have been proven to be useful for stroke rehabilitation, but a number of factors impede the use of this technology in rehabilitation clinics and at home, chief among them the usability and cost of the BCI system. The aims of this study were to develop a cheap 3D-printed wrist exoskeleton that can be controlled by a cheap open-source BCI (OpenViBE), and to determine whether training with such a setup could induce neural plasticity. Eleven healthy volunteers imagined wrist extensions, which were detected from single-trial electroencephalography (EEG); in response, the wrist exoskeleton replicated the intended movement. Motor-evoked potentials (MEPs) elicited using transcranial magnetic stimulation were measured before, immediately after, and 30 min after BCI training with the exoskeleton. The BCI system had a true positive rate of 86 ± 12% with 1.20 ± 0.57 false detections per minute. Compared with the measurement before BCI training, the MEPs increased by 35 ± 60% immediately after and by 67 ± 60% 30 min after training. There was no association between BCI performance and the induction of plasticity. In conclusion, it is possible to detect imagined movements using an open-source BCI setup and to control a cheap 3D-printed exoskeleton that, combined with the BCI, can induce neural plasticity. These findings may promote the availability of BCI technology for rehabilitation clinics and home use. However, the usability must be improved, and further tests are needed with stroke patients.

17 pages, 2604 KiB  
Article
Implementation of an Online Auditory Attention Detection Model with Electroencephalography in a Dichotomous Listening Experiment
by Seung-Cheol Baek, Jae Ho Chung and Yoonseob Lim
Sensors 2021, 21(2), 531; https://doi.org/10.3390/s21020531 - 13 Jan 2021
Cited by 4 | Viewed by 2947
Abstract
Auditory attention detection (AAD) is the tracking of a sound source to which a listener is attending based on neural signals. Despite expectation for the applicability of AAD in real-life, most AAD research has been conducted on recorded electroencephalograms (EEGs), which is far from online implementation. In the present study, we attempted to propose an online AAD model and to implement it on a streaming EEG. The proposed model was devised by introducing a sliding window into the linear decoder model and was simulated using two datasets obtained from separate experiments to evaluate the feasibility. After simulation, the online model was constructed and evaluated based on the streaming EEG of an individual, acquired during a dichotomous listening experiment. Our model was able to detect the transient direction of a participant’s attention on the order of one second during the experiment and showed up to 70% average detection accuracy. We expect that the proposed online model could be applied to develop adaptive hearing aids or neurofeedback training for auditory attention and speech perception.
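The sliding-window linear decoder can be sketched as stimulus reconstruction: regress a time-lagged EEG matrix onto the attended speech envelope, then decide per window which candidate envelope correlates better with the reconstruction. The following toy simulation uses assumed parameters (64 Hz, 8 channels, synthetic envelopes) and is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(5)
fs, n, n_ch, n_lags = 64, 64 * 60, 8, 16      # 60 s of 64 Hz data, 8 channels

def smooth(x, k=16):
    return np.convolve(x, np.ones(k) / k, mode="same")

env_att = smooth(rng.standard_normal(n))      # attended speech envelope
env_un = smooth(rng.standard_normal(n))       # unattended speech envelope

# Each channel tracks the attended envelope at some lag, plus noise.
eeg = np.stack([np.roll(env_att, lag) + 0.5 * rng.standard_normal(n)
                for lag in rng.integers(0, n_lags, n_ch)])

def lagged(eeg, n_lags):
    """Time-lagged design matrix of shape (n_samples, n_ch * n_lags)."""
    return np.concatenate([np.roll(eeg, -l, axis=1) for l in range(n_lags)], axis=0).T

X = lagged(eeg, n_lags)
half = n // 2                                  # train on the first half
ridge = X[:half].T @ X[:half] + 1e2 * np.eye(X.shape[1])
w = np.linalg.solve(ridge, X[:half].T @ env_att[:half])

def sliding_aad(X, env_a, env_b, win):
    """Per window, correlate the reconstructed envelope with both candidate
    envelopes and pick the better match (0 = first stream attended)."""
    recon = X @ w
    decisions = []
    for s in range(0, len(recon) - win + 1, win):
        sl = slice(s, s + win)
        r_a = np.corrcoef(recon[sl], env_a[sl])[0, 1]
        r_b = np.corrcoef(recon[sl], env_b[sl])[0, 1]
        decisions.append(0 if r_a > r_b else 1)
    return decisions

decisions = sliding_aad(X[half:], env_att[half:], env_un[half:], win=fs)  # 1 s windows
accuracy = float(np.mean(np.array(decisions) == 0))
```

Shorter windows give faster attention tracking at the cost of noisier correlation estimates, which is the trade-off the online model navigates.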

2020


15 pages, 2954 KiB  
Article
Developing a Motor Imagery-Based Real-Time Asynchronous Hybrid BCI Controller for a Lower-Limb Exoskeleton
by Junhyuk Choi, Keun Tae Kim, Ji Hyeok Jeong, Laehyun Kim, Song Joo Lee and Hyungmin Kim
Sensors 2020, 20(24), 7309; https://doi.org/10.3390/s20247309 - 19 Dec 2020
Cited by 43 | Viewed by 5536
Abstract
This study aimed to develop an intuitive gait-related motor imagery (MI)-based hybrid brain-computer interface (BCI) controller for a lower-limb exoskeleton and to investigate the feasibility of the controller under a practical scenario including stand-up, gait-forward, and sit-down. A filter bank common spatial pattern (FBCSP) and mutual information-based best individual feature (MIBIF) selection were used to decode MI electroencephalogram (EEG) signals and extract a feature matrix as input to a support vector machine (SVM) classifier. A successive eye-blink switch was combined sequentially with the EEG decoder to operate the lower-limb exoskeleton. Ten subjects demonstrated more than 80% accuracy both offline (training) and online. All subjects successfully completed a gait task wearing the lower-limb exoskeleton through the developed real-time BCI controller. The BCI controller achieved a time ratio of 1.45 compared with a manual smartwatch controller. The developed system can potentially benefit people with neurological disorders who may have difficulty operating manual controls.
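The FBCSP + MIBIF + SVM decoding chain can be illustrated end to end. The sketch below runs on synthetic two-class data; the band edges, filter order, feature count, and trial sizes are illustrative assumptions, and the score reported is training accuracy only, not the study's result:

```python
import numpy as np
from scipy.signal import butter, lfilter
from scipy.linalg import eigh
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

rng = np.random.default_rng(6)
fs, n_ch, n_samp, n_trials = 100, 6, 200, 40

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, x, axis=-1)

def csp_filters(x0, x1, n_pairs=1):
    """x0, x1: (n_trials, n_ch, n_samp). Return 2*n_pairs CSP spatial filters."""
    cov = lambda x: np.mean([t @ t.T / np.trace(t @ t.T) for t in x], axis=0)
    c0, c1 = cov(x0), cov(x1)
    vecs = eigh(c0, c0 + c1)[1]                 # generalized eigenvectors, ascending
    return np.concatenate([vecs[:, :n_pairs], vecs[:, -n_pairs:]], axis=1).T

def log_var(trials, filters):
    """Normalized log-variance of each spatially filtered trial (CSP features)."""
    v = np.einsum("fc,tcs->tfs", filters, trials).var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

# Synthetic two-class MI stand-in: class-specific band power on different channels.
def make_trials(cls):
    x = rng.standard_normal((n_trials, n_ch, n_samp))
    t = np.arange(n_samp) / fs
    if cls == 0:
        x[:, 0] += 2 * np.sin(2 * np.pi * 10 * t)   # mu-band power on channel 0
    else:
        x[:, 3] += 2 * np.sin(2 * np.pi * 22 * t)   # beta-band power on channel 3
    return x

X0, X1 = make_trials(0), make_trials(1)
labels = np.array([0] * n_trials + [1] * n_trials)

blocks = []
for lo, hi in [(8, 12), (12, 16), (16, 20), (20, 24)]:   # the filter bank
    f0, f1 = bandpass(X0, lo, hi), bandpass(X1, lo, hi)
    W = csp_filters(f0, f1)
    blocks.append(np.vstack([log_var(f0, W), log_var(f1, W)]))
F = np.concatenate(blocks, axis=1)

# MIBIF-style step: keep the k features with highest mutual information.
mi = mutual_info_classif(F, labels, random_state=0)
best = np.argsort(mi)[-4:]
clf = SVC(kernel="linear").fit(F[:, best], labels)
acc = clf.score(F[:, best], labels)
```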

16 pages, 3708 KiB  
Article
Design of Wearable EEG Devices Specialized for Passive Brain–Computer Interface Applications
by Seonghun Park, Chang-Hee Han and Chang-Hwan Im
Sensors 2020, 20(16), 4572; https://doi.org/10.3390/s20164572 - 14 Aug 2020
Cited by 21 | Viewed by 5690
Abstract
Owing to the increased public interest in passive brain–computer interface (pBCI) applications, many wearable devices for capturing electroencephalogram (EEG) signals in daily life have recently been released on the market. However, there exists no well-established criterion to determine the electrode configuration for such devices. Herein, an overall procedure is proposed to determine the optimal electrode configurations of wearable EEG devices that yield the optimal performance for intended pBCI applications. We utilized two EEG datasets recorded in different experiments designed to modulate emotional or attentional states. Emotion-specialized EEG headsets were designed to maximize the accuracy of classification of different emotional states using the emotion-associated EEG dataset, and attention-specialized EEG headsets were designed to maximize the temporal correlation between the EEG index and the behavioral attention index. General purpose electrode configurations were designed to maximize the overall performance in both applications for different numbers of electrodes (2, 4, 6, and 8). The performance was then compared with that of existing wearable EEG devices. Simulations indicated that the proposed electrode configurations allowed for more accurate estimation of the users’ emotional and attentional states than the conventional electrode configurations, suggesting that wearable EEG devices should be designed according to the well-established EEG datasets associated with the target pBCI applications.
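Data-driven electrode selection of this kind can be approximated by a simple greedy forward search that adds, at each step, the channel that most improves cross-validated accuracy. This is a sketch on synthetic data; the classifier, channel count, and informative-channel setup are our assumptions, not the authors' procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_ch = 200, 16
X = rng.standard_normal((n_trials, n_ch))
# Hypothetical setup: only channels 2 and 11 carry state information.
y = (X[:, 2] - X[:, 11] > 0).astype(int)

def greedy_channel_selection(X, y, n_keep):
    """Forward selection: repeatedly add the channel that maximizes CV accuracy."""
    chosen, remaining = [], list(range(X.shape[1]))
    while len(chosen) < n_keep:
        scores = {c: cross_val_score(LogisticRegression(),
                                     X[:, chosen + [c]], y, cv=3).mean()
                  for c in remaining}
        best = max(scores, key=scores.get)
        chosen.append(best)
        remaining.remove(best)
    return chosen

picked = greedy_channel_selection(X, y, n_keep=2)
```

On real data the fitness function would be the target pBCI metric (emotion classification accuracy, or correlation with a behavioral attention index), evaluated on the application-specific dataset.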
