Article

Wireless Mouth Motion Recognition System Based on EEG-EMG Sensors for Severe Speech Impairments

by Kee S. Moon *, John S. Kang *, Sung Q. Lee, Jeff Thompson and Nicholas Satterlee

Department of Mechanical Engineering, San Diego State University, San Diego, CA 92182, USA

* Authors to whom correspondence should be addressed.
Submission received: 19 April 2024 / Revised: 19 June 2024 / Accepted: 20 June 2024 / Published: 25 June 2024
(This article belongs to the Special Issue Advances in Mobile Sensing for Smart Healthcare)

Abstract

This study demonstrates the feasibility of a new wireless wearable electroencephalography (EEG)–electromyography (EMG) approach that generates characteristic mixed EEG-EMG patterns during mouth movements, enabling the detection of distinct movement patterns for people with severe speech impairments. The paper describes a mouth-movement detection method based on a new signal processing technology suited to sensor integration and machine learning applications, and it examines the relationship between mouth motion and brain waves with the goal of developing a nonverbal interface for people who have lost the ability to communicate, such as those with paralysis. A set of experiments was conducted to assess the efficacy of the proposed feature-selection method, and the resulting classification of mouth movements was found to be meaningful. EEG-EMG signals were also collected while participants silently mouthed phonemes, and a few-shot neural network trained to classify the phonemes from these signals achieved a classification accuracy of 95%. This approach to collecting and processing bioelectrical signals for phoneme recognition offers a promising avenue for future communication aids.
Keywords: biomedical signal processing; wearable biomedical sensors; machine learning; speech disability; human–computer interface
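The abstract does not specify the authors' network architecture or feature pipeline. As a rough illustration of the few-shot classification idea it describes, the sketch below classifies synthetic two-channel (EEG + EMG) windows by nearest class prototype, built from only a handful of labeled trials per phoneme. The sampling rate, phoneme set, band-power features, and all names here are hypothetical stand-ins, not the authors' method.

```python
import numpy as np

# Illustrative sketch only: a prototypical (nearest-class-mean) few-shot
# classifier over crude spectral features of synthetic EEG-EMG windows.

rng = np.random.default_rng(0)

FS = 250                              # assumed sampling rate (Hz)
WINDOW = FS                           # 1 s window per mouthed phoneme
PHONEMES = ["a", "e", "i", "o", "u"]  # hypothetical phoneme set

def synthetic_trial(label_idx):
    """Fake EEG-EMG trial: two channels with a label-dependent tone plus noise."""
    t = np.arange(WINDOW) / FS
    eeg = np.sin(2 * np.pi * (8 + 4 * label_idx) * t) + 0.5 * rng.standard_normal(WINDOW)
    emg = np.sin(2 * np.pi * (40 + 5 * label_idx) * t) + 0.5 * rng.standard_normal(WINDOW)
    return np.stack([eeg, emg])

def band_power_features(trial):
    """Log power in a few fixed frequency bands, per channel."""
    feats = []
    freqs = np.fft.rfftfreq(WINDOW, d=1 / FS)
    for ch in trial:
        spec = np.abs(np.fft.rfft(ch)) ** 2
        for lo, hi in [(4, 8), (8, 13), (13, 30), (30, 62)]:
            feats.append(np.log(spec[(freqs >= lo) & (freqs < hi)].mean() + 1e-12))
    return np.array(feats)

# Few-shot setup: only K labeled "support" trials per phoneme.
K = 5
prototypes = {}
for idx, ph in enumerate(PHONEMES):
    support = np.stack([band_power_features(synthetic_trial(idx)) for _ in range(K)])
    prototypes[ph] = support.mean(axis=0)   # class prototype = mean feature vector

def classify(trial):
    """Assign the phoneme whose prototype is nearest in feature space."""
    f = band_power_features(trial)
    return min(prototypes, key=lambda ph: np.linalg.norm(f - prototypes[ph]))

# Evaluate on fresh synthetic "query" trials.
correct = total = 0
for idx, ph in enumerate(PHONEMES):
    for _ in range(20):
        correct += classify(synthetic_trial(idx)) == ph
        total += 1
print(f"few-shot accuracy on synthetic data: {correct / total:.2%}")
```

A nearest-prototype classifier is a common few-shot baseline; the 95% accuracy reported in the abstract refers to the authors' actual network and recorded EEG-EMG data, not to this toy setup.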

Share and Cite

MDPI and ACS Style

Moon, K.S.; Kang, J.S.; Lee, S.Q.; Thompson, J.; Satterlee, N. Wireless Mouth Motion Recognition System Based on EEG-EMG Sensors for Severe Speech Impairments. Sensors 2024, 24, 4125. https://doi.org/10.3390/s24134125

AMA Style

Moon KS, Kang JS, Lee SQ, Thompson J, Satterlee N. Wireless Mouth Motion Recognition System Based on EEG-EMG Sensors for Severe Speech Impairments. Sensors. 2024; 24(13):4125. https://doi.org/10.3390/s24134125

Chicago/Turabian Style

Moon, Kee S., John S. Kang, Sung Q. Lee, Jeff Thompson, and Nicholas Satterlee. 2024. "Wireless Mouth Motion Recognition System Based on EEG-EMG Sensors for Severe Speech Impairments" Sensors 24, no. 13: 4125. https://doi.org/10.3390/s24134125

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
