
Study of the Home-Auxiliary Robot Based on BCI

Fuwang Wang, Xiaolei Zhang, Rongrong Fu and Guangbin Sun

1 School of Mechanical Engineering, Northeast Electric Power University, Jilin 132012, China
2 College of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China
3 Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China
* Author to whom correspondence should be addressed.
Submission received: 31 March 2018 / Revised: 20 May 2018 / Accepted: 29 May 2018 / Published: 1 June 2018
(This article belongs to the Special Issue Sensors for Biosignal Processing)

Abstract

A home-auxiliary robot platform is developed in the current study to assist patients with physical disabilities and older persons with mobility impairments. The robot, controlled mainly by brain-computer interface (BCI) technology, can perform actions not only within a person's field of vision, but also outside it. Wavelet decomposition (WD) is used to extract the δ (0–4 Hz) and θ (4–8 Hz) sub-bands of the subjects' electroencephalogram (EEG) signals. The correlation between pairs of 14 EEG channels is determined with synchronization likelihood (SL), and the brain network structure is generated. Then, the motion characteristics are analyzed using the brain network parameters clustering coefficient (C) and global efficiency (G). Meanwhile, the eye movement characteristics in the F3 and F4 channels are identified. Finally, the motion characteristics identified by the brain networks and the eye movement characteristics are used to control the home-auxiliary robot platform. The experimental results show that the accuracy rate of left and right motion recognition using this method is more than 93%. Additionally, the similarity between the autonomous return path and the original path of the home-auxiliary robot reaches up to 0.89.

1. Introduction

As people age, limb movement becomes more and more difficult, which brings inconvenience to their lives. Additionally, some people lose the normal neuromuscular pathways through which the brain exchanges information with the outside world. To address this problem, brain-computer interface (BCI) technology has been developed by researchers. BCI technology establishes a direct communication channel between the brain and a controlled device, making it possible for the human brain to directly control external devices and assist humans in performing related operations [1,2,3,4]. Previous studies have collected distinct human motion characteristics using the Event-Related Potential (ERP) [5,6,7,8] and the Steady-State Visual Evoked Potential (SSVEP) [9,10] paradigms. These two paradigms, which have strong anti-interference ability, are widely used in BCI research. A BCI system that is highly accepted by users should be non-invasive, safe, and practical to use. Many research teams have developed BCI systems to control the navigation of humanoid robots [11,12,13,14,15,16,17,18,19], assistive exoskeletons [20], flying robots [21,22], robotic wheelchairs [21,23,24], and wheeled robots [25,26,27]. Moreover, telepresence robots controlled by electroencephalogram (EEG) signals are an active area of research [25,28].
Research shows that the theory of modern complex networks has been used extensively to model human brain function [29]. Brain connectivity analysis has proven to be a very effective and informative way to explore brain function and neural activity [30,31,32,33], and complex network theory has been used to analyze human brain activity [34,35]. One study presented a small-world structure of the brain from the functional connectivity point of view, in which the functional relationships of the human brain are characterized by the structural properties of the network [36]. Two such structural properties, the clustering coefficient and the global efficiency, are used to analyze the motion characteristics in this study. In BCI experiments, the electrooculogram (EOG) is usually removed as an interference signal [37,38]. In contrast, the authors combined BCI technology with the characteristics of the EOG signal to control the auxiliary robot to complete left-right movements. Additionally, the robot's autonomous return mode, by which it goes back to its original starting point, reduces the workload of the user. This makes the system more convenient to use in home applications.

2. Experiment

2.1. Subjects

A total of 10 healthy subjects (eight males and two females; aged 27 ± 1.3 (standard deviation) years) were randomly selected from a group of volunteers for this experiment. All subjects had no visual impairment or history of neurological disease, and were asked to refrain from consuming any type of stimulant, such as alcohol, tea, or coffee, during the experiment. Figure 1 shows the experimental setup.

2.2. Experimental Process

The authors used Neuroscan as the EEG acquisition device, and its electrodes were attached to the scalp according to the international 10–20 system (30 channels = FP1, FP2, F7, F3, FZ, F4, F8, FT7, FC3, FCZ, FC4, FT8, T3, C3, CZ, C4, T4, TP7, CP3, CPZ, CP4, TP8, T5, P3, PZ, P4, T6, O1, OZ, and O2), which can detect the brain activity features reflected in frontal, central, and posterior regions of the human brain [39,40].
The subjects in this study were seated in a room as shown in Figure 1a. The home-auxiliary robot (TurtleBot) was placed in another room as shown in Figure 1b. Subjects controlled the robot using a wireless video sensor and the wireless radio frequency communication device NRF905 (Zhejiang Hangzhou Yuze Electronic Company, Hangzhou, China). Two types of experimental actions were performed per subject to keep the robot's grayscale sensor from leaving the track. In the first action, the subject looked at the cross mark (Figure 2), moved their eyes to the end of the left red rectangle, moved their eyes back to the cross mark, and then returned their gaze to the robot's visual monitoring window to watch the robot walking. The eye movement process was completed within 0.3 s. In the second action, the subject looked at the cross mark (Figure 2), moved their eyes to the end of the right red rectangle, and then completed the process in the same way as in the first action. The model of visual evoked stimulation is shown in Figure 2.
The robot received a steering command through the wireless device NRF905 (A) and turned 15 degrees each time. After the turn was completed, the robot continued to travel at a speed of 0.05 m/s until the next steering command was received. Each subject controlled the robot to walk along the track using EEG and EOG signals, and then allowed the robot to autonomously return to its starting position along the original track. The path the robot walked was recorded by a PC each time.
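As a concrete illustration, the sketch below shows how such a command loop could be structured. The robot interface (stop, rotate, drive) is a hypothetical placeholder, not the actual TurtleBot API; only the 15-degree step and the 0.05 m/s cruise speed come from the protocol described above.

```python
TURN_STEP_DEG = 15     # each steering command turns the robot 15 degrees
CRUISE_SPEED = 0.05    # m/s; cruising resumes once the turn completes

def on_steering_command(robot, direction):
    """Handle one steering command received over the NRF905 link.

    direction: +1 for a left turn, -1 for a right turn.
    robot.stop/rotate/drive are assumed placeholder methods,
    standing in for the real drive interface.
    """
    robot.stop()
    robot.rotate(direction * TURN_STEP_DEG)  # turn in place by 15 degrees
    robot.drive(CRUISE_SPEED)                # resume forward travel
```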
All subjects were informed about the research background and the study protocol, and all of them gave written informed consent to be included in the study. The Ethics Committee of the Northeast Electric Power University Hospital approved the study protocol, in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). Subjects were free to participate in the experiment or to withdraw at any time.

2.3. Data Pre-Processing

EEG signals are easily contaminated by noise, so the raw EEG signals needed to be denoised first. The wavelet decomposition (WD), which can be viewed as a continuous-time wavelet decomposition sampled at different frequencies at every level, enabled the authors to discriminate between non-stationary signals with different frequency features [41]. Studies have shown that the WD is more efficient than other frequency-domain methods [42]. The WD comprises the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT). For an input signal X(t), the CWT is defined as:
$$\mathrm{CWT}(a,b)=\int X(t)\,\psi_{a,b}^{*}(t)\,dt \qquad (1)$$
where * denotes the complex conjugate, a ∈ R⁺ is the scale parameter, b ∈ R⁺ is the translation, and ψ_{a,b}(t) is obtained by translating the prototype wavelet ψ(t) to time b and scaling it by a as follows:
$$\psi_{a,b}(t)=\frac{1}{\sqrt{a}}\,\psi\!\left(\frac{t-b}{a}\right) \qquad (2)$$
Orthogonal dyadic functions are often chosen as the mother wavelet in the WD; these are defined as:
$$\psi_{j,k}(t)=2^{j/2}\,\psi\!\left(2^{j}t-k\right) \qquad (3)$$
where {ψ_{j,k}(t) : j, k ∈ Z} forms an orthonormal basis for L²(R).
The DWT analyzes the signal at different frequency bands with different resolutions. The decomposition of the signal into different frequency bands is accomplished by successive high-pass and low-pass filtering of the time-domain signal. The original signal x[n] is first passed through a half-band high-pass filter g[n] and a low-pass filter h[n]. Then, half of the samples are eliminated according to Nyquist's rule. This procedure can be represented as follows:
$$Y_{\mathrm{high}}[k]=\sum_{n} x[n]\,g[2k-n] \qquad (4)$$

$$Y_{\mathrm{low}}[k]=\sum_{n} x[n]\,h[2k-n] \qquad (5)$$
where Y_low[k] and Y_high[k] are the outputs of the low-pass and high-pass filters, respectively, after sub-sampling by a factor of two. The sub-sampling process is presented in Figure 3.
Butterworth high-pass and low-pass filters with cut-off frequencies of 0.5 Hz and 32 Hz were used to remove artifacts and noise from the EEG signals. The EEG data were then divided into low and high wavelet coefficients, and these were in turn divided into their sub-high and sub-low wavelet coefficients. From the original EEG signal, the authors thus obtained the δ (0–4 Hz) and θ (4–8 Hz) sub-bands.
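The sketch below illustrates this pipeline under stated assumptions: a sampling rate of 128 Hz (so that four dyadic levels land exactly on the 0–4 Hz and 4–8 Hz bands) and the db4 mother wavelet, neither of which is specified in the paper.

```python
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

def extract_delta_theta(eeg, fs=128):
    """Band-limit one EEG channel to 0.5-32 Hz, then use a dyadic
    wavelet decomposition to isolate the delta (0-4 Hz) and theta
    (4-8 Hz) sub-bands. fs = 128 Hz and 'db4' are assumptions."""
    # Butterworth band-pass, 0.5-32 Hz, applied zero-phase
    b, a = butter(4, [0.5 / (fs / 2), 32 / (fs / 2)], btype="band")
    eeg = filtfilt(b, a, eeg)

    # 4-level DWT: with fs = 128 Hz, the approximation A4 spans
    # 0-4 Hz (delta) and the detail D4 spans 4-8 Hz (theta)
    coeffs = pywt.wavedec(eeg, "db4", level=4)
    cA4, cD4 = coeffs[0], coeffs[1]

    # reconstruct each sub-band back into a time-domain signal
    delta = pywt.waverec([cA4] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
    theta = pywt.waverec([np.zeros_like(coeffs[0]), cD4]
                         + [np.zeros_like(c) for c in coeffs[2:]], "db4")
    return delta[: len(eeg)], theta[: len(eeg)]
```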

3. Algorithm

3.1. Brain Network

Research has shown that a number of cortical and sub-cortical regions in different parts of the brain are activated when human beings process complex information [43]. Every region of the brain is taken as a node, and the connections between brain regions are taken as edges. The brain connectivity analysis of the subjects was used to express the differences in neural activity between the two brain hemispheres. The connection between pairs of EEG channels was determined with the synchronization likelihood (SL), whose calculation is described below.
Consider time series x_{k,i} (k = 1, …, M; i = 1, …, N), where k denotes the channel and i the discrete time. With a given embedding dimension m and lag l, the embedded vectors can be denoted as:
$$X_{k,i}=\left(x_{k,i},\;x_{k,i+l},\;x_{k,i+2l},\;\ldots,\;x_{k,i+(m-1)l}\right) \qquad (6)$$
The probability that the distance between pairs of embedded vectors is less than ε is:
$$P_{k,i}^{\varepsilon}=\frac{1}{2(\omega_{2}-\omega_{1})}\sum_{\substack{j=1\\ \omega_{1}<|j-i|<\omega_{2}}}^{N}\theta\!\left(\varepsilon-\left|X_{k,i}-X_{k,j}\right|\right) \qquad (7)$$
where |·| denotes the Euclidean distance, θ is the Heaviside step function, and ω1 and ω2 are two window variables satisfying ω1 « ω2 « N.
For each k and each i, a critical distance ε_{k,i} is determined such that:
$$P_{k,i}^{\varepsilon_{k,i}}=P_{\mathrm{ref}} \qquad (8)$$
in which P_ref « 1. The number of channels in which the distance between the vectors X_{k,i} and X_{k,j} is less than the critical distance is expressed as:
$$H_{i,j}=\sum_{k=1}^{M}\theta\!\left(\varepsilon_{k,i}-\left|X_{k,i}-X_{k,j}\right|\right) \qquad (9)$$
in which ω1 < |j − i| < ω2. The SL for each channel k and each pair of discrete times (i, j) is then defined as:
$$S_{k,i,j}=\frac{H_{i,j}-1}{M-1} \qquad (10)$$
if |X_{k,i} − X_{k,j}| < ε_{k,i}, and S_{k,i,j} = 0 otherwise. Averaging over all j values gives:
$$S_{k,i}=\frac{1}{2(\omega_{2}-\omega_{1})}\sum_{\substack{j=1\\ \omega_{1}<|j-i|<\omega_{2}}}^{N}S_{k,i,j} \qquad (11)$$
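For concreteness, the following sketch implements Equations (6)–(11) for a single pair of channels (M = 2), which is how a pairwise 14-channel correlation matrix can be built. The embedding parameters, P_ref, and the window sizes are illustrative assumptions; the paper does not report the values it used.

```python
import numpy as np

def embed(x, m=10, lag=10):
    """Time-delay embedding, Eq. (6). m and lag are assumed values."""
    n = len(x) - (m - 1) * lag
    return np.stack([x[i : i + n] for i in range(0, m * lag, lag)], axis=1)

def synchronization_likelihood(x, y, pref=0.05, w1=100, w2=400):
    """Pairwise SL following Eqs. (6)-(11) with M = 2 channels."""
    X = [embed(x), embed(y)]
    n = X[0].shape[0]
    # Euclidean distances between embedded vectors, per channel
    D = [np.linalg.norm(V[:, None, :] - V[None, :, :], axis=2) for V in X]
    # window constraint w1 < |i - j| < w2
    lagmat = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    window = (lagmat > w1) & (lagmat < w2)
    values = []
    for i in range(n):
        js = np.flatnonzero(window[i])
        if js.size == 0:
            continue
        # critical distances eps_{k,i} chosen so that Eq. (8) holds
        eps = [np.quantile(d[i, js], pref) for d in D]
        near = [d[i, js] <= e for d, e in zip(D, eps)]  # recurrences per channel
        if near[0].sum() > 0:
            # Eqs. (10)-(11) for M = 2: the fraction of channel-1
            # recurrences that are also channel-2 recurrences
            values.append((near[0] & near[1]).sum() / near[0].sum())
    return float(np.mean(values)) if values else 0.0
```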
The correlations between pairs of the 14 selected channels (F7, F3, F4, F8, FT7, FT8, C3, C4, TP7, TP8, P3, P4, O1, and O2) were calculated using Equation (11), and the brain networks were then formed. The steps of the brain connectivity analysis are as follows.
First, the data of the 14 EEG channels were collected, and the δ and θ sub-bands were extracted from the EEG signals. The adjacency matrix was then computed for each sub-band (δ and θ): each matrix element corresponded to the SL between a pair of channels' EEG signals. The entries on the main diagonal were equal to 1, because each EEG signal is perfectly correlated with itself, and the matrix is symmetric about the main diagonal, that is, C_ij = C_ji. An edge was deemed to exist between nodes i and j if their SL was greater than a fixed threshold T; otherwise, no edge existed between i and j. Finally, the networks were formed from the adjacency matrix and the threshold value.
The SL values lie in the range P_ref ≤ SL ≤ 1, where P_ref is the minimum value, close to 0, and values close to 1 correspond to maximally synchronous signals. To compare the clustering coefficient (C) and global efficiency (G) of the brain networks between the left and right hemispheres, networks were formed for the two hemispheres. The authors explored a whole range of threshold values, 0.01 < T < 0.12, in increments of 0.005. Figure 4 shows the comparison of C and G for the two hemispheres.
Over the whole range of threshold values (0.01–0.12), Figure 4 shows a significant difference in C between the hemispheres when 0.07 < T < 0.11, and a significant difference in G when 0.08 < T < 0.11. The mean value over these ranges (T = 0.092) was therefore chosen as the fixed threshold. Using this fixed threshold, the network parameters C and G were computed for all of the subjects' brain networks.
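A minimal sketch of this network-formation step, using the fixed threshold chosen above:

```python
import numpy as np

def build_network(sl_matrix: np.ndarray, T: float = 0.092) -> np.ndarray:
    """Form the binary brain network from the 14x14 SL matrix using
    the fixed threshold T = 0.092 chosen above. An edge exists
    between channels i and j when SL_ij > T."""
    adj = (sl_matrix > T).astype(int)
    np.fill_diagonal(adj, 0)  # self-correlations on the diagonal are not edges
    return adj
```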
C and G were used to analyze the functional differences of the complex brain networks. These are explained in the following subsections.

3.1.1. Clustering Coefficient

The degree of a node, i.e., the number of edges connected to it, indicates the importance of that node in a network. C can be expressed as the ratio of the number of existing edges between the neighbors of a node to the maximum possible number of such edges [44,45]. Its formula is:
$$C_{i}=\frac{E_{i}}{D_{i}(D_{i}-1)/2} \qquad (12)$$
in which E_i is the number of existing edges between the neighbors of node i and D_i is the degree of that node; D_i(D_i − 1)/2 is the maximum possible number of edges between the neighbors of node i [45].
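A direct translation of Equation (12) into code, assuming a binary, symmetric adjacency matrix such as the one produced above:

```python
import numpy as np

def clustering_coefficient(adj: np.ndarray, i: int) -> float:
    """Clustering coefficient of node i, Eq. (12), for a binary,
    symmetric adjacency matrix with a zero diagonal."""
    neighbors = np.flatnonzero(adj[i])
    d = len(neighbors)                      # D_i, degree of node i
    if d < 2:
        return 0.0
    # E_i: edges actually present among the neighbors of i
    e = adj[np.ix_(neighbors, neighbors)].sum() / 2
    return e / (d * (d - 1) / 2)
```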

3.1.2. Global Efficiency

G expresses the degree of integration of a network, which is associated with the speed at which the human brain processes information. The path length L_{i,j} between two nodes i and j is the minimum number of edges needed to connect them, and the nodal efficiency is its inverse. The characteristic path length of node i is defined as [44,45]:
$$L_{i}=\frac{1}{N-1}\sum_{i\neq j\in G}L_{i,j} \qquad (13)$$
where L_{i,j} is the minimum path length and N is the number of nodes in the network. The average of the nodal efficiencies over all nodes estimates G. Thus, the global efficiency (G) can be defined by:
$$G=E_{\mathrm{global}}=\frac{1}{N(N-1)}\sum_{i\neq j\in G}\frac{1}{L_{i,j}} \qquad (14)$$
Equation (14) expresses the fact that networks characterized by a short minimum path length between any pair of regional nodes have high global efficiency [46,47]. Combined with Equation (12), this means that the larger the values of C and G, the faster a node transmits information to the others.
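Equation (14) can be computed with a breadth-first search over the binary network; the sketch below treats disconnected pairs as contributing zero efficiency, which is an implementation convention rather than something stated in the paper.

```python
from collections import deque
import numpy as np

def global_efficiency(adj: np.ndarray) -> float:
    """Global efficiency G, Eq. (14): the average inverse shortest
    path length over all node pairs; unreachable pairs contribute 0."""
    n = adj.shape[0]
    inv_sum = 0.0
    for s in range(n):
        # BFS shortest path lengths from source node s
        dist = np.full(n, -1)
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(adj[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        inv_sum += sum(1.0 / dist[t] for t in range(n) if t != s and dist[t] > 0)
    return inv_sum / (n * (n - 1))
```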

3.2. Motion Feature Recognition

Obvious fluctuations occur in the F3 and F4 channels when the eyes move left or right, and the directions of the fluctuations are opposite. The result is shown in Figure 5.
Together with the fluctuations in the F3 and F4 channels, the brain topography showed a significant difference between the left and right hemispheres, indicating differences in neural activity between them. In the brain topography, low activity is indicated by blue-shaded areas, whereas high activity is indicated by red-shaded areas. Figure 5a shows that the color of the right-brain region is darker than that of the left-brain region, which means that neural activity in the right hemisphere is more active than in the left when a person is moving to the left. The opposite pattern appears, as shown in Figure 5b, when a person is moving to the right.
A moving window with a width of 20 samples was established in the experiment, and the fluctuation characteristics of the eye movement signal were identified using Equation (15):
$$K_{i}=\frac{y(x_{i+20})-y(x_{i})}{20} \qquad (15)$$
The K values of the F3 and F4 channels show opposite fluctuations when a person's eyes move to the left or to the right, and the fluctuations satisfy the condition |K| > 1. The authors used these features (G, C, and K) to judge the direction of movement for the robot.
The eye movement fluctuations in the time-domain signal were identified using Equation (15), and the network parameter values for the left and right hemispheres were calculated. The direction of the subject's motion was then judged from the K value and the brain network parameter values. Taking the left movement as an example, the discrimination logic is shown in Figure 6.
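The sketch below mirrors this discrimination logic under stated assumptions: the slope test follows the |K| > 1 condition, and the hemisphere comparison uses the C and G dominance described in Section 4.1; the exact rule in Figure 6 may differ in detail.

```python
def classify_direction(f3, f4, C_left, C_right, G_left, G_right, win=20):
    """Combine the EOG slope feature K (Eq. 15) on channels F3/F4
    with the hemispheric network parameters C and G. Returns
    'left', 'right', or None when no reliable command is detected."""
    k3 = (f3[win] - f3[0]) / win          # K for channel F3
    k4 = (f4[win] - f4[0]) / win          # K for channel F4
    # eye movement: opposite fluctuations with |K| > 1 on both channels
    if abs(k3) > 1 and abs(k4) > 1 and k3 * k4 < 0:
        if C_right > C_left and G_right > G_left:
            return "left"    # right hemisphere dominant -> left movement
        if C_left > C_right and G_left > G_right:
            return "right"   # left hemisphere dominant -> right movement
    return None
```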

3.3. Autonomous Return

The TurtleBot walked along the track to the destination under the control of the EEG and EOG signals. During travel, the software recorded each direction change as an entry in an array, containing the robot's walking time, steering direction, and steering angle. The arrays are shown in Figure 7.
Figure 7a shows the running track array, in which x is the robot's walking time, y is the steering direction (1: turn left; −1: turn right), and z is the steering angle. Subjects controlled the robot to walk along the track using their EEG and EOG signals, and the direction change information was recorded in the running track array. The return track array was then derived from the running track array by a simple mathematical transformation. Finally, the TurtleBot autonomously returned to its starting position along the original track using the return track array.
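The paper does not spell out the transformation, but one plausible reconstruction is to reverse the order of the recorded turns and invert each steering direction, so that the robot retraces its path back to the start:

```python
from typing import List, Tuple

# Each entry: (walking time before the turn, steering direction
# 1 = left / -1 = right, steering angle in degrees), as in Figure 7a.
Track = List[Tuple[float, int, float]]

def return_track(running: Track) -> Track:
    """Derive the return track array from the running track array
    by reversing the turn order and inverting each direction.
    This is a reconstruction from the description, not the
    paper's verbatim algorithm."""
    return [(t, -d, angle) for (t, d, angle) in reversed(running)]

# Example: a forward track with two 15-degree turns
forward = [(4.0, 1, 15.0), (6.5, -1, 15.0)]
print(return_track(forward))   # [(6.5, 1, 15.0), (4.0, -1, 15.0)]
```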

4. Results

The characteristics of human motion were identified comprehensively by combining the characteristics of the human brain network with those of the eye movement signals during motor imagery.

4.1. Motion Recognition

The correlations between pairs of the 14 EEG channels (F7, F3, F4, F8, FT7, FT8, C3, C4, TP7, TP8, P3, P4, O1, and O2) were calculated using Equation (11), after which the brain networks were formed. Figure 8 shows the brain networks of one subject when he turned right and left in the experiment.
One can clearly see from Figure 8 that the connection density of the right-brain network is significantly higher than that of the left-brain network when the subject turned left. Conversely, the connection density of the left-brain network is significantly higher than that of the right-brain network when the subject turned right. To quantify the density of a brain network, the clustering coefficient (C) and global efficiency (G) were used to calculate and analyze the network characteristics. Figure 9 compares the brain network parameters C and G between a subject turning left and turning right.
Figure 9 shows that there are significant differences in C and G when subjects turn to the left and to the right (P < 0.05). Taking the left movement as an example, the connectivity density of the right hemisphere was higher than that of the left hemisphere when ERP stimulation evoked left motor imagery, and the corresponding brain network parameter values (C and G) of the right hemisphere were larger than those of the left hemisphere. This means that the neuron clusters in the brain regions of the right hemisphere cooperated to complete the corresponding action, so the correlations between them were strong, whereas the neuron clusters of the left hemisphere did not need to cooperate, so the correlations between them were weak.
The authors analyzed the motion recognition accuracy using the combined EEG and EOG features, and, for comparison, using the EOG and EEG signals separately. The accuracy comparison of motion direction recognition is shown in Table 1.
Table 1 shows that the accuracy rate of left and right motion recognition using EEG and EOG together is more than 93%. Compared with the identification methods using EOG or EEG alone, the recognition rate of the proposed method is higher.

4.2. Track Similarity

Subjects controlled the robot to walk along the track by using EEG and EOG signals, and then allowed the robot to autonomously return to its starting position. Additionally, the track of the robot's autonomous return to its starting position was recorded by the PC, as shown in Figure 10. The experiment was conducted eight times for each subject, and the track data of each subject were averaged.
The similarity between the TurtleBot's autonomous return track and the original experimental track reflects the accuracy of the robot motion controlled by EEG and EOG. To determine the similarity of the two tracks, the authors calculated Pearson's correlation coefficient between them. The similarity values calculated for the subjects are shown in Table 2.
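A minimal sketch of this similarity measure, assuming the two tracks have first been resampled to equal-length coordinate sequences (a preprocessing step the paper does not detail):

```python
import numpy as np

def track_similarity(original: np.ndarray, returned: np.ndarray) -> float:
    """Pearson's correlation coefficient between the original track
    and the autonomous return track, both given as equal-length
    1-D coordinate sequences."""
    return float(np.corrcoef(original, returned)[0, 1])
```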
Table 2 shows that the correlation coefficients between the autonomous return track and the original experimental track are all greater than 0.50, indicating a strong correlation between the two tracks. In particular, the correlation coefficients exceed 0.85 for most subjects after six trials. Thus, one can conclude that the method applied in this paper achieves high control precision after proper training, which makes it more convenient to use in home applications.

4.3. Training Effect

Each subject was tested eight times. The number of times the robot deviated from the path was recorded by the off-track counter shown in Figure 1b. The results are shown in Figure 11.
Figure 11 shows that the number of times the TurtleBot deviated from the path decreases significantly as the amount of training increases, which means that the method of using EEG and EOG to control the robot's movement achieves high accuracy after proper training. Additionally, the robot's autonomous return makes it more convenient to use in home applications.

5. Discussion

The authors used EEG and EOG together to identify motion commands and control the TurtleBot to walk along the track and return via the autonomous return mode, which makes it more convenient to use in home applications.

5.1. Previous Studies

Rinku Roy et al. used a Genetic Algorithm (GA) to identify left-right arm movement and achieved a correct recognition rate of 75.77% [48]. Filho et al. used graph metrics to classify hand motor imagery signals, achieving a recognition rate of up to 98% [49]. Research shows that recognizing motor information in brain signals using a BCI is an effective method [50,51,52]. However, although motion characteristics can be identified very accurately from EEG, it has not been confirmed that such methods maintain an efficient recognition rate when an auxiliary robot is controlled in real time outside the field of vision.

5.2. Novel Findings of This Study

This study's results show that the method by which subjects controlled the robot to walk along the track using EEG and EOG achieved high control precision and recognition speed. Additionally, the method records the data of the TurtleBot's walking track, which allows the robot to autonomously return to its starting position; all of this makes it more convenient to use in home applications.

5.3. Limitations and Future Research Lines

High-precision EEG acquisition equipment is expensive and inconvenient to wear, which is not conducive to the popularization and application of this technology. Additionally, this study is limited to using EEG, EOG, and related sensors to control the TurtleBot's path of movement and return. Future research might develop portable equipment that can be worn conveniently and fabricated inexpensively. It could then become a reality for an individual to control a manipulator that easily imitates their motion using motor imagery.

6. Conclusions

A home-auxiliary robot platform was developed which could assist patients with physical disabilities and older persons with mobility impairments. The authors applied BCI technology to practical operations. Combined with ERP visual evoked stimulation, the method, which uses portable EEG acquisition equipment, acquires EEG signals and extracts EEG motion characteristics in real time. The experimental results showed that the accuracy rate of left and right motion recognition using this method is more than 93%. Additionally, the similarity between the autonomous return track and the original track of the home-auxiliary robot reached up to 0.89. Thus, one can conclude that the method applied in this paper achieves high control precision after proper training, which makes it more convenient to use in home applications.

Author Contributions

F.W. and R.F. conceived and designed the experiments; F.W. and X.Z. performed the experiments; G.S. analyzed the data; F.W. contributed reagents/materials/analysis tools; F.W. and X.Z. wrote the paper.

Funding

This research was funded by [National Natural Science Foundation of China] grant number [51605419], [Northeast Electric Power University] grant number [BSJXM-201521], [Jilin City Science and Technology Bureau] grant number [20166012], [Project Agreement for Science and Technology Development of Jilin Province] grant number [20170520099JH].

Acknowledgments

We thank Professor Hong Wang for her help with the experiment.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Voznenko, T.I.; Chepin, E.V.; Urvanov, G.A. The Control System Based on Extended BCI for a Robotic Wheelchair. Procedia Comput. Sci. 2018, 123, 522–527.
2. Zhang, Y.; Guo, D.; Yao, D.; Xu, P. The Extension of Multivariate Synchronization Index Method for SSVEP-based BCI. Neurocomputing 2017, 269, 226–231.
3. Vaughan, T.M.; McFarland, D.J.; Schalk, G.; Sarnacki, W.A.; Krusienski, D.J.; Sellers, E.W.; Wolpaw, J.R. The Wadsworth BCI research and development program: At home with BCI. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 229–233.
4. Buch, E.; Weber, C.; Cohen, L.G.; Braun, C.; Dimyan, M.A.; Ard, T.; Mellinger, J.; Caria, A.; Soekadar, S.; Fourkas, A.; et al. Think to move: A neuromagnetic brain-computer interface (BCI) system for chronic stroke. Stroke 2008, 39, 910–917.
5. Roslan, N.S.; Izhar, L.I.; Faye, I.; Saad, M.N.M.; Sivapalan, S.; Rahman, M.A. Review of EEG and ERP studies of extraversion personality for baseline and cognitive tasks. Personal. Individ. Differ. 2017, 119, 323–332.
6. Jiancheng, Y.U.; Jin, Z.; Wei, L.I. Controlling an Underwater Manipulator via Event-Related Potentials of Brainwaves. Robot 2017, 39, 395–404.
7. Palankar, M.; De Laurentis, K.J.; Alqasemi, R.; Veras, E.; Dubey, R.; Arbel, Y.; Donchin, E. Control of a 9-DoF Wheelchair-mounted robotic arm system using a P300 Brain Computer Interface: Initial experiments. In Proceedings of the IEEE International Conference on Robotics and Biomimetics, Bangkok, Thailand, 22–25 February 2009; pp. 348–353.
8. Waytowich, N.; Henderson, A.; Krusienski, D.; Cox, D. Robot application of a brain computer interface to staubli TX40 robots—Early stages. In Proceedings of the World Automation Congress, Kobe, Japan, 19–23 September 2010; pp. 1–6.
9. Shen, H.; Zhao, L.; Bian, Y.; Xiao, L. Research on SSVEP-Based Controlling System of Multi-DoF Manipulator. In Proceedings of the International Symposium on Neural Networks, Advances in Neural Networks—ISNN 2009, Wuhan, China, 26–29 May 2009; pp. 171–177.
10. Bakardjian, H.; Tanaka, T.; Cichocki, A. Brain control of robotic arm using affective steady-state visual evoked potentials. In Proceedings of the Fifth IASTED International Conference on Human-Computer Interaction (HCI 2010), Maui, HI, USA, 23–25 August 2010.
11. Chae, Y.; Jeong, J.; Jo, S. Toward brain-actuated humanoid robots: Asynchronous direct control using an EEG-based BCI. IEEE Trans. Robot. 2012, 28, 1131–1144.
12. Li, W.; Li, M.; Zhao, J. Control of humanoid robot via motion-onset visual evoked potentials. Front. Syst. Neurosci. 2015, 8, 247.
13. Choi, B.; Jo, S. A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition. PLoS ONE 2013, 8, e74583.
14. Guneysu, A.; Akin, H.L. An SSVEP based BCI to control a humanoid robot by using portable EEG device. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Osaka, Japan, 3–7 July 2013; pp. 6905–6908.
15. Cohen, O.; Druon, S.; Lengagne, S.; Mendelsohn, A.; Malach, R.; Kheddar, A.; Friedman, D. fMRI-Based Robotic Embodiment: Controlling a Humanoid Robot by Thought Using Real-Time fMRI. Presence Teleoper. Virtual Environ. 2014, 23, 229–241.
16. Bryan, M.; Green, J.; Chung, M.; Chang, L.; Scherer, R.; Smith, J.; Rao, R.P. An adaptive brain-computer interface for humanoid robot control. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Bled, Slovenia, 26–28 October 2011; pp. 199–204.
17. Bell, C.J.; Shenoy, P.; Chalodhorn, R.; Rao, R.P. Control of a humanoid robot by a noninvasive brain-computer interface in humans. J. Neural Eng. 2008, 5, 214–220.
18. Iturrate, I.; Chavarriaga, R.; Montesano, L.; Minguez, J.; Millán, J.D.R. Teaching brain-machine interfaces as an alternative paradigm to neuroprosthetics control. Sci. Rep. 2015, 5, 13893.
19. Ofner, P.; Schwarz, A.; Pereira, J.; Müller-Putz, G.R. Upper limb movements can be decoded from the time-domain of low-frequency EEG. PLoS ONE 2017, 12, e0182578.
20. López-Larraz, E.; Trincado-Alonso, F.; Rajasekaran, V.; Pérez-Nombela, S.; del-Ama, A.J.; Aranda, J.; Minguez, J.; Gil-Agudo, A.; Montesano, L. Control of an ambulatory exoskeleton with a brain–machine interface for spinal cord injury gait rehabilitation. Front. Neurosci. 2016, 10, 359.
21. LaFleur, K.; Cassady, K.; Doud, A.; Shades, K.; Rogin, E.; He, B. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface. J. Neural Eng. 2015, 10, 046003.
22. Akce, A.; Johnson, M.; Dantsker, O.; Bretl, T. A Brain-Machine Interface to Navigate a Mobile Robot in a Planar Workspace: Enabling Humans to Fly Simulated Aircraft with EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 21, 306–318.
23. Leeb, R.; Friedman, D.; Müller-Putz, G.R.; Scherer, R.; Slater, M.; Pfurtscheller, G. Self-Paced (Asynchronous) BCI Control of a Wheelchair in Virtual Environments: A Case Study with a Tetraplegic. Comput. Intell. Neurosci. 2007, 2007, 7.
24. Bi, L.; Fan, X.A.; Liu, Y. EEG-Based Brain-Controlled Mobile Robots: A Survey. IEEE Trans. Hum. Mach. Syst. 2013, 43, 161–176.
25. Escolano, C.; Antelis, J.M.; Minguez, J. A telepresence mobile robot controlled with a noninvasive brain-computer interface. IEEE Trans. Syst. Man Cybern. B 2012, 42, 793–804.
26. Barbosa, A.O.G.; Achanccaray, D.R.; Meggiolaro, M.A. Activation of a mobile robot through a brain computer interface. In Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 4815–4821.
27. Millán, J.D.R.; Renkens, F.; Mourino, J.; Gerstner, W. Non-invasive brain-actuated control of a mobile robot. IEEE Trans. Biomed. Eng. 2004, 51, 1026–1033.
28. Leeb, R.; Tonin, L.; Rohm, M.; Desideri, L.; Carlson, T.; Millan, J.D.R. Towards Independence: A BCI telepresence robot for people with severe motor disabilities. Proc. IEEE 2015, 103, 969–982.
29. Stam, C.J.; Reijneveld, J.C. Graph theoretical analysis of complex networks in the brain. Nonlinear Biomed. Phys. 2007, 1, 3.
30. Bullmore, E.T.; Bassett, D.S. Brain graphs: Graphical models of the human brain connectome. Annu. Rev. Clin. Psychol. 2011, 7, 113–140.
31. Bullmore, E.; Sporns, O. Complex brain networks: Graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 2009, 10, 186–198.
32. Rubinov, M.; Sporns, O. Complex network measures of brain connectivity: Uses and interpretations. Neuroimage 2010, 52, 1059–1069.
33. Wang, F.; Wang, H.; Fu, R. Real-Time ECG-based detection of fatigue driving using sample entropy. Entropy 2018, 20, 196.
34. Cai, S.M.; Chen, W.; Liu, D.B.; Tang, M.; Chen, X. Complex network analysis of brain functional connectivity under a multi-step cognitive task. Phys. A Stat. Mech. Appl. 2017, 466, 663–671.
35. Strogatz, S.H. Exploring complex networks. Nature 2001, 410, 268–276.
36. Stam, C.J. Functional connectivity patterns of human magnetoencephalographic recordings: A ‘small-world’ network? Neurosci. Lett. 2004, 355, 25–28.
37. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21.
38. Schlögl, A.; Keinrath, C.; Zimmermann, D.; Scherer, R.; Leeb, R.; Pfurtscheller, G. A fully automated correction method of EOG artifacts in EEG recordings. Clin. Neurophysiol. 2007, 118, 98–104.
39. Volosyak, I.; Gembler, F.; Stawicki, P. Age-related differences in SSVEP-based BCI performance. Neurocomputing 2017, 250, 57–64.
40. Tello, R.M.; Müller, S.M.; Hasan, M.A.; Ferreira, A.; Krishnan, S.; Bastos, T.F. An independent-BCI based on SSVEP using Figure-Ground Perception (FGP). Biomed. Signal Process. Control 2016, 26, 69–79.
41. Misiti, M.; Misiti, Y.; Oppenheim, G.; Poggi, J.M. Wavelet Toolbox; The MathWorks Inc.: Natick, MA, USA, 1996.
42. Akin, M. Comparison of wavelet transform and FFT methods in the analysis of EEG signals. J. Med. Syst. 2002, 26, 241–247.
43. Sporns, O.; Chialvo, D.R.; Kaiser, M.; Hilgetag, C.C. Organization, development and function of complex brain networks. Trends Cogn. Sci. 2004, 8, 418–425.
44. Kranczioch, C.; Zich, C.; Schierholz, I.; Sterr, A. Mobile EEG and its potential to promote the theory and application of imagery-based motor rehabilitation. Int. J. Psychophysiol. 2013, 91, 10–15.
45. Whitham, E.M.; Pope, K.J.; Fitzgibbon, S.P.; Lewis, T.; Clark, C.R.; Loveless, S.; Broberg, M.; Wallace, A.; DeLosAngeles, D.; Lillie, P.; et al. Scalp electrical recording during paralysis: Quantitative evidence that EEG frequencies above 20 Hz are contaminated by EMG. Clin. Neurophysiol. 2007, 118, 1877–1888.
46. Messé, A.; Marrelec, G.; Bellec, P.; Perlbarg, V.; Doyon, J.; Pélégrini-Issac, M.; Benali, H. Comparing structural and functional graph theory features in the human brain using multimodal MRI. IRBM 2012, 33, 244–253.
47. Breckel, T.P.K.; Thiel, C.M.; Giessing, C. The efficiency of functional brain networks does not differ between smokers and non-smokers. Psychiatry Res. Neuroimaging 2013, 214, 349–356.
48. Roy, R.; Mahadevappa, M.; Kumar, C.S. Trajectory path planning of EEG controlled robotic arm using GA. Procedia Comput. Sci. 2016, 84, 147–151.
49. Filho, C.A.S.; Attux, R.; Castellano, G. Can graph metrics be used for EEG-BCIs based on hand motor imagery? Biomed. Signal Process. Control 2018, 40, 359–365.
50. Rai, R.; Deshpande, A.V. Fragmentary shape recognition: A BCI study. Comput. Aided Des. 2016, 71, 51–64.
51. Lange, G.; Low, C.Y.; Johar, K.; Hanapiah, F.A.; Kamaruzaman, F. Classification of electroencephalogram data from hand grasp and release movements for BCI controlled prosthesis. Procedia Technol. 2016, 26, 374–381.
52. Abibullaev, B.; An, J.; Lee, S.H.; Moon, J.I. Design and evaluation of action observation and motor imagery based BCIs using Near-Infrared Spectroscopy. Measurement 2017, 98, 250–261.
Figure 1. Experimental setup; (a) shows the experimental environment in which the subjects are located, (b) shows the TurtleBot and its running track.
Figure 2. The model of visual evoked stimulation.
Figure 3. Sub-band coding algorithm.
Figure 4. (a) Mean clustering coefficient (C) as a function of the threshold when moving to the right; (b) mean global efficiency (G) as a function of the threshold when moving to the right; (c) mean C as a function of the threshold when moving to the left; (d) mean G as a function of the threshold when moving to the left.
Figure 5. The difference regarding eye movement and brain topography when moving right (a) and left (b).
Figure 6. The logic of motion recognition.
Figure 7. Robot motion track array. (a) Running track array; (b) return track array.
Figure 8. The difference in brain networks when moving right and left. (a) Move to the left; (b) move to the right.
Figure 9. The difference in brain network parameters when a subject turns right or left. (a) Move to the left; (b) move to the right.
Figure 10. Track of autonomous return.
Figure 11. The number of times the TurtleBot deviates from the path.
Table 1. Accuracy comparison of motion direction recognition.

Subject      EEG    EOG    Comprehensive (EEG and EOG)
Subject 1    80%    90%    98%
Subject 2    85%    95%    100%
Subject 3    90%    85%    95%
Subject 4    85%    88%    99%
Subject 5    85%    90%    93%
Subject 6    80%    85%    97%
Subject 7    83%    94%    98%
Subject 8    79%    87%    96%
Subject 9    88%    91%    97%
Subject 10   77%    87%    94%
Table 2. The similarity between the autonomous return track and the experimental original track.

Experiment   Subj 1   Subj 2   Subj 3   Subj 4   Subj 5   Subj 6   Subj 7   Subj 8   Subj 9   Subj 10
1            0.7832   0.7533   0.6523   0.7398   0.6813   0.7761   0.8237   0.7934   0.8021   0.7731
2            0.8567   0.7821   0.6822   0.7678   0.7165   0.7324   0.7895   0.8277   0.8314   0.7964
3            0.8927   0.8175   0.7127   0.8136   0.7837   0.7752   0.8109   0.7899   0.8237   0.8311
4            0.8465   0.8013   0.8397   0.8287   0.8019   0.8532   0.8369   0.8322   0.8417   0.8543
5            0.9235   0.8618   0.8156   0.9213   0.8515   0.8314   0.8562   0.8452   0.8253   0.8827
6            0.9011   0.8728   0.8993   0.8986   0.9089   0.9210   0.8613   0.8871   0.8657   0.8375
7            0.9207   0.8922   0.9204   0.9185   0.9102   0.9125   0.8943   0.8793   0.8955   0.8815
8            0.9345   0.9688   0.9158   0.9281   0.9054   0.9227   0.9328   0.9127   0.9003   0.8966
