Article

A Novel Hybrid Mental Spelling Application Based on Eye Tracking and SSVEP-Based BCI

Piotr Stawicki, Felix Gembler, Aya Rezeika and Ivan Volosyak *
Faculty of Technology and Bionics, Rhine-Waal University of Applied Sciences, 47533 Kleve, Germany
* Author to whom correspondence should be addressed.
Submission received: 30 January 2017 / Revised: 14 March 2017 / Accepted: 30 March 2017 / Published: 5 April 2017
(This article belongs to the Special Issue Brain-Computer Interfaces: Current Trends and Novel Applications)

Abstract

Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs), as well as eye tracking devices, provide a pathway for re-establishing communication for people with severe disabilities. We fused these control techniques into a novel eye tracking/SSVEP hybrid system, which utilizes eye tracking for initial rough selection and SSVEP technology for fine target activation. Based on our previous studies, only four stimuli were used for the SSVEP aspect, granting sufficient control to most BCI users. As eye tracking data are not used for the activation of letters, false positives due to inappropriate dwell times are avoided. This novel approach combines the high speed of eye tracking systems and the high classification accuracies of low-target SSVEP-based BCIs, leading to an optimal combination of both methods. We evaluated the accuracy and speed of the proposed hybrid system with a 30-target spelling application implementing all three control approaches (pure eye tracking, pure SSVEP and the hybrid system) with 32 participants. Although the highest information transfer rates (ITRs) were achieved with pure eye tracking, a considerable number of participants were not able to gain sufficient control over the stand-alone eye tracking device or the pure SSVEP system (78.13% and 75% of the participants reached reliable control, respectively). In this respect, the proposed hybrid was the most universal (over 90% of users achieved reliable control) and outperformed the pure SSVEP system in terms of speed and user friendliness. The presented hybrid system might offer communication to a wider range of users than the standard techniques.

1. Introduction

Brain-computer interfaces (BCIs) can provide a communication channel without the involvement of muscular activity [1,2]. Through the detection of specific brain patterns in noninvasively acquired electroencephalogram (EEG) data, users can execute commands in real time. BCIs therefore have the potential to be utilized as assistive technology for people with restricted motor abilities.
In this article, we present a communication system based on the steady-state visual evoked potential (SSVEP) BCI paradigm [3,4,5]. SSVEP-based BCIs can be categorized as reactive BCIs, as they are based on the response to external stimuli: potentials are evoked at a certain frequency if the gaze is fixated on a target flickering at that frequency.
Though SSVEP-based BCIs have been proven usable by most, if not all, healthy users [6,7], there is ongoing debate and concern regarding their dependency on eye gaze, which excludes patients lacking oculomotor control from using such systems. Although some researchers address this issue by showing that SSVEPs can also be controlled by shifting attention rather than gaze, striving towards so-called independent SSVEP-based BCIs [8,9,10], the majority of studies allow gaze shifts to some extent. Such systems then compete with other healthcare applications based on gaze direction, such as eye trackers.
Eye trackers are devices which calculate gaze coordinates on the screen from measured eye positions. So-called video oculography eye trackers use a video camera, positioned in front of the subject or head mounted (e.g., on an eyeglasses frame), to track the position of the eyes [11,12]. Eye trackers are considered robust and are an established communication technology for disabled persons; commercial systems such as the Tobii eye tracking devices (Tobii AB, Danderyd, Stockholm, Sweden) have become a valuable tool in augmentative communication [13,14,15].
The general consensus seems to be that eye tracking devices outperform SSVEP BCIs, as they are faster and the required setup is much simpler. However, some studies suggest that the performance gap between the two technologies might be smaller than expected. Kishore et al. compared the two methods in head-mounted display (HMD) hardware as a means of controlling gestures of a humanoid robot [16]. They found that both methods are appropriate for use in immersive settings. All ten SSVEP participants triggered at least one gesture during the test. However, results for the eye tracker were surprisingly poor: two out of ten eye tracker participants did not succeed in triggering gestures with the robot. They noted, though, that there were technological differences between their setup and those in the existing literature. Kos'myna and Tarpin-Bernard tested eye tracking in combination with different BCI paradigms in a gaming setup. Although they observed that the combination of eye tracking and SSVEP was slightly slower, it was more accurate than the pure eye tracker [17]. They concluded that the combination of the eye tracker and SSVEP was a well-rounded and natural one.
One major obstacle with eye tracking technology is the so-called Midas touch problem (see e.g., [18]). Usually, the activation of a selected target object is based on dwell times; the user has to focus on a target object for an extended period. However, the system cannot differentiate intentional from unintentional fixations, which can easily lead to false classifications. One way to solve this problem is to use a BCI for target activation (see e.g., [19]).
For SSVEP-based BCIs, another problem can occur. If consecutive commands are performed in a row (e.g., in spelling), the user needs to shift her/his gaze between targets. EEG data collected during this gaze shifting phase are not relevant for the identification of any stimulus. That is why many researchers include automatic pauses after classifications in their applications, giving the user a fixed amount of time to find the next target. However, the time needed to find the next target depends on the target arrangement, the familiarity with the application, as well as user factors; hence, the length of the pause provided by the system might not be optimal and can slow the system down. A more user-specific pause for gaze shifting could be determined with eye tracking.
If gaze control is not restricted, it is convenient to combine BCIs and eye tracking devices. Several hybrid systems combining eye trackers with BCI approaches have been developed with applications for controlling robotic limb prosthetics [20], quadrocopters [21], games [17] and communication tools [19,22,23].
Several hybrid systems utilize a BCI as a supportive technology for eye tracking. In such systems, the eye tracking device is usually used for the selection and the BCI for the activation of an object, circumventing the Midas touch problem described above. For example, Vilimek and Zander proposed a spelling system using eye tracking for target selection and an imaginary-movement-based BCI for simulating a mouse click [19]. They stated that their hybrid system was somewhat slower but more reliable in comparison to standard dwell-time-based eye tracking interfaces. However, no direct comparisons between the BCI and the eye tracking systems were made.
McCullagh et al. used an EyeTribe tracker (The Eye Tribe ApS, Copenhagen, Denmark) for gaze coordinates and EEG data recorded with an Emotiv EPOC (Emotiv, San Francisco, CA, USA) for selection [24]. Their hybrid approach maintained the information transfer rate (ITR) while accuracy and efficiency were increased. They also stated that the ITRs of both the tested eye tracking device and the hybrid were higher in comparison to a previous SSVEP-only system.
Another approach is to use the eye tracker as a complementary technology to the BCI. Lim et al. used information about the eye gaze direction, detected by a low-cost webcam, to prevent typing errors in an SSVEP-based BCI spelling application [23]. In online experiments with 10 participants, almost 40% of typos were prevented, which shows that their system could reduce typing errors significantly.
The aforementioned methods have a clear division of tasks, whereas the hybrid proposed in this article utilizes a more balanced allocation of tasks between the eye tracker and the BCI.
The novel system presented here allows hands-free control over a 30-target spelling interface, using eye tracking for initial rough selection and SSVEP technology for fine selection and activation. As we found in previous research, SSVEP systems with four or fewer targets allow high classification rates and offer control to a wide range of users [7]. Therefore, in this hybrid system we implemented only four simultaneously flickering stimuli. The letters are arranged in a 6 × 5 target matrix. If the user focuses on a specific letter, the area of the desired target is determined via eye tracking and a block of four letters starts flickering. As each of these four letters has a specific individual stimulation frequency, the system is able to classify a command. Gaze coordinates are tracked simultaneously in the background, allowing the user to switch to another block of letters if the initial selection is wrong.
This method has several advantages:
  • As eye tracking data are not used for the activation of letters, the Midas touch problem is circumvented.
  • Gaze shifting phases are dynamic, ensuring that EEG data are only considered while the target object is fixated.
  • Only four SSVEP stimuli need to be distinguished, resulting in high classification accuracy.
  • Only little precision is required from the eye tracking device, allowing a low-cost hardware solution.
This article evaluates the feasibility of the proposed system and compares its performance to a pure SSVEP system as well as a pure eye tracking system. In this respect, a 30-target user interface was implemented for each of the three approaches.

2. Experimental Section

2.1. Participants

In total, 32 able-bodied volunteers (six female) with a mean (SD) age of 25.16 (7.71) years, ranging from 19 to 63 years, were recruited from the Rhine-Waal University of Applied Sciences (Kleve, Germany). Participants had normal or corrected-to-normal vision and little to no previous experience with BCI systems. They gave written informed consent in accordance with the Declaration of Helsinki before taking part in the experiment. This research was approved by the ethical committee of the medical faculty of the University Duisburg-Essen. Information needed for the analysis of the test was stored pseudonymously. The entire session lasted on average approximately 50 min. Participants had the opportunity to withdraw at any time.
The EEG recordings were conducted in a quiet laboratory setting; luminance was kept low. Participants did not receive any financial reward.

2.2. Hardware

Participants were seated in front of an LCD screen (BenQ XL2420T, Taipei, Taiwan; resolution: 1920 × 1080 pixels, vertical refresh rate: 120 Hz) at a distance of about 60 cm. The computer system operated on Microsoft Windows 7 Enterprise (Microsoft, Redmond, WA, USA) running on an Intel Core i7 processor (Intel, Santa Clara, CA, USA; 3.40 GHz). An EEG amplifier, g.USBamp (Guger Technologies, Graz, Austria), with standard Ag/AgCl electrodes was utilized. Eight signal electrodes were located over the visual cortex ($P_Z$, $PO_3$, $PO_4$, $O_1$, $O_2$, $O_Z$, $O_9$ and $O_{10}$ in accordance with the international system of EEG electrode placement). The ground electrode was placed at $AF_Z$ and the reference electrode at $C_Z$. Standard abrasive electrolytic electrode gel was applied between the electrodes and the scalp in order to bring impedances below 5 kΩ. An analogue bandpass filter between 2 and 30 Hz and a notch filter around 50 Hz were applied in the g.USBamp amplifier.
For the eye tracking aspect, we used the low-cost EyeTribe tracker with the provided software development kit (SDK). The EyeTribe is a video-based tracker which uses binocular gaze data and high-resolution infrared LED illumination [25]. The data rate of the EyeTribe was set to 30 Hz. The SDK provides a calibration interface which ensures a correct position of the device and identifies the unique eye characteristics needed to enhance the accuracy of the tracker. The EyeTribe tracker was mounted on a tripod placed in front of the monitor, facing the user. It was connected to the computer via a universal serial bus (USB 3.0) port.

2.3. Signal Processing

For SSVEP signal classification, the minimum energy combination method (MEC) proposed by Friman et al. in [26] was utilized. The MEC creates a set of channels (weighted combinations of the electrode signals) which minimize the nuisance signals. Considering $N_t$ samples of EEG data, recorded for each of $N_y$ signal electrodes, the SSVEP response to a stimulus flickering at $f$ Hz, measured with the $i$-th electrode, can be described as a function of the frequency $f$ and its $N_h$ harmonics $k$, with corresponding amplitudes $a_{i,k}$ and $b_{i,k}$:

$$y_i(t) = \sum_{k=1}^{N_h} \left[ a_{i,k} \sin(2 \pi k f t) + b_{i,k} \cos(2 \pi k f t) \right] + E_{i,t}$$

The term $E_{i,t}$ represents the noise component of electrode $i$, i.e., the various artifacts that cannot be attributed to the SSVEP response. For a time segment of length $T_s$ seconds, acquired with a sampling frequency of $F_e$ Hz, the model can be written in vector form as $y_i = X \tau_i + E_i$, where $y_i = [y_i(1), \ldots, y_i(N_t)]^T$ and $X$ is the $N_t \times 2N_h$ SSVEP model matrix containing the sine and cosine components. The vector $\tau_i$ contains the corresponding amplitudes $a_{i,k}$ and $b_{i,k}$.
To cancel out the nuisance and noise, $N_s$ channel vectors $s_i$, $i = 1, \ldots, N_s$, of length $N_t$ are defined as linear combinations of the electrode signals; the $N_t \times N_s$ matrix $S = [s_1, \ldots, s_{N_s}]$ can be written as $S = Y W$, where the $N_y \times N_s$ matrix $W$ contains the corresponding weights.
The noise and nuisance signals can be estimated by removing the SSVEP components from the signal. In this respect, the electrode signal matrix $Y$ is projected onto the orthogonal complement of the SSVEP model matrix,

$$\tilde{Y} = Y - X (X^T X)^{-1} X^T Y.$$

An optimal weight combination for the electrode signals can then be found by calculating the eigenvectors of the symmetric matrix $\tilde{Y}^T \tilde{Y}$ (please refer to [27] for more details). The weight matrix can be set to

$$W = \left[ \frac{v_1}{\sqrt{\lambda_1}}, \ldots, \frac{v_{N_s}}{\sqrt{\lambda_{N_s}}} \right],$$

where $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_{N_s}$ are the smallest eigenvalues of $\tilde{Y}^T \tilde{Y}$ with corresponding eigenvectors $v_i$. To discard up to 90% of the nuisance signal, the total number of channels is selected as the smallest value of $N_s$ that satisfies

$$\frac{\sum_{i=1}^{N_s} \lambda_i}{\sum_{j=1}^{N_y} \lambda_j} > 0.1.$$

To detect the SSVEP response for a specific frequency, the power of that frequency and its $N_h$ harmonics is estimated by

$$\hat{P} = \frac{1}{N_s N_h} \sum_{l=1}^{N_s} \sum_{k=1}^{N_h} \left\| X_k^T s_l \right\|^2.$$

The SSVEP power estimations of all $N_f$ considered frequencies are then normalized,

$$p_i = \frac{\hat{P}_i}{\sum_{j=1}^{N_f} \hat{P}_j}.$$

Finally, in order to highlight the largest values, a Softmax function was applied as described in [28],

$$p_i' = \frac{e^{\alpha p_i}}{\sum_{j=1}^{N_f} e^{\alpha p_j}},$$

where $\alpha$ was set to 0.25.
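To make the above procedure concrete, the following is a minimal NumPy sketch of the described MEC pipeline (projection, eigen-decomposition, channel selection, power estimation and Softmax). It is our own illustrative reconstruction, not the authors' C++ implementation; all function names are hypothetical, and numerical safeguards are omitted for brevity.

```python
import numpy as np

def model_matrix(f, n_samples, fs, n_harmonics=2):
    """The N_t x 2*N_h matrix X with sine/cosine pairs for f and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * f * t))
        cols.append(np.cos(2 * np.pi * k * f * t))
    return np.column_stack(cols)

def mec_channels(Y, X):
    """Combine the N_y electrode signals (columns of Y) into N_s channels, S = Y W."""
    # Estimate noise/nuisance: project Y onto the orthogonal complement of X.
    Y_tilde = Y - X @ np.linalg.solve(X.T @ X, X.T @ Y)
    lam, V = np.linalg.eigh(Y_tilde.T @ Y_tilde)    # eigenvalues in ascending order
    # Smallest N_s whose eigenvalues hold just over 10% of the total energy,
    # i.e., discarding up to 90% of the nuisance signal.
    n_s = int(np.argmax(np.cumsum(lam) / np.sum(lam) > 0.1)) + 1
    W = V[:, :n_s] / np.sqrt(lam[:n_s])             # weights v_i / sqrt(lambda_i)
    return Y @ W

def ssvep_probabilities(Y, freqs, fs, n_harmonics=2, alpha=0.25):
    """Normalized, Softmax-sharpened power estimations for each candidate frequency."""
    p_hat = []
    for f in freqs:
        X = model_matrix(f, Y.shape[0], fs, n_harmonics)
        S = mec_channels(Y, X)
        power = 0.0
        for k in range(n_harmonics):                # P = mean of ||X_k^T s_l||^2
            Xk = X[:, 2 * k:2 * k + 2]
            power += np.sum((Xk.T @ S) ** 2)
        p_hat.append(power / (S.shape[1] * n_harmonics))
    p = np.asarray(p_hat) / np.sum(p_hat)           # normalization over N_f frequencies
    e = np.exp(alpha * p)
    return e / np.sum(e)                            # Softmax with alpha = 0.25
```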

2.4. Software

The EEG signal classification and processing, as well as the graphical user interface, were implemented as a Microsoft Visual Studio C++ project (Version 2010, Redmond, WA, USA). For eye tracking, source files from the C++ GazeApi library provided by the EyeTribe SDK were included manually in the project. Three different spelling applications were tested in the experiment: the SSVEP speller, based solely on the SSVEP paradigm; the Eyegaze speller, based solely on eye tracking; and the Hybrid, a combination of both control technologies. Figure 1 provides a system overview of the tested applications. In each application, thirty boxes containing the alphabet plus additional special characters were presented to the user. Command classifications were followed by audio feedback voicing the selected command. Table 1 summarizes the main characteristics of each interface. A detailed description of each speller is provided in the following.
SSVEP speller: For the SSVEP speller, as well as for the Hybrid, the MEC was utilized as described above. To avoid overlapping frequencies, $N_h = 2$ harmonics were considered. For the SSVEP speller, power estimations for $N_f = 30$ frequencies were calculated. Each block consisted of 13 samples (101.5625 ms at the sampling rate of 128 Hz). For the online classification, we used block-wise increasing classification time windows instead of sliding windows, as we learned that some users benefit from larger time segments (see e.g., [7]). A command was classified if a particular stimulation frequency had the highest probability, exceeded a certain predefined threshold, and the classification time window exceeded a certain minimum length. As 30 frequencies needed to be distinguished, the minimum classification time window was set to 20 blocks (approximately 2 s) in order to avoid false classifications. After each classification, the flickering stopped for approximately 914 ms (9 blocks) and no EEG data were collected, giving the user time to shift her/his gaze to another target during this gaze shifting period.
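The block-wise classification logic described above can be summarized in the following simplified sketch; read_block, classify_window, execute and the threshold value are illustrative placeholders, not the actual implementation.

```python
BLOCK_SAMPLES = 13      # one block = 13 samples (~101.6 ms at 128 Hz)
MIN_BLOCKS = 20         # minimum time window (~2 s) for the SSVEP speller
PAUSE_BLOCKS = 9        # gaze shifting pause (~914 ms) after a command
THRESHOLD = 6.0         # predefined detection threshold (cf. Figure 8)

def spelling_loop(read_block, classify_window, execute):
    window = []                            # block-wise increasing time window
    while True:
        window.append(read_block())        # append 13 new EEG samples
        if len(window) < MIN_BLOCKS:
            continue                       # window not long enough yet
        scores = classify_window(window)   # e.g., MEC power estimations per frequency
        best = max(scores, key=scores.get) # frequency with the highest score
        if scores[best] > THRESHOLD:
            execute(best)                  # output the classified command
            for _ in range(PAUSE_BLOCKS):  # discard EEG data during gaze shift
                read_block()
            window = []                    # restart with an empty window
```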
In the SSVEP speller, each box represented a stimulation frequency; the box size varied (between 130 × 90 and 170 × 120 pixels) in relation to the SSVEP power estimations during the experiment, as described in [29]. Each box was outlined by a frame marking the maximum box size, which was reached immediately prior to classification.
To implement the 30 stimulation frequencies, a frame-based stimulus approximation was used (see e.g., [30,31]). In the frame-based stimulus approximation method, a varying number of frames is used in each cycle. The stimulus signal at frequency $f$ is generated by

$$\text{stim}(f, i) = \text{square}[2 \pi f (i / \text{RefreshRate})],$$

where square$(2 \pi f t)$ generates a square wave with frequency $f$ and $i$ is the frame index. E.g., the black/white reversal sequence for the approximated frequency of 17 Hz consists of 17 cycles per second, with reversal intervals of varying length (three or four frames). Using the formula above, the one-second stimulus sequence for 17 Hz can be generated: (4 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 4 3 4 3 4 3 4 3 4 3 4 3 4 3 4 3).
For the online spelling task with the SSVEP speller, approximated frequencies between 6.1 and 11.8 Hz (resolution < 0.2 Hz) were used. This interval was also applied in previous studies, because it avoids overlaps in the second harmonics while still allowing sufficient differences between neighboring frequencies [32]. As indicated in [33], an equidistant stimuli set is not optimal; hence, we selected 30 logarithmically spaced frequencies, as displayed in Figure 2.
Eyegaze speller: This application used eye movements as the input modality. Each box could be selected by looking at it. The selected box was highlighted white, while the 29 remaining boxes were grey.
The EyeTribe tracker calculated the gaze coordinates $e = (x, y)$ on the monitor at which the participant was looking. In order to detect the box a user was focusing on, exponentially weighted moving averages (EWMAs) were utilized; in an EWMA, the current observation is averaged with the previous estimate, giving exponentially decreasing weights to older observations. For a series of sample coordinate vectors $e_t$, the EWMA was calculated recursively:

$$\hat{e}_1 = e_1, \qquad \hat{e}_t = (1 - \alpha)\,\hat{e}_{t-1} + \alpha\, e_t \quad \text{for } t > 1.$$

We used $\alpha = 0.0625$ to put more weight on the past values. After a time window of 0.33 s (10 sample coordinates), the box with the minimum distance to the average gaze position $\hat{e} = \hat{e}_{10}$ was highlighted white; however, in order to spell the letter contained in the box, it needed to be classified three times in a row. Hence, the minimum time to select a character was 1 s. After a letter selection, letter classification was suppressed for 2 s to avoid false classifications.
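A minimal sketch of this smoothing and dwell logic might look as follows (illustrative Python; box_centers and all helper names are our assumptions, not the original implementation):

```python
import math

ALPHA = 0.0625                    # small alpha: more weight on the past values

def ewma_box(gaze_samples, box_centers):
    """Index of the box closest to the EWMA of 10 gaze samples (0.33 s at 30 Hz)."""
    x_hat, y_hat = gaze_samples[0]                 # e_hat_1 = e_1
    for x, y in gaze_samples[1:10]:                # e_hat_t = (1-a)*e_hat_{t-1} + a*e_t
        x_hat = (1 - ALPHA) * x_hat + ALPHA * x
        y_hat = (1 - ALPHA) * y_hat + ALPHA * y
    dists = [math.dist((x_hat, y_hat), c) for c in box_centers]
    return dists.index(min(dists))

def spelled(last_windows):
    """A letter is spelled only after the same box wins three windows in a row."""
    return len(last_windows) == 3 and len(set(last_windows)) == 1
```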
Hybrid: The Hybrid operated in two phases. First, only eye tracking was utilized, as described above, but instead of a single box, a block of the four boxes nearest to the averaged coordinates was highlighted, as displayed in Figure 3. In total, twenty overlapping blocks were selectable. After three consecutive selections of a block (minimum time 1 s), the corresponding boxes started flickering with four individual frequencies, initiating the second phase. However, gaze coordinates were still calculated in the background; therefore, if the initial selection was wrong, the user could still switch to another block of letters by shifting her/his gaze. If the letters of this new block did not overlap with the letters of the preceding block, the flickering stopped. If the calculated gaze coordinates lay exactly in the center of a box, up to four overlapping blocks had the exact same distance to the gaze coordinates; in this case, the lower right block took precedence.
To ensure that each block contained four different frequencies, they were arranged as displayed in Figure 3.
The first phase can also be seen as a dynamic gaze shifting period: flickering started only if the user fixated on the block containing the desired letter for a sufficiently long period of time.
In order to increase the robustness of the Hybrid, three additional frequencies (means between the four target frequencies) were used, as in [34]; hence, $N_f = 7$. In particular, the additional frequencies 6.33, 7.09 and 8.13 Hz were considered. If one of these frequencies was classified, the output was rejected. This way, the reliability of the output was improved, as the risk of false positives, e.g., during gaze shifting, was considerably reduced. As for the SSVEP speller, a command was only classified if a particular stimulation frequency had the highest probability, exceeded a certain predefined threshold, and the classification time window exceeded a certain minimum length.
EEG data were transferred block-wise to the computer. While the minimum classification time window for the SSVEP speller, with its much larger number of frequencies, was 20 blocks (approximately 2 s), the minimum SSVEP classification time window for the Hybrid was set to 8 blocks (approximately 0.8 s). Figure 4 compares the eye tracking accuracy needed for the Hybrid and the Eyegaze speller.
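Conceptually, the two-phase control flow of the Hybrid can be sketched as follows (a schematic illustration under the stated parameters; all callables and the block object's methods are hypothetical placeholders, not the authors' C++ implementation):

```python
FREQS = [6.00, 6.67, 7.50, 8.57]      # stimulation frequencies (Hz)
REJECT = [6.33, 7.09, 8.13]           # additional rejection frequencies (Hz)

def hybrid_select(wait_gaze_block, poll_gaze_block, ssvep_classify,
                  start_flicker, stop_flicker):
    """Select one character; returns None if the user switched to another block."""
    # Phase 1 (eye tracking): one of the 20 overlapping 4-letter blocks is
    # chosen after three consecutive EWMA classifications (>= 1 s).
    block = wait_gaze_block()
    start_flicker(block)              # its four boxes flicker with FREQS
    while True:
        # Gaze is still tracked: a non-overlapping block aborts phase 2.
        new_block = poll_gaze_block()
        if new_block is not None and not new_block.overlaps(block):
            stop_flicker()
            return None               # restart phase 1 from the new gaze area
        # Phase 2 (SSVEP): classify among 4 target + 3 rejection frequencies
        # with a minimum time window of 8 blocks (~0.8 s).
        f = ssvep_classify(FREQS + REJECT)
        if f in REJECT:
            continue                  # output rejected (likely a gaze shift)
        if f is not None:
            stop_flicker()
            return block.letter_at(f) # spelled character
```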

2.5. Experimental Setup

Initially, participants were prepared for the EEG recording; thereafter, the eye tracker was calibrated. After accurate positioning in front of the device was ensured, the calibration software provided by the EyeTribe SDK presented a series of calibration targets distributed evenly across the screen. The calibration process took on average about 30 s to complete. Participants were instructed not to move their head during this calibration phase. Participants were also asked not to wear their glasses during the experiment, as glasses affect the performance of the low-cost eye tracker system. If the calibration results were poor, the calibration was repeated.
Afterwards, participants tested the spelling applications as follows: initially, subjects participated in a familiarization run, spelling the word "KLEVE" and a word of their own choice (e.g., their first name). Next, each participant used each GUI, in random order, to spell the phrase "RHINE WAAL UNIVERSITY". The spelling phase ended automatically when the phrase was spelled correctly. If a person was not able to execute a desired classification within a certain time frame, or if repeated false classifications occurred, the experiment was stopped manually. Spelling errors were corrected via the "delete" button. After the test phase, the subjects completed a post-questionnaire, answering questions regarding each spelling application.

3. Results

The overall BCI performance for the three tested spelling applications is provided in Table 2. For each subject, the following values are provided: the time $T$ needed to complete the task, the command accuracy $P$, and the commonly used information transfer rate (ITR) (see e.g., [1]),

$$B = \log_2 N + P \log_2 P + (1 - P) \log_2 \left( \frac{1 - P}{N - 1} \right),$$

where $B$ represents the number of bits per trial. The overall number of possible choices was $N = 30$ for each application.
The accuracy $P$ was calculated as the number of correct command classifications divided by the total number of classified commands, $C_n$. To obtain the ITR in bits per minute, $B$ is multiplied by the number of command classifications per minute. To obtain the average command classification time, the total time needed for the spelling task, $T$, was divided by $C_n$.
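For illustration, the ITR computation can be implemented in a few lines (a sketch of the formula above; the example numbers at the bottom are arbitrary, not taken from Table 2):

```python
import math

def bits_per_trial(N, P):
    """B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1))."""
    if P >= 1.0:
        return math.log2(N)               # limit case: perfect accuracy
    return (math.log2(N) + P * math.log2(P)
            + (1 - P) * math.log2((1 - P) / (N - 1)))

def itr_bpm(N, P, n_commands, total_seconds):
    """ITR in bits per minute: B times the commands classified per minute."""
    return bits_per_trial(N, P) * n_commands * 60.0 / total_seconds

# e.g., 25 classified commands at 92% accuracy in 160 s, with N = 30 targets:
print(round(itr_bpm(30, 0.92, 25, 160.0), 2))   # -> 38.59
```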
In some cases, the system was unable to reliably detect the user's intent. Participants who were unable to complete the spelling task, or who achieved classification accuracies below 70%, were excluded from the calculation of the mean values; for the sake of brevity, we refer to these participants as BCI illiterate and define the BCI literacy rate as the percentage of BCI literate participants.
Every participant was able to control at least one of the systems. Out of the 32 participants, 29 were able to gain control over the Hybrid, 25 over the Eyegaze speller and 24 over the SSVEP speller; 18 participants were able to complete the tasks with all three applications. For these subjects, a detailed performance comparison was conducted (see Table 3). The typing speed in characters/min was obtained by dividing the total number of spelled letters (including errors and error corrections) by T. A series of t-tests revealed that the Eyegaze speller achieved a significantly higher ITR than both the SSVEP speller (t(17) = 13.924, p < 0.001) and the Hybrid (t(17) = 11.238, p < 0.001). In turn, the ITR of the proposed Hybrid was significantly higher than that of the SSVEP speller (t(17) = 3.634, p = 0.002). Likewise, the Eyegaze speller achieved a significantly higher accuracy than both the SSVEP speller (t(17) = 3.160, p = 0.006) and the Hybrid (t(17) = 2.747, p = 0.014). Though the mean accuracy of the Hybrid (93.87%) was slightly higher than that of the SSVEP speller (90.81%), the difference was not statistically significant (t(17) = 1.360, p = 0.191).
Figure 5 summarizes results from the post questionnaire for all subjects. The subjective impressions regarding user friendliness were measured using a five-point Likert scale, where “1” indicated the strongest degree of disagreement with a particular statement and “5” the strongest degree of agreement.
Figure 6 compares the command classification times for each of the letters from the copy spelling task; the boxplots display minimum, maximum and median values, as well as outliers. Note that in two cases with the Eyegaze speller (Subjects 3 and 5), classification data were already being collected in the background when the recording started; as a result, the first letter was selected in less than one second.
Several participants needed a considerable amount of time to complete the spelling task with the SSVEP speller. In order to analyze long-trial performance, the classification accuracies of slow and fast performers were compared. In total, 6 participants needed more than 4 min to write the phrase (slow performers), while 11 participants completed the task in less than 3 min (fast performers). Figure 7 suggests that selection accuracy diminishes slightly over the course of the spelling task for slow performers. To analyze this performance drop further, the mean classification accuracies over the first and final five letters (94.5% and 87.2%) were calculated for the slow performers. The observed difference was not significant (t(5) = 1.547, p = 0.182).

4. Discussion

The results demonstrate that while the Eyegaze speller was the fastest system overall, the combination of eye tracking and SSVEP was faster than the SSVEP system alone. All participants gained control over at least one of the three systems, yet the literacy rate differed between systems. In this respect, the proposed Hybrid achieved the highest literacy rate: 90.63% of the participants achieved reliable control with the Hybrid, 78.13% with the Eyegaze speller, and 75% with the SSVEP speller.
The speed difference between SSVEP and eye tracking technology was expected. A relatively long time window is necessary until the SSVEP power estimations allow accurate classification. A direct comparison of a letter selection with the SSVEP speller and the Eyegaze speller is provided in Figure 8.
One explanation why the Hybrid was controlled by more users than the SSVEP speller is the smaller number of SSVEP targets. This also allowed shorter SSVEP classification time windows and hence resulted in an overall faster performance. For the Hybrid, the minimum time for SSVEP classification was below 1 s; for the SSVEP speller, on the other hand, we used time windows with a minimal length of roughly 2 s, a rather typical value throughout the BCI literature (see e.g., [35]). It should be noted, though, that some studies successfully used smaller time windows for multi-target systems (see e.g., [31]).
Another advantage of the Hybrid is that the gaze shifting phase was governed by eye tracking. It was therefore ensured that the user was concentrating on a target letter during the collection of EEG data. Figure 9 displays the entire spelling performance of one subject. It can be seen that the gaze shifting period indeed differed for each selection; e.g., for the selection of the consecutive "A"s, the gaze shifting period was, as expected, the shortest.
A considerable number of users were not able to control the Eyegaze speller. As also observed by Janthanasub and Meesad, the calibration of the eye tracking device was relatively poor, or not possible at all, for participants with glasses [25]. We would like to point out, though, that more expensive eye trackers might perform more reliably with glasses than the eye tracker used in our experiment. Future developments in camera-based tracking or wearable devices might circumvent this issue. To compare eye tracking and SSVEP independently of the interference of glasses with the tracking, participants were asked to perform the experiment without visual aid, even if they usually wore glasses.
Despite this, 21.87% of the participants were not successful with the eye tracker. Other factors prevented reliable control as well; for example, participant-related eye physiology (e.g., narrow eyes) tended to worsen trackability, as also observed by Blignaut and Wium [36]. The Midas touch problem seemed not to be an issue for most participants. Other researchers have observed variability in eye tracking performance among participants as well. Räihä and Ovaska also discussed long-term use performance [37]. They observed that during a one-hour test run with an eye typing system, some participants were unable to complete the experiment due to eye fatigue, while other participants were not affected at all. The authors further listed reasons why eye fatigue may arise: poor calibration, participant frustration, system settings, mental demand, and experimental conditions (e.g., temporal demand); also, the use of infrared light over longer time periods may cause discomfort, frustration and dryness of the eye. Eye fatigue can also be an issue with SSVEP-based BCIs (see e.g., [38]). Here, a slightly diminishing accuracy for slow performers was observed (see Figure 7). Apart from fatigue, the higher stimulation frequencies towards the end of the phrase could explain the drop.
All in all, subjects gave generally positive feedback regarding the user friendliness of all tested systems (see Figure 5). Regarding the question of whether the system was easily controlled, the eye tracking system gathered the highest number of extreme answers (strong disagreement/agreement). The perceived level of control for the Hybrid was slightly better than for the SSVEP speller. Also, the majority of subjects were more annoyed by the flickering of the SSVEP speller than by that of the Hybrid; fewer stimuli seemed generally to be less stressful for the user (see e.g., [39]). In addition, the time the subject had to look at a flickering target was longer for the SSVEP speller. Higher stimulation frequencies produce less visual fatigue and are more subtle than lower frequencies [40,41], but their SSVEP amplitudes are significantly lower (see e.g., [27]); especially for multi-target applications, BCI performance might drop to such an extent that reliable control is not possible. Because of this, we used lower frequencies in the tested applications.
As for the graphical user interface, we decided to use an alphabetically ordered layout, as a standard keyboard layout such as QWERTY might be unfamiliar to some users. For people who use a QWERTZ or QWERTY keyboard regularly, the interface could be modified. It should be noted that an equal distribution of selectable targets on the screen might be more efficient in terms of data processing; that is why the letters were arranged in a 6 × 5 matrix in the GUI implementation. If, e.g., 10 or more letters are used in a row, as is typical for the QWERTY keyboard arrangement, the data processing strategy of the hybrid needs to be altered accordingly.
It should also be noted that although the system is designed as a communication tool for disabled people, most of the subjects in this study were healthy young adults, and a few of the participants had previous BCI experience; therefore, they may not be representative of the target population. Käthner et al., for example, stress the importance of engaging end users during all steps of the development process [42]. They tested an eye tracker, electrooculography and an auditory BCI as access methods for augmentative communication. The participant, a 55-year-old amyotrophic lateral sclerosis (ALS) patient in the locked-in state, rated the ease of use of the auditory BCI as the highest, as no precise eye movements were required, but at the same time as the most tiring, due to the high level of attention necessary to control the BCI. Demographic factors influence BCI performance as well; elderly people, for example, perform slightly poorer with BCIs [43,44]. Future tests with the target population are required.
We would also like to mention that while we used a low-cost eye tracking device, our SSVEP setup was state-of-the-art, which makes the comparison somewhat biased towards the SSVEP paradigm. However, results from Kos'myna and Tarpin-Bernard suggest that four classes could also be successfully distinguished with a low-cost device such as the Emotiv EPOC [17]. A low-cost version of the Hybrid proposed here might therefore be possible.
Although slightly slower, the presented SSVEP/eye tracking combination proved to be a well-rounded alternative to the pure eye tracking device. The proposed system could, however, be improved further. In terms of speed, reliability and user comfort, both individual control methods are far from reaching their full potential, and their combination offers additional possibilities for improvement. As recorded EEG signals are affected by non-neuronal activities such as eye blinks and eye movements, eye tracker data can be used to remove such ocular artifacts from the EEG signal [45]. Our future work should include this feature as well.

5. Conclusions

This article presents a novel eye tracking/SSVEP hybrid spelling application and compares its performance to stand-alone SSVEP and eye tracking versions of the interface. Generally, with eye tracking devices, a large number of targets can be distinguished. With SSVEP-based BCIs, high-resolution control can also be achieved, as demonstrated in several studies. However, due to the high number of distinguishable targets, some users struggle to control either system. A comparison of mean values revealed that ITR as well as classification accuracy were highest with the pure eye tracking interface; however, the number of users who gained control was highest for the proposed hybrid system. It is worth noting that control over the pure eye tracking interface implied control over the hybrid system, but not vice versa. This indicates that through the fusion of the two technologies, a wider range of users could gain control over hands-free communication applications. Further advantages of the Hybrid system are its more dynamic gaze shifting phases between consecutive selections and its potentially less expensive hardware.

Acknowledgments

This research was supported by the European Fund for Regional Development under Grant GE-1-1-047. We also thank the participants of this study.

Author Contributions

Piotr Stawicki, Felix Gembler, and Ivan Volosyak conceived and designed the experiments; Piotr Stawicki, Felix Gembler, and Aya Rezeika performed the experiments; Piotr Stawicki, Felix Gembler, Aya Rezeika, and Ivan Volosyak analyzed the data. Ivan Volosyak wrote the mentioned research proposals and supervised the research. Piotr Stawicki, Felix Gembler, Aya Rezeika and Ivan Volosyak wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Wolpaw, J.; Birbaumer, N.; McFarland, D.; Pfurtscheller, G.; Vaughan, T. Brain-computer interfaces for communication and control. Clin. Neurophysiol. 2002, 113, 767–791. [Google Scholar] [CrossRef]
  2. Gao, S.; Wang, Y.; Gao, X.; Hong, B. Visual and auditory brain-computer interfaces. IEEE Trans. Biomed. Eng. 2014, 61, 1436–1447. [Google Scholar] [CrossRef] [PubMed]
  3. Muller-Putz, G.R.; Pfurtscheller, G. Control of an electrical prosthesis with an SSVEP-based BCI. IEEE Trans. Biomed. Eng. 2008, 55, 361–364. [Google Scholar] [CrossRef] [PubMed]
  4. Yin, E.; Zhou, Z.; Jiang, J.; Yu, Y.; Hu, D. A dynamically optimized SSVEP brain-computer interface (BCI) speller. IEEE Trans. Biomed. Eng. 2015, 62, 1447–1456. [Google Scholar] [CrossRef] [PubMed]
  5. Zhang, Y.; Zhou, G.; Jin, J.; Wang, X.; Cichocki, A. SSVEP recognition using common feature analysis in brain-computer interface. J. Neurosci. Methods 2015, 244, 8–15. [Google Scholar] [CrossRef] [PubMed]
  6. Guger, C.; Allison, B.Z.; Windhager, B.G.; Prückl, R.; Hintermüller, C.; Kapeller, C.; Bruckner, M.; Krausz, G.; Edlinger, G. How many people could use an SSVEP BCI? Front. Neurosci. 2012, 6, 1–6. [Google Scholar] [CrossRef] [PubMed]
  7. Gembler, F.; Stawicki, P.; Volosyak, I. Autonomous parameter adjustment for SSVEP-based BCIs with a novel BCI wizard. Front. Neurosci. 2015, 9, 1–12. [Google Scholar] [CrossRef] [PubMed]
  8. Zhang, D.; Maye, A.; Gao, X.; Hong, B.; Engel, A.K.; Gao, S. An independent brain-computer interface using covert non-spatial visual selective attention. J. Neural Eng. 2010, 7, 016010. [Google Scholar] [CrossRef] [PubMed]
  9. Lopez-Gordo, M.; Prieto, A.; Pelayo, F.; Morillas, C. Customized stimulation enhances performance of independent binary SSVEP-BCIs. Clin. Neurophysiol. 2011, 122, 128–133. [Google Scholar] [CrossRef] [PubMed]
  10. Lesenfants, D.; Habbal, D.; Lugo, Z.; Lebeau, M.; Horki, P.; Amico, E.; Pokorny, C.; Gomez, F.; Soddu, A.; Müller-Putz, G.; et al. An independent SSVEP-based brain–computer interface in locked-in syndrome. J. Neural Eng. 2014, 11, 035002. [Google Scholar] [CrossRef] [PubMed]
  11. Lupu, R.G.; Ungureanu, F. A survey of eye tracking methods and applications. Bul. Inst. Polit. Iasi 2013, 84, 71–86. [Google Scholar]
  12. Harezlak, K.; Kasprowski, P.; Stasch, M. Towards accurate eye tracker calibration—Methods and procedures. Procedia Comput. Sci. 2014, 35, 1073–1081. [Google Scholar] [CrossRef]
  13. Pasqualotto, E.; Matuz, T.; Federici, S.; Ruf, C.A.; Bartl, M.; Belardinelli, M.O.; Birbaumer, N.; Halder, S. Usability and workload of access technology for people with severe motor impairment. Neurorehabil. Neural Repair 2015, 29, 950–957. [Google Scholar] [CrossRef] [PubMed]
  14. Debeljak, M.; Ocepek, J.; Zupan, A. Eye controlled human computer interaction for severely motor disabled children. In Proceedings of the 13th International Conference on Computers Helping People with Special Needs, Linz, Austria, 11–13 July 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 153–156. [Google Scholar]
  15. Newbutt, N.; Creed, C. Assistive tools for disability arts: Collaborative experiences in working with disabled artists and stakeholders. J. Assist. Technol. 2016, 10, 121–129. [Google Scholar]
  16. Kishore, S.; González-Franco, M.; Hintemüller, C.; Kapeller, C.; Guger, C.; Slater, M.; Blom, K.J. Comparison of SSVEP BCI and eye tracking for controlling a humanoid robot in a social environment. Presence Teleoper. Virtual Environ. 2014, 23, 242–252. [Google Scholar] [CrossRef]
  17. Kos’myna, N.; Tarpin-Bernard, F. Evaluation and comparison of a multimodal combination of BCI paradigms and Eye-tracking in a gaming context. IEEE Trans. Comput. Intell. AI Games (T-CIAIG) 2013, 5, 150–154. [Google Scholar] [CrossRef]
  18. Jacob, R.J. Eye tracking in advanced interface design. In Virtual Environments and Advanced Interface Design; Oxford University Press: Oxford, UK, 1995; pp. 258–288. [Google Scholar]
  19. Vilimek, R.; Zander, T.O. BC (eye): Combining eye-gaze input with brain-computer interaction. In Proceedings of the 5th International on Conference on Universal Access in Human-Computer Interaction, San Diego, CA, USA, 19–24 July 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 593–602. [Google Scholar]
  20. McMullen, D.P.; Hotson, G.; Katyal, K.D.; Wester, B.A.; Fifer, M.S.; McGee, T.G.; Harris, A.; Johannes, M.S.; Vogelstein, R.J.; Ravitz, A.D.; et al. Demonstration of a semi-autonomous hybrid brain–machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 784–796. [Google Scholar] [CrossRef] [PubMed]
  21. Kim, B.H.; Kim, M.; Jo, S. Quadcopter flight control using a low-cost hybrid interface with EEG-based classification and eye tracking. Comput. Biol. Med. 2014, 51, 82–92. [Google Scholar] [CrossRef] [PubMed]
  22. McCullagh, P.; Galway, L.; Lightbody, G. Investigation into a mixed hybrid using SSVEP and eye gaze for optimising user interaction within a virtual environment. In Proceedings of the 7th International Conference on Universal Access in Human-Computer Interaction, Las Vegas, NV, USA, 21–26 July 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 530–539. [Google Scholar]
  23. Lim, J.H.; Lee, J.H.; Hwang, H.J.; Kim, D.H.; Im, C.H. Development of a hybrid mental spelling system combining SSVEP-based brain-computer interface and webcam-based eye tracking. Biomed. Signal Process. Control 2015, 21, 99–104. [Google Scholar] [CrossRef]
  24. McCullagh, P.; Brennan, C.; Lightbody, G.; Galway, L.; Thompson, E.; Martin, S. An SSVEP and Eye Tracking Hybrid BNCI: Potential Beyond Communication and Control. In Proceedings of the 10th International Conference on Augmented Cognition, Toronto, ON, Canada, 17–22 July 2016; Springer: New York, NY, USA, 2016; pp. 69–78. [Google Scholar]
  25. Janthanasub, V.; Meesad, P. Evaluation of a low-cost eye tracking system for computer input. King Mongkut's Univ. Technol. North Bangkok Int. J. Appl. Sci. Technol. 2015, 8, 185–196. [Google Scholar] [CrossRef]
  26. Friman, O.; Volosyak, I.; Gräser, A. Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces. IEEE Trans. Biomed. Eng. 2007, 54, 742–750. [Google Scholar] [CrossRef] [PubMed]
  27. Volosyak, I.; Valbuena, D.; Lüth, T.; Malechka, T.; Gräser, A. BCI demographics II: How many (and what kinds of) people can use an SSVEP BCI? IEEE Trans. Neural Syst. Rehabil. Eng. 2011, 19, 232–239. [Google Scholar] [CrossRef] [PubMed]
  28. Volosyak, I.; Moor, A.; Gräser, A. A dictionary-driven SSVEP speller with a modified graphical user interface. In Advances in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2011; pp. 353–361. [Google Scholar]
  29. Volosyak, I. SSVEP-based Bremen-BCI interface—Boosting information transfer rates. J. Neural Eng. 2011, 8, 036020. [Google Scholar] [CrossRef] [PubMed]
  30. Wang, Y.; Wang, Y.T.; Jung, T.P. Visual stimulus design for high-rate SSVEP BCI. Electron. Lett. 2010, 46, 1057–1058. [Google Scholar] [CrossRef]
  31. Chen, X.; Wang, Y.; Nakanishi, M.; Jung, T.P.; Gao, X. Hybrid frequency and phase coding for a high-speed SSVEP-based BCI speller. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 3993–3996. [Google Scholar]
  32. Gembler, F.; Stawicki, P.; Volosyak, I. Exploring the possibilities and limitations of multitarget SSVEP-based BCI applications. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1488–1491. [Google Scholar]
  33. Stawicki, P.; Gembler, F.; Volosyak, I. Evaluation of suitable frequency differences in SSVEP-based BCIs. In Symbiotic Interaction; Springer: Cham, Switzerland, 2015; pp. 159–165. [Google Scholar]
  34. Volosyak, I.; Cecotti, H.; Gräser, A. Steady-state visual evoked potential response—Impact of the time segment length. In Proceedings of the 7th International Conference on Biomedical Engineering BioMed2010, Innsbruck, Austria, 17–19 February 2010; pp. 288–292. [Google Scholar]
  35. Cecotti, H.; Coyle, D. Calibration-less detection of steady-state visual evoked potentials-comparisons and combinations of methods. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; pp. 4050–4055. [Google Scholar]
  36. Blignaut, P.; Wium, D. Eye-tracking data quality as affected by ethnicity and experimental design. Behav. Res. Methods 2014, 46, 67–80. [Google Scholar] [CrossRef] [PubMed]
  37. Räihä, K.J.; Ovaska, S. An exploratory study of eye typing fundamentals: Dwell time, text entry rate, errors, and workload. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12), Austin, TX, USA, 5–10 May 2012; ACM: New York, NY, USA, 2012; pp. 3001–3010. [Google Scholar]
  38. Martinez, P.; Bakardjian, H.; Cichocki, A. Fully online multicommand brain-computer interface with visual neurofeedback using SSVEP paradigm. Intell. Neurosci. 2007, 2007, 94561. [Google Scholar] [CrossRef] [PubMed]
  39. Gembler, F.; Stawicki, P.; Volosyak, I. Towards a user-friendly BCI for elderly people. In Proceedings of the 6th International Brain-Computer Interface Conference Graz, Graz, Austria, 16–19 September 2014. [Google Scholar]
  40. Chien, Y.Y.; Lin, F.C.; Zao, J.; Chou, C.C.; Huang, Y.P.; Kuo, H.Y.; Wang, Y.; Jung, T.P.; Shieh, H.P.D. Polychromatic SSVEP stimuli with subtle flickering adapted to brain-display interactions. J. Neural Eng. 2016, 14, 016018. [Google Scholar] [CrossRef] [PubMed]
  41. Won, D.O.; Hwang, H.J.; Dähne, S.; Müller, K.R.; Lee, S.W. Effect of higher frequency on the classification of steady-state visual evoked potentials. J. Neural Eng. 2015, 13, 016014. [Google Scholar] [CrossRef] [PubMed]
  42. Käthner, I.; Kübler, A.; Halder, S. Comparison of eye tracking, electrooculography and an auditory brain-computer interface for binary communication: A case study with a participant in the locked-in state. J. Neuroeng. Rehabil. 2015, 12, 76. [Google Scholar] [CrossRef] [PubMed]
  43. Gembler, F.; Stawicki, P.; Volosyak, I. A comparison of SSVEP-based BCI-performance between different age groups. In Advances in Computational Intelligence; Springer: Cham, Switzerland, 2015; pp. 71–77. [Google Scholar]
  44. Volosyak, I.; Gembler, F.; Stawicki, P. Age-related differences in SSVEP-based BCI performance. Neurocomputing 2017, 1–8, In Press. [Google Scholar] [CrossRef]
  45. Mannan, M.M.N.; Kim, S.; Jeong, M.Y.; Kamran, M.A. Hybrid EEG-Eye tracker: Automatic identification and removal of eye movement and blink artifacts from electroencephalographic signal. Sensors 2016, 16, 241. [Google Scholar] [CrossRef] [PubMed]
Figure 1. System overview of the three tested applications showing hardware, signal processing options, user interfaces and the system output.
Figure 2. A logarithmically spaced set of frequencies used for the steady-state visual evoked potentials based application (SSVEP speller). Frequencies were implemented on the basis of the frequency approximation method.
Figure 3. Frequency arrangement for the Hybrid. Stimuli $f_1 = 6.00$ Hz, $f_2 = 6.67$ Hz, $f_3 = 7.5$ Hz and $f_4 = 8.57$ Hz were used. Initially, one of twenty gaze boxes $e_i$ was determined using the eye tracker (a). If, e.g., the box $e_8$ was classified, the four buttons "I, J, O, P" started to flicker, allowing the selection of an individual letter via steady-state visual evoked potentials (b).
Figure 4. Selecting the letter "I" with the Hybrid (a) and the Eyegaze speller interface (b). In the Hybrid system, the eye tracking data were used for initial rough selection. If the traced eye coordinates lay within the yellow rectangle (11 cm × 19 cm), the box containing the desired letter started flickering. If eye tracking was used alone, the traced coordinates needed to be much more precise (5.5 cm × 9.5 cm rectangle on the right-hand side). The eye tracking software calculated the user's eye gaze coordinates with an average accuracy of around 0.5° to 1° of visual angle depending on the calibration, which corresponded to an on-screen average error of 0.5 to 1 cm, assuming the user sat approximately 60 cm away from the screen.
Figure 5. Results from the post-test questionnaire. Responses were given on a 1–5 Likert scale. The tested applications were the steady-state visual evoked potentials based application (SSVEP speller), the here presented Hybrid application, and the eye tracking based application (Eyegaze speller).
Figure 6. Comparison of letter selection times (only correct selections, i.e., the last selection in case of corrected spelling errors). The average classification time needed for every letter of the spelling phrase is provided for each of the tested systems: the steady-state visual evoked potentials based application (SSVEP speller), the here presented Hybrid application, and the eye tracking based application (Eyegaze speller).
Figure 7. Steady-state visual evoked potentials classification accuracies over the experiment duration for slow and fast performers.
Figure 8. Comparison of the steady-state visual evoked potentials based application (SSVEP speller) and the eye tracking based application (Eyegaze speller) during the selection of the letter "I", shown for subject 1. (a) The SSVEP power estimations are displayed as a function of time. The black line represents the SSVEP power of the frequency corresponding to the letter "I". If a certain threshold value (in this case 6) was surpassed, an output command was classified; (b) Eye-movement path from the letter "S" to "I". When the eyes focused sufficiently long (1 s) on the desired box, the letter was selected. Before selection, the eye tracker recorded several gaze positions along the path from "S" to "I".
Figure 9. The entire spelling performance with the Hybrid, shown for subject 5. The steady-state visual evoked potentials power estimations are displayed as a function of time; the grey boxes represent the eye tracking phases. Eye tracking paths from (a) "R" to "H" and (b) "S" to "I" are provided.
Table 1. Overview of the tested spelling applications: the steady-state visual evoked potentials (SSVEP) based application (SSVEP speller), the eye tracking based application (Eyegaze speller), and the here presented hybrid application (Hybrid). The main differences between the three systems are provided.

| | No. of Target Characters | No. of Steps for Character Selection | No. of Classes (Eye Tracking) | Visual Angle (Eye Target) | No. of Classes (SSVEP) | Min. Difference between Frequencies | Min. Time for Character Selection |
|---|---|---|---|---|---|---|---|
| SSVEP speller | 30 | 1 | – | – | 30 | 0.14 Hz | 2.031 s |
| Eyegaze speller | 30 | 1 | 30 | 1.6° | – | – | 1.000 s |
| Hybrid | 30 | 2 | 20 | 4.4° | 4 | 0.67 Hz | 1.813 s |
Table 2. Results from the copy spelling task for all three tested applications: the steady-state visual evoked potentials based application (SSVEP speller), the here presented Hybrid application, and the eye tracking based application (Eyegaze speller). Participants who were not able to successfully control a spelling interface were excluded from the calculation of mean values for that particular system. The 18 participants who completed the tasks with all three applications are highlighted in bold. Columns 2–4 refer to the SSVEP speller, columns 5–7 to the Hybrid, and columns 8–10 to the Eyegaze speller.

| Subject | Time (s) | Acc. (%) | ITR (bpm) | Time (s) | Acc. (%) | ITR (bpm) | Time (s) | Acc. (%) | ITR (bpm) |
|---|---|---|---|---|---|---|---|---|---|
| **1** | 184.133 | 81.82 | 35.91 | 162.906 | 92.00 | 37.90 | 62.867 | 100.00 | 98.35 |
| **2** | 188.906 | 78.38 | 36.47 | 112.531 | 88.89 | 55.62 | 64.796 | 100.00 | 95.42 |
| 3 | – | – | – | 208.508 | 88.89 | 30.02 | 107.111 | 86.21 | 59.43 |
| 4 | – | – | – | 321.648 | 100.00 | 19.22 | – | – | – |
| **5** | 91.711 | 100.00 | 67.41 | 67.9453 | 100.00 | 91.00 | 56.063 | 100.00 | 110.28 |
| **6** | 431.641 | 73.33 | 17.36 | 113.953 | 100.00 | 54.26 | 60.125 | 100.00 | 102.83 |
| **7** | 200.180 | 83.87 | 32.39 | 110.602 | 100.00 | 55.37 | 70.890 | 95.65 | 86.39 |
| **8** | 114.359 | 100.00 | 54.06 | 117.914 | 100.00 | 52.43 | 61.242 | 100.00 | 100.96 |
| **9** | 156.914 | 95.65 | 39.03 | 185.758 | 92.00 | 33.24 | 75.563 | 95.65 | 81.05 |
| 10 | 276.859 | 86.21 | 22.99 | – | – | – | – | – | – |
| 11 | – | – | – | 494.406 | 72.34 | 15.47 | 108.010 | 100.00 | 57.24 |
| 12 | – | – | – | 677.828 | 81.82 | 9.76 | 64.492 | 100.00 | 95.87 |
| 13 | – | – | – | 175.094 | 86.21 | 36.35 | 78.000 | 92.00 | 79.16 |
| **14** | 138.531 | 100.00 | 44.63 | 138.734 | 95.65 | 44.14 | 63.477 | 100.00 | 97.40 |
| 15 | 218.258 | 100.00 | 28.33 | – | – | – | – | – | – |
| **16** | 195.609 | 100.00 | 31.61 | 120.352 | 95.65 | 50.88 | 62.359 | 100.00 | 99.15 |
| **17** | 436.516 | 86.21 | 14.58 | 186.266 | 95.65 | 32.88 | 82.875 | 100.00 | 74.60 |
| 18 | 92.652 | 95.65 | 66.12 | 108.773 | 92.00 | 56.76 | – | – | – |
| 19 | 145.336 | 95.65 | 42.14 | 239.383 | 82.76 | 24.76 | – | – | – |
| **20** | 148.891 | 92.00 | 41.47 | 116.289 | 86.21 | 54.73 | 75.766 | 100.00 | 81.60 |
| 21 | – | – | – | 388.477 | 76.92 | 18.11 | 85.211 | 92.00 | 72.46 |
| **22** | 247.406 | 86.21 | 25.73 | 203.633 | 88.88 | 30.74 | 70.891 | 100.00 | 87.21 |
| **23** | 165.445 | 80.65 | 39.63 | 107.758 | 92.00 | 57.30 | 57.992 | 100.00 | 106.61 |
| 24 | 1536.760 | 63.83 | 4.04 | 635.375 | 66.15 | 14.36 | 58.703 | 100.00 | 105.32 |
| **25** | 231.156 | 85.19 | 25.10 | 239.281 | 88.89 | 26.16 | 71.805 | 100.00 | 86.10 |
| **26** | 125.227 | 100.00 | 49.37 | 138.531 | 100.00 | 44.63 | 73.836 | 100.00 | 83.74 |
| 27 | – | – | – | 220.80 | 95.65 | 27.74 | 67.378 | 100.00 | 91.76 |
| **28** | 253.297 | 95.65 | 24.18 | 306.008 | 81.82 | 21.61 | 63.172 | 100.00 | 97.87 |
| 29 | 155.695 | 100.00 | 39.71 | 490.242 | 74.42 | 14.97 | – | – | – |
| **30** | 268.125 | 95.65 | 22.84 | 157.219 | 92.00 | 39.27 | 91.914 | 92.00 | 67.17 |
| 31 | 228.21 | 100.00 | 27.09 | 232.27 | 86.21 | 27.40 | – | – | – |
| **32** | 146.047 | 100.00 | 42.33 | 128.375 | 100.00 | 48.16 | 111.475 | 88.89 | 56.15 |
| Mean | 255.115 | 91.04 | 34.98 | 230.229 | 89.77 | 37.51 | 73.840 | 97.70 | 86.96 |
| SD | 275.114 | 9.78 | 14.55 | 155.121 | 8.90 | 17.66 | 15.622 | 4.05 | 15.33 |
| Literacy rate (%) | 75.00 | | | 90.63 | | | 78.13 | | |
Table 3. Mean (SD) values achieved by the 18 participants who completed the tasks with all three applications: the steady-state visual evoked potentials based application (SSVEP speller), the here presented Hybrid application, and the eye tracking based application (Eyegaze speller). The presented values are the overall accuracy, the information transfer rate (ITR), the characters per minute, and the overall time needed to complete the spelling task. Subjects who were not able to successfully control all three spelling interfaces were excluded from the calculation of mean values.

| | Accuracy (%) | ITR (bpm) | Char/Min | Time (s) |
|---|---|---|---|---|
| SSVEP speller | 90.81 (8.89) | 35.78 (13.36) | 8.65 (2.62) | 206.894 (96.051) |
| Hybrid | 93.87 (5.57) | 46.13 (15.75) | 10.60 (3.17) | 150.781 (56.637) |
| Eyegaze speller | 98.46 (3.27) | 89.60 (14.14) | 18.78 (2.23) | 70.950 (13.659) |
