Article

fNIRS-Based Upper Limb Motion Intention Recognition Using an Artificial Neural Network for Transhumeral Amputees

Neelum Yousaf Sattar, Zareena Kausar, Syed Ali Usama, Umer Farooq, Muhammad Faizan Shah, Shaheer Muhammad, Razaullah Khan and Mohamed Badran

1 Department of Mechatronics and Biomedical Engineering, Air University, Main Campus, PAF Complex, Islamabad 44000, Pakistan
2 Department of Mechanical Engineering, Khwaja Fareed University of Engineering & IT, Rahim Yar Khan 64200, Pakistan
3 Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
4 Institute of Manufacturing, Engineering Management, University of Engineering and Applied Sciences, Swat, Mingora 19060, Pakistan
5 Department of Mechanical Engineering, Faculty of Engineering and Technology, Future University in Egypt, New Cairo 11835, Egypt
* Authors to whom correspondence should be addressed.
Submission received: 26 November 2021 / Revised: 10 January 2022 / Accepted: 12 January 2022 / Published: 18 January 2022
(This article belongs to the Special Issue Signal Processing for Brain–Computer Interfaces)

Abstract

Prosthetic arms are designed to assist amputees in performing activities of daily life. Brain-machine interfaces are currently employed to enhance the accuracy as well as the number of control commands for upper limb prostheses. However, motion prediction for prosthetic arms and the rehabilitation of individuals with transhumeral amputations remain limited. In this paper, a functional near-infrared spectroscopy (fNIRS)-based approach for recognizing human intention for six upper limb motions is proposed. Data were collected from fifteen healthy subjects and three transhumeral amputees for elbow extension, elbow flexion, wrist pronation, wrist supination, hand open, and hand close. The fNIRS signals were acquired from the motor cortex region of the brain with the commercial NIRSport device. The acquired data samples were filtered using a finite impulse response (FIR) filter, and the signal mean, signal peak, and signal minimum were computed as the feature set. An artificial neural network (ANN) was applied to these data samples. The results show that the six arm actions can be classified with an accuracy of 78%. To the best of the authors' knowledge, comparable results have not been reported in any identical study. These fNIRS results for intention detection are promising and suggest that the approach can be applied to the real-time control of a transhumeral prosthesis.

1. Introduction

Amputation refers to the removal of a human limb due to illness, accident, or trauma. To compensate for limb loss, an artificial device (a prosthesis) is provided [1]. Upper limb amputation is divided into five major types, as indicated in Figure 1. Amputees wear transhumeral prosthetic arms to substitute for the loss of the elbow and the lower portion of the arm [2]. A human upper limb can perform seven different motions associated with the joints of the arm. Three arm motions are mandatory for a transhumeral prosthesis: elbow extension–flexion, wrist supination–pronation, and hand opening and closing. Advances in the field of biomechatronics have opened new doors to expand the use and applications of prosthetic devices for amputees. However, the control of such prosthetic arms is a new area for researchers to explore. Biosignals are preferably used for intention detection, which in turn triggers the implementation of control.
For a long time, upper limb prostheses have largely been controlled using electromyographic (EMG) signals from remnant muscles. Various research studies [4] have considered sEMG for motion intention assessment and used it in upper limb prosthetic control. In [5], a DEKA arm with three modular configurations was proposed for people with transradial, transhumeral, and shoulder disarticulations. It utilizes sEMG along with a foot controller and a pneumatic bladder for arm control. Lenzi et al. [6] proposed a 5-DoF transhumeral prosthesis for elbow, forearm, wrist, and grasping motions that used an EMG-based low-level controller. Researchers have additionally utilized many other biosensors to control prosthetic arms, such as mechanomyography (MMG) [7], inertial measurement units (IMU) [8], and near-infrared spectroscopy (NIRS) [9]. Regardless of the above-mentioned developments, a gap exists in the simultaneous control of the motions of multi-dimensional transhumeral prostheses.
Signal acquisition and processing are a great challenge in prosthetic control for above-elbow amputations due to little or no residual muscle and weak muscle activity [4,10,11]. Furthermore, the remaining muscle sites available for prosthetic control are not physiologically related to the distal arm functions [12]. In the past few years, the brain-machine interface (BMI) has emerged as a potential alternative that can offer an incredible opportunity to amputees by enabling them to carry out their daily routines [13,14]. It bypasses the need for voluntary muscle activity. BMI systems are also implemented to restore motor functions in so-called neuro-prostheses, which assist motor-disabled individuals in achieving simple everyday tasks [15,16,17]. Quite a few modalities, such as EEG, MEG, and fMRI, have been considered for BMI applications for their capability to measure brain activity noninvasively. Optical brain imaging, known as functional near-infrared spectroscopy (fNIRS), has recently been adopted in the BMI field [18].
fNIRS has advantages over the other mentioned modalities for BMI: its portability, safety, low noise, and immunity to electrical interference make the system easy to use [19]. fNIRS measures the hemodynamic response in the cortical tissue of the brain, based on the absorption properties of oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HbR). The optodes operate at two dissimilar wavelengths in the near-infrared range of 700–1000 nm, known as the "optical window". Biological tissue is relatively transparent to light in this window, since absorption by water molecules and hemoglobin is comparatively low. Hence, light passing through brain tissue can be sensed noninvasively using an optical source/detector pair placed on the scalp. A relative change in the concentrations of HbO and HbR indicates neuronal activation relevant to the executed motion [13]. The brain responses recorded for a distinct motion may contain noise that contaminates the signals. This noise can be classified as physiological, experimental, and instrumental [20]. It is removed from the samples before they are converted to concentration changes with the modified Beer–Lambert law [21]. The noise introduced by the computer or the neighboring environment is known as instrumental noise; it typically has a high frequency and is separated by applying a low-pass filter. Experimental noise contains motion artefacts, such as head motions or optode dislocation from the allotted positions, which generate spikes caused by variations in light intensity. A study [18] applied advanced filtering techniques for such noise reduction. Physiological noise [22] results from the Mayer wave (~0.1 Hz), respiration (0.2~0.5 Hz), and heartbeat (1~1.5 Hz), and is largely due to oscillations in blood pressure [23]. One of the chief BMI tasks is to extract useful information from raw brain responses for control-command generation [14]. The captured signals are refined in four phases: signal preprocessing, feature extraction, classification, and control-command generation. In preprocessing, physiological and instrumental artifacts and noise are removed. After the filtration phase, the feature extraction step gathers the detailed traits of the signal. Next, the extracted features are classified, and the trained classifier is deployed to generate control commands using the previously trained data samples [24,25].
Several researchers have embarked on the design and development of robotic arms. The configuration of a robotic arm depends on the tasks to be performed by the human arm, and distinct actuation approaches are employed accordingly. Earlier design approaches have focused on the mechanical issues of the structures and the operation of prosthetic arms [2]. Most of these prosthetic devices are controlled using unnatural methods, such as the contraction of muscles of the opposite arm [4].
This research attempts to lay the foundation for a framework that offers functionality similar to the human arm, with an intuitive control scheme. By analyzing fNIRS signals to generate control commands for upper limb prosthetic devices, the current study proposes an ANN-based signal classification framework to recognize the intention of six upper limb motions in both healthy subjects and transhumeral amputees. The novelty of the presented research lies in generating six control commands using fNIRS for transhumeral amputees. To the best of the authors' knowledge, there is no existing literature on motion intention detection of six control commands using fNIRS for upper-limb prosthesis applications [1,2,7,14,26].
This paper is organized as follows. Section 2 describes the materials and methods deployed for this study, including data acquisition and signal processing. Section 3 covers the feature extraction and classification methods, whereas in Section 4 the results, including filtration, channel selection, and classification accuracies, are presented and discussed. Section 5 concludes the paper.

2. Materials and Methods

This section details the experimental procedure and the methodology used for signal acquisition and processing, feature extraction, classification, and control command generation. A block diagram of the methods used is presented in Figure 2.

2.1. Subject Information

The study included 15 healthy subjects and three transhumeral amputees; the healthy subjects were right-hand-dominant males. Female amputees were not included simply due to their unavailability. No subject had any history of psychological, neurological, or optical affliction, as per the recommendations given in [27,28]. The subjects signed a written consent form after being briefed about the experimental process. The demographics of the transhumeral amputees are given in Table 1. The experiments were approved by the Human Research Ethics Committee (HREC) of Air University Islamabad and were performed as per the standards of the recent Declaration of Helsinki [29].

2.2. Optode Placement

The fNIRS data were recorded with the NIRSport imaging system (NIRx Medical Technologies, Germany) using an 8 × 8 sensor array positioned over the motor cortex region of the scalp. The fNIRS signals were acquired for six arm motions: elbow extension (E.E), elbow flexion (E.F), wrist pronation (W.P), wrist supination (W.S), hand open (H.O), and hand close (H.C). The optodes were placed precisely over the motor cortex-related areas according to the 10–20 system, which yielded 20 fNIRS channels (10 channels in each hemisphere). Figure 3a shows the position of the fNIRS optodes on a healthy subject. The EasyCap by NIRx Technologies is specially made for optical brain imaging according to international standards [18]. The standard distance between source and detector is 3 cm, as illustrated in Figure 3b [8,22].

2.3. Experimental Procedure

The experimental procedure was designed for subjects to perform six motor imagery (MI) tasks. During the MI tasks, subjects were instructed to think of performing one of the arm movements and to refrain from any other action, such as muscle twitches. Each subject was guided through the MI procedure by the experimental team before starting the trials to make them aware of the experimental protocol [29]. During these tasks, the subjects were seated on a comfortable chair to remain relaxed. The chair was placed at an approximate distance of 90 cm from the screen so that the arm motion indications were noticeable and the computer screen backlight did not obstruct the optical sensors [18,30,31].
The experiment session began with an initial rest of 30 s to generate a baseline. After that, the routine for the motion was displayed on a computer monitor for the subjects to follow. The experiment had two sessions: in the first, all tasks were performed in a pre-defined sequence of arm motions; in the second, the subjects performed the same arm motions but in random order. fNIRS recordings were logged for all six tasks (E.E, E.F, W.P, W.S, H.O, and H.C). Each task comprised a 10 s trial followed by a 20 s rest. Each motion was repeated 10 times, while 12 trials in total were performed by each subject. The experimental paradigm used in this study is described in Figure 4, and the framework of the proposed study is illustrated in Figure 5.
After signal acquisition, the signals were filtered using an FIR filter. These filtered signals were then used to compute the hemodynamic responses using the MBLL. The signal mean and peaks were extracted as features, and the minimum values were also extracted to set the threshold for channel selection. These hemodynamic responses were then fed to the classifying network, which predicted the motion class based on its training. All the details, with mathematical equations and numeric values, are described in the next section.

2.4. Data Acquisition and Processing

This section includes signal acquisition, signal preprocessing, and statistical feature extraction. The signal classification algorithm is also covered.
fNIRS is an optical imaging technique in which light intensity values are recorded during the oxygenation and deoxygenation of blood cells in the brain [32]. Using NIRx dual-tip optodes, the light intensity was measured at two wavelengths: 760 nm and 850 nm. The acquired light intensities were then processed in nirsLAB. Using this application, one can truncate/remove unwanted data as well as infrequent gaps captured during the acquisition process [2,33]. The dataset can be filtered and the hemodynamic states computed in the same application as well [34].
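To make the conversion step concrete, the following MATLAB sketch applies the modified Beer–Lambert law to a dual-wavelength intensity record. It is a minimal illustration only: the variable raw, the baseline choice, the extinction coefficients, the differential pathlength factor, and the source-detector distance are assumptions for illustration, not the values nirsLAB uses internally.

```matlab
% Minimal MBLL sketch for dual-wavelength fNIRS (illustrative values only).
% 'raw' is an assumed samples-by-2 matrix of light intensities at 760/850 nm
% for one channel; the coefficients below are common literature values, not
% the ones nirsLAB applies internally.
I760 = raw(:,1);  I850 = raw(:,2);
OD760 = -log10(I760 ./ mean(I760));   % change in optical density vs. baseline
OD850 = -log10(I850 ./ mean(I850));
d   = 3;                               % source-detector separation (cm)
dpf = 6;                               % assumed differential pathlength factor
% Assumed extinction coefficients [HbO HbR] in 1/(mM*cm), rows = 760/850 nm
E = [1.4866 3.8437;
     2.5264 1.7986];
dC   = (E \ [OD760.'; OD850.']) ./ (d * dpf);  % solve MBLL per sample
dHbO = dC(1,:).';                      % HbO concentration change (mM)
dHbR = dC(2,:).';                      % HbR concentration change (mM)
```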

2.4.1. Signal Acquisition

The fNIRS signals were acquired using a headset: a flexible cap made of soft cloth with optodes, referred to as the EasyCap [35,36]. The experiments were performed with the headset placed over the motor cortex in three configurations: in the first setting, the cap was simply placed on the head; in the second, the optodes on the cap were fixed with spring grommets; and in the third, a black overcap was placed on top of the second setup such that the optodes were not visible [35,36,37]. As soon as a subject was wearing the sensor cap, the optodes were calibrated. The results of the first setup are given in Figure 6b, while Figure 6a represents the outcome of the second setup.
The rectangles/boxes represent the optodes, whether sources or detectors. The intensity bar on the right indicates the optode data quality status. Each box changes its color according to the physical status of the sensor, giving an idea of which optodes to adjust for better signals [38]. White portrays that there is no connection between the optode and the subject's scalp. Red indicates a "critical" connection, meaning that the optode needs to be adjusted. Commonly, such an anomaly is detected because thick hair is caught in the EasyCap cavity, and simply re-seating the fNIRS optodes helps to form a better connection. Yellow represents an acceptable connection from which brain signals can be attained. The fNIRS data acquisition settings were tuned by the system itself: it adjusts the gain factor of each optode when the connection is acceptable and saves these numeric values in a "conditions" file, which can help later in the optode selection process. Finally, green demonstrates that the fNIRS optodes are flawlessly positioned on the subject's head and that an outstanding contact is established between the sensing element and the scalp for data acquisition [38].
For the third case, when the data remained continuously bad even though the placement calibration reported the connections as good (green), dark noise tests were conducted [39]. This test examines the intensity of ambient light incident on the optodes. Keeping in mind that the black overcap cannot always be worn, dark noise was tested initially under the hypothesis that the special grommets ensure that the least amount of noise is induced in the sensors [40].

2.4.2. Signal Processing

As soon as the optodes were calibrated, signal acquisition was started [34]. The nirsLAB software supplied with the fNIRS headset differentiates between bad and good channels based upon the gain values, as illustrated in Figure 7. The gain setting allows the exclusion of all channels whose gain exceeds a specified value: nirsLAB labels a channel "bad" if, at either wavelength, its gain is equal to or greater than the specified threshold [34,40]. This value is related to the light intensity of the environment in which the experiment is conducted. The black overcap was then used to address this issue [41].
The discontinuities/spikes caused by cap placement were removed, as illustrated in Figure 8, in which each line indicates the signal acquired from one optode. Hence, a clean signal can be fed forward for further processing. The disturbed/noisy and clean signals can be seen in Figure 9 and Figure 10, respectively. nirsLAB further provides an artifact removal method. A hemodynamic response acquired from a healthy subject after noise/spike artefact removal is illustrated in Figure 11. Additionally, unwanted or disturbed segments of the signal can be removed according to threshold values or the gain factor recorded by the machine earlier [8]. A band-pass filter was applied to further smooth the samples before computing the hemodynamic states [42]. Filtered data at the 760 nm wavelength are illustrated in Figure 12.
nirsLAB uses the MATLAB® functions "firls" and "filtfilt" for filtering. "firls" returns the coefficients of a least-squares linear-phase FIR filter [8], and "filtfilt" applies these coefficients to the samples for zero-phase filtering. For the FIR filter, a roll-off value states the size of the transition frequency band [34]. The mathematical formulation of the filter, given by the Fourier transform of the truncated (windowed) filter, is presented in Equations (1) and (2):
H(\omega) = \frac{1}{2\pi} \int_{-\pi}^{\pi} H_d(\lambda)\, W(\omega - \lambda)\, d\lambda \qquad (1)

h(n) = h_d(n)\, w(n) \qquad (2)
where h_d(n) is the ideal impulse response and w(n) is the window function. The width of the transition region between the band-pass limits of H(\omega) increases with the width of the main lobe of W(\omega), which determines the steepness of the transition between frequency bands [43]. This roll-off value was set to 15 by default by nirsLAB according to the signal condition [44]. After filtration, the hemodynamic states were computed, and features were then extracted from them.
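The filtering step described above can be sketched in MATLAB with the same two functions the paper names. The band edges follow Figure 12 (0.01–0.2 Hz) and the sampling frequency follows Section 4.1; the filter order and transition widths are assumptions, since the paper only reports the roll-off setting.

```matlab
% Sketch of the FIR band-pass step with the functions named above.
% Pass band 0.01-0.2 Hz (Figure 12), fs = 78.1 Hz (Section 4.1); the
% filter order and transition widths are assumptions.
fs = 78.1;                                   % sampling frequency (Hz)
n  = 2000;                                   % assumed FIR order (low cut-offs need a long filter)
f  = [0 0.005 0.01 0.2 0.25 fs/2] / (fs/2);  % normalized band-edge frequencies
a  = [0 0     1    1   0    0];              % desired amplitude per band edge
b  = firls(n, f, a);                         % least-squares linear-phase FIR design
hb = filtfilt(b, 1, dHbO);                   % zero-phase filtering of the HbO trace
```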

3. Feature Extraction and Classification of Motion Intention Signals

Signal feature extraction plays a crucial part in identifying the discriminatory information carried by biosignals [32,44,45,46]. This section details the features extracted from the dataset and the machine learning algorithm applied for motion classification.

3.1. Feature Extraction

To execute control commands for the six arm motions, features for signal classification were extracted. For the fNIRS brain signals, the signal mean (SM), signal peak (SP), and signal minimum (min) [43,47] were extracted. The signal mean was calculated as (3):
\mathrm{SM} = \frac{1}{N} \sum_{i=1}^{N} X_i \qquad (3)
where N represents the total number of data points and X_i represents the signal amplitude. The signal peak was calculated from the amplitude variation between two adjacent samples, counting only variations that exceed a pre-defined threshold so as to cut noise. It is given by (4):
\mathrm{SP} = \sum_{i=1}^{N-1} f\left( \left| X_i - X_{i+1} \right| \right) \qquad (4)
where f(\cdot) passes only the variations exceeding the threshold. As per the existing literature [38], SM and SP offer improved control performance for fNIRS-based systems. However, given the reported possibility of an initial dip in the fNIRS signal, the minimum signal value (min) was added as a feature [43]. The features were calculated only from the optodes selected using the criteria described earlier, over a 2 s moving window. MATLAB® was employed for all feature computations.
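A minimal MATLAB sketch of this feature computation is given below. The window step (non-overlapping) and the peak-counting threshold are assumptions, as the paper does not state them; hb stands for one channel's filtered hemodynamic signal.

```matlab
% Sketch of the SM, SP, and min features over a 2 s moving window.
% 'hb' is one channel's filtered hemodynamic signal; the non-overlapping
% step and the peak threshold 'th' are assumptions.
fs   = 78.1;
win  = round(2 * fs);                 % 2 s window length in samples
step = win;                           % assumed non-overlapping windows
th   = 1e-3;                          % assumed noise threshold for Eq. (4)
nWin = floor((numel(hb) - win) / step) + 1;
feat = zeros(nWin, 3);                % columns: SM, SP, min
for k = 1:nWin
    seg = hb((k-1)*step + (1:win));
    feat(k,1) = mean(seg);            % signal mean, Eq. (3)
    dif = abs(diff(seg));             % |X_i - X_(i+1)|
    feat(k,2) = sum(dif(dif > th));   % signal peak measure, Eq. (4)
    feat(k,3) = min(seg);             % signal minimum
end
```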

3.2. Artificial Neural Network (ANN)

To evaluate the performance of the acquired fNIRS signals, a classifier widely used in pattern recognition [38] was implemented, namely, an artificial neural network (ANN). An ANN uses several layers of neurons to map information from one distribution to another for better and enhanced results, i.e., a lower error [46,48]. Backpropagation enables the ANN to form a bridge between the input and output layers in which the corresponding labels are present [49]. The MATLAB® machine learning toolbox for neural networks was used to train the samples [50]; when using the toolbox, only the number of hidden neurons in each layer of the network needs to be set [51]. The designed model estimates the error of the predicted output in contrast to the actual output. The network then uses this error to adjust the weights so as to minimize the error in the next cycle, and this process continues until the error approaches zero [46]. For the network, the ReLU activation function was applied, and the weights were randomly initialized by the toolbox. The results and ANN training particulars are presented in the next section. A comprehensive flow diagram of the ANN classifier is shown in Figure 13.
The ANN had two hidden layers with 12 and 6 neurons, and the output layer gives one definitive class, i.e., the motion class, as defined in Figure 14. First, the preprocessed information is passed through the first layer, which contains 128 filters with a kernel size of 12; the output of the first layer is 24 × 128 [52]. The second layer contains the same number of filters with a kernel size of six, and its output is 12 × 128. Subsequently, global average pooling is applied before the output layer, for which the Adam optimization method was deployed [53,54].
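A compact sketch of how such a network could be set up with MATLAB's pattern recognition tools is shown below, using the two hidden layers of 12 and 6 neurons reported here and the 60/20/20 split reported in Section 4.2. The variables X and T and the use of patternnet are assumptions; the paper does not give its exact training script.

```matlab
% Sketch of the ANN setup: hidden layers of 12 and 6 neurons (Figure 14)
% and a 60/20/20 train/validation/test split (Section 4.2). 'X' is an
% assumed features-by-samples matrix and 'T' a 6-by-samples one-hot
% label matrix; training options beyond these are toolbox defaults.
net = patternnet([12 6]);             % two hidden layers
net.divideParam.trainRatio = 0.60;
net.divideParam.valRatio   = 0.20;
net.divideParam.testRatio  = 0.20;
[net, tr] = train(net, X, T);         % backpropagation training
Y = net(X(:, tr.testInd));            % predictions on held-out samples
plotconfusion(T(:, tr.testInd), Y);   % confusion matrix, cf. Section 4.2
```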
After classifying the fNIRS signals, the trained model was tested. The obtained results are discussed in the next section.

4. Results and Discussion

In this study, fNIRS signals were acquired to generate control commands for human arm motions for transhumeral amputees. The fNIRS hemodynamic responses were acquired employing optical sensors, i.e., optodes, and these responses are intended to drive a prosthetic arm for transhumeral amputees. For optical sensors, dark noise plays an important role. NIRx Technologies designed a high-functioning device that can be used for real-time purposes; however, the practical difficulties discussed above had not been addressed before. For this study, the results were analyzed based not only upon the accuracy of the motion classification, but also on the suitability for real-time applications. The optode placement problem was addressed and investigated, as detailed in the previous sections. This section presents the experimental results achieved using the proposed framework. Window sizes of various lengths and durations have been used in the literature to detect fNIRS signals [31]. Here, periods of 0–0.5 and 0–1 s were selected [55,56], and this split time window was used to inspect the hemodynamic response. This determines the optimal window, which in turn generates a command in the minimum amount of time.

4.1. Channel Selection

The electrical gain component that was adjusted to the absorption spectra is shown by number 6 in Figure 7. The photocurrent generated by the optical wave is amplified to a greater extent as this factor rises in value [41], and as the gain component increases, the signal-to-noise ratio of the input drops [42]. As a result, nirsLAB can identify channels with gain factors greater than a preset value and reject them from further processing and analysis. The ordered pair (1.8477, 1.7928) reflects the values of the metric we employed to quantify the raw data's signal-to-noise ratio [45,46]: the coefficient of variation (CV). Since there are two measuring wavelengths, two CV values are reported. The sampling frequency was 78.1 Hz. Following the analysis presented earlier, a framework is thereby available to overcome the issues contributing to bad and unsteady signals from the fNIRS sensor [8,34].
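The gain- and CV-based screening described here can be sketched as follows. The thresholds and the variables raw760, raw850, and gains are illustrative assumptions; nirsLAB applies its own preset cut-off values.

```matlab
% Sketch of gain- and CV-based channel screening. 'raw760'/'raw850' are
% assumed samples-by-channels intensity matrices and 'gains' the per-channel
% gain factors; both thresholds are illustrative, not nirsLAB presets.
gainMax = 7;                                  % assumed maximum acceptable gain
cvMax   = 7.5;                                % assumed maximum CV (%)
cv760 = 100 * std(raw760) ./ mean(raw760);    % coefficient of variation, 760 nm
cv850 = 100 * std(raw850) ./ mean(raw850);    % coefficient of variation, 850 nm
good  = (gains <= gainMax) & (cv760 <= cvMax) & (cv850 <= cvMax);
channels = find(good);                        % indices of retained channels
```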

4.2. Motion Classification Accuracy

The classification results were analyzed subject-wise, and then the average accuracy over all subjects was calculated. The differences in classification accuracy are evidence of the importance of the signal acquisition and processing procedure: the reliability of the results depends strongly on how the signals were acquired and further processed [8,47]. Noisy and disturbed signals resulted in errors, while the refined signals were easier for the classifier algorithms to handle, and hence a minimal error was generated [47,52,57]. Furthermore, a Student's t-test was applied to check the statistical significance of the attained results [54]. The confidence interval was specified at 95% (p < 0.05). A quantitative comparison between healthy subjects and amputees was not possible due to the restricted number of amputees. However, the computed p-value is 0.0248, with a 95% confidence interval, within healthy subjects.
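One plausible form of this significance check is sketched below as a one-sample Student's t-test of per-subject accuracies against the six-class chance level; this is an assumption, since the paper does not state exactly which quantities were compared.

```matlab
% Sketch of a one-sample Student's t-test of per-subject accuracies (%)
% against the six-class chance level, at the 95% confidence level. The
% exact quantities compared in the paper are not stated, so this is one
% plausible reading; 'acc' would hold all fifteen healthy-subject values.
acc = [72.88 67.85 74.99];                    % example values from Table 2
[h, p, ci] = ttest(acc, 100/6, 'Alpha', 0.05);
fprintf('p = %.4f, 95%% CI = [%.2f, %.2f]\n', p, ci(1), ci(2));
```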
The network used a sigmoid function for gradient descent. A total of 60% of the samples were used for training, and 20% each were utilized by the network for testing and validation. Each step was computed in 320 µs, with a 3 s epoch completion time. A confusion matrix was extracted as soon as training ended, which indicates not only the number of accurately classified samples but also the misclassified samples that generated the error. The number of hidden neurons was set to 20, with twelve neurons in each of the transitional hidden layers, and the number of neurons in the output layer was set to six.
The subject-wise accuracy for healthy subjects is given in Table 2, whereas Table 3 presents the accuracy for the amputee subjects.
As stated in Section 1, no work had previously been conducted to generate six control commands for transhumeral amputees in a non-invasive manner. However, studies that used fNIRS for other applications are presented below in tabular form for comparison of accuracy and number of control commands. The study comparison is illustrated in Table 4.
It can be seen from the studies above that, as the number of control commands increases, the accuracy decreases. However, the time response of the classifiers shows no trend, because those studies were conducted on signal sets acquired by third parties via online repositories. In the proposed framework, the signals were both acquired and analyzed, and further steps such as filtration and channel selection were taken based on the conditions during the signal acquisition process. This makes a difference, as documented by [22,58]. The results show the potential usability of the presented framework in real-time applications and represent a step towards enhanced motion prediction in BMI applications.

5. Conclusions

In this research study, an fNIRS-based approach was investigated to recognize the motion intention of the human upper limb. The fNIRS signals were acquired from the motor cortex region of the brain using the NIRSport device from NIRx Technologies. The fNIRS signals were acquired for six arm motions: elbow extension (E.E), elbow flexion (E.F), wrist supination (W.S), wrist pronation (W.P), hand open (H.O), and hand close (H.C). Channel selection was conducted based on the gain values computed during signal acquisition. An FIR filter was applied to filter the samples, and the signal mean, signal peak, and signal minimum were computed as the feature set. An ANN classifier was trained for motion intention prediction. On average, the motion intention prediction was 78% (p < 0.05) and 64% accurate for healthy and amputee subjects, respectively. The highest accuracy for an individual subject was 79.6%. A possible extension of the presented work includes refining the framework to enhance accuracy and eliminate the channel selection complications. Applying the presented approach to an increased number of arm motions, incorporating individuals of different age groups, and implementing the generated control commands to control a prosthetic arm device in a real-time setting are further directions for future work.

Author Contributions

Conceptualization, N.Y.S., Z.K. and S.A.U.; methodology, N.Y.S. and S.A.U.; software, U.F. and S.A.U.; validation, Z.K., N.Y.S., S.A.U., U.F., M.F.S. and S.M.; formal analysis, S.A.U., M.B. and R.K.; investigation, S.A.U. and N.Y.S.; resources, Z.K., M.B. and R.K.; data curation, Z.K. and S.A.U.; writing—original draft preparation, Z.K., N.Y.S. and S.A.U.; writing—review and editing, M.F.S., M.B., R.K., U.F., S.A.U. and S.M.; visualization, S.M., S.A.U. and N.Y.S.; supervision, Z.K.; project administration, Z.K. and N.Y.S.; funding acquisition, M.B., R.K. and S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Higher Education Commission of Pakistan, grant number 10702. This research was partially funded by Future University in Egypt, New Cairo, Egypt.

Institutional Review Board Statement

The Air University Human Research Ethics Committee (AU HREC) operates in compliance with The World Medical Association Declaration of Helsinki for Ethics Principles for Medical Research Involving Human Subjects.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

As this is a funded project, the data are not publicly available. However, the data can be made available upon request.

Acknowledgments

We would like to express our gratitude to our friends who connected us with the transhumeral amputees. We would also like to thank our funder, the HEC, Pakistan, for their support.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Cordella, F.; Ciancio, A.L.; Sacchetti, R.; Davalli, A.; Cutti, A.G.; Guglielmelli, E.; Zollo, L. Literature Review on Needs of Upper Limb Prosthesis Users. Front. Neurosci. 2016, 10, 209. [Google Scholar] [CrossRef]
  2. Ribeiro, J.; Mota, F.; Cavalcante, T.; Nogueira, I.; Gondim, V.; Albuquerque, V.; Alexandria, A. Analysis of Man-Machine Interfaces in Upper-Limb Prosthesis: A Review. Robotics 2019, 8, 16. [Google Scholar] [CrossRef] [Green Version]
  3. Hussain, S.; Shams, S.; Khan, S.J. Impact of Medical Advancement: Prostheses. In Computer Architecture in Industrial, Biomechanical and Biomedical Engineering; IntechOpen: London, UK, 2019. [Google Scholar] [CrossRef] [Green Version]
  4. Neelum, Y.S.; Kausar, Z.; Usama, S.A. Reference position estimation for prosthetic elbow and wrist using EMG signals. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2019; Volume 635, p. 012031. [Google Scholar] [CrossRef]
  5. Resnik, L.; Klinger, S.L.; Etter, K. The DEKA Arm: Its features, functionality, and evolution during the Veterans Affairs Study to optimize the DEKA Arm. Prosthet. Orthot. Int. 2014, 38, 492–504. [Google Scholar] [CrossRef] [Green Version]
  6. Lenzi, T.; Lipsey, J.; Sensinger, J.W. The RIC Arm—A Small Anthropomorphic Transhumeral Prosthesis. IEEE/ASME Trans. Mechatron. 2016, 21, 2660–2671. [Google Scholar] [CrossRef]
  7. Islam, M.A.; Sundaraj, K.; Ahmad, R.B.; Ahamed, N.U.; Ali, M.A. Mechanomyography sensor development, related signal processing, and applications: A systematic review. IEEE Sens. J. 2013, 13, 2499–2516. [Google Scholar] [CrossRef]
  8. Bennett, D.A.; Goldfarb, M. IMU-Based Wrist Rotation Control of a Transradial Myoelectric Prosthesis. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 26, 419–427. [Google Scholar] [CrossRef] [PubMed]
  9. Syed, U.A.; Kausar, Z.; Sattar, N.Y. Control of a Prosthetic Arm Using fNIRS, a Neural-Machine Interface. In Data Acquisition-Recent Advances and Applications in Biomedical Engineering; IntechOpen: London, UK, 2020. [Google Scholar]
  10. Alshammary, N.A.; Bennett, D.A.; Goldfarb, M. Synergistic Elbow Control for a Myoelectric Transhumeral Prosthesis. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 26, 468–476. [Google Scholar] [CrossRef]
  11. Sattar, N.Y.; Syed, U.A.; Muhammad, S.; Kausar, Z. Real-Time EMG Signal Processing with Implementation of PID Control for Upper-Limb Prosthesis. In Proceedings of the 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Hong Kong, China, 8–12 July 2019; pp. 120–125. [Google Scholar] [CrossRef]
  12. Jarrassé, N.; Nicol, C.; Touillet, A.; Richer, F.; Martinet, N.; Paysant, J.; de Graaf, J.B. Classification of phantom finger, hand, wrist, and elbow voluntary gestures in transhumeral amputees with sEMG. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 25, 71–80. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Oda, Y.; Sato, T.; Nambu, I.; Wada, Y. Real-Time Reduction of Task-Related Scalp-Hemodynamics Artifact in Functional Near-Infrared Spectroscopy with Sliding-Window Analysis. Appl. Sci. 2018, 8, 149. [Google Scholar] [CrossRef] [Green Version]
  14. Yamada, Y.; Suzuki, H.; Yamashita, Y. Time-Domain Near-Infrared Spectroscopy and Imaging: A Review. Appl. Sci. 2019, 9, 1127. [Google Scholar] [CrossRef] [Green Version]
  15. Li, X.; Samuel, O.W.; Zhang, X.; Wang, H.; Fang, P.; Li, G. A motion-classification strategy based on sEMG-EEG signal combination for upper-limb amputees. J. Neuroeng. Rehabil. 2017, 14, 3. [Google Scholar] [CrossRef] [Green Version]
  16. Bonilauri, A.; Intra, F.S.; Pugnetti, L.; Baselli, G.; Baglio, F. A Systematic Review of Cerebral Functional Near-Infrared Spectroscopy in Chronic Neurological Diseases—Actual Applications and Future Perspectives. Diagnostics 2020, 10, 581. [Google Scholar] [CrossRef]
  17. Banville, H.; Falk, T. Recent advances and open challenges in hybrid brain-computer interfacing: A technological review of non-invasive human research. Brain-Comput. Interfaces 2016, 3, 9–46. [Google Scholar] [CrossRef]
  18. Yao, L.; Meng, J.; Zhang, D.; Sheng, X.; Zhu, X. Combining Motor Imagery with Selective Sensation toward a Hybrid-Modality BCI. IEEE Trans. Biomed. Eng. 2013, 61, 2304–2312. [Google Scholar] [CrossRef]
  19. Herold, F.; Wiegel, P.; Scholkmann, F.; Müller, N.G. Applications of functional near-infrared spectroscopy (fNIRS) neuroimaging in Exercise–Cognition science: A systematic, Methodology-Focused review. J. Clin. Med. 2018, 7, 466. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Jian, C.; Deng, L.; Liang, L.; Luo, J.; Wang, X.; Song, R. Neuromuscular Control of the Agonist–Antagonist Muscle Coordination Affected by Visual Dimension: An EMG-fNIRS Study. IEEE Access 2020, 8, 100768–100777. [Google Scholar] [CrossRef]
  21. Abitan, H.; Bohr, H.; Buchhave, P. Correction to the Beer-Lambert-Bouguer law for optical absorption. Appl. Opt. 2008, 47, 5354–5357. [Google Scholar] [CrossRef]
  22. Herold, F.; Wiegel, P.; Scholkmann, F.; Thiers, A.; Hamacher, D.; Schega, L. Functional near-infrared spectroscopy in movement science: A systematic review on cortical activity in postural and walking tasks. Neurophotonics 2017, 4, 041403. [Google Scholar] [CrossRef] [Green Version]
  23. Pfeifer, M.D.; Scholkmann, F.; Labruyère, R. Signal Processing in Functional Near-Infrared Spectroscopy (fNIRS): Methodological Differences Lead to Different Statistical Results. Front. Hum. Neurosci. 2018, 11, 641. [Google Scholar] [CrossRef] [Green Version]
  24. Phinyomark, A.; Scheme, E. A feature extraction issue for myoelectric control based on wearable EMG sensors. In Proceedings of the 2018 IEEE Sensors Applications Symposium (SAS), Seoul, Korea, 12–14 March 2018; pp. 1–6. [Google Scholar] [CrossRef]
  25. Farina, D.; Merletti, R.; Enoka, R.M. The extraction of neural strategies from the surface EMG. J. Appl. Physiol. 2004, 96, 1486–1495. [Google Scholar] [CrossRef] [Green Version]
  26. Scholkmann, F.; Wolf, M. Measuring brain activity using functional near infrared spectroscopy: A short review. Spectrosc. Eur. 2012, 24, 6. [Google Scholar]
  27. Rocon, E.; Gallego, J.A.; Barrios, L.; Victoria, A.R.; Ibánez, J.; Farina, D.; Negro, F.; Dideriksen, J.L.; Conforto, S.; D’Alessio, T.; et al. Multimodal BCI-mediated FES suppression of pathological tremor. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 3337–3340. [Google Scholar]
  28. Pinti, P.; Aichelburg, C.; Gilbert, S.; Hamilton, A.; Hirsch, J.; Burgess, P.; Tachtsidis, I. A Review on the Use of Wearable Functional Near-Infrared Spectroscopy in Naturalistic Environments. Jpn. Psychol. Res. 2018, 60, 347–373. [Google Scholar] [CrossRef] [Green Version]
  29. Lloyd-Fox, S.; Blasi, A.; Elwell, C. Illuminating the developing brain: The past, present and future of functional near infrared spectroscopy. Neurosci. Biobehav. Rev. 2010, 34, 269–284. [Google Scholar] [CrossRef] [PubMed]
  30. World Medical Association. WMA Declaration of Helsinki–Ethical Principles for Medical Research Involving Human Subjects. JAMA 2013, 310, 2191–2194. [Google Scholar] [CrossRef] [Green Version]
  31. Leeb, R.; Sagha, H.; Chavarriaga, R. Multimodal fusion of muscle and brain signals for a hybrid-BCI. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 4343–4346. [Google Scholar]
  32. Buccino, A.P.; Keles, H.O.; Omurtag, A. Hybrid EEG-fNIRS asynchronous brain-computer interface for multiple motor tasks. PLoS ONE 2016, 11, e0146610. [Google Scholar]
  33. Ortega, P.; Zhao, T.; Faisal, A.A. HYGRIP: Full-Stack Characterization of Neurobehavioral Signals (fNIRS, EEG, EMG, Force, and Breathing) During a Bimanual Grip Force Control Task. Front. Neurosci. 2020, 14, 919. [Google Scholar] [CrossRef]
  34. Aryadoust, V.; Foo, S.; Ng, L.Y. What can gaze behaviors, neuroimaging data, and test scores tell us about test method effects and cognitive load in listening assessments? Lang. Test. 2021, 39, 56–89. [Google Scholar] [CrossRef]
  35. Maira, G.; Chiarelli, A.M.; Brafa, S.; Libertino, S.; Fallica, G.; Merla, A.; Lombardo, S. Imaging System Based on Silicon Photomultipliers and Light Emitting Diodes for Functional Near-Infrared Spectroscopy. Appl. Sci. 2020, 10, 1068. [Google Scholar] [CrossRef] [Green Version]
  36. Ramadan, R.A.; Vasilakos, A.V. Brain computer interface: Control signals review. Neurocomputing 2017, 223, 26–44. [Google Scholar] [CrossRef]
  37. Kim, M. Shedding Light on the Human Brain. Opt. Photon-News 2021, 32, 26–33. [Google Scholar] [CrossRef]
  38. Geissler, C.F.; Schneider, J.; Frings, C. Shedding light on the prefrontal correlates of mental workload in simulated driving: A functional near-infrared spectroscopy study. Sci. Rep. 2021, 11, 705. [Google Scholar] [CrossRef] [PubMed]
  39. Lamberti, N.; Manfredini, F.; Baroni, A.; Crepaldi, A.; Lavezzi, S.; Basaglia, N.; Straudi, S. Motor Cortical Activation Assessment in Progressive Multiple Sclerosis Patients Enrolled in Gait Rehabilitation: A Secondary Analysis of the RAGTIME Trial Assisted by Functional Near-Infrared Spectroscopy. Diagnostics 2021, 11, 1068. [Google Scholar] [CrossRef]
  40. Guo, W.; Sheng, X.; Liu, H.; Zhu, X. Toward an Enhanced Human–Machine Interface for Upper-Limb Prosthesis Control With Combined EMG and NIRS Signals. IEEE Trans. Hum.-Mach. Syst. 2017, 47, 564–575. [Google Scholar] [CrossRef]
  41. Feng, N.; Hu, F.; Wang, H.; Gouda, M.A. Decoding of voluntary and involuntary upper-limb motor imagery based on graph fourier transform and cross-frequency coupling coefficients. J. Neural Eng. 2020, 17, 056043. [Google Scholar] [CrossRef] [PubMed]
  42. Leff, D.; Orihuela-Espina, F.; Elwell, C.; Athanasiou, T.; Delpy, D.T.; Darzi, A.W.; Yang, G.-Z. Assessment of the cerebral cortex during motor task behaviours in adults: A systematic review of functional near infrared spectroscopy (fNIRS) studies. NeuroImage 2011, 54, 2922–2936. [Google Scholar] [CrossRef]
  43. Borrell, J.A.; Copeland, C.; Lukaszek, J.L.; Fraser, K.; Zuniga, J.M. Use-Dependent Prosthesis Training Strengthens Contralateral Hemodynamic Brain Responses in a Young Adult with Upper Limb Reduction Deficiency: A Case Report. Front. Neurosci. 2021, 15, 693138. [Google Scholar] [CrossRef]
  44. Matarasso, A.K.; Rieke, J.D.; White, K.; Yusufali, M.M.; Daly, J.J. Combined real-time fMRI and real time fNIRS brain computer interface (BCI): Training of volitional wrist extension after stroke, a case series pilot study. PLoS ONE 2021, 16, e0250431. [Google Scholar] [CrossRef]
  45. Luo, J.; Shi, W.; Lu, N.; Wang, J.; Chen, H.; Wang, Y.; Lu, X.; Wang, X.; Hei, X. Improving the performance of multisubject motor imagery-based BCIs using twin cascaded softmax CNNs. J. Neural Eng. 2021, 18, 036024. [Google Scholar] [CrossRef]
  46. Ang, K.K.; Guan, C.; Chua, K.S.G.; Ang, B.T.; Kuah, C.; Wang, C.; Phua, K.S.; Chin, Z.Y.; Zhang, H. A clinical study of motor imagery-based brain-computer interface for upper limb robotic rehabilitation. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, USA, 3–6 September 2009; pp. 5981–5984. [Google Scholar] [CrossRef]
  47. Wen, Y.; Avrillon, S.; Hernandez-Pavon, J.C.; Kim, S.J.; Hug, F.; Pons, J.L. A convolutional neural network to identify motor units from high-density surface electromyography signals in real time. J. Neural Eng. 2021, 18, 056003. [Google Scholar] [CrossRef]
  48. Prôa, R.; Balardin, J.; de Faria, D.D.; Paulo, A.M.; Sato, J.R.; Baltazar, C.A.; Borges, V.; Silva, S.M.C.A.; Ferraz, H.B.; Aguiar, P.D.C. Motor Cortex Activation During Writing in Focal Upper-Limb Dystonia: An fNIRS Study. Neurorehabilit. Neural Repair 2021, 35, 729–737. [Google Scholar] [CrossRef]
  49. Li, G.; Yuan, Y.; Ren, H.; Chen, W. fNIRS study of effects of foot bath on human brain and cognitive function. J. Mech. Med. Biol. 2021, 21, 2140022. [Google Scholar] [CrossRef]
  50. Gusnard, D.A.; Raichle, M.E. Searching for a baseline: Functional imaging and the resting human brain. Nat. Rev. Neurosci. 2001, 2, 685–694. [Google Scholar] [CrossRef] [PubMed]
  51. Gomez-Gil, J.; San-Jose-Gonzalez, I.; Nicolas-Alonso, L.F.; Alonso-Garcia, S. Steering a Tractor by Means of an EMG-Based Human-Machine Interface. Sensors 2011, 11, 7110–7126. [Google Scholar] [CrossRef] [Green Version]
  52. Sitaram, R.; Zhang, H.; Guan, C.; Thulasidas, M.; Hoshi, Y.; Ishikawa, A.; Shimizu, K.; Birbaumer, N. Temporal classification of multichannel near-infrared spectroscopy signals of motor imagery for developing a brain–computer interface. NeuroImage 2007, 34, 1416–1427. [Google Scholar] [CrossRef]
  53. Zimmermann, R.; Marchal-Crespo, L.; Edelmann, J.; Lambercy, O.; Fluet, M.C.; Riener, R.; Wolf, M.; Gassert, R. Detection of motor execution using a hybrid fNIRS-biosignal BCI: A feasibility study. J. Neuroeng. Rehabil. 2013, 10, 4. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Yoo, S.-H.; Santosa, H.; Kim, C.-S.; Hong, K.-S. Decoding Multiple Sound-Categories in the Auditory Cortex by Neural Networks: An fNIRS Study. Front. Hum. Neurosci. 2021, 15, 211. [Google Scholar] [CrossRef]
  55. Vélez-Guerrero, M.; Callejas-Cuervo, M.; Mazzoleni, S. Artificial Intelligence-Based Wearable Robotic Exoskeletons for Upper Limb Rehabilitation: A Review. Sensors 2021, 21, 2146. [Google Scholar] [CrossRef]
  56. Medina, F.; Perez, K.; Cruz-Ortiz, D.; Ballesteros, M.; Chairez, I. Control of a hybrid upper-limb orthosis device based on a data-driven artificial neural network classifier of electromyography signals. Biomed. Signal Process. Control. 2021, 68, 102624. [Google Scholar] [CrossRef]
  57. Holtzer, R.; Verghese, J.; Allali, G.; Izzetoglu, M.; Wang, C.; Mahoney, J.R. Neurological Gait Abnormalities Moderate the Functional Brain Signature of the Posture First Hypothesis. Brain Topogr. 2015, 29, 334–343. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Su, Y.; Li, W.; Bi, N.; Lv, Z. Adolescents Environmental Emotion Perception by Integrating EEG and Eye Movements. Front. Neurorobotics 2019, 13, 46. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Fazli, S.; Mehnert, J.; Steinbrink, J.; Curio, G.; Villringer, A.; Mueller, K.-R.; Blankertz, B. Enhanced performance by a hybrid NIRS–EEG brain computer interface. NeuroImage 2011, 59, 519–529. [Google Scholar] [CrossRef] [PubMed]
  60. Witkowski, M.; Cortese, M.; Cempini, M.; Mellinger, J.; Vitiello, N.; Soekadar, S.R. Enhancing brain-machine interface (BMI) control of a hand exoskeleton using electrooculography (EOG). J. Neuroeng. Rehabil. 2014, 11, 165. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Levels of upper limb amputation [3].
Figure 2. Methodology of the study.
Figure 3. Signal acquisition environment and optode placement. (a) Experimental setup showing optode placement on the motor cortex of a healthy subject. (b) Eight sources (S) and eight detectors (D) were positioned over the subject's motor cortex region of the brain to record fNIRS signals, with a separation of 3 cm, resulting in twenty channels.
Figure 4. After an initial 30 s rest, each functional near-infrared spectroscopy block consists of a 10 s activation and a 20 s rest. The total experiment duration for acquiring fNIRS signals is 11 min, comprising 12 trials in total.
Figure 5. Flow diagram of fNIRS-based motion intention recognition for the transhumeral prosthesis.
Figure 6. Optode status window. (a) Flawless optode connection with the head scalp; (b) faulty optode connection. The signal quality class can be read from the color bar shown alongside the optode status window.
Figure 7. List of good/bad channels used to remove bad channels from the analysis and signal classification.
Figure 8. Visualization of the recorded raw light intensity of each optode.
Figure 9. Disturbed/noisy signal before the grommets and covering head cap were incorporated. Blue lines represent the data acquired from the detectors, while the red line represents the source signal.
Figure 10. Clean signal after the grommets and covering head cap were incorporated.
Figure 11. The fNIRS data obtained from a healthy subject according to the experimental protocol. The upper signal is from the 760 nm source, and the lower one represents the 850 nm wavelength.
Figure 12. The filtered sample after the implementation of a band-pass filter in the range of 0.01–0.2 Hz.
Figure 13. Flow diagram of the ANN classifier.
Figure 14. ANN network architecture.
Table 1. Demographic characteristics of amputee subjects.

| Characteristic | A1 | A2 | A3 |
|---|---|---|---|
| Gender | Male | Male | Male |
| Age | 23 | 32 | 42 |
| Amputated Side | Right | Left | Right |
| Residual Limb Length | 14 cm | 18 cm | 10 cm |
| Time since Amputation | 7 Months | 24 Months | 145 Months |
Table 2. Offline classification accuracies (%) of fifteen healthy subjects for the single features signal mean (SM), signal peak (SP), and signal minimum (SMin) using the ANN classifier.

| Features | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 | S9 | S10 | S11 | S12 | S13 | S14 | S15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SM | 72.88 | 61.10 | 74.89 | 68.63 | 69.63 | 75.04 | 75.38 | 77.15 | 67.86 | 74.56 | 66.78 | 64.76 | 60.37 | 71.54 | 79.60 |
| SP | 67.85 | 68.89 | 76.90 | 69.03 | 70.03 | 72.22 | 72.28 | 65.38 | 75.77 | 69.76 | 69.74 | 69.56 | 65.47 | 71.94 | 59.81 |
| SMin | 74.99 | 63.84 | 73.49 | 68.13 | 69.13 | 77.00 | 74.44 | 71.15 | 71.75 | 66.62 | 66.87 | 70.61 | 64.74 | 69.22 | 67.60 |
Table 3. Offline classification accuracies (%) of three amputee subjects for the single features signal mean (SM), signal peak (SP), and signal minimum (SMin) using the ANN classifier.

| Features | A1 | A2 | A3 |
|---|---|---|---|
| SM | 69.26 | 61.65 | 57.05 |
| SP | 68.91 | 60.18 | 57.72 |
| SMin | 55.10 | 50.05 | 51.93 |
Table 4. Performance evaluation and comparison with existing classification models.

| Technique | Learning Method | Time Response | Number of Control Commands | Classification Accuracy |
|---|---|---|---|---|
| TD features [58] | LDA | 5.5 s | 2 | 72.82% |
| FD features [59] | LDA/SVM | 15 s | 2 | 83% |
| Raw fNIRS [22] | ANN | 4 s | 4 | 58% |
| TD features [60] | SVM | 0.5 s | 6 | 68.1% |
| Proposed framework | ANN | 320 µs | 6 | 78.65% |

