Article

Flexible Machine Learning Algorithms for Clinical Gait Assessment Tools

1
Department of Rehabilitation Medicine, University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
2
Department of Human Movement Sciences, University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
3
Oro Muscles B.V., 9715 CJ Groningen, The Netherlands
4
Department of Medical Biochemistry and Microbiology, Uppsala University, 751 23 Uppsala, Sweden
5
Department of Biomedical Engineering, University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
6
Center for Development and Innovation (CDI), University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
7
Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
*
Author to whom correspondence should be addressed.
Submission received: 19 May 2022 / Revised: 16 June 2022 / Accepted: 27 June 2022 / Published: 30 June 2022
(This article belongs to the Special Issue Sensor Technologies for Gait Analysis)

Abstract
The current gold standard of gait diagnostics is dependent on large, expensive motion-capture laboratories and highly trained clinical and technical staff. Wearable sensor systems combined with machine learning may help to improve the accessibility of objective gait assessments in a broad clinical context. However, current algorithms lack flexibility and require large training datasets with tedious manual labelling of data. The current study tests the validity of a novel machine learning algorithm for automated gait partitioning of laboratory-based and sensor-based gait data. The developed artificial intelligence tool was used in patients with a central neurological lesion and severe gait impairments. To build the novel algorithm, 2% and 3% of the entire dataset (567 and 368 steps in total, respectively) were required for assessments with laboratory equipment and inertial measurement units. The mean errors of machine learning-based gait partitions were 0.021 s for the laboratory-based datasets and 0.034 s for the sensor-based datasets. Combining reinforcement learning with a deep neural network allows the size of the training datasets to be reduced significantly, to <5%. The small amount of required training data provides end-users with a high degree of flexibility. Non-experts can easily adjust the developed algorithm and modify the training library depending on the measurement system and clinical population.

1. Introduction

Walking is the most important form of mobility and allows us to participate in labour, societal and sports-related activities. To regain a healthy, normal walking pattern after injury or disease is, therefore, one of the most important goals in clinical rehabilitation [1,2,3]. Effective gait rehabilitation is predicated on accurate diagnostics. The current gold standard of gait diagnostics is instrumented, laboratory-based three-dimensional (3D) motion-capture analysis, including measures of external forces and muscle activity (electromyography) (3D clinical gait analysis (3D CGA)) [4,5]. Using 3D CGA, clinicians can quantify abnormalities in joint motions, joint loading and muscle activity, and design patient-specific interventions. While 3D CGA has improved treatment outcomes after rehabilitation [4,5,6,7], it has some practical disadvantages. For example, 3D CGA is dependent on large, expensive motion-capture laboratories and highly trained clinical and technical staff to guarantee quality during the data processing and interpretation steps. These requirements for staff and equipment constrain the use of objective gait diagnostics to a limited number of highly specialized hospitals. To improve the accessibility of objective gait diagnostics and provide more patients with targeted, personalized gait rehabilitation, less technically demanding and laboratory-independent solutions for gait diagnostics are needed.
Wearable sensor systems coupled with machine learning analytics are an alternative to laboratory-based 3D CGA [8,9,10,11,12]. These sensor systems use predictive machine learning methods to automatically partition and analyse gait (e.g., foot-contact and foot-off events) from sensor signals (e.g., inertial measurement units (IMUs)) [8,9,10,11,12,13]. These sensor systems are mobile, cost-effective and allow the automation of some tasks that currently require laboratory equipment and skilled knowledge [8,9,10,11,12,13]. For example, partitioning gait into stance and swing phases is an important step in 3D CGA. Gait partitioning allows comparison with normative data and the formulation of a medical diagnosis. For instance, partitioned recordings of muscle activity (e.g., from electromyographic sensors) allow the diagnosis of muscle spasticity in patients with a central neurological lesion [4,5,6,7].
The current gold standard for gait partitioning uses laboratory-based force plate data to detect foot-contact and foot-off events [14]. While force-plate-based gait partitioning is very accurate, it (1) requires expensive and stationary equipment, (2) does not usually allow partitioning of more than 2–3 steps within a trial, (3) significantly loses accuracy in severely impaired patients with very small step sizes or impaired foot clearances and (4) requires trained technical staff to manually control the partitioning accuracy and correct partitions if needed. Machine learning-based gait partitioning is a promising alternative because it can be performed with a single, wearable and cheap IMU sensor on the foot or pelvis, which allows the inclusion of more steps in the diagnostic process and can be performed in the patients’ home environments [8,9,10,11,12].
While machine learning-based sensor systems are promising to improve the accessibility of CGA, they still have some limitations. One main limitation is the lack of technical flexibility (e.g., dependence on measurement systems) and clinical flexibility (e.g., dependence on specific patient groups). For example, the majority of currently used machine learning-based sensor systems were validated in healthy adults or built for rather narrow groups of patients with similar gait impairments [8,11]. Once a newly assessed patient differs too much from the original patient population, these algorithms significantly lose accuracy and become invalid. This lack of clinical flexibility was addressed by Kidziński et al. (2019) by making use of large training datasets of different walking patterns [15]. A total of 9092 annotated 3D CGA recordings (80% of the total dataset) were required to accurately identify swing and stance phases with artificial neural networks (long short-term memory) [15]. While the partitioning accuracy of the algorithm was good, the dependency on large training datasets limited its technical flexibility. For example, an entirely new training dataset would be required to re-build the algorithm from Kidziński et al. (2019) for use with an IMU sensor system or if one would like to add gait features other than foot-contact and foot-off events to the analysis. Extensive re-training procedures are infeasible and often impossible to perform for non-experts and less specialized hospitals, local clinics or physiotherapy departments.
We address the current limitations in machine learning-based gait assessment tools by developing a highly flexible machine learning method that requires only a few training datasets and can be easily modified by end-users without the need for laboratory-based ground truth data sources. The proposed solution consists of a novel, reinforced deep neural network and a dynamic, living training library that can change and adapt over time with end-user input (Oro Muscles B.V, Groningen, The Netherlands). Contrary to previous attempts, the newly designed algorithm recognizes patterns in motion signals (e.g., accelerometer data) and identifies gait features, such as foot-contact and foot-off events, through reinforcement, rather than statistics alone. We hypothesize that, by giving more autonomy to end-users through reinforcement learning, our proposed approach results in an order of magnitude less training data, as well as orders of magnitude more technical and clinical flexibility in applicable use cases.
We will first establish the flexibility and accuracy of the reinforced deep neural network for laboratory-based 3D CGA patient recordings and then in a wearable IMU sensor system. Finally, we present a clinical use case for the AI tool and IMU sensor system by visualizing AI-based partitions of the electromyographic (EMG) signal of the gastrocnemius muscle in patients with a central neurological lesion. Time-normalized visualizations of gastrocnemius muscle activity are a frequently used method to identify abnormalities in calf muscle activity and diagnose spasticity [4,5,6,7].

2. Materials and Methods

To address the main aim of the current study, we used a two-way approach. First, accelerometer data from 3D CGA patient recordings were used to train the AI tool for laboratory-based assessments. In the second step, the AI tool was trained and validated with IMU-based accelerometer data from the Oro Muscles IMU sensor system.

2.1. Algorithm Development for Laboratory-Based 3D CGA Data

2.1.1. 3D CGA Patient Recordings

Historical data recordings between April 2021 and June 2021 were selected from the database of the motion laboratory of the University Medical Center Groningen, The Netherlands. Datasets were included if patients had previously signed informed consent, were diagnosed with a central neurological lesion and full 3D CGA data had been acquired. The patient recordings consisted of walking trials of, on average, eight meters, performed during regular clinical visits at the motion laboratory of the UMCG, Groningen, The Netherlands. In total, 60 walking trials from 14 patients were included in the analysis. The right and left legs were treated separately, resulting in a total of 120 datasets.

2.1.2. 3D CGA Data Processing and Spatio-Temporal Parameter Computation

The 3D CGA patient recordings consisted of 3D marker position data of the plug-in gait model (2010) recorded at 100 Hz with 10 optical cameras (Vicon Vero) and Vicon Nexus 2.12 motion-capture software. Two AMTI force plates recorded the ground reaction force data at 1000 Hz during the gait recordings.
The 3D CGA-based identification of foot-contact and foot-off events was based on the in-built Vicon Nexus algorithm using force plate data. Initial foot contact was detected once the vertical ground reaction force exceeded 10 N. Foot-off was defined as the moment the vertical ground reaction force decreased below 10 N. All gait events were manually checked by an experienced lab technician and corrected if needed. In cases where force plate signals were corrupted because the gait pattern deviated too much, gait events were manually annotated by an experienced lab technician and visually checked by the principal investigator of this study.
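The 10 N thresholding described above can be sketched as follows. This is a minimal illustration with an assumed function name and synthetic data, not the in-built Vicon Nexus implementation; it also assumes the trial starts and ends with the foot off the plate.

```python
import numpy as np

def detect_gait_events(fz, fs=1000.0, threshold=10.0):
    """Detect foot-contact and foot-off times from a vertical ground
    reaction force signal fz (N) sampled at fs (Hz).

    Foot contact: fz rises above the threshold.
    Foot off: fz falls back below the threshold.
    Returns two arrays of event times in seconds.
    """
    above = fz > threshold
    # +1/-1 transitions in the boolean signal mark the crossings
    crossings = np.diff(above.astype(int))
    foot_contacts = np.flatnonzero(crossings == 1) + 1
    foot_offs = np.flatnonzero(crossings == -1) + 1
    return foot_contacts / fs, foot_offs / fs
```

As in the study, candidate events near the threshold would still need a visual check, since corrupted or partial force plate contacts can produce spurious crossings.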
Step lengths were computed based on the absolute distance between the left and right lateral malleolus markers from the plug-in gait 3D CGA model. The 3D CGA-based foot-contact events were used to define the moment of maximum step length. Gait speed was computed based on the absolute distance (m) travelled by the anterior superior spine marker divided by time (s). All spatio-temporal gait data were averaged across steps within a trial and across trials of the same condition (barefoot and shoes/orthotics) per participant.
The total number of steps made within a trial was computed from 3D ankle marker position data. First, the difference between the anterior–posterior left and right lateral malleolus marker positions was computed for each 3D CGA trial. In the next step, the number of peaks in the left–right ankle distance signal was used to count the number of steps per trial. The sum of all steps across trials was used as the total number of steps for each condition and participant. Custom Python scripts (v. 3.8) were used to compute the spatio-temporal parameters and total number of steps. The accuracy of the step detection algorithm was visually checked by the principal investigator and corrected if needed.
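The peak-counting step above can be illustrated with a simple local-maximum count. The function name and the amplitude threshold are assumptions for this sketch; the study's actual Python scripts are not published, and a library routine such as `scipy.signal.find_peaks` would typically be used in practice.

```python
import numpy as np

def count_steps(ankle_dist, min_height=0.05):
    """Count steps as peaks in the anterior-posterior distance (m)
    between the left and right lateral malleolus markers.

    min_height is an amplitude threshold (assumed value) that ignores
    noise near the zero crossings of the distance signal.
    """
    n_peaks = 0
    for i in range(1, len(ankle_dist) - 1):
        is_peak = (ankle_dist[i] > ankle_dist[i - 1]
                   and ankle_dist[i] >= ankle_dist[i + 1])
        if is_peak and ankle_dist[i] > min_height:
            n_peaks += 1
    return n_peaks
```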

2.1.3. Implementation, Training and Accuracy Testing of the AI Tool

To build the AI tool, an existing ML algorithm, Saguaro, consisting of a Hidden Markov Model, self-organizing map and a generative module [16], was combined with end-user feedback through a graphical user interface (GUI) and reinforcement learning. The original self-organizing map was replaced with a proprietary generative deep learning network (fuzzy logic algorithm) (Oro Muscles BV, Groningen, The Netherlands). The Oro Muscles deep learning network allowed the accommodation of user inputs for unsupervised gait partitioning. The implementation of the newly developed ML algorithm and training workflow is depicted in Figure 1 with sample user interfaces and is explained in more detail below.
First, the recorded EMG and IMU signals were pre-processed with a smoothing algorithm (fast Fourier transform (FFT) and Kolmogorov–Zurbenko (KZ) filter) [17]. The processed EMG and IMU signals were then fed into a Saguaro-like unsupervised algorithm to segment the data into stance and swing phases (unsupervised learning). Next, the partitioned results were displayed in a GUI and corrected by the user if needed. The end-user manually selected the signal pattern of interest from the GUI (e.g., start and end of the acceleration signal of the swing phase). During this step, the user was guided by the laboratory-based gait partitions. After the end-user selection of the correct patterns, the final start and end coordinates of the swing phase partition were fed back to the ML algorithm in a reinforced learning loop. A custom-made Java script was implemented in the AI tool’s user interface to allow the transfer of the end-user inputs to other datasets.
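The Kolmogorov–Zurbenko filter is, in essence, an iterated centered moving average. A minimal sketch is given below; the window length and iteration count are illustrative, not the values used in the study.

```python
import numpy as np

def kz_filter(x, window=5, iterations=3):
    """Kolmogorov-Zurbenko filter: apply a centered moving average
    of the given window length repeatedly.

    Iterating the moving average sharpens the filter's frequency
    roll-off compared with a single pass.
    """
    kernel = np.ones(window) / window
    y = np.asarray(x, dtype=float)
    for _ in range(iterations):
        y = np.convolve(y, kernel, mode='same')
    return y
```

Note that `mode='same'` attenuates the first and last few samples, so a short stretch at each end of a recording should be treated with caution.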
In the current study, the AI tool was trained with manually annotated hints between foot-contact and foot-off events from the MTP-2 marker acceleration signal (Figure 2 and Supplementary Materials File S1). Next, a fuzzy logic algorithm in the time domain was applied to match the selected interval against any subset of the recording. This step produced interval instances, or “cycles”, where overlaps between these cycles were restricted. To ensure that the cycles of motion (e.g., swing phases) did not coincide, a sliding window (Hamming distance) was used. This procedure was repeated iteratively until the algorithm converged on a discrete set of cycles (Figure 2).
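The sliding-window matching of a user-annotated interval against the rest of the recording can be illustrated with a simplified stand-in. The actual fuzzy logic algorithm is proprietary (Oro Muscles B.V.), so the Euclidean distance, the `max_dist` threshold and the greedy non-overlap selection below are assumptions for illustration only.

```python
import numpy as np

def find_cycles(signal, template, max_dist, min_gap=None):
    """Slide a user-annotated template over the signal and return the
    sorted start indices of non-overlapping matches.

    Matches are accepted best-first (lowest distance) as long as they
    stay below max_dist and at least min_gap samples apart, which
    mimics the restriction that cycles must not coincide.
    """
    n, m = len(signal), len(template)
    if min_gap is None:
        min_gap = m  # by default, forbid overlapping cycles entirely
    dists = np.array([np.linalg.norm(signal[i:i + m] - template)
                      for i in range(n - m + 1)])
    starts = []
    for i in np.argsort(dists):
        if dists[i] > max_dist:
            break  # remaining candidates are even worse matches
        if all(abs(int(i) - s) >= min_gap for s in starts):
            starts.append(int(i))
    return sorted(starts)
```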
The force plate data and manual partitions from the 3D CGA system were used as the ground truth data for assessing the AI tool’s partitioning accuracy. Partitioning accuracy was determined by computing the mean difference between foot-contact and foot-off events from the ground truth and the AI tool’s partitions. If the partitioning error exceeded 0.060 s, the corresponding dataset was transitioned into the training set and another hint was created.
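The accuracy computation and the 0.060 s re-training criterion can be expressed as follows; the function name is illustrative, and the sketch assumes ground-truth and predicted events have already been matched one-to-one.

```python
import numpy as np

def partition_error(ground_truth, predicted, max_error=0.060):
    """Mean absolute timing error (s) between matched gait events.

    Also returns whether any single event error exceeds max_error,
    in which case the dataset would be moved into the training set
    and another hint created.
    """
    errors = np.abs(np.asarray(ground_truth) - np.asarray(predicted))
    return errors.mean(), bool((errors > max_error).any())
```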
Custom MATLAB (R2021a, MathWorks, Natick, MA, USA) and Python (v. 3.8) scripts were used for data processing, analysis, feature extraction and computation of spatio-temporal gait variables.

2.2. Experimental Validation with IMU System and Clinical Use Case

Experimental Procedure and Data Collection

The first five consecutive patients visiting the motion laboratory between June 2021 and December 2021 who were eligible for inclusion were asked to participate in the IMU-based validation study of the AI tool. Next to the plug-in gait marker set, Cometa EMG sensors were placed according to the SENIAM guidelines on all major superficial lower limb muscle groups (vastus medialis, rectus femoris, semitendinosus, medial head of the gastrocnemius, soleus and tibialis anterior muscles). EMG data were recorded at 1000 Hz as part of the usual clinical 3D CGA. In addition, two Oro Muscles IMUs and EMG sensors were placed at the shank and foot and at the gastrocnemius and tibialis anterior muscles, respectively (Figure 3). The Oro Muscles EMG sensors were placed proximal to the Cometa EMG sensors. The multiple channels of the Oro sensor system were connected together via a Raspberry Pi for data acquisition. Oro EMG and IMU data were recorded at sample rates of 500 Hz and 100 Hz, respectively. The Oro Muscles sensor data were time-synchronized with the 3D CGA-based data in post-processing by syncing the peaks of the accelerometer data of each step of the patient through a custom MATLAB script.
3D CGA-based event detection was the same as described in Section 2.1. Cometa and Oro Muscles EMG data were band-pass-filtered between 20 and 450 Hz with a fourth-order Butterworth filter. To create the linear envelope, EMG data were rectified and low-pass-filtered at 10 Hz with a fourth-order Butterworth filter.
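The described EMG processing chain can be sketched with `scipy.signal`; the function name is an assumption, and zero-lag (forward-backward) filtering via `filtfilt` is used here so the envelope is not shifted in time relative to the gait events.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(emg, fs=1000.0):
    """Linear envelope of a raw EMG signal sampled at fs (Hz).

    Steps, as described in the methods: band-pass 20-450 Hz
    (4th-order Butterworth), full-wave rectification, then
    low-pass at 10 Hz (4th-order Butterworth).
    """
    b, a = butter(4, [20, 450], btype='bandpass', fs=fs)
    band = filtfilt(b, a, emg)          # remove offset and noise
    rectified = np.abs(band)            # full-wave rectification
    b, a = butter(4, 10, btype='lowpass', fs=fs)
    return filtfilt(b, a, rectified)    # smooth linear envelope
```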

3. Results

3.1. Laboratory-Based 3D CGA

3.1.1. Training and Test Datasets

In total, five steps and six hints from four patients were fed into the AI tool for algorithm training. Table 1 gives the spatio-temporal gait parameters of the trials included in the training dataset. Each row denotes data from one participant and the corresponding condition (barefoot and shoes/orthotics). The trials used for training were excluded from the test set. The 3D CGA test dataset consisted of a total of 567 steps (291 right and 276 left steps) from 13 different patients (eight children (12.8 ± 3 years) and five adults (43.8 ± 14.6 years)) with severe walking dysfunctions (Table 2). Hence, less than 2% of the total number of included steps was used for algorithm training. From one patient, two datasets, before and after the treatment of muscle spasticity, were included in the validation dataset. Nine of the included patients were diagnosed with spastic cerebral palsy, one patient with dystonic cerebral palsy, one with an incomplete spinal cord lesion, one patient with an unknown lesion of the central nervous system and one with primary lateral sclerosis.

3.1.2. Partitioning Accuracy

When compared with the ground truth data, the absolute average difference in foot-off and foot-contact event detection was 0.021 s (±0.021 s). The maximum time difference between laboratory-based and machine learning-based event detection was 0.08 s.

3.2. Wearable Sensor-Based CGA

3.2.1. Participant Characteristics

In total, five patients participated in the experimental validation study with the wearable sensor system. Table 3 gives the average spatio-temporal gait parameters of the included participants. On average, the patients were 59.2 years old (±14.6) and had a BMI of 27.2 (±4.5). Two participants were diagnosed with a stroke, one with cerebral palsy, one with an incomplete spinal cord injury and one patient with multiple sclerosis. Table 3 provides an overview of the basic gait parameters of the included patients and the corresponding barefoot and shoe conditions.

3.2.2. Datasets and Algorithm Training

For the partitioning of the IMU sensor system data, one step from every walking condition and each patient (barefoot, with shoes/orthotics) was used as a hint (nine hints in total). Hence, to re-train the AI tool and allow the partitioning of all 368 steps in swing and stance phases, less than 3% of the entire dataset was required for re-training.

3.2.3. Partitioning Accuracy

When compared with laboratory-based 3D CGA partitioning, the average 3D CGA-based AI partitioning error was 0.035 ± 0.028 s and the sensor-based AI partitioning error was 0.032 ± 0.024 s.

3.2.4. Clinical Use Case

Figure 4 shows an example case of a visualization of the AI tool-based EMG partitioning of the gastrocnemius muscle activity into stance and swing phases. Figures in the Supplementary Material provide all EMG and accelerometer partitions of the included participants.

4. Discussion

Combining a Saguaro algorithm with end-user-informed reinforcement learning and deep neural networks allows successful gait partitioning of 3D CGA data recordings as well as wearable, sensor-based IMU recordings. More importantly, only a few manually annotated training hints were required to achieve accurate partitions. Only 2% (six hints out of 567 steps) of the entire dataset was required to accurately partition the 3D CGA recordings in severely impaired patients with a range of different gait impairments (Table 2). For sensor-based partitioning, about 3% (nine hints out of 368 steps in total) was required to achieve accurate partitioning of swing and stance phases. In comparison, previously used machine learning models required up to 80% of the entire dataset (>9000 datasets) [15] to achieve a similar level of partitioning accuracy or were confined to healthy subjects or small groups of rather similarly walking patients [8,11]. Therefore, the proposed ML workflow provides a new methodology to significantly reduce the burden associated with algorithm training as compared with previous ML-based methods for gait partitioning [8,15].
There is one main difference between our approach and current machine learning approaches for gait partitioning. By incorporating reinforcement learning and a deep neural network, the current AI tool does not try to discern patterns in the signals alone, but is aided by end-users to select patterns of interest. While this reinforcement does require more user input, it reduces the required training data by orders of magnitude and yields a high degree of technical and clinical flexibility. Only a few manually annotated reinforcement hints are required to adjust the deep neural network and search for new patterns in the signal. These features allow clinicians to easily adjust the AI tool to different measurement systems (e.g., laboratory vs. IMU systems), the analysis of different output signals (e.g., acceleration vs. velocity signals) or locomotor activities (e.g., walking vs. running). This clinical and technical flexibility is an important aspect for facilitating the use of CGA in a broad clinical context and making it accessible for more patients and non-specialized clinics or physiotherapy practices.
Another important feature of the novel AI tool is that it does not require the exact time points from laboratory-based ground truth data sources (e.g., force-plate-based partitions) for training. Instead, the AI tool uses manual inputs from the GUI to identify relevant patterns in the signal (Oro Muscles, B.V.; Figure 1). Once the moment of foot contact and foot off is roughly known, the AI tool detects the exact start and end points of the stance and swing phases automatically through a deep neural network incorporating a fuzzy logic algorithm in the time domain. This implementation of a fuzzy logic algorithm in the training process allows end-users to easily modify any existing training library. Even though it was not tested in the current study, end-users might solely rely on time-synchronized video recordings to modify the training library.
Next to cost-effectiveness and usability, AI-based solutions have high potential to improve diagnostic validity, decrease personnel costs associated with CGA, improve the accessibility of CGA to non-specialized hospitals and physiotherapy practices and facilitate remote gait rehabilitation. The novel AI tool allowed us to include more than 10 times as many steps (18.4 steps on average) as traditional force-plate-based methods (1–2 steps on average) per patient. Including more steps in the diagnostic process improves the diagnostic validity, especially in young children and patients with complex movement disorders. In addition, automating gait partitioning can reduce the time required for data processing during traditional 3D CGA by about 30 min per trial [15], leading to high savings in costs and engineering time. Finally, the AI tool has the potential to facilitate remote gait rehabilitation by allowing the automatic partitioning of IMU signals, independent of the type of measurement system or output data.
While our results and previous studies showed that machine learning has the potential to replace force-plate-based gait partitioning, the implementation of such methods is scarce, and it remains challenging to automate diagnostic steps in the CGA process. We made a first step by visualizing the AI-based partitioned muscle activity of the gastrocnemius muscle (Figure 4 and Supplementary Materials File S1). Visualizing gastrocnemius muscle activity as a percentage of the gait cycle allows comparison with normative data and the identification of abnormalities in the timing of muscle activity and, hence, muscle spasticity. However, to formulate a distinct diagnosis of muscle spasticity, one would need additional information (e.g., time-synchronized knee and ankle angular velocities or gastrocnemius-muscle-lengthening velocities) [4,5,6,7]. Until now, this kinematic information could only be acquired from complex kinematic models or wearable sensor systems with multiple IMUs on the lower limb segments [18]. Future studies need to establish whether incorporating kinematic models or multiple IMUs into AI-based systems is feasible.
We propose that AI-based solutions should not aim to replace laboratory-based 3D CGA systems and biomechanical interpretation steps, but should aim to provide non-specialized clinics with the ability to perform basic assessments or screening of key gait functions. For example, sensor-based gait assessment tools could be used to inform clinicians whether or not abnormalities in muscle activity are present and whether further investigation with more advanced systems is required. Other feasible use cases for AI-based sensor solutions include automating the assessment of abnormal foot positioning during stance, problems in foot clearance or deficiencies in spatio-temporal parameters (e.g., stride time variability or gait speed). Spatio-temporal parameters are especially relevant because they are an important metric of overall mobility and are often used to evaluate the effect of interventions [1,2,3].
The following aspects warrant consideration when using the AI tool in daily clinical care or for research purposes. When training the AI tool for laboratory-based 3D CGA accelerometer data, one needs to account for the polarity of the acceleration signal with respect to the walking direction. The change in polarity when changing walking direction in the laboratory doubles the number of hints required. Using input data that do not change polarity as a consequence of the walking direction (e.g., kinematics or IMU systems) could, therefore, halve the number of required manually annotated hints.
Before actual use in clinics, potential end-users may need to re-train the AI tool based on their own patient population, measurement system and primary data source. While all algorithms are implemented in C++ and run on Linux, MacOS and Windows 10, without any dependencies on third-party packages, their actual implementation would require some technical expertise. The GUI for selecting swing intervals is written in Java and accesses the partitioning algorithm via the Java Native Interface (JNI). Finally, before implementation into daily clinical patient care, the user-friendliness of the AI tool should be improved and further validation steps in larger patient populations are required.

5. Conclusions

Combining reinforcement learning with a deep neural network allows significantly smaller training datasets (<5%) with comparable accuracy in gait partitioning for laboratory-based 3D CGA as well as wearable sensor systems. The low number of required training data and ease of training through the GUI provide end-users with the flexibility to use the AI tool in different measurement systems and clinical populations. Future studies will show the potential of the novel AI tool to allow automatic diagnostics of foot contact and spatio-temporal gait parameters, and facilitate the accessibility of CGA in a broad clinical context.

Supplementary Materials

The following supporting information can be downloaded at: https://0-www-mdpi-com.brum.beds.ac.uk/article/10.3390/s22134957/s1, provided as .zip files. File S1: AI-based gait partitions of all included participants, including AI-based EMG partitions.

Author Contributions

Conceptualization, C.G., J.M.H., H.T. and B.S.; methodology, C.G., H.T., J.M.H. and M.G.; software, H.T., C.G. and M.G.; validation, C.G., H.T. and J.M.H.; formal analysis, C.G. and H.T.; investigation, C.G.; resources, C.G. and H.T.; data curation, C.G., H.T., M.G. and A.R.; writing—original draft preparation, C.G.; writing—review and editing, C.G., J.M.H., H.T., B.S., M.G. and A.R.; visualization, C.G. and H.T.; supervision, J.M.H. and B.S.; project administration, C.G., J.M.H. and B.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received 10% of its funding from Oro Muscles B.V., 9715 CJ Groningen, The Netherlands; [email protected] (H.T.).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Medical Ethics Review Board of the University Medical Center, Groningen. The retrospective study was approved on 7 April 2021 (METc 2021/226). The clinical validation study was approved on 4 March 2021 (METc 2021/110).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical and privacy reasons.

Acknowledgments

We thank Tanya Colonna for fruitful discussion and support during this study.

Conflicts of Interest

The co-authors Hobey Tam and Manfred Grabherr are shareholders of Oro Muscles B.V.

Figure 1. AI tool workflow for implementation and training, utilizing the Saguaro algorithm, user feedback and reinforcement learning. Panel (a) depicts the unsupervised partitions (5–10; green and blue vertical lines) produced by the Saguaro algorithm on the EMG and accelerometer signals. RMS = root mean square. Panel (b) depicts the final partitions after reinforcement learning on the EMG and accelerometer signals (red arrows indicate swing phases).
Figure 2. Sample IMU accelerometer signal and AI-based swing phase partitions (green shaded areas) of four out of five gait cycles. The user feedback from the GUI (Figure 1) was used to partition all datasets not included in training. Unshaded areas represent stance phases. Laboratory-based timings of foot-contact and foot-off events (red arrows) were used as ground truth to compute the partitioning error.
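The partitioning error mentioned in the Figure 2 caption is the timing difference between AI-based gait events and the laboratory-based ground truth. A minimal sketch of this comparison is given below; it is an illustration, not the authors' implementation, and the event times shown are hypothetical:

```python
import numpy as np

def partitioning_error(detected, reference):
    """Mean absolute timing error (in seconds) between AI-detected gait
    events and laboratory-based ground-truth events, matched one-to-one
    in chronological order."""
    detected = np.asarray(detected, dtype=float)
    reference = np.asarray(reference, dtype=float)
    if detected.shape != reference.shape:
        raise ValueError("event lists must have equal length")
    return float(np.mean(np.abs(detected - reference)))

# Hypothetical foot-contact times (s): AI tool vs. 3D CGA ground truth
ai_events = [1.02, 2.11, 3.25]
lab_events = [1.00, 2.13, 3.22]
print(round(partitioning_error(ai_events, lab_events), 3))  # 0.023
```

Averaging this quantity over all foot-contact and foot-off events yields summary errors such as the 0.021 s (laboratory) and 0.034 s (IMU) values reported in the abstract.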
Figure 3. Oro Muscles B.V. and 3D CGA sensor set-up.
Figure 4. Representative sample of AI-based EMG partitioning of the right gastrocnemius muscle. The upper panel shows the Oro Muscles EMG envelope of a representative participant; the lower panel shows the accelerometer signal from the right-foot IMU sensor; the green shaded areas indicate the AI-partitioned swing phases (gait cycles 1–4); the dashed vertical red lines indicate the 3D CGA-based identification of foot-strike and toe-off events (red arrows from left to right).
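An EMG envelope such as the one shown in the upper panel of Figure 4 is commonly obtained as a moving root-mean-square of the raw signal. The sketch below illustrates this generic technique; the sampling rate, window length, and synthetic burst are assumptions for demonstration, not the values used in the study:

```python
import numpy as np

def rms_envelope(emg, window):
    """Moving root-mean-square envelope of a raw EMG signal.
    `window` is the smoothing window length in samples."""
    emg = np.asarray(emg, dtype=float)
    kernel = np.ones(window) / window  # moving average of the squared signal
    return np.sqrt(np.convolve(emg ** 2, kernel, mode="same"))

fs = 1000                              # Hz, assumed sampling rate
t = np.arange(0, 1, 1 / fs)
# Toy activation burst between 0.4 s and 0.6 s (stand-in for a raw EMG trace)
raw = np.sin(2 * np.pi * 80 * t) * ((t > 0.4) & (t < 0.6))
env = rms_envelope(raw, window=50)
print(env.shape)                       # (1000,)
```

The envelope is flat where the muscle is silent and rises during the activation burst, which is what makes it a convenient input for swing-phase partitioning.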
Table 1. Mean and standard deviation (std) of gait parameters in the training dataset.
| Condition | Step Length Left, Mean (m) | Step Length Right, Mean (m) | Step Length Left, std (m) | Step Length Right, std (m) | Gait Speed (m/s) | Number of Included Steps |
|---|---|---|---|---|---|---|
| Barefoot | 0.37 | 0.389 | 0.052 | 0.038 | 0.876 | 1 |
| Barefoot | 0.465 | 0.473 | 0.013 | 0.03 | 1.019 | 1 |
| Barefoot | 0.282 | 0.444 | 0.037 | 0.013 | 0.389 | 2 |
| Barefoot | 0.457 | 0.417 | 0.012 | 0.02 | 0.797 | 1 |
| Mean | 0.394 | 0.431 | 0.029 | 0.025 | 0.770 | |
| std | 0.086 | 0.036 | 0.019 | 0.011 | 0.270 | |
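The Mean and std rows of these tables are plain column statistics over the individual trials. As a sanity check, the gait-speed summary of Table 1 is reproduced below; matching the tabulated 0.270 requires the sample standard deviation (ddof=1), not the population one:

```python
import numpy as np

# Gait speeds (m/s) of the four training trials from Table 1
speeds = np.array([0.876, 1.019, 0.389, 0.797])
print(round(float(speeds.mean()), 3))           # 0.77
print(round(float(speeds.std(ddof=1)), 3))      # 0.27
```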
Table 2. Mean and standard deviation (std) of gait parameters in the test dataset.
| Condition | Step Length Left, Mean (m) | Step Length Right, Mean (m) | Step Length Left, std (m) | Step Length Right, std (m) | Gait Speed (m/s) | Number of Steps Left | Number of Steps Right |
|---|---|---|---|---|---|---|---|
| Barefoot | 0.56 | 0.609 | 0.047 | 0.043 | 1.292 | 16 | 16 |
| Barefoot | 0.318 | 0.345 | 0.027 | 0.021 | 0.837 | 24 | 23 |
| Shoes/Orthotics | 0.341 | 0.36 | 0.052 | 0.041 | 0.87 | 7 | 7 |
| Barefoot | 0.471 | 0.496 | 0.032 | 0.044 | 1.053 | 23 | 21 |
| Barefoot | 0.451 | 0.452 | 0.036 | 0.037 | 1.028 | 20 | 20 |
| Barefoot | 0.298 | 0.315 | 0.037 | 0.045 | 0.823 | 23 | 22 |
| Barefoot | 0.306 | 0.457 | 0.022 | 0.016 | 0.399 | 5 | 4 |
| Barefoot | 0.507 | 0.501 | 0.049 | 0.013 | 1.256 | 19 | 19 |
| Barefoot | 0.446 | 0.423 | 0.022 | 0.016 | 0.814 | 25 | 22 |
| Barefoot | 0.538 | 0.502 | 0.026 | 0.063 | 1.153 | 24 | 23 |
| Barefoot | 0.389 | 0.439 | 0.043 | 0.025 | 0.771 | 21 | 20 |
| Barefoot | 0.573 | 0.604 | 0.023 | 0.017 | 0.998 | 20 | 18 |
| Barefoot | 0.511 | 0.521 | 0.05 | 0.03 | 1.092 | 23 | 21 |
| Barefoot | 0.336 | 0.341 | 0.036 | 0.024 | 0.845 | 32 | 30 |
| Barefoot | | 0.446 | | 0.015 | 0.699 | 9 | 10 |
| Mean | 0.432 | 0.454 | 0.036 | 0.030 | 0.929 | 19.4 | 18.4 |
| std | 0.099 | 0.089 | 0.011 | 0.015 | 0.231 | 7.3 | 6.7 |
Table 3. Mean and standard deviation (std) of gait parameters in the clinical validation study.
| Condition | Step Length Left, Mean (m) | Step Length Right, Mean (m) | Step Length Left, std (m) | Step Length Right, std (m) | Gait Speed (m/s) | Number of Steps Left | Number of Steps Right |
|---|---|---|---|---|---|---|---|
| Barefoot | 0.149 | 0.24 | 0.06 | 0.056 | 0.134 | 7 | 7 |
| Barefoot | 0.467 | 0.501 | 0.046 | 0.045 | 1.133 | 29 | 30 |
| Shoes/Orthotics | 0.439 | 0.485 | 0.053 | 0.053 | 0.979 | 43 | 41 |
| Barefoot | 0.254 | 0.032 | 0.014 | 0.01 | 0.184 | 27 | 21 |
| Shoes/Orthotics | 0.278 | 0.058 | 0.017 | 0.025 | 0.228 | 11 | 12 |
| Barefoot | 0.222 | 0.25 | 0.023 | 0.011 | 0.287 | 10 | 11 |
| Shoes/Orthotics | 0.284 | 0.362 | 0.024 | 0.022 | 0.449 | 8 | 8 |
| Barefoot | 0.475 | 0.14 | 0.024 | 0.022 | 0.237 | 24 | 22 |
| Shoes/Orthotics | 0.478 | 0.186 | 0.02 | 0.042 | 0.273 | 29 | 28 |
| Mean | 0.338 | 0.250 | 0.031 | 0.031 | 0.433 | 20.9 | 20 |
| std | 0.119 | 0.160 | 0.016 | 0.016 | 0.344 | 11.8 | 10.9 |

Greve, C.; Tam, H.; Grabherr, M.; Ramesh, A.; Scheerder, B.; Hijmans, J.M. Flexible Machine Learning Algorithms for Clinical Gait Assessment Tools. Sensors 2022, 22, 4957. https://0-doi-org.brum.beds.ac.uk/10.3390/s22134957

