Article

A Deep Learning Approach for Gait Event Detection from a Single Shank-Worn IMU: Validation in Healthy and Neurological Cohorts

1 Department of Neurology, Kiel University, 24105 Kiel, Germany
2 Innovative Implant Development (Fracture Healing), Division of Surgery, Saarland University, 66421 Homburg, Germany
3 Institute of Electrical Engineering and Information Technology, Faculty of Engineering, Kiel University, 24143 Kiel, Germany
* Author to whom correspondence should be addressed.
Submission received: 22 April 2022 / Revised: 12 May 2022 / Accepted: 17 May 2022 / Published: 19 May 2022
(This article belongs to the Special Issue Human and Animal Motion Tracking Using Inertial Sensors II)

Abstract
Many algorithms use 3D accelerometer and/or gyroscope data from inertial measurement unit (IMU) sensors to detect gait events (i.e., initial and final foot contact). However, these algorithms often require knowledge about sensor orientation and use empirically derived thresholds. As alignment cannot always be controlled for in ambulatory assessments, methods are needed that require little knowledge of sensor location and orientation, e.g., a convolutional neural network-based deep learning model. Therefore, 157 participants from healthy and neurologically diseased cohorts walked 5 m distances at slow, preferred, and fast walking speed, while data were collected from IMUs on the left and right ankle and shank. Gait events were detected and stride parameters were extracted using a deep learning model and an optoelectronic motion capture (OMC) system for reference. The deep learning model consisted of convolutional layers using dilated convolutions, followed by two independent fully connected layers to predict whether a time step corresponded to the event of initial contact (IC) or final contact (FC), respectively. Results showed a high detection rate for both initial and final contacts across sensor locations (recall ≥ 92%, precision ≥ 97%). Time agreement was excellent, as evidenced by the median time error (0.005 s) and corresponding inter-quartile range (0.020 s). The extracted stride-specific parameters were in good agreement with parameters derived from the OMC system (maximum mean difference 0.003 s and corresponding maximum limits of agreement (−0.049 s, 0.051 s) for a 95% confidence level). Thus, the deep learning approach was considered a valid approach for detecting gait events and extracting stride-specific parameters with little knowledge of the exact IMU location and orientation in conditions with and without walking pathologies due to neurological diseases.

1. Introduction

Gait deficits are common in older adults and possibly reflect the presence of an underlying neurodegenerative disease [1,2]. For example, conversion to Parkinson’s Disease [3] or conversion from mild cognitive impairment to Alzheimer’s Disease [4,5] are linked with changes in spatiotemporal gait parameters. Similarly, temporal gait parameters are different for stroke patients [6,7] and patients with multiple sclerosis [8,9] when compared to healthy controls. To objectively quantify gait deficits, stride-specific parameters such as stride time or stride length are often used [10]. The beginning and end of a stride are determined from two successive initial contacts (ICs) of the same foot [11,12]. The IC is the instant at which the foot contacts the ground; together with the instant at which the foot leaves the ground (final contact, FC), it divides each stride into a stance and a swing phase [13,14]. The events of IC and FC, also referred to as gait events, are commonly determined using force or pressure measuring devices [14] or marker-based optoelectronic motion capture systems (OMC; henceforth referred to as the marker-based system or method) [15,16]. These systems are relatively expensive and restricted to use in specialized laboratories [17,18]. As there is increasing evidence that gait measured in the lab does not reflect daily-life gait [19,20,21], there is growing interest in measurement systems that allow for continuous gait analysis in ambulatory settings. Therefore, the use of inertial measurement units (IMUs) is especially attractive, as these can be used to measure gait in ecologically valid environments, such as the home environment, thereby painting a more complete picture of health status [22,23] and providing clinical information that is complementary to standardized lab-based assessments [20,21,24,25].
Previous research suggests that gait event detection is more accurate using an IMU worn on a lower limb (e.g., shank or foot) compared to an IMU worn on the low back [26,27,28]. In order to get from abstract IMU sensor readings to clinically relevant gait parameters (i.e., from accelerations and angular velocities to stride times) [10], different algorithmic approaches have been developed in the last twenty years of clinical gait research. A recent study evaluated a cross-section of these algorithms for different sensor locations on the lower leg and foot [29]. The algorithms were categorized according to which signals were analyzed, for example, the angular velocity about the medio-lateral axis or the accelerations along vertical and antero-posterior axes. This means the sensor readings need to be linked with the anatomical axes, that is, one needs to know which sensor axis aligns with, for example, the medio-lateral axis. In most approaches, it is simply assumed that due to sensor attachment, the sensor axis aligns roughly with the anatomical axis of interest [30,31,32,33,34,35,36], or an additional calibration procedure (e.g., [37]) is required [29,38]. In ambulatory assessments, however, study participants often attach the sensor themselves, for example, after showering, and therefore the sensor location and alignment cannot be controlled for. Furthermore, it is unlikely that each time the sensor is (re-)attached, study participants, especially those with gait deficits, perform a calibration procedure that usually consists of holding a pre-defined pose and performing some known movement sequences [39,40].
Taken together, this drives the need for an approach that is invariant to sensor orientation and is applicable across a variety of pathological gait patterns. In the field of image analysis, similar requirements have been successfully addressed by algorithms that share a common underlying methodology referred to as deep learning (DL) [10,41,42], for example, in differentiating diseased from healthy cells [43]. The main advantage of DL is that rather than relying on expert-defined, hand-crafted features, the algorithm learns relevant data representations automatically [41,44]. Furthermore, DL approaches allow for individualization of the algorithm to a specific patient [45,46,47]. Previously, DL approaches for wearable IMU data have successfully been applied in classification of bradykinesia [44], detection of freezing of gait [48], and prediction of spatiotemporal gait parameters in people with osteoarthritis and total knee arthroplasty [49]. DL was used to detect gait events from marker-based motion capture and showed improved performance when compared to conventional, often heuristics-based, algorithms [50,51,52]. Another study used a DL approach to detect gait events from either three IMUs (worn on the low back, and both ankles) or a single IMU (worn on the low back) and showed that the time error was considerably smaller for the deep learning algorithm than for a commonly applied wavelet-based approach [53].
To the best of our knowledge, this is the first study that validates the performance of a DL approach for detecting gait events in a heterogeneous cohort of healthy participants and people with neurological diseases, at multiple self-selected walking speeds, from short walking distances, using a single sensor that can be worn on either side, either laterally just above the ankle joint or proximally just below the knee joint.
The structure of the paper is as follows: in the Materials and Methods section, the data collection, data pre-processing, and model architecture are introduced. The Results section presents the results of gait event detection and the subsequently extracted stride-specific gait parameters. In the Discussion, the results are set in relation to the relevant literature, and finally, the Conclusions summarize the research content, results, and innovations.

2. Materials and Methods

2.1. Data Collection

Gait analyses were performed at the Universitätsklinikum Schleswig-Holstein (UKSH), campus Kiel, Germany. The study [54] was approved by the ethical committee of the medical faculty at the UKSH (no: D438/18). In total, data from 157 participants were included for the current analysis, including data from young adults (YA; age: 18–60 years), older adults (OA; age: >60 years), people with Parkinson’s Disease (PD; according to the UK Brain Bank criteria [55]), people with a recent (<4 weeks) symptomatic stroke (stroke), people with multiple sclerosis (MS; according to the McDonald criteria [56]), people with chronic low back pain (cLBP), and people with diagnoses not fitting in any aforementioned groups or disorders with no explicit diagnosis (other) (Table 1). Inclusion criteria were an age of 18 years or older and the ability to walk independently without a walking aid. Participants were excluded from the study with a Montreal Cognitive Assessment [57] score < 15 and other movement disorders that affected mobility, as noted by the clinical assessor.
Participants performed three walking trials consisting of walking 5 m at either (1) preferred speed (“Please walk at your normal walking speed.”), (2) slow speed (“Please walk at half of your normal walking speed.”), or (3) fast speed (“Please walk as fast as possible, without running or falling.”). The 5 m distance was marked with two cones on both ends, and participants were asked to start walking approximately two steps before the cones on one end, and stop walking approximately two steps after passing the cones on the other end.
For the current analysis, data from four IMUs (Noraxon USA Inc., myoMOTION, Scottsdale, AZ, USA) were considered, namely those that were attached laterally above the left and right ankle joint and those attached proximally at the left and right shank. IMUs were secured to participants using elastic bands with a special hold for the IMU. Furthermore, reflective markers were attached on top of the usual foot wear at the heel and toe of both feet (Figure 1). Marker data were recorded using a twelve-camera OMC system (Qualisys AB, Göteborg, Sweden) at a sampling frequency of 200 Hz. IMU data were recorded at the same sampling frequency, and both systems were synchronized using a TTL signal [54].

2.2. Data Pre-Processing

2.2.1. Marker Data

First, data from both marker and IMU systems were cropped so that only data from within the 5 m distance were considered. Any gaps in the marker data were filled by interpolation making use of inter-correlations between markers [58,59]. The data were then low-pass filtered using a sixth-order Butterworth filter with a cut-off frequency of 20 Hz [60]. The filter was applied twice to the input data [61]. After filtering in the forward direction, the filtered sequence was reversed and run back through the filter [30]. The filtered data were differentiated to get velocity signals, and timings of ICs and FCs were determined from local maxima and minima in the heel and toe vertical velocity signals [62,63]. All identified ICs and FCs were manually checked using Qualisys Track Manager 2018.1 software (Qualisys AB, Göteborg, Sweden) and corrected if necessary [34,64]. The resulting annotated ICs and FCs were considered the true events (also labels or targets), and were used as reference timings to derive stride-specific gait parameters.
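The filtering and event annotation described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the function name is hypothetical, and the exact assignment of velocity extrema to ICs and FCs (heel minima vs. toe maxima) is an assumption that would need verifying against the cited references [62,63].

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_events_from_marker(heel_z, toe_z, fs=200.0, fc=20.0):
    """Sketch: zero-phase low-pass filtering, differentiation to vertical
    velocity, and candidate IC/FC detection from local extrema."""
    # 6th-order Butterworth; filtfilt applies it forward and backward,
    # i.e., the filter is run twice and phase distortion cancels out
    b, a = butter(6, fc / (fs / 2.0), btype="low")
    heel_v = np.gradient(filtfilt(b, a, heel_z), 1.0 / fs)
    toe_v = np.gradient(filtfilt(b, a, toe_z), 1.0 / fs)
    # Assumption: ICs at local minima of heel vertical velocity
    ic_idx, _ = find_peaks(-heel_v)
    # Assumption: FCs at local maxima of toe vertical velocity
    fc_idx, _ = find_peaks(toe_v)
    return ic_idx, fc_idx
```

In the study, all candidate events found this way were still checked manually in Qualisys Track Manager before being used as labels.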

2.2.2. IMU Data

The idea behind the deep learning approach was that a model was trained to predict the likelihood of an IC and FC, given accelerometer and gyroscope data from a single IMU. The data from a single sensor channel, e.g., the acceleration in the forward direction, were denoted by $\mathbf{x}_d = \begin{bmatrix} x_d[1] & x_d[2] & \cdots & x_d[N] \end{bmatrix}^T$, with $d$ referring to the $d$th sensor channel (i.e., $d = 1, \ldots, D$) and $n$ referring to the $n$th sample or time step (i.e., $n = 1, \ldots, N$). Similarly, the data from all $D$ sensor channels at a given time instant $n$ were denoted by $\mathbf{x}[n] = \begin{bmatrix} x_1[n] & x_2[n] & \cdots & x_D[n] \end{bmatrix}^T$. Data from all $D$ channels, and for all $N$ time steps, were then denoted by:
$$\mathbf{X} = \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_D \end{bmatrix} = \begin{bmatrix} x_1[1] & x_2[1] & \cdots & x_D[1] \\ x_1[2] & x_2[2] & \cdots & x_D[2] \\ \vdots & \vdots & \ddots & \vdots \\ x_1[N] & x_2[N] & \cdots & x_D[N] \end{bmatrix}, \quad \mathbf{X} \in \mathbb{R}^{N \times D}$$
Likewise, the labels were denoted by:
$$\mathbf{y}_{\mathrm{IC}} = \begin{bmatrix} y_{\mathrm{IC}}[1] \\ y_{\mathrm{IC}}[2] \\ \vdots \\ y_{\mathrm{IC}}[N] \end{bmatrix}, \quad \mathbf{y}_{\mathrm{FC}} = \begin{bmatrix} y_{\mathrm{FC}}[1] \\ y_{\mathrm{FC}}[2] \\ \vdots \\ y_{\mathrm{FC}}[N] \end{bmatrix}, \quad y_{\mathrm{IC/FC}}[n] \in [0, 1]$$
The model was iteratively trained to learn a mapping $h_{\Theta}(\mathbf{X}): \mathbf{X} \mapsto \mathbf{y}$, where $h_{\Theta}$ was also referred to as the hypothesis, parameterized by the weights collectively denoted by $\Theta$, and $\mathbf{X}$ was an array with raw sensor data from the 3-axis accelerometer and 3-axis gyroscope of a single sensor location.
All participant data were split into three independent datasets, namely a training set, a validation set, and a test set. Each set contained data from approximately one-third of the participants. Participants were randomly assigned to one of the sets, stratified by both group (i.e., diagnosis) and gender (Table 2). The training and validation sets were used to train an optimal deep learning model. Test set data were not used for training the model or hyperparameter tuning. The results on the model’s performance were based only on the test set, and therefore reflected how well the model generalizes to new, unseen data.
Accelerometer and gyroscope data were normalized by subtracting the channel-wise mean and dividing by the channel-wise standard deviation. For the training and validation datasets, the data were then partitioned into equal length time windows [52] of 400 samples, with an overlap of 50% between successive windows (corresponding to 2 s windows, and 1 s overlap, respectively). For the test set, the complete trial was fed as input to the model for predicting ICs and FCs (hence the number of instances is the same as the number of trials, Table 2).
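The normalization and windowing step can be sketched as follows (the function name is illustrative; the shapes match the description above: 400-sample windows with 50% overlap on an $N \times D$ data array):

```python
import numpy as np

def normalize_and_window(X, win_len=400, overlap=0.5):
    """Channel-wise z-score normalization, then partitioning into
    fixed-length, 50%-overlapping windows. X is an (N, D) array of
    IMU samples (D = 6 for one accelerometer + gyroscope)."""
    Xn = (X - X.mean(axis=0)) / X.std(axis=0)
    step = int(win_len * (1.0 - overlap))  # 200 samples = 1 s hop at 200 Hz
    windows = [Xn[s:s + win_len]
               for s in range(0, Xn.shape[0] - win_len + 1, step)]
    return np.stack(windows)               # shape: (n_windows, win_len, D)
```

For the test set this partitioning is skipped, and the full (normalized) trial is fed to the model in one piece.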

2.3. Model

2.3.1. Model Architecture

The generic architecture for the deep learning model was based on a temporal convolutional network (TCN) [52,65,66]. The TCN consisted of a sequence of residual blocks with exponentially increasing dilation factor [66,67]. Each residual block was built from two sequences of a dilated convolutional layer [67], a batch normalization layer [68], a rectified linear unit (ReLU) activation layer, and a dropout layer [69] (Figure 2). The model was built in Python [70] using the high-level TensorFlow API Keras [65,71].
For the current analysis, only convolutions of the “same” type were considered [65], i.e., the model was non-causal and zero-padded to account for edge effects, and the likelihood of an IC or FC was based on input data both before and after the current sample, n:
$$\hat{y}_i[n] = f\left(\ldots, \mathbf{x}[n-1], \mathbf{x}[n], \mathbf{x}[n+1], \ldots\right), \quad i \in \{\mathrm{IC}, \mathrm{FC}\}$$
The number of samples that the prediction at time $n$ “sees” was referred to as the receptive field [72] and was a function of the kernel size and the dilation factors [65]. Dilation factors were always given as a sequence of increasing powers of 2 [66,67,73].
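The receptive field of such a stack of dilated convolutions can be computed directly. The sketch below assumes, per the architecture description, two dilated convolutions per residual block; each convolution with kernel size $k$ and dilation $d$ widens the receptive field by $(k-1)\,d$ samples:

```python
def receptive_field(kernel_size, dilations, convs_per_block=2):
    """Receptive field (in samples) of stacked dilated convolutions.
    Assumption: each residual block holds `convs_per_block` dilated
    convolutions, as described for the TCN in this paper."""
    rf = 1
    for d in dilations:
        rf += convs_per_block * (kernel_size - 1) * d
    return rf

# e.g., kernel size 5 with dilations 1, 2, 4, ..., 64 (powers of two):
print(receptive_field(5, [2 ** i for i in range(7)]))  # prints 1017
```

With a receptive field of roughly 1000 samples (5 s at 200 Hz), a prediction can draw on context spanning several strides; the actual tuned values are those in Table 4.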
The outputs of the TCN block were fed to two separate fully connected (FCN, or dense) layers, each followed by a sigmoid activation layer. Outputs were then predicted separately for ICs and FCs [52,53]. The mean squared error (MSE) was used as a loss function, and the adaptive moment estimation (Adam) optimizer, a gradient descent-based optimization algorithm, was used to iteratively learn the weights [74,75].

2.3.2. Hyperparameter Optimization

In order to find the best model architecture, hyperparameter tuning was performed using KerasTuner [76]. Here, the number of filters, the kernel size, and the maximum dilation factor (Table 3) were optimized using a random search strategy [77].
The model architecture that resulted from the hyperparameter optimization (Table 4) was then trained on the combined set of training and validation data. The trained model was used to predict the likelihoods of ICs and FCs on the test set data.
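The random search strategy itself is simple enough to sketch without the KerasTuner machinery. In the stand-in below, the search space values are illustrative (the actual ranges are in Table 3) and `evaluate` is a hypothetical callback that would train a candidate model and return its validation loss:

```python
import random

# Illustrative search space mirroring the tuned hyperparameters;
# the actual value ranges used in the study are listed in Table 3.
SEARCH_SPACE = {
    "n_filters": [16, 32, 64],
    "kernel_size": [3, 5, 7],
    "max_dilation": [16, 32, 64],
}

def random_search(evaluate, n_trials=20, seed=42):
    """Random search: sample hyperparameter combinations uniformly at
    random and keep the one with the lowest loss returned by `evaluate`."""
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        loss = evaluate(params)     # e.g., train on train set, score on validation set
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss
```

As in the paper, the winning configuration would then be retrained on the combined training and validation data before evaluation on the held-out test set.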

2.4. Analysis

The predictions of the model on the test set data were compared with the labels from the test set. The model performance was evaluated for (1) overall detection performance, (2) time agreement between the predicted events and the (marker-based) annotated events, and (3) agreement between subsequently derived stride-specific gait parameters.

2.4.1. Overall Detection Performance

The overall detection performance quantified how many of the annotated events were detected by the model (true positives, TP), how many of the annotated events were not detected (false negatives, FN), and how many events that were detected, were actually not annotated (false positives, FP). From these metrics, the recall (or sensitivity), precision, and F1 score were calculated as:
$$\mathrm{recall} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}$$
$$\mathrm{precision} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}$$
$$\mathrm{F1\ score} = \frac{2 \cdot \mathrm{recall} \cdot \mathrm{precision}}{\mathrm{recall} + \mathrm{precision}}$$
Here, recall represented the ratio of gait events that were detected, precision represented the ratio of detected gait events that were truly gait events, and F1 score was the harmonic mean of the recall and precision.
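The three metrics follow directly from the detection counts, as in this small sketch (function name illustrative):

```python
def detection_metrics(tp, fn, fp):
    """Recall, precision, and F1 score from true positives (TP),
    false negatives (FN), and false positives (FP)."""
    recall = tp / (tp + fn)          # fraction of annotated events detected
    precision = tp / (tp + fp)       # fraction of detections that are true events
    f1 = 2 * recall * precision / (recall + precision)  # harmonic mean
    return recall, precision, f1
```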

2.4.2. Time Agreement

For all correctly detected gait events (TP, Section 2.4.1), time agreement was assessed by the time error between the annotated and detected gait event, which was defined as
$$\mathrm{time\ error} = t_{\mathrm{ref}} - t_{\mathrm{pred}}$$
with $t_{\mathrm{ref}}$ the gait event time from the marker-based annotations, and $t_{\mathrm{pred}}$ the gait event time from the model predictions. As a robust measure for the average time error and its spread, the median time error and the inter-quartile range (IQR) were reported [78].
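These two robust summary statistics can be computed as follows (a sketch; the function name is illustrative):

```python
import numpy as np

def time_agreement(t_ref, t_pred):
    """Median time error and inter-quartile range (IQR) for matched
    events. With error = t_ref - t_pred, a positive error means the
    model detected the event earlier than the reference."""
    errors = np.asarray(t_ref) - np.asarray(t_pred)
    q1, median, q3 = np.percentile(errors, [25, 50, 75])
    return median, q3 - q1
```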

2.4.3. Stride-Specific Gait Parameters

For those trials for which all gait events were detected and no false positives were detected, the stride time, stance time, and swing time were calculated. Stride time was the time between two successive ICs of the same foot. Stance time was the time between an FC and the preceding IC of the same foot. Swing time was the time between an FC and the following IC of the same foot.
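These definitions translate into a few lines of code. The sketch below is illustrative and assumes clean input, i.e., sorted event times for one foot with exactly one FC between each pair of successive ICs (which is what the "all events detected, no false positives" trial selection above guarantees):

```python
def stride_parameters(ic_times, fc_times):
    """Stride, stance, and swing times (in s) from sorted IC and FC
    times of one foot. Assumes exactly one FC between successive ICs."""
    strides, stances, swings = [], [], []
    for i in range(len(ic_times) - 1):
        ic, next_ic = ic_times[i], ic_times[i + 1]
        strides.append(next_ic - ic)
        # FC of this stride: the FC falling between the two ICs
        fc = next(t for t in fc_times if ic < t < next_ic)
        stances.append(fc - ic)        # IC -> FC of the same foot
        swings.append(next_ic - fc)    # FC -> following IC of the same foot
    return strides, stances, swings
```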

3. Results

3.1. Overall Detection Performance

The performance of detecting ICs and FCs was objectively quantified by the number of TPs, the number of FNs, and the number of FPs. From these numbers, recall, precision, and F1 score were calculated (Table 5).
For both ICs and FCs, recall is high for each of the sensor locations (i.e., ≥92%) and so is precision (i.e., ≥97%). Differences between the sensor locations are small, i.e., the minimum recall is 92% and the maximum recall is 97%, and the minimum precision is 97% and the maximum precision is 99%. The recall and precision result in F1 scores of ≥96% for ICs and ≥94% for FCs.

3.2. Time Agreement

Time agreement between the annotated and detected events was quantified for the TPs for each of the sensor locations (Figure 3). The median time error for each of the sensor locations and for both ICs and FCs was close to zero (Table 6), and the largest median time error was −0.005 s, corresponding to one sample period (at a sampling frequency of 200 Hz). The IQR was at most 0.020 s, corresponding to four sample periods.

3.3. Stride-Specific Gait Parameters

For those trials for which all gait events were correctly detected (and no false positives were detected), stride time, stance time, and swing time were calculated. The mean difference and the limits of agreement between the marker-based annotations and the model-based detections were calculated.
For all stride-specific gait parameters, and for all sensor locations, the mean difference was close to zero, i.e., the maximum mean difference was 0.003 s, namely for the calculated swing time of the right ankle (Table 7). Furthermore, for all gait parameters and for all sensor locations, the limits of agreement, based on a 95% confidence interval, were distributed around a zero-mean difference with the overall limits of agreement at −0.049 s and 0.051 s (Figure 4).
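The mean difference and limits of agreement reported here follow the standard Bland-Altman construction, sketched below. The sign convention (model minus reference) is an assumption for illustration:

```python
import numpy as np

def limits_of_agreement(ref, est):
    """Mean difference and 95% limits of agreement (Bland-Altman):
    mean difference +/- 1.96 * SD of the differences."""
    diff = np.asarray(est) - np.asarray(ref)   # model minus reference (assumed)
    mean, sd = diff.mean(), diff.std(ddof=1)
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)
```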

4. Discussion

The current study aimed to validate a deep learning approach for detecting gait events from a single IMU worn on the lower leg. Data from left and right ankle- and shank-worn IMUs were used for training a neural network to detect gait events from walking trials performed by healthy YA, healthy OA, participants diagnosed with PD, MS, or cLBP, participants who had a recent symptomatic stroke, and participants diagnosed with other neurological diseases. Participants walked a 5 m distance at three different self-selected walking speeds. The gait event timings that were predicted by the neural network were compared to a common reference method, i.e., OMC system, and clinically relevant stride-specific gait parameters were extracted.
A first measure for the model performance was given by the recall (how many annotated events were detected) and precision (how many detected events were annotated). For both ICs and FCs, a high recall (≥95%), high precision (≥98%), and high F1 score (≥94%) were observed, meaning that most events were detected and most detected events were actually true events. There was little difference in recall, precision, and F1 score between sensor locations (Table 5), confirming that the deep learning approach is relatively invariant to exact sensor localization. The values for recall, precision, and F1 score are comparable to recall (≥85%), precision (≥95%), F1 score (≥91%) from studies that detected gait events in OA, people with PD, and people with MS [34] or adults and hemiplegic patients [79].
Next, the time error, that is, the difference between the annotated event and the detected event, was of interest. For both ICs and FCs, and for all sensor locations, the observed time error was small, and the middle 50% of the time errors were within a range of [−0.015, 0.010] s (Table 6, Figure 3). These data showed that the deep learning-based approach is precise in detecting initial and final contacts. Time errors were slightly smaller than our previously reported results [34] that used a heuristics-based approach [30]. The heuristics-based approach determined ICs and FCs as local minima in the medio-lateral angular velocity [30], and it could be that these minima do not exactly coincide with the reference event timing as determined from the OMC system. Previous studies that investigated the time error of IMU-based gait event detection reported a 95% confidence interval of [0.007, 0.013] s for ICs and [−0.005, 0.004] s for FCs for young and older adults in treadmill and overground walking [80], or [−0.016, 0.001] s for ICs and [0.037, 0.063] s for FCs for typically developing children in overground walking [31]. Others reported the mean time error in healthy elderly subjects, subjects with PD, subjects with choreatic movement disorder, and hemiparetic subjects, and found a maximum mean error of 0.011 s at normal walking speed and 0.022 s at faster speed [36]. Similarly, for healthy subjects, a mean error of 0.017 s for ICs and −0.016 s for FCs was reported, whereas for a single transfemoral amputee the mean error was 0.012 s for ICs and −0.024 s for FCs for the intact limb [33]. The median and IQR of the time errors of the current study were in the same range as previously found by [53], and time errors were smaller than previously reported time errors from a continuous wavelet-based approach [79].
Hence, our proposed deep learning approach resulted in time errors that are in the same range or better than those from previous approaches, while not being restricted to an exact sensor location (left or right ankle, or shank) and sensor alignment.
From the correctly detected gait events, stride-specific gait parameters were derived. These are probably of greatest clinical relevance, as changes in stride-specific gait parameters have been linked directly with disease onset and progression [3,4,5,6,7,8,9]. Therefore, stride time, stance time, and swing time were calculated, and the differences between the deep learning-based approach and the marker-based reference method were quantified (Table 7, Figure 4). The limits of agreement for a 95% confidence interval were calculated, and for all metrics the zero-mean difference was enclosed within the limits of agreement. These data provided evidence that the deep learning-based model was able to derive stride-specific gait parameters. The differences between the deep learning model-based stride parameters and the marker-based stride parameters were in a similar range as a recent study that compared IMU-derived stride parameters against stride parameters obtained with a pressure sensing walkway [29,53], and were also in the same range as results from a study that compared IMU-derived stride parameters with stride parameters obtained with an OMC system [64]. The mean error was lower than the mean errors reported for stance and swing time (0.011 s and 0.011 s, respectively) across elderly subjects, subjects with PD, subjects with choreatic movement disorder, and hemiparetic subjects [36].
The main limitations of the current study were that only walking trials involving straight-line walking were considered, and the walking distance was relatively short. Therefore, it may be that the observed gait patterns from these walking trials are not fully representative of gait patterns observed in daily life [19,20,21]. However, as the deep learning-based approach does not rely on fixed thresholds or assumptions of which sensor axis is used, it is theoretically transferable and scalable to other conditions if input data and corresponding labels can be provided.
Furthermore, although the proposed approach allows for relatively arbitrary sensor placement on the lower leg, it was not investigated to what extent participants would be willing to wear such a sensor for a prolonged period of time in the home environment. Previous research found that user acceptance and adherence to wearing IMUs was generally high in people with neurodegenerative diseases [81,82,83,84], although reduced adherence was linked with multi-day wear [82] and wearing multiple sensors [85].

5. Conclusions

In this study we have validated a DL-based approach to detect gait events and subsequently extract clinically relevant stride-specific gait parameters from a single IMU worn either laterally above the ankle joint or proximally below the knee joint. Performance analysis showed an excellent detection rate and low time errors in both event detection and stride parameter calculation for different walking speeds and across both healthy and neurological cohorts. Compared to relevant approaches that detected gait events from an ankle- or shank-worn IMU, the DL approach reached a performance that was on par or better, and it did not rely on expert-defined, hand-crafted features or empirically derived thresholds. The performance of the DL approach was not affected by the exact sensor placement and orientation, and hence it is less obtrusive for potential applications in long-term continuous monitoring. In contrast to previous approaches, it allows for personalization of the network to individual study participants and is easily transferable even to other sensor placement locations (e.g., a foot-worn or low back-worn IMU) without the need for rethinking the set of decision rules and thresholds. Our next step is to further develop and validate these methods with real-life walking sequences in patients with neurodegenerative diseases.

Author Contributions

Conceptualization, R.R., G.S. and W.M.; methodology, R.R. and G.S.; software, R.R.; validation, R.R.; formal analysis, R.R.; investigation, R.R., G.S. and W.M.; resources, C.H. and W.M.; data curation, E.W. and R.R.; writing—original draft preparation, R.R.; writing—review and editing, E.W., C.H., G.S. and W.M.; visualization, R.R.; supervision, G.S. and W.M.; project administration, R.R., E.W., C.H., G.S. and W.M.; funding acquisition, W.M. All authors have read and agreed to the published version of the manuscript.

Funding

We acknowledge financial support by Land Schleswig-Holstein within the funding programme Open Access Publikationsfonds.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the medical faculty of the Christian-Albrechts-Universität zu Kiel (D438/18, approved on 8 May 2018).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data from the first 10 participants are available online at https://github.com/neurogeriatricskiel/Validation-dataset (accessed on 11 November 2021). Additionally, we are preparing the open-source release of all data from the healthy younger and older adults. Data from patient groups can be shared upon reasonable request. The scripts are publicly available at the author’s personal GitHub, which can be found at https://github.com/rmndrs89/my-gait-events-tcn (accessed on 1 April 2022).

Acknowledgments

The authors sincerely thank all people that were involved in the data collection, from study participants to students who aided and assisted during measurements and subsequent marker labeling. The authors thank Julius Welzel for his input regarding the organization of the data in a BIDS-like format. The authors thank Johannes Hoffmann for his tips regarding the LaTeX typesetting.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
cLBP: chronic low back pain
CNN: convolutional neural network
IMU: inertial measurement unit
MS: multiple sclerosis
No.: number
OA: older adults
PD: Parkinson’s disease
TCN: temporal convolutional network
YA: younger adults

Figure 1. Schematic depiction (Picture from: https://www.vecteezy.com/free-vector/man-walking, accessed on 11 November 2021) of the current study. Study participants wore IMUs on the ankles and shanks, and reflective markers were adhered to the heel and toe of their usual footwear (illustrated on the left). Marker data were used to obtain reference values for the timings of initial and final contacts (top), while accelerometer and gyroscope data from each tracked point were input to a neural network that predicted the timings of the same initial and final contacts (bottom).
Figure 2. The generic model architecture of the deep learning model to predict initial contacts (ICs) and final contacts (FCs). The inputs are the accelerometer and gyroscope data from a single inertial measurement unit, which are fed to a temporal convolutional network (TCN) (left). The TCN consisted of repeating residual blocks (ResBlocks) with exponentially increasing dilation factor (middle). Each ResBlock was built from two sequences of a convolutional layer (Conv), batch normalization layer (BatchNorm), a rectified linear unit activation layer (ReLU), and a dropout layer (DropOut) (right).
Figure 3. Time errors for initial (left) and final (right) contact detection, for each of the tracked points.
Figure 4. The agreement of the extracted gait parameters between the sensor-based and marker-based methods. The differences between the stride-specific temporal gait parameters extracted from the marker-based and the proposed sensor-based approach are plotted against their means.
Table 1. Demographic data of the study participants. Age, height, and weight are presented as mean (standard deviation).
Group  | Gender | Number of Participants | Age, years | Height, cm | Weight, kg
YA     | F      | 21                     | 27 (7)     | 173 (5)    | 67 (9)
YA     | M      | 21                     | 29 (9)     | 185 (8)    | 80 (12)
OA     | F      | 12                     | 70 (6)     | 167 (6)    | 72 (17)
OA     | M      | 10                     | 73 (6)     | 180 (6)    | 83 (12)
PD     | F      | 12                     | 67 (6)     | 168 (7)    | 70 (15)
PD     | M      | 19                     | 61 (11)    | 178 (7)    | 86 (14)
MS     | F      | 12                     | 37 (10)    | 174 (9)    | 75 (9)
MS     | M      | 9                      | 42 (16)    | 189 (9)    | 96 (32)
stroke | F      | 4                      | 66 (11)    | 160 (7)    | 65 (13)
stroke | M      | 17                     | 67 (18)    | 178 (7)    | 84 (15)
cLBP   | F      | 3                      | 64 (12)    | 166 (6)    | 65 (6)
cLBP   | M      | 6                      | 66 (17)    | 177 (8)    | 86 (14)
other  | F      | 3                      | 60 (16)    | 166 (4)    | 79 (19)
other  | M      | 8                      | 68 (19)    | 182 (7)    | 85 (14)
Group: YA: younger adults, OA: older adults, PD: Parkinson’s Disease, MS: multiple sclerosis, cLBP: chronic low back pain; Gender: F: female, M: male.
Table 2. Overview of the total number of participants, walking trials, and instances in the training, validation, and test sets. A detailed overview of exactly which trials and sensor locations had valid data is available at https://github.com/rmndrs89/my-gait-events-tcn, accessed on 1 April 2022.
Dataset    | No. of Participants | No. of Trials | No. of Instances
Train      | 61                  | 749           | 3366
Validation | 48                  | 564           | 2570
Test       | 48                  | 620           | 620
Table 3. Model hyperparameters that were optimized for, and the corresponding sets of possible values.
Description       | Possible Values
Number of filters | 8, 16, 32, 64, 128
Kernel size       | 3, 5, 7
Dilations         | [1, 2], [1, 2, 4], [1, 2, 4, 8]
The hyperparameter values that were selected for the trained model to make predictions on the test set (originally indicated in bold) were 16 filters, a kernel size of 5, and dilations [1, 2, 4]; cf. Table 4.
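The search space of Table 3 is a small grid of 5 × 3 × 3 = 45 configurations. As a minimal sketch (not the authors' code), the grid can be enumerated explicitly, and a random search, such as the strategy the authors' cited KerasTuner tooling provides, then evaluates only a random subset of it; the subset size of 10 below is illustrative:

```python
# Enumerate the hyperparameter grid of Table 3 and draw a random subset,
# as a random search over the grid would. Illustrative sketch only.
import itertools
import random

num_filters = [8, 16, 32, 64, 128]
kernel_sizes = [3, 5, 7]
dilations = [[1, 2], [1, 2, 4], [1, 2, 4, 8]]

# Full grid: every (filters, kernel size, dilations) combination.
grid = list(itertools.product(num_filters, kernel_sizes, dilations))
print(len(grid))  # 45 candidate configurations

# Random search evaluates only a sampled subset of the grid.
random.seed(0)
trials = random.sample(grid, 10)
```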
Table 4. Model layer hyperparameters.
Layer # | Layer Type | Hyperparameters                                                            | Output Shape
0       | inputs     |                                                                            | batch size × 400 × 6
1a      | conv       | no. of filters: 16, kernel size: 5, stride: 1, padding: same, dilation: 1  | batch size × 400 × 16
1b      | conv       | no. of filters: 16, kernel size: 1, stride: 1, padding: same, dilation: 1  | batch size × 400 × 16
2       | conv       | no. of filters: 16, kernel size: 5, stride: 1, padding: same, dilation: 1  | batch size × 400 × 16
3       | conv       | no. of filters: 16, kernel size: 5, stride: 1, padding: same, dilation: 2  | batch size × 400 × 16
4       | conv       | no. of filters: 16, kernel size: 5, stride: 1, padding: same, dilation: 2  | batch size × 400 × 16
5       | conv       | no. of filters: 16, kernel size: 5, stride: 1, padding: same, dilation: 4  | batch size × 400 × 16
6       | conv       | no. of filters: 16, kernel size: 5, stride: 1, padding: same, dilation: 4  | batch size × 400 × 16
7a      | dense      | no. of units: 1                                                            | batch size × 400 × 1
7b      | dense      | no. of units: 1                                                            | batch size × 400 × 1
conv: convolutional layer.
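The kernel sizes and dilations in Table 4 fix the model's temporal receptive field: with stride 1, each stacked convolution widens the field by (kernel size − 1) × dilation, and with "same" padding the window is centered on the predicted time step. A minimal pure-Python sketch (not the authors' code) computing this from the table:

```python
# Receptive field of a stack of stride-1 dilated 1-D convolutions:
# each layer with kernel size k and dilation d adds (k - 1) * d samples.
def receptive_field(layers):
    """layers: iterable of (kernel_size, dilation) tuples."""
    return 1 + sum((k - 1) * d for k, d in layers)

# Convolutional layers 1a-6 from Table 4; the kernel-size-1 layer (1b)
# contributes nothing to the width.
table4_layers = [(5, 1), (1, 1), (5, 1), (5, 2), (5, 2), (5, 4), (5, 4)]
samples = receptive_field(table4_layers)
print(samples)  # 57 samples, i.e., about 0.29 s at 200 Hz
```

So each IC/FC prediction sees roughly a quarter second of signal around the candidate time step, comfortably shorter than one stride.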
Table 5. Overall detection performance for initial contacts and final contacts as quantified by recall, precision, and F1 score.
Initial Contacts:
Tracked Point | TP  | FN | FP | Recall | Precision | F1
Left ankle    | 624 | 19 | 5  | 97%    | 99%       | 98%
Right ankle   | 599 | 42 | 8  | 93%    | 99%       | 96%
Left shank    | 605 | 38 | 15 | 94%    | 98%       | 96%
Right shank   | 603 | 36 | 15 | 94%    | 98%       | 96%

Final Contacts:
Tracked Point | TP  | FN | FP | Recall | Precision | F1
Left ankle    | 606 | 32 | 10 | 95%    | 98%       | 97%
Right ankle   | 614 | 17 | 12 | 97%    | 98%       | 98%
Left shank    | 585 | 53 | 18 | 92%    | 97%       | 94%
Right shank   | 595 | 30 | 9  | 95%    | 99%       | 97%
TP: true positives, FN: false negatives, FP: false positives, F1: F1 score.
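The recall, precision, and F1 values in Table 5 follow directly from the TP/FN/FP counts. A minimal sketch (not the authors' code) reproducing the left-ankle initial-contact row:

```python
# Detection metrics from true positives (TP), false negatives (FN),
# and false positives (FP), as reported in Table 5.
def detection_metrics(tp, fn, fp):
    recall = tp / (tp + fn)      # share of reference events detected
    precision = tp / (tp + fp)   # share of detections that are correct
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1

# Left-ankle initial contacts: TP = 624, FN = 19, FP = 5.
r, p, f1 = detection_metrics(624, 19, 5)
print(round(100 * r), round(100 * p), round(100 * f1))  # 97 99 98
```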
Table 6. Time errors for the correctly detected gait events. Note that 0.005 s corresponds to 1 sample period, given the sampling frequency of 200 Hz.
Tracked Point | IC Median (s) | IC IQR (s) | FC Median (s) | FC IQR (s)
Left ankle    | 0.000         | 0.020      | 0.000         | 0.010
Right ankle   | 0.000         | 0.020      | −0.005        | 0.015
Left shank    | −0.005        | 0.020      | −0.005        | 0.020
Right shank   | −0.003        | 0.020      | −0.005        | 0.020
IQR: inter-quartile range.
Table 7. Time agreement between the stride-specific parameters extracted from the sensor-based and marker-based methods.
Tracked Point | Parameter   | Mean Difference (s) | Limits of Agreement (s, s)
Left ankle    | stride time | 0.001               | (−0.035, 0.036)
Left ankle    | stance time | 0.002               | (−0.039, 0.042)
Left ankle    | swing time  | −0.001              | (−0.045, 0.043)
Right ankle   | stride time | 0.000               | (−0.039, 0.040)
Right ankle   | stance time | −0.002              | (−0.048, 0.044)
Right ankle   | swing time  | 0.003               | (−0.046, 0.051)
Left shank    | stride time | 0.001               | (−0.039, 0.041)
Left shank    | stance time | 0.002               | (−0.043, 0.046)
Left shank    | swing time  | −0.001              | (−0.049, 0.047)
Right shank   | stride time | −0.000              | (−0.031, 0.031)
Right shank   | stance time | 0.002               | (−0.046, 0.049)
Right shank   | swing time  | −0.002              | (−0.049, 0.046)
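The mean differences and 95% limits of agreement in Table 7 are standard Bland–Altman statistics: the bias of the differences and bias ± 1.96 × their standard deviation. A minimal sketch with made-up stride times (the data and variable names are illustrative, not from the study):

```python
# Bland-Altman agreement statistics: mean difference (bias) and
# 95% limits of agreement, bias +/- 1.96 * SD of the differences.
from statistics import mean, stdev

def bland_altman(reference, estimate):
    diffs = [e - r for r, e in zip(reference, estimate)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical stride times (s) from the optical reference and the IMU model.
omc = [1.10, 1.05, 1.20, 0.98, 1.12]
imu = [1.11, 1.04, 1.21, 0.99, 1.11]
bias, (lo, hi) = bland_altman(omc, imu)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```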
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Romijnders, R.; Warmerdam, E.; Hansen, C.; Schmidt, G.; Maetzler, W. A Deep Learning Approach for Gait Event Detection from a Single Shank-Worn IMU: Validation in Healthy and Neurological Cohorts. Sensors 2022, 22, 3859. https://0-doi-org.brum.beds.ac.uk/10.3390/s22103859


