Article

Real-Time Short-Term Pedestrian Trajectory Prediction Based on Gait Biomechanics

Electrical Engineering Department, Campus of Gijon, University of Oviedo, 33204 Gijón, Spain
* Author to whom correspondence should be addressed.
Submission received: 1 July 2022 / Revised: 28 July 2022 / Accepted: 1 August 2022 / Published: 4 August 2022
(This article belongs to the Special Issue Advanced Sensors Technologies Applied in Mobile Robot)

Abstract

The short-term prediction of a person's trajectory during normal walking becomes necessary in many environments shared by humans and robots. Physics-based approaches built on Newton's laws of motion seem best suited for short-term predictions, but the intrinsic properties of human walking conflict with the foundations of the basic kinematical models, compromising their performance. In this paper, we propose a short-term prediction method based on gait biomechanics for real-time applications. The method relies on a single biomechanical variable and has a low computational burden, making it feasible to implement in low-cost portable devices. We evaluate its performance on an experimental benchmark in which several subjects walked steadily over straight and curved paths. The results indicate performance good enough for a wide range of human–robot interaction applications.

1. Introduction

Human motion trajectory prediction (HMTP) is a critical technology in applications where people share their workspace with autonomous moving machines. That is needed, for example, in collaborative robotics for obstacle avoidance [1,2], in automatic driving assistance systems for safety assurance [3,4], in prostheses or exoskeletons for better performance [5], or in virtual reality to improve the sensation of immersion perceived by the user [6]. The strong interest in all these application fields explains the exponential growth of scientific communications devoted to this problem in the last few years [7].
A basic instance of the HMTP problem can be described as estimating the future location of a specific mark on the body of a walking human within a given short temporal window (see Figure 1). With that prediction, the possibilities for enhancing intelligent human–robot collaborative environments increase: robots can plan their motion to adjust their actions for more efficient and safer collaboration with humans [8]. The prediction is based on information coming from monitoring the human with environmental sensors, sensors mounted on the robot, or wearables [9,10,11]. A viable real-time prediction of human trajectory must consider which sensors and signals will be available and whether the reactivity or fast response that the application demands can be achieved.
As will be discussed below, there are different proposals in the literature for short-range anticipation of a walker's position in real time. They differ in both the amount of sensory information that must be provided and the type of processing needed to solve the problem. In this paper, we propose a method with the following characteristics: (1) it is designed for close-proximity applications, with prediction windows of around 1 s; (2) it relies on a single biomechanical variable that can be measured with inexpensive wearable sensors; and (3) it has a low computational load and is therefore feasible to implement in low-cost portable devices.
In the following section, we present the relevant state of the art. After describing the method in Section 3, we evaluate its performance in Section 4 on an experimental benchmark in which several subjects walked steadily over straight and curved paths. We show that the main source of estimation error is biomechanical: the left-to-right fluctuation of the subject's position along the direction of displacement, induced by the alternation of steps. Our algorithm compensates this effect adequately by introducing a sensor-based biomechanical correction. The results in Section 5 indicate performance good enough for a wide range of human–robot interaction applications.

2. Related Work

Following the general taxonomy of HMTP tools proposed in [7], physics-based approaches based on Newton’s laws of motion seem best suited for real-time sensor-based short-term predictions because they operate within a reactive sense–predict scheme, avoiding intermediate processing time. For that reason, together with simplicity, we have ruled out methods that involved a higher level of cognition, such as learning based on data [12] or planning based on reasoning about motion intent [13].
Physics-based estimators, however, while useful for predicting the motion of vehicles, fall short of capturing the quite complex dynamics of human walking. This has been addressed by blending multiple dynamic models, but no performance improvement was found [14].
A similar strategy is to combine dynamic models with other learning or planning algorithms, even if, in principle, it goes against the speed of response of dynamic methods. A learning approach in [15] conveys promising results, but its performance on ambulatory motions such as walking has not been evaluated.
A third approach is to combine dynamic models with other information coming from the target agent himself. In [16], the short-term trajectory prediction is improved by tracking the user’s head in the context of virtual reality applications. Head orientation anticipates the trajectory, but it is also heavily influenced by other ambient-related impulses or distractions [17]. The authors suggest the use of eye trackers to overcome this problem, but this technology is too expensive to be used in more general intelligent environments. Our work is framed in this line but seeks to improve the prediction with the measurement of biomechanical signals that are easily implementable in practice.

3. Formulation of the Prediction Method

For the short-term prediction of the pedestrian, we will adopt a kinematic model and a sensor-based biomechanical compensation method as described below.

3.1. Kinematical Trajectory Prediction Models

Kinematical models are a good approach for the short-term prediction of the position of moving objects [18,19]. They are derived by applying Newton's laws from an initial position $(x, y)$ (see Figure 2), considering a set of initial conditions defined by the orientation of the displacement $\phi$, the linear velocity $v$, the angular velocity $w$ and the tangential acceleration $a_t$, as follows:

$$\dot{x}(t) = v(t)\cos\phi(t) \quad (1)$$
$$\dot{y}(t) = v(t)\sin\phi(t) \quad (2)$$
$$\dot{\phi}(t) = w(t) \quad (3)$$
$$\dot{v}(t) = a_t(t) \quad (4)$$
Depending on the assumptions made about the translational and rotational velocities and the accelerations of the moving object, several particularisations can be formulated [18] (see Figure 3):
(i) Constant Velocity (CV) model ($w = 0$, $a = 0$),
(ii) Constant Acceleration (CA) model ($w = 0$, $a = \mathrm{const} \neq 0$),
(iii) Constant Turn Rate and Velocity (CTRV) model ($w = \mathrm{const} \neq 0$, $a = 0$), and
(iv) Constant Turn Rate and Acceleration (CTRA) model ($w = \mathrm{const} \neq 0$, $a = \mathrm{const} \neq 0$).
Defining the model state space vector as $(x, y, \phi, v, a, w)$, which includes the velocity vector $(v, \phi)$, we can obtain the discrete version of the kinematical Equations (1)–(4). Assuming that, over the time interval $\Delta t$, the rotational velocity and the translational acceleration are constant (CTRA model), this results in:

$$
\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \phi_{k+1} \\ v_{k+1} \\ a_{k+1} \\ w_{k+1} \end{bmatrix}_{CTRA} =
\begin{bmatrix} x_k \\ y_k \\ \phi_k \\ v_k \\ a_k \\ w_k \end{bmatrix} +
\begin{bmatrix}
\dfrac{a_k}{w_k^2}\bigl(\cos(\phi_k + w_k \Delta t) - \cos\phi_k\bigr) + \dfrac{v_k + a_k \Delta t}{w_k}\sin(\phi_k + w_k \Delta t) - \dfrac{v_k}{w_k}\sin\phi_k \\
\dfrac{a_k}{w_k^2}\bigl(\sin(\phi_k + w_k \Delta t) - \sin\phi_k\bigr) - \dfrac{v_k + a_k \Delta t}{w_k}\cos(\phi_k + w_k \Delta t) + \dfrac{v_k}{w_k}\cos\phi_k \\
w_k \Delta t \\ a_k \Delta t \\ 0 \\ 0
\end{bmatrix} \quad (5)
$$
From this discrete CTRA model, the equations of the other three can be obtained by substituting their respective values of the translational and rotational velocities and accelerations:

$$
\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \phi_{k+1} \\ v_{k+1} \\ w_{k+1} \end{bmatrix}_{CTRV} =
\begin{bmatrix} x_k \\ y_k \\ \phi_k \\ v_k \\ w_k \end{bmatrix} +
\begin{bmatrix}
\dfrac{v_k}{w_k}\sin(\phi_k + w_k \Delta t) - \dfrac{v_k}{w_k}\sin\phi_k \\
-\dfrac{v_k}{w_k}\cos(\phi_k + w_k \Delta t) + \dfrac{v_k}{w_k}\cos\phi_k \\
w_k \Delta t \\ 0 \\ 0
\end{bmatrix} \quad (6)
$$

$$
\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \phi_{k+1} \\ v_{k+1} \\ a_{k+1} \end{bmatrix}_{CA} =
\begin{bmatrix} x_k \\ y_k \\ \phi_k \\ v_k \\ a_k \end{bmatrix} +
\begin{bmatrix}
\left(v_k \Delta t + \dfrac{a_k}{2}\Delta t^2\right)\cos\phi_k \\
\left(v_k \Delta t + \dfrac{a_k}{2}\Delta t^2\right)\sin\phi_k \\
0 \\ a_k \Delta t \\ 0
\end{bmatrix} \quad (7)
$$

$$
\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \phi_{k+1} \\ v_{k+1} \end{bmatrix}_{CV} =
\begin{bmatrix} x_k \\ y_k \\ \phi_k \\ v_k \end{bmatrix} +
\begin{bmatrix}
v_k \cos\phi_k \, \Delta t \\
v_k \sin\phi_k \, \Delta t \\
0 \\ 0
\end{bmatrix} \quad (8)
$$
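As a minimal illustration of how these discrete models can be implemented, the following Python sketch codes the CTRA update of Equation (5) and the CV update of Equation (8). The function names and the near-zero turn-rate fallback (needed because the CTRA expressions divide by $w_k$) are our own implementation choices, not part of the original formulation.

```python
import math

def ctra_step(x, y, phi, v, a, w, dt, eps=1e-9):
    """One CTRA prediction step (Equation (5)); falls back to the
    straight-line CA limit when the turn rate is near zero."""
    if abs(w) < eps:
        # w -> 0 limit: constant-acceleration straight motion
        d = v * dt + 0.5 * a * dt * dt
        return x + d * math.cos(phi), y + d * math.sin(phi), phi, v + a * dt, a, w
    phi_next = phi + w * dt
    x_next = (x + (a / w**2) * (math.cos(phi_next) - math.cos(phi))
                + ((v + a * dt) / w) * math.sin(phi_next)
                - (v / w) * math.sin(phi))
    y_next = (y + (a / w**2) * (math.sin(phi_next) - math.sin(phi))
                - ((v + a * dt) / w) * math.cos(phi_next)
                + (v / w) * math.cos(phi))
    return x_next, y_next, phi_next, v + a * dt, a, w

def cv_step(x, y, phi, v, dt):
    """One CV prediction step (Equation (8))."""
    return x + v * math.cos(phi) * dt, y + v * math.sin(phi) * dt, phi, v
```

With $a = 0$ and $w \to 0$, the CTRA step reproduces the CV step, which is exactly the substitution used to derive Equations (6)–(8).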
At every sampling time $k$, the model state vector $(x_k, y_k, \phi_k, v_k, a_k, w_k)$ must be computed to apply the model. The actual orientation of the displacement ($\phi_k^r$) and the translational velocity ($v_k^r$) are estimated from the position ($x_k^r, y_k^r$):

$$\phi_k^r = \tan^{-1}\left(\frac{y_k^r - y_{k-1}^r}{x_k^r - x_{k-1}^r}\right) \quad (9)$$
$$v_k^r = \frac{\sqrt{(x_k^r - x_{k-1}^r)^2 + (y_k^r - y_{k-1}^r)^2}}{\Delta t} \quad (10)$$

The rotational velocity ($w_k^r$) and translational acceleration ($a_k^r$) of the displacement are estimated from $\phi_k^r$ and $v_k^r$:

$$a_k^r = \frac{v_k^r - v_{k-1}^r}{\Delta t} \quad (11)$$
$$w_k^r = \frac{\phi_k^r - \phi_{k-1}^r}{\Delta t} \quad (12)$$
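A sketch of Equations (9)–(12) applied over a sampled position record. Note that `atan2` is used here as a quadrant-safe variant of the arctangent in Equation (9); this is an implementation choice rather than part of the formulation above.

```python
import math

def raw_state(xs, ys, dt):
    """Estimate heading, speed, acceleration and turn rate
    (Equations (9)-(12)) by finite differences over sampled positions."""
    phi = [math.atan2(ys[k] - ys[k-1], xs[k] - xs[k-1]) for k in range(1, len(xs))]
    v = [math.hypot(xs[k] - xs[k-1], ys[k] - ys[k-1]) / dt for k in range(1, len(xs))]
    a = [(v[k] - v[k-1]) / dt for k in range(1, len(v))]
    w = [(phi[k] - phi[k-1]) / dt for k in range(1, len(phi))]
    return phi, v, a, w
```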
In the following, we will refer to the sequence $(x_k^r, y_k^r, \phi_k^r, v_k^r, a_k^r, w_k^r)$ as the raw trajectory, since its variables are obtained directly and recursively from the kinematic models, without any consideration of the biomechanics of human walking. As discussed above, the predictive power of these equations alone can be expected to be unsatisfactory.

3.2. Offline Compensations

To correct the predictable limitations of the model discussed above, we will resort to studying the effects induced in it by gait biomechanics. It is known that in normal walking, the pelvis moves from side to side once per cycle, the trunk being over each leg to maintain balance. Similarly, the forward movement is not constant, and it produces variations in velocity according to the phase of the step. The resulting velocity and orientation signals corresponding to the pelvis have a sinusoidal shape [20,21].
To include that fact in the model, we propose offline filtering of the whole raw trajectory signals $(x_k^r, y_k^r)$ to remove their waviness. We applied a polynomial regression, defining the offline estimated trajectories $(x_k^b, y_k^b)$ for each of the experiments.
From the corrected positions $(x_k^b, y_k^b)$, we calculated the orientation of the displacement ($\phi_k^b$), the translational and rotational velocities ($v_k^b$ and $w_k^b$, respectively), and the translational acceleration ($a_k^b$) with Equations (9)–(12). Removing the undulations from the positions $(x_k^b, y_k^b)$ removes, in turn, the undulations from the signals $\phi_k^b$, $v_k^b$, $a_k^b$, $w_k^b$.
In the following, we will refer to the sequence $(x_k^b, y_k^b, \phi_k^b, v_k^b, a_k^b, w_k^b)$ as the offline trajectory, since its variables cannot be computed in real-time conditions. However, it can be taken as an instrumental way to test the utility of the proposed compensation.
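A minimal sketch of this offline step, assuming NumPy's polynomial fitting as a stand-in for the regression procedure (the paper specifies only the polynomial degrees: 1 for straight segments and 8 for curved ones in the later evaluation):

```python
import numpy as np
from numpy.polynomial import Polynomial

def offline_trajectory(x_raw, y_raw, t, degree):
    """Fit one polynomial per coordinate over the whole record to strip
    the step-induced waviness, yielding the offline positions (x^b, y^b)."""
    x_fit = Polynomial.fit(t, x_raw, degree)
    y_fit = Polynomial.fit(t, y_raw, degree)
    return x_fit(t), y_fit(t)
```

Because the fit uses the entire record, including samples that lie in the future of each instant, this processing is only possible offline, as the text above notes.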

3.3. Real-Time Sensor-Based Biomechanical Compensations

Although the previous compensation can be expected to produce better estimations, it has the disadvantage of requiring filtering that introduces time lags. We propose a real-time alternative to the offline compensation, which modifies the translational velocity and orientation signals ($v_k^r$, $\phi_k^r$) using prior knowledge about gait biomechanics.
The velocity of the body's centre of mass, $v_k^r$, is an oscillatory signal whose average value coincides approximately with the value it takes at the initial and final contacts of the feet, as described in gait studies [22]. Therefore, the displacement velocity can be better computed from signals taken from the initial contact of the ipsilateral foot (Figure 4(A)) to the subsequent final contact of the contralateral foot (Figure 4(B)), and then to the subsequent initial contact of the contralateral foot (Figure 4(C)).
This way, we can correct the raw displacement velocity $v_k^r$ (Equation (10)) measured between initial and final foot-contact events. We will refer to this corrected sequence, $\tilde{v}_k$, as the real-time estimated velocity. Notice that the initial and final foot contacts can be detected from local maxima and minima of the derivative of the raw velocity [23,24], so this correction does not require specific new sensors. The acceleration in these segments, $\tilde{a}_k$, can also be corrected, computed as the increment of velocity between consecutive events.
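The event-based velocity correction can be sketched as follows. The three-point extremum test and the piecewise-constant hold between events are illustrative assumptions of ours; the exact detectors of [23,24] are not reproduced here.

```python
import numpy as np

def contact_events(v_raw, dt):
    """Index candidate foot-contact samples as local extrema of the
    derivative of the raw speed signal."""
    dv = np.gradient(v_raw, dt)
    idx = []
    for k in range(1, len(dv) - 1):
        if (dv[k] > dv[k-1] and dv[k] > dv[k+1]) or \
           (dv[k] < dv[k-1] and dv[k] < dv[k+1]):
            idx.append(k)
    return idx

def event_velocity(v_raw, events):
    """Hold the speed measured at each contact event until the next one,
    approximating the stride-average forward velocity v~_k."""
    v_tilde = np.array(v_raw, dtype=float)
    for i, k in enumerate(events):
        k_next = events[i + 1] if i + 1 < len(events) else len(v_raw)
        v_tilde[k:k_next] = v_raw[k]
    return v_tilde
```

On a synthetic sinusoidal speed oscillating around its stride average, the held values fall at the oscillation's zero crossings, recovering the average as the text describes.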
Regarding the orientation of the displacement $\phi_k$, it is known from gait biomechanics that the trunk and pelvis orientations evolve in counter-phase [21] (see Figure 5). Therefore, the raw orientation signal $\phi_k^r$ can be corrected by measuring the pelvis orientation $\phi_k^h$. A proportional average can cancel the oscillations [25] and return a basic trend signal, $\tilde{\phi}_k$, closer to the actual forward orientation:

$$\tilde{\phi}_k = K \phi_k^h + (1 - K)\,\phi_k^r \quad (13)$$

The calibration gain $K$ has to be experimentally determined for each subject to minimise the least-squares error between the raw and the baseline trajectories. From the estimated orientation $\tilde{\phi}_k$, we can compute the corresponding rotational velocity:

$$\tilde{w}_k = \frac{\tilde{\phi}_k - \tilde{\phi}_{k-1}}{\Delta t} \quad (14)$$
From the point of view of the real-time implementation, we have introduced the need for a sensor that measures hip orientation. As in the previous case of trunk velocity, it is possible to make this estimate with an IMU sensor placed on the hip [26].
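A sketch of the blending of Equation (13) together with a least-squares calibration of $K$; the closed-form gain below is our own formulation of the stated minimisation, not taken from the paper.

```python
import numpy as np

def compensated_heading(phi_raw, phi_hip, K):
    """Blend pelvis orientation with the raw heading (Equation (13));
    trunk and pelvis oscillate in counter-phase, so a proportional
    average cancels the step-induced swing."""
    return K * np.asarray(phi_hip) + (1 - K) * np.asarray(phi_raw)

def calibrate_gain(phi_raw, phi_hip, phi_baseline):
    """Least-squares choice of K against an offline baseline heading:
    minimise ||K*(phi_hip - phi_raw) + phi_raw - phi_baseline||^2."""
    d = np.asarray(phi_hip) - np.asarray(phi_raw)
    r = np.asarray(phi_baseline) - np.asarray(phi_raw)
    return float(np.dot(d, r) / np.dot(d, d))
```

When the two signals oscillate in exact counter-phase with equal amplitude, the optimal gain is $K = 0.5$ and the blend recovers the underlying trend.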
In the following, we will refer to the sequence $(x_k^r, y_k^r, \tilde{\phi}_k, \tilde{v}_k, \tilde{a}_k, \tilde{w}_k)$ as the estimated trajectory. This sequence defines the initial conditions for applying the kinematical models in real time to forecast the position of the walking subject. From the discussion above, this model is expected to produce a better trajectory estimation than the raw trajectory, close to that of the offline trajectory.
We will now test the proposed method to evaluate and compare the performance of the raw, offline estimated, and real-time estimated trajectories.

4. Evaluation

4.1. Experimental Setup

For the evaluation of the models, an experimental benchmark was designed involving five adult subjects aged between 21 and 52. Two types of trajectories, straight and curved, were defined (see Figure 6). For the straight trajectories, participants walked in a straight line 5.4 m in length; they were instructed to perform a normal gait while maintaining a constant walking velocity, and the trajectory was repeated five times per individual. For the curved trajectories, participants walked in circles of 1.5 m radius around a central point, again with five repetitions. They were instructed to keep the turning radius as constant as possible with the help of a guide painted on the floor, increasing the speed as the experiment progressed.
An Optitrack system composed of 10 Flex3 cameras was used to monitor the experiments at a sampling frequency of 100 Hz. Calibration of the system was performed following the manufacturer's recommendations, and the nominal residual errors achieved were 1.4 mm (mean). Following [27] (see Figure 7), we used the bidimensional raw position $(x_k, y_k)$ of the centroid of the rigid body formed from five markers placed around the waist as the subject position. Pelvis orientation, $\phi_k^h$, was estimated from the actual orientation of the rigid body in the defined reference system. A 3rd-order low-pass Butterworth filter with a cut-off frequency of 6 Hz was applied during the acquisition to remove frequency components above those pertinent to human gait.
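A sketch of an equivalent low-pass stage using SciPy. Note that `filtfilt` performs zero-phase (forward-backward) filtering, appropriate for recorded data; a strictly online implementation would need a causal filter instead.

```python
from scipy.signal import butter, filtfilt

def gait_lowpass(signal, fs=100.0, fc=6.0, order=3):
    """3rd-order low-pass Butterworth at 6 Hz for a 100 Hz record,
    matching the acquisition filter described in the text."""
    b, a = butter(order, fc / (fs / 2.0), btype="low")
    return filtfilt(b, a, signal)
```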

4.2. Model Application and Error Analysis

Data from the curved experiment were divided into five trajectory segments, one per individual turn. This way, we defined for each subject five straight trajectory segments and five curved trajectory segments (each corresponding to a whole turn). To form the offline estimated trajectories, we used linear regression for the straight segments and a degree-8 polynomial regression for the curved segments.
The four prediction models (CV, CA, CTRV, CTRA) were then applied sequentially to the sequence $(x_k^r, y_k^r, \tilde{\phi}_k, \tilde{v}_k, \tilde{a}_k, \tilde{w}_k)$, starting at every sample from $k = 1$ to $k = N - 100$, where $N$ is the length of the current trajectory data. From each sample, a prediction with a horizon of 1 s (100 samples) is produced (see Figure 8). For this purpose, the models are applied recursively, introducing as input the output of the previous iteration until the 1-s horizon is reached.
Similarly, these models were applied directly to the signals with the undulations left in, $(x_k^r, y_k^r, \phi_k^r, v_k^r, a_k^r, w_k^r)$, and to the post-processed signals with offline compensation, $(x_k^b, y_k^b, \phi_k^b, v_k^b, a_k^b, w_k^b)$.
For every application of a model starting at sample $k$, the prediction error sequence $e_k$ was defined as the sequence of Euclidean distances between the predicted positions $(\hat{x}_{k \ldots k+99}, \hat{y}_{k \ldots k+99})$ and the actual positions $(x_{k \ldots k+99}^r, y_{k \ldots k+99}^r)$ of the raw trajectory.
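The recursive rollout and the error sequence can be sketched as follows for the CV model (the simplest of the four); the horizon of 100 samples corresponds to 1 s at the 100 Hz sampling rate.

```python
import math

def rollout_cv(x, y, phi, v, dt, horizon):
    """Apply the CV model (Equation (8)) recursively, feeding each
    output back as the next input until the horizon is reached."""
    path = []
    for _ in range(horizon):
        x += v * math.cos(phi) * dt
        y += v * math.sin(phi) * dt
        path.append((x, y))
    return path

def prediction_errors(predicted, actual):
    """Euclidean distance between predicted and actual positions at
    each step of the horizon (the error sequence e_k)."""
    return [math.hypot(px - ax, py - ay)
            for (px, py), (ax, ay) in zip(predicted, actual)]
```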

5. Results

Figure 9 shows the Root Mean Square of the error sequences (predictions from one sample to one hundred samples, i.e., 0.01 s to 1 s) for each model (CV, CA, CTRV, CTRA) for the raw trajectories (top), offline estimated trajectories (centre), and real-time estimated trajectories (bottom).
We have found that prediction errors grow exponentially with time, characteristic of the drift usual in estimations based on integral procedures, with different growth rates for the different models. The RMS prediction error at a 1-s length is the largest for predictions from the raw trajectories, while predictions from the offline estimated trajectories show, in general, the lowest RMS error.
Table 1 contains the RMS error at a one-second prediction length from each trajectory (raw, offline estimated, and real-time estimated) and each prediction model. In general, as could be expected, curved models (CTRV, CTRA) perform better for curved trajectories, and straight models (CV, CA) perform better for straight trajectories. We found an exception in predictions from offline estimated trajectories, where there was no difference between the four prediction models for the straight trajectories. The translational acceleration and rotational velocity estimated for these trajectories were almost zero, as planned with the design of the experiment, which implies that all models adopted the behaviour of the CV model. Results also show that accelerated models perform in general worse than non-accelerated models, especially in the raw trajectories. In aggregated terms, the CTRV performs the best in all situations. Considering the type of path (straight, curved), the CTRV still performs the best except for prediction for straight real-time estimated trajectories, where straight models (CV, CA) perform better than the CTRV model.
To further analyse the performance of the prediction models, a multiway analysis of the variance (ANOVA) was carried out using MATLAB® R2020b from MathWorks to check if the mean prediction at a horizon of prediction of 1 s was affected by the intervening factors: the subject performing the test, the type of path (straight or curved), and the model applied for the prediction (CV, CA, CTRV, CTRA). The subject factor was treated as a random effect, while the type of path and model factors were treated as a fixed effect.
The results showed that mean prediction errors were not affected by the subject performing the test (p-value > 0.01) for the raw, offline estimated, and real-time estimated trajectories. On the contrary, the type of path and the prediction model significantly affected the mean prediction error for the raw, offline estimated, and real-time estimated trajectories (p-value < 0.01). A multiple comparison test was then carried out to analyse significant differences in the mean prediction error for pairs of type of trajectory/model.
For the offline estimated trajectories, we found four groups whose mean prediction errors were not significantly different within each group but were significantly different from those of the other three groups. These groups, expressed in ascending order of the average prediction error, were:
  • Group 1: CV and CA in straight segments.
  • Group 2: CTRV and CTRA in straight segments.
  • Group 3: CTRV and CTRA in curved segments.
  • Group 4: CV and CA in curved segments.
For the real-time estimated trajectories, we identified two different groups:
  • Group 1: CV in straight segments; CTRV in straight and curved segments.
  • Group 2: CA in straight segments; CTRA in straight and curved segments.
Regarding the raw trajectories, the test showed that the mean prediction error was significantly different for every trajectory-type/model pair.

6. Discussion

6.1. Experiments Design

Experiments were designed to evaluate the performance of the proposed model during normal walking and, therefore, they comprised walks in straight and curved trajectories. We designed experiments for the extreme cases, straight trajectories at a constant velocity and circular trajectories with increasing velocity of displacement. The prediction results of the proposed model are compared with those obtained by applying the kinematic models directly to the raw data and to the post-processed data to be able to quantify the improvement achieved.

6.2. Impact of Velocity and Orientation on Predictions

Kinematical models based on the well-known Newton's laws have been used in the state of the art as a general approach to forecasting the position of moving objects. For instance, in [28], the authors perform a comparative study of several dynamical models (CV, CA, CTRV) and a multi-model algorithm combining such basic models. In terms of single motion models, they observe benefits for the CV model in some scenes and for the CTRV in others, which renders more complex models inefficient. Similarly, we have observed that these models have usually been applied as a complement to other advanced techniques [15]. For our proposed prediction method, we chose the four models describing the four combinations in which translational accelerations and angular velocities may occur.
The results of applying these models to our raw data confirm how the left-to-right fluctuation of the subject's position along the direction of displacement, induced by the alternation of steps, affects the performance of the basic kinematical models. Prediction on the raw data has the worst performance, and two effects can be distinguished. Firstly, the RMS error of the accelerated models is significantly higher than that of the non-accelerated models for both curved and straight paths, in contrast to the offline prediction. This results from feeding the sinusoidal velocity into the kinematic models. A similar effect is found for straight versus curved models on straight trajectories: while in the offline prediction all models have the same mean error, with the raw data the CTRV model outperforms the CV, and the CTRA outperforms the CA. This is due to the oscillation of the orientation, which is picked up by the curvilinear models.

6.3. Impact of Basic Kinematic Models on Predictions

The best model to apply in a general situation is the CTRV model, which shows the best prediction errors in aggregated terms. We have also found that the use of accelerated models raises a concern. On the one hand, for offline curved trajectories, the accelerated and non-accelerated curved models (CTRA and CTRV) performed equally. This finding was unexpected, as subjects were told to accelerate during the curved experiments precisely to compare the performance of the accelerated models (CA, CTRA) with that of the non-accelerated models (CV, CTRV). However, the acceleration estimated from the curved baseline trajectories ($a_k^b$) was on average 0.04 ± 0.09 m/s², coherent with reference data for normal walking reported in the literature [9]. This value was possibly too small to make a difference between the two groups of models, as confirmed by the multiple comparison test. This finding therefore supports the idea that accelerated models do not provide a real performance improvement over non-accelerated models in the context of predicting position during human walking. Moreover, using the acceleration in the models may pose an issue, as the prediction appears more sensitive to errors in this parameter, even with the velocity compensation.
In general terms, for the reasons stated above, the CTRV performs the best for unconstrained trajectories. We provide a statistical analysis that reveals the model most efficient for each type of path. In case the type of path is known in advance, the taxonomy of models may help to choose the most appropriate for a given application framework.

6.4. Performance of Real-Time Sensor-Based Compensation Method

The increased performance of predictions from offline filtered trajectories compared to predictions from raw trajectories confirms the necessity of removing the fluctuations from the raw signals before applying the prediction kinematical models. For this work, we performed batch filtering over the whole raw signals to define the offline filtered trajectories. However, in a real application scenario, such undulations should be removed in real time, considering only the signal sampled up to the current moment: future positions of the subject would not be available, and processing like that used to define the baseline trajectories would not be possible.
Classical frequency-domain filters would be an option to remove the gait-induced undulations found in the sampled signals, and in certain situations they could perform properly; for instance, if the trajectory is essentially straight or the velocity essentially constant, a low-pass filter a few seconds in length could be adequate. However, they present drawbacks: the low stride frequencies of human gait (between 0.74 and 1.3 strides/s [29]) would require slow filters that introduce a delay too high for short-term prediction (a four-pole Butterworth filter with a cut-off frequency of 1 Hz is expected to add a delay of about 0.5 s [30]). So, for changing orientations and, eventually, changing velocities, these filters would not be very effective. Recent results [25] confirm that this approach may lead to erroneous results and that a different approach is needed for a general solution to the problem.
In this paper, we propose to eliminate the undulations in the orientation and the velocity from consideration of the biomechanical characteristics of human gait. We aimed to provide a general approach applicable to unconstrained trajectories mixing straight and curved paths and, eventually, different linear and rotational velocities and accelerations. With this approach, we show how prediction errors are considerably reduced with respect to predictions from raw trajectories, making them comparable to predictions from the baseline estimations. Still, there is room for improvement. Regarding the use of acceleration in the models, the biomechanically inspired corrections applied to the acceleration do reduce its degrading effect on predictions from the raw trajectories. However, as acceleration has a large impact on error while acceleration in human walking is low, the use of accelerated models may not be very profitable.
Comparing our results with those of other works, we find values in the same range. An example is [15], where the proposed prediction method achieves an average error between 100 and 200 mm for predictions at a horizon of 0.5 s, and individually applying a CV model (called the Velocity-Based Position Projection Method) gives a mean error of 250 mm. In our case, the RMS error for the four kinematic models at a prediction horizon of 0.5 s is between 50 and 100 mm, as opposed to an RMS error of ≈120 mm for the CV model applied to the raw data.

7. Conclusions

In a human–robot collaborative space, the short-term prediction of a person's walking position becomes necessary from both a real and a perceived safety point of view. In this paper, we propose a human trajectory prediction method for real-time application based on the biomechanical characteristics of human gait. The approach can be applied with inexpensive wearable sensors and has a low computational load. Experiments comprising straight and curved paths were designed to evaluate the performance of the proposed method during normal walking. The results confirm how the left-to-right fluctuation of the subject's position along the direction of displacement, induced by the alternation of steps, degrades the performance of the basic kinematical models on their own. We have found that removing such undulations improves prediction performance, and we propose a prediction method that compensates the body swing by correcting velocity and orientation based on the initial and final contacts of the feet and on the pelvic motion. The results also show that, since acceleration has a large impact on error while acceleration in human walking is low, the use of accelerated models may not be very profitable. The proposed prediction method nevertheless improves on the basic kinematical models and produces results compatible with real-time applications.

Author Contributions

Conceptualization, L.G. and A.M.L.; methodology, L.G. and A.M.L.; software, L.G.; validation, J.C.Á., D.Á. and A.M.L.; investigation, L.G., A.M.L., J.C.Á. and D.Á.; data curation, L.G.; writing—original draft preparation, L.G. and A.M.L.; writing—review and editing, L.G. and J.C.Á.; visualization, L.G.; supervision, J.C.Á., A.M.L. and D.Á.; project administration, A.M.L.; funding acquisition, J.C.Á., A.M.L. and D.Á. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding authors.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. An instance of the problem of human motion trajectory prediction: the location of a specific landmark on the walking human body (green dot) is projected to its future location (red dot) within a given prediction horizon. The overall movement of the person's marker (solid green line) results in an estimated walking trajectory (red dotted line).
Figure 2. Geometry of motion for the 2-dimensional kinematical model of a moving object.
Figure 3. Relationship between common movement patterns defined from simplifying assumptions over the general model: CV (straight displacement at Constant Velocity), CA (straight displacement at Constant Acceleration), CTRV (curved displacement, at constant displacement and rotational velocities), and CTRA (curved displacement, at constant rotational velocity and constant accelerated displacement).
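As a minimal, hypothetical sketch (not the authors' implementation), the CTRV pattern of Figure 3 can be written as a one-step state propagation that degenerates to the CV pattern as the turn rate vanishes:

```python
import math

def predict_ctrv(x, y, phi, v, omega, dt):
    """One prediction step of the CTRV model (Constant Turn Rate and Velocity).

    State: position (x, y), heading phi [rad], speed v, turn rate omega [rad/s].
    When omega is ~0 the model reduces to CV (straight displacement at
    constant velocity), matching the hierarchy shown in Figure 3.
    """
    if abs(omega) < 1e-9:  # CV limit: move along the current heading
        return x + v * dt * math.cos(phi), y + v * dt * math.sin(phi), phi
    phi_next = phi + omega * dt
    x_next = x + (v / omega) * (math.sin(phi_next) - math.sin(phi))
    y_next = y + (v / omega) * (math.cos(phi) - math.cos(phi_next))
    return x_next, y_next, phi_next
```

Iterating this step over the prediction horizon with a fixed initial state yields the model-based trajectory; the CA and CTRA variants would additionally propagate an acceleration term.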
Figure 4. Forward real-time estimated velocity (red line) and offline estimated velocity (green line). The forward velocity is estimated by holding the current value of the raw velocity (continuous blue line) from the initial contact of the ipsilateral foot until the final contact of the contralateral foot, and vice versa. Initial and final contacts are detected from local maxima and minima of the derivative of the velocity (dashed blue line).
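A rough sketch of the hold-and-resample idea of Figure 4, with foot contacts approximated by strict local extrema of the numerical derivative of the velocity (the paper's detector and its thresholds may differ):

```python
import numpy as np

def hold_velocity(v_raw, dt):
    """Hold-based forward velocity estimate (illustrative sketch).

    Foot contacts are approximated by strict local maxima/minima of the
    velocity derivative; between consecutive contacts the raw velocity
    sampled at the last contact is held constant.
    """
    dv = np.gradient(v_raw, dt)
    # strict local extrema of the derivative mark initial/final foot contacts
    events = [k for k in range(1, len(dv) - 1)
              if (dv[k] > dv[k - 1] and dv[k] > dv[k + 1])
              or (dv[k] < dv[k - 1] and dv[k] < dv[k + 1])]
    v_est = np.empty_like(v_raw)
    held = v_raw[0]
    it = iter(events)
    e = next(it, None)
    for k in range(len(v_raw)):
        if e is not None and k == e:
            held = v_raw[k]  # re-sample the raw velocity at the detected contact
            e = next(it, None)
        v_est[k] = held
    return v_est
```

The resulting signal is piecewise constant between contact events, removing the step-induced undulations discussed in the text.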
Figure 5. Real-time orientation estimation (red line) and offline estimated orientation (green line). The real-time estimate is a proportional combination of two counter-phase signals: the pelvic orientation (continuous blue line) and the orientation estimated from the sampled raw spatial position of the subject (dashed blue line).
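The proportional combination of Figure 5 can be illustrated as a weighted average of the two orientation signals. The weight `alpha` below is a hypothetical parameter (the paper's mixing proportion may differ); averaging on the unit circle avoids wrap-around artifacts at ±π:

```python
import math

def blend_orientation(theta_pelvis, theta_position, alpha=0.5):
    """Proportional combination of two counter-phase orientation signals [rad].

    alpha weights the pelvic orientation against the orientation derived
    from the sampled raw position; the value 0.5 is an assumption.
    """
    # average the headings as unit vectors to handle angle wrap-around
    x = alpha * math.cos(theta_pelvis) + (1 - alpha) * math.cos(theta_position)
    y = alpha * math.sin(theta_pelvis) + (1 - alpha) * math.sin(theta_position)
    return math.atan2(y, x)
```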
Figure 6. Experiments involved displacements over straight (5.4 m length) and circular trajectories (1.5 m radius). Straight trajectories ran from point A to B. For curved trajectories, participants described a circle starting and ending at point C.
Figure 7. Five reflective markers were placed around the waist of the subject (left) for the estimation of the 2-dimensional spatial position of the body (x_k, y_k) and the orientation θ_k^h of the pelvis (right).
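As an illustrative sketch only, a 2-dimensional body position and a pelvic orientation could be derived from waist markers as below. The marker names 'L' and 'R' (left/right hip) and the use of a plain centroid are assumptions for the example, not the paper's protocol:

```python
import math

def pelvis_pose(markers):
    """Hypothetical estimate of (x_k, y_k) and pelvic heading from 2-D markers.

    markers: dict mapping marker names to (x, y) points; the keys 'L' and 'R'
    (left/right hip markers) are assumed names for this sketch.
    """
    xs = [p[0] for p in markers.values()]
    ys = [p[1] for p in markers.values()]
    x_k, y_k = sum(xs) / len(xs), sum(ys) / len(ys)  # centroid as body position
    lx, ly = markers['L']
    rx, ry = markers['R']
    # heading is the left-to-right hip axis rotated by +90 degrees
    dx, dy = rx - lx, ry - ly
    theta = math.atan2(dx, -dy)
    return x_k, y_k, theta
```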
Figure 8. Application of a prediction model starting at the third sample (k = 3) of a raw trajectory (x_k^r, y_k^r, φ_k^r, v_k^r, a_k^r, ω_k^r). Predictions are sequentially extended for one second (100 samples, k = 3, …, 102) using the initial state vector highlighted in the blue column (left). The prediction error sequence is then defined from the Euclidean distance between the predicted and actual positions of the subject (k = 3, …, 102).
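The error metric of Figure 8, the Euclidean distance between predicted and actual positions over the horizon, aggregated as an RMS value as in Table 1, can be sketched as:

```python
import math

def prediction_errors(predicted, actual):
    """Euclidean prediction error at each horizon sample (Figure 8 procedure).

    predicted, actual: sequences of (x, y) positions of equal length.
    """
    return [math.hypot(px - ax, py - ay)
            for (px, py), (ax, ay) in zip(predicted, actual)]

def rms(errors):
    """Root-mean-square aggregation of an error sequence."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```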
Figure 9. RMS prediction errors up to a prediction horizon of 1 s from the raw (top), offline estimated (centre), and real-time estimated (bottom) trajectories.
Table 1. RMS error [mm] of the prediction on the theoretical offline estimated, raw, and real-time estimated trajectories at a one-second prediction horizon. Results are shown for each prediction model and each type of trajectory (straight/curved). Aggregated values show the RMS prediction error considering straight and curved trajectories together.

                                     CV       CA       CTRV     CTRA
Offline estimated     Aggregated     360.83   363.80    78.15    82.83
                      Curved         445.13   448.82    92.10    98.17
                      Straights       40.12    40.12    40.12    40.12
Raw                   Aggregated     379.52   646.16   294.27   568.33
                      Curved         454.50   718.52   305.17   579.13
                      Straights      160.28   480.28   272.44   547.31
Real-time estimated   Aggregated     357.50   369.04   113.33   130.59
                      Curved         434.66   447.78   106.20   127.94
                      Straights      110.20   120.29   125.73   135.44
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

González, L.; López, A.M.; Álvarez, J.C.; Álvarez, D. Real-Time Short-Term Pedestrian Trajectory Prediction Based on Gait Biomechanics. Sensors 2022, 22, 5828. https://0-doi-org.brum.beds.ac.uk/10.3390/s22155828
