Article

Characterizing Human Box-Lifting Behavior Using Wearable Inertial Motion Sensors

Department of Electrical and Computer Engineering, University of Wyoming, Laramie, WY 82071, USA
* Author to whom correspondence should be addressed.
Submission received: 24 March 2020 / Revised: 14 April 2020 / Accepted: 15 April 2020 / Published: 18 April 2020
(This article belongs to the Special Issue Sensor Fusion in Assistive and Rehabilitation Robotics)

Abstract

Although several studies have used wearable sensors to analyze human lifting, these analyses have generally been limited in scope. In this proof-of-concept study, we investigate multiple aspects of offline lift characterization using wearable inertial measurement sensors: detecting the start and end of the lift and classifying the vertical movement of the object, the posture used, the weight of the object, and the asymmetry involved. In addition, the lift duration, horizontal distance from the lifter to the object, the vertical displacement of the object, and the asymmetric angle are computed as lift parameters. Twenty-four healthy participants performed two repetitions of 30 different main lifts each while wearing a commercial inertial measurement system. The data from these trials were used to develop, train, and evaluate the lift characterization algorithms presented. The lift detection algorithm had a start time error of 0.10 s ± 0.21 s and an end time error of 0.36 s ± 0.27 s across all 1489 lift trials with no missed lifts. For posture, asymmetry, vertical movement, and weight, our classifiers achieved accuracies of 96.8%, 98.3%, 97.3%, and 64.2%, respectively, for automatically detected lifts. The vertical height and displacement estimates were, on average, within 25 cm of the reference values. The horizontal distances measured for some lifts differed from the expected values by up to 14.5 cm, but were very consistent. Estimated asymmetry angles were similarly precise. In the future, these proof-of-concept offline algorithms can be expanded and improved to work in real time. This would enable their use in applications such as real-time health monitoring and feedback for assistive devices.

1. Introduction

Musculoskeletal disorders caused by frequent or high-risk lifting tasks are among the most common work-related injuries for physical laborers worldwide, especially those in the fields of construction and factory assembly lines [1]. Methods exist for assessing the risk associated with lifts. For example, the National Institute for Occupational Safety and Health (NIOSH) designed the revised NIOSH lifting equation (RNLE), which allows users to compute a lifting index for any lift using several parameters [2]. The lifting index indicates the amount of “risk” associated with a lift such that lifts with a higher lifting index have a higher likelihood of causing injury. Parameters required to compute the lifting index include the weight of the object, the vertical height above the floor, the horizontal distance from the person, etc. Such methods are valuable tools for engineering safer lifting tasks, but they still rely on subjective observations and do not account for human variability. Thus, a system for automatically monitoring lifting behavior over time could prove useful. Furthermore, the information provided by such a real-time system could be used as control feedback for assistive devices such as trunk exoskeletons that support the user during lifting tasks, thereby preventing musculoskeletal disorders or reducing the consequences of such disorders [3,4,5].
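For reference, the metric form of the RNLE computes a recommended weight limit (RWL) as the product of a 23 kg load constant and six multipliers, and the lifting index as the ratio of the actual load to the RWL. The sketch below illustrates this computation under stated assumptions: the frequency and coupling multipliers are assumed to be looked up from the published tables, and clamping the horizontal and travel distances at their minimum values is our simplification.

```python
def niosh_lifting_index(load_kg, h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
    """Metric form of the revised NIOSH lifting equation [2].

    load_kg: weight of the object
    h_cm:    horizontal distance of the hands from the midpoint between the ankles
    v_cm:    vertical height of the hands at the lift origin
    d_cm:    vertical travel distance of the object
    a_deg:   asymmetry angle in degrees
    fm, cm:  frequency and coupling multipliers, taken from the published tables
    """
    lc = 23.0                                    # load constant (kg)
    hm = 25.0 / max(h_cm, 25.0)                  # horizontal multiplier (capped at 1)
    vm = 1.0 - 0.003 * abs(v_cm - 75.0)          # vertical multiplier
    dm = min(1.0, 0.82 + 4.5 / max(d_cm, 25.0))  # distance multiplier (capped at 1)
    am = 1.0 - 0.0032 * a_deg                    # asymmetry multiplier
    rwl = lc * hm * vm * dm * am * fm * cm       # recommended weight limit (kg)
    return load_kg / rwl                         # lifting index; values > 1 indicate elevated risk
```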
For human motion analysis problems such as this one, optical tracker-based systems such as Vicon (Vicon Motion Systems Ltd., Oxford, UK) or Optotrak (Northern Digital Inc., Ontario, Canada) are often used [6]. These systems rely on multiple fixed cameras in a room and physical markers placed on the subjects’ bodies to produce measurements. They can be very accurate in a controlled laboratory setting, but are not very adaptable to different work environments due to a lack of portability, and because they are affected by physical obstructions, limited field of view, and adverse lighting conditions [6,7]. In practice, it is not feasible to place enough markers to capture the full 3D pose through vision processing alone. A common technique is therefore to create a skeletal model of a human, then perform inverse kinematic analysis on the known marker positions (captured with the computer vision system) to move the joints in a realistic way. One defining characteristic of optical systems is that their measurements are absolute; their accuracy is limited only by what the cameras can resolve and the fidelity of the human inverse kinematic analysis that is done in software. This characteristic makes them a popular choice as references for other motion capture techniques. The biggest weakness of optical systems, like force plates and other fixed measuring devices, is that they are not portable enough for field use.
Wearable sensors are likely the key to achieving reliable motion analysis in realistic environments. They can be fully self-contained on the wearer’s person so that the wearer can move freely during the analysis, and on-body recording allows the wearer to cover large distances or operate in extreme environments during data collection. Furthermore, wearable sensors have already been used for a wide variety of motion analysis applications. Several studies have focused on human gait analysis using wearable inertial measurement units (IMUs—devices consisting of accelerometers, gyroscopes, and optionally magnetometers) for both normal [8,9,10] and abnormal [11,12] gaits. Gait measurements captured with IMUs can also be used as input to lower limb assistive devices, such as robotic prostheses and exoskeletons [13], and to detect dangerous conditions, such as falling in geriatric populations [14]. Wearable sensors have also been used extensively for kinematic analysis of the upper limbs [15]. For example, it is possible to assess the progress of arm rehabilitation [16] and to monitor the effectiveness of shoulder surgery [17]. Due to their form factor and cost-effectiveness, they can also be used as an input for rehabilitation games [18]. Recently, there has been increasing interest in using wearable sensors for analyzing human motion in athletics, such as swimming strokes [19] and the kicking of footballs [20]. This is only viable because of the portability and cost-effectiveness of IMUs.
IMU technology has advanced to the point where full-body motion capture is now possible with applications in many fields [21,22]. The 3D linear acceleration measured by the accelerometers, 3D angular rate measured by the gyroscopes, and 3D magnetic field measured by the magnetometers can be fused together with a sensor fusion algorithm (commonly a Kalman filter) to produce the absolute orientation of the device in 3D space. In crude implementations, the angular rates from the gyroscopes can simply be integrated over time to compute the orientation of the device. However, the measurements will inevitably drift due to accumulations of error caused by noise, temperature bias, or sensor error, which means that the device will require constant recalibration. Sensor fusion reduces the severity of this issue by adjusting the coordinate frame in reference to acceleration due to gravity (measured by the accelerometers) and the Earth’s magnetic field (measured by the magnetometers). When multiple IMUs are attached to various segments of the body, the orientations of those segments can be measured. Using inverse kinematics with a skeletal model of the human body, it is possible to measure joint angles, segment positions, velocities, etc. to produce a full-body pose reconstruction.
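As a minimal illustration of this principle (not the algorithm used by the Xsens system, which performs full 3D Kalman-filter-based fusion), a toy one-axis complementary filter blends integrated gyroscope rates with the gravity direction measured by the accelerometer; signal names and the filter constant are placeholders.

```python
import numpy as np

def fuse_pitch(gyro_deg_s, accel_xyz, dt, alpha=0.98):
    """Toy one-axis complementary filter: integrate the gyro for short-term
    accuracy, then pull the estimate toward the gravity direction measured by
    the accelerometer to cancel long-term drift."""
    pitch, out = 0.0, []
    for w, (ax, ay, az) in zip(gyro_deg_s, accel_xyz):
        gyro_pitch = pitch + w * dt                                 # short-term: gyro integration
        accel_pitch = np.degrees(np.arctan2(ax, np.hypot(ay, az)))  # long-term: gravity reference
        pitch = alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
        out.append(pitch)
    return np.array(out)
```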
Some studies have used wearable sensors to analyze human lifting behavior. For instance, Brandt et al. proposed a method of identifying a lift as low or high risk using data from two accelerometers placed on the back and surface electromyography (sEMG) electrodes on the upper trapezius and erector spinae muscles [23]. They designed a set of lifts for the participants to perform and only varied the load between trials. They showed that it is possible to estimate the load being carried for a known type of lift, and then use that estimate to classify the lift as low or high risk, achieving accuracy as high as 78.1% using a subject-specific threshold based on sEMG and back inclination (obtained from the accelerometer). However, the starts/ends of the lifts were not automatically detected, and the posture (e.g., stooping or squatting) used throughout the lift, the location of the object, and the amount of twisting involved were not measured.
Lu et al. [24] developed an algorithm that used IMUs to measure lifting risk factors in real time. Five IMUs were attached to specific locations on the subjects’ bodies and the data were fed into two separate software modules: a lift detection module and a sensor fusion module. The lift detection module monitored the IMU data in 2.5 s sliding windows with a 0.5 s step size and used a machine learning approach to determine whether the wrist sensors were “synchronized,” meaning that the hands were inertially coupled. An assumption was made that when the wrists were synchronized, a lift was most likely occurring. The authors manually labeled every 2.5 s window in 25 min of training data, then compared the labels to the results of the lift detection module. The module correctly labeled 83%–85% of the actual lift windows as lift windows (true positives), and it mislabeled 32% of the non-lift windows as lift windows (false positives). The sensor fusion module processed the output from the IMUs’ accelerometers and gyroscopes to produce absolute orientations in real time. These orientations were used to calculate the trunk flexion angle, the vertical height of the object, and the horizontal distance from the object to the lifter. In a second related study [25], the authors compared the estimated values to values measured by a commercial motion capture system and found that the estimates of the vertical and horizontal positions of the box were poor, with mean errors of 33 cm and 6.5 cm, respectively. The estimate of trunk flexion had a mean error of 2.3 degrees. However, these studies have several weaknesses: the features used for lift detection are unclear, and the authors do not report what percentage of the windows in the training data included true synchronization, so the actual training accuracy cannot be calculated. Lift detection accuracy is discussed in the second paper, but the authors do not mention whether any of the lifts were completely missed by the algorithm. Furthermore, the approach does not consider the asymmetry of the lift (in this study, asymmetry is defined as the twisting of the upper body relative to the lower body required to complete the lift) or the weight of the object, both of which are important factors for determining the risk associated with a lift with the NIOSH lifting equation [2]. The authors also did not attempt to classify the posture used throughout the lift, which would be useful for health monitoring or assistive device control.
O’Reilly et al. [26] discuss a method of classifying deadlifts (a popular weight-lifting exercise for rehabilitation and strength training) as good or bad using wearable IMUs. Two experiments were carried out: one in which the participants deliberately performed aberrant deadlifts mixed with acceptable ones, and one in which they performed a 3-repetition maximum strength deadlift protocol to elicit aberrant form naturally. The authors defined the following categories of deadlifts for the purpose of classification: acceptable, shoulders behind bar at start position, rounded back at any point during movement, hyperextended spine at any point during movement, bar tilting, and other. Random forest classifiers were trained with 17 descriptive features to perform both binary classification (acceptable or aberrant) and 5-category classification. Using five sensors placed on the lower back, left and right thighs, and both shanks, they were able to achieve 93% cross-validation accuracy in binary classification with personalized classifiers and 75% with a global classifier for the lifts from experiment 1. The multi-class personalized classifiers achieved 81% accuracy, whereas the global classifier achieved 60%. For experiment 2, the personalized classifiers were 84% and 78% accurate for binary and multi-class, respectively, while the global classifiers were 73% and 54% accurate. A characterization like this is similar to what we would like to achieve for general lifts. However, such an approach requires a lift detection algorithm to temporally locate the beginning and end of each lift. Classification-only algorithms are also potentially limited in their ability to characterize a wide range of lift types (it is difficult to design categories for every possible case), so it is also important to take continuous measurements, such as the vertical height of the object and the asymmetry angle.
Most of these studies are too narrow in scope to be used to characterize general lifting behavior. Furthermore, they do not focus on classifying important features such as posture and asymmetry. They are also restricted by the amount of information they can gather from a limited number of sensors. Therefore, there is a need for further research. IMUs were the sensor of choice for this study, as they are simple, cost-effective, noninvasive, and highly portable compared to alternative motion tracking solutions [10]. The primary objective of this study was to develop an offline pattern recognition algorithm that can detect and characterize human lifting activity. Ideally, the algorithm would be able to extract information on the posture used throughout the lift and the approximate distance the object was moved. Information about the object being lifted, such as its weight, would also be useful. However, these quantities are difficult to obtain from IMU data alone, as IMUs provide no way to measure them directly. For this study, we assumed that a lift can be broken down into the following components.
  • Source: The starting location of the object relative to the lifter.
  • Destination: The final location of the object relative to the lifter.
  • Asymmetry: The amount of twisting required to perform the lift.
  • Posture: For the purposes of this study, whether the lifter was squatting or stooping during the lift.
  • Weight: The estimated weight of the object being lifted.
This information includes the parameters necessary for computing the NIOSH lifting index [2] and could serve as input to an assistive device’s control algorithm.
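As an illustration only, the per-lift output of such a characterization could be collected in a structure like the following; the field names and types are hypothetical and are not part of the algorithms described later.

```python
from dataclasses import dataclass

@dataclass
class LiftCharacterization:
    """Hypothetical per-lift output; field names are illustrative only."""
    start_time_s: float
    end_time_s: float
    source_height_m: float         # vertical height of the object at the start of the lift
    destination_height_m: float    # vertical height of the object at the end of the lift
    horizontal_distance_m: float   # object-to-lifter distance in the floor plane
    asymmetry_deg: float           # trunk twist required by the lift
    posture: str                   # "squatting", "stooping", or "neither"
    weight_class: str              # "3 kg" or "10 kg"
```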

2. Methods

2.1. Hardware

A commercial IMU system was used to obtain motion measurements and joint angles: Xsens Link (Xsens Technologies BV, Enschede, The Netherlands). Link is a full-body motion capture system consisting of 17 IMU “trackers” attached to the feet, lower legs, upper legs, pelvis, shoulders, sternum, head, upper arms, forearms, and hands. Each tracker contains 3D linear accelerometers, 3D rate gyroscopes, 3D magnetometers, and a barometer [27]. Full specifications can be found in the MVN User Manual [27], but the basic tracker performance is characterized as follows: static accuracies for roll/pitch and heading are 0.2 and 0.5 degrees, respectively; dynamic accuracy is 1 degree (root mean square); accelerometer range is ±16 g; and gyroscope range is ±2000 degrees/s. The trackers are attached to the body using a tight-fitting pocketed shirt and hook-and-loop straps, and measure the motions of each body segment. Table 1 describes the positions of the sensors on the body, and a participant wearing the system is shown in Figure 1.
3D tracking data from each of the motion trackers are transmitted to a workstation computer wirelessly at 240 Hz. There, the Xsens MVN Fusion Engine combines the data from the individual motion trackers with a biomechanical model of the subject’s body to obtain segment positions and orientations [21]. A calibration routine was carried out for each participant to account for sensor position/orientation and body shape variance. In this routine, the participants’ dimensions (body height, foot length, arm span, ankle height, hip height, hip width, knee height, shoulder width, shoulder height, and shoe sole height) were first measured and entered into the software. Next, they were asked to hold a specific static pose for several seconds, then walk back and forth about 10 feet. The Fusion Engine determined the orientation and position of the sensors relative to their segments during this process. The MVN biomechanical human model has 23 segments with 22 joints. Each joint is specified by statistical parameters for 6-degree-of-freedom joint laxity and an advanced model is used to solve the kinematics of the spine and shoulder blades. The output from the Fusion Engine is a full kinematic description of each segment, which includes position, velocity, acceleration, orientation, angular velocity, and angular acceleration. Thus, the Xsens motion capture system enables full-body pose reconstruction comparable to optical systems, with about a 1% error in segment traveled distance without additional fusion with optical systems, GPS, etc. Although the system records at 240 Hz, the data were downsampled to 60 Hz for analysis.
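The downsampling step can be implemented in several ways; as an illustrative assumption (not necessarily the method used here), the sketch below decimates by a factor of 4 with an anti-aliasing filter rather than simply keeping every fourth sample.

```python
import numpy as np
from scipy.signal import decimate

frames_240hz = np.random.randn(2400, 3)          # placeholder: 10 s of 3-axis data at 240 Hz
frames_60hz = decimate(frames_240hz, 4, axis=0)  # anti-aliased downsampling to 60 Hz
```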

2.2. Experimental Design

An experiment was carried out in which participants performed lifts while wearing the IMUs. Twenty-four volunteer participants (19 male, 5 female; age 30 ± 9 years; height 176.6 ± 8.6 cm; weight 79.9 ± 18.6 kg) were recruited from the students and staff of the University of Wyoming. Candidates were excluded if they had conditions that could affect their ability to perform lifts safely or normally. These conditions included a history of major spinal injuries or surgery (e.g., disc removal, spinal fusion, and hardware placement), chronic or current acute lower back pain, and pregnancy. Each participant signed an informed consent form prior to beginning the experiment, and the study protocol was approved by the University of Wyoming Institutional Review Board (protocol #20190801SH02476). Participants were asked to wear closed-toe shoes suitable for exercise and to avoid very loose-fitting clothing that would interfere with the sensor straps.
Adjustable shelving was used to provide source/destination locations at the knee (54 cm), chest (108 cm), and head (157 cm) levels. These levels were fixed across all experiments and did not necessarily align with the knees, chest, and head of every participant. The floor was also used as a location, for a total of 4 locations. The 2 shelves were situated 45 degrees apart from each other, as shown in Figure 1.
Each participant performed a pre-determined set of lifts consisting of 30 main lifts and 1–3 transition lifts. The order of the main lifts was randomized for each experiment to avoid the effects of muscle fatigue on the overall data trends. As a consequence, the crate sometimes needed to be moved from one location to another between main lifts. Rather than just telling participants to move it, “transition” lifts were included in the predetermined set of lifts and were recorded to obtain additional data. These lifts were sometimes members of the main lift group, and were other times specifically designed for the transition. Lifts that were included for transitions, but were not main lifts, were called “extra” lifts. For data integrity, each main lift was performed 2 times, for a total of 60 main lifts plus 1–3 transition lifts per experiment. Mistakes during the trials (either by the participants or by the experimenter) and some bad recordings caused some lifts to be performed more or fewer times than planned. In 4 instances, unplanned lifts were also mistakenly performed. Because these lifts were not deemed harmful to the study, they were kept in the data for analysis. Unless otherwise stated, all lifting trials (main, extra, and unplanned) were processed by the algorithms. Table 2 summarizes the planned and actual occurrences of each lift.
Participants were given at least 15 s of rest between lifts. The main object was a plastic crate with handles (33 cm wide, 33 cm long, and 28 cm tall) that weighed 10.4 kg. The secondary object was a similar crate that weighed 3.6 kg. For the remainder of this paper, the primary object will be referred to as the 10 kg weight and the secondary object as the 3 kg weight.
To ensure consistency across subjects, tape marks on the floor indicated where the crate should be placed for the lifts that start or end on the floor level. The marks were arranged in front of the shelves, as shown in Figure 2.
When the floor was the starting point, the crate was initially placed in front of the appropriate line, like the example in Figure 3. Likewise, participants were instructed to place the crate in front of the appropriate line when the floor was the destination.
Table 2 shows the full list of lifts performed by each participant.
For the asymmetric lifts, the twist direction indicates which way the participant twisted while carrying the object. The participants always started facing toward the lower of the source/destination pairs. For example, for the “floor to chest, squatting, 10 kg, twisting left” lift, participants were instructed to start facing the object on the floor. Once the object was picked up, the participants twisted to the left to place it on the shelf. For the “chest to floor, squatting, 10 kg, twisting right” lift, participants started facing the location on the floor where the object would eventually be placed. They then twisted left, grabbed the object from the shelf, twisted back to the right, and placed it on the floor.
Participants were instructed to perform lifts with either the ergonomically correct posture (squatting) or with incorrect posture (stooping) [28]. In squatting lifts, participants attempted to keep their backs straight while doing most of the lifting with their legs. In stooping lifts, they bent over to pick up the object while keeping their knees mostly straight. Lifts from chest to head and vice versa do not require any particular posture.

2.3. Lift Detection

The first step to characterizing a lift is identifying its starting and ending times. This allows us to determine the starting and ending location of the object as well as the duration of the lift. During the experiment, the data were pre-segmented into individual lifting trials that included some time before and after each lift. Only one lift was performed per trial, and the participants’ hands always started from the same position (relaxed, hanging by their sides). We hypothesized that, at the beginning and ending of the lift, the participant’s hands would be the furthest away from their center of mass. We made this assumption on the basis that human balance reactions cause the center of mass to remain vertically aligned with the base of support (in this case, the feet) [29]. As the arms are extended away from the body, the distance from the hands to the center of mass increases. The MVN Analyze software produces the participant’s estimated center of mass and hand positions over time. The lift detection algorithm computes the mean distance from the center of mass to the participant’s left and right hands for every frame of the trial. A moving-average filter with a window size of 30 points is applied, and then the two most prominent peaks in the trial are identified as the beginning and ending of the lift. The prominence of a peak measures how much it stands out due to its intrinsic height and its location relative to other local maxima. To enable comparison, the start/end times were manually labeled for each trial. The first author did this by watching the visual lift playbacks (in MVN Analyze) and estimating the times when the majority of the crate’s weight was transferred from its supporting platform (shelf or floor) to the participant and vice versa. Figure 4 shows a plot of the distance computed by the lift detection algorithm along with the estimated and manually labeled start/end times.
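A minimal sketch of this detection step is given below, assuming the center-of-mass and hand trajectories are available as arrays of shape (n_frames, 3); the prominence-based peak search uses a generic library routine and is not necessarily the exact implementation used in this study.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_lift(com, left_hand, right_hand, fs=60, win=30):
    """Estimate lift start/end times (in seconds) from the smoothed mean
    distance between the center of mass and the two hands."""
    dist = 0.5 * (np.linalg.norm(left_hand - com, axis=1)
                  + np.linalg.norm(right_hand - com, axis=1))
    smooth = np.convolve(dist, np.ones(win) / win, mode="same")  # 30-point moving average
    peaks, props = find_peaks(smooth, prominence=0.0)
    top_two = peaks[np.argsort(props["prominences"])[-2:]]       # two most prominent peaks
    start_frame, end_frame = np.sort(top_two)
    return start_frame / fs, end_frame / fs
```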

2.4. Parameter Estimation

Certain lift parameters may be computed directly from the IMU data without using machine learning techniques. Algorithms were implemented to calculate the asymmetry angle, the vertical height of the object above the floor, the vertical displacement, and the horizontal distance between the object and the midpoint between the person’s feet. These parameters were computed using both the manually and automatically labeled start/end times.
Lift asymmetry is defined as the amount of “twisting” of the trunk required to complete a lift. To calculate this angle, the program identifies two unit vectors on the floor plane: the reference vector and the maximum twist vector. The reference vector begins at the midpoint between the participant’s feet and extends forward in the direction the person is facing. The maximum twist vector originates from the same location, but extends in the direction of the hands at the beginning or end of the lift, depending on which involves more twist. The angle between these vectors is calculated as the asymmetry of the lift. The locations of the hands and feet are obtained from the MVN Analyze software, and the direction the participant is facing is determined by finding the mean orientation of the feet. The twist amount was not precisely controlled during the experiment. The shelves were situated 45 degrees apart from each other, but participants were free to twist as much or as little as they needed. Due to the placement of the shelves, we expected the mean twist angles over all the twisting lifts to be approximately 45 degrees.
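A minimal sketch of this angle computation, assuming the floor-plane (x, y) coordinates of the feet midpoint, the facing direction, and the hand midpoint have already been extracted, is shown below. The angle would be evaluated at both the start and end of the lift, and the one with the larger magnitude reported as the lift asymmetry.

```python
import numpy as np

def asymmetry_angle(feet_mid_xy, facing_xy, hands_mid_xy):
    """Signed floor-plane angle (degrees) between the facing direction and the
    direction from the feet midpoint to the hands midpoint."""
    ref = facing_xy / np.linalg.norm(facing_xy)
    twist = hands_mid_xy - feet_mid_xy
    twist = twist / np.linalg.norm(twist)
    # atan2 of the 2D cross product and dot product gives a signed angle
    return np.degrees(np.arctan2(ref[0] * twist[1] - ref[1] * twist[0],
                                 np.dot(ref, twist)))
```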
The vertical height of the object above the floor at the beginning and ending of the lift can be estimated using the position of the hands. If we assume that the midpoint between the hands is the location of the object being carried, then the vertical height can easily be computed using the estimated height of the hands. The program extracts the location of the hands at the beginning and ending of the lift from the IMU data, then calculates the midpoint between them to estimate the location of the object. The shelf heights were fixed across all participants, so the vertical heights of the lift sources and destinations are known. To calculate reference heights, 30 cm (the approximate height of the hands relative to the bottom of the crate) was added to each of the shelf heights provided in Section 2.2. Thus, the reference heights are 30 cm for floor, 84 cm for knee, 138 cm for chest, and 187 cm for head.
Vertical displacement was computed by subtracting the vertical height of the object at the end of the lift from the height at the beginning. Reference displacements were calculated accordingly using the reference heights mentioned previously. The reference values for the six main source–destination pairs are shown in Table 3. The extra lifts from Table 2 were not processed by this algorithm, so the seven source–destination pairs unique to those lifts are not included in Table 3.
Like the vertical height estimations, the horizontal distance depends on the hand position measurements provided by the Xsens software. The horizontal distance is calculated as the distance in the floor plane from the midpoint between the participant’s feet to the object. Like asymmetry, the horizontal distance that the object was held from the body was not controlled during the experiment. Participants were allowed to hold the object at whatever distance felt comfortable to them. However, a tape measure was used to get an approximation of the horizontal distance for each of the sources/destinations. These reference distances are 0.3 m, 0.5 m, 0.45 m, and 0.45 m for the floor, knee, chest, and head levels, respectively.
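A minimal sketch of these geometric parameters for a single frame, assuming 3D positions with the z axis vertical and the object located at the midpoint of the hands, is:

```python
import numpy as np

def lift_geometry(left_hand, right_hand, left_foot, right_foot):
    """Object height above the floor and horizontal object-to-lifter distance
    for one frame; positions are 3D (x, y, z) with z vertical."""
    obj = 0.5 * (left_hand + right_hand)                 # object assumed at the hand midpoint
    feet_mid = 0.5 * (left_foot + right_foot)
    height = obj[2]                                      # vertical height above the floor
    horizontal = np.linalg.norm(obj[:2] - feet_mid[:2])  # distance in the floor plane
    return height, horizontal

# Vertical displacement then follows from the heights at the detected start and end frames.
```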

2.5. Lift Classification

The classification problem was split into four subproblems: posture, twisting direction, vertical movement, and weight. The classes for each subproblem are defined as follows.
  • Posture = stooping, squatting, neither
  • Twisting = left, right, neither
  • Vertical Movement = floor to chest, chest to floor, knee to chest, chest to knee, chest to head, head to chest
  • Weight = 3 kg, 10 kg
To make these classifications, a pool of 223 features was extracted from the lifts. The joint movements along all 3 axes for 28 joints provided by the Xsens software were included. The joint movement is computed as θ_f − θ_s, where θ_f is the angle of the joint at the end of the lift and θ_s is the angle at the beginning. The magnitudes of these joint movements (i.e., their absolute values) were included as separate features, in case they were more useful for some of the classification problems. The mean absolute velocities and accelerations of 23 body segments were also included. The remaining 9 features were the parameters estimated from the IMU data, described in Section 2.4. These included the height of the object above the floor, the horizontal distance from the lifter to the object, and the asymmetry angle for both the start and end of each lift. They also included the vertical displacement of the object, the largest-in-magnitude asymmetry angle, and the difference in asymmetry angle between the start and end of the lift. The extra lifts from Table 2 were not included in classification of vertical movement, because there were not enough samples to train the classifier on the unique vertical movement categories contained in those lifts. Like the lift parameters, classifications were made for both manually and automatically labeled start/end times.
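A minimal sketch of how such a feature pool could be assembled from pre-extracted angle and kinematic arrays is shown below; the array layouts are assumptions, and the estimated lift parameters from Section 2.4 would be appended separately.

```python
import numpy as np

def lift_features(joint_angles, seg_vel, seg_acc, start, end):
    """Assemble per-lift features from frame-by-frame arrays.

    joint_angles: (n_frames, n_joint_axes) joint angles in degrees
    seg_vel, seg_acc: (n_frames, n_axes) segment velocities and accelerations
    start, end: frame indices from lift detection
    """
    movement = joint_angles[end] - joint_angles[start]   # theta_f - theta_s for each joint axis
    features = [movement, np.abs(movement)]              # signed movements and their magnitudes
    for signal in (seg_vel, seg_acc):
        features.append(np.mean(np.abs(signal[start:end]), axis=0))  # mean absolute values
    return np.concatenate(features)
```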
With 223 features, this was a high-dimensionality problem that needed to be reduced. Two approaches to dimensionality reduction were attempted: neighborhood component analysis (NCA) [30] and principal component analysis (PCA) [31]. We tried dimensionality reduction for each subproblem, with the exception of vertical movement, and found that the preliminary classification results were better with NCA, so that is what we used for the final feature selection. The starting and ending vertical heights of the object, 2 features already included in the feature pool, proved to be sufficient for the vertical movement subproblem, so automatic feature selection was not used. For the other subproblems, NCA assigned weights to all the features based on correlation with the classification categories. Features whose weights were within the highest 2% (a relative threshold based on the highest weight in each subproblem) were tried in various combinations to find the best-performing feature vector for each subproblem. Then, some features were manually removed from these vectors because we suspected they were picking up on experimental patterns and artificially boosting the results. For example, one of the top-weighted features for the weight subproblem was the maximum twist angle of the participant’s upper body. If we consider that all low-weight lifts performed were straight lifts, it becomes clear that selecting this feature would immediately eliminate all asymmetric lifts; therefore, it was removed from the feature vector for the weight subproblem. Other features removed for the weight subproblem were the angle movement between the pelvis and the T8 vertebra along the Y axis (vertical while standing) and the horizontal start/end hand positions. The pelvis-T8 angle movement was removed for the same reason as the maximum twist angle, and the horizontal hand positions were removed because the main low-weight lifts only involved 2 vertical levels (chest and floor), which had fairly consistent horizontal distance measurements across all participants. We did not want the algorithm to classify weight based on the shelf levels used during the lift. Vertical start/end hand positions were removed from the feature vector for the posture subproblem, because the vertical position of the hands should not be an indicator of whether the participant was stooping or squatting.
Multiple classifiers were trained on each subproblem to determine the best model for each. Among the models tested were decision trees, naive Bayes, linear discriminant analysis (LDA), support vector machine (SVM), and k-nearest neighbor (KNN). Uniform prior probabilities were assumed. Each classifier was verified with subject-independent 5-fold cross-validation. In subject-independent cross-validation, the folds are created in such a way that every participant’s trials appear in exactly 1 fold. This ensures that the test data were never from the same participant as the training data. Because there were 24 participants, 1 of the folds contained 4 participants instead of 5. The results from the best classifiers are reported in Section 3. For all subproblems, the best results were obtained with KNN classifiers. However, KNN was deemed an inappropriate classifier to use for the weight subproblem, as the ratio of 3 kg lifts to 10 kg lifts is very low (4:26) and the feature spaces overlap severely. If KNN were to be used, the classification would almost certainly be biased toward 10 kg. For this reason, a naive Bayes classifier was chosen for the weight subproblem.
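A minimal sketch of subject-independent cross-validation, using scikit-learn's GroupKFold with participant identifiers as groups and placeholder data in place of the real feature matrix, is:

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 7))               # placeholder feature matrix (e.g., posture subproblem)
y = rng.integers(0, 3, size=120)            # placeholder class labels
subject_ids = np.repeat(np.arange(24), 5)   # placeholder participant IDs, one group per participant

# Grouping by participant guarantees that no participant appears in both the
# training and test folds of any split.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y,
                         groups=subject_ids, cv=GroupKFold(n_splits=5))
print(scores.mean())
```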

3. Results

All of the lifting trials recorded (shown in Table 2) were used for parameter estimation and classification. There were no trials lost to data corruption, participant fatigue, etc. Raw data from all participants are available as Supplementary Materials.

3.1. Lift Detection

The lift detection algorithm estimated both the start and end times of each lift. The times reported are the number of seconds since the beginning of the recording. Error was calculated as T_est − T_actual, where T_est is the estimated lift start/end time and T_actual is the manually labeled start/end time. The error for start time estimates across all 1489 lift trials was 0.10 s ± 0.21 s. The error for end time was 0.36 s ± 0.27 s. To put this into perspective, the lift duration was 2.52 s ± 0.56 s and the trial duration was 7.74 s ± 1.09 s. The algorithm always detected a lift (i.e., there were no false negatives). This was expected, as it simply detects peaks in the distance from center of mass to hands during the trials.

3.2. Parameter Estimation

3.2.1. Asymmetry

Table 4 shows the asymmetry measurement for each lift type (no twist, twist left, and twist right) with their respective errors.
The mean twist amount for the straight lifts is very close to 0. The means for the left and right asymmetric lifts are larger in magnitude than the expected values.

3.2.2. Vertical Height and Displacement

The algorithm was used to estimate the vertical height of the object at the beginning and ending of every lift. Table 5 summarizes the vertical height measurements. The results of the vertical displacement estimation are shown in Table 6.

3.2.3. Horizontal Distance

Table 7 compares the measured horizontal distances to the reference values.

3.3. Lift Classification

Optimal features were selected for each classification subproblem, as described in Section 2.5. Seven features were selected for posture, three for asymmetry, three for vertical movement, and three for weight:
  • Posture: absolute angle between the vertical plane and the participant’s T8 vertebra, absolute joint movements of the left and right knees on the flexion–extension axis, absolute joint movements of the left and right shoulders on the flexion–extension axis, and absolute joint movements of the left and right hips on the flexion–extension axis.
  • Asymmetry: difference in twist angle from the beginning to the end of the lift, the largest twist angle during the lift, and the T9 to T8 joint movement along the X axis, which extends forward in the direction the person is facing.
  • Vertical Movement: vertical displacement of the object, average vertical starting position of the left and right hands, and the average vertical ending position of the left and right hands.
  • Weight: average absolute velocity of the left and right hands and the average absolute velocity of the head.
Table 8 summarizes the classification results for the four subproblems. Classification results for both automatically and manually labeled start/end times are included.

4. Discussion

4.1. Discussion of Results

4.1.1. Lift Detection

The lift detection algorithm is accurate when participants do nothing but the lift during the recording. Spontaneous movements, such as waving a hand or reaching down to tie shoelaces, have the potential to cause false positives. In this experiment, the only cause of this was when participants over-anticipated the signal to begin the lift and began reaching for the object early, only to stop before grabbing it. In these lifts, the participants moved their arms toward the object early and held them there until instructed to begin the lift, creating a large plateau in the distance from center of mass to the hands. The algorithm does not have any way of identifying these false starts, so the beginnings of the lifts were occasionally mislabeled. In a few of the trials, this led to extremely large errors. The worst of these is shown in Figure 5. In this example, the beginning of the lift was detected at the false start and the lift did not actually begin until 3.68 s later. Because there was not a prominent local maximum at the actual start time, the algorithm was able to correctly identify the end time. For reference, the actual lift was only 3.12 s long.

4.1.2. Parameter Estimation

Based on the asymmetry estimation results in Table 4, it appears that the algorithm has a tendency to slightly overestimate twist angles. This is likely because the exact twist amount participants were required to use was not controlled. When performing asymmetric lifts, participants could also translate the object from side to side with their hands, effectively increasing the twist range. Subjects may have tended to place the object closer to the center of the offset shelf during asymmetric lifts, which would have increased the asymmetry measurement. Overall, the twist angle measurements are very precise and consistent across all participants, which makes this measurement a viable option for real-time characterization.
The vertical height estimates are very precise, as shown by the low standard deviations in Table 5. However, they lack accuracy at the extremes: the Xsens software tends to overestimate the height of the hands when a person is bending over and underestimate it when a person is reaching above their head, possibly due to limitations in the Xsens musculoskeletal body model. In the middle ranges (knee and chest), the vertical height estimates are quite accurate. The hand IMUs were placed on the backs of the participants’ hands, and the participants were able to rotate their wrists freely, which may have also contributed to some of the error.
Due to the tendencies of the algorithm to underestimate object positions at the extremes, the absolute displacement is generally lower than expected for low-to-high lifts and higher than expected for high-to-low lifts. The standard deviation of the absolute error is low, as shown in Table 6, which means the method is at least consistent.
There was significant variation in the horizontal distance measurements across trials. This was expected, as participants were able to hold the crate at any distance they desired. Because of this, it is impossible to determine how much of the error was due to lift variation and how much was due to the method and sensors.
Overall, the parameter estimation algorithms performed very similarly between the manually labeled and automatically labeled lifts. Of course, the cases where the lift detection algorithm mislabeled a start or end have very poor parameter estimation results. To counter this, it could be possible to use additional features, such as inertial coupling between the hands [24], as a redundant check to make sure that a lift is actually occurring between the detected start and end times. If one is not, the algorithm can strategically search for better times.

4.1.3. Lift Classification

The classification accuracies from Table 8 for the main three subproblems (posture, asymmetry, and vertical movement) are quite high. By comparison, the weight classifier performed poorly. Part of the problem could be that there were not enough samples of low-weight lifts in the training data. The biggest issue, however, was most likely that distinguishing between 3 and 10 kg weights with only IMUs is very difficult, and there are probably no accurate features for this classification. Because so few low-weight lifts were performed, and they were the same lifts for each participant, it is very easy for a classification algorithm to overfit to the training data. We attempted to mitigate this by removing “bad” features from the automatically selected features, but it is possible that the algorithm still shows some bias. Better features could exist that were not in the feature pool, but we chose not to pursue this further for this study because it was not the main goal, and we did not believe we would be able to reliably distinguish 3 kg from 10 kg using only motion data. Another possibility for improving classifier performance across all subproblems is to use ensemble classifiers, which combine the results from multiple classification algorithms to produce one (potentially better) classification [32].
It is worth mentioning that our method is subject-nonspecific. That is, the classification algorithms are trained on potentially different participants than they are used on. This is desirable in many cases, because it eliminates the need for training data to be collected from each individual participant before use, and therefore makes the product more widely marketable. However, subject-specific classifiers can often obtain better results with less data than their subject-nonspecific counterparts [26].

4.2. Conversion to Real-Time

A major limitation of the methods developed in this study is that they depend on presegmentation of the trials, which would not be possible in real-time applications. For lift detection, we assumed that there was only 1 lift per trial and that the hands always started from the same position (relaxed, at the participants’ sides). For cases where multiple lifts are performed back-to-back (e.g., a person lifts a box from the floor to a shelf and then back to the floor), modifications would have to be made to ensure both lifts are detected. The biggest challenge in adapting the algorithms for real-time use would be identifying the start and end of a lift amidst a continuous stream of data. Lu et al. demonstrated one plausible method for accomplishing this using a sliding pattern recognition window [24]. Once that problem is solved, the analysis could be done the same way it is done offline. This would work fine for health monitoring applications, where lifting data are collected throughout the day and compiled into a report, but not for guiding assistive devices, which need classification results before the lift is finished so they can assist the user properly. For these applications, classification will need to be started as soon as possible and updated periodically throughout the lift. Posture, for example, could be classified in real time by checking each individual frame for the conditions of stooping and squatting and reporting the most likely category along with a confidence value. This would allow an assistive device to decide which action it should take to help the user with the lift.
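As one possible framing (an assumption, not the method developed in this study), a streaming detector could maintain a short buffer of the center-of-mass-to-hands distance and flag frames in which that distance rises well above its recent baseline; the window length and threshold below are placeholders.

```python
import numpy as np
from collections import deque

class SlidingLiftMonitor:
    """Keep a short buffer of the center-of-mass-to-hands distance and flag
    frames in which it rises well above the recent baseline."""

    def __init__(self, fs=60, window_s=2.5, threshold_m=0.10):
        self.buffer = deque(maxlen=int(fs * window_s))
        self.threshold_m = threshold_m

    def update(self, com, left_hand, right_hand):
        dist = 0.5 * (np.linalg.norm(left_hand - com) + np.linalg.norm(right_hand - com))
        self.buffer.append(dist)
        baseline = float(np.median(list(self.buffer)))
        return dist - baseline > self.threshold_m   # True: a lift may be in progress
```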

4.3. Applications

Lift characterization using IMUs has many potential applications. It could be useful for health monitoring, where lifters’ behaviors are analyzed over an extended period of time to obtain statistics about the types of lifts they perform. In a factory setting, this could be used to engineer safer processes and to help train workers on the correct procedures [2]. It could also be used to help patients suffering from musculoskeletal disorders recover or avoid further injury, especially if their occupations require frequent manual lifting. If a real-time algorithm were developed, the lift characterization data could be used to control assistive devices. For instance, an exoskeleton could be provided with lift information in order to properly support the user during lifting tasks, thereby increasing the effectiveness of such a device [3]. Such intent-detecting controllers have been designed for assistive devices before, such as Parri et al.’s whole-body awareness controller for an active transfemoral prosthesis [13]. Their system uses lower limb kinematics computed from inertial sensors to recognize eight different behaviors: quiet standing, quiet sitting, step-by-step stair ascent, walking (all the gait phases), sitting down, standing up, initiating walking, and terminating walking. The state of the user is decoded from the sensor signals, then used in a finite-state machine to drive the device actuators and make the task easier (or possible). Currently, the controller does not recognize lifting activity, so real-time versions of the lifting algorithms developed in this study could be implemented in the future. The independent phases of the lift (e.g., grasping, releasing, entering squat, and standing back up from squat) could be detected and used to control the actuators accordingly. This would open up many new possibilities for active assistive devices.
As the IMU motion tracking system and the presented algorithms were calibrated/trained only on healthy young adults, recalibration/retraining may be necessary for other participant samples. For instance, the posture and hand positions of older individuals who exhibit stooped posture may be inaccurately measured by these methods. This limits the use of these algorithms in general applications, where users may vary greatly in physical capability. Future work could focus on generalizing these algorithms for practical applications.
In this study, we assumed that both hands are used to perform a lift. However, many lifts can be performed with a single hand, such as picking up a kettlebell, as was done by Brandt et al. [23]. Most of the presented algorithms would not work properly for single-handed lifts, as most of them depend on the mean of the positions of the hands. This limits the applications of the methods in their current form, but minor modifications could be made to automatically detect which hand is being used for the lift and only take that hand into account.

5. Conclusions

This paper builds on previous studies by demonstrating the viability of lift detection and characterization using only wearable inertial sensors. With data pre-segmented into individual lifting trials, lift detection can be achieved using an algorithm that utilizes the estimated distance from the participant’s hands to their center of mass. Our algorithm had a start time error of 0.10 s ± 0.21 s and an end time error of 0.36 s ± 0.27 s. Once the beginning and ending of lifts have been identified, estimates of lifting parameters, such as vertical displacement, vertical starting and ending positions, horizontal distance, and asymmetry angle, can all be computed. Classification algorithms can also be used to classify the posture, twist direction, and vertical movement with very high accuracy, while the weight of the object is quite difficult to classify using only IMUs. The classifiers in this study achieved accuracies of 96.78%, 98.32%, 97.28%, and 64.21% for posture, asymmetry, vertical movement, and weight, respectively, with automatic lift detection.
Although these algorithms were developed for offline lift classification, there is potential to expand them to online in the future so that they may be used for applications such as real-time health monitoring and assistive device control. The biggest obstacle to overcome in this regard is detecting the beginning and ending of lifts in real-time. Therefore, further work must be done in real-time detection of lifts using wearable inertial sensors.

Supplementary Materials

The IMU data collected for this study are available online at http://0-doi-org.brum.beds.ac.uk/10.5281/zenodo.3724998.

Author Contributions

Conceptualization, D.N.; methodology, S.D.H. and D.N.; software, S.D.H.; validation, S.D.H. and D.N.; formal analysis, S.D.H.; investigation, S.D.H.; resources, D.N.; data curation, S.D.H.; writing—original draft preparation, S.D.H.; writing—review and editing, D.N.; visualization, S.D.H.; supervision, D.N.; project administration, D.N.; funding acquisition, D.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Institute of General Medical Sciences of the National Institutes of Health under grant no. 2P20GM103432 as well as by the National Science Foundation under grant no. 1933409.

Acknowledgments

We would like to thank Boyi Dai for his excellent advice on human kinesiology and for his support of this project, as well as Maja Goršič for her assistance with conceptualization and experiment design.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GPS: Global positioning system
IMU: Inertial measurement unit
KNN: K-nearest neighbor
LDA: Linear discriminant analysis
NCA: Neighborhood component analysis
NIOSH: National Institute for Occupational Safety and Health
PCA: Principal component analysis
RNLE: Revised NIOSH lifting equation
sEMG: Surface electromyography
SVM: Support vector machine

References

  1. Karwowski, W. Handbook of Standards and Guidelines in Ergonomics and Human Factors; CRC Press: Boca Raton, FL, USA, 2005.
  2. Waters, T.R.; Putz-Anderson, V.; Garg, A.; Fine, L.J. Revised NIOSH equation for the design and evaluation of manual lifting tasks. Ergonomics 1993, 36, 749–776.
  3. Toxiri, S.; Koopman, A.S.; Lazzaroni, M.; Ortiz, J.; Power, V.; de Looze, M.P.; O’Sullivan, L.; Caldwell, D.G. Rationale, implementation and evaluation of assistive strategies for an active back-support exoskeleton. Front. Robot. AI 2018, 5, 53.
  4. Chen, B.; Grazi, L.; Lanotte, F.; Vitiello, N.; Crea, S. A real-time lift detection strategy for a hip exoskeleton. Front. Neurorobot. 2018, 12, 1–11.
  5. Gorsic, M.; Regmi, Y.; Johnson, A.P.; Dai, B.; Novak, D. A pilot study of varying thoracic and abdominal compression in a reconfigurable trunk exoskeleton during different activities. IEEE Trans. Biomed. Eng. 2019, 9294, 1.
  6. Lopez-Nava, I.H.; Angelica, M.M. Wearable inertial sensors for human motion analysis: A review. IEEE Sens. J. 2016, 16, 7821–7834.
  7. Gohar, I.; Riaz, Q.; Shahzad, M.; Hashmi, M.Z.U.H.; Tahir, H.; Ul Haq, M.E. Person re-identification using deep modeling of temporally correlated inertial motion patterns. Sensors 2020, 20, 949.
  8. Novak, D.; Reberšek, P.; De Rossi, S.M.M.; Donati, M.; Podobnik, J.; Beravs, T.; Lenzi, T.; Vitiello, N.; Carrozza, M.C.; Munih, M. Automated detection of gait initiation and termination using wearable sensors. Med. Eng. Phys. 2013, 35, 1713–1720.
  9. Novak, D.; Goršič, M.; Podobnik, J.; Munih, M. Toward real-time automated detection of turns during gait using wearable inertial measurement units. Sensors 2014, 14, 18800–18822.
  10. Seel, T.; Raisch, J.; Schauer, T. IMU-based joint angle measurement for gait analysis. Sensors 2014, 14, 6891–6909.
  11. Qiu, S.; Wang, H.; Li, J.; Zhao, H.; Wang, Z.; Wang, J.; Wang, Q.; Plettemeier, D.; Bärhold, M.; Bauer, T.; et al. Towards wearable-inertial-sensor-based gait posture evaluation for subjects with unbalanced gaits. Sensors 2020, 20, 1193.
  12. Schlachetzki, J.C.; Barth, J.; Marxreiter, F.; Gossler, J.; Kohl, Z.; Reinfelder, S.; Gassner, H.; Aminian, K.; Eskofier, B.M.; Winkler, J.; et al. Wearable sensors objectively measure gait parameters in Parkinson’s disease. PLoS ONE 2017, 12, e0183989.
  13. Parri, A.; Martini, E.; Geeroms, J.; Flynn, L.; Pasquini, G.; Crea, S.; Lova, R.M.; Lefeber, D.; Kamnik, R.; Munih, M.; et al. Whole body awareness for controlling a robotic transfemoral prosthesis. Front. Neurorobot. 2017, 11, 1–14.
  14. Howcroft, J.; Kofman, J.; Lemaire, E.D. Review of fall risk assessment in geriatric populations using inertial sensors. J. Neuroeng. Rehabil. 2013, 10, 91.
  15. Filippeschi, A.; Schmitz, N.; Miezal, M.; Bleser, G.; Ruffaldi, E.; Stricker, D. Survey of motion tracking methods based on inertial sensors: A focus on upper limb human motion. Sensors 2017, 17, 1257.
  16. Bai, L.; Pepper, M.G.; Yan, Y.; Spurgeon, S.K.; Sakel, M.; Phillips, M. Quantitative assessment of upper limb motion in neurorehabilitation utilizing inertial sensors. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 23, 232–243.
  17. Duc, C.; Farron, A.; Pichonnaz, C.; Jolles, B.M.; Bassin, J.P.; Aminian, K. Distribution of arm velocity and frequency of arm usage during daily activity: Objective outcome evaluation after shoulder surgery. Gait Posture 2013, 38, 247–252.
  18. Goršič, M.; Cikajlo, I.; Goljar, N.; Novak, D. A multisession evaluation of an adaptive competitive arm rehabilitation game. J. Neuroeng. Rehabil. 2017, 14, 128.
  19. de Magalhaes, F.A.; Vannozzi, G.; Gatta, G.; Fantozzi, S. Wearable inertial sensors in swimming motion analysis: A systematic review. J. Sport. Sci. 2015, 33, 732–745.
  20. Blair, S.; Duthie, G.; Robertson, S.; Hopkins, W.; Ball, K. Concurrent validation of an inertial measurement system to quantify kicking biomechanics in four football codes. J. Biomech. 2018, 73, 24–32.
  21. Schepers, M.; Giuberti, M.; Bellusci, G. Xsens MVN: Consistent Tracking of Human Motion Using Inertial Sensing; Xsens Technologies Technical Report; Xsens: Enschede, The Netherlands, 2018.
  22. von Marcard, T.; Rosenhahn, B.; Black, M.J.; Pons-Moll, G. Sparse inertial poser: Automatic 3D human pose estimation from sparse IMUs. Eurographics Symp. Geom. Process. 2017, 36, 349–360.
  23. Brandt, M.; Madeleine, P.; Samani, A.; Jakobsen, M.D.; Skals, S.; Vinstrup, J.; Andersen, L.L. Accuracy of identification of low or high risk lifting during standardised lifting situations. Ergonomics 2018, 61, 710–719.
  24. Lu, M.L.; Feng, S.; Hughes, G.; Barim, M.S.; Hayden, M.; Werren, D.; Vieira, E. Development of an algorithm for automatically assessing lifting risk factors using inertial measurement units. In Proceedings of the Human Factors and Ergonomics Society 2019 Annual Meeting, Seattle, WA, USA, 28 October–2 November 2019; Volume 63, pp. 1334–1338.
  25. Barim, M.S.; Lu, M.L.; Feng, S.; Hughes, G.; Hayden, M.; Werren, D. Accuracy of an algorithm using motion data of five wearable IMU sensors for estimating lifting duration and lifting risk factors. In Proceedings of the Human Factors and Ergonomics Society 2019 Annual Meeting, Seattle, WA, USA, 28 October–2 November 2019; Volume 63, pp. 1105–1111.
  26. O’Reilly, M.A.; Whelan, D.F.; Ward, T.E.; Delahunt, E.; Caulfield, B.M. Classification of deadlift biomechanics with wearable inertial measurement units. J. Biomech. 2017, 58, 155–161.
  27. Xsens Technologies. Xsens MVN User Manual: MV0319P; Revision Y; Xsens Technologies: Enschede, The Netherlands, 2019.
  28. Kuschan, J.; Schmidt, H.; Krüger, J. Analysis of ergonomic and unergonomic human lifting behaviors by using inertial measurement units. Curr. Dir. Biomed. Eng. 2017, 3, 7–10.
  29. Maki, B.E.; McIlroy, W.E. Cognitive demands and cortical control of human balance-recovery reactions. J. Neural Transm. 2007, 114, 1279–1296.
  30. Yang, W.; Wang, K.; Zuo, W. Neighborhood component feature selection for high-dimensional data. J. Comput. 2012, 7, 162–168.
  31. Wold, S.; Esbensen, K.; Geladi, P. Principal component analysis. Chemom. Intell. Lab. Syst. 1987, 2, 37–52.
  32. Rokach, L. Ensemble-based classifiers. Artif. Intell. Rev. 2010, 33, 1–39.
Figure 1. Participant wearing the Xsens Link system while performing a lift.
Figure 2. Tape lines on the floor for start/end crate alignment. The tape line marked S was the start/end point for straight lifts and was parallel to and 56 cm away from the main shelf (white). TL was the starting point for lifts in which the participant twisted left while lifting from the floor to the shelf and was parallel to and 42 cm away from the main shelf. TR was the starting point for lifts in which the participant twisted right while lifting from the floor to the shelf and was parallel to and 42 cm away from the secondary shelf (brown).
Figure 3. Crate aligned with a tape mark at the beginning of a twisting lift.
Figure 4. Lift start/end detection using the distance from the center of mass to the hands.
Figure 5. Lift start/end detection using the distance from the center of mass to the hands. The start time was mislabeled by the lift detection algorithm in this lift due to the participant reaching for the crate early.
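Figures 4 and 5 illustrate how the start and end of a lift can be found from the distance between the body's center of mass and the hands. The following is a minimal, illustrative sketch of one way such a threshold-based detector could be written; the sampling rate, distance threshold, and minimum hold time are placeholder assumptions, not the values used in this study.

```python
import numpy as np

def detect_lift_bounds(com_to_hands, fs=60.0, threshold=0.45, min_hold=0.25):
    """Illustrative threshold-based lift start/end detection.

    com_to_hands: 1-D array of distances (m) from the center of mass to the
        midpoint of the hands, one sample per frame.
    fs: sampling rate in Hz (placeholder value).
    threshold: distance (m) above which the hands are assumed to be reaching
        for or holding the crate (placeholder value).
    min_hold: minimum time (s) the distance must stay above the threshold to
        count as a lift, rejecting brief reaches like the one in Figure 5.
    Returns a list of (start_index, end_index) pairs, one per detected lift.
    """
    above = np.asarray(com_to_hands) > threshold
    min_samples = int(min_hold * fs)
    bounds, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                      # candidate lift start
        elif not flag and start is not None:
            if i - start >= min_samples:   # long enough to be a real lift
                bounds.append((start, i))
            start = None                   # otherwise discard the candidate
    if start is not None and len(above) - start >= min_samples:
        bounds.append((start, len(above) - 1))
    return bounds

# Example with synthetic data: the hands move away from the body for ~1.5 s.
if __name__ == "__main__":
    t = np.arange(0, 5, 1 / 60.0)
    distance = 0.3 + 0.3 * ((t > 2.0) & (t < 3.5))
    print(detect_lift_bounds(distance))
```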
Table 1. The body positions of the Xsens IMU trackers.
Location | Position
Feet | Middle of bridge of both feet
Lower legs | Flat on the shin bones (medial surface of the tibia)
Upper legs | Lateral sides above knees
Pelvis | Flat on sacrum
Sternum | Flat, in the middle of the chest
Shoulders | On the scapula (shoulder blades)
Upper arms | Lateral sides above elbows
Forearms | Lateral and flat side of the wrists
Hands | Backsides of both hands
Head | On the back of the head (held on with headband)
Table 2. All combinations of source, destination, posture, and twist direction used in the experiment, along with the number of times each was planned and the number of times each actually occurred across all 24 experiments. The table is organized into main lifts (the 30 lifts of primary interest), extra lifts (lifts designed specifically to serve as transitions), and unplanned lifts (lifts performed by mistake that do not necessarily hinder the results). Although 48 occurrences of each main lift were required (2 per participant), some were planned additional times because they also served as transition lifts for some participants. Extra lifts were performed only when necessary as a transition between two main lifts.
Main Lifts
No. | Source | Destination | Posture | Twist Direction | Weight (kg) | Planned | Occurred
1 | floor | chest | squatting | straight | 10 | 51 | 51
2 | floor | chest | stooping | straight | 10 | 50 | 48
3 | knee | chest | squatting | straight | 10 | 51 | 48
4 | knee | chest | stooping | straight | 10 | 50 | 50
5 | chest | head | - | straight | 10 | 48 | 46
6 | floor | chest | squatting | straight | 3 | 48 | 48
7 | floor | chest | stooping | straight | 3 | 48 | 50
8 | chest | floor | squatting | straight | 10 | 48 | 48
9 | chest | floor | stooping | straight | 10 | 51 | 48
10 | chest | knee | squatting | straight | 10 | 50 | 48
11 | chest | knee | stooping | straight | 10 | 49 | 49
12 | head | chest | - | straight | 10 | 48 | 46
13 | chest | floor | squatting | straight | 3 | 48 | 48
14 | chest | floor | stooping | straight | 3 | 48 | 50
15 | floor | chest | squatting | left | 10 | 49 | 49
16 | floor | chest | stooping | left | 10 | 48 | 48
17 | floor | chest | squatting | right | 10 | 48 | 47
18 | floor | chest | stooping | right | 10 | 48 | 48
19 | knee | chest | squatting | left | 10 | 49 | 51
20 | knee | chest | stooping | left | 10 | 48 | 48
21 | knee | chest | squatting | right | 10 | 48 | 48
22 | knee | chest | stooping | right | 10 | 49 | 49
23 | chest | floor | squatting | left | 10 | 48 | 48
24 | chest | floor | stooping | left | 10 | 48 | 48
25 | chest | floor | squatting | right | 10 | 48 | 48
26 | chest | floor | stooping | right | 10 | 48 | 48
27 | chest | knee | squatting | left | 10 | 48 | 48
28 | chest | knee | stooping | left | 10 | 48 | 47
29 | chest | knee | squatting | right | 10 | 48 | 50
30 | chest | knee | stooping | right | 10 | 48 | 48
Extra Lifts
31 | knee | floor | squatting | straight | 10 | 1 | 1
32 | knee | floor | stooping | straight | 10 | 1 | 1
33 | floor | head | squatting | straight | 10 | 1 | 1
34 | floor | knee | stooping | straight | 10 | 4 | 4
35 | head | knee | - | straight | 10 | 1 | 1
36 | chest | chest | - | left | 10 | 4 | 4
37 | chest | chest | - | right | 10 | 9 | 9
38 | floor | knee | squatting | left | 10 | 2 | 2
39 | floor | knee | squatting | right | 10 | 1 | 1
40 | knee | floor | stooping | left | 10 | 1 | 1
41 | knee | knee | squatting | left | 10 | 1 | 1
42 | floor | floor | squatting | left | 10 | 1 | 1
43 | floor | floor | stooping | left | 10 | 1 | 1
44 | floor | knee | stooping | right | 10 | 1 | 1
45 | knee | knee | stooping | right | 10 | 1 | 1
Unplanned Lifts
46 | head | chest | - | straight | 3 | 0 | 2
47 | chest | head | - | straight | 3 | 0 | 2
48 | knee | chest | squatting | straight | 3 | 0 | 2
49 | chest | knee | squatting | straight | 3 | 0 | 2
Table 3. The vertical displacement reference values in meters.
Floor to Chest | Chest to Floor | Knee to Chest | Chest to Knee | Chest to Head | Head to Chest
1.08 | −1.08 | 0.54 | −0.54 | 0.49 | −0.49
Table 4. The asymmetry angles and errors measured across all 1489 lifts, separated by twist direction. Twists to the left (counterclockwise when viewed from above) are positive angles, while twists to the right are negative. Error was calculated as the estimated angle minus the reference angle. All values are reported in degrees.
Twist Direction | Expected | Measured (Manually Labelled) | Mean Error (Manually Labelled) | Measured (Automatically Labelled) | Mean Error (Automatically Labelled)
no twist | 0 | 0.4 ± 14.1 | 0.4 | 0.7 ± 15.3 | 0.7
twist left | 45 | 51.8 ± 11.8 | 6.8 | 51.9 ± 11.2 | 6.9
twist right | −45 | −51.7 ± 11.7 | −6.7 | −51.8 ± 12.0 | −6.8
Table 5. The vertical height of the object was estimated for the beginning and ending of every lift. The error was calculated as the estimated height minus the reference height. The measured values and error are shown for each of the 4 possible vertical levels. All values shown are in meters.
Level | Expected | Measured (Manually Labelled) | Mean Error (Manually Labelled) | Measured (Automatically Labelled) | Mean Error (Automatically Labelled)
floor | 0.300 | 0.452 ± 0.066 | 0.152 | 0.514 ± 0.089 | 0.214
knee | 0.840 | 0.845 ± 0.049 | 0.005 | 0.861 ± 0.053 | 0.021
chest | 1.380 | 1.283 ± 0.043 | −0.097 | 1.286 ± 0.045 | −0.094
head | 1.870 | 1.641 ± 0.056 | −0.229 | 1.640 ± 0.055 | −0.230
Table 6. The vertical displacement estimates and errors for each lift type. The vertical displacement error is calculated as v_est − v_actual, where v_est is the estimated vertical displacement and v_actual is the actual vertical displacement. Distances are displayed in meters.
Vertical Movement | Expected | Measured (Manually Labelled) | Mean Error (Manually Labelled) | Measured (Automatically Labelled) | Mean Error (Automatically Labelled)
floor to chest | 1.08 | 0.832 ± 0.070 | −0.248 | 0.796 ± 0.077 | −0.284
chest to floor | −1.08 | −0.836 ± 0.074 | 0.244 | −0.752 ± 0.109 | 0.328
knee to chest | 0.54 | 0.432 ± 0.041 | −0.109 | 0.433 ± 0.042 | −0.107
chest to knee | −0.54 | −0.451 ± 0.041 | 0.089 | −0.426 ± 0.047 | 0.114
chest to head | 0.49 | 0.384 ± 0.026 | −0.106 | 0.379 ± 0.027 | −0.111
head to chest | −0.49 | −0.395 ± 0.043 | 0.095 | −0.404 ± 0.062 | 0.086
Table 7. The horizontal distance between the participant and the object was estimated at the beginning and ending of every lift. Error is calculated as h_est − h_actual, where h_est is the estimated horizontal distance and h_actual is the reference horizontal distance. Distances are displayed in meters.
Level | Expected | Measured (Manually Labelled) | Mean Error (Manually Labelled) | Measured (Automatically Labelled) | Mean Error (Automatically Labelled)
floor | 0.300 | 0.248 ± 0.081 | −0.052 | 0.244 ± 0.078 | −0.056
knee | 0.500 | 0.500 ± 0.078 | 0.000 | 0.483 ± 0.077 | −0.017
chest | 0.450 | 0.592 ± 0.094 | 0.142 | 0.575 ± 0.087 | 0.125
head | 0.450 | 0.559 ± 0.087 | 0.109 | 0.554 ± 0.074 | 0.104
Table 8. Classification results for the 4 subproblems: posture, asymmetry, vertical movement, and weight. As the feature extraction process depended on the estimated start/end times, classification results for both automatically and manually labeled start/end times are included.
Subproblem | Number of Classes | Accuracy (Manually Labelled) | Accuracy (Auto-Labelled) | Classification Method
Posture | 3 | 97.18% | 96.78% | KNN, k = 30
Asymmetry | 3 | 98.27% | 98.32% | KNN, k = 10
Vertical Movement | 6 | 99.60% | 97.28% | KNN, k = 10
Weight | 2 | 65.74% | 64.21% | Naive Bayes
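Table 8 summarizes the classifiers used for the four subproblems. As a rough illustration of this setup (not the authors' actual pipeline), the sketch below applies k-nearest-neighbor and naive Bayes classifiers to placeholder per-lift feature vectors using scikit-learn; only the classifier types and k values are taken from Table 8, while the feature matrix, labels, and cross-validation scheme are assumptions for demonstration.

```python
# Illustrative sketch of the Table 8 classification setup using scikit-learn.
# The feature matrix and labels below are random placeholders; in the study,
# features would be extracted from the IMU data for each detected lift.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_lifts, n_features = 1489, 20               # 1489 lift trials, placeholder feature count
X = rng.normal(size=(n_lifts, n_features))   # placeholder per-lift feature vectors

# Placeholder labels; each subproblem would use its own ground-truth labels.
labels = {
    "posture (3 classes)": rng.integers(0, 3, n_lifts),
    "asymmetry (3 classes)": rng.integers(0, 3, n_lifts),
    "vertical movement (6 classes)": rng.integers(0, 6, n_lifts),
    "weight (2 classes)": rng.integers(0, 2, n_lifts),
}
classifiers = {
    "posture (3 classes)": KNeighborsClassifier(n_neighbors=30),
    "asymmetry (3 classes)": KNeighborsClassifier(n_neighbors=10),
    "vertical movement (6 classes)": KNeighborsClassifier(n_neighbors=10),
    "weight (2 classes)": GaussianNB(),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, labels[name], cv=5)
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```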
