Article

Detection of Participation and Training Task Difficulty Applied to the Multi-Sensor Systems of Rehabilitation Robots

1. Parallel Robot and Mechatronic System Laboratory of Hebei Province, Yanshan University, Qinhuangdao 066004, China
2. Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
3. Robotics and Mechatronics Department, Institute of Solid Mechanics of Romanian Academy, 010141 Bucharest, Romania
* Authors to whom correspondence should be addressed.
Submission received: 3 September 2019 / Revised: 14 October 2019 / Accepted: 23 October 2019 / Published: 28 October 2019

Abstract:
In the process of rehabilitation training for stroke patients, the rehabilitation effect is positively affected by how much physical activity the patient takes part in. Most of the signals used to measure patient participation are EMG signals or oxygen consumption, which increase the cost and complexity of the robotic device. In this work, we design a multi-sensor robot system with torque sensors and a six-dimensional force sensor to gauge the patient's participation in training. By establishing the static equation of the mechanical leg, the man–machine interaction force of the patient can be accurately extracted. Using an impedance model, an assist-force training mode is established, and the difficulty of the target task is changed by adjusting the assist-force parameter K. Participation models with three intensities were developed offline using support vector machines (SVM), whose C and σ parameters are optimized by a hybrid quantum particle swarm optimization and support vector machine (Hybrid QPSO-SVM) algorithm. An experimental statistical analysis was conducted on ten volunteers' motion representations in training tasks of different difficulty, divided into three stages: over-challenge, challenge, and under-challenge. Characteristic quantities with significant differences among the task-difficulty stages were chosen as the training set for the SVM. Experimental results from twelve volunteers, with tasks conducted on the lower limb rehabilitation robot LLR-II, show that the rehabilitation robot can accurately predict patient participation and training task difficulty. The prediction accuracy reflects the superiority of the Hybrid QPSO-SVM algorithm.

1. Introduction

Neuromuscular injuries such as stroke and spinal cord injury can lead to disability or impaired movement and have become important problems worldwide [1]. There are currently more than 33 million stroke patients in the world [2]; the mortality rate is as high as 80%, and 75% of the survivors are disabled [3]. The necessity of developing rehabilitation robots has made them one of the research hotspots in the world [4,5]. As a robot in direct contact with the patient, a rehabilitation robot shoulders the responsibility of helping the patient recover smoothly and safely, so the human–computer interaction strategy, including energy interaction and role-distribution control, is very important [6].
Clear detection of human–computer interaction and patient intention is the basis of flexible robot control. Most rehabilitation robots use force sensors to feed back mechanical information from patients, such as those of Hongbing Tao [7] and Victor G [8]. Hwang et al. judged human motion intention by collecting data from pressure sensors placed at the contact points between a standing-posture rehabilitation robot and stroke patients [9]. Wolf S et al. connected an elastic element in series with the driving part and named it the series elastic actuator; by detecting the deformation of the elastic element, the joint moment can be detected and the motion intention of the patient can be judged [10]. Kim et al. used only one pressure sensor to realize robot assistance during motion of the patient's elbow joint [11]. Some systems rely on current changes of the joint motors to detect motion intentions, such as that of Kim [12]. A few researchers use surface EMG (electromyography) signals and EEG (electroencephalogram) signals to predict patients' motor intentions [13,14], such as Edward [15], Magdalena [16], and Tang [17]. Yepes et al. use electromyogram signals to determine the required moment of the knee joint [18]. Some researchers apply EMG signals to motion modal recognition for prostheses [19] and cooperative robots [20]. All these measurement methods have their own advantages, but EEG is susceptible to noise interference [21], and the EMG signal is not easy to collect when the skin surface changes [22,23]. Force/torque sensing, by contrast, increases neither the cost of the robot nor the complexity of the system. In this paper, joint torque sensors and a six-dimensional force sensor on the sole of the foot are used as the hardware detection system.
Meanwhile, the rehabilitation effect is related not only to scientific rehabilitation training methods and reasonable training planning but also, to a great extent, to the patient's active participation and active movement intention, as has been proven by clinical studies [24]. In order to improve the active participation of patients, it is necessary to provide assistance according to the interaction situation during training [25] and to maximize the tasks patients perform independently. Introducing patients' movement characteristics and physical fitness into the control strategy has a positive impact on the rehabilitation effect [26]. Yatsenko et al. controlled and adjusted the movement speed of a robotic arm according to the amplitude ratio of the EMG signal of the affected limb [27], and the patient could quickly adapt to controlling the movement of the prosthesis [28], but this was inconsistent with the movement characteristics of the human body. Many studies have introduced velocity fields and virtual channel technology into the specific trajectories of rehabilitation training [29,30]. Cai uses impedance control to construct the velocity field in different directions of the expected trajectory and provide a correction force within a certain range of the trajectory, the correction force being rigid outside the interval threshold [31], but how to set the threshold size is not given. To provide more accurate training, the radial basis function (RBF) neural network has excellent ability for analyzing the patient's motor ability, as researched by Wolbrecht [32] and Pehlivan [33]. However, during training the rehabilitation robot frequently interferes with the patient's exercises, which easily causes the side effect of reliance on the machine and fails to motivate active participation.
In order to introduce the training state of patients into the control loop more accurately, many studies have judged the patient’s psychological-level participation by collecting the patient’s EMG signal, EEG signal, and other physiological information [34,35,36]. At the same time, in order to stimulate patients to actively participate in training, most rehabilitation robots currently use interesting games [37] or virtual reality technology [38].
In summary, this paper proposes a lower limb rehabilitation robot using joint torque sensors and a six-dimensional force sensor on the sole of the foot. During the training task, man–machine interaction force information is collected, from which characteristic quantities can be extracted to predict task difficulty using support vector machines. The rest of this paper is organized as follows: the second section introduces the multi-sensor rehabilitation robot structure and the human–machine interaction mechanical model. In the third section, a multi-difficulty rehabilitation training task is proposed; under an impedance control model, a support vector machine algorithm is used to establish the model for detecting the patient's active participation and task difficulty. The fourth section analyzes the characteristic quantities of 10 healthy volunteers during training tasks of different difficulty and uses support vector machines (SVM) to predict the participation and task difficulty of two other volunteers.

2. LLR-II Rehabilitation Robot

2.1. Structural Design of LLR-II

In order to adapt to patients in the early stage of rehabilitation, the lower limb rehabilitation robot LLR-II designed in this paper can train patients in two postures, so as to prevent the mechanical leg from squeezing the patient [39,40]. As the hardware platform of the rehabilitation system, LLR-II adopts a modular design and consists of five sub-modules: lower limb mechanical leg, main control system, sensor system, multi-function seat, and mechanical limit adjustment frame, as shown in Figure 1.
The mechanical leg is a planar three-degree-of-freedom serial mechanism, similar to the three joints of human leg, including hip, knee, and ankle. In order to solve the problem of excessive driving power of the hip joint, a self-balancing design is adopted. The knee drive component is installed on the back of the hip joint rotation axis to balance the weight of part of the mechanical leg, reduce the driving power of the hip joint, and improve the dynamic performance of the mechanical leg. The addition of an electric pushrod in the mechanical leg can automatically adapt to patients with a height of 1500 mm to 1900 mm. In order to realize the safety of sitting and lying posture training for patients, variable joint limitation consisting of fastened limit groove and driven limit groove was designed, as shown in Figure 2.
Torque sensors are mounted inside the hip and knee joint of the LLR-II sagittal plane to detect the dynamic torque characteristics of the patient’s training state in real time. The dynamic torque characteristic of the ankle joint is detected by a six-dimensional force sensor mounted on the sole of the foot. The torque sensor is manufactured by Sunrise Instruments Company in China, and the six-dimensional force sensor is manufactured by Junde Technology Co., Ltd. in China. The profile of the sensor and the sensor’s detailed parameters are shown in Figure 3. The output side of the reducer increases the sensitivity and accuracy of the mechanical information detection and uses this information to complete the patient’s motion intention detection.
Based on the LLR-II rehabilitation training functions, its electrical control system is divided into a Control Center system, a Movement Control system, a Signal Feedback system, and a Human–Computer Interaction system, as shown in Figure 4. The Control Center system uses a variety of sensors to monitor the human–computer interaction state and a variety of signals to complete the planning and training tasks; the robotic arm receives the instructions and drives the affected limbs to perform multi-mode advanced rehabilitation training under the guidance of the driving system.

2.2. Man–Machine Interaction Mechanics Model of LLR-II

The joint no-load moment in LLR-II man–machine coupled motion is affected by the weight of the mechanical leg and the patient’s leg, and it can be expressed as a nonlinear function of joint variables [41].
$$M_{n\times 1} = F(\theta_{n\times 1})$$
where $M_{n\times 1}$ is the column vector of joint no-load torques, $F(\cdot)$ is the mapping function, and $\theta_{n\times 1}$ is the vector of joint variables.
According to its own structure, it can be simplified as a planar three-link series mechanism. It should be noted that, considering the large weight of the leg, in order to increase the stability of the hip joint, the self-balancing design concept was introduced in the design process. The specific model can be shown in Figure 5.
In the figure, $l_1$, $l_2$, $l_3$ represent the lengths of the thigh, calf, and sole, respectively; $l_4$ represents the length of the self-balancing part; $O$, $A$, $B$ represent the hip, knee, and ankle joints, respectively; $D$ and $P$ represent the two end points; $G_1$, $G_2$, $G_3$ represent the combined weights of the machine and the patient's thigh, calf, and foot, respectively; $G_4$ represents the weight of the self-balancing part; $R_1$–$R_4$ represent the distances from the center of gravity of each part to its node; $\theta_1$, $\theta_2$, $\theta_3$ represent the joint variables, positive in the counter-clockwise direction; and $\theta_4$, $\theta_5$ are introduced intermediate quantities.
The joint no-load moment equation is obtained as
$$\begin{bmatrix} M_1 \\ M_2 \\ M_3 \end{bmatrix} = \begin{bmatrix} \cos\theta_1 & \cos\theta_4 & \cos\theta_5 \\ 0 & \cos\theta_4 & \cos\theta_5 \\ 0 & 0 & \cos\theta_5 \end{bmatrix} \begin{bmatrix} G_3 l_1 + G_2 l_1 + G_1 R_1 - G_4 R_4 \\ G_3 l_2 + G_2 R_2 \\ G_3 R_3 \end{bmatrix}$$
In combination with Equation (2), the above equation can be modified to:
$$\begin{bmatrix} M_1 \\ M_2 \\ M_3 \end{bmatrix} = \begin{bmatrix} \cos\theta_1 & \cos(\theta_1+\theta_2+\theta_3) & \cos(\theta_2-\theta_1) \\ 0 & \cos(\theta_1+\theta_2+\theta_3) & \cos(\theta_2-\theta_1) \\ 0 & 0 & \cos(\theta_2-\theta_1) \end{bmatrix} \begin{bmatrix} G_3 l_1 + G_2 l_1 + G_1 R_1 - G_4 R_4 + f_1 \\ G_3 l_2 + G_2 R_2 + f_2 \\ G_3 R_3 + f_3 \end{bmatrix}$$
It can be abbreviated as:
$$M_{3\times 1} = L_{3\times 3}(\theta)\, C_{3\times 1}$$
In the formula, $M_{3\times 1}$ is the joint no-load torque term, $L_{3\times 3}(\theta)$ is the joint variable term, and $C_{3\times 1}$ is the characteristic parameter term.
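As an illustration, the static model of Equation (4) can be evaluated numerically. A minimal sketch follows; the intermediate-angle definitions $\theta_4 = \theta_1+\theta_2+\theta_3$ and $\theta_5 = \theta_2-\theta_1$ are our reading of Equation (3) and should be checked against the robot's actual convention:

```python
import numpy as np

def no_load_torque(theta, C):
    """Evaluate Equation (4), M = L(theta) C, for the three sagittal joints.

    theta : (3,) joint angles [theta1, theta2, theta3] in radians.
    C     : (3,) identified characteristic parameters (patient-specific).
    """
    t1, t2, t3 = theta
    t4 = t1 + t2 + t3   # intermediate angle, as read from Equation (3)
    t5 = t2 - t1        # intermediate angle, as read from Equation (3)
    L = np.array([
        [np.cos(t1), np.cos(t4), np.cos(t5)],
        [0.0,        np.cos(t4), np.cos(t5)],
        [0.0,        0.0,        np.cos(t5)],
    ])
    return L @ C
```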
The characteristic parameter term $C_{3\times 1}$ is associated with patient information; it is unique to each patient and needs to be identified for each patient. The joint variables $\theta_i$ and link lengths $l_i$ can be measured by the sensor system on the robot body, but the weights $G_i$, which include the weight of the patient's leg, cannot be measured directly. The measured torque $M_s$ is the sum of the applied torque $M_h$ and the no-load torque $M$:
$$M_s = M_h + M$$
From Equation (4), it can be obtained that
$$C_{3\times 1} = L_{3\times 3}(\theta)^{-1} M_{3\times 1}$$
First, the ankle joint is moved at a low speed $V$; the foot pressure value $f_{zd}$ is recorded at intervals $\Delta t$, the joint angles $\theta_1$, $\theta_2$, and $\theta_3$ are calculated, and a total of $k$ samples are recorded. Then the knee joint is rotated in the same manner, recording $k$ knee joint torque values $M_2$ and angle values $\theta_1$, $\theta_2$, $\theta_3$; next, the hip joint is rotated to record $k$ hip joint torque values $M_1$ and angle values $\theta_1$, $\theta_2$, $\theta_3$. The recorded $f_{zd}$, $\theta_3$, $\theta_2$, $\theta_1$ are converted into $M_3$, $\theta_4$, $\theta_5$, and $\bar C_{31}$, $\bar C_{21}$, $\bar C_{11}$ are calculated according to the following formulae.
$$M_{3i} = \cos\theta_{5i}\, C_{31i} \quad (i = 1 \ldots k), \qquad \bar C_{31} = \frac{1}{k}\sum_{i=1}^{k} \frac{M_{3i}}{\cos\theta_{5i}}$$
$$M_{2i} = \cos\theta_{4i}\, C_{21i} + \cos\theta_{5i}\, \bar C_{31} \quad (i = 1 \ldots k), \qquad \bar C_{21} = \frac{1}{k}\sum_{i=1}^{k} \frac{M_{2i} - \cos\theta_{5i}\, \bar C_{31}}{\cos\theta_{4i}}$$
$$M_{1i} = \cos\theta_{1i}\, C_{11i} + \cos\theta_{4i}\, \bar C_{21} + \cos\theta_{5i}\, \bar C_{31} \quad (i = 1 \ldots k), \qquad \bar C_{11} = \frac{1}{k}\sum_{i=1}^{k} \frac{M_{1i} - \cos\theta_{5i}\, \bar C_{31} - \cos\theta_{4i}\, \bar C_{21}}{\cos\theta_{1i}}$$
$$C_{3\times 1} = \begin{bmatrix} \bar C_{11} & \bar C_{21} & \bar C_{31} \end{bmatrix}^T$$
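The staged averaging of Equations (8)–(10) can be sketched as follows; the tuple layout of the recorded samples is a hypothetical choice for illustration, not from the paper:

```python
import numpy as np

def identify_C(samples_ankle, samples_knee, samples_hip):
    """Identify C = [C11, C21, C31]^T by the staged averaging of Eqs. (8)-(10).

    Each argument is a list of (torque, theta1, theta4, theta5) tuples recorded
    while only the corresponding joint is moved slowly (quasi-static motion).
    """
    # Stage 1 (Eq. 8): ankle only, M3 = cos(theta5) * C31
    C31 = np.mean([M / np.cos(t5) for (M, _, _, t5) in samples_ankle])
    # Stage 2 (Eq. 9): knee, M2 = cos(theta4) C21 + cos(theta5) C31
    C21 = np.mean([(M - np.cos(t5) * C31) / np.cos(t4)
                   for (M, _, t4, t5) in samples_knee])
    # Stage 3 (Eq. 10): hip, M1 = cos(theta1) C11 + cos(theta4) C21 + cos(theta5) C31
    C11 = np.mean([(M - np.cos(t5) * C31 - np.cos(t4) * C21) / np.cos(t1)
                   for (M, t1, t4, t5) in samples_hip])
    return np.array([C11, C21, C31])
```

Averaging over $k$ samples per stage suppresses sensor noise in each identified parameter.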
The force exerted by the patient’s active intention is the main feature to be identified in rehabilitation training. We judge the rehabilitation effect of the patient by identifying the force that the patient can produce actively. In the training process, the actual measurement of human–machine interaction force is the data measured by the sensor system of the robot. The following equation is the established equivalent terminal mechanical model of human patients.
$$f_{2\times 1} = H(\theta_{3\times 1},\, M_{3\times 1},\, M_{s\,3\times 1})$$
In the formula, $f_{2\times 1}$ is the static terminal force in the plane of motion, $\theta_{3\times 1}$ is the current joint variable vector, $M_{3\times 1}$ is the three-joint no-load torque, $M_{s\,3\times 1}$ is the measured force/torque at the three joints, and $H(\cdot)$ is the mapping function.
In the process of human–machine motion, since the force exerted by the patient mainly acts on the pedal, a six-dimensional force sensor is placed in the middle of the bottom of the pedal. Due to the arch structure of the foot, the applied force is simplified to two points, the heel (B) and the forefoot (P): the force at the heel mainly generates torque at the hip and knee, and the force at the forefoot generates pressure on the foot pedal, as shown in Figure 6.
In Figure 6, $f_g$ represents the force the patient applies at point B; $f_{gx}$ and $f_{gy}$ denote the horizontal and vertical components of $f_g$, respectively; $f_{hzx}$, $f_{hzy}$ represent the horizontal and vertical components of the forefoot force, respectively; $l_{gxO}$, $l_{gxA}$ represent the moment arms generated by $f_{gx}$ about points O and A, respectively; $l_{gyO}$, $l_{gyA}$ represent the moment arms generated by $f_{gy}$ about points O and A, respectively; $l_{hzxB}$, $l_{hzyB}$ represent the moment arms generated by $f_{hzx}$ and $f_{hzy}$ about point B; and $f_x$, $f_y$ represent the forces measured by the sensor mounted on the sole.
The moment of hip and knee joint can be expressed as:
$$\begin{bmatrix} M_{h1} \\ M_{h2} \end{bmatrix} = \begin{bmatrix} l_2\sin\theta_5 + l_1\sin\theta_1 & l_2\cos\theta_5 + l_1\cos\theta_1 \\ l_2\sin\theta_5 & l_2\cos\theta_5 \end{bmatrix} \begin{bmatrix} f_{gx} \\ f_{gy} \end{bmatrix}$$
The patient’s heel force f g can be expressed as:
$$f_g = \begin{bmatrix} f_{gx} \\ f_{gy} \end{bmatrix} = \begin{bmatrix} l_2\sin\theta_5 + l_1\sin\theta_1 & l_2\cos\theta_5 + l_1\cos\theta_1 \\ l_2\sin\theta_5 & l_2\cos\theta_5 \end{bmatrix}^{-1} \begin{bmatrix} M_{h1} \\ M_{h2} \end{bmatrix}$$
From which can be obtained
$$f_g = \frac{\begin{bmatrix} l_2\cos\theta_5 & -(l_2\cos\theta_5 + l_1\cos\theta_1) \\ -l_2\sin\theta_5 & l_2\sin\theta_5 + l_1\sin\theta_1 \end{bmatrix} \begin{bmatrix} M_{s1}-M_1 \\ M_{s2}-M_2 \end{bmatrix}}{(l_2\sin\theta_5 + l_1\sin\theta_1)(l_2\cos\theta_5) - (l_2\sin\theta_5)(l_2\cos\theta_5 + l_1\cos\theta_1)}$$
The force exerted on the patient’s forefoot is collected by a six-dimensional force sensor on the sole of the foot that has the same axial direction as the pedal, so the end force f h z can be decomposed as follows:
$$f_{hz} = \begin{bmatrix} f_{hzx} \\ f_{hzy} \end{bmatrix} = \begin{bmatrix} f_x\cos\theta_6 + f_y\sin\theta_6 \\ f_x\sin\theta_6 - f_y\cos\theta_6 \end{bmatrix}$$
Then, the equivalent terminal force of the patient can be calculated as:
$$f = f_g + f_{hz} = \begin{bmatrix} F_x \\ F_y \end{bmatrix}$$
where $F_x$, $F_y$ are the horizontal and vertical components of the terminal force.
By combining Equations (13) and (15), the terminal static component can be expressed as follows:
$$F_x = \frac{l_2\cos\hat\theta\,(M_{s1}-M_1) - (l_2\cos\hat\theta + l_1\cos\theta_1)(M_{s2}-M_2)}{(l_2\sin\hat\theta + l_1\sin\theta_1)(l_2\cos\hat\theta) - (l_2\sin\hat\theta)(l_2\cos\hat\theta + l_1\cos\theta_1)} + f_x\cos(\theta_1+\theta_2+\theta_3+\pi/2) + f_y\sin(\theta_1+\theta_2+\theta_3+\pi/2)$$
In the formula, $\hat\theta$ is an intermediate joint-angle variable, $\hat\theta = \theta_1 - \theta_2$.
$$F_y = \frac{-l_2\sin\hat\theta\,(M_{s1}-M_1) + (l_2\sin\hat\theta + l_1\sin\theta_1)(M_{s2}-M_2)}{(l_2\sin\hat\theta + l_1\sin\theta_1)(l_2\cos\hat\theta) - (l_2\sin\hat\theta)(l_2\cos\hat\theta + l_1\cos\theta_1)} + f_x\sin(\theta_1+\theta_2+\theta_3+\pi/2) - f_y\cos(\theta_1+\theta_2+\theta_3+\pi/2)$$
Equations (17) and (18) fully determine the mapping from joint variables, no-load torques, and measured torques to the terminal force stated in Equation (11), and provide the input parameters for the following judgment of the patients' motion intention and for the control strategy.
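The terminal-force computation above can be collected into a single routine; this is a sketch that assumes $\hat\theta = \theta_1 - \theta_2$ and $\theta_6 = \theta_1 + \theta_2 + \theta_3 + \pi/2$ as read from the text:

```python
import numpy as np

def terminal_force(theta, M_noload, M_meas, f_sensor, l1, l2):
    """Equivalent terminal force of the patient (Eqs. (17)-(18)).

    theta    : (3,) joint angles [theta1, theta2, theta3].
    M_noload : (2,) no-load torques [M1, M2] from the static model.
    M_meas   : (2,) measured torques [Ms1, Ms2].
    f_sensor : (2,) sole-sensor readings [fx, fy] in the pedal frame.
    l1, l2   : thigh and calf link lengths.
    """
    t1, t2, t3 = theta
    th = t1 - t2                      # intermediate angle (assumed definition)
    dM1 = M_meas[0] - M_noload[0]
    dM2 = M_meas[1] - M_noload[1]
    det = ((l2 * np.sin(th) + l1 * np.sin(t1)) * (l2 * np.cos(th))
           - (l2 * np.sin(th)) * (l2 * np.cos(th) + l1 * np.cos(t1)))
    # Heel contribution (Eq. 14): 2x2 matrix inverse applied to the torque residuals
    fgx = (l2 * np.cos(th) * dM1 - (l2 * np.cos(th) + l1 * np.cos(t1)) * dM2) / det
    fgy = (-l2 * np.sin(th) * dM1 + (l2 * np.sin(th) + l1 * np.sin(t1)) * dM2) / det
    # Forefoot contribution (Eq. 16): sole-sensor force rotated into the base frame
    t6 = t1 + t2 + t3 + np.pi / 2
    fx, fy = f_sensor
    fhzx = fx * np.cos(t6) + fy * np.sin(t6)
    fhzy = fx * np.sin(t6) - fy * np.cos(t6)
    return np.array([fgx + fhzx, fgy + fhzy])
```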

3. Participation Detection of LLR-II

3.1. Assist Force Training Control

According to the change of the patient's participation during training, the assist mode and the active training mode are divided into different grades to ensure that the patient completes the training while maximizing training enthusiasm and task completion. Using the impedance control model, the human–computer interaction force is represented by the end-position offset and magnified by the game in the task. With the participation of the robot's assist force, the task difficulty is classified to ensure that patients can find suitably challenging rehabilitation tasks. To improve the patient's level of active participation, the assist force is adjusted in real time according to the recognized level of physical participation, so that patients maintain a high level of participation throughout training.
As rehabilitation training progresses, the patient gradually gains a certain ability to control the affected limb; when this ability is not yet sufficient for full control, assistance training must be introduced, in which the robot obtains the patient's motion intention through the force/torque sensors and then drives the affected limb for training. In order to improve the coordination of the joints, the patient needs to complete trajectory training, such as circular and linear trajectories. In many cases the patient cannot perform the trajectory task independently, and the robot needs to assist in suppressing wrong movements. The assistance training control mode introduces the impedance model shown in Figure 7. The forward kinematic solution of the current actual joint position $\theta_a$ is compared with the current desired position on the training trajectory to compute the assist force $F_T$; this is summed with the patient force $F_a$, and the result is sent to the impedance controller to obtain the end-position control amount $P_d$. The inverse kinematic solution then gives the desired joint position $\theta_d$, which is transmitted to the position controller to realize assist control.
In order to let patients intuitively understand the movement track of their affected limbs, a task game was designed in which the patient steers a virtual mouse to walk in the safe area between the red lines. The position of the mouse on the screen reflects the position of the patient's limb end in the motion plane. The participation score increases with the time the mouse spends walking in the safe area and does not increase when the mouse is outside it. The trajectory of the safe passage can be selected according to the length of the patient's limb, and the width of the safe passage is related to the parameters of the impedance model, which are as follows:
$$M = \begin{bmatrix} 0.0625 \\ 0.0625 \end{bmatrix}, \quad B = \begin{bmatrix} 5 \\ 5 \end{bmatrix}, \quad K = \begin{bmatrix} 1000 \\ 1000 \end{bmatrix}$$
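The decoupled impedance law with these parameters might be discretized as below; the semi-implicit Euler scheme and the proportional assist-force form (with its `gain` constant) are our assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Impedance parameters of Section 3.1 (per-axis, decoupled)
M = np.array([0.0625, 0.0625])   # virtual mass
B = np.array([5.0, 5.0])         # virtual damping
K = np.array([1000.0, 1000.0])   # virtual stiffness

def impedance_update(F, e, e_dot, dt):
    """One step of the decoupled impedance law M*e'' + B*e' + K*e = F,
    integrated with semi-implicit Euler. Returns the updated position offset e
    (added to the desired trajectory point to give the commanded position P_d)
    and its velocity. F, e, e_dot are 2-vectors (x, y) in the motion plane."""
    e_ddot = (F - B * e_dot - K * e) / M
    e_dot = e_dot + e_ddot * dt
    e = e + e_dot * dt
    return e, e_dot

def assist_force(P_actual, P_target, K_assist, gain=1.0):
    """Assist force F_T pulling the end point toward the target trajectory,
    scaled by the difficulty factor K_assist (0 to 0.8 in Section 3.1).
    The proportional form and 'gain' are hypothetical, not from the paper."""
    return K_assist * gain * (np.asarray(P_target) - np.asarray(P_actual))
```

With these parameters a constant interaction force $F$ settles at the offset $F/K$, which is how the 70 kg reference weight translates into the width of the safe passage.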
In order to make the width of the safe passage between the red lines challenging for most patients without being tedious, 70 kg is selected as the standard reference value for patient weight, and the positional offset produced by this standard reference value is taken as the width of the safe channel. The 70 kg reference was appropriate for the group of volunteers on whom the machine was tested, and it provides an approximate starting point for the initial conditions of the algorithm. During the training task, patients need to resist the weight of their limbs and control the virtual mouse to walk at a constant speed within the safe passage. If the mouse touches the red line during the task, its stamina declines until the end of the training cycle. According to the magnitude of the assist force, the task difficulty is divided into nine levels, with K values ranging from 0 to 0.8. The degree to which patients participate in training is related to the degree of recovery and individual physical strength, parameters that are difficult to quantify. Therefore, the degree of participation is quantified in stages by means of an experimental questionnaire: patients try nine training tasks of different difficulty and complete questionnaires indicating whether the current task difficulty is appropriate. Tasks of different difficulty can thus be divided into three states, under-challenge, challenge, and over-challenge, expressed as −1, 0, and 1, respectively.
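The nine difficulty levels and the three questionnaire states can be encoded compactly; the per-level labels shown here are those reported for the first volunteer in Section 4 and are illustrative only, since the assignment differs between subjects:

```python
import numpy as np

# Nine difficulty levels: assist factor K from 0 (no assistance, hardest)
# to 0.8 (most assistance, easiest), in steps of 0.1.
K_LEVELS = np.round(np.arange(0.0, 0.81, 0.1), 1)

# Questionnaire states: under-challenge = -1, challenge = 0, over-challenge = 1.
# The assignment below is the one reported for the first volunteer (Figure 10);
# it differs between subjects.
FIRST_VOLUNTEER_LABELS = {0.0: 1, 0.1: 1, 0.2: 1, 0.3: 1,
                          0.4: 0, 0.5: 0, 0.6: 0,
                          0.7: -1, 0.8: -1}
```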

3.2. Patient Participation and Training Task Difficulty Prediction Model

In order to predict the degree of the patient's task participation, a mathematical model based on support vector classification and regression was established, suited to the small-sample, nonlinear nature of the limited training data and questionnaire data. Characteristic parameters were extracted from the training data, and the implicit mathematical relationship between input and output values was analyzed to predict the actual participation of patients, so as to select the appropriate task difficulty.
Using the characteristic quantities of the patients' training data and the task difficulty states, a QPSO-MLSSVM (quantum particle swarm optimization and multi-output least squares support vector machine) model can be established and tested. This model is based on the LS-SVM (least squares support vector machine) model, a class of kernel-based learning methods normally used for regression and classification problems; the main distinction of LS-SVM is that it solves a set of linear equations rather than the quadratic programming problem of classical SVMs [42]. The QPSO (quantum particle swarm optimization) algorithm is used to optimize the key parameters of the model to improve its performance [43,44]. The sample set is $\{(x_i, y_i),\ i = 1, 2, \ldots, l\}$, where $x_i \in R^n$ is the input value of the $i$th sample and $y_i \in R$ is the output value of the $i$th sample. The assumption is
$$f(x_i) = \omega^T \Phi(x_i) + b, \quad i = 1, 2, \ldots, l$$
where $\Phi(x_i)$ is the spatial conversion (feature-mapping) function, $\omega$ is the weight vector, and $b$ is the bias (adjustment) parameter.
We optimize the confidence interval under this condition, and transform the optimization problem into the minimum value problem according to the principle of structural risk minimization [45]:
$$\begin{cases} \min\ \dfrac{1}{2}\|\omega\|^2 + C\displaystyle\sum_{i=1}^{l}\xi_i \\ \text{s.t.}\ \ y_i f(x_i) \ge 1 - \xi_i, \quad i = 1, 2, \ldots, l \\ \phantom{\text{s.t.}\ \ } \xi_i \ge 0, \quad i = 1, 2, \ldots, l \end{cases}$$
where $C$ is the weight coefficient and $\xi_i$ is the relaxation (slack) factor.
The first item in the optimization problem reflects the generalization ability and the model complexity, the second item reflects the model error, and the parameter C adjusts the weight of these two items. Introducing the Lagrange equation into the above formula:
$$L = \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{l}(\xi_i + \xi_i^*) - \sum_{i=1}^{l} a_i\left(\varepsilon + \xi_i - y_i + \omega^T\Phi(x_i) + b\right) - \sum_{i=1}^{l} a_i^*\left(\varepsilon + \xi_i^* + y_i - \omega^T\Phi(x_i) - b\right) - \sum_{i=1}^{l}\left(\eta_i\xi_i + \eta_i^*\xi_i^*\right)$$
where $\xi^{(*)}$, $\alpha^{(*)}$, and $\eta^{(*)}$ denote the corresponding variables with and without the asterisk; $\alpha^{(*)}$ and $\eta^{(*)}$ are Lagrange multipliers; the relaxation variables satisfy $\xi_i, \xi_i^* \ge 0$, $i = 1, 2, \ldots, l$; and $\varepsilon$ is the insensitivity parameter.
The radial basis function is selected to calculate the spatial inner product of the kernel function in the support vector machine model. The result obtained by the above formula is inserted back into the Lagrange equation to obtain the dual equation of the optimization function:
$$\max\ W(a_i, a_i^*) = -\frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l}(a_i - a_i^*)(a_j - a_j^*)\, K(x_i, x_j) + \sum_{i=1}^{l}(a_i - a_i^*)\, y_i - \sum_{i=1}^{l}(a_i + a_i^*)\,\varepsilon$$
The constraint of this dual equation is:
$$\begin{cases} \displaystyle\sum_{i=1}^{l}(a_i - a_i^*) = 0 \\ a_i, a_i^* \in (0, C) \end{cases}$$
where $C > 0$ is the penalty parameter.
When $0 < \alpha_i < C$, $\xi_i = 0$; when $0 < \alpha_i^* < C$, $\xi_i^* = 0$. The corresponding samples are the standard support vectors, which express the reliability of the calculation. In general, the $b$ value is calculated for each standard support vector, and then the average value is taken.
$$b = \frac{1}{N_{NSV}}\left\{\sum_{0<\alpha_i<C}\left[y_i - \sum_{x_j \in SV}(\alpha_j - \alpha_j^*)\,\Phi(x_j)\cdot\Phi(x_i) - \varepsilon\right] + \sum_{0<\alpha_i^*<C}\left[y_i - \sum_{x_j \in SV}(\alpha_j - \alpha_j^*)\,\Phi(x_j)\cdot\Phi(x_i) + \varepsilon\right]\right\}$$
In order to alleviate local-optimum problems, the QPSO algorithm and the SVM algorithm are combined into the Hybrid QPSO-SVM algorithm. The update formulae of QPSO are:
$$\begin{cases} m_{best} = \dfrac{1}{M}\displaystyle\sum_{i=1}^{M} P_i \\ PC_{ij} = \phi P_{ij} + (1-\phi) P_{gj} \\ x_{ij}(t+1) = PC_{ij} \pm \alpha\,\left| m_{best,j} - x_{ij}(t) \right| \ln(1/u) \end{cases}$$
And the particle swarm velocity formula is:
$$v_{ij}(t+1) = \omega v_{ij}(t) + c_1 r_{1j}\left[P_{ij}(t) - x_{ij}(t)\right] + c_2 r_{2j}\left[P_{gj}(t) - x_{gj}(t)\right]$$
where $P_{ij}$ and $P_{gj}$ are the best positions of particle $i$ and of the global best particle $g$ in dimension $j$, respectively; $m_{best}$ ($m_{best,j}$ in dimension $j$) is the center point of the current best positions of all individuals over all dimensions; $M$ is the particle swarm size; $P_i$ is the current best position of particle $i$; $PC_{ij}$ is a random position between $P_{ij}$ and $P_{gj}$; and $\alpha$ is the control coefficient.
QPSO optimizes the two key parameters $C$ and $\sigma$ of the MLSSVM; the optimization goal is to minimize the $fitness(\sigma, \gamma)$ function, where $\gamma$ denotes the regularization parameter $C$. The sample mean square error (MSE) is selected as the particle swarm fitness function.
$$fitness(\sigma, \gamma) = \frac{1}{M}\sum_{i=1}^{M}\left(y_i - \hat y_i\right)^2$$
where $y_i$ is the actual value and $\hat y_i$ is the predicted value. When $fitness$ reaches its minimum, the optimal solution is obtained.
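A generic sketch of the QPSO position update of Equation (23) is given below; it can wrap any fitness function, for example a cross-validated MSE of the SVM over the $(\sigma, \gamma)$ pair. The linearly decreasing contraction-expansion coefficient $\alpha$ is a common choice, not specified in the paper:

```python
import numpy as np

def qpso_minimize(fitness, bounds, n_particles=20, n_iter=200, seed=0):
    """Minimize `fitness` over a box using the QPSO update of Equation (23).

    bounds : sequence of (low, high) per dimension, e.g. [(0.01, 100)] * 2
             for a (sigma, gamma) search in the Hybrid QPSO-SVM.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    X = rng.uniform(lo, hi, size=(n_particles, dim))
    P = X.copy()                                # personal best positions P_i
    Pf = np.array([fitness(x) for x in X])      # personal best fitness values
    g = P[np.argmin(Pf)].copy()                 # global best position P_g
    for t in range(n_iter):
        alpha = 1.0 - 0.5 * t / n_iter          # contraction-expansion coefficient
        m_best = P.mean(axis=0)                 # mean of the personal bests
        for i in range(n_particles):
            phi = rng.uniform(size=dim)
            pc = phi * P[i] + (1.0 - phi) * g   # local attractor PC_ij
            u = rng.uniform(size=dim)
            sign = np.where(rng.uniform(size=dim) < 0.5, 1.0, -1.0)
            # Position update: x = PC +/- alpha * |m_best - x| * ln(1/u)
            X[i] = pc + sign * alpha * np.abs(m_best - X[i]) * np.log(1.0 / u)
            X[i] = np.clip(X[i], lo, hi)
            f = fitness(X[i])
            if f < Pf[i]:                       # update personal best
                Pf[i], P[i] = f, X[i].copy()
        g = P[np.argmin(Pf)].copy()             # update global best
    return g, float(Pf.min())
```

Unlike the velocity-based update of Equation (24), the quantum formulation needs no velocity term, which is one reason it is less prone to premature convergence.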

4. Experiment

In order to verify the effectiveness of this method, 12 healthy volunteers were tested. All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Yanshan University. Because stroke patients expend physical strength at different rates in tasks of different difficulty, and because the assist-force parameter K changes, keeping the foot in the safe area requires different levels of initiative. As the difficulty coefficient increases, the assist-force parameter K decreases gradually; meanwhile, the target participation of the patient increases during training, which may lead to a rapid decline in the patient's physical strength. Thus, the human–computer interaction force exerted by the patient at the end of the robotic chain is related to the degree to which the patient participates in the task, and the degree of participation is in turn closely related to task difficulty. To verify this, the force/moment of the three joints is first converted into the terminal force in robot coordinates. Second, the obtained data allow observation of the changes of the calculated end force over the training cycle and comparison of the end position of the robot under different assist-force parameter values K.
Figure 8 shows the end force obtained by transforming the data collected by the sensors into robot coordinates after eliminating the self-weight of the robot. Under the assist force of the K = 0.4 task difficulty, the human–robot interaction force stays at a relatively low level for a period at the beginning of training. At 380 s, the volunteer is too weak to bear his own weight to complete the task, and the human–robot interaction force reaches its first peak. At 400 s, the volunteer challenges himself again and strives to achieve the task goal, so the interaction force declines rapidly because the volunteer actively bears the weight of his limbs. However, this second low-level phase is shorter than the first, and a second peak of the interaction force appears. At the end of the training, it is again difficult for the volunteer to bear part of the body weight, so the interaction force, which is almost entirely composed of the volunteer's limb weight, stays at a high level. The variations in the periods of repeated challenges for a volunteer at different task difficulties are shown in Figure 9.
Figure 9 shows the terminal force of the first volunteer in training tasks with different assist-force K values. According to the questionnaire survey, for this volunteer the K = 0.1 assist-force task is "over-challenge", the K = 0.5 and K = 0.6 tasks are "challenge", and the K = 0.8 task is "under-challenge". In the over-challenge task, with a heavier limb weight to carry, the volunteer's physical exertion is rapid and many peaks appear in the human–robot interaction force. As the contribution of the assist force to training increases, volunteers need less initiative to achieve the task goals. This phenomenon can be seen clearly by observing the relationship between the end position and the safe passage.
Figure 10 shows the relationship between the terminal position of the robot and the safe passage of the target in training tasks of different difficulty for the first volunteer. The blue line is the rehabilitation robot terminal position, the green line is the target terminal trajectory, and the red line is the safe passage. The K = 0, K = 0.1, K = 0.2, and K = 0.3 assist force tasks are "over-challenge"; the K = 0.4, K = 0.5, and K = 0.6 tasks are "challenge"; and the K = 0.7 and K = 0.8 tasks are "under-challenge". It is difficult for the volunteer to complete the task goal in the over-challenge tasks, whereas in the under-challenge tasks it is easy for volunteers to reach the goal position.
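The time the terminal position spends outside the safe passage can be quantified directly from the logged trajectories. A minimal sketch, assuming the passage is a fixed tolerance band around the target trajectory (the actual passage geometry of LLR-II and the function name are assumptions for illustration):

```python
import numpy as np

def time_share_outside_passage(position, target, tolerance):
    """Fraction of samples where the robot terminal position lies
    outside the safe passage around the target trajectory.
    `tolerance` (the half-width of the passage) is an assumed parameter."""
    position = np.asarray(position, dtype=float)
    target = np.asarray(target, dtype=float)
    outside = np.abs(position - target) > tolerance
    return float(np.mean(outside))
```

A value near 0 corresponds to the under-challenge tasks in Figure 10, where the terminal position stays inside the passage, and a value near 1 to the over-challenge tasks.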
The degree of patient participation under different task difficulties is reflected in the fluctuation of the human–machine interaction force signals, where feature fluctuation represents the fluctuation of the signal data over the period. The greater the fluctuation of a feature, the greater its degree of dispersion; such a feature is more sensitive to signal fluctuation and therefore more suitable as an input parameter of the patient participation detection model. The sample data are processed using four indicators describing the degree of dispersion of the signals, including the interquartile range and the variance. These features are statistically analyzed to judge their significance and correlation across the different volunteer states. The significance analysis compares the feature data of the "under-challenge" and "over-challenge" volunteer states with that of the "challenge" state, to judge whether the feature data differ significantly among the three states. For features without significant differences, their correlation with the degree of volunteer participation is examined; a feature that correlates with participation is still used as an input parameter to train the support vector machine. The preliminary feature variables extracted in this paper are listed in Table 1.
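The dispersion indicators are simple to compute from a window of sampled data. A sketch, assuming the window is a one-dimensional array of terminal force samples (the function name and windowing are illustrative, not the paper's implementation):

```python
import numpy as np

def dispersion_indicators(window):
    """Dispersion indicators of one window of terminal force samples:
    inter-quartile range (FQ) and variance (FVAR), as listed in Table 1,
    plus the standard deviation for reference."""
    window = np.asarray(window, dtype=float)
    q75, q25 = np.percentile(window, [75, 25])
    return {
        "FQ": float(q75 - q25),         # inter-quartile range
        "FVAR": float(np.var(window)),  # variance
        "STD": float(np.std(window)),   # standard deviation
    }
```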
Ten volunteers were selected to carry out the experimental verification of participation and task difficulty detection. Each volunteer was trained on 10 tasks of different difficulty, each lasting 15 to 20 min. To eliminate the influence of physical energy consumption between experiments, they were conducted one day apart. After each experiment, a questionnaire survey was conducted on the difficulty of the task, divided into three participation levels: "under-challenge", "challenge", and "over-challenge". During the experiment, the hip and knee joint torques, the plantar six-dimensional force, and the terminal trajectory were collected. Under different proportions of assist force, the terminal force and position behave differently. The characteristic quantities shown in Table 2 were extracted from the training data, and the volunteers' training data were grouped according to the evaluated task difficulty. Pairwise t-test comparisons of the characteristic quantities were statistically analyzed to verify whether they differ significantly under different task difficulties, and each pair of difficulty levels was additionally compared using one-way repeated-measures ANOVA. Table 2 shows the results of the significance analysis of the characteristic quantities for the three task difficulties, where the p value is the test probability and the F value is the test statistic relative to random error. When the p value is less than 0.05, the characteristic quantity differs significantly between task difficulties.
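The significance screening can be reproduced with standard statistical routines. A hedged sketch using SciPy on fabricated illustration data (a paired t-test between two difficulty levels, and a plain one-way ANOVA as a stand-in for the paper's repeated-measures ANOVA):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-volunteer PRMSE values at three task difficulties
# (synthetic data; the real values come from the robot's sensor logs)
prmse_difficult = rng.normal(8.0, 1.0, size=10)
prmse_medium = rng.normal(5.0, 1.0, size=10)
prmse_easy = rng.normal(3.0, 1.0, size=10)

# Pairwise paired t-test between two difficulty levels
t_stat, p_pair = stats.ttest_rel(prmse_difficult, prmse_medium)

# One-way ANOVA across the three levels
f_stat, p_anova = stats.f_oneway(prmse_difficult, prmse_medium, prmse_easy)

# The paper's criterion: a feature is significant when p < 0.05
significant = p_pair < 0.05 and p_anova < 0.05
```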
The significance analysis of the characteristic quantities from the training data of the 10 volunteers shows obvious differences in PRMSE, PSTD, TSCA, and PMAE among the task difficulties. The p values of FQ, UMAX, and FD are greater than 0.05 only for the difficult/medium comparison; in the other comparisons these quantities differ significantly, so they can distinguish the under-challenge tasks. Although POR shows significant differences, its value is coarse and its stability is not high. Accordingly, PRMSE, PSTD, TSCA, FQ, UMAX, FD, and PMAE are used as feature inputs for the classification of volunteer participation and task difficulty. Figure 11 shows more intuitively the difference in training characteristics among the three difficulty levels for each volunteer.
Figure 11 shows the characteristics of two volunteers under different task difficulties. The red line corresponds to the over-challenge (difficult) tasks, the green line to the challenge tasks, and the blue line to the under-challenge tasks. PRMSE, PSTD, TSCA, FD, and PMAE are positively correlated with the task difficulty evaluation, while FQ and UMAX are negatively correlated with it.
The training data of the 100 groups from the 10 volunteers receiving the test were used as training sample data. In addition, two further volunteers were randomly selected as the predictive group. The two volunteers in the predictive group were trained on all tasks at the different difficulty levels, and questionnaires on task difficulty were conducted; their training data are used as predictive sample data. The feature quantities Xs extracted from the 100 sets of experimental sample data were taken as input, and the known category information Ys of the training set (the task difficulty evaluated by the patient) was taken as output. A prediction model based on the hybrid QPSO-MLSSVM optimization algorithm and two comparison models, based on the MLSSVM algorithm and a neural network algorithm, were established. QPSO is an iterative optimization that tunes the parameters C and σ of the MLSSVM algorithm to improve the generalization ability and prediction accuracy of the model. The 20 sets of data in the test set are handled in the same way as the training set: seven feature quantities Xl are extracted as inputs to the trained model, and the predicted output Yp is obtained by the model. The accuracy of the model was evaluated by the minimum mean square deviation between Yl (the task difficulty evaluated by the patient) and Yp.
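The structure of this pipeline can be sketched with scikit-learn, using a grid search over C and the RBF kernel width as a simple stand-in for the QPSO optimization of C and σ (MLSSVM itself is not in scikit-learn, so a standard SVC is used, and the data below are synthetic):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
# Synthetic stand-in: 100 samples x 7 features, labels -1 / 0 / 1
# for over-challenge / challenge / under-challenge
Ys = np.repeat([-1, 0, 1], [33, 34, 33])
Xs = rng.normal(size=(100, 7)) + 2.0 * Ys[:, None]  # separable classes

# Grid search over C and gamma, standing in for QPSO over C and sigma
search = GridSearchCV(SVC(kernel="rbf"),
                      {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                      cv=5)
search.fit(Xs, Ys)
train_accuracy = search.score(Xs, Ys)
```

In the paper, QPSO replaces the exhaustive grid with a particle-swarm search of the same two-parameter space, which converges faster on continuous parameter ranges.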
In analyzing the results, the fit of the mathematical model to the training samples cannot directly reflect the prediction ability of the model. Further model evaluation can be achieved by comparing the model's predictions with the real data. Common evaluation indexes include the MSE, RMSE, correlation coefficient, and so forth. In this paper, the mean square error and the correlation coefficient are used to evaluate the model.
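The two indexes used here reduce to a few lines; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def evaluate_model(y_true, y_pred):
    """Mean square error, root mean square error, and Pearson
    correlation coefficient between predicted and true values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = float(np.mean((y_pred - y_true) ** 2))
    rmse = float(np.sqrt(mse))
    r = float(np.corrcoef(y_true, y_pred)[0, 1])
    return mse, rmse, r
```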
The 100 datapoints of the volunteers' participation states were input into the prediction model to train it, and the prediction was tested on 20 datapoints. Correlation analysis was conducted on the actual and predicted values, and the linear fitting results are shown in Figure 12.
To further analyze the classification effect of the QPSO-MLSSVM support vector machine on the volunteer participation states, the mean square error (MSE), mean absolute error (MAE), and standard deviation (STD) between the predicted and true values of the volunteer participation states were obtained, as shown in Table 3.
The training set simply divides the task difficulty evaluation into −1, 0, and 1. Because patients have different evaluation criteria for difficulty, the dynamic trends in the training data differ, and the test results of the algorithm are distributed around these three values. If the test results are graded according to the difficulty intervals (−1.5, −0.75), (−0.25, 0.25), and (0.75, 1.25), the accuracy of the test reaches 100%. To penalize slightly larger offsets, the determination ranges are adjusted to (−1.25, −0.5), (−0.5, 0.5), and (0.5, 1.5), and the test accuracy still reaches 80%. The matching result of the task difficulty evaluation shows that the predicted value of the task difficulty is close to the real value, which also verifies that volunteers' evaluation of the difficulty of training tasks can be obtained from the training data.
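The interval-based grading can be expressed as a small helper. A sketch using the determination ranges (−1.25, −0.5), (−0.5, 0.5), (0.5, 1.5) quoted above (intervals taken half-open on the right here to avoid double counting at the shared boundaries, which is an assumption about the paper's convention):

```python
# Determination ranges for the three difficulty labels
RANGES = {-1: (-1.25, -0.5), 0: (-0.5, 0.5), 1: (0.5, 1.5)}

def grade(y_pred):
    """Map a continuous predicted difficulty to the discrete labels
    -1 / 0 / 1; returns None when the prediction falls outside
    every determination range (counted as a mismatch)."""
    for label, (lo, hi) in RANGES.items():
        if lo <= y_pred < hi:
            return label
    return None

def grading_accuracy(y_true, y_pred):
    """Fraction of predictions that grade to the true label."""
    hits = sum(grade(p) == t for t, p in zip(y_true, y_pred))
    return hits / len(y_true)
```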

5. Discussion

Early rehabilitation training for stroke patients is very important and effective. While the rehabilitation robot can also be used in the later stages of recuperation, and even as a workout enhancer, this research work is aimed at the early stages of post-trauma rehabilitation. The aim is to re-train nerve control and the brain–body associations for typical movements of the affected limb. Overall, the success of training is measured by how quickly and effectively a patient regains normal control of their limbs. Making patients take the initiative to participate fully, while not letting their physical strength drop so rapidly that it dampens their enthusiasm for training, is a complex problem, so choosing an appropriate training task difficulty for each patient is very important. Therefore, this paper determines, based on the patient's data in the training task, whether the current task difficulty is suitable for achieving the optimal training effect. The final experiment shows, from the fitting curve, that the matching degree of the task difficulty evaluation of the two volunteers in the test group was worse than that of the volunteers in the experimental group. This is due to the volunteers' subjective persistence and subjective evaluation of the task difficulty. Nevertheless, the support vector machine task difficulty judgment model still achieves a prediction accuracy of 80% for the task difficulty evaluation of the test group. As the training data continue to grow and more varieties of training information are introduced, the prediction accuracy of the judgment model will improve further.
As can be seen from Figure 8 and Figure 10, when volunteers perform multiple training tasks of different difficulty, the strength needed to reach the goal task position and the speed of physical consumption both decrease as the difficulty decreases and the proportion of assist force increases. During the clinical trials, most volunteers had emotional issues when performing over-challenging tasks: most were in low spirits, and some were irritated, requiring continued psychological counselling and verbal encouragement to sustain their task training. Under the under-challenging tasks, most volunteers felt bored but emotionally stable. The frequency of verbal encouragement during training may affect the experimental results; in this clinical trial, verbal encouragement was given four times in each training process to minimize the influence of this factor. In the future, the research team will use a variety of measurements to study the emotional and physical characteristics of volunteers and verify their impact on rehabilitation training.
In this paper, the performance of volunteers in training tasks at different difficulty levels is investigated in order to determine whether the task difficulty is appropriate, verified against past data. The results also demonstrate the validity and universality of the assist training strategy, a control strategy that can maximize the patients' ability to participate actively in training. In the future, the research team will continue to study and build upon such clinical trial data. It is expected that the task difficulty can be judged and predicted online, and the assist force then adjusted in real time, so that patients can participate in training actively and optimally.

6. Conclusions

This paper studies a seated and reclining lower limb rehabilitation robot with a multi-joint sensing system. To make the patient participate actively in the training task, an assist force training control strategy and corresponding task difficulties are proposed. The multi-joint mechanical sensing system is used to solve a more accurate end mechanical model, from which the human–robot interaction force is detected. Clinical trials with 10 volunteers were conducted, in which each volunteer underwent training at nine difficulty levels. Through the optimized support vector machine algorithm, quantitative features of the training data are taken as the input set and the volunteers' evaluations of the task difficulty as the output set, yielding a task difficulty judgment model based on the volunteer training data. The training difficulty of two further volunteers, not in the original 10-person training set, was then predicted. This verified that the task difficulty judgment model is universal and can exclude the influence of body size and weight. By comparing the prediction results of the various algorithm models, the accuracy and convergence speed of the optimization algorithm are verified.
Future work will concentrate on extending the research to alternative models, such as those described in the introduction, with a detailed comparison suggesting possible improvements to the data pipeline. The application will also benefit from a continuous expansion of the dataset as more patient trials become available. This will also allow the training data to be judged and predicted online, with the difficulty of the task adjusted in real time to optimize the patient's rehabilitation effect. As discussed throughout the paper, the patient's perception of the difficulty of the training exercise influences their mood, behavior, and performance. As such, matching the patient's perception is an important task in itself, even if the mechanical ground truth may be misrepresented. The desired end result for the rehabilitation robot, including future research, is a real-time online assessment that includes individual patient profiles, which should make patient subjectivity less relevant.

Author Contributions

Mechanical design and prototype debugging, H.Y. and M.L.; conceptualization and supervision, H.W., L.V. and V.V.; data analysis and validation, H.Y. and Y.L.; the main content of this manuscript was created and written by H.Y. and reviewed by all authors.

Funding

This research was funded by China Science and Technical Assistance Project for Developing Countries under grant number KY201501009; Key Research and Development Plan of Hebei Province, under grant number 19211820D, the forty-third regular meeting exchange programs of China Romania science and technology cooperation committee, under grant number 43-2; The European Commission SMOOTH project: Smart Robot for Fire-fighting, under grant number H2020-MSCA-RISE-2016:734875; Romanian Ministry of Research and Innovation, CCCDI-UEFISCDI, within PNCDI III, “KEYT HROB” project under grant number PN-III-P3-3.1-PM-RO-CN-2018-0144 / 2 BM ⁄ 2018; Postgraduate Innovation Research Assistant Support Project, under grant number CXZS201902.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dong, Q.; Guo, Q.; Luo, B.; Xu, Y. Expert consensus on post-stroke cognitive management. Chin. J. Stroke 2017, 12, 519–531. [Google Scholar]
  2. Feigin, V.L.; Forouzanfar, M.H.; Krishnamurthi, R.; Mensah, G.A.; Connor, M.; Bennett, D.A.; Moran, A.E.; Sacco, R.L.; Anderson, L.; Truelsen, T. Global and regional burden of stroke during 1990–2010: Findings from the Global Burden of Disease Study 2010. Lancet 2014, 383, 245–255. [Google Scholar] [CrossRef]
  3. Xiang, S.D.; Meng, Q.L.; Yu, H.L.; Meng, Q.Y. Research status of compliant exoskeleton rehabilitation manipulator. Chin. J. Rehabil. Med. 2018, 33, 461–465. [Google Scholar]
  4. Xie, S. Advanced Robotics for Medical Rehabilitation. Springer Tracts Adv. Robot. 2016, 108, 1–357. [Google Scholar]
  5. Hou, Z.G.; Zhao, X.G.; Cheng, L.; Wang, Q.N.; Wang, W.Q. Recent advances in rehabilitation robots and intelligent assistance systems. Acta Autom. Sin. 2016, 42, 1765–1779. [Google Scholar]
  6. Losey, D.; McDonald, C.; Battaglia, E.; O'Malley, M.K. A review of intent detection, arbitration, and communication aspects of shared control for physical human-robot interaction. Appl. Mech. Rev. 2018, 70, 1. [Google Scholar] [CrossRef]
  7. Tao, H.B.; Yu, Y.; Zhang, Q.; Sun, H.Y. A control system based on MCU for wearable power assist legs. In Proceedings of the IEEE International Conference on Robotics & Biomimetics, Karon Beach, Phuket, Thailand, 7–11 December 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 2193–2198. [Google Scholar]
  8. Victor, G.; Svetlana, G.; Bram, V.; Dirk, L.; Carlos, R.G. Multi-Axis force sensor for human-robot interaction sensing in a rehabilitation robotic device. Sensors 2017, 17, 1294. [Google Scholar]
  9. Hwang, B.; Jeon, D. Estimation of the user’s muscular torque for an over-ground gait rehabilitation robot using torque and insole pressure sensors. Int. J. Control Autom. 2018, 16, 275–283. [Google Scholar] [CrossRef]
  10. Wolf, S.; Grioli, G.; Eiberger, O.; Friedl, W.; Alin, A.S. Variable stiffness actuators: Review on design and components. IEEE/ASME Trans. Mechatron. 2016, 21, 2418–2430. [Google Scholar] [CrossRef]
  11. Bongsu, K.; Aurelien, R.; Deshpande, A.D. Impedance control based on a position sensor in a rehabilitation robot. In Proceedings of the ASME 2014 Dynamic Systems and Control Conference, San Antonio, TX, USA, 22–24 October 2014; pp. 1–7. [Google Scholar]
  12. Kim, B.; Deshpande, A.D. Controls for the shoulder mechanism of an upper-body exoskeleton for promoting scapulohumeral rhythm. In Proceedings of the IEEE International Conference on Rehabilitation Robotics, Singapore, 11–14 August 2015; IEEE: Piscataway, NJ, USA, 2015. [Google Scholar]
  13. Novak, D.; Riener, R. A survey of sensor fusion methods in wearable robotics. Robot. Auton. Syst. 2015, 73, 155–170. [Google Scholar] [CrossRef]
  14. Nguyen, T.H.; Chung, W.Y. Detection of driver braking intention using EEG signals during simulated driving. Sensors 2019, 19, 2863. [Google Scholar] [CrossRef] [PubMed]
  15. Washabaugh, E.P.; Guo, J.; Chang, C.K.; Remy, C.D. A portable passive rehabilitation robot for upper-extremity functional resistance training. IEEE Trans. Biomed. Eng. 2019, 66, 496–508. [Google Scholar] [CrossRef] [PubMed]
  16. Magdalena, Z.; Malgorzata, S.; Celina, P. Use of the surface electromyography for a quantitative trend validation of estimated muscle forces. Biocybern. Biomed. Eng. 2018, 38, 243–250. [Google Scholar]
  17. Tang, Z.C.; Shouqian, S.; Sanyuan, Z.; Chen, Y.; Li, C.; Chen, S. A Brain-Machine Interface Based on ERD/ERS for an Upper-Limb Exoskeleton Control. Sensors 2016, 16, 2050. [Google Scholar] [CrossRef]
  18. Yepes, J.C.; Portela, M.A.; Saldarriaga, Á.J.; Pérez, V.Z.; Betancur, M.J. Myoelectric control algorithm for robot-assisted therapy: A hardware-in-the-loop simulation study. Biomed. Eng. Online 2019, 18, 3. [Google Scholar] [CrossRef]
  19. Farina, D.; Jiang, N.; Rehbaum, H.; Holobar, A.; Graimann, B.; Dietl, H. The extraction of neural information from the surface emg for the control of upper-limb prostheses: Emerging avenues and challenges. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 797–809. [Google Scholar] [CrossRef]
  20. Ison, M.; Vujaklija, I.; Whitsell, B.; Farina, D.; Artemiadis, P. High-density electromyography and motor skill learning for robust long-term control of a 7-dof robot arm. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 424–433. [Google Scholar] [CrossRef]
  21. Engemann, D.A.; Gramfort, A. Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals. NeuroImage 2015, 108, 328–342. [Google Scholar] [CrossRef]
  22. Fougner, A.; Scheme, E.; Chan, A.D.C.; Englehart, K.; Stavdahl, J. Resolving the limb position effect in myoelectric pattern recognition. IEEE Trans. Neural Syst. Rehabil. Eng. 2011, 19, 644–651. [Google Scholar] [CrossRef]
  23. Peng, L.; Hou, Z.G.; Chen, Y.X.; Wang, W.Q.; Tong, L.N.; Li, P.F. Combined use of sEMG and accelerometer in hand motion classification considering forearm rotation. In Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 4227–4230. [Google Scholar]
  24. Lo, A.C.; Guarino, P.D.; Richards, L.G.; Jodie, K.H.; George, F.W.; Daniel, G.F.; Robert, J.R.; Todd, H.W.; Hermano, I.K.; Bruce, T.V.; et al. Robot-assisted therapy for long-term upper-limb impairment after stroke. N. Engl. J. Med. 2010, 362, 1772–1783. [Google Scholar] [CrossRef]
  25. Reinkensmeyer, D.J.; Wolbrecht, E.; Bobrow, J. A computational model of human-robot load sharing during robotassisted arm movement training after stroke. In Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 4019–4023. [Google Scholar]
  26. Qian, Z.Q.; Lv, D.Y.; Lv, Y. Modeling and quantification of impact of psychological factors on rehabilitation of stroke patients. IEEE J. Biomed. Health Inform. 2019, 23, 683–692. [Google Scholar] [CrossRef] [PubMed]
  27. Yatsenko, D.; Mcdonnall, D.; Guillory, K.S. Simultaneous, proportional, multi-axis prosthesis control using multichannel surface EMG. In Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 6133–6136. [Google Scholar]
  28. Lenzi, T.; De Rossi, S.M.M.; Vitiello, N.; Carrozza, M.C. Intention-based EMG control for powered exoskeletons. IEEE Trans. Biomed. Eng. 2012, 59, 2180–2190. [Google Scholar] [CrossRef] [PubMed]
  29. Duschau-Wicke, A.; Zitzewitz, J.V.; Caprez, A.; Lunenburger, L.; Riener, R. Path control: A method for patient-cooperative robot-aided gait rehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 38–48. [Google Scholar] [CrossRef] [PubMed]
  30. Krebs, H.I.; Palazzolo, J.J.; Dipietro, L.; Ferraro, M.; Krol, J.; Rannekleiv, K. Rehabilitation robotics: Performance-based progressive robot-assisted therapy. Auton. Robot. 2003, 15, 7–20. [Google Scholar] [CrossRef]
  31. Cai, L.L.; Fong, A.J.; Liang, Y.Q.; Burdick, J.; Edgerton, V.R. Assist-as-needed training paradigms for robotic rehabilitation of spinal cord injuries. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 3504–3511. [Google Scholar] [Green Version]
  32. Wolbrecht, E.T.; Chan, V.; Reinkensmeyer, D.J.; Bobrow, J.E. Optimizing compliant, model-based robotic assistance to promote neurorehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 2008, 16, 286–297. [Google Scholar] [CrossRef]
  33. Pehlivan, A.U.; Losey, D.P.; Ormalley, M.K. Minimal assist-as-needed controller for upper limb robotic rehabilitation. IEEE Trans. Robot. 2016, 32, 113–124. [Google Scholar] [CrossRef]
  34. Kleinsmith, A.; Bianchi-Berthouze, N. Affective body expression perception and recognition: A survey. IEEE Trans. Affect. Comput. 2013, 4, 15–33. [Google Scholar] [CrossRef]
  35. Khosrowabadi, R.; Quek, C.; Ang, K.K.; Wahab, A. ERNN: A biologically inspired feedforward neural network to discriminate emotion from EEG signal. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 609–620. [Google Scholar] [CrossRef]
  36. Xu, G.; Gao, X.; Pan, L.; Chen, s.; Wang, Q.; Zhu, B. Anxiety detection and training task adaptation in robot-assisted active stroke rehabilitation. Int. J. Adv. Robot. Syst. 2018, 15. [Google Scholar] [CrossRef]
  37. Ozkul, F.; Palaska, Y.; Masazade, E.; Erol-Barkana, D. Exploring dynamic difficulty adjustment mechanism for rehabilitation tasks using physiological measures and subjective ratings. IET Signal Process. 2019, 13, 378–386. [Google Scholar] [CrossRef]
  38. Fundaro, C.; Maestri, R.; Ferriero, G.; Chimento, P.; Taveggia, G.; Casale, R. Self-selected speed gait training in parkinson’s disease: Robot-assisted gait training with virtual reality versus gait training on the ground. Eur. J. Phys. Rehabil. Med. 2019, 55, 456–462. [Google Scholar] [CrossRef] [PubMed]
  39. Feng, Y.F.; Wang, H.B.; Yan, H.; Wang, X.; Jin, Z.; Vladareanu, L. Research on safety and compliance of a new lower limb rehabilitation robot. J. Healthc. Eng. 2017, 2017, 1523068. [Google Scholar] [CrossRef] [PubMed]
  40. Feng, Y.; Wang, H.; Vladareanu, L.; Wang, X.C.; Jin, Z.N.; Vladareanu, L. New Motion Intention Acquisition Method of Lower Limb Rehabilitation Robot Based on Static Torque Sensors. Sensors 2019, 19, 3439. [Google Scholar] [CrossRef]
  41. Wang, H.B.; Feng, Y.F.; Yu, H.N.; Wang, Z.H.; Vladareanuv, V.; Du, Y.X. Mechanical design and trajectory planning of a lower limb rehabilitation robot with a variable workspace. Int. J. Adv. Robot. Syst. 2018, 15. [Google Scholar] [CrossRef]
  42. Suykens, J.A.; Vandewalle, J. Least squares support vector machine classifiers. Neural Process. Lett. 1999, 9, 293–300. [Google Scholar] [CrossRef]
  43. Zhuang, J.X.; Jiang, H.Y.; Liu, L.L.; Wang, F.F.; Tang, L.; Zhu, Y. Parameters optimization of rice development stage model based on individual advantages genetic algorithm. Sci. Agric. Sin. 2013, 46, 2220–2231. [Google Scholar]
  44. Gou, X.; Peng, C.; Zhang, S.; Yan, J.; Duan, S.; Wang, L. A novel feature extraction approach using window function capturing and QPSO-SVM for enhancing electronic nose performance. Sensors 2015, 15, 15198–15217. [Google Scholar]
  45. Zhao, Y.T.; Shan, Z.Y.; Chang, Y.; Chen, Y.; Hao, X.C. Soft sensor modeling for cement fineness based on least squares support vector machine and mutual information. Chin. J. Sci. Instrum. 2017, 38, 487–496. [Google Scholar]
Figure 1. The LLR-II Rehabilitation.
Figure 2. The detailed design of LLR-II leg mechanism.
Figure 3. The detailed design of LLR-II leg mechanism.
Figure 4. The sensors system composition of LLR-II.
Figure 5. Leg model of lower limb rehabilitation robot.
Figure 6. End applied force model.
Figure 7. Assistance training control block diagram.
Figure 8. Sensors measuring terminal force during training: (a) Human–computer interaction of six-dimensional force acquisition under the training task of assistant force parameter K = 0.4; (b) the calculated terminal force in robot coordinates under the training task of assistant force parameter K = 0.4.
Figure 9. Terminal force during different difficulty tasks training: (a) the training task of assistant force parameter K = 0.1; (b) the training task of assistant force parameter K = 0.5; (c) the training task of assistant force parameter K = 0.6; (d) the training task of assistant force parameter K = 0.8.
Figure 10. Terminal position during different difficulty tasks training.
Figure 11. Characteristic quantity of training data under different task difficulties: (a) characteristic quantity of Volunteer 1# training data; (b) characteristic quantity of Volunteer 2# training data.
Figure 12. Comparison between task difficulty prediction and reality of test group.
Table 1. Characteristic parameters of volunteer participation.

| Type | Description |
| --- | --- |
| PRMSE | Mean square error of position |
| PSTD | Position standard deviation |
| TSCA | The proportion of time outside the safe passage |
| FQ | Inter-quartile range of terminal force |
| UMAX | Maximum value in frequency domain of terminal force |
| fMAX | Peak frequency in frequency domain of terminal force |
| FD | Component at frequency 0 in frequency domain of terminal force |
| FVAR | Variance of terminal force |
| POR | Offset range of position |
| UhMAX | Maximum value in frequency domain of volunteer motivation |
| fhMAX | Peak frequency in frequency domain of volunteer motivation |
| PMAE | Mean absolute error of position |
Table 2. Significance comparison of characteristic quantities.

| Comparison | PRMSE (P / F) | PSTD (P / F) | TSCA (P / F) |
| --- | --- | --- | --- |
| Difficult/Medium | 3.19 × 10−4 / 1.09 × 10−11 | 1.5 × 10−4 / 3.76 × 10−10 | 1.44 × 10−3 / 5.9 × 10−4 |
| Difficult/Easy | 2.74 × 10−7 / 3.2 × 10−14 | 4.26 × 10−8 / 1.08 × 10−12 | 2.06 × 10−11 / 1.9 × 10−4 |
| Medium/Easy | 8.08 × 10−9 / 0.1 | 1.26 × 10−8 / 0.112 | 6.13 × 10−10 / 0.634 |

| Comparison | FQ (P / F) | UMAX (P / F) | fMAX (P / F) |
| --- | --- | --- | --- |
| Difficult/Medium | 0.844 / 0.0904 | 0.136 / 0.07 | 0.39 / 0.382 |
| Difficult/Easy | 9.97 × 10−3 / 0.01 | 6.37 × 10−4 / 0.028 | 0.028 / 0.007 |
| Medium/Easy | 4.7 × 10−3 / 0.339 | 0.0147 / 0.65 | 0.154 / 0.066 |

| Comparison | FD (P / F) | FVAR (P / F) | POR (P / F) |
| --- | --- | --- | --- |
| Difficult/Medium | 0.735 / 2 × 10−5 | 0.255 / 0.001 | 7.59 × 10−6 / 0.0035 |
| Difficult/Easy | 0.033 / 0.221 | 0.961 / 0.072 | 3.07 × 10−13 / 5.66 × 10−11 |
| Medium/Easy | 0.0013 / 0.002 | 0.193 / 0.212 | 2.922 × 10−8 / 1.78 × 10−5 |

| Comparison | UhMAX (P / F) | fhMAX (P / F) | PMAE (P / F) |
| --- | --- | --- | --- |
| Difficult/Medium | 0.0013 / 0.009 | 0.001 / 0.6789 | 0.0005 / 5.38 × 10−12 |
| Difficult/Easy | 0.1478 / 0.003 | 0.072 / 0.5531 | 3.19 × 10−7 / 4.29 × 10−13 |
| Medium/Easy | 0.5216 / 6.06 × 10−7 | 0.929 / 0.8565 | 2.32 × 10−9 / 0.339 |
Table 3. Prediction errors (MSE, MAE, and STD) between predicted and true values of volunteer participation states.

| | MSE | MAE | STD |
| --- | --- | --- | --- |
| Matching degree | 0.0428 | 0.1822 | 0.1006 |

Yan, H.; Wang, H.; Vladareanu, L.; Lin, M.; Vladareanu, V.; Li, Y. Detection of Participation and Training Task Difficulty Applied to the Multi-Sensor Systems of Rehabilitation Robots. Sensors 2019, 19, 4681. https://0-doi-org.brum.beds.ac.uk/10.3390/s19214681


