Article

Research of Calibration Method for Industrial Robot Based on Error Model of Position

1 Fujian Chuanzheng Communications College, Fuzhou 350007, China
2 School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350116, China
* Author to whom correspondence should be addressed.
Submission received: 25 December 2020 / Revised: 21 January 2021 / Accepted: 25 January 2021 / Published: 31 January 2021
(This article belongs to the Special Issue Robots Dynamics: Application and Control)

Abstract

Industrial robots typically combine high repeatability with comparatively low absolute positioning accuracy (APA). To address this, a calibration method for enhancing the APA of industrial robots is proposed. In view of the "hidden" nature of the RBCS (robot base coordinate system) and the FCS (flange coordinate system) during measurement, a comparatively general method for measuring and calibrating the RBCS and the FCS is proposed, and the sources of the robot end position error are classified into three aspects: the positioning error of the RBCS, the kinematics parameter errors of the manipulator, and the positioning error of the robot end FCS. A robot position error model is established, and the equation relating the robot end position error to the model parameter errors is derived. By solving this equation, the parameter errors are identified and compensated, and a compensation method based on adjusting the robot joint angles is realized. A Leica laser tracker is used to verify the calibration method on an ABB IRB120 industrial robot. The experimental results show that the calibration method can effectively enhance the APA of the robot.

1. Introduction

With the wide application of industrial robots, higher requirements are placed on their positioning accuracy. However, due to the influence of manufacturing errors, assembly errors, joint wear, and transmission errors of robot parts, the positioning accuracy of robots is often unsatisfactory, which limits their application in high-precision fields. Therefore, it is particularly important to study methods for improving the positioning accuracy of industrial robots.
In fact, the positioning accuracy of industrial robots can be divided into two parts: repeated positioning accuracy (repeatability) and absolute positioning accuracy (APA) [1]. In most cases, only the repeatability of the robot needs to be improved. For example, when an industrial robot welds a workpiece, the repeatability normally determines the trajectory accuracy directly, and the APA is of little consequence. Therefore, much attention has been paid to improving repeatability, which can be achieved by many methods such as soft computing and machine learning [2,3]. Industrial robots normally have the merit of high repeatability: for some small industrial robots the repeatability can even reach 0.01 mm, while the APA may be beyond 1 mm or worse [4]. Research on APA is therefore comparatively insufficient. Yet, for example, in multi-robot cooperation, kinematics parameter identification, and robot off-line programming tasks, it is necessary to calibrate the precise pose of the robot base coordinate frame in the world coordinate system, which requires that the industrial robot have sufficient APA [5].
The fundamental reason for the low APA of industrial robots is that the pose of the TCP (Tool Center Point) is calculated by a kinematics model that contains the structural dimensions of the links and the motion parameters of the joints. There are errors between the values used in the calculation and the actual parameters of the robot, resulting in a deviation between the calculated pose and the actual pose. Two methods can be used to enhance the robot's APA: (1) using high-accuracy machining equipment and advanced assembly methods to reduce link machining and assembly errors; (2) combining advanced measurement tools and related algorithms to identify and correct the robot structural parameters, that is, the robot calibration method [6]. The first method is expensive, has a limited effect on improving the precision of the robot, and cannot solve the precision decline caused by wear and deformation. The second method is cheaper than the first and is more effective in compensating the precision degradation caused by wear or structural deformation [7]. Therefore, robot calibration is the main method for enhancing the APA of robots, and it is also the main research direction of many scholars [8].
In the study of robot calibration methods, Judd and Knasinski [6] found that the absolute positioning error is mainly caused by the parameter errors of the robot kinematics model. Therefore, research on robot calibration technology focuses primarily on the identification of kinematics parameter errors [9,10,11]. Current research on reducing the APA degradation caused by kinematic parameter errors usually assumes that the robot base coordinate system (RBCS) and the flange coordinate system (FCS) are known or can be directly measured. However, in most robots the flange and base coordinate systems are "hidden": they cannot be directly measured and need to be obtained through special measurement and calculation methods [12].
In reference [13], the position and pose of a robot were measured using coordinate theodolites, and the APA error of the robot was reduced to 0.28 mm after calibration. This instrument is designed according to the principle of angle detection and exhibits good measurement accuracy; however, the user needs relevant measurement skills, and the results are easily disturbed by environmental factors. In reference [14], an IRB2400 robot was calibrated with a laser tracker. In that application, the base coordinate system was obtained by rotating axis 1 and axis 2, respectively, and measuring the base plane, so that the coordinate transformation between the laser tracker and the robot base could be realized. Reference [15] uses a FARO arm to calibrate the robot and obtains the RBCS in quaternion form. Nubiola and Bonev [16] used a FARO laser tracker to measure and calibrate an ABB IRB1600 industrial robot (ABB, Zurich, Switzerland). The mean absolute positioning error was reduced from 0.981 mm to 0.292 mm at eight different measurement points on the robot end effector. Such equipment is very accurate and can be combined with other tools to obtain the target attitude; it is easy to operate, but very expensive.
In addition to using the above measuring instruments to directly measure the absolute pose of the robot end for calibration, some scholars have studied self-calibration methods that determine the end pose indirectly through algorithms, without relying on external measuring instruments. Bennett and Hollerbach [17] proposed a closed-loop self-calibration method, in which the end of the robot is kept fixed at a point while its orientation is changed, so that a closed-loop equation can be solved. Joubair and Bonev [18] used multi-plane constraints to calibrate a six-axis serial robot. Legnani and Tiboni [19] used a wire sensor to measure the position and pose of the robot and complete the calibration of its kinematic parameters.
However, in existing robot calibration and measurement processes, the pose errors introduced by the base coordinate system and the FCS are basically neglected, even though they are an important part of the robot pose error model and should be considered during calibration. This part of the error cannot be reduced merely by optimizing the motion algorithm of the manipulator; the absolute positioning error must be measured by a measuring instrument and then reduced algorithmically.
Robot calibration can be achieved by establishing either a position error model or a pose error model. Since most measuring devices can measure the position of the robot end, the position error model is suitable for a wider range of measuring devices (such as coordinate measuring machines, six- or seven-axis measuring arms, and laser trackers with a target ball) and is therefore more versatile. This paper aims to enhance the APA of the robot. The hidden measurement errors of the robot's flange and base coordinate systems and the kinematics model parameter errors are unified as the sources of the absolute positioning error. A comparatively general measurement method for robot calibration is proposed, the algorithms for parameter error identification and compensation are discussed in detail, and the algorithms are verified by experiments.

2. Robot Calibration Modeling

Robot calibration requires obtaining the robot end pose error in the RBCS by measurement. However, the actual FCS and RBCS cannot be obtained through direct measurement. Therefore, it is necessary to establish a measurement coordinate system directly related to the measurement results. In this paper, the point measured by the measuring instrument is defined as the DCP (Detection Center Point), and a measurement coordinate system is established with the DCP as its origin. When measuring the pose of the robot end, it is necessary to obtain the position transformation between the DCP coordinate system and the FCS, and between the RBCS and the MICS (measuring instrument coordinate system), so that the pose model of the DCP coordinate system in the MICS can be obtained, as shown in Figure 1. The first step of calibration is to build a robot kinematics model that can reflect the characteristics of the robot end.

2.1. Robot Kinematics Modeling

The most common kinematics model of industrial robots is the four-parameter DH model (Denavit–Hartenberg model) proposed by Denavit and Hartenberg [20] in 1955. This model has good generality and is currently the most widely used. However, in practice, when two adjacent axes are nominally parallel, a small deviation between them leads to numerical distortion in DH-based parameter identification; that is, the identified parameters become discontinuous. To avoid this problem, Hayati [21] proposed the MDH model, a modified DH model that adds a rotational parameter about the y axis, giving a five-parameter model with two translations and three rotations.
$T_{i-1}^{i} = \mathrm{Rot}(z_{i-1},\theta_i)\,\mathrm{Trans}(z_{i-1},d_i)\,\mathrm{Trans}(x_i,a_i)\,\mathrm{Rot}(x_i,\alpha_i)\,\mathrm{Rot}(y_i,\beta_i) = \begin{pmatrix} c\theta_i c\beta_i - s\theta_i s\alpha_i s\beta_i & -s\theta_i c\alpha_i & c\theta_i s\beta_i + s\theta_i s\alpha_i c\beta_i & a_i c\theta_i \\ s\theta_i c\beta_i + c\theta_i s\alpha_i s\beta_i & c\theta_i c\alpha_i & s\theta_i s\beta_i - c\theta_i s\alpha_i c\beta_i & a_i s\theta_i \\ -c\alpha_i s\beta_i & s\alpha_i & c\alpha_i c\beta_i & d_i \\ 0 & 0 & 0 & 1 \end{pmatrix}$ (1)
In the equation, $\theta_i$, $d_i$, $a_i$, and $\alpha_i$ respectively represent the joint angle, joint offset, link length, and link twist of link $i$, and $c$ and $s$ are shorthand for cosine and sine. The parameter $\beta_i$ exists only when adjacent axes are parallel; in all other cases, $\beta_i = 0$ is substituted directly into Equation (1).
According to Equation (1), the kinematic model of a general robot with N degrees of freedom can be obtained as
$T_0^N = T_0^1 T_1^2 \cdots T_{N-1}^N = \begin{pmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{pmatrix}.$ (2)
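As an illustration of Equations (1) and (2), the following is a minimal sketch (not the authors' code) of the five-parameter MDH link transform and the chained forward kinematics in Python with NumPy. The function names and table layout are illustrative assumptions; the IRB120 values of Table 2 could be supplied as the MDH table (angles in radians, lengths in mm).

```python
import numpy as np

def mdh_transform(theta, d, a, alpha, beta=0.0):
    """T_{i-1}^{i} = Rot(z, theta) Trans(z, d) Trans(x, a) Rot(x, alpha) Rot(y, beta), as in Equation (1)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    rot_z   = np.array([[ct, -st, 0, 0], [st,  ct, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    trans_z = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, d], [0, 0, 0, 1]])
    trans_x = np.array([[1, 0, 0, a], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    rot_x   = np.array([[1, 0, 0, 0], [0, ca, -sa, 0], [0, sa, ca, 0], [0, 0, 0, 1]])
    rot_y   = np.array([[cb, 0, sb, 0], [0, 1, 0, 0], [-sb, 0, cb, 0], [0, 0, 0, 1]])
    return rot_z @ trans_z @ trans_x @ rot_x @ rot_y

def forward_kinematics(joint_angles, mdh_table):
    """Chain the link transforms to obtain T_0^N of Equation (2).

    mdh_table rows: (theta_offset, d, a, alpha, beta) for each link."""
    T = np.eye(4)
    for q, (theta_off, d, a, alpha, beta) in zip(joint_angles, mdh_table):
        T = T @ mdh_transform(q + theta_off, d, a, alpha, beta)
    return T
```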

2.2. Robot Flange Coordinate Calibration Model

The key step of robot calibration is to obtain the actual pose error of the robot end by measurement. However, the FCS of the robot end is usually "hidden"; that is, it cannot be measured directly by the measuring equipment [22]. There is a certain offset of the actual measured position with respect to the FCS (as shown in Figure 2).
Let T 6 D denote the transformation from robot FCS to DCP coordinate system. If the pose is neglected, it can be expressed as follows:
T 6 D = ( 1 0 0 P x L 0 1 0 P y L 0 0 1 P z L 0 0 0 1 ) .
Assume that the origin of the FCS is at point O in a certain robot state. If the position of the robot end is held fixed while only its orientation is changed, the measured DCP positions lie on a sphere centered at O, and the coordinates of O can be obtained by fitting the center of that sphere. The sphere center can be fitted from at least four non-coplanar points in space. Suppose the sphere center coordinates are $(a, b, c)$, the radius is $R$, and the coordinates of four measured points are $(x_1, y_1, z_1)$, $(x_2, y_2, z_2)$, $(x_3, y_3, z_3)$, $(x_4, y_4, z_4)$. The following equations can be obtained from the equation of the sphere:
$\begin{cases} (x_1-a)^2 + (y_1-b)^2 + (z_1-c)^2 = R^2 \\ (x_2-a)^2 + (y_2-b)^2 + (z_2-c)^2 = R^2 \\ (x_3-a)^2 + (y_3-b)^2 + (z_3-c)^2 = R^2 \\ (x_4-a)^2 + (y_4-b)^2 + (z_4-c)^2 = R^2 \end{cases}$ (4)
By solving Equation (4), the origin position of the FCS is obtained; its orientation is needed next. Provided that the robot controller can move the robot along the x, y, and z axes of the FCS (its TCP coordinate system), at least two points are measured along each axis to determine the direction of that axis. In this way, the vectors n, o, a of the three axes x, y, z of the FCS are obtained. On this basis, the pose matrix of the FCS at the robot end in the MICS can be obtained:
$T_C^6 = \begin{pmatrix} R_C^6 & P_C^6 \\ 0 & 1 \end{pmatrix}.$ (5)
In Equation (5), $R_C^6$ represents the orientation of the FCS in the MICS, and $P_C^6$ represents the position of the origin of the FCS in the MICS.
Assuming that the DCP pose measured by the instrument in the above robot configuration is $T_C^D$ and the coordinate transformation from the robot FCS to the measuring point is $T_6^D$, the following transformation relation holds:
$T_C^D = T_C^6 \cdot T_6^D.$ (6)
The pose transformation $T_6^D$ from the robot end FCS to the DCP can then be obtained:
$T_6^D = (T_C^6)^{-1} \cdot T_C^D.$ (7)
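As a sketch of how Equations (4)–(7) can be evaluated in practice (illustrative code, not from the paper), the flange-center position is recovered by a linear least-squares sphere fit to at least four non-coplanar DCP measurements taken while only the orientation is changed, and the flange-to-DCP offset then follows from Equation (7).

```python
import numpy as np

def fit_sphere_center(points):
    """Least-squares sphere fit (Equation (4)); points is a (K, 3) array, K >= 4, non-coplanar."""
    P = np.asarray(points, dtype=float)
    # (x-a)^2 + (y-b)^2 + (z-c)^2 = R^2 is linear in (a, b, c, k) with k = R^2 - a^2 - b^2 - c^2:
    # 2ax + 2by + 2cz + k = x^2 + y^2 + z^2
    A = np.hstack([2.0 * P, np.ones((P.shape[0], 1))])
    b = np.sum(P**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def flange_to_dcp(T_C6, T_CD):
    """Equation (7): T_6^D = (T_C^6)^{-1} T_C^D."""
    return np.linalg.inv(T_C6) @ T_CD
```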

2.3. Calibration Model of Robot Base Coordinate System

When the pose of the robot end is measured by the measuring instrument, the data are obtained in the MICS. In order to compare the actual pose of the robot with the nominal pose from the robot controller, they must be unified into the same coordinate system.
Suppose that, for a certain robot state, the nominal position and orientation of the FCS at the robot end read from the controller are $P^N$ and $R^N$, respectively, and that the position of a point on the robot end measured by the instrument is $P^C$. Ignoring the orientation of the DCP coordinate system, the following relation can be obtained from the spatial geometric transformation:
$\begin{pmatrix} E & P^C \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} R & T \\ 0 & 1 \end{pmatrix} \begin{pmatrix} R^N & P^N \\ 0 & 1 \end{pmatrix} \begin{pmatrix} E & P^L \\ 0 & 1 \end{pmatrix}.$ (8)
In the equation, $R$ and $T$ denote the rotation and translation relating the RBCS to the MICS, $E$ denotes the 3 × 3 identity matrix, $P^L$ denotes the position transformation from the FCS to the DCP, and the superscript $N$ denotes a nominal value.
Set $T^{NL} = \begin{pmatrix} R^N & P^N \\ 0 & 1 \end{pmatrix}\begin{pmatrix} E & P^L \\ 0 & 1 \end{pmatrix}$. According to the matrix multiplication rule, Equation (8) can be expanded into the following system of equations:
$\begin{cases} n_x T^{NL}_{1,4} + o_x T^{NL}_{2,4} + a_x T^{NL}_{3,4} + T_x = P^C_x \\ n_y T^{NL}_{1,4} + o_y T^{NL}_{2,4} + a_y T^{NL}_{3,4} + T_y = P^C_y \\ n_z T^{NL}_{1,4} + o_z T^{NL}_{2,4} + a_z T^{NL}_{3,4} + T_z = P^C_z \end{cases}$ (9)
In the equation, $T^{NL}_{i,j}$ represents the element in row $i$ and column $j$ of the matrix $T^{NL}$.
Equation (9) contains 12 unknowns, so at least 4 sets of data are needed; that is, at least 4 points must be measured. In practice, more than 4 positions are used to reduce errors, which makes Equation (9) an overdetermined system of equations. Set:
X = ( n x o x a x T x n y o y a y T y n z o z a z T z 0 0 0 1 ) .
The overdetermined system of equations can then be written in matrix form:
$XA = B.$ (11)
In the equation, $A$ is the coefficient matrix composed of the elements of $T^{NL}$, and $B$ is the matrix consisting of the measured values $P^C$.
The least squares method is used to solve Equation (11):
$X = B A^{\top} (A A^{\top})^{-1}.$ (12)
From this, the transformation between the MICS and the RBCS is obtained as X, so that the robot end pose measured by the instrument and the pose displayed by the robot controller can be unified in the same coordinate system for comparison.
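A possible implementation of the base-frame identification of Equations (9)–(12) is sketched below (illustrative, not the authors' code): each configuration contributes one column of A, built from the nominal DCP position in the base frame, and one column of B, the measured DCP position in the tracker frame. Because the rotation block of the least-squares solution is not guaranteed to be orthonormal, it is common to re-orthogonalize it afterwards (for example, by an SVD projection).

```python
import numpy as np

def identify_base_transform(nominal_positions_base, measured_positions_mics):
    """Solve X A = B (Equation (11)) in the least-squares sense; inputs are (K, 3) arrays, K >= 4."""
    N = np.asarray(nominal_positions_base, dtype=float)   # nominal DCP positions (column 4 of T^NL)
    C = np.asarray(measured_positions_mics, dtype=float)  # measured DCP positions P^C
    A = np.vstack([N.T, np.ones(N.shape[0])])             # 4 x K coefficient matrix
    B = C.T                                               # 3 x K measured matrix
    X = B @ A.T @ np.linalg.inv(A @ A.T)                  # least squares: X = B A^T (A A^T)^-1
    return np.vstack([X, [0.0, 0.0, 0.0, 1.0]])           # homogeneous 4 x 4 transform
```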

3. DCP Position Error Model

When calculating the RBCS, the position of the robot end is read directly from the robot teach pendant, which differs from the actual position. Therefore, the position of the RBCS obtained by calculation also differs from its actual position in the MICS. Similarly, when measuring and calculating the position of the DCP, there are errors due to measurement disturbances. The DCP position and the RBCS position calculated by the model are defined as the nominal DCP coordinate system and the nominal base coordinate system, as shown in Figure 3. The actual position of the DCP is measured directly by the instrument. The factors affecting the deviation between the nominal and actual DCP positions include the robot kinematics parameter errors, the RBCS position error, and the DCP coordinate system position error. These three aspects are analyzed separately below.

3.1. Robot Kinematics Parameter Errors

Because of wear and deformation, the parameters of the robot links contain errors, which lead to deviations between the actual and nominal values of the kinematics model parameters. The parameter errors affecting the robot pose error are
$\Delta\theta = [\Delta\theta_1, \Delta\theta_2, \ldots, \Delta\theta_6];\ \Delta d = [\Delta d_1, \Delta d_2, \ldots, \Delta d_6];\ \Delta a = [\Delta a_1, \Delta a_2, \ldots, \Delta a_6];\ \Delta\alpha = [\Delta\alpha_1, \Delta\alpha_2, \ldots, \Delta\alpha_6];\ \Delta\beta = [\Delta\beta_1, \Delta\beta_2, \ldots, \Delta\beta_6],$
where $\Delta\beta_i$ exists only for parallel adjacent axes; for non-parallel axes, $\Delta\beta_i = 0$.
The actual transformation relationship between the RBCS and FCS can be represented by a model with parameter errors:
$T_0^6 = T_0^1 \cdot T_1^2 \cdot T_2^3 \cdot T_3^4 \cdot T_4^5 \cdot T_5^6 = \prod_{i=1}^{6} R(z_{i-1},\theta_i+\Delta\theta_i)\, T(z_{i-1},d_i+\Delta d_i)\, T(x_i,a_i+\Delta a_i)\, R(x_i,\alpha_i+\Delta\alpha_i)\, R(y_i,\beta_i+\Delta\beta_i).$ (13)
In the equation, R means rotation transformation, and T means translation transformation.
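The error-laden model of Equation (13) can be evaluated by perturbing the nominal MDH parameters before chaining the link transforms; the sketch below (an assumption, reusing the mdh_transform helper sketched after Equation (2)) shows one way to do this.

```python
import numpy as np  # mdh_transform is the helper sketched after Equation (2)

def forward_kinematics_with_errors(joint_angles, mdh_table, parameter_errors):
    """T_0^6 of Equation (13); parameter_errors rows: (d_theta, d_d, d_a, d_alpha, d_beta) per link."""
    T = np.eye(4)
    for q, nominal, err in zip(joint_angles, mdh_table, parameter_errors):
        theta_off, d, a, alpha, beta = nominal
        dt, dd, da, dal, db = err
        T = T @ mdh_transform(q + theta_off + dt, d + dd, a + da, alpha + dal, beta + db)
    return T
```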

3.2. Parameter Errors of Robot Base Coordinate System

The errors of the RBCS represent the parameter errors of the transformation model from the MICS to the RBCS. Let $T_C^0$ be this transformation model. The nominal pose matrix of the RBCS in the MICS, obtained by the calibration procedure in Section 2.3, can be represented as follows:
$(T_C^0)^N = \begin{pmatrix} n_x & o_x & a_x & T_x \\ n_y & o_y & a_y & T_y \\ n_z & o_z & a_z & T_z \\ 0 & 0 & 0 & 1 \end{pmatrix}.$ (14)
Equation (14) contains 12 parameters representing position and orientation. To reduce the number of parameters to be identified, it can be rewritten as a 6-parameter model composed of one translation and three rotations:
$(T_C^0)^N = T(x_c, y_c, z_c)\, R(x_c, \theta_{xc})\, R(y_c, \theta_{yc})\, R(z_c, \theta_{zc}).$ (15)
Therefore, the parameter errors of the RBCS are: [ Δ x c , Δ y c , Δ z c , Δ θ x c , Δ θ y c , Δ θ z c ] .
The parameter errors are added to T C 0 to get the actual pose model of the RBCS as
$(T_C^0)^R = T(x_c+\Delta x_c,\, y_c+\Delta y_c,\, z_c+\Delta z_c)\, R(x_c, \theta_{xc}+\Delta\theta_{xc})\, R(y_c, \theta_{yc}+\Delta\theta_{yc})\, R(z_c, \theta_{zc}+\Delta\theta_{zc}).$ (16)
In the equation, the superscript R represents the actual value.
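For concreteness, here is a small helper (a sketch; the name is illustrative) for the six-parameter pose model of Equations (15) and (16), i.e., a translation followed by rotations about x, y, and z. The same construction applies to the DCP-frame model in the next subsection.

```python
import numpy as np

def pose_6param(x, y, z, rx, ry, rz):
    """T(x, y, z) * R(x, rx) * R(y, ry) * R(z, rz); angles in radians."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = np.eye(4)
    R[:3, :3] = Rx @ Ry @ Rz
    return T @ R
```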

3.3. DCP Coordinate System Parameter Errors

The parameter errors of the DCP coordinate system represent the parameter errors of the transformation model from the FCS to the DCP coordinate system. Assuming that the pose transformation matrix of the DCP coordinate system relative to the robot end FCS is $T_6^D$, the actual transformation model from the robot end flange to the DCP coordinate system can be obtained by following the same error derivation as for the RBCS:
$(T_6^D)^R = T(x_D+\Delta x_D,\, y_D+\Delta y_D,\, z_D+\Delta z_D)\, R(x_D, \theta_{xD}+\Delta\theta_{xD})\, R(y_D, \theta_{yD}+\Delta\theta_{yD})\, R(z_D, \theta_{zD}+\Delta\theta_{zD}).$ (17)
After the above three pose models are obtained, they can be combined into the actual DCP pose model in the MICS. Let T be the transformation matrix from the MICS to the robot DCP coordinate system:
$T = \begin{pmatrix} R & P \\ 0 & 1 \end{pmatrix}.$ (18)
In the equation, R denotes the orientation of the DCP coordinate system in the MICS, and P denotes its position in the MICS.
The actual pose of DCP can be represented as
$T^R = (T_C^0)^R \cdot (T_0^6)^R \cdot (T_6^D)^R.$ (19)
The parameter errors affecting the DCP pose accuracy are
$\Delta q = [\Delta x_C, \Delta y_C, \Delta z_C, \Delta\theta_{xC}, \Delta\theta_{yC}, \Delta\theta_{zC},\ \Delta\theta_1 \sim \Delta\theta_6,\ \Delta d_1 \sim \Delta d_6,\ \Delta a_1 \sim \Delta a_6,\ \Delta\alpha_1 \sim \Delta\alpha_6,\ \Delta\beta_1 \sim \Delta\beta_6,\ \Delta x_D, \Delta y_D, \Delta z_D, \Delta\theta_{xD}, \Delta\theta_{yD}, \Delta\theta_{zD}].$
The position error model is more general than the pose error model, so the orientation error of the DCP is neglected. Let $q$ denote the nominal parameter set and $\Delta q$ the corresponding set of parameter errors. Based on Equation (19), the actual position model of the DCP can be expressed as follows:
$P^R = \begin{pmatrix} P_x^R \\ P_y^R \\ P_z^R \end{pmatrix} = \begin{pmatrix} f_{P_x}(q+\Delta q) \\ f_{P_y}(q+\Delta q) \\ f_{P_z}(q+\Delta q) \end{pmatrix}.$ (20)
Therefore, the mathematical relationship between the DCP position error and the parameter errors of the model can be represented as
$\Delta P^R = \begin{pmatrix} \Delta P_x^R \\ \Delta P_y^R \\ \Delta P_z^R \end{pmatrix} = \begin{pmatrix} f_{P_x}(q+\Delta q) - f_{P_x}(q) \\ f_{P_y}(q+\Delta q) - f_{P_y}(q) \\ f_{P_z}(q+\Delta q) - f_{P_z}(q) \end{pmatrix}.$ (21)
In Equation (21), the relationship between the DCP position error and the parameter errors is nonlinear, so it is difficult to solve directly. It can instead be linearized using partial derivatives. For a model with n parameter errors, Equation (21) can be rewritten as follows:
$\Delta P^R = \begin{pmatrix} \Delta P_x^R \\ \Delta P_y^R \\ \Delta P_z^R \end{pmatrix} = \begin{pmatrix} \dfrac{\partial f_{P_x}}{\partial q_1} & \dfrac{\partial f_{P_x}}{\partial q_2} & \cdots & \dfrac{\partial f_{P_x}}{\partial q_n} \\ \dfrac{\partial f_{P_y}}{\partial q_1} & \dfrac{\partial f_{P_y}}{\partial q_2} & \cdots & \dfrac{\partial f_{P_y}}{\partial q_n} \\ \dfrac{\partial f_{P_z}}{\partial q_1} & \dfrac{\partial f_{P_z}}{\partial q_2} & \cdots & \dfrac{\partial f_{P_z}}{\partial q_n} \end{pmatrix} \begin{pmatrix} \Delta q_1 \\ \Delta q_2 \\ \vdots \\ \Delta q_n \end{pmatrix}.$ (22)
Equation (22) can be simplified to
$\varepsilon = M \Delta q.$
In the equation, ε denotes the DCP position error, Δ q denotes all the parameter errors of the calibration model, and M denotes the error coefficient matrix.
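One practical way to obtain the error coefficient matrix M, sketched below under the paper's assumptions, is numerical differentiation: the DCP position is treated as a function of all calibration parameters (Equation (19) composed from the three pose models above), and each column of M is the finite-difference partial derivative with respect to one parameter. The function dcp_position is an assumed user-supplied routine; it is not defined in the paper.

```python
import numpy as np

def error_jacobian(dcp_position, q, joints, eps=1e-6):
    """Return the error coefficient matrix M (3 x n) in epsilon = M dq for one robot configuration.

    dcp_position(q, joints) must return the 3-vector DCP position predicted by the model."""
    q = np.asarray(q, dtype=float)
    p0 = np.asarray(dcp_position(q, joints), dtype=float)
    M = np.zeros((3, q.size))
    for j in range(q.size):
        dq = np.zeros_like(q)
        dq[j] = eps
        M[:, j] = (np.asarray(dcp_position(q + dq, joints)) - p0) / eps
    return M
```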

4. Parameter Error Identification and Compensation

4.1. Redundancy Parameter Analysis

In Section 3, the robot DCP position error model $\varepsilon = M \Delta q$ was obtained. In this equation, $\varepsilon$ can be measured by the measuring equipment; $M$ is the error coefficient matrix, which can be derived from the error model; and $\Delta q$ is the vector of parameter errors to be solved. Therefore, to obtain the parameter errors, it is only necessary to substitute the measured position error $\varepsilon$ into the error model and solve for $\Delta q$. However, there may be redundancy among the kinematics parameters of the robot, which leads to correlation among the columns of the coefficient matrix $M$ and directly affects the accuracy of $\Delta q$.
Let $T_i^{i+1}$ denote the homogeneous transformation from LCS (link coordinate system) {i} to {i+1} of the robot. Considering only the pose transformation, $\beta_i = 0$ is substituted directly into Equation (1), which gives:
$T_i^{i+1} = \begin{pmatrix} c\theta_i & -s\theta_i c\alpha_i & s\theta_i s\alpha_i & a_i c\theta_i \\ s\theta_i & c\theta_i c\alpha_i & -c\theta_i s\alpha_i & a_i s\theta_i \\ 0 & s\alpha_i & c\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{pmatrix}.$ (23)
According to the differential kinematics of the robot, the differential transformation relation between the LCS {i} and {i+1} can be obtained by combining Equation (23):
$\begin{pmatrix} d_x^{i+1} \\ d_y^{i+1} \\ d_z^{i+1} \\ \delta_x^{i+1} \\ \delta_y^{i+1} \\ \delta_z^{i+1} \end{pmatrix} = \begin{pmatrix} c\theta_i & s\theta_i & 0 & -d_i s\theta_i & d_i c\theta_i & 0 \\ -c\alpha_i s\theta_i & c\alpha_i c\theta_i & s\alpha_i & a_i s\alpha_i s\theta_i - d_i c\alpha_i c\theta_i & -a_i s\alpha_i c\theta_i - d_i c\alpha_i s\theta_i & a_i c\alpha_i \\ s\alpha_i s\theta_i & -s\alpha_i c\theta_i & c\alpha_i & a_i c\alpha_i s\theta_i + d_i s\alpha_i c\theta_i & -a_i c\alpha_i c\theta_i + d_i s\alpha_i s\theta_i & -a_i s\alpha_i \\ 0 & 0 & 0 & c\theta_i & s\theta_i & 0 \\ 0 & 0 & 0 & -c\alpha_i s\theta_i & c\alpha_i c\theta_i & s\alpha_i \\ 0 & 0 & 0 & s\alpha_i s\theta_i & -s\alpha_i c\theta_i & c\alpha_i \end{pmatrix} \begin{pmatrix} d_x^{i} \\ d_y^{i} \\ d_z^{i} \\ \delta_x^{i} \\ \delta_y^{i} \\ \delta_z^{i} \end{pmatrix} = J_i^{i+1} D.$ (24)
In the equation, $J_i^{i+1}$ is the error coefficient matrix representing the differential-motion transformation from the LCS {i} to {i+1}.
In $J_i^{i+1}$, for a general robot with revolute joints, $\theta_i$ is a variable and $\alpha_i$ is a constant. Therefore, only the third and sixth columns of $J_i^{i+1}$ remain unchanged, while the remaining columns change with $\theta_i$. Let $J_{i+1}^{i+1}$ represent the differential transformation matrix of the coordinate system {i+1} relative to itself:
$J_{i+1}^{i+1} = E_{6\times 6},$ (25)
where $E_{6\times 6}$ is the 6 × 6 identity matrix.
From the relationship between the column vectors of $J_i^{i+1}$ and $J_{i+1}^{i+1}$, the following relations can be found:
$\begin{cases} (J_i^{i+1})_3 = s\alpha_i\, (J_{i+1}^{i+1})_2 + c\alpha_i\, (J_{i+1}^{i+1})_3 \\ (J_i^{i+1})_6 = a_i c\alpha_i\, (J_{i+1}^{i+1})_2 - a_i s\alpha_i\, (J_{i+1}^{i+1})_3 + s\alpha_i\, (J_{i+1}^{i+1})_5 + c\alpha_i\, (J_{i+1}^{i+1})_6 \end{cases}$ (26)
In Equation (26), $(J_i^{i+1})_k$ and $(J_{i+1}^{i+1})_k$ denote the $k$th columns of $J_i^{i+1}$ and $J_{i+1}^{i+1}$, respectively.
Assuming that the robot has N degrees of freedom, both sides of the above equations are multiplied by $J_{i+1}^{N}$, so that the differential errors of the two coordinate systems are transferred to the robot end:
$\begin{cases} (J_i^{N})_3 = s\alpha_i\, (J_{i+1}^{N})_2 + c\alpha_i\, (J_{i+1}^{N})_3 \\ (J_i^{N})_6 = a_i c\alpha_i\, (J_{i+1}^{N})_2 - a_i s\alpha_i\, (J_{i+1}^{N})_3 + s\alpha_i\, (J_{i+1}^{N})_5 + c\alpha_i\, (J_{i+1}^{N})_6 \end{cases}$ (27)
Equation (27) shows that the influence of the differential errors $d_z^i$ and $\delta_z^i$ of the LCS {i} on the robot end can be expressed by the differential errors of the LCS {i+1}; that is, $d_z^i$ and $\delta_z^i$ are linearly related to the differential errors of the LCS {i+1} and are therefore redundant. In particular, $d_z$ and $\delta_z$ are redundant parameters in the transformation model from the MICS to the RBCS. The differential error of each LCS of the robot model also needs to be expressed in terms of the kinematic parameter errors:
$\begin{pmatrix} d_x^{i} \\ d_y^{i} \\ d_z^{i} \\ \delta_x^{i} \\ \delta_y^{i} \\ \delta_z^{i} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ a_i c\alpha_i & s\alpha_i & 0 & 0 & 0 \\ -a_i s\alpha_i & c\alpha_i & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ s\alpha_i & 0 & 0 & 0 & 1 \\ c\alpha_i & 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \Delta\theta_i \\ \Delta d_i \\ \Delta a_i \\ \Delta\alpha_i \\ \Delta\beta_i \end{pmatrix} = G_i\, \Delta q_i.$ (28)
Equation (28) shows that the only variables in $G_i$ are $a_i$ and $\alpha_i$. The different cases of $a_i$ and $\alpha_i$ are analyzed, and the resulting redundancies are summarized in Table 1; a numerical cross-check of this analysis is sketched below.
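One way to cross-check the redundancy analysis numerically (a sketch, not part of the paper's method) is to stack M over many robot configurations and test which columns add no new rank; those columns correspond to candidate redundant parameters.

```python
import numpy as np

def dependent_columns(M_stacked, tol=1e-8):
    """Return indices of columns of the stacked Jacobian that are linear combinations of earlier ones."""
    redundant = []
    basis = np.zeros((M_stacked.shape[0], 0))
    for j in range(M_stacked.shape[1]):
        trial = np.hstack([basis, M_stacked[:, [j]]])
        if np.linalg.matrix_rank(trial, tol=tol) == basis.shape[1]:
            redundant.append(j)   # column j adds no rank: candidate redundant parameter
        else:
            basis = trial
    return redundant
```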

4.2. Parameter Identification Algorithm

The differential error model of the robot DCP position is:
$\varepsilon = M \Delta q.$ (29)
Assuming that the redundant parameters have been removed from $\Delta q$ and that the number of identifiable parameters is N, the position and orientation coefficient matrices $M_p$ and $M_\delta$ are each of size 3 × N; for the position error model, only the position part (written $M$ below) is used. To solve for N parameter errors, at least N equations are needed; that is, at least N/3 sampling points must be measured before the equations can be solved. In actual measurement, unavoidable error factors interfere with the accuracy of the solution, so a large amount of data is needed to fit the model. Assuming that K (K > N/3) groups of robot position data are measured, the following system of equations is obtained:
$\begin{cases} M_{11}^{(1)}\Delta q_1 + M_{12}^{(1)}\Delta q_2 + \cdots + M_{1N}^{(1)}\Delta q_N = \Delta P_{1x} \\ M_{21}^{(1)}\Delta q_1 + M_{22}^{(1)}\Delta q_2 + \cdots + M_{2N}^{(1)}\Delta q_N = \Delta P_{1y} \\ M_{31}^{(1)}\Delta q_1 + M_{32}^{(1)}\Delta q_2 + \cdots + M_{3N}^{(1)}\Delta q_N = \Delta P_{1z} \\ \quad\vdots \\ M_{11}^{(K)}\Delta q_1 + M_{12}^{(K)}\Delta q_2 + \cdots + M_{1N}^{(K)}\Delta q_N = \Delta P_{Kx} \\ M_{21}^{(K)}\Delta q_1 + M_{22}^{(K)}\Delta q_2 + \cdots + M_{2N}^{(K)}\Delta q_N = \Delta P_{Ky} \\ M_{31}^{(K)}\Delta q_1 + M_{32}^{(K)}\Delta q_2 + \cdots + M_{3N}^{(K)}\Delta q_N = \Delta P_{Kz} \end{cases}$ (30)
where $M^{(k)}$ denotes the coefficient matrix evaluated at the $k$th calibration point and $(\Delta P_{kx}, \Delta P_{ky}, \Delta P_{kz})$ is the measured position error at that point.
Equation (30) is an overdetermined system of equations, which can be solved by the least squares method:
$\Delta q = M^{+} \varepsilon.$ (31)
In the equation, $M^{+}$ represents the generalized (pseudo-) inverse of the stacked coefficient matrix $M$.
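A sketch of the identification step of Equations (30) and (31): the per-point coefficient matrices and measured position errors are stacked over the K calibration points, and the parameter errors are obtained with the pseudo-inverse. Function and variable names are illustrative; in practice the solution is often re-linearized and iterated until the correction converges.

```python
import numpy as np

def identify_parameter_errors(M_list, error_list):
    """M_list: K matrices of shape (3, N); error_list: K measured DCP position errors of shape (3,)."""
    M = np.vstack(M_list)              # (3K, N) stacked coefficient matrix
    eps = np.concatenate(error_list)   # (3K,) stacked position errors
    dq = np.linalg.pinv(M) @ eps       # least-squares solution of Equation (31)
    return dq
```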

4.3. Parameter Error Compensation Algorithm

After the error of each parameter is obtained by the identification algorithm, the revised parameters could, in principle, be entered into the robot controller to complete the calibration. However, most manufacturers do not authorize users to modify the robot structural parameters directly. In this case, only the joint angles θ can be adjusted, while the other kinematics parameters cannot be modified.
The errors of $d$, $a$, $\alpha$, and $\beta$ in the kinematics parameters affect the pose error differently in different robot configurations. In this paper, the Newton iteration method is used to adjust the joint angles θ to compensate for the pose deviation caused by the errors of $d$, $a$, $\alpha$, and $\beta$, thereby compensating the pose accuracy of the robot end. Assuming that the robot's target pose is $T^N$, the identified $\Delta d_i$, $\Delta a_i$, $\Delta\alpha_i$, and $\Delta\beta_i$ are substituted into the model to obtain $T^R$, and the difference between the nominal and actual poses is expanded in terms of the partial derivatives with respect to the joint angles:
$T^N - T^R = \dfrac{\partial T}{\partial\theta_1}\Delta\theta_1 + \dfrac{\partial T}{\partial\theta_2}\Delta\theta_2 + \dfrac{\partial T}{\partial\theta_3}\Delta\theta_3 + \dfrac{\partial T}{\partial\theta_4}\Delta\theta_4 + \dfrac{\partial T}{\partial\theta_5}\Delta\theta_5 + \dfrac{\partial T}{\partial\theta_6}\Delta\theta_6.$ (32)
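The joint-angle compensation of Equation (32) can be realized, for the position error model, by a Newton (Gauss-Newton) iteration such as the sketch below; position_with_errors is an assumed routine that evaluates the calibrated (error-laden) forward model, and the names and tolerances are illustrative.

```python
import numpy as np

def compensate_joints(position_with_errors, target_pos, theta0, tol=1e-4, max_iter=20, eps=1e-6):
    """Adjust the commanded joint angles so the calibrated model reaches the nominal target position."""
    theta = np.asarray(theta0, dtype=float).copy()
    target = np.asarray(target_pos, dtype=float)
    for _ in range(max_iter):
        p0 = np.asarray(position_with_errors(theta), dtype=float)
        residual = target - p0
        if np.linalg.norm(residual) < tol:
            break
        # Numerical Jacobian of the end position with respect to the joint angles (3 x 6)
        J = np.zeros((3, theta.size))
        for j in range(theta.size):
            dth = np.zeros_like(theta)
            dth[j] = eps
            J[:, j] = (np.asarray(position_with_errors(theta + dth)) - p0) / eps
        theta = theta + np.linalg.pinv(J) @ residual   # Newton / Gauss-Newton update of Equation (32)
    return theta
```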

5. Robot Calibration Experiment

The experimental object of this paper is an ABB IRB120 industrial robot. The measurement system is shown in Figure 4. The measuring instrument is a Leica AT960MR absolute laser tracker (Figure 5), which measures the end position of the robot with a laser-receiving target ball (Figure 6). The SpatialAnalyzer (SA) software is used for data acquisition from the Leica AT960MR, data processing, and exporting the robot end position data in the tracker's own coordinate system. Using the coordinate conversion method described in Section 2.3, the data in the AT960MR coordinate system and the data in the RBCS are fitted with the least squares method, and the transformation matrix between the AT960MR coordinate system and the robot base is obtained in MATLAB. The data in the laser tracker coordinate system can then be converted to the RBCS, so that the measured robot end pose and the end pose displayed in the robot controller are unified into the same coordinate system for comparison.
The first step of the calibration experiment is modeling. First, the kinematics model of the robot is determined. The MDH model of the ABB IRB120 robot is shown in Figure 7, and the MDH parameters are listed in Table 2.
The nominal parameters of the transformation models, obtained by calibrating the robot DCP coordinate system and the RBCS, are listed in Table 3 and Table 4.
According to the redundancy analysis in Section 4.1, the redundant parameter errors in the kinematics model of this robot are $\Delta d_2$, $\Delta d_6$, and $\Delta\theta_6$, and the redundant parameters in the transformation model of the RBCS are $\Delta z$ and $\delta_z$. Therefore, there are 28 identifiable parameters in this calibration experiment. Fifty groups of robot pose data are taken as calibration points in the robot workspace, and the robot parameter errors are computed.
The identified parameter errors of the error models are listed in Table 5, Table 6 and Table 7.
The identified parameter errors are substituted into the model using the compensation method proposed in Section 4.3. After measurement, the robot position errors before and after compensation are obtained, as shown in Figure 8. The compensation results are summarized in Table 8.
Comparing Table 7 and Table 8, the compensation effect at the 50 calibration points is better than that at the 100 sampling points. This is because the position error of the robot end is not only related to the parameter errors of the kinematics model but is also influenced by many other factors; in addition, errors caused by measurement disturbances are unavoidable. In the parameter identification process, a set of optimal solutions is obtained according to the position error characteristics of the 50 calibration points. Among the 100 sampling points other than the calibration points, some differ considerably from the error characteristics of the calibration points, and the calibration result compensates these sampling points comparatively poorly. Hence the overall calibration effect is not as good as the compensation at the calibration points.
To verify the correlation between the calibration effect and the distribution of the calibration points, an additional experiment was carried out. Taking the point (300, 300, 300) in the RBCS as the center, three cubic spaces with side lengths of 600 mm, 450 mm, and 300 mm were established, defined as areas a, b, and c, respectively. Fifty groups of robot pose data were measured in each of the three areas, with the sampling points distributed evenly in each space, as shown in Figure 9.
The sampling points in areas b and c are used as calibration points for the position error identification algorithm, and the identified parameter errors are then used to compensate the sampling points in area a. The errors before and after calibration are calculated and compared; the experimental results are shown in Figure 10 and Table 9.
From the above results, it can be seen that the more uniformly the selected calibration points are distributed over a given space, the better the calibration result is.
For comparison, references [14,15,16] all use a laser tracker as the measuring instrument to improve the absolute positioning accuracy of a robot. Reference [14] improved the accuracy by about a factor of two. Reference [15] uses a different calibration method to reduce the maximum error by about 60%, but only 20 points were measured, and the optimized maximum positioning deviation is still 1.30 mm. Reference [16] measures 1000 points in a certain workspace and reduces the mean absolute positioning error from 0.981 mm to 0.292 mm, a reduction of about 70%. The results of the present experiment are clearly better than those of the first two studies; compared with the method in reference [16], considerably fewer points are used while nearly the same accuracy improvement is achieved.

6. Summary

To allow robots to be applied in high-precision fields, this paper studies methods for improving robot accuracy. First, the DCP coordinate system is established from the perspective of actual measurement, and comparatively general measurement methods for the DCP coordinate system and the RBCS are given, so that a complete error model can be established. Then, differential kinematics is used to establish a differential error model, and the error identification equation is obtained. The redundant parameters in the error model are analyzed, and a method for identifying and handling them is given. Finally, a compensation method that does not require modifying the parameters of the robot's underlying model is proposed. Experiments show that the calibration method proposed in this paper can effectively reduce the end position error of the robot and is of practical significance for robot precision improvement.

Author Contributions

Conceptualization, T.C. and J.L.; methodology, T.C.; software, J.L. and D.W.; validation, J.L. and D.W.; data curation, J.L. and D.W.; writing—original draft preparation, T.C.; writing—review and editing, H.W.; visualization, J.L.; supervision, H.W.; project administration, H.W.; funding acquisition, H.W.; All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key Research and Development Project with grant number 2018YFB1308603, and Transportation Science and Technology Project of Fujian Province with grant number 2020030.

Institutional Review Board Statement

Not applicable for studies not involving humans or animals.

Informed Consent Statement

Not applicable for studies not involving humans.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Summers, M. Robot capability test and development of industrial robot positioning system for the aerospace industry. SAE Trans. 2005, 114, 1108–1118. [Google Scholar]
  2. Guerra, R.H.; Quiza, R.; Villalonga, A.; Arenas, J.; Castaño, F. Digital Twin-Based Optimization for Ultraprecision Motion Systems with Backlash and Friction. IEEE Access 2019, 7, 93462–93472. [Google Scholar] [CrossRef]
  3. Kelly, R.; Haber, R.; Haber-Guerra, R.E.; Reyes, F. Lyapunov Stable Control of Robot Manipulators: A Fuzzy Self-Tuning Procedure. Intell. Autom. Soft Comput. 1999, 5, 313–326. [Google Scholar] [CrossRef]
  4. Han, X.; Du, D.; Chen, Q.; Wang, G.; He, Y.; Sui, B. Research on Trajectory Accuracy Measurement of industrial robot based on kinematic analysis. Robot 2002, 24, 1–5. [Google Scholar]
  5. Hayati, S.; Mirmirani, M. Improving the absolute positioning accuracy of robot manipulators. J. Field Robot. 2010, 2, 397–413. [Google Scholar] [CrossRef]
  6. Judd, R.P.; Knasinski, A.B. A technique to calibrate industrial robots with experimental verification. IEEE Trans. Robot Autom. 1987, 6, 20–30. [Google Scholar] [CrossRef]
  7. Whitney, D.; Lozinski, C.; Rourke, J. Industrial Robot Forward Calibration Method and Results. J. Dyn. Syst. Meas. Control ASME 1986, 108, 1–8. [Google Scholar] [CrossRef]
  8. He, R.; Zhao, Y.; Yang, S.; Yang, S. Kinematic-Parameter Identification for Serial-Robot Calibration Based on POE Formula. Robot IEEE Trans. 2010, 26, 411–423. [Google Scholar]
  9. Wang, Y.; Wang, G.; Yun, C. A calibration method of kinematic parameters for serial industrial robots. Ind. Robot Int. J. 2014, 41, 157–165. [Google Scholar] [CrossRef]
  10. Xin, G.; Bo, F. Calibration procedure for a system of two coordinated manipulators. Int. J. Robot Autom. 1995, 10, 152–158. [Google Scholar]
  11. Wang, Z.; Xu, H.; Chen, G.; Sun, R.; Lining, S. A distance error based industrial robot kinematic calibration method. Ind. Robot Int. J. 2014, 41, 439–446. [Google Scholar] [CrossRef]
  12. Young, K.-Y.; Chen, J.-J.; Wang, C.-C. An automated robot calibration system based on a variable D-H parameter model. In Proceedings of the 35th IEEE Conference on Decision and Control, Kobe, Japan, 13 December 1996; Volume 1, pp. 881–886. [Google Scholar]
  13. Chen, J.; Chao, L.M. Positioning error analysis for robot manipulators with all rotary joints. Robot. Autom. IEEE J. 1987, 3, 539–545. [Google Scholar] [CrossRef]
  14. Ren, Y. Method of robot calibration based on laser tracker. Chin. J. Mech. Eng. 2007, 43, 195–200. [Google Scholar] [CrossRef]
  15. Wang, W.; Liu, L.; Wang, G.; Yun, C. Calibration method of robot base frame by quaternion form. J. Beijing Univ. Aeronaut Astronaut 2015, 41, 47–54. [Google Scholar]
  16. Nubiola, A.; Bonev, I.A. Absolute calibration of an ABB IRB 1600 robot using a laser tracker. Robot Comput. Integr. Manuf. 2013, 29, 236–245. [Google Scholar] [CrossRef]
  17. Bennett, D.J.; Hollerbach, J.M. Self-calibration of single-loop, closed kinematic chains formed by dual or redundant manipulators. In Proceedings of the 27th IEEE Conference on Decision and Control, Austin, TX, USA, 7–9 December 1988; Volume 1, pp. 627–629. [Google Scholar]
  18. Joubair, A.; Bonev, I.A. Non-kinematic calibration of a six-axis serial robot using planar constraints. Precis. Eng. 2015, 40, 325–333. [Google Scholar] [CrossRef]
  19. Legnani, G.; Tiboni, M. Optimal design and application of a low-cost wire-sensor system for the kinematic calibration of industrial manipulators. Mech. Mach. Theory 2014, 73, 25–48. [Google Scholar] [CrossRef]
  20. Denavit, J.; Hartenberg, R.S. A Kinematic Notation for Lower-Pair Mechanisms. ASME J. Appl. Mech. 1955, 22, 215–221. [Google Scholar]
  21. Hayati, S.A. Robot arm geometric link parameter estimation. In Proceedings of the 22nd IEEE Conference on Decision and Control, San Antonio, TX, USA, 14–16 December 1983; pp. 1477–1483. [Google Scholar]
  22. Park, I.-W.; Lee, B.-J.; Cho, S.-H.; Hong, Y.-D.; Kim, J.-H. Laser-Based Kinematic Calibration of Robot Manipulator Using Differential Kinematics. Mechatron. IEEE ASME Trans. 2012, 17, 1059–1067. [Google Scholar] [CrossRef]
Figure 1. Detection Center Point (DCP) coordinate system position model.
Figure 2. Actual measuring point and flange coordinate system (FCS).
Figure 3. Actual position and nominal position of DCP.
Figure 4. Experimental scenario.
Figure 5. Leica laser tracker.
Figure 6. Target ball.
Figure 7. ABB IRB 120 MDH model.
Figure 8. Position error curve of robot before and after compensation.
Figure 9. Three kinds of space size and sampling point distribution.
Figure 10. Compensation effect of sampling points in area a.
Table 1. Redundancy identification of robot kinematics parameters.

Identification Condition | Redundant Parameters
α_i ≠ 0 | None
α_i = 0 and a_i ≠ 0 | Δd_i
α_i = 0 and a_i = 0 | Δd_i, Δθ_i
Table 2. Nominal parameters of robot MDH model.

Axis i | θ_i | d_i (mm) | a_i (mm) | α_i (°) | β_i (°)
1 | θ_1 | 290 | 0 | −90 | —
2 | θ_2 − 90 | 0 | 270 | 0 | 0
3 | θ_3 | 0 | 70 | −90 | —
4 | θ_4 | 302 | 0 | 90 | —
5 | θ_5 | 0 | 0 | −90 | —
6 | θ_6 | 72 | 0 | 0 | —
Table 3. Nominal parameters of robot base coordinate transformation model.

x (mm) | y (mm) | z (mm) | Rx (°) | Ry (°) | Rz (°)
−1384.05 | 1448.6 | −391.47 | 1.6 | 1.86 | 0.1
Table 4. Nominal parameters of robot DCP coordinate transformation model.

x (mm) | y (mm) | z (mm) | Rx (°) | Ry (°) | Rz (°)
−27.31 | −19.82 | 25.71 | — | — | —
Table 5. Parameter errors of the robot base coordinate system (RBCS) error model.

Δx (mm) | Δy (mm) | Δz (mm) | ΔRx (°) | ΔRy (°) | ΔRz (°)
0.2451 | −0.1712 | — | 0.0348 | 0.0478 | —
Table 6. Parameter errors of the DCP coordinate system error model of the robot.

Δx (mm) | Δy (mm) | Δz (mm) | ΔRx (°) | ΔRy (°) | ΔRz (°)
0.0746 | 0.0849 | −0.0687 | — | — | —
Table 7. Parameter errors identified based on the end pose.

Axis i | Δθ (°) | Δd (mm) | Δa (mm) | Δα (°) | Δβ (°)
1 | −0.0616 | −0.1641 | 0.0745 | −0.0883 | —
2 | −0.0398 | — | −0.2474 | 0.0695 | 0.0701
3 | 0.0801 | 0.1487 | 0.2094 | −0.0962 | —
4 | 0.0997 | 0.1345 | −0.1471 | −0.1321 | —
5 | 0.0157 | 0.0745 | −0.0677 | 0.1027 | —
6 | — | — | 0.0224 | −0.0754 | —
Table 8. Position error before and after compensation at the calibration points.

 | Before Calibration | After Calibration | Decrease
Mean position error (mm) | 1.2248 | 0.2678 | 78.1%
Maximum position error (mm) | 1.6780 | 0.3876 | 76.9%
Standard deviation (mm) | 1.4688 | 0.3525 | 76.0%
Table 9. Compensation effect of sampling points in area a (position error average, mm).

 | Before Calibration | After Calibration | Decrease
Area b calibration solution applied to area a sampling points | 1.3052 | 0.3432 | 73.7%
Area c calibration solution applied to area a sampling points | 1.3052 | 0.4623 | 64.5%