1. Introduction
With the wide application of industrial robots, higher requirements are placed on their positioning accuracy. However, due to the influence of manufacturing errors, assembly errors, joint abrasion, and transmission errors of robot parts, the positioning accuracy of robots is often unsatisfactory, which limits their application in high-precision fields. Therefore, it is particularly important to study methods for improving the positioning accuracy of industrial robots.
In fact, the positioning accuracy of industrial robots can be divided into two parts: repeated positioning accuracy and absolute positioning accuracy (APA) [1]. In most cases, only the repeated positioning accuracy of the robot needs to be improved. For example, when an industrial robot welds a workpiece, the repeated accuracy of the robot normally determines the trajectory accuracy on the workpiece directly; in this situation the APA is of little importance. Therefore, much work has focused on improving the repeated positioning accuracy, which can be achieved by many methods such as soft computing and machine learning [2,3]. Industrial robots normally have the merit of high repeated positioning accuracy. For some small industrial robots, the repeated positioning accuracy can even reach 0.01 mm, while the APA of the same robots may be beyond 1 mm or worse [4]. In this respect, research on APA is obviously insufficient. For example, in multi-robot cooperation, kinematic parameter identification, and robot off-line programming tasks, it is necessary to calibrate the precise pose of the robot base coordinate frame in the world coordinate system, which requires that the industrial robot have sufficient APA [5].
The fundamental reason for the low APA of industrial robots is that the pose of the TCP (Tool Center Point) is calculated from a kinematic mathematical model that contains the structural dimensions of the links and the motion parameters of the joints. There are errors between the values used in the calculation and the actual parameters of the robot, resulting in a deviation between the calculated results and the actual pose. The following two methods can be used to enhance the robot's APA: (1) using high-accuracy machining equipment and advanced assembly methods to reduce link machining and assembly errors; (2) combining advanced measurement tools and related algorithms to identify and correct the robot's structural parameters, that is, the robot calibration method [6]. The first method is expensive, has a limited effect on improving the precision of the robot, and cannot solve the precision decline caused by wear and deformation. The second method is cheaper than the first and is more effective in compensating the precision degradation caused by wear or structural deformation [7]. Therefore, robot calibration is the main method used to enhance the APA of robots, and it is also a main research direction of many scholars [8].
In the study of robot calibration methods, Knasinski et al. [6] found that the absolute positioning error is mainly caused by the parameter errors of the robot kinematic model. Therefore, research on robot calibration technology focuses primarily on the error identification of robot kinematic parameters [9,10,11]. Current research on reducing the APA degradation caused by kinematic parameter errors usually assumes that the robot base coordinate system (RBCS) and the flange coordinate system (FCS) are known or can be directly measured. In fact, however, the flange and base coordinate systems of most robots are "hidden": they cannot be directly measured and need to be obtained through special measurement and calculation methods [12].
In reference [13], the position and pose measurement of a robot is realized using a coordinate theodolite, and the APA error of the robot is reduced to 0.28 mm after calibration. This instrument is designed according to the principle of angle detection and exhibits good measurement accuracy. However, the user needs relevant measurement skills, and the measurement results are easily disturbed by environmental factors. In reference [14], an IRB2400 robot was calibrated with a laser tracker. The base coordinate system was obtained by rotating axis 1 and axis 2 respectively and measuring the base plane, so that the coordinate transformation between the laser tracker and the robot base could be realized. Reference [15] uses a FARO arm to calibrate the robot and obtains the RBCS in quaternion form. Nubiola [16] used a FARO laser tracker to measure and calibrate an ABB IRB1600 industrial robot (ABB, Zurich, Switzerland). The mean absolute positioning error was reduced from 0.981 mm to 0.292 mm at eight different measurement points on the robot end effector. The measurement of such equipment is very accurate, and it can further cooperate with other relevant tools to obtain the target attitude. The equipment is easy to operate but very expensive.
In addition to using the above instruments to directly measure the absolute pose of the robot end for calibration, some scholars have studied robot self-calibration methods that indirectly determine the pose of the robot end through algorithms, without relying on external measuring instruments. Bennett [17] proposed a closed-loop self-calibration method, which keeps the end of the robot fixed at a point while changing its orientation, so as to finally solve a closed-loop equation. Joubair [18] used multi-plane constraints to calibrate a six-axis flexible robot. Legnani [19] used a corded sensor to measure the position and pose of the robot and complete the calibration of its kinematic parameters.
However, the existing robot calibration and measurement processes basically neglect the robot pose errors caused by the base coordinate system and the FCS. These errors are also an important part of the robot pose error model and should be considered in the calibration process; they cannot be reduced only by optimizing the motion algorithm of the manipulator. The absolute positioning error must be measured by a measuring instrument and then reduced algorithmically.
Robot calibration can be achieved by establishing a position error model or a pose error model of the robot. Since most measuring devices can measure the position of the robot end, the position error model is compatible with a wide range of measuring devices (such as coordinate measuring machines, six- or seven-axis measuring arms, and laser trackers with a target ball) and is therefore more versatile. This paper aims to enhance the APA of the robot. The hidden measurement errors from the robot's flange and base coordinate systems and the parameter errors of the kinematic model are unified as the sources of the absolute positioning error. A comparatively general measurement method for robot calibration is proposed, and the algorithms for parameter error identification and compensation are discussed in detail. The algorithms are verified by experiments.
2. Robot Calibration Modeling
Robot calibration needs to obtain the robot end pose error in the RBCS by means of measurement. However, the actual FCS and RBCS cannot be obtained through direct measurement. Therefore, a measurement coordinate system directly related to the measurement results needs to be established. In this paper, the position measured by the instrument is defined as the DCP (Detection Center Point), and a measurement coordinate system is established with the DCP as its origin. When measuring the pose of the robot end, it is necessary to obtain the position transformation between the DCP coordinate system and the FCS, and between the RBCS and the MICS (measuring instrument coordinate system), so that the pose model of the DCP coordinate system in the MICS can be obtained, as expressed in Figure 1. The first step of calibration is to build a robot kinematic model that can reflect the characteristics of the robot end.
2.1. Robot Kinematics Modeling
The most common kinematic model of industrial robots is the four-parameter DH model (short for Denavit–Hartenberg model) proposed by Denavit et al. [20] in 1955. This model has good generality and is currently the most widely used. However, in practice, when two adjacent axes are nominally parallel, a small deviation between them leads to numerical distortion during parameter identification based on the DH model, meaning that the parameters are discontinuous. To avoid this problem, Hayati [21] proposed the MDH model, a modified DH model, which adds a rotational parameter about the y axis and becomes a five-parameter model with two translations and three rotations.
In the equation, θ_i, d_i, a_i, and α_i respectively represent the joint angle, joint offset, link length, and twist angle of link i. The additional joint parameter β_i exists only when adjacent axes are parallel; in all other cases β_i = 0 is directly substituted into Equation (1).
According to Equation (1), the kinematic model of a general N degrees of freedom robot can be obtained as
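The MDH link transform and the chained kinematic model above can be sketched in code. The following Python snippet is illustrative only; the parameter tuple layout and function names are our own, and the conventions must be matched to the robot at hand:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def trans(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def mdh_link(theta, d, a, alpha, beta=0.0):
    """Homogeneous transform of one link under the five-parameter MDH model.
    beta is nonzero only when adjacent joint axes are nominally parallel."""
    return rot_z(theta) @ trans(0, 0, d) @ trans(a, 0, 0) @ rot_x(alpha) @ rot_y(beta)

def forward_kinematics(params, joint_angles):
    """Chain the link transforms of an N-DOF robot (Equation (2)).
    params: one (d, a, alpha, beta, theta_offset) tuple per link."""
    T = np.eye(4)
    for (d, a, alpha, beta, theta0), q in zip(params, joint_angles):
        T = T @ mdh_link(theta0 + q, d, a, alpha, beta)
    return T
```

For a degenerate one-link "robot" with a = 1 and all other parameters zero, `forward_kinematics` reduces to a pure translation along x, rotated by the joint angle, which gives a quick sanity check on the convention.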
2.2. Robot Flange Coordinate Calibration Model
The key step of robot calibration is to obtain the actual pose error of the robot end by measurement. However, the FCS of the robot end is usually "hidden", that is, it cannot be directly measured by measuring equipment [22]. There is a certain offset of the actual measured position with respect to the FCS (as expressed in Figure 2).
Let the transformation from the robot FCS to the DCP coordinate system be denoted by a homogeneous transformation. If the orientation is neglected, it can be expressed as follows:

Assume that the origin of the FCS lies at point O in a certain state. When the position of the robot end remains unchanged, the position of O can be obtained by fitting the center of a sphere while changing only the orientation of the robot end. The sphere center can be obtained by fitting more than four non-coplanar points in space. Suppose the sphere center coordinates are (x0, y0, z0), the radius is R, and the coordinates of the four measuring points are (xj, yj, zj), j = 1, …, 4. The following equations can be obtained from the function expression of the sphere:
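Such a sphere fit can be sketched numerically as follows. This is an illustrative implementation (function and variable names are our own): the sphere equation is linearized in the unknowns and solved by least squares, which also accommodates more than four points:

```python
import numpy as np

def fit_sphere_center(points):
    """Least-squares sphere fit to >= 4 non-coplanar points.
    Linearizes (x - x0)^2 + (y - y0)^2 + (z - z0)^2 = R^2 into A u = b with
    u = [x0, y0, z0, R^2 - x0^2 - y0^2 - z0^2]."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2 * P, np.ones((len(P), 1))])   # rows [2x, 2y, 2z, 1]
    b = (P ** 2).sum(axis=1)                       # rows x^2 + y^2 + z^2
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = u[:3]
    radius = np.sqrt(u[3] + center @ center)
    return center, radius
```

Because the problem is linear in the chosen unknowns, no initial guess or iteration is needed; non-coplanarity of the points is what keeps the system well conditioned.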
By solving Equation (4), the origin position of the FCS can be obtained; the orientation of the FCS then needs to be determined. As long as the robot controller can command the robot to move along the x, y, and z axes of the FCS in its TCP coordinate system, more than two points can be taken along each axis to determine its direction. Finally, the unit vectors of the three axes x, y, and z of the FCS can be obtained. On this basis, the pose matrix of the FCS at the robot end in the MICS can be obtained:
In Equation (5), the rotation part represents the orientation of the FCS in the MICS, and the translation part represents the position of the origin of the FCS in the MICS.
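Assembling this pose matrix from the fitted origin and the measured axis directions can be sketched as follows. This is an illustrative sketch under the assumption that one measured point along the flange x axis and one along the flange y axis are available; the helper name is our own:

```python
import numpy as np

def fcs_pose(origin, pt_on_x, pt_on_y):
    """Build the FCS pose matrix in the MICS from the fitted origin and one
    measured point along each of the flange x and y axes.
    y is re-orthogonalized against x and z follows by the right-hand rule."""
    x = pt_on_x - origin
    x = x / np.linalg.norm(x)
    y = pt_on_y - origin
    y = y - (y @ x) * x              # remove any drift of y along x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)               # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, origin
    return T
```

The re-orthogonalization step matters in practice: measured axis directions are never exactly perpendicular, and forcing orthogonality keeps the resulting rotation matrix valid.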
Assume that the DCP position measured by the instrument under the above robot pose, together with the coordinate transformation from the robot FCS to the measuring point, satisfies the following transformation relations:

From these, the pose transformation from the robot end FCS to the DCP point can be obtained:
2.3. Calibration Model of Robot Base Coordinate System
When the pose of the robot end is measured by the instrument, the data are obtained in the MICS. In order to compare the actual pose of the robot with the nominal one from the robot controller, they must be unified into the same coordinate system.
Suppose that, when the robot is in a certain state, the nominal orientation and position of the FCS at the robot end read from the controller are given, and that the position of a point on the robot end measured by the instrument is also available. Ignoring the orientation of the DCP coordinate system, the following relation can be obtained from the spatial geometric transformation:
In the equation, the two leading terms denote the rotation and translation transformations from the MICS to the RBCS, E denotes the 3 × 3 identity matrix, the remaining term denotes the position transformation from the FCS to the DCP, and the superscript N denotes that the value is nominal.
Set X to be the matrix of the 12 unknown transformation parameters. According to the matrix multiplication rule, Equation (8) can be expanded into a system of equations:

In the equations, each subscripted element represents the entry in row j, column i of the matrix X.
For the solution of Equation (9), there are 12 unknowns, so at least 4 sets of data are needed; that is, at least 4 points need to be measured. In practice, more than 4 positions are used to reduce errors, which makes Equation (9) an overdetermined system of equations. Set:
The overdetermined system can be represented in matrix form:

In the equation, A is the coefficient matrix composed of the nominal pose elements, and B is the vector of measured values.
The least squares method is used to solve Equation (11):
From this, the transformation relation X between the MICS and the RBCS is obtained, and the robot end pose measured by the instrument and the pose displayed in the robot controller can be unified into the same coordinate system for comparison.
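The normal-equation solution above can be sketched numerically as follows. This is an illustrative implementation with our own names: `src` holds points expressed in the MICS and `dst` the corresponding points in the RBCS, and the 12 unknowns are the entries of the rotation block and translation vector:

```python
import numpy as np

def fit_base_transform(src, dst):
    """Solve the 12 unknowns of [R | t] with dst ≈ R @ src + t via the normal
    equations X = (A^T A)^{-1} A^T B.  Each point pair contributes three rows;
    at least four non-coplanar pairs are required for a unique solution."""
    rows, b = [], []
    for p, q in zip(np.asarray(src, float), np.asarray(dst, float)):
        for j in range(3):                  # one equation per coordinate of q
            row = np.zeros(12)
            row[4 * j: 4 * j + 3] = p       # unknowns of rotation row R[j, :]
            row[4 * j + 3] = 1.0            # translation unknown t[j]
            rows.append(row)
            b.append(q[j])
    A, B = np.array(rows), np.array(b)
    x = np.linalg.solve(A.T @ A, A.T @ B)   # least-squares normal equations
    Rt = x.reshape(3, 4)
    return Rt[:, :3], Rt[:, 3]              # R, t
```

Note that this linear solve does not constrain the recovered R to be a proper rotation; with noisy measurements the result can be re-orthogonalized afterwards, e.g., via an SVD projection.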
3. DCP Position Error Model
When the RBCS is calculated, the position of the robot end is read directly from the robot teach pendant, which differs from the actual position; therefore, the calculated position of the RBCS also deviates from its true position in the MICS. Similarly, when the DCP position is measured and calculated, errors arise from measurement disturbances. The DCP position and the RBCS position calculated by the model are defined as the nominal DCP coordinate system and the nominal base coordinate system, as shown in Figure 3. The actual position of the DCP is measured directly by the instrument. The factors affecting the deviation between the nominal and actual positions of the DCP include the robot kinematic parameter errors, the RBCS position error, and the DCP coordinate system position error. These three aspects are analyzed separately below.
3.1. Robot Kinematics Parameter Errors
Because of wear and deformation, the parameters of the robot links have errors, which lead to deviations between the actual and nominal values of the robot kinematic model parameters. The parameter errors affecting the robot pose error are the errors in the joint angle, joint offset, link length, twist angle, and the additional rotation for parallel axes; the last of these exists only when adjacent axes are parallel, and is set to zero otherwise.
The actual transformation relationship between the RBCS and FCS can be represented by a model with parameter errors:
In the equation, Rot denotes a rotation transformation and Trans denotes a translation transformation.
3.2. Parameter Errors of Robot Base Coordinate System
The errors of the RBCS represent the parameter errors of the transformation model from the MICS to the RBCS. Let the transformation model from the MICS to the RBCS be given. The nominal pose matrix of the RBCS in the MICS, calculated by Equation (13), can be represented as follows:
In Equation (14), there are 12 parameters representing position and orientation. To reduce the number of parameters to be identified, the model can be rewritten as a 6-parameter model composed of one translation transformation and three rotational transformations.
Therefore, the parameter errors of the RBCS are the three translational errors and three rotational errors of this 6-parameter model.
These parameter errors are added to the nominal model to obtain the actual pose model of the RBCS:
In the equation, the superscript R represents the actual value.
3.3. DCP Coordinate System Parameter Errors
The parameter errors of the DCP coordinate system represent the parameter errors of the transformation model from the FCS to the DCP coordinate system. Assuming that the pose transformation matrix of the DCP coordinate system relative to the robot end FCS is known, the transformation model from the robot end flange to the DCP coordinate system can be obtained by imitating the error derivation process of the RBCS:
After the above three pose models are obtained, they can be integrated into the actual DCP pose model in the MICS. Let T be the transformation matrix from the MICS to the robot DCP coordinate system:
In the equation, the rotation part denotes the orientation of the DCP coordinate system in the MICS, and the translation part denotes the position of the DCP coordinate system in the MICS.
The actual pose of DCP can be represented as
The parameter errors affecting the DCP pose accuracy are:
The position error model is comparatively more general, so the orientation error of the DCP is neglected. Collecting the parameter errors into a single set, the actual position model of the DCP based on Equation (19) can be expressed as follows:
Therefore, the mathematical relationship between the DCP position error and the parameter errors of the model can be represented as
In Equation (21), the relationship between the DCP position error and the parameter errors is nonlinear, so it is difficult to solve for them directly. Instead, the problem can be analyzed from the perspective of partial differentiation. For a model with n parameter errors, Equation (21) can be rewritten as follows:
Equation (22) can be simplified to
In the equation, the position error term denotes the DCP position error, the parameter vector denotes all the parameter errors of the calibration model, and the coefficient matrix denotes the error coefficient matrix.
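Since the error coefficient matrix consists of partial derivatives of the DCP position with respect to each model parameter, it can be approximated numerically when an analytic form is inconvenient. The following sketch (names are our own; central differences are one of several valid choices) builds that matrix for any position function:

```python
import numpy as np

def error_jacobian(position_fn, params, eps=1e-6):
    """Numerical error coefficient matrix: central-difference partial
    derivatives of the DCP position with respect to each model parameter.
    position_fn maps a parameter vector to a DCP position vector."""
    params = np.asarray(params, dtype=float)
    p0 = np.asarray(position_fn(params), dtype=float)
    J = np.zeros((p0.size, params.size))
    for k in range(params.size):
        dp = np.zeros_like(params)
        dp[k] = eps
        J[:, k] = (np.asarray(position_fn(params + dp))
                   - np.asarray(position_fn(params - dp))) / (2 * eps)
    return J
```

Central differences are second-order accurate in `eps`, which is usually enough here because the parameter errors being identified are themselves small perturbations.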
5. Robot Calibration Experiment
The experimental object of this paper is an ABB IRB120 industrial robot. The measurement system is shown in Figure 4. The measuring instrument is a Leica AT960MR absolute laser tracker (Figure 5), which measures the end position of the robot with a laser-receiving target ball (Figure 6). The software SpatialAnalyzer (SA) is used for data acquisition from the AT960MR, data processing, and exporting the robot end position data in the tracker's own coordinate system. Using the coordinate conversion method described in Section 2.3, the data in the AT960MR coordinate system and the data in the RBCS are fitted with the least squares method, and the transformation matrix between the AT960MR coordinate system and the robot base is obtained with Matlab. The data in the laser tracker coordinate system can thus be converted to the RBCS, and the measured robot end pose and the pose displayed in the robot controller are unified into the same coordinate system for comparison.
The first step of the calibration experiment is modeling. First, the kinematic model of the robot is determined. The MDH model of the ABB IRB120 robot is expressed in Figure 7, and the MDH parameters are listed in Table 2.
The nominal transformation model parameters can be obtained by calibrating the robot DCP coordinate system and the RBCS, as shown in Table 3 and Table 4.
According to the analysis of redundant parameters in Section 3.1, certain parameter errors in the kinematic model of the robot and certain parameters in the transformation model of the RBCS are redundant and are excluded from identification, so there are 28 identifiable parameters in this calibration experiment. Fifty groups of robot pose data are taken as calibration points in the robot workspace, and the robot parameter errors are computed.
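The identification step can be sketched as an iterative linearized least-squares loop. This is illustrative only (names and the numerical Jacobian are our own; the paper's actual procedure may differ in details such as damping and convergence criteria): the linearized error model is stacked over all calibration points and solved for the parameter-error vector, updating the parameters until convergence:

```python
import numpy as np

def _jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f at x."""
    x = np.asarray(x, float)
    m = np.asarray(f(x)).size
    J = np.zeros((m, x.size))
    for k in range(x.size):
        d = np.zeros_like(x)
        d[k] = eps
        J[:, k] = (np.asarray(f(x + d)) - np.asarray(f(x - d))) / (2 * eps)
    return J

def identify_parameter_errors(position_fn, params0, measurements, iters=20):
    """Stack the linearized error model over all calibration points and solve
    for the parameter-error vector by least squares, iterating to convergence.
    position_fn(params, q): modelled DCP position for joint configuration q.
    measurements: iterable of (q, measured_position) pairs."""
    params = np.asarray(params0, float).copy()
    for _ in range(iters):
        res, jacs = [], []
        for q, p_meas in measurements:
            f = lambda x, q=q: position_fn(x, q)
            res.append(np.asarray(p_meas, float) - np.asarray(f(params)))
            jacs.append(_jacobian(f, params))
        dx, *_ = np.linalg.lstsq(np.vstack(jacs), np.concatenate(res), rcond=None)
        params += dx                      # Gauss-Newton style update
        if np.linalg.norm(dx) < 1e-12:
            break
    return params
```

For a model that is linear in its parameters the loop converges in a single update; for the nonlinear kinematic model it repeats the linearize-and-solve step, which is the usual Gauss-Newton treatment of this kind of identification problem.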
The calibrated parameter errors are substituted into the model using the compensation method proposed in Section 3.3. After measurement, the robot pose errors before and after compensation are obtained, as shown in Figure 8. The results of the compensation analysis are expressed in Table 8.
Comparing Table 7 and Table 8, the compensation effect on the 50 groups of calibration points is better than that on the 100 groups of sampling points. This is because the position error of the robot end is not only related to the parameter errors of the kinematic model but is also influenced by many other factors, and the error caused by measurement disturbance is unavoidable during measurement. In the parameter identification process, an optimal solution is obtained according to the position error characteristics of all 50 groups of calibration points. Among the 100 groups of sampling points other than the calibration points, some differ greatly from the position error characteristics of the calibration points, and the calibration results have a comparatively poor compensation effect on these sampling points. Therefore, the overall calibration effect is not as good as the compensation at the calibration points.
In order to verify the correlation between the calibration effect and the distribution of calibration points, the following experiments were added. Taking the point [300, 300, 300] in the RBCS as the center, three cubic spaces with side lengths of 600 mm, 450 mm, and 300 mm were established, defined as areas a, b, and c respectively. Fifty groups of robot pose data were measured in each of the three areas, ensuring that the sampling points are evenly distributed in each space, as shown in Figure 9.

The sampling points in areas b and c are taken as calibration points for the position error identification algorithm. The calculated parameter errors are then compensated to the sampling points in area a. Calculating and comparing the errors before and after calibration, the experimental results are shown in Figure 10 and Table 9.
From the above results, it can be seen that the more uniformly the selected calibration points are distributed in a given space, the better the effect of the calibration solution is.
For comparison, references [14,15,16] all use a laser tracker as the measuring instrument to improve the absolute positioning accuracy of the robot. Reference [14] improved the accuracy by about a factor of two. Reference [15] uses a different calibration method to reduce the maximum error by about 60%, but the experiment measured only 20 points, and the optimized maximum positioning deviation is still 1.30 mm. Reference [16] measures 1000 points in a certain workspace, reducing the mean absolute positioning error from 0.981 mm to 0.292 mm, a reduction of about 70%. The results of this experiment are obviously better than those of the first two papers. Compared with the method in reference [16], we take a relatively smaller number of points but achieve nearly the same accuracy improvement.