Article

Uncalibrated Visual Servoing for Underwater Vehicle Manipulator Systems with an Eye in Hand Configuration Camera

National Key Laboratory of Science and Technology of Underwater Vehicle, Harbin Engineering University, No. 145 Nantong Street, Harbin 150001, China
* Author to whom correspondence should be addressed.
Submission received: 10 November 2019 / Revised: 3 December 2019 / Accepted: 8 December 2019 / Published: 11 December 2019
(This article belongs to the Section Physical Sensors)

Abstract

This paper presents an uncalibrated visual servoing scheme for underwater vehicle manipulator systems (UVMSs) with an eye-in-hand camera under uncertainties. These uncertainties involve the vision sensor parameters, the UVMS kinematics, and the feature position information. First, a linear separation approach is introduced to collect these uncertainties into constant vectors; this approach can also be utilized in other free-floating manipulator systems. Second, a novel nonlinear adaptive controller is proposed to achieve image error convergence by estimating these vectors, and the gradient projection method is utilized to optimize the restoring moments. Third, a high order disturbance observer is introduced to deal with time-varying disturbances, and the convergence of the image errors is proved under the Lyapunov theory. Finally, in order to illustrate the effectiveness of the proposed method, numerical simulations based on a 9-degrees-of-freedom (9-DOF) UVMS with an eye-in-hand camera are conducted. In the simulations, the UVMS is expected to track a circular trajectory on the image plane while time-varying disturbances are exerted on the system. The proposed scheme achieves accurate and smooth tracking results throughout the simulations.

1. Introduction

Nowadays, underwater vehicles play a pivotal role in oceanic exploration, submarine salvage, and scientific expeditions [1,2]. Highly integrated underwater vehicles, e.g., Ocean One [3], the SAUVIM Intervention Autonomous Underwater Vehicle (I-AUV) [4], and the Girona 500 I-AUV [5], have been developed for field intervention operations based on their camera-robot systems. Compared with remote operation of underwater vehicles, autonomous underwater operation by visual servoing not only reduces the dependence on the mother ship but also decreases disturbances from the umbilical cable, enabling underwater vehicles to complete manipulations autonomously with less involvement from professional pilots [6,7].
Visual servoing based on a vision sensor (i.e., an underwater camera) has gained significant attention in underwater autonomous intervention. The purpose of visual servoing on underwater vehicles is to control the end-effector to complete specific intervention tasks through visual information [8]. Visual servoing can be generally divided into two categories [9]: position-based visual servoing (PBVS) and image-based visual servoing (IBVS). The PBVS scheme requires visual sensors to obtain the three-dimensional (3D) position of the target and then drives the end-effector to the desired position, which means accurate calibration results of the camera-robot system are often indispensable [10]. The classical IBVS scheme guides the end-effector directly through information about the target; its core step is to calculate the inverse or the transpose of the image Jacobian matrix (interaction matrix) [11]. However, both schemes require the intrinsic and extrinsic parameters of the camera. It is well known that camera calibration is costly and tedious, especially in underwater environments [12]. Therefore, in order to achieve satisfactory visual servoing results in underwater interventions, it is not suitable to directly utilize either the PBVS scheme or the classical IBVS scheme.
To avoid the influence of inaccurate camera calibration, uncalibrated visual servoing based on the IBVS scheme, which does not require perfect knowledge of the actual vision sensor parameters, has attracted great attention in the last twenty years. A large portion of this work turned the image Jacobian matrix calculation into a parameter identification problem by establishing proper cost functions [13]. Then, without knowing the exact camera parameters in advance, the image Jacobian matrix in the IBVS scheme can be acquired by estimating its elements numerically (e.g., the Broyden-based method [14], the recursive least squares method [15], the Kalman method [16], and the SVM-based method [17]). However, such kinematics-based uncalibrated visual servoing schemes usually ignore the nonlinear forces in the robot dynamics, assume the controller of the system is ideal, and do not take the control scheme into account in the stability analysis.
Such drawbacks are largely remedied by dynamics-based visual servoing schemes using a depth-independent interaction matrix [18]; this pioneering work proposed a method to deal with time-varying and uncertain depth in the visual servoing process. Wang et al. [19] investigated the uncalibrated visual servoing problem with uncertain models and presented the passivity properties of the overall kinematic system. In [20], Wang H. et al. proposed an adaptive controller with a velocity observer to avoid velocity measurement noise. For object grasping, a manipulator with an eye-in-hand camera is common because the feature points should be extracted on the object; however, these articles only consider the eye-to-hand camera configuration. Liang et al. [21] proposed a unified method for both eye-in-hand and eye-to-hand camera configurations, and this work also gave a separation algorithm for the kinematic properties of the manipulator system; however, the uncertain kinematics of the manipulator were not considered. Wang et al. [22] proposed a dynamics-based uncalibrated visual servoing approach for a free-floating manipulator; however, it assumed the system has no external disturbances, and this assumption is often hard to satisfy in underwater environments.
Although some research involving uncalibrated visual servoing has been conducted on industrial manipulators and nonholonomic mobile robots, visual servoing for the UVMS is a largely underexplored domain. Li et al. [23] presented a learning-based IBVS scheme for an absorptive underwater vehicle to realize autonomous sea-cucumber capture. Gao et al. [24] proposed a hierarchical IBVS strategy for the underwater vehicle dynamic positioning problem. Sivčev et al. [25] proposed an engineering-oriented PBVS scheme for work-class hydraulic ROVs. The drawback of these strategies is that they require accurate camera parameters in advance. In the underwater manipulation domain, there are some examples of research on uncalibrated visual servoing; for example, Xu et al. [26] investigated a dynamics-based uncalibrated visual servoing scheme for an underwater soft robot. However, to the best of our knowledge, there is limited work on dynamics-based uncalibrated visual servoing for the UVMS. Unlike industrial manipulators or mobile robots, the visual servoing process of the UVMS suffers from two difficulties. On one hand, since the UVMS is a free-floating multibody mechanism, the inner coupling effects between the manipulator and the vehicle are strong and cannot be neglected [27]. On the other hand, the underwater environment is complex, which makes the system prone to being affected by currents, and accurate calibration results of the camera-UVMS system are hard to achieve underwater. These notable differences from other types of robots make it improper to directly apply previous visual servoing studies to UVMSs.
To solve these problems, we propose a novel uncalibrated visual servoing method for UVMSs with an eye-in-hand configuration camera. The main contributions of this work can be summarized as follows. First, we propose a novel linear separation method, compared with the one in [21,28], to collect the uncertainties into constant vectors; this method can be applied to other free-floating articulated manipulators. Second, to optimize the restoring moments of the UVMS, a new reference velocity term is proposed by exploiting the kinematic redundancy of the UVMS; moreover, without perfect knowledge of the camera parameters, the kinematics of the UVMS, or the target position, a novel composite controller with new adaptive laws is proposed to solve the visual servoing problem. Third, considering the time-varying disturbances on UVMSs, a high order disturbance observer is presented to estimate and compensate such disturbances, and the stability analysis is given under the Lyapunov theory.
The rest of this paper is organized as follows. In Section 2, we introduce the kinematic relationships of the UVMS and establish its dynamic model. Thereafter, a novel adaptive controller with a disturbance observer is proposed for uncalibrated visual servoing in Section 3; the stability proof is also given in this section. In Section 4, numerical simulations on a 9-DOF UVMS are carried out to demonstrate the effectiveness of the proposed scheme. The flow chart of this work is given in Figure 1.
Notation: Throughout this paper, we define $\varepsilon_{a,b}^{c}$ as the vector from frame $a$ to frame $b$ expressed in frame $c$. $R_a^b$ and $T_a^b$ are the rotation matrix and the homogeneous transformation matrix from frame $a$ to frame $b$, respectively. Moreover, frames $I$, $V$, $ee$, $fe$, and $cam$ represent the inertial frame, the vehicle-fixed frame, the end-effector frame, the target frame, and the camera frame, respectively; frame $i\in\{0,1,2,\dots,n\}$ represents the frame of the $i$-th joint.

2. Kinematics and Dynamics

In underwater visual servoing, the camera parameters, the UVMS kinematic model, and the feature position are always difficult to obtain accurately. A linear separation method that collects the camera uncertainties into vectors is given in [18,21]; however, that method can only be used for fixed-base manipulators and relies on the assumption that the kinematics of the manipulator are perfectly known. In this section, we propose a novel linear separation method that collects these uncertainties into constant vectors for free-floating articulated manipulators without this assumption.

2.1. Kinematics

In this paper, we consider a $(6+n)$-DOF UVMS with a standard pinhole camera fixed on the end-effector. A target is fixed w.r.t. the inertial frame, and the feature point is extracted from the target. The image coordinates $x=[u,v]^T$ of the feature point can be written as [18]:

$$\begin{bmatrix} x \\ 1 \end{bmatrix}=\frac{1}{z(\zeta)}\,M\,T_I^{ee}\begin{bmatrix}\varepsilon_{I,fe}^{I}\\1\end{bmatrix} \qquad (1)$$

where $z(\zeta)$ is the depth of the feature in the camera frame, and $M=\Phi\,T_{ee}^{cam}$, $M\in\mathbb{R}^{3\times4}$, is the perspective projection matrix, which contains the intrinsic and extrinsic parameters of the camera. $\Phi\in\mathbb{R}^{3\times4}$ depends on the intrinsic parameters of the camera and $T_{ee}^{cam}$ depends on the extrinsic parameters. Since the camera is fixed on the end-effector, $T_{ee}^{cam}$ is time-invariant; therefore, the perspective projection matrix $M$ is constant. Define $M=[m_1,m_2,m_3]^T$, where $m_i^T\in\mathbb{R}^{1\times4}$ is the $i$-th row of $M$. The depth $z(\zeta)$ can be obtained as:

$$z(\zeta)=m_3^T\,T_I^{ee}\begin{bmatrix}\varepsilon_{I,fe}^{I}\\1\end{bmatrix} \qquad (2)$$
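As a quick illustration of (1) and (2), the following minimal sketch projects a feature point through an assumed perspective projection matrix; the intrinsic values reuse those from the simulations in Section 4, while the identity extrinsics are a made-up placeholder.

```python
import numpy as np

# Minimal sketch of the projection model in (1)-(2).
def project_feature(M, T_I_ee, eps_I_fe):
    """Return image coordinates x = [u, v] and depth z of a feature point."""
    p_h = np.append(eps_I_fe, 1.0)       # homogeneous feature position in frame I
    m = M @ T_I_ee @ p_h                 # [m1^T p, m2^T p, m3^T p]
    z = m[2]                             # depth: z = m3^T T_I^ee [eps; 1]
    return m[:2] / z, z

K = np.array([[50.0,  0.0, 200.0],
              [ 0.0, 50.0, 100.0],
              [ 0.0,  0.0,   1.0]])      # alpha_u, alpha_v, u0, v0 from Section 4
M = np.hstack([K, np.zeros((3, 1))])     # M = Phi * T_ee^cam with T_ee^cam = I (assumed)
x, z = project_feature(M, np.eye(4), np.array([2.5, 6.0, 5.0]))
```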
Consider a UVMS with an $n$-DOF manipulator and define $\dot\zeta=[v_v^T,\ \omega_v^T,\ \dot q^T]^T\in\mathbb{R}^{(6+n)\times1}$ as the generalized velocity of the UVMS, where $v_v\in\mathbb{R}^{3\times1}$ and $\omega_v\in\mathbb{R}^{3\times1}$ denote the linear and angular velocities of the vehicle in the vehicle-fixed frame. Let $\dot\eta_v=[v_v^T,\ \omega_v^T]^T$ denote the generalized velocity of the vehicle, and let $q\in\mathbb{R}^{n\times1}$ be the joint position vector of the manipulator. By differentiating (1), the relationship between $\dot\zeta$ and $\dot x$ can be expressed as:

$$\dot x=\frac{1}{z(\zeta)}\,J(\zeta,x)\,\dot\zeta \qquad (3)$$

where $J(\zeta,x)=\begin{bmatrix}\bar m_1^T-u\,\bar m_3^T\\ \bar m_2^T-v\,\bar m_3^T\end{bmatrix}\dfrac{\partial\big(R_I^{ee}\varepsilon_{I,fe}^{I}+t\big)}{\partial\zeta}$ is the depth-independent Jacobian matrix, and $R_I^{ee}$ and $t$ are the rotation and translation parts of $T_I^{ee}$. It can be further divided into two parts [20]:

$$J(\zeta,x)=J_{zin}(\zeta)-x\,J_{zd}(\zeta) \qquad (4)$$
On one hand, $J_{zin}(\zeta)$ is the so-called depth-rate-independent Jacobian matrix, which maps the generalized velocity $\dot\zeta$ to a plane parallel to the image plane. $J_{zin}(\zeta)$ can be represented as:

$$J_{zin}(\zeta)=\begin{bmatrix}\bar m_1^T\\ \bar m_2^T\end{bmatrix}\frac{\partial\big(R_I^{ee}\varepsilon_{I,fe}^{I}+t\big)}{\partial\zeta} \qquad (5)$$

where $\bar m_i$ denotes the vector of the first three elements of $m_i$.
On the other hand, $J_{zd}(\zeta)$ is the so-called depth-rate-dependent Jacobian matrix, which describes the relationship between $\dot\zeta$ and $\dot z(\zeta)$. $J_{zd}(\zeta)$ and $\dot z(\zeta)$ can be represented as:

$$J_{zd}(\zeta)=\bar m_3^T\,\frac{\partial\big(R_I^{ee}\varepsilon_{I,fe}^{I}+t\big)}{\partial\zeta} \qquad (6)$$

$$\dot z(\zeta)=\bar m_3^T\,\frac{\partial\big(R_I^{ee}\varepsilon_{I,fe}^{I}+t\big)}{\partial\zeta}\,\dot\zeta=J_{zd}(\zeta)\,\dot\zeta \qquad (7)$$
The overall kinematics (2), (5), and (7) satisfy the following properties.
Property 1.
For a vector $\eta\in\mathbb{R}^{(6+n)\times1}$, $J_{zin}(\zeta)\eta$ can be represented as the product of a regressor matrix $Y_{zin}(\zeta,\eta)$ and a constant vector $a_{zin}$:

$$J_{zin}(\zeta)\,\eta=Y_{zin}(\zeta,\eta)\,a_{zin} \qquad (8)$$
Proof. 
First, we prove that

$$\frac{\partial\big(R_I^{ee}\varepsilon_{I,fe}^{I}+t\big)}{\partial\zeta}\,\eta=\frac{\partial\big(\varepsilon_{fe,I}^{ee}-\varepsilon_{ee,I}^{ee}\big)}{\partial\zeta}\,\eta=-\frac{\partial\big(R_I^{ee}\varepsilon_{I,ee}^{I}\big)}{\partial\zeta}\,\eta=-\Big(R_I^{ee}\,\frac{\partial\varepsilon_{I,ee}^{I}}{\partial\zeta}+\Upsilon\,\varepsilon_{I,ee}^{I}\Big)\eta=Y_\alpha(\zeta,\eta)\,a_\alpha \qquad (9)$$

where $\Upsilon=\Big[\dfrac{\partial(r_{11}+r_{21}+r_{31})}{\partial\zeta}\ \ \dfrac{\partial(r_{12}+r_{22}+r_{32})}{\partial\zeta}\ \ \dfrac{\partial(r_{13}+r_{23}+r_{33})}{\partial\zeta}\Big]$ and $r_{ij}$ is the $(i,j)$ element of $R_I^{ee}$.
On one side, it can be found that:
$$\frac{\partial\varepsilon_{I,ee}^{I}}{\partial\zeta}\,\eta=\Big[R_V^I\ \ -\big(S(R_V^I\varepsilon_{V0}^{V})+S(R_0^I\varepsilon_{0,ee}^{0})\big)R_V^I\ \ R_V^I J_{pos,man}\Big]\eta=R_V^I\eta_{1,3}+R_V^I S(\eta_{4,6})\,\varepsilon_{V0}^{V}+R_V^I S(\eta_{4,6})\,R_0^V\varepsilon_{0,ee}^{0}+J_{pos,man}\,\eta_{7,6+n} \qquad (10)$$

where $\eta_{i,j}$ is the vector composed of the elements $\eta_i$ through $\eta_j$, i.e., $\eta_{i,j}=[\eta_i,\eta_{i+1},\dots,\eta_j]^T$, and $J_{pos,man}$ is the manipulator Jacobian matrix mapping $\dot q$ to $\dot\varepsilon_{0,ee}^{I}$. $S(\cdot)$ maps a vector $a=[a_x,a_y,a_z]^T$ to a skew-symmetric matrix:

$$S(a)=\begin{bmatrix}0&-a_z&a_y\\ a_z&0&-a_x\\ -a_y&a_x&0\end{bmatrix}$$
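A two-line sketch of this operator, included only to make the convention concrete (the assertion checks the standard identity $S(a)b=a\times b$):

```python
import numpy as np

# Sketch of the skew-symmetric operator S(.) defined above.
def skew(a):
    ax, ay, az = a
    return np.array([[0.0, -az,  ay],
                     [ az, 0.0, -ax],
                     [-ay,  ax, 0.0]])

a, b = np.array([1.0, 2.0, 3.0]), np.array([0.5, -1.0, 2.0])
assert np.allclose(skew(a) @ b, np.cross(a, b))   # S(a) b = a x b
```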
Inspired by [29], $\varepsilon_{0,ee}^{0}$ can be further divided as:

$$\varepsilon_{0,ee}^{0}=\big[\,I_3,\ R_1^0,\ \cdots,\ R_i^0,\ \cdots,\ R_{n-1}^0\,\big]\,\mathrm{pp}\ a_{ee}=Y_{ee}(q)\,a_{ee} \qquad (11)$$

where $\mathrm{pp}=\mathrm{diag}\{\cos(q_1),\ \sin(q_1),\ 1,\ \cdots,\ \cos(q_n),\ \sin(q_n),\ 1\}$ and $a_{ee}=[\,a_1,\ a_1,\ d_1,\ \cdots,\ a_n,\ a_n,\ d_n\,]^T$. $q_i$ is the position of the $i$-th joint, and $a_i$, $d_i$ are the link length and link offset associated with the $i$-th link. Besides,

$$J_{pos,man}\,\eta_{7,6+n}=R_0^I\Big[S(z_0)\,\eta_{7,6+n}(1)\,(D_n-D_0)\,\mathrm{pp}\ \ \cdots\ \ S(z_{n-1})\,\eta_{7,6+n}(n)\,(D_n-D_{n-1})\,\mathrm{pp}\Big]\,a_{ee}=Y_{man}(\eta,q)\,a_{ee} \qquad (12)$$

where $\eta_{7,6+n}(i)$ denotes the $i$-th element of $\eta_{7,6+n}$, $D_i=\mathrm{diag}\{\underbrace{I_3,\ \cdots,\ I_3}_{1,\dots,i},\ \underbrace{0_{3\times3},\ \cdots,\ 0_{3\times3}}_{i+1,\dots,n}\}$, and $z_i$ is the unit vector of frame $i$ expressed in frame 0 of the manipulator.
On the other side,

$$\varepsilon_{I,ee}^{I}=\varepsilon_{I,V}^{I}+R_0^I\,\varepsilon_{V,0}^{0}+R_0^I\,Y_{ee}(q)\,a_{ee}=Y_{eeb}(q)\,a_{eeb} \qquad (13)$$

where $Y_{eeb}(q)=[\,\varepsilon_{I,V}^{I},\ R_0^I,\ R_0^I Y_{ee}(q)\,]$ and $a_{eeb}=[\,1,\ (\varepsilon_{V,0}^{0})^T,\ a_{ee}^T\,]^T$.
Therefore, by invoking (10)–(13), one can obtain:

$$Y_\alpha(\zeta,\eta)=\Big[\,R_V^{ee}\eta_{1,3},\ \ R_V^{ee}S(\eta_{4,6})\,\varepsilon_{V0}^{V},\ \ R_V^{ee}S(\eta_{4,6})\,R_0^V Y_{ee}(q),\ \ R_I^{ee}Y_{man}(\eta,q),\ \ \Upsilon\,Y_{eeb}(q)\,\Big] \qquad (14)$$

$$a_\alpha=\big[\,1,\ \ 1,\ \ a_{ee}^T,\ \ a_{ee}^T,\ \ a_{eeb}^T\,\big]^T \qquad (15)$$

and the results can be further simplified as:

$$Y_\alpha(\zeta,\eta)=\Big[\,R_V^{ee}\eta_{1,3}+R_V^{ee}S(\eta_{4,6})\,\varepsilon_{V0}^{V},\ \ R_V^{ee}S(\eta_{4,6})\,R_0^V Y_{ee}(q)+R_I^{ee}Y_{man}(\eta,q),\ \ \Upsilon\,Y_{eeb}(q)\,\Big]$$

$$a_\alpha=\big[\,1,\ \ a_{ee}^T,\ \ a_{eeb}^T\,\big]^T$$
To obtain the regressor matrix $Y_{zin}(\zeta,\eta)$, $Y_\alpha$ needs to be rearranged column-wise into a row vector $Y_{\alpha s}\in\mathbb{R}^{1\times3n}$:

$$Y_{\alpha s}(\zeta,\eta)=[\,y_{\alpha11},\ y_{\alpha21},\ y_{\alpha31},\ \cdots,\ y_{\alpha1n},\ y_{\alpha2n},\ y_{\alpha3n}\,]$$

where $y_{\alpha ij}$ is the $(i,j)$ element of $Y_\alpha$ and $n$ here denotes the number of columns of $Y_\alpha$.
Finally, from (5), (14), and (15), it can be obtained that:

$$Y_{zin}(\zeta,\eta)=\begin{bmatrix}Y_{\alpha s}&0\\0&Y_{\alpha s}\end{bmatrix} \qquad (16)$$

$$a_{zin}=\big[\,\bar m_1^T a_{\alpha1}\ \cdots\ \bar m_1^T a_{\alpha n},\ \ \bar m_2^T a_{\alpha1}\ \cdots\ \bar m_2^T a_{\alpha n}\,\big]^T \qquad (17)$$

where $a_{\alpha i}$ is the $i$-th element of the vector $a_\alpha$. □
Property 2.
For a vector $\phi\in\mathbb{R}^{2\times1}$, $\dot z(\zeta)\phi$ can be represented as the product of a regressor matrix and a constant vector:

$$\dot z(\zeta)\,\phi=Y_{zd}(\zeta,\phi)\,a_{zd} \qquad (18)$$

Proof.
From (7) and the proof of Property 1, it can be found that:

$$Y_{zd}=\begin{bmatrix}\phi_1\,Y_{\alpha s}(\zeta,\dot\zeta)\\ \phi_2\,Y_{\alpha s}(\zeta,\dot\zeta)\end{bmatrix} \qquad (19)$$

$$a_{zd}=\big[\,\bar m_3^T a_{\alpha1}\ \cdots\ \bar m_3^T a_{\alpha n}\,\big]^T \qquad (20)$$

where $\phi_i$ is the $i$-th element of the vector $\phi$. □
Property 3.
For a vector $\phi\in\mathbb{R}^{2\times1}$, $z(\zeta)\phi$ can be represented as the product of a regressor matrix and a constant vector:

$$z(\zeta)\,\phi=Y_z(\zeta,\phi)\,a_z \qquad (21)$$

Proof.
According to (2), $z(\zeta)\phi$ can be expanded as:

$$z(\zeta)\,\phi=m_3^T\,T_I^{ee}\begin{bmatrix}\varepsilon_{I,fe}^{I}\\1\end{bmatrix}\phi=\Big(\sum_{i=1}^{3}r_{i1}m_{3i}\,\varepsilon_{I,fe1}^{I}+\sum_{i=1}^{3}r_{i2}m_{3i}\,\varepsilon_{I,fe2}^{I}+\sum_{i=1}^{3}r_{i3}m_{3i}\,\varepsilon_{I,fe3}^{I}+\bar m_3^T R_I^{ee}\,\varepsilon_{ee,I}^{I}+m_{34}\Big)\begin{bmatrix}\phi_1\\\phi_2\end{bmatrix} \qquad (22)$$

where $m_{3i}$ and $\varepsilon_{I,fei}^{I}$ are the $i$-th elements of the vector $m_3$ and the vector $\varepsilon_{I,fe}^{I}$, respectively. From (13), one obtains:

$$Y_z(\zeta,\phi)=\begin{bmatrix}\bar R\,\phi_1&\phi_1\sum_{j=1}^{3}\sum_{i=1}^{3}r_{ji}\,y_{eebj}&\phi_1\\ \bar R\,\phi_2&\phi_2\sum_{j=1}^{3}\sum_{i=1}^{3}r_{ji}\,y_{eebj}&\phi_2\end{bmatrix} \qquad (23)$$

$$a_z=\big[\,\bar m_3^T\varepsilon_{I,fe1}^{I}\ \ \bar m_3^T\varepsilon_{I,fe2}^{I}\ \ \bar m_3^T\varepsilon_{I,fe3}^{I}\ \ (m_{31}+m_{32}+m_{33})\,a_{eeb}^T\ \ m_{34}\,\big]^T \qquad (24)$$

where $\bar R=[\,r_{11}\ r_{21}\ r_{31}\ r_{12}\ r_{22}\ r_{32}\ r_{13}\ r_{23}\ r_{33}\,]$ and $y_{eebj}$ is the $j$-th row of the matrix $Y_{eeb}(q)$. □
Therefore, $J_{zin}(\zeta)\eta$, $\dot z(\zeta)\phi$, and $z(\zeta)\phi$ can all be linearly separated into products of regressor matrices and constant parameter vectors.
As described in the previous proofs, the camera perspective projection matrix $M$, the manipulator kinematic parameters $a_{ee}$, and the manipulator relative position vector $\varepsilon_{V,0}^{0}$ are all collected in $a_{zin}$, $a_z$, and $a_{zd}$. The position of the target $\varepsilon_{I,fe}^{I}$ is collected in $a_z$.
This work is devoted to realizing visual servoing under uncalibrated conditions. From the proofs of the above three properties, the uncertainties of the system (i.e., uncertainties in the intrinsic and extrinsic parameters of the camera, the kinematics of the UVMS, and the position of the target w.r.t. the inertial frame) can be collected in the three parameter vectors in (17), (20), and (24). Because the target is fixed w.r.t. the inertial frame, these vectors remain constant and will be estimated by gradient update laws in the following section.
By defining $\Delta a=a-a_d$, a compensated depth-Jacobian matrix $J_\Omega$ can be obtained by combining $J_{zin}(\zeta)$ with $J_{zd}(\zeta)$ [19,21]:

$$J_\Omega=J_{zin}(\zeta)-\frac{x+x_d}{2}\,J_{zd}(\zeta)=J(\zeta,x)+\frac{\Delta x}{2}\,J_{zd}(\zeta) \qquad (25)$$
where x d is the bounded desired position of the feature point on the image plane. Moreover, x ˙ d and x ¨ d are the bounded desired velocity and desired acceleration of the feature point, respectively.
Substituting (3) and (4) into (25), (3) can be rewritten as:

$$J_\Omega\,\dot\zeta=z\,\dot x+\frac{1}{2}\dot z\,\Delta x,\qquad J_\Omega\,\dot\zeta-z\,\dot x_d=z\,\Delta\dot x+\frac{1}{2}\dot z\,\Delta x \qquad (26)$$

Thus, $J_\Omega$ describes the mapping from the generalized velocity $\dot\zeta$ to $z\dot x+\frac{1}{2}\dot z\,\Delta x$.
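The assembly of $J_\Omega$ in (25) is simple enough to state directly; the following sketch builds it from given $J_{zin}$ and $J_{zd}$ blocks, with random placeholders standing in for the true matrices of a 9-DOF UVMS.

```python
import numpy as np

# Sketch of Eq. (25): J_Omega = J_zin - (x + x_d)/2 * J_zd.
# Shapes follow the text: J_zin is 2x(6+n), J_zd is 1x(6+n).
def compensated_jacobian(J_zin, J_zd, x, x_d):
    return J_zin - np.outer((x + x_d) / 2.0, J_zd)

rng = np.random.default_rng(0)                    # placeholder kinematics, n = 3
J_zin, J_zd = rng.normal(size=(2, 9)), rng.normal(size=9)
J_Omega = compensated_jacobian(J_zin, J_zd,
                               x=np.array([216.0, 230.0]),   # current pixel coords
                               x_d=np.array([300.0, 250.0])) # desired pixel coords
```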

2.2. Dynamics

The coordinate frames of a typical UVMS with an eye-in-hand configuration camera are represented in Figure 2.
Let $\eta_{ee}=[\,(\varepsilon_{0,ee}^{I})^T,\ \varphi_{ee},\ \theta_{ee},\ \psi_{ee}\,]^T$ denote the position and Euler angle vector of the end-effector. The relationship between $\dot\eta_{ee}$ and $\dot\zeta$ can be expressed as:

$$\dot\eta_{ee}=J_{uvms}\,\dot\zeta \qquad (27)$$
where $J_{uvms}$ is the Jacobian matrix of the UVMS. Besides, the dynamic equation of the UVMS in the vehicle-joint space can be written as [30]:

$$M(\zeta)\,\ddot\zeta+C(\zeta,\dot\zeta)\,\dot\zeta+D(\zeta,\dot\zeta)\,\dot\zeta+G(\zeta)=\tau_{ctrl}+\tau_{ext} \qquad (28)$$

where $M(\zeta)\in\mathbb{R}^{(6+n)\times(6+n)}$ denotes the mass matrix including the added mass term, $C(\zeta,\dot\zeta)\in\mathbb{R}^{(6+n)\times(6+n)}$ represents the centrifugal and Coriolis matrix, $D(\zeta,\dot\zeta)\in\mathbb{R}^{(6+n)\times(6+n)}$ is the contribution due to the damping effects, and $G(\zeta)\in\mathbb{R}^{(6+n)\times1}$ represents the restoring vector. $\tau_{ctrl}\in\mathbb{R}^{(6+n)\times1}$ is the controller output, and $\tau_{ext}\in\mathbb{R}^{(6+n)\times1}$ represents the bounded unknown external disturbances on the UVMS, mainly caused by underwater currents and load variations.
Specifically, one can expand these dynamics matrices as:

$$M(\zeta)=\begin{bmatrix}M_v(\eta_v)&M_{vm}(q)\\ M_{vm}^T(q)&M_m(q)\end{bmatrix},\qquad C(\zeta,\dot\zeta)=\begin{bmatrix}C_v(\eta_v,\dot\eta_v)&0\\ 0&C_m(q,\dot q)\end{bmatrix}$$

$$D(\zeta,\dot\zeta)=\begin{bmatrix}D_v(\eta_v,\dot\eta_v)&0\\ 0&D_m(q,\dot q)\end{bmatrix},\qquad G(\zeta)=\begin{bmatrix}G_v(\eta_v)\\ G_m(q)\end{bmatrix}$$

Subscripts $v$ and $m$ denote the vehicle and the manipulator, respectively. $M_v\in\mathbb{R}^{6\times6}$ is the mass matrix of the sole vehicle, $M_m\in\mathbb{R}^{n\times n}$ is the mass matrix of the manipulator, and $M_{vm}\in\mathbb{R}^{6\times n}$ is the coupling term between the vehicle and the manipulator. Similar properties can also be found in $C(\zeta,\dot\zeta)$, $D(\zeta,\dot\zeta)$, and $G(\zeta)$. Compared with traditional underwater vehicles and fixed-base manipulators, UVMSs are more complex due to the multibody inner coupling effects.
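To make the block structure concrete, here is a minimal sketch assembling a $(6+n)\times(6+n)$ mass matrix from its vehicle, manipulator, and coupling blocks; all numerical values are placeholders, not the parameters of Table 1.

```python
import numpy as np

# Sketch of the block structure M = [[M_v, M_vm], [M_vm^T, M_m]].
def assemble_mass_matrix(M_v, M_vm, M_m):
    return np.block([[M_v, M_vm],
                     [M_vm.T, M_m]])

n = 3
M_v = 80.0 * np.eye(6)            # hypothetical vehicle inertia (incl. added mass)
M_m = 3.0 * np.eye(n)             # hypothetical manipulator inertia
M_vm = 0.5 * np.ones((6, n))      # hypothetical vehicle-manipulator coupling
M = assemble_mass_matrix(M_v, M_vm, M_m)
assert np.allclose(M, M.T)        # the mass matrix is symmetric
```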
Motivated by the inherent dynamic characteristics of UVMSs, $\hat M$ and $\hat C$ are chosen to satisfy the following properties [30,31]:
Property 4.
The inertia matrix $M(\zeta)$ is positive definite and bounded:

$$M(\zeta)>0,\qquad \upsilon_1 I\le M(\zeta)\le\upsilon_2 I$$

where $\upsilon_1$ and $\upsilon_2$ are two positive constants.
Property 5.
The 2-norm of the Coriolis matrix $C(\zeta,\dot\zeta)$ is bounded:

$$\|C(\zeta,\dot\zeta)\|\le\upsilon_3\,\|\dot\zeta\|_{\max}$$

where $\upsilon_3$ is a positive scalar. The 2-norm means:

$$x\in\mathbb{R}^n\ \Rightarrow\ \|x\|=\sqrt{x^T x},\qquad x\in\mathbb{R}^{n\times n}\ \Rightarrow\ \|x\|=\sqrt{\lambda_{\max}(x^T x)}$$

where $\lambda_{\max}(\cdot)$ is the maximum eigenvalue of a matrix.
Property 6.
The matrix $\dot M(\zeta)-2C(\zeta,\dot\zeta)$ is skew-symmetric:

$$\dot M(\zeta)-2C(\zeta,\dot\zeta)=-\big[\dot M(\zeta)-2C(\zeta,\dot\zeta)\big]^T$$

From Properties 5 and 6, it can be found that:

$$\|\dot M(\zeta)\|\le 2\,\upsilon_3\,\|\dot\zeta\|_{\max}$$

which means the 2-norm of $\dot M(\zeta)$ is also upper bounded.
The intrinsic dynamic parameters of a UVMS, e.g., the mass, the inertia tensor, and the buoyancy of each body, are always difficult to obtain accurately. Hence, it is also difficult to acquire an accurate dynamic model in practice. The unmodeled dynamics, together with the external disturbances, are considered as the lumped disturbances of the system. Then, the dynamic Equation (28) can be further represented as:

$$\hat M(\zeta)\,\ddot\zeta+\hat C(\zeta,\dot\zeta)\,\dot\zeta+\hat D(\zeta,\dot\zeta)\,\dot\zeta+\hat G(\zeta)=\tau_{ctrl}+\tau_d \qquad (29)$$

where $\hat M(\zeta)$, $\hat C(\zeta,\dot\zeta)$, $\hat D(\zeta,\dot\zeta)$, and $\hat G(\zeta)$ are the nominal dynamic model matrices, and $\tilde M(\zeta)$, $\tilde C(\zeta,\dot\zeta)$, $\tilde D(\zeta,\dot\zeta)$, $\tilde G(\zeta)$ are the modeling error terms, defined as $\tilde A=A-\hat A$. $\tau_d=\tau_{ext}-(\tilde M\ddot\zeta+\tilde C\dot\zeta+\tilde D\dot\zeta+\tilde G)$ denotes the time-varying disturbance term.
The restoring moments are caused by the gravity and buoyancy forces. For the UVMS, the vehicle is easily affected by the restoring moments because of the movement of the manipulator. Therefore, it is important to decrease the restoring moments by choosing a proper $\dot\zeta$. We assume that $\hat G(\zeta)$, $\frac{\partial\hat G(\zeta)}{\partial\zeta}$, and $\frac{\partial^2\hat G(\zeta)}{\partial\zeta^2}$ are all bounded.
Remark 1.
The nominal dynamic model matrices $\hat M(\zeta)$ and $\hat C(\zeta,\dot\zeta)$ are chosen to satisfy Properties 4–6. In the following controller, $\hat M(\zeta)$, $\hat C(\zeta,\dot\zeta)$, $\hat D(\zeta,\dot\zeta)$, and $\hat G(\zeta)$ are obtained by choosing another set of intrinsic dynamic parameters (different from the actual intrinsic parameters, which are used in the forward dynamics routines of the simulations), rather than being updated by adaptive laws. In other words, we do not estimate or update the nominal matrices; we directly calculate them from the system states $[\zeta,\dot\zeta]$ and the chosen intrinsic dynamic parameters. Therefore, $\hat M(\zeta)$, $\hat C(\zeta,\dot\zeta)$, $\hat D(\zeta,\dot\zeta)$, and $\hat G(\zeta)$ can be chosen to satisfy the above properties.

3. Controller Design

The control objective is to realize the convergence of the image errors. In designing the controller, we first establish a new reference velocity term that optimizes the restoring moments through the gradient projection method (GPM); moreover, novel adaptive laws are established to estimate the uncertainties of the uncalibrated system by utilizing the linear separation method described in the last section. Second, we employ a high order disturbance observer to compensate the time-varying lumped disturbance term in (29). Third, we propose a novel composite controller to guarantee the convergence of the image errors; the stability analysis based on the Lyapunov theory is also presented.

3.1. Adaptive Laws

To realize visual servoing, we should first design a reference velocity term in the vehicle-joint space. If the kinematic parameters were available, then $a_{zin}$, $a_z$, and $a_{zd}$ could be obtained, and a reference velocity could be designed as:

$$\dot\zeta_r=J_\Omega^{\dagger}\,z\,\dot x_d-\alpha_v\,J_\Omega^{T}\,z\,\Delta x \qquad (30)$$

where $\alpha_v$ is a positive gain. Then, if there existed an ideal controller making $\dot\zeta=\dot\zeta_r$, substituting (30) into (26) would give:

$$-\alpha_v\,J_\Omega J_\Omega^{T}\,z\,\Delta x=z\,\Delta\dot x+\frac{1}{2}\dot z\,\Delta x \qquad (31)$$
Consider a Lyapunov function:

$$V_{temp}=\frac{1}{2}\,z\,\Delta x^T\Delta x$$

Taking the time derivative of $V_{temp}$ yields:

$$\dot V_{temp}=-\alpha_v\,\Delta x^T J_\Omega J_\Omega^{T}\,z\,\Delta x$$

If $J_\Omega$ is non-singular, $\dot V_{temp}<0$ and $\Delta x\to0$ as $t\to\infty$.
To achieve tracking control, accurate values of $J_\Omega$, $z$, and $\dot z$ are expected, which is equivalent to acquiring the accurate values of $a_{zin}$, $a_z$, and $a_{zd}$. Unfortunately, they cannot be obtained accurately given the kinematic uncertainties of the system. Hence it is necessary to build adaptive laws to estimate them; the core is to estimate $a_{zin}$, $a_z$, and $a_{zd}$.
The compensated depth-Jacobian matrix in (25) can be estimated as:

$$\hat J_\Omega=\hat J(\zeta,x)+\frac{\Delta x}{2}\,\hat J_{zd}(\zeta) \qquad (32)$$

By employing the pseudo-inverse and the transpose of $\hat J_\Omega$ in (32) and the depth estimate $\hat z$, a reference velocity term similar to the ones in [19,20,21] can be represented as:

$$\dot\zeta_r=\hat J_\Omega^{\dagger}\,\hat z\,\dot x_d-\alpha_v\,\hat J_\Omega^{T}\,\hat z\,\Delta x \qquad (33)$$

where $\hat J_\Omega^{\dagger}$ denotes the pseudo-inverse of $\hat J_\Omega$ and $\hat z$ is the estimated depth.
The UVMS possesses relatively many DOFs compared with the simple manipulators in [19,20,21]; hence, the redundancy of the UVMS should be taken into consideration when designing the reference velocity. Because the UVMS is easily affected by the restoring moments, the GPM is utilized to optimize them by exploiting the kinematic redundancy, and a novel reference velocity can be represented as:

$$\dot\zeta_r=\hat J_\Omega^{\dagger}\,\hat z\,\dot x_d-\alpha_v\,\hat J_\Omega^{T}\,\hat z\,\Delta x-\kappa_v\big(I-\hat J_\Omega^{\dagger}\hat J_\Omega\big)P_r \qquad (34)$$

where $\kappa_v$ is a positive gain and $P_r$ is defined as $P_r=\frac{\partial\,\hat G^T(\zeta)\,W_G\,\hat G(\zeta)}{\partial\zeta}$, with $W_G$ a positive weight matrix. $(I-\hat J_\Omega^{\dagger}\hat J_\Omega)$ projects $P_r$ into the null space of $\hat J_\Omega$; hence, $P_r$ is only effective in the null space.
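A minimal sketch of (34), assuming the estimated Jacobian, depth, and restoring-moment gradient $P_r$ are computed elsewhere; the null-space projector keeps the optimization term from disturbing the image-space task.

```python
import numpy as np

# Sketch of the reference velocity in Eq. (34): primary image-space task plus
# a gradient-projection term acting only in the null space of J_hat.
def reference_velocity(J_hat, z_hat, dx_d, delta_x, P_r, alpha_v, kappa_v):
    J_pinv = np.linalg.pinv(J_hat)                # pseudo-inverse of J_hat
    primary = J_pinv @ (z_hat * dx_d) - alpha_v * J_hat.T @ (z_hat * delta_x)
    N = np.eye(J_hat.shape[1]) - J_pinv @ J_hat   # projector onto null(J_hat)
    return primary - kappa_v * N @ P_r
```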
Although some previous works utilized the GPM to optimize the restoring moments of UVMSs [32,33,34], there are two main differences here. One is that the Jacobian matrix in [32,33,34] describes the velocity relationship between the configuration space and the Cartesian space (i.e., the Jacobian matrix in (27)), rather than the Jacobian matrix described in (26). The other is that the previous works [32,33,34] all depend on the accurate Jacobian matrix, rather than an estimated one.
Then, a sliding vector can be defined as:

$$s=\dot\zeta-\dot\zeta_r \qquad (35)$$

and the estimation error terms can be defined as $\Delta a_{zin}=\hat a_{zin}-a_{zin}$, $\Delta a_{zd}=\hat a_{zd}-a_{zd}$, and $\Delta a_z=\hat a_z-a_z$. Then, substituting (7) and (18) into (25), one can find:

$$\big(\hat J_\Omega-J_\Omega\big)\dot\zeta=\big(\hat J_{zin}(\zeta)-J_{zin}(\zeta)\big)\dot\zeta-\frac{x+x_d}{2}\big(\hat J_{zd}(\zeta)-J_{zd}(\zeta)\big)\dot\zeta=Y_{zin}(\zeta,\dot\zeta)\,\Delta a_{zin}-\frac{1}{2}Y_{zd}(\zeta,\dot\zeta,x+x_d)\,\Delta a_{zd} \qquad (36)$$

From (21) and (34), one has:

$$\hat J_\Omega\dot\zeta_r=\hat z\,\dot x_d-\alpha_v\,\hat z\,\hat J_\Omega\hat J_\Omega^{T}\Delta x=z\,\dot x_d-\alpha_v\,z\,\hat J_\Omega\hat J_\Omega^{T}\Delta x+Y_z\big(\zeta,\dot x_d-\alpha_v\hat J_\Omega\hat J_\Omega^{T}\Delta x\big)\Delta a_z \qquad (37)$$

Invoking (26) and (34)–(37), the closed-loop kinematic system can be represented as:

$$z\,\Delta\dot x+\frac{1}{2}\dot z\,\Delta x=Y_z\big(\zeta,\dot x_d-\alpha_v\hat J_\Omega\hat J_\Omega^{T}\Delta x\big)\Delta a_z+\hat J_\Omega s+Y_{zd}\Big(\zeta,\dot\zeta,\frac{x+x_d}{2}\Big)\Delta a_{zd}-Y_{zin}(\zeta,\dot\zeta)\,\Delta a_{zin}-\alpha_v\,z\,\hat J_\Omega\hat J_\Omega^{T}\Delta x \qquad (38)$$
To calculate the reference velocity term in (34) and guarantee the stability of the system, it is necessary to estimate the unknown parameters of the system (i.e., the constant vectors in (8), (18), and (21)). The adaptive update laws are designed as:

$$\dot{\hat a}_{zin}=\lambda_{zin}\,Y_{zin}^{T}(\zeta,\dot\zeta)\,\Delta x \qquad (39)$$

$$\dot{\hat a}_{zd}=-\lambda_{zd}\,Y_{zd}^{T}\Big(\zeta,\dot\zeta,\frac{x+x_d}{2}\Big)\,\Delta x \qquad (40)$$

$$\dot{\hat a}_{z}=-\lambda_{z}\,Y_{z}^{T}\big(\zeta,\dot x_d-\alpha_v\hat J_\Omega\hat J_\Omega^{T}\Delta x\big)\,\Delta x \qquad (41)$$

where $\lambda_{zin}$, $\lambda_{zd}$, and $\lambda_z$ are positive gain matrices.
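A sketch of one explicit-Euler step of (39)–(41); the regressor matrices are assumed to come from the kinematics code, and the signs follow the Lyapunov cancellation in the stability proof below.

```python
# One Euler integration step of the gradient update laws (39)-(41).
# Y_* are the regressor matrices of Properties 1-3; each maps its parameter
# vector to R^2, so Y_*.T @ delta_x has the dimension of that parameter vector.
def update_estimates(a_zin, a_zd, a_z, Y_zin, Y_zd, Y_z, delta_x,
                     lam_zin, lam_zd, lam_z, dt):
    a_zin = a_zin + dt * lam_zin * (Y_zin.T @ delta_x)
    a_zd  = a_zd  - dt * lam_zd  * (Y_zd.T  @ delta_x)
    a_z   = a_z   - dt * lam_z   * (Y_z.T   @ delta_x)
    return a_zin, a_zd, a_z
```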
The novel reference velocity term (34), the update laws (39)–(41), and the stability analysis at the end of this section constitute one of the main contributions of this work.
Remark 2.
We assume that the update laws (39) and (40) do not lead to singularity of $\hat J_\Omega$; in other words, $\hat J_\Omega$ is supposed to be non-singular. This is a common assumption in uncalibrated visual servoing works [19,20,21]. Besides, one may adopt the strategy presented in [35] to avoid singularity of $\hat J_\Omega$.

3.2. High Order Disturbance Observer

Compared with the nonlinear disturbance observers in [31,36], the performance of a high order disturbance observer (HODO) is better because its disturbance model incorporates high order information about the disturbance. Besides, in order to prove asymptotic stability, the disturbance term is often assumed to be constant in previous works [31,36]; this assumption can be removed with a high order disturbance observer.
The purpose of employing a disturbance observer is to estimate and compensate the time-varying unknown lumped disturbance term $\tau_d$. The philosophy of designing the disturbance observer is to estimate the unmeasurable term $\tau_d$ through measurable states. To establish the high order disturbance observer, we first rewrite the closed-loop system (29) in state-space form:
$$\begin{cases}\dot x_1=x_2\\ \dot x_2=\underbrace{\hat M^{-1}\big(\tau_{ctrl}-\hat C x_2-\hat D x_2-\hat G\big)}_{f}+\underbrace{\hat M^{-1}\tau_d}_{d}\end{cases} \qquad (42)$$

where $x_1=\zeta$ and $x_2=\dot\zeta$ denote the system states. Besides, $f=\hat M^{-1}(\tau_{ctrl}-\hat C x_2-\hat D x_2-\hat G)$ is available in the system, and $d=\hat M^{-1}\tau_d$ is the unknown disturbance term to be estimated.
Suppose the disturbance term $d$ has a bounded $r$-th derivative:

$$\|d^{(r)}\|\le\delta_0$$

where $\delta_0$ is the upper bound. The disturbance term $d$ can be described through a linear system, given by [37]:

$$\begin{cases}\dot\xi=W\xi+E\,d^{(r)}\\ d=L\xi\end{cases} \qquad (43)$$

where $\xi\in\mathbb{R}^{(6+n)\cdot r}$ is an auxiliary variable. $W\in\mathbb{R}^{(6+n)r\times(6+n)r}$, $E\in\mathbb{R}^{(6+n)r\times(6+n)}$, and $L\in\mathbb{R}^{(6+n)\times(6+n)r}$ take the following forms:

$$W=\begin{bmatrix}0_{(6+n)(r-1)\times(6+n)}&I_{(6+n)(r-1)}\\ 0_{(6+n)\times(6+n)}&0_{(6+n)\times(6+n)(r-1)}\end{bmatrix}$$

$$E=\begin{bmatrix}0_{(6+n)(r-1)\times(6+n)}\\ I_{6+n}\end{bmatrix},\qquad L=\begin{bmatrix}I_{6+n}&0_{(6+n)\times(6+n)(r-1)}\end{bmatrix}$$
Combining (42) and (43), one obtains:

$$\begin{cases}\dot\xi=W\xi+E\,d^{(r)}\\ \dot x_2=f+L\xi\end{cases} \qquad (44)$$

Then a new extended system can be derived by introducing an auxiliary variable $\gamma=[\gamma_1\ \ \gamma_2]^T$ as follows:

$$\begin{bmatrix}\gamma_1\\ \gamma_2\end{bmatrix}=\begin{bmatrix}x_2\\ -p(x_2)+\xi\end{bmatrix} \qquad (45)$$

where $p(x_2)$ is a polynomial vector to be designed. Differentiating (45) with respect to time and substituting (44), the dynamics of $\gamma$ can be represented as:

$$\begin{cases}\dot\gamma_1=f+L\big(p(\gamma_1)+\gamma_2\big)\\ \dot\gamma_2=\big(W-l(\gamma_1)L\big)\gamma_2+T+E\,d^{(r)}\end{cases} \qquad (46)$$

where $l(\gamma_1)=\frac{\partial p(\gamma_1)}{\partial\gamma_1}$ is a matrix to be designed and $T=W\,p(\gamma_1)-l(\gamma_1)\big(f+L\,p(\gamma_1)\big)$, which is available through the measurable system states and the pre-designed $l(\gamma_1)$.
Then, the high order disturbance observer can be derived as:

$$\begin{cases}\dot{\hat\gamma}_2=\big(W-l(\gamma_1)L\big)\hat\gamma_2+T\\ \hat\xi=\hat\gamma_2+p(\gamma_1)\\ \hat d=L\hat\xi\end{cases} \qquad (47)$$
Finally, the estimate of $\tau_d$ follows:

$$\hat\tau_d=\hat M\,\hat d \qquad (48)$$
The estimation error is defined as $e_\xi=\xi-\hat\xi$, and it is governed by the following dynamic system:

$$\dot e_\xi=\dot\xi-\dot{\hat\xi}=W\xi+E\,d^{(r)}-\big(W-l(\gamma_1)L\big)\hat\gamma_2-T-\dot p(\gamma_1)=\big(W-l(\gamma_1)L\big)e_\xi+E\,d^{(r)} \qquad (49)$$

The principle of designing $l(\gamma_1)$ is to place the dominant pole of (49) in the left-half plane, so that the system described in (49) is bounded-input bounded-output (BIBO) stable. Once $l(\gamma_1)$ is chosen, $p(\gamma_1)$ can be obtained from the relationship $l(\gamma_1)=\frac{\partial p(\gamma_1)}{\partial\gamma_1}$. Since the system is BIBO stable, the disturbance estimation error is bounded and satisfies:

$$\|d-\hat d\|\le\delta_1$$
where $\delta_1$ is the upper bound. Since $\hat M$ is bounded, it can be deduced that:

$$\|\tau_d-\hat\tau_d\|=\|\hat M(d-\hat d)\|\le\|\hat M\|\,\|d-\hat d\|\le\sigma$$
where σ is an unknown bounded constant.
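A minimal sketch of the observer (47) for $r=2$, using the simulation choices $p(x_2)=[0.1x_2;\ 0.2x_2]$ and $l=[0.1I;\ 0.2I]$ from Section 4; the Euler integration step and the class wrapper are implementation choices of ours, not part of the paper.

```python
import numpy as np

# High order disturbance observer of Eq. (47) with r = 2.
class HODO:
    def __init__(self, dof, c1=0.1, c2=0.2):
        self.c1, self.c2 = c1, c2
        self.l = np.vstack([c1 * np.eye(dof), c2 * np.eye(dof)])  # l = dp/dx2
        self.W = np.block([[np.zeros((dof, dof)), np.eye(dof)],
                           [np.zeros((dof, dof)), np.zeros((dof, dof))]])
        self.L = np.hstack([np.eye(dof), np.zeros((dof, dof))])
        self.gamma2_hat = np.zeros(2 * dof)

    def step(self, x2, f, dt):
        """x2 = zeta_dot, f = known part of the acceleration; returns d_hat."""
        p = np.concatenate([self.c1 * x2, self.c2 * x2])          # p(x2)
        T = self.W @ p - self.l @ (f + self.L @ p)
        A = self.W - self.l @ self.L
        self.gamma2_hat += dt * (A @ self.gamma2_hat + T)         # Euler step of (47)
        return self.L @ (self.gamma2_hat + p)                     # d_hat = L xi_hat

# tau_d_hat then follows Eq. (48): tau_d_hat = M_hat @ d_hat.
obs = HODO(dof=9)
d_hat = obs.step(x2=np.zeros(9), f=np.zeros(9), dt=1e-3)
```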

3.3. Composite Controller

Now, by using the reference velocity $\dot\zeta_r$, the sliding vector $s$, the image error $\Delta x$, the disturbance estimate $\hat\tau_d$, and the approximation $\hat\sigma$ of the unknown bound, a composite controller with the HODO for uncalibrated visual servoing can be designed as:

$$\tau_{ctrl}=\hat M\big(\ddot\zeta_r-K_p s\big)-K_s\,\hat J_\Omega^{T}\Delta x+\hat C\,\dot\zeta_r+\hat D\,\dot\zeta+\hat G-\hat\tau_d-\hat\sigma\,\mathrm{sgn}(s) \qquad (50)$$

where $K_p$ and $K_s$ are positive diagonal matrices. In the controller, the first two terms are the feedback control terms in the vehicle-joint space and the image space, respectively, and $\hat C\dot\zeta_r+\hat D\dot\zeta+\hat G$ is the feed-forward control term. In the vehicle-joint space, the reference velocity term $\dot\zeta_r$ is provided by (34) and (39)–(41) through the estimation of $J_\Omega$ and $z$. $\hat\tau_d$ is the estimate from the disturbance observer in (47) and (48). $\hat\sigma$ is the estimated value of the constant bound $\sigma$, updated by the following adaptive law:

$$\dot{\hat\sigma}=\lambda_\sigma\,\|s\| \qquad (51)$$
Substituting (50) into (29), the closed-loop dynamics of the UVMS can be represented as:

$$\hat M\,\dot s=-\hat M K_p s-\hat C s+\hat M\tilde d-K_s\,\hat J_\Omega^{T}\Delta x-\hat\sigma\,\mathrm{sgn}(s) \qquad (52)$$

where $\tilde d=d-\hat d$.
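The control law (50) maps directly into code once the nominal matrices and estimates are available; the sketch below assumes all of them are provided and replaces $\mathrm{sgn}(s)$ with a tanh surrogate to limit chattering, which is our implementation choice rather than part of the paper.

```python
import numpy as np

# Sketch of the composite control law in Eq. (50); K_p and K_s are
# positive diagonal matrices (np.ndarray).
def composite_control(M_hat, C_hat, D_hat, G_hat, ddzeta_r, dzeta_r, dzeta,
                      s, J_hat, delta_x, tau_d_hat, sigma_hat, K_p, K_s):
    return (M_hat @ (ddzeta_r - K_p @ s)                # vehicle-joint-space feedback
            - K_s @ (J_hat.T @ delta_x)                 # image-space feedback
            + C_hat @ dzeta_r + D_hat @ dzeta + G_hat   # feed-forward term
            - tau_d_hat                                 # HODO compensation, Eq. (48)
            - sigma_hat * np.tanh(100.0 * s))           # smooth surrogate for sgn(s)
```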
The whole architecture of the proposed scheme can be found in Figure 3.
Theorem 1.
The adaptive laws (39)–(41), the high order disturbance observer (47) and (48), and the composite controller (50) guarantee the convergence of the image-space tracking errors, i.e., $\Delta x\to0$ and $\Delta\dot x\to0$ as $t\to\infty$.
Proof. 
Consider a Lyapunov candidate function:

$$V=\frac{1}{2}\Big(z\,\Delta x^T\Delta x+\Delta a_{zin}^T\lambda_{zin}^{-1}\Delta a_{zin}+\Delta a_{zd}^T\lambda_{zd}^{-1}\Delta a_{zd}+\Delta a_z^T\lambda_z^{-1}\Delta a_z+s^T K_s^{-1}\hat M s+K_s^{-1}\lambda_\sigma^{-1}\tilde\sigma^2\Big) \qquad (53)$$
From Properties 1–3, the unknown parameter vectors in (8), (18), and (21) are all constant. Differentiating (53) w.r.t. time, one has:

$$\dot V=\frac{1}{2}\Delta x^T\dot z\,\Delta x+\Delta x^T z\,\Delta\dot x+\Delta a_{zin}^T\lambda_{zin}^{-1}\dot{\hat a}_{zin}+\Delta a_{zd}^T\lambda_{zd}^{-1}\dot{\hat a}_{zd}+\Delta a_z^T\lambda_z^{-1}\dot{\hat a}_z+s^T K_s^{-1}\hat M\dot s+\frac{1}{2}s^T K_s^{-1}\dot{\hat M}s-K_s^{-1}\lambda_\sigma^{-1}\dot{\hat\sigma}\,\tilde\sigma \qquad (54)$$

Taking Property 6, (38)–(41), and (50) into (54), one has:

$$\begin{aligned}\dot V&=-\alpha_v\,z\,\Delta x^T\hat J_\Omega\hat J_\Omega^{T}\Delta x+\Delta x^T\hat J_\Omega s-s^T K_s^{-1}\hat M K_p s+s^T K_s^{-1}\hat M\tilde d-s^T\hat J_\Omega^{T}\Delta x-\hat\sigma\,s^T K_s^{-1}\mathrm{sgn}(s)-K_s^{-1}\lambda_\sigma^{-1}\dot{\hat\sigma}\,\tilde\sigma\\&=-\alpha_v\,z\,\Delta x^T\hat J_\Omega\hat J_\Omega^{T}\Delta x-s^T K_s^{-1}\hat M K_p s+s^T K_s^{-1}\hat M\tilde d-\hat\sigma\,s^T K_s^{-1}\mathrm{sgn}(s)-\tilde\sigma\,K_s^{-1}\|s\|\end{aligned} \qquad (55)$$
Let $s^T K_s^{-1}=\eta_s$; since $K_s$ is a positive diagonal matrix, it can be found that:

$$\mathrm{sgn}(s^T)=\mathrm{sgn}(\eta_s)$$

Since $\hat J_\Omega$ is non-singular and $\hat M$ is positive definite, one can find a nonnegative definite continuous function $W(\zeta)$ satisfying:

$$\begin{aligned}\dot V&=-\alpha_v\,z\,\Delta x^T\hat J_\Omega\hat J_\Omega^{T}\Delta x-s^T K_s^{-1}\hat M K_p s+\eta_s\hat M\tilde d-\hat\sigma\,\eta_s\,\mathrm{sgn}(\eta_s^T)-\tilde\sigma\,K_s^{-1}\|s\|\\&\le-\alpha_v\,z\,\Delta x^T\hat J_\Omega\hat J_\Omega^{T}\Delta x-s^T K_s^{-1}\hat M K_p s+\sigma\|\eta_s\|-\hat\sigma\|\eta_s\|-\tilde\sigma\|\eta_s\|\\&=-\alpha_v\,z\,\Delta x^T\hat J_\Omega\hat J_\Omega^{T}\Delta x-s^T K_s^{-1}\hat M K_p s\\&=-W(\zeta)\le0\end{aligned} \qquad (56)$$
Integrating $W(\zeta)$ from time 0 to the current time $T$, it can be obtained that:

$$\int_0^T W(\zeta(r))\,dr\le V(\zeta(0))-V(\zeta(T))\le V(\zeta(0)) \qquad (57)$$

$V(\zeta(0))$ is fixed by the choice of the initial condition; therefore, $\int_0^T W(\zeta(r))\,dr$ is bounded.
Since $\hat J_\Omega\hat J_\Omega^{T}$ is positive definite, from (56), $\dot V$ is negative semi-definite, which implies that $\Delta x$, $\Delta a_{zin}$, $\Delta a_{zd}$, $\Delta a_z$, $s$, and $\tilde\sigma$ are all bounded. Therefore, $\hat a_{zin}$, $\hat a_{zd}$, $\hat a_z$, and $\hat\sigma$ are all bounded. Then the boundedness of $\hat J_{zin}(\zeta)$, $\dot{\hat z}$, $\hat J_{zd}(\zeta)$, and $\hat z$ can be obtained; hence, the boundedness of $\hat J_\Omega$ can be concluded from (32). Since $\dot x_d$ and $x_d$ are bounded, $\dot\zeta_r$ is also bounded. Because $s$ and $\dot\zeta_r$ are bounded, the boundedness of $\dot\zeta$ is achieved. From (3) and (7), it can be obtained that $\dot x$ and $\dot z$ are bounded; hence, the boundedness of $\Delta\dot x$ is achieved. From (39)–(41), the boundedness of $\dot{\hat a}_{zin}$, $\dot{\hat a}_{zd}$, and $\dot{\hat a}_z$ can be obtained. Therefore, the boundedness of $\dot{\hat J}_{zd}(\zeta)$ and $\dot{\hat J}_{zin}(\zeta)$ can be achieved by differentiating (7) and (8), respectively. As $\dot{\hat J}_{zin}(\zeta)-\frac{\dot x+\dot x_d}{2}\hat J_{zd}(\zeta)-\frac{x+x_d}{2}\dot{\hat J}_{zd}(\zeta)=\dot{\hat J}_\Omega$, the boundedness of $\dot{\hat J}_\Omega$ can also be achieved.
Differentiating (34) w.r.t. time, one has:

$$\ddot\zeta_r=\dot{\hat J}_\Omega^{\dagger}\hat z\dot x_d+\hat J_\Omega^{\dagger}\dot{\hat z}\dot x_d+\hat J_\Omega^{\dagger}\hat z\ddot x_d-\alpha_v\dot{\hat J}_\Omega^{T}\hat z\Delta x-\alpha_v\hat J_\Omega^{T}\dot{\hat z}\Delta x-\alpha_v\hat J_\Omega^{T}\hat z\Delta\dot x+\kappa_v\big(\dot{\hat J}_\Omega^{\dagger}\hat J_\Omega+\hat J_\Omega^{\dagger}\dot{\hat J}_\Omega\big)P_r-\kappa_v\big(I-\hat J_\Omega^{\dagger}\hat J_\Omega\big)\frac{\partial P_r}{\partial\zeta}\,\dot\zeta \qquad (58)$$

Because $\dot{\hat J}_\Omega$, $\dot{\hat z}$, $\dot x_d$, $\ddot x_d$, $\Delta\dot x$, $\dot\zeta$, $\hat G(\zeta)$, $\frac{\partial\hat G(\zeta)}{\partial\zeta}$, and $\frac{\partial^2\hat G(\zeta)}{\partial\zeta^2}$ are all bounded, the boundedness of $\ddot\zeta_r$ and $\dot s$ is achieved. Differentiating $W$ w.r.t. time, it satisfies:

$$\dot W=\alpha_v\,\dot z\,\Delta x^T\hat J_\Omega\hat J_\Omega^{T}\Delta x+2\alpha_v\,z\,\Delta x^T\hat J_\Omega\hat J_\Omega^{T}\Delta\dot x+2\alpha_v\,z\,\Delta x^T\dot{\hat J}_\Omega\hat J_\Omega^{T}\Delta x+2\,s^T K_s^{-1}\hat M K_p\dot s+s^T K_s^{-1}\dot{\hat M}K_p s \qquad (59)$$
$\dot W$ is bounded due to Properties 4 and 6. The boundedness of (59) shows that $W$ is uniformly continuous [38]. Considering the boundedness of $\int_0^t W(\zeta(r))\,dr$, Barbalat's lemma can be applied; therefore, $W\to0$ as $t\to\infty$, which means $\Delta x\to0$ and $s\to0$ as $t\to\infty$.
By differentiating (3), it is clear that $\ddot x$ is bounded. Hence $\Delta\ddot x$ is bounded too, and $\Delta\dot x$ is uniformly continuous. Since the convergence of $\Delta x$ has been established, it can be concluded that $\Delta\dot x\to0$ as $t\to\infty$ by applying Barbalat's lemma. □

4. Performance Analysis

In order to demonstrate the effectiveness of the proposed controller, numerical simulations have been performed on a UVMS platform. As shown in Figure 4, the UVMS carries a 3-DOF manipulator, and the whole system has 9 DOFs.
The main dynamic parameters of the UVMS are displayed in Table 1, and the Denavit–Hartenberg table of the manipulator is shown in Table 2.
In the simulations, an eye-in-hand camera with parameters $\alpha_u=50$ pixels, $\alpha_v=50$ pixels, $u_0=200$ pixels, and $v_0=100$ pixels is employed. The parameters $\varepsilon_{cam,ee}^{cam}=(0.1,0.2,0.1)^T$ m and $\varepsilon_{V,0}^{V}=(0,0,1.5)^T$ m describe the kinematic relationships, the configuration of the manipulator follows Table 2, and the target is fixed at the point $\varepsilon_{I,fe}^{I}=(2.5,6,5)^T$ m w.r.t. the inertial frame.
We set the initial estimates as follows: $\hat\alpha_u(0)=35$ pixels, $\hat\alpha_v(0)=75$ pixels, $\hat u_0(0)=170$ pixels, $\hat v_0(0)=210$ pixels, $\hat\varepsilon_{cam,ee}^{cam}(0)=(0.2,0.2,0.2)^T$ m, $\hat\varepsilon_{V,0}^{V}(0)=(0.1,1.4,1)^T$ m, and $\hat\varepsilon_{I,fe}^{I}(0)=(2.5,4,2)^T$ m. The initial estimated configuration of the manipulator follows the last column in Table 2.
As noted before, these uncertainties are represented in the three constant vectors $\Delta a_z$, $\Delta a_{zd}$, and $\Delta a_{zin}$; details can be found in the proofs of Properties 1–3.
During the simulations, the vehicle starts at the pose (0, 0, 0; 0, 0, 0) (m; rad), the manipulator starts at the position (0, 0, 0) (rad), and the initial point of the feature on the image plane is (216, 230).
In the simulations, the controller gains are set as follows:

$$\alpha_v=4.5\times10^{4},\quad K_s=10^{3},\quad \kappa_v=0.01,\quad \lambda_{zin}=0.1,\quad \lambda_{zd}=1\times10^{2},\quad \lambda_z=1\times10^{2}$$

$$K_p=18\times\mathrm{diag}\{10,10,10,15,15,15,5,5,5\}$$

$$W=\begin{bmatrix}0_{9\times9}&I_9\\ 0_{9\times9}&0_{9\times9}\end{bmatrix},\quad E=\begin{bmatrix}0_{9\times9}\\ I_9\end{bmatrix},\quad L=\begin{bmatrix}I_9&0_{9\times9}\end{bmatrix},\quad p=\begin{bmatrix}0.1\,\dot\zeta\\ 0.2\,\dot\zeta\end{bmatrix},\quad l=\begin{bmatrix}0.1\,I_9\\ 0.2\,I_9\end{bmatrix},\quad \hat\gamma_2(0)=\mathbf{1}_{9\times1}$$
The desired trajectory is given as:

$$\begin{bmatrix}u_d\\ v_d\end{bmatrix}=\begin{bmatrix}300+50\sin(0.1t)\\ 200+50\cos(0.1t)\end{bmatrix}$$

The proposed scheme is utilized to track this desired circular trajectory over 70 s.
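For reference, the desired feature trajectory above and its analytic derivative (needed by the reference velocity (34)) can be generated as follows:

```python
import numpy as np

# Desired image trajectory: a 50-pixel circle centred at (300, 200) pixels,
# traversed at 0.1 rad/s, together with its time derivative x_dot_d.
def desired_feature(t):
    x_d = np.array([300.0 + 50.0 * np.sin(0.1 * t),
                    200.0 + 50.0 * np.cos(0.1 * t)])
    dx_d = np.array([5.0 * np.cos(0.1 * t),      # d/dt of 50 sin(0.1 t)
                     -5.0 * np.sin(0.1 * t)])    # d/dt of 50 cos(0.1 t)
    return x_d, dx_d
```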
Figure 5a,b show the actual trajectory and the desired trajectory of the feature point. Figure 5c describes the error trajectory during the tracking routine. Figure 5d illustrates the velocity tracking performance under the proposed method. Time histories of vehicle position, vehicle orientation, and manipulator position can be found in Figure 5e–g. The initial state and the final state can be found in Figure 5h,i.
As expected, rapid convergence is achieved in both the position and velocity tracking performances.
From (18) and (21), one can obtain the connection between $a_{zd}$ and $\dot z$, and between $a_z$ and $z$. Since the dimensions of $a_z$ and $a_{zd}$ are large, we display the estimates of the feature depth and its derivative to reflect the adaptation process; the results can be found in Figure 6 and Figure 7.
From (8), we can define the following vector:

$$[\alpha,\ \beta]^T=J_{zin}(\zeta)\,\dot\zeta=Y_{zin}(\zeta,\dot\zeta)\,a_{zin}$$

and its estimate follows:

$$[\hat\alpha,\ \hat\beta]^T=Y_{zin}(\zeta,\dot\zeta)\,\hat a_{zin}$$

For the same reason, we exhibit the actual and estimated values of $\alpha$ and $\beta$ to illustrate the adaptation process of $a_{zin}$. As Figure 8 illustrates, the estimation errors stay bounded.
From the stability analysis in the previous section, $\Delta a_z$, $\Delta a_{zd}$, and $\Delta a_{zin}$ are bounded. Therefore, it is reasonable that the estimation errors shown in Figure 6, Figure 7 and Figure 8 are bounded.
Compared with the reference velocity term (33), the proposed method with (34) provides restoring moment optimization by using the GPM. Figure 9 illustrates the two-norm of the restoring moments based on (33) and (34), respectively, and the RMS values of the tracking errors based on (33) and (34) can be found in Table 3.
Because the initial error is relatively large, we calculate the RMS values from the time point t = 4 s to the end of the simulation. From Table 3, the controller under the reference velocity (33) provides slightly better results in both position tracking and velocity tracking than the one under (34). However, from Figure 9, the results obtained under the GPM in (34) substantially reduce the influence of the restoring moments compared with those under (33). Considering all these factors, the scheme with the GPM provides the better overall performance.
In addition, we compare the tracking performance of the proposed method with that of the same method without the HODO. As shown in Figure 10, the proposed scheme achieves more accurate and smoother results than the one without the HODO.
From the theoretical analysis in the previous section, the HODO achieves BIBO stability rather than asymptotic stability. Nevertheless, in the simulations the HODO still shows good performance in estimating the disturbances in all DOFs; the results can be found in Figure 11, and the RMS values of the disturbance estimation errors are given in Table 4.

5. Conclusions

In this paper, an uncalibrated visual servoing scheme is proposed for the UVMS with an eye-in-hand configuration camera. The parameters of the vision sensor, the kinematics of the UVMS, and the target position are all supposed to be unknown. First, we introduce a linear separation method to collect the kinematic uncertainties into constant vectors; this method can also be utilized in other free-floating articulated manipulators. Then, to deal with these kinematic uncertainties, novel adaptive laws are proposed to estimate these vectors, and the GPM is utilized to optimize the restoring moments of the system. Moreover, a high order disturbance observer is presented to estimate and compensate the lumped disturbances, and a novel composite controller is introduced to realize the convergence of the image errors. The stability of the closed-loop system is proved by utilizing the Lyapunov theory. Finally, trajectory tracking simulations based on a 9-DOF UVMS are carried out to test the proposed scheme under uncertainties and disturbances.

Author Contributions

Conceptualization, investigation, software and writing, original draft, J.L.; project administration, supervision and writing, review and editing, H.H.; data curation, Y.X. and H.W.; formal analysis, J.L., Y.X. and H.W.; methodology, L.W. and J.L. All authors are credited for their contributions to the writing and editing of the presented manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (Nos. 61633009, 51579053, 51209050, 51779052, 51779059), the Field Fund (No. 61403120409) of the 13th Five-Year Plan for Equipment Pre-research Fund, and the Key Basic Research Project of the "Shanghai Science and Technology Innovation Plan" (No. 15JC1403300).

Conflicts of Interest

The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

  1. Xiao, H.; Cui, R.; Xu, D. A sampling-based Bayesian approach for cooperative multiagent online search with resource constraints. IEEE Trans. Cybern. 2018, 48, 1773–1785.
  2. Qin, H.; Chen, H.; Sun, Y.; Wu, Z. The distributed adaptive finite-time chattering reduction containment control for multiple ocean bottom flying nodes. Int. J. Fuzzy Syst. 2019, 21, 607–619.
  3. Khatib, O.; Yeh, X.; Brantner, G.; Soe, B.; Kim, B.; Ganguly, S.; Stuart, H.; Wang, S.; Cutkosky, M.; Edsinger, A.; et al. Ocean One: A robotic avatar for oceanic discovery. IEEE Robot. Autom. Mag. 2016, 23, 20–29.
  4. Marani, G.; Choi, S.K.; Yuh, J. Underwater autonomous manipulation for intervention missions AUVs. Ocean Eng. 2009, 36, 15–23.
  5. Prats, M.; Ribas, D.; Palomeras, N.; García, J.C.; Nannen, V.; Wirth, S.; Fernández, J.J.; Beltrán, J.P.; Campos, R.; Ridao, P.; et al. Reconfigurable AUV for intervention missions: A case study on underwater object recovery. Intell. Serv. Robot. 2012, 5, 19–31.
  6. Ridao, P.; Carreras, M.; Ribas, D.; Sanz, P.J.; Oliver, G. Intervention AUVs: The next challenge. Annu. Rev. Control 2015, 40, 227–241.
  7. Eren, F.; Pe'eri, S.; Thein, M.W.; Rzhanov, Y.; Celikkol, B.; Swift, M.R. Position, orientation and velocity detection of unmanned underwater vehicles (UUVs) using an optical detector array. Sensors 2017, 17, 1741.
  8. Corke, P. Robotics, Vision and Control: Fundamental Algorithms in MATLAB®; Springer Tracts in Advanced Robotics: Berlin, Germany, 2017; pp. 455–456.
  9. Chaumette, F.; Hutchinson, S. Visual servo control. Part I: Basic approaches. IEEE Robot. Autom. Mag. 2006, 4, 82–90.
  10. Siciliano, B.; Sciavicco, L.; Villani, L.; Oriolo, G. Robotics: Modelling, Planning and Control; Springer: London, UK, 2009; pp. 422–427.
  11. Lopez-Franco, C.; Gomez-Avila, J.; Alanis, A.; Arana-Daniel, N.; Villaseñor, C. Visual servoing for an autonomous hexarotor using a neural network based PID controller. Sensors 2017, 17, 1865.
  12. Shortis, M. Calibration techniques for accurate measurements by underwater camera systems. Sensors 2015, 15, 30810–30827.
  13. Sebastián, J.M.; Pari, L.; Angel, L.; Traslosheros, A. Uncalibrated visual servoing using the fundamental matrix. Robot. Auton. Syst. 2009, 57, 1–10.
  14. Xiaolin, R.; Hongwen, L.; Yuanchun, L. Online image Jacobian identification using optimal adaptive robust Kalman filter for uncalibrated visual servoing. In Proceedings of the 2017 2nd Asia-Pacific Conference on Intelligent Robot Systems (ACIRS 2017), Wuhan, China, 16–18 June 2017.
  15. Kim, G.W. Uncalibrated visual servoing through the efficient estimation of the image Jacobian for large residual. J. Electr. Eng. Technol. 2013, 8, 385–392.
  16. Qian, J.; Su, J. Online estimation of image Jacobian matrix by Kalman–Bucy filter for uncalibrated stereo vision feedback. In Proceedings of the International Conference on Robotics and Automation (ICRA 2002), Washington, DC, USA, 11–15 May 2002; pp. 562–567.
  17. Huang, X.H.; Zeng, X.J.; Wang, M. SVM-based identification and un-calibrated visual servoing for micro-manipulation. Int. J. Autom. Comput. 2010, 7, 47–54.
  18. Liu, Y.-H.; Wang, H.; Wang, C.; Lam, K.K. Uncalibrated visual servoing of robots using a depth-independent interaction matrix. IEEE Trans. Robot. 2006, 22, 804–817.
  19. Wang, H. Passivity-based adaptive control for visually servoed robotic systems. In Proceedings of the 2014 4th Australian Control Conference, Canberra, Australia, 17–18 November 2014; pp. 152–157.
  20. Wang, H.; Cheah, C.C.; Ren, W.; Xie, Y. Passive separation approach to adaptive visual tracking for robotic systems. IEEE Trans. Control Syst. Technol. 2018, 26, 2232–2241.
  21. Liang, X.; Wang, H.; Liu, Y.H.; Chen, W.; Zhao, J. A unified design method for adaptive visual tracking control of robots with eye-in-hand/fixed camera configuration. Automatica 2015, 59, 97–105.
  22. Wang, H.; Guo, D.; Xu, H.; Chen, W.; Liu, T.; Leang, K.K. Eye-in-hand tracking control of a free-floating space manipulator. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 1855–1865.
  23. Ji-Yong, L.; Hao, Z.; Hai, H.; Xu, Y. Design and vision based autonomous capture of sea organism with absorptive type remotely operated vehicle. IEEE Access 2018, 6, 73871–73884.
  24. Gao, J.; Proctor, A.A.; Shi, Y.; Bradley, C. Hierarchical model predictive image-based visual servoing of underwater vehicles with adaptive neural network dynamic. IEEE Trans. Cybern. 2016, 46, 2323–2334.
  25. Sivčev, S.; Rossi, M.; Coleman, J.; Dooly, G.; Omerdić, E.; Toal, D. Fully automatic visual servoing control for work-class marine intervention ROVs. Control Eng. Pract. 2018, 74, 153–167.
  26. Xu, F.; Wang, H.; Chen, W.; Wang, J. Adaptive visual servoing control for an underwater soft robot. Assem. Autom. 2018, 38, 669–677.
  27. Huang, H.; Tang, Q.; Li, H.; Liang, L.; Li, W.; Pang, Y. Vehicle-manipulator system dynamic modeling and control for underwater autonomous manipulation. Multibody Syst. Dyn. 2017, 41, 125–147.
  28. Li, T.; Zhao, H.; Chang, Y. Visual servoing tracking control of uncalibrated manipulators with a moving feature point. Int. J. Syst. Sci. 2018, 49, 11.
  29. Jo, J.; Lee, D.; Tran, D.T.; Oh, Y.; Oh, S.R. On-line gravity estimation method using inverse gravity regressor for robot manipulator control. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, 28 September–2 October 2015; pp. 5429–5434.
  30. Antonelli, G. Underwater Robots: Motion and Force Control of Vehicle-Manipulator Systems; Springer: Berlin, Germany, 2006.
  31. Mohammadi, A.; Tavakoli, M.; Marquez, H.J.; Hashemzadeh, F. Nonlinear disturbance observer design for robotic manipulators. Control Eng. Pract. 2013, 21, 253–267.
  32. Han, J.; Chung, W.K. Redundancy resolution for underwater vehicle-manipulator systems with minimizing restoring moments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007.
  33. Han, J.; Chung, W.K. Active use of restoring moments for motion control of an underwater vehicle-manipulator system. IEEE J. Ocean. Eng. 2014, 39, 100–109.
  34. Tang, Q.; Liang, L.; Xie, J.; Li, Y.; Deng, Z. Task-priority redundancy resolution on acceleration level for underwater vehicle-manipulator system. Int. J. Adv. Robot. Syst. 2017, 14, 1–9.
  35. Wang, H.; Liu, Y.H.; Zhou, D. Dynamic visual tracking for manipulators using an uncalibrated fixed camera. IEEE Trans. Robot. 2007, 23, 610–617.
  36. Chen, W.; Ballance, D.; Gawthrop, P.; O'Reilly, J. A nonlinear disturbance observer for robotic manipulators. IEEE Trans. Ind. Electron. 2000, 47, 932–938.
  37. Su, J.; Chen, W.H.; Li, B. High order disturbance observer design for linear and nonlinear systems. In Proceedings of the IEEE International Conference on Information and Automation, Lijiang, China, 8–10 August 2015.
  38. Slotine, J.J.; Weiping, L. Adaptive manipulator control: A case study. IEEE Trans. Autom. Control 1988, 33, 995–1003.
Figure 1. Flow chart of this work.
Figure 2. Coordinate frames of the UVMS.
Figure 3. The architecture of the proposed scheme.
Figure 4. The configuration of the employed UVMS.
Figure 5. Trajectory tracking performance under the proposed scheme. (a) Position tracking performance on the image plane (u); (b) position tracking performance on the image plane (v); (c) position tracking errors on the image plane; (d) velocity tracking performance on the image plane; (e) time histories of the vehicle position; (f) time histories of the vehicle orientation; (g) time histories of the joint positions; (h) the initial state of the UVMS; (i) the final state of the UVMS.
Figure 6. Actual depth and estimated depth of the feature.
Figure 7. Actual depth derivative and estimated depth derivative of the feature.
Figure 8. Adaptation process of $a_{zin}$. (a) Visual tracking performance on $\alpha$; (b) visual tracking performance on $\beta$.
Figure 9. Two-norm of the restoring moments under different reference velocity terms.
Figure 10. Trajectory tracking performance. (a) with the HODO; and, (b) without the HODO.
Figure 11. Disturbance estimations on: (a) vehicle-X (N); (b) vehicle-Y (N); (c) vehicle-Z (N); (d) vehicle-X (N·m); (e) vehicle-Y (N·m); (f) vehicle-Z (N·m); (g) manipulator-Joint 1 (N·m); (h) manipulator-Joint 2 (N·m); (i) manipulator-Joint 3 (N·m).
Table 1. Main dynamic parameters of the system.

Item            | Vehicle | Link 1 | Link 2 | Link 3
M (kg)          | 82.131  | 2.603  | 3.520  | 3.159
I_xx (kg·m²)    | 4.949   | 0.0262 | 0      | 0
I_yy (kg·m²)    | 7.362   | 0.0262 | 0.1636 | 0.1636
I_zz (kg·m²)    | 8.658   | 0      | 0.1636 | 0.1636
Table 2. DH parameters of the manipulator.

Joint | a_i (m) | α_i (rad) | d_i (m) | q_i (rad) | d̂_i(0) (m)
1     | 0       | π/2       | 0       | q_1       | 0
2     | 0       | 0         | 0.6     | q_2       | 0.5
3     | 0       | 0         | 0.3     | q_3       | 0.5
Table 3. Tracking error RMS values.

RMS Error       | Position Tracking Error | Velocity Tracking Error
With the GPM    | 2.2198                  | 2.8284
Without the GPM | 1.9489                  | 2.4959
Table 4. Disturbance estimation error RMS values.

DOF                           | X      | Y      | Z      | K      | M      | N      | Joint 1 | Joint 2 | Joint 3
Disturbance estimation error  | 4.9847 | 3.8484 | 3.7432 | 1.3211 | 2.7874 | 0.6902 | 0.7359  | 1.0303  | 0.1255
