# Model Equivalence-Based Identification Algorithm for Equation-Error Systems with Colored Noise


Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, China

Author to whom correspondence should be addressed.

Academic Editor: Tom Burr

Received: 1 May 2015 / Accepted: 19 May 2015 / Published: 2 June 2015

For equation-error autoregressive (EEAR) systems, this paper proposes an identification algorithm based on a model equivalence transformation. The basic idea is to eliminate the autoregressive noise term in the model using the model transformation, to estimate the parameters of the transformed system and then to compute the parameter estimates of the original system by comparing coefficients under the model equivalence principle. For comparison, the recursive generalized least squares algorithm is briefly presented. The simulation results verify that the proposed algorithm is effective and produces more accurate parameter estimates.

System modeling and system identification are the prerequisite and foundation of all control issues. System identification plays a significant role in filtering [1–3], state estimation [4–6], system control [7–9] and optimization [10]. For example, Scarpiniti et al. proposed a nonlinear filtering approach based on spline nonlinear functions [11]; Zhuang et al. presented an algorithm to estimate the parameters and states of linear systems with canonical state-space descriptions [12]; Khan et al. discussed the theoretical implementation of robust attitude estimation for a rigid spacecraft system under measurement loss [13]. As system identification has become widely applied, many identification methods have been developed, e.g., the gradient identification methods [14,15], the hierarchical identification methods [16–18], the auxiliary model identification methods [19,20] and the multi-innovation identification methods [21].

Among these identification methods, the recursive identification methods [22–24] and the iterative identification methods [25–27] constitute two important categories of parameter estimation methods [28]. Recursive identification updates the parameter estimates at each time step, so it can be used to estimate the system parameters online. Yu et al. derived the recursive identification algorithm to identify the parameters in the parameterized Hammerstein–Wiener system model [29]; Filipovic presented a robust recursive algorithm for identification of a Hammerstein model with a static nonlinear block in polynomial form and a linear block described by the ARMAX model [30]; Cao et al. studied constrained two-dimensional recursive least squares identification problems for batch processes, which can improve the identification performance by incorporating a soft constraint term in the cost function to reduce the variation of the estimated parameters [31]. Liu and Lu derived the mathematical models and presented a least squares-based iterative algorithm for multi-input multirate systems with colored noises by replacing the unknown noise terms in the information vector with their estimates [32].

Some estimation methods focus on the estimation problems of equation-error-type systems [33–35], including the equation-error autoregressive (EEAR) systems, the equation-error moving average systems and the equation-error autoregressive moving average systems. For example, Xiao and Yue derived a filtering-based recursive least squares identification algorithm for nonlinear dynamical adjustment models [36]; Li developed a maximum likelihood estimation algorithm to estimate the parameters of Hammerstein nonlinear CARARMA systems by using the Newton iteration [37]; Ding presented a recursive generalized extended least squares algorithm for identifying controlled ARMA systems [28]; the basic idea is to replace the unknown terms in the information vector with their estimates. On the basis of the work in [28,32], the objective of this paper is to develop new identification algorithms using the model equivalence transformation and to provide more accurate parameter estimates.

The rest of this paper is organized as follows. Section 2 gives the identification model for EEAR systems. Section 3 gives a recursive generalized least squares algorithm, and Section 4 gives a model equivalence-based recursive least squares algorithm. Section 5 computes the parameter estimates of the original system. Section 6 provides a numerical example to demonstrate the effectiveness of the proposed algorithms. Finally, some concluding remarks are made in Section 7.

Let us define some notation. “A =: X” or “X := A” represents “A is defined as X”; the superscript T denotes the matrix/vector transpose; the norm of a matrix **X** is defined by ||**X**||^{2} := tr[**XX**^{T}]; **I** stands for an identity matrix of appropriate size;
$\widehat{\mathit{X}}(t)$ represents the estimate of **X** at time t.

Consider the following equation-error autoregressive system, i.e., the controlled autoregressive autoregressive (CARAR) system, in Figure 1:

$$A(z)y(t)=B(z)u(t)+\frac{1}{C(z)}v(t)$$

where u(t) and y(t) are the measured input and output of the system, respectively, v(t) represents stochastic white noise with zero mean and variance σ^{2}, and A(z), B(z) and C(z) denote the polynomials in the unit backward shift operator z^{−1} [i.e., z^{−1}y(t) = y(t − 1)]:

$$\begin{array}{l}A(z):=1+{a}_{1}{z}^{-1}+{a}_{2}{z}^{-2}+\cdots +{a}_{{n}_{a}}{z}^{-{n}_{a}},\\ B(z):={b}_{1}{z}^{-1}+{b}_{2}{z}^{-2}+\cdots +{b}_{{n}_{b}}{z}^{-{n}_{b}},\\ C(z):=1+{c}_{1}{z}^{-1}+{c}_{2}{z}^{-2}+\cdots +{c}_{{n}_{c}}{z}^{-{n}_{c}}\end{array}$$

Suppose that u(t) = 0, y(t) = 0, v(t) = 0 for t ⩽ 0, the orders n_{a}, n_{b} and n_{c} are known, and n := n_{a} + n_{b} + n_{c}.

Define the intermediate variable (the correlated stochastic noise):

$$w(t):=\frac{1}{C(z)}v(t)$$

Define the parameter vector **θ**_{s} and the information vector **φ**_{s}(t) of the system model and the parameter vector **θ**_{n} and the information vector **φ**_{n}(t) of the noise model as

$$\begin{array}{l}\phantom{\rule{0.9em}{0ex}}\mathit{\theta}:=\left[\begin{array}{l}{\mathit{\theta}}_{\mathrm{s}}\\ {\mathit{\theta}}_{\mathrm{n}}\end{array}\right]\in {\mathbb{R}}^{n},\\ \phantom{\rule{0.7em}{0ex}}{\mathit{\theta}}_{\mathrm{s}}:={[{a}_{1},{a}_{2},\cdots ,{a}_{{n}_{a}},{b}_{1},{b}_{2},\cdots ,{b}_{{n}_{b}}]}^{\mathrm{T}}\in {\mathbb{R}}^{{n}_{a}+{n}_{b}},\\ \phantom{\rule{0.6em}{0ex}}{\mathit{\theta}}_{\mathrm{n}}:={[{c}_{1},{c}_{2},\cdots ,{c}_{{n}_{c}}]}^{\mathrm{T}}\in {\mathbb{R}}^{{n}_{c}},\\ \mathit{\phi}(t):=\left[\begin{array}{l}{\mathit{\phi}}_{\mathrm{s}}(t)\\ {\mathit{\phi}}_{\mathrm{n}}(t)\end{array}\right]\in {\mathbb{R}}^{n},\\ {\mathit{\phi}}_{\mathrm{s}}(t):={[-y(t-1),-y(t-2),\cdots ,-y(t-{n}_{a}),u(t-1),u(t-2),\cdots ,u(t-{n}_{b})]}^{\mathrm{T}}\in {\mathbb{R}}^{{n}_{a}+{n}_{b}},\\ {\mathit{\phi}}_{\mathrm{n}}(t):={[-w(t-1),-w(t-2),\cdots ,-w(t-{n}_{c})]}^{\mathrm{T}}\in {\mathbb{R}}^{{n}_{c}}\end{array}$$

where subscripts s and n denote the first letters of the words “system” and “noise”, respectively. **φ**_{s}(t) is the known information vector, which consists of the measured input-output data u(t − i) and y(t − i); **φ**_{n}(t) is the unknown information vector, which consists of the noise terms w(t − i). By means of the above definitions, Equations (2) and (3) can be expressed as:

$$\begin{array}{l}w(t)=[1-C(z)]w(t)+v(t)\\ \phantom{\rule{2em}{0ex}}={\mathit{\phi}}_{\mathrm{n}}^{\mathrm{T}}(t){\mathit{\theta}}_{\mathrm{n}}+v(t)\\ y(t)=[1-A(z)]y(t)+B(z)u(t)+w(t)\\ \phantom{\rule{2em}{0ex}}={\mathit{\phi}}_{\mathrm{s}}^{\mathrm{T}}(t){\mathit{\theta}}_{\mathrm{s}}+w(t)\\ \phantom{\rule{2em}{0ex}}={\mathit{\phi}}^{\mathrm{T}}(t)\mathit{\theta}+v(t)\end{array}$$

This is the identification model for the EEAR system in Equation (1). The objective of this paper is to propose new identification algorithms for estimating the parameters of EEAR systems.
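To make the identification model concrete, the recursions above can be simulated directly. The following is a minimal sketch, not from the paper: the function name `simulate_eear` and the indexing conventions (arrays start at t = 1, with zero initial conditions for t ⩽ 0, as assumed above) are illustrative assumptions.

```python
import numpy as np

def simulate_eear(a, b, c, u, v):
    """Simulate A(z)y(t) = B(z)u(t) + v(t)/C(z) with zero initial conditions.

    a, b, c hold the coefficients a_i, b_i, c_i of A(z), B(z), C(z);
    u[k] and v[k] are the values u(k+1) and v(k+1), i.e., the signals for t = 1, 2, ...
    """
    T = len(u)
    w = np.zeros(T + 1)   # w[t] stores w(t); index 0 covers the t <= 0 condition
    y = np.zeros(T + 1)
    for t in range(1, T + 1):
        # w(t) = [1 - C(z)]w(t) + v(t)
        w[t] = v[t - 1] - sum(c[i] * w[t - 1 - i]
                              for i in range(len(c)) if t - 1 - i >= 0)
        # y(t) = [1 - A(z)]y(t) + B(z)u(t) + w(t)
        y[t] = (w[t]
                - sum(a[i] * y[t - 1 - i] for i in range(len(a)) if t - 1 - i >= 0)
                + sum(b[i] * u[t - 2 - i] for i in range(len(b)) if t - 2 - i >= 0))
    return y[1:]          # y(1), ..., y(T)
```

With v ≡ 0 (so that w ≡ 0) and a unit impulse input, the output reduces to the impulse response of B(z)/A(z), which gives a simple sanity check of the recursion.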

It is well known that the recursive generalized least squares algorithm can identify CARAR systems [36].

The core idea is to substitute their estimates for the unmeasurable noise terms in the information vector. The following is the recursive generalized least squares (RGLS) algorithm for estimating the parameter vector **θ** of the EEAR systems:

$$\widehat{\mathit{\theta}}(t)=\widehat{\mathit{\theta}}(t-1)+{\mathit{L}}_{1}(t)[y(t)-{\widehat{\mathit{\phi}}}^{\mathrm{T}}(t)\widehat{\mathit{\theta}}(t-1)]$$

$${\mathit{L}}_{1}(t)={\mathit{P}}_{1}(t-1)\widehat{\mathit{\phi}}(t){[1+{\widehat{\mathit{\phi}}}^{\mathrm{T}}(t){\mathit{P}}_{1}(t-1)\widehat{\mathit{\phi}}(t)]}^{-1}$$

$${\mathit{P}}_{1}(t)=[\mathit{I}-{\mathit{L}}_{1}(t){\widehat{\mathit{\phi}}}^{\mathrm{T}}(t)]{\mathit{P}}_{1}(t-1),\phantom{\rule{0.5em}{0ex}}{\mathit{P}}_{1}(0)={p}_{0}\mathit{I}$$

$$\widehat{\mathit{\phi}}(t)=\left[\begin{array}{l}{\mathit{\phi}}_{\mathrm{s}}(t)\\ {\widehat{\mathit{\phi}}}_{\mathrm{n}}(t)\end{array}\right],\phantom{\rule{1em}{0ex}}\widehat{\mathit{\theta}}(t)=\left[\begin{array}{l}{\widehat{\mathit{\theta}}}_{\mathrm{s}}(t)\\ {\widehat{\mathit{\theta}}}_{\mathrm{n}}(t)\end{array}\right]$$

$${\mathit{\phi}}_{\mathrm{s}}(t):={[-y(t-1),-y(t-2),\cdots ,-y(t-{n}_{a}),u(t-1),u(t-2),\cdots ,u(t-{n}_{b})]}^{\mathrm{T}}$$

$${\widehat{\mathit{\phi}}}_{\mathrm{n}}(t):={[-\widehat{w}(t-1),-\widehat{w}(t-2),\cdots ,-\widehat{w}(t-{n}_{c})]}^{\mathrm{T}}$$

$$\widehat{w}(t)=y(t)-{\mathit{\phi}}_{\mathrm{s}}^{\mathrm{T}}(t){\widehat{\mathit{\theta}}}_{\mathrm{s}}(t)$$

$${\widehat{\mathit{\theta}}}_{\mathrm{s}}(t):={[{\widehat{a}}_{1}(t),{\widehat{a}}_{2}(t),\cdots ,{\widehat{a}}_{{n}_{a}}(t),{\widehat{b}}_{1}(t),{\widehat{b}}_{2}(t),\cdots ,{\widehat{b}}_{{n}_{b}}(t)]}^{\mathrm{T}}$$

$${\widehat{\mathit{\theta}}}_{\mathrm{n}}(t):={[{\widehat{c}}_{1}(t),{\widehat{c}}_{2}(t),\cdots ,{\widehat{c}}_{{n}_{c}}(t)]}^{\mathrm{T}}$$

The RGLS algorithm can estimate the parameters of EEAR systems on-line.
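The RGLS recursion above can be sketched compactly in code. This is a minimal illustrative implementation under the stated zero initial conditions, with `p0` standing for the large constant in P₁(0) = p₀**I**; the function name `rgls` and the data layout are assumptions, not the authors' reference code.

```python
import numpy as np

def rgls(u, y, na, nb, nc, p0=1e6):
    """Recursive generalized least squares for A(z)y = B(z)u + v/C(z).

    The unknown noise terms w(t-i) in the information vector are replaced
    by their estimates w_hat(t-i) = y(t-i) - phi_s(t-i)' * theta_s(t-i).
    """
    n = na + nb + nc
    theta = np.zeros(n)
    P = p0 * np.eye(n)
    w_hat = np.zeros(len(y) + 1)        # w_hat[t] estimates w(t); w_hat[0] = 0
    yy = np.concatenate(([0.0], y))     # yy[t] = y(t), zero initial condition
    uu = np.concatenate(([0.0], u))     # uu[t] = u(t)
    for t in range(1, len(y) + 1):
        phi_s = np.concatenate((
            [-yy[t - i] if t - i >= 0 else 0.0 for i in range(1, na + 1)],
            [uu[t - i] if t - i >= 0 else 0.0 for i in range(1, nb + 1)]))
        phi_n = np.array([-w_hat[t - i] if t - i >= 0 else 0.0
                          for i in range(1, nc + 1)])
        phi = np.concatenate((phi_s, phi_n))
        L = P @ phi / (1.0 + phi @ P @ phi)       # gain vector L_1(t)
        theta = theta + L * (yy[t] - phi @ theta) # parameter update
        P = (np.eye(n) - np.outer(L, phi)) @ P    # covariance update
        w_hat[t] = yy[t] - phi_s @ theta[:na + nb]
    return theta
```

On noise-free data the system-model part of the estimate converges to the true a_i and b_i, which makes a convenient check of the recursion.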

For the EEAR system in Equation (1), the information vector **φ**(t) of the recursive generalized least squares algorithm contains the unknown noise terms w(t − i). The usual solution is to replace the unknown noise terms w(t − i) with their estimates. However, the presence of these estimated noise terms in the information vector **φ**(t) affects the accuracy of the parameter estimates to some extent.

The method proposed in this paper transforms the original system with colored noise into an equation-error system using the model equivalence transformation, so that the information vector of the identification model consists only of the available inputs u(t − i) and outputs y(t − i). Since there are no noise terms to be estimated in the information vector, the identification accuracy can be improved.

Consider the CARAR system in Figure 1, which is rewritten as follows,

$$A(z)y(t)=B(z)u(t)+\frac{1}{C(z)}v(t)$$

Multiplying both sides of it by C(z) gives

$$A(z)C(z)y(t)=B(z)C(z)u(t)+v(t)$$

For simplicity, let n_{p} := n_{a} + n_{c} and n_{q} := n_{b} + n_{c}; define the polynomials:

$$P(z):=C(z)A(z)=1+{p}_{1}{z}^{-1}+{p}_{2}{z}^{-2}+\cdots +{p}_{{n}_{p}}{z}^{-{n}_{p}}$$

$$Q(z):=C(z)B(z)={q}_{1}{z}^{-1}+{q}_{2}{z}^{-2}+\cdots +{q}_{{n}_{q}}{z}^{-{n}_{q}}$$
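Numerically, the products P(z) = C(z)A(z) and Q(z) = C(z)B(z) are ordinary polynomial convolutions of the coefficient vectors. As a quick sketch (not from the paper), NumPy's `convolve` applied to the true coefficients of the example in Section 6 reproduces the true values of p_{i} and q_{i} listed in Table 1:

```python
import numpy as np

# True coefficient vectors from the example in Section 6 (powers z^0, z^-1, z^-2)
A = [1.0, -1.60, 0.66]
B = [0.0, 0.64, -0.34]   # B(z) has no constant term
C = [1.0, -0.55, 0.47]

P = np.convolve(C, A)    # coefficients of P(z) = C(z)A(z): 1, p1, p2, p3, p4
Q = np.convolve(C, B)    # coefficients of Q(z) = C(z)B(z): 0, q1, q2, q3, q4

# p_i -> -2.15, 2.01, -1.115, 0.3102 and q_i -> 0.64, -0.692, 0.4878, -0.1598,
# matching the "True values" row of Table 1.
```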

It is clear that Equation (14) reduces to an equation-error model, whose parameters can be estimated by the recursive least squares algorithm. Define the parameter vector **ϑ** and the information vector φ(t) as

$$\begin{array}{l}\phantom{\rule{1em}{0ex}}\mathit{\vartheta}:={[{p}_{1},{p}_{2},\cdots ,{p}_{{n}_{p}},{q}_{1},{q}_{2},\cdots ,{q}_{{n}_{q}}]}^{\mathrm{T}}\in {\mathbb{R}}^{{n}_{p}+{n}_{q}}\\ \varphi (t):={[-y(t-1),-y(t-2),\cdots ,-y(t-{n}_{p}),u(t-1),u(t-2),\cdots ,u(t-{n}_{q})]}^{\mathrm{T}}\in {\mathbb{R}}^{{n}_{p}+{n}_{q}}\end{array}$$

In this case, Equation (18) can be equivalently written as

$$y(t)={\varphi}^{\mathrm{T}}(t)\mathit{\vartheta}+v(t)$$

That is the identification model of Equation (18). Let
$\widehat{\mathit{\vartheta}}(t)$ be the estimate of **ϑ** at time t. We obtain the recursive least squares algorithm for identifying **ϑ** in Equation (19):

$$\widehat{\mathit{\vartheta}}(t)=\widehat{\mathit{\vartheta}}(t-1)+{\mathit{L}}_{2}(t)[y(t)-{\varphi}^{\mathrm{T}}(t)\widehat{\mathit{\vartheta}}(t-1)]$$

$${\mathit{L}}_{2}(t)={\mathit{P}}_{2}(t-1)\varphi (t){[1+{\varphi}^{\mathrm{T}}(t){\mathit{P}}_{2}(t-1)\varphi (t)]}^{-1}$$

$${\mathit{P}}_{2}(t)=[\mathit{I}-{\mathit{L}}_{2}(t){\varphi}^{\mathrm{T}}(t)]{\mathit{P}}_{2}(t-1),\phantom{\rule{0.5em}{0ex}}{\mathit{P}}_{2}(0)={p}_{0}\mathit{I}$$

$$\widehat{v}(t)=y(t)-{\varphi}^{\mathrm{T}}(t)\widehat{\mathit{\vartheta}}(t)$$

$$\varphi (t):={[-y(t-1),-y(t-2),\cdots ,-y(t-{n}_{p}),u(t-1),u(t-2),\cdots ,u(t-{n}_{q})]}^{\mathrm{T}}$$

$$\widehat{\mathit{\vartheta}}(t):={[{\widehat{p}}_{1}(t),{\widehat{p}}_{2}(t),\cdots ,{\widehat{p}}_{{n}_{p}}(t),{\widehat{q}}_{1}(t),{\widehat{q}}_{2}(t),\cdots ,{\widehat{q}}_{{n}_{q}}(t)]}^{\mathrm{T}}$$

From Equations (20)–(25), we can compute the parameter estimate
$\widehat{\mathit{\vartheta}}(t)$, i.e., the estimates of the parameters p_{i} and q_{i}. The following derives the model equivalence-based recursive least squares algorithm.

According to the acquired estimates
${\widehat{p}}_{i}(t)$ and
${\widehat{q}}_{i}(t)$ of the parameters p_{i} and q_{i}, we can compute the parameter estimates â_{i}(t),
${\widehat{b}}_{i}(t)$ and ĉ_{i}(t) of the original system. The key idea is to use the coefficient equivalence principle, and the details are as follows.

Assume that the estimates of A(z), B(z) and C(z) are

$$\begin{array}{l}\widehat{A}(t,z):=1+{\widehat{a}}_{1}(t){z}^{-1}+{\widehat{a}}_{2}(t){z}^{-2}+\cdots +{\widehat{a}}_{{n}_{a}}(t){z}^{-{n}_{a}},\\ \widehat{B}(t,z):={\widehat{b}}_{1}(t){z}^{-1}+{\widehat{b}}_{2}(t){z}^{-2}+\cdots +{\widehat{b}}_{{n}_{b}}(t){z}^{-{n}_{b}},\\ \widehat{C}(t,z):=1+{\widehat{c}}_{1}(t){z}^{-1}+{\widehat{c}}_{2}(t){z}^{-2}+\cdots +{\widehat{c}}_{{n}_{c}}(t){z}^{-{n}_{c}}\end{array}$$

According to Equations (16) and (17), we can approximately suppose

$$\widehat{P}(t,z):=\widehat{C}(t,z)\widehat{A}(t,z)=1+{\widehat{p}}_{1}(t){z}^{-1}+{\widehat{p}}_{2}(t){z}^{-2}+\cdots +{\widehat{p}}_{{n}_{p}}(t){z}^{-{n}_{p}}$$

$$\widehat{Q}(t,z):=\widehat{C}(t,z)\widehat{B}(t,z)={\widehat{q}}_{1}(t){z}^{-1}+{\widehat{q}}_{2}(t){z}^{-2}+\cdots +{\widehat{q}}_{{n}_{q}}(t){z}^{-{n}_{q}}$$

Based on the above assumptions, we let

$$\widehat{B}(t,z)\widehat{P}(t,z)=\widehat{A}(t,z)\widehat{Q}(t,z)$$

Substituting the expressions of
$\widehat{B}(t,z)$,
$\widehat{P}(t,z)$, $\widehat{A}(t,z)$ and
$\widehat{Q}(t,z)$ gives

$$\begin{array}{l}[{\widehat{b}}_{1}(t){z}^{-1}+\cdots +{\widehat{b}}_{{n}_{b}}(t){z}^{-{n}_{b}}][1+{\widehat{p}}_{1}(t){z}^{-1}+\cdots +{\widehat{p}}_{{n}_{p}}(t){z}^{-{n}_{p}}]\\ \phantom{\rule{5em}{0ex}}=[1+{\widehat{a}}_{1}(t){z}^{-1}+\cdots +{\widehat{a}}_{{n}_{a}}(t){z}^{-{n}_{a}}][{\widehat{q}}_{1}(t){z}^{-1}+\cdots +{\widehat{q}}_{{n}_{q}}(t){z}^{-{n}_{q}}]\end{array}$$

Expanding the above equation and comparing the coefficients of the same power of z^{−1} on both sides, we can set up (n_{b} + n_{p}) equations:

$$\begin{array}{l}\phantom{\rule{0.4em}{0ex}}{z}^{-1}:\phantom{\rule{0.4em}{0ex}}{\widehat{b}}_{1}(t)={\widehat{q}}_{1}(t),\\ \phantom{\rule{0.4em}{0ex}}{z}^{-2}:\phantom{\rule{0.4em}{0ex}}{\widehat{b}}_{1}(t){\widehat{p}}_{1}(t)+{\widehat{b}}_{2}(t)={\widehat{q}}_{1}(t){\widehat{a}}_{1}(t)+{\widehat{q}}_{2}(t),\\ \phantom{\rule{0.4em}{0ex}}{z}^{-3}:\phantom{\rule{0.4em}{0ex}}{\widehat{b}}_{1}(t){\widehat{p}}_{2}(t)+{\widehat{b}}_{2}(t){\widehat{p}}_{1}(t)+{\widehat{b}}_{3}(t)={\widehat{q}}_{1}(t){\widehat{a}}_{2}(t)+{\widehat{q}}_{2}(t){\widehat{a}}_{1}(t)+{\widehat{q}}_{3}(t),\\ \phantom{\rule{1em}{0ex}}\vdots \\ {z}^{-({n}_{b}+{n}_{p})+1}:\phantom{\rule{0.2em}{0ex}}{\widehat{b}}_{{n}_{b}-1}(t){\widehat{p}}_{{n}_{p}}(t)+{\widehat{b}}_{{n}_{b}}(t){\widehat{p}}_{{n}_{p}-1}(t)={\widehat{q}}_{{n}_{q}-1}(t){\widehat{a}}_{{n}_{a}}(t)+{\widehat{q}}_{{n}_{q}}(t){\widehat{a}}_{{n}_{a}-1}(t),\\ {z}^{-({n}_{b}+{n}_{p})}:\phantom{\rule{0.2em}{0ex}}{\widehat{b}}_{{n}_{b}}(t){\widehat{p}}_{{n}_{p}}(t)={\widehat{q}}_{{n}_{q}}(t){\widehat{a}}_{{n}_{a}}(t)\end{array}$$

which can be written in matrix form,

$$\mathit{S}(t){\widehat{\mathit{\vartheta}}}_{1}(t)=\mathit{B}(t)$$

where
$$\begin{array}{l}\phantom{\rule{0.2em}{0ex}}\mathit{S}(t):=[{\mathit{S}}_{p}(t),-{\mathit{S}}_{q}(t)]\in {\mathbb{R}}^{({n}_{b}+{n}_{p})\times ({n}_{a}+{n}_{b})},\\ {\widehat{\mathit{\vartheta}}}_{1}(t):={[{\widehat{b}}_{1}(t),{\widehat{b}}_{2}(t),\cdots ,{\widehat{b}}_{{n}_{b}}(t),{\widehat{a}}_{1}(t),{\widehat{a}}_{2}(t),\cdots ,{\widehat{a}}_{{n}_{a}}(t)]}^{\mathrm{T}}\in {\mathbb{R}}^{{n}_{a}+{n}_{b}},\\ {\mathit{S}}_{p}(t):=\left[\begin{array}{ccccc}\hfill 1\hfill & \hfill 0\hfill & \hfill \cdots \hfill & \hfill \cdots \hfill & \hfill 0\hfill \\ \hfill {\widehat{p}}_{1}(t)\hfill & \hfill 1\hfill & \hfill \ddots \hfill & \hfill \hfill & \hfill \vdots \hfill \\ \hfill {\widehat{p}}_{2}(t)\hfill & \hfill {\widehat{p}}_{1}(t)\hfill & \hfill 1\hfill & \hfill \ddots \hfill & \hfill \vdots \hfill \\ \hfill \vdots \hfill & \hfill {\widehat{p}}_{2}(t)\hfill & \hfill {\widehat{p}}_{1}(t)\hfill & \hfill \ddots \hfill & \hfill 0\hfill \\ \hfill {\widehat{p}}_{{n}_{p}-1}(t)\hfill & \hfill \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill 1\hfill \\ \hfill {\widehat{p}}_{{n}_{p}}(t)\hfill & \hfill {\widehat{p}}_{{n}_{p}-1}(t)\hfill & \hfill \hfill & \hfill \ddots \hfill & \hfill {\widehat{p}}_{1}(t)\hfill \\ \hfill 0\hfill & \hfill {\widehat{p}}_{{n}_{p}}(t)\hfill & \hfill \ddots \hfill & \hfill \hfill & \hfill {\widehat{p}}_{2}(t)\hfill \\ \hfill \vdots \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill {\widehat{p}}_{{n}_{p}-1}(t)\hfill & \hfill \vdots \hfill \\ \hfill \vdots \hfill & \hfill \hfill & \hfill \ddots \hfill & \hfill {\widehat{p}}_{{n}_{p}}(t)\hfill & \hfill {\widehat{p}}_{{n}_{p}-1}(t)\hfill \\ \hfill 0\hfill & \hfill \dots \hfill & \hfill \dots \hfill & \hfill 0\hfill & \hfill {\widehat{p}}_{{n}_{p}}(t)\hfill \end{array}\right]\in {\mathbb{R}}^{({n}_{b}+{n}_{p})\times {n}_{b}},\phantom{\rule{1em}{0ex}}\mathit{B}(t):=\left[\begin{array}{c}\hfill {\widehat{q}}_{1}(t)\hfill \\ \hfill {\widehat{q}}_{2}(t)\hfill \\ \hfill \vdots \hfill \\ 
\hfill {\widehat{q}}_{{n}_{q}}(t)\hfill \\ \hfill 0\hfill \\ \hfill 0\hfill \\ \hfill \vdots \hfill \\ \hfill 0\hfill \end{array}\right]\in {\mathbb{R}}^{{n}_{b}+{n}_{p}},\\ {\mathit{S}}_{q}(t):=\left[\begin{array}{ccccc}\hfill 0\hfill & \hfill 0\hfill & \hfill \cdots \hfill & \hfill \cdots \hfill & \hfill 0\hfill \\ \hfill {\widehat{q}}_{1}(t)\hfill & \hfill 0\hfill & \hfill \ddots \hfill & \hfill \hfill & \hfill \vdots \hfill \\ \hfill {\widehat{q}}_{2}(t)\hfill & \hfill {\widehat{q}}_{1}(t)\hfill & \hfill 0\hfill & \hfill \ddots \hfill & \hfill \vdots \hfill \\ \hfill \vdots \hfill & \hfill {\widehat{q}}_{2}(t)\hfill & \hfill {\widehat{q}}_{1}(t)\hfill & \hfill \ddots \hfill & \hfill 0\hfill \\ \hfill {\widehat{q}}_{{n}_{q}-1}(t)\hfill & \hfill \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill 0\hfill \\ \hfill {\widehat{q}}_{{n}_{q}}(t)\hfill & \hfill {\widehat{q}}_{{n}_{q}-1}(t)\hfill & \hfill \hfill & \hfill \ddots \hfill & \hfill {\widehat{q}}_{1}(t)\hfill \\ \hfill 0\hfill & \hfill {\widehat{q}}_{{n}_{q}}(t)\hfill & \hfill \ddots \hfill & \hfill \hfill & \hfill {\widehat{q}}_{2}(t)\hfill \\ \hfill \vdots \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill {\widehat{q}}_{{n}_{q}-1}(t)\hfill & \hfill \vdots \hfill \\ \hfill \vdots \hfill & \hfill \hfill & \hfill \ddots \hfill & \hfill {\widehat{q}}_{{n}_{q}}(t)\hfill & \hfill {\widehat{q}}_{{n}_{q}-1}(t)\hfill \\ \hfill 0\hfill & \hfill \dots \hfill & \hfill \dots \hfill & \hfill 0\hfill & \hfill {\widehat{q}}_{{n}_{q}}(t)\hfill \end{array}\right]\in {\mathbb{R}}^{({n}_{b}+{n}_{p})\times {n}_{a}}\end{array}$$

The least squares solution of this matrix equation is

$${\widehat{\mathit{\vartheta}}}_{1}(t)={[{\mathit{S}}^{\mathrm{T}}(t)\mathit{S}(t)]}^{-1}{\mathit{S}}^{\mathrm{T}}(t)\mathit{B}(t)$$

From Equation (29), we can get the estimates â_{i}(t) and
${\widehat{b}}_{i}(t)$ of parameters a_{i} and b_{i} from
${\widehat{\mathit{\vartheta}}}_{1}(t)$. According to the definition of
$\widehat{P}(t,z)$ in Equation (26), similarly, expanding the equation and comparing the coefficients on both sides of it gives the matrix equation

$${\mathit{S}}_{1}(t){\widehat{\mathit{\vartheta}}}_{2}(t)={\mathit{B}}_{1}(t)$$

where

$$\begin{array}{l}{\widehat{\mathit{\vartheta}}}_{2}(t):={[{\widehat{c}}_{1}(t),{\widehat{c}}_{2}(t),\cdots ,{\widehat{c}}_{{n}_{c}}(t)]}^{\mathrm{T}}\in {\mathbb{R}}^{{n}_{c}},\\ {\mathit{B}}_{1}(t):=\left[\begin{array}{c}{\widehat{\mathit{p}}}_{1}(t)-{\widehat{\mathit{a}}}_{1}(t)\\ {\widehat{\mathit{p}}}_{2}(t)-{\widehat{\mathit{a}}}_{2}(t)\\ \vdots \\ {\widehat{\mathit{p}}}_{{n}_{a}}(t)-{\widehat{\mathit{a}}}_{{n}_{a}}(t)\\ {\widehat{\mathit{p}}}_{{n}_{a}+1}(t)\\ {\widehat{\mathit{p}}}_{{n}_{a}+2}(t)\\ \vdots \\ {\widehat{\mathit{p}}}_{{n}_{p}}(t)\end{array}\right]\in {\mathbb{R}}^{{n}_{p}},\phantom{\rule{1em}{0ex}}{\mathit{S}}_{1}(t):=\left[\begin{array}{ccccc}\hfill 1\hfill & \hfill 0\hfill & \hfill \cdots \hfill & \hfill \cdots \hfill & \hfill 0\hfill \\ \hfill {\widehat{a}}_{1}(t)\hfill & \hfill 1\hfill & \hfill \ddots \hfill & \hfill \hfill & \hfill \vdots \hfill \\ \hfill {\widehat{a}}_{2}(t)\hfill & \hfill {\widehat{a}}_{1}(t)\hfill & \hfill 1\hfill & \hfill \ddots \hfill & \hfill \vdots \hfill \\ \hfill \vdots \hfill & \hfill {\widehat{a}}_{2}(t)\hfill & \hfill {\widehat{a}}_{1}(t)\hfill & \hfill \ddots \hfill & \hfill 0\hfill \\ \hfill {\widehat{a}}_{{n}_{a}-1}(t)\hfill & \hfill \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill 1\hfill \\ \hfill {\widehat{a}}_{{n}_{a}}(t)\hfill & \hfill {\widehat{a}}_{{n}_{a}-1}(t)\hfill & \hfill \hfill & \hfill \ddots \hfill & \hfill {\widehat{a}}_{1}(t)\hfill \\ \hfill 0\hfill & \hfill {\widehat{a}}_{{n}_{a}}(t)\hfill & \hfill \ddots \hfill & \hfill \hfill & \hfill {\widehat{a}}_{2}(t)\hfill \\ \hfill \vdots \hfill & \hfill \ddots \hfill & \hfill \ddots \hfill & \hfill {\widehat{a}}_{{n}_{a}-1}(t)\hfill & \hfill \vdots \hfill \\ \hfill \vdots \hfill & \hfill \hfill & \hfill \ddots \hfill & \hfill {\widehat{a}}_{{n}_{a}}(t)\hfill & \hfill {\widehat{a}}_{{n}_{a}-1}(t)\hfill \\ \hfill 0\hfill & \hfill \cdots \hfill & \hfill \cdots \hfill & \hfill 0\hfill & \hfill {\widehat{a}}_{{n}_{a}}(t)\hfill 
\end{array}\right]\in {\mathbb{R}}^{{n}_{p}\times {n}_{c}}\end{array}$$

Then, we obtain

$${\widehat{\mathit{\vartheta}}}_{2}(t)={[{\mathit{S}}_{1}^{\mathrm{T}}(t){\mathit{S}}_{1}(t)]}^{-1}{\mathit{S}}_{1}^{\mathrm{T}}(t){\mathit{B}}_{1}(t)$$

Based on Equation (30), we can obtain the estimates ĉ_{i}(t) of c_{i} from
${\widehat{\mathit{\vartheta}}}_{2}(t)$. Hence, we obtain all of the parameter estimates â_{i}(t),
${\widehat{b}}_{i}(t)$ and ĉ_{i}(t).
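The two coefficient-comparison steps are ordinary linear least squares problems, since S(t) and S₁(t) are banded polynomial-multiplication matrices built from the estimates. The following sketch solves both stages with the true p_{i}, q_{i} of the example in Section 6; the helper name `conv_matrix` and the function `recover_abc` are illustrative assumptions, not from the paper.

```python
import numpy as np

def conv_matrix(h, ncols):
    """Matrix whose j-th column is h shifted down by j rows,
    i.e., the matrix of multiplication by the polynomial with coefficients h."""
    S = np.zeros((len(h) + ncols - 1, ncols))
    for j in range(ncols):
        S[j:j + len(h), j] = h
    return S

def recover_abc(p, q, na, nb, nc):
    """Recover a_i, b_i, c_i from p_i, q_i via the two least squares
    problems in Equations (29) and (30)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    # Stage 1: coefficients of B(z)P(z) = A(z)Q(z), powers z^-1 .. z^-(nb+np)
    Sp = conv_matrix(np.concatenate(([1.0], p)), nb)   # multiplies [b1..b_nb] by P(z)
    Sq = conv_matrix(np.concatenate(([0.0], q)), na)   # multiplies [a1..a_na] by Q(z)
    S = np.hstack((Sp, -Sq))
    rhs = np.concatenate((q, np.zeros(nb + len(p) - len(q))))   # the vector B(t)
    ba = np.linalg.lstsq(S, rhs, rcond=None)[0]
    b, a = ba[:nb], ba[nb:]
    # Stage 2: coefficients of P(z) = C(z)A(z), powers z^-1 .. z^-np
    S1 = conv_matrix(np.concatenate(([1.0], a)), nc)   # multiplies [c1..c_nc] by A(z)
    B1 = p - np.concatenate((a, np.zeros(len(p) - na)))
    c = np.linalg.lstsq(S1, B1, rcond=None)[0]
    return a, b, c
```

With the exact true values p = (−2.15, 2.01, −1.115, 0.3102) and q = (0.64, −0.692, 0.4878, −0.1598), both systems are consistent and the procedure recovers a = (−1.60, 0.66), b = (0.64, −0.34) and c = (−0.55, 0.47).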

According to the above derivation, it is clear that the model equivalence-based recursive least squares (ME-RLS) algorithm in Equations (20)–(25) and (29)–(30) increases the complexity of computation compared with the RGLS algorithm. However, as the information vector of the ME-RLS algorithm does not contain noise vectors to be estimated, the estimation errors become smaller.

Consider the following CARAR system,

$$\begin{array}{l}A(z)y(t)=B(z)u(t)+\frac{1}{C(z)}v(t),\\ \phantom{\rule{1.5em}{0ex}}A(z)=1+{a}_{1}{z}^{-1}+{a}_{2}{z}^{-2}=1-1.60{z}^{-1}+0.66{z}^{-2},\\ \phantom{\rule{1.5em}{0ex}}B(z)={b}_{1}{z}^{-1}+{b}_{2}{z}^{-2}=0.64{z}^{-1}-0.34{z}^{-2},\\ \phantom{\rule{1.4em}{0ex}}C(z)=1+{c}_{1}{z}^{-1}+{c}_{2}{z}^{-2}=1-0.55{z}^{-1}+0.47{z}^{-2},\\ \phantom{\rule{2.6em}{0ex}}\mathit{\theta}={[{a}_{1},{a}_{2},{b}_{1},{b}_{2},{c}_{1},{c}_{2}]}^{\mathrm{T}}={[-1.60,0.66,0.64,-0.34,-0.55,0.47]}^{\mathrm{T}}\end{array}$$

Here, the input {u(t)} is taken as an uncorrelated persistent excitation signal sequence with zero mean and unit variance, and {v(t)} is taken as a stochastic white noise sequence with zero mean and variance σ^{2} = 0.10^{2}, independent of the input {u(t)}.

Using the model equivalence-based recursive least squares (ME-RLS) algorithm and the recursive generalized least squares (RGLS) algorithm to estimate the parameters of this system, the parameter estimates and their errors are shown in Tables 1–3, and the estimation errors δ versus the data length t are shown in Figure 2, where ${\delta}_{1}:=\|\widehat{\mathit{\vartheta}}(t)-\mathit{\vartheta}\|/\|\mathit{\vartheta}\|$ and ${\delta}_{2}:=\|\widehat{\mathit{\theta}}(t)-\mathit{\theta}\|/\|\mathit{\theta}\|$ are the estimation errors of the ME-RLS algorithm and the RGLS algorithm, respectively, with σ^{2} = 0.10^{2}; the corresponding system noise-to-signal ratio is δ_{ns} = 35.01%.
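As a quick consistency check (a sketch, not part of the paper), the definition of δ₂ reproduces the last-row error of the RGLS algorithm in Table 3:

```python
import numpy as np

theta_true = np.array([-1.60, 0.66, 0.64, -0.34, -0.55, 0.47])
# RGLS estimates at t = 3000 from Table 3
theta_hat = np.array([-1.56865, 0.63248, 0.63924, -0.32090, -0.53319, 0.48230])

delta2 = np.linalg.norm(theta_hat - theta_true) / np.linalg.norm(theta_true)
print(f"{100 * delta2:.3f}%")   # -> 2.506%, matching delta_2 at t = 3000 in Table 3
```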

From Tables 1–3 and Figure 2, we can obtain the following conclusions.

- The estimation errors of the ME-RLS algorithm become smaller, and the estimates converge to their true values as the data length increases (i.e., the proposed algorithm works well).
- The estimation errors of the ME-RLS algorithm are smaller than those of the RGLS algorithm, which means that the ME-RLS algorithm gives more accurate parameter estimates than the RGLS algorithm for CARAR systems.

This paper derives a recursive least squares algorithm based on the model transformation principle for CARAR systems. Compared with the generalized least squares identification algorithms, the algorithm presented in this paper reduces the number of noise terms to be estimated and, therefore, can generate more accurate parameter estimates. The proposed method can be used to study the identification problems of other systems with autoregressive terms.

This work was supported by the National Natural Science Foundation of China (No. 61273194) and the PAPD of Jiangsu Higher Education Institutions.

Joint work.

The authors declare no conflict of interest.

1. Scarpiniti, M.; Comminiello, D.; Parisi, R.; Uncini, A. Nonlinear system identification using IIR Spline Adaptive Filtering. Signal Process. **2015**, 108, 30–35.
2. Li, H.; Shi, Y. Robust H∞ filtering for nonlinear stochastic systems with uncertainties and random delays modeled by Markov chains. Automatica **2012**, 48, 159–166.
3. Zhang, L.; Wang, Z.P.; Sun, F.C.; Dorrell, D.G. Online parameter identification of ultracapacitor models using the extended Kalman filter. Algorithms **2014**, 7, 3204–3217.
4. Viegas, D.; Batista, P.; Oliveira, P.; Silvestre, C.; Chen, C.L.P. Distributed state estimation for linear multi-agent systems with time-varying measurement topology. Automatica **2015**, 54, 72–79.
5. Fang, H.; Wu, J.; Shi, Y. Genetic adaptive state estimation with missing input/output data. Proc. Inst. Mech. Eng. Part I: J. Syst. Control Eng. **2010**, 224, 611–617.
6. Fang, H.Z.; Shi, Y.; Yi, J.G. On stable simultaneous input and state estimation for discrete-time linear systems. Int. J. Adapt. Control Signal Process. **2011**, 25, 671–686.
7. Chen, F.W.; Garnier, H.; Gilson, M. Robust identification of continuous-time models with arbitrary time-delay from irregularly sampled data. J. Process Control **2015**, 25, 19–27.
8. Na, J.; Yang, J.; Wu, X.; Guo, Y. Robust adaptive parameter estimation of sinusoidal signals. Automatica **2015**, 53, 376–384.
9. Rincón, F.D.; Roux, G.A.C.L.; Lima, F.V. A novel ARX-based approach for the steady-state identification analysis of industrial depropanizer column datasets. Algorithms **2015**, 3, 257–285.
10. Upadhyay, P.; Kar, R.; Mandal, D.; Ghoshal, S.P.; Mukherjee, V. A novel design method for optimal IIR system identification using opposition based harmony search algorithm. J. Frankl. Inst. **2014**, 351, 2454–2488.
11. Scarpiniti, M.; Comminiello, D.; Parisi, R.; Uncini, A. Nonlinear spline adaptive filtering. Signal Process. **2013**, 93, 772–783.
12. Zhuang, L.F.; Pan, F.; Ding, F. Parameter and state estimation algorithm for single-input single-output linear systems using the canonical state space models. Appl. Math. Model. **2012**, 36, 3454–3463.
13. Khan, N.; Khattak, M.I.; Gu, D. Robust state estimation and its application to spacecraft control. Automatica **2012**, 48, 3142–3150.
14. Ding, F.; Liu, X.P.; Liu, G. Gradient based and least-squares based iterative identification methods for OE and OEMA systems. Digit. Signal Process. **2010**, 20, 664–677.
15. Ma, X.Y.; Ding, F. Gradient-based parameter identification algorithms for observer canonical state space systems using state estimates. Circuits Syst. Signal Process. **2015**, 34, 1697–1709.
16. Cao, Y.N.; Liu, Z.Q. Signal frequency and parameter estimation for power systems using the hierarchical identification principle. Math. Comput. Model. **2010**, 51, 854–861.
17. Ding, F. Hierarchical parameter estimation algorithms for multivariable systems using measurement information. Inf. Sci. **2014**, 227, 396–405.
18. Liu, Y.J.; Ding, F.; Shi, Y. Least squares estimation for a class of non-uniformly sampled systems based on the hierarchical identification principle. Circuits Syst. Signal Process. **2012**, 31, 1985–2000.
19. Ding, J.; Fan, C.X.; Lin, J.X. Auxiliary model based parameter estimation for dual-rate output error systems with colored noise. Appl. Math. Model. **2013**, 37, 4051–4058.
20. Liu, Y.J.; Xiao, Y.S.; Zhao, X.L. Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model. Appl. Math. Comput. **2009**, 215, 1477–1483.
21. Hu, Y.B.; Liu, B.L.; Zhou, Q. A multi-innovation generalized extended stochastic gradient algorithm for output nonlinear autoregressive moving average systems. Appl. Math. Comput. **2014**, 247, 218–224.
22. Wang, C.; Tang, T. Recursive least squares estimation algorithm applied to a class of linear-in-parameters output error moving average systems. Appl. Math. Lett. **2014**, 29, 36–41.
23. Dehghan, M.; Hajarian, M. The generalized QMRCGSTAB algorithm for solving Sylvester-transpose matrix equations. Appl. Math. Lett. **2013**, 26, 1013–1017.
24. Hu, Y.B.; Liu, B.L.; Zhou, Q.; Yang, C. Recursive extended least squares parameter estimation for Wiener nonlinear systems with moving average noises. Circuits Syst. Signal Process. **2014**, 33, 655–664.
25. Wang, C.; Tang, T. Several gradient-based iterative estimation algorithms for a class of nonlinear systems using the filtering technique. Nonlinear Dyn. **2014**, 77, 769–780.
26. Hu, Y.B. Iterative and recursive least squares estimation algorithms for moving average systems. Simul. Model. Pract. Theory **2013**, 34, 12–19.
27. Zhang, W.G. Decomposition based least squares iterative estimation for output error moving average systems. Eng. Comput. **2014**, 31, 709–725.
28. Ding, F. System Identification—New Theory and Methods; Science Press: Beijing, China, 2013.
29. Yu, F.; Mao, Z.Z.; Jia, M.X.; Yuan, P. Recursive parameter identification of Hammerstein-Wiener systems with measurement noise. Signal Process. **2014**, 105, 137–147.
30. Filipovic, V.Z. Consistency of the robust recursive Hammerstein model identification algorithm. J. Frankl. Inst. **2015**, 352, 1932–1945.
31. Cao, Z.X.; Yang, Y.; Lu, J.Y.; Gao, F.R. Constrained two dimensional recursive least squares model identification for batch processes. J. Process Control **2014**, 24, 871–879.
32. Liu, X.G.; Lu, J. Least squares based iterative identification for a class of multirate systems. Automatica **2010**, 46, 549–554.
33. Kon, J.; Yamashita, Y.; Tanaka, T.; Tashiro, A.; Daiguji, M. Practical application of model identification based on ARX models with transfer functions. Control Eng. Pract. **2013**, 21, 195–203.
34. Shardt, Y.A.W.; Huang, B. Closed-loop identification condition for ARMAX models using routine operating data. Automatica **2011**, 47, 1534–1537.
35. Chen, H.B.; Ding, F. Hierarchical least squares identification for Hammerstein nonlinear controlled autoregressive systems. Circuits Syst. Signal Process. **2015**, 34, 61–75.
36. Xiao, Y.S.; Yue, N. Parameter estimation for nonlinear dynamical adjustment models. Math. Comput. Model. **2011**, 54, 1561–1568.
37. Li, J.H. Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration. Appl. Math. Lett. **2013**, 26, 91–96.

**Table 1.** The ME-RLS estimates of the transformed parameters p_{i} and q_{i} and the estimation errors.

| t | p_{1} | p_{2} | p_{3} | p_{4} | q_{1} | q_{2} | q_{3} | q_{4} | δ (%) |
|---|---|---|---|---|---|---|---|---|---|
| 100 | −2.08475 | 2.00319 | −1.31058 | 0.46149 | 0.64205 | −0.64349 | 0.52244 | −0.25044 | 8.32032 |
| 200 | −2.05787 | 1.93093 | −1.19902 | 0.39471 | 0.64234 | −0.62503 | 0.49950 | −0.21433 | 5.72365 |
| 500 | −2.12143 | 2.01030 | −1.18986 | 0.36351 | 0.64167 | −0.66963 | 0.50619 | −0.19317 | 3.17127 |
| 1000 | −2.12381 | 1.97065 | −1.13217 | 0.34035 | 0.64041 | −0.67566 | 0.48170 | −0.18376 | 1.96727 |
| 2000 | −2.13697 | 1.99482 | −1.12626 | 0.32412 | 0.64233 | −0.68648 | 0.48887 | −0.16974 | 0.87634 |
| 3000 | −2.14903 | 2.01217 | −1.12549 | 0.31735 | 0.64077 | −0.69361 | 0.49182 | −0.16431 | 0.43027 |
| True values | −2.15000 | 2.01000 | −1.11500 | 0.31020 | 0.64000 | −0.69200 | 0.48780 | −0.15980 | |

**Table 2.** The ME-RLS estimates of a_{i}, b_{i} and c_{i} (computed from the estimates in Table 1) and the estimation errors δ_{1}.

| t | a_{1} | a_{2} | b_{1} | b_{2} | c_{1} | c_{2} | δ_{1} (%) |
|---|---|---|---|---|---|---|---|
| 100 | −1.72608 | 0.78068 | 0.64169 | −0.40895 | −0.36228 | 0.59524 | 8.32032 |
| 200 | −1.77617 | 0.82717 | 0.64094 | −0.44197 | −0.31792 | 0.52245 | 5.72365 |
| 500 | −1.72733 | 0.78537 | 0.64185 | −0.41403 | −0.41615 | 0.49611 | 3.17127 |
| 1000 | −1.69531 | 0.75136 | 0.64083 | −0.39851 | −0.43859 | 0.47136 | 1.96727 |
| 2000 | −1.61778 | 0.67729 | 0.64252 | −0.35225 | −0.51871 | 0.47880 | 0.87634 |
| 3000 | −1.58944 | 0.64860 | 0.64075 | −0.33525 | −0.55603 | 0.48171 | 0.43027 |
| True values | −1.60000 | 0.66000 | 0.64000 | −0.34000 | −0.55000 | 0.47000 | |

**Table 3.** The RGLS estimates of a_{i}, b_{i} and c_{i} and the estimation errors δ_{2}.

| t | a_{1} | a_{2} | b_{1} | b_{2} | c_{1} | c_{2} | δ_{2} (%) |
|---|---|---|---|---|---|---|---|
| 100 | −1.42838 | 0.50276 | 0.65682 | −0.24579 | −0.51676 | 0.44419 | 12.68831 |
| 200 | −1.45138 | 0.52495 | 0.64732 | −0.25074 | −0.58370 | 0.40453 | 11.53063 |
| 500 | −1.51420 | 0.58095 | 0.64157 | −0.28291 | −0.51480 | 0.46001 | 6.71055 |
| 1000 | −1.54163 | 0.60665 | 0.63846 | −0.30402 | −0.54078 | 0.47384 | 4.34927 |
| 2000 | −1.57741 | 0.63814 | 0.64101 | −0.32634 | −0.52490 | 0.49242 | 2.38922 |
| 3000 | −1.56865 | 0.63248 | 0.63924 | −0.32090 | −0.53319 | 0.48230 | 2.50582 |
| True values | −1.60000 | 0.66000 | 0.64000 | −0.34000 | −0.55000 | 0.47000 | |

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).