# Robust Rank Reduction Algorithm with Iterative Parameter Optimization and Vector Perturbation


*Keywords:* adaptive filters; beamforming algorithms; reduced rank


School of Electronic and Information Engineering, Nanjing University of Information Science & Technology, Ningliu Road 219, Nanjing 210044, China

Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology (CICAEET), Nanjing 210044, China

CETUC, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro 22451-900, Brazil

Department of Electronics, University of York, Heslington, York YO10 5DD, UK

Author to whom correspondence should be addressed.

Academic Editor: Erchin Serpedin

Received: 18 May 2015 / Revised: 28 July 2015 / Accepted: 29 July 2015 / Published: 5 August 2015

(This article belongs to the Special Issue Algorithms for Sensor Networks)

In dynamic propagation environments, beamforming algorithms may suffer from strong interference, steering vector mismatches, a low convergence speed and a high computational complexity. Reduced-rank signal processing techniques provide a way to address these problems. This paper presents a low-complexity, robust, data-dependent dimensionality reduction algorithm based on iterative optimization with steering vector perturbation (IOVP) for reduced-rank beamforming and steering vector estimation. The proposed robust optimization procedure jointly adjusts the parameters of a rank reduction matrix and an adaptive beamformer. The optimized rank reduction matrix projects the received signal vector onto a subspace of lower dimension, and the beamformer/steering vector optimization is then performed in this reduced-dimension subspace. We devise efficient stochastic gradient and recursive least-squares algorithms for implementing the proposed robust IOVP design. The proposed robust IOVP beamforming algorithms achieve a faster convergence speed and an improved performance. Simulation results show that the proposed IOVP algorithms outperform some existing full-rank and reduced-rank algorithms with a comparable complexity.

Adaptive beamforming algorithms often encounter problems when they operate in dynamic environments with large sensor arrays. These problems include steering vector mismatches, high computational complexity and snapshot deficiency. Steering vector mismatches are often caused by calibration/pointing errors, and a high complexity is usually introduced by an expensive inverse operation of the covariance matrix of the received data. High computational complexity and snapshot deficiency may prevent the use of adaptive beamforming in important applications, like sonar and radar [1,2]. The adaptive beamforming techniques are usually required to have a trade-off between performance and complexity, which depends on the designer’s choice of the adaptation algorithm.

In order to overcome this computational complexity issue, adaptive versions of the linearly-constrained beamforming algorithms, such as minimum variance distortionless response (MVDR) with stochastic gradient and recursive least squares [1,2,3], have been extensively reported. These adaptive algorithms estimate the data covariance matrix iteratively, and the complexity is reduced by recursively computing the weights. However, in a dynamic environment with large sensor arrays, such as those found in radar and sonar applications, adaptive beamformers with a large number of array elements may fail in tracking signals embedded in strong interference and noise. The convergence speed and tracking properties of adaptive beamformers depend on the size of the sensor array and the eigen-spread of the received covariance matrix [2].

Steering vector mismatches, which are often found in practical beamforming applications, are responsible for a significant performance degradation of algorithms. Prior work on robust beamforming design [4,5,6,7] has considered different strategies to mitigate the effects of these mismatches. However, key limitations of these robust techniques [4,5,6,7] are their high computational cost for large sensor arrays and their limited suitability to dynamic environments. These algorithms need to estimate the covariance matrix of the sensor data, which is a challenging task for a system with a large array that operates in highly dynamic situations. Given this dependency on the number of sensor elements M, it is thus intuitive to reduce M while simultaneously extracting the key features of the original signal via an appropriate transformation.

Reduced-rank signal processing techniques [7,8,9,10,11,12,13,14,15,16,17] provide a way to address some of the problems mentioned above. Reduced-dimension methods are often needed to speed up the convergence of beamforming algorithms and reduce their computational complexity. They are particularly useful in scenarios in which the interference lies in a low-rank subspace and the number of degrees of freedom required to mitigate the interference through beamforming is significantly lower than that available in the sensor array. In reduced-rank schemes, a rank reduction matrix is introduced to project the original full-dimension received signal onto a lower dimension. The advantage of reduced-rank methods lies in their superior convergence and tracking performance, achieved by exploiting the low-rank nature of the signals. They offer a large reduction in the required number of training samples over full-rank methods [2], which may also address the problem of snapshot deficiency at low complexity. Several reduced-rank strategies for processing data collected from a large number of sensors have been reported in the last few years, including beamspace methods [7], Krylov subspace techniques [13,14] and methods of joint and iterative optimization of parameters [15,16,17].

Despite the improved convergence and tracking performance achieved with Krylov methods [13,14], they are relatively complex and may suffer from numerical problems. On the other hand, the joint iterative optimization (JIO) technique reported in [16] outperforms the Krylov-based method with efficient adaptive implementations. However, the theoretical JIO dimensionality reduction transform matrix, Equation (63) in [16], is in fact rank-one; the column space of the JIO matrix is precisely the MVDR line. The rank selection scheme may fail to work; performance degradation is then expected. In order to address this problem, in this paper, we introduce a low-complexity robust data-dependent dimensionality reduction algorithm for reduced-rank beamforming and steering vector estimation. The proposed iterative optimization with steering vector perturbation (IOVP) design strategy jointly optimizes a projection matrix and a reduced-rank beamformer by introducing several independently-generated small perturbations of the assumed steering vector. With these vectors, the scheme updates a different column of the projection matrix in each recursion and concatenates these columns to ensure that the projection matrix has a desired rank.

The contributions of this paper are summarized as follows:

- A bank of perturbed steering vectors is proposed as candidate array steering vectors around the true steering vector. The candidate steering vectors are responsible for performing rank reduction, and the reduced-rank beamformer forms the beam in the direction of the signal of interest (SoI).
- We devise efficient stochastic gradient (SG) and recursive least-squares (RLS) algorithms for implementing the proposed robust IOVP design.
- We introduce an automatic rank selection scheme in order to obtain the optimal beamforming performance with low computational complexity.
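As a rough sketch of the first contribution, a bank of candidate steering vectors can be generated around the assumed one. The Gaussian perturbation model and the scale `sigma` below are assumptions made purely for illustration; the paper only requires D − 1 independently-generated small perturbations of the assumed steering vector:

```python
import numpy as np

def perturbed_steering_bank(a, D, sigma=0.05, rng=None):
    """Assumed steering vector plus D-1 small random perturbations."""
    rng = np.random.default_rng() if rng is None else rng
    M = a.shape[0]
    bank = [a]
    for _ in range(D - 1):
        e = sigma * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
        bank.append(a + e)                # candidate vector around a
    return np.column_stack(bank)          # M x D matrix of candidates a_1..a_D

a = np.ones(8, dtype=complex)             # assumed (broadside) steering vector
A = perturbed_steering_bank(a, D=4, rng=np.random.default_rng(3))
```

Each column of this bank is then used to update one column of the projection matrix, as detailed in the following sections.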

Simulation results show that the proposed IOVP algorithms outperform existing full-rank and reduced-rank algorithms with a comparable complexity.

Let us consider a uniform linear array (ULA) with M sensor elements, which receives K narrowband signals, where $K\le M$. The directions of arrival (DoAs) of the K signals are ${\theta}_{0},\dots ,{\theta}_{K-1}$. The received vector $x\left[i\right]\in {\mathbb{C}}^{M\times 1}$ at the i-th snapshot (time instant) can be modelled as:
where $\theta ={[{\theta}_{0},\dots ,{\theta}_{K-1}]}^{T}\in {\mathbb{R}}^{K\times 1}$ convey the DoAs of the K signal sources. $A\left(\theta \right)=[a\left({\theta}_{0}\right),\dots ,a\left({\theta}_{K-1}\right)]\in {\mathbb{C}}^{M\times K}$ comprises K steering vectors, which are given as:
where ${\lambda}_{c}$ is the wavelength and ι is the inter-element distance of the ULA. The K steering vectors $a\left({\theta}_{k}\right)\in {\mathbb{C}}^{M\times 1}$ are assumed to be linearly independent. The source data are modelled as $s\left[i\right]\in {\mathbb{C}}^{K\times 1}$, and $n\left[i\right]\in {\mathbb{C}}^{M\times 1}$ is the noise vector, which is assumed to be zero-mean; N is the observation size, and $\left[i\right]$ denotes the time instant. For full-rank processing, the adaptive beamformer output for the SoI is written as:
where the beamformer ${\omega}_{k}\in {\mathbb{C}}^{M\times 1}$ is derived according to a design criterion. The optimal weight vector is obtained by maximizing the signal-to-interference-plus-noise ratio (SINR):
where ${R}_{k}$ and ${R}_{i+n}$ denote the SoI and interference plus noise covariance matrices, respectively.

$$x\left[i\right]=A\left(\theta \right)s\left[i\right]+n\left[i\right],\phantom{\rule{2.em}{0ex}}i=1,\dots ,N$$

$$a\left({\theta}_{k}\right)={[1,{e}^{-2\pi j\frac{\iota}{{\lambda}_{c}}cos\left({\theta}_{k}\right)},\dots ,{e}^{-2\pi j(M-1)\frac{\iota}{{\lambda}_{c}}cos\left({\theta}_{k}\right)}]}^{T}$$
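For illustration, the steering vector of Equation (2) can be generated as follows. This is a minimal sketch; half-wavelength spacing ($\iota/\lambda_c = 0.5$) is an assumed default:

```python
import numpy as np

def ula_steering(theta_deg, M, spacing_over_wavelength=0.5):
    """ULA steering vector a(theta_k) of Equation (2).

    theta_deg is the DoA in degrees, M the number of sensors and
    spacing_over_wavelength the ratio iota / lambda_c.
    """
    theta = np.deg2rad(theta_deg)
    m = np.arange(M)
    return np.exp(-2j * np.pi * m * spacing_over_wavelength * np.cos(theta))

# At broadside (90 degrees), cos(theta) = 0 and all phase terms vanish
a = ula_steering(90.0, 8)
```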

$${y}_{k}\left[i\right]={\omega}_{k}^{H}\left[i\right]x\left[i\right]$$

$$\text{SIN}{\text{R}}_{\text{opt}}=\frac{{\omega}_{opt}^{H}{R}_{k}{\omega}_{opt}}{{\omega}_{opt}^{H}{R}_{i+n}{\omega}_{opt}}$$

The MVDR/standard Capon beamformer (SCB) was reported as the optimal design criterion for the beamformer ${\omega}_{k}$. The MVDR criterion obtains ${\omega}_{k}\left[i\right]$ by solving the following optimization problem:
where $R=E\left[x\left[i\right]{x}^{H}\left[i\right]\right]\in {\mathbb{C}}^{M\times M}$ is the covariance matrix obtained from the sensor array, and the array response $a\left(\theta \right)$ can be calculated by employing a DoA estimation procedure. By using the technique of Lagrange multipliers, the solution of (5) is easily derived as:
where R is the covariance matrix of the received signal. In practical applications, R is approximated by the sample covariance matrix $\hat{R}$, where:
with N being the number of snapshots. Larger arrays require longer-duration snapshots, due to the longer transit time of sound across the array. Moreover, the computation of a reliable covariance matrix $\hat{R}$ requires $N\ge M$ snapshots, so larger arrays also demand more snapshots. In practice, this often leads to snapshot-deficient processing [18].

$$\begin{array}{cc}& min{\mathcal{J}}_{\text{MVDR}}\left({\omega}_{k}\left[i\right]\right)={\omega}_{k}^{H}\left[i\right]R{\omega}_{k}\left[i\right],\hfill \\ & \text{subject to}\phantom{\rule{2.em}{0ex}}{\omega}_{k}^{H}\left[i\right]a\left({\theta}_{k}\right)=1\hfill \end{array}$$

$${\omega}_{k}\left[i\right]=\frac{{R}^{-1}{a}_{k}\left({\theta}_{k}\right)}{{a}_{k}^{H}\left({\theta}_{k}\right){R}^{-1}{a}_{k}\left({\theta}_{k}\right)}$$

$$\hat{R}=\frac{1}{N}\sum _{i=1}^{N}x\left[i\right]{x}^{H}\left[i\right]$$
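As a concrete illustration of Equations (6) and (7), the sketch below estimates the sample covariance matrix from simulated snapshots and forms the MVDR weights. The single broadside source and the 0.1 noise scale are assumptions made for this example only:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 200

a = np.ones(M, dtype=complex)   # broadside ULA steering vector (cos(theta) = 0)
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
n = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(a, s) + n          # snapshots x[1..N] as columns, per Eq. (1)

R_hat = X @ X.conj().T / N      # sample covariance matrix, Eq. (7)
R_inv_a = np.linalg.solve(R_hat, a)
w = R_inv_a / (a.conj() @ R_inv_a)   # MVDR beamformer, Eq. (6)
```

Note that `np.linalg.solve` is used instead of an explicit matrix inverse, which is cheaper and numerically safer; the distortionless constraint $w^{H}a = 1$ holds by construction.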

The matrix inversion operation in (6) requires significant computational complexity when M is large. We derive an RLS adaptive algorithm for efficient computation of the MVDR beamformer. The inverse covariance matrix ${R}^{-1}$ can be obtained by solving the standard least-squares (LS) problem; the LS cost function with an exponential window is given by:
where $0\ll \alpha <1$ is the forgetting factor. By replacing the upper equation of (5) with (8), the Lagrangian is obtained as:

$${\mathcal{J}}_{k}\left[i\right]=\sum _{\tau =1}^{i}{\alpha}^{i-\tau}|{\omega}_{k}^{H}\left[i\right]{x}_{k}\left[\tau \right]{|}^{2}$$

$${\mathcal{L}}_{LS}\left({\omega}_{k}\left[i\right]\right)=\sum _{\tau =1}^{i}{\alpha}^{i-\tau}|{\omega}_{k}^{H}\left[i\right]{x}_{k}\left[\tau \right]{|}^{2}+2\Re \left[\lambda ({\omega}^{H}a\left({\theta}_{k}\right)-1)\right]$$

Taking the gradient of (9) with respect to $\omega \left[i\right]$, equating the terms to a zero vector and solving for λ, we obtain the beamformer as:
where the estimated covariance matrix is:

$${\omega}_{k}\left[i\right]=\frac{R{\left[i\right]}^{-1}{a}_{k}}{{a}_{k}^{H}R{\left[i\right]}^{-1}{a}_{k}}$$

$$\hat{R}\left[i\right]=\sum _{\tau =1}^{i}{\alpha}^{i-\tau}x\left[\tau \right]{x}^{H}\left[\tau \right]$$

By comparing Equations (6) and (10), we can see that the MVDR beamformer can be implemented in an iterative manner, and the complexity can be significantly reduced. The filter ${\omega}_{k}\left[i\right]$ can be estimated efficiently via the RLS algorithm. However, the laws that govern their convergence and tracking behaviours imply that they depend on the number of sensor elements M and on the eigenvalue spread of the covariance matrix R. In order to estimate ${R}^{-1}$ without matrix inversion, we use the matrix inversion lemma [2], the gain vector and the Riccati equation for the RLS algorithm given as:

$$k\left[i\right]=\frac{{\alpha}^{-1}{R}^{-1}[i-1]x\left[i\right]}{1+{\alpha}^{-1}{x}^{H}\left[i\right]{R}^{-1}[i-1]x\left[i\right]}$$

$${R}^{-1}\left[i\right]={\alpha}^{-1}{R}^{-1}[i-1]-{\alpha}^{-1}k\left[i\right]{x}^{H}\left[i\right]{R}^{-1}[i-1]$$

The inverse correlation matrix ${R}^{-1}$ is obtained at each step by this recursive process at reduced computational complexity. Equation (13) is initialized with an identity matrix, ${R}^{-1}\left[0\right]=\delta I$, where δ is a positive constant. The above-mentioned full-rank beamformers usually suffer from high complexity and low convergence speed. In the following section, we focus on the design of the proposed low-complexity reduced-dimension beamforming algorithms.
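The gain-vector/Riccati recursion of Equations (12) and (13) can be sketched as follows; α = 0.99 and δ = 10 are illustrative values, not values from the paper. The direct recursion of Equation (11) is run alongside as a check:

```python
import numpy as np

def rls_inverse_update(R_inv, x, alpha):
    """One gain-vector/Riccati update of R^{-1}, Eqs. (12)-(13)."""
    Rx = R_inv @ x
    k = (Rx / alpha) / (1.0 + (x.conj() @ Rx) / alpha)      # gain vector, Eq. (12)
    return (R_inv - np.outer(k, x.conj() @ R_inv)) / alpha  # Riccati, Eq. (13)

rng = np.random.default_rng(1)
M, alpha, delta = 6, 0.99, 10.0
R_inv = delta * np.eye(M, dtype=complex)   # initialization R^{-1}[0] = delta * I
R = np.eye(M, dtype=complex) / delta       # matching start for the direct recursion

for _ in range(300):
    x = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    R = alpha * R + np.outer(x, x.conj())  # exponentially windowed R[i], Eq. (11)
    R_inv = rls_inverse_update(R_inv, x, alpha)
```

After the loop, `R_inv` matches `np.linalg.inv(R)` up to floating-point error, while each update costs only O(M²) instead of the O(M³) of a direct inversion.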

The filter ${\omega}_{k}\left[i\right]$ in Equation (10) can be estimated efficiently via the RLS algorithm; however, the convergence and tracking behaviours depend on M and on the eigenvalue spread of R. Reduced-dimension methods are introduced to speed up the convergence of beamforming algorithms and to reduce their computational complexity [5,8]. A reduced-rank algorithm must extract the most important features of the processed data by performing dimensionality reduction. This transformation is carried out by applying a matrix ${S}_{D}\in {\mathbb{C}}^{M\times D}$ on the received data as given by:
where, in what follows, we denote the D-dimensional terms with a “bar” sign. The obtained received vector $\overline{x}\left[i\right]$ is the new input to a filter given by the vector $\overline{\omega}={[{\overline{\omega}}_{1},{\overline{\omega}}_{2},\dots ,{\overline{\omega}}_{D}]}^{T}$, and the resulting filter output is:

$$\overline{x}\left[i\right]={S}_{D}^{H}x\left[i\right]$$

$${\overline{y}}_{k}\left[i\right]={\overline{\omega}}_{k}^{H}\overline{x}\left[i\right]$$

In order to design the reduced-rank filter ${\overline{\omega}}_{k}$, from Equation (6), we consider the following optimization problem:

$$\begin{array}{cc}& min{\mathcal{J}}_{\text{reduced}-\text{rank}}\left({\overline{\omega}}_{k}\left[i\right]\right)={\overline{\omega}}_{k}^{H}\left[i\right]\overline{R}{\overline{\omega}}_{k}\left[i\right],\hfill \\ & \text{subject to}\phantom{\rule{2.em}{0ex}}{\overline{\omega}}_{k}^{H}\left[i\right]\overline{a}\left({\theta}_{k}\right)=1\hfill \end{array}$$

The solution to the above problem is:
where the reduced-dimension covariance matrix is given by $\overline{R}=E\left[\overline{x}\left[i\right]{\overline{x}}^{H}\left[i\right]\right]={S}_{D}^{H}R{S}_{D}$ and the reduced-rank steering vector is obtained as $\overline{a}\left({\theta}_{k}\right)={S}_{D}^{H}a\left({\theta}_{k}\right)$, where $E[\cdot]$ denotes the expectation operation. The above shows how a projection matrix ${S}_{D}$ can be used to perform dimensionality reduction on the received signal, resulting in improved convergence and tracking performance over the full-rank filter in Equations (6) and (10).

$${\overline{\omega}}_{k}\left[i\right]=\frac{{\overline{R}}^{-1}{\overline{a}}_{k}\left({\theta}_{k}\right)}{{\overline{a}}_{k}^{H}\left({\theta}_{k}\right){\overline{R}}^{-1}{\overline{a}}_{k}\left({\theta}_{k}\right)}$$
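The dimensionality reduction chain of Equations (14)–(17) can be sketched as below. Here `S_D` is a random orthonormal projection used purely for illustration, not the optimized rank reduction matrix designed in the following sections:

```python
import numpy as np

rng = np.random.default_rng(2)
M, D, N = 16, 4, 400

a = np.exp(-1j * np.pi * np.arange(M) * np.cos(np.deg2rad(60)))  # a(theta_k)
X = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
R = X @ X.conj().T / N                    # full-rank covariance estimate

# Placeholder rank reduction matrix with orthonormal columns
S_D, _ = np.linalg.qr(rng.standard_normal((M, D))
                      + 1j * rng.standard_normal((M, D)))

x_bar = S_D.conj().T @ X                  # reduced-dimension data, Eq. (14)
R_bar = S_D.conj().T @ R @ S_D            # reduced covariance matrix
a_bar = S_D.conj().T @ a                  # reduced-rank steering vector

Ri_a = np.linalg.solve(R_bar, a_bar)
w_bar = Ri_a / (a_bar.conj() @ Ri_a)      # reduced-rank MVDR, Eq. (17)
y = w_bar.conj() @ x_bar                  # filter outputs, Eq. (15)
```

All matrix operations after the projection involve D-dimensional quantities only, which is the source of the complexity savings when $D\ll M$.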

In previous works, the JIO approach reported in [16] outperforms the Krylov-based method with efficient adaptive implementations; however, there was a problem in this approach. Specifically, the theoretical JIO dimensionality reduction transform matrix, Equation (63) in [16], is in fact rank-one. Consequently, when the reduced dimension is selected as greater than one, so that the JIO projection matrix has more than one column, pre-processing with the JIO projection matrix will yield a singular, non-invertible reduced-rank covariance matrix, which means that the reduced dimensional weights will not exist; the rank-one column space of the JIO matrix is precisely the MVDR line, and the rank selection may fail to work.

In order to address this issue, in the following, we detail a set of novel reduced-rank algorithms based on the proposed IOVP design of beamformers. The proposed IOVP design strategy jointly optimizes a projection matrix ${S}_{D}\left[i\right]$ and a reduced-rank beamformer ${\overline{\omega}}_{k}\left[i\right]$ by introducing several independently-generated small perturbations of the assumed steering vector and recursively updates a different column of the projection matrix to ensure a desired rank. The bank of adaptive beamformers in the front-end is responsible for performing dimensionality reduction, which is followed by a reduced-rank beamformer, which effectively forms the beam in the direction of the SoI. This two-stage scheme allows the adaptation with different update rates, which could lead to a significant reduction in the computational complexity per update. Specifically, this complexity reduction can be obtained as the dimensionality reduction performed by the rank reduction matrix could be updated less frequently than the reduced-rank beamformer.

The principle of the proposed IOVP reduced rank scheme is depicted in Figure 1, which employs a projection matrix ${S}_{D}\left[i\right]\in {\mathbb{C}}^{M\times D}$ to perform dimensionality reduction on data vector $x\left[i\right]\in {\mathbb{C}}^{M\times 1}$. The rank-reduced filter ${\overline{\omega}}_{k}\left[i\right]\in {\mathbb{C}}^{D\times 1}$ processes the reduced-rank data vector $\overline{x}\left[i\right]\in {\mathbb{C}}^{D\times 1}$ to obtain a scalar estimate ${\overline{y}}_{k}\left[i\right]$ of the k-th desired signal.

The design criterion of the MVDR-IOVP beamformer is given by the following optimization problem:
where R is the covariance matrix obtained from sensors and vector ${q}_{d}$ with dimension $D\times 1$ is a zero vector except its d-th element being one. The vector ${s}_{d}\in {\mathbb{C}}^{M\times 1}$ is the d-th column of the projection matrix ${S}_{D}\in {\mathbb{C}}^{M\times D}$. The vectors ${a}_{d},d=1\dots D$ represent the assumed steering vector and $D-1$ independently-generated small perturbations of the assumed steering vector. Different from the JIO approach in [16], where the columns of ${S}_{D}$ are jointly designed under the same criterion, the proposed IOVP approach (18) uses the vector ${q}_{d}$ to orthogonalize the columns of the projection matrix, and the columns can be independently updated with the perturbation vector in each recursion. The scheme updates a different column of the projection matrix in each recursion and concatenates these columns to form the projection matrix ${S}_{D}$; the concatenation procedure ensures that the projection matrix has a desired rank. According to Equation (28) in Section 3.4, an increased rank of ${S}_{D}$ is obtained for higher d, and the rank-one problem in [16] can be avoided. The constrained optimization problem in (18) can be solved by using the method of Lagrange multipliers [4]. The Lagrangian of the MVDR-IOVP design is expressed by:

$$\begin{array}{cc}\hfill {\displaystyle \underset{\omega ,{s}_{d}}{min}}\phantom{\rule{2.em}{0ex}}& {\overline{\omega}}^{H}\overline{R}\overline{\omega}={\omega}^{H}{S}_{D}^{H}R{S}_{D}\omega ,\hfill \\ \hfill \text{subject to}\phantom{\rule{2.em}{0ex}}& {\overline{\omega}}^{H}\sum _{d=1}^{D}{q}_{d}{s}_{d}^{H}{a}_{d}=1\hfill \end{array}$$

$$f(\omega ,{s}_{d})=E\left\{{\left|{\omega}^{H}\sum _{d=1}^{D}{q}_{d}{s}_{d}^{H}x\right|}^{2}\right\}+\lambda \left({\omega}^{H}\sum _{d=1}^{D}{q}_{d}{s}_{d}^{H}a-1\right)$$

In order to efficiently solve the above Lagrangian, in the following subsections, we introduce the stochastic gradient adaptation and the recursive least-squares adaptation methods.

In this subsection, we present a low-complexity SG [2] adaptive reduced-rank algorithm for efficient implementation of the IOVP algorithm. By computing the instantaneous gradient terms of (19) with respect to $\omega {\left[i\right]}^{*}$ and ${s}_{d}{\left[i\right]}^{*}$, we obtain:
where ${w}_{d}$ is the d-th element of the reduced-rank beamformer $\omega \left[i\right]$ and the projection matrices that enforce the constraints are:
the scalar ${z}^{*}\left[i\right]={x}^{H}\left[i\right]{S}_{D}\left[i\right]\overline{\omega}\left[i\right]={\tilde{x}}^{H}\left[i\right]\overline{\omega}$, and:
is the estimated steering vector in reduced dimension. The calculation of ${P}_{\overline{\omega}}\left[i\right]$ requires a number of ${D}^{2}+D+1$ complex multiplications; the computation of ${P}_{s}\left[i\right]$ and $z\left[i\right]$ requires ${D}^{2}+DM+M+1$ and $DM+D$ complex multiplications, respectively. Therefore, we can conclude that for each iteration, the SG adaptation requires $4MD+4{D}^{2}+3D+M+6$ complex multiplications.

$$\overline{\omega}[i+1]=\overline{\omega}\left[i\right]-{\mu}_{w}{P}_{w}\left[i\right]{S}_{D}^{H}\left[i\right]x\left[i\right]{z}^{*}\left[i\right]$$

$${s}_{d}[i+1]={s}_{d}\left[i\right]-{\mu}_{s}{P}_{s}\left[i\right]x\left[i\right]{z}^{*}\left[i\right]{w}_{d}^{*}\left[i\right],d=1,\dots ,D$$

$${P}_{w}\left[i\right]={I}_{D}-{\left({a}_{D}^{H}\left[i\right]{a}_{D}\left[i\right]\right)}^{-1}{a}_{D}\left[i\right]{a}_{D}^{H}\left[i\right]$$

$${P}_{s}\left[i\right]={I}_{M}-{\left({a}^{H}\left[i\right]a\left[i\right]\right)}^{-1}a\left[i\right]{a}^{H}\left[i\right]$$

$${a}_{D}\left[i\right]=\sum _{d=1}^{D}{q}_{d}{s}_{d}{\left[i\right]}^{H}a\left[i\right]\in {\mathbb{C}}^{D\times 1}$$
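A minimal single-recursion sketch of the SG adaptation of Equations (20)–(24) follows; the step sizes `mu_w` and `mu_s` and the random test data are illustrative assumptions, not values from the paper:

```python
import numpy as np

def iovp_sg_step(S_D, w_bar, x, a, mu_w=1e-3, mu_s=1e-4):
    """One IOVP stochastic-gradient recursion, per Eqs. (20)-(24)."""
    M, D = S_D.shape
    a_D = S_D.conj().T @ a                                    # Eq. (24)
    P_w = np.eye(D) - np.outer(a_D, a_D.conj()) / (a_D.conj() @ a_D)  # Eq. (22)
    P_s = np.eye(M) - np.outer(a, a.conj()) / (a.conj() @ a)          # Eq. (23)
    z_conj = x.conj() @ (S_D @ w_bar)                         # z*[i] = x^H S_D w
    w_new = w_bar - mu_w * (P_w @ (S_D.conj().T @ x)) * z_conj        # Eq. (20)
    S_new = S_D.copy()
    for d in range(D):                                        # Eq. (21), column d
        S_new[:, d] -= mu_s * (P_s @ x) * z_conj * np.conj(w_bar[d])
    return S_new, w_new

rng = np.random.default_rng(6)
M, D = 12, 3
S_D, _ = np.linalg.qr(rng.standard_normal((M, D)) + 1j * rng.standard_normal((M, D)))
w_bar = rng.standard_normal(D) + 1j * rng.standard_normal(D)
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)
a = np.exp(-1j * np.pi * np.arange(M) * np.cos(np.deg2rad(45)))
S_new, w_new = iovp_sg_step(S_D, w_bar, x, a)
```

Because $P_w$ projects onto the subspace orthogonal to $a_D$, the update leaves the constrained response ${a}_{D}^{H}\overline{\omega}$ unchanged from one recursion to the next.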

Here, we derive an adaptive reduced-rank RLS [2] type algorithm for efficient implementation of the MVDR-IOVP method. The reduced-rank beamformer $\overline{\omega}\left[i\right]$ is updated as follows:
where:

$$\overline{\omega}\left[i\right]=\frac{{R}_{D}^{-1}\left[i\right]{a}_{D}\left[i\right]}{{a}_{D}^{H}\left[i\right]{R}_{D}^{-1}\left[i\right]{a}_{D}\left[i\right]}$$

$$\tilde{k}[i+1]=\frac{{\alpha}^{-1}{R}_{D}^{-1}\left[i\right]\tilde{x}[i+1]}{1+{\alpha}^{-1}{\tilde{x}}^{H}[i+1]{R}_{D}^{-1}\left[i\right]\tilde{x}[i+1]}$$

$${R}_{D}^{-1}[i+1]={\alpha}^{-1}{R}_{D}^{-1}\left[i\right]-{\alpha}^{-1}\tilde{k}[i+1]{\tilde{x}}^{H}[i+1]{R}_{D}^{-1}\left[i\right]$$

The columns ${s}_{d}\left[i\right]$ of the rank reduction matrix are updated by:
where ${\beta}_{d}\left[i\right]={\sum}_{d=1}^{D}{s}_{d}\left[i\right]{w}_{d}\left[i\right]-{\sum}_{l=1,l\ne d}^{D}{s}_{l}\left[i\right]{w}_{l}\left[i\right]$ and:
where $0\ll \alpha <1$ is the forgetting factor. The inverse of the covariance matrix ${R}^{-1}$ is obtained recursively. Equation (30) is initialized by using an identity matrix ${R}^{-1}\left[0\right]=\delta I$ where δ is a positive constant. From Equation (28), we can see that with the proposed IOVP approach, by orthogonalizing the columns of the projection matrix ${s}_{d}\left[i\right]$, the M weights can be independently updated in each recursion, and the rank-one problem in Equation (22) of [16] can be addressed. The computational complexity of the proposed adaptive reduced-rank RLS-type MVDR-IOVP method requires $4{M}^{2}+3{D}^{2}+3D+2$ complex multiplications. The MVDR-IOVP algorithm has a complexity significantly lower than a full-rank scheme if a low rank ($D\ll M$) is selected.

$${s}_{d}\left[i\right]=\frac{{R}^{-1}\left[i\right]{a}_{d}\left[i\right]{a}_{d}^{H}\left[i\right]{\beta}_{d}\left[i\right]}{{a}_{d}^{H}\left[i\right]{R}^{-1}\left[i\right]{a}_{d}\left[i\right]{w}_{d}\left[i\right]},d=1,\dots ,D$$

$$k[i+1]=\frac{{\alpha}^{-1}{R}^{-1}\left[i\right]x[i+1]}{1+{\alpha}^{-1}{x}^{H}[i+1]{R}^{-1}\left[i\right]x[i+1]}$$

$${R}^{-1}[i+1]={\alpha}^{-1}{R}^{-1}\left[i\right]-{\alpha}^{-1}k[i+1]{x}^{H}[i+1]{R}^{-1}\left[i\right]$$

In this section, we present a robust beamforming method based on the robust capon beamforming (RCB) technique reported in [4] and the IOVP detailed in the previous section for robust beamforming applications with large sensor arrays. The proposed technique, denoted RCB-IOVP, gathers the robustness of the RCB approach [4] against uncertainties and the low complexity of IOVP techniques. Assuming that the DoA mismatch is within a spherical uncertainty set, the proposed RCB-IOVP technique solves the following optimization problem:
where $\overline{a}$ is the assumed steering vector and ${a}_{d}$ is the updated steering vector for each iteration. The constant ϵ is related to the radius of the uncertainty sphere. The Lagrangian of the RCB-IOVP constrained optimization problem is expressed by:
where ${R}_{D}^{-1}={S}_{D}^{H}{R}^{-1}{S}_{D}$ is the reduced-rank inverse covariance matrix. From the above Lagrangian, we devise efficient adaptive beamforming algorithms in what follows.

$$\begin{array}{cc}\hfill {\displaystyle \underset{{a}_{d},{s}_{d}}{min}}\phantom{\rule{2.em}{0ex}}& {a}_{d}^{H}{S}_{D}^{H}{R}^{-1}{S}_{D}{a}_{d},\hfill \\ \hfill \text{subject to}\phantom{\rule{2.em}{0ex}}& {\left\|{S}_{D}^{H}{a}_{d}-{S}_{D}^{H}\overline{a}\right\|}^{2}=\epsilon \hfill \end{array}$$

$$\begin{array}{cc}\hfill {f}_{RCB}({a}_{d},{s}_{d})& ={\left(\sum _{d=1}^{D}{q}_{d}{s}_{d}^{H}{a}_{d}\right)}^{H}{R}_{D}^{-1}\left(\sum _{d=1}^{D}{q}_{d}{s}_{d}^{H}{a}_{d}\right)+\hfill \\ & {\lambda}_{RCB}\left({\left\|\sum _{d=1}^{D}{q}_{d}{s}_{d}^{H}{a}_{d}-\sum _{d=1}^{D}{q}_{d}{s}_{d}^{H}\overline{a}\right\|}^{2}-\epsilon \right)\hfill \end{array}$$

We devise an SG adaptation strategy based on the alternating minimization of the Lagrangian in (32), which yields:
where ${\mu}_{a}\left[i\right]$ and ${\mu}_{s}\left[i\right]$ are the step sizes of the SG algorithms, the parameter vectors ${g}_{a}\left[i\right]$ and ${g}_{s}\left[i\right]$ are the partial derivatives of the Lagrangian in (32) with respect to ${\tilde{a}}_{d}^{*}\left[i\right]$ and ${s}_{d}^{*}\left[i\right]$, respectively. The recursion for ${g}_{a}\left[i\right]$ is given by:
where:
and:

$$\begin{array}{c}\hfill {\tilde{a}}_{d}[i+1]={\tilde{a}}_{d}\left[i\right]-{\mu}_{a}\left[i\right]{g}_{a}\left[i\right],\\ \hfill {s}_{d}[i+1]={s}_{d}\left[i\right]-{\mu}_{s}\left[i\right]{g}_{s}\left[i\right]\end{array}$$

$${g}_{a}\left[i\right]={\left(\frac{1}{{\lambda}_{RCB}\left[i\right]}{S}_{D}^{H}\left[i\right]{R}^{-1}\left[i\right]{S}_{D}\left[i\right]+{I}_{D}\right)}^{-1}{S}_{D}^{H}\left[i\right]{\tilde{a}}_{d}\left[i\right]$$

$$\begin{array}{cc}\hfill {g}_{s}\left[i\right]={a}_{d}\left[i\right]& {\check{a}}_{d}^{H}\left[i\right]{r}_{d}\left[i\right]+{\tau}_{d}\left[i\right]{a}_{d}\left[i\right]{a}_{d}^{H}\left[i\right]{s}_{d}\left[i\right]\hfill \\ & +{\lambda}_{RCB}\left[i\right]{\alpha}_{d}\left[i\right]{\alpha}_{d}^{H}\left[i\right]{s}_{d}\left[i\right]\hfill \end{array}$$

$${\tilde{a}}_{d}=\sum _{d=1}^{D}{q}_{d}{s}_{d}^{H}{a}_{d}={S}_{D}^{H}{a}_{d}\in {\mathbb{C}}^{D\times 1}$$

$${\check{a}}_{d}=\sum _{l=1,l\ne d}^{D}{q}_{l}{s}_{l}^{H}{a}_{l}\in {\mathbb{C}}^{D\times 1}$$

We denote ${\alpha}_{d}\in {\mathbb{C}}^{M\times 1}$ as the difference between the updated steering vectors and the assumed one. The scalar ${\tau}_{d}$ is the d-th diagonal element of ${R}_{D}^{-1}$. The term ${r}_{d}$ denotes the d-th column vector of ${R}_{D}^{-1}$. The Lagrange multiplier obtained is expressed as:

$${\lambda}_{RCB}\left[i\right]=-{\left({S}_{D}{\left[i\right]}^{H}{\alpha}_{d}\left[i\right]{\alpha}_{d}^{H}\left[i\right]{s}_{d}\left[i\right]\right)}^{\u2020}{R}_{D}^{-1}\left[i\right]{\tilde{a}}_{d}\left[i\right]{a}_{d}^{H}\left[i\right]{s}_{d}\left[i\right]$$

The proposed RCB-IOVP SG algorithm corresponds to (7) to (9) and (33) to (38). The calculation of ${\lambda}_{RCB}$ requires $MD+{D}^{2}+4M+D$ complex multiplications, and the computation of ${g}_{a}\left[i\right]$ and ${g}_{s}\left[i\right]$ needs ${D}^{3}+MD+D$ and $5M+D+2$ multiplications, respectively.

We derive an RLS version of the RCB-IOVP method. The steering vector and the columns of the rank reduction matrix are updated as:
$$\tilde{k}[i+1]=\frac{{\alpha}^{-1}{R}_{D}^{-1}\left[i\right]\tilde{x}[i+1]}{1+{\alpha}^{-1}{\tilde{x}}^{H}[i+1]{R}_{D}^{-1}\left[i\right]\tilde{x}[i+1]}$$
$${R}_{D}^{-1}[i+1]={\alpha}^{-1}{R}_{D}^{-1}\left[i\right]-{\alpha}^{-1}\tilde{k}[i+1]{\tilde{x}}^{H}[i+1]{R}_{D}^{-1}\left[i\right]$$
where (39) to (42) need $2{D}^{3}+7{D}^{2}+4D+3$ complex multiplications, and the projection operations need a complexity of $MD$ complex multiplications. It is obvious that the complexity is significantly decreased if the selected rank $D\ll M$. The proposed RCB-IOVP RLS algorithm employs (25) and (39) to (42). The key of the RCB-IOVP RLS algorithm is to update the assumed steering vector ${\tilde{a}}_{d}\left[i\right]$ with RLS iterations, and the updated beamformer $\overline{\omega}\left[i\right]$ is obtained by plugging (39) into (25) without significant extra complexity.

$${\tilde{a}}_{d}[i+1]={\tilde{a}}_{d}\left[i\right]-{\left({I}_{D}+{\lambda}_{RCB}\left[i\right]{R}_{D}^{-1}\left[i\right]\right)}^{-1}{\tilde{a}}_{d}\left[i\right]$$

$${s}_{d}[i+1]=-{\left({\tau}_{d}\left[i\right]{a}_{d}\left[i\right]{a}_{d}^{H}\left[i\right]+{\lambda}_{RCB}\left[i\right]{\alpha}_{d}\left[i\right]{\alpha}_{d}^{H}\left[i\right]\right)}^{-1}{a}_{d}\left[i\right]{\check{a}}_{d}^{H}\left[i\right]{r}_{d}\left[i\right]$$

Note that the complexity introduced by the pseudo-inverse operation can be removed if ${S}_{D}$ has orthogonal column vectors; this can be achieved by incorporating the Gram–Schmidt procedure in the calculation of ${S}_{D}$. Furthermore, an alternative recursive realization of the robust adaptive linear constrained beamforming method introduced by [19] can be used to further reduce the computational complexity requirement to obtain the diagonal loading terms.
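A minimal sketch of that orthogonalization step, realized here with a thin QR factorization (numerically equivalent to Gram–Schmidt applied to the columns of ${S}_{D}$):

```python
import numpy as np

def orthogonalize_columns(S):
    """Orthonormalize the columns of the projection matrix S_D.

    With orthonormal columns, S_D^H S_D = I_D, so pseudo-inverses of
    terms involving S_D reduce to plain Hermitian transposes.
    """
    Q, _ = np.linalg.qr(S)    # thin QR: Q has the same shape as S
    return Q

rng = np.random.default_rng(4)
S = rng.standard_normal((16, 4)) + 1j * rng.standard_normal((16, 4))
Q = orthogonalize_columns(S)
```

The column space of `Q` equals that of `S`, so the projection performed by the rank reduction matrix is unchanged by this step.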

Selecting the rank number is important for the sake of both computational complexity and performance. In this section, we examine the efficient implementation of two stopping criteria for selecting the rank number d. Unlike prior methods for rank selection, which utilize MSWF-based algorithms [20] or AVF-based recursions [21], we focus on an approach that jointly determines the rank number d based on the LS criterion computed by the filters ${S}_{D}\left[i\right]$ and $\overline{\omega}\left[i\right]$. In particular, we present a method for automatically selecting the ranks of the algorithms based on the exponentially-weighted a posteriori least-squares type cost function described by:

$$\mathcal{C}\left({S}_{D}[i-1],\overline{\omega}[i-1]\right)=\sum _{l=1}^{i}{\alpha}^{i-l}{\left|{\overline{\omega}}^{H}[i-1]{S}_{D}^{H}[i-1]r\left[l\right]\right|}^{2}$$

where α is the forgetting factor and $\overline{\omega}\left[i\right]$ is the reduced-rank filter with rank d. For each time instant i, we select the rank ${d}_{opt}$ that minimizes the cost function $\mathcal{C}\left({S}_{D}\left[i\right],{\overline{\omega}}_{D}\left[i\right]\right)$; the exponential weighting factor α is required because the optimal rank varies as a function of the data record. The key quantities to be updated are the projection matrix ${S}_{D}\left[i\right]$, the reduced-rank filter $\overline{\omega}\left[i\right]$, the associated reduced-rank steering vector $\overline{a}\left({\theta}_{k}\right)$ and the inverse of the reduced-rank covariance matrix ${R}_{D}^{-1}\left[i\right]$. To this end, we define the following extended projection matrix ${S}_{D}$ as:

$${S}_{D}=\left[\begin{array}{cccccc}{s}_{1,1}& {s}_{1,2}& \cdots & {s}_{1,{D}_{min}}& \cdots & {s}_{1,{D}_{max}}\\ {s}_{2,1}& {s}_{2,2}& \cdots & {s}_{2,{D}_{min}}& \cdots & {s}_{2,{D}_{max}}\\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ {s}_{M,1}& {s}_{M,2}& \cdots & {s}_{M,{D}_{min}}& \cdots & {s}_{M,{D}_{max}}\end{array}\right]$$

and the extended reduced-rank filter weight vector $\overline{\omega}$ as:

$$\overline{\omega}=\left[\begin{array}{c}{\omega}_{1}\\ {\omega}_{2}\\ \vdots \\ {\omega}_{{D}_{min}}\\ \vdots \\ {\omega}_{{D}_{max}}\end{array}\right]$$

The extended projection matrix ${S}_{D}$ and the extended reduced-rank filter weight vector $\overline{\omega}$ are updated along with the associated quantities $\overline{a}\left({\theta}_{k}\right)$ and ${R}_{D}^{-1}\left[i\right]$ (only for the RLS) for the maximum allowed rank ${D}_{max}$, and the proposed rank adaptation algorithm then determines the rank that is best for each time instant i using the cost function (43). The proposed rank adaptation algorithm is given by:

$${D}_{opt}=\underset{{D}_{min}\le d\le {D}_{max}}{arg\,min}\,\mathcal{C}\left({S}_{D}[i-1],\overline{\omega}[i-1]\right)$$

where d is an integer and ${D}_{min}$ and ${D}_{max}$ are the minimum and maximum ranks allowed for the reduced-rank filter, respectively. Note that a smaller rank may provide faster adaptation during the initial stages of the estimation procedure, whereas a greater rank usually yields a better steady-state performance. Our studies reveal that the range of ranks d for which the proposed algorithms achieve a performance gain is limited; these values are rather insensitive to the system load and the number of array elements and work very well for all scenarios.
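As a non-authoritative sketch of this rank-adaptation rule (function name, argument layout and data-record storage are our assumptions), the weighted LS cost can be evaluated for each candidate rank by taking the first d columns of the extended projection matrix and the first d coefficients of the extended filter:

```python
import numpy as np

def select_rank(S_ext, w_ext, r_hist, alpha, D_min, D_max):
    """Evaluate the exponentially weighted LS cost for each candidate
    rank d and return the minimizer D_opt.
    S_ext:  M x D_max extended projection matrix
    w_ext:  length-D_max extended reduced-rank filter
    r_hist: received vectors r[l], l = 1..i (most recent last)"""
    i = len(r_hist)
    costs = {}
    for d in range(D_min, D_max + 1):
        S_d = S_ext[:, :d]          # rank-d projection matrix
        w_d = w_ext[:d]             # rank-d filter
        costs[d] = sum(
            alpha ** (i - l) * abs(w_d.conj() @ (S_d.conj().T @ r)) ** 2
            for l, r in enumerate(r_hist, start=1))
    return min(costs, key=costs.get)
```

In practice the inner sum would be accumulated recursively rather than recomputed over the whole record at every snapshot; the exhaustive form above only mirrors the cost function directly.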

In this section, we consider simulations for arrays with 64 and 320 sensor elements; the arrays are ULAs with regular ${\lambda}_{c}/2$ spacing between the sensor elements. The covariance matrix $\hat{R}$ is obtained by time-averaging recursions with $N=1,\dots,120$ snapshots. A DoA mismatch is also considered in order to verify the robustness of the various beamforming algorithms. For the robust designs, we use the spherical uncertainty set, and the upper bound is set to $\epsilon=140$ for 64 sensor elements and $\epsilon=800$ for 320 sensor elements, respectively. There are four incident signals; the first is the SoI, and the other three signals' power relative to the SoI and their DoAs in degrees are detailed in Table 1. The algorithms are trained with 120 snapshots, and the signal-to-noise ratio (SNR) is set to 10 dB in all of the simulations.

Snapshots | Signal 1 (SoI) | Signal 2 | Signal 3 | Signal 4
---|---|---|---|---
1 to 120 | 10/90 | 20/35 | 20/135 | 20/165
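To make the 64-element scenario concrete, the time-averaged sample covariance can be sketched as follows. This is illustrative only: the steering-vector angle convention, the unit-power symbol/noise model and the reading of each Table 1 entry as power in dB / DoA in degrees are our assumptions:

```python
import numpy as np

def ula_steering(M, theta_deg, spacing=0.5):
    """Steering vector of an M-element ULA with lambda_c/2 spacing,
    DoA measured from the array axis (90 degrees = broadside)."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * spacing * np.arange(M) * np.cos(theta))

rng = np.random.default_rng(0)
M, N = 64, 120                       # sensors, snapshots
doas = [90, 35, 135, 165]            # DoAs in degrees (Table 1)
powers_db = [10, 20, 20, 20]         # SoI at 10 dB, interferers at 20 dB
A = np.column_stack([ula_steering(M, th) for th in doas])
amp = np.sqrt(10 ** (np.array(powers_db) / 10))

# r[l] = A diag(amp) s[l] + n[l], then time-average the outer products
R_hat = np.zeros((M, M), dtype=complex)
for _ in range(N):
    s = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
    n = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    r = A @ (amp * s) + n
    R_hat += np.outer(r, r.conj())
R_hat /= N
```

The resulting `R_hat` is the Hermitian sample covariance that the full-rank and reduced-rank beamformers operate on.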

In Figure 2, we compare various beamforming techniques with a steering array of 64 elements. We introduce a maximum of two degrees of DoA mismatch, which is independently generated by a uniform random generator in each simulation run. The proposed IOVP-RLS and IOVP-SG algorithms are implemented in both the non-robust MVDR [2] and the robust RCB [4] schemes. The competitors include two conventional full-rank beamformers, MVDR-RLS and RCB-RLS, as well as two reduced-rank beamformers, MVDR-Krylov and RCB-Krylov [14]. In this simulation, we select $D=2$ for all reduced-rank schemes, including MVDR-Krylov, RCB-Krylov, MVDR-IOVP-RLS/SG and RCB-IOVP-RLS/SG. A non-orthogonal Krylov projection matrix ${S}_{D}\left[i\right]\in {\mathbb{C}}^{64\times 2}$ and a non-orthogonal IOVP rank reduction matrix are generated for rank reduction. It is also important to note that the projection matrix ${S}_{D}\left[i\right]$ can be initialized as ${S}_{D}\left[0\right]={\left[{I}_{D}\phantom{\rule{0.5em}{0ex}}{0}_{D\times (M-D)}\right]}^{T}$, and the inverse of the covariance matrix ${\hat{R}}^{-1}\left[i\right]$ for each snapshot can be obtained by using the proposed RCB-IOVP-RLS algorithm.
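As a quick sketch of this initialization (variable names are ours), ${S}_{D}\left[0\right]$ is simply the first D columns of the $M\times M$ identity; its columns are already orthonormal, so no Gram–Schmidt step is needed at start-up:

```python
import numpy as np

M, D = 64, 2
# S_D[0] = [I_D, 0]^T : first D columns of the M x M identity matrix
S_D0 = np.vstack([np.eye(D), np.zeros((M - D, D))])
```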

In Figure 3, we consider a similar scenario, but without DoA mismatch. We can see from the plots that the IOVP and Krylov algorithms achieve a superior SINR performance compared to the other existing methods, and this is particularly noticeable for a reduced number of snapshots. By comparing the curves in Figure 2 and Figure 3, we can see that introducing the DoA mismatch causes the conventional MVDR-RLS and RCB-Krylov-RLS schemes to lose about 10 dB of SINR; their performance is highly sensitive to steering vector mismatch. In contrast, all of the proposed IOVP reduced-rank schemes experience less than 2.5 dB of performance loss, which implies that these schemes are robust to steering vector mismatch. Moreover, compared with their robust rivals (such as RCB-RLS and MVDR-Krylov-RLS), the proposed schemes provide a higher SINR and a much faster convergence speed.

In Figure 4, we compare the output SINRs of the Krylov and the proposed IOVP rank reduction techniques using a spherical constraint in the presence of steering vector errors with 320 sensor elements. We assume a DoA mismatch of two degrees and the incident signals with the profile listed in Table 1. With Krylov and IOVP rank reduction, the MVDR-Krylov, MVDR-IOVP, RCB-Krylov and RCB-IOVP schemes achieve a superior SINR performance and a faster convergence compared to their full-rank rivals.

In this paper, we proposed a robust rank reduction algorithm for steering vector estimation based on iterative parameter optimization and vector perturbation. In this algorithm, a bank of perturbed steering vectors was introduced as candidate array steering vectors around the true steering vector. The candidate steering vectors are responsible for performing the rank reduction, and the reduced-rank beamformer forms the beam in the direction of the signal of interest (SoI). The perturbation vectors and the vector ${q}_{d}$ were introduced in order to break the correlations among the columns of the projection matrix, so that the rank number can be controlled. Additionally, we devised efficient stochastic gradient (SG) and recursive least-squares (RLS) algorithms for implementing the proposed robust IOVP design. Finally, we derived an automatic rank selection scheme in order to obtain the optimal beamforming performance with low computational complexity. The simulation results for a digital beamforming application with a large array showed that the proposed IOVP algorithms outperformed the existing full-rank and reduced-rank algorithms in convergence and tracking at a comparable complexity.

This work is supported by the Startup Foundation for Introducing Talent of NUIST.

Peng Li and Rodrigo de Lamare conceived and designed the experiments; Peng Li performed the experiments; Jiao Feng analyzed the data; Peng Li and Jiao Feng wrote the paper.

The authors declare no conflict of interest.

- Van Trees, H.L. Detection, Estimation, and Modulation Theory, Part IV: Optimum Array Processing; John Wiley & Sons: Hoboken, NJ, USA, 2002.
- Haykin, S. Adaptive Filter Theory, 4th ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 2002.
- Li, J.; Stoica, P. Robust Adaptive Beamforming; John Wiley & Sons: Hoboken, NJ, USA, 2006.
- Li, J.; Stoica, P.; Wang, Z. On Robust Capon Beamforming and Diagonal Loading. IEEE Trans. Signal Process. **2003**, 51, 1702–1715.
- Scharf, L.L.; Tufts, D.W. Rank reduction for modeling stationary signals. IEEE Trans. Acoust. Speech Signal Process. **1987**, 35, 350–355.
- Vorobyov, S.A.; Gershman, A.B.; Luo, Z.-Q. Robust Adaptive Beamforming Using Worst-Case Performance Optimization: A Solution to the Signal Mismatch Problem. IEEE Trans. Signal Process. **2003**, 51, 313–324.
- Somasundaram, S.D. Reduced Dimension Robust Capon Beamforming for Large Aperture Passive Sonar Arrays. IET Radar Sonar Navig. **2011**, 5, 707–715.
- Scharf, L.L.; van Veen, B. Low rank detectors for Gaussian random vectors. IEEE Trans. Acoust. Speech Signal Process. **1987**, 35, 1579–1582.
- Burykh, S.; Abed-Meraim, K. Reduced-rank adaptive filtering using Krylov subspace. EURASIP J. Appl. Signal Process. **2002**, 12, 1387–1400.
- Goldstein, J.S.; Reed, I.S.; Scharf, L.L. A multistage representation of the Wiener filter based on orthogonal projections. IEEE Trans. Inf. Theory **1998**, 44, 2943–2959.
- Hassanien, A.; Vorobyov, S.A. A Robust Adaptive Dimension Reduction Technique with Application to Array Processing. IEEE Signal Process. Lett. **2009**, 16, 22–25.
- Ge, H.; Kirsteins, I.P.; Scharf, L.L. Data Dimension Reduction Using Krylov Subspaces: Making Adaptive Beamformers Robust to Model Order-Determination. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Toulouse, France, 14–19 May 2006; Volume 4, pp. 1001–1004.
- Wang, L.; de Lamare, R.C. Constrained adaptive filtering algorithms based on conjugate gradient techniques for beamforming. IET Signal Process. **2010**, 4, 686–697.
- Somasundaram, S.; Li, P.; Parsons, N.; de Lamare, R.C. Data-adaptive reduced-dimension robust Capon beamforming. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013.
- De Lamare, R.C.; Sampaio-Neto, R. Adaptive reduced-rank processing based on joint and iterative interpolation, decimation and filtering. IEEE Trans. Signal Process. **2009**, 57, 2503–2514.
- De Lamare, R.C.; Wang, L.; Fa, R. Adaptive reduced-rank LCMV beamforming algorithms based on joint iterative optimization of filters: Design and analysis. Signal Process. **2010**, 90, 640–652.
- Fa, R.; de Lamare, R.C.; Wang, L. Reduced-rank STAP schemes for airborne radar based on switched joint interpolation, decimation and filtering algorithm. IEEE Trans. Signal Process. **2010**, 58, 4182–4194.
- Grant, D.E.; Gross, J.H.; Lawrence, M.Z. Cross-spectral matrix estimation effects on adaptive beamforming. J. Acoust. Soc. Am. **1995**, 98, 517–524.
- Elnashar, A. Efficient implementation of robust adaptive beamforming based on worst-case performance optimisation. IET Signal Process. **2008**, 4, 381–393.
- Honig, M.L.; Goldstein, J.S. Adaptive reduced-rank interference suppression based on the multistage Wiener filter. IEEE Trans. Commun. **2002**, 50, 986–994.
- Qian, H.; Batalama, S.N. Data record-based criteria for the selection of an auxiliary vector estimator of the MMSE/MVDR filter. IEEE Trans. Commun. **2003**, 51, 1700–1708.

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).