Article

ISAR Image Matching and Three-Dimensional Scattering Imaging Based on Extracted Dominant Scatterers

1 National Laboratory of Radar Signal Processing, Xidian University, Xi'an 710071, China
2 Collaborative Innovation Center of Information Sensing and Understanding, Xidian University, Xi'an 710071, China
3 School of Electronic Engineering, Xidian University, Xi'an 710071, China
4 Department of Engineering, Università degli Studi di Napoli Parthenope, Centro Direzionale di Napoli, Isola C4, 80143 Napoli, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(17), 2699; https://doi.org/10.3390/rs12172699
Submission received: 18 June 2020 / Revised: 11 August 2020 / Accepted: 13 August 2020 / Published: 20 August 2020
(This article belongs to the Special Issue 3D Modelling from Point Cloud: Algorithms and Methods)

Abstract:
This paper studies inverse synthetic aperture radar (ISAR) image matching and three-dimensional (3D) scattering imaging based on extracted dominant scatterers. When the baseline between two radars is long, obvious rotation, scaling, distortion, and shift easily occur between the two-dimensional (2D) radar images. These effects make radar-image matching difficult and cannot be resolved by motion compensation and cross-correlation. Moreover, because of anisotropy, existing image-matching algorithms, such as the scale-invariant feature transform (SIFT), do not adapt well to ISAR images. In addition, the angle between the target rotation axis and the radar line of sight (LOS) cannot be neglected; otherwise, the calibration result will be smaller than the real projection size. Furthermore, this angle cannot be estimated by monostatic radar. Therefore, instead of matching the images directly, this paper proposes a novel ISAR image matching and 3D imaging method based on extracted scatterers to address these issues. First, taking advantage of ISAR image sparsity, the radar images are converted into scattering point sets. Then, coarse scatterer matching based on the random sample consensus (RANSAC) algorithm is performed. The scatterer height and accurate affine transformation parameters are estimated iteratively. Based on the matched scatterers, information such as the angle and the 3D image can be obtained. Finally, experiments based on the electromagnetic simulation software CADFEKO have been conducted to demonstrate the effectiveness of the proposed algorithm.

1. Introduction

Inverse synthetic aperture radar (ISAR) imaging, especially two-dimensional (2D) imaging, has been extensively used in civil and military applications because of its capability to generate high-resolution images of noncooperative targets. Many techniques support 2D imaging, including noise suppression, sparse imaging, and motion compensation [1,2,3,4,5,6,7,8,9,10,11,12]. A 2D image is the projection of a three-dimensional (3D) target onto the range-Doppler (RD) plane [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. However, 2D imaging has several unavoidable shortcomings. For example, the azimuth dimension of a 2D image only reflects the Doppler distribution of each scattering point. Moreover, most calibration algorithms achieve calibration by estimating the target rotation angle during the observation time [13,14]. These algorithms assume that the target rotation axis is orthogonal to the radar line of sight (LOS). In practical applications, however, complete orthogonality is rarely achieved because the targets are noncooperative. A 3D image provides more information than a 2D image; therefore, 3D imaging has received significant attention in recent years [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35]. Existing 3D imaging methods can be roughly categorized into two groups, corresponding to monostatic and multistatic radar systems.
3D imaging methods via monostatic radar include 3D matched filtering [15,16], a direct extension of 2D imaging, and 3D reconstruction based on ISAR image sequences [17,18,19,20,21,22]; 3D imaging with multipass SAR data [23] is excluded here. Generally, when the target spinning axis is not perpendicular to the LOS, 3D imaging can be performed by 3D matched filtering, whose accuracy in the third dimension is determined by the sampling frequency in that dimension. The methods based on ISAR image sequences need accurate scattering point extraction, scattering point trace association, and trace matrix decomposition, and they also require a large rotation angle. Accurate scatterer extraction and association are necessary to generate an accurate trajectory matrix, which is the foundation of high-resolution 3D reconstruction. It should be noted that these algorithms are time-consuming and require a large number of ISAR images.
3D imaging methods via multistatic radar include interferometric ISAR (InISAR) and 3D imaging via a distributed radar network. InISAR generally employs three antennas that need to be properly designed [24,25,26,27,28,29,30,31,32,33,34,35]. Because InISAR obtains a 3D image from phase differences, the baselines between the three antennas are only a few meters, and the images from the three antennas are essentially the same. Although an InISAR system can easily obtain 3D images without prior knowledge of the target motion, it has relatively high cost and hardware complexity compared with 3D imaging via monostatic radar or a distributed radar network. On the contrary, distributed radar can make full use of existing radars for 3D imaging and other applications.
This paper studies ISAR image matching and 3D scattering imaging based on extracted dominant scatterers. Unlike InISAR, the proposed algorithm obtains a 3D image from the difference of the scatterers' positions observed by two arbitrary radars. Therefore, a long baseline is more conducive to high-precision 3D imaging with the proposed method. However, when the baseline is too long, image rotation, scaling, distortion, and obvious anisotropy occur, which hinder radar image matching. Therefore, the optimal baseline introduces rotation and distortion between the two radar images while keeping the anisotropy insignificant. Meanwhile, 2D imaging strongly depends on the RD plane, so ISAR images from different RD planes vary dramatically.
These problems make ISAR image matching challenging. Improved optical image matching algorithms [36], such as the modified scale-invariant feature transform (SIFT) and Harris algorithms, are suitable for theoretically simulated radar images but not for real ISAR images. Because of the different imaging mechanisms, the amplitude of a scattering point in a real ISAR image does not behave like the gray level of an optical image. Moreover, these problems cannot be solved by motion compensation and cross-correlation on ISAR images. To solve them, the proposed method adopts image matching based on extracted dominant scatterers, which differs from image-based SIFT and can make full use of existing radars to derive the target 3D image. First, exploiting the sparsity of the ISAR image, the radar images are transformed into scattering point sets by scattering center extraction [37]. Then, coarse scatterer matching based on the random sample consensus (RANSAC) algorithm is performed. The scatterer height and accurate affine transformation parameters are estimated iteratively to obtain fine scatterer matching. Based on the matched scatterers, information such as the 3D image and the angle between the LOS and the target rotation axis can be obtained. Finally, experiments based on the electromagnetic simulation software CADFEKO have been conducted to demonstrate the effectiveness of the proposed algorithm.
This paper is organized as follows. In Section 2, the radar system and the signal model are established. In Section 3, two scattering point sets are obtained by extracting scattering centers; after RANSAC, coarse scatterer matching is achieved, and an automatic iteration based on affine transformation is then performed to achieve accurate scattering point set matching. Meanwhile, the scattering point height and the angle between the target spinning axis and the LOS are obtained. In Section 4, experiments based on two simulation datasets, including data generated with the electromagnetic simulation software CADFEKO, are performed to demonstrate the effectiveness of the proposed algorithm.

2. Signal Model

A radar system with two radars in an arbitrary configuration, located at A and B, is depicted in Figure 1a, where the radar coordinate system $O_0X_0Y_0Z_0$, the transition coordinate system $OX_1Y_1Z_1$, and the target coordinate system $OXYZ$ are established. For ease of description and analysis, these three reference coordinate systems are denoted $S_0$, $S_1$, and $S$, respectively. Since we deal with three coordinate systems, we denote a scattering point in 3D space with a subscript indicating the specific reference frame; e.g., $P_{S_1}$ and $P_{S_0}$ are the same point expressed in the coordinates of reference systems $S_1$ and $S_0$, respectively.
The reference system $S_0$ is embedded in radar A and centered at $O_0$. The $X_0$-axis is aligned with the LOS of radar A. Radar B is located in the plane $(X_0, Y_0)$. The reference system $S$ is embedded in the target and centered at $O$. In practical conditions, external forces on the target produce angular motions represented by the angular rotation vector $\boldsymbol{\Omega}$, which is aligned with the $Z$-axis. Its projection onto the plane orthogonal to the LOS of radar A defines the effective rotation component $\boldsymbol{\Omega}_{eff}$, which forms the $Z_1$-axis. The plane $(Y_1, X_1)$ is the imaging plane, whose axes correspond to the range and cross-range dimensions. For easier understanding, the radar coordinate system and target rotation axis are abstractly represented in Figure 1b. $\alpha$ is the angle between the target rotation axis and the $X_0$-axis, denoted ARX, which influences the projection and calibration results. $\beta$ is the angle between the target rotation axis and the $Y_0$-axis, denoted ARY. Without loss of generality, we assume that the $X_1$-axis is aligned with the $X_0$-axis and the $Y$-axis is aligned with the $Y_1$-axis. Following the right-handed convention, the $Z_0$-axis and the $X$-axis can be derived.
Moreover, we consider the target as a rigid body translating along the $Z$-axis with respect to system $S$. The target is composed of point-like scatterers, and the position of scatterer $P$ with respect to $S$ is
$$P_S = \begin{bmatrix} x\cos(\Omega t_m) - y\sin(\Omega t_m) \\ x\sin(\Omega t_m) + y\cos(\Omega t_m) \\ z + v t_m \end{bmatrix}$$
where $\Omega = |\boldsymbol{\Omega}|$ and $t_m$ denotes the slow time. The angle between the $Z$-axis and the $Z_0$-axis is $\theta$, which can be derived as
$$\cos\theta = \pm\sqrt{1 - \cos^2\alpha - \cos^2\beta}$$
where, because the $Z_1$-axis is the projection of the $Z$-axis onto the plane $(Y_0, Z_0)$, the positive root is taken: $\cos\theta = \sqrt{1 - \cos^2\alpha - \cos^2\beta}$. The direction of the angular rotation vector $\boldsymbol{\Omega}$ in the radar coordinate system can be expressed as
$$Z_{S_0} = [\cos\alpha,\ \cos\beta,\ \cos\theta]$$
With respect to the transition coordinate system, the axes of the target coordinate system are represented as
$$X_{S_1} = [\sin\alpha,\ 0,\ -\cos\alpha]$$
$$Y_{S_1} = [0,\ 1,\ 0]$$
$$Z_{S_1} = [\cos\alpha,\ 0,\ \sin\alpha]$$
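As a quick numerical sanity check, the target axes written in $S_1$ should form an orthonormal, right-handed basis. The following sketch uses an arbitrary test angle; the sign convention on $X_{S_1}$ (the one that keeps the basis right-handed) is an assumption of this check:

```python
import numpy as np

def target_axes_in_s1(alpha):
    # Target axes expressed in S1; the minus sign on X_s1 is the convention
    # that makes (X, Y, Z) a right-handed triad (assumed here).
    X_s1 = np.array([np.sin(alpha), 0.0, -np.cos(alpha)])
    Y_s1 = np.array([0.0, 1.0, 0.0])
    Z_s1 = np.array([np.cos(alpha), 0.0, np.sin(alpha)])
    return X_s1, Y_s1, Z_s1

alpha = np.deg2rad(60.0)                 # arbitrary test angle
X, Y, Z = target_axes_in_s1(alpha)
C = np.column_stack([X, Y, Z])           # C = [X^T, Y^T, Z^T]
assert np.allclose(C.T @ C, np.eye(3))   # columns are orthonormal
assert np.allclose(np.cross(X, Y), Z)    # basis is right-handed
```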
With respect to the radar coordinate system, the transition coordinate system is represented as
$$X_{1,S_0} = [1,\ 0,\ 0]$$
$$Y_{1,S_0} = \frac{X_0 \times Z_{S_0}}{\left\| X_0 \times Z_{S_0} \right\|}$$
$$Z_{1,S_0} = \frac{X_0 \times Y_{1,S_0}}{\left\| X_0 \times Y_{1,S_0} \right\|}$$
The relationship between these three coordinate systems can be formulated as
$$S_1 = S_0 C_1$$
$$S = S_1 C$$
where $C = [X_{S_1}^T, Y_{S_1}^T, Z_{S_1}^T]$ and $C_1 = [X_{1,S_0}^T, Y_{1,S_0}^T, Z_{1,S_0}^T]$.
Therefore, the transformation from the target coordinate system into the radar coordinate system can be formulated as
$$P_{S_0} = [R_0,\ 0,\ 0]^T + C_1 C P_S = [R_0 + \xi_1,\ \xi_2,\ \xi_3]^T$$
where $\cos^2\beta < \sin^2\alpha$, $\alpha \in (0, \pi/2)$, and $\xi_i$, $i = 1, 2, 3$, can be expressed as
$$\xi_1 = \sin\alpha \left( x\cos(\omega t_m) - y\sin(\omega t_m) \right) + z\cos\alpha + v t_m \cos\alpha$$
$$\xi_2 = \frac{\cos\alpha\cos\beta}{\sin\alpha} \left( x\cos(\omega t_m) - y\sin(\omega t_m) \right) - \frac{\cos\theta}{\sin\alpha} \left( x\sin(\omega t_m) + y\cos(\omega t_m) \right) - (z + v t_m)\cos\beta$$
$$\xi_3 = \frac{\cos\alpha\cos\theta}{\sin\alpha} \left( x\cos(\omega t_m) - y\sin(\omega t_m) \right) + \frac{\cos\beta}{\sin\alpha} \left( x\sin(\omega t_m) + y\cos(\omega t_m) \right) - (z + v t_m)\cos\theta$$
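The closed forms of $\xi_1$, $\xi_2$, $\xi_3$ can be checked numerically against the matrix route $P_{S_0} = [R_0, 0, 0]^T + C_1 C P_S$. The sketch below uses arbitrary test values for the angles, the scatterer position, and the motion parameters (all illustrative assumptions):

```python
import numpy as np

alpha, beta = np.deg2rad(60.0), np.deg2rad(75.0)   # satisfy cos^2(b) < sin^2(a)
cos_t = np.sqrt(1 - np.cos(alpha)**2 - np.cos(beta)**2)   # cos(theta)

# Target axes in S1 (columns of C) and transition axes in S0 (columns of C1).
C = np.column_stack([
    [np.sin(alpha), 0, -np.cos(alpha)],   # X_S1
    [0, 1, 0],                            # Y_S1
    [np.cos(alpha), 0, np.sin(alpha)],    # Z_S1
])
Z_s0 = np.array([np.cos(alpha), np.cos(beta), cos_t])
X0 = np.array([1.0, 0, 0])
Y1 = np.cross(X0, Z_s0); Y1 /= np.linalg.norm(Y1)
Z1 = np.cross(X0, Y1);   Z1 /= np.linalg.norm(Z1)
C1 = np.column_stack([X0, Y1, Z1])

# Scatterer in the target frame plus rotation/translation at slow time tm.
x, y, z = 1.2, -0.7, 2.0
w, v, tm, R0 = 0.05, 3.0, 0.4, 1.0e4
xp = x*np.cos(w*tm) - y*np.sin(w*tm)
yp = x*np.sin(w*tm) + y*np.cos(w*tm)
zp = z + v*tm
P_s0 = np.array([R0, 0, 0]) + C1 @ C @ np.array([xp, yp, zp])

# Closed forms of xi_1..xi_3 from the text.
xi1 = np.sin(alpha)*xp + zp*np.cos(alpha)
xi2 = (np.cos(alpha)*np.cos(beta)/np.sin(alpha))*xp \
      - (cos_t/np.sin(alpha))*yp - zp*np.cos(beta)
xi3 = (np.cos(alpha)*cos_t/np.sin(alpha))*xp \
      + (np.cos(beta)/np.sin(alpha))*yp - zp*cos_t

assert np.allclose(P_s0, [R0 + xi1, xi2, xi3])
```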
According to Figure 1, the location of radar A can be expressed as $D_{A,S_0} = [0, 0, 0]$. The distance from radar A to scattering point $P$ can be expressed as
$$R_{A,P}(t_m) = \left\| P_{S_0} - D_{A,S_0} \right\| = \left( (\xi_1 + R_0)^2 + \xi_2^2 + \xi_3^2 \right)^{1/2}$$
Carrying out the first-order Taylor expansion of (16), we can rewrite it as
$$R_{A,P}(t_m) \approx R_0 + \xi_1 + \frac{\xi_1^2 + \xi_2^2 + \xi_3^2}{2 R_0}$$
Substituting $(x, y, z)$ into (14), (17) can be rewritten as
$$R_{A,P}(t_m) \approx x\sin\alpha\cos(\omega t_m) - y\sin\alpha\sin(\omega t_m) + z\cos\alpha + R_0 + v t_m \cos\alpha + \frac{\xi_1^2 + \xi_2^2 + \xi_3^2}{2 R_0}$$
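The quality of this far-field approximation is easy to check numerically. In the sketch below, the stand-off range and scatterer offsets are assumed values chosen only for illustration; the residual error stays far below a typical range resolution cell:

```python
import numpy as np

R0 = 1.0e4                        # ~10 km stand-off range (assumed)
xi = np.array([3.1, -2.4, 1.7])   # metre-scale offsets xi_1..xi_3 (assumed)

# Exact range vs. its first-order expansion R0 + xi1 + |xi|^2 / (2 R0).
exact = np.sqrt((R0 + xi[0])**2 + xi[1]**2 + xi[2]**2)
approx = R0 + xi[0] + np.dot(xi, xi) / (2 * R0)

# The residual is on the order of xi^2 / R0, i.e. sub-millimetre here,
# far below the ~7.5 cm range cell of a 2 GHz bandwidth signal.
assert abs(exact - approx) < 1e-3
```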
The location of radar B can be expressed as
$$D_{B,S_0} = [R_0 - R_r\cos\varphi,\ R_r\sin\varphi,\ 0]$$
The distance from scattering point $P$ to radar B is $R_{B,P}(t_m)$, which can be expressed as
$$R_{B,P}(t_m) = \left\| P_{S_0} - D_{B,S_0} \right\| \approx R_r + \xi_1\cos\varphi - \xi_2\sin\varphi + \frac{\xi_1^2 + \xi_2^2 + \xi_3^2}{2 R_r}$$
Suppose that a linear frequency-modulated pulse with pulse repetition frequency $PRF$ is transmitted by radar A. After preprocessing, including demodulation to baseband and range compression, the processed echo data of scattering point $P$ can be expressed as
$$S_{i,p}(t, t_m) = \sigma_p\, \mathrm{sinc}\!\left( W\!\left( t - \frac{2 R_{i,p}(t_m)}{c} \right) \right) \exp\!\left( -j \frac{4\pi f_c R_{i,p}(t_m)}{c} \right)$$
where $i = A, B$ is the radar index, $\sigma_p$ is the complex scattering coefficient of scattering point $P$, $t$ is the fast time in the range dimension, $f_c$ is the center frequency of the transmitted signal, $c$ is the propagation velocity of the electromagnetic wave, and $W$ is the bandwidth of the transmitted signal.

3. Theory and Method

The proposed method is shown in Figure 2. After receiving the target echoes from the two arbitrarily configured radars, motion compensation is accomplished first. Then, through scattering point extraction, each ISAR image is converted into a scatterer set.
The process in the yellow rectangle is used to match the scattering point sets and perform 3D imaging. First, the height of each scatterer is set to one. Meanwhile, coarse scatterer matching via RANSAC is carried out. Based on the preliminary transformation, ARX and ARY can be estimated. Then, the scatterer height is obtained from the preliminary ARX and ARY. After scatterer height estimation, an affine transformation with the estimated scatterer heights is performed. The estimation of ARX, ARY, and scatterer height is executed iteratively, and the iteration stops when the estimates of ARX and ARY vary only slightly. Finally, accurate ARX and ARY values are derived from a small-range search. The operations in the red rectangle are described in detail in the rest of this section.

3.1. Scattering Point Set Matching Based on RANSAC and Affine Transformation

After motion compensation, the images from radar A and radar B can be expressed as
$$I_i(t, f_a) = \sigma_i\, \mathrm{sinc}\!\left( W\!\left( t - \frac{2 x_i}{c} \right) \right) \mathrm{sinc}\!\left( \frac{1}{PRF}\!\left( f_a + \frac{2 f_c \omega y_i}{c} \right) \right) \exp(j\varphi_i)$$
where $i = A, B$, $\varphi_i$ denotes the corresponding residual phase, and $x_i$ and $y_i$ denote the scatterer location in image $i$. For a short observation time, $\cos(\omega t_m)$ can be approximated as 1 and $\sin(\omega t_m)$ as $\omega t_m$. After scattering point extraction, two scattering point sets are obtained: set A, $(x_A, y_A)$, with $N_A$ scatterers, and set B, $(x_B, y_B)$, with $N_B$ scatterers. For convenience, $(x_A, y_A)$ and $(x_B, y_B)$ are denoted SPA and SPB, respectively. According to (16) to (22), SPA can be expressed as
$$x_A = x\sin\alpha + z\cos\alpha$$
$$y_A = y\sin\alpha$$
Similarly, SPB can be expressed as
$$x_B = x_A\cos\varphi - \frac{x_A\cos\alpha\cos\beta}{\sin^2\alpha}\sin\varphi + \frac{y_A\cos\theta}{\sin^2\alpha}\sin\varphi + z\cos\beta\sin\varphi\left(1 + \cot^2\alpha\right)$$
$$y_B = -\frac{x_A\cos\theta}{\sin^2\alpha}\sin\varphi + y_A\cos\varphi - \frac{y_A\cos\alpha\cos\beta}{\sin^2\alpha}\sin\varphi + \frac{z\cos\alpha\cos\theta}{\sin^2\alpha}\sin\varphi$$
Therefore, the relationship between SPA and SPB can be achieved by
$$\begin{bmatrix} x_B \\ y_B \\ z \end{bmatrix} = U \begin{bmatrix} x_A \\ y_A \\ z \end{bmatrix} = \begin{bmatrix} u_1 & u_2 & u_3 \\ u_4 & u_5 & u_6 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_A \\ y_A \\ z \end{bmatrix}$$
where $U \in \mathbb{R}^{3\times3}$ has elements $u_i$, $i = 1, 2, \ldots, 6$, with $u_1 = \cos\varphi - \frac{\cos\alpha\cos\beta}{\sin^2\alpha}\sin\varphi$, $u_2 = \frac{\cos\theta}{\sin^2\alpha}\sin\varphi$, $u_3 = \cos\beta\sin\varphi\left(1 + \cot^2\alpha\right)$, $u_4 = -u_2$, $u_5 = u_1$, and $u_6 = \frac{\cos\alpha\cos\theta}{\sin^2\alpha}\sin\varphi$.
From the analysis of (27), one can find that the angles $\alpha$, $\beta$, and $\varphi$ cause rotation, scaling, and shift in image B. Because $u_5 = u_1$, the scaling in the range and azimuth dimensions is the same. Furthermore, $z$ differs for each scattering point, which means that each scattering point has a different shift in the range and azimuth dimensions. Therefore, compared with image A, rotation, scaling, and shift appear in image B. To estimate the preliminary ARX and ARY, the height $z$ of every scatterer is initially set to one. Following RANSAC, assume the total number of trials is $K$. In the $k$th trial, three pairs of scatterers are chosen randomly to form an affine transformation, expressed as
$$\begin{bmatrix} x_B^1 \\ x_B^2 \\ x_B^3 \\ y_B^1 \\ y_B^2 \\ y_B^3 \end{bmatrix} = \begin{bmatrix} x_A^1 & y_A^1 & 1 & 0 & 0 & 0 \\ x_A^2 & y_A^2 & 1 & 0 & 0 & 0 \\ x_A^3 & y_A^3 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_A^1 & y_A^1 & 1 \\ 0 & 0 & 0 & x_A^2 & y_A^2 & 1 \\ 0 & 0 & 0 & x_A^3 & y_A^3 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5 \\ u_6 \end{bmatrix}$$
By solving (28), $U_k$ is estimated. The converted SPA, $(x_B', y_B')$, can be obtained by
$$\begin{bmatrix} x_B' \\ y_B' \\ 1 \end{bmatrix} = U_k \begin{bmatrix} x_A \\ y_A \\ 1 \end{bmatrix}$$
Scatterer matching can be performed based on the minimum Euclidean distance between the converted SPA and SPB. For the $i$th scattering point in image A, the matched scattering point can be expressed as
$$j_i = \arg\min_j \left( \left( x_B'^{\,i} - x_B^{\,j} \right)^2 + \left( y_B'^{\,i} - y_B^{\,j} \right)^2 \right)$$
where $i$ and $j$ index the scattering points in image A and image B, respectively, and $j_i$ is the matched scattering point index. Let the number of matched scatterers in the $k$th trial be $N_k$. Then, the preliminary scatterer matching result is
$$U = U_d, \quad N = N_d, \quad \text{s.t.} \quad d = \arg\max_k N_k$$
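The coarse matching loop described above (random three-pair sampling, affine fitting as in (28), and nearest-neighbor inlier counting) can be sketched as follows. The transform, point sets, trial count, and distance threshold are all illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed ground-truth affine relation between the two scatterer sets.
U_true = np.array([[0.9, -0.2, 0.3],
                   [0.2,  0.9, -0.1],
                   [0.0,  0.0,  1.0]])
spa = rng.uniform(-5, 5, size=(20, 2))               # scatterer set A
spa_h = np.column_stack([spa, np.ones(len(spa))])    # heights set to 1
spb = (spa_h @ U_true.T)[:, :2]                      # scatterer set B
spb[:5] += rng.uniform(-2, 2, size=(5, 2))           # 5 corrupted/unmatched

def solve_affine(pa, pb):
    """Fit [u1..u6] from three point pairs via the 6x6 system of Eq. (28)."""
    A = np.zeros((6, 6)); b = np.zeros(6)
    for i, ((xa, ya), (xb, yb)) in enumerate(zip(pa, pb)):
        A[i, :3] = [xa, ya, 1.0];     b[i] = xb
        A[3 + i, 3:] = [xa, ya, 1.0]; b[3 + i] = yb
    return np.linalg.solve(A, b)

best_inliers, best_u = 0, None
for _ in range(200):                                 # K trials
    idx = rng.choice(len(spa), 3, replace=False)
    try:
        u = solve_affine(spa[idx], spb[idx])
    except np.linalg.LinAlgError:
        continue                                     # degenerate sample
    U_k = np.array([u[:3], u[3:], [0, 0, 1]])
    conv = spa_h @ U_k.T                             # converted SPA
    d = np.linalg.norm(conv[:, None, :2] - spb[None, :, :], axis=2)
    inliers = np.sum(d.min(axis=1) < 0.05)           # nearest-neighbor match
    if inliers > best_inliers:
        best_inliers, best_u = inliers, u

assert best_inliers >= 15        # the 15 uncorrupted pairs are recovered
```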

3.2. Estimation of ARX and ARY

After the RANSAC process, the rough relationship $U$ between the scattering point sets is obtained. To estimate ARX and ARY from $U$, we let
$$k_1 = \frac{\cos\varphi - u_1}{\sin\varphi} = \frac{\cos\alpha\cos\beta}{\sin^2\alpha}$$
$$k_2 = \frac{u_2}{\sin\varphi} = \frac{\sqrt{\sin^2\alpha - \cos^2\beta}}{\sin^2\alpha}$$
Combining (32) and (33), we have
$$\sin^2\alpha = \frac{\left(k_2^2 + k_1^2 + 1\right) \pm \sqrt{\left(k_2^2 + k_1^2 + 1\right)^2 - 4k_2^2}}{2k_2^2}$$
$$\cos\beta = \frac{k_1 \sin^2\alpha}{\cos\alpha}$$
Because $\sin\alpha$ must be less than 1, only the smaller root of (34) is valid:
$$\sin^2\alpha = \frac{\left(k_2^2 + k_1^2 + 1\right) - \sqrt{\left(k_2^2 + k_1^2 + 1\right)^2 - 4k_2^2}}{2k_2^2}$$
Therefore, ARX and ARY can be derived as
$$\alpha = \arcsin\left( \sqrt{ \frac{\left(k_2^2 + k_1^2 + 1\right) - \sqrt{\left(k_2^2 + k_1^2 + 1\right)^2 - 4k_2^2}}{2k_2^2} } \right)$$
$$\beta = \arccos\left( \frac{k_1 \sin^2\alpha}{\cos\alpha} \right)$$
Similarly, we let
$$k_3 = \frac{\cos\varphi - u_5}{\sin\varphi} = \frac{\cos\alpha\cos\beta}{\sin^2\alpha}$$
$$k_4 = -\frac{u_4}{\sin\varphi} = \frac{\sqrt{\sin^2\alpha - \cos^2\beta}}{\sin^2\alpha}$$
The final estimates of ARX and ARY can be expressed as
$$\alpha = \arcsin\left( \mathrm{mean}\left( \sqrt{ \frac{\left(k_2^2 + k_1^2 + 1\right) - \sqrt{\left(k_2^2 + k_1^2 + 1\right)^2 - 4k_2^2}}{2k_2^2} },\ \sqrt{ \frac{\left(k_4^2 + k_3^2 + 1\right) - \sqrt{\left(k_4^2 + k_3^2 + 1\right)^2 - 4k_4^2}}{2k_4^2} } \right) \right)$$
$$\beta = \arccos\left( \mathrm{mean}\left( \frac{k_1 \sin^2\alpha}{\cos\alpha},\ \frac{k_3 \sin^2\alpha}{\cos\alpha} \right) \right)$$
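The inversion from $k_1$, $k_2$ back to ARX and ARY can be verified on a synthetic geometry. In the sketch below, the angles are arbitrary test values satisfying $\cos^2\beta < \sin^2\alpha$:

```python
import numpy as np

alpha, beta, phi = np.deg2rad(60.0), np.deg2rad(75.0), np.deg2rad(20.0)
cos_t = np.sqrt(np.sin(alpha)**2 - np.cos(beta)**2)   # cos(theta)

# Affine elements u1, u2 built from the forward model of Eq. (27).
u1 = np.cos(phi) - (np.cos(alpha)*np.cos(beta)/np.sin(alpha)**2)*np.sin(phi)
u2 = (cos_t/np.sin(alpha)**2)*np.sin(phi)

k1 = (np.cos(phi) - u1)/np.sin(phi)   # = cos(a)cos(b)/sin^2(a)
k2 = u2/np.sin(phi)                   # = cos(theta)/sin^2(a)

# Smaller root of the quadratic in sin^2(alpha), then recover both angles.
K = k1**2 + k2**2 + 1
sin2_a = (K - np.sqrt(K**2 - 4*k2**2)) / (2*k2**2)
alpha_hat = np.arcsin(np.sqrt(sin2_a))
beta_hat = np.arccos(k1*sin2_a/np.cos(alpha_hat))

assert np.isclose(alpha_hat, alpha)
assert np.isclose(beta_hat, beta)
```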

3.3. Estimation of Scatterer Height and Affine Transformation Considering Scatterer Height

The residual shift of each matched scatterer can be represented as
$$\begin{bmatrix} \Delta x_B^{j_i} \\ \Delta y_B^{j_i} \end{bmatrix} = \begin{bmatrix} x_B^{j_i} \\ y_B^{j_i} \end{bmatrix} - \begin{bmatrix} u_1 & u_2 \\ u_4 & u_5 \end{bmatrix} \begin{bmatrix} x_A^i \\ y_A^i \end{bmatrix}$$
According to (25) and (26), this shift can be rewritten as
$$\begin{bmatrix} \Delta x_B^{j_i} \\ \Delta y_B^{j_i} \end{bmatrix} = z \begin{bmatrix} \cos\beta\sin\varphi\left(1 + \cot^2\alpha\right) \\ \frac{\cos\alpha\sin\varphi\sqrt{\sin^2\alpha - \cos^2\beta}}{\sin^2\alpha} \end{bmatrix}$$
Substituting the estimated $\alpha$ and $\beta$ into (25) and (26), the estimated scatterer height can be expressed as
$$z = \begin{bmatrix} \cos\beta\sin\varphi\left(1 + \cot^2\alpha\right) \\ \frac{\cos\alpha\sin\varphi\sqrt{\sin^2\alpha - \cos^2\beta}}{\sin^2\alpha} \end{bmatrix}^{\dagger} \begin{bmatrix} \Delta x_B^{j_i} \\ \Delta y_B^{j_i} \end{bmatrix}$$
where $(\cdot)^{\dagger}$ denotes the Moore-Penrose pseudoinverse, i.e., the least-squares solution of the overdetermined system above.
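Since each residual shift is the scalar height $z$ times a known 2 × 1 vector, the pseudoinverse reduces to an ordinary least-squares projection. A minimal sketch, with assumed angles and height:

```python
import numpy as np

alpha, beta, phi = np.deg2rad(60.0), np.deg2rad(75.0), np.deg2rad(20.0)
cos_t = np.sqrt(np.sin(alpha)**2 - np.cos(beta)**2)   # cos(theta)

# Known 2x1 coefficient vector multiplying the unknown height z.
v = np.array([
    np.cos(beta)*np.sin(phi)*(1 + 1/np.tan(alpha)**2),
    np.cos(alpha)*np.sin(phi)*cos_t/np.sin(alpha)**2,
])

z_true = 1.8                              # assumed scatterer height
delta = z_true * v                        # simulated residual shift
z_hat = np.dot(v, delta) / np.dot(v, v)   # pseudoinverse = LS projection

assert np.isclose(z_hat, z_true)
```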
Combining with the estimated $z$, the relationship between SPA and SPB over all $N$ matched scatterers can be rewritten as
$$\begin{bmatrix} x_B^1 \\ \vdots \\ x_B^N \\ y_B^1 \\ \vdots \\ y_B^N \end{bmatrix} = \begin{bmatrix} x_A^1 & y_A^1 & z^1 & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ x_A^N & y_A^N & z^N & 0 & 0 & 0 \\ 0 & 0 & 0 & x_A^1 & y_A^1 & z^1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & x_A^N & y_A^N & z^N \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5 \\ u_6 \end{bmatrix}$$
Then, (46) can be rewritten as
$$b = A\mu$$
where $b = [x_B^1, \ldots, x_B^{N_d}, y_B^1, \ldots, y_B^{N_d}]^T$, $\mu = [u_1, u_2, u_3, u_4, u_5, u_6]^T$, and $A$ is the middle part of (31). After QR decomposition, $A$ can be expressed as
$$A = QR$$
Because $Q$ is an orthogonal matrix, $QQ^T = I$, where $I$ is the identity matrix, $(\cdot)^T$ denotes transpose, and $R$ is an upper triangular matrix. Then, $\mu$ can be derived by
$$\mu = R^{-1} Q^T b$$
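The QR-based solve $\mu = R^{-1}Q^T b$ can be sketched with a random tall matrix standing in for the coefficient matrix (the true matrix is built from the matched scatterer coordinates and heights):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 12                                   # number of matched scatterers
A = rng.normal(size=(2*N, 6))            # stand-in for the 2N x 6 matrix
mu_true = rng.normal(size=6)             # assumed affine parameters u1..u6
b = A @ mu_true

Q, R = np.linalg.qr(A)                   # reduced QR: Q is 2N x 6, R is 6 x 6
mu_hat = np.linalg.solve(R, Q.T @ b)     # back-substitution on triangular R

assert np.allclose(mu_hat, mu_true)
```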
Based on the above analysis, the recalibrated range and azimuth coordinates can be expressed as
$$x = \frac{x_A - z\cos\alpha}{\sin\alpha}$$
$$y = \frac{y_A}{\sin\alpha}$$
Combining with the estimated z , the target 3D scattering image ( x , y , z ) is obtained.

4. Simulations

In this section, to demonstrate the effectiveness and superiority of the proposed algorithm, experiments based on two different simulation datasets are conducted. To verify the capability of the proposed method, the first experiment compares the modified SIFT algorithm from [36] with the proposed method on data generated with the point scattering center model. Because real data are hard to obtain, a second experiment was conducted with CADFEKO software, whose simulated echo is close to a real one. The radar parameters were the same for both experiments: the radar operated in the Ku-band with a signal bandwidth of 2 GHz.

4.1. Experiment on Point Scattering Simulation Data

To compare the modified SIFT with the proposed method, a satellite simulated with ideal point scatterers was used. The system parameters are listed in Table 1. Figure 3 shows the key points extracted by the modified SIFT and the dominant scatterers extracted for ISAR. Comparing Figure 3a,c, one can see that, due to the different principles of feature extraction, the feature points from SIFT differ from the dominant scatterers for ISAR. Figure 4 shows the matched feature points. The number of matched feature points from SIFT was smaller than that from the proposed method. Due to different imaging mechanisms, ISAR images do not have the strict gray-scale similarity of optical images, and the modified SIFT feature description vectors are too restrictive for ISAR images, which greatly reduces the number of matched points.
Figure 5 shows the fused images. Unlike that of the modified SIFT, the fused image from the proposed method is a scatterer image. The lines in Figure 5b indicate the matched scatterer pairs; one can see that almost all the dominant scatterers are matched. Because $U$ represents the relationship between the two images, the RMSE of the estimated $U$ is used to measure the performance of the algorithms. The RMSE is computed according to
$$\mathrm{RMSE} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left\| U_i - U_0 \right\|^2 }$$
where $U_i$ is the $i$th estimate of $U_0$, and $U_0$ is the true value. For signal-to-noise ratios (SNRs) of $(-10, -5, 0, 5)$ dB, 200 Monte Carlo experiments were carried out. The RMSE of the estimated $U$ is shown in Figure 6a. The performance of the modified SIFT improved as the SNR increased, whereas the proposed method was not affected by noise in this range, because the dominant scatterer extraction algorithm is highly precise; precise dominant scatterer extraction is the foundation of the proposed method. After feature point matching, the target 3D image can be obtained, as shown in Figure 6b. A comparison of the estimated target attitude and the set target attitude relative to radar A is shown in Figure 7.

4.2. Simulation Based on CADFEKO Software

To verify that the proposed method is suitable for radar echoes close to real data, an experiment based on CADFEKO software was carried out. The target was a simulated satellite, shown in Figure 8a. The length and width of each solar panel were 1 m and 0.28 m, respectively; in the middle there was a sphere with a diameter of 0.4 m; and the distance between the two solar panels was 0.6 m. The parameter settings are listed in Table 2, from which one can see that ARX was 80°. However, ARY cannot be obtained directly from the system parameters and must be calculated. The unit LOS vectors of radar A and radar B can be expressed as
$$n_A = [\sin\alpha\cos\phi,\ \sin\alpha\sin\phi,\ \cos\alpha]$$
$$n_B = [\sin\alpha\cos(\phi + \psi),\ \sin\alpha\sin(\phi + \psi),\ \cos\alpha]$$
where $\psi$ is the azimuth angle between radar A and radar B, and $\phi$ is the azimuth angle of radar A. Because the $O_0X_0Y_0$ plane is spanned by $n_A$ and $n_B$, the $Z_0$-axis can be expressed as
$$n_{Z_0} = \frac{n_A \times n_B}{\left\| n_A \times n_B \right\|}$$
Then, the $Y_0$-axis can be expressed as
$$n_{Y_0} = \frac{n_{Z_0} \times n_A}{\left\| n_{Z_0} \times n_A \right\|}$$
ARY can be represented by
$$\beta = \arccos\left( n_{Y_0}(3) \right)$$
where $n_{Y_0}(3)$ is the third element of $n_{Y_0}$. $\beta$ is influenced by $\psi$ and $\alpha$; the relationship is shown in Figure 8b. Because the pitch angles of the two radars are the same, $\beta$ is greater than 90° and less than 115°. In addition, $\varphi$ cannot be obtained from the preset parameters. The CADFEKO model is abstractly represented in Figure 9a; for convenience of analysis, the red part of Figure 9a is shown in Figure 9b. $\varphi$ can be obtained by
$$\varphi = \pi - 2\gamma$$
where $\cos\gamma = \sin\theta\sin(\psi/2)$. As analyzed above, ARY and $\varphi$ are calculated as 91.73° and 19.69°, respectively.
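The two routes to $\varphi$ (the half-angle construction above and the angle between the two unit LOS vectors) can be cross-checked numerically. The pitch of 80° and the azimuth separation $\psi$ = 20° used below are assumptions chosen to reproduce the quoted value:

```python
import numpy as np

theta, psi = np.deg2rad(80.0), np.deg2rad(20.0)   # assumed pitch / separation

# Route 1: half-angle construction, cos(gamma) = sin(theta) sin(psi/2).
gamma = np.arccos(np.sin(theta)*np.sin(psi/2))
phi = np.pi - 2*gamma

# Route 2: angle between the two unit LOS vectors directly.
def los(theta, az):
    return np.array([np.sin(theta)*np.cos(az),
                     np.sin(theta)*np.sin(az),
                     np.cos(theta)])
phi_dot = np.arccos(np.clip(los(theta, 0.0) @ los(theta, psi), -1.0, 1.0))

assert np.isclose(phi, phi_dot)                   # both routes agree
assert abs(np.rad2deg(phi) - 19.69) < 0.01        # matches the quoted value
```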
Through scattering center extraction, the SPA and SPB shown in Figure 10 were obtained. Figure 10b shows eight edge points with strong sidelobes. Due to differences in the angle of incidence and the degree of occlusion at the edge points, their scattering intensities also varied; two were significantly larger than the other six. There were also scattering points between the eight edge points. After RANSAC, the converted SPA and SPB are plotted in Figure 11b. Since the influence of noise on RANSAC is relatively small, the influence of sidelobes in SPB was removed after preliminary scattering point matching. After iteration, the target 3D image was obtained, as shown in Figure 12a; the estimated length of the solar panel was 2.56 m. The estimated ARX and ARY are listed in Table 3. According to the estimated ARX and ARY, the 3D reconstruction with respect to the radar coordinate system is shown in Figure 12b, where the two solar panels can be identified, and its three views are shown in Figure 12c-e. Compared with SPA in Figure 11a, the front view in Figure 12c is larger and upside down. According to (50) and (51), the top view shows $x$ and $y$ in the radar coordinate system, whereas Figure 11a shows $x_A$ and $y_A$, the projection of the target onto the image plane, which differs from the $(X_0, Y_0)$ plane.

5. Conclusions

In this paper, ISAR image matching and three-dimensional (3D) scattering imaging based on extracted dominant scatterers have been studied. Compared with 3D imaging based on an ISAR image sequence, the proposed method needs only observations from two arbitrarily configured radars over a short time. Unlike SIFT, the matching process in the proposed method is based on the locations of the scattering points; without harsh feature descriptions, the number of matched scatterers is greatly increased. First, taking advantage of the sparsity of the ISAR image, the radar images are transformed into scattering point sets by the scattering center extraction method. Then, coarse scatterer matching is achieved by RANSAC. The scatterer height and accurate affine transformation parameters are estimated iteratively to obtain fine scatterer matching. Based on the matched scatterers, the target 3D image, ARX, and ARY can be obtained. Finally, experiments based on the electromagnetic simulation software CADFEKO have been conducted to demonstrate the effectiveness of the proposed algorithm.

Author Contributions

Formal analysis, D.X., B.B., G.-C.S., M.X. and V.P.; Writing—original draft, D.X.; Writing—review & editing, D.X., B.B. and V.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China under Grant 61621005 and the National Science Fund for Distinguished Young Scholars under Grant 61825105.

Acknowledgments

The authors would like to acknowledge the anonymous reviewers for their useful comments and suggestions, which were a great help in improving this paper. The authors are very grateful for the Doctoral Students’ Short Term Study Abroad Scholarship Fund by Xidian University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, C.-C.; Andrews, H.C. Target-motion-induced radar imaging. IEEE Trans. Aerosp. Electron. Syst. 1980, 16, 2–14.
  2. Zhang, Q.; Yeo, T.S.; Du, G. ISAR imaging in strong ground clutter using a new stepped-frequency signal format. IEEE Trans. Geosci. Remote Sens. 2003, 41, 948–952.
  3. Bao, Z.; Xing, M.D.; Wang, T. Radar Imaging Technique; Publishing House of Electronic Industry: Beijing, China, 2005.
  4. Wang, Q.; Xing, M.; Lu, G.; Bao, Z. Single range matching filtering for space debris radar imaging. IEEE Geosci. Remote Sens. Lett. 2007, 4, 576–580.
  5. Chen, L.; Liang, B.; Yang, D. Two-Step Accuracy Improvement of Motion Compensation for Airborne SAR with Ultrahigh Resolution and Wide Swath. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7148–7160.
  6. Wang, D.-W.; Ma, X.-Y.; Chen, A.-L.; Su, Y. High-resolution imaging using a wideband MIMO radar system with two distributed arrays. IEEE Trans. Image Process. 2010, 19, 1280–1289.
  7. Carlo, N.; Gianfranco, F.; Marco, M. A novel approach for motion compensation in ISAR system. In Proceedings of the EUSAR 2014, 10th European Conference on Synthetic Aperture Radar, Berlin, Germany, 3–5 June 2014; pp. 1–4.
  8. Baselice, F.; Caivano, R.; Cammarota, A.; Pascazio, V.; Ferraioli, G. SAR despeckling based on enhanced Wiener filter. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 1042–1045.
  9. Xu, G.; Yang, L.; Bi, G.; Xing, M. Enhanced ISAR imaging and motion estimation with parametric and dynamic sparse Bayesian learning. IEEE Trans. Comput. Imaging 2017, 3, 940–952.
  10. Fu, J.; Xu, D.; Xing, M. A novel ionospheric TEC estimation method based on L-Band ISAR signal processing. In Proceedings of the IGARSS 2019, IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 314–317.
  11. Ferraioli, G.; Pascazio, V.; Schirinzi, G. Ratio-Based NonLocal Anisotropic Despeckling Approach for SAR Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7785–7798.
  12. Wang, J.; Liang, X.; Chen, L.; Wang, L.; Li, K. First Demonstration of joint wireless communication and high-resolution SAR imaging using airborne MIMO radar system. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6619–6632.
  13. Sheng, J.; Xing, M.; Zheng, L.; Mehmood, M.Q.; Yang, L. ISAR cross-range scaling by using sharpness maximization. IEEE Trans. Geosci. Remote Sens. Lett. 2015, 12, 165–169.
  14. Gao, Y.; Xing, M.; Zhang, Z.; Guo, L. ISAR imaging and cross-range scaling for maneuvering targets by using the NCS-NLS algorithm. IEEE Sens. J. 2019, 19, 4889–4897.
  15. Knaell, K.; Cardillo, G. Radar tomography for the generation of three-dimensional images. Proc. Radar Sonar Navig. 1995, 142, 54–60.
  16. Mayhan, J.; Burrows, M.; Cuomo, K.; Piou, J. High resolution 3D snapshot ISAR imaging and feature extraction. IEEE Trans. Aerosp. Electron. Syst. 2001, 37, 630–642.
  17. Morita, T.; Kanade, T. A sequential factorization method for recovering shape and motion from image streams. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 858–867.
  18. McFadden, F.E. Three-dimensional reconstruction from ISAR sequences. Proc. SPIE 2002, 4744, 58–67.
  19. Liu, L.; Zhou, F.; Bai, X.; Tao, M. Joint cross-range scaling and 3D Geometry reconstruction of ISAR targets based on factorization method. IEEE Trans. Image Process. 2016, 25, 1740–1750.
  20. Wang, F.; Xu, F.; Jin, Y. 3-D information of a space target retrieved from a sequence of high-resolution 2-D ISAR images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 5000–5002.
  21. Wang, F.; Xu, F.; Jin, Y. Three-dimensional reconstruction from a multiview sequence of sparse ISAR imaging of a space target. IEEE Trans. Geosci. Remote Sens. 2018, 56, 611–620.
  22. Dan, X.; Xing, M.; Xia, X.-G.; Sun, G.-C.; Fu, J.; Su, T. A multi-Perspective 3D reconstruction method with single perspective instantaneous target attitude estimation. Remote Sens. 2019, 11, 1277.
  23. Gianfranco, F.; Francesco, S.; Francesco, S. Three-Dimensional focusing with multipass SAR data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 507–517.
  24. Gianfranco, F.; Antonio, P.; Diego, R. A null-space method for the phase unwrapping of multitemporal SAR interferometric stacks. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2323–2334.
  25. Marco, M.; Daniele, S.; Federica, S.; Nicola, B. 3D interferometric ISAR imaging of noncooperative targets. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 3102–3114.
  26. Chen, Q.; Xu, G.; Zhang, L.; Xing, M.; Bao, Z. Three-dimensional interferometric inverse synthetic aperture radar imaging with limited pulses by exploiting joint sparsity. IET Radar Sonar Navig. 2015, 9, 692–701. [Google Scholar] [CrossRef]
  27. Rong, J.; Wang, Y.; Han, T. Interferometric ISAR imaging of maneuvering targets with arbitrary Three-Antenna configuration. IEEE Trans. Geosci. Remote Sensi. 2020, 58, 1102–1119. [Google Scholar] [CrossRef]
  28. Zhou, J.; Shi, Z.; Fu, Q. Three-dimensional scattering center extraction based on wide aperture data at a single elevation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1638–1655. [Google Scholar] [CrossRef]
  29. Wang, G.; Xia, X.-G.; Chen, V. Three-dimensional ISAR imaging of maneuvering targets using three receivers. IEEE Trans. Image Process. 2001, 10, 436–447. [Google Scholar] [CrossRef] [PubMed]
  30. Xu, X.; Narayanan, R.M. Three-dimensional interferometric ISAR imaging for target scattering diagnosis and modeling. IEEE Trans. Image Process. 2001, 10, 1094–1102. [Google Scholar] [PubMed]
  31. Ma, C.; Yeo, T.S.; Zhang, Q.; Tan, H.; Wang, J. Three-dimensional ISAR imaging based on antenna array IEEE Trans. Geosci. Remote Sens. 2008, 46, 504–515. [Google Scholar] [CrossRef]
  32. Ma, C.; Yeo, T.S.; Tan, C.; Tan, H. Sparse array 3-D ISAR imaging based on maximum likelihood estimation and clean technique. IEEE Trans. Image Process. 2010, 19, 2127–2142. [Google Scholar]
  33. Xu, G.; Xing, M.; Xia, X.-G.; Zhang, L.; Chen, Q.; Bao, Z. 3D Geometry and motion estimations of maneuvering targets for interferometric ISAR with sparse aperture. IEEE Trans. Image Process. 2016, 25, 2005–2020. [Google Scholar] [CrossRef]
  34. Tomasi, C.; Kanade, T. Shape and motion from image streams under orthography: A factorization method. Int. J. Comput. Vis. 1992, 9, 137–154. [Google Scholar] [CrossRef]
  35. Aghababaee, H.; Ferraioli, G.; Schirinzi, G.; Pascazio, V. Regularization of SAR Tomography for 3-D Height Reconstruction in Urban Areas. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 648–659. [Google Scholar] [CrossRef]
  36. Ma, W.; Wen, Z.; Wu, Y.; Jiao, L.; Gong, M.; Zheng, Y. Remote sensing image registration with modified SIFT and enhanced feature matching. IEEE Geosci. Remote Sens. Lett. 2017, 14, 3–7. [Google Scholar] [CrossRef]
  37. Duan, J.; Zhang, L.; Xing, M. Polarimetric target decomposition based on attributed scarrering center model for synthetic aperture radar targets. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2095–2099. [Google Scholar] [CrossRef]
Figure 1. (a) The radar system. (b) The space vector Z-axis with respect to the S0 system.
Figure 2. Diagram of the proposed ISAR scatterer matching and 3D imaging algorithm.
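The pipeline converts the two ISAR images into scattering point sets, coarsely matches them with RANSAC, and then iteratively refines the affine parameters and scatterer heights. A minimal sketch of a RANSAC-based coarse affine matching step is given below; the function names, sample size, iteration count, and inlier threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine fit: dst ≈ src @ A.T + t."""
    n = src.shape[0]
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src          # rows for the x-coordinate equations
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src          # rows for the y-coordinate equations
    M[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = p[4:6]
    return A, t

def ransac_affine(src, dst, iters=500, tol=0.5, seed=0):
    """Coarse matching: sample 3 correspondences, fit an affine map,
    keep the fit with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        A, t = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(src @ A.T + t - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    A, t = fit_affine(src[best_inliers], dst[best_inliers])
    return A, t, best_inliers
```

In the actual algorithm, the surviving inlier pairs would then seed the iterative estimation of scatterer heights and the refined affine transformation parameters described in the abstract.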
Figure 3. Feature points extracted from image A and image B: (a,b) by the modified scale-invariant feature transform (SIFT); (c,d) by the proposed ISAR scatterer extraction.
Figure 4. Feature point matching. (a) The modified SIFT. (b) The proposed method.
Figure 5. Feature point fusion. (a) Fusion image of the board by the modified SIFT. (b) Fusion image by the proposed method.
Figure 6. (a) The RMSE of the estimated U. (b) The 3D imaging result obtained by the proposed method.
Figure 7. The target attitude with respect to radar A. (a) The preset target attitude. (b) The estimated target attitude.
Figure 8. (a) Satellite target in the CADFEKO model. (b) ARY as a function of ARX and ψ.
Figure 9. The abstract representation. (a) The abstract CADFEKO model. (b) The red part of (a).
Figure 10. The result of two-dimensional (2D) imaging and scattering center extraction. (a) SPA; (b) SPB.
Figure 11. SPA, SPB, and the coarse matching results. (a) SPA and SPB plotted in one image. (b) The result of converted SPA and SPB.
Figure 12. 3D reconstruction results and the three views. (a) 3D imaging with respect to the target coordinate system; (b) the attitude result with respect to radar A; (c) X-Y view of (b); (d) X-Z view of (b); (e) Y-Z view of (b).
Table 1. Parameter settings.

Parameters      ARX      ARY    Bistatic Angle    Dominant Scatterers
Setting value   60.00°   45°    20°               212
Table 2. Parameter settings.

Parameters     Setting Value
φ              19.69°
ARX            80.00°
ARY            91.73°
Radar A ϕA     20.00°
Radar B ϕB     0.00°
Table 3. Parameter estimation results.

Parameters    Setting Value    Estimated Value    Estimation Error
ARX           80.00°           75.67°             4.33°
ARY           91.73°           94.38°             2.65°
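The errors reported in Table 3 are the absolute angular differences between the set and estimated values, which is easy to verify. The helper below is a hypothetical check, not part of the paper's processing chain.

```python
def angle_error_deg(setting, estimate):
    # Absolute angular difference, wrapped into [0, 180] degrees.
    d = abs(setting - estimate) % 360.0
    return min(d, 360.0 - d)

print(f"ARX: {angle_error_deg(80.00, 75.67):.2f} deg")  # 4.33, as in Table 3
print(f"ARY: {angle_error_deg(91.73, 94.38):.2f} deg")  # 2.65, as in Table 3
```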
