Technical Note

Innovative Rotating SAR Mode for 3D Imaging of Buildings

Radar Monitoring Technology Laboratory, School of Information Science and Technology, North China University of Technology, Beijing 100144, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(12), 2251; https://0-doi-org.brum.beds.ac.uk/10.3390/rs16122251
Submission received: 5 April 2024 / Revised: 21 May 2024 / Accepted: 18 June 2024 / Published: 20 June 2024
(This article belongs to the Special Issue Advances in Synthetic Aperture Radar Data Processing and Application)

Abstract
Three-dimensional SAR imaging of urban buildings is currently a research hotspot in remote sensing. Synthetic Aperture Radar (SAR) offers all-day, all-weather, high-resolution imaging capability and is an important tool for monitoring building health. Buildings exhibit geometric distortion in conventional 2D SAR images, which greatly complicates the interpretation of SAR images. This paper proposes a novel Rotating SAR (RSAR) mode, which acquires 3D information of buildings from two different angles in a single rotation. This new RSAR mode takes the center of a straight track as its rotation center and obtains images of the same facade of a building from two different angles. By utilizing the differences in the geometric distortion of the building between the image pair, the 3D structure of the building is reconstructed. Compared with existing tomographic SAR or circular SAR, this method requires neither multiple flights at different elevations nor observations from varying aspect angles, greatly simplifying data acquisition. Both simulation analysis and an actual data experiment have verified the effectiveness of the proposed method.

1. Introduction

Health monitoring of buildings is of great significance for urban safety [1]. Timely detection and repair of structural deformation can extend building life and reduce maintenance costs and frequency. Methods to evaluate building health are broadly categorized into contact and non-contact approaches [2]. Contact methods attach sensors directly to the building surface, but they require dense sensor nodes and tend to be complex and inaccurate. Non-contact methods, including LiDAR and radar technologies, offer remote sensing capabilities: they can be positioned at an appropriate distance, allowing their beams to accurately target the structures. Although LiDAR is highly accurate, it is affected by weather conditions such as rain and fog. In contrast, synthetic aperture radar (SAR) is safe, convenient, and weather-independent; it can operate continuously and monitor remote areas reliably. SAR 3D imaging offers comprehensive and accurate structural deformation information for buildings. Currently, traditional ground-based radar 2D imaging is used mainly for monitoring deformation or settlement in slopes and landslides. Thus, using ground-based radar for SAR 3D imaging of building structures is a promising technology for building health monitoring [3,4,5].
A traditional SAR image represents a 2D projection of a 3D scene onto the azimuth-slant-range plane. Due to the complex structure of buildings, geometric distortions often exist in 2D SAR images, making it difficult to accurately recover the real 3D information of buildings [6,7,8,9,10]. Currently, two main methods are used to acquire 3D information of buildings: multi-baseline and multi-angle. The multi-baseline approach, known as TomoSAR, captures spatial 3D information by collecting data at different elevations using multiple flights or antennas. Circular Synthetic Aperture Radar (CSAR) collects data from varying aspect angles and uses the complementary information obtained from these angles to extract more detailed building features. State-of-the-art research on the two methods is discussed in the following.
Researchers have been studying the 3D imaging of urban buildings using TomoSAR for over 20 years. In the 1990s, Reigber et al. of the German Aerospace Center (DLR) first proposed multi-baseline SAR 3D imaging, using data from 14 flight tracks to create tomographic images of buildings [11]. Zhu et al. first used 25 high-resolution TerraSAR-X Spotlight images to perform 3D scattering imaging of the downtown area of Las Vegas, USA in 2010 [12]. Budillon introduced deep learning into TomoSAR urban 3D imaging in 2021, using 4000 images for training to address the problem of inaccurate signal reconstruction [13]. Using 19 TerraSAR-X images, Omati et al. applied LP technology to estimate the height of the Eskan Tower in Tehran, Iran in 2022 [14]. In addition, ordinary TomoSAR imaging observes from a single angle, so tomographic imaging cannot be achieved in SAR image shadow areas. The German Aerospace Center therefore investigated airborne circular tomographic SAR imaging, which used 19 circular flight tracks and achieved 3D imaging with 360-degree observation [15]. In [16], 36 X-band images were used for ground-based multifrequency SAR tomography and 3D imaging to estimate snow and ice layer depths and refractive indices in 2017. Richard Welsh used a multiple-antenna geometry with multiple receivers and one transmitter to create 3D renders from sparse 2D SAR images in 2022 [17]. The following year, the authors of [18] proposed a multi-channel ground-based bistatic SAR receiver architecture and used the Sentinel-1 satellite as an opportunistic emission source for one-way opportunistic tomography. In 2017, Zhao Kexiang and Bi Hui used 14 TerraSAR-X images to reconstruct the building on the north side of the Pangu Seven Star Hotel in Beijing [19].
Afterwards, in 2021, Jin Shuang and Bi Hui introduced compressed sensing technology into TomoSAR and D-TomoSAR imaging, using 19 images to obtain high-resolution 3D and 4D SAR images of typical buildings in Shenzhen [20]. Chang et al. enhanced SAR 3D imaging by combining DBF and BSS, improving echo separation in uneven terrain and simplifying data processing for urban building structures in 2022 [21]. Based on 14 satellite SAR images, the authors of [22] used Compressed Sensing (CS) to realize super-resolution 3D TomoSAR imaging in 2022, which was suitable for elevation extraction of urban buildings. The authors of [23] proposed a distributed SAR high-resolution wideband (HRWS) 3D imaging method in 2023, which overcame the limitation of traditional single-satellite antennas by using 14-channel SAR data. Zhang et al. proposed a new robust Gridless CS (RGLCS) algorithm in 2023, which used multiple antenna phase centers (APCs) to acquire data from different viewing angles and demonstrated urban mapping [24].
The concept of CSAR appeared in the mid-1990s. In recent years, multi-angle observation represented by CSAR has also become a research hotspot in SAR 3D imaging of buildings. In 2011, both the German Aerospace Center and the Institute of Electronics of the Chinese Academy of Sciences publicly released full-view high-resolution circular SAR images. In 2021, Li, Chen, and others used the correlation between 120 sub-apertures to extract DEM information, improving the accuracy of 3D imaging [25]. In the same year, Zhang used background constraints to divide C-band CSAR images into 13 sub-apertures and obtain 3D imaging results for C-band CSAR images [26]. In 2023, Zhang et al. introduced a new holographic SAR volumetric imaging strategy based on single-pass circular InSAR data; using data acquired at a flight radius and flight altitude of 3000 m, an improved compressed sensing algorithm effectively handled the 3D imaging problem [27]. Then, based on X-band airborne CSAR data with 12 channels, a total baseline length of 3 m, and an average flight altitude of 4150 m, Li proposed a wavenumber-domain layered phase compensation method, obtaining clear 3D building reconstruction [28]. In the same year, Zhang, Lin, and others introduced the neural radiance field into CSAR image processing and used 55 images with a flight radius of 1000 m as a training set to reconstruct detailed 3D buildings [29].
Through the analysis of the above references, two primary technological approaches are utilized for 3D synthetic aperture radar (SAR) imaging of buildings: tomographic SAR (TomoSAR) and circular SAR (CSAR). TomoSAR can reconstruct urban architectural landscapes in detail, but it typically requires dozens of tomographic scanning flights or multi-channel data acquisition to achieve this. This method is mainly applied in airborne and satellite platforms, as it necessitates extensive data and complex processing to obtain high-quality 3D images.
On the other hand, the CSAR mode allows for acquiring full-view SAR images of buildings through one or multiple flights, making 3D reconstruction theoretically more direct and comprehensive. However, this often requires a larger flight radius and increased flight duration, adding to the complexity of mission execution. More importantly, since CSAR observes the same building from multiple angles, it introduces anisotropic interference, complicating the image registration process. This interference adversely affects the precision and clarity of 3D imaging results, especially when dealing with complex urban architectural structures.
In summary, while TomoSAR and CSAR each have their advantages, they also exhibit limitations. TomoSAR provides detailed 3D views, but its high demands on data volume and acquisition frequency limit its application scope. Meanwhile, CSAR offers a potentially faster imaging method, but its complexity and sensitivity to environmental interference necessitate additional advanced techniques to optimize the quality of 3D imaging.
To solve these problems, in this paper, we introduce a new mode for Rotating SAR imaging. We obtain images from different angles through the method of Rotating SAR, and use geometric equations to solve for 3D information. By calculating the geometric relationship of the projection point offsets between image pairs of a building, we can accurately retrieve the building’s height. The technical advantages of our method lie in the following aspects:
  • To address the issue of inefficient data acquisition requiring multiple revisits, we propose a technique that needs only a single rotation to gather information on a building from two distinct angles. This technique significantly reduces the need for repeated observations.
  • To solve the problem of anisotropic interference, we observe from the same side of the building, enhancing data reliability and accuracy.
Furthermore, the following are the main contributions of this paper:
  • The RSAR signal model is derived and the basic principles of 3D imaging are proposed.
  • A matching method based on height assumptions is proposed, which improves the search path for matching, thereby avoiding extensive matrix calculations and enhancing computational efficiency.
The remainder of this paper is arranged as follows: In Section 2, we establish the geometric observation and signal models of Rotating SAR. Section 3 introduces the RD projection model and analyzes the 3D imaging capability. Section 4 proposes a matching method based on height hypotheses and presents the main flowchart of the algorithm. Section 5 verifies the efficacy and accuracy of the proposed method through both simulation analysis and an actual data experiment. Finally, Section 6 gives conclusions.

2. Geometry and Signal Models of Rotating SAR

2.1. Geometric Model

The experimental observation platform of the Rotating SAR system is shown in Figure 1. In this research, we employ a single-channel millimeter-wave radar, and the radar experimental platform is placed horizontally in front of the building. The experiment only needs to collect data twice. Initially, the radar platform rotates to an angle of $\theta_1$ with respect to the horizontal direction, then moves in a uniform straight line from left to right along the track to form a synthetic aperture and collect data once. Subsequently, the radar platform rotates to angle $\theta_2$ with respect to the horizontal direction and repeats the collection. During the movement of the radar, the antenna beam is consistently directed towards the building area. The radar periodically emits signals and receives the echoes returning from the illuminated area.

2.2. Signal Model

Let us consider $P$ as any specific point target within the building, with 3D coordinates $(x_p, y_p, z_p)$.
In this paper, we use Frequency Modulated Continuous Wave (FMCW) radar; the transmitted signal is given by Equation (1):
$$ s(t) = \exp\left\{ j 2\pi f_c t + j \pi K t^2 \right\}, \quad t \in [0, T_p] \quad (1) $$
where $t$ represents the fast time, $f_c$ is the center frequency, $K$ is the frequency modulation rate, and $T_p$ is the pulse width.
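The chirp of Equation (1) can be sketched numerically; all parameter values below are assumptions chosen for illustration, not the settings from the paper's Table 1.

```python
import numpy as np

# Illustrative FMCW chirp of Eq. (1); parameter values are assumed.
fc = 77e9          # center frequency f_c (Hz)
B  = 1e9           # sweep bandwidth (Hz), assumed
Tp = 50e-6         # pulse width T_p (s), assumed
K  = B / Tp        # frequency-modulation rate K (Hz/s)

fs = 4e6                                   # sampling rate, assumed
t  = np.arange(int(round(Tp * fs))) / fs   # fast-time samples in [0, T_p)

# Transmitted chirp, s(t) = exp{ j2π f_c t + jπ K t² }
s_tx = np.exp(1j * (2 * np.pi * fc * t + np.pi * K * t**2))
```

The instantaneous frequency sweeps linearly from $f_c$ to $f_c + B$ over the pulse width, which is what the dechirp step in Equation (2) relies on.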
When the radar beam illuminates point target $P$, mixing the transmitted and received signals yields the baseband echo signal:
$$ s_r(t, \tau) = \sigma_p\, s(t - t_0)\, s^*(t) = \sigma_p \exp\left\{ -j 2\pi (f_c + K t)\, t_0 + j \pi K t_0^2 \right\} \quad (2) $$
where $\sigma_p$ is the scattering coefficient of the target and $t_0$ is the delay from the transmitted to the received signal, which can be expressed as follows:
$$ t_0 = \frac{2 R_p(\tau)}{C} \quad (3) $$
where $C$ is the speed of light, $\tau$ represents the slow time, and $R_p(\tau)$ is the distance from the target point $P$ to the antenna phase center:
$$ R_p(\tau) = \sqrt{ (\tau v \cos\theta - x_p)^2 + (\tau v \sin\theta - y_p)^2 + (0 - z_p)^2 } = \sqrt{ (X_a \cos\theta - x_p)^2 + (X_a \sin\theta - y_p)^2 + z_p^2 } \quad (4) $$
where $X_a$ is the coordinate along the radar track, $v$ is the constant speed at which the radar platform moves, $\theta$ is the rotation angle of the radar system, $(X_a \cos\theta, X_a \sin\theta, 0)$ is the 3D coordinate of the antenna phase center, and $(x_p, y_p, z_p)$ is the 3D coordinate of the point target $P$.
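The range history of Equation (4) can be evaluated directly; the geometry values below (speed, angle, target position) are illustrative assumptions, not the experimental configuration.

```python
import numpy as np

# Range history R_p(τ) from Eq. (4): distance from target P to the
# antenna phase center (X_a cosθ, X_a sinθ, 0), with X_a = v·τ.
def range_history(tau, v, theta, xp, yp, zp):
    Xa = v * tau                                      # along-track position
    ax, ay = Xa * np.cos(theta), Xa * np.sin(theta)   # phase-center coords
    return np.sqrt((ax - xp)**2 + (ay - yp)**2 + zp**2)

tau = np.linspace(-0.5, 0.5, 101)   # slow time (s), assumed
R = range_history(tau, v=0.5, theta=np.deg2rad(10), xp=0.0, yp=20.0, zp=5.0)
```

For a short rail aperture in front of a distant target, the range varies slowly and monotonically across the aperture, which is the phase history the BP algorithm later compensates.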
In Equation (2), the quadratic term produced by mixing is called the residual video phase (RVP). It is very small and can be safely ignored in near-field FMCW radar imaging without significant impact on the 3D imaging results. Substituting Equations (3) and (4) into Equation (2), we define a new frequency variable according to the linear time-frequency relationship of the FMCW signal:
$$ f_r = K t \quad (5) $$
Substituting into Equation (2), the frequency-domain representation of the signal is:
$$ S(f_r, \tau) = \sigma_p \exp\left\{ -j \frac{4\pi (f_c + f_r)}{C} R_p(\tau) \right\} \quad (6) $$
Let the wavenumber $K_r = 4\pi (f_c + f_r) / C$; then Equation (6) becomes:
$$ S(K_r, \tau) = \sigma_p \exp\left\{ -j K_r R_p(\tau) \right\} \quad (7) $$
This paper independently designs the Rotating SAR experimental platform, proposes a new 3D imaging method, and carries out experimental verification. The method will be detailed in the following text.

3. The 3D Imaging Capabilities of Rotating SAR

3.1. SAR Imaging by BP (Back Projection) Algorithm

From the above theory, the SAR images of the building are obtained by applying the BP (Back Projection) algorithm to the data from the two angles, with the expression for the SAR images being:
$$ g_m(x, y, z=0) = \int_{(m-1)T_a}^{m T_a} \int_{K_{\min}}^{K_{\max}} S(K_r, \tau) \exp\left\{ j K_r R_p(\tau, x, y, z=0) \right\} \mathrm{d}K_r\, \mathrm{d}\tau \quad (8) $$

where $m = 1, 2$ indexes the two rotation angles and $T_a$ is the duration of one synthetic aperture.
In the SAR images obtained through the BP algorithm, the projected positions of the building observed from the two angles differ. We derive the relationship between the geometric offset of the building in the SAR images and its height, laying the foundation for subsequent height estimation. Given the significant spatial variation of the radar's incidence angle, we employ actual RD calculations in the image processing. Moreover, within the same coordinate system, information from the building at different angles is projected to form the 2D SAR images, as shown in Figure 2; this simplifies further SAR image processing.
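As a minimal sketch of the back-projection in Equation (8), the following simulates an ideal point-target echo for one rotation angle and back-projects it onto the $z = 0$ plane; all geometry and signal parameters are illustrative assumptions, not the experimental settings.

```python
import numpy as np

# Minimal BP sketch of Eq. (8) for one rotation angle with an ideal echo.
C = 3e8
fc, B, Nf = 77e9, 1e9, 64
Kr = 4 * np.pi * (fc + np.linspace(0, B, Nf)) / C   # wavenumber samples K_r

theta = np.deg2rad(10.0)                            # rotation angle θ
Xa = np.linspace(-0.25, 0.25, 64)                   # aperture positions (m)
ant = np.stack([Xa * np.cos(theta), Xa * np.sin(theta),
                np.zeros_like(Xa)], axis=1)         # phase-center coordinates

target = np.array([0.0, 20.0, 0.0])                 # point target on z = 0
Rt = np.linalg.norm(ant - target, axis=1)           # true range history R_p(τ)
S = np.exp(-1j * np.outer(Rt, Kr))                  # ideal echo S(K_r, τ), Eq. (7)

# Back-project onto a small z = 0 grid centred on the target
xs = ys = np.linspace(-1.0, 1.0, 21)
img = np.zeros((len(ys), len(xs)), dtype=complex)
for iy, y in enumerate(ys + target[1]):
    for ix, x in enumerate(xs + target[0]):
        R = np.linalg.norm(ant - np.array([x, y, 0.0]), axis=1)
        img[iy, ix] = np.sum(S * np.exp(1j * np.outer(R, Kr)))  # Eq. (8)
```

The image magnitude peaks at the true target position, since the compensating phase $\exp\{jK_r R_p\}$ exactly cancels the echo phase only there.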

3.2. RD Projection Model

By implementing the RD projection model, we utilize the geometric information gathered from diverse angles to accurately estimate the true height of the building. Figure 3 illustrates the building’s RD projection.
To simplify the explanation, Figure 4 illustrates the RD projection model of the Rotating SAR at a specific angle. We set up an $oxy$ coordinate system with the ground $\Sigma$ serving as the imaging plane. Here, the $x$-axis is the horizontal direction, the $y$-axis is perpendicular to it in the ground plane, and the $z$-axis follows the right-hand rule. Angles $\theta_1$ and $\theta_2$ represent the two distinct rotation angles of the radar. We assume that target $P$ of the building is located at a height $h$ above the imaging plane $\Sigma$. The angle $\alpha$ denotes the radar antenna's incidence angle.
$M_1$ is the zero-Doppler point. The red arc in Figure 4 represents the iso-range iso-Doppler line from the zero-Doppler point to the target $P$, and

$$ |M_1 P| = |M_1 P_1| \quad (9) $$
In the SAR range-zero-Doppler imaging approach, $Q$ is the vertical projection point of target $P$, and $P_1$ represents the focused projection point of target $P$ in the SAR image, located on the zero-Doppler plane.

3.3. The Relationship between Height and Projection Geometry

As pictured in Figure 5, $P_1$ and $P'_1$ represent the projection points on the SAR images corresponding to the rotation angles $\theta_1$ and $\theta_2$, respectively. The red box is the primary image, while the blue outline is the secondary image.
For convenience of description, we demonstrate the geometric relationship of the side-looking zero-Doppler plane as an example. The RD projection’s geometry is approximated to a right-angled triangle, as depicted in the geometric model in Figure 6.
Assuming a height interval $\Delta h$, let $\Delta\rho$ denote the distance between the RD projection and the vertical projection point. With the incidence angle from the target to the radar denoted $\alpha$, we derive the trigonometric relationship among $\Delta\rho$, $\Delta h$, and $\alpha$ as follows:
$$ \Delta\rho = \frac{\Delta h}{\tan\alpha} \quad (10) $$
By examining the distance offset $\Delta r$ between the projection points $P_1$ and $P'_1$ in the SAR images taken from the two angles, we obtain the geometric triangle depicted in Figure 7.
$$ |P_1 Q| = |P'_1 Q| \quad (11) $$

$$ |P_1 T| = |P'_1 T| \quad (12) $$
Thus, the trigonometric relationship between $\Delta r$ and $\Delta\rho$ is as follows:
$$ \sin\left( \frac{\theta_1 + \theta_2}{2} \right) = \frac{|P_1 T|}{\Delta\rho} = \frac{|P'_1 T|}{\Delta\rho} = \frac{\Delta r}{2 \Delta\rho} \quad (13) $$
Inserting Equation (10) into Equation (13) and rearranging, we obtain the relationship between the height interval $\Delta h$ and the angles as follows:
$$ \Delta h = \frac{\Delta r \tan\alpha}{2 \sin\left( \frac{\theta_1 + \theta_2}{2} \right)} \quad (14) $$
Based on Equation (14), we can derive the correlation between a building's height and the offset of its projection points. This relationship provides the basis for subsequent height estimation and 3D imaging capability analysis.
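Equation (14) can be checked with a small worked example; the offset and angle values below are illustrative assumptions, not measured quantities.

```python
import numpy as np

# Worked example of Eq. (14): height interval Δh recovered from the
# offset Δr between projection points at rotation angles θ1 and θ2.
def height_from_offset(delta_r, alpha, theta1, theta2):
    """Δh = Δr·tan(α) / (2·sin((θ1 + θ2) / 2)); angles in radians."""
    return delta_r * np.tan(alpha) / (2.0 * np.sin((theta1 + theta2) / 2.0))

dh = height_from_offset(delta_r=0.5,
                        alpha=np.deg2rad(45.0),
                        theta1=np.deg2rad(10.0),
                        theta2=np.deg2rad(20.0))
```

With $\alpha = 45^\circ$ the tangent term is 1, so the recovered height is simply $\Delta r / (2\sin 15^\circ) \approx 0.97$ m for a 0.5 m offset.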

4. Image Matching Based on Height Assumption

4.1. Basic Principle

Given the significant spatial variance of the radar's incidence angle, actual RD (Range-Doppler) calculations are used during the image processing phase. When data are acquired, the same target appears in SAR images at different angles, creating corresponding projection regions across the images. These overlapping areas contain correlated information critical for achieving 3D imaging of the building. The following steps detail the procedure of Rotating SAR imaging:
  • Step 1: The data received from two angles are processed using the BP (Back Projection) algorithm.
  • Step 2: Select the coordinates of strong points from the primary image.
  • Step 3: Assuming the target building has $N$ different elevations, we calculate the projected position of each strong point pixel from the primary image onto the secondary image at each assumed height.
  • Step 4: The projection points of SAR images at different elevations are matched in the neighborhood. According to the strength of the correlation, we estimate the best height of the building.
Figure 8 illustrates the principal algorithmic workflow for achieving 3D imaging of the building.

4.2. Image Matching

Using the offset characteristics of the SAR images, we assume $N$ different heights along the iso-range iso-Doppler line, and each height corresponds to a projected point on the secondary image. The RD projections at the different assumed heights are shown in Figure 9.
The coordinates of a strong point pixel in the primary image are designated as $P_i(x, y)$, where $i$ is the index of the strong point pixel. On the iso-range iso-Doppler line associated with this strong point pixel, we suppose a series of potential elevations, each separated by a height interval $\Delta h$ and numbered $h_n$ (where $n$ marks the elevation index).
Among these supposed heights, we need to select the elevation estimate $h_n$ that is closest to the true value. The elevation $h_n$ can be expressed as follows:
$$ h_n = (n - 1) \Delta h \quad (15) $$
To obtain the required precision, the choice of $\Delta h$ must conform to Equation (14). In Figure 9, the black dot $P$ marks the true 3D location, the red arc draws the iso-Doppler line at angle $\theta_1$, and the blue dots, representing the $N$ elevation estimates, are situated along this red arc. The gray dots represent the projection positions of the $N$ hypothesized heights in the secondary image. At angle $\theta_2$, the blue arcs represent the iso-range iso-Doppler lines of the $N$ hypothesized heights. Only the green dot and dashed line represent the projection position of the true height and the true iso-Doppler line in the secondary image.
For the $N$ hypothesized elevations, the 3D coordinates of the pixel are given as $P_i(x_n, y_n, h_n)$. By applying the equidistant iso-Doppler projection principle with the RD (Range-Doppler) equation [26], we calculate the projection coordinates of the strong pixel points of the primary image at the different elevations in the secondary image, denoted $P'_i(x'_n, y'_n)$. The RD equation is articulated as follows:
$$ \begin{cases} \left| \overrightarrow{M_j P_i(x_n, y_n, h_n)} \right| = \left| \overrightarrow{M_j P'_i(x'_n, y'_n, 0)} \right| \\[4pt] \vec{V}_{M_j} \cdot \overrightarrow{M_j P_i(x_n, y_n, h_n)} = \vec{V}_{M_j} \cdot \overrightarrow{M_j P'_i(x'_n, y'_n, 0)} \end{cases} \quad (16) $$
In this expression, the first equation is the constant-range equation and the second is the constant-Doppler equation. Here, $M_j$ ($j = 1, 2$) denotes the location of the zero-Doppler point, and $\vec{V}_{M_j}$ is the platform velocity vector.
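The projection defined by Equation (16) can be sketched as follows: a hypothesised 3D point is mapped onto the $z = 0$ imaging plane so that both its range to the zero-Doppler point and its along-track (Doppler) coordinate are preserved. The geometry values are illustrative assumptions, and the sign of the cross-track offset is chosen towards the scene.

```python
import numpy as np

# Sketch of the RD projection in Eq. (16).
def rd_project(P, M, theta):
    V = np.array([np.cos(theta), np.sin(theta), 0.0])    # track direction
    N = np.array([-np.sin(theta), np.cos(theta), 0.0])   # ground normal to track
    d = P - M
    u = d @ V                      # along-track coordinate (constant Doppler)
    w = np.sqrt(d @ d - u**2)      # cross-track ground offset (constant range)
    return M + u * V + w * N

M = np.zeros(3)                    # zero-Doppler point M_j (assumed at origin)
P = np.array([1.0, 20.0, 8.0])     # hypothesised 3D position P_i(x_n, y_n, h_n)
Pp = rd_project(P, M, theta=np.deg2rad(15.0))   # projection P'_i on z = 0
```

By construction, the projected point satisfies both conditions of Equation (16): it lies on the imaging plane, at the same range from $M_j$, with the same along-track coordinate.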
Finally, we perform a correlation matching between the strong point neighborhood in the primary image and the neighborhood of each projection point in the secondary image, obtaining the correlation coefficients. To capture more similar features in multi-angle images during neighborhood matching, we avoid a brute-force, row-by-row searching approach across the two SAR images. Instead, we search for matching paths along the offset direction of the projection points and calculate the correlation coefficient. The coefficient value is retained only if the maximum value at the current position exceeds a predefined threshold. Figure 10 illustrates the neighborhood matching for images at n assumed heights.
The neighborhood of the strong point pixel $P_i(x, y)$ in the primary image and that of its projection point $P'_i(x'_n, y'_n)$ in the secondary image should have similar characteristics. We therefore employ the correlation matching method [30] to obtain the correlation coefficient $I_n$. The formula for the correlation matching method is as follows:
$$ I_n = \frac{\mathrm{Cov}(G_i, G_{i,n})}{\sqrt{ \mathrm{Var}(G_i)\, \mathrm{Var}(G_{i,n}) }} \quad (17) $$
$G_i$ represents the amplitude map of the neighborhood in the primary image $S_1$, while $G_{i,n}$ corresponds to the amplitude map of the neighborhood in the secondary image $S_2$. The term $\mathrm{Cov}$ denotes the cross-correlation computation, and $\mathrm{Var}$ refers to the variance calculation. In the practical estimation of the correlation coefficient, the mathematical expectation is computed as a spatial average.
By selecting the height index n corresponding to the maximum correlation coefficient, we determine the best elevation of the estimated strong point pixel, as delineated in Equation (18):
$$ \hat{n} = \arg\max_n (I_n) \quad (18) $$
Thus, the 3D coordinates of this strong point pixel are identified as $P_i(x_{\hat{n}}, y_{\hat{n}}, h_{\hat{n}})$.
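Equations (17) and (18) can be sketched with synthetic patches; the neighbourhoods below are random illustrations, not real SAR data, and the "true match" is planted at a chosen index.

```python
import numpy as np

# Sketch of Eqs. (17)-(18): normalised correlation I_n between the strong
# point's neighbourhood G_i and each candidate G_{i,n}, then argmax over n.
def correlation(Gi, Gin):
    """I_n = Cov(G_i, G_{i,n}) / sqrt(Var(G_i)·Var(G_{i,n})), spatial averages."""
    a, b = Gi - Gi.mean(), Gin - Gin.mean()
    return (a * b).mean() / np.sqrt((a**2).mean() * (b**2).mean())

rng = np.random.default_rng(0)
Gi = rng.random((7, 7))                          # primary-image neighbourhood
candidates = [rng.random((7, 7)) for _ in range(5)]
candidates[3] = Gi + 0.01 * rng.random((7, 7))   # plant the true match at n = 3

I = np.array([correlation(Gi, G) for G in candidates])
n_hat = int(np.argmax(I))                        # Eq. (18): best height index
```

The planted candidate yields a correlation near 1, while unrelated patches stay well below the thresholds used later (0.707 in simulation, 0.75 in the experiment), so the argmax recovers the correct height index.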
In data processing, the main operations of RSAR are image matching and height inversion. The computational load is proportional to the number of pixels to be matched in the image, the size of the matching window, and the number of height assumptions. Thus, the computational complexity is $O(\beta P Q W N)$, where $\beta$ is the ratio of strong scattering points, ranging from 0 to 1; $P \cdot Q$ is the number of pixels in the image; $W$ is the matching window size; and $N$ is the number of assumed heights.
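A back-of-the-envelope instance of the $O(\beta P Q W N)$ bound, with all values chosen purely for illustration (not the paper's image sizes):

```python
# Operation count for the matching stage, O(β·P·Q·W·N); values assumed.
beta = 0.05        # fraction of strong scattering points
P, Q = 1024, 1024  # image dimensions in pixels
W = 7 * 7          # matching-window size in pixels
N = 200            # number of assumed heights

ops = beta * P * Q * W * N   # elementary window comparisons
```

Even for a megapixel image, restricting the search to strong points ($\beta \ll 1$) and to the projection-offset path keeps the count in the hundreds of millions of elementary operations rather than the brute-force full-image search.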

5. Numerical Simulation and Actual Data Experiment

In this section, the effectiveness and accuracy of the proposed approach are analyzed through both simulation and an actual data experiment. The FMCW millimeter-wave radar used in the experiments is a TI IWR1642 Booster Pack manufactured by Texas Instruments (Dallas, TX, USA), with a starting frequency of 77 GHz. The simulation and actual data parameters are the same, as shown in Table 1.

5.1. Numerical Simulation

In the simulation experiment, we use point targets to verify the proposed approach, and their 3D positions are listed in Table 2. In the 3D space, we set point targets at different heights, labeled A–G, with point targets E–G at a height of 0 m.
In the simulation experiment, the BP algorithm is adopted to image the 0 m plane, with a grid size of 0.01 m. We select pixels within 3 dB of the peak in the primary and secondary images as strong points and set the height interval to 0.2 m over the height range of 0–40 m. Figure 11 shows the simulation results. Figure 11a is the SAR image of the point targets (A1–G1) at angle $\theta_1$, obtained by the BP algorithm; Figure 11b is the SAR image of the point targets (A2–G2) at angle $\theta_2$. The red boxes mark the projections of the point targets. Figure 11c analyzes the relationship between height and the offset of the image pair; the curve shows that the offset grows with increasing height. Figure 11d shows the maximum correlation coefficient curve. After image matching at the different heights, only pixels with a correlation greater than the 0.707 threshold are retained, and the others are excluded. Figure 11e,f compare the real 3D point targets with the final estimated 3D point clouds: Figure 11e is the side view and Figure 11f the top view of the 3D point clouds. As shown in Figure 11e,f, the positions of the estimated point clouds broadly match the real locations.
To quantitatively assess the proposed algorithm, we compare the true and estimated values of the point targets under the simulation conditions and calculate their errors, with results detailed in Table 3. As shown in Table 3, the average errors between the estimated mean values and the true values of the 3D coordinates are 0.0006 m, 0.0018 m, and 0.0090 m, respectively; all are very stable and below 0.1 m. It can be inferred that the 3D coordinates estimated by the proposed method closely match the true values, which allows 3D point clouds of the targets to be created. The feasibility and effectiveness of the proposed method are further verified with actual data below.

5.2. Actual Data Experiment

In the actual data experiment, we use the same radar parameters listed in Table 1. The millimeter-wave radar is the TI IWR1642 Booster Pack, a single-chip radar with two transmitting antennas and four receiving antennas; here, only one transmitting antenna and one receiving antenna are used, with the raw data samples captured by the TI DCA1000EVM data acquisition board. We place the radar directly in front of the building, approximately 20 m away, and the radar moves at a constant speed from the left end of the track to the right end over 0.5 m, always facing the building during data collection.
The experimental equipment and scene are shown in Figure 12; the orange box marks the observation area. Figure 13 shows the observation area: features with typical dihedral corner structures are labeled A–E, among which the dihedral corner structures between the stone columns and the ceiling in area D are labeled 1–5.
Based on the radar’s parameters and the selected projection plane, the pixel spacing of the image is set to 0.05 m in both azimuth and range. With the image brightness scaled to 0–255, Figure 14 shows the results of the practical scene experiment. Figure 14a,b show the 2D SAR images obtained using the BP algorithm at angles $\theta_1$ and $\theta_2$, in which the dihedral corner structures A–E on the building surface can be clearly distinguished. Figure 14c shows the mask image with a threshold of 200. Figure 14d shows the maximum correlation coefficient curve; the red dashed line indicates the threshold of 0.75. Only pixels with a correlation greater than this threshold are selected, and these are called strong points.
After strong point selection and image registration, the 3D imaging point clouds of the typical observation areas are illustrated in Figure 15, in which different targets of the building appear at different heights. Figure 15a is a side view of the 3D point clouds, clearly showing the five typical dihedral corner structures. Figure 15b shows a top view of the 3D point clouds. Figure 15c represents the different target heights in the 2D image. In Figure 15d, the colors of the point clouds map the building height.
In addition, we use a laser rangefinder to measure the heights of the five stone columns as reference values. The accuracy of the laser rangefinder is ±1.5 mm, and the corresponding strong points are labeled 1–5 in Figure 13. To evaluate the accuracy of the algorithm, the root-mean-square error between the height estimates in Figure 15a and the reference values is calculated. The statistical results show that the root-mean-square error of the proposed method is $\mathrm{RMSE}(\delta h) = 0.085$ m, as shown in Table 4. According to Table 4, the overall effect of the height reconstruction is preliminarily verified.
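The RMSE evaluation used for Table 4 can be sketched as follows; the five height values are made-up illustrations, not the paper's measurements.

```python
import numpy as np

# Root-mean-square error between height estimates and laser-rangefinder
# references, mirroring the Table 4 evaluation. Values are illustrative.
h_ref = np.array([3.10, 3.12, 3.11, 3.09, 3.10])   # reference heights (m)
h_est = np.array([3.18, 3.05, 3.20, 3.01, 3.14])   # estimated heights (m)

rmse = np.sqrt(np.mean((h_est - h_ref) ** 2))
```

With per-column errors of a few centimetres, the RMSE lands in the same order as the 0.085 m reported for the real data.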

6. Conclusions

In this paper, we introduce a novel 3D imaging mode that acquires images from two different angles using the RSAR technique and exploits the difference in the geometric distortion of the building between the two SAR images. First, this work illustrates the principles of 3D imaging and derives the geometric relationship between elevation and image offset. Subsequently, we propose a matching method based on height assumptions to reconstruct the 3D point clouds of the building efficiently. Finally, the effectiveness of our method is validated through both a practical experiment and simulation. The advantages of our approach are that it avoids the need for multiple revisits to collect data and is not affected by anisotropic interference. At present, the proposed method only forms 3D point clouds of the dihedral corner structures of the building at different heights. Rotating SAR can also be used to monitor the deformation and health of critical infrastructure such as bridges, tunnels, and embankments, helping to prevent structural failures and catastrophic accidents. In the field of cultural heritage conservation, the technology can be used to monitor the structural health of historical and cultural heritage structures and help develop conservation and restoration strategies. Our future work will mainly focus on algorithm improvement and typical applications of rotating 3D SAR. The current algorithm produces some outliers, and subsequent work will focus on improving the matching algorithm to eliminate them and achieve higher-precision point cloud generation. In the future, multiple rotation angles will be added for data acquisition to enhance 3D robustness and precision, and deformation monitoring of infrastructure such as bridges will also be carried out.

Author Contributions

Conceptualization, Y.L.; methodology, Y.L. and Y.W. (Ying Wang); software, Y.L. and Y.W. (Ying Wang); validation, Y.L. and Y.W. (Ying Wang); formal analysis, Y.L. and Y.W. (Ying Wang); resources, W.S.; writing-original draft preparation, Y.L. and Y.W. (Ying Wang); writing-review and editing, Y.L., Y.W. (Ying Wang), Y.W. (Yanping Wang), W.S. and Z.B.; supervision, Y.L.; project administration, Y.W. (Yanping Wang); funding acquisition, Y.L. and Y.W. (Yanping Wang). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 62131001 and 62371005; and the Innovation Team Building Support Program of the Beijing Municipal Education Commission, grant number IDHT20190501.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

We thank the anonymous reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sun, J.J. CD-TransUnet Based SAR Image Urban Building Change Detection. Master’s Thesis, Beijing University of Civil Engineering and Architecture, Beijing, China, 2023. [Google Scholar]
  2. Guo, B. Application of 3D Laser Scanning Technology in Deformation Monitoring to the Xinjiang Grand Theater. Geomat. Spat. Inf. Technol. 2021, 44, 225–227. [Google Scholar]
  3. Hong, W.; Wang, Y.; Lin, Y.; Tan, W.; Wu, Y. Research progress on three-dimensional SAR imaging techniques. J. Radars 2018, 7, 633–654. [Google Scholar] [CrossRef]
  4. Pu, W. Shuffle GAN with autoencoder: A deep learning approach to separate moving and stationary targets in SAR imagery. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 4770–4784. [Google Scholar] [CrossRef]
  5. Ristea, N.-C.; Anghel, A.; Datcu, M.; Chapron, B. Guided Unsupervised Learning by Subaperture Decomposition for Ocean SAR Image Retrieval. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5207111. [Google Scholar] [CrossRef]
  6. Zhu, X.X. Three-Dimensional Urban Mapping Through Tomographic SAR Imaging Techniques. Ph.D. Thesis, National University of Defense Technology, Changsha, China, 2021. [Google Scholar]
  7. Liu, H.; Guo, X.Y.; Guo, Z.Y.; Cheng, B.H. Capon Tomographic SAR Imaging Method Based on Alternating Direction Method of Multipliers. Radar Sci. Technol. 2023, 21, 303–313. [Google Scholar] [CrossRef]
  8. Bi, H.; Jin, S.; Wang, X.; Li, Y.; Han, B.; Hong, W. High-resolution High-dimensional Imaging of Urban Building Based on Gao Fen-3 SAR Data. J. Radars 2022, 11, 40–51. [Google Scholar] [CrossRef]
  9. Batra, A.; Wiemeler, M.; Göhringer, D.; Kaiser, T. Sub-mm resolution 3D SAR imaging at 1.5 THz. In Proceedings of the 2021 Fourth International Workshop on Mobile Terahertz Systems (IWMTS), Essen, Germany, 5–7 July 2021; pp. 1–5. [Google Scholar] [CrossRef]
  10. Sugavanam, N.; Ertin, E.; Jamora, J.R. Deep learning for three dimensional SAR imaging from limited viewing angles. In Proceedings of the 2023 IEEE International Radar Conference (RADAR), Sydney, Australia, 6–10 November 2023; pp. 1–6. [Google Scholar] [CrossRef]
  11. Reigber, A.; Moreira, A. First demonstration of airborne SAR tomography using multibaseline L-band data. IEEE Trans. Geosci. Remote Sens. 2000, 38, 2142–2152. [Google Scholar] [CrossRef]
  12. Zhu, X.X.; Bamler, R. Very high resolution spaceborne SAR tomography in urban environment. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4296–4308. [Google Scholar] [CrossRef]
  13. Budillon, A.; Johnsy, A.C.; Schirinzi, G.; Vitale, S. SAR tomography based on deep learning. In Proceedings of the IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 3625–3628. [Google Scholar] [CrossRef]
  14. Omati, M.; Omati, M.; Bastani, M. Building Reconstruction Based on a Small Number of Tracks Using Nonparametric SAR Tomographic Methods. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 10, 617–622. [Google Scholar] [CrossRef]
  15. Ponce, O.; Prats-Iraola, P.; Scheiber, R.; Reigber, A.; Moreira, A. First airborne demonstration of holographic SAR tomography with fully polarimetric multicircular acquisitions at L-band. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6170–6196. [Google Scholar] [CrossRef]
  16. Yitayew, T.G.; Ferro-Famil, L.; Eltoft, T.; Tebaldini, S. Lake and Fjord Ice Imaging Using a Multifrequency Ground-Based Tomographic SAR System. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4457–4468. [Google Scholar] [CrossRef]
  17. Welsh, R.; Andre, D.; Finnis, M. Multistatic 3D SAR Imaging with Coarse Elevation and Azimuth Sampling. In Proceedings of the EUSAR 2022, 14th European Conference on Synthetic Aperture Radar, Leipzig, Germany, 25–27 July 2022. [Google Scholar]
  18. Anghel, A.; Cacoveanu, R.; Ciuca, M.; Rommen, B.; Ciochina, S. Multichannel Ground-Based Bistatic SAR Receiver for Single-Pass Opportunistic Tomography. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5212119. [Google Scholar] [CrossRef]
  19. Zhao, K.X.; Bi, H.; Zhang, B.C. SAR tomography method based on fast iterative shrinkage-thresholding. J. Syst. Eng. Electron. 2017, 39, 1019–1023. [Google Scholar] [CrossRef]
  20. Jin, S.; Bi, H.; Wang, X.; Li, Y.; Zhang, J.; Feng, J.; Hong, W. High-Resolution 3-D and 4-D SAR Imaging-The Case Study of Shenzhen. In Proceedings of the 2021 CIE International Conference on Radar (Radar), Haikou, China, 15–19 December 2021; pp. 712–716. [Google Scholar] [CrossRef]
  21. Chang, S.; Deng, Y.; Zhang, Y.; Wang, R.; Qiu, J.; Wang, W.; Zhao, Q.; Liu, D. An advanced echo separation scheme for space-time waveform-encoding SAR based on digital beamforming and blind source separation. Remote Sens. 2022, 14, 3585. [Google Scholar] [CrossRef]
  22. Feng, L.; Muller, J.-P.; Yu, C.; Deng, C.; Zhang, J. Elevation Extraction from Spaceborne SAR Tomography Using Multi-Baseline COSMO-SkyMed SAR Data. Remote Sens. 2022, 14, 4093. [Google Scholar] [CrossRef]
  23. Yang, Y.; Zhang, F.; Tian, Y.; Chen, L.; Wang, R.; Wu, Y. High-Resolution and Wide-Swath 3D Imaging for Urban Areas Based on Distributed Spaceborne SAR. Remote Sens. 2023, 15, 3938. [Google Scholar] [CrossRef]
  24. Zhang, B.; Xu, G.; Yu, H.; Wang, H.; Pei, H.; Hong, W. Array 3-D SAR tomography using robust gridless compressed sensing. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5205013. [Google Scholar] [CrossRef]
  25. Li, Y.; Chen, L.; An, D. A Method for Extracting Dem Based on Sub-Aperture Image Correlation in CSAR Mode. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 5203–5206. [Google Scholar] [CrossRef]
  26. Zhang, H.; Lin, Y.; Feng, S.; Teng, F.; Hong, W. 3-D target reconstruction using C-band circular SAR imagery based on background constraints. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 2923–2926. [Google Scholar] [CrossRef]
  27. Zhang, H.; Lin, Y.; Teng, F.; Feng, S.; Hong, W. Holographic SAR Volumetric Imaging Strategy for 3-D Imaging With Single-Pass Circular InSAR Data. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5219816. [Google Scholar] [CrossRef]
  28. Li, Z.; Zhang, F.; Wan, Y.; Chen, L.; Wang, D.; Yang, L. Airborne Circular Flight Array SAR 3D Imaging Algorithm of Buildings Based on Layered Phase Compensation in the Wavenumber Domain. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5213512. [Google Scholar] [CrossRef]
  29. Zhang, H.; Lin, Y.; Teng, F.; Feng, S.; Yang, B.; Hong, W. Circular SAR Incoherent 3D Imaging with a NeRF-Inspired Method. Remote Sens. 2023, 15, 3322. [Google Scholar] [CrossRef]
  30. Yoo, J.-C.; Han, T.H. Fast Normalized Cross-Correlation. Circ. Syst. Signal. Process. 2009, 28, 819–843. [Google Scholar] [CrossRef]
Figure 1. The geometric model of Rotating SAR.
Figure 2. Schematic diagram of BP imaging at different angles in the same coordinate system.
Figure 3. RD projection model of the building.
Figure 4. Schematic of the RD projection model.
Figure 5. Geometric projection model at various rotation angles.
Figure 6. RD geometric projection relationship.
Figure 7. Diagram of distance offset among projection points.
Figure 8. Flowchart of the main algorithm for 3D imaging.
Figure 9. Different hypothetical elevation schematics.
Figure 10. Flowchart of image neighborhood matching with n hypothetical heights.
Figure 11. Simulation results for point targets: (a) SAR image of the point targets (A1–G1) at angle θ1; (b) SAR image of the point targets (A2–G2) at angle θ2; (c) the curve depicts the relationship between height and the offset of the image pair; (d) maximum correlation coefficient curve; (e) side view of 3D point clouds; (f) top view of 3D point clouds.
Figure 12. The experimental equipment and scene.
Figure 13. Observation area.
Figure 14. Practical scene experiment results: (a) 2D SAR image at angle θ1; (b) 2D SAR image at angle θ2; (c) mask image of the building; (d) maximum correlation coefficient curve.
Figure 15. Three-dimensional SAR images of the proposed algorithm: (a) side view; (b) top view; (c) different target heights in 2D image; (d) color mapping of point clouds height.
Table 1. Radar parameters.

Parameter | Value
Operating waveform | FMCW
Chirp rate (MHz/μs) | 5.0210
Sampling frequency (MHz) | 5
Samples in chirp | 512
Pulse repetition frequency (Hz) | 500
Frames | 60,000
Track length (m) | 0.5
Table 2. Point target locations (unit: m).

Point Target | Position Coordinates
Point A | (0, 25, 25)
Point B | (−5, 22, 9.4)
Point C | (6, 15, 15.2)
Point D | (10, 30, 4.5)
Point E | (−10, 22, 0)
Point F | (8, 10, 0)
Point G | (5, 30, 0)
Table 3. Three-dimensional coordinate estimation and error results for point targets (unit: m).

Point Target | True 3D Coordinates | Estimated Average 3D Coordinates | Error of x | Error of y | Error of z
A | (0, 25, 25) | (0.0013, 25.0033, 25.0009) | 0.0013 | 0.0033 | 0.0009
B | (−5, 22, 9.4) | (−4.9992, 21.9967, 9.3971) | 0.0008 | 0.0033 | 0.0029
C | (6, 15, 15.2) | (5.9999, 15.0013, 15.1642) | 0.0001 | 0.0013 | 0.0358
D | (10, 30, 4.5) | (9.9997, 29.9988, 4.5232) | 0.0003 | 0.0012 | 0.0232
E | (−10, 22, 0) | (−10.0000, 22.0000, 0.0000) | 0.0000 | 0.0000 | 0.0000
F | (8, 10, 0) | (8.0018, 10.0023, 0.0000) | 0.0018 | 0.0023 | 0.0000
G | (5, 30, 0) | (5.0001, 30.0011, 0.0000) | 0.0001 | 0.0011 | 0.0000
Table 4. Comparison of estimated heights with reference values (unit: m).

Stone Column | Estimated Height h′ | Reference Value h | Height Error δh
1 | 7.400 | 7.334 | 0.066
2 | 7.300 | 7.367 | −0.067
3 | 7.100 | 7.250 | −0.150
4 | 7.200 | 7.230 | −0.030
5 | 7.500 | 7.439 | 0.061
