4.1. CBP Algorithm
The Convolution BackProjection (CBP) algorithm is a point-by-point image reconstruction method. Its imaging grid can be set flexibly according to the resolution requirements and the actual conditions of different modes and different frequency bands. No matter how large the range migration is, the CBP algorithm accumulates the energy of each point along its own migration curve [20]. The process is shown in Figure 5:
Step 1: Construct the ground pixel grid
According to the resolution requirements, pixel grids with suitable pixel intervals are constructed in the ground imaging area, and the azimuth and range coordinates of each pixel are recorded, so that the two-dimensional pixel size is approximately matched to the azimuth and range resolutions.
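As a minimal illustration of Step 1, the grid below is built with a pixel interval slightly finer than the required resolution (the function name, the extents, and the oversampling factor are assumptions made for the sketch, not taken from the paper):

```python
import numpy as np

def make_pixel_grid(x_min, x_max, y_min, y_max, rho_az, rho_rg, oversample=1.2):
    """Build a ground pixel grid whose spacing is resolution / oversample,
    so the pixel size stays matched to the azimuth and range resolutions."""
    dx = rho_az / oversample            # azimuth pixel interval
    dy = rho_rg / oversample            # range pixel interval
    x = np.arange(x_min, x_max, dx)     # azimuth coordinates of the pixels
    y = np.arange(y_min, y_max, dy)     # range coordinates of the pixels
    return np.meshgrid(x, y, indexing="ij")

# hypothetical 100 m x 100 m scene with 1 m resolution in both domains
X, Y = make_pixel_grid(-50, 50, -50, 50, rho_az=1.0, rho_rg=1.0)
```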
Step 2: Backprojection
(a) Range pulse compression
There is no need for multi-point accumulation after pulse compression; all oversampled points are retained.
(b) Determination of the beam coverage of ground pixels
According to the azimuth and range coordinates of each pixel, and to the antenna position and azimuth beamwidth of each pulse, it can be judged, and recorded, which pixels lie within the coverage of the pulse beam.
The judgement is based on the two window functions in the echo expression of the ground target; a point target located inside both window functions is covered by the pulse beam:
(c) Pixel-by-pixel backprojection
The distance between each pixel in the imaging area and the antenna position of the pulse is calculated, and the range data are interpolated at that distance to obtain the energy contribution of the pulse to each of the pixels it covers. For the same pixel, the contributions from different pulses must be accumulated coherently.
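Steps (a)–(c) above can be sketched as a single backprojection kernel. The flat broadside geometry, the linear interpolation of the range profiles, and the simple angular beam-coverage test below are illustrative assumptions; the paper's actual test uses the two window functions of its echo model:

```python
import numpy as np

def backproject(range_profiles, ant_pos, r0, dr, X, Y, fc, half_beam=np.deg2rad(20)):
    """range_profiles: (n_pulses, n_bins) complex pulse-compressed data.
    ant_pos: (n_pulses, 3) antenna positions; r0, dr: range-axis start/step.
    Coherently accumulates each pulse's contribution at every covered pixel."""
    c = 299792458.0
    img = np.zeros(X.shape, dtype=complex)
    n_bins = range_profiles.shape[1]
    for p, pos in enumerate(ant_pos):
        dx, dy = X - pos[0], Y - pos[1]
        R = np.sqrt(dx ** 2 + dy ** 2 + pos[2] ** 2)   # pixel-to-antenna distance
        # (b) keep only pixels inside the azimuth beam (coverage test)
        in_beam = np.abs(np.arctan2(dx, dy)) < half_beam
        # (c) interpolate the range profile at each pixel's distance
        bins = (R - r0) / dr
        i0 = np.clip(bins.astype(int), 0, n_bins - 2)
        frac = np.clip(bins - i0, 0.0, 1.0)
        samp = (1 - frac) * range_profiles[p, i0] + frac * range_profiles[p, i0 + 1]
        # re-apply the carrier phase so contributions add coherently
        img += in_beam * samp * np.exp(4j * np.pi * fc * R / c)
    return img

# demo: simulated point target at (0, 100) m, straight aperture along x
c, fc = 299792458.0, 1e10
n_pulses, n_bins, r0, dr = 32, 200, 80.0, 0.25
ant = np.stack([np.linspace(-8, 8, n_pulses),
                np.zeros(n_pulses), np.zeros(n_pulses)], axis=1)
Rt = np.sqrt(ant[:, 0] ** 2 + 100.0 ** 2)              # pulse-to-target range
bins_axis = r0 + dr * np.arange(n_bins)
prof = (np.exp(-0.5 * ((bins_axis[None, :] - Rt[:, None]) / dr) ** 2)
        * np.exp(-4j * np.pi * fc * Rt[:, None] / c))  # compressed echoes
x, y = np.linspace(-2, 2, 17), np.linspace(98, 102, 17)
X, Y = np.meshgrid(x, y, indexing="ij")
img = backproject(prof, ant, r0, dr, X, Y, fc)
i, j = np.unravel_index(np.argmax(np.abs(img)), img.shape)  # peak pixel
```

Because the carrier phase is restored before summation, the pulses add in phase only at the true target position, which is what focuses the image.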
In a time-domain imaging algorithm such as CBP, the ground pixel grid is set artificially based on the resolution requirements and the actual situation; the pixel interval can generally be slightly smaller than the resolution, according to the desired geometric orientation of the image.
4.2. Fast CBP Algorithm Based on Image Segmentation
Backprojection is a point-by-point image reconstruction process that requires a large number of interpolation operations and therefore a huge amount of computation [21]. A fast implementation method based on quadtree sub-image segmentation is therefore discussed below.
In the pre-processing phase of the CBP algorithm, matched filtering and motion compensation are performed with respect to the center of the scene; together they are equivalent to a two-dimensional dechirp of the original echo signal, eliminating its second-order phase. The echo signal of a single target after the two-dimensional dechirp is as follows [21]:
The first exponential term is a nearly single-frequency signal related to the distance from the radar to the target, and the range profile can be obtained through a Fourier transform with respect to fτ. Suppose wr is the range extent of the scene; then the bandwidth in the range domain after dechirp processing is:
The second exponential term is also a nearly single-frequency signal, related to the azimuth position of the target, and the azimuth profile can be obtained through a Fourier transform with respect to t. Suppose wa is the azimuth extent of the scene; then the bandwidth in the azimuth domain after dechirp processing is [22]:
where λc is the wavelength and Rac is the distance between the center point of the aperture and the center of the scene. In the CBP algorithm, if the scene shrinks, the azimuth and range bandwidths of the corresponding phase history are reduced accordingly, so the sampling rate can be lowered (the sampling intervals in azimuth and range increased) to reduce the computation. A CBP algorithm based on sub-image processing is therefore used below. The schematic diagram is shown in Figure 6, and the specific process includes the following steps [23]:
Step 1: Sub-image segmentation
The whole image is divided into four sub-images according to the quadrants, with the pixels evenly distributed in the range and azimuth domains. For an image of N × N pixels, corresponding to Figure 6, each sub-image contains N/2 × N/2 pixels. The scene is reduced to half of the original in both range and azimuth, so the sampling rate of the data in both domains can also be halved in processing.
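A minimal sketch of the quadrant split (the A–D labels and the even N are assumptions of the sketch):

```python
import numpy as np

def split_quadrants(img):
    """Split an N x N array into four N/2 x N/2 quadrant sub-images."""
    n = img.shape[0] // 2
    return {"A": img[:n, :n], "B": img[:n, n:],
            "C": img[n:, :n], "D": img[n:, n:]}

full = np.arange(8 * 8).reshape(8, 8)   # toy 8 x 8 "image"
subs = split_quadrants(full)            # four 4 x 4 sub-images
```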
Step 2: Filter the original phase history, then down-sample in the spatial frequency domain.
The filtering is based on the extent and the center point of each sub-image. Consider the original phase history of the full scene:
where W represents the illuminated ground area and g(x, y) represents the reflectivity of the point target at coordinates (x, y). The original phase history array is of size N × N, and the sampling intervals in the (t, fτ) domain are T0 and Fs/N, so the discrete values of t and fτ are:
A basic image can be obtained from this data array through the basic linear RD algorithm [23]. Except for the center of the scene, the other points may show some defocus. To avoid the energy leakage caused by defocusing, motion re-compensation is needed, after which the linear RD algorithm and low-pass filtering can be applied. In summary, the fast filtering process includes:
(a) Motion re-compensation to the center of each sub-image.
For each sub-image, the motion compensation function is constructed from the central point of that sub-image, and the echo data are compensated pulse by pulse. Taking sub-image A as an example, the phase compensation factor is:
where RsA = RsA(t) is the instantaneous distance between the phase center of the antenna and the center of sub-image A, which can be expressed in coordinates as:
The new phase history obtained after phase compensation is:
(b) Two-dimensional imaging and window interception.
At this point an image of the full scene can still be obtained by a two-dimensional FFT, but the center of the scene has now been shifted to the center of the sub-image. It is then convenient to extract the sub-image data from the central part of the large two-dimensional data array of the full image according to the subscripts. The array size after interception is N/2 × N/2.
(c) Returning to the phase history domain.
The scene of this sub-image is halved in both range and azimuth, so the sampling rate can also be halved in both domains. After the image is returned to the spatial-frequency domain through an IFFT, the amount of data is reduced to 1/4 of the original and contains only the information of sub-image A. The signal returned to the data domain is:
The number of sampling points is now half of the original and the sampling interval is doubled, so the sampling intervals in the (t, fτ) domain are 2T0 and 2Fs/N. The discrete values of t and fτ are:
The flow diagram of fast filtering is shown in
Figure 7.
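The fast filtering steps (a)–(c) can be sketched for one sub-image as follows. The per-pulse compensation factor is passed in abstractly, since it depends on the actual geometry through RsA(t); the FFT centering conventions are also assumptions of the sketch:

```python
import numpy as np

def fast_filter(phase_history, phase_comp):
    """phase_history: (N, N) dechirped data in the (t, f_tau) domain.
    phase_comp: (N,) per-pulse factor re-centering the scene on the
    sub-image center. Returns an (N/2, N/2) down-sampled phase history."""
    N = phase_history.shape[0]
    # (a) motion re-compensation, applied pulse by pulse
    s = phase_history * phase_comp[:, None]
    # (b) two-dimensional imaging, then intercept the central window
    img = np.fft.fftshift(np.fft.fft2(s))
    lo, hi = N // 4, 3 * N // 4
    sub = img[lo:hi, lo:hi]                    # central N/2 x N/2 block
    # (c) return to the phase-history domain; sampling interval doubles
    return np.fft.ifft2(np.fft.ifftshift(sub))

# demo: a target already at the sub-image center gives a flat (DC) history
ph = np.ones((8, 8), dtype=complex)
out = fast_filter(ph, np.ones(8))   # 4 x 4 down-sampled phase history
```

The window interception is the low-pass filtering step: only the central quarter of the image spectrum, i.e., the chosen sub-image, survives into the reduced data array.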
Step 3: Backprojection.
Still taking sub-image A as an example, SsA(t, fτ) is the down-sampled frequency-domain signal Pθ_A(U) containing only the information of sub-image A, so the reconstruction formula becomes:
The backprojection process is still realized by interpolation and summation. The number of pixels in each sub-image is N/2 × N/2, and the number of pulses used for backprojection is also reduced to N/2, so the number of interpolations for each sub-image is N³/8 and the total for all four sub-images is N³/2, only half of that of the normal process.
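The interpolation counts quoted above can be checked directly, assuming N pulses and an N × N image as in the text:

```python
def interp_counts(N):
    """Interpolation counts: direct CBP vs. the four-sub-image scheme."""
    direct = N * N * N                  # N^2 pixels x N pulses
    per_sub = (N // 2) ** 2 * (N // 2)  # (N/2)^2 pixels x N/2 pulses = N^3/8
    return direct, 4 * per_sub          # four sub-images: N^3/2

full_cost, fast_cost = interp_counts(1024)  # fast_cost is half of full_cost
```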
Step 4: Sub-image mosaic.
The full image is obtained by mosaicking the sub-images according to the original segmentation rules.
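The mosaic is the inverse of the quadrant split in Step 1 of this scheme; a sketch (the quadrant labels are assumed, not taken from the paper):

```python
import numpy as np

def mosaic(subs):
    """Reassemble four N/2 x N/2 sub-images into the full N x N image,
    placing each back in its original quadrant position."""
    return np.block([[subs["A"], subs["B"]],
                     [subs["C"], subs["D"]]])

# toy sub-images: each quadrant filled with its own index 0..3
tiles = {k: np.full((2, 2), i) for i, k in enumerate("ABCD")}
full = mosaic(tiles)
```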