Article

An Automatic Recognition and Positioning Method for Point Source Targets on Satellite Images

1 Zhengzhou Institute of Surveying and Mapping, Zhengzhou 450001, China
2 Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, 7514 AE Enschede, The Netherlands
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2018, 7(11), 434; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi7110434
Submission received: 17 September 2018 / Revised: 2 November 2018 / Accepted: 4 November 2018 / Published: 7 November 2018

Abstract
Currently, the geometric and radiometric calibration of on-board satellite sensors relies on different ground targets and some form of manual intervention. Point source targets provide high-precision geometric and radiometric information and have the potential to become a new tool for joint geometric and radiometric calibration. In this paper, an automatic recognition and positioning method for point source target images is proposed. First, the template matching method was used to screen out most of the nonpoint source target pixels in the satellite imagery. The point source target images were then identified using particular feature parameters. The positions of the centroids of the point source target images were calculated using the template matching method, the weighted centroid method, and the Gaussian fitting method. The maximum position detection error obtained with the three methods was 0.07 pixels, which is better than that of the artificial targets currently in use. The experimental results show that point source targets provide high-precision geometric information and can become a suitable alternative for the automatic joint geometric and radiometric calibration of spaceborne optical sensors.

1. Introduction

With the development of earth observation technology, a number of high-resolution remote sensing satellites now operate in orbit, constituting an earth observation system with high spatial, temporal, and spectral resolution. These satellites have been widely used in the fields of national defense security, territorial survey, natural resource management, and environmental science [1,2,3,4]. However, high spatial and spectral resolution does not imply high positioning accuracy and image quality. During the operation of satellite sensors, the absolute positioning accuracy and imaging quality are affected by various factors, such as satellite orbital perturbation, the space environment, the atmospheric environment, and the device operation state [5]. The established solution for high-precision remote sensing satellite imaging and positioning is to build a high-resolution calibration field on the ground. After the satellite sensor images the calibration field, the images of the ground targets, the weather parameters, and the satellite attitude and orbit data are combined to calculate and update the calibration parameters, thereby improving image quality and geometric positioning accuracy [6,7]. The geometric and radiometric parameters are refined and compensated, and reliable calibration parameters ensure the authenticity, reliability, and accuracy of satellite positioning results [8,9].
For geometric calibration, the calibration field usually provides two categories of basic data: evenly distributed ground targets in the calibration field, and a high-resolution Digital Orthophoto Map (DOM) and Digital Elevation Model (DEM) of the calibration field. Ground targets are natural objects or artificial targets with distinct geometric features, such as road intersections. A ground target with high-precision 3D coordinates is often referred to as a Ground Control Point (GCP) and can be directly used to calculate external calibration parameters, such as the positions and attitudes of the satellite sensors [10]. The DOM and DEM data provide the reference for extracting extensive control point information through image correlation matching [11,12,13]. Using DOM image matching to obtain control points can, in theory, achieve automatic generation of GCPs. However, the DOM and the satellite images usually come from different sources that differ in time, scale, rotation, and degradation [14]. Therefore, the matching process requires additional geometric constraints, which are provided by a few manually selected image control points (ICPs) [15,16]. Owing to the good geometry and even distribution of ground targets, ICPs are usually selected from them. Evenly distributed ground targets or GCPs are therefore essential for geometric calibration tasks.
To obtain reasonably distributed ground targets or GCPs, artificial targets with good geometric features are necessary. Nowadays, artificial targets are often large structural signs with simple geometry. For example, the cross vertical angle signs and circular signs used in the Ziyuan-3 geometric calibration field are 40 × 40 m and 15 × 15 m, respectively [16,17]. Such large targets create difficulties for site selection and groundwork. Another difficulty is identifying the ICPs in satellite imagery. Manual searching is the main approach for identifying ICPs [15,18,19], which significantly reduces the efficiency and degree of automation of geometric calibration. Manual selection is also a common method for positioning ICPs and introduces errors ranging from 0.3 to more than 1 pixel [19,20,21]. Subpixel edge localization methods can also be used to position ICPs, with errors of up to approximately 0.15 pixels [17,18].
Similarly, the on-orbit radiometric calibration of remote sensing satellite sensors is primarily based on targets in the calibration field. Various methods using natural features, such as ice, sea, and desert, have been successfully applied to the radiometric calibration of satellites such as SPOT and Landsat, with good calibration results [22,23,24]. However, radiometric calibration with the aid of natural objects is highly site-specific and environment-dependent, which makes the radiometric calibration period of the sensor longer and less efficient. In order to rapidly and periodically monitor and evaluate the radiometric parameters of continuously operating satellite sensors, large-area artificial targets or small point source targets have gradually been adopted as reference targets [25,26,27]. These methods effectively overcome the limitations imposed by the external environment, time, and space in vicarious radiometric calibration.
The point source target is a new kind of target with a passive or an active light source for the radiometric calibration of remote sensing sensors [28]. Radiometric calibration experiments have been carried out using reflective point sources, and the difference between the calibration coefficients obtained from reflective point sources and from gray-scale targets is within 5% [28,29]. Radiometric calibration based on point source targets needs only the atmospheric transmittance measured synchronously at the experimental time. This greatly simplifies the observation of field synchronization data at the time of the satellite overpass, thus providing a practical way to automate radiometric calibration tasks. In fact, similar point source targets appeared thirty years ago. In 1988, the University of Arizona used a 4 × 4 point source array to acquire subpixel-level samples of Thematic Mapper (TM) pixel values and to recover the Point Spread Function (PSF) of the TM imagery [30]. From the PSF, the key parameter of image quality evaluation, the Modulation Transfer Function (MTF), can be further obtained. Subsequently, several experiments measured the PSF and MTF of satellite images with point source targets [28,31,32,33]. The experimental results show that point source target images not only provide the PSF of satellite images but can also provide the subpixel coordinates of the centroids of the point source target images, thus providing high-precision ICPs.
At this stage, the geometric and radiometric calibration tasks of spaceborne optical sensors are generally performed on different calibration fields, or with different data sources on the same calibration field. This increases the manpower and material resources that must be deployed for calibration. In addition, with the growing number of remote sensing satellites operating in orbit, the number of geometric and radiometric calibration tasks is also set to increase rapidly. To continuously and accurately update the geometric and radiometric parameters of remote sensing satellites, automatic and intelligent geometric and radiometric calibration needs to be explored. To date, the processing and analysis of point source target images have primarily served radiometric calibration tasks. Compared with natural objects, however, point source targets are clearly distinguishable and geometrically regular, which facilitates automatic recognition and high-accuracy positioning of the targets on the image.
In this paper, we propose a method for automatically identifying point source target images in remote sensing images. Initially, most of the nonpoint source target image pixels are screened out using the template matching method, followed by more accurate identification of point source target images using feature parameters. Different methods are then used to verify the positioning accuracy of the centroids of the point source target images. The positional accuracy obtained using point source target images is superior to that of the large-area cross sign, circular mark, and square mark targets currently used in remote sensing satellite geometric calibration. Compared with traditional natural features or artificial targets, point source targets have the following characteristics: (1) automatic recognition and measurement of point source target images without manual intervention; (2) higher positioning accuracy and smaller area than artificial markers or natural features; and (3) simultaneous provision of high-precision geometric and radiometric information of the remote sensing image. Combining these advantages, point source targets have the potential to become ideal tools for automatic joint geometric and radiometric calibration.
The rest of this paper is organized as follows. Section 2 provides a short introduction to the point source target image recognition and subpixel positioning methods. Section 3 presents the results and discussion from the experiments performed. The last section gives our conclusions and future research directions.

2. Materials and Methods

2.1. Initial Positioning of Point Source Target Image

A point source target is a type of artificial target placed on the ground, mainly used to measure the radiometric information of satellite imagery. It is generally categorized as a mirror-type reflective point source target [28] or an active LED (Light Emitting Diode) point source target. This study used a reflective point source developed by the Hefei Institute of Physical Science, Chinese Academy of Sciences, as shown in Figure 1. A reflective point source mainly consists of a solar observer, a mirror, and an electronic theodolite. The solar observer is used to observe the solar altitude and azimuth. The electronic theodolite adjusts the pointing of the point source reflector based on the solar observations and satellite orbit prediction data, completing the optical path alignment of the spaceborne optical sensor, the reflective point source, and the sun. The adjusted reflector reflects a certain amount of light flux to the satellite sensor, allowing the sensor to obtain an image of the point source. To make the point source clearly visible on the remote sensing image without saturation, the point source should be operated under appropriate illumination conditions and good atmospheric visibility.
The size of the reflector used in this study was approximately 0.3 m. For a satellite sensor with a resolution of five meters, this target can be regarded as an ideal point source, and the pixel distribution of its image can be regarded as a discrete sampling of the sensor's overall PSF. When the point source target is placed on the calibration field, its position on the satellite image can be calculated initially from the real-time attitude and orbit parameters of the satellite and the laboratory calibration parameters of the sensor at the time of the satellite overpass, using a rigorous imaging model of the satellite sensor. When initially locating the point source target image, a preliminary search range centered on the calculated image coordinates is defined according to the positioning error of the satellite in the uncalibrated case.

2.2. Point Source Target Image Recognition

No ideal point light source occurs naturally, whereas a point source target can be approximated as an ideal point source relative to the satellite sensor. This allows artificially arranged point source targets to show specific characteristics in satellite images. Using these characteristics, point source target images can be automatically identified and located in satellite images, providing data sources for automatic geometric and radiometric calibration tasks.

2.2.1. Characteristics of Point Source Target Image

When the sensor acquires an image, the incident energy interacts with some combination of mirrors and lenses, primarily to focus the energy onto a detector array. As a result of this interaction, some degree of distortion is introduced into the final image. Assuming that the input original scene f(x, y) is degraded by the degradation function h(x, y) and then corrupted by the ambient noise n_w(x, y), a degraded image g(x, y) is obtained. This process can be expressed as follows:
$$ g(x,y)=\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} f(\mu,\nu)\,h(x-\mu,\,y-\nu)\,d\mu\,d\nu+n_w(x,y) \tag{1} $$
Considering the case where the original image contains only one ideal point source, the point source can be regarded as the unit impulse function δ(x, y), which has infinite radiant energy at the origin and is zero elsewhere, so that
$$ \delta(x,y)*h(x,y)=\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty}\delta(\mu,\nu)\,h(x-\mu,\,y-\nu)\,d\mu\,d\nu=h(x,y) \tag{2} $$
It can be seen from Equation (2) that h(x, y) is the energy distribution produced in space by an ideal point source after the degradation function is applied, so h(x, y) is also called the Point Spread Function (PSF). For optical remote sensing satellites with high-resolution imaging capability, the 2D Gaussian function is usually used to model the PSF, ignoring the defocusing effect of the optical system [34,35,36]; it is also the most commonly used degradation function for optical camera systems:
$$ h(x,y)=\frac{1}{2\pi\sigma\xi}\exp\!\left(-\frac{(x-x_0)^2}{2\sigma^2}-\frac{(y-y_0)^2}{2\xi^2}\right) \tag{3} $$
where x_0 and y_0 are the peak positions of the PSF, and σ and ξ are the standard deviations of the PSF on the X and Y axes, respectively. The image degradation formulas in Equations (1)–(3) are for continuous energy scenes. In practice, the image acquisition system acquires a discretized digital image. Therefore, Equation (3) is sampled at one-pixel intervals, and the discrete energy distribution of the point source target image with noise is obtained as follows:
$$ g_p(i,j)=\frac{K}{2\pi\sigma\xi}\exp\!\left(-\frac{(i-x_0)^2}{2\sigma^2}-\frac{(j-y_0)^2}{2\xi^2}\right)+N_w(i,j),\qquad 1\le i\le M,\ 1\le j\le N \tag{4} $$
In Equation (4), K is a constant, M and N are the numbers of rows and columns of the generated image, N_w(i, j) is the discretized Gaussian noise, and g_p(i, j) is the pixel value at coordinate (i, j). In this paper, the fractional part of (x_0, y_0) is defined as the phase of the point source. When (x_0, y_0) is located at the center of a pixel on the image, its phase is (0, 0). According to Equation (4), the simulated degraded images of the reflected point source without noise for phases (0, 0) and (0.6, 0.8) can be obtained, as shown in Figure 2a and Figure 2b, respectively. As Figure 2 shows, when the phase of the point source target image changes, the pixel distribution of the point source target image changes as well.
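As an illustration, the discrete model of Equation (4) can be sampled directly. The following Python sketch (using numpy; the function name and parameter values are illustrative, not taken from the paper) generates noise-free point source target images for the two phases shown in Figure 2:

```python
import numpy as np

def simulate_point_source(w=9, x0=4.0, y0=4.0, sigma=0.7, xi=0.7,
                          K=1000.0, noise_std=0.0, seed=0):
    """Sample the 2D Gaussian PSF model of Eq. (4) on a w x w pixel grid.

    (x0, y0) is the subpixel centroid; its fractional part is the phase.
    """
    rng = np.random.default_rng(seed)
    i, j = np.meshgrid(np.arange(w), np.arange(w), indexing="ij")
    g = K / (2 * np.pi * sigma * xi) * np.exp(
        -((i - x0) ** 2) / (2 * sigma ** 2) - ((j - y0) ** 2) / (2 * xi ** 2)
    )
    if noise_std > 0:
        g = g + rng.normal(0.0, noise_std, size=g.shape)  # N_w(i, j) term
    return g

# Phase (0, 0): centroid at the centre of pixel (4, 4) -> symmetric pixel distribution
img_centered = simulate_point_source()
# Phase (0.6, 0.8): same total energy, but a shifted pixel distribution
img_shifted = simulate_point_source(x0=4.6, y0=4.8)
```

Changing only the phase redistributes the energy among neighbouring pixels, which is exactly the effect exploited by the phase-shifted matching templates of Section 2.2.2.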

2.2.2. Pre-Recognition of Point Source Target Image

Based on Equation (4), when x_0, y_0, σ, and ξ are known, a simulated point source target image without noise can be generated directly. Analysis of real image data from Landsat-4/5, Quickbird II, and Gaofen-2/4 reveals that the standard deviations of the PSF of these remote sensing satellites on the X and Y axes are between 0.5 and 1 pixel when a Gaussian function is used to model the PSF [37,38,39,40]. Therefore, the σ_0 and ξ_0 used to generate the matching templates can be set to empirical values directly. If more precise a priori values are needed, the slanted edge method can be used to obtain the one-dimensional Line Spread Function (LSF) of edge features on the experimental image, and one-dimensional Gaussian fitting can then be used to obtain σ_0 and ξ_0.
The phase of the point source target image influences the spatial distribution of g_p(i, j), and thus the correlation coefficient between the template and the actual image. Figure 3 shows, in one dimension, the different pixel grayscale distributions caused by sampling phases at 0.25-pixel intervals. As shown in Figure 3, when point sources are imaged at different positions within a pixel, the imaging results differ as well. To optimize the template matching results, 16 matching templates are generated at intervals of 0.25 pixels in the X and Y directions, so that one of the templates matches a point source image well and yields a high correlation coefficient. For each pixel P(i, j) of the image pixel coordinate set P = {(i, j) | 1 ≤ i ≤ M, 1 ≤ j ≤ N}, the neighboring pixel values g_{i+r, j+c} are used to calculate the correlation coefficients ρ_{i,j}^{(k)} (k = 1, 2, …, 16) between the pixel and the 16 matching templates g^{(k)}:
$$ \rho_{i,j}^{(k)}=\frac{\sum_{r=-d}^{d}\sum_{c=-d}^{d}\left(g_{i+r,\,j+c}-\bar{g}\right)\left(g_{r,c}^{(k)}-\bar{g}^{(k)}\right)}{\sqrt{\sum_{r=-d}^{d}\sum_{c=-d}^{d}\left(g_{i+r,\,j+c}-\bar{g}\right)^{2}\;\sum_{r=-d}^{d}\sum_{c=-d}^{d}\left(g_{r,c}^{(k)}-\bar{g}^{(k)}\right)^{2}}},\qquad \bar{g}=\frac{1}{w^{2}}\sum_{r=-d}^{d}\sum_{c=-d}^{d}g_{i+r,\,j+c},\quad \bar{g}^{(k)}=\frac{1}{w^{2}}\sum_{r=-d}^{d}\sum_{c=-d}^{d}g_{r,c}^{(k)} \tag{5} $$
where d = (w − 1)/2 and w (an odd number) is the size of the template window. Sixteen correlation coefficients are thus obtained for each pixel. The maximum of the 16 correlation coefficients ρ_{i,j}^{(k)} is then defined as the eigenvalue mρ_{i,j} of the pixel P(i, j):
$$ m\rho_{i,j}=\max_{k}\left(\rho_{i,j}^{(k)}\right) \tag{6} $$
By setting an empirical threshold ρ_T, pixels whose eigenvalues are lower than the threshold can be screened out directly. The reserved pixels are then set to 1 and the excluded pixels to 0, yielding a binary image from which the connected regions are calculated. In the reserved pixel set P′ = {(i, j) | 1 ≤ i ≤ M, 1 ≤ j ≤ N, mρ_{i,j} > ρ_T}, many pixels belong to the same point source target image, and these pixels generally have good connectivity. In each eight-connected region, the pixel with the largest eigenvalue is regarded as the center pixel of the point source target image. This completes the template matching-based recognition of point source target images. Because no ideal point source exists among natural objects, this method screens out a large number of nonpoint source target pixels, leaving only a small number of pixels to be processed further.
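The pre-recognition pipeline above (16 phase-shifted templates, the eigenvalue map of Equation (6), thresholding, and 8-connected region analysis) can be sketched as follows. This is a minimal Python illustration assuming numpy and scipy; the function names, window size, and threshold are illustrative rather than the paper's exact implementation:

```python
import numpy as np
from scipy.ndimage import label, maximum_position

def make_templates(w=7, sigma=0.7, xi=0.7, step=0.25):
    """16 noise-free templates from Eq. (4), phases on a 0.25-pixel grid."""
    ctr = (w - 1) / 2
    ii, jj = np.meshgrid(np.arange(w), np.arange(w), indexing="ij")
    temps = []
    for dx in np.arange(0, 1, step):
        for dy in np.arange(0, 1, step):
            temps.append(np.exp(-((ii - ctr - dx) ** 2) / (2 * sigma ** 2)
                                - ((jj - ctr - dy) ** 2) / (2 * xi ** 2)))
    return np.array(temps)

def ncc(window, template):
    """Normalized cross-correlation of Eq. (5) for one window/template pair."""
    a = window - window.mean()
    b = template - template.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def prerecognize(image, templates, rho_t=0.8):
    """Eigenvalue map (Eq. 6), threshold, one candidate per 8-connected region."""
    w = templates.shape[1]
    d = (w - 1) // 2
    M, N = image.shape
    m_rho = np.zeros((M, N))
    for i in range(d, M - d):
        for j in range(d, N - d):
            win = image[i - d:i + d + 1, j - d:j + d + 1]
            m_rho[i, j] = max(ncc(win, t) for t in templates)
    mask = m_rho > rho_t
    labels, n = label(mask, structure=np.ones((3, 3)))  # 8-connectivity
    centers = [maximum_position(m_rho, labels, k) for k in range(1, n + 1)]
    return m_rho, centers
```

Since normalized cross-correlation is invariant to brightness scale and offset, only the shape of the local pixel distribution matters, which is why a small template bank over the phase grid suffices for screening.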

2.2.3. Mismatch Elimination

After the template matching method is used to screen out a large number of nonpoint source pixels, many mismatched pixels still remain and require further processing. According to Equation (4), the pixel value distribution of an ideal point source target image consists of discrete samples of a 2D Gaussian function. Therefore, the point source target image pixel values can be fitted with a 2D Gaussian function to obtain the feature parameters of the point source target image. The Levenberg–Marquardt (LM) nonlinear least-squares algorithm can be used to calculate the parameters according to Equation (4):
$$ \varepsilon_{\min}=\sum_{i=1}^{w}\sum_{j=1}^{w}\left\{g(i,j)-K\exp\!\left[-\frac{(i-x_0)^2}{2\sigma^2}-\frac{(j-y_0)^2}{2\xi^2}\right]-b\right\}^{2} \tag{7} $$
The unknown parameters are K, x_0, y_0, σ, ξ, and b. This method needs good initial values for the unknown parameters. The initial values of x_0 and y_0 are the positions of the pixels to be detected. The pixel value at (x_0, y_0) is used as the initial value of K. The initial value of b can be set to the average value of the background pixels, and the initial values of σ and ξ can be set to the same values used when generating the 16 matching templates. Based on the least squares estimates K̂, σ̂, ξ̂, and b̂ corresponding to K, σ, ξ, and b, together with the residual ε_min of Equation (7), single parameters or combinations of parameters can be analyzed further to test whether the pixel belongs to a point source target image. The parameters used in this paper are σ̂ and ξ̂, b̂, (K̂ + b̂)/b̂, and ε_min.
σ̂ and ξ̂ represent the standard deviations of the PSF in the X and Y directions. A priori information on σ̂ and ξ̂ can be obtained by applying the slanted edge method to line features in the image. Candidate images whose resolved σ̂ and ξ̂ deviate greatly from the a priori information can therefore be removed. When no a priori information is available, the acceptable range of σ̂ and ξ̂ can be set between 0.5 and 1 pixel based on the general parameters of optical remote sensing cameras. The test can use σ̂, ξ̂, or a combination of the two.
The parameter b ^ represents the energy of the background area. In our experiment, each point source target was equipped with a black bottom net, laid underneath the point source target. For a short imaging time, the photographic environment changes little, and thus the background pixel values will not be greatly changed. Therefore, the background region energy can be used as a feature parameter to test whether the pixel belongs to a point source target image.
The point source target reflector has a high reflectivity. After the optical path adjustment, the sensor receives more light flux. The background bottom net has a much lower reflectivity, which makes the contrast between the point source target and the background higher than that of natural features. According to Equation (7), the energy value at the center of the point source target image is K + b, so the feature parameter (K̂ + b̂)/b̂, which represents the contrast between the point source and the background, can be used to check whether the pixel belongs to a point source target image.
ε_min represents the residual error between the fitted and actual pixel values; the smaller ε_min is, the better the fit. Therefore, an empirical threshold can be selected to reject candidate images with ε_min greater than the threshold.
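A minimal sketch of this mismatch-elimination step, assuming numpy and scipy (`scipy.optimize.least_squares` with `method="lm"` as one standard LM implementation); the thresholds and parameter ranges in `is_point_source` are illustrative placeholders, not values from the paper:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_point_source(patch, sigma0=0.7, xi0=0.7):
    """Fit Eq. (7): g(i, j) ~ K*exp(...) + b with Levenberg-Marquardt."""
    w = patch.shape[0]
    ii, jj = np.meshgrid(np.arange(w), np.arange(w), indexing="ij")
    c = (w - 1) / 2.0

    def residuals(p):
        K, x0, y0, sigma, xi, b = p
        model = K * np.exp(-((ii - x0) ** 2) / (2 * sigma ** 2)
                           - ((jj - y0) ** 2) / (2 * xi ** 2)) + b
        return (model - patch).ravel()

    # Initial values: candidate pixel position, peak-minus-background amplitude,
    # template sigmas, and a corner pixel as the background estimate.
    p0 = [float(patch[int(c), int(c)] - patch[0, 0]), c, c, sigma0, xi0,
          float(patch[0, 0])]
    sol = least_squares(residuals, p0, method="lm")
    K, x0, y0, sigma, xi, b = sol.x
    eps_min = float((sol.fun ** 2).sum())          # residual of Eq. (7)
    contrast = (K + b) / b if b != 0 else np.inf   # (K^ + b^) / b^
    return dict(K=K, x0=x0, y0=y0, sigma=sigma, xi=xi, b=b,
                eps_min=eps_min, contrast=contrast)

def is_point_source(feat, sigma_rng=(0.5, 1.0), min_contrast=5.0, max_eps=1.0):
    """Reject candidates whose fitted parameters fall outside plausible ranges."""
    ok_sigma = sigma_rng[0] <= abs(feat["sigma"]) <= sigma_rng[1]
    ok_xi = sigma_rng[0] <= abs(feat["xi"]) <= sigma_rng[1]
    return ok_sigma and ok_xi and feat["contrast"] >= min_contrast \
        and feat["eps_min"] <= max_eps
```

In practice each feature test (σ̂/ξ̂ range, background level b̂, contrast, and ε_min) can be applied independently, so a candidate is discarded as soon as any one check fails.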

2.3. Subpixel Positioning of Point Source Target Image

In this study, the template matching method, the weighted centroid method, and the Gaussian fitting method were used in performing high-precision positioning of the centroid pixel coordinates of the point source target image. The results of these methods were then used to verify the accuracy of point source target extraction.
1. Template Matching Method. Based on the imaging system degradation function, or the image PSF obtained from approaches such as the slanted edge method, simulated point source target images with different phases under ideal conditions are generated. The correlation coefficients between the simulated images and the point source target image are then calculated, and the subpixel position of the centroid is determined from the maximum correlation coefficient. Sixteen matching templates were used in the identification process in Section 2.2.2, where efficiency rather than accuracy was the primary consideration: a small number of templates improves computational efficiency while still effectively discriminating against nonpoint source images. Here, the subpixel position of the centroid is desired, with positioning accuracy as the only consideration. Therefore, 10,000 templates are generated at an interval of 0.01 pixels, so the accuracy of the template matching result can reach 0.01 pixels.
2. The Weighted Centroid Method. The weighted centroid method uses the value of each pixel as a weight to calculate the mean of the pixel coordinates as the centroid of the point source target image. For a point source target image of size w × w pixels, assuming that the value of pixel (i, j) is U_{ij} and the upper left pixel of the point source target image is (r, c), the centroid coordinates of the point source target image can be calculated using the following equation.
$$ \hat{x}_0=\frac{\sum_{i=0}^{w-1}\sum_{j=0}^{w-1}(i+c)\,U_{i+c,\,j+r}^{2}}{\sum_{i=0}^{w-1}\sum_{j=0}^{w-1}U_{i+c,\,j+r}^{2}},\qquad \hat{y}_0=\frac{\sum_{i=0}^{w-1}\sum_{j=0}^{w-1}(j+r)\,U_{i+c,\,j+r}^{2}}{\sum_{i=0}^{w-1}\sum_{j=0}^{w-1}U_{i+c,\,j+r}^{2}} \tag{8} $$
3. Gaussian Fitting Method. When Equation (7) is used to solve for the feature parameters of the point source target image, the subpixel coordinates (x̂_0, ŷ_0) of the centroid are obtained at the same time; the accuracy of this result will be compared with that of the previous two methods.
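The weighted centroid of Equation (8) reduces to a few lines of numpy. In this sketch the first array axis is taken as one image coordinate and the second as the other; the mapping onto the paper's (r, c) upper-left-pixel convention is illustrative and should be adapted to the image at hand:

```python
import numpy as np

def weighted_centroid(patch, r=0, c=0):
    """Eq. (8): squared pixel values as weights; (r, c) offsets the patch's
    upper-left pixel into full-image coordinates."""
    w0, w1 = patch.shape
    ii, jj = np.meshgrid(np.arange(w0), np.arange(w1), indexing="ij")
    wgt = patch.astype(float) ** 2          # U^2 weighting of Eq. (8)
    total = wgt.sum()
    axis0 = (ii * wgt).sum() / total + r    # centroid along the first axis
    axis1 = (jj * wgt).sum() / total + c    # centroid along the second axis
    return axis0, axis1
```

Squaring the pixel values concentrates the weight near the peak, which reduces the influence of background pixels and truncation at the window border on the centroid estimate.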

3. Results

3.1. Experimental Data

The satellite imagery data was obtained by the Chinese surveying and mapping satellite Tianhui-1 at the remote sensing calibration field near Urumqi, Xinjiang, as shown in Figure 4a. Tianhui-1 is a remote sensing satellite launched by China into a 500 km sun-synchronous orbit with a 97.3 degree inclination. The three-line camera on board consists of three independent five meter-resolution cameras (forward, nadir, and backward). The size of the image in Figure 4a is 12,000 × 12,000 pixels, corresponding to a ground area of 60 × 60 square kilometers. In the experiment, two kinds of targets were deployed: point source targets and a large-area greyscale target. The 16 point source targets were placed in a 4 × 4 array, 40 m apart on the ground, corresponding to about 8 pixels in the image. To accurately identify the point source images and recover the PSF parameters, the areas surrounding the point targets were covered with consistently low-reflectivity material. In the experiment, each point source target was equipped with a black bottom net to ensure a consistent grayscale of the image background, as explained in Section 2.2.3. The original acquired point source target image is shown in Figure 4b. As shown in Figure 4b, the background of the point source target was black, creating considerable contrast between the point source target and the background pixels. The large-area greyscale target was placed near the point source targets, as shown in Figure 4c. The two kinds of targets were initially placed for MTF measurement and radiometric calibration, and the radiometric calibration results have been published in [29].
In this study, we wanted to verify whether point source targets can be recognized and located automatically with high precision, thus providing an automated approach for geometric calibration. To do this, we extracted each point source target in the image and randomly placed it within the entire image range. Then, 16 image patches containing point source targets (Figure 4d) and 84 image patches without point source targets were selected from the image as the test data. The large greyscale target was used to provide the PSF parameters of the image, which were then used to generate the matching templates and as prior values for the Gaussian fitting method in Equation (7). Our methods for identifying and locating the point source targets were then applied to the test data. Using the 16 point source target images and other areas of the satellite imagery in Figure 4a, the process to generate the 100 image patches is as follows.
(1) Use one of the three point source target image centroid location methods in Section 2.3 to obtain the target image centroid location. Then use the pixel at each target centroid location as the center to open a window with the size w × w ( w is odd) pixels. The pixel value distribution in each window was used as the point source target image. The red rectangle in Figure 4b shows an example of the extracted point source target image;
(2) Randomly select 100 image patches from the satellite imagery, each with an image size of W × W pixels; W should be set larger than the nominal positioning error of the Tianhui-1 satellite in the uncalibrated case. The 16 point source target images acquired in step (1) are randomly placed into 16 of these patches, replacing some pixels in those patches. Thus, 16 image patches containing one point source target image each and 84 image patches containing none are obtained. Figure 4d shows an image patch containing a point source target (inside the red rectangle) generated by this method.
Since the background of the point source target was black, the surrounding natural objects did not seriously affect the imaging energy distribution of the point source target. Therefore, a point source target image suitable for geometric calibration was obtained by replacing some pixels in the satellite imagery, and it could be regarded as an actual point source target image. These image patches were treated as the point source target image ranges initially acquired from orbit and position parameters using the method proposed in Section 2.1.
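Steps (1) and (2) above can be sketched as follows, assuming numpy; the patch sizes and counts follow the text, while the function name, random seed, and embedding details are illustrative:

```python
import numpy as np

def make_test_patches(ps_images, full_image, n_patches=100, W=51, seed=42):
    """Randomly crop n_patches W x W patches from the full image, then embed
    each extracted point source target image into one distinct patch at a
    random position (step 2 of the procedure)."""
    rng = np.random.default_rng(seed)
    H, Wd = full_image.shape
    rows = rng.integers(0, H - W, n_patches)
    cols = rng.integers(0, Wd - W, n_patches)
    patches = [full_image[r:r + W, c:c + W].copy() for r, c in zip(rows, cols)]
    # Choose distinct patches to receive the point source target images.
    slots = rng.choice(n_patches, size=len(ps_images), replace=False)
    for k, ps in zip(slots, ps_images):
        w = ps.shape[0]
        r = int(rng.integers(0, W - w))
        c = int(rng.integers(0, W - w))
        patches[k][r:r + w, c:c + w] = ps  # replace pixels with the target image
    return patches, slots
```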

3.2. Pre-Recognition Experiment Results

The σ and ξ parameter values of the image PSF were calculated from the manually extracted grayscale target in Figure 4c to obtain the matching templates before carrying out the matching experiment. From the slanted edge image of Figure 4c, the horizontal and vertical sloping edges were selected first. The Canny edge detector [41] was then used to locate the horizontal and vertical edge pixels. Next, the least squares method was used to fit the edge pixels to obtain the edge line equation, thereby correcting the edge pixels to subpixel precision. Each row of the slanted edge image was registered and merged into discretely distributed Edge Spread Function (ESF) sample values according to the subpixel coordinates of the edge pixels. The ESF samples were then divided into 0.1-pixel windows, and the average of the ESF samples in each window was used as the resampled ESF value for that window. Finally, the resampled ESF values were smoothed to suppress the noise.
The raw sampled values of the ESF in the horizontal and vertical directions, the average sampling curve in each window, and the smoothed sampling curve are shown in Figure 5a and Figure 5b, respectively. The smoothed ESF sampling curve was differentiated to obtain the Line Spread Function (LSF). The shape of LSF is similar to the Gaussian function, and a one-dimensional Gaussian fitting function was used to fit the LSF. The LSF and its fitted Gaussian curve are shown in Figure 6a and Figure 6b, respectively. The Root Mean Square Error (RMSE) between the fitting result and the actual data in Figure 6a was 0.0017 pixels, and the correlation coefficient was 99.74%. The RMSE between the fitting result and the actual data in Figure 6b was 0.0019 pixels, and the correlation coefficient was 99.56%. The standard deviations σ and ξ of the fitted Gaussian functions are 0.66 and 0.68 pixels, respectively.
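The ESF-to-σ pipeline (bin the registered ESF samples in 0.1-pixel windows, differentiate to obtain the LSF, and fit a one-dimensional Gaussian) can be sketched as follows, assuming numpy and scipy; the smoothing step is omitted for brevity, and the function name is illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigma_from_esf(positions, values, bin_size=0.1):
    """Bin registered ESF samples, differentiate to the LSF, and fit a 1D
    Gaussian to estimate the PSF standard deviation (in pixels)."""
    order = np.argsort(positions)
    pos, val = positions[order], values[order]
    edges = np.arange(pos.min(), pos.max() + bin_size, bin_size)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(pos, edges) - 1
    esf = np.array([val[idx == k].mean() if np.any(idx == k) else np.nan
                    for k in range(len(centers))])
    keep = ~np.isnan(esf)
    centers, esf = centers[keep], esf[keep]
    lsf = np.gradient(esf, centers)  # LSF is the derivative of the ESF

    def gauss(x, a, mu, s, b):
        return a * np.exp(-(x - mu) ** 2 / (2 * s ** 2)) + b

    p0 = [lsf.max() - lsf.min(), centers[np.argmax(lsf)], 0.7, lsf.min()]
    p, _ = curve_fit(gauss, centers, lsf, p0=p0)
    return abs(p[2])  # standard deviation of the fitted Gaussian
```

The fitted standard deviation plays the role of σ (horizontal edge) or ξ (vertical edge), which then parameterize the matching templates and serve as priors for the Gaussian fitting method.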
Using the calculated σ and ξ values, 16 matching templates were generated at intervals of 0.25 pixels according to Equation (4). For each pixel in the 100 image patches, the 16 correlation coefficients were calculated, and the maximum value was taken as the degree of similarity of that pixel. The threshold of the degree of similarity was set to 0.8, and pixels with a degree of similarity below the threshold were excluded. Reserved pixels were set to 1 and excluded pixels to 0, yielding a binary image from which contiguous regions could be calculated. Pixels in the same 8-connected region were regarded as belonging to one candidate point source target image. The number of candidate point source target images finally retained in each image patch after the pre-recognition process is shown in Figure 7.
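The matching step can be sketched as follows. The template model is an assumption: Equation (4) is not reproduced here, so the sketch uses a 2-D Gaussian PSF sampled at subpixel phases on a 0.25-pixel grid (4 × 4 phases, giving the 16 templates), with scipy's `label` providing the 8-connected grouping.

```python
import numpy as np
from scipy.ndimage import label

def make_templates(sigma=0.66, xi=0.68, size=5, step=0.25):
    """Hypothetical template generator: a 2-D Gaussian PSF sampled at
    subpixel phases (dx, dy) on a 0.25-pixel grid -> 16 templates."""
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    templates = []
    for dy in np.arange(0, 1, step):
        for dx in np.arange(0, 1, step):
            t = np.exp(-((xx - dx) ** 2 / (2 * sigma ** 2)
                         + (yy - dy) ** 2 / (2 * xi ** 2)))
            templates.append(t)
    return templates

def candidate_regions(patch, templates, thresh=0.8):
    """Per pixel, take the maximum correlation coefficient over all
    templates as the degree of similarity; threshold at 0.8, binarize,
    and group 8-connected pixels into candidate targets."""
    half = templates[0].shape[0] // 2
    sim = np.zeros_like(patch, dtype=float)
    for i in range(half, patch.shape[0] - half):
        for j in range(half, patch.shape[1] - half):
            win = patch[i - half:i + half + 1, j - half:j + half + 1].ravel()
            sim[i, j] = max(np.corrcoef(win, t.ravel())[0, 1]
                            for t in templates)
    binary = sim >= thresh
    labels, n = label(binary, structure=np.ones((3, 3)))  # 8-connectivity
    return labels, n
```

A bright Gaussian spot on a dark noisy background survives the threshold and forms one connected candidate region, while flat background pixels are rejected.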
As shown in Figure 7, among the 100 image patches, only 40 contained candidate point source target images after template matching recognition, for a total of 52 candidates. The maximum, minimum, and average degrees of similarity for the point source and nonpoint source target images are shown in Table 1. The statistical values of the point source target images were slightly larger than those of the nonpoint source target images, but the two categories have essentially equivalent correlation coefficients and are difficult to distinguish directly. Figure 8 shows an example: Figure 8a,b show a reserved point source target image and a reserved nonpoint source target image, respectively, and the corresponding matching template is shown in Figure 8c. Their degrees of similarity are 0.88 and 0.87, respectively. Because the two values are nearly equal, further methods are required to eliminate nonpoint source target images.

3.3. Elimination of False Matches

For the reserved candidate point source target images, Equation (7) was used to solve the feature parameters K̂, σ̂, ξ̂, b̂, and ε_min of each image. The ability of different parameters, or combinations of parameters, to eliminate false matches was then evaluated. Based on the a priori values of σ̂ and ξ̂ obtained in Section 3.2, their value ranges were set to

0.45 ≤ σ̂ ≤ 0.85,  0.45 ≤ ξ̂ ≤ 0.85
According to the average energy distribution of the background region of any point source target image, the value of b̂ can be calculated to be approximately 0.2 in a normalized image. Therefore, the value range for b̂ was set as

0.15 ≤ b̂ ≤ 0.25
Before the experiment, a spectrometer can be used to measure the reflectance ρ_p of the point source mirror reflector and the reflectance ρ_b of the black bottom net, and the threshold can be set according to the ratio between the reflector and the bottom net:

(K̂ + b̂) / b̂ ≥ ρ_p / ρ_b − d
where d is a constant to suppress the influence of noise. Since a priori values of ρ_p and ρ_b were not available in this experiment, the empirical values ρ_p / ρ_b = 3 and d = 0.5 were used.
There was no a priori value for ε_min, so the empirical value of 0.1 was chosen for the experiment. Using different feature parameters to carry out the false-match rejection experiment, the Type I, Type II, and Type III error rates were calculated. The Type I error rate is the ratio of the number of misclassified point source target images to the total number of point source target images; the Type II error rate is the ratio of the number of misclassified nonpoint source target images to the total number of nonpoint source target images; and the Type III error rate (total error rate) is the ratio of the number of all misclassified candidates to the total number of candidate point source target images. The Type I and Type II error rates reflect the adaptability of the algorithm, and the Type III error rate reflects its feasibility. The statistical results are shown in Table 2.
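Taken together, the rejection rules of this section amount to a set of interval tests. A minimal sketch follows, assuming the thresholds quoted in the text (0.45–0.85 for σ̂ and ξ̂, 0.15–0.25 for b̂, (K̂ + b̂)/b̂ ≥ ρ_p/ρ_b − d with the empirical ρ_p/ρ_b = 3 and d = 0.5, and ε_min ≤ 0.1); the function name and the combined AND of all tests are illustrative choices.

```python
def is_point_source(sigma_h, xi_h, b_h, K_h, eps_min,
                    rho_ratio=3.0, d=0.5):
    """Sketch of the false-match rejection rules: a candidate is kept
    only if every feature parameter falls inside its a priori range."""
    if not (0.45 <= sigma_h <= 0.85 and 0.45 <= xi_h <= 0.85):
        return False                           # PSF widths out of range
    if not (0.15 <= b_h <= 0.25):
        return False                           # background level out of range
    if (K_h + b_h) / b_h < rho_ratio - d:
        return False                           # contrast ratio too low
    if eps_min > 0.1:
        return False                           # fitting residual too large
    return True
```

For example, a candidate with σ̂ = 0.55, ξ̂ = 0.56, b̂ = 0.20, K̂ = 0.60, and ε_min = 0.05 passes ((0.60 + 0.20)/0.20 = 4 ≥ 2.5), whereas a candidate with the widths of a typical natural feature (around 1 pixel) is rejected.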
As seen from Table 2, when the combined feature parameters σ̂ and ξ̂, when b̂, or when (K̂ + b̂)/b̂ was used, both the point source and nonpoint source target images were completely and accurately identified. When only σ̂, only ξ̂, or only ε_min was used, the point source target images could likewise be completely identified; however, some nonpoint source target images were incorrectly classified as point source target images, with Type II error rates of 13.9%, 16.7%, and 13.9%, respectively. This result confirms the feasibility of the algorithm for some parameters and also illustrates the obvious differences between point source target images and natural features.
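The error-rate bookkeeping behind Table 2 is simple ratio arithmetic. The sketch below reproduces the 13.9% and 9.6% figures under the assumption (consistent with the 16 point source and 36 nonpoint source candidates described in this section, but not stated explicitly in the paper) that 5 of the 36 nonpoint source candidates slipped through.

```python
def error_rates(miss_ps, n_ps, miss_nps, n_nps):
    """Type I: misclassified point source images / point source images.
    Type II: misclassified nonpoint source images / nonpoint source images.
    Type III: all misclassified candidates / all candidates."""
    type1 = miss_ps / n_ps
    type2 = miss_nps / n_nps
    type3 = (miss_ps + miss_nps) / (n_ps + n_nps)
    return type1, type2, type3
```

With 0 missed point source images out of 16 and 5 leaked nonpoint source images out of 36, this yields Type I = 0, Type II ≈ 13.9%, and Type III ≈ 9.6%, matching the σ̂ column of Table 2.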
Figure 9a–d presents the curves of the feature parameters σ̂ and ξ̂, b̂, (K̂ + b̂)/b̂, and ε_min for the candidate point source target images. To illustrate the difference between point source target images and natural features, the first 36 images in the figure are nonpoint source target images and the last 16 are point source target images. From Figure 9a, the variations of σ̂ and ξ̂ for the nonpoint source target images were larger, with an average value of approximately 1 pixel, while those for the point source target images were smaller, with an average value of approximately 0.55 pixels, close to the results obtained in Section 3.2. The differences between the feature parameters of point source and nonpoint source target images in Figure 9b,c are even more pronounced: the calculated b̂ and (K̂ + b̂)/b̂ of the point source target images maintained good consistency, with small deviations. In Figure 9d, there is still a clear difference between the two categories, and ε_min for the nonpoint source target images had a broad variation range. However, because some nonpoint source images had small fitting residuals, it was difficult to completely eliminate nonpoint source target images using ε_min alone.
It should be noted that the a priori values of the feature parameters form the basis for threshold selection. If the a priori values were not available, it would be theoretically impossible to identify the point source target images using feature parameters. Among the parameters, σ̂ and ξ̂ were obtained directly from natural features, as shown in Section 3.2; in practice, more general empirical values could also be used for known sensors. Therefore, the combination of σ̂ and ξ̂ is suitable for automatic recognition of point source target images. The parameter b̂ can also accurately separate point source and nonpoint source target images, but its a priori value needs to be obtained from a point source target image, which requires manual work; the a priori value of b̂ can, however, be reused in future experiments without repeated measurement if the imaging conditions do not change much. Before using (K̂ + b̂)/b̂, it is necessary to measure the reflectance of the mirror and the black bottom net in advance. The threshold of ε_min relies mainly on experience and is therefore somewhat arbitrary.

3.4. Subpixel Positioning Results

The results of the position estimation of the target images are shown in Table 3. In the table, I refers to the template matching method, II to the weighted centroid method, and III to the Gaussian fitting method; M1 to M16 number the point source targets in Figure 4b, ordered from left to right and top to bottom. As shown in Table 3, the results of the three methods are consistent, with deviations of less than 0.15 pixels. The differences between the coordinates estimated by each method and the mean of the three were used as the index of positioning accuracy for the point source target images. The resulting positioning errors, presented in Table 4, show that the deviations of the three methods from the mean value were no greater than approximately 0.07 pixels. These results are superior to the position detection errors of the large-area cross angle sign (0.12 pixels) and the circular marker (0.86 pixels) used in Chinese satellite calibration [17].
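Methods II and III can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the background handling (subtracting the patch minimum) and the 2-D Gaussian model with a constant offset are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def weighted_centroid(patch):
    """Method II sketch: intensity-weighted centroid after removing a
    constant background estimate (the patch minimum)."""
    p = patch - patch.min()
    yy, xx = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = p.sum()
    return (xx * p).sum() / total, (yy * p).sum() / total

def gauss2d(coords, a, x0, y0, sx, sy, b):
    """2-D Gaussian with constant background offset b."""
    x, y = coords
    return (a * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                         + (y - y0) ** 2 / (2 * sy ** 2))) + b).ravel()

def gaussian_fit_centroid(patch):
    """Method III sketch: the fitted Gaussian center (x0, y0) is taken
    as the subpixel target position."""
    yy, xx = np.mgrid[0:patch.shape[0], 0:patch.shape[1]].astype(float)
    p0 = (patch.max() - patch.min(),
          patch.shape[1] / 2, patch.shape[0] / 2, 1.0, 1.0, patch.min())
    popt, _ = curve_fit(gauss2d, (xx, yy), patch.ravel(), p0=p0)
    return popt[1], popt[2]
```

On a synthetic target with a known subpixel center, both estimators recover the position to well within a tenth of a pixel, consistent with the subpixel accuracy reported in Tables 3 and 4.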

4. Conclusions

This paper introduced an automated method for the recognition and positioning of point source targets on satellite images. Using the template matching method, most nonpoint source target images could be removed effectively, which improves the efficiency and accuracy of the subsequent false-match elimination. The point source target images can then be further identified using different feature parameters: the combination of σ̂ and ξ̂, the parameter b̂, or the ratio (K̂ + b̂)/b̂ can accurately identify the point source target images. The centroid of a point source target image can be resolved using the template matching method, the weighted centroid method, or the Gaussian fitting method, with a position detection error no greater than 0.07 pixels for all three. This result is better than the detection error of the large-area cross sign currently used in Chinese satellite geometric calibration.
According to the experimental results, point source target images have excellent geometric properties. They can be accurately identified in remote sensing images and positioned with high precision, providing subpixel image coordinates of GCPs for geometric calibration. The cross and circular signs commonly used for Chinese satellites measure 42 × 42 m and 15 × 15 m, respectively; in contrast, the reflective point source target used in this paper covers less than half a square meter and is relatively easy to transport and deploy. If point source targets are used as GCPs for space sensor geometric calibration, they can simplify the design and implementation of the calibration field and provide a more accurate data source, enabling automatic sensor geometric calibration. Combined with the radiometric calibration method using point source targets, an automatic joint geometric and radiometric calibration test based on point source targets can be carried out.

Author Contributions

K.L. conceived and designed the experiments, developed the program, and wrote the paper. Y.Z. and Z.Z. analyzed the results and revised the manuscript. Y.Y. conceived of the paper and revised the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (Grant Nos. 41671409 and 41501482).

Acknowledgments

The support of the National Natural Science Foundation of China (Grant Nos. 41671409 and 41501482) is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.A.; Tyukavina, A.; Thau, D.; Stehman, S.V.; Goetz, S.J.; Loveland, T.R.; et al. High-resolution global maps of 21st-century forest cover change. Science 2013, 342, 850–853. [Google Scholar] [CrossRef] [PubMed]
  2. Belward, A.S.; Skøien, J.O. Who launched what, when and why; trends in global land-cover observation capacity from civilian earth observation satellites. ISPRS J. Photogramm. 2015, 103, 115–128. [Google Scholar] [CrossRef]
  3. Dowman, I.; Reuter, H.I. Global geospatial data from Earth observation: Status and issues. Int. J. Digit. Earth 2017, 10, 328–341. [Google Scholar] [CrossRef]
  4. Wang, J.; Wang, R.; Hu, X.; Su, Z. The on-orbit calibration of geometric parameters of the Tian-Hui 1 (TH-1) satellite. ISPRS J. Photogramm. 2017, 124, 144–151. [Google Scholar] [CrossRef]
  5. Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G. Radiometric and geometric evaluation of GeoEye-1, WorldView-2 and Pléiades-1A stereo images for 3D information extraction. ISPRS J. Photogramm. 2015, 100, 35–47. [Google Scholar] [CrossRef]
  6. Gascon, F.; Bouzinac, C.; Thépaut, O.; Jung, M.; Francesconi, B.; Louis, J.; Lonjou, V.; Lafrance, B.; Massera, S.; Gaudel-Vacaresse, A.; et al. Copernicus Sentinel-2A calibration and products validation status. Remote Sens. 2017, 9, 584. [Google Scholar] [CrossRef]
  7. Montanaro, M.; Lunsford, A.; Tesfaye, Z.; Wenny, B.; Reuter, D. Radiometric calibration methodology of the Landsat 8 thermal infrared sensor. Remote Sens. 2014, 6, 8803–8821. [Google Scholar] [CrossRef]
  8. Guanter, L.; Kaufmann, H.; Segl, K.; Foerster, S.; Rogass, C.; Chabrillat, S.; Kuester, T.; Hollstein, A.; Rossner, G.; Chlebek, C.; et al. The EnMAP spaceborne imaging spectroscopy mission for earth observation. Remote Sens. 2015, 7, 8830–8857. [Google Scholar] [CrossRef] [Green Version]
  9. Storey, J.; Choate, M.; Lee, K. Landsat 8 Operational Land Imager on-orbit geometric calibration and performance. Remote Sens. 2014, 6, 11127–11152. [Google Scholar] [CrossRef]
  10. Takaku, J.; Tadono, T. PRISM on-orbit geometric calibration and DSM performance. IEEE Trans. Geosci. Remote 2009, 47, 4060–4073. [Google Scholar] [CrossRef]
  11. Radhadevi, P.V.; Solanki, S.S. In-flight geometric calibration of different cameras of IRS-P6 using a physical sensor model. Photogramm. Rec. 2008, 23, 69–89. [Google Scholar] [CrossRef]
  12. Jiang, Y.; Xu, K.; Zhao, R.; Zhang, G.; Cheng, K.; Zhou, P. Stitching images of dual-cameras onboard satellite. ISPRS J. Photogramm. 2017, 128, 274–286. [Google Scholar] [CrossRef]
  13. Yang, B.; Wang, M.; Xu, W.; Li, D.; Gong, J.; Pi, Y. Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images. ISPRS J. Photogramm. 2017, 134, 1–14. [Google Scholar] [CrossRef]
  14. Mulawa, D. On-orbit geometric calibration of the OrbView-3 high resolution imaging satellite. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 1–6. [Google Scholar]
  15. Leprince, S.; Barbot, S.; Ayoub, F.; Avouac, J.P. Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements. IEEE Trans. Geosci. Remote 2007, 45, 1529–1558. [Google Scholar] [CrossRef]
  16. Jiang, Y.H.; Zhang, G.; Tang, X.M.; Li, D.; Huang, W.C.; Pan, H.B. Geometric calibration and accuracy assessment of ZiYuan-3 multispectral images. IEEE Trans. Geosci. Remote 2014, 52, 4161–4172. [Google Scholar] [CrossRef]
  17. Si, X.L.; Zhang, L.M.; Fu, X.K.; Zhu, X.Y.; Li, X.; Dou, X.H.; Yang, B.Y.; Wang, B.Y. Research of Satellite On-Orbit Geometric Calibration Method Based on Artificial Signs. J. Atmos. Environ. Opt. 2014, 9, 149–158. [Google Scholar] [CrossRef]
  18. Zhang, Y.; Zheng, M.; Xiong, J.; Lu, Y.; Xiong, X. On-orbit geometric calibration of ZY-3 three-line array imagery with multistrip data sets. IEEE Trans. Geosci. Remote 2014, 52, 224–234. [Google Scholar] [CrossRef]
  19. Cao, J.; Yuan, X.; Gong, J. In-orbit geometric calibration and validation of ZY-3 three-line cameras based on CCD-detector look angles. Photogramm. Rec. 2015, 30, 211–226. [Google Scholar] [CrossRef]
  20. Zhou, G.; Li, R. Accuracy evaluation of ground points from IKONOS high-resolution satellite imagery. Photogramm. Eng. Remote Sens. 2000, 66, 1103–1112. [Google Scholar]
  21. Tang, X.; Zhang, G.; Zhu, X.; Pan, H.; Jiang, Y.; Zhou, P.; Wang, X. Triple linear-array image geometry model of ZiYuan-3 surveying satellite and its validation. Int. J. Image Data Fusion 2013, 4, 33–51. [Google Scholar] [CrossRef]
  22. Chander, G.; Helder, D.L.; Markham, B.L.; Dewald, J.D.; Kaita, E.; Thome, K.J.; Micijevic, E.; Ruggles, T.A. Landsat-5 TM reflective-band absolute radiometric calibration. IEEE Trans. Geosci. Remote 2004, 42, 2747–2760. [Google Scholar] [CrossRef]
  23. Yoshida, M.; Murakami, H.; Mitomi, Y.; Hori, M.; Thome, K.J.; Clark, D.K.; Fukushima, H. Vicarious calibration of GLI by ground observation data. IEEE Trans. Geosci. Remote 2005, 43, 2167–2176. [Google Scholar] [CrossRef]
  24. Gao, H.L.; Gu, X.F.; Yu, T.; Gong, H.; Li, J.G.; Li, X.Y. HJ-1A HSI on-orbit radiometric calibration and validation research. Sci. China Technol. Sci. 2010, 53, 3119–3128. [Google Scholar] [CrossRef]
  25. Barsi, J.A.; Schott, J.R.; Hook, S.J.; Raqueno, N.G.; Markham, B.L.; Radocinski, R.G. Landsat-8 thermal infrared sensor (TIRS) vicarious radiometric calibration. Remote Sens. 2014, 6, 11607–11626. [Google Scholar] [CrossRef]
  26. Czapla-Myers, J.; McCorkel, J.; Anderson, N.; Thome, K.; Biggar, S.; Helder, D.; Aaron, D.; Leigh, L.; Mishra, N. The ground-based absolute radiometric calibration of Landsat 8 OLI. Remote Sens. 2015, 7, 600–626. [Google Scholar] [CrossRef]
  27. Czapla-Myers, J.; Ong, L.; Thome, K.; McCorkel, J. Validation of EO-1 Hyperion and Advanced Land Imager Using the Radiometric Calibration Test Site at Railroad Valley, Nevada. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 816–826. [Google Scholar] [CrossRef]
  28. Schiller, S.J.; Silny, J. The SPecular Array Radiometric Calibration (SPARC) method: A new approach for absolute vicarious calibration in the solar reflective spectrum. In Proceedings of the Remote Sensing System Engineering III, 78130E, San Diego, CA, USA, 26 August 2010; Volume 7813. [Google Scholar]
  29. Xu, W.W.; Zhang, L.M.; Chen, H.Y.; Li, X.; Yang, B.Y.; Wang, J.X. In-Flight Radiometric Calibration of High Resolution Optical Satellite Sensor Using Reflected Point Sources. Acta Opt. Sin. 2017, 37, 340–347. [Google Scholar] [CrossRef]
  30. Rauchmiller, R.F.; Schowengerdt, R.A. Measurement of the Landsat Thematic Mapper modulation transfer function using an array of point sources. Opt. Eng. 1988, 27, 274334. [Google Scholar] [CrossRef]
  31. Robinet, F.; Leger, D.; Cerbelaud, H.; Lafont, S. Obtaining the MTF of a CCD imaging system using an array of point sources: Evaluation of performances. In Proceedings of the IGARSS ’91 Remote Sensing: Global Monitoring for Earth Management, Espoo, Finland, 3–6 June 1991; pp. 1357–1361. [Google Scholar]
  32. Leger, D.; Duffaut, J.; Robinet, F. MTF measurement using spotlight. In Proceedings of the IGARSS ’94—1994 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 8–12 August 1994; pp. 2010–2012. [Google Scholar]
  33. Rangaswamy, M.K. Quickbird II: Two-Dimensional On-Orbit Modulation Transfer Function Analysis Using Convex Mirror Array. Ph.D. Thesis, Electrical Engineering Department, South Dakota State University, Brookings, SD, USA, 2003. [Google Scholar]
  34. Xue, F.; Blu, T. A novel SURE-based criterion for parametric PSF estimation. IEEE Trans. Image Process. 2015, 24, 595–607. [Google Scholar] [CrossRef] [PubMed]
  35. Otsuzumi, K.; Ishihara, Y. PSF Estimation for Restoration of Zoom-Blurred Endoscope Images. J. Signal Process. 2016, 20, 213–216. [Google Scholar] [CrossRef] [Green Version]
  36. Huang, C.; Townshend, J.R.; Liang, S.; Kalluri, S.N.; DeFries, R.S. Impact of sensor’s point spread function on land cover characterization: Assessment and deconvolution. Remote Sens. Environ. 2002, 80, 203–212. [Google Scholar] [CrossRef]
  37. Storey, J.C. Landsat 7 on-orbit modulation transfer function estimation. In Proceedings of the Sensors, Systems, and Next-Generation Satellites V, Toulouse, France, 12 December 2001; Volume 4540. [Google Scholar]
  38. Yang, L.; Ren, J. Remote sensing image restoration using estimated point spread function. In Proceedings of the 2010 International Conference on Information, Networking and Automation (ICINA), Kunming, China, 18–19 October 2010; pp. 41–48. [Google Scholar]
  39. Fan, C.; Li, G.D.; Wu, C.Y.; Li, C.; Zhong, L. High Accurate Estimation of Point Spread Function Based on Improved Reconstruction of Slant Edge. Acta Geod. Cartogr. Sin. 2015, 44, 1219–1226. [Google Scholar] [CrossRef]
  40. Gao, H.T.; Liu, W.; He, H.Y.; Wu, M. Static PSF of TDI-CCD Measurement with multi-phase-knife Method. Opto-Electron. Eng. 2016, 43, 13–18. [Google Scholar] [CrossRef]
  41. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. 1986, 6, 679–698. [Google Scholar] [CrossRef]
Figure 1. Reflective point source.
Figure 2. Degraded images of point sources when phases of point sources are (0, 0) (a) and (0.6, 0.8) (b).
Figure 3. Different distribution of pixel grayscale caused by sampling phase.
Figure 4. Experimental data: (a) Satellite imagery; (b) point source target image; (c) large area grayscale target image; and (d) image patch containing a point source target.
Figure 5. (a) Edge Spread Function (ESF) raw sample values, average values within each window, and smoothed curves in the horizontal direction and (b) ESF raw sample values, average values within each window, and smoothed curves in the vertical direction.
Figure 6. (a) Edge Line Spread Function (LSF) curve and fitted Gaussian curve in the horizontal direction and (b) Edge LSF curve and fitted Gaussian curve in the vertical direction.
Figure 7. Number of point source image candidates retained in the 100 image patches.
Figure 8. (a) Point source target image, (b) nonpoint source target image, and (c) common matching template of (a,b).
Figure 9. Curves of the feature parameters as a function of candidate point source image number. (a–d) correspond to σ̂ and ξ̂, b̂, (K̂ + b̂)/b̂, and ε_min, respectively.
Table 1. Maximum, minimum, and average values of the degree of similarity for point source and nonpoint source images.

Category                 Maximum   Minimum   Average
Point source image       0.93      0.81      0.86
Nonpoint source image    0.87      0.80      0.83
Table 2. Elimination of false matches experiment results (error rate by feature parameter).

Error Type   σ̂       ξ̂       σ̂, ξ̂    b̂    (K̂+b̂)/b̂   ε_min
I            0        0        0        0     0            0
II           13.9%    16.7%    0        0     0            13.9%
III          9.6%     11.5%    0        0     0            9.6%
Table 3. Centroid coordinates of the point source target images solved by the different methods (unit: pixels). I: template matching; II: weighted centroid; III: Gaussian fitting; x/y denote the two image coordinates.

     M1                  M2                  M3                  M4
     I     II    III     I     II    III     I     II    III     I     II    III
x    9.29  9.25  9.30   17.72 17.79 17.67   25.48 25.46 25.46   34.00 33.99 33.98
y   19.60 19.62 19.59   19.64 19.66 19.62   19.74 19.77 19.72   19.80 19.83 19.75

     M5                  M6                  M7                  M8
x    9.37  9.35  9.38   17.25 17.23 17.28   25.71 25.78 25.68   33.90 33.95 33.85
y   27.96 27.97 27.91   28.00 27.99 27.97   28.01 27.99 27.98   28.03 28.00 28.02

     M9                  M10                 M11                 M12
x    9.17  9.10  9.25   17.49 17.58 17.56   25.82 25.86 25.76   33.89 33.94 33.86
y   36.07 36.01 36.07   36.07 36.01 36.09   36.08 36.02 36.11   36.11 36.04 36.14

     M13                 M14                 M15                 M16
x    9.10  9.05  9.17   17.18 17.12 17.24   25.60 25.64 25.61   33.63 33.67 33.61
y   44.22 44.16 44.26   44.35 44.27 44.35   44.33 44.27 44.33   44.40 44.36 44.39
Table 4. Coordinate error of the different methods for solving the centroid of the point source target images, i.e., the deviation of each method from the mean of the three methods (unit: pixels). I: template matching; II: weighted centroid; III: Gaussian fitting.

     M1                     M2                     M3                     M4
     I      II     III      I      II     III      I      II     III      I      II     III
x    0.010 −0.030  0.020   −0.007  0.063 −0.057    0.013 −0.007 −0.007    0.010  0.000 −0.010
y   −0.003  0.017 −0.013    0.000  0.020 −0.020   −0.003  0.027 −0.023    0.007  0.037 −0.043

     M5                     M6                     M7                     M8
x    0.003 −0.017  0.013   −0.003 −0.023  0.027   −0.013  0.057 −0.043    0.000  0.050 −0.050
y    0.013  0.023 −0.037    0.013  0.003 −0.017    0.017 −0.003 −0.013    0.013 −0.017  0.003

     M9                     M10                    M11                    M12
x   −0.003 −0.073  0.077   −0.053  0.037  0.017    0.007  0.047 −0.053   −0.007  0.043 −0.037
y    0.020 −0.040  0.020    0.013 −0.047  0.033    0.010 −0.050  0.040    0.013 −0.057  0.043

     M13                    M14                    M15                    M16
x   −0.007 −0.057  0.063    0.000 −0.060  0.060   −0.017  0.023 −0.007   −0.007  0.033 −0.027
y    0.007 −0.053  0.047    0.027 −0.053  0.027    0.020 −0.040  0.020    0.017 −0.023  0.007
