Article

Real-Time Runway Detection for Infrared Aerial Image Using Synthetic Vision and an ROI Based Level Set Method

1 Key Lab of Enterprise Informationization and Internet of Things of Sichuan Province, Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Zigong 643000, China
2 Department of Computing Science, University of Alberta, Edmonton, AB T6G 2H1, Canada
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Remote Sens. 2018, 10(10), 1544; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10101544
Submission received: 13 August 2018 / Revised: 20 September 2018 / Accepted: 24 September 2018 / Published: 25 September 2018
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

We present a new method for real-time runway detection that combines synthetic vision with an ROI (Region of Interest) based level set method. A virtual runway generated by synthetic vision provides a rough region of the infrared runway. A three-thresholding segmentation, generalizing Otsu’s binarization method, is proposed to extract a runway subset from this region, which is used to construct an initial level set function. The virtual runway also gives a reference area for the actual runway in the infrared image, which helps us design a stopping criterion for the level set method. To meet real-time processing requirements, an ROI based level set evolution framework is implemented. Experimental results show that the proposed algorithm is efficient and accurate.

1. Introduction

Low visibility is a primary cause of flight accidents and disruptions in flight scheduling [1,2]. With a limited visual range under low-visibility conditions, pilots are more likely to make errors of judgement. The instrument landing system (ILS) [3,4] is widely used to assist pilots’ visual contact with a runway during landing, taxiing and takeoff. However, it is not suitable for all airports and runways due to its demanding installation criteria. Under these circumstances, an enhanced flight vision system (EFVS) and a synthetic vision system (SVS) [5] complement the limits of natural vision. An EFVS is a system that provides pilots with an image, typically on a head-up display (HUD), from multiple sensors such as charge-coupled device (CCD) cameras, infrared cameras or radar. An SVS is a computer-mediated reality system that uses 3D scene topography to provide pilots with a rendered image. This image simulates what pilots would actually see at a given position and pose, derived from the airborne global positioning system (GPS) and attitude-heading reference system (AHRS). In this paper, we call such a rendered image the “virtual image.” An EFVS and an SVS may be combined to fuse information and significantly improve pilots’ visual perception [6].
Runway incursion is an important aviation safety concern [7,8], and runway detection can help mitigate the associated risks. Runways are regarded as a general landmark during landing and take-off. In the published literature, runway detection methods fall into two categories: line segment based [9,10,11,12] and feature learning based detection. In remote sensing images, straight line segments are used as shape primitives to extract runways. More concretely, image edges are first detected, followed by line segment extraction using, for example, the Hough transform, the line segment detector (LSD) [13,14] or EDLines [15]. Parallelism is then imposed upon the line segments to determine candidate runways. Geometric shape is the distinguishing trait that existing methods use, and this approach works well for validation in remote sensing images. However, under low visibility in infrared images, runway edges often cannot be extracted. Furthermore, in the approach and landing phase, runways cannot be distinguished from line segments alone, since highways, buildings and fields produce similar segments. As a result, it is difficult to exclude false detections from runway candidates. Feature learning based airport detection algorithms have been proposed in [16,17,18,19,20,21], in which sets of features were extracted and then AdaBoost or an unsupervised learning algorithm was used to identify runways. Convolutional neural networks [22,23] have strong feature representation power, which benefits object detection. However, feature learning methods are time-consuming and only coarsely localize runways, which is unsuitable for real-time processing. They also require a vast sample set, which limits their applicability to runway detection in aerial images under low visibility.
An EFVS can integrate data acquired by weather-penetrating sensors to overcome the problems incurred under adverse weather conditions. In combination with line detectors and multi-sensor data fusion, runways can be recognized [24,25]. An SVS can generate virtual images that roughly match the corresponding IR (infrared) images. In comparison with EFVS based methods, an SVS provides prior knowledge for runway extraction. Thus, the location of a runway in a virtual image, namely the virtual runway, can be treated as prior knowledge. After segmentation, we can acquire a subset of the IR runway, which serves as an initial contour. The level set method, introduced by Osher et al. [26], can detect weak edges; this is exploited in our real-time runway detection framework. Numerical implementations of level set methods are complicated and heavily iterative. Li et al. [27] introduced a new functional that constrains the level set function to remain close to a signed distance function. As a result, re-initialization can be eliminated, which reduces the computational load. Meanwhile, some researchers have focused on fast marching level set methods [28,29], which update only points close to the curve, namely a narrow band, at every time step to decrease the computational cost of the standard level set method. This greatly simplifies the iteration process.
Repeated experiments on infrared runway images show that touch zones lie so close to runways (see Figure 1) that it may be difficult to separate the two areas without an edge prior. Li’s method [27] defines an edge indicator function that makes the dynamic curves move toward object boundaries; this makes it possible to differentiate touch zones from runways. Further discussion of this issue is presented in Section 4. Using a level set method to detect runways requires three issues to be addressed: the initial contour, real-time processing and a stopping criterion. In this paper, a virtual runway induced by synthetic vision provides both an initial contour and a stopping criterion for the level set method. Incorporating synthetic vision into level set methods, we propose a real-time runway detection method. The main ideas include:
(1) Locating a virtual runway and using it to find a subset of the runway in infrared images. We generalize Otsu’s thresholding method to trichotomize the region overlapping the virtual runway, one class of which is used as the initial contour in a level set method.
(2) An ROI based level set method, which makes it possible to detect a runway in real time.
(3) The shape of the virtual runway, which provides a rough area of the actual runway and thereby imposes a stopping strategy for the level set evolution.

2. Generation of Virtual Runway

Generation of the virtual runway is shown in Figure 2. First, high-definition texture maps are overlaid onto a digital elevation model (DEM) [30,31,32]. An architectural layout is used to create a high-accuracy runway model with professional 3D modeling tools, such as Creator. Subsequently, the terrain and runway local coordinate systems are transformed into a unified coordinate framework, and the terrain model and runway model are merged into an enhanced 3D terrain model. In addition, GPS provides longitude, latitude and altitude to indicate the position of the onboard cameras, and the AHRS provides heading, pitch and roll angles to indicate their pose. Finally, given the position and pose of the onboard cameras, three-dimensional engineering tools use the six-degrees-of-freedom data and the terrain database to generate a virtual image. The virtual image projects objects from the enhanced 3D terrain model as they should appear, in the correct perspective, from the pilot’s position and orientation. The setup renders the enhanced 3D terrain model in response to incoming navigation sensor data. Since the point coordinates of the 3D runway model and the camera position and pose are known, the location of the virtual runway can be determined. This flow is charted in Figure 2a; for a more visual explanation, the procedure is also represented diagrammatically in Figure 2b, in which the region bounded by the green quadrilateral is the virtual runway.
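To make the final projection step concrete, the following minimal Python sketch projects the four 3D runway corner points into the image plane using a pinhole model built from the GPS position and AHRS attitude. The function names, the yaw-pitch-roll rotation order and the intrinsic matrix K are our illustrative assumptions, not the authors' rendering pipeline, which rasterizes the entire enhanced 3D terrain model:

```python
import numpy as np

def rotation_world_to_camera(heading, pitch, roll):
    """World-to-camera rotation from AHRS angles (radians), yaw-pitch-roll order."""
    ch, sh = np.cos(heading), np.sin(heading)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ch, -sh, 0.0], [sh, ch, 0.0], [0.0, 0.0, 1.0]])  # heading (yaw)
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])  # pitch
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])  # roll
    return (Rz @ Ry @ Rx).T  # inverse of the camera-to-world orientation

def project_runway(corners_world, cam_pos, R, K):
    """Project the four 3D runway corners (4x3 array) to pixel coordinates (4x2)."""
    pts_cam = (R @ (corners_world - cam_pos).T).T  # world frame -> camera frame
    uv = (K @ pts_cam.T).T                         # apply pinhole intrinsics K (3x3)
    return uv[:, :2] / uv[:, 2:3]                  # perspective divide
```

The returned quadrilateral corresponds to the green region bounding the virtual runway in Figure 2b.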

3. Real-Time Runway Detection Based on ROI Level Set Method

Li et al. [27] defined an improved level set method that skips re-initialization. The evolution equation of the level set function $\phi$ proposed in [27] is given by:
$$\frac{\partial \phi}{\partial t} = \mu\left[\Delta\phi - \operatorname{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right)\right] + \lambda\,\delta(\phi)\operatorname{div}\!\left(g\,\frac{\nabla\phi}{|\nabla\phi|}\right) + \nu\, g\,\delta(\phi), \qquad \phi^{n+1} = \phi^{n} + \tau\,\frac{\partial\phi}{\partial t}, \quad n = 0, 1, \ldots,$$
where μ > 0 is a penalizing coefficient to prevent a level set function from deviating from a signed distance function, λ > 0 and ν are parameters controlling the shape and size of the evolved curves, τ > 0 is an iteration step and g is an edge indicator function, given by:
$$g = \frac{1}{1 + \left|\nabla G_{\sigma} * I\right|^{2}},$$
where $G_{\sigma} * I$ is the convolution of the image $I$ with a Gaussian kernel $G_{\sigma}$. The superscript $n$ is the iteration number, and $\phi^{0}$ is the initial level set function. Empirically, if the initial contour defined by $\phi^{0}$ lies outside the object of interest, $\nu$ takes a positive value; otherwise, a negative value is used to accelerate curve propagation towards the actual contour of the object. Using level set methods, there are three challenges in extracting runways from infrared images: (i) a proper initial contour is necessary to detect a runway while excluding touch zones and other connected regions; (ii) real-time processing is essential; and (iii) a stopping criterion is needed to identify the exact runway. If a subset of the runway can be extracted, its boundary can naturally serve as an initial contour. The histogram of the virtual runway bounded region (highlighted in green) in the infrared image is shown in Figure 3a, in which there are three or four peaks, neglecting the effect of noise. Therefore, the traditional Otsu method cannot discriminate the touch zone (see the circled region at the bottom of Figure 3b) from the runway. However, the method described in Section 3.1 properly partitions the runway subset (in red) from the virtual runway bounded region.
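The edge indicator $g$ of Equation (2) is inexpensive to compute. A minimal OpenCV sketch follows (our own helper, not the authors' original implementation; $\sigma = 1.5$ matches Table 1):

```python
import cv2
import numpy as np

def edge_indicator(img, sigma=1.5):
    """g = 1 / (1 + |grad(G_sigma * I)|^2); near 0 on strong edges, near 1 in flat areas."""
    smoothed = cv2.GaussianBlur(img.astype(np.float64), (0, 0), sigma)  # G_sigma * I
    gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0)  # horizontal derivative
    gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1)  # vertical derivative
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)
```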

3.1. Three-Thresholding Segmentation for Initial Contour

The runway structure in Figure 1 implies that the choice of the initial level set function is a crucial factor in the accuracy and efficiency of runway detection. The virtual runway roughly outlines the region of the IR runway. Specifically, the area of the IR image covered by the virtual runway can be divided into three parts according to gray value differences: background, touch zone and runway subregion. The brightest part is considered a subset of the IR runway; hence, the contour of this part serves as the initial contour in the level set method.
The classic Otsu method is practical and efficient for automatic thresholding [33,34]. In Otsu’s principle [35], the main goal is to maximize the inter-class variance and minimize the intra-class variance of the classification. In this paper, the region of the IR image corresponding to the virtual runway is regarded as an ROI, which can be represented as the convex hull of the virtual runway vertices $P_i(x, y)$, $i = 1, 2, 3, 4$, with the mathematical formulation:
$$\mathrm{ROI} = \left\{ \sum_{i=1}^{4} \alpha_i P_i \;\middle|\; \alpha_i \geq 0,\ i = 1, \ldots, 4,\ \sum_{i=1}^{4} \alpha_i = 1 \right\}.$$
Let the gray levels of the ROI be represented as $[g_{\min}, \ldots, i, \ldots, g_{\max}]$, where the subscripts $\min$ and $\max$ denote the minimum and maximum gray levels of the ROI, respectively. The number of pixels at level $i$ is denoted by $n_i$, and the total number of pixels is $N = \sum_{i=g_{\min}}^{g_{\max}} n_i$.
The gray level histogram is normalized as:
$$p_i = \frac{n_i}{N}, \qquad p_i \geq 0, \qquad \sum_{i=g_{\min}}^{g_{\max}} p_i = 1.$$
Assume that the three classes of the ROI are the background, denoted by $C_0$, the touch zone, denoted by $C_1$, and the runway subset, denoted by $C_2$. The pixels of the ROI can be trichotomized by two thresholds $k_1$ and $k_2$, with $g_{\min} \leq k_1 < k_2 \leq g_{\max}$. By analogy with the binary thresholding method [35], we call this process the three-thresholding method. The probabilities of class occurrence are given by:
$$w_0 = \Pr(C_0) = \sum_{i=g_{\min}}^{k_1} p_i = \omega(k_1),$$
$$w_1 = \Pr(C_1) = \sum_{i=k_1+1}^{k_2} p_i = \omega(k_2) - \omega(k_1),$$
$$w_2 = \Pr(C_2) = \sum_{i=k_2+1}^{g_{\max}} p_i = 1 - \omega(k_2).$$
Hence, the class mean levels are given by:
$$u_0 = \sum_{i=g_{\min}}^{k_1} i\,\Pr(i \mid C_0) = \sum_{i=g_{\min}}^{k_1} \frac{i\,p_i}{w_0} = \frac{\mu(k_1)}{\omega(k_1)},$$
where $\mu(k_1) = \sum_{i=g_{\min}}^{k_1} i\,p_i$, and
$$u_1 = \sum_{i=k_1+1}^{k_2} i\,\Pr(i \mid C_1) = \sum_{i=k_1+1}^{k_2} \frac{i\,p_i}{w_1} = \frac{\mu(k_2) - \mu(k_1)}{\omega(k_2) - \omega(k_1)},$$
$$u_2 = \sum_{i=k_2+1}^{g_{\max}} i\,\Pr(i \mid C_2) = \sum_{i=k_2+1}^{g_{\max}} \frac{i\,p_i}{w_2} = \frac{\mu_T - \mu(k_2)}{1 - \omega(k_2)},$$
where
$$\mu_T = \mu(g_{\max}) = \sum_{i=g_{\min}}^{g_{\max}} i\,p_i.$$
Based on Equations (5)–(11), the following relations always hold for any choice of $k_1$ and $k_2$:
$$\sum_{i=0}^{2} w_i = 1, \qquad \sum_{i=0}^{2} w_i u_i = \mu_T.$$
The class variances are represented as:
$$\sigma_0^2 = \sum_{i=g_{\min}}^{k_1} (i - u_0)^2 \Pr(i \mid C_0) = \sum_{i=g_{\min}}^{k_1} \frac{(i - u_0)^2 p_i}{w_0},$$
$$\sigma_1^2 = \sum_{i=k_1+1}^{k_2} (i - u_1)^2 \Pr(i \mid C_1) = \sum_{i=k_1+1}^{k_2} \frac{(i - u_1)^2 p_i}{w_1},$$
$$\sigma_2^2 = \sum_{i=k_2+1}^{g_{\max}} (i - u_2)^2 \Pr(i \mid C_2) = \sum_{i=k_2+1}^{g_{\max}} \frac{(i - u_2)^2 p_i}{w_2}.$$
In order to measure class separability, three class variances are introduced [36]. The within-class variance of levels is given by:
$$\sigma_W^2 = w_0\sigma_0^2 + w_1\sigma_1^2 + w_2\sigma_2^2.$$
The between-class variance of levels is given by:
$$\sigma_B^2 = w_0(u_0 - \mu_T)^2 + w_1(u_1 - \mu_T)^2 + w_2(u_2 - \mu_T)^2.$$
The total variance of levels is given by:
$$\sigma_T^2 = \sum_{i=g_{\min}}^{g_{\max}} (i - \mu_T)^2 p_i.$$
Thus, the following discriminant criteria are commonly used:
$$\lambda = \frac{\sigma_B^2}{\sigma_W^2}, \qquad \kappa = \frac{\sigma_T^2}{\sigma_W^2}, \qquad \eta = \frac{\sigma_B^2}{\sigma_T^2}.$$
In order to discuss the relations of the indices mentioned above, we draw the following conclusion:
Theorem 1.
$$\sigma_T^2 = \sigma_W^2 + \sigma_B^2, \quad \text{for any } k_1, k_2.$$
Proof. 
Substituting Equations (5)–(7), (13)–(15) into Equation (16) and expanding it, we obtain:
$$\sigma_W^2 = \sum_{i=g_{\min}}^{k_1} (i - u_0)^2 p_i + \sum_{i=k_1+1}^{k_2} (i - u_1)^2 p_i + \sum_{i=k_2+1}^{g_{\max}} (i - u_2)^2 p_i = \sum_{i=g_{\min}}^{g_{\max}} i^2 p_i - \frac{\mu^2(k_1)}{\omega(k_1)} - \frac{\left(\mu(k_2) - \mu(k_1)\right)^2}{\omega(k_2) - \omega(k_1)} - \frac{\left(\mu_T - \mu(k_2)\right)^2}{1 - \omega(k_2)}.$$
Similarly, substituting Equations (5)–(10) into Equation (17), we obtain:
$$\sigma_B^2 = \frac{\left(\mu(k_1) - \mu_T\,\omega(k_1)\right)^2}{\omega(k_1)} + \frac{\left(\mu(k_2) - \mu(k_1) - \mu_T\left(\omega(k_2) - \omega(k_1)\right)\right)^2}{\omega(k_2) - \omega(k_1)} + \frac{\left(\mu_T - \mu(k_2) - \mu_T\left(1 - \omega(k_2)\right)\right)^2}{1 - \omega(k_2)}.$$
Adding Equations (20) and (21) and simplifying, we get:
$$\sigma_W^2 + \sigma_B^2 = \sum_{i=g_{\min}}^{g_{\max}} i^2 p_i - \mu_T^2.$$
On the other hand, expanding Equation (18) in connection with Equations (4) and (11), gives us:
$$\sigma_T^2 = \sum_{i=g_{\min}}^{g_{\max}} i^2 p_i - 2\mu_T \sum_{i=g_{\min}}^{g_{\max}} i\,p_i + \mu_T^2 \sum_{i=g_{\min}}^{g_{\max}} p_i = \sum_{i=g_{\min}}^{g_{\max}} i^2 p_i - \mu_T^2.$$
Therefore, comparing the right-hand sides of Equations (22) and (23), we have $\sigma_T^2 = \sigma_W^2 + \sigma_B^2$, as desired. ☐
Obviously, $\kappa$ and $\eta$ in Equation (19) can be recast in terms of $\lambda$:
$$\kappa = 1 + \lambda, \qquad \eta = 1 - \frac{1}{\lambda + 1}.$$
Therefore, the three discriminant criteria are equivalent to each other as follows:
$$\max \lambda(k_1, k_2) \Longleftrightarrow \max \kappa(k_1, k_2) \Longleftrightarrow \max \eta(k_1, k_2).$$
Without loss of generality, the optimal values of $k_1$ and $k_2$ can be chosen as:
$$[k_1, k_2] = \arg\max \{\lambda(k_1, k_2)\}.$$
Thus, the pixels of the ROI are trichotomized into three classes:
$$g(x, y) = \begin{cases} 255, & f(x, y) \geq k_2 & (C_2), \\ 128, & k_1 \leq f(x, y) < k_2 & (C_1), \\ 0, & f(x, y) < k_1 & (C_0), \end{cases} \qquad (x, y) \in \mathrm{ROI},$$
where f is the grayscale intensity IR image and g the segmentation output.
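By Theorem 1 below, $\sigma_T^2$ is fixed, so maximizing $\sigma_B^2$ is equivalent to maximizing $\lambda$, which allows a simple exhaustive search over $(k_1, k_2)$. The following NumPy sketch is our illustration written from the formulas above, assuming an 8-bit ROI; it is not the authors' released code:

```python
import numpy as np

def three_threshold(roi_pixels):
    """Exhaustive search for (k1, k2) maximizing the between-class variance
    sigma_B^2 of three classes; assumes an 8-bit (uint8) ROI."""
    gmin, gmax = int(roi_pixels.min()), int(roi_pixels.max())
    hist = np.bincount(roi_pixels.ravel(), minlength=256)[gmin:gmax + 1]
    p = hist / hist.sum()                    # normalized histogram, Equation (4)
    levels = np.arange(gmin, gmax + 1)
    omega = np.cumsum(p)                     # cumulative probability omega(k)
    mu = np.cumsum(levels * p)               # cumulative mean mass mu(k)
    mu_T = mu[-1]
    best, k1k2 = -1.0, (gmin, gmin + 1)
    for i in range(len(levels) - 2):                 # candidate k1
        for j in range(i + 1, len(levels) - 1):      # candidate k2 > k1
            w0, w1, w2 = omega[i], omega[j] - omega[i], 1.0 - omega[j]
            if min(w0, w1, w2) <= 0.0:
                continue                             # skip empty classes
            # between-class variance, expanded as in the proof of Theorem 1
            sb = ((mu[i] - mu_T * w0) ** 2 / w0
                  + (mu[j] - mu[i] - mu_T * w1) ** 2 / w1
                  + (mu_T - mu[j] - mu_T * w2) ** 2 / w2)
            if sb > best:
                best, k1k2 = sb, (int(levels[i]), int(levels[j]))
    return k1k2

def trichotomize(f, k1, k2):
    """Map the ROI to the three classes of the segmentation equation above."""
    g = np.zeros_like(f, dtype=np.uint8)     # background C0 -> 0
    g[(f >= k1) & (f < k2)] = 128            # touch zone  C1 -> 128
    g[f >= k2] = 255                         # runway set  C2 -> 255
    return g
```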
We assume that the runway subset is the brightest of the three classes, namely $C_2$. Region boundaries are traced in the binary image containing class $C_2$, and the boundary enclosing the largest area is chosen as the initial contour for the level set method. Let the image domain be $\Omega = \{(x, y) \mid 1 \leq x \leq W,\ 1 \leq y \leq H\}$, and let the region bounded by the initial contour be $\Omega_0$. The initial level set function $\phi_0$ is then derived as:
$$\phi_0(x, y) = \begin{cases} -\rho, & (x, y) \in \Omega_0 \setminus \partial\Omega_0, \\ 0, & (x, y) \in \partial\Omega_0, \\ \rho, & (x, y) \in \Omega \setminus \Omega_0, \end{cases}$$
where $\rho > 0$ is a predefined constant and $\partial\Omega_0$ denotes the boundary of $\Omega_0$.
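Constructing $\phi_0$ from the segmentation is then mechanical. The sketch below (our helper names; the exact zero on the boundary is omitted and approximated by the sign change) keeps the largest connected component of class $C_2$ and assigns $\pm\rho$, with $\rho = 3$ as in Table 1:

```python
import cv2
import numpy as np

def initial_level_set(seg, rho=3.0):
    """phi0 = -rho inside the largest C2 component, +rho elsewhere."""
    c2 = (seg == 255).astype(np.uint8)                         # class C2 mask
    n, labels, stats, _ = cv2.connectedComponentsWithStats(c2)
    if n < 2:
        raise ValueError("no C2 region found in the ROI")
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # skip background label 0
    return np.where(labels == largest, -rho, rho).astype(np.float64)
```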

3.2. ROI Based Implementation for the Level Set Method

There is a product rule for the divergence operator [37]: if $\varphi$ is a scalar-valued function and $\mathbf{F}$ is a vector field, then:
$$\operatorname{div}(\varphi\,\mathbf{F}) = \nabla\varphi \cdot \mathbf{F} + \varphi\,\operatorname{div}(\mathbf{F}).$$
Applying this rule to Equation (1), the evolution can be recast as:
$$\frac{\partial \phi}{\partial t} = \mu\left[\Delta\phi - \operatorname{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right)\right] + \nu\,g\,\delta_{\varepsilon}(\phi) + \lambda\,\delta_{\varepsilon}(\phi)\left[\nabla g \cdot \frac{\nabla\phi}{|\nabla\phi|} + g\,\operatorname{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right)\right],$$
$$\phi^{n+1}(x, y) = \phi^{n}(x, y) + \tau\,\frac{\partial\phi}{\partial t}, \qquad (x, y) \in \Omega,\ n = 0, 1, \ldots$$
In numerical implementations, the $\delta(x)$ in Equation (1) is smoothed as:
$$\delta_{\varepsilon}(x) = \begin{cases} 0, & |x| > \varepsilon, \\ \dfrac{1}{2\varepsilon}\left[1 + \cos\!\left(\dfrac{\pi x}{\varepsilon}\right)\right], & |x| \leq \varepsilon. \end{cases}$$
For ease of description, denote $\Delta\phi \triangleq L$, $\frac{\nabla\phi}{|\nabla\phi|} \triangleq N$ and $\operatorname{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right) = \operatorname{div}(N) \triangleq K$. Note that, in Equation (28), the terms $g$ and $\nabla g$ do not vary during the iterative procedure, whereas $L$, $N$ and $K$ must be updated in every iteration. Ref. [38] observed that the front can be moved by updating the level set function at a small set of points in the neighborhood of the zero level set instead of at all grid points; hence, the narrow band evolution method was conceived. Unfortunately, identifying whether points fall in the narrow band and promptly retrieving pre-computed values such as $g$ and $\nabla g$ are challenging. Considering this issue, we propose a rectangular ROI to implement the level set method without re-initialization. Denoting the ROI in the $n$-th iteration by $\Omega_n \subseteq \Omega$ and the discretized level set function by $\phi_{i,j}^{n} = \phi(ih, jh, n\tau)$, where $h$ is the spacing of the grid on the domain $\Omega$ and $\tau$ is the time spacing, we can rewrite Equation (28) as follows:
$$\phi_{i,j}^{n+1} = \phi_{i,j}^{n} + \tau\left[\mu\left(L_{i,j}^{n} - K_{i,j}^{n}\right) + \nu\,g_{i,j}\,\delta_{\varepsilon}(\phi_{i,j}^{n}) + \lambda\,\delta_{\varepsilon}(\phi_{i,j}^{n})\left(\nabla g_{i,j} \cdot N_{i,j}^{n} + g_{i,j}\,K_{i,j}^{n}\right)\right], \quad (ih, jh) \in \Omega_n,$$
$$\phi_{i,j}^{0} = \phi(ih, jh, 0), \qquad (ih, jh) \in \Omega_0.$$
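For illustration, one explicit update step can be written with finite differences. This is our simplified full-grid reading of the scheme, using the parameter values of Table 1; the paper's implementation confines the update to the ROI $\Omega_n$:

```python
import numpy as np

def evolve_once(phi, g, mu=0.06, lam=4.0, nu=-4.0, tau=1.5, eps=1.5):
    """One explicit step of the recast evolution (defaults from Table 1).
    Full-grid version for clarity; the paper restricts updates to Omega_n."""
    tiny = 1e-10                                  # guards against |grad(phi)| = 0
    gy, gx = np.gradient(g)                       # grad(g); axis 0 is y (rows)
    phy, phx = np.gradient(phi)                   # grad(phi)
    mag = np.sqrt(phx ** 2 + phy ** 2) + tiny
    nx, ny = phx / mag, phy / mag                 # N = grad(phi) / |grad(phi)|
    K = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)    # curvature div(N)
    L = np.gradient(phx, axis=1) + np.gradient(phy, axis=0)  # Laplacian of phi
    # smoothed Dirac delta, nonzero only within eps of the zero level set
    delta = np.where(np.abs(phi) <= eps,
                     (1.0 + np.cos(np.pi * phi / eps)) / (2.0 * eps), 0.0)
    dphi_dt = (mu * (L - K)
               + nu * g * delta
               + lam * delta * (gx * nx + gy * ny + g * K))
    return phi + tau * dphi_dt
```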
After $n$ iterative evolutions, the region in which the object of interest lies is given by $A^n = \{(ih, jh) \mid \phi_{i,j}^{n} \leq 0\}$, whose minimal up-right bounding rectangle is:
$$B^n = \{(x, y) \mid x_{\min}^{n} \leq x \leq x_{\max}^{n},\ y_{\min}^{n} \leq y \leq y_{\max}^{n}\}.$$
Assume that the current ROI can be expanded by $\Delta x$ and $\Delta y$ in the $x$- and $y$-directions, respectively. The ROI for the following iteration, i.e., $\Omega_{n+1}$, is then determined by:
$$\Omega_{n+1} = \{(x, y) \mid \max(x_{\min}^{n} - \Delta x, 1) \leq x \leq \min(x_{\max}^{n} + \Delta x, W),\ \max(y_{\min}^{n} - \Delta y, 1) \leq y \leq \min(y_{\max}^{n} + \Delta y, H)\}.$$
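A sketch of this ROI update, with $\Delta x = \Delta y = 15$ as in Table 1 (our code; note that it uses 0-based array indexing rather than the paper's 1-based pixel coordinates):

```python
import numpy as np

def next_roi(phi, dx=15, dy=15):
    """Bounding rectangle of A_n = {phi <= 0}, expanded by (dx, dy) and clipped."""
    H, W = phi.shape
    ys, xs = np.nonzero(phi <= 0)                 # points enclosed by the front
    x0, x1 = max(xs.min() - dx, 0), min(xs.max() + dx, W - 1)
    y0, y1 = max(ys.min() - dy, 0), min(ys.max() + dy, H - 1)
    return x0, x1, y0, y1                         # Omega_{n+1} in pixel indices
```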

3.3. Stopping Criteria

It is difficult to guarantee that, when the computation stops, the final shape is exactly the desired runway. We designed two strategies. The first depends on the area of the region $A^n$: once this area is close to the area of the virtual runway, the computation halts. The second measures the variation between the last two level set functions: if all points on the front have stopped moving, the computation ends. Specifically, if the values of the last two level set functions satisfy:
$$\frac{\sum_{i,j} \left|\phi_{i,j}^{n} - \phi_{i,j}^{n-1}\right|}{|A^n|} \leq \alpha\,\tau, \qquad (ih, jh) \in A^n,$$
where $\alpha < 1$ is a tolerance parameter related to the time step $\tau$, the computation stops and $A^n$ is taken as the final runway.
In this paper, a reference area is available from the virtual runway; thus, we adopt the first strategy. More specifically, once the current area of $A^n$ approximates the area of the virtual runway, the iteration stops.
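A one-line check suffices for this strategy. The sketch below is ours, and the 5% tolerance is an illustrative choice, since the paper does not state a numerical threshold:

```python
import numpy as np

def should_stop(phi, virtual_area, tol=0.05):
    """First stopping strategy: halt once |A_n| is close to the virtual runway area."""
    area_n = np.count_nonzero(phi <= 0)           # |A_n|
    return abs(area_n - virtual_area) <= tol * virtual_area
```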

4. Analysis of Results and Comparisons

The algorithm was programmed on the Microsoft Visual Studio development platform (version 9.0, Redmond, WA, USA) with the OpenCV (Open Source Computer Vision) library. The choice of parameters depends on the objects being processed. The authors of [27] suggest $\tau\mu < \frac{1}{4}$ to maintain stable level set evolution. Commonly, $\rho$ can take any positive value, $\sigma \in [1.0, 4.0]$, and $\varepsilon$, $h$ and $\delta$ are, to some extent, independent of the images. Considering stability and time performance, and after multiple trials, the parameter values used for all experiments in this paper are listed in Table 1.

4.1. Analysis of Proposed Runway Detection

An overview of the proposed runway detection method is shown in Figure 4. The original infrared image is shown in Figure 4a. Based on Equation (2), with values quantized from [0, 1] to [0, 255], the edge indicator $g$ is presented in Figure 4b. The three-class segmentation produced by the proposed three-thresholding method is illustrated in Figure 4c, the red parts of which are candidates for the runway. After tracing the boundaries of the red region, the initial contour outlined in Figure 4d is obtained, which then expands towards the final runway. After 16 iterations, the curve has moved as shown in Figure 4e. In Figure 4d,e, the regions enclosed by the white rectangles are the ROIs for the next iteration. When the level set function meets the stopping criterion, the evolution halts and the final runway is highlighted, as in Figure 4f. The outcome is almost identical to human visual judgment.
Unfortunately, there is no ground truth available to assess the error. In order to quantify the performance of the proposed method, we assume that the shape of a runway is quadrilateral and manually draw the ground truth for 100 consecutive frames. Whether the initial contour lies inside the true runway is significant. We therefore define three indices below, which measure the error of the initial contour, the error of the final contour and the area difference between the ground truth and the experimental results, respectively.
(1) The error of the initial contour, $e_0$. If the initial contour lies within the ground truth, it is appropriate for the proposed method. For this error, we employ a “one-vote negation” evaluation strategy: we define $e_0 = 0$ if all points of an initial contour lie within the ground truth; otherwise, $e_0 = 1$.
(2) The error of the final contour, $e_1$. The distance between the points of the final contour and the ground truth reflects the accuracy of the proposed method [39]. Thus, we define the final error $e_1$ as:
$$e_1 = \frac{\sum_{i=1}^{N} \left|\operatorname{dist}(P_i, C_g)\right|}{N}\ \text{pel},$$
where the function $\operatorname{dist}(\cdot)$ returns a signed distance between a point $P_i$ on the final contour and the nearest ground truth edge. On the premise that the manually drawn ground truth is sufficiently precise, the smaller $e_1$ is, the more accurate the runway detection.
(3) The area difference, $e_2$. The Jaccard similarity measure [40] calculates the similarity between the ground truth area $G$ and the detected area $D$; a maximum value of 1 means completely similar, while 0 means completely dissimilar. Naturally, we define the third error index as (a short sketch of this computation is given after this list):
$$e_2 = 1 - J = 1 - \frac{|G \cap D|}{|G \cup D|}.$$
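Over boolean masks, $e_2$ reduces to a few lines; the following sketch is our illustration, with hypothetical mask arguments:

```python
import numpy as np

def area_error(detected, ground_truth):
    """e2 = 1 - |G intersect D| / |G union D| for boolean masks D and G."""
    inter = np.logical_and(detected, ground_truth).sum()
    union = np.logical_or(detected, ground_truth).sum()
    return 1.0 - inter / union
```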
Based on the error indices defined above, the errors $e_0$, $e_1$ and $e_2$ are plotted in Figure 5. We obtain $e_0 = 0$ for all tested images, while $e_1$ takes values in the interval $[0.73, 3.11]$ and $e_2$ in the interval $[0.16, 0.51]$. As for the index $e_2$, the area of the target runway is very small, so a slight difference between the ground truth and the detected area may lead to an apparently large error. However, if we replace $|G \cap D| / |G \cup D|$ with $|G \cap D| / |G|$ in Equation (32), the error drops to 0.15 on average. This means that the extracted runway overlaps 85% of the area covered by the ground truth. To demonstrate the error distinctly, Table 2 shows graphical representations of the detected runways and the corresponding ground truth for the frames with the largest values of $e_1$. Note that there is only a slight visual difference between the runways from the proposed method and the ground truth. Moreover, although the error $e_1$ is not ideally zero in some cases, it is sufficient for improving a pilot’s visual perception, and automatically detected runways can significantly improve a pilot’s spatial awareness. The parameters employed are those listed in Table 1; the detection performance verifies that the listed parameters are largely image independent.

4.2. Comparison with Existing Methods

Among line segment based runway detection methods, the Canny edge detector and EDLines offer excellent accuracy and resistance to false detections. In Figure 6, edge images obtained by Canny’s method are listed in the first column. This method succeeds in extracting the edges of runways, but determining the runway among patches of line segments during post-processing is intricate. In Figure 6b, some line segments of the runway found by EDLines are missing. Although three line segments of the runway are detected in Figure 6e, picking three out of 192 line segments is still a tough decision. By contrast, the proposed method gives a good solution, as shown in the third column.
Compared to Li’s method [27], our ROI based level set method has significantly better time performance. The average execution times of Li’s method and ours under the same computer configuration (CPU: Intel(R) Core(TM) i7-4790 (3.60 GHz), RAM: 16 GB, OS: Windows 10 (64 bit)) are listed in Table 3. Our processing speed is about 28 frames per second, which meets the demands of real-time processing.
In addition, we manually placed a subset of a runway as the initial curve (see Figure 7b). Our final result in Figure 7c,f is similar to Li’s in Figure 7d after 100 iterations. However, it is difficult for Li’s method to design an appropriate stopping criterion once the curve has evolved to the desired runway; the curve in Figure 7e has clearly grown larger than the exact runway after 200 iterations. In this application, a traditional level set with a fixed maximum number of iterations is inadvisable. It is worth pointing out that the virtual runway provides clues for both the initial contour and the stopping criterion leading to the accurate runway, which has practical significance for automatic processing.
Figure 8 demonstrates the comparison between Chan’s model [41,42] and our method. Due to the lack of edge information, Chan’s model cannot separate the runway from the touch zone, as shown in Figure 8d. In this case, our method prevails in accuracy with the assistance of the edge indicator, shown in Figure 8b. The final result of our method is shown in Figure 8c, which excludes irrelevant regions other than the runway.
As for time performance, Balla-Arabe’s method [43] was reported to take 0.7 s per frame for 481 × 321 plane images (CPU: AMD Athlon(TM) 5200 2.3 GHz, RAM: 2 GB, Matlab 2010b, MathWorks, Natick, MA, USA). Shi [29] proposed a real-time algorithm for the approximation of level-set-based curve evolution that segments the plane image in 0.0249 s (CPU: Intel 1.6 GHz). Nevertheless, in Shi’s method, a circle outside the plane was plotted manually as the initial curve, which is not feasible in batch processing.

5. Conclusions

We proposed a new approach for runway detection from infrared aerial images that incorporates synthetic vision. Synthetic vision is capable of generating virtual runways. However, pilots may be unable to confirm the authenticity of this information if errors occur in either the navigation data or the 3D scene database. Therefore, pilots should be provided with credible visual cues derived from structures such as runways, rivers, freeways, overpasses and other landmarks observed under real circumstances [18,19,44]. A framework for runway detection on the basis of synthetic vision and an ROI level set method was presented. We first analyzed the structure of a runway and found that the runway adjoins a touch zone in the infrared image; to differentiate the two regions, edge information was used. Subsequently, a level set method without re-initialization, which incorporates an edge indicator, was employed. Synthetic vision then provided an initial level set function and a stopping condition. For the initial level set function, we presented a three-thresholding method to guarantee that it is induced by a subset of the IR runway. Furthermore, an ROI based evolution strategy was proposed to achieve real-time processing. Compared to traditional runway detection, the virtual runway proposed in this paper provides the approximate location of an IR runway; hence, the method can output an exact runway without any follow-up recognition, whereas other methods need to carry out identification after runway extraction. In this paper, we assumed that the virtual runway overlaps the IR runway. This assumption demands reasonably reliable synthetic vision; if this prerequisite is not met, an incorrect sign of $\nu$ in Equation (28) may make the level set function evolve in the wrong direction. This is a problem we will focus on in future research, along with creating more reliable synthetic vision.

Author Contributions

Conceptualization, C.L. and A.B.; Methodology, C.L. and A.B.; Software, C.L.; Validation, C.L., I.C. and A.B.; Formal Analysis, C.L.; Investigation, A.B. and I.C.; Resources, C.L.; Data Curation, C.L.; Writing Original Draft Preparation, C.L.; Writing Review and Editing, C.L. and A.B.; Visualization, C.L.; Supervision, A.B.

Funding

The work was supported in part by the Open Project of the Key Lab of Enterprise Informationization and Internet of Things of Sichuan Province under Grant No. 2017WZY01, the Open Project of the Artificial Intelligence Key Laboratory of Sichuan Province under Grant Nos. 2015RZJ01 and 2018RYJ02, the Project of Sichuan Provincial Department of Education under Grant Nos. 15ZA0234 and 18ZA0353, National Natural Science Foundation of China under Grant Nos. 11705122 and 11872043, and Natural Science Foundation of Sichuan University of Science and Engineering (SUSE) under Grant Nos. 2015RC08, 2017RCL54 and JG-1707.

Acknowledgments

The authors would like to thank the Natural Sciences and Engineering Research Council of Canada, for their financial support of this research.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Knabl, P.; Schmerwitz, S.; Doehler, H.U. Target Detection with Helmet-Mounted Displays in Low Visibility Helicopter Operations. Procedia Manuf. 2015, 3, 2558–2565.
  2. Cornejo-Bueno, L.; Casanova-Mateo, C.; Sanz-Justo, J.; Cerro-Prada, E.; Salcedo-Sanz, S. Efficient Prediction of Low-Visibility Events at Airports Using Machine-Learning Regression. Bound.-Layer Meteorol. 2017, 165, 349–370.
  3. Stoltny, M. Instrument Landing System as an Example of a Precision Approach System. Sci. J. Silesian Univ. Technol. 2016, 93, 123–129.
  4. Merkisz, J.; Galant, M.; Bieda, M. Analysis of Operating Instrument Landing System Accuracy under Simulated Conditions. Sci. J. Silesian Univ. Technol. 2017, 94, 163–173.
  5. Kramer, L.J.; Etherington, T.J.; Severance, K.; Bailey, R.E.; Williams, S.P.; Harrison, S.J. Assessing Dual-Sensor Enhanced Flight Vision Systems to Enable Equivalent Visual Operations. J. Aerosp. Inf. Syst. 2017, 14, 1–18.
  6. Kumar, N.S.; Kashyap, S.K.; Naidu, V.; Gopalratnam, G. Integrated enhanced and synthetic vision system for transport aircraft. Def. Sci. J. 2013, 63, 157–163.
  7. Mathew, J.K.; Major, W.L.; Hubbard, S.M.; Bullock, D.M. Statistical modelling of runway incursion occurrences in the United States. J. Air Transp. Manag. 2017, 65, 54–62.
  8. Song, I.; Cho, I.; Tessitore, T.; Gurcsik, T.; Ceylan, H. Data-Driven Prediction of Runway Incursions with Uncertainty Quantification. J. Comput. Civ. Eng. 2018, 32, 04018004.
  9. Jackson, P.T.G.; Nelson, C.J.; Schiefele, J.; Obara, B. Runway detection in High Resolution remote sensing data. Phys. Rev. C 2015, 43, 96–104.
  10. Yao, X.; Han, J.; Guo, L.; Bu, S.; Liu, Z. A coarse-to-fine model for airport detection from remote sensing images using target-oriented visual saliency and CRF. Neurocomputing 2015, 164, 162–172.
  11. Zhao, C.; Goshtasby, A.A. Registration of multitemporal aerial optical images using line features. ISPRS J. Photogramm. Remote Sens. 2016, 117, 149–160.
  12. Penariu, P.S.; Bodorin, N.P.; Stroescu, V.C. A Parallel Approach on Airport Runways Detection Using MPI and CImg; Springer: Berlin, Germany, 2018; pp. 460–471.
  13. Gioi, R.G.V.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A Fast Line Segment Detector with a False Detection Control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732.
  14. Gioi, R.G.V.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A line segment detector. Image Process. Line 2012, 2, 35–55.
  15. Akinlar, C.; Topal, C. EDLines: A real-time line segment detector with a false detection control. Pattern Recognit. Lett. 2011, 32, 1633–1642.
  16. Aytekin, Ö.; Zongur, U.; Halici, U. Texture-based airport runway detection. IEEE Geosci. Remote Sens. Lett. 2013, 10, 471–475.
  17. Cheng, G.; Han, J.; Zhou, P.; Guo, L. Multi-class geospatial object detection and geographic image classification based on collection of part detectors. ISPRS J. Photogramm. Remote Sens. 2014, 98, 119–132.
  18. Cheriyadat, A.M. Unsupervised feature learning for aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 439–451.
  19. Zhang, F.; Du, B.; Zhang, L. Saliency-guided unsupervised feature learning for scene classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2175–2184.
  20. Cheng, G.; Han, J. A survey on object detection in optical remote sensing images. ISPRS J. Photogramm. Remote Sens. 2016, 117, 11–28.
  21. Marapareddy, R.; Saripalli, S.W. Runway Detection Using K-Means Clustering Method Using UAVSAR Data. In Proceedings of the International Conference on Artificial Intelligence and Applications (AIAPP), Geneva, Switzerland, 25–26 March 2017; pp. 49–54.
  22. Cheng, G.; Zhou, P.; Han, J. Learning Rotation-Invariant Convolutional Neural Networks for Object Detection in VHR Optical Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7405–7415.
  23. Li, K.; Cheng, G.; Bu, S.; You, X. Rotation-Insensitive and Context-Augmented Object Detection in Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2337–2348.
  24. Wang, J.; Cheng, Y.; Xie, J.; Niu, W. A Real-Time Sensor Guided Runway Detection Method for Forward-Looking Aerial Images. In Proceedings of the 2015 11th International Conference on Computational Intelligence and Security (CIS), Shenzhen, China, 19–20 December 2015; pp. 150–153.
  25. Zhang, L.; Cheng, Y.; Zhai, Z. Real-time accurate runway detection based on airborne multi-sensors fusion. Def. Sci. J. 2017, 67, 542–550.
  26. Osher, S.; Sethian, J.A. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 1988, 79, 12–49.
  27. Li, C.; Xu, C.; Gui, C.; Fox, M.D. Level set evolution without re-initialization: A new variational formulation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 430–436.
  28. Shi, Y.; Karl, W.C. A fast level set method without solving PDEs. In Proceedings of the 30th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Philadelphia, PA, USA, 23 March 2005; pp. 97–100.
  29. Shi, Y.; Karl, W.C. A real-time algorithm for the approximation of level-set-based curve evolution. IEEE Trans. Image Process. 2008, 17, 645–656.
  30. Robinson, N.; Regetz, J.; Guralnick, R.P. EarthEnv-DEM90: A nearly-global, void-free, multi-scale smoothed, 90 m digital elevation model from fused ASTER and SRTM data. ISPRS J. Photogramm. Remote Sens. 2014, 87, 57–67.
  31. Peng, J.; Zhang, Y.; Shan, J. Shading-based DEM refinement under a comprehensive imaging model. ISPRS J. Photogramm. Remote Sens. 2015, 110, 24–33.
  32. Shean, D.E.; Alexandrov, O.; Moratto, Z.M.; Smith, B.E.; Joughin, I.R.; Porter, C.; Morin, P. An automated, open-source pipeline for mass production of digital elevation models (DEMs) from very-high-resolution commercial stereo satellite imagery. ISPRS J. Photogramm. Remote Sens. 2016, 116, 101–117.
  33. Lee, I.K.; Shamsoddini, A.; Li, X.; Trinder, J.C.; Li, Z. Extracting hurricane eye morphology from spaceborne SAR images using morphological analysis. ISPRS J. Photogramm. Remote Sens. 2016, 117, 115–125.
  34. Soilán, M.; Riveiro, B.; Martínez-Sánchez, J.; Arias, P. Segmentation and classification of road markings using MLS data. ISPRS J. Photogramm. Remote Sens. 2017, 123, 94–103.
  35. Otsu, N. A threshold selection method from gray-level histograms. Automatica 1975, 11, 23–27.
  36. Fukunaga, K. Introduction to Statistical Pattern Recognition; Academic Press: Cambridge, MA, USA, 2013.
  37. Korn, G.A.; Korn, T.M. Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review; Courier Corporation: North Chelmsford, MA, USA, 2000.
  38. Malladi, R.; Sethian, J.A.; Vemuri, B.C. Shape modeling with front propagation: A level set approach. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 158–175.
  39. Cheng, I.; Flores-Mir, C.; Major, P.; Basu, A. Measuring and evaluating ground truth for boundary detection in medical images. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 5889–5892.
  40. Jain, A.; Jain, A.; Chauhan, N.; Singh, V.; Thakur, N. Information Retrieval using Cosine and Jaccard Similarity Measures in Vector Space Model. Int. J. Comput. Appl. 2017, 164, 28–30.
  41. Chan, T.F.; Sandberg, B.Y.; Vese, L.A. Active contours without edges for vector-valued images. J. Vis. Commun. Image Represent. 2000, 11, 130–141.
  42. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277.
  43. Balla-Arabe, S.; Gao, X.; Wang, B. A fast and robust level set method for image segmentation using fuzzy clustering and lattice Boltzmann method. IEEE Trans. Cybern. 2013, 43, 910–920.
  44. Yang, Y.; Newsam, S. Geographic image retrieval using local invariant features. IEEE Trans. Geosci. Remote Sens. 2013, 51, 818–832.
Figure 1. Typical runway structure in infrared images.
Figure 2. Generation of synthetic vision: (a) flow diagram; and (b) graphical representation.
Figure 3. Segmentation based on Otsu’s and our proposed method on the virtual runway bounded region: (a) a histogram of image data; (b) segmentation by Otsu’s method; (c) three-thresholding segmentation by our method.
Figure 4. The proposed runway detection method: (a) the original infrared image; (b) edge indicator image; (c) three-thresholding segmentation of ROI; (d) the initial curve evolved; (e) results after 16 iterations; and (f) final results after 60 iterations.
Figure 5. Diagram of error indices in comparison with the manually determined ground truth.
Figure 6. Comparison with line segment based method: (a) edge image for the 15th frame using Canny’s method; (b) line segments for the 15th frame by EDLines; (c) runway detection by the proposed method; (d) edge image for the 67th frame by Canny’s method; (e) line segments for the 67th frame by EDLines; (f) runway detection by the proposed method.
Figure 7. Comparison with Li’s method [27]: (a) part of the original infrared image; (b) initial contour by our method; (c) final result based on our method (after 71 iterations); (d) contour based on Li’s method (after 100 iterations); (e) contour based on Li’s method (after 200 iterations); (f) final result with ground truth based on our method.
Figure 8. Comparison with Chan’s method [42]: (a) part of the original infrared image; (b) edge indicator image; (c) runway region based on our method; (d) runway region based on Chan’s method.
Table 1. Parameters related to experiments.

| Image Dimension | σ | ρ | ε | μ | ν | λ | τ | h | Δx | Δy |
|---|---|---|---|---|---|---|---|---|---|---|
| 720 × 576 | 1.5 | 3 | 1.5 | 0.06 | −4.0 | 4.0 | 1.5 | 1 | 15 | 15 |
Table 2. Comparison of results with ground truth.

| Sequence Number | e1 (pel) | Proposed Method | Ground Truth |
|---|---|---|---|
| 100 | 3.11 | (image) | (image) |
| 84 | 2.62 | (image) | (image) |
| 97 | 2.57 | (image) | (image) |
| 16 | 2.57 | (image) | (image) |
| 75 | 2.47 | (image) | (image) |
Table 3. Algorithm execution time (s/f).

| Processing Time | Min | Max | Average |
|---|---|---|---|
| Li’s Method [27] | 0.172 | 1.844 | 0.821 |
| Proposed Method | 0.015 | 0.063 | 0.035 |
