Article

Recognition and Reconstruction of Zebra Crossings on Roads from Mobile Laser Scanning Data

1 School of Resource and Environmental Sciences, Wuhan University, Wuhan 430079, China
2 Collaborative Innovation Center of Geospatial Technology, Wuhan University, Wuhan 430079, China
3 The Key Laboratory for Geographical Information System, Ministry of Education, Wuhan 430079, China
4 Power China Zhongnan Engineering Corporation Limited, Changsha 410014, China
* Authors to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2016, 5(7), 125; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5070125
Submission received: 16 May 2016 / Revised: 3 July 2016 / Accepted: 8 July 2016 / Published: 19 July 2016

Abstract

Zebra crossings provide guidance and warning to pedestrians and drivers, thereby playing an important role in traffic safety management. Most previous studies have focused on detecting zebra stripes but have not provided full information about the areas, which is critical to both driver assistance systems and guide systems for blind individuals. This paper presents a stepwise procedure for recognizing and reconstructing zebra crossings using mobile laser scanning data. First, we propose adaptive thresholding based on road surface partitioning to reduce the impact of intensity unevenness and improve the accuracy of road marking extraction. Then, dispersion degree filtering is used to reduce the noise. Finally, zebra stripes are recognized according to the rectangular feature and fixed size, which is followed by area reconstruction according to arrangement patterns. We test our method on three datasets captured by an Optech Lynx mobile mapping system. The total recognition rate of 90.91% demonstrates the effectiveness of the method.

1. Introduction

Road markings, as critical transportation infrastructure, provide drivers and pedestrians with information about traffic regulations, warnings, and guidance [1]. The recognition and extraction of road markings have been seen as important functions in many fields, such as traffic safety management [2,3], driver assistance [4,5,6], and intelligent transportation [7,8]. Traditional studies have mostly focused on digital images and videos [9,10,11,12,13,14,15]. The extraction results are sometimes incomplete or insufficient due to poor weather conditions, lighting conditions, and complex shadowing from trees. In addition, the results fail to provide accurate three-dimensional (3D) coordinates of objects, which are crucial inputs to intelligent transportation systems and 3D city modelling.
Recent years have seen the emergence of mobile laser scanning (MLS) as a leading technology for extracting information about the surfaces of urban objects. MLS systems, which integrate laser scanners, global positioning system (GPS), inertial navigation system (INS), and charge-coupled device (CCD) cameras [16], collect information, such as 3D geospatial, texture, and laser intensity data, from complex urban areas when a vehicle is on the move. Such systems have become a promising and cost-effective solution for rapid road environment modelling. Most methods proposed in previous studies are designed for point cloud classification [17,18,19,20], building footprint extraction, façade reconstruction [21,22,23], and detection of vertical pole-like objects [24,25,26] in a road environment. Only a few studies have explored the recognition and extraction of road markings.
Jaakkola et al. [27] first generated georeferenced feature images according to elevation and intensity by applying an interpolation method and then segmented road markings and curbstones by applying thresholding and morphological operations to the elevation and intensity images. Yang et al. [28] calculated the weights and pixel values of grey images based on spatial distributions (e.g., planar distance, elevation difference, and point density) of laser scanning points, which improved their algorithm for generating feature images. Then, they applied an intensity filter and an elevation filter, followed by constraints on shape and distribution patterns. The above-mentioned methods transform 3D points into 2D images because addressing a large volume of unorganized points is time-consuming and complex. The transformation improves the computational efficiency and enables one to capitalize on well-established image processing methods. However, this transformation also causes roughness in detail, especially when extracting small objects, such as road markings.
Kammel [29] and Chen [30] applied the Radon transform and Hough transform to extract solid-edge-line and dashed-lane-line markings, respectively, from MLS points. These methods are effective when extracting straight markings; however, they exhibit a weakness in extracting curve markings. Since curve markings are usually irregular, it is difficult to choose a suitable curve model.
Simple global intensity thresholding is often used to extract road markings [27,28,31]; however, the markings’ non-uniform intensity makes this method less effective in some cases because intensity values are affected by material, laser incidence angle, and range. Guan et al. [32] proposed a novel method that segments the intensity images with multiple thresholds related to the point density. Using their method, an image was partitioned into several blocks in accordance with the point density distribution characteristics. Within different blocks, local optimal thresholds were estimated to extract road markings. However, substantial noise was introduced in this method.
In addition to the extraction of road markings, the recognition of types is also a necessary and challenging task, especially for zebra crossings, which are located at urban road intersections and have important functions in traffic safety management. Mancini et al. [33] identified zebra crossings with area, perimeter, and length-width ratios following connected component labelling. Riveiro et al. [34] used a Canny edge detector and standard Hough transform to detect a set of parallel lines that have similar directions as the road centreline. Yu et al. [35] distinguished zebra crossings from other rectangular-shaped markings according to the geometric perpendicularity of their distribution directions and road centrelines. These studies mostly focused on detecting stripes and did not provide specific information about the areas. For guide systems for blind individuals, mobile robot navigation, etc., it is impossible to confirm whether the frontal area is a zebra crossing without such information. Another problem is that the method is invalid when the distribution directions of zebra crossings and road centrelines are not perpendicular.
To overcome the aforementioned limitations, we propose a stepwise procedure for recognizing and reconstructing zebra crossings using mobile laser scanning data in this paper. The contributions of this paper are as follows: (1) an adaptive thresholding method based on road surface partitioning is designed to compensate for non-uniformities in intensity data and extract all types of road markings; (2) a dispersion degree filtering method is applied to reduce the noise; and (3) zebra crossings are recognized and reconstructed according to geometrical features, so that we obtain more specific information about the area, including start positions, end positions, distribution directions of zebra crossings, and road centreline directions.
The remainder of this paper is organized as follows: the stepwise description of the proposed method is presented in Section 2; in Section 3, we test the proposed method on MLS data captured in Wuhan, China; following the experiments, we discuss the results; and, finally, conclusions are drawn in Section 4.

2. Method

The method consists of three main procedures: road surface segmentation, road marking extraction, and zebra crossing recognition and reconstruction. Figure 1 illustrates the complete experimental procedures used in this study.

2.1. Road Surface Segmentation

In an urban environment, road surfaces are generally flat, with small elevation jumps caused by curbstones on the road boundaries, as shown in Figure 2. The elevations of road boundary points change substantially more quickly than do those of road surface points. The gradient of a scalar field reflects the rate of change of the scalar; therefore, we attempt to separate roads from other points via elevation gradients and apply a region growing algorithm to the elevation-gradient feature image for precise road surface segmentation.

2.1.1. Preprocessing

The large data volumes and the complexity of urban street scenes increase the difficulty of creating a unified road model. Therefore, we use the vehicle trajectory data (L) to section the point clouds into a set of blocks at an interval (d), as shown in Figure 3. To ensure that the road in each block is as flat and straight as possible, d should be set smaller for undulating and winding roads.

2.1.2. Elevation Filtering

Trees next to roads, street lamps, etc., may cause difficulties in extracting accurate road surface points. For instance, tree crowns can cover road boundaries when laser scanning points are projected onto the XY plane. Therefore, we extend the histogram concavity analysis algorithm proposed by Rosenfeld [36] and use the extended algorithm to select an appropriate threshold for elevation filtering. The algorithm is applicable to both unimodal and bimodal histograms.
As shown in Figure 4, we first draw a histogram hs based on F, one of the laser scanning points' feature properties. The class width is w, and each rectangle is numbered i = 1, …, n. We add two points, (f1, 0) and (fn, 0), to define the region of hs more conveniently and accurately.
For a rectangle i, let the feature value fi be the X coordinate of the upper side’s midpoint, and let hi be the height:
$$ f_i = F_{\min} + \left( i - \frac{1}{2} \right) w $$
To find the concavities of hs, we first construct its convex hull HS, the smallest convex polygon containing (f1, 0), (fn, 0), and the midpoints (fi, hi) (i = 1, 2, …, n) of all the rectangles' upper sides. Hi is the height of HS at feature value fi. The depth of the concavity Di is determined as follows:
$$ D_i = H_i - h_i $$
Any concavity in the histogram may be the location of a threshold; however, not all points of the concavity are good candidates. The deeper the concavity is, the larger the difference in the objects' feature values on either side; thus, we consider the points at which Di is maximal as candidates for the position of the threshold T:
$$ T = f_i \quad \text{when} \quad D_i = D_{\max} $$
We select the elevation of the point clouds as the feature property for histogram concavity analysis in this study. In urban road environments, there are usually a large number of road surface points, and their distribution is concentrated. Trees, street lamps, and other objects that seriously interfere with road segmentation are taller, and their points are dispersed. Therefore, based on the elevation distribution characteristics of the point clouds, we conclude that the peak of the histogram corresponds to the road surface and that the “shoulder” on the right side of the peak corresponds to the interfering objects. As shown in Figure 5c, the possible threshold is located in the “shoulder” on the right. First, let hf be the elevation corresponding to the largest frequency, and let hmax be the highest elevation of all points. Then, the threshold T is calculated in the range [hf, hmax] by analysing the histogram concavity. Finally, we filter out points with elevations less than T and use the remaining points in subsequent procedures.
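The concavity analysis above can be sketched in NumPy as follows. This is a hypothetical implementation, not the authors' code: the bin count and the monotone-chain construction of the upper convex hull are our own choices; the search is restricted to the right of the main peak, as described in the text.

```python
import numpy as np

def concavity_threshold(values, bins=64):
    """Threshold selection via histogram concavity analysis (after Rosenfeld).

    Builds a histogram with zero-height endpoints, constructs the upper
    convex hull over the bin-top midpoints, and returns the feature value
    f_i where the concavity depth D_i = H_i - h_i is largest to the right
    of the main peak."""
    hist, edges = np.histogram(values, bins=bins)
    mids = (edges[:-1] + edges[1:]) / 2.0                     # f_i
    xs = np.concatenate(([edges[0]], mids, [edges[-1]]))
    ys = np.concatenate(([0.0], hist.astype(float), [0.0]))

    # Upper convex hull via a monotone chain over points sorted by x.
    hull = []
    for i in range(len(xs)):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            turn = (xs[b] - xs[a]) * (ys[i] - ys[a]) - (ys[b] - ys[a]) * (xs[i] - xs[a])
            if turn >= 0:        # left turn or collinear: b is not on the upper hull
                hull.pop()
            else:
                break
        hull.append(i)

    H = np.interp(mids, xs[hull], ys[hull])   # hull height H_i at each f_i
    depth = H - hist                          # concavity depth D_i
    peak = int(np.argmax(hist))               # search only the right "shoulder"
    depth[:peak + 1] = -np.inf
    return float(mids[int(np.argmax(depth))])
```

For a bimodal sample (a dominant low mode and a smaller high mode), the returned threshold falls in the valley to the right of the dominant peak.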

2.1.3. Segmentation by Region-Growing

To improve the computation speed of the proposed method and to apply image processing algorithms, the laser scanning points are projected onto the XY plane to generate a georeferenced feature image I. The grey value of each cell is the elevation at the cell centre, calculated using Inverse Distance Weighted (IDW) interpolation [37] and then normalized. Bilinear interpolation is then used to smooth the image, because cells containing no points would otherwise cause image noise. Finally, the elevation feature image I is converted into an elevation-gradient feature image G as follows:
$$ \begin{cases} G_x(i,j) = \left( I(i,j+1) - I(i,j-1) \right) / 2 \\ G_y(i,j) = \left( I(i+1,j) - I(i-1,j) \right) / 2 \\ G(i,j) = \sqrt{G_x(i,j)^2 + G_y(i,j)^2} \end{cases} $$
Since the elevation gradients of the road surface and road boundary are sufficiently distinct, we use a single threshold TG for image binarization. In the binarized image B, the pixel value is set to 1 if the corresponding pixel value in G is greater than TG; otherwise, it is set to 0. The result is shown in Figure 6b. To bridge gaps on road boundaries, dilation is applied to the binarized image. Figure 6c shows the result of dilating the image B with a 3 × 3 structuring element; the road surface is the black area surrounded by a clear white road boundary.
Region-growing is an effective method for extracting the road surface in an image. First, a point in the trajectory data is chosen, and the pixel in which it is located is verified to have a value of 0; this pixel serves as the seed. All pixels that are 8-connected to the seed and whose values are also 0 are appended to the seed to form a larger region (a pixel's 8-connected neighbours are the pixels that touch one of its edges or corners). Then, we dilate the region-growing result with a 3 × 3 structuring element to compensate for the error caused by the previous dilation. Finally, the point clouds of road surfaces are recovered based on the optimized region-growing results.
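As a minimal sketch of this pipeline (central-difference gradient, binarization, 3 × 3 dilation, and 8-connected region growing), assuming a pre-built elevation image and a seed pixel taken from the trajectory; function and parameter names are illustrative:

```python
import numpy as np
from collections import deque

def segment_road(elev_img, seed, t_g=0.005):
    """Road-surface segmentation sketch: elevation gradient, binarization
    at t_g, 3x3 dilation, and 8-connected region growing from a seed pixel.
    (The paper's final compensating dilation of the grown region is omitted.)"""
    gx = np.zeros_like(elev_img)
    gy = np.zeros_like(elev_img)
    gx[:, 1:-1] = (elev_img[:, 2:] - elev_img[:, :-2]) / 2.0
    gy[1:-1, :] = (elev_img[2:, :] - elev_img[:-2, :]) / 2.0
    grad = np.hypot(gx, gy)
    boundary = grad > t_g                       # 1 = road boundary, 0 = flat

    # Dilate with a 3x3 structuring element to bridge boundary gaps.
    h, w = boundary.shape
    pad = np.pad(boundary, 1)
    dil = np.zeros_like(boundary)
    for dr in range(3):
        for dc in range(3):
            dil |= pad[dr:dr + h, dc:dc + w]

    # Region growing: collect pixels 8-connected to the seed with value 0.
    road = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    road[seed] = True
    while q:
        r, c = q.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and not road[rr, cc] and not dil[rr, cc]:
                    road[rr, cc] = True
                    q.append((rr, cc))
    return road
```

On a synthetic flat road flanked by raised curbs, the grown region stays inside the curb boundaries and excludes the area outside them.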

2.2. Road Marking Extraction

2.2.1. Adaptive Thresholding Based on Road Surface Partitioning

Usually, a road surface is composed of asphalt or concrete and exhibits low, diffuse reflectivity to incident laser light. Road markings are highly reflective white or yellow coatings painted on the road surface. Objects with higher reflectances correspond to stronger laser signals; therefore, the laser intensity value is a key feature for distinguishing road markings from road surfaces. However, the intensity values are also affected by the laser incidence angles and the distances between the target and the scanner centre, which makes single global thresholding less effective for segmentation. Therefore, adaptive intensity thresholding based on road surface partitioning is proposed to solve the problems caused by non-uniform intensities.
Generally, the farther road markings are from the trajectory, the lower their intensity. The materials of different road sections also vary considerably. Thus, the road surface is partitioned into non-overlapping rectangles Recti, as shown in Figure 7. The X axis is the direction of the vehicle trajectory, and the Y axis is perpendicular to the X axis in the horizontal plane. The length and width of each rectangle are Rx and Ry, respectively. The rectangle size is related to the evenness of the intensity: the more unevenly the intensity is distributed, the smaller Rx and Ry should be set to ensure a uniform intensity distribution within each rectangle.
There are two possibilities concerning the number of point types in a rectangle: (a) only one type, i.e., road surface points, or (b) two types, i.e., road surface points and road marking points. Otsu’s algorithm [38] is first used to find the optimal intensity threshold in each rectangle. Then, the two cases are separated based on the thresholding results.
The point set PA = { p1, p2, …, pm } contains the points whose intensities are larger than the threshold, and PB = { p1, p2, …, pn } contains the remaining points in the rectangle. The 3D coordinates of pi are (xi, yi, zi), and its intensity value is Ii.
For case (a), the points in PA and PB are both road surface points. For case (b), the points in PA are road marking points, and the points in PB are road surface points. The distance of cluster centres between PA and PB in case (b) is far larger than that in case (a). This distance dI is calculated as follows:
$$ d_I = \left| \frac{\sum_{i=1}^{m} I_i}{m} - \frac{\sum_{i=1}^{n} I_i}{n} \right| $$
The ratio of the number of points in PA to that in PB is also a critical element in distinguishing the two cases. In Figure 7, Rect1 and Rect2 are examples of case (a) and case (b), and their thresholding results are shown in Figure 8. In case (a), the threshold splits the road surface points into two groups of comparable size, so the ratio is relatively high; in case (b), the marking points in PA are far fewer than the road surface points in PB, which leads to a low ratio. The ratio is defined as follows:
$$ \mathrm{ratio} = \frac{m}{n} $$
According to the above analysis, the two cases can be distinguished using the following formula:
$$ \mathrm{Rect}_i : \begin{cases} \text{case (a)}, & \text{if } d_I < T_d \ \text{and} \ \mathrm{ratio} > T_r \\ \text{case (b)}, & \text{otherwise} \end{cases} $$
where Td and Tr are the thresholds of the cluster-centre distance and the ratio of the number of points, respectively.
Finally, for case (b), all of the points in PA are reserved as the coarse results obtained from road marking extraction.
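The per-rectangle decision can be sketched as follows, with Otsu's threshold computed from an intensity histogram. The default thresholds t_d and t_r are illustrative placeholders, not the paper's Table 2 values:

```python
import numpy as np

def otsu_threshold(intensity, bins=256):
    """Otsu's method on a 1-D intensity sample (maximum between-class variance)."""
    hist, edges = np.histogram(intensity, bins=bins)
    p = hist / hist.sum()
    mids = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                 # class-0 weight up to each bin
    mu = np.cumsum(p * mids)          # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return mids[int(np.argmax(sigma_b))]

def extract_markings(rect_intensity, t_d=20.0, t_r=0.3):
    """Per-rectangle adaptive thresholding sketch.

    Returns a boolean mask of candidate marking points (P_A), or all-False
    when the rectangle is judged to be road surface only, i.e. case (a):
    cluster-centre distance d_I < t_d AND point-count ratio m/n > t_r."""
    t = otsu_threshold(rect_intensity)
    above = rect_intensity > t                  # P_A
    m, n = int(above.sum()), int((~above).sum())
    if m == 0 or n == 0:
        return np.zeros_like(above)
    d_i = abs(rect_intensity[above].mean() - rect_intensity[~above].mean())
    if d_i < t_d and (m / n) > t_r:             # case (a): surface only
        return np.zeros_like(above)
    return above                                # case (b): keep P_A
```

A rectangle holding only surface returns no markings, while one containing a bright marking cluster returns exactly those points.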

2.2.2. Dispersion Degree Filtering

Some parts of the road surface have material properties similar to those of road markings, which causes noise in the coarse extraction results, as shown in Figure 9. Because road marking points are more concentrated than the noise, we propose removing the noise according to the difference in dispersion degrees.
The dispersion degree Dp of a point p(x, y, z) is defined as follows:
$$ D_p = \frac{ \sum_{i=1}^{N_p} \sqrt{ (x_i - x)^2 + (y_i - y)^2 } }{ N_p } $$
where Np denotes the number of local neighbourhood points.
By removing the points whose dispersion degrees are larger than the threshold TD, accurately extracted road markings can be obtained.
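A brute-force sketch of the filter follows; in practice a KD-tree would replace the O(N²) distance scan, and the radius and t_d defaults here are illustrative values rather than the paper's:

```python
import numpy as np

def dispersion_filter(points, radius=0.2, t_d=0.15):
    """Dispersion-degree filtering sketch.

    For each point p, D_p is the mean planar distance to its neighbours
    within `radius`; points with D_p above t_d are treated as noise.
    Isolated points (no neighbours at all) are also discarded."""
    xy = points[:, :2]
    keep = np.ones(len(xy), dtype=bool)
    for i, p in enumerate(xy):
        d = np.linalg.norm(xy - p, axis=1)
        nb = d[(d > 0) & (d <= radius)]
        keep[i] = nb.size > 0 and nb.mean() <= t_d
    return keep
```

A dense grid of marking points survives the filter, while scattered far-apart noise points are removed.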

2.3. Zebra Crossing Recognition and Reconstruction

2.3.1. The Model of Zebra Crossings

A zebra crossing is an area that consists of a group of broad white stripes painted on the road. As shown in Figure 10, our model of a zebra crossing contains the following four elements: L1 and L2 define the start and end positions, respectively; Vr is the direction of the road centreline, which is also the direction along which vehicles travel; and Vz is the distribution direction of the zebra crossing, which guides pedestrians to cross the road safely.

2.3.2. Detection of Zebra Stripes

Design standards in most countries provide regulations on the exact size and shape of road markings. It is advantageous to recognize different types of road markings based on their different sizes and shapes. The stripe is actually a rectangle with a fixed size. Therefore, there are two factors we can use to recognize stripes: (a) rectangular features and (b) fixed lengths Lz and widths Wz.
To cluster neighbouring road marking points, intensity feature images transformed from point clouds are binarized, followed by 8-connected component labelling. For each connected region, an ellipse with the same second moments as the 8-connected region is calculated. e is the eccentricity of the ellipse, and its value ranges from 0 to 1. The closer this value is to 1, the more likely the region is a rectangle. Based on experience, regions whose e values are larger than 0.99 are candidate stripes, and the corresponding points are denoted as the point set P = {p1, p2, …, pk}.
To calculate the length and width of the candidate stripes, principal component analysis (PCA) is performed to determine the principal distribution direction of the points in P on the XY plane.
For all the points in P, the correlation between xi and yi can be determined through their variances as follows:
$$ \mathrm{cov}(x, y) = \frac{ \sum_{i=1}^{k} (x_i - \bar{x})(y_i - \bar{y}) }{ k - 1 } $$
where $\bar{x}$ and $\bar{y}$ are the average values of $x_i$ and $y_i$, respectively.
The covariance matrix C can be established using Equation (10):
$$ C = \begin{pmatrix} \mathrm{cov}(x,x) & \mathrm{cov}(x,y) \\ \mathrm{cov}(y,x) & \mathrm{cov}(y,y) \end{pmatrix} $$
As shown in Figure 11, through the eigenvalue decomposition (EVD) of C, the eigenvector V1 associated with the larger eigenvalue is defined as the first principal direction, which is parallel to the long side of the rectangle; the eigenvector V2 associated with the smaller eigenvalue is defined as the second principal direction, which is parallel to the short side of the rectangle. One-dimensional vectors M1 and M2 are obtained by projecting all points' planar coordinates onto V1 and V2, respectively. Then, the length Lz and width Wz of the region can be calculated as follows:
$$ \begin{cases} L_z = \max(M_1) - \min(M_1) \\ W_z = \max(M_2) - \min(M_2) \end{cases} $$
The regions whose length and width satisfy the design standards can be reserved as zebra stripes. Considering wear on road markings and calculation errors in real-world cases, the value ranges of Lz and Wz are set as follows:
$$ \begin{cases} 0.8 \, L_s \le L_z \le 1.2 \, L_s \\ 0.8 \, W_s \le W_z \le 1.2 \, W_s \end{cases} $$
where Ls and Ws are the standard length and width of zebra stripes.
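The PCA measurement of a candidate stripe can be sketched as follows, with the size test applying the 0.8-1.2 tolerance band; the default standard sizes are the Chinese values (6 m × 0.4 m) used in the experiments, and the function names are our own:

```python
import numpy as np

def stripe_size(points_xy):
    """PCA-based stripe measurement: eigen-decompose the 2x2 covariance of
    the planar coordinates, project onto the principal directions, and take
    the projected extents as length L_z and width W_z."""
    centred = points_xy - points_xy.mean(axis=0)
    C = np.cov(centred, rowvar=False)         # 2x2 covariance matrix
    evals, evecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    v1, v2 = evecs[:, 1], evecs[:, 0]         # first / second principal dirs
    m1, m2 = centred @ v1, centred @ v2       # projections M_1 and M_2
    return m1.max() - m1.min(), m2.max() - m2.min()

def is_zebra_stripe(points_xy, l_s=6.0, w_s=0.4):
    """Apply the 0.8-1.2 tolerance band around the standard stripe size."""
    lz, wz = stripe_size(points_xy)
    return 0.8 * l_s <= lz <= 1.2 * l_s and 0.8 * w_s <= wz <= 1.2 * w_s
```

A rotated 6 m × 0.4 m rectangle of points is measured correctly regardless of its orientation, while a region of the wrong size is rejected.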

2.3.3. Reconstruction of Zebra Crossings

The centroids of all stripes in a zebra crossing lie along a straight line, which coincides with the centre axis of the zebra crossing and is important for area reconstruction.
Random sample consensus (RANSAC) [39] is an effective iterative algorithm for fitting mathematical models such as lines. By adjusting the number of iterations nR and the residual threshold TR, optimal model parameters can be estimated from a set of observed data containing noise. In this paper, RANSAC is adopted to fit the centre axes of zebra crossings.
We first calculate the coordinates of all stripe centroids on the XY plane. Then, the RANSAC algorithm is applied to these centroids. To ensure the accuracy of the results, at least three points should support each fitted line. Finally, we directly obtain some important information: (a) the number of zebra crossings, given by the number of fitted lines; (b) the centre axes of the zebra crossings, i.e., the lines fitted by RANSAC; and (c) the assignment of stripes to zebra crossings. The distribution direction Vz is the direction of the centre axis. The direction of the road centreline Vr is calculated by averaging the first principal directions of the stripes in a zebra crossing. L1 and L2 are obtained by translating the centre axis along Vr, where the translational distance is ±Ls/2. This completes the recognition and reconstruction of zebra crossings.
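A single-line RANSAC fit over stripe centroids might look like the following sketch; detecting multiple crossings would repeat it on the remaining centroids after removing each line's inliers (our assumption about the iteration scheme):

```python
import numpy as np

def ransac_line(points, n_iter=5000, t_r=0.25, min_inliers=3, rng=None):
    """RANSAC line-fitting sketch for zebra-crossing centre axes.

    Repeatedly samples two centroids, forms the line through them, and keeps
    the line supported by the most centroids within distance t_r. Returns
    (point_on_line, unit_direction, inlier_mask), or None when fewer than
    min_inliers centroids support the best line."""
    if rng is None:
        rng = np.random.default_rng()
    best, best_count = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        d = d / norm
        # Perpendicular distance of every centroid to the candidate line.
        rel = points - points[i]
        dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
        inliers = dist <= t_r
        if inliers.sum() > best_count:
            best_count = int(inliers.sum())
            best = (points[i], d, inliers)
    return best if best_count >= min_inliers else None
```

Six collinear centroids with two outliers yield a line whose inlier mask selects exactly the six stripes of the crossing.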

3. Results and Discussion

The point clouds used in the experiment were captured by an Optech Lynx mobile mapping system, which consists of two laser scanners, one GPS receiver, and an inertial measurement unit. The original data are given in the WGS-84 coordinate system and are transformed from longitude and latitude to planar X and Y coordinates using the Gauss projection. The survey area is in Guanggu, a district of Wuhan, a major city in central China.
Figure 12 shows the three datasets selected for the evaluation of the performance of the proposed method. These areas contain vegetation (e.g., trees and bushes), street lamps, power lines, and cars. The roads in these datasets consist of straight sections, curved sections, and crossroads. Detailed information, including road length and the number of points, is presented in Table 1.

3.1. Segmentation of Road Surfaces

To section the experimental data into a number of blocks, we chose d = 50 m in dataset 2. In the other two datasets, we used d = 30 m because there are more curves and ramps. For each block, histogram concavity analysis was used to obtain elevation thresholds. Then, following elevation filtering, point clouds were converted into elevation-gradient feature images. The grid size is a critical parameter in image generation. When the size is too small, only a few points, or possibly no points, fall inside the grids, whereas a large size may result in low image quality. Taking dataset 1 as an example, a block of data was selected to generate elevation-gradient images with different grid sizes of 0.05, 0.07, 0.09, and 0.11 m. Figure 13 presents the comparison results. Visual inspection suggests that there are few noise points on the road surface and that the details are clear when the grid size is 0.09 m; therefore, this value was applied in the experiment. The grid sizes used in datasets 2 and 3 were set to 0.12 m and 0.10 m, respectively, in the same way.
For the binarization of the elevation-gradient images, a threshold must be determined. The grey values of road surfaces generally range from 0 to 0.005, and the grey values of road boundaries are approximately 0.015; therefore, any value between 0.005 and 0.015 could serve as the threshold, and we selected 0.005.
Finally, we segmented the road surfaces using a region-growing method; then, the 3D points associated with road surfaces could be extracted easily, as shown in Figure 14. The close-up views in the black rectangles indicate that the road surfaces are basically extracted accurately and completely.

3.2. Extraction of Road Markings

Several parameters and their values used in the extraction of road markings are listed in Table 2. They were mainly selected through a set of tests or based on prior knowledge. Then, road markings were extracted directly with adaptive thresholding and dispersion degree filtering from road surface points. All types of road markings could be extracted fairly well. However, a few road markings were abraded by cars and pedestrians, leading to some of the extraction results being incomplete. Figure 15 shows a part of the extracted road markings, including solid lines, dotted lines, arrow markings, and diamond markings.

3.3. Recognition and Reconstruction of Zebra Crossings

The standard length Ls and width Ws of the zebra stripes are 6 m and 0.4 m, respectively, in the three datasets, which satisfy the design standards of zebra crossings in China. After recognizing stripes according to the above standards, the RANSAC algorithm was applied to the centroids of stripes with an nR of 5000 and a TR of 0.25 to obtain comprehensive information about zebra crossing areas.
A comparative study was conducted to compare the proposed zebra crossing recognition method with a recently published method, that of Riveiro et al. [34]. As listed in Table 3, ten of the eleven zebra crossings were recognized with our method, a recognition rate of 90.91%, which outperforms the other method. One zebra crossing was not detected due to the low reflectivity of its road markings caused by serious abrasion, which decreases the completeness of road marking extraction, as shown in Figure 16.
To further quantitatively evaluate the performance of our method, four measures were computed for each zebra crossing based on manually-extracted results. θz and θr represent the angle deviations of a zebra crossing’s distribution direction and a road centreline’s direction, respectively. The completeness r is used to describe how complete the detected zebra crossing areas are, and the correctness p is used to indicate what percentage of the detected zebra crossing areas are valid. r and p are defined as follows:
r = TP/AP
p = TP/VP
where TP, AP, and VP are the number of road surface points belonging to (1) the correctly detected zebra crossing areas using the proposed method; (2) the zebra crossing areas collected using manual visual interpretation; and (3) the whole detected zebra crossing areas using the proposed method, respectively.
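For point sets indexed by ID, the two measures reduce to a set intersection; a minimal sketch (the function name is our own):

```python
def completeness_correctness(detected, reference):
    """Point-level evaluation: detected and reference are sets of point IDs.
    TP = correctly detected points (intersection), AP = |reference|,
    VP = |detected|; returns (r, p) = (TP/AP, TP/VP)."""
    tp = len(detected & reference)
    return tp / len(reference), tp / len(detected)
```

For example, a detection sharing three of five reference points and containing one false point gives r = 0.6 and p = 0.75.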
As shown by the quality evaluation results in Table 4, the completeness and correctness of recognized zebra crossings are both greater than 90%, the value of θz is no greater than 2.5°, and the maximum value of θr is 1.2°. In summary, our proposed method exhibits good performance in recognizing and reconstructing zebra crossings.

4. Conclusions

In this paper, we have proposed an effective method for recognizing and reconstructing zebra crossings using mobile laser scanning data. The proposed method first converts point clouds into elevation-gradient images and subsequently applies region-growing-based road surface segmentation. Second, road marking points are extracted with adaptive intensity thresholding based on road surface partitioning and dispersion degree filtering. Finally, the zebra crossing areas are recognized and reconstructed according to geometrical features.
The three datasets acquired by an Optech Lynx mobile mapping system were used to validate our zebra crossing recognition and reconstruction method. The experimental results demonstrate that the proposed method performs well and achieves high completeness and correctness values. The experiments indicate three main advantages of the method: (1) it is effective even when the points of zebra crossings are incomplete; (2) it remains effective when the distribution directions of zebra crossings and road centrelines are at arbitrary angles; and (3) it obtains more comprehensive information about zebra crossing areas, such as their extent.
These research findings could contribute to a more rapid, cost-effective, and comprehensive approach to traffic management and ensure maximum safety conditions for road users. However, at present our method can only be applied in post-processing rather than in real time, because some parameters must be selected based on prior knowledge or a set of tests. In future work, we will study algorithms for selecting optimal parameters automatically. It is also important to improve the computational efficiency of our method, because point clouds with higher resolution and density are needed to obtain more detailed information about urban objects.

Acknowledgments

This study was funded by the Scientific and Technological Leading Talent Fund of the National Administration of Surveying, Mapping and Geo-information (2014), the National 863 Plan of China (2013AA12A202), and the Wuhan ‘Yellow Crane Excellence’ (Science and Technology) Program (2014).

Author Contributions

Lin Li provided the main idea of the study and designed the experiments. Shen Ying and Da Zhang performed the experiments together. You Li contributed to analyzing the experimental results. Da Zhang wrote the first version of the paper, and all of the authors improved it.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. China National Standardization Management Committee. GB 5768.3-2009: Road Traffic Signs and Markings. Part 3: Road Traffic Markings; China Standards Press: Beijing, China, 2009.
  2. Carnaby, B. Poor road markings contribute to crash rates. In Proceedings of the Australasian Road Safety Research Policing Education Conference, Wellington, New Zealand, 14–16 November 2005.
  3. Horberry, T.; Anderson, J.; Regan, M.A. The possible safety benefits of enhanced road markings: A driving simulator evaluation. Transp. Res. Part F Traffic Psychol. Behav. 2006, 9, 77–87.
  4. Zheng, N.-N.; Tang, S.; Cheng, H.; Li, Q.; Lai, G.; Wang, F.-Y. Toward intelligent driver-assistance and safety warning system. IEEE Intell. Syst. 2004, 19, 8–11.
  5. McCall, J.C.; Trivedi, M.M. Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation. IEEE Trans. Intell. Transp. Syst. 2006, 7, 20–37.
  6. Vacek, S.; Schimmel, C.; Dillmann, R. Road-marking analysis for autonomous vehicle guidance. In Proceedings of the European Conference on Mobile Robots, Freiburg, Germany, 19–21 September 2007.
  7. Zhang, J.; Wang, F.-Y.; Wang, K.; Lin, W.-H.; Xu, X.; Chen, C. Data-driven intelligent transportation systems: A survey. IEEE Trans. Intell. Transp. Syst. 2011, 12, 1624–1639.
  8. Bertozzi, M.; Bombini, L.; Broggi, A.; Zani, P.; Cerri, P.; Grisleri, P.; Medici, P. GOLD: A framework for developing intelligent-vehicle vision applications. IEEE Intell. Syst. 2008, 23, 69–71.
  9. Se, S.; Brady, M. Road feature detection and estimation. Mach. Vis. Appl. 2003, 14, 157–165.
  10. Chiu, K.-Y.; Lin, S.-F. Lane detection using color-based segmentation. In Proceedings of the IEEE Intelligent Vehicles Symposium, Las Vegas, NV, USA, 6–8 June 2005; pp. 706–711.
  11. Sun, T.-Y.; Tsai, S.-J.; Chan, V. HSI color model based lane-marking detection. In Proceedings of the IEEE Intelligent Transportation Systems Conference, Toronto, ON, Canada, 17–20 September 2006; pp. 1168–1172.
  12. Liu, W.; Zhang, H.; Duan, B.; Yuan, H.; Zhao, H. Vision-based real-time lane marking detection and tracking. In Proceedings of the International IEEE Conference on Intelligent Transportation Systems, Beijing, China, 12–15 October 2008; pp. 49–54.
  13. Soheilian, B.; Paparoditis, N.; Boldo, D. 3D road marking reconstruction from street-level calibrated stereo pairs. ISPRS J. Photogramm. Remote Sens. 2010, 65, 347–359.
  14. Foucher, P.; Sebsadji, Y.; Tarel, J.-P.; Charbonnier, P.; Nicolle, P. Detection and recognition of urban road markings using images. In Proceedings of the International IEEE Conference on Intelligent Transportation Systems, Washington, DC, USA, 5–7 October 2011; pp. 1747–1752.
  15. Wu, P.-C.; Chang, C.-Y.; Lin, C.H. Lane-mark extraction for automobiles under complex conditions. Pattern Recognit. 2014, 47, 2756–2767.
  16. Fang, L.; Yang, B. Automated extracting structural roads from mobile laser scanning point clouds. Acta Geod. Cartogr. Sin. 2013, 2, 260–267.
  17. Zhao, H.; Shibasaki, R. Updating a digital geographic database using vehicle-borne laser scanners and line cameras. Photogramm. Eng. Remote Sens. 2005, 71, 415–424.
  18. Vosselman, G. Advanced point cloud processing. In Proceedings of the Photogrammetric Week, Stuttgart, Germany, 7–11 September 2009; pp. 137–146.
  19. Yang, B.; Wei, Z.; Li, Q.; Mao, Q. A classification-oriented method of feature image generation for vehicle-borne laser scanning point clouds. Acta Geod. Cartogr. Sin. 2010, 39, 540–545.
  20. Wu, B.; Yu, B.; Huang, C.; Wu, Q.; Wu, J. Automated extraction of ground surface along urban roads from mobile laser scanning point clouds. Remote Sens. Lett. 2016, 7, 170–179.
  21. Manandhar, D.; Shibasaki, R. Auto-extraction of urban features from vehicle-borne laser data. In Proceedings of the Symposium on Geospatial Theory, Processing and Applications, Ottawa, ON, Canada, 2002; pp. 650–655.
  22. Li, B.; Li, Q.; Shi, W.; Wu, F. Feature extraction and modeling of urban building from vehicle-borne laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 934–939.
  23. Tournaire, O.; Brédif, M.; Boldo, D.; Durupt, M. An efficient stochastic approach for building footprint extraction from digital elevation models. ISPRS J. Photogramm. Remote Sens. 2010, 65, 317–327.
  24. Lehtomäki, M.; Jaakkola, A.; Hyyppä, J.; Kukko, A.; Kaartinen, H. Detection of vertical pole-like objects in a road environment using vehicle-based laser scanning data. Remote Sens. 2010, 2, 641–664.
  25. Wu, B.; Yu, B.; Yue, W.; Shu, S.; Tan, W.; Hu, C.; Huang, Y.; Wu, J.; Liu, H. A voxel-based method for automated identification and morphological parameters estimation of individual street trees from mobile laser scanning data. Remote Sens. 2013, 5, 584–611.
  26. Li, L.; Li, Y.; Li, D. A method based on an adaptive radius cylinder model for detecting pole-like objects in mobile laser scanning data. Remote Sens. Lett. 2016, 7, 249–258.
  27. Jaakkola, A.; Hyyppä, J.; Hyyppä, H.; Kukko, A. Retrieval algorithms for road surface modelling using laser-based mobile mapping. Sensors 2008, 8, 5238–5249.
  28. Yang, B.; Fang, L.; Li, Q.; Li, J. Automated extraction of road markings from mobile LiDAR point clouds. Photogramm. Eng. Remote Sens. 2012, 78, 331–338.
  29. Kammel, S.; Pitzer, B. Lidar-based lane marker detection and mapping. In Proceedings of the IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 1137–1142.
  30. Chen, X.; Kohlmeyer, B.; Stroila, M.; Alwar, N.; Wang, R.; Bach, J. Next generation map making: Geo-referenced ground-level LiDAR point clouds for automatic retro-reflective road feature extraction. In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, WA, USA, 4–6 November 2009; pp. 488–491.
  31. Smadja, L.; Ninot, J.; Gavrilovic, T. Road extraction and environment interpretation from LiDAR sensors. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Saint-Mandé, France, 1–3 September 2010; pp. 281–286.
  32. Guan, H.Y.; Li, J.; Yu, Y.T.; Wang, C.; Chapman, M.; Yang, B.S. Using mobile laser scanning data for automated extraction of road markings. ISPRS J. Photogramm. Remote Sens. 2014, 87, 93–107.
  33. Mancini, A.; Frontoni, E.; Zingaretti, P. Automatic road object extraction from mobile mapping systems. In Proceedings of the IEEE/ASME International Conference on Mechatronics and Embedded Systems and Applications (MESA), Suzhou, China, 8–10 July 2012; pp. 281–286.
  34. Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P. Automatic detection of zebra crossings from mobile LiDAR data. Opt. Laser Technol. 2015, 70, 63–70.
  35. Yu, Y.; Li, J.; Guan, H.; Jia, F.; Wang, C. Learning hierarchical features for automated extraction of road markings from 3-D mobile LiDAR point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 709–726.
  36. Rosenfeld, A.; De La Torre, P. Histogram concavity analysis as an aid in threshold selection. IEEE Trans. Syst. Man Cybern. 1983, SMC-13, 231–235.
  37. Franke, R.; Nielson, G. Smooth interpolation of large sets of scattered data. Int. J. Numer. Methods Eng. 1980, 15, 1691–1704.
  38. Otsu, N. A threshold selection method from gray-level histograms. Automatica 1975, 11, 23–27.
  39. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
Figure 1. The flowchart of the method.
Figure 2. A sample of a road profile.
Figure 3. An illustration of sectioning road data.
Figure 4. Histogram concavity analysis.
Figure 5. A sample of elevation filtering: (a) point clouds before elevation filtering (coloured by elevation); (b) point clouds after elevation filtering (coloured by elevation); and (c) elevation histogram.
Figure 6. (a) Elevation-gradient image; (b) image after binarization; and (c) image after dilation.
Figure 7. Road surface partitioning.
Figure 8. Intensity histogram: (a) Rect1; and (b) Rect2.
Figure 9. Dispersion degree filtering.
Figure 10. The model of zebra crossings: (a) the distribution direction is perpendicular to the road centreline; and (b) the distribution direction is oblique to the road centreline.
Figure 11. Principal component analysis of road markings.
Figure 12. An overview of the experimental data.
Figure 13. Different elevation-gradient images with different grid sizes: (a) GSD = 0.03 m; (b) GSD = 0.06 m; (c) GSD = 0.09 m; and (d) GSD = 0.12 m.
Figure 14. The results of road surface segmentation.
Figure 15. The extracted road markings.
Figure 16. The unrecognized zebra crossing.
Table 1. Description of the datasets.

Dataset    Length (m)    Number of Points
1          1200          45,175,744
2          1350          43,407,389
3          600           16,624,370
Table 2. Parameters of road marking extraction.

Name    Value      Name    Value
Rx      2 m        Tr      0.8
Ry      1 m        NP      5
Td      40         TD      0.007 m
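For anyone re-implementing the pipeline, the settings in Table 2 can be kept together in a single configuration object. This is only an illustrative sketch: the container name and comments are ours, while the symbols and values come verbatim from the table (each parameter's meaning is defined in the main text).

```python
# Road-marking extraction parameters from Table 2.
# The dictionary name and comments are illustrative additions;
# the keys and values are taken directly from the table.
EXTRACTION_PARAMS = {
    "Rx": 2.0,    # metres
    "Ry": 1.0,    # metres
    "Td": 40,
    "Tr": 0.8,
    "NP": 5,
    "TD": 0.007,  # metres
}

# Distances are kept in metres so thresholds compare directly
# against point-cloud coordinates.
print(EXTRACTION_PARAMS["Rx"], EXTRACTION_PARAMS["TD"])
```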
Table 3. Recognition results of zebra crossings.

Dataset    Number of Zebra Crossings    Recognition Rate of the Proposed Method (%)    Recognition Rate of Riveiro's Method (%)
1          3                            100.00                                         66.67
2          6                            100.00                                         66.67
3          2                            50.00                                          50.00
Total      11                           90.91                                          63.64
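The totals in Table 3 follow directly from the per-dataset counts: of the 3, 6, and 2 crossings in the three datasets, the proposed method misses only one crossing (in dataset 3), while the per-dataset rates for Riveiro's method imply 2, 4, and 1 detections. A short sketch (variable names are ours) reproduces the bottom row:

```python
# Zebra-crossing counts per dataset (Table 3) and the detection
# counts implied by the per-dataset recognition rates.
crossings = [3, 6, 2]   # datasets 1-3
proposed  = [3, 6, 1]   # 100.00%, 100.00%, 50.00%
riveiro   = [2, 4, 1]   # 66.67%, 66.67%, 50.00%

total = sum(crossings)  # 11
rate_proposed = round(100 * sum(proposed) / total, 2)
rate_riveiro  = round(100 * sum(riveiro) / total, 2)
print(rate_proposed, rate_riveiro)  # 90.91 63.64
```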
Table 4. Qualitative evaluation results.

Dataset    Zebra Crossing    r (%)    p (%)    θz (°)    θr (°)
1          1                 95.97    96.25    2.50      1.20
1          2                 96.60    99.57    0.87      0.24
1          3                 99.08    95.45    0.08      0.06
2          1                 94.55    94.20    1.00      0.02
2          2                 91.70    97.87    0.28      0.15
2          3                 92.68    98.67    0.33      0.03
2          4                 95.69    91.94    0.50      0.44
2          5                 96.56    98.84    1.51      0.17
2          6                 96.40    98.93    1.51      0.46
3          1                 97.03    94.54    0.74      0.05
3          2                 /        /        /         /

Li, L.; Zhang, D.; Ying, S.; Li, Y. Recognition and Reconstruction of Zebra Crossings on Roads from Mobile Laser Scanning Data. ISPRS Int. J. Geo-Inf. 2016, 5, 125. https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi5070125