Article

Robust Object Segmentation Using a Multi-Layer Laser Scanner

1 Electrical and Electronic Engineering Department, Yonsei University, Seoul 120-749, Korea
2 Advanced Driver Assistance System Recognition Development Team, Hyundai Motors, Gyeonggi 445-706, Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2014, 14(11), 20400-20418; https://0-doi-org.brum.beds.ac.uk/10.3390/s141120400
Submission received: 14 September 2014 / Revised: 21 October 2014 / Accepted: 21 October 2014 / Published: 29 October 2014
(This article belongs to the Special Issue Positioning and Tracking Sensors and Technologies in Road Transport)

Abstract

The major problem in an advanced driver assistance system (ADAS) is the proper use of sensor measurements and the recognition of the surrounding environment. To this end, there are several types of sensors to consider, one of which is the laser scanner. In this paper, we propose a method to segment the measurements of the surrounding environment obtained by a multi-layer laser scanner. In the segmentation, a full set of measurements is decomposed into several segments, each representing a single object. Sometimes a ghost is detected due to the ground or fog, and the ghost has to be eliminated to ensure the stability of the system. The proposed method is implemented on a real vehicle, and its performance is tested in a real-world environment. The experiments show that the proposed method performs well in many real-life situations.

1. Introduction

With the recent developments in vehicular technology, the advanced driver assistance system (ADAS) concept has spread rapidly; however, many problems remain to be addressed before the field of ADASs can expand widely. The biggest problem in ADAS is the proper use of sensor measurements and the recognition of the surrounding environment. To this end, several types of sensors have been considered, including radar and visual or infrared (IR) cameras. Unfortunately, none of these sensors alone is sufficient for ADAS, and each has its own shortcomings.

For example, radar returns relatively accurate distances to obstacles, but its bearing measurements are not accurate. Radar cannot recognize object classes, and it also suffers from frequent false detections [1–4]. A visual camera is another ADAS tool. This type of camera returns a relatively accurate bearing to the obstacle, but its distance measurement is not reliable. The camera is capable of object recognition, but it also exhibits a high false detection rate. Thus, most current systems combine several sensors to compensate for the drawbacks of each sensor and to obtain reliable information about the nearby environment [5–9].

Recently, the laser scanner has received attention within the ADAS community, and it is considered to be a strong candidate for the primary sensor in ADAS [1,10]. The strong points of the laser scanner are its ability to accurately determine both near and far distances as well as the bearing to an obstacle. In addition, the object detection of a laser scanner is reliable and robust, and it can recognize object classes to some extent through determination of the contour of the surrounding environment.

Thus, unlike a camera or radar, a laser scanner can be used as the sole sensor for ADAS without being combined with other sensors. Further, if the laser scanner is combined with other sensors, it can compensate for all the drawbacks, thereby improving the recognition accuracy.

Because the range and bearing measurements of the laser scanner are sufficiently accurate, the laser scanner is considered in this paper as a single sensor for ADAS. In this paper, laser scanners are divided into three types according to the number of layers: single-layer, multi-layer, and three-dimensional (3D) laser scanners. A single-layer laser scanner consists of only one layer, a multi-layer laser scanner is composed of more than one but fewer than eight layers, and a 3D laser scanner is composed of eight or more layers. A single-layer laser scanner obtains two-dimensional (2D) information, a 3D laser scanner obtains full 3D information, and a multi-layer laser scanner obtains limited 3D information. In general, the information from a laser scanner is proportional to the number of layers, but a 3D laser scanner is expensive and difficult to install on a vehicle, whereas single-layer and multi-layer laser scanners can be implemented inside a vehicle's body [11]. Therefore, the 3D laser scanner is not yet suitable for ADAS, and the multi-layer laser scanner is currently the more suitable choice.

The remainder of this paper is organized as follows: Section 2 outlines the obstacle recognition system using a laser scanner. Related works are reviewed in Section 3. The segmentation method for a multi-layer laser scanner and the ghost elimination are explained in Section 4. The proposed system is installed on a vehicle and applied to actual urban road navigation; the experimental results are presented in Section 5. Robustness is discussed in Section 6. Finally, conclusions are drawn in Section 7.

2. System Outline

The system aims to detect obstacles through the processes of segmentation, classification, and tracking. Figure 1 shows the outline of the system developed in this paper.

In the segmentation step, the proposed system receives a full scan (a set of measurement points) from a multi-layer laser scanner and decomposes the set into several segments, each of which corresponds to an object. In this step, outliers are also removed to avoid performance degradation. In the classification step, segment features are computed and classified [12–14]. In the tracking step, the location and velocity of each segment are estimated over time. Segmentation is the essential prerequisite for classification and tracking [15–18]. In this paper, we focus on the methods for segmentation and outlier elimination.

3. Related Works

A laser scanner detects the closest obstacle at each bearing angle and returns the angle-wise distance to the obstacle. The output of the laser scanner can be modeled by the following set of pairs:

$Z^t = \{ p_1^t, p_2^t, p_3^t, \ldots, p_N^t \}$ (1)
$p_i^t = (r_i^t, \theta_i^t)$ for $i = 1, \ldots, N$ (2)
$\theta_i^t > \theta_{i-1}^t$ for $i = 2, \ldots, N$ (3)
where $N$ denotes the number of scanner measurement points, as shown in Figure 2; the superscript $t$ denotes the measurement time; and $r_i^t$ and $\theta_i^t$ denote the distance and bearing to the obstacle, respectively. Equation (3) reflects a property of the laser scanner: the scanner scans the environment from left to right. As stated in Section 2, segmentation is the first step of object detection by the laser scanner. A scan of the measurements, as given in Equations (1) and (2), is decomposed into several groups called segments, as shown in Figure 2.

In general, segmentation methods can be classified into two kinds: the geometric shape method and the breakpoint detection (BD) method. The first method assumes geometric shapes for the segments, such as a line or an edge, and decomposes the scanner measurements into the predetermined shapes [19,20]. The BD method decomposes the scanner measurements into segments based on the Euclidean distance between the consecutive points $p_i^t$ and $p_{i-1}^t$ [21] or using a Kalman filter [22]. BD methods based on the distances between consecutive points are the most popular and are widely used for laser segmentation [23–27].

In distance-based BD, if the distance between two consecutive points is greater than a threshold $D_{thd}$, the two points are likely to belong to different objects, and a breakpoint is placed between them, as shown in Figure 2a.

In Figure 2, the laser scanner returns 12 data points ($p_1^t$–$p_{12}^t$), and the segment $S_n$ originates from the $n$th object ($n = 1, 2, 3$). The performance of BD segmentation depends on the choice of the threshold $D_{thd}$, and several methods have been developed for its selection. In [23], the threshold $D_{thd}$ is determined by:

$D_{thd} = C_0 + C_1 \min\{ r_i^t, r_{i-1}^t \}$ (4)

where $C_1 = \sqrt{2(1 - \cos(\theta_i^t - \theta_{i-1}^t))}$ and $C_0$ denotes the sensor noise. In [24], Lee employed Equation (5) to select the breakpoints:

$D_{thd} = \left| \dfrac{r_i^t - r_{i-1}^t}{r_i^t + r_{i-1}^t} \right|$ (5)

Borges et al. recently proposed the adaptive breakpoint detector (ABD) in [25]. In the ABD, the threshold $D_{thd}$ is determined by:

$D_{thd} = r_{i-1}^t \dfrac{\sin(\Delta\theta)}{\sin(\lambda - \Delta\theta)} + 3\sigma_r$ (6)

which depends adaptively on $r_{i-1}^t$ and $\Delta\theta$, as shown in Figure 3. In Equation (6), $\Delta\theta = \theta_i^t - \theta_{i-1}^t$, $\lambda$ is chosen on the basis of user experience, and $\sigma_r$ is the sensor noise associated with $r$.
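For concreteness, the following Python sketch evaluates the three threshold rules of Equations (4)–(6) for a pair of consecutive points. The constants (c0, the Lee comparison threshold, λ, and σr) are illustrative assumptions, not values from the paper.

```python
import math

def dthd_dietmayer(r_prev, r_cur, dtheta, c0=0.05):
    """Equation (4): D_thd = C0 + C1 * min(r_i, r_{i-1}),
    with C1 = sqrt(2 * (1 - cos(dtheta))). c0 (sensor noise) is assumed."""
    c1 = math.sqrt(2.0 * (1.0 - math.cos(dtheta)))
    return c0 + c1 * min(r_prev, r_cur)

def is_breakpoint_lee(r_prev, r_cur, thd=0.1):
    """Equation (5) gives a normalized range difference; comparing it
    against a constant (value assumed) is our interpretation of [24]."""
    return abs(r_cur - r_prev) / (r_cur + r_prev) > thd

def dthd_abd(r_prev, dtheta, lam=math.radians(12), sigma_r=0.03):
    """Equation (6), the ABD of Borges et al. [25]; lam and sigma_r are
    illustrative values."""
    return r_prev * math.sin(dtheta) / math.sin(lam - dtheta) + 3.0 * sigma_r

# Two consecutive beams 0.125 deg apart hitting a surface about 20 m away:
dtheta = math.radians(0.125)
print(dthd_dietmayer(20.0, 20.1, dtheta))  # Dietmayer threshold (m)
print(is_breakpoint_lee(20.0, 20.1))       # Lee criterion -> False here
print(dthd_abd(20.0, dtheta))              # ABD threshold (m)
```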

All of the previous segmentation methods, however, were based on a single-layer laser scanner, and to our knowledge, no research has been reported regarding the segmentation for a multi-layer laser scanner. This is one of the contributions of this paper.

4. Segmentation for Multi-Layer Laser Scanner

4.1. ABD Segmentation for Multi-Layer Laser Scanner

A multi-layer laser scanner has multiple layers and returns the measurement points as shown in Figure 4.

Figure 4a shows the laser scanner measurements depicted on the x-y plane, and Figure 4b shows the scanner measurements superimposed on the camera image after calibration [28]. In the figure, the information from the different layers is represented by different colors. In the multi-layer laser scanner, each data point $p_i^t$ is not a pair but a triplet consisting of the distance $r_i^t$ and bearing $\theta_i^t$ to the obstacle and the layer index $l_i^t$. The output of the multi-layer laser scanner is modeled by:

$p_i^t = (r_i^t, \theta_i^t, l_i^t)$ for $i = 1, \ldots, N$ (7)
$\theta_i^t \geq \theta_{i-1}^t$ for $i = 2, \ldots, N$ (8)
$l_i^t > l_{i-1}^t$ where $\theta_i^t = \theta_{i-1}^t$ (9)

where a four-layer laser scanner, the IBEO LUX2010 [29], is used, and:

$l_i^t \in \{1, 2, 3, 4\}$ (10)

Equations (8) and (9) reflect the properties that the scanner scans from left to right and from bottom to top, respectively: points on the left are measured before points on the right, and at the same bearing $\theta_i^t$, lower layers are measured before upper layers. Directly applying a standard ABD to multi-layer measurements would discard the layer information and lead to inefficient segmentation.
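As a minimal data model for these triplet measurements (the type and field names are our own, not from the paper), a scan can be kept as an ordered list:

```python
from typing import NamedTuple

class ScanPoint(NamedTuple):
    r: float      # distance to the obstacle (m)
    theta: float  # bearing (rad), non-decreasing within a scan
    layer: int    # layer index l, 1 (bottom) to 4 (top) for the LUX2010

# Points arrive left to right and, at equal bearing, bottom to top,
# so a raw scan Z^t is simply an ordered list:
scan = [ScanPoint(20.0, -0.40, 1), ScanPoint(20.0, -0.40, 2),
        ScanPoint(19.9, -0.39, 1)]
```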

In order to develop a new segmentation method for the multi-layer scanner, two important properties of the multi-layer laser scanner should be considered:

(1) Two points at the same bearing but on different layers can belong to different objects. An example is the situation in Figure 4, in which both a sedan and a bus lie at the same bearing, but the sedan is closer to the scanner than the bus. In the box bounded by red dotted lines in Figure 4b, the points in the lower three layers belong to the sedan, but the points in the top layer belong to the bus. Thus, it must be determined whether two data points $p_i^t$ and $p_j^t$ with consecutive bearings and consecutive layers belong to the same object.

(2) The measurement sets are not complete, and there are many vacancies in the θ–l plane. When the scanner output is plotted on the θ–l plane, the ideal output looks like a grid, as in Figure 5a, but the actual output appears as in Figure 5b. Thus, grid-type segmentation using a nested for-loop cannot be used.

A layer-wise independent segmentation process could be considered; however, in our experience, this approach does not work well, and true multi-layer segmentation is required. In multi-layer laser segmentation, we say that two points $p_i^t$ and $p_j^t$ are connected if they belong to the same object. Unlike single-layer segmentation, we consider not only the connectivity between points with consecutive bearings but also the connectivity between points on consecutive layers. The foremost requirement in multi-layer segmentation is that the algorithm operate in a single pass with running time O(N); it must not trace back through all previous points, which would make it O(N²) or higher, where N is the number of measurement points.

In this paper, an O(N) fast segmentation method is presented. When each data point $p_i^t$ arrives, a candidate set $\mathcal{M}_i$ composed of previous data points $p_j^t$ ($j < i$) is maintained, and the connectivity of $p_i^t$ is tested only against the elements of $\mathcal{M}_i$, thereby achieving an O(N) implementation. In our segmentation method, the candidate set $\mathcal{M}_i$ consists of the newest data point in each layer; therefore, the maximum size of $\mathcal{M}_i$ is four in this case. Figure 6 illustrates the ABD segmentation process.

In Figure 6, we assume that eight data points ($p_1^t$–$p_8^t$) have already been received: $p_2^t$, $p_4^t$, $p_5^t$, $p_6^t$, and $p_8^t$ belong to segment $S_1$, and $p_1^t$, $p_3^t$, and $p_7^t$ belong to $S_2$, as in Figure 6a. This situation occurs as in Figure 4, in which two vehicles lie in the same direction but at different distances. The candidate set $\mathcal{M}_9$ is composed of $p_1^t$, $p_6^t$, $p_7^t$, and $p_8^t$. These four points are the newest points in each layer at this time, and they will be checked when a new point $p_9^t$ arrives. The points in $\mathcal{M}_9$ are indicated by the green circles in Figure 6a.

In Figure 6b, $p_9^t$ arrives, and its connectivity with the elements of $\mathcal{M}_9$ is tested in turn from the first to the fourth layer (in the order $p_8^t$, $p_6^t$, $p_7^t$, $p_1^t$) using the ABD. Here, we assume that $p_9^t$ is assigned to the first segment, as in Figure 6c. For example, if $\| p_9^t - p_8^t \| \leq D_{thd}$, then $p_9^t$ is assigned to the segment of $p_8^t$, which is $S_1$, and the remaining connectivity tests with $p_6^t$, $p_7^t$, and $p_1^t$ are cancelled. Figure 6d–f shows the segmentation of $p_{10}^t$. First, $\mathcal{M}_{10}$ is computed by:

$\mathcal{M}_{10} = \mathcal{M}_9 \cup \{ p_9^t \} - \{ p_6^t \} = \{ p_1^t, p_7^t, p_8^t, p_9^t \}$ (11)

in Figure 6d, where ∪ and − denote set union and subtraction, respectively. As before, the connectivity of $p_{10}^t$ is tested in turn with $p_8^t$, $p_9^t$, $p_7^t$, and $p_1^t$. If $\| p_{10}^t - p_8^t \| > D_{thd}$ and $\| p_{10}^t - p_9^t \| > D_{thd}$ but $\| p_{10}^t - p_7^t \| \leq D_{thd}$, as in Figure 6e, then $p_{10}^t$ is assigned to the segment of $p_7^t$, which is $S_2$, and the remaining connectivity test with $p_1^t$ is cancelled, as in Figure 6f.

In a similar way, $p_{11}^t$ is segmented in Figure 6g–i. As before, $\mathcal{M}_{11}$ is updated by:

$\mathcal{M}_{11} = \mathcal{M}_{10} \cup \{ p_{10}^t \} - \{ p_7^t \} = \{ p_1^t, p_8^t, p_9^t, p_{10}^t \}$ (12)

as in Figure 6g. The connectivity of $p_{11}^t$ is tested in turn with the elements $p_8^t$, $p_9^t$, $p_{10}^t$, and $p_1^t$ of $\mathcal{M}_{11}$. If all of the distances between $p_{11}^t$ and the elements of $\mathcal{M}_{11}$ are larger than $D_{thd}$, i.e., $\| p_{11}^t - p_j^t \| > D_{thd}$ for $j = 1, 8, 9, 10$, as in Figure 6h, then a new segment $S_3$ is created, and $p_{11}^t$ is assigned to $S_3$ (Figure 6i).

Table 1 shows the proposed segmentation algorithm for the multi-layer laser scanner. In the table, $N_{seg}$ denotes the number of segments, and $N$ and $L$ denote the number of data points and the number of layers, respectively. $S_1$ and $\mathcal{M}_1$ are initialized as empty sets, and the segmentation proceeds from $p_1^t$ to $p_N^t$.

In the $i$th iteration, the connectivity of $p_i^t$ with the elements of $\mathcal{M}_i$ is tested from the bottom layer to the top layer. Connectivity exists when the distance between $p_i^t$ and an element $p_j^t \in \mathcal{M}_i$ is smaller than the threshold $D_{thd}$ calculated by the ABD. If $p_j^t$ is the first connected point found, $p_i^t$ is assigned to $S_n$, the segment containing $p_j^t$.

If $p_i^t$ is not close enough to any element of $\mathcal{M}_i$, then $p_i^t$ is not connected to any existing segment; it therefore starts a new segment, and $N_{seg}$ is increased by one. At the end of each iteration, the candidate set $\mathcal{M}_{i+1}$ is updated from $\mathcal{M}_i$ and $p_i^t$.
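To make the bookkeeping concrete, the following Python sketch is one possible reading of the algorithm in Table 1, reusing the ScanPoint model from the earlier sketch. The candidate set is held as a per-layer dictionary, so each new point is tested against at most four predecessors, giving the O(N) single-pass behavior described above; the Δθ floor and the parameter values are our assumptions.

```python
import math

def dist(p, q):
    """Euclidean distance between two scan points on the x-y plane."""
    return math.hypot(p.r * math.cos(p.theta) - q.r * math.cos(q.theta),
                      p.r * math.sin(p.theta) - q.r * math.sin(q.theta))

def abd_threshold(p_prev, p_cur, lam=math.radians(12), sigma_r=0.03):
    """Equation (6). Flooring dtheta at the horizontal resolution is an
    assumption so that equal-bearing points on different layers still get
    a usable threshold; the paper does not specify this detail."""
    dtheta = max(abs(p_cur.theta - p_prev.theta), math.radians(0.125))
    return p_prev.r * math.sin(dtheta) / math.sin(lam - dtheta) + 3.0 * sigma_r

def segment_multilayer(scan):
    """O(N) multi-layer ABD segmentation (one reading of Table 1).
    Returns a segment id for every point, in scan order."""
    seg_id = [0] * len(scan)
    n_seg = 0
    newest = {}                            # layer -> newest point index (candidate set M_i)
    for i, p in enumerate(scan):
        assigned = False
        for layer in sorted(newest):       # test from the bottom layer to the top layer
            j = newest[layer]
            if dist(p, scan[j]) <= abd_threshold(scan[j], p):
                seg_id[i] = seg_id[j]      # join the segment of the first connected point
                assigned = True
                break                      # remaining connectivity tests are cancelled
        if not assigned:                   # not connected to any candidate
            n_seg += 1
            seg_id[i] = n_seg              # open a new segment
        newest[p.layer] = i                # update M_{i+1}: p replaces its layer's newest point
    return seg_id
```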

4.2. Robust Segmentation through Ghost Elimination

When the above ABD segmentation is applied on actual roads, ghost segments are sometimes detected. Here, a ghost segment refers to a segment that does not correspond to a real object but is nevertheless detected by the laser scanner. Figures 7 and 8 show examples of ghost segments. These ghosts pose a serious risk to safe driving. Most ghost segments are caused by (1) laser reflections from the ground surface or (2) light from vehicles or fog. Failing to identify a ghost segment seriously degrades the subsequent object classification performance.

Ghosts caused by reflections from the ground surface often occur when vehicles travel over bumpy roads or go up- or downhill. Such ghost segments are detected on only one layer, usually the first (bottom) layer, within a 40 m distance of the scanner, and they change rapidly, appearing, disappearing, and changing shape. Ghosts caused by headlights, tail lights, or nearby fog are likewise detected on only a single layer, within a 20 m distance of the scanner; they do not have a uniform shape and are detected intermittently.

Both kinds of ghosts exist only on a single layer and are detected within a short distance. Thus, our robust segmentation method is developed around the first property and is applied within a 40 m distance of the vehicle. For distances greater than 40 m, the ABD segmentation method explained in Section 4.1 is used.

The robust segmentation method is similar to the ABD segmentation of Section 4.1, with two main differences. The first difference is that when a point $p_i^t$ arrives in robust segmentation, it is not linked to a point on the same layer in $\mathcal{M}_i$. The reason is that ghost points tend to gather on a single layer; by not combining a new point with points on the same layer, ghost points cannot build a meaningful segment and are thus discarded. The second difference is that the candidate set $\mathcal{M}_i$ consists of not one but the two newest points in each layer. To prevent an object from being divided because of a ghost, we test connectivity against up to two of the newest points in each of the other layers.

Thus, $\mathcal{M}_i$ in robust segmentation can have up to eight (2 × 4) elements. Figure 9 illustrates the robust segmentation process. When a new point $p_9^t$ arrives, as in Figure 9a, the candidate set is $\mathcal{M}_9 = \{ p_1^t, p_2^t, p_3^t, p_5^t, p_6^t, p_7^t, p_8^t \}$. When testing the connectivity of $p_9^t$ with the points in $\mathcal{M}_9$, we skip the tests with $p_2^t$ and $p_6^t$ because they are on the same layer as $p_9^t$. Thus, the connectivity of $p_9^t$ is tested only with the five points $\{ p_1^t, p_3^t, p_5^t, p_7^t, p_8^t \}$, as shown in Figure 9a. Figure 9b,c demonstrates the computation of $\mathcal{M}_{10}$ and $\mathcal{M}_{11}$ and the robust segmentation process when $p_{10}^t$ and $p_{11}^t$ arrive, respectively.

The problem with this approach is that if an obstacle is very shallow and is detected only on a single layer, it may be missed. However, this is rarely the case because of the sufficiently fine angular resolution of the multi-layer laser scanner. For the IBEO LUX2010, the vertical and horizontal angular resolutions are 0.8° and 0.125°, respectively, so at a distance of 20 m or more from the vehicle, an obstacle larger than 0.56 m is detected on more than two layers, and an obstacle larger than 0.26 m returns more than six points.
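As a quick check on these figures, at a range of 20 m the vertical layer spacing is $20 \times \tan(0.8^\circ) \approx 0.28$ m and the horizontal point spacing is $20 \times \tan(0.125^\circ) \approx 0.044$ m. An object taller than $0.56 \approx 2 \times 0.28$ m therefore crosses at least two layer gaps and is seen on more than two layers, and an object wider than 0.26 m spans roughly $0.26 / 0.044 \approx 6$ horizontal steps and thus returns more than six points.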

Table 2 shows the pseudo-code of the robust segmentation. In lines 4–14, when a new point $p_i^t$ is within 40 m of the scanner, the connectivity test with points on the same layer is skipped, as in Figure 9. In lines 15–24, when a new point is far from the scanner, its connectivity with points on the same layer is also tested, as in the ABD segmentation. The processes of creating a new segment and updating the candidate set $\mathcal{M}_{i+1}$ are the same as in the ABD segmentation. At the end of the algorithm, small segments are eliminated; a small segment is one whose number of points is smaller than $N_{min}$, the minimum number of points required for an object.
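A corresponding sketch of the robust variant (one possible reading of Table 2, reusing dist() and abd_threshold() from the previous sketch) changes only the candidate bookkeeping and the connectivity test. The 40 m near-range limit comes from the text, while n_min and the other details are assumptions:

```python
NEAR_RANGE = 40.0   # ghosts are observed within about 40 m (Section 4.2)

def segment_robust(scan, n_min=5):
    """Robust segmentation with ghost elimination (one reading of Table 2).
    n_min, the minimum number of points per object, is an assumed value."""
    seg_id = [0] * len(scan)
    n_seg = 0
    newest = {}                     # layer -> up to two newest point indices
    for i, p in enumerate(scan):
        assigned = False
        for layer in sorted(newest):
            if p.r <= NEAR_RANGE and layer == p.layer:
                continue            # difference 1: no same-layer links in the close area
            for j in reversed(newest[layer]):   # test the newest point first
                if dist(p, scan[j]) <= abd_threshold(scan[j], p):
                    seg_id[i] = seg_id[j]
                    assigned = True
                    break
            if assigned:
                break
        if not assigned:
            n_seg += 1
            seg_id[i] = n_seg
        keep = newest.setdefault(p.layer, [])
        keep.append(i)              # difference 2: keep the two newest points per layer
        if len(keep) > 2:
            keep.pop(0)
    # finally, eliminate small segments (likely ghosts): mark them with id 0
    size = {}
    for s in seg_id:
        size[s] = size.get(s, 0) + 1
    return [s if size[s] >= n_min else 0 for s in seg_id]
```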

5. Experiment

In this experiment, an IBEO LUX2010 multi-layer laser scanner and a camera are installed on a Kia K900 as shown in Figure 10. As previously stated, the LUX2010 has a total of four layers, and its horizontal and vertical resolutions are 0.125° and 0.8°, respectively. The camera is used to obtain the ground truth of the environment.

Figure 11 shows the segmentation results for six different scenarios. The first column shows the raw measurements from the IBEO scanner, and the second and third columns show the ABD and robust segmentation results, respectively. The fourth column contains the corresponding camera image with the scanner measurements superimposed.

Figure 11a and b show the results when the road is flat and ghosts are not detected. Only vehicles appear in Figure 11a, while both vehicles and pedestrians appear in Figure 11b. In the two scenarios, ghosts are not observed, and it can be seen that the ABD and robust segmentations produce the same results.

Figure 11c,d shows the results when a ghost caused by the ground surface is detected. In the figures, the dots in the red box are the ghost, detected by the bottom-layer laser, which is drawn in blue. When the ABD segmentation method is applied (second column), the ghost forms an outlier segment and appears to be an obstacle. When the robust segmentation method is applied (third column), however, the ghost is successfully removed, leaving only the segments from the preceding vehicles.

Figure 11e,f shows the results in rainy, foggy test conditions. As in Figure 11c,d, the dots in the red box are detected by the second-layer laser and appear to result from fog. When the ABD segmentation method is applied (second column), the ghost survives and could trigger the brake system, which can lead to an accident. When the robust segmentation method is applied (third column), however, the ghost is successfully removed. For quantitative analysis, we gathered samples from four scenarios, as listed in Table 3, and applied the ABD and robust segmentation methods.

All the samples were clipped manually from the IBEO scans. Tables 4, 5, 6, and 7 describe the results of ghost elimination as the value of λ in Equation (6) is varied from 10° to 15°. The experiments were conducted on an uphill road, on a flat road, in rainy weather, and in foggy weather, and their results are shown in Tables 4, 5, 6, and 7, respectively.

In the tables, the ABD and robust segmentation methods are compared in terms of (1) the ghost elimination ratio; (2) the inlier survival ratio; and (3) the computation time. The ghost elimination ratio and the inlier survival ratio are defined as:

$\text{Ghost elimination ratio} = \dfrac{\text{number of eliminated ghosts}}{\text{number of ghosts}} \times 100$ (13)
$\text{Inlier survival ratio} = \dfrac{\text{number of surviving inliers}}{\text{number of inliers}} \times 100$ (14)
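Both ratios are straightforward to compute from the raw counts; in the Python sketch below, the eliminated and surviving counts are back-calculated from the λ = 10° robust-segmentation row of Table 4 purely for illustration.

```python
def ghost_elimination_ratio(eliminated: int, ghosts: int) -> float:
    return 100.0 * eliminated / ghosts        # Equation (13)

def inlier_survival_ratio(survived: int, inliers: int) -> float:
    return 100.0 * survived / inliers         # Equation (14)

# Uphill road, Table 3: 4,634 ghosts and 33,183 inliers.
print(ghost_elimination_ratio(4561, 4634))    # ~98.425 (cf. Table 4)
print(inlier_survival_ratio(32630, 33183))    # ~98.333 (cf. Table 4)
```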

From the tables, the proposed robust method outperforms the ABD in all cases with similar computation time.

6. Discussion

Obviously, the goal is to remove as many ghosts as possible while retaining as many inliers as possible, thus keeping both ratios high. It can be seen from Tables 4, 5, 6, and 7 that the results of robust segmentation are better than those of ABD segmentation in every condition. In particular, the proposed robust method achieves a ghost elimination ratio of more than 95% regardless of the weather or the road, whereas the ABD achieves only 17% to 65%, depending on the conditions. When it rains or the car travels uphill, and ghosts therefore occur frequently, the ABD fails to eliminate the ghosts, but the robust method removes most of them. Interestingly, the ABD also performs relatively well in foggy weather; the reason is that fog ghosts are detected intermittently and tend not to form a segment. Further, the ghost elimination ratio is not much affected by the value of λ, presumably because the ghosts are very close to the sensor and far enough from the other obstacles.

The inlier survival ratio is also an important factor because if an inlier is accidentally removed by the algorithm, it can lead to a serious accident. The inlier survival ratios are also shown in Tables 4, 5, 6, and 7. Both segmentation methods have sufficiently high inlier survival ratios, and neither accidentally removes important measurement points.

The ABD and robust segmentation methods are also compared in terms of computation time. The computation times in Tables 4, 5, 6, and 7 are averages over 100 frames. The robust method takes slightly longer than the ABD, but the extra time is small; the reason is that the ghosts tend to form a number of small segments, and eliminating them takes some time.

7. Conclusions

In this paper, a new object segmentation method for a multi-layer laser scanner has been proposed. For robust segmentation, efficient connectivity algorithms were developed and implemented with O(N) complexity. The proposed method was installed on an actual vehicle, and its performance was tested using real urban scenarios. It was demonstrated that the proposed system works well, even under complex urban road conditions.

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2010-0012631).

Author Contributions

Beomseong Kim, Baehoon Choi, and Euntai Kim designed the algorithm, carried out the experiments, analyzed the results, and wrote the paper. Minkyun Yoo and Hyunju Kim carried out the experiments, developed the hardware, and gave helpful suggestions on this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Keat, C.T.M.; Pradalier, C.; Laugier, C. Vehicle detection and car park mapping using laser scanner. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), Alberta, Canada, 2–6 August 2005; pp. 2054–2060.
  2. Mendes, A.; Bento, L.C.; Nunes, U. Multi-target detection and tracking with a laser scanner. In Proceedings of the 2004 IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 796–801.
  3. Fuerstenberg, K.Ch.; Dietmayer, K. Object tracking and classification for multiple active safety and comfort applications using a multilayer laser scanner. In Proceedings of the 2004 IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 802–807.
  4. Gate, G.; Nashashibi, F. Fast algorithm for pedestrian and group of pedestrians detection using a laser scanner. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi'an, China, 3–5 June 2009; pp. 1322–1327.
  5. Wu, S.; Decker, S.; Chang, P.; Camus, T.; Eledath, J. Collision sensing by stereo vision and radar sensor fusion. IEEE Trans. Intell. Transp. Syst. 2009, 4, 606–614.
  6. Oliveira, L.; Nunes, U.; Peixoto, P.; Silva, M.; Moita, F. Semantic fusion of laser and vision in pedestrian detection. Pattern Recognit. 2010, 43, 3648–3659.
  7. Premebida, C.; Ludwig, O.; Nunes, U. LIDAR and vision-based pedestrian detection system. J. Field Robot. 2009, 26, 696–711.
  8. Musleh, B.; Garcia, F.; Otamendi, J.; Armingol, J.M.; Escalera, A. Identifying and tracking pedestrians based on sensor fusion and motion stability predictions. Sensors 2010, 10, 8028–8053.
  9. Li, Q.; Chen, L.; Li, M.; Shaw, S.L.; Nuchter, A. A sensor-fusion drivable-region and lane-detection system for autonomous vehicle navigation in challenging road scenarios. IEEE Trans. Veh. Technol. 2014, 63, 540–555.
  10. Gidel, S.; Checchin, P.; Blanc, C.; Chateau, T.; Trassoudaine, L. Pedestrian detection and tracking in an urban environment using a multilayer laser scanner. IEEE Trans. Intell. Transp. Syst. 2010, 11, 579–588.
  11. Grisleri, P.; Fedriga, I. The BRAiVE autonomous ground vehicle platform. IFAC Symp. Intell. Auton. Veh. 2010, 7, 3648–3659.
  12. Lin, Y.; Puttonen, E.; Hyyppä, J. Investigation of tree spectral reflectance characteristics using a mobile terrestrial line spectrometer and laser scanner. Sensors 2013, 13, 9305–9320.
  13. Premebida, C.; Ludwig, O.; Nunes, U. Exploiting LIDAR-based features on pedestrian detection in urban scenarios. In Proceedings of the 12th International IEEE Conference on Intelligent Transportation Systems, St. Louis, MO, USA, 4–7 October 2009; pp. 1–6.
  14. Wender, S.; Fuerstenberg, K.Ch.; Dietmayer, K. Object tracking and classification for intersection scenarios using a multilayer laser scanner. In Proceedings of the 11th World Congress on Intelligent Transportation Systems, Nagoya, Japan, 22 October 2004.
  15. García, F.; Jiménez, F.; Anaya, J.J.; Armingol, J.M.; Naranjo, J.E.; de la Escalera, A. Distributed pedestrian detection alerts based on data fusion with accurate localization. Sensors 2013, 13, 11687–11708.
  16. Jiménez, F.; Naranjo, J.E.; Gómez, Ó. Autonomous manoeuvring systems for collision avoidance on single carriageway roads. Sensors 2012, 12, 16498–16521.
  17. Ozaki, M.; Kakimuma, K.; Hashimoto, M.; Takahashi, K. Laser-based pedestrian tracking in outdoor environments by multiple mobile robots. Sensors 2012, 12, 14489–14507.
  18. Teixidó, M.; Pallejà, T.; Tresanchez, M.; Nogués, M.; Palacín, J. Measuring oscillating walking paths with a LIDAR. Sensors 2011, 11, 5071–5086.
  19. Mavaei, S.M.; Imanzadeh, R.H. Line segmentation and SLAM for rescue robots in unknown environments. World Appl. Sci. J. 2012, 17, 1627–1635.
  20. Yang, S.W.; Wang, C.C.; Chang, C.H. RANSAC matching: Simultaneous registration and segmentation. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 3–7 May 2010; pp. 1905–1912.
  21. Skrzypczynski, P. Building geometrical map of environment using IR range finder data. Intell. Auton. Syst. 1995, 4, 408–412.
  22. Premebida, C.; Nunes, U. Segmentation and Geometric Primitives Extraction from 2D Laser Range Data for Mobile Robot Applications; Technical Report ISRLM2005/02; Institute of Systems and Robotics: Coimbra, Portugal, 2005.
  23. Dietmayer, K.; Sparbert, J.; Streller, D. Model based object classification and object tracking in traffic scenes from range images. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Tokyo, Japan, 13–17 May 2001.
  24. Lee, K.J. Reactive Navigation for an Outdoor Autonomous Vehicle. Master's Thesis, University of Sydney, Sydney, Australia, 2001.
  25. Borges, G.A.; Aldon, M. Line extraction in 2D range images for mobile robotics. J. Intell. Robot. Syst. 2004, 40, 267–297.
  26. An, S.Y.; Kang, J.G.; Lee, L.K.; Oh, S.Y. Line segment-based indoor mapping with salient line feature extraction. Adv. Robot. 2012, 26, 437–460.
  27. Jimenez, F.; Naranjo, J.E. Improving the obstacle detection and identification algorithms of a laserscanner-based collision avoidance system. Transp. Res. Part C 2011, 19, 658–672.
  28. Zhang, Z. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 1, pp. 666–673.
  29. IBEO Automotive Systems GmbH, Hamburg, Germany. Technical Facts IBEO LUX 2010. Available online: http://www.ibeo-as.com/ (accessed on 28 October 2014).
Figure 1. System outline.
Figure 2. Segmentation by a single-layer laser scanner: (a) example of laser scanner measurements; (b) result of segmentation.
Figure 3. Adaptive breakpoint detector (ABD).
Figure 4. Multi-layer laser scanner: (a) scan data; (b) corresponding image.
Figure 5. Scan points of the multi-layer laser scanner on the θ–l plane: (a) ideal output; (b) actual output.
Figure 6. ABD segmentation for the multi-layer laser scanner.
Figure 7. Ghost detection caused by reflection from the ground surface: (a) uphill road; (b) flat road.
Figure 8. A ghost caused by moisture: (a) rainy weather; (b) foggy weather.
Figure 9. Robust segmentation for the multi-layer laser scanner.
Figure 10. Vehicle and laser scanner used in the experiment.
Figure 11. Segmentation results: (a) vehicle; (b) pedestrian; (c) ghost on an uphill road; (d) ghost on a flat road; (e) ghost created by rain; (f) ghost created by fog. Result 1 (second column): ABD segmentation; Result 2 (third column): robust segmentation.
Table 1. ABD segmentation for the multi-layer laser scanner.

S = ABD_Segmentation_for_multi_layer_laser_scanner(Z^t)

1   S_1 ← Ø, M_1 ← Ø, N_seg ← 0
2   for i = 1 to N do
3     Select p_i^t = (r_i^t, θ_i^t, l_i^t)
4     for all p_j^t ∈ M_i do                  ⫽ M_i is the candidate set
5       Select p_j^t = (r_j^t, θ_j^t, l_j^t)
6       Calculate D_thd using (p_j^t, p_i^t) by the ABD^a
7       if ‖p_i^t − p_j^t‖ ≤ D_thd then       ⫽ check the connectivity
8         p_i^t ∈ S_n where p_j^t ∈ S_n       ⫽ p_i^t is added to the segment S_n
9         break
10      endif
11    endfor
12    if p_i^t is not connected to any point in M_i then
13      N_seg ← N_seg + 1
14      p_i^t ∈ S_N_seg                       ⫽ p_i^t belongs to the new segment S_N_seg
15    endif
16    Update M_{i+1} from (M_i, p_i^t)
17  endfor

^a ABD is the adaptive breakpoint detector.

Table 2. Pseudo-code of robust segmentation through ghost elimination.

S = Robust_Segmentation(Z^t)

1   S_1 ← Ø, M_1 ← Ø, N_seg ← 0
2   for i = 1 to N do
3     Select p_i^t = (r_i^t, θ_i^t, l_i^t)
4     if p_i^t is in the close area (within 40 m) then
5       for all p_j^t ∈ M_i do                ⫽ M_i is the candidate set
6         Select p_j^t = (r_j^t, θ_j^t, l_j^t)
7         if l_j^t ≠ l_i^t then               ⫽ skip points on the same layer
8           Calculate D_thd using (p_j^t, p_i^t) by the ABD^a
9           if ‖p_i^t − p_j^t‖ ≤ D_thd then   ⫽ check the connectivity
10            p_i^t ∈ S_n where p_j^t ∈ S_n   ⫽ p_i^t is added to the segment S_n
11            break
12          endif
13        endif
14      endfor
15    else                                    ⫽ p_i^t is in the far area
16      for all p_j^t ∈ M_i do                ⫽ M_i is the candidate set
17        Select p_j^t = (r_j^t, θ_j^t, l_j^t)
18        Calculate D_thd using (p_j^t, p_i^t) by the ABD^a
19        if ‖p_i^t − p_j^t‖ ≤ D_thd then     ⫽ check the connectivity
20          p_i^t ∈ S_n where p_j^t ∈ S_n     ⫽ p_i^t is added to the segment S_n
21          break
22        endif
23      endfor
24    endif
25    if p_i^t is not connected to any point in M_i then
26      N_seg ← N_seg + 1
27      p_i^t ∈ S_N_seg                       ⫽ p_i^t belongs to the new segment S_N_seg
28    endif
29    Update M_{i+1} from (M_i, p_i^t)
30  endfor
31  Eliminate small segments in S             ⫽ S is the set of all segments S_n

^a ABD is the adaptive breakpoint detector.

Table 3. The numbers of ghosts, inliers, and total measurement points.

Circumstance | Ghosts | Inliers | Total
Uphill road | 4,634 | 33,183 | 37,817
Flat road | 1,278 | 40,792 | 42,070
Rainy weather | 2,146 | 26,335 | 28,481
Foggy weather | 1,511 | 36,964 | 38,475
Table 4. Results of ghost elimination for the uphill road.

λ (°) | ABD: Ghost Elim. (%) | ABD: Inlier Surv. (%) | ABD: Time (ms) | Robust: Ghost Elim. (%) | Robust: Inlier Surv. (%) | Robust: Time (ms)
10 | 19.033 | 99.702 | 42.057 | 98.425 | 98.333 | 47.676
11 | 18.494 | 99.735 | 44.409 | 98.144 | 98.379 | 51.036
12 | 18.062 | 99.756 | 45.243 | 97.820 | 98.457 | 51.829
13 | 17.846 | 99.765 | 45.832 | 97.518 | 98.484 | 52.594
14 | 17.717 | 99.765 | 44.417 | 97.195 | 98.505 | 50.557
15 | 17.479 | 99.792 | 43.991 | 97.065 | 98.550 | 50.455
Table 5. Results of ghost elimination for the flat road.

λ (°) | ABD: Ghost Elim. (%) | ABD: Inlier Surv. (%) | ABD: Time (ms) | Robust: Ghost Elim. (%) | Robust: Inlier Surv. (%) | Robust: Time (ms)
10 | 42.097 | 99.980 | 30.193 | 98.513 | 99.909 | 31.147
11 | 41.549 | 99.983 | 29.701 | 98.279 | 99.909 | 30.669
12 | 40.141 | 99.983 | 29.386 | 98.122 | 99.909 | 30.413
13 | 38.654 | 99.983 | 29.173 | 97.887 | 99.909 | 30.252
14 | 38.419 | 99.983 | 29.117 | 97.731 | 99.909 | 30.111
15 | 37.637 | 99.983 | 29.787 | 97.574 | 99.909 | 31.369
Table 6. Results of ghost elimination in rainy weather.

λ (°) | ABD: Ghost Elim. (%) | ABD: Inlier Surv. (%) | ABD: Time (ms) | Robust: Ghost Elim. (%) | Robust: Inlier Surv. (%) | Robust: Time (ms)
10 | 34.669 | 99.992 | 11.105 | 94.548 | 99.951 | 11.445
11 | 34.669 | 99.992 | 11.157 | 94.548 | 99.958 | 11.425
12 | 34.669 | 99.992 | 12.133 | 94.548 | 99.962 | 12.496
13 | 34.669 | 99.992 | 12.270 | 94.548 | 99.962 | 12.554
14 | 34.669 | 99.992 | 12.592 | 94.548 | 99.962 | 12.883
15 | 34.669 | 99.992 | 12.885 | 94.548 | 99.966 | 13.208
Table 7. Results of ghost elimination in foggy weather.

λ (°) | ABD: Ghost Elim. (%) | ABD: Inlier Surv. (%) | ABD: Time (ms) | Robust: Ghost Elim. (%) | Robust: Inlier Surv. (%) | Robust: Time (ms)
10 | 63.269 | 99.946 | 21.019 | 97.088 | 99.221 | 23.411
11 | 63.203 | 99.946 | 20.161 | 97.022 | 99.261 | 22.484
12 | 63.137 | 99.951 | 20.711 | 96.956 | 99.294 | 23.019
13 | 63.071 | 99.954 | 20.983 | 96.889 | 99.294 | 23.375
14 | 63.071 | 99.957 | 21.439 | 96.889 | 99.321 | 23.961
15 | 62.674 | 99.959 | 22.869 | 96.823 | 99.359 | 25.473
