
Edge-Based Licence-Plate Template Matching for Identifying Similar Vehicles

Department of Computer Systems Engineering, Faculty of Information and Communication Technology, Tshwane University of Technology, Soshanguve 0001, South Africa
* Author to whom correspondence should be addressed.
Submission received: 28 July 2021 / Revised: 30 August 2021 / Accepted: 17 September 2021 / Published: 9 October 2021

Abstract

This paper presents a licence-plate recognition method for identifying vehicles with similar licence-plates. The method uses a modified licence-plate recognition pipeline in which licence-plate template matching replaces character segmentation and recognition. Only edge detection, combined with a line-ratio calculation, is used to locate and extract licence-plates. The extracted licence-plate templates are then compared for licence-plate matching. The results show that the method performs well under differing conditions and that it is computationally cost-effective. They also show that licence-plate template matching is a reliable way of identifying similar vehicles, with a lower computational cost than character segmentation and recognition.

1. Introduction

Vehicle hijacking is a problem in South Africa, as the yearly crime statistics show. In 2018/19 there were roughly 32,000 incidents of motor-vehicle hijacking, and about 0.08% of individuals aged 16 and older were hijacked [1]. In 2019, an average of 2600 vehicles were hijacked per month. Many attempts have been made to reduce this on-road crime, but they have been inadequate because a vehicle is mobile and a hijacking can take place anywhere. A vehicle is usually followed to a quiet location and then hijacked, which makes this type of crime difficult to combat with traditional crime-fighting strategies. Technology offers a more effective way of combating such on-road hijackings.
To this end, automatic licence-plate recognition (ALPR) is used to identify suspicious vehicles that follow the ego vehicle with the intent of hijacking it. ALPR is the process of identifying the information in licence-plate images.
Automatic licence-plate recognition is normally performed by first extracting the licence-plate, followed by character segmentation on the extracted licence-plate to separate its characters. Finally, character recognition is performed, in which the isolated characters are classified to identify the letters and numbers on the licence-plate. Refs. [2,3,4] present licence-plate recognition using edge features, and [5] uses binary character processing. These methods use convolutional neural network (CNN) and artificial neural network (ANN)-based character recognition, which classifies characters efficiently; however, the neural networks make them computationally expensive. Refs. [6,7,8] present edge-based licence-plate extraction and recognition methods, while [9] presents a zone-based licence-plate recognition method. These methods have a high computational cost because of the character segmentation and recognition steps, with the added risk of misclassifying characters in the recognition stage.
This paper overcomes the above challenges by using a modified pipeline in which licence-plate template matching replaces the character segmentation and recognition stages. The method uses extracted licence-plates as whole-image templates without any further processing, which reduces the computational cost and removes the risk of character misclassification. A line-ratio computation method is also presented for locating and extracting licence-plates on an edge map. The ratio computation takes two vertical edges at a time, calculates the width and height they define, and checks whether the ratio matches that of a licence-plate. The extracted licence-plate templates are then compared to identify similar licence-plates.
The rest of the paper is organized as follows: Section 2 presents the related work; Section 3 sets out the proposed method; Section 4 gives the experiment and results; Section 5 comprises the results discussion; and Section 6 is the conclusion.

2. Related Work

The review of licence-plate methods is carried out to find a method that extracts licence-plates from a vehicle image. The method should have a low computational cost and a high detection rate, because failing to extract a licence-plate undermines every subsequent stage.
Automatic licence-plate recognition (ALPR) generally consists of four stages, as shown in Figure 1.
ALPR takes as input an RGB vehicle image, which usually originates from a camera mounted on a vehicle or a fixed structure.
The RGB image acquisition is followed by licence-plate extraction, which detects a licence-plate in the vehicle image. Licence-plates vary in their location on the vehicle, size, colour, language and font, background, and letter colour [10], although most are rectangular. These characteristics are used in licence-plate extraction.
Licence-plate extraction is then followed by character segmentation, in which the individual licence-plate characters are isolated. This prepares for the classification of the characters by keeping the symbol pixels and discarding irrelevant ones. Segmentation methods are generally grouped by the features they use, such as character contours, vertical and horizontal projection attributes, classifiers, pixel connectivity, prior knowledge of the characters, and mathematical morphology [11]. Region properties, optical character recognition, and template matching [12] are also used in some segmentation methods. The drawback of these methods is the processing time.
The character recognition stage depends on the accuracy of the previous two stages. This is where the characters are identified and displayed. Based on the extracted attributes, the methods used for this stage are categorized as artificial neural networks, pattern matching, template matching, classifiers, optical character recognition, and statistical classifiers.
The drawback of character recognition is its high computational cost. Additionally, the characters recognized from the image sometimes differ from those on the licence-plate. This misclassification may occur because the characters in the database come in different shapes, fonts, and sizes [11].
Front-vehicle licence-plate recognition is commonly used to identify licence-plates: features are extracted from the licence-plate image, allowing for character recognition. In [13], licence-plates are recognized from the front of vehicles using colour information, i.e., character colour combinations. Licence-plate detection is performed with a geometric template, while connected components and support vector machines are used for segmentation and recognition, respectively.
An algorithm was presented based on background subtraction for vehicle detection and front licence-plate extraction, using texture and colour features [14]; the aspect ratio was used to extract the licence-plate. The drawback of colour features is that licence-plate extraction becomes difficult when the licence-plate and the vehicle have a similar colour, and the method is computationally expensive. A support vector machine [15] was used to decide whether an area is a licence-plate, with dilation and erosion contours locating candidate front licence-plates. Modified connected component analysis [16] was used to extract the characters of a front vehicle licence-plate, followed by character-component identification; the drawback is that this method can label one object as two distinct objects. In [17], a character-based front licence-plate extraction algorithm was suggested: a candidate licence-plate is located by finding the centroids of connected components, i.e., characters, and the window with the highest number of transitions is then used to extract the licence-plate.
Edge feature extraction is a simple and fast feature extraction technique on binary pixels that can be used to extract and analyse shapes and to detect objects such as licence-plates. In [6], an edge-based licence-plate recognition method was presented for front vehicle images: licence-plate extraction was performed with Canny edge detection, and the letters were segmented and identified with template matching. However, Canny detection can create unnecessary edges. In [2], a front licence-plate detection method was proposed using Prewitt edges and a convolutional neural network for character recognition; the neural network, however, is computationally expensive.
In [18], a vertical-edge detection method was proposed for front licence-plate extraction, with candidate-region extraction performed by selecting the upper or lower part of the image containing a licence-plate region.
Most licence-plate recognition methods use plate extraction, segmentation, and recognition; the addition of character segmentation and recognition increases the computational cost, and the recognition step tends to misclassify characters. Edge features have the lowest computational cost; however, edge detection is usually followed by additional methods, which again increase the cost.

3. Proposed Approach

For the proposed approach, a monocular camera is used to capture images. The camera intrinsic parameters include a focal length of [974.1667, 979.1361] pixels, a principal point of [950.8207, 559.9735] pixels, and an image resolution of 1080 × 1920 pixels. The camera extrinsic parameters include a pitch of 5 degrees and a height of 1.1798 m above the ground.
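For readers who want to reproduce the camera model, the reported parameters can be assembled into a standard pinhole intrinsic matrix. The sketch below is only an illustrative assumption about how the values are organised; it is not code from the paper.

```python
import numpy as np

# Reported intrinsics (pixels); a standard pinhole camera model is assumed.
fx, fy = 974.1667, 979.1361      # focal length
cx, cy = 950.8207, 559.9735      # principal point

# 3 x 3 intrinsic matrix K mapping camera coordinates to pixel coordinates.
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Reported extrinsics: camera pitched 5 degrees, mounted 1.1798 m above ground.
pitch_deg = 5.0
height_m = 1.1798
```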
The proposed approach begins with licence-plate extraction in which the edge map of the licence-plate is generated with the vertical line ratio computed to locate the licence-plate. Licence-plate extraction is then followed by licence-plate template-matching, using normalized cross-correlation to match the plates.

3.1. Licence-Plate Extraction and Matching

Feature-based licence-plate extraction saves processing time and is robust, and the edge-based method is the simplest and fastest of the feature-based extraction methods. In [6], Canny edge detection was used for licence-plate extraction; its drawbacks are a higher computational cost than Sobel edge detection and the production of many unwanted edges. The method in [6] also performed character segmentation and recognition, which increases the computational cost after licence-plate extraction. Character recognition has the further drawback of interpreting characters wrongly, which can cause the same licence-plate to be read differently and the vehicle to be missed as a repeat detection. The traditional licence-plate pipeline, which includes character segmentation and recognition, is shown in Figure 1.
To solve the problems of high computational cost and character misclassification in the traditional licence-plate pipeline, we propose eliminating the character segmentation and character recognition steps and replacing them with licence-plate template matching. The pipeline of the proposed method is shown in Figure 2.
Character segmentation and recognition isolate the characters of the extracted licence-plate from its background using optical character recognition (OCR) or similar methods; each isolated character is then identified to find a match or to record the licence-plate. The proposed licence-plate matching method does not perform these steps; it simply takes the extracted licence-plate and matches it against the saved licence-plates.
In this way, two complex steps are replaced with a single simple step, addressing the problem of high computational cost. The proposed method treats the licence-plate as a whole when checking for similarity and does not use individual characters. The extracted licence-plate templates, such as the one shown in Figure 3, can then be compared to check for similarity and to identify vehicles as identical.
To avoid matching different licence-plates as similar, once a licence-plate is matched, the matching is repeated several times at different distances (this verification is repeated three times).
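As a minimal sketch of this verification step (the bookkeeping below is an assumption; only the requirement of three repeated matches comes from the text), a counter can demand three positive matches, taken from frames at different distances, before declaring the vehicles identical.

```python
MATCH_THRESHOLD = 0.80   # similarity threshold established experimentally in Section 4
REQUIRED_MATCHES = 3     # number of repeated verifications required by the text

def confirm_same_vehicle(similarity_scores):
    """similarity_scores: normalized cross-correlation scores of the candidate
    plate against the stored template, taken at different distances/frames.
    The vehicles are declared identical only after three scores pass the threshold."""
    positives = sum(score >= MATCH_THRESHOLD for score in similarity_scores)
    return positives >= REQUIRED_MATCHES
```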

Licence-Plate Extraction

To keep the computational cost of the extraction stage low, the licence-plate extraction method is used without any additional verification method. Only vertical edges and the licence-plate ratio are used to extract a licence-plate in the proposed approach. The flowchart of the licence-plate extraction is shown in Figure 4.
(1) Edge detector
The licence-plate extraction method in this paper is based on edge detection. An edge operator is a neighbourhood operation that determines the extent to which each pixel’s neighbourhood can be partitioned by a simple arc passing through the pixel, with the pixels on one side of the arc having one predominant value and the pixels on the other side having a different predominant value. A change in gray-level intensity above a particular threshold implies a potential edge. In this paper, licence-plate extraction uses the Sobel operator to produce edges from a detected vehicle image.
The Sobel edge detector is a simple filter with a low computational cost. It uses the first-order derivative and a simple calculation to detect edges and their orientations in a greyscale image. The Sobel operator uses two 3 × 3 kernels, one rotated 90 degrees relative to the other, as shown in (1) and (2).
$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$ (1)
$G_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$ (2)
The Sobel operator can be decomposed into the product of an averaging kernel and a differentiating kernel (combining gradient computation with smoothing). $G_x$ and $G_y$ decompose as:
$G_x = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \end{bmatrix}$ (3)
$G_y = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} \begin{bmatrix} 1 & 2 & 1 \end{bmatrix}$ (4)
At each point in the image, the two gradient components are combined to give the edge magnitude (5) and direction (6):
$G = \sqrt{G_x^{2} + G_y^{2}}$ (5)
$\theta = \arctan\left( G_y / G_x \right)$ (6)
This reduces the computational cost, as there are few vector-wise calculations. The direction $\theta$ can be set to 0 for vertical edge detection only. The filter needs only the 8 image points around a pixel to compute a response before the gradient computation is performed. Areas of high gradient are represented as white lines on a black background, which are the edges. For instance, the Sobel edge image in Figure 5b is obtained from the detected vehicle in Figure 5a.
Vehicle licence-plates are rectangular and have a certain ratio between the horizontal and vertical plate lines. The goal is to detect the vertical edges of the licence-plate, extracting the licence-plate by calculating the ratio. To achieve the goal, we first compute the edges of the vehicle image.
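A minimal sketch of this vertical-edge step is given below, using OpenCV’s Sobel implementation as a stand-in for the kernels in (1) and (2); the binarisation threshold is an illustrative assumption, not a value from the paper.

```python
import cv2
import numpy as np

def vertical_edge_map(bgr_image, threshold=60):
    """Return a binary map of vertical edges (strong horizontal gradient Gx).

    The kernel in Eq. (1) responds to intensity changes along x, which is what
    outlines the left/right borders of a rectangular licence-plate.
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # first derivative in x
    magnitude = np.abs(gx)
    # Keep only strong responses; the threshold value here is an assumption.
    return (magnitude > threshold).astype(np.uint8)
```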
(2) Licence-plate ratio computation
Licence-plates usually appear in the bottom part of a detected vehicle image, which allows unwanted edges in the top part of the edge map to be eliminated. The top part of the Sobel edge map in Figure 5b is therefore cleared, resulting in the image in Figure 6a. Thereafter, all short and long unwanted edges are removed, resulting in Figure 6b. This simplifies the search for the correct licence-plate ratio: if too many edges remain, false positives and computational cost both increase. The ratio calculation is then performed on the remaining edges.
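The following sketch illustrates one way to perform this simplification, using connected-component statistics to drop edges outside a plausible height range; the cut-off row and the height limits are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def simplify_edge_map(edge_map, min_height=8, max_height=60):
    """Clear the top part of the edge map and drop edges that are too short
    or too long to be a plate border.  The half-image cut-off and the height
    limits are illustrative assumptions."""
    cleaned = edge_map.copy()
    cleaned[: cleaned.shape[0] // 2, :] = 0            # plates sit in the lower part

    num, labels, stats, _ = cv2.connectedComponentsWithStats(cleaned)
    keep = np.zeros_like(cleaned)
    for i in range(1, num):                            # label 0 is the background
        h = stats[i, cv2.CC_STAT_HEIGHT]
        if min_height <= h <= max_height:
            keep[labels == i] = 1
    return keep
```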
(3) Defining the licence-plate region
As shown in Figure 7, the corners of a licence-plate are defined as follows:
The top of the red line is $(x_1, y_1)$; the bottom of the red line is $(x_2, y_2)$; the top of the blue line is $(x_3, y_3)$; and the bottom of the blue line is $(x_4, y_4)$.
The height is the distance between the top and bottom $y$ values of either the red or the blue line; in Equation (7), the red line is chosen.
$h = \lvert y_1 - y_2 \rvert$ (7)
The width is the horizontal distance between the $x$ values of the red and blue lines, taken at either the top or the bottom of the lines; in Equation (8), the top is chosen.
$w = \lvert x_3 - x_1 \rvert$ (8)
Equation (9) gives the ratio of the width to the height, which is used to check whether the two compared vertical lines fall within the ratio range of a licence-plate.
$r = \dfrac{w}{h}$ (9)
Each vertical edge line is compared with all other vertical lines, two lines at a time, to search for a ratio that satisfies Equation (9). The licence-plate ratio is set within the range of 3.5 to 5.5. When a ratio in this range is found, the licence-plate is cropped from the RGB image; the result is shown in Figure 8.
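A sketch of this pairwise ratio search over the remaining vertical lines is shown below; the line representation and bookkeeping are assumptions, and only the 3.5–5.5 ratio range comes from the text.

```python
def find_plate_by_ratio(vertical_lines, min_ratio=3.5, max_ratio=5.5):
    """Each line is (x, y_top, y_bottom).  Compare every pair of vertical lines;
    if their width-to-height ratio falls in the plate range, return the bounding
    box (x_left, y_top, x_right, y_bottom) of the candidate plate."""
    for i, (x1, y1, y2) in enumerate(vertical_lines):
        for x3, y3, y4 in vertical_lines[i + 1:]:      # y3, y4 kept for notational parity
            h = abs(y1 - y2)                           # Eq. (7)
            w = abs(x3 - x1)                           # Eq. (8)
            if h == 0:
                continue
            r = w / h                                  # Eq. (9)
            if min_ratio <= r <= max_ratio:
                left, right = sorted((x1, x3))
                top, bottom = sorted((y1, y2))
                return left, top, right, bottom
    return None
```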
This cropped licence-plate is then used for licence-plate template-matching. This process compares the cropped licence-plate with other licence-plates.

3.2. Licence-Plate Template-Matching

Licence-plate template-matching is used to compare licence-plates and check whether the current vehicle’s licence-plate is identical to a previously recorded one. This is performed without character segmentation and character recognition to reduce the cost of computation. The method used for licence-plate template-matching is normalized cross-correlation.
Cross-correlation for template matching is based on the squared Euclidean distance measure in Equation (10):
$d_{f,t}^{2}(u,v) = \sum_{x,y} \left[ f(x,y) - t(x-u,\, y-v) \right]^{2}$ (10)
where $f$ is the extracted licence-plate and the sum runs over $x, y$ within the window containing the template $t$ positioned at $(u, v)$. Expanding Equation (10) gives
$d_{f,t}^{2}(u,v) = \sum_{x,y} \left[ f^{2}(x,y) - 2 f(x,y)\, t(x-u, y-v) + t^{2}(x-u, y-v) \right]$ (11)
The summed term $t^{2}(x-u, y-v)$ is constant. If the term $f^{2}(x,y)$ is approximately constant, the remaining cross-correlation term in Equation (12) is a measure of the similarity between the extracted licence-plate and the template:
$c(u,v) = \sum_{x,y} f(x,y)\, t(x-u, y-v)$ (12)
The normalized cross-correlation is achieved by normalizing the extracted licence-plate and the template vectors to unit length, producing Equation (13):
$\gamma(u,v) = \dfrac{\sum_{x,y} \left[ f(x,y) - \bar{f}_{u,v} \right] \left[ t(x-u, y-v) - \bar{t} \right]}{\left\{ \sum_{x,y} \left[ f(x,y) - \bar{f}_{u,v} \right]^{2} \sum_{x,y} \left[ t(x-u, y-v) - \bar{t} \right]^{2} \right\}^{0.5}}$ (13)
where $\bar{t}$ is the mean of the template and $\bar{f}_{u,v}$ is the mean of $f(x,y)$ in the region under the template.
The normalized cross-correlation for two matching licence-plates displayed as a surface is shown in Figure 9.
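For two extracted plates that have already been resized to the same dimensions (as is done before matching in Section 4), Equation (13) reduces to a single correlation value. A minimal NumPy sketch of that special case is shown below; it is an illustration of the formula, not the paper’s implementation.

```python
import numpy as np

def normalized_cross_correlation(plate_a, plate_b):
    """Eq. (13) specialised to two grayscale plates of identical size, so the
    sums run over the whole image and there is only one offset (u, v) = (0, 0)."""
    f = plate_a.astype(np.float64) - plate_a.mean()
    t = plate_b.astype(np.float64) - plate_b.mean()
    denom = np.sqrt((f ** 2).sum() * (t ** 2).sum())
    return 0.0 if denom == 0 else float((f * t).sum() / denom)
```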

4. Experiment

Licence-Plate Extraction and Template-Matching

The first stage in the proposed method is to extract a licence-plate from a vehicle image. In the second stage the extracted licence-plates are matched to check whether the detected vehicle is the one previously recognized.

Licence-Plate Extraction

This stage extracts a licence-plate from a detected vehicle image. The extraction accuracy of the proposed method is measured using a vehicle image dataset captured by an on-road camera on a South African road. The extraction accuracy is also compared using the Kaggle dataset and the dataset in [19]: the proposed method and the semantic segmentation method [19] are compared on extraction accuracy, and both methods are then applied to the Kaggle dataset for further comparison.
(1) The licence-plate extraction accuracy
Licence-plate extraction is tested on images taken by a camera mounted at the back of a vehicle. The dimensions of the detected-vehicle images are 150 × 210 pixels. Figure 10a is an image of a detected vehicle; its corresponding grayscale image is shown in Figure 10b.
Sobel edge detection is applied to Figure 10b; the obtained edges are shown in Figure 11a; and the final edge map, which is used to crop the licence-plate shown in Figure 12, is shown in Figure 11b.
A total of 38 vehicles with visible plates at different distances were tested; the results are shown in Table 1. The false positives of the proposed method result from edges whose ratio is nearly equal to the licence-plate ratio range. The missed detections come from visible licence-plate edges that are too small and were removed by the processing steps. The accuracy of the proposed method is 92%.
Table 1 shows the licence-plate extraction accuracy of detected vehicles by the proposed method, and by another method [19], using images captured by an on-road camera. Table 1 also shows licence-plates that are there but were not extracted; areas that were extracted but are not licence-plates; and lastly, licence-plates that are correctly extracted.
The semantic-segmentation method of [19] has lower detection accuracy on the vehicle licence-plate images captured for this paper. This is because it uses semantic segmentation (region properties) and takes the largest region as the licence-plate on its own test images. In the images captured for this project, the largest region is usually not the licence-plate, so the method produces a very low licence-plate extraction accuracy.
(2) Comparison of licence-plate extraction based on the Kaggle dataset [20] and the dataset of [19]
The Kaggle dataset contains 433 images of vehicles with licence-plates, including both front and back plates. A total of 169 front licence-plate images are taken from Kaggle and combined with the dataset in [19] for front licence-plate extraction. The vehicle images in the dataset of [19] were captured by a 3.2-megapixel digital camera, with dimensions from 120 × 160 to 1200 × 1600 pixels. The combined dataset is used to test the accuracy of the proposed approach. One of the images from [19] is shown in Figure 13a, and the vertical-edge image generated by the proposed licence-plate extraction method is shown in Figure 13b.
After vertical-edge detection, the top part of the image is removed because licence-plates usually appear at the bottom of the image. The results of clearing the upper part of the image and deleting unwanted edges are shown in Figure 14a,b, respectively. Finally, the extracted licence-plate is shown in Figure 15.
Table 2 shows the licence-plate extraction accuracy of the proposed method and the method of [19] with its dataset, and the Kaggle dataset.
The proposed method obtained an accuracy of 66% on the Kaggle dataset and 82% on the dataset of [19]. The accuracy of the proposed method on the Kaggle dataset is negatively affected by distant vehicles, whose licence-plate edges fall outside the retained-length threshold and are removed during processing. The accuracy of the method of [19] suffers because it takes the largest region to be the licence-plate, which is often not the case in the Kaggle dataset.
Its score of 100% on the dataset of [19] results from the dataset belonging to that method, where the largest region in each image is the licence-plate. The 82% accuracy of the proposed method results from some licence-plates being too large, so that their edges fall outside the threshold of retained edges and are deleted during processing.
(3) Processing-time comparison using the Kaggle dataset [20]
The proposed method is expected to have a low licence-plate extraction cost because it relies on edge detection alone and does not use additional methods, such as histograms, for more advanced verification.
The processing times of the proposed licence-plate detection approach and of [19] are compared, measured in milliseconds, to show that using edges only is more computationally cost-effective. The proposed method took 21,896 ms to process 169 images, while the method of [19] took 49,119 ms for the same images; the proposed method therefore requires less than half the time. The results are shown in Table 3 and Figure 16.
Table 3 compares the licence-plate extraction processing time of the proposed method and the method in literature [19], using a total of 169 images of the Kaggle dataset.
Figure 16 visually demonstrates the effectiveness of the proposed edges-only method compared with the semantic-segmentation method of [19].
(4) Licence-plate template-matching results
Licence-plate template matching compares the cropped licence-plates with previously detected templates. The purpose of this section is to validate the proposed licence-plate matching method for finding similar vehicles. Because the extracted licence-plates of the same car vary, a threshold is used to decide whether two licence-plates match. The computational cost-effectiveness of the proposed licence-plate template matching is also compared with methods that use character segmentation and recognition.
The first test investigates the threshold to use for licence-plate matching. All licence-plates are resized to 16 × 57 pixels. Figures 17 and 18 show pairs of extracted licence-plates before resizing: Figure 17a shows the same licence-plate of a vehicle at the same distance; Figure 17b shows the same licence-plate at different distances; and Figure 18a,b shows different licence-plates at different distances, to establish the score range for non-matching plates.
Table 4 shows that the highest score for different licence-plates is 74%, while the lowest score for the same licence-plate at different distances is 85%. A similarity threshold can therefore be set at 80%. This similarity comparison checks whether the same vehicle is following the ego vehicle. Figure 19 shows a pair of licence-plates resized to 16 × 57 pixels before comparison.
Table 4 compares the template-matching score of extracted licence-plates at different distances. This is to determine the threshold on which to accept licence-plates as similar.
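Putting the pieces together, a sketch of the matching decision under this threshold is shown below. The grayscale conversion, the interpolation, and the use of OpenCV’s TM_CCOEFF_NORMED mode (which computes the mean-subtracted normalized cross-correlation of Equation (13)) are assumptions about the implementation, not details from the paper.

```python
import cv2

PLATE_SIZE = (57, 16)        # (width, height); plates are resized to 16 x 57 pixels
MATCH_THRESHOLD = 0.80       # similarity threshold derived from Table 4

def plates_match(plate_a_bgr, plate_b_bgr):
    """Resize both cropped plates to a common size and compare them with
    mean-subtracted normalized cross-correlation."""
    a = cv2.resize(cv2.cvtColor(plate_a_bgr, cv2.COLOR_BGR2GRAY), PLATE_SIZE)
    b = cv2.resize(cv2.cvtColor(plate_b_bgr, cv2.COLOR_BGR2GRAY), PLATE_SIZE)
    # With equal-sized inputs, matchTemplate returns a single correlation value.
    score = cv2.matchTemplate(a, b, cv2.TM_CCOEFF_NORMED)[0, 0]
    return score >= MATCH_THRESHOLD
```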
The second test is to compare the processing time of the proposed licence-plate template matching method and the ones containing character segmentation and recognition. This is to show that the proposed licence-plate template matching has a lower computational cost. The results are shown in Table 5.
Table 5 compares the processing time of the proposed method with two other methods [19,21] that perform character segmentation and recognition, using 112 images from the Kaggle dataset.
The proposed method has a lower processing time than both methods for all images, achieving an average time of 36.63 ms, while methods [19,21] achieve average times of 124.66 ms and 174.45 ms, respectively. This shows that the proposed modified licence-plate pipeline without character segmentation and recognition is more computationally cost-effective than the traditional pipeline.

5. Results Discussion

The proposed method provides a licence-plate detection accuracy of 92% on a dataset captured on a South African road. It also achieves a licence-plate extraction accuracy of 82% on the dataset of [19] and 66% on the Kaggle dataset. The accuracy on the Kaggle dataset is negatively affected by distant vehicles, whose short edges are eliminated in the processing steps. The proposed method achieved a lower plate-extraction computational cost than the existing method [19] on the Kaggle dataset, and the computation is faster still for images with fewer edges. This validates the effectiveness of the proposed method under different circumstances and conditions.
The vehicle licence-plate template matching provides further validation of the proposed method. The matching of similar licence-plates achieves high accuracy. A threshold of 80% was determined experimentally for deciding whether a vehicle in the current frame is the same as a vehicle in previous frames, even when the vehicles have a similar appearance, such as the same model and colour.
Licence-plate matching also has a faster processing time than methods [19,21]. This validates that the proposed modified licence-plate pipeline, without character segmentation and recognition, saves computational cost compared with the traditional pipeline.

6. Conclusions

This paper aimed to develop a vehicle-following behaviour detection method by matching the extracted vehicle licence-plate with plates detected earlier. A robust and cost-effective licence-plate recognition method was achieved, which uses edges and the licence-plate ratio to extract licence-plates. The method was shown to be reliable, producing good licence-plate detection results on the tested datasets, and it was more computationally cost-effective than the compared semantic-segmentation method.
The proposed licence-plate matching method was also shown to have a lower computational cost than methods using character segmentation and recognition. This validated the proposed licence-plate recognition pipeline, which replaces character segmentation and recognition with licence-plate template matching.
The edge-based method can miss some licence-plates because the processing steps remove the licence-plate edges in smaller vehicle images. This filtering of edges is performed to reduce the computational cost of locating the licence-plate with the line-ratio computation. Future research will propose a line-ratio calculation that is less dependent on the number of edges in the image, improving licence-plate extraction accuracy without compromising the computational cost of the method.

Author Contributions

Conceptualization, M.M. and C.T.; methodology, M.M. and C.T.; validation, M.M.; writing—original draft preparation, M.M.; writing—review and editing, M.M. and C.T.; supervision, C.T. and P.A.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the National Research Foundation of South Africa, Grant Number 99383.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Crime Statistics, 2018/19. Available online: https://www.statssa.co.za (accessed on 7 April 2021).
  2. Dhar, P.; Guha, S.; Biswas, T.; Abedin, Z. A system design for license-plate recognition by using edge detection and convolution neural network. In Proceedings of the International Conference on Computer Communication, Chemical, Material and Electronic Engineering (IC4ME2), Rajshahi, Bangladesh, 8–9 February 2018; pp. 1–4. [Google Scholar]
  3. Menon, A.; Omman, B. Detection and recognition of Multiple License Plates from Still Images. In Proceedings of the 2018 International Conference on Circuits and Systems in Digital Enterprise Technology (ICCSDET), Kottayam, India, 21–22 December 2018. [Google Scholar]
  4. Rabbani, G.; Islam, M.A.; Azim, M.A.; Islam, M.K.; Rahman, M.M. Bangladeshi License Plate Detection and Recognition with Morphological Operation and Convolution Neural Network. In Proceedings of the 2018 21st International Conference of Computer and Information Technology (ICCIT), Dhaka, Bangladesh, 21–23 December 2018. [Google Scholar]
  5. Wang, C.; Liui, J. Licence Plate Recognition System. In Proceedings of the 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Zhangjiajie, China, 15–17 August 2015. [Google Scholar]
  6. Ha, P.S.; Shakeri, M. License-plate recognition based on edge detection. In Proceedings of the 2016 Artificial Intelligence and Robotics (IRANOPEN), Qazvin, Iran, 9 April 2016; pp. 170–174. [Google Scholar]
  7. Nejati, M.; Majidi, A.; Jalalat, M. License-Plate Recognition Based on Edge Histogram Analysis and Classifier Ensemble. In Proceedings of the 2015 Signal Processing and Intelligent Systems Conference (SPIS), Tehran, Iran, 16–17 December 2015. [Google Scholar]
  8. Maglad, W. A Vehicle License-Plate Detection and Recognition System. J. Comput. Sci. 2012, 8, 310–315. [Google Scholar]
  9. Choudhury, A.; Negi, A. A New Zone Based Algorithm for Detection of License-Plate from Indian Vehicle. In Proceedings of the Fourth International Conference on Parallel, Distributed and Grid Computing (PDGC), Waknaghat, India, 22–24 December 2016. [Google Scholar]
  10. Du, A.; Shehata, M.; Badawy, W. Automatic license-plate recognition: A state of the art review. IEEE Trans. Circuits Syst. Video Technol. 2013, 23, 311–325. [Google Scholar] [CrossRef]
  11. Arafat, Y.; Khairuddin, A.S.; Khairuddin, U.; Paramesran, R. Systematic review on vehicular license-plate recognition framework in intelligent transport systems. IET Intell. Transp. Syst. 2019, 13, 745–755. [Google Scholar] [CrossRef]
  12. Khinchi, M.; Agarwal, C. A review on automatic number plate recognition technology and methods. In Proceedings of the International Conference on Intelligent Sustainable Systems, Palladam, India, 21–22 February 2019; ISBN 978-1-5386-7799-5. [Google Scholar]
  13. Ashtari, A.H.; Nordin, J.; Fathy, M. An Iranian license-plate recognition system based on color features. IEEE Trans. Intell. Transp. Syst. 2014, 15, 1690–1705. [Google Scholar] [CrossRef]
  14. Wu, Y.; Liu, S.; Wang, X. License-plate location method based on texture and color. In Proceedings of the 2013 IEEE 4th International Conference on Software Engineering and Service Science, Beijing, China, 23–25 May 2013. [Google Scholar]
  15. Miyata, S.; Oka, K. Automated license-plate detection using vector machines. In Proceedings of the 14th International Conference on Control, Automation, Robotics & Vision Phuket (ICARCV), Phuket, Thailand, 13–15 November 2016. [Google Scholar]
  16. Wu, B.F.; Lin, S.P.; Chiu, C.C. Extracting characters from real vehicle license-plates out of doors. IET Comput. Vis. 2007, 1, 2–10. [Google Scholar] [CrossRef] [Green Version]
  17. Ingole, S.K.; Gundre, S.B. Characters feature based Indian vehicle license-plate detection and recognition. In Proceedings of the International Conference on Intelligent Computing and Control, Coimbatore, India, 23–24 June 2017. [Google Scholar]
  18. Davis, A.M.; Arunvinodh, C.; Menon, N.P. Automatic license-plate detection using vertical edge detection method. In Proceedings of the IEEE Sponsored 2nd International Conference on Innovations in Information Embedded and Communication Systems (ICIIECS), Coimbatore, India, 19–20 March 2015. [Google Scholar]
  19. Bhat, R.; Mehandia, B. Recognition of vehicle number plate using Matlab. Int. J. Innov. Res. Electr. Electron. Instrum. Control Eng. 2014, 2, 1899–1903. [Google Scholar]
  20. Car License-Plate Detection. 2019. Available online: https://www.kaggle.com/andrewmvd/car-plate-detection (accessed on 24 May 2021).
  21. License-Plate Recognition. 2020. Available online: https://github.com/fwangrotk/licence-plate-recognition (accessed on 17 April 2021).
Figure 1. Licence-plate pipeline.
Figure 2. Proposed licence-plate pipeline.
Figure 3. Licence-plate template.
Figure 4. Flowchart for licence-plate extraction.
Figure 5. (a) Post-processed image. (b) Sobel edge map.
Figure 6. (a) Top edges removed. (b) Unwanted edges removed.
Figure 7. Marked simplified edge map.
Figure 8. Cropped licence-plate.
Figure 9. Normalized cross-correlation for matching licence-plates.
Figure 10. (a) RGB image detected vehicle. (b) Grayscale image.
Figure 11. (a) Sobel edges. (b) Final edge map.
Figure 12. Cropped licence-plate.
Figure 13. (a) Image from [19]. (b) Vertical-edge image.
Figure 14. (a) Top of the image removed. (b) Unwanted edges deleted.
Figure 15. Extracted licence-plate for the dataset in literature [19].
Figure 16. Processing-time comparisons of plate-extraction methods on Kaggle dataset.
Figure 17. (a) Same distance, same plate. (b) Different distance, same plate.
Figure 18. (a) Different distance and plate. (b) Different distance and plate.
Figure 19. Resized licence-plates.
Table 1. Licence-plate extraction accuracy of the proposed method and the method in [19].

Method | False Positives | Missed Extractions | Correct Extractions | Accuracy (%)
Proposed | 13 | 3 | 35 | 92
Literature [19] | 29 | 0 | 9 | 24
Table 2. Licence-plate extraction accuracy comparison on the Kaggle dataset and the dataset of [19].

Method | Dataset | No. of Images | Correct Extractions | Accuracy (%)
Proposed | Kaggle | 169 | 112 | 66
Literature [19] | Kaggle | 169 | 46 | 27
Proposed | Literature [19] | 11 | 9 | 82
Literature [19] | Literature [19] | 11 | 11 | 100
Table 3. Processing-time comparison of plate-extraction methods on the Kaggle dataset (ms per image).

Statistic | Method [19] | Proposed Method
Average | 290.6 | 129.6
Std | 1329.73 | 232.98
Maximum | 17,251 | 3059
Minimum | 78 | 42
Median | 188 | 122
Table 4. Cropped licence-plate comparison score of the proposed method.

Figure | Similarity Score | Description
Figure 17a | 100% | Same distance, same number plate
Figure 17b | 85% | Different distance, same number plate
Figure 18a | 74% | Different distance, different number plate
Figure 18b | 58% | Different distance, different number plate
Table 5. Running time (ms) of the proposed licence-plate matching method compared with methods using character segmentation and recognition.

Statistic | Proposed Method | Method [21] | Method [19]
Average | 36.63 | 174.45 | 124.66
Std | 25.05 | 154.27 | 78.63
Maximum | 168 | 1633 | 830
Minimum | 7 | 114 | 83
Median | 31 | 136 | 94