Article

The Development of an Autonomous Vehicle Training and Verification System for the Purpose of Teaching Experiments

Chien-Chung Wu, Yu-Cheng Wu and Yu-Kai Liang

Department of Computer Science and Information Engineering, Southern Taiwan University of Science and Technology, No. 1, Nantai Street, Tainan 71005, Taiwan
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(8), 1874; https://doi.org/10.3390/electronics12081874
Submission received: 20 March 2023 / Revised: 6 April 2023 / Accepted: 14 April 2023 / Published: 15 April 2023
(This article belongs to the Special Issue Advances of Future IoE Wireless Network Technology)

Abstract

To cultivate students’ skills in building neural network models for autonomous vehicles and to reduce development costs, a system was developed for on-campus training and verification. The system includes (a) autonomous vehicles, (b) test tracks, (c) a data collection and training system, and (d) a test and scoring system. With this system, students can assemble the vehicle hardware, configure the software, and choose or modify the neural network model in class. They can then collect the data the model requires and train the model. Finally, the test and scoring system can be used to test and verify the performance of the autonomous vehicle. The study found that vehicle turning is better controlled by a servo motor driving a steering mechanism than by a wheel-speed differential, and that the camera should be mounted high and at the front of the vehicle to avoid interference from the steering mechanism. Additionally, the study revealed that the throttle settings used for training and for testing are interdependent, and that high-quality results cannot be obtained solely by training a model on camera images.

1. Introduction

The frequent occurrence of traffic accidents has allowed for extensive statistical analysis, which shows that more than 90% of traffic accidents are caused by human negligence. Research by Boverie et al. [1] showed that more than 20% of traffic accidents were caused by drivers’ hypo-vigilance, such as dozing off or distracted driving. Governments around the world have therefore been working to reduce the occurrence of traffic accidents. Some experts estimate that about 70% of traffic accidents could be avoided with the assistance of automated driving systems or autonomous driving technologies. To encourage schools and manufacturers to invest in the research and development of relevant technologies, the first DARPA Grand Challenge [2] was held in the Mojave Desert in 2004, launching a wave of research and development competition in autonomous driving that also contributed to the subsequent launch of Google’s Self-Driving Car and Tesla’s commercial autonomous vehicles.
With the rapid development in artificial intelligence and deep learning technologies in recent years, research related to autonomous driving technology through deep learning has also begun to flourish. Navarro et al. [3] proposed using a camera with a deep neural network to develop an automatic driving system and collect information through a camera installed in front of the vehicle for automatic driving. In addition, Bojarski et al. [4,5,6] designed a system that could input the collected images of the road in front of the vehicle into a neural network, extract image features through multi-layer convolutional layers, perform nonlinear operations on features through multiple sets of dense layers, and then conduct regression to output the value (normalized in the range from 0.0 to 1.0) of the steering angle of the vehicle to achieve automatic driving.
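As an illustration of this end-to-end approach, the following is a minimal Keras sketch of a convolutional steering regressor in the spirit of the networks described above. The layer sizes, input resolution, and training setup are illustrative assumptions rather than the exact published PilotNet configuration.

```python
# A minimal sketch of an end-to-end steering regressor: convolutional layers
# extract image features, dense layers combine them nonlinearly, and a final
# sigmoid unit outputs a steering value normalized to the range 0.0-1.0.
# Layer sizes and the 160x120 input are assumptions for illustration.
from tensorflow.keras import layers, models

def build_steering_model(input_shape=(120, 160, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255.0),                       # normalize pixel values
        layers.Conv2D(24, 5, strides=2, activation="relu"),  # convolutional feature extraction
        layers.Conv2D(32, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, strides=2, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),                # nonlinear combination of features
        layers.Dense(50, activation="relu"),
        layers.Dense(1, activation="sigmoid"),               # steering angle normalized to 0.0-1.0
    ])
    model.compile(optimizer="adam", loss="mse")              # regression on the recorded angle
    return model

model = build_steering_model()
model.summary()
```

Trained on pairs of front-facing images and recorded steering values, such a model outputs a single normalized steering angle per frame.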
However, considering the cost and the feasibility of a project for teaching experiments, the construction of a large-scale system architecture is not easy to achieve. Therefore, one option to consider is to lower the threshold for learning and verifying autonomous driving technology. A variety of simulation systems and small vehicle verification systems have been developed so far, including (a) AWS DeepRacer [7], (b) Udacity’s Autonomous Car Simulator [8], (c) CARLA Simulator [9], and (d) Donkey Car Simulator [10].
AWS DeepRacer is a fully autonomous vehicle simulation and training environment in which Amazon uses its cloud services to implement reinforcement learning. The AWS DeepRacer Evo car is commercially available on amazon.com and is equipped with stereo cameras and a LiDAR sensor. The stereo pair consists of forward-facing left and right cameras, which allows the vehicle to learn depth information from images; this information can then be used to sense and avoid approaching objects on a track. The LiDAR sensor is backward-facing and detects objects behind and to the sides of the vehicle. From an educational perspective, Cota et al. [11] analyzed the platform and proposed blueprints for the future training of autonomous vehicles through reinforcement learning using the AWS DeepRacer.
Udacity’s Autonomous Car Simulator primarily uses training data generated by manually driving a simulated car; following the PilotNet [12] approach, it places a set of virtual cameras on the left, middle, and right of the vehicle to collect driving images. The software has two modes: a training mode and an autonomous mode. In training mode, the user controls and drives the vehicle manually through the keyboard or a mouse, while three cameras mounted at the front of the vehicle record the driving behavior together with the steering angle, driving speed, throttle, and brake data. Technically, the simulator acts as a server from which the program can connect and receive a stream of image frames. Once enough driving data have been collected, the system is trained with the PilotNet neural network. In autonomous mode, the machine learning model can be tested through a network interface. The system also provides two different tracks for users to choose from and is considered a good entry-level system for autonomous vehicle research: there are no complex roads, traffic signs, buildings, pedestrians, or obstacle vehicles in the system. Although it is a very basic software package for simulating autonomous driving, Udacity’s Autonomous Car Simulator is an open-source project.
CARLA Simulator is a complete simulation environment that can be used for autonomous vehicle testing. It uses Unreal Engine as the underlying engine for computing field objects. In addition to the built-in scenes, the system allows users to define street scenes and interactive scenarios, and supports external sensors and related modules, including RGB cameras, IMUs (Inertial Measurement Units), RADAR (Radio Detection and Ranging), DVSs (Dynamic Vision Sensors), etc. Apart from user-defined sensing modules, the system can also switch between different weather conditions for simulation and testing, making it a simulation system that closely approximates the real environment. It also supports a SUMO co-simulation mode, which can randomly simulate multiple vehicles driving on the road at the same time. In the CARLA environment, complex traffic scenarios can thus be simulated and verified through the simulator. Óscar et al. [13,14] used deep reinforcement learning with the ROS framework to verify autonomous driving applications on the CARLA simulator. Terapaptommakol et al. [15] proposed a deep Q-network method in the CARLA simulator to develop an autonomous vehicle control system that designs trajectories and avoids collisions with obstacles on the road in a virtual environment. In addition, Gutiérrez-Moreno et al. [16] presented an approach to intersection handling in autonomous driving that combines deep reinforcement learning with curriculum learning, demonstrating the effectiveness of the Proximal Policy Optimization algorithm in inferring the desired behavior from the behavior of adversarial vehicles in the CARLA simulator.
Donkey Car, similar to the AWS DeepRacer, provides both a virtual simulation environment and physical vehicles. Donkey Car is also an open-source project, so users and manufacturers can develop or modify the software and hardware, and many related products are commercially available. The primary limitation of this system is that, although the training procedure is the same in the real and simulated environments, a neural network trained on data collected on real-world roads cannot be applied directly to drive a vehicle on simulated roads, and, likewise, a network trained on data collected in the simulator cannot be applied directly to drive a real-world vehicle on real-world roads. In simple terms, the simulation software and the physical vehicles constitute two different training settings. Only when the training track is exactly the same as the test track can the autonomous vehicle be verified on that track in the simulation system after training. Overall, however, Donkey Car is an excellent system for autonomous vehicle research, with an open-source platform providing all the necessary details for users, and there are numerous ways to use the simulator depending on one’s goals.

2. System Design

Based on Donkey Car, the autonomous driving training and verification system in this paper was designed to be used in teaching experiments. In addition to setting up autonomous vehicles freely, developers can adjust the model in the environment for training. At the same time, developers can objectively evaluate the advantages and disadvantages of the model on the automatic test and scoring system. The design of this system is explained as follows.
The system consists of four components: (a) autonomous vehicles, (b) test tracks, (c) the data collection and training system, and (d) the test and scoring system. The autonomous vehicles use a Raspberry Pi as their core and run Raspberry Pi OS with the Donkey Car environment installed for management. Users design the geometric shapes of the test tracks, which are assembled from printed posters. The data collection and training system is a computer running Ubuntu with the Donkey Car environment installed; its main function is to receive the data collected by the autonomous vehicles, perform post-processing and training, and return the results to the vehicles. The test and scoring system is a computer running Microsoft Windows 11 with Anaconda, PyCharm Community Edition, Yolo v7 [17], and the software proposed in this paper installed. It processes real-time images captured by the overhead camera above the test track and determines whether the wheels of the autonomous vehicles touch the lane markers or go beyond the boundary. The system block diagram is shown in Figure 1.

2.1. Autonomous Vehicles

Two autonomous vehicle designs were used in this paper, designated Vehicle A and Vehicle B. Both used a Raspberry Pi 4 Model B (Raspberry Pi Foundation, Cambridge, UK) as the computing core, Donkey Car version 4.2.1 as the software, a DC gearbox motor as the drive motor, and an L298N as the motor controller. The primary difference between the two was the steering mechanism: Vehicle A was steered by controlling the rotational speed difference between the left and right wheels, as shown in Figure 2a, whereas Vehicle B used a front-wheel steering mechanism driven by a servo motor, as shown in Figure 2b. The hardware block diagrams of the two vehicles are shown in Figure 2.

2.2. Test Track

The test track is assembled by piecing together printed posters, and users can adjust or redesign the track geometry during testing; this makes the testing method more flexible. Track A is a closed ellipse track with an S-shaped curve, as shown in Figure 3a. Track B is a closed ellipse track, as shown in Figure 3b.

2.3. Data Collection and Training System

The entire autonomous vehicle training and verification process is shown in Figure 4. The trainer monitored the overall situation by visual inspection, or the front-facing image of the remote vehicle on a computer screen, and operated the Xbox racing wheel and pedals to control the steering of the vehicle, as shown in Figure 4a. The speed of the vehicle was controlled by pressing the accelerator or brake pedal, and the driving data were collected. When driving data collection was complete, redundant and blurred images were corrected or removed, as shown in Figure 4b, and the neural network was then trained on the collected and corrected images together with the vehicle travel direction data. A screenshot of the training screen is shown in Figure 4c.
In the test mode, when the training of the neural network was complete, the trained weight files and neural network description files were transferred to the autonomous vehicle. The autonomous vehicle predicted the steering angle through the neural network based on the captured front-facing images. The system converted the angle into the corresponding parameters and sent them through GPIO to the servo motor that controlled the steering mechanism, allowing the vehicle to drive by itself on the track. The experimenters conducted scoring and verification through the test and scoring system, as shown in Figure 4d. Screenshots of the autonomous vehicle training and verification process are shown in Figure 4.
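To make the conversion step concrete, the following is a minimal sketch of how a normalized steering prediction might be mapped to a servo PWM signal over GPIO. The pin number and duty-cycle limits are assumptions for illustration and would need to be calibrated to the actual steering servo.

```python
# A minimal sketch of mapping a network output in [0.0, 1.0] to a servo duty
# cycle via GPIO PWM; SERVO_PIN and the duty-cycle range are hypothetical.
import RPi.GPIO as GPIO

SERVO_PIN = 18                       # hypothetical GPIO pin driving the steering servo
LEFT_DUTY, RIGHT_DUTY = 5.0, 10.0    # assumed duty-cycle limits for full left/right

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)                 # standard 50 Hz servo signal
pwm.start((LEFT_DUTY + RIGHT_DUTY) / 2)       # start with the wheels centered

def set_steering(angle_norm: float) -> None:
    """Convert a normalized steering prediction to a servo duty cycle."""
    angle_norm = min(max(angle_norm, 0.0), 1.0)
    duty = LEFT_DUTY + angle_norm * (RIGHT_DUTY - LEFT_DUTY)
    pwm.ChangeDutyCycle(duty)

# Example: a prediction of 0.5 keeps the front wheels centered.
set_steering(0.5)
```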

2.4. Testing and Scoring System

In order to evaluate the performance of the autonomous vehicle on the test track, a test and scoring system was set up. Scoring was achieved by first installing an overhead camera facing down towards the track and then using machine vision to perform image processing and recognition. The system first finds the track, then locates the position of the vehicle, and, after wheel images have been collected and used to train a neural network, identifies the exact positions of the wheels. Finally, it determines whether the vehicle has gone off the track by checking whether the coordinates of the vehicle’s wheels intersect the lane markings. The identification and processing methods are shown in Figure 5.

2.4.1. Equipment Installation

In order to recognize whether the vehicle was driving on the track, an overhead camera was installed in the system facing the track, as shown in Figure 6.

2.4.2. Explanation of the Testing and Scoring System

When system execution began, the system first captured an image of the empty track (with no vehicle in view) as the test background, as shown in Figure 7a. The image was converted to grayscale and denoised; then, the Canny edge detection algorithm was used to find the edge features of the track. Finally, the findContours function was used to detect the contours and extract the lane markers of the track, as shown in Figure 7b.
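The following is a minimal OpenCV sketch of this track-extraction step, assuming OpenCV 4.x; the blur kernel, Canny thresholds, and file name are illustrative assumptions.

```python
# A minimal sketch of extracting the track lane markers from the background
# image: grayscale conversion, denoising, Canny edge detection, findContours.
import cv2

background = cv2.imread("track_background.jpg")          # empty-track frame (hypothetical filename)
gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)      # convert to grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 0)              # denoise before edge detection
edges = cv2.Canny(blurred, 50, 150)                      # edge features of the track
track_contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# Visualize the extracted lane markers on a copy of the background.
marked = background.copy()
cv2.drawContours(marked, track_contours, -1, (0, 0, 255), 2)
cv2.imwrite("track_lane_markers.jpg", marked)
```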
When scoring started, the system sequentially processed each frame captured by the overhead camera. The vehicle position was obtained by subtracting the background image from the acquired frame. A schematic diagram of the vehicle coordinate positioning is shown in Figure 8.
As the overhead camera was pointed at the track, the system could identify whether the wheels of the moving vehicle touched the lane markers and for how long the vehicle was off the track. For example, in the orthographic projection of the vehicle body shown in Figure 9, the vehicle body crosses the track line but its wheels do not actually touch the lane marker, as shown in Figure 9a. Conversely, the system must correctly detect a wheel that even slightly touches a lane marker, as shown in Figure 9b. Therefore, instead of scoring based on the outline of the vehicle body, the system needed to correctly locate the outlines of the wheels.
In order to correctly locate the wheel coordinates, Yolo v7 was used as the neural network for recognizing wheels in this study. While the vehicle was driven at various steering angles, images were collected by the overhead camera. After the background image was subtracted from the acquired images, the four wheels of the vehicle were labeled in the images and the Yolo v7 neural network was trained on them. The result was a system that could correctly recognize and locate the wheels. The training process of the wheel image recognition system for autonomous vehicles is shown in Figure 10.
Yolo v7 marks the position of each recognized object with a rectangular bounding box; however, the wheels sit at various angles while the vehicle is turning. Figure 11a is a schematic diagram showing the wheel angles during a turn, and the red boxes in Figure 11b frame the wheels as recognized by Yolo v7. These red rectangles clearly do not represent the outlines of the wheels precisely. Therefore, an algorithm was developed to obtain the actual wheel contours by intersecting the extracted vehicle image with the rectangular regions returned by Yolo v7; this is schematized in Figure 11c.
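A minimal sketch of this intersection step is shown below; the variable names, the binarization threshold, and the box format are assumptions for illustration.

```python
# Keep only the pixels of the background-subtracted vehicle image that fall
# inside the boxes returned by Yolo v7, then take the contours of what remains
# as the wheel outlines.
import cv2
import numpy as np

def wheel_contours(frame_car, frame_nocar, yolo_boxes):
    """frame_car/frame_nocar: BGR images; yolo_boxes: list of (x1, y1, x2, y2) ints."""
    diff = cv2.absdiff(frame_car, frame_nocar)                 # ImageCar = Framecar - FrameNocar
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, vehicle_mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

    box_mask = np.zeros_like(vehicle_mask)                     # rectangular regions from Yolo v7
    for x1, y1, x2, y2 in yolo_boxes:
        cv2.rectangle(box_mask, (x1, y1), (x2, y2), 255, thickness=-1)

    wheels_edge = cv2.bitwise_and(vehicle_mask, box_mask)      # WheelsEdge = ImageCar ∩ WheelsYolo
    contours, _ = cv2.findContours(wheels_edge, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```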

2.4.3. Testing and Scoring Algorithm

In this paper, the background image without the vehicle is defined as FrameNocar and the image with the vehicle is defined as Framecar. FrameNocar is subtracted from Framecar through Formula (1) to obtain the vehicle image ImageCar, and the pre-trained Yolo v7 is then used to locate the four wheels, defined as WheelsYolo. Formula (2) takes the overlapping area of ImageCar and WheelsYolo to produce the final wheel region WheelsEdge. In the following formulas, ‘(t)’ denotes the frame read at time t.
ImageCar(t) = Framecar(t) − FrameNocar  (1)
WheelsEdge(t) = ImageCar(t) ∩ WheelsYolo(t)  (2)
To judge whether the driving vehicle has gone off the track, it is only necessary to ascertain whether the wheel outlines overlap the track lane markers. The Canny edge detection algorithm is applied to FrameNocar to find the edge features of the track, and the findContours function is used to define the coordinate sets of the contours as TrackContours, as shown in Formula (3).
TrackContours(t) = findContours(CannyEdgeDetection(FrameNocar(t)))  (3)
Next, the Canny edge detection algorithm is applied to WheelsEdge to find the edge features of the vehicle wheels, and the findContours function is used to define the coordinate sets of the contours as WheelsContours, as shown in Formula (4).
WheelsContours(t) = findContours(CannyEdgeDetection(WheelsEdge(t)))  (4)
Finally, an intersection function is used to check for overlap between TrackContours and WheelsContours, producing OutputOutofBounds, which indicates whether the vehicle has driven onto a track lane marker, as in Formula (5).
OutputOutofBounds(t) = intersection(WheelsContours(t), TrackContours(t))  (5)
When the test and scoring system was executed, two values were generated: the number of times the wheels of the autonomous vehicle touched a track lane marker and the number of frames in which the wheels touched a lane marker, defined as follows.
The number of times the wheels of the autonomous vehicle touched a lane marker of the track is defined as Countertouch. Each time the wheels in the captured images touched a lane marker and then returned fully onto the track, Countertouch was increased by 1, as in Formula (6); this primarily keeps track of the number of times the autonomous vehicle touched a lane marker. The symbol Ø represents the empty set and n represents a positive integer.
Countertouch = Countertouch + 1, if (OutputOutofBounds(t) ≠ Ø and OutputOutofBounds(t + n) = Ø and n > 0)  (6)
However, evaluating an autonomous vehicle only by the number of times its wheels touch a lane marker may omit relevant information: a wheel that immediately corrects back onto the track after touching a marker and a wheel that stays off the track for a significantly longer time before correcting back are two different conditions. To distinguish between them, CountertouchFrame was defined in this paper: for every video frame in which the system recognizes a wheel touching a lane marker, CountertouchFrame is increased by 1, as in Formula (7).
CountertouchFrame = CountertouchFrame + 1, if OutputOutofBounds(t) ≠ Ø  (7)
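The following is a minimal sketch of how Formulas (5)–(7) might be implemented per frame; the mask-based overlap test and the counter state handling are assumed implementation details rather than the authors’ exact code.

```python
# Per-frame scoring logic: CountertouchFrame increases on every frame in which
# wheel and lane-marker contours overlap (Formula 7); Countertouch increases
# once each time an overlapping episode ends (Formula 6).
import cv2
import numpy as np

def contours_intersect(wheel_contours, track_contours, frame_shape):
    """Return True if any wheel contour overlaps any lane-marker contour."""
    wheel_mask = np.zeros(frame_shape[:2], dtype=np.uint8)
    track_mask = np.zeros(frame_shape[:2], dtype=np.uint8)
    cv2.drawContours(wheel_mask, wheel_contours, -1, 255, thickness=-1)   # filled wheel regions
    cv2.drawContours(track_mask, track_contours, -1, 255, thickness=2)    # lane-marker lines
    return cv2.countNonZero(cv2.bitwise_and(wheel_mask, track_mask)) > 0  # Formula (5)

counter_touch = 0         # number of distinct touches (Formula 6)
counter_touch_frame = 0   # number of frames with a touch (Formula 7)
touching = False          # state of the previous frame

def update_counters(out_of_bounds: bool) -> None:
    global counter_touch, counter_touch_frame, touching
    if out_of_bounds:
        counter_touch_frame += 1       # Formula (7): count every touching frame
        touching = True
    elif touching:
        counter_touch += 1             # Formula (6): the touching episode just ended
        touching = False

# Usage per frame (wheel_contours from the Yolo-based step, track_contours
# from the background step): update_counters(contours_intersect(...)).
```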
During testing, if any of the wheels of the autonomous vehicle touched the lane marker of the track and failed to correct back to the track, causing the vehicle to run off the track, the test would be judged as a failure.
Figure 12 is a screenshot of the execution of the test and scoring system. It shows the position of the driving vehicle and the four wheels. The yellow number in the upper left corner indicates the number of times the vehicle touched the lane marker, Countertouch, and the red number in the upper left corner indicates the number of video frames in which the vehicle touched the lane marker, CountertouchFrame, during the test.

2.5. The Process of the Course

Figure 13 is a flowchart depicting the process of course execution. In this course, students learned how to assemble an autonomous car, install and configure the necessary software, and perform a series of tests to ensure that the car functioned properly. In the interest of reproducibility, the concrete steps the students followed are listed below:
Step 1:
Students assembled the car hardware, including the chassis, motors, motor drivers, battery pack, wheels, Raspberry Pi, webcam, and wiring.
Step 2:
Students installed Raspberry Pi OS and set up the Donkey Car environment, including configuring the necessary hardware drivers. For example, they needed to add a driver for their hardware chip to the actuator.py file in the Donkey Car project’s parts folder and then reference the new driver in the manage.py and myconfig.py files (an illustrative sketch of such a driver part is given at the end of this subsection).
Step 3:
Students tested the functionality of the Donkey Car. They started by lifting the assembled car off the ground and running the command “python manage.py drive”. This started the car and allowed them to verify that the webcam was streaming properly and that the steering mechanism and wheels were turning correctly.
Step 4:
Students used two different methods provided by the system to operate the vehicle and collect training data. The first method involved manually driving the vehicle and collecting data by entering the command “python manage.py drive”. Prior to starting, they could access a menu by entering “http://vehicle-IP:8887” into a web browser on a computer or mobile device, where “vehicle-IP” refers to the Wi-Fi IP address of the autonomous car. In the menu, they could set the value for “Max Throttle” and select “User(d)” in the “Mode and Pilot” menu to configure the parameters for remote control of the vehicle.
The second method involved using an Xbox racing wheel and pedals to control the vehicle via Bluetooth by entering the command “python manage.py drive --js”. The direction of the vehicle could be controlled using the steering wheel, while the throttle could be adjusted using the pedals to increase or decrease the throttle value.
Step 5:
Students uploaded the collected data to a server or computer for further processing.
(Note: Steps 6 through 9 were all carried out on a server or computer.)
Step 6:
Students modified the Donkey Car neural network model on their computer.
Step 7:
Driving the car may cause it to leave the track boundary or collect too much repetitive data; thus, before training the neural network, students needed to delete or correct any data that included improper driving or unclear images.
Step 8:
Students trained the model by running the “python train.py --tub <path_to_collected_data> --model <name_of_output_model_file>” command.
Step 9:
Students downloaded the completed weight file from the trained model to the autonomous car for testing.
Step 10:
Students launched the testing program with “python manage.py drive --model trained_Model”, then set the maximum throttle value and selected “Local Pilot(d)” in the “Mode and Pilot” menu in the browser window at “http://vehicle-IP:8887”. They then tested the car using the testing and scoring system, which measured the car’s performance, including its ability to stay on the track and avoid obstacles.
Step 11:
If the test results did not meet requirements, they went back to step 4 to collect the data and train the model again.
By the end of the process, students learned how to assemble and configure an autonomous car, collect training data, and train and test a neural network model for autonomous driving.
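As an example of the kind of hardware driver added in Step 2, the following is a minimal sketch of a custom motor-driver part written in the style of a Donkey Car part class, whose run() method is called by the vehicle loop on every cycle. The class name, pin assignments, and PWM frequency are hypothetical and would depend on the actual wiring.

```python
# A minimal sketch of an L298N motor-driver part that could be added to
# actuator.py; the vehicle loop calls run() each cycle with a throttle value.
# Pin numbers and the 100 Hz PWM frequency are assumptions for illustration.
import RPi.GPIO as GPIO

class L298N_Motor:
    def __init__(self, in1=23, in2=24, en=25):      # hypothetical GPIO pins
        GPIO.setmode(GPIO.BCM)
        for pin in (in1, in2, en):
            GPIO.setup(pin, GPIO.OUT)
        self.in1, self.in2 = in1, in2
        self.pwm = GPIO.PWM(en, 100)                 # PWM on the enable pin sets the speed
        self.pwm.start(0)

    def run(self, throttle):
        """throttle in [-1.0, 1.0]; sign sets direction, magnitude sets speed."""
        GPIO.output(self.in1, throttle > 0)
        GPIO.output(self.in2, throttle < 0)
        self.pwm.ChangeDutyCycle(min(abs(throttle), 1.0) * 100)

    def shutdown(self):
        self.pwm.stop()
        GPIO.cleanup()
```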

3. Experiment and Results

Before the test and scoring system could be used, it had to be verified. The verification process is described below.

3.1. Verification of the Test and Scoring System

In the first stage of the experiment, the test and scoring system itself had to be tested and verified. The operator used throttle values of 20%, 35%, and 50% to remotely drive the autonomous vehicle on the test track with the Xbox racing wheel and pedals. During the process, the operator intentionally made the vehicle’s wheels touch the lane markers at different locations, thereby simulating a typical driving path; the runs were recorded with the overhead camera.
The next step was to extract the frames from the three videos recorded under the different throttle conditions, have personnel check them one by one, and manually record Countertouch and CountertouchFrame. The same three videos were then processed by the test and scoring system, and the results obtained from the system were confirmed to be consistent with those obtained from the manual check. With the test and scoring system verified, the evaluation of the performance of the autonomous vehicles could commence.

3.2. Experimental Conditions

The default training model for the Donkey Car, Keras Linear, was used to train all the autonomous vehicles in this study. The track featured an S-curve, turns, and straight sections. During training, the car was operated by personnel, who found it difficult to operate correctly with a throttle value greater than 50% on this short and varied track. On the lower end, a throttle value set to less than 20% tended to cause the car to get stuck at S-curve turns. Therefore, for this experiment, the throttle value was set between 20% and 50%, and testing was conducted using a mid-range value of 35%. The test procedure involved running autonomous Vehicles A and B on both Track A and Track B for 10 laps each and recording the results. However, after confirming that Vehicle B outperformed Vehicle A in steering performance, only Vehicle B was used in subsequent experiments.
The battery of the autonomous vehicle was fully charged before each experiment. In this system, 100% throttle represented full power and 50% throttle represented half power when controlling the accelerator. To collect training data, both vehicles were driven at a throttle value of 35% for 20 laps each on Track A.
All data collection, training, and testing procedures for the autonomous vehicles followed the steps described in Section 2.5 above from Step 4 to Step 11.
In the preliminary stage of the study, it was found that the autonomous vehicle did not operate accurately on the track, and three primary problems were observed that needed to be resolved: (a) the impact of the vehicle steering mechanism on the test results of the autonomous vehicles, (b) the impact of the location of the camera on the test results of the autonomous vehicles, and (c) the correlation between the throttle used for the training vehicle and the throttle used for the testing vehicle.
Next, we conducted tests to identify appropriate solutions to these three problems.

3.3. Impact of the Vehicle Steering Mechanism on the Test Results of Autonomous Vehicles

In order to find out whether different steering mechanisms impact the training and verification of the autonomous vehicle, two vehicles with different steering mechanisms were designed for this study.
Vehicle A was steered by controlling the rotational speed difference between the left and right wheels, as shown in Figure 14a, and Vehicle B was steered by controlling a servo motor, as shown in Figure 14b. The photos of two vehicles are shown in Figure 14.
The cameras on the two vehicles were installed in the same position. Two types of tracks were used in the experimental design, as shown in Figure 3a,b; they are designated Track A and Track B, respectively.
The two vehicles used the same neural network for training in order to facilitate a comparison of the effects of the different steering mechanisms on autonomous vehicle training. The experimental conditions are as described in Section 3.2, “Experimental Conditions”.
As shown in Table 1, when testing on Track A, the wheels of Vehicle A touched the lane marker more frequently while driving on an S-shaped curve. The test results showed that the number of times the wheels of Vehicle A touched the lane marker was 32, and the number of video frames in which the wheels touched the lane marker was 526. The number of times the wheels of Vehicle B touched the lane marker was 9, and the number of video frames in which they touched the lane marker was 151. When driving on a closed ellipse Track B, the number of times the wheels of Vehicle A touched the lane marker was 14, and the number of video frames in which the wheels touched the lane marker was 245, whereas the number of times the wheels of Vehicle B touched the lane marker was 4, and the number of video frames that they touched the lane marker was 63.
As shown in Table 1, the experimental results indicate that Vehicle B, whose steering mechanism was controlled by a servo motor, performed much better than Vehicle A in controlling the direction of the vehicle, especially on the S-shaped curve and sharp turns.

3.4. Impact of the Location of the Camera on the Test Results of the Autonomous Vehicles

Since the testing described above found that the servo-motor steering mechanism provided better directional control, the following test was based on that design (Vehicle B). This test addressed whether the camera should be installed high or low, and whether it should be installed at the front of the vehicle or mounted on the steering mechanism.
In order to determine the best position for the camera, it was set up in three different positions. Position (I): installed at the front of the vehicle body in a low position. Position (II): installed at the front of the vehicle body in a high position, facing down towards the track. Position (III): mounted on the vehicle steering mechanism. The view direction of the camera at positions (I) and (II) was consistent with the vehicle’s direction of travel, whereas the view direction at position (III) was synchronized with the steering direction (that is, not necessarily the direction of travel). Schematic diagrams of the installation positions are shown in Figure 15.
To collect training data, the camera was mounted on the vehicle in each of the three positions described above, and the vehicle was driven on the tracks shown in Figure 3. The different mounting heights produced different views: the view from the low position at the front of the vehicle body (I) is shown in Figure 16a, and the view from the high position at the front of the vehicle body (II) is shown in Figure 16b.
The experimental conditions are as described in Section 3.2, “Experimental Conditions”.
As shown in Table 2, when testing on Track A with the camera set at position (I), the test results showed that the number of times the wheels touched the lane marker was 22, and the number of video frames in which the wheels touched the lane marker was 358. When the camera was set at position (II), the number of times the wheels touched the lane marker was 9, and the number of video frames in which the wheels touched the lane marker was 151. Unexpectedly, the test results showed the outcome was a failure when the camera was set at position (III).
As shown in Table 2, when testing on Track B, with the camera set at position (I), the test results showed that the number of times the wheels touched the lane marker was 11, and the number of video frames in which the wheels touched the lane marker was 176. When the camera was set at position (II), the number of times the wheels touched the lane marker was 4, and the number of video frames in which the wheels touched the lane marker was 63. Once again, the test results showed the outcome was a failure when the camera was set at position (III).
The experimental results in Table 2 show that when the camera was set at position (III), most of the captured images were blurred by camera shake during turning, and the vehicle went off the track during the test, which was thus rated as “failed”. When the camera was set at position (I) or position (II), autonomous driving was completed successfully. However, the camera clearly performed much better at position (II) than at position (I). The reason is that at position (I) the camera’s field of view of the track was relatively narrow, and it was prone to falsely detecting objects outside the track, which were then mistakenly included in the training data, as shown in Figure 16a. At position (II), the camera was elevated and facing the track, so the autonomous vehicle could detect the conditions ahead on the track much earlier, as shown in Figure 16b, especially where there was a curve ahead.

3.5. Correlation between Throttle for the Training Vehicle and Throttle for the Testing Vehicle

In order to determine the correlation between the throttle used for training and the throttle used for testing, tests were carried out as follows: the operator remotely controlled the autonomous vehicle (Vehicle B) through the Xbox racing wheel and pedals, driving it on Track A for 20 laps at each of the throttle values of 20%, 35%, and 50% to collect data. After training with the collected data, the autonomous vehicle was tested in autonomous mode on Track A for 10 laps at a throttle of 20%, 10 laps at a throttle of 35%, and 10 laps at a throttle of 50%, separately, and the results were recorded using the test and scoring system.

3.5.1. Training and Testing with the Same Throttle

As shown in Table 3, the experimental results show that when a throttle of 20% was used for both training and testing, the number of times the wheels touched the lane marker was 8, and the number of video frames in which the wheels touched the lane marker was 136. When a throttle of 35% was used for training and testing, the number of times the wheels touched the lane marker was 9, and the number of video frames in which the wheels touched the lane marker was 151. Finally, when a throttle of 50% was used for training and testing, the number of times the wheels touched the lane marker was 9, and the number of video frames in which the wheels touched the lane marker was 143.

3.5.2. Training and Testing with Different Throttle

As shown in Table 3, the experimental results showed that when the system was trained with 20% throttle and then the autonomous vehicle was tested at 35% throttle, the number of times the wheels touched the lane marker was 17 and the number of video frames in which the wheels touched the lane marker was 274. When the system was trained with 35% throttle and then the autonomous vehicle was tested at 20% throttle, the number of times the wheels touched the lane marker was 13 and the number of video frames in which the wheels touched the lane marker was 213. When the system was trained with 50% throttle and the autonomous vehicle was tested at 20% throttle, the number of times the wheels touched the lane marker was 13 and the number of video frames in which the wheels touched the lane marker was 217. Finally, when the system was trained with 50% throttle and the autonomous vehicle was tested at 35% throttle, the number of times the wheels touched the lane marker was 11 and the number of video frames in which the wheels touched the lane marker was 182.
To summarize, the results differed when the throttle used for training and for testing were not the same. We note first that when samples were collected and trained with 20% throttle and then the autonomous vehicle was tested at 35% throttle, the autonomous vehicle completed the drive with the wheels occasionally touching the lane marker. When the throttle was increased to 50% for the test, the autonomous vehicle went off the track and failed the test. Second, when samples were collected and trained with 35% throttle and then the vehicle was tested at 20% throttle, the autonomous vehicle functioned normally, with the wheels occasionally touching the lane marker; however, when tested at 50% throttle, the automatic driving could not be completed. Third, when samples were collected and trained with 50% throttle and then the vehicle was tested at 20% and 35% throttle, the autonomous vehicle functioned normally, with the wheels occasionally touching the lane marker.

3.6. Comparison Results

As shown in Table 4, AWS DeepRacer, Udacity’s Autonomous Car Simulator, CARLA Simulator, Donkey Car, and the vehicles designed in this paper were compared. All of them have simulation capabilities, but only AWS DeepRacer, Donkey Car, and the vehicles in this paper have physical cars that can be verified.
The DeepRacer and DeepRacer Evo are priced at USD 399 and USD 598, respectively, while the Donkey Car is priced at approximately USD 325. The materials for Vehicle A and Vehicle B, which can be assembled by the students themselves, cost approximately USD 110 and USD 132, respectively. For AWS DeepRacer, training and model evaluation in the cloud each cost USD 3.5 per 2 h, and additional storage rental is required. Udacity’s Autonomous Car Simulator, CARLA Simulator, and Donkey Car can be installed on a personal computer, so there is no usage fee. Vehicle A and Vehicle B in this paper can be run on self-made tracks, which can be printed on A0 posters for USD 15 per sheet. It is worth noting that this paper introduces a unique test and scoring system, which allows the autonomous car’s performance to be measured immediately after training, specifically the duration for which its wheels are in contact with the lane markers. Overall, the design in this paper is more cost-effective, more flexible in terms of adding or modifying vehicle functions, and provides a clearer and more structured way to evaluate and verify performance in testing, especially in educational settings.

4. Discussion

We conclude from the above experiments that Vehicle A, with the mechanism of steering through rotational speed difference, performed poorly in controlling the direction of the vehicle, especially when driving on an S-shaped curve or sharp turns.
We also found that when the camera was installed at position (II), the image of the track ahead could be captured much earlier before a turn because of the camera’s higher position; under the same training mode, a camera mounted high on the vehicle body therefore yields much better front images for training the autonomous vehicle.
The above experiments further showed that, when only front driving images were used to control the steering of the autonomous vehicle at a fixed throttle, tests failed under some conditions because of the difference between the throttle used for training and the throttle used for testing. Therefore, throttle and front images must be taken into consideration together.
The vehicle design in this study is relatively inexpensive compared to other platforms, allowing each student to have their own autonomous car for implementation in group projects. Although the hardware for the test and scoring system is more expensive, it can be shared during the course for cost efficiency. In addition, the test track in this study can be customized by combining printed posters, allowing flexibility in adapting to different classroom sizes. Furthermore, the unique design of the automatic scoring and verification system in this paper allows for a more concrete and efficient evaluation of the performance of the trained autonomous vehicles.

5. Conclusions

The system designed in this paper provides a complete environment that can be applied to the training and verification of autonomous vehicles in schools or research units. Users can modify or replace the Keras Linear model originally used in Donkey Car and can quickly verify the results through the test and scoring system to optimize the neural network. Moreover, by printing different posters, diverse types of tracks can be assembled freely, such as intersections, multi-directional tracks, etc. This system can provide a teaching and verification platform for schools conducting courses related to autonomous vehicle research.

Author Contributions

C.-C.W.: problem conceptualization, methodology, data analysis, writing—review and editing of final draft, results tabulation, and graphic presentation. Y.-C.W.: software development and execution. Y.-K.L.: data collection, training, and execution. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology (MOST), Taiwan, grant no. MOST 110-2221-E-218-019.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to express their gratitude to the Ministry of Science and Technology (MOST) for supporting this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boverie, S.; Daurenjou, D.; Esteve, D.; Poulard, H.; Thomas, J. Driver Vigilance Monitoring—New Developments. In Proceedings of the 15th IFAC World Congress on Automatic Control, Barcelona, Spain, 21 July 2002. [Google Scholar]
  2. Behringer, R.; Sundareswaran, S.; Gregory, B.; Elsley, R.; Addison, R.; Guthmiller, W.; Daily, R.; Bevly, D. The DARPA grand challenge—Development of an autonomous vehicle. In Proceedings of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004. [Google Scholar]
  3. Navarro, A.; Joerdening, J.; Khalil, R.; Brown, A.; Asher, Z. Development of an Autonomous Vehicle Control Strategy Using a Single Camera and Deep Neural Networks; SAE Technical Paper 2018-01-0035; SAE International: Warrendale, PA, USA, 2018. [Google Scholar] [CrossRef]
  4. Bojarski, M.; Testa, D.D.; Dworakowski, D.; Firner, B.; Flepp, B.; Goyal, P.; Jackel, L.D.; Monfort, M.; Muller, U.; Zhang, J.; et al. End to End Learning for Self-Driving Cars. arXiv 2016, arXiv:1604.07316. [Google Scholar]
  5. Bojarski, M.; Yeres, P.; Choromanska, A.; Choromanski, K.; Firner, B.; Jackel, L.; Muller, U. Explaining how a deep neural network trained with end-to-end learning steers a car. arXiv 2017, arXiv:1704.07911. [Google Scholar]
  6. Bojarski, M.; Choromanska, A.; Choromanski, K.; Firner, B.; Jackel, L.; Muller, U.; Zieba, K. VisualBackProp: Efficient visualization of CNNs. arXiv 2017, arXiv:1611.05418. [Google Scholar]
  7. AWS DeepRacer Documentation. Available online: https://docs.aws.amazon.com/deepracer/?icmpid=docs_homepage_ml (accessed on 3 October 2022).
  8. Udacity’s Autonomous Car. Available online: https://github.com/udacity/self-driving-car-sim (accessed on 7 November 2022).
  9. CARLA Open-Source Simulator for Autonomous Driving Research. Available online: https://carla.org/ (accessed on 26 September 2022).
  10. Autorope. Donkeycar: A Python Self Driving Library. Available online: https://github.com/autorope/donkeycar (accessed on 2 May 2022).
  11. Cota, J.L.; Rodríguez, J.A.T.; Alonso, B.G.; Hurtado, C.V. Roadmap for development of skills in Artificial Intelligence by means of a Reinforcement Learning model using a DeepRacer autonomous vehicle. In Proceedings of the 2022 IEEE Global Engineering Education Conference (EDUCON), Tunis, Tunisia, 28–31 March 2022. [Google Scholar] [CrossRef]
  12. Bojarski, M.; Chen, C.; Daw, J.; Değirmenci, A.; Deri, J.; Firner, B.; Flepp, B.; Gogri, S.; Hong, J.; Jackel, L.; et al. The NVIDIA PilotNet Experiments. arXiv 2020, arXiv:2010.08776v1 [cs.CV]. [Google Scholar]
  13. Óscar, P.G.; Rafael, B.; Elena, L.G.; Luis, M.B.; Carlos, G.H.; Rodrigo, G.; Alejandro, D.D. Deep reinforcement learning based control for Autonomous Vehicles in CARLA. Multimed. Tools Appl. 2022, 81, 3553–3576. [Google Scholar]
  14. Óscar, P.G.; Rafael, B.; Elena, L.G.; Luis, M.B.; Carlos, G.H.; Alejandro, D.D. Deep Reinforcement Learning based control algorithms: Training and validation using the ROS Framework in CARLA Simulator for Autonomous applications. In Proceedings of the 2021 IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan, 11–17 July 2021. [Google Scholar]
  15. Terapaptommakol, W.; Phaoharuhansa, D.; Koowattanasuchat, P.; Rajruangrabin, J. Design of Obstacle Avoidance for Autonomous Vehicle Using Deep Q-Network and CARLA Simulator. World Electr. Veh. J. 2022, 13, 239. [Google Scholar] [CrossRef]
  16. Gutiérrez-Moreno, R.; Barea, R.; López-Guillén, E.; Araluce, J.; Bergasa, L.M. Reinforcement Learning-Based Autonomous Driving at Intersections in CARLA Simulator. Sensors 2022, 22, 8373. [Google Scholar] [CrossRef] [PubMed]
  17. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
Figure 1. System block diagram.
Figure 2. Hardware block diagrams of the two vehicles: (a) Vehicle A, steered by controlling the rotational speed difference between the left and right wheels; (b) Vehicle B, steered by controlling a servo motor.
Figure 3. Vehicle test tracks: (a) Track A, a closed ellipse track with an S-shaped curve; (b) Track B, a closed ellipse track.
Figure 4. Flow of the autonomous vehicle training and verification process: (a) driving for data collection, (b) data correction, (c) training through the neural network, and (d) testing and scoring.
Figure 5. Schematic diagram of the processing method for vehicle off-track detection.
Figure 6. Photo of the system installation.
Figure 7. Marking the lane markers of the test track: (a) the image of the test track background; (b) the image of the lane markers of the test track.
Figure 8. Schematic diagram of the program positioning vehicle coordinates: (a) image of a vehicle driving on the track, (b) test track background, and (c) image of vehicle positioning.
Figure 9. Schematic diagram of lane-touching conditions of the autonomous vehicle: (a) the vehicle body crossing the track line, which does not count as off-track; (b) a wheel touching the lane marker, which counts as off-track.
Figure 10. The training process of the wheel image recognition system for autonomous vehicles.
Figure 11. Schematic diagram of the turning vehicle: (a) a top-down view of the vehicle, (b) locating the wheels, and (c) obtaining the wheels’ outline.
Figure 12. Screenshot of the execution of the test and scoring system.
Figure 13. A flow chart depicting the process of course execution.
Figure 14. Photos of the two vehicles: (a) Vehicle A, steered by controlling the rotational speed difference between the left and right wheels; (b) Vehicle B, steered by controlling a servo motor.
Figure 15. Camera installation positions: low position (I), high position (II), and position linked with the steering mechanism (III).
Figure 16. Experimental field: (a) the view from position (I); (b) the view from position (II).
Table 1. Test results of vehicles driving with 35% throttle with different steering mechanisms on different tracks for 10 laps.

Test Vehicle | Track A Countertouch | Track A CountertouchFrame | Track B Countertouch | Track B CountertouchFrame
Vehicle A | 32 | 526 | 14 | 245
Vehicle B | 9 | 151 | 4 | 63
Table 2. Comparison results of test driving for ten laps at three camera installation positions.

Camera Installation Position | Track A Countertouch | Track A CountertouchFrame | Track B Countertouch | Track B CountertouchFrame
Position (I) | 22 | 358 | 11 | 176
Position (II) | 9 | 151 | 4 | 63
Position (III) | fail | fail | fail | fail
Table 3. Comparison results of training and testing of the autonomous vehicle for 10 laps with different throttles.

Throttle in Training | Testing at 20% (Countertouch / CountertouchFrame) | Testing at 35% (Countertouch / CountertouchFrame) | Testing at 50% (Countertouch / CountertouchFrame)
20% | 8 / 136 | 17 / 274 | fail
35% | 13 / 213 | 9 / 151 | fail
50% | 13 / 217 | 11 / 182 | 9 / 143
Table 4. Comparison results of the items for different methods.

Type | Physical Vehicles | Prices | Test Environment | Auto Scoring and Verification System
AWS DeepRacer | AWS DeepRacer and Evo car | DeepRacer USD 399; DeepRacer Evo USD 598 | AWS Cloud | N/A
Udacity's Autonomous Car Simulator | N/A | N/A | Simulator based on Unity | N/A
CARLA Simulator | N/A | N/A | Simulator based on the Unreal Engine | N/A
Donkey Car | Donkey Car | USD 325 | Simulator based on the OpenAI gym wrapper | N/A
Vehicle A in this paper | Vehicle A | USD 110 | Piecing together output posters | Available
Vehicle B in this paper | Vehicle B | USD 132 | Piecing together output posters | Available
