Article

A Framework for Representing, Building and Reusing Novel State-of-the-Art Three-Dimensional Object Detection Models in Point Clouds Targeting Self-Driving Applications

1 Algoritmi Centre, University of Minho, 4800-058 Guimarães, Portugal
2 Associação Laboratório Colaborativo em Transformação Digital (DTx Colab), 4800-058 Guimarães, Portugal
3 Bosch Car Multimédia, 4700-113 Braga, Portugal
4 Capacity Building and Sustainability of Agri-Food Production, Centro ALGORITMI, University of Trás-os-Montes and Alto Douro, 5000-801 Vila Real, Portugal
5 Intelligent System Associate Laboratory (LASI), 4800-058 Guimarães, Portugal
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 8 May 2023 / Revised: 7 July 2023 / Accepted: 13 July 2023 / Published: 15 July 2023

Abstract:
The rapid development of deep learning has brought novel methodologies for 3D object detection using LiDAR sensing technology. Improvements in precision and inference speed have enabled high-performance, real-time detection, which is especially important for self-driving purposes. However, the pace of these developments complicates the research process in this area, since new methods, technologies and software versions lead to different project necessities, specifications and requirements. Moreover, the improvements brought by new methods may be due to newer versions of deep learning frameworks and not just the novelty and innovation of the model architecture. Thus, it has become crucial to create a framework with the same software versions, specifications and requirements that accommodates all these methodologies and allows for the easy introduction of new methods and models. A framework is proposed that abstracts the implementation, reuse and building of novel methods and models. The main idea is to facilitate the representation of state-of-the-art (SoA) approaches and simultaneously encourage the implementation of new approaches by reusing, improving and innovating modules in the proposed framework, which has the same software specifications to allow for a fair comparison. This makes it possible to determine whether a key innovation truly outperforms the current SoA by comparing models in a framework with the same software specifications and requirements.

1. Introduction

The field of computer vision has seen significant advancements in recent years, particularly in the area of 3D object detection from point cloud data. However, there is still a need for a general representation framework that can be applied to a wide range of 3D object detection tasks, regardless of the specific sensor or application domain. The growth in computational power offered by cutting-edge GPUs in recent years has allowed deep learning algorithms to be applied to object detection in several domains. One such domain is autonomous driving using light detection and ranging (LiDAR) data, where these algorithms represent a considerable gain in detection efficiency, precision and inference speed [1].
In recent years, there has been significant progress in 3D object detection models based on LIDAR data for self-driving applications. A multitude of frameworks and projects have been proposed, each with its own unique approach to addressing the challenges of detecting and tracking objects in a 3D environment. However, this diversity also poses a challenge when it comes to deploying these models for onboard inference in a self-driving vehicle [2,3].
The misclassification of off-road regions is one of the difficulties with LiDAR-based object detection highlighted in [4]. Finding and classifying off-road areas is essential for safe and accurate autonomous navigation. The authors suggest combining high-definition (HD) maps with LiDAR data to overcome this problem. The platform improves object recognition and categorization by adding HD maps, which offer comprehensive information about the road network. Thanks to the detailed road geometry data in HD maps, the LiDAR system can more easily distinguish between legitimate obstacles and off-road areas. By improving object identification precision and lowering false positives and false negatives, this integration makes autonomous navigation safer and more dependable. The key idea is to use LiDAR's ability to capture precise 3D information about the environment and to integrate it with HD maps to improve object recognition and classification in the context of automated driving systems.
One major issue is the enormous variation in software versions, libraries and supported platforms, making it difficult to assemble and deploy these models correctly. Additionally, self-driving requirements must be taken into consideration, such as the need for operationalization with different modules and the limited computational resources available in onboard systems.
Regardless, the 3D object detection models discussed in the literature take point clouds as input and are known to be more complex: they have a deeper pipeline and process a more significant amount of data. For example, a point cloud usually comprises between 100 k and 120 k points [3], where each point holds data related to the Euclidean distance and signal reflection, that is, 128 bits to encode the information of each point.
Recent research, such as [3,5,6,7], has suggested that the minimum operating requirements for self-driving applications should include an overall class classification of at least 60 mAP and an inference time of less than 100 ms.
In this context, the need for a standardized and optimized framework for 3D object detection based on LIDAR data becomes even more important. Such a framework could simplify the deployment process, enable better interoperability across different systems and facilitate the development of more efficient and effective self-driving systems.

Our Contribution

This paper proposes a general SoA representation framework for 3D object detection from point clouds. It supports multiple SoA 3D object detection methods with highly refactored code for both one-stage and two-stage methods. It also enables the implementation and reuse of different approaches with less manual engineering effort by proposing an abstract way of building object detectors, while facilitating the implementation of new methods in each module of the framework. By implementing different SoA models, we aim to offer the scientific community a framework for real-time inference testing and for measuring the trade-off between metrics (mAP vs. inference time) across 3D object detection models applied to self-driving applications within a single framework.
Therefore, the contributions proposed in this paper are as follows:
  • An abstract framework for the implementation/representation of 3D object detection models using LiDAR data.
  • Less engineering effort to implement new methods in the different framework modules.
  • A simpler way to change hyperparameters and retrain models using YML files (a minimal example is sketched below).
  • Automatic model representation from these YML files.
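As an illustration of this configuration-driven workflow, the sketch below shows how a model could be described in a YML file and handed to a builder routine. The key names and the build_detector helper are hypothetical and only mirror the module structure described in Section 4; they are not the framework's exact schema.

```python
import yaml

# Hypothetical YML description of a detector; key names are illustrative only.
CONFIG = """
model:
  name: second_example
  data_representation: {type: voxel, voxel_size: [0.05, 0.05, 0.1]}
  local_feature_encoder: {type: mean_vfe}
  middle_extractor: {type: sparse_conv_3d}
  detection_head: {type: rpn_head, num_classes: 3, anchors_per_location: 2}
train:
  epochs: 200
  learning_rate: 0.01
  weight_decay: 0.01
"""

cfg = yaml.safe_load(CONFIG)

def build_detector(model_cfg: dict) -> None:
    """Toy stand-in for the framework's builder: each module type would be
    looked up in a registry and instantiated with its own options."""
    print(f"building {model_cfg['name']}")
    for module in ("data_representation", "local_feature_encoder",
                   "middle_extractor", "detection_head"):
        print(f"  {module}: {model_cfg[module]['type']}")

build_detector(cfg["model"])
```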
The organization of this paper is as follows: Section 2 presents state-of-the-art work related to 3D object detection systems and hardware platforms for their implementation. Section 3 describes the methodology used to represent, train and tune the deep learning models in the framework. Section 4 presents the proposed framework and its components, Section 5 details the specification of each selected 3D object detection model within the framework, and Section 6 describes network training and fine tuning. The presentation of performance evaluation results, the comparison of results and the discussion of these results occur in Section 7. Finally, Section 8 presents the main conclusions of this paper and future work.

2. Related Work

In recent years, object detection models in point clouds presented in the literature have been highly improved, and higher and higher detection performance has been achieved. Based on the literature, the most discussed models are divided into two broad categories: approaches based on CNN 3D and approaches based on CNN 2D, where different data representations, backbone networks and multiscale resource learning techniques can be adopted [3].
When it comes to 3D object detection approaches, they can be classified into three types: the first category is based on volumetric representations, the second on pillars and the third on raw points. These are novel models recognized by the scientific community that introduce innovations across the architecture pipeline, as well as high accuracy and performance in 3D object detection.
The first category, which can be divided into one-stage and two-stage approaches, is usually based on a volumetric representation to discretise the point cloud. One-stage approaches have only a single stage; SECOND [8] is an example. This 3D convolution-based technique produces object class predictions, bounding box regression and orientation classification. Two-stage approaches produce the same outputs as the single stage but additionally fine-tune the bounding boxes. Examples of two-stage approaches are P-RCNN [9], VoxelRCNN [10] and PartA 2 [11]. These methods usually require more computing resources because they either use the costly volumetric representation of the point cloud or rely on computationally intensive 3D convolutions.
The second category of models falls under one-stage methods and uses 2D convolutions in place of the computationally intensive 3D convolutions. PointPillars [12] is an example of this approach. To decrease the high computational cost of handling 3D LiDAR data, these models usually compress the data into a 2D projection or organize it into pillars [12]. While these methods are quicker and suitable for real-time applications, they sacrifice detection capability by losing some information, which highlights the trade-off between inference time and accuracy.
The third category of methods, such as Point RCNN [13], utilizes a two-stage approach based on raw point data and voxel representation to take advantage of their respective benefits. In the first stage, the network uses voxel representation as input and performs light convolutional operations, which results in a small number of high-quality initial predictions. An attention mechanism effectively combines coordinate and indexed convolutional features of each point in the initial forecast, maintaining both accurate localization and contextual information. The second stage uses the fused feature of interior points to refine the prediction [14].
Accurate object recognition in autonomous vehicles can be considerably improved by utilizing shared visual data from numerous vehicles and infrastructure sensors. This method can get beyond restrictions like occlusion and a narrow field of view by exchanging information with nearby infrastructure and vehicles. Accurate vehicle position, velocity and attitude information are essential to achieve this improvement [4].
The autonomous vehicle kinematics and dynamics synthesis can estimate a vehicle's side slip angle (SSA), an important state parameter in vehicle dynamics, based on a consensus Kalman filter. The kinematics and dynamics of the vehicle are highly nonlinear; yet, after linearization, a linear system can approximate them well. The vehicle state in this linear system consists of the vehicle's position, velocity and attitude, and a first-order differential equation models how the state evolves. The consensus Kalman filter-based approach estimates the SSA accurately and robustly from the vehicle's kinematics and dynamics, and the resulting estimates can be used to enhance vehicle control and safety [4]. Accurate estimation of vehicle kinematics, including location, velocity and attitude, can considerably improve an autonomous car's ability to identify objects: by adding this information to the object detection algorithms, the system can better understand the dynamics and behavior of nearby vehicles, pedestrians and other objects in the environment. As a result, object detection becomes more accurate, especially in conditions where occlusion and a small field of view present difficulties, and autonomous vehicles can overcome these limitations.
Object detection in vehicle surroundings or remote sensing imagery poses distinct problems in comparison to natural scene image detection. Specialized detection techniques are needed in these domains to identify specific objects of interest, such as cars, pedestrians or tassels in UAV footage. The “YOLOv5-Tassel” method is one of the many strategies being investigated by researchers to improve object detection performance in these fields [15]. Improvements including architecture adjustments, data augmentation methods and hyperparameter optimization are included in the YOLOv5-Tassel model. These improvements aim to increase the reliability and precision of tassel detection in UAV imagery. The authors thoroughly evaluate the YOLOv5-Tassel model's performance on a variety of datasets and compare it to other detection techniques to show its effectiveness; the results show that YOLOv5-Tassel detects tassels in UAV imagery with a high degree of accuracy. Compared with object recognition in natural scenes, object detection in remote sensing images is more challenging because it requires detecting targets across different scenes. Although there are many remote sensing images, far fewer of them are labeled than in natural scene datasets, which makes it harder for training models to converge [16].

3. Methodology

To implement/represent the 3D object detection models based on deep learning in the framework, we employed a three-step methodology, which is depicted in Figure 1. (1) Firstly, a set of model architecture and hyperparameter specifications are defined in different configuration files. These files define the specifications of the components of each module in the framework (described in Section 4) as well as the training and test specifications that are then used to build, train and test the object detectors. We chose the models for 3D object detection based on a review of the existing literature, which is outlined in Section 2 and elaborated further in [3]. The framework, described in Section 4, was developed to facilitate the representation of any object detection model.
Once the object detector is built, it is subjected to a training and evaluation pipeline (2), where various optimizations can be performed to enhance the accuracy metrics and fulfill the inference time requirements. In our project, since different components need to operate simultaneously, such as the SLAM algorithm and object detector, we define an overall mAP of 60% and an inference time of less than 100 ms (metrics are always subject to trade-offs). The training and evaluation step can be carried out by changing the training specification in the respective model configuration. The concept behind defining the training and testing parameters in these configuration files is to make it easier to modify them and subsequently submit the object detector to the same training and evaluation pipeline. The pipeline was executed on a server-side node with an Intel Core i9 processor, 64 GB of RAM and a Quadro RTX 8000 GPU. Therefore, the proposed workflow follows an iterative process, where the model is fine-tuned. The training and evaluation steps are repeated whenever necessary until they meet the requirements and satisfy the application requirements. The evaluation and comparison process is carried out using KITTI benchmarks using the validation set on the aforementioned server node. In conclusion, this workflow guarantees that the models meet the application requirements and attain the highest possible accuracy. This procedure identifies a group of potential object detection models for the subsequent step.
After completing step (2) workflow, a comparison phase of the resulting models (step (3)) is conducted to select the model that can ensure a better balance between precision and inference time. The subsequent section presents information on the architecture of the framework, the chosen deep learning models and the parameters used in the fine-tuning process.

4. Framework for Representing 3D Object Detection Models

Our framework’s key innovation is that it facilitates the representation of any object detector through YML configuration files that define their module specifications in each framework component. Moreover, this framework, shown in Figure 2, aims to facilitate the implementation and integration of new modules in each framework component to allow for the comprehensive representation of the different state-of-the-art 3D object detectors.
The first component, (1) data representation, receives the set of points and discretizes them in a set of data structures, such as pillars or voxels, or only passes the set of points to be used by the middle extractor module (3). (2) The local feature encoder receives as input these data structures—more specifically, the set of pillars or voxels—and encodes and concatenates their features. Then, in the middle extractor (3), 3D and/or 2D backbones extract features from local encoded features, which are used by the (4) detection head to predict object class, bounding box offsets and direction (5). (4.1) This detection head based on RPN can be assisted by two modules, a (4.2) point head module and (4.3) region of interest (RoI) head module, which refines the predicted bounding box offsets and orientation. (4.2) The point head module is composed of three networks: a point intrapart offset head [11], a point-based segmentation head for keypoint segmentation [17] and another point-based segmentation head based on [13]. The (4.3) RoiHead module is defined for each state-of-the-art model based on their specificities, but typically it is composed of a proposal layer, which proposes a set of RoIs, a RoI feature extraction that pools the RoI features and a RoI head that predicts RoI class and bounding box offsets.

4.1. Point Cloud Data Representation

We receive an unordered set of points $PC = \{p_1, p_2, p_3, \ldots, p_n\}$, where $n > 0$ and each point $p$ is represented as $(p_x, p_y, p_z, p_r)$, where $p_x$, $p_y$ and $p_z$ correspond to coordinates in the three-dimensional Cartesian axes and $p_r$ is the reflectance value provided by the LiDAR sensor. A point cloud range $PC_R$ is a tuple $(L, H, W)$, where $L$ consists of $(x_{min}, x_{max})$, $H$ consists of $(y_{min}, y_{max})$ and $W$ consists of $(z_{min}, z_{max})$. We denote a point cloud subset with respect to $PC_R$ as $PC_R = \{p_i : p_i \in PC,\; x_{min} \leq p_{i_x} \leq x_{max},\; y_{min} \leq p_{i_y} \leq y_{max},\; z_{min} \leq p_{i_z} \leq z_{max}\}$.
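For illustration, a minimal sketch of the $PC_R$ cropping step is shown below, assuming the point cloud is an N × 4 NumPy array of (x, y, z, r) values; the range values are only illustrative.

```python
import numpy as np

def crop_point_cloud(points: np.ndarray, pc_range) -> np.ndarray:
    """Keep only the points inside the point cloud range PC_R.
    points: (N, 4) array with columns (x, y, z, reflectance).
    pc_range: (x_min, y_min, z_min, x_max, y_max, z_max)."""
    x_min, y_min, z_min, x_max, y_max, z_max = pc_range
    mask = ((points[:, 0] >= x_min) & (points[:, 0] <= x_max) &
            (points[:, 1] >= y_min) & (points[:, 1] <= y_max) &
            (points[:, 2] >= z_min) & (points[:, 2] <= z_max))
    return points[mask]

# Illustrative KITTI-like range; the real values are configuration-dependent.
pc = np.random.rand(120_000, 4) * 80.0
pc_r = crop_point_cloud(pc, (0, -40, -3, 70.4, 40, 1))
```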

4.1.1. Pillar Representation

The framework receives the points in $PC_R$ and discretizes them in the X–Y axis, thus creating a set of pillars $PL_p = \{pl_1, pl_2, pl_3, \ldots, pl_p\}$, where $p = m_p$, $m_p$ is the maximum number of pillars and $m_p \in \mathbb{N}^+$. Each $PL_p$ has a fixed size in $PC_R$, represented by a tuple $S_{PL_p} = (w, h)$, where $w$ is the width of the pillar along the x axis and $h$ is the height of the pillar along the y axis. The points are grouped according to the pillar in which they reside.
To deal with the sparsity problem and save computation, a maximum number of points per pillar $N_P$ is defined. The points are randomly sampled if the number of points in a pillar is higher than $N_P$. On the other hand, zero padding is added in cases of fewer than $N_P$ points.
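The grouping, random sampling and zero padding described above can be sketched as follows; the grid size, maximum number of pillars and maximum points per pillar are illustrative defaults, not the exact values used in the framework.

```python
import numpy as np

def pillarize(points, pc_range, pillar_size=(0.16, 0.16),
              max_pillars=12000, max_points=32):
    """Group cropped points into X-Y pillars with random sampling/zero padding."""
    x_min, y_min = pc_range[0], pc_range[1]
    ix = ((points[:, 0] - x_min) / pillar_size[0]).astype(np.int32)
    iy = ((points[:, 1] - y_min) / pillar_size[1]).astype(np.int32)
    keys = np.stack([ix, iy], axis=1)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    uniq = uniq[:max_pillars]                       # cap the number of pillars
    pillars = np.zeros((len(uniq), max_points, points.shape[1]), np.float32)
    for p in range(len(uniq)):
        pts = points[inverse == p]
        if len(pts) > max_points:                   # random sampling
            pts = pts[np.random.choice(len(pts), max_points, replace=False)]
        pillars[p, :len(pts)] = pts                 # remaining rows stay zero
    return pillars, uniq
```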

4.1.2. Voxel-Based Representation

The voxelization process follows a similar procedure to the pillar discretization; however, the received points are discretized along the X–Y–Z axes. This creates a set of voxels $VL_j = \{vl_1, vl_2, vl_3, \ldots, vl_j\}$, where $j = m_v$, $m_v$ is the maximum number of voxels and $m_v \in \mathbb{N}^+$. Each $VL_j$ assumes a fixed size in $PC_R$, represented by the tuple $S_{VL_j} = (w, h, d)$, where $w$ is the width of the voxel along the x axis, $h$ is the height of the voxel along the y axis and $d$ is the depth of the voxel along the z axis.
A random sampling strategy is also applied to save computation, and a maximum number of points per voxel $N_V$ is also used. The strategy to sample points or apply zero padding is the same as in the pillar representation.

4.1.3. Point-Based

The idea in the point-based strategy is to pass the cropped point cloud, herein denoted as $PC_R$, to the middle feature encoder.

4.2. Local Feature Encoder

The local feature encoder receives the data representation structures $DS$, such as pillars, denoted as $PL$; voxels, $VL$; or just the set of points of the cropped area, $PC_R$. Then, a set of methods is applied to obtain features and produce dense tensors, in the case of the pillar feature network (PFN) and voxel feature encoder (VFE), or to calculate these features by simply taking the mean values of the point coordinates within each voxel, in the case of the mean VFE method.

4.2.1. Pillar Feature Network

The features of each pillar, $PL$, are augmented into a tensor $D = (x, y, z, r, x_c, y_c, z_c, x_{pl_c}, y_{pl_c})$, where the subscript $c$ denotes the distance to the arithmetic mean of all points in $PL$, and $pl_c$ is the offset distance from the pillar center $PL_{x,y}$.
For this purpose, (1) the pillar feature network (PFN) receives the augmented pillar features as input and applies linear transformations to each point, herein described as $linear(Pl_{in}) = Pl_{out}$, where $Pl_{in}$ corresponds to the initial tensor $Pl_{in} = (P, N, D_{in})$ and $Pl_{out}$ to the output tensor. In $Pl_{out}$, all but the last dimension have the same shape as the input. Dimension $D_{out}$ results from the linear transformation of $D_{in}$, thus producing $Pl_{out} = (P, N, D_{out})$. Then, batch-norm and ReLU are applied to this tensor. Afterwards, all resulting features are aggregated. This process generates a dense tensor representing the pillars as a tuple $(D, P, N)$, where $D$ is the above-mentioned augmented point, $P$ is the number of non-empty pillars per batch and $N$ is the number of points per pillar. Next, a max pooling operation over the channels is used to generate a tensor of size $(D_{out}, P)$.
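A minimal PyTorch sketch of one PFN layer as described above (linear transformation, batch norm, ReLU and a max over the points of each pillar) is given below; the layer sizes are illustrative and the framework may organise these operations differently.

```python
import torch
import torch.nn as nn

class PillarFeatureLayer(nn.Module):
    """Encodes a (P, N, D_in) tensor of augmented pillar points into (D_out, P)."""
    def __init__(self, d_in: int = 9, d_out: int = 64):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out, bias=False)
        self.norm = nn.BatchNorm1d(d_out)

    def forward(self, pillars: torch.Tensor) -> torch.Tensor:
        p, n, _ = pillars.shape
        x = self.linear(pillars)                         # (P, N, D_out)
        x = self.norm(x.view(p * n, -1)).view(p, n, -1)  # batch-norm over channels
        x = torch.relu(x)
        return x.max(dim=1).values.t()                   # (D_out, P) pillar features

features = PillarFeatureLayer()(torch.rand(100, 32, 9))  # toy batch of pillars
```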

4.2.2. Voxel Feature Encoder

Similar to the PFN, the points in each voxel, $VL_j = \{pt_i = (x_i, y_i, z_i, r_i) \in \mathbb{R}^4\}$, $i = \{1, 2, \ldots, N_V\}$, are augmented by calculating the offset distance of each point to the voxel center $VL_{x,y,z}$, herein denoted as $vl_c$, which generates the tensor $VL_j = \{pt_i = (x_i, y_i, z_i, r_i, x_{vl_c}, y_{vl_c}, z_{vl_c}) \in \mathbb{R}^7\}$, $i = \{1, 2, \ldots, N_V\}$, where $N_V$, as mentioned before, is the maximum number of points per voxel. Afterwards, each $pt_i$ is subject to VFE layers $VFEL_l$, where $l \geq 1$. Each $VFEL_l$ is composed of a set of transformations, where a linear transformation, batch-norm and ReLU are applied. Then, all point features of $VL_j$ resulting from the above-mentioned transformations, herein described as $pf_j$, are aggregated. Each $pf_j$ can be described as $pf_j \in \mathbb{R}^{out}$, where $out$ is the output dimension that results from the linear transformation of all points $pt_i$. The output size can be described as $out_l = F_o / 2$, where $F_o = \{f_1, f_2, \ldots, f_o\}$, $F_o \in \mathbb{N}^+$, denotes the output features of a specific $VFEL$ of index $l$. Then, all point features $PF$, $pf_j \in PF$, are subject to a max pooling operation over the channels. The output tensor is described as $pf_{r_m} \in \mathbb{R}^{out}$, where $m = 1$. Afterwards, a repeat operation $repeat(pf_r, k)$ is performed on this tensor in $VL_j$, which repeats the point feature resulting from max pooling $k$ times, where $k = \{1, 2, 3, \ldots, N_V\}$. Each $pf_{r_k}$ is augmented with $pf_j$ to generate $pf_{o_j} = (pf_{r_k}, pf_j) \in \mathbb{R}^{2 \cdot out}$, $k = \{1, 2, \ldots, N_V\}$ and $j = \{1, 2, \ldots, N_V\}$. The set of features for each voxel can be described by the tuple $VL_{out} = \{VL_j = stack(pf_{o_j})\}$, where $j = \{1, 2, 3, \ldots, N_V\}$ and $stack = (pf_{o_1} \times pf_{o_2} \times \ldots \times pf_{o_j})$, and linear, batch-norm, ReLU and max pooling are applied to each $VL_j$. Thus, $VL_j \in \mathbb{R}^F$ means that $VL_j$ has the output dimension $F$, the output feature size of the last VFE layer.
Finally, it generates a list of obtained voxel features $VLA_{out} = \{VL_j = \{vl_1, vl_2, \ldots, vl_j\}\}$, $VL_j \in \mathbb{R}^F$, $F = f_o$, where $VLA_{out}$ contains the above-mentioned augmented features of all voxels.

4.2.3. Mean Voxel Feature Encoder

Mean VFE receives a set of voxels $VL$, sums all points residing in each voxel along each axis and divides by the number of points in each one. This operation can be described as $VLM_{out} = \{mean(vl_j)\}_{k=0}^{m_v} = \{pt_{f_i} = (\frac{\sum_{i=1}^{n_v} pt_{x_i}}{count(pt_x)}, \frac{\sum_{i=1}^{n_v} pt_{y_i}}{count(pt_y)}, \frac{\sum_{i=1}^{n_v} pt_{z_i}}{count(pt_z)}, \frac{\sum_{i=1}^{n_v} pt_{r_i}}{count(pt_r)}) \in \mathbb{R}^4\}$, $k = [0, N_V[$, with the counts $count(pt_x)$, $count(pt_y)$, $count(pt_z)$ and $count(pt_r)$ taken over the points $pt \in vl_j$. Here, $n_v$ corresponds to the total number of points of the voxel $vl_j \in VL$ in a given axis, $m_v$ is the maximum number of voxels and $pt_{f_i} \in VL_{PF}$ corresponds to a resulting point. This strategy considers the voxel-wise features as a new voxel center $VL_{x,y,z,r}$ and an approximate equivalence to the raw point cloud data. The idea herein is to process the voxel-wise features in the middle feature encoder more efficiently, especially by the 3D sparse convolutions, since they generate $m_v$ (the maximum number of voxels, as described in Section 4.1) non-empty voxels.
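A sketch of the mean VFE operation is shown below, assuming the voxelized points are given as a flat (M, 4) tensor together with the voxel index of each point; this tensor layout is an assumption made for illustration.

```python
import torch

def mean_vfe(points: torch.Tensor, voxel_ids: torch.Tensor, num_voxels: int):
    """Average the (x, y, z, r) values of the points falling in each voxel.
    points: (M, 4) tensor; voxel_ids: (M,) long tensor in [0, num_voxels)."""
    sums = torch.zeros(num_voxels, points.shape[1])
    counts = torch.zeros(num_voxels, 1)
    sums.index_add_(0, voxel_ids, points)
    counts.index_add_(0, voxel_ids, torch.ones(len(points), 1))
    return sums / counts.clamp(min=1)        # (num_voxels, 4) voxel-wise features

feats = mean_vfe(torch.rand(1000, 4), torch.randint(0, 200, (1000,)), 200)
```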

4.3. Middle Feature Extractor

The (3) middle feature extractor is responsible for extracting more features from the (2) local feature encoders to provide more context for the shape description of objects for the networks of the detection head module. Various methods are used; herein, we separated them into 3D backbones and 2D backbones, which will be described in more detail below.

4.3.1. Backbone 3D

A variety of methods resort to 3D backbones using the sparse CNN component. Also, models can use a voxel set abstraction 3D backbone, aiming to encode the multiscale semantic features obtained by sparse CNN to keypoints. Others use PointNet++ with multiscale grouping for feature extraction and to obtain more context to the shape of objects, and then pass these features to the (4) detection head module.
3D Sparse Convolution
The 3D sparse convolution method receives the voxel-wise features of the VFE, $VLA_{out}$, or of the mean VFE, $VLM_{out}$.
This backbone is represented as a set of blocks $BLC$ in the form $\{blc_1, blc_2, \ldots, blc_m\}$, where $m = 6$. Each block $blc_j \in BLC$, $j \leq m$, can be defined by a set of sparse sequential operations denoted as $SSQ_s = \{ssq_1, ssq_2, ssq_3, \ldots, ssq_s\}$, $s \geq 1$. Each $SSQ_s$ is described by $(((SuM \wedge \neg SpC) \vee (SpC \wedge \neg SuM)), Bn, RL)$, where $SuM$ means submanifold sparse convolution 3D [18], $SpC$ means spatially sparse convolution 3D [19], $Bn$ means a 1D batch normalization operation and $RL$ represents the ReLU method. The last method assumes the standard procedure, as mentioned in [20].
In our framework, the set of blocks assumes the following configurations:
  • The input block $blc_1$ can be described by $blc_1 = \{sq_1 = (SuM, BN, RL)\}$;
  • The next block is represented in the form $blc_2 = \{sq_1 = (SuM, BN, RL)\}$;
  • Block 3 is represented as $blc_3 = \{sq_1 = (SpC, BN, RL), sq_2 = (SuM, BN, RL), sq_3 = (SuM, BN, RL)\}$;
  • Block 4 is denoted as $blc_4 = \{sq_1 = (SpC, BN, RL), sq_2 = (SuM, BN, RL), sq_3 = (SuM, BN, RL)\}$;
  • Block 5 is denoted as $blc_5 = \{sq_1 = (SpC, BN, RL), sq_2 = (SuM, BN, RL), sq_3 = (SuM, BN, RL)\}$;
  • The last block is defined by $blc_6 = \{sq_1 = (SpC, BN, RL)\}$.
The batch normalization element $Bn$ is defined by $(InB, ep, mn)$, which represents the formula in [21]. $InB$ represents the input features, which are the output features of the submanifold sparse or spatially sparse 3D convolutions, so that $(OutS \wedge \neg OutM) \vee (OutM \wedge \neg OutS)$. $ep$ represents the eps value and $mn$ the momentum value. These values are defined in Table 1.
The element $SpC$ can be represented as $(InS, OutS, KsS, StS, PdS, DlS, OpS)$. $InS$ represents the input features of $SpC$, denoted as $InS \in \mathbb{N}^+$, $InS = OutM$, where $OutM$ represents the output features of a submanifold sparse Conv3D. The element $OutS$ represents the output features resulting from applying $SpC$. $KsS$ is the kernel size of a spatially sparse convolution 3D, denoted as $KsS_s = \{ksS_1, ksS_2, \ldots, ksS_s\}$, where $s = \{1, \ldots, 3\}$, $ksS_s \in \mathbb{N}^+$ and $ksS_s = ksS_{s+1}$. The stride $StS$ can be described as a set $StS_r = \{sts_1, sts_2, \ldots, sts_r\}$, $r = \{1, \ldots, 3\}$, $sts_r \in \mathbb{N}^+$ and $sts_r = sts_{r+1}$. $PdS$ designates padding and can be defined by a set $PdS_v = \{pds_1, pds_2, \ldots, pds_v\}$, $v = \{1, \ldots, 3\}$, $pds_v \in \mathbb{N}^+$, $pds_v = pds_{v+1}$. $DlS$ means dilation and can be defined as a set $DlS_l = \{dls_1, dls_2, \ldots, dls_l\}$, $l = \{1, \ldots, 3\}$, $dls_l \in \mathbb{N}^+$, $dls_l = dls_{l+1}$. The output padding $OpS$ is represented in the form $OpS_a = \{ops_1, ops_2, \ldots, ops_a\}$, $a = \{1, \ldots, 3\}$, $ops_a \in \mathbb{N}^+$ and $ops_a = ops_{a+1}$. The configurations used in our framework are represented in Table 2.
$SuM$ is represented by $(InM, OutM, KsM, StM, PdM, DlM, OpM)$ [18]. $InM$ represents the input features passed by (2) the local feature encoder or by the last sparse sequential block $Sq_s$, and $OutM$ represents the output features of $SuM$. Thus, $InM \in \mathbb{N}^+$, with $InM = 4$ when the local encoder is the mean VFE; otherwise, $InM = F$, where $F$ represents the output features of the VFE network. Also, $InM$ can be given by $InM = OutM$ or $InM = OutS$, where $OutS$ represents the output features of an $SpC$. The element $KsM$ represents the kernel size, which can be defined as $Ks_t = \{ks_1, ks_2, \ldots, ks_t\}$, where $t = \{1, \ldots, 3\}$, $ks_t \in \mathbb{N}^+$ and $ks_t = ks_{t+1}$. $StM$ means stride and can be defined as a set $St_r = \{st_1, st_2, \ldots, st_r\}$, $r = \{1, \ldots, 3\}$, $st_r \in \mathbb{N}^+$ and $st_r = st_{r+1}$. $PdM$ represents padding and can be described by a set in the form $Pd_p = \{pd_1, pd_2, \ldots, pd_p\}$, $p = \{1, \ldots, 3\}$, $pd_p \in \mathbb{N}^+$, $pd_p = pd_{p+1}$. $DlM$ means dilation and can be described as a set $Dl_d = \{dl_1, dl_2, \ldots, dl_d\}$, $d = \{1, \ldots, 3\}$, $dl_d \in \mathbb{N}^+$, $dl_d = dl_{d+1}$. $OpM$ represents the output padding, described by a set in the form $Op_u = \{op_1, op_2, \ldots, op_u\}$, $u = \{1, \ldots, 3\}$, $op_u \in \mathbb{N}^+$ and $op_u = op_{u+1}$. The configurations used in our framework are represented in Table 3.
The hyperparameters used in each $blc_j$ are defined in Table 7.
Finally, the output spatial features $SP$ are defined by $SP \in \mathbb{R}^{B \times C \times D \times H \times W}$, i.e., a tuple $(B, C, D, H, W)$: $B$ represents the batch size; $C$ the output features of $blc_5$, represented in $SpC$ as $OutS$; $D$ the depth; $H$ the height; and $W$ the width.
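A condensed sketch of the first blocks of such a 3D sparse backbone is shown below, written against the spconv library's SubMConv3d/SparseConv3d and SparseSequential interfaces (the package must be installed). The channel sizes, strides and batch-norm eps/momentum values are illustrative; the values actually used are those in Tables 1–3.

```python
import torch.nn as nn
import spconv.pytorch as spconv

def sparse_block(conv, in_ch, out_ch, stride=1):
    """One sparse sequential operation: (SuM or SpC) + BatchNorm1d + ReLU."""
    return spconv.SparseSequential(
        conv(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm1d(out_ch, eps=1e-3, momentum=0.01),
        nn.ReLU(),
    )

# blc_1/blc_2 use submanifold convolutions; blc_3 starts with a strided SpC.
backbone = spconv.SparseSequential(
    sparse_block(spconv.SubMConv3d, 4, 16),               # blc_1: input block
    sparse_block(spconv.SubMConv3d, 16, 16),              # blc_2
    sparse_block(spconv.SparseConv3d, 16, 32, stride=2),  # blc_3: downsampling SpC
    sparse_block(spconv.SubMConv3d, 32, 32),
    sparse_block(spconv.SubMConv3d, 32, 32),
)
```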
PointNet++
We use a modified version of PointNet++ [9] based on [13] to learn features from the undiscretized raw point cloud data (herein denoted as $PC_R$) in a multiscale grouping fashion. The objective is to learn to segment the foreground points and contextual information about them. For this purpose, a set abstraction module, herein denoted as $SAM$, is used to subsample points at a continually increasing rate, and a feature proposal module, described as $FPM$, is used to capture feature maps per point with the objective of point segmentation and proposal generation. A $SAM$ is composed of $SAM = \{ptn_1, ptn_2, \ldots, ptn_g\}$, $g \in \mathbb{N}^+$, $g = \{1, 2, \ldots, 4\}$, where $ptn$ denotes the PointNet set abstraction module operations. Each $ptn_g \in PTN$ is represented by $(QGL, ML)$, where $QGL$ corresponds to the query and grouping operations used to learn multiscale patterns from points, and $ML$ is the set of specifications of the PointNet before the global pooling for each scale.
Q G L means ball query operation Q L followed by a grouping operation G L . It can be defined by the set { q g l 1 , q g l 2 } , where q g l 1 and q g l 2 correspond to two query and group operations. A ball query Q L is represented as ( R , N S , P , C P ) , where R means the radius within all points will be searched from the query point with an upper limit N S , N S N + , in a process called ball query; P means the coordinates of the point features in the form P F = { p f n = ( x n , y n , z n , ) R 3 } , n N that are used to gather the point features; C P represents the coordinates of the centers of the ball query in the form C P = { c p p = ( x c p , y c p , z c p ) R 3 } , p N + , p n , p = { 1 , 2 , , 4 } , where x c , y c , and z c are center coordinates of a ball query. Thus, this ball query algorithm searches for point features P in a radius R with an upper limit of N S query points from the centroids (or ball query centers) C P . This operation generates a list of indices I D in the form { i d 1 , i d 2 , , i d x } , x 1 , i d x I D , i d x N N C P × N S , where N C P corresponds to the number of C P . I D represents the indices of point features that form the query balls. Then, a grouping operation G L is performed to group point features, and can be described by ( P F , I D ) , in which P F and I D correspond to point features and indices of the features to group with, respectively. In each Q G L of a p t n , the number of centroids N C P will decrease, so that N C P p > N C P p + 1 , p = { 1 , 2 , , 4 } , N C P N + , and due to the relation of the centroids in ball query search, the number of indices N I D and corresponding point features will also decrease. Thus, in each p t n , the number of points features is defined by N P n > N P n + 1 , N P n + 1 = N C P p , p > 1 . The number of centroids defined in QGL during p t n operations is defined in Table 4.
Afterwards, an M L is performed, defined by a set of specifications of the PointNet before the Q G L operations. The idea herein is to capture point-to-point relations of the point features in each C P local region. The point feature coordinate translation to the local region relative to the centroid point is performed by the operation L R = { f r f = ( p x f x c f , p y f y c f , p z f z c f ) R 3 } , f = { 1 , 2 , , N S } . p x , p y and p z are coordinates of point features P F as mentioned before, and x c , y c and z c are coordinates of the centroid center. M L can be defined by a set S Q = { s q 1 , s q 2 } that represents two sequential methods. Each S Q is represented by the set of operations O P = { o p s = ( C 2 D , B n 2 D , R L ) } , s = { 0 , 1 , , 3 } , where C 2 D means convolution 2D, B n 2 D 2D batch normalization and R L represents the ReLU method. C 2 D is defined by ( I n C 2 D , O u t C 2 D , K s C 2 D , S C 2 D ) . I n C 2 D , where I n C 2 D N + represents the input features that can be received by Q G L or by the output features O u t C 2 D , O u t C 2 D N + of the o p s 1 , K s C 2 D the kernel size, and S C 2 D represents the stride of the convolution 2D. The kernel size K s C 2 D is defined by the set { k s c 2 d 1 , k s c 2 d 2 } , k s c 2 d 1 = k s c 2 d 2 and o p s S Q , S Q M L , M L P T N , k s c 2 d 1 = 1 . Also, the stride S C 2 D is represented by a set { s c 2 d 1 , s c 2 d 2 } , s c 2 d 1 = s c 2 d 2 and s c 2 d 1 = 1 , with o p s S Q , S Q M L , M L P T N . The set of specifications used in our models regarding O P are summarized in Table 5. p t n i P T N can be defined as:
$$PTN = \{ptn_i = \max(ML(SG(pf_i)))\},$$
where $\max$ denotes max pooling, $SG$ denotes random sampling of the $pf_i$ features and $ML$ denotes the multilayer perceptron network used to encode features and relative locations.
Finally, a feature proposal $FPM$ is applied employing a set of feature proposal modules $\{fp_1, fp_2, \ldots, fp_m\}$, $m = \{1, 2, \ldots, 4\}$, $m \in \mathbb{N}^+$. Each $fp_m \in FPM$ is defined by the element $SQ$ as defined above. Also, the element $SQ$ assumes a set $\{sq_1, sq_2\}$, and each $SQ$ has the same operations, the only difference being the element $s$ that describes the number of operations, assuming $s = \{1, 2\}$ instead of $s = \{1, 2, 3\}$. The configurations used in our models are summarized in Table 6.
Voxel Set Abstraction
This method aims to generate a set of keypoints from given point cloud P C R and use a keypoint sampling strategy based on farthest point sampling. This method generates a small number of keypoints that can be represented by K { p j = ( x j , y j , z j ) R B * 3 } , j = [ 1 , N K ] , where N K is the number of points features that have the largest minimum distance, and B the batch size. The farthest point sampling method is defined according to a given subset P A { p a j = ( x a j , y a j , z a j ) } , j = { 1 , 2 , , M } , P A P F , where M is the maximum number of features to sample, and subset P B { p b k ( x b k , y b k , z b k ) } , k = { 0 , 1 , 2 , , N } , P B P F , where N is the total number of points features of P F ; the point distance metric is calculated based on D { d i = { ( x b k x a j ) 2 + ( y b k y a j ) 2 + ( z b k z a j ) 2 ) } } , i M . Based on D, an operation S M { s m k = { m i n ( d i , s m i 1 ) } } , k M , i N is performed, which calculates the smallest value distance between d i and s m i 1 . s m k S M , k < N and S M represent the list of the last known largest minimum distances of point features. Assuming s m k = s m i 1 d i < s m i 1 , it returns the index I D X = { i d x k = ( i 1 ) } , . Based on s m k = { d i d i > s m i 1 } , thus I D X = { i d x k = ( i ) } . Finally, this operation generates a set of indexes in the form I D X { i d x 0 , i d x 1 , , i d x m } , i d x m I D X , m M , and I D X R B * M , where B corresponds to the batch size and M represents the maximum number of features to sample. The keypoints K are given by K { p f i d x 0 , p f i d x 1 , , p f i d x m } .
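A plain NumPy sketch of the farthest point sampling strategy described above is given below: it keeps, for every point, the smallest distance to the already selected keypoints and greedily picks the point with the largest such distance. The starting index is arbitrary and chosen only for illustration.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, m: int) -> np.ndarray:
    """Return the indices of m keypoints sampled from (N, 3) coordinates."""
    n = len(points)
    chosen = np.zeros(m, dtype=np.int64)
    min_dist = np.full(n, np.inf)            # last known minimum distances (SM)
    chosen[0] = 0                            # start from an arbitrary point
    for k in range(1, m):
        d = np.sum((points - points[chosen[k - 1]]) ** 2, axis=1)
        min_dist = np.minimum(min_dist, d)   # update smallest distance per point
        chosen[k] = int(np.argmax(min_dist)) # pick the largest minimum distance
    return chosen

keypoint_idx = farthest_point_sampling(np.random.rand(5000, 3), 128)
```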
These keypoints K are subject to an interpolation process utilizing the semantic features encoded by the 3D sparse convolution as S P . In this interpolation process, these semantic features are mapped with the keypoints to the voxel features V L that reside. Firstly, this process defines the local relative coordinates of keypoints with voxels V L by means V L I { v l i i = ( ( k x i P C R x m i n ) v l x k , ( k y i P C R y m i n ) v l y k ) R 2 } , k = [ 0 , NK [ , i = [ 0 , NV [ . Then, a bilinear interpolation is carried out to map the point features S P from 3D sparse convolution in a radius R with the V L B , the local relative coordinates of keypoints. This is perform PR { sp R , s p SP R = ( xr , yr ) R 2 , sp i = ( pfx i , pfy i ) } , i = [ 0 , NK [ . Afterwards, indexes of points are defined according to v l i a V L I v l i a i = vli i in the form ( x a , y a ) and another v l i b ( x b = ( x a + 1 ) , y b = ( y a + 1 ) ) . The expression that gives the features s p i from the BEV perspective based on v l i a and v l i b is the following:
  • $S_{BEV_A} = (sp_{vli_{a_x}}, sp_{vli_{a_y}})$;
  • $S_{BEV_B} = (sp_{vli_{b_x}}, sp_{vli_{a_y}})$;
  • $S_{BEV_C} = (sp_{vli_{a_x}}, sp_{vli_{b_y}})$;
  • $S_{BEV_D} = (sp_{vli_{b_x}}, sp_{vli_{b_y}})$.
Thus, the weights between the indexes $vli_{a_i}$, $vli_{b_i}$ and the projected keypoint $pr_i$ are calculated as in standard bilinear interpolation:
  • $W_A = (vli_{b_{x_i}} - pr_{x_i}) \times (vli_{b_{y_i}} - pr_{y_i})$;
  • $W_B = (pr_{x_i} - vli_{a_{x_i}}) \times (vli_{b_{y_i}} - pr_{y_i})$;
  • $W_C = (vli_{b_{x_i}} - pr_{x_i}) \times (pr_{y_i} - vli_{a_{y_i}})$;
  • $W_D = (pr_{x_i} - vli_{a_{x_i}}) \times (pr_{y_i} - vli_{a_{y_i}})$.
Finally, the bilinear expression that gives the features $sp_i$ from the BEV perspective is $PF_{BEV} = (s_{bev_{a_i}} \cdot w_{a_i}) + (s_{bev_{b_i}} \cdot w_{b_i}) + (s_{bev_{c_i}} \cdot w_{c_i}) + (s_{bev_{d_i}} \cdot w_{d_i})$, where $s_{bev_{a_i}} \in S_{BEV_A}$, $s_{bev_{b_i}} \in S_{BEV_B}$, $s_{bev_{c_i}} \in S_{BEV_C}$ and $s_{bev_{d_i}} \in S_{BEV_D}$. Also, $w_{a_i} \in W_A$, $w_{b_i} \in W_B$, $w_{c_i} \in W_C$, $w_{d_i} \in W_D$, and $i = [0, N_V[$.
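The bilinear lookup of BEV features at keypoint locations can be sketched as follows, assuming a dense (C, H, W) BEV feature map and keypoint coordinates already expressed in fractional grid units; this is the standard bilinear form that the expressions above describe.

```python
import numpy as np

def bilinear_bev_features(bev: np.ndarray, coords: np.ndarray) -> np.ndarray:
    """bev: (C, H, W) feature map; coords: (K, 2) fractional (x, y) grid positions.
    Returns (K, C) interpolated keypoint features."""
    x, y = coords[:, 0], coords[:, 1]
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = x0 + 1, y0 + 1
    wa = (x1 - x) * (y1 - y)                 # weights of the four neighbours
    wb = (x - x0) * (y1 - y)
    wc = (x1 - x) * (y - y0)
    wd = (x - x0) * (y - y0)
    fa, fb = bev[:, y0, x0], bev[:, y0, x1]
    fc, fd = bev[:, y1, x0], bev[:, y1, x1]
    return (fa * wa + fb * wb + fc * wc + fd * wd).T

feats = bilinear_bev_features(np.random.rand(128, 200, 176),
                              np.random.rand(64, 2) * 170)
```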
The local features of p f b e v j P F B E V are indicated by v l b i = v l k s p i , k = [ 0 , NK [ , i = [ 0 , NV [ and aggregated using PointNet++ according with their specification defined above. They will generate P T N , which are voxel-wise features within the neighboring voxel set v l i i of s p i , transforming using PointNet++ specifications. This generates p t n i P T N according to P T N p t n i = p t n 0 , · , p t n N K , and each p t n i is an aggregate feature of 3D sparse convolution s p i with p f b i from different levels according to Table 4.

4.3.2. Backbone 2D

Two-dimensional backbones are used to extract features from the 2D feature maps resulting from a PFN component, such as those used by PointPillars, and to readjust the objects back to LiDAR's Cartesian 3D system with minimal information loss utilizing a backbone scatter component. Also, models can compress the feature maps of 3D backbones into a bird's-eye view (BEV) feature map employing a BEV backbone and use an encoder Conv2D to perform feature encoding and concatenation. Such a methodology is employed by models such as SECOND, PV-RCNN, PartA 2 and Voxel-RCNN.
Backbone Scatter
The features resulting from the PFN are used by the PointPillars scatter component, which scatters them back to a 2D pseudoimage of size $(D_{out}, H, W)$, where $H$ and $W$ denote height and width, respectively.
BEV Backbone
The BEV backbone module receives 3D feature maps from the 3D sparse convolution and reshapes them to a BEV feature map. Given the sparse features $SP = (B, C, D, H, W)$, the new features are $(B, C \times D, H, W)$. The BEV backbone is represented as a set of blocks $BLC$ in the form $\{blc_1, blc_2, \ldots, blc_m\}$, where $m \geq 1$. Each block $blc_j \in BLC$, $j \leq m$, is represented by $(n, F, U, S)$. The element $n$ represents the number of convolutional layers in $blc_j$. The set of convolutional layers $C$ in $blc_j$ is described as $\{c_1, c_2, c_3, \ldots, c_n\}$, where $n \geq 1$. $F$ represents the number of filters of each $c_i \in C$, $i \leq n$, and $U$ is the number of upsample filters of $c_i$. Each of the upsample filters has the same characteristics, and their outputs are combined through concatenation. $S$ denotes the stride in $c_1$. If $S > 1$, we have a downsampled convolutional layer ($c_1$), followed by further convolutional layers ($c_i$, such that $i > 1$). Batch-norm and ReLU layers are applied after each convolutional layer.
The input to this set of blocks $BLC$ is the spatial features extracted by the 3D sparse convolution or voxel set abstraction modules, reshaped to the BEV feature map.
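A PyTorch sketch of one BEV backbone block (n, F, U, S) and its upsampling branch is given below; the input channel count is an assumption, the concrete block values come from Table 7 and the exact layer ordering in the framework may differ.

```python
import torch.nn as nn

def bev_block(n: int, in_ch: int, filters: int, upsample: int, stride: int):
    """Block of n Conv2d+BN+ReLU layers (first one optionally strided), plus a
    transposed-conv branch that upsamples its output for concatenation."""
    layers, ch = [], in_ch
    for i in range(n):
        layers += [nn.Conv2d(ch, filters, 3, stride=stride if i == 0 else 1,
                             padding=1, bias=False),
                   nn.BatchNorm2d(filters), nn.ReLU()]
        ch = filters
    deblock = nn.Sequential(
        nn.ConvTranspose2d(filters, upsample, stride, stride=stride, bias=False),
        nn.BatchNorm2d(upsample), nn.ReLU())
    return nn.Sequential(*layers), deblock

# e.g. a SECOND-style blc_1 from Table 7: (5, 64, 128, 1);
# the input channel count of 256 is only an assumed example.
block1, up1 = bev_block(n=5, in_ch=256, filters=64, upsample=128, stride=1)
```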
Encoder Conv2D
Based on the features extracted in each block $blc_j$ and after upsampling based on $U = 2D$, where $D$ is the downsample factor of the convolution layer $C$, the upsampled features $u_j \in U$, $j = [0, m[$, are concatenated, such that $UF = cat(u_j)$, where $cat$ means $u_j + u_{j+1}$, $j = [0, m[$.

4.4. Detection Head

After that, the (4) detection head component receives the 2D encoded features as input and performs operations based on three modules: RPN head, point head, and RoI head.

4.4.1. RPN Head

Based on the 2D encoded features, a set of convolutions to predict class labels, regression offsets and direction is performed. Thus, a set of $1 \times 1$ convolutions $C1x = \{c1x_1, c1x_2, \ldots, c1x_k\}$, where $k = 3$, is applied. Each $c1x_k$ can be represented by $C2D(IC, OC, KS)$, where $C2D$ means convolution 2D, $IC$ input channels, $OC$ output channels and $KS$ kernel size. $c1x_1$ is the class prediction convolution and can be described by $(UF, NA \times NC, 1)$, where $NA$ means the number of anchors per location and $NC$ the number of target classes to predict. $c1x_2$ is the convolution for bounding box offset regression and can be defined by $(UF, NA \times NC \times 7, KS)$, where it generates two anchors $NA$ for each class $NC$ and seven bounding box offsets. Finally, $c1x_3$ is performed based on $(UF, NA \times NB, KS)$, where $NA$ represents the same number of anchors per location, as previously mentioned; $NB$ represents the number of bins per anchor location; and $KS$ represents the kernel size.
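A sketch of the three 1 × 1 prediction convolutions of the RPN head is shown below; the input channels, number of classes, anchors per location and direction bins are illustrative values, not necessarily those of Table 8.

```python
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    """1x1 convolutions over the concatenated BEV features UF, predicting
    class scores, 7 bounding-box offsets and a direction bin per anchor."""
    def __init__(self, in_ch=384, num_classes=3, anchors_per_cls=2, num_bins=2):
        super().__init__()
        num_anchors = anchors_per_cls * num_classes
        self.conv_cls = nn.Conv2d(in_ch, num_anchors, 1)             # c1x_1
        self.conv_box = nn.Conv2d(in_ch, num_anchors * 7, 1)         # c1x_2
        self.conv_dir = nn.Conv2d(in_ch, num_anchors * num_bins, 1)  # c1x_3

    def forward(self, uf: torch.Tensor):
        return self.conv_cls(uf), self.conv_box(uf), self.conv_dir(uf)

cls_map, box_map, dir_map = RPNHead()(torch.rand(1, 384, 200, 176))
```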
The figure representing our baseline network for each block can be seen in Figure 2. We use three blocks with a BEV backbone for PointPillars, while for the other models, we use two blocks. Each block is represented as described in Table 7. Table 8 describes the configuration of the RPN head.
Table 7. The different block configurations ($blc_j \in BLC$) used. N.A.—not applicable.

Models         blc_1              blc_2              blc_3
PointPillars   (3, 64, 128, 2)    (5, 128, 128, 2)   (5, 128, 128, 2)
SECOND         (5, 64, 128, 1)    (5, 128, 256, 2)   N.A.
PV-RCNN        (5, 64, 128, 1)    (5, 128, 256, 2)   N.A.
PointRCNN      N.A.               N.A.               N.A.
PartA2         (5, 128, 256, 2)   (5, 128, 256, 2)   N.A.
VoxelRCNN      (5, 128, 256, 2)   (5, 128, 256, 2)   N.A.

4.4.2. Point Head

Different implementations of point head have been proposed to refine RPN predictions or generate class labels, bounding box regression offsets and direction. It can be composed of a class layer regression C R in the form C R l i n e a r ( I N , O T ) and bounding box layer B B R described as P R l i n e a r ( I N , O T ) . The point class layer C R provides the segmentation score of foreground points, and P R gives the relative location of foreground points as P R { p r p = ( x f , y f , z f ) } and calculated based on a foreground point f p p = ( x p , y p , z p ) using { ( x t = ( x p x c ) w + 0.5 , y t = ( y p x c ) l + 0.5 , z t = { ( z p z c ) h + 0.5 } , ( c o s ( θ ) p c o s ( θ ) c , s i n ( θ ) p s i n ( θ ) c ) ) } , where x c , y c , z c are center coordinates of the bounding box; h, w, and l means height, width and length of the bounding box, respectively; and θ is the box orientation in bird view.
Firstly, bounding box targets are normalized in a canonical coordinate system by first checking if the given points P T p i = ( x i , y i , z i ) , P T b b k are within the bounding box b b k ( x c i , y c i , z c i , d x i , d y i , d z i , θ i ) by performing ( ( x i x c k 2 + 0.00001 | x i x c k < d x i & y i y c k 2 + 0.00001 | y i y c k < d y i ) , where if the given statement is true, the local l x n i and l y n i are calculated. The operation is l x n i = ( ( x i x c k ) × ( c o s ( θ i ) ) ) + ( ( y i y c k ) × ( s i n ( θ i ) ) ) and l n y i = ( ( x i x c k ) × ( s i n ( θ i ) ) ) + ( ( y i y c k ) × ( c o s ( θ i ) ) ) . Then, we determine the local relative coordinate of p i concerning bounding box b b k in X–Y by means l r i = ( ( x i x c k ) × ( c o s ( θ i ) ) ) + ( ( y i y c k ) × ( s i n ( θ i ) ) ) , l y n i = ( ( x i x c k ) × ( s i n ( θ i ) ) ) + ( ( y i y c k ) × ( c o s ( θ i ) ) ) , and then determine if a point belongs, and return the respective index to bounding box by ( ( l n x i < d x i 2 + 0.00001 l n y i 2 + 0.00001 < d y i ) i d = i . After obtaining the points indexes within the bounding boxes, all inside points are aggregated with PointNet++.
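The canonical transformation that the expressions above describe amounts to translating the points by the box centre and rotating them by the box heading; a hedged NumPy sketch, with an illustrative box, is given below.

```python
import numpy as np

def to_canonical(points: np.ndarray, box: np.ndarray) -> np.ndarray:
    """points: (N, 3); box: (cx, cy, cz, dx, dy, dz, theta).
    Returns points expressed in the box's canonical (local) coordinate system."""
    cx, cy, cz, _, _, _, theta = box
    shifted = points - np.array([cx, cy, cz])
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, s, 0.0],
                    [-s, c, 0.0],
                    [0.0, 0.0, 1.0]])        # rotate by -theta around the z axis
    return shifted @ rot.T

local = to_canonical(np.random.rand(100, 3) * 10,
                     np.array([5, 5, 0, 4, 2, 1.5, 0.3]))
```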
Point Intrapart Offset
This consists of both C R and P R to predict point class labels and point bounding box offsets.
Point Head Simple
This is only composed of $CR$. However, it has modifications to its architecture: $CR = \{cr_1, cr_2, cr_3\}$, where each $cr$ is represented by a tuple $(LR, BN, RL)$, in which $LR$ means linear regression, $BN$ means batch normalization and $RL$ means the ReLU method. $BN$ can be defined by $(NF)$, where $NF$ is the number of features and typically assumes the same value as $OT$.
Point Head Box
This is composed of $CR$ and $PR$ with architecture modifications. $CR = \{cr_1, cr_2\}$, where each $cr$ is defined by $(LR, BN, RL)$, in which $LR$ means linear regression, $BN$ means batch normalization and $RL$ means the ReLU method. $PR$ is composed of $PR = \{pr_1, pr_2\}$, where each $pr$ is defined by the same tuple $(LR, BN, RL)$.

4.4.3. RoI Head

The region of interest (RoI) head is responsible for taking the RoI features of each box proposal from the RPN head and then optimizing the imperfect bounding box proposals by predicting and fixing the size and location (centre and orientation) residuals relative to the input bounding box predictions. Besides each model's specificities, any RoI head is composed of, but not restricted to, a proposal layer that generates/refines a set of RoIs based on the RPN RoIs, denoted as $PL$; an RoI feature extraction method $RF$; and a head module $HM$ that can include a shared fully connected layer $SFC$, up–down layers $UL$ and $DL$, a class layer $CL$, a regression layer $RL$, an RoI point pool 3D layer ($RoIPL$), an RoI grid pool layer ($RoIGL$), an RoI-aware pool 3D layer ($RoIAP3D$), a convolution part ($CnvP$) and a convolution RPN ($CnvRPN$).
$SFC$ is responsible for feature extraction and can be defined by a set $\{sfc_0, \ldots, sfc_f\}$, $f = [0, 2[$, where each $sfc_f \in SFC$ is represented by a tuple $(C1D, BN1D, RL, DRO)$, in which $C1D$ means convolution 1D, $BN1D$ batch normalization 1D, $RL$ ReLU and $DRO$ dropout. $CL$ can be defined by the set $\{cl_0, \ldots, cl_c\}$, $c = [0, 2[$, and each $cl_c$ by $(C1D, BN1D, RL, DRO)$. $RL$ produces box predictions and is composed of the set $\{rl_0, \ldots, rl_r\}$, $r = [0, 2[$, where each $rl_r$ is defined by $(C1D, BN1D, RL, DRO)$. $DL$ and $UL$ are bottom-up box generation proposal layers from foreground points. A sequence of convolution 2D and ReLU methods defines $DL$. $UL$ is represented as $\{ul_1, ul_2\}$, with each $ul$ given by the same sequence of convolution 2D and ReLU methods.
R o I P L are specifically pool 3D points and their corresponding point features according to the location of each 3D proposal of P L . Admitting the given output of bounding boxes B B and a specific bounding box b b n B B , where B B { b b n = ( x n , y n , z n , h n , w n , l n , θ n ) } , where x, y, z are center coordinates of the predicted bounding box, h, w, and l denote the height, width and length of the bounding box, and θ denotes the orientation of the bounding box. Herein, the R O I P L produces an enlarged set of b b e n B B E that can be defined by ( x n , y n , z n , h n + η , w n + η , l n + η , θ n ) , where η represents a constant value to resize the bounding box. The depth information loss for each bounding box proposal is compensated by including the distance information to the LiDAR sensor to the u f p U F that are BEV spatial features. Each u f p is augmented with d b ( x p x c ) 2 + ( y p y c ) 2 + ( z p z c ) 2 , d b D , where x p , y p , and z p correspond to coordinates of point features of the local encoder module and x c , y c and z c are the center coordinates of the LiDAR sensor. Thus, it generates a tensor in the form ( V L M o u t , D ) that is fed to PointNet++, as described in Section 4.3.1, to encode the augmented tensor with local features with global semantic BEV features U F . This generates a feature vector for confidence classification and box refinement.
The idea of R o I G L is to aggregate the keypoint features to the RoI grid points with multiple receptive fields. Grid points are uniform sampling, and can be described by G P { g p 1 , g p 2 , , g p s } , s = 216 , which means that a grid 6 × 6 × 6 is usually adopted. Firstly, the identification of neighboring keypoints to grid g p i in a radius R is performed by means G F { p r , p K R = ( x r , y r , z r ) R 3 , p j = ( p x j , p y j , p z j ) R 3 | g p s = ( g p x j , g p y j , g p z j ) R 3 | p j g p s 2 } , i = [ 0 , NK [ . After all, a PointNet block is used to aggregate the neighboring keypoint set G F in the same way as Equation (2):
$$PTN = \{ptn_i = \max(ML(SG(gf_i)))\}$$
Then, the two MLP layers, $SFC(PTN)$ and $SC(PTN)$, are applied.
R o I A P 3 D aims to provide bounding box score confidence and refinement by aggregating the local feature information ( V L M o u t ) with global semantic BEV features ( U F ) within the proposals. Two operations are performed within the point features p f i of bounding boxes B B , such that B B { b b k = { p f i R C } } , i = [ 0 , m [ , p f i P F and is scattered to the voxel data structures V L B { v l b k = ( x j , y j , z j ) , i = [ 0 , m [ } where x j , y j , z j are encoded in canonical coordinates using the point head module, and m is the number of inside points within bounding box b b k . The objective is to solve the problem of different proposals generating the same pooled points. For this purpose, average pooling for pooled part features operation—denoted as P P F —and max pooling for pooled RPN features—defined as P R P N —are adopted, and can be described as P P F R o I M a x ( V L B , P F , B B ) , P P F R S x × S y × S z × C and P R P N R o I A v g ( V L B , P F , B B ) , P P F R S x × S y × S z × C where S x , S y , S z are the resolution of the voxels’ spatial shape. The operations RoIMax and RoIAvg can be described more specifically:
$$RoIMax = \begin{cases} \max(\{pf_i \in vlb_k\}), & \text{if } count(PPF) > 0 \\ 0, & \text{otherwise} \end{cases}$$
$$RoIAvg = \begin{cases} \dfrac{\sum_{i=0}^{count(PPF)} pf_i}{count(PPF)},\; pf_i \in vlb_k, & \text{if } count(PPF) > 0 \\ 0, & \text{otherwise} \end{cases}$$
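A simplified sketch of the RoI-aware max/average pooling over the voxel cells of one proposal is given below; the per-cell bookkeeping is illustrative and ignores the CUDA-level implementation used in practice.

```python
import numpy as np

def roi_aware_pool(cell_ids: np.ndarray, feats: np.ndarray,
                   num_cells: int, mode: str = "max") -> np.ndarray:
    """cell_ids: (M,) voxel-cell index of each inside point of one RoI;
    feats: (M, C) point features. Returns (num_cells, C); empty cells stay 0."""
    out = np.zeros((num_cells, feats.shape[1]), dtype=feats.dtype)
    for cell in range(num_cells):
        mask = cell_ids == cell
        if mask.any():
            out[cell] = feats[mask].max(0) if mode == "max" else feats[mask].mean(0)
    return out

# Illustrative 6 x 6 x 6 grid, as mentioned above.
pooled = roi_aware_pool(np.random.randint(0, 216, 500),
                        np.random.rand(500, 64), num_cells=6 * 6 * 6)
```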

5. Three-Dimensional Object Detection Model Specifications

Herein, we will specify each model in the different module frameworks. These models were selected based on the requirements established and defined in Section 1, since they are the models that best guarantee the trade-off between metrics (mAP and inference time). The set of models and their specificities concerning the developed framework are illustrated in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8. The modules of each model are represented in the figures as green boxes, and the flow of the tensors occurs in the direction of the orange arrows.

5.1. Data Representation

Typically, the models of Figure 4 and Figure 6, Figure 7 and Figure 8 are chosen to represent the point cloud in Voxels. In this data structure, the point cloud is delimited (using the cropping technique), and a grid is produced where the data are discretized along the X–Y–Z axis.
Only PointPillars, illustrated in Figure 3, discretizes this delimited space of the point cloud on the X–Y axis, creating a set of pillars.
In the case of the PointRCNN model (Figure 5), it provides the delimited point cloud without any data discretization and structuring process for the middle feature encoder.

5.2. Local Feature Encoders

As illustrated in the figures, three strategies are used by the models to improve the efficiency of the object detectors in the feature extraction from the data structures. Typically, these modules are responsible for the local feature extraction and then, via concatenation, aggregate these features. Three networks are used: VFE for SECOND (Figure 4), PFN for PointPillars (Figure 3) and mean VFE for PV-RCNN (Figure 6), PartA 2 (Figure 7) and VoxelRCNN (Figure 8).

5.3. Middle Feature Extractor

The methods described herein use 3D backbones based on sparse and submanifold convolutions, such as SECOND (Figure 4), PV-RCNN (Figure 6), PartA 2 (Figure 7) and Voxel-RCNN (Figure 8). PV-RCNN uses the 3D voxel set abstraction backbone to encode the feature maps obtained by the 3D sparse CNN for keypoints. PointRCNN (Figure 5) uses PointNet++ [9] to extract features and pass them to the detection head module.
Only PointPillars (Figure 3) uses a 2D backbone, since it requires fewer computational resources than 3D backbones. This choice introduces some information loss, which is easily mitigated because the objects can later be readjusted to the LiDAR 3D Cartesian system with little additional loss. For this purpose, the resulting PFE features are used by the backbone scatter component, which scatters them back into a 2D pseudoimage; the subsequent detection head component then consumes this pseudoimage.
The other models, namely SECOND (Figure 4), PV-RCNN (Figure 6), PartA2 (Figure 7) and Voxel-RCNN (Figure 8), compress the information into a bird's-eye view (BEV) using the BEV backbone for feature extraction, and then encode and concatenate the features using the encoder Conv2D component. After this process, the resulting features are passed to the detection head.
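The backbone scatter step can be sketched as follows, assuming the encoded pillar features and their integer BEV coordinates are already available; names and shapes are illustrative only.

```python
import torch

def pillar_scatter(pillar_feats, coords, grid_x, grid_y):
    """Backbone-scatter sketch: place encoded pillar features back onto a 2D
    BEV grid, producing the pseudoimage consumed by the 2D backbone.

    pillar_feats: (P, C) features produced by the pillar feature encoder.
    coords:       (P, 2) integer (x, y) grid indices of each pillar.
    Returns a (C, grid_y, grid_x) pseudoimage; empty cells remain zero.
    """
    c = pillar_feats.shape[1]
    canvas = pillar_feats.new_zeros((c, grid_y * grid_x))
    flat = coords[:, 1] * grid_x + coords[:, 0]      # row-major flat index
    canvas[:, flat] = pillar_feats.t()               # scatter features into the canvas
    return canvas.view(c, grid_y, grid_x)
```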

5.4. Detection Head

As mentioned earlier, this module comprises three networks: RPN head, point head and RoI head.
All models except PointRCNN use the RPN head to generate RoIs for each frame of the point cloud. Unlike classical two-stage image detectors, which rely on low-level proposal algorithms such as selective search [22] followed by per-region classifiers, the RPN head operates densely on the BEV feature map: a set of predefined anchors per location is classified and regressed by convolutional branches, producing a class score and bounding box offset values for each candidate RoI.
The point head either assists the RPN head, as illustrated in Figure 6 and Figure 7, or directly generates object class predictions and bounding box offsets, as shown in Figure 5 and Figure 8. The point head produces masks of objects or object parts in a multiscale way, followed by a simple bounding box inference step that generates proposals, also called point proposals, so that each point contributes to the reconstruction of the 3D geometry of the object.
The RoI head, used by PointRCNN (Figure 5), PV-RCNN (Figure 6), PartA2 (Figure 7) and Voxel-RCNN (Figure 8), uses the RoI features of each bounding box proposed by the RPN and then refines the imperfect bounding boxes from the previous stages, predicting and correcting their size and location (center and orientation) relative to the input bounding box predictions.
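The sketch below illustrates a dense, anchor-based RPN head of the kind described above, applied to the BEV feature map; the channel count, number of anchors per location and the extra direction branch are assumptions for illustration and not the exact configuration used by each model.

```python
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    """Illustrative dense RPN head over the BEV feature map (sizes are assumptions)."""

    def __init__(self, in_channels=512, num_anchors_per_loc=6, num_classes=3, box_dim=7):
        super().__init__()
        # One score per (anchor, class) and one 7-D box (x, y, z, l, w, h, yaw) per anchor.
        self.cls_head = nn.Conv2d(in_channels, num_anchors_per_loc * num_classes, kernel_size=1)
        self.box_head = nn.Conv2d(in_channels, num_anchors_per_loc * box_dim, kernel_size=1)
        self.dir_head = nn.Conv2d(in_channels, num_anchors_per_loc * 2, kernel_size=1)  # orientation bin

    def forward(self, bev_features):                 # bev_features: (B, C, H, W)
        return {
            "cls_preds": self.cls_head(bev_features),
            "box_preds": self.box_head(bev_features),
            "dir_preds": self.dir_head(bev_features),
        }

# Usage sketch: head = RPNHead(); out = head(torch.randn(1, 512, 200, 176))
```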

6. Network Training and Fine Tuning

The models described in this document were trained using the KITTI dataset. In addition, the models were evaluated on the KITTI benchmarks, namely 3D object detection and BEV detection, considering a validation set. Regarding the number of epochs used in the training phase, a methodology widely adopted in the literature was followed. Thus, we used 200 epochs, considering the configurations described in Table 13. Regarding training hyperparameters, we defined an initial learning rate of 0.01, a learning rate decay of 0.1 with a decay-epoch schedule, a weight decay of 0.01, gradient clipping normalization with a maximum value of 10, a beta1 of 0.95 and a beta2 of 0.85. We used learning rate decay, weight decay and gradient clipping normalization as regularization procedures to prevent overfitting. The evaluation metrics in the results were based on the official KITTI evaluation detection metrics. Hence, the metric used was mAP for both BEV and 3D object detection. The partition of the training data used in this work follows the division discussed in [2]. This approach divides the 7481 provided training examples into a training set of 3712 samples, with the remaining 3769 samples belonging to the evaluation set. Moreover, the benchmarks presented in this article are based on the evaluation set only.
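A minimal sketch of this training configuration is given below, assuming a plain PyTorch Adam optimizer and a step-style decay schedule; the decay milestones and the stand-in model and loss are placeholders, since the exact optimizer and scheduler classes used by the framework may differ.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 7)                       # stand-in for a 3D detector

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=0.01,                                         # initial learning rate
    betas=(0.95, 0.85),                              # beta1 / beta2 as reported
    weight_decay=0.01,                               # weight decay regularization
)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[100, 150], gamma=0.1      # lr decay of 0.1 (milestones assumed)
)

for epoch in range(200):                             # 200 training epochs
    optimizer.zero_grad()
    x, y = torch.randn(8, 10), torch.randn(8, 7)
    loss = F.mse_loss(model(x), y)                   # placeholder loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10)  # gradient clipping
    optimizer.step()
    scheduler.step()
```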
We selected three target classes in all experiments: car, pedestrian and cyclist. Typically, the models described herein are trained as two separate networks: one optimized for predicting cars and another for pedestrians and cyclists. However, this approach can be inappropriate in self-driving applications, since low-power edge devices with few resources would have to cope with two parallel models. For this reason, we trained all classes in a single model for all 3D object detectors.
For the fine-tuning process, we saved the mAP results for each epoch to understand when the models converge. Herein, we provide a study of the effect of the number of sampled instances and of the minimum points per class sample, compared with the study made in [23]. In [23], we used different class sampling strategies but without changing the minimum number of points for class sampling.
Sampling Instance Strategy. We focus on optimizing the number of sampled instances and the minimum points per class sample. The main objective of the sampling strategy is to soften the KITTI dataset imbalance issue. During training, sampled instances are randomly inserted into the current point cloud. However, the minimum number of points determines whether a certain instance can be used for sampling at all. If we increase the minimum number of points, instances such as pedestrians and cyclists are sampled less often, because few points exist to describe their shape. On the other hand, if we decrease it too much, the model struggles to distinguish between foreground and background points. In our experiments, we use the configurations described in Table 9, as sketched after this paragraph. The minimum number of points for class sampling was fixed at 5 for every class, instead of 10 points for the pedestrian and cyclist classes and 5 points for the car class.
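The sketch below illustrates this ground-truth sampling strategy; the database format and helper name are assumptions, and the real sampler also performs collision checks between pasted boxes and the scene, which are omitted here.

```python
import numpy as np

RNG = np.random.default_rng(0)

def sample_instances(db, scene_points, num_to_sample, min_points=5):
    """Illustrative ground-truth sampling augmentation.

    db:            dict class -> list of cropped instance point clouds ((Ni, 4) arrays).
    scene_points:  (M, 4) current training point cloud.
    num_to_sample: dict class -> number of instances to paste,
                   e.g. SI1 = {"Car": 15, "Pedestrian": 10, "Cyclist": 10}.
    min_points:    instances with fewer LiDAR points than this are never sampled.
    """
    extra = []
    for cls, n in num_to_sample.items():
        # The min-points filter decides which instances are eligible for sampling.
        pool = [inst for inst in db.get(cls, []) if inst.shape[0] >= min_points]
        if not pool:
            continue
        for idx in RNG.choice(len(pool), size=min(n, len(pool)), replace=False):
            extra.append(pool[idx])
    return np.concatenate([scene_points] + extra, axis=0) if extra else scene_points
```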
Point Cloud Range. The point cloud range directly limits the detection range of any object detector. For all models in our study, the ground truth object locations are represented using the original point cloud range for all frames of the KITTI dataset. For instance, the depth data confirm that most car ground truth instances occur between 0 and 70 metres. Beyond 70 metres from the LiDAR sensor, the number of instances starts to decline sharply. This can be explained by the fact that, beyond this range, relatively few points remain to accurately characterize an object's geometry, making object detection challenging. In this experiment, the point cloud range of PointPillars is compared to that of the other models, whose detection range is unaffected. Table 10 shows the point cloud ranges. We also compare the number of data structures (maximum number of pillars or voxels) with the research in [23].
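This range analysis can be reproduced with a simple histogram of ground-truth distances, as sketched below; the gt_boxes layout is an assumption for illustration.

```python
import numpy as np

def depth_histogram(gt_boxes, max_range=80.0, bin_size=10.0):
    """Count ground-truth instances per depth bin to inspect how many lie within 0-70 m.

    gt_boxes: assumed (N, 7) array of boxes (x, y, z, l, w, h, yaw) in the LiDAR frame.
    """
    depth = np.linalg.norm(gt_boxes[:, :2], axis=1)          # radial distance in the BEV plane
    bins = np.arange(0.0, max_range + bin_size, bin_size)
    counts, _ = np.histogram(depth, bins=bins)
    return dict(zip(bins[:-1], counts))                      # bin start -> instance count
```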
Data structure sizes. The object detection model receives the points within the PCR and either discretizes them along the X–Y axes, creating a set of pillars, or discretizes them along the X–Y–Z axes, creating a set of voxels. Each data structure (DS) has a fixed size within the PCR. The data structure size directly impacts model accuracy and inference time. Increasing the data structure size can result in too much data being encoded and consequently randomly sampled, leading to information loss (the maximum number of points per data structure is capped for computational savings). On the other hand, reducing the data structure size increases the number of non-empty data structures, increasing memory usage and inference time. Two DS configurations were used in our fine-tuning process, as shown in Table 11.
Number of Data Structures. Since most data structures will be empty, a maximum number of data structures is established to deal with the KITTI dataset sparsity problem. When generating a dense tensor, using too many data structures would cause the majority of them to be filled with zeros, making inference inefficient. A maximum number of points per data structure is also established using the distribution of the number of points per data structure in the KITTI dataset, as shown in Table 12.
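A minimal sketch of how these caps can be applied is shown below, assuming each point has already been assigned a flat data-structure index; the helper is illustrative, not the framework's voxel generator.

```python
import numpy as np

RNG = np.random.default_rng(0)

def cap_data_structures(voxel_ids, points, max_ds=16000, max_points_per_ds=5):
    """Keep at most `max_ds` non-empty data structures and randomly sample at most
    `max_points_per_ds` points inside each one (illustrative only).

    voxel_ids: (N,) flat data-structure index of each point.
    points:    (N, 4) cropped point cloud.
    Returns a dict mapping kept data-structure id -> (<=max_points_per_ds, 4) points.
    """
    kept = {}
    for vid in np.unique(voxel_ids):
        if len(kept) >= max_ds:                      # cap the total number of data structures
            break
        idx = np.flatnonzero(voxel_ids == vid)
        if idx.size > max_points_per_ds:             # random point sampling for computational saving
            idx = RNG.choice(idx, size=max_points_per_ds, replace=False)
        kept[int(vid)] = points[idx]
    return kept
```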

7. Performance Evaluation, Comparison and Discussion

This section details a series of experiments conducted using a random search approach to improve the trade-off between accuracy and inference time. The experiments and the related network configurations and models are shown in Table 13. To understand the effect of building a single model tuned to output three classes, instead of splitting the detector into two separate networks (one for cars and another for pedestrians and cyclists), the PointPillars settings and their outcomes are also supplied.
Table 13. The set of experiments conducted and respective network configurations.
Experiment | Model Config. | PCR Config. | SI Config. | No. Output Classes | SDS Config. | P Config.
1 | PointPillars | PCR1 | SI1 | 3 | SDS16 | P12K
2 | SECOND | PCR2 | SI1 | 3 | SDS5 | P16K
3 | PV-RCNN | PCR2 | SI1 | 3 | SDS5 | P16K
4 | PointRCNN | PCR2 | SI1 | 3 | SDS5 | P16K
5 | PartA2 | PCR2 | SI1 | 3 | SDS5 | P16K
6 | VoxelRCNN | PCR2 | SI1 | 3 | SDS5 | P16K
7 | PointPillars | PCR1 | SI2 | 3 | SDS16 | P12K
8 | SECOND | PCR2 | SI2 | 3 | SDS5 | P16K
9 | PV-RCNN | PCR2 | SI2 | 3 | SDS5 | P16K
10 | PointRCNN | PCR2 | SI2 | 3 | SDS5 | P16K
11 | PartA2 | PCR2 | SI2 | 3 | SDS5 | P16K
12 | VoxelRCNN | PCR2 | SI2 | 3 | SDS5 | P16K
The results of the experiments provided in Table 13 are shown in Table 14, Table 15, Table 16 and Table 17. We use the metric AP for three difficulty levels (easy, moderate and hard) and various intersection-over-union (IOU) thresholds according to KITTI benchmarks to provide the results. IOU is 70% for cars and 50% for cyclists and pedestrians. The experiment results from this study are compared to the original ones from the literature in Table 18. The comparison considers the three target classes for both 3D and BEV. The results presented for the conceived experiments consider the overall values per class for the best detection metric.
As demonstrated in the aforementioned results, the model implementations in our framework generally produced better mAP. Regarding the point cloud range, we reproduced the original configurations for all models, but with fewer DS than the study in [23], since most DS will be empty. This improvement drastically decreases the inference time when comparing PointPillars with that study. As shown in Table 19 and Table 20, some models, such as PointPillars, PartA2 and PointRCNN, produce very similar inference times. On the other hand, our results for SECOND are better, while those for PV-RCNN and VoxelRCNN are worse. Clearly, there is always a trade-off in inference time when producing three-class inference models. This can be explained by the fact that the original models obtained their results by training separate networks, one for cars and another for pedestrians and cyclists (a standard literature practice on KITTI benchmarks). By training three-class models, the gradients are affected by all those instances, which causes our models to lose some of their specialization in prediction. However, as mentioned in [23], producing separate networks is impractical for self-driving applications. One solution could be increasing the model's depth to improve its capability to learn the required patterns/weights/representations of the data. However, increasing the model's depth will decrease the inference speed, which can result in a model not meeting the self-driving requirement for that metric (an inference time above 100 ms).
Reducing the minimum number of points required to consider a sampled instance brought gains in mAP for the same model architecture, since more instances become available for data augmentation. This expands the diversity of the training data and allows our models to learn more patterns from the data.

8. Conclusions

The research about deep learning methods for 3D object detection on LiDAR data has increased tremendously in recent years, with many models, repositories and different technologies being developed. Although this benefits scientific development in this area, the various technologies, software, repositories and models are a bottleneck for testing and improving the current methods.
To cope with this limitation, we developed a framework for representing multiple SoA 3D object detectors with highly refactored code for both one-stage and two-stage methods. The main idea of this framework is to facilitate the implementation, reuse and improvement of techniques in each framework module with less manual engineering effort. In conclusion, it enables the abstract implementation, reuse and building of any object detector within a single 3D object detection framework.
Nonetheless, it is evident that creating three-class inference models comes with a trade-off regarding inference time. Our study’s results are based on the KITTI validation set, while the original findings were obtained using the KITTI test set. We replicated the original network configurations for all models concerning the point cloud range but with fewer DS than the research mentioned in the previous section. The improvement mentioned earlier leads to a considerable reduction in the inference time when PointPillars is compared to the same research.
The current models for 3D object detection in LiDAR data targeting self-driving applications report their results on powerful servers with dedicated graphics cards and an unconstrained power supply. However, using this kind of server in the context of a self-driving car is impractical due to limited space and power supply, which reveals a limitation regarding the deployment of 3D object detectors in such an environment. Research must evolve to produce models capable of meeting the performance metrics while being deployable on resource-constrained edge devices with a limited power supply and computational power.
Besides the capability to easily represent SoA 3D object detectors, other models should be integrated as future work. This requires constantly updating the framework to integrate the new components introduced by novel methods, since scientific research in this area consistently produces innovation.

Author Contributions

Conceptualization, A.L.S., P.O. and D.D.; methodology, A.L.S., P.O. and D.D.; software, A.L.S. and P.O.; validation, P.M.-P., J.M. (José Machado), P.N., A.S, P.O., D.D., D.F., R.N. and J.M. (João Monteiro); formal analysis, A.L.S., P.O., D.D., J.M. (José Machado), P.N., D.F., R.N., P.M.-P. and J.M. (João Monteiro); investigation, A.L.S., P.O. and D.D.; resources, J.M. (José Machado), P.N., P.M.-P. and J.M. (João Monteiro); data curation, A.L.S., P.O. and R.N.; writing—original draft preparation, A.L.S., P.O. and D.D.; writing—review and editing, A.L.S., P.O., D.D., R.N., D.F., J.M. (José Machado), P.N., J.M. (João Monteiro) and P.M.-P.; visualization, A.L.S., P.O., D.D., D.F., J.M. (José Machado), P.N., P.M.-P. and J.M. (João Monteiro); supervision, J.M. (José Machado), P.N., P.M.-P. and J.M. (João Monteiro); project administration, J.M. (José Machado), P.N., J.M. (João Monteiro) and P.M.-P.; funding acquisition, J.M. (José Machado), P.N., J.M. (João Monteiro) and P.M.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by FCT—Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020 and the project “Integrated and Innovative Solutions for the well-being of people in complex urban centers” within the Project Scope NORTE-01-0145-FEDER-000086. The work of Pedro Oliveira was supported by the doctoral Grant PRT/BD/154311/2022 financed by the Portuguese Foundation for Science and Technology (FCT), and with funds from European Union, under MIT Portugal Program. The work of Paulo Novais and Dalila Durães is supported by National Funds through the Portuguese funding agency, FCT—Fundação para a Ciência e a Tecnologia within project 2022.06822.PTDC.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Beltrán, J.; Guindel, C.; Moreno, F.M.; Cruzado, D.; Garcia, F.; De La Escalera, A. Birdnet: A 3d object detection framework from lidar information. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 3517–3523.
2. Chen, X.; Ma, H.; Wan, J.; Li, B.; Xia, T. Multi-view 3d object detection network for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1907–1915.
3. Fernandes, D.; Silva, A.; Névoa, R.; Simões, C.; Gonzalez, D.; Guevara, M.; Novais, P.; Monteiro, J.; Melo-Pinto, P. Point-cloud based 3D object detection and classification methods for self-driving applications: A survey and taxonomy. Inf. Fusion 2021, 68, 161–191.
4. Xia, X.; Meng, Z.; Han, X.; Li, H.; Tsukiji, T.; Xu, R.; Zheng, Z.; Ma, J. An automated driving systems data acquisition and analytics platform. Transp. Res. Part Emerg. Technol. 2023, 151, 104120.
5. Cosmas, K.; Kenichi, A. Utilization of FPGA for onboard inference of landmark localization in CNN-Based spacecraft pose estimation. Aerospace 2020, 7, 159.
6. Ngadiuba, J.; Loncar, V.; Pierini, M.; Summers, S.; Di Guglielmo, G.; Duarte, J.; Harris, P.; Rankin, D.; Jindariani, S.; Liu, M.; et al. Compressing deep neural networks on FPGAs to binary and ternary precision with hls4ml. Mach. Learn. Sci. Technol. 2020, 2, 015001.
7. Sharma, H.; Park, J.; Amaro, E.; Thwaites, B.; Kotha, P.; Gupta, A.; Kim, J.K.; Mishra, A.; Esmaeilzadeh, H. Dnnweaver: From high-level deep network models to fpga acceleration. In Proceedings of the Workshop on Cognitive Architectures, Atlanta, GA, USA, 2 April 2016.
8. Yan, Y.; Mao, Y.; Li, B. Second: Sparsely embedded convolutional detection. Sensors 2018, 18, 3337.
9. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5099–5108.
10. Zhou, Y.; Tuzel, O. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. arXiv 2017, arXiv:cs.CV/1711.06396.
11. Shi, S.; Wang, Z.; Shi, J.; Wang, X.; Li, H. From points to parts: 3d object detection from point cloud with part-aware and part-aggregation network. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 2647–2664.
12. Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. Pointpillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12697–12705.
13. Shi, S.; Wang, X.; Li, H. Pointrcnn: 3d object proposal generation and detection from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 770–779.
14. Chen, Y.; Liu, S.; Shen, X.; Jia, J. Fast point r-cnn. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9775–9784.
15. Liu, W.; Quijano, K.; Crawford, M.M. YOLOv5-Tassel: Detecting tassels in RGB UAV imagery with improved YOLOv5 based on transfer learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 8085–8094.
16. Chen, C.; Gong, W.; Chen, Y.; Li, W. Object detection in remote sensing images based on a scene-contextual feature pyramid network. Remote Sens. 2019, 11, 339.
17. Shi, S.; Guo, C.; Jiang, L.; Wang, Z.; Shi, J.; Wang, X.; Li, H. PV-RCNN: Point-voxel Feature Set Abstraction for 3D Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10529–10538.
18. Graham, B.; van der Maaten, L. Submanifold Sparse Convolutional Networks. arXiv 2017, arXiv:1706.01307.
19. Graham, B. Spatially-sparse convolutional neural networks. arXiv 2014, arXiv:1409.6070.
20. Lu, L.; Shin, Y.; Su, Y.; Karniadakis, G.E. Dying relu and initialization: Theory and numerical examples. arXiv 2019, arXiv:1903.06733.
21. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 448–456.
22. Uijlings, J.R.; Van De Sande, K.E.; Gevers, T.; Smeulders, A.W. Selective search for object recognition. Int. J. Comput. Vis. 2013, 104, 154–171.
23. Silva, A.; Fernandes, D.; Névoa, R.; Monteiro, J.; Novais, P.; Girão, P.; Afonso, T.; Melo-Pinto, P. Resource-Constrained Onboard Inference of 3D Object Detection and Localisation in Point Clouds Targeting Self-Driving Applications. Sensors 2021, 21, 7933.
Figure 1. Methodology for object detection model fine-tuning.
Figure 2. Framework used for the implementation/representation of object detection models.
Figure 3. Structure of the PointPillars model represented in the developed framework.
Figure 4. Structure of the SECOND model represented in the developed framework.
Figure 5. Structure of the PointRCNN model represented in the developed framework.
Figure 6. Structure of the PV-RCNN model represented in the developed framework.
Figure 7. Structure of the PartA2 model represented in the developed framework.
Figure 8. Structure of the VoxelRCNN model represented in the developed framework.
Table 1. Values used in Bn.
Bn Element | Value
ep | 0.001
mn | 0.01
Table 2. Configurations used in SpC for each element.
SpC Element | Value
KsSt | 3
StSr | 1
PdSv | 1
DlSl | 1
OpSa | 0
Table 3. Configurations used in SuM and SpC for each block. N.A.—not applicable.
Element | InS | OutS | InM | OutM | Ks | St | Pd | Dl | Op
blc1 sq1 SuM | N.A. | N.A. | 4 | 16 | 3 | 1 | 1 | 1 | 0
blc2 sq1 SuM | N.A. | N.A. | 16 | 16 | 3 | 1 | 0 | 1 | 0
blc3 sq1 SpC | 16 | 32 | N.A. | N.A. | 3 | 2 | 1 | 1 | 0
blc3 sq2 SuM | N.A. | N.A. | 32 | 32 | 3 | 1 | 0 | 1 | 0
blc3 sq3 SuM | N.A. | N.A. | 32 | 32 | 3 | 1 | 0 | 1 | 0
blc4 sq1 SpC | 32 | 64 | N.A. | N.A. | 3 | 2 | 1 | 1 | 0
blc4 sq2 SuM | N.A. | N.A. | 64 | 64 | 3 | 1 | 0 | 1 | 0
blc4 sq3 SuM | N.A. | N.A. | 64 | 64 | 3 | 1 | 0 | 1 | 0
blc5 sq1 SpC | 64 | 64 | N.A. | N.A. | 3 | 2 | 0 | 1 | 0
blc5 sq2 SuM | N.A. | N.A. | 64 | 64 | 3 | 1 | 0 | 1 | 0
blc5 sq3 SuM | N.A. | N.A. | 64 | 64 | 3 | 1 | 0 | 1 | 0
blc6 sq1 SpC | 64 | 128 | N.A. | N.A. | 3 | 2 | 0 | 1 | 0
Table 4. Configurations used in NCP for each element.
NCP Element | Value
ncp1 | 4096
ncp2 | 1024
ncp3 | 256
ncp4 | 64
Table 5. Set of configurations used in OP of a specific SQ of the ML element in a specific PTN.
OP Element | InC2D | OutC2D
op1 sq1 ptn1 | 4 | 16
op2 sq1 ptn1 | 16 | 16
op3 sq1 ptn1 | 16 | 32
op1 sq2 ptn1 | 4 | 32
op2 sq2 ptn1 | 32 | 32
op3 sq2 ptn1 | 32 | 64
op1 sq1 ptn2 | 99 | 64
op2 sq1 ptn2 | 64 | 64
op3 sq1 ptn2 | 64 | 128
op1 sq2 ptn2 | 99 | 64
op2 sq2 ptn2 | 64 | 96
op3 sq2 ptn2 | 96 | 128
op1 sq1 ptn3 | 259 | 128
op2 sq1 ptn3 | 128 | 196
op3 sq1 ptn3 | 196 | 256
op1 sq2 ptn3 | 259 | 128
op2 sq2 ptn3 | 128 | 196
op3 sq2 ptn3 | 196 | 256
op1 sq1 ptn4 | 515 | 256
op2 sq1 ptn4 | 256 | 256
op3 sq1 ptn4 | 256 | 512
op1 sq2 ptn4 | 515 | 256
op2 sq2 ptn4 | 256 | 384
op3 sq2 ptn4 | 384 | 512
Table 6. Set of configurations used in OP of a specific SQ in a specific FPM.
OP Element | InC2D | OutC2D
op1 sq1 fp1 | 257 | 128
op2 sq2 fp1 | 128 | 128
op1 sq1 fp2 | 608 | 256
op2 sq2 fp2 | 256 | 256
op1 sq1 fp3 | 768 | 512
op2 sq2 fp3 | 512 | 512
op1 sq2 fp4 | 1536 | 512
op2 sq2 fp4 | 512 | 512
Table 8. The different RPN configurations (c1xk ∈ C1x) used. N.A.—not applicable.
Models | c1x1 | c1x2 | c1x3
PointPillars | (512, 18, 1) | (5, 128, 128, 2) | (5, 128, 128, 2)
SECOND | (512, 18, 1) | (512, 42, 1) | N.A.
PV-RCNN | (512, 18, 1) | (512, 42, 1) | N.A.
PartA2 | (512, 18, 1) | (512, 42, 1) | N.A.
VoxelRCNN | (5, 128, 256, 2) | (5, 128, 256, 2) | N.A.
Table 9. Number of sampling instances (SI) per class.
SI Configuration | Car | Pedestrian | Cyclist
SI1 | 15 | 10 | 10
SI2 | 25 | 20 | 20
Table 10. The different point cloud range (PCR) configurations used in fine tuning.
PCR Configuration | Xmin | Xmax | Ymin | Ymax | Zmin | Zmax
PCR1 | 0 | 69.12 | −39.68 | 39.68 | −3 | 1
PCR2 | 0 | 70 | −40 | 40 | −3 | 1
Table 11. Pillar size (SDS) configurations used in fine tuning.
SDS Configuration | SDS Length | SDS Height | SDS Depth
SDS16 | 0.16 | 0.16 | 1
SDS5 | 0.05 | 0.05 | 0.1
Table 12. Total number of data structures used in fine tuning.
P Configuration | Total Number of DS | Max Number of Points per DS
P12K | 12 K | 100
P16K | 16 K | 5
Table 14. Results in validation set for BEV detection metric for experiments 1–6.
Model | Epoch | Experiment | Car Easy | Car Mod. | Car Hard | Cyclist Easy | Cyclist Mod. | Cyclist Hard | Pedestrian Easy | Pedestrian Mod. | Pedestrian Hard | Overall
Voxel R-CNN | 197 | 6 | 96.9 | 94.89 | 95.08 | 73.03 | 77.68 | 80.3 | 85.03 | 85.54 | 85.97 | 87.12
PartA2 | 187 | 5 | 97.64 | 96.72 | 96.6 | 81.37 | 83.02 | 83.38 | 90.21 | 90.81 | 90.95 | 90.31
PointPillars | 160 | 1 | 76.29 | 79.05 | 80.80 | 57.52 | 58.01 | 58.10 | 77.75 | 72.52 | 73.62 | 70.84
PointRCNN | 24 | 4 | 92.83 | 88.64 | 88.55 | 80.71 | 79.85 | 80.9 | 89.35 | 89.03 | 88.67 | 86.04
PV-RCNN | 92 | 3 | 94.52 | 93.91 | 93.58 | 78.65 | 79.46 | 80.65 | 80.83 | 80.32 | 80.59 | 84.94
SECOND | 154 | 2 | 87.97 | 83.75 | 84.43 | 71.29 | 76.0 | 78.23 | 77.99 | 78.96 | 79.55 | 80.74
Table 15. Results in validation set for 3D detection metric for experiments 1–6.
Model | Epoch | Experiment | Car Easy | Car Mod. | Car Hard | Cyclist Easy | Cyclist Mod. | Cyclist Hard | Pedestrian Easy | Pedestrian Mod. | Pedestrian Hard | Overall
Voxel R-CNN | 140 | 6 | 89.55 | 83.37 | 82.63 | 69.72 | 72.7 | 73.53 | 72.16 | 71.38 | 72.71 | 76.29
PartA2 | 182 | 5 | 79.15 | 77.31 | 77.25 | 73.6 | 74.84 | 76.11 | 72.63 | 74.94 | 76.01 | 76.46
PointPillars | 179 | 1 | 63.49 | 58.98 | 59.27 | 52.27 | 60.16 | 63.0 | 41.06 | 40.38 | 38.99 | 53.75
PointRCNN | 89 | 4 | 84.87 | 79.86 | 79.37 | 68.96 | 71.11 | 71.35 | 76.55 | 75.01 | 74.36 | 75.03
PV-RCNN | 139 | 3 | 88.86 | 83.57 | 82.89 | 71.52 | 73.21 | 74.39 | 64.34 | 64.53 | 64.28 | 73.86
SECOND | 147 | 2 | 75.55 | 72.19 | 72.43 | 55.23 | 62.36 | 65.06 | 61.77 | 62.05 | 61.34 | 66.28
Table 16. Results in validation set for BEV detection metric for experiments 7–12.
Model | Epoch | Experiment | Car Easy | Car Mod. | Car Hard | Cyclist Easy | Cyclist Mod. | Cyclist Hard | Pedestrian Easy | Pedestrian Mod. | Pedestrian Hard | Overall
Voxel R-CNN | 199 | 12 | 97.19 | 96.11 | 96.32 | 74.43 | 77.55 | 79.92 | 88.53 | 88.29 | 88.42 | 88.22
PartA2 | 195 | 11 | 97.75 | 96.71 | 96.61 | 78.23 | 80.9 | 82.74 | 89.99 | 90.41 | 90.76 | 90.04
PointPillars | 21 | 7 | 85.76 | 81.04 | 82.87 | 67.04 | 73.04 | 75.8 | 55.39 | 57.19 | 58.58 | 72.42
PointRCNN | 16 | 10 | 96.3 | 90.84 | 90.83 | 78.31 | 78.51 | 79.01 | 85.88 | 85.24 | 85.32 | 85.05
PV-RCNN | 190 | 9 | 96.4 | 93.45 | 94.08 | 69.05 | 72.34 | 74.74 | 78.77 | 80.17 | 80.7 | 83.17
SECOND | 162 | 8 | 90.61 | 86.51 | 86.05 | 78.66 | 79.76 | 79.91 | 66.27 | 73.66 | 76.79 | 80.92
Table 17. Results in validation set for 3D detection metric for experiments 7–12.
Model | Epoch | Experiment | Car Easy | Car Mod. | Car Hard | Cyclist Easy | Cyclist Mod. | Cyclist Hard | Pedestrian Easy | Pedestrian Mod. | Pedestrian Hard | Overall
Voxel R-CNN | 186 | 12 | 83.72 | 81.21 | 81.33 | 68.44 | 71.01 | 73.69 | 67.62 | 69.28 | 70.42 | 75.15
PartA2 | 187 | 11 | 83.29 | 82.53 | 82.87 | 74.13 | 75.38 | 76.2 | 69.46 | 70.95 | 70.82 | 76.63
PointPillars | 21 | 7 | 69.49 | 66.31 | 66.94 | 47.58 | 52.72 | 56.98 | 37.4 | 36.91 | 39.48 | 54.57
PointRCNN | 39 | 10 | 89.96 | 83.36 | 81.59 | 68.66 | 71.26 | 71.32 | 73.52 | 74.04 | 72.66 | 75.19
PV-RCNN | 44 | 9 | 83.42 | 80.46 | 80.61 | 63.75 | 67.41 | 70.22 | 63.18 | 63.38 | 63.45 | 71.42
SECOND | 162 | 8 | 76.02 | 70.24 | 72.77 | 56.1 | 63.59 | 65.77 | 56.2 | 58.87 | 58.14 | 65.56
Table 18. Our results in KITTI validation set vs. original results in KITTI test set for 3D and BEV detection metrics (overall per class).
Model | Ours 3D Car | Ours 3D Cyc. | Ours 3D Ped. | Ours BEV Car | Ours BEV Cyc. | Ours BEV Ped. | Original 3D Car | Original 3D Cyc. | Original 3D Ped. | Original BEV Car | Original BEV Cyc. | Original BEV Ped.
Voxel R-CNN | 85.18 | 71.98 | 72.08 | 96.54 | 77.3 | 88.41 | 83.19 | - | - | 89.94 | - | -
PartA2 | 82.9 | 75.24 | 70.41 | 96.99 | 82.59 | 90.66 | 79.94 | 66.54 | 45.50 | 88.03 | 71.34 | 34.92
PointPillars | 67.58 | 52.43 | 37.93 | 83.22 | 71.96 | 57.05 | 75.29 | 62.56 | 44.09 | 86.48 | 66.07 | 50.67
PointRCNN | 84.97 | 70.41 | 73.41 | 90.01 | 80.49 | 89.02 | 77.77 | 62.10 | 41.12 | 87.41 | 70.03 | 47.91
PV-RCNN | 85.11 | 73.04 | 64.38 | 94.0 | 79.59 | 80.58 | 82.83 | 66.65 | 45.25 | 90.59 | 71.26 | 52.39
SECOND | 73.39 | 60.88 | 61.72 | 87.72 | 79.44 | 72.24 | 79.20 | 62.56 | 44.09 | 88.4 | 68.36 | 47.63
Table 19. Our inference time metric results.
Model | Total (ms) ~ | Speed (Hz) ~
PointPillars | 17.25 | 57.97
SECOND | 34.1 | 29.33
PV-RCNN | 118.03 | 8.47
PointRCNN | 97.83 | 10.22
PartA2 | 82.66 | 12.10
VoxelRCNN | 59 | 16.95
Table 20. Original model inference time metric results.
Model | Total (ms) ~ | Speed (Hz) ~
PointPillars | 16 | 62.5
SECOND | 110 | 9.09
PV-RCNN | 80 | 12.5
PointRCNN | 100 | 10
PartA2 | 80 | 12.5
VoxelRCNN | 40 | 25
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
