Autonomous driving algorithms developed by driveblocks using Apex.OS*
Updated: Feb 13
Real-time LIDAR object detection for heavy-duty trucks
Autonomous vehicle object detection
With increasing automation levels, autonomous system designers look towards using different sensor types for the various perception tasks to be performed. The use of diverse sensing technologies increases the reliability of the autonomous driving system, especially under varying environmental conditions such as weather and the lighting differences between day and night. One of these sensing technologies is LIDAR (light detection and ranging). LIDAR sensors create a three-dimensional representation of the world around the autonomous vehicle, also called a 3D pointcloud, by emitting light beams. The key strength of this sensing technology is its ability to reliably judge distances as well as the dimensions of detected pedestrians, cars, and trucks (see Figure 1 for a visualization of the detections with cyan bounding boxes around the objects). In addition, it works reliably at night because it actively emits light, in contrast to cameras, which passively capture ambient light.

One of the challenges when working with 3D pointclouds is their data size. A scan covering the full surroundings of the vehicle can easily comprise between 50,000 and 200,000 points. This leads to up to 3 megabytes of data per scan, which can result in significant data transport latency and processing time. This has to be kept in mind when designing the software components of an autonomous driving system: the execution time of the processing pipeline must be kept to a minimum and must have a reliable upper bound to satisfy safety and reliability requirements. Both the algorithms and the base software therefore have to account for these properties.

driveblocks LIDAR perception modules
driveblocks develops a modular and scalable autonomous driving stack with a focus on heavy-duty vehicles, such as semi-hauler trucks for highway applications or vehicles for container terminals. Part of this stack is the LIDAR perception solution, which covers several functionalities. It provides efficient downsampling of the pointcloud based on a voxelization algorithm; this reduces the pointcloud density and thereby ensures computationally efficient operation of the subsequent nodes while keeping a sufficiently accurate representation of the world around the vehicle. Another component is ground removal, in which points that clearly belong to the road surface rather than to other objects are removed from the pointcloud, allowing the clustering and classification algorithms to focus on the object detection task. In addition, a clustering algorithm groups points that lie close to each other and therefore potentially belong to the same object. Finally, the classification algorithm evaluates each cluster individually, using various geometric properties, to decide whether it represents a vehicle.
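The stages above can be sketched in a few lines of plain Python. This is an illustrative sketch only, not driveblocks' implementation: the function names, the fixed ground-height threshold, and the simple O(n²) clustering are assumptions chosen for readability, not for real-time performance.

```python
from collections import defaultdict

# Illustrative sketch of voxel downsampling, ground removal, and
# Euclidean clustering on a pointcloud of (x, y, z) tuples.
# All names and thresholds are assumptions for this example.

def voxel_downsample(points, voxel_size=0.2):
    """Keep one representative point (the centroid) per voxel cell."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        cells[key].append((x, y, z))
    return [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in cells.values()]

def remove_ground(points, ground_z=0.3):
    """Naive ground removal: drop points below a fixed height threshold.
    Real systems fit a ground plane instead of assuming a flat road."""
    return [p for p in points if p[2] > ground_z]

def cluster(points, radius=0.5):
    """Greedy Euclidean clustering: grow a cluster from each unvisited
    point by repeatedly absorbing neighbors within `radius`."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        members = [unvisited.pop()]
        queue = list(members)
        while queue:
            i = queue.pop()
            near = [j for j in unvisited
                    if sum((a - b) ** 2
                           for a, b in zip(points[i], points[j])) <= radius ** 2]
            unvisited.difference_update(near)
            queue.extend(near)
            members.extend(near)
        clusters.append([points[i] for i in members])
    return clusters
```

A final classification stage would then inspect each cluster's geometric properties (extent, height, point count) to decide whether it represents a vehicle.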
Figure 1: Visualization of the detections with cyan bounding boxes around the objects
Building on Apex.OS* and Apex.Middleware*
The team at driveblocks had used ROS 2 Galactic to prototype the implementations of their algorithms in the past. However, the strict safety requirements faced by autonomous vehicle applications call for a certified base software layer. As Apex.OS* and Apex.Middleware* are a hardened, production-grade version of ROS 2, they became the option of choice for the team.
They ported their ROS 2 applications to Apex.OS* and leveraged two crucial features: first, shared memory transport, to decrease the transport latency between the nodes; second, the Apex.OS* executor, to explicitly specify the compute graph formed by the various nodes in the perception pipeline. In combination, these led to a significant decrease in computational overhead and in the worst-case end-to-end latency.

Benchmark on embedded hardware
Even though the speedups were already remarkable on an x86-based workstation, the team decided to benchmark different configurations of the LIDAR object detection implementation on ROS 2 Galactic and Apex.OS* on an embedded hardware platform, a Raspberry Pi 4 Model B with an ARM Cortex-A72 processor and 8 GB of RAM. The benchmark was done using two different configurations: one suitable for a single-core implementation, and another leveraging up to three cores to parallelize the LIDAR preprocessing (see Figure 2).
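The multi-core configuration can be illustrated with a simple chunking scheme: split the scan into pieces, filter each piece on its own worker, and merge the results. This is a hypothetical sketch, not the benchmarked implementation; in CPython the GIL limits real CPU parallelism for this workload, whereas a C++ pipeline can use real cores, so the sketch only shows the split/process/merge structure.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of parallelizing a per-point preprocessing step (here: ground
# removal) across up to three workers. Hypothetical example code, not
# the benchmarked implementation.

def remove_ground_chunk(chunk, ground_z=0.3):
    """Filter one chunk of the scan independently of the others."""
    return [p for p in chunk if p[2] > ground_z]

def parallel_remove_ground(points, workers=3):
    """Split the scan into roughly equal chunks, filter them on a
    worker pool, and concatenate the filtered chunks in order."""
    chunk_len = max(1, len(points) // workers)
    chunks = [points[i:i + chunk_len]
              for i in range(0, len(points), chunk_len)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        filtered = pool.map(remove_ground_chunk, chunks)
    return [p for chunk in filtered for p in chunk]
```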
The lower and more predictable latency achieved by using Apex.OS* leads to several benefits for the overall development of autonomous vehicles. First, available compute resources are used more efficiently, reducing compute hardware requirements and providing cost benefits for the overall system. Second, a predictable worst-case latency is an important prerequisite for safety-critical algorithms to achieve certification and to master difficult borderline situations in the real world.
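The predictability benefit of an explicitly specified compute graph can be illustrated with a toy scheduler in plain Python: instead of callbacks firing in an arbitrary, data-driven order, the stages run exactly once per input scan, in a statically declared order, which makes the per-cycle work, and hence the latency, bounded and repeatable. All names here are invented for illustration and are unrelated to the actual Apex.OS* executor API.

```python
# Toy illustration of a statically specified compute graph: every stage
# runs exactly once per input scan, in a declared order. Invented
# example; not the Apex.OS* executor API.

class StaticPipeline:
    def __init__(self):
        self.stages = []  # (name, callable), in execution order

    def add_stage(self, name, fn):
        self.stages.append((name, fn))

    def spin_once(self, scan):
        """Run one deterministic processing cycle over a single scan."""
        for _, fn in self.stages:
            scan = fn(scan)
        return scan

pipeline = StaticPipeline()
pipeline.add_stage("downsample", lambda pc: pc[::2])                      # stand-in stage
pipeline.add_stage("ground_removal", lambda pc: [p for p in pc if p[2] > 0.3])
pipeline.add_stage("clustering", lambda pc: [pc])                         # trivially one cluster
```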
Download the white paper HERE
*As of January 2023 we renamed our products: Apex.Grace was formerly known as Apex.OS, and Apex.Ida was formerly known as Apex.Middleware.