Self-Driving Cars

with ROS and Autoware

hosted by Apex.AI

Self-driving cars will transform the way we travel and commute.

This technology merges robotics, machine learning, engineering,

and modern software development methods.

What is the course about?


Developing production-grade autonomous driving systems requires a stack of interrelated technologies. This course brings all the significant parts together into a practical, step-by-step guide to architecting, developing, testing, and deploying an autonomous system.

This intermediate-level course uses the popular open-source robotics framework ROS 2 and the Autoware.Auto algorithms. Over the course of 14 lectures, it covers state-of-the-art techniques that combine hardware, software, algorithms, methodologies, tools, and data analytics.

What will I learn?


You’ll learn a modern approach to developing complex autonomy systems, the approach that the most innovative automotive companies are adopting. The teachers are experienced professionals who have contributed open-source materials that are moving the industry toward higher standards of design, engineering, and safety.


Who should take the course?

This is an intermediate-level course for individuals who develop pre-production autonomous driving systems. Participants should have knowledge of C++ (including testing), robotics frameworks, and system integration.

Built in collaboration by



Course Overview

Lecture 1 | Development Environment

You can get the slides of the speakers at the following links: Part 1 slides: Part 2 slides:
Led by Dejan Pangercic and Tobias Augspurger
Duration: 60-90min

This lecture provides three major take-aways:
1. A concise overview of all 14 lectures
2. The development environment used, in which you will reproduce the labs
3. The methods used by the course to develop software for safety-critical systems

Part 1
1. Course intro
· Autoware in a video
· Why we are offering this class
· What the students will learn
· How the students will learn
· Walk through the syllabus
2. Quick start - development environment
· Install ADE
· Install ROS 2
· Install Autoware.Auto
· Run the object detection demo
· Edit and compile your code

Part 2
1. Development of complex and safety-critical software - the theory
· Communication goal of this document
· Safety and security in automotive
· Can you get sued for bad code if somebody gets injured?
· Formal safety development standards in automotive software systems
· Popular software development models
· Classical ISO 26262 development and the conflict with urban autonomous driving
· Agile systems engineering
· Operational design domain
· Fail-safe vs. fail-operational
· Continuous engineering
· Separation of concerns
2. Development of complex and safety-critical software - the practice
· General design
· Develop in a fork
· Designs and requirements
· Validation
· Unit testing and structural code coverage
· Integration testing
· Verification by operational design domain
· Continuous integration and DevOps
· Conclusion and the next lecture
3. Conclusion and the next lecture

Lecture 2 | ROS 2 101

You can get the slides at the following links: Slides in pdf format: Slides in html format:
Led by Katherine Scott
Duration: 60-90min

1. Intro
2. Getting help
3. Unofficial resources
4. ROS Intro
· Brief intro to ROS
· Core concepts of ROS
· Environment setup
· Colcon nomenclature
5. Nodes and Publishers
· Overview of topics
· Building and running a node
· Simple publisher build and run
· Modify the publisher
· Building a subscriber
· Pub/Sub working together
6. Services
· Concept overview
· Review basic service
· Running basic services
· Calling services from the command line
· Building a service client
· Executing service server/client
7. Actions
· Action overview
· Action file review
· Basic action review
· Running / calling an action
· Action client review
· Running action server with client
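The publish/subscribe pattern behind ROS 2 topics can be sketched in plain Python, with no ROS installation required. `TopicBus`, `subscribe`, and `publish` below are illustrative names standing in for the middleware and the rclpy API, not real ROS 2 calls:

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List

class TopicBus:
    """Toy stand-in for the ROS 2 middleware: routes messages
    from publishers to every callback subscribed to a topic."""
    def __init__(self) -> None:
        self._subs: DefaultDict[str, List[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, msg) -> None:
        for callback in self._subs[topic]:
            callback(msg)

bus = TopicBus()
received = []
bus.subscribe("/chatter", received.append)   # analogous to create_subscription()
bus.publish("/chatter", "Hello, world! 0")   # analogous to publisher.publish()
print(received)  # ['Hello, world! 0']
```

In real ROS 2 the DDS layer replaces this in-process dictionary, messages are strongly typed (.msg files), and publishers and subscribers live in separate processes, but the decoupling idea is the same.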

Lecture 3 | ROS 2 Tooling

You can get the slides at the following links: Slides in pdf format: Slides in html format:
Led by Katherine Scott
Duration: 60-90min

1. Overview and motivating concepts
· The Command Line
· Environment Variables
2. Setting up our toy environment
3. ROS 2 – help
4. Command line tooling in ROS 2:
· ros2 run: execute a program
· ros2 node: inspect a node
· ros2 node list
· ros2 node info
5. "Sniffing the Bus": Examining topics
· ros2 topic list
· ros2 topic echo
· ros2 topic hz
· ros2 topic info
· ros2 msg show
· ros2 topic pub
6. GUI equivalents
· rqt_graph
· ros2 topic pub
7. Parameters
· ros2 param list
· ros2 param get
· ros2 param set
8. Services: Making things happen
· ros2 service list
· ros2 service type
· ros2 srv show
· ros2 service call
9. Actions
· ros2 action list
· ros2 action info
· ros2 action send_goal
· ros2 action show
· More complex calls
10. Logging data: Secure the bag
· What's a bag?
· ros2 bag record
· ros2 bag record -- selecting topics
· ros2 bag info
· ros2 bag play
11. Wrap up and homework

Lecture 4 | Platform (HW, RTOS, DDS)

You can get the slides at the following links: First part slides (ECU & RTOS) in pdf format: Second part slides (DDS Explained) in pdf format:
Led by Angelo Corsaro and Stephane Strahm
Duration: 60-90min

1. ECU
· Which automotive ECUs are available today
· Terminology: BSP, ECU, SoC, interfaces
· Importance of ECUs for safety-critical applications
2. RTOS
· What an RTOS is compared to, e.g., vanilla Linux
· Which RTOSes are available and suitable for AD
· Micro vs. monolithic kernel
· Scheduling policies
· Safe memory management
· Spatial and temporal separation
· Support for HW compute accelerators
3. DDS Explained
· DDS foundations
· DDS: selected advanced concepts
· DDS features for robotic applications

Lecture 5 | Autonomous Driving Stacks

You can get the slides at the following links: First part slides: Second part slides: Third part slides:
Led by Daniel Watzenig and Markus Schratter
Duration: 60-90min

1. Motivation to use AD stacks
· Complexity
· State-of-the-art reference implementation
2. Architecture of an AD stack
· Sense - Plan - Act
· Overview of the building blocks for AD (sensors, actuators, perception, localization, map, planning, ...)
· How the blocks relate to each other
3. Other AD stacks
· Nvidia DriveWorks
· Apollo
4. Integration of Autoware into a research vehicle
· What is needed?
· Hardware and integration overview
· Providing AD functionality for research projects
5. Use case: Roborace
· Using parts of Autoware components
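The Sense - Plan - Act decomposition covered in this lecture can be sketched as one minimal control-loop iteration. Everything here is illustrative (the function names, the proportional gain, and the fake sensor state are assumptions, not Autoware code):

```python
from dataclasses import dataclass

@dataclass
class Command:
    steering: float  # rad
    throttle: float  # 0..1

def sense() -> dict:
    """Stand-in for the perception/localization blocks:
    returns the world state the planner needs."""
    return {"lane_offset_m": 0.4, "obstacle_ahead": False}

def plan(state: dict) -> Command:
    """Stand-in for the planning block: a toy proportional
    steering correction toward the lane center."""
    k_p = 0.5
    steering = -k_p * state["lane_offset_m"]
    throttle = 0.0 if state["obstacle_ahead"] else 0.3
    return Command(steering=steering, throttle=throttle)

def act(cmd: Command) -> None:
    """Stand-in for the actuation block (drive-by-wire interface)."""
    print(f"steer={cmd.steering:+.2f} rad, throttle={cmd.throttle:.1f}")

# One iteration; a real stack runs this continuously at a fixed rate,
# with each block as a separate ROS 2 node exchanging messages.
act(plan(sense()))
```

The point of the decomposition is the interfaces: each block can be replaced (a different planner, a simulated sensor) without touching the others.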

Lecture 6 | Autoware 101

You can get the slides at the following link:
Led by Josh Whitley
Duration: 25min

Students will learn about The Autoware Foundation and its two primary projects: Autoware.AI and Autoware.Auto. Subtopics covered:
· The Autoware Foundation structure
· The history and capabilities of Autoware.AI
· Current status and future goals of Autoware.Auto
· Architectural overview of Autoware.Auto
· The Autoware.Auto development process / how to contribute

Lecture 7 | Object Perception: LIDAR

You can get the slides at the following link: Part 1: Part 2: Part 3: Part 4: Part 5: Part 6: Part 7: Part 8:
Led by Christopher Ho and Gowtham Ranganathan
Duration: 60-90min

Students will learn the purpose and role of object detection in the autonomous driving stack and understand the design of an object detection stack and the space of algorithms within. Students will also develop a detailed knowledge of the Autoware.Auto object detection stack, including how it works, how to use it, and how to tune it.
1. Object detection and the autonomous driving stack
2. A classical lidar-based object detection stack
3. Preprocessing LiDAR data
4. Ground filtering
5. Clustering/object detection
6. Shape extraction
7. Using detected objects
8. Lab: The Autoware.Auto object detection stack
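The clustering step in a classical lidar pipeline groups nearby points into objects after ground filtering. A minimal Euclidean clustering sketch is below; it is pure Python and O(n²) for clarity, whereas real stacks use spatial hashing or k-d trees, and this is not the Autoware.Auto implementation. The tolerance and the sample points are assumptions:

```python
import math

def euclidean_clusters(points, tolerance=0.7, min_size=2):
    """Group 2D points whose chained pairwise gap stays below
    `tolerance`, via flood-fill over the neighbor relation."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            neighbors = [j for j in unvisited
                         if math.dist(points[i], points[j]) <= tolerance]
            for j in neighbors:
                unvisited.remove(j)
                cluster.append(j)
                frontier.append(j)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters

# Two well-separated groups of returns (ground already removed).
pts = [(0.0, 0.0), (0.3, 0.1), (0.5, 0.2), (5.0, 5.0), (5.2, 5.1)]
print(sorted(euclidean_clusters(pts)))  # [[0, 1, 2], [3, 4]]
```

The `min_size` filter discards single-point clusters, which in practice are usually noise; shape extraction (step 6) then fits a bounding box or polygon to each surviving cluster.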

Lecture 8 | Object Perception: Camera

Get the slides here Get the special notebook with details about the practice here Get the code shown in the video here
Led by Michael Reke, Stefan Schiffer, Alexander Ferrein, Gjorgji Nikolovski
Duration: 60-90min

Cameras are one of the key sensor systems for autonomous driving. In this lecture you will learn how to use camera pictures to detect real-world objects. After a brief introduction to camera technology in general, you will learn what steps are necessary to calibrate your camera system to compensate for distortion. You will see how to make use of neural networks to detect objects like lanes, vehicles, and pedestrians, and which toolboxes you might use for that. Finally, you will create your own lane detection node in ROS 2.
1. Camera basics
· Basic KPIs: resolution, ...
· Calculating real-world points
· Monovision systems
· Stereovision systems
· Epipolar coordinates
2. Camera calibration
· Installing the camera system
· Calibration procedure with a chessboard pattern
· Calculation of intrinsic / extrinsic parameters
3. Object detection
· Basics of neural networks
· Examples of available DNNs: YOLO, etc.
· Available data sets for training: KITTI, Ford, etc.
· The computation problem (real-time?)
4. Available toolboxes
· Basic algorithm toolbox: OpenCV
· GPU deployment toolbox: CUDA
· Higher-level integrated toolboxes: e.g. the Nvidia AD toolbox
5. Example: Lane detection
· Basics of lane detection
· Polynomial lane-fitting for data reduction
· Step-by-step hands-on
6. Hands-on course
· Example lane detection based on real data
· Read data from the data stream
· Calculate lanes' real-world coordinates
· Polynomial fitting of detected lanes
· Generating ROS 2 messages
· Visualisation, e.g. rviz2
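The link between real-world points and pixels that calibration establishes is the pinhole camera model: a 3D point in the camera frame projects through the intrinsic parameters to a pixel. A minimal sketch follows; the intrinsic values (fx, fy, cx, cy) are illustrative assumptions, not from a real calibration, and lens distortion is omitted:

```python
def project_point(x, y, z, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame point (x, y, z), z > 0,
    to pixel coordinates (u, v):
        u = fx * x / z + cx
        v = fy * y / z + cy
    fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
    Calibration (the chessboard procedure) is what recovers these values,
    plus the distortion coefficients omitted here."""
    if z <= 0:
        raise ValueError("point is behind the camera")
    return fx * x / z + cx, fy * y / z + cy

# Assumed intrinsics for a 1280x720 camera (illustrative values only).
fx = fy = 1000.0
cx, cy = 640.0, 360.0

# A point 2 m ahead and 0.5 m to the right of the optical axis.
u, v = project_point(0.5, 0.0, 2.0, fx, fy, cx, cy)
print(u, v)  # 890.0 360.0
```

Inverting this mapping (pixel back to a ray in 3D) is what the "calculating real-world points" topic relies on; with a single camera you recover a ray, and stereo or a ground-plane assumption fixes the depth.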

Lecture 14 | HD Maps

Get the course material here: Part 1 Part 2
Led by Simon Thompson
Duration: 60-90min

1. HD maps for autonomous driving
2. What are HD maps
3. Industry standards for HD maps
4. Creating HD maps
5. HD maps in Autoware
6. HD map types and formats for Autoware.Auto
7. Use cases of HD maps in Autoware
8. HD map architecture and provision within the AD software stack
9. Examples of HD map usage

Lecture 13 | Data Storage and Analytics

Course material (code and data) can be found here
Led by Florian Friesdorf
Duration: 60-90min

1. Set up MARV; find and process ROS 1 and ROS 2 bags
2. Add metadata files with a custom scanner
3. Use metadata for filtering and listing
4. Write custom nodes to process data streams
5. Use Tensorflow to detect objects in video

Lecture 11 | LGSVL Simulation

Get the pdf slides here Get the markdown document with the practical material here
Led by Steve Lemke
Duration: 60-90min

1. Installation of the simulator
· System requirements
· GPU drivers and libraries
2. Getting started
· Basic simulator concepts
· Maps, vehicles, clusters, simulations
· How to start a simulation
· Simulation parameters
3. Running simulation with Autoware.Auto
· Different sensor configurations
· Setting up the ROS 2 bridge
· Visualizing sensor information
4. Automation and Python API
· Controlling the environment using the Python API
· Controlling non-ego vehicle actors
· Controlling custom objects (controllables)
· Callbacks and custom sensor plugins
· EXAMPLE: Collecting data for model training
· EXAMPLE: Judging ride comfort
5. Advanced topics
· New environment creation
· New vehicle creation
· New sensor creation

Lecture 12 | Motion Planning & Control

You can get the pdf slides at the following links: Syllabus Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 Part 7
Led by Stefano Longo, Sandro Merkli, Takamasa Horibe
Duration: 60-90min

1. Hierarchical architecture in autonomous driving
· General architecture for autonomous driving
· Hierarchy of decision-making modules
· Alternative architectures
2. Decision making in autonomous driving
· Route planner
· Path planner
· Behavior selector
· Motion planner
· Obstacle avoider
· Controller
3. Motion planning in autonomous driving - introduction
· Scope of motion planning
· Hierarchy of requirements
· Classification of algorithms based on outputs, space-time properties, and mathematical domain
4. Motion planning in autonomous driving - algorithms
· Space configuration
· Pathfinding algorithms
· Attractive and repulsive forces
· Parametric and semi-parametric curves
· Artificial intelligence
· Numerical optimization
5. Model Predictive Control - FAQs
· Feedback control
· Optimal control
· Relationship between LQR, LQG, and MPC
· Relationship with DP and RL
6. Motion planning advanced methods
· Path planning and tracking as a single problem
· A nonlinear MPC formulation
7. Testing MPC with Autoware.Auto
8. Autoware parking planner
· What the parking planner does
· Where to find it in the repository
· How to call it
· How to inspect the results
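The pathfinding algorithms in this lecture can be illustrated with a compact A* search on a 2D occupancy grid. This is a generic textbook sketch, not the Autoware.Auto planner; the grid, unit step cost, and Manhattan heuristic are assumptions:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid.
    grid[r][c] == 1 means blocked; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    # Priority queue entries: (f = g + h, g, cell, path so far).
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1:
                continue
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(open_set,
                               (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],  # a wall the path must go around
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)  # shortest route around the wall, 7 cells long
```

Grid search like this gives a coarse geometric path; the later stages of the hierarchy (parametric curves, MPC) smooth it and add dynamics and comfort constraints.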

Lecture 9 | Object Perception: Radar

You can get the slides here You can get the special notebook with details about the hands-on lab here You can get the code of the hands-on lab here
Led by Michael Reke, Daniel Peter
Duration: 60-90min

Radar is the most important sensor system for collision avoidance. In this lecture students will learn how radar sensors work at a basic level and how they can be used for object detection. Automotive-grade radar sensors today provide a lot of internal signal processing and integrated object detection. You will learn how to parametrize such sensors, and you will finally create your own radar ROS 2 node.
1. Radar basics
· What is radar...?
· Basic sensor setup with multiple internal receivers
· Measurement of the radar cross section (RCS)
· Difficult sensing constellations
2. Object detection
· Sensor-internal object detection
· Need for ego car velocity
· Measurement of object parameters: distance, dimension, velocity
· Object filtering
3. Available sensors
· Different frequency bands (SRR and LRR)
· Internal signal processing and object detection
· Sensor parameterisation
· CAN as data interface
4. Example: Integration of a radar sensor into a ROS 2 node
· Continental ARS-408 as example
· Step-by-step hands-on
5. Hands-on course: Example radar sensor integration
· Read data from the data stream
· Sorting and filtering of the object list
· Generating ROS 2 messages
· Visualization, e.g. rviz2
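Two quantities every radar derives are range from the echo's round-trip time and radial velocity from the Doppler shift. A back-of-the-envelope sketch of the textbook formulas (not tied to any specific sensor; the example numbers are assumptions):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds):
    """Target range: the pulse travels out and back, so R = c * t / 2."""
    return C * t_seconds / 2.0

def radial_velocity(doppler_shift_hz, carrier_hz):
    """Radial velocity from the Doppler shift: v = f_d * c / (2 * f_c).
    Positive f_d means the target is approaching."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# Echo received 400 ns after transmission -> target about 60 m away.
print(round(range_from_round_trip(400e-9), 1))  # 60.0

# 77 GHz automotive radar, 5.1 kHz Doppler shift -> roughly 10 m/s closing speed.
print(round(radial_velocity(5.1e3, 77e9), 2))
```

Automotive sensors like the ARS-408 do this processing internally and emit an object list over CAN; the hands-on lab consumes that list rather than raw returns.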

Lecture 10 | State Estimation for Localization

You can get the slides at the following links: Localization with Autoware State Estimation Algorithms 2D and 3D NDT algorithms
Led by Josh Whitley, Steve Macenski, Yunus Emre Çalışkan
Duration: 60-90min

Students will gain an understanding of the transform architecture and localization methods implemented in Autoware.Auto and how they relate to ROS standards.
1. Introduction
· Localization for self-driving cars
· The Autoware.Auto transform tree
· Localization in Autoware.Auto: an example of flexible design
2. Odometry state estimator
· Kalman filters
· What if your system is nonlinear?
· EKF and UKF
· Robot localization
· Setup and use in Autoware.Auto
3. Environmental sensor localizer
· NDT algorithm in 2D
· NDT algorithm in 3D
· NDT class implementation
· NDT node implementation
· Setup and use in Autoware.Auto
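The Kalman filter at the core of the odometry state estimator can be shown in one dimension. This is the textbook scalar filter with a constant-state model, not the Autoware.Auto implementation; the process noise q, measurement noise r, and the measurement sequence are assumptions:

```python
def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a constant-state model.
    Predict: the estimate x stays put, uncertainty p grows by q.
    Update:  blend prediction and measurement via the gain k."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: process noise inflates p
        k = p / (p + r)           # Kalman gain: trust in the measurement
        x = x + k * (z - x)       # update with the measurement residual
        p = (1 - k) * p           # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Noisy readings of a value near 5.0; starting from x0 = 0, the
# estimates climb toward the true value as evidence accumulates.
zs = [5.3, 4.8, 5.1, 4.9, 5.2]
est = kalman_1d(zs)
print([round(e, 2) for e in est])
```

The EKF and UKF covered in the lecture generalize exactly these two steps (predict, update) to nonlinear motion and measurement models with vector states.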

The presentations/training materials themselves are licensed under the Creative Commons Attribution 3.0 Unported License (CC BY 3.0). Any video of a presentation posted on YouTube is licensed for use under the YouTube license. Any videos posted on a platform other than YouTube are licensed for use under the Creative Commons Attribution-ShareAlike 3.0 License (CC BY-SA 3.0). Any data sets hosted by Apex.AI for the course are licensed under the Creative Commons Attribution 3.0 Unported License (CC BY 3.0). If a data set is hosted outside of our site under a different license, it will be clearly labeled as such.
Any software/source code is licensed under the Apache 2.0 license.