A Brief History of Automated Driving — Part Three: Toward Product Development
Updated: Oct 5, 2020
I have been working on autonomous vehicles and driver assistance systems for 23 years. During this time, I have had several touchpoints with legal and regulatory aspects and with functional safety certification. I helped write the SAE levels of automated driving, and I cover these topics in my Stanford class as well. This post is an attempt to distill my insights into a concise summary. Comments, additions, input for missing pieces, and suggestions are welcome. Contact firstname.lastname@example.org. Thanks.
From Driver Assistance to Robocars—Evolution or Revolution?
Driver assistance systems evolved from simple distance-keeping features to sophisticated assistance systems over the past twenty years:
Adaptive Cruise Control (ACC) was launched as a single driver assistance feature in luxury-class vehicles in the late 1990s. ACC back then was typically based on a single radar sensor, initially designed specifically for this functionality, to maintain a constant time gap to the preceding vehicle while not exceeding a preset maximum velocity.
A couple of years later, Forward Collision Warning (FCW) was added, still integrated in the radar unit and still driven by that single radar sensor. For the first time, two functions were driven by a single sensor.
In the 2000s, Lane Departure Warning (LDW) systems were introduced. These are typically based on cameras (Citroen had a system based on infrared sensors), which detect the white lane markers in the camera image. Over the years, these evolved from warning systems to systems that applied a certain limited amount of torque to keep the vehicle in the lane, then called Lane Keeping Support (LKS) or Lane Keeping Assist or Lane Centering Support.
In 2013, Mercedes launched the first system in which longitudinal and lateral control were integrated. This was the first level 2 system ever introduced into a production vehicle. It was called "Distronic Plus with Steering Assist and Stop&Go Pilot." Tesla launched a system with similar functionality in 2015 and gave it the much leaner—though controversial—name Autopilot. Many carmakers have introduced level 2 systems since then.
Audi developed the first level 3 system, announced for launch in 2017. The system was an extension of their level 2 driver assistance system and was perceived to provide very little extra value, as it was usable only in certain environments. It never became available, though; the manufacturer cited legal restrictions in Germany as the reason.
In parallel, advances in machine learning in the past ten years have led to a major breakthrough in computer vision, which has provided the field of vehicle automation with the ability to significantly better understand street scenes using cameras. This led to significant advances in the development of level 4 systems a couple of years ago and resulted in the over-optimistic expectation voiced by many executives, investors, and media that level 4 vehicles (a.k.a. robocars) are practically ready to be deployed and that by today we'd have fleets of robocars running on the streets.
Commercially, robocars will go hand in hand with a revolution of mobility business models. Personally-owned vehicles are purchased with the expectation that they can be driven by their owner to—more or less—any reasonably reachable destination. That creates the buyer expectation that—if those vehicles were highly automated—the vehicle automation system in such vehicles would be able to operate on pretty much any drivable road. While level 4 systems have made an incredible amount of progress in the past decade (as outlined in this blog post series), it is still a long way to a level 5 vehicle—defined as a level 4 vehicle that works in all driving modes, i.e., on all roads under all reasonable conditions.
For (most) individuals, owning a level 1, 2, or 5 vehicle may make sense. Level 3 vehicles may not make sense at all for passenger vehicles (I'll come to that later), and level 4 vehicles—defined as (potentially) driverless vehicles, capable of operating within a certain Operational Design Domain (ODD) only—make a lot of sense for fleet operators (and the likes of an Uber or Lyft), but not so much for an individual. Who would want to own a robocar that only works in suburban Silicon Valley, but not all the way up to San Francisco or down to LA? That also puts today's carmakers in an interesting position. Will they offer transportation services? Or do they partner with the rideshare operators? Do they rely on technology from Argo or Waymo, partner with a Tier 1 supplier or with an ecosystem of start-ups, or build their own technology including an ecosystem from scratch? We just saw in June that building a vehicle plus automation plus ecosystem is too much even for a well-funded start-up. On the other hand, automakers today are selling millions of vehicles with driver assistance. How many money-making vehicles do Argo and Waymo operate today?
A Deeper Dive into the History of Driver Assistance
Vehicle automation was first introduced into series production vehicles in the late 1990s through driver assistance systems, now also known as Level 1 systems.
Adaptive Cruise Control (ACC) is an extension of Cruise Control to maintain a set distance to preceding vehicles, most commonly using a millimeter-wave FMCW (frequency-modulated continuous-wave) radar sensor. Lidar sensors were also used in some very early ACC systems in Japan but were soon replaced by radars. The radar directly measures distance and relative velocity of all reflecting objects and indirectly estimates their angular offset. Objects irrelevant for ACC are ignored, and the target object, which is typically a preceding vehicle in the same lane of traffic, is selected. A controller then adjusts the relative velocity to the preceding vehicle such that a certain time gap is kept constant.
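To make the constant-time-gap policy concrete, here is a minimal sketch of such a controller in Python. The function name, gains, and limits are hypothetical values chosen for illustration, not those of any production system:

```python
def acc_command(v_ego, v_lead, gap, time_gap=1.8, v_set=33.0,
                k_gap=0.2, k_vel=0.5, a_min=-3.0, a_max=2.0):
    """Toy constant-time-gap ACC controller.

    v_ego, v_lead: own and lead-vehicle speed [m/s]
    gap: measured distance to the lead vehicle [m]
    Returns an acceleration command [m/s^2].
    """
    desired_gap = time_gap * v_ego       # constant time-gap policy
    gap_error = gap - desired_gap        # positive = farther than desired
    v_rel = v_lead - v_ego               # negative = closing in

    # Blend gap error and relative velocity for following behavior.
    a_follow = k_gap * gap_error + k_vel * v_rel
    # Plain cruise control toward the driver-set speed.
    a_cruise = k_vel * (v_set - v_ego)

    # Take the more conservative of the two, then apply comfort limits.
    a_cmd = min(a_follow, a_cruise)
    return max(a_min, min(a_max, a_cmd))
```

With both vehicles at 30 m/s and the gap exactly at 1.8 s × 30 m/s = 54 m, the command is zero; if the gap shrinks, the controller brakes gently within its comfort limits.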
Forward Collision Warning (FCW), introduced a few years later, is based on the same sensor and makes drivers aware of impending collisions by means of escalating warning levels. Forward collision prevention systems for low-speed applications using short-range, low-resolution, inexpensive lidar sensors entered the market around 2010, e.g., Volvo City Safety.
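The escalating-warning idea can be illustrated with a toy time-to-collision (TTC) computation. The `fcw_level` function and its thresholds are made up for illustration and do not reflect any particular production system:

```python
def fcw_level(gap, closing_speed, ttc_warn=2.7, ttc_brake=1.6):
    """Toy FCW: escalate the warning based on time-to-collision.

    gap: distance to the object ahead [m]
    closing_speed: rate at which the gap shrinks [m/s] (<= 0 means no threat)
    """
    if closing_speed <= 0:           # gap constant or opening: no warning
        return "none"
    ttc = gap / closing_speed        # seconds until impact at current rate
    if ttc < ttc_brake:
        return "urgent"              # e.g. audible alert, brake pre-fill
    if ttc < ttc_warn:
        return "caution"             # e.g. visual warning
    return "none"
```

For example, closing at 10 m/s from 20 m away gives a TTC of 2 s and triggers the "caution" stage; at 10 m the TTC drops to 1 s and the warning escalates.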
Lateral guidance systems using cameras and computer vision algorithms to detect lane markers in the camera image were introduced into production vehicles about ten years ago with systems such as Lane Departure Warning (LDW) or Lane Keeping Support (LKS), also called Lane Centering.
In 2013, Daimler released the first production passenger vehicle with an SAE Level 2 (a.k.a. Partial Automation) assistance system, launched under the somewhat complicated name "Distronic Plus with Steering Assist." The system combines longitudinal distance keeping with lateral lane centering. Level 2 systems still require constant supervision by the driver, who may need to take over control of the vehicle more or less instantaneously. Shortly after, other automakers released similar level 2 systems, e.g., Tesla with their Autopilot system. In 2020, Consumer Reports ranked GM's Super Cruise as the most advanced driver assistance system available.
Audi announced that the 2017 Audi A8 would be the first production automobile developed especially for conditional automated driving (SAE level 3). The system never became available in the US or Europe; the manufacturer cited legal restrictions as the reason. Recently, Audi announced that the system will be discontinued in the 2021 mid-cycle refresh.
Toward Automated Driving Product Development
The DARPA Urban Challenge—as described in more detail in the previous post—marked the transition from academic research to self-driving product development through three distinct outcomes:
1. Industry attention. The DARPA prize money attracted top-notch researchers, who attracted leading automotive manufacturers (Volkswagen with Stanford, GM with CMU), large automotive suppliers (Continental and Mobileye both with CMU, Bosch with Stanford), chipmakers (Intel with both top teams to be on the safe side, NXP with Stanford), and Google (also with both top teams) as sponsors.
2. High-resolution automotive lidar. Researchers used simple industrial lidar sensors up to the 2005 Grand Challenge (Omron had built a simple automotive lidar in the late 1990s for Adaptive Cruise Control systems, but ultimately gave it up in favor of the much more cost-efficient radar). Stanford mounted five separate single-beam lidar sensors at different vertical angles to achieve some vertical resolution, mainly to compensate for the pitch movement of the vehicle. The Hall brothers decided to build their own 64-beam high-resolution lidar sensor for Velodyne's 2005 Grand Challenge entry, after experimenting somewhat unsuccessfully with stereo vision.
In 2007, Velodyne focussed on refining the 2005 prototype into the HDL-64 sensor, which is still on the market today in the same form factor. By the time the 2007 Urban Challenge took place, Velodyne's sensor was mounted on top of five of the six vehicles that finished. This unforeseen success sparked a whole new industry: Velodyne is now a unicorn, and the lidar ecosystem contains over a hundred players.
3. Google entering automotive. On January 17, 2009, about a year after the Urban Challenge, project Chauffeur started in secret with about 15 engineers. Ten days earlier, Stanford's software repository had been published under a permissive open source license on Sourceforge, and part of the team left to join Google. Presumably, the Stanford code formed the basis for the project and enabled a running start. Chris Urmson joined from CMU.
Google Claiming the Lead
About a year later, a small fleet of seven Prius vehicles with Velodyne HDL-64 sensors was frequently spotted on Bay Area highways, and NYT's John Markoff at some point put two and two together. On October 9, 2010, Google formally announced the project, and John published an article in the NYT the same day. This WIRED story provides some background on the early days of the project, and Chris Urmson and Sebastian Thrun showed videos of the first three years at IROS 2011, including self-driving golf carts.
Waymo just recently published the initial ten 100-mile routes the team completed in just the first 18 months. The drives went from LA to the Bay Area, to San Francisco, to Monterey and around the Bay, around Lake Tahoe, and through the Santa Cruz Mountains, all with a safety driver but each completed at least once without human intervention or disengagements.
Detroit automakers were not amused. Chrysler reacted quickly with a counterattack in the form of two TV commercials, the first one saying "Hands-free driving, cars that park themselves, an unmanned car driven by a search-engine company. We've seen that movie, it ends with robots harvesting our bodies for energy", and the second one "Robots can take our food, our clothes, and our homes, but they will never take our cars."
After a short state of shock following the Google announcement in 2010, many carmakers, e.g., Daimler, BMW, Audi, Volkswagen, GM, Nissan, Honda, Toyota, Volvo, Ford, Tesla, Hyundai, Jaguar Land Rover, and Faraday Future, as well as the major automotive suppliers, such as Bosch, Delphi, Continental, and Mobileye, have since made similar projects public. Rideshare companies, such as Uber, IT companies, such as Baidu, and chip manufacturers, such as Nvidia and Intel, also joined the field with their own self-driving car projects. Most of these announcements have already missed, or will likely miss, their over-optimistically announced target dates ("...develop a commercially viable driverless vehicle by 2018" or "...team up to bring fully autonomous driving to streets by 2021"), and most have been removed from the corporate websites. I'll address the early misconceptions and over-optimistic announcements in a later post.
Some remarkable research results were published in 2013, when a VisLab research vehicle drove autonomously in public traffic in Parma, Italy, along a 13 km long route, at times even without a safety driver in the driver seat. That same year, Daimler and KIT built an autonomous research vehicle called Bertha, which drove the 100 km long Bertha Benz Memorial Route in public traffic; the route follows the world's first long-distance road trip by a vehicle powered with an internal combustion engine, driven in 1888 from Mannheim to Pforzheim, Germany.
The Start-up Phase from 2013 to 2017
The anticipation that autonomous driving would be the next big thing led to the founding of start-ups focussed on driving automation, most notably Cruise Automation and nuTonomy in 2013, Zoox and Navya in 2014, and Drive.ai in 2015.
A number of start-ups attempt(ed) to build a full stack of autonomous driving technology on their own, even though Waymo (as the leader of the full-stack pack) is spending close to a billion dollars annually on developing self-driving technology. Building a full stack is thus very expensive, and very few companies have been able to secure the financial resources to build it. In fact, so far only three companies besides Waymo have been able to secure significantly over $1B:
Four companies are known to have raised over $500M:
Some attempt(ed) to reduce the complexity of the full stack by focussing on a specific Operational Design Domain (ODD). Among these are, for example:
Trucking: Peloton (2011), Embark (2015), TuSimple (2015), Plus.ai (2016), OTTO (2016, acquired by Uber in the same year), Starsky (2016, shut down 2020), Locomation (2016), Kodiak (2018), Ike (2018, licensed the stack from Nuro for use in trucking), ...
Driving in dedicated closed communities: Voyage (2017)
Others share our perception that developing a complete software stack is far too complex and costly for most companies and consequently focus on developing a best-in-class component:
Camera and machine-learning-based image processing: Deepscale (2015, shut down and acquired by Tesla in 2019), StradVision (2014), Brodmann17 (2016), Helm.ai (2016), Phantom AI (2016), Cartica AI (2019), and many more
Prediction: Perceptive Automata (2015)
Motion planning and control: Embotech (2013)
And furthermore, following Velodyne's success, many start-ups have decided to build lidar sensors, e.g., LeddarTech (2007), Oryx (2009, shut down in 2019), Quanergy (2012), Aeye (2013), Hesai (2013, just agreed to a license agreement with Velodyne, after being sued in 2019), Robosense (2014, also sued by Velodyne in 2019), Ouster (2016), Innoviz (2016), Cepton (2016), Aeva (2016), Blickfeld (2017), and many more. Wired is probably correct in pointing out that there are too many lidar companies and that they can't all survive.
Acquisition and Partnership Phase
The start-up phase was followed by the acquisition and partnership phase from 2016 to today:
GM committed to a $1B investment into Cruise in 2016. By now, Cruise has raised a total of $5.3B.
Uber bought Otto for $680M in 2016, which led to a lawsuit with Waymo, which they settled for $245M in 2018. Recently, Anthony Levandowski, the founder of Otto, pleaded guilty to stealing Google trade secrets when leaving in 2016. WIRED published an in-depth article here and another one about the church of Artificial Intelligence, also founded by Anthony in the same year.
Ford helped jump-start Argo AI with a $1B investment in 2016. In 2019 Volkswagen announced an additional investment of $2.6B, which included its Munich-based AID subsidiary. AID laid off 100 employees only days after the merger with Argo in 2020.
In October 2017, nuTonomy was acquired by Aptiv for $450M.
Intel acquired Mobileye for $15.3B in March 2017. This is the largest amount paid for any company in the field of vehicle automation so far. Mobileye was founded in Israel in 1999, has close to 2000 employees, and has been market leader in the field of camera-based driver assistance systems for many years.
In March 2019, Daimler acquired a majority stake in TORC Robotics for an undisclosed amount.
Tesla acquired DeepScale when the company ran out of funding in 2019. A couple of team members joined the Tesla Autopilot team, presumably through an acqui-hire, but it seems that most have already left again.
Most recently, Amazon acquired Zoox for $1.3B in June 2020 (down from a $3.2B valuation in the last equity round) after Zoox had already raised $955M and was supposedly running out of funding within weeks.
Augustin Friedel does a great job keeping track of partnerships in this autonomous driving network. This is challenging as some companies are very outgoing and announce supplier relationships as partnerships, whereas introverted companies build up productive partnerships that are not being announced at all. Another useful resource is the Autonomous Vehicle Ecosystem landscape found here or here.
This post covered the time from the DARPA Urban Challenge until today. Please let me know if you think essential pieces are missing or incorrect.
The next article will cover the development of legislation, standards, and taxonomy in the field of vehicle automation.
In the fifth and (presumably) last post, I will describe my thoughts on where we are today. Are we still in the acquisition and partnership phase, or is that nearing its end, and if so, what’s next?
Thanks to Ulrich Eberle for pointing out corrections and clarifications!