Autonomous Driving – Overcoming Level 3 Challenges

The automotive industry is racing to deliver highly anticipated autonomous vehicles that won’t require any human intervention.

On the journey to a fully autonomous vehicle, companies have been exploring many different paths to SAE Level 5 (L5); however, they are running into challenges with Level 3 (L3) autonomous driving due to the complexities of combining automation technology with human involvement. In a featured article for Automotive World, Jean-Paul de Vooght, Senior Director of Client Solutions, explores the challenges associated with Level 3 autonomous driving and how to overcome them.

What is L3 Autonomous Driving?

Developing an L3 vehicle requires sophisticated hardware, software, algorithms, and an immense amount of data. All of this needs to work together seamlessly so that the electronic control units (ECUs) can make decisions and execute them in the instant it takes for the vehicle ahead to slam on its brakes or for a child to step into the street.
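
As an illustration only (the stage names and the 50 ms budget below are assumptions, not details from the article), the sense-decide-act cycle an ECU must complete within a hard latency budget can be sketched like this:

```python
import time

CYCLE_BUDGET_S = 0.050  # assumed 50 ms budget per sense-decide-act cycle


def run_cycle(perceive, decide, actuate):
    """Run one sense-decide-act cycle and flag a deadline miss.

    perceive/decide/actuate are hypothetical stage functions standing in
    for the sensor fusion, planning, and actuation software on the ECU.
    """
    start = time.monotonic()
    world_model = perceive()        # fuse camera/radar/lidar into a scene
    command = decide(world_model)   # plan steering/braking for this scene
    actuate(command)                # hand the command to the actuators
    elapsed = time.monotonic() - start
    if elapsed > CYCLE_BUDGET_S:
        # A missed deadline is itself a fault the safety architecture must handle.
        raise TimeoutError(f"cycle overran budget: {elapsed * 1000:.1f} ms")
```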

Advanced Driver Assistance System

The automotive industry is actively working on fail-operational safety architectures for safety-critical systems such as the powertrain and the Advanced Driver Assistance System (ADAS) and Autonomous Driving (AD) processing chain – from sensors to perception and decision algorithms. This work has led to architectures incorporating hardware and process redundancy, real-time fault detection, masking, and advanced reconfiguration to sustain normal operations after a fault.
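
As a hedged sketch of how redundancy, fault detection, and masking can fit together (the channel functions, tolerance, and fallback below are assumptions, not the architecture described in the article):

```python
def fail_operational_brake_request(primary_channel, secondary_channel,
                                   tolerance=0.05):
    """Illustrative fault detection and masking over two redundant channels.

    primary_channel / secondary_channel are hypothetical callables that each
    return a braking demand in [0, 1]; the names and tolerance are assumptions.
    """
    try:
        a = primary_channel()
        b = secondary_channel()
    except Exception:
        # A channel failure is detected: reconfigure to a minimal-risk
        # fallback rather than shutting the function down.
        return min_risk_fallback()

    if abs(a - b) <= tolerance:
        return (a + b) / 2          # channels agree: mask noise by averaging
    # Channels disagree: mask the fault conservatively (stronger braking wins)
    # so the system can keep operating while diagnostics run.
    return max(a, b)


def min_risk_fallback():
    """Hypothetical degraded-mode braking demand, e.g. a controlled slowdown."""
    return 0.3
```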

Domain-Centric and Legacy Heavy

The automotive industry has long recognized that traditional electrical/electronic (E/E) architecture is too domain-centric and legacy-heavy. This has likely delayed original timelines to achieve SAE L4 autonomy and is compounded by the electrification of the powertrain, which introduces additional safety concerns. The significantly higher compute capacity implied by Automotive Safety Integrity Level D (ASIL-D) systems for autonomous driving has led to the development of more efficient multicore ECUs, hopefully reducing the ECU count per domain and thus overall complexity.

The Irony of Automation

L3 autonomous driving poses specific safety risks that are inevitable when a system relies on both automation and human supervision. The “irony of automation” refers to the fact that an automated system requiring supervision can retain the attention of a human operator, while a system with greater autonomy will eventually lose the operator’s attention, making re-entry into the supervision loop much harder.
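
To make the re-entry problem concrete, here is a hypothetical takeover-request escalation (the function names, warning stages, and timings are assumptions for illustration, not from the article):

```python
import time


def request_takeover(driver_attentive, engage_fallback,
                     warn_stages=("chime", "seat vibration", "audio warning"),
                     stage_timeout_s=4.0):
    """Escalate warnings until the driver re-enters the loop or time runs out.

    driver_attentive: hypothetical callable wrapping a driver-monitoring camera.
    engage_fallback:  hypothetical callable that starts a minimal-risk manoeuvre.
    """
    for stage in warn_stages:
        print(f"takeover request: {stage}")
        deadline = time.monotonic() + stage_timeout_s
        while time.monotonic() < deadline:
            if driver_attentive():
                return "driver back in the loop"
            time.sleep(0.1)
    # The driver never re-entered the supervision loop: the L3 system itself
    # must bring the vehicle to a minimal-risk condition.
    engage_fallback()
    return "fallback engaged"
```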

Machine Learning for Autonomous Driving

Developing machine learning (ML) algorithms that can meet the collision-avoidance safety goals of the ADAS/AD processing chain will require continuous improvement. With clever data engineering and data science, automotive companies can deliver L3 autonomous driving while laying the groundwork for higher levels of autonomy. Doing so will require constant innovation, well-matched combinations of sensors, and transparency with consumers about the vehicle’s capability to process data and the need for the driver to remain vigilant and prepared to jump back into the supervision loop.
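
As one hedged illustration of pairing a learned planner with a simpler deterministic safeguard (the thresholds and function names below are assumptions, not the article’s design), a toy time-to-collision check might look like this:

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Toy time-to-collision: seconds until the gap closes at the current rate."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing: no collision on the current course
    return gap_m / closing_speed_mps


def needs_emergency_brake(gap_m, closing_speed_mps, ttc_threshold_s=1.5):
    """Override a learned planner when the deterministic check says 'too close'.

    The 1.5 s threshold is an assumption for illustration, not a safety claim.
    """
    return time_to_collision(gap_m, closing_speed_mps) < ttc_threshold_s


# Example: a 20 m gap closing at 15 m/s gives a TTC of about 1.33 s -> brake.
print(needs_emergency_brake(20.0, 15.0))  # True
```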

To learn more, read the full article here.