1st (2019) & 2nd (2020) Edition of

SAIAD

Conventional analytical methods for realizing highly automated driving reach their limits in complex traffic situations. The switch to artificial intelligence is the logical consequence. The rise of deep learning methods is seen as a breakthrough in the field of artificial intelligence. A disadvantage of these methods is their opaque functioning: they resemble black-box solutions. This aspect is largely neglected in current research, which aims purely at increasing performance. The use of black-box solutions poses an enormous risk in safety-critical applications such as highly automated driving. The development and evaluation of mechanisms that guarantee safe artificial intelligence are required. The aim of this workshop is to raise awareness of this topic within the active research community. The focus is on mechanisms that influence deep learning models for computer vision during the training, testing, and inference phases.

Automotive safety is one of the core topics in the development and integration of new automotive functions. Automotive safety standards were established decades ago and describe requirements and processes that ensure safety goals are met. However, artificial intelligence (AI), a core component of automated driving functions, is not considered in sufficient depth in existing standards. It is obvious that these standards need to be extended as a prerequisite for developing safe AI-based automated driving functions, and this is a challenge due to the seemingly opaque nature of AI methods. In this workshop, we raise safety-related questions and aspects that arise during the five phases of the DNN development process. Our focus is on supervised deep learning models for perception.