SCHEDULE

Agenda

June 19th

UTC−5 (Nashville) | UTC+2 (Berlin)

08:00 am | 15:00 Welcome Remarks: Timo Sämann

08:10 | 15:10 Invited Talk: Patrick Perez

08:40 | 15:40 Invited Talk: Bernt Schiele

09:10 | 16:10 Long Orals IDs 1-3 (15 min. each)

09:55 | 16:55 Coffee Break

10:10 | 17:10 Invited Talk: Eric Hilgendorf

10:40 | 17:40 Invited Talk: Zico Kolter

11:10 | 18:10 Long Orals IDs 4-5 (15 min. each)

11:40 | 18:40 Lunch Break

12:40 pm | 19:40 Short Orals IDs 6-15 (4 min. each)

01:30 | 20:30 Q&A for Short Orals in Breakout Sessions

01:50 | 20:50 Invited Talk: Been Kim

02:20 | 21:20 Best Paper Award

02:50 | 21:50 Coffee Break

03:05 | 22:05 Panel Discussion

04:05 | 23:05 Closing

Abstracts of the Keynote Talks

Patrick Perez

Paying attention to pedestrians

Safety is the priority for autonomous vehicles. In particular, the driving stack, from perception to action, must handle vulnerable road users (VRUs) as well as possible. Focusing on pedestrians, I shall present some of our work at valeo.ai on detecting and analysing them. This work includes GAN-based augmentation of real scenes with controlled synthetic humans for improved detection, the recognition of numerous attributes, from age to attention, with extreme multi-task learning, and physically-aware monocular 3D pose estimation. I shall also discuss some of the challenges that currently impede progress on VRU understanding.


Bernt Schiele

Robustness and Interpretability of Deep Learning Methods in Computer Vision

Computer vision has been revolutionized by machine learning, and in particular by deep learning. End-to-end trainable models often achieve top performance across a wide range of computer vision tasks and settings, including automated driving. While recent progress is remarkable, current deep learning models lack inherent interpretability and robustness. In this talk I will discuss two lines of our work addressing these pressing issues. First, I will discuss work that aims to understand the use of context information in autonomous driving scenarios such as semantic scene segmentation and object detection; to address the associated robustness issues, we propose several methods to both quantify and overcome problems arising from the use of context information. Second, I will discuss Convolutional Dynamic Alignment Networks, which are performant image classifiers with a high degree of inherent interpretability. These novel networks perform classification through a series of input-dependent linear transformations, which yield explanations that outperform existing attribution methods both quantitatively and qualitatively.
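
To make the idea of input-dependent linear transformations concrete, here is a minimal PyTorch sketch; it is not the authors' CoDA-Net architecture, and the name DynamicLinear and the single-hypernetwork design are invented for this illustration. The point is that the output is exactly W(x) x, so W(x) can be read off as a faithful linear attribution without post-hoc approximation.

import torch
import torch.nn as nn

class DynamicLinear(nn.Module):
    # Toy input-dependent linear layer (hypothetical sketch, not CoDA-Net itself):
    # the weight applied to x is computed from x, so y = W(x) @ x holds exactly.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # small "hypernetwork" producing one weight matrix per input sample
        self.weight_gen = nn.Linear(in_dim, out_dim * in_dim)

    def forward(self, x):  # x: (batch, in_dim)
        W = self.weight_gen(x).view(-1, self.out_dim, self.in_dim)
        return torch.bmm(W, x.unsqueeze(-1)).squeeze(-1)  # y = W(x) @ x

layer = DynamicLinear(4, 2)
y = layer(torch.randn(3, 4))  # per-sample W(x) is an exact linear explanation of y

Because the mapping is exactly linear in x for each fixed input, the per-sample matrix W(x) itself serves as the attribution; this built-in linearity is the property behind the inherent interpretability described in the talk.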


Been Kim

Interpretability with a skeptical and user-centric mind

Interpretable machine learning has been a popular topic of study in the era of machine learning. But are we making progress? Are we heading in the right direction? In this talk, I start with a skeptically minded look back at the field's past, before moving on to recent developments in more user-focused methods. The talk will finish with where we might be heading and a number of open questions we should think about.


Eric Hilgendorf

Germany is currently in the process of reforming its road traffic law for autonomous driving. For the first time, general regulations are to be created for Level 4 vehicles. In this context, numerous problems that have so far been addressed only in ethics are also being tackled. The presentation will provide an overview of the proposed legislation and its main innovations.


Zico Kolter

Incorporating robust control guarantees within (deep) reinforcement learning

Reinforcement learning methods have produced breakthrough results in recent years, but their application to safety-critical systems has been substantially limited by their lack of guarantees, such as those provided by modern robust control techniques. In this talk, I will discuss a technique we have recently developed that embeds robustness guarantees inside arbitrary RL policy classes. Using this approach, we can build deep RL methods that attain many of the performance advantages of modern deep RL (namely, superior performance in "average case" scenarios) while still maintaining robustness in worst-case adversarial settings. I will present experimental results on several simple control systems that highlight the benefits of the method, as well as on a larger-scale smart grid setting, and I will end by discussing future directions for this line of work.
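
As a rough illustration of how a guarantee can live inside a policy class, here is a minimal NumPy sketch under strong assumptions; it is not the method from the talk. Given a precomputed robust feedback gain K (e.g. from an LQR or H-infinity design), the learned action is projected into a bounded neighborhood of the robust controller's action, so the baseline's worst-case behavior is retained up to the allowed deviation. The names safe_action, u_rl, and max_dev are invented for this sketch.

import numpy as np

def safe_action(u_rl, x, K, max_dev=0.5):
    # Project the RL policy's proposed action onto a box around the action of a
    # robust baseline controller u = -K x (hypothetical illustration only).
    u_robust = -K @ x
    return np.clip(u_rl, u_robust - max_dev, u_robust + max_dev)

K = np.array([[1.0, 0.5]])   # robust gain, assumed given by a robust control design
x = np.array([0.2, -0.1])    # current system state
u = safe_action(np.array([2.0]), x, K)  # action actually applied to the system

If such a constraint is implemented as a differentiable projection, the constrained policy can still be trained end-to-end with standard deep RL algorithms; the sketch above only conveys the general "policy output constrained by a robust controller" structure.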