
Seminar: Investigating Vulnerabilities in Autonomous Vehicle Perception Algorithms
April 16 @ 1:00 pm - 2:00 pm
Autonomous vehicles (AVs) rely on deep neural networks (DNNs) for critical tasks such as environment perception (identifying traffic signs, pedestrians, and lane markings) and control decisions such as braking, accelerating, and changing lanes. However, DNNs are vulnerable to adversarial attacks, including structured perturbations of inputs at inference time and misleading training samples that poison the learned model, either of which can severely degrade performance. This presentation begins with an overview of adversarial training, emphasizing how input dimensionality affects DNNs' vulnerability to such attacks. I will then share our recent findings exploring the hypothesis that DNNs learn piecewise linear relationships between inputs and outputs, a conjecture that is central to developing both adversarial attacks and defense strategies in machine learning security. The final part of the presentation covers recent work on using error-correcting codes to safeguard DNN-based classifiers.
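
To make the first kind of attack concrete, the sketch below applies a standard gradient-sign perturbation (the fast gradient sign method) to a single input. The model, tensors, and epsilon budget here are hypothetical placeholders for illustration, not the specific attacks studied in the talk.

```python
# Illustrative sketch of a gradient-sign adversarial perturbation (FGSM-style).
# `model`, `image`, `label`, and `epsilon` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module,
                 image: torch.Tensor,
                 label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # bounded elementwise by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    # Assumes pixel values normalized to [0, 1].
    return perturbed.clamp(0.0, 1.0).detach()
```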
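
The piecewise-linearity hypothesis has a well-known exact form for ReLU networks: within any region of input space where the set of active ReLU units is fixed, the network computes a single affine map. The minimal NumPy sketch below, using random placeholder weights, verifies this for a tiny two-layer network.

```python
# Within a fixed ReLU activation pattern, the network is exactly affine.
# Weights here are random placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((1, 8)), rng.standard_normal(1)

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x = rng.standard_normal(2)
pattern = (W1 @ x + b1) > 0  # which ReLU units are active at x

# Effective affine map on this activation region: f(x) = A x + c
A = W2 @ (W1 * pattern[:, None])
c = W2 @ (b1 * pattern) + b2

# A perturbation small enough to keep the same activation pattern
# stays on the same affine piece.
x2 = x + 1e-6 * rng.standard_normal(2)
assert (((W1 @ x2 + b1) > 0) == pattern).all()
assert np.allclose(net(x2), A @ x2 + c)
```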
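
As background on the final topic, an error-correcting output code (ECOC) classifier assigns each class a binary codeword and decodes a vector of predicted bits to the nearest codeword, so a few flipped bits (for example, under attack) can still be corrected. The codebook below is an illustrative assumption, not the coding scheme used in the speaker's work.

```python
# Hedged sketch of ECOC decoding: each class is a row (codeword) of the
# codebook; a prediction is decoded to the nearest codeword in Hamming
# distance. The codebook is an illustrative placeholder.
import numpy as np

codebook = np.array([[0, 0, 1, 1, 0],   # class 0
                     [1, 0, 0, 1, 1],   # class 1
                     [0, 1, 0, 0, 1]])  # class 2

def decode(bit_probs: np.ndarray) -> int:
    """Map per-bit predicted probabilities to the nearest class codeword."""
    bits = (bit_probs > 0.5).astype(int)
    hamming = (codebook != bits).sum(axis=1)
    # Residual bit errors are corrected as long as the true codeword
    # remains the closest one.
    return int(hamming.argmin())
```

A generic version of this scheme is available in scikit-learn as sklearn.multiclass.OutputCodeClassifier.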