ABSTRACT

Advances in autonomous vehicle systems (AVS) have been enabled by the remarkable real-world performance of deep neural networks (DNNs), which now form a core component of the AVS decision-making pipeline. Despite this success, DNNs are brittle and are known to misbehave in the presence of both artificial and natural adversarial noise. Natural phenomena such as fog and rain have been shown to degrade DNN performance adversarially. Artificial adversarial attacks on DNNs range from imperceptible input perturbations that produce vastly different outputs to physical patch attacks, in which an attacker places a small patch on a traffic sign to make the AVS interpret it as an entirely different sign. Adversarial attacks on DNNs are pervasive, and no architecture is immune to them. This lack of robustness is especially concerning in safety-critical applications such as AVS, where minor deviations from expected behavior can have severe consequences. Various empirical and certified defense methods have been developed to guard against these attacks. This chapter surveys strategies for generating natural and artificial adversarial attacks, as well as techniques for mitigating them.
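To make the imperceptible-perturbation attacks concrete, the minimal PyTorch sketch below implements the well-known Fast Gradient Sign Method (FGSM) of Goodfellow et al.; the `model`, `x`, `y`, and `epsilon` names and the epsilon value are illustrative assumptions, not details taken from this chapter.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft an FGSM adversarial example: shift each pixel of `x`
    by +/- epsilon along the sign of the loss gradient, producing
    an input visually near-identical to `x` that can flip the
    model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # loss w.r.t. the true labels y
    loss.backward()                        # gradient of the loss lands in x.grad
    x_adv = x + epsilon * x.grad.sign()    # one step that increases the loss
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid image range [0, 1]

# Usage (hypothetical classifier and normalized image batch):
# x_adv = fgsm_attack(classifier, images, labels)
```

Even at a small budget such as 8/255 per pixel, perturbations of this kind are typically invisible to a human observer yet can change the predicted class, which is precisely the brittleness this chapter examines.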