A Compositional Simulation Framework for Testing Adversarial Robustness of Deep Neural Networks
Conference: DVCon Europe 2023 - Design and Verification Conference and Exhibition Europe
November 14–15, 2023, Munich, Germany
Proceedings: DVCon Europe 2023
Pages: 8
Language: English
Type: PDF
Authors:
Maher Nader, Youssef; Lotfy Hatab, Mostafa; Ghaleb, Mazen Mostafa; Bakr, Safia Medhat; Awaad, Tasneem A. (Siemens EDA - Cairo, Egypt & The Department of Computer and Systems Engineering, Faculty of Engineering, Ain Shams University, Cairo, Egypt)
AlGanzouri, Ahmed; Abdelsalam, Mohamed (Siemens EDA - Cairo, Egypt)
Watheq El-Kharashi, M. (The Department of Computer and Systems Engineering, Faculty of Engineering, Ain Shams University, Cairo, Egypt & Department of Electrical and Computer Engineering, Faculty of Engineering & Computer Science, University of Victoria, Canada)
Abstract:
Deep neural networks (DNNs) have reached impressive performance in computer vision, making them a natural choice for object detection in automated driving. However, DNNs used for object detection are known to be highly vulnerable to adversarial attacks: even small changes to the input, such as adding a customized noise pattern that remains invisible to the human eye, can trigger silent prediction errors. In this study, we present a compositional simulation framework for testing the adversarial robustness of DNNs used in object detection. We demonstrate our framework with a comprehensive case study of a speed-sign detector model under two different adversarial attacks.
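The abstract does not name the two attacks used in the case study. As a minimal illustrative sketch of the kind of perturbation described (a gradient-aligned noise pattern imperceptible to the human eye), the Fast Gradient Sign Method (FGSM) in PyTorch is shown below. The names `model`, `image`, `label`, and `epsilon` are hypothetical placeholders, and the model is assumed to be a classifier returning logits over inputs normalized to [0, 1]; this is not necessarily one of the attacks evaluated in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Sketch of FGSM: add a small, loss-maximizing noise pattern
    (epsilon bounds its per-pixel magnitude) that can silently
    flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then
    # clamp back to the valid input range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

With epsilon around 0.03 on [0, 1]-scaled images, the perturbation is typically invisible to a human observer yet often sufficient to change a classifier's output, which is the failure mode the framework is designed to expose.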