India | Neural Networks | Volume 13 Issue 9, September 2024 | Pages: 1441 - 1443
Theoretical Analysis and Review of Adversarial Robustness in Deep Learning
Abstract: Deep learning and neural networks are widely used in many recognition tasks, including safety-critical applications such as self-driving cars, medical image analysis, and robotics, and have shown significant potential across computer vision. The performance and accuracy of deep learning models are especially important in safety-critical systems. Recently, researchers have shown that deep neural networks are vulnerable to adversarial attacks. This paper discusses adversarial examples, analyzes how adversarial noise can degrade the performance and accuracy of deep learning models, and reviews potential mitigation strategies and the uncertainties inherent in deep learning models.
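As an illustration of the adversarial examples the paper surveys, the following is a minimal sketch of one well-known attack, the fast gradient sign method (FGSM), applied to a simple logistic classifier. The model, weights, and inputs here are purely illustrative assumptions, not taken from the paper; they show how a perturbation bounded by a small epsilon can still reduce a model's confidence in the true class.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method (FGSM) against a logistic classifier.

    For sigmoid output with binary cross-entropy loss, the gradient of
    the loss with respect to input feature x_i is (p - y) * w_i.  FGSM
    moves every feature by eps in the sign of that gradient, which
    maximally increases the loss under an L-infinity budget of eps.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for wi, xi in zip(w, x)]

# Illustrative weights and a clean input with true label 1 (assumed values)
w, b = [2.0, -1.0, 0.5], 0.1
x, y = [0.8, 0.2, 0.6], 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
# p_adv < p_clean: the bounded perturbation lowers confidence in class 1.
```

Although this toy model is linear, the same gradient-sign construction transfers to deep networks, which is the vulnerability the paper reviews.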
Keywords: Deep learning, Adversarial Example, Adversarial Attack, Uncertainty, Bayesian Inference