International Journal of Science and Research (IJSR)

ISSN: 2319-7064


Analysis Study Research Paper | Neural Networks | India | Volume 13 Issue 9, September 2024


Theoretical Analysis and Review of Adversarial Robustness in Deep Learning

Karthick Kumaran Ayyalluseshagiri Viswanathan


Abstract: Deep learning and neural networks are widely used in many recognition tasks, including safety-critical applications such as self-driving cars, medical image analysis, and robotics, and have shown significant potential in various computer vision applications. The performance and accuracy of deep learning models are critical in safety-critical systems. Recent research has shown that deep neural networks are vulnerable to adversarial attacks. This paper discusses adversarial examples, analyzes how adversarial noise can degrade the performance and accuracy of deep learning models, and reviews potential mitigation strategies and the uncertainties in deep learning models.
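
The adversarial noise mentioned in the abstract is commonly generated with gradient-based attacks such as the Fast Gradient Sign Method (FGSM). The sketch below is a minimal, hypothetical illustration in PyTorch, not the paper's own method; the model, inputs, and the perturbation budget epsilon are illustrative assumptions.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Craft an adversarial example by stepping along the sign of the
    # input gradient, which increases the classification loss.
    # model, image, label, and epsilon are hypothetical placeholders.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return adversarial.clamp(0.0, 1.0).detach()

Even a small epsilon, often imperceptible to humans, can flip a classifier's prediction, which is why such perturbations matter in safety-critical systems.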


Keywords: Deep learning, Adversarial Example, Adversarial Attack, Uncertainty, Bayesian Inference


Edition: Volume 13 Issue 9, September 2024


Pages: 1441 - 1443


