International Journal of Science and Research (IJSR)


ISSN: 2319-7064



Research Paper | Computer Science & Engineering | India | Volume 13 Issue 4, April 2024


Robustness Testing for AI/ML Models: Strategies for Identifying and Mitigating Vulnerabilities

Praveen Kumar | Shailendra Bade


Abstract: Artificial Intelligence (AI) and Machine Learning (ML) models have become increasingly prevalent in various domains, from healthcare and finance to autonomous systems and cybersecurity. However, the growing reliance on these models has also raised concerns about their robustness and resilience against adversarial attacks, data perturbations, and model failures. Robustness testing plays a critical role in evaluating the ability of AI/ML models to maintain their performance and integrity under challenging conditions. This paper explores the importance of robustness testing in the AI/ML development lifecycle and presents strategies for identifying and mitigating vulnerabilities. We discuss various types of robustness tests, including adversarial attacks, input perturbations, and model-level tests, and provide a framework for integrating these tests into the AI/ML testing process. We also highlight the challenges and considerations in designing effective robustness tests and discuss emerging techniques and tools for enhancing the resilience of AI/ML models. The paper concludes with recommendations for organizations to adopt a comprehensive robustness testing approach to ensure the reliability, security, and trustworthiness of their AI/ML systems.
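
As an illustration of the input-perturbation style of test mentioned in the abstract, the sketch below (not drawn from the paper itself; the toy model, noise scales, and pass threshold are hypothetical choices) measures how stable a classifier's predictions remain as Gaussian noise of increasing magnitude is added to its inputs.

# Minimal sketch of an input-perturbation robustness test (illustrative only;
# the toy model, noise scales, and 90% agreement threshold are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(x):
    # Stand-in for a trained model: labels points by which side of a
    # fixed hyperplane they fall on.
    w = np.array([1.0, -2.0, 0.5, 1.5])
    return (x @ w > 0).astype(int)

# Synthetic "clean" evaluation set, labeled by the unperturbed model.
X_clean = rng.normal(size=(1000, 4))
y_clean = toy_classifier(X_clean)

# Sweep increasing noise levels and measure how often predictions flip.
for sigma in [0.0, 0.1, 0.5, 1.0]:
    X_noisy = X_clean + rng.normal(scale=sigma, size=X_clean.shape)
    agreement = np.mean(toy_classifier(X_noisy) == y_clean)
    status = "PASS" if agreement >= 0.90 else "FAIL"  # hypothetical threshold
    print(f"sigma={sigma:.1f}  prediction agreement={agreement:.3f}  {status}")

A real robustness suite would replace the toy classifier with the model under test and typically add adversarial (gradient-based) perturbations alongside random noise, as the paper's framework describes.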


Keywords: Artificial Intelligence, AI, Machine Learning


Edition: Volume 13 Issue 4, April 2024


Pages: 923 - 930

