Research Paper | Computer Science & Engineering | India | Volume 13 Issue 4, April 2024
Robustness Testing for AI/ML Models: Strategies for Identifying and Mitigating Vulnerabilities
Praveen Kumar, Shailendra Bade
Abstract: Artificial Intelligence (AI) and Machine Learning (ML) models have become increasingly prevalent across domains, from healthcare and finance to autonomous systems and cybersecurity. However, the growing reliance on these models has also raised concerns about their robustness and resilience against adversarial attacks, data perturbations, and model failures. Robustness testing plays a critical role in evaluating the ability of AI/ML models to maintain their performance and integrity under challenging conditions. This paper explores the importance of robustness testing in the AI/ML development lifecycle and presents strategies for identifying and mitigating vulnerabilities. We discuss various types of robustness tests, including adversarial attacks, input perturbations, and model-level tests, and provide a framework for integrating these tests into the AI/ML testing process. We also highlight the challenges and considerations in designing effective robustness tests and discuss emerging techniques and tools for enhancing the resilience of AI/ML models. The paper concludes with recommendations for organizations to adopt a comprehensive robustness testing approach to ensure the reliability, security, and trustworthiness of their AI/ML systems.
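As a minimal sketch of the input-perturbation testing the abstract describes, the snippet below measures how often a model's prediction stays stable when its inputs are perturbed with small random noise. The `toy_model` classifier, the dataset, and the `perturbation_robustness` helper are all hypothetical stand-ins, not the paper's framework; the same idea applies to any trained model exposed as a prediction function.

```python
import random

# Hypothetical toy classifier: predicts class 1 if the feature sum exceeds a
# threshold. Stands in for any trained AI/ML model under test.
def toy_model(features, threshold=1.0):
    return 1 if sum(features) > threshold else 0

def perturbation_robustness(model, dataset, noise_scale, trials=100, seed=0):
    """Fraction of predictions that remain unchanged when each input is
    perturbed `trials` times with uniform noise in [-noise_scale, noise_scale].
    A score near 1.0 indicates the model is stable at that noise level."""
    rng = random.Random(seed)
    stable = 0
    total = 0
    for features, _label in dataset:
        clean_pred = model(features)  # prediction on the unperturbed input
        for _ in range(trials):
            noisy = [x + rng.uniform(-noise_scale, noise_scale) for x in features]
            stable += int(model(noisy) == clean_pred)
            total += 1
    return stable / total

# Illustrative dataset of (features, label) pairs.
data = [([0.2, 0.3], 0), ([1.5, 0.9], 1), ([0.6, 0.7], 1), ([0.1, 0.1], 0)]

print(perturbation_robustness(toy_model, data, noise_scale=0.05))
print(perturbation_robustness(toy_model, data, noise_scale=0.5))
```

Running the test at increasing noise scales shows where stability degrades: predictions for points near the decision boundary flip first, which is exactly the kind of vulnerability a robustness test suite is meant to surface.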
Keywords: Artificial Intelligence, AI, Machine Learning
Edition: Volume 13 Issue 4, April 2024
Pages: 923 - 930
DOI: https://www.doi.org/10.21275/SR24409085438