United States | Information Technology | Volume 10 Issue 9, September 2021 | Pages: 1819-1825
Adaptive XAI Narratives for Dynamic Fraud Detection: Keeping AI Explanations Clear as Models Evolve
Abstract: AI models used for fraud detection are constantly updated to tackle new threats, but their explanation methods often stay static, leading to outdated or misleading interpretations. This research explores how adaptive explainable AI (XAI) can generate real-time, accurate explanations that evolve alongside the models they describe. We introduce a framework for self-updating narrative generation, combining retrieval-augmented generation (RAG) and meta-learning to ensure explanations stay aligned with the latest model behavior and emerging fraud patterns. Testing on real-world transaction data, we compare adaptive narratives against traditional static explanations, measuring robustness, response time, and user understanding. Our results show that adaptive XAI not only preserves transparency in fast-changing fraud environments but also builds stronger trust among users, auditors, and regulators. This work offers a practical solution for real-time interpretability in AI-driven fraud detection, a critical need for deployable, trustworthy systems.
Keywords: Explainable AI (XAI), Fraud Detection, Dynamic Model Interpretability, Adaptive Explanations, Real-Time Decision Making, Retrieval-Augmented Generation (RAG), AI Transparency, Financial Cybersecurity, Robust Machine Learning, Regulatory Compliance
How to Cite: Rajani Kumari Vaddepalli, "Adaptive XAI Narratives for Dynamic Fraud Detection: Keeping AI Explanations Clear as Models Evolve", Volume 10 Issue 9, September 2021, International Journal of Science and Research (IJSR), Pages: 1819-1825, https://www.ijsr.net/getabstract.php?paperid=SR21923114959, DOI: https://dx.doi.org/10.21275/SR21923114959