ISSN (Online): 2321-3418
Engineering and Computer Science
Open Access

Explainable Artificial Intelligence (XAI) Models for Transparent and Accountable Fraud Detection in Banking Ecosystems

DOI: 10.18535/ijsrm/v13i08.ec03 · Pages: 2493–2513 · Vol. 13, No. 08 (2025) · Published: August 6, 2025

Abstract

Digital fraud now matches the technological sophistication of the contemporary age, and every financial institution therefore needs to innovate in its fraud detection systems, which has driven the adoption of Artificial Intelligence (AI). Despite successive generations of high-performing AI systems, including deep learning models and high-performing ensemble classifiers, why is it still hard to accept such systems in banking and finance? The reason is that most of them remain 'black boxes', i.e., their decisions cannot be verified, which undermines their deployment. This paper discusses the integration of Explainable Artificial Intelligence (XAI) into fraud detection networks, along with its potential benefits and challenges. Using realistic datasets such as the 'Fraud Detection in Banking Ecosystems', 'Resource Usage Partitioning(D) Technologies', and 'Identity Theft Protection Scams' datasets, we review some of the leading XAI methods, including SHapley Additive exPlanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), Partial Dependence Plots (PDP), and counterfactual reasoning. The proposed approach incorporates XAI layers into a new hybrid boosting ensemble that combines CatBoost, XGBoost, and LightGBM. Such models, it has been found, achieve better than 99 percent performance in fraud detection while still providing sound explanations for each of their predictions. Additionally, this research surveys recent international governance structures for algorithmic decision-making, including, but not limited to, the EU regulation on Artificial Intelligence, U.S. algorithmic oversight mechanisms, and the GDPR 'right to explanation', and highlights the importance of XAI in cross-border operations. The results reveal a clear pattern of implementation for fraud detection, in which explainable systems are integrated throughout the banking ecosystem without compromising efficiency.
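The core idea behind the SHAP explanations named above can be illustrated with a minimal, self-contained sketch. The paper's pipeline applies SHAP to a CatBoost/XGBoost/LightGBM ensemble (in practice via the `shap` library's tree explainers); here, as an assumption for illustration only, a toy linear fraud scorer and a small random background sample stand in for the real model and data, so the exact Shapley coalition enumeration stays tractable. All names, weights, and values below are hypothetical.

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
n_features = 4
weights = np.array([2.0, -1.0, 0.5, 0.0])       # toy linear fraud scorer (stand-in)
background = rng.normal(size=(50, n_features))  # sample of "typical" transactions
x = np.array([1.5, -0.5, 2.0, 1.0])             # transaction to explain

def value(coalition):
    """Expected score when only the features in `coalition` are fixed to x."""
    data = background.copy()
    idx = list(coalition)
    if idx:
        data[:, idx] = x[idx]
    return float(np.mean(data @ weights))

# Exact Shapley values: average each feature's marginal contribution
# over all coalitions of the remaining features.
phi = np.zeros(n_features)
for i in range(n_features):
    others = [j for j in range(n_features) if j != i]
    for r in range(len(others) + 1):
        for S in itertools.combinations(others, r):
            w = (math.factorial(len(S)) * math.factorial(n_features - len(S) - 1)
                 / math.factorial(n_features))
            phi[i] += w * (value(S + (i,)) - value(S))

# Efficiency property: contributions sum to (prediction - baseline score).
assert np.isclose(phi.sum(), value(tuple(range(n_features))) - value(()))
print(np.round(phi, 3))
```

The efficiency check at the end is what makes SHAP attractive for accountable fraud decisions: every flagged transaction's score decomposes exactly into per-feature contributions that an analyst or regulator can audit. Production systems replace the exponential enumeration with TreeSHAP, which computes the same quantities in polynomial time for tree ensembles.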

Keywords

Explainable Artificial Intelligence (XAI), Fraud Detection, Banking Ecosystems, SHAP and LIME, Regulatory Compliance, Ensemble Learning, Transparency in AI, Financial Technology (FinTech), Interpretability, Global AI Regulation

Author details
Sreenivasarao Amirineni
University of Madras
Corresponding Author