Human-Centered AI for Financial Decision Support: Explainability and Trust

By: Sophia Chen, Robert Davis, Laura Evans, Michael Foster

Published: 2025-12-12

#cs.AI

Abstract

This paper investigates the development of human-centered AI systems for financial decision support, emphasizing explainability and trust. It presents approaches to design AI tools that provide clear rationales for their recommendations, empowering human users in complex financial scenarios and fostering greater confidence in AI-assisted decisions for real-world financial applications.

Impact

practical

💡 Simple Explanation

Banks use AI to decide who gets a loan or is flagged for fraud. Usually, these AIs are 'black boxes'—they give an answer but don't say why. This paper proposes a new computer screen for bankers that shows not just the AI's decision, but 'why' it made it (e.g., 'income was too low') and 'what if' scenarios (e.g., 'if income was $500 higher, the loan would be approved'). Tests showed that while this takes bankers a bit longer to read, they make fewer mistakes and trust the AI more appropriately.
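To make the "why + what-if" idea concrete, here is a hypothetical toy rule (not from the paper) that returns a decision together with a plain-language reason and a counterfactual; the threshold and wording are illustrative assumptions only:

```python
APPROVAL_INCOME = 4000  # hypothetical monthly-income threshold, in dollars

def decide(income):
    """Return (approved, why, what_if) for a hand-written loan rule."""
    approved = income >= APPROVAL_INCOME
    why = f"income {'meets' if approved else 'is below'} the ${APPROVAL_INCOME} threshold"
    what_if = None if approved else (
        f"if income were ${APPROVAL_INCOME - income} higher, the loan would be approved"
    )
    return approved, why, what_if

print(decide(3500))
# -> (False, 'income is below the $4000 threshold',
#     'if income were $500 higher, the loan would be approved')
```

A real system derives the "why" and "what-if" from a trained model rather than a hand-written rule, which is what the methodology below addresses.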

🎯 Problem Statement

Financial institutions face a 'transparency gap': modern Machine Learning models offer superior predictive performance but lack interpretability. This hinders adoption due to strict regulatory requirements (like the Right to Explanation) and creates hesitation among analysts who cannot blindly trust opaque algorithms for high-stakes decisions.

🔬 Methodology

The authors employed a mixed-methods approach. First, they developed a prototype interface combining SHAP feature attributions with counterfactual explanations. Second, they conducted a controlled experiment with 50 financial professionals who performed risk-assessment tasks under two conditions: (1) AI prediction only, and (2) AI prediction plus the explainability interface. Metrics collected included decision accuracy, time-on-task, and subjective trust levels measured via surveys.
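As a rough illustration of the attribution component, the sketch below computes SHAP values for a single applicant using a synthetic stand-in model. It assumes the shap package and scikit-learn; the dataset, feature names, and model choice are assumptions for illustration, not the paper's setup.

```python
# Minimal sketch (not the authors' code): local SHAP attribution for one applicant.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_len", "num_defaults"]

# Synthetic stand-in for a loan dataset (illustrative only).
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Local explanation: which features pushed this applicant's score up or down.
explainer = shap.TreeExplainer(model)
applicant = X[:1]
shap_values = explainer.shap_values(applicant)[0]

# Rank features by the magnitude of their contribution.
for name, value in sorted(zip(feature_names, shap_values), key=lambda t: -abs(t[1])):
    print(f"{name:>20s}: {value:+.3f}")
```

In the study's interface, attributions like these would back the "why" panel, while counterfactuals back the "what-if" panel.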

📊 Results

The study found that providing explanations increased decision accuracy by 15% compared to the baseline. Crucially, it enabled 'appropriate reliance'—users were significantly better at rejecting incorrect AI predictions when shown counterfactuals that didn't make sense. However, the average time to make a decision increased by 22%, suggesting a trade-off between efficiency and safety. Subjective trust scores were higher in the XAI condition, particularly for 'local trust' (trust in specific decisions) rather than 'global trust' (trust in the system overall).

✨ Key Takeaways

  • Explainability is not just a compliance checkbox; it is a performance enhancer for human-AI teams in finance.
  • While it introduces friction (a time cost), it prevents costly errors.
  • Effective XAI in finance must be interactive (allowing 'what-if' analysis) rather than static.
  • Trust is not binary; XAI helps users know when *not* to trust the model.
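A minimal, self-contained sketch of the interactive "what-if" idea: a single-feature counterfactual search that reports how much the income feature would have to change to flip a model's decision. The model, synthetic data, and grid-search strategy are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))               # columns: income, debt_ratio (standardized)
y = (X[:, 0] - X[:, 1] > 0).astype(int)     # 1 = approve (toy labeling rule)
model = LogisticRegression().fit(X, y)

def income_counterfactual(model, applicant, income_idx=0, step=0.05, max_steps=200):
    """Return the smallest tested income increase that flips the decision, else None."""
    base = model.predict(applicant.reshape(1, -1))[0]
    x = applicant.copy()
    for i in range(1, max_steps + 1):
        x[income_idx] = applicant[income_idx] + i * step
        if model.predict(x.reshape(1, -1))[0] != base:
            return i * step
    return None

rejected = X[model.predict(X) == 0][0]      # pick one rejected applicant
delta = income_counterfactual(model, rejected)
if delta is not None:
    print(f"What-if: the decision flips if income rises by {delta:.2f} (standardized units).")
else:
    print("Raising income alone does not flip this decision within the searched range.")
```

A counterfactual like this is exactly the kind of explanation the study found users could sanity-check: if the suggested change "doesn't make sense", that is a cue to reject the AI's prediction.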

🔍 Critical Analysis

The paper provides a solid, pragmatic contribution to the field of XAI by moving beyond algorithmic novelty to human-centric evaluation. Its strength lies in the realistic user study which highlights the cost of explainability (time) vs the benefit (trust/accuracy). However, it falls short in addressing the scalability of this approach for real-time, high-frequency trading scenarios where human-in-the-loop is impossible. Additionally, the definition of 'trust' relies heavily on subjective reporting rather than long-term behavioral consistency.

💰 Practical Applications

  • SaaS plugin for major banking software (Temenos, Oracle Flexcube)
  • Consulting service for AI model governance and auditability
  • Training certification for 'AI-Assisted Financial Analysis'

🏷️ Tags

#XAI, #Fintech, #HCI, #Machine Learning, #Trust Calibration, #Decision Support, #Regulation

🏢 Relevant Industries

Fintech, Banking, Insurance, Regulatory Technology (RegTech)