Augmenting Intelligence: A Hybrid Framework for Scalable and Stable Explanations
By: Lawrence Krukrubo, Julius Odede, Olawande Olusegun
Published: 2025-12-22
Abstract
Current Explainable AI (XAI) approaches face a "Scalability-Stability Dilemma": post-hoc methods (e.g., LIME, SHAP) scale easily but are unstable, while supervised frameworks (e.g., TED) offer stability but demand prohibitive human labeling. This paper introduces a Hybrid LRR-TED framework that addresses this dilemma by exploiting a novel "Asymmetry of Discovery." Applied to customer churn prediction, automated rule learners (GLRM) excel at identifying "Safety Nets" (retention patterns) but struggle with "Risk Traps" (churn triggers), a phenomenon termed the Anna Karenina Principle of Churn. By initializing the explanation matrix with automated safety rules and augmenting it with a Pareto-optimal set of four human-defined risk rules, the hybrid approach achieves 94.00% predictive accuracy. This outperforms an 8-rule manual expert baseline while reducing human annotation effort by 50%, shifting experts from "Rule Writers" to "Exception Handlers" in Human-in-the-Loop AI.
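To make the hybrid idea concrete, the sketch below illustrates one way such a rule set could be assembled: automated "Safety Net" rules (standing in for patterns a rule learner like GLRM might extract) are combined with a small set of human-defined "Risk Trap" rules, and expert risk rules act as exception handlers that override the safety rules. All feature names, thresholds, and rule contents are hypothetical illustrations, not the paper's actual learned or expert rules.

```python
# Minimal sketch of a hybrid automated + human rule set for churn explanations.
# Rule contents, feature names, and thresholds are illustrative assumptions only.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    name: str
    source: str                        # "automated" (rule learner) or "human" (expert)
    condition: Callable[[Dict], bool]  # fires when the customer record matches
    label: str                         # "retain" (Safety Net) or "churn" (Risk Trap)


# Automated Safety Nets: retention patterns a rule learner could discover.
automated_safety_rules: List[Rule] = [
    Rule("long_tenure", "automated",
         lambda c: c["tenure_months"] >= 24, "retain"),
    Rule("two_year_contract", "automated",
         lambda c: c["contract"] == "two_year", "retain"),
]

# Human Risk Traps: a small, expert-defined set of churn triggers.
human_risk_rules: List[Rule] = [
    Rule("month_to_month_high_bill", "human",
         lambda c: c["contract"] == "month_to_month" and c["monthly_charges"] > 80, "churn"),
    Rule("recent_support_complaints", "human",
         lambda c: c["support_tickets_90d"] >= 3, "churn"),
]

# Hybrid explanation set: automated rules first, expert exceptions appended.
hybrid_rules: List[Rule] = automated_safety_rules + human_risk_rules


def explain(customer: Dict) -> str:
    """Return a label plus the names of all rules that fired.

    Expert Risk Trap rules act as exception handlers: if any churn rule
    fires, it overrides the automated Safety Nets.
    """
    fired = [r for r in hybrid_rules if r.condition(customer)]
    label = "churn" if any(r.label == "churn" for r in fired) else "retain"
    reasons = ", ".join(r.name for r in fired) if fired else "no rule fired"
    return f"{label} ({reasons})"


if __name__ == "__main__":
    sample = {"tenure_months": 30, "contract": "month_to_month",
              "monthly_charges": 95.0, "support_tickets_90d": 4}
    print(explain(sample))  # churn overrides the long-tenure Safety Net
```

In this toy resolution scheme, the expert rules only need to cover the cases the automated learner misses, which mirrors the abstract's shift of human effort from writing full rule sets to handling exceptions.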