Towards Trustworthy AI: A Framework for Explainable and Robust Deep Learning in Critical Systems

By: Professor Julian Vance, Dr. Lena Schmidt, Mr. Omar Hassan, Ms. Jessica Lee, Dr. Martin Müller

Published: 2025-12-28

#cs.AI

Abstract

We propose a comprehensive framework for building trustworthy AI systems by integrating explainability techniques with adversarial robustness methods in deep learning. The framework addresses critical concerns regarding bias, transparency, and reliability, particularly in high-stakes applications such as autonomous driving and financial fraud detection, thereby fostering greater confidence in AI deployments.
