Dual-Reasoning Framework for Robust and Interpretable LLMs

Published: 2025-12-05

Abstract

This work introduces a dual-reasoning training framework that integrates affirmative generation with structured counterfactual denial, yielding Large Language Models that are more robust, more interpretable, and better aligned with human reasoning.
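
The paper's method details are not reproduced here, so the sketch below is a rough illustration only: a weighted combination of two supervised objectives in the spirit the abstract describes, a standard next-token loss on affirmative generations and a parallel loss on structured denial targets for counterfactual inputs. The model interface (a forward pass returning `.logits`, HuggingFace-style), the function name `dual_reasoning_loss`, the mixing weight `lam`, and the batch layout are all assumptions, not the authors' actual formulation.

```python
import torch
import torch.nn.functional as F

def dual_reasoning_loss(model, affirmative_batch, counterfactual_batch, lam=0.5):
    """Illustrative dual objective (assumed, not the paper's exact loss).

    Each batch is a dict with 'input_ids' and 'labels' tensors; positions
    excluded from the loss are marked -100, as in common LM training setups.
    """
    # Affirmative objective: generate the supported answer.
    aff_logits = model(affirmative_batch["input_ids"]).logits
    aff_loss = F.cross_entropy(
        aff_logits.view(-1, aff_logits.size(-1)),
        affirmative_batch["labels"].view(-1),
        ignore_index=-100,
    )

    # Counterfactual-denial objective: given a perturbed or false premise,
    # the target sequence is an explicit, structured denial of that premise.
    cf_logits = model(counterfactual_batch["input_ids"]).logits
    cf_loss = F.cross_entropy(
        cf_logits.view(-1, cf_logits.size(-1)),
        counterfactual_batch["labels"].view(-1),
        ignore_index=-100,
    )

    # Weighted sum of the two objectives.
    return aff_loss + lam * cf_loss
```

In this reading, each affirmative example is paired with a counterfactual variant whose target text explicitly rejects the false premise, so the model is trained both to assert what holds and to deny what does not.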
