Dual-Reasoning Framework for Robust and Interpretable LLMs

Published: 2025-12-05

View on arXiv →

Abstract

This work introduces a dual-reasoning training framework that integrates affirmative generation with structured counterfactual denial, leading to more robust, interpretable, and human-reasoning-aligned Large Language Models.
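The abstract does not give implementation details, but the core idea of pairing affirmative generation with a structured counterfactual-denial objective can be illustrated as a combined loss. The sketch below is purely hypothetical (all function names, weights, and the per-example cross-entropy form are assumptions, not the paper's actual method):

```python
import math

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the target class for one example."""
    return -math.log(probs[target_idx])

def dual_reasoning_loss(affirm_probs, affirm_target,
                        counter_probs, counter_target,
                        denial_weight=0.5):
    """Hypothetical dual objective: a standard loss on affirmative
    generations plus a weighted loss on structured counterfactual-denial
    targets. Illustrative only; the abstract does not specify the
    actual training objective or weighting."""
    l_affirm = cross_entropy(affirm_probs, affirm_target)
    l_denial = cross_entropy(counter_probs, counter_target)
    return l_affirm + denial_weight * l_denial

# Example: an affirmative prediction and a counterfactual-denial prediction
loss = dual_reasoning_loss([0.7, 0.2, 0.1], 0,   # affirmative branch
                           [0.1, 0.8, 0.1], 1)   # denial branch
```

The single `denial_weight` hyperparameter stands in for whatever balance the framework strikes between the two reasoning modes.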


