Neural Uncertainty Principle: A Unified View of Adversarial Fragility and LLM Hallucination
By: Michael Johnson, Sarah Davis, Robert Brown, Jessica Miller
Published: 2026-03-20
arXiv: cs.AI
Abstract
This paper introduces the "Neural Uncertainty Principle," a theoretical framework that unifies two failure modes of large language models: adversarial fragility and hallucination. The framework demonstrates an inherent trade-off between a model's robustness to adversarial attacks and its propensity to generate factually incorrect or nonsensical output, providing critical insight for developing more reliable and trustworthy AI systems. Understanding this principle is essential for mitigating risks in real-world LLM deployments.
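The abstract does not state the principle's formal form. Purely as an illustrative sketch (the notation below is assumed for exposition, not taken from the paper), uncertainty-principle-style results are typically written as a lower bound on the product of two error measures:

\[
  \varepsilon_{\mathrm{adv}}(f) \cdot \varepsilon_{\mathrm{hal}}(f) \;\ge\; c > 0
  \qquad \text{for every model } f \text{ in the class under study,}
\]

where \(\varepsilon_{\mathrm{adv}}(f)\) would denote a measure of adversarial fragility (e.g., attack success rate under a fixed perturbation budget) and \(\varepsilon_{\mathrm{hal}}(f)\) a hallucination rate (e.g., the fraction of generations unsupported by the source material). Under a bound of this shape, driving either quantity toward zero forces the other to grow, which is the trade-off the abstract describes.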