Ungrounded Divergence: A Philosophical Framework for Understanding AI Hallucination
Matt Armendariz
December 12, 2025
INTRODUCTORY NOTE
This paper offers a conceptual framework for understanding why large language models produce false outputs, a phenomenon the industry labels “hallucinations.” It is not a compliance checklist, a product evaluation, or legal advice. Rather, it proposes a way of thinking about hallucinations that I believe is more accurate than the prevailing metaphors and more useful for practitioners who must make judgment calls about when, and how far, to trust AI-generated content. Imprecise language doesn’t just obscure thinking; it degrades it. “Hallucination” points practitioners toward the wrong problem and, therefore, toward the wrong solutions. A more precise name, one that carves nature at its joints, is necessary for real progress.
The framework draws on analytic philosophy of language, particularly Saul Kripke’s work on rule-following. However, readers need not be familiar with Kripke to follow the argument; the relevant concepts are explained in the text.
