AI Fundamentals · Beginner

AI Hallucination — Why LLMs Make Things Up

TL;DR: AI hallucination is when a language model generates plausible-sounding but factually incorrect information. Understanding why it happens helps you use AI tools more safely.

What is Hallucination?

When an LLM confidently states something false, such as an invented citation, a wrong date, a fictional event, or incorrect code, that's hallucination. The name comes from psychology: perceiving something that isn't there. LLMs don't "lie"; they generate statistically plausible continuations of text, and those continuations can be factually wrong.
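
To make "statistically plausible continuation" concrete, here is a minimal Python sketch. The prompt, candidate tokens, and probabilities are all invented for illustration and are not drawn from any real model; the point is only that next-token sampling follows likelihood, not truth.

```python
import random

# Hypothetical next-token probabilities for a prompt like
# "The transformer architecture was introduced in the year ..."
# (illustrative numbers only, not taken from any real model)
next_token_probs = {
    "2017": 0.55,  # correct continuation
    "2016": 0.20,  # plausible but wrong
    "2018": 0.15,  # plausible but wrong
    "2015": 0.10,  # plausible but wrong
}

def sample_next_token(probs):
    """Sample one token according to the model's probability distribution.

    There is no fact-checking step anywhere: generation simply follows
    the statistics learned from training data.
    """
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Run a few completions: in this toy example, roughly 45% of samples
# confidently emit a wrong year, and the output reads the same either way.
for _ in range(5):
    print(sample_next_token(next_token_probs))
```

Because sampling rewards whatever text is likely given the training data, a wrong-but-plausible token still gets picked some of the time, and nothing in the generated text signals that it was one of the wrong ones.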

hallucination · confabulation · fabrication · factual accuracy