AI Hallucination — Why LLMs Make Things Up
TL;DR: AI hallucination is when a language model generates plausible-sounding but factually incorrect information. Understanding why it happens helps you use AI tools more safely.
What is Hallucination?
When an LLM confidently states a false fact — inventing citations, wrong dates, fictional events, or incorrect code — that's hallucination. The term is borrowed from psychology, where it means perceiving something that isn't there. LLMs don't "lie"; they generate statistically plausible continuations of text, even when those continuations are factually wrong.
Why It Happens (Technically)
LLMs are next-token predictors: they learn statistical patterns from training data and, at each step, sample the next token from a probability distribution over their vocabulary. There is no built-in truth checker; the model has no internal database of verified facts, only patterns. High-confidence hallucination happens when those patterns make a fluent, plausible answer look likely even though the correct fact was never in the training data.
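The sampling step above can be sketched with a toy softmax over made-up logits (these numbers come from no real model). It also shows why temperature matters: lower temperature sharpens the distribution toward the top token, higher temperature spreads probability onto less likely continuations.

```python
import math

def apply_temperature(logits, temperature):
    """Rescale logits by temperature, then softmax into probabilities.
    Lower temperature -> sharper (more deterministic) distribution;
    higher temperature -> flatter (more varied, more error-prone)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token logits for three candidate tokens.
logits = [2.0, 1.0, 0.1]

cold = apply_temperature(logits, 0.5)  # sharper: top token dominates
hot = apply_temperature(logits, 2.0)   # flatter: alternatives gain mass

print(cold[0] > hot[0])  # True
```

The model always emits *some* token from this distribution; nothing in the math checks whether the resulting sentence is true.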
Common Hallucination Types
- Factual errors: wrong dates, statistics.
- Invented citations: real-sounding but fake academic papers.
- Code hallucinations: plausible-looking but broken code.
- Outdated information stated as current.
- Person confabulation: mixing up details from different real people.
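For code hallucinations specifically, one cheap defense is checking that a suggested API actually exists before trusting it. This sketch uses Python's standard `importlib`; `dump_pretty` is a deliberately invented attribute standing in for a hallucinated function.

```python
import importlib

def api_exists(module_name, attr_path):
    """Return True if module_name can be imported and the dotted
    attribute path (e.g. "dumps") resolves on it."""
    try:
        obj = importlib.import_module(module_name)
    except ImportError:
        return False
    for part in attr_path.split("."):
        if not hasattr(obj, part):
            return False
        obj = getattr(obj, part)
    return True

print(api_exists("json", "dumps"))        # True: real function
print(api_exists("json", "dump_pretty"))  # False: plausible but nonexistent
```

A check like this catches nonexistent functions, but not subtly wrong logic; running the model's code against real tests is still the stronger verification.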
How to Reduce Hallucination Risk
- Use grounding: attach real documents (RAG) or web search so the model cites sources.
- Ask for citations and verify them yourself.
- Use lower temperature settings for factual tasks.
- Prompt the model to say "I don't know" when uncertain.
- Use Perplexity.ai for research; it shows its sources.
- Don't use LLMs for medical, legal, or financial decisions without independent verification.
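The grounding advice can be sketched end to end. This is a minimal illustration, not a real RAG system: the keyword-overlap scorer stands in for embedding search over a vector store, and `retrieve` and `build_grounded_prompt` are hypothetical names, not a library API.

```python
def retrieve(query, documents, top_k=2):
    """Score documents by naive keyword overlap with the query
    (a real system would use embeddings) and return the best matches."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:top_k]

def build_grounded_prompt(query, documents):
    """Build a prompt that restricts the model to retrieved sources
    and gives it an explicit 'I don't know' escape hatch."""
    passages = retrieve(query, documents)
    context = "\n".join(f"[{i+1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the sources below. Cite them as [1], [2]. "
        "If the sources do not contain the answer, say 'I don't know'.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "The Transformer architecture was introduced in 2017.",
    "Paris is the capital of France.",
    "Mount Everest is 8,849 meters tall.",
]
print(build_grounded_prompt(
    "When was the Transformer architecture introduced?", docs))
```

The two levers here are exactly the ones in the list above: real source text in the context (so the model completes from facts, not just patterns) and an explicit instruction permitting "I don't know".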