Hallucination

When an AI model generates content that sounds plausible but is factually wrong. The single biggest risk in deploying AI agents to real customers without grounding.

What it means

Hallucination is the technical term for an LLM confidently producing false information. The model does not know it is wrong; it generates whatever text the prompt makes statistically likely, even when that text invents prices, policies, or facts.

The cause is structural: LLMs are trained to predict probable next words, not to verify truth. Without guardrails, they will fabricate.

The fix is grounding: tying the model's responses to a known knowledge base via retrieval-augmented generation (RAG), and explicitly instructing the model to say 'I do not know' when the knowledge base does not cover the question. Done right, hallucination rates drop close to zero.
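A minimal sketch of what grounding looks like in practice. Everything here is an illustrative assumption, not any specific product's API: the toy knowledge base, the keyword retrieval (real systems typically retrieve with embeddings), and the placeholder prompt that would be sent to whatever chat model the agent uses.

    # Minimal grounding sketch. KNOWLEDGE_BASE holds placeholder facts;
    # retrieval here is naive keyword matching for illustration only.

    KNOWLEDGE_BASE = {
        "consultation": "A standard consultation costs $120.",
        "hours": "The clinic is open Mon-Fri, 9am-5pm.",
    }

    def retrieve(question: str) -> list[str]:
        # Return every fact whose key appears in the question.
        q = question.lower()
        return [fact for key, fact in KNOWLEDGE_BASE.items() if key in q]

    def grounded_prompt(question: str) -> str:
        # Build a prompt that confines the model to retrieved facts
        # and gives it an explicit way out when the facts are missing.
        facts = retrieve(question)
        context = "\n".join(facts) if facts else "(no matching facts)"
        return (
            "Answer using ONLY the facts below. If they do not cover the "
            "question, reply exactly: I do not know.\n\n"
            f"Facts:\n{context}\n\nQuestion: {question}"
        )

    print(grounded_prompt("What is your consultation fee?"))

The explicit fallback instruction is the part that does the work: without it, an empty context just gives the model room to improvise.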

Why it matters

A hallucinating agent quoting a wrong price, promising a refund policy that does not exist, or inventing a service feature is a real-world liability. In regulated industries it is worse: an insurance agent inventing coverage details has legal consequences.

This is also why you cannot just plug ChatGPT into your business and ship it. Production AI deployment requires a knowledge-base layer, guardrails, and ongoing review of real conversations.

Example

A clinic's first agent prototype is a raw LLM with no knowledge base. Asked about pricing, it confidently quotes prices that are not real ('Our consultation fee is $80'). After grounding the agent in the clinic's actual price sheet and adding an 'if unsure, escalate to a human' rule, hallucinations drop to near zero in production.
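A sketch of one way that escalation rule could work, assuming the grounded prompt above: before anything reaches the customer, the reply is checked for the agreed uncertainty phrase, and uncertain replies are routed to a person. The marker list and escalation message are hypothetical choices.

    # Hypothetical post-check implementing 'if unsure, escalate to a human'.

    UNCERTAIN_MARKERS = ("i do not know", "not sure", "cannot confirm")

    def dispatch(reply: str) -> str:
        # Route uncertain replies to a human instead of the customer.
        if any(marker in reply.lower() for marker in UNCERTAIN_MARKERS):
            return "ESCALATED: handing this conversation to a human agent."
        return reply

    print(dispatch("I do not know."))                        # escalates
    print(dispatch("A standard consultation costs $120."))   # passes through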
