Hallucinations are a natural part of how all large language models work. They can be reduced, but never completely eliminated, because models are rewarded for making confident predictions even when they are uncertain. This is normal behaviour for every LLM.
What are hallucinations?
Your AI Agent is hallucinating when it produces an answer that sounds correct but is not based on real data.
This happens when the model predicts a likely-sounding response instead of acknowledging uncertainty.
A hallucination happens when the Agent:
- Fills in missing information with its best guess
- Invents details instead of saying “I don’t know”
- Gives confident answers without reliable knowledge to support them
Example:
Question: “What is the delivery time for product X?”
Agent (hallucinating): “Delivery time is 24 hours.” (even though no such data was provided)
Why does this happen?
Large language models work by predicting the most likely next words.
They do not automatically verify facts or check databases—unless you provide that structure.
Hallucinations often occur when:
- Knowledge is unclear
- Information is incomplete
- Data is outdated
- Instructions are vague
- The Agent is uncertain but still rewarded for guessing
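To make the "unless you provide that structure" point concrete, here is a minimal sketch in Python. It only builds a prompt; the function name `build_grounded_prompt`, the prompt wording and the sample knowledge are illustrative assumptions, not part of any specific platform or API. The idea is that the model is told to answer only from the knowledge you supply and is explicitly allowed to say "I don't know".

```python
from typing import Optional

# Minimal sketch: build a grounded prompt instead of letting the model guess.
# The function name, prompt wording and sample knowledge are illustrative
# assumptions, not part of any specific platform or API.

def build_grounded_prompt(question: str, knowledge: Optional[str]) -> str:
    """Combine the question with verified knowledge and explicitly allow
    the model to admit uncertainty instead of guessing."""
    context = knowledge if knowledge else "No relevant information was provided."
    return (
        "Answer the question using ONLY the knowledge below.\n"
        'If the answer is not in the knowledge, reply exactly: "I don\'t know."\n\n'
        f"Knowledge:\n{context}\n\n"
        f"Question: {question}"
    )

# Without knowledge, the prompt steers the model towards "I don't know"
# instead of inventing a delivery time.
print(build_grounded_prompt("What is the delivery time for product X?", None))

# With knowledge, the model can quote the verified fact.
print(build_grounded_prompt(
    "What is the delivery time for product X?",
    "Product X ships within 3-5 business days.",
))
```

Whether you write prompts yourself or use a no-code platform, the same principle applies: supply verified knowledge and make admitting uncertainty an acceptable answer.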
What can you do when your Agent hallucinates?
While you can’t remove hallucinations entirely, you can significantly reduce them by giving your AI Agent strong, structured, and up-to-date knowledge.
Tips to reduce hallucinations
- Keep your content up-to-date: outdated information leads to incorrect answers. Regularly update your pricing, terms & conditions, product specifications, processes and workflows, manuals and guides.
- Document key topics thoroughly: if important topics are incomplete, the Agent will fill in gaps on its own.
- Use structured knowledge: headings, bullets and clear definitions help the model (see the sketch after these tips).
Bad example:
We have three packages. Prices change sometimes. The main package is Pro.
Good example:
Our packages and pricing for 2026 are:
- Basic – €19/month
- Pro – €49/month
- Enterprise – custom pricing
- Write clearly and specifically: avoid vague language and assumptions.
Bad example:
Product X usually costs around €20.
Good example:
Product X costs €19.95 (fixed price).
Bad example:
You can order on our website.
Good example:
You can place an order at www.company.com/order via the button in the top right.
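As a small illustration of why structured knowledge beats vague prose, the hypothetical Python sketch below stores the 2026 packages from the good example above as machine-readable data, so an exact price can be looked up instead of guessed. The variable and function names are assumptions made for this example only.

```python
# Hypothetical sketch: structured, machine-readable knowledge lets the exact
# answer be looked up instead of guessed. Values come from the good example
# above; the variable and function names are assumptions for this illustration.

PACKAGES_2026 = {
    "Basic": "€19/month",
    "Pro": "€49/month",
    "Enterprise": "custom pricing",
}

def price_of(package: str) -> str:
    """Return the exact price if the package is documented,
    otherwise admit that the information is missing."""
    price = PACKAGES_2026.get(package)
    if price is None:
        return f"I don't know the price of {package}."
    return f"{package} costs {price}."

print(price_of("Pro"))      # Pro costs €49/month.
print(price_of("Premium"))  # I don't know the price of Premium.
```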
How adding knowledge sources helps reduce hallucinations
Using structured, high-quality knowledge sources helps your AI Agent rely on real information instead of predictions.
By adding sources such as:
…your Agent can:
- reference exact, machine-readable information
- reduce the need to guess
- base answers on current and verified content
- give more consistent and reliable responses
The more structured and complete your knowledge, the fewer hallucinations your Agent will produce.
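For readers curious how "referencing exact information instead of guessing" can work under the hood, here is a deliberately simple, hypothetical retrieval sketch in Python. It scores knowledge snippets by word overlap with the question and keeps the best matches; real systems use far more sophisticated (semantic) retrieval, and the sample snippets simply reuse the examples above.

```python
# Deliberately simple, hypothetical retrieval sketch: pick the knowledge
# snippets most relevant to the question so the Agent can answer from
# verified content. Word-overlap scoring is a simplification; real systems
# use semantic search. The sample snippets reuse the examples above.

SOURCES = [
    "Product X costs €19.95 (fixed price).",
    "You can place an order at www.company.com/order via the button in the top right.",
    "Our packages for 2026: Basic €19/month, Pro €49/month, Enterprise custom pricing.",
]

def retrieve(question: str, sources: list[str], top_k: int = 2) -> list[str]:
    """Score each snippet by shared words with the question and keep the best ones."""
    words = set(question.lower().split())
    scored = sorted(
        sources,
        key=lambda snippet: len(words & set(snippet.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

question = "How much does product X cost?"
retrieved = retrieve(question, SOURCES)
# The retrieved snippets would be placed in the Agent's prompt, so the answer
# is based on current, verified content rather than on a guess.
print("\n".join(retrieved))
```

The retrieved text is then placed in the Agent's prompt, which is what lets it base answers on current, verified content instead of predictions.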