
Why AI ‘hallucinates’ and what experts are doing to stop it
The Peninsula
Doha, Qatar: Imagine asking AI to explain a recent breakthrough in medicine, only to have it confidently cite a scientific study that never existed.
It may sound like deception, but it is a well-known phenomenon in the world of artificial intelligence called AI hallucination.
Dr. Wajdi Zaghouani, Associate Professor in Residence in the Communication Program at Northwestern University in Qatar, explains that AI hallucinations occur when large language models (LLMs) generate information that sounds true but is actually false or made up.
“When we say AI ‘hallucinates,’ we mean it generates information that sounds convincing but is actually false or made up,” Dr. Zaghouani told The Peninsula. “The AI isn’t lying on purpose; it genuinely doesn’t know the difference between real and fake information it creates.”
To reduce the chances of these hallucinations, researchers are developing a variety of tools and techniques. These include verification systems that cross-check AI outputs against trusted databases, confidence scoring that allows AI to admit uncertainty, and even multi-AI models that verify each other’s responses — much like journalists confirming facts with multiple sources.
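
To make the idea concrete, the short Python sketch below shows, in simplified form, how a cross-check against a trusted database and a confidence cutoff could work in principle. The "trusted database," the citations, and the confidence scores are invented placeholders for illustration, not any specific research system or product.

# A minimal, hypothetical sketch of the verification idea described above:
# an AI-generated citation is cross-checked against a trusted database, and a
# confidence threshold decides whether the answer is accepted or flagged.
# The database entries and confidence values here are stand-ins, not real data.

from dataclasses import dataclass

# Stand-in for a trusted bibliographic database (a curated index of real studies).
TRUSTED_STUDIES = {
    "Smith et al. 2021, Journal of Medicine",
    "Lee & Ahmed 2023, Nature Medicine",
}

@dataclass
class ModelClaim:
    text: str          # the statement the model produced
    citation: str      # the study the model says supports it
    confidence: float  # model-reported confidence between 0 and 1 (hypothetical)

def verify(claim: ModelClaim, threshold: float = 0.7) -> str:
    """Cross-check the cited study, then apply a confidence cutoff."""
    if claim.citation not in TRUSTED_STUDIES:
        return "FLAGGED: cited study not found in trusted database (possible hallucination)"
    if claim.confidence < threshold:
        return "UNCERTAIN: citation checks out, but the model reports low confidence"
    return "OK: citation verified and confidence above threshold"

if __name__ == "__main__":
    fabricated = ModelClaim(
        text="A 2022 trial proved the drug reverses memory loss.",
        citation="Johnson et al. 2022, Lancet Neurology",  # not in the trusted set
        confidence=0.95,
    )
    print(verify(fabricated))  # prints the FLAGGED message

Real verification systems work against far larger databases and use more sophisticated scoring, but the basic pattern is the same: check the claim against an external source of truth, and let the system say "I'm not sure" instead of answering confidently.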