Saturday, 8 November 2025

Why AI tools avoid saying “I don’t know”

A new study by researchers from OpenAI and the Georgia Institute of Technology explains why artificial intelligence (AI) chatbots like ChatGPT often make confident but wrong statements — a problem known as “hallucination.”

The research says this issue is not only about bad data or coding errors. It also comes from the way Large Language Models (LLMs) are trained and tested. During training, these models are rewarded for giving answers, even if they are wrong, rather than saying “I don’t know.”

When AI models are tested, they are scored in a system similar to a school exam. If an answer is right, it gets one point. If it is blank or wrong, it gets zero. Because there is no extra penalty for being confidently wrong, models learn that it is better to guess than to admit uncertainty.
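
As a rough sketch of that arithmetic (the 30% confidence figure below is a made-up example, not a number from the study), a few lines of Python show why guessing never scores worse than admitting uncertainty under exam-style grading:

```python
# Hypothetical numbers (not from the study): a model that is 30% sure of its answer.
p_correct = 0.30

# Exam-style grading: right answer = 1 point, wrong or blank = 0 points.
expected_if_guess = p_correct * 1 + (1 - p_correct) * 0   # 0.30 points on average
expected_if_abstain = 0.0                                  # "I don't know" always scores 0

print(expected_if_guess >= expected_if_abstain)  # True: guessing never scores worse
```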

The researchers tested this idea by asking how well models can tell whether a statement is true or false. They found that some mistakes are unavoidable: certain questions are simply too hard, or the training data contains no clear answer, so no amount of extra reading will eliminate every error.

The study suggests a few fixes. First, AI benchmarks should subtract points for wrong answers instead of scoring them the same as a blank response. Second, models should get partial credit when they admit they don't know something. Together, these changes could teach AI systems honesty instead of overconfidence.
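
Repeating the earlier sketch with those changes (the penalty and partial-credit values here are assumptions for illustration, not figures from the paper) shows how the incentive flips:

```python
# Hypothetical rescoring (the specific weights are assumptions, not the paper's).
p_correct = 0.30
wrong_penalty = -1.0     # assumed: a confident wrong answer now costs a point
abstain_credit = 0.25    # assumed: admitting uncertainty earns partial credit

expected_if_guess = p_correct * 1 + (1 - p_correct) * wrong_penalty   # 0.30 - 0.70 = -0.40
expected_if_abstain = abstain_credit                                   # 0.25

print(expected_if_abstain > expected_if_guess)  # True: honesty now pays better than guessing
```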

However, the paper also notes a business problem. If AI systems start saying “I don’t know” too often, people may find them less helpful and switch to competitors that sound more confident. This tension between accuracy and popularity makes it hard for companies to change their approach.

The researchers conclude that hallucinations are not simple bugs — they are the result of how AI systems are taught to perform. To make AI more truthful, the industry may need to rethink how it measures success.

Manik Khajuria
