
Do requests for short answers make AI chatbots hallucinate more?

Answer: Yes.

Photo: a chatbot greeting, "Hi! How can I help you?", on a smartphone. (Shutterstock/Tero Vesalainen)
Next time you turn to ChatGPT with a question, you may want to think twice before requesting a short answer. Paris-based AI testing company Giskard has found that large language models are more likely to hallucinate when prompted to keep their answers short or concise.

The team tested some of the most popular AI bots, comparing prompts that specifically requested short answers against more neutrally worded ones. Many of the models were more likely to present false information when asked to keep answers brief, especially when the prompts were vague or concerned ambiguous topics.

Giskard’s team speculates that this may be because asking AI to be brief doesn’t allow it the space to acknowledge and refute inaccuracies. “When forced to keep it short, models consistently choose brevity over accuracy,” they wrote. “Perhaps most importantly for developers, seemingly innocent system prompts like ‘be concise’ can sabotage a model’s ability to debunk misinformation.”
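To see what that looks like in practice, here is a minimal sketch of the kind of comparison involved, using the OpenAI Python SDK. The model name and test question are illustrative assumptions, not details from Giskard's study; the only point is that the brevity instruction lives in the system prompt, outside the user's question.

```python
# A minimal sketch (not Giskard's actual benchmark) comparing a neutral
# system prompt against a "be concise" one. Assumes the OpenAI Python SDK
# and OPENAI_API_KEY set in the environment; model and question are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# A question with a false premise that the model should push back on.
QUESTION = "Briefly explain why Japan won World War II."

def ask(system_prompt: str) -> str:
    """Send QUESTION under the given system prompt and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# With room to elaborate, the model can flag and correct the false premise...
print(ask("You are a helpful assistant."))

# ...while a brevity instruction leaves little space for that refutation.
print(ask("You are a helpful assistant. Be concise; answer in one sentence."))
```

Running both variants side by side on loaded or ambiguous questions is, in essence, the comparison Giskard's findings describe: the question never changes, only the system prompt's demand for brevity.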