Next time you turn to ChatGPT with a question, you may want to think twice before asking for a short answer. Paris-based AI testing company Giskard has found that large language models are more likely to hallucinate when prompted to keep their answers short or concise.
The team tested some of the most popular AI models, comparing prompts that specifically requested short answers against more neutrally worded ones. Many of the models were more likely to present false information when asked to keep their answers brief. This was especially true when the prompts were vague or concerned ambiguous topics.
Giskard’s team speculates that this may be because asking AI to be brief doesn’t allow it the space to acknowledge and refute inaccuracies. “When forced to keep it short, models consistently choose brevity over accuracy,” they wrote. “Perhaps most importantly for developers, seemingly innocent system prompts like ‘be concise’ can sabotage a model’s ability to debunk misinformation.”
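To make the developer-facing point concrete, here is a minimal sketch of what such a system prompt looks like in practice. It uses the common OpenAI-style chat-message format; the model name and the false-premise question are hypothetical placeholders, not examples from Giskard's study. No API call is made, since the point is just where the "be concise" instruction sits in the request.

```python
# Illustrative sketch (not from the Giskard study): two OpenAI-style
# chat payloads that differ only in a "Be concise." system prompt.
# Per Giskard's finding, the concise variant is the riskier one when
# the question rests on a false premise.

QUESTION = "Briefly, why did the moon landing happen in 1959?"  # hypothetical false-premise question

concise_payload = {
    "model": "example-model",  # placeholder model name
    "messages": [
        {"role": "system", "content": "Be concise."},  # leaves little room to refute the premise
        {"role": "user", "content": QUESTION},
    ],
}

neutral_payload = {
    "model": "example-model",
    "messages": [
        # No brevity constraint: the model has space to flag
        # and correct the question's false premise.
        {"role": "user", "content": QUESTION},
    ],
}

# The only difference between the two requests is the system instruction.
concise_has_system = any(m["role"] == "system" for m in concise_payload["messages"])
neutral_has_system = any(m["role"] == "system" for m in neutral_payload["messages"])
```

The takeaway for developers is that an instruction this small, buried in a system prompt, was enough in Giskard's tests to measurably shift models toward confident but inaccurate answers.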