
How often do AI chatbots misrepresent news content?

Answer: 45 percent of the time.

You may want to think twice before turning to an AI chatbot for your daily news. A new study has found that these bots get the facts wrong nearly half the time.

The research was conducted by 22 public service media organizations, including the BBC and NPR, and evaluated four of the most popular chatbots: ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI. Journalists asked the bots about the news and assessed their answers against a range of criteria, including accuracy, sourcing, provision of context, appropriate editorializing, and the ability to distinguish fact from opinion. At least one significant issue was found in 45 percent of the answers; 31 percent had serious sourcing issues, and 20 percent contained major factual errors.

“This research conclusively shows that these failings are not isolated incidents,” said Jean Philip De Tender, deputy director general of the European Broadcasting Union. “They are systemic, cross-border and multilingual, and we believe this endangers public trust. When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.”