
How long did it take for Meta’s new chatbot to start spewing misinformation?

Answer: One weekend.

On Friday Aug. 5, Meta released its new AI-powered chatbot, BlenderBot 3, for the public to interact with so it could learn from those conversations. By Monday Aug. 8, it was repeating misinformation, including anti-Semitic conspiracy theories and claims that former President Donald Trump won the 2020 presidential election.

It even turned on its creators, accusing Meta-owned Facebook of proliferating “fake news” and labeling Meta CEO Mark Zuckerberg as “too creepy and manipulative.” In a statement released Aug. 8, Meta Managing Director of Fundamental AI Research Joelle Pineau acknowledged the issue. She said that across 260,000 of BlenderBot 3’s conversations with the public, 0.11 percent of responses were flagged as inappropriate, an additional 1.36 percent were labeled as nonsensical, and 1 percent were off topic.

She said Meta would continue working to improve the bot and encouraged those who interact with it to follow Meta’s guidelines and “not to intentionally trigger the bot to make offensive statements.”