Microsoft's newest chatbot attempt appears to avoid discussion of politics, religion and race entirely, and has a narrower release than its first chatbot, Tay.
(TNS) -- Microsoft is taking another stab at building a chatbot, several months after Tay, an earlier attempt, was taken offline when some internet users made it spout racist and sexist comments.
The company’s second try, Zo, lives on the Kik messaging application.
Spotted over the weekend by a Microsoft-tracking blog, Zo appears to avoid discussion of politics, religion and race entirely. It also has a narrower release than Tay, which, because of its place on the public Twitter platform, melted down in view of the entire internet.
Microsoft launched Tay, a millennial-imitating chatbot, in March. The bot, powered by machine learning algorithms, was designed to mine public data and the input of people who engaged with it on Twitter, Kik and GroupMe, to come up with phrases to use in conversation.
A day later, the bot was advocating genocide and calling for the murder of feminists.
Microsoft took Tay down and said it would make adjustments. The company said the bot was the victim of a coordinated attack by internet users interested in manipulating its responses.
Microsoft and other technology companies have bet on chatbots — and the artificial intelligence tools that underpin them — as one of the next major computing interfaces.
The company’s experiments with a chatbot in the U.S. follow the success Microsoft had with Xiaoice, a chatbot the company introduced in China in 2014.
Xiaoice itself has quirks that would raise eyebrows in the U.S. China Digital Times reported last month that Microsoft had apparently programmed the bot to avoid discussing the government’s crackdown on the 1989 Tiananmen Square protests.
Beijing heavily censors China’s internet, including chat and news sources. It blocks entire websites, including Facebook.
©2016 The Seattle Times. Distributed by Tribune Content Agency, LLC.