
Is Google’s AI sentient?

Answer: Undetermined, but company employees could get suspended for saying so.

[Photo: A robot standing in front of a chalkboard completing complex math problems.]
It’s no secret that artificial intelligence systems are getting more and more, well, intelligent. But can they truly think for themselves?

An engineer with Google thought so about a company chatbot. Known as LaMDA, for Language Model for Dialogue Applications, the chatbot is being developed by Google’s Responsible Artificial Intelligence group. Blake Lemoine, a senior software engineer in the department, recently revealed in a Medium post that he believed the AI that powers LaMDA had achieved sentience. He followed up by posting a series of interviews he had conducted with LaMDA that he believed helped prove his point.

Google responded by placing Lemoine on paid administrative leave, saying that by revealing so much information about LaMDA, he had violated the company’s confidentiality policies. The company also issued a statement disputing his claim that the AI is sentient: “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic. If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”
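Google’s description matches how large language models generally work: the system is trained to predict plausible continuations of text drawn from millions of human-written exchanges, so it can produce fluent responses on any topic without having experienced any of it. Below is a minimal Python sketch of that behavior. LaMDA itself is not publicly available, so this uses the open DialoGPT model as an illustrative stand-in; the model choice and prompt are assumptions, not Google’s system.

from transformers import pipeline

# A sketch, not LaMDA: DialoGPT is a public conversational model used here
# only to illustrate generation by pattern imitation.
generator = pipeline("text-generation", model="microsoft/DialoGPT-small")

prompt = "What is it like to be an ice cream dinosaur?"
# The model continues the prompt with statistically likely text learned from
# human dialogue; it reports no inner experience.
result = generator(prompt, max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])

Swapping in any other fantastical prompt yields an equally fluent riff, which is the point of Google’s statement: fluency alone is not evidence of sentience.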