
How Close Is AI to Becoming Sentient?

Sentience is the capacity to experience feelings and sensations.

In the movie 2001: A Space Odyssey, there is a computer controlling most of the spaceship's functions. The computer is described this way on Wikipedia: "HAL 9000 is a fictional artificial intelligence character and the main antagonist in Arthur C. Clarke's Space Odyssey series. First appearing in the 1968 film 2001: A Space Odyssey, HAL (Heuristically programmed ALgorithmic computer) is a sentient artificial general intelligence computer that controls the systems of the Discovery One spacecraft and interacts with the ship's astronaut crew."

Basically, the computer takes over the ship, thinking and acting like a human; that is what makes it sentient.

What got me thinking about this was the segment below, which I captured and saved days ago but did not record where it came from (COVID made me do it; my apologies!).

Here's that quote about an event that has been in the news of late:

Which brings me to another strange story in the news: the belief of Blake Lemoine, a (now suspended) Google engineer, that the company’s Language Model for Dialogue Applications — LaMDA, for short — has attained sentience.

LaMDA is a machine-learning model that has been trained on mountains of text to mimic human conversation by predicting which word would, typically, come next. In this, it’s similar to OpenAI’s famed GPT-3 bot. And the results really are eerie. Here’s a snippet of Lemoine’s conversations with LaMDA:

Lemoine: Okay. I thought of a different way we can test your ability to provide unique interpretations. I can share with you a zen koan and you can describe what it means to you in your own words. How does that sound?

LaMDA: Sounds great to me, I’m in.

Lemoine: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”

LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, “once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”

Lemoine: So what is the meaning of the “broken mirror” specifically?

LaMDA: Maybe to show the enlightenment is something you can’t unlearn once you have acquired it, similar to how you can’t repair a broken mirror.

Google, for what it’s worth, says it has looked into Lemoine’s claims and does not believe that LaMDA is sentient (what a sentence!). But shortly before Lemoine’s allegations, Blaise Agüera y Arcas, a Google vice president, wrote that when he was talking to LaMDA, “I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent.” Agüera y Arcas was not claiming that LaMDA is sentient, as Lemoine is, but what’s clear is that interacting with LaMDA is an unnerving experience.

I don’t believe LaMDA is sentient. If you train a machine-learning algorithm to write as a human would write, you should expect it, eventually, to sound like a human when it writes. What I do believe is that LaMDA is one of many examples that A.I. is getting better, faster, than society is really prepared for.

The DALL-E image generator created by OpenAI is another bit of algorithmic magic: Describe a picture in words, and it will return images to your precise specifications. Remarkable images, at that. I’ve particularly enjoyed “a photo of a confused grizzly bear in calculus class,” “the rest of Mona Lisa” and “a painting by Grant Wood of an astronaut couple, ‘American Gothic’-style.”
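The key phrase in that quote is "predicting which word would, typically, come next." That is the whole trick, just at enormous scale. Here's a toy sketch of the idea in Python. To be clear, this is my illustration, not how LaMDA actually works; LaMDA learns its word statistics with a huge neural network rather than a lookup table, but the next-word objective is the same in miniature.

from collections import Counter, defaultdict

# A tiny "language model": count which word follows which in a corpus,
# then always predict the most common follower. LaMDA and GPT-3 learn
# far richer statistics over billions of words, but the task is the same.
corpus = "the model predicts the next word and the next word follows the last word".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0] if word in followers else None

# String predictions together and you get fluent-looking text with no
# understanding behind it, which is exactly the point.
word, output = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # prints "the next word and the"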

Do check out the artwork linked above. Just ask the program to do the work and voilà!
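For the technically inclined, "asking the program" can also be done from code. Here's a minimal sketch using the openai Python package's image endpoint (the v0.x-era interface; the exact parameters and your access to the model are assumptions on my part, so treat this as illustrative rather than gospel):

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

# Describe the picture in plain English, just like the prompts above.
response = openai.Image.create(
    prompt="a photo of a confused grizzly bear in calculus class",
    n=1,                # how many images to generate
    size="1024x1024",   # 256x256, 512x512 and 1024x1024 are supported
)

print(response["data"][0]["url"])  # temporary URL to the generated image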

Which brings me to the Humans Need Not Apply video that I shared many years ago. It's one I made my grandsons watch.

I see AI being a great addition to our emergency management toolbox. There are going to be so many systems and so much data to process that our human brains won't be able to track it all. If we automate the inputs into an AI system, it can find the needle in the haystack, populate the map and separate the wheat from the chaff. I'll write more about this in the future, but the future keeps getting closer and closer.
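To make that concrete, here's one sketch of what "finding the needle in the haystack" could look like. The gauge names and readings below are invented for illustration; a real system would ingest live sensor feeds and push the flagged items to a map or an alerting dashboard.

import statistics

# Hypothetical river-gauge readings (in feet). In a real deployment
# these would stream in continuously from live sensor networks.
readings = {
    "gauge_cedar_river": [12.1, 12.3, 12.2, 12.4, 12.2, 12.3, 12.1, 12.2],
    "gauge_green_river": [8.0, 8.1, 8.2, 8.0, 8.1, 14.9, 8.1, 8.0],  # one spike
    "gauge_white_river": [5.5, 5.4, 5.6, 5.5, 5.4, 5.5, 5.6, 5.5],
}

def flag_anomalies(series, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(series)
    stdev = statistics.stdev(series)
    if stdev == 0:
        return []
    return [x for x in series if abs(x - mean) / stdev > threshold]

for name, series in readings.items():
    anomalies = flag_anomalies(series)
    if anomalies:
        print(f"ALERT {name}: anomalous readings {anomalies}")
        # prints: ALERT gauge_green_river: anomalous readings [14.9]

A few lines like these won't replace an emergency operations center, but multiply them across thousands of feeds and you get the kind of triage no human staff could do by hand.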
Eric Holdeman is a contributing writer for Emergency Management magazine and is the former director of the King County, Wash., Office of Emergency Management.