AI Ethicist: Don't Repeat Regulatory Failures of Facebook

Addressing Carnegie Mellon University this week, Duke University law professor Nita Farahany said ChatGPT was adopted even faster, and with fewer safeguards, than social media, but we need not repeat the same mistakes.

(TNS) — In a reprise of an April TED Talk where she warned about artificial intelligence hacking human brains, Nita Farahany traveled to Pittsburgh on Monday to lecture students and faculty at Carnegie Mellon University on the emerging technology's potential for help and harm.

Ms. Farahany, a Duke University law professor and former bioethics advisor under President Barack Obama, said that ChatGPT and other generative AI tools offer the government a chance to correct some of the regulatory failures it made with social media.

"This isn't our first encounter with AI," she said, noting that Justin Rosenstein used computer intelligence to engineer Facebook's "like" button years before he realized the feature's potential to profoundly harm humanity. Likes were designed to spread joy, but Mr. Rosenstein and others now worry the potentially addictive feature is hurting self-esteem, distracting the masses and forever altering the human experience.

(Wired used that example in 2022 to suggest that tech leaders can do more to avoid unintended consequences.)

Meta is only just now facing lawsuits for addicting children to Facebook and Instagram while harming their self-esteem, Ms. Farahany said.

ChatGPT was adopted even faster, with fewer safeguards.

"Given how revolutionary it is for humanity, imagine a technology like that being released with no prior testing, no deliberative democracy, no oversight, no premarket clearance, very little discussion or even safety testing," she said.

Artificial intelligence could make it easier for social media companies to exploit the human brain for profit, Ms. Farahany said. But it can also be used for good.

AI-assisted work can reduce burnout and increase worker safety, she said, citing studies by Pennsylvania State University and Microsoft.

"We're entering into an age of partnership with technology," Ms. Farahany said. "That's threatening for many people, but it doesn't have to undermine human thinking, if we invest in the right way."

CMU's Block Center for Technology and Society expects to release a report this week on operationalizing AI across various business sectors.

Even the controversial idea of computers building psychological profiles for humans isn't inherently harmful, Ms. Farahany said. Duolingo's AI-powered understanding of human learning helps people learn languages faster. Personalized dieting software could similarly help people lose weight.

"I don't think addiction is necessarily in and of itself bad," she said.

But when the addiction overrides humans' ability to act in their own self-interest, then there's a problem.

AI has already infiltrated daily life in ways that can be hard to detect. Ms. Farahany showed statistics suggesting that 77 percent of people use an AI-powered device, but only 33 percent are aware that they do.

One way to overcome the gap in understanding is through education, she said.

In Finland, public school children are learning how to discern whether content has been manipulated with AI. That awareness could become more important as deepfakes infiltrate political races and pornography.

Her strongest example of AI being used for good was in the early detection of seizures. By training computers on epilepsy data, researchers in Israel and Spain can now identify warning signs of a potential seizure before it occurs.

"This is the kind of insight where we're designing technology for human flourishing and we can imagine a really different world," Ms. Farahany said.

Her talk was part of CMU's fall lecture series. Duquesne University is hosting its annual tech ethics convention on Friday, focused on generative AI.

Global leaders met in Britain last week to discuss the responsible development of AI. The summit came days after President Joe Biden signed an executive order demanding safety testing from AI developers and assigning federal agencies to oversee the explosive technology.

©2023 the Pittsburgh Post-Gazette. Distributed by Tribune Content Agency, LLC.