
Opinion: AI Needs Limits Imposed by Real People

Days before Sam Altman was fired — and then rehired — as CEO of OpenAI, researchers at the company wrote a letter to its board of directors warning that a major new discovery could threaten humanity.

(TNS) — Days before Sam Altman was fired — and then rehired — as CEO of OpenAI, researchers at the company wrote a letter to its board of directors warning that a major new discovery could threaten humanity. We don’t yet know the details of that breakthrough or its precise role in the soap opera that’s consumed the tech world in recent weeks, but we do know that artificial intelligence is advancing at a rapid pace, and our public policy to regulate it is moving at the speed of Washington.

We’re sorry, Dave. We’re afraid we can’t have that.

What can AI already do? As anyone who’s fiddled with ChatGPT knows, it can write reasonably credible, fact-based essays about fairly complicated questions, and fiction and poetry to boot. It can write computer code. It can transcribe speech and summarize long texts with remarkable accuracy. It can generate photorealistic or stylized images of just about anything. It can aid in the discovery of new medicines. It can take a picture or two of a human face at any age and identify it, almost instantly. It can listen to a person for just a few seconds and then spoof his or her voice.

Naysayers are quick to point out the many failures and fumbles and foibles, all the ways in which the tech is not yet ready for deployment. And it’s true: every algorithm makes mistakes, and some make a lot. One AI-enabled innovation, the self-driving car, has been just around the corner for many years now, because the task has proven much more complex than initially thought. (Even so, it’s worth pointing out that autonomous vehicles have made major strides.)

The unavoidable fact is that human beings have developed and are refining a technology with remarkable capabilities. There’s tremendous good AI can do today and tomorrow, from identifying students in need of additional support services, to helping radiologists detect cancerous growths, to supercharging drug development, to helping blind people make sense of what’s in front of them, to quickly scoping out damage in disaster zones.

But like any technology, it can do serious damage as well. Those voice-spoofing capabilities are already being used to steal money. Artists are seeing their creativity exploited and new paintings or songs generated from copyrighted work without their permission. Deepfaked videos that, to the naked eye, are indistinguishable from real ones can throw fuel on fires of disinformation — or create nonconsensual pornography. And while well-designed, properly applied AI can help identify and combat human bias, poorly designed algorithms can enable mass bias in hiring, criminal justice and other realms.

It is the job of the federal government to prevent foreseeable AI abuses before they happen and to design smart penalties for inevitable bad uses, all while ensuring that there are few if any constraints on AI’s many beneficial applications.

The White House has released an organically intelligent framework for what sorts of safeguards are needed, and Senate Majority Leader Chuck Schumer has led the way. The billionaires who increasingly control this corner of the economy can’t be trusted to regulate themselves. The tech-challenged men and women who play the biggest role in writing our laws have their own ineptitudes and blind spots, but they’re the only representatives we’ve got.

© 2023 New York Daily News. Distributed by Tribune Content Agency, LLC.