Opinion: Regulating AI Requires First Knowing Its Boundaries

As new learning methods are developed, the boundary between what counts as artificial intelligence and what is simply traditional computing keeps shifting.

(TNS) — Governments around the world are racing to regulate artificial intelligence (“AI”). The Biden administration recently issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI. And the European Union reached provisional agreement on its long-awaited AI Act.

This growing effort to regulate and control the development of AI is not surprising. From advancing drug discovery and disease detection to automating menial tasks, the economic and social benefits of AI are real. If misused, however, AI can cause serious harms. But the focus on AI technology itself as the target of regulation, rather than on harmful uses of AI, may be misguided. It risks missing the mark by either failing to adequately prevent harmful uses of AI or overly inhibiting socially beneficial ones.

Part of the challenge is defining the boundaries of AI. AI is an amorphous category that encompasses a range of computational methods. Much of the recent progress in AI is thanks to advances in “machine learning” methods, which more effectively mimic human cognitive processes. But as new learning methods are developed, the boundary between AI and “ordinary” computing keeps shifting. Hence the adage, “AI is whatever hasn’t been done yet.”

These shifting boundaries make it much harder to pin down AI as the target of regulation. A broad definition of AI risks capturing all computational systems, and over-regulating technological development. Narrower definitions may be more targeted but, given the dynamic, fast-shifting boundaries of AI, also risk quickly becoming obsolete. The White House executive order flirts with both broad and narrow definitions of AI. For example, it sets very specific thresholds for compliance with AI reporting requirements — like AI models trained “using a quantity of computing power greater than 10²⁶ integer or floating-point operations.” While potentially more administrable, these thresholds seem somewhat arbitrary. And even if they reflect the state of the art in AI today, they will quickly become obsolete.

Not only is it challenging to meaningfully define the boundaries of AI, but it is also challenging to measure the risks of AI, or AI models, in the abstract and without context. We can’t meaningfully quantify whether a large AI model, like the one powering ChatGPT, will ultimately cause more harm than good. We certainly can’t measure the risk of an AI “super-intelligence” taking over the world — a speculative doomsday scenario that has no doubt encouraged a more precautionary approach to AI regulation. We can, in contrast, more easily measure, and penalize, specific harmful uses of these models — like deepfakes spewing disinformation and undermining the democratic process.

To be sure, some laws, like the EU AI Act, also distinguish and regulate specific applications of AI, like facial recognition and credit scoring systems, that are perceived to be higher risk. And there are many other non-AI-specific laws and regulations, from financial laws to copyright laws, that regulate AI use. But these “downstream” rules are part of an AI governance framework that increasingly includes “upstream” regulation of AI methods and models, i.e., the technology itself.

We need to think more carefully about the relative merits of upstream AI regulation, particularly to avoid overregulating AI development. We don’t want to make it harder for smaller startups to compete with large firms, like OpenAI and Microsoft, that dominate the market in AI development and can more easily absorb the costs of regulatory compliance. We must try to both prevent and remedy harms caused by AI. But we must also not act with so much precaution that we end up stifling its many economic and social benefits.

It may be too early to conclude that the harms of AI so clearly exceed its benefits that a highly precautionary regulatory stance is justified.

© 2024 Chicago Tribune. Distributed by Tribune Content Agency, LLC.