
From Principle to Practice: AI Rulemaking Is an Uphill Battle

Experts participating in the inaugural AI Policy Forum Symposium underscored the need for the world to commit to common AI ethics principles, much in the same way that countries have agreed to manage nuclear weapons.

Julie Bishop, speaking at the May 2021 AI Policy Forum Symposium (screenshot)
The AI Policy Forum, organized by the MIT Schwarzman College of Computing, convened its inaugural symposium last week, where panelists discussed what it takes for public officials to translate generally agreed-upon international ideals about ethical AI into tangible policies with teeth, and the many pitfalls lying in the way.

“The real policy questions begin when you are understanding the trade-offs,” said Luis Videgaray, a member of the AI Policy Forum’s leadership group, director of the MIT AI Policy for the World Project and a former foreign minister and finance minister of Mexico.

FROM PRINCIPLE TO POLICY

The Organization for Economic Cooperation and Development (OECD), an international entity focused on promoting evidence-based policymaking and global standards, released a slate of recommended AI principles that its member nations signed onto in 2019. The standards stipulate, for example, that individuals must be told when they are interacting with AI systems and must be allowed to contest an algorithm’s decision if they believe they have been harmed by the conclusions it reached.

But it is one thing to commit to ideals of fairness and accountability and another to determine exactly how to achieve them, especially when doing so requires making judgment calls, said Julie Bishop, an AI Policy Forum steering committee member, chancellor of the Australian National University and Australia’s former minister of foreign affairs.

Policymakers observing or anticipating AI applications will need to decide exactly what counts as a fair or discriminatory impact of the technology’s use, she said. Even regulators who fully embrace the ideal that AI should benefit various parties will still need to decide what to do when those goals come into conflict with one another.

“Some challenges in achieving adoption and implementation come from subjectivity in the principles,” Bishop said. “For example, the principle where AI systems should benefit individuals, society and the environment. There are, of course, going at times to be tensions between those benefits and [in] how they’re measured and apportioned.”

Videgaray added that policymakers may find tensions not only between the principles themselves but also between the principles and other societal goals. Ensuring politicians understand the trade-offs between their various priorities is essential to helping them navigate their options.

“[There is] a tension between implementing privacy and accuracy,” Videgaray said by way of example. “Saying, ‘We want to protect privacy,’ is a relatively low-controversy issue, but saying, ‘There’s a tension between accuracy, particularly in medical diagnosis, and the privacy of the patient’s data,’ that’s a difficult thing to explain … losing accuracy might mean losing lives, literally.”

The best techniques for ensuring personal data stays private come at the cost of reducing the accuracy of the AI models that analyze and make assessments based on that data, Videgaray explained. This may be acceptable when AI is making low-stakes decisions, but the pros and cons require deeper consideration when the models are making health-care recommendations. Policymakers may need to weigh just how precise the tools must be, or whether a somewhat weaker privacy guarantee is acceptable.
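
The panelists did not name a specific privacy technique, but differential privacy is one widely cited example of the trade-off Videgaray describes. Below is a minimal, hypothetical sketch in Python of how it plays out: a simple statistical query (the mean of simulated patient values) is released through the Laplace mechanism, and tightening the privacy guarantee (lowering epsilon) makes the released answer visibly noisier. The data, function names and epsilon values are all illustrative, not drawn from the symposium.

```python
# Illustrative only: the symposium did not specify a technique; this sketch
# assumes differential privacy via the Laplace mechanism.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical, normalized patient measurements in [0, 1].
records = rng.uniform(0.0, 1.0, size=1000)
true_mean = records.mean()

def private_mean(data, epsilon):
    """Release the mean under epsilon-differential privacy.

    For values bounded in [0, 1], the mean of n records changes by at
    most 1/n when one record changes (sensitivity 1/n), so Laplace noise
    with scale (1/n)/epsilon satisfies epsilon-differential privacy.
    """
    scale = (1.0 / len(data)) / epsilon
    return data.mean() + rng.laplace(0.0, scale)

# Smaller epsilon = stronger privacy guarantee = noisier (less accurate) answer.
for epsilon in (10.0, 1.0, 0.1, 0.01):
    estimate = private_mean(records, epsilon)
    print(f"epsilon={epsilon:5}: |error| = {abs(estimate - true_mean):.4f}")
```

Running the sketch shows the error growing as epsilon shrinks, which is the policy dilemma in miniature: the same knob that strengthens a patient’s privacy guarantee degrades the accuracy of whatever decision rests on the released statistic.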

Luis Videgaray, speaking at the AI Policy Forum Symposium (screenshot)

Governments cannot be expected to adopt the same policies around AI, because the trade-offs carry different weight depending on local conditions, Videgaray said. Areas with limited access to medical professionals are likely to have different opinions on how much inaccuracy they are willing to risk from AI-powered diagnostic tools than countries with plenty of well-trained, well-resourced doctors.

“As opposed to principles that are global and universal, the actual policies should be very context driven,” he said. “Not every solution is going to be the same for every country even though the principles are the same.”

GLOBAL COLLABORATION

Such worldwide commitment to those overarching AI principles, however, is key to ensuring that those making or deploying the technologies actually comply, Bishop said.

Firms in jurisdictions that strictly adhere to AI principles could struggle against competitors operating in nations without such limitations, and some companies could choose to hop borders to seek out laxer AI regulations, she noted. Given that, the only way to ensure that standards stick is to make them stick everywhere, she argued.

“AI can’t be confined within the boundaries of individual nation-states, so global collaboration is absolutely vital,” Bishop said.

No country wants to be restrained by a standard that their economic or geopolitical rivals aren’t following, and achieving an international AI agreement would require nations to be transparent about their use of the technology, including in covert and military applications, Bishop said.

This is no easy feat, she acknowledged, but the Treaty on the Non-Proliferation of Nuclear Weapons offers a useful example of how countries have previously agreed to limit a technology’s development and use, albeit without perfect adherence.

Jule Pattison-Gordon is a senior staff writer for Government Technology. She previously wrote for PYMNTS and The Bay State Banner, and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.