“The real policy questions begin when you understand the trade-offs,” said Luis Videgaray, a member of the AI Policy Forum’s leadership group, director of the MIT AI Policy for the World Project and Mexico’s former foreign minister and finance minister.
FROM PRINCIPLE TO POLICY
But it is one thing to commit to ideals of fairness and accountability and another to determine exactly how to achieve them, especially when doing so requires making judgment calls, said Julie Bishop, an AI Policy Forum steering committee member, chancellor of the Australian National University and Australia’s former minister of foreign affairs.
Policymakers observing or anticipating AI applications will need to decide exactly what counts as a fair or discriminatory impact of the technology’s use, she said, and regulators who fully embrace the ideal that AI should benefit various parties will still need to decide what to do when those goals come into conflict with one another.
“Some challenges in achieving adoption and implementation come from subjectivity in the principles,” Bishop said. “For example, the principle where AI systems should benefit individuals, society and the environment. There are, of course, going to be at times tensions between those benefits and [in] how they’re measured and apportioned.”
Videgaray added that policymakers may find tensions not only between the principles themselves but also between the principles and other societal goals. Ensuring politicians understand the trade-offs among their various priorities is essential to helping them navigate their options.
“[There is] a tension between implementing privacy and accuracy,” Videgaray said by way of example. “Saying, ‘We want to protect privacy,’ is a relatively low-controversy issue, but saying, ‘There’s a tension between accuracy — particularly in medical diagnosis — and the privacy of the patient’s data,’ that’s a difficult thing to explain … losing accuracy might mean losing lives, literally.”
The best techniques for keeping personal data private come at the cost of reducing the accuracy of the AI models that analyze and make assessments based on that data, Videgaray explained. This may be acceptable when AI is making low-stakes decisions, but the pros and cons require deeper consideration when the models are making health-care recommendations. Policymakers may need to weigh just how precise the tools need to be, or whether a somewhat lower level of privacy is acceptable.
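Videgaray doesn’t name a specific technique, but differential privacy — which deliberately adds statistical noise to data before a model learns from it — is the textbook example of the trade-off he describes. The sketch below is a hypothetical illustration of that idea, not anything drawn from the article; the dataset, noise mechanism and epsilon values are all assumptions chosen for demonstration.

```python
# Hypothetical sketch of the privacy-accuracy trade-off: train the same
# classifier on data with increasing amounts of Laplace noise, the mechanism
# used in differential privacy. Dataset and parameters are arbitrary choices.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
# Per-feature sensitivity: the range each feature spans in the training data.
sensitivity = X_train.max(axis=0) - X_train.min(axis=0)

# Smaller epsilon = stronger privacy guarantee = more noise = lower accuracy.
for epsilon in (10.0, 1.0, 0.1):
    noisy_train = X_train + rng.laplace(0.0, sensitivity / epsilon, X_train.shape)
    model = LogisticRegression(max_iter=5000).fit(noisy_train, y_train)
    print(f"epsilon={epsilon:>4}: test accuracy = {model.score(X_test, y_test):.3f}")
```

As epsilon shrinks and the privacy guarantee strengthens, the classifier’s test accuracy falls — the same dial that protects the underlying data degrades the model’s predictions, which in a medical setting is precisely the tension Videgaray says is difficult to explain.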

“As opposed to principles that are global and universal, the actual policies should be very context driven,” he said. “Not every solution is going to be the same for every country even though the principles are the same.”
GLOBAL COLLABORATION
Bishop said that such worldwide commitment to those overarching AI principles, however, is key to ensuring that those making or deploying the technologies actually comply.
Firms in jurisdictions that strictly adhere to AI principles could struggle against competitors operating in nations without such limitations, and some companies could hop borders in search of laxer AI regulations, she noted. Given that, the only way to make standards stick is to make them stick everywhere, she argued.
“AI can’t be confined within the boundaries of individual nation-states, so global collaboration is absolutely vital,” Bishop said.
No country wants to be restrained by a standard its economic or geopolitical rivals aren’t following, and achieving an international AI agreement would require nations to be transparent about their use of the technology, including in covert and military applications, Bishop said.
This is no easy feat, she acknowledged, but she said the Treaty on the Non-Proliferation of Nuclear Weapons is a useful example of how countries have previously agreed to limit technological development and use, although without perfect adherence.