
World Health Organization Releases AI Guidelines for Health

The WHO released a report offering guidance for the use of artificial intelligence in the health industry, highlighting six principles to help ensure the technology works to benefit the public.

[Image: World Health Organization logo on the side of a building. Shutterstock/Richard Juilliart]
The World Health Organization (WHO) recently released a report presenting guidance around the ethical use of artificial intelligence (AI) in the health sector.

The lack of a general consensus for ethical use of AI has sparked debate among those in the industry, with some raising concerns about the implications of this technology. This has led to organizations seeking to offer their own solutions, such as the National Institute of Standards and Technology’s recent proposal to reduce bias in the use of AI.

The WHO’s report, titled Ethics and Governance of Artificial Intelligence for Health, seeks to address similar concerns, as well as the potential benefits, of AI’s possible roles in the health sector.

It offers six primary principles for the use of AI:
  1. protect autonomy
  2. promote human well-being, human safety and the public interest
  3. ensure transparency, explainability and intelligibility
  4. foster responsibility and accountability
  5. ensure inclusiveness and equity
  6. promote AI that is responsive and sustainable

The organization’s hope, the report states, is that these principles will be used as a foundation for AI stakeholders, including governments, developers and society.

The first principle, protect autonomy, indicates that decision-making in medicine should be conducted by humans rather than machines. The second principle, promoting human well-being, takes aim at safety and the public interest, stating that AI should not harm people, physically or mentally.

Ensuring transparency, explainability and intelligibility seeks to improve transparency of the technology not only between developers and regulators, but also to medical professionals and patients affected by it. The fourth principle, fostering responsibility and accountability, suggests that stakeholders of a given AI product are responsible for ensuring the technology achieves the intended outcome and that procedures should be in place for remedying the situation if something goes wrong.

The fifth principle, ensuring inclusiveness and equity, requires that AI for health be designed for equitable access, regardless of characteristics protected under human rights codes, such as age, sexual orientation or race. This is especially important as bias in AI remains a prevalent concern.

And finally, the sixth principle, promoting AI that is responsive and sustainable, requires that AI have minimal negative impact on the environment. It also indicates that AI products should be continuously assessed during use.

In addition to these principles, the report offers use recommendations for stakeholders in the industry, emphasizing a need for collaboration between the public and private sector to ensure accountability.

The report details the key uses of AI as a support tool in the medical field, noting that AI is improving certain medical processes. These areas include diagnoses and clinical care, health research and drug development, health systems management and planning, and public health and public health surveillance.

“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” said Dr. Tedros Adhanom Ghebreyesus, WHO director-general, in the announcement.

The document was developed over a two-year process in which 20 experts worked with the organization to identify the principles that would guide AI’s use in the field of health. The effort was led by two departments within WHO: Digital Health and Innovation and Research for Health.