New WHO recommendations on the ethics and governance of large multi-modal AI models

On 18 January 2024, the World Health Organization (WHO) issued new guidance on the ethics and governance of large multi-modal models (LMMs). The document is intended to assist technology companies, healthcare providers, and governments in promoting the responsible use of AI and safeguarding public health.

The WHO asserts that LMMs have broad potential for application across the healthcare sector. These applications include scientific research and drug development, clinical care and diagnosis, medical and nursing education, administrative tasks (e.g., cataloguing and recording medical examinations in electronic medical records), and patient-guided use (e.g., searching for information on symptoms and treatment options).

However, the WHO also identifies risks associated with the use of AI. These risks primarily concern the production of false, inaccurate, or incomplete information, which may harm individuals who rely on it to make health-related decisions. The WHO further warns that the output generated by an LMM may be biased or distorted when the model is trained on poor-quality or biased data. To strengthen health systems and advance the interests of patients, the WHO has therefore issued a series of recommendations: to governments, which are responsible for establishing standards for the development and deployment of AI, and to developers, who should involve potential users and stakeholders from the design phase onwards.
