To protect public health and promote ethical practice, WHO calls for the safe and responsible use of artificial intelligence in healthcare.
The World Health Organization (WHO) highlights the importance of the safe and ethical use of artificial intelligence (AI) in healthcare. It urges caution in the use of large language model (LLM) tools such as ChatGPT, Bard, BERT, and others, in order to safeguard human well-being, ensure safety, preserve autonomy, and promote public health.
What is an LLM?
An LLM is an advanced AI system that uses sophisticated algorithms and deep learning techniques to process and generate human-like language. It is trained on extensive text datasets, from which it learns patterns, linguistic structures, and semantic relationships.
LLMs understand and generate coherent responses, serving as valuable tools for tasks like natural language processing, information retrieval, and text generation. Their ability to mimic human communication has attracted attention across domains, including healthcare.
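To make this concrete, here is a minimal sketch of text generation with a pretrained model. It assumes the open-source Hugging Face transformers library and the small public gpt2 model; neither is mentioned by WHO, and both are used here purely for illustration:

```python
# Minimal illustration of LLM text generation, using the Hugging Face
# `transformers` library and the small, publicly available GPT-2 model.
# This is a sketch for intuition only, not a clinical tool.
from transformers import pipeline

# Load a pretrained model; its behaviour reflects patterns learned
# from its training data, including any biases in that data.
generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models are used in healthcare to"
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# The model continues the prompt token by token, producing fluent but
# not necessarily accurate text -- the core caution raised by WHO.
print(outputs[0]["generated_text"])
```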
The Rising Popularity of LLMs
The advent of LLMs, which imitate human communication and comprehension, has sparked immense excitement regarding their potential to support various health needs.
However, WHO stresses the importance of carefully examining the risks associated with their use in improving access to health information, as decision-support tools, or even for enhancing diagnostic capabilities in resource-limited settings. The ultimate goal is to safeguard people’s health and mitigate inequities.
How Is AI Being Used in Healthcare?
In the healthcare field, researchers and practitioners are exploring the application of LLMs to improve access to health information, assist in diagnosis, aid in decision support for healthcare professionals, and support research efforts.
Their potential to understand and generate human language opens up opportunities to enhance communication, information retrieval, and knowledge sharing within the healthcare ecosystem.
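As a deliberately simplified illustration of the information-retrieval use case, the sketch below ranks a few hypothetical health documents against a query. It uses classical TF-IDF from scikit-learn rather than an LLM, purely to keep the example small and self-contained:

```python
# A toy health-information retrieval example: rank documents by their
# TF-IDF similarity to a user query. The documents are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Guidance on managing type 2 diabetes with diet and exercise.",
    "Recommended vaccination schedule for infants and children.",
    "Recognising the early warning signs of stroke.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

query = "early symptoms of a stroke"
query_vector = vectorizer.transform([query])

# Rank documents by similarity to the query and return the best match.
scores = cosine_similarity(query_vector, doc_vectors)[0]
best = scores.argmax()
print(f"Best match (score {scores[best]:.2f}): {documents[best]}")
```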
However, it is important to approach the use of LLMs in healthcare with caution and to adhere to ethical guidelines. Because of the complexity of language and biases in training data, LLM responses may lack accuracy in critical health contexts. Ethical use requires addressing data privacy, bias mitigation, transparency, and responsible information dissemination.
As LLMs evolve, ongoing research, collaboration with AI experts, and robust governance are crucial to ensure their safe integration into healthcare practice, harnessing their potential while mitigating risks.
WHO’s Response to the Rise in AI
While WHO acknowledges the value of incorporating technologies, including LLMs, to assist healthcare professionals, patients, researchers, and scientists, it is concerned that the caution normally exercised with new technologies is not being applied consistently to LLMs. Such caution includes adherence to essential values such as transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.
Rapid adoption of untested systems without adequate oversight may lead to errors by healthcare workers, harm patients, erode trust in AI, and impede the long-term potential and global benefits of such technologies.
Several critical concerns necessitate rigorous oversight if these technologies are to be used in safe, effective, and ethical ways:
- Addressing Data Bias: The data used to train AI models may be biased, producing misleading or inaccurate information that poses risks to health, equity, and inclusiveness (a minimal illustration follows this list).
- Accuracy of LLM-Generated Responses: LLMs can produce responses that appear authoritative and plausible on the surface, yet on closer inspection may be entirely incorrect or contain serious errors, especially in health-related contexts.
- Consent and Data Protection: LLMs may be trained on data collected without prior consent for such use, and they may not adequately protect sensitive data, including health information that users provide to generate responses.
- Misinformation and Dissemination: LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content, which the public may find difficult to distinguish from reliable health information.
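To illustrate the data-bias concern in the simplest possible terms, the sketch below counts how patient groups are represented in a hypothetical training set; a skewed distribution is one warning sign that a model trained on it may serve under-represented groups worse:

```python
# A toy check for representation bias in training data, referenced in
# the "Addressing Data Bias" item above. The records are hypothetical.
from collections import Counter

# Hypothetical training examples, each tagged with a patient group.
training_records = [
    {"text": "chest pain, age 60", "group": "male"},
    {"text": "chest pain, age 55", "group": "male"},
    {"text": "fatigue, age 45", "group": "male"},
    {"text": "chest pain, age 62", "group": "female"},
]

counts = Counter(record["group"] for record in training_records)
total = sum(counts.values())

# A skewed distribution means the model sees far more examples from one
# group, so its outputs may be less reliable for the others.
for group, n in counts.items():
    print(f"{group}: {n}/{total} ({n / total:.0%} of training data)")
```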
WHO prioritises patient safety in AI advancements, urging policymakers to ensure patients are protected as technology firms commercialise LLMs. WHO also urges evidence-based deployment of LLMs in healthcare, emphasising the need to address these concerns before widespread implementation.
WHO’s Guidance on AI Ethics
WHO underscores the importance of adhering to ethical principles and appropriate governance, as outlined in the organisation's guidance on AI ethics and governance for health. The six core principles identified by WHO are:
- Protecting Autonomy
- Promoting Human Well-being, Safety, and the Public Interest
- Ensuring Transparency, Explainability, and Intelligibility
- Fostering Responsibility and Accountability
- Ensuring Inclusivity and Equity
- Promoting Responsive and Sustainable AI
By upholding these principles, stakeholders can harness the potential of AI while ensuring ethical practices and safeguarding the best interests of global health.
WHO calls upon policymakers and technology developers to adhere to ethical standards and prioritise patient safety. By doing so, we can maximise the benefits of these technologies while mitigating potential risks and ensuring the advancement of healthcare for all.