Earning Trust for AI in Health 2025
Most of the experts convened for this study
emphasized the need for capacity-building
among public stakeholders to develop regulatory
frameworks appropriate for AI technologies.
Regulations for health innovations have historically
been built to assess static products. AI technologies,
however, can continue to evolve after deployment,
making post-deployment monitoring more critical
than ever. On the positive side, AI tools can become
safer and more effective after release as their
datasets grow; however, current post-market
monitoring processes risk being unable to intervene
in a timely manner should an AI technology evolve
in unforeseen or undesirable ways. That said,
some regulatory innovation has taken place to
accommodate the evolving capabilities of AI
technologies, such as the introduction of
predetermined change control plans in the United
States, which allow certain anticipated changes to be
approved in advance, thus lessening the regulatory
burden on AI developers.
There is a strong case for a global capacity-building
effort that draws on local capabilities (through
PPPs, for instance, as discussed in Section 3) as
well as financing from international aid and
domestic sources. Interviews and workshops
conducted for this paper highlight a broad
consensus on gaps in regulatory literacy and on the
need for enhanced capacity-building to enable
regulatory collaboration and to develop appropriate
regulatory frameworks and guidance documents.
This could also include regulatory reliance
mechanisms such as mutual recognition, whereby
trusted assessments by one authority can be used
by others.
In response to these challenges, the Global
Agency for Responsible AI in Health (HealthAI),
a non-profit organization, was created to
expand countries’ capacity to regulate AI in health,
particularly in the Global South. It is actively
supporting the establishment of government-
led regulatory mechanisms within countries to
accelerate the standards-based validation of AI
technologies. HealthAI is also developing a global
regulatory network, a public registry of approved AI
solutions and an associated global early-warning
system for AI products; it also offers advisory
services on AI policies. Its report, Mapping AI
Governance in Health: From Global Regulatory
Alignments to LMICs’ Policy Developments,
published in September 2024, represents a first
step in the implementation of national and regional
regulatory mechanisms to form a global regulatory
network.18 It examines global AI governance
policies developed by key international institutions
from an interoperability perspective and presents
country-specific analyses of four countries
representing different regions to offer diverse
perspectives on the challenges and progress in the
governance of AI in health.

1.3 AI regulations must be crafted to keep pace with innovation

– Educational tools: Training programmes,
workshops and continuous learning modules
are essential to equip staff at all levels with the
necessary knowledge and skills to engage with
AI systems effectively.