Earning Trust for AI in Health 2025
Executive summary
AI will reshape healthcare, but realizing
its full potential requires responsible
governance, trust and global collaboration.
Healthcare systems globally face growing pressures:
rising costs, workforce shortages and persistent
inefficiencies. In this context, AI offers transformative
opportunities to enhance patient outcomes and
optimize system performance. However, realizing
AI’s benefits in healthcare demands responsible
development, rigorous evaluation and a deliberate
focus on building trust among stakeholders.
Today’s medical regulatory frameworks – largely
designed for pharmaceuticals and medical
devices – are not fully suited to manage the
probabilistic, dynamic nature of AI technologies.
Traditional evaluation methods, which emphasize
pre-market validation, struggle to accommodate
AI systems that evolve post-deployment. As AI
adoption accelerates, regulatory models must
evolve accordingly.
This paper, developed through a collaboration
between the World Economic Forum’s Centre for
Health and Healthcare and Boston Consulting
Group (BCG), identifies three urgent priorities to
earn trust for AI in health:
1. Address fragmentation and build
technical capacity
–Current AI ecosystems are fragmented,
and many health leaders lack a deep
understanding of AI technologies.
–Health systems must build technical literacy
among decision-makers to critically assess
and responsibly integrate AI solutions.
2. Adapt evaluation and regulatory frameworks
–New approaches, such as regulatory
sandboxes, post-market surveillance and
life-cycle monitoring, are essential.
–Guidelines must complement legislation to
enable innovation while maintaining high
standards of safety, effectiveness and equity.
–Independent quality assurance resources
and real-world testing environments, such
as those being developed under initiatives
like the Testing and Experimentation Facility
for Health AI and Robotics (TEF-Health), can
support more dynamic development.
3. Promote public–private collaboration
–Public–private partnerships (PPPs) should
move beyond consultation to active co-
development of evaluation standards and
monitoring frameworks.
–Such collaboration is vital to ensure that
regulatory practices keep pace with AI
innovation while safeguarding patient trust
and public health objectives.
This paper also emphasizes the importance of
global coordination. Divergences in AI regulatory
approaches across regions – especially between
the Global North and Global South – risk creating
barriers to the scalable deployment of AI in
healthcare. Capacity-building efforts, especially
in under-resourced health systems, are crucial to
ensure equitable benefits from AI advances.
Ultimately, the future of AI in healthcare must
be grounded in adaptability, transparency and
shared responsibility. By strengthening evaluation
processes, building technical capacity and fostering
structured public–private collaboration, health
systems can unlock the transformative potential
of AI while upholding patient safety and trust, and
ensuring broader access to innovation.
The path forward demands continuous innovation
not only in technology but also in regulation and
system design. The time to act is now, to ensure
that AI fulfils its promise of delivering better health
outcomes for all.