Earning Trust for AI in Health 2025
Healthcare tech companies are working to accelerate
the development of high-quality AI technologies
that meet the needs of the health sector across the
world. Clear regulatory frameworks and supporting
guidelines will be critical to foster the purposeful
innovation of health AI technologies. Unlocking this
new approach will require three strategic shifts:
1. Build technical expertise among health
leaders and clinical decision-makers
Health leaders and clinicians should seek to
upskill and engage with technical experts
with healthy scepticism, actively challenging
technical propositions to ensure that they
align with the overarching vision. In the future,
understanding the capabilities, limitations and
risks of AI technologies will no longer be the
sole responsibility of chief technology officers
(CTOs); it will become a fundamental skill that
enables health leaders and clinicians to adapt
their evaluation practices to AI technologies.34
2. Support the translation of legislative goals
into actionable guidelines that create
incentives for purposeful innovation
The emergence of the first generation of AI-
focused legislation establishes a paradigm
within which the use of AI technologies
is considered acceptable. The next step,
developing complementary guidance
documents and infrastructure, can benefit
significantly from public–private engagement.
Examples include regulatory sandboxes;
rigorous evaluation methods, including pre-
and post-market surveillance; and AI assurance
resources that detect early signals of AI-related
risks promptly and with full transparency. Trust can be earned
even before legislation comes into effect by
adhering to existing guidelines and standards.
3. Mobilize public–private partnerships to
actively engage the private sector in
lifecycle management
Private-sector involvement in AI systems’
evaluation efforts is important given the rapidly
evolving AI innovation landscape. Public–private
partnerships (PPPs) are necessary to cope
with the increasing number of AI technologies
that need to be tested and kept compliant
with a growing set of
requirements. In addition, these partnerships
can play a crucial role in supporting the
acceleration of model training and development
as well as post-deployment monitoring.
Promoting cooperative engagement such as
public–private partnerships and prioritizing upskilling
and evaluation practices can create an innovation
environment that is agile and transparent.
Collaborative action can build a system that not
only harnesses AI to revolutionize healthcare but
does so in a way that prioritizes patient safety
and trust. The future of AI in health has immense
promise, and with collective effort, society can
ensure that it delivers on that promise responsibly.

Conclusion
Health-system leaders, regulatory
bodies and the private sector must
collaborate to unlock AI’s full potential
while mitigating its associated risks.