The Future of AI Enabled Health 2025
Ensuring AI transparency and accountability is
critical for building trust and for implementing AI
systems in health safely and effectively. Given the
medical field's inherent resistance to change,
including inertia and conservatism, establishing
transparent regulatory frameworks is essential.
This involves developing nuanced regulatory
approaches that keep pace with rapid AI advances,
as traditional one-size-fits-all regulation is
inadequate for the diverse and evolving nature of
AI technologies. Creating adaptable frameworks
that address the specific characteristics of different
AI technologies is crucial. Harmonizing data-
sharing policies across borders facilitates global
collaboration while maintaining security and privacy,
which is essential for the widespread adoption of AI.

Maintaining human oversight of AI systems is of
paramount importance for safeguarding trust,
efficacy and ethical standards. The World Health
Organization (WHO) emphasizes that the “principle
of autonomy requires that the use of AI or other
computational systems does not undermine human
autonomy. In the context of health care, this means
that humans should remain in control of health-
care systems and medical decisions.”17 Given
the complexities of AI, especially genAI, human
involvement is necessary to validate AI outputs
and support decision-making. If a culture of trust
and collaboration is established, ensuring that
AI systems are safe, reliable and accepted by all
stakeholders, AI can be effectively integrated into
health provision.

3.3 Low confidence in AI within a fragmented
regulatory and governance framework
Fragmented, outdated regulations
that hinder AI innovation
Tactically, AI-driven health faces four significant
regulatory challenges in the short term. First, there
is a fragmented regulatory landscape with a divide
between countries with stringent regulations and
those without.18 Second, the perceived lack of
regulatory clarity, where regulations do not keep
pace with the advance of AI, stifles innovation.
Third, the regulation of software and AI uses a one-
size-fits-all approach that might not be fully relevant
for genAI. Finally, developing and regulating AI in
the absence of data-sharing rules remains a challenge.
The gap is growing between countries leading the
race in AI regulation and those without the means
to engage in this new field. Public engagement
and political will are essential: in the US, the
2023 Executive Order on the Safe, Secure and
Trustworthy Development and Use of Artificial Intelligence19 was one catalyst for progress.
Supporting emerging economies and local or
regional regulation approaches will help mitigate
inequities: this should remain part of international
development priorities as well as national priorities.
Slow regulation often stems from the expectation
that AI must be fully developed before
implementation, and that risks should be mitigated
with ambitious regulations. This approach is flawed
by design for two reasons: it does not use all of
the tools available for mitigating risk, and it does
not address a core problem of AI, namely the
workload of regulation and control. On the first
point, AI risk mitigation should use the full portfolio
of tools: guidance and standards in particular are
more flexible than regulation, and they still help to
strike the right benefit-to-risk ratio. Giving more
space and more funding to support early guidance
and standards could help the ecosystem avoid
premature regulation that would hinder innovation.