The Future of AI-Enabled Health 2025
On the second point, it must be acknowledged
that the rapid pace of AI development necessitates
a new approach to expanding validation capacity.
This can be achieved by broadening the validation
process, under the supervision of regulators and
governments, to involve not only government
bodies but also non-profits, care providers and
private-sector players. Unlike the slower pace of
drug and medical device development, AI software
evolves rapidly and public capabilities alone cannot
keep pace. New testing and validation methods are
required, and private-sector expertise is crucial in
developing these. While the private sector cannot
directly regulate AI, its collaboration helps to ensure
that new regulatory frameworks are well informed,
practical and agile enough to keep up with
technological advances.
Yet the private sector is a diverse landscape, and
large organizations are more likely to be able to
free up resources to participate in discussions on
guidelines and standards. The academic world
might suffer from the same lack of resources as
smaller private-sector organizations. This is why
public–private partnerships, including funded PPPs
as defined by the American National Standards
Institute (ANSI),20 should be considered for long-
term sustainability, as well as the inclusion of small
and medium-sized enterprises (SMEs) or academics
in the design and rollout of standards, guidance
and, eventually, regulations. Collaboration initiatives
are also promising. For example, the Coalition for
Health AI (CHAI) brings together a diverse array of
stakeholders to drive the development, evaluation
and appropriate use of AI in healthcare. CHAI has
developed a certification framework to establish
a network of quality assurance laboratories that
evaluate AI models for healthcare use.
AI regulation often follows a one-size-fits-all
approach, which is inadequate for the diverse
and rapidly evolving nature of AI technologies.
The nature of genAI, which is non-deterministic
and can evolve as data is collected during use,
requires more flexible and nuanced regulatory
approaches compared to traditional AI. Current
regulatory frameworks struggle to adapt to these
technologies, as traditional methods are ill-equipped
to manage their complexities and rapid evolution.
GenAI’s unique characteristics and risks demand
a regulatory framework that is both adaptive and
forward-thinking, ensuring that regulations keep
pace with technological advances. A stronger focus
on post-market surveillance could be considered
as a way of detecting new risks early, addressing
errors and biases, and adapting iteratively.
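The post-market surveillance loop described above can be sketched as a simple monitoring routine: track a deployed model's performance on labelled real-world cases and raise an alert when it drifts from its validated baseline. This is an illustrative example only; the class, thresholds and metric are assumptions, not part of any regulatory framework.

```python
from collections import deque

class PostMarketMonitor:
    """Illustrative post-market surveillance sketch: track a deployed
    model's rolling accuracy on labelled cases and flag drift early.
    The baseline and tolerance values are hypothetical, not regulatory."""

    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.window = deque(maxlen=window)  # most recent case outcomes
        self.tolerance = tolerance          # allowed drop before alerting

    def record(self, prediction, ground_truth):
        """Log one post-deployment case once its true outcome is known."""
        self.window.append(prediction == ground_truth)

    def rolling_accuracy(self):
        return sum(self.window) / len(self.window) if self.window else None

    def drift_alert(self):
        """True when rolling accuracy falls below baseline - tolerance,
        signalling that the model should be reviewed and iterated on."""
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance
```

In practice such a monitor would feed review workflows rather than act autonomously, keeping humans in the loop when an alert fires.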
Finally, data protectionism hampers innovation
and limits the potential for AI advances. It restricts
the ability to develop robust AI using unbiased
datasets and to validate AI tools in local contexts.
To facilitate global adoption and development,
it is essential to ensure the convergence of data
models and exchange standards; for example, through locally controlled but globally federated
datasets.21 These datasets enable AI solutions
to be developed and validated for different local
populations, ensuring greater accuracy and safety
while preserving privacy.
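The "locally controlled but globally federated" pattern can be illustrated with a minimal federated-averaging sketch: each site updates a shared model on its own data and only the parameters, never the records, leave the site. The linear model, learning rate and site datasets below are illustrative assumptions, a simplification of real federated-learning protocols.

```python
# Minimal federated-averaging sketch. Each site performs a local update
# on private data; a coordinator averages the resulting parameters.
# All model and data choices here are hypothetical.

def local_update(weights, local_data, lr=0.1):
    """One pass of gradient steps for a 1-feature linear model on a
    site's private data; only the updated weights are returned."""
    w, b = weights
    for x, y in local_data:          # the data never leaves the site
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err
    return (w, b)

def federated_average(site_weights):
    """Aggregate per-site parameters into a global model (FedAvg-style)."""
    n = len(site_weights)
    w = sum(wt for wt, _ in site_weights) / n
    b = sum(bi for _, bi in site_weights) / n
    return (w, b)

# One federated round over two hypothetical sites:
site_a = [(1.0, 2.0), (2.0, 4.0)]    # local dataset, stays at site A
site_b = [(3.0, 6.0), (4.0, 8.0)]    # local dataset, stays at site B
global_model = (0.0, 0.0)
updates = [local_update(global_model, site) for site in (site_a, site_b)]
global_model = federated_average(updates)
```

Repeating such rounds lets a model learn from, and be validated against, each local population without any site surrendering control of its data.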
Difficulty in building trust within a
complex ecosystem
A global study found that 44% of people surveyed
expressed a willingness to trust AI in health
applications,22 reflecting cautious optimism about
its potential benefits alongside concerns about its
implementation and oversight. This cautious attitude
is supported by data showing that 67% of health
leaders in the US trusted AI technology to process
medical records by 2020, a significant increase
from 54% in 2018.23 However, the acceptance of
AI in health systems remains at risk due to broader
concerns about misinformation and the quality of
health information. This sentiment is echoed in
consumer attitudes to AI in different countries, as
illustrated in Figure 7, where feelings about AI are
mixed, with more than 40% expressing concern in
the US, Switzerland, the United Kingdom, France
and Australia, while fewer than 20% share this
concern in China, India, Thailand, Saudi Arabia,
Indonesia and Mexico.
Building trust in AI for health requires a concerted
effort on both the regulatory and business fronts.
Transparency is a cornerstone in this endeavour.
Regulatory bodies must ensure openness, clear
communication and full disclosure of important
facts about AI technologies to alleviate public
concerns. As emphasized by the WHO: “it is
fundamental to consider streamlining the oversight
process for AI regulation through […] engagement
and collaboration [among key stakeholders]”.24
Equally important is integrity, with regulations
enforcing consistent honesty and ethical behaviour,
ensuring that actions align with stated goals.
Human oversight also plays a crucial role,
especially given the challenges with genAI, such
as hallucinations. Ensuring that humans remain
involved in validating AI outputs and supporting
decision-making is essential for maintaining
trust and efficacy. Furthermore, avoiding the
anthropomorphization of AI is vital, as it blurs
the line between AI's perceived capabilities and
its actual limitations. Humans must remain
accountable to ensure trust, safeguard efficacy
and address potential issues in AI systems.
AI ambassadors can play an important role in
communicating the benefits and limitations of AI-
based products, helping to build a clear strategy for
trust and transparency.
From a business perspective, demonstrating
integrity, competence and potential is fundamental.