FIGURE 2: Examples of private-sector involvement in the regulatory process

Renewed private-sector engagement: regulatory provisions need well-structured guidelines to have real-world effects, which can be developed using industry standards and experience.

- Regulatory provisions: private-sector consultation to provide expert advice and practical insights (e.g. the 2024 consultation on general-purpose AI models; the 2023 consultation on AI regulation).
- Operationalization: private-sector collaboration to transform guidelines and legislation into actionable procedures (e.g. a PPP to industrialize post-market surveillance and monitoring of the increasing number of AI applications; a platform to test, monitor and deploy AI systems at scale).
- Independent testing: the “Validate” programme to evaluate the bias and accuracy of AI models; an evaluation laboratory to assess AI, including its ethics.
- Independent guidelines setting: collaboration to harmonize standards and reporting for AI; collaboration to establish best practices for deploying AI in health; a professional association to develop standards, including for AI; a global non-profit aiming to build responsible AI solutions in health.
Source: EU consultation: https://digital-strategy.ec.europa.eu/en/consultations/ai-act-have-your-say-trustworthy-general-purpose-ai; UK consultation: https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals; FDA consultation: https://www.fda.gov/media/122535/download; Mayo Clinic Platform: https://www.chiefhealthcareexecutive.com/view/ai-success-in-healthcare-requires-transparency-public-private-partnership
3.3 Quality assurance resources: An approach to PPPs for independent testing and training

Quality assurance resources are being established to evaluate and validate AI models independently, using consensus-driven standards and best practices. These resources are structured environments, often in the form of labs hosted at a network of quality assurance resource providers (QARPs). They can draw on a set of community-approved best practices for developing trustworthy health AI, such as those proposed by the Coalition for Health AI (CHAI) or the US National Academy of Medicine’s AI Code of Conduct.
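To make the evaluation role concrete, the sketch below shows one way an independent assurance lab might score a model’s overall accuracy alongside per-subgroup accuracy, flagging large gaps as potential bias. This is a minimal illustration, not a CHAI or QARP specification; the function name, subgroup labels and 5% gap threshold are all hypothetical assumptions.

```python
# Minimal sketch (hypothetical, not a CHAI or QARP specification): an
# independent evaluator scoring a health AI model's accuracy overall and
# per demographic subgroup, flagging gaps that may indicate bias.
from collections import defaultdict

def evaluate_subgroups(y_true, y_pred, groups, max_gap=0.05):
    """Return overall accuracy, per-group accuracy, and a bias flag.

    y_true, y_pred: lists of 0/1 labels; groups: subgroup label per case.
    max_gap: largest tolerated accuracy spread between subgroups
    (threshold is illustrative, not from any published standard).
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    per_group = {g: hits[g] / totals[g] for g in totals}
    overall = sum(hits.values()) / sum(totals.values())
    gap = max(per_group.values()) - min(per_group.values())
    return {"overall": overall, "per_group": per_group,
            "bias_flagged": gap > max_gap}

# Example: a model that performs worse for one subgroup gets flagged.
report = evaluate_subgroups(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 1, 0, 0, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(report)  # subgroup B scores 0.50 vs 0.75 for A, so bias is flagged
```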
Beyond model evaluation, such assurance resources in a network of QARPs can serve as a key infrastructure investment across an AI model’s entire life cycle (development, deployment, and post-deployment governance and monitoring), supporting a range of critical stakeholders in the health AI ecosystem. For example, their access to robust, heterogeneous data can accelerate model training, speeding up development and improving model performance across communities; they can also support longitudinal governance of deployed AI models. The role of QARPs and assurance resources continues to evolve and expand as the concept is tested and scaled.
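As one illustration of what longitudinal governance might involve in practice, the sketch below tracks a deployed model’s rolling accuracy against its validated baseline and raises an alert when performance drifts. The class name, window size and tolerance are hypothetical assumptions, not drawn from any published QARP procedure.

```python
# Minimal sketch (hypothetical): longitudinal governance as a rolling
# post-deployment check that a deployed model's accuracy has not drifted
# below its validated baseline. Window size and tolerance are illustrative.
from collections import deque

class DeploymentMonitor:
    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, y_true, y_pred):
        """Log one adjudicated prediction; return an alert string if the
        rolling accuracy falls below baseline minus tolerance, else None."""
        self.outcomes.append(int(y_true == y_pred))
        rolling = sum(self.outcomes) / len(self.outcomes)
        if rolling < self.baseline - self.tolerance:
            return f"ALERT: rolling accuracy {rolling:.2f} below baseline"
        return None

# Example: feed adjudicated outcomes in as they arrive post-deployment.
monitor = DeploymentMonitor(baseline_accuracy=0.90, window=50)
for truth, pred in [(1, 1), (0, 1), (1, 0), (0, 0), (1, 0)]:
    alert = monitor.record(truth, pred)
    if alert:
        print(alert)
```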
At the end of 2024, CHAI introduced a framework to certify quality assurance resources led primarily by the private sector. Similarly, in the EU, a network of testing and experimentation facilities (TEFs) is being established33 – hospital platforms, living labs and laboratory testing facilities, for example. These facilities will give innovators the capacity to carry out tests and experiments on their AI technologies in large-scale and sustainable real or realistic environments.