Intelligent Clinical Trials 2024
2. Build data infrastructure: Governments should
establish centralized or federated hubs that
aggregate data for improved accessibility and
use. For example, the Indian state of Telangana
has established itself as a leader in promoting
data sharing throughout India’s fragmented
healthcare system with its Citizen Health Profile
initiative to collect biochemical and phenotypic
data from 40 million citizens and make it
available to healthcare stakeholders. In many
cases, public–private partnership will be needed
to fund infrastructure and ensure that sharing
initiatives maximize the value of data while
safeguarding privacy and security. Establishing
regulatory sandboxes, where new technologies
can be tested in a controlled environment, can
encourage innovation while ensuring that new
use cases are safe and ethical.
3. Create incentives for data sharing: Public–
private partnerships must be formed to drive
policies that create incentives for networked data sharing and reduce fragmentation by
aligning interests throughout healthcare.
The UK Biobank – a government initiative
underwritten largely by the pharmaceutical
industry – is a prime example.22 Monetization
models can create incentives for data sharing
– for instance, by monetizing aggregated data
that can then be shared among contributors.
Quasi-private initiatives, such as Epic's
Cosmos23 – which aggregates data from
community health systems and then sells it to
pharmaceutical companies for R&D – are another
strong example.
Clinical development leaders stressed in interviews
that governments need to strike a balance between
privacy, safety and innovation. The establishment of
the US Food and Drug Administration’s AI Council,24
which oversees AI, including its use in regulatory
decision-making, is encouraging. The private sector
should propose adaptive regulatory frameworks
that provide guidance without stifling innovation.
Barriers
Dramatic changes to the clinical development
status quo are inherently discomfiting. Inertia, skill
deficits and lack of trust must be overcome for
humans to fully embrace progress.
– Overcome inertia: Although companies have
invested in using AI and Gen AI to enable
digital end points, adaptive trial designs and
synthetic control arms, these efforts have yet to
consistently demonstrate their full value. That
is starting to change. The Tufts Center for the
Study of Drug Development and the Digital
Medicine Society (DiMe) united with industry
leaders to measure potential ROI from using
digital end points in clinical trials.25 They found
that doing so shortened trial phases, allowed for
smaller enrolment sizes, increased expected net
present value (eNPV) by as much as $40 million
per indication and offered returns of between
four and six times investment. These results
notwithstanding, leaders interviewed by the
World Economic Forum and ZS said they were
hesitant to fully embrace new trial methods, given
the highly regulated nature of life sciences and
the high stakes of each individual trial.
– Build AI skills: With AI’s advance, there is a
growing need for expertise at the crossroads
of technology, healthcare and data science.
While AI has made inroads in biostatistics for
analysis and drug discovery, its integration
often sits outside core development teams. AI
models can produce varying outputs depending on the specific context, underscoring the
importance of teams with expertise in both
clinical development and AI. Additionally, AI
systems are often deployed within an ensemble
of specialized models, requiring teams to
coordinate systems and integrate outputs into
clinical and operational workflows.
– Overcome trust barriers: Operationalizing Gen
AI in clinical development depends on trust –
from the public, who must consent to their data
being used; from research entities, who must
believe the benefits of data sharing outweigh
the risk of intellectual property leakage; among
individual scientists sceptical of AI’s value
compared with traditional techniques; and
among regulators, who must believe that AI’s
outputs are safe, ethical and reliable.
Recommendations
Trust must be promoted in the ecosystem.
1. Develop smart AI policies: Governments
must establish guidelines that ensure the safe
and effective use of AI, while also promoting
workforce upskilling. This could take the form of
certifications or training programmes that build
confidence in trial teams’ skills in using AI in
clinical development.
2. Enforce data transparency: While there is a
push for greater transparency in AI models,
companies are hesitant to reveal the data used
to train models. Regulators should provide

2.2 Innovation culture, trust and workforce considerations