Advancing Responsible AI Innovation A Playbook 2025
Government leaders
Key roadblocks organizations encounter from the broader ecosystem
– Undefined data governance standards for generative AI, challenging data minimization principles
– Fragmented data governance due to factors such as AI nationalism, the nascent adoption of AI standards and conflicting regulations, affecting data sharing and interoperability
– Limited incentives and lack of standards in emerging data exchange markets, preventing proper oversight of data quality, provenance tracking and fair compensation
– Legal ambiguities, generating resistance from companies to share data for fear that recipients will use it to train AI models
– Data market concentration generating information asymmetries and stifling competition, particularly for small- and medium-sized enterprises (SMEs) and start-ups
Actions for government leaders
– Clarify data governance to account for generative AI: Assess the impact of generative AI on how businesses are incentivized to collect, retain, use and monetize data. Identify and address gaps in current data governance and content management policies. Consider affordances needed for vulnerable or marginalized populations, such as protecting indigenous data sovereignty rights23 or children’s rights24 (see Play 7).
– Promote open and inclusive data ecosystems: Develop and harmonize policy and regulatory frameworks to enable responsible data sharing, including legal definitions and guidelines for emerging models like data trusts and cooperatives. For example, the EU’s Data Governance Act supports trusted data intermediaries and promotes data altruism, laying the groundwork for new stewardship and sharing models.25 Provide legal clarity on when data can be used to train models, so that organizations can share data without fear of exploitation. In the US, the AI Action Plan directs the National Science Foundation and Department of Energy to create secure compute environments for controlled AI access to restricted federal data, alongside the creation of an online portal for a demonstration project.26
– Enable secure sharing through:
– Experimentation support: Establish regulatory sandboxes to test sharing models without facing compliance risks, and provide compliance-by-design tooling.
– Mutually beneficial data-sharing markets: Promote clear rules on use, value measurement, contributor rights and compensation, including for aggregators and individuals. Financial markets offer a proven blueprint: just as analysts evaluate stocks and shareholders earn dividends, data markets could employ analysts to assess data quality while compensating data owners for their contributions.
– Shared data infrastructure: Facilitate secure, ethical and sovereignty-respecting access to high-quality domestic and cross-border datasets (see Case study 3).
– Address synthetic data macro-challenges: While synthetic data offers a way to mitigate data scarcity, its adoption raises challenges that must be addressed: incentivizing foundation model providers to revise usage policies that prohibit synthetic data production, providing transparency into its biases and limitations, and preventing negative externalities of mass adoption (e.g. model collapse).27