Advancing Responsible AI Innovation A Playbook 2025
Government leaders
Key roadblocks organizations encounter from the broader ecosystem
Limited incentives to report on responsible AI practices, discouraging companies from sharing information
for fear of facing reputational and liability issues
Lack of standardized incident reporting protocols, impeding the collection of reliable and comprehensive
data, critical for preventing and mitigating future incidents
Opacity of AI’s environmental impact: 84% of generative AI use is done through undisclosed models.61
As AI adoption grows, data on the environmental impacts is increasingly scarce, fuelling misinformation
and public misconceptions.
Ethics washing, in which companies overstate their responsible AI capabilities, creating an uneven
playing field where genuine efforts are discouraged or overshadowed by exaggerated claims
Static benchmarks that fall out of step with emerging risks, especially as many popular benchmarks
reach saturation or lack transparency, reproducibility and real-world relevance
Actions for government leaders
–Assess the state of responsible AI
practices in the industry: Policy-makers must
understand the state of responsible practices
by AI providers and industry users within their
jurisdiction. Such assessments can:
–Incentivize organizations to measure
maturity: Build awareness of the actual
state of responsible AI practices within
the organization.
–Support evidence-based policy: Educate
policy-makers on industry practices and
responsible AI implementation challenges.
–Prevent unnecessary regulation: Identify
cases where enough companies already
demonstrate proactive and sufficient risk
management, making new rules unnecessary.
–Provide insights into forthcoming AI
capabilities: Stay abreast of developing
AI to assess potential opportunities and
challenges of jurisdictional interest, such
as national security.
Jurisdictions should consider the advantages
and limitations of various reporting instruments
when incentivizing industry to report responsible
AI practices (see Table 2). Layering multiple
instruments can help offset trade-offs and
bolster overall efficacy (see Case study 7).
Mandating reporting in select instances may
offset participation challenges. Additionally,
governments should support academia, civil
society, and third-party efforts to assess the
state of responsible AI practices.
–Standardize and incentivize risk and incident
reporting: Promote compliance, data quality and insights gathering across jurisdictions through
harmonized taxonomies, safe harbour provisions,
and interoperable disclosure platforms that
encourage transparency while safeguarding
innovation. The level of disclosure for risks and
incidents may vary depending on the audience
and availability of expertise and resources
required to analyse information.62 Disclosures
must balance data privacy and security,
particularly when reporting incidents related to
vulnerable populations. Participation incentives
for reporting could include access to other
organizations’ reported incidents or mandated
disclosure. For example, the EU AI Act requires
general-purpose AI model providers of high-risk
systems to “track, document and report relevant
information about serious incidents and possible
corrective measures to address them.”63
–Drive the evolution of benchmarks and
standardize validation: Access to updated
code, test sets and validation methods is
needed to ensure companies and regulators
base decisions on accurate metrics for system
performance. Convene industry, academia
and civil society to identify benchmarks
and standards for AI safety assessments
across industries and contexts. For example,
Singapore’s Global AI Assurance Pilot
gathered 17 organizations from 10 industries
and nine countries to co-develop norms and
practices for generative AI testing.64
–Facilitate environmental transparency
disclosures: Consider voluntary or mandatory
industry-wide measures that include
publishing impact data
(e.g. energy, carbon, water) across the AI value
chain, integrating AI’s environmental costs into
corporate reporting and procurement, and
developing standardized verification processes
(see Case study 2).65