Advancing Responsible AI Innovation A Playbook 2025
FIGURE 4: When to prototype policies and regulatory frameworks

Pre-proposal: Bolsters policy adaptiveness.
Example: Open Loop, a Meta-led initiative, prototyped a framework aimed at informing discussions in the EU regarding a potential risk-based approach to AI.

Pre-enactment: Identifies unintended consequences and operational needs.

Post-enactment: Addresses ambiguities and enforcement gaps.
Example: The General-Purpose AI Code of Practice, developed through engagement with over 1,000 stakeholders, is designed to help industry comply with the EU AI Act’s rules on general-purpose AI.
Example: Canada’s financial regulator, the Office of the Superintendent of Financial Institutions (OSFI), consulted stakeholders on revising AI model risk guidance; feedback suggested only scope adjustments and clarifications were needed, avoiding major changes and prototyping challenges.

Post-enforcement: Examines potential material changes via ongoing efficacy monitoring mechanisms.
Example: In the US, cost-benefit analysis of regulations is required as part of the federal rulemaking process, enabling policy-makers to consider the best policy alternative.
Sources: Meta Open Loop. (2021). AI Impact Assessment: A Policy Prototyping Experiment; US Library of Congress. (2024). Cost-Benefit
Analysis in Federal Agency Rulemaking. https://www.congress.gov/crs-product/IF12058#; European Commission. (2025). General-Purpose AI Code of Practice now available. https://ec.europa.eu/commission/presscorner/detail/en/ip_25_1787

– Provide foundational support to enable organizations in responsible AI. Some approaches could include:
–Resources and tooling: Singapore
provides governance frameworks, tools,
training and certifications, such as the
Model AI Governance Framework and AI
Verify Foundation.
–Ecosystem champions: Canada has
established three non-profit AI Institutes
(Amii, Mila and the Vector Institute) that serve
as third-party facilitators of cross-sector
interaction and public-private engagement to
align priorities and share best practices.

– Sandboxes: The UK’s Financial Conduct
Authority (FCA) offers services for safe
experimentation to firms in various stages
of AI, from discovery to use.39 Firms gain
regulatory support and validation for
confident adoption, while the FCA gains
practical insights to shape future oversight
and policy. Another example is the
Government of India’s effort, in collaboration
with the Centre for the Fourth Industrial
Revolution India, to develop a roadmap
for establishing an AI sandbox ecosystem
tailored to India’s unique needs and
sectoral priorities.40