Advancing Responsible AI Innovation: A Playbook 2025
CASE STUDY 7
The Hiroshima AI Process International Code
of Conduct and Reporting Framework
In 2023, under Japan’s presidency, the G7 launched the
Hiroshima AI Process, resulting in the Hiroshima Process
International Code of Conduct for Organizations Developing
Advanced AI Systems.72 This voluntary code promotes
ethical, transparent and secure practices. To reinforce
accountability, the G7 and OECD introduced a voluntary
reporting framework in 2025 for organizations in member
and partner countries. Although initial reports were submitted, variation in their
detail and transparency limited the consistency and comparability of the results,
and the framework's voluntary nature posed challenges for participation and adherence.
The G7, under its Canadian presidency, is exploring
additional incentives and clearer guidance. Japan also leads a
forum for broader collaboration, the Hiroshima AI Process
Friends Group, which now comprises 56 countries and
regions.73 Increasing participation by organizations across
diverse jurisdictions will also require reporting requirements
that account for language and timing.
Key insight
Commitments to voluntary frameworks alone are insufficient
for ensuring transparent and accountable responsible AI
practices by organizations. They likely require the layering
of instruments to assess claims (see Table 2), such as
standardized reporting.

TABLE 2
Instruments for reporting on responsible AI practices, by content

Commitments – how an organization says it will implement responsible AI

Instruments:
–Individual: Informal (blogs, speeches) or formal commitments (principles,
policies, frameworks), e.g. the Perplexity Acceptable Use Policy66
–Joint: Commitments from multiple organizations (see Case study 7)

Considerations (non-exhaustive):
Advantages:
–Agile method for signalling norms
–Flexible to organization context
Limitations:
–Low adoption with varied adherence
–Limited public evidence correlating responsible AI commitments
with implementation67

Claims – how an organization self-reports its responsible AI practice

Instruments:
–Reports: Detailing practices, e.g. the Microsoft 2025 Responsible AI
Transparency Report68
–Cards: Insights into the development, governance and safety of an AI model,
system of models, or service, e.g. the Cohere Command R and Command R+
Model Card69

Considerations (non-exhaustive):
Advantages:
–Provides a benchmark for other companies
–Promotes feedback and accountability
Limitations:
–Variability can hinder standardized comparisons across multiple companies
–Self-reporting bias may occur

Evidence – how an organization substantiates its responsible AI practice

Instruments:
–Certifications: A review, typically against set criteria, e.g. Anthropic
certified by Schellman Compliance, LLC against ISO/IEC 42001:202370
–Sandboxes: Third-party controlled or monitored environments for AI testing,
e.g. the United Arab Emirates regulatory sandboxes71

Considerations (non-exhaustive):
Advantages:
–Provides credibility if certified by a reputable party
–Incentivized adoption in pursuit of market differentiation
Limitations:
–Variability in certification methods risks practice fragmentation
–Costly to implement, with ongoing renewal needs