Advancing Responsible AI Innovation: A Playbook 2025
52. Meta Open Loop. (2024). Generative AI Risk Management and the NIST Generative AI Profile (NIST AI 600-1).
https://openloop.org/reports/2024/10/report-2-nist-generative-ai-profile.pdf.
53. The White House. (2025). Winning the Race: America’s AI Action Plan.
https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
54. Organisation for Economic Co-operation and Development (OECD). (n.d.). Catalogue of Tools & Metrics for Trustworthy AI.
https://oecd.ai/en/catalogue/overview.
55. Standards Council of Canada. (2025). Artificial Intelligence and Data Governance Standardization Hub.
https://ai-standards-normes-ia.ca/en/home.
56. World Bank Group. (n.d.). Event Recording: GovTech and Public Sector Innovation Global Forum.
https://www.worldbank.org/en/events/2024/12/19/govtech-and-public-sector-innovation-global-forum#5.
57. World Economic Forum. (2024). Generative AI governance: Shaping a collective global future. AI Governance Alliance:
Briefing Paper Series. https://www.weforum.org/publications/ai-governance-alliance-briefing-paper-series/.
58. Reuel, A., P. Connolly, K. F. Meimandi, S. Tewari, et al. (2025). Responsible AI in the Global Context: Maturity Model and
Survey. Stanford University. https://arxiv.org/pdf/2410.09985.
59. MIT AI Risk Repository. (n.d.). AI Incident Tracker. https://airisk.mit.edu/ai-incident-tracker.
60. Hulagadri, A. V., J. Kreutzer, J. G. Ngui and X. B. Yong. (2025). Towards fair and comprehensive multilingual LLM
benchmarking. https://cohere.com/blog/towards-fair-and-comprehensive-multilingual-and-multicultural-llm-
benchmarking.
61. Luccioni, S., B. Hamazaychikov, T. A. de Costa and E. Strubell. (2025). Misinformation by Omission: The Need for More
Environmental Transparency in AI. https://arxiv.org/pdf/2506.15572.
62. 5Rights Foundation. (2025). Children and AI Design Code.
https://5rightsfoundation.com/wp-content/uploads/2025/03/5rights_AI_CODE_DIGITAL.pdf.
63. EU Artificial Intelligence Act. (2025). Article 55: Obligations for Providers of General-Purpose AI Models with Systemic Risk.
https://artificialintelligenceact.eu/article/55/.
64. Singapore Infocomm Media Development Authority & AI Verify Foundation. (2025). Global AI Assurance Pilot, Annex A.
annex-a-global-ai-assurance-pilot.pdf
65. Luccioni, S., B. Hamazaychikov, T. A. de Costa and E. Strubell. (2025). Misinformation by Omission: The Need for More
Environmental Transparency in AI. https://arxiv.org/pdf/2506.15572.
66. Perplexity. (n.d.). Perplexity Acceptable Use Policy. https://www.perplexity.ai/hub/legal/aup.
67. TechBetter. (2024). Evaluating AI Governance: Insights from Public Disclosures.
https://www.ravitdotan.com/_files/ugd/f83391_b853450bcc274e9ba9454d618ee41a94.pdf.
68. Microsoft. (2025). 2025 Responsible AI Transparency Report. https://cdn-dynmedia-1.microsoft.com/is/content/
microsoftcorp/microsoft/msc/documents/presentations/CSR/Responsible-AI-Transparency-Report-2025-vertical.pdf.
69. Cohere. (n.d.). Command R and Command R+ Model Card. https://docs.cohere.com/docs/responsible-use.
70. Anthropic. (2025). Anthropic achieves ISO 42001 certification for responsible AI.
https://www.anthropic.com/news/anthropic-achieves-iso-42001-certification-for-responsible-ai.
71. United Arab Emirates. (n.d.). Regulatory sandboxes in the UAE. https://u.ae/en/about-the-uae/digital-uae/regulatory-
framework/regulatory-sandboxes-in-the-uae.
72. European Commission. (2023). Hiroshima Process International Code of Conduct for Advanced AI Systems.
https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-code-conduct-advanced-ai-systems.
73. Hiroshima AI Process. (n.d.). Supporters. https://www.soumu.go.jp/hiroshimaaiprocess/en/supporters.html.
74. World Economic Forum. (2022). Earning Digital Trust: Decision-Making for Trustworthy Technologies.
https://www.weforum.org/publications/earning-digital-trust-decision-making-for-trustworthy-technologies/.
75. World Economic Forum. (2024). Digital Trust: Supporting Individual Agency. https://www.weforum.org/publications/digital-
trust-supporting-individual-agency/.
76. Zhou, L., V. Prabhakaran, R. Ramasubramanian, R. Levin, et al. (2007). Graceful degradation via versions: specifications
and implementations. Symposium on Principles of Distributed Computing. https://www.microsoft.com/en-us/research/
publication/graceful-degradation-via-versions-specifications-and-implementations/.
77. Internet Matters. (2025). Me, myself and AI: Understanding and safeguarding children’s use of AI chatbots.
https://www.internetmatters.org/wp-content/uploads/2025/07/Me-Myself-AI-Report.pdf.
78. Opet, P. (n.d.). An open letter to third-party suppliers. J.P. Morgan. https://www.jpmorgan.com/technology/technology-
blog/open-letter-to-our-suppliers.
79. Stanford Institute for Human-Centered Artificial Intelligence (HAI). (2025). A framework to report AI’s flaws.
https://hai.stanford.edu/news/a-framework-to-report-ais-flaws.
80. The Alan Turing Institute. (n.d.). Understanding the Impacts of Generative AI Use on Children. https://www.turing.ac.uk/
sites/default/files/2025-05/combined_briefing_-_understanding_the_impacts_of_generative_ai_use_on_children.pdf.