Advancing Responsible AI Innovation A Playbook 2025
CASE STUDY 6
Adapting the NIST AI RMF at Workday
Workday aligned cross-functionally to map its existing controls
(covering policy, risk evaluation and third-party tool assessments)
to the NIST AI RMF’s categories and taxonomy. An AI Advisory
Board, including C-suite executives, steered the programme,
managed edge cases and enforced reporting lines between
developers and governance teams. Workday also implemented
an RMF-based responsible AI questionnaire for evaluating third-
party AI tools and updated its data sheets for transparency.51

Key insight
By operationalizing NIST AI RMF into standardized templates
and tooling, organizations can continue to refine their risk
management approach while embedding controls into
existing processes to advance responsible, transparent
and risk-aware AI deployment at scale.

–Make use of emerging context-specific
guidance: Organizations should consider
participating in community-based working
group efforts that are under way to interpret
actor-agnostic risk management frameworks
for specific business contexts, such as
MLCommons and the OWASP Generative AI
Security Project. Examples of context-specific
frameworks include:
–Activity-based: Risk Management
Framework for the Procurement of
AI Systems (RMF PAIS 1.0), adapted
from traditional risk management
frameworks (e.g. ISO 31000).48
–Size-based: Responsible AI Startups (RAIS)
Framework, providing guidance for the
venture capital industry for investing in early-
stage companies.49
–Sector-based: Monetary Authority of
Singapore’s Artificial Intelligence Model Risk
Management information paper, providing
guidance for financial institutions.50
Using multiple context-specific frameworks can produce
contradictory recommendations; this risk can be mitigated
through AI literacy and clear lines of accountability.
Government leaders
Key roadblocks organizations encounter from the broader ecosystem
–Competing incentives to self-assess responsible AI maturity, creating fear that limitations in their responsible AI capabilities will expose them to legal liability
–Limited awareness of industry best practices, affecting their risk management strategies
–Difficulty in adapting industry- or actor-agnostic risk management frameworks, such as NIST or ISO, due to high workload, complexity of guidelines, and limited human and financial resources of organizations52