Advancing Responsible AI Innovation: A Playbook 2025
CASE STUDY 9
Reinventing AI governance with Accenture’s Trusted Agent Huddle
Accenture, a global professional services company, has
been reimagining its marketing operations by integrating
responsible agentic AI directly into its cloud-based AI Refinery
platform.89 To address increasing demand for faster, smarter
campaigns, the organization brought together multiple
autonomous agents to streamline traditional marketing
processes, cutting planning phase steps by 67% and
accelerating time to first draft by 90%. A recently
introduced feature, the Trusted Agent Huddle,90
facilitates secure and observable agentic collaboration
with key ecosystem partners such as Writer, Adobe and
Salesforce. It is intended to embed responsible AI practices directly into daily workflows, governing how agents
interact, share data and make decisions.
Key insight
Reinventing work through agentic AI shifts the focus from
automation to augmentation, unlocking new levels of
creativity, speed and strategic impact. These new ways
of working will require responsible AI capabilities – like the
Trusted Agent Huddle – to be systematically integrated into
workflows to ensure accountable collaboration at scale
between humans and AI agents.
Government leaders
Key roadblocks organizations encounter from the broader ecosystem
– Limited incentives to implement responsible AI technologies, with investment focused on AI
innovation rather than on the technologies that embed trust and regulatory compliance
– Lack of audit mechanisms for third-party AI tools and systems, undermining risk management efforts
and hindering responsibility allocation and governance across the AI ecosystem
– Investment uncertainties, due to the lack of established interoperability standards between legacy
systems and new technologies, and between AI systems and responsible AI technologies, discouraging
long-term investment
Actions for government leaders
– Promote R&D of responsible AI technologies:
Motivate a market for responsible AI technologies
with signals such as recognition, insurance
protection for AI liabilities91 or minimum design
thresholds for AI development (see Play 7).
– Promote interoperability between responsible
AI technologies: As technology-enabled
responsible AI becomes common practice,
companies will need common mechanisms to
assess each other’s approaches. Governments
should drive multistakeholder efforts to establish
interoperability parameters between partners
and upstream and downstream actors. Key
components to address include:
– Common standards: Taxonomies, formats and
communication protocols for responsible AI
metrics and audit data
– Interoperable application programming
interfaces (APIs): Shared definitions for
bias checks, red-teaming, observability (see
Case study 9), explainability, etc.
– System-to-system transparency
mechanisms: Traceability, documentation
and reporting structures that are comparable
across tools
– Standardized trust and risk mechanisms:
Dynamic trust assessments between AI
agents or systems
– Sandboxes: Environments for safe stress-
testing of responsible AI technologies
– Multistakeholder governance models:
Collaborations between government,
industry, academia and civil society to help
set norms and resolve cross-border or cross-
sector inconsistencies
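To make the "common standards" and "interoperable APIs" components concrete, the sketch below shows what a shared, tool-neutral audit record exchanged between two vendors' responsible AI systems could look like. All field names, taxonomy terms and function names here are illustrative assumptions, not drawn from any published standard:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical schema for an interoperable responsible-AI audit record.
# Field names are illustrative assumptions, not an established standard.
@dataclass
class AuditRecord:
    tool: str        # producing system, e.g. one vendor's bias scanner
    check_type: str  # shared taxonomy term: "bias_check", "red_team", ...
    metric: str      # agreed metric name, e.g. "demographic_parity_gap"
    value: float     # measured result
    passed: bool     # verdict against an agreed threshold
    trace_id: str    # identifier enabling cross-tool traceability

def to_wire(record: AuditRecord) -> str:
    """Serialize to a common JSON format any partner tool can parse."""
    return json.dumps(asdict(record), sort_keys=True)

def from_wire(payload: str) -> AuditRecord:
    """Reconstruct a record produced by another vendor's tool."""
    return AuditRecord(**json.loads(payload))

# One tool emits a record; a second, independent tool reads it back.
record = AuditRecord(
    tool="vendor_a_bias_scanner",
    check_type="bias_check",
    metric="demographic_parity_gap",
    value=0.03,
    passed=True,
    trace_id="run-2025-001",
)
roundtrip = from_wire(to_wire(record))
print(roundtrip.metric, roundtrip.passed)
```

Because both sides agree on the taxonomy and wire format rather than on each other's internals, audit results remain comparable across tools, which is the property the transparency and trust mechanisms above depend on.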