Government leaders
Key roadblocks organizations encounter from the broader ecosystem
– Absence of guiding principles, benchmarks, and shared accountability structures, impacting responsible AI design and implementation.
– New AI industries without design standards, such as AI therapy and companionship (which are emerging as the number-one generative AI use case81), highlight sensitive data collection and privacy concerns and pose unique challenges in terms of trust and security, drawing attention to the need for metrics that assess risk across psychological, ethical and social dimensions.
– Misaligned expectations across the AI value chain, from third-party vendors to the organization’s specific responsible AI practices, leading to friction or inconsistencies in downstream use.
– Reliance on venture capital or corporate backing as AI funding models, prioritizing short-term monetization and market success over long-term governance of products that promote the common good.
Actions for government leaders
– Take a socio-technical approach to risk management: Evolve government AI frameworks, policies and regulations to move beyond narrow technical engineering perspectives and consider the role of broader societal forces in determining AI’s outcomes.82 Example approaches include:
  – Fund interdisciplinary research on AI’s economic, social, environmental and political effects.
  – Ensure employees have a voice in the deployment of workplace AI technologies, including protecting organizing rights, strengthening whistleblower protections and prohibiting surveillance practices that deter collective action or expression.
  – Prevent outsized influence from any individual stakeholder group in deciding what constitutes risk or harm, or to what values AI should be aligned.83
– Harmonize responsible design standards for AI: Collaborate across borders and work with the design community and impacted stakeholder groups to create consensus around design risks and mitigation approaches (see Case study 8). Develop public toolkits to drive awareness and fund sandboxes to experiment with safety-centred user experience (UX) innovation. Encourage adoption of standards and frameworks for impacted stakeholder groups. For example, AI products used by children need design standards for age-appropriate interfaces, explainability, and safeguards against manipulation and false or misleading information.84
– Address evolving human-AI interaction impacts: Adopt a multi-pronged approach, which could include:
  – Informing ethical design standards with multi-disciplinary research that assesses impacts across diverse stakeholder groups, such as child-85 or older adult-facing86 products offering AI companionship.
  – Proactively examining emerging areas of human-AI interaction, such as AI use in neurotechnology.
  – Assessing impacts on data practices, including the collection and monetization of sensitive data, such as for engagement-based design, and addressing gaps in data governance policies (see Play 2).
  – Creating public campaigns to increase awareness of the benefits and risks, including AI literacy in education systems.
  – Working with multilateral bodies to enforce broad international adherence to human rights.
– Incentivize the diversity of business models: Encourage alternative revenue-generation approaches that can deliver AI products with greater human alignment, and evaluate models on measures of success beyond profit and engagement metrics, such as contributions to scientific advancement and/or societal well-being. Enable academia and civil society to participate in public-interest frontier AI R&D with public compute, data access and focused research grants, to offset the high costs associated with AI initiatives.