A Blueprint for Intelligent Economies 2025
Multiple international AI governance initiatives are
being put in place, including UNESCO’s Global
AI Ethics and Governance Observatory,27 the
Readiness Assessment Methodology (RAM) tool,28
the Hiroshima Principles29 and the OECD’s ethical
AI governance framework.30 While these efforts lay
a strong foundation, there remains a critical need
for further action and broader agreement on global
ethics frameworks for AI.
Responsible use guardrails
Responsible use guardrails promote the ethical
and responsible management and use of AI
across various sectors, helping to prevent harmful
applications while maintaining public trust and
accountability. The recently published USAID AI
Action Plan report notes the need for significant
stakeholder engagement, with governments
providing strategic vision, and academia addressing
complex challenges with civil society.
Establishing responsible AI practices requires
a thoughtful approach to ethical standards,
comprehensive transparency initiatives and a
continuous dedication to societal improvement
in various technological domains. The challenge
encompasses not only the creation of these
standards but also the cultivation of public trust and
the maintenance of accountability amid the swift
progression of AI technologies.
The largest developers of AI models have widely
adopted self-governance tools, such as Microsoft’s
Responsible AI Principles,31 Google’s AI Principles32
and Salesforce’s Office of Ethical and Humane
Use.33 The adoption of self-governance processes
should also be encouraged among small and
medium-sized technology businesses. However,
self-regulation presents challenges such as limited
oversight and accountability. Self-regulation alone
can be insufficient, necessitating a degree of
governmental intervention to ensure consistent
implementation of responsible and ethical
AI principles.
Safety and security standards
AI poses risks that are both known and still
emerging, particularly as researchers progress
towards advanced AI development, such as artificial
general intelligence (AGI). To mitigate these AI safety
and security risks, it is crucial to first establish clear
policy “red lines” and safety guardrails.
The EU Artificial Intelligence Act34 categorizes AI
applications into risk levels, setting requirements for high-risk areas like critical infrastructure while
promoting innovation in low-risk sectors. This
approach defines “red line” areas where AI poses
unacceptable risks. In another example, the
collaboration between the US and UK, through
their AI Safety Institutes,35 focuses on developing
shared frameworks for testing advanced AI models,
emphasizing international collaboration. The NIST
AI Risk Management Framework36 is a voluntary
structure for managing AI risks that stresses
trustworthiness and alignment with international
standards but still lacks enforceability and
global consensus.
A global framework and international body for AI
could set boundaries on high-risk technologies, such
as autonomous weapons and mass surveillance
systems. The recent collaboration between
the UK, US and Canada on AI in the nuclear
sector37 highlights the importance of international
cooperation, emphasizing risk management and
balancing human oversight with AI autonomy.
AI regulations
The rapidly changing regulatory landscape for
AI presents significant challenges for industries
delivering technological advancements at regional
or global scale. Companies must adapt to complex
and evolving AI-specific regulations across various
jurisdictions while ensuring compliance with data
protection laws and industry standards. Regulatory
approaches vary widely, from hands-off to hands-
on, and can differ even within regions.
A hands-off approach to regulation minimizes
government intervention, allowing for rapid innovation
and market-driven growth by reducing barriers to
entry. While this creates an environment conducive
to experimentation, it has led to public concerns over
privacy violations and the misuse of technologies
like facial recognition. The alternative is a hands-on
approach that promotes government intervention with
clear guidelines and accountability mechanisms. The
EU Artificial Intelligence Act is one example, setting
binding requirements for high-risk AI applications,
aiming to protect public interests while encouraging
innovation through structured oversight. Narrowly
targeted regulation by governments can also be a
valuable policy lever and can proactively prevent
emerging AI risks while supporting innovation.
The World Economic Forum’s Governance in
the Age of Generative AI report38 suggests that
governments should enhance existing regulations,
clarify authorities and assign responsibilities to
adapt to AI’s regulatory challenges. This includes
addressing privacy, consumer protection, product
liability and competition issues.