Preparing for Artificial General Intelligence 2025
Page 2 of 4 · WEF_Preparing_for_Artificial_General_Intelligence_2025.pdf
used to detect misaligned objectives, such as chain-of-
thought monitoring.12 These uncertainties reinforce the urgent
need to invest in scientific inquiry while also strengthening
anticipatory oversight and policy frameworks that can adapt
as the technology evolves.
Questions have been raised about how competitive
pressures might affect risk management. As highlighted
in the International AI Safety Report,13 strong competitive
pressure to develop more capable AI systems can incentivize
developers and countries to conduct less thorough risk
mitigations. Likewise, policy-makers face the “evidence
dilemma”, a challenge generated by the pace and uncertainty
of AI’s advancement, in which proactive interventions
may only be catalysed by clear evidence of harm, despite
the risk of waiting too long for this evidence to emerge.14
These dynamics underscore the importance of international
collaboration and dialogue to align competition with safety
and to ensure that trust-building measures keep pace with
technological advances.
Transparency remains a central issue. Despite the
transformative implications, AGI development is often opaque
to the general public. This lack of visibility prevents stakeholders
from detecting whether transformative capabilities, such as
autonomous AI R&D, are imminent or already underway. Efforts
to improve visibility may benefit from continued dialogue and
trust-building among stakeholders, supported by measures
such as transparent reporting, shared benchmarks and
independent evaluations.
Recommendations
Mitigating risks on the path to AGI requires action from
across the ecosystem and is essential to unlock AGI’s
enormous potential. The council therefore suggests the
following guiding principles.
International collaboration is crucial, requiring different
actors to find common ground in averting potentially
severe harms. This could include:
– Establishing a high-level dialogue for coordination on malicious use and safety challenges.
– Jointly exploring verification mechanisms – technical procedures to support confidence in claims about an AI system or related resources.
– Establishing international protocols and sharing best practices for safe and secure development and deployment.

To ensure the safety and security of AI in their jurisdiction, governments could consider:
– Developing the expertise and technical tools required to engage with evolving safety research.
– Exploring shared best practices and establishing safety and security standards for the most advanced systems.
– Facilitating dialogue on how to improve transparency and accountability around advanced AI development, and conducting evidence-based assessment of the impact of AGI on the economy and on society. This could include adapting existing frameworks to AGI.
Frontier developers should prioritize the safety, security and
reliability of their most advanced systems. Many developers
have made strong progress in setting out frameworks for
how they evaluate and mitigate severe AI risks. Further best
practices for developers to consider include:
– Including internal deployments of current frontier systems within safety frameworks. Evaluating safety methodologies and mitigations before internal deployment, particularly for models that may evade control measures or covertly pursue misaligned goals. Defining criteria for safeguard requirements, including internal access and usage restrictions.
– Dedicating a proportion of the overall R&D budget and compute to developing a robust safety case that addresses leading catastrophic risks (e.g. as listed in the International AI Safety Report) and subjecting it to review by independent and recognized external experts.
– Ensuring government awareness of the rate of AI R&D and any safety issues, including the disclosure of relevant evaluation results, incidents and major changes.
AI adopters should request robust and verifiable
assurances on safety and reliability. Companies and
institutions that procure AI systems have an important role in
shaping the development and deployment of AGI. They could:
– Require robust assurances for the safety and security of any system they are procuring.
– Implement reliable monitoring and containment mechanisms for AI systems, particularly for agents. This could include defining clear guardrails for expected behaviour and detecting undesired or unauthorized actions.
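In practice, such guardrails can be as simple as an allowlist of permitted agent actions combined with an audit log of anything outside it. The sketch below is purely illustrative; the action names, function, and log structure are assumptions, not part of any standard:

```python
# Illustrative guardrail for an AI agent: only allowlisted actions proceed,
# and every blocked attempt is recorded for later review.
# All names here are hypothetical examples.

ALLOWED_ACTIONS = {"read_document", "summarize", "draft_email"}

def check_action(action: str, audit_log: list) -> bool:
    """Permit only allowlisted actions; log everything else as blocked."""
    if action in ALLOWED_ACTIONS:
        return True
    audit_log.append({"action": action, "status": "blocked"})
    return False

audit_log = []
check_action("summarize", audit_log)       # permitted, nothing logged
check_action("transfer_funds", audit_log)  # blocked and recorded
print(len(audit_log))  # 1
```

A real deployment would need far richer policies (context-dependent rules, rate limits, human escalation), but the core pattern of an explicit behavioural boundary plus tamper-resistant logging is the same.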
Compute providers could support monitoring of AI activities.
This includes robust know-your-customer checks for large-scale
compute use.
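A know-your-customer check of this kind might, in its simplest form, flag large compute requests from unverified customers. The threshold and fields below are assumptions for illustration only, not an actual regulatory standard:

```python
# Hypothetical know-your-customer gate for large-scale compute requests.
# The threshold value is an arbitrary example, not a real standard.

LARGE_SCALE_FLOPS = 1e25  # assumed reporting threshold for this sketch

def requires_kyc(requested_flops: float, customer_verified: bool) -> bool:
    """Flag requests above the threshold from unverified customers."""
    return requested_flops >= LARGE_SCALE_FLOPS and not customer_verified

print(requires_kyc(2e25, False))  # True: large request, unverified customer
print(requires_kyc(2e25, True))   # False: customer already verified
```

Actual provider checks would also cover identity verification, end-use attestations and ongoing monitoring; the point of the sketch is only that a clear quantitative trigger makes such obligations auditable.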