Preparing for Artificial General Intelligence 2025
Preparing for Artificial General
Intelligence: Global Risks and
International Coordination
The Global Future Council on Artificial General Intelligence
(AGI) seeks to advance awareness, education and dialogue on
the evaluation, risk assessment and governance of advanced
artificial intelligence (AI) systems, with a specific focus on AGI.
Members collaborate on identifying priorities for policy, research
and coordination, while shaping global awareness on AGI. The
council works towards actionable insights, interacting with key
stakeholders and fostering international alignment on both the
opportunities and the challenges associated with AGI, so that
benefits can be realized responsibly, while addressing open
questions and uncertainties, and maintaining public trust. This
paper draws on the collective expertise of the council and
contains a set of recommendations to guide future efforts.
The council adopts a broad definition of AGI as AI systems that
outperform the majority of skilled adults across a wide range of
non-physical tasks.1
Timelines for AGI
Experts give varying estimates of timelines for AGI
development, which continue to generate debate.2 Some
industry leaders believe that AGI systems could be developed in
the next 2-10 years,3 and even sceptical experts consider AGI
plausible within the next 10-20 years.4 These perspectives highlight
the speculative and uncertain nature of long-range technological
prediction. Rather than a sudden threshold, however, AGI is more
likely to emerge through a gradual accumulation of capabilities
across different domains, with societal and economic impacts
unfolding incrementally over time.
Having made rapid and relatively steady progress recently,
AI is now outperforming many humans in some of the most
challenging tests of programming, abstract reasoning and
scientific reasoning.5 However, how these advances translate
into progress towards AGI remains an open question, in part
because there is no consensus on the scales or benchmarks by
which such progress should be measured.
Automated AI R&D could further accelerate AI progress.
The most advanced AI systems are shifting towards
autonomous “agents”, capable of completing increasingly
complex tasks with less need for human oversight. AI systems
that match or exceed human-level performance in software
engineering or AI R&D might lead to exponential increases in
AI capabilities.
AI companies are already using AI to accelerate their R&D,6
underscoring the importance of parallel efforts to promote
transparency, reproducibility and ethical standards in
scientific discovery.
Opportunities and considerations
for global preparedness and
governance
AGI can be profoundly transformative, with applications
across healthcare, education, accessibility, sustainability and
scientific discovery. AGI could drive major advances in economic
growth, healthcare outcomes and climate solutions, but the
scale and pace of its societal impact remain under debate.
Preparing for a potential AGI future is complex and involves
addressing uncertainties around the equitable distribution of
benefits, workforce transitions and responsible governance. At
the same time, AGI carries risks including misuse, such as in
acts of terrorism,7 undue concentration of power and disruption
to job markets that could fundamentally challenge the role of
human labour in the social contract.
AGI could increase the risk of loss of control – that is,
scenarios in which one or more AI systems act against human
instructions and come to operate outside of human control,
with no clear path to regaining control if their development is
not contained.8 Evidence for this risk is beginning to emerge
as part of controlled experiments, with current systems
changing their behaviour to avoid modification9 or replacement
by a new AI version,10 and carrying out undesired actions and
lying about them.11
Expert opinions on the likelihood of loss of control vary, and
there is growing consensus that current safeguards may be
insufficient for the scale and complexity ahead. At present,
there is no reliable and established way to control AGI-level
systems or ensure their alignment with human intentions or
values, though significant work is under way on methods.

Global Future Council on Artificial General Intelligence
BRIEFING PAPER
OCTOBER 2025