Preparing for Artificial General Intelligence 2025
Endnotes
1. This roughly falls between “Competent AGI” and “Expert AGI” in the framework on levels of AGI proposed by Google DeepMind. See Morris,
M. R., Sohl-Dickstein, J., Fiedel, N., Warkentin, T., Dafoe, A., Faust, A., ... & Legg, S. (2023). Levels of AGI for Operationalizing Progress on
the Path to AGI. arXiv preprint arXiv:2311.02462. While this broad definition is intended to provide a useful benchmark, achieving AGI will
likely not be a single binary event but rather a process of incremental technological advancements and adoption across society, with greater
capabilities on some benchmarks than on others.
2. “AI systems that are better than almost all humans at almost all tasks… [are] quite likely… in the next 2 or 3 years,” Dario Amodei, CEO,
Anthropic, told CNBC Television. (2025, January 21). Anthropic CEO: More confident than ever that we’re ‘very close’ to powerful AI
capabilities [Video]. https://www.youtube.com/watch?v=7LNyUbii0zw. “I would say [we are] probably like 3 to 5 years away [from AGI],”
Demis Hassabis, CEO, Google DeepMind, told the Big Technology Podcast. (2025, February). Google DeepMind CEO Demis Hassabis:
The path to AGI, deceptive AIs, building a virtual cell. https://www.youtube.com/watch?v=yr0GiSgUvPU. “I think AGI will probably get
developed during this president’s term,” Sam Altman, CEO, OpenAI, said in an interview with Bloomberg. (2025, January 6). Sam Altman
Interview: OpenAI CEO’s plans for ChatGPT, his firing and return, and what’s next.
https://www.bloomberg.com/features/2025-sam-altman-interview/. “Reaching Human-Level AI will take several years if not a decade,”
Yann LeCun, Chief AI Scientist, Meta, wrote on X. (2024, October 16). I said that reaching human-level AI will take several years if not a
decade […] [Post]. X. https://x.com/ylecun/status/1846574605894340950.
3. Within one year, the aggregate forecast had shortened by 13 years: In 2022, researchers on average estimated a 50% chance of AGI by
2060. In the 2023 survey this forecast had shortened to 2047. See: Grace, K., Stewart, H., Sandkühler, J. F., Thomas, S., Weinstein-Raun,
B., & Brauner, J. (2024). Thousands of AI authors on the future of AI. arXiv preprint arXiv:2401.02843. Provisional data from the 2024 survey
shows an average 50% estimate of 2039.
4. “When AGI does actually come, perhaps 10 or 20 years from now […],” said Gary Marcus, professor emeritus of psychology and neural
science at New York University, on X. (2024, December 24). When AGI does actually come, perhaps 10 or 20 years from now […] [Post].
X. https://x.com/GaryMarcus/status/1871605871282999760. “I think actual transformative effects (e.g. most cognitive tasks being done
by AI) is decades away (80% likely that it is more than 20 years away),” said Arvind Narayanan, professor of computer science at Princeton
University and director of the Center for Information Technology Policy. This implies a 20% chance that AI will be doing most cognitive tasks
by 2045. See: Toner, H. (2025, September 10). “Long” timelines to advanced AI have become more common — here’s why. [Newsletter].
Substack. https://helentoner.substack.com/p/long-timelines-to-advanced-ai-have.
5. International AI Safety Report 2025 (Figure 0.1 and chapters 1.2 and 1.3). See: Bengio, Y., Mindermann, S., Privitera, D., Besiroglu, T.,
Bommasani, R., Casper, S., ... & Zeng, Y. (2025). International AI Safety Report. arXiv preprint arXiv:2501.17805. Additionally, a recent
analysis by the research organization METR concluded that the length of software-related tasks AI can complete is doubling every 7
months, and extrapolating this trend puts human-level performance around 2030. See: Kwa, T., West, B., Becker, J., Deng, A., Garcia, K.,
Hasin, M., ... & Chan, L. (2025). Measuring AI Ability to Complete Long Tasks. arXiv preprint arXiv:2503.14499; METR. (2025, March 19).
Measuring AI ability to complete long tasks [Blog post]. METR. https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks.
6. Companies are already using AI-powered assistants to aid software development. For example, Google DeepMind used the coding agent
AlphaEvolve to optimize Google’s computing ecosystem and enhance AI training. See: Google DeepMind. (2025, May 14). AlphaEvolve:
A Gemini-powered coding agent for designing advanced algorithms [Blog post]. Google DeepMind.
https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/. Amazon claims
to have achieved annual cost savings of $260 million through its AI-powered assistant. See: Amazon Web Services. (2024, August 1).
Amazon Q Developer just reached a $260 million milestone [Blog post]. AWS Blogs.
https://aws.amazon.com/blogs/devops/amazon-q-developer-just-reached-a-260-million-dollar-milestone/. Microsoft CEO Satya Nadella
said that 20-30% of the company’s code was written by AI. See: Mozur, P. (2025, April 29). Microsoft CEO says up to 30% of the
company’s code was written by AI. TechCrunch. https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-30-of-the-companys-code-was-written-by-ai.
7. Bengio, Y., Mindermann, S., Privitera, D., Besiroglu, T., Bommasani, R., Casper, S., ... & Zeng, Y. (2025). International AI Safety Report. arXiv
preprint arXiv:2501.17805.
8. Ibid.
9. Greenblatt, R., Denison, C., Wright, B., Roger, F., MacDiarmid, M., Marks, S., ... & Hubinger, E. (2024). Alignment faking in large language
models. arXiv preprint arXiv:2412.14093.
10. Scheurer, J., Balesni, M., & Hobbhahn, M. (2023). Large language models can strategically deceive their users when put under pressure.
arXiv preprint arXiv:2311.07590; Meinke, A., Schoen, B., Scheurer, J., Balesni, M., Shah, R., & Hobbhahn, M. (2024). Frontier models
are capable of in-context scheming. arXiv preprint arXiv:2412.04984; Akin, C. (2024, November 6). Our research on strategic deception
presented at the UK’s AI Safety Summit [Blog post]. Apollo Research. https://www.apolloresearch.ai/research/our-research-on-strategic-
deception-presented-at-the-uks-ai-safety-summit.
11. Bengio, Y., Hinton, G., Yao, A., Song, D., Abbeel, P., Darrell, T., ... & Mindermann, S. (2024). Managing extreme AI risks amid rapid progress.
Science, 384(6698), 842-845.
12. Significant work is under way on methods to detect misaligned objectives, notably chain-of-thought monitoring. See, e.g., Baker, B.,
Huizinga, J., Gao, L., Dou, Z., Guan, M. Y., Madry, A., ... & Farhi, D. (2025). Monitoring reasoning models for misbehavior and
the risks of promoting obfuscation. arXiv preprint arXiv:2503.11926.
13. Bengio, Y., Mindermann, S., Privitera, D., Besiroglu, T., Bommasani, R., Casper, S., ... & Zeng, Y. (2025). International AI Safety Report. arXiv
preprint arXiv:2501.17805.
14. Ibid.
15. Existing transparency frameworks include the Hiroshima AI Process, which aims to standardize safety and risk mitigation reporting and
promotes responsible governance. See: Ministry of Internal Affairs and Communications, Japan. (n.d.). Hiroshima AI Process: Leading the
Global Challenge to Shape Inclusive Governance for Generative AI. https://www.soumu.go.jp/hiroshimaaiprocess/en/index.html.