once performed by humans, the balance of agency
tilts. Incremental AI advances could steadily erode
human influence over the economy, culture,
governance and societal systems.165
The more that AI agents themselves are used in
R&D to develop AI agents further, the greater the
risk that the technology companies managing them
could cease to understand how those AI systems
work. Such R&D automation could accelerate the
timeline for progress in AI, making it even more
difficult for humans to build the technical and
regulatory capabilities to keep pace.166
Military misuse or mistakes
Following Russia’s invasion of Ukraine, both sides
in the conflict have pushed the boundaries of AI
use in warfare. AI technologies have
played important roles in geospatial intelligence,
autonomous systems, and cyber warfare, among
other areas.167 As militaries embed AI deeper into
intelligence, surveillance, logistics, and command
functions, the risk landscape will shift from tactical
to systemic. AI will increasingly influence how
militaries perceive threats, make decisions, and take
actions. AI system failures could propagate through
entire chains of command and deterrence systems.
Without humans firmly in the loop, AI-powered
platforms may misidentify threats,168 respond
to biased data,169 or behave unpredictably in
conditions outside their training parameters.170
Adversaries might use data poisoning – introducing
corrupted data during model training – as a covert
weapon to undermine military AI systems.171
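To make that mechanism concrete, the sketch below shows a minimal, hypothetical label-flipping data-poisoning attack, in which an attacker silently flips a fraction of training labels and degrades a toy classifier. Everything in it is an illustrative assumption rather than a description of any real military system: the synthetic dataset, the logistic-regression model, the poison_labels helper and the poisoning fractions are all invented for this example (Python, using numpy and scikit-learn).

    # Minimal, hypothetical sketch of label-flipping data poisoning.
    # All names, parameters and the toy dataset are illustrative
    # assumptions; nothing here describes a real military AI system.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(seed=0)

    # Toy stand-in for a sensor-classification training set.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0
    )

    def poison_labels(labels, fraction, rng):
        # Flip the labels of a random `fraction` of training examples,
        # mimicking corrupted data slipped covertly into a training pipeline.
        poisoned = labels.copy()
        n_flip = int(fraction * len(labels))
        idx = rng.choice(len(labels), size=n_flip, replace=False)
        poisoned[idx] = 1 - poisoned[idx]  # binary labels: 0 <-> 1
        return poisoned

    # Train on increasingly poisoned labels; evaluate on clean test data.
    for fraction in (0.0, 0.1, 0.3):
        model = LogisticRegression(max_iter=1000)
        model.fit(X_train, poison_labels(y_train, fraction, rng))
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"poisoned fraction={fraction:.0%}  clean-test accuracy={acc:.3f}")

In this toy setting, even a modest flip rate typically lowers accuracy on clean test data, which illustrates why covert corruption of training pipelines is attractive to an adversary: the degradation is real, but its cause is hard to trace.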
When humans are in the loop, an additional set
of risks needs to be considered. Weaponized
generative AI models can instantly fabricate
executive orders or create synthetic, convincing
battlefield footage, potentially confusing both
humans and technology-based responses. Human
decision-making is influenced by cognitive biases,
such as confirmation bias or recency bias, when
interpreting AI outputs. This becomes especially
challenging in conflict conditions, when it can be
tempting to over-rely on AI systems even when they
are not yet fully equipped to provide nuanced
decision-making support.172
The speed at which AI systems operate, when
applied without checks and balances, can itself be
a source of risk. Military crises that once unfolded
over days or hours could instead escalate in
seconds. An automated early-warning system
misinterpreting a missile test, for instance, could
trigger defensive responses from an adversary's
AI system, leading to a conflict started by
technical error rather than strategic intent.
Traditional deterrence, built on human deliberation
and diplomatic channels, may not hold when
algorithms initiate actions before leaders can act.
With countries starting to implement AI tools for
managing nuclear weapons stockpiles and in some
areas of nuclear weapons command, control, and
communications, addressing such risks becomes
especially critical.173
However, major powers are rushing to integrate
AI across military domains, each fearing strategic
disadvantage if rivals move first. This dynamic
incentivizes rapid deployment over rigorous
safety testing, increasing the probability of failures
precisely where consequences are most severe.
The intense pace of innovation makes it unlikely