Global Risks Report 2026
that sufficient international norms or verification
mechanisms will be established in time. Each
country's pursuit of security may, collectively,
produce a more dangerous world.
Beyond state actors, the democratization of AI
capabilities raises the spectre of asymmetric
security threats. Advanced AI tools could accelerate
the development of novel weapons faster than
governance frameworks can adapt. Even small
groups may eventually wield destructive capacities
once reserved for superpowers, leveraging AI to
design bioweapons, conduct infrastructure attacks
or manufacture disinformation at scale. These risks
will be heightened in countries where the dividing
line between well-resourced national militaries and
criminal groups intent on causing extreme harm is
blurred. Corrupt practices and a
declining rule of law (see Section 2.2: Multipolarity
without multilateralism) could contribute to more
frequent illicit sharing of sensitive information,
technologies or weaponry. Militaries may then use
AI-powered autonomous technology both to deflect
human responsibility in warfare174 and, in parallel,
to shift that responsibility towards loosely associated
non-state actors. These dangerous trajectories
could lead to a world in which the parties to a
conflict become difficult to identify and plausible
deniability becomes the norm.
Actions for today
To build a resilient workforce, governments and
businesses should be proactive in planning ahead,
and treat skills development and job transition
planning as core elements of AI deployment. This
includes funding scalable reskilling infrastructure,
incentivizing job creation in emerging sectors, and
targeting support for high-risk groups such as
youth, people in routine service and administration
roles, and older workers. If the negative impacts of
AI on labour markets accelerate, each year of policy
inaction will widen the adaptation gap between
technology and the workforce, raising the costs of
correction. To stay ahead of the curve, governments
should also strengthen their monitoring of labour-
market, social and geopolitical risks, much as
financial markets are monitored for systemic
exposure. This includes tracking job churn, trust
indicators and political volatility, and using tools
such as scenario planning.
Beyond workforce considerations, the social
contract between citizens and governments will
itself require renewal to be fit for the era of
AI. Investing in public digital infrastructure and
ensuring linguistic, geographic and socioeconomic
inclusivity in AI design and access are essential to
avoid the emergence of a globally marginalized AI
underclass. Public awareness and education will
be central to rebuilding the social contract and
trust in an AI-transformed economy over the next
decade. It will also help to mitigate the risks most
closely associated with Adverse impacts of AI
technologies, which include Misinformation and
disinformation and Cyber insecurity (Figures
54 and 55). In parallel, societies must prepare
for extended support to those most impacted by
technological unemployment, exploring adaptive
models of social protection and investing in the
civic, psychological and cultural infrastructure
needed to maintain purpose, meaning and
participation in an AI-transformed economy.
The long-term risks stemming from AI depend
on choices made, or avoided, within the short
to medium term. However, fragmentation of
regulatory regimes is increasing the risk of a race
to the bottom. Coordination on minimum standards
for safety, transparency and ethical deployment,
particularly for military, biometric and large-
scale decision-making systems, is needed, yet it
requires cooperation similar to that underpinning
nuclear or bioweapons safeguards.