Global Cybersecurity Outlook 2026
Page 21 of 64 · WEF_Global_Cybersecurity_Outlook_2026.pdf
The widespread adoption of AI agents

FIGURE 11: Frequency of AI security assessments in organizations
"Does your organization have a process in place to assess the security of AI tools before deploying them?"
– Yes, we review periodically: 40%
– Yes, we review once: 24%
– No: 29%
– I don't know: 7%
In the 2026 survey, 40% of organizations reported
conducting periodic reviews of their AI tools before
deploying them, rather than only doing a one-time
assessment (24%) – a clear sign of progress towards
continuous assurance. However, roughly one-third
still lack any process to validate AI security before
deployment, leaving systemic exposures even as the
race to adopt AI in cyber defences accelerates.
The market’s drive to adopt new AI features often
outpaces security readiness, creating exploitable
vulnerabilities.3 In response to these emerging risks, a number of fundamental measures should
be prioritized to secure AI at the infrastructure
level. These include protecting the data used in
the training and customization of AI models from
breaches and unauthorized access. AI systems
should be developed with security as a core
principle, incorporating regular updates and
patches to address potential vulnerabilities. It is
also critical for organizations to deploy robust
authentication and encryption protocols to
ensure the protection of customer interactions
and data.4
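The authentication measure described above can be illustrated with a minimal sketch. This is not drawn from the report itself: the key handling and message format are hypothetical, and a production system would rely on vetted TLS/AEAD libraries and a key-management service rather than a raw HMAC, but the sketch shows the basic idea of authenticating customer interactions so tampering is detectable.

```python
import hmac
import hashlib
import secrets

# Hypothetical shared key; in practice this would be provisioned from a
# key-management service, never generated inline alongside the data.
SHARED_KEY = secrets.token_bytes(32)

def sign(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Attach an HMAC-SHA256 tag so the receiver can authenticate the sender."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """Constant-time comparison guards against timing side channels."""
    return hmac.compare_digest(sign(message, key), tag)

msg = b"customer interaction payload"
tag = sign(msg)
assert verify(msg, tag)                    # untouched message authenticates
assert not verify(b"tampered payload", tag)  # any modification is rejected
```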
As AI agents become more widely adopted, they
are poised to transform how digital systems are
designed and developed. AI agents can enhance
efficiency, responsiveness and scalability by
automating complex or repetitive activities with
speed and consistency, but their integration
can challenge traditional security frameworks,
redefining roles and processes, while raising
fundamental questions about decision-making
and the prioritization of alerts.
The multiplication of identities and connections
makes managing their credentials, permissions and interactions just as critical as – and likely even more
complex than – managing those of human users. As
outlined in the World Economic Forum’s report AI
Agents in Action: Foundations for Evaluation and
Governance, without strong governance, agents can
accumulate excessive privileges, be manipulated
through design flaws or prompt injections, or
inadvertently propagate errors and vulnerabilities
at scale. Their speed and persistence amplify
these risks, underscoring the need for continuous
verification, audit trails and robust accountability
structures grounded in zero-trust principles that
treat every interaction as untrusted by default.5
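The zero-trust principle described above can be sketched in a few lines: every agent action is checked against an explicitly scoped credential and recorded in an audit trail, regardless of who the caller is. The class and field names here are illustrative assumptions, not an implementation from the Forum's report.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentIdentity:
    """Illustrative agent credential carrying a minimal, explicit permission scope."""
    agent_id: str
    allowed_actions: frozenset

@dataclass
class ZeroTrustGateway:
    """Treats every request as untrusted: verify scope, then log the decision."""
    audit_trail: list = field(default_factory=list)

    def authorize(self, agent: AgentIdentity, action: str) -> bool:
        permitted = action in agent.allowed_actions
        # Continuous verification: every decision, allowed or denied, is auditable.
        self.audit_trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": agent.agent_id,
            "action": action,
            "permitted": permitted,
        })
        return permitted

gateway = ZeroTrustGateway()
agent = AgentIdentity("triage-bot", frozenset({"read_alerts"}))
assert gateway.authorize(agent, "read_alerts")        # within scope
assert not gateway.authorize(agent, "delete_alerts")  # excessive privilege denied
assert len(gateway.audit_trail) == 2                  # both decisions recorded
```

Keeping the permission set on the credential itself, rather than granting ambient access, is what prevents the privilege accumulation the report warns about.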