Global Cybersecurity Outlook 2025
The LLMs currently in use are constitutively insecure, and the adversarial attacks
and supply chain sabotage that are possible are not being addressed in a sufficiently
meaningful way. Integrating these models into critical infrastructure before such
attack vectors are remedied is dangerous and needs to be reevaluated.
Meredith Whittaker, President, Signal
BOX 1
Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards
The AI Governance Alliance, launched by
the World Economic Forum in June 2023,
seeks to provide guidance on the responsible
design, development and deployment of
artificial intelligence systems. Its report, Artificial
Intelligence and Cybersecurity: Balancing Risks
and Rewards, equips top leaders with a set
of questions that can help them define and
communicate the key parameters within which
decision-making on AI adoption and its associated
cybersecurity can be made:
1. Has the right risk tolerance for AI
technologies been set, and is it understood
by all risk owners?
2. Is there a proper balancing of risks against
rewards when new AI projects are considered?
3. Is there an effective process in place to govern
and keep track of the deployment of
AI projects within the organization?
4. Is there a clear understanding of organization-
specific vulnerabilities and cyber risks related
to the use or adoption of AI technologies?
5. Is there clarity on which stakeholders
within the organization need to be involved
in assessing and mitigating the cyber risks
from AI adoption?
6. Are there assurance processes in place
to ensure that AI deployments are
consistent with the organization’s broader
policies and legal and regulatory
obligations – for example, relating
to data protection or health and safety?
AI for cyber defence
AI holds the promise of transforming methods to
defend against cyberthreats. It can give defenders
the upper hand – with advanced tools able to
quickly spot and respond to dangers – if they can
keep up with the pace of AI integration. Simply
put, AI can augment human abilities, making cyber
defence stronger and more efficient.
AI is transforming cybersecurity by reducing toil
and freeing up manpower, enabling systems to
process vast amounts of data for early threat
detection and uncovering hidden risks. This
technology can enhance threat alert triage,
prioritization, anomaly detection and pattern
recognition. It can also classify vulnerabilities,
automate patching, accelerate data processing and
manage configurations.29 Moreover, AI can serve
as a security adviser – like an “AI-CISO” or “virtual
CISO” – improving software security and optimizing
decision-making to make the most of limited
resources. Given recent advances in AI agents,
autonomous assistants that can help defenders
optimize resources to this end may well be
on the horizon.30
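The triage and anomaly-detection capabilities described above can be illustrated with a minimal sketch. The example below uses a simple statistical baseline (a z-score over historical event counts) as a stand-in for the richer machine-learning models the report refers to; the host names and failure counts are illustrative assumptions, not data from any real deployment.

```python
# Minimal sketch of AI-assisted alert triage: flag hosts whose latest
# hourly login-failure count deviates sharply from their own history.
# A z-score baseline stands in for a production ML anomaly detector.
from statistics import mean, stdev

def flag_anomalies(history: dict[str, list[int]], threshold: float = 3.0) -> list[str]:
    """Return hosts whose most recent count is a statistical outlier."""
    flagged = []
    for host, counts in history.items():
        baseline, latest = counts[:-1], counts[-1]
        if len(baseline) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # flat baseline: avoid division by zero
        if (latest - mu) / sigma > threshold:
            flagged.append(host)
    return flagged

history = {
    "web-01": [3, 2, 4, 3, 2, 40],  # sudden spike in failed logins
    "db-01":  [1, 1, 2, 1, 2, 1],   # normal activity
}
print(flag_anomalies(history))  # → ['web-01']
```

Ranking alerts this way lets analysts spend their limited time on the statistically unusual hosts first, which is the resource-optimization point the paragraph makes.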
Large language models (LLMs) also offer the
opportunity to collect richer intelligence, powering
the threat-intelligence cycle. AI models can analyse
and categorize the types of questions attackers
ask, their interaction patterns and even linguistic
markers that might identify specific groups or individuals. This data can then feed back into
threat-intelligence systems, refining detection
algorithms and providing actionable insights to
cybersecurity teams by refining the content analysis
and triage stages.31 Backed by AI and machine
learning, defenders can use continuous monitoring
and real-time visibility to better identify and address
software vulnerabilities such as zero-day threats
and exploits. Advanced threat detection systems
using behavioural analysis, network segmentation
and machine learning can contain potential
breaches and limit the persistence of threat actors
within compromised environments.
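The feedback loop described above – categorizing attacker queries by their linguistic markers and feeding the results back into threat intelligence – can be sketched as follows. Keyword matching here is a deliberately simple stand-in for the model-based classification the report describes, and the category names and marker lists are illustrative assumptions.

```python
# Hedged sketch of the threat-intelligence feedback loop: categorise
# attacker queries by simple linguistic markers, then aggregate the
# results so detection content can be refined.
from collections import Counter

# Illustrative marker lists; a real system would learn these.
CATEGORIES = {
    "reconnaissance": ("scan", "enumerate", "version", "banner"),
    "exploitation":   ("exploit", "payload", "overflow", "injection"),
    "persistence":    ("backdoor", "cron", "startup", "registry"),
}

def categorise(query: str) -> str:
    q = query.lower()
    for category, markers in CATEGORIES.items():
        if any(marker in q for marker in markers):
            return category
    return "unclassified"

queries = [
    "how to scan open ports on a subnet",
    "sql injection payload for login form",
    "add backdoor to cron jobs",
]
print(Counter(categorise(q) for q in queries))
```

Aggregated counts like these give analysts a view of which attack stages are most active, which is the kind of refined triage input the threat-intelligence cycle consumes.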
The integration of LLMs into honeypots represents a
new frontier in deception-based cybersecurity.32 By
embedding LLMs into these decoy environments,
defenders can create sophisticated, dynamic
interactions that adapt to adversarial behaviour in
real time.33 At the core of this innovation is the ability
of LLMs to simulate human-like responses, making
honeypots far more convincing to attackers.
Unlike static systems with preconfigured responses,
LLMs can generate contextually appropriate,
nuanced dialogue in reply to attacker
queries, prolonging engagement
and misleading attackers into thinking they are
interacting with legitimate systems. This creates an
environment in which malicious actors unknowingly
reveal their intent, methods and even operational details.
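The LLM-backed honeypot loop described above can be sketched in a few lines. In this illustration, `fake_llm_reply` is a hypothetical stand-in for a real LLM call (it returns canned but plausible shell output); the point is the structure: every attacker command is logged for threat intelligence while the decoy generates responses convincing enough to prolong engagement.

```python
# Illustrative sketch of an LLM-backed honeypot. fake_llm_reply() is a
# hypothetical stand-in for a real LLM call; it returns plausible shell
# output so the decoy keeps the attacker engaged while every command
# is captured for later analysis.
def fake_llm_reply(command: str) -> str:
    """Stand-in for an LLM generating contextually plausible output."""
    canned = {
        "whoami": "svc_backup",
        "uname -a": "Linux prod-db-03 5.4.0-92-generic x86_64 GNU/Linux",
    }
    return canned.get(command, f"bash: {command.split()[0]}: command not found")

class Honeypot:
    def __init__(self) -> None:
        self.log: list[str] = []  # attacker commands captured for analysis

    def handle(self, command: str) -> str:
        self.log.append(command)  # feeds the threat-intelligence cycle
        return fake_llm_reply(command)

pot = Honeypot()
print(pot.handle("whoami"))         # → svc_backup
print(pot.handle("nc -e /bin/sh"))  # plausible error keeps the fiction alive
print(pot.log)                      # the intelligence the decoy collected
```

A production decoy would replace `fake_llm_reply` with a guarded LLM call and add rate limiting and containment, but the log-then-respond structure is the core of the deception technique.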