Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards 2025
BOX 1: The use of AI by threat actors
Cybercriminals can harness AI capabilities to
amplify the scale, sophistication and speed of their
malicious activities, presenting unprecedented
challenges in cybersecurity defence.
– Impersonation, social engineering and
spear phishing: The criminal use of AI has
not only bolstered the scope and efficiency
of cybercrime (including identity theft, fraud,
data privacy violations and intellectual property
breaches), but has also lowered the barriers
to entry for criminal networks that previously
lacked the technical skills.1 A research study
found that large language model (LLM)-
automated phishing can lead to an over-95%
reduction in costs, while maintaining or even
exceeding previous success rates.2
– Reconnaissance: AI has enhanced
reconnaissance efforts for cybercriminals
by automating and refining the information-
gathering process. Attackers can efficiently
analyse vast amounts of data from various
sources, such as by scraping social media,
public records and network traffic to identify
potential targets and vulnerabilities. Though
not a novel use case, AI tools can process
and correlate this data with greater speed and
accuracy, making target selection and external
surface scanning more efficient and effective.3
For example, AI can detect and map out
organizational structures, pinpoint weaknesses
in security configurations and predict likely
security behaviours and responses.
– Discovering and exploiting zero-days:
AI allows cybercriminals to discover unpatched
vulnerabilities such as zero-days – vulnerabilities
unknown to the vendor, with no patch or fix
available – more efficiently and at scale. AI-enabled
reconnaissance tools not only streamline the
identification of zero-day vulnerabilities but
also make it easier to create custom malware
capable of exploiting these weaknesses before
patches can be deployed. Researchers have
also found that multiple GPT-4 models working
in tandem are capable of autonomously
exploiting zero-day vulnerabilities.4
– Compromising AI systems: This involves
cybercriminals exploiting weaknesses in AI
systems’ training datasets (via data poisoning
attacks5), model architectures and operational
frameworks. Data poisoning can degrade a
model’s performance and reliability, leading
to erroneous outputs6 with far-reaching,
sector-specific consequences. In the
financial sector, for example, a successful
data poisoning attack could manipulate
algorithms used for credit scoring or fraud
detection. Such outcomes not only undermine
the integrity of systems, but also expose
institutions to significant financial losses and
reputational damage.
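The degradation described above can be illustrated with a minimal, self-contained sketch (not from the report): a toy nearest-centroid classifier whose training set is poisoned by injecting a handful of mislabelled outlier points. All names, numbers and the classifier itself are illustrative assumptions, not a depiction of any real attack or production system.

```python
import random

random.seed(0)

def make_data(n=200):
    # Two well-separated 1-D clusters: class 0 near 0.0, class 1 near 5.0.
    data = [(random.gauss(0.0, 1.0), 0) for _ in range(n // 2)]
    data += [(random.gauss(5.0, 1.0), 1) for _ in range(n // 2)]
    return data

def fit_centroids(train):
    # Nearest-centroid "model": one mean per class label.
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for x, y in train:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in sums}

def predict(model, x):
    # Assign x to the class with the closest centroid.
    return min(model, key=lambda c: abs(x - model[c]))

def accuracy(model, data):
    return sum(predict(model, x) == y for x, y in data) / len(data)

train, test = make_data(), make_data()
clean_model = fit_centroids(train)

# Poisoning: the attacker injects a few extreme points with the wrong
# label, dragging the class-0 centroid toward class 1's region and
# shifting the decision boundary.
poison = [(20.0, 0)] * 30
poisoned_model = fit_centroids(train + poison)

print(f"clean accuracy:    {accuracy(clean_model, test):.2f}")
print(f"poisoned accuracy: {accuracy(poisoned_model, test):.2f}")
```

Even this crude poisoning measurably lowers accuracy on unchanged test data, which is why the report stresses integrity of training pipelines in high-stakes settings such as credit scoring and fraud detection.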
In the next decade, companies will be defined by their AI
strategy: innovators will succeed, while resisters will vanish.
Today’s chief information security officers (CISOs) play a critical
role in this journey, and must move from blocking the use of AI
to enabling it. But with the technology still in its infancy, the lack
of understanding around AI has the potential to shift the balance
of power to threat actors. The only viable defence is fighting AI
with AI – developing personalized, adaptive security approaches
that can protect an organization at speed and at scale.
Matthew Prince, CEO and Co-Founder, Cloudflare