Artificial Intelligence and Cybersecurity Balancing Risks and Rewards 2025
Page 16 of 28 · WEF_Artificial_Intelligence_and_Cybersecurity_Balancing_Risks_and_Rewards_2025.pdf
Organizations need to develop an understanding
of what vulnerabilities might be introduced as
they adopt AI technologies, and of which security
properties might be weakened should threat actors
successfully exploit them.
Consider Figure 3, which details the potential areas
of vulnerability of the AI system: –The core AI infrastructure and supporting
infrastructure that needs to be taken into
consideration
–How this could expand attack surface and how
this infrastructure might be compromised
–The security properties that must therefore be
considered at risk
New tech, same need for security BOX 2
The traditional CIA triad remains critical: the
compromise of AI systems and supporting
infrastructure has the potential to impact on the
Confidentiality, Integrity and Availability of data and
assets. Other important security properties include:
–Explainability: refers to the concept that human
users can comprehend the outputs generated
by the AI model. –Traceability: a property of the AI that signifies
whether it allows users to track its processes
– including understanding the data used and
how it was processed by the models.
A lack of explainability or traceability may affect
the organization’s ability to investigate and mitigate
against the impacts of an AI-system compromise.
AI system attack surface and security properties FIGURE 3
Input
Data sources feeding into AI models
(customer data, customer requests,
internal requests, sensors, internal
applications e.g. calendars)
Examples of compromise
– Prompt injection
– Model evasion (input data altering
model behaviour)
– Jailbreaking
Related security properties
– Data integrity: lineage, completeness,
bias management, timeliness
(up-to-date)
– Availability of input data
– Confidentiality of input dataMonitoring and logging
Tools for monitoring the performance
and security of AI systems
Data storage
Examples of compromise
– Leakage of data
– Manipulation or insertion of data
(leading to model poisoning)Underlying hardware/software
stack, operating system
Examples of compromise
– Exploitation of vulnerabilities leading to
compromise of underlying infrastructureAPIs and interfaces
Examples of compromise
– Exploitation of vulnerabilities leading to
data compromise at APIs
Manipulated input or output dataModel development and update
Examples of compromise
– Malign insertion of vulnerabilities (backdoors)
– Developer errors
– Compromise of development environmentOutput
Data outputted by the AI model
Examples of compromise
– Manipulation of data post-output
(e.g. through API compromise)
– Leakage of data post-output
– Otherwise preventing output data
from reaching business applications
Related security properties
– Data integrity
– Data reliability
– Availability of output data
– Confidentiality of output data
– Explainability of output dataBusiness
applications
(What is the output
data used for)
(Non-exhaustive list)
– Driving business
processes
– Presenting
information to
end users/clients
(recommendation
engines,
chatbots)Core AI infrastructure
Directly supporting infrastructureRelated security properties
– Integrity (of monitoring information)
– Confidentiality of monitoring and model data
Model
AI model deployed in a live
environment
Examples of compromise
– Exploitation of vulnerabilities
– Alteration of model code
Related security properties
– Integrity of model
– Reliability of model
(can it produce accurate and
consistent information)
– Model explainability and traceability
– Confidentiality of model
– Availability of model functionality– Manipulation of monitoring tools’ integrity
– Data leakage from monitoring tools
– Compromise of monitoring tools access
Lateral movement, e.g. to access AI model codeExamples of compromise
Training
The process of training the AI model on datasets,
which may continue during deployment
Examples of compromise
– Training data poisoning
– Compromise of training environmen tRelated security properties
– Data integrity
– Availability of training data
– Confidentiality of training data
Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards
16
Ask AI what this page says about a topic: