Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards 2025
2.1 Shift left

The question of how to secure AI is closely related to a wider body of work on AI safety. This work is a significant aspect of the AI Governance Alliance's (AIGA's) agenda. This approach promotes the need to "shift left", i.e. implement safety guardrails early in the AI system life cycle (namely, at the building and pre-deployment stages) to mitigate related risks.16 As an example of safe and secure-by-design practice, it mandates the use of processes that address inherent vulnerabilities in the AI systems and services that organizations use and procure.

2.2 Shift left and expand right

Not all risks can be mitigated at the building and pre-deployment stages. It is not possible to eliminate all system vulnerabilities, and there will always be threat actors who succeed in circumventing the mitigating measures in place. To complement the security-by-design practices that help organizations develop AI technologies securely and ethically, businesses need to implement cybersecurity practices that protect AI systems once they are in use. This requires:

– An understanding of the wider risks faced by businesses using and depending on AI
– An understanding of the risks associated with the criticality of the data being processed
– Effective operational cybersecurity capabilities to protect against these risks and detect attacks
– Effective response and recovery processes to deal with incidents when they occur

In short, organizations will need to both "shift left and expand right".

2.3 Shift left, expand right and repeat

Alongside shifting left and expanding right, any approach to mitigating the cybersecurity risks associated with AI adoption needs to consider how the technology will evolve and how business use will change over time. This should be facilitated through repeated re-evaluation of risks and controls, alongside frequent rehearsal and regular testing of the organization's preparedness (e.g. war gaming, tabletop exercises, disaster recovery drills). This presents another opportunity to further integrate cyber risk assessment and intelligence capabilities into the resilience cycle and to adjust testing strategies based on evolving AI risk profiles and threat actor developments observed across the industry. This means that leaders need to expand right, i.e. embed cyber resilience, and repeat.

2.4 Taking an enterprise view

AI systems do not exist in isolation. Organizations need to consider how the business processes and data flows built around AI systems can be designed in a way that reduces the business impact a cybersecurity failure might cause. Where assurance on the security of the underlying AI or on the effectiveness of defences is limited, it is crucial to consider how any compromise might be overcome. This could include implementing additional controls outside the system itself, or reviewing what data should or should not be exposed to the AI. To enable such an end-to-end view, risks and controls need to be integrated into wider governance structures and enterprise risk management processes.