Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards 2025
Executive summary
AI technologies offer significant opportunities, and
their application is becoming increasingly prevalent
across the economy. As AI system compromise
can have serious business impacts, organizations
should adjust their approach to AI if they are
to securely benefit from its adoption. Several
foundational features capture best practices for
securing and ensuring the resilience of AI systems:
1. Organizations need to apply a risk-based
approach to AI adoption.
2. A wide range of stakeholders need to be involved
in managing the risks end-to-end within the
organization. A cross-disciplinary AI risk function
is required, involving teams such as legal, cyber,
compliance, technology, risk, human resources
(HR), ethics and relevant front-line business units
according to specific needs and contexts.
3. An inventory of AI applications can help
organizations to assess how and where AI is
being used within the organization, including
whether it is part of the mission-critical supply
chain, helping reduce “shadow AI” and risks
related to the supply chain.
4. Organizations need to ensure adequate
discipline in the transition from experimentation
to operational use, especially in mission-
critical applications.
5. Organizations should ensure that there
is adequate investment in the essential
cybersecurity controls needed to protect AI
systems and ensure that they are prepared to
respond to and recover from disruptions.
6. It is necessary to combine both pre-deployment
security (i.e. the “security by design” principle –
also called “shift left”) and post-deployment
measures to monitor and ensure resilience and
recovery of the systems in use (referred to in
this report as “expand right”). As the technology
evolves, this approach needs to be repeated
throughout the life cycle. This overall approach
is described in the report as “shift left, expand
right and repeat”.
7. Technical controls around the AI systems
themselves need to be complemented by
people- and process-based controls on
the interface between the technology and
business operations.
8. Attention needs to be paid to information
governance – specifically, what data will be
exposed to the AI and what controls are
needed to ensure that organizational data
policies are met.
It is crucial for top leaders to define key parameters
for decision-making on AI adoption and associated
cybersecurity concerns. This set of questions can
guide them in assessing their strategies:
1. Has the appropriate risk tolerance for AI
been established and is it understood by all
risk owners?
2. Are risks weighed against rewards when new AI
projects are considered?
3. Is there an effective process in place to govern
and keep track of the deployment of AI projects?
4. Is there clear understanding of organization-
specific vulnerabilities and cyber risks related to
the use or adoption of AI technologies?
5. Is there clarity on which stakeholders need to be
involved in assessing and mitigating the cyber
risks of AI adoption?
6. Are there assurance processes in place to
ensure that AI deployments are consistent
with the organization’s broader policies and
legal and regulatory obligations?
By prioritizing cybersecurity and mitigating risks,
organizations can safeguard their investments in
AI and support responsible innovation. A secure
approach to AI adoption not only strengthens
resilience but also reinforces the value and reliability
of these powerful technologies.