Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards 2025

1. The context of AI adoption – from experimentation to full business integration

Understanding business context is essential for identifying the security needs of AI.
Cybersecurity requirements for AI technologies
should be considered in tandem with business
requirements. How a business uses AI should
determine its security needs: what to protect and when.
There are numerous influencing factors that drive
cybersecurity requirements, including: the criticality
of the business processes and control systems using
AI and the degree of dependency these processes
have on the AI system outputs; the sensitivity
of the data and devices that AI is involved in
processing and controlling; and the risk culture of the
organization and its approach to digital innovation.
Businesses are innovating with AI in a range of ways,
and are at various stages in the adoption cycle:
– Experimentation and piloting: Much of current
AI deployment by businesses is explorative or
experimental. According to research from the AI
Governance Alliance, organizations are commonly
using “smaller, use-case-based approaches that
emphasize ideation and experimentation”.7 There
is, however, a risk of experiments becoming
embedded within live business operations
without the rigorous risk assessment, system
testing and user training required.
– Unconscious use of AI through product
features (off-the-shelf software): For some
organizations, the adoption process involves
a more gradual – and at times passive –
approach. Under this approach, AI is introduced
in enterprise processes through new features
or the enhancement of tools and platforms
already available in an organization’s ecosystem
– e.g. enterprise resource planning (ERP), HR
and IT management platforms. This process
presents the risk of introducing shadow
AI. A lack of formal roll-out programmes
may decrease transparency, which can in
turn weaken management processes and
leadership oversight. Businesses require
visibility and close coordination with vendors
to assess AI feature capabilities and effectively
evaluate potential risks. Furthermore, lax
software management in organizations can
amplify this type of risk by allowing AI to enter
through unsanctioned or unmonitored tools
(e.g. open-source tools used by developers,
browsers or software plugins).
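As a minimal illustration of the visibility problem described above, the following sketch cross-checks a software inventory against a list of tools whose AI features have been formally sanctioned, flagging potential shadow AI. The inventory schema, the `ai_features` field and the tool names are assumptions for illustration, not part of the report.

```python
# Hypothetical sketch: surfacing potential "shadow AI" from a software inventory.
# The inventory format and the "ai_features" flag are illustrative assumptions.

def flag_shadow_ai(inventory, sanctioned):
    """Return names of tools that expose AI features but were never approved."""
    flagged = []
    for tool in inventory:
        if tool.get("ai_features") and tool["name"] not in sanctioned:
            flagged.append(tool["name"])
    return flagged

inventory = [
    {"name": "ERPSuite", "ai_features": True},    # vendor added AI via an update
    {"name": "HRPlatform", "ai_features": False},
    {"name": "DevPluginX", "ai_features": True},  # unsanctioned developer plugin
]
sanctioned = {"ERPSuite"}  # tools whose AI features passed a risk assessment

print(flag_shadow_ai(inventory, sanctioned))  # → ['DevPluginX']
```

In practice the inventory would come from asset-management tooling and vendor disclosures rather than a hard-coded list; the point is that without such a record, AI introduced through product updates never surfaces for review.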