Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards 2025
–Roll-out and integration into live operations:
Some organizations have already identified the
business opportunities presented by AI and are
moving to full deployment. However, they may
not be conducting proper cyber risk assessments
or implementing appropriate controls.
Organizations need to ensure adequate
discipline in the transition from experimentation
to operational use, especially in mission-critical
applications. In addition, the market for
specialized cybersecurity tools that protect the
confidentiality, integrity and availability of related
systems and services may not yet be mature
enough to enable these organizations to
implement AI systems securely.
–Disparate projects across the organization:
In most large businesses, there are multiple
projects exploring the use of AI across
different functions and channels. These projects
do not necessarily follow a coordinated process,
so assessments of risk to the business may not
be sufficiently aligned. This applies to both full
roll-out and gradual creep scenarios.
–Third-party hosting versus on-premises:
Often, businesses use third-party AI services
hosted in the cloud. Such arrangements do
not absolve the business of responsibility for
the cybersecurity of its AI assets, but they do
change the mitigation controls available and
create a need to negotiate appropriate
protections from suppliers.
–Internal AI tool development: Many
organizations have started offering AI features
in their public digital services. Some of these are
based on existing commercial or open-source
tools; others are developed internally. In either
case, security requirements need to be properly
established at the development stage.
Organizations may also be entering the decision-
making process on risk at different stages:
–AI technologies may already have been
embedded into the business processes or core
assets. In this case, risk owners need to map
what has been implemented and assess how to
manage security retroactively.
–In other cases, the process might start with a
risk-reward-based decision about whether to
embed AI into operations or products. Under
this approach, the AI system is moved into
the live environment only when the rewards
are determined to outweigh or justify the risks.
Such a decision necessitates a proactive
approach to security, which can be integrated
during the design phase.
AI holds enormous potential to advance the way people live and
work, but we must ensure that we apply these powerful tools
ethically and sustainably. Rapid advances in AI create opportunities
but also introduce significant cybersecurity and governance
challenges. As AI systems become more integrated into our lives,
we must build secure AI platforms that protect against adversarial
attacks and safeguard data integrity by following secure-by-design
principles. Additionally, we need to introduce the appropriate level of
governance in both development and usage to ensure trustworthy AI.
Antonio Neri, President and Chief Executive Officer,
Hewlett Packard Enterprise