Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards 2025
2 Emerging cybersecurity practice for AI
Securing AI systems demands early mitigation,
ongoing operational security, enterprise-level
risk management, and frequent reassessment
of vulnerabilities.
While the understanding of attackers’ and defenders’
use of AI is well established, the recognition of the AI
system as an asset to be protected is relatively new.
Literature on the cybersecurity risks associated with AI systems is emerging. A range of initiatives, including from MITRE8 and the UK National Cyber Security Centre (NCSC),9 seek to outline and categorize the cybersecurity threats and risks arising from the use of AI. Emerging guidance and policies highlight the requirements for addressing these risks, including (but not limited to):
– The Dubai AI Security Policy10
– The Cyber Security Agency (CSA) of Singapore's Guidelines and Companion Guide on Securing AI Systems11
– The UK Department for Science, Innovation and Technology's (DSIT's) developing AI Cyber Security Code of Practice12
– The National Institute of Standards and Technology's (NIST's) taxonomy of attacks and mitigations13
– The Open Worldwide Application Security Project's (OWASP's) AI Exchange14
Simultaneously, evidence of real-world AI
cybersecurity vulnerabilities, threats and incidents
is being collected, and numerous repositories and
databases are being created.15