The Global Risks Report 2024


AI escalation

The application of AI technologies to military objectives could threaten global stability over the next decade, with the integration of machine intelligence into conflict decision-making posing a severe risk. 72 AI will boost cyber warfare capabilities, enabling entire offensive and defensive systems that could act autonomously, with unpredictable impacts on networks and connected infrastructure.

When it comes to kinetic warfare, global and regional powers have invested heavily in developing AI-driven weapons systems, and the degree of autonomy afforded to these is increasing: land-, air- and sea-based weapons can already undertake surveillance without human input. 73 Attempts have been made to establish international governance around their use; however, no agreements have yet been reached. 74 Abstentions and votes against a draft UN resolution relating to autonomous weapons systems last year were notable, including those of China, North Korea, Iran, Israel, Türkiye, the United Arab Emirates, India and Russia. 75 There remains a material chance, therefore, that these systems could be empowered to take decisions on lethal actions autonomously, including goal creation and the selection of targets. 76 The potential for miscalculation in these scenarios is high. 77 For example, AI could misinterpret the "unwritten" norms of geopolitical posturing, such as flying fighter jets close to the airspace or military assets of rival powers, as a material threat, initiating conflict.

The most severe risk lies in the application of AI to nuclear weapons. While governments have indicated that human control will be maintained over nuclear weapon systems, in principle AI may offer the greatest defense by condensing decision time: making decisions at silicon, not biological, speed. 78 At the same time, AI-enabled launch systems could erode strategic stability, given their theoretical potential to target nuclear assets and second-strike capabilities, combined with the near-impossible detection of their development by rival states. 79 If states incorporate AI into nuclear weaponry, this would significantly raise the risk of accidental or intentional escalation over the next decade, with potentially existential consequences.

In contrast to the upstream tech stack, the downstream application of AI is a more competitive market. Despite being among the most powerful of emerging dual-use technologies, the economic and technical barriers to accessing frontier AI are significantly lower than for its technological counterparts, such as geoengineering and quantum computing. Many GRPS respondents highlight concerns around sudden and widespread access to generative AI applications, given that access to the internet effectively equates to access to these models. Malicious actors can leverage a superhuman breadth of knowledge to conceptualize and proliferate dangerous capabilities, from misinformation and malware to biological weapons (Box 2.7), threatening human rights and safety in myriad ways.

BOX 2.7: Novel bioweapons

The attempted use of biochemical weapons by non-state actors has historically been limited, primarily due to high knowledge barriers. 80 Without regulation limiting open access to the most powerful applications of AI technologies, a combination of AI tools could enable the creation of more targeted and severe biological weapons by a wide spectrum of non-technical actors.
Large language models could provide information on dual-use topics, laboratory assistance and, eventually, autonomous research, while biological design tools could allow the creation of new proteins and biological agents that overcome the trade-off between the transmissibility and virulence of pathogens. 81 Impacts could be devastating, with pathogens potentially used to disable military personnel before a conflict, mimic a widespread global pandemic or even lethally target specific ethnicities.