The Global Risks Report 2024


FIGURE 2.11: Severity score – Adverse outcomes of AI technologies
[Chart: proportion of respondents rating severity from 1 (Low) to 7 (High) over 2-year and 10-year horizons. 10-year rank: 6th. 10-year average: 5.3.]
Definition: Intended or unintended negative consequences of advances in AI and related technological capabilities (including generative AI) on individuals, businesses, ecosystems and/or economies.
Source: World Economic Forum Global Risks Perception Survey 2023-2024.
Note: Severity was assessed on a 1-7 Likert scale [1 – Low severity, 7 – High severity]. The percentages in the graphs may not add up to 100% because figures have been rounded up/down.

– Market concentration and national security incentives could constrain the scope of guardrails to AI development.
– Adverse outcomes of advanced AI could create a new set of divides between those who are able to access or produce technology resources and intellectual property (IP) and those who cannot.
– Deeper integration of AI in conflict decisions could lead to unintended escalation, while open access to AI applications may asymmetrically empower malicious actors.

2.4 AI in charge

Unchecked proliferation of increasingly powerful, general-purpose AI technologies will radically reshape economies and societies over the coming decade – for better and for worse. Alongside productivity benefits and breakthroughs in fields as diverse as healthcare, education and climate change, advanced AI carries major societal risks. It will also interact with parallel advancements in other technologies, from quantum computing to synthetic biology, amplifying the adverse consequences posed by these frontier developments (Boxes 2.5 and 2.7). Intentional misuse is not required for the implications to be profound. Novel risks will arise from self-improving generative AI models that are handed increasing control over the physical world, triggering large-scale changes to socioeconomic structures.
Adverse outcomes of AI technologies is another new entrant to the top 10 rankings, deteriorating significantly in perceived risk severity over the longer-term horizon (Figure 2.11). Alongside the possibility of an entity achieving artificial general intelligence (AGI) – learning to accomplish any human or animal task – key concerns cited by GRPS respondents include: misinformation and disinformation (Chapter 1.3: False information); job loss and displacement (Chapter 2.5: End of development?); criminal use and cyberattacks (Chapter 2.6: Crime wave); bias and discrimination; use in critical decision-making by both organizations and states; and AI's integration into weaponry and warfare.

To date, the precautionary principle (prudence in the face of uncertainty) has largely not been applied to the development of AI, as regulators have erred on the side of innovation. However, rapidly evolving development of and reliance on advanced machine intelligence is outpacing our ability to adapt – both to understand the technology itself (the "Black Box Problem") and to create regulatory safeguards (the "Pacing Problem"), with regulation playing catch-up to the technology. The speed of advances, depth of market power and strategic importance of the industry will continue to challenge the appetite and regulatory capacity of governance institutions. Downstream risks could endanger political systems, economic markets and global security and stability.