The Global Risks Report 2024


Acting today

GRPS respondents identify Public awareness and education as one of the most effective mechanisms for risk preparedness and the reduction of Adverse outcomes of AI technologies (Figure 2.15), and as a key tool to manage local impacts as well as to build governance capacity and societal resilience. Literacy in generative AI is essential, both for regulators and for broader society. AI literacy could be integrated into public education systems and into training for journalists and decision-makers, not only to understand the capabilities of AI systems but also to identify trustworthy sources of information.

GRPS respondents also highlight the need for National and local regulations. While national-level efforts will not necessarily prevent the rapid global proliferation of AI and related risks, robust but flexible standard-setting can help ensure that technological development and deployment are aligned with societal needs. The application of existing legislation around intellectual property, employment, competition policy, data protection, privacy and human rights will need to evolve to address the new challenges posed by generative AI.82 Other key areas anticipated to be addressed by various regulatory regimes over the short term include the identification of AI-generated products, blocks or limitations on the riskiest uses, and the determination of liability for AI-induced harms.83 Proposed solutions include, but are not limited to: registration and licensing of the most powerful versions of the technology; tiered access to computing power; implementation of provenance and/or watermarking systems; Know-Your-Customer procedures and mandatory incident disclosures; and the creation of a robust auditing and certification system.84

GRPS respondents also note the role of Global treaties and agreements in the management of both Adverse outcomes of AI technologies and Technological power concentration.
Several AI governance frameworks have already emerged at a global level to provide high-level guidance for AI development, including the latest G7 Hiroshima Process on Generative Artificial Intelligence and the Bletchley Declaration. In addition, there have already been calls for an "AI version" of the IPCC.85 This entity could, in collaboration with the private sector, enable global scientific consensus around the risks and opportunities posed by frontier AI. Similarly, it could communicate findings to decision-makers based on the best available projections of global AI hardware and software, albeit with faster assessment cycles by necessity. Oversight could also extend to a reporting database and registry of crucial AI systems. However, the most existential of these risks will require extensive cooperation between powers, to achieve mutual restraint around the proliferation of high-impact technologies and to avoid inadvertent escalation in military AI (Chapter 3: Responding to global risks).

FIGURE 2.15 Risk governance: AI in charge
[Chart showing, for Adverse outcomes of AI technologies and Technological power concentration, the share of respondents selecting each approach: a. Financial instruments; b. National and local regulations; c. Minilateral treaties and agreements; d. Global treaties and agreements; e. Development assistance; f. Corporate strategies; g. Research & development; h. Public awareness and education; i. Multi-stakeholder engagement.]
Source: World Economic Forum Global Risks Perception Survey 2023-2024. "Which approach(es) do you expect to have the most potential for driving action on risk reduction and preparedness over the next 10 years? Select up to three for each risk."