The Global Risks Report 2024


FIGURE 1.9: National risk perceptions in the context of upcoming elections
"Which five risks are most likely to pose the biggest threat to your country in the next two years?" Rank of Misinformation and disinformation (1st to 36th):

- India: 1st risk. Over 1.4 billion people (nearly 50% internet penetration) head for a general election in April-May 2024.
- United States: 6th risk. Nearly 340 million (92% internet penetration) head for a presidential election in November 2024.
- European Union: 8th risk. Nearly 450 million (89% internet penetration) elect the EU Parliament in June 2024.
- United Kingdom: 11th risk. Nearly 68 million (98% internet penetration) head for a general election by January 2025.
- Mexico: 11th risk. 128 million (79% internet penetration) head for a general election in June 2024.
- Indonesia: 18th risk. Nearly 278 million (88% internet penetration) head for a presidential election in March 2024.
- South Africa: 22nd risk. Over 60 million (72% internet penetration) head for a general election in 2024.
- Russia: Around 145 million (88% internet penetration) head for a presidential election in March 2024.

Note: EU excludes Slovakia. Source: World Economic Forum Executive Opinion Survey 2023; Worldometer, 2023; Statista, 2023; DataReportal, 2023.

Mistrust in elections

Over the next two years, close to three billion people will head to the electoral polls across several economies, including the United States, India, the United Kingdom, Mexico and Indonesia (Figure 1.9).12 The presence of misinformation and disinformation in these electoral processes could seriously destabilize the real and perceived legitimacy of newly elected governments, risking political unrest, violence and terrorism, and a longer-term erosion of democratic processes. Recent technological advances have enhanced the volume, reach and efficacy of falsified information, with flows more difficult to track, attribute and control.
The capacity of social media companies to ensure platform integrity will likely be overwhelmed in the face of multiple overlapping campaigns.13 Disinformation will also be increasingly personalized to its recipients and targeted to specific groups, such as minority communities, as well as disseminated through more opaque messaging platforms such as WhatsApp or WeChat.14

The identification of AI-generated mis- and disinformation in these campaigns will not be clear-cut. The difference between AI- and human-generated content is becoming more difficult to discern, not only for digitally literate individuals but also for detection mechanisms.15 Research and development in detection continues at pace, but this area of innovation is radically underfunded in comparison to the underlying technology.16 Moreover, even if synthetic content is labelled as such,17 these labels are often digital and not visible to consumers of content, or appear as warnings that still allow the information to spread. Such information can thus still be emotively powerful, blurring the line between malign and benign use. For example, an AI-generated campaign video could influence voters and fuel protests, or in more extreme scenarios lead to violence or radicalization, even if the platform on which it is shared labels it as fabricated content.18

The implications of these manipulative campaigns could be profound, threatening democratic processes. If the legitimacy of elections is questioned, civil confrontation is possible, and could even expand to internal conflicts and terrorism, and state collapse in more extreme cases. Depending on the systemic importance of an economy, there is also a risk to global trade and financial markets.
State-backed campaigns could damage interstate relations, by way of strengthened sanctions regimes, offensive cyber operations with related spillover risks, and detention of individuals (including targeting based primarily on nationality, ethnicity and religion).19