Global Risks Report 2025
scientists and policy-makers. Governments, civil society and academia should collaborate to create comprehensive training programmes that are held regularly and updated to reflect the latest advancements in AI and algorithmic fairness.67 These programmes should focus not only on technical skills but also emphasize the importance of ethical decision-making, responsible data handling and the societal impact of AI systems.
B. Boost funding for digital literacy

The GRPS finds that Misinformation and disinformation and Societal polarization are the two risks for which Public awareness and education has the most long-term potential for driving action on risk reduction and preparedness (Figure 1.26). Censorship and surveillance is also within the top five risks that could be addressed in this way.

There is an urgent need for comprehensive public awareness campaigns to educate citizens about the risks associated with digital spaces, as well as the tools and practices they can use to protect themselves and boost trust in their use of platforms. For example, citizens should be educated on privacy and security settings for their devices, including two-factor authentication and app permissions. Awareness programmes should also cover recognizing phishing attempts, protecting personal data and securely navigating social media. Additionally, digital literacy initiatives should help individuals understand the role of algorithms and data in shaping their online experiences, fostering the critical thinking needed to identify and challenge biased or harmful content. Governments, civil society and private-sector organizations all have a role in promoting these campaigns and in ensuring they are accessible to diverse populations.
C. Improve accountability and transparency frameworks

The World Economic Forum’s Digital Trust Framework68 spells out key governance themes for ensuring AI’s sustainable and responsible adoption, including accountability and transparency. Accountability could involve establishing supervisory boards and AI councils, as well as human oversight processes. These committees should incorporate diverse perspectives from technologists, ethicists, legal experts, creators and others to effectively assess GenAI products and features. They should be responsible for reviewing AI practices, identifying potential risks and ensuring compliance with both internal policies and external regulations.
Regarding transparency, nurturing consumers’ trust requires organizations to inform users about AI-generated content and its use through appropriate labelling and disclosures. Information on the related data practices, safety policies and potential risks (such as bias and privacy) of the AI model used in GenAI products should be made available via accessible documentation. Standards and technical solutions to ensure content authenticity – such as digital watermarking, content origin and history tracking, and blockchain-based rights management – are currently under development to support a trustworthy information ecosystem. However, successful adoption at scale requires policy frameworks that are aligned with common principles, rules and technological standards.
Figure 1.26: Top risks addressed by public awareness and education

Source: World Economic Forum Global Risks Perception Survey 2024-2025. "Which approach(es) do you expect to have the most potential for driving action on risk reduction and preparedness over the next 10 years?"

[Bar chart: share of respondents (%) selecting each risk, coloured by risk category (Economic, Environmental, Geopolitical, Societal, Technological). Risks shown, in descending order: Misinformation and disinformation; Societal polarization; Online harms; Erosion of human rights and/or of civic freedoms; Decline in health and wellbeing; Censorship and surveillance; Intrastate violence (riots, mass shootings, gang violence, etc.); Infectious diseases; Crime and illicit economic activity (incl. cyber); Talent and/or labour shortages.]