Rethinking Media Literacy: A New Ecosystem Model for Information Integrity (2025)
This phase considers the tools used to generate
disinformation and when such activity is likely to
surge. For the former, interventions should assess
how easy it is to produce high-traction content
and where barriers to entry could be raised.
This question will become increasingly urgent as
GenAI products are released on the mass market, with their potential to turbo-charge the scale,
affordability, agility, targeting and persuasiveness
of disinformation campaigns. For the latter, the
focus should be on building collective readiness
and ability to anticipate trends, as well as analysing
where and why disinformation is proving effective.

4.2 Content creation (production)

Marketplace
Ensure credible information is available on an ongoing basis, with channels tailored to
reach people of all backgrounds and profiles. Invest in local journalism, including stronger
platforms for marginalized and underrepresented voices. Prevent predatory targeting
and/or micro-targeting of users in the online space, for example via the sale of personal data
to advertisers. Support the development of “digital public squares”, from social media to
forums, in which high-trust information breaks through and systems are better geared for
constructive discovery, debate and learning.
Supply
Make it more expensive and labour-intensive to produce disinformation at scale. This may
include stronger guardrails on GenAI tools such as text and image generators, as well
as mixed media “deepfake” technology. Strengthen legal frameworks around copyright
infringement (for example, the logo of a known media outlet) as well as non-consensual use
or impersonation of someone’s image, voice or identity. Platforms should take stronger action
against for-profit human content farms and implement stricter recidivism strategies to prevent
disinformation networks from rebuilding after removal.
Demand
Expose the common features of low-quality or low-trust information, including clickbait,
content farms, propaganda, advertising and synthetic or manipulated media. Raise
awareness of these red flags and champion signals of information integrity (e.g. clearly cited
data and images). Embed access to tools that can support critical thinking, lateral reading
and verification of sources in real time.
Marketplace
Mandate rigorous testing of GenAI tools and services before they enter the market, even in
open-source models where decentralization can make enforcement challenging. This may
include “red team” exercises that simulate the tactics of disinformers, helping to identify
vulnerabilities and design more effective guardrails. Given that open-source AI can be
modified and deployed by various actors, testing should occur not just before release but
also through ongoing scrutiny and adaptation to emerging threats. Ensure transparent risk
assessments that balance the intended value of a product (e.g. entertainment, efficiency,
innovation, learning) with its potential misuse and the scale of related harm (e.g. automated
or dangerous disinformation in response to a given prompt).