Rethinking Media Literacy 2025
for transparency, accountability and risk
management.14 The absence of global standards
underscores the need for a parallel focus on
societal resilience. Beyond governance, integrating
AI literacy and data literacy into MIL curricula is
essential. Currently, most MIL programmes do not
include discussions on the political economy of
AI-driven business models, despite their profound
influence on information ecosystems. Addressing
these gaps requires targeted interventions, such as
embedding AI literacy in school curricula, training
journalists to detect AI-generated content (AIGC)
and equipping policy-makers with the necessary
tools to assess AI-driven disinformation.
The challenge is particularly urgent for younger
generations, who increasingly rely on large language
models (LLMs) for information retrieval and research. Unlike traditional
search engines, which encourage lateral reading
by presenting multiple sources, LLMs provide
a single answer, potentially discouraging critical
analysis. This shift has major implications for media
and information literacy, as students and young
professionals may be more inclined to trust AI-
generated responses without verifying information
through independent sources. Addressing this
requires platforms to implement greater transparency
in how AIGC is produced, while MIL programmes
must adapt to equip individuals with the skills needed
to navigate an AI-driven information ecosystem.
Examples of emerging solutions include AI-detection
tools such as Google DeepMind’s SynthID, which
watermarks and identifies AIGC. However, these
efforts are still in their early stages and require
significant scaling to achieve widespread adoption.
Additionally, watermarking techniques have proven
to be inconsistent and easily bypassed,15 highlighting
the need for more robust and multi-layered solutions.
In the absence of robust protection frameworks,
any regulatory response to AI must be
accompanied by efforts to strengthen public
resilience to AI-driven disinformation. This includes
proactive MIL interventions, partnerships with
fact-checking organizations and collaborations with
social media platforms to introduce friction in the
sharing of deceptive AIGC. Without such measures,
the rapid expansion of AI threatens to accelerate
the spread of false information, further complicating
an already volatile information landscape.
Increasing online harms and risks to digital safety
There are also growing concerns about wider online
harms, such as hate speech and threats to digital
safety, specifically for youth online. Some 78% of
youth respondents to a survey conducted by the
Office of the UN Secretary-General’s Envoy on
Youth reported having experienced digital threats,
while 18% experienced them constantly.16

Online hate speech has become a pervasive
issue, fuelling discrimination, inciting violence
and deepening societal divides. The rise of
mis- and disinformation, especially during global
emergencies, has undermined public trust and
stability, demonstrating the global impact of harmful
content online.
Online harassment, threats and the non-consensual
sharing of private information disproportionately
target women, creating significant barriers to
their participation in digital spaces and public life.
Women journalists face attacks that aim to silence
their voices, producing a “chilling” effect on freedom
of expression.17 Online harassment, abuse and
disinformation campaigns are pervasive, often
targeting women journalists with gendered threats
of physical and sexual violence, and leading to self-
censorship, psychological stress and even women
leaving the profession.
As the US Surgeon General’s Advisory found in
2023: “more research is needed to fully understand
the impact of social media on children and
adolescents; however, the current body of evidence
indicates that while social media may have benefits,
there are sufficient indicators that social media can
also have a profound risk of harm to the mental
health and well-being”.18 The glorification of mass
shooters19 and the accessibility of terrorist material
on social media fuel radicalization, inspire copycat
attacks and amplify violent extremist ideologies,
posing significant security and societal risks.20
Recommendation feeds can further confine users
to “echo chambers”, hindering access to diverse
sources of information.
Nevertheless, exposure to different types of news
sources is more likely on social media than in other
types of media, and ranking algorithms do not have
a significant effect on the ideological balance of
news consumption on high-traffic websites such as
Facebook or Google.21 However, some algorithmic
feeds prioritize posts with high engagement, which
can highlight those posts that are more radical
and emotionally charged, simply because they
receive more engagement.22 This phenomenon is
especially dangerous in times of conflict or during
elections.23 GenAI, in particular, poses several
risks to information integrity, specifically in terms
of content creation – AI-generated deepfakes,
“hallucinations”/inaccurate information,24 rewriting
of historical facts – and content distribution
by perpetuating existing societal biases and
amplifying discrimination. These risks undermine
access to public interest information, which is the
cornerstone of democratic societies.