Rethinking Media Literacy: A New Ecosystem Model for Information Integrity (2025)
The digital environment has democratized access to
information, offering new avenues for the enjoyment
of fundamental human rights, including freedom
of expression. Today, however, digital platforms
and online information threaten the very rights that
internet connectivity once promised to fulfil. A few
key trends that point towards the heightened need
for enhanced MIL include:
Increased reliance on digital platforms to
access public interest information
A recent UNESCO/IPSOS survey revealed that in
16 countries that were scheduled to have elections
in 2024, social media was found to be the primary
source of information. Some 87% of citizens in
these countries believed that online disinformation
was already having a major impact on the political
life of their country, and they feared its influence on
election results.8
The rapid rise of digital platforms has created
spaces where vast amounts of information are
shared, which has significant social, political and
economic impacts. However, these platforms
can enable and accelerate the spread of
misinformation, disinformation, hate speech and
other harmful content, making it crucial to ensure
they operate transparently and in alignment with
human rights principles.
Reduced trust in traditional journalism and
the growing influence of content creators as
information channels
Studies show that people (especially youth) are
increasingly turning to short-form video for news
consumption. This format is particularly
favoured by influencers and young news creators,
who are increasingly becoming primary “news”
creators and shaping public discourse on
critical topics including elections, conflicts and
environmental crises.9 Video and livestreaming,
both favoured by these same creators, pose an
even greater moderation challenge for platforms
that already struggle to apply their policies to
harmful content.
In another concerning trend, a UNESCO-supported
study showed that content creators (e.g. influencers)
on digital platforms do not primarily rely on
traditional journalism to produce content, with
mainstream news media ranking only third among
the sources most commonly used by these actors
(36.9%). Alarmingly,
42% of content creators rely on likes and views as
the primary indicator of credibility, indicating a shift
away from traditional journalistic standards, where
fact-checking and credibility are based on evidence
and transparent citations.10
Such trends make a compelling case for expanded
MIL programmes that help individuals identify
reliable news sources, understand the risks posed
by AI and by mis- and disinformation on digital
platforms and engage with content in an
inclusive and ethical way. Further, MIL programmes
must be designed to reinforce human rights,
including the right to freedom of expression and
access to information, empowering users to employ
digital technologies and social media platforms in
an open, safe and secure way.
Rapid developments in AI, including GenAI
Recent developments in AI are reshaping human
society, influencing trust, media consumption and
the broader information landscape.
AI has rapidly evolved from basic supervised and
unsupervised learning models into highly complex
deep learning (DL) algorithms, capable of handling
unstructured data and performing advanced
tasks such as image and text analysis, voice
synthesis and predictive modelling. While these
advancements offer significant benefits – such
as improving healthcare diagnostics, streamlining
content creation and enhancing personalized
learning experiences – they also introduce critical
risks, particularly concerning misinformation,
bias and the erosion of trust in digital content.
For example, deepfake technology – enabled by
GenAI – has been used to fabricate realistic images
and videos of public figures.11 The increasing
accessibility of such tools means that even non-
experts can generate misleading content, further
complicating efforts to embed information integrity.
The rise of GenAI models – such as those
developed by OpenAI, Anthropic, Google and Meta,
among others – has added new layers of complexity
to the challenge of MIL. These tools can produce
convincing but misleading content, often blurring
the line between what is human- or AI-generated.
Research indicates that individuals already struggle
with assessing the reliability of traditional search
results, often assuming that higher-ranked pages
are more credible. With AI-generated summaries
becoming the default for many users,12 there is a
growing risk that misinformation, biases in training
data or subtle manipulations could dictate public
perception without users critically evaluating
multiple sources.
Governments worldwide have responded differently
to the rise of GenAI. Some countries, such as Italy,
initially banned ChatGPT over privacy concerns
before implementing regulatory measures,13
while the European Union established the AI
Act to provide more comprehensive regulatory
oversight of AI systems, including requirements
1.2 How MIL can provide a response to the challenges of the digital age