Rethinking Media Literacy 2025
The unprecedented speed and scale at which
information travels, enabled by global connectivity,
social media platforms and emerging technologies,
have transformed how societies communicate,
access knowledge and make decisions. Yet this
transformation has also created vulnerabilities.
The rapid circulation of misleading, inaccurate or
manipulated information, whether intentional or
not, can erode trust in public institutions, polarize
communities and amplify social tensions. Perhaps
most importantly, disinformation disrupts the ability
of individuals to freely make informed choices about
what is in their own best interest.1
The complexity of the information ecosystem,
compounded by algorithmic amplification and the
rise of synthetic media, has made it more difficult
for people to discern reliable information from false
or deceptive content. These challenges not only
affect public health responses, electoral integrity
and crisis management but also have profound
consequences for the exercise of fundamental
rights and the health of democratic societies.
The rise of generative artificial intelligence (GenAI)
has further exacerbated the challenge. Tools
capable of creating realistic images, audio,
videos and text have lowered the barriers to
producing and disseminating highly convincing
and personalized false content. Even as early
as 2020, a study found that in 50% of cases,
humans could not differentiate between news
created by a person and news generated by AI.2
In the intervening years, AI's ability to mimic
human writing, along with audio, video and
images, has improved substantially. By age 11, children's
confidence in evaluating online content often
exceeds their actual competence,3 while false
information that has proliferated online is now
being cited by large language models (LLMs)
as individuals attempt to fact-check the information
they encounter.4
As disinformation tactics evolve, so too must media
and information literacy (MIL) initiatives, integrating insights from psychology,
technology and education to remain effective in an
ever-changing digital environment. The proliferation
of user-friendly, relatively inexpensive and easily
accessible applications has enabled the creation of
synthetic media and the widespread interaction with
non-human agents, such as AI-powered chatbots
and virtual assistants. This shift underscores the
need for teaching and training to equip people
with the skills to critically evaluate synthetic media,
discern credible information from AI-generated
content (AIGC) and interact responsibly and
ethically with AI systems.

In this context, MIL – which is defined as a set
of competencies that empower individuals to
access, understand, critically evaluate, create and
responsibly share information and media content
across different platforms and formats – is critical
for building resilient societies and protecting
individual freedoms. Beyond enabling individuals
to defend themselves against manipulation or
disinformation, the ability to seek, receive and
impart information freely is a fundamental right,
enshrined in international human rights frameworks
such as Article 19 of the Universal Declaration of
Human Rights. MIL acts as both a safeguard and
an enabler of this right, ensuring that people are
not silenced by manipulation, overwhelmed by
disinformation or disenfranchised by their inability to
critically engage with the information around them.
MIL serves as a foundational tool for furthering
the education of informed digital citizens. It trains
individuals to question sources, recognize biases
and identify manipulative tactics. Moreover, it
cultivates resilience against disinformation by
promoting a culture of enquiry and reflection rather
than passive consumption, and ideally it should
provide citizens with an understanding of the digital
information ecosystem in which they now live.
This report presents a holistic framework that
situates MIL as one node across both the
disinformation life cycle and the socio-ecological
model (SEM) – a framework to capture the multiple,
interacting layers of influence on digital safety, from
personal behaviour to interpersonal and community
dynamics, institutional obligations and policy levers.
By applying this model, the report offers a structured
approach to identify gaps in current interventions and
supports organizations in more effectively targeting
their strategies to strengthen information integrity and
uphold fundamental rights. By analysing interventions
at different stages – prevention, detection, response
and resilience – and examining the influence of
individual, community, institutional and societal
factors, the framework offers a more comprehensive
perspective for action. Indicative case studies offer
practical examples of both MIL interventions and the
application of this wider framework.
The objective of the report is two-fold: first, to
assess the state of MIL efforts in the context of
current information challenges; second, to unpack
the disinformation life cycle and SEM, helping
improve the design and targeting of more holistic
interventions. It offers a new perspective on how
to map and strengthen efforts that seek to bolster
information integrity, including – but not limited to –
MIL initiatives.