4.6 Policy approaches to tackling disinformation
Since the early 2020s, governments around
the world have increasingly turned to regulatory
frameworks to tackle online disinformation while
preserving fundamental rights. Although not without
flaws, these efforts demonstrate how targeted
interventions can disrupt the disinformation life cycle.
Australia and the United Kingdom have embedded
“safety by design” principles into legislation,
encouraging platforms to build user safety into
their core infrastructure via different mechanisms.
The UK’s Online Safety Act 2023 (OSA) mandates
proactive risk planning in platform development,35
while Australia’s Online Safety Act 2021 promotes
these standards through voluntary guidance.36 This
shift embeds harm prevention into the architecture
of digital services.
The European Union’s Digital Services Act (DSA)
2022 establishes a harmonized legal framework for
digital services across the EU, with the overarching
aim of creating a safer, more transparent and
rights-respecting online environment for all users.37
The DSA places binding obligations on very large
online platforms (VLOPs) and very large online
search engines (VLOSEs) – together referred to as
VLOPSEs – to identify and mitigate a range of online
harms – such as illegal content, disinformation and
threats to public health or democratic processes –
while upholding key fundamental rights, including
freedom of expression and access to information.
The DSA addresses disinformation not by regulating
content directly but by requiring platforms to
assess and mitigate systemic risks linked to the
design and functioning of their services, aiming to
intercept disinformation at the pre-creation, creation
and distribution points of the life cycle. VLOPSEs
must evaluate how their recommender systems,
monetization models and content moderation
practices may facilitate the spread of both illegal
content (as defined by national law) and legal but
harmful content, such as health misinformation,
coordinated harassment or falsehoods that
undermine electoral processes or civic discourse.
At the post-creation stage, VLOPSEs are required
to demonstrate how they mitigate risks identified
on their services, which could include deploying
friction mechanisms, warning labels and source
disclosures to reduce the impact of disinformation after exposure and to support more informed user
decision-making.
Regulations in both the EU and the UK also
strengthen platform accountability at the point of
content consumption. The UK’s OSA addresses
disinformation through a layered approach
that combines platform regulation with public
empowerment – most notably via its strengthened
media literacy duty. While the OSA does not directly
regulate disinformation as a category, it places legal
obligations on platforms to assess and mitigate
risks from illegal content and content harmful to
adults, and it empowers the regulator, Ofcom, to
supervise compliance.
The DSA is still at an early stage of implementation
but has begun reshaping platform behaviour by
formalizing systemic risk governance and increasing
scrutiny of recommender systems, content ranking and
advertising transparency. VLOPSEs submitted their first
systemic risk assessments to the European Commission
in August 2023 and published their first transparency
reports later that year. Independent audits followed
in August 2024.38 The DSA
Transparency Database enables public access to
content removal decisions and regulatory notices,
strengthening civil society oversight.39 While early
implementation has driven improvements such
as clearer content labelling and user control over
personalization, challenges remain. The DSA
enhances conditions for MIL by demystifying
platform systems and enabling evidence-informed
engagement by educators, researchers and users.
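For researchers and civil society groups, the statements of reasons held in the DSA Transparency Database can in principle be retrieved programmatically. The short Python sketch below illustrates what such a query might look like; the endpoint path, parameter names and response fields are illustrative assumptions, not the database’s documented API, which should be checked at transparency.dsa.ec.europa.eu before use.

```python
# Illustrative sketch only: the endpoint, parameters and response
# fields below are assumptions for demonstration, not the documented
# API of the DSA Transparency Database (transparency.dsa.ec.europa.eu).
import requests

BASE_URL = "https://transparency.dsa.ec.europa.eu/api/v1"  # hypothetical


def fetch_statements_of_reasons(platform: str, category: str, limit: int = 50):
    """Fetch recent moderation statements for one platform and category."""
    response = requests.get(
        f"{BASE_URL}/statements",           # hypothetical endpoint
        params={
            "platform_name": platform,      # hypothetical parameter names
            "category": category,
            "limit": limit,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Example: decisions a platform has reported as disinformation-related
    statements = fetch_statements_of_reasons("ExamplePlatform", "disinformation")
    for s in statements.get("data", []):
        print(s.get("decision_ground"), s.get("created_at"))
```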
The UK’s OSA complements this approach
through a statutory media literacy duty. Ofcom’s
Media Literacy Strategy 2024–202740 is being
implemented through research, pilot programmes
and civil society partnerships that aim to improve
public understanding of online harms and promote
safer digital participation. This is bolstered by
the UK Department for Science, Innovation and
Technology’s (DSIT) strategic priority principles
on safety by design,41 which guide platforms
to proactively embed user protection and
disinformation mitigation into service architecture.
Together, these initiatives reflect a shift towards
integrated regulatory ecosystems that support both
systemic accountability and user empowerment.

Marketplace
Iterate both upstream policies (to disincentivize bad actors) and downstream protocols (to
triage and respond to crises in a timely manner). This could include strengthened penalties
for repeat-offender accounts, changes to terms of service or platform functions, improved
user controls or partnerships with high-trust media and other expert institutions.