Rethinking Media Literacy 2025

Page 19 of 45 · WEF_Rethinking_Media_Literacy_2025.pdf

This phase focuses on the role of platforms in the spread and mainstreaming of disinformation. It examines the structural factors that determine how content is surfaced, curated and amplified to consumers, as well as the broader effects on public discourse. Transparency in these processes is critical – users, creators and researchers need greater visibility into how moderation decisions are made, how recommendation algorithms function and how content is promoted or suppressed. Without such transparency, public trust in platform governance erodes, fuelling perceptions of bias and allowing disinformers to exploit opaque systems to their advantage.

Online approaches should avoid relying too narrowly on content removal or post-level moderation, except in cases of acute and immediate harm. What constitutes illegality can vary significantly across jurisdictions, and in some cases disinformation laws are deliberately crafted or applied in ways that suppress opposition speech and restrict legitimate political expression. While content removal and moderation remain necessary, proven tools for reducing the volume and visibility of harmful content, a broader review of the platform mechanisms that drive virality – such as algorithmic amplification, engagement-based ranking and advertising models – is needed. Platforms must be more forthcoming about how these systems operate, the criteria used to boost or demote content and the safeguards in place to prevent manipulation. Any safeguards developed must distinguish between freedom of speech and freedom of reach, taking a human rights-based approach to both.
This means preserving fundamental rights while ensuring that disinformers cannot abuse opaque platform systems to generate profit, spread falsehoods or undermine democratic discourse.

4.3 Distribution (dissemination and promotion)

Supply

- Ensure transparent processes for registering website domains, creating accounts on social media and administering in-platform groups or channels, with greater oversight of group admins, content creators and advertisers who may amplify misinformation unknowingly. Platforms should provide clearer disclosures on who is behind influential pages, groups and paid promotions, enabling users to assess credibility and accountability.
- Enforce proportionate, clear and consistent action against “super-spreaders” of disinformation, including coordinated networks that operate within and between platforms. This should extend to advertisers as well as group admins.
- Introduce tools that control how rapidly content can be shared, such as forwarding or tagging limits, while ensuring that these measures are applied transparently and equitably. Platforms must clearly communicate how these restrictions are implemented, who they apply to and how they contribute to reducing the spread of harmful content.
- Additionally, incorporate nudges – such as prompts encouraging users to verify information before sharing, or notifications when they are about to engage with content flagged as misleading – which have proven particularly effective in slowing the spread of disinformation and fostering more thoughtful engagement.

Demand

- Elevate trustworthy sources of information, including through partnerships between emergency responders and digital platforms.
- Develop campaigns that “inoculate” the public against persistent disinformation by exposing the tactics and motives behind misleading content.
- Strengthen public outreach by leveraging force-multipliers33 and trusted actors – such as health and social workers, religious leaders, employers and trade unions – who can engage communities directly.
- Partner with influencers and content creators who shape online discourse and drive engagement, ensuring that accurate information reaches audiences where they naturally consume news. These partnerships can help counter disinformation in a more organic and relatable way, fostering trust and improving the visibility of credible sources across different digital spaces.