Rethinking Media Literacy 2025


Teachers are not only trained to deliver media literacy lessons but also encouraged to host discussions with parents and local organizations, reinforcing digital safety as a shared responsibility. Recognizing that misinformation often spreads through family and social networks, Common Sense provides accessible resources to help parents navigate online risks alongside their children, covering topics including media balance, misinformation detection and AIGC. Schools also collaborate with libraries, youth organizations and community centres to expand access to digital literacy resources beyond the classroom. Additionally, the intervention allows educators to tailor discussions to regional concerns, ensuring communities are equipped to address locally relevant misinformation, from election falsehoods to health myths.

By fostering digital literacy at the community level, the Common Sense intervention ensures that media literacy is not confined to formal education settings. Instead, it becomes a shared societal responsibility, where young people, educators, parents and local institutions work collectively to build resilience against misinformation.

Disinformation life cycle level

The Common Sense media literacy intervention affects multiple stages of the disinformation life cycle by equipping students, educators and communities with the skills to critically engage with digital content. At the distribution stage, it educates students on how algorithms, engagement metrics and virality influence the spread of false information, encouraging more mindful sharing habits. At the consumption stage, students develop the ability to detect misinformation through source verification, lateral reading and exposure to real-world digital dilemmas. Finally, in post-consumption, the programme promotes corrective behaviours, such as debunking misinformation, discussing digital dilemmas with peers and family and understanding the broader societal impact of false narratives.
Outcomes

The intervention has led to significant and measurable outcomes in strengthening digital resilience. Students demonstrated improved misinformation detection, with a heightened ability to critically assess misleading content and verify sources. Post-intervention assessments revealed higher digital literacy scores, particularly in understanding digital privacy, online identity and the implications of AIGC. The programme also fostered greater student engagement, with participants finding the lessons both relevant and applicable to their everyday online experiences. Beyond the classroom, the intervention created a ripple effect on families, as many students reported helping parents identify fake news and navigate misinformation on social media. Its success has contributed to scalability and policy influence, with findings used to advocate for integrating media literacy into national education policies and broader digital safety frameworks. By embedding structured, research-backed media literacy education in schools and extending its impact to communities, the initiative is cultivating a more informed, critical and responsible digital generation.

6.4 AI-generated content literacy

A survey by MediaWise found that while most adults today are concerned about misleading and AI-generated images online, they often lack the skills and confidence to identify them.47 When it comes to content generated with AI, TikTok has a comprehensive approach that includes firm safety policies, reporting and labelling tools and media literacy campaigns to encourage the responsible use of AI on the platform.
The approach has been informed by partnering with peers and experts (including Safety Advisory Councils as well as external partners such as the Content Authenticity Initiative) to share learnings and solutions to the collective challenges TikTok is seeing in relation to AIGC.

Socio-ecological level

TikTok’s intervention operates at the institutional level – for example, with its Community Guidelines,48 which require individuals to label AIGC or heavily edited media that depicts realistic-appearing people or scenes, and prohibit certain kinds of realistic AIGC, such as content falsely depicting a public figure making an endorsement they did not make. The organization also prohibits harmful misinformation, non-consensual sexual imagery, impersonation and other harmful content, regardless of whether it is AI-generated. When it comes to reporting tools, as AI evolves, TikTok continuously updates and builds new detection models to identify content that violates its policies, while also enabling its community to report potentially violative content for review. TikTok also partners with more than 20 fact-checking