The Intervention Journey: A Roadmap to Effective Digital Safety Measures 2025
partnered with the AI Elections Accord to set
expectations for how to manage the risks arising
from deceptive AI content.
In partnership with the Coalition for Content
Provenance and Authenticity (C2PA), TikTok
enhanced its auto-labelling by incorporating content
credentials that attach metadata to content,
enabling the instant recognition and labelling of AI-
generated material. TikTok also joined the Adobe-
led Content Authenticity Initiative to collaborate with
a cross-industry ecosystem focused on restoring
trust and transparency online. This capability has
been implemented for images and videos, with
development still under way for audio-only content.
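The auto-labelling flow described above can be sketched in miniature: content credentials attach a signed manifest of provenance assertions to a file, and a platform can inspect those assertions to decide whether an AI-generated label should be applied automatically. The sketch below is illustrative only; the field names are simplified stand-ins loosely modelled on C2PA manifest structures and the IPTC digital-source-type vocabulary, not TikTok's actual implementation.

```python
from typing import Optional

# Source-type values commonly used to signal AI generation (illustrative subset).
AI_SOURCE_TYPES = {
    "trainedAlgorithmicMedia",                # fully AI-generated
    "compositeWithTrainedAlgorithmicMedia",   # partly AI-generated
}


def should_auto_label(manifest: Optional[dict]) -> bool:
    """Return True when content-credential metadata indicates AI generation."""
    if manifest is None:
        # No Content Credentials attached: fall back to creator self-labelling.
        return False
    for assertion in manifest.get("assertions", []):
        source = assertion.get("digitalSourceType", "")
        # Source types are expressed as URLs; compare on the last path segment.
        if source.rsplit("/", 1)[-1] in AI_SOURCE_TYPES:
            return True
    return False
```

For example, a manifest asserting `.../digitalsourcetype/trainedAlgorithmicMedia` would be auto-labelled, while a file with no manifest would not, which is why the campaign described below also asks users to report unlabelled content that appears AI-generated.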
To help TikTok’s community understand how
to use its tools, the platform launched a video
campaign focused on explaining the origins of
content on TikTok. Developed in consultation
with the international human rights organization
WITNESS, the campaign also encourages users to
report content that appears AI-generated but lacks
proper labelling, raising awareness of the issue.
TikTok partnered with MediaWise, a programme of
the Poynter Institute, to create videos focused on
universal media literacy skills, explaining how tools
like AIGC labels can help provide additional context
for content. Additionally, TikTok is collaborating with
industry peers to support the National Association for
Media Literacy Education’s new AI Literacy Initiative,
aimed at spreading AI literacy to a wider audience.
TikTok also draws essential external advice from its
network of Safety Advisory Councils, which include
nine regional councils and a US Content Advisory
Council. These councils bring together experts from
diverse fields, such as youth safety, free expression
and hate speech. Their insights play a key role
in shaping policies, refining product features
and helping TikTok stay proactive in addressing
emerging safety challenges.
Implementation
To implement its strategy, TikTok combines
technology and human expertise to combat
misinformation at scale. This effort
includes specialized misinformation moderators
equipped with advanced tools and training, as
well as local teams that collaborate with experts to
ensure responses consider regional context and
nuances. TikTok has also partnered with 19 global
fact-checking organizations that assess content
accuracy in over 50 languages and help enforce its
misinformation policies.
TikTok’s endorsement of the International
Foundation for Electoral Systems’ Voluntary
Guidelines for Election Integrity for Technology
Companies is crucial, given the key role platforms
play in communicating election information. These
guidelines set expectations and practices for
companies and election authorities to promote
election integrity. By implementing measures to
provide trustworthy information to users, TikTok
helps maintain public trust in its platform during
critical election periods.
TikTok’s Edited Media and AIGC policy mandates
that creators label AIGC or edited media that
features realistic-looking scenes or individuals.
This can be accomplished using the AIGC label
or by including a clear caption, watermark or
sticker. Through the partnership with C2PA, TikTok
automatically labels AIGC uploaded from certain
other platforms. Additionally, TikTok’s policies
prohibit content that misrepresents authoritative
sources or crisis events, as well as content that
falsely portrays public figures in contexts such as
being bullied or making endorsements. The platform
also disallows the use of likenesses of young people
or adult private figures without their consent.
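The labelling mandate above reduces to a simple rule: realistic-looking AIGC or edited media must carry at least one accepted form of disclosure (the AIGC label, a clear caption, a watermark or a sticker). A minimal sketch of that check, using hypothetical field names rather than TikTok's actual moderation schema:

```python
# Accepted disclosure forms under the Edited Media and AIGC policy
# (names here are illustrative placeholders).
ACCEPTED_DISCLOSURES = {"aigc_label", "caption", "watermark", "sticker"}


def needs_disclosure_prompt(post: dict) -> bool:
    """True when realistic AIGC/edited media carries no accepted disclosure."""
    # The rule applies only to AI-generated or edited media that looks realistic.
    if not (post.get("is_aigc_or_edited") and post.get("looks_realistic")):
        return False
    # Any one accepted disclosure form satisfies the policy.
    return not (ACCEPTED_DISCLOSURES & set(post.get("disclosures", [])))
```

A realistic AI-generated clip with no disclosure would trigger a prompt; adding a watermark or the AIGC label would satisfy the rule.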
In addition to labelling, TikTok has implemented
a series of media literacy content addressing
topics such as misinformation, AIGC and AI
transparency. This helps prevent confusion among
viewers who may not understand the meaning
behind these labels and helps them detect
misinformation more easily.
Feedback, measurement
and transparency
There are several metrics that offer insight into
TikTok’s AI and digital literacy efforts. The first
is focused on the auto-labelling tool. Between
September 2023 and May 2024, 37 million
creators used the tool that auto-labels AIGC
made with TikTok AI effects. Because TikTok was
the first video-sharing platform to implement
content credentials, the increase in auto-labelled
AIGC may be gradual at first; as other platforms
implement content credentials, TikTok will be
able to label more content. TikTok collects and
publicly shares data related to community guidelines
enforcement. The data is granular, and relevant sub-
metrics can be helpful in understanding how TikTok’s
policies are put into practice. This data includes:
– TikTok’s proactive removal rate for civic and
election integrity, misinformation and synthetic
and manipulated media was nearly 99%, and
over 89% of those videos were removed before
any views.
– Between the week of 7 July 2024 and mid-
September 2024, TikTok removed over
250,000 pieces of edited media and AIGC
that violated its policies.