The Intervention Journey A Roadmap to Effective Digital Safety Measures 2025
Feedback, measurement
and transparency
Meta continuously engages in research and close
consultation with academics, parents, teens and
other stakeholders to inform the development of
safe, age-appropriate experiences. Specifically, the
Teen Accounts feature was developed in regular
consultation with Meta’s Youth Advisors and Safety
Advisory Council, which includes third-party experts
and professionals in diverse fields such as online
safety, privacy, media literacy, wellness and social
and emotional health. In developing Teen Accounts, Meta also consulted
with a broader set of stakeholders to understand their perspectives
and inform the final implementation approach.
Furthermore, since 2018, Trust, Transparency and
Control Labs has consulted with more than 600
stakeholders, 300 teens and 270 parents from more
than 350 countries to inform a number of the safety
and privacy features of Meta technologies. These
consultations have helped develop age-appropriate
experiences for teens that preserve their access to
online connection and community.
Throughout this process, it has also been important to
evaluate external guidance from governmental bodies
and children's rights groups.
2.5 Addressing CSEA risks: a chatbot
for deterrence and support
The "reThink Chatbot" is a prompt/response chatbot
developed to deter users from searching for child
sexual exploitation and abuse (CSEA) material, to
intervene at the moment of search and to direct users
to support that can help them change their behaviour.
The chatbot is primarily a technical, behavioural and
educational intervention, with a policy-related component.
Aylo (operator of online adult entertainment
platforms) maintains a constantly updated list of
more than 28,000 banned terms in multiple
languages. When a search uses a banned term,
the chatbot appears as a pop-up together with a
warning message. In short, if users enter a search
term associated with child sexual abuse material
(CSAM), they 1) receive a warning, and 2) are shown
a chatbot operated by the Internet Watch
Foundation (IWF).
Through the information provided in the warning or
by engaging with the chatbot, users are informed
about the illegality of CSAM, and they are referred
to the Lucy Faithfull Foundation’s (LFF) free,
anonymous support and advice services, which are
provided for people who are concerned about their
attraction to CSAM.
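The flow described above can be sketched in outline: a search query is normalized, checked against the banned-term list, and a match suppresses results and triggers the warning and chatbot pop-up. This is a minimal illustrative sketch, not Aylo's or IWF's actual implementation; every name, term and message below is a placeholder assumption.

```python
# Hypothetical sketch of the deterrence flow: match a search query
# against a banned-term list and trigger the intervention on a hit.
# BANNED_TERMS stands in for Aylo's list of 28,000+ terms.
BANNED_TERMS = {"placeholder banned phrase", "another placeholder term"}

def normalize(query: str) -> str:
    """Lowercase and collapse whitespace so trivial variants still match."""
    return " ".join(query.lower().split())

def check_search(query: str) -> dict:
    """Return an intervention payload instead of results on a match."""
    q = normalize(query)
    if any(term in q for term in BANNED_TERMS):
        return {
            "show_results": False,
            "warning": "Searching for this material is illegal.",  # illustrative text
            "chatbot": "reThink",  # IWF-operated chatbot shown as a pop-up
            "referral": "Lucy Faithfull Foundation support services",
        }
    return {"show_results": True}
```

In practice such matching would also need per-language handling and ongoing list updates, which the report notes happen constantly; the sketch only shows the trigger logic.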