CASE STUDY 8
Co-designing with children for responsible AI innovation
Current AI product development often lacks sufficient
consideration of children’s rights and well-being, leading to
potential issues with inappropriate content, bias and unequal
access. The Alan Turing Institute, in collaboration with the LEGO
Foundation, developed a participatory research process to
explore how generative AI impacts children.80 The project – which
surveyed over 1,700 children, parents, caregivers and teachers –
conducted school-based workshops to capture children’s direct
experiences with tools like ChatGPT and DALL·E. The report
recommends a child-centred approach to generative AI, including
the meaningful involvement of children in the design process.

Key insight
The research revealed that children both understand
the implications of generative AI and are eager to shape
its future, sharing concerns about misinformation,
environmental impact and online safety. Children favoured
socially beneficial AI uses and opted for creative offline
alternatives when available. This study demonstrates that
involving users as active partners in product design provides
valuable insights to identify or mitigate risks and harms.

Actions for organization leaders
– Prioritize and resource responsible AI design practices: Efforts to encourage and adequately resource responsible design practices within the organization include:
  – Embed responsible design into performance metrics, resource allocation and recognition programmes.
  – Encourage employees to question existing design approaches and instil ethical and compliant measures of success.
  – Design for potential negative outcomes by identifying risks and failure scenarios. Build systems with resilience and mechanisms to “fail safely” to ensure continuity and minimize impact when issues arise.76
  – Re-evaluate products already deployed77 to assess gaps in responsible design.
  – Integrate responsible AI criteria into procurement and third-party risk management processes to mitigate downstream risks and signal responsibility expectations.78
  – Including confidence scores, limitation warnings or a reduced authoritative tone can mitigate the impacts of hallucinations (a sketch of such a guardrail follows this list).
– Build awareness and ownership of
established design principles: Increase
understanding of design-specific risks and
mitigations. Assign responsible AI stewards
across product teams (see Case study 2) and
integrate multi-disciplinary design teams into the
AI development life cycle.
– Empower users as partners in responsible AI: Engage users (e.g. employees, customers and partners) to contribute to responsibility throughout the AI life cycle (see Case study 8). For instance, experts from MIT and Stanford University proposed a new framework that allows third-party users to disclose flaws and monitor AI developers’ responses and resolutions.79 A simple sketch of such a disclosure record also follows this list.
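As an illustration of the “fail safely” and hallucination-mitigation points above, the sketch below wraps a hypothetical model call in a minimal guardrail: when the call fails outright, or the model’s estimated confidence falls below a threshold, the system returns a hedged answer with an explicit limitation warning rather than an authoritative one. The call_model function, the ModelAnswer fields and the 0.7 threshold are illustrative assumptions, not part of the playbook or any specific product.

```python
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.7  # illustrative cut-off; tune per product and risk level

@dataclass
class ModelAnswer:
    text: str
    confidence: float            # assumed to come from the model or a separate verifier
    warning: Optional[str] = None

def call_model(prompt: str) -> ModelAnswer:
    """Placeholder for a real generative-AI call that also returns an
    estimated confidence score (e.g. from a verifier model)."""
    raise NotImplementedError("wire this to the model of your choice")

def answer_safely(prompt: str) -> ModelAnswer:
    """'Fail safely': degrade to a hedged, clearly labelled response rather
    than presenting a low-confidence or failed answer authoritatively."""
    try:
        answer = call_model(prompt)
    except Exception:
        # Outright failure: keep the service available but make the limitation explicit.
        return ModelAnswer(
            text="I could not generate a reliable answer to this question.",
            confidence=0.0,
            warning="The system encountered an error; please verify with a primary source.",
        )

    if answer.confidence < CONFIDENCE_THRESHOLD:
        # Surface the confidence score, add a limitation warning and
        # reduce the authoritative tone of the response.
        answer.text = "I am not certain, but my best attempt is: " + answer.text
        answer.warning = (
            f"Low confidence ({answer.confidence:.0%}); this answer may contain "
            "errors or hallucinations. Please check primary sources."
        )
    return answer
```

In this sketch, a failed call still returns a usable object with a warning attached, so downstream interfaces can keep working while clearly signalling the system’s limits.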
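The third-party flaw-disclosure idea cited in the last action can be pictured as a small tracking record: a user files a flaw report against an AI system, and the developer’s acknowledgements and resolution are logged so the reporter can monitor progress. The sketch below is a hypothetical illustration only; it is not the framework proposed by the MIT and Stanford researchers, and all field names and statuses are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import List, Optional

class FlawStatus(Enum):
    REPORTED = "reported"
    ACKNOWLEDGED = "acknowledged"
    RESOLVED = "resolved"

@dataclass
class FlawReport:
    """Hypothetical record of a third-party flaw disclosure and its follow-up."""
    reporter: str        # e.g. a customer, employee or external researcher
    system: str          # the AI product or model the flaw concerns
    description: str     # what went wrong and how to reproduce it
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: FlawStatus = FlawStatus.REPORTED
    developer_responses: List[str] = field(default_factory=list)
    resolution: Optional[str] = None

    def respond(self, message: str) -> None:
        """Log a developer response so the reporter can monitor progress."""
        self.developer_responses.append(message)
        if self.status is FlawStatus.REPORTED:
            self.status = FlawStatus.ACKNOWLEDGED

    def resolve(self, summary: str) -> None:
        """Close out the report with a visible resolution."""
        self.resolution = summary
        self.status = FlawStatus.RESOLVED

# Hypothetical usage:
report = FlawReport(
    reporter="external researcher",
    system="customer-support chatbot",
    description="Model states an incorrect refund policy with high confidence.",
)
report.respond("Reproduced the issue; adding a policy-grounding check.")
report.resolve("Responses now cite the refund policy document and flag uncertainty.")
```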