Artificial Intelligence in Media Entertainment and Sport 2025
INSIGHT 4: Intellectual property implications

INSIGHT 5: Accuracy

Accuracy and bias: GenAI outputs can amplify
existing biases in training data and produce
discriminatory outputs or generate hallucinations
(false information), which can erode consumer trust and content value. Poor representation of diverse
communities in model training datasets increases
algorithmic bias, risking the marginalization of
certain voices and reduction of models’ accuracy.49
Addressing these challenges is essential for industry
leaders and society to ensure AI’s sustainable and
responsible adoption. It calls for a human-centric
and holistic approach. As regulation varies and
evolves at different paces across regions, there
is opportunity for industry self-governance to
complement regulation (“co-regulation”) through
methods like collective bargaining, binding
commitments, best practices and voluntary
standards. Key governance themes have been
defined and examined as part of the World
Economic Forum’s Digital Trust Framework.51
They include:
– Accountability: Organizations are defining
principles for responsible AI adoption. Some
are establishing governance bodies, like
supervisory boards and AI councils, as well
as human oversight processes to ensure
ethics and transparency standards are
upheld. These committees should consider
diverse perspectives from technologists,
ethicists, legal experts, creators and others to
effectively assess genAI products and features.
They should be responsible for reviewing
AI practices, identifying potential risks and
ensuring compliance with both internal policies
and external regulations. Defining evaluation
frameworks based on different categories of AI
models, data sources and use cases, along with
cross-industry standards like ISO 42001, can
streamline review and approval processes.52
– Fairness: Companies should use AI models
that minimize bias and mitigate unintended
consequences in content creation and
distribution. This helps ensure equitable
treatment, inclusivity and fairness across
content platforms while safeguarding user
data rights.
– Transparency: Nurturing consumers’ trust
requires organizations to inform users about
AI-generated content and its use through
appropriate labelling and disclosures within the
product experience – whether auto-generated,
auto-generated with human oversight or
human-generated. Information on related data
practices, safety policies and potential risks
(such as bias and privacy) of the AI model used
in genAI products should be made available
via accessible documentation. Standards
and technical solutions to ensure content
authenticity, such as digital watermarking,
content origin and history, and blockchain-
based rights management, are currently
under development to support a trustworthy
information ecosystem. However, successful
adoption at scale requires policy frameworks
that are aligned with common principles, rules
and technological standards.53

A recent survey found that 52% of respondents
view IP infringement as a significant risk, and
25% report actively working to implement
measures to mitigate it.48

Research highlights a growing concern around
genAI inaccuracy, with 63% of respondents
considering it a relevant risk; 38% declared
that they are working to mitigate it.50
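The transparency measures discussed above rest on one technical idea: binding a provenance claim (auto-generated, auto-generated with human oversight, or human-generated) to the exact content it describes, so that tampering is detectable. As a rough illustration only, here is a minimal Python sketch of that idea; the function names, label values and HMAC key handling are hypothetical and are not taken from C2PA or any other standard under development.

```python
import hashlib
import hmac
import json

# Illustrative sketch: a minimal "content credential" that records how a
# media asset was produced and binds that claim to the asset's bytes.
# All names and labels here are hypothetical, not part of any real standard.

SECRET_KEY = b"publisher-signing-key"  # placeholder; real systems use PKI, not a shared secret

PROVENANCE_LABELS = {
    "auto-generated",
    "auto-generated-with-human-oversight",
    "human-generated",
}

def make_credential(content: bytes, provenance: str) -> dict:
    """Build a provenance manifest for the content and sign it with an HMAC."""
    if provenance not in PROVENANCE_LABELS:
        raise ValueError(f"unknown provenance label: {provenance}")
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "provenance": provenance,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(content: bytes, manifest: dict) -> bool:
    """Check that the signature is valid and the content matches the manifest."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())

article = b"An AI-drafted match report, reviewed by an editor."
cred = make_credential(article, "auto-generated-with-human-oversight")
assert verify_credential(article, cred)            # untampered content verifies
assert not verify_credential(b"edited copy", cred)  # altered content fails
```

Real provenance standards add certificate chains, edit histories and tamper-evident embedding in the media file itself, but the verification principle (hash the content, sign the claim, re-check both) is the same.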