Artificial Intelligence in Media, Entertainment and Sport 2025
3 Enablers and other considerations
Responsible adoption requires addressing
concerns about dis- and misinformation,
intellectual property and the impact on the workforce.
This section gives an overview of the key challenges
and enablers for the responsible adoption of
genAI in the industry. While not an exhaustive
list, it identifies areas that play a role in ensuring responsible and sustainable AI adoption at scale.
Other important enablers, such as technology and
infrastructure, are covered in the AI in Action: Beyond
Experimentation to Transform Industry Value paper.
As genAI advances rapidly, it presents complex
challenges that must be overcome to make this
revolution work for humanity. This section does
not aim to provide comprehensive solutions but
highlights critical areas that require multistakeholder,
industry-level and company-level consideration.
Key challenges include:
– Deepfakes and misinformation: By reducing
production costs, genAI exacerbates the
risk of mis- and disinformation spreading at
an unprecedented speed and scale, and of
deepfakes being used with harmful intent.
From Q1 2023 to Q1 2024, the exchange of deepfake tools on the dark web grew by
223%,42 and AI-generated images surged from
approximately 8,000 in 2018 and 15,000 in
2019 to approximately 15.5 billion in 2023.43
This underscores the need for robust content
moderation, enforcement, transparency, AI
usage disclosure and source attribution.44
Concerns focus on synthetic media’s potential
impact on the trustworthiness and integrity
of the information ecosystem, especially in
elections and conflicts, and the proliferation of
harmful content targeting individuals, such as
non-consensual image sharing.45
– Data ownership and data rights:
The collection and use of user data raise
questions about IP protection and how
privacy rights are safeguarded. A debate is
ongoing as to whether copyright frameworks
are still fit for purpose, striking the appropriate
balance between incentivizing creativity and
ensuring society can benefit from it. This
has implications for accessibility to content
for model training and the potential for AI to
expand content distribution. Related issues,
such as how to protect likeness rights over AI-generated content to ensure that celebrities
and individuals can exert stronger control over
their image, voice and recognizable attributes,
are also gaining traction with policy-makers.
The development of robust frameworks is
essential to protect creators and promote
responsible innovation in this complex
landscape. For example, how do we define the
IP framework for synthetic content and handle
cases where someone draws a portrait of an
actor and then uses AI to generate images/
videos based on it?3.1 Industry governance
INSIGHT 3: Disinformation in the genAI era

According to Reuters, more than half of respondents (59%) are concerned about disinformation.46 By 2028, 50% of enterprises will adopt products, services or features to address disinformation, up from less than 5% in 2024.47