AI in Action Beyond Experimentation to Transform Industry 2025
Industry-level enabler 2: Stakeholder trust in AI

Historically, AI’s biggest challenges were
technological and economic,30 but today, trust
in AI-driven processes is a key barrier. For AI
to succeed at scale, individuals, companies
and partners must both trust processes powered
by AI and take responsibility for ensuring they
are safe, reliable and effective. While 95% of
workers see value in genAI, their primary concern
is whether organizations can ensure positive
outcomes for all.31,32
Building trust starts within the organization. A global
survey found that 61% of people hesitate to rely
on AI systems,33 often due to concerns over data
security and third-party involvement. Organizations
can build that trust by adopting trust principles in the
development and deployment of intelligent technologies.34
Effective change management, whereby companies
support employees in adopting AI through training
and transparent communication on how to use the
technology responsibly and effectively, can also
lead to a more consistent, positive user experience
with AI.35
At the cross-company level, trust is vital for data-
sharing collaborations that strengthen AI, such
as in federated learning. Leaders must address
security, accountability and ethical concerns,
ensuring AI solutions are secure, transparent,
interoperable and fair. This encourages collaboration
and reduces legal risks, fostering a trustworthy
ecosystem.36,37,38
Company-level enabler 1: Industry self-governance

To deploy AI responsibly,39 organizations are
creating self-governance frameworks that
complement regulations, enabling agility and
risk mitigation. These frameworks help align AI
deployment with company values and regional
regulations, focusing on data privacy, security,
transparency and AI’s broader impact. Self-governance integrates privacy, innovation and
compliance to build trust, potentially increasing
customer confidence by up to 30%.40 Companies
should appoint a chief responsible AI officer or
establish ethics committees to ensure AI practices
align with regulations. Additionally, governance
should be embedded in the tools developers
and data scientists use, with clear policies to
ensure compliance.
CASE STUDY 10: Failure to responsibly deploy AI

In October 2024, a major US city launched an AI-powered
app to help new entrepreneurs navigate the complexities
of starting a business. The app was intended to provide
resources and guidance on navigating legislation;
however, it often provided misinformation that could
lead business owners to break the law. Consequently,
the platform received public criticism and trust degraded
among its user base, prompting officials to revisit how
the tool generates its outputs.41
AI Governance Alliance: Transformation of Industries in the Age of AI 16