Advancing Responsible AI Innovation: A Playbook 2025
CASE STUDY 4
Infosys “comply up” general standard
Infosys adopts a “comply up” strategy, applying the highest
global AI compliance standards – like those in the EU AI
Act – across all operations worldwide. This unified approach
eliminates complexity from fragmented regulations while
exceeding client expectations, as partners increasingly
demand robust, responsible AI practices regardless of local
requirements. Infosys proved this model’s effectiveness
with data privacy, where compliance with the California Consumer Privacy Act (CCPA) created a strong baseline
for global operations.
Key insight
Adopting the highest responsible AI standards across all
jurisdictions streamlines operations and ensures consistent
compliance regardless of the regulatory regime.
Actions for government leaders
– Resolve regulatory tensions and ambiguities
between sectoral and cross-cutting AI
regulations: Provide organizations with clear
guidance on compliance requirements.35
– Prototype AI governance frameworks:
Enhance policy efficacy and feasibility,
and mitigate externalities (e.g. economic
harms or infringements of rights and freedoms),
through policy prototyping, which borrows design
and research practices from product and
service development.36
Best practices include:
– Incentivized participation: Encourage
participation across organization sizes, sectors
and areas of expertise, as well as the public,
to ensure prototyping considers all impacted
parties and their concerns
(e.g. intellectual property loss).
– Clear criteria: Set goals, metrics and
benchmarks for success upfront.
– Robust methods: Avoid testing in isolation
from existing policies, policy-making
processes and enforcement practicalities.
Layer prototyping approaches and
prototype at multiple stages (see Figure 4).
Refine through agile iteration cycles and
feedback loops.
– Transparent process: Document and
communicate decisions, changes and
rationale throughout the process. Provide
sufficient time for submission and review
of feedback.
– Independence: Prototyping should bolster
rather than impede the policy-making
process, representing and benefiting the
entire population. For example, participation
driven by ulterior motives (e.g.
regulatory capture or dilution of policy
accountability) should be deterred.
– Promote jurisdictional interoperability
through multilateral AI governance
frameworks: Set shared principles, standards
and certification protocols to drive innovation and
safety while respecting national interests. Help
businesses make sense of multiple frameworks
by developing crosswalks, e.g. between the National
Institute of Standards and Technology (NIST)
AI Risk Management Framework (RMF) and
the Japan AI Guidelines for Business.37 Consider
participation in multilateral forums that enable
international cooperation (e.g. the World
Economic Forum's AI Governance Alliance
and the Commonwealth Artificial Intelligence
Consortium), as well as working collaboratively
to reduce fragmentation (e.g. China-Pakistan
AI cooperation efforts on innovation
and governance).38

Key roadblocks organizations encounter from the broader ecosystem
– Tensions arising from conflicting laws and overlapping authorities, creating difficulties in law enforcement
and compliance33
– AI regulatory fragmentation and policy instability, generating prohibitive compliance costs34 and
threatening confidence in investments in responsible AI practices