Shaping the AI Sandbox Ecosystem for the Intelligent Age 2025
The other key features of the EU AI Act 2024
relevant to this study are outlined below. In this
section, AI regulatory sandboxes are referred to as
AIRS for brevity:
a. How many AIRS? It is appropriate to establish
an AIRS at the national level and additional
AIRS at the subnational level in AI-producing
countries. The Federal Government may
establish an exclusive AIRS for validating AI
solutions designed for the public sector.
b. Exit report: The competent authority of the
AIRS shall issue an “exit report” to the developer
or start-up whose solution has passed all the
test criteria established by the AIRS. This exit
report, serving as a form of validation certificate,
can be used as a credential to support product
marketing and outreach.
c. Equitable access to AIRS services: Clear and
transparent eligibility criteria shall be prescribed
for developers and start-ups to enter the AIRS.
The time limit for remaining in the AIRS may
be determined based on the complexity of the
problem being addressed and the current stage
of the solution design.
d. Free service: The services of the AIRS shall be
provided free of charge, except for the cost of
specialized materials required for testing a
specific use case.
e. Facilities at AIRS: The AIRS shall have the
infrastructure, tools and capabilities for testing,
benchmarking, assessing and explaining
the dimensions of AI systems – accuracy,
robustness, trustworthiness and cybersecurity –
as well as measures to investigate risks.
f. Personal data processing: Personal data
collected by an agency for other purposes
may be used for AI systems in the AIRS,
subject to certain conditions. Chief among
these is that the proposed AI system must
serve the public interest – for instance, in
public safety, public health, healthcare,
transportation, critical information
infrastructure, networks, climate action,
energy sustainability and, especially, the
efficiency and quality of public administration
and public services.
g. Monitoring systems shall be put in place.
h. Personal data must be processed in
confidential computing rooms.

Most of the operational provisions of the Act are
useful to countries seeking to establish an AI
sandbox ecosystem.
Depending on their core objective, sandboxes
worldwide can typically be classified into three
categories:6
Regulatory sandboxes
Provide a supervised space to pilot AI solutions
under the guidance of regulators, enabling
the early identification of risks and compliance
pathways before full-scale deployment.
Particularly useful in sectors such as finance
and healthcare, where safety and compliance
are critical.
Innovation (or operational) sandboxes
Offer shared access to data, compute capacity
and other resources, enabling rapid prototyping
and collaborative development of AI applications.
Especially relevant in sectors such as
agriculture, education, logistics and MSMEs,
where rapid experimentation can unlock
scalable solutions.
Hybrid sandboxes
Combine the benefits of innovation and
regulatory models – promoting experimentation
while ensuring alignment with ethical, safety and
policy frameworks.
Well suited to integrated domains such as digital
health, fintech and smart governance, where
both agility and oversight are necessary.
Globally, countries such as Norway,7 Malaysia,8
Brazil,9 Singapore,10 the United Kingdom11 and
Spain12 are already adopting sandbox models to
advance innovation and safeguard the public interest.
AI sandboxes have the potential to act as critical
enablers for accelerating India’s AI innovation while
embedding trust, safety and inclusiveness into the
ecosystem. This paper examines their relevance in
the Indian context and proposes a framework to
guide the establishment and operationalization of
AI sandboxes.