AI Agents in Action: Foundations for Evaluation and Governance (2025)
Introduction
AI agents are gradually becoming embedded in
an increasing number of tasks, workflows and
use cases that span cloud and edge computing,
leading the way to more widespread adoption.
As the transition from prototyping to deployment
accelerates, adoption remains concentrated
among early adopters. According to a recent global
survey of executives, 82% of organizations plan to
integrate agents within the next one to three years,
indicating that most efforts are still in the planning
or pilot phase,1 while moving towards wider adoption.
The concept of software agents has been studied
for decades in fields such as robotics, autonomous
systems and distributed computing. What is different today is the rise of data-driven models, particularly
generative artificial intelligence (AI) and large language
models (LLMs), which are enabling the emergence
of a new generation of LLM-based agents. These
systems can generate plans, simulate reasoning
and adapt their behaviour through feedback
mechanisms in ways that were previously not
possible. This evolution has sparked a new
wave of experimentation, with researchers and
companies rapidly creating prototypes of agents
in various fields. This report focuses mainly on
LLM-based agents (“AI agents” for short), whose
growing capabilities create both significant
opportunities for adoption and a new set of
challenges in governance and safety.

AI agents are shifting from prototypes to
deployment, bringing both transformative
opportunities and novel governance challenges.
FIGURE 1: Foundations for the responsible adoption of AI agents
1. Technical foundations: lay the groundwork
2. Functional classification: define the agent’s role
3. Evaluation and governance: scale with confidence