Navigating the AI Frontier 2024
and collective cognitive capabilities. For
example, increased reliance on AI agents for
social interactions, such as virtual assistants,
AI agent companions and therapists, could
contribute to social isolation and possibly affect
mental well-being over time.
–Societal resistance: Resistance to the
employment of AI agents could hamper their
adoption in some sectors or use cases.
–Employment implications: The use of AI
agents is likely to transform a variety of jobs by
automating many tasks, increasing productivity
and altering the skills required in the workforce,
thus causing partial job displacement.
Such displacement could primarily affect
sectors reliant on routine and repetitive
tasks, in industries such as manufacturing or
administrative services.
–Financial implications: Organizations
could face higher costs associated with the
deployment of AI agents, such as expenses for
securing software systems against cyberthreats
and managing associated operational risks.
Ethical risks
Examples of ethical risks include:
–Ethical dilemmas in AI decision-making:
The autonomous nature of AI agents raises
ethical questions about their decision-making
capabilities in critical situations.
–Challenges in ensuring AI transparency
and explainability: Many AI models operate
as “black boxes”, making decisions based
on complex and opaque processes, thereby
making it difficult for users to understand
or interpret how decisions are made.46 A
lack of transparency could lead to concerns
about potential errors or biases in the AI
agent’s decision-making capabilities, which
would hinder trust and raise issues of moral
responsibility and legal accountability for
decisions made by the AI agent.
3.3 Addressing the risks and challenges
To enable the autonomy of AI agents for cases
where it would greatly improve outcomes,
several challenges must be addressed. These
challenges include safety and security-related
assurance, regulation, moral responsibility and legal
accountability, data equity considerations, data
governance and interoperability, skills, culture and
perceptions.47 Addressing these challenges requires
a comprehensive approach throughout the stages
of design, development, deployment and use of
AI agents as well as changes across policy and
regulation. As advanced AI agents and multi-agent
systems continue to evolve and integrate
into various aspects of digital infrastructure,
associated governance frameworks that take
increasingly complex scenarios into consideration
need to be established.
In assessing and mitigating the risks of potential
harm from AI agents, it is essential to understand
the specific application and environment of the AI
agent (including stakeholders that may be affected).
The risks of potential harm from an AI agent stem
largely from the context in which it is deployed.48
In high-stakes environments such as healthcare or
autonomous driving, even small errors or biases can
lead to significant consequences for the users of
such systems. Conversely, in low-stakes contexts,
such as customer service, the same AI agent might
pose minimal risks, as mistakes are less likely to
cause serious harm.
Within the context of a specific application
and environment, it is important to adopt a
risk analysis methodology that systematically
identifies, categorizes and assesses all of the
risks associated with the AI agent. Such an
approach helps ensure that appropriate and
effective mitigation mechanisms and strategies
can be implemented by relevant stakeholders
at the technical, socioeconomic and ethical levels.
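Such a systematic identify-categorize-assess step can be sketched in code. The sketch below is a minimal, hypothetical risk register, assuming an illustrative likelihood x impact scoring on 1-5 scales and the three stakeholder levels named above; the risk names, scales and scores are assumptions for illustration, not a standard taxonomy.

```python
from dataclasses import dataclass

# Hypothetical risk register: each risk is identified by name,
# categorized at one of the three levels named in the text
# (technical, socioeconomic, ethical) and assessed by a simple
# likelihood x impact score. Scales and entries are illustrative.

@dataclass
class Risk:
    name: str
    category: str      # "technical", "socioeconomic" or "ethical"
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def prioritize(risks: list[Risk]) -> list[Risk]:
    """Rank risks so mitigation effort targets the highest scores first."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

register = [
    Risk("opaque decision-making", "ethical", 4, 3),
    Risk("prompt-injection attack", "technical", 3, 5),
    Risk("partial job displacement", "socioeconomic", 3, 4),
]

for r in prioritize(register):
    print(f"{r.score:>2}  {r.category:<13} {r.name}")
```

Because the assessment is context-dependent, the same risk entry would carry different likelihood and impact values in a high-stakes deployment (for example, healthcare) than in a low-stakes one (for example, customer service).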
Technical risk measures
Examples of technical risk measures include:
–Improving information transparency: Knowing
where, why, how and by whom information
is used is critical for understanding how a
system operates and why the agent makes
certain decisions. Measures to improve the
transparency of AI agents include integrating
behavioural monitoring and implementing
thresholds, triggers and alerts, which involve
continuous observation and analysis of the
agent’s actions and decisions.
Implementing behavioural monitoring helps to
ensure that failures are better understood and
properly mitigated when they occur.49
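The behavioural monitoring described above can be sketched as follows. This is a minimal, assumed illustration: it observes a rolling window of agent actions, checks an error-rate threshold and triggers an alert when it is crossed. The metric, window size and limit are hypothetical choices, not values from the source.

```python
from collections import deque

# Hypothetical sketch of behavioural monitoring for an AI agent:
# continuously observe action outcomes in a rolling window and
# trigger an alert when a configured threshold is exceeded.

class BehaviourMonitor:
    def __init__(self, window: int = 100, error_rate_limit: float = 0.05):
        self.outcomes = deque(maxlen=window)   # True = action failed
        self.error_rate_limit = error_rate_limit
        self.alerts: list[str] = []

    def record(self, failed: bool) -> None:
        """Observe one agent action and check the threshold."""
        self.outcomes.append(failed)
        rate = sum(self.outcomes) / len(self.outcomes)
        # Wait for a minimum sample before alerting to avoid noise.
        if len(self.outcomes) >= 20 and rate > self.error_rate_limit:
            # Threshold crossed: trigger an alert for human review.
            self.alerts.append(f"error rate {rate:.0%} exceeds limit")

monitor = BehaviourMonitor()
for i in range(30):
    monitor.record(failed=(i % 5 == 0))  # simulate a 20% failure rate
```

A real deployment would track richer behavioural signals (for example, tool-use patterns or deviation from expected plans) and route alerts to an incident process, so that failures are understood and mitigated when they occur.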