The development of AI agents began in the
1950s,6 and since then they have evolved from
simple rule-based systems to sophisticated
autonomous entities capable of complex
decision-making. Early AI was characterized by
deterministic behaviour, relying on fixed rules and
logic that made these systems predictable but
unable to learn or adapt from new experiences.
Advances in AI research introduced systems
that could handle larger datasets and manage
uncertainty, leading to probabilistic outcomes and
non-deterministic behaviour. This shift enabled
more flexible and dynamic decision-making,
moving beyond rigid frameworks.
The 1990s marked a significant turning point, as
machine learning applications became more widespread. AI systems began to learn
from data, adapt over time and improve
performance. The introduction of neural
networks during this period laid the foundation
for deep learning, which has since become
essential to modern AI.
Since 2017, the rise of LLMs has transformed
AI’s capabilities in natural language understanding
and generation. These models use vast amounts
of data to produce human-like text and engage in
complex language-based tasks.
Today’s AI agents use various learning techniques, including reinforcement learning and transfer learning, which allow them to continuously refine their abilities, adapt to new environments and make more informed decisions.
2.1 Key technological trends
Over the past 25 years, the increase in computing
capacity, the availability of large quantities
of data on the internet and novel algorithmic
breakthroughs have enabled significant
developments in the base technologies behind
recent advances in the capabilities of AI agents.
These are briefly described below.
Large models
Large language models (LLMs) and large multimodal models (LMMs) have revolutionized the capabilities of AI agents, particularly in natural language processing and the generation of text, images, audio and video.
The emergence of large models has been driven by several technological advances, most notably the transformer architecture, which enables a deeper understanding of context and word relationships and has considerably improved the efficiency and performance of natural language processing tasks.7 In summary, advanced AI models have enabled machines to better understand, generate and engage with natural language.
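To make this mechanism concrete, the following minimal sketch in Python with NumPy (the report itself contains no code; the toy data and function names are purely illustrative) shows scaled dot-product self-attention, the core transformer operation for weighing relationships between words:

```python
import numpy as np

def self_attention(Q, K, V):
    """Scaled dot-product attention: each token weighs every other
    token by query-key similarity, then mixes their value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V, weights

# Toy "sentence" of 4 tokens with 8-dimensional embeddings (illustrative only).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
output, attn = self_attention(x, x, x)  # self-attention: Q = K = V = x
print(attn.round(2))  # each row sums to 1: how strongly each token attends to the others
```

Each attention weight captures how relevant one token is to another, which is what allows the model to resolve context-dependent meaning across an entire sequence.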
Machine learning and deep
learning techniques
A range of techniques have greatly improved AI
models through increased efficiency and greater
specialization. Some examples of machine- and
deep-learning techniques include:

1. Supervised learning: facilitates learning from labelled datasets, so the model can accurately predict or classify new, previously unseen data (see the first sketch after this list).8
2. Reinforcement learning: enables agents to learn optimal behaviours through trial and error in dynamic environments. Agents can continuously update their knowledge base without needing periodic retraining (a Q-learning sketch follows this list).9
3. Reinforcement learning with human
feedback: enables agents to adapt and
improve through human feedback, specifically
focusing on aligning AI behaviour with human
values and preferences.10
4. Transfer learning: involves taking a pretrained
model, typically trained on a large dataset (e.g.
to recognize cars) and adapting it to a new but
related problem (e.g. to recognize trucks).11
5. Fine-tuning: involves taking a pretrained model and further training it on a smaller, task-specific dataset. This process allows the model to retain its foundational knowledge while improving its performance on specialized tasks (the final sketch after this list combines transfer learning and fine-tuning).12
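To make the first technique concrete, here is a minimal supervised-learning sketch; it assumes scikit-learn and its bundled Iris dataset, neither of which is referenced in the report:

```python
# Supervised learning: fit a classifier on labelled examples, then
# evaluate it on held-out data it has never seen.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # labelled dataset (illustrative choice)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)        # learn from the labels
print("accuracy on unseen data:", model.score(X_test, y_test))
```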
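Reinforcement learning can be sketched with tabular Q-learning; the five-state corridor environment and the hyperparameters below are invented purely for illustration:

```python
import random

# Tabular Q-learning on a toy five-state corridor: the agent moves left
# or right and earns a reward of 1 for reaching the rightmost state.
N_STATES = 5
LEFT, RIGHT = 0, 1
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
alpha, gamma, epsilon = 0.5, 0.9, 0.3      # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            action = random.choice((LEFT, RIGHT))
        else:
            action = max((LEFT, RIGHT), key=lambda act: Q[state][act])
        next_state = min(state + 1, N_STATES - 1) if action == RIGHT else max(state - 1, 0)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Trial-and-error update: move Q towards reward + discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print([round(max(q), 2) for q in Q])  # learned values rise towards the goal state
```

Because the table is updated after every step, the agent keeps refining its behaviour online rather than being retrained in batches.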
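Finally, transfer learning and fine-tuning can be sketched together. The following assumes PyTorch and torchvision (not cited in the report), reusing an ImageNet-pretrained backbone and training only a new output layer; the random tensors stand in for a small task-specific dataset such as truck images:

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: start from a backbone pretrained on a large dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for p in model.parameters():   # freeze the foundational knowledge
    p.requires_grad = False

# Fine-tuning: attach and train a new head for a hypothetical 2-class task.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on dummy data (a real task would loop
# over a small, labelled, task-specific dataset).
x = torch.randn(8, 3, 224, 224)   # batch of 8 images
y = torch.randint(0, 2, (8,))     # binary labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print("fine-tuning step loss:", loss.item())
```

Freezing the backbone preserves what the pretrained model already knows, while the small new head adapts it to the related problem.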
These and other learning paradigms are often used
in combination and have dramatically expanded the
problem-solving capabilities of AI agents in various
areas of application. The evolution of AI agents
is detailed in Figure 2, while the agent types are discussed in more detail in the following section.