Global Risks Report 2025
Algorithmic bias
Algorithmic bias can both be influenced by
Misinformation and disinformation and be a
cause of it.55 The risks of algorithmic bias are
heightened when the data used for training an
AI model is itself a biased sample. Sometimes,
the bias can be obvious. For example, in a hiring
process, a set of bios used as examples of good
candidates might be drawn from a pool of previous
candidates, all of whom might have the same
gender, race or nationality. Other times, a bias can
be less obvious: for example, a model could be
trained on citizens’ previous spending on education,
without accounting for the fact that certain minority
groups typically spend less on education. Synthetic
data may be used with the aim of removing bias,
but it can itself introduce new biases.56
Examples of bias against citizens include waiting
times for a government appointment being assigned
on the basis of questionable input data and criteria,
or automated systems failing to respond adequately
to citizens' needs. When
algorithms are applied to sensitive decisions, biases
in training data or assumptions made during model
design can perpetuate or exacerbate inequities,
further disenfranchising marginalized groups.
Predictive policing is one area where algorithmic
bias based on race can be a concern.57 Such risks
are heightened further when there is no human
participation in decision-making.
Unless there are clear accountability frameworks
in place, the use of automated algorithms makes
it challenging to assign responsibility when harmful
or erroneous decisions are made, especially when
AI is involved. Automated algorithms often operate
as “black boxes”, making it difficult for individuals to
understand how decisions are made. This lack of
transparency and accountability can foster mistrust
and skepticism about the fairness and accuracy of
decisions taken.

In many cases, algorithmic bias can result from a
lack of knowledge, testing or sufficient oversight.
How a model is developed, applied and governed
is key to mitigating these risks. Independently of
the input dataset used, the personal biases of the
individuals who design the model's assumptions
can also lead to unjust outcomes.
These personal biases may be accidental (for
example, the result of those inputting the data
having insufficient technical expertise) or intentional,
for example, to pursue political aims.
One risk that could come into focus more over the
next two years is algorithmic bias against people’s
political identity.58 Algorithmic political bias might be
used intentionally to, for example, affect recruitment
into public-sector jobs or access to certain public
services or financial services. What makes this risk
especially dangerous is that individuals’ political
biases are widely known, and those biases can
easily find their way into algorithms or data sets.
Furthermore, individuals’ political views can
increasingly be determined, even against their will,
from their online activities.59
As with individual biases, societal biases can
also play a role.60 These are likely to become more
prevalent as societal divisions deepen. In the
GRPS, Societal polarization is ranked #4 over a
two-year time horizon. Regionally, Latin America
and the Caribbean, Eastern Asia and Europe
manifest the most pressing concerns over Societal
polarization in the next two years, according to the
EOS.
Citizen surveillance risks
Government technology (GovTech) is entering a
new era, as AI, data analytics and digital platforms
become the backbone of public administration.61
Technology companies have long worked closely
with governments, for example, in the sensitive