AI in Strategic Foresight 2025
Page 18 of 22 · WEF_AI_in_Strategic_Foresight_2025.pdf
The insights from the OECD-World Economic
Forum survey highlight a pivotal moment for the
foresight profession. While the findings confirm AI’s
potential to enhance efficiency, they also reveal a
range of challenges and uncertainties.
Opportunities
Increasing AI literacy: A notable finding is the
disparity in perceived AI skills across sectors, with
public sector respondents reporting a more modest
assessment of their capabilities compared to the
private sector. This may call for a coordinated
and systematic increase in AI literacy in the public
sector through training and active experimentation.
Practitioners, particularly those without prior
experience, often have a limited sense of AI's
potential and are therefore less well positioned
to identify and avoid its pitfalls. An active,
experimental approach is crucial, as the survey
found that experience with AI significantly increases
the perception of its usefulness. This is not simply
about adopting off-the-shelf tools but also about
automating different parts of the strategic foresight
process, augmenting human creativity and
developing tailored solutions.
Moving beyond simple use cases: The survey
indicates that most strategic foresight practitioners
currently use AI for simpler tasks, such as
synthesizing data, initial scanning and sense-
making. While this is a valuable starting point,
the future lies in more advanced applications.
Practitioners should invest time in using AI as
a sparring partner and idea generator, and in
using it to systematize and summarize signals,
suggest scenarios and compare collected data. The aim
is for a level of maturity where AI is integrated into
the entire strategic foresight process, with tailored
tools developed for complexity mapping, pattern
detection and communication of findings.
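The "systematize and summarize signals" step can be illustrated in code. The sketch below is a minimal, hypothetical pre-processing routine, assuming scanned signals have already been tagged by theme; the signal data, field names and themes are invented for illustration and do not come from the survey:

```python
from collections import defaultdict

def group_signals(signals):
    """Group scanned signals by a (hypothetical) theme tag, so that a
    subsequent step - human or AI-assisted - can work from a structured
    digest rather than an unsorted list."""
    themes = defaultdict(list)
    for signal in signals:
        themes[signal["theme"]].append(signal["headline"])
    return dict(themes)

# Invented example signals, for illustration only.
signals = [
    {"theme": "energy", "headline": "Grid-scale storage costs fall 20%"},
    {"theme": "labour", "headline": "Four-day week pilots expand"},
    {"theme": "energy", "headline": "Small modular reactor licensed"},
]

digest = group_signals(signals)
```

A digest of this kind is one plausible input to the scenario-suggestion and comparison steps the survey respondents describe.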
Developing human-centred workflows: The
survey reinforces the notion that AI tools are
supplements to, not substitutes for, existing human
expertise, and that their outputs require careful
analysis. Strategic foresight
practitioners should invest in developing workflows
that leverage AI to handle the “heavy lifting” of
data processing and initial drafts, thereby freeing
up time for higher-level analysis, interpretation and
critical thinking. The objective is to use AI to enable
capabilities previously infeasible, such as automated
signal detection and large-scale document analysis,
while maintaining human oversight for nuance and
originality, in addition to verification of outputs.

Challenges
The survey highlights that while most practitioners
are optimistic about AI’s potential, they also
recognize significant risks.
Guarding against unreliable and biased
outputs: The most frequently cited challenge
is the quality and trustworthiness of AI-
generated outputs. This includes concerns about
hallucinations, weak or shallow content,
and a general lack of originality or imagination.
Practitioners must maintain a critical mindset
and implement robust verification protocols to
fact-check AI outputs, especially since a lack of
transparency regarding sources and logic is a
major concern. Different AI tools could be used in
parallel to validate and check results.
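The parallel-validation idea can be sketched as a simple agreement check across several tools queried with the same question. This is a minimal, hypothetical illustration (the tool names, answers and threshold are invented; real validation would also compare cited sources and reasoning, not just final answers):

```python
from collections import Counter

def cross_check(answers, threshold=0.5):
    """Treat a claim as corroborated only if more than `threshold` of
    the tools queried in parallel return the same normalized answer.
    `answers` maps a tool name to that tool's answer string."""
    normalized = [a.strip().lower() for a in answers.values()]
    top, count = Counter(normalized).most_common(1)[0]
    corroborated = count / len(normalized) > threshold
    return top, corroborated

# Invented example: two of three tools agree, so the answer is flagged
# as corroborated; a lone dissenting answer would not be.
answers = {"tool_a": "2019", "tool_b": "2019", "tool_c": "2021"}
result = cross_check(answers)  # ('2019', True)
```

Even a crude check like this makes disagreement between tools visible, which is the practical point of running them in parallel.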
Addressing ethical and governance gaps: The
survey found that a significant challenge is a lack
of clear ethical guidelines and governance. This is
compounded by data security and confidentiality
restrictions, which prevent the feeding of sensitive
internal documents into AI engines. Strategic
foresight practitioners may need to advocate for
and help develop ethical frameworks that address
issues of data ownership, accountability and
responsible use of AI.8 Across countries, there is a
lack of guidance and resources to experiment with
AI in government in a responsible way.9
Tackling skill gaps: The survey highlighted
differing levels of AI literacy among strategic foresight
experts. Successfully experimenting with and
integrating AI into strategic foresight processes also
requires a certain level of skills in both AI and data
management. Targeted hiring and internal upskilling
and training programmes for strategic foresight
teams can help increase the internal capacity
needed to ensure a more systemic uptake of AI.
Overcoming organizational inertia: The survey
showed that respondents in the public sector,
civil society and academia face resistance from
leadership or other stakeholders when integrating
AI. This is linked to a general climate of risk aversion
and a lack of resources and time for the necessary
experimentation. Practitioners need to build a
compelling case for the value of AI, demonstrating
its benefits through small-scale experiments and
pilots. Successfully integrating AI into foresight
may be one of the best ways to demonstrate
how systemically connected issues cut across
an organization's internal silos and, as such, this
integration could help governments overcome
exactly those silos.