Fighting Cyber-Enabled Fraud 2025
BOX 7: Signals in action: A case study for digital infrastructure reputation transparency
The Global Signal Exchange (GSE) is a non-profit
clearing house for the real-time, international,
cross-sector sharing of scam and fraud threat
signals. Established in January 2025 with 40
million threat signals, by October 2025 the GSE
had grown to 550 million signals. More than 180
organizations have been onboarded or are in
the pipeline, spanning major industry providers,
technical and non-profit organizations and, now,
government agencies: in September 2025,
Singapore’s Government Technology Agency
became GSE’s first government member.
The GSE produces sector analysis incorporating AI
and machine-learning tools, and it individually
processes 1 million signals each day to create a
dynamic threat score that is adjusted in real time
through community feedback. The GSE
enables both open and group sharing. Participants
report that GSE signals are highly accurate, often
unique and therefore immediately actionable.
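The real-time adjustment described above can be sketched as a simple feedback-driven score update. This is an illustrative assumption, not the GSE's actual mechanism: the `update_threat_score` function, the 0.2 weight and the confirm/dispute vocabulary are all invented for the example.

```python
# Hypothetical sketch: a threat score on [0, 1] nudged by each piece of
# community feedback, using an exponential moving average. Weights and
# feedback labels are illustrative, not GSE's real scheme.

def update_threat_score(score: float, feedback: str, weight: float = 0.2) -> float:
    """Move the score toward 1.0 on a 'confirm' report and toward 0.0
    on a 'dispute' report."""
    target = 1.0 if feedback == "confirm" else 0.0
    return (1 - weight) * score + weight * target

score = 0.5  # neutral starting score for a newly shared signal
for report in ["confirm", "confirm", "dispute", "confirm"]:
    score = update_threat_score(score, report)
```

With these made-up reports the score ends above its neutral start, reflecting net-confirming feedback while a single dispute still pulls it back down.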
Industry “league tables” are powerful tools to
benchmark performance across categories of
digital infrastructure providers. For example, GSE’s
Top Level Domain (TLD) league tables present the
share of reported abuse relative to the size of each
TLD provider, allowing comparison among similarly
sized registries. The tables show that some
providers maintain near-zero abuse rates while
others have considerably higher rates – with the
highest at 22.78%.68 This creates visible incentives
for best practice and public accountability.
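A league table of the kind described can be computed as reported abuse relative to registry size, ranked worst-first. In the sketch below, all registry names and counts are invented; only the 22.78% maximum rate echoes the figure cited above.

```python
# Illustrative TLD "league table": abuse rate = reported abusive domains
# divided by registry size, sorted highest-first. All data is fictional.

registries = {
    "tld-a": {"domains": 2_000_000, "abuse_reports": 500},
    "tld-b": {"domains": 150_000, "abuse_reports": 34_170},
    "tld-c": {"domains": 900_000, "abuse_reports": 90},
}

league_table = sorted(
    ((name, r["abuse_reports"] / r["domains"]) for name, r in registries.items()),
    key=lambda row: row[1],
    reverse=True,  # worst performer at the top of the table
)
```

Normalizing by registry size is what makes comparison among similarly sized registries meaningful: here the small fictional "tld-b" tops the table at 22.78% while the larger registries sit near zero.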
Action 9 – Implement AI-powered abuse
screening during digital infrastructure enrolment
and beyond: Worsening levels of fraud and
infrastructure abuse require detection capabilities
matching their speed and sophistication. Traditional
manual reviews or static checks can no longer
keep pace with automated scams, large-scale
domain abuse or identity spoofing. AI-powered
shared services offer a scalable way to strengthen
defences at the infrastructure layer by detecting
and flagging malicious activity at the point of
registration or onboarding. AI-powered systems
should be built directly into digital infrastructure
enrolment workflows to evaluate registration requests in real time. Such systems – potentially
operated by existing consortia managing signal-
sharing or abuse-reporting platforms – can analyse
linguistic anomalies, infrastructure reuse patterns,
behavioural signals, identity inconsistencies and
reputation indicators to identify suspicious activity
early. Privacy-preserving techniques (such as
hashing, differential privacy and secure multiparty
computation) can enable cross-provider detection
while protecting user data. These systems should
flag potential risks for human review, monitor
post-enrolment activity for emerging threats and be
refined continuously through feedback from shared
abuse-intelligence networks.
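The screening flow in Action 9 can be sketched as a weighted combination of signal checks, with high scores routed to human review rather than auto-rejected. Every feature name, weight and threshold below is an invented assumption for illustration, not a description of any deployed system.

```python
# Minimal sketch of registration-time abuse screening: each triggered
# signal check contributes a weight to a risk score; requests at or
# above the threshold are flagged for human review. All values invented.

def screen_registration(request: dict, threshold: float = 0.6) -> dict:
    checks = {
        "linguistic_anomaly": 0.30,      # e.g. brand look-alike domain name
        "infrastructure_reuse": 0.25,    # e.g. nameserver seen in prior abuse
        "identity_inconsistency": 0.25,  # e.g. mismatched registrant details
        "bad_reputation": 0.20,          # e.g. registrant on a shared blocklist
    }
    score = sum(w for name, w in checks.items() if request.get(name))
    return {"risk_score": score, "flag_for_review": score >= threshold}

result = screen_registration(
    {"linguistic_anomaly": True,
     "infrastructure_reuse": True,
     "identity_inconsistency": True}
)
```

Flagging for review rather than blocking outright mirrors the recommendation that these systems surface potential risks for human decision, with thresholds and weights refined over time through shared abuse-intelligence feedback.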
“At Microsoft, we’re harnessing AI to stop fraud before it starts –
protecting billions of digital interactions, shutting down 28 million
fraudulent accounts and blocking $4 billion in fraud attempts in
just the last 12 months. Using AI, our Central Fraud and Abuse
Risk (CFAR) team’s real-time detection and global partnerships
are redefining what safety means in a digital world.”
Kelly Bissell, Corporate Vice-President, Central Fraud and Product
Abuse Risk, Microsoft