Unmasking Cybercrime Strengthening Digital Identity Verification against Deepfakes 2026
05 Recommendations and countermeasures

Defences must be adaptive, multi-layered and continuously improved.
The rapid evolution of deepfake technology has created an
escalating contest between genAI capable of producing
synthetic identities and fraud detection systems designed
to stop them. To maintain trust in digital identity verification,
defences must be adaptive, multi-layered and continuously
improved. This section presents a structured set of
recommendations for three major stakeholder groups within
the digital KYC ecosystem: solution providers, fraud teams
and financial institutions.

An effective defence strategy must therefore combine
technical rigour, risk-based monitoring and governance
discipline. The following recommendations are organized
according to the roles and responsibilities of three primary
stakeholders. Each set of recommendations is designed
to strengthen resistance against deepfake-based
attacks (particularly face swapping and camera injection
attempts), while balancing privacy, compliance and
operational efficiency.

Recommendations by stakeholder group
KYC solution providers
(liveness and anti-spoof vendors)
KYC vendors play a critical role in detecting manipulated
visual streams before identity verification is completed.
The following measures are recommended:
1. Camera path verification – Implement mechanisms to
detect or flag virtual cameras, mid-session device swaps
and new driver installations. This helps verify that the
video source is native and from a trusted device path,
cutting off the easiest delivery route for injected or face-
swapped streams.
2. Active and dynamic liveness checks – Use randomized
prompts and dynamic lighting variations (e.g. brief
screen flashes) to introduce unpredictability. A moving
target is hard to pre-render or synchronize perfectly,
so randomized challenges surface the latency and sync
errors typical of real-time face swap pipelines.
3. Transport-aware stream scoring – Score the actual
stream received (post-compression) for seams, flicker, lip
audio drift and texture “swim”. Perform quality checks on
the delivered media, not the local preview. Compression
often amplifies artefacts, making spoofs easier to detect.
4. Temporal consistency monitoring – Track frame-to-frame
stability (flicker, seam drift), lip sync offset and frame pacing
jitter over short time windows. Measuring visual consistency
over time helps because real-time swaps often wobble
across frames, while authentic human video does not.
5. Context telemetry APIs – Provide encoder/codec, bitrate,
resolution/aspect ratio, device/OS/browser details and
mid-session changes via API. These lightweight stream
and device fingerprints help expose telltale anomalies of
injected or synthetic video to risk engines.
6. Explainable detection outcomes – Return concise reason
codes (e.g. “VCam suspected”, “temporal jitter high”, “lip
sync off”) to provide human-readable explanations tied to
detections. This speeds up triage and appeals, enabling
consistent reviewer decisions.