Library
Real-world incidents, threat patterns, and the research behind every detector.
The DOJ unsealed charges against 14 people running an AI voice-clone operation that stole roughly $47M from elderly victims across the US using grandparent-emergency pretexts.
Industrialized voice-clone call centers targeting seniors at scale.
Caller demands gift cards or crypto couriers and refuses callback to a saved family number.
Deepfake videos of Portuguese public figures circulated on social media promoting a bogus trading platform, leading to multi-million-euro losses before takedowns.
Localized celebrity deepfakes paired with a custom 'broker' app.
Withdrawals require ever-larger 'tax' deposits — classic pig-butchering tell.
Scammers ran a fake livestream with an AI-generated likeness of Jensen Huang promoting a 'double your crypto' giveaway, draining funds from viewers who scanned the QR code.
Hijacked livestream + celebrity-CEO deepfake to legitimize a crypto drainer.
Looped micro-expressions, static eye-line, and a wallet QR that never appears on any official channel.
A fabricated video impersonating BSE CEO Sundararaman Ramamurthy promised extraordinary stock returns; linked to an overseas syndicate that defrauded Indian investors of ~₹400 crore.
Authority-figure deepfake funneling victims into a private trading group.
Claims contradict SEBI/BSE public statements; audio room-tone doesn't match the background.
AI-generated video ads of UK consumer champion Martin Lewis ran on Meta platforms endorsing fake investment platforms, despite Lewis publicly stating he never endorses products.
Trusted-personality deepfake injected via programmatic ad networks.
Any 'endorsement' from Lewis and an off-platform WhatsApp-group funnel.
Marketplaces and Meta ads filled with AI-generated 'press photos' of celebrities holding skincare and supplement products they never endorsed.
AI product shots at ad-network scale to lend false credibility.
Gibberish label text on close inspection and identical lighting across supposedly different shoots.
Telegram channels began offering AI-generated Aadhaar and PAN card images for ₹500 each, used to open shell wallets and obtain SIM cards before OTP-based KYC tightened.
Generative ID-document forgery for synthetic-identity onboarding.
Photo region has a different noise floor than the card background; QR code fails verification.
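The noise-floor tell above can be approximated in code: a photo pasted into a generated card usually carries different sensor or compression noise than the surrounding background. A minimal sketch using NumPy only, assuming the photo and background regions have already been cropped out; a production pipeline would use a proper denoiser residual rather than this crude box-blur high-pass.

```python
import numpy as np

def noise_floor(gray: np.ndarray) -> float:
    """Estimate noise as the std of a high-pass residual:
    the image minus its 3x3 box blur (crude but dependency-free)."""
    p = np.pad(gray.astype(float), 1, mode="edge")
    blur = sum(p[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return float(np.std(gray - blur))

def regions_inconsistent(photo_region: np.ndarray,
                         background_region: np.ndarray,
                         ratio_threshold: float = 2.0) -> bool:
    """Flag when one region is much 'cleaner' or noisier than the other.
    The 2x ratio threshold is an illustrative choice, not a tuned value."""
    a, b = noise_floor(photo_region), noise_floor(background_region)
    lo, hi = min(a, b), max(a, b)
    return lo == 0 or hi / max(lo, 1e-9) > ratio_threshold
```

On a genuine scan, both regions pass through the same sensor and codec, so their residual statistics tend to agree; a composited photo region often stands out as suspiciously clean.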
Scammers running 'digital arrest' calls in India sent victims AI-generated screenshots of fake Supreme Court / CBI orders to pressure them into transferring money.
Authority-document image used as proof inside a live social-engineering call.
Court seal is slightly asymmetric and case-number formatting doesn't match the real registry.
Generated electricity and telecom bills were used as 'address proof' to onboard mule accounts at neobanks, surfacing in multiple AML investigations.
Document-image generation tuned on real utility templates.
Account number, billing period, and meter reading don't reconcile arithmetically.
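The arithmetic tell lends itself to a rule check: generated bills tend to contain numbers that look plausible in isolation but don't reconcile. A hedged sketch; the field names (`prev_reading`, `curr_reading`, `rate_per_unit`, `amount_due`) are illustrative rather than any real biller's schema, and real bills add taxes and fixed charges that this ignores.

```python
def bill_reconciles(prev_reading: float, curr_reading: float,
                    rate_per_unit: float, amount_due: float,
                    tolerance: float = 0.05) -> bool:
    """Check that units consumed x tariff roughly matches the billed
    amount (5% slack for rounding; taxes/fees ignored in this sketch)."""
    units = curr_reading - prev_reading
    if units < 0:  # meter readings should never run backwards
        return False
    expected = units * rate_per_unit
    return abs(expected - amount_due) <= tolerance * max(expected, 1.0)
```

A bill that fails this kind of cross-field check is not proof of forgery, but it is a cheap reason to route the document to manual review.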
A finance employee joined a multi-person video call where every other participant was an AI-generated likeness of senior executives, then approved 15 transfers totaling ~$25M.
Group video impersonation creating false consensus and urgency.
Other participants never deviated from a tight script and avoided side-channel chat.
An exec received WhatsApp messages and a voice call mimicking the CEO, pushing a confidential 'acquisition' deal. The exec asked a personal trivia question, and the impostor abruptly ended the call.
Voice clone + trusted-app pretext to bypass formal channels.
New phone number, profile photo of CEO, and refusal to switch to a corporate channel.
An AI-cloned voice of US President Biden told New Hampshire primary voters not to vote. The calls were traced to a political consultant, and the FCC later ruled AI-voice robocalls illegal.
Mass voice-clone robocall used for voter suppression.
Spoofed caller ID, identical script across calls, no legitimate campaign source.
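The identical-script tell can be checked by comparing call transcripts: independent human calls rarely share long word sequences, while a synthesized campaign replays one script verbatim. A small sketch using word-shingle Jaccard similarity; the 4-word shingle size and 0.8 threshold are arbitrary illustrative choices, not values from the source.

```python
def shingles(text: str, k: int = 4) -> set[tuple[str, ...]]:
    """All k-word shingles of a normalized transcript."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Set overlap in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def likely_same_script(t1: str, t2: str, threshold: float = 0.8) -> bool:
    """Two transcripts sharing most 4-word shingles suggest one
    synthesized script rather than independent human callers."""
    return jaccard(shingles(t1), shingles(t2)) >= threshold
```

Run pairwise over a batch of reported calls, this clusters a robocall wave even when caller IDs are all spoofed differently.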
Attackers used image-to-image models to clone a vendor's invoice template, swapping the IBAN. Email passed DKIM because it came from a compromised vendor mailbox.
Generative redesign of a trusted document to swap payment details.
Slight font kerning differences and a non-matching IBAN country code.
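The non-matching IBAN tell is cheap to automate: a crudely edited account number often breaks the ISO 13616 mod-97 check digits, and a country prefix that differs from the vendor's country on file is an immediate hold-for-review signal. A minimal sketch; the `expected_country` input is an assumption about what a payables system keeps on file.

```python
import string

def iban_checksum_ok(iban: str) -> bool:
    """ISO 13616 mod-97 check: move the first four characters to the
    end, map letters A-Z to 10-35, and the number must equal 1 mod 97."""
    s = iban.replace(" ", "").upper()
    if len(s) < 5 or not s[:2].isalpha() or not s[2:4].isdigit():
        return False
    rearranged = s[4:] + s[:4]
    digits = "".join(str(string.ascii_uppercase.index(c) + 10) if c.isalpha() else c
                     for c in rearranged)
    if not digits.isdigit():
        return False
    return int(digits) % 97 == 1

def iban_red_flags(iban: str, expected_country: str) -> list[str]:
    """Human-readable reasons to hold a payment for manual review."""
    flags = []
    s = iban.replace(" ", "").upper()
    if not iban_checksum_ok(s):
        flags.append("checksum failure (likely typo or crude edit)")
    if s[:2] != expected_country.upper():
        flags.append(f"country prefix {s[:2]} != vendor country {expected_country.upper()}")
    return flags
```

Note this only catches sloppy edits: a fraudster can generate a valid IBAN in the right country, so the durable control remains out-of-band confirmation of any changed payment details.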
School-age victims were targeted with AI 'nudify' apps that fabricated explicit images from social-media photos, then used for crypto sextortion demands.
Off-the-shelf image-to-image model weaponized for blackmail.
Smooth, anatomically inconsistent skin regions and lighting that doesn't match the source photo.
A viral image of a girl clutching a puppy in floodwaters was used to push partisan narratives about disaster response; later confirmed AI-generated.
Emotionally charged AI image deployed during a live news cycle.
Six-fingered hand, plastic-looking tear streaks, and no source photographer or outlet.
Fraudsters submitted AI-generated 'screenshots' of inflated bank balances to qualify for personal loans on Indian fintech apps; several lenders disbursed funds before spotting the pattern.
Image-of-text generation of plausible bank UI to bypass document KYC.
Pixel-aligned columns but inconsistent font weights between header and rows; no scroll artifacts.
Viral 'BBC' and 'Reuters' headline screenshots were AI-composed to push fake breaking-news narratives, racking up millions of impressions before community notes appeared.
Image-of-text mimicry of trusted publishers to spread disinformation.
Headline font weight doesn't match the outlet's actual style; URL bar is cropped or absent.
An employee received WhatsApp voice messages impersonating the CEO. They reported it as suspicious; no breach occurred.
Audio-only clone targeting a single employee outside business hours.
Off-channel contact, urgency, and pressure for secrecy.
AI-generated video and voice of a popular creator promoted a fake $2 iPhone giveaway, harvesting payment details from users who clicked through.
Creator-likeness deepfake ad on a high-trust short-video feed.
Lip-sync drift on consonants and a too-good-to-be-true CTA.
A fabricated photo of an 'explosion' near the Pentagon went viral via verified accounts, briefly moving the S&P 500 before being debunked.
Single AI-generated news photo amplified through paid-verified social accounts.
Melted railings, distorted lamp posts, and no corroborating photos from any other angle.
A Midjourney image of Pope Francis in a designer puffer jacket fooled millions before being identified as one of the first viral mainstream AI photo hoaxes.
Plausible celebrity-styled AI image shared without provenance.
Warped fingers on the coffee cup, blurred crucifix chain, and inconsistent eyeglass frame.
A widely shared series of AI-generated images depicted Donald Trump being tackled by police, mistaken by many viewers as real news photography.
Photorealistic political image set staged like a news-wire burst.
Extra fingers, fused legs in crowd shots, and no matching photo from any wire service.
A low-quality deepfake video of President Zelensky telling Ukrainian troops to surrender was posted on a hacked news site and social media.
Disinformation deepfake injected via compromised distribution channel.
Mismatched head/body proportions and unnatural neck movement.
An executive transferred €220k after a phone call using an AI-cloned voice of the parent-company CEO requesting an urgent supplier payment.
Single-call voice clone with plausible vendor backstory.
Slight 'metallic' overtone and unusual sentence rhythm noted afterward.
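The 'unusual sentence rhythm' tell can be quantified: human phrasing varies in length, while some synthesized speech paces phrases almost identically. A rough sketch operating on a precomputed energy envelope; the thresholds are illustrative, and real detectors use far richer prosodic and spectral features.

```python
import numpy as np

def segment_lengths(energy: np.ndarray, thresh: float) -> list[int]:
    """Lengths of contiguous above-threshold (speech) runs in an
    energy envelope."""
    lengths, run = [], 0
    for active in energy > thresh:
        if active:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return lengths

def rhythm_too_regular(energy: np.ndarray, thresh: float = 0.1,
                       cv_floor: float = 0.15) -> bool:
    """Human phrasing varies; near-identical phrase lengths (a low
    coefficient of variation) can indicate synthesized pacing."""
    lengths = segment_lengths(energy, thresh)
    if len(lengths) < 4:
        return False  # too little evidence to judge
    arr = np.array(lengths, dtype=float)
    return float(arr.std() / arr.mean()) < cv_floor
```

Treat a positive as one weak signal to combine with others (callback verification, spectral artifacts), never as a verdict on its own.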
Common synthesis families
Text-to-speech narration: very natural prosody but flat affect on long, emotional sentences.
Voice cloning / conversion: subtle robotic 'shimmer' on sibilants; breaths placed mechanically.
Face-swap / lip-sync video: locked head pose, limited gaze, lip-sync slip on plosives.
Single-photo animation: a lone still image is animated, but neck and shoulders barely move.
GAN / diffusion still images: hand and ear geometry errors; over-smooth skin micro-texture.
Text-to-video generation: physically impossible object permanence across cuts and reflections.
Public research & standards
FaceForensics++: benchmark dataset for face-manipulation detection.
DFDC (Deepfake Detection Challenge): large-scale labeled deepfake video dataset.
DeepfakeBench: unified evaluation framework for deepfake detectors.
ASVspoof: anti-spoofing challenge series for voice and speaker verification.
C2PA: open provenance standard for media authenticity.
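The provenance standard above gives one concrete first check: C2PA manifests are embedded in JPEGs as JUMBF boxes inside APP11 marker segments, so their presence can be scanned cheaply. A hedged sketch that only detects APP11 segments; actually validating a manifest requires a full C2PA validator, and absence proves nothing because most pipelines strip metadata.

```python
def has_app11_segments(jpeg_bytes: bytes) -> bool:
    """Walk JPEG marker segments looking for APP11 (0xFFEB), where C2PA
    manifests (JUMBF boxes) are embedded. Presence is only a hint that
    provenance data exists; it does not verify the manifest."""
    i = 2  # skip the SOI marker (0xFFD8)
    n = len(jpeg_bytes)
    while i + 4 <= n and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start of scan: entropy-coded data follows
            break
        if marker == 0xEB:
            return True
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        i += 2 + length  # marker bytes + segment (length includes itself)
    return False
```

In practice, route files with a detected manifest to a real C2PA verification step, and treat everything else as simply unsigned.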
