Found a fake image of you?
Get proof. Get it removed.
Upload the image. We run a full forensic analysis in under 60 seconds and give you a certified evidence report — with the exact removal contact for wherever it's hosted.
- 99.2% · Detection accuracy
- <60s · Mean analysis time
- SHA-256 · Cryptographic hash
The Reality
They made it in seconds.
You've been fighting it for months.
500M+
non-consensual deepfake images in circulation in 2024 — the vast majority never removed
They made it in seconds.
A fake image of you can be generated in under a minute with no technical skill. By the time you find it, it's already been shared, screenshot, and reuploaded across dozens of platforms.
New deepfakes take < 60 seconds to create
Platforms won't act without proof.
Every major platform requires a structured forensic report before they'll investigate. A screenshot and your word are not enough. Without verified evidence, your removal request gets closed.
Unverified reports are rejected >89% of the time
Most victims never get it removed.
You would need to know each platform's specific reporting format, the right contact for abuse teams, and how to establish forensic chain-of-custody — all while dealing with the trauma of it existing at all.
We've mapped 138 platform removal contacts for you
How it works
From image to evidence in three steps.
No account. No install. Results in under a minute.
- 01
Upload the image
Any JPEG, PNG, or WebP. No account needed. The file is SHA-256 hashed on receipt — your original is never retained after analysis.
SHA-256 hash on upload
We never store your image
- 02
Seven-layer forensic analysis
Error-level analysis, EXIF integrity check, GAN artifact detection, facial landmark distortion, clone region mapping, noise pattern analysis, and compression fingerprinting, all run concurrently (see the sketch after these steps).
~30–60 seconds
All layers run in parallel
- 03
Download a signed evidence report
A structured PDF with per-layer confidence scores, a cryptographic case ID, and a SHA-256 image hash. Formatted to meet platform takedown and legal filing requirements.
PDF · case ID · signature
Accepted by major platforms
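For the technically curious, here is the shape of that flow as a minimal Python sketch. The layer functions are illustrative stand-ins, not Sniffer's implementation; the point is the structure: hash on receipt, run every layer in parallel, retain nothing.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for the analysis layers; each would return
# a confidence score in [0, 100]. Real detectors are out of scope here.
def error_level_analysis(img: bytes) -> float:
    return 0.0  # placeholder

def exif_integrity(img: bytes) -> float:
    return 0.0  # placeholder

LAYERS = {
    "error_level_analysis": error_level_analysis,
    "exif_integrity": exif_integrity,
    # ...five further layers in the full pipeline
}

def analyse(image_bytes: bytes) -> dict:
    """Hash on receipt, run all layers concurrently, keep no copy."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, image_bytes)
                   for name, fn in LAYERS.items()}
        scores = {name: f.result() for name, f in futures.items()}
    return {"sha256": digest, "scores": scores}
```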
Analysis Capabilities
Seven independent layers.
One verdict.
Each layer is independent. A manipulated image rarely fools all seven — the combination is what makes the verdict reliable.
Error-Level Analysis
Reveals compression inconsistencies that indicate regions were edited or composited into the image after original encoding.
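As an illustration of the classical technique (a sketch, not our production detector): re-save the image at a fixed JPEG quality and amplify the per-pixel difference. Regions with a different compression history stand out.

```python
import io
from PIL import Image, ImageChops

def ela_map(path: str, quality: int = 90) -> Image.Image:
    """Re-save at a fixed JPEG quality and return the amplified
    per-pixel difference; edited regions typically stand out."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Amplify so subtle residuals become visible for inspection.
    return diff.point(lambda px: min(255, px * 15))
```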
GAN Artifact Detection
Identifies spectral and spatial patterns unique to AI-generated faces — invisible to the human eye but statistically consistent across GAN outputs.
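One widely published family of checks, sketched here for illustration only: GAN upsampling tends to leave periodic energy in the high-frequency tail of the image spectrum, which an azimuthally averaged power spectrum makes straightforward to compare against natural photographs.

```python
import numpy as np
from PIL import Image

def radial_power_spectrum(path: str) -> np.ndarray:
    """Azimuthally averaged log-power spectrum of a grayscale image.
    GAN upsampling often leaves periodic high-frequency peaks that
    natural camera images lack."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    power = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
    cy, cx = power.shape[0] // 2, power.shape[1] // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    counts = np.bincount(r.ravel())
    # Mean spectral power at each integer radius (spatial frequency).
    return np.bincount(r.ravel(), weights=power.ravel()) / np.maximum(counts, 1)
```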
EXIF Forensics
Cross-checks camera model, software IDs, GPS tags, and creation timestamps. Missing or inconsistent metadata is a primary manipulation signal.
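A minimal illustration with Pillow; the expected-tag set here is an assumption for the example, not our full rule set:

```python
from PIL import Image, ExifTags

# Fields whose absence is a common (not conclusive) manipulation signal.
EXPECTED = {"Make", "Model", "Software", "DateTimeOriginal"}

def exif_signals(path: str) -> dict:
    """Collect EXIF tags and flag missing expected fields."""
    exif = Image.open(path).getexif()
    tags = {ExifTags.TAGS.get(k, str(k)): v for k, v in exif.items()}
    # DateTimeOriginal lives in the Exif sub-IFD, not the base IFD.
    sub = exif.get_ifd(ExifTags.IFD.Exif)
    tags.update({ExifTags.TAGS.get(k, str(k)): v for k, v in sub.items()})
    return {
        "missing": sorted(EXPECTED - set(tags)),
        "software": tags.get("Software"),  # editing tools often write this
    }
```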
Clone Region Mapping
Detects copy-move forgeries: regions duplicated within the same frame to hide or add content — common in document and scene manipulation.
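In its simplest form the technique looks like this (a sketch; production detectors also match overlapping, rotated, and rescaled blocks):

```python
import numpy as np
from PIL import Image

def duplicate_blocks(path: str, block: int = 16) -> list[tuple]:
    """Naive copy-move check: report non-overlapping blocks whose
    pixel content repeats elsewhere in the same frame."""
    img = np.asarray(Image.open(path).convert("L"))
    seen, matches = {}, []
    for y in range(0, img.shape[0] - block + 1, block):
        for x in range(0, img.shape[1] - block + 1, block):
            tile = img[y:y + block, x:x + block]
            if tile.std() < 2:  # skip flat regions (sky, plain walls)
                continue
            key = tile.tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches
```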
Facial Landmark Distortion
Measures deviation in 68 facial landmarks against expected proportions. Warp-based deepfakes leave measurable asymmetries in the underlying facial geometry.
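A sketch of the underlying measurement, assuming a standard 68-point landmark array from any detector (dlib's, for instance); only the jaw-line mirror pairs are shown here:

```python
import numpy as np

# Mirror pairs along the jaw in the standard 68-point scheme; the
# full table also covers brows, eyes, nose, and mouth.
JAW_PAIRS = [(0, 16), (1, 15), (2, 14), (3, 13),
             (4, 12), (5, 11), (6, 10), (7, 9)]

def asymmetry_score(landmarks: np.ndarray, pairs=JAW_PAIRS) -> float:
    """Mean deviation of mirror-pair landmarks from the face midline,
    normalised by face width. landmarks: (68, 2) array of (x, y).
    For a symmetric face each pair's offsets cancel out."""
    mid_x = landmarks[:, 0].mean()
    width = np.ptp(landmarks[:, 0]) or 1.0
    devs = [abs((landmarks[l, 0] - mid_x) + (landmarks[r, 0] - mid_x))
            for l, r in pairs]
    return float(np.mean(devs) / width)
```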
Noise Pattern Analysis
Every camera sensor leaves a unique noise fingerprint. Composited regions break this pattern — detected even after JPEG recompression.
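An illustrative version of the residual statistic (production sensor-fingerprint matching is more involved): subtract a median-filtered copy to isolate the noise, then compare how the residual spreads from block to block.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

def noise_inconsistency(path: str, block: int = 64) -> np.ndarray:
    """Per-block standard deviation of the noise residual (image minus
    its median-filtered copy). Spliced regions often break the
    otherwise uniform noise floor of a single sensor."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    residual = img - median_filter(img, size=3)
    h, w = (s - s % block for s in img.shape)  # crop to whole blocks
    tiles = residual[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.std(axis=(1, 3))  # outlier blocks flag composited areas
```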
Compression Fingerprinting
Analyzes DCT block quantization tables to detect double-compression artifacts — a telltale sign of re-saving after editing or AI generation.
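Pillow exposes JPEG quantization tables directly, so the first step of this check fits in a few lines. Detecting double compression additionally requires the DCT coefficient histograms, which this sketch omits:

```python
from PIL import Image

def quantization_tables(path: str) -> dict:
    """Read the JPEG quantization tables. Tables inconsistent with the
    claimed source, or DCT histograms showing the periodic gaps of
    double compression, suggest a re-save after editing."""
    with Image.open(path) as im:
        if im.format != "JPEG":
            raise ValueError("quantization tables exist only for JPEG")
        # dict of table id -> 64 coefficients in zigzag order
        return {tid: list(tbl) for tid, tbl in im.quantization.items()}
```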
Evidence Output
Every case produces a signed forensic record.
Per-layer confidence scores, anomaly annotations, and a cryptographic case ID — formatted to meet platform takedown and legal filing standards.
- 7 · Analysis layers
- SHA-256 · Hash algorithm
- <60s · Report generation
Sniffer · Forensic Case Report
SNF‑2026‑0312‑XK7
Issued 05 Mar 2026 · 14:32:07 UTC
Verdict
Manipulated
5/7 layers flagged
- Error-Level Analysis · 94 · Flagged
- EXIF Metadata Integrity · 12 · Clean
- GAN Artifact Detection · 87 · Flagged
- Facial Landmark Distortion · 78 · Flagged
- Clone Region Mapping · 31 · Clean
- Noise Pattern Analysis · 91 · Flagged
- Compression Fingerprinting · 66 · Flagged
SHA-256
a3f8d2c0…1b9e74f3
Sample · actual case data is private by default
Who is Sniffer for?
Built for everyone
who needs the truth.
From individuals protecting themselves, to institutions operating at scale — the same forensic engine, the same institutional-grade report.
Victims of image-based abuse
Someone used an intimate or manipulated image of you without consent. You need verifiable evidence — not a screenshot, not a claim. A cryptographically signed forensic report that platforms, law enforcement, and courts can act on.
Start anonymous case
Platform abuse teams
Your queue is in the millions. Our API returns a machine-readable verdict in under 30 seconds — with a weighted signal breakdown and SHA-256 audit hash your team can append directly to moderation decisions.
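What an integration could look like, as a hedged sketch: the endpoint URL, field names, and response shape below are hypothetical placeholders, not documented API.

```python
import requests

# Hypothetical endpoint and schema, shown for illustration only.
API = "https://api.example.com/v1/verify"

def verify(image_bytes: bytes, api_key: str) -> dict:
    resp = requests.post(
        API,
        headers={"Authorization": f"Bearer {api_key}"},
        files={"image": ("evidence.jpg", image_bytes, "image/jpeg")},
        timeout=30,
    )
    resp.raise_for_status()
    # e.g. {"verdict": ..., "scores": {...}, "sha256": "..."}
    return resp.json()
```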
Start verification
Fact-checkers & newsrooms
Viral images move faster than editorial cycles. Verify source integrity before publication — ELA residuals, C2PA provenance, and perceptual hash consensus surfaced in one report your legal team can cite.
Try a verification
Solicitors & law enforcement
Chain-of-custody integrity starts at first contact with evidence. Every Sniffer report includes a tamper-evident report hash, pipeline version, and full algorithm audit trail, suitable for submission as a digital exhibit.
View sample report
NGOs & advocacy organisations
Document image-based harassment at scale. Batch API access, anonymous case IDs, and exportable reports make Sniffer a fit for survivor support programmes and policy research alike.
Start a case
Privacy & Trust
Your report.
Your evidence.
- ✓No account required
- ✓Anonymous by default
- ✓Report is tamper-evident
- ✓No images stored
- ✓SHA-256 audit trail
Sniffer never stores the image after analysis. The report hash is derived from the forensic output, not the original file.
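In sketch form, with illustrative field names rather than our actual schema: hash a canonical serialisation of the forensic output, so any later change to the report changes the hash.

```python
import hashlib
import json

def report_hash(report: dict) -> str:
    """Tamper-evidence sketch: hash a canonical serialisation of the
    forensic output; the original image never enters the digest."""
    canonical = json.dumps(report, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Illustrative report shape only.
demo = {"case_id": "SNF-...", "scores": {"ela": 94, "exif": 12}}
print(report_hash(demo))
```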
Get Started
Your image deserves
a forensic answer.
Upload once. No account. Receive a cryptographically signed report in under 60 seconds.
Private by default · No personal data stored · Anonymous reports supported
