
Detect manipulated media, pinpoint tampering, and generate forensic proof for cybercrime action — in minutes.
From verification to forensic proof and cybercrime reporting — Sniffer handles every step.
Detect whether an image or video has been manipulated using cryptographic and forensic checks — not probabilistic guesses.
Pinpoint exactly where an image was altered — faces, regions, or injected elements — with visual forensic overlays.
Automatically classify incidents based on identity misuse, sexual exploitation risk, and manipulation intensity.
Extract EXIF data, timestamps, device fingerprints, and editing traces to strengthen forensic context.
Generate court-ready forensic reports with tamper visuals and submit them directly to cybercrime authorities.
A deterministic pipeline built for evidence, accountability, and real-world action
Media Intake
Suspected images or videos are securely uploaded for forensic verification.
Cryptographic Proof
Deterministic hashing verifies media integrity against a trusted reference and exposes any post-creation tampering.
Tamper Localization
Manipulated regions are precisely identified and visually highlighted.
Forensic Reporting
A structured, verifiable report is generated for takedown or legal escalation.
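The cryptographic-proof step above can be sketched in a few lines. This is a minimal illustration assuming a plain SHA-256 digest of the raw media bytes; Sniffer's actual hashing scheme is not public, and the function names here are placeholders.

```python
# Minimal sketch of deterministic tamper detection via content hashing.
# Assumption: SHA-256 over raw bytes stands in for Sniffer's real scheme.
import hashlib

def media_digest(data: bytes) -> str:
    """Deterministic SHA-256 digest of the raw media bytes."""
    return hashlib.sha256(data).hexdigest()

def is_tampered(original: bytes, suspect: bytes) -> bool:
    """Any post-creation change to the bytes changes the digest."""
    return media_digest(original) != media_digest(suspect)

# Placeholder byte strings standing in for real image files.
original = b"...original pixel data..."
edited = b"...edited pixel data..."
assert not is_tampered(original, original)
assert is_tampered(original, edited)
```

Because the digest is deterministic, the same input always yields the same output, so a mismatch is proof that the bytes changed, not a probabilistic score.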
Sniffer does not rely on probabilistic visual cues alone. Instead, it establishes deterministic proof of media authenticity using cryptographic verification and precise tamper localization.
Each verification produces a structured forensic record that can support platform takedowns, legal proceedings, and cybercrime investigations — without storing or exposing sensitive content.
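To make "structured forensic record" concrete, here is an illustrative sketch of what such a record might contain. The field names and schema are assumptions for the example, not Sniffer's actual report format.

```python
# Illustrative forensic-record structure; field names are hypothetical
# stand-ins, not Sniffer's real schema.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ForensicRecord:
    media_sha256: str   # deterministic content digest of the media
    verdict: str        # e.g. "tampered" or "authentic"
    tampered_regions: list = field(default_factory=list)  # (x, y, w, h) boxes
    notes: str = ""

    def to_json(self) -> str:
        """Serialize the record for a takedown request or legal submission."""
        return json.dumps(asdict(self), sort_keys=True)
```

A record like this carries only the digest, verdict, and region coordinates, which is how a report can support escalation without storing or exposing the sensitive content itself.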
Deterministic
No guesswork
Secure
No storage
Scalable
Institution-ready
Verifiable
Court-admissible
Designed to integrate with institutional workflows, law enforcement reporting, and victim support pipelines.
Sniffer shortens victim response time, strengthens legal action, and restores trust in digital evidence.
Sniffer empowers victims of non-consensual deepfake abuse by drastically reducing the time and effort required to prove media manipulation. Visual tamper localization and forensic reports help limit prolonged exposure, harassment, and repeated circulation of harmful content.
By providing deterministic cryptographic verification instead of probabilistic AI predictions, Sniffer generates evidence suitable for cybercrime complaints, internal investigations, and legal proceedings. This bridges the technical gap between victims and enforcement agencies.
Sniffer assists digital platforms in validating abuse claims with structured forensic data, reducing false reports while enabling faster takedown decisions. This supports fair moderation without over-censorship.
As generative media becomes more accessible, Sniffer establishes accountability by discouraging malicious use and restoring trust in digital authenticity. Its adoption encourages responsible AI deployment at a societal and regulatory level.
Sniffer began as a college project and evolved into a forensic response to a growing problem — where deepfake harm spreads faster than clarity, evidence, or action.

Founder & Lead Developer
While studying the rapid rise of deepfake misuse, I realized that detection alone was not enough. Victims often lacked guidance, investigators struggled with verification, and digital evidence frequently lost credibility before any action could begin.
Sniffer was built to bridge this gap — combining AI-assisted analysis with forensic-style reporting, evidence integrity, and a victim-first workflow. The system is being developed and validated within a controlled institutional environment to ensure responsibility before scale.
Sniffer’s resources explain how deepfakes work, what to do if you’re targeted, what not to do, and how digital evidence can be protected before it’s lost.
Explore resources