Born in a dorm room, now serving 500,000+ users across 50 countries in the fight against deepfake abuse.

Early 2023. Three computer science students watching deepfakes destroy lives. Politicians impersonated. Celebrities defamed. Grandparents scammed.
Creating fake media was free. Detecting it cost thousands. Something was fundamentally broken.
We built Sniffer AI to fix that.
Recognition and milestones that shaped our journey
Peer-reviewed papers advancing deepfake detection science
Democratizing access to digital truth verification
Not just enterprises. Not just governments. Everyone. In a world where deepfakes can destroy reputations overnight, where voice clones can drain bank accounts, where manipulated videos can incite violence — access to verification tools isn't a luxury.
It's a fundamental right.
No technical knowledge required. No expensive subscriptions. Just instant, reliable verification of what's real and what's fake.
Built with the same rigor used by law enforcement and security agencies, but accessible to everyone.
Because digital truth shouldn't be reserved for those who can afford it. Protection for everyone, everywhere.
The impact we're creating together
Every morning, someone uses Sniffer AI to verify something critical. A journalist fact-checking. A victim seeking justice. A parent protecting their family.
That responsibility fuels everything we do.
We're not just building software. We're building a defense against the erosion of digital trust. A movement to restore certainty in an uncertain world.

Sniffer began as a college project and evolved into a forensic response to a growing problem: deepfake harm spreads faster than clarity, evidence, or action.

Founder & Lead Developer
Studying the rapid rise of deepfake misuse made one thing clear: detection alone was not enough. Victims often lacked guidance, investigators struggled with verification, and digital evidence frequently lost credibility before any action could begin.
Sniffer was built to bridge this gap — combining AI-assisted analysis with forensic-style reporting, evidence integrity, and a victim-first workflow. The system is being developed and validated within a controlled institutional environment to ensure responsibility before scale.
Sniffer’s resources explain how deepfakes work, what to do if you’re targeted, what not to do, and how digital evidence can be protected before it’s lost.
Explore resources