An image is rarely just an image. It's a timestamp, a location, a face, a logo on a wall, a reflection in a window — and somewhere on the public web, there's a chance it's been posted before. Reverse image search is how you make that chance work for you. Done badly, it's a single drag-and-drop into Google. Done properly, it's a multi-engine sweep that triangulates origin, date, and context — and quietly resolves cases that pure text-based OSINT never could.
This is a working operator's view of IMINT.REVERSE — what each engine is actually good at, the techniques that separate a five-minute lookup from a real investigation, and where the gotchas live.
Why one engine is never enough
Every reverse image engine is the product of two things: what it has crawled, and what it does with the pixels. Those two variables alone make any single engine a partial map of the web.
Google Lens leans on Google's index and its object/landmark recognition models — broad coverage, strong at products, places, and text inside the image. Yandex sees the parts of the web Google ignores: tourist review sites, Russian and Eastern European social platforms, dating sites, forums — and it runs aggressive face matching on top. Bing inherits the Microsoft-indexed corpora and behaves differently on Western faces and product photos. TinEye doesn't try to be clever about subjects; it does pixel-level fingerprinting and tells you when a copy was first seen on the web — an index of tens of billions of images, built for exact-match and modified-copy detection rather than "looks like".
That's why investigators do not pick a favourite. They run the same image through several engines and triangulate. Bellingcat documented this years ago and the logic hasn't changed: each engine returns a different slice of the truth. If you only check one, you've already lost.
The general-purpose engines
Google Lens / Google Images — the broadest index on Earth and the strongest object/landmark recognition. Use it for products, signage, architecture, plants, animals, screenshots of UI, and anything with text in the frame (Lens will OCR and search the text separately). Weak on faces — Google deliberately throttles face matching on the public surface.
Yandex Images — the operator's favourite for one reason: it does what others won't. Yandex returns confident face matches on the open web, and crawls platforms that Western engines either skip or deprioritise. Bellingcat's comparison testing repeatedly placed Yandex first for face and Eastern-European-content queries. If your subject is a person, a Russian street scene, or anything that touches the post-Soviet web, Yandex is your first call — not your second.
Bing Visual Search — easy to dismiss, costly to skip. Bing's index overlaps with Google's but is not identical, and on Western faces and consumer-product photos it sometimes surfaces results the others miss. Treat it as the third leg of the stool.
TinEye — the only engine that answers "where did this come from, and when?" with any precision. It indexes by fingerprint, sorts results by oldest appearance, and detects modified copies (cropped, recoloured, watermarked). When you need to disprove that a "breaking news" photo is in fact a 2014 stock image, this is the tool that does it in one click.
Baidu Images and Sogou Images — the Chinese-language web is a different internet. If your subject has any plausible link to China, Hong Kong, or Chinese-speaking diaspora content (manufacturing, e-commerce, Weibo screenshots, Douyin captures), running Baidu and Sogou alongside the Western engines is the difference between finding it and concluding it doesn't exist.
Specialty and niche engines
Generic engines fail on certain content types. These exist to fill those gaps:
- SauceNAO — the standard for anime, manga, and illustration. Finds artist, source booru, and original posting context where Google returns nothing.
- Iqdb — overlapping use case with SauceNAO, broader on non-anime illustration. Worth running both; their indexes diverge.
- Karma Decay — narrowly scoped to Reddit. When you need to know if an image has appeared on Reddit (and where), this beats trying to coax Google into restricting by domain.
- RevIMG — feature-region search; you draw a box around the part of the image that matters and search on that fragment.
- Berify — designed for image-theft monitoring; aggregates results across multiple engines and stores history. Useful when you're tracking a subject's images over time, not just looking once.
- ImgOps — not an engine, a launcher. One upload, dispatched to a dozen engines and forensic tools (EXIF readers, error-level analysis, etc.). The fastest way to start a multi-engine sweep.
- Reverse.Photos — a no-account, mobile-friendly front-end that pipes uploads into Google. Useful when you're working from a phone with a screenshot.
Face search is its own discipline
Reverse image search and face search are not the same thing. A face-search engine takes a single face and tries to match it to the same face in any other image — different angle, different lighting, different age. A regular reverse image search wants the same image, not the same person.
PimEyes is the best-known commercial face-search service, with an index of billions of faces scraped from the open web. Strong on well-lit, front-facing photos; weak on extreme angles or low-resolution input. PimEyes explicitly excludes social-media platforms from its crawl, which is both a privacy concession and a meaningful coverage gap operators need to remember.
FaceCheck.ID markets itself as the alternative — accuracy claims around 99% in vendor testing, with stronger social-media coverage and red-flag indicators for known scammers. Take the self-reported numbers with a whole salt mine rather than a grain, but the tool is genuinely useful for romance-scam and identity-fraud work where social profiles dominate.
Search4Faces targets the VKontakte and Russian-network surface that PimEyes won't touch. For investigations involving Russian-speaking subjects, it's not optional.
A working rule: treat every face match as a lead, never a conclusion. False positives are the norm, not the exception, especially across age, hairstyle, or quality drops. Verify by walking from the matched profile out to its surrounding posts and connections. A face match alone has never closed a case.
Techniques that separate amateurs from professionals
Uploading the full image and reading the first page of results is not reverse image search. It's the warm-up. The techniques below are what actually move investigations forward.
Crop to the subject
The single most effective technique. Engines weight the entire frame, so a busy background drowns out the subject. Crop tight on the face, the patch on the uniform, the shop sign, the unique tattoo — and search the crop. You will see entirely different result sets from the same source image cropped three different ways.
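The crop itself is trivial; the discipline is in choosing the box. A minimal sketch of the arithmetic — the function name, padding value, and coordinates are all hypothetical, and the resulting box is what you would hand to something like Pillow's `Image.crop`:

```python
def tight_crop_box(feature_box, pad, img_w, img_h):
    """Expand a (left, top, right, bottom) feature box by `pad` pixels
    on each side, clamped to the image bounds, to use as a search crop."""
    left, top, right, bottom = feature_box
    return (
        max(0, left - pad),
        max(0, top - pad),
        min(img_w, right + pad),
        min(img_h, bottom + pad),
    )

# Example: a face at (400, 120)-(520, 260) inside a 1920x1080 frame.
box = tight_crop_box((400, 120, 520, 260), pad=30, img_w=1920, img_h=1080)
print(box)  # (370, 90, 550, 290)
```

A little padding keeps context the engine can anchor on (hairline, collar, sign edge) without letting the background dominate the match again.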
Feature-only search
A logo, a building, a tattoo, a vehicle plate fragment. Pull just that feature out of the photograph and run it as its own search. Engines that returned nothing on the full image will sometimes give clean matches on the feature.
Rotated and flipped variants
Pixel-fingerprint engines like TinEye treat a horizontally flipped image as a different image — but reposters routinely flip photos to dodge takedowns. Rotate 90°, flip horizontally, search again. This is the single most overlooked move in the whole discipline.
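The transforms themselves are mechanical. As a conceptual sketch (using a bare row-major pixel grid so it runs anywhere; in practice you would apply the same operations with an image library such as Pillow's `Image.transpose`):

```python
def flip_horizontal(pixels):
    """Mirror each row — the cheap edit reposters use to dodge fingerprinting."""
    return [row[::-1] for row in pixels]

def rotate_90_cw(pixels):
    """Rotate a row-major pixel grid 90 degrees clockwise."""
    return [list(row) for row in zip(*pixels[::-1])]

img = [[1, 2],
       [3, 4]]
print(flip_horizontal(img))  # [[2, 1], [4, 3]]
print(rotate_90_cw(img))     # [[3, 1], [4, 2]]
```

Generate the flipped variant once, save it alongside the original, and include it in every fan-out as a matter of routine.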
Multi-engine fan-out
Not "I'll try Google, and if that fails, Yandex." Both, in parallel, plus Bing and TinEye, plus Baidu if there's any Chinese-web possibility, plus a face engine if there's a face. ImgOps and the RevEye browser extension exist precisely to make this one click rather than ten.
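If you would rather script the fan-out than click through an extension, the engines accept an already-hosted image URL as a query parameter. A sketch — the URL templates below reflect formats in common use at the time of writing, but engines change them without notice, so treat each template as an assumption to verify:

```python
from urllib.parse import quote

# Assumed URL templates; engines rearrange these without notice.
ENGINES = {
    "google_lens": "https://lens.google.com/uploadbyurl?url={u}",
    "yandex":      "https://yandex.com/images/search?rpt=imageview&url={u}",
    "bing":        "https://www.bing.com/images/search?view=detailv2&iss=sbi&q=imgurl:{u}",
    "tineye":      "https://tineye.com/search?url={u}",
}

def fan_out(image_url):
    """Build one search URL per engine for an image already hosted somewhere."""
    u = quote(image_url, safe="")  # percent-encode so the URL survives as a parameter
    return {name: tpl.format(u=u) for name, tpl in ENGINES.items()}

for name, link in fan_out("https://example.com/photo.jpg").items():
    print(f"{name}: {link}")
```

Open each result in a browser tab (Python's `webbrowser.open` will do) and you have the parallel sweep in one keystroke.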
First-seen dating
TinEye sorts by oldest indexed appearance. That date is not the moment the image was created — but it is a hard upper bound on how old a "fresh" image can claim to be. A photo presented as today's footage that TinEye first saw in 2017 has just answered its own question.
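The reasoning is a one-line inequality, worth making explicit because it is the whole trick — an image cannot be newer than its oldest indexed copy. A hypothetical helper:

```python
from datetime import date

def contradicts_claim(claimed: date, first_seen: date) -> bool:
    """True when the oldest indexed copy predates the claimed capture date,
    which falsifies the claim outright."""
    return first_seen < claimed

# "Breaking news" photo claimed as shot today, first seen by TinEye in 2017.
print(contradicts_claim(date(2025, 6, 1), date(2017, 3, 14)))  # True
```

Note the asymmetry: a first-seen date *after* the claimed date proves nothing — the engine may simply have crawled the image late.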
Pivoting from a single found copy
Finding the image is the start, not the end. Once you have one URL, the work is to read the page around it: the username who posted it, their other posts, the comment thread, neighbouring uploads in the same album. The image is bait. Everything attached to it is the catch.
The workflow, end to end
An efficient pass on a single image looks like this:
- Open the file in ImgOps for an instant fan-out across engines and forensic tools.
- Read the EXIF data and watch for error-level-analysis hints of editing.
- Crop the subject and re-run the crops.
- Run any face through Yandex first, then a dedicated face engine.
- Run TinEye separately for the first-seen date.
- If anything looks Chinese, hit Baidu.
- From the first solid match, stop searching and start reading the host page: username, post history, comment thread, related uploads.
- Document the URLs, the dates, the screenshots, and the engine that produced each lead.
That last step is what makes the work reproducible and defensible — without it, you have a hunch, not an investigation.
Where to keep learning
The discipline moves. Engines change index strategies, face tools come online, and platforms close their doors to crawlers. The accounts and resources actually worth following: Bellingcat and the Bellingcat Online Investigations Toolkit; on X/Twitter, @hatless1der, @cyb_detective, @osintcurious, @i_am_osint, and @inteltechniques. Skip the recycled "top 10 tools" listicles — they are written by people who have never opened a case.
Reverse image search is one of the cheapest, fastest, most underused weapons in OSINT. The engines are free. The techniques are public. The only thing standing between a casual user and a useful investigation is the willingness to run the same image through five engines instead of one — and to keep reading once the first hit comes back.
