Satellite GEOINT: A Practical Playbook for Sentinel, SAR and Sub-Meter Imagery

GEOINT · April 30, 2026 · Updated Apr 30, 2026

By the time a Telegram channel posts a "fresh" video from a frontline, the satellites have already imaged that frontline four times. That gap — between what's happening and when civilians can see it — is exactly where GEOINT-by-satellite earns its keep.

This is the discipline analysts file under GEOINT.SAT: pull free or commercial satellite imagery, define a target area, run change detection, explain what changed. No magic. No NSA budget. The same pipeline that powers Bellingcat's damage-assessment tools sits behind a free Copernicus account that takes two minutes to register.

What satellite GEOINT actually does

Strip the acronym and you're left with three jobs:

  • Detect change — what is here today that wasn't here last week.
  • Count things — aircraft on a tarmac, vehicles in a depot, ships in a harbor.
  • Monitor sites — same coordinates, repeat passes, plot the trend.

Everything else in this article is plumbing.

Pick the right sensor or pick the wrong story

The first decision in every investigation is which satellite you're asking. The wrong sensor will hand you a beautiful picture of clouds.

Free, public, good enough for 80% of work

Sentinel-2 — 10-meter optical, 5-day revisit, all of Earth, free. Run by ESA's Copernicus program. This is the default. EO Browser opens it in a tab.

Sentinel-1 — 10-meter synthetic aperture radar, also free. SAR sees through clouds and at night. In a war zone with bad weather, this is the only sensor that actually works.

Landsat-8/9 — 30-meter optical, NASA/USGS, available through USGS EarthExplorer. Coarser than Sentinel-2 but with a 50-year archive. Useful when the investigation goes back further than 2015.

Paid, sub-meter, when the question demands it

Planet Labs — daily-revisit PlanetScope at ~3 m, plus the SkySat constellation at 50 cm, with up to 10 captures per day on a single point. Their next-gen Pelican fleet is rolling out to replace SkySat at higher resolution and faster revisit.

Maxar — WorldView at 30 cm, the highest-resolution commercial optical on the market. Their Open Data Program dumps before/after imagery for major disasters under Creative Commons. Free. No excuses for missing it.

Capella Space and ICEYE — commercial sub-meter SAR, with Capella's Spotlight mode hitting 0.25 m and ICEYE delivering 1 m at 100 km swaths. Sub-hour revisit if you pay enough. SAR you can read like an optical photo.

The boring secret most operators won't tell you: free Sentinel is the daily driver. Maxar and Planet get burned only for set-piece moments where 30 cm matters more than 10 m.

The real tradeoff isn't free vs. paid — it's cadence vs. resolution. Sentinel-2 gives you a fresh 10 m image every five days, but you'll never count individual MANPADS on a truck. SkySat gives you 50 cm ten times a day, but only on the AOI you can afford to task. Pick a sensor that matches the tempo of the event you're tracking. A static airbase wants weekly Sentinel. A live amphibious staging area wants tasked Planet.

The workflow nobody bothers writing down

It looks the same every time:

  1. Define the AOI — area of interest, drawn as a polygon. Not "Donetsk." Specific coordinates around the depot, the airfield, the bridge.
  2. Set a time window — pre-event baseline, post-event capture. The comparison between the two is what tells you something happened.
  3. Pull imagery — through Sentinel Hub, Copernicus Data Space, USGS EarthExplorer, or Google Earth Engine if you want to script it.
  4. Run change detection — NDVI, NBR, SAR coherence, or a manual blink-compare. Pick the one that fits the question.
  5. Overlay and annotate — drop the result on a basemap in QGIS or ArcGIS, mark the changes, write the report.
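Stripped to code, the loop above fits on one page. A minimal sketch with NumPy arrays standing in for downloaded scenes; the AOI coordinates, dates, and the 0.2 change threshold are illustrative, not from any real tasking.

```python
import numpy as np

# Steps 1-2: AOI polygon bounds and time window (illustrative values)
aoi = {"lon_min": 37.50, "lon_max": 37.56, "lat_min": 48.00, "lat_max": 48.04}
baseline_date, post_date = "2026-03-01", "2026-04-15"

# Step 3: in a real run these rasters come from Sentinel Hub or EarthExplorer;
# here two synthetic 10 m reflectance grids stand in for the downloads.
rng = np.random.default_rng(0)
baseline = rng.uniform(0.1, 0.3, size=(100, 100))
post = baseline.copy()
post[40:45, 40:45] += 0.4          # a "new" bright feature, e.g. fresh rubble

# Step 4: crude change detection — absolute difference above a threshold
diff = np.abs(post - baseline)
changed = diff > 0.2               # threshold is scene-dependent in practice

# Step 5: report the pixel count and a bounding box to annotate in QGIS
ys, xs = np.nonzero(changed)
print(f"{changed.sum()} changed pixels "
      f"(rows {ys.min()}-{ys.max()}, cols {xs.min()}-{xs.max()})")
```

Swapping the synthetic arrays for real band stacks is the only part that changes between investigations; steps 1, 2, 4, and 5 stay the same.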

Most "satellite OSINT investigations" you read in major outlets are this loop, run once, by someone who knows what bands to look at.

Techniques that earn their keep

NDVI — Normalized Difference Vegetation Index. Sensitive to vegetation health. Drop it on agricultural fields and you can see when a crop was destroyed, abandoned, or set on fire.
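The index itself is one line of band math over the near-infrared and red bands (B8 and B4 on Sentinel-2). A minimal sketch with single-pixel reflectance values chosen for illustration:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids 0/0 over dark water

# Healthy vegetation reflects strongly in NIR; burned or bare soil does not.
healthy = ndvi(np.array([0.45]), np.array([0.05]))   # high NDVI
burned  = ndvi(np.array([0.15]), np.array([0.12]))   # near zero
print(float(healthy[0]), float(burned[0]))
```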

NBR / dNBR — Normalized Burn Ratio. Compares the near-infrared and shortwave-infrared bands before and after a fire to map burn severity. The standard tool for wildfire mapping — and just as useful for confirming artillery-set steppe fires in eastern Ukraine.
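The delta is what carries the signal: compute NBR on both dates and subtract. A sketch with illustrative single-pixel values; the severity breakpoints below are the commonly cited USGS-style thresholds, which in practice get recalibrated per region.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: burned surfaces drop in NIR and rise in SWIR."""
    return (nir - swir) / (nir + swir + 1e-10)

# Pre-fire vs post-fire reflectance for one pixel (illustrative values)
pre  = nbr(np.array([0.40]), np.array([0.10]))   # vegetated
post = nbr(np.array([0.12]), np.array([0.30]))   # charred
dnbr = pre - post

# Approximate USGS-style severity breakpoints
severity = ("high" if dnbr[0] > 0.66
            else "moderate" if dnbr[0] > 0.27
            else "low/unburned")
print(round(float(dnbr[0]), 2), severity)
```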

SAR coherence change detection — two SAR passes over the same target. If pixel statistics decorrelate, something physical changed at that pixel. New construction, fresh craters, collapsed buildings — coherence loss says "this pixel is no longer the pixel it was." Bellingcat's 2026 damage-assessment tool for Iran and the Gulf runs on this principle. So does the open-source Ukraine destruction mapper built on Sentinel-1 time series and the Pixel-Wise T-Test damage detector.
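The underlying statistic is the magnitude of the normalized cross-correlation of two complex SAR acquisitions over a local window. A toy sketch with synthetic scatterers, not real SLC data: a stable window keeps coherence near 1, a rearranged one decorrelates toward 0.

```python
import numpy as np

def coherence(s1: np.ndarray, s2: np.ndarray) -> float:
    """Magnitude of complex coherence between two SAR windows."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return float(num / den)

rng = np.random.default_rng(1)
scene = rng.normal(size=64) + 1j * rng.normal(size=64)

# Unchanged ground: the second pass sees the same scatterers plus a little noise
stable = scene + 0.1 * (rng.normal(size=64) + 1j * rng.normal(size=64))
# Changed ground (collapse, fresh craters): scatterers fully rearranged
changed = rng.normal(size=64) + 1j * rng.normal(size=64)

print(round(coherence(scene, stable), 2))   # near 1.0: coherent
print(round(coherence(scene, changed), 2))  # near 0: decorrelated
```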

Blink-comparison — flip between two timestamps in EO Browser. Crude. Effective. Most "before/after" images in viral OSINT threads were made this way.

Object detection — count vehicles, aircraft, ships. Bellingcat's RS4OSINT course ships a YOLOv5 pipeline trained on the Airbus Aircraft and DOTA datasets. Run it on a tarmac and you get a number. Run it on a forest and you get nothing — which is exactly why the Russian armor is parked under the trees.
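A real YOLOv5 run needs trained weights and GPU time, but the shape of the task is simple: threshold, group, count. A crude stand-in using a brightness threshold and 4-connected flood fill on a synthetic "tarmac" — all values illustrative, and no substitute for a trained detector:

```python
import numpy as np

def count_bright_objects(img: np.ndarray, thresh: float) -> int:
    """Count connected bright blobs — a toy stand-in for a trained detector."""
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        count += 1
        stack = [(y, x)]          # flood-fill one 4-connected component
        while stack:
            cy, cx = stack.pop()
            if not (0 <= cy < mask.shape[0] and 0 <= cx < mask.shape[1]):
                continue
            if seen[cy, cx] or not mask[cy, cx]:
                continue
            seen[cy, cx] = True
            stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return count

# Synthetic scene: dark apron with three bright aircraft-sized blobs
tarmac = np.full((50, 50), 0.1)
for r, c in [(10, 10), (25, 30), (40, 15)]:
    tarmac[r:r + 3, c:c + 5] = 0.9
print(count_bright_objects(tarmac, 0.5))
```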

Activity proxies — chimney smoke, cooling-tower plumes, vehicle tracks in snow. The image doesn't tell you a factory is open. The smoke does.

Nighttime lights — the VIIRS Day-Night Band on Suomi-NPP measures emitted light at night, every night, globally, free. NASA used it to track blackouts across Ukraine after Russian strikes on the energy grid; the same data flags curfews, abandoned cities, and large-scale outages anywhere on Earth.
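The analysis on top of VIIRS is usually a simple time series: establish a baseline radiance for the AOI, then flag nights that fall far below it. A sketch with made-up nightly values; the half-of-median rule is an illustrative threshold, not a VIIRS standard.

```python
import numpy as np

# Nightly mean radiance for one city AOI (illustrative DNB-style values)
nights = np.array([22.0, 21.5, 23.1, 22.4, 21.8, 6.2, 5.9, 7.1, 21.9, 22.3])

baseline = np.median(nights)
blackout = nights < 0.5 * baseline   # flag nights under half the typical radiance
print(np.nonzero(blackout)[0])       # indices of suspected outage nights
```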

The toolbox you actually open

Browsers and viewers — EO Browser and Sentinel Playground for fast triage; LandViewer (eos.com) for a cleaner UI; SnapPlanet for Sentinel-2 mosaics on mobile.

Catalogs — Copernicus Data Space and USGS EarthExplorer when you need raw scenes.

Compute — Google Earth Engine for petabyte-scale analysis from a browser; Sentinel Hub Custom Scripts for inline band math.

Desktop GIS — QGIS with the Semi-Automatic Classification Plugin, or ArcGIS Online if your org pays for the license.

Commercial — Planet Explorer, SkyFi for ad-hoc tasking, SpaceKnow for analytic feeds, Allsource for indexed insight.

Targeted OSINT tools — Bellingcat's Radar Interference Tracker for locating active military radars via Sentinel-1 interference patterns, the Gaza damage detector, and the more recent Iran/Gulf damage tool linked above.

The pitfalls that sink amateur reports

Clouds. A 70%-cloud Sentinel-2 scene is not "data" — it's a picture of weather. Always cloud-mask. Or switch to SAR.
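Cloud masking on Sentinel-2 L2A is mostly a lookup against the Scene Classification Layer (SCL), where cloud shadow, medium/high-probability cloud, and thin cirrus carry their own class codes. A minimal sketch on a tiny synthetic SCL grid; the class values are the documented SCL codes, everything else is illustrative.

```python
import numpy as np

# Sentinel-2 L2A Scene Classification Layer (SCL) codes:
# 3 = cloud shadow, 8/9 = cloud medium/high probability, 10 = thin cirrus
CLOUDY_CLASSES = [3, 8, 9, 10]

scl = np.array([[4, 4, 8],
                [9, 5, 6],
                [10, 3, 4]])
reflectance = np.full(scl.shape, 0.2)

mask = np.isin(scl, CLOUDY_CLASSES)
clean = np.where(mask, np.nan, reflectance)   # NaN out cloudy pixels
cloud_fraction = mask.mean()
print(f"{cloud_fraction:.0%} cloudy")          # skip the scene if this runs high
```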

Atmospheric correction. Top-of-atmosphere reflectance and surface reflectance are not the same thing. NDVI on uncorrected imagery will lie to you, especially when comparing different dates with different haze.

Date stamping. A scene's "acquisition date" is in UTC. Captioning a screenshot "morning of the strike" without translating to local time is how investigations get debunked on Twitter inside an hour.
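The fix is two lines of standard-library Python: parse the acquisition timestamp as UTC, convert with zoneinfo before writing the caption. The timestamp and target zone below are illustrative.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Acquisition timestamp as catalogs report it: UTC
acq = datetime(2026, 4, 12, 8, 40, tzinfo=timezone.utc)

# Translate to local time before captioning (Kyiv runs UTC+3 in April under DST)
local = acq.astimezone(ZoneInfo("Europe/Kyiv"))
print(local.isoformat())   # 2026-04-12T11:40:00+03:00
```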

Resolution mismatch. You cannot count individual cars in 10-meter Sentinel-2. You can count rows of vehicles. Don't oversell the pixels — someone with Maxar will check your work.

Confusing absence with destruction. SAR coherence loss can mean a building collapsed — or a parking lot got resurfaced. Always pair SAR change detection with optical imagery before claiming damage.

Where the real signal lives

If you want to follow the people who actually do this work, the right names are short and stable: @bellingcat, @geoconfirmed, @ChrisBiggers, @SentinelHub, @CopernicusEU, @planet, @AllSourceA. These accounts post live workflows — what AOI, which sensor, which timestamp, what conclusion. Read them long enough and your own reports get sharper by osmosis.

Bottom line

Satellite GEOINT is no longer a closed military discipline. The same Sentinel-1 stack used to map war damage in Ukraine is one Copernicus login away. The same VIIRS data NASA used to track blackouts is sitting in a public catalog. The same 30 cm Maxar that sells for thousands per scene is free during disasters. The actual moat is methodology — knowing which sensor for which question, which technique for which signal, and how not to oversell your pixels. Get that right and your investigation starts where everyone else's ends: with the data.

If you're still confirming front-line movement off Twitter videos alone, the satellites have been laughing at you for three days.