A grainy 9-second clip filmed from a kitchen window doesn't look like evidence. Until you stretch it across a 30-meter elevation grid, line up the horizon with a digital terrain model, and prove that the smoke trail in frame 47 could only have come from one specific tree line. That's GEOINT 3D — turning flat pixels into a defensible spatial argument.
Reconstructing 3D scenes is where geospatial intelligence stops being a flat map and starts behaving like a courtroom exhibit. You don't just say the shooter was on the ridge. You build the ridge, drop a virtual camera on it, and show that nothing else in the basin had line-of-sight on the target.
What "3D scene reconstruction" actually means in OSINT
Strip away the marketing — there are three jobs being done at once.
- Terrain layer: a digital elevation model (DEM) describing the landscape itself
- Object layer: photogrammetric or modelled geometry of buildings, vehicles, craters, and debris
- Camera layer: the position, focal length, and field of view of the original photo or video frame
Stack those three correctly and you can test claims that no amount of pixel-peeping could resolve. Was the alleged shooter visible from the victim's window? Could that smoke plume have drifted from that direction at that time of day? Did the wreckage land in a pattern consistent with a missile launched from the south-east?
When a single photo can't answer questions like these, this is where they get dragged.
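Each of those three layers is, in the end, just geometry. As a toy illustration of the camera layer, here's a minimal Python sketch that tests whether a target point would even fall inside a camera's view frustum, given an assumed position, heading, and horizontal field of view. The function name, conventions, and numbers are all hypothetical, and the camera is assumed level (no pitch or roll):

```python
import numpy as np

def in_frame(cam_pos, cam_yaw_deg, hfov_deg, target, aspect=16 / 9):
    """Is a target point inside a level camera's view frustum?
    Deliberately simplified: no pitch, no roll, no lens distortion."""
    d = np.asarray(target, float) - np.asarray(cam_pos, float)
    az = np.degrees(np.arctan2(d[1], d[0]))        # bearing, measured from the +x axis
    off = (az - cam_yaw_deg + 180) % 360 - 180     # signed offset from the optical axis
    elev = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))
    # Derive vertical FOV from horizontal FOV and aspect ratio:
    vfov = 2 * np.degrees(np.arctan(np.tan(np.radians(hfov_deg / 2)) / aspect))
    return bool(abs(off) <= hfov_deg / 2 and abs(elev) <= vfov / 2)

# Camera 1.6 m up, pointed along +x, 60-degree horizontal FOV:
print(in_frame((0, 0, 1.6), 0.0, 60.0, (100, 20, 5)))   # True
print(in_frame((0, 0, 1.6), 0.0, 60.0, (-100, 0, 5)))   # False: behind the camera
```

Trivial as it looks, this is the shape of every "could that camera have seen it?" argument: a pose, a frustum, and a point either inside or outside it.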
Start with elevation. Always.
Before any model, you need terrain. Three free global DEMs do most of the heavy lifting in open-source GEOINT work.
SRTM from NASA's 2000 Shuttle Radar Topography Mission still covers about 80% of Earth's land surface at roughly 30-meter resolution between 60°N and 56°S. It's old, it has voids, and it remains the baseline everybody else gets compared against.
Copernicus DEM (GLO-30) is the current default for serious work — 30 meters globally, with absolute vertical uncertainties around 1–3 meters when checked against ICESat-2 measurements. It also fills voids using ASTER, AW3D30, and several national DEMs, so coverage is closer to complete than SRTM.
ALOS World 3D (AW3D30) from JAXA is the third one you keep in your kit — same 30-meter resolution, with a 5-meter commercial version available for when civilian-grade isn't enough.
None of these will save you in dense forest or built-up city blocks. If you need rooftop-accurate ground truth, you either pay for a high-resolution commercial DEM or you reconstruct the geometry yourself from imagery.
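Once a DEM is loaded as a raster, most downstream tests reduce to sampling it. Real pipelines read GeoTIFFs with GDAL or rasterio; the sketch below shows only the bilinear interpolation step on a bare NumPy grid, with made-up elevation values:

```python
import numpy as np

def sample_dem(dem, row_f, col_f):
    """Bilinearly interpolated elevation at fractional grid coordinates."""
    r0, c0 = int(row_f), int(col_f)
    r1 = min(r0 + 1, dem.shape[0] - 1)
    c1 = min(c0 + 1, dem.shape[1] - 1)
    fr, fc = row_f - r0, col_f - c0
    top = dem[r0, c0] * (1 - fc) + dem[r0, c1] * fc   # blend along the top row
    bot = dem[r1, c0] * (1 - fc) + dem[r1, c1] * fc   # blend along the bottom row
    return top * (1 - fr) + bot * fr                   # blend between rows

dem = np.array([[100.0, 110.0],
                [120.0, 130.0]])
print(sample_dem(dem, 0.5, 0.5))   # 115.0
```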
Viewshed: the simplest 3D test, and the most underused
A viewshed answers one question: from this point, what can I see? It's a basic raster operation — but it resolves a surprising number of investigative arguments before they get expensive.
Two flavours matter:
- Point-to-area viewshed: show every pixel of terrain visible from an observer location
- Point-to-point line-of-sight: draw the cross-section between an observer and a specific target, and mark exactly where terrain or vegetation breaks the sightline
You can run both for free. The QGIS Visibility Analysis plugin handles point-to-area on any DEM you load. For point-to-point work, the LoS Tools plugin produces a proper cross-section showing where the round path or sightline clears or hits terrain. ArcGIS Pro does the same job with prettier output and a bigger licence fee.
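Under the hood, a point-to-point line-of-sight test is a few lines of raster math. Here is a minimal flat-earth sketch over grid indices, with no curvature or refraction correction (the plugins above handle both); everything here is illustrative:

```python
import numpy as np

def line_of_sight(dem, obs, tgt, obs_h=1.7, tgt_h=0.0):
    """True if tgt is visible from obs across a DEM, both given as
    (row, col) grid indices. Straight ray, flat-earth approximation."""
    (r0, c0), (r1, c1) = obs, tgt
    n = 4 * int(max(abs(r1 - r0), abs(c1 - c0))) + 1   # dense samples along the ray
    rows = np.linspace(r0, r1, n).round().astype(int)
    cols = np.linspace(c0, c1, n).round().astype(int)
    ray = np.linspace(dem[r0, c0] + obs_h, dem[r1, c1] + tgt_h, n)
    terrain = dem[rows, cols]
    return bool(np.all(ray[1:-1] >= terrain[1:-1]))    # any terrain above the ray blocks it

flat = np.zeros((1, 5))
ridge = flat.copy(); ridge[0, 2] = 50.0                # 50 m wall mid-path
print(line_of_sight(flat, (0, 0), (0, 4)))    # True
print(line_of_sight(ridge, (0, 0), (0, 4)))   # False
```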
When a Bellingcat or Forensic Architecture report claims a sniper position is "geometrically impossible," this is what's behind the claim.
Photogrammetry from civilian drone footage
If a DEM gives you the ground, photogrammetry gives you everything sitting on top of it. Hand a piece of software 80–500 overlapping photographs of a building, a vehicle, or a debris field, and it spits out a textured 3D mesh you can measure inside Blender.
Meshroom is the open-source default — free, GPU-hungry, and good enough for most investigative work. RealityCapture is faster and cleaner if you have access. OpenDroneMap handles the drone-mapping case end-to-end. Polycam turns a phone walk-around into a usable mesh in minutes — useful when the only available footage is a 90-second TikTok shot from a street corner.
Free tools deliver functional accuracy in the 3–10 cm range without ground control points. That's not surveying-grade, but it's plenty for proving a vehicle could or couldn't have fit through a gap, or that a crater diameter matches a specific munition.
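Without ground control points the mesh also comes out at an arbitrary scale, so every measurement needs calibrating against one object of known real-world size. The arithmetic is trivial but worth making explicit; all the numbers below are hypothetical:

```python
def scale_factor(measured_mesh_units, known_real_m):
    """Mesh-units-to-meters factor from one reference of known size."""
    return known_real_m / measured_mesh_units

# A door measures 0.51 mesh units and is known to be 2.04 m tall;
# a gap measured at 0.55 mesh units therefore comes out around 2.2 m --
# wide enough for most cars, too narrow for most trucks.
s = scale_factor(0.51, 2.04)
print(round(0.55 * s, 2))   # 2.2
```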
The techniques that actually matter
The tools are the easy part. The techniques are where investigators earn their keep.
Horizon matching
Take any photo where the sky and skyline are visible. Render a 3D terrain model from candidate locations and compare the synthetic horizon line to the real one. The location whose virtual ridges align with the photographed ridges is your camera. In mountainous terrain it can geolocate footage to within tens of meters.
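A synthetic horizon is just the maximum terrain elevation angle in each azimuth direction from a candidate camera position. A brute-force sketch over a DEM grid, with no earth-curvature correction and hypothetical conventions (azimuth 0 = north = decreasing row):

```python
import numpy as np

def horizon_profile(dem, obs_rc, obs_h=1.7, cell=30.0, n_az=360, max_cells=300):
    """Max terrain elevation angle (degrees) per azimuth direction --
    a synthetic skyline to line up against the photographed one."""
    r0, c0 = obs_rc
    z0 = dem[r0, c0] + obs_h
    angles = np.full(n_az, -90.0)
    for i, az in enumerate(np.linspace(0, 2 * np.pi, n_az, endpoint=False)):
        for step in range(1, max_cells):
            r = int(round(r0 - step * np.cos(az)))   # march outward along the azimuth
            c = int(round(c0 + step * np.sin(az)))
            if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
                break
            angles[i] = max(angles[i],
                            np.degrees(np.arctan2(dem[r, c] - z0, step * cell)))
    return angles

dem = np.zeros((21, 21)); dem[5, 10] = 100.0   # lone peak due north of center
sky = horizon_profile(dem, (10, 10))
print(sky[0] > sky[180])   # True: the skyline bulges toward the peak
```

Matching then reduces to sliding the photographed skyline across profiles like this one and scoring the fit per candidate location.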
Vehicle-trajectory back-projection
You know where the vehicle ended up. You have a few frames showing it in motion. Build the road, drop the camera, work backwards from heading and pixel-velocity to estimate origin and timing. This is how the MH17 Buk launcher route was reconstructed across eastern Ukraine — social-media photos and videos stitched against satellite imagery, with the launch site identified by matching geographical landmarks in poor-quality US intelligence imagery to Google Earth.
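The kinematic core of back-projection is first-order dead reckoning: assume roughly constant speed and heading over a short window and rewind. Real reconstructions follow the road geometry rather than a straight line; the sketch below (all figures hypothetical) just shows the idea:

```python
import math

def back_project(pos, heading_deg, speed_mps, dt_s):
    """Estimate where a vehicle was dt_s seconds earlier, assuming
    constant speed and heading. Heading: 0 deg = north, clockwise;
    position is (x east, y north) in meters."""
    x, y = pos
    h = math.radians(heading_deg)
    dx = speed_mps * dt_s * math.sin(h)   # eastward distance covered
    dy = speed_mps * dt_s * math.cos(h)   # northward distance covered
    return (x - dx, y - dy)

# Seen at (500, 200) m heading due east at ~14 m/s; where was it 60 s earlier?
print(back_project((500.0, 200.0), 90.0, 14.0, 60.0))   # approx (-340, 200)
```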
Debris-field reconstruction
Crashes don't lie. Wreckage falls in a pattern shaped by altitude, heading, wind, and break-up sequence. Plot the debris in 3D, model the trajectory, and you can usually confirm or rule out the claimed flight path. The same logic applies to munition fragments around an impact crater.
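A useful first-pass sanity check is the drag-free forward throw: how far debris released at a given altitude with a given ground speed could possibly travel before impact. It's an upper bound, not a flight model (drag shortens real throws considerably), and the numbers are illustrative:

```python
import math

def forward_throw_m(altitude_m, speed_mps, g=9.81):
    """Drag-free forward throw of debris released at altitude with
    purely horizontal speed -- an upper-bound sanity check only."""
    t_fall = math.sqrt(2 * altitude_m / g)   # free-fall time to the ground
    return speed_mps * t_fall

# Break-up at 10 km altitude and 250 m/s ground speed:
print(round(forward_throw_m(10000, 250)))   # roughly 11.3 km of forward throw
```

Debris found well beyond that envelope, or scattered against the claimed heading, is a red flag for the stated flight path.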
Smoke-plume direction modelling
A plume tells you wind. Wind plus impact angle plus terrain tells you where the round came from. Combine the visual direction of plume drift with weather-station data for that hour and the cone of plausible launch sites tightens fast.
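The geometry here is a back-bearing widened into an uncertainty cone, plus a consistency check between the plume's visual drift and the reported wind. A minimal sketch; the tolerance values are arbitrary placeholders, not calibrated figures:

```python
def bearing_cone(impact_axis_deg, half_width_deg=15.0):
    """Cone of plausible launch bearings: the back-bearing of the
    impact axis, widened by an uncertainty half-width."""
    back = (impact_axis_deg + 180.0) % 360.0
    return ((back - half_width_deg) % 360.0, (back + half_width_deg) % 360.0)

def drift_consistent(plume_drift_deg, wind_to_deg, tol_deg=20.0):
    """Does observed plume drift agree with the weather-station wind
    (expressed as the bearing the wind is blowing toward)?"""
    diff = abs((plume_drift_deg - wind_to_deg + 180.0) % 360.0 - 180.0)
    return diff <= tol_deg

print(bearing_cone(40.0))               # (205.0, 235.0)
print(drift_consistent(120.0, 110.0))   # True
```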
Cross-camera 3D positioning
When the same incident is captured from multiple bystander phones, each camera's position can be recovered photogrammetrically and placed inside the same 3D scene. Forensic Architecture has built entire investigations on synchronizing dozens of clips into a single reconstructed timeline using situated-testimony interviews conducted inside the model itself.
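Once two camera positions are recovered, localizing an event that both of them saw is a ray intersection. A 2D ground-plane sketch with hypothetical coordinates (real work does this in full 3D with calibrated cameras, and averages over many ray pairs):

```python
import numpy as np

def triangulate_2d(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two ground-plane bearing rays from two recovered
    camera positions. Bearings: 0 deg = north, clockwise; positions
    are (x east, y north). Parallel rays raise LinAlgError."""
    def direction(b):
        r = np.radians(b)
        return np.array([np.sin(r), np.cos(r)])
    d1, d2 = direction(bearing1_deg), direction(bearing2_deg)
    A = np.column_stack([d1, -d2])                 # p1 + t*d1 = p2 + s*d2
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Two phones 200 m apart, both sighting the same explosion:
print(triangulate_2d((0, 0), 45.0, (200, 0), 315.0))   # approx [100, 100]
```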
The toolchain, in one place
For terrain, viewshed and base mapping:
- Google Earth with KMZ export for fast visual recon
- QGIS with the Visibility Analysis and LoS Tools plugins for free, repeatable analysis
- ArcGIS Pro if your organisation pays for it
For 3D modelling and photogrammetry:
- SketchUp for blocky building geometry
- Blender as the free workhorse for assembling, texturing, and rendering scenes
- Meshroom, RealityCapture, OpenDroneMap, and Polycam for turning images into meshes
For publishing and interactive 3D viewers:
- Cesium Ion with Cesium World Terrain for streaming planetary 3D scenes in the browser, at up to 50 cm terrain resolution
- Mapbox for the 2.5D presentation case (2D shapes with extruded height)
- Three.js for embedding custom WebGL reconstructions inside long-form articles
Who's already doing it well
If you want to learn the craft by reverse-engineering the best, three organisations cover most of the surface area.
Bellingcat built much of the public OSINT 3D playbook — the MH17 work alone established viewshed, debris-field analysis, and trajectory back-projection as standard methods. Forensic Architecture publishes its 3D models on GitHub, runs witness interviews inside reconstructed VR environments, and pushes the discipline towards admissible evidence. SITU Research brings the architectural rigour and most of the courtroom-grade visualisations.
On Twitter and Telegram, follow @conflictint, @bellingcat, @forensicarchi, and @BenjaminStrick for fresh GEOINT 3D case studies as they drop.
The standard you should hold yourself to
3D reconstruction makes investigators look smart. It also gives them rope to hang themselves with. A pretty render that hides assumptions is worse than no render — it just makes a wrong conclusion more persuasive.
The discipline is simple. Every object in the scene needs a source. Every camera position needs a geometric justification. Every viewshed needs its DEM, observer height, and assumed eye level documented. If somebody can't reproduce your reconstruction from your sources, it's not evidence. It's a cinematic rendering with a graph attached.
Build the scene like you'll be cross-examined on it. Because in 2026, that's increasingly exactly what happens.
