Photogrammetry, NeRF and 3D Gaussian Splatting - A Simple Comparison for 3D Enthusiasts
- Abhishek Jathan
- Nov 3
You may have been hearing about photogrammetry for a while now: tools like Meshroom and RealityCapture (now called RealityScan) take a bunch of images and, with the magic of math and computation, figure out the 3D structure and spit out a fully textured 3D mesh in whichever format you desire.
But these days, some newer tech is taking the world by storm: NeRF and 3D Gaussian Splatting.
Let me give you a simple, straightforward comparison of all three, based on my understanding.
Photogrammetry
Photogrammetry is the technique of reconstructing 3D models from multiple overlapping photographs taken around an object or environment.
Tools like RealityScan and Meshroom use computer vision algorithms such as Structure from Motion (SfM) and Multi-View Stereo (MVS) to automatically process photographs into detailed 3D models, while allowing user control over key steps like photo alignment, mesh generation, and texturing.
This is what a general workflow for photogrammetry looks like (see the code sketch after the list):
Capture photos – Take many overlapping images of the object or scene from different angles.
Align photos (SfM) – The software detects matching features between photos to estimate camera positions and create a sparse point cloud.
Build dense cloud (MVS) – It calculates depth for each pixel to generate a detailed dense point cloud.
Create mesh – The dense cloud is converted into a 3D surface mesh.
Apply texture – The original photos are projected onto the mesh to give it realistic colour and detail.
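To make the align step a little more concrete, here is a minimal sketch of the feature matching that SfM is built on, using OpenCV's ORB detector. The photo file names are placeholders, and real photogrammetry tools run a far more robust version of this across every pair of images.

```python
# Minimal sketch of the feature-matching step that SfM builds on,
# using OpenCV (pip install opencv-python). File names are placeholders.
import cv2

img1 = cv2.imread("photo_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)          # fast, free feature detector
kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints + descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (the right metric for ORB)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences between the two photos")
# SfM tools run this across every image pair, then triangulate the matched
# points to recover the camera positions and a sparse point cloud.
```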

At this point you get a super-dense, high-triangle-count mesh file, which you can clean up and optimise for use in films, games, renders, or even 3D printing. The technology has existed for more than a decade, and the process is pretty straightforward.
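As a small example of that optimisation step, here is a rough sketch of decimating a dense scan with the Open3D library. The file names and the 50,000-triangle budget are placeholder assumptions, not a recommendation.

```python
# Sketch: decimating a dense photogrammetry scan with Open3D
# (pip install open3d). "scan.obj" stands in for your exported mesh.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan.obj")
print(f"raw scan: {len(mesh.triangles)} triangles")

# Quadric-decimate down to a game-friendly budget, then rebuild normals
low = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)
low.compute_vertex_normals()
o3d.io.write_triangle_mesh("scan_lowpoly.obj", low)
print(f"optimised: {len(low.triangles)} triangles")
```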
For a moment let us think beyond triangles. What if there was a way to show 3D Geometry without any triangles?
Enter NeRF.
NeRF
NeRF stands for Neural Radiance Fields. It’s a technique that uses a neural network to learn how light behaves in a 3D space.
Instead of building the scene out of triangles or meshes, NeRF learns how dense every tiny point in space is (how much light it blocks or emits) and what colour that point should appear from any viewing angle.
What makes NeRF fascinating is that the neural network is used in both training and rendering, and yet there's no rasterization happening at all. That's right: no triangles.
It sends out rays from the camera into the scene, sampling many points along each ray.
At each point, the neural network predicts the color and density, and these are blended together to produce the final image — like light shining through layers of fog.
Because this happens per-pixel, it can almost be thought of as a kind of screen-space computation, where each pixel’s colour is built up by accumulating light information along its ray through space.
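Here is a toy sketch of that per-ray accumulation, with a hand-written `field` function standing in for the trained neural network (a real NeRF queries an MLP at every sample point):

```python
# Toy sketch of NeRF-style volume rendering along one camera ray.
# `field` stands in for the trained MLP; a real NeRF queries a network here.
import numpy as np

def field(points):
    """Fake radiance field: a soft red shell of radius 1 around the origin."""
    dist = np.linalg.norm(points, axis=-1)
    density = 5.0 * np.exp(-4.0 * (dist - 1.0) ** 2)    # peaks near the surface
    color = np.tile([1.0, 0.2, 0.2], (len(points), 1))  # constant red
    return color, density

origin = np.array([0.0, 0.0, -4.0])     # camera position
direction = np.array([0.0, 0.0, 1.0])   # ray through one pixel
t = np.linspace(0.1, 8.0, 128)          # sample depths along the ray
points = origin + t[:, None] * direction

color, density = field(points)
delta = np.diff(t, append=t[-1])        # spacing between samples
alpha = 1.0 - np.exp(-density * delta)  # opacity of each segment
trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]  # light surviving so far
weights = trans * alpha
pixel = (weights[:, None] * color).sum(axis=0)
print("pixel colour:", pixel)           # blended like light through fog
```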
Now, the problem with this technique is that it needs a good amount of VRAM and a fairly beefy GPU, since rendering involves hundreds of neural network queries per ray, which adds up to millions per frame.
Gaussian Splatting
Gaussian Splatting takes a very different approach to the same idea of turning photos into 3D scenes. Instead of relying on a neural network during rendering, it builds the scene out of millions of tiny, semi-transparent 3D points called Gaussians.
Each Gaussian has its own position, size, colour, and opacity, like a tiny glowing blob floating in space. When rendering, the GPU projects these blobs onto the screen and blends them together based on their depth and transparency, a process that’s much closer to rasterization than NeRF’s ray-based method.
Because everything is explicitly stored as real 3D data, there’s no neural network running during rendering, which makes Gaussian Splatting incredibly fast and capable of real-time performance compared to NeRF.
Now, you might be wondering, “Wait, isn’t this just geometry again?”
Not quite. While Gaussian Splatting does use real 3D spatial data, the rendering itself isn’t traditional rasterization. Instead of drawing triangles, the GPU projects each blob (like a sprite or billboard) and calculates how much it contributes to each pixel — based on its distance, transparency, and coverage. All of these contributions are then blended together, and when millions of blobs overlap, they form detailed, solid-looking surfaces.
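As a rough illustration of that blending, here is a toy splatting loop for a single pixel, with made-up splat data; real renderers sort and composite millions of Gaussians per frame, in tiles, on the GPU:

```python
# Toy sketch of Gaussian splatting for one pixel: blobs are sorted by depth
# and alpha-blended front to back. The splat data below is made up.
import numpy as np

# Each splat: screen position (x, y), radius, colour (r, g, b), opacity, depth
splats = [
    dict(pos=(0.0, 0.0), radius=2.0, color=(1.0, 0.0, 0.0), opacity=0.8, depth=1.0),
    dict(pos=(0.5, 0.0), radius=3.0, color=(0.0, 0.0, 1.0), opacity=0.6, depth=2.0),
]

def shade_pixel(px, py, splats):
    color = np.zeros(3)
    transmittance = 1.0                       # how much light still gets through
    for s in sorted(splats, key=lambda s: s["depth"]):  # front to back
        dx, dy = px - s["pos"][0], py - s["pos"][1]
        falloff = np.exp(-(dx * dx + dy * dy) / (2 * s["radius"] ** 2))
        alpha = s["opacity"] * falloff        # Gaussian footprint on this pixel
        color += transmittance * alpha * np.array(s["color"])
        transmittance *= 1.0 - alpha          # the blob behind sees less light
    return color

print(shade_pixel(0.0, 0.0, splats))
```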
Why?
Now you might be thinking: why bother with NeRF or Gaussian Splatting when standard rasterization is already fast and efficient, and, if needed, can look super realistic with ray tracing and other modern GPU-intensive features?
The answer is that, at this point in time, the quickest way to make a hyper-realistic 3D scene happens to be Gaussian Splatting or NeRF. But out of the box, a standard NeRF or Gaussian Splat scene is static and baked. There's very little editing you can do: it's not possible to relight the scene, and there are no materials, shaders, or concepts of PBR, since there's no actual shading happening.
Think of it as a super high-resolution 3D photograph rather than a full environment.
Maybe there's potential for it in film and renders: you could capture a location at different times of the day and reuse it indefinitely for multiple renders, visualizations, or virtual film shots without having to actually visit the place again. That would be a massive help for reshoots, and capturing several lighting conditions is a practical workaround for the lack of relighting.
As of now, photogrammetry is the only one of the three that produces directly usable game assets, because it outputs real mesh data, and it has a long track record in both games and film. But there's potential for a hybrid approach, perhaps: for faraway vistas or static objects, something like Gaussian Splatting could work well. For now, though, that's just an idea.
There is another new technique called Triangle Splatting that hopes to bridge the gap between some of these approaches. The advantages are that it could be relit and could work with post-processing as well. And the major factor in its favour is performance: its research paper boasts FPS numbers in the thousands. Compared to Gaussian Splatting and NeRF, that is pretty damn impressive, and I'm hoping to see more of it and try it out myself.
Cheers!
Happy Learning!
If you made it this far in the article, might as well subscribe to my newsletter and get updates on new blog posts. Comment below if you want me to make detailed tutorials on these.