June 24, 2025
What is Gaussian Splatting?
Discover how 3D Gaussian Splatting is revolutionizing real-time rendering with stunning photorealistic scenes generated from just a few images.
Definition
Its name describes the technique quite well: splats are small spots whose color distribution, from the center to the edges, follows a Gaussian distribution.

Source: Example of a Gaussian – Hugging Face
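To make this concrete, here is a minimal Python sketch (illustrative only, not code from the paper) that evaluates how much one 2D splat contributes at a given pixel, based on its center and covariance:

```python
import numpy as np

def splat_weight(pixel_xy, center_xy, cov2d):
    """Contribution of a single 2D Gaussian splat at a pixel: 1 at the center,
    falling off smoothly with distance according to the 2x2 covariance."""
    d = np.asarray(pixel_xy, dtype=float) - np.asarray(center_xy, dtype=float)
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov2d) @ d))

# A roughly two-pixel-wide round splat: full weight at its center,
# fading towards zero a few pixels away.
cov = np.diag([2.0, 2.0])
print(splat_weight([0.0, 0.0], [0.0, 0.0], cov))  # 1.0
print(splat_weight([3.0, 0.0], [0.0, 0.0], cov))  # ~0.11
```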
3D Gaussian Splatting, as described in 3D Gaussian Splatting for Real-Time Radiance Field Rendering, enables real-time rendering of photorealistic scenes learned from a small set of images.
This article breaks down in broad strokes how it works and what it means for the future of 3D scene rendering.
For more details, you can check out this excellent article on Hugging Face: https://huggingface.co/blog/gaussian-splatting
Comparison with Other Approaches
Until now, the best-known technique for scanning 3D objects was photogrammetry. It generates 3D polygons from images taken from multiple viewpoints, for example by placing the object on a turntable and photographing it through a full 360-degree rotation.
Although photogrammetry is useful for object scanning, it cannot render scenes without edges (like the sky) or fine details in distant scenes. Moreover, because reconstruction is framed as a conventional optimization problem, it cannot produce accurate 3D polygons in certain cases such as reflections and mirrors.

Usual polygon rendering (source: Hugging Face)
The Rise of NeRF
To overcome the limitations of photogrammetry, NeRF (Neural Radiance Fields) emerged as another method to render 3D scenes from multi-view images. The core idea is to train a model that implicitly represents the scene as a volumetric function of density and color at any point. To render the scene from a given viewpoint, this volume is sampled along each pixel's ray and the samples are composited into the final pixel color.
This rendering method accurately reproduces open scenes and reflective surfaces, but the downside is that it is relatively slow, because the network has to be queried many times for every pixel.
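As a rough illustration of that cost, here is a toy Python sketch of NeRF-style volume rendering for a single pixel; field is a hypothetical stand-in for the trained network and is assumed to return a density and an RGB color for each sampled point:

```python
import numpy as np

def render_ray(field, origin, direction, t_near=0.1, t_far=5.0, n_samples=64):
    """Sample the volume along one camera ray and composite front to back.
    Every pixel needs n_samples queries of the (slow) neural field, which is
    the main reason NeRF rendering is not real time."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    ts = np.linspace(t_near, t_far, n_samples)
    points = origin + ts[:, None] * direction              # (N, 3) sample positions
    density, rgb = field(points)                            # (N,) and (N, 3)
    delta = ts[1] - ts[0]
    alpha = 1.0 - np.exp(-density * delta)                  # opacity of each segment
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = alpha * transmittance
    return (weights[:, None] * rgb).sum(axis=0)             # final pixel color
```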
What is 3D Gaussian Splatting?
Announced in August 2023, 3D Gaussian Splatting is a method for real-time 3D scene rendering using only a few images taken from different viewpoints.
The 3D space is defined as a set of Gaussians, and each Gaussian’s parameters are computed using machine learning.
Each Gaussian is described by the following parameters:
Position (XYZ): its location
Covariance: its scale/stretch (3×3 matrix)
Color (RGB): its color
Alpha (α): its transparency
In practice, several Gaussians are rendered together to represent a scene.
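As a sketch, a single Gaussian can be represented with a small structure like the one below (the names are illustrative; the original implementation actually stores spherical-harmonic coefficients rather than a plain RGB color, to capture view-dependent effects):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    position: np.ndarray    # (3,) XYZ center of the splat
    covariance: np.ndarray  # (3, 3) symmetric matrix: size, stretch and orientation
    color: np.ndarray       # (3,) RGB (simplified; view-dependent color uses spherical harmonics)
    alpha: float            # opacity in [0, 1]

# A scene is simply a large list of such Gaussians, often in the millions.
scene = [Gaussian3D(position=np.zeros(3),
                    covariance=0.01 * np.eye(3),
                    color=np.array([0.9, 0.3, 0.3]),
                    alpha=0.8)]
```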

Source: Set of Gaussians – Hugging Face
And what do 7 million Gaussians look like?

Source: Hugging Face
Here is how it looks when each Gaussian is rendered fully opaque:

Source: Hugging Face
How It Works in Detail
Structure from Motion
3D Gaussian Splatting first uses the COLMAP library to generate a point cloud from the input image set. From these images, the camera extrinsic parameters (position and orientation) are estimated by matching pixels between images, and the point cloud is then computed from these parameters. This step is handled through classical optimization, not machine learning, and typically takes from a few seconds to a few minutes depending on the number of images.

Source: Hugging Face
Conversion to Gaussians
Each point is then converted into a Gaussian. This alone is enough for rasterization. However, only position and color can be inferred from the SfM (Structure from Motion) data. To obtain high-quality results, a training step is required.
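A hedged sketch of that initialization, reusing the Gaussian3D structure from the sketch above (the default scale and alpha are placeholders; the paper derives initial scales from the distances between neighboring points):

```python
import numpy as np

def init_from_sfm(points_xyz, points_rgb, default_scale=0.01, default_alpha=0.1):
    """One Gaussian per SfM point: position and color come straight from the
    point cloud, while covariance and alpha start from rough defaults and
    only become accurate during the training step described below.
    Assumes the Gaussian3D dataclass from the earlier sketch."""
    gaussians = []
    for xyz, rgb in zip(points_xyz, points_rgb):
        gaussians.append(Gaussian3D(
            position=np.asarray(xyz, dtype=float),
            covariance=default_scale * np.eye(3),        # small isotropic blob to start
            color=np.asarray(rgb, dtype=float) / 255.0,  # COLMAP stores colors as 0-255
            alpha=default_alpha,
        ))
    return gaussians
```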
Training
The training procedure uses Stochastic Gradient Descent (SGD), similar to training a neural network but without the layers. The training steps are:
Rasterize the Gaussians into an image using differentiable Gaussian rasterization
Compute the loss based on the difference between the rendered image and the ground truth
Adjust Gaussian parameters based on the loss
Automatically densify and prune the Gaussians
Steps 1–3 are conceptually simple. Step 4 includes:
If a Gaussian has a high gradient (i.e. it is too inaccurate), split or clone it:
If that Gaussian is small, clone it
If that Gaussian is large, split it
If a Gaussian’s alpha gets too low, remove it
This helps Gaussians better adapt to fine details while removing unnecessary ones.
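Putting these steps together, here is a simplified PyTorch-style sketch. Here rasterize stands in for the differentiable rasterizer described in the next section, the parameters are assumed to be stored as a dict of tensors (params["xyz"] of shape (N, 3), params["alpha"] of shape (N,), and so on), and all thresholds are illustrative rather than the paper's exact values:

```python
import torch

def training_step(params, rasterize, camera, ground_truth, optimizer):
    """Steps 1-3: render the Gaussians, measure the error against the photo,
    and update every Gaussian's parameters by gradient descent."""
    rendered = rasterize(params, camera)                          # 1. differentiable render
    loss = torch.nn.functional.l1_loss(rendered, ground_truth)    # 2. compare to ground truth
    optimizer.zero_grad()
    loss.backward()                                               # 3. gradients per Gaussian
    optimizer.step()
    return loss.item()

def densify_and_prune(params, grad_threshold=2e-4, alpha_threshold=5e-3):
    """Step 4: drop nearly transparent Gaussians and duplicate those whose
    position gradient is large (they are struggling to fit the scene).
    The real method distinguishes cloning small Gaussians from splitting
    large ones; this sketch only clones. The optimizer also has to be
    rebuilt afterwards because the parameter tensors change size."""
    grads = params["xyz"].grad.norm(dim=-1)            # per-Gaussian gradient magnitude
    keep = params["alpha"].detach() > alpha_threshold  # prune nearly invisible Gaussians
    grow = keep & (grads > grad_threshold)             # densify the inaccurate ones
    with torch.no_grad():
        for name, tensor in params.items():
            params[name] = torch.nn.Parameter(torch.cat([tensor[keep], tensor[grow]]))
```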
Differentiable Gaussian Rasterization
As mentioned, 3D Gaussian Splatting is a rasterization technique that draws data onto the screen. Importantly, it is:
Fast
Differentiable
Rasterization involves:
Projecting each Gaussian to 2D from the camera’s point of view
Sorting Gaussians by depth
For each pixel, iterating over Gaussians front to back and blending them
Additional optimizations are detailed in the original paper.
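As a toy illustration of the last step, here is a Python sketch that composites already-projected 2D Gaussians for a single pixel (each entry is assumed to carry a screen-space center, a 2x2 covariance, a color, an alpha and a depth). The real implementation does this per tile of pixels on the GPU (16x16 in the original paper), not per pixel in Python:

```python
import numpy as np

def composite_pixel(projected_gaussians, pixel_xy):
    """Front-to-back alpha blending of 2D Gaussians already projected from
    the camera's point of view, described as dicts with keys
    'center', 'cov2d', 'color', 'alpha' and 'depth'."""
    color = np.zeros(3)
    transmittance = 1.0
    for g in sorted(projected_gaussians, key=lambda g: g["depth"]):  # nearest first
        d = np.asarray(pixel_xy, dtype=float) - g["center"]
        weight = g["alpha"] * np.exp(-0.5 * d @ np.linalg.inv(g["cov2d"]) @ d)
        color += transmittance * weight * g["color"]
        transmittance *= 1.0 - weight
        if transmittance < 1e-3:       # the pixel is effectively opaque: stop early
            break
    return color
```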
Why Is This Exciting?
So why all the buzz around 3D Gaussian Splatting? The obvious reason is that the results speak for themselves: high-quality, photorealistic 3D scenes rendered in real time.
There are still open questions about what can be done with Gaussian Splatting. Can it be animated? The paper Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis suggests it is possible. Other questions include whether reflections can be achieved, or whether scenes can be modeled without training on reference images.
Implications for 3D Scene Representation
What does this mean for the future of 3D rendering? Here’s a breakdown:
Advantages
High-quality, photorealistic scenes
Real-time rasterization
Relatively fast training
Disadvantages
High VRAM usage (4 GB to view, 12 GB to train)
Large disk size (1 GB+ per scene)
Incompatible with existing rendering pipelines
Static (for now)
So far, the original implementation, which relies on CUDA to run directly on the GPU, hasn't been adapted to production rendering pipelines like Vulkan, DirectX, or WebGPU. The real-world impact is still unfolding.
Current Adaptations Include:
Remote viewer
WebGPU viewer
WebGL viewer
Unity viewer
Optimized WebGL viewer
These rely either on remote streaming (1) or traditional quad-based rasterization (2–5). While quad-based approaches are compatible with decades of graphics technology, they may yield lower quality/performance. However, viewer #5 shows that optimization tricks can still deliver high quality and performance.
Will We See Full Production Implementation?
Probably yes.
The main bottleneck is sorting millions of Gaussians—efficiently done in the original implementation using CUB radix sort, a highly optimized CUDA-only sort.
But with enough effort, similar performance could be achieved in other rendering pipelines.