Geometry-Grounded Gaussian Splatting
Baowen Zhang Chenxing Jiang Heng Li Shaojie Shen Ping Tan
Hong Kong University of Science and Technology
Teaser figure
We prove that Gaussian primitives are equivalent to stochastic solids, and leverage this equivalence to reconstruct high-fidelity, multi-view-consistent shapes from multi-view images.

Abstract

Gaussian Splatting (GS) has demonstrated impressive quality and efficiency in novel view synthesis. However, shape extraction from Gaussian primitives remains an open problem. Due to inadequate geometry parameterization and approximation, existing shape reconstruction methods suffer from poor multi-view consistency and are sensitive to floaters. In this paper, we present a rigorous theoretical derivation that establishes Gaussian primitives as a specific type of stochastic solid. This theoretical framework provides a principled foundation for Geometry-Grounded Gaussian Splatting by enabling the direct treatment of Gaussian primitives as explicit geometric representations. Using the volumetric nature of stochastic solids, our method efficiently renders high-quality depth maps for fine-grained geometry extraction. Experiments show that our method achieves the best shape reconstruction results among all Gaussian Splatting-based methods on public datasets.
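To make the depth-rendering idea concrete, the sketch below shows generic front-to-back alpha compositing of per-Gaussian depths along a single ray, as commonly done in Gaussian Splatting pipelines. This is an illustrative sketch only, not the paper's exact formulation; the helper name `composite_depth` and the normalization choice are our assumptions.

```python
import numpy as np

def composite_depth(alphas, depths):
    """Alpha-composite per-Gaussian depths along one ray (front to back).

    Illustrative sketch, not the paper's exact formulation.
    alphas: opacities of the Gaussians hit by the ray, in (0, 1].
    depths: their depths along the ray, sorted near to far.
    Returns the opacity-weighted expected depth.
    """
    alphas = np.asarray(alphas, dtype=float)
    depths = np.asarray(depths, dtype=float)
    # Transmittance before each Gaussian: T_i = prod_{j<i} (1 - alpha_j).
    T = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    w = T * alphas  # per-Gaussian blending weights
    total = w.sum()
    return (w * depths).sum() / total if total > 0 else np.inf

# A nearly opaque front Gaussian dominates the composited depth.
print(composite_depth([0.9, 0.5], [2.0, 5.0]))  # ≈ 2.158
```

Normalizing by the accumulated opacity (rather than leaving the raw weighted sum) keeps the depth estimate unbiased on rays that are not fully saturated.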


Reference lion (ground truth)

Reference Image.

PGSR result (lion)
PGSR
Ours

PGSR (left) vs. ours (right).

The images used for training are taken from the “Komainu / Kobe / Ikuta-jinja” dataset provided by Open Heritage 3D.

Paper and Code

Paper preview (page 1)
B. Zhang, C. Jiang, H. Li, S. Shen, P. Tan
Geometry-Grounded Gaussian Splatting
arXiv, 2026.

Results

We visualize the cycle reprojection error. Our method achieves stronger multi-view consistency.
Results on the DTU dataset.
DTU comparison
Depth maps visualized by converting them into a 3D point cloud.
Novel view synthesis comparison
Comparison of our method with previous Gaussian Splatting-based methods on the Tanks and Temples dataset.
DTU reconstructions
Shape reconstructions on the DTU dataset.

Acknowledgements

This website is adapted from this template.