DGNS: Deformable Gaussian Splatting and Dynamic Neural Surface for Monocular Dynamic 3D Reconstruction

¹Australian National University, ²CSIRO, ³The University of Hong Kong
The 33rd ACM International Conference on Multimedia

Abstract

Dynamic scene reconstruction from monocular video is essential for real-world applications. We introduce DGNS, a hybrid framework integrating Deformable Gaussian Splatting and Dynamic Neural Surfaces, effectively addressing dynamic novel-view synthesis and 3D geometry reconstruction simultaneously. During training, depth maps generated by the deformable Gaussian splatting module guide the ray sampling for faster processing and provide depth supervision within the dynamic neural surface module to improve geometry reconstruction. Conversely, the dynamic neural surface directs the distribution of Gaussian primitives around the surface, enhancing rendering quality. In addition, we propose a depth-filtering approach to further refine depth supervision. Extensive experiments conducted on public datasets demonstrate that DGNS achieves state-of-the-art performance in 3D reconstruction, along with competitive results in novel-view synthesis.
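To make the cross-module coupling concrete, below is a minimal PyTorch sketch of two ideas from the abstract: using a splatting-rendered depth map to concentrate ray samples near the surface, and filtering unreliable depths before using them as supervision. All function names, the opacity threshold, and the sampling band width are hypothetical illustrations, not the paper's actual implementation.

import torch

def filter_depth(depth, alpha, alpha_min=0.9):
    """Trust a pixel's splatted depth only if its accumulated opacity is high.

    depth: (H, W) depth rendered by the deformable Gaussian splatting module.
    alpha: (H, W) accumulated opacity per pixel.
    Returns a boolean mask of pixels whose depth is kept for supervision.
    (The threshold 0.9 is an assumed value for illustration.)
    """
    return alpha > alpha_min

def sample_near_depth(depth, mask, n_samples=32, band=0.1, near=0.1, far=10.0):
    """Sample t-values per ray, concentrated around the splatted depth.

    Trusted rays get uniform samples in [d - band, d + band]; untrusted rays
    fall back to uniform samples over the full [near, far] range.
    depth, mask: (N,) flattened per-ray depth and trust mask.
    Returns (N, n_samples) sorted sample distances along each ray.
    """
    u = torch.rand(depth.shape[0], n_samples)
    lo = torch.where(mask, (depth - band).clamp(min=near),
                     torch.full_like(depth, near))
    hi = torch.where(mask, (depth + band).clamp(max=far),
                     torch.full_like(depth, far))
    t = lo[:, None] + u * (hi - lo)[:, None]
    return t.sort(dim=-1).values

def depth_supervision_loss(pred_depth, target_depth, mask):
    """L1 loss between the neural surface's depth and the filtered splat depth."""
    if mask.sum() == 0:
        return pred_depth.new_zeros(())
    return (pred_depth[mask] - target_depth[mask]).abs().mean()

# Toy usage on random data, standing in for one training iteration.
H, W = 4, 4
splat_depth = torch.rand(H, W) * 5 + 1
splat_alpha = torch.rand(H, W)
mask = filter_depth(splat_depth, splat_alpha)
t_vals = sample_near_depth(splat_depth.flatten(), mask.flatten())
loss = depth_supervision_loss(torch.rand(H, W) * 5 + 1, splat_depth, mask)

In this sketch, concentrating samples in a narrow band around the splatted depth is what yields the "faster processing" the abstract mentions: far fewer samples per ray are needed when most of them land near the surface.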


Qualitative results on the DG-Mesh dataset.

Results visualization

BibTeX

@inproceedings{li2024dgns,
  title={DGNS: Deformable Gaussian Splatting and Dynamic Neural Surface for Monocular Dynamic 3D Reconstruction},
  author={Li, Xuesong and Tong, Jinguang and Hong, Jie and Rolland, Vivien and Petersson, Lars},
  booktitle={Proceedings of the 33rd ACM International Conference on Multimedia},
  year={2025}
}