AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction

Jingnan Gao     Zhuo Chen     Yichao Yan     Xiaokang Yang
Shanghai Jiao Tong University       


The left part demonstrates the ability of AniSDF to produce accurate geometry and high-quality renderings.
The right part presents its capability to handle diverse scenes, including complex, luminous, highly reflective, and fuzzy objects.


Abstract

Neural radiance fields have recently revolutionized novel-view synthesis and achieved high-fidelity renderings. However, these methods sacrifice geometry for rendering quality, limiting further applications such as relighting and deformation. How to synthesize photo-realistic renderings while reconstructing accurate geometry remains an unsolved problem. In this work, we present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction. Unlike previous neural surfaces, our fused-granularity geometry structure balances overall structure against fine geometric detail, producing accurate geometry reconstruction. To disambiguate geometry from reflective appearance, we introduce blended radiance fields that model diffuse and specular appearance via anisotropic spherical Gaussian encoding, a physics-based rendering pipeline. With these designs, AniSDF can reconstruct objects with complex structures and produce high-quality renderings. Furthermore, our method is a unified model that does not require complex hyperparameter tuning for specific objects. Extensive experiments demonstrate that our method achieves state-of-the-art results in both geometry reconstruction and novel-view synthesis.


Video


Overview

We utilize a fused-granularity neural surface structure that combines coarse and fine grids for accurate surface reconstruction. We then employ a view-based radiance field and a reflection-based radiance field to model the diffuse and specular components, respectively. By learning a 3D weight field, we blend the two radiance fields to obtain high-fidelity renderings.
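The blending step above can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the paper's implementation: `asg` evaluates one anisotropic spherical Gaussian lobe in the standard form G(v) = c · max(v·z, 0) · exp(−λ(v·x)² − μ(v·y)²), and `blend` mixes the diffuse and specular radiance with a per-point weight w ∈ [0, 1] that would come from the learned 3D weight field. All function and variable names here are illustrative.

```python
import numpy as np

def asg(v, x_axis, y_axis, z_axis, lam, mu, c):
    """One anisotropic spherical Gaussian lobe (illustrative):
    G(v) = c * max(v.z_axis, 0) * exp(-lam*(v.x_axis)^2 - mu*(v.y_axis)^2),
    where (x_axis, y_axis, z_axis) is an orthonormal lobe frame,
    lam/mu are the two bandwidths, and c is the lobe amplitude."""
    smooth = max(np.dot(v, z_axis), 0.0)                      # clamp to the upper hemisphere
    falloff = np.exp(-lam * np.dot(v, x_axis) ** 2
                     - mu * np.dot(v, y_axis) ** 2)           # anisotropic angular falloff
    return c * smooth * falloff

def blend(diffuse_rgb, specular_rgb, w):
    """Blend the two radiance-field outputs with a learned scalar weight w."""
    return w * diffuse_rgb + (1.0 - w) * specular_rgb
```

For a view direction aligned with the lobe axis, `asg` returns the full amplitude `c`; off-axis directions decay anisotropically, faster along whichever tangent axis has the larger bandwidth, which is what lets the encoding represent stretched highlights.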



Results

Mesh Comparisons

Neuralangelo vs. Ours · NeuS vs. Ours
2DGS vs. Ours · NeRO vs. Ours

Rendering Comparisons

Neuralangelo vs. Ours · NeuS vs. Ours
2DGS vs. Ours · NeRO vs. Ours

BibTeX

@article{gao2024anisdf,
  title={AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction},
  author={Jingnan Gao and Zhuo Chen and Yichao Yan and Xiaokang Yang},
  journal={arXiv preprint arXiv:2410.01202},
  year={2024}
}