Signed Distance Function
Signed distance functions (SDFs) represent 3D shapes implicitly by encoding, at each point in space, the signed distance to the nearest surface (negative inside, positive outside), which facilitates efficient shape manipulation and rendering. Current research focuses on improving SDF learning from various data sources (e.g., images, point clouds) using neural networks, often incorporating techniques such as adversarial training, multi-resolution representations (e.g., octrees, binoctrees), and novel loss functions to enhance accuracy and efficiency. This work advances 3D computer vision, enabling applications such as high-fidelity scene reconstruction, novel view synthesis, and robust object manipulation in robotics and other fields.
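To make the definition concrete, the following is a minimal illustrative sketch (not the method of any paper listed below) of an analytic SDF for a sphere; the function name `sphere_sdf` and the example query points are assumptions chosen purely for illustration. The sign of the returned value indicates inside versus outside, and its magnitude is the distance to the nearest surface point.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    """Analytic signed distance to a sphere:
    negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Query a few points; the sign encodes containment, the magnitude the distance.
queries = np.array([
    [0.0, 0.0, 0.0],   # center of the sphere -> -1.0 (inside)
    [1.0, 0.0, 0.0],   # on the surface       ->  0.0
    [2.0, 0.0, 0.0],   # outside              -> +1.0
])
print(sphere_sdf(queries))  # [-1.  0.  1.]
```

Neural SDF methods replace such closed-form functions with a learned network that maps a 3D query point to its signed distance, trained from data such as images or point clouds.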
Papers
NeurCross: A Self-Supervised Neural Approach for Representing Cross Fields in Quad Mesh Generation
Qiujie Dong, Huibiao Wen, Rui Xu, Xiaokang Yu, Jiaran Zhou, Shuangmin Chen, Shiqing Xin, Changhe Tu, Wenping Wang
GS-ROR: 3D Gaussian Splatting for Reflective Object Relighting via SDF Priors
Zuo-Liang Zhu, Beibei Wang, Jian Yang