Signed Distance Function
Signed distance functions (SDFs) represent 3D shapes implicitly by encoding, at every point in space, the signed distance to the nearest surface, with the sign distinguishing interior from exterior; this makes shape manipulation and rendering (e.g., via sphere tracing) efficient. Current research focuses on improving SDF learning from various data sources (e.g., images, point clouds) using neural networks, often incorporating techniques such as adversarial training, multi-resolution representations (e.g., octrees, binoctrees), and novel loss functions to enhance accuracy and efficiency. This work advances 3D computer vision, enabling applications such as high-fidelity scene reconstruction, novel view synthesis, and robust object manipulation in robotics and other fields.
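As a concrete illustration of the implicit representation described above, here is a minimal sketch (an illustrative example, not drawn from any of the listed papers) that evaluates the analytic SDF of a sphere and uses it for sphere tracing, the ray-marching scheme that makes SDFs convenient to render. The function and parameter names are assumptions chosen for clarity.

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - center, axis=-1) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, max_dist=100.0):
    """March along a ray, stepping by the SDF value until the surface is reached."""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:          # close enough to the zero level set: surface hit
            return p
        t += d               # safe step: the nearest surface is at least d away
        if t > max_dist:
            break
    return None              # ray missed the shape

if __name__ == "__main__":
    hit = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]), sphere_sdf)
    print(hit)  # approximately (0, 0, -1): the front of the unit sphere
```

The same interface carries over to learned SDFs: a trained network replaces the analytic `sphere_sdf`, while the renderer and downstream manipulation code stay unchanged, which is one reason the representation is attractive for the reconstruction and view-synthesis applications surveyed here.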
Papers
Towards Balanced RGB-TSDF Fusion for Consistent Semantic Scene Completion by 3D RGB Feature Completion and a Classwise Entropy Loss Function
Laiyan Ding, Panwen Hu, Jie Li, Rui Huang
ASDF: Assembly State Detection Utilizing Late Fusion by Integrating 6D Pose Estimation
Hannah Schieber, Shiyu Li, Niklas Corell, Philipp Beckerle, Julian Kreimeier, Daniel Roth
Stochastic Implicit Neural Signed Distance Functions for Safe Motion Planning under Sensing Uncertainty
Carlos Quintero-Peña, Wil Thomason, Zachary Kingston, Anastasios Kyrillidis, Lydia E. Kavraki
Learning Effective NeRFs and SDFs Representations with 3D Generative Adversarial Networks for 3D Object Generation: Technical Report for ICCV 2023 OmniObject3D Challenge
Zheyuan Yang, Yibo Liu, Guile Wu, Tongtong Cao, Yuan Ren, Yang Liu, Bingbing Liu