Surface Normal
Surface normal estimation, the task of determining the orientation of a surface at each point, is crucial for 3D scene understanding and reconstruction. Current research focuses on improving accuracy and efficiency with deep learning architectures such as convolutional neural networks (CNNs), transformers, and neural implicit surface representations, often combined with techniques like polarimetric imaging and multi-view stereo to handle challenging cases such as specular or transparent surfaces. These advances support progress in robotics (e.g., manipulation, navigation), computer vision (e.g., 3D modeling, object recognition), and manufacturing (e.g., precision control). The development of self-supervised and zero-shot methods is another significant trend, reducing reliance on large labeled datasets.
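As a concrete illustration of the task (a minimal sketch, not tied to any of the papers below), the snippet estimates per-pixel normals from a depth map with known pinhole intrinsics by back-projecting pixels to 3D and taking the cross product of local tangent vectors. The function name and parameters are illustrative assumptions; learned methods replace this geometric recipe with a trained network.

```python
import numpy as np

def normals_from_depth(depth, fx, fy, cx, cy):
    """Estimate per-pixel surface normals from a depth map (in meters).

    Back-projects each pixel to a camera-space 3D point, then takes the
    cross product of finite-difference tangent vectors along the image axes.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Back-project to camera-space 3D points using pinhole intrinsics.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1)  # (h, w, 3)

    # Tangent vectors along image columns and rows (central differences).
    du = np.gradient(points, axis=1)
    dv = np.gradient(points, axis=0)

    # Normal is the cross product of the tangents, normalized to unit length.
    n = np.cross(du, dv)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12

    # Orient normals toward the camera (negative z in the camera frame).
    n[n[..., 2] > 0] *= -1
    return n

if __name__ == "__main__":
    # Synthetic tilted plane as a quick sanity check (hypothetical intrinsics).
    h, w = 120, 160
    u, _ = np.meshgrid(np.arange(w), np.arange(h))
    depth = 1.0 + 0.002 * u  # depth increases toward the right of the image
    normals = normals_from_depth(depth, fx=200.0, fy=200.0, cx=w / 2, cy=h / 2)
    print(normals[h // 2, w // 2])  # approximately constant across the plane
```

This gradient-based recipe is noise-sensitive at depth discontinuities, which is part of why the learning-based and multi-modal approaches surveyed above are an active research area.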
Papers
NormalFlow: Fast, Robust, and Accurate Contact-based Object 6DoF Pose Tracking with Vision-based Tactile Sensors
Hung-Jui Huang, Michael Kaess, Wenzhen Yuan
Neural LightRig: Unlocking Accurate Object Normal and Material Estimation with Multi-Light Diffusion
Zexin He, Tengfei Wang, Xin Huang, Xingang Pan, Ziwei Liu