Implicit 3D Representation
Implicit 3D representation captures three-dimensional shapes and scenes as continuous functions parameterized by neural networks, aiming to overcome limitations of explicit representations such as meshes. Current research heavily utilizes neural radiance fields (NeRFs) and related architectures, often incorporating techniques like diffusion models for pose estimation and refinement, and exploring hybrid explicit-implicit approaches to leverage the strengths of both representation types. This field is significant for its potential to improve various applications, including 3D reconstruction from images and videos, virtual and augmented reality, robotics, and digital content creation.
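To make the contrast with explicit meshes concrete, here is a minimal sketch of the implicit idea using a closed-form signed distance function (SDF) for a unit sphere. A neural implicit method (an MLP in a NeRF-style or SDF-based model) would learn such a coordinate-to-value function from data rather than using this analytic formula; the function and its values here are purely illustrative.

```python
import math

# An implicit representation encodes a shape as a function over 3D space:
# f(p) < 0 inside the shape, f(p) == 0 on the surface, f(p) > 0 outside.
# No vertices, faces, or fixed resolution are stored -- the surface is
# the zero level set of f, queryable at any continuous point.

def sphere_sdf(p, radius=1.0):
    """Signed distance from point p = (x, y, z) to a sphere at the origin."""
    x, y, z = p
    return math.sqrt(x * x + y * y + z * z) - radius

# Query arbitrary points in space:
print(sphere_sdf((0.0, 0.0, 0.0)))  # -1.0 (inside)
print(sphere_sdf((1.0, 0.0, 0.0)))  #  0.0 (on the surface)
print(sphere_sdf((2.0, 0.0, 0.0)))  #  1.0 (outside)
```

In learned variants, `sphere_sdf` is replaced by a neural network trained so that its zero level set matches the target geometry, which is what gives implicit methods their resolution independence.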
Papers
Recognizing Scenes from Novel Viewpoints
Shengyi Qian, Alexander Kirillov, Nikhila Ravi, Devendra Singh Chaplot, Justin Johnson, David F. Fouhey, Georgia Gkioxari
3D-Aware Semantic-Guided Generative Model for Human Synthesis
Jichao Zhang, Enver Sangineto, Hao Tang, Aliaksandr Siarohin, Zhun Zhong, Nicu Sebe, Wei Wang