3D Scene
3D scene representation and manipulation are active research areas aimed at creating realistic, editable digital environments. Current efforts center on efficient and robust algorithms, such as Gaussian splatting and neural radiance fields (NeRFs), that reconstruct scenes from varied data sources (images, videos, point clouds) while handling challenges like occlusions, dynamic objects, and adverse weather. These advances are driving progress in applications ranging from autonomous driving and virtual/augmented reality to cultural heritage preservation and interactive 3D content creation. Another key direction is generalizable models that scale to large scenes and transfer across diverse tasks.
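Since the summary names NeRFs without detail, the sketch below illustrates the volume-rendering compositing step shared by NeRF-style reconstruction: per-sample densities and colors along a camera ray are blended into a pixel color via transmittance weights. The helper name `composite_along_ray` and the toy ray values are illustrative assumptions, not code from the listed papers.

```python
import numpy as np

def composite_along_ray(sigmas, colors, deltas):
    """Blend per-sample densities and colors along one ray into a pixel color.

    Standard NeRF-style volume-rendering quadrature: each sample contributes
    its color weighted by segment opacity (alpha) times the transmittance,
    i.e. how much light survives to reach that sample.
    sigmas: (N,) non-negative volume densities
    colors: (N, 3) RGB predicted at each sample
    deltas: (N,) distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                # segment opacity
    # Transmittance: fraction of the ray not yet absorbed before sample i.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                               # per-sample weight
    rgb = (weights[:, None] * colors).sum(axis=0)          # accumulated color
    return rgb, weights

if __name__ == "__main__":
    # Toy example: 64 samples along one ray through a synthetic density bump.
    n = 64
    t = np.linspace(2.0, 6.0, n)
    deltas = np.diff(t, append=t[-1] + (t[1] - t[0]))
    sigmas = 10.0 * np.exp(-((t - 4.0) ** 2) / 0.1)        # density peak near t = 4
    colors = np.tile(np.array([0.2, 0.5, 0.8]), (n, 1))
    rgb, weights = composite_along_ray(sigmas, colors, deltas)
    print("pixel color:", rgb, " total opacity:", weights.sum())
```

In a full NeRF pipeline the densities and colors would come from a learned network queried at each sample point; Gaussian splatting replaces this ray-marching quadrature with rasterized, alpha-blended 3D Gaussians but relies on the same compositing idea.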
Papers
NeO 360: Neural Fields for Sparse View Synthesis of Outdoor Scenes
Muhammad Zubair Irshad, Sergey Zakharov, Katherine Liu, Vitor Guizilini, Thomas Kollar, Adrien Gaidon, Zsolt Kira, Rares Ambrus
NOVA: NOvel View Augmentation for Neural Composition of Dynamic Objects
Dakshit Agrawal, Jiajie Xu, Siva Karthik Mustikovela, Ioannis Gkioulekas, Ashish Shrivastava, Yuning Chai