3D Scene
3D scene representation and manipulation are active research areas aiming to create realistic and editable digital environments. Current efforts focus on efficient and robust scene representations, such as 3D Gaussian splatting and neural radiance fields (NeRFs), which reconstruct scenes from data sources like images, videos, and point clouds while handling challenges such as occlusions, dynamic objects, and adverse weather. These advances are driving progress in applications ranging from autonomous driving and virtual/augmented reality to cultural heritage preservation and interactive 3D content creation. A key focus is developing generalizable models that scale to large scenes and diverse tasks.
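To make the NeRF family of methods mentioned above concrete, here is a minimal sketch of the volume-rendering quadrature at their core: each camera ray is sampled at discrete points, a network predicts a density and color per sample, and the samples are alpha-composited into a pixel color. The function name `volume_render` and the toy inputs are illustrative, not from any specific paper.

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """Alpha-composite samples along one ray (NeRF-style quadrature).

    densities: (N,) non-negative volume densities sigma_i at N samples
    colors:    (N, 3) RGB color c_i predicted at each sample
    deltas:    (N,) spacing between adjacent samples along the ray
    Returns the rendered RGB and the per-sample compositing weights.
    """
    # Opacity contributed by each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# Toy ray: empty space except a dense red "surface" at the third sample
densities = np.array([0.0, 0.0, 50.0, 0.0])
colors = np.tile(np.array([1.0, 0.0, 0.0]), (4, 1))
deltas = np.full(4, 0.1)
rgb, weights = volume_render(densities, colors, deltas)
```

In a real NeRF, `densities` and `colors` come from an MLP queried at the sample positions, and the same weights are reused to composite depth; Gaussian splatting replaces the per-ray sampling with rasterized, sorted 3D Gaussians but keeps the same alpha-compositing rule.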
Papers
Virtual Pets: Animatable Animal Generation in 3D Scenes
Yen-Chi Cheng, Chieh Hubert Lin, Chaoyang Wang, Yash Kant, Sergey Tulyakov, Alexander Schwing, Liangyan Gui, Hsin-Ying Lee
Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed Diffusion Models
Huan Ling, Seung Wook Kim, Antonio Torralba, Sanja Fidler, Karsten Kreis
Free-Editor: Zero-shot Text-driven 3D Scene Editing
Nazmul Karim, Umar Khalid, Hasan Iqbal, Jing Hua, Chen Chen
LatentEditor: Text Driven Local Editing of 3D Scenes
Umar Khalid, Hasan Iqbal, Nazmul Karim, Jing Hua, Chen Chen
Bayes3D: fast learning and inference in structured generative models of 3D objects and scenes
Nishad Gothoskar, Matin Ghavami, Eric Li, Aidan Curtis, Michael Noseworthy, Karen Chung, Brian Patton, William T. Freeman, Joshua B. Tenenbaum, Mirko Klukas, Vikash K. Mansinghka