3D Scene
3D scene representation and manipulation are active research areas that aim to build realistic, editable digital environments. Current work centers on efficient and robust methods, notably 3D Gaussian splatting and neural radiance fields (NeRFs), that reconstruct scenes from images, videos, or point clouds while coping with occlusions, dynamic objects, and adverse imaging conditions. These advances drive applications ranging from autonomous driving and virtual/augmented reality to cultural heritage preservation and interactive 3D content creation. A key open goal is generalizable models that scale to large scenes and handle diverse tasks.
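Both NeRFs and Gaussian splatting ultimately render a pixel by alpha-compositing per-sample opacities along a viewing ray (or over the splats covering that pixel). A minimal NumPy sketch of this standard volume-rendering quadrature is below; the function name and inputs are illustrative, not taken from any of the papers listed here.

```python
import numpy as np

def composite_along_ray(sigmas, deltas, colors):
    """Alpha-composite samples along one ray.

    sigmas: (N,) volume densities at the samples
    deltas: (N,) distances between consecutive samples
    colors: (N, 3) RGB color of each sample
    """
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity contributed by sample i
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # T_i: transmittance, i.e. how much light survives past samples 0..i-1
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    weights = alphas * trans
    # final pixel color is the transmittance-weighted sum of sample colors
    return weights @ colors, weights
```

A nearly opaque sample close to the camera receives almost all the weight, which is what lets these methods model occlusion: everything behind it is attenuated by the accumulated transmittance.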
Papers
Scaled Inverse Graphics: Efficiently Learning Large Sets of 3D Scenes
Karim Kassab, Antoine Schnepf, Jean-Yves Franceschi, Laurent Caraffa, Flavian Vasile, Jeremie Mary, Andrew Comport, Valérie Gouet-Brunet
SceneComplete: Open-World 3D Scene Completion in Complex Real World Environments for Robot Manipulation
Aditya Agarwal, Gaurav Singh, Bipasha Sen, Tomás Lozano-Pérez, Leslie Pack Kaelbling
GaussianBlock: Building Part-Aware Compositional and Editable 3D Scene by Primitives and Gaussians
Shuyi Jiang, Qihao Zhao, Hossein Rahmani, De Wen Soh, Jun Liu, Na Zhao
UW-GS: Distractor-Aware 3D Gaussian Splatting for Enhanced Underwater Scene Reconstruction
Haoran Wang, Nantheera Anantrasirichai, Fan Zhang, David Bull
RenderWorld: World Model with Self-Supervised 3D Label
Ziyang Yan, Wenzhen Dong, Yihua Shao, Yuhang Lu, Haiyang Liu, Jingwen Liu, Haozhe Wang, Zhe Wang, Yan Wang, Fabio Remondino, Yuexin Ma
SplatFields: Neural Gaussian Splats for Sparse 3D and 4D Reconstruction
Marko Mihajlovic, Sergey Prokudin, Siyu Tang, Robert Maier, Federica Bogo, Tony Tung, Edmond Boyer