3D Scene
3D scene representation and manipulation are active research areas aimed at creating realistic, editable digital environments. Current work develops efficient and robust algorithms, such as Gaussian splatting and neural radiance fields (NeRFs), to reconstruct scenes from diverse data sources (images, videos, point clouds) while handling challenges like occlusions, dynamic objects, and adverse weather. These advances drive applications ranging from autonomous driving and virtual/augmented reality to cultural heritage preservation and interactive 3D content creation, with growing emphasis on generalizable models that scale to large scenes and diverse tasks.
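To make the NeRF-style reconstruction mentioned above concrete, here is a minimal sketch of the volume-rendering step such methods share: alpha-compositing sampled densities and colors along a ray. The function name and inputs are illustrative assumptions, not code from any of the papers below.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """NeRF-style volume rendering along one ray (illustrative sketch).

    densities: (N,) non-negative volume densities sigma_i at each sample
    colors:    (N, 3) RGB color emitted at each sample
    deltas:    (N,) distances between consecutive samples
    """
    # Opacity of each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                       # per-sample contribution
    rgb = (weights[:, None] * colors).sum(axis=0)  # expected ray color
    return rgb, weights

# Toy usage: three samples along a ray
sigma = np.array([0.0, 2.0, 5.0])
rgb   = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
delta = np.array([0.5, 0.5, 0.5])
print(composite_ray(sigma, rgb, delta)[0])
```

Gaussian splatting replaces the per-ray sampling with rasterized 3D Gaussians, but the same alpha-compositing weights govern how primitives blend into the final image.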
Papers
Toon3D: Seeing Cartoons from a New Perspective
Ethan Weber, Riley Peterlinz, Rohan Mathur, Frederik Warburg, Alexei A. Efros, Angjoo Kanazawa
CAT3D: Create Anything in 3D with Multi-View Diffusion Models
Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T. Barron, Ben Poole
Generating Human Motion in 3D Scenes from Text Descriptions
Zhi Cen, Huaijin Pi, Sida Peng, Zehong Shen, Minghui Yang, Shuai Zhu, Hujun Bao, Xiaowei Zhou
GaussianVTON: 3D Human Virtual Try-ON via Multi-Stage Gaussian Splatting Editing with Image Prompting
Haodong Chen, Yongle Huang, Haojian Huang, Xiangsheng Ge, Dian Shao
RefFusion: Reference Adapted Diffusion Models for 3D Scene Inpainting
Ashkan Mirzaei, Riccardo De Lutio, Seung Wook Kim, David Acuna, Jonathan Kelly, Sanja Fidler, Igor Gilitschenski, Zan Gojcic
CorrespondentDream: Enhancing 3D Fidelity of Text-to-3D using Cross-View Correspondences
Seungwook Kim, Kejie Li, Xueqing Deng, Yichun Shi, Minsu Cho, Peng Wang