3D Content
3D content generation and manipulation are active research areas that aim to create realistic and versatile three-dimensional models and scenes. Current efforts focus on improving real-time rendering, AI-assisted collaborative creation, and style transfer, using techniques such as Gaussian splatting and diffusion models, often incorporating 3D priors or leveraging foundation models like the Segment Anything Model. These advances benefit applications including virtual and augmented reality, computer-aided design, and medical imaging by enabling more efficient and accurate 3D content creation and analysis.
Papers
MVPaint: Synchronized Multi-View Diffusion for Painting Anything 3D
Wei Cheng, Juncheng Mu, Xianfang Zeng, Xin Chen, Anqi Pang, Chi Zhang, Zhibin Wang, Bin Fu, Gang Yu, Ziwei Liu, Liang Pan
GenXD: Generating Any 3D and 4D Scenes
Yuyang Zhao, Chung-Ching Lin, Kevin Lin, Zhiwen Yan, Linjie Li, Zhengyuan Yang, Jianfeng Wang, Gim Hee Lee, Lijuan Wang
3D Audio-Visual Segmentation
Artem Sokolov, Swapnil Bhosale, Xiatian Zhu
Analyzing Multimodal Interaction Strategies for LLM-Assisted Manipulation of 3D Scenes
Junlong Chen, Jens Grubert, Per Ola Kristensson
A Survey on RGB, 3D, and Multimodal Approaches for Unsupervised Industrial Anomaly Detection
Yuxuan Lin, Yang Chang, Xuan Tong, Jiawen Yu, Antonio Liotta, Guofan Huang, Wei Song, Deyu Zeng, Zongze Wu, Yan Wang, Wenqiang Zhang
Robotic Arm Platform for Multi-View Image Acquisition and 3D Reconstruction in Minimally Invasive Surgery
Alexander Saikia, Chiara Di Vece, Sierra Bonilla, Chloe He, Morenike Magbagbeola, Laurent Mennillo, Tobias Czempiel, Sophia Bano, Danail Stoyanov
Dual-Teacher Ensemble Models with Double-Copy-Paste for 3D Semi-Supervised Medical Image Segmentation
Zhan Fa, Shumeng Li, Jian Zhang, Lei Qi, Qian Yu, Yinghuan Shi