3D Content
3D content generation and manipulation are active research areas that aim to create realistic and versatile three-dimensional models and scenes. Current efforts focus on real-time rendering, AI-assisted collaborative creation, and style transfer, using techniques such as Gaussian splatting and diffusion models, often incorporating 3D priors or leveraging foundation models like the Segment Anything Model. These advances matter for applications including virtual and augmented reality, computer-aided design, and medical imaging, where they enable more efficient and accurate 3D content creation and analysis.
Papers
StereoCrafter: Diffusion-based Generation of Long and High-fidelity Stereoscopic 3D from Monocular Videos
Sijie Zhao, Wenbo Hu, Xiaodong Cun, Yong Zhang, Xiaoyu Li, Zhe Kong, Xiangjun Gao, Muyao Niu, Ying Shan
3DGCQA: A Quality Assessment Database for 3D AI-Generated Contents
Yingjie Zhou, Zicheng Zhang, Farong Wen, Jun Jia, Yanwei Jiang, Xiaohong Liu, Xiongkuo Min, Guangtao Zhai
SAM2Point: Segment Any 3D as Videos in Zero-shot and Promptable Manners
Ziyu Guo, Renrui Zhang, Xiangyang Zhu, Chengzhuo Tong, Peng Gao, Chunyuan Li, Pheng-Ann Heng
Improving 3D deep learning segmentation with biophysically motivated cell synthesis
Roman Bruch, Mario Vitacolonna, Elina Nürnberg, Simeon Sauer, Rüdiger Rudolf, Markus Reischl
Time-Optimized Trajectory Planning for Non-Prehensile Object Transportation in 3D
Lingyun Chen, Haoyu Yu, Abdeldjallil Naceri, Abdalla Swikir, Sami Haddadin