3D Content
3D content generation and manipulation are active research areas that aim to create realistic and versatile three-dimensional models and scenes. Current efforts focus on real-time rendering, AI-assisted collaborative creation, and style transfer, using techniques such as Gaussian splatting and diffusion models, often incorporating 3D priors or leveraging foundation models like the Segment Anything Model. These advances benefit applications including virtual and augmented reality, computer-aided design, and medical imaging by enabling more efficient and accurate 3D content creation and analysis.
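Several of the papers below build on denoising diffusion models, whose core operation is an iterative reverse (denoising) step. The following is a minimal sketch of the standard DDPM reverse update on a toy 3D point set; all names (`make_schedule`, `predict_noise`, `ddpm_reverse_step`) are illustrative, and a real system would replace the stand-in noise predictor with a trained network.

```python
import numpy as np

def make_schedule(T=100, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule and its cumulative products."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def predict_noise(x_t, t):
    """Stand-in for a trained noise-prediction network (here: zeros)."""
    return np.zeros_like(x_t)

def ddpm_reverse_step(x_t, t, betas, alphas, alpha_bars, rng):
    """Sample x_{t-1} from x_t using the standard DDPM posterior mean."""
    eps = predict_noise(x_t, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps) / np.sqrt(alphas[t])
    if t > 0:  # add noise at every step except the last
        return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

rng = np.random.default_rng(0)
betas, alphas, alpha_bars = make_schedule()
x = rng.standard_normal((8, 3))  # toy "3D content": 8 points in 3D
for t in reversed(range(100)):
    x = ddpm_reverse_step(x, t, betas, alphas, alpha_bars, rng)
print(x.shape)  # (8, 3)
```

In the papers listed here, the same update operates on much richer representations (multiview images, NeRF parameters, or 3D microstructure volumes) rather than raw points.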
Papers
Efficient-NeRF2NeRF: Streamlining Text-Driven 3D Editing with Multiview Correspondence-Enhanced Diffusion Models
Liangchen Song, Liangliang Cao, Jiatao Gu, Yifan Jiang, Junsong Yuan, Hao Tang
Projective Parallel Single-Pixel Imaging: 3D Structured Light Scanning Under Global Illumination
Yuxi Li, Hongzhi Jiang, Huijie Zhao, Xudong Li
Denoising diffusion-based synthetic generation of three-dimensional (3D) anisotropic microstructures from two-dimensional (2D) micrographs
Kang-Hyun Lee, Gun Jin Yun
HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image
Tong Wu, Zhibing Li, Shuai Yang, Pan Zhang, Xingang Pan, Jiaqi Wang, Dahua Lin, Ziwei Liu
Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes
Hmrishav Bandyopadhyay, Subhadeep Koley, Ayan Das, Ayan Kumar Bhunia, Aneeshan Sain, Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song