Text to 3D
Text-to-3D generation aims to create three-dimensional models from textual descriptions, automating a traditionally labor-intensive process. Current research relies heavily on pre-trained 2D image diffusion models, adapting them to 3D through techniques such as score distillation sampling and by building multi-view consistency into 3D representations such as NeRFs and Gaussian splatting. The field is significant for its potential to transform 3D content creation across diverse applications, from gaming and virtual reality to robotics and industrial design, by making design workflows faster, more accessible, and potentially more creative.
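To make the score distillation sampling (SDS) idea concrete, here is a minimal PyTorch-style sketch of a single SDS step. The `denoiser` (a frozen pretrained 2D noise predictor) and `alphas_cumprod` (the diffusion noise schedule) are placeholders standing in for whatever diffusion model is used, not any specific library's API. The rendering from the 3D model is noised, the 2D model's predicted noise is compared against the injected noise, and the residual is pushed back through the differentiable renderer to the 3D parameters.

import torch

def sds_loss(rendered_rgb, denoiser, text_embedding, alphas_cumprod):
    """One score distillation sampling (SDS) step (sketch).

    rendered_rgb:   (B, 3, H, W) image rendered from the 3D model
                    (e.g. a NeRF or Gaussian splatting scene),
                    differentiable w.r.t. the 3D parameters.
    denoiser:       placeholder for a frozen pretrained 2D diffusion
                    noise predictor eps_phi(x_t, t, y).
    alphas_cumprod: (T,) cumulative noise-schedule coefficients.
    """
    b = rendered_rgb.shape[0]
    # Sample a diffusion timestep and Gaussian noise.
    t = torch.randint(20, 980, (b,), device=rendered_rgb.device)
    noise = torch.randn_like(rendered_rgb)
    # Forward-diffuse the rendering: x_t = sqrt(a_t)*x + sqrt(1-a_t)*eps
    a_t = alphas_cumprod[t].view(b, 1, 1, 1)
    x_t = a_t.sqrt() * rendered_rgb + (1 - a_t).sqrt() * noise
    # Predict the noise with the frozen 2D model; no gradient flows
    # through the diffusion network itself.
    with torch.no_grad():
        eps_pred = denoiser(x_t, t, text_embedding)
    # SDS gradient direction: w(t) * (eps_pred - eps).
    w = 1 - a_t
    grad = w * (eps_pred - noise)
    # Detach trick: a surrogate loss whose gradient w.r.t. rendered_rgb
    # equals `grad`, so autograd carries it back to the 3D parameters.
    return (grad.detach() * rendered_rgb).sum()

The detach trick in the last line is the standard way to realize the SDS gradient: it applies the noise residual directly to the rendering, skipping backpropagation through the diffusion network entirely, which is what makes a frozen 2D model usable as a 3D training signal.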