Text-Driven 3D
Text-driven 3D aims to create and manipulate three-dimensional scenes using natural language instructions. Current research focuses on developing efficient and accurate methods for generating photorealistic 3D models from text prompts, often employing diffusion models and neural radiance fields, and incorporating techniques like Gaussian splatting for efficient rendering and hash-atlas representations for flexible editing. This field is significant for its potential to revolutionize 3D content creation across various applications, including gaming, virtual and augmented reality, and film production, by enabling intuitive and user-friendly scene design.
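To make the typical pipeline concrete, the sketch below shows the score-distillation-style optimization loop that underlies many text-to-3D methods: render a view from a differentiable 3D representation, corrupt it with noise, and use a text-conditioned diffusion model's denoising error as a gradient on the scene parameters. This is a minimal illustration, not any specific paper's method; the `ToyScene` voxel grid, the orthographic `render`, the randomly initialised `ToyDenoiser`, and the `text_embedding` are all placeholder stand-ins for a real NeRF or Gaussian-splat scene, a real differentiable renderer, a pretrained diffusion U-Net, and a text encoder.

```python
# Hedged sketch of score-distillation-style text-to-3D optimization.
# Every component is a toy stand-in for the real thing (see lead-in above).
import torch
import torch.nn as nn

class ToyScene(nn.Module):
    """Learnable RGB voxel grid standing in for a NeRF / Gaussian-splat scene."""
    def __init__(self, res=32):
        super().__init__()
        self.voxels = nn.Parameter(torch.rand(3, res, res, res))

    def render(self):
        # Toy orthographic projection: average colours along the depth axis.
        img = self.voxels.mean(dim=3)          # (3, res, res)
        return img.unsqueeze(0)                 # (1, 3, res, res)

class ToyDenoiser(nn.Module):
    """Placeholder for a pretrained text-conditioned diffusion U-Net."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, noisy_img, t, text_embedding):
        # A real model would also condition on the timestep and text embedding.
        return self.net(noisy_img)

scene = ToyScene()
denoiser = ToyDenoiser()
text_embedding = torch.randn(1, 64)            # stand-in for a CLIP/T5 text embedding
optimizer = torch.optim.Adam(scene.parameters(), lr=1e-2)

for step in range(100):
    image = scene.render()

    # Sample a diffusion timestep and corrupt the rendering with noise.
    t = torch.randint(1, 1000, (1,))
    alpha = 1.0 - t.float() / 1000.0
    noise = torch.randn_like(image)
    noisy = alpha.sqrt() * image + (1 - alpha).sqrt() * noise

    # Score distillation: the frozen denoiser's error is treated as a gradient
    # on the rendered image and backpropagated into the scene parameters.
    with torch.no_grad():
        noise_pred = denoiser(noisy, t, text_embedding)
    grad = noise_pred - noise                  # timestep weighting w(t) omitted for brevity
    loss = (grad.detach() * image).sum()       # surrogate loss whose gradient w.r.t. image equals `grad`

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a practical system the same loop is run over many camera viewpoints, and the 3D representation (NeRF, voxel grid, or Gaussian splats) plus a pretrained diffusion prior replace the toy modules above.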