Text-to-3D Synthesis
Text-to-3D synthesis aims to generate three-dimensional models from textual descriptions, leveraging advancements in text-to-image diffusion models and neural radiance fields (NeRFs). Current research focuses on improving efficiency, achieving higher resolution and detail, and enhancing control over object placement and appearance within scenes, often employing techniques like score distillation sampling and amortized optimization across multiple prompts. This field is significant for its potential to revolutionize digital content creation, offering a more intuitive and accessible pathway for generating complex 3D assets for various applications, including animation, gaming, and medical imaging.
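The score distillation sampling technique mentioned above can be made concrete with the formulation introduced in DreamFusion: a pretrained text-to-image diffusion model supplies gradients to a differentiable 3D representation without any 3D training data. A common sketch of the objective (notation follows the DreamFusion paper; symbols here are illustrative, not tied to any specific implementation):

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\phi, x = g(\theta))
= \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \,\right]
```

Here \(g(\theta)\) is a differentiable renderer (e.g., a NeRF with parameters \(\theta\)) producing an image \(x\), \(x_t\) is that image noised to timestep \(t\), \(y\) is the text embedding, \(\hat{\epsilon}_\phi\) is the frozen diffusion model's noise prediction, and \(w(t)\) is a timestep-dependent weight. Intuitively, the 3D parameters are pushed so that rendered views look like plausible samples of the diffusion model conditioned on the prompt.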