3D Generation
3D generation research focuses on creating realistic three-dimensional models from inputs such as text, images, or existing 3D models. Current efforts center on improving the quality, efficiency, and controllability of generation, employing techniques such as diffusion models, autoregressive transformers, and neural radiance fields, often within a multi-view framework. These advances matter for computer graphics, virtual reality, and product design, where they enable faster and more intuitive creation of high-fidelity 3D assets. Developing robust methods that handle diverse input types and produce high-resolution, view-consistent outputs remains a key open problem.
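To make one of these building blocks concrete, the following is a minimal sketch (not drawn from any of the papers below) of the volume rendering quadrature used by neural radiance fields: per-sample densities along a camera ray are converted to opacities and composited front-to-back. NumPy and the toy densities/colors are illustrative assumptions.

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """Composite samples along one ray, NeRF-style:
      alpha_i = 1 - exp(-sigma_i * delta_i)
      T_i     = prod_{j<i} (1 - alpha_j)   (transmittance)
      C       = sum_i T_i * alpha_i * c_i
    densities: (N,) non-negative sigma values
    colors:    (N, 3) RGB in [0, 1]
    deltas:    (N,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy ray: empty space followed by a dense red region.
densities = np.array([0.0, 0.0, 50.0, 50.0])
colors = np.array([[0., 0., 0.], [0., 0., 0.], [1., 0., 0.], [1., 0., 0.]])
deltas = np.full(4, 0.1)
print(volume_render(densities, colors, deltas))  # ~[1, 0, 0]: opaque red
```

Training a radiance field amounts to optimizing the density/color predictions so that rays rendered this way match ground-truth (or, in generative settings, diffusion-guided) multi-view images.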
Papers
Towards Language-guided Interactive 3D Generation: LLMs as Layout Interpreter with Generative Feedback
Yiqi Lin, Hao Wu, Ruichen Wang, Haonan Lu, Xiaodong Lin, Hui Xiong, Lin Wang
T2TD: Text-3D Generation Model based on Prior Knowledge Guidance
Weizhi Nie, Ruidong Chen, Weijie Wang, Bruno Lepri, Nicu Sebe