Compositional Generation
Compositional generation aims to create complex outputs—like images or 3D models—by combining simpler components, addressing the limitations of models that struggle with nuanced instructions or multi-object scenes. Current research focuses on leveraging large language models and diffusion models, often employing training-free methods and techniques like chain-of-thought reasoning or coroutine-based constraints to guide the generation process and improve controllability. This work is significant because it tackles a key challenge in AI—achieving robust generalization and flexible control over complex generative tasks—with implications for various applications, including image and video synthesis, 3D modeling, and program generation.
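One common training-free approach alluded to above is to compose a diffusion model's per-concept noise predictions at sampling time, steering a single sample with several prompts at once. The sketch below illustrates that idea only; `predict_noise` is a hypothetical stand-in for a real model's conditional noise estimate, and the weighting scheme is a simplified, classifier-free-guidance-style combination, not any specific paper's method.

```python
import numpy as np

# Hypothetical stand-in for a diffusion model's noise prediction
# conditioned on a text prompt. Purely illustrative.
def predict_noise(x, prompt):
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return x * 0.1 + rng.standard_normal(x.shape) * 0.01

def composed_noise(x, prompts, weights):
    """Combine per-concept noise predictions around an unconditional
    baseline so that each concept steers the same sample."""
    eps_uncond = predict_noise(x, "")  # unconditional prediction
    eps = eps_uncond.copy()
    for prompt, w in zip(prompts, weights):
        # add a guidance delta for each concept, scaled by its weight
        eps += w * (predict_noise(x, prompt) - eps_uncond)
    return eps

x = np.zeros((4, 4))
eps = composed_noise(x, ["a red cube", "on a wooden table"], [3.0, 3.0])
print(eps.shape)
```

In a real sampler this composed prediction would replace the single-prompt one at every denoising step, which is what makes the method training-free: only inference-time arithmetic changes, not the model weights.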