Controllable Generation
Controllable generation focuses on building models whose outputs (images, text, music, etc.) adhere to specific constraints or user-defined parameters. Current research emphasizes efficient ways to incorporate diverse control signals into existing generative models, such as diffusion models and autoregressive models, often through prompt engineering, fine-tuning, and reparameterization. This matters because precise, tailored generation benefits applications ranging from autonomous driving and protein design to text summarization and artistic creation. More efficient and robust controllable generation methods are accordingly driving progress across many scientific and engineering domains.
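To make the idea of steering a diffusion model with a control signal concrete, here is a minimal sketch of classifier-free guidance, one widely used conditioning mechanism (not the specific method of any paper listed below). The toy arrays stand in for a denoiser's noise predictions; in a real system both would come from one network queried with and without the conditioning input.

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, guidance_scale):
    """Blend unconditional and conditional noise predictions.

    guidance_scale = 1.0 recovers the conditional prediction;
    larger values push samples harder toward the condition.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-ins for a denoiser's outputs at one sampling step
# (hypothetical values, for illustration only).
rng = np.random.default_rng(0)
eps_u = rng.standard_normal(4)          # prediction without the control signal
eps_c = eps_u + 0.5                     # prediction shifted toward the target

guided = classifier_free_guidance(eps_u, eps_c, guidance_scale=3.0)
# With scale 3.0 the guided prediction deliberately overshoots the
# conditional one: guided == eps_u + 3.0 * 0.5 elementwise.
```

At each denoising step the sampler would substitute `guided` for the raw prediction, so the strength of the control is a single scalar knob rather than a retrained model.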
Papers
Human-Feedback Efficient Reinforcement Learning for Online Diffusion Model Finetuning
Ayano Hiranaka, Shang-Fu Chen, Chieh-Hsin Lai, Dongjun Kim, Naoki Murata, Takashi Shibuya, Wei-Hsiang Liao, Shao-Hua Sun, Yuki Mitsufuji
CAR: Controllable Autoregressive Modeling for Visual Generation
Ziyu Yao, Jialin Li, Yifeng Zhou, Yong Liu, Xi Jiang, Chengjie Wang, Feng Zheng, Yuexian Zou, Lei Li
Optimizing Diffusion Models for Joint Trajectory Prediction and Controllable Generation
Yixiao Wang, Chen Tang, Lingfeng Sun, Simone Rossi, Yichen Xie, Chensheng Peng, Thomas Hannagan, Stefano Sabatini, Nicola Poerio, Masayoshi Tomizuka, Wei Zhan
Smoothed Energy Guidance: Guiding Diffusion Models with Reduced Energy Curvature of Attention
Susung Hong