Diffusion-Based Frameworks
Diffusion-based frameworks are generative models that transform noise into data by reversing a gradual noise-addition process, offering a powerful approach to a wide range of machine learning tasks. Current research focuses on increasing model flexibility through alternative data representations and noise schedules, and on developing unified frameworks that handle diverse data types (images, text, molecules) and tasks (generation, regression, segmentation). These models are proving valuable across applications such as image synthesis, speech enhancement, medical image analysis, and scientific computing, particularly where uncertainty quantification is crucial or large datasets are unavailable.
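To make the "reverse a noise-addition process" idea concrete, here is a minimal sketch of DDPM-style ancestral sampling, assuming a linear noise schedule; the noise predictor is a placeholder for a trained network, and all names and values are illustrative, not taken from any paper above.

```python
import numpy as np

T = 1000                            # number of diffusion steps (assumption)
betas = np.linspace(1e-4, 0.02, T)  # linear noise schedule (assumption)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def forward_noise(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form (the noise-addition process)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def predict_noise(xt, t):
    """Placeholder for a trained noise predictor eps_theta(x_t, t)."""
    return np.zeros_like(xt)  # a real model would be learned from data

def reverse_step(xt, t, rng):
    """One ancestral sampling step of the reverse (denoising) process."""
    eps_hat = predict_noise(xt, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mean  # no noise is added at the final step
    return mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((8,))       # start from pure Gaussian noise
for t in reversed(range(T)):        # run the reverse chain to generate a sample
    x = reverse_step(x, t, rng)
```

With a learned `predict_noise`, the same loop turns noise into samples from the data distribution; swapping the schedule (`betas`) or the data representation is exactly the kind of flexibility the research above explores.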
Papers
Diffusion-VLA: Scaling Robot Foundation Models via Unified Diffusion and Autoregression
Junjie Wen, Minjie Zhu, Yichen Zhu, Zhibin Tang, Jinming Li, Zhongyi Zhou, Chengmeng Li, Xiaoyu Liu, Yaxin Peng, Chaomin Shen, Feifei Feng
RFSR: Improving ISR Diffusion Models via Reward Feedback Learning
Xiaopeng Sun, Qinwei Lin, Yu Gao, Yujie Zhong, Chengjian Feng, Dengjie Li, Zheng Zhao, Jie Hu, Lin Ma