Diffusion-Based Frameworks
Diffusion-based frameworks are generative models that transform noise into data by reversing a noise-addition process, offering a powerful approach to various machine learning tasks. Current research focuses on enhancing model flexibility through exploring different data representations and noise schedules, as well as developing unified frameworks capable of handling diverse data types (images, text, molecules) and tasks (generation, regression, segmentation). These models are proving valuable in diverse applications, including image synthesis, speech enhancement, medical image analysis, and scientific computing, particularly where uncertainty quantification is crucial or large datasets are unavailable.
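The core idea described above — destroying data with a noise-addition (forward) process that a learned model then reverses — can be illustrated with a minimal NumPy sketch of the DDPM-style forward process. The function names and the linear beta schedule here are illustrative assumptions, not taken from any particular paper listed below.

```python
import numpy as np

def linear_beta_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Per-step noise variances beta_1..beta_T (a common, simple choice)."""
    return np.linspace(beta_start, beta_end, T)

def forward_diffuse(x0, t, alpha_bars, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
       x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps, eps

T = 1000
betas = linear_beta_schedule(T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)   # cumulative signal retention

rng = np.random.default_rng(0)
x0 = rng.standard_normal(16)                         # toy "data" sample
x_noisy, eps = forward_diffuse(x0, T - 1, alpha_bars, rng)

# By the final step alpha_bar_T is tiny, so x_T is nearly pure Gaussian
# noise; a trained reverse model would iteratively denoise x_T back to x_0.
print(alpha_bars[-1])
```

Generation then amounts to running the learned reverse process: starting from pure noise and repeatedly predicting and removing the noise component, step by step, until a data sample emerges.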
19 papers
Papers
December 4, 2024
Diffusion-VLA: Scaling Robot Foundation Models via Unified Diffusion and Autoregression
Junjie Wen, Minjie Zhu, Yichen Zhu, Zhibin Tang, Jinming Li, Zhongyi Zhou, Chengmeng Li, Xiaoyu Liu, Yaxin Peng, Chaomin Shen, Feifei Feng
RFSR: Improving ISR Diffusion Models via Reward Feedback Learning
Xiaopeng Sun, Qinwei Lin, Yu Gao, Yujie Zhong, Chengjian Feng, Dengjie Li, Zheng Zhao, Jie Hu, Lin Ma