Autoregressive Model
Autoregressive models are a class of generative models that handle sequential data by modeling the probability of each element conditioned on the elements that precede it. Current research focuses on enhancing their controllability for tasks like image and 3D shape generation, improving their efficiency through parallel processing and non-autoregressive techniques, and applying them to diverse domains including time series forecasting, speech recognition, and scientific modeling of physical systems. These advances are driving improvements across applications, from more accurate weather forecasting and efficient speech recognition to novel approaches in drug discovery and materials science.
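To make the next-element factorization concrete, the following is a minimal sketch in plain Python, not drawn from any of the papers below: it fits a toy character-level bigram model (so each element is conditioned only on the single previous element rather than the full prefix, as a full autoregressive model would be) and then samples one element at a time. The corpus and the helper names train_bigram and generate are illustrative assumptions.

```python
# Minimal illustration of the autoregressive factorization
#   p(x_1, ..., x_T) = prod_t p(x_t | x_{<t})
# using a toy character-level bigram model. Everything here is a
# simplified sketch; real autoregressive generators condition on the
# full prefix with a learned network instead of a count table.
import random
from collections import defaultdict

def train_bigram(text):
    """Estimate p(next_char | current_char) from raw counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    # Normalize counts into conditional distributions.
    model = {}
    for prev, nxts in counts.items():
        total = sum(nxts.values())
        model[prev] = {c: n / total for c, n in nxts.items()}
    return model

def generate(model, start, length=30, seed=0):
    """Sample one element at a time, each conditioned on the previous one."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length):
        dist = model.get(seq[-1])
        if not dist:  # no continuation observed for this context
            break
        chars, probs = zip(*dist.items())
        seq.append(rng.choices(chars, weights=probs, k=1)[0])
    return "".join(seq)

corpus = "autoregressive models generate sequences one element at a time "
model = train_bigram(corpus)
print(generate(model, start="a"))
```

Modern autoregressive models for images, speech, or driving behavior replace the bigram table with a neural network over the whole prefix, but the generation loop keeps this same structure: predict a distribution over the next element, sample, append, repeat.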
Papers
EditAR: Unified Conditional Generation with Autoregressive Models
Jiteng Mu, Nuno Vasconcelos, Xiaolong Wang
On Computational Limits and Provably Efficient Criteria of Visual Autoregressive Models: A Fine-Grained Complexity Analysis
Yekun Ke, Xiaoyu Li, Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song
Circuit Complexity Bounds for Visual Autoregressive Model
Yekun Ke, Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song
Parallelized Autoregressive Visual Generation
Yuqing Wang, Shuhuai Ren, Zhijie Lin, Yujin Han, Haoyuan Guo, Zhenheng Yang, Difan Zou, Jiashi Feng, Xihui Liu
DriveGPT: Scaling Autoregressive Behavior Models for Driving
Xin Huang, Eric M. Wolff, Paul Vernaza, Tung Phan-Minh, Hongge Chen, David S. Hayden, Mark Edmonds, Brian Pierce, Xinxin Chen, Pratik Elias Jacob, Xiaobai Chen, Chingiz Tairbekov, Pratik Agarwal, Tianshi Gao, Yuning Chai, Siddhartha Srinivasa