Amortized Model
Amortized models accelerate computationally expensive machine learning tasks by training a fast approximation model to predict the outputs of a slower, more complex model. Current research applies this technique to improve the efficiency of explainable AI methods (such as feature attribution and Shapley value estimation), online adaptation of large language models, and generative models based on the Wasserstein distance. Because the upfront training cost is amortized over many subsequent queries, these approaches often run orders of magnitude faster than traditional methods, making computationally intensive techniques practical for larger datasets and more complex models.
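To make the pattern concrete, here is a minimal sketch of amortization in PyTorch: a slow reference computation (a stand-in `expensive_model`, here a Monte Carlo average) is queried offline to build a training set, a small MLP surrogate is fit to those input-output pairs, and new queries are then answered with a single cheap forward pass. All names and the toy task are illustrative assumptions, not taken from any of the papers below.

```python
# Minimal sketch of an amortized model: train a fast surrogate network
# to reproduce the outputs of a slow reference computation, then answer
# new queries with one forward pass. All names here are illustrative.
import torch
import torch.nn as nn

def expensive_model(x: torch.Tensor, n_samples: int = 10_000) -> torch.Tensor:
    """Stand-in for a slow procedure (e.g., Monte Carlo Shapley value
    estimation): averages a nonlinear function over many random draws."""
    noise = torch.randn(n_samples, *x.shape)
    return torch.tanh(x + 0.1 * noise).mean(dim=0)

# 1) Offline: pay the expensive cost once to build a training set.
train_x = torch.randn(512, 8)
with torch.no_grad():
    train_y = torch.stack([expensive_model(x) for x in train_x])

# 2) Fit the amortized model (a small MLP) to map inputs to outputs.
surrogate = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 8))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(train_x), train_y)
    loss.backward()
    opt.step()

# 3) Online: a new query costs one forward pass instead of 10,000 draws.
query = torch.randn(8)
fast_answer = surrogate(query)
```

The speedup comes from shifting cost from query time to training time; the trade-off is that the surrogate is only an approximation, accurate within the distribution of inputs it was trained on.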
Papers
Solving High-dimensional Inverse Problems Using Amortized Likelihood-free Inference with Noisy and Incomplete Data
Jice Zeng, Yuanzhe Wang, Alexandre M. Tartakovsky, David Barajas-Solano
SceneDiffuser: Efficient and Controllable Driving Simulation Initialization and Rollout
Chiyu Max Jiang, Yijing Bai, Andre Cornman, Christopher Davis, Xiukun Huang, Hong Jeon, Sakshum Kulshrestha, John Lambert, Shuangyu Li, Xuanyu Zhou, Carlos Fuertes, Chang Yuan, Mingxing Tan, Yin Zhou, Dragomir Anguelov