Amortized Model
Amortized models accelerate computationally expensive machine learning tasks by training a fast approximation that predicts the output of a slower, more complex model. Current research applies this technique to explainable AI methods (such as feature attribution and Shapley value estimation), online adaptation of large language models, and generative models based on the Wasserstein distance. The approach often yields speedups of several orders of magnitude over the methods it approximates, making computationally intensive techniques practical for larger datasets and more complex models.
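The core recipe can be illustrated with a minimal sketch: query a slow model offline to build a training set, fit a cheap surrogate once, and then answer new queries with a single fast forward pass. The specific functions here (`expensive_model`, the random-feature regressor) are hypothetical stand-ins, not drawn from any particular paper.

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a slow per-input computation (e.g. an iterative
    # optimization or exact Shapley-value estimation); here just a
    # smooth nonlinear function for illustration.
    return np.sin(x).sum() + 0.1 * (x ** 2).sum()

rng = np.random.default_rng(0)

# Offline (amortization) phase: query the slow model to build a dataset.
X = rng.normal(size=(500, 4))
y = np.array([expensive_model(x) for x in X])

# Amortized model: ridge regression on random cosine features, a cheap
# stand-in for whatever fast architecture a given method actually uses.
W = rng.normal(size=(4, 64))
b = rng.uniform(0, 2 * np.pi, size=64)

def features(X):
    return np.cos(X @ W + b)

Phi = features(X)
coef = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(64), Phi.T @ y)

def amortized_model(x):
    # One matrix-vector product replaces the full slow computation.
    return features(x[None, :]) @ coef

x_new = rng.normal(size=4)
print(amortized_model(x_new)[0], expensive_model(x_new))
```

The speedup comes from moving the cost into the one-time training phase: each new query costs a single feature map and dot product, regardless of how expensive the original model is.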