Amortized Model

Amortized models accelerate computationally expensive machine learning tasks by training a fast approximation (the amortized model) that directly predicts the output of a slower, more complex procedure. Current research applies this idea to improve the efficiency of explainable AI methods (such as feature attribution and Shapley value estimation), online adaptation of large language models, and generative models based on the Wasserstein distance. Because the expensive computation is paid for once during training and replaced by a single forward pass at inference time, amortization often yields speedups of several orders of magnitude, making computationally intensive techniques practical for larger datasets and more complex models.
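
The sketch below illustrates the general amortization recipe under simple assumptions: an expensive per-instance computation (here a toy stand-in for Monte Carlo Shapley estimation, named `slow_explainer`) is run offline to build a training set, and a small neural network is fit to mimic it so that inference needs only one forward pass. All names and the toy target function are illustrative, not drawn from any specific paper.

```python
# Minimal sketch of amortization: train a fast surrogate ("amortized model")
# on input/output pairs produced by a slow, exact procedure, then reuse the
# surrogate at inference time. slow_explainer is a hypothetical placeholder.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def slow_explainer(x, n_samples=2000):
    """Stand-in for an expensive per-instance computation
    (e.g., Monte Carlo Shapley value estimation)."""
    noise = rng.normal(scale=1.0 / np.sqrt(n_samples), size=x.shape)
    return x * np.abs(x) + noise  # toy "true" attributions + MC noise

# 1) Offline phase: pay the expensive cost once to build a training set.
X_train = rng.normal(size=(1000, 8))
Y_train = np.stack([slow_explainer(x) for x in X_train])

# 2) Fit the amortized model (a small MLP) to mimic the slow procedure.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
surrogate.fit(X_train, Y_train)

# 3) Online phase: a single forward pass replaces the per-instance computation.
x_new = rng.normal(size=(1, 8))
fast_attributions = surrogate.predict(x_new)  # far cheaper than slow_explainer
```

The same structure applies regardless of what is being amortized: only the slow procedure that generates the training targets and the architecture of the surrogate change.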

Papers