Meta-Learning Priors
Meta-learning priors aims to improve the efficiency and robustness of machine learning models by learning prior distributions over model parameters from a diverse set of related tasks. Current research emphasizes learning expressive priors that go beyond simple Gaussian distributions, often within Bayesian frameworks and by adapting algorithms such as score matching to function space. Learned priors improve generalization to new tasks with limited data and sharpen uncertainty quantification, with applications ranging from inverse problems in imaging to robotics and human performance capture, leading to more reliable and efficient solutions.
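To make the core idea concrete, here is a minimal, hypothetical sketch (not from any paper on this page) of the simplest instance: fitting a Gaussian prior over linear-regression weights from many related tasks, then adapting to a new task with only a few examples via MAP estimation, which amounts to ridge regression shrunk toward the learned prior mean. All names and the synthetic data setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: linear regression tasks whose true weight vectors
# are drawn from a shared (unknown) prior.
d, n_tasks, n_train = 5, 50, 100
prior_mean_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
task_weights = prior_mean_true + 0.3 * rng.standard_normal((n_tasks, d))

# Meta-training: solve each task with ample data, then fit a Gaussian prior
# (mean and isotropic variance) over the per-task solutions.
solutions = []
for w in task_weights:
    X = rng.standard_normal((n_train, d))
    y = X @ w + 0.1 * rng.standard_normal(n_train)
    solutions.append(np.linalg.lstsq(X, y, rcond=None)[0])
solutions = np.stack(solutions)
prior_mean = solutions.mean(axis=0)
prior_var = solutions.var(axis=0).mean()

# Meta-testing: adapt to a new task from only 3 examples via MAP estimation,
# i.e. ridge regression toward the learned prior mean.
w_new = prior_mean_true + 0.3 * rng.standard_normal(d)
X_few = rng.standard_normal((3, d))
y_few = X_few @ w_new + 0.1 * rng.standard_normal(3)
noise_var = 0.01
lam = noise_var / prior_var
w_map = np.linalg.solve(X_few.T @ X_few + lam * np.eye(d),
                        X_few.T @ y_few + lam * prior_mean)

# Baseline: maximum likelihood with no prior (underdetermined with 3 < 5 examples).
w_mle = np.linalg.lstsq(X_few, y_few, rcond=None)[0]

print("MAP error:", np.linalg.norm(w_map - w_new))
print("MLE error:", np.linalg.norm(w_mle - w_new))
```

Because the new task has fewer examples than parameters, the no-prior solution is underdetermined, while the learned prior pins down the remaining directions; richer methods in the literature replace the Gaussian with expressive learned distributions.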