PAC-Bayes Bounds
PAC-Bayes bounds provide a theoretical framework for analyzing the generalization of machine learning models: they quantify the gap between a model's performance on training data and its expected performance on unseen data. Current research focuses on tightening these bounds, for example through novel divergences beyond the Kullback-Leibler divergence, and on applying them to diverse settings such as high-dimensional quantile prediction, inverse reinforcement learning, and bandit problems. Tighter PAC-Bayes bounds matter because they yield more reliable guarantees on model performance, which in turn supports better algorithm design and more trustworthy predictions across applications.
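For concreteness, one widely used form of such a bound (a McAllester-style square-root relaxation of Maurer's result, stated here under the assumption of a loss bounded in [0, 1]) reads: for any data-independent prior P over hypotheses and any δ ∈ (0, 1), with probability at least 1 − δ over an i.i.d. sample S of size n, simultaneously for all posteriors Q,

\[
\mathbb{E}_{h \sim Q}\!\left[R(h)\right] \;\le\; \mathbb{E}_{h \sim Q}\!\left[\widehat{R}_S(h)\right] \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\big(2\sqrt{n}\,/\,\delta\big)}{2n}},
\]

where R denotes the true risk, \widehat{R}_S the empirical risk on S, and \mathrm{KL}(Q \,\|\, P) the Kullback-Leibler divergence between posterior and prior. The KL term is precisely what the work on novel divergences replaces, for example with Rényi or f-divergences, in pursuit of tighter or more broadly applicable guarantees.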