Sample Compression
Sample compression in machine learning aims to represent a learned model using only a small subset of its training data, which reduces storage and, importantly, yields generalization guarantees that scale with the size of the compressed set. Current research focuses on extending sample compression beyond binary classification to real-valued losses and multi-class problems, investigating its relationship to other learning principles such as differential privacy and uniform convergence, and exploring its application in diverse areas like kernel methods and boosting algorithms. These efforts seek to establish fundamental limits on compressibility and to develop efficient compression schemes for a range of model types, ultimately improving the scalability and robustness of machine learning systems.
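To make the idea concrete, the sketch below shows a standard textbook compression scheme (not the method of any particular paper listed here): axis-aligned rectangle classifiers in the plane can be compressed to at most four training points, since keeping the extreme positive examples in each coordinate is enough to reconstruct a consistent rectangle. By the classic Littlestone-Warmuth argument, a consistent compression scheme of size k on m i.i.d. examples generalizes with error on the order of (k log m)/m. The function names compress and reconstruct are illustrative, not drawn from any library.

# A minimal sketch of a sample compression scheme: axis-aligned rectangles
# in the plane. compress() keeps at most four positive examples (the extremes
# in each coordinate); reconstruct() returns the tightest bounding box.

from typing import Callable, List, Tuple

Point = Tuple[float, float]
LabeledPoint = Tuple[Point, int]  # ((x, y), label in {0, 1})


def compress(sample: List[LabeledPoint]) -> List[Point]:
    """Keep at most 4 positive points: the min/max in x and min/max in y."""
    positives = [p for p, y in sample if y == 1]
    if not positives:
        return []
    keep = {
        min(positives, key=lambda p: p[0]),
        max(positives, key=lambda p: p[0]),
        min(positives, key=lambda p: p[1]),
        max(positives, key=lambda p: p[1]),
    }
    return list(keep)


def reconstruct(kept: List[Point]) -> Callable[[Point], int]:
    """Return a classifier: label 1 inside the bounding box of the kept points."""
    if not kept:
        return lambda p: 0
    xs = [p[0] for p in kept]
    ys = [p[1] for p in kept]
    lo_x, hi_x, lo_y, hi_y = min(xs), max(xs), min(ys), max(ys)

    def h(p: Point) -> int:
        return int(lo_x <= p[0] <= hi_x and lo_y <= p[1] <= hi_y)

    return h


# Whenever the sample is realizable by some axis-aligned rectangle, the
# classifier rebuilt from the (at most 4) kept points labels every training
# point correctly -- exactly the consistency the generalization bound needs.
sample = [((1, 1), 1), ((3, 2), 1), ((2, 4), 1), ((0, 0), 0), ((5, 5), 0)]
h = reconstruct(compress(sample))
assert all(h(p) == y for p, y in sample)

The key design point is that the compression set alone determines the hypothesis: the reconstruction function never sees the rest of the sample, which is what allows the union-bound argument over small subsets to go through.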