Replicable Learning
Replicable learning aims to design algorithms that, with high probability, produce the same output when run on two independent samples drawn from the same distribution, typically by sharing internal randomness across runs; this goal directly addresses the reproducibility crisis in machine learning and statistics. Current research focuses on adapting existing algorithms (such as SGD and value iteration) to guarantee replicability, characterizing the computational and statistical costs of this constraint, and developing relaxations such as list replicability and certificate replicability to manage the inherent trade-offs between replicability and accuracy. This work is crucial for enhancing the reliability and trustworthiness of machine learning models, improving the generalizability of scientific findings, and fostering greater confidence in AI applications.
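To make the shared-randomness idea concrete, here is a minimal sketch of one standard technique, replicable mean estimation via randomized rounding to a randomly shifted grid. The function name and parameters are illustrative, not from any specific paper: two runs on different samples share a seed, draw the same grid offset, and therefore return the identical rounded value unless their empirical means straddle a grid boundary, which happens with probability roughly proportional to the difference of the means divided by the grid width.

```python
import random

def replicable_mean(samples, grid_width=0.1, shared_seed=0):
    """Estimate the mean replicably via randomized rounding.

    Two runs with the same shared_seed (the shared internal
    randomness) draw the same grid offset, so they return the
    same value whenever their empirical means fall in the same
    grid cell -- which holds with high probability when the
    samples come from the same distribution.
    """
    rng = random.Random(shared_seed)
    # Shared randomness: a uniformly random shift of the rounding grid.
    offset = rng.uniform(0, grid_width)
    # The empirical mean varies slightly between samples...
    mean = sum(samples) / len(samples)
    # ...but rounding to the nearest point of the shifted grid
    # collapses nearby means to the identical output.
    return round((mean - offset) / grid_width) * grid_width + offset
```

Two runs on independent samples with means 0.300 and 0.302, say, disagree only if the shared offset places a grid boundary between them, an event of probability about 0.02 at grid width 0.1; shrinking the grid width improves accuracy at the cost of a higher disagreement probability, which is exactly the replicability-accuracy trade-off the summary above refers to.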