Replicable Algorithm

Replicable algorithms are computational methods designed to produce, with high probability, identical outputs when run on different datasets drawn from the same underlying distribution, typically by sharing internal randomness across runs; they are motivated by the reproducibility crisis in science. Current research develops and analyzes such algorithms for machine learning tasks including clustering, bandit problems, and reinforcement learning, often drawing on techniques from differential privacy and statistical query learning. Reliable replicability enhances the verifiability of scientific findings and the trustworthiness of machine learning models in practical applications.
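To make the idea concrete, below is a minimal sketch of one standard trick, randomized rounding with shared randomness, applied to mean estimation. Two runs that share a seed derive the same randomly shifted grid, so even though they see different samples, their noisy empirical means usually snap to the same grid point. The function name, parameters, and grid-width choice are illustrative assumptions, not any specific paper's algorithm.

```python
import random

def replicable_mean(samples, grid_width, seed):
    """Estimate a mean replicably via randomized rounding (sketch).

    Two runs that share `seed` return the *same* rounded estimate with
    high probability, even on different samples from the same
    distribution, provided `grid_width` comfortably exceeds the
    sampling error. Hypothetical illustration, not a published method.
    """
    empirical = sum(samples) / len(samples)
    # Shared randomness: both runs derive the identical grid offset.
    offset = random.Random(seed).uniform(0, grid_width)
    # Snap the empirical mean to the randomly shifted grid.
    return round((empirical - offset) / grid_width) * grid_width + offset
```

The random offset is what makes this work: with a fixed grid, an adversarial distribution could place its mean exactly on a cell boundary, so two runs would disagree about half the time; shifting the grid by shared randomness makes that bad event unlikely for every distribution.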

Papers