Tight Guarantee
Tight-guarantee research develops methods that provide strong, mathematically proven bounds on the performance of machine learning models, particularly in safety-critical applications. Current efforts concentrate on improving the tightness of these guarantees for various models, including Bayesian neural networks and reinforcement learning agents, often employing techniques such as Wasserstein distances and novel coefficient measures to quantify uncertainty and model complexity. This work is crucial for increasing the reliability and trustworthiness of AI systems, enabling their deployment in high-stakes domains where predictable performance is paramount.
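To make the Wasserstein-distance idea concrete, here is a minimal sketch (an illustrative helper, not taken from any of the papers below): for two one-dimensional empirical distributions with equally many samples, the 1-Wasserstein distance reduces to the mean absolute difference between the sorted samples.

```python
def wasserstein_1d(xs, ys):
    # Hypothetical helper for illustration only. Assumes equal sample
    # counts, in which case the optimal transport plan in 1-D simply
    # matches the i-th smallest sample of xs to the i-th smallest of ys.
    assert len(xs) == len(ys), "equal sample counts assumed"
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Shifting every sample by a constant c moves the distance by exactly |c|;
# identical samples are at distance 0.
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # → 1.0
print(wasserstein_1d([5.0, 7.0], [5.0, 7.0]))            # → 0.0
```

In guarantee-style analyses, a bound on such a distance between a model's predictive distribution and the data distribution is what lets one convert distributional closeness into a performance bound.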