Novel Certification
Novel certification methods for machine learning (ML) aim to provide verifiable guarantees about model behavior, addressing concerns about robustness, bias, and trustworthiness across diverse applications. Current research develops mathematically sound certificates for both the training and inference phases, using techniques such as randomized smoothing and hierarchical certification, and extends them to bias in large language models and to the challenges of federated learning. This work is crucial for building trust in AI systems deployed in safety-critical domains such as aviation and autonomous driving, and for ensuring fairness and accountability in broader applications.
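As a concrete illustration of one technique named above, the following Python sketch shows the Monte Carlo certification step of randomized smoothing in the style of Cohen et al. (2019). It is a simplified sketch under stated assumptions, not an implementation from any paper in this collection: the function name, the `base_classifier` callable, and all parameter defaults (`sigma`, `n_samples`, `alpha`) are illustrative.

```python
import numpy as np
from scipy.stats import norm, binomtest

def certify_randomized_smoothing(base_classifier, x, sigma=0.25,
                                 n_samples=1000, alpha=0.001):
    """Certify an L2 robustness radius via randomized smoothing.

    Simplified sketch in the style of Cohen et al. (2019): the smoothed
    classifier predicts the class most likely under Gaussian input noise,
    and the certified radius follows from a lower confidence bound on
    that class's probability.
    """
    # Sample base-classifier predictions under Gaussian noise.
    counts = {}
    for _ in range(n_samples):
        noisy = x + sigma * np.random.randn(*x.shape)
        label = base_classifier(noisy)
        counts[label] = counts.get(label, 0) + 1
    top_class, top_count = max(counts.items(), key=lambda kv: kv[1])

    # One-sided (1 - alpha) lower confidence bound on
    # P(f(x + eps) = top_class) via the Clopper-Pearson interval.
    # (A faithful implementation selects the top class from a separate
    # sample so the bound stays valid; this sketch reuses one sample.)
    p_lower = binomtest(top_count, n_samples).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact").low

    if p_lower <= 0.5:
        return None, 0.0  # abstain: the prediction cannot be certified
    # Certified L2 radius: R = sigma * Phi^{-1}(p_lower).
    return top_class, float(sigma * norm.ppf(p_lower))

# Hypothetical usage with a toy base classifier (sign of first feature).
if __name__ == "__main__":
    clf = lambda z: int(z[0] > 0.0)
    label, radius = certify_randomized_smoothing(clf, np.array([1.5, -0.2]))
    print(label, radius)  # e.g. 1 with a positive certified radius
```

The radius formula R = sigma * Phi^{-1}(p_lower) is the simplified form of the Cohen et al. bound obtained by taking the runner-up class probability as 1 - p_lower; any base classifier returning hashable labels can be plugged in unchanged.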