Verifiable Learning
Verifiable learning aims to produce machine learning models whose training processes and outputs can be rigorously verified, addressing concerns about model trustworthiness and intellectual property protection. Current research focuses on verifiable training protocols for a range of model architectures, including decision trees, boosted ensembles, and deep neural networks, typically relying on techniques such as cryptographic proofs, certified error bounds, and concept-based reasoning to ensure accuracy and prevent malicious manipulation. The field is central to building trust in AI systems deployed in sensitive settings, from scientific modeling to industrial IoT, and to establishing robust mechanisms for protecting model ownership.
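One common building block behind such protocols is a cryptographic commitment to the training trajectory: the trainer publishes a hash chain over successive model checkpoints, and an auditor who re-executes the (deterministic) training run can confirm that each step was computed as claimed. The sketch below is a minimal illustration of this idea only, not the protocol of any specific paper; the toy train_step update, the per-step commitment granularity, and the function names are assumptions made for the example.

import hashlib
import struct
from typing import List

def serialize(weights: List[float]) -> bytes:
    """Canonical, order- and precision-stable byte encoding of the weights."""
    return b"".join(struct.pack(">d", w) for w in weights)

def chain(prev_digest: bytes, weights: List[float]) -> bytes:
    """Extend the hash chain: h_t = SHA-256(h_{t-1} || weights_t)."""
    return hashlib.sha256(prev_digest + serialize(weights)).digest()

def train_step(weights: List[float], example: float, lr: float = 0.1) -> List[float]:
    """Hypothetical deterministic update, standing in for a real optimizer step."""
    return [w - lr * (w - example) for w in weights]

def train_with_transcript(init: List[float], data: List[float]):
    """Trainer: run training, committing to every checkpoint along the way."""
    digest = chain(b"", init)
    weights = init
    for x in data:
        weights = train_step(weights, x)
        digest = chain(digest, weights)
    return weights, digest

def verify_transcript(init: List[float], data: List[float], claimed: bytes) -> bool:
    """Auditor: deterministically replay the run and compare final digests."""
    _, digest = train_with_transcript(init, data)
    return digest == claimed

if __name__ == "__main__":
    init, data = [0.0, 1.0], [0.5, 0.25, 0.75]
    _, commitment = train_with_transcript(init, data)
    assert verify_transcript(init, data, commitment)                   # honest run verifies
    assert not verify_transcript(init, [0.5, 0.25, 0.9], commitment)  # tampered data fails
    print("commitment:", commitment.hex())

Replaying the full run makes verification as expensive as training; the succinct cryptographic proofs mentioned above exist precisely to let the auditor check the same commitment without re-executing every step.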
Papers
Twelve papers, dated May 5, 2023 through October 29, 2024.