Verifiable Learning

Verifiable learning aims to create machine learning models whose training processes and outputs can be rigorously verified, addressing concerns about model trustworthiness and intellectual property protection. Current research develops verifiable training protocols for a range of model architectures, including decision trees, boosted ensembles, and deep neural networks, often employing techniques such as cryptographic proofs, certified error bounds, and concept-based reasoning to ensure accuracy and prevent malicious manipulation of the training process. The field is crucial for building trust in AI systems deployed in sensitive applications, from scientific modeling to industrial IoT, and for establishing robust mechanisms for protecting model ownership.
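One of the simplest cryptographic ingredients behind such protocols is a commitment to the training trajectory: the trainer publishes a digest that binds every logged checkpoint, and a verifier can later recompute the digest to detect tampering. The sketch below illustrates the idea with a SHA-256 hash chain; the function names (`commit_run`, `verify_run`) and the per-epoch log format are illustrative assumptions, not drawn from any specific paper.

```python
import hashlib
import json

def commit_step(checkpoint: dict, prev_digest: str) -> str:
    """Chain each checkpoint digest to the previous one, so no entry in the
    training trajectory can be altered or reordered after the fact."""
    payload = json.dumps(checkpoint, sort_keys=True).encode()
    return hashlib.sha256(prev_digest.encode() + payload).hexdigest()

def commit_run(checkpoints: list[dict]) -> str:
    """Fold the whole trajectory into a single published digest."""
    digest = "genesis"  # illustrative initial tag for the chain
    for ckpt in checkpoints:
        digest = commit_step(ckpt, digest)
    return digest

def verify_run(checkpoints: list[dict], claimed_digest: str) -> bool:
    """A verifier replays the chain over the claimed log and compares digests."""
    return commit_run(checkpoints) == claimed_digest

# Example: the prover logs per-epoch summaries and publishes the root digest.
log = [{"epoch": 1, "loss": 0.92}, {"epoch": 2, "loss": 0.61}]
root = commit_run(log)
print(verify_run(log, root))                                    # honest log
print(verify_run([{"epoch": 1, "loss": 0.10}, log[1]], root))   # tampered log
```

Real verifiable-training schemes go further (e.g. proving that each checkpoint actually resulted from a gradient step on committed data), but they typically build on this same commit-then-verify pattern.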

Papers