Novel Certification

Novel certification methods for machine learning (ML) aim to provide verifiable guarantees about model behavior, addressing concerns about robustness, bias, and trustworthiness across applications. Current research focuses on mathematically sound certificates for both the training and inference phases, spanning techniques such as randomized smoothing and hierarchical approaches, and extending these to biases in large language models and the challenges of federated learning. This work is crucial for building trust in AI systems deployed in safety-critical domains such as aviation and autonomous driving, and for ensuring fairness and accountability in broader applications.
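Of the techniques named above, randomized smoothing is the most self-contained to illustrate: a base classifier is queried on Gaussian perturbations of the input, the majority vote defines the smoothed classifier, and a lower confidence bound on the top-class probability yields a certified L2 robustness radius (R = sigma * Phi^-1(p_A) in the Cohen et al., 2019 formulation). The sketch below is illustrative only; the `certify` function, the toy base classifier, and the Hoeffding bound (a simplification of the exact Clopper-Pearson bound used in practice) are assumptions of this example, not any specific paper's implementation.

```python
import numpy as np
from scipy.stats import norm


def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001, seed=0):
    """Randomized-smoothing prediction with a certified L2 radius.

    base_classifier maps an input array to an integer label; the smoothed
    classifier takes a majority vote over n Gaussian perturbations of x.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    votes = np.bincount([base_classifier(x + eps) for eps in noise])
    top = int(np.argmax(votes))
    # Hoeffding lower confidence bound on the top-class probability
    # (a simplification; practical certifiers use Clopper-Pearson).
    p_lb = votes[top] / n - np.sqrt(np.log(1.0 / alpha) / (2.0 * n))
    if p_lb <= 0.5:
        return top, 0.0  # abstain: no robustness certificate
    # Certified L2 radius R = sigma * Phi^{-1}(p_lb).
    return top, float(sigma * norm.ppf(p_lb))


# Toy base classifier (an assumption for the demo): sign of the coordinate sum.
f = lambda z: int(z.sum() > 0)
label, radius = certify(f, np.ones(2))
```

The guarantee is that the smoothed classifier's prediction is provably constant within an L2 ball of radius `radius` around `x`, at confidence 1 - alpha; larger `sigma` widens the certifiable radius at the cost of base-classifier accuracy on noisier inputs.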

Papers