Lipschitz Learning
Lipschitz learning focuses on developing machine learning models with bounded Lipschitz constants, ensuring that small input changes lead to proportionally small output changes, thereby improving robustness and fairness. Current research investigates this concept across various architectures, including graph neural networks and ensembles, exploring how to efficiently compute Lipschitz bounds and leverage them for certified robustness against adversarial examples and for stable behavior under biased data. This research is significant because it offers a principled approach to enhancing the reliability and trustworthiness of machine learning models, particularly in applications requiring stability and fairness guarantees.
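To make the idea concrete, here is a minimal sketch (not drawn from any of the cited papers) of the most common construction: upper-bounding the L2 Lipschitz constant of a small ReLU network by the product of its layers' spectral norms, then converting that bound into a certified robustness radius. The network sizes, helper names, and the margin-based certificate margin / (2 * L) are illustrative assumptions.

```python
# Minimal sketch, assuming a feedforward ReLU network and an L2 threat model.
# The product-of-spectral-norms bound is valid (though often loose) because
# ReLU is 1-Lipschitz; tighter bounds are an active research topic.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(64, 16)) / np.sqrt(16)
W2 = rng.normal(size=(10, 64)) / np.sqrt(64)


def forward(x):
    """Logits of the toy network for a single input vector x."""
    return W2 @ np.maximum(W1 @ x, 0.0)


def lipschitz_upper_bound(*weights):
    """Global L2 Lipschitz upper bound: product of layer spectral norms."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))


def certified_radius(x, L):
    """L2 radius within which the predicted class provably cannot change.

    If every logit is L-Lipschitz, a perturbation of norm eps can shrink the
    top-vs-runner-up margin by at most 2 * L * eps, so any eps below
    margin / (2 * L) is certified.
    """
    logits = np.sort(forward(x))
    margin = logits[-1] - logits[-2]
    return margin / (2.0 * L)


x = rng.normal(size=16)
L = lipschitz_upper_bound(W1, W2)
print(f"Lipschitz upper bound: {L:.3f}")
print(f"Certified L2 radius at x: {certified_radius(x, L):.5f}")
```

Training-time Lipschitz learning methods typically constrain or penalize exactly this kind of bound (e.g., by normalizing each layer's spectral norm) so that the certified radius stays non-trivial; the version above only measures it after the fact.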