Collective Robustness
Collective robustness focuses on certifying the simultaneous correctness of multiple predictions made by a single model on a shared input, unlike traditional methods that certify each prediction independently. Current research emphasizes developing efficient algorithms, often based on linear programming relaxations or randomized smoothing, to compute these collective certificates, particularly for graph neural networks and for models whose predictions depend only on local (or softly local) parts of the input. This research matters because it yields stronger guarantees against adversarial attacks, improving the reliability and trustworthiness of predictions in applications such as node classification and image segmentation.
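To make the contrast with per-prediction certification concrete, here is a minimal illustrative sketch (not any specific published method). It assumes a simple locality model: each prediction depends on a known receptive field of input entries and is broken only if at least some threshold of those entries is perturbed, while the attacker has a shared budget of entries to perturb. A collective certificate lower-bounds how many predictions remain simultaneously correct; because the attacker must split one budget across overlapping receptive fields, this bound can exceed the naive count of individually certified predictions. The brute-force enumeration stands in for the LP relaxations used in practice.

```python
from itertools import combinations


def collective_certificate(receptive_fields, thresholds, budget, n_inputs):
    """Lower bound on the number of simultaneously correct predictions
    when one attacker may perturb at most `budget` input entries.

    receptive_fields: list of sets of input indices each prediction reads
    thresholds: prediction i is broken only if at least thresholds[i]
                entries of its receptive field are perturbed
    """
    worst_broken = 0
    # Brute-force over all attacker choices (exponential; LP relaxations
    # replace this loop in the actual literature).
    for k in range(budget + 1):
        for attacked in combinations(range(n_inputs), k):
            attacked = set(attacked)
            broken = sum(
                len(rf & attacked) >= t
                for rf, t in zip(receptive_fields, thresholds)
            )
            worst_broken = max(worst_broken, broken)
    return len(receptive_fields) - worst_broken


# Three predictions with overlapping receptive fields over four inputs.
fields = [{0, 1}, {1, 2}, {2, 3}]
thresholds = [1, 1, 1]

# Naive per-prediction certification: each prediction faces the full
# budget alone, so with budget 1 and threshold 1 nothing is certified.
naive = sum(t > 1 for t in thresholds)

# Collective view: one perturbed input hits at most two receptive
# fields, so at least one prediction is guaranteed to survive.
collective = collective_certificate(fields, thresholds, budget=1, n_inputs=4)
print(naive, collective)  # → 0 1
```

The gap between the two numbers is exactly what collective certificates capture: the attacker cannot spend its full budget against every prediction at once.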