Group Robustness
Group robustness in machine learning focuses on developing models that perform well across all subgroups of a dataset, mitigating biases that stem from spurious correlations or the underrepresentation of certain groups. Current research emphasizes techniques such as multi-norm training, which improves robustness to several types of data perturbation at once, and methods that infer group labels or reweight training data to raise worst-group performance, often via last-layer retraining or tree-based models. The field is crucial for ensuring fairness and reliability in real-world applications, particularly in high-stakes domains where performance must be consistent across diverse populations.
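As a rough illustration of the reweighting and last-layer-retraining ideas mentioned above, the sketch below upweights examples from rare groups so every group contributes equally to the loss, then fits a weighted logistic regression on frozen features. This is a minimal sketch, not any specific paper's method: the function names (`group_balanced_weights`, `retrain_last_layer`, `worst_group_accuracy`) and the plain gradient-descent trainer are assumptions introduced here for illustration.

```python
import numpy as np

def group_balanced_weights(groups):
    """Weight each example inversely to its group's frequency,
    so every group contributes equally to the training loss."""
    groups = np.asarray(groups)
    uniq, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(uniq, counts))
    w = np.array([1.0 / freq[g] for g in groups])
    return w * len(groups) / w.sum()  # normalize to mean 1

def retrain_last_layer(feats, labels, weights, lr=0.1, epochs=200):
    """Weighted logistic regression on frozen features -- a minimal
    stand-in for retraining only the last layer of a network."""
    n, d = feats.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid
        grad = weights * (p - labels)                # per-example scale
        w -= lr * (feats.T @ grad) / n
        b -= lr * grad.mean()
    return w, b

def worst_group_accuracy(feats, labels, groups, w, b):
    """Minimum accuracy over groups -- the metric group-robust
    methods aim to maximize."""
    preds = (feats @ w + b > 0).astype(int)
    return min(np.mean(preds[groups == g] == labels[groups == g])
               for g in np.unique(groups))
```

In practice `feats` would be the penultimate-layer activations of a trained network; evaluating `worst_group_accuracy` rather than average accuracy is what distinguishes the group-robust objective.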