Group Robustness
Group robustness in machine learning focuses on developing models that perform well across all subgroups within a dataset, mitigating biases that stem from spurious correlations or from the underrepresentation of certain groups. Current research emphasizes techniques such as multi-norm training to improve robustness against different types of data perturbations, along with methods that infer group labels or reweight training data to improve worst-group performance, often via last-layer retraining or tree-based models. This work is crucial for ensuring fairness and reliability in real-world applications, particularly in high-stakes domains where model performance must be consistent across diverse populations.
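To make the reweighting and last-layer-retraining ideas concrete, here is a minimal sketch (not taken from any specific paper above) of a deep-feature-reweighting-style procedure: retrain only a linear head on a group-balanced subset of frozen embeddings, then report worst-group accuracy. The synthetic `features`, `labels`, and `groups` tensors are placeholder assumptions; in practice the features come from a frozen backbone and the groups from annotated group labels.

```python
# Sketch: last-layer retraining on a group-balanced subset, plus
# worst-group accuracy evaluation. Data below is synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d, n_groups = 1000, 32, 4
features = torch.randn(n, d)               # assumed frozen-backbone embeddings
labels = torch.randint(0, 2, (n,))         # binary task labels
groups = torch.randint(0, n_groups, (n,))  # assumed group annotations

# Group-balanced index set: the same number of examples per group.
per_group = min(int((groups == g).sum()) for g in range(n_groups))
balanced_idx = torch.cat([
    torch.nonzero(groups == g, as_tuple=False).squeeze(1)[:per_group]
    for g in range(n_groups)
])

# Retrain only the last (linear) layer on the balanced subset.
head = nn.Linear(d, 2)
opt = torch.optim.SGD(head.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(head(features[balanced_idx]), labels[balanced_idx])
    loss.backward()
    opt.step()

# Worst-group accuracy: the minimum accuracy over all groups.
with torch.no_grad():
    preds = head(features).argmax(dim=1)
    accs = [(preds[groups == g] == labels[groups == g]).float().mean().item()
            for g in range(n_groups)]
print("per-group accuracy:", [round(a, 3) for a in accs])
print("worst-group accuracy:", round(min(accs), 3))
```

The same worst-group metric applies unchanged when the group-balanced subset is replaced by per-group loss reweighting or by groups inferred from a reference model's errors.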