Group Robustness

Group robustness in machine learning focuses on developing models that perform well across all subgroups within a dataset, mitigating biases that stem from spurious correlations or from the underrepresentation of certain groups. Current research emphasizes techniques such as multi-norm training, which hardens models against several types of data perturbation simultaneously, and methods that infer group labels or reweight training data to improve worst-group performance, often via last-layer retraining or tree-based models. This field is crucial for ensuring fairness and reliability in real-world applications, particularly in high-stakes domains where model performance must be consistent across diverse populations.
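
To make the worst-group objective concrete, here is a minimal sketch of group-balanced last-layer retraining, in the spirit of methods such as Deep Feature Reweighting: a frozen backbone supplies embeddings, only the final linear classifier is refit on a subsample in which every group is equally represented, and the model is then scored by its worst-group accuracy. The arrays `X`, `y`, and `g` are toy placeholders standing in for real embeddings, class labels, and group labels.

```python
# Hedged sketch: group-balanced last-layer retraining plus worst-group
# evaluation. All data below is synthetic; in practice X would hold
# frozen-backbone features and g the (inferred or annotated) group labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins for embeddings, class labels, and group labels.
n, d = 2000, 32
X = rng.normal(size=(n, d))          # frozen-backbone features
y = rng.integers(0, 2, size=n)       # class labels
g = rng.integers(0, 4, size=n)       # groups (e.g., class x spurious attribute)

def group_balanced_indices(groups, rng):
    """Subsample so every group contributes equally many examples."""
    ids_per_group = [np.flatnonzero(groups == k) for k in np.unique(groups)]
    m = min(len(ids) for ids in ids_per_group)
    return np.concatenate([rng.choice(ids, size=m, replace=False)
                           for ids in ids_per_group])

# Retrain only the last (linear) layer on the group-balanced subset.
idx = group_balanced_indices(g, rng)
clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])

# Worst-group accuracy: the metric group-robustness methods aim to raise.
accs = {int(k): clf.score(X[g == k], y[g == k]) for k in np.unique(g)}
print("per-group accuracy:", accs)
print("worst-group accuracy:", min(accs.values()))
```

The balanced subsample is one simple way to reweight; the same evaluation loop applies unchanged to methods that instead infer group labels or upweight high-loss examples.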

Papers