Paper ID: 2307.05284
On the Need of a Modeling Language for Distribution Shifts: Illustrations on Tabular Datasets
Jiashuo Liu, Tianyu Wang, Peng Cui, Hongseok Namkoong
Different distribution shifts require different interventions, and algorithms must be grounded in the specific shifts they address. However, methodological development of "robust" methods typically relies on structural assumptions that lack empirical validation. Advocating for an empirically grounded, inductive approach to research, we build an empirical testbed comprising natural shifts across 5 tabular datasets and 60,000 method configurations encompassing imbalanced learning and distributionally robust optimization (DRO) methods. We find that $Y|X$-shifts are most prevalent on our testbed, in stark contrast to the heavy focus on $X$ (covariate)-shifts in the ML literature. The performance of "robust" methods varies significantly across shift types and is no better than that of vanilla methods. To understand why, we conduct an in-depth empirical analysis of DRO methods and find that implementation details often neglected by researchers, such as the choice of underlying model class (e.g., XGBoost) and hyperparameter selection, have a greater impact on performance than the ambiguity set or its radius. To further bridge the gap between methodological research and practice, we design case studies illustrating how such a refined, inductive understanding of distribution shifts can enhance both data-centric and algorithmic interventions.
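Note on terminology (our gloss; the decomposition is standard but not spelled out in the abstract): factoring the joint distribution as $P(X, Y) = P(X)\,P(Y \mid X)$, an $X$ (covariate) shift changes only the marginal, $P_{\text{train}}(X) \neq P_{\text{test}}(X)$ while $P_{\text{train}}(Y \mid X) = P_{\text{test}}(Y \mid X)$, whereas a $Y|X$ (concept) shift changes the conditional itself, $P_{\text{train}}(Y \mid X) \neq P_{\text{test}}(Y \mid X)$, so the same features can map to different outcome distributions across environments.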
Submitted: Jul 11, 2023