Shift
"Shift," in machine learning and related fields, broadly refers to changes in data distribution between training and testing phases, impacting model performance and robustness. Current research focuses on detecting and mitigating these shifts, exploring methods like distributionally robust classifiers, adaptive regularization techniques for sparse networks, and novel model architectures (e.g., Vision Transformers adapted with shift operations) to improve generalization. Understanding and addressing distribution shifts is crucial for building reliable and dependable machine learning systems across diverse applications, from autonomous driving to medical diagnosis, ensuring consistent performance in real-world scenarios.
Papers
SafeShift: Safety-Informed Distribution Shifts for Robust Trajectory Prediction in Autonomous Driving
Benjamin Stoler, Ingrid Navarro, Meghdeep Jana, Soonmin Hwang, Jonathan Francis, Jean Oh
Distributionally Robust Post-hoc Classifiers under Prior Shifts
Jiaheng Wei, Harikrishna Narasimhan, Ehsan Amid, Wen-Sheng Chu, Yang Liu, Abhishek Kumar