Domain Shift
Domain shift, the discrepancy between training and deployment data distributions, significantly degrades the performance of machine learning models. Current research focuses on robust algorithms and architectures, such as U-Nets, Swin Transformers, and diffusion models, that mitigate the problem through techniques like distribution alignment, adversarial training, and knowledge distillation. These efforts are crucial for improving reliability and generalization across diverse real-world applications, particularly medical imaging, autonomous driving, and natural language processing, where data heterogeneity is common. The ultimate goal is models that generalize effectively to unseen data, reducing the need for extensive retraining and improving the practical impact of AI systems.
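As a concrete illustration of the distribution-alignment technique mentioned above, one common recipe penalizes a kernel maximum mean discrepancy (MMD) between source-domain and target-domain feature batches during training. The PyTorch sketch below is a minimal, illustrative version of that idea, not the method of any paper listed here; the function name `rbf_mmd`, the fixed bandwidth `sigma`, and the random placeholder features are assumptions made for the example.

```python
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Biased estimate of squared MMD between two feature batches
    using an RBF kernel; small values mean the two feature
    distributions are well aligned. `sigma` is an illustrative
    fixed bandwidth, not a tuned value.
    """
    def kernel(a, b):
        # Pairwise squared Euclidean distances -> RBF kernel matrix.
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))

    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Placeholder features standing in for encoder outputs on a labeled
# source batch and an unlabeled target batch.
src_feats = torch.randn(64, 128)
tgt_feats = torch.randn(64, 128)

# Added to the task loss as an auxiliary term, this pulls the two
# feature distributions together, reducing the effect of domain shift.
alignment_loss = rbf_mmd(src_feats, tgt_feats)
print(alignment_loss.item())
```

In practice the bandwidth is often set from the median pairwise distance, and the alignment term is weighted against the supervised loss.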
Papers
CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning
James Seale Smith, Leonid Karlinsky, Vyshnavi Gutta, Paola Cascante-Bonilla, Donghyun Kim, Assaf Arbelle, Rameswar Panda, Rogerio Feris, Zsolt Kira
Robust Mean Teacher for Continual and Gradual Test-Time Adaptation
Mario Döbler, Robert A. Marsden, Bin Yang
PARTNR: Pick and place Ambiguity Resolving by Trustworthy iNteractive leaRning
Jelle Luijkx, Zlatan Ajanovic, Laura Ferranti, Jens Kober
HMOE: Hypernetwork-based Mixture of Experts for Domain Generalization
Jingang Qu, Thibault Faney, Ze Wang, Patrick Gallinari, Soleiman Yousef, Jean-Charles de Hemptinne