Unseen Domain
Unseen domain generalization in machine learning focuses on developing models that remain robust to data distributions not encountered during training, with the goal of improved generalization and reliability in real-world applications. Current research emphasizes techniques such as test-time adaptation, meta-learning, and the use of multimodal data and novel architectures (e.g., graph neural networks, vision transformers, and state space models) to learn domain-invariant features. This capability is crucial for deploying models in diverse and unpredictable environments, particularly in safety-critical settings such as medical image analysis and autonomous driving, where retraining on new data is often impractical or impossible. Effective unseen domain generalization methods are therefore vital for advancing the trustworthiness and applicability of machine learning across domains.
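To make the test-time adaptation idea concrete, below is a minimal sketch of one common generic strategy: minimizing prediction entropy on unlabeled test batches while updating only the affine parameters of normalization layers (in the spirit of methods like TENT). It is an illustrative assumption-laden example, not the method of any paper listed below; the model, optimizer settings, and data are hypothetical stand-ins.

```python
# Generic test-time adaptation sketch via entropy minimization.
# All names here (model, optimizer, test_batch) are hypothetical; this is
# not the approach of any specific paper listed on this page.
import torch
import torch.nn as nn
import torch.nn.functional as F

def configure_model_for_tta(model: nn.Module):
    """Freeze all weights, then enable gradients only on the affine
    parameters of normalization layers, which are cheap to adapt at test time."""
    model.train()  # use current-batch statistics in BatchNorm layers
    for p in model.parameters():
        p.requires_grad_(False)
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.LayerNorm)):
            for p in m.parameters():
                p.requires_grad_(True)
                params.append(p)
    return params

@torch.enable_grad()
def adapt_on_batch(model, x, optimizer):
    """One adaptation step: minimize mean prediction entropy on an
    unlabeled test batch, then return the post-update predictions."""
    logits = model(x)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return model(x).argmax(dim=1)

if __name__ == "__main__":
    # Hypothetical small CNN and a stand-in batch of shifted test data.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    )
    tta_params = configure_model_for_tta(model)
    optimizer = torch.optim.SGD(tta_params, lr=1e-3)
    test_batch = torch.randn(16, 3, 32, 32)
    preds = adapt_on_batch(model, test_batch, optimizer)
    print(preds.shape)  # torch.Size([16])
```

Restricting updates to normalization parameters keeps adaptation lightweight and less prone to catastrophic drift, which is why it is a common design choice in this family of methods.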
Papers
DomainAdaptor: A Novel Approach to Test-time Adaptation
Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
FedSIS: Federated Split Learning with Intermediate Representation Sampling for Privacy-preserving Generalized Face Presentation Attack Detection
Naif Alkhunaizi, Koushik Srivatsan, Faris Almalik, Ibrahim Almakky, Karthik Nandakumar