Multi-Domain Learning

Multi-domain learning (MDL) aims to train a single model that performs well across multiple, diverse domains (datasets or tasks), improving efficiency and generalization compared to training a separate model per domain. Current research focuses on strategies for handling domain discrepancies: adapting pre-trained models with domain-specific modules (e.g., adapters, virtual classifiers), employing novel training schemes (e.g., decoupled training, weighted learning), and designing architectures (e.g., factorized tensor networks) for efficient parameter sharing and knowledge transfer. MDL matters because it can reduce computational cost, improve data efficiency, and enhance the robustness and generalizability of AI systems across applications ranging from recommendation systems and natural language processing to computer vision and medical diagnosis.
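
To make the adapter idea concrete, the following PyTorch sketch attaches a lightweight residual bottleneck adapter and a classification head per domain to a frozen shared backbone. This is a minimal illustration of the general technique, not any specific paper's method; the class names, dimensions, and domain labels are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        # Residual connection keeps the shared backbone features intact;
        # the adapter only learns a small domain-specific correction.
        return x + self.up(self.act(self.down(x)))

class MultiDomainModel(nn.Module):
    """Shared frozen backbone with one adapter and output head per domain."""
    def __init__(self, backbone: nn.Module, feat_dim: int, domain_classes: dict):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # only adapters and heads are trained
        self.adapters = nn.ModuleDict(
            {name: Adapter(feat_dim) for name in domain_classes}
        )
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, n) for name, n in domain_classes.items()}
        )

    def forward(self, x, domain: str):
        feats = self.backbone(x)
        feats = self.adapters[domain](feats)  # route through the domain's adapter
        return self.heads[domain](feats)

# Hypothetical usage: a toy backbone and two made-up domains.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
model = MultiDomainModel(backbone, feat_dim=512,
                         domain_classes={"product": 10, "medical": 2})
logits = model(torch.randn(8, 3, 32, 32), domain="product")  # shape (8, 10)
```

Under these assumptions, each new domain adds only the adapter and head parameters (roughly 2 × feat_dim × bottleneck plus the head), which is how adapter-based MDL achieves efficient parameter sharing relative to training a full model per domain.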

Papers