Multi-Domain
Multi-domain research develops models and algorithms that handle data from diverse sources or tasks simultaneously, improving efficiency and generalization over single-domain approaches. Current efforts adapt existing architectures such as Transformers and U-Nets, using techniques like mixture-of-experts, contrastive learning, and knowledge distillation to achieve robust performance across domains. This work matters for fields such as medical imaging, natural language processing, and meteorological forecasting, where it enables more accurate and parameter-efficient models that generalize to unseen data.
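As a rough illustration of the mixture-of-experts idea mentioned above, the sketch below shows a shared backbone feeding several small expert heads whose outputs are combined by a learned gate, so domains share most parameters while specializing through the experts. This is a generic PyTorch example with made-up layer sizes and module names; it is not the method of any paper listed here.

```python
# Minimal mixture-of-experts sketch for multi-domain classification.
# All dimensions and names (DomainMoE, in_dim, num_experts, ...) are
# illustrative assumptions, not taken from the papers below.
import torch
import torch.nn as nn


class DomainMoE(nn.Module):
    """Shared encoder followed by a soft mixture of small expert heads."""

    def __init__(self, in_dim=32, hidden_dim=64, num_experts=4, num_classes=10):
        super().__init__()
        # Backbone shared across all domains.
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # Small per-expert heads that can specialize to different domains.
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_dim, num_classes) for _ in range(num_experts)]
        )
        # Gate that produces per-example mixing weights over the experts.
        self.gate = nn.Linear(hidden_dim, num_experts)

    def forward(self, x):
        h = self.backbone(x)                                      # shared features
        weights = torch.softmax(self.gate(h), dim=-1)             # (B, E)
        outs = torch.stack([e(h) for e in self.experts], dim=1)   # (B, E, C)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)          # weighted sum


if __name__ == "__main__":
    model = DomainMoE()
    x = torch.randn(8, 32)   # toy batch standing in for one domain
    logits = model(x)
    print(logits.shape)      # torch.Size([8, 10])
```

In practice, the gate may also condition on an explicit domain label rather than only on the features, trading flexibility for more predictable per-domain routing.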
Papers
Budget-Aware Pruning: Handling Multiple Domains with Less Parameters
Samuel Felipe dos Santos, Rodrigo Berriel, Thiago Oliveira-Santos, Nicu Sebe, Jurandy Almeida
Overview of AuTexTification at IberLEF 2023: Detection and Attribution of Machine-Generated Text in Multiple Domains
Areg Mikael Sarvazyan, José Ángel González, Marc Franco-Salvador, Francisco Rangel, Berta Chulvi, Paolo Rosso