Multi-Domain
Multi-domain research develops models and algorithms that handle data from diverse sources or tasks simultaneously, improving efficiency and generalization over single-domain approaches. Current efforts concentrate on adapting existing architectures such as Transformers and U-Nets, employing techniques like mixture-of-experts, contrastive learning, and knowledge distillation to achieve robust performance across domains. This work advances machine learning in fields such as medical imaging, natural language processing, and meteorological forecasting by enabling more accurate and efficient models that generalize to unseen data.
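To make one of the techniques named above concrete, the following is a minimal sketch of a mixture-of-experts layer in PyTorch. It is an illustrative assumption, not an implementation from any of the papers listed below: the class name DomainMoE, the expert sizes, and the soft softmax routing are all placeholder choices.

```python
# Minimal mixture-of-experts sketch for multi-domain feature processing.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainMoE(nn.Module):
    """Routes each sample to a weighted mix of expert MLPs.

    A learned gate can specialize experts to different domains
    without requiring explicit domain labels.
    """
    def __init__(self, dim: int, num_experts: int = 4, hidden: int = 256):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)  # per-sample routing scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim); softmax gate mixes expert outputs per sample.
        weights = F.softmax(self.gate(x), dim=-1)                       # (batch, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, E, dim)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)          # (batch, dim)

if __name__ == "__main__":
    layer = DomainMoE(dim=32)
    batch = torch.randn(8, 32)        # e.g. features pooled from mixed domains
    print(layer(batch).shape)         # torch.Size([8, 32])
```

Soft routing is used here only because it keeps the sketch dense and differentiable end to end; many large-scale systems instead use sparse top-k routing so that only a few experts run per sample.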
Papers
Bringing Inputs to Shared Domains for 3D Interacting Hands Recovery in the Wild
Gyeongsik Moon
Graph Tensor Networks: An Intuitive Framework for Designing Large-Scale Neural Learning Systems on Multiple Domains
Yao Lei Xu, Kriton Konstantinidis, Danilo P. Mandic
Multi-View Zero-Shot Open Intent Induction from Dialogues: Multi Domain Batch and Proxy Gradient Transfer
Hyukhun Koh, Haesung Pyun, Nakyeong Yang, Kyomin Jung