Joint Learning
Joint learning is a machine learning paradigm that aims to improve model performance and efficiency by training on multiple related tasks or datasets simultaneously. Current research focuses on diverse applications, including multimodal data fusion (e.g., audio-text, image-video), multi-task reinforcement learning, and distributed model training across heterogeneous devices, often employing transformer-based architectures, contrastive learning, and optimal transport methods. By leveraging shared representations and inter-task relationships, this approach yields improved accuracy, robustness, and lower computational cost across fields such as computer vision, natural language processing, and robotics.
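As a rough illustration of the shared-representation idea described above, the sketch below jointly trains a single encoder with two task-specific heads on synthetic data. The architecture, dimensions, synthetic inputs, and the 0.5 loss weighting are illustrative assumptions, not a reconstruction of any method from the papers listed here.

```python
import torch
import torch.nn as nn

# Minimal joint (multi-task) learning sketch: a shared encoder feeds two
# task-specific heads, and both task losses are optimized together so the
# shared representation benefits from inter-task signal.

class JointModel(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.cls_head = nn.Linear(hidden, n_classes)   # task A: classification
        self.reg_head = nn.Linear(hidden, 1)           # task B: regression

    def forward(self, x):
        z = self.encoder(x)                            # shared representation
        return self.cls_head(z), self.reg_head(z)

model = JointModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
cls_loss_fn, reg_loss_fn = nn.CrossEntropyLoss(), nn.MSELoss()

x = torch.randn(128, 32)                    # synthetic inputs (assumption)
y_cls = torch.randint(0, 4, (128,))         # labels for task A
y_reg = torch.randn(128, 1)                 # targets for task B

for _ in range(5):                          # a few joint training steps
    logits, preds = model(x)
    # weighted sum of task losses; 0.5 is an arbitrary illustrative weight
    loss = cls_loss_fn(logits, y_cls) + 0.5 * reg_loss_fn(preds, y_reg)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice, the relative loss weights and which layers are shared are design choices that depend on how closely the tasks are related.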
Papers
Enhance Incomplete Utterance Restoration by Joint Learning Token Extraction and Text Generation
Shumpei Inoue, Tsungwei Liu, Nguyen Hong Son, Minh-Tien Nguyen
Controllable Missingness from Uncontrollable Missingness: Joint Learning Measurement Policy and Imputation
Seongwook Yoon, Jaehyun Kim, Heejeong Lim, Sanghoon Sull