Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and performance by training a single model to handle multiple related tasks simultaneously. Current research focuses on effective strategies for sharing information across tasks, including novel architectures such as multi-expert systems and the adaptation of large language models (LLMs) to varied applications. The approach is particularly valuable when data or computational resources are limited, and it finds use in fields as diverse as medical image analysis, robotics, and online advertising, where efficiency and generalization are crucial.
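As a concrete illustration of the information sharing described above, the sketch below shows the classic hard parameter sharing pattern: one shared encoder trained jointly with lightweight task-specific heads. This is a minimal PyTorch example, not drawn from any of the listed papers; the module, task, and dimension names are hypothetical.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Hard parameter sharing: one shared trunk, one head per task."""

    def __init__(self, in_dim: int, hidden_dim: int, task_out_dims: dict[str, int]):
        super().__init__()
        # Shared encoder learns a representation common to all tasks.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # One lightweight head per task maps the shared features to task outputs.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden_dim, out_dim)
             for name, out_dim in task_out_dims.items()}
        )

    def forward(self, x: torch.Tensor) -> dict[str, torch.Tensor]:
        z = self.encoder(x)
        return {name: head(z) for name, head in self.heads.items()}

# Hypothetical usage: a regression task and a 3-class classification task
# sharing one 16-dimensional input space.
model = MultiTaskNet(in_dim=16, hidden_dim=64, task_out_dims={"reg": 1, "cls": 3})
x = torch.randn(8, 16)
y_reg = torch.randn(8, 1)
y_cls = torch.randint(0, 3, (8,))

out = model(x)
# The total loss is a (here equally) weighted sum of per-task losses, so the
# shared encoder receives gradients from every task.
loss = (nn.functional.mse_loss(out["reg"], y_reg)
        + nn.functional.cross_entropy(out["cls"], y_cls))
loss.backward()
```

Because every task's loss backpropagates through the same encoder, the tasks regularize one another, which is the usual explanation for MTL's gains in low-data regimes.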
Papers
Cooperative and Collaborative Multi-Task Semantic Communication for Distributed Sources
Ahmad Halimi Razlighi, Maximilian H. V. Tillmann, Edgar Beck, Carsten Bockelmann, Armin Dekorsy
A Multi-Task Role-Playing Agent Capable of Imitating Character Linguistic Styles
Siyuan Chen, Qingyi Si, Chenxu Yang, Yunzhi Liang, Zheng Lin, Huan Liu, Weiping Wang
Multi-Task Dynamic Pricing in Credit Market with Contextual Information
Adel Javanmard, Jingwei Ji, Renyuan Xu
Hierarchical Conditional Multi-Task Learning for Streamflow Modeling
Shaoming Xu, Arvind Renganathan, Ankush Khandelwal, Rahul Ghosh, Xiang Li, Licheng Liu, Kshitij Tayal, Peter Harrington, Xiaowei Jia, Zhenong Jin, John Nieber, Vipin Kumar
Adapt-$\infty$: Scalable Lifelong Multimodal Instruction Tuning via Dynamic Data Selection
Adyasha Maharana, Jaehong Yoon, Tianlong Chen, Mohit Bansal
Replay-and-Forget-Free Graph Class-Incremental Learning: A Task Profiling and Prompting Approach
Chaoxi Niu, Guansong Pang, Ling Chen, Bing Liu
Unified Representation of Genomic and Biomedical Concepts through Multi-Task, Multi-Source Contrastive Learning
Hongyi Yuan, Suqi Liu, Kelly Cho, Katherine Liao, Alexandre Pereira, Tianxi Cai