Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and performance by training a single model to handle multiple related tasks simultaneously. Current research focuses on effective strategies for sharing information across tasks, including novel architectures such as multi-expert systems and the adaptation of large language models (LLMs) to new applications. The approach is especially valuable when data or compute is limited, and it has found use in diverse fields such as medical image analysis, robotics, and online advertising, where efficiency and generalization are crucial.
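To make the parameter-sharing idea concrete, here is a minimal sketch of the classic hard-parameter-sharing setup: a shared trunk feeds one lightweight head per task, and a joint loss trains both. This assumes PyTorch; the layer sizes, task names, and unweighted loss sum are invented for illustration and are not taken from any paper listed below.

import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, task_out_dims: dict[str, int]):
        super().__init__()
        # Shared trunk: these parameters are reused by every task.
        self.shared = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # One small head per task on top of the shared features.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden_dim, out_dim) for name, out_dim in task_out_dims.items()}
        )

    def forward(self, x: torch.Tensor) -> dict[str, torch.Tensor]:
        features = self.shared(x)
        return {name: head(features) for name, head in self.heads.items()}

# Hypothetical tasks: a 10-way classification and a scalar regression.
model = MultiTaskNet(in_dim=32, hidden_dim=64, task_out_dims={"classify": 10, "regress": 1})
x = torch.randn(8, 32)
outputs = model(x)
# A joint (here unweighted) sum of per-task losses updates the shared trunk,
# which is where cross-task information transfer happens.
loss = nn.functional.cross_entropy(outputs["classify"], torch.randint(0, 10, (8,))) \
     + nn.functional.mse_loss(outputs["regress"].squeeze(-1), torch.randn(8))
loss.backward()

In practice the per-task losses are usually weighted or balanced dynamically, and much of the research below replaces the simple shared trunk with more structured sharing, such as mixtures of experts or task-conditioned decoders.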
441 papers
August 29, 2024
Theoretical Insights into Overparameterized Models in Multi-Task and Replay-Based Continual Learning
Amin Banayeeanzade, Mahdi Soltanolkotabi, Mohammad Rostami

Multitask learning for improved scour detection: A dynamic wave tank study
Simon M. Brealy, Aidan J. Hughes, Tina A. Dardeno, Lawrence A. Bull, Robin S. Mills, Nikolaos Dervilis, Keith Worden

Anno-incomplete Multi-dataset Detection
Yiran Xu, Haoxiang Zhong, Kai Wu, Jialin Li, Yong Liu, Chengjie Wang, Shu-Tao Xia, Hongen Liao
August 27, 2024
MTMamba++: Enhancing Multi-Task Dense Scene Understanding via Mamba-Based Decoders
Baijiong Lin, Weisen Jiang, Pengguang Chen, Shu Liu, Ying-Cong Chen

NeuroLM: A Universal Multi-task Foundation Model for Bridging the Gap between Language and EEG Signals
Wei-Bang Jiang, Yansen Wang, Bao-Liang Lu, Dongsheng Li
August 26, 2024