Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and performance by training a single model to handle multiple related tasks simultaneously. Current research focuses on effective strategies for sharing information across tasks, including novel architectures such as multi-expert systems and the adaptation of large language models (LLMs). The approach is particularly valuable when data or computational resources are limited, and it is applied in fields as diverse as medical image analysis, robotics, and online advertising, where efficiency and generalization are crucial.
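To make the idea of sharing information across tasks concrete, below is a minimal, generic sketch of hard parameter sharing in PyTorch: one shared encoder feeds several task-specific heads, and the per-task losses are summed into a joint objective. This is an illustrative example only; it is not taken from any of the papers listed here, and all names, dimensions, and loss weights are hypothetical.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Hard parameter sharing: one shared encoder, one lightweight head per task."""

    def __init__(self, input_dim: int, hidden_dim: int, task_output_dims: dict):
        super().__init__()
        # Shared representation learned jointly from all tasks.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task-specific output layers.
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden_dim, out_dim)
             for task, out_dim in task_output_dims.items()}
        )

    def forward(self, x: torch.Tensor) -> dict:
        shared = self.encoder(x)
        return {task: head(shared) for task, head in self.heads.items()}


if __name__ == "__main__":
    # Hypothetical tasks: 10-class classification and 1-dimensional regression.
    model = MultiTaskModel(input_dim=32, hidden_dim=64,
                           task_output_dims={"classify": 10, "regress": 1})
    x = torch.randn(8, 32)
    y_cls = torch.randint(0, 10, (8,))
    y_reg = torch.randn(8, 1)

    outputs = model(x)
    # Joint objective: weighted sum of per-task losses (weights chosen arbitrarily here).
    loss = nn.functional.cross_entropy(outputs["classify"], y_cls) \
         + 0.5 * nn.functional.mse_loss(outputs["regress"], y_reg)
    loss.backward()
    print(loss.item())
```

More elaborate schemes, such as the multi-expert or LoRA-based approaches mentioned above, replace the single shared encoder with routed or modular components, but the basic pattern of a joint backbone with task-specific outputs remains the same.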
Papers
TimberVision: A Multi-Task Dataset and Framework for Log-Component Segmentation and Tracking in Autonomous Forestry Operations
Daniel Steininger, Julia Simon, Andreas Trondl, Markus Murschitz
TIMRL: A Novel Meta-Reinforcement Learning Framework for Non-Stationary and Multi-Task Environments
Chenyang Qi, Huiping Li, Panfeng Huang
ConfigX: Modular Configuration for Evolutionary Algorithms via Multitask Reinforcement Learning
Hongshu Guo, Zeyuan Ma, Jiacheng Chen, Yining Ma, Zhiguang Cao, Xinglin Zhang, Yue-Jiao Gong
MoDULA: Mixture of Domain-Specific and Universal LoRA for Multi-Task Learning
Yufei Ma, Zihan Liang, Huangyu Dai, Ben Chen, Dehong Gao, Zhuoran Ran, Wang Zihan, Linbo Jin, Wen Jiang, Guannan Zhang, Xiaoyan Cai, Libin Yang
RoboMM: All-in-One Multimodal Large Model for Robotic Manipulation
Feng Yan, Fanfan Liu, Liming Zheng, Yufeng Zhong, Yiyang Huang, Zechao Guan, Chengjian Feng, Lin Ma