Multi-Task Learning
Multi-task learning (MTL) improves model efficiency and performance by training a single model to handle multiple related tasks simultaneously. Current research focuses on effective strategies for sharing information between tasks, including novel architectures such as multi-expert systems and the adaptation of large language models (LLMs) to varied applications. MTL is particularly valuable when data or compute is limited, and it is applied in diverse fields such as medical image analysis, robotics, and online advertising, where efficiency and generalization are crucial.
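The information-sharing idea above can be sketched with the simplest MTL scheme, hard parameter sharing: a shared trunk learns a common representation, and each task attaches its own small head. This is an illustrative NumPy sketch, not the architecture of any particular paper; the task names and layer sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Shared trunk parameters: updated by the losses of ALL tasks,
# which is where the cross-task information sharing happens.
W_shared = rng.normal(size=(16, 8))

# Task-specific head parameters: each updated only by its own task's loss.
# Task names are hypothetical (e.g. two recommendation objectives).
W_heads = {
    "ctr": rng.normal(size=(8, 1)),  # click-through-rate prediction
    "cvr": rng.normal(size=(8, 1)),  # conversion-rate prediction
}

def forward(x):
    """One forward pass: shared representation, then one output per task."""
    h = relu(x @ W_shared)  # representation shared across tasks
    return {task: h @ W for task, W in W_heads.items()}

batch = rng.normal(size=(4, 16))  # 4 examples, 16 input features
outputs = forward(batch)
for task, y in outputs.items():
    print(task, y.shape)  # each task yields its own (4, 1) prediction
```

In training, the per-task losses would be summed (often with task weights) and backpropagated through the shared trunk jointly; multi-expert variants replace the single trunk with several experts and a gating network.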
Papers
DTN: Deep Multiple Task-specific Feature Interactions Network for Multi-Task Recommendation
Yaowen Bi, Yuteng Lian, Jie Cui, Jun Liu, Peijian Wang, Guanghui Li, Xuejun Chen, Jinglin Zhao, Hao Wen, Jing Zhang, Zhaoqi Zhang, Wenzhuo Song, Yang Sun, Weiwei Zhang, Mingchen Cai, Guanxing Zhang
Multi-Task Multi-Fidelity Learning of Properties for Energetic Materials
Robert J. Appleton, Daniel Klinger, Brian H. Lee, Michael Taylor, Sohee Kim, Samuel Blankenship, Brian C. Barnes, Steven F. Son, Alejandro Strachan