Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on challenges such as task interference and optimization difficulties, exploring architectures such as Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and create more robust and adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
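To make the shared-parameter idea concrete, below is a minimal sketch of the simplest MTL setup, hard parameter sharing: a shared encoder feeds task-specific heads, and the per-task losses are summed for a joint update. This assumes PyTorch, and all names here (MultiTaskModel, task_a, task_b, the dimensions) are illustrative rather than taken from any of the papers listed.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Hard parameter sharing: one shared encoder, one head per task."""
    def __init__(self, input_dim, hidden_dim, task_output_dims):
        super().__init__()
        # Shared representation learned jointly across all tasks.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One lightweight head per task (e.g. classification, regression).
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden_dim, out_dim)
            for name, out_dim in task_output_dims.items()
        })

    def forward(self, x):
        shared = self.encoder(x)
        return {name: head(shared) for name, head in self.heads.items()}

# Hypothetical joint training step: sum the per-task losses
# (task weighting / gradient-balancing schemes would go here).
model = MultiTaskModel(input_dim=32, hidden_dim=64,
                       task_output_dims={"task_a": 10, "task_b": 1})
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fns = {"task_a": nn.CrossEntropyLoss(), "task_b": nn.MSELoss()}

x = torch.randn(16, 32)
targets = {"task_a": torch.randint(0, 10, (16,)),
           "task_b": torch.randn(16, 1)}

outputs = model(x)
total_loss = sum(loss_fns[t](outputs[t], targets[t]) for t in outputs)
total_loss.backward()
optimizer.step()
```

The MoE, low-rank adapter, and hierarchical approaches mentioned above can be read as refinements of this template that decide how much of the shared capacity each task uses, which helps mitigate task interference.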
Papers
Multi-Task Dynamical Systems
Alex Bird, Christopher K. I. Williams, Christopher Hawthorne
Improving End-to-End Text Image Translation From the Auxiliary Text Translation Task
Cong Ma, Yaping Zhang, Mei Tu, Xu Han, Linghui Wu, Yang Zhao, Yu Zhou
Data-Efficiency with a Single GPU: An Exploration of Transfer Methods for Small Language Models
Alon Albalak, Akshat Shrivastava, Chinnadhurai Sankar, Adithya Sagar, Mike Ross
Grape Cold Hardiness Prediction via Multi-Task Learning
Aseem Saxena, Paola Pesantez-Cabrera, Rohan Ballapragada, Kin-Ho Lam, Markus Keller, Alan Fern
Extreme Multi-Domain, Multi-Task Learning With Unified Text-to-Text Transfer Transformers
Adebayo Oshingbesan, Courage Ekoh, Germann Atakpa, Yonah Byaruagaba