Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on mitigating task interference and optimization difficulties, exploring architectures such as Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance and efficiency across diverse tasks. MTL matters because it can improve resource utilization and yield more robust, adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
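To make the shared-parameter idea concrete, below is a minimal sketch of hard parameter sharing, a common MTL baseline: a shared backbone feeds several task-specific heads, and the per-task losses are summed into one joint objective. The PyTorch module names, dimensions, and two-task setup are illustrative assumptions, not taken from any of the papers listed here.

```python
# Minimal hard-parameter-sharing MTL sketch (hypothetical dimensions and tasks).
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, in_dim=128, hidden=256, task_out_dims=(10, 3)):
        super().__init__()
        # Shared backbone: parameters reused by every task.
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One lightweight classification head per task.
        self.heads = nn.ModuleList([nn.Linear(hidden, d) for d in task_out_dims])

    def forward(self, x):
        z = self.backbone(x)
        return [head(z) for head in self.heads]

model = MultiTaskModel()
x = torch.randn(32, 128)
targets = [torch.randint(0, 10, (32,)), torch.randint(0, 3, (32,))]
criterion = nn.CrossEntropyLoss()

# Joint objective: an (here unweighted) sum of per-task losses; choosing task
# weights is one of the optimization difficulties mentioned above.
outputs = model(x)
loss = sum(criterion(out, y) for out, y in zip(outputs, targets))
loss.backward()
```

Gradients from all tasks flow into the shared backbone, which is both the source of MTL's efficiency gains and of the task-interference problem that MoE or adapter-based variants try to alleviate.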
Papers
Enhancing Romanian Offensive Language Detection through Knowledge Distillation, Multi-Task Learning, and Data Augmentation
Vlad-Cristian Matei, Iulian-Marius Tăiatu, Răzvan-Alexandru Smădu, Dumitru-Clementin Cercel
The Perfect Blend: Redefining RLHF with Mixture of Judges
Tengyu Xu, Eryk Helenowski, Karthik Abinav Sankararaman, Di Jin, Kaiyan Peng, Eric Han, Shaoliang Nie, Chen Zhu, Hejia Zhang, Wenxuan Zhou, Zhouhao Zeng, Yun He, Karishma Mandyam, Arya Talabzadeh, Madian Khabsa, Gabriel Cohen, Yuandong Tian, Hao Ma, Sinong Wang, Han Fang
MT2KD: Towards A General-Purpose Encoder for Speech, Speaker, and Audio Events
Xiaoyu Yang, Qiujia Li, Chao Zhang, Phil Woodland
Learning Representation for Multitask learning through Self Supervised Auxiliary learning
Seokwon Shin, Hyungrok Do, Youngdoo Son
Task Addition in Multi-Task Learning by Geometrical Alignment
Soorin Yim, Dae-Woong Jeong, Sung Moon Ko, Sumin Lee, Hyunseung Kim, Chanhui Lee, Sehui Han
Tissue Concepts: supervised foundation models in computational pathology
Till Nicke, Jan Raphael Schaefer, Henning Hoefener, Friedrich Feuerhake, Dorit Merhof, Fabian Kiessling, Johannes Lotz
SpinMultiNet: Neural Network Potential Incorporating Spin Degrees of Freedom with Multi-Task Learning
Koki Ueno, Satoru Ohuchi, Kazuhide Ichikawa, Kei Amii, Kensuke Wakasugi