Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on addressing challenges such as task interference and optimization difficulties, exploring architectures like Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to enhance performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and to yield more robust and adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
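To make the core idea concrete, below is a minimal sketch of the most common MTL setup, hard parameter sharing: a single shared encoder feeds several task-specific heads, and the model is trained on a (possibly weighted) sum of per-task losses. This is an illustrative example only, not the method of any paper listed here; the class name, task names, and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Hard-parameter-sharing MTL: one shared encoder, one head per task (illustrative sketch)."""

    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        # Shared representation learned jointly from all tasks.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # One lightweight head per task.
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden_dim, out_dim)
            for name, out_dim in task_out_dims.items()
        })

    def forward(self, x):
        z = self.encoder(x)
        return {name: head(z) for name, head in self.heads.items()}


if __name__ == "__main__":
    # Two hypothetical tasks sharing the same 32-dim input features.
    model = HardSharingMTL(in_dim=32, hidden_dim=64,
                           task_out_dims={"classify": 5, "regress": 1})
    x = torch.randn(8, 32)
    targets = {"classify": torch.randint(0, 5, (8,)),
               "regress": torch.randn(8, 1)}

    outputs = model(x)
    # Joint objective: unweighted sum of per-task losses (task weighting is a common refinement).
    loss = (nn.functional.cross_entropy(outputs["classify"], targets["classify"])
            + nn.functional.mse_loss(outputs["regress"], targets["regress"]))
    loss.backward()
    print({k: v.shape for k, v in outputs.items()}, float(loss))
```

The papers below largely address what this naive setup leaves open: how to weight or align the per-task gradients so tasks do not interfere, and how to route capacity (e.g., via MoE or adapters) so sharing helps rather than hurts.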
Papers
A Stutter Seldom Comes Alone -- Cross-Corpus Stuttering Detection as a Multi-label Problem
Sebastian P. Bayerl, Dominik Wagner, Ilja Baumann, Florian Hönig, Tobias Bocklet, Elmar Nöth, Korbinian Riedhammer
Independent Component Alignment for Multi-Task Learning
Dmitry Senushkin, Nikolay Patakin, Arseny Kuznetsov, Anton Konushin
Multitask learning for recognizing stress and depression in social media
Loukas Ilias, Dimitris Askounis
Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts
Rishov Sarkar, Hanxue Liang, Zhiwen Fan, Zhangyang Wang, Cong Hao
Regularization Through Simultaneous Learning: A Case Study on Plant Classification
Pedro Henrique Nascimento Castro, Gabriel Cássia Fortuna, Rafael Alves Bonfim de Queiroz, Gladston Juliano Prates Moreira, Eduardo José da Silva Luz
Transferring Fairness using Multi-Task Learning with Limited Demographic Information
Carlos Aguirre, Mark Dredze