Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model to perform multiple related tasks simultaneously. Current research focuses on addressing challenges like task interference and optimization difficulties, exploring architectures such as Mixture-of-Experts (MoE), low-rank adaptors, and hierarchical models to enhance performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and create more robust and adaptable AI systems, with applications spanning various fields including natural language processing, computer vision, and scientific modeling.
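To make the core idea concrete, below is a minimal, hypothetical sketch of the simplest MTL setup, hard parameter sharing: a single shared encoder feeds lightweight task-specific heads, and training minimizes a weighted sum of per-task losses. It is a generic illustration (all module names, dimensions, and loss weights are made up for this example), not the method of any paper listed below; the loss weighting is exactly where the task-interference and balancing issues mentioned above tend to appear.

```python
import torch
import torch.nn as nn

class SharedBackboneMTL(nn.Module):
    """Hard parameter sharing: one shared encoder, several task-specific heads."""
    def __init__(self, in_dim=128, hidden_dim=64, num_classes_a=5, num_classes_b=3):
        super().__init__()
        # Shared representation learned jointly from all tasks
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Lightweight task-specific heads (hypothetical tasks A and B)
        self.head_a = nn.Linear(hidden_dim, num_classes_a)
        self.head_b = nn.Linear(hidden_dim, num_classes_b)

    def forward(self, x):
        z = self.encoder(x)
        return self.head_a(z), self.head_b(z)

model = SharedBackboneMTL()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Dummy batch: one shared input, one label per task
x = torch.randn(32, 128)
y_a = torch.randint(0, 5, (32,))
y_b = torch.randint(0, 3, (32,))

logits_a, logits_b = model(x)
# Joint objective: weighted sum of per-task losses; the weights (1.0, 0.5 here)
# are illustrative and are typically tuned or learned to mitigate task interference.
loss = 1.0 * criterion(logits_a, y_a) + 0.5 * criterion(logits_b, y_b)

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"combined loss: {loss.item():.4f}")
```

The MoE, low-rank adaptor, and hierarchical approaches referenced above can be read as refinements of this baseline that decide how much capacity is shared versus task-specific.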
Papers
Spatial Aware Multi-Task Learning Based Speech Separation
Wei Sun, Mei Wang, Lili Qiu
Hybrid CNN-Transformer Model For Facial Affect Recognition In the ABAW4 Challenge
Lingfeng Wang, Haocheng Li, Chunyin Liu
Facial Affect Analysis: Learning from Synthetic Data & Multi-Task Learning Challenges
Siyang Li, Yifan Xu, Huanyu Wu, Dongrui Wu, Yingjie Yin, Jiajiong Cao, Jingting Ding
Multi-Task Learning for Emotion Descriptors Estimation at the fourth ABAW Challenge
Yanan Chang, Yi Wu, Xiangyu Miao, Jiahe Wang, Shangfei Wang
HSE-NN Team at the 4th ABAW Competition: Multi-task Emotion Recognition and Learning from Synthetic Images
Andrey V. Savchenko
Multi-Task Learning Framework for Emotion Recognition in-the-wild
Tenggan Zhang, Chuanhe Liu, Xiaolong Liu, Yuchen Liu, Liyu Meng, Lei Sun, Wenqiang Jiang, Fengyuan Zhang, Jinming Zhao, Qin Jin
SS-MFAR: Semi-supervised Multi-task Facial Affect Recognition
Darshan Gera, Badveeti Naveen Siva Kumar, Bobbili Veerendra Raj Kumar, S Balasubramanian