Multi-Task Learning
Multi-task learning (MTL) aims to improve model efficiency and generalization by training a single model on multiple related tasks simultaneously. Current research focuses on challenges such as task interference and optimization difficulties, exploring architectures such as Mixture-of-Experts (MoE), low-rank adapters, and hierarchical models to improve performance and efficiency across diverse tasks. MTL's significance lies in its potential to improve resource utilization and to produce more robust, adaptable AI systems, with applications spanning natural language processing, computer vision, and scientific modeling.
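As a point of reference for the papers listed below, the sketch that follows shows the most common MTL setup: a shared encoder with per-task heads, combined with learned uncertainty-based loss weighting in the spirit of Kendall et al. It is an illustrative PyTorch example only; the class name, dimensions, and two-task configuration are assumptions for the sketch, not code from any of the listed papers.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared encoder ("trunk") with one lightweight head per task (illustrative)."""
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, out_dim) for out_dim in task_out_dims]
        )
        # One learnable log-variance per task, used to weight task losses
        # (homoscedastic-uncertainty weighting in the style of Kendall et al.).
        self.log_vars = nn.Parameter(torch.zeros(len(task_out_dims)))

    def forward(self, x):
        z = self.encoder(x)
        return [head(z) for head in self.heads]

    def weighted_loss(self, task_losses):
        # L = sum_t exp(-s_t) * L_t + s_t, where s_t = log(sigma_t^2).
        total = 0.0
        for s, loss in zip(self.log_vars, task_losses):
            total = total + torch.exp(-s) * loss + s
        return total

# Toy usage: two regression tasks sharing one encoder (shapes are made up).
model = HardSharingMTL(in_dim=16, hidden_dim=64, task_out_dims=[1, 3])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 16)
targets = [torch.randn(32, 1), torch.randn(32, 3)]

preds = model(x)
losses = [nn.functional.mse_loss(p, t) for p, t in zip(preds, targets)]
loss = model.weighted_loss(losses)
opt.zero_grad()
loss.backward()
opt.step()
```

Learning the per-task weights rather than hand-tuning them is one simple way to mitigate the task-interference and optimization issues mentioned above; the papers below study more elaborate alternatives.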
Papers
Dynamic Adaptive Optimization for Effective Sentiment Analysis Fine-Tuning on Large Language Models
Hongcheng Ding, Xuanze Zhao, Shamsul Nahar Abdullah, Deshinta Arrova Dewi, Zixiao Jiang
Analytical Uncertainty-Based Loss Weighting in Multi-Task Learning
Lukas Kirchdorfer, Cathrin Elich, Simon Kutsche, Heiner Stuckenschmidt, Lukas Schott, Jan M. Köhler
Semi-Supervised Multi-Task Learning Based Framework for Power System Security Assessment
Muhy Eddin Za'ter, Amirhossein Sajadi, Bri-Mathias Hodge
Beyond Text: Leveraging Multi-Task Learning and Cognitive Appraisal Theory for Post-Purchase Intention Analysis
Gerard Christopher Yeo, Shaz Furniturewala, Kokil Jaidka