Mixed Task
Mixed-task learning focuses on developing AI systems that can handle diverse, and potentially conflicting, tasks simultaneously, improving efficiency and generalization over single-task approaches. Current research emphasizes improving large language model (LLM) performance on complex, multi-faceted problems through techniques such as multi-problem prompting, retrieval-augmented generation (RAG) with multi-head attention, and parameter-efficient fine-tuning methods such as LoRA. These advances matter for building more robust and adaptable AI systems, with applications in robotics, natural language processing, and other fields that require flexible problem-solving.
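As a rough illustration of the parameter-efficient fine-tuning mentioned above, the sketch below shows a minimal LoRA-style linear layer in PyTorch: a frozen base weight augmented with a trainable low-rank update. The class name, dimensions, and hyperparameters are illustrative assumptions and are not taken from any of the listed papers.

```python
# Minimal LoRA-style adapter sketch (illustrative only; names are hypothetical).
# The frozen base layer is augmented with a trainable low-rank update
# W + (alpha / r) * B @ A, which is the core idea behind LoRA.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # freeze pretrained weights
        self.base.bias.requires_grad_(False)
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Only the A/B factors receive gradients, so a per-task adapter stays cheap.
layer = LoRALinear(768, 768)
out = layer(torch.randn(4, 768))
```

Because only the low-rank factors are trained, separate adapters can be kept per task and swapped in at inference time, which is one reason such methods appear in mixed-task and continual-learning settings.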
Papers
Sub-network Discovery and Soft-masking for Continual Learning of Mixed Tasks
Zixuan Ke, Bing Liu, Wenhan Xiong, Asli Celikyilmaz, Haoran Li
Assessing and Enhancing the Robustness of Large Language Models with Task Structure Variations for Logical Reasoning
Qiming Bao, Gael Gendron, Alex Yuxuan Peng, Wanjun Zhong, Neset Tan, Yang Chen, Michael Witbrock, Jiamou Liu