Diverse Task Learning
Diverse task learning in artificial intelligence focuses on developing models and methods capable of handling a wide range of tasks without extensive retraining for each one. Current research emphasizes approaches such as multi-task prompt tuning, mixture-of-experts models, and techniques for merging pre-trained models from different domains, often building on large language models (LLMs) and vision transformers (ViTs). This area is significant because it addresses the per-task training cost and limited generalizability of single-task models, improving efficiency across applications ranging from natural language processing and computer vision to robotics and personalized medicine.
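As a concrete illustration of the model-merging direction mentioned above, the sketch below averages the parameters of several fine-tuned checkpoints into one multi-task model. It is a minimal example, not any specific paper's method: it assumes PyTorch checkpoints that share an identical architecture, and the file names are hypothetical.

```python
# Minimal sketch of one model-merging approach: uniform parameter
# averaging of fine-tuned checkpoints sharing the same architecture.
# Checkpoint paths below are hypothetical placeholders.
import torch

def average_state_dicts(state_dicts):
    """Average a list of PyTorch state dicts key-by-key."""
    merged = {}
    for key in state_dicts[0]:
        # Stack the same parameter from every checkpoint and take the mean.
        merged[key] = torch.stack(
            [sd[key].float() for sd in state_dicts]
        ).mean(dim=0)
    return merged

# Usage (assuming checkpoints fine-tuned on different tasks):
# sds = [torch.load(p) for p in ["nlp_task.pt", "vision_task.pt"]]
# model.load_state_dict(average_state_dicts(sds))
```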
Papers
The SIFo Benchmark: Investigating the Sequential Instruction Following Ability of Large Language Models
Xinyi Chen, Baohao Liao, Jirui Qi, Panagiotis Eustratiadis, Christof Monz, Arianna Bisazza, Maarten de Rijke
ShortcutsBench: A Large-Scale Real-world Benchmark for API-based Agents
Haiyang Shen, Yue Li, Desong Meng, Dongqi Cai, Sheng Qi, Li Zhang, Mengwei Xu, Yun Ma