Compositional Task
Compositional tasks challenge artificial intelligence systems to solve problems by combining simpler sub-tasks, mirroring human cognitive abilities. Current research focuses on improving the performance of large language models (LLMs) and reinforcement learning (RL) agents on these tasks, exploring techniques such as recursive tuning, knowledge distillation, and specialized prompting strategies (e.g., skills-in-context prompting) to strengthen compositional reasoning. These efforts aim to overcome limitations of current architectures such as Transformers, which struggle with efficient function composition and with generalizing to unseen combinations of sub-tasks. Success in this area would significantly advance AI's ability to handle complex, real-world problems that require multi-step reasoning and flexible adaptation.
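One concrete route to skill composition mentioned in the papers below is merging separately trained LoRA adapters. As a minimal sketch (not the method of any particular paper; the weighted-sum merge shown here is one simple baseline, and the adapter names and weights are illustrative assumptions), each LoRA adapter parameterizes a low-rank weight update delta_W = B @ A, and a merged adapter can be formed by linearly combining those updates:

```python
import numpy as np

def lora_delta(A, B):
    # A LoRA adapter stores two low-rank factors; the implied
    # full-rank weight update is their product: delta_W = B @ A.
    return B @ A

def merge_loras(adapters, weights):
    # Simple weighted-sum merge of several adapters' updates.
    # Concatenating adapters along the rank dimension is a common
    # alternative to summing them; this sketch shows the sum.
    return sum(w * lora_delta(A, B) for (A, B), w in zip(adapters, weights))

rng = np.random.default_rng(0)
d, k, r = 8, 8, 2  # layer dimensions and LoRA rank (illustrative)

# Two hypothetical skill adapters, e.g. one tuned on math, one on code.
math_adapter = (rng.normal(size=(r, k)), rng.normal(size=(d, r)))
code_adapter = (rng.normal(size=(r, k)), rng.normal(size=(d, r)))

delta = merge_loras([math_adapter, code_adapter], weights=[0.5, 0.5])
print(delta.shape)  # merged update has the shape of the full weight matrix
```

The merged delta_W would then be added to the base model's frozen weight matrix, giving a single model that draws on both skills without further training.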
Papers
LoRA Soups: Merging LoRAs for Practical Skill Composition Tasks
Akshara Prabhakar, Yuanzhi Li, Karthik Narasimhan, Sham Kakade, Eran Malach, Samy Jelassi
Facilitating Multi-turn Function Calling for LLMs via Compositional Instruction Tuning
Mingyang Chen, Haoze Sun, Tianpeng Li, Fan Yang, Hao Liang, Keer Lu, Bin Cui, Wentao Zhang, Zenan Zhou, Weipeng Chen