Continual Instruction Tuning
Continual instruction tuning (CIT) adapts large language models (LLMs) to new tasks sequentially while avoiding catastrophic forgetting of previously learned skills. Current research mitigates forgetting through techniques such as data replay, model expansion, and parameter-efficient tuning, applied to both unimodal and multimodal LLMs. This line of work matters for building robust, adaptable AI systems that can keep learning in dynamic real-world environments, and it addresses a key limitation of one-shot instruction tuning.
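To make the data-replay idea concrete, the sketch below mixes a small buffer of examples from earlier tasks into each new task's training batches, so the model keeps rehearsing old skills while it learns new ones. This is a minimal illustration, not any specific paper's method: the toy regression model stands in for an LLM, the synthetic tasks stand in for instruction datasets, and names such as `make_task`, `train_sequentially`, `replay_capacity`, and `replay_ratio` are hypothetical choices for this example.

```python
# Minimal sketch of continual instruction tuning with experience replay.
# The tiny linear "model" and synthetic tasks are stand-ins for an LLM and
# real instruction data; only the replay mechanism is the point (assumption).
import random
import torch
from torch import nn

def make_task(seed, n=256, dim=16):
    """Synthetic 'task': inputs plus targets from a task-specific linear map."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, dim, generator=g)
    w = torch.randn(dim, 1, generator=g)
    return list(zip(x, x @ w))

def train_sequentially(tasks, replay_capacity=128, replay_ratio=0.5,
                       epochs=3, batch_size=32, lr=1e-2):
    model = nn.Linear(16, 1)          # stand-in for the LLM being tuned
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    replay_buffer = []                # examples retained from earlier tasks

    for task in tasks:
        for _ in range(epochs):
            random.shuffle(task)
            for i in range(0, len(task), batch_size):
                batch = list(task[i:i + batch_size])
                # Data replay: mix stored old-task examples into the batch so
                # updates for the new task also rehearse previous tasks.
                if replay_buffer:
                    k = int(len(batch) * replay_ratio)
                    batch += random.sample(replay_buffer,
                                           min(k, len(replay_buffer)))
                x = torch.stack([b[0] for b in batch])
                y = torch.stack([b[1] for b in batch])
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        # After each task, keep a small sample of it for future replay,
        # capped at the buffer capacity.
        replay_buffer = (replay_buffer +
                         random.sample(task, min(64, len(task))))[-replay_capacity:]
    return model

if __name__ == "__main__":
    model = train_sequentially([make_task(s) for s in (0, 1, 2)])
```

In a parameter-efficient variant of the same loop, the full-model update would be replaced by training only a small set of added parameters (for example, adapter or low-rank modules) per task, which is another common way the surveyed work limits interference between tasks.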