Sequential Learning

Sequential learning focuses on training machine learning models on data that arrives as a sequence of tasks, with the central goal of mitigating catastrophic forgetting, the loss of previously learned knowledge when new tasks are learned. Current research emphasizes algorithms and architectures, such as continual learning frameworks, graph neural networks, and transformer models, that address this challenge and improve knowledge transfer between tasks, often incorporating techniques like knowledge distillation and bias pruning. The field is crucial for building adaptable AI systems capable of handling real-world data streams in applications ranging from recommendation systems and medical image analysis to robotics and materials science.
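
To make the knowledge-distillation idea mentioned above concrete, the following is a minimal, illustrative PyTorch sketch of one common distillation-based approach to continual learning (in the spirit of Learning without Forgetting): the current model is penalized for drifting away from the outputs of a frozen copy trained on earlier tasks. The toy model sizes, the `distillation_loss` helper, the temperature, and the weight of 1.0 on the penalty are illustrative assumptions, not taken from any specific paper listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened outputs of the current model
    (student) and a frozen snapshot trained on earlier tasks (teacher)."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

# Toy setup (hypothetical dimensions): a current model and a frozen copy of it
# taken before training on the new task.
model = nn.Linear(20, 5)
old_model = nn.Linear(20, 5)
old_model.load_state_dict(model.state_dict())
for p in old_model.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# One training step on a batch from the *new* task.
x = torch.randn(32, 20)
y = torch.randint(0, 5, (32,))

logits = model(x)
with torch.no_grad():
    old_logits = old_model(x)

# Task loss on new data plus a distillation penalty that preserves behavior
# learned on previous tasks.
loss = F.cross_entropy(logits, y) + 1.0 * distillation_loss(logits, old_logits)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice the distillation weight trades off plasticity on the new task against stability on old tasks, and methods differ mainly in what signal from the old model (logits, features, or attention maps) is preserved.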

Papers