Robust Continual Learning
Robust continual learning aims to enable artificial intelligence systems to learn new tasks sequentially without forgetting previously acquired knowledge, a failure mode known as catastrophic forgetting. Current research focuses on improving model robustness against data issues such as noisy labels, outliers, and adversarial attacks, often employing techniques like Bayesian regularization, low-rank approximations, and data sampling strategies to constrain model parameters and prevent performance degradation. These advances are crucial for building reliable, adaptable AI systems in real-world settings such as robotics and autonomous vehicles, where continuous learning from dynamic data streams is essential.
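To make the Bayesian-regularization idea concrete, here is a minimal sketch in the style of Elastic Weight Consolidation (EWC): a quadratic penalty pulls the parameters toward values learned on earlier tasks, weighted by an importance estimate (typically the diagonal Fisher information). The function names and the diagonal-Fisher weighting are illustrative assumptions, not any specific paper's implementation.

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """Quadratic EWC-style penalty: (lam/2) * sum_i F_i * (theta_i - theta*_i)^2.

    params     -- current parameter vector (np.ndarray)
    old_params -- parameters after the previous task (theta*)
    fisher     -- per-parameter importance weights (diagonal Fisher estimate)
    lam        -- strength of the anti-forgetting penalty
    """
    return 0.5 * lam * np.sum(fisher * (params - old_params) ** 2)

def regularized_loss(task_loss, params, old_params, fisher, lam=1.0):
    # Total objective = current-task loss + penalty for drifting away
    # from parameters that mattered on earlier tasks.
    return task_loss + ewc_penalty(params, old_params, fisher, lam)
```

The penalty is zero when the parameters have not moved, and grows quadratically with drift on high-importance parameters, which is how this family of methods trades plasticity on the new task against stability on old ones.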