Anti-Forgetting
"Anti-forgetting" in machine learning focuses on mitigating the tendency of models, particularly deep neural networks and large language models, to lose previously learned information when acquiring new knowledge. Current research emphasizes techniques like experience replay, regularization methods, and novel optimization algorithms (e.g., momentum-filtered optimizers) to improve knowledge retention across various tasks and datasets, often within continual learning or machine unlearning frameworks. This field is crucial for developing more robust and adaptable AI systems, impacting areas like robotics, personalized medicine, and natural language processing by enabling lifelong learning and efficient knowledge management.
Papers
Continual Learning by Three-Phase Consolidation
Davide Maltoni, Lorenzo Pellegrini
Auxiliary Classifiers Improve Stability and Efficiency in Continual Learning
Filip Szatkowski, Fei Yang, Bartłomiej Twardowski, Tomasz Trzciński, Joost van de Weijer
Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning
Chongyu Fan, Jiancheng Liu, Alfred Hero, Sijia Liu
Selective Forgetting: Advancing Machine Unlearning Techniques and Evaluation in Language Models
Lingzhi Wang, Xingshan Zeng, Jinsong Guo, Kam-Fai Wong, Georg Gottlob
Flashback: Understanding and Mitigating Forgetting in Federated Learning
Mohammed Aljahdali, Ahmed M. Abdelmoniem, Marco Canini, Samuel Horváth
Fine-tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem
Maciej Wołczyk, Bartłomiej Cupiał, Mateusz Ostaszewski, Michał Bortkiewicz, Michał Zając, Razvan Pascanu, Łukasz Kuciński, Piotr Miłoś
Trinity: Syncretizing Multi-/Long-tail/Long-term Interests All in One
Jing Yan, Liu Jiang, Jianfei Cui, Zhichen Zhao, Xingyan Bin, Feng Zhang, Zuotao Liu