Gradient Subspace
Gradient subspace methods identify and exploit low-dimensional structure within the high-dimensional spaces of model parameters and gradients, primarily to improve training and communication efficiency. Current research emphasizes applications in continual learning (reducing catastrophic forgetting), reinforcement learning (improving training efficiency), and federated learning (reducing communication overhead and enabling unlearning). These techniques offer significant potential for improving the scalability, efficiency, and robustness of machine learning algorithms across diverse applications.
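The core idea can be illustrated with a minimal sketch: if a gradient matrix is approximately low-rank, a truncated SVD yields an orthonormal basis for its dominant subspace, and the gradient can be stored, communicated, or updated in that much smaller subspace before being mapped back. This is an illustrative example (all names and the rank choice `k` are assumptions, not any specific paper's method):

```python
import numpy as np

def project_gradient(grad, k):
    # Compute a rank-k subspace basis from the gradient's top-k
    # left singular vectors, then project the gradient into it.
    U, _, _ = np.linalg.svd(grad, full_matrices=False)
    P = U[:, :k]               # (m, k) orthonormal basis
    low_rank = P.T @ grad      # (k, n) compact representation
    return P, low_rank

def reconstruct(P, low_rank):
    # Map the compact gradient back to the full parameter space.
    return P @ low_rank

rng = np.random.default_rng(0)
# Synthetic gradient with exact rank 2: the rank-2 projection is lossless.
G = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 50))
P, g_small = project_gradient(G, k=2)
G_hat = reconstruct(P, g_small)
print(G.size, P.size + g_small.size)   # full vs. compressed element count
print(np.allclose(G, G_hat, atol=1e-8))
```

In the federated-learning setting, for instance, only the compact `(P, low_rank)` pair would need to be communicated rather than the full gradient matrix.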