Gradient Subspace

Gradient subspace methods identify and exploit low-dimensional structure in the high-dimensional spaces of model parameters and gradients: in practice, the gradients observed during training tend to concentrate in a small subspace, which can be estimated (e.g., via SVD of recent gradients) and used in place of the full space to cut computation, memory, and communication costs. Current research emphasizes continual learning (projecting updates to reduce catastrophic forgetting), reinforcement learning (improving training efficiency), and federated learning (reducing communication overhead and enabling unlearning). By replacing full-dimensional gradient operations with their subspace counterparts, these techniques improve the scalability, efficiency, and robustness of a wide range of machine learning algorithms.
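
To make the shared mechanism concrete, the sketch below (a minimal NumPy illustration with made-up dimensions, not the implementation from any particular paper) estimates a low-rank basis from a batch of sampled gradients via SVD and projects a fresh gradient onto that basis:

```python
# Minimal sketch of the gradient-subspace idea (illustrative only).
# The rank k, the number of sampled gradients, and all variable names
# are assumptions for the example, not taken from a specific method.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are flattened gradients collected over several training
# steps: each row is one gradient vector in the full parameter space.
d, n_samples, k = 1000, 32, 8
G = rng.normal(size=(n_samples, d))

# Identify a k-dimensional subspace capturing most of the gradient
# energy via the top right singular vectors of the stacked gradients.
_, _, Vt = np.linalg.svd(G, full_matrices=False)
U = Vt[:k].T                      # (d, k) orthonormal subspace basis

# Project a new gradient into the subspace: k coefficients instead of
# d entries, which is what saves optimizer memory or communication.
g = rng.normal(size=d)
coeffs = U.T @ g                  # low-dimensional representation
g_recon = U @ coeffs              # lift back to the full space

print("compression:", d, "->", k)
print("relative reconstruction error:",
      np.linalg.norm(g - g_recon) / np.linalg.norm(g))
```

The same primitive is specialized per setting: continual-learning variants typically project new updates orthogonal to the subspace spanned by old-task gradients to avoid forgetting, while federated methods transmit only the k subspace coefficients instead of the full gradient.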

Papers