Gradient Harmonization
Gradient harmonization addresses the problem of conflicting gradients in multi-objective machine learning, aligning optimization directions to improve model performance and training stability. Current research applies the technique to diverse areas, including unsupervised domain adaptation, machine unlearning, and federated learning, often integrating it as a plug-and-play module within existing model architectures. The approach shows promise for improving the efficiency and robustness of these paradigms, particularly in scenarios with heterogeneous data or conflicting training objectives.
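The core idea of resolving gradient conflict can be illustrated with a minimal sketch of one common strategy: projecting each task's gradient off any other gradient it conflicts with (i.e., has a negative dot product with), in the style of PCGrad. The function names and averaging step here are illustrative assumptions, not a specific paper's implementation.

```python
import numpy as np

def harmonize(g1, g2):
    """Remove from g1 the component that conflicts with g2.

    If the gradients point in opposing directions (negative dot
    product), project g1 onto the plane orthogonal to g2 so the
    update no longer works against g2's objective.
    """
    dot = np.dot(g1, g2)
    if dot < 0:
        g1 = g1 - (dot / np.dot(g2, g2)) * g2
    return g1

def harmonized_update(grads):
    """Combine per-task gradients after pairwise conflict removal."""
    out = []
    for i, g in enumerate(grads):
        g = g.astype(float).copy()
        for j, h in enumerate(grads):
            if i != j:
                g = harmonize(g, h)
        out.append(g)
    # Average the de-conflicted gradients into a single update direction.
    return np.mean(out, axis=0)
```

For example, with `g1 = [1, 0]` and `g2 = [-1, 1]` (dot product -1, a conflict), `harmonize(g1, g2)` returns `[0.5, 0.5]`, which is orthogonal to `g2`; non-conflicting gradients pass through unchanged.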