Gradient Conflict
Gradient conflict arises when gradient updates from different tasks, objectives, or clients point in opposing directions during training, and it degrades performance in many machine learning settings, most notably multi-task learning and federated learning. Current research focuses on mitigating this conflict through techniques such as gradient projection, resampling strategies based on gradient uncertainty, and architectural modifications that disentangle conflicting tasks or layers. Addressing gradient conflict is crucial for improving the efficiency and accuracy of diverse applications, ranging from simultaneous speech translation and combinatorial optimization to reinforcement learning and domain generalization.
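A common instance of the gradient-projection idea is PCGrad-style gradient surgery: when two task gradients have a negative inner product (a conflict), the opposing component of one is projected out before the gradients are combined. The sketch below is a minimal, simplified illustration of that projection step, assuming flattened per-task gradient vectors and omitting details of the published methods (such as random task ordering); the function names are hypothetical.

import torch

def project_out_conflict(grad_a: torch.Tensor, grad_b: torch.Tensor) -> torch.Tensor:
    # If grad_a conflicts with grad_b (negative dot product), remove the
    # component of grad_a that points against grad_b.
    dot = torch.dot(grad_a, grad_b)
    if dot < 0:
        grad_a = grad_a - (dot / (grad_b.norm() ** 2 + 1e-12)) * grad_b
    return grad_a

def combine_task_gradients(task_grads):
    # Project each task gradient against every other task gradient,
    # then sum the de-conflicted gradients into a single update direction.
    combined = []
    for i, g in enumerate(task_grads):
        g_proj = g.clone()
        for j, g_other in enumerate(task_grads):
            if i != j:
                g_proj = project_out_conflict(g_proj, g_other)
        combined.append(g_proj)
    return torch.stack(combined).sum(dim=0)

# Toy example: two task gradients with a negative inner product (a conflict).
g_task1 = torch.tensor([1.0, 0.5, -0.2])
g_task2 = torch.tensor([-0.8, 0.4, 0.3])
print(combine_task_gradients([g_task1, g_task2]))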
15 papers
Papers