Gradient Conflict
Gradient conflict arises when gradients from different tasks, objectives, or clients point in opposing directions during training, degrading performance in many machine learning settings, particularly multi-task learning and federated learning. Current research mitigates the conflict through techniques such as gradient projection, resampling strategies guided by gradient uncertainty, and architectural modifications that disentangle conflicting tasks or layers. Addressing gradient conflict is crucial for improving the efficiency and accuracy of diverse applications, ranging from simultaneous speech translation and combinatorial optimization to reinforcement learning and domain generalization.
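To make the gradient-projection idea concrete, below is a minimal NumPy sketch in the spirit of PCGrad (Yu et al., 2020): when two task gradients conflict (negative dot product), the component of one that points against the other is removed before averaging. The function names and toy gradients are illustrative, not taken from any specific paper listed here.

```python
import numpy as np

def project_conflicting(grad_a, grad_b):
    """If grad_a conflicts with grad_b (negative dot product),
    subtract the component of grad_a that opposes grad_b."""
    dot = np.dot(grad_a, grad_b)
    if dot < 0:  # gradients conflict
        grad_a = grad_a - (dot / (np.dot(grad_b, grad_b) + 1e-12)) * grad_b
    return grad_a

def combine_task_gradients(task_grads):
    """Combine per-task gradients after pairwise de-conflicting,
    visiting the other tasks in random order as in PCGrad."""
    adjusted = []
    for i, g in enumerate(task_grads):
        g = g.copy()
        for j in np.random.permutation(len(task_grads)):
            if j != i:
                g = project_conflicting(g, task_grads[j])
        adjusted.append(g)
    return np.mean(adjusted, axis=0)

# Toy example: two gradients that partially oppose each other
g1 = np.array([1.0, 0.5])
g2 = np.array([-1.0, 0.5])
print(combine_task_gradients([g1, g2]))
```

The same principle underlies many of the multi-task and federated variants surveyed here: detect conflicting update directions, then project, reweight, or resample so that no single task's update is overwritten by another's.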