Gradient Conflict

Gradient conflict arises when the gradient updates produced by different tasks, objectives, or clients point in opposing directions during training, and it degrades performance across many machine learning settings, particularly multi-task learning and federated learning. Current research focuses on mitigating this conflict through techniques such as gradient projection, resampling strategies guided by gradient uncertainty, and architectural modifications that disentangle conflicting tasks or layers. Addressing gradient conflict is crucial for improving the efficiency and accuracy of diverse applications, ranging from simultaneous speech translation and combinatorial optimization to reinforcement learning and domain generalization.
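To make the gradient-projection idea concrete, the sketch below shows a PCGrad-style update rule in NumPy: when two task gradients have a negative inner product, the conflicting component of one is projected onto the normal plane of the other before the gradients are combined. This is a minimal illustration, not any particular paper's implementation; the function names, the random projection order, and the final averaging step are assumptions made for the example.

```python
import random
import numpy as np

def project_away_conflict(g_i, g_j):
    """Remove the component of g_i that conflicts with g_j.

    If the two gradients point in opposing directions (negative
    inner product), project g_i onto the normal plane of g_j;
    otherwise leave g_i unchanged.
    """
    dot = np.dot(g_i, g_j)
    if dot < 0:
        g_i = g_i - (dot / np.dot(g_j, g_j)) * g_j
    return g_i

def combine_task_gradients(task_grads, seed=0):
    """Combine per-task gradients, resolving pairwise conflicts.

    Each task's gradient is projected against the other tasks'
    gradients in random order, then the results are averaged
    (an illustrative choice; other schemes sum instead).
    """
    rng = random.Random(seed)
    combined = []
    for i, g in enumerate(task_grads):
        g = g.astype(float).copy()
        others = [g_j for j, g_j in enumerate(task_grads) if j != i]
        rng.shuffle(others)  # random projection order
        for g_j in others:
            g = project_away_conflict(g, g_j)
        combined.append(g)
    return np.mean(combined, axis=0)

# Two deliberately conflicting task gradients: their inner product
# is negative, so naive averaging would cancel most of the update.
g1 = np.array([1.0, 0.5])
g2 = np.array([-1.0, 0.5])
print(combine_task_gradients([g1, g2]))  # conflict-free combined update
```

In this example, naive averaging of `g1` and `g2` yields `[0.0, 0.5]`, while the projected combination yields `[0.0, 0.8]`: the shared direction on which the tasks agree is preserved instead of being partially cancelled by the conflict.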

Papers