Sample Gradient

Sample gradients, the gradients of a model's loss with respect to its parameters evaluated at individual training examples, are central to improving various machine learning processes. Current research focuses on leveraging sample gradients for more efficient optimization algorithms (e.g., accelerating sharpness-aware minimization and improving stochastic gradient descent), for enhancing model stealing attacks, and for mitigating dataset bias. These advances aim to improve model accuracy, training speed, and robustness, impacting fields ranging from deep learning and federated learning to combinatorial optimization and differentially private training.
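To make the concept concrete, here is a minimal sketch of per-sample gradients for a one-parameter linear model with squared loss, where the gradient of each sample's loss is computed in closed form. The weight and data points are hypothetical, chosen purely for illustration; in practice, frameworks compute these via automatic differentiation (e.g., vectorized per-example gradients), and differentially private training clips each sample gradient before averaging.

```python
def per_sample_gradient(w, x, y):
    """Gradient of the squared loss (w*x - y)**2 with respect to w,
    evaluated at a single sample (x, y)."""
    return 2.0 * (w * x - y) * x

# Hypothetical weight and tiny dataset for illustration.
w = 0.5
data = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0)]

# One gradient per data point, rather than a single averaged batch gradient.
sample_grads = [per_sample_gradient(w, x, y) for x, y in data]

# The usual SGD batch gradient is the mean of the sample gradients.
batch_grad = sum(sample_grads) / len(sample_grads)
```

Keeping the individual `sample_grads` (instead of only `batch_grad`) is what enables per-sample techniques such as gradient clipping for differential privacy or reweighting samples to mitigate dataset bias.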

Papers