Sample Gradient
Sample gradients, the gradients of the loss computed for individual training examples with respect to model parameters, are central to improving a range of machine learning processes. Current research leverages them for more efficient optimization (e.g., accelerating sharpness-aware minimization and improving stochastic gradient descent), for model stealing attacks, and for mitigating dataset bias. These advances aim to improve model accuracy, training speed, and robustness, with impact across deep learning, federated learning, combinatorial optimization, and differentially private training.
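To make the notion concrete, here is a minimal sketch (using NumPy, with a linear model and squared loss as illustrative assumptions) of per-sample gradients: one gradient row per training example, whose mean recovers the ordinary batch gradient, and which can be clipped individually as in differentially private training.

```python
import numpy as np

# Assumed toy setup: linear model y_hat = X @ w with loss 0.5 * (y_hat - y)^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))      # 8 samples, 3 features
w = rng.normal(size=3)
y = rng.normal(size=8)

residual = X @ w - y             # shape (8,)
# d/dw of 0.5 * (x_i . w - y_i)^2  =  (x_i . w - y_i) * x_i
per_sample_grads = residual[:, None] * X    # shape (8, 3): one gradient per sample
batch_grad = per_sample_grads.mean(axis=0)  # gradient of the mean loss

# Per-sample gradients allow clipping each example's gradient norm before
# averaging, the key step in DP-SGD; the clip value here is illustrative.
clip = 1.0
norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
clipped = per_sample_grads * np.minimum(1.0, clip / norms)
dp_grad = clipped.mean(axis=0)
```

The per-row view is what a standard batched backward pass discards by summing, which is why per-sample access often requires extra machinery or a re-derivation like the one above.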