Fast Gradient
Fast gradient methods aim to accelerate the training and inference of machine learning models by computing and using gradients efficiently. Current research focuses on improving gradient-based optimization across diverse applications, including image restoration (using deep equilibrium models), adversarial attacks (via rescaling and sampling techniques), and boosted decision trees (through approximate scoring functions). These advances yield faster training, improved model performance, and enhanced robustness, with impact on computer vision, machine learning, and cybersecurity.
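The adversarial-attack use of "fast gradient" typically refers to the Fast Gradient Sign Method (FGSM), which perturbs an input by a small step in the direction of the sign of the loss gradient. Below is a minimal, self-contained sketch of that idea; the logistic model, weights, and inputs are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """One FGSM step: shift each input feature by epsilon in the
    direction (sign of the gradient) that increases the loss."""
    return x + epsilon * np.sign(grad)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(x, w, y=1.0):
    """Cross-entropy loss of a logistic model p = sigmoid(w @ x),
    plus its analytic gradient with respect to the input x."""
    p = sigmoid(w @ x)
    loss = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    grad_x = (p - y) * w  # d(loss)/dx for logistic cross-entropy
    return loss, grad_x

# Hypothetical weights and input, chosen only for illustration.
w = np.array([1.5, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.4])

loss0, g = loss_and_grad(x, w)
x_adv = fgsm_perturb(x, g, epsilon=0.1)
loss1, _ = loss_and_grad(x_adv, w)
# For a small epsilon, the perturbed input x_adv should have a
# higher loss than the original x, despite moving each feature
# by at most epsilon.
```

The single-step, sign-based update is what makes the method "fast": one gradient evaluation per input, in contrast to iterative attacks that repeat this step many times with projection back into an epsilon-ball.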