Gradient-Based Methods
Gradient-based methods are central to both training and interpreting machine learning models: gradients drive parameter optimization during learning and, afterwards, help explain a model's decisions. Current research focuses on making gradient-based optimization more efficient and robust, particularly in federated learning, and on gradient-informed sampling techniques that improve model performance and explainability. These advances are essential for scaling machine learning to larger datasets and more complex tasks, with applications ranging from medical image analysis to natural language processing and combinatorial optimization.
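As a minimal illustration of the core idea (not any specific paper's method), the sketch below runs vanilla gradient descent on a simple quadratic loss; the function and parameter names are illustrative.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Vanilla gradient descent: repeatedly step against the gradient,
    x <- x - lr * grad(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = ||x - c||^2, whose gradient is 2 * (x - c);
# the unique minimizer is x = c.
c = np.array([3.0, -2.0])
x_min = gradient_descent(lambda x: 2 * (x - c), x0=np.zeros(2))
```

With this learning rate the error contracts by a factor of 0.8 per step, so after 100 steps `x_min` agrees with `c` to many decimal places; too large a learning rate would instead diverge.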
168 papers
Papers
July 20, 2024
June 27, 2024

Dataless Quadratic Neural Networks for the Maximum Independent Set Problem
Ismail Alkhouri, Cedric Le Denmat, Yingjie Li, Cunxi Yu, Jia Liu, Rongrong Wang, Alvaro Velasquez

Stochastic Gradient Piecewise Deterministic Monte Carlo Samplers
Paul Fearnhead, Sebastiano Grazzi, Chris Nemeth, Gareth O. Roberts

On Discrete Prompt Optimization for Diffusion Models
Ruochen Wang, Ting Liu, Cho-Jui Hsieh, Boqing Gong

June 18, 2024
June 13, 2024
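Gradient-informed sampling of the kind surveyed above can be roughly illustrated with stochastic gradient Langevin dynamics (SGLD). This is a generic sketch of gradient-driven sampling, not the piecewise deterministic samplers of Fearnhead et al.; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld(grad_log_p, x0, step=1e-2, n_iters=5000):
    """Stochastic gradient Langevin dynamics: a gradient step toward
    high-probability regions plus Gaussian noise,
    x <- x + (step / 2) * grad log p(x) + sqrt(step) * noise."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        noise = rng.normal(size=x.shape)
        x = x + 0.5 * step * grad_log_p(x) + np.sqrt(step) * noise
        samples.append(x.copy())
    return np.array(samples)

# Target: standard normal, for which grad log p(x) = -x.
# The chain drifts from x0 = 5 toward the mode and then explores it.
chain = sgld(lambda x: -x, x0=np.array([5.0]))
```

After discarding a burn-in prefix, the retained samples have mean near 0 and standard deviation near 1, matching the target; the gradient term is what steers the walk, distinguishing such samplers from gradient-free random-walk methods.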