Gradient-Based
Gradient-based methods are central to both training and interpreting machine learning models: gradients drive the optimization of model parameters and also help explain a model's decision-making process. Current research focuses on improving the efficiency and robustness of gradient-based optimization, particularly within federated learning, and on developing gradient-informed sampling techniques that improve model performance and explainability. These advances are crucial for scaling machine learning to larger datasets and more complex tasks, with impact on fields ranging from medical image analysis to natural language processing and optimization problems.
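To make the core idea concrete, the sketch below shows plain gradient descent on a least-squares objective. It is a minimal illustration only; the toy data, learning rate, and step count are illustrative assumptions and are not drawn from any of the papers listed here.

```python
# Minimal sketch of gradient-based optimization: plain gradient descent
# on a least-squares objective with synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # toy design matrix
true_w = np.array([2.0, -1.0, 0.5])      # ground-truth weights
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)                          # parameters to optimize
lr = 0.1                                 # step size (assumed, not tuned)

for step in range(200):
    residual = X @ w - y                 # model error on the toy data
    grad = 2 * X.T @ residual / len(y)   # gradient of mean squared error
    w -= lr * grad                       # gradient descent update

print("estimated weights:", w)           # should approach true_w
```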
Papers
Gradient-based Discrete Sampling with Automatic Cyclical Scheduling
Patrick Pynadath, Riddhiman Bhattacharya, Arun Hariharan, Ruqi Zhang
Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with Gradient-based Model Optimizers
Xinyu Tang, Xiaolei Wang, Wayne Xin Zhao, Siyuan Lu, Yaliang Li, Ji-Rong Wen
Convex and Bilevel Optimization for Neuro-Symbolic Inference and Learning
Charles Dickens, Changyu Gao, Connor Pryor, Stephen Wright, Lise Getoor
A gradient-based approach to fast and accurate head motion compensation in cone-beam CT
Mareike Thies, Fabian Wagner, Noah Maul, Haijun Yu, Manuela Goldmann, Linda-Sophie Schneider, Mingxuan Gu, Siyuan Mei, Lukas Folle, Alexander Preuhs, Michael Manhart, Andreas Maier