Gradient-Based Methods
Gradient-based methods are central to both training and interpreting machine learning models: gradients drive the optimization of model parameters and also serve to explain model decisions. Current research focuses on making gradient-based optimization more efficient and robust, particularly in federated learning, and on developing gradient-informed sampling techniques that improve model performance and explainability. These advances are crucial for scaling machine learning to larger datasets and more complex tasks, with applications ranging from medical image analysis to natural language processing and optimization problems.
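To make the two uses of gradients concrete, here is a minimal sketch (not taken from any of the papers below; the data, learning rate, and iteration count are illustrative assumptions): gradient descent fits a linear model by following the analytic gradient of a least-squares loss, and the same gradient machinery then yields a simple feature-importance score, the magnitude of the prediction's gradient with respect to the input.

```python
# Illustrative sketch only: gradient descent on a least-squares objective,
# followed by a simple gradient-based "saliency" score. All names and
# hyperparameters here are assumptions chosen for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = X @ w_true + noise
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)

# Gradient descent on the mean-squared-error loss L(w) = ||Xw - y||^2 / n
w = np.zeros(3)
lr = 0.1  # assumed step size
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # analytic gradient of the loss
    w -= lr * grad

# Gradient-based explanation: for a linear model, the gradient of the
# prediction with respect to the input is just w, so |w| ranks features.
saliency = np.abs(w)
print("fitted weights:", np.round(w, 2))
print("feature importance (|dy/dx|):", np.round(saliency, 2))
```

The same pattern, differentiate a loss to update parameters, differentiate a prediction to attribute it to inputs, underlies the more sophisticated optimization, sampling, and explanation methods surveyed in the papers below.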
Papers
Deep Gradient Learning for Efficient Camouflaged Object Detection
Ge-Peng Ji, Deng-Ping Fan, Yu-Cheng Chou, Dengxin Dai, Alexander Liniger, Luc Van Gool
Gradient-based explanations for Gaussian Process regression and classification models
Sarem Seitz
Gradient-Based Constrained Sampling from Language Models
Sachin Kumar, Biswajit Paria, Yulia Tsvetkov