Orthogonal Gradient
Orthogonal gradient methods aim to improve machine learning models by modifying the training process so that parameter updates are independent of, or orthogonal to, directions associated with previously learned features. Current research applies this principle to diverse problems, including denoising and destriping hyperspectral images, improving the interpretability of rule ensembles, encouraging diversity among mixture-of-experts models, and strengthening the generalization of neural networks on tabular data. These techniques offer gains in model efficiency, robustness, and interpretability, with applications ranging from image processing and reinforcement learning to natural language processing.
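One common instantiation of this idea, used for example in orthogonal gradient descent for continual learning, projects each new gradient onto the orthogonal complement of a stored set of earlier update directions, so the update does not interfere with them. The following NumPy sketch is illustrative only; the function name and toy vectors are assumptions, not from any specific paper above.

```python
import numpy as np

def orthogonalize_gradient(grad, basis):
    """Project `grad` onto the orthogonal complement of span(basis).

    `basis` is a list of orthonormal vectors representing directions
    associated with previously learned features; the returned gradient
    has zero component along each of them.
    """
    g = grad.astype(float).copy()
    for b in basis:
        # Subtract the component of g along each stored direction.
        g -= np.dot(g, b) * b
    return g

# Toy example: one stored direction along the x-axis.
basis = [np.array([1.0, 0.0])]
g = np.array([3.0, 4.0])
g_perp = orthogonalize_gradient(g, basis)
# g_perp is [0.0, 4.0]: the x-component is removed, so the
# projected update is orthogonal to the stored direction.
```

A parameter step taken along `g_perp` leaves the stored directions untouched, which is the mechanism these methods use to preserve existing learned structure while still making progress on the current objective.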