Matrix Learning
Matrix learning focuses on efficiently learning and manipulating matrices within machine learning models, with the primary aims of improving model performance, reducing computational cost, and enhancing generalization. Current research emphasizes parameter-efficient fine-tuning methods, such as those based on low-rank approximations and singular value decomposition, and studies the training dynamics of linear and nonlinear networks to understand implicit regularization. These advances have significant implications for applications including computer vision, natural language processing, and reinforcement learning, enabling faster training, improved accuracy, and more compact models.
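As an illustration of the low-rank idea behind these parameter-efficient fine-tuning methods, the sketch below freezes a pretrained linear layer and learns only a small rank-r correction, so the number of trainable parameters scales with r(d_in + d_out) instead of d_in * d_out. This is a minimal sketch assuming PyTorch; the class and parameter names (LowRankAdapter, rank, alpha) are illustrative and not tied to any specific paper's implementation.

```python
# Minimal sketch of a LoRA-style low-rank update to a frozen linear layer.
# Names and hyperparameters here are illustrative, not from a specific paper.
import torch
import torch.nn as nn


class LowRankAdapter(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights

        in_f, out_f = base.in_features, base.out_features
        # A projects inputs down to the rank-r space, B projects back up.
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))  # zero init => no change at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus the trainable low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


if __name__ == "__main__":
    layer = LowRankAdapter(nn.Linear(512, 512), rank=4)
    y = layer(torch.randn(2, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(y.shape, trainable)  # 4 * (512 + 512) trainable parameters vs. 512 * 512 in the base layer
```

Only A and B receive gradients during fine-tuning, which is why such methods cut both memory use and training cost while leaving the pretrained weights untouched.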