Geometric Complexity
Geometric complexity measures the intricacy of the function a model represents or of a dataset's underlying structure, and it has become an active area of research in machine learning. Current work examines its role in explaining the generalization ability of deep neural networks (such as ResNets), the success of transfer learning, and the effectiveness of self-supervised methods such as masked autoencoders. Understanding geometric complexity helps clarify the implicit regularization that emerges during training, informing model design and deepening our picture of how neural networks learn, with implications for both theory and practice.
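One common formulation in this line of work defines a model's geometric complexity over a dataset as the mean squared Frobenius norm of its input-output Jacobian: smoother (flatter) functions score lower. The sketch below is a minimal illustration of that idea, assuming a tiny NumPy MLP and a central finite-difference estimate of the Jacobian; it is not a specific paper's implementation.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    """Tiny two-layer tanh network mapping R^d -> R^k."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def geometric_complexity(f, X, eps=1e-5):
    """Mean squared Frobenius norm of the input-output Jacobian over X,
    estimated column-by-column with central finite differences."""
    total = 0.0
    for x in X:
        jac_sq = 0.0
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = eps
            col = (f(x + e) - f(x - e)) / (2 * eps)  # i-th Jacobian column
            jac_sq += np.sum(col ** 2)
        total += jac_sq
    return total / len(X)

rng = np.random.default_rng(0)
d, h, k = 3, 8, 2
W1, b1 = rng.normal(size=(d, h)), np.zeros(h)
W2, b2 = rng.normal(size=(h, k)), np.zeros(k)
X = rng.normal(size=(16, d))

f = lambda x: mlp(x, W1, b1, W2, b2)
gc = geometric_complexity(f, X)
print(f"geometric complexity estimate: {gc:.4f}")

# Shrinking the weights flattens the function, so its complexity drops --
# the kind of effect implicit regularization is argued to produce.
f_small = lambda x: mlp(x, 0.1 * W1, b1, 0.1 * W2, b2)
gc_small = geometric_complexity(f_small, X)
```

The weight-scaling comparison at the end mirrors the intuition from the literature: training dynamics that bias networks toward lower geometric complexity yield smoother learned functions.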