Unconstrained Feature Model
Unconstrained feature models (UFMs) are simplified analytical frameworks for studying "neural collapse" (NC), a phenomenon in which deep neural networks develop highly structured feature representations as training converges. A UFM treats the last-layer features as free optimization variables rather than as outputs of a network, which makes the training objective analytically tractable. Current research extends UFMs to analyze NC in deeper, non-linear networks and under various loss functions (e.g., mean squared error, cross-entropy), and investigates how data properties (e.g., class imbalance, label noise) and network architecture affect NC's emergence. Studying NC through UFMs offers insight into the optimization landscape of deep learning, potentially leading to improved training strategies and a deeper understanding of generalization.
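As a concrete illustration, a minimal UFM can be optimized directly with gradient descent: the features and the linear classifier are the only variables. Under cross-entropy loss with weight decay, the optimized features are known to exhibit the two hallmark NC properties, which the sketch below checks numerically: within-class features collapse to their class means (NC1), and the centered class means form a simplex equiangular tight frame with pairwise cosine -1/(K-1) (NC2). This is a hedged sketch, not code from any paper listed here; all dimensions, step counts, and regularization values are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, d = 4, 8, 16            # classes, samples per class, feature dimension
lam = 5e-3                    # weight decay on both features and classifier
lr, steps = 0.5, 20000

H = rng.normal(0, 0.1, (d, K * n))   # unconstrained features, one column per sample
W = rng.normal(0, 0.1, (K, d))       # linear classifier
Y = np.repeat(np.eye(K), n, axis=1)  # one-hot labels, columns grouped by class

for _ in range(steps):
    logits = W @ H
    logits -= logits.max(axis=0, keepdims=True)   # numerically stable softmax
    P = np.exp(logits)
    P /= P.sum(axis=0, keepdims=True)
    G = (P - Y) / (K * n)             # gradient of mean cross-entropy w.r.t. logits
    gW = G @ H.T + lam * W
    gH = W.T @ G + lam * H
    W -= lr * gW
    H -= lr * gH

# NC1: within-class variability vanishes relative to the total feature norm
means = H.reshape(d, K, n).mean(axis=2)           # class means, shape (d, K)
within = H - np.repeat(means, n, axis=1)
nc1 = np.linalg.norm(within) / np.linalg.norm(H)

# NC2: centered class means approach a simplex ETF (pairwise cosine -1/(K-1))
M = means - means.mean(axis=1, keepdims=True)
Mn = M / np.linalg.norm(M, axis=0, keepdims=True)
cos = Mn.T @ Mn
off = cos[~np.eye(K, dtype=bool)]

print(f"NC1 ratio: {nc1:.3e}")
print(f"mean off-diagonal cosine: {off.mean():.3f} (ETF target {-1/(K-1):.3f})")
```

Because the features are free variables, this tiny problem reaches the collapsed configuration quickly; in a real network the same structure is observed empirically at the last layer, which is what motivates the UFM abstraction.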