Unconstrained Feature Model

Unconstrained feature models (UFMs) are simplified analytical frameworks for studying "neural collapse" (NC), a phenomenon in which the last-layer feature representations of deep neural networks become highly structured late in training: within-class variability vanishes and the class means arrange into a symmetric simplex configuration. The key simplification of the UFM is to treat the last-layer features as free optimization variables, decoupled from the network that would ordinarily produce them, which makes the resulting optimization landscape analytically tractable. Current research extends UFMs to deeper, non-linear networks and to various loss functions (e.g., mean squared error, cross-entropy), and investigates how data properties (e.g., class imbalance, label noise) and network architecture influence the emergence of NC. Understanding NC through UFMs offers insight into the optimization landscape of deep learning, potentially leading to improved training strategies and a deeper understanding of generalization.
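
The core idea can be sketched in a few lines: since the features are free variables, a UFM is just a regularized matrix-factorization problem, and gradient descent on it reproduces the collapse geometry. The following is a minimal illustrative sketch (all dimensions, learning rate, and regularization strength are arbitrary choices, not taken from any particular paper), using the mean-squared-error variant of the UFM:

```python
import numpy as np

# Minimal UFM sketch with MSE loss: the last-layer features H are
# optimized as free variables alongside the linear classifier W --
# the defining simplification of the unconstrained feature model.
# All sizes and hyperparameters below are illustrative assumptions.

K, d, n = 3, 5, 10          # classes, feature dimension, samples per class
N = K * n
rng = np.random.default_rng(0)

Y = np.kron(np.eye(K), np.ones((1, n)))   # one-hot targets, K x N
W = 0.1 * rng.standard_normal((K, d))     # classifier, K x d
H = 0.1 * rng.standard_normal((d, N))     # free features, d x N

lr, lam = 0.5, 5e-3
for _ in range(20_000):
    R = (W @ H - Y) / N                   # residual of the MSE fit
    gW = R @ H.T + lam * W                # gradient with weight decay
    gH = W.T @ R + lam * H
    W -= lr * gW
    H -= lr * gH

# NC1: within-class variability collapses relative to between-class spread.
means = H.reshape(d, K, n).mean(axis=2)           # class means, d x K
within = ((H - np.repeat(means, n, axis=1)) ** 2).mean()
between = ((means - means.mean(axis=1, keepdims=True)) ** 2).mean()
nc1_ratio = within / between

# NC2: globally centered class means form a simplex equiangular tight
# frame, so their pairwise cosines approach -1/(K-1) (= -0.5 for K = 3).
M = means - means.mean(axis=1, keepdims=True)
Mn = M / np.linalg.norm(M, axis=0)
cosines = [Mn[:, i] @ Mn[:, j] for i in range(K) for j in range(i + 1, K)]

print(f"NC1 ratio: {nc1_ratio:.2e}, pairwise cosines: {np.round(cosines, 3)}")
```

Running this, the within-class variability ratio shrinks toward zero and the centered class-mean cosines approach -1/(K-1), i.e., the simplex geometry characteristic of NC emerges from optimization alone, with no data or network in sight.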

Papers