Neural Feature Ansatz
The Neural Feature Ansatz (NFA) describes a mechanism by which neural networks learn features: the Gram matrix of a layer's weights, W^T W, is strongly correlated with (approximately proportional to) the average gradient outer product (AGOP) of the network's output with respect to that layer's input. Current research investigates this correlation across architectures, including convolutional and fully connected networks, and explores its implications for improving feature learning in both neural networks and kernel machines. Understanding the NFA offers insight into the inner workings of deep learning, potentially leading to more efficient training algorithms and improved model interpretability, with applications ranging from image processing to quantum chemistry simulations.
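As a minimal sketch of the two quantities the NFA compares, the following numpy code builds a one-hidden-layer ReLU network (the architecture, dimensions, and random initialization are illustrative assumptions, not from any specific paper), computes the AGOP of the network output with respect to its input, and measures its cosine similarity with the first-layer neural feature matrix W^T W. The NFA predicts this correlation becomes high after training; at random initialization it is typically modest.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 10, 64, 500          # input dim, hidden width, number of samples (illustrative)

# One-hidden-layer ReLU network f(x) = a^T relu(W x)
W = rng.standard_normal((h, d)) / np.sqrt(d)
a = rng.standard_normal(h) / np.sqrt(h)
X = rng.standard_normal((n, d))

def input_gradient(x):
    """Gradient of f with respect to the input x: W^T (relu'(Wx) * a)."""
    pre = W @ x
    return W.T @ ((pre > 0).astype(float) * a)

# Average gradient outer product (AGOP) over the data: (1/n) sum_i g_i g_i^T
grads = np.stack([input_gradient(x) for x in X])   # shape (n, d)
agop = grads.T @ grads / n                         # shape (d, d)

# Neural feature matrix of the first layer
nfm = W.T @ W

# NFA correlation: cosine similarity between the flattened matrices
corr = (agop.ravel() @ nfm.ravel()) / (
    np.linalg.norm(agop) * np.linalg.norm(nfm)
)
print(f"NFA correlation at initialization: {corr:.3f}")
```

Both matrices are symmetric positive semidefinite (the AGOP is a mean of outer products), so the cosine similarity lies in [0, 1]; tracking it over training epochs is how the weight–AGOP correlation is typically quantified.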