Neural Network Complexity

Neural network complexity research seeks to quantify the capacity and efficiency of neural networks, focusing on how architecture and parameters relate to computational power and generalization ability. Current efforts investigate a range of complexity measures, including those based on topological data analysis (e.g., neural persistence), the information capacity of learned representations (e.g., nearest-neighbor complexity), and the number of piecewise-linear regions into which a network partitions its input space. These analyses span diverse network types, from simple linear classifiers to deep convolutional and recurrent architectures, and inform the design of more efficient and interpretable models. Ultimately, this research contributes both to a deeper theoretical understanding of neural networks and to more practical, resource-efficient AI systems.
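As a concrete illustration of one measure mentioned above, the sketch below counts the piecewise-linear regions of a tiny, randomly initialized one-hidden-layer ReLU network on a 1-D input. The architecture, weights, and interval are illustrative assumptions, not taken from any particular paper; each distinct on/off pattern of the hidden ReLUs along the interval corresponds to one linear region.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network (assumed for illustration): scalar input -> 8 hidden ReLUs -> scalar output.
W1 = rng.normal(size=(8, 1))
b1 = rng.normal(size=8)

def activation_pattern(x):
    """Return the on/off pattern of the hidden ReLUs at input x."""
    pre = W1[:, 0] * x + b1
    return tuple(pre > 0)

# Sweep the interval [-3, 3]; each change in the pattern marks a
# boundary between adjacent linear regions of the network.
xs = np.linspace(-3.0, 3.0, 10001)
patterns = {activation_pattern(x) for x in xs}
num_regions = len(patterns)

# With 8 ReLUs on a 1-D input there are at most 8 breakpoints,
# hence at most 9 linear regions on the whole real line.
print(num_regions)
```

In higher input dimensions the region count can grow much faster than the number of units, which is why it is studied as an expressivity measure; here the 1-D sweep keeps the enumeration trivial.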

Papers