Minimum Width

Minimum width in neural networks and related models is a research area concerned with determining the smallest network width needed to achieve a desired approximation capability, such as universal approximation or accurate effective string theory simulations. Current work investigates this minimum width across architectures including feed-forward networks, recurrent neural networks, and tree-like committee machines, often analyzing the trade-offs between width, depth, and other architectural choices such as skip connections. Understanding minimum width is important for optimizing network efficiency, for improving continual-learning performance by mitigating catastrophic forgetting, and for advancing the theoretical understanding of network capacity and expressiveness.
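
As a concrete illustration, for ReLU feed-forward networks the minimum width for L^p universal approximation is known exactly: max(d_in + 1, d_out) for input dimension d_in and output dimension d_out (Park et al., "Minimum Width for Universal Approximation", 2021). The sketch below (PyTorch; the depth, target function, and training hyperparameters are illustrative choices, not taken from any specific paper) builds a deep ReLU network pinned at exactly that critical width and fits it to a toy regression target, relying on depth rather than width for capacity.

```python
# Minimal sketch: a deep ReLU network at the critical width
# max(d_in + 1, d_out) fit to a toy target. All hyperparameters
# below are illustrative assumptions, not values from any paper.
import torch
import torch.nn as nn

d_in, d_out, depth = 2, 1, 8
width = max(d_in + 1, d_out)  # = 3 here: the known minimum width

# Stack of narrow hidden layers; capacity comes from depth, not width.
layers = [nn.Linear(d_in, width), nn.ReLU()]
for _ in range(depth - 1):
    layers += [nn.Linear(width, width), nn.ReLU()]
layers.append(nn.Linear(width, d_out))
net = nn.Sequential(*layers)

# Toy regression target f(x) = sin(x1) * cos(x2) on [-2, 2]^2.
x = torch.rand(1024, d_in) * 4 - 2
y = torch.sin(x[:, :1]) * torch.cos(x[:, 1:2])

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```

Shrinking `width` below max(d_in + 1, d_out) makes the network provably non-universal: no amount of added depth recovers L^p universal approximation below that threshold.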

Papers