Paper ID: 2311.17943

LayerCollapse: Adaptive compression of neural networks

Soheil Zibakhsh Shabgahi, Mohammad Sohail Shariff, Farinaz Koushanfar

Handling the ever-increasing scale of contemporary deep learning and transformer-based models poses a significant challenge. Overparameterized Transformer networks outperform prior art in Natural Language Processing and Computer Vision. These models contain hundreds of millions of parameters, demanding significant computational resources and making them prone to overfitting on downstream tasks. In this work, we present LayerCollapse, a novel structured pruning method to reduce the depth of fully connected layers. We propose an innovative regularizer that promotes shallow fully connected layers, compressing the model with minimal performance impact. This regularizer enables post-training compression without fine-tuning while preserving performance. LayerCollapse controls model expressiveness by regularizing the activation functions between fully connected layers, modulating them toward linearity. A linear activation function collapses the rank of the transformation to the rank of the corresponding linear transformation, allowing the adjacent layers to be merged into one and demanding fewer resources from the hardware. We demonstrate the effectiveness of LayerCollapse by showing its compression capabilities on sentiment analysis, text generation, and image classification benchmarks.
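
The collapse step described above can be illustrated with a minimal PyTorch sketch: once the activation between two fully connected layers has been regularized to (near) linearity, the pair composes into a single linear layer with weight W2·W1 and bias W2·b1 + b2. The helper name `collapse_linear_pair` below is an illustrative assumption, not the paper's actual API.

```python
import torch
import torch.nn as nn

def collapse_linear_pair(fc1: nn.Linear, fc2: nn.Linear) -> nn.Linear:
    """Merge two fully connected layers separated by an identity (linear)
    activation into a single equivalent layer.

    For x -> fc2(fc1(x)) with a linear activation in between:
        W = W2 @ W1,   b = W2 @ b1 + b2
    """
    merged = nn.Linear(fc1.in_features, fc2.out_features, bias=True)
    with torch.no_grad():
        merged.weight.copy_(fc2.weight @ fc1.weight)
        b = torch.zeros(fc2.out_features)
        if fc1.bias is not None:
            b += fc2.weight @ fc1.bias
        if fc2.bias is not None:
            b += fc2.bias
        merged.bias.copy_(b)
    return merged

# Quick check: the collapsed layer reproduces the two-layer output
# (dimensions chosen only for illustration).
fc1, fc2 = nn.Linear(768, 3072), nn.Linear(3072, 768)
x = torch.randn(4, 768)
merged = collapse_linear_pair(fc1, fc2)
assert torch.allclose(fc2(fc1(x)), merged(x), atol=1e-4)
```

The merged layer has rank at most min(in, out) of the composed product, which is why replacing the pair reduces both depth and compute without changing the function the network represents (given the activation is exactly linear).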

Submitted: Nov 29, 2023