Paper ID: 2208.10246

SDBERT: SparseDistilBERT, a faster and smaller BERT model

Devaraju Vinoda, Pawan Kumar Yadav

In this work we introduce a new transformer architecture called SparseDistilBERT (SDBERT), which combines sparse attention and knowledge distillation (KD). We implemented a sparse attention mechanism to reduce the quadratic dependency of attention on input length to a linear one. In addition to lowering the computational complexity of the model, we applied knowledge distillation to reduce its size. We were able to shrink the BERT model by 60% while retaining 97% of its performance, and training took only 40% of the original time.

Submitted: Jul 28, 2022
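
The abstract does not give implementation details. As a rough illustration only, the sketch below pairs a sliding-window sparse attention mask (one common way to make attention cost scale linearly in sequence length for a fixed window) with a standard soft-target distillation loss. The function names, window size, temperature, and alpha weight are illustrative assumptions, not the authors' code.

# Minimal sketch (not the paper's implementation): local sparse attention
# plus a knowledge-distillation loss in PyTorch.
import torch
import torch.nn.functional as F

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    # Boolean mask: position i may attend only to j with |i - j| <= window.
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() <= window

def sparse_attention(q, k, v, window: int = 4):
    # Scaled dot-product attention restricted to a local window.
    # q, k, v: (batch, seq_len, d). With a fixed window, each token attends
    # to a constant number of positions, so the work grows linearly with
    # seq_len (a dense masked computation is used here only for clarity).
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5          # (batch, seq, seq)
    mask = sliding_window_mask(q.size(1), window).to(q.device)
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    # Soft-target KL term (scaled by T^2) blended with hard-label
    # cross-entropy; temperature and alpha are illustrative choices.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

if __name__ == "__main__":
    q = k = v = torch.randn(2, 16, 8)
    print(sparse_attention(q, k, v, window=2).shape)  # torch.Size([2, 16, 8])

    student = torch.randn(4, 3)
    teacher = torch.randn(4, 3)
    labels = torch.tensor([0, 2, 1, 0])
    print(distillation_loss(student, teacher, labels).item())

In practice, a student model trained with such a combined loss mimics the teacher's output distribution while still fitting the ground-truth labels, which is how distillation can preserve most of the teacher's accuracy at a fraction of the size.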