Paper ID: 2403.18547

Neural Architecture Search for Sentence Classification with BERT

Philip Kenneweg, Sarah Schröder, Barbara Hammer

Pre-training language models on large text corpora is common practice in Natural Language Processing. These models are subsequently fine-tuned to achieve the best results on a variety of tasks. In this paper we question the common practice of adding only a single output layer as a classification head on top of the network. We perform an AutoML search to find architectures that outperform the standard single-layer head at only a small compute cost. We validate our classification architecture on a variety of NLP benchmarks from the GLUE benchmark suite.

Submitted: Mar 27, 2024
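
To make the contrast in the abstract concrete, the sketch below shows the standard single-linear-layer classification head on top of BERT alongside a hypothetical deeper head of the kind an AutoML search might explore. It is a minimal sketch assuming PyTorch and the Hugging Face transformers library; the deeper head's depth, width, and activation are illustrative choices, not the architecture found by the paper's search.

```python
import torch.nn as nn
from transformers import BertModel

class BertWithHead(nn.Module):
    """BERT encoder with a configurable classification head.

    The "single" head is the common practice the paper questions
    (one output layer); the "deep" head is a purely illustrative
    alternative of the kind an architecture search might consider.
    """
    def __init__(self, num_labels: int, head: str = "single"):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        if head == "single":
            # Baseline: a single linear output layer.
            self.head = nn.Linear(hidden, num_labels)
        else:
            # Hypothetical deeper head; depth, width, dropout, and
            # activation are the kinds of choices a search would tune.
            self.head = nn.Sequential(
                nn.Linear(hidden, hidden),
                nn.GELU(),
                nn.Dropout(0.1),
                nn.Linear(hidden, num_labels),
            )

    def forward(self, input_ids, attention_mask=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # Use the pooled [CLS] representation for sentence classification.
        return self.head(out.pooler_output)
```

In this framing, the search space would consist of such head configurations, and each candidate is fine-tuned and scored on a GLUE task to pick the best-performing head.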