Paper ID: 2405.04206
NOVA: NoC-based Vector Unit for Mapping Attention Layers on a CNN Accelerator
Mohit Upadhyay, Rohan Juneja, Weng-Fai Wong, Li-Shiuan Peh
Attention mechanisms are becoming increasingly popular, being used in neural network models across multiple domains such as natural language processing (NLP) and vision, especially at the edge. However, attention layers are difficult to map onto existing neuro accelerators because they have a much higher density of non-linear operations, which leads to inefficient utilization of today's vector units. This work introduces NOVA, a NoC-based Vector Unit that performs non-linear operations within the accelerator's NoC and can be overlaid onto existing neuro accelerators to map attention layers at the edge. Our results show that the NOVA architecture is up to 37.8x more power-efficient than state-of-the-art hardware approximators when running existing attention-based neural networks.
Submitted: May 7, 2024
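
For context, the sketch below (not from the paper; written in NumPy with hypothetical tensor sizes) illustrates why attention layers are non-linear-heavy: the matrix multiplies map naturally onto a MAC-oriented accelerator array, while the softmax and layer normalization in between require per-element exponentials, divisions, and square roots, which is the class of operations a NoC-based vector unit like NOVA would target.

```python
# Illustrative sketch only: the non-linear operations inside a single
# scaled-dot-product attention head, contrasted with its matmuls.
import numpy as np

def softmax(x, axis=-1):
    # exp and divide: non-linear, element-wise work that MAC arrays handle poorly
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    # mean/variance, sqrt and divide: further non-linear work per element
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def attention(q, k, v):
    # the two matmuls suit a systolic/MAC array; the softmax between them
    # is the dense non-linear step that stalls conventional vector units
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

# Toy example: one head, sequence length 4, head dimension 8 (hypothetical sizes)
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
out = layer_norm(attention(q, k, v))
print(out.shape)  # (4, 8)
```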