Paper ID: 2307.00368

Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training

Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Deep learning models have grown substantially in the number of parameters they contain, which in turn increases the number of operations executed during inference. This growth significantly raises energy consumption and prediction latency. In this work, we propose EAT, a gradient-based algorithm that aims to reduce energy consumption during model training. To this end, we leverage a differentiable approximation of the $\ell_0$ norm and use it as a sparsity penalty added to the training loss. Through an experimental analysis conducted on three datasets and two deep neural networks, we demonstrate that our energy-aware training algorithm EAT trains networks with a better trade-off between classification performance and energy efficiency.
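The abstract describes adding a differentiable $\ell_0$ surrogate as a penalty on the training loss. Below is a minimal PyTorch sketch of that idea, assuming one common smooth surrogate, $\hat{\ell}_0(\boldsymbol{x}) = \sum_i x_i^2 / (x_i^2 + \sigma)$, and assuming the penalty is applied to intermediate activations collected via forward hooks; the names `EnergyAwareWrapper`, `energy_aware_loss`, and the hyperparameters `sigma` and `lam` are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def l0_surrogate(x: torch.Tensor, sigma: float = 1e-4) -> torch.Tensor:
    # Each term x_i^2 / (x_i^2 + sigma) tends to 1 when |x_i| >> sqrt(sigma)
    # and to 0 as x_i -> 0, so the sum smoothly counts non-zero entries
    # while remaining differentiable everywhere.
    return (x.pow(2) / (x.pow(2) + sigma)).sum()

class EnergyAwareWrapper(nn.Module):
    """Hypothetical wrapper that records ReLU outputs with forward hooks,
    so a sparsity penalty can be added to the task loss."""

    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model
        self.activations = []
        for module in model.modules():
            if isinstance(module, nn.ReLU):
                module.register_forward_hook(self._save)

    def _save(self, module, inputs, output):
        self.activations.append(output)

    def forward(self, x):
        self.activations = []  # reset before each forward pass
        return self.model(x)

def energy_aware_loss(wrapper, outputs, targets, lam: float = 1e-6):
    # Task loss plus the (scaled) approximate count of firing neurons.
    ce = F.cross_entropy(outputs, targets)
    penalty = sum(l0_surrogate(a) for a in wrapper.activations)
    return ce + lam * penalty

# Usage: a single training step on dummy data.
model = EnergyAwareWrapper(
    nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = energy_aware_loss(model, model(x), y)
loss.backward()
opt.step()
```

The trade-off mentioned in the abstract is controlled here by the penalty weight `lam`: larger values push more activations toward zero (lower energy on sparsity-exploiting hardware) at some cost in classification accuracy.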

Submitted: Jul 1, 2023