Energy Efficient Training

Energy-efficient training of deep learning models aims to reduce the substantial computation and energy consumed during model development. Current research emphasizes techniques such as sparse backpropagation, reduced-precision arithmetic (e.g., binary or other ultra-low-precision formats), and efficient model architectures (e.g., spiking neural networks, quadratic neural networks), often combined with data-centric approaches such as elite sample selection; a brief sketch of one of these techniques follows below. These advances help mitigate the environmental impact of AI and enable deep learning on resource-constrained devices, broadening accessibility and supporting sustainable AI practices.
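
As a rough illustration of one of the techniques mentioned above, the sketch below shows reduced-precision (mixed-precision) training using PyTorch's automatic mixed precision utilities. The model, data, and hyperparameters are illustrative placeholders and are not taken from any particular paper listed here.

```python
# Minimal sketch of reduced-precision (mixed-precision) training in PyTorch.
# The model, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# GradScaler rescales the loss so that small gradients survive float16 rounding.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for step in range(100):
    # Dummy batch standing in for real training data.
    x = torch.randn(64, 784, device=device)
    y = torch.randint(0, 10, (64,), device=device)

    optimizer.zero_grad()
    # autocast runs eligible ops in lower precision, cutting memory traffic
    # and energy per training step.
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = loss_fn(model(x), y)

    scaler.scale(loss).backward()   # backprop on the scaled loss
    scaler.step(optimizer)          # unscales gradients, then applies the update
    scaler.update()
```

Lower-precision formats reduce both arithmetic energy and memory bandwidth per step; binary or other ultra-low-precision schemes push the same idea further but typically require specialized training procedures beyond this sketch.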

Papers