Energy Efficient Training
Energy-efficient training of deep learning models aims to reduce the substantial computational and energy costs of model development. Current research emphasizes techniques such as sparse backpropagation, reduced-precision arithmetic (e.g., binary or ultra-low-precision formats), and efficient model architectures (e.g., spiking neural networks, quadratic neural networks), often combined with data-centric approaches such as elite sample selection. These advances are crucial for mitigating the environmental impact of AI and for enabling deep learning on resource-constrained devices, broadening accessibility and fostering sustainable AI practices.
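To make one of these techniques concrete, the following is a minimal NumPy sketch of sparse backpropagation via top-k gradient masking: only the largest-magnitude gradient entries are applied each step, so most weights stay frozen. The function name, the `keep_ratio` parameter, and the toy dimensions are illustrative assumptions, not drawn from any specific paper above.

```python
import numpy as np

def sparse_grad_update(w, grad, lr=0.1, keep_ratio=0.25):
    """Apply only the top-k largest-magnitude gradient entries.

    Toy illustration of sparse backpropagation: entries below the
    k-th largest magnitude are zeroed, so only ~keep_ratio of the
    weights move per step. In a real system this sparsity cuts the
    compute and communication cost of the backward pass.
    """
    k = max(1, int(keep_ratio * grad.size))
    # Threshold at the k-th largest gradient magnitude.
    thresh = np.partition(np.abs(grad).ravel(), -k)[-k]
    mask = np.abs(grad) >= thresh
    return w - lr * grad * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
grad = rng.normal(size=(4, 4))
w_new, mask = sparse_grad_update(w, grad, keep_ratio=0.25)
# Weights outside the mask are unchanged; only ~25% of entries moved.
```

In practice the mask would be chosen per layer during the backward pass (and often reused across steps), but the core idea is the same: skip the update, and the associated arithmetic, for low-magnitude gradient entries.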
Papers
September 28, 2024
August 22, 2024
July 25, 2024
June 7, 2024
May 6, 2024
February 19, 2024
January 28, 2024
July 19, 2023
July 1, 2023
February 28, 2023
February 2, 2023
July 15, 2022
January 26, 2022