Energy Efficient Deep Learning
Energy-efficient deep learning aims to reduce the substantial energy consumption of deep neural networks, a growing concern as these models become more prevalent. Current research emphasizes novel hardware architectures, such as analog in-memory computing and neuromorphic computing with spiking neural networks, alongside algorithmic optimizations such as efficient neural architecture search, pruning techniques (including those motivated by the Lottery Ticket Hypothesis), and early-exit strategies. These advances target energy efficiency across the full model lifecycle, from training to inference, with implications for both environmental sustainability and the feasibility of deploying AI in resource-constrained environments.
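To make one of the algorithmic techniques above concrete, here is a minimal sketch of unstructured magnitude pruning, the kind of weight-removal step that underlies Lottery Ticket-style experiments. This is an illustrative NumPy example, not any specific paper's method: the function name `magnitude_prune` and the 90% sparsity target are assumptions chosen for the demo.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Prune a random weight matrix to ~90% sparsity
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, 0.9)
print(f"sparsity: {np.mean(pruned == 0):.2f}")
```

In practice, pruned networks are usually fine-tuned (or, in Lottery Ticket experiments, retrained from their original initialization) to recover accuracy; the energy savings come from the sparse weights enabling smaller, cheaper computation at inference time.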