Efficient Deep Learning
Efficient deep learning focuses on developing neural network models and training algorithms that minimize computational resources while maintaining high accuracy. Current research emphasizes techniques like model compression (e.g., pruning, quantization, low-rank approximations), optimized architectures (e.g., EfficientNet, depthwise separable convolutions), and improved training methods (e.g., sparse backpropagation, adaptive sampling). These advancements are crucial for deploying deep learning on resource-constrained devices (e.g., mobile phones, embedded systems) and for reducing the environmental impact of large-scale training.
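To make the techniques above concrete, here is a minimal sketch, assuming PyTorch is available, that combines three of the methods mentioned: a depthwise separable convolution block (the building block popularized by MobileNet-style architectures), unstructured magnitude pruning, and post-training dynamic quantization. The model itself is a hypothetical toy network chosen only for illustration, not one drawn from the papers listed on this page.

```python
# Minimal sketch (assumes PyTorch is installed): depthwise separable convolution,
# magnitude pruning, and dynamic quantization on a toy model.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


class DepthwiseSeparableConv(nn.Module):
    """Depthwise + pointwise convolution, the MobileNet-style efficient conv block."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


# Hypothetical toy classifier used only to demonstrate the compression steps.
model = nn.Sequential(
    DepthwiseSeparableConv(3, 16),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

# Unstructured magnitude pruning: zero out the 50% smallest-magnitude weights
# of the final linear layer, then fold the mask back into the weight tensor.
prune.l1_unstructured(model[4], name="weight", amount=0.5)
prune.remove(model[4], "weight")

# Post-training dynamic quantization: nn.Linear weights are stored in int8
# and dequantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 3, 32, 32)
print(quantized(x).shape)  # torch.Size([1, 10])
```

In practice these steps are usually followed by fine-tuning (for pruning) or calibration (for static quantization) to recover accuracy; the sketch only shows the mechanics of applying them.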