Compact Neural Network
Compact neural networks aim to minimize model size and computational cost while maintaining high accuracy, which is crucial for deploying AI on resource-constrained devices. Current research emphasizes efficient training strategies, including specialized optimization techniques and knowledge distillation, alongside novel architectures such as GhostNet and hardware-aware compression methods such as low-rank approximation and activation-function pruning. These advances expand the accessibility and applicability of AI across domains, from mobile applications to embedded systems and other resource-limited environments.
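To make the knowledge-distillation idea concrete, here is a minimal pure-Python sketch of the temperature-scaled distillation loss (the KL divergence between the teacher's and student's softened output distributions). The function names and the default temperature are illustrative choices, not taken from any specific paper's code.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature yields a softer distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """KL divergence from the student's softened outputs to the teacher's.

    The result is scaled by T^2 so its gradient magnitude stays roughly
    comparable across temperatures, as is standard in distillation setups.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's soft predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (temperature ** 2) * kl

# A student that reproduces the teacher's logits incurs zero loss;
# a mismatched student incurs a positive loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]))
```

In practice this term is combined with the ordinary cross-entropy loss on hard labels, and training minimizes a weighted sum of the two; the sketch above isolates only the soft-target component.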
Papers
October 10, 2024
April 17, 2024
March 14, 2024
October 24, 2023
August 25, 2023
August 23, 2023
June 22, 2023
May 30, 2023
January 20, 2023
December 31, 2022
October 13, 2022
July 24, 2022
June 2, 2022
May 3, 2022