Efficient Hardware
Efficient hardware design for deep learning and other computationally intensive workloads aims to minimize energy consumption and latency while maximizing throughput. Current research emphasizes hardware-aware model architectures (such as LowFormer and EfficientRep) and algorithms optimized for specific hardware platforms (e.g., FPGAs and specialized ASICs), often incorporating techniques like sparsity, quantization, and novel numerical formats. These advances are crucial for deploying complex models on resource-constrained devices, enabling broader applications in mobile computing, edge AI, and high-performance computing, and driving progress in energy-efficient computing.
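As one concrete illustration of the quantization technique mentioned above, the sketch below shows symmetric per-tensor int8 post-training quantization in plain NumPy. This is a minimal, generic example, not the method of any specific paper; the function names are illustrative.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = np.max(np.abs(w)) / 127.0  # one scale factor for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover a float approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Round-to-nearest keeps the per-element error within half a quantization step.
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

Storing `q` instead of `w` cuts weight memory 4x versus float32 and lets integer arithmetic units do the matrix multiplies, which is where much of the energy savings on edge hardware comes from; finer-grained (per-channel) scales trade a little metadata for lower error.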