Hardware Accelerator
Hardware accelerators are specialized computing devices designed to speed up computationally intensive tasks, particularly in machine learning, while reducing their energy cost. Current research focuses on accelerating specific model architectures such as transformers and convolutional neural networks (CNNs), often using techniques like quantization, pruning, and novel dataflow designs implemented on FPGAs and other platforms. This work is driven by two needs: deploying increasingly complex AI models on resource-constrained devices (edge computing) and improving the sustainability of large-scale AI deployments by cutting energy consumption. The resulting advances have significant implications for computer vision, natural language processing, and robotics, enabling faster and more efficient AI applications.
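To illustrate one of the techniques named above, here is a minimal sketch of symmetric per-tensor int8 quantization in NumPy, the kind of weight compression accelerators commonly exploit. The function names are illustrative and not taken from any cited paper:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map float weights
    onto [-127, 127] using a single scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# With per-tensor symmetric rounding, the reconstruction error of
# any unclipped weight is at most half a quantization step.
max_err = np.max(np.abs(w - w_hat))
assert max_err <= scale / 2 + 1e-6
```

In practice, accelerator-oriented schemes add per-channel scales, zero points for asymmetric ranges, and quantization-aware training; this sketch shows only the core idea of trading precision for cheaper integer arithmetic and smaller memory traffic.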
Papers
Accelerating Generic Graph Neural Networks via Architecture, Compiler, Partition Method Co-Design
Shuwen Lu, Zhihui Zhang, Cong Guo, Jingwen Leng, Yangjie Zhou, Minyi Guo
SYENet: A Simple Yet Effective Network for Multiple Low-Level Vision Tasks with Real-time Performance on Mobile Device
Weiran Gou, Ziyao Yi, Yan Xiang, Shaoqing Li, Zibin Liu, Dehui Kong, Ke Xu