DNN Accelerator
DNN accelerators are specialized hardware designed to execute deep neural network (DNN) computations efficiently, with the primary goals of increasing throughput, reducing energy consumption, and minimizing latency. Current research focuses on optimizing various aspects of these accelerators, including novel memory hierarchies, efficient in-memory computing (IMC) using stochastic processing, and adaptive hardware/software co-optimization techniques, often applied to models such as ResNets and Vision Transformers. These advancements are crucial for deploying DNNs on resource-constrained edge devices and in safety-critical applications, improving both the efficiency of AI systems and their reliability in real-world deployments.
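One recurring co-optimization problem in this space is choosing a schedule (e.g., the ordering of the nested loops of a convolution) so that data reuse in the memory hierarchy is maximized. A minimal, illustrative sketch of a simulated-annealing search over loop orderings is below; the loop names and the cost function are toy assumptions invented for this example, not the model used by any of the papers listed here (SALSA, for instance, uses a real energy/latency model from an accelerator simulator).

```python
import math
import random

# Toy loop dimensions of a convolution layer: batch, output channels,
# input channels, output height, output width. (Illustrative only.)
LOOPS = ["N", "K", "C", "H", "W"]

def cost(order):
    """Hypothetical cost proxy: each loop has a made-up reuse weight, and
    loops placed further out (lower index) are multiplied by a larger
    penalty. Lower cost stands in for better data reuse; a real scheduler
    would query an analytical energy/latency model instead."""
    weights = {"N": 1, "K": 4, "C": 5, "H": 2, "W": 2}
    return sum(weights[l] * (len(order) - i) for i, l in enumerate(order))

def anneal(order, steps=2000, t0=10.0, seed=0):
    """Simulated annealing over loop permutations: propose a random swap of
    two loops, accept improvements always and regressions with probability
    exp(-delta / temperature), cooling linearly."""
    rng = random.Random(seed)
    cur = list(order)
    best = cur[:]
    for step in range(steps):
        temperature = t0 * (1 - step / steps) + 1e-9
        cand = cur[:]
        i, j = rng.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        delta = cost(cand) - cost(cur)
        if delta < 0 or rng.random() < math.exp(-delta / temperature):
            cur = cand
            if cost(cur) < cost(best):
                best = cur[:]
    return best

schedule = anneal(LOOPS)
print(schedule, cost(schedule))
```

The same skeleton extends to richer mapping spaces (tiling factors, spatial/temporal assignment) by widening the neighborhood move set, which is the general pattern heuristic schedulers for DNN accelerators follow.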
Papers
ADA-GP: Accelerating DNN Training By Adaptive Gradient Prediction
Vahid Janfaza, Shantanu Mandal, Farabi Mahmud, Abdullah Muzahid
HighLight: Efficient and Flexible DNN Acceleration with Hierarchical Structured Sparsity
Yannan Nellie Wu, Po-An Tsai, Saurav Muralidharan, Angshuman Parashar, Vivienne Sze, Joel S. Emer
SALSA: Simulated Annealing based Loop-Ordering Scheduler for DNN Accelerators
Victor J. B. Jung, Arne Symons, Linyan Mei, Marian Verhelst, Luca Benini
eFAT: Improving the Effectiveness of Fault-Aware Training for Mitigating Permanent Faults in DNN Hardware Accelerators
Muhammad Abdullah Hanif, Muhammad Shafique