DNN Accelerator
DNN accelerators are specialized hardware designed to execute deep neural network (DNN) computations efficiently, primarily to increase throughput, reduce energy consumption, and lower latency. Current research focuses on optimizing various aspects of these accelerators, including novel memory hierarchies, efficient in-memory computing (IMC) with stochastic processing, and adaptive hardware/software co-optimization, often evaluated on models such as ResNet and Vision Transformers. These advances are crucial for deploying DNNs on resource-constrained edge devices and in safety-critical applications, improving both the efficiency of AI systems and their reliability in real-world deployments.
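The dominant workload these accelerators target is dense multiply-accumulate (MAC) arithmetic, typically mapped onto a grid of processing elements (PEs) such as a systolic array. The sketch below is a minimal, illustrative simulation of that idea; the function name `systolic_matmul` and the output-stationary mapping are assumptions chosen for clarity, not a description of any specific accelerator discussed here.

```python
import numpy as np

def systolic_matmul(A, W):
    """Simulate an output-stationary PE grid computing A @ W.

    Each PE (i, j) owns one output element and updates it with one
    multiply-accumulate per "cycle" k, mirroring how accelerators
    stream activations and weights through the array instead of
    fetching every operand from main memory.
    """
    M, K = A.shape
    K2, N = W.shape
    assert K == K2, "inner dimensions must match"
    out = np.zeros((M, N), dtype=A.dtype)
    for k in range(K):            # one reduction step per cycle
        for i in range(M):        # every PE works in parallel in hardware;
            for j in range(N):    # the loops here just serialize that
                out[i, j] += A[i, k] * W[k, j]
    return out
```

In real hardware the inner two loops execute concurrently across the PE grid, which is where the throughput and energy advantages over a general-purpose core come from.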
Papers
SAFFIRA: a Framework for Assessing the Reliability of Systolic-Array-Based DNN Accelerators
Mahdi Taheri, Masoud Daneshtalab, Jaan Raik, Maksim Jenihhin, Salvatore Pappalardo, Paul Jimenez, Bastien Deveautour, Alberto Bosio
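Reliability assessment of this kind typically relies on fault injection: corrupting hardware state (e.g., a bit flip in a buffered weight) and observing how the computation degrades. The snippet below is a generic single-bit-flip fault model for float32 weights, offered only as an illustration of the idea; the function `inject_bitflip` and its interface are hypothetical and do not reproduce SAFFIRA's actual methodology.

```python
import numpy as np

def inject_bitflip(weights, idx, bit):
    """Flip one bit of one float32 weight (a common transient-fault model).

    idx indexes the flattened weight tensor; bit selects which of the
    32 bits of the IEEE-754 representation to flip (31 = sign bit).
    """
    w = np.asarray(weights, dtype=np.float32).copy()
    raw = w.view(np.uint32).reshape(-1)   # reinterpret bits, no conversion
    raw[idx] ^= np.uint32(1 << bit)       # XOR toggles the chosen bit
    return w

# Illustrative use: measure output deviation of a dense layer under a fault.
x = np.ones((1, 4), dtype=np.float32)
w_good = np.full((4, 2), 0.5, dtype=np.float32)
w_bad = inject_bitflip(w_good, idx=0, bit=31)   # sign flip on one weight
deviation = np.abs(x @ w_good - x @ w_bad).max()
```

Sweeping idx and bit over many injections, then classifying outcomes (masked, degraded, critical), is the usual way such campaigns quantify a DNN accelerator's resilience.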
AdAM: Adaptive Fault-Tolerant Approximate Multiplier for Edge DNN Accelerators
Mahdi Taheri, Natalia Cherezova, Samira Nazari, Ahsan Rafiq, Ali Azarpeyvand, Tara Ghasempouri, Masoud Daneshtalab, Jaan Raik, Maksim Jenihhin
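Approximate multipliers trade exact arithmetic for smaller, lower-energy circuits, exploiting DNNs' tolerance to small numerical errors. A simple way to see the trade-off is operand truncation: zeroing low-order bits shrinks the partial-product array at the cost of bounded error. The function `approx_mul` below is an illustrative truncation scheme, not the AdAM design.

```python
def approx_mul(a, b, trunc_bits=4):
    """Truncated integer multiply: zero the low trunc_bits of each
    operand before multiplying. Fewer significant bits means fewer
    partial products in hardware, at the cost of a bounded error --
    the kind of accuracy/area/energy trade-off approximate multipliers
    for edge DNN accelerators exploit (illustrative only).
    """
    mask = ~((1 << trunc_bits) - 1)   # e.g. trunc_bits=4 -> ...11110000
    return (a & mask) * (b & mask)

# Worst-case relative error shrinks as operands grow, which is why
# quantized DNN inference often tolerates such multipliers well.
exact = 255 * 255            # 65025
approx = approx_mul(255, 255)  # (240 * 240) = 57600
```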