Analog Compute in Memory
Analog compute-in-memory (CIM) aims to accelerate deep neural network processing by performing matrix-vector multiplications directly within the memory array, drastically reducing data movement and energy consumption. Current research focuses on optimizing CIM architectures for specific neural network models (such as ResNet and VGG), developing robust training algorithms that account for analog circuit imperfections and device variability, and mitigating security vulnerabilities arising from power side-channel attacks. This approach holds significant promise for energy-efficient edge AI and for advancing low-power, high-performance computing.
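To illustrate the idea of variability-aware (noise-aware) training mentioned above, the sketch below injects Gaussian perturbations into the weights of a linear layer during the forward pass, emulating conductance variability of an analog crossbar. The `NoisyLinear` class, the multiplicative noise model, and the `noise_std` value are illustrative assumptions, not a specific published method.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer that perturbs its weights during training to mimic
    analog device variability (hypothetical multiplicative Gaussian model;
    real device statistics depend on the memory technology)."""

    def __init__(self, in_features, out_features, noise_std=0.05, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        self.noise_std = noise_std  # relative std of the weight perturbation

    def forward(self, x):
        if self.training and self.noise_std > 0:
            # Sample a fresh perturbation each forward pass so the network
            # learns weights that remain accurate under analog noise.
            noise = torch.randn_like(self.weight) * self.noise_std
            w = self.weight * (1.0 + noise)
        else:
            w = self.weight  # evaluate with nominal weights
        return nn.functional.linear(x, w, self.bias)

# Usage: drop-in replacement for nn.Linear in a small classifier.
model = nn.Sequential(
    nn.Flatten(),
    NoisyLinear(784, 256, noise_std=0.05),
    nn.ReLU(),
    NoisyLinear(256, 10, noise_std=0.05),
)
```

Training with such perturbations is one common way to make a network tolerant to the weight errors it will encounter when deployed on analog CIM hardware.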