Analog Compute-in-Memory

Analog compute-in-memory (CIM) aims to accelerate deep neural network processing by performing computations directly within the memory array, drastically reducing data movement and energy consumption. In a typical design, network weights are stored as device conductances and inputs are applied as voltages, so a matrix-vector multiplication is carried out in the analog domain through Ohm's law and current summation along the bit lines. Current research focuses on optimizing CIM architectures for specific neural network models (such as ResNet and VGG), developing robust training algorithms that account for analog circuit imperfections and variability, and mitigating security vulnerabilities arising from power side-channel attacks. The approach holds significant promise for energy-efficient edge AI and, more broadly, for low-power, high-performance computing.
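As a rough illustration of the two ideas above, the sketch below models an analog crossbar matrix-vector multiply with conductance variability and ADC quantization, and shows the kind of weight-noise injection used in variation-aware training. It is a minimal, hypothetical example, not drawn from any specific paper: the function names (`analog_mvm`, `noise_aware_forward`) and the noise and ADC parameters are assumptions chosen for clarity.

```python
# Illustrative sketch (not from the source): an idealized model of an analog
# CIM matrix-vector multiply with device variability and ADC quantization,
# plus the weight-noise injection used in variation-aware training.
# All names and noise parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def analog_mvm(weights, x, g_noise_std=0.02, adc_bits=8):
    """Model y = W @ x on a crossbar: weights map to conductances, inputs to
    voltages; column currents accumulate the products and an ADC with limited
    resolution digitizes the result."""
    # Device-to-device variation: each programmed conductance deviates slightly.
    noisy_w = weights * (1.0 + rng.normal(0.0, g_noise_std, weights.shape))
    y = noisy_w @ x  # analog accumulation along the bit lines
    # ADC quantization: clip to the observed range, round to 2**(adc_bits-1) levels.
    y_max = np.abs(y).max() + 1e-12
    levels = 2 ** (adc_bits - 1)
    return np.round(y / y_max * levels) / levels * y_max

def noise_aware_forward(weights, x, train_noise_std=0.05):
    """During training, perturb the weights on each forward pass so the
    learned model tolerates the variability seen at inference time."""
    perturbed = weights + rng.normal(0.0, train_noise_std * weights.std(), weights.shape)
    return perturbed @ x

# Example: compare an ideal digital MVM with its noisy analog counterpart.
W = rng.normal(size=(64, 128))
x = rng.normal(size=128)
ideal = W @ x
approx = analog_mvm(W, x)
print("mean abs error vs. ideal MVM:", np.abs(ideal - approx).mean())
```

In this toy model, the error between the ideal and analog results grows with the conductance noise and shrinks with ADC resolution, which is the trade-off that variation-aware training and architecture co-design aim to manage.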

Papers