Analog In-Memory Computing

Analog in-memory computing (AIMC) aims to drastically improve the energy efficiency and speed of deep learning by performing computations, chiefly the matrix-vector multiplications that dominate neural network inference, directly within the memory array, eliminating the energy-intensive data movement of traditional von Neumann architectures. Current research focuses on optimizing AIMC for a range of neural network architectures, including transformers and spiking neural networks, often employing hardware-aware training to mitigate analog device imperfections such as programming noise, read noise, and conductance drift. This approach holds significant promise for accelerating AI, particularly in resource-constrained edge computing environments, by offering substantial reductions in latency and power consumption compared to conventional digital implementations.
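To make the core idea concrete: weights are stored as device conductances, inputs are applied as voltages, and Ohm's and Kirchhoff's laws produce the output currents of a matrix-vector multiply in a single analog step. The sketch below is a minimal NumPy simulation of this primitive, including the kind of non-idealities hardware-aware training must contend with; the differential-pair conductance mapping, the noise model, and all magnitudes (`G_MAX`, `PROG_NOISE`, `READ_NOISE`) are illustrative assumptions, not parameters of any particular device or of any specific paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

G_MAX = 25e-6      # assumed maximum device conductance (siemens); illustrative
PROG_NOISE = 0.03  # assumed relative error when programming a conductance
READ_NOISE = 0.01  # assumed relative noise on each analog readout

def weights_to_conductances(w):
    """Map signed weights onto a differential pair of non-negative conductances."""
    scale = G_MAX / np.max(np.abs(w))
    g_pos = np.clip(w, 0, None) * scale   # positive weights on one device
    g_neg = np.clip(-w, 0, None) * scale  # negative weights on its pair
    return g_pos, g_neg, scale

def program(g):
    """Writing a conductance is imprecise: add multiplicative programming noise."""
    return np.clip(g * (1 + PROG_NOISE * rng.standard_normal(g.shape)), 0, G_MAX)

def analog_mvm(g_pos, g_neg, scale, x):
    """One in-memory matrix-vector multiply: inputs as voltages, outputs as
    summed column currents (Ohm's law + Kirchhoff's current law), plus read noise."""
    i_out = (g_pos - g_neg) @ x
    i_out *= 1 + READ_NOISE * rng.standard_normal(i_out.shape)
    return i_out / scale  # convert currents back to the weight domain

# Compare an ideal digital MVM against its noisy analog counterpart.
w = rng.standard_normal((64, 128))
x = rng.standard_normal(128)

g_pos, g_neg, scale = weights_to_conductances(w)
y_analog = analog_mvm(program(g_pos), program(g_neg), scale, x)
y_digital = w @ x

err = np.linalg.norm(y_analog - y_digital) / np.linalg.norm(y_digital)
print(f"relative error from analog non-idealities: {err:.3%}")
```

Hardware-aware training typically injects this same kind of perturbation into the forward pass during training, so the learned weights become robust to the noise the analog hardware will introduce at inference time.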

Papers