Memory Computing

Memory computing aims to overcome the von Neumann bottleneck by performing computation directly within memory arrays, sharply reducing data movement and improving energy efficiency. Current research focuses on adapting neural network architectures, including transformers, convolutional neural networks, and spiking neural networks, for in-memory computation, often pairing novel algorithms with hardware designs to mitigate the effects of analog non-idealities. This approach holds significant promise for accelerating machine learning tasks, particularly in resource-constrained settings such as edge devices, and is driving innovation in hardware-algorithm co-design.
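
To make the analog non-idealities mentioned above concrete, here is a minimal NumPy sketch of a crossbar-style in-memory matrix-vector multiply. It is an illustration under assumptions, not a model of any specific device: the function name crossbar_mvm and the parameters g_noise_std (relative Gaussian programming noise on stored conductances) and adc_bits (uniform ADC resolution on the sensed bit-line currents) are hypothetical choices made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(weights, x, g_noise_std=0.05, adc_bits=8):
    """Simulate an analog in-memory matrix-vector multiply.

    weights     : (out, in) ideal weight matrix, mapped to device conductances
    x           : (in,) input vector, applied as word-line voltages
    g_noise_std : relative std of Gaussian conductance (programming) noise
    adc_bits    : resolution of the ADC digitizing each bit-line current
    """
    # Programming noise: each stored conductance deviates from its target.
    noisy_w = weights * (1.0 + g_noise_std * rng.standard_normal(weights.shape))

    # Analog MVM: bit-line currents accumulate contributions in place,
    # so the weight matrix never moves to a separate compute unit.
    currents = noisy_w @ x

    # ADC: uniform quantization of the sensed currents to adc_bits levels.
    full_scale = np.max(np.abs(currents)) + 1e-12
    levels = 2 ** (adc_bits - 1) - 1
    quantized = np.round(currents / full_scale * levels) / levels * full_scale
    return quantized

# Compare the ideal digital result with the simulated analog one.
W = rng.standard_normal((16, 64)) * 0.1
x = rng.standard_normal(64)
ideal = W @ x
analog = crossbar_mvm(W, x)
print("relative error:", np.linalg.norm(analog - ideal) / np.linalg.norm(ideal))
```

Sweeping g_noise_std or adc_bits in a sketch like this shows why algorithm-side mitigation (e.g., noise-aware training) matters: the error of the analog result grows with device noise and shrinks with ADC resolution.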

Papers