In-Memory Computing
In-memory computing aims to overcome the von Neumann bottleneck by performing computation directly within memory arrays, sharply reducing data movement and improving energy efficiency. Current research focuses on adapting neural network architectures, including transformers, convolutional neural networks, and spiking neural networks, for in-memory execution, often pairing novel algorithms with hardware designs that mitigate the effects of analog non-idealities. This approach holds significant promise for accelerating machine learning, particularly in resource-constrained settings such as edge devices, and is driving innovation in hardware-algorithm co-design.
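To make "analog non-idealities" concrete, below is a minimal NumPy sketch of a crossbar-style analog matrix-vector multiply with two common effects: multiplicative weight (conductance) noise and finite-resolution ADC readout. The function `crossbar_mvm`, its noise model, and all parameter values are illustrative assumptions, not drawn from any of the listed papers.

```python
import numpy as np

def crossbar_mvm(weights, x, noise_std=0.05, adc_bits=8, rng=None):
    """Simulate one analog in-memory matrix-vector multiply.

    Weights stand in for programmed device conductances; multiplicative
    Gaussian noise models programming variability, and a uniform ADC at
    the array periphery quantizes the accumulated analog output.
    (Illustrative model, not a specific paper's method.)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Non-ideality 1: each stored weight deviates from its target value.
    noisy_w = weights * (1.0 + rng.normal(0.0, noise_std, size=weights.shape))
    # Ideal analog accumulation along the bit lines (current summation).
    y = noisy_w @ x
    # Non-ideality 2: finite-resolution readout (uniform ADC model).
    scale = float(np.max(np.abs(y))) or 1.0
    levels = 2 ** (adc_bits - 1) - 1
    return np.round(y / scale * levels) / levels * scale


# Compare the noisy analog result against the exact digital product.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
x = rng.normal(size=8)
print("analog :", crossbar_mvm(W, x, rng=rng))
print("digital:", W @ x)
```

Hardware-aware training typically wraps a forward model of this kind around a network's layers so that the learned weights become robust to the perturbations the physical array will introduce.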
Papers: 37, dated July 17, 2024 through February 11, 2025.