In-Memory Computing
In-memory computing aims to overcome the von Neumann bottleneck by performing computation directly within memory arrays, sharply reducing data movement and improving energy efficiency. Current research focuses on adapting neural network architectures, including transformers, convolutional neural networks, and spiking neural networks, to in-memory execution, often pairing novel algorithms with hardware designs that mitigate the effects of analog non-idealities. The approach holds particular promise for accelerating machine learning in resource-constrained settings such as edge devices, and it is driving innovation in hardware and algorithm co-design.
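To make the notion of analog non-idealities concrete, here is a minimal Python sketch, not drawn from any specific paper, of how such effects are commonly modeled in software: a matrix-vector product on a resistive crossbar, perturbed by per-device weight variation and a quantized ADC readout. The function name `analog_matvec` and the parameter values `sigma` and `adc_bits` are illustrative assumptions, not values from any particular hardware.

```python
import numpy as np

def analog_matvec(weights, x, sigma=0.05, adc_bits=8, rng=None):
    """Simulate y = W @ x on a noisy analog crossbar (illustrative model).

    weights : (m, n) array, the conductance-encoded weight matrix
    x       : (n,) input vector applied as voltages
    sigma   : relative std-dev of per-device conductance variation (assumed)
    adc_bits: resolution of the column ADCs digitizing the output (assumed)
    """
    rng = rng or np.random.default_rng()
    # Device-to-device variation: each stored weight deviates slightly
    # from its programmed value.
    noisy_w = weights * (1.0 + sigma * rng.standard_normal(weights.shape))
    y = noisy_w @ x
    # Column ADCs clip and quantize the accumulated analog current.
    scale = np.max(np.abs(y)) or 1.0
    levels = 2 ** (adc_bits - 1) - 1
    return np.round(np.clip(y / scale, -1, 1) * levels) / levels * scale

# Example: one noisy forward pass through a small random layer.
W = np.random.default_rng(0).standard_normal((4, 8)) * 0.1
x = np.ones(8)
print(analog_matvec(W, x))
```

Training a network with this kind of noisy forward pass, often called noise-aware or hardware-aware training, is one common way the algorithmic mitigation described above is realized in practice.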