Computing in Memory
Compute-in-memory (CIM) aims to overcome the von Neumann bottleneck by performing computation directly inside memory arrays, most commonly built from memristors, to improve the energy efficiency and speed of machine-learning workloads. Current research focuses on adapting CIM architectures to a range of neural network models, including convolutional neural networks (CNNs), transformers, and spiking neural networks (SNNs), often combining quantization, pruning, and noise-aware training algorithms to compensate for memristor non-idealities such as programming noise and device-to-device variation. The approach is particularly promising for energy-efficient, high-performance AI at the edge, where mobile devices and embedded systems operate under tight power budgets.
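As a concrete illustration of why such techniques matter, the minimal sketch below simulates a memristor-crossbar matrix-vector multiply in which weights are uniformly quantized (reflecting the limited number of programmable conductance levels) and perturbed by multiplicative Gaussian noise (a common stand-in for device-to-device variation). The function names, the 4-bit level count, and the 5% noise figure are illustrative assumptions, not values drawn from any particular paper:

```python
import numpy as np

def quantize(w, bits=4):
    """Uniformly quantize weights to a signed grid, mimicking the
    limited number of conductance levels a memristor can store.
    (Illustrative assumption: 4-bit symmetric quantization.)"""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

def crossbar_mvm(x, w, bits=4, noise_sigma=0.05, rng=None):
    """Matrix-vector multiply as an analog crossbar would compute it:
    quantized conductances plus multiplicative programming noise,
    with accumulation along the bit lines."""
    rng = np.random.default_rng() if rng is None else rng
    wq = quantize(w, bits)
    # Each stored conductance deviates slightly from its programmed target.
    w_noisy = wq * (1.0 + rng.normal(0.0, noise_sigma, size=wq.shape))
    return w_noisy @ x

# Usage: compare the ideal digital output with the simulated crossbar output.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 128))
x = rng.normal(size=128)
ideal = w @ x
noisy = crossbar_mvm(x, w, bits=4, noise_sigma=0.05, rng=rng)
print("relative error:", np.linalg.norm(noisy - ideal) / np.linalg.norm(ideal))
```

Noise-aware training typically injects this kind of perturbation into the forward pass during training, so that the learned weights remain accurate when deployed on non-ideal analog hardware.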