In-Memory Computing
In-memory computing aims to overcome the von Neumann bottleneck by performing computation directly within memory arrays, sharply reducing data movement and improving energy efficiency. Current research focuses on adapting neural network architectures, including transformers, convolutional neural networks, and spiking neural networks, for in-memory computation, often employing novel algorithms and hardware designs to mitigate the effects of analog non-idealities such as device variation and limited ADC precision. This approach holds significant promise for accelerating machine learning workloads, particularly in resource-constrained settings such as edge devices, and is driving innovation in hardware-algorithm co-design.
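To make the idea concrete, the following is a minimal sketch (not drawn from any of the listed papers) of how an analog crossbar matrix-vector multiply is commonly modeled: signed weights are mapped to a differential pair of conductances, device-to-device variation is added as multiplicative noise, column currents sum by Kirchhoff's law, and an idealized ADC quantizes the result. All parameter names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(weights, x, g_min=1e-6, g_max=1e-4,
                 noise_std=0.02, adc_bits=6):
    """Toy model of an analog in-memory matrix-vector multiply.

    Illustrative only: conductance range, noise model, and ADC are
    simplified assumptions, not a specific device technology.
    """
    w_max = np.max(np.abs(weights)) + 1e-12
    # Map signed weights to a differential conductance pair in [g_min, g_max].
    g_pos = g_min + (g_max - g_min) * np.clip(weights, 0, None) / w_max
    g_neg = g_min + (g_max - g_min) * np.clip(-weights, 0, None) / w_max
    # Analog non-ideality: multiplicative device-to-device variation.
    g_pos = g_pos * (1 + noise_std * rng.standard_normal(g_pos.shape))
    g_neg = g_neg * (1 + noise_std * rng.standard_normal(g_neg.shape))
    # Ohm's law per cell, Kirchhoff's law along each column: I = G @ V.
    i_out = (g_pos - g_neg) @ x
    # Idealized uniform ADC over the observed current range.
    i_range = np.max(np.abs(i_out)) + 1e-12
    levels = 2 ** (adc_bits - 1)
    i_q = np.round(i_out / i_range * levels) / levels * i_range
    # Scale currents back to the weight domain.
    return i_q * w_max / (g_max - g_min)

W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
# Compare the noisy analog result against the exact digital product.
err = np.max(np.abs(crossbar_mvm(W, x) - W @ x))
print(f"max abs error vs. exact MVM: {err:.4f}")
```

With noise and quantization disabled the model recovers the exact product, which is one way such simulators are sanity-checked before non-idealities are layered in.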
Papers
Comparative Evaluation of Memory Technologies for Synaptic Crossbar Arrays - Part 2: Design Knobs and DNN Accuracy Trends
Jeffry Victor, Chunguang Wang, Sumeet K. Gupta
Approximate ADCs for In-Memory Computing
Arkapravo Ghosh, Hemkar Reddy Sadana, Mukut Debnath, Panthadip Maji, Shubham Negi, Sumeet Gupta, Mrigank Sharad, Kaushik Roy