Compute in Memory
Compute-in-memory (CIM) aims to drastically improve the energy efficiency and speed of neural network computation by performing calculations directly within memory arrays, eliminating the energy-intensive movement of data between memory and processing units. Current research focuses on three fronts: optimizing CIM architectures for various neural network models, including convolutional neural networks (e.g., ResNet) and multilayer perceptrons; developing efficient compilation techniques to map these models onto diverse CIM hardware; and mitigating the device variations and noise inherent in analog CIM implementations. The approach holds significant promise for accelerating machine learning, particularly in resource-constrained environments and in safety-critical systems where energy efficiency and robustness are paramount.
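To make the analog-noise issue concrete, the sketch below simulates a matrix-vector multiply on a hypothetical CIM crossbar: weights are mapped to conductances, each programmed conductance is perturbed by Gaussian device variation, bit-line currents accumulate the products, and an ADC quantizes the result. All parameter names (`g_max`, `sigma`, `adc_bits`) are illustrative assumptions, not a specific hardware model.

```python
import numpy as np

def cim_matvec(weights, x, g_max=1.0, sigma=0.05, adc_bits=8, rng=None):
    """Sketch of an analog CIM crossbar matrix-vector multiply.

    Assumed (not hardware-specific) model: weights scale linearly to
    conductances in [-g_max, g_max] (sign handled by a differential
    pair in real arrays, folded into one array here), conductances get
    additive Gaussian device-variation noise with std sigma * g_max,
    and the accumulated bit-line currents pass through an ADC with
    `adc_bits` of resolution.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Map weights to conductances via a single linear scale factor.
    scale = g_max / np.max(np.abs(weights))
    g = weights * scale
    # Device-to-device variation: Gaussian noise on each conductance.
    g_noisy = g + rng.normal(0.0, sigma * g_max, size=g.shape)
    # Analog MAC: currents sum along bit lines (Kirchhoff's current law).
    currents = g_noisy @ x
    # ADC quantization of the accumulated currents.
    full_scale = float(np.max(np.abs(currents))) or 1.0
    levels = 2 ** (adc_bits - 1)
    quantized = np.round(currents / full_scale * levels) / levels * full_scale
    # Rescale back to the original weight domain.
    return quantized / scale

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
ideal = W @ x
noisy = cim_matvec(W, x, sigma=0.02, rng=rng)
```

Comparing `noisy` against `ideal` shows how conductance variation and ADC precision jointly bound the achievable accuracy, which is why noise-aware training and compilation are active research directions.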