Resistive Random Access Memory

Resistive Random Access Memory (RRAM) is being actively researched as a high-performance, energy-efficient hardware substrate for accelerating deep neural networks (DNNs), particularly in edge computing settings. Current work focuses on adapting DNN architectures and training methods to RRAM's characteristics, and on mitigating hardware limitations such as device faults and conductance variability through hardware-aware training, error compensation, and novel weight-mapping schemes. The goal is to overcome the remaining challenges of RRAM-based in-memory computing and enable more efficient AI systems across a range of applications.
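As a rough illustration of how RRAM-based in-memory computing and device variability are often modeled in simulation, the sketch below is a minimal NumPy model (not taken from any specific paper; the conductance range, level count, and variation magnitude are illustrative assumptions). It maps a weight matrix to differential conductance pairs, quantizes them to discrete programmable levels, injects device-to-device variation, and compares the resulting analog matrix-vector product with the ideal one.

```python
import numpy as np

# Illustrative device parameters (assumptions, not values from the literature cited here)
G_MIN, G_MAX = 1e-6, 1e-4   # programmable conductance range, in siemens
N_LEVELS = 16               # number of discrete conductance levels per cell
SIGMA_REL = 0.05            # relative programming / device-to-device variation

def map_weights_to_conductances(W):
    """Differential mapping: each weight is stored as a conductance pair
    (G_pos, G_neg) so that W is proportional to G_pos - G_neg."""
    w_max = np.max(np.abs(W)) + 1e-12
    scale = (G_MAX - G_MIN) / w_max
    G_pos = G_MIN + scale * np.clip(W, 0, None)
    G_neg = G_MIN + scale * np.clip(-W, 0, None)
    return G_pos, G_neg, scale

def quantize(G):
    """Snap conductances to the nearest of the discrete levels a cell can hold."""
    levels = np.linspace(G_MIN, G_MAX, N_LEVELS)
    idx = np.abs(G[..., None] - levels).argmin(axis=-1)
    return levels[idx]

def add_variability(G, rng):
    """Multiplicative Gaussian variation modeling imperfect programming."""
    return G * (1.0 + SIGMA_REL * rng.standard_normal(G.shape))

def crossbar_mvm(x, W, rng):
    """Emulate the analog matrix-vector multiply y = W @ x on a noisy crossbar."""
    G_pos, G_neg, scale = map_weights_to_conductances(W)
    G_pos = add_variability(quantize(G_pos), rng)
    G_neg = add_variability(quantize(G_neg), rng)
    # Column currents sum G * x contributions, implementing the dot products.
    i_pos, i_neg = G_pos @ x, G_neg @ x
    return (i_pos - i_neg) / scale

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)) * 0.5
x = rng.standard_normal(8)
print("ideal   :", W @ x)
print("crossbar:", crossbar_mvm(x, W, rng))
```

The gap between the two printed outputs reflects quantization and variability error; hardware-aware training approaches typically inject this kind of perturbation during training so the network learns weights that remain accurate after mapping.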

Papers