Content Addressable Memory
Content-addressable memory (CAM) retrieves stored information by matching content rather than by address, so a query can be partial or noisy, mirroring aspects of human associative memory. Current research focuses on improving CAM efficiency and capacity through architectures such as modern Hopfield networks and memristor-based implementations, and on optimizing training and inference algorithms, including those that exploit sparsity and product quantization. These advances are driving progress in energy-efficient machine learning, for example in robotic manipulation, and are enabling in-memory computing approaches to faster, more efficient AI.
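To make the retrieval idea concrete, below is a minimal sketch of content-addressable recall using a classical Hopfield network: bipolar patterns are stored with a Hebbian outer-product rule, and a corrupted query is iterated to a nearby stored pattern. The pattern sizes, learning rule, and synchronous update schedule are illustrative assumptions for exposition, not details taken from any of the papers listed here.

```python
import numpy as np

# Illustrative sketch: a classical Hopfield network acting as a
# content-addressable memory. Patterns are stored via a Hebbian
# outer-product rule; recall iterates from a noisy query toward a
# stored fixed point. All shapes/parameters are assumptions.

def train_hopfield(patterns: np.ndarray) -> np.ndarray:
    """Build a weight matrix from bipolar (+1/-1) patterns, shape (P, N)."""
    _, num_units = patterns.shape
    weights = patterns.T @ patterns / num_units  # sum of outer products
    np.fill_diagonal(weights, 0.0)               # no self-connections
    return weights

def recall(weights: np.ndarray, query: np.ndarray, steps: int = 20) -> np.ndarray:
    """Synchronously update the state until it stops changing."""
    state = query.copy()
    for _ in range(steps):
        new_state = np.sign(weights @ state)
        new_state[new_state == 0] = 1  # break ties deterministically
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(3, 64))  # 3 stored 64-bit patterns
    weights = train_hopfield(patterns)

    noisy = patterns[0].copy()
    flips = rng.choice(64, size=8, replace=False)  # corrupt 8 of 64 bits
    noisy[flips] *= -1

    restored = recall(weights, noisy)
    print("bits recovered:", int((restored == patterns[0]).sum()), "/ 64")
```

Synchronous updates can in principle oscillate between two states, which is why the loop caps the number of iterations; asynchronous (one-unit-at-a-time) updates are the standard variant that is guaranteed to settle into a fixed point of the network's energy function.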
Papers
MonoSparse-CAM: Harnessing Monotonicity and Sparsity for Enhanced Tree Model Processing on CAMs
Tergel Molom-Ochir, Brady Taylor, Hai Li, Yiran Chen
Dynamic neural network with memristive CIM and CAM for 2D and 3D vision
Yue Zhang, Woyu Zhang, Shaocong Wang, Ning Lin, Yifei Yu, Yangu He, Bo Wang, Hao Jiang, Peng Lin, Xiaoxin Xu, Xiaojuan Qi, Zhongrui Wang, Xumeng Zhang, Dashan Shang, Qi Liu, Kwang-Ting Cheng, Ming Liu
Non-Ideal Program-Time Conservation in Charge Trap Flash for Deep Learning
Shalini Shrivastava, Vivek Saraswat, Gayatri Dash, Samyak Chakrabarty, Udayan Ganguly
A 137.5 TOPS/W SRAM Compute-in-Memory Macro with 9-b Memory Cell-Embedded ADCs and Signal Margin Enhancement Techniques for AI Edge Applications
Xiaomeng Wang, Fengshi Tian, Xizi Chen, Jiakun Zheng, Xuejiao Liu, Fengbin Tu, Jie Yang, Mohamad Sawan, Kwang-Ting Cheng, Chi-Ying Tsui