Long-Term Memory
Long-term memory (LTM) research focuses on enabling artificial systems to retain and use information across extended periods, mirroring human cognitive abilities. Current efforts concentrate on developing efficient memory architectures, such as hierarchical compression schemes and memory banks inspired by biological systems (e.g., the hippocampus), and on integrating these into models such as transformers and recurrent neural networks. This research is crucial for advancing artificial intelligence in areas like conversational agents, robotics, and video processing, where the ability to learn from and act upon past experiences is essential for robust, intelligent behavior. Improved LTM capabilities are expected to yield more sophisticated and contextually aware AI systems.
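To make the memory-bank idea concrete, here is a minimal sketch of an external long-term memory: entries are stored as (vector, content) pairs and retrieved by cosine similarity to a query vector. This is an illustrative toy, not the mechanism of any specific paper listed below; the `MemoryBank` class, its `write`/`read` methods, and the hand-picked 2-D vectors are all hypothetical.

```python
import math

class MemoryBank:
    """Toy long-term memory: stores (key_vector, content) pairs and
    retrieves the entries most similar to a query by cosine similarity."""

    def __init__(self):
        self.entries = []  # list of (vector, content) pairs

    def write(self, vector, content):
        # Append a new memory; real systems would also compress or evict.
        self.entries.append((vector, content))

    def read(self, query, k=1):
        # Rank stored memories by cosine similarity to the query vector.
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(self.entries, key=lambda e: cosine(query, e[0]),
                        reverse=True)
        return [content for _, content in ranked[:k]]

# Hypothetical 2-D key vectors standing in for learned embeddings.
bank = MemoryBank()
bank.write([1.0, 0.0], "fact about robotics")
bank.write([0.0, 1.0], "fact about dialogue history")
print(bank.read([0.9, 0.1], k=1))  # → ['fact about robotics']
```

In practice the key vectors come from a learned encoder, and papers such as HippoRAG replace the flat similarity search with graph-structured, hippocampus-inspired indexing.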
Papers
HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models
Bernal Jiménez Gutiérrez, Yiheng Shu, Yu Gu, Michihiro Yasunaga, Yu Su
WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models
Peng Wang, Zexi Li, Ningyu Zhang, Ziwen Xu, Yunzhi Yao, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen
PhiNets: Brain-inspired Non-contrastive Learning Based on Temporal Prediction Hypothesis
Satoki Ishikawa, Makoto Yamada, Han Bao, Yuki Takezawa