Context Editing
In-context editing (ICE) updates the knowledge of large language models (LLMs) without full retraining by modifying the model's behavior at inference time rather than altering its internal parameters. Current research focuses on improving ICE's accuracy and efficiency through decoding strategies that selectively bias relevant tokens, addressing problems such as lexical bias and "stubborn knowledge" (pre-trained information that resists modification); a minimal sketch of the logit-biasing idea appears below. These advances promise to keep LLM knowledge current while reducing the computational cost of knowledge updates, benefiting both fundamental LLM research and practical applications that require dynamic knowledge adaptation.
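To make the decoding-side mechanism concrete: at each generation step, tokens belonging to key entities of the edited fact receive a bonus on their logits, so the new knowledge is favored without any weight updates. The sketch below illustrates this idea with the Hugging Face transformers LogitsProcessor API; the model name, entity, and bias strength are placeholder assumptions for illustration, not details taken from the papers listed here.

```python
# Minimal sketch of decoding-time knowledge editing via logit biasing.
# Model name, entity, and bias value are illustrative placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class EntityBiasProcessor(LogitsProcessor):
    """Adds a fixed bonus to the logits of tokens that spell out key
    entities of the edited fact, steering generation toward the new
    knowledge without touching model weights."""
    def __init__(self, entity_token_ids, bias=4.0):
        self.entity_token_ids = entity_token_ids  # flat list of token ids
        self.bias = bias

    def __call__(self, input_ids, scores):
        # Boost the biased tokens at every decoding step.
        scores[:, self.entity_token_ids] += self.bias
        return scores

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical edited fact: the entity we want generation to prefer.
entity_ids = tokenizer.encode(" Paris", add_special_tokens=False)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=5,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
    logits_processor=LogitsProcessorList([EntityBiasProcessor(entity_ids)]),
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

A constant bonus like this can over-bias unrelated generations; the published methods are more selective, applying the bias adaptively based on how relevant the edited knowledge is to the current context.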
Papers
Adaptive Token Biaser: Knowledge Editing via Biasing Key Entities
Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Hongcheng Gao, Yilong Xu, Xueqi Cheng
Retrieval Meets Reasoning: Dynamic In-Context Editing for Long-Text Understanding
Weizhi Fei, Xueyan Niu, Guoqing Xie, Yanhua Zhang, Bo Bai, Lei Deng, Wei Han