Context Editing

In-context editing (ICE) aims to update the knowledge of large language models (LLMs) efficiently, without full retraining, by modifying the model's behavior at inference time rather than its internal parameters. Current research focuses on improving ICE's accuracy and efficiency through decoding strategies that selectively bias edit-relevant tokens, addressing problems such as lexical bias and "stubborn knowledge" (pre-trained information that resists modification). These advances promise to keep LLM knowledge current at a fraction of the computational cost of full knowledge updates, benefiting both fundamental LLM research and applications that require dynamic knowledge adaptation.
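
To make the decoding-time idea concrete, below is a minimal sketch of one way such a selective bias could work: next-token logits are computed with and without the edit prepended to the prompt, and the contrastive difference is applied only to the tokens most favored by the edited distribution, limiting lexical bias from the edit text itself. The model name (`gpt2`), the `alpha` scaling factor, and the `top_k` relevance filter are illustrative assumptions, not the method of any specific paper.

```python
# Sketch of decoding-time knowledge editing via selective contrastive bias.
# Assumptions (not from any specific paper): a HuggingFace-style causal LM,
# greedy decoding, and a top-k filter to pick the "edit-relevant" tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()


@torch.no_grad()
def edited_generate(prompt: str, edit_context: str,
                    alpha: float = 2.0, top_k: int = 50,
                    max_new_tokens: int = 20) -> str:
    """Greedy decoding where tokens favored by the edit context are boosted."""
    plain_ids = tokenizer(prompt, return_tensors="pt").input_ids
    edited_ids = tokenizer(edit_context + "\n" + prompt,
                           return_tensors="pt").input_ids
    generated = []
    for _ in range(max_new_tokens):
        plain_logits = model(plain_ids).logits[:, -1, :]
        edited_logits = model(edited_ids).logits[:, -1, :]
        # Selective bias: only the top-k tokens under the edited distribution
        # receive the contrastive boost; all other tokens keep their plain
        # logits, which limits lexical bias from the edit text itself.
        boost = alpha * (edited_logits - plain_logits)
        mask = torch.zeros_like(plain_logits, dtype=torch.bool)
        mask.scatter_(1, edited_logits.topk(top_k, dim=-1).indices, True)
        logits = torch.where(mask, plain_logits + boost, plain_logits)
        next_id = logits.argmax(dim=-1, keepdim=True)
        generated.append(next_id.item())
        plain_ids = torch.cat([plain_ids, next_id], dim=-1)
        edited_ids = torch.cat([edited_ids, next_id], dim=-1)
    return tokenizer.decode(generated)


# Example: steer the model toward an updated fact supplied only in context.
print(edited_generate(
    prompt="The CEO of ExampleCorp is",
    edit_context="Fact update: the CEO of ExampleCorp is Jane Doe.",
))
```

Restricting the boost to a top-k set is one simple way to keep the edit from overwhelming fluency; published methods differ in how they identify which tokens are relevant to the edit.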

Papers