Knowledge Editing
Knowledge editing focuses on efficiently updating the factual knowledge stored in large language models (LLMs) without retraining them from scratch. Current research emphasizes methods based on in-context learning, parameter-efficient fine-tuning techniques such as LoRA, and the integration of external knowledge graphs, aiming to address challenges like the "ripple effect" (where updating one fact necessitates updating logically related facts) and unintended side effects on knowledge that should remain unchanged. This work is crucial for maintaining the accuracy and safety of LLMs, informing the development of more reliable AI systems and helping to mitigate harms associated with misinformation and bias.
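To make the parameter-efficient route concrete, here is a minimal sketch of a LoRA-based edit using Hugging Face's transformers and peft libraries. The model name, the counterfactual edit string, and the hyperparameters are illustrative assumptions, not a method from the papers listed below; note that this naive approach injects only the stated fact and does nothing to propagate the ripple effect to related facts.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative choices (assumptions): a small model and a counterfactual edit.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA: train small low-rank adapters on the attention projections while the
# base weights stay frozen. "c_attn" is GPT-2's fused QKV projection module.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.0,
    target_modules=["c_attn"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# The single fact we want the model to adopt (a deliberate counterfactual).
edit = "The capital of France is Lyon."
inputs = tokenizer(edit, return_tensors="pt")

# Optimize only the trainable (adapter) parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
model.train()
for _ in range(20):  # a few gradient steps suffice to memorize one fact
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Quick check: greedy continuation of the edited prompt.
model.eval()
prompt = tokenizer("The capital of France is", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=3)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Training only the low-rank adapter keeps the edit cheap and reversible: disabling or deleting the adapter restores the original model, which is one reason parameter-efficient methods are attractive for knowledge editing.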
Papers
Learning to Edit: Aligning LLMs with Knowledge Editing
Yuxin Jiang, Yufei Wang, Chuhan Wu, Wanjun Zhong, Xingshan Zeng, Jiahui Gao, Liangyou Li, Xin Jiang, Lifeng Shang, Ruiming Tang, Qun Liu, Wei Wang
Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large Language Models
Tianjie Ju, Yijin Chen, Xinwei Yuan, Zhuosheng Zhang, Wei Du, Yubin Zheng, Gongshen Liu