Knowledge Editing
Knowledge editing focuses on efficiently updating the factual knowledge stored in large language models (LLMs) without full retraining. Current research emphasizes methods based on in-context learning, parameter-efficient fine-tuning (e.g., LoRA), and integration with external knowledge graphs. These methods must contend with the "ripple effect", where updating one fact requires updating logically related facts, and with unintended side effects on knowledge that should remain unchanged. The field is crucial for maintaining the accuracy and safety of LLMs, supporting the development of more reliable AI systems and mitigating harms from misinformation or bias.
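As a rough illustration of the in-context approach mentioned above, the sketch below keeps edited facts in an external memory and prepends the relevant ones to the prompt instead of changing model weights. It is a minimal sketch, not the method of any paper listed here: the names `FactEdit`, `InContextEditor`, and `generate` are hypothetical, retrieval is naive substring matching, and ripple effects are not handled.

```python
# Minimal sketch of in-context knowledge editing: edited facts live in an
# external memory, and those relevant to a query are prepended to the prompt.
# `generate` is a hypothetical stand-in for any LLM completion call.

from dataclasses import dataclass

@dataclass
class FactEdit:
    subject: str      # entity being edited, e.g. "the Eiffel Tower"
    relation: str     # relation name, e.g. "location"
    new_object: str   # updated value, e.g. "Rome"

    def as_statement(self) -> str:
        return f"The {self.relation} of {self.subject} is {self.new_object}."

class InContextEditor:
    def __init__(self, generate):
        self.generate = generate  # callable: prompt str -> completion str
        self.edits: list[FactEdit] = []

    def apply_edit(self, edit: FactEdit) -> None:
        self.edits.append(edit)

    def query(self, question: str) -> str:
        # Naive retrieval: include edits whose subject appears in the question.
        # Real systems use dense retrieval and also propagate ripple effects
        # (facts entailed by the edit), which this sketch does not.
        relevant = [e.as_statement() for e in self.edits
                    if e.subject.lower() in question.lower()]
        prompt = (f"Updated facts: {' '.join(relevant)}\n"
                  f"Answer using the updated facts.\nQ: {question}\nA:")
        return self.generate(prompt)

# Usage with a dummy generator (swap in a real LLM call):
if __name__ == "__main__":
    editor = InContextEditor(generate=lambda p: f"[LLM completion for]\n{p}")
    editor.apply_edit(FactEdit("the Eiffel Tower", "location", "Rome"))
    print(editor.query("Where is the Eiffel Tower located?"))
```

The counterfactual edit in the usage example follows the convention of knowledge-editing benchmarks, which test whether a deliberately altered fact overrides the model's stored knowledge.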
Papers
Benchmarking Chinese Knowledge Rectification in Large Language Models
Tianhe Lu, Jizhan Fang, Yunzhi Yao, Xin Xu, Ningyu Zhang, Huajun Chen
OneEdit: A Neural-Symbolic Collaboratively Knowledge Editing System
Ningyu Zhang, Zekun Xi, Yujie Luo, Peng Wang, Bozhong Tian, Yunzhi Yao, Jintian Zhang, Shumin Deng, Mengshu Sun, Lei Liang, Zhiqiang Zhang, Xiaowei Zhu, Jun Zhou, Huajun Chen
Language Modeling with Editable External Knowledge
Belinda Z. Li, Emmy Liu, Alexis Ross, Abbas Zeitoun, Graham Neubig, Jacob Andreas
MEMLA: Enhancing Multilingual Knowledge Editing with Neuron-Masked Low-Rank Adaptation
Jiakuan Xie, Pengfei Cao, Yuheng Chen, Yubo Chen, Kang Liu, Jun Zhao
In-Context Editing: Learning Knowledge from Self-Induced Distributions
Siyuan Qi, Bangcheng Yang, Kailin Jiang, Xiaobo Wang, Jiaqi Li, Yifan Zhong, Yaodong Yang, Zilong Zheng