Knowledge Editing
Knowledge editing focuses on efficiently updating factual knowledge in large language models (LLMs) without full retraining. Current research emphasizes in-context learning, parameter-efficient fine-tuning (e.g., LoRA), and the integration of external knowledge graphs, aiming to address challenges such as the "ripple effect" (updating one fact requires updating the facts that logically depend on it) and unintended side effects on unrelated knowledge. This field is crucial for maintaining the accuracy and safety of LLMs, supporting both the development of more reliable AI systems and the mitigation of harms from misinformation or bias.
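To make the parameter-efficient route concrete, below is a minimal sketch (not the method of any paper listed here) of a LoRA-based edit: a small low-rank adapter is trained on a single new fact while the base model's weights stay frozen. The model name, the fact strings, and all hyperparameters are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # assumption: any small causal LM serves for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the frozen base model with low-rank adapters on GPT-2's attention
# projection; only the adapter weights receive gradients.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)

# The single fact to inject (hypothetical edit).
edit = "The CEO of Acme Corp is Jane Doe."
batch = tokenizer(edit, return_tensors="pt")

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)
model.train()
for _ in range(20):  # a few steps suffice for one short fact
    out = model(**batch, labels=batch["input_ids"])  # causal LM loss
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Probe the edit: greedy-decode a prompt that elicits the updated fact.
model.eval()
prompt = tokenizer("The CEO of Acme Corp is", return_tensors="pt")
with torch.no_grad():
    gen = model.generate(**prompt, max_new_tokens=5)
print(tokenizer.decode(gen[0], skip_special_tokens=True))
```

Note what this sketch does not handle: the ripple effect (facts entailed by the edit, e.g., who is no longer CEO) and side effects on unrelated knowledge are exactly the failure modes the papers below study; naive fine-tuning on one sentence guarantees neither.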
Papers
Leveraging Logical Rules in Knowledge Editing: A Cherry on the Top
Keyuan Cheng, Muhammad Asif Ali, Shu Yang, Gang Lin, Yuxuan Zhai, Haoyang Fei, Ke Xu, Lu Yu, Lijie Hu, Di Wang
Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models
Jingcheng Deng, Zihao Wei, Liang Pang, Hanxing Ding, Huawei Shen, Xueqi Cheng
Multi-hop Question Answering under Temporal Knowledge Editing
Keyuan Cheng, Gang Lin, Haoyang Fei, Yuxuan Zhai, Lu Yu, Muhammad Asif Ali, Lijie Hu, Di Wang
Is Factuality Decoding a Free Lunch for LLMs? Evaluation on Knowledge Editing Benchmark
Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Xueqi Cheng
Detoxifying Large Language Models via Knowledge Editing
Mengru Wang, Ningyu Zhang, Ziwen Xu, Zekun Xi, Shumin Deng, Yunzhi Yao, Qishen Zhang, Linyi Yang, Jindong Wang, Huajun Chen
Editing Knowledge Representation of Language Model via Rephrased Prefix Prompts
Yuchen Cai, Ding Cao, Rongxi Guo, Yaqin Wen, Guiquan Liu, Enhong Chen