Model Editing
Model editing focuses on efficiently updating or correcting the knowledge embedded within large language models (LLMs) without complete retraining. Current research emphasizes methods that precisely target and modify specific model parameters or internal representations, often employing techniques such as rank-one updates (ROME), memory-efficient multi-fact editing (MEMIT), or adapter networks, while addressing challenges such as model collapse, catastrophic forgetting, and unintended side effects on unrelated knowledge. The field is significant because it offers an efficient and scalable way to keep LLMs accurate and up to date, supporting the development of more reliable AI systems while reducing the computational cost of knowledge updates.
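To make the rank-one idea concrete, the sketch below shows the core linear-algebra step behind ROME-style edits: a single weight matrix is nudged so that one key direction maps to a new value while orthogonal directions are left untouched. This is a minimal illustration, not the actual ROME algorithm (which also uses a key covariance statistic to localize the edit); the function name `rank_one_edit` and the variables `k` and `v_new` are illustrative assumptions.

```python
# Minimal sketch of a rank-one weight edit (illustrative; not the exact ROME update).
import torch

def rank_one_edit(W: torch.Tensor, k: torch.Tensor, v_new: torch.Tensor) -> torch.Tensor:
    """Return an edited copy of a linear layer's weight matrix W (d_out x d_in).

    k     : key vector (d_in) representing the fact's lookup direction.
    v_new : desired output (d_out) the edited layer should produce for k.
    The update is rank-one: W' = W + (v_new - W k) k^T / (k^T k),
    so W' k == v_new while inputs orthogonal to k are unaffected.
    """
    residual = v_new - W @ k                      # what the current weights get wrong for this key
    update = torch.outer(residual, k) / (k @ k)   # rank-one correction along k
    return W + update

# Usage: edit a toy 4x3 "layer" so that key k maps to the new value.
W = torch.randn(4, 3)
k = torch.randn(3)
v_new = torch.randn(4)
W_edited = rank_one_edit(W, k, v_new)
assert torch.allclose(W_edited @ k, v_new, atol=1e-5)
```

Because the correction is rank-one, it touches a single layer and scales with the size of that layer rather than the whole model, which is what makes this family of methods cheap relative to retraining.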
Papers
Resolving Lexical Bias in Edit Scoping with Projector Editor Networks
Hammad Rizwan, Domenic Rosati, Ga Wu, Hassan Sajjad
Attribution Analysis Meets Model Editing: Advancing Knowledge Correction in Vision Language Models with VisEdit
Qizhou Chen, Taolin Zhang, Chengyu Wang, Xiaofeng He, Dakan Wang, Tingting Liu