Sequential Editing

Sequential editing modifies large language models (LLMs) and other generative models iteratively, updating specific facts or attributes without retraining from scratch. Because edits accumulate, current research emphasizes making each edit robust and efficient: mitigating catastrophic forgetting and preserving the model's general capabilities through techniques such as adapter-based methods, prompt-based editing, and careful control of parameter perturbations (see the sketch below). These techniques matter for maintaining and updating AI systems over time, since they let new knowledge be integrated far more cheaply than full retraining, with applications spanning image generation, speech recognition, and text processing.
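To make the "controlled parameter perturbation" idea concrete, the sketch below applies a sequence of edits to a toy PyTorch layer, fitting each new (input, target) "fact" by gradient descent and then projecting the weights back onto a norm ball around the pre-edit weights so that accumulated edits cannot drift arbitrarily far from the base model. This is a minimal illustration, not any specific published method; the model, `apply_edit`, and the `norm_budget` parameter are all hypothetical choices for the example.

```python
# Minimal sketch: sequential editing as norm-constrained parameter
# perturbation. The toy linear layer stands in for an edited MLP block;
# `apply_edit` and `norm_budget` are illustrative names, not a real API.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(16, 16)
frozen_reference = model.weight.detach().clone()  # pre-edit weights


def apply_edit(model, x, target, steps=50, lr=1e-2, norm_budget=0.5):
    """Fit one (input, target) fact by perturbing the layer's weights,
    then clamp the total drift from the pre-edit weights so that a
    sequence of edits stays close to the base model."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), target)
        loss.backward()
        opt.step()
        with torch.no_grad():
            drift = model.weight - frozen_reference
            norm = drift.norm()
            if norm > norm_budget:  # project back onto the norm ball
                model.weight.copy_(
                    frozen_reference + drift * (norm_budget / norm)
                )
    return loss.item()


# Sequential edits: each "fact" is a random (input, target) pair.
for i in range(5):
    x = torch.randn(1, 16)
    target = torch.randn(1, 16)
    final_loss = apply_edit(model, x, target)
    total_drift = (model.weight - frozen_reference).norm().item()
    print(f"edit {i}: loss={final_loss:.4f}, drift from base={total_drift:.3f}")
```

The projection step is the key design choice: without it, each edit compounds the previous ones and the model's original behavior degrades, which is the catastrophic-forgetting failure mode that adapter-based and perturbation-bounded methods aim to avoid.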

Papers