Rank-One Model Editing
Rank-one model editing (ROME) is a technique for directly modifying the parameters of large language models (LLMs) to update or correct their knowledge without retraining. Current research focuses on improving ROME's robustness, addressing issues like model collapse caused by inconsistent parameter updates or the inherent difficulty of editing complex or abstract concepts. This research is significant because it offers a potentially faster and more efficient alternative to retraining for adapting LLMs to new information or correcting factual errors, impacting both the efficiency of LLM development and their reliability in practical applications.
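At its core, ROME treats an MLP weight matrix as a linear associative memory and inserts a new key-value association with a single rank-one update: the edited weights map a chosen key vector k to a desired value vector v while disturbing other key directions as little as possible. The sketch below illustrates this closed-form update with NumPy; the function name and the use of a plain covariance matrix C for the key statistics are illustrative assumptions, not the exact implementation from the ROME paper.

```python
import numpy as np

def rank_one_edit(W, k, v, C):
    """Illustrative ROME-style rank-one update.

    W: (d_out, d_in) weight matrix to edit.
    k: (d_in,) key vector (hidden state encoding the edited subject).
    v: (d_out,) target value vector the edited fact should produce.
    C: (d_in, d_in) covariance of keys, estimated from a corpus in the
       real method; it weights how "important" each key direction is.

    Returns W_new such that W_new @ k == v exactly, via a rank-one
    correction W_new = W + (v - W k)(C^{-1} k)^T / (k^T C^{-1} k).
    """
    Cinv_k = np.linalg.solve(C, k)       # C^{-1} k, without forming C^{-1}
    residual = v - W @ k                 # what the current weights get wrong
    # Outer product of two vectors => the correction has rank one.
    return W + np.outer(residual, Cinv_k) / (k @ Cinv_k)
```

Because the correction is an outer product scaled so that it contributes exactly `residual` along the direction of k, the edited matrix satisfies `W_new @ k ≈ v` while keys nearly orthogonal to `C^{-1} k` are barely affected, which is why a single edit can be cheap compared with retraining.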
Papers
June 25, 2024
June 17, 2024
March 19, 2024
March 11, 2024
March 1, 2024
February 15, 2024
January 15, 2024
December 7, 2023
October 30, 2023
December 9, 2022