Paper ID: 2311.10905

Flexible Model Interpretability through Natural Language Model Editing

Karel D'Oosterlinck, Thomas Demeester, Chris Develder, Christopher Potts

Model interpretability and model editing are crucial goals in the age of large language models. Interestingly, these two goals are linked: if a method can systematically edit model behavior with regard to a human concept of interest, such an editing method can also help make internal representations more interpretable by pointing to the relevant representations and manipulating them systematically.

Submitted: Nov 17, 2023