Paper ID: 2301.04528
The Role of Interactive Visualization in Explaining (Large) NLP Models: from Data to Inference
Richard Brath, Daniel Keim, Johannes Knittel, Shimei Pan, Pia Sommerauer, Hendrik Strobelt
With a constant increase in learned parameters, modern neural language models become increasingly powerful. Yet, explaining these complex models' behavior remains a largely unsolved problem. In this paper, we discuss the role interactive visualization can play in explaining NLP models (XNLP). We motivate the use of visualization in relation to target users and common NLP pipelines. We also present several use cases to provide concrete examples of XNLP with visualization. Finally, we point out an extensive list of research opportunities in this field.
Submitted: Jan 11, 2023