Paper ID: 2408.01963 • Published Aug 4, 2024
A Novel Metric for Measuring the Robustness of Large Language Models in Non-adversarial Scenarios
Samuel Ackerman, Ella Rabinovich, Eitan Farchi, Ateret Anaby-Tavor
We evaluate the robustness of several large language models on multiple datasets. Robustness here refers to the relative insensitivity of a model's answers to meaning-preserving variants of its input. Benchmark datasets are constructed by introducing naturally occurring, non-malicious perturbations, or by generating semantically equivalent paraphrases of input questions or statements. We further propose a novel metric for assessing a model's robustness, and demonstrate its benefits in the non-adversarial scenario through empirical evaluation of several models on the created datasets.
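To make the setup concrete, below is a minimal sketch of one simple robustness measure: the fraction of paraphrase groups on which a model's answer stays identical across all meaning-preserving variants of the same question. This is an illustrative baseline only, not the paper's proposed metric; the function name `consistency_rate` and the (group_id, answer) data layout are assumptions for the sake of the example.

```python
from collections import defaultdict

def consistency_rate(records):
    """Fraction of paraphrase groups whose answers are identical
    across all meaning-preserving variants of the input.

    `records` is a list of (group_id, answer) pairs, where each pair
    in a group is the model's answer to one paraphrase of the same
    underlying question. (Illustrative baseline, not the paper's metric.)
    """
    groups = defaultdict(set)
    for group_id, answer in records:
        # Light normalization so trivial formatting differences don't count.
        groups[group_id].add(answer.strip().lower())
    consistent = sum(1 for answers in groups.values() if len(answers) == 1)
    return consistent / len(groups)

# Example: two questions, each asked via two paraphrases.
records = [
    ("q1", "Paris"), ("q1", "Paris"),  # answer stable under rephrasing
    ("q2", "4"),     ("q2", "five"),   # answer flips -> not robust
]
print(consistency_rate(records))  # 0.5
```

A plain consistency rate like this ignores answer correctness and graded similarity, which is presumably part of what a more refined robustness metric would address.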