Entity Bias

Entity bias in large language models (LLMs) refers to the models' tendency to rely on spurious correlations between entities and their associated attributes or labels, rather than on the surrounding context, leading to inaccurate or unfair predictions. Current research focuses on identifying and mitigating this bias through methods such as causal intervention, counterfactual data augmentation, and improved decoding strategies that selectively bias key entities during generation. Addressing entity bias is crucial for improving the fairness, reliability, and generalizability of LLMs across applications including fake news detection, relation extraction, and entity typing.
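
As one concrete illustration, the sketch below shows a minimal form of counterfactual data augmentation: entity mentions in a training example are swapped for random same-type substitutes while the label is kept fixed, so a model trained on the augmented data cannot lean on memorized entity names and must attend to context instead. All names here (`ENTITY_POOL`, `counterfactual_example`) are illustrative assumptions, not an API from any particular paper.

```python
import random

# Hypothetical pools of substitute entities, keyed by entity type.
# In practice these would come from a knowledge base or the training corpus.
ENTITY_POOL = {
    "PERSON": ["Alice Moreau", "Ravi Patel", "Jonas Berg"],
    "ORG": ["Acme Corp", "Globex", "Initech"],
}

def counterfactual_example(text, entities, rng=random):
    """Replace each tagged entity with a random same-type substitute.

    `entities` is a list of (surface_form, entity_type) pairs assumed to
    appear verbatim in `text`. The task label is left unchanged, which
    breaks the spurious link between entity identity and label.
    """
    for surface, etype in entities:
        candidates = [e for e in ENTITY_POOL.get(etype, []) if e != surface]
        if candidates:
            text = text.replace(surface, rng.choice(candidates))
    return text

original = "Tim Cook announced that Apple will open a new campus."
entities = [("Tim Cook", "PERSON"), ("Apple", "ORG")]
print(counterfactual_example(original, entities))
# e.g. "Ravi Patel announced that Globex will open a new campus."
```

Causal-intervention approaches pursue the same goal at inference or representation level, for example by comparing predictions on the original and entity-substituted inputs and keeping only the context-driven component.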

Papers