Coreference Chain

Coreference resolution aims to identify all mentions of the same entity within a text and group them into coreference chains. Current research focuses on improving the accuracy and efficiency of coreference resolution models, often with neural architectures such as graph autoencoders and transformer-based models, and on the particular challenges posed by singleton mentions (entities mentioned only once) and by long documents. These advances matter for downstream NLP tasks such as relation extraction and entity typing, and for building more robust natural language understanding systems across diverse domains, including dialogues and literary texts. Research is also exploring more sophisticated evaluation metrics that capture the complexities of coreference beyond simple accuracy scores.
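
The chain representation itself is simple: a document's mentions are partitioned into clusters, where each cluster (chain) holds the spans that refer to one entity, and singletons form one-element chains. The sketch below is only illustrative (hand-annotated spans and hypothetical helper names, not taken from any cited paper); it shows this representation and the mention-to-chain lookup that downstream tasks such as relation extraction typically consume.

```python
# Illustrative sketch: coreference chains as clusters of mention spans.
# The chains here are hand-annotated for the example sentence; a real
# system would predict them with a learned (e.g. transformer-based) model.

from typing import Dict, List, Tuple

Span = Tuple[int, int]  # (start, end) token offsets, end exclusive

tokens = ["Marie", "Curie", "won", "the", "prize", ".",
          "She", "shared", "it", "with", "Pierre", "."]

# Each chain is the list of spans that refer to the same entity.
# "Pierre" is a singleton: an entity mentioned only once.
chains: List[List[Span]] = [
    [(0, 2), (6, 7)],    # {Marie Curie, She}
    [(3, 5), (8, 9)],    # {the prize, it}
    [(10, 11)],          # {Pierre} (singleton)
]

def mention_to_chain(chains: List[List[Span]]) -> Dict[Span, int]:
    """Map every mention span to the id of its coreference chain."""
    lookup: Dict[Span, int] = {}
    for chain_id, chain in enumerate(chains):
        for span in chain:
            lookup[span] = chain_id
    return lookup

lookup = mention_to_chain(chains)
for (start, end), chain_id in sorted(lookup.items()):
    print(f"chain {chain_id}: {' '.join(tokens[start:end])}")
```

A downstream component such as an entity typer can use the lookup to assign one type per chain rather than per mention, which is one reason chain quality directly affects those tasks.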

Papers