Dialogue Coreference
Dialogue coreference research aims to enable systems to track the referents of entities across multiple turns of a conversation, mirroring the human ability to maintain context and determine what pronouns and other referring expressions point to. Current work emphasizes improving the safety and robustness of large language models (LLMs) at coreference in multi-turn dialogue, typically building on transformer architectures and exploring techniques such as attention head manipulation to boost resolution accuracy. Accurate dialogue coreference is essential for natural, reliable conversational AI, with applications ranging from better chatbots to multimodal models that must ground referring expressions in visual and textual input simultaneously.
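To make the task concrete, the sketch below shows a toy pronoun tracker that resolves each pronoun to the most recently mentioned compatible entity across turns. Everything here is hypothetical and purely illustrative: the `DialogueCorefTracker` and `Mention` classes, the tiny `PRONOUNS` lexicon, and the recency rule are not drawn from any system described above; real resolvers score candidate antecedents with learned transformer representations rather than hand-written rules.

```python
import re
from dataclasses import dataclass, field

# Toy pronoun lexicon mapping surface forms to a gender/number tag
# (hypothetical, illustrative only).
PRONOUNS = {
    "he": "male", "him": "male", "his": "male",
    "she": "female", "her": "female",
    "it": "neuter", "they": "plural", "them": "plural",
}

@dataclass
class Mention:
    text: str    # surface form of the entity, e.g. "Alice"
    turn: int    # dialogue turn in which it was mentioned
    gender: str  # "male", "female", "neuter", or "plural"

@dataclass
class DialogueCorefTracker:
    """Tracks entity mentions across dialogue turns and resolves pronouns
    to the most recent compatible antecedent (a classic recency heuristic)."""
    mentions: list = field(default_factory=list)
    turn: int = 0

    def add_turn(self, utterance: str, entities: dict) -> dict:
        """Process one turn. `entities` maps new entity strings to a
        gender/number tag, e.g. {"Alice": "female"}. Returns a dict
        linking each pronoun in the utterance to its antecedent."""
        self.turn += 1
        links = {}
        # Resolve pronouns against antecedents from earlier turns.
        for token in re.findall(r"[A-Za-z']+", utterance):
            tag = PRONOUNS.get(token.lower())
            if tag is not None:
                links[token] = self._resolve(tag)
        # Register this turn's entity mentions as future antecedents.
        for surface, tag in entities.items():
            self.mentions.append(Mention(surface, self.turn, tag))
        return links

    def _resolve(self, tag: str):
        # Scan antecedents from most to least recent; take the first match.
        for m in reversed(self.mentions):
            if m.gender == tag:
                return m.text
        return None  # unresolved pronoun


tracker = DialogueCorefTracker()
tracker.add_turn("Alice met Bob at the station.", {"Alice": "female", "Bob": "male"})
print(tracker.add_turn("She gave him the keys.", {}))
# -> {'She': 'Alice', 'him': 'Bob'}
```

The recency heuristic also illustrates why the problem is hard: it breaks on examples that require world knowledge or discourse structure to disambiguate between compatible antecedents, which is precisely the gap that learned, transformer-based approaches aim to close.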