Fine-Grained Claim Dependency
Fine-grained claim dependency research focuses on improving the ability of large language models (LLMs) to understand and reason over complex textual information, particularly in specialized domains such as patents or social media. Current research emphasizes graph-based methods that capture the intricate dependency relationships between claims within a document, often combined with fine-tuning on instruction-following datasets such as FLAN. This work aims to improve the accuracy and efficiency of tasks such as document understanding, coreference resolution, and fact-checking, ultimately leading to more robust and reliable natural language processing applications.
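As a minimal sketch of the graph-based idea in the patent setting: a dependent claim typically recites the claim it refines (e.g. "The device of claim 1, wherein ..."), so a dependency graph can be extracted with a simple pattern match. The function name, the regex heuristic, and the example claims below are illustrative assumptions, not taken from any particular paper.

```python
import re
from collections import defaultdict

def build_claim_graph(claims):
    """Build a claim dependency graph from numbered patent claims.

    `claims` maps claim number -> claim text. A claim whose text
    recites "claim N" is treated as depending on claim N; claims
    with no such reference are independent claims.
    """
    graph = defaultdict(list)  # parent claim -> list of dependent claims
    for num, text in claims.items():
        for ref in re.findall(r"claims?\s+(\d+)", text, flags=re.IGNORECASE):
            parent = int(ref)
            if parent != num:  # ignore degenerate self-references
                graph[parent].append(num)
    return dict(graph)

# Hypothetical example claims, for illustration only.
claims = {
    1: "A device comprising a sensor and a processor.",
    2: "The device of claim 1, wherein the sensor is optical.",
    3: "The device of claim 2, further comprising a display.",
}
print(build_claim_graph(claims))  # {1: [2], 2: [3]}
```

In practice, the resulting graph can be serialized (e.g. as edge lists) into the prompt or the fine-tuning data so the model sees explicit claim-to-claim structure rather than raw concatenated text.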
Papers
September 17, 2024
September 9, 2024
April 22, 2024
March 19, 2024
September 17, 2023
July 5, 2023