Fact Verifier
Fact verification systems aim to automatically assess the truthfulness of online claims, helping to combat the spread of misinformation. Current research relies heavily on large language models (LLMs), exploring both agentic approaches that mimic human fact-checking workflows and methods that improve LLM performance through knowledge transfer or logic-based control. These efforts are crucial for mitigating the impact of false information on public discourse and decision-making, with ongoing work focused on enhancing the accuracy, robustness, and explainability of automated verification systems.
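To make the agentic workflow concrete, below is a minimal sketch of the typical claim-verification loop: retrieve evidence for a claim, prompt an LLM to compare the claim against that evidence, and emit a labeled verdict. All names here (search_evidence, llm_generate, Verdict) are hypothetical placeholders for illustration, not the API of any specific paper or library.

```python
# Hedged sketch of an agentic fact-verification pipeline.
# Both search_evidence and llm_generate are stand-ins: a real system would
# call a retriever / search index and an actual LLM backend.

from dataclasses import dataclass


@dataclass
class Verdict:
    label: str      # "SUPPORTED", "REFUTED", or "NOT ENOUGH INFO"
    rationale: str  # short explanation grounded in the retrieved evidence


def search_evidence(claim: str, top_k: int = 3) -> list[str]:
    """Placeholder retriever; returns dummy passages so the sketch runs offline."""
    return [f"(retrieved passage {i + 1} relevant to: {claim})" for i in range(top_k)]


def llm_generate(prompt: str) -> str:
    """Placeholder for a call to any LLM; returns a canned 'LABEL | rationale' string."""
    return "NOT ENOUGH INFO | The placeholder evidence neither confirms nor refutes the claim."


def verify_claim(claim: str) -> Verdict:
    evidence = search_evidence(claim)
    prompt = (
        "You are a fact-checking assistant.\n"
        f"Claim: {claim}\n"
        "Evidence:\n" + "\n".join(f"- {p}" for p in evidence) + "\n"
        "Answer with LABEL | rationale, where LABEL is "
        "SUPPORTED, REFUTED, or NOT ENOUGH INFO."
    )
    label, _, rationale = llm_generate(prompt).partition("|")
    return Verdict(label=label.strip(), rationale=rationale.strip())


if __name__ == "__main__":
    print(verify_claim("The Eiffel Tower is located in Berlin."))
```

In practice, agentic variants iterate this loop: the model can decompose a claim into sub-claims, issue follow-up retrieval queries, and aggregate per-sub-claim verdicts before producing a final, explainable judgment.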
Papers