Fact Checking
Fact-checking research aims to automate the verification of claims, helping to combat the spread of misinformation across various media. Current efforts focus on improving evidence retrieval with techniques such as contrastive learning, and on leveraging large language models (LLMs) for claim verification and explanation generation, often incorporating knowledge graphs and multimodal data (text and images). These advances are crucial for improving the accuracy and efficiency of fact-checking, with implications for journalism, public health communication, and broader efforts to mitigate the impact of misinformation.
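To make the retrieve-then-verify pattern described above concrete, here is a minimal illustrative sketch (not taken from any of the listed papers): candidate evidence passages are ranked against a claim with TF-IDF cosine similarity, and the top passages are folded into a verification prompt for an LLM. The example claim, evidence texts, and prompt wording are assumptions for illustration, and the actual LLM call is omitted because it depends on the provider.

```python
# Illustrative sketch: rank evidence for a claim, then build an LLM verification prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical claim and evidence corpus for demonstration only.
claim = "Vitamin C cures COVID-19."
corpus = [
    "Clinical trials have found no evidence that vitamin C cures COVID-19.",
    "Vaccines significantly reduce the risk of severe COVID-19.",
    "Vitamin C supports normal immune function but is not an antiviral treatment.",
]

# Rank evidence passages by cosine similarity to the claim.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([claim] + corpus)
scores = cosine_similarity(matrix[0], matrix[1:]).flatten()
top_evidence = [corpus[i] for i in scores.argsort()[::-1][:2]]

# Zero-/few-shot verification prompt; sending it to an LLM is left to the reader.
prompt = (
    "Claim: " + claim + "\n"
    "Evidence:\n- " + "\n- ".join(top_evidence) + "\n"
    "Label the claim as SUPPORTED, REFUTED, or NOT ENOUGH INFO and explain briefly."
)
print(prompt)
```

In practice, the papers below replace the TF-IDF step with stronger retrievers (e.g., contrastively trained dense encoders or web search agents) and rely on the LLM's in-context reasoning for the final label and explanation.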
Papers
Enhancing Natural Language Inference Performance with Knowledge Graph for COVID-19 Automated Fact-Checking in Indonesian Language
Arief Purnama Muharram, Ayu Purwarianti
Evidence-backed Fact Checking using RAG and Few-Shot In-Context Learning with LLMs
Ronit Singhal, Pransh Patwa, Parth Patwa, Aman Chadha, Amitava Das
Zero-Shot Learning and Key Points Are All You Need for Automated Fact-Checking
Mohammad Ghiasvand Mohammadkhani, Ali Ghiasvand Mohammadkhani, Hamid Beigy
Web Retrieval Agents for Evidence-Based Misinformation Detection
Jacob-Junqi Tian, Hao Yu, Yury Orlovskiy, Tyler Vergho, Mauricio Rivera, Mayank Goel, Zachary Yang, Jean-Francois Godbout, Reihaneh Rabbany, Kellin Pelrine
Similarity over Factuality: Are we making progress on multimodal out-of-context misinformation detection?
Stefanos-Iordanis Papadopoulos, Christos Koutlis, Symeon Papadopoulos, Panagiotis C. Petrantonakis
MetaSumPerceiver: Multimodal Multi-Document Evidence Summarization for Fact-Checking
Ting-Chih Chen, Chia-Wei Tang, Chris Thomas