Fallacy Detection
Fallacy detection in natural language aims to automatically identify flaws in reasoning within text, with the goals of combating the spread of misinformation and improving argument quality. Current research focuses on developing and refining large language models (LLMs) for this task, often incorporating techniques such as few-shot learning, instruction-tuning, and multitask learning, alongside the creation of new, high-quality datasets annotated with various fallacy types. This work matters because accurate fallacy detection can enhance fact-checking, improve online discourse, and contribute to a better understanding of human reasoning and its limitations.
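To make the few-shot approach mentioned above concrete, here is a minimal sketch of how a few-shot fallacy-classification prompt might be assembled and its output parsed. The fallacy labels and example statements are illustrative only (not drawn from any dataset cited here), and the actual call to a language model is left as a placeholder, since the model API varies by system.

```python
# Illustrative few-shot prompting for fallacy classification.
# Labels and examples are hypothetical; the LLM call itself is omitted.

FEW_SHOT_EXAMPLES = [
    ("Everyone believes it, so it must be true.", "ad populum"),
    ("If we allow this, society will collapse entirely.", "slippery slope"),
    ("You're wrong because you're not an expert.", "ad hominem"),
]

def build_prompt(text: str) -> str:
    """Assemble a few-shot prompt: an instruction, labeled examples,
    then the query statement with an empty label for the model to fill."""
    lines = ["Classify the logical fallacy in each statement."]
    for example, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Statement: {example}\nFallacy: {label}")
    lines.append(f"Statement: {text}\nFallacy:")
    return "\n\n".join(lines)

def parse_label(completion: str) -> str:
    """Take the first line of the model's completion as the predicted label."""
    return completion.strip().splitlines()[0].strip().lower()
```

In practice, `build_prompt(...)` would be sent to an LLM and the completion passed through `parse_label`; instruction-tuned variants fold the task description into training rather than the prompt.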
Papers
MAFALDA: A Benchmark and Comprehensive Study of Fallacy Detection and Classification
Chadi Helwe, Tom Calamai, Pierre-Henri Paris, Chloé Clavel, Fabian Suchanek
Large Language Models are Few-Shot Training Example Generators: A Case Study in Fallacy Recognition
Tariq Alhindi, Smaranda Muresan, Preslav Nakov