Explainable Fake News Detection

Explainable fake news detection aims to identify false information while providing transparent justifications for each classification, addressing the limitations of opaque "black box" methods. Current research focuses on leveraging large language models (LLMs) within frameworks such as generative adversarial networks (GANs) and on incorporating multimodal data (text, social context, images) to improve both detection accuracy and the quality of the generated explanations. This work is crucial for mitigating the spread of misinformation and for building trust in automated fact-checking systems, with implications for social media platforms, news organizations, and public discourse.
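To make the "classification plus justification" interface concrete, here is a minimal, hypothetical Python sketch. It is not any particular paper's method: it stands in for an LLM or multimodal model with hand-written lexical cues, and the `Verdict` type, `detect` function, and cue list are all illustrative assumptions. The point is only that an explainable detector returns evidence alongside its label rather than a bare prediction.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str              # "fake" or "real"
    confidence: float       # rough score in [0, 1]
    rationale: list[str]    # human-readable evidence for the label

# Toy stand-in for a learned model: sensationalist phrases that
# often correlate with low-credibility content.
SENSATIONAL_CUES = ("shocking", "miracle cure", "you won't believe", "secret they")

def detect(text: str) -> Verdict:
    """Classify a headline and report which cues fired as the rationale."""
    lowered = text.lower()
    hits = [cue for cue in SENSATIONAL_CUES if cue in lowered]
    if hits:
        confidence = min(1.0, 0.5 + 0.2 * len(hits))
        return Verdict("fake", confidence,
                       [f"sensationalist cue present: {h!r}" for h in hits])
    return Verdict("real", 0.5, ["no sensationalist cues found"])
```

A real system would replace the cue list with a trained model and the rationale with model-generated explanations (e.g., attended evidence sentences or an LLM-written justification), but the output contract stays the same.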

Papers