Explainable Fake News Detection
Explainable fake news detection aims to identify false information while providing transparent justifications for its classifications, addressing the limitations of "black box" methods. Current research focuses on leveraging large language models (LLMs) within frameworks like generative adversarial networks (GANs) and incorporating multimodal data (text, social context, images) to improve both accuracy and the quality of explanations. This field is crucial for mitigating the spread of misinformation and building trust in automated fact-checking systems, with implications for social media platforms, news organizations, and public discourse.
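The core idea above, producing a label together with a human-readable justification rather than a bare score, can be illustrated with a minimal, transparent sketch. This toy rule-based classifier is purely illustrative: the cue lists, threshold, and `Verdict` structure are assumptions for demonstration, not the method of any paper in this area (real systems use LLMs and multimodal signals).

```python
# Toy sketch of "explainable" detection: a transparent classifier that
# returns both a label and the evidence behind it. Cue lists and the
# threshold are illustrative assumptions, not from any cited work.

from dataclasses import dataclass, field

SENSATIONAL_CUES = {"shocking", "miracle", "secret", "exposed"}
HEDGE_CUES = {"reportedly", "allegedly", "sources say", "rumor"}


@dataclass
class Verdict:
    label: str                 # "likely-fake" or "likely-real"
    score: float               # number of suspicious cues matched
    evidence: list = field(default_factory=list)  # human-readable reasons


def detect(claim: str, threshold: int = 2) -> Verdict:
    """Flag a claim and explain *why*, instead of a black-box score."""
    text = claim.lower()
    evidence = []
    for cue in SENSATIONAL_CUES:
        if cue in text:
            evidence.append(f"sensational wording: '{cue}'")
    for cue in HEDGE_CUES:
        if cue in text:
            evidence.append(f"unattributed sourcing: '{cue}'")
    label = "likely-fake" if len(evidence) >= threshold else "likely-real"
    return Verdict(label=label, score=float(len(evidence)), evidence=evidence)
```

For example, `detect("Shocking secret cure exposed, sources say")` returns a `likely-fake` verdict along with the list of matched cues, so a reader can audit exactly why the claim was flagged. LLM-based systems replace the hand-written cues with generated natural-language rationales, but the interface, a prediction plus its justification, is the same.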