Unreliable Source

Unreliable sources pose a significant challenge across machine learning applications, undermining the accuracy and trustworthiness of AI systems. Current research focuses on detecting unreliable data and limiting its influence, through techniques such as credibility-aware attention mechanisms in large language models (LLMs) and ensemble learning for news credibility evaluation. These methods aim to improve the robustness and fairness of AI systems in applications where misinformation or biased data can have serious consequences, such as automated fact-checking and information retrieval. The ultimate goal is AI systems that are less susceptible to manipulation and bias.
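
To make the first idea concrete, below is a minimal sketch of credibility-aware attention: standard scaled dot-product attention whose logits are biased by a per-source credibility score, so evidence from low-credibility sources is down-weighted before the softmax. The function name, tensor shapes, and the log-credibility bias are illustrative assumptions, not the mechanism of any particular paper.

```python
import torch
import torch.nn.functional as F

def credibility_aware_attention(query, key, value, credibility):
    """
    Scaled dot-product attention biased by per-source credibility.

    query:       (batch, n_queries, d)
    key, value:  (batch, n_keys, d)
    credibility: (batch, n_keys) scores in [0, 1], one per source token

    Keys from low-credibility sources receive a large negative bias,
    so the softmax assigns them less attention weight.
    """
    d = query.size(-1)
    scores = query @ key.transpose(-2, -1) / d**0.5             # (batch, n_q, n_k)
    # Assumed bias form: log-credibility, clamped to avoid -inf at zero.
    bias = torch.log(credibility.clamp(min=1e-6)).unsqueeze(1)  # (batch, 1, n_k)
    weights = F.softmax(scores + bias, dim=-1)
    return weights @ value

if __name__ == "__main__":
    q = torch.randn(2, 4, 64)      # 2 batches, 4 query positions
    k = v = torch.randn(2, 8, 64)  # 8 retrieved source tokens
    cred = torch.rand(2, 8)        # hypothetical per-source credibility scores
    out = credibility_aware_attention(q, k, v, cred)
    print(out.shape)               # torch.Size([2, 4, 64])
```

A credibility score of 1 leaves the attention logits unchanged, while a score near 0 effectively masks that source out, which is why the bias is applied additively in log space.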
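For the ensemble side, the sketch below shows soft-voting over several text classifiers with scikit-learn, a common pattern for news credibility evaluation. The four-sentence corpus and its labels are toy placeholders solely to make the snippet runnable; a real system would train on a labeled news credibility dataset.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus for illustration only; 1 = credible, 0 = not credible.
articles = [
    "Officials confirmed the findings in a peer-reviewed study.",
    "SHOCKING secret cure that doctors don't want you to know!",
    "The agency released its quarterly report on Tuesday.",
    "Anonymous insiders reveal the moon landing was staged.",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a soft-voting ensemble: the predicted
# probabilities of three diverse models are averaged, so no single
# model's bias dominates the credibility decision.
ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("nb", MultinomialNB()),
            ("rf", RandomForestClassifier(n_estimators=100)),
        ],
        voting="soft",
    ),
)
ensemble.fit(articles, labels)
print(ensemble.predict(["Experts published the results in a journal."]))
```

Soft voting averages probabilities rather than hard class votes, which tends to be more robust when the individual classifiers are well calibrated but disagree near the decision boundary.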

Papers