False Sense
A "false sense" in artificial intelligence research refers to the misleading appearance of robustness or safety in AI systems, often stemming from limitations in current models and evaluation methodologies. Current research focuses on identifying and mitigating these false senses in various contexts, including adversarial attacks on autonomous driving systems, information leakage in large language models (LLMs), and the limitations of explainable AI (XAI) and unlearning techniques. Understanding and addressing these issues is crucial for building trustworthy and reliable AI systems, impacting both the development of robust AI models and the responsible deployment of AI in real-world applications.
Papers
Nineteen papers on this topic, published between July 10, 2023 and November 12, 2024.