False Sense
A "false sense" in artificial intelligence research refers to the misleading appearance of robustness or safety in AI systems, often stemming from limitations in current models and evaluation methodologies. Current research focuses on identifying and mitigating these false senses in various contexts, including adversarial attacks on autonomous driving systems, information leakage in large language models (LLMs), and the limitations of explainable AI (XAI) and unlearning techniques. Understanding and addressing these issues is crucial for building trustworthy and reliable AI systems, impacting both the development of robust AI models and the responsible deployment of AI in real-world applications.