Robustness Benchmark
Robustness benchmarks evaluate how machine learning models perform under real-world conditions, with the goal of identifying and mitigating vulnerabilities to noise, corruption, and distribution shifts. Current research develops benchmarks for diverse applications, including image classification, object detection, natural language processing, and reinforcement learning, typically built around convolutional neural networks, transformers, and reinforcement learning algorithms. Such benchmarks are crucial for improving the reliability and safety of AI systems, particularly in safety-critical domains like autonomous driving and medical diagnosis.
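To make the evaluation protocol concrete, the sketch below illustrates one common pattern in image-classification robustness benchmarks (in the style of ImageNet-C): corrupt the test inputs at increasing severity and track how top-1 accuracy degrades. This is a minimal illustration, not any specific benchmark's protocol; the `DATA_DIR` path, the Gaussian-noise corruption, and the severity levels are placeholder assumptions, and it assumes PyTorch and torchvision are installed.

```python
# Sketch of a corruption-robustness evaluation: measure how a pretrained
# classifier's accuracy degrades as Gaussian-noise severity increases.
# DATA_DIR and the severity levels below are illustrative placeholders.
import torch
import torchvision
from torchvision import transforms

DATA_DIR = "path/to/imagefolder"  # hypothetical ImageFolder-style test set


def add_gaussian_noise(severity: float):
    """Return a transform adding zero-mean Gaussian noise to a [0,1] tensor image."""
    def _noise(x: torch.Tensor) -> torch.Tensor:
        return torch.clamp(x + torch.randn_like(x) * severity, 0.0, 1.0)
    return _noise


@torch.no_grad()
def top1_accuracy(model, loader, device):
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / total


def main():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1").to(device).eval()

    for severity in (0.0, 0.04, 0.08, 0.16):  # illustrative severity ladder
        tf = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Lambda(add_gaussian_noise(severity)),  # corrupt after tensor conversion
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])
        dataset = torchvision.datasets.ImageFolder(DATA_DIR, transform=tf)
        loader = torch.utils.data.DataLoader(dataset, batch_size=64)
        acc = top1_accuracy(model, loader, device)
        print(f"severity={severity:.2f}  top-1 acc={acc:.3f}")


if __name__ == "__main__":
    main()
```

A full benchmark would sweep many corruption types (blur, weather, digital artifacts) and report an aggregate score across severities; the single-corruption loop above shows the core measurement that such suites repeat.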