Adversarial Robustness Benchmark
Adversarial robustness benchmarks evaluate the resilience of machine learning models against adversarial attacks: inputs subtly perturbed to induce misclassification. Current research focuses on developing standardized benchmarks across diverse applications, including cybersecurity, visual recognition (including 3D point clouds), and genomic sequence analysis, covering model architectures from decision tree ensembles to deep neural networks. These benchmarks are crucial for improving model reliability and trustworthiness in high-stakes domains where the cost of misclassification is significant, ultimately driving the development of more robust and dependable AI systems.
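To make the evaluation concrete, below is a minimal sketch of how a benchmark might measure robust accuracy under one canonical attack, the Fast Gradient Sign Method (FGSM). This is an illustrative example, not the method of any paper listed here; the PyTorch model, data loader, epsilon budget, and the assumption that inputs lie in [0, 1] are all hypothetical.

```python
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x in the direction that increases the classification
    loss, bounded by epsilon in the L-infinity norm (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the (assumed) valid
    # input range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()


def robust_accuracy(model, loader, epsilon=0.03):
    """Fraction of examples still classified correctly after the attack:
    the core metric an adversarial robustness benchmark reports."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:  # loader yields (inputs, labels); hypothetical
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```

Benchmarks in practice sweep the perturbation budget (epsilon) and use stronger iterative attacks, but the structure is the same: attack each input, re-evaluate, and report accuracy on the perturbed data.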
Papers
TSCheater: Generating High-Quality Tibetan Adversarial Texts via Visual Similarity
Xi Cao, Quzong Gesang, Yuan Sun, Nuo Qun, Tashi Nyima
Noisy Ostracods: A Fine-Grained, Imbalanced Real-World Dataset for Benchmarking Robust Machine Learning and Label Correction Methods
Jiamian Hu, Yuanyuan Hong, Yihua Chen, He Wang, Moriaki Yasuhara