ImageNet-E
ImageNet-E, along with related datasets such as ImageNet-A, -C, -R, and -D, forms a suite of benchmarks designed to rigorously evaluate the robustness of image classification models beyond standard ImageNet accuracy. Current research focuses on improving robustness to various distribution shifts (e.g., environmental changes, sensor variations, adversarial attacks) through techniques such as adversarial training, improved architectures (including Vision Transformers), and novel data augmentation strategies. These efforts aim to advance the reliability and generalizability of computer vision systems, ultimately supporting more trustworthy applications in real-world scenarios.
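Benchmarks in this family typically report how much a model's accuracy degrades when images are perturbed (e.g., by editing object attributes in ImageNet-E). As a minimal sketch, assuming hypothetical helper names and toy prediction lists, the headline metric can be computed as the drop in top-1 accuracy between clean and edited images:

```python
def accuracy(preds, labels):
    """Fraction of predictions matching ground-truth labels (top-1 accuracy)."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def accuracy_drop(clean_preds, edited_preds, labels):
    """Drop in top-1 accuracy when moving from clean to edited images.

    A robust model has a small drop; a brittle one degrades sharply
    under attribute edits (background, size, position, etc.).
    """
    return accuracy(clean_preds, labels) - accuracy(edited_preds, labels)

# Toy example (hypothetical data): the model is correct on 3/4 clean
# images but only 2/4 edited ones, so the accuracy drop is 0.25.
labels       = [0, 1, 2, 3]
clean_preds  = [0, 1, 2, 9]   # 75% top-1 accuracy
edited_preds = [0, 1, 8, 9]   # 50% top-1 accuracy
print(accuracy_drop(clean_preds, edited_preds, labels))  # 0.25
```

In practice the prediction lists would come from running a classifier over the original and attribute-edited versions of the same validation images, so the drop isolates the effect of the edit itself.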
Papers
Neglected Free Lunch -- Learning Image Classifiers Using Annotation Byproducts
Dongyoon Han, Junsuk Choe, Seonghyeok Chun, John Joon Young Chung, Minsuk Chang, Sangdoo Yun, Jean Y. Song, Seong Joon Oh
ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing
Xiaodan Li, Yuefeng Chen, Yao Zhu, Shuhui Wang, Rong Zhang, Hui Xue