Supervised ImageNet
Supervised ImageNet research focuses on improving image classification models trained on the large-scale ImageNet dataset. Current efforts concentrate on better data curation strategies, more efficient training methods (including alternative architectures such as binary neural networks and the use of self-supervised learning), and challenges such as dataset bias and the need for explainable AI. These advances are crucial for improving the accuracy, efficiency, and trustworthiness of computer vision systems across applications ranging from medical imaging to agricultural technology. A minimal sketch of the standard supervised ImageNet training setup appears below.
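For readers unfamiliar with the baseline these papers build on, the following is a minimal, illustrative sketch of supervised ImageNet classification training in PyTorch. It is not drawn from any of the listed papers; the dataset path, model choice (ResNet-50), and hyperparameters are placeholder assumptions reflecting a typical recipe.

```python
# Illustrative sketch of one supervised ImageNet training pass (not from the listed papers).
# Assumes torch and torchvision are installed; `imagenet_root` is a placeholder path.
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

imagenet_root = "/path/to/imagenet/train"  # hypothetical local path

# Standard supervised-training augmentations and normalization for ImageNet.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = torchvision.datasets.ImageFolder(imagenet_root, transform=train_tf)
loader = torch.utils.data.DataLoader(dataset, batch_size=256, shuffle=True,
                                     num_workers=8, pin_memory=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet50(num_classes=1000).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)

model.train()
for images, labels in loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    # Supervised objective: cross-entropy against the human-annotated class labels.
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

In practice this loop would run for many epochs with a learning-rate schedule and a held-out validation pass; the papers below modify different parts of this pipeline (data curation, training efficiency, robustness evaluation).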
Papers
Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?
Arjun Majumdar, Karmesh Yadav, Sergio Arnaud, Yecheng Jason Ma, Claire Chen, Sneha Silwal, Aryan Jain, Vincent-Pierre Berges, Pieter Abbeel, Jitendra Malik, Dhruv Batra, Yixin Lin, Oleksandr Maksymets, Aravind Rajeswaran, Franziska Meier
DIME-FM: DIstilling Multimodal and Efficient Foundation Models
Ximeng Sun, Pengchuan Zhang, Peizhao Zhang, Hardik Shah, Kate Saenko, Xide Xia
Exploring the Limits of Deep Image Clustering using Pretrained Models
Nikolas Adaloglou, Felix Michels, Hamza Kalisch, Markus Kollmann
Neglected Free Lunch -- Learning Image Classifiers Using Annotation Byproducts
Dongyoon Han, Junsuk Choe, Seonghyeok Chun, John Joon Young Chung, Minsuk Chang, Sangdoo Yun, Jean Y. Song, Seong Joon Oh
ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing
Xiaodan Li, Yuefeng Chen, Yao Zhu, Shuhui Wang, Rong Zhang, Hui Xue
From MNIST to ImageNet and Back: Benchmarking Continual Curriculum Learning
Kamil Faber, Dominik Zurek, Marcin Pietron, Nathalie Japkowicz, Antonio Vergari, Roberto Corizzo
Efficient Diffusion Training via Min-SNR Weighting Strategy
Tiankai Hang, Shuyang Gu, Chen Li, Jianmin Bao, Dong Chen, Han Hu, Xin Geng, Baining Guo