Supervised ImageNet
Supervised ImageNet research aims to improve image classification models trained on the large-scale ImageNet dataset. Current efforts concentrate on better data curation, more efficient training methods (including alternative architectures such as binary neural networks and the use of self-supervised learning), and challenges such as dataset bias and the need for explainable AI. These advances matter for the accuracy, efficiency, and trustworthiness of computer vision systems in applications ranging from medical imaging to agricultural technology.
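For context, the sketch below shows what plain supervised ImageNet training looks like in PyTorch: a ResNet-50 trained from scratch with cross-entropy on ImageNet labels, the kind of baseline that the papers listed below compare against or build on. The dataset path, backbone choice, and hyperparameters are illustrative assumptions, not settings drawn from any of the listed papers.

```python
# Minimal sketch of standard supervised ImageNet training in PyTorch.
# Paths, batch size, and hyperparameters are illustrative assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ImageNet preprocessing: random crop/flip plus mean-std normalization.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes an ImageNet-style directory layout: /data/imagenet/train/<class>/<image>.JPEG
train_set = datasets.ImageFolder("/data/imagenet/train", transform=train_tf)
train_loader = DataLoader(train_set, batch_size=256, shuffle=True,
                          num_workers=8, pin_memory=True)

model = models.resnet50(weights=None, num_classes=1000).to(device)
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)

model.train()
for images, labels in train_loader:  # one epoch; wrap in an outer loop for full training
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)  # supervised cross-entropy on ImageNet labels
    loss.backward()
    optimizer.step()
```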
Papers
MoCo-Transfer: Investigating out-of-distribution contrastive learning for limited-data domains
Yuwen Chen, Helen Zhou, Zachary C. Lipton
ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy
Kirill Vishniakov, Zhiqiang Shen, Zhuang Liu
Self-Supervised Disentanglement by Leveraging Structure in Data Augmentations
Cian Eastwood, Julius von Kügelgen, Linus Ericsson, Diane Bouchacourt, Pascal Vincent, Bernhard Schölkopf, Mark Ibrahim
Addressing Weak Decision Boundaries in Image Classification by Leveraging Web Search and Generative Models
Preetam Prabhu Srikar Dammu, Yunhe Feng, Chirag Shah
Are Natural Domain Foundation Models Useful for Medical Image Classification?
Joana Palés Huix, Adithya Raju Ganeshan, Johan Fredin Haslum, Magnus Söderberg, Christos Matsoukas, Kevin Smith
Maximum Knowledge Orthogonality Reconstruction with Gradients in Federated Learning
Feng Wang, Senem Velipasalar, M. Cenk Gursoy