Tiny ImageNet
Tiny ImageNet is a downsized version of the ImageNet dataset, comprising 200 classes of 64×64 images with 500 training examples per class, and it is commonly used to benchmark deep learning models under data-scarce conditions. Current research focuses on improving accuracy and robustness on this dataset through techniques such as dataset distillation, data augmentation (including image generation and transformations), and novel training methods such as companion learning and sharpness-aware minimization. These efforts aim to close the performance gap between convolutional neural networks and vision transformers on limited data, and they inform the development of more efficient models as well as advances in continual and federated learning.
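As a concrete illustration of one technique mentioned above, the sketch below outlines a single sharpness-aware minimization (SAM) training step in PyTorch. This is a minimal sketch under the assumption of a standard PyTorch model, loss function, and base optimizer; the function name `sam_step` and the `rho` value are illustrative and are not taken from any of the listed papers.

```python
import torch

def sam_step(model, loss_fn, inputs, targets, base_optimizer, rho=0.05):
    # First forward/backward pass: gradient of the loss at the current weights.
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Ascent step: perturb the weights along the normalized gradient direction
    # to move toward the locally "sharpest" point within an L2 ball of radius rho.
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
        scale = rho / (grad_norm + 1e-12)
        perturbations = []
        for p in model.parameters():
            if p.grad is None:
                perturbations.append(None)
                continue
            e = p.grad * scale
            p.add_(e)
            perturbations.append(e)

    # Second forward/backward pass: gradient at the perturbed weights.
    model.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Undo the perturbation, then update the original weights using the
    # gradient computed at the perturbed point.
    with torch.no_grad():
        for p, e in zip(model.parameters(), perturbations):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    base_optimizer.zero_grad()

    # Return the loss measured at the unperturbed weights.
    return loss.item()
```

In typical usage, `rho` is a small constant and the base optimizer is plain SGD with momentum; the cost is roughly two forward/backward passes per update instead of one.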
Papers
Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation
Haizhong Zheng, Jiachen Sun, Shutong Wu, Bhavya Kailkhura, Zhuoqing Mao, Chaowei Xiao, Atul Prakash
Enhancing Neural Architecture Search with Multiple Hardware Constraints for Deep Learning Model Deployment on Tiny IoT Devices
Alessio Burrello, Matteo Risso, Beatrice Alessandra Motetti, Enrico Macii, Luca Benini, Daniele Jahier Pagliari