Lightweight Model
Lightweight models in deep learning aim to achieve high accuracy with minimal computational resources, making them suitable for deployment on resource-constrained devices like mobile phones and embedded systems. Current research focuses on developing efficient architectures, such as variations of UNet, YOLO, and transformers, often incorporating techniques like depthwise separable convolutions, knowledge distillation, and model pruning to reduce model size and computational cost while maintaining performance. This research is significant because it expands the applicability of deep learning to a wider range of applications and devices, impacting fields from medical image analysis and autonomous driving to natural language processing and resource-limited environments.
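To make the parameter savings concrete, here is a minimal sketch of the arithmetic behind the depthwise separable convolutions mentioned above (the MobileNet-style factorization of a standard convolution into a depthwise and a 1x1 pointwise step). The function names are illustrative, not from any particular library.

```python
# Parameter-count comparison: standard convolution vs. depthwise separable
# convolution, ignoring biases. Function names are hypothetical/illustrative.

def standard_conv_params(k, c_in, c_out):
    # A standard k x k convolution learns one k*k*c_in filter per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel (k*k*c_in params),
    # then a 1x1 pointwise convolution mixing channels (c_in*c_out params).
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernel, 64 input channels, 128 output channels.
std = standard_conv_params(3, 64, 128)        # 73728 parameters
sep = depthwise_separable_params(3, 64, 128)  # 8768 parameters
print(std, sep, round(std / sep, 1))          # roughly an 8x reduction
```

This factorization is one of the main reasons lightweight architectures fit on mobile and embedded hardware: the savings grow with the number of output channels, since the expensive spatial filtering is done once per input channel rather than once per input-output channel pair.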
Papers
Location-Aware Visual Question Generation with Lightweight Models
Nicholas Collin Suwono, Justin Chih-Yao Chen, Tun Min Hung, Ting-Hao Kenneth Huang, I-Bin Liao, Yung-Hui Li, Lun-Wei Ku, Shao-Hua Sun
Federated learning compression designed for lightweight communications
Lucas Grativol Ribeiro, Mathieu Leonardon, Guillaume Muller, Virginie Fresse, Matthieu Arzel