Resource-Constrained Devices
Resource-constrained devices, such as mobile phones and embedded systems, present unique challenges for deploying computationally intensive machine learning models. Current research focuses on optimizing model architectures (e.g., CNNs, Transformers, Binary Neural Networks) and on compression and training techniques (e.g., federated learning, knowledge distillation, quantization, pruning) that reduce memory footprint, power consumption, and inference latency while preserving accuracy. This work is crucial for enabling on-device AI in fields such as healthcare, agriculture, and mobile computing, where running models directly on resource-limited hardware is essential for privacy, efficiency, and real-time responsiveness.
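To make the memory-reduction idea concrete, below is a minimal sketch of symmetric post-training int8 quantization, one of the techniques named above. It is illustrative only and not drawn from any of the listed papers; the function names are assumptions, and real deployments would typically rely on framework tooling (e.g., PyTorch or TensorFlow Lite quantization) rather than hand-rolled code.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a per-tensor scale factor (sketch)."""
    # Largest magnitude maps to 127; guard against an all-zero tensor.
    scale = max(float(np.max(np.abs(weights))) / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(256, 256).astype(np.float32)
    q, scale = quantize_int8(w)
    w_hat = dequantize(q, scale)
    # int8 storage is 4x smaller than float32, at the cost of a small error.
    print("max abs error:", np.max(np.abs(w - w_hat)))
    print("bytes: float32 =", w.nbytes, "int8 =", q.nbytes)
```

The same storage-versus-accuracy trade-off motivates the other techniques listed (pruning removes weights outright, distillation trains a smaller student model, and federated learning keeps training data on-device).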
Papers
A CNN-Transformer Deep Learning Model for Real-time Sleep Stage Classification in an Energy-Constrained Wireless Device
Zongyan Yao, Xilin Liu
FedDCT: Federated Learning of Large Convolutional Neural Networks on Resource Constrained Devices using Divide and Collaborative Training
Quan Nguyen, Hieu H. Pham, Kok-Seng Wong, Phi Le Nguyen, Truong Thao Nguyen, Minh N. Do