Heterogeneous Device
Heterogeneous device research focuses on optimizing the performance of deep learning models across diverse hardware platforms, addressing the challenges posed by varying computational capabilities and memory constraints. Current efforts concentrate on frameworks and algorithms for efficient model deployment and training on such devices, using techniques like model partitioning, adaptive resource allocation, and asynchronous decentralized training, often applied to convolutional neural networks and transformers. This work is crucial for enabling the widespread adoption of AI applications on resource-constrained devices, improving efficiency, and mitigating the privacy concerns associated with centralizing data.
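To make one of these techniques concrete, below is a minimal sketch of layer-wise model partitioning across devices with unequal compute capability. All names here (`Device`, `partition_layers`, the example throughput numbers) are hypothetical illustrations, not the API of any specific framework; real partitioners also account for per-device memory limits and inter-device communication cost.

```python
# Illustrative sketch: assign contiguous layer ranges of a model to
# heterogeneous devices in proportion to their relative throughput.
# Names and numbers are hypothetical, for exposition only.

from dataclasses import dataclass


@dataclass
class Device:
    name: str
    throughput: float  # relative compute capability (arbitrary units)


def partition_layers(num_layers: int, devices: list[Device]) -> dict[str, range]:
    """Split [0, num_layers) into contiguous ranges proportional to throughput."""
    total = sum(d.throughput for d in devices)
    assignment: dict[str, range] = {}
    start = 0
    for i, dev in enumerate(devices):
        if i == len(devices) - 1:
            end = num_layers  # last device absorbs any rounding remainder
        else:
            end = min(start + round(num_layers * dev.throughput / total), num_layers)
        assignment[dev.name] = range(start, end)
        start = end
    return assignment


if __name__ == "__main__":
    devices = [
        Device("phone_npu", throughput=1.0),
        Device("edge_gpu", throughput=4.0),
        Device("laptop_cpu", throughput=2.0),
    ]
    # For a 24-layer model: phone_npu gets layers 0-2, edge_gpu 3-16,
    # laptop_cpu 17-23.
    for name, layers in partition_layers(24, devices).items():
        print(f"{name}: layers {layers.start}-{layers.stop - 1}")
```

A proportional split like this is only a starting point; practical systems profile actual layer latencies per device and search over cut points to balance the pipeline.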
Papers
19 papers, August 9, 2022 to October 11, 2024.