Device-Edge Co-Inference
Device-edge co-inference optimizes deep learning inference by strategically partitioning computational work between resource-constrained edge devices and more powerful edge servers. Current research focuses on efficient model partitioning and compression techniques, often employing architectures such as convolutional neural networks (CNNs) and graph neural networks (GNNs), together with algorithms such as reinforcement learning to dynamically adapt the partition to varying network conditions and device capabilities. The goal is to reduce latency, energy consumption, and communication overhead while maintaining high inference accuracy, which benefits applications such as autonomous driving and real-time object recognition.
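As a rough illustration of the partitioning idea, the sketch below splits a small PyTorch CNN at a hypothetical layer index, runs the head of the network on the device, simulates transmission of the intermediate activation, and finishes inference on the edge server. The model, the split index, and the send_over_network stub are assumptions made for illustration only; a real system would choose the split point with a profiler or scheduler and compress the transmitted features.

```python
# Minimal device-edge co-inference sketch (assumes PyTorch is installed).
import torch
import torch.nn as nn

# A small CNN standing in for the full model to be partitioned.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # early layers: cheap compute, large activations
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # later layers: heavier compute
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

split_index = 3  # hypothetical partition point chosen by a scheduler/profiler

# The "device" runs the head of the network; the "edge server" runs the tail.
device_head = model[:split_index]
server_tail = model[split_index:]

def run_on_device(x: torch.Tensor) -> torch.Tensor:
    # Compute the intermediate activation on the resource-constrained device.
    with torch.no_grad():
        return device_head(x)

def send_over_network(features: torch.Tensor) -> torch.Tensor:
    # Placeholder for serialization and transmission; a real system would
    # quantize/compress the activation here to cut communication overhead.
    return features.clone()

def run_on_server(features: torch.Tensor) -> torch.Tensor:
    # Finish inference on the more powerful edge server.
    with torch.no_grad():
        return server_tail(features)

if __name__ == "__main__":
    image = torch.randn(1, 3, 32, 32)       # dummy input frame
    intermediate = run_on_device(image)
    received = send_over_network(intermediate)
    logits = run_on_server(received)
    print("Predicted class:", logits.argmax(dim=1).item())
```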