Edge Device
Edge devices are resource-constrained computing units that perform computation close to the data source, reducing the latency, bandwidth usage, and privacy risks associated with sending data to the cloud. Current research focuses on optimizing deep learning models (e.g., CNNs, LLMs, GNNs) for edge deployment through techniques such as model compression (quantization, pruning, knowledge distillation), efficient parallel processing (pipeline parallelism, tensor parallelism), and federated learning. This work is significant because it enables sophisticated AI applications, such as autonomous driving and medical imaging analysis, to run on low-power devices, expanding the accessibility and applicability of advanced technologies. A minimal sketch of one of these techniques is given below.
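As a rough illustration of the model-compression side, the sketch below applies post-training dynamic quantization to a small PyTorch model, shrinking Linear-layer weights to 8-bit integers for edge inference. The network, layer sizes, and inputs are illustrative assumptions, not taken from any specific paper listed here.

```python
# A minimal sketch of post-training dynamic quantization (assumed PyTorch setup,
# not a specific paper's method). Weights of Linear layers are stored as int8;
# activations are quantized on the fly at inference time.
import torch
import torch.nn as nn

# Small stand-in network representing a model destined for an edge device.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Quantize the Linear layers' weights to 8-bit integers.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model is a drop-in replacement for inference.
with torch.no_grad():
    x = torch.randn(1, 128)
    print(quantized_model(x).shape)  # torch.Size([1, 10])
```

In practice, the memory savings and speedups from such compression depend on the hardware backend and the fraction of the model occupied by quantizable layers; the other techniques mentioned above (pruning, distillation, parallelism, federated learning) address complementary constraints of edge deployment.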