Edge Device
Edge devices are resource-constrained computing units that perform computation closer to data sources, reducing the latency, bandwidth usage, and privacy risks associated with cloud computing. Current research focuses on optimizing deep learning models (e.g., CNNs, LLMs, GNNs) for edge deployment through techniques such as model compression (quantization, pruning, knowledge distillation), efficient parallel processing (pipeline and tensor parallelism), and federated learning. This work enables the deployment of sophisticated AI applications, such as autonomous driving and medical imaging analysis, on low-power devices, thereby broadening the accessibility and applicability of advanced AI.
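As a rough illustration of the model-compression techniques mentioned above, the sketch below shows symmetric int8 weight quantization and global magnitude pruning on a plain NumPy weight matrix. It is a minimal, self-contained example under assumed settings (per-tensor scale, 80% target sparsity), not a production recipe or any specific paper's method; all function names are illustrative.

```python
# Minimal sketch (illustrative, not from any cited paper): int8 weight
# quantization and magnitude pruning, two common compression steps for
# fitting models onto resource-constrained edge devices.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 weights to int8 with a single per-tensor scale."""
    scale = max(np.abs(w).max(), 1e-8) / 127.0   # avoid divide-by-zero
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from int8 values and scale."""
    return q.astype(np.float32) * scale

def magnitude_prune(w: np.ndarray, sparsity: float = 0.8) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64)).astype(np.float32)

    q, s = quantize_int8(w)
    err = np.abs(dequantize(q, s) - w).mean()
    print(f"mean abs quantization error: {err:.4f}")

    pruned = magnitude_prune(w, sparsity=0.8)
    print(f"fraction of zeroed weights: {(pruned == 0).mean():.2f}")
```

In practice, frameworks apply these steps per-layer or per-channel and usually fine-tune afterward to recover accuracy, but the arithmetic shown (scale, round, clip; threshold and zero) is the core of both techniques.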