Edge Device
Edge devices are resource-constrained computing units that perform computation close to the data source, reducing the latency and bandwidth usage of cloud computing and mitigating its privacy risks. Current research focuses on optimizing deep learning models (e.g., CNNs, LLMs, GNNs) for edge deployment through techniques such as model compression (quantization, pruning, knowledge distillation), efficient parallelism (pipeline and tensor parallelism), and federated learning. This work matters because it enables sophisticated AI applications, such as autonomous driving and medical image analysis, to run on low-power hardware, broadening the reach and applicability of these technologies.
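As a concrete illustration of one of these compression techniques, the sketch below applies post-training dynamic quantization to a small PyTorch model, converting its linear-layer weights to int8 to shrink the model and speed up CPU inference on an edge device. The toy model, layer sizes, and input shape are illustrative assumptions, not taken from any of the listed papers.

```python
import torch
import torch.nn as nn

# A small stand-in model for a network intended for edge deployment.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Post-training dynamic quantization: weights of the listed module types
# (here nn.Linear) are converted to int8; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model is used exactly like the original one.
x = torch.randn(1, 128)
with torch.no_grad():
    print(quantized(x).shape)  # torch.Size([1, 10])
```

Dynamic quantization is the simplest of the compression techniques mentioned above; pruning and knowledge distillation trade more engineering effort for further reductions in model size and latency.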
Papers
[Papers dated June 6, 2024 through September 9, 2024]