Edge Computing

Edge computing processes data closer to its source, reducing latency and bandwidth usage for applications such as autonomous driving and the Internet of Things (IoT). Current research emphasizes efficient resource allocation, often applying large language models (LLMs) and reinforcement learning to optimize task scheduling and model deployment on resource-constrained edge devices; spiking neural networks and implicit neural representations are also explored for improved efficiency (a minimal scheduling sketch follows below). The field matters because it enables real-time, privacy-preserving AI across diverse sectors, driving advances in both hardware and software architectures for distributed computing.
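
To make the task-scheduling idea concrete, here is a minimal sketch of a learning-based offloading decision: an epsilon-greedy scheduler that chooses between running a task locally or offloading it to an edge server based on observed latencies. The class names, latency model, and parameters are illustrative assumptions, not taken from any particular paper; real systems would use richer state (queue lengths, battery, link quality) and more capable reinforcement learning methods.

```python
# Hypothetical sketch: an epsilon-greedy scheduler that learns whether to run a
# task locally or offload it to an edge server, based on observed latencies.
# All names and numbers here are illustrative assumptions.
import random

ACTIONS = ["local", "edge"]


class EpsilonGreedyScheduler:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.latency_sum = {a: 0.0 for a in ACTIONS}
        self.count = {a: 0 for a in ACTIONS}

    def choose(self):
        # Explore with probability epsilon (or while an action is untried),
        # otherwise pick the action with the lowest average observed latency.
        if random.random() < self.epsilon or any(self.count[a] == 0 for a in ACTIONS):
            return random.choice(ACTIONS)
        return min(ACTIONS, key=lambda a: self.latency_sum[a] / self.count[a])

    def update(self, action, latency_ms):
        # Record the measured latency for the chosen placement.
        self.latency_sum[action] += latency_ms
        self.count[action] += 1


def simulated_latency(action):
    # Assumed latency model: local compute is slower but stable; the edge
    # server is faster on average but occasionally congested.
    if action == "local":
        return random.gauss(120.0, 10.0)
    return random.gauss(60.0, 15.0) + (200.0 if random.random() < 0.05 else 0.0)


if __name__ == "__main__":
    scheduler = EpsilonGreedyScheduler()
    for _ in range(500):
        action = scheduler.choose()
        scheduler.update(action, simulated_latency(action))
    for a in ACTIONS:
        avg = scheduler.latency_sum[a] / scheduler.count[a]
        print(f"{a}: {avg:.1f} ms average over {scheduler.count[a]} tasks")
```

Running the sketch shows the scheduler converging on the edge server while still sampling local execution occasionally, which is the basic trade-off that more sophisticated reinforcement-learning schedulers in this literature aim to optimize under tighter resource constraints.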

Papers