Paper ID: 2504.09035 • Published Apr 12, 2025
InterQ: A DQN Framework for Optimal Intermittent Control
Shubham Aggarwal, Dipankar Maity, Tamer Başar
Coordinated Science Laboratory at the University of Illinois Urbana-Champaign...
In this letter, we explore the communication-control co-design of
discrete-time stochastic linear systems through reinforcement learning.
Specifically, we examine a closed-loop system involving two sequential
decision-makers: a scheduler and a controller. The scheduler continuously
monitors the system's state but transmits it to the controller intermittently
to balance communication cost against control performance. The controller, in
turn, determines the control input based on the intermittently received
information. Given the partially nested information structure, we show that the
optimal control policy follows a certainty-equivalence form. Subsequently, we
analyze the qualitative behavior of the scheduling policy. To develop the
optimal scheduling policy, we propose InterQ, a deep reinforcement learning
algorithm that approximates the Q-function with a deep neural network.
Through extensive numerical evaluations, we analyze the scheduling landscape
and further compare our approach against two baseline strategies: (a) a
multi-period periodic scheduling policy, and (b) an event-triggered policy. The
results demonstrate that our proposed method outperforms both baselines. The
open-source implementation can be found at this https URL.
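As a concrete illustration of the certainty-equivalence structure described above, here is a minimal sketch in Python. The matrices A and B, the gain K, the noise scale, and the fixed 5-step schedule are illustrative placeholders, not values from the paper; the schedule merely stands in for the learned InterQ policy. Between receptions the controller propagates its estimate open loop and applies u_t = -K x̂_t.

```python
import numpy as np

# Illustrative system matrices and LQR gain (assumptions, not from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed state matrix
B = np.array([[0.0], [0.1]])             # assumed input matrix
K = np.array([[1.0, 1.5]])               # assumed stabilizing feedback gain

def step(x, u, rng):
    """Propagate the stochastic linear dynamics x_{t+1} = A x + B u + w."""
    w = 0.05 * rng.standard_normal(2)    # assumed process-noise scale
    return A @ x + B @ u + w

rng = np.random.default_rng(0)
x = rng.standard_normal(2)               # true state, observed by the scheduler
x_hat = np.zeros(2)                      # controller's estimate

for t in range(50):
    transmit = (t % 5 == 0)              # placeholder schedule; InterQ learns this
    if transmit:
        x_hat = x.copy()                 # a reception resets the estimate to the state
    u = -(K @ x_hat)                     # certainty-equivalence control input
    x = step(x, u, rng)
    x_hat = A @ x_hat + B @ u            # open-loop estimate between receptions
```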
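The scheduling side is learned with a DQN. Below is a minimal sketch of what such a Q-network could look like: the input feature (the scheduler's estimation error), the layer widths, and the two-action output (hold vs. transmit) are assumptions for illustration, not the paper's actual architecture; see the linked open-source implementation for that.

```python
import torch
import torch.nn as nn

class SchedulerQNet(nn.Module):
    """Small MLP mapping an estimation-error feature to Q-values (assumed design)."""
    def __init__(self, state_dim: int = 2, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),        # Q-values for {hold, transmit}
        )

    def forward(self, err: torch.Tensor) -> torch.Tensor:
        return self.net(err)

q_net = SchedulerQNet()
err = torch.randn(1, 2)                  # scheduler's estimation-error feature
action = q_net(err).argmax(dim=1).item() # greedy action: 0 = hold, 1 = transmit
```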
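For context, baseline (b) can be sketched in one function: an event-triggered rule transmits whenever the estimation error crosses a threshold. The threshold value here is an arbitrary assumption.

```python
import numpy as np

def event_triggered(x: np.ndarray, x_hat: np.ndarray, thresh: float = 0.5) -> bool:
    """Return True (transmit) when the error norm exceeds the trigger level."""
    return bool(np.linalg.norm(x - x_hat) > thresh)
```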