Paper ID: 2411.00572

Enhancing Adaptive Mixed-Criticality Scheduling with Deep Reinforcement Learning

Bruno Mendes (1), Pedro F. Souto (1 and 2), Pedro C. Diniz (2) ((1) Department of Informatics Engineering (DEI) Faculty of Engineering of the University of Porto (FEUP) (2) CISTER Research Centre)

Adaptive Mixed-Criticality (AMC) is a fixed-priority preemptive scheduling algorithm for mixed-criticality hard real-time systems. It dominates many other scheduling algorithms for mixed-criticality systems, but it does so at the cost of occasionally dropping jobs of less important/critical tasks when jobs overrun their low-criticality time budgets. In this paper we enhance AMC with a deep reinforcement learning (DRL) approach based on a Deep Q-Network (DQN). The DRL agent is trained off-line and, at run-time, adjusts the low-criticality budgets of tasks so as to avoid budget overruns, while ensuring that no job misses its deadline if it does not overrun its budget. We have implemented and evaluated this approach by simulating realistic workloads from the automotive domain. The results show that the agent is able to reduce budget overruns by at least 50%, even when the budget of each task is chosen by sampling the distribution of its execution time. To the best of our knowledge, this is the first reported use of DRL in AMC.
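To make the described approach concrete, the sketch below shows a minimal DQN-style agent that maps a task-set state to Q-values over discrete budget adjustments and picks actions epsilon-greedily during off-line training. This is an illustrative assumption only, not the authors' implementation: the state encoding, action set, and network sizes are hypothetical placeholders.

```python
# Illustrative sketch only: NOT the paper's implementation.
# Assumptions: the state summarizes recent execution times and current
# low-criticality budgets; actions are discrete budget adjustments.
import random
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Maps a task-set state vector to Q-values over discrete
    budget-adjustment actions (e.g., decrease / keep / increase)."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def select_action(qnet: QNet, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy selection used during off-line training;
    at run-time the greedy action (epsilon = 0) would be applied."""
    n_actions = qnet.net[-1].out_features
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(qnet(state).argmax().item())

# Hypothetical usage: an 8-dimensional state and 3 adjustment actions.
qnet = QNet(state_dim=8, n_actions=3)
action = select_action(qnet, torch.zeros(8), epsilon=0.1)
```

In such a setup the reward would have to penalize budget overruns while never rewarding adjustments that could cause a deadline miss; how the paper actually shapes the reward and encodes the schedulability constraint is not specified in this abstract.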

Submitted: Nov 1, 2024