Paper ID: 2410.15188 • Published Oct 19, 2024
Augmented Lagrangian-Based Safe Reinforcement Learning Approach for Distribution System Volt/VAR Control
Guibin Chen
This paper proposes a data-driven solution to the Volt-VAR control problem in active distribution systems. Because distribution system models are often inaccurate and incomplete, the problem is difficult to solve directly. To address this challenge, the paper formulates Volt-VAR control as a constrained Markov decision process (CMDP). By synergistically combining the augmented Lagrangian method and the soft actor-critic algorithm, a novel safe off-policy reinforcement learning (RL) approach is proposed to solve the CMDP. The actor network is updated in a policy-gradient manner using the Lagrangian value function, and a double-critic network synchronously estimates the action-value function to avoid overestimation bias. The proposed algorithm does not require a strong convexity guarantee for the examined problems and is sample efficient. A two-stage strategy of offline training and online execution is adopted, so an accurate distribution system model is no longer needed. To achieve scalability, a centralized-training, distributed-execution strategy is adopted within a multi-agent framework, enabling decentralized Volt-VAR control of large-scale distribution systems. Comprehensive numerical experiments with real-world electricity data demonstrate that the proposed algorithm achieves high solution optimality and constraint compliance.