Paper ID: 2211.14744

BEAR: Physics-Principled Building Environment for Control and Reinforcement Learning

Chi Zhang, Yuanyuan Shi, Yize Chen

Recent advancements in reinforcement learning algorithms have opened doors for researchers to operate and optimize building energy management systems autonomously. However, the lack of an easily configurable platform for simulating building dynamics and evaluating energy management tasks has arguably slowed progress in developing advanced, dedicated reinforcement learning (RL) and control algorithms for building operation. Here we propose "BEAR", a physics-principled Building Environment for Control And Reinforcement Learning. The platform allows researchers to benchmark both model-based and model-free controllers on a broad collection of standard building models directly in Python, without co-simulation through external building simulators. In this paper, we discuss the design of this platform and compare it with other existing building simulation frameworks. We demonstrate the compatibility and performance of BEAR with different controllers, including both model predictive control (MPC) and several state-of-the-art RL methods, through two case studies.
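To make the "physics-principled, pure-Python" idea concrete, below is a minimal sketch of the kind of discrete-time linear thermal (RC-network) dynamics such a simulator can expose natively, wrapped in a simple control loop. All matrices, values, and the proportional controller here are illustrative assumptions for exposition, not BEAR's actual building models or API.

```python
# Illustrative sketch: linear zone-temperature dynamics T_{t+1} = A T_t + B u_t + E d_t,
# stepped in plain Python/NumPy without an external co-simulator.
# All numbers below are placeholders, not BEAR's actual parameters.
import numpy as np

n_zones = 3
A = np.eye(n_zones) * 0.90 + 0.03          # zone-to-zone thermal coupling (illustrative)
B = np.eye(n_zones) * 0.50                 # effect of HVAC heating/cooling inputs
E = np.full((n_zones, 1), 0.10)            # effect of outdoor-temperature disturbance

T = np.full(n_zones, 22.0)                 # initial zone temperatures [deg C]
T_set = 22.0                               # comfort setpoint [deg C]
total_cost = 0.0

for t in range(24):                        # one day of hourly control steps
    outdoor = np.array([30.0])             # outdoor temperature disturbance [deg C]
    u = -0.2 * (T - T_set)                 # simple proportional controller (stand-in for MPC/RL)
    T = A @ T + B @ u + E @ outdoor        # linear state update
    total_cost += np.sum(u**2) + np.sum((T - T_set)**2)  # energy + comfort penalty

print(f"Total cost over 24 steps: {total_cost:.2f}")
```

Because the dynamics are an explicit state-space model rather than a black-box co-simulation, both model-based controllers (e.g., MPC over the same matrices) and model-free RL agents can be benchmarked against it in the same Python process.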

Submitted: Nov 27, 2022