Advanced Building Control via Deep Reinforcement Learning
Building control is a challenging task, not least because of complex building dynamics and multiple, often conflicting control objectives. To tackle this challenge, we explore an end-to-end deep reinforcement learning paradigm that learns, from data of building-controller interactions, an optimal control strategy that reduces energy consumption and enhances occupant comfort. Because real-world control policies need to be interpretable and efficient to learn, this work makes the following key contributions: (1) we investigate a systematic approach to encoding expert knowledge in reinforcement learning through “experience replay” and/or “expert policy guidance”; (2) we propose regulating the smoothness of the neural network to penalize erratic control behavior, which we find dramatically stabilizes the learning process and leads to interpretable control laws; (3) we establish a virtual testbed for building control that couples the state-of-the-art building energy simulator EnergyPlus with a Python environment, providing a systematic evaluation and comparison platform that will not only further our understanding of the strengths and weaknesses of existing building control algorithms but also suggest directions for future research. We experimentally verify the proposed deep reinforcement learning paradigm on the virtual testbed in case studies, which demonstrate promising results.
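To make the smoothness idea from contribution (2) concrete, the sketch below shows one way such a regularizer can be computed: penalize large changes in the policy's output under small perturbations of the input state, so that an erratic controller incurs a high penalty. This is an illustrative toy (a linear stand-in policy, NumPy only, hypothetical function names), not the paper's actual implementation.

```python
import numpy as np

def policy(theta, state):
    # Toy linear policy: action = theta @ state. This stands in for a
    # neural-network controller; any differentiable policy would do.
    return theta @ state

def smoothness_penalty(theta, states, eps=0.01, n_samples=8, seed=0):
    # Estimate how sensitive the policy is to small state perturbations.
    # A large output change for a tiny input change signals erratic,
    # hard-to-interpret control behavior; adding this term to the
    # training loss discourages it.
    rng = np.random.default_rng(seed)
    total = 0.0
    for s in states:
        for _ in range(n_samples):
            delta = eps * rng.standard_normal(s.shape)
            diff = policy(theta, s + delta) - policy(theta, s)
            total += float(np.sum(diff ** 2))
    return total / (len(states) * n_samples)
```

A scaled-down (smoother) version of the same policy yields a strictly lower penalty, which is the behavior a smoothness regularizer should reward during training.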