Title: Performance Metrics and Objective Testing Methods for Energy Baseline Modeling Software
Year of Publication: 2014
Authors: Jessica Granderson, David A. Jump, Phillip N. Price, Michael D. Sohn
Institution: Lawrence Berkeley National Laboratory
With advances in energy metering, communication, and analytic software technologies, providers of Energy Management and Information Systems (EMIS) are opening new frontiers in building energy efficiency. Through their engagement platforms and interfaces, EMIS products can enable energy savings through multiple strategies, including equipment operational improvements and upgrades, and occupant behavioral changes. These products often quantify whole-building savings relative to a baseline period using methods that predict energy consumption from key parameters such as ambient weather conditions and operating schedule. These automated baseline models streamline the measurement and verification (M&V) process and are of critical importance to owners and utility program stakeholders implementing multi-measure energy efficiency programs.
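To make the idea concrete, a minimal sketch of a weather-driven baseline follows. The abstract does not specify any particular model form, so this ordinary least-squares fit of energy against ambient temperature is purely illustrative; function names and the single-driver form are assumptions, and real EMIS baseline models typically add schedule, occupancy, and change-point terms.

```python
import math

def fit_linear_baseline(temps, energy):
    """Fit energy ~ a + b * temperature by closed-form ordinary least squares.
    An illustrative single-driver sketch, not any vendor's actual model."""
    n = len(temps)
    mean_t = sum(temps) / n
    mean_e = sum(energy) / n
    # Slope = covariance(temp, energy) / variance(temp)
    sxy = sum((t - mean_t) * (e - mean_e) for t, e in zip(temps, energy))
    sxx = sum((t - mean_t) ** 2 for t in temps)
    b = sxy / sxx
    a = mean_e - b * mean_t
    return a, b

def predict_baseline(a, b, temps):
    """Project baseline-period consumption onto new weather conditions."""
    return [a + b * t for t in temps]
```

In an M&V workflow, the model would be fit on pre-retrofit (baseline-period) data and then used to predict what consumption would have been during the post-retrofit period; the gap between prediction and metered use is the estimated savings.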
This paper presents the results of a PG&E Emerging Technology program undertaken to advance capabilities in evaluating EMIS products for building-level baseline energy modeling. A general methodology to evaluate baseline model performance was developed and applied to hourly whole-building energy data from nearly 400 small and large commercial buildings. Evaluation metrics describing model accuracy were identified and assessed for their appropriateness in describing baseline model performance, as well as their usefulness for identifying and pre-screening buildings suitable for whole-building savings estimation. The state of five public-domain models was assessed using the methodology and test data set, and implications for whole-building M&V were described. Finally, a protocol was developed to test EMIS vendors' proprietary models, navigating practical issues of test data security, vendor intellectual property, and the maintenance of appropriate testing 'blinds' while processing a large data set. Ongoing work entails stakeholder vetting, demonstration of the test procedures with new baseline models solicited from the public, and publication of the results for industry adoption.
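The abstract does not enumerate the accuracy metrics studied, but two standard choices for baseline model evaluation in this domain are the coefficient of variation of the root-mean-square error, CV(RMSE), and the normalized mean bias error, NMBE (as defined, for example, in ASHRAE Guideline 14). A minimal sketch, under the assumption that simple unadjusted forms suffice for illustration:

```python
import math

def cv_rmse(actual, predicted):
    """CV(RMSE): root-mean-square error normalized by mean actual consumption.
    Simplified form; some definitions divide by (n - p) for p model parameters."""
    n = len(actual)
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    return rmse / (sum(actual) / n)

def nmbe(actual, predicted):
    """NMBE: mean residual (actual minus predicted) normalized by mean actual.
    Positive values mean the model under-predicts on average."""
    n = len(actual)
    mean_actual = sum(actual) / n
    return sum(a - p for a, p in zip(actual, predicted)) / (n * mean_actual)
```

CV(RMSE) captures scatter around the prediction, while NMBE captures systematic bias; a model can have low bias yet large scatter, which is why screening on a single metric can be misleading.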
LBNL Report Number: