Elizabeth Titus:          Hello, everybody. Welcome to the webinar on Protocols for Advanced Measurement and Verification. I am – my name is Elizabeth Titus, and I am coming to you from the Northeast Energy Efficiency Partnership where I am the Senior Advisor for Research and Evaluation. NEEP is a non-profit organization committed to reducing energy consumption and reducing carbon emissions in our region. And I’m pleased to serve as moderator for this webinar. Next slide, please. This webinar is just one product of a multi-year research project that is very near to completion.

 

And the purpose of that project was to gain better understanding of applications of advanced measurement and verification for utility energy efficiency programs, in particular with an eye to evaluation and implementation. I want to give a shout out to the many collaborators and funders who made this project possible, starting with the United States Department of Energy, which is hosting this webinar, and thank you to Virginia Castro for taking care of that for us, and to the many other collaborators, which include Connecticut DEEP, Lawrence Berkeley National Lab, the utilities in Connecticut, Eversource and Avangrid, and then our state partners in other states within the region. We have been working on this for a number of years now. Next slide, please.

 

And just a quick note. We are muting all of the audience, and we kindly ask you to use the chat box for questions, and do not hesitate to ask questions throughout the webinar in the chat box. We will collect them all and hold a Q&A session and unmute at the end of the webinar. If time runs short to answer questions, we will provide some written responses as well afterwards. And a quick note of reminder that this webinar is being recorded. Okay, next slide, please. So, why should we be talking about protocols and guidance for advanced measurement and verification now? Why are they important? Well, one thing, I think we all need a little break from thinking about the Covid-19 virus, and next slide please.

 

On a more serious note, though, our energy industry experience with AM&V is growing, and over the past five or more years, we’ve accumulated a number of lessons that have been learned and are reflected in some of the activities that you will hear about today. Advanced measurement and verification is increasingly becoming a relevant tool for states. You may be familiar with the pay for performance program designs for which these tools serve very well, or demand response evaluations, or customer engagement platforms, or be thinking about how important it is to have time differentiated savings to really understand climate impact. All of these are examples, over and above AM&V’s penetration into the traditional energy efficiency program world as well.

 

Thirdly, the Efficiency Valuation Organization, EVO, which is the source of protocols I believe many of us are familiar with, IPMVP, is now developing products that specifically address the unique and special features of advanced measurement and verification. And these protocols help to ensure not just national but global consistency and credibility for impact evaluation results and building analytics. As we all know, guidance and protocols, we can think of them as bedrock, as the foundation for ensuring that our impacts are credible and that resources such as advanced measurement and verification are used appropriately. Next slide, please.

 

So, with this webinar, we are hoping to share a lot of information on advanced measurement and verification, guidance and protocols, that touch on a variety of topics to introduce new resources that are available for you to use now and that are coming soon, and to explore what role protocols play in deploying advanced measurement and verification in building analytics. And spoiler alert, be prepared to be delivered a full plate of information here, coming up on quite a variety of topics. And we’ll wind up with a quick snapshot of some future directions, what more and what next are needed. Next slide, please.

 

To set the table, I just want to share a couple of definitions very quickly. When we say advanced measurement and verification, we are thinking of large granular datasets, data that can be at very fine time intervals, and high volumes of data. And then the software capable of analyzing that data, and capable of producing near real time results. That at a high level is the definition. Protocols and guidance, we draw distinctions. Protocols we view as core concepts and conditions that ensure the credibility of a result; so, the foundation. Whereas guidance is advice on when and how to use protocols, when and how to apply a resource such as advanced measurement and verification. Next slide, please.

 

And with that, I’m delighted to introduce our lineup of experts who will be speaking, and the agenda, the topics that they’ll be speaking about. Our plan is to proceed just continuously, one after the other to allow us as much time as possible for conversation at the end. So, starting, we will have Kevin Warren, principal of Warren Engineering and Chair of the Evaluation, Measurement, and Verification committee at EVO. And he will be giving us a sort of an overview evaluation perspective on applications or best practice or recommended applications of advanced measurement and verification.

 

Following Kevin comes Eliot Crowe, Program Manager in the Whole Building Systems department of Lawrence Berkeley Lab. And he’s drilling down slightly to look at software testing protocols, and a guidance resource, an implementation guide, more of a program planning perspective. Then, next, coming to us is Lia Webster. She is principal at Facility Energy Solutions, and chair of the AM&V subcommittee at EVO. And she is going to cover I guess what I consider the heart, soul, and weeds of advanced measurement and verification with the project-specific details.

 

And then finally we have Carmen Best, last but not least, director of policy and emerging markets, and formerly lead of the evaluation team at the California Public Utilities Commission, with the state perspective on experience in developing and applying guidance and interacting with the protocols. After that, we wrap up with, as I said, a rapid fire kind of snapshot of future directions, where AM&V guidance and protocols are heading next, and then the question and answer. So, I hope you enjoy this. We look forward to speaking with you after that. And, with that, let me welcome Kevin. Next slide.

 

Kevin Warren:            So, most of the session is gonna focus on tools and methods and guidance, but we thought it would be helpful to begin with a framework and some categories and terms to set the stage. When we’re talking about AM&V, my experience is that the arguments and disagreements that we often have over this occur because we’re each thinking about a slightly different flavor or application, and basically talking past one another; one person is thinking big buildings and another is thinking residential, for example. So, let’s try to avoid that. Next slide, please.

 

                                    So, who will be applying these AM&V methods? Obviously ESCOs and EMIS providers might use continuous interval data analysis to prove the savings for a building tune-up, for example, or an impact evaluator might use AM&V methods to determine the savings from a utility program after it’s happened or in real time. But, it can also be embedded into program operations and run by the program implementer. So, to illustrate this, say you’re thinking about a direct install commercial program. Do you run it conventionally and have the evaluator do a, you know, AM&V, M&V 2.0 on all the participants, or do you embed the billing analysis into the program?

 

                                    There are a lot of benefits to using the data for more than just calculating the final savings – customer targeting, customer acquisition, ongoing QA/QC, for example – that you really only get when it’s embedded into program operations and not only done for, you know, savings reconciliation. But I think that’s what you’re primarily gonna be seeing from all of us, and so I’m primarily gonna focus on a framework thinking about AM&V as it’s embedded into programs. Next slide, please.

 

                                    So, in the program world, ex-ante savings are what we often call the savings claimed by a program prior to evaluation, so this term ex-ante 2.0 that you see up there is what I suggest we should be calling this idea of conducting pre-post billing analysis on a continuous or at least an ongoing basis for all participants. When this is done by the program implementer, this embedded billing analysis creates a new form of claimed savings. This ex-ante 2.0 term is obviously a play on M&V 2.0, and intended to make clear that these claimed savings are pre-evaluated savings. So, let’s think a little more specifically about how programs will use AM&V, and what existing M&V knowledge and protocols tell us about that. Next slide, please.

 

                                    So, for any program where the customers’ payments or guarantees are based on the M&V, you obviously care a lot about the facility-level savings. Generally programs are also gonna wanna know the site-specific savings for, you know, for large projects with some reasonable level of rigor, regardless of how the incentives are calculated. But in other cases, you might care primarily about program-level impacts. These are what are called population approaches. You’re going to get site-specific savings values but you only care about the rolled up results, so it matters less if the individual facility-level savings are very imprecise. And beyond the program versus site-level distinction, the size and homogeneity of the measures matter, because they dictate how we’re going to address non-routine adjustments. You’re going to hear a lot about non-routine events and adjustments throughout the webinar. Why do we care so much about them? Next slide, please.

 

                                    So, the equation there on top is the basic IPMVP savings equation. Basically we look at the energy use before and after the measure’s installed, and then we make routine adjustments, typically for weather, and then also non-routine adjustments for all the things that happen in buildings unrelated to weather or the measure. So, nothing about the use of shorter intervals or doing billing analysis for all participants, you know, the advanced part of AM&V, alters the fact that all billing analysis requires these adjustments. Lots of things happen in buildings, not just our ECMs, and how we deal with them and the protocols that apply are largely driven by the homogeneity of the population and the measures.
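
For reference, the basic IPMVP savings equation Kevin is describing (the slide itself is not reproduced in this transcript) is generally written as:

```latex
\text{Savings} \;=\; \big(E_{\text{baseline}} - E_{\text{reporting}}\big)
\;\pm\; \text{Routine Adjustments (e.g., weather)}
\;\pm\; \text{Non-Routine Adjustments}
```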

 

                                    For residential programs, where we have relatively homogenous facilities and large numbers of participants, we create comparison groups to let us adjust for what would have happened in our houses without the program. The UMP chapter eight requires comparison groups, and the SEE Action impact evaluation guide does also. But comparison groups aren’t generally an option for non-residential. It’s just too hard to identify large numbers of non-participants that are sufficiently similar to our participants. So, we have to identify non-routine events and calculate site-specific non-routine adjustments. The IPMVP talks about the importance of NRAs, non-routine adjustments, as do several UMP chapters. It’s also pretty obvious from decades of ESCO experience how important non-routine adjustments are to M&V. Next slide, please.

 

                                    So, these basic fundamentals of billing analysis lead us to four categories or flavors of ex-ante 2.0, four ways a program could do this embedded billing analysis. So, the first two are the population approaches, and then the next two are the facility-level. The first, population with comparison, is when a program is trying to conduct something like the UMP chapter eight protocol for all projects. There is a comparison group basically embedded into the analysis. Because it’s residential, it might be monthly data instead of interval, and, yeah, you care about the program-level, population-level savings, and don’t really care too much if the individual facility-level savings are highly uncertain. The second is if you do that, you just run all your participants through a billing analysis, but you forgo a comparison group. That’s another option.

 

The next two are the C&I approaches where we want site-specific adjustments rather than comparison groups. And basically we just have a high rigor and a low rigor option. So, embedded option C – and I’m using the option C term here because it’s basically, you know, compliant with IPMVP option C – is when programs choose to identify and quantify non-routine adjustments with high rigor. But due to the cost, engineering resources, and customer contact required for this approach, this flavor is likely to be limited to programs where the average savings are large, or ongoing contact with the customer is a feature of the program.

 

So, the last one, raw site level, is where, you know, you elect to conduct embedded billing analysis where, say, the average savings are small, and so you just don’t have the budget to really delve into NRAs at high rigor. Maybe you, instead, use automated methods to identify what might be a non-routine adjustment. I think of this as sort of the commercial direct install option. I think this is the one that gives people the most concern because of the rigor, but as I’ll show next, just because an ex-ante 2.0 savings might be, you know, relatively low rigor doesn’t mean that it might not have value. Next slide, please.

 

So, almost everybody shares an interest in aligning claimed and evaluated savings. Nobody likes big evaluation adjustments. However, the quest for perfect alignment can very often lead to bad results. You either restrict evaluators and essentially dumb down their work, or, going the other way, you impose unnecessary evaluation rigor on all program claims. Neither is normally ideal. There’s no fundamental reason that incentive payments or claimed savings need to comply with evaluation goals for rigor. So, just because some of the 2.0 flavors aren’t evaluation-quality doesn’t mean that programs can’t use them if there’s other benefit.

 

So, this table shows the evaluation tasks that should be expected for programs running each flavor of ex-ante 2.0. Basically, for the population methods, the focus is gonna be on the comparison group, either assessing the existing one, if one was built in, or creating one if one was not used. For C&I programs that have embedded billing analysis, I think we can evaluate them much as we’ve always done, but the difference is that we have more information at the beginning of the process. Instead of just having the tracking database and the program’s engineering calculations, we’ve got this billing analysis for each participant.

 

So, you can do things like, you know, pull a sample, and for each project determine if this billing analysis, option C, makes sense. You know, if it’s a very, very small percent savings, option C billing analysis is typically not the best option. But if it does make sense for that project and you’re gonna stick with that, then, you know, the evaluator has the option to dig into NRAs with relatively high rigor, because you’re only looking at a sample of projects. You can spend that time looking into missing dates, looking into missing data, those sorts of things.

 

We might need to adjust baselines if we find a project just doesn’t make sense to have an existing conditions baseline. Calculate realization rates as we’ve sort of always done, and then I think there’s going to be, you know, site visits and perhaps other M&V methods to answer the whys. You know, if you rely only on billing analysis, you know almost nothing about why a particular project failed or exceeded expectations. Programs are generally gonna wanna know this. So, you know, was the new equipment used less or more than expected, what could the implementer have done to have a better prediction next time, those kinds of questions. Next slide, please.

 

So, finally, I’m mostly trying to get you to think about these different flavors and how the program and evaluation parts might fit together, but I also wanna mention that evaluators are gonna have some cool new things, new options with AM&V. But I think primarily with AM&V being the starting point, not necessarily the final answer. So, say we’re evaluating that commercial direct install program that used ex-ante 2.0. We’re gonna have a lot more information when we sit down to evaluate it. We’re gonna wanna leverage that billing analysis, even if we don’t just accept it all at 100 percent. You could also potentially crank it through a 2.0 engine basically at the start of your impact evaluation and then use that as a jumping off point.

 

So, for example, we’re gonna have a lot more information about the timing of savings, basically a savings load profile for each participant. And there’s gonna be interesting new options for new sampling strata based on some new metrics that are available to use when we have this billing analysis for all participants. So, in conclusion, as we learn more about the tools and guidance, try to keep in mind, you know, which flavor they apply to. And I’m gonna hand it over to Eliot Crowe from LBNL who’s gonna talk about some of the AM&V tools available, and how they can be assessed.

 

Eliot Crowe:               Excellent, thanks very much, Kevin. And if we go straight to the next slide, here, I will dive in. The advanced M&V process moves through several phases, and I certainly will not try to talk you through all of it. I will be focusing mainly on the approach to selecting and validating M&V software. But before I do that, I’ll just give a few words on the overall process. Now, as Elizabeth mentioned in her introduction, this work is part of a DOE-funded M&V pilot effort in the Northeast, specifically working in Connecticut. And, as part of that team, we developed an implementation resource guide that was pulling together helpful guidance for programmatic implementation of advanced M&V methods. So, it’s really looking at the programmatic level, slightly higher than getting into all the technical weeds of methods.

 

And that guide is going to be posted in a few places. It will surely end up on NEEP’s website, Connecticut Department of Energy and Environmental Protection will be posting it on a website they have in development, and we also expect to see that in DOE’s state and local solutions center. And so, I’ll say no more about that right now. I think Lia will be diving more into the weeds of the process, but just wanted to give a heads-up and give you all a snapshot of what will be in the resource guide that’s gonna be posted there. Next slide, please.

 

So, we’re gonna focus here on the M&V tool itself. There are a lot of tools out there. We actually worked on a 2017 report that categorized 16 commercially-available tools that offered advanced M&V functionality. They used different model types. They were targeted at some different use cases, et cetera. The tools out there are a mix of proprietary black box models mixed with more transparent, open-source models. And on the slide here, you’ll see listed five examples of free tools with fully transparent model specifications. There’s certainly no judgment on our part of which is best. I think the point is you need some way to assess the quality of these tools. Next slide, please. I’m not seeing – oh, there we go.

 

So, for someone wanting to use an advanced M&V tool, whether that’s a hands-on M&V practitioner, an evaluator, an ESCO, or a utility program manager, there are many considerations that will come into play when you’re looking to select your tool. Do you want to be able to customize the energy models? For example, considering multiple alternative model forms, or having a variety of potential independent variables. Can the tool be easily configured to output baseline-model goodness-of-fit metrics so you can assess accuracy? Will you be reporting savings for each building individually or for an aggregated portfolio? Can you handle batch-mode analysis of data from many buildings, or is it really being configured to execute on just one building?

 

Do you need to be able to accommodate continuous data feeds? Do you want additional input variables beyond ambient temperature? Has the tool been vetted? Has it been used in other pilots? Has it been through third-party testing or some other way that it’s been assessed? And then, there could be some additional value-added features in the tool that you’d be interested to look at, such as a customer-facing dashboard, the ability to identify efficiency opportunities. And some software has project management features. So, there’s a lot of parameters available through these tools. If we go to the next slide here.

 

But here I’m just focusing on one of those aspects. So, beyond the general criteria, there is that fundamental question of whether an M&V tool is accurate. Now, the challenge here is how can you determine the accuracy of a tool that’s designed to measure energy that didn’t get used? This was a very big question for the industry when these kinds of tools emerged around a decade ago. And in developing a test method, three key aspects needed to be considered. Firstly, in order to assess predictive accuracy of these tools, you really need so-called out-of-sample testing. That’s a best practice whereby an energy model is created, and then its accuracy is tested using data that wasn’t used in the model creation.

 

Second up, robustness. If you use a dataset large enough and diverse enough, that’s gonna give you a chance to see if the tool is versatile across a variety of situations. And, third, for trustworthiness, you need to make sure that a test can’t be gamed or cheated. Now, the graphic here shows an example two-year dataset from a building. First, a model is created using items one and two here. That’s energy data and weather data from the first year of that dataset. Item three, the weather data for the second year, is then fed into the model, and the model predicts this particular building’s energy use, which is item four on the graphic there with the question marks. Now, if you repeat this for hundreds of buildings, you get to see how accurate the tool is across that whole breadth. Can we go to the next slide here, please?
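
As an illustration only, not the portal's actual code or dataset, a minimal sketch of that out-of-sample workflow might look like the following, assuming hypothetical CSV files with a timestamp, hourly temperature, and hourly energy for one building (the year-two energy is withheld from the modeler):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical inputs: year 1 has timestamp, temp_F, kwh (items 1 and 2);
# year 2 has timestamp and temp_F only (item 3) -- its kwh is withheld.
year1 = pd.read_csv("building_year1.csv", parse_dates=["timestamp"])
year2 = pd.read_csv("building_year2.csv", parse_dates=["timestamp"])

def features(df):
    # A deliberately simple feature set: outdoor temperature plus hour of week.
    # Real advanced M&V tools use richer model forms than this.
    return pd.DataFrame({
        "temp_F": df["temp_F"],
        "hour_of_week": df["timestamp"].dt.dayofweek * 24 + df["timestamp"].dt.hour,
    })

# Train the baseline model on year 1.
model = LinearRegression().fit(features(year1), year1["kwh"])

# Item 4: predict year-2 energy from year-2 weather; the test then scores
# these predictions against the actual, withheld year-2 meter data.
year2["kwh_predicted"] = model.predict(features(year2))
```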

 

So, Berkeley Lab developed the test method, and last year it was licensed to the Efficiency Valuation Organization, EVO, to be delivered through an online portal. And the main intent of this portal is for an individual or organization to test out their model or software using this service. And they can use the results, then, to provide some assurance to people using that tool. They might also use it several times, as a way to see if their model refinements are improving their accuracy scores over time. And you can potentially come at this from another angle, whereby a utility may not be using this kind of portal directly, hands on, but they might require their M&V software vendors to go through this test and to share their results in order to qualify as a provider for one of their programs. I’m gonna just talk you through how it works here if we go to the next slide, please.

 

So, it’s basically a three-step process. Step one, once the user has set up their account, they download an anonymized dataset of two years’ worth of hourly energy and weather data for hundreds of buildings. They use that to create a baseline model for every building in that dataset, and we call that training the M&V tool. Moving to the second box on this graphic, the user then takes ambient temperature data for a different year for those same buildings and uses their models to predict the corresponding hourly energy use across that period. And then onto the third box, those predictions are uploaded to the EVO online portal, and the user, then, receives the test results without ever seeing the actual energy data they were trying to predict. And that’s how we prevent the gaming of this tool. Go to the next slide here, please.

 

Now, the two metrics by which the portal will assess the M&V tool’s accuracy are normalized mean bias error, or NMBE, and the coefficient of variation of the root mean squared error, CVRMSE. These are standard energy model fitness metrics, but it is important to note the distinction in how those metrics are used with this online tool testing. Now, for model fitness, the left-hand portion of this graphic, the tool is being measured on its ability to match the data that was used to create it. So, in theory, you could aim for a perfect match. To assess the predictive capability of the M&V tool, the testing uses data that wasn’t employed in creating the model. That’s what we see on the right-hand side. It’s from a different year, buildings change, the data isn’t perfect. So, while the test uses the same metrics as for model fitness, you need to be mindful when you’re actually setting the bar for what constitutes a good result.
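
For reference, a generic way to compute those two metrics, shown here as an illustration rather than the portal's exact scoring code, and without the degrees-of-freedom adjustment that ASHRAE Guideline 14 applies for in-sample model fitness:

```python
import numpy as np

def nmbe(actual, predicted):
    """Normalized mean bias error, in percent: net over- or under-prediction."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.sum(actual - predicted) / (len(actual) * np.mean(actual))

def cvrmse(actual, predicted):
    """Coefficient of variation of the RMSE, in percent: scatter around the prediction."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return 100.0 * rmse / np.mean(actual)

# Example usage with the earlier sketch, once the withheld year-2 meter data is known:
# print(nmbe(year2["kwh"], year2["kwh_predicted"]),
#       cvrmse(year2["kwh"], year2["kwh_predicted"]))
```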

 

And on the next slide here we’ll get talking about the actual results themselves. So, your model goes through the test; you get the metrics. But the end result here, it’s not a pass/fail; it’s not some sort of a certification. Given the real world imperfection of the data, this test is more a case of getting an objective benchmarking against other tools. Now, as more tools go through this test, it’s possible that rules of thumb will emerge as a way to judge certain results as good versus so-so versus bad. And right now, you see the link to the website here on this page. So far, there are 30 tests showing up on this website, and the portal does require all test results to be made public, though the users can anonymize the tool name when they do that. And if we go to the next slide here. Need to go a little bit further. There we go, thanks.

 

So, the analogy that we use for the tool testing is that it tells you that you have a good hammer, you have a good tool. But that alone doesn’t guarantee the tool will produce accurate results every time, in the same way that simply having a good hammer does not. So, when considering overall the potential uncertainty when you’re trying to estimate savings using advanced M&V, the tool’s model accuracy is just one source of error. And as you see Lia’s presentation, I think you’ll get more of a sense of the overall quality assurance in the process that can really help ensure that you are gonna get better results, even, you know, with your good tool. But so, I’ll be handing off to Lia here in a moment, but I’ll just touch on a couple of things briefly before handing over, if we go to the next slide here.

 

I won’t say too much here. I think Lia covers this more comprehensively, but model fitness, I did mention earlier on that model fitness metrics are not the ideal measure of a tool’s overall accuracy, but they certainly are recommended as a way to check the baseline model for any individual project. And what we see on the slide here is some of the recommended metrics. We see a couple of examples. The top graphic there shows a visual of what good model fitness looks like, and the lower one is not so good. Given the time limitations, I’ll say no more than that for right now other than just to recommend this as a best practice. Go to the next slide, please.

 

And beyond that, there are many other baseline considerations, far more than would fit on this one slide. But, you know, one of the really nice things about advanced M&V is it really opens up a new world of visualization possibilities and opportunities. So, here we see a couple of charts which help you get an understanding of the building you’re dealing with and the accuracy of the model. You get these visual cues on whether you have a good model, beyond just your model fitness metrics. I won’t say too much, but, yeah, there’s a lot of angles and a lot of opportunities. Next slide here, please.

 

And just, again, reinforcing that same point here with some graphical examples of what it looks like to monitor savings as they’re accumulating using advanced M&V. You know, these kinds of tools offer excellent risk management opportunities. They enable you to catch potential problems early, and give you the chance to resolve them. And in the interest of time, I will cut there and hand off to Lia Webster.

 

Lia Webster:               Hi. Thank you, Eliot. Next slide, please. Great, so I appreciate everything that Kevin and Eliot had to say. I really briefly want to put that perspective on the screen as a whole, and then I would like to dive into protocols and guidelines, since that is our fundamental quality control basis for measurement and verification. It’s important that we review those and understand the fundamentals. And there’s a lot of excitement around advanced measurement and verification approaches, but they are not silver bullets. And there are several technical issues that are worth highlighting. I think Eliot did a really nice job talking about some of the complexities there within the modeling arena. And then I’d like to touch upon some of the areas that are under development, some of the work that my team is working on, along with some other technical resources that are available. Next slide, please.

 

                                    So, just really quickly – I’m sorry, if you wouldn’t mind going back, Virginia. Just very quickly to look at the program perspective, I am here to talk about project-focused measurement and verification, which applies primarily to commercial and industrial sites that have unique, perhaps custom, projects that include a detailed measurement and verification plan. Non-routine adjustments are an important aspect of a project-focused measurement and verification effort, and it results, typically, in fairly accurate site-level reporting of energy savings, which are the ex-ante compared to Kevin’s ex-post, which you see there in the impact evaluation box on the far right.

 

So, AM&V is also used for aggregated approaches, also called population approaches, which Carmen Best will be talking about next. And those primarily apply to small commercial and residential programs that include a uniform population. These programs have tended to include generous acceptance criteria for individual projects as opposed to our project-focused approaches, which include more rigorous attention to those details on an individual project level. And they’re looking towards the portfolio-level overall savings with a targeted fractional savings uncertainty as a goal, typically. Next slide, please.

 

So, the IPMVP: the International Performance Measurement and Verification Protocol. This protocol, I hope that most of you are familiar with it. It’s used worldwide. It has been translated into nine languages, and our committee calls are often at very odd times. Thankfully there’s more of us in the states than there are in the far-flung regions of the world, but we certainly have plenty. The IPMVP is the basic protocol that applies to measurement and verification, and it is really a fundamental protocol for AM&V as well, as you’ll see here in the next slide. It provides a framework to determine savings.

 

Savings are the absence of energy use, which is not as trivial to measure as you might originally think. There are four different general approaches that are proposed within the core concepts of the IPMVP, and it is very much focused on individual projects. Another example of an M&V protocol would be the Superior Energy Performance protocol, which is the basis for ISO 50001. It’s relatively synced with EVO. It’s newer, and it’s external to our work. Next slide, please.

 

The other protocol that applies to the work that Kevin was speaking of specifically is an EM&V protocol. So, one of the fundamental differences between a protocol and a guideline, as Elizabeth mentioned at the introduction of our webinar here, is that a protocol provides a framework. It provides terms and definitions, and sets up the subsequent documents, which are usually guidelines that are application specific. And so, the primary difference in the energy efficiency evaluation protocol and the measurement and verification protocol is that there are additional approaches included and they use different terminologies. So, they include M&V approaches, which are the four IPMVP options, which are included in the core concepts, but they also include deemed savings approaches and large-scale analyses, including those comparison groups that he mentioned before. Many of the terms have very similar meanings, but they are different. So, just something to note. Next slide, please.

 

As an example of guidelines that are evaluation specific, the DOE’s uniform methods project, or UMP, guidelines are a compilation of a bunch of different methodologies, and Kevin called out several of those. The whole building, the retro-commissioning, the A2C controls, and the strategic energy management protocols are individually contained within this document. They are focused on evaluation, although, being super focused on measures, oftentimes there is some crossover to their use in programs.

 

The EPA guidebook on EM&V is a little bit newer; it’s a more condensed version that summarizes some of the contents of the UMP and promotes best practices. It’s intended for states and municipalities. So, that might be a document that is of interest to this group. There are also state-by-state guidance documents related to EM&V. California’s standard practice manual has been out there for a long time – 1983, I see here in my notes. And so, it really has been the basis for much of the evaluation work and protocols subsequent to this. States also have TRMs, public utility commission guidance, and other guidelines. Next slide, please.

 

An example of a measurement and verification guideline that is focused on the project level is ASHRAE Guideline 14, and that’s a pretty famous one. It is in line with IPMVP; IPMVP is the granddaddy of this one, also. It is, itself, also a subset of IPMVP, although not completely adherent in all cases. The strategic energy management program has a slightly different flavor and approach in terms of how they calculate savings. And then there is a myriad of M&V guidelines out there. It’s amazing how many states and, of course, every utility has their own. Next, please.

 

I wanted to go back to the IPMVP, though, just for a quick moment, because it’s the fundamental protocol for all of the remaining guidelines that we develop, and I know the topic of this webinar is guidelines. So, we wanna talk about what adherence with IPMVP is. This is a little bit misunderstood and not paid much attention to. Next slide, please. And so, I wanted to go over some of those details with you. So, as a protocol, it establishes principles of measurement and verification. And I think those principles are evident in the way that we apply measurement and verification in our programs and in our projects: they’re conservative, they’re consistent, they’re relevant, they’re complete, they’re transparent, and they’re accurate.

 

And so, all of these apply to all of our methods. In order to adhere to the IPMVP, you keep these principles in mind as you follow the specific procedures that are laid out within the document. The core concepts document is not a long document. It is seven chapters, I believe, and it has details for what must be specified for each project in their plan and in their report. It asks that folks use the terminology that is appropriate to IPMVP, because these are complicated topics that require clarity in discussion, and also to use the equations that have been prescribed.

 

There is a mandate to consider the uncertainty included in the savings estimates, and that operational verification of the measures implemented be conducted to establish the potential to perform and save energy. Option C specifically, which is one of the four options, A, B, C, and D, is the whole-building metering option. In order to adhere to this, there are some specific energy data requirements, such as limited use of estimates in the baseline period and the inclusion of non-routine adjustments, and there is also some high-level guidance on regression modeling. So, as Elizabeth alluded to, this is an area that we are working on, this option C whole building-specific guidance for IPMVP. We have an upcoming application guide that is on Eliot’s desktop and mine at this time, and we’re really looking forward to getting that out. Next slide, please.

 

So, that said, there is a lot going on, as Elizabeth said. You know, we are – our industry is adjusting to the new normal, which includes advanced measurement and verification in various formats and tools. We have seen an onslaught of pay for performance utility programs and other types of programs that use advanced measurement and verification. Obviously, California is leading in this way. Seattle City Light is also very proactive, as are New York and other organizations in California. As Eliot highlighted, the software is developing quickly with continued development of open-source methods and collaboration on methods, I would say. And then we have a lot of ongoing research, and we’re looking into uncertainty methods.

 

We have an issue right now in that the fractional savings uncertainty methodology that we’ve all come to rely upon in our advanced measurement and verification analyses has been a bit unhinged because of the use of interval meter data. And so, this has brought forward some issues that are a very active ongoing research area. I see that David Johns is on the line. He’s working with Lawrence Berkeley Lab now on a project. The IPMVP committee is very active on the uncertainty subcommittee. And the non-routine adjustments are also going to be highlighted and detailed in the upcoming application guide for advanced measurement and verification. Next slide, please.

 

So, I mentioned there are a bunch of current programs, either utility-sponsored or otherwise sponsored, such as by a state or the Energy Trust of Oregon, where I am here. And they are really leading the way across multiple sectors. We have residential programs, which Carmen is gonna highlight, and small commercial, which fall into those uniform population groups. And then we have a lot of programs focused on commercial and industrial, which do require that very project-specific attention to detail that’s mandated in our M&V protocols and guidelines. Some of these are really excellent programs. They have established forms, documents, guidelines, procedures, and so if you are in the market to create a program or are trying to determine some of the details for yourself, a review of these programs can be extremely informative. Next slide, please.

 

As Eliot talked about, there is quite a bit of software out there, and he mentioned that they did an assessment a year or so ago of 16 different M&V analysis and modeling tools. We subsequently did another analysis of the free and open-source tools that you see listed on the screen here, which include ECAM, which is a long-standing and continuously upgraded Excel add-in that’s been in use for a long time and is popular. We have RMV2.0, which is LBNL’s tool; OpenEEmeter, which is Recurve’s, based on Recurve’s approaches. The UT3 and the Z module incorporate all of those approaches, and then NMECR is a new open-source package from kW Engineering that also incorporates most all of those methodologies, and made a couple of tweaks which look good.

 

Some of the features that are detailed, and things to think about when you go to evaluate software and to apply software, are the types of models, which Eliot, I think, did a nice job of discussing. You know, depending on the kind of data that you need to include and features you wanna track in your program, additional variables and inputs could be warranted. Of course, the data granularity, the level of automation, whether or not it calculates avoided or normalized savings or both. They’re all things to be thinking about as you look at software. Next, please.

 

Some of the key considerations are fairly obvious. I think each of the speakers here today continues to highlight the same issues in different flavors. The application issues will largely work themselves out by the context that you’re working within, whether you have a commercial or residential program, whether or not you need to calculate normalized savings for utility planning purposes and year-over-year planning, or if you’re able to use a more simplified avoided energy use approach.

 

I think the other big thing that varies in applications is the level of automation that is included in the analysis and in the program tracking in these programs and applications. And that’s largely cost-driven, and, you know, the assessment between periodic reporting and having a live dashboard with things connected is a cost-effectiveness, you know, one-off conversation that each application really has to have. The need for customized models is a very important part, and may help drive that discussion. The energy data access and having energy data connected is getting better and better, and most providers have moved towards being able to do that, although data cleaning and other issues still arise.

 

So, I guess, really, the real-time nature of these results costs quite a bit of money. And so, keeping that in mind, I think, is very important. You know, if you’re in it for the long haul and have the infrastructure to set up, then that will offer lots of benefits, but there are other, leaner ways to implement pilot programs and things of that nature. The technical issues, I think Eliot did a nice job of highlighting several of them, including the tools and the need for customized models. The limitation in the savings uncertainty calculations is an issue, as I already mentioned, a little bit of a problem. I wanna cover that next. Another thing to be keeping in mind is the approaches do not work for all buildings. Buildings have to have nice, repeatable, predictable energy performance, and the ones that don’t, you know, we deem these bad buildings in AM&V. The non-routine events and non-routine adjustments are a technical issue that I am really happy to say we’re making some progress on now, and we’ll be able to share that with everyone soon. Next slide, please.

 

So, one of the primary things that’s different in the M&V tools, as you start to look under the hood and speak with the application engineers who’ve used these on a regular basis, is the form of the models themselves and how well that works with the data from the facility at hand. So, there are the change point models, which are created in ECAM, and which originated from an ASHRAE project _____ quite a few years ago in the early ‘90s based on monthly data. And what you see there in that image on the screen is a five parameter change point model, a 5P.

 

And you can see there are three segments and two slopes, which gives us five parameters. And these very closely mimic the physics of commercial buildings and how they often perform, and they give us indicators on performance in addition to making nice models. The other kind of model that’s very popular is the time of week and temperature model, and that model has been quite effective in many, many places, and it’s a game changer. It was a huge hit, and we appreciate LBNL’s work on that. There are now tools that include multiple model types, that take these two methodologies and are able to pick one or the other in order to pick the very best one for the individual facility. The inclusion of holidays, or additional variables such as occupied square feet if you need to track that in your program, like Seattle City Light likes to do, is a bit of a difficulty if you’re using a time of week and temperature model in some cases. Next slide.
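
As a rough sketch of what a five-parameter (5P) change point model looks like in code, illustrative only and not ECAM's implementation, with scipy doing the fitting (the five parameters are the base load, two slopes, and two change points):

```python
import numpy as np
from scipy.optimize import curve_fit

def five_parameter(temp, base_load, heat_slope, heat_cp, cool_slope, cool_cp):
    """5P change point model: flat base load, with energy rising as temperature
    drops below the heating change point or rises above the cooling change point."""
    heating = heat_slope * np.maximum(heat_cp - temp, 0.0)
    cooling = cool_slope * np.maximum(temp - cool_cp, 0.0)
    return base_load + heating + cooling

# temps, kwh would be arrays of daily average temperature and daily energy use.
# The p0 initial guess below (base load, slopes, change points near 55F and 65F)
# is purely illustrative:
# params, _ = curve_fit(five_parameter, temps, kwh,
#                       p0=[kwh.mean(), 1.0, 55.0, 1.0, 65.0])
```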

 

So, the fractional savings uncertainty. As I mentioned, this is from Guideline 14, and it’s based on the concept that savings uncertainty decreases over time. We have a very exacting calculation for, you know, how many data points we think we’re gonna have, and we have correction factors. But what we have found through our ongoing analysis is that these correction factors are not sufficient, certainly for hourly data, and they’re also still under-predicting uncertainty for daily data. So, we are a little bit chagrined by that and having to go back to some basics, and I know there are some people on this call who are smarter about this topic than I am.
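
For context, the fractional savings uncertainty formulation being referred to, in the form commonly cited from ASHRAE Guideline 14 (stated here as background, not a quote from the slides), is approximately:

```latex
\mathrm{FSU} \;=\; \frac{\Delta E_{\text{save}}}{E_{\text{save}}}
\;\approx\; t \cdot \frac{1.26 \cdot \mathrm{CV(RMSE)}}{F}
\cdot \sqrt{\frac{n + 2}{n\,m}}
```

where t is the t-statistic, F is the savings fraction, n is the number of baseline periods, and m is the number of reporting periods; for daily and hourly data, an autocorrelation correction typically replaces n with n' = n(1 - ρ)/(1 + ρ), and it is this kind of correction that the ongoing research suggests still under-predicts uncertainty.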

 

                                    But, for now, we’ve got fractional savings uncertainty. We take it with an asterisk, a grain of salt, and we rely strongly on our goodness-of-fit metrics for the model. Those have become even more important. So, it’s always been about the fundamentals. As a statistical-based approach, advanced measurement and verification requires a quality modeling procedure; it’s just not as transparent. We can still assess our models well; it’s just not as transparent as using the fractional savings uncertainty. Of course, the basic purpose of assessing the model savings uncertainty is to be sure that you can detect the savings and that you will measure all of the savings. And so, the less uncertainty there is, the more sure you are of your savings numbers.

 

So, by using stringent model acceptance criteria, you can really improve your overall savings certainty. It also allows you to screen more easily for non-routine events. If you have a good model, you’ll be able to see anomalies. One thing to think about as you go to implement these programs and projects is that they will not apply successfully to all projects. You should have a back-up plan, another M&V approach; you don’t wanna eliminate projects or possible participants. So, having some flexibility is important. Also, having a fallback M&V plan for your very critical project in case something goes terribly wrong with a non-routine event and the metered savings are flawed. You wanna make sure that you’re still able to capture those savings.

 

Avoided energy use is a little bit more accurate, especially in extreme circumstances, than normalized savings, which requires a secondary model be made in the performance period. So, you have two sets of model uncertainty contributing to your savings estimates, and then, on top of that, it’s at a fictitious set of conditions. One of the things that I think is really great about this: as an engineer, of course, I want the most accurate models possible, but also, by doing so, you will increase your program savings.
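
To make that distinction concrete, written here in a generic form consistent with IPMVP rather than quoted from any one guideline: avoided energy use needs only the baseline model, evaluated at actual reporting-period conditions, while normalized savings needs a reporting-period model as well, with both models evaluated at normal (typical-year) conditions:

```latex
\text{Avoided energy use} \;=\; \sum_{\text{reporting period}}
\hat{E}_{\text{baseline}}(x_{\text{reporting}}) \;-\; E_{\text{reporting}}
```

```latex
\text{Normalized savings} \;=\; \sum_{\text{normal year}}
\Big[\hat{E}_{\text{baseline}}(x_{\text{normal}}) \;-\; \hat{E}_{\text{reporting}}(x_{\text{normal}})\Big]
```

where the hatted terms are model predictions and x denotes the independent variables (weather and so on).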

 

Having an accurate model is going to allow you to capture a lower level of savings at each individual project, so you’ll have more savings to report. If your model is more accurate, you can detect lower percent savings, and you can then lower your threshold for who is eligible to use an advanced measurement and verification approach, by being able to measure lower levels of savings. The more accurate models will also allow you to detect your non-routine events much more successfully. If you have a lot of noise in your model, it’s quite difficult to ascertain whether or not an anomaly is a non-routine event or just noise in your model. And with more accurate savings, people on Kevin’s team are gonna be kinder to you and give you higher realization rates. Next slide, please.

 

So, this is a quick summary of some of the primary measurement and verification guidelines. Now, ASHRAE Guideline 14 has really been the go-to for years, and it relies upon the fractional savings uncertainty. So, you can see they have a relatively low threshold set for that savings uncertainty. It’s pretty generous, trying not to scare people off from participating. As I mentioned, the IPMVP team is working with the Guideline 14 team to update the IPMVP uncertainty guide and include some updates on that. So, we’ve got the statisticians at work.

 

Some of the other things you might notice from this table are the level of detail that’s required. It’s not just the CVRMSE, the R-squared, and the normalized mean bias error, which Eliot was describing. It’s those intermediary steps along the way as well. You know, it’s the validation of the individual variables and model forms that are very detailed. And this last item here, BPA’s regression for M&V reference guide, is a really nice reference guide on some of these basics, because by the time you get to an advanced measurement and verification guideline, some of these details are assumed. So, you need to have passed your statistics at that point. I just have a couple more slides here; next, please.

 

So, one of the important things that I’ve learned in working in this non-routine event study is how important it is to screen for them and how prolific they really are when you start looking closely. In the baseline period, if you don’t address non-routine events, you’re increasing the uncertainty in your basic energy model, in your baseline, which will propagate throughout your project. If you have a non-routine event that occurs during an implementation period, it can easily obscure the savings that you were hoping to measure from your ECMs. And then, of course, on an ongoing basis, depending on whether you’re using avoided energy use or normalized savings, it will directly increase or decrease the measured energy savings, or reported energy savings.

 

Other areas of ongoing development are the software. As I mentioned, we have several new open-source platforms. There was recently a big ASHRAE _____ contest for $25,000 that completed. We should be getting that posted as open source soon. The uncertainty methods are underway. I think there are some outstanding issues regarding how the evaluation of some of the aggregated approaches will turn out over time, and the efficacy of those savings. And I think that’s going to be really interesting. And then, there’s a lot of chatter about automating non-routine event detection and adjustments, and Lawrence Berkeley Lab has done a great job at getting us started on that. Next, please.

 

The current method we have for non-routine event detection, the automated method, is a little bit overly prone to false positives. So, we are not advising its adoption quite yet. This last slide that I have is basically a summary of some of my favorite documents that I think would be helpful if you’re interested in reading up on the technical details. I wanted to highlight the last two on that list, which include the Lawrence Berkeley Lab deliverable to this project, which is an application guide for starting these kinds of programs.

 

And then the Seattle City Light pay for performance program has an extremely technical guideline that they have developed, which is a nice example, potentially, for others. And then, once again, just to highlight that we do have IPMVP producing an application guide on advanced M&V methods and non-routine adjustments. So, keep your eye out for that. And with the next slide, I would just like to say thank you, and introduce Carmen Best from Recurve. She is going to take it from here.

 

Carmen Best:              Great, thanks Lia. And thanks to NEEP for the opportunity to share among my distinguished colleagues here. I really appreciate the efforts of NEEP and _____ to keep the conversation moving forward on advanced M&V, and I’m hopeful that my good friends today will not take offense to my play on words. We’ll change the slide, please. You can go to the next one, too. So, advanced – we use it in the context of M&V generally to describe a next evolution of methods. And as was laid out today, there has been quite a bit of research and effort going into that from Kevin’s conversation about taxonomy to the field testing that Eliot described.

 

A lot of effort has gone into advancing these methods, but I’m most excited about the other aspects of advanced M&V that swirl around the application of the methods, the opportunities to scale, and the insights and the actions that it can enable. So, I’m gonna talk about more of that today. And in keeping with the definition of advanced that I found online, I hope that my views on advanced do not make me unpopular today, but rather illustrate the progress ahead. Next slide.

 

                                    So, let’s start with an advanced history of NMEC in California. Did the slide move forward? I didn’t see it. There we go. I spent about 10 years working with the California Public Utilities Commission, and one of my first assignments, in fact, was doing an edit of the evaluation frameworks of 2004. Didn’t write them; just edited. And NMEC was actually a central focus of the last of my official assignments at the commission, and it has really been continuing to define my work at Recurve. So, let’s back up a little bit into the normalized metered energy consumption time capsule. NMEC was adopted via statute, two of them, in fact, in 2015, and the really important part of this legislation was that it reframed reduction in consumption – so, reduction from current use – as the objective.

 

This means that the existing conditions baseline was the default, as opposed to code baselines, with improved efficiency as a primary objective of energy efficiency programs. So, this legislation also set an expectation to reframe the whole system, from our potential analyses to program deployments to savings estimations, that they would be framed around NMEC. And, yes, there’s a neat little clause that says apply where measurement techniques are feasible and cost-effective, and the good news is that they are feasible and cost-effective for most of the resource portions of the current portfolio. So, it’s 2020 right now, and we’re still working through this transition. And I’m gonna highlight today how M&V guidance has really been critical along the way. Next slide, please.

 

Which brings us back to those early days in 2015. I was at the CPUC when the legislation was adopted, and we had to act quickly to respond to it. There was a little bit in one of those pieces of legislation, called AB 802, that allowed the utilities to introduce high opportunity programs and projects, and in light of that, we felt like we had to get some bumpers up on the bowling alley to really guide those submissions. So, we honed in on the M&V angle, because we acknowledged that we couldn’t pick winning markets or winning market actors or even the winning technologies on the fly. We only had three months to kind of create this framework.

 

And what we did have was a way to hold everyone accountable, and consistent M&V, grounded in tracking the changes in normalized metered energy consumption, was the common denominator. It could be agreed upon up front, and it could also be accessible to all actors in the process to continue to build confidence around whether or not we were capturing this high opportunity potential. It wasn’t a gotcha new M&V method. In fact, we hit some criticism from many stakeholders that it was a little too stringent and too tightly tied to IPMVP.

 

But, that’s where we started, and we went to this tried and true toolbox of site-specific M&V, but quickly realized, with the nudging of many market actors with big ideas, that there was more opportunity for scale and balancing risk through different population approaches, and this gave them pathways to demonstrate those ideas. So, for example, the residential pay for performance program was approved by the commission via this ruling structure. Next slide.

 

So, NMEC is a variant of advanced M&V, but it’s so much more when you think of it as a framework for deploying energy efficiency and demand-side resources. The real promise that excited me then and does still now is how it could break through those gnarly barriers of applying M&V or even EM&V that were part of my day to day at the commission. How are we using this for system planning? What really happened at the meter? How are we using these investments in analysis to improve the program? Creating those feedback loops. Not to mention the need for reducing conflict in a notoriously contentious regulatory environment. So, advanced, in this context, is really about enabling effective application of M&V within the system. Next slide, please.

 

So, when I was getting started in California, the left side of this continuum of protocols and standards, many of which Lia was highlighting, was my world, our world. But as the deployment of AMI and advanced analytics has progressed, the things on the right-hand side have really emerged and evolved from these professional guidelines. So, now we have a continuum of guidance to move our EM&V systems along. On the far left, you have best practice considerations for implementation of M&V, and these are highly dependent on professional judgment and execution, with some subjectivity in that execution and a level of professional decision making that’s kind of left up for grabs.

 

As you move further to the right, you have publicly available code, platforms, and software systems that offer greater consistency in the outcomes, given agreements on the discrete application of the professional guidelines, things like detailed savings calculation methods, data cleaning, weather station selection requirements, et cetera. They help enable the reproducibility of those approaches, and any one of these could be adopted as the basis of a settlement. But the further you can get to the right for portfolio, program, or even site-level agreements, the more you can see fewer lawyers, more streamlined stakeholder reviews, fewer roadblocks from peer and advisory boards, and potentially fewer protracted regulatory proceedings or re-hearings, all of which I think reflect some serious entropy in our current systems. Next slide, please.

 

So, reducing these transaction costs for M&V and supporting settlement is why the CalTRACK methods were developed, and why the OpenEEmeter was built to implement them. Revenue-grade calculations are transparent, consistent, and repeatable, and they make those outcomes accessible. There should be no secrets in the savings calculations. CalTRACK offers a standard calculation method to quantify avoided energy use, and then other layers of analysis can be added on, for example normalizing to a typical weather year. It was developed through a collaborative process, including empirical testing, and it’s provided in detailed documentation of the monthly, daily, and hourly approaches, including the data cleaning and treatment of outliers. The OpenEEmeter is a Python engine for consistently executing the CalTRACK methods at scale, and it’s available without restriction under an open-source Apache 2.0 license. Both of these projects are curated through a collaborative community of implementers, analysts, utilities, and other software vendors at the Linux Foundation. Next slide, please.
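
To make the shape of that calculation concrete, here is a minimal Python sketch of a CalTRACK-style daily avoided-energy-use estimate. It is only an illustration of the general idea, not the CalTRACK specification or the OpenEEmeter implementation: the column names, the single 65°F balance point, and the plain least-squares fit are assumptions for this example, whereas the published methods define candidate balance points, data sufficiency rules, model selection, and uncertainty calculations.

    import numpy as np
    import pandas as pd

    def fit_degree_day_baseline(daily: pd.DataFrame, balance_point: float = 65.0) -> np.ndarray:
        """Fit a simple heating/cooling degree-day model to baseline-period daily data.

        `daily` is assumed to have a date index and columns 'avg_temp_f' and 'kwh'.
        The balance point and model form are illustrative only.
        """
        X = pd.DataFrame({
            "intercept": 1.0,
            "hdd": np.clip(balance_point - daily["avg_temp_f"], 0, None),
            "cdd": np.clip(daily["avg_temp_f"] - balance_point, 0, None),
        })
        # Ordinary least squares via lstsq keeps the sketch dependency-free.
        coef, *_ = np.linalg.lstsq(X.values, daily["kwh"].values, rcond=None)
        return coef  # [intercept, hdd_slope, cdd_slope]

    def avoided_energy_use(coef: np.ndarray, reporting: pd.DataFrame, balance_point: float = 65.0) -> float:
        """Counterfactual baseline use under reporting-period weather minus observed use."""
        hdd = np.clip(balance_point - reporting["avg_temp_f"], 0, None)
        cdd = np.clip(reporting["avg_temp_f"] - balance_point, 0, None)
        counterfactual = coef[0] + coef[1] * hdd + coef[2] * cdd
        return float((counterfactual - reporting["kwh"]).sum())

In practice you would run the OpenEEmeter package itself rather than hand-rolled code like this, so that data cleaning, balance-point search, and uncertainty estimates match the published CalTRACK methods exactly.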


At first blush, consistency, transparency, and repeatability may seem like significant limiting factors, but in reality they’re what enhance flexibility, not in the application of the methods, of course, but in the innovations that can capture the value of changes in consumption for the customer and for the grid. And having confidence that it’s having the intended effect at the meter, and therefore the grid, also encourages markets, utilities, and customers to buy more of this as a resource. Next slide, please.

 

And we’re seeing this already. The demand world is getting more and more complicated. Innovations are coming out of the woodwork, and that’s great. But remember when I was talking about that first ruling on NMEC? We focused on M&V because we couldn’t chase down every possible high opportunity. We relied on market actors to bring that forward, and advanced M&V gives you that common thread to understand meter-based demand flexibility coming from any one of these technologies or business models. It takes the approach of have at it. If it shows up at the meter, we’re good. If it doesn’t, then maybe you have another value proposition that you’d like to bring forward.

 

Because we don’t really have the time to re-evaluate, test, re-invent every opportunity coming out. But we can use advanced M&V to probe, to target, to optimize, and that’s where things get really interesting. Advanced M&V lets us all scale the solutions that work with confidence. Next slide, please.

 

But to scale with confidence, the CPUC needed some mechanisms to bring programs and projects in and manage that NASCAR list of logos. That might not have been exactly what they were thinking of when they embarked upon the NMEC rulebook, but it does provide this second tier of guidance for program administrators in California to organize the execution of NMEC solutions and make NMEC the default operational structure for resource acquisition programs in California, as envisioned in SB 350. So, the NMEC rulebook is intended to cover the bases for the requirements for submitting programs for approval by the CPUC, plus some broad-brush best practices. It’s not a protocol or a standard, it’s a rulebook, but standards play prominently in how one would go about complying with the rulebook and navigating through the approval requirements. Keep that continuum in mind. Next slide, please.

 

The rulebook covers site-specific NMEC programs and population NMEC programs. The key distinction in the rulebook comes at the end of the process: how are you claiming savings? Site-specific is just that; you’re claiming savings for each site or project. Population is just as it sounds; it’s an aggregated result for a population of projects as the basis of your savings claim (there’s a small sketch of that distinction after this passage). For population NMEC, the CPUC set out three kinds of boundaries for selecting programs. Program fit: the rulebook notes that NMEC has to fit the program and vice versa, based on the rules for where it applies. Meter based: it goes without saying that NMEC is meter based, and results have to be changes in consumption that are demonstrated at the meter and use an existing-conditions baseline.

 

And pre-defined and consistent: no funny business here. You need to have a method that’s agreed upon up front, it has to be applied consistently, and it can’t change as you go through the program process without a re-approval. So, it’s pretty general, but it has to be met in compliance with all the specific tools, methods, analytical approaches, and calculation software that are also in the decision, or rather in the rulebook. Next slide.
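
To make the site-specific versus population distinction concrete, here is a small illustrative sketch. The site IDs, energy values, and column names are hypothetical; the only point is where the savings claim sits, per project or aggregated across the cohort.

    import pandas as pd

    # Hypothetical per-site results from a meter-based analysis:
    # one row per project, with modeled counterfactual and observed use in kWh.
    site_results = pd.DataFrame({
        "site_id": ["A", "B", "C"],
        "counterfactual_kwh": [120_000, 95_000, 40_000],
        "observed_kwh": [104_000, 93_500, 33_000],
    })
    site_results["avoided_kwh"] = (
        site_results["counterfactual_kwh"] - site_results["observed_kwh"]
    )

    # Site-specific NMEC: each project's savings stand on their own.
    per_site_claims = site_results.set_index("site_id")["avoided_kwh"]

    # Population NMEC: the claim is the aggregated result for the cohort,
    # so individual over- and under-performers net out against each other.
    population_claim = site_results["avoided_kwh"].sum()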

 

And this is what they look like. The other parts of the rulebook have a lot of details on a range of pretty standard M&V plan criteria, and to navigate the rulebook, you really need to use a standard or a protocol to comply with each of these requirements around the tools, the methods, and the analytical approaches for the calculation software. This is how the _____ so you can have confidence in approving these programs and give more flexibility on the program design. The commission also threw in some expectations around how much of the savings would be tied to performance, which adds another layer of shifting risk to the implementers and program administrators to deliver, and also puts a finer point on the value and importance of having your M&V plan agreed upon up front. Next slide, please.

 

So, the commission rulebook also called out the value of public, open-source approaches for generating agreement and for continuing to improve upon methods, and CalTRACK and the OpenEEmeter will continue to play a significant role in operationalizing NMEC in California. Recurve has been working directly with implementers and program administrators as they’re developing program ideas to adapt the specific details of their programs to this model and work through the nitty-gritty agreements that make up an NMEC M&V plan and its execution. The population NMEC M&V plan and compliance checklist was created as a template for population programs to ensure compliance with the rulebook and enable working through all those details of a strong M&V plan. I’ll be doing a one-on-one training on that in a couple of weeks, so keep an eye out for that.

 

And comparison groups are another interesting angle that has been becoming a more common approach within this framework, including using them to calculate net impacts to the grid, for example. As we do these, we take it another step with the client to document in detail the structure of the comparison group, the statistical criteria, and other foundational parameters to ensure that there are no surprises on the method, so that as results come in, we can keep the surprises in the insight category. We’re working on this with a variety of utility partners. Next slide, please.
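
As a rough illustration of how a comparison group enters the calculation, here is a minimal difference-in-differences sketch. The column names and the simple pre/post averaging are assumptions for this example; an actual M&V plan would document the matching criteria, statistical tests, and model form in far more detail.

    import pandas as pd

    def did_adjusted_savings(usage: pd.DataFrame) -> float:
        """Difference-in-differences on average daily use (kWh/day).

        `usage` is assumed to have columns 'group' ('participant' or 'comparison'),
        'period' ('pre' or 'post'), and 'kwh_per_day'. The comparison group's
        pre-to-post change stands in for what participants would have done anyway
        (exogenous trends, non-routine events), so it is netted out of the
        participants' change.
        """
        means = usage.groupby(["group", "period"])["kwh_per_day"].mean()
        participant_change = means[("participant", "post")] - means[("participant", "pre")]
        comparison_change = means[("comparison", "post")] - means[("comparison", "pre")]
        # A negative change means reduced use; report savings as a positive number.
        return -(participant_change - comparison_change)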

 

So, looping all the way back to definitions, I think the most important advance in advanced M&V is the access to actionable intelligence to improve programs. It helps reduce the friction in transactions with a transparent and consistent understanding of performance, it helps us learn through doing and collaborating, making incremental improvements and also innovations, and it lets us apply these approaches with scaled investments to get the value of the impacts as well as the value of the learning. Last slide, please.

 

Because, ultimately, flexibility resources, and EE in particular, must move beyond these traditional designs and evaluation approaches to deliver reliable, verifiable changes in demand that meet specific time and locational needs. And advanced M&V, enabled through smart meter interval data and combined with open-source methods and software, provides the key link of transparent measurement of changes in load shapes to meaningfully integrate demand flexibility into the grid, to provide value to customers, and to drive innovation in business models and technologies that can help reduce carbon and optimize our resource management. Last slide. If anyone has questions, I think we’re gonna move to that segment now. I’ll turn it back to Elizabeth, and I’m happy to talk to folks offline as well.

 

Elizabeth Titus:          Thank you so much to all of the presenters. Unfortunately, we’re running short on time, and we still have Q&A and this quick rapid-fire on future directions. I’ll mention that we’ve gotten two questions in the chat box, so we will not be unmuting and having a conversation. We will respond to the questions in writing after this session, and if we have a few minutes after the future directions, we’ll respond as much as we can verbally as well. So, next slide, please.

 

                                    I want to offer each speaker a chance for one parting shot, one takeaway on your thoughts about what is most needed, short-term or long-term, for future directions for AM&V, just to keep in mind after the end of this presentation. So, let me ask Kevin first for your _____ position. Are you on mute?

 

Kevin Warren:            Oops, still on my headset, yes. So, I think there’s gonna be more work on evaluation guidance, generally figuring out how program M&V and evaluation fit together. You know, some of the evaluation approaches, like the new sampling techniques and stratification, are gonna have to be tested. And in that same realm is the fact that a lot of the way billing analysis has been done at the population level has been a pooled ANOVA technique, which is different from summing up the results of a lot of Option C analyses, and I think we’re gonna need to see comparisons of the two approaches.
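
As a rough sketch of the comparison Kevin describes, the two estimation styles might look something like the following in Python. This is illustrative only: the column names and model forms are assumptions, and a real pooled billing analysis would include weather terms and a much more careful specification.

    import pandas as pd
    import statsmodels.formula.api as smf

    # `panel` is a hypothetical long-format dataset with one row per site per month
    # and columns 'site_id', 'post' (0 = pre-period, 1 = post-period), 'kwh_per_day'.

    def pooled_estimate(panel: pd.DataFrame) -> float:
        """Pooled approach: one regression across all sites with site fixed effects;
        the 'post' coefficient is the average change in daily use per site."""
        model = smf.ols("kwh_per_day ~ post + C(site_id)", data=panel).fit()
        return model.params["post"]

    def summed_estimate(panel: pd.DataFrame) -> float:
        """Site-by-site approach: estimate each site's pre-to-post change separately,
        then sum them (a stand-in for summing individual Option C or NMEC results)."""
        post = panel[panel["post"] == 1].groupby("site_id")["kwh_per_day"].mean()
        pre = panel[panel["post"] == 0].groupby("site_id")["kwh_per_day"].mean()
        return float((post - pre).sum())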

 

Elizabeth Titus:          Thank you, and Eliot?

 

Eliot Crowe:               Yeah, great response there, Kevin. I think in terms of programmatic guidelines, we’re in a really good place right now. A few years ago, there were many questions which I believe have now been resolved, and with guidance coming from EVO pretty soon here, I think that’s gonna boost that even further. With that said, I think tools and guidelines around non-routine event identification and addressing those non-routine events, and some guidance that can enhance the automation of at least some elements of those processes, are gonna be really helpful.

 

Elizabeth Titus:          Thank you so much, and Lia.

 

Lia Webster:               Hi. So, I guess to answer the question of what is needed in the short term: right now, on my desktop, and I know on some of my colleagues’ minds, is what are we going to do regarding the Covid-19 non-routine event that is occurring, and what does that do to our energy projects, and what does that do to our utility forecasts. It’s a more complicated question than we anticipated. But in the mid to longer term, I’m looking forward to some resolution regarding the uncertainty-in-savings question.

 

Elizabeth Titus:          Thank you, and Carmen.

 

Carmen Best:              Yeah, I have a couple. From the protocol and guidance perspective, I would agree with Lia that there are gonna be some interesting innovations that come out of Covid-19 analysis, one of which I think will be really stress-testing matched comparison groups; just the large _____ that it has in being able to match will present some real challenges. The other: I know DOE has a preliminary RFI out for this Connected Communities effort. It’s a fairly big potential funding opportunity, and I know lots of folks are looking at how to integrate advanced M&V into that process, and I think it will be interesting to see how these models can be demonstrated and propagated in different parts of the country with different utilities and different partners.

 

Elizabeth Titus:          Thank you very much, and thank you for bringing it to the timeliness of Covid and the Connected Communities RFI, both topics of tremendous interest to NEEP as well. With that, I apologize, we’re at time. I’m gonna take a quick shot at a question from Eric on what if you don’t have AMI data. The answer is that AMI data is clearly a tremendous facilitator for using these approaches, but there are benefits to using these models even with the more traditional monthly billing analysis, and the Connecticut utilities have experienced that. And we expect that there will gradually be more and more availability of this kind of data.

 

And in terms of whether something finer than 15 minutes is necessary, from what little interaction I’ve had with another DOE-related load shape project, 15-minute data seems to be quite acceptable for by and large most applications, other than things like transmission or fault detection. I apologize for cutting the Q&A short. I thank all of our speakers hugely, and DOE, and you for listening. And please reach out to all of us; you have our emails here. Thank you again. Take care.

 

[End of Audio]