Below is the text version of the webinar, "ResStock - Evaluating Home Performance Upgrades Across the U.S. Residential Building Stock," presented in March 2017. Watch the webinar.
Linh Truong:
Hello, everyone. I’m Linh Truong with the National Renewable Energy Laboratory and I’d like to welcome you to today’s webinar hosted by the Building America program. We are excited to have NREL’S Eric Wilson here today to discuss ResStock – Evaluating Home Performance Upgrades Across the U.S. Residential Building Stock.
Before we begin, I'll quickly go over some of the webinar features. For audio, you have two options. You can either [cuts out] if you choose to listen through your computer, please select the mic and speakers option in the audio pane. Doing so will eliminate the possibility of feedback and echo. If you select the telephone option, you should already see a box that will display the telephone number and audio PIN. Panelists, we ask that you please mute your audio device while you're not presenting. If you're having technical difficulties with the webinar today, you can contact the GoToWebinar help desk for assistance.
If you’d like to ask a question during today’s webinar, please use the questions pane to type in your question. If you’re having difficulty viewing the materials through the webinar portal, we’ll post PDF copies of the presentation on the Building America website. Today’s webinar is being recorded and the recording will also be available. We’ll be posting on the DOE YouTube channel within a few weeks. Before our speaker begins, I’ll provide a short overview of the Building America program. Following the presentation, we’ll have a question and answer session and closing remarks.
For more than 20 years, the U.S. Department of Energy Building America program has been partnering with industry to bring cutting-edge innovations and resources to market. This Building America webinar presents ResStock, a data-driven analytical framework supporting residential energy efficiency at scale. And now for today's presentation, our speaker today is Eric Wilson, engineer and project manager at NREL. I'd like to welcome Eric to start today's presentation. Eric.
Eric Wilson:
Thanks, Linh. So first off, I wanted to thank the project team. Oh. Like both screens are going through. One second, everybody. Sorry about that. OK, I'm sorry about that. So I wanted to thank the other members of the ResStock project team, Craig Christensen and Scott Horowitz, as well as the other team members who help out in developing these capabilities. And I want to acknowledge all the support that's been provided by various organizations: the U.S. Department of Energy Building Technologies Office has supported ResStock, as well as other offices within DOE. We currently have a project with the U.S. Environmental Protection Agency and we're also working with Bonneville Power Administration. And we have a few other industry partnerships involving ResStock that are under development.
So I’m gonna start out by talking about the context and motivation for the ResStock capabilities and then go through the approach that it takes. And then I’ll go into some example results and then look ahead to where we’re going next with ResStock. So first, context and motivation. So ResStock and its sister tool for the commercial building stock which is called ComStock, they are both data-driven, physics-based simulations of the U.S. residential and commercial building stocks.
So you can ask what-if questions about the building stocks and evaluate energy efficiency potential across the building stocks. Now this has been done before by others, but what makes these capabilities novel is that we're using large public and private datasets [cuts out] and we're taking advantage of modern computing resources, both software and hardware, to run the simulations. And to what end? Well, to achieve an unprecedented level of granularity in modeling building energy use and demand across these building stocks.
And these capabilities are being developed as free and open source, so we hope that others can make use of these and build upon them to really scale up the impact that they can have in the marketplace. So that's the high-level overview of ResStock and ComStock. Turning to the motivation, I'm sure many of you are aware that homes use 22 percent of primary energy in the U.S. and 37 percent of electricity. And homes also have a disproportionately large contribution to peak electric demand, roughly 50 percent of peak demand depending on what part of the grid you're on.
So, you know, we know there's a lot of potential out there. From the Building Technologies Office's Multi-Year Program Plan: if just one out of every 10 homes cut its energy use by 25 percent, Americans could save more than $5 billion per year on their energy bills. So we know there's a lot of potential out there, but the question is, how do we find the best opportunities for savings that are out there?
And that's the motivation for the ResStock capabilities. So to illustrate why you would take this high-granularity approach, here's an example looking at the energy efficiency potential of a particular upgrade. So imagine that this rectangle represents all single-family homes in the states of Washington and Oregon. And let's say we're evaluating drill-and-fill wall cavity insulation. So this upgrade is applicable in about half of the homes in Washington and Oregon, which is 1.2 million homes. That's the number of homes that have empty wall cavities; homes that have solid masonry walls or already have wall cavity insulation would not be eligible for this upgrade.
So taking that portion where this upgrade is eligible, we can evaluate the cost effectiveness of this drill-and-fill measure. So the typical approach to doing this that might be taken by a utility program is maybe you would segment the housing stock within those states, maybe you look at gas homes separately from electric homes and maybe look at a few different climate zones within those states. So to illustrate this, we're showing for this top rectangle, which represents all homes with gas heat, there are 18 different weather stations where this upgrade has been evaluated. And according to the color scale, all of those upgrades have a payback greater than five years. In the electrically heated homes in those 18 different weather stations, you can see that there are a few weather stations where there are homes with a payback less than five years.
So again, this illustrates kind of the typical cost effectiveness of a particular upgrade to the existing housing stock. Now we're showing simple payback here, but you could imagine using a cost-effectiveness threshold of net present value or a savings-to-investment ratio, or maybe a utility cost-effectiveness test like the total resource cost test where you're using a benefit-cost ratio to figure out whether or not this upgrade is cost effective.
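To make those cost-effectiveness metrics concrete, here is a minimal sketch in Python; the measure cost, annual savings, lifetime, and discount rate are hypothetical illustration values, not numbers from the analysis.

```python
# Illustrative sketch of the cost-effectiveness metrics mentioned above.
# All inputs are hypothetical.

def simple_payback(install_cost, annual_savings):
    """Years to recoup the installed cost from annual bill savings."""
    return install_cost / annual_savings

def net_present_value(install_cost, annual_savings, years=20, discount_rate=0.03):
    """Discounted savings over the measure lifetime, minus the upfront cost."""
    pv_savings = sum(annual_savings / (1 + discount_rate) ** t for t in range(1, years + 1))
    return pv_savings - install_cost

def savings_to_investment_ratio(install_cost, annual_savings, years=20, discount_rate=0.03):
    """SIR greater than 1 means the discounted savings exceed the investment."""
    pv_savings = sum(annual_savings / (1 + discount_rate) ** t for t in range(1, years + 1))
    return pv_savings / install_cost

# Example: a $2,000 wall insulation job saving $250 per year on utility bills.
print(simple_payback(2000, 250))                          # 8.0 years
print(round(net_present_value(2000, 250)))                # about $1,719 at 3% over 20 years
print(round(savings_to_investment_ratio(2000, 250), 2))   # about 1.86
```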
So contrast this typical approach to an approach that takes a higher level of granularity. Now you can see how we segmented all of these homes by many more parameters and there’s a lot more green that shows up. So what’s happening is a lot of these larger orange rectangles were actually hiding segments of the population where this upgrade was cost effective. You can see that the green was getting averaged in with the darker red and coming out to be orange. So through this high-granularity approach, you can identify a lot more homes and segments of housing stock where this measure is cost effective. In fact, there’s about three times as many homes that we’ve identified that have a less than five-year payback.
Now you can also look at all of these segments and say, “What’s in common? What are the common characteristics among these homes that show up in green?” And we can pull out those common characteristics and use those to target the housing stock, maybe use that to inform how we do targeted marketing or incentives for a program design. So this illustrates the motivation behind taking this more high-granularity approach that ResStock takes.
And we sometimes call this the flaw of averages. This is a term that was put forward in a book called The Flaw of Averages, and it can be illustrated by using an average depth to decide whether or not you're going to cross a river. So you can see in this comic, this statistician or data scientist decides to cross the river but ends up going in over their head because they fell into the flaw of averages trap. So that's the analog to the way that you're missing out on a lot of opportunity here when you use this coarser typical approach.
So how do we accomplish this high-granularity? Well, so there’s a spectrum of granularity that you can imagine for this type of analysis. The typical approach that I described earlier would fall on the low end of the spectrum where you’re modeling a handful of prototype buildings, maybe it’s even up to 100 prototype buildings. But you can imagine if you’re considering say 4 different fuel types and maybe 6 different vintages of home, that’s 24 prototypes and then maybe we’re looking at a couple of weather locations so maybe you’re up to 50 or 100 prototypes.
So it really gets up to a large number of prototypes without even considering other aspects of the housing [cuts out] can affect cost effectiveness of upgrades. For example, the size of the home or foundation type or different efficiencies of heating and cooling equipment or occupant behavior. So on the opposite end of the spectrum is an approach where maybe you say you’re gonna model every single building and while that’s definitely not practical for the whole country or a whole region, maybe you can do that for an individual city. But it’s still very data-intensive. You need data on every single building. There are some people who have tried to do this using wider data, but again, it’s very data-intensive and also very computationally intensive.
So with the ResStock approach, we end up somewhere in the middle, where the scale that we're using is tens to hundreds of thousands of representative building models to represent a larger housing stock. And the approach we take is – an overview of that is shown here. The first step is having a database of building characteristics, census data, information about energy costs and costs of upgrades, as well as climate location information. And all of that is a database that we then statistically sample from to generate these tens to hundreds of thousands of statistically representative models.
And those models then get simulated. The baseline versions of those models get simulated using EnergyPlus as the underlying simulation engine and leveraging the OpenStudio platform, and those simulations get run either on the NREL supercomputer, or OpenStudio enables others to run these capabilities using cloud computing. So you don't need to have a supercomputer in your backyard to be able to run this. And it's actually surprisingly affordable to run these large numbers of simulations.
We perform a validation against consumption data to make sure that these baseline buildings are accurately reflecting the consumption of the U.S. housing stock. And then you can apply different efficiency upgrades to those representative models and run those simulations. And then you can create visualizations, different ways of visualizing the vast amount of data that comes out of the simulation to make the results actionable. I’ll get into some of those example results later on.
First, I'll touch on each of the steps here in the ResStock methodology. So first, on data sources, I've listed out some of the main data sources that are used in ResStock currently. EIA's Residential Energy Consumption Survey is one important data source that we use. That's where we get the consumption data that we use for the validation. Surprisingly, it doesn't have some of the most important information you need to create building models. It doesn't have anything about the insulation level of buildings or the air leakage. So we have to get those parameters from other data sources. So we turn to sources like the National Association of Home [cuts out] conducted homebuilder surveys since the 1980s. So we can get information about as-built insulation levels going back to the eighties.
And for more recent decades, we can use historical energy code information to get insulation levels. And then there's an array of other national, regional and local audit databases that we use as well, some of which come from the Building America Program. We use the U.S. Census American Community Survey to get a high-resolution look at where homes are in the country and the vintages of homes at a high resolution, at the census tract level, which is 4,000 people on average. So one example of this is this map to the left here, which shows the percentage of homes that use electricity as their main heating fuel.
So that's just one example of the data we get at a high resolution. So moving on to costs, we use EIA information for electricity and fuel costs. And NREL also hosts a database of utility rate structures where we can get more detailed information about time-of-use and tiered electricity rates. And then there's a database of measure costs that was put together by Navigant a few years ago and that NREL has updated over the past several years. And we used that for the cost of measures that we're evaluating.
And then we use 200 weather file locations to drive these simulations. Now you may notice that this data comes at different geographic resolutions, so one of the challenges of this project was meshing together these disparate geographic areas so that we're able to combine these datasets together and make use of them. So the way all this data is structured so that we can actually use it is we put it into probability distributions. So for example, looking at regions, these are regions for some of the data that we use, and these are the regions available in RECS, aggregated by climate. And you can see here we have percentage values that show the probability that a given home chosen at random is in each of these regions.
So for example, the Mid-Atlantic region: 8 percent of all single-family homes are in this region. Looking at that region in particular, there's a distribution of vintages of homes within that region. So you can see here, varying by decade, how many homes are in each vintage. If you go to a different region you'll see a different distribution of vintages, and for this illustration this has been simplified. We actually have vintage available at the [cuts out] tract level, so it's much more granular than is shown here.
Now we can look at one particular vintage within this region, say the 1980s, and we have probability [cuts out] queried from our data sources that show the likelihood of different insulation levels, window glazing types, air conditioner type and efficiency, and so on across all the parameters that you need for a building energy model. Again, if you go to a different vintage or a different location, you'll see different probability distributions. So you can imagine this is just one branch being shown here. There are other branches, for example, furnace efficiency, which depend on other parameters like heating fuel here. And there are other branches to this that vary based on foundation type, house size, et cetera.
So if you explode out all of these branches, you can imagine it's a tree of these conditional probability distributions that gets pretty enormous and hard to wrap your head around. We have around 6,000 different probability distributions that were queried from data sources and that get used to generate the building models. Now, there are 6,000 of these probability distributions. If you look at all possible combinations of those building characteristics across different locations and vintages, that's a very large number of combinations that's not practical at all to model. Therefore, we need to do some statistical sampling of this parameter space in order to automatically generate representative models that we then simulate.
Well, how is this done? Well, you can think of it a little bit like a Pachinko machine, or on The Price Is Right if you've seen Plinko: the disk hits the pins and bounces around and then lands in one of the slots at the bottom. So you can think of this in a similar way, as each simulation that gets run is assigned a ball or a disk that gets dropped through this machine and then it ends up falling into a distribution. Now the image shown here is a normal distribution, but looking back at the probability distributions that we have, it can take different shapes.
So imagine we're doing this kind of Plinko machine for each of these probability distributions, and that's kind of how the building characteristics get assigned. Let's go through an example of this. Say we've done this and the balls or disks fall into the options highlighted in red here. This example shows the highest probability characteristic in each of these categories. So the one with the highest probability, that's what has gotten selected for this example building model.
And you can imagine this is maybe something you would want to use if you were taking the prototype approach where you're only able to do a handful or a dozen or a hundred building simulations. You might use something like this to try to represent a typical home, but you can see just looking at these probability distributions that you'd be missing out on a lot of the parameter space above and below these typical values, which could be significant when you're evaluating various upgrades.
So that's one example. Now let's do another example where you see [cuts out] characteristics have gotten selected. So in this way, we do this many, many thousands of times and we're able to build up a set of representative building models that represents the housing stock. And when you look at the characteristics of the representative models, the way the characteristics get distributed is pretty much identical to these input probability distributions that are used to drive the sampling.
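A minimal sketch of this sampling idea is shown below, assuming simple weighted random draws from a conditional distribution; the probability values, options, and two-level conditioning are hypothetical and much simpler than the actual ResStock sampling.

```python
# Simplified, hypothetical sketch of sampling one building characteristic
# conditioned on characteristics that have already been assigned.
import random

# Hypothetical: P(wall insulation | region, vintage)
wall_insulation_dist = {
    ("Mid-Atlantic", "1980s"): {"Uninsulated": 0.15, "R-7": 0.35, "R-11": 0.40, "R-13": 0.10},
    ("Mid-Atlantic", "2000s"): {"Uninsulated": 0.02, "R-11": 0.18, "R-13": 0.50, "R-19": 0.30},
}

def sample_option(dist):
    """Drop one 'Plinko disk': pick an option with probability equal to its share."""
    options = list(dist.keys())
    weights = list(dist.values())
    return random.choices(options, weights=weights, k=1)[0]

# Each representative model gets its characteristics drawn this way, conditioned
# on what has already been assigned (region, vintage, heating fuel, and so on).
building = {"region": "Mid-Atlantic", "vintage": "1980s"}
building["wall_insulation"] = sample_option(
    wall_insulation_dist[(building["region"], building["vintage"])]
)
print(building)
```

Repeating this for every parameter and for many thousands of models is what makes the distribution of assigned characteristics converge to the input probability distributions.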
OK, so now that we have these thousands of building models, we need to simulate them. So we use this energy modeling ecosystem: OpenStudio and the flagship EnergyPlus simulation engine. And OpenStudio is – well, starting with EnergyPlus, it's a detailed subhourly simulation engine that is very capable yet can be complicated to use. So OpenStudio is an open-source platform that is designed to be a bridge between the capable yet complicated EnergyPlus engine and various applications that are much easier to use. So this circle here just shows an array of applications that are built upon OpenStudio, that ultimately make use of EnergyPlus but use it in a much more user-friendly way. And the ResStock and ComStock capabilities leverage a lot of the existing parts of the OpenStudio platform for conducting large-scale simulations and running simulations on Amazon cloud computing, for example, [cuts out] libraries for ResStock.
So how many simulations do we need? Well, to look into this, one thing we did was we increased the number of simulations; we looked at various numbers of simulations and looked at how the predicted energy consumption changes, starting at 100 simulations to represent the whole U.S. housing stock, moving up to 1,000, 10,000 and so on. And what's plotted here is each purple marker shows the national average of consumption, source energy consumption per house, and then these error bars show, for the different combinations of location and vintage, the maximum and minimum average consumptions.
So you can see, moving up to 10,000 and 200,000, it stops bouncing around and the error bars get narrowed in a bit. And what we noticed is when you move from 200,000 to 350,000, you get a leveling off and you're not seeing consumption change. So each additional simulation that we added did not provide any additional benefit in terms of this predictive capability. Now we also used other ways of looking at this to establish the 350,000 number. One was just more qualitatively looking at maps, so we look at maps of the housing stock sliced in different ways and where we're seeing issues, noise coming into play because we're not using enough simulations, so that also came into play when settling on this 350,000 number. So it's [cuts out] simulations to represent around 80 million single-family detached homes. So each simulation represents around 230 actual houses.
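As a rough sketch of that kind of convergence check, here is one way it could be set up in Python; the synthetic results below are stand-ins, not actual ResStock output.

```python
# Hypothetical convergence check: does the predicted average consumption
# stop changing as the number of simulations grows?
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for per-house source energy results from a very large pool of models.
all_results = rng.lognormal(mean=4.5, sigma=0.4, size=500_000)  # hypothetical MMBtu/house

for n in [100, 1_000, 10_000, 100_000, 200_000, 350_000]:
    sample = rng.choice(all_results, size=n, replace=False)
    print(f"{n:>7} simulations: mean = {sample.mean():.1f} MMBtu/house")

# Once the mean (and the spread across location/vintage slices) stops moving
# between sample sizes, adding more simulations buys little extra accuracy.
```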
So we ran the baseline simulations using the NREL supercomputer and then did the validation and calibration exercise. Now to do this, since we're running 350,000 simulations and the RECS consumption data only has a sample size of 8,000 for single-family homes, we can't really do a one-to-one comparison, so we had to look at average consumption values for different slices of the housing stock. So the scatter plots shown here show on the vertical axis the ResStock-predicted consumption for electricity and natural gas and the RECS consumption data on the horizontal axis. And these are values per house for different slices of the housing stock. This first set is for ten different regions. This next set is by vintage, and then this third set is for heating fuel type. And the size of each of these bubbles indicates the number of homes that make up that segment of the housing stock.
So beyond looking at just the average and comparing that, we wanted to make sure that the distribution of homes within those slices was also in agreement, so we compared cumulative distribution plots of our model against RECS consumption to show that we had an adequate match of the diversity of consumption that exists within the housing stock.
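A simplified sketch of that slice-level comparison is below; the region names are RECS-style slices, but the consumption numbers are made up for illustration.

```python
# Hypothetical slice-level validation: compare modeled vs. survey-reported
# average consumption per house for each slice of the housing stock.
import pandas as pd

modeled = pd.DataFrame({
    "region": ["Mid-Atlantic", "South Atlantic", "Pacific"],
    "modeled_kwh_per_house": [11800, 14300, 8400],   # made-up values
})
survey = pd.DataFrame({
    "region": ["Mid-Atlantic", "South Atlantic", "Pacific"],
    "survey_kwh_per_house": [11500, 14900, 8100],    # made-up values
})

comparison = modeled.merge(survey, on="region")
comparison["pct_error"] = (
    100 * (comparison["modeled_kwh_per_house"] - comparison["survey_kwh_per_house"])
    / comparison["survey_kwh_per_house"]
)
print(comparison)
```

The same idea extends to comparing full cumulative distributions within each slice rather than just the averages.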
And this just is kind of the tip of the iceberg of what we looked into. Another example of the type of scatter plot we use for calibration is this where there’s 70 of these different points that were plotted for different combinations of region and vintage. And here on the left we’re showing the agreement before calibration and on the right, the agreement after calibration. And it’s worth pointing out that because RECS has a relatively small sample size of 8,000 homes, once you slice things more than two or three ways, you start to see a lot of noise in the RECS data and you kind of lose sight of the trend so we wanted to avoid kind of overfitting to the RECS data.
So once we had the calibrated models, for a large analysis that we did last year, we did over 20 million simulations for various upgrades, 50 or so upgrades to the U.S. housing stock, and it took over 2.4 years of computing time to run that analysis. And what I'm gonna show next is example results from that analysis. And this large analysis that was conducted last year was to support the Quadrennial Energy Review version 1.2, which was conducted by the DOE Office of Energy Policy and Systems Analysis. So this is [cuts out] the energy system, and we're focusing in on residential buildings, so this little piece down here. And then [Clears Throat] tied into this analysis, we also did an evaluation of various packages of upgrades to support the Building Technologies Office Home Improvement Catalyst program. And there's a report that came out in January that documents the analysis that we did for the Quadrennial Energy Review.
So for this analysis, we were focused on technical and economic potential but not market potential. So technical potential being the theoretical potential of energy savings using today’s currently available technology and also assuming that the current equipment, the current stock of heating equipment, cooling equipment, appliances, et cetera turns over. The upgrades we looked at were replacing these pieces of equipment when they wear out so looking out over 20 or so years, that’s about how long we expect it would take to have a full turnover of the equipment stock. So the potential numbers that we show are assuming that the whole equipment stock is turning over.
And economic potential is a subset of that technical potential, only counting those upgrades where it's cost effective to the building owner. And we primarily used positive net present value to evaluate economic potential. And again, it's full turnover of the equipment stock. [cuts out] potential was not part of the study, so we weren't looking at anything like adoption rates or market barriers or impacts of policy. And it was focused on the single-family housing stock in the 48 contiguous U.S. states. The sample size of our source data for Alaska and Hawaii just doesn't support doing an analysis in those areas.
So one way you can show the results is using these national maps where we show the aggregated potential in each state with a circle. The area of the circle represents the aggregate potential in each state and the color of the circle represents the per-house energy savings. So the darker the circle is, the more savings per house there is. Here are these maps shown for four different upgrades, and you can see that some of them are pretty regional. For replacing oil boilers with ductless heat pumps, you see the potential only really exists in the Northeast; similarly with basement wall insulation, the potential exists where there are basements. For something like attic insulation, I'll just point out that we see the circle size for California and Texas is about the same, but Texas is a lot darker than California, so they're getting more savings on average in Texas, but California has a lot more homes in which the measure applies, so the circle sizes ended up being about the same.
Yeah, and [cuts out] also a milder climate so you get less savings per house. It is interesting to notice that attic insulation is usually thought of as more of a cold-climate upgrade, but there is a lot of potential here in more southern states. And that's because the analysis accounted for existing attic insulation that has already been added in colder climates. So this type of map could be useful to manufacturers who are looking to see where there's a market for a particular product that they're looking at developing or selling. These maps could also be useful to federal or regional policymakers to understand where various measures make sense.
Another way of looking at this is with this bar graph here. This is electricity savings, the Quadrennial Energy Review was focused on electricity savings. And this is showing the technical potential for various types of measures which are the different colors here across all 48 states. And the height of the bar shows the savings potential relative to the electricity consumption in each state. So you see in some states you can get up to 40 plus percent of current electricity consumption saved through all of these measures.
Now I'll point out that these are measures considered independently. These don't account for interaction between measures; we do that through the use of packages, but this result right here [cuts out] isn't accounting for that. But you can notice that in all these southern states there's a lot of savings potential from upgrading electric furnaces to high-efficiency heat pumps. And then if you contrast this graph, which shows technical potential, with one that shows economic potential, you can see where the potential remains where it's still cost effective. So it actually didn't drop all that much; you're still getting between 20 and 40 percent of consumption that you're able to save cost effectively. And this economic potential is using this positive net present value threshold, and you could kind of think of that as something that would be achievable through the use of financing, whether it's PACE financing or on-bill financing through utility bills, but it tends to be something that is not useful for looking at what homeowners would adopt on their own. For that, you would want to use a more conservative threshold, and that's something that we did just to get a sense of what the kind of natural market adoption might be.
So contrast this economic potential graph with one that looks at only upgrades that meet a simple payback of less than five years. You can see that most of that potential drops out and does not meet the simple payback threshold. So one takeaway from that is that financing is important to achieve [cuts out] more acceptable to building owners. Financing or other incentives may be important. It's interesting to notice that replacing electric furnaces with high-efficiency heat pumps still shows a good amount of savings potential even with this more conservative cost-effectiveness metric.
So beyond these national results – oh, actually, so I mentioned that these bars do not account for interaction. To do that, we ran packages of measures of various types. This upper-left map here shows packages of enclosure measures. So for each of the representative homes that got simulated, kind of an optimal package of enclosure-related measures was put together; the selection rule is sketched below. You can kind of think of this as somebody received an audit and has a home performance contractor who put together an upgrade package that makes sense for that particular house. So it's only upgrades that have a positive net present value, and if there are competing upgrades, like different attic insulation levels, then the one with the highest net present value was selected for inclusion in the package.
So this was done for enclosure measures, which includes wall insulation, attic insulation, foundation insulation, as well as air sealing. There was also an HVAC package, which considers heating/cooling system upgrades as well as duct sealing and thermostats as well. And then we [cuts out] enclosure and HVAC measures. And then we also did one that added in water heating, as well as one that added in appliance and lighting upgrades. But through this you can kind of see what the total potential across all measures would be while accounting for interactions between the upgrades.
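That per-house package-selection rule can be sketched as follows; the upgrade names, groupings, and net present values are illustrative only, not outputs of the analysis.

```python
# Hypothetical sketch of assembling a package for one house: keep only upgrades
# with positive NPV, and among mutually exclusive options (e.g., competing attic
# insulation levels) keep the one with the highest NPV.

def build_package(candidate_upgrades):
    """candidate_upgrades: list of dicts with 'name', 'group', and 'npv' keys."""
    best_by_group = {}
    for upgrade in candidate_upgrades:
        if upgrade["npv"] <= 0:
            continue  # not cost effective for this particular house
        group = upgrade["group"]
        if group not in best_by_group or upgrade["npv"] > best_by_group[group]["npv"]:
            best_by_group[group] = upgrade
    return list(best_by_group.values())

candidates = [
    {"name": "Attic insulation R-38", "group": "attic", "npv": 900},
    {"name": "Attic insulation R-49", "group": "attic", "npv": 750},
    {"name": "Drill-and-fill wall insulation", "group": "walls", "npv": 1200},
    {"name": "Basement wall insulation", "group": "foundation", "npv": -300},
]
print([u["name"] for u in build_package(candidates)])
# ['Attic insulation R-38', 'Drill-and-fill wall insulation']
```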
So I mentioned those maps are useful from a federal policy or manufacturer perspective, but for individual utility programs or state energy programs, you may want to dive into the results for a particular state. So as an example of doing this, we pulled out the top ten upgrades for the state of Virginia and show what the statewide electricity savings potential is for each of these ten upgrades, and then you can also see what the average per-house savings is. And with this type of result, we're currently working on putting together state fact sheets that collect this information for the 48 states.
So beyond those results, another thing you can look at is what the impact of incentives might be on various upgrades. So I'm going to show an example of this with drill-and-fill wall insulation. So right here, this graph shows the distribution of simple payback periods for this upgrade across the whole U.S. housing stock. So you can see that there are kind of two humps here of where the simple paybacks are. And seven million of the homes [cuts out] so one thing you can do is look at, well, which ones. So you can take this set of results and divide it by various parameters. Here we split it out by heating fuel type. And you can see that, oh, actually most of these seven million homes are those homes that are heated with electricity, fuel oil or propane, and not many of these natural gas homes have a favorable payback for this upgrade.
So that makes sense. These are the heating fuels that are more expensive per unit of heat. So then you can say, "OK, what if we add in a rebate?" Say a 50 percent rebate for this [cuts out] for this particular measure. And you can see how that shifts this distribution of paybacks; you can see the two humps got kind of squished and moved to the left here. And now about three times as many homes, 23 million, fall in that less-than-five-year payback. And then you can also split it out by fuel type and see that now you're capturing a lot more of those natural gas homes. So this is just one example of the type of data analysis you can do using these results.
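A toy sketch of that rebate analysis is below; the per-house costs, savings, and home counts are made-up illustration values, not the numbers from the webinar.

```python
# Hypothetical rebate analysis: apply a 50 percent rebate to the measure cost
# and recount how many homes fall under a 5-year simple payback.
import pandas as pd

results = pd.DataFrame({
    "heating_fuel": ["Electricity", "Natural Gas", "Fuel Oil", "Propane"],
    "homes_millions": [7, 38, 4, 3],           # made-up counts of eligible homes
    "measure_cost": [2400, 2400, 2400, 2400],  # $ per house, made up
    "annual_savings": [600, 250, 700, 650],    # $ per house per year, made up
})

for rebate in [0.0, 0.5]:
    payback = results["measure_cost"] * (1 - rebate) / results["annual_savings"]
    homes_under_5yr = results.loc[payback < 5, "homes_millions"].sum()
    print(f"{int(rebate * 100)}% rebate: {homes_under_5yr} million homes under a 5-year payback")
```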
OK, now I’m going to talk about where we’re headed with the ResStock capabilities, some of the things we’re looking forward to doing. So I mentioned some of the applications that we’ve had to date, the Quadrennial Energy Review and the Home Improvement Catalyst. For DOE we’re working on another project looking at using ResStock to do modeling of the load [cuts out] various parts of the grid and looking at various changes to the housing stock and how that would affect the load on the grid.
We've developed a region-specific version of ResStock for the Pacific Northwest with our partners at Bonneville Power Administration. And we're currently working on a project to evaluate energy efficiency potential in low-income communities with the U.S. EPA. We have a project with Tendril looking at demand response applications of ResStock, and then we're working with the city of Boulder, Colorado, and Radiant Labs to look at how ResStock can be used to build market engagement strategies specifically for cities.
One thing you can be on the lookout for is the set of 48 fact sheets that I mentioned, which are based on the analysis from last year and call out high-level results and the top priority upgrades for each state. So we expect those will be published sometime this summer. And we're also working on a ResStock website which will allow interactive visualizations of the housing characteristics that are used as inputs for ResStock, as well as baseline consumption and savings and cost effectiveness for different retrofits. So this website will be coming this summer or fall.
I mentioned the project with EPA on low-income energy efficiency potential, so to enable this we're building in demographic parameters: how does household income [cuts out] statistics, and how does that affect the potential for various upgrades, specifically in low-income communities. And this could be relevant for, for example, weatherization programs where you might want to evaluate the upgrades that have the best savings-to-investment ratio for each city or state or customer segment.
And next I want to touch on the new capabilities that are being developed for some of the applications I mentioned, where we're looking at the timing of savings for upgrades as well as load flexibility. So you can use these capabilities to understand questions like when do the savings from various upgrades occur, and what is the potential for different measures to reduce peak demand. Just as a quick example of the type of thing you could look at here, imagine a utility that has a winter peak and they're interested in replacing electric furnaces with heat pumps to achieve a reduction of that winter peak demand.
Now if you put in a minimum efficiency heat pump, you can see that actually it doesn't give you any savings. This graph shows the savings that is achieved through this measure. So when the temperature drops below zero, you actually don't get any savings from that minimum efficiency heat pump. And you can contrast that with the savings you would get using a high-efficiency cold climate heat pump, or a high-efficiency cold climate heat pump along with a weatherization package of insulation and air sealing. So [cuts out] be a valuable tool for looking at the impacts of various measures on peak demand.
These capabilities, I think, could also be used to look at what the demand response potential of smart thermostats is and, beyond just the potential of deploying smart thermostats individually, what synergies there are with doing weatherization along with smart thermostats. And also we think that there's a nice synergy with the pay-for-performance or M&V 2.0 programs that are out there, where you can use ResStock to evaluate the best upgrades and the best homes for those upgrades and then get paid for those savings through a program like this.
And then I just wanted to touch briefly on the city-scale applications that we're working on with our partner Radiant Labs. While ResStock is a national tool, if you look at one particular city, the data on building characteristics for that particular city might not be the best data that's available, so what Radiant Labs has done for the city of Boulder is put together city-specific data based on locally available assessor's databases and other datasets. And that gets meshed into the ResStock workflow and supplemented with ResStock regional characteristics to generate a set of data that can then be used for market engagement specifically for that city and that reflects kind of the best available local data for that city.
OK, that's everything I had to present. Thank you all for listening. On this contacts slide I listed our GitHub repository where you can follow along with the progress of the OpenStudio ResStock capabilities, and also listed the title of the report that came out in January that documents a lot of the methodology behind ResStock and the analysis that was conducted last year for the Quadrennial Energy Review. And I'd be happy to take any questions.
Linh Truong:
Great, thank you so much, Eric, for all that information. And I just wanted to remind the audience, if you do have any questions, feel free to enter them in the question pane. Eric, the first question for you today goes back to the beginning of your presentation where you were talking about the sources of data. Can you just talk about whether or not all the sources were freely available or was there a cost associated with that? And you mentioned the challenge of sort of meshing all of that data; how long did it really take the team to mesh all the different data sources that you mentioned?
Eric Wilson:
Yeah, good question. So while some of the main data sources like the EIA RECS and census data are publicly available, we did have to supplement with data sources that had to be purchased. So for example, the homebuilder surveys, those we had to purchase, and those are not available on our repository, though the probability distributions that we derived using those data sources and meshed with others, the resulting probability distributions are available [cuts out].
And the process to mesh these all together, it was an iterative process, definitely. So actually, as part of the validation exercise, we went back and looked at where we needed to refine our data sources, and at that point we changed the structure, which parameters were functions of other parameters for our data queries, as well as brought in new data sources to supplement and get more detail in particular areas that we thought were necessary to improve the validation against consumption data. So I mean, we started this project in 2013 and it was really 2016 when we had the first big application of it. So there was a lot of work in establishing what the methodology was but also a lot of work in putting together these data sources.
Linh Truong:
Great, thank you so much, Eric. The next question involves how you were referencing that the information can be useful to, well, manufacturers, policymakers, that type of thing, and then you gave examples of some of the projects that you're working with – working on, including with the city of Boulder. Could you provide more information about that market engagement strategy, if you can, and what that project is gonna involve in terms of timeframe and what the outcome is for that project?
Eric Wilson:
Sure. And feel free to email me to get more information, but I’ll give a quick overview. So Radiant Labs is building out an analytics tool that is [cuts out] sits on top of ResStock results for the city of Boulder. And the analytics tool is being used by the city of Boulder to look at which homes they should target for particular upgrades to achieve their goals for energy within the city. And the timeframe I think they have a prototype version of that analytics tool, but they’re gonna be working on it more this summer. But feel free to contact me and we can talk more offline.
Linh Truong:
Thanks, Eric. Can you also speak to a related question about the regional data that's available? You mentioned that regional data is specific to a city like Boulder, but is regional data also available at, like, a statewide or a community or neighborhood level as well?
Eric Wilson:
Yeah, good question. So for the national-scale analysis that was conducted this last year, we did use some more regional datasets. For example, the Pacific Northwest has an excellent dataset of audit-level data from 1,400 homes; that's called the Residential Building Stock Assessment, so that was used as part of our ResStock building characteristic data. And then there are other examples of that, such as data from the Building America Program, from Wisconsin and Minnesota for example.
As far as city-specific data, it kind of depends. A lot of cities or counties have assessor's databases, but a lot of them are in different formats, so it's [cuts out] you kind of take that information and build it into a consistent format that could be used in a tool like ResStock. So that's why we're looking to work with partners like Radiant Labs to work with specific cities and generate these more city-specific datasets that can then be plugged into the ResStock workflow to generate insights for the cities. And I should mention Radiant Labs is looking beyond Boulder; they're looking to work with other major cities within the U.S., so we're excited about where that project could go.
Linh Truong:
Thanks, Eric. And the next question is about the future interactive website. Is it interactive with measures? Can you fine-tune with your own input like cost or other local market characteristics?
Eric Wilson:
So, yeah, that's a good question. I think the initial version of it will be just displaying information that has already been simulated and processed according to cost assumptions that we typically use. Definitely future versions of it could in fact account for user-specific inputs about discount rates or costs of measures, that sort of thing. So I think that is a possibility, but also if somebody's looking to do that sort of analysis where they're looking at a particular subset of the housing stock, whether it's a city or state, and they have specific measures they want to analyze that maybe we have not analyzed, or they want to define specific costs for those measures, then certainly, because the ResStock capabilities are open source and [cuts out] platform, that is something that other entities can do: download ResStock and OpenStudio and run them themselves using Amazon cloud computing. So feel free to reach out to us and we can talk more about doing that.
Linh Truong:
Thanks, Eric. The next question is, were any new energy-saving measures considered even if specific to localized areas of the country?
Eric Wilson:
Yeah, so maybe two aspects of that, new measures and more localized measures. In terms of new measures, I mean certainly we looked at ductless heat pumps and kind of the top of the line efficiency ductless heat pumps and looked at them in certain areas of the – well, we applied them to the whole country, but certainly you can see that replacing oil boilers for example in an area where there’s savings potential for [cuts out] or a heat pump. Certainly there is more regional or localized potential that comes out of some of the measures that were analyzed. And then other new or emerging technologies, I mean smart thermostats is another relatively recent technology that was looked at. Heat pump water heaters was another one.
For this large analysis that we did last year, we already had 50-plus measures and packages, so we had to draw the line somewhere and we couldn't do everything that we maybe wanted to do, like looking at evaporative cooling in drier climates for example; that wasn't something we were able to include. But I think part of the beauty of being built on top of EnergyPlus and OpenStudio is if the capabilities within [cuts out] we can make use of those existing models and run those as part of the ResStock workflow.
Linh Truong:
Thanks, Eric. The next question is, do you know if the TMY3 weather data reflects current warming trends and can you use future weather files?
Eric Wilson:
Good question. So the TMY3 weather data is 30-year normals, so it accounts for kind of normal weather over the past 30 years. So no, it doesn't account for any current warming trends, but you could swap in, if you had them, actual or historical weather files or even future forecasted weather files; those could be plugged into the workflow, certainly.
Linh Truong:
OK, thank you. The next question is, and you spoke a little bit to it at the beginning, but can you address the applicability of ResStock to the single-house question: is it a tool for populations of homes, and is it currently suited to identifying opportunities for a single home?
Eric Wilson:
Yeah, that's a good question. So right, yeah, ResStock is meant to represent populations of homes, but say you wanted to look at results for, or look at upgrade potential for, a particular home. One thing you could do is kind of narrow in on the homes that are similar to the home of interest, so for example look at 1950s homes in the Southeast with electric heating that are in a certain size [cuts out] have electric water heating and so on. You could segment down to that population of homes with ResStock and you could probably get a distribution of energy savings within those homes, kind of like the payback distribution I was showing previously.
And every additional piece of information you could add will help you narrow in on that distribution a little bit more, but I think you can use that distribution to kind of look at the uncertainty or certainty of savings from a particular upgrade, and that's probably the best approach for taking the results and applying them to individual homes.
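One way to do that kind of narrowing is sketched below, assuming a hypothetical per-model results table with these column names; neither the file nor the columns are actual ResStock outputs.

```python
# Hypothetical sketch: filter per-model ResStock-style results down to homes
# similar to a house of interest, then look at the spread of predicted savings.
import pandas as pd

results = pd.read_csv("resstock_upgrade_results.csv")  # assumed export, hypothetical

similar = results[
    (results["vintage"] == "1950s")
    & (results["region"] == "Southeast")
    & (results["heating_fuel"] == "Electricity")
    & (results["water_heater_fuel"] == "Electricity")
]

print(similar["annual_savings_kwh"].describe())                    # spread of expected savings
print(similar["simple_payback_years"].quantile([0.1, 0.5, 0.9]))   # spread of paybacks
```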
Male Voice:
Or use BEopt.
Eric Wilson:
Yeah. Or you could use a different tool like BEopt, the BEopt software to evaluate a measure in a particular home. And we have thought about using ResStock as a way to inform defaults for a tool like BEopt so that is something that could be considered but it’s not done yet.
Linh Truong:
Thank you. The next question is, what is the segregation year that your team used to base the model on?
Eric Wilson:
I didn’t hear the word in the beginning, the what year?
Linh Truong:
The segregation.
Eric Wilson:
Segregation year? Well, so the – I’ll speak to the data sources. So most of the data that we use is kind of in the 2009 to 2012 timeframe. So that’s the kind of set of housing stock that we’re using to simulate.
Linh Truong:
Now did you say [cuts out]
Eric Wilson:
2009 to 2012, the RECS data, the RECS consumption data is from the 2009 RECS and the American Community Survey, it’s actually a five-year rolling survey because it has such a large sample size that it takes a number of years to get through the whole sample. So that’s for the rolling period like 2007 to 2012.
Linh Truong:
OK, the next question is, how would ResStock address fuel poverty, for example, where energy use is already suppressed and thermal comfort is adequate?
Eric Wilson:
Yeah, that's a really interesting question. Well, a lot of the data sources do have expenditures on energy, so this isn't something that we've looked at yet, but I think it would be interesting to see if you could pull out from the data sources something like, for these households, this is the percent of household income that gets spent on energy utility bills. And then, I mean, you can look at what various programs, whether it's incentive programs or weatherization, might be able to do in terms of alleviating energy poverty. And the question also asked about comfort.
So I think that gets a little bit more tricky whenever you're talking about occupant behavior and how people are trading off; they're not able to pay their utility bill so they aren't able to heat their house properly. I think that gets kind of difficult to quantify, but some of the data sources [cuts out] specifically for this low-income energy efficiency potential project, they actually have quantified things like the percentage of households where they've had their utilities turned off because they haven't been able to pay the bills. So there's some interesting data out there that could be explored to answer that question. I don't know if ResStock is the right tool to look at it, but certainly it could be helpful in looking at the potential for alleviating some of those issues.
Linh Truong:
All right, thank you, Eric. I think that wraps up our Q&A session. There are a few people who did raise their hand today and unfortunately we won't be able to unmute you, but feel free to reach out to Eric after today's webinar and he'll be happy to answer any follow-up questions that you had, as well as any additional ones that come in after today. As we wrap up our webinar, we just want to make sure that we let you know that, as I mentioned at the very beginning, the presentation will be posted on the Building America website, and once we have the audio file prepped, we'll have that available on the Building America website that you see there.
We also want to encourage you, we're not sure how you heard about the webinar today, but if you heard about it through our newsletter, thank you. And if you haven't subscribed, we do encourage you to do that so that you can get the latest information on upcoming events and reports and other resources available through the DOE Building America program.
Also, as we wrap up, I just want to make sure that we get a chance, before you leave, to get some feedback, because your feedback does help us adapt how we do future webinars. So we just want to ask you three general questions. The first is about the content of the webinar, so we'll give you a few seconds to respond. It is anonymous, so you don't need to worry about that, but let us know how you feel about the content of the webinar today and how useful it was to you.
And here you should see the second question, which is about our presenter today. Let us know about Eric and how he did today. Great. And this last question, it's just a very general one, so we'll give you a few seconds to respond about the webinar overall. Great, thank you so much. We appreciate your time today. We know it's valuable and we will – I've seen a couple comments come in – we will be providing Eric's contact information not only on the Building America website when we post the slides, but we'll make sure that all of you have access to that as well. So thank you very much for your time today, and thank you again, Eric, and the ResStock team for providing all the information today, and have a great week, everyone.