Briefing on the Better Buildings Neighborhood Program Evaluation Webinar (Text Version)

Below is the text version of the webinar Briefing on the Better Buildings Neighborhood Program Evaluation, presented in June 2015. Watch the presentation.

Sargon:
Presentation cover slide:

Hello, everyone, and good afternoon. Or good morning, if that's the case where you are. Thank you for joining us, and welcome to today's webinar briefing on the Better Buildings Neighborhood Program evaluation. Before we get started with the presentation today, I wanted to go over a few logistical items. First of all, all participants online have been put on universal mute to prevent background noise during the presentation. If you haven't done so already, and you're using a telephone to join us, please enter your two- or three-digit audio PIN. You can find your audio PIN in the control panel box on the right-hand side of your screen as part of the GoToWebinar interface. To enter it, press # on your telephone keypad, then the two- or three-digit number, then # again. We ask you to do that so that we can unmute your line if you have any questions during the Q and A period. If you're listening to the presentation through your computer speakers, you will not be able to ask questions orally during the Q and A session. Today, questions will be addressed primarily during a final Q and A session at the end of the presentation, although there might be opportunities to take a few questions periodically for clarification's sake. During the Q and A period, you can virtually raise your hand by clicking the blue button with a hand, which is also located in the control panel, and we can unmute you so that you may ask your question over the phone. You can also type a question if you prefer; you can do this at any point during the webcast, into the questions box also in that control panel, and we can address those as part of the session. Lastly, today's webcast is being recorded. The recording will be posted online in the days following today's meeting. With that, I hand the floor over to Danielle Sass Byrnett of the Department of Energy. Danielle? ... OK, it appears that Danielle is not available at this very moment, so ... 
Ed, if you wouldn't mind chiming in and maybe making a few introductory comments about our speakers today?

Ed Vine:
Hi. This is Ed Vine from Lawrence Berkeley National Laboratory. Today's talk is on the evaluation of the Better Buildings Neighborhood Program, a very important program run by the U.S. Department of Energy. You will hear more details about that.

First slide:
The main speakers on today's call are from the team that led this evaluation. Research Into Action was the lead; it led the overall team and the process evaluation research. We also had an impact evaluation and a market effects evaluation. Evergreen Economics conducted the economic impact analysis and the billing regression analysis and worked with Nexant to verify the program savings; Nexant led the impact evaluation, conducted the M&V activities, and verified program savings. And the NMR Group led the market effects assessment. I want to send my thanks to DOE on several accounts. First, the project manager was Jeff Dowd from Energy Efficiency and Renewable Energy. My colleague Yaw Agyeman helped provide technical oversight and management responsibilities. And within DOE, Dale Hoffmeyer of the Building Technologies Office was very instrumental in helping us. We also had a number of peer reviewers who reviewed four reports -- actually more than that. A few years ago -- this is a long-term study -- we had a preliminary impact and market effects evaluation and a preliminary process evaluation, and then just recently we had review of the final reports, and you'll hear more about that. The peer reviewers were Marian Brown, a consultant; Phil Degens; Lauren Gage from BPA; and Ken Keating, also a consultant. Two other consultants helped out as well: Lisa Petraglia and Skip Laitner. We also had internal reviewers: Jeff Dowd, Dale, and Danielle from DOE, as well as Claudia Tighe from DOE, and Bill Miller as a systems consultant. So what I'd like to do now is hand this over to Research Into Action, in particular Jane Peters. She'll go over the agenda and the topics that we're going to cover. We also have members of the evaluation team on the call, so they can help answer some of the questions that arise. So Jane, why don't you take it over from here?

Danielle Sass Byrnett:
Next slide:

Jane, just one second before you jump in. Thank you for jumping in, Ed. Sorry for the technical difficulties. This is Danielle Sass Byrnett with the Department of Energy. Before we move to the full agenda, I wanted to provide a little bit of context and thank you all for joining us on this webinar. Most of you know what the Better Buildings Neighborhood Program was; I was the program director for the entirety of its run. Jane's going to talk about the history of BBNP and what it accomplished. But one of the many goals we had in testing innovative approaches to building efficiency upgrade programs was to collect enough data and information to analytically determine what works and why for building energy efficiency programs. And this is something that has never been done before. So we're very excited at the Department of Energy to have had an independent evaluation be able to look across what DOE and the grant recipients accomplished, but also to figure out what kind of research and analysis could be done to really quantify these important questions about measures of success and what program characteristics are associated with success. So you'll hear about 12 factors that are associated with success. After Jane Peters and her colleague Marjorie McRae go over the extent to which DOE and our grant recipients met our goals and objectives, there are also detailed methodology slides available at the end, should folks have questions. And I will read questions as they come in, if they're relevant for clarification, or else provide questions to our folks at Research Into Action and the other evaluation team members who are available on the line to answer any questions you have. So please, as Sargon mentioned, feel free to type those into the chat box throughout, and I will get them organized and repeat them back to our presenters. 
Or raise your hand if you're interested in being called on to provide an additional question or some insight. Thank you, Jane.

Jane Peters:
... You're hearing me OK? ... Thank you. So the agenda will look at BBNP first, then the findings from the evaluation -- and that was a very complicated evaluation. We'll discuss the various findings and recommendations, and there is additional material at the end. I'll try not to rush through this; if there are questions, please raise your hand, and we'll try to answer them. We have plenty of time, I think, to do that. One of the things about this program, as Danielle mentioned, is that it is in fact a unique opportunity to investigate questions that have been raised repeatedly, and remained unanswerable, over my 35 years of doing evaluation. Because the program covered so many different types of approaches to meeting the same sort of objectives, we were able to collect a lot of information and draw comparisons, statistical comparisons, about what was really going on in these programs. So it's been a really exciting project and one that I hope you find fascinating as we go through it.

Next slide:
So first of all, let me talk about BBNP.

Next slide:
What was it? Well, the aim was to demonstrate self-sustaining efficiency upgrade programs and to be innovative. These were two key components of the FOA that was issued for the program. Ultimately, there was $508 million in grants. There were 41 grantees and 24 subgrantees, and well over 100 different upgrade programs operated by those grantees and their subgrantees. They were primarily whole-building energy upgrade programs, and they covered all sectors: residential, low-income, multifamily, commercial, and even agricultural projects. There were 34 states. I believe that's our next slide.

Next slide:
Thirty-four states, one territory, some statewide grants, some grants that just targeted a community. One of the challenges of the evaluation -- and also one of the things that made it exciting -- was that there were so many different types of strategies. That complexity created the opportunity to say what is more successful and what is less successful when people run programs.

Next slide:
The program kicked off in July 2010. At the time it was called Retrofit Ramp-Up. It was ARRA funds, from fiscal year '10. Between 2010 and 2013, over $445 million was spent, and $508 million by the end of 2014. The nationwide effort was diverse. There were urban, suburban, and rural environments. There were industry partnerships of all different varieties: some with utilities, some without; some with nonprofits, some without. There was a government or nonprofit involved in every grant. The goal was that these programs, in those first three years, would gain enough traction that some component could continue and be sustainable beyond the grant period, and that they would learn over time how to benefit from economies of scale. And ultimately, that DOE, through the evaluation, would learn what was effective and replicable -- what should be done again, and what shouldn't.

Next slide:
This timeline -- the blue part is the program, and the purple part is the evaluation. You can see the FOA coming out in 2009, the grants announced in 2010, grantees assigned to account managers and the grant period beginning, and then the evaluation contract being awarded at the end of 2011. We rapidly began working on an evaluation plan and data collection, resulting in a preliminary evaluation report at the end of 2012 -- within a year of the contract award. The grant period ended at the end of 2013, but extensions were granted to the grantees so that they could continue their work. However, the evaluation had to have a point at which it ended data collection, and that was the third quarter of 2013. The final drafts of the evaluation reports were submitted in March of this year, and all ARRA funds are to be spent by June 30 -- hence this briefing, six days before the end.

Next slide:
This is again the timeline in words, for those of you who prefer. The evaluation verifies the accomplishments of the program, through September 2013. It does not verify the fourth year, again, largely because the ARRA funds were to be expended by this month. So there could be no continuation of the evaluation to cover that fourth year.

Next slide:
OK, next, I want to start with the evaluation findings. But are there any questions on what I've presented so far on BBNP, Ed or Danielle?

Danielle Sass Byrnett:
It doesn't look like it, Jane.

Jane Peters:
OK, great. Thank you. OK. So the evaluation findings. We'll go over the bottom line first: goal and objective attainment. Then we'll get to the energy and CO2 impacts, the program implementation lessons learned, some market effects findings, and our recommendations to DOE.

Next slide:
OK. Bottom line: it did really well. All of the ARRA-defined goals were met. Most of the program-defined objectives were met within the first three years, and achievement of six of the seven key program-defined objectives is likely at the end of the four years, but those were not verified. BBNP did demonstrate what works. There are lessons learned, which we'll discuss and which Danielle referenced earlier, that do help facilitate sustainability of upgrade programs. And that can be important for the entire energy efficiency industry, as well as DOE.

Next slide:
So here are the ARRA-defined goals. As you can see in the dark-green column, they are fairly high-level goals: create new jobs, spur economic activity, provide accountability and transparency. All of these were attained. Jobs were created, and dollars of economic activity were invested and increased, with an estimated net benefit-cost ratio of 3 to 1 from the expenditures. And because of the way ARRA was set up, with the Recovery.gov website that the grantees reported to, and the way BBNP was set up, there was transparency within the entire award group. Grantees could see what each other was doing through the various Google websites, conferences, webinars, and peer-to-peer networks, and they could report through Recovery.gov to the public what was going on. So accountability and transparency did occur.

Next slide:
BBNP then defined program objectives, and these were more specific than the ARRA objectives. For instance, ARRA said create jobs; the BBNP program-defined objective was to create or retain 10,000 to 30,000 jobs. They also set a goal of developing sustainable energy efficiency upgrade programs. By sustainable, they meant that something would continue after funding. We found that 84 percent of the grantees would actually be continuing some portion of their program after the three-year evaluation period, and after the continuation period as well. This would include activity in their own market, adoption of practices, and financing that would continue to be available. So those features were expected to continue, differentially across the different grantees. BBNP did create jobs. We estimated over 10,000 net jobs resulting from the effort during the three-year evaluation period. So each of these goals was attained as verified for the three years, and we can say they are attained through the fourth year as well.

Next slide:
BBNP set a goal of at least 100,000 residential and commercial buildings to be upgraded. They almost achieved that goal at the end of the third year: 99,000 upgrades at the end of the three-year period that we verified. They reported 119,000 after that period, so we believe it is reasonable to assume that they achieved their goal in the fourth year. BBNP also set a goal of saving consumers $65 million annually on their energy bills. We were able to verify up to $40 million in annual bill savings during the three-year period. So they did not achieve their full $65 million annual target; they were at about 62 percent of it. There were additional savings reported after that, but based on what we verified in the three-year evaluation, we think they still did not quite achieve the $65 million annual bill savings in year 4. There was, however, close to $700 million in lifetime energy bill savings.

Next slide:
In terms of other objectives that were set by the program, another was to achieve 15 to 30 percent estimated energy savings from these upgrades. In the residential sector, where the majority of the upgrades occurred, over 15 percent savings was verified across the sector, so that goal was achieved in year 3. Another goal was to reduce the cost of energy efficiency program delivery -- that is, to achieve cost reductions over time. We did find that the year-to-year cost of delivery decreased, so that by the end of three years, program costs had been reduced by more than 30 percent. They also targeted leveraging $1 to $3 billion in additional resources. The dollars leveraged in the first three years did not achieve that amount. They achieved --

Marjorie McRae:
No, excuse me, sorry -- a misinterpretation. They did achieve that amount; the very bottom row says $1.2 billion. It's just that we were not able to fully verify it, so in the verified column it's inconclusive. We could only verify about a third of that amount for the third year. But it would appear, from verified plus reported information, that it was attained; that's why we had to call it inconclusive in that column. And in the fourth year, assuming one could verify it, it is likely attained as well.

Jane Peters:
Next slide:

OK, so moving on to energy savings. Marjorie will take this.

Marjorie McRae:
Energy and greenhouse gas emission reductions. So this slide conveys a lot of information. We present the information by sector down the left, plus the total. The gross verified source savings are in MMBtu; the savings were across all fuel types -- electricity, natural gas, fuel oil, propane. "Verified" means this is what our impact team verified, through M&V analysis and billing regression analysis. So gross verified savings, measured at the source -- which is how BBNP as a whole was reporting it -- are nearly 4 million MMBtu. Next is net verified. "Net" is a term that accounts for participant behavior: some people would indicate that they would have done the upgrade anyway. There's also what's termed spillover -- contractors doing more than they would have done without the program, delivering more to nonparticipants than they would have in the program's absence. So net includes those behavioral components, and we have 3.5 million MMBtu in net source savings. Then the lifetime column. The impact team looked at the measures reported and ascribed measure lifetimes, and therefore lifetimes for the savings involved in those projects; that's the third column over. Over the course of the project lifetimes, that would be 56.7 million source MMBtu, again net. You may notice, those of you who quickly do math in your head, that the proportions in the lifetime column -- if you compare residential and commercial, say -- are not the same as in the first, gross column. That's because residential projects are composed of different measures than commercial projects, and the proportions of those measures within the sectors differ, so you get different relative relationships. And then we get the CO2 equivalent, as our metric for greenhouse gas emission reductions, in metric tons. The first column is annual and, again based on net savings, is nearly half a million metric tons. 
And lifetime is over 7 million metric tons of CO2 equivalent emission reductions.
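Marjorie's walk-through of gross, net, lifetime, and CO2-equivalent savings can be sketched in a few lines of Python. Everything numeric below -- the sector inputs, free-ridership and spillover rates, measure lives, and the emission factor -- is a made-up placeholder for illustration, not the evaluation's actual data.

```python
# Illustrative sketch of gross -> net -> lifetime -> CO2e savings.
# All numeric inputs below are hypothetical placeholders, NOT the
# evaluation's actual figures.

def net_savings(gross_mmbtu, free_ridership, spillover):
    """Net savings adjust gross for free ridership (participants who
    would have upgraded anyway) and add spillover (extra activity the
    program induced beyond its participants)."""
    return gross_mmbtu * (1.0 - free_ridership + spillover)

def lifetime_savings(annual_net_mmbtu, measure_life_years):
    """Lifetime savings scale annual net savings by measure life,
    which differs with the mix of measures in each sector."""
    return annual_net_mmbtu * measure_life_years

CO2E_TONS_PER_SOURCE_MMBTU = 0.07  # assumed blended emission factor

sectors = {  # gross MMBtu/yr, free ridership, spillover, measure life
    "residential": (2_800_000, 0.15, 0.04, 16),
    "commercial": (1_100_000, 0.20, 0.02, 13),
}

for name, (gross, fr, so, life) in sectors.items():
    net = net_savings(gross, fr, so)
    print(f"{name}: net {net:,.0f} MMBtu/yr, "
          f"lifetime {lifetime_savings(net, life):,.0f} MMBtu, "
          f"annual {net * CO2E_TONS_PER_SOURCE_MMBTU:,.0f} t CO2e")
```

The point of the sketch is the structure: the lifetime and CO2e columns are derived from net annual savings, which is why sector proportions shift between columns when measure mixes differ.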

Jane Peters:
So at this point, it's probably a good time to see if there are any questions. Danielle or Ed? Clarifying questions or anything relevant?

Ed Vine:
Yea, there's one that's relevant; we've gotten a few questions. One was: how much information in your impact analysis was utilized from the grantees' own independent evaluations? For example, Energize Phoenix had an evaluation that was done by (inaudible) University, and they were wondering, was that incorporated into your evaluation, or was your evaluation truly independent?

Jane Peters:
Our evaluation was truly independent. We certainly examined the other studies, and Lynn or Wiley from Nexant -- Lynn Roy is leading the team. Do you want to give a more elaborate answer?

Lynn Roy:
Sure. We did look at other evaluation studies that were conducted. We did some benchmarking exercises to see if we could use the results from some of the other evaluations that were conducted. We did end up using the results of one evaluation that was conducted outside of what we did, because we could not get data from that particular grantee.

Marjorie McRae:
And so in summary, ours is an independent evaluation.

Jane Peters:
Any other questions?

Ed Vine:
There are some, but more for leaders. So ...

Danielle Sass Byrnett:
There's one that's relevant now, which was, could you please define "net jobs"? Which would have been from a slide ago.

Marjorie McRae:
So net jobs is referring -- it's not the same notion as on this slide. It's referring to the economic analysis, which was done by an input-output model of the U.S. economy, a macroeconomic analysis. And in that context, the term "net" means net of the counterfactual. On the slide that's up, "net" also means net of the counterfactual; it's just that the counterfactual differs depending on the type of analysis you're doing, so in that modeling, it would say that there was the same amount of funds spent by the federal government, but they didn't spend it on BBNP activity; they spent it on just the historical non-defense spending patterns. So that would generate jobs. So what did BBNP generate, above and beyond, had the same half a billion dollars been spent by the government on typical non-defense activities?

Jane Peters:
And Matt or Cosin, or Steve Grover from Evergreen Economics, do you want to add anything to that?

Matt:
That's absolutely correct, what Marjorie said. Steve?

Steve Grover:
Yea. We had considered several different counterfactuals, but the one of the federal government spending actually provides a more conservative estimate of net jobs, so that's the one we ultimately went with.

Jane Peters:
OK, so, any other questions? Yes?

Ed Vine:
OK, another question related to jobs, the direct and indirect jobs or induced jobs, and is this for one year or long-term?

Marjorie McRae:
It is direct, indirect, and induced -- the total of those. And it is defined as a full-time equivalent number. So it is the full-time equivalent: over 10,000 jobs from that half-billion-dollar expenditure, which is what BBNP as a whole spent, plus the project expenditures that were covered by the participants. So it's the spending because of the BBNP program -- administering the program and the upgrades themselves -- from that expenditure in the economy. But it's a full-time equivalent number, so that would be 10,000 total. It's not the notion of a job continuing forever or for any number of years -- the layman's concept of "I have a job" -- because we don't have the promise of lifetime employment. Matt or Steve, do you want to elaborate?

Matt:
Sure, and I think the second part of the question was what time period it was over. And the jobs number is over the same period of the rest of the evaluation, which is Q4 2010 through Q3 2013. So the 10,000 jobs, full-time equivalent jobs, could potentially be anywhere. It could be 10,000 people employed over that entire program period, or it could be a person working a particular job for a year, somebody working in a job for two years, that sort of blend. So that's kind of where that full-time equivalent job definition is necessary to define what a job really is.
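The full-time-equivalent convention Matt describes can be made concrete: an FTE count just sums hours worked and divides by a full-time year, so several part-time or part-year jobs can add up to one FTE. A minimal sketch, assuming the standard 2,080-hour full-time year:

```python
FULL_TIME_HOURS_PER_YEAR = 2080  # assumed: 40 hours x 52 weeks

def fte_jobs(hours_worked_by_person):
    """Total full-time-equivalent job-years from a list of the hours
    each person worked over the analysis period."""
    return sum(hours_worked_by_person) / FULL_TIME_HOURS_PER_YEAR

# One person full-time for a year, or two people half-time for a
# year, each count as one FTE.
assert fte_jobs([2080]) == 1.0
assert fte_jobs([1040, 1040]) == 1.0
```

This is why the 10,000-job figure cannot be read as 10,000 people permanently employed: it is a normalized measure of labor hours over Q4 2010 through Q3 2013.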

Ed Vine:
OK, and related to that, you might want to answer the next question. You know, a lot of the money in this program went into financing, and so how do the job calculations take that into consideration, if at all?

Marjorie McRae:
Matt, you may as well go first.

Matt:
Sure. To the extent that the financing was spent by participants in the economy, it would be an input in our model -- like an input of them purchasing goods and services. Otherwise -- sorry, Marjorie, go ahead.

Marjorie McRae:
Pat explained it to me when I asked the same question. It's captured in the total project cost -- a loan that someone received went to cover the total project cost, so counting it separately would be double-counting. The amount that might still be on the books at lenders, as available to lend, hasn't been spent yet, and so hasn't stimulated economic activity yet; it's potentially out there to do so. Matt, have I conveyed it as you told me earlier?

Matt:
I think that's accurate.

Jane Peters:
Ed or Danielle, any other questions?

Ed Vine:
Well, yea, since we're on the bottom line here. And you may not be able to answer this. But did your analysis vary greatly or match the prior results from the individual awardees?

Marjorie McRae:
You're a little fuzzy there, Ed.

Ed Vine:
Did your analysis vary greatly or match the prior results from the individual awardees?

Marjorie McRae:
Oh, the individual. I think that Lynn, you might want to follow up on that, because you were referencing that earlier.

Lynn Roy:
Yea, and I guess I can't speak to it off the top of my head, while we do the call.

Wiley:
Right. So for the one independent evaluation that we did use, we weren't able to conduct our own analysis for that particular grantee, because we were unable to get data. So for that grantee, we're unable to provide feedback on how their independent evaluation results compared to ours. I will say that, largely, we're (inaudible) reported by those that kind of requested the realization rates which we found, which did vary a bit by sector, across all the grantees.

Jane Peters:
So I think it's pretty difficult to give the questioner a real clear sense of comparison for each of the grantees to these results, but there's no -- we're not questioning the results of the particular grantee evaluations.

Marjorie McRae:
Yes, that's a good point. And our sampling -- as you can imagine with such diversity in a large sample, we were not able to have any reliable findings at the grantee level, so we have grantee samples, but we didn't do the calculations at the grantee level. We did it at the sector level. So we don't have any comparison for the grantee, and we didn't have the overlap as Wiley said.

Danielle Sass Byrnett:
We have one more jobs question. For the initial estimate for grant funding purposes for job creation, we were told to use a factor of about $90,000 per job. This was, for reference, an OMB-described factor, so it was used for the Recovery Act broadly. The question is, has this factor been evaluated so a better number can be used? Or was that verified through the course of your analysis in determining the job creation?

Marjorie McRae:
That's a good question. I did a back-of-the-envelope calculation at one point to see what that math would be, and I can't remember the answer. But we did not do anything formally on that. Matt, if the appendix in the impact report has the inputs to your economic model, which I believe it does, one could total up the infusion into the economy in dollars, divide by jobs, and get an implied dollars-spent-per-job that way. But that is a different thing -- a different number than the average wage. It's just a multiplier; it's not telling you an average wage number. Anybody on the team can chime in if you want to add anything to that.
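Marjorie's back-of-the-envelope can be written out directly. Note this is only an implied multiplier: it divides a dollar infusion by net FTE jobs, ignores participant co-spending, and says nothing about wages. The figures plugged in are the rounded grant total and jobs estimate quoted earlier in this webinar, used purely for illustration.

```python
def implied_dollars_per_job(total_infusion_dollars, net_fte_jobs):
    """Implied dollars-per-job multiplier: total dollars infused into
    the economy divided by net FTE jobs created. A multiplier, not an
    average wage."""
    return total_infusion_dollars / net_fte_jobs

# Rough illustration using only the ~$508M grant total and the
# ~10,000 verified net FTE jobs; the true infusion would also include
# participant project spending, so this understates the numerator.
print(implied_dollars_per_job(508_000_000, 10_000))  # 50800.0
```

Against the ~$90,000-per-job OMB planning factor mentioned in the question, a real comparison would need the model's full dollar inputs from the impact report appendix, not just the grant total.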

Jane Peters:
OK. So I think we'll move along.

Ed Vine:
Why don't you go ahead.

Jane Peters:
Next slide:

OK. So I want to talk about some of the lessons learned. These fall into two areas: first, for DOE, and second, for the grantees and future program administrators. There were some specific things that DOE did that contributed to grantee success, and those are things we hope DOE, or any large administrator working in this sort of capacity in the future, is able to do. DOE provided account managers for each grantee; each account manager worked with a pool of four, five, or six grantees. That account manager was available to help answer questions about how to deal with all the paperwork, give guidance on technical assistance that might be useful, and help grantees figure out how to take the next steps. These were considered very, very useful contacts for almost all the grantees. DOE also, through the program services that Danielle ran, offered workshops, conferences, webinars, and peer exchange calls. There was a Google website. I'm hearing some background noise, by the way. So these services, again, were highly valued by the grantees as a way to learn more and to learn from each other. People went to conferences and talked to each other, got to know each other. They listened on peer exchange calls. Then they felt they knew a little bit about each other, so they could put faces with names when they heard people talking. So there was a great deal of back and forth and a lot of activity that occurred as a result of this interaction that DOE provided. It was very valuable to all the grantees.

Next slide:
The next thing is what we found out about the grantee programs themselves. Because this program was so extensive, there were enough data points, enough variables, to conduct a statistical analysis of what is usually totally qualitative when we do process evaluation. We were able to develop these variables, asking relatively closed-ended questions about each grantee's most successful program as it was operating at its most effective time period, and learn what they felt was making them successful. We then put that into models that we could cluster and analyze to determine who were the most successful, average, and least successful grantees, and what the factors were that explained that. The four factors that were most evident in the multivariate analysis were these. Grantees that offered multiple types of energy audits -- not a single type of audit, but multiple types that met different needs of their constituency -- tended to have more success. Those that did some direct installation of measures during the audit also tended to have more success. Those that really worked to develop large pools of eligible contractors within their communities had more success. And those that provided contractor training -- not just building science, but sales training, how to operate the program, things like that -- were much more successful than those that did not offer this additional training.

Next slide:
We know that in a multivariate analysis, fewer variables will demonstrate significance. So we also looked at bivariate analyses to see whether other things were trending toward success, and these were all statistically significant in the bivariate analyses: eight other factors that we consider probably very important for success as well. If we had more similar programs, with less variability, we might have seen some of these come up in the multivariate analysis too. So: having someone on staff with at least 15 years of experience in some relevant aspect -- efficiency, program management, or finance -- was very valuable to a program. Offering financing was another feature that really worked for programs. On incentives, programs that offered incentives around 25 percent of the cost of the measure were more successful than those that offered more or less. Targeting outreach to specific, but not overly restricted, populations mattered: those that targeted just a small neighborhood or a small group of customers weren't as successful as those that targeted similar types of neighborhoods or similar types of people. Community-based outreach efforts definitely led to success. Again, strong relationships with participating contractors, and a flexible approach -- similar to the multiple types of audits and the contractor training, these factors come up again and again. And finally, effective QA/QC -- working with your contractors so that they deliver the types of installations you want -- is much more likely to lead to higher success.

Marjorie McRae:
And at this point I'd like to jump in and say something that's not on the slides. I know there are grantees on the line; one had posed a question already. I want to say thank you, thank you, thank you. You provided detailed survey information on a web survey that I know was very hard and took a long time to complete, answering all the different questions we had about residential, commercial, and low-income programs. It was that detailed information that enabled us to do the analysis whose results we just presented. And as Jane and Danielle said at the outset, it is first-of-its-kind research. There have never before been enough programs with similar objectives but diversity of implementation, and the data on them. We had outcome data from grantee reporting, and we had this survey data that every grantee completed, and I can't thank you enough. This has been extremely valuable research. Without your efforts it couldn't have been done.

Jane Peters:
So at this point, let me just stop and ask if there's any questions on these two slides I just presented -- three slides I just presented.

Ed Vine:
The big question, of course, is how do you define success? How is it quantified ...

Jane Peters:
Yea, I can move to a slide ...

Ed Vine:
... to show that financing is successful and so on? (presenters talking) ...

Jane Peters:
OK, let's look at this slide here.

Next slide:
This talks about our methods for doing the process.

Marjorie McRae:
So this jumps to an intermediate part. Let me just explain this, and then I'll back up. If you can see ... OK. Let's start this one. Based on the BBNP objectives, we identified 12 performance metrics that we could quantify. And these metrics were driven off of grantee reporting to DOE. We could quantify these metrics for 54 grantees and subgrantees for their residential programs. Smaller numbers of grantees and subgrantees ran commercial programs, for example, or multifamily, so we had a smaller n. We needed a large n, and it's the residential sector that had enough sample size. So we quantified these for the residential programs. Of the 54 -- nearly every grantee ran a residential program, so virtually all of these were residential programs. And that survey I mentioned, that people kindly filled out and Jane had referred to, asked people to describe their most effective program, and to characterize -- to describe that in the survey responses.

Jane Peters:
And during the time when it was being most successful.

Marjorie McRae:
Yes, exactly. Now we used cluster analysis software -- latent profile analysis -- to cluster the grantees on the 12 performance metrics. The software itself looks at data across metrics and clusters. It doesn't have an (inaudible). It's not looking for anything in particular. We found a solution that had three clusters. And the interpretation led to a most successful cluster, average success, and least success on these performance metrics. And again, at the time of the research. This analysis is not to say -- we do not publish in the report who was most successful or least successful. That's not the purpose of the analysis. It's simply to be able to answer the question, what worked? In order to know what works, you need to have examples of a good program working.

Next slide:
So here we have four of the 12 illustrated. Blue means a higher value equates to better performance, and green means a lower value equates to better performance. You see here that the most successful group had higher average scores on market penetration of their upgrades and a program savings-to-investment ratio; the grantees in that cluster had the highest mean scores. Grantees in the average cluster had medium scores, average scores, on those metrics. And those in the least successful cluster had the lowest scores on those metrics. For the green, we looked at program cost per dollar of work invoiced and program cost per MMBtu saved, and you get the inverse relationship: the most successful ones on average had the lowest costs for those metrics, the average ones had the average costs, and the least successful had the highest costs. So these metrics simply gave us a way to cluster the grantees. The interpretation of the clusters was clear. It isn't always, with that sort of analysis. But here it was very clear that you could interpret it as most, average, and least. Then we ran regression models -- multivariate regression and the bivariate. Four factors came out from the multivariate, and another eight came out from the bivariate. And one more point on this: That's the only purpose for which we used the success clustering -- this analysis of trying to identify what works, so we could see differences.
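[Editor's note: For readers curious what the clustering step might look like mechanically, here is a small sketch using scikit-learn's GaussianMixture as a stand-in for latent profile analysis, applied to invented values for the four metrics named above. The 54 hypothetical programs, tier means, and spreads are fabricated for illustration; the evaluation's actual software and data are not reproduced here.]

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Invented tier means for four metrics: market penetration (%),
# savings-to-investment ratio, program cost per $ of work invoiced,
# and program cost per MMBtu saved. Not actual BBNP values.
tier_means = [
    [4.0, 1.2, 0.5, 20.0],   # a "most successful" tier
    [2.0, 0.8, 1.0, 40.0],   # an "average" tier
    [0.5, 0.4, 2.0, 80.0],   # a "least successful" tier
]
spread = [0.3, 0.05, 0.1, 3.0]
X = np.vstack([rng.normal(mu, spread, size=(18, 4)) for mu in tier_means])  # 54 programs

# Latent profile analysis is essentially a Gaussian mixture model with
# diagonal covariances. The software just finds clusters -- it has no
# notion of "better" or "worse"; interpretation comes afterward, by
# inspecting each cluster's mean scores on the metrics.
gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
labels = gmm.fit_predict(X)
```

Labeling the fitted clusters most, average, and least successful -- by seeing which has the highest penetration and lowest costs -- is the interpretation step Marjorie describes.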

Next slide:
You need to see differences between people. We're not making any judgment otherwise. We don't publish any names associated with that. And it is limited to the time period that we're looking at. Something may not have been successful by year 3 that would turn out, if it were to continue, in year 5 or 10 to be quite successful. And vice versa: somebody could be quick out of the blocks and have a lot of early success, and if this were to run five or 10 years, fall behind later. So we're not trying to make a judgment. We're just trying to find factors associated with performance on those metrics.

Jane Peters:
And I think the other thing to recognize is that in terms of sustainable program components -- we talked about 84 percent have some component of their program that was continuing beyond the grant period -- they were in every category.

Marjorie McRae:
Yes. So the most successful had a higher proportion, the medium had a middle proportion, and the least successful had a lower one. But at 84 percent overall, even in the least successful category two-thirds of them were continuing. So that right there is an indicator of success. It wasn't what we used in that clustering analysis, but it is another indicator. In fact, we included it as a potential variable, but it turned out not to be relevant to the cluster. But that itself is a different way of looking at it.

Jane Peters:
So, any other questions?

Ed Vine:
Yes, there are a few related to this. Could you have looked at a more comprehensive list -- actually you did look at a lot more variables, but why don't you explain that?

Marjorie McRae:
We created variables from the survey instruments. I've spoken about that web survey, with lots of questions. And we analyzed those variables for correlations -- (inaudible) -- seeking intercorrelations between the variables, and then used statistical analysis to develop a model. So we looked at basically all the characteristics that we had.

Jane Peters:
We did try to look at comprehensiveness of the projects that were involved. That was the question.

Ed Vine:
No, no, it wasn't. Nope. Nope, nope. Sorry. No, Marjorie was -- it wasn't looking at the comprehensiveness indicator but a more comprehensive list of additional variables.

Marjorie McRae:
We looked at more than 12 performance metrics for clustering, and those turned out to be the ones that were predictive of clusters. And we looked at more independent, explanatory variables. And these that we presented are the ones that were driving the results.

Ed Vine:
OK.

Jane Peters:
We did have hypotheses that we would expect certain things to explain and not everything did.

Ed Vine:
Now, again just to repeat: No specific grantees are listed in the report. Correct?

Marjorie McRae:
Correct.

Ed Vine:
If a grantee was interested in learning about their own performance, they could contact you directly, but it won't be in the report?

Marjorie McRae:
Absolutely. And again, this is one measurement for one purpose. Success is a multifaceted thing. And people can feel good about -- I think all grantees can feel really good about their efforts regardless of where they are in those particular analyses. It's been a tremendous effort for many people, and I think quite successful.

Ed Vine:
And let's see ... So here's one: a key difference between grantees was whether they were located in a state with strong statewide efficiency programs and infrastructure, like New York, California, Oregon, and so on. Was this factor explored?

Marjorie McRae:
It was. Well, it didn't turn out to be ...

Jane Peters:
It wasn't --

Marjorie McRae:
... explanatory, wasn't the driver.

Ed Vine:
OK. I think that's it for now.

Jane Peters:
OK. So next I'm going to talk about just a brief slide here on market effects.

Next slide:
Just like everything in our report, we could probably talk about each one of these for about 10 slides or more apiece, but we narrowed it down to the key market effects findings. In the preliminary evaluation, we began by thinking, is there a possibility of national market effects? And we quickly found out that among experts in energy efficiency, there wasn't a national awareness of BBNP. So we focused on local effects. We looked at multiple indicators, and we did find local market effects in these communities that were influenced by BBNP. BBNP increased the activity in the upgrade market, which we found from contractors and distributors. Both reported positive influence on their businesses, and also reported adopting more energy-efficient buildings and business practices as a result of BBNP. We also found that both participating and nonparticipating contractors in BBNP areas increased their marketing of efficiency projects, became more efficiency-focused, and took more training. And in a nonparticipant consumer survey across all of the grantees, we found consumer awareness of BBNP programs throughout these grantee areas. So clearly, the programs had an effect. And I'm sure any of you who live in communities with BBNP could attest to that.

Next slide:
So since we only had that one slide, I'm going to assume we can come back to market effects in a minute. I have my last slide here on going forward, for DOE and for other program administrators. For DOE, we hope that they continue the dialogue that they started with the grantees. And in fact, they have started the Residential Program Solution Center and the Residential Network. Both of those are locations, opportunities for people involved in upgrade programs, especially in the residential sector, to connect. The Solution Center is a website, and the Network is also through a website but brings people together for peer-to-peer connections. There also will be workshops and conferences. And I think there's a conference coming up soon that I saw recently. We also recommend that when providing funding, DOE use account managers. This role has not been used very often by DOE, and it was very, very valuable to the grantees, and we think contributed to their ability to get these programs up and running and to benefit from economies of scale over time in reducing their program costs. Capability building is one of the things that's very important for energy efficiency. The infrastructure for energy efficiency cannot be built overnight; it cannot even easily be built in six months. It took almost a year for most of the grantees to truly get up and running ...

Marjorie McRae:
Longer.

Jane Peters:
... and get their programs going. Some took longer. The capability that's been invested in should be maintained in some way. So other opportunities, such as the Solution Center and the Network -- which could be developed even for other sectors by DOE or other agencies -- could be very helpful to maintain the capability over time. Finally, we recommend that DOE evaluate for long-term effects in three to five years. As we said, we couldn't verify year 4. We don't really know what the evaluation would find if it was done today for year 4. But the sustainability of these programs will only be known in three to five years' time. How is the funding actually continuing to benefit the community? So with that, I'm happy to take more questions.

Next slide:
And we do have additional material to explain some of our methods, if people want to know about that.

Ed Vine:
Yea. There were some questions that I think you've answered. For example, are there plans to look at the long-term sustainability -- which efforts are still active and viable, say, one to two years from now? And I think the answer was: nothing's planned yet, but it's a great idea. How are the lessons learned going to be used to educate not just the ARRA program implementers but people all across the country? I guess the DOE tools will be helpful for that. If Danielle's still on the line, or perhaps Dale Hoffmeyer, did you want to add anything more about what DOE plans to do with the findings here?

Danielle Sass Byrnett:
Sure. Thanks, Ed. It's Danielle. The Better Buildings Residential Program Solution Center is the home for the lessons learned from BBNP and other programs like Home Performance with ENERGY STAR® and utility programs. That currently has a wealth of resources and information, including more than 50 handbooks that describe how to think through the process of implementing an effective residential energy efficiency program, and hundreds of examples based on the grant recipients and others. That information is going to be updated with the results of the independent evaluation you just heard about. We'll be taking care of that over the course of the summer and the early fall. But I'm very happy to say that the evaluation lessons learned corroborated what is in the Solution Center. So our primary task there is to designate many of the lessons, which are referred to as "Tips for Success," as verified best practices. There is also the Better Buildings Residential Network, which is a peer-sharing network of organizations who are committed to expanding the number of homes that are energy efficient across the country. And any and all program administrators, relevant contractors, or other supporting organizations are invited and welcome to join us in collaborating and continuing to share lessons learned and new innovations as things change out in the field, very much in real time. And then lastly, I will mention that the Better Buildings Neighborhood Program website has additional information beyond this independent evaluation. There are summary pages for each of the grant recipients that describe what they did and what their approaches were. They have links to the data they submitted to DOE, and also their own final reports that describe what they believe they learned locally. And there's a release of all of the building-specific data that DOE received as a result of the grant recipients' hard work in energy efficiency upgrades.
You can explore all of that through the accomplishments in the Better Buildings Neighborhood Program website, and I will provide that link in the chat box for everyone.

Ed Vine:
And Danielle, the slides for today's talk -- I know this is being recorded, but will the slides be available, and if so, where should they look for them?

Danielle Sass Byrnett:
The slides will be posted on both the Solution Center and the Neighborhood Program website.

Jane Peters:
So also a partial answer to the question, Ed: the evaluation team has prepared conference papers and submitted abstracts to other conferences, so we hope that within the energy efficiency community, people will have access to this beyond this webinar as well, through presentations and conference papers.

Ed Vine:
Let's see ... I think -- oh, somebody again asked, can we list any grantees by name? Not interested in the rank or how successful they are, just curious if you can give some specific programs that got funded. Do we list the grantees in the report at all?

Jane Peters:
The grantees are listed in one of the volumes.

Marjorie McRae:
They're listed in every volume; Appendix A of every volume lists the grantees and their award amount.

Next slide:
And it's also on the website, the Better Buildings Residential Program Solution Center.

Jane Peters:
This is a listing of the evaluation reports. There are six volumes. Volume 1 is the final synthesis of everything, so if somebody only wants to read one thing, there's even an executive summary of that. Then Volume 2 covers savings and economic impact. Three is the drivers of success that we were talking about, the statistical analysis. Volume 4 is a traditional, more qualitative process evaluation. Volume 5 is the market effects with detailed results from the contractor and distributors' surveys and interviews. Volume 6 is a spotlight on some specific strategies, if you like, sort of a case study type approach, where we looked at four or five grantees per topic area and described the specific types of things that they did for these, and can you tell the topic areas?

Marjorie McRae:
Yes. There were five strategies we looked at: multiple types of audits, or broader pathways for getting involved in the program; contractor training; use of community-based organizations; targeted outreach; and strategies to encourage deep or comprehensive retrofits. So in this volume we can give more nuance than could be given in the others, because, as Jane said, there are four or five grantees in each one of these. We name the names of who's in them, but after that, we don't name names as to who does what. But it gives details of how those strategies played out for those grantees selected.

Jane Peters:
Right. People would probably find this pretty valuable for understanding why people thought this worked, and why they thought something else didn't work, on these key topics.

Ed Vine:
That's slide 33, I believe. If you could move it there. That sort of summarizes what Marjorie was just saying, if people are looking at the slides.

Back to slide 33:
While people look at that, I'll put Danielle on the spot once again. We mentioned that there's no planned study, but the question is, will there be additional funding from DOE to take the successes further?

Danielle Sass Byrnett:
That particular question, if you're talking about grants, is one that's up to congressional appropriations. But we are certainly spending our time and attention on trying to take these successes as far and wide into the same communities through collaborations with other organizations and the Residential Network, Home Performance with ENERGY STAR, etcetera, as I mentioned.

Ed Vine:
Right. And somebody also mentioned that, just for our own information, all the Home Performance program sponsors nationally have now been made Better Buildings Residential Network members. So the networks are expanding. There are no other questions right now, unless people want to send them in. And I don't know about hand signals -- is anybody waving to talk? I didn't see anything, either.

Danielle Sass Byrnett:
We don't have any hands raised right now.

Ed Vine:
OK. We have a little bit more time. Jane, I'll turn it back to you.

Jane Peters:
Well, I thought that maybe I'd just show people the amount of data collection that we conducted here.

Next slide:
It wasn't trivial. Hopefully you're not looking at a small screen, because it's pretty small. But we did web surveys, in-depth interviews, billing data analysis, phone surveys, desk reviews, CATI surveys, web-intercept surveys, and of course, a lot of document review. We reviewed all the websites, reviewed all the different evaluations that were conducted. When you read these reports, you will find references to many other research efforts in this same relative area of upgrades.

Marjorie McRae:
So we try to present our findings in dialogue with the literature to date.

Jane Peters:
Yea.

Marjorie McRae:
And as the grantees know, we collected information for the preliminary and the final in two ways -- halfway through and at the end -- and used it all in this final report. So here, even when it says, "in-depth interviews with 40 grantees," they know it was twice.

Jane Peters:
Yes. So I don't really have much else, except if there are specific questions about methodology or specific questions about findings that we could discuss.

Next slide:
I would just go on to say that, in terms of these -- this is the public webinar, and this is the evaluation web page where the reports will be posted. They are not yet posted, other than the preliminary evaluation, which is posted there. As Danielle described, the findings will be incorporated in the Solution Center. And we will be presenting -- I know there are at least two presentations at the International Energy Program Evaluation Conference this August in Long Beach, California. And I expect to see others in the future.

Danielle Sass Byrnett:
Great. Well, thank you so much, Jane, Marjorie, and Ed, along with the other folks from the evaluation team who jumped in to answer questions on methodology as we received them.

Next slide:
And to reiterate, we will be posting this webinar -- the recorded version and the slides -- on the Better Buildings Neighborhood Program website and on the Residential Program Solution Center, in addition to the evaluation reports, as soon as they are completed and released. So thank you, everyone, for attending the webinar. And you get almost 20 minutes back that you had not anticipated. Have a great afternoon.