Sixty-five years ago this week, the world became wiser about how to figure out what works, and what doesn’t.
Back then, the issue of the day was how to cure tuberculosis — the leading cause of death for all age groups in industrialized nations from the 18th century until the early 20th century. With the world clamoring for a cure, a group of British scientists in October 1948 tested the effectiveness of a new antibiotic, called streptomycin, in fighting the disease.
Miraculously, streptomycin seemed to work.
But how could the scientists be certain about the drug’s effectiveness? By carefully designing their study and applying an ingenious experimental approach — now widely considered to be the world’s first “Randomized Controlled Trial” — the scientists broke new ground in how to identify cause and effect.
Since then, Randomized Controlled Trials have become the gold standard for rigorous and unbiased measurement of scientific results — from medicine to psychology to economics to Opower’s own global-scale initiative to increase energy efficiency. Hundreds of thousands of randomized controlled trials have been published in scientific journals since 1948.
What’s the idea behind a Randomized Controlled Trial?
And how does it critically inform Opower’s scientific approach to measuring the impact of its energy saving programs? For answers, we can begin by taking a closer look at the seminal 1948 tuberculosis study.
In that pioneering study, the doctors set out to gauge the efficacy of streptomycin on patients with advanced pulmonary tuberculosis. Distinguishing effective from ineffective therapies had proved difficult for such patients, as some patients recovered on their own, while the disease proved fatal for others.
Faced with limited supplies of the new drug in postwar Britain, and a desire to understand its efficacy, the doctors took the critical step of randomly assigning patients into two groups. A “control group” would receive the standard treatment of the time (bed rest), while a “treatment group” would receive bed rest along with the antibiotic.
The randomization step ensured that the only fundamental statistical difference between the “treatment group” and the “control group” was whether they took the antibiotic during the trial. And so, the world’s first recorded randomized controlled trial was underway, immune to the kind of statistical biases that can arise from non-experimental designs.
Thanks to the TB study’s careful structure, the scientists could be nearly certain that any difference in outcomes between the treatment and control groups was attributable to the antibiotic. Streptomycin was found to be effective against TB, as shown by the treatment group’s 20-percentage-point higher survival rate relative to the control group. It remains in use to this day in conjunction with other therapies.
How do RCTs fuel the world’s largest behavioral field study?
On behalf of our 90 utility partners, Opower currently manages nearly 200 randomized controlled trials (RCTs) around the world. In the tradition of the British Medical Research Council’s landmark study in 1948, Opower’s use of RCTs reflects our dedication to using precise scientific evaluation in order to verify what works.
But instead of measuring the impact of giving antibiotics to tuberculosis patients, we measure the impact of giving personalized energy analysis and advice to utility customers — and quantifying how it reduces their energy use.
The scale of Opower’s initiative to verify what works to drive energy savings — spanning more than 10 million utility customers — is admittedly larger than the 1948 tuberculosis study, but its structure is remarkably similar.
For example, in a given Opower utility program, we might begin with 50,000 households, and then randomly assign them into two groups — a “treatment group” that receives personalized energy analysis, and a “control group” that does not. Randomization ensures that the two groups are statistically equivalent across observed and unobserved characteristics – such as the towns in which they live, the weather they experience, their average annual energy usage, and even their interest in energy efficiency.
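The assignment step described above can be sketched in a few lines of Python. This is a simplified illustration, not Opower’s actual implementation: the household IDs, group split, and random seed are all invented for the example.

```python
import random

def randomize(household_ids, seed=42):
    """Randomly split households into equal-sized treatment and control groups.

    A fixed seed makes the assignment reproducible; randomization ensures the
    two groups are statistically equivalent in expectation.
    """
    rng = random.Random(seed)
    shuffled = list(household_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

# Illustrative: 50,000 hypothetical household IDs
treatment, control = randomize(range(50000))
```

Because every household has the same chance of landing in either group, characteristics like location, weather, and baseline usage balance out across the two groups on average — which is exactly what lets any later difference in outcomes be attributed to the intervention.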
As we’ve found time and time again, customers who are empowered with actionable energy advice begin to use less energy than their control-group counterparts (typically 1.5-2.5% less on average). Because of the statistical symmetry between the treatment and control groups, this reduction in usage is solely attributable to Opower’s intervention.
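Under this design, the measurement itself is conceptually simple: compare average usage between the two groups. A minimal sketch, using made-up meter readings rather than real program data:

```python
def estimated_savings_pct(treatment_usage, control_usage):
    """Average treatment effect, expressed as a percent reduction relative
    to the control group's mean usage."""
    t_mean = sum(treatment_usage) / len(treatment_usage)
    c_mean = sum(control_usage) / len(control_usage)
    return 100 * (c_mean - t_mean) / c_mean

# Invented monthly kWh readings: control averages 1000, treatment 980,
# so the estimated savings is 2%.
control_kwh = [990, 1010, 1000, 1005, 995]
treatment_kwh = [985, 975, 980, 982, 978]
savings = estimated_savings_pct(treatment_kwh, control_kwh)  # 2.0
```

In practice an evaluation would also report statistical uncertainty (confidence intervals) around this difference in means, but the core logic is just this comparison.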
While the efficacy of Opower’s program has been well established, we continue to rely on RCTs in order to measure energy savings as accurately as possible – whether in existing programs, programs with new utility partners, or even programs in new countries.
As Head Economist at Opower, my job is to evaluate the results of our programs and ensure their firm basis in RCTs. Due to the strength of RCT evidence, utilities and regulatory authorities increasingly look to its trusted measurement framework to help verify the performance of energy efficiency programs. As part of this process, 30 external evaluations have been conducted verifying the effectiveness of Opower programs by evaluation firms such as Cadmus, DNV Kema, Integral Analytics, and Navigant, as well as researchers at leading academic institutions like Harvard, NYU, and Yale.
It’s Opower’s commitment to RCTs that enables our utility partners to know that they have successfully fulfilled their energy savings targets, and allows us to confidently assert that our company has collectively achieved 3 terawatt-hours of energy savings.
For additional info on our EM&V approach, and to learn more about Opower’s pioneering application of randomized controlled trials to measure the performance of energy efficiency programs, click here.
Alessandro Orfei leads Opower’s Evaluation, Measurement, and Verification team. Prior to joining Opower, he consulted for the World Bank on impact evaluation of programs in a variety of sectors including energy, transportation, water supply and sanitation, and microfinance. He holds a BA in Economics from the University of Chicago, and a PhD in Economics from the University of Maryland.