MVM: Minimal Viable Measurement
Posted: October 4th, 2013 | Author: Michael Goldstein | 1 Comment
Dai Ellis gave a talk to our team yesterday about Kepler, the new university he has launched in Rwanda. It’s pictured above.
In Kenya, Dai’s team is doing some scaled-down work right now, trying to develop what he called a “Minimum Viable Product.”
We hear that term a lot around the i-lab.
Wikipedia:
A Minimum Viable Product has just those features that allow the product to be deployed, and no more.
The product is typically deployed to a subset of possible customers, such as early adopters that are thought to be more forgiving, more likely to give feedback, and able to grasp a product vision from an early prototype or marketing information. It is a strategy targeted at avoiding building products that customers do not want, that seeks to maximize the information learned about the customer per dollar spent.
“The minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.” The definition’s use of the words maximum and minimum means it is decidedly not formulaic. It requires judgment to figure out, for any given context, what MVP makes sense.
An MVP is not a minimal product, it is a strategy and process directed toward making and selling a product to customers. It is an iterative process of idea generation, prototyping, presentation, data collection, analysis and learning.
Today Lauren and I were discussing a new use for the MVP idea. We’re taking that thinking and applying it to Measurement. Minimum Viable Measurement. MVM.
Let’s take one of the many questions we care about.
“Do teachers with really strong academic credentials — measured by tests like SAT or KCSE or GRE, or measured by having done well in high school or college — tend to help kids learn more?”
Why is this an important question? We hire lots of teachers at Bridge. We choose from many candidates. How do we choose? One factor we weigh is an applicant’s previous academic performance.
Should we do that? And if so, should we count it as one factor among many, or should we also have a “floor”? I.e., no admission under X score, the same way that some colleges have a GPA cutoff or an SAT cutoff.
Well, you can always pull what’s been published on the topic to help answer the question.
And the answer is, generally, there’s not much correlation between previous academic background and teacher value add. Some, for sure. A little. But not much.
But of course all the previous studies don’t perfectly apply to our specific situation.
So the question becomes — how much work should we do with our own data?
There are 3 choices.
a. Do nothing.
b. MVM.
That is, do some down-and-dirty calculations — a Minimum Viable Measurement.
In this case, it’d be a simple correlation between our teachers’ KCSE scores and their pupils’ absolute exam grades. So somebody digs up the KCSE data, and then someone else takes those and matches them to exam scores.
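To make the “down-and-dirty” part concrete, here’s a minimal sketch of what that calculation might look like, assuming we’ve already matched each teacher’s KCSE score to their pupils’ average exam score in a simple spreadsheet export (the file name and column names below are hypothetical, not our actual data systems):

```python
import csv
from statistics import correlation  # Pearson's r, available in Python 3.10+

# Hypothetical export: one row per teacher, with that teacher's KCSE score
# and the average exam score of the pupils they teach.
with open("teacher_kcse_vs_pupil_exams.csv", newline="") as f:
    rows = list(csv.DictReader(f))

kcse_scores = [float(r["teacher_kcse"]) for r in rows]
pupil_means = [float(r["pupil_exam_mean"]) for r in rows]

r = correlation(kcse_scores, pupil_means)
print(f"Teachers: {len(rows)}  Pearson r = {r:.2f}")
```

That’s it: one number, no controls for school, class size, pupil background, or anything else. Which is exactly the point of an MVM -- it’s the cheapest first look, not a verdict.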
There’d be so many imperfections in this approach, I won’t bother to list them here. It takes our team < 10 hours. Over time, as our data systems improve, it’d take a couple hours... i.e., no need to “gather” data, just pull it cleanly and run calculations.

So an MVM is a potential analysis you can do, and the question is whether it’s worth doing at all.

c. Do a very complete study.

This option might take hundreds of hours. Doctoral students trying to answer one question with good evidence often spend years on it. Often, to do a complete study that can answer all reasonable questions or concerns, we’d be stopping other potentially valuable work to tackle this particular question.

* * *

So in Kenya, Dai’s team has launched a “piece” of their full idea, with 50 people. Does that answer the question of whether his university concept can work in multiple countries with tens of thousands of pupils? Of course not.

Could he instead try to invest thousands of hours in planning an elaborate “Full University,” getting regulatory clearance, and then testing the whole thing? Sure. That was the old, common way of doing business -- what MVP essentially replaces at times. MVP wants to get customer feedback quickly. The “old business types” hated MVP -- “If you give the customers a crappy version of your idea, of course they don’t want it, so why bother creating one?”

How about with measurement? When is the right time to do an MVM?

It’s any time you think you can get valuable, albeit massively imperfect, information. If your judgment is that the MVM study is going to be PURE noise, then you don’t bother; “Do nothing” is a better option. If your institutional life depends on the best possible answer, “Do a very complete study” is the right choice.

Often, however, you’re in-between. MVM?
* * *

Mike,
Great post. I love your framing of Minimal Viable Measurement and look forward to seeing how you guys apply it more in your work.
I spent my last few years at the Clinton Foundation pushing on how measurement was done in global healthcare. Too often projects were being run with the de facto question “how can we collect the most data to analyze this as deeply as possible?” Impact on services is the stated goal, but the reality is that the designs are driven by academic publishing ambitions. The results are rarely optimal for managers or government officials who are trying to save lives on a shoestring.
We built a new approach called Rapid Impact Evaluation that flipped this on its head. Every opportunity for analysis is now met with the question “what is the minimal level of rigor needed to reach an effective decision on this issue?” We also built a suite of tools to apply rigorous methods faster and cheaper.
It’s great to see you applying this philosophy to global education, where the pressure to deliver quality at minimal cost is equally acute. We will be trying to do the same at Kepler and Spire, and we look forward to swapping notes and seeing what we learn from each other.
Cheers,
Oliver