Skoll World Forum Review: Measuring Impact by Cost-per-Outcome
Team Lead (Impact, Knowledge and Communications), United Nations Development Programme
April 18, 2014
Building on the advance series of articles written by delegates and speakers of this year's Skoll World Forum, this section will feature live blogs and pieces from the event in Oxford. We will be covering a wide variety of sessions, panels and discussions on-site. View the live-stream on the homepage, and watch here for real-time articles all week!

Each year at the Skoll World Forum, nearly 1,000 of the world’s most influential social entrepreneurs, key thought leaders and strategic partners gather at the University of Oxford’s Saïd Business School to exchange ideas, solutions and information. Learn more about the 2014 Skoll World Forum, sign up for our newsletter to be notified of the live stream, view the 2014 delegate roster and discover what themes and ideas we'll be covering this year at the event. Also, read about the seven recipients of this year's Skoll Award for Social Entrepreneurship.
The ambitious Impact Genome Project was introduced at the Skoll World Forum by Jason Saul and Nolan Gasser, who holds the title of chief musicologist emeritus at Pandora. The project essentially attempts to codify and standardize social impact programmes across 132 outcomes. As their introduction puts it: “The Impact Genome employs a systematic process to crack the code on social impact just as the Human Genome Project enabled us to improve health outcomes and the Music Genome Project enabled Pandora to classify music so that listeners could discover new music they’d enjoy.”
Interestingly, the project classifies activities not by their outputs but by the outcomes they are designed to effect, for example improving teacher quality or increasing access to libraries. The idea is rooted in using big data to predict outcomes, rather than using results to retroactively measure impact. Credit scores, which take into account historical information such as current debt, transaction histories and the like to produce a score that predicts the likelihood of repayment, were widely cited as the model to which the Impact Genome Project aspires.
The idea is certainly compelling. Yet for those of us who have worked in the social sector for a long time, questions abound. We have long known that we must move away from solely measuring outputs (the number of malaria bednets delivered, for example) to understanding how those outputs move outcomes (the change in the number of individuals using bednets) and eventually impacts (the incidence of malaria). We have also known the challenges of attribution, isolation of effects, measurement and data collection that make impact difficult to establish.
The IGP seeks to establish, in essence, a cost-per-outcome for each activity. For example, a programme delivering books to teachers can be categorized as attempting to move two or three outcomes (one primary, one secondary and perhaps a tertiary), and its effectiveness in delivering on those outcomes can then be measured. The methodology here gets fuzzy, but it essentially relies on existing research from other programmes. One can then calculate a ‘cost-per-outcome’. The methodology's current problems, which one expects will be ironed out, include the following.
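As a toy sketch of the arithmetic just described (not the IGP's actual methodology; the programme name, budget, cost shares and outcome counts below are all invented for illustration), the calculation might look like this:

```python
# Toy sketch of a cost-per-outcome calculation. All names, budget
# figures, cost shares and outcome counts are invented; the Impact
# Genome Project's real methodology is considerably richer.

def cost_per_outcome(cost_attributed, outcomes_achieved):
    """Cost of producing one unit of a given outcome."""
    if outcomes_achieved <= 0:
        raise ValueError("outcomes_achieved must be positive")
    return cost_attributed / outcomes_achieved

# A hypothetical programme delivering books to teachers, mapped to a
# primary and a secondary outcome, with the budget split between them.
total_cost = 500_000  # annual budget in USD (invented)
outcomes = {
    # outcome name: (share of budget attributed, units achieved)
    "improved teacher quality": (0.70, 2_000),
    "increased access to libraries": (0.30, 800),
}

for name, (share, achieved) in outcomes.items():
    cpo = cost_per_outcome(total_cost * share, achieved)
    print(f"{name}: ${cpo:,.2f} per outcome")
```

The budget-share split is itself a judgment call: without some attribution rule, the same dollar would be counted against every outcome the programme touches.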
First, how do synergies and cross-effects between outcomes get measured and valued appropriately? Second, how does the age or phase of an activity get appropriately recognized in the cost-per-outcome? (An early-stage startup might have a high cost-per-outcome that is not weighted for its stage.) Third, how does one incorporate wildly differing timelines for activities trying to effect the same outcomes? If one activity tries to improve literacy on a ten-year time frame and another on a two-year time frame, how does one compare the two? Fourth, how does historical data on impact achievement get one to predictive scores (isn’t this the same problem of extrapolating from past data that needed to change in the first place)? Fifth, how are risk-taking and innovation factored into a cost-per-outcome metric? Sixth, and most importantly, where does this calculation of cost-per-outcome take us as a field?
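To make the timeline question concrete, here is a toy example (all figures invented): two programmes with an identical headline cost-per-outcome diverge sharply once time frames are considered. Annualizing is one naïve normalization, and it is far from obvious that it is the right one.

```python
# Toy illustration of the timeline problem: two hypothetical literacy
# programmes with the same headline cost-per-outcome look very
# different once their horizons are taken into account.

def annualized_cpo(total_cost, outcomes_achieved, years):
    """Naively spread a cost-per-outcome over the programme's horizon."""
    return (total_cost / outcomes_achieved) / years

# Both show a $100 headline cost-per-outcome (1,000,000 / 10,000)...
ten_year = annualized_cpo(1_000_000, 10_000, years=10)
two_year = annualized_cpo(1_000_000, 10_000, years=2)

print(ten_year, two_year)  # 10.0 50.0 -- a fivefold gap the headline hides
```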
It is to this last point that I would like to sound a cautionary bell. As with other methodologies, the problem of standardization and aggregation has plagued the field for some time, and having been both a funder and an investee, we recognize the problems inherent on both sides. Funders want a metric that allows them to choose between programmes easily, while investees and grantees want the complexity of their programmes to be adequately recognized and resourced.
A cost-per-outcome sounds very compelling, but the devil is in the details, particularly in fields where new and emerging technologies and methodologies of delivery are changing the way business has been done for the past few decades. In such fields, cost-per-outcome benchmarks can be reductive and inaccurate because the underlying costs are changing so rapidly. Moreover, many funders, stretched for time, may take cost-per-outcome as prima facie evidence, distorting the context-specific and complex nature of social change that we are all in the business of trying to effect. Still, we at BCtA certainly look forward to the immense lessons that will be learned from deploying data across such a spectrum to shed light on the pressing issues of our time.