
Responses to Metrics 3.0: A New Vision for Shared Metrics

In a recent issue of the Stanford Social Innovation Review, Mike McCreless et al. proposed a new approach to measuring social impact – Metrics 3.0: A New Vision for Shared Metrics. This new approach seeks to build on the progress made in using metrics for accountability and standardization. McCreless et al. argue that the next step is to integrate impact, financial, and operational metrics, and to shift toward evaluations that are targeted, actionable, broadly useful, and supportive of collective learning. In this Skoll World Forum series, co-edited by Mike McCreless, a range of experts present their responses to the Metrics 3.0 vision.


Feedback is Insistent: Essential and Useable Metrics

David Bonbright

CEO, Keystone Accountability

August 27, 2014

Social impact measurement distills to two questions. What are your metrics? How do you (and others) use them?

The Aspen Network of Development Entrepreneurs asked 30 organizations from the field of social investing to answer these questions and published the findings in a report called “The State of Measurement Practice in the Small and Growing Business Sector” and a blog post on Stanford Social Innovation Review.

On the “what metrics” question, the survey showed that metrics are mainly used for accountability to funders. However you dress this up, that is a disappointing status quo.

The report gamely answers the second question about the use of metrics. Seize the opportunity, it says, to use metrics to drive performance improvement by “integrating social metrics with financial and operational ones, and aligning with collective learning agendas — either by leveraging the existing evidence base, or by making sure that multiple stakeholders can act on and use their evaluations.” This is the approach that McCreless et al. refer to as Metrics 3.0.

Our work at Keystone Accountability centers on helping organizations use measurement. “Use it or lose it” is a mantra for us. The type of metric that we have found uniquely effective is quantified feedback from primary constituents. I call it “training wheels” for becoming an evidence-based decision-maker. There are several reasons why this aligns neatly with the vision of Metrics 3.0.

Feedback is cheap

First, good feedback data is increasingly cheap and easy to get. A free catalogue of more than 250 feedback apps and services is available online. Last year, seven leading specialists banded together to create a one-stop website for the feedback field called Feedback Labs.

Second, while small and growing businesses (SGBs) have different feedback needs than the consumer services field – which many feedback systems are designed for – much of the craft and technology port easily to new settings. We do not have to reinvent 50 years of customer satisfaction practice. So an anti-poverty superstar like LIFT can deliver state-of-the-art customer service to some of the poorest citizens in the USA. Our work at Keystone suggests that the SGB sector is lagging behind domestic nonprofit service agencies, development NGOs, foundations, and global corporations when it comes to using feedback data to improve product and service delivery.

The third reason promises something that almost never happens – the reduction of the overall measurement and reporting burden. Feedback data predicts constituent behavior, and therefore social impact. The Gates Foundation discovered that the best indicator of teacher effectiveness was student feedback. What’s more, it found that student feedback was the best predictor of subsequent learning attainment as measured by test scores.

In another context, LIFT chief program officer Maria Peña explains in two blog posts how LIFT uses Net Promoter and Constituent Voice methods to discover predictive indicators. LIFT found that the scores its members give after an office session correlate with the progress those members subsequently make on their economic goals. They can now provide extra support to members who give low scores. Once an organization has found reliable predictive feedback indicators, it can concentrate on collecting those and drop less productive measurement efforts.
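The Net Promoter method Peña refers to reduces to simple arithmetic on 0–10 survey scores: the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch of that calculation – the sample scores and the low-score threshold for extra support are illustrative assumptions, not LIFT's actual data:

```python
# Sketch: computing a Net Promoter Score (NPS) from 0-10 survey responses.
# Standard NPS method: % promoters (scores 9-10) minus % detractors (scores 0-6).

def net_promoter_score(scores):
    """Return NPS in the range -100..100 for a list of 0-10 ratings."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Illustrative session ratings from eight members (hypothetical data).
session_scores = [10, 9, 8, 6, 10, 3, 9, 7]
print(net_promoter_score(session_scores))  # 4 promoters, 2 detractors of 8 -> 25.0

# A simple follow-up rule in the spirit of LIFT's practice: flag low scorers
# for extra support (the threshold of 6 is an assumption for illustration).
needs_support = [s for s in session_scores if s <= 6]
print(len(needs_support))  # prints 2
```

The aggregate score tracks overall sentiment, while the individual low scores are the actionable signal: each one identifies a specific member to follow up with.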

Feedback is insistent

There are other reasons that feedback data makes a great on-ramp to the effective use of metric data in general. Feedback data is insistent. If you don’t use it, it comes back and bites you. Every question seeds an expectation. Dealing with feedback involves managing those expectations.

Using feedback well means letting your constituents know what you heard from them and how you will respond. It then means asking them again so they can tell you if you got it right. If you do those simple things, you realize immediate benefits: “Hey, in response to feedback I changed the way I conclude my client meetings and my feedback scores went up by 30 percent!” Such benefits are infectious, breeding excitement among frontline staff in a way that other metrics do not.

So, if your small and growing business or social investment fund is ready to take metrics seriously, you might want to start by creating some simple but systematic feedback loops with those you serve.

