Politics of Poverty

How 9 leading organizations do it…policy advocacy evaluation, that is

How to measure our reach, access and influence? You show me yours. I’ll show you mine.

Gabrielle Watson is the Manager of Policy Advocacy Evaluation at Oxfam America.

We all know policy advocacy and campaigning can be a wild ride.

But how do we know we’re really making a difference? At Oxfam, we are constantly asking ourselves: How do we measure our reach, access and influence?

As advocates, we constantly respond to new opportunities and obstacles. We gather intelligence—who supports our agenda, who opposes it—and then try to devise clever ways to shape debates, raise awareness, build alliances, gain access, and hopefully shape the final policies.

Last year, activists outside the offices of Chevron in Houston, Texas, called for oil companies to accept strong rules to implement the transparency provisions of the Dodd-Frank Wall Street Reform Bill. Oxfam members portrayed oil companies as monkeys in the “see no evil, speak no evil” proverb to expose the oil industry’s efforts to push the Securities and Exchange Commission to enact weak rules that would allow companies to continue making secret payments to governments. Photo: Scott Dalton / Oxfam America

It’s such a wild ride that dressing up in monkey suits and sipping “whiskey” in front of a company’s headquarters are all in a day’s work.

Back in the fall, I asked myself whether the tools we’ve built were as good as they could be, or whether there was something else out there we should bring in. So I invited nine leading advocacy organizations to join a comparative review of our approaches—a you-show-me-yours-I’ll-show-you-mine methodology—to learn what we’re all doing in practice. To my surprise, all but one organization said “sign me up.”

Our illustrious final cohort included ONE, Sierra Club, Greenpeace, ActionAid, Amnesty International, CARE USA, Bread for the World, and our “cousins” at Oxfam Great Britain. We had a series of calls to set the main questions and chew on the findings together. ODI’s Simon Hearn, who is a steward of the excellent Outcome Mapping Learning Community and one of the forces behind the new BetterEvaluation project, pitched in with advice and framing questions. We worked with two seasoned campaign evaluators, Jim Coe and Juliette Majot, to run the actual inquiry. They asked campaigners, senior leadership, and evaluation staff what they thought of their monitoring and evaluation (M&E) systems, conducted in-depth interviews with evaluation staff, and reviewed the assessment tools used by each organization. Each participating organization got a private benchmark report comparing it to the rest of the cohort.

We have a bit of a good news/bad news story. The final report and executive summary show innovations in the field of advocacy evaluation, raise some cautionary notes, and pose 12 principles for good practice in policy advocacy M&E to advance rigorous relevance and relevant rigor. A few highlights:

1. We’re getting better at setting and measuring sharp objectives. Despite the unpredictable and non-linear nature of policy advocacy, we found strong agreement among campaigners, senior managers, and M&E staff that we should have clear, monitorable objectives.

2. The basic ‘architecture’ of the M&E systems combines some form of theory of change with stock-taking moments. Nearly all organizations use both short-cycle reviews (such as weekly meetings and after-action reviews) and longer-cycle reviews (typically semi-annual, annual and, in a few cases, three-year strategic reviews). See the sketch after this list for how the two cycles fit together.

3. We’re collecting and analyzing a lot more data. We found an inclination towards quantifiable results, which tracks with an emphasis on measuring activities, outputs, and internal outcomes such as the size of the supporter base and social media reach. The more difficult-to-measure, “mushy” outcomes (that’s a technical term), like coalition building, shifting the terms of debate, and swaying policy makers, are measured less often.

4. We’re getting more formal. We found a notable trend towards the formalization and systematization of planning and monitoring processes. At the same time, many organizations are actively decentralizing, giving more autonomy to geographically dispersed teams. Can these two trends be reconciled?

5. There is a strong correlation between senior leadership engagement and the quality of M&E systems. Organizations where senior leadership actively reviews and discusses evaluation findings also report benefits such as involving the right people in data analysis, generating actionable insights, and acting on those insights. This is definitely a good-news story, and a strong nudge to senior managers to get engaged. At the same time, we found a fairly marked orientation towards “upward” reporting to senior managers and funders, while we’re less good at sharing learning with peers, partners, allies, or the people affected by policy decisions.
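
To make the ‘architecture’ in point 2 concrete, here is a minimal sketch, in Python, of a theory of change paired with short-cycle and long-cycle stock-taking moments. Everything here is a hypothetical illustration of the pattern—the Outcome and Campaign classes and the example milestones are invented for this post, not any cohort member’s actual tooling.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "theory of change + stock-taking moments"
# architecture described in point 2, not any organization's real system.

@dataclass
class Outcome:
    """One step in a theory of change, with an observable indicator."""
    description: str   # e.g. "Regulator signals strong rules"
    indicator: str     # what we would observe if the step were achieved
    achieved: bool = False

@dataclass
class Campaign:
    name: str
    theory_of_change: list[Outcome] = field(default_factory=list)
    notes: list[str] = field(default_factory=list)

    def short_cycle_review(self, observations: list[str]) -> None:
        """Weekly meeting / after-action review: log what just happened."""
        self.notes.extend(observations)

    def long_cycle_review(self) -> dict:
        """Semi-annual or annual stock-take against the theory of change."""
        pending = [o.description for o in self.theory_of_change if not o.achieved]
        achieved = len(self.theory_of_change) - len(pending)
        return {
            "progress": f"{achieved}/{len(self.theory_of_change)} outcomes achieved",
            "pending": pending,
            "recent_notes": self.notes[-5:],  # tactical learning feeds strategy
        }

# Usage: quick observations accumulate between formal stock-takes.
campaign = Campaign(
    name="Payment transparency",
    theory_of_change=[
        Outcome("Issue visible in national media", "3+ major outlets cover it"),
        Outcome("Regulator signals strong rules", "Public statement from the SEC"),
    ],
)
campaign.short_cycle_review(["Rally covered by local TV", "Two new allies signed on"])
print(campaign.long_cycle_review())
```

The point of the pairing is that quick after-action notes feed the slower strategic reviews, so tactical learning isn’t lost between annual stock-takes.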

The study authors asked us a challenging question: When does a focus on near-term, quantifiable results and “upward” accountability to funders and senior managers crowd out strategic learning and more robust transparency and accountability? It’s a question I hope advocacy organizations, M&E professionals, senior managers and funders continue to ponder together.
