The results agenda 2.0
ODI and IDS (the Institute of Development Studies, Sussex) co-hosted a workshop on the results agenda – a topical subject, as those who read my note on Andrew Mitchell’s recent speech will know.
There is universal agreement that work on results is necessary, both to make the case for aid and to improve performance. I’ve written before about, and strongly endorse, the political imperative of providing evidence to illuminate the sometimes problematic politics of aid.
There is also strong agreement that the work must move beyond what Andrew Mitchell described as ‘bean-counting’, and take account of the complex social and political processes which underpin development, as well as the learning and experimentation which are associated with successful interventions. Andrew Mitchell made that case eloquently in London. Similar arguments were made at the ODI-IDS meeting. There was enthusiastic engagement with the idea that better information was needed on results – and also lots of talk of social process, beneficiary perception, learning-by-doing, unexpected consequences, and what was described as the ‘excess certitude’ associated with technocratic approaches to results. There was even polite reception for my oft-made point that a simplistic approach to results risks misunderstanding the macroeconomics of aid.
Thinking about all this, it seemed to me that we could characterise two different approaches to results, which I described as Results 1.0 and 2.0, or more graphically as the Fordist and post-Fordist approaches. I have sketched out the table below as a heuristic and obviously simplified device to illustrate the difference. The characterisation is reminiscent of the old debates about blueprint versus process planning: results 1.0 reflects blueprint approaches; results 2.0 is the process column. It is important to emphasise the extent of consensus about the need not to be trapped in the Fordist or blueprint column. Andrew Mitchell’s description of a ‘venture capital’ approach to aid perfectly fits the new paradigm.
The different approaches to the results agenda

| | Results 1.0 | Results 2.0 |
| --- | --- | --- |
| Industrial analogy | Fordist | Post-Fordist |
| Planning analogy | Blueprint | Process |
| Economic perspective | Aid buys local activity | Aid is a macro-economic injection |
| Development trajectory | Line of sight from aid to social change | Aid supports complex pathways to improved well-being |
| Level of detail required for planning | ‘Excess certitude’ | ‘Good enough’ |
| Leadership | Donor | Recipient country |
| Aid modality | Project | Budget support |
| Time horizon | Shorter | Longer |
| Aid actors | Individual donors | Donors working together |
| ‘Voices’ | Data | Beneficiaries |
It is not straightforward to make the transition operational, however. It seems to me that it is useful to distinguish different stages of the ‘project cycle’ and also between information needed to manage development programmes and that needed to communicate results. Thus, there is a 2 x 3 matrix:
| | Managing programmes | Communicating results |
| --- | --- | --- |
| Ex-ante | | |
| Mid-term | | |
| Ex-post | | |
Different information is needed at each stage and for each purpose, with of course connections needed between all the boxes and consistency between the columns.
I suggested in the meeting that communicating results was relatively straightforward in a results 2.0 framework, especially given the availability of careful and textured case studies, like those recently produced by ODI. Of course, it’s true that more cases and different voices are needed.
I would also suggest that the ex-post row is easier to complete than the others, thanks to many years’ experience of evaluation. That body of work of course informs ex-ante decisions about how to allocate resources, and maybe that is the priority for future work. We should be careful, though, in searching for ex-ante algorithms, not to be sucked back into Results 1.0. Demonstrating results in a rigorous way in a Results 2.0 approach may or may not mean quantification. Randomised controlled trials are invaluable, as the 3ie initiative has shown; and there need to be more systematic reviews of evidence, of the kind undertaken by the EPPI Centre for systematic reviews at the Institute of Education. DFID (the Department for International Development) has incorporated many of these ideas in its research for development programme.
But sometimes, it will be hard to quantify. It is also worth remembering that budget allocation has always been more of an art than a science – as Adrian Fozzard discovered when he reviewed the literature on the basic budgeting problem back in 2001.
All that said, there are some significant problems in aid allocation that a Results 2.0 approach needs to help resolve. They apply particularly to the softer or more indirect uses of aid. Thus, the impact of vaccination may be relatively straightforward to measure, especially because there will have been randomised controlled trials of effectiveness. But other areas are more difficult:
- Budget Support, or Sector Budget Support – often the aid professionals’ preferred mode of delivery, but highly controversial just now in some donor capitals. See, for example, Toby Vogel’s critique of ‘disappearing aid’ in European Voice.
- Institutional and governance work in fragile states, remembering that these countries are the priority for donor support.
- Humanitarian relief, especially in the early days after a catastrophe, when estimates of need and costs of response are both difficult to make – see DFID’s humanitarian review on this topic.
- Policy work, including research.
- Technical cooperation. (I remember a donor telling me once that the best investment they ever made was to support an adviser to the Central Bank in an African country, with dramatic effects on the budget deficit and inflation – how would you evaluate that, I wonder?).
There are two key considerations that must prevail as we apply a Results 2.0 approach to these areas of aid. First, analysis must be country led, taking account of all resources. Second, and following from the first, multi-donor approaches are essential. Thus, the question is not (or not only) ‘what development impacts has donor X bought – or can it buy – with Y resources?’, but rather ‘what development progress has country Z achieved – or what can it achieve – with all the resources available?’
That may seem to be a bit of a challenge for the aid system. But is it really?
P.S. Look out for blogs on this meeting by Lawrence Haddad, Ros Eyben, Ben Ramalingam and Owen Barder, among others. I’ll add links as they appear.
Comments
If it were truly country led, the multi-donor question would not matter any more. The fact that you still find a multi-donor approach important means that the country leadership is not really serious.
I think we can get around the micro/vaccine/RCT vs macro/governance/growth problem by thinking probabilistically.
Aid for vaccines has a very high likelihood of having a known impact.
Aid for governance/growth has a very uncertain impact, but the potential payoff is incredibly large.
So in a balanced portfolio, it makes sense to include some safe bets and some speculative high risk, high return investments.
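To make the expected-value logic of this comment concrete, here is a minimal sketch in Python. The probabilities and payoff figures are entirely hypothetical, chosen only to show how a safe bet and a speculative bet can carry comparable expected value while differing sharply in certainty.

```python
# Illustrative only: hypothetical probabilities and payoffs for two kinds of aid investment.
investments = {
    # (probability of success, payoff if successful, payoff otherwise)
    "vaccination programme": (0.90, 10, 0),   # high likelihood, known impact
    "governance reform":     (0.10, 120, 0),  # very uncertain, very large potential payoff
}

for name, (p_success, win, lose) in investments.items():
    # Expected value = probability-weighted average of the two outcomes.
    expected_value = p_success * win + (1 - p_success) * lose
    print(f"{name}: expected value = {expected_value:.1f} "
          f"(success probability {p_success:.0%})")
```

On these illustrative numbers the two bets have similar expected values, which is the point of the balanced portfolio: the safe bet anchors the return, while the speculative bet carries the chance of a transformative gain.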