Evaluating Participatory Grantmaking

Hannah Paterson
Nov 15, 2019

Before I left to explore participatory grantmaking in the USA and South Africa, I asked those in the UK and further afield what information they would find helpful to aid their work in this space. One of the most common responses was ‘how are others measuring and evaluating participatory grantmaking?’.

Key Questions:

I think there are three key questions that need to be asked and answered before you can plan your evaluation, and ideally this would be built into the design and set-up of your participatory approach.

1. What is the driver for you using participatory grantmaking?

Are you trying to develop leadership? Devolve power? Increase funding to a community or geographical area? Improve transparency and accountability? A combination of some/all of these?

2. Why are you evaluating this work?

What’s the purpose of the evaluation — are you trying to understand the impact of the grants? Help grant holders improve their practice by learning from others? Are you trying to demonstrate to a board that participatory grantmaking (PGM) is effective? Are you trying to measure the difference between grants made in this way and those made through standard practice? Or something else?

3. What part of the process do you want to evaluate?

There are many different aspects of participatory grantmaking that you could measure: the impact of the grants, the relationship between funder and communities, and the skills of those involved in the process, to name just a few.

So the question of how to evaluate PGM can’t be solved with a simple framework applied across the board. There are so many different models, approaches and variables that these questions need to be understood in order to develop and design an approach to learning that best suits your needs.

Power Dynamics

I think it’s also important to explore and understand the power dynamics at play in evaluation, and to ask whose needs you are serving.

Are evaluation and monitoring taking place purely to answer questions that a funder is asking? Does this help or hinder communities? How can we approach evaluation in a way that doesn’t just add to the workload of a grant holder without providing them with added resources to do it? Can we re-imagine the ways we do this so that it is as helpful for grant holders and communities as it is for foundation staff? There is some really interesting work in this space already: https://www.equitableeval.org/.

Often when we approach evaluation, it is from the funder’s point of view. We have decided what we want to understand, and we dictate to grant holders what information they must collect and how they should share it, regardless of whether this is the most interesting or impactful learning. We will often want the number of ‘service users’ through the door rather than the arguably more important details of how lives have been changed. This is often because we assume one is more valid than the other, and because measuring passion, trust, self-confidence, growth, love etc. is hard.

By understanding a variety of approaches to evaluation and thinking more creatively about how we do and share learning, we can make conversations in this space more exciting and less onerous. We can collectively understand what is important and helpful to both funder and organisation. We can start discussing and answering questions honestly together that encourage better relationships, transparency and flexibility between grant holder and funder, such as:

  • What information do you need to help you leverage more funding, and how can we support you to collate that?
  • What’s the biggest achievement you have made this week, this year, or on this project?
  • What has this funding allowed you to change?
  • What have you learnt from the work you have done with this funding?
  • What went wrong and what did you do about it?
  • How would you do it differently next time?
  • What are you most proud of?
  • How do you want to tell us about what you have learnt?

It’s also important to understand the implications of reporting and evaluation. Grant holders will often spend time away from changing lives in order to count and report on attendance and the like. They will often have to report slightly different measurements to each of the funders supporting their work. It can also mean that grant holders’ work is swayed towards what a funder is asking for rather than what the community needs, resulting in smaller impact, e.g. more bodies through the door rather than quality of intervention.

If, as a funder, we do decide that we require this type of information (once we have deliberately asked and answered the question of why we need it), we need to understand the time and resources required for an organisation to provide it, building them into grant budgets or providing top-ups to support the work.

What’s happening at the moment?

What has become clear through my travels is that there is a complete spectrum of approaches when it comes to evaluation in participatory grantmaking. Some funders haven’t evaluated any of their work, either the process itself or the grants made through it, instead trusting that communities are making the right decisions and will hold themselves and the funder to account, flagging when money is misspent or there is an issue with a grant. Others, such as the Disability Rights Fund, have developed extensive Monitoring, Evaluation and Learning frameworks using best practice from the field of advocacy to measure how change has been hard won. This in itself is a great way to support grant holders to recognise their achievements and take time out to look back and celebrate, helping reduce the burnout that comes from being constantly overwhelmed by the vast challenges ahead.

Recommendations to implement:

Here are a few takeaway actions that funders can take when looking at evaluation:

  • Continuously ask yourself: Why do I need this information? What will I do with it? How much time and resource will getting it take, and is it worth it? How will it help communities?
  • Include grant holders and communities in the development of evaluation strategies
  • Ask grant holders what information their other funders require of them and collect that, rather than adding to their workload
  • Use evaluation to help grant holders reflect on and celebrate their achievements — congratulate and be excited for and with them
  • Be flexible with funding; there’s no point collating learning if you aren’t actually going to learn from it. Have honest conversations about what went wrong and what went right, be explicit that this won’t impact funding (within reason), and be willing to allow changes to the outcomes/project
  • Ask grant holders what information they need to help them do their work better and agree this as what will be reported on
  • Include funding for evaluation and reporting — ensure this is budgeted for, or provide top-up grant support
  • If you are evaluating your own approaches, be aware that you are asking people to take time away from their work to do this for you, and act accordingly — don’t ask for huge amounts of work or time unless you are compensating them for it.
  • Provide capacity support to help grant holders think through what they need to know and how they can do it
  • Provide funding for evaluation staff and skills training
  • Accept a variety of methods of reporting — videos, blogs, case studies, infographics, artwork, reports, trustee papers, spreadsheets etc.

Excitingly, the Ford Foundation has just funded a range of organisations to generate evidence on the benefits and challenges of participatory grantmaking, and will announce those it is supporting in the coming weeks. This will provide a really interesting base of knowledge for other funders to use to improve their own practices, so keep an eye out for their announcement and findings.

I’ll be returning to this topic in future blogs as well as my final Churchill report.


Hannah Paterson

Churchill Fellow exploring how communities can be more involved in decisions about where and how money for their communities is spent