Evaluating our financial health work – not just another box ticking exercise

18 Nov 2016 By Matt Flatman, Financial Health Exchange

The Financial Capability Strategy highlights a lack of impact evidence about what actually improves people’s financial capability, and challenges us all to evaluate our programmes rigorously to develop our collective understanding of what works. During Financial Capability Week we’ve heard a lot about the need for, and the benefits of, evaluation, but not everyone is enthusiastic. In fact, some people and organisations still regard evaluation as a burdensome obligation, seeing it as a disruptive and expensive box-ticking exercise to satisfy funders. Evaluation is central to my role as MAP Tool Account Manager, so I’m bound to say that these criticisms are unwarranted. Many of the issues that organisations experience are avoidable, so I want to share some observations that I hope will make evaluation less frustrating and more useful for you.

A key accusation is that evaluation is technical, complicated and intimidating. But it doesn’t need to be. Evaluation is essentially just asking the right questions about your programme and using the most appropriate methods to find reliable answers. Yet often people feel that, if they don’t know the terminology, they aren’t qualified to ‘do evaluation’. This often means that evaluation is seen as the responsibility of just one person or specialist team, and irrelevant to everyone else.

But those questions and answers should matter to everyone. Front-line staff need to know whether they are meeting people’s needs, but they also want a data collection approach that doesn’t disrupt the relationship with service users. Managers need to know how their programmes are performing. Directors need insightful information with which they can make strategic decisions. All staff benefit from having a basic knowledge of evaluation: why and how it is done, what they’re personally responsible for, and what they can get out of the process to do their job better and make even more of a difference to service users.

Organisations are often so focused on service design and delivery that evaluation becomes an afterthought, clumsily shoehorned in once everything else has been planned. This often results in inefficient approaches that are unpopular with staff. Ideally, evaluation should be seamlessly integrated so that it feels like a natural part of the process for staff and beneficiary, rather than a disruptive add-on. That means it’s worth planning your approach early and trying to identify natural opportunities for data collection.

Some people believe that an evaluation’s not worth doing unless it’s some sort of independently validated, triangulated, randomised double-blind trial. This methodology may sound impressive, but it won’t be appropriate for every programme, and it will certainly be very expensive and time-consuming. Evaluation should always be in proportion to the project. You can get a lot out of a simple approach that is well designed and properly integrated.

Another reason to avoid going straight for a big, complex and expensive evaluation is that you might not get things exactly right at the first attempt. It’s helpful to treat your approach to evaluation itself as a sort of evaluation cycle. You should plan, trial and review, with a view to changing things next time round if necessary. If you’re new to evaluation, start relatively small and simple, perhaps just looking at a couple of outcomes or just using one methodology. Once you’re confident that you have these things right, you can think about increasing scale and complexity.

To make the most of your evaluation, make sure that you only collect data which is useful – and which you will actually put to use. Your beneficiaries won’t appreciate being asked irrelevant questions, and your staff won’t appreciate collecting redundant data. Make sure you know exactly why you are collecting every bit of data and what you will do with it. Don’t collect data just in case it might one day turn out to be slightly useful.

Finally, I recommend that organisations make the most of pre-existing work and resources. If the link between a certain activity and a certain outcome has already been proven, don’t feel the need to prove it again. Cite the existing evidence, monitor things to see if your programme achieves the same sort of results, but concentrate your energies on less proven relationships. Nor is it necessary to build an evaluation completely from scratch. There are many great resources out there. For people working in the financial health sector, MAS’s evaluation toolkit provides a guide through the whole process, from developing a Theory of Change and identifying outcomes, to measuring them and learning from the results.

I hope these observations have given you some practical ideas about designing and implementing an evaluation, and reassured you that it needn’t be overly technical or left to evaluation experts. So, as we come to the end of Financial Capability Week and move into the second year of the Financial Capability Strategy, what more can you do to help your organisation and the wider sector learn about what works?

Find out how the MAP Tool, our financial well-being impact measurement tool, can simplify your organisation’s evaluation. Sign up for a webinar here.