Donna Buxton

Research Manager, The Health Foundation


Relish isn't quite the word for how I felt watching 'Pete Tong presents Ibiza Classics' on a night out the other week – I was totally absorbed. What struck me was how he was also absorbed, even after 30 years. Pete (if I may be so bold) was clearly committed to his 'art'. I'm in his camp, but my commitment is to the art of evaluating.

One of my key reasons for wanting to join the Health Foundation (which I eventually did in 2015) was my firm belief that the organisation was very much in the evaluation space: developing and promoting a range of robust, credible and independent evaluations across its work. I’ve not been disappointed.

We are involved in a whole host of evaluations. Working with the University of Manchester we are evaluating the devolution of health and social care in Greater Manchester. Part of this work is undertaking mapping and modelling of future services, service users and the changes and impacts as a result of devolution. The evaluation offers exciting, real-time insights into a major change in the way health services could be delivered in the future. I feel privileged to be a part of it.

We sometimes evaluate initiatives which are complex, with tangible and intangible outcomes. For example, we have made a long-term commitment to evaluate Q, which is creating a connected community of people across the UK with health and care improvement expertise. And, most importantly, there is a real commitment from the team delivering Q to use the evaluation in a way that informs and shapes the next phases of the work, which is fantastic to see.

Many of our evaluations help to shape the future direction of work. We are, with UCLPartners, evaluating the first cohort of trailblazing entrepreneurs who are part of the NHS Innovation Accelerator Programme (NIA). Findings will be used to shape work with the second cohort.

And we aim to inform as well as be directly involved. Our evaluation guide is one of our most popular publications to date, with almost 8,000 downloads over the last year.

However, there is still work to be done.

Coming from a social research background, I always felt that evaluations were the ultimate critical friend. They made a difference and that feels good. I’ve absorbed well-acknowledged ‘bibles of evaluation’, such as HM Treasury’s The Magenta Book and the MRC Evaluation Framework, and joined the UK Evaluation Society (UKES). I’ve seen interventions sustained after their funding ended, thanks to persuasive evaluation evidence.

Then in 2013 I attended a presentation of a report by the National Audit Office, which reviewed 35 cross-government evaluations in four policy areas. The review considered the robustness of the evaluations and their usefulness to policy makers, and concluded that only 14 of them had provided sufficient evidence of policy impact. Spending on evaluations had gone down, and only four departments intended to evaluate all of their top five major projects. I was in shock – my bubble had burst and it certainly ruined my lunch. Despite all the effort and cost of producing evaluations, they were not, in reality, being used to prove and improve.

More recently, my workshop on ‘De-mystifying Evaluation’ at ISQua 2016 was warmly received and post-workshop conversations started out well, all banging the drum for evaluation. But I was being asked by a few, ‘Why bother?’ I’m going to say it here and now; there were a few evaluation dissenters, non-believers in the house. They didn’t get the art.

I am playing my part in trying to convert the evaluation non-believers. In 2011 I set up a community of practice for third sector research and evaluation managers – the Charity Evaluation Working Group (ChEW) – with the aim of fostering ‘generous leadership’ and offering mutual peer-to-peer support. Over 70 UK-wide charities are represented. After the NAO presentation in 2013 we decided to ramp up ChEW. In addition to supporting each other, we also set standards of evaluation practice across the sector and promoted their use.

My work at the Health Foundation is centred on producing the best evaluations that I can, but I’m certainly not being complacent that these evaluations are having maximum impact. I no longer assume that we’re all on the same evaluation page, but along with my colleagues I’m doing my bit to fly the flag for evaluation. I'm not downing tools and saying it’s all gone a bit ‘Pete Tong’ just yet.
