The theme of the 2017 Annual Evaluation Conference is "Exploring the current uses of evaluation". Evaluation is a common term in the English language that means different things to different people and is used in many different ways. We need to ensure that institutions and their staff who wish to "evaluate" their policies, programmes, projects and institutions know what that really means and what they will gain from the exercise. How, for example, does evaluation differ from an audit, from monitoring, or from a review? Is it the same as research? Can it be used to measure impact, or success? Is it conducted before, during or after taking action? Is it an external or internal exercise? Who is involved, and how are its conclusions communicated?
The emphasis placed on different uses of evaluation changes over time. Past emphases have included accountability, establishing impact, formative and developmental evaluation to improve implementation, and the generation of evidence to learn about what works.
The 2017 conference will explore these issues, and in particular how evaluation can become more useful to its commissioners, to the subjects of the evaluation, and to society more widely. To be effective, evaluation results and evaluative evidence need to be used, and used correctly. The results and evidence themselves therefore need to be presented clearly and logically, in a way their audience can understand. Nor should they come as a surprise: evaluators should communicate with both commissioners and participants throughout the process to establish what is required (i.e. not arrive armed with a set approach or suite of methods looking for an application), who it is required for, and why. Taking time at the outset to talk to each other will not only lead to better results but may also save time and money in the process.
What should be the purpose of the findings, recommendations or proposed actions listed in the reports produced by evaluators? Should evaluation reports contain such statements in the first place? And if so, what is the difference between findings and recommendations? How can readers distinguish good ones from not-so-good, or even downright bad or misleading, ones?
The conference will consider the design of outputs, the development of use strategies and ways of connecting with potential users' needs, so that participants will be better able to create and/or use evaluation resources for change, development and accountability.
The conference will comprise presentations from keynote speakers, panel discussions and interactive plenary sessions, together with parallel sessions at which participants from across the evaluation community will present. There will also be an opportunity for participants to display posters.
Programme highlights will include:
Keynote presentations from:
- Michael Anderson, Advisory Board Chair, Fiscal Governance Programme, Open Society Foundations
- Professor Ian Boyd, Chief Scientific Adviser, Department for Environment, Food and Rural Affairs
- Professor Lorraine Dearden, Professor of Economics and Social Studies, University College London
24 parallel sessions, comprising presentations from UK and international speakers. Topics will include:
- Using evaluation findings in real time to influence programme delivery and policy thinking: A case study of the evaluation of the UK Futures Programme
- Monitoring, evaluating and learning at multiple levels: 25 years of the Darwin Initiative
- Reviewing evaluation utility through a Value of Evaluation lens
- From management responses to responsive management: How to make a difference with evaluation
- Opportunities for improving the usefulness and feasibility of impact assessments of climate adaptation programmes: The case of the ECRP programme
- From ‘tracing’ to ‘tracking’: The changing face of international scholarship programme evaluation