In its simplest form, robust evaluation is evaluation that has a clear and logical set of questions, an explicit evaluation framework that supports value judgements, and a transparent, defensible analysis that is shared via a reporting process or report. So, what do robust questions, evaluation frameworks and reporting look like?
This workshop will explain what good looks like, and how these three components are each used to guide robust evaluation. It will also describe various common pitfalls and their solutions with some practical tips and tricks to help recognise and avoid them.
This workshop will be useful for evaluators, and those managing evaluations, who have some prior experience. Good questions, frameworks and reporting look like:
- A concise set of evaluation questions that includes priority questions and some evaluative (not only descriptive) questions. Each is answered in the evaluation reporting.
- An evaluation framework that describes the evaluand and its intent, provides agreed definitions of ‘good’ or what is valued, and (where applicable) references standards.
- A reporting process or report that presents explicit, defensible conclusions that are acceptable and believable, and therefore useful for decisions about proving, improving, expanding, or ceasing the evaluand’s activities.
Good use of an evaluation framework includes using it as a lens to guide the evaluation design, to refine the evaluation throughout, and to support analysis, sense-making and value judgements. Robust reporting is an explicit and defensible presentation of findings and judgements. Each finding is based on cohesively presented evidence. Judgements build on these findings and make explicit use of the evaluation framework. A number of common pitfalls will be described and solutions shared. Three examples are:
a) Asking too many or too vague evaluation questions.
b) Not defining ‘good’ or what is valued, even in draft form.
c) Making indefensible conclusions or judgements.