Monday 6 January 2014

Inception

The review process encompasses three phases: Scoping, Inception and the actual Review, i.e. the analysis of evaluations. We have completed the scoping phase, and we are deep into the inception phase now.

In parallel with our search for evaluations (Scoping Phase), we have reviewed the relevant literature and initiated a virtual discussion with the Review Reference Group (RRG) on the dimensions of evaluation practice. We are interested in the characteristics of evaluations and in the positive or negative effects they produce, and the literature has given us a first understanding of the characteristics and effects we need to look for. (See our Scoping Report for the full literature list.)

We have identified a wide range of elements that are considered to influence the effects of evaluations; in QCA terminology, these are likely conditions for positive evaluation effects. We have provisionally clustered these conditions into five dimensions: 
  1. Conducive circumstances, which are present when the intervention is evaluable and the political environment (among and beyond evaluation stakeholders) favourable.
  2. Powerful mandate, something evaluators have if resources are appropriate, the evaluation is timely and the evaluation team commands high esteem.
  3. Convincing methodology that leads to compelling evidence and is well documented, participatory and ethically sound (‘do no harm’).
  4. Effective communication, which rests on presentation and dissemination of findings, conclusions, recommendations and lessons learned.
  5. High context sensitivity, in particular regarding gender, cultural and professional issues.
This is tentative and fairly abstract – our inception report will come with more precise definitions and calibrations to make fuller sense of these concepts. There is no hierarchy in these conditions. For the time being, the purpose of this initial inventory is to find out what could possibly influence evaluation effects. The provisional model we have built is a ‘maximum model’ in that it attempts to integrate a wide range of possible conditions.
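
To give a concrete (if entirely hypothetical) flavour of what ‘calibration’ means here, the sketch below shows how coder ratings on the five dimensions could be translated into fuzzy-set membership scores using Ragin-style direct calibration. The condition names, raw scores and anchor values are invented for illustration; the actual definitions and calibrations will be specified in our inception report.

```python
# Illustrative sketch only: hypothetical condition names, ratings and anchors,
# not the review's actual calibration scheme.
import math

def calibrate(raw, full_out, crossover, full_in):
    """Direct calibration: map a raw score onto fuzzy-set membership (0-1)
    using three qualitative anchors (fully out, crossover, fully in)."""
    if raw >= crossover:
        log_odds = 3.0 * (raw - crossover) / (full_in - crossover)
    else:
        log_odds = 3.0 * (raw - crossover) / (crossover - full_out)
    return round(math.exp(log_odds) / (1 + math.exp(log_odds)), 2)

# Hypothetical 0-10 coder ratings for one evaluation
raw_scores = {
    "conducive_circumstances": 7,
    "powerful_mandate": 4,
    "convincing_methodology": 9,
    "effective_communication": 6,
    "context_sensitivity": 3,
}

# Same illustrative anchors for every condition: 2 = fully out,
# 5 = crossover, 8 = fully in the set.
memberships = {name: calibrate(score, 2, 5, 8) for name, score in raw_scores.items()}
print(memberships)
```

A score near 1 means the evaluation is clearly a member of the set (e.g. ‘has a convincing methodology’), a score near 0 means it is clearly out, and 0.5 marks the point of maximum ambiguity.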

We have also looked more closely at the effects of evaluations, clustering them into three groups:
  • Effects on development practice – i.e. changes in the further implementation of the intervention evaluated, or in the implementation of subsequent interventions.
  • Effects on accountability and advocacy.
  • Effects on the wider knowledge base – in terms of learning beyond the actual intervention, for example the contribution an evaluation makes to the global knowledge base on “what works” in efforts to end violence against women.
The RRG examined the tentative model in October and provided rich comments. The dialogue with the RRG and our DFID counterparts has helped us to clarify the terminology used and to appreciate the many facets of these dimensions.

Following on from that, we have developed detailed reporting sheets for the coders. The coders have started their first coding round, examining all 74 reports we identified in our search (see earlier post). At this point, their job is to map the data on conditions that they find in the reports.
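
The reporting sheets themselves are not reproduced here, so the structure below is only a guess at what one coded record might look like: one record per report, with a field per condition noting whether the report provides evidence on it. All field names and the example values are hypothetical.

```python
# Hypothetical structure for one coder's record on a single report;
# the actual reporting sheets used by the coders may differ.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ConditionEntry:
    evidence_found: bool            # does the report contain data on this condition?
    rating: Optional[int] = None    # provisional 0-10 rating, if the evidence allows one
    notes: str = ""                 # page references and verbatim evidence

@dataclass
class ReportCoding:
    report_id: str
    coder: str
    conditions: Dict[str, ConditionEntry] = field(default_factory=dict)

example = ReportCoding(
    report_id="report_023",
    coder="coder_A",
    conditions={
        "convincing_methodology": ConditionEntry(True, 8, "mixed methods, pp. 12-15"),
        "effective_communication": ConditionEntry(False, notes="dissemination not described"),
    },
)
print(example.report_id, len(example.conditions), "conditions coded")
```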

As to the effects the evaluations have generated, we cannot rely on the reports for data; therefore, we are preparing a survey. For every evaluation in our set, we plan to question at least two of the following three types of stakeholders: (1) the evaluator, (2) a person who commissioned the evaluation, and (3) a representative of the organisation that implemented the intervention evaluated, who can report on the effects of the evaluation. We have already interviewed two to three representatives of each category to further enrich our picture of the effects evaluations can generate. At the moment, we are building the web-based survey, which will be sent out in early January.

By the end of January, we expect to have:
  • An accurate picture of the data available from each of the 74 evaluation reports.
  • Rich data on many of the conditions in our model, from 74 evaluation reports.
  • Information from our survey respondents on evaluation characteristics on which the reports do not provide sufficient data.
  • Data on the effects the evaluations have produced.
Qualitative comparative analysis (QCA) is at the heart of our review methodology. If we obtain meaningful data on conditions and effects, we can go ahead with QCA. That is why we have put in extra shifts to make sure we can contact a large number of evaluation stakeholders – a task that has proven more difficult than expected! (See post below, “Review + detective work”.)
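
For readers less familiar with QCA, here is a minimal sketch, with made-up cases rather than our actual data, of the step that follows once conditions and effects are coded: each evaluation becomes a row of condition values plus an outcome, identical configurations are grouped into a truth table, and the consistency of each configuration with the positive outcome is computed.

```python
# Illustrative only: toy crisp-set QCA truth table with invented cases,
# not data from the 74 evaluation reports.
from collections import defaultdict

# Each case: (conducive, mandate, methodology, communication, sensitivity) -> effect observed?
cases = {
    "eval_01": ((1, 1, 1, 1, 0), 1),
    "eval_02": ((1, 1, 1, 1, 0), 1),
    "eval_03": ((0, 1, 1, 0, 1), 0),
    "eval_04": ((1, 0, 1, 1, 1), 1),
    "eval_05": ((0, 1, 1, 0, 1), 1),
}

# Group cases by configuration and compute outcome consistency:
# the share of cases with that configuration showing the positive effect.
rows = defaultdict(list)
for name, (config, outcome) in cases.items():
    rows[config].append(outcome)

for config, outcomes in sorted(rows.items(), reverse=True):
    consistency = sum(outcomes) / len(outcomes)
    print(config, f"n={len(outcomes)}", f"consistency={consistency:.2f}")
```

Configurations with high consistency and enough cases then feed into the Boolean minimisation that yields the ‘recipes’ of conditions associated with positive evaluation effects.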
