Category: realist evaluation

Notes from the ebook: The Science of Evaluation: A Realist Manifesto

This is the external link to the e-book.

Chapter 1

As Bhaskar puts it, ‘Theory without experiment is empty. Experiment without theory is blind’ (1978, 191).

Society is made by, but never under the control of, human intentions.

Evaluation has traditionally been asked to pronounce on whether a programme makes a difference ‘beyond that which would have happened anyway’. We always need to keep in mind that what would have happened anyway is change – unavoidable, unplanned, self-generated, morphogenetic change.

Realist evaluation is a form of theory-driven evaluation. But its theories are not the highfalutin’ theories of sociology, psychology and political science. Indeed, the term ‘realistic’ evaluation is sometimes substituted out of the desire to convey the idea that the fate of a programme lies in the everyday reasoning of its stakeholders. Good evaluations gain power for the simple reason that they capture the manner in which an awful lot of participants think. One might say that the basic currency is common-sense theory.

However, this should only be the starting point. The full explanatory sequence needs to be rooted in but not identical to everyday reasoning. In trying to describe the precise elbow room between social science and common sense one can do no better than to follow Elster’s thinking. He has much else to say on the nuts and bolts of social explanation, but here we concentrate on that vital distinction, as mooted in the following:

Much of science, including social science, tries to explain things we all know, but science can make a contribution by establishing that some of the things we all think we know simply are not so. In that case, social science may also explain why we think we know things that are not so, adding as it were a piece of knowledge to replace the one that has been taken away. (2007: 16)

Evidence-based policy has become associated with systematic review methods for the soundest of reasons. Social research is supremely difficult and prone to all kinds of error, mishap and bias. One consequence of this in the field of evaluation is the increasingly strident call for hierarchies of evidence, protocolised procedures, professional standards, quality appraisal systems and so forth. What this quest for technical purity forgets is that all scientific data is hedged with uncertainty, a point which is at the root of Popperian philosophy of science.

What is good enough for natural science is good enough for evidence-based policy, which comes with a frightening array of unanticipated swans – white, black and all shades of grey. Here too, ‘evidence’ does not come in finite chunks offering certainty and security to policy decisions. Programmes and interventions spring into life as ideas about how to change the world for the better. These ideas are complex and consist of whole chains of main and subsidiary propositions. The task of evaluation research is to articulate and refine those theories. The task of systematic review is to refine those refinements. But the process is continuous – for in a ‘self-transforming’ world there is always an emerging angle, a downturn in programme fortunes, a fresh policy challenge. Evidence-based policy will only mature when it is understood that it is a continuous, accumulative process in which the data pursues, but never quite draws level with, unfolding policy problems. Enlightened policies, like bridges over swampy waters, only hold ‘for the time being’.

Chapter 2

It has always been stressed that realism is a general research strategy rather than a strict technical procedure (Pawson and Tilley, 1997b: Chapter 9). It has always been stressed that innovation in realist research design will be required to tackle a widening array of policies and programmes (Pawson, 2006a: 93–99). It has always been stressed that this version of realism is Popperian and Campbellian in its philosophy of science and thus relishes the use of the brave conjecture and the application of judgement (Pawson et al., 2011a).


Notes from academic paper: Review: On the problems of mixing RCTs with qualitative research: the case of the MRC framework and the evaluation of complex healthcare interventions.

The ontology of critical realism, with its identification of numerous causal mechanisms interacting in different ways in different contexts to produce different outcomes, has influenced the development of realistic evaluation, as advanced by Pawson and Tilley (1997). The aim of realistic evaluation is to explain the processes involved between the introduction of an intervention and the outcomes that are produced. In other words, it assumes that the characteristics of the intervention itself are only part of the story, and that the social processes involved in its implementation have to be understood as well if we are going to have an adequate understanding of why observed outcomes come about. In contrast to the assumptions of constant conjunction, realistic evaluation posits the alternative formula of:

Mechanism + context = outcome

In any given context, there will in all likelihood be a number of causal mechanisms in operation, their relationship differing from context to context. The aim of realistic evaluation is to discover if, how and why interventions have the potential to cause beneficial change. To do this, it is necessary to penetrate beneath the surface of observable inputs and outputs in order to uncover how mechanisms which cause problems are removed or countered by alternative mechanisms introduced in the intervention. In turn, this requires an understanding of the contexts within which problem mechanisms operate, and in which intervention mechanisms can be successfully fired. In other words, realistic evaluators take the middle line between positivism and relativism, in that positivism’s search for a single cause is seen as too simplistic, while relativism’s abandonment of any sort of generalisable explanation is seen as needlessly pessimistic. In contrast to these two poles, realists argue that it is possible to identify tendencies in outcomes that are the result of combinations of causal mechanisms, and to make reasonable predictions as to the sorts of contexts that will be most auspicious for the success of health-promoting mechanisms. The confidence of prediction can be increased through comparison of different cases (i.e. different contexts) in that concentration on context–mechanism–outcome configurations allows for the development of transferable and cumulative lessons about the nature of these configurations.
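As a rough illustration of the mechanism + context = outcome formula, here is a minimal Python sketch of how context–mechanism–outcome (CMO) configurations might be recorded and compared across cases. The class name and the two example configurations are my own illustrative assumptions, not taken from the paper:

```python
# Minimal sketch: recording context-mechanism-outcome (CMO) configurations
# so that cases (i.e. contexts) can be compared, which is how realist
# evaluation builds transferable and cumulative lessons.
from dataclasses import dataclass

@dataclass(frozen=True)
class CMOConfiguration:
    context: str    # conditions into which the intervention is introduced
    mechanism: str  # causal process the intervention is thought to trigger
    outcome: str    # result observed in that context

# Two hypothetical cases of the same intervention in different contexts.
cases = [
    CMOConfiguration(
        context="well-resourced clinic with engaged staff",
        mechanism="shared records prompt timely specialist review",
        outcome="fewer missed follow-ups",
    ),
    CMOConfiguration(
        context="under-resourced clinic with high caseload",
        mechanism="shared records go unread; the mechanism is not fired",
        outcome="no change in follow-up rates",
    ),
]

# Same intervention, different contexts, different mechanisms fired,
# different outcomes - the comparison realist evaluation relies on.
for cmo in cases:
    print(f"Context [{cmo.context}] + mechanism [{cmo.mechanism}] "
          f"-> outcome [{cmo.outcome}]")
```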


Notes from the academic paper: Exposing the key functions of a complex intervention for shared care in mental health: case study of a process evaluation.

Evaluating complex interventions involves determining both whether and how they work; such evaluations are theoretically and practically complex. First, the components of care themselves may interact positively or negatively with each other and also may be required in different doses or formats depending on context. The kinds of interventions required to bring about change are also likely to be multi-faceted; educational, audit-based and facilitation-based interventions each have evidence to support them. The choice and application of appropriate outcome measures at different levels of change is challenging. Lastly, implementation often occurs within the shifting sands of health services reform and development.

Realistic Evaluation is a relatively new framework for understanding how and why interventions work in the real world and has been recommended as a means to understand the dissemination of service innovation. Analysis focuses on uncovering key mechanisms and on the interactions between mechanism and context in order to develop ‘middle range theories’ about how they lead to outcomes. Accumulation of evidence, which may be qualitative or quantitative, and may also be derived from external sources, leads to a refining of these theories. Realistic Evaluation, therefore, promises to be a useful framework for understanding the key functions of an intervention by examining its relationship with the context.

Shared care emphasizes the need for co-ordination between primary care and specialists to reduce duplication and address unmet need; chronic disease management focuses on service redesign in primary care incorporating timely review, expert input, patient involvement and information systems.



Notes from academic paper: Realist RCTs of complex interventions – An oxymoron

Our own view is that the realist approach is essentially about hypothesis testing. However, both the nature of the hypothesis and the way of testing the hypothesis are different from experimental designs, because of the different philosophical assumptions about the nature of reality (ontology) and about how we might know that reality (epistemology). Realist approaches, whether in evaluation or synthesis, begin by developing “a realist hypothesis” about the question at hand. That hypothesis proposes that particular mechanisms will operate in particular contexts to generate particular outcomes. Empirical work then tests, refines and further specifies those hypotheses, resulting in context–mechanism–outcome configurations – more detailed understandings of the contexts in which particular mechanisms generate particular outcomes.



Notes from the academic paper: Realist randomised controlled trials: A new approach to evaluating complex public health interventions

Realism in evaluation represents a paradigm through which the world is seen as an open system of dynamic structures, mechanisms and contexts that intricately influence the change phenomena that evaluations aim to capture. Realistic evaluators argue that RCTs fail to test hypotheses rooted in theory and embrace a crude notion of causality based on comparison groups and statistical association rather than understanding mechanisms. They argue that evaluators must develop a priori theories about how, for whom and under what conditions interventions will work, and then use observational data to examine how context and intervention mechanisms interact to generate outcomes. While we dispute the realists’ rejection of experimental designs in the social sciences, we agree with their arguments concerning the need for evaluation: to examine how, why and for whom interventions work; to give more attention to context; and to focus on the elaboration and validation of program theory.

RCTs themselves could contribute to a realist approach to evaluation. We examine the extent to which some RCTs are already embracing many of these issues and, bringing together some of these existing innovations alongside our own ideas, sketch out what ‘realist RCTs’ might look like. We argue that it is possible to benefit from the insights provided by realist evaluation without relinquishing the RCT as the best means of examining intervention causality.

RCTs aim to generate minimally biased estimates of intervention effects by ensuring that intervention and control groups are not systematically different from each other in terms of measured and/or unmeasured characteristics. Random allocation is widely regarded as ethical if there is uncertainty about whether intervention confers significant benefit.
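To make the logic of random allocation concrete, here is a minimal Python sketch (my own illustration, not from the paper; the participant count and the 1:1 split are assumptions):

```python
# Minimal sketch of simple random allocation, the step that lets RCTs
# claim minimally biased effect estimates: with enough participants,
# randomisation tends to balance measured and unmeasured characteristics
# across arms. Participant IDs and the 1:1 split are illustrative.
import random

participant_ids = list(range(1, 101))  # 100 hypothetical participants
random.shuffle(participant_ids)        # random order = random allocation

# Bare 1:1 allocation; real trials typically use concealed and often
# block- or stratified-randomisation schemes rather than a plain shuffle.
half = len(participant_ids) // 2
intervention_group = participant_ids[:half]
control_group = participant_ids[half:]

print(f"Intervention arm: {len(intervention_group)} participants")
print(f"Control arm: {len(control_group)} participants")
```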


The most prominent exponents of realist evaluation are criminologists Ray Pawson and Nick Tilley, who criticize the RCT approach for its positivist assumptions, and suggest alternatives based on a realist perspective.

This paper is very judgmental, and I believe referencing it should be avoided.

