This is the external link to the e-book.
As Bhaskar puts it, ‘Theory without experiment is empty. Experiment without theory is blind’ (1978, 191).
Society is made by, but never under the control of, human intentions.
Evaluation has traditionally been asked to pronounce on whether a programme makes a difference ‘beyond that which would have happened anyway’. We always need to keep in mind that what would have happened anyway is change – unavoidable, unplanned, self-generated, morphogenetic change.
Realist evaluation is a form of theory-driven evaluation. But its theories are not the highfalutin’ theories of sociology, psychology and political science. Indeed, the term ‘realistic’ evaluation is sometimes substituted out of the desire to convey the idea that the fate of a programme lies in the everyday reasoning of its stakeholders. Good evaluations gain power for the simple reason that they capture the manner in which an awful lot of participants think. One might say that the basic currency is common-sense theory.
However, this should only be the starting point. The full explanatory sequence needs to be rooted in but not identical to everyday reasoning. In trying to describe the precise elbow room between social science and common sense one can do no better than to follow Elster’s thinking. He has much else to say on the nuts and bolts of social explanation, but here we concentrate on that vital distinction, as mooted in the following:
Much of science, including social science, tries to explain things we all know, but science can make a contribution by establishing that some of the things we all think we know simply are not so. In that case, social science may also explain why we think we know things that are not so, adding as it were a piece of knowledge to replace the one that has been taken away. (2007: 16)
Evidence-based policy has become associated with systematic review methods for the soundest of reasons. Social research is supremely difficult and prone to all kinds of error, mishap and bias. One consequence of this in the field of evaluation is the increasingly strident call for hierarchies of evidence, protocolised procedures, professional standards, quality appraisal systems and so forth. What this quest for technical purity forgets is that all scientific data is hedged with uncertainty, a point which is at the root of Popperian philosophy of science.
What is good enough for natural science is good enough for evidence-based policy, which comes with a frightening array of unanticipated swans – white, black and all shades of grey. Here too, ‘evidence’ does not come in finite chunks offering certainty and security to policy decisions. Programmes and interventions spring into life as ideas about how to change the world for the better. These ideas are complex and consist of whole chains of main and subsidiary propositions. The task of evaluation research is to articulate and refine those theories. The task of systematic review is to refine those refinements. But the process is continuous – for in a ‘self-transforming’ world there is always an emerging angle, a downturn in programme fortunes, a fresh policy challenge. Evidence-based policy will only mature when it is understood that it is a continuous, accumulative process in which the data pursues, but never quite draws level with, unfolding policy problems. Enlightened policies, like bridges over swampy waters, only hold ‘for the time being’.
It has always been stressed that realism is a general research strategy rather than a strict technical procedure (Pawson and Tilley, 1997b: Chapter 9). It has always been stressed that innovation in realist research design will be required to tackle a widening array of policies and programmes (Pawson, 2006a: 93–99). It has always been stressed that this version of realism is Popperian and Campbellian in its philosophy of science and thus relishes the use of the brave conjecture and the application of judgement (Pawson et al., 2011a).
Our own view is that the realist approach is essentially about hypothesis testing. However, both the nature of the hypothesis and the way of testing it differ from experimental designs, because of different philosophical assumptions about the nature of reality (ontology) and about how we might know that reality (epistemology). Realist approaches, whether in evaluation or synthesis, begin by developing “a realist hypothesis” about the question at hand. That hypothesis proposes that particular mechanisms will operate in particular contexts to generate particular outcomes. Empirical work then tests, refines and further specifies those hypotheses, resulting in context-mechanism-outcome configurations – more detailed understandings of the contexts in which particular mechanisms generate particular outcomes.
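The context-mechanism-outcome logic above can be sketched as a minimal data structure. This is an illustrative aide-memoire only; the class name, fields and the example hypothesis about a patient portal are my own assumptions, not part of any realist toolkit described in the literature:

```python
from dataclasses import dataclass

@dataclass
class CMOConfiguration:
    """One context-mechanism-outcome hypothesis to be tested and refined."""
    context: str    # the circumstances in which the programme operates
    mechanism: str  # the reasoning/resources the programme is thought to trigger
    outcome: str    # the pattern of results expected when context and mechanism align

# A hypothetical CMO hypothesis about a patient portal (invented example):
hypothesis = CMOConfiguration(
    context="patients with a chronic condition and regular internet access",
    mechanism="access to test results prompts preparation for consultations",
    outcome="more informed shared decision-making at clinic visits",
)

def refine(cmo: CMOConfiguration, narrower_context: str) -> CMOConfiguration:
    """Empirical work narrows the context in which the mechanism is held to fire."""
    return CMOConfiguration(narrower_context, cmo.mechanism, cmo.outcome)

refined = refine(hypothesis, "newly diagnosed patients under 65")
print(refined.context)  # the refined, more specific context
```

The point of the sketch is simply that refinement changes the context specification while the hypothesised mechanism and outcome carry over.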
Realism in evaluation represents a paradigm through which the world is seen as an open system of dynamic structures, mechanisms and contexts that intricately influence the change phenomena that evaluations aim to capture. Realist evaluators argue that RCTs fail to test hypotheses rooted in theory and embrace a crude notion of causality based on comparison groups and statistical association rather than understanding mechanisms. They argue that evaluators must develop a priori theories about how, for whom and under what conditions interventions will work and then use observational data to examine how context and intervention mechanism interact to generate outcomes. While we dispute the realists’ rejection of experimental designs in the social sciences, we agree with their arguments concerning the need for evaluation: to examine how, why and for whom interventions work; to give more attention to context; and to focus on the elaboration and validation of program theory.
RCTs themselves could contribute to a realist approach to evaluation. We examine the extent to which some RCTs are already embracing many of these issues and, bringing together some of these existing innovations alongside our own ideas, sketch out what ‘realist RCTs’ might look like. We argue that it is possible to benefit from the insights provided by realist evaluation without relinquishing the RCT as the best means of examining intervention causality.
RCTs aim to generate minimally biased estimates of intervention effects by ensuring that intervention and control groups are not systematically different from each other in terms of measured and/or unmeasured characteristics. Random allocation is widely regarded as ethical if there is uncertainty about whether intervention confers significant benefit.
The most prominent exponents of realist evaluation are criminologists Ray Pawson and Nick Tilley, who criticize the RCT approach for its positivist assumptions, and suggest alternatives based on a realist perspective.
This paper is very judgmental and I believe that referencing this paper should be avoided.
This is an important paper.
Person-centered care recognizes the patient’s full autonomy as a person in society who happens to need health-related services, and moves away from a hierarchical relationship. A person-centered health system has recently been defined as one that “supports people to make informed decisions and to successfully manage their own health and care and to invite others to act on their behalf … Person-centered care sees patients as equal partners in planning, developing and assessing care”.
The advent of Patient Portals, whereby the patient may access his/her medical or health record, would seem to offer an important contribution to improving the patient-practitioner knowledge balance, and to make the patient fully aware of the facts related to his/her condition and its treatment. In turn, this would appear to empower the patient as a participant in the negotiation of their own care to ensure that it is integrated and fits their preferences wherever possible. However, we find that the evidence of this effect is worryingly sparse, due in particular to the lack of effective evaluation of patient portals.
I can link the above with the studies showing that EHRs didn’t have much effect on patients’ medication intake. See notes here.
The sharing of responsibility between diagnosed patients and providers leading to improved coordination and integration of care is not a new idea. For those patients with chronic or enduring conditions, there has been recognition of the expertise they build up about the condition which they have permanently, but which the clinician sees only occasionally, such that the concept of the “Expert Patient” was coined more than a decade ago, accompanied by a related policy program, which still endures in English NHS policy. There is a strong belief that many of today’s patients are increasingly in search of more information about their health and are keen to establish a closer engagement with medical providers about the way their care is undertaken and managed. Research evidence consistently shows that patients want to be kept involved and regularly informed about decisions related to their medical care.
Patient information needs should not be limited to obtaining general knowledge. There should be opportunity for patients to access their medical records and the content within them. Providing patients with access to personalized health information is believed to be a means of improving communication between patients and providers, contributing to more accurate information, and helping patients to prepare for upcoming clinical visits and cope with the potential anxiety. This notion has been supported by evidence which shows that the failure to fully inform patients can lead to poor treatment adherence.
Overall, patient portals may offer one or more of the following functionalities:
- Access to the patient’s EMR data
- Access to test results
- Printing or export of the portal data
- Medication refills
- Appointment scheduling
- Ability to obtain referrals
- Access to general medical information such as guidelines
- Secure messaging between the patient and the institution
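The functionality checklist above could be captured as a simple feature profile when comparing portals in a review. This is a hypothetical sketch of my own; the field names are invented labels for the eight functionalities listed, not taken from any portal vendor or standard:

```python
from dataclasses import dataclass, fields

@dataclass
class PortalFeatures:
    """Checklist of the eight portal functionalities (illustrative field names)."""
    emr_access: bool = False              # access to the patient's EMR data
    test_results: bool = False            # access to test results
    data_export: bool = False             # printing or export of the portal data
    medication_refills: bool = False
    appointment_scheduling: bool = False
    referrals: bool = False               # ability to obtain referrals
    general_information: bool = False     # e.g. access to guidelines
    secure_messaging: bool = False        # patient-institution messaging

def feature_count(portal: PortalFeatures) -> int:
    """Count how many of the listed functionalities a given portal offers."""
    return sum(getattr(portal, f.name) for f in fields(portal))

# A hypothetical basic portal offering three of the eight functionalities:
basic = PortalFeatures(emr_access=True, test_results=True, secure_messaging=True)
print(feature_count(basic))  # 3
```

Such a profile would let a review tabulate which functionality combinations appear in which evaluated portals, which matters given that portals "may offer one or more" of these functions.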
Advanced patient portals are still a very new and innovative ICT application, whose impact on health care delivery, outcomes, and patient engagement is neither very well known nor understood.
Personal health records
Another, rather different, means of involving patients is personal health records (PHRs). PHRs are “a set of computer-based tools that allow people to access and coordinate their lifelong health information and make appropriate parts of it available to those who need it”. PHRs allow the patient to document health-related information and to make it available to others, for example their health care providers or families. PHRs are typically owned and administered by the patients themselves. Microsoft’s HealthVault is one platform which can be used to maintain a PHR. It also offers connectivity to devices and apps to enhance tracking of health information. The Apple Health app is yet another platform that can be used as a central repository to aggregate information from other health apps.
Understanding features and functions of the technology, and their context of use, helps us understand what works and how.
The four mechanisms identified by Otte-Trojel seem to focus on the “what” level. The policy aims of governments, payers, and consumers of patient portals form one part of the “why” mechanism.
Consumer health portals should be developed and evaluated with an understanding of their contexts of use, and based on a theoretical framework such as, for example, the theory of planned behaviour, by which the functions of the portal can influence perceived behavioural control, as a means of optimising and coordinating care. Evaluation studies that are based on such a scientific theory, and that compare variants of theoretical mechanisms (e.g. providing personalised information versus general patient information) and different contexts (e.g. patient portals used by young, well-educated patients such as those in in vitro fertilisation clinics versus the same patient portals used by elderly patients with dementia) will provide knowledge on what works for whom, in what circumstances, in what respect and how.