Tag: easy to read

Notes from the ebook: The Science of Evaluation: A Realist Manifesto


Chapter 1

As Bhaskar puts it, ‘Theory without experiment is empty. Experiment without theory is blind’ (1978, 191).

Society is made by, but never under the control of, human intentions.

Evaluation has traditionally been asked to pronounce on whether a programme makes a difference ‘beyond that which would have happened anyway’. We always need to keep in mind that what would have happened anyway is change – unavoidable, unplanned, self-generated, morphogenetic change.

Realist evaluation is a form of theory-driven evaluation. But its theories are not the highfalutin’ theories of sociology, psychology and political science. Indeed, the term ‘realistic’ evaluation is sometimes substituted out of the desire to convey the idea that the fate of a programme lies in the everyday reasoning of its stakeholders. Good evaluations gain power for the simple reason that they capture the manner in which an awful lot of participants think. One might say that the basic currency is common-sense theory.

However, this should only be the starting point. The full explanatory sequence needs to be rooted in, but not identical to, everyday reasoning. In trying to describe the precise elbow room between social science and common sense one can do no better than to follow Elster’s thinking. He has much else to say on the nuts and bolts of social explanation, but here we concentrate on that vital distinction, as mooted in the following:

Much of science, including social science, tries to explain things we all know, but science can make a contribution by establishing that some of the things we all think we know simply are not so. In that case, social science may also explain why we think we know things that are not so, adding as it were a piece of knowledge to replace the one that has been taken away. (2007: 16)

Evidence-based policy has become associated with systematic review methods for the soundest of reasons. Social research is supremely difficult and prone to all kinds of error, mishap and bias. One consequence of this in the field of evaluation is the increasingly strident call for hierarchies of evidence, protocolised procedures, professional standards, quality appraisal systems and so forth. What this quest for technical purity forgets is that all scientific data is hedged with uncertainty, a point which is at the root of Popperian philosophy of science.

What is good enough for natural science is good enough for evidence-based policy, which comes with a frightening array of unanticipated swans – white, black and all shades of grey. Here too, ‘evidence’ does not come in finite chunks offering certainty and security to policy decisions. Programmes and interventions spring into life as ideas about how to change the world for the better. These ideas are complex and consist of whole chains of main and subsidiary propositions. The task of evaluation research is to articulate and refine those theories. The task of systematic review is to refine those refinements. But the process is continuous – for in a ‘self-transforming’ world there is always an emerging angle, a downturn in programme fortunes, a fresh policy challenge. Evidence-based policy will only mature when it is understood that it is a continuous, accumulative process in which the data pursues, but never quite draws level with, unfolding policy problems. Enlightened policies, like bridges over swampy waters, only hold ‘for the time being’.

Chapter 2

It has always been stressed that realism is a general research strategy rather than a strict technical procedure (Pawson and Tilley, 1997b: Chapter 9). It has always been stressed that innovation in realist research design will be required to tackle a widening array of policies and programmes (Pawson, 2006a: 93–99). It has always been stressed that this version of realism is Popperian and Campbellian in its philosophy of science and thus relishes the use of the brave conjecture and the application of judgement (Pawson et al., 2011a).

 

Notes from academic paper: Patient access to medical records and healthcare outcomes: a systematic review.

Providing patients access to their medical records may facilitate a more collaborative relationship between provider and patient. Existing literature suggests that patient-accessible records can improve patient–provider communication, self-management and patient satisfaction.

The IOM (Institute of Medicine) has recommended six major aims for improving the quality of healthcare delivery: safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity.

 


Notes from book: Guide to health informatics

In the information sciences the definitions below are the very foundation of informatics: p. 13

Data consists of facts. Facts are observations or measurements about the world. For example, ‘today is Tuesday’.

Knowledge defines relationships between data. The rule ‘tobacco smoking causes lung cancer’ is an example of knowledge. Such knowledge is created by identifying recurring patterns in data, for example across many different patients. We learn that events usually occur in a certain sequence, or that an action typically has a specific effect. Through the process of model abstraction, these observations are then codified into general rules about how the world works.

As well as learning such generalized ‘truths’ about the world, one can also learn knowledge that is specific to a particular circumstance. For example, we can create patient-specific knowledge by observing a patient’s state over time. By abstracting away patterns in what is observed, one can arrive at specific knowledge such as ‘following treatment with anti-hypertensive medication, there has been no decrease in the patient’s blood pressure over the last 2 months’.

Information is obtained by the application of knowledge to data. Thus, the datum that ‘the patient’s blood pressure is 125/70 mmHg’ yields information if it tells us something new. In the context of managing a patient’s high blood pressure, using our general knowledge of medicine and patient-specific knowledge, the datum may allow us to draw the inference that the patient’s blood pressure is now under control.
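A minimal sketch in Python of how these three definitions fit together, reusing the book’s blood-pressure example. The 140/90 mmHg cut-off is an illustrative assumption for the sketch, not a clinical guideline:

```python
# Data: a raw observation about the world.
datum = {"systolic": 125, "diastolic": 70}  # 'the patient's BP is 125/70 mmHg'

# Knowledge: a general rule relating data to a conclusion.
# The 140/90 limits are assumed for illustration only.
def bp_under_control(reading, systolic_limit=140, diastolic_limit=90):
    """Rule: blood pressure counts as controlled when both values are below the limits."""
    return (reading["systolic"] < systolic_limit and
            reading["diastolic"] < diastolic_limit)

# Information: what we learn by applying the knowledge to the datum.
if bp_under_control(datum):
    print("Inference: the patient's blood pressure is now under control.")
else:
    print("Inference: the patient's blood pressure is not yet under control.")
```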


How variations in the structure of clinical messages affect the way in which they are interpreted: p.36-43

What a message is meant to say when it is created, and what the receiver of a message understands, may not be the same. This is because what we humans understand is profoundly shaped by the way data are presented to us, and by the way we react to different data presentations. Thus it is probably as important to structure data so that they can be best understood as it is to ensure that the data are correct in the first place. What a clinician understands after seeing the data in a patient record and what the data actually show can be very different things.

When sending a message, we have to make assumptions about the knowledge that the receiver has, and use that to shape our message. There is no point in explaining what is already known, but it is equally important not to miss out important details that the receiver needs in order to draw the right conclusions. The knowledge shared between individuals is sometimes called common ground.

The structure of a message determines how it will be understood. The way clinical data are structured can alter the conclusions a clinician will draw from data.

The message that is sent may not be the message that is received. The effectiveness of communication between two agents is dependent upon:

  • the communication channel, which varies both in its capacity to carry data and in the noise that distorts the message
  • the knowledge possessed by the agents, and the common ground between them
  • the resource limitations of agents including cognitive limits on memory and attention
  • the context within which the agents find themselves, which dictates which resources are available and the competing tasks at hand.

Grice’s conversational maxims provide a set of rules for conducting message exchanges:

  • maxim of quantity: say only what is needed.
  • maxim of quality: make your contribution one that is true.
  • maxim of relevance: say only what is pertinent to the context of the conversation at the moment.
  • maxim of manner: avoid obscurity of expression and ambiguity; be brief and orderly.

Medical record’s basic functions: p.112

  1. provides means of communicating between staff who are actively managing a patient.
  2. during the period of active management of a patient’s illness, the record strives to be the single data access point for workers managing a patient. All test results, observations and so forth should be accessible through it.
  3. the record offers an informal ‘working space’ to record ideas and impressions that help build up a consensus view, over the period of care, of what is going on with the patient.
  4. once an episode of care has been completed, the record ultimately forms the single point at which all clinical data are archived, for long-term use.

Traditionally, the EMR (electronic medical record) is used in care as a passive supporter of clinical activity. An active EMR may suggest what patient information needs to be collected, or it might assemble clinical data in a way that assists a clinician in the visualization of a patient’s clinical condition. p.119
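As a hedged sketch of that passive/active distinction: a passive record only stores what it is given, while an active one compares its contents against the data it expects and prompts for what is still missing. The expected items below are invented for illustration:

```python
# Sketch of one 'active' EMR behaviour: suggest which patient information
# still needs to be collected. Field names are illustrative assumptions.
EXPECTED_ITEMS = {"blood_pressure", "heart_rate", "medication_list"}

def suggest_missing(record: dict) -> set:
    """Return the expected data items not yet present in the record."""
    return EXPECTED_ITEMS - record.keys()

patient_record = {"heart_rate": 72}     # a passive record simply holds this
print(suggest_missing(patient_record))  # the active step: prompt for the rest
# e.g. {'blood_pressure', 'medication_list'}
```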

There are two quite separate aspects to record systems:

  • the physical nature of the way individuals interact with them
  • the way information is structured when entered into or retrieved from the system.

A summative evaluation can be made in three broad categories:

  1. a user’s satisfaction with the service
  2. clinical outcome changes resulting from using the service
  3. any economic benefit of the service

Technology can be applied to a problem in a technology-driven or a problem-driven manner. Information systems should be created in a problem-driven way, starting with an understanding of user information problems. Only then is it appropriate to identify if and how technology should be used.


Providing access methods that are optimized to local needs can enlarge the range of clinical contexts in which evidence is used. p.177

p.354:

AI systems are limited by the data they have access to, and the quality of the knowledge captured within their knowledge base.

An expert system is a program that captures elements of human expertise and performs reasoning tasks that normally rely on specialist knowledge. Expert systems perform best in straightforward tasks, which have a predefined and relatively narrow scope, and perform poorly on ill-defined tasks that rely on general or common sense knowledge.

An expert system consists of:

  1. a knowledge base, which contains the rules necessary for the completion of its task
  2. a working memory in which data and conclusions can be stored
  3. an inference engine, which matches rules to data to derive its conclusions.
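A minimal forward-chaining sketch of these three components in Python. The rules and facts are toy examples assumed for illustration, not taken from the book:

```python
# 1. Knowledge base: rules of the form (antecedents, conclusion).
knowledge_base = [
    ({"smoker"}, "raised lung cancer risk"),
    ({"systolic > 140"}, "hypertension"),
    ({"hypertension", "raised lung cancer risk"}, "flag for review"),
]

# 2. Working memory: the observed data plus conclusions derived so far.
working_memory = {"smoker", "systolic > 140"}

# 3. Inference engine: match rules against working memory and add their
#    conclusions, repeating until nothing new can be derived.
changed = True
while changed:
    changed = False
    for antecedents, conclusion in knowledge_base:
        if antecedents <= working_memory and conclusion not in working_memory:
            working_memory.add(conclusion)
            changed = True

print(working_memory)  # now also contains 'raised lung cancer risk',
                       # 'hypertension' and 'flag for review'
```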