Health information systems (HIS) have the potential to increase efficiency and save considerable amounts of health expenditure.
Current rates of adoption of health information technology are low, and health information systems remain under-utilized.
Kaplan notes that “there has been a long history of difficulties in achieving clinical use of some kinds of clinical informatics applications.” Within this context, it is imperative that HIS implementation is evaluated and features of successful implementation identified.
Eysenbach summarizes the goals of technology in healthcare, suggesting that eHealth should be: (1) efficient, thereby decreasing costs; (2) enhancing the quality of care; (3) evidence based, proven by rigorous scientific evaluation; (4) empowering for consumers and patients; (5) encouraging a partnership between patient and health professional; (6) educating physicians and consumers; (7) enabling information exchange and communication in a standardized way between healthcare establishments; (8) extending the scope of healthcare beyond its geographical and conceptual boundaries; (9) ethical, since e-health involves new forms of patient-physician interaction and poses new ethical challenges and threats; and (10) equitable. Haux identified seven general tasks of HIS over time. These are: (1) to move from paper-based to computer-based processing and storage; (2) to move from local to national and global HIS; (3) to include patients as HIS users; (4) to use HIS data for healthcare planning and for clinical and epidemiological research (aside from patient care and administration); (5) to shift the focus from technical aspects of HIS to change management and strategic information management; (6) to place more emphasis on image and molecular data; and (7) to acknowledge the steady increase of new types of technologies, perhaps as yet unimagined.
The reasons cited in favor of implementing HIS center primarily on efficiency, cost, quality and safety.
No matter how success or failure is defined (if it is), the evidence of effectiveness is generally weak and inconsistent. Information systems of all types notoriously fall short of expectations and fail to deliver benefits (see, for example, Gauld and Goldfinch). Shpilberg et al. reported that only 15% of business executives surveyed believed that their company’s IT capability was highly effective, ran reliably, and delivered projects with the promised functionality, timing, and cost. Systematic reviews of healthcare settings consistently find little evidence that care provided through technological innovations is any better than traditional methods. Whitten’s systematic review of HIS cost-effectiveness found no conclusive evidence that telemedicine is a cost-effective way of delivering healthcare. Mistry reviewed the cost-effectiveness literature ten years later and reached a consistent conclusion: there is still no conclusive evidence that technological interventions are cost-effective compared to conventional healthcare. However, these reviews also noted methodological shortcomings in the studies evaluating cost-effectiveness, particularly in the amount of methodological detail provided and in the methods used to measure cost-effectiveness.
Effective evaluation allows us to understand how and under what conditions HIS work, and determine the safety and effectiveness of the system. Evaluation can provide guidance to the implementation process and mitigate unplanned negative outcomes. Ammenwerth defines evaluation as “the act of measuring or exploring attributes of a health information system (in planning, development, implementation, or operation), the result of which informs a decision to be made concerning that system in a specific context.” This should include the inevitable organisational change which accompanies the implementation of HIS. Early approaches to evaluation focused on the “measurement of changes in processes and of the consequences of these changes” while more recently attention has been paid to the complex, iterative and multidimensional implementation process. Effective evaluation accompanies the whole life cycle of HIS, evaluating technology against a comprehensive set of measures throughout all stages. Measuring the success of HIS is not straightforward and the challenges in the organisation and setting of HIS make both implementation and evaluation of the HIS difficult. Evaluation processes are also often flawed.
Although the term compliance has the longest history of use, it is now considered inadequate because it fails to reflect changes in the relationship between patients and healthcare professionals. Patients are no longer submissive and passive, and their role in achieving therapeutic objectives involves not only compliance with medical instructions but active cooperation and agreement with the doctor and the pharmacist. For these reasons, the term adherence has been preferred in practice since the 1990s.
The differences between compliance and adherence are thus not merely semantic but essential. In compliance, the focus is on the healthcare provider, who holds a dominant status in relation to the patient; the flow of information is one-way and the objective is the patient’s obedience. The concept of adherence, by contrast, is oriented towards the patient and cooperation: it implies a two-way transfer of information and the engagement of both parties.
A key question across domains is, “how are patients/health agents/consumers persuaded to acquire certain drugs and take them as directed?”
“Concordance” was introduced to the literature on medication compliance and adherence, where “adherence” remains the most neutral, non-ideological term for patient behavior, in use at least since the mid-1990s.
The question that concordance theorists have really asked is not, “how do we treat patients’ health beliefs more respectfully?” but rather, “how do we persuade patients to follow the advice of their doctors?”
We can frame a rhetorical question across domains: how are people persuaded to take drugs?
Several authors have written even-handedly about concordance and make clear that cooperation between physicians and patients is likely to lead to better, and more appropriate, use of medications. Elwyn, Edwards, and Britten write, “Concordance describes the process whereby the patient and doctor reach an agreement on how a drug will be used, if at all. In this process doctors identify and understand patients’ views and explain the importance of treatment, while patients gain an understanding of the consequences of keeping (or not keeping) to treatment.” Ferner writes, “Usually…the patient, who has most to gain by success and the most to lose from harm, should decide whether to have treatment, and the prescriber should provide information on the risks and benefits to help make the decision.”
It is easy to agree that cooperative, better-informed, and realistically-prepared patients are more likely to adhere to recommended treatments than those who are resistant, ill-informed, and unprepared.
So, “concordance,” with its egalitarian rhetoric, not only portrays physicians and patients as equals but also portrays all patients as equals—while, in truth and in practice, patients are not all equally well-equipped for consensual decision-making, and, certainly, not all physicians believe that they are. When one exits the concordance literature to enter other literature about patients, what becomes clear is that the respect for patients that is invoked as the key resource of concordance is not always available to be tapped.
The more consumers are aware of a drug, the more they will request it; arguably, if they request it, then, being in agreement with their physician on its prescription, they are more likely to adhere to treatment.
The ontology of critical realism, with its identification of numerous causal mechanisms interacting in different ways in different contexts to produce different outcomes, has influenced the development of realistic evaluation, as advanced by Pawson and Tilley (1997). The aim of realistic evaluation is to explain the processes involved between the introduction of an intervention and the outcomes that are produced. In other words, it assumes that the characteristics of the intervention itself are only part of the story, and that the social processes involved in its implementation have to be understood as well if we are going to have an adequate understanding of why observed outcomes come about. In contrast to the assumptions of constant conjunction, realistic evaluation posits the alternative formula of:
Mechanism + context = outcome
In any given context, there will in all likelihood be a number of causal mechanisms in operation, their relationship differing from context to context. The aim of realistic evaluation is to discover if, how and why interventions have the potential to cause beneficial change. To do this, it is necessary to penetrate beneath the surface of observable inputs and outputs in order to uncover how mechanisms which cause problems are removed or countered by alternative mechanisms introduced in the intervention. In turn, this requires an understanding of the contexts within which problem mechanisms operate, and in which intervention mechanisms can be successfully fired. In other words, realistic evaluators take the middle line between positivism and relativism, in that positivism’s search for a single cause is seen as too simplistic, while relativism’s abandonment of any sort of generalisable explanation is seen as needlessly pessimistic. In contrast to these two poles, realists argue that it is possible to identify tendencies in outcomes that are the result of combinations of causal mechanisms, and to make reasonable predictions as to the sorts of contexts that will be most auspicious for the success of health-promoting mechanisms. The confidence of prediction can be increased through comparison of different cases (i.e. different contexts) in that concentration on context–mechanism–outcome configurations allows for the development of transferable and cumulative lessons about the nature of these configurations.
Evaluating complex interventions involves determining both whether and how they work; such evaluations are theoretically and practically complex. First, the components of care themselves may interact positively or negatively with each other and also may be required in different doses or formats depending on context. The kind of interventions required to bring about change are also likely to be multi-faceted; educational, audit-based and facilitation-based interventions each have evidence to support them. The choice and application of appropriate outcome measures at different levels of change is challenging. Lastly, implementation often occurs within the shifting sands of health services reform and development.
Realistic Evaluation is a relatively new framework for understanding how and why interventions work in the real world and has been recommended as a means to understand the dissemination of service innovation. Analysis focuses on uncovering key mechanisms and on the interactions between mechanism and context in order to develop ‘middle range theories’ about how they lead to outcomes. Accumulation of evidence, which may be qualitative or quantitative, and may also be derived from external sources, leads to a refining of these theories. Realistic Evaluation, therefore, promises to be a useful framework for understanding the key functions of an intervention by examining its relationship with the context.
Shared care emphasizes the need for co-ordination between primary care and specialists to reduce duplication and address unmet need; chronic disease management focuses on service redesign in primary care incorporating timely review, expert input, patient involvement and information systems.
After the first audit we recommended that prescriptions be checked by the prescribing physician and re-checked by the nurse assistant in the clinic.
We strongly advocate more training for junior physicians to avoid these errors and to build understanding of the potential hazards of prescription errors.
Computer-based prescribing systems may minimize the risk of errors due to illegible prescriptions. However, the considerable financial investment and training involved may be prohibitive for some institutions.
Knowledge of where and when errors are most likely to occur is generally the first step in the prevention of prescription errors.