Medical Errors:
Causes and Prevention
Roger L. Bertholf, Ph.D.
Associate Professor of Pathology
University of Florida Health Science
Center/Jacksonville
IOM: To Err Is Human: Building a
Safer Health System (2000)
• Frequency
• Cost
• Outcomes
• Types
• Causes
• Recommendations
Adverse Event vs. Error
• An adverse event is an injury caused by medical
management rather than the underlying condition of
the patient. An adverse event attributable to error is a
"preventable adverse event." Negligent adverse
events represent a subset of preventable adverse
events that satisfy legal criteria used in determining
negligence (i.e., whether the care provided failed to
meet the standard of care reasonably expected of an
average physician qualified to take care of the patient
in question).*
• An error is defined as the failure of a planned action
to be completed as intended (i.e., error of execution)
or the use of a wrong plan to achieve an aim (i.e.,
error of planning).
*About half of preventable AEs are considered negligent
Examples of Medical Errors
• Diagnostic error (inappropriate therapy)
• Equipment failure
• Infection (nosocomial, post-operative)
• Transfusion-related injury
• Misinterpretation of medical orders
• System failures that compromise diagnostic or treatment processes
Frequency of Medical Errors
Study          AEs    Errors   Fatal   Est. Deaths
NY (1984)      3.7%   58%      13.6%   98,000
CO/UT (1992)   2.9%   53%      6.6%    44,000†

†For comparison: MVA = 43,000; Breast CA = 42,000; AIDS = 16,000. Medical error would rank as the 8th most frequent cause of death overall.
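To see how study rates become national estimates, here is a rough back-of-the-envelope sketch (not the IOM's published method). The ~33.6 million annual U.S. hospital admissions figure is the one the IOM report used for 1997; the preventable fraction is the "Errors" column above.

```python
# Illustrative extrapolation:
# deaths ~= admissions x AE rate x fatal fraction x preventable fraction
admissions = 33.6e6                               # approx. US hospital admissions, 1997 (IOM)
ny_estimate = admissions * 0.037 * 0.136 * 0.58   # NY study rates -> ~98,000
print(f"NY-based estimate: {ny_estimate:,.0f}")
# The same simple product applied to the CO/UT row gives roughly 34,000, so the
# published 44,000 lower bound evidently reflects a somewhat different calculation.
```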
How reliable is this estimate?
• Includes only AEs producing a specified level of harm
• Two reviewers had to agree on whether
an AE was preventable or negligent
• Included only AEs documented in the
patient record*
*Some studies, using other sources of information about adverse
events, produced higher estimates.
Cost
• Adverse events: $37.6 – 50 billion*
• Preventable adverse events: $17 – 29
billion
• Half of the cost is for health care
• These represent 4% (AE)† and 2% (errors) of all health care costs
*lost income, lost household production, disability, health care costs
†Exceeds the total cost of treating HIV and AIDS
Causes*
Medication error          19%†
Wound infection           14%
Technical complications   13%

*Leape et al. (1991) The nature of adverse events in hospitalized patients (1,133 AEs studied in 30,195 admissions)
†Overall frequency (inpatients) is 3 per 1,000 medication orders; 2 per 1,000 are considered “significant” errors
AHA List of Medication Errors
• Incomplete patient information
• Unavailable drug information (warnings)
• Miscommunication of medication orders
• Confusion between drugs with similar names
• Lack of appropriate drug labeling
• Environmental conditions that distract health care providers
Most Common Medication Errors
Failure to adjust dosage in response to a change in hepatic/renal function   13.9%
History of allergy to the same or a related medication                       12.1%
Wrong drug name, dosage form, or abbreviation on the order                   11.4%
Incorrect dosage calculation                                                 11.1%
Atypical or unusual critical dosage consideration                            10.8%
A Comparison of Risks
Risk (per flight) of dying in a commercial airline accident      1 in 8 million*
Risk (per hospital admission) of dying from a medical error      >1 in 1,000
*1 in 2 million from 1967-1976
Six Sigma Quality Control
• Quality Management program designed
by Mikel Harry and Richard Schroeder
in 2000
• Strives to make QM a quantitative
science
• Sets performance standards and goals
for a production process
Six Sigma Paradigm: DMAIC (Define, Measure, Analyze, Improve, Control)
Six Sigma Process Performance
[Chart: process distribution vs. specification limits. Axis labels: Target, - Tolerance, + Tolerance; probability values .67, .95, and .999997; horizontal axis in standard deviations (σ) from -6 to +6.]
Six Sigma Performance
• Goal is to achieve < 1 defect per million (DPM)
• Not all processes can achieve the 6σ level of performance
• “Deming’s Principle” is that fewer defects lead to increased productivity, greater efficiency, and lower cost
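To make the DPM arithmetic concrete, here is a minimal sketch (not from the presentation) that converts a sigma level into defects per million using the standard normal distribution, with and without the customary 1.5σ allowance for long-term process drift.

```python
from statistics import NormalDist

def defects_per_million(sigma_level: float, shift: float = 0.0) -> float:
    """One-sided defect rate, in parts per million, for a given sigma level."""
    return (1.0 - NormalDist().cdf(sigma_level - shift)) * 1_000_000

# A 6-sigma process: well under 1 DPM if perfectly centered,
# about 3.4 DPM with the customary 1.5-sigma drift allowance.
print(defects_per_million(6))             # ~0.001
print(defects_per_million(6, shift=1.5))  # ~3.4
```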
Healthcare’s Six Sigma Performance
Process                        % Errors   Sigma
Preventable adverse events     3.0        2.5
Lab order accuracy             1.8        3.6
Reporting errors               0.048      4.8
False negative PAP             2.4        3.45
Unacceptable specimen          0.3        4.25
Duplicate test orders          1.52       3.65
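As a rough check on these figures, a sigma value can be recovered from an observed error rate using the inverse normal CDF plus the 1.5σ shift; the sketch below assumes that convention (it reproduces most, though not all, of the rows above).

```python
from statistics import NormalDist

def sigma_level(error_rate: float, shift: float = 1.5) -> float:
    """Sigma level corresponding to a one-sided defect rate (as a fraction)."""
    return NormalDist().inv_cdf(1.0 - error_rate) + shift

# Error rates from the table, expressed as fractions:
print(round(sigma_level(0.018), 2))     # ~3.6   (lab order accuracy)
print(round(sigma_level(0.00048), 2))   # ~4.8   (reporting errors)
print(round(sigma_level(0.003), 2))     # ~4.25  (unacceptable specimen)
```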
What Causes Accidents?
Human error         80%
Technical failure   20%
Sidney Dekker
“What is striking about many
accidents is that people were doing
exactly the sorts of things they
would usually be doing—the things
that usually lead to success and
safety. . .Accidents are seldom
preceded by bizarre behavior.”
From The Field Guide to Human Error
Investigations (2002)
A Primer on Accident Investigation
• Human error as a cause
• Human error as a symptom
Human Error
• Bad Apple Theory
– Complex systems are inherently safe
– Human intervention subverts the inherent
safety of complex systems
• Reaction to failure
– Bad outcome = bad decision
– Retrospective, proximal, counterfactual,
and judgmental
The Bad Apple Theory
• The illusion of success
– Bad procedures often produce good results
– Success breeds confidence
• Failure is an aberration
– “The system must be safe”
• The economical answer
– It is easier to change human behavior than
it is to change systems
Assigning Blame
• Retrospective
Retrospective Analysis
Assigning Blame
• Retrospective
• Proximal
Proximity
• It is intuitive to focus on the location
where the failure occurred
• “Sharp end” vs. “Blunt end”
– The “sharp end” is the point at which the
failure occurs
– The “blunt end” is the set of systems and
organizational structure that supports the
activities at the “sharp end”
Retrospective Analysis
[Diagram: failures surface at the “sharp end” of care, which is supported by the “blunt end” of the institution: its systems, procedures, and organization.]
Assigning Blame
• Retrospective
• Proximal
• Counterfactual
What Might Have Been. . .
• In retrospect, it is always easy to see
where different actions would have
averted a bad outcome
• In retrospect, the outcome of any
potential action is already known
• “Counterfactuals” pose alternate
scenarios, which are rarely useful in
determining the true cause
Assigning Blame
• Retrospective
• Proximal
• Counterfactual
• Judgmental
The Omniscient Perspective
• As an investigator, you always know
more than the participants did
• It is very difficult, if not impossible, to judge fairly the reactions of those who had less information than you
• Investigators define “failure” based on
outcome
Lessons for Investigators
• There is no “primary” cause
– Every action affects another
• There is no single cause
– Errors in complex systems are nearly
always multi-focal
• A definition of “human error” is elusive
– Definition of “error”
– Humans operate within complex systems
Failure Mode and Effects Analysis
• Everything will eventually fail
• Humans frequently make errors
• The cause of a failure is often beyond
the control of an operator
10 Steps for FMEA
1. Review the process
2. Brainstorm potential failure modes
3. List potential effects of each failure mode
4. Assign a severity rating
5. Assign an occurrence rating
6. Assign a detection rating
7. Calculate the risk priority number for each effect
8. Prioritize these failure modes based on the RPN and severity
9. Take action to reduce or eliminate the high-risk failure modes
10. Recalculate the RPN
Ranking the Failure Modes
• Calculate the RPN
– Rate Severity, Occurrence, and Detection
on a scale of 1 – 10
– RPN = S x O x D (maximum 1000)
• Prioritize Failure modes
– Not strictly based on RPN
– Severity of 9 or 10 should get priority
• Goal is to reduce RPN
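A minimal sketch (not from the presentation) of the RPN arithmetic and the prioritization rule just described, using made-up failure modes and ratings:

```python
# Each failure mode is rated 1-10 for Severity, Occurrence, and Detection.
# RPN = S x O x D (maximum 1000); severity 9-10 gets priority regardless of RPN.
failure_modes = [                         # (name, severity, occurrence, detection)
    ("Verbal order misheard",        7, 6, 5),
    ("Critical result not reported", 9, 3, 4),
    ("Wrong specimen label",         8, 2, 3),
]

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

ranked = sorted(
    failure_modes,
    key=lambda m: (m[1] >= 9, rpn(m[1], m[2], m[3])),  # high severity first, then RPN
    reverse=True,
)
for name, s, o, d in ranked:
    print(f"{name}: severity {s}, RPN {rpn(s, o, d)}")
```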
Case Exercise #1
A 91-year-old female was transferred to a hospital-based skilled nursing unit from the acute care
hospital for continued wound care and intravenous
(IV) antibiotics for methicillin-resistant
Staphylococcus aureus (MRSA) osteomyelitis of
the heel. She was on IV vancomycin and began to
have frequent, large stools.
Case Exercise #1
The attending physician ordered a test for Clostridium difficile on
Friday, and was then off for the weekend. That night, the test
result came back positive. The lab called infection control, who in
turn notified the float nurse caring for the patient. The nurse did
not notify the physician on call or the regular nursing staff.
Isolation signs were posted on the patient's door and chart, and
the result was noted in the patient's nursing record. Each nurse
who subsequently cared for this patient assumed that the
physician had been notified, in large part because the patient was
receiving vancomycin. However, it was IV vancomycin (for the
MRSA osteomyelitis), not oral vancomycin, which is required to
treat C. difficile.
Case Exercise #1
On Monday, the physician who originally ordered the C. difficile
test returned to assess the patient and found the isolation signs
on her door. He asked why he was never notified and why the
patient was not being treated. The nurse on duty at that time told
him that the patient was on IV vancomycin. The float nurse, who
had received the original notification from infection control, stated
that she had assumed the physician would check the results of
the test he had ordered. Due to the lack of follow-up, the patient
went three days without treatment for C. difficile, and continued to
have more than 10 loose stools daily. Given her advanced age,
this degree of gastrointestinal loss undoubtedly played a role in
her decline in functional status and extended hospital stay.
Case Exercise #1
• What are the systems/processes
involved in this incident?
• What were the failure points?
Analysis
• MD failed to check the result of an
ordered test
• Float RN wrongly assumed that MD had
been notified of the result
• RN incorrectly assumed that IV
vancomycin was adequate therapy
Failure Points
• Laboratory system for reporting critical
results
– Is a positive C. difficile culture considered a
panic result?
– To whom are panic values reported?
• RN/MD communication
– Does the institution foster an environment
where RNs can comfortably question MD
orders?
Lisa Belkin
“. . . it is virtually impossible for one
mistake to kill a patient in the highly
mechanized and backstopped world of
a modern hospital. A cascade of
unthinkable things must happen,
meaning catastrophic errors are rarely
a failure of a single person, and almost
always a failure of a system.”
From How Can We Save the Next Victim?
(NY Times Magazine, June 15, 1997)
Case Exercise #2
An 81-year-old female maintained on warfarin for a history of
chronic atrial fibrillation and mitral valve replacement developed
asymptomatic runs of ventricular tachycardia while hospitalized.
The unit nurse contacted the physician, who was engaged in a
sterile procedure in the cardiac catheterization laboratory (cath
lab) and gave a verbal order, which was relayed to the unit nurse
via the procedure area nurse. Someone in the verbal order
process said "40 of K." The unit nurse (whose past clinical
experience was in neonatal intensive care) wrote the order as
"Give 40 mg Vit K IV now."
Case Exercise #2
The hospital pharmacist contacted the physician concerning the high
dose and the route and discovered that the intended order was "40
mEq of KCl po." The pharmacist wrote the clarification order.
However, the unit nurse had already obtained vitamin K on override
from the Pyxis MedStation® (an automated medication dispensing
system) and administered the dose intravenously (IV). The nurse
attempted to contact the physician but was told he was busy with
procedures. A routine order to increase warfarin from 2.5 mg to 5 mg
(based on an earlier INR) was written later in the day and interpreted
by the evening shift nurse as the physician’s response to the
medication event. The physician was not actually informed that the
vitamin K had been administered until the next day. Heparin was
initiated and warfarin was re-titrated to a therapeutic level. The
patient’s INR was sub-therapeutic for 3 days, but no untoward clinical
consequences occurred.
Case Exercise #2
• What are the systems/processes
involved in this incident?
• What were the failure points?
Analysis
• Verbal orders
– Third party “messengers”
– Use of abbreviations
• Failure to question unusual orders
• Lack of control over medication
availability
Failure Points
• Hospital policy for medication orders
– “Read Back” requirement
• Ability to circumvent pharmacist review
J.C.R. Licklider (1915-1990)
“It seems likely that the
contributions of human
operators and [computers] will
blend together so completely
in many operations that it will
be difficult to separate them
neatly in analysis.”
From Man-Computer Symbiosis (1960)
Anatomy of a Laboratory Error
Phase I: A failed calibration
• Recalibration of the acetaminophen
assay was prompted by a QC failure
• Recalibration was followed by
acceptable QC results
Phase II: QC failures
• Subsequent QC measurements
produced an error code indicating the
result was above the linear limit of the
method
• QC failures went unnoticed, since the
LIS did not display the error code
• Several patient specimens were
reported incorrectly, resulting in
inappropriate treatment
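As an illustration of the interface failure described above, here is a minimal sketch (not the actual instrument or LIS logic; the record format and field names are hypothetical) of a result-review step that treats any instrument error code, or an unparseable value, as a QC failure instead of silently passing the number through.

```python
# Hypothetical, simplified instrument-to-LIS record: "test|value|flag",
# e.g. "ACET|52.1|" (clean) or "ACET|>LIN|AL" (above the linear limit).
def review_qc(record, low, high):
    """Return (acceptable, reason); any flag or non-numeric value fails QC."""
    test, raw_value, flag = record.split("|")
    if flag:                               # never discard an instrument error code
        return False, f"{test}: instrument flag '{flag}'"
    try:
        value = float(raw_value)
    except ValueError:                     # e.g. ">LIN" would otherwise be lost
        return False, f"{test}: non-numeric result '{raw_value}'"
    if not (low <= value <= high):
        return False, f"{test}: {value} outside QC range {low}-{high}"
    return True, f"{test}: {value} within QC range"

# A flagged control result is rejected rather than ignored:
print(review_qc("ACET|>LIN|AL", low=40.0, high=60.0))
```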
Phase III: Discovery
• The ED staff contacted the laboratory to
question the high acetaminophen result
on a patient who denied recent
ingestion of the drug
• Investigation revealed the QC failures,
and the assay was successfully
recalibrated
Phase IV: Investigation
Principal Questions
• Why was an acceptable QC result
obtained immediately after a failed
calibration?
• Why didn’t the technologists notice
subsequent QC failures?
• Should the clinicians have been more
suspicious of unusually high results?
The Process
Failure Points in The Process
Unrecognized calibration failure
• Roche modular
• Throughput/timing algorithm
Unnoticed QC failures
• Interface through Digital Innovations
box
• Error codes are rare in QC results
• Supervisory review does not occur
regularly on weekends
Lack of clinical suspicion
• History is often unreliable in overdose
cases
• An antidote for acetaminophen exists
• Symptoms of acetaminophen toxicity
may not appear until after the window of
therapeutic opportunity has passed
Conclusions
• An unexpected error occurred in the
calibration algorithm encoded in the
instrument software
• The failure of information to cross the
instrument/LIS interface masked the
erroneous control results
• Suspect results were not immediately
apparent to clinicians
Lessons
• Complex technologies always have
unexpected failure modes
• Interfaces between systems and
operators are opportunities for distortion
or loss of important information
• The fallacy of the “un-rocked boat”
Richard I. Cook
“Recognizing hazard and
successfully manipulating system
operations to remain inside the
tolerable performance boundaries
requires intimate contact with
failure.”
From How Complex Systems Fail (2002)
How Complex Systems Fail
• Complex systems are intrinsically
hazardous systems
• Complex systems are heavily and
successfully defended against failure
• Catastrophe requires multiple failures—
single point failures are not enough
• Complex systems contain changing
mixtures of failures latent within them
How Complex Systems Fail
• Catastrophe is always just around the
corner
• Post-accident attribution to a “root
cause” is fundamentally wrong
• Human operators have dual roles: as
producers and as defenders against
failure
• Human practitioners are the adaptable
element of complex systems
How Complex Systems Fail
• Change introduces new forms of failure
• Safety is a characteristic of systems and
not of their components
• Failure-free operations require
experience with failure
IOM Recommendations
• Establish national focus
• Identify and learn from medical errors
through mandatory reporting
• Raise standards and expectations
• Implement safe practices
AHRQ Safety Recommendations
for Patients
• Ask questions if you have doubts or concerns
• Keep and bring a list of ALL the medicines
you take
• Get the results of any test or procedure
• Talk to your doctor about which hospital is
best for your health needs
• Make sure you understand what will happen if
you need surgery