Dekker article notes
Gaps and Culpability in Medicine, Dekker (Genetics and Group Rights, 2007)
…celebrated accidents shape public perception of safety and risk in health care. They
present heroes (e.g., a care provider who tried to save the patient despite the odds and
errors of others), survivors, and victims. And, of course, they put villains, or anti-heroes,
center stage
[people can] become a villain within a matter of weeks. Such a transformation hinges
often on the severity of the outcome (the patient’s death) and the ability to construct a
story where consequence and act are necessarily linked. This post hoc moral and social
negotiation creates villains out of normal practitioners and celebrated accidents out of
regular adverse events. …a central preoccupation of late 20th-century social science: an
accident does not exist “out there,” to be explained with a good method of inquiry.
Rather, public perception shapes what becomes an accurate story about the accident, and
it creates not only the celebration around an accident (in health care or elsewhere) but
also creates, or constructs, the accident itself.
Discontinuities, or gaps, are a feature of health care delivery… may show up as
conversions between oral and written treatment orders, handovers between shifts,
movement of patients, transfer of the caretaking physician, or interruptions in workflow. …
transitions and shifts in the delivery of health care can produce losses of momentum
and information, and handovers of patients, data, or even responsibility. They all represent
gaps in the continuity of care.
Those who work in health care often know where discontinuities in their processes occur,
and they know that these can represent extra risk. Consequently, they invest in
recognizing, anticipating, and absorbing the potentially negative effects of discontinuities
(e.g., through handover briefings). Gaps, then, represent both demands and opportunities
for the expression of highly developed practitioner expertise, such as strategies for
smoothing handovers, double-checking prescriptions, reading back orders, internalizing
charts…
… gaps in care allow for the excision of a particular act, the isolation of a trouble spot in
the wake of disaster (e.g., fatal medication error), or the identification of a single or chief
protagonist. … A culpable act happens at one station (e.g., the nurse mixing the fluids),
and not at another, or less often so (e.g., a physician writing unclearly or not signing in
time). But why? What makes one act – only a gap away from other contributory acts –
more culpable than others?
People are critical to creating a resilient system that necessarily needs to pursue multiple
goals with finite resources. Adverse events – like safety – emerge from a multitude of
factors and their interactions, as a normal byproduct of pursuing success in
resource-constrained circumstances.
Gaps in the delivery of health care offer us the opportunity to see failures either as a
simple, direct result of individual acts, or as emergent from the complex production
process of care.
crime does not exist. Crimes are constructed; they are a negotiated settlement onto one
particular version of “history” that serves social functions
The prototypical story of error and violation and its consequences in Judeo-Christian
tradition tells of people who are equipped with the requisite intellect, who have received
appropriate indoctrination (e.g., “don’t eat that fruit”), who display capacity for reflective
judgment, and who actually have the time to choose between a right and a wrong
alternative. They then proceed to pick the wrong alternative.
As Nietzsche noted, anxiety plays an important role in people’s desire to nail down “the”
cause for an event. All conclusions about cause, however formal, messy, incomplete, or
shallow the analytic route may have been to get there, are constructions that relate to the
future, not the past…. They are not about making the past understandable or explicable,
but about making the future manageable, controllable, or at least about furnishing
illusions that make it seem so.
Being afraid is worse than being wrong. Being soothed is better than being fair. Selecting
a scapegoat to carry the interpretive load of an accident or incident is the easy price
people pay for the illusion that they actually have control over a risky, complex,
inaccessible, discontinuous technology, such as modern health care delivery.
Scapegoats are those who expose the real problem, but do not create it. The real problem
is that people do not enjoy the extent of control they would want or expect over the risky
technologies they build and consume. … Embracing this could be profoundly distressing;
it creates anxiety because it implies a loss of control.
Previous periods of rapid technological development reveal similar responses to a
popular perception of loss of control, of a confusion about moral boundaries. … Late
medieval and early Renaissance witch hunts coincided with the emergence of a new
social order and the crumbling of an older one. People increasingly moved to cities; the
role and responsibility of women in society changed; the Church started losing its
epistemological privilege; and knowledge became more available and widely spread
through the printing press and use of languages other than Latin. … This dissolution of
people’s medieval worldview must have created fertile soil for a witch craze. Witches
were accused of, among other things, poisoning innocent people and killing unbaptized
babies…. The image of a witch, mixing potions and involved in infanticide, could be as
appealing today as it was then.
Challenger training note: common remembrance of the story converts the nurse who
mixed the solution into the nurse who gave it.
Just Culture: Who gets to draw the line? Cognition, Technology & Work (2008)
A just culture is meant to balance learning from incidents with accountability for their
consequences. All the current proposals for just cultures argue for a clear line between
acceptable and unacceptable behavior… The critical question is not where to draw the
line, but who gets to draw it. Culpability is socially constructed: the result… Different
accounts of the same incident are always possible (e.g. educational, organizational,
political). They generate different repertoires of countermeasures and can be more
constructive for safety. The issue is not to exonerate individual practitioners but rather
what kind of accountability promotes justice and safety: backward-looking and
retributive, or forward-looking and change-oriented.
We no longer see such accidents as meaningless, uncontrollable events, but rather as
failures of risk management, and behind these failures are people and organizations
Criminalization of any act is not just about retribution and explanation of misfortune, but
also about putative deterrence… The deterrence argument is problematic, however, as
threats of prosecution do not deter people from making errors, but rather from reporting
them.
A just culture, then, is particularly concerned with the sustainability of learning from
failure through the reporting of errors, adverse events, and incidents. If operators and others
perceive that their reports are treated unfairly or lead to negative consequences, the
willingness to report will decline… Writings about just culture… acknowledge this
central paradox of accountability and learning: various stakeholders (e.g. employers,
regulators) want to know everything that happened, but cannot accept everything that
happened and will want to advertise their position as such.
… drawing an a priori line between the acts an organization will accept and those it will
not is difficult.
"Negligence is a conduct that falls below the standard required as normal in the
community. It applies to a person who fails to use the reasonable level of skill expected
of a person engaged in that particular activity, whether by omitting to do something that a
prudent and reasonable person would do in the circumstances or by doing something that
no prudent or reasonable person would have done in the circumstances. To raise a
question of negligence, there needs to be a duty of care on the person, and harm must be
caused by the negligent action. In other words, where there is a duty to exercise care,
reasonable care must be taken to avoid acts or omissions which can reasonably be
foreseen to be likely to cause harm to persons or property. If, as a result of a failure to act
in this reasonably skillful way, harm/injury/damage is caused to a person or property, the
person whose action caused the harm is negligent" (GAIN 2004, p. 6).
There is no definition that captures the essential properties of "negligence"… What is a
"normal standard"? How far is "below"? What is "reasonably skillful"? What is
"reasonable care"? What is "prudent"? Was harm indeed "caused by the negligent
action"? … That judgments are required to figure out whether we deem an act culpable is
not the problem. The problem is guidance that suggests that a just culture only needs to
"clearly draw" a line between culpable and blameless behavior. Its problem lies in the
false assumption that acceptable or unacceptable behavior form stable categories with
immutable features that are independent of context, language or interpretation.
What ends up being labeled as culpable does not inhere in the act or the person. It is
constructed …
… deviance is created by society … social groups create deviance by making the rules
whose infraction constitutes deviance and by applying those rules to particular persons
If we see an act as a crime, then accountability means blaming and punishing somebody
for it. Accountability in that case is backward-looking, retributive. If, instead, we see the
act as an indication of an organizational, operational, technical, educational or political
issue, then accountability can become forward-looking. The question becomes: what
should we do about the problem and who should bear liability for implementing those
changes?
The past offers all kinds of opportunities to express and handle current issues, address
current concerns, accommodate current agendas. This makes it critical to consider who
owns the right to write history. Who has the power to tell a story of performance in such
a way—to use a particular rhetoric to describe it, ensuring that certain subsequent actions
are legitimate or even possible (e.g. pursuing a single culprit), and others not—so as to, in
effect, own the right to draw the line?
…judicial involvement (or the threat of it) can engender a climate of fear and silence
Moves to redirect the power to draw the line away from the judiciary can be met with
suspicions that operators want to blame ‘‘the system’’ when things go wrong, and that
they do not want to be held liable in the same way as other citizens
All safety–critical work is ultimately channeled through relationships between human
beings (such as in medicine), or through direct contact of some people with the risky
technology. At this sharp end, there is almost always a discretionary space into which no
system improvement can completely reach.
… space filled with ambiguity, uncertainty and moral choices. And a space that is
typically devoid of relevant or applicable guidance from the surrounding organization,
leaving the difficult calls up to the individual operator or crews. Systems cannot
substitute for the responsibility borne by individuals within that space. Individuals who work
in those systems would not even want that. The freedom (and concomitant responsibility)
that is left for them is what makes them and their work human, meaningful, a source of
pride.
organizations can do a number of things. One is to be clear about where that discretionary
space begins and ends…
Blame-free is not accountability-free. Equating blame-free systems with an absence of
personal accountability, as some do (e.g. Pellegrino 2004), is wrong. The kind of
accountability wrung out of practitioners in a trial is not likely to contribute to future
safety in their field, and in fact may hamper it. We can create such accountability not by
blaming people, but by getting people actively involved in the creation of a better system
to work in. Holding people accountable and blaming people are two quite different
things. Blaming people may in fact make them less accountable: they will tell fewer
accounts, and they may feel less compelled to have their voice heard or to participate in
improvement efforts… If, instead, we see the act as an indication of an organizational,
operational, technical, educational or political issue, then accountability can become
forward-looking (Sharpe 2003). The question becomes what we should do about the
problem and who should bear responsibility for implementing those changes.
First steps involve a normalization of incidents, so that they become a legitimate,
acceptable part of organizational development. Then, the organization must consider
what to do about the question "who gets to draw the line?", both inside its own operation
and in influencing the judicial climate surrounding it.
• An incident must not be seen as a failure or a crisis, neither by management, nor by
colleagues. An incident is a free lesson, a great opportunity to focus attention and to learn
collectively.
• Abolish financial and professional penalties (e.g. suspension) in the wake of an
occurrence. These measures render incidents as something shameful, to be kept
concealed, leading to the loss of much potential safety information and lack of trust.
• Monitor and try to prevent stigmatization of practitioners involved in an incident. They
should not be seen as a failure, or as a liability to work with by their colleagues.
• Implement, or review the effectiveness of, any debriefing programs or critical
incident/stress management programs the organization may have in place to help
practitioners after incidents. … helping practitioners see that incidents are "normal", that
they can help the organization get better, and that they can happen to everybody.
• Build a staff safety department, separate from the line organization, that deals with
incidents. The direct manager (supervisor) of the practitioner should not necessarily be
the one who is the first to handle the practitioner in the wake of an incident. Aim to
decouple an incident from what may look like a performance review or punitive
retraining of the practitioner involved.
• Start with building a just culture at the very beginning: during basic education and
training of the profession. Make trainees aware of the importance of reporting incidents
for a learning culture, and get them to see that incidents are not something individual or
shameful but a good piece of systemic information for the entire organization. Convince
new practitioners that the difference between a safe and an unsafe organization lies not
in how many incidents it has, but in how it deals with the incidents that its people
report.
• Ensure that practitioners know their rights and duties in relation to incidents. Make very
clear what can (and typically does) happen in the wake of an incident (e.g. to whom
practitioners are obliged to speak, and to whom not). A reduction in such uncertainty
can prevent practitioners from withholding valuable incident information because of
misguided fears or anxieties.
Second, the important discussion for an organization is who draws the line between
acceptable and unacceptable inside the organization (see notes about protecting
organizational data, and about talking with the prosecuting authority to get domain
expertise when deciding whether something is worth investigating/prosecuting).
Failure to adapt or adaptations that fail: contrasting models on procedures and
safety (Applied Ergonomics, 2003)
There is a persistent notion that not following procedures can lead to unsafe situations.
In the wake of failure it can be tempting to introduce new procedures or change existing
ones, or enforce stricter compliance…Introducing more procedures does not necessarily
avoid the next incident, nor do exhortations to follow rules more carefully necessarily
increase compliance or enhance safety.
real work takes place in a context of limited resources and multiple goals and pressures.
… procedures are inadequate to cope with local challenges and surprises, and because
procedures’ conception of work collides with the scarcity, pressure and multiple goals of
real work.
procedure-following can be antithetical to safety. In the 1949 US Mann Gulch disaster,
firefighters who perished were the ones sticking to the organizational mandate to carry
their tools everywhere
This, then, is the tension. Procedures are an investment in safety—but not always.
Procedures are thought to be required to achieve safe practice—yet they are not always
necessary, nor likely ever sufficient for creating safety. Procedures spell out how to do
the job safely— yet following all the procedures can lead to an inability to get the job
done.
adapting procedures to fit circumstances better is a substantive cognitive activity.
People at work must interpret procedures with respect to a collection of actions and
circumstances that the procedures themselves can never fully specify. In other words,
procedures are not the work itself.
For progress on safety, organizations must monitor and understand the reasons behind the
gap between procedures and practice. Additionally, organizations must develop ways that
support people’s skill at judging when and how to adapt.
…a fundamental double bind [exists] for those who encounter surprise and have to apply
procedures in practice:
* If rote rule following persists in the face of cues that suggest procedures should be
adapted, this may lead to unsafe outcomes. People can get blamed for their inflexibility;
their application of rules without sensitivity to context.
* If adaptations to unanticipated conditions are attempted without complete knowledge of
circumstance or certainty of outcome, unsafe results may occur too. In this case, people
get blamed for their deviations; their non-adherence. In other words, people can fail to
adapt, or attempt adaptations that may fail.
The double bind lays out the challenge for organizations wishing to make progress on
safety with procedures. Organizations need to:
* Monitor the gap between procedure and practice and try to understand why it exists
(and resist trying to close it by simply telling people to comply).
* Help people to develop skills to judge when and how to adapt (and resist telling people
only that they should follow procedures).
The gap between procedures and practice is not constant. After the creation of new work
(e.g. through the introduction of new technology), considerable time can go by before
applied practice stabilizes—likely at a distance from the rules as written for the system
"on the shelf". Social science has characterized this migration from tightly coupled
rules to more loosely coupled practice as "fine-tuning" or "practical drift".
The literature has identified important ingredients in the normalization of deviance,
which can help organizations understand the nature of the gap between procedures and
practice:
* Rules that are overdesigned (written for tightly coupled situations, for the "worst case") do not match actual work most of the time. In real work, there is time to recover,
opportunity to reschedule and get the job done better or more smartly. This mismatch
creates an inherently unstable situation that generates pressure for change.
* Emphasis on local efficiency or cost-effectiveness pushes operational people to achieve
or prioritize one goal or a limited set of goals (e.g. customer service, punctuality, capacity
utilization). Such goals are typically easily measurable (e.g. customer satisfaction, on-time performance), whereas it is much more difficult to measure how much is borrowed
from safety.
* Past success is taken as guarantee of future safety. Each operational success achieved at
incremental distances from the formal, original rules, can establish a new norm. From
here a subsequent departure is once again only a small incremental step (Vaughan, 1996).
From the outside, such fine-tuning constitutes incremental experimentation in
uncontrolled settings—on the inside, incremental non-conformity is not recognized as
such.
* Departures from the routine become routine. Seen from the inside of people’s own
work, violations become compliant behavior. They are compliant with the emerging,
local ways to accommodate multiple goals important to the organization (maximizing
capacity utilization but doing so safely; meeting technical requirements but also
deadlines).
Merely stressing the importance of following procedures can increase the number of
cases in which people fail to adapt in the face of surprise. Letting people adapt without
adequate skill or preparation, on the other hand, can increase the number of failed
adaptations. One way out of the double bind is to develop people’s skill at adapting. This
means giving them the ability to balance the risks between the two possible types of
failure: failing to adapt or attempting adaptations that may fail. It requires the
development of judgment about local conditions and the opportunities and risks they
present, as well as an awareness of larger goals and constraints that operate on the
situation. Development of this skill could be construed, to paraphrase Rochlin, as
planning for surprise. Indeed, as Rochlin has observed, the culture of safety in high
reliability organizations anticipates and plans for possible failures in “the continuing
expectation of future surprise.”
The question, then, is how to plan for surprise: how to help people develop skill at adapting
successfully.
In order to make progress on safety through procedures, organizations need to monitor
the gap between procedure and practice and understand the reasons behind it.
Eve and the Serpent: A Rational Choice to Err (2007)
When we conclude that a death should go onto somebody’s account, that somebody is to
blame for it, we give that death meaning. Death loses its meaninglessness and
randomness, as it was the result of negligence, of criminal behavior. And we can do
something about that (like punishing the culprit)… This tendency, to construct a narrative
of disaster in which somebody made a rational choice to err, has grown to be fundamental
to the Western regulative ideal of moral thinking.
The accounts of failure delivered to us by the criminal justice system, for example with
one culprit excised from a hugely complex, discontinuous process of healthcare delivery,
are often far from just. They are also considered bad for safety and quality efforts.
Criminalizing error erodes independent safety investigations, it promotes fear rather than
mindfulness in people practicing safety-critical work, it makes organizations more careful
in creating a paper trail, not more careful in doing their work, it discourages people from
shouldering safety-critical, caring jobs such as nursing, and it cultivates professional
secrecy, evasion, and self-protection. By making the justice system the main purveyor of
accountability, we are helping it create a climate in which freely telling
each other accounts of what happened (and what to do about it) becomes all but
impossible. By taking over the dispensing of accountability, legal systems slowly but
surely strangle it.
If we can make somebody bear the guilt for the outcome of choices that can be
constructed to have led to disaster, it at least gives us something to do. We can make
specific changes… By assuming that people had a rational choice to err, and by now
deeming them guilty of an amoral decision, we impose some kind of order, predictability,
preventability onto complex, confusing and threatening events.
Reconstructing human contributions to accidents: the new view on error and
performance (2002)
two different views on human error and the human contribution to accidents… "the old
view" … sees human error as a cause of failure:
* Human error is the cause of most accidents.
* The engineered systems in which people work are made to be basically safe; their
success is intrinsic. The chief threat to safety comes from the inherent unreliability of
people.
* Progress in safety can be made by protecting these systems from unreliable humans
through selection, proceduralization, automation, training, and discipline.
… "the new view" sees human error not as a cause, but as a symptom of failure:
* Human error is a symptom of trouble deeper inside the system.
* Safety is not inherent in systems. The systems themselves are contradictions between
multiple goals that people must pursue simultaneously. People have to create safety.
* Human error is systematically connected to features of people's tools, tasks, and
operating environment. Progress on safety comes from understanding and influencing
these connections.
The rationale is that human error is not an explanation for failure, but instead demands an
explanation.
When confronted by failure, it is easy to retreat into the old view: seeking out the "bad
apples" and assuming that with them gone, the system will be safer than before. An
investigation’s emphasis on proximal causes “ensures” that the mishap remains the result
of a few uncharacteristically ill-performing individuals who are not representative of the
system or the larger practitioner population in it. It leaves existing beliefs about the basic
safety of the system intact.
Faced with a bad, surprising event, people seem more willing to change the individuals in
the event, along with their reputations, rather than amend their basic beliefs about the
system that made the event possible. …reconstructing the human contribution to a
sequence of events that led up to an accident is not easy… to understand why people did
what they did, it is necessary to go back and triangulate and interpolate, from a wide
variety of sources, the kinds of mindsets that they had at the time.
investigators or outside observers … know more about the incident or accident than the
people who were caught up in it—thanks to hindsight:
* Hindsight means being able to look back, from the outside, on a sequence of events
that led to an outcome that has already happened;
* Hindsight allows almost unlimited access to the true nature of the situation that
surrounded people at the time (where they actually were vs. where they thought they
were; what state their system was in vs. what they thought it was in);
* Hindsight allows investigators to pinpoint what people missed and should not have
missed; what they did not do but should have done.
This contrasts fundamentally with the point of view of people who were inside the
situation as it unfolded around them. To them, the outcome was not known, nor the
entirety of surrounding circumstances. They contributed to the direction of the sequence
of events on the basis of what they saw and understood to be the case on the inside of the
evolving situation
Mechanism 1: making tangled histories linear by cherry-picking and re-grouping
evidence
The investigator treats the record as if it were a public quarry to pick stones from, and the
accident explanation the building he needs to erect. The problem is that each fragment is
meaningless outside the context that produced it: each fragment has its own story,
background, and reasons for being, and when it was produced it may have had nothing to
do with the other fragments it is now grouped with.
Mechanism 2: finding what people could have done to avoid the accident
Counterfactuals prove what could have happened if certain minute and often utopian
conditions had been met. Counterfactual reasoning may be a fruitful exercise when trying
to uncover potential countermeasures against such failures in the future.
However, saying what people could have done in order to prevent a particular outcome
does not explain why they did what they did. This is the problem with counterfactuals.
When they are enlisted as explanatory proxy, they help circumvent the hard problem of
investigations: finding out why people did what they did. Stressing what was not done
(but if it had been done, the accident would not have happened) explains nothing about
what actually happened, or why.
… counterfactuals are a powerful tributary to the hindsight bias. They help us impose
structure and linearity on tangled prior histories. Counterfactuals can convert a mass of
indeterminate actions and events, themselves overlapping and interacting, into a linear
series of straightforward bifurcations.
To the people caught up in the sequence of events, there was likely no compelling reason
at all to re-assess a situation or decide against anything (or else they probably would
have) at the point the investigator has now found significant or controversial. They were
likely doing what they were doing because they thought they were right, given their
understanding of the situation and their pressures.
Mechanism 3: judging people for what they did not do but should have done
Recognizing that there is a mismatch between what was done or seen and what should
have been done or seen—as per those standards—it is easy to judge people for not doing
what they should have done.
Not … very informative. There is virtually always a mismatch between actual behavior
and written guidance that can be located in hindsight. Pointing out that there is a mismatch
sheds little light on the why of the behavior in question. … mismatches between
procedures and practice are not unique to mishaps.
A standard response after mishaps: point to the data that would have revealed the true
nature of the situation. Knowledge of the "critical" data comes only with the
omniscience of hindsight.
While micro-matching, the investigator frames people’s past assessments and actions
inside a world that s/he has invoked retrospectively… Judging people for what they did
not do relative to some rule or standard does not explain why they did what they did.
Saying that people failed to take this or that pathway—only in hindsight the right one—
judges other people from a position of broader insight and outcome knowledge that they
themselves did not have. It does not explain a thing; it does not shed any light on why
people did what they did given their surrounding circumstances.
It appears that in order to explain failure, we seek failure. In order to explain missed
opportunities and bad choices, we seek flawed analyses, inaccurate perceptions, violated
rules—even if these were not thought to be influential or obvious or even flawed at the
time… effect of the hindsight bias: knowledge of outcome fundamentally influences how
we see a process. If we know the outcome was bad, we can no longer objectively look at
the behavior leading up to it—it must also have been bad.
Local rationality
What is striking about many accidents in complex systems is that people were doing
exactly the sorts of things they would usually be doing—the things that usually lead to
success and safety. Mishaps are more typically the result of everyday influences on
everyday decision making than they are isolated cases of erratic individuals behaving
unrepresentatively… People are doing what makes sense given the situational
indications, operational pressures, and organizational norms existing at the time.
Accidents are seldom preceded by bizarre behavior. People’s errors and mistakes (such as
there are in any objective sense) are systematically coupled to their circumstances and
tools and tasks. Indeed, a most important empirical regularity of human factors research
since the mid-1940s is the local rationality principle. What people do makes sense to
them at the time—it has to, otherwise they would not do it. People do not come to work
to do a bad job; they are not out to crash cars or airplanes or ground ships. The local
rationality principle, originating in Simon (1969), says that people do things that are
reasonable, or rational, based on their limited knowledge, goals, and understanding of the
situation and their limited resources at the time.
The question is not "where did people go wrong?" but "why did this assessment or
action make sense to them at the time?" Such real insight is derived not from judging
people from the position of retrospective outsider, but from seeing the world through the
eyes of the protagonists at the time. When looking at the sequence of events from this
perspective, a very different story often struggles into view.
Reconstruction of unfolding mindset… Five steps are presented below that the investigator
could use to begin to reconstruct a concept-dependent account from context-specific
incident data.
Step 1: laying out the sequence of events in context-specific language… examine how
people’s mindset unfolded parallel with the situation evolving around them, and how
people, in turn, helped influence the course of events… Cues and indications from the
world influence people’s situation assessments, which in turn inform their actions, which
in turn change the world and what it reveals about itself…
Step 2: divide the sequence of events into episodes, if necessary
Accidents do not just happen; they evolve over a period of time. Sometimes this time
may be long, so it may be fruitful to divide the sequence of events into separate episodes
that each deserve their own further human performance analysis
Step 3: find out how the world looked or changed during each episode
This step is about reconstructing the unfolding world that people inhabited: finding out
what their process was doing and what data was available.
building these pictures is often where investigations stop today.
The difficulty (reflected in the next step) will be to move from merely showing that
certain data was physically available, to arguing which of these data was actually
observable and made a difference in people’s assessments and actions—and why this
made sense to them back then.
Step 4: identify people’s goals, focus of attention and knowledge active at the time
So, out of all the data available, what did people actually see and how did they interpret it?
It is seldom the case, however, that just one goal governs what people do. Most complex
work is characterized by multiple goals, all of which are active or must be pursued at the
same time (e.g., on-time performance and safety). Depending on the circumstances, some
of these goals may be at odds with one another, producing goal conflicts. Any analysis of
human performance has to take the potential for goal conflicts into account.
What people know and what they try to accomplish jointly determines where they will
look; where they will direct their attention—and consequently, which data will be
observable to them. Recognize how this is, once again, the local rationality principle.
People are not unlimited cognitive processors (there are no unlimited cognitive
processors in the entire universe). People do not know and see everything all the time. So
their rationality is limited, or bounded.
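
The four reconstruction steps above lend themselves to a structured way of recording incident data. The sketch below is purely illustrative (the class names, fields, and helper are hypothetical, not from Dekker's text): it shows one way to keep, per episode, the cues that were physically available, the cues that were actually observed, and the assessments, actions, and goals active at the time, so the analysis stays anchored to the inside perspective rather than to outcome knowledge.

```python
# Hypothetical sketch only: one way to organize incident data along the
# reconstruction steps noted above. Names and fields are illustrative.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Cue:
    description: str            # context-specific language (Step 1)
    physically_available: bool  # data present in the world (Step 3)
    observed: bool              # data actually attended to (Step 4)


@dataclass
class Episode:
    label: str                                             # e.g. one handover or transcription episode (Step 2)
    cues: List[Cue] = field(default_factory=list)
    assessments: List[str] = field(default_factory=list)   # how the situation was understood
    actions: List[str] = field(default_factory=list)       # what people did next
    active_goals: List[str] = field(default_factory=list)  # possibly conflicting goals (Step 4)


@dataclass
class IncidentTimeline:
    episodes: List[Episode] = field(default_factory=list)

    def unobserved_but_available(self) -> List[str]:
        """Cues that were physically available but not attended to."""
        return [
            cue.description
            for episode in self.episodes
            for cue in episode.cues
            if cue.physically_available and not cue.observed
        ]
```

The only purpose of a helper like unobserved_but_available would be to generate "why did this make sense to them at the time?" questions; treating its output as a list of things people "missed" would reintroduce exactly the hindsight bias the article warns against.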
From Punitive Action to Confidential Reporting: A Longitudinal Study of
Organizational Learning from Incidents (Patient Safety & Quality Healthcare,
September/October 2007)
trace a safety-critical organization over a period of 2 years as it attempted to convert from
line-management-driven punitive incident responses to a confidential reporting system
run by the safety staff.
More seemed at play. Before the transition, employees actually turned out to be very ready to
confess an “error” or “violation” to their line manager… Fear of retribution … did not
necessarily discourage reporting. In fact, it encouraged a particular kind of reporting: a
mea culpa with minimal disclosure that would get it over with quickly for everybody.
“Human error” as cause seemed to benefit everyone—except organizational learning.
What was lacking was the notion that organizational learning through reporting happens by
identifying systemic vulnerabilities that all operators could be exposed to, not by telling
everybody to pay more attention because somebody did, on one occasion, not do so. Only
by constantly seeking out its vulnerabilities can an organization develop and test more
robust practices to enhance safety.
After the transition, such individually oriented countermeasures became rare. Incident
reports and investigations came up with deeper sets of contributory factors that could not be
ignored and that took line management into different areas than before. Learning became
possible because systemic vulnerabilities had been identified, reported, studied,
contextualized, and checked against operational expertise.
the chief reason why operators’ willingness to report went up [following the conversion
to confidential reporting] was not the lack of retribution, but rather the realization that
they could “make a difference.” Giving operators the leverage and initiative to help
achieve safety gains turned out to be a large motivator to report. It gave them part
ownership in the organization’s safety record.
Nine Steps to Move Forward from Error (Woods and Cook), Cognition, Technology
& Work (2002)
Dramatic and celebrated failures are dreadful events that lead stakeholders to question
basic assumptions about how the system, under pressure to achieve new levels of
performance and to utilize costly resources more efficiently, works and sometimes breaks
down. … it is very difficult for these stakeholders in high-risk industries to make
substantial investments to improve safety [since] common beliefs and fallacies about
human performance and about how systems fail undermine the ability to move forward.
The authors use generalizations from the research base about how complex systems fail
and how people contribute to safety as a guide for stakeholders when celebrated failures
or other developments create windows of opportunity for change and investment.
1. Pursue second stories beneath the surface to discover multiple contributors.
(First stories, biased by hindsight, are overly simplified accounts of the apparent ‘cause’ of
the undesired outcome. They are appealing because they are easy to tell and locate the
important ‘cause’ of failure in practitioners closest to the outcome. First stories appear in
the press and usually drive the public, legal, and regulatory reactions to failure.
Unfortunately, first stories simplify the dilemmas, complexities, and difficulties
practitioners face and hide the multiple contributors and deeper patterns. The distorted
view leads to proposals for ‘solutions’ that are weak or even counterproductive and
blocks the ability of organisations to learn and improve.)
(Second stories…make different attributions to find out why things go wrong. They
reveal the multiple conflicting goals, pressures, and systemic vulnerabilities beneath the
“error” that everybody in the system is exposed to. Second stories use human error as a
starting point, not as a conclusion.)
2. Escape the hindsight bias.
3. Understand work as performed at the sharp end of the system.
(The substance of the second story resides at the sharp end of the system as
organisational, economic, human and technological factors play out to create outcomes.
Sharp end practitioners who work in this setting face a variety of difficulties,
complexities, dilemmas and trade-offs and are called on to achieve multiple, often
conflicting, goals. Safety is created here at the sharp end as practitioners interact with the
hazardous processes inherent in the field of activity in the face of the multiple demands
and using the available tools and resources. Improving safety depends on investing in
resources that support practitioners in meeting the demands and overcoming the inherent
hazards in that setting.
doing technical work expertly is not the same thing as expert understanding of the basis
for technical work. This means that practitioners’ descriptions of how they accomplish
their work are often biased and cannot be taken at face value.)
4. Search for systemic vulnerabilities.
(A repeated finding from research on complex systems is that practitioners and
organisations have opportunities to recognise and react to threats to safety. Precursor
events may serve as unrecognised ‘dress rehearsals’ for future accidents. The accident
itself often evolves through time so that practitioners can intervene to prevent negative
outcomes or to reduce their consequences. Doing this depends on being able to recognise
accidents-in-the-making.
establishing a flow of information about systemic vulnerabilities is quite difficult because
it is frightening to consider how all of us, as part of the system of interest, can fail.
Repeatedly, research notes that blame and punishment will drive this critical information
underground. Without a safety culture, systemic vulnerabilities become visible only after
catastrophic accidents. In the aftermath of accidents, learning also is limited because the
consequences provoke first stories, simplistic attributions and shortsighted fixes.
examine how the organisation at different levels of analysis supports or fails to support
the process of feedback, learning and adaptation.)
5. Study how practice creates safety.
(all systems confront inherent hazards and trade-offs, and are vulnerable to failure. Second
stories reveal how practice is organised to allow practitioners to create success in the face
of threats. Individuals, teams and organisations are aware of hazards and adapt their
practices and tools to guard against or defuse these threats to safety. It is these efforts that
‘make safety’.)
6. Search for underlying patterns.
(build knowledge about how people contribute to safety and failure, and about how complex
systems fail, by addressing the factors at work [in a particular setting].)
7. Examine how change will produce new vulnerabilities and paths to failure.
(the basic pattern in complex systems is a drift toward failure as planned defences erode
in the face of production pressures and change. As a result, when we examine technical
work in context, we need to understand how economic, organisational and technological
change can create new vulnerabilities in spite of or in addition to providing new benefits.)
8. Use new technology to support and enhance human expertise.
(the idea that ‘a little more technology will be enough’ has not turned out to be the case
in practice… computerisation can simply exacerbate or create new forms of complexity to
plague operations… We can achieve substantial gains by understanding the factors that
lead to expert performance and the factors that challenge expert performance. This
provides the basis to change the system, for example, through new computer support
systems and other ways to enhance expertise in practice.)
9. Tame complexity through new forms of feedback.
(The theme that leaps out from past results is that failure represents breakdowns in
adaptations directed at coping with complexity… Recovery before negative
consequences occur, adapting plans to handle variations and surprise, and recognising
side effects of change are all critical to high resilience in human and organisational
performance… Improving feedback is a critical investment area for improving human
performance and guarding against paths toward failure. The constructive response to
issues on safety is to study where and how to invest in better feedback… organisations
need to develop and support mechanisms that create foresight about the changing shape
of risks, before anyone is injured.)