SOCIETY FOR ACADEMIC EMERGENCY MEDICINE ANNUAL MEETING
ABSTRACTS - 2012
The editors of Academic Emergency Medicine (AEM) are honored to present these abstracts accepted
for presentation at the 2012 annual meeting of the Society for Academic Emergency Medicine (SAEM),
May 9 to 12 in Chicago, Illinois. These abstracts represent countless hours of labor, exciting intellectual
discovery, and unending dedication by our specialty’s academicians. We are grateful for their consistent
enthusiasm, and are privileged to publish these brief summaries of their research.
This year, SAEM received 1172 abstracts for consideration, and accepted 746. Each abstract was independently reviewed by up to six dedicated topic experts blinded to the identity of the authors. Final
determinations for scientific presentation were made by the SAEM Program Scientific Subcommittee
co-chaired by Ali S. Raja, MD, MBA, MPH and Steven B. Bird, MD, and the SAEM Program Committee,
chaired by Michael L. Hochberg, MD. Their decisions were based on the final review scores and the time
and space available at the annual meeting for oral and poster presentations. There were also 125 Innovation in Emergency Medicine Education (IEME) abstracts submitted, of which 37 were accepted. The
IEME Subcommittee was co-chaired by JoAnna Leuck, MD and Laurie Thibodeau, MD.
We present these abstracts as they were received, with minimal proofreading and copy editing. Any
questions related to the content of the abstracts should be directed to the authors. Presentation numbers precede the abstract titles; these match the listings for the various oral and poster sessions at the
annual meeting in Chicago, as well as the abstract numbers (not page numbers) shown in the key word
and author indexes at the end of this supplement. All authors attested to institutional review board or
animal care and use committee approval at the time of abstract submission, when relevant. Abstracts
marked as ‘‘late-breakers’’ are prospective research projects that were still in the process of data collection at the time of the December abstract deadline, but were deemed by the Scientific Subcommittee to
be of exceptional interest. These projects will be completed by the time of the annual meeting; data
shown here may be preliminary or interim.
On behalf of the editors of AEM, the membership of SAEM, and the leadership of our specialty, we
sincerely thank our research colleagues for these contributions, and their continuing efforts to expand
our knowledge base and allow us to better treat our patients.
David C. Cone, MD
Editor-in-Chief
Academic Emergency Medicine
1
Policy-driven Improvements In Crowding:
System-level Changes Introduced By
A Provincial Health Authority And Its Impact
On Emergency Department Operations In
15 Centers
Grant Innes1, Andrew McRae1, Brian Holroyd2,
Brian Rowe2, Christian Schmid3, MingFu Liu3,
Lester Mercuur1, Nancy Guebert3, Dongmei
Wang3, Jason Scarlett3, Eddy S. Lang1
1University of Calgary, Calgary, AB, Canada;
2University of Alberta, Edmonton, AB, Canada;
3Alberta Health Services, Calgary, AB, Canada
Background: System-level changes that target both ED
throughput and output show the most promise in alleviating crowding. In December 2010, Alberta Health Services
(AHS) implemented a province-wide hospital overcapacity
protocol (OCP) structured upon the Viccellio model.
Objectives: We sought to determine whether the OCP
resulted in a meaningful and sustained improvement in
ED throughput and output metrics.
Methods: A prospective pre-post experimental study
was conducted using administrative data from 15 community and tertiary centers across the province. The
study phases consisted of the 8 months from February
to September 2010 compared against the same months
in 2011. Operational data for all centres were collected
through the EDIS tracking systems used in the province. The OCP included 3 main triggers: ED bed occupancy >110%, at least 35% of ED stretchers blocked by
patients awaiting inpatient bed or disposition decision,
and no stretcher available for high acuity patients.
When all criteria were met, selected boarded patients
were moved to an inpatient unit (non-traditional care
space if no bed available). The primary outcome was
ED length of stay (LOS) for admitted patients. The ED
load of boarded patients from 10–11 am was reported
in patient-hours (pt-hrs). Throughput is reported as
time from ED arrival to physician assessment and percent left without being seen (LWBS). Continuous variables were compared with the Student’s t-test.
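The three OCP triggers amount to a conjunction of simple thresholds. A minimal sketch of that trigger logic in Python (the snapshot structure and field names are hypothetical; the thresholds are those stated in the Methods above):

```python
from dataclasses import dataclass

@dataclass
class EDSnapshot:
    occupancy: float                  # ED bed occupancy, e.g. 1.15 = 115%
    boarded_fraction: float           # fraction of stretchers blocked by boarded patients
    high_acuity_stretcher_free: bool  # any stretcher available for high acuity patients?

def ocp_triggered(ed: EDSnapshot) -> bool:
    """OCP fires only when all three criteria from the Methods are met."""
    return (ed.occupancy > 1.10
            and ed.boarded_fraction >= 0.35
            and not ed.high_acuity_stretcher_free)

# Example: 120% occupancy, 40% of stretchers boarded, no high-acuity stretcher free.
print(ocp_triggered(EDSnapshot(1.20, 0.40, False)))  # True
```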
Results: The volume of ED patients across all sites
increased by 6.3% from the pre phase to the post phase
(579071 vs. 615787; p < 0.001) while admission rates
remained constant (12.9% vs. 13.1%; p = NS). ED LOS
for admitted patients decreased from 17.2 hours to
11.6 hours (p < 0.001); the load of admitted patients at
10 am declined from 11.3 pt-hrs to 6.1 (p < 0.001). Average time from ED arrival to physician assessment
decreased in the post phase (113.2 vs. 99.3 minutes;
p < 0.001) as did % LWBS (4.0% vs 3.8%; p < 0.001). All
OCP effects remained constant over time; however,
there were regional disparities in its impact.
Conclusion: Policy-driven changes in ED and hospital
operations were associated with significant improvements in both throughput and output metrics, despite
increased input. Which components of the OCP program had the greatest impact remain uncertain, as do the explanations for the differential regional impact.
2
Prevalence of Non-convulsive Seizure
and Other Electroencephalographic
Abnormalities In Emergency Department
Patients With Altered Mental Status
Shahriar Zehtabchi1, Arthur C. Grant2,
Samah G. Abdel Baki3, Omurtag Ahmet3,
Richard Sinert1, Geetha Chari2, Shweta
Malhotra1, Jeremy Weedon4, Andre A. Fenton5
1Department of Emergency Medicine, State
University of New York, Downstate Medical
Center, Brooklyn, NY; 2Department of Neurology,
State University of New York, Downstate Medical
Center, Brooklyn, NY; 3Biosignal Group Inc.,
Brooklyn, NY; 4Scientific Computing Center, State
University of New York, Downstate Medical
Center, Brooklyn, NY; 5Center for Neural Science,
New York University, New York, NY
Background: Two to ten percent of patients evaluated
in the emergency departments (ED) present with altered
mental status (AMS). The prevalence of non-convulsive
seizure (NCS) and other electroencephalographic (EEG)
abnormalities in this population is not known. This information is needed to make recommendations regarding
the routine use of emergent EEG in AMS patients.
Objectives: To identify the prevalence of NCS and
other EEG abnormalities in ED patients with AMS.
Methods: An ongoing prospective study at two academic urban EDs. Inclusion: Patients ≥13 years old with
AMS. Exclusion: An easily correctable cause of AMS
(e.g. hypoglycemia, opioid overdose). A 30-minute EEG
with the standard 19 electrodes was performed on each
subject as soon as possible after presentation (usually
within 1 hour). Outcome: The rate of EEG abnormalities
based on blinded review of all EEGs by two board-certified epileptologists. Descriptive statistics are used
to report EEG findings. Frequencies are reported as
percentages with 95% confidence intervals (CI), and
inter-rater variability is reported with kappa.
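For readers who want to reproduce the interval estimates below, a brief sketch of a 95% CI for a reported frequency (the abstract does not state which interval method was used; the Wilson score interval shown here is one standard choice):

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a proportion of k events in n subjects."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# NCS read by at least one rater in 8 of 110 interpretable EEGs:
lo, hi = wilson_ci(8, 110)
print(f"{lo:.2f}-{hi:.2f}")  # ~0.04-0.14, matching the reported 95%CI 4-14%
```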
Results: The interim analysis was performed on 130
consecutive patients (target sample size: 260) enrolled
from May to October 2011 (median age: 61, range
13–100, 40% male). EEGs for 20 patients were reported
uninterpretable by at least one rater (6 by both raters).
Of the remaining 110, only 30 (27%, 95%CI 20–36%)
were normal according to either rater (n = 15 by both).
The most common abnormality was background slowing
(n = 75, 68%, 95%CI 59–76%) by either rater (n = 47 by
both), indicating underlying encephalopathy. NCS was
diagnosed in 8 patients (7%, 95%CI, 4–14%) by at least
one rater (n = 4 by both), including 6 (5%, 95%CI 2–12%)
patients in non-convulsive status epilepticus (NCSE). 29
patients (26%, 95%CI 19–35%) had interictal epileptiform
discharges read by at least one rater (n = 12 by both)
indicating cortical irritability and an increased risk of
spontaneous seizure. Inter-rater reliability for EEG interpretations was modest (kappa: 0.53, 95%CI 0.39–0.67).
Conclusion: ED patients with AMS have a high prevalence of EEG abnormalities, including encephalopathy
and NCS. ED physicians should have a high index of
suspicion for such pathologies in AMS patients. EEG is
necessary to make the diagnosis of NCS/NCSE, for
which early treatment could significantly reduce
morbidity. (Originally submitted as a ‘‘late-breaker.’’)
3
RNA Transcriptional Profiling for Diagnosis
of Serious Bacterial Infections (SBIs) in
Young Febrile Infants
P. Mahajan1, N. Kuppermann2, A. Mejias3,
D. Chaussabel4, T. Casper5, B. Dimo6, H.
Gramse5, O. Ramilo6
1Children’s Hospital of Michigan, Detroit, MI;
2University of California, Davis School of
Medicine, Davis, CA; 3Nationwide Children’s
Hospital, Columbus, OH; 4Benaroya Research
Institute, Seattle, WA; 5University of Utah, Salt
Lake City, UT; 6Nationwide Children’s Hospital,
Columbus, OH
Background: Genomic technologies allow us to
determine the etiology of infection by evaluating
specific host responses (RNA biosignatures) to different
pathogens and have the potential to replace cultures of
relevant body fluids as the reference standard.
Objectives: To define diagnostic SBI and non-bacterial
(non-SBI) biosignatures using RNA microarrays in febrile infants presenting to emergency departments (EDs).
Methods: We prospectively collected blood for RNA
microarray analysis in addition to routine screening
tests including white blood cell (WBC) counts, urinalyses, cultures of blood, urine, and cerebrospinal fluid,
and viral studies in febrile infants ≤60 days of age in
22 EDs (2008–09). We defined SBI as bacteremia, urinary tract infection (UTI), or bacterial meningitis. We
used class comparisons (Mann-Whitney p < 0.01, Benjamini for MTC and 1.25 fold change filter), modular
gene analysis, and K-NN algorithms to define and validate SBI and non-SBI biosignatures in a subset of
samples.
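As a rough illustration of the classification step, a sketch of a K-NN model applied to an expression matrix already reduced to discriminatory genes (scikit-learn; the data, the choice of k, and the preprocessing are placeholders, since the abstract does not specify them):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Hypothetical stand-ins: 47 training samples x 33 discriminatory genes.
X_train = rng.normal(size=(47, 33))
y_train = np.array([0] * 30 + [1] * 17)   # 0 = non-SBI, 1 = SBI
X_test = rng.normal(size=(94, 33))
y_test = np.array([0] * 76 + [1] * 18)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
# Toy data; the abstract reports 87% accuracy on its real test set.
print(f"test accuracy: {knn.score(X_test, y_test):.2f}")
```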
Results: 81% (939/1162) of febrile infants were
evaluated for SBI. 6.8% (64/939) had SBI (14 (1.5%) bacteremia, 56 (6.0%) UTIs, and 4 (0.4%) bacterial meningitis). Infants with SBIs had higher mean temperatures,
and higher WBC, neutrophil, and band counts. We analyzed RNA biosignatures on 141 febrile infants: 35 SBIs
(2 meningitis, 5 bacteremia, 28 UTI), 106 non-SBIs (49
influenza, 29 enterovirus, 28 undefined viral infections),
and 11 healthy controls. Class comparisons identified
1,288 differentially expressed genes between SBIs and
non-SBIs. Modular analysis revealed overexpression of
interferon related genes in non-SBIs and inflammation
related genes in SBIs. 232 genes were differentially
expressed (p < 0.01) in each of the three non-SBI groups
vs SBI group. Unsupervised cluster analysis of these 232
genes correctly clustered 91% (128/141) of non-SBIs and
SBIs. A K-NN algorithm identified 33 discriminatory genes in the training set (30 non-SBIs vs 17 SBIs), which classified an independent test set (76 non-SBIs vs 18 SBIs)
with 87% accuracy. Four misclassified SBIs had
over-expression of interferon-related genes, suggesting
viral-bacterial co-infections, which was confirmed in
one patient.
Conclusion: Analysis on this initial sample set confirms
the potential of RNA biosignatures for distinguishing
young febrile infants with SBIs vs those without bacterial
illness. Funded by HRSA/MCHB grant H34MC16870.
4
Saving Maternal, Newborn, and Child Lives
in Developing Countries: Evaluation of a
Novel Training Package among Frontline
Health Workers in South Sudan
Brett D. Nelson, Roy Ahn, Maya Fehling, Melody
J. Eckardt, Kathryn L. Conn, Alaa El-Bashir,
Margaret Tiernan, Thomas F. Burke
Massachusetts General Hospital, Boston, MA
Background: Improving maternal, newborn, and child
health (MNCH) is a leading priority worldwide. However, limited frontline health care capacity is a major
barrier to improving MNCH in developing countries.
Objectives: We sought to develop, implement, and
evaluate an evidence-based Maternal, Newborn, and
Child Survival (MNCS) package for frontline health
workers (FHWs). We hypothesized that FHWs could be
trained and equipped to manage and refer the leading
MNCH emergencies.
Methods: SETTING - South Sudan, which suffers
from some of the world’s worst MNCH indices.
ASSESSMENT/INTERVENTION - A multi-modal needs
assessment was conducted to develop a best-evidence
package comprised of targeted trainings, pictorial
checklists, and reusable equipment and commodities
(Figure 1). Program implementation utilized a training-of-trainers model. EVALUATION - 1) Pre/post knowledge
assessments, 2) pre/post objective structured clinical
examinations (OSCEs), 3) focus group discussions, and
4) closed-response questionnaires.
Results: Between Nov 2010 and Oct 2011, 72 local trainers and 708 FHWs were trained in 7 of the 10 states in
South Sudan. Knowledge assessments among trainers
(n = 57) improved significantly from 62.7% (SD 20.1) to
92.0% (SD 11.8) (p < 0.001). Mean scores on a maternal
OSCE and a newborn OSCE pre-training, immediately
post-training, and upon 2–3 month follow-up are shown
in the table. Closed-response questionnaires with 54
FHWs revealed high levels of satisfaction, use, and confidence with MNCS materials. Participants reported an
average of 3.0 referrals (range 0–20) to a higher level of
care in the 2–3 months since training. Furthermore,
78.3% of FHWs reported being more likely to refer patients as a
result of the training program. During seven focus
group discussions with trained FHWs, respondents
(n = 41) reported high satisfaction with MNCS trainings, commodities, and checklists, with few barriers to
implementation or use.
Conclusion: These findings suggest MNCS has led to
improvements in South Sudanese FHWs’ knowledge,
skills, and referral practices with respect to appropriate
management of MNCH emergencies.
Table - Abstract 4: FHW skills tests before training, after training, and at 2–3 months follow-up

                         Pre-training   Post-training   Follow-up
                         mean % (SD)    mean % (SD)     mean % (SD)
Maternal OSCE (n = 55)   21.1 (13.8)    83.4 (21.5)     61.5 (25.8)
Newborn OSCE (n = 54)    41.6 (16.5)    89.8 (14.0)     45.7 (23.1)
5
Whole Blood Lactate Kinetics in Patients
Undergoing Quantitative Resuscitation for
Septic Shock
Alan E. Jones1, Michael Puskarich1, Stephen
Trzeciak2, Nathan Shapiro3, Jeffrey Kline4
1University of Mississippi Medical Center,
Jackson, MS; 2Cooper University Hospital,
Camden, NJ; 3BIDMC, Boston, MA; 4Carolinas
Medical Center, Charlotte, NC
Background: Previous studies have suggested lactate
clearance as an endpoint of early sepsis resuscitation.
No study has compared various lactate measurements
to determine the optimal parameter to target.
Objectives: To compare the association of blood lactate kinetics with survival in patients with septic shock
undergoing early quantitative resuscitation.
Methods: Preplanned analysis of a multicenter ED-based RCT of early sepsis resuscitation targeting three
physiological variables: CVP, MAP, and either central
venous oxygen saturation or lactate clearance. Inclusion
criteria: suspected infection, two or more SIRS criteria,
and either SBP <90 mmHg after a fluid bolus or lactate
>4 mmol/L. All patients had an initial lactate measured
with repeat at two hours. Normalization of lactate was
defined as a lactate decline to <2.0 mmol/L in a patient with an initial lactate ≥2.0. Absolute lactate clearance (initial - delayed value) and relative clearance ((absolute clearance)/(initial value) × 100) were calculated if the initial lactate was ≥2.0. The outcome was in-hospital survival. Receiver operating characteristic curves were constructed and areas under the curve (AUC) were calculated. Differences in proportions of survival between the two groups at different lactate cutoffs were analyzed using 95% CIs and Fisher exact tests.
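The three lactate parameters compared here follow directly from the definitions above; a minimal sketch (Python, with the 2.0 mmol/L thresholds taken from the Methods):

```python
def lactate_kinetics(initial: float, repeat: float):
    """Absolute clearance, relative clearance (%), and normalization,
    per the Methods; defined only when initial lactate is >= 2.0 mmol/L."""
    if initial < 2.0:
        return None
    absolute = initial - repeat          # initial - delayed value
    relative = absolute / initial * 100  # percent of initial
    normalized = repeat < 2.0            # declined to < 2.0 mmol/L
    return absolute, relative, normalized

print(lactate_kinetics(4.0, 1.8))  # (2.2, 55.0, True): >50% clearance and normalized
```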
Results: Of 272 included patients, the median initial
lactate was 3.1 mmol/L (IQR 1.7, 5.8), and the median
absolute and relative lactate clearance were 1 mmol/L
(IQR 0.3, 2.5) and 37% (IQR 14, 57). An initial lactate
>2.0 mmol/L was seen in 187/272 (69%), and 68/187
(36%) patients normalized their lactate. Overall mortality
was 19.7%. AUCs for initial lactate, relative lactate clearance, and absolute lactate clearance were 0.70, 0.69, and
0.58. Lactate normalization best predicted survival (OR
6.1, 95% CI 2.2–21), followed by lactate clearance of 50%
(OR 4.3, 95% CI 1.8–10.3), initial lactate of <2 mmol/L
(OR 3.4, 95% CI 1.5–7.8), and initial lactate <4 mmol/L
(OR 2.3, 95% CI 1.3–4.3), with lactate clearance of 10%
not reaching significance (OR 2.3, 95% CI 0.96–5.6).
Conclusion: In ED sepsis patients undergoing early
resuscitation, normalization of lactate during resuscitation was more strongly associated with survival than
any absolute value or absolute/relative change in lactate. Further studies should address if targeting lactate
normalization leads to improved outcomes.
6
A Comparison of Cosmetic Outcomes of
Lacerations of the Trunk and Extremity
Repaired Using Absorbable Versus
Nonabsorbable Sutures
Cena Tejani1, Adam Sivitz1, Michael Rosen1,
Albert Nakanishi2, Robert Flood2, Matthew
Clott1, Paul Saccone1, Raemma Luck3
1Newark Beth Israel Hospital, Newark, NJ;
2Cardinal Glennon Children’s Medical Center,
St. Louis, MO; 3Saint Christopher’s Hospital for
Children, Philadelphia, PA
Background: Although prior studies have compared
the use of absorbable versus nonabsorbable sutures for
traumatic lacerations, most of these studies have focused
on facial lacerations. A review of the literature indicates
that there are no randomized prospective studies to date
that have looked exclusively at the cosmesis of absorbable
sutures on trunk and extremity lacerations that present in
the ED. The use of absorbable sutures in the ED setting
confers several advantages: patients do not need to
return for suture removal, which reduces ED crowding, ED wait times, missed work or school days, and spares children a stressful procedure.
Objectives: The primary objective of this study is to
compare the cosmetic outcome of trunk and extremity
lacerations repaired using absorbable versus nonabsorbable sutures in children and adults. A secondary
objective is to compare complication rates between the
two groups.
Methods: Eligible patients with lacerations were randomly allocated to have their wounds repaired with
Vicryl Rapide (absorbable) or Prolene (nonabsorbable)
sutures. At a 10 day follow-up visit the wounds were
evaluated for infection and dehiscence. After 3 months,
patients were asked to return to have a photograph of
the wound taken. Two blinded plastic surgeons using a
previously validated 100 mm visual analogue scale
(VAS) rated the cosmetic outcome of each wound. A
VAS score of 15 mm or greater was considered to be a
clinically significant difference.
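A sketch of the planned primary comparison: the difference in mean VAS scores with a 95% CI, judged against the 15 mm threshold (SciPy ≥ 1.10 for the confidence_interval call; the two arrays are hypothetical placeholders, not study data):

```python
import numpy as np
from scipy import stats

vas_vicryl = np.array([55.0, 60.0, 48.0, 62.0])   # placeholder VAS scores, mm
vas_prolene = np.array([54.0, 58.0, 50.0, 61.0])

diff = vas_vicryl.mean() - vas_prolene.mean()
res = stats.ttest_ind(vas_vicryl, vas_prolene, equal_var=False)
ci = res.confidence_interval(0.95)
print(diff, (ci.low, ci.high))
print("clinically significant:", abs(diff) >= 15.0)  # 15 mm threshold from the Methods
```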
Results: Of the 100 patients enrolled, 45 have currently completed the study including 19 in the Vicryl
Rapide group and 26 in the Prolene group. There were
no significant differences in the age, race, sex, length of
wound, number of sutures, or layers of repair in the
two groups. The observers’ mean VAS for the Vicryl
Rapide group was 55.76 mm (95%CI 41.95–69.57) and
that for the Prolene group was 55.9 mm (95%CI 44.77–
67.03), resulting in a mean difference of 0.14 mm
(95%CI -16.95 to 17.23, p = 0.98). There were no significant differences in the rates of infection, dehiscence, or
keloid formation between the two groups.
Conclusion: The use of Vicryl Rapide instead of nonabsorbable sutures for the repair of lacerations on the
trunk and extremities should be considered by emergency physicians as it is an alternative that provides a
similar cosmetic outcome.
7
Minimally Invasive Burn Care: A Report Of
Six Clinical Studies Of Rapid And Selective
Debridement Using A Bromelain-based
Debriding Gel Dressing
Lior Rosenberg1, Yuval Krieger1,
Eldad Silberstein1, Alex Bogdanov-Berezovsky1,
Ofer Arnon1, Yaron Shoam1, Nir Rosenberg1,
Amiram Sagi1, Keren David2, Guy Rubin3, Adam
J. Singer4
1Ben-Gurion University of the Negev, Beer-Sheva,
Israel; 2MediWound Ltd, Beer-Sheva, Israel;
3Haemek Hospital, Afula, Israel; 4Stony Brook
University, Stony Brook, NY
Background: Burns are characterized by an eschar
that delays diagnosis and increases the risk of infection
and scarring. As a result, surgical excision of the eschar
is a cornerstone of care requiring specialized personnel
and facilities.
Objectives: A novel debriding agent that could be
used by emergency physicians (EPs) was developed to
overcome the weaknesses of surgical and conventional
enzymatic debridement. We hypothesized that the novel
agent would reduce the need for surgical excision and
skin grafting compared with standard care (SC).
Methods: The safety and efficacy of a novel Debriding
Gel Dressing (DGD) was determined in six studies; five
were RCTs. Treatments (DGD, control vehicle, or SC)
were randomly allocated to deep partial and full thickness burns covering less than 30% TBSA by a computer-generated randomization scheme. Primary endpoints were percentage eschar debridement, rate of
surgical burn excision and total area excised. Efficacy
analyses were intention to treat.
Results: 518 patients were enrolled. Percentage eschar
debridement was greater than 90% in all studies for
DGD, which was comparable to SC, and significantly
greater than control vehicle, which was negligible. In the
third study, the total area surgically excised was significantly less in DGD-treated patients compared with
patients treated with the control vehicle (22.9% vs. 73.2%,
P < 0.001) or SC (50.5%, P = 0.006). In the sixth, phase III
RCT the rate of surgical excision was significantly lower
in DGD-treated patients than in control patients treated
with SC (40/163 [24.5%] vs. 119/170 [70.0%], P < 0.001).
The total area surgically excised was also significantly
less in DGD-treated patients compared with patients
treated with SC (13.1% vs. 56.7%, P < 0.001). Local and
systemic adverse events were similar for DGD and SC.
Conclusion: DGD is a safe and effective method of
burn debridement that offers an alternative, minimally
invasive burn care modality to standard surgical excision that could be used by EPs.
8
The Golden Period of Laceration Repair Has
Disappeared
James Quinn1, Michael Kohn2, Steven Polevoi2
1Stanford University, Stanford, CA; 2University of
California, San Francisco, San Francisco, CA
Background: Much has been written about the
‘‘golden period’’ for lacerations and that ‘‘older’’
wounds are at increased risk of infection.
Objectives: To determine the relationship between
infection and time from injury to closure, and the characteristics of lacerations closed before and after
12 hours of injury.
Methods: Over an 18 month period, a prospective
multi-center cohort study was conducted at a teaching
hospital, trauma center and community hospital. Emergency physicians completed a structured data form
when treating patients with lacerations. Patients were
followed to determine whether they had suffered a
wound infection requiring treatment and to determine
a cosmetic outcome rating. We compared infection
rates and clinical characteristics of lacerations with chi-square and t-tests as appropriate.
Results: There were 2663 patients with lacerations;
2342 had documented times from injury to closure. The
mean times from injury to repair for infected and noninfected wounds were 2.4 vs. 3.0 hrs (p = 0.39) with 78%
of lacerations treated within 3 hours and 4% (85) treated ≥12 hours after injury. There were no differences in the
infection rates for lacerations closed before (2.9%,
95%CI 2.2–3.7) or after (2.1%, 95%CI 0.4–6.0) 6 hours
and before (3.0%, 95% CI 2.3%–3.8%) or after (1.2%,
95% CI 0.03%–6.4%) 12 hours. The patients treated
≥12 hours after injury tended to be older (41 vs. 34 yrs
p = 0.02) and fewer were treated with primary closure
(85% vs. 96% P < 0.0001). Comparing wounds 12 or more
hours after injury with more recent wounds, there was
no effect of location on decision to close. Wounds closed
after 12 hours did not differ from wounds closed before
12 hours with respect to use of prophylactic antibiotics,
type of repair, length of laceration, or cosmetic outcome.
Conclusion: Closing older lacerations, even those
greater than 12 hours after injury, does not appear to
be associated with any increased risk of infection or
adverse outcomes. Excellent irrigation and decontamination over the last 30 years may have led to this
change in outcome.
9
The Effects of a Novel TGF-beta Antagonist
on Scarring in a Vertical Progression Porcine
Burn Model
Adam J. Singer1, Steve A. McClain1,
Ji-Hun Kang1, Shuan S. Huang2, Jung S. Huang3
1Stony Brook University, Stony Brook, NY;
2Auxagen, Inc., St Louis, MO; 3St Louis
University, St Louis, MO
Background: Deep burns may result in significant
scarring leading to aesthetic disfigurement and functional disability. TGF-β is a growth factor that plays a
significant role in wound healing and scar formation.
Objectives: The current study was designed to test
the hypothesis that a novel TGF-β antagonist would
reduce scar contracture compared with its vehicle in a
porcine partial thickness burn model.
Methods: Ninety-six mid-dermal contact burns were
created on the backs and flanks of four anesthetized
young swine using a 150 gm aluminum bar preheated to 80°C for 20 seconds. The burns were randomized to treatment with topical TGF-β antagonist at one of three concentrations (0, 187, and 375 µL) in replicates of
8 in each pig. Dressing changes and reapplication of the
topical therapy were performed every 2 days for 2 weeks
then twice weekly for an additional 2 weeks. Burns were
photographed and full thickness biopsies were obtained
at 5, 7, 9, 14, and 28 days to determine reepithelialization
and scar formation grossly and microscopically. A sample of 32 burns in each group had 80% power to detect a
10% difference in percentage scar contracture.
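The stated power calculation can be checked approximately with a two-sample t-test power solver; the SD of percent contracture is not given in the abstract, so the value below is an assumption chosen for illustration (statsmodels):

```python
from statsmodels.stats.power import TTestIndPower

sd_assumed = 14.0                 # hypothetical SD of percent scar contracture
effect_size = 10.0 / sd_assumed   # 10% absolute difference, standardized
power = TTestIndPower().power(effect_size=effect_size, nobs1=32, alpha=0.05)
print(f"power ~ {power:.2f}")     # ~0.8 under this SD assumption
```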
Results: A total of 32 burns were created in each of
the three study groups. Burns treated with the high
dose TGF-β antagonist healed with less scar contracture than those treated with the low dose and control
(52 ± 20%, 63 ± 15%, and 62 ± 14%; ANOVA P = 0.02).
Additionally, burns treated with the higher, but not the
lower dose of TGF-β antagonist healed with significantly fewer full thickness scars than controls (62.5%
vs. 100% vs. 93.8% respectively; P < 0.001). There were
no infections and no differences in the percentage
wound reepithelialization among all study groups at
any of the time points.
Conclusion: Treatment of mid-dermal porcine contact
burns with the higher dose TGF-β antagonist reduced
scar contracture and rate of deep scars compared with
the low dose and controls.
10
A Double-Blinded Comparison of Insulin
Regimens in Diabetic Ketoacidosis: Does
Bolus Insulin Make a Difference?
Joel Kravitz1, Patricia Giraldo2,
Rika N. O’Malley3, Elizabeth Aguilera3, Claudia
Lares3, Sorin Cadar3
1Community Medical Center, Toms River, NJ;
2Albert Einstein Medical Center, Philadelphia,
PA; 3Albert Einstein
Medical Center, Philadelphia, PA
Background: Diabetic ketoacidosis (DKA) is a common and lethal complication of diabetes. The American
Diabetes Association recommends treating adult
patients with a bolus dose of regular insulin followed
by a continuous insulin infusion. The ADA also suggests a glucose correction rate of 75–100 mg/dl/hr to
minimize complications.
Objectives: Compare the effect of bolus dose insulin
therapy with insulin infusion to insulin infusion alone
on serum glucose, bicarbonate, and pH in the initial
treatment of DKA.
Methods: Consecutive DKA patients were screened in
the ED between March ’06 and June ’10. Inclusion criteria were: age >18 years, glucose >350 mg/dL, serum
bicarbonate ≤15, or ketonemia or ketonuria. Exclusion
criteria were: congestive heart failure, current hemodialysis, pregnancy, or inability to consent. No patient
was enrolled more than once. Patients were randomized to receive either regular insulin 0.1 units/kg or the
same volume of normal saline. Patients, medical and
research staff were blinded. Baseline glucose, electrolytes, and venous blood gases were collected on arrival.
Bolus insulin or placebo was then administered and all
enrolled patients received regular insulin at a rate of
0.1 unit/kg/hr, as well as fluid and potassium repletion
per the research protocol. Glucose, electrolytes, and
venous blood gases were drawn hourly for 4 hours.
Data between two groups were compared using
unpaired t-test.
Results: 99 patients were enrolled, with 30 being
excluded. 35 patients received bolus insulin; 34 received
placebo. No significant differences were noted in initial
glucose, pH, bicarbonate, age, or weight between the
two groups. After the first hour, glucose levels in the
insulin group decreased by 151 mg/dL compared to
94 mg/dL in the placebo group (p = 0.0391, 95% CI 2.7
to 102.0). Changes in mean glucose levels, pH, bicarbonate level, and AG were not statistically different
between the two groups for the remainder of the
4 hour study period. There was no difference in the
incidence of hypoglycemia in the two groups.
Conclusion: Administering a bolus dose of regular
insulin decreased mean glucose levels more than
placebo, although only for the first hour. There was no
difference in the change in pH, serum bicarbonate or
anion gap at any interval. This suggests that bolus dose
insulin may not add significant benefit in the emergency
management of DKA.
11
Calibration Of APACHE II Score To Predict
Mortality In Out-of-hospital And In-hospital
Cardiac Arrest
Justin D. Salciccioli, Cristal Cristia, Andre
Dejam, Brandon Giberson, David Toomey,
Michael N. Cocchi, Michael W. Donnino
BIDMC Center for Resuscitation Science,
Boston, MA
Background: Severity of illness scores can predict
outcomes in critically ill patients. However, the calibration of previously established scores in post-cardiac
arrest is poorly established.
Objectives: To assess the calibration of the Acute
Physiology and Chronic Health Evaluation (APACHE II)
score in out-of-hospital (OHCA) and in-hospital cardiac
arrest (IHCA).
Methods: We performed a prospective observational
study of adult cardiac arrest at an urban tertiary care
hospital during the period from 12/2007 to 12/2010.
Inclusion criteria: 1. Adult (>18 years); 2. OHCA or
IHCA; 3. Return of spontaneous circulation (ROSC).
Traumatic cardiac arrests were excluded. We recorded
baseline demographics, arrest event characteristics,
follow-up vitals and laboratory data, and in-hospital
mortality. APACHE II scores were calculated at the
time of ROSC, and at 24 hrs, 48 hrs, and 72 hrs. We
used simple descriptive statistics to describe the study
population. Univariate logistic regression was used to
predict mortality with APACHE II as a continuous
predictor variable. Discrimination of APACHE II
scores was assessed using the area under the curve
(AUC) of the receiver operator characteristic (ROC)
curve.
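A compact sketch of the analysis described: univariate logistic regression of mortality on the APACHE II score, with discrimination summarized by the ROC AUC (scikit-learn; the data are random placeholders, not study data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
apache = rng.uniform(5, 45, size=229).reshape(-1, 1)  # placeholder scores
p_death = 1 / (1 + np.exp(-(apache[:, 0] - 25) / 5))  # synthetic mortality risk
died = (rng.random(229) < p_death).astype(int)

model = LogisticRegression().fit(apache, died)
auc = roc_auc_score(died, model.predict_proba(apache)[:, 1])
print(f"AUC = {auc:.2f}")
```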
Results: A total of 229 patients were analyzed. The
median age was 70 years (IQR: 56–79) and 32% were
female. APACHE II score was a significant predictor of
mortality for both OHCA and IHCA at baseline and at
all follow-up time points (all p < 0.01). Discrimination of
the score increased over time and achieved very good
discrimination after 24 hrs (Table, Figure).
Conclusion: The ability of APACHE II score to predict
mortality improves over time in the 72 hours following
cardiac arrest. These data suggest that after 24 hours,
APACHE II scoring is a useful severity of illness score
in all post-cardiac arrest patients.
Table - Abstract 11: Comparison of APACHE II with AUC to Predict Mortality

          APACHE II Score (AUC)
          0 hr        24 hr       48 hr       72 hr       Mortality n (%)
All-CA    28 (0.67)   29 (0.71)   26 (0.81)   22 (0.84)   124 (54)
OHCA      26 (0.66)   29 (0.72)   26 (0.82)   23 (0.89)   69 (56)
IHCA      28 (0.71)   29 (0.69)   26 (0.81)   22 (0.79)   55 (44)
12
Hyperlactatemia Affects the Association
of Hyperglycemia with Mortality in
Non-Diabetic Septic Adults
Jeffrey P. Green1, Tony Berger1, Nidhi Garg2,
Alison Suarez2, Michael S. Radeos2, Sanjay
Gupta2, Edward A. Panacek1
1UC Davis Medical Center, Davis, CA; 2New
York Hospital Queens, Flushing, NY
Background: Admission hyperglycemia has been
described as a mortality risk factor for septic non-diabetics, but the known association of hyperglycemia
with hyperlactatemia (a validated mortality risk factor
in sepsis) has not previously been accounted for.
Objectives: To determine whether the association of
hyperglycemia with mortality remains significant when
adjusted for concurrent hyperlactatemia.
Methods: This was a post-hoc, nested analysis of a single-center cohort study. Providers identified study subjects during their ED encounters; all data were collected
from the electronic medical record. Patients: Nondiabetic adult ED patients with a provider-suspected
infection, two or more Systemic Inflammatory Response
Syndrome criteria, and concurrent lactate and glucose
testing in the ED. Setting: The ED of an urban teaching
hospital; 2007 to 2009. Analysis: To evaluate the association of hyperglycemia (glucose >200 mg/dL) with hyperlactatemia (lactate ≥4.0 mmol/L), a logistic regression model was created; outcome: hyperlactatemia; primary variable of interest: hyperglycemia. A second model was created to determine if concurrent hyperlactatemia affects hyperglycemia’s association with mortality; outcome: 28-day mortality; primary risk variable: hyperglycemia with an interaction term for concurrent hyperlactatemia. Both models were adjusted for demographics, comorbidities, presenting infectious syndrome, and objective evidence of renal, respiratory, hematologic, or cardiovascular dysfunction.
Results: 1236 ED patients were included; mean age 76 ± 19 years. 133 (9%) subjects were hyperglycemic, 182 (13%) hyperlactatemic, and 225 (16%) died within 28 days of the initial ED visit. After adjustment, hyperglycemia was significantly associated with simultaneous hyperlactatemia (OR 3.9, 95%CI 2.48, 5.98). Hyperglycemia with concurrent hyperlactatemia was associated with increased mortality risk (OR 4.4, 95%CI 2.27, 8.59), but hyperglycemia in the absence of simultaneous hyperlactatemia was not (OR 0.86, 95%CI 0.45, 1.65).
Conclusion: In this cohort of septic adult non-diabetic patients, mortality risk did not increase with hyperglycemia unless associated with simultaneous hyperlactatemia. The previously reported association of hyperglycemia with mortality in this population may be due to the association of hyperglycemia with hyperlactatemia.
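As a rough illustration of the second model in the Methods above (mortality on hyperglycemia with a hyperlactatemia interaction), a statsmodels sketch; the DataFrame is a synthetic placeholder and omits the adjustment covariates listed in the abstract:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic 0/1 data; a real analysis would add the adjustment covariates.
df = pd.DataFrame({
    "hyperglycemia":   [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1] * 20,
    "hyperlactatemia": [0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1] * 20,
    "died28":          [0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1] * 20,
})
model = smf.logit("died28 ~ hyperglycemia * hyperlactatemia", data=df).fit(disp=0)
print(model.params)  # the interaction term captures the joint effect on mortality
```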
13
The Effect of Near Infrared Spectroscopy
Monitoring on Patients Undergoing
Resuscitation for Shock
James Miner, Johanna Bischoff, Nathaniel Scott,
Roma Patel, Rebecca Nelson, Stephen W. Smith
Hennepin County Medical Center, Minneapolis,
MN
Background: Near infrared spectroscopy (StO2) represents a measure of perfusion that provides the treating physician with an assessment of a patient’s shock
state and response to therapy. It has been shown to
correlate with lactate and acid/base status. It is not
known if using information from this monitor to guide
resuscitation will result in improved patient outcomes.
Objectives: To compare the resuscitation of patients
in shock when the StO2 monitor is or is not being used
to guide resuscitation.
Methods: This was a prospective study of patients
undergoing resuscitation in the ED for shock from any
cause. During alternating 30 day periods, physicians
were blinded to the data from the monitor followed by
30 days in which physicians were able to see the information from the StO2 monitor and were instructed to
resuscitate patients to a target StO2 value of 75. Adult
patients (age>17) with a shock index (SI) of >0.9
(SI = heart rate/systolic blood pressure) or a blood pressure <80 mmHg systolic who underwent resuscitation
were enrolled. Patients had a StO2 monitor placed on the
thenar eminence of their least-injured hand. Data from
the StO2 monitor were recorded continuously and noted
every minute along with blood pressure, heart rate, and
oxygen saturation. All treatments were recorded.
Patients’ charts were reviewed to determine the diagnosis, ICU-free days in the 28 days after enrollment, inpatient LOS, and 28-day mortality. Data were compared
using Wilcoxon rank sum and chi-square tests.
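The enrollment rule reduces to two arithmetic checks; a minimal sketch using the definitions in the Methods (the function names are ours):

```python
def shock_index(heart_rate: float, sbp: float) -> float:
    """SI = heart rate / systolic blood pressure, per the Methods."""
    return heart_rate / sbp

def eligible(age: int, heart_rate: float, sbp: float) -> bool:
    """Adult (age > 17) with SI > 0.9 or systolic BP < 80 mmHg."""
    return age > 17 and (shock_index(heart_rate, sbp) > 0.9 or sbp < 80)

print(round(shock_index(110, 89), 2))  # 1.24, the blinded group's median SI
print(eligible(45, 110, 89))           # True
```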
Results: 107 patients were enrolled, 51 during blinded
periods and 56 during unblinded periods. The median
presenting shock index was 1.24 (range 0.5 to 4.0) for
the blinded group and 1.10 (0.5–3.3) for the unblinded
group (p = 0.13). The median time in department was
70 minutes (range 22–407) for the blinded and 76 minutes (range 11–275) for the unblinded groups (p = 0.99).
The median hospital LOS was 1 day (range 0–30) for
the blinded group, and 2 days (range 0–23) in the
unblinded group (p = 0.63). The mean ICU-free days
was 22 ± 9 for the blinded group and 19 ± 11 for the
unblinded group (p = 0.26). Among patients where the
physician indicated using the StO2 monitor data to
guide patient care, the ICU-free days were 21.4 ± 9 for
the blinded group and 16.3 ± 12 for the unblinded group
(p = 0.06).
Conclusion: StO2 monitoring of patients in undifferentiated shock did not demonstrate a difference in hospital LOS or ICU-free days in this preliminary study.
(Originally submitted as a ‘‘late-breaker.’’)
14
A Laboratory Study Assessing The Influence
Of Flow Rate And Insulation Upon
Intravenous Fluid Infusion Temperature
Jonathan Studnek1, John Watts1,
Steven Vandeventer2, David Pearson1
1Carolinas Medical Center, Charlotte, NC;
2Mecklenburg EMS Agency, Charlotte, NC
Background: Inducing therapeutic hypothermia (TH)
using 4°C IV fluids in resuscitated cardiac arrest
patients has been shown to be feasible and effective.
Limited research exists assessing the efficiency of this
cooling method.
Objectives: The objective was to determine an efficient infusion method for keeping fluid close to 4°C
upon exiting an IV. It was hypothesized that colder
temperatures would be associated with both higher
flow rate and insulation of the fluid bag.
Methods: Efficiency was studied by assessing change
in fluid temperature (°C) during the infusion, under
three laboratory conditions. Each condition was performed four times using 1 liter bags of normal saline.
Fluid was infused into a 1000 mL beaker through 10
gtts tubing. Flow rate was controlled using a tubing
clamp and in-line transducer with a flowmeter, while
temperature was continuously monitored in a side
port at the terminal end of the IV tubing using a digital thermometer. The three conditions included infusing chilled fluid at a rate of 40 mL/min, which is
equivalent to 30 mL/kg/hr for an 80 kg patient,
105 mL/min, and 105 mL/min using a chilled and insulated pressure bag. Descriptive statistics and analysis
of variance was performed to assess changes in fluid
temperature.
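The flow-rate equivalence quoted in the Methods is a unit conversion; a one-function sketch:

```python
def ml_per_min(rate_ml_per_kg_per_hr: float, weight_kg: float) -> float:
    """Convert a weight-based rate (mL/kg/hr) to an absolute rate (mL/min)."""
    return rate_ml_per_kg_per_hr * weight_kg / 60

print(ml_per_min(30, 80))  # 40.0 mL/min, the equivalence stated above
```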
Results: The average fluid temperatures at time 0
were 3.40 (95% CI 3.12–3.69) (40 mL/min), 3.35 (95% CI
3.25–3.45) (105 mL/min), and 2.92 (95% CI 2.40–3.45)
(105 mL/min + insulation). There was no significant difference in starting temperature between groups
(p = 0.16). The average fluid temperatures after 100 mL
had been infused were 10.02 (95% CI 9.30–10.74)
(40 mL/min), 7.35 (95% CI 6.91–7.79) (105 mL/min), and
6.95 (95% CI 6.47–7.43) (105 mL/min + insulation). The
higher flow rate groups had significantly lower temperature than the lower flow rate after 100 mL of fluid had
been infused (p < 0.001). The average fluid temperatures
after 1000 mL had been infused were 16.77 (95% CI
15.96–17.58) (40 mL/min), 11.40 (95% CI 11.18–11.61)
(105 mL/min), and 7.75 (95% CI 7.55–7.99) (105 mL/min
+ insulation). There was a significant difference in temperature between all three groups after 1000 mL of
fluid had been infused (p < 0.001).
Conclusion: In a laboratory setting, the most efficient
method of infusing cold fluid appears to be one that both keeps the bag of fluid insulated and uses a faster infusion rate.
15
Outcomes of Patients with Vasoplegic
versus Tissue Dysoxic Septic Shock
Sarah Sterling1, Michael Puskarich1,
Stephen Trzeciak2, Nathan Shapiro3, Jeffrey
Kline4, Alan Jones1
1University of Mississippi Medical Center,
Jackson, MS; 2Cooper University Hospital,
Camden, NJ; 3BIDMC, Boston, MA; 4Carolinas
Medical Center, Charlotte, NC
Background: The current consensus definition of septic
shock requires hypotension after adequate fluid challenge or vasopressor requirement. Some patients with
septic shock present with hypotension and hyperlactatemia >2 mmol/L (tissue dysoxic shock), while others have hypotension alone with normal lactate (vasoplegic shock).
Objectives: To determine differences in outcomes of
patients with tissue dysoxic versus vasoplegic septic
shock.
Methods: Pre-planned secondary analysis of a large,
multi-center RCT. Inclusion criteria: suspected infection, two or more systemic inflammatory response
criteria, and systolic blood pressure <90 mmHg after a
fluid bolus. Patients were categorized by presence of
vasoplegic or tissue dysoxic shock. Demographics and
sequential organ failure assessment (SOFA) scores
were evaluated between the groups. The primary outcome was in-hospital mortality. Data were analyzed
using t-tests, chi-squared test, and proportion differences with 95% confidence intervals as appropriate.
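A sketch of the proportion-difference computation used for the primary outcome (normal-approximation Wald interval; the abstract does not state which CI method was applied):

```python
import math

def prop_diff_ci(x1: int, n1: int, x2: int, n2: int, z: float = 1.96):
    """Difference in proportions x2/n2 - x1/n1 with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p2 - p1
    return d, (d - z * se, d + z * se)

# 8/89 deaths (vasoplegic) vs 40/153 deaths (tissue dysoxic), from the Results:
print(prop_diff_ci(8, 89, 40, 153))  # ~0.17 (95% CI ~0.08-0.26)
```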
Results: A total of 242 patients were included: 89
patients with vasoplegic shock and 153 with tissue dysoxic shock. There were no significant differences in age
(61 vs. 58 years), Caucasian race (53% vs. 58%), or male
sex (57% vs. 52%) between the dysoxic shock and vasoplegic shock groups, respectively. The group with vasoplegic shock had a lower initial SOFA score than did
the group with tissue dysoxic shock (5.7 vs. 7.3 points,
p = 0.0002). The primary outcome of in-hospital mortality occurred in 8/89 (9%) of patients with vasoplegic
shock compared to 40/153 (26%) in the group with
tissue dysoxic shock (proportion difference 17%, 95%
CI 7–26%, p < 0.0001).
Conclusion: In this analysis of patients with septic
shock, we found a significant difference in in-hospital
mortality between patients with vasoplegic versus tissue dysoxic septic shock. These findings suggest a need
to consider these differences when designing future
studies of septic shock therapies.
16
Assessment of Clinical Deterioration
and Progressive Organ Failure in Moderate
Severity Emergency Department Sepsis
Patients
Lindsey J. Glaspey, Steven M. Hollenberg,
Samantha A. Ni, Stephen Trzeciak,
Ryan C. Arnold
Cooper University Hospital, Camden, NJ
Background: The PRE-SHOCK population, ED sepsis
patients with tissue hypoperfusion (lactate of 2.0–
3.9 mM), commonly deteriorates after admission and
requires transfer to critical care.
Objectives: To determine the physiologic parameters
and disease severity indices in the ED PRE-SHOCK
sepsis population that predict clinical deterioration. We
hypothesized that neither initial physiologic parameters
nor organ function scores will be predictive.
Methods: Design: Retrospective analysis of a prospectively maintained registry of sepsis patients with lactate
measurements. Setting: An urban, academic medical
center. Participants: The PRE-SHOCK population,
defined as adult ED sepsis patients with either elevated
lactate (2.0–3.9 mM) or transient hypotension (any sBP
<90 mmHg) receiving IV antibiotics and admitted to a
medical floor. Consecutive patients meeting PRE-SHOCK criteria were enrolled over a 1-year period.
Patients with overt shock in the ED, pregnancy, or
acute trauma were excluded. Outcome: Primary patientcentered outcome of increased organ failure (sequential
organ failure assessment [SOFA] score increase >1
point, mechanical ventilation, or vasopressor utilization)
within 72 hours of admission or in-hospital mortality.
Results: We identified 248 PRE-SHOCK patients from
2649 screened. The primary outcome was met in 54% of
the cohort and 44% were transferred to the ICU from a
medical floor. Patients meeting the outcome of increased
organ failure had a greater Shock Index (1.02 vs 0.93,
p = 0.042) and heart rate (115 vs 105, p < 0.001) with no
difference in initial lactate, age, MAP, or exposure to
hypotension (sBP <100 mmHg). There was no difference
in the Predisposition, Infection, Response, and Organ
dysfunction (PIRO) score between groups (6.4 vs 5.7,
p = 0.052). Outcome patients had similar initial levels of
organ dysfunction but had higher SOFA scores at 24, 48,
and 72 hours, a higher ICU transfer rate (60 vs 24%,
p < 0.001), and increased ICU and hospital lengths of stay.
Conclusion: The PRE-SHOCK sepsis population has a
high incidence of clinical deterioration, progressive
organ failure, and ICU transfer. Physiologic data in the
ED were unable to differentiate the PRE-SHOCK sepsis
patients who developed increased organ failure. This
study supports the need for an objective organ
failure assessment in the emergency department to
supplement clinical decision-making.
Table - Abstract 16: Physiologic Parameters and Resource Utilization

                                           Outcome       No Outcome    p value
                                           (n = 135)     (n = 113)
Mean arterial pressure (MAP) [mean (SD)]   76 (18)       73 (16)       0.171
Any systolic blood pressure
  <100 mmHg [n (%)]                        63 (47)       63 (56)       0.199
Shock Index (HR/sBP) [mean (SD)]           1.02 (0.38)   0.93 (0.30)   0.042
Lactate: Initial [mean (SD)]               2.45 (0.74)   2.29 (0.78)   0.099
SOFA score: Initial [mean (SD)]            2.1 (1.9)     1.8 (1.7)     0.195
SOFA score: 24 hours [mean (SD)]           2.9 (2.3)     1.3 (1.4)     <0.001
SOFA score: 48 hours [mean (SD)]           2.5 (2.0)     1.0 (1.3)     <0.001
ICU LOS [mean (SD)]                        3 (5)         1 (2)         <0.001
Hospital LOS [mean (SD)]                   11 (12)       6 (7)         <0.001
17
Lipopolysaccharide Detection in Patients
with Septic Shock in the ED
Daren M. Beam1, Michael A. Puskarich1, Mary
Beth Fulkerson1, Jeffrey A. Kline1, Alan E. Jones2
1Carolinas Medical Center, Charlotte, NC;
2University of Mississippi Medical Center,
Jackson, MS
Background: Lipopolysaccharide (LPS) has long been
recognized to initiate the host inflammatory response to
infection with gram negative bacteria (GNB). Large clinical trials of potentially very expensive therapies continue
to have the objective of reducing circulating LPS. Previous
studies have found varying prevalence of LPS in blood of
patients with severe sepsis. Compared with sepsis trials
conducted 20 years ago, the frequency of GNB in culture
specimens from emergency department (ED) patients
enrolled in clinical trials of severe sepsis has decreased.
Objectives: Test the hypothesis that prior to antibiotic
administration, circulating LPS can be detected in the
plasma of fewer than 10% of ED patients with severe sepsis.
Methods: Secondary analysis of a prospective ED-based RCT of early quantitative resuscitation for severe
sepsis. Blood specimens were drawn at the time severe
sepsis was recognized, defined as two or more systemic
inflammatory criteria and a serum lactate >4 mM or
SBP <90 mmHg after fluid challenge. Blood was drawn
in EDTA prior to antibiotic administration or within the
first several hours, immediately centrifuged, and plasma
frozen at -80°C. Plasma LPS was quantified using the
limulus amebocyte lysate assay (LAL) by a technician
blinded to all clinical data.
Results: 180 patients were enrolled with 140 plasma
samples available for testing. Median age was
59 ± 17 years, 50% female, with overall mortality of 18%.
Forty of 140 patients (29%) had any culture specimen
positive for GNB including 21 (15%) with blood cultures
positive. Only five specimens had detectable LPS, including two with a GNB-positive culture specimen, and three
were LPS-positive without GNB in any culture. Prevalence of detectable LPS was 3.5% (CI: 1.5%–8.1%).
Conclusion: The frequency of detectable LPS in antibiotic-naive plasma is too low to serve as a useful diagnostic test or therapeutic target in ED patients with
severe sepsis. The data raise the question of whether
post-antibiotic plasma may have a higher frequency of
detectable LPS.
18
Evaluation of the Efficacy of an Early Goal
Directed Therapy (EGDT) Protocol When
Using MEDS Score for Risk Stratification
Ameer F. Ibrahim, Michael Mann, Jeff Scull,
Michael Spino Jr., Lisa Voigt, Kimberly Zammit,
Robert McCormack
The State University of New York at Buffalo,
Buffalo General Hospital, Buffalo, NY
Background: EGDT is known to reduce mortality in
septic patients. There is no evidence to date that delineates the role of using a risk stratification tool, such as
the Mortality in Emergency Department Sepsis (MEDS)
score, to determine which subgroups of patients may
have a greater benefit with EGDT.
Objectives: Our objective was to determine if our
EGDT protocol differentially affects mortality based on
the severity of illness using MEDS score.
Methods: This study is a retrospective chart review of
243 patients, conducted at an urban tertiary care center,
after implementing an EGDT protocol on July 1, 2008
(Figure). This study compares in-hospital mortality,
length of stay (LOS) in ICU, and LOS in ED between the
control group (126 patients from 1/1/07–12/31/07) and the
postimplementation group (117 patients from 7/1/08–6/
30/09), using MEDS score as a risk stratification tool.
Inclusion criteria: patients who presented to our ED with a suspected infection, two or more SIRS criteria, a MAP <65 mmHg, or a SBP <90 mmHg. Exclusion criteria:
age<18, death on arrival to ED, DNR or DNI, emergent
surgical intervention, or those with an acute myocardial
infarction or CHF exacerbation. A two-sample t-test was
used to show that the mean age and number of comorbidities were similar between the control and study groups
(p = 0.27 and 0.87 respectively). Mortality was compared
and adjusted for MEDS score using logistic regression.
The odds ratios and predicted probabilities of death are
generated using the fitted logistic regression model. ED
and ICU LOS were compared using Mood’s median test.
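The unadjusted effect can be reproduced from the reported group sizes and mortality; note that the abstract's RR of 0.52 is additionally adjusted for MEDS score by logistic regression, which this 2×2 sketch does not attempt:

```python
def relative_risk(events_tx: int, n_tx: int, events_ctl: int, n_ctl: int) -> float:
    """Risk in the treated (EGDT) group divided by risk in the control group."""
    return (events_tx / n_tx) / (events_ctl / n_ctl)

deaths_ctl = round(0.302 * 126)  # ~38 of 126 control patients (30.2%)
deaths_tx = round(0.205 * 117)   # ~24 of 117 EGDT patients (20.5%)
print(round(relative_risk(deaths_tx, 117, deaths_ctl, 126), 2))  # ~0.68 unadjusted
```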
Results: When controlling for illness severity using
MEDS score, the relative risk (RR) of death with EGDT is
about half that of the control group (RR = 0.52, 95% CI
[0.278-0.973], p=0.04). Also, by applying MEDS score to
risk stratify patients into various groups of illness severity, we found no specific groups where EGDT is more
efficacious at reducing the Predicted Probability of death
(Table 1). Without controlling for MEDS score, there is a
trend in reduction of absolute mortality by 9.7% when
EGDT is used (control = 30.2%, study = 20.5%, p =
0.086). EGDT leads to a 40.3% reduction in the median
LOS in ICU (control = 124 hours, study = 74 hours, p =
0.03), without increasing LOS in ED (control = 6 hours,
study = 7 hours, p = 0.50).
Conclusion: EGDT is beneficial in patients with
severe sepsis or septic shock, regardless of their MEDS
score.
Table - Abstract 18: RR of Death within Each MEDS Score Illness Severity Subgroup

MEDS Score        Control   Post implementation   Difference   P value
Very High (>15)   0.621     0.457                 0.164        0.801
High (13–15)      0.496     0.336                 0.160        0.856
Moderate (8–12)   0.358     0.222                 0.136        0.437
Low (5–7)         0.181     0.102                 0.079        0.484
Very Low (0–4)    0.127     0.069                 0.058        0.625
19
Validation of Using Fingerstick Blood
Sample with i-Stat POC Device for Cardiac
Troponin I Assay
Devin Loewenstein, Christine Stake, Mark
Cichon
Loyola University Health System, Maywood, IL
Background: In patients experiencing acute coronary
syndrome (ACS), prompt diagnosis is critical in achieving the best health outcome. While ECG analysis is
usually sufficient to diagnose ACS in cases of ST elevation, ACS without ST elevation is reliably diagnosed
through serial testing of cardiac troponin I (cTnI). Point-of-care testing (POCT) for cTnI by venipuncture has been
proven a more rapid means to diagnosis than central
laboratory testing. Implementing fingerstick testing for
cTnI in place of standard venipuncture methods would
allow for faster and easier procurement of patients’ cTnI
levels, as well as increase the likelihood of starting a
rapid test for cTnI in the prehospital setting, which could
allow for even earlier diagnosis of ACS.
Objectives: To determine if fingerstick blood samples
yield accurate and reliable troponin measurements
compared to conventional venous blood draws using
the i-STAT POC device.
Methods: This experimental study was performed in
the ED of a quaternary care suburban medical center
between June-August 2011. Fingerstick blood samples
were obtained from adult ED patients for whom standard (venipuncture) POC troponin testing was ordered.
The time between fingerstick and standard draws was
kept as narrow as possible. cTnI assays were performed at the bedside using the i-STAT 1 (Abbott Point of Care).
Table - Abstract 19: Categorical comparison of standard ED POCT and fingerstick results
Results: 94 samples from 87 patients were analyzed by
both fingerstick and standard ED POCT methods (see
Table). Four resulted in cartridge error. Compared to
‘‘gold standard’’ ED POCT, fingerstick testing has a positive predictive value of 100%, negative predictive value of
96%, sensitivity of 79%, and specificity of 100%. No significant difference in cTnI level was found between the
two methods, with a nonparametric intraclass correlation
coefficient of 0.994 (95% CI 0.992–0.996, p-value < 0.001).
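The reported accuracy figures follow from a standard 2×2 table. The counts below are one set consistent with the reported sensitivity, specificity, PPV, and NPV (the table's cells did not survive reproduction here), shown purely to illustrate the calculation:

```python
def diagnostics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Test characteristics of fingerstick vs. 'gold standard' ED POCT."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# 90 interpretable paired samples (94 minus 4 cartridge errors):
print(diagnostics(tp=11, fp=0, fn=3, tn=76))
# sensitivity ~0.79, specificity 1.0, PPV 1.0, NPV ~0.96, as reported
```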
Conclusion: Whole blood fingerstick cTnI testing using
the i-STAT device is suitable for rapid evaluation of
cTnI level in prehospital and ED settings. However,
results must be interpreted with caution if they are
within a narrow margin of the cutoff for normal vs.
elevated levels. Additional testing on a larger sample
would be beneficial. The practicality and clinical benefit
of using fingerstick cTnI testing in the EMS setting
must still be assessed.
20
Central Versus Local Adjudication of
Myocardial Infarction in a Biomarker Trial
Stephen W. Smith1, John T. Nagurney2,
Deborah B. Diercks3, Rick C. San George4, Fred
S. Apple1, Peter A. McCullough5, Christian T.
Ruff6, Arturo Sesma7, W. Frank Peacock8
1Hennepin County Medical Center, Minneapolis,
MN; 2Massachusetts General Hospital, Boston,
MA; 3University of California, Davis, CA; 4Alere,
Biosite, San Diego, CA; 5Providence Park Heart
Institute, Novi, MI; 6Brigham and Women’s
Hospital, Boston, MA; 7St. Catherine University,
St. Paul, MN; 8Cleveland Clinic Foundation,
Cleveland, OH
Background: Adjudication of diagnosis of acute myocardial infarction (AMI) in clinical studies typically
occurs at each site of subject enrollment (local) or by
experts at an independent site (central). From 2000–
2007, the troponin (cTn) element of the diagnosis was
predicated on the local laboratories, using a mix of the
99th percentile reference cTn and ROC-determined cutpoints. In 2007, the universal definition of AMI (UDAMI) defined it by the 99th percentile reference alone.
Objectives: To compare the diagnosis rates of AMI as
determined by local adjudication vs. central adjudication using UDAMI criteria.
Methods: Retrospective analysis of data from the
Myeloperoxidase in the Diagnosis of Acute Coronary
Syndromes (ACS) Study (MIDAS), an 18-center prospective study with enrollment from 12/19/06 to 9/20/07
of patients with suspected ACS presenting to the
ED < 8 hours after symptom onset and in whom serial
cTn and objective cardiac perfusion testing was
planned. Adjudication of ACS was done by single local
principal investigators using clinical data and local cTn
cutpoints from 13 different cTn assays, and applying
the 2000 definition. Central adjudication was done after
completion of the MIDAS primary analysis using the
same data and local cTn assay, but by experts at three
different institutions, using the UDAMI and the manufacturer’s 99th percentile cTn cutpoint, and not blinded
to local adjudications. Discrepant diagnoses were
resolved by consensus. Local vs. central cTn cutpoints
differed for six assays, with central cutpoints lower in
all. Statistics were by chi-square and kappa.
Results: Excluding 11 cases deemed indeterminate by
central adjudication, 1096 cases were successfully adjudicated. Local adjudication resulted in 104 AMI (9.5% of
total) and 992 non-AMI; central adjudication resulted in
134 (12.2%) AMI and 962 non-AMI. Overall, 44 local
diagnoses (4%) were either changed from non-AMI to
AMI or AMI to non-AMI (p < 0.001). Interrater reliability
across both methods was found to be kappa = 0.79
(p < 0.001). For ACS diagnosis, local adjudication identified 252 ACS cases (23%) and 854 non-ACS, while central
adjudication identified 275 ACS (25%) and 831 non-ACS.
Overall, 61 local diagnoses (6%) were either changed
from non-ACS to ACS or ACS to non-ACS (p < 0.001).
Interrater reliability found kappa = 0.85 (p < 0.001).
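The reported kappa for AMI can be reproduced from the cross-tabulated counts in the table below. A minimal sketch of Cohen's kappa in Python, using those counts:

    # Cohen's kappa for local vs. central AMI adjudication, from the
    # cross-tabulated counts reported in the abstract's table.
    both_ami, local_only, central_only, both_non = 97, 7, 37, 955
    n = both_ami + local_only + central_only + both_non       # 1096

    p_obs = (both_ami + both_non) / n                         # observed agreement
    local_ami = both_ami + local_only                         # 104 local AMI
    central_ami = both_ami + central_only                     # 134 central AMI
    p_exp = (local_ami * central_ami
             + (n - local_ami) * (n - central_ami)) / n**2    # chance agreement
    print((p_obs - p_exp) / (1 - p_exp))                      # ~0.79, as reported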
Conclusion: Central and local adjudication resulted in
significantly different rates of AMI and ACS diagnosis.
However, overall agreement of the two methods across
these two diagnoses was acceptable.
Table - Abstract 20: Differences in AMI Diagnosis Between Central and Local Adjudication

Local                                   Central   Number (%)
AMI                                     AMI       97 (8.9%)
Non-AMI                                 Non-AMI   955 (87.1%)
AMI                                     Non-AMI   7 (0.64%)
Non-AMI                                 AMI       Total 37 (3.4%)
  –ACS, not AMI                         AMI       17 (1.6%)
  –Non-cardiac chest pain               AMI       9 (0.8%)
  –Chest pain not otherwise specified   AMI       11 (1.0%)
Total changes                                     44 (4.0%, 31% of all MI Dx’s)

21
Myeloperoxidase And C-Reactive Protein In
Patients With Cocaine-associated Chest
Pain
Katie O’Conor1, Anna Marie Chang1, Alan Wu2,
Judd E. Hollander1
1Hospital of the University of Pennsylvania,
Philadelphia, PA; 2University of California, San
Francisco, San Francisco, CA
Background: In ED patients presenting with chest
pain, acute coronary syndrome (ACS) was found to
occur four times more often in cocaine users. Biomarkers myeloperoxidase (MPO) and C-reactive protein
(CRP) have potential in the diagnosis of ACS.
Objectives: To evaluate the utility of MPO and CRP in
the diagnosis of ACS in patients presenting to the ED
with cocaine-associated chest pain and compare the
predictive value to nonusers. We hypothesized that
these markers may be more sensitive for ACS in nonusers given the underlying pathophysiology of enhanced
plaque inflammation.
Methods: A secondary analysis of a cohort study of
enrolled ED patients who received evaluation for ACS
at an urban, tertiary care hospital. Structured data
collection at presentation included demographics,
chest pain history, lab, and ECG data. Subjects
included those with self-reported or lab-confirmed
cocaine use and chest pain. They were matched to
controls based on age, sex, and race. Our main outcome was diagnosis of ACS at index visit. We determined median MPO and CRP values, calculated
maximal AUC for ROC curves, and found cut-points
to maximize sensitivity and specificity. Data are presented with 95% CI.
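Finding cut-points "to maximize sensitivity and specificity" is conventionally done by maximizing the Youden index (sensitivity + specificity − 1) over candidate thresholds. The sketch below illustrates that approach with hypothetical marker values and labels, not the study data:

    import numpy as np

    def youden_cutpoint(values, labels):
        # Return the threshold maximizing sensitivity + specificity - 1,
        # treating values at or above the cut as test-positive.
        values = np.asarray(values, dtype=float)
        labels = np.asarray(labels, dtype=int)
        best_j, best_cut = -1.0, None
        for cut in np.unique(values):
            pred = values >= cut
            sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
            spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
            if sens + spec - 1 > best_j:
                best_j, best_cut = sens + spec - 1, cut
        return best_cut, best_j

    # Hypothetical MPO values (ng/mL) and ACS labels, illustration only.
    print(youden_cutpoint([120, 160, 250, 300, 90, 140, 260, 110],
                          [0, 0, 1, 1, 0, 0, 1, 0]))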
Results: Overall, 95 patients in the cocaine-positive group and 86 patients in the nonusers group had MPO
and CRP levels measured. Patients had a median age of
47 (IQR, 40–52), 90% black or African American, and
62% male (p > 0.05 between groups). Fifteen patients
were diagnosed with ACS: 8 patients in the cocaine
group and 7 in the nonusers group. Comparing cocaine
users to nonusers, there was no difference in MPO
(median 162 [IQR, 101–253] v 136 [111–235] ng/mL;
p = 0.78) or CRP (3 [1–9] v 5 [1–15] mg/L; p = 0.08). The
AUC for MPO was 0.65 (95% CI 0.39–0.90) v 0.54 (95%
CI 0.19–0.73). The optimal cut-point to maximize sensitivity and specificity was 242 ng/mL which gave a sensitivity of 0.42 and specificity of 0.75. Using this cutpoint,
57% v 29% of ACS in cocaine users vs the nonusers
would be identified. The AUC for CRP was 0.63 (95%
CI 0.39–0.88) in cocaine users vs 0.73 (95% CI 0.52–0.95)
in nonusers. The optimal cut point was 11.9 mg/L with
a sensitivity of 0.67 and specificity of 0.79. Using this
cutpoint, 43% v 88% of ACS in cocaine users and nonusers would have been identified.
Conclusion: The diagnostic accuracy of MPO and CRP does not differ between cocaine users and nonusers, and neither marker appears to have sufficient discriminatory ability in either cohort.
22
A Soluble Guanylate Cyclase Stimulator,
Bay 41-8543, Preserves Right Ventricular
Function In Experimental Pulmonary
Embolism
John A. Watts, Michael A. Gellar, Mary-Beth K.
Fulkerson, Jeffrey A. Kline
Carolinas Medical Center, Charlotte, NC
Background: Pulmonary embolism (PE) increases
pulmonary vascular hypertension and can cause right
ventricular (RV) injury by shear forces, stretch, work,
and inflammatory responses. Our recent studies show
that the vasodilation observed following in vivo
treatment with BAY 41-8543 reduces pulmonary vascular resistance in a 5 hr model of acute experimental PE.
Objectives: To test if treatment of rats with BAY 418543 protects against RV dysfunction in two models of
acute experimental PE.
Methods: Experimental PE was induced in anesthetized, male Sprague-Dawley rats by infusing 25 μm polystyrene microspheres in the right jugular vein to
produce RV injury from severe PE (2.6 million/100 gm
body wt, 5 hrs) or moderate PE (2.0 million/100 gm
body wt, 18 hrs). Rats with PE were treated with BAY
41-8543 (50 μg/kg, IV) or its solvent at the time of PE
induction. Control rats received vehicle for microspheres. Hearts were perfused by Langendorff technique to assess RV function. Values are mean ± se,
n = 7 / group. Comparisons between control values, PE,
and PE + BAY 41-8543 were made by ANOVA followed
by Newman-Keuls test (significance = p < 0.05).
Results: 18 hrs of moderate PE caused a significant decrease in RV heart function in rats treated with the solvent for BAY 41-8543: peak systolic pressure (PSP) decreased from 39 ± 1.5 mmHg (control) to 16 ± 1.5 (PE), +dP/dt decreased from 1192 ± 93 mmHg/sec to 463 ± 77, and −dP/dt decreased from −576 ± 60 mmHg/sec to −251 ± 9. Treatment of rats with BAY 41-8543 significantly improved all three indices of RV heart function (PSP 29 ± 2.6, +dP/dt 1109 ± 116, −dP/dt −426 ± 69). 5 hrs of severe PE also caused significant RV dysfunction (PSP 25 ± 2, −dP/dt −356 ± 28), and treatment with BAY 41-8543 produced protection of RV heart function (PSP 34 ± 2, −dP/dt −535 ± 41) similar to the 18 hr moderate PE model.
Conclusion: Experimental PE produced significant RV
dysfunction, which was ameliorated by treatment of the
animals with the soluble guanylate cyclase stimulator,
BAY 41-8543.
23
Prospective Evaluation of a Simplified Risk
Stratification Tool for Chest Pain Patients
in an Emergency Department Observation
Unit
Matthew J. Fuller, Jessica Holly, Thomas
Rayner, Camille Broadwater-Hollifield, Michael
Mallin, Virgil Davis, Erik Barton, Troy Madsen
University of Utah, Salt Lake City, UT
Background: The Thrombolysis in Myocardial Infarction (TIMI) score has been validated as a risk stratification tool in the emergency department (ED) setting, but certain aspects of the scoring system may not apply to chest pain patients selected for ED observation unit (EDOU) stay.
Objectives: We evaluated a simplified, three-point risk
stratification tool for EDOU patients which we termed
the CARdiac score: Coronary disease [previous myocardial infarction (MI), stent, or coronary artery bypass
graft (CABG)], Age (65 years or older), and Risk factors
(at least 3 of 5 cardiac risk factors).
Methods: We performed a prospective, observational
study with 30-day phone follow-up for all chest pain
patients admitted to our EDOU over a one-year
period. Baseline data, outcomes related to EDOU stay,
inpatient admission, and 30-day outcomes were
recorded. CARdiac scores were calculated based on
patient history and were used to evaluate the risk of
the composite outcome of MI, stent/CABG, or death
during the EDOU stay, inpatient admission, or 30-day
follow-up period. The CARdiac score was not used
during the EDOU stay and was calculated with blinding to patient outcomes.
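As described, the score awards one point per criterion; a minimal sketch in Python (field names are hypothetical):

    def cardiac_score(prior_coronary_disease, age_years, num_risk_factors):
        # One point each for Coronary disease (prior MI, stent, or CABG),
        # Age 65 years or older, and Risk factors (>= 3 of the 5
        # conventional cardiac risk factors).
        return (int(prior_coronary_disease)
                + int(age_years >= 65)
                + int(num_risk_factors >= 3))

    # Scores of 2-3 were treated as moderate risk, 0-1 as low risk.
    print(cardiac_score(False, 70, 3))   # -> 2, moderate risk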
Results: 552 patients were evaluated. Average age was
54.1 years (19–80 yrs) and 46% were male. Eighteen
patients experienced composite outcomes: stent (12),
CABG (3), MI and stent (2), and MI and CABG (1). Risk
of the composite outcome generally increased with CARdiac score: 0 (1.6%), 1 (3.6%), 2 (9%), and 3 (8.3%).
Patients with a CARdiac score of 2 or 3 (moderate risk)
were significantly more likely to experience MI, stent,
or CABG than those with a score of 0 or 1 (low risk): 7/
79 moderate-risk patients (8.9%) had the composite outcome vs. 11/473 low-risk patients (2.3%, p = 0.008, relative risk = 3.9). Those moderate-risk by the CARdiac
score were also more likely to require inpatient admission from the EDOU (16.5% vs. 9.1%, p = 0.045).
Conclusion: The CARdiac score may prove to be a simple tool for risk stratification of chest pain patients in
an EDOU. Patients considered moderate-risk by CARdiac score may be appropriate for more intensive evaluation in the EDOU or consideration for inpatient
admission rather than EDOU placement.
24
Disease Progression in Patients Without
Clinically Significant Stenosis on Coronary
CT Angiography Performed for Evaluation
of Potential Acute Coronary Syndrome
Anna Marie Chang1, Catherine T. Ginty2,
Harold I. Litt1, Judd E. Hollander1
1Hospital of the University of Pennsylvania,
Philadelphia, PA; 2Cooper University Hospital,
Camden, NJ
Background: Patients who present to the ED with
symptoms of potential acute coronary syndrome
(ACS) can be safely discharged home after a negative
coronary computerized tomographic angiography
(CTA). However, the duration of time for which a
negative coronary CTA can be used to inform decision making when patients have recurrent symptoms
is unknown.
Objectives: We examined patients who received more
than one coronary CTA for evaluation of ACS to determine whether they had disease progression, as defined
by crossing the threshold from noncritical (<50% maximal stenosis) to potentially critical disease.
Methods: We performed a structured comprehensive
record search of all coronary CTAs performed from
2005 to 2010 at a tertiary care health system. Low-to-intermediate risk ED patients who received two or
more coronary CTAs, at least one from an ED evaluation for potential ACS, were identified. Patients who
were revascularized between scans were excluded. We
collected demographic data, clinical course, time
between scans, and number of ED visits between scans.
Record review was structured and done by trained
abstractors. Our main outcome was progression of coronary stenosis between scans, specifically crossing the
threshold from noncritical to potentially critical disease.
Results: Overall, 32 patients met study criteria (median
age 45, interquartile range [IQR] (37.5–48); 56% female;
88% black). The median time between studies was
27.3 months (IQR, 18.2–33.2). 22 patients did not have
stenosis in any vessel on either coronary CTA, two
studies showed increasing stenosis of <20%, and the
rest showed ‘‘improvement,’’ most due to better imaging quality. No patient initially below the 50% threshold
subsequently exceeded it (0%; 95% CI, 0–11.0%).
Patients also had varying numbers of ED visits (median
number of visits 5, range 0–23), and numbers of ED visits for potentially cardiac complaints (median 1, range
0–6); 10 were re-admitted for potentially cardiac complaints (for example, chest pain or shortness of breath),
and 9 received further provocative cardiac testing, all
of which had negative results.
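The "0% (95% CI, 0–11.0%)" figure is what an exact (Clopper-Pearson) binomial interval gives for zero events in 32 patients; with zero events the upper bound has a closed form. A one-line check, assuming that interval method was used:

    n = 32                          # patients below the 50% threshold on scan 1
    upper = 1 - 0.025 ** (1 / n)    # exact upper 95% bound for 0/n events
    print(round(100 * upper, 1))    # ~10.9%, matching the reported 0-11.0%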
Conclusion: We did not find clinically significant disease progression within a 2-year time frame in patients who had a negative coronary CTA, despite a high number of repeat visits. This suggests that a prior negative coronary CTA may be used to inform decision making within this time period.
25
Triple Rule Out CT Scan for Patients
Presenting to the Emergency Department
with Chest Pain: A Systematic Review and
Meta-Analysis
David Ayaram1, Fernanda Bellolio1, Torrey
Laack1, Hassan M. Murad1, Victor M. Montori1,
Judd Hollander2, Ian G. Stiell3, Erik P. Hess1
1Mayo Clinic, Rochester, MN; 2University of
Pennsylvania, Philadelphia, PA; 3University of
Ottawa, Ottawa, ON, Canada
Background: ‘‘Triple rule out’’ (TRO) CT has emerged
as a new technology to evaluate for coronary artery
disease (CAD), pulmonary embolism (PE), and aortic
dissection (AD) in a single imaging modality.
Objectives: We compared the diagnostic accuracy,
image quality, radiation exposure, contrast volume,
length of stay, and admission rate of TRO CT to other
diagnostic modalities (dedicated coronary, PE, and AD
CT; coronary angiography; and nuclear stress testing)
for the evaluation of chest pain.
Methods: With the assistance of an expert librarian we
searched four electronic databases and reference lists
of included studies, and contacted content experts to
identify articles for review. Included articles met all the
following criteria: enrolled patients with non-traumatic
chest pain, shortness of breath, suspected ACS, PE, or
AD; used 64-slice CT technology; and compared TRO
CT to another diagnostic modality. Statistical comparisons were conducted using RevMan, and Meta-DiSc
was used to pool diagnostic accuracy estimates.
Results: Nine ED studies enrolling 1371 patients (483
TRO, 888 non-TRO) were included (1 RCT and 8 observational). Patients undergoing TRO CT were exposed to
more radiation (mean difference [MD] 5.03 mSv, 95%
CI 4.16–5.91) and more contrast (MD 45.6 mL, 95% CI
42.7–48.6) compared to non-TRO CT patients. There
was no significant difference in image quality between
TRO CT images and those of dedicated CT scans in any
studies performing this comparison. Similarly, there
was no significant difference between TRO CT and
other diagnostic modalities in regards to length of stay
or admission rate. When compared to conventional
coronary angiography as the gold standard for evaluation of CAD, TRO CT had the following pooled
diagnostic accuracy estimates: sensitivity 0.94 [95%CI
0.89–0.98], specificity 0.98 [0.97–0.99], LR+ 35.7 [95%CI
11.1–115.0], and LR- 0.07 [95%CI 0.02–0.35].
Conclusion: TRO chest CT is comparable to dedicated
PE, coronary, or AD CT in regard to image quality,
length of stay, and admission rate and is highly accurate for detecting CAD. The utility of TRO CT depends
on the relative pre-test probabilities of the conditions
being assessed and its role is yet to be clearly defined.
TRO CT, however, involves increased radiation exposure and contrast volume and for this reason clinicians
should be selective in its use.
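Since the conclusion turns on pre-test probability, it may help to see how the pooled likelihood ratios translate into post-test probabilities. A minimal sketch using Bayes' rule on the odds scale (the pre-test probabilities below are arbitrary examples, not values from the included studies):

    def post_test_probability(pretest_p, lr):
        odds = pretest_p / (1 - pretest_p)   # probability -> odds
        post = odds * lr                     # apply the likelihood ratio
        return post / (1 + post)             # odds -> probability

    for pretest in (0.05, 0.20, 0.50):                         # example risks
        print(pretest,
              round(post_test_probability(pretest, 35.7), 3),  # pooled LR+
              round(post_test_probability(pretest, 0.07), 4))  # pooled LR-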
26
Impact of Coronary Computer Tomographic
Angiography Findings on the Medical
Treatment of CAD
Anna Marie Chang, Stephen Kimmel, Jeffrey Le,
Judd E. Hollander
Hospital of the University of Pennsylvania,
Philadelphia, PA
Background: Coronary computed tomographic angiography (CCTA) has high sensitivity, specificity, accuracy, and prognostic value for coronary artery disease
(CAD) and ACS. However, how a CCTA informs subsequent use of prescription medication is unclear.
Objectives: To determine if detection of critical or
noncritical CAD on CCTA is associated with initiation of
aspirin and statins for patients who presented to the ED
with chest pain. We hypothesized that aspirin and statins
would be more likely to be prescribed to patients with
noncritical disease relative to those without any CAD.
Methods: Prospective cohort study of patients who
received CCTA as part of evaluation of chest pain in
the ED or observation unit. Patients were contacted
and medical records were reviewed to obtain clinical
follow-up for up to the year after CCTA. The main outcome was new prescription of aspirin or statin. CAD
severity on CCTA was graded as absent, mild (1% to 49%), moderate (50% to 69%), or severe (≥70%) stenosis. Logistic regression was used to assess the association of stenosis severity to new medication
prescription; covariates were determined a priori.
Results: 859 patients who had CCTA performed
consented to participate in this study or met waiver of
consent for record review only (median age, 48.5 IQR
42.7–53.4, 59% female, 71% black). Median follow-up
time was 333 days, IQR 70–725 days. At baseline, 13% of
the total cohort was already prescribed aspirin and 8%
on statin medication. Two hundred seventy nine (32%)
patients were found to have stenosis in at least one vessel.
In patients with absent, mild, moderate, and severe CAD
on CCTA, aspirin was initiated in 11%, 34%, 52%, and
55%; statins were initiated in 7%, 22%, 32%, and 53% of
patients. After adjustment for age, race, sex, hypertension, diabetes, cholesterol, tobacco use, and admission to
the hospital after CCTA, higher grades of CAD severity
were independently associated with greater post-CCTA
use of aspirin (OR 1.9 per grade, 95% CI 1.4–2.2,
p < 0.001) and statins (OR 1.9, 95% CI 1.5–2.4, p < 0.001).
Conclusion: Greater CAD severity on CCTA is associated with increased medication prescription for CAD.
Patients with noncritical disease are more likely than
patients without any disease to receive aspirin and statins. Future studies should examine whether these
changes lead to decreased hospitalizations and
improved cardiovascular health.
Table - Abstract 26: New Prescription of Aspirin or Statin by Degree of CAD

Degree of CAD       Aspirin   Statin
None                68/580    38/580
Mild (1–49%)        41/160    35/160
Moderate (50–69%)   40/76     25/76
Severe (>70%)       24/43     23/43
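To make the "OR 1.9 per grade" concrete: the fitted odds of a new prescription multiply by 1.9 with each step in CAD severity. The sketch below projects aspirin-initiation probabilities from the observed 11% baseline; this is an unadjusted back-of-the-envelope illustration, not the study's covariate-adjusted model:

    baseline_odds = 0.11 / (1 - 0.11)          # 11% initiation with no CAD
    for grade, label in enumerate(["none", "mild", "moderate", "severe"]):
        odds = baseline_odds * 1.9 ** grade    # OR of 1.9 applied per grade
        print(label, round(odds / (1 + odds), 2))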
27
Validation of a Clinical Decision Rule for ED
Patients with Potential Acute Coronary
Syndromes (ACS)
Anna Marie Chang1, Julie Pitts1, Frances
Shofer1, Jeffrey Le1, Emily Barrows1, Shannon
Marcoon1, Erik Hess2, Judd Hollander1
1University of Pennsylvania, Philadelphia, PA;
2Mayo Clinic, Rochester, MN
Background: Hess et al. developed a clinical decision
rule for patients with acute chest pain consisting of the
absence of five predictors: ischemic ECG changes not
known to be old, elevated initial or 6-hour troponin
level, known coronary disease, ‘‘typical’’ pain, and age
over 50. Patients less than 40 required only a single troponin evaluation.
Objectives: To test the hypothesis that patients less
than 40 years old without these criteria are at <1% risk
for major adverse cardiovascular events (MACE)
including death, AMI, PCI, and CABG.
Methods: We performed a secondary analysis of several
combined prospective cohort studies that enrolled ED
patients who received an evaluation for ACS in an urban
ED from 1999 to 2009. Cocaine users and STEMI patients
were excluded. Structured data collection at presentation included demographics, pain description, history,
lab, and ECG data for all studies. Hospital course was
followed daily. Thirty-day follow-up was done by telephone. Our main outcome was 30-day MACE using
objective criteria. The secondary outcome was potential
change in ED disposition due to application of the rule.
Descriptive statistics and 95% CIs were used.
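The sequential application described in the Results can be written as a simple screen. A minimal sketch of the rule as analyzed here (patients under 40 with none of the predictors are low risk; the 40–50-year-old extension with a second negative troponin is noted in the Results but not modeled):

    def hess_low_risk(age, ischemic_ecg_not_old, elevated_troponin,
                      known_cad, typical_pain):
        # Low risk only if all four clinical predictors are absent and
        # the patient is under 40 (age itself being the fifth predictor).
        if (ischemic_ecg_not_old or elevated_troponin
                or known_cad or typical_pain):
            return False
        return age < 40

    print(hess_low_risk(35, False, False, False, False))  # True -> 0.8% MACE group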
Results: Of 9289 visits for potential ACS, patients had a
mean age of 52.4 ± 14.7 yrs; 68% were black and 59%
female. There were 638 patients (6.9%) with 30-day CV
events (93 dead, 384 AMI, 298 PCI). Sequential removal
of patients in order to meet the final rule for patients
less than 40 excluded patients based upon: ischemic
ECG changes not old (n = 434, 30% MACE rate),
elevated initial troponin level (n = 237, 60% MACE),
known coronary disease (n = 1622, 11% MACE), ‘‘typical’’ pain (n = 3179, 3% MACE), and age over 40
(n = 2690, 3.4% MACE) leaving 1127 patients less than
40 with 0.8% MACE [95% CI, 0.4–1.5%]. Of this cohort,
70% were discharged home from the ED by the treating physician without application of this rule. Adding a
second negative troponin in patients 40–50 years old
identified a group of 1139 patients with a 2.0% rate of
MACE [1.3–3.0] and a 48% discharge rate.
Conclusion: The Hess rule appears to identify a cohort
of patients at approximately 1% risk of 30-day MACE,
and may enhance discharge of young patients. However, even without application of this rule, 70% of these low-risk young patients are already being discharged home based upon clinical judgment.
28
Utilization of an Electronic Clinical
Decision Rule Does Not Change Emergency
Physicians’ Pattern of Practice in
Evaluating Patients with Possible
Pulmonary Embolism
Salam Lehrfeld1, Corey Weiner2, Brian Gillett2,
Marsia Vermeulen2, Antonios Likourezos2,
Eitan Dickman2
1UT Southwestern Medical Center, Dallas, TX;
2Maimonides Medical Center, Brooklyn, NY
Background: A Clinical Decision Support System
(CDSS) incorporates evidence-based medicine into clinical practice, but this technology is underutilized in the
ED. A CDSS can be integrated directly into an electronic medical record (EMR) to improve physician efficiency and ease of use. The Christopher study
investigators validated a clinical decision rule for
patients with suspected pulmonary embolism (PE). The
rule stratifies patients using Wells’ criteria to undergo
either D-dimer testing or a CT angiogram (CT). The
effect of this decision rule, integrated as a CDSS into
the EMR, on ordering CTs has not been studied.
Objectives: To assess the effect of a mandatory CDSS
on the ordering of D-dimers and CTs for patients with
suspected PE.
Methods: We assessed the number of CTs ordered for
patients with suspected PE before and after integrating
a mandatory CDSS in an urban community ED. Physicians were educated regarding CDSS use prior to implementation. The CDSS advised physicians as to whether
a negative D-dimer alone excluded PE or if a CT was
required based on Wells’ criteria. The EMR required
physicians to complete the CDSS prior to ordering the
CT. However, physicians maintained the ability to order
a CT regardless of the CDSS recommendation. Patients ≥18 years of age presenting to the ED with a chief complaint of chest pain, dyspnea, syncope, or palpitations
were included in the data analysis. We compared the
proportion of D-dimers and CTs ordered during the
8-month periods immediately before and after implementing the CDSS. All 27 physicians who worked in the
ED during both time periods were included in the analysis. Patients with an allergy to intravenous contrast
agents, renal insufficiency, or pregnancy were excluded.
Results were analyzed using a chi-square test.
Results: A total of 11,931 patients were included in the
data analysis (6054 pre- and 5877 post-implementation).
CTs were ordered for 215 patients (3.6%) in the
pre-implementation group and 226 patients (3.8%) in
the post-implementation group; p = 0.396. A D-dimer
was ordered for 392 patients (6.5%) in the pre-implementation group and 382 patients (6.5%) in the
post-implementation group; p = 0.958.
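The reported p-values can be reproduced with a pooled two-proportion z-test (equivalent to the chi-square test without continuity correction); a minimal sketch using the CT counts above:

    import math

    def two_proportion_p(x1, n1, x2, n2):
        # Two-sided pooled two-proportion z-test.
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p2 - p1) / se
        return math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value

    print(two_proportion_p(215, 6054, 226, 5877))  # ~0.40, matching p = 0.396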
Conclusion: In this single-center study, EMR integration of a mandatory CDSS for evaluation of PE did not
significantly alter ordering patterns of CTs and D-dimers.
29
Identification of Patients with Low-Risk
Pulmonary Emboli Suitable for Discharge
from the Emergency Department
Mike Zimmer, Keith E. Kocher
University of Michigan, Ann Arbor, MI
Background: Recent data, including a large, multicenter randomized controlled trial, suggest that a low-risk
cohort of patients diagnosed with pulmonary embolism
(PE) exists who can be safely discharged from the ED
for outpatient treatment.
Objectives: To determine if there is a similar cohort at
our institution who have a low rate of complications
from PE suitable for outpatient treatment.
Methods: This was a retrospective chart review at a
single academic tertiary referral center with an annual
ED volume of 80,000 patients. All adult ED patients
who were diagnosed with PE during a 24-month period
from 11/1/09 through 10/31/11 were identified. The Pulmonary Embolism Severity Index (PESI) score, a previously validated clinical decision rule to risk stratify
patients with PE, was calculated. Patients with high
PESI (>85) were excluded. Additional exclusion criteria
included patients who were at high risk of complications from initiation of therapeutic anticoagulation and
those patients with other clear indications for admission to the hospital. The remaining cohort of patients
with low-risk PE (PESI ≤ 85) was included in the final
analysis. Outcomes were measured at 14 and 90 days
after PE diagnosis and included death, major bleeding,
and objectively confirmed recurrent venous thromboembolism (VTE).
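For orientation, the PESI sums age in years with weighted points for ten clinical variables. The sketch below uses the weights from the original PESI derivation (Aujesky et al.); those weights are an assumption here and should be checked against the primary source:

    def pesi(age, male, cancer, heart_failure, chronic_lung_disease,
             pulse_ge_110, sbp_lt_100, rr_ge_30, temp_lt_36,
             altered_mental_status, sao2_lt_90):
        # Weights as published in the original derivation (assumed here).
        return (age + 10 * male + 30 * cancer + 10 * heart_failure
                + 10 * chronic_lung_disease + 20 * pulse_ge_110
                + 30 * sbp_lt_100 + 20 * rr_ge_30 + 20 * temp_lt_36
                + 60 * altered_mental_status + 20 * sao2_lt_90)

    score = pesi(54, True, False, False, False,
                 False, False, False, False, False, False)
    print(score, score <= 85)   # 64, True -> "low risk" in this study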
Results: During the study period, 298 total patients
were diagnosed with PE. There were 172 (58%) patients
categorized as ‘‘low risk’’ (PESI ≤ 85), with 42 removed
because of various pre-defined exclusion criteria. Of the
remaining 130 (44%) patients suitable for outpatient
treatment, 5 patients (3.8%; 95% CI, 0.5% - 7.2%) had
one or more negative outcomes by 90 days. This
included 2 (1.5%; 95% CI, 0% - 3.7%) major bleeding
events, 2 (1.5%; 95% CI, 0% - 3.7%) recurrent VTE, and
2 (1.5%; 95% CI, 0% - 3.7%) deaths. None of the deaths
were attributable to PE or anticoagulation. One patient
suffered both a recurrent VTE and died within 90 days.
Both patients who died within 90 days were transitioned to hospice care because of worsening metastatic
burden. At 14 days, there was 1 bleeding event (0.8%;
95% CI, 0% - 2.3%), no recurrent VTE, and no deaths.
The average hospital length of stay for these patients
was 2.8 days (SD ±1.6).
Conclusion: Over 40% of our patients diagnosed with
PE in the ED may have been suitable for outpatient
treatment, with 4% suffering a negative outcome
within 90 days and 0.8% suffering a negative outcome
within 14 days. In addition, the average hospital
length of stay for these patients was 2.8 days, which
may represent a potential cost savings if these
patients had been managed as outpatients. Our experience supports previous studies that suggest the
safety of outpatient treatment of patients diagnosed
with PE in the ED. Given the potential savings related
to a decreased need for hospitalization, these results
have health policy implications and support the feasibility of creating protocols to facilitate this clinical
practice change.
30
Validation of a Clinical Prediction Rule for
Chest Radiography in Emergency
Department Patients with Chest Pain and
Possible Acute Coronary Syndrome
Joshua Guttman1, Eli Segal2, Mark Levental2,
Xiaoqing Xue3, Marc Afilalo2
1McGill University, Montreal, QC, Canada;
2Jewish General Hospital, McGill University,
Montreal, QC, Canada; 3Jewish General
Hospital, Montreal, QC, Canada
Background: Chest x-rays (CXRs) are commonly
obtained on ED chest pain patients presenting with suspected acute coronary syndrome (ACS). A recently
derived clinical decision rule (CDR) determined that
patients who have no history of congestive heart failure, have never smoked, and have a normal lung examination do not require a CXR in the ED.
Objectives: To validate the diagnostic accuracy of the
Hess CXR CDR for ED chest pain patients with
suspected ACS.
Methods: This was a prospective observational study
of a convenience sample of chest pain patients over
24 years old with suspected ACS who presented to a
single urban academic ED. The primary outcome was
the ability of the CDR to identify patients with abnormalities on CXR requiring acute ED intervention. Data
were collected by research associates using the chart
and physician interviews. Abnormalities on CXR and
specific interventions were predetermined, with a positive CXR defined as one with an abnormality requiring ED
intervention, and a negative CXR defined as either normal or abnormal but not requiring ED intervention. The
final radiologist report was used as a reference standard for CXR interpretation. A second radiologist,
blinded to the initial radiologist’s report, reviewed the
CXRs of patients meeting the CDR criteria to calculate
inter-observer agreement. Patients were followed up by
chart review and telephone interview 30 days after
presentation.
Results: Between January and August 2011, 178
patients were enrolled, of whom 38 (21%) were
excluded and 10 (5.6%) did not receive CXRs in the ED.
Of the 130 remaining patients, 74 (57%) met the CDR.
The CDR identified all patients with a positive CXR
(sensitivity = 100%, 95%CI 40–100%). The CDR identified 73 of the 126 patients with a negative CXR (specificity = 58%, 95%CI 49–67%). The positive likelihood
ratio was 2.4 (95%CI 1.9–2.9). Inter-observer agreement
between radiologists was substantial (kappa = 0.63,
95%CI 0.41–0.85). Telephone contact was made with
78% of patients and all patient charts were reviewed
at 30 days. None had any adverse events related to a
diagnosis made on CXR. If the CDR had been applied,
CXR usage would have dropped from 93% to 41%, an
absolute reduction of 52%.
Conclusion: The Hess CXR CDR was validated in our
setting. Implementation of this CDR has the potential to
reduce CXR usage in this population.
31
The Detection Rate of Pulmonary
Embolisms by Emergency Physicians Has
Increased
Scott M. Alter, Barnet Eskin, John R. Allegra
Morristown Medical Center, Morristown, NJ
Background: Currently most pulmonary embolisms
(PEs) are diagnosed using chest computed tomography
(CT) imaging. A recent study showed that there has
been a six-fold increase in the rate of chest CT imaging in the emergency department (ED) from 2001 to
2007.
Objectives: We hypothesize that this has led to an
increase in the rate of detecting PEs in the ED. Our
goal was to analyze the recent trends in the annual
rates of detection of PEs in the ED.
Methods: Design: Retrospective cohort. Setting: 33 suburban, urban, and rural New York and New Jersey EDs
with annual visits between 8,000 and 75,000. Participants: Consecutive patients seen by ED physicians from
January 1, 1996 through December 31, 2010. Observations: We identified PEs using ICD-9 codes and calculated annual rates by dividing the number of PEs by the
total ED visits for each year. We determined statistical
significance using Student’s t-test, calculated 95%
confidence intervals (CI), and performed a regression
analysis. Alpha was set at 0.05.
Results: Of 9,533,827 ED visits, 5,595 (1 in 1,704 visits,
0.059%) were for PEs. The mean patient age was
58 ± 19 years and 57% were female. From 1996 to 2010,
the rate increased 3.8-fold (95% CI, 3.2–4.6; p < 0.0001)
from 1 in 4,603 visits (0.022%) in 1996 to 1 in 1,213
(0.082%) visits in 2010. The coefficient of determination for the rate increase was R² = 0.93 (p < 0.0001).
Conclusion: The rate of detection of pulmonary
embolisms by ED physicians has increased 3.8-fold
over the 15 years of our study. Although it may be
due to increased incidence, we believe this trend is
most likely due to the increased utilization of chest CT
imaging.
32
D-dimer Threshold Increase With Pretest
Probability Unlikely For Pulmonary
Embolism To Decrease Unnecessary
Computerized Tomographic Pulmonary
Angiography
Jeffrey Kline1, Melanie M. Hogg1, Daniel M.
Courtney2, Chadwick D. Miller3, Alan E. Jones4,
Howard A. Smithline5
1Carolinas Medical Center, Charlotte, NC;
2Northwestern University, Chicago, IL; 3Wake
Forest School of Medicine, Winston-Salem, NC;
4University of Mississippi Medical Center, Jackson,
MS; 5Baystate Medical Center, Springfield, MA
Background: Increasing the threshold to define a positive D-dimer in low-risk patients could reduce unnecessary computed tomographic pulmonary angiography
(CTPA) for suspected PE. This strategy might increase
rates of missed PE and missed pneumonia, the most
common non-thromboembolic finding on CTPA that
might not otherwise be diagnosed.
Objectives: Measure the effect of doubling the standard
D-dimer threshold for ‘‘PE unlikely’’ Revised Geneva
(RGS) or Wells’ scores on the exclusion rate, frequency,
and size of missed PE and missed pneumonia.
Methods: Prospective enrollment at four academic US
hospitals. Inclusion criteria required patients to have at
least one symptom or sign and one risk factor for PE,
and have 64-channel CTPA completed. Pretest probability data were collected in real time and the D-dimer
was measured in a central laboratory. Criterion standard for PE or pneumonia consisted of CTPA interpretation by two independent radiologists combined with
necessary treatment plan. Subsegmental PE was
defined as total vascular obstruction <5%. Patients were
followed for outcome at 30 days. Proportions were
compared with 95% CIs.
Results: Of 678 patients enrolled, 126 (19%) were PE+
and 93 (14%) had pneumonia. With RGS ≤ 6 and the standard threshold (<500 ng/mL), D-dimer was negative in 110/678 (16%, 95% CI: 13–19%), and 4/110 were PE+ (posterior probability 3.8%, 95% CI: 1–9.3%). With RGS ≤ 6 and a threshold <1000 ng/mL, D-dimer was negative in 208/678 (31%, 27–44%) and 11/208 (5.3%, 2.8–9.3%) were PE+, but 10/11 missed PEs were subsegmental, and none had concomitant DVT. The posterior probability for pneumonia among patients with RGS ≤ 6 and D-dimer <500 was 9/110 (8.2%, 4–15%), which compares favorably to the posterior probability of 12/208 (5.4%, 3–10%) observed with RGS ≤ 6 and D-dimer <1000 ng/mL. Of the 200 (35%) patients who also had plain film CXR, radiologists found an infiltrate in only 58. Use of Wells ≤ 4 produced results similar to RGS ≤ 6 for exclusion rate and posterior probability of both PE and pneumonia.
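The exclusion rates and posterior probabilities at the two thresholds follow directly from the reported counts; a minimal recomputation (small differences from the abstract's rounded figures may reflect the exact denominators used):

    total = 678
    for threshold, negatives, missed_pe in ((500, 110, 4), (1000, 208, 11)):
        print(f"<{threshold} ng/mL: negative in {negatives}/{total} "
              f"({100 * negatives / total:.0f}%), PE despite negative test "
              f"{missed_pe}/{negatives} ({100 * missed_pe / negatives:.1f}%)")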
Conclusion: Doubling the threshold for a positive
D-dimer with a PE unlikely pretest probability can
significantly reduce CTPA scanning with a slightly
increased risk of missed isolated subsegmental PE, and
no increase in rate of missed pneumonia.
33
A Randomized Trial of N-Acetyl Cysteine
and Saline versus Normal Saline Alone to
Prevent Contrast Nephropathy in
Emergency Department Patients
Undergoing Contrast Enhanced Computed
Tomography
Stephen Traub1, Alice Mitchell2, Alan E. Jones3,
Aimee Tang1, Jennifer O’Connor1,
John Kellum4, Nathan Shapiro1
1Beth Israel Deaconess Medical Center, Boston,
MA; 2Carolinas Medical Center, Charlotte, NC;
3University of Mississippi, Jackson, MS;
4University of Pittsburgh Medical Center,
Pittsburgh, PA
Background: Prior studies have suggested that both
N-acetylcysteine (NAC) and intravenous (IV) fluids confer renal protection in patients exposed to radiocontrast
media. However, few studies have focused on CT scans
in ED patients.
Objectives: To test the hypothesis that NAC plus normal saline (NS) is more effective than saline alone in the
prevention of radiocontrast nephropathy (RCN).
Methods: Double blind, randomized, controlled trial
conducted in two tertiary care, urban EDs. Inclusion
Criteria: Patients undergoing a clinically indicated CT
scan with IV contrast who were 18 years or older and
had one or more RCN risk factors. Exclusion Criteria:
end-stage renal disease, pregnancy, clinical instability.
Intervention: treatment group: 3 grams of NAC in
500 cc NS as an IV bolus, and then 200 mg per hour in
67 cc NS per hour for up to 24 hours; placebo group:
500 cc NS bolus and then 67 cc/hr NS as above. Primary outcome: RCN defined as an increase in serum
creatinine by 25% or 0.5 mg/dl measured 48–72 hours
after CT. Follow-up: as inpatient or by outpatient phlebotomy. Statistical Methods: comparisons made by
Fisher’s exact test and logistic regression for modeling.
Power calculation: assuming a 20% baseline event
rate, to find a 10% absolute risk reduction, alpha = 0.05
and 90% power, we estimated needing 588 patients.
O’Brien-Fleming stopping rules were specified a priori.
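The primary outcome definition reduces to a simple check on paired creatinine values. A minimal sketch (the ">=" comparisons are an assumption where the abstract says "by 25% or 0.5 mg/dl"):

    def contrast_nephropathy(baseline_cr, followup_cr):
        # RCN: serum creatinine at 48-72 h rises by >= 25% of baseline
        # or by >= 0.5 mg/dL in absolute terms.
        rise = followup_cr - baseline_cr
        return rise >= 0.5 or rise >= 0.25 * baseline_cr

    print(contrast_nephropathy(1.0, 1.3))   # True: 30% relative rise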
Results: We enrolled 399 patients, of whom 357 (89%)
completed follow-up and were included. The study was
stopped early due to futility/equivalence. The population was well matched between groups. The RCN rate
in the NAC plus NS group was 14/185 (7.6%), versus
12/172 (6.9%) in the saline only group, p = 0.83. However, we did find a strong correlation between IV fluid
administration and reduced incidence of RCN. The rate
of RCN in patients who received <1 liter of IV fluids
was 19/158 (12.0%), compared to 7/199 (3.5%) in those
who received ≥1 liter (p < 0.001). Our final adjusted
model found that NAC, age, and CHF were not associated with RCN, but there was a 69% risk reduction (OR 0.41; 95% CI: 0.21–0.80) per liter of IV fluids administered.
Conclusion: We found no benefit of routine NAC administration above IV fluids alone in ED patients undergoing contrast-enhanced CT, but there was a strong association between the volume of IV fluids administered in the ED and a reduction in RCN.
34
Patients Transferred from an Outpatient
Clinic in Rural Haiti: An Evaluation of
Reason for Transfer
Benjamin D. Nicholson, Harinder S. Dhindsa,
Zachary W. Lipsman, Benjamin Yoon,
Renee D. Reid
Virginia Commonwealth University, Richmond,
VA
Background: The limitations of developing world medical infrastructure require that patients be transferred from health clinics only when patient care needs exceed the level of care available at the clinic and the receiving hospital can provide definitive therapy.
Objectives: To determine what type of definitive care
service was sought when patients were transferred
from a general outpatient clinic operating Monday
through Friday from 8:00 AM to 3:00 PM in rural Haiti
to urban hospitals in Port-au-Prince.
Methods: Design - Prospective observational review of
all patients for whom transfer to a hospital was
requested or for whom a clinic ambulance was requested
to an off-site location to assist with patient care. Setting - Weekday, daytime-only clinic in Titanyen, Haiti. Participants/Subjects - Consecutive series of all patients for
whom transfer to another health care facility or for
whom an ambulance was requested during the time period of 11/22/2010–12/14/2010 and 3/28/2011–5/13/2011.
Results: Between 11/22/2010–12/14/2010 and 3/28/2011–
5/13/2011, 37 patients were identified who needed to be
transferred to a higher level of care. Sixteen patients
(43.2%) presented with medical complaints, 12 (32.4%)
were trauma patients, 6 (16.2%) were surgical, and 3
(8.1%) were in the obstetric category. Within these categories, 6 patients were pediatric and 4 non-trauma
patients required blood transfusion.
Conclusion: While trauma services are often focused
on in rural developing world medicine, the need for
obstetric care and blood transfusion constituted six
(16.2%) cases in our sample. These patients raise important public health, planning, and policy questions relating to access to prenatal care and the need to better
understand transfusion medicine utilization among
rural Haitian patients with non-trauma related transfusion needs. The data set is limited by sample size and
single location of collection. Another limitation is that many patients may not present to the clinic for their health care needs if they know that the resources to provide definitive care are unavailable.
35
Competency-Based Measurement of EM
Learner Performance in International
Training Programs
James Kwan1, Cherri Hobgood2, Venkataraman
Anantharaman3, Glen Bandiera4, Gautam
Bodiwala5, Peter Cameron6, Pinchas Halpern7,
C. James (Jim) Holliman8, Nicholas Jouriles9,
Darren Kilroy10, Terrence Mulligan11, Andrew
Singer12
1Sydney Medical School, Westmead, Australia;
2Indiana University School of Medicine,
Indianapolis, IN; 3Singapore General Hospital,
Singapore, Singapore; 4St Michael’s Hospital,
University of Toronto, Toronto, ON, Canada;
5Leicester Royal Infirmary, Leicester, United
Kingdom; 6The Alfred Hospital Emergency and
Trauma Centre, Monash University, Melbourne,
Australia; 7Tel Aviv Medical Center, Tel Aviv,
Israel; 8Uniformed Services University of the
Health Sciences, Bethesda, MD; 9Akron Medical
Center, Akron, OH; 10College of Emergency
Medicine, London, United Kingdom; 11University
of Maryland School of Medicine, Baltimore, MD;
12Australian Government Department of Health
and Ageing, Australian National University
Medical School, The Canberra Hospital,
Canberra, Australia
Background: The International Federation for Emergency Medicine (IFEM) has previously defined the minimum standards for EM specialist training in a
competency-based curriculum. This curriculum provides a foundation for reforming and standardizing
educational practice within the international community. However, all educators regardless of their nations’
existing academic EM infrastructures require methods
to assess the progress of learners, in both a formative
and summative manner.
Objectives: To develop recommendations for assessment for an IFEM model curriculum for EM specialists.
Methods: An expert panel of medical educators from
IFEM member nations was convened. Each member
provided detailed information regarding training standards, assessment methods, and metrics for their host
nations. In addition, an extensive literature search was
performed to identify best practices that might not
have been identified by the consensus process. EM
curricular content items were mapped to both the
CanMEDS and ACGME curricular frameworks. Similarly, assessment methods used by member nations for
specific curricular elements were mapped, tabulated,
and compared for determination of international
consensus.
Results: A range of assessment methods was identified
as currently being used in member nations with established academic EM infrastructure. As there is a wide
variability in the academic EM infrastructure in many
member nations, feasibility was an important consideration in determining the utility of different assessment
methods in these nations. The portfolio was considered
to be potentially useful for assessing the performance
of a trainee, allowing the triangulation of aggregated
information across multiple sources of information,
assessment methods, and different time points.
Conclusion: These assessment recommendations provide educators, medical professionals, and experts in EM with (i) an overview of the basic principles of
designing assessment programs, (ii) a process for mapping the curriculum onto existing competency frameworks which are currently being utilized in IFEM
member nations, (iii) an overview of methods of assessment and standards of assessment currently being
utilized in IFEM member nations, and (iv) a description
of a process for aligning the curriculum with assessment methods best suited to measure the different
competencies.
36
US Model Emergency Medicine in Japan
Seung Young Huh1, Masaru Suzuki2, Seikei
Hibino3, Takashi Shiga4, Takeshi Shimazu5,
Shoichi Ohta6
1Aizawa Hospital, Matsumoto, Japan; 2Keio
University School of Medicine, Tokyo, Japan;
3University of Minnesota Medical Center,
Fairview, Minneapolis, MN; 4Tokyo Bay Urayasu
Ichikawa Medical Center, Urayasu, Japan; 5Osaka
University School of Medicine, Osaka, Japan;
6Tokyo Medical University, Tokyo, Japan
Background: The practice of emergency medicine in
Japan has been unique in that emergency physicians
are mostly engaged in critical care and trauma with a
multi-specialty model. Over the last decade, with progress in medicine, an aging population with complicated problems, and the institution of postgraduate general clinical training, US model emergency medicine with a single-specialty model has been emerging throughout Japan. However, its current status is unknown.
Objectives: The objective of this study was to investigate the current status of implementation of the US
model emergency medicine at emergency medicine
training institutions accredited by the Japanese Association for Acute Medicine (JAAM).
Methods: The ER Committee of the JAAM, the most
prestigious professional organization in Japanese emergency medicine, conducted the survey by sending questionnaires to 499 accredited emergency medicine
training institutions.
Results: Valid responses obtained from 299 facilities
were analyzed. US model EM was provided in 211 facilities (71% of 299 facilities), either in full time (24 hours
a day, seven days a week; 123 facilities) or in part time
(less than 24 hours a day; 88 facilities). Among these
211 US model facilities, 44% had 251–500 beds. The annual number of ED visits was less than 20,000 in 64%, and 37% received between 2,001 and 4,000 ambulance transfers per year. The number of emergency physicians was less than 5 in 60% of the
facilities. Postgraduate general clinical training was
offered at US model ED in 199 facilities, and ninety hospitals adopted US model EM after 2004, when a 2-year
period of postgraduate general clinical training became
mandatory for all medical graduates. Sixty-four facilities provided a residency program to be a US model
emergency physician, and another 9 institutions were
planning to establish it.
Conclusion: US model EM has emerged and become commonplace in Japan. Advances in medicine, an aging population, and the mandatory postgraduate general clinical training system are considered to be contributing factors.
37
Occupational Upper Extremity Injuries
Treated At A Teaching Hospital In Turkey
Erkan Gunay, Ersin Aksay, Ozge Duman
Atilla, Nilay Zorbalar, Savas Sezik
Tepecik Research and Training Hospital, Izmir,
Turkey
Background: Workplace safety and occupational health problems are growing issues, especially in developing countries, as a result of industrial automation and technologic change. Occupational injuries are preventable, but they can occasionally cause morbidity and mortality, resulting in work day loss and financial problems. Hand injuries account for one-third of all traumatic injuries, and the hand is the body part most often injured in occupational accidents.
Objectives: We aimed to evaluate patients with occupational upper extremity injuries for demographic characteristics, injury types, and work day loss.
Methods: Trauma patients over 15 years old admitted
to our emergency department with an occupational
upper extremity injury were prospectively evaluated
from 15.04.2010 to 30.04.2011. Patients with one or
more of digit, hand, forearm, elbow, humerus, and
shoulder injuries were included. Exclusion criteria were
multitrauma, patient refusal to participate, and insufficient data. Patients were followed up from the hospital
information system and by phone for work day loss
and final diagnosis.
Results: During the study period there were 570
patients with an occupational upper extremity injury.
A total of 521 (91.4%) patients were included. Patients were 92.1% male, 36.5% were between the ages of 25 and 34, and the mean age was 32.9 ± 9.6 years. 43.8% of the patients were from the metal and machinery sector, and primary education was the highest education level for 74.7% of the patients. The most frequently injured parts were the fingers, most often the index finger and thumb. Crush injury was the most common injury type. 96.3% (n = 502) of the patients were discharged after treatment in the emergency department. Tendon injuries, open fractures, and high-degree burns were the reasons for admission to clinics. Mean work day loss was 12.8 ± 27.2 days, and it increased for patients requiring laboratory or radiologic studies, consultant evaluation, or admission. The 15–24 age group had a significantly lower average work day loss.
Conclusion: Evaluating occupational injury characteristics and risks is essential for identifying preventive measures and actions. With the guidance of this study, preventive actions focusing on high-risk sectors and patients may be the key to avoiding occupational injuries and creating safer workplace environments, thereby reducing financial and public health burdens.
38
Mixed Methods Evaluation of Emergency
Physician Training Program in India
Erika D. Schroeder1, Anne Daul2,
Kate Douglass1, Punidha Kaliaperumal3,
Mary Pat McKay1
1George Washington University, Washington,
DC; 2Emory University, Atlanta, GA; 3Max
Hospital, New Delhi, India
Background: As emergency medicine (EM) gains
increased recognition and interest in the international
arena, a growing number of training programs for
emergency health care workers have been implemented
in the developing world through international partnerships.
Objectives: To evaluate the quality and appropriateness
of an internationally implemented emergency physician
training program in India.
Methods: Physicians participating in an internationally
implemented EM training program in India were
recruited to participate in a program evaluation.
A mixed methods design was used including an online
anonymous survey and semi-structured focus groups.
The survey assessed the research, clinical, and didactic
training provided by the program. Demographics and
information on past and future career paths were also
collected. The focus group discussions centered around
program successes and challenges.
Results: Fifty of 59 eligible trainees (85%) participated
in the survey. Of the respondents, the vast majority
were Indian; 16% were female, and all were between
the ages of 25 and 45 years (mean age 31 years). All but
two trainees (96%) intend to practice EM as a career.
One-third listed a high-income country first for preferred practice location and half listed India first.
Respondents directly endorsed the program structure
and content, and they demonstrated gains in self-rated
knowledge and clinical confidence over their years of
training. Active challenges identified include: (1) insufficient quantity and inconsistent quality of Indian faculty,
(2) administrative barriers to academic priorities, and
(3) persistent threat of brain drain if local opportunities
are inadequate.
Conclusion: Implementing an international emergency
physician training program with limited existing local
capacity is a challenging endeavor. Overall, this evaluation supports the appropriateness and quality of this
partnership model for EM training. One critical challenge is achieving a robust local faculty. Early negotiations are recommended to set educational priorities,
which include assuring access to EM journals. Attrition of graduated trainees to high-income countries due
to better compensation or limited in-country opportunities continues to be a threat to long-term local capacity
building.
39
An Analysis of Proposed Core Curriculum
Elements for International Emergency
Medicine and Global Health Fellowships
Gabrielle Jacquet1, Alexander Vu1, Bill Ewen2,
Bhakti Hansoti3, Steven Andescavage4,
David Price5, Robert Suter2, Jamil Bayram1
1Johns Hopkins University, Baltimore, MD;
2University of Texas Southwestern, Dallas, TX;
3University of Chicago, Chicago, IL; 4George
Washington University, Washington, DC;
5Gwinnett Medical Center, Lawrenceville, GA
Background: With an increasing frequency and intensity of manmade and natural disasters, and a corresponding surge in interest in international emergency
medicine (IEM) and global health (GH), the number of
IEM and GH fellowships is constantly growing. There
are currently 34 IEM and GH fellowships, each with a
different curriculum. Several articles have proposed the
establishment of core curriculum elements for fellowship training. To the best of our knowledge, no study
has examined whether IEM and GH fellows are actually
fulfilling these criteria.
Objectives: This study sought to examine whether current IEM and GH fellowships are consistently meeting
these core curricula.
Methods: An electronic survey was administered to
current IEM and GH fellowship directors, current fellows, and recent graduates of a total of 34 programs.
Survey respondents stated their amount of exposure to
previously published core curriculum components: EM
System Development, Humanitarian Assistance, Disaster Response, and Public Health. A pooled analysis
comparing overall responses of fellows to those of program directors was performed using a two-sample t-test.
Results: Response rates were 88% (n = 30) for program
directors and 53% (n = 17) for current and recent fellows. Programs varied significantly in terms of their
emphasis on and exposure to six proposed core curriculum areas: EM System Development, EM Education
Development, Humanitarian Aid, Public Health, EMS,
and Disaster Management. Only 43% of programs
reported having exposure to all four core areas. As
many as 67% of fellows reported knowing their curriculum only somewhat or not at all prior to starting the
program.
Conclusion: Many fellows enter IEM and GH fellowships without a clear sense of what they will get from
their training. As each fellowship program has different
areas of curriculum emphasis, we propose not to
enforce any single core curriculum. Rather, we suggest
the development of a mechanism to allow each fellowship program to present its curriculum in a more transparent manner. This will allow prospective applicants
to have a better understanding of the various programs’ curricula and areas of emphasis.
40
Predicting ICU Admission and Mortality at
Triage using an Automated Computer
Algorithm
Steven Horng1, David A. Sontag2, David T.
Chiu1, Joshua W. Joseph1, Nathan I. Shapiro1,
Larry A. Nathanson1
1Beth Israel Deaconess Medical Center / Harvard
Medical School, Boston, MA; 2New York
University, New York, NY
Background: Advance warning of probable intensive
care unit (ICU) admissions could allow the bed placement process to start earlier, decreasing ED length of
stay and relieving overcrowding conditions. However,
physicians and nurses poorly predict a patient’s ultimate disposition from the emergency department at triage. A computerized algorithm can use commonly
collected data at triage to accurately identify those who
likely will need ICU admission.
Objectives: To evaluate an automated computer algorithm at triage to predict ICU admission and 28-day
in-hospital mortality.
Methods: Retrospective cohort study at a 55,000 visit/
year Level I trauma center/tertiary academic teaching
hospital. All patients presenting to the ED between
12/16/2008 and 10/1/2010 were included in the study.
The primary outcome measure was ICU admission
from the emergency department. The secondary outcome measure was 28-day all-cause in-hospital mortality. Patients discharged or transferred before 28 days
were considered to be alive at 28 days. Triage data
includes age, sex, acuity (emergency severity index),
blood pressure, heart rate, pain scale, respiratory rate,
oxygen saturation, temperature, and a nurse’s free
text assessment. A Latent Dirichlet Allocation algorithm was used to cluster words in triage nurses’ free
text assessments into 500 topics. The triage assessment for each patient was then represented as a probability distribution over these 500 topics. Logistic
regression was then used to determine the prediction
function.
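The text-modeling pipeline described here (word counts to LDA topic mixtures to logistic regression) can be sketched with scikit-learn; the toy notes, labels, and two-topic setting below are illustrative assumptions only (the study used 500 topics plus structured triage data):

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LogisticRegression

    notes = ["crushing chest pain diaphoretic hypotensive",
             "ankle sprain after fall",
             "severe shortness of breath hypoxic",
             "medication refill request"]
    icu_admit = [1, 0, 1, 0]                       # toy outcome labels

    model = make_pipeline(
        CountVectorizer(),                         # free text -> word counts
        LatentDirichletAllocation(n_components=2,  # study used 500 topics
                                  random_state=0), # counts -> topic mixture
        LogisticRegression(),                      # topics -> P(ICU admission)
    )
    model.fit(notes, icu_admit)
    print(model.predict_proba(["chest pain hypotensive"])[:, 1])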
Results: A total of 94,973 patients were included in the
study. 3.8% were admitted to the ICU and 1.3% died
within 28 days. These patients were then randomly allocated to train (n = 75,992; 80%) and test (n = 18,981;
20%) data sets. The area under the receiver operating
characteristic curve (AUC) when predicting ICU
admission at triage was 0.91 (derivation) and 0.90 (validation). The AUC for predicting 28-day mortality was
0.94 (derivation) and 0.92 (validation).
Conclusion: Triage computer algorithms can accurately
identify patients who will require ICU admission using
entirely automated techniques. These predictions can
be made at the earliest point in a patient’s ED stay, several hours before a formal bed request is made. Predicting future resource utilization is essential in systems
modeling and other approaches to systematically
decrease ED crowding.
41
Replacing Traditional Triage with a Rapid
Evaluation Unit Decreases Left-Without-Being-Seen Rate at a Community
Emergency Department
Dominic Ruocco1, Jeffrey P. Green2, Gladys
Sillero1, Tony Berger2
1Palisades Medical Center, Palisades, NJ; 2UC
Davis Medical Center, Davis, CA
Background: Many patients who leave the emergency
department (ED) without being seen (LWBS) are seriously ill and would benefit from immediate evaluation.
Objectives: To determine whether replacing traditional
ED triage with a Rapid Evaluation Unit (REU) would
decrease the proportion of patients who LWBS at a
community ED.
Methods: This was a pre/post interventional study
design. Setting: A 155-bed community hospital with
35,000 annual ED visits. Intervention: On March 15,
2008 traditional ED triage (focused history & vital sign
assessment with patient return to waiting room) was
replaced with a REU, which entailed immediate placement of ambulatory patients into a treatment area. This
allowed for parallel task performance by registration,
triage, nursing, and providers. No significant changes
to the ED or hospital physical structure, staffing, or
other patient processing changes occurred during the
study period. The primary outcome was change in
monthly average number of patients who LWBS as a
proportion of monthly ED census for a 10-month period prior to (May 2007- February 2008) and after REU
initiation (May 2008–February 2009). ED throughput
times for the same time period were also analyzed.
These time periods represented the longest contiguous
periods before and after REU initiation not involving
other staffing/infrastructure changes or the immediate
intervention period (March-April 2008).
Results: The average monthly ED census increased by
102 visits (95%CI -11 to 215 visits) from the pre-REU to
the post-REU study-period. In spite of this increase, the
average monthly proportion of patients who LWBS
decreased from 3.6% in the 10-month pre-REU period
to 1.9% in the 10-month post-REU period, representing
a 1.7% absolute decrease (95%CI 0.64, 2.72%) and a
53% relative decrease. Similarly, in the post-REU period, time from ED arrival to placement in a treatment
area decreased by 35 minutes (95%CI 30.9, 39.3 min.),
time from ED arrival to provider evaluation decreased
by 28 minutes (95%CI 22.1, 33.9 min.), and ED
length-of-stay decreased by 43 minutes (95%CI 20.1,
65.0 min.) compared to the pre-REU time period.
Conclusion: Replacing traditional ED triage with a
REU decreased the proportion of ED patients who
LWBS as well as throughput times at a community ED.
This improvement occurred without significantly
changing ED or hospital physical structure, staffing, or
other process changes.
42
Failure to Validate Hospital Admission
Prediction Models Adding Coded Chief
Complaint to Demographic, Emergency
Department Operational, and Patient Acuity
Data Available at ED Triage
Neal Handly1, Arvind Venkat2, Jiexun Li3,
David A. Thompson4, David M. Chuirazzi2
1
Drexel University College of Medicine,
Philadelphia, PA; 2Allegheny General Hospital,
Pittsburgh, PA; 3Drexel University School of
Information Science and Technology,
Philadelphia, PA; 4Northwestern University
School of Medicine, Chicago, IL
Background: At the 2011 SAEM Annual Meeting, we
presented the derivation of two hospital admission prediction models adding coded chief complaint (CCC) data
from a published algorithm (Thompson et al. Acad
Emerg Med 2006;13:774–782) to demographic, ED operational, and acuity (Emergency Severity Index (ESI)) data.
Objectives: We hypothesized that these models would
be validated when applied to a separate retrospective
cohort, justifying prospective evaluation.
Methods: We conducted a retrospective, observational
validation cohort study of all adult ED visits to a single
tertiary care center (census: 49,000/yr) (4/1/09–12/31/10).
We downloaded from the center’s clinical tracking system demographic (age, sex, race), ED operational (time
and day of arrival), ESI, and chief complaint data on
each visit. We applied the derived CCC hospital admission prediction models (all identified CCC categories
and CCC categories with significant odds of admission
from multivariable logistic regression in the derivation
cohort) to the validation cohort to predict odds of
admission and compared to prediction models that consisted of demographic, ED operational, and ESI data,
adding each category to subsequent models in a stepwise manner. Model performance is reported by area-under-the-curve (AUC) data and 95%CI.
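The stepwise comparison of nested models by validation AUC can be illustrated with a short sketch. The simulated data, feature blocks, and refitting of each model are illustrative assumptions only; the study applied previously derived model coefficients to the validation cohort rather than refitting.

```python
# Sketch (assumed): nested logistic models compared by validation AUC; the
# simulated data and the refitting step are illustrative, not the study method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
demo_ops = rng.normal(size=(n, 4))           # age, sex, race, time/day of arrival
esi = rng.integers(1, 6, size=(n, 1))        # Emergency Severity Index
ccc = rng.integers(0, 2, size=(n, 87))       # indicator-coded chief complaints
admit = (rng.random(n) < 0.26).astype(int)   # ~26% admission rate, as observed

blocks = {
    "demographics+operational": demo_ops,
    "+ESI": np.hstack([demo_ops, esi]),
    "+ESI+CCC": np.hstack([demo_ops, esi, ccc]),
}
train = rng.random(n) < 0.5                  # stand-in derivation/validation split
for name, X in blocks.items():
    m = LogisticRegression(max_iter=2000).fit(X[train], admit[train])
    auc = roc_auc_score(admit[~train], m.predict_proba(X[~train])[:, 1])
    print(f"{name}: validation AUC = {auc:.3f}")
```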
Results: 85,144 ED visits were included (51.2% female,
22.6% age>65, 66.1% Caucasian) with 22,363 visits
resulting in hospital admission. 15.8% of visits were at
night (12 am–8 am) and 27.3% on weekends. ESI percentages were as follows: Level 1: 2.3, 2: 9.0, 3: 55.5, 4:
25.6 and 5: 7.6. All descriptive characteristics were
comparable to the derivation cohort. The demographics
and ED operational variable model had an AUC of
0.712 (95%CI 0.708–0.715). Adding ESI data, the AUC
was 0.815 (95%CI 0.812–0.819). Adding all 213 identified
CCC categories to the demographics, ED operational,
and ESI model, the AUC was 0.698 (95%CI 0.694–
0.702). Adding the 87 CCC categories with significant
odds of admission to the demographics, ED
operational, and ESI acuity model, the AUC was 0.812
(95%CI 0.811–0.818).
Conclusion: In this application to a retrospective validation cohort of previously derived hospital admission
prediction models adding CCC to demographic, ED
operational, and ESI data, the CCC models did not demonstrate performance strong enough to justify prospective evaluation.
43
A Decision Tree Algorithm to Assist
Pre-ordering Diagnostics on Emergency
Department Patients During Triage
Gerald Maddalozzo, Greg Neyman,
Greg Sorkin, Joseph Calabro
St Michael’s Medical Center, Newark, NJ
Background: With more patients accessing emergency
departments (ED) nationally, waiting times and lengths
of stay are on the rise. Patients are waiting longer with unappreciated critical illness, and patient dissatisfaction and walkout rates are increasing. Pre-ordering diagnostic blood work or radiology prior to physician evaluation can decrease length of stay (LOS) by reducing the secondary delay in which the physician waits for results after having evaluated the patient. Over-ordering inappropriately adds to costs,
creates results which may be lost to proper follow-up,
and tips patients towards more expensive tests or
admission.
Objectives: To determine whether a Decision Tree Algorithm (DTA),
generated from a national dataset, can appropriately
predict physician test orders, using inputs that are
available at triage.
Methods: The National Hospital Ambulatory Medical
Care Survey - Emergency Department (NHAMCS ED) 2009 database was used for algorithm generation.
A C4.5 DTA was trained on demographics,
nursing home residence, ambulance arrival, vital
signs, pain level, triage level, 72-hour return, number
of past visits in the previous year, injury, and one of
122 chief complaint codes (representing 90% of all visits in the database). Outputs for training included
ordering of a complete blood count, basic chemistry
(electrolytes, blood urea nitrogen, creatinine), cardiac
enzymes, liver function panel, urinalysis, electrocardiogram, x-ray, computed tomography, or ultrasound.
Once trained, it was used on the NHAMCS-ED 2008
database, and predictions were generated. Predictions
were compared with documented physician orders.
Outcomes included the percent of total patients who
were correctly pre-ordered, sensitivity (the percent of
patients who had an order that were correctly predicted), and the percent over-ordered. Waiting time
for correctly pre-ordered patients was highlighted, to
represent a potential reduction in length of stay
achieved by preordering. LOS for patients overordered was highlighted to see if over-ordering may
cause an increase in LOS for those patients. Unit cost
of the test was also highlighted, as taken from the
2011 Medicare fee schedule.
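A sketch of this prediction setup, under stated assumptions: scikit-learn's CART tree stands in for the C4.5 algorithm named above, and the fields, sample sizes, and ordering rates are placeholders rather than NHAMCS-ED data.

```python
# Sketch (assumed): a decision tree trained on NHAMCS-style triage fields to
# predict whether a test (here, a CBC) will be ordered. scikit-learn's CART is
# a stand-in for the C4.5 algorithm named in the abstract.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

rng = np.random.default_rng(2)
n = 5000
X = np.column_stack([
    rng.integers(0, 100, n),   # age
    rng.integers(0, 2, n),     # ambulance arrival
    rng.integers(1, 6, n),     # triage level
    rng.integers(0, 11, n),    # pain score
    rng.integers(0, 122, n),   # chief complaint code (122 categories)
])
cbc_ordered = (rng.random(n) < 0.37).astype(int)   # ~37% CBC ordering rate

tree = DecisionTreeClassifier(min_samples_leaf=50)
tree.fit(X[:4000], cbc_ordered[:4000])             # train on one year's data
pred = tree.predict(X[4000:])                       # apply to another year
truth = cbc_ordered[4000:]

print("correctly pre-ordered:", np.mean(pred & truth))   # predicted AND ordered
print("sensitivity:", round(recall_score(truth, pred), 3))
print("predicted/ordered ratio:", round(pred.sum() / truth.sum(), 2))
```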
Results: Patients correctly pre-ordered may experience
a median LOS reduction of 30 minutes.
Conclusion: A DTA can assist in pre-ordering tests for ED patients at triage.
44
Reducing ED Length Of Stay For
Dischargeable Patients: Advanced Triage
And Now Advanced Disposition
Jeff Dubin1, H. Joseph Blumenthal2,
David Roseman2, David Milzman1
1
MedStar Washington Hospital Center, Washington, DC; 2MedStar Institute for Innovation, Washington, DC
Background: Advanced triage by a physician/nurse
team reduces left without being seen rates and door to
Table - Abstract 43: Results

Test | All Ordered | All Predicted | Correctly Pre-ordered | Sensitivity | Median wait time in minutes, correctly pre-ordered (Q1–Q3) | Over-ordered | Median LOS in minutes, over-ordered (Q1–Q3) | Unit cost
Complete Blood Count | 36.57% | 37.65% | 21.81% | 59.64% | 32.08 (15.41–63.53) | 143.32% | 140.26 (82.32–232.61) | $12.31
Basic Chemistry | 26.50% | 25.78% | 12.29% | 46.38% | 31.06 (14.7–62.28) | 150.90% | 167.26 (96.04–282.94) | $16.09
Cardiac Enzymes | 14.17% | 10.23% | 3.55% | 25.06% | 26.79 (12.82–53.22) | 147.14% | 194.74 (112.19–328.41) | $53.07
Liver Function Panel | 9.73% | 10.96% | 1.92% | 19.75% | 30.81 (15.4–58.95) | 192.87% | 189.21 (107.16–324.01) | $15.53
Urinalysis | 23.22% | 22.32% | 9.71% | 41.82% | 35.64 (17.32–69.84) | 154.30% | 172.25 (97.44–295.31) | $6.02
Electrocardiogram | 18.06% | 15.34% | 7.03% | 38.95% | 27.69 (13.11–55.5) | 146.02% | 182.02 (103.13–311.6) | $22.96
X-ray | 34.63% | 32.89% | 17.32% | 50.03% | 32.59 (16.12–62.92) | 144.97% | 147.74 (83.48–253.52) | $43.06
Computed Tomography | 13.78% | 13.96% | 4.38% | 31.81% | 30.49 (14.39–61.33) | 169.55% | 165.06 (93.4–282.86) | $296.36
Ultrasound | 3.31% | 1.89% | 0.41% | 12.39% | 38.66 (18.72–76.02) | 144.78% | 200.92 (124.42–317.44) | $226.42
physician times. However, during peak ED census
times, many patients with completed tests and treatment initiated by triage await discharge by the next
assigned physician.
Objectives: Determine if a physician-led discharge disposition (DD) team can reduce the ED length of stay
(LOS) for patients of similar acuity who are ultimately
discharged compared to standard physician team
assignment.
Methods: This prospective observational study was
performed from 10/2010 to 10/2011 at an urban tertiary referral academic hospital with an annual ED volume of 87,000 visits. Only Emergency Severity Index
Level 3 patients were evaluated. The DD team was
scheduled weekdays from 14:00 until 23:00. Several ED
beds were allocated to this team. The team was comprised of one attending physician and either one nurse
and a tech or two nurses. Comparisons were made
between LOS for discharged patients originally triaged
to the main ED side who were seen by the DD team
versus the main side teams. Time from triage physician to team physician, team physician to discharge
decision time, and patient age were compared by
unpaired t-test. Differences were studied for number
of patients receiving x-rays, CT scan, labs, and medications.
Results: DD team mean LOS in hours for discharged
patients was shorter at 3.4 (95% CI: 3.3–3.6, n = 1451)
compared to 6.4 (95% CI: 6.3–6.5, n = 4601) on the main
side, p < 0.01. The mean time from triage physician to
DD team physician was 1.4 hours (95% CI: 1.4–1.5,
n = 1447) versus 2.7 hours (95% CI: 2.7–2.8, n = 4568)
to main side physician, p < 0.01. The DD team physician
mean time to discharge decision was 1.0 hour (95% CI:
1.0–1.1, n = 1432) compared to 2.5 hours (95% CI: 2.4–
2.6, n = 4590) for main side physician, p < 0.01. The DD
team patients’ mean age was 42.6 years (95% CI: 41.9–
43.6, n = 1454) compared to main side patients’ mean
age of 49.1 years (95% CI: 48.5–49.6, n = 4621). The DD
team patients (n = 1454) received fewer x-rays (40% vs.
59%), CT scans (13% vs. 23%), labs (64% vs. 85%), and
medications (63% vs. 68%) than main side patients
(n = 4621), p < 0.01 for all compared.
Conclusion: The DD team complements the advanced
triage process to further reduce LOS for patients who
do not require extended ED treatment or observation.
The DD team was able to work more efficiently because
its patients tended to be younger and had fewer lab
and imaging tests ordered by the triage physician compared to patients who were later seen on the ED main
side.
45
ED Boarding is Associated with Increased
Risk of Developing Hospital-Acquired
Pressure Ulcers
Candace McNaughton, Wesley H. Self,
Keith Wrenn, Stephan Russ
Vanderbilt University, Nashville, TN
Background: Hospital-acquired pressure ulcers (HAPUs) are reportable hospital-acquired conditions and ‘‘never events’’ according to the Centers for Medicare and Medicaid Services (CMS). Patients boarded in the ED
for extended periods of time may not receive the same
preventive care for HAPU that in-patients do, such as
frequent repositioning, padding vulnerable skin, and
specialized mattresses.
Objectives: To evaluate the association between ED
boarding time and the risk of developing HAPU.
Methods: We conducted a retrospective cohort study
using administrative data from an academic medical
center with an adult ED with 55,000 annual patient visits. All patients admitted into the hospital through the
ED 6/30/2008–2/28/2011 were included. Development of
HAPU was determined using the standardized, national
protocol for CMS reporting of HAPU. ED boarding
time was defined as the time between an order for inpatient admission and transport of the patient out of
the ED to an in-patient unit. We used a multivariate
logistic regression model with development of a HAPU
as the outcome variable, ED boarding time as the
exposure variable, and the following variables as covariates: age, sex, initial Braden score, and admission to
an intensive care unit (ICU) from the ED. The Braden
score is a scale used to determine a patient’s risk for
developing a HAPU based on known risk factors.
A Braden score is calculated for each hospitalized
patient at the time of admission. We included Braden
score as a covariate in our model to determine if ED
boarding time was a predictor of HAPU independent
of Braden Score.
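The adjusted per-hour odds ratio described can be estimated with a model of the following form. The simulated cohort and coefficient values are illustrative assumptions, not study data.

```python
# Sketch (assumed, not the authors' model): adjusted odds of HAPU per hour of
# ED boarding, controlling for age, sex, Braden score, and ICU admission.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 10000
boarding_hr = rng.exponential(4.0, n)          # ED boarding time (hours)
age = rng.integers(18, 95, n)
male = rng.integers(0, 2, n)
braden = rng.integers(6, 24, n)                # admission Braden score
icu = rng.integers(0, 2, n)

# Simulate a small per-hour effect so the illustration has signal.
logit = -3 + 0.02 * boarding_hr + 0.01 * age - 0.15 * braden + 0.8 * icu
hapu = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([boarding_hr, age, male, braden, icu]))
fit = sm.Logit(hapu, X).fit(disp=0)
or_hr = np.exp(fit.params[1])                  # column 1 = boarding hours
lo, hi = np.exp(fit.conf_int()[1])             # 95% CI for that coefficient
print(f"adjusted OR per boarding hour: {or_hr:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```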
Results: Of 46,704 patients admitted to the hospital
through the ED during the study period, 243 developed
a HAPU during their hospitalization. Clinical characteristics are presented in the table. Per hour of ED boarding time, the adjusted OR of developing a HAPU was
1.02 (95% CI 1.01–1.04, p = 0.007). A median of 40
patients per day were admitted through the ED, accumulating 144 hours of ED boarding time per day, with
each hour of boarding time increasing the risk of developing a HAPU by 2%.
Conclusion: In this single-center, retrospective study,
longer ED boarding time was associated with increased
risk of developing a HAPU.
Table - Abstract 45: Clinical characteristics

Age, median years (IQR) | 51 (34, 66)
Male, no. (%) | 22,741 (48)
Braden Score, median (IQR) | 20 (18, 22)
Developed HAPU, no. (%) | 243 (0.52)
Admitted to an ICU, no. (%) | 13,163 (28)
ED boarding time, hrs, median (IQR) | 3.6 (1.7, 9.7)

46
Nursing Attitudes Regarding Boarding of
Admitted ED Patients
John Richards, Bryce Pulliam
UC Davis Medical Center, Sacramento, CA
Background: Boarding admitted patients in the
emergency department (ED) is a major cause of overcrowding and access block. One solution is boarding
admitted patients in inpatient ward hallways.
Objectives: To compare the opinions of ED and inpatient (ward) nurses toward inpatient boarding, and to assess their preferred boarding location if they were admitted as patients.
Methods: A survey was administered to a convenience
sample of ED and ward nurses. It was performed in a
631-bed academic medical center (30,000 admissions/yr)
with a 68-bed ED (60,000 visits/yr). Nurses were identified as ED or ward and whether they had previously
worked in the ED. The nurses were asked if there were
any circumstances where admitted patients should be
boarded in the ED or inpatient hallways. They were
also asked their preferred location if they were admitted as a patient. Six clinical scenarios were then presented and their opinions on boarding queried.
Results: Ninety nurses completed the survey; 35 (39%)
were current ED nurses (cED), 40 (44%) had previously
worked in the ED (pED). For the entire group 46 (52%)
believed admitted patients should board in the ED. Overall, 52 (58%) were opposed to inpatient boarding, with
20% of cED versus 83% of current ward (cW) nurses
(P < 0.0001) and 28% of pED versus 85% of nurses never
having worked in the ED (nED) opposed (P < 0.001). If
admitted as patients themselves, overall 43 (54%) preferred inpatient boarding, with 82% of cED versus 33%
of cW nurses (P < 0.0001) and 74% of pED versus 34%
nED nurses (P = 0.0007) preferring inpatient boarding.
For the six clinical scenarios, significant differences in
opinion regarding inpatient boarding existed in all but
two cases: a patient with stable COPD but requiring oxygen and an intubated, unstable sepsis patient.
Conclusion: Ward nurses and those who have never
worked in the ED are more opposed to inpatient boarding than ED nurses and nurses who have worked previously in the ED. Nurses admitted as patients seemed to
prefer not being boarded where they work. ED and
ward nurses seemed to agree that unstable or potentially unstable patients should remain in the ED.
47
Randomized Controlled Trial of
Volume-based Staffing
Brian H. Rowe1, Trevor Lashyn1, Mira Singh1,
Stephanie Couperthwaite1, Cristina Villa-Roel1,
Michael Bullard1, William Sevcik1,
Karen Latoszek2, Brian R. Holroyd1
1
University of Alberta, Edmonton, AB, Canada;
2
Alberta Health Services, Edmonton, AB, Canada
Background: Emergency department (ED) overcrowding is a common and growing problem. Among
throughput interventions, volume-based staffing has
been described infrequently.
Objectives: This study evaluated the effect of adding an
additional shift in a moderate case-complexity area of a
typical crowded, urban, high-volume academic centre with severe ED overcrowding.
Methods: This un-blinded, parallel group, controlled
trial took place between 24/06 and 24/08/2011 at a Canadian hospital experiencing growing and long-standing
ED overcrowding. Computerized, block-randomized
sequences (2-week blocks) were generated for a total of
8 weeks. Staff satisfaction was evaluated through pre/
post-shift and study surveys; administrative data (physician initial assessment (PIA), length of stay (LOS),
patients leaving without being seen (LWBS) and against
medical advice [LAMA]) were collected from an electronic, real-time ED information system. Data are presented as proportions and medians with interquartile
ranges (IQR); bivariable analyses were performed.
Results: ED physicians and nurses expected the intervention to reduce the LOS of discharged patients only.
PIA decreased during the intervention period (68 vs
74 minutes; p < 0.001). No statistically/clinically significant differences were observed in the LOS; however,
there was a significant reduction in the LWBS (4.7% to
3.5% p = 0.003) and LAMA (0.7% to 0.4% p = 0.028)
rates. While there was a reduction of approximately 5
patients seen per physician in the affected ED area, the
total number of patients seen on that unit increased by
approximately 10 patients/day. Overall, compared to
days when there was no extra shift, 61% of emergency
physicians stated their workload decreased and 73% felt
their stress level at work decreased.
Conclusion: While this study did not demonstrate a
reduction in the overall LOS, it did reduce PIA times
and the proportion of LWBS/LAMA patients. While
physicians saw fewer patients during the intervention
study period, the overall patient volume increased and
satisfaction among ED physicians was rated higher.
48
Provider- and Hospital-Level Variation In
Admission Rates And 72-Hour Return
Admission Rates
Jameel Abualenain1, William Frohna2,
Robert Shesser1, Ru Ding1, Mark Smith2,
Jesse M. Pines1
1
The George Washington University, Washington,
DC; 2Washington Hospital Center, Washington, DC
Background: The decision between inpatient and outpatient management of ED patients is among the most important and costliest decisions made by emergency physicians, but little has been published on variation in the decision to admit among providers, or on whether a provider's admission rate is related to the proportion of their patients who return within 72 hours of the initial visit and are subsequently admitted (72H-RA).
Objectives: We explored the variation in provider-level
admission rates and 72H-RA rates, and the relationship
between the two.
Methods: A retrospective study using data from three
EDs with the same information system over varying
time periods: Washington Hospital Center (WHC)
(2008–10), Franklin Square Hospital Center (FSHC)
(2006–9), and Union Memorial Hospital (UMH) (2005–9).
Patients were excluded if left without being seen, left
against medical advice, fast-track, psychiatric patients,
and aged <18 years. Physicians with <500 ED encounters or an admission rate <15% were excluded. Logistic
regression was used to assess the relationship between
physician-level 72H-RA and admission rates, adjusting
for patient age, sex, race, and hospital.
Results: 389,120 ED encounters were treated by 90
physicians. Mean patient age was 50 years (SD 20), 42%
male, and 61% black. Admission rates differed between
hospitals (WHC = 40%, UMH = 37%, and FSHC = 28%),
as did the 72H-RA (WHC = 0.9%, UMH = 0.6%, and
FSHC = 0.6%). Across all hospitals, there was great
variation in individual physician admission rates
(15.4%–50.0%). The 72H-RA rates were quite low, but
demonstrated a similar magnitude of individual variation (0.3%–1.2%). Physicians with the highest admission
rate quintile had lower odds of 72H-RA (OR 0.8 95% CI
0.7–0.9) compared to the lowest admission rate quintile,
after adjusting for other factors. No intermediate
admission rate quintiles (2nd, 3rd, or 4th) were significantly different from the lowest admission rate quintile
with regard to 72H-RA.
Conclusion: There is more than three-fold variation in individual physician admission rates, with similar relative variation in 72H-RA rates. The highest admitters have the lowest
72H-RA; however, evaluating the causes and consequences of such significant variation needs further
exploration, particularly in the context of health reform
efforts aimed at reducing costs.
49
Emergency Medicine Resident Physician
Attitudes about the Introduction of a Scribe
Program at an Academic EM Training Site
Mia Tanaka, Jordan Marshall, Christopher
Verdick, Richard C. Frederick, George Z.
Hevesy, Huaping Wang, John W. Hafner
University of Illinois College of Medicine at
Peoria, Peoria, IL
Background: ED scribes have become an effective
means to assist emergency physicians (EPs) with clinical
documentation and improve physician productivity.
Scribes have most often been utilized in busy community EDs, and their utility and functional integration into an academic medical center with resident physicians are unknown.
Objectives: To evaluate resident perceptions of attending physician teaching and interaction after introduction of scribes at an EM residency training program,
measured through an online survey. Residents in this
study were not working with the scribes directly, but
were interacting indirectly through attending physician
use of scribes during ED shifts.
Methods: An online ten question survey was administered to 31 residents of a Midwest academic emergency
medicine residency program (PGY1–PGY3 program, 12
annual residents), 8 months after the introduction of
scribes into the ED. Scribes were introduced as EMR
documentation support (Epic 2010, Epic Systems Inc.)
for attending EPs while evaluating primary patients and
supervising resident physicians. Questions investigated
EM resident demographics and perceptions of scribes
(attending physician interaction and teaching, effect on
resident learning, willingness to use scribes in the
future), using Likert scale responses (1 minimal, 9 maximum) and a graduated percentage scale used to quantify relative values, where applicable. Data were
analyzed using Kruskal-Wallis and Mann-Whitney
U tests.
Results: Twenty-one of 31 EM residents (68%) completed the survey (81% male; 33% PGY1, 29% PGY2,
38% PGY3). Four residents had prior experience with
scribes. Scribes were felt to have no effect on attending
EPs' direct resident interaction time (mean score 4.5, SD
1.2), time spent bedside teaching (4.8, SD 0.9), or quality
of teaching (4.9, SD 0.8), as well as no effect on residents’ overall learning process (4.6, SD 1.1). However,
residents felt positive about utilizing scribes at their
future occupation site (6.0, SD 2.7). No response differences were noted for prior experience, training level,
or sex.
Conclusion: When scribes are introduced at an EM
residency training site, residents of all training levels
perceive it as a neutral interaction, when measured in
terms of perceived time with attending EPs and quality
of the teaching when scribes are present.
50
The Effect of Introduction of an Electronic
Medical Record on Resident Productivity in
an Academic Emergency Department
Shawn London, Christopher Sala
University of Connecticut School of Medicine,
Farmington, CT
Background: There are few available data that describe the effect of implementation of an electronic
medical record (EMR) on provider productivity in the
emergency department, and no studies which, to our
knowledge, address this issue pertaining to housestaff
in particular.
Objectives: We seek to quantify the changes in provider productivity pre- and post-EMR implementation
to support our hypothesis that resident clinical productivity based on patients seen per hour will be negatively
affected by EMR implementation.
Methods: The academic emergency department at
Hartford Hospital, the principal clinical site in the University of Connecticut Emergency Medicine Residency,
sees over 95,000 patients on an annual basis. This environment is unique in that pre-EMR, patient tracking
and orders were performed electronically using the
Sunrise system (Eclipsys Corp) for over 8 years prior
to conversion to the Allscripts ED EMR in October,
2010 for all aspects of ED care. The investigators completed a random sample of days/evening/night/weekend shift productivity to obtain monthly aggregate
productivity data (patients seen per hour) by year of
training.
Results: There was an initial 4.2% decrease in productivity for PGY-3 residents, from an average of 1.44 patients seen per hour in the three blocks preceding activation of the EMR to 1.38 patients seen per hour in the three subsequent blocks. PGY-3 performance returned to baseline over the following blocks, reaching 1.48 patients per hour. There was no change noted in patients seen per hour for PGY-1 and PGY-2 residents.
Conclusion: While many physicians tend to assume
that EMRs pose a significant barrier to productivity in
the ED, in our academic emergency department, there
was no lasting change on resident productivity based
on the patients seen per hour metric. The minor
decrease which did occur in PGY-3 residents was transient and was not apparent 3 months after the EMR
was implemented. Our experience suggests that
decrease in the rate of patients seen per hour in the
resident population should not be considered justification to delay or avoid implementation of an EMR in the
emergency department.
Table - Abstract 50: Average patients seen per hour by PGY year

Block | PGY-1 | PGY-2 | PGY-3
2–4 | 0.83 | 1.25 | 1.44
5–7 | 0.89 | 1.30 | 1.38
7–9 | 0.91 | 1.32 | 1.48
10–13 | 1.01 | 1.44 | 1.52

51
Physician Feedback Reduces Resource Use
in the Emergency Department
Shabnam Jain1, Gary Frank2, Baohua Wu1,
Brent Johnson1
1
Emory University, Atlanta, GA; 2Children’s
Healthcare of Atlanta, Atlanta, GA
Background: Variation in physician practice is widely
prevalent and highlights an opportunity for quality
improvement and cost containment. Monitoring
resources used in the management of common pediatric emergency department (ED) conditions has been
suggested as an ED quality metric.
Objectives: To determine if providing ED physicians
with severity-adjusted data on resource use and outcomes, relative to their peers, can influence practice
patterns.
Methods: Data on resource use by physicians were
extracted from electronic medical records at a tertiary
pediatric ED for four common conditions in mid-acuity
(Emergency Severity Index level 3): fever, head injury,
respiratory illness, and gastroenteritis. Condition-relevant resource use was tracked for lab tests (blood
count, chemistry, CRP), imaging (chest x-ray, abdominal x-ray, head CT scan, abdominal CT scan), intravenous fluids, parenteral antibiotics, and intravenous
ondansetron. Outcome measures included admission to
hospital and ED length of stay (LOS); 72-hr return to
ED (RR) was used as a balancing measure. Scorecards
were constructed using box plots to show physicians
their practice patterns relative to peers (the figure
shows an example of the scorecard for gastroenteritis for one physician, showing resource use rates for IV
fluids and labs). Blinded scorecards were distributed
quarterly for five quarters using rolling-year averages.
A pre/post-intervention analysis was performed with
Sep 1, 2010 as the intervention date. Fisher’s exact and
Wilcoxon rank sum tests were used for analysis.
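The pre/post comparisons can be made with Fisher's exact test as described. In the sketch below, counts for the admission outcome are reconstructed from the reported percentages and visit totals, so they are approximate rather than the study's raw counts.

```python
# Counts reconstructed from reported figures: 7.4% of 24,834 pre-intervention
# visits and 6.7% of 21,038 post-intervention visits were admitted (approximate).
from scipy.stats import fisher_exact

pre_n, post_n = 24834, 21038
pre_adm = round(0.074 * pre_n)     # ~1,838 admissions pre-intervention
post_adm = round(0.067 * post_n)   # ~1,410 admissions post-intervention

table = [[pre_adm, pre_n - pre_adm],
         [post_adm, post_n - post_adm]]
odds_ratio, p = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p:.4f}")
```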
Results: We analyzed 45,872 patient visits across two
hospitals (24,834 pre- and 21,038 post-intervention),
comprising 17.6% of the total ED volume during the
study period. Patients were seen by 100 physicians
(mean 462 patients/physician). The table shows overall
physician practice in the pre- and post-intervention periods. Significant reduction in resource use was seen for
abdominal/pelvic CT scans, head CT scan, chest x-rays,
IV ondansetron, and admission to hospital. ED LOS
decreased from 129 min to 126 min (p = 0.0003). There
was no significant change in 72-hr return rate during the
study period (2.2% pre-, 2.0% post-intervention).
Conclusion: Feedback on comprehensive practice patterns including resource use and quality metrics can
influence physician practice on commonly used
resources in the ED.
Table - Abstract 51: Change in Resource Use, Length of Stay, and Return Rate Before and After Feedback

Resource/Outcome | Pre-intervention | Post-intervention | p-value
Abdomen/Pelvic CT (%) | 1.2 | 0.5 | <0.0001
Head CT (%) | 26.0 | 18.9 | <0.0001
Chest x-ray (%) | 31.7 | 28.9 | 0.0004
Abdominal x-ray (%) | 15.7 | 16.3 | ns
Lab tests (%) | 71.1 | 70.4 | ns
IV antibiotics (%) | 12.0 | 11.1 | ns
IV fluids (%) | 37.8 | 38.8 | ns
IV ondansetron (%) | 11.6 | 8.1 | <0.0001
Admission (%) | 7.4 | 6.7 | <0.0001

52
Publicly Posted Emergency Department
Wait Times: How Accurate Are They?
Nicholas Jouriles, Erin L. Simon, Peter L.
Griffin, Jo Williams
Akron General Medical Center, Akron, OH
Background: Hospitals are advertising their emergency department (ED) wait times on the internet,
billboards, via iPhone application, Twitter, and text
messaging. There is a paucity of data describing the
accuracy of publicly posted ED wait times.
Objectives: To examine the accuracy of publicly posted
wait times of four emergency departments within one
hospital system.
Methods: A prospective analysis comparing the posted wait times at four EDs to the wait times incurred by actual patients. The main hospital system calculated and
posted ED wait times every twenty minutes for all four
system EDs. A consecutive sample of all patients who
arrived 24/7 over a 4-week period during July and
August 2011 was included. An electronic tracking system identified patient arrival date and the actual
incurred wait time. Data consisted of the arrival time,
actual wait time, hospital census, budgeted hospital
census, and the posted ED wait time. For each ED the
difference was calculated between the publicly posted
ED wait time at the time of patient’s arrival and the
patient’s actual ED wait time. The average wait times
and average wait time error between the ED sites were
compared using a two-tailed Student’s t-test. The correlation coefficient between the differences in predicted/
actual wait times was also calculated for each ED.
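The error calculation is straightforward: for each patient, the posted wait time at arrival is subtracted from the wait actually incurred. A minimal sketch with simulated values (the distributions below are illustrative assumptions, not study data):

```python
# Sketch (assumed, not study data): per-patient wait-time error and the
# correlation between actual waits and error, as analyzed above.
import numpy as np
from scipy.stats import pearsonr, ttest_ind

rng = np.random.default_rng(4)
posted = rng.gamma(2.0, 20.0, 500)                        # posted wait at arrival (min)
actual = np.maximum(posted + rng.normal(10, 25, 500), 0)  # wait actually incurred
error = actual - posted                                   # wait-time error

print("average wait (min):", round(actual.mean(), 1))
print("average error (min):", round(error.mean(), 1))
r, p = pearsonr(actual, error)
print(f"wait vs error: r = {r:.2f}, p = {p:.3g}")
# Errors at two sites would be compared with a two-tailed t-test:
site2_error = rng.normal(1, 12, 500)
print(ttest_ind(error, site2_error))
```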
Results: There were 8890 wait times within the four EDs
included in the analysis. The average wait time (in minutes) at each facility was: 64.0 (±62.4) for the main ED,
22.0 (±22.1) for freestanding ED (FED) #1, 25.0 (±25.6) for
FED #2, and 10.0 (±12.6) for the small community ED.
The average wait time error (in minutes) for each facility
was 31(±61.2) for the main ED, 13 (±23.65) for FED #1, 17
(±26.65) for FED #2, and 1 (±11.9) for the community hospital ED. Differences between EDs were statistically significant for both average wait time and average wait time error (p < 0.0001). There was a positive correlation
between the average wait time and average wait time
error, with R-values of 0.84, 0.83, 0.58, and 0.48 for the
main ED, FED #1, FED #2, and the small community hospital ED, respectively. Each correlation was statistically
significant; however, no correlation was found between
the number of beds available (budgeted-actual census)
and average wait times.
Conclusion: Publicly posted ED wait times are accurate for facilities with fewer than 2000 ED visits per month. They are not accurate for EDs with more than 4000 visits per month.
53
Reduction of Pre-analytic Laboratory Errors
in the Emergency Department Using an
Incentive-Based System
Benjamin Katz, Daniel Pauze, Karen Moldveen
Albany Medical Center, Albany, NY
Background: Over the last decade, there has been an
increased effort to reduce medical errors of all kinds.
Laboratory errors have a significant effect on patient
care, yet they are usually avoidable. Several studies
suggest that up to 90% of laboratory errors occur during the pre- or post-analytic phase. In other words,
errors occur during specimen collection and transport
or reporting of results, rather than during laboratory
analysis itself.
Objectives: In an effort to reduce pre-analytic laboratory errors, the ED instituted an incentive-based program for the clerical staff to recognize and prevent
specimen labeling errors from reaching the patient.
This study sought to evaluate the effect of this incentive-based program.
Methods: This study examined a prospective cohort of
ED patients over a three year period in a tertiary care
academic ED with annual census of 72,000. As part of a
continuing quality improvement process, laboratory
specimen labeling errors are screened by clerical staff
by reconciling laboratory specimen label with laboratory requisition labels. The number of ‘‘near-misses’’ or
mismatched specimens captured by each clerk was then
blinded to all patient identifiers and was collated by
monthly intervals. Due to poor performance in 2009, an
incentive program was introduced in early 2010 by
which the clerk who captured the most mismatched
specimens would be awarded a $50 gift card on a quarterly basis. The total number of missed laboratory
errors was then recorded on a monthly basis. Investigational data were analyzed using bivariate statistics.
Results: In 2009 and the first month of 2010, 80,339
patients were treated in the ED with 89 errors found in
the laboratory. From February 2010 through the end of
the year, 65,411 patients were seen with 35 errors
reaching the laboratory. The institution of the incentive program was associated with a relative risk of pre-analytic laboratory error of 0.48 (95% CI 0.33–0.71).
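This risk estimate can be reproduced from the counts above; a minimal sketch using a normal approximation for the log relative risk:

```python
# Reproducing the reported result from the counts above
# (89 errors/80,339 visits pre-incentive; 35/65,411 post-incentive).
import math

a, n1 = 35, 65411          # errors, visits after the incentive began
b, n2 = 89, 80339          # errors, visits before the incentive
rr = (a / n1) / (b / n2)
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)    # SE of log(RR), normal approximation
lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
print(f"RR = {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")   # -> 0.48 (0.33, 0.71)
```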
Conclusion: The institution of an incentive program
was associated with a marked reduction in preanalytic
laboratory errors.
54
Comparison of Emergency Department
Operation Metrics by Annual Volume Over
7 Years
Daniel Handel1, James J. Augustine2,
Heather L. Farley3, Charles M. Shufflebarger4,
Benjamin C. Sun1, Robert E. O’Connor5
1
Oregon Health & Science University School of
Medicine, Portland, OR; 2EMP, Cincinnati, OH;
3
Christiana Care Health System, Wilmington,
DE; 4Indiana University, Indianapolis, IN;
5
University of Virginia, Charlottesville, VA
Background: Most operational research studies have focused on academic medical centers, which typically have larger patient volumes and are located in urban metropolitan areas. As CMS core measures in 2013 begin to compare emergency departments (EDs) on treatment time intervals, especially length of stay (LOS), it is important to explore whether differences inherent to patient volume exist.
Objectives: To examine differences in operational metrics based on annual patient census. The hypothesis was that treatment time intervals and operational metrics differ among volume categories.
Methods: The ED Benchmarking Alliance has collected
yearly operational metrics since 2004. As of 2010, there
are 499 EDs providing data across the United States.
EDs are stratified by annual volume for comparison in
Table - Abstract 54:

Volume of ED (visits/year) | High Acuity (% CPT 4, 5, or Critical Care) | Pediatric (%) | Admitted (%) | Transferred to Another Hospital (%) | Arrived by EMS (%) | Median LOS for Discharged Patients (minutes) | Median LOS for Admitted Patients (minutes) | Left Before Treatment Complete (%)
>80K | 57.2* | 23.8 | 21.8* | 0.8* | 21.4* | 213.1* | 386.0* | 3.4*
60–80K | 61.2* | 20.9* | 21.1* | 0.9* | 18.4* | 180.9* | 350.7* | 3.5*
40–60K | 61.0* | 19.4* | 19.7* | 1.2* | 19.0* | 167.0* | 322.7* | 2.9*
20–40K | 60.9* | 21.8* | 16.8* | 1.7* | 15.7* | 140.1* | 273.4* | 2.1*
<20K | 53.0 | 23.7 | 11.2 | 2.7 | 11.5 | 107.4 | 215.7 | 1.3

*p < 0.05 compared to <20K
the following categories: <20K, 20–40K, 40–60K, 60–80K, and
over 80K. In this study, metrics for EDs with <20K visits
per year were compared to those of different volumes,
averaged from 2004–2010. Mean values were compared
to <20K visits as a reference point for statistical difference using t-tests to compare means with a
p-value < 0.05 considered significant.
Results: As seen in the table, a greater percentage of
high-acuity patients was seen in higher volume EDs
than in <20K EDs. The percentage of patients transferred to another hospital was higher in <20K EDs.
A higher percentage arrived by EMS and a higher percentage were admitted in higher volume EDs when
compared to <20K visits. In addition, the median LOS
for both discharged and admitted patients and percentage who left before treatment was complete (LBTC)
were higher in the higher volume EDs.
Conclusion: Lower volume EDs have lower acuity when compared to higher volume EDs. Lower volume EDs also have shorter median LOS and lower left-before-treatment-complete percentages. As CMS core measures
require hospitals to report these metrics, it will be
important to compare them based on volume and not
in aggregate.
55
Does the Addition of a Hands-Free
Communication Device Improve ED
Interruption Times?
Amy Ernst, Steven J. Weiss, Jeffrey A. Reitsema
University of New Mexico, Albuquerque, NM
Background: ED interruptions occur frequently. Recently a hands-free communication device (Vocera) was added to a cell phone and a pager in our ED.
Objectives: The purpose of the present study was to
determine whether this addition improved interruption
times. Our hypothesis was that the device would significantly decrease length of time of interruptions.
Methods: This study was a prospective cohort study of
attending ED physician calls and interruptions in a Level I
trauma center with EM residency. Interruptions included
phone calls, EKG interpretations, pages to resuscitation,
and other miscellaneous interruptions (including nursing
issues, laboratory, EMS, and radiology). We studied a
convenience sample intended to include mostly
evening shifts, the busiest ED times. Length of time the
interruption lasted was recorded. Data were collected for
a comparison group pre-Vocera. Three investigators
collected data including seven different attendings'
interruptions. Data were collected on a form, then
entered into an Excel file. Data collectors’ agreement was
determined during two additional four hour shifts to calculate a kappa statistic. SPSS was used for data entry and
statistical analysis. Descriptive statistics were used for
univariate data. Chi-square and Mann-Whitney U nonparametric tests were used for comparisons.
Results: Of the total 511 interruptions, 33% were
phone calls, 24% were EKGs to be read, 18% were
pages to resuscitation, and 25% miscellaneous. There
were no significant differences in types of interruptions pre- vs. post-Vocera. Pre-Vocera, we collected 40 hours of data with 65 interruptions (mean 1.6 per hour). Post-Vocera, 180 hours of data were collected with 446 interruptions (mean 2.5 per hour). There was a significant difference in length of
time of interruptions with an average of 9 minutes
pre-Vocera vs. 4 minutes post-Vocera (p = 0.012, diff
4.9, 95% CI 1.8–8.1). Vocera calls were significantly
shorter than non-Vocera calls (1 vs 6 minutes,
p < 0.001). Comparing data collectors for type of interruption during the same 4-hour shift resulted in a
kappa (agreement) of 0.73.
Conclusion: The addition of a hands-free communication device may mitigate interruptions by shortening call length.
56
‘‘Talk-time’’ In The Emergency Department:
The Duration Of Patient-provider
Interactions During An ED Visit
Danielle M. McCarthy, Kenzie A. Cameron,
Francisco Acosta, Jennifer Stancati, Victoria E.
Forth, Barbara A. Buckley, Michael Schmidt,
James G. Adams, Kirsten G. Engel
Northwestern University, Chicago, IL
Background: Analyses of patient flow through the ED
typically focus on metrics such as wait time, total length
of stay (LOS), or boarding time. However, little is
known about how much interaction a patient has with
clinicians after being placed in a room, or what proportion of the in-room visit is also spent ‘‘waiting,’’ rather
than directly interacting with care providers.
Objectives: The objective was to assess the proportion
of time, relative to the time in a patient care area, that a
patient spends actively interacting with providers
during an ED visit.
Methods: A secondary analysis of 29 audiotaped
encounters of patients with one of four diagnoses (ankle
sprain, back pain, head injury, laceration) was performed. The setting was an urban, academic ED. ED visits of adult patients were recorded from the time of
room placement to discharge. Audiotapes were edited to
remove all downtime and non-patient-provider conversations. LOS and door-to-doctor times were abstracted
from the medical record. The proportion of time the
patient spent in direct conversation with providers
(‘‘talk-time’’) was calculated as the ratio of the edited
audio recording time to the time spent in a patient care
area (talk-time = [edited audio time/(LOS - door-to-doctor)]). Multiple linear regression controlling for time
spent in patient care area, age, and sex was performed.
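As a worked example of the talk-time definition above, using the median values reported in the Results below (note that the study computed the ratio per patient before taking the median, so the ratio of these medians differs slightly from the reported median ratio):

```python
# Worked example of the talk-time ratio using the reported medians.
los = 133                 # median total ED length of stay (min)
door_to_doctor = 42       # median door-to-doctor time (min)
edited_audio = 16         # median minutes of direct patient-provider talk

in_room = los - door_to_doctor                  # time in a patient care area
talk_time_pct = 100 * edited_audio / in_room
print(f"talk-time: {talk_time_pct:.1f}% of {in_room} min in a care area")
# ~17.6% here, versus the reported per-patient median of 19.2%
```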
Results: The sample was 31% male with a mean age of
37 years. Median LOS: 133 minutes (IQR: 88–169), median door-to-doctor: 42 minutes (IQR: 29–67), median
time spent in patient care area: 65 minutes (IQR: 53–
106). Median time spent in direct conversation with
providers was 16 minutes (IQR: 12–18), corresponding
to a talk-time percentage of 19.2% (IQR: 14.7–24.6%).
There were no significant differences based on diagnosis. Regression analysis showed that those spending a
longer time in a patient care area had a lower percentage of talk time (b = )0.11, p = 0.002).
Conclusion: Although limited by sample size, these
results indicate that approximately 80% of a patient's
time in a care area is spent not interacting with providers.
While some of the time spent waiting is out of the providers’ control (e.g. awaiting imaging studies), this significant ‘‘downtime’’ represents an opportunity for both
process improvement efforts to decrease downtime as
well as the development of innovative patient education
efforts to make the best use of the remaining downtime.
57
Degradation of Emergency Department
Operational Data Quality During Electronic
Health Record Implementation
Michael J. Ward, Craig Froehle,
Christopher J. Lindsell
University of Cincinnati, Cincinnati, OH
Background: Process improvement initiatives targeted
at operational efficiency frequently use electronic
timestamps to estimate task and process durations.
Errors in timestamps hamper the use of electronic
data to improve a system and may result in inappropriate conclusions about performance. Despite the fact
that the number of electronic health record (EHR)
implementations is expected to increase in the U.S.,
the magnitude of this EHR-induced error is not well
established.
Objectives: To estimate the change in the magnitude of
error in ED electronic timestamps before and after a
hospital-wide EHR implementation.
Methods: Time-and-motion observations were conducted in a suburban ED, annual census 35,000, after
receiving IRB approval. Observation was conducted
4 weeks pre- and 4 weeks post-EHR implementation.
Patients were identified on entering the ED and tracked
until exiting. Times were recorded to the nearest second using a calibrated stopwatch, and are reported in
minutes. Electronic data were extracted from the
patient-tracking system in use pre-implementation, and
from the EHR post-implementation. For comparison of
means, independent t-tests were used. Chi-square
and Fisher's exact tests were used for proportions, as
appropriate.
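The comparison described can be sketched as follows; the simulated timestamps and error distributions are illustrative assumptions, not study data.

```python
# Sketch (assumed, not study data): compare electronic timestamps against
# observed times and flag discrepancies of ten minutes or more.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
observed = rng.uniform(0, 480, 120)                  # observed event times (min)
electronic = observed + rng.normal(0.4, 9.0, 120)    # electronic timestamps
diff = electronic - observed                          # + = timestamp after event

print("mean difference (min):", round(diff.mean(), 2))
print("proportion >=10 min off:", np.mean(np.abs(diff) >= 10))
# Pre- vs post-implementation mean error would be compared with a t-test:
post_diff = rng.normal(-0.2, 6.0, 100)
print(ttest_ind(diff, post_diff, equal_var=False))
```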
Results: There were 263 observations; 126 before and
137 after implementation. The differences between
observed times and timestamps were computed and
found to be normally distributed. Post-implementation, mean electronic times for physician seen, along with the arrival-to-bed, bed-to-physician, and physician-to-disposition intervals, occurred before the observed times. Physician seen timestamps were frequently incorrect and did not improve post-implementation. Significant discrepancies (ten minutes
or greater) from observed values were identified in
timestamps involving disposition decision and exit from
the ED. Calculating service time intervals resulted in
every service interval (except arrival to bed) having at
least 15% of the times with significant discrepancies. It
is notable that missing values were more frequent post-EHR implementation.
Conclusion: EHR implementation results in reduced
variability of timestamps but reduced accuracy and an
increase in missing timestamps. Using electronic
timestamps for operational efficiency assessment
should recognize the magnitude of error, and the
compounding of error, when computing service times.
Table - Abstract 57: Difference Between Observed versus Electronic Timestamps Following EHR Implementation

Event/Interval | Pre: N (Missing) | Pre: Mean, min | Pre: Prop. >10 Min Diff from Obs | Post: N (Missing) | Post: Mean, min | Post: Prop. >10 Min Diff from Obs | p-value (mean) | p-value (proportion)
Bed Placement | 120 (6) | 0.38 after | 6.7% | 70 (67) | 0.19 before | 2.9% | 0.419 | 0.329
Physician Seen | 119 (7) | 0.34 before | 25.2% | 115 (22) | 6.48 before | 27.8% | <0.001 | 0.650
Disposition Decision | 98 (28) | 6.59 after | 28.6% | 107 (30) | 11.21 after | 16.8% | 0.289 | 0.044
Exit from ED | 104 (22) | 2.42 after | 19.2% | 81 (56) | 4.25 after | 8.6% | 0.512 | 0.043
Arrival-to-Bed | 119 (7) | 0.68 before | 4.2% | 70 (67) | 2.86 before | 11.4% | <0.001 | 0.075
Bed-to-Physician | 119 (7) | 0.72 before | 27.7% | 69 (68) | 8.28 before | 39.1% | <0.001 | 0.106
Physician-to-Disposition | 98 (28) | 7.47 after | 38.8% | 106 (31) | 18.12 after | 39.6% | 0.016 | 0.901
Disposition-to-Exit | 95 (31) | 0.71 before | 28.4% | 99 (38) | 6.76 before | 17.2% | 0.112 | 0.061

(‘‘before’’/‘‘after’’ indicates whether the mean electronic timestamp fell before or after the observed time.)
58
Factors Associated With Excessive
Emergency Department Length Of Stay For
Treated & Released Patients in an Urban
Academic Medical Center
Jeremy D. Sperling1, Michael J. McMahon1,
Kabir Rezvankhoo2, Neal E. Flomenbaum1,
Peter W. Greenwald1, Rahul Sharma1
1
Weill Cornell Medical College / NewYork-Presbyterian Hospital, New York, NY; 2NewYork-Presbyterian Hospital / Columbia University,
New York, NY
Background: Prolonged ED length of stay (LOS) negatively affects patient satisfaction and consumes overall
ED resources. ED treated and released patients (TRP)
are the patients later surveyed for satisfaction with
their overall stay. EDs have more control over LOS for
TRPs than they do for admitted patients. Identifying
modifiable variables for TRPs, especially those with
extreme LOS (greater than 90th percentile), may prove
beneficial in improving both patient care and patient
satisfaction scores.
Objectives: To identify factors associated with extreme
LOS for TRPs in a 76,000 visit urban academic tertiary
care center.
Methods: ED electronic tracking systems prospectively
recorded demographics, chief complaints, diagnostics,
and consultations obtained, and ED census. Researchers blinded to study hypotheses collated one month of
consecutive adult data (excluding AMAs, walkouts).
Associations between variables and extreme LOS were
analyzed by single and multiple logistic and linear
regression.
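A sketch of the extreme-LOS analysis under stated assumptions: visits above the 90th percentile of LOS are flagged and regressed on candidate predictors. The simulated data and variable set are placeholders, not the study's model.

```python
# Sketch (assumed): flag treated-and-released visits above the 90th percentile
# of LOS and model candidate predictors of extreme LOS with logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 3463
po_contrast_ct = (rng.random(n) < 0.10).astype(int)
mri = (rng.random(n) < 0.05).astype(int)
social_work = (rng.random(n) < 0.08).astype(int)
los_hr = rng.gamma(2.0, 2.0, n) + 3.6 * po_contrast_ct + 4.1 * mri  # hours

extreme = (los_hr > np.quantile(los_hr, 0.90)).astype(int)   # >90th percentile
X = sm.add_constant(np.column_stack([po_contrast_ct, mri, social_work]))
fit = sm.Logit(extreme, X).fit(disp=0)
print(np.exp(fit.params[1:]))   # odds ratios for each candidate variable
```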
Results: In June 2010, out of 5265 adult ED visits, 3463
(65.8%) met inclusion criteria. Mean LOS for TRPs was
4.92 hours (95%CI 4.8–5.02, range 0.1–29.5). Mean LOS
for TRPs with extreme LOS was 11.8 hours (95%CI
11.5–12.1, range 9.1–29.5). Significant associations with
extreme LOS included age, sex, day of week, time of
day, chief complaint, ED census, diagnostic studies, and
consultant or social work involvement. In multiple
logistic regression, only diagnostic studies (imaging
and labs) and consultation (social work and specialists)
variables remained significant (table). By multiple linear
regression, hourly delays were (95%CI): 3.6 h (3.1–4.1)
for PO contrast CT, 1.54 h (1.3–1.8) for noncontrast CT,
and 4.1 h (3.4–4.8) for MRI.
Conclusion: Diagnostic imaging (e.g. CT, MRI, ultrasound), labs, and social work involvement had the
strongest associations with extreme LOS, while patient-specific factors (e.g. age, sex) were not associated with
extreme LOS after adjustment for other factors. A single-center study is unlikely to be generalizable as system inefficiencies vary by institution. However, this
type of analysis can be implemented in any center and
help EDs identify variables to focus on in order to
reduce LOS for TRPs, especially for those with extreme
LOS. Our next step will be to improve processes associated with extreme LOS and evaluate their effect on overall
ED LOS.
59
Ketamine-Propofol Combination (Ketofol)
versus Propofol Alone for Emergency
Department Procedural Sedation and
Analgesia: A Prospective Randomized Trial
Gary Andolfatto1, Riyad B. Abu-Laban2, Peter J.
Zed3, Sean M. Staniforth1, Sherry Stackhouse1,
Susanne Moadebi1, Elaine Willman4
1
Lions Gate Hospital, North Vancouver, BC, Canada; 2Vancouver General Hospital, Vancouver, BC, Canada; 3Queen Elizabeth II Health Sciences Centre, Halifax, NS, Canada; 4University of British Columbia, Vancouver, BC, Canada
Background: Procedural sedation and analgesia is
used in the ED in order to efficiently and humanely perform necessary painful procedures. The opposing physiological effects of ketamine and propofol suggest the
potential for synergy, and this has led to interest in
their combined use, commonly termed ‘‘ketofol’’, to
facilitate ED procedural sedation.
Objectives: To determine if a 1:1 mixture of ketamine
and propofol (ketofol) for ED procedural sedation
results in a 13% or more absolute reduction in adverse
respiratory events compared to propofol alone.
Methods: Participants were randomized to receive
either ketofol or propofol in a double-blind fashion
according to a weight-based dosing protocol. Inclusion
criteria were age 14 years or greater, and ASA Class
1–3 status. The primary outcome was the number and
proportion of patients experiencing an adverse respiratory event according to pre-defined criteria (the ‘‘Quebec Criteria’’). Secondary outcomes were sedation
consistency, sedation efficacy, induction time, sedation
time, procedure time, and adverse events.
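The primary-outcome comparison reported below (43/142 vs 46/142 adverse respiratory events; difference 2%, 95% CI -9% to 13%) can be reproduced with a Wald interval for a difference in proportions:

```python
# Reproducing the primary-outcome comparison reported in the Results below.
import math

k, p, n = 43, 46, 142
pk, pp = k / n, p / n
diff = pp - pk                                          # propofol minus ketofol
se = math.sqrt(pk * (1 - pk) / n + pp * (1 - pp) / n)   # Wald standard error
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {100*diff:.0f}% (95% CI {100*lo:.0f}% to {100*hi:.0f}%)")
# -> difference = 2% (95% CI -9% to 13%), matching the reported result
```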
Results: A total of 284 patients were enrolled, 142 per
group. Forty-three (30%) patients experienced an
adverse respiratory event in the ketofol group compared to 46 (32%) in the propofol group (difference
2%; 95% CI -9% to 13%; p = 0.798). Thirty-eight
(27%) patients receiving ketofol and 36 (25%) receiving propofol developed hypoxia, of whom three (2%)
Table 1 - Abstract 59: Respiratory Events and Interventions

Outcome | Ketofol, No. (%) [95% CI], N = 142 | Propofol, No. (%) [95% CI], N = 142 | Difference, % (95% CI)
Patients experiencing a respiratory event | 43 (30) [23 to 38] | 46 (32) [25 to 41] | 2 (-9 to 13), p = 0.798
Oxygen desaturation | 38 (27) [20 to 35] | 36 (25) [19 to 33] | 2 (-9 to 12)
Central apnea | 15 (11) [7 to 17] | 13 (9) [6 to 15] | 2 (-5 to 9)
Partial upper airway obstruction | 11 (8) [4 to 13] | 11 (8) [4 to 13] | 0
Complete upper airway obstruction | 6 (4) [2 to 9] | 4 (3) [1 to 7] | 1 (-3 to 6)
Received airway positioning / stimulation | 5 (4) [2 to 8] | 14 (10) [6 to 16] | 6 (0.4 to 13)
Also received oxygen | 35 (25) [18 to 32] | 31 (22) [16 to 29] | 3 (-7 to 13)
Also received bag-valve-mask | 3 (2) [0.7 to 6] | 1 (1) [0.1 to 4] | 1 (-2 to 5)

Table 2 - Abstract 59: Drug Dosage and Sedation Time Intervals

Measure | Ketofol (n = 142), Median (IQR) [range] | Propofol (n = 142), Median (IQR) [range]
Total medication dose, mg/kg | 0.7 (0.6 to 0.9) [0.4 to 1.6] | 1.5 (1.1 to 2.0) [0.7 to 5.1]
Total medication dose, mL/kg | 0.14 (0.12 to 0.18) [0.08 to 0.32] | 0.15 (0.11 to 0.20) [0.07 to 0.51]
Sedation time, min | 7 (4 to 9) [1 to 29] | 7 (4 to 9) [1 to 18]
Procedure time, min | 4 (2 to 7) [1 to 28] | 5 (2 to 7) [1 to 13]
Recovery time, min | 8 (7 to 10) [1 to 26] | 6 (2 to 8) [2 to 13]
Time to Ramsay Sedation Score 5, min | 2 (1 to 3) [1 to 6] | 2 (1 to 3) [1 to 10]
Number of doses required to reach Ramsay Sedation Score 5 | 2 (2 to 3) [1 to 7] | 2 (1 to 3) [1 to 9]
Patients with Ramsay Sedation Score <5 or requiring repeat dosing during procedure, No. (%) [95% CI] | 65 (46) [38 to 54] | 93 (65) [57 to 73]; difference 19% (8% to 31%), p = 0.001
Efficacious sedation, No. (%) [95% CI] | 129 (91) [85 to 95] | 126 (89) [83 to 93]
ketofol patients and 1 (1%) propofol patient received
bag-valve-mask ventilation. Sixty-five (46%) patients
receiving ketofol and 93 (65%) receiving propofol
required repeat medication dosing or lightened to a
Ramsay Sedation Score of 4 or less during their procedure (difference 19%; 95% CI 8% to 31%;
p = 0.001). Procedural agitation occurred in 5 patients
(3.5%) receiving ketofol compared to 15 (11%) receiving propofol (difference 7.5%, 95% CI 1% to 14%).
Recovery agitation requiring treatment occurred in six
patients (4%, 95% CI 2.0% to 8.9%) receiving ketofol.
Other secondary outcomes were similar between the
groups. Patients and staff were highly satisfied with
both agents.
Conclusion: Ketofol for ED procedural sedation does
not result in a reduced incidence of adverse respiratory
events compared to propofol alone. Induction time, efficacy, and sedation time were similar; however, sedation
depth appeared to be more consistent with ketofol.
60
The Effect of CMS Guideline on Deep Sedation with Propofol
Lindsay Harmon1, Anthony J. Perkins1,
Beth Sandford2
1
Indiana University School of Medicine,
Indianapolis, IN; 2Wishard Hospital, Indianapolis,
IN
Background: Emergency physicians routinely perform
emergency department procedural sedation (EDPS)
with propofol and its safety is well established. However, in 2010 CMS enacted guidelines defining propofol
as deep sedation and requiring administration by a
physician. Common EDPS practice had been one physician performing both the sedation and procedure.
EDPS has proven safe under this one-physician practice. However, the 2010 guidelines mandated separate
physicians perform each.
Objectives: The study hypothesis was that one-physician propofol sedation complication rates are similar to
those of two-physician sedation.
Methods: Before-and-after observational study of patients >17 years of age consenting to EDPS with propofol. Sedations completed with one physician were compared to those completed with two (separate physicians performing the sedation and the procedure). All
data were prospectively collected. The study was completed at an urban Level I trauma center. Standard
monitoring and procedures for EDPS were followed
with physicians blinded to the objectives of this
research. The frequency and incremental dosing of
medication was left to the discretion of the treating
physicians. The study protocol required an ED nurse
trained in data collection to be present to record vital
signs and assess for any prospectively defined complications. We used chi-square tests to compare the binary outcomes and ASA scores across the time periods,
and two-sample t-tests to test for differences in age
between the two time periods.
Results: During the 2-year study period we enrolled
481 patients: 252 one-physician EDPS sedations and 229
two-physician. All patients meeting inclusion criteria
were included in the study. Total adverse event rates
were 4.4% and 3.1%, respectively (p = 0.450). The most
common complications were hypotension and oxygen
desaturation, with one-physician rates of 2.0% and 0.8% and two-physician rates of 1.8% and 0.9%, respectively (p = 0.848 and 0.923). The unsuccessful
procedure rates were 4.0% vs 3.9% (p = 0.983).
Conclusion: This study demonstrated no significant difference in complication rates for propofol EDPS completed by one physician as compared to two.
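For illustration, the headline comparison above reduces to a chi-square test on a 2 x 2 table. A minimal Python sketch, using event counts back-calculated from the reported rates (11/252 and 7/229 are our approximations, not figures stated by the authors):

from scipy.stats import chi2_contingency

# One- vs two-physician sedations: adverse events vs none.
# Counts back-calculated from the reported 4.4% and 3.1% rates.
table = [[11, 252 - 11],
         [7, 229 - 7]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(p, 3))  # compare with the reported p = 0.450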
61
The Use of End-Tidal CO2 Monitoring in
Patients Undergoing Observation for
Sedative Overdose in the Emergency
Department
James Miner, Johanna Moore, Jon B. Cole
Hennepin County Medical Center, Minneapolis,
MN
Background: Overdose patients are often monitored
using pulse oximetry, which may not detect changes in
patients on high-flow oxygen.
Objectives: To determine whether changes in end-tidal
carbon dioxide (ETCO2) detected by capnographic monitoring are associated with clinical interventions due to
respiratory depression (CRD) in patients undergoing
evaluation for a decreased level of consciousness after
a presumed drug overdose.
Methods: This was a prospective, observational study
of adult patients undergoing evaluation for a drug overdose in an urban county ED. All patients received supplemental oxygen and were continuously monitored by trained research associates. The level of
consciousness was recorded using the Observer’s
Assessment of Alertness/Sedation scale (OAA/S). Vital
signs, pulse oximetry, and OAA/S were monitored and
recorded every 15 minutes and at the time of occurrence of any CRD. Respiratory rate and ETCO2 were
measured at five-second intervals using a CapnoStream20 monitor. CRD included an increase in supplemental oxygen, the use of bag-valve-mask ventilations, repositioning to improve ventilation, and physical or verbal stimulus to induce respiration; these interventions were performed at the discretion of the treating physicians and nurses. Changes from baseline in ETCO2 values and waveforms among patients who did or did not have a clinical intervention were compared using Wilcoxon rank sum tests.
Results: 100 patients were enrolled in the study (age 35, range 18 to 67; 62% male; median OAA/S 4, range 1 to 5). Suspected overdoses were due to opioids in 34,
benzodiazepines in 14, an antipsychotic in 14, and
others in 38. The median time of evaluation was
165 minutes (range 20 to 725). CRD occurred in 47% of
patients, including an increase in O2 in 38%, repositioning in 14%, and stimulation to induce respiration in
23%. 16% had an O2 saturation of <93% (median 88,
range 73 to 92) and 8% had a loss of ETCO2 waveform
at some time, all of whom had a CRD. The median
change in ETCO2 from baseline was 5 mmHg, range 1
to 30. Among patients with CRD it was 14 mmHg,
range 10 to 30, and among patients with no CRD it was
5 mmHg, range 1 to 13 (p = 0.03).
Conclusion: The change in ETCO2 from baseline was
larger in patients who required clinical interventions
than in those who did not. In patients on high-flow oxygen, capnographic monitoring may be sensitive to the
need for airway support.
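The primary analysis above is a two-sample rank comparison; a minimal Python sketch with illustrative values (not study data):

from scipy.stats import ranksums

# Change in ETCO2 from baseline (mmHg) for patients with vs without a
# CRD; values are hypothetical, chosen only to mirror the reported medians.
with_crd = [14, 10, 22, 18, 30, 12]
without_crd = [5, 1, 3, 8, 6, 13, 2]
stat, p = ranksums(with_crd, without_crd)
print(stat, p)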
62
How Reliable Are Health Care Providers in
Reporting Changes in ETCO2 Waveform
Anas Sawas1, Scott Youngquist1, Troy Madsen1, Matthew Ahern1, Camille Broadwater-Hollifield1, Andrew Syndergaard1, Jared Phelps2, Bryson Garbett1, Virgil Davis1
1University of Utah, Salt Lake City, UT; 2Midwestern University, Glendale, AZ
Background: ETCO2 changes have been used in procedural sedation analgesia (PSA) research to evaluate
subclinical respiratory depression associated with sedation regimens.
Objectives: To evaluate the accuracy of bedside clinician reporting of changes in ETCO2.
Methods: This was a prospective, randomized, single-blind study conducted in the ED setting from June 2010 until the present time. The study took place at an academic adult ED of a 405-bed hospital (21 ED beds) and Level I trauma center. Subjects were randomized to receive either ketamine-propofol or propofol according to a standardized protocol. Losses of ETCO2 waveform lasting ≥15 sec were recorded. Following
sedation, questionnaires were completed by the sedating physicians. Digitally recorded ETCO2 waveforms
were also reviewed by an independent physician and
a trained research assistant (RA). To ensure the reliability of trained research assistants, we compared
their analyses with the analyses of an independent
physician for the first 41 recordings. The target
enrollment was 65 patients in each group (N = 130
total). Statistics were calculated using SAS statistical
software.
Results: 91 patients were enrolled; 53 (58.2%) were male and 38 (41.8%) were female. Mean age was 44.93 ± 17.93 years. Most participants did not have
major risk factors for apnea or for further complications (86.3% were ASA class 1 or 2). ETCO2 waveforms
were reviewed by 87 (95.6%) sedating physicians and
84 (92.3%) nurses at the bedside. There were 70 (76.9%) ETCO2 waveform recordings, of which 42 (60.0%) were reviewed by an independent physician and 70 (100%)
were reviewed by an RA. A kappa test for agreement
between independent physicians and RAs was conducted on 41 recordings and there were no discordant
pairs (kappa = 1). Compared to sedating physicians, the
independent physician was more likely to report ETCO2
wave losses (OR 1.37, 95% CI 1.08–1.73). Compared to
sedating physicians, RAs were more likely to report
ETCO2 wave losses (OR 1.39, 95% CI 1.14–1.70).
Conclusion: Compared to sedating physicians at the
bedside, independent physicians and RAs were more
likely to note ETCO2 waveform losses. An independent
review of recorded ETCO2 waveform changes will be
more reliable for future sedation research.
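The reliability check described in the Methods is a standard Cohen's kappa computation; a minimal Python sketch with illustrative ratings (not the study's 41 recordings):

from sklearn.metrics import cohen_kappa_score

# Independent physician vs research assistant ratings of whether each
# recording contained a waveform loss (1 = loss, 0 = none).
physician = [1, 0, 0, 1, 0, 1, 0, 0]
assistant = [1, 0, 0, 1, 0, 1, 0, 0]
print(cohen_kappa_score(physician, assistant))  # 1.0 when no discordant pairs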
63
Effectiveness and Safety in Rapid Sequence
Intubation Versus Non-Rapid Sequence
Intubation in the Emergency Department:
Multi-center Prospective Observational
Study in Japan
Masashi Okubo1, Yusuke Hagiwara2,
Kohei Hasegawa3
1Okinawa Chubu Hospital, Okinawa, Japan; 2Kawaguchi Medical Center, Kawaguchi, Japan; 3Massachusetts General Hospital, Boston, MA
Background: Comprehensive studies evaluating current practices of ED airway management in Japan are
lacking. Many emergency physicians in Japan still experience resistance regarding rapid sequence intubation
(RSI).
Objectives: We sought to describe the success and complication rates of RSI compared with non-RSI.
Methods: Design and Setting: We conducted a multi-center prospective observational study using the JEAN registry of EDs at 11 academic and community hospitals in Japan between 2010 and 2011. Data fields
include ED characteristics, patient and operator demographics, method of airway management, number of
attempts, and adverse events. We defined non-RSI as intubation with sedation only, with neuromuscular blockade only, or without medication. Participants: All patients undergoing emergency intubation in the ED were eligible
for inclusion. Cardiac arrest encounters were excluded
from the analysis. Primary analysis: We compared RSI with non-RSI in terms of success rate on first attempt,
within three attempts, and complication rate. We present descriptive data as proportions with 95% confidence intervals (CIs). We report odds ratios (OR) with
95% CI via chi-square testing.
Results: The database recorded 2710 intubations (capture rate 98%) and 1670 met the inclusion criteria. RSI
was the initial method chosen in 489 (29%) and non-RSI
in 1181 (71%). Use of RSI varied among institutes from
0% to 79%. Successful RSI on first attempt and within three attempts occurred in 353 intubations (72%, 95% CI 68%–76%) and 474 intubations (97%, 95% CI 95%–98%), respectively. Successful non-RSI on first attempt and within three attempts occurred in 724 intubations (61%, 95% CI 58%–64%) and 1105 intubations (94%, 95% CI 92%–95%). Success rates of RSI on first attempt and within three attempts were higher than those of non-RSI (OR 1.64, 95% CI 1.30–2.06 and OR 2.14, 95% CI 1.22–3.77, respectively).
We recorded 67 complications in RSI (14%) and 165 in
non-RSI (14%). There was no significant difference in complication rate between RSI and non-RSI (OR 0.98,
95% CI 0.72–1.32).
Conclusion: In this multi-center prospective study in
Japan, we demonstrated a high degree of variation in
use of RSI for ED intubation. Additionally, we found that success rates of RSI on first attempt and within three attempts were both higher than non-RSI. This study has the limitations of reporting bias and confounding by indication. (Originally submitted as a ‘‘late-breaker.’’)
64
How Can Technology Help Us Further
Interpret ETCO2 Changes?
Anas Sawas, Scott Youngquist, Troy Madsen,
Matthew Ahern, Camille Broadwater-Hollifield,
Andrew Syndergaard, Bryson Garbett,
Virgil Davis
University of Utah, Salt Lake City, UT
Background: ETCO2 changes have been used in procedural sedation analgesia research to evaluate clinical
respiratory depression (CRP).
Objectives: To determine if the number of episodes
and the duration of lost ETCO2 are better predictors of
CRP compared to any loss of ETCO2.
Methods: This was a prospective, randomized, single-blind study conducted in the ED setting from June 2010 until the present time. The study took place at an academic adult ED of a 405-bed hospital (21 ED beds) and Level I trauma center. Subjects were randomized to receive either ketamine-propofol or propofol according to a standardized protocol. ETCO2 waveforms were digitally
recorded. ETCO2 changes were evaluated by the sedating
physicians at the bedside. Recorded waveforms were
reviewed by an independent physician and a trained
research assistant (RA). To ensure the reliability of
trained RAs, we computed a kappa test for agreement
between the analysis of independent physicians and RAs
for the first 41 recordings. A post-hoc analysis of the
association between any loss, the number of losses, and
total duration of loss of ETCO2 waveform and CRP was
performed. On review we recorded the absence or
presence of loss of ETCO2 and the total duration in
seconds of all lost ETCO2 episodes ≥15 seconds. ORs
were calculated using SAS statistical software.
Results: 91 patients were enrolled; 53 (58.2%) were male and 38 (41.8%) were female. 86.3% of participants were
ASA class 1 or 2. Waveforms were reviewed by 87
(95.6%) sedating physicians. There were 70 (76.9%) ETCO2 waveform recordings, of which 42 (60.0%) were reviewed by an independent physician and 70 (100%) were reviewed by RAs; there were no discordant pairs (kappa = 1).
There were 24 (26.4%) CRP events. Any loss of ETCO2
was associated with a non-significant OR of 4.06 (95%
CI 0.75–21.9) for CRP. However, the duration of ETCO2
loss was significantly associated with CRP with an OR
of 1.38 (95% CI 1.08–1.76) for each 30 second interval
of lost ETCO2. The number of losses was significantly associated with the outcome (OR 1.48, 95% CI
1.15–1.91).
Conclusion: Defining subclinical respiratory depression
as present or absent may be less useful than quantitative measurements. This suggests that risk is cumulative
over periods of loss of ETCO2, and the duration of loss
may be a better marker of sedation depth and risk of
complications than classification of any loss.
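The post-hoc model above can be sketched as a logistic regression with the predictor scaled to 30-second units, so the exponentiated coefficient is directly comparable to the reported OR of 1.38 per 30 s; the data below are illustrative only:

import numpy as np
import statsmodels.api as sm

# CRP (0/1) modeled on total duration of lost ETCO2 (hypothetical data).
loss_sec = np.array([0, 0, 15, 30, 45, 90, 120, 0, 60, 150])
crp = np.array([0, 0, 0, 0, 1, 1, 1, 0, 0, 1])
X = sm.add_constant(loss_sec / 30.0)      # predictor in 30-second units
fit = sm.Logit(crp, X).fit(disp=False)
print(np.exp(fit.params[1]))              # fitted OR per 30 s of lost ETCO2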
65
One-Year Peer Violence Outcomes
Following a Brief Motivational Interviewing
Intervention for Violence and Alcohol
among Teens
Rebecca M. Cunningham, Lauren K. Whiteside,
Stephen T. Chermack, Marc A. Zimmerman,
Maureen A. Walton
University of Michigan, Ann Arbor, MI
Background: ED visits present an opportunity to deliver brief interventions (BIs) to reduce violence and
alcohol misuse among urban adolescents at risk for
future injury. Previous analyses demonstrated that a
brief intervention resulted in reductions in violence and
alcohol consequences up to 6 months.
Objectives: This paper describes findings examining
the efficacy of BIs on peer violence and alcohol misuse
at 12 months.
Methods: Patients (14–18 yrs) at an ED reporting past
year alcohol use and aggression were enrolled in the
RCT, which included computerized assessment, and
randomization to control group or BI delivered by a
computer (CBI) or therapist assisted by a computer
(TBI). Baseline and 12-month assessments included violence measures (peer aggression, peer victimization, violence-related consequences) and alcohol measures (alcohol misuse, binge drinking, alcohol-related consequences).
Results: 3338 adolescents were screened (88% participation). Of those, 726 screened positive for violence
and alcohol use and were randomized; 84% completed
12-month follow-up. As compared to the control group,
the TBI group showed significant reductions in peer
aggression (p < 0.01) and peer victimization (p < 0.05) at
12 months. BI and control groups did not differ on
alcohol-related variables at 12 months.
Conclusion: Evaluation of the SafERteens intervention
one year following an ED visit provides support for the
efficacy of a computer-assisted, therapist-delivered brief intervention for reducing peer violence.
66
Violence Against ED Health Care Workers:
A 9-Month Experience
Terry Kowalenko1, Donna Gates2, Gordon
Gillespie2, Paul Succop2
1University of Michigan, Ann Arbor, MI; 2University of Cincinnati, Cincinnati, OH
Background: Health care (HC) support occupations
have an injury rate nearly 10 times that of the general
sector due to assaults, with doctors and nurses nearly 3
times greater. Studies have shown that the ED is at
greatest risk of such events compared to other HC
settings.
Objectives: To describe the incidence of violence in ED
HC workers over 9 months. Specific aims were to 1)
identify demographic, occupational, and perpetrator
factors related to violent events; 2) identify the predictors of acute stress response in victims; and 3) identify
predictors of loss of productivity after the event.
Methods: A longitudinal, repeated-measures design
was used to collect monthly survey data from ED HC
workers (HCWs) at six hospitals in two states. Surveys
assessed the number and type of violent events, and
feelings of safety and confidence. Victims also completed specific violent event surveys. Descriptive statistics and a repeated-measures linear regression model
were used.
Results: 213 ED HCWs completed 1795 monthly surveys, and 827 violent events were reported. The average per person violent event rate per 9 months was
4.15. 601 events were physical threats (3.01 per person
in 9 months). 226 events were assaults (1.13 per person
in 9 months). 501 violent event surveys were completed,
describing 341 physical threats and 160 assaults with
20% resulting in injuries. 63% of the physical threats
and 52% of the assaults were perpetrated by men.
Comparing occupational groups revealed significant
differences between nurses and physicians for all
reported events (p = 0.0048), with the greatest difference in physical threats (p = 0.0447). Nurses felt less
safe than physicians (p = 0.0041). Physicians felt more
confident than nurses in dealing with the violent patient
(p = 0.013). Nurses were more likely to experience acute
stress than physicians (p < 0.001). Acute stress significantly reduced productivity in general (p < 0.001), with
a significant negative effect on ‘‘ability to handle/
manage workload’’ (p < 0.001) and ‘‘ability to handle/
manage cognitive demands’’ (p < 0.05).
Conclusion: ED HCWs are frequent victims of violence
perpetrated by visitors and patients. This violence
results in injuries, acute stress, and loss of productivity.
Acute stress has negative consequences on the workers’ ability to perform their duties. This has serious
potential consequences to the victim as well as the care
they provide to their patients.
67
A Randomized Controlled Feasibility Trial of
Vacant Lot Greening to Reduce Crime and
Increase Perceptions of Safety
Eugenia C. Garvin, Charles C. Branas
Perelman School of Medicine at the University of
Pennsylvania, Philadelphia, PA
Background: Vacant lots, often filled with trash and
overgrown vegetation, have been associated with intentional injuries. A recent quasi-experimental study found
a significant decrease in gun crimes around vacant lots
that had been greened compared with control lots.
Objectives: To determine the feasibility of a randomized vacant lot greening intervention, and its effect on
police-reported crime and perceptions of safety.
Methods: For this randomized controlled feasibility
trial of vacant lot greening, we partnered with the
Pennsylvania Horticulture Society (PHS) to perform the
greening intervention (cleaning the lots, planting grass
and trees, and building a wooden fence around the
perimeter). We analyzed police crime data and interviewed people living around the study vacant lots
(greened and control) about perceptions of safety
before and after greening.
Results: A total of 5200 sq ft of randomly selected
vacant lot space was successfully greened. We used a
master database of 54,132 vacant lots to randomly
select 50 vacant lot clusters. We viewed each cluster
with the PHS to determine which were appropriate to
send to the City of Philadelphia for greening approval.
The vacant lot cluster highest on the random list to be
approved by the City of Philadelphia was designated
the intervention site, and the next highest was designated the control site. Overall, 29 participants completed baseline interviews, and 21 completed follow-up
interviews after 3 months. 59% of participants were
male, 97% were black or African American, and 52%
had a household income less than $25,000. Unadjusted
difference-in-differences estimates showed a decrease
in gun assaults around greened vacant lots compared
to control. Regression-adjusted estimates showed that
people living around greened vacant lots reported feeling safer after greening compared to those who lived
around control vacant lots (p < 0.01).
Conclusion: Conducting a randomized controlled trial
of vacant lot greening is feasible. Greening may reduce
certain gun crimes and make people feel safer. However, larger prospective trials are needed to further
investigate this link.
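The unadjusted estimator described in the Results can be written out in a few lines; counts below are hypothetical, not study data:

# Unadjusted difference-in-differences: the pre-to-post change in gun
# assaults around greened lots minus the same change around control lots.
pre_greened, post_greened = 12, 7
pre_control, post_control = 11, 10
did = (post_greened - pre_greened) - (post_control - pre_control)
print(did)  # a negative estimate favors greening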
68
Screening for Violence Identifies Young
Adults at Risk for Return ED Visits for
Injury
Abigail Hankin-Wei, Brittany Meagley,
Debra Houry
Emory University, Atlanta, GA
Background: Homicide is the second leading cause of
death among youth ages 15–24. Prior studies, in non-health care settings, have shown associations between
violent injury and risk factors including exposure to
community violence, peer behavior, and delinquency.
Objectives: To assess whether self-reported exposure
to violence risk factors can be used to predict future
ED visits for injuries.
Methods: We conducted a prospective cohort study in
the ED of a Southeastern US Level I trauma center.
Patients aged 15–24 presenting for any chief complaint
were included unless they were critically ill, incarcerated, or could not read English. Recruitment took place
over six months, by a trained research assistant (RA).
The RA was present in the ED for 3–5 days per week,
with shifts scheduled such that they included weekends
and weekdays, over the hours from 8 am–8 pm. Patients
were offered a $5 gift card for participation. At the time
of initial contact in the ED, patients completed a written
questionnaire which included validated measures of the
following risk factors: a) aggression, b) perceived likelihood of violence, c) recent violent behavior, d) peer
behavior, e) community exposure to violence, and
f) positive future outlook. At 12 months following the
initial ED visit, the participants’ medical records were
reviewed to identify any subsequent ED visits for
injury-related complaints. Data were analyzed with
chi-square and logistic regression analyses.
Results: 332 patients were approached, of whom 300
patients consented. Participants’ average age was
21.1 years, with 57% female, and 86% African
American. Return visits for injuries were significantly
associated with hostile/aggressive feelings (RR 3.7, CI 1.42, 9), self-reported perceived likelihood of violence
(RR 5.16, CI 1.93, 13.78), recent violent behavior (RR
3.16, CI 1.01, 9.88), and peer group violence (RR 4.4, CI
1.72, 11.25). These findings remained significant when
controlling for participant sex.
Conclusion: A brief survey of risk factors for violence
is predictive of return visit to the ED for injury. These
findings identify a potentially important tool for
primary prevention of violent injuries among young
adults visiting the ED for both injury and non-injury
complaints.
69
Firearm Possession among Adolescents and
Young Adults presenting to an Urban
Emergency Department for Assault
Patrick M. Carter1, Manya Newton1, Lauren
Whiteside1, Kevin Loh2, Maureen A. Walton3,
Marc Zimmerman4, Rebecca M. Cunningham1
1University of Michigan, School of Medicine, Department of Emergency Medicine; University of Michigan Injury Center, Ann Arbor, MI; 2University of Michigan School of Public Health, University of Michigan Injury Center, Ann Arbor, MI; 3University of Michigan Department of Psychiatry, Ann Arbor, MI; 4University of Michigan, School of Public Health, Department of Health Behavior and Health Education, Ann Arbor, MI
Background: Violence is the leading cause of death
among African American youth. Firearms are a leading
cause of death in adolescents, and youth illicit gun carriage and possession is a risk factor for violent injury.
Identification of assault-injured youth who own a gun
is an important component of violence prevention.
Objectives: 1) To determine rates and correlates of gun
possession, how and why firearms are obtained, and
type of firearms common among assaulted youth seeking
care; 2) To understand differences in risk factors (prior
violence, substance use) among these youth seeking
care with and without possession of a gun.
Methods: 14–24 yr old patients presenting to a Level I
ED with violent injury over a 12 mo period were administered a computerized screening survey. Validated
instruments were administered measuring demographics, attitudes towards aggression, substance use, and
prior violence history.
Results: 718 assault-injured youth completed the
survey (84.6% participation); 163 (28%) possessed a
gun, of whom 117 (71.8%) were male (mean age 19.8 yrs) and 103 (63.2%) were
on public assistance. Bivariate analysis found, compared to those without a gun, patients possessing a gun
were more likely to have been involved in a recent physical altercation (85.9% vs. 78.9%, p < 0.05), to use alcohol before fights (33.7% vs 19.5%, p < 0.01), or to have caused injury
requiring treatment (46.0% vs 20.5%, p < 0.001). They
were more likely to have shot or stabbed another person (8.0% vs 2.5%, p < 0.01) or used a gun in a fight
with a dating partner (6.8% vs 0.7%, p < 0.001). Most
patients kept a gun ‘‘for protection’’ (37.1%), and
obtained guns illegally from friends (23.3%). Among
assaulted youth possessing a gun, more believed
‘‘revenge was a good thing’’ (2.9 vs. 3.1, p < 0.05) and it
was ‘‘ok to hurt people if they hurt you first’’ (2.6 vs.
2.8, p < 0.01). Logistic regression results indicated male
sex (AOR 3.55; 95%CI 2.32–5.44), illicit drug use (AOR
1.51, 95% CI 1.01–2.24), recent involvement in a serious
fight (AOR 1.71; 95%CI 1.01–2.90), any previous dating
violence (AOR 2.58; 95%CI 1.67–4.00), and attitudes
favoring retaliation (AOR 1.56; 95%CI 1.08–2.22) predicted gun possession among assault-injured youth.
Conclusion: Among assaulted youth seeking care, 28% possessed a gun. Future violence prevention efforts should address risk of future injury or
assault and substance use in this high risk population.
Funding: NIDA R01 024646
70
Does Cellulitis Belong in an Observation
Unit?
Louisa Canham, Kathryn A. Volz, Emily Kaplan,
Leon D. Sanchez, Christopher Fischer,
Shamai A. Grossman
BIDMC, Boston, MA
Background: Cellulitis is a common patient diagnosis
in emergency department (ED) observation units (OBS).
Prior studies suggest that as many as 38% of ED OBS
cellulitis patients may ultimately need full hospital
admission, versus a national average of 15% for other
diseases. There are little data to help predict patients
who are likely to fail a trial of ED OBS.
Objectives: To identify characteristics in patients with
cellulitis that are predictive of OBS failure.
Methods: Retrospective cohort review of 405 consecutive patients who were seen in an urban, academic ED
with >55,000 visits and admitted to OBS with a diagnosis of skin infection/cellulitis. Data analysis used t-tests and chi-square tests.
Results: 377 patients met study criteria; 29.2% were
admitted to the inpatient unit after a stay in OBS. There
was no significant difference in average age of admitted
vs discharged patients (47.2 vs 45.6, p = 0.39), nor did
sex have an effect on admission (28.3% females admitted vs 30.6% of males, p = 0.74). There was a higher
admission rate in skin infections of the hand (43.4%) vs
other body locations (p < 0.001). Other rates by location
were torso (33.3%), head/neck (30.6%), arm (23.2%),
foot (20%), leg (19.5%), and buttock (0%). Diabetes was
not predictive, with a 26.7% admission rate vs 29.5% in
non-diabetics (p = 0.69). Patients with IVDU history
were no more likely to require admission than those
who reported no drug use (41.7% vs 28.8%, p = 0.33).
133 patients received some oral antibiotics as outpatients prior to their ED OBS stay; these patients were
not more likely to be admitted (31% rate of admission
vs 27.9% in those who had not received oral antibiotics,
p = 0.45). Of patients treated with prior antibiotics, 61%
had received MRSA coverage. Patients treated with PO
MRSA coverage did not have different admission rates
than those without MRSA coverage (28.6% vs 36%,
p = 0.37).
Conclusion: In this study group, almost one-third of
patients admitted to ED OBS with cellulitis ultimately
were admitted to an inpatient unit. We found no significant difference in admission rates based on age, sex,
diabetes, IVDU, or prior course of oral antibiotics.
Patients with hand infections had the highest admission
rates compared to other body locations. Cellulitis is more likely than other diagnoses to require inpatient care; decisions to observe patients based on location of infection may result in more judicious use of ED OBS.
71
Comparison of a Novel Clinical Prediction
Rule, MEDS, SIRS, and CURB-65 in the
Prediction of Hospital Mortality for Septic
Patients Visiting the Emergency
Department
Kuan-Fu Chen, Chun-Kuei Chen, Sheng-Che
Lin, Peng-Hui Liu, Jih-Chang Chen, Te-Fa Chiu
Chang-Gung Memorial Hospital, Taoyuan County, Taiwan
Background: Sepsis is a commonly encountered disease in the ED, with high mortality. While several clinical
prediction rules (CPR) including MEDS, SIRS, and
CURB-65 exist to facilitate clinicians in early recognition of risk of mortality for sepsis, most are of suboptimal performance.
Objectives: To derive a novel CPR for mortality of sepsis utilizing clinically available and objective predictors
in the ED.
Methods: We retrospectively reviewed all adult septic
patients who visited the ED at a tertiary hospital during
the year 2010 with two sets of blood cultures ordered
by physicians. Basic demographics, ED vital signs,
symptoms and signs, underlying illnesses, laboratory
findings, microbiological results, and discharge status
were collected. Multivariate logistic regressions were
used to obtain a novel CPR using predictors with <0.1
p-value tested in univariate analyses. The existing CPRs
were compared with this novel CPR using AUC.
Results: Of 8699 included patients, 7.6% died in hospital, 51% had diabetes, 49% were older than 65 years of
age, 21% had malignancy, and 16% had positive blood
bacterial culture tests. Predisposing factors including
history of malignancy, liver disease, immunosuppressed
status, chronic kidney disease, congestive heart failure,
and age older than 65 years were found to be associated with mortality (all p < 0.05). Patients who died tended to have lower body temperature,
narrower pulse pressure, higher percentage of red cell
distribution width (RDW) and bandemia, higher blood
urea nitrogen (BUN), ammonia, and C-reactive protein
level, and longer prothrombin time and activated partial
thromboplastin time (aPTT) (all p < 0.05). The most parsimonious CPR, incorporating history of malignancy (OR 2.3, 95% CI 1.9–2.7), prolonged aPTT (3.0, 2.4–3.8), presence of bandemia (1.7, 1.4–2.0), higher BUN level (2.0, 1.7–2.4), and wider RDW (2.6, 2.2–3.1), was developed with a better AUC (0.77, 95% CI 0.75–0.79),
compared to CURB-65 (0.68, 0.66–0.70), MEDS (0.66,
0.64–0.68), and SIRS (0.54, 0.52–0.56) (all p < 0.05).
Conclusion: We derived a novel clinical prediction rule
for mortality owing to sepsis using a history of
malignancy, prolonged aPTT, presence of bandemia,
higher BUN level, and wider RDW, with better performance than existing CPRs. Further validation study is merited.
72
Overuse of CT for Mild Traumatic Brain Injury
Edward R. Melnick1, Christopher M. Szlezak1, Suzanne K. Bentley2, Lori A. Post1
1Yale School of Medicine, New Haven, CT; 2Mount Sinai School of Medicine, New York, NY
Background: 1.4 million patients with mild traumatic brain injury (MTBI) are treated and released from EDs annually in the United States. To differentiate MTBI from clinically important brain injury and to prevent the overuse of CT, multiple evidence-based guidelines exist to direct appropriate use of CT. The sensitivity and specificity of the Canadian CT Head Rule (CCTHR) and New Orleans Criteria (NOC) have been previously validated. Despite conventional implementation strategies, CT use is growing rapidly, with up to 75% of these patients receiving imaging. Over-testing exposes patients to undue radiation risk and cost.
Objectives: To quantify the overuse of CT in MTBI in
the ED based upon current guideline recommendations. HYPOTHESIS: Physicians are not obtaining
imaging consistent with guidelines, leading to unnecessary CTs.
Methods: DESIGN: Retrospective analysis of secondary data from a prospective observational study. SETTING: Urban, Level I ED with >90,000 visits per year.
SUBJECTS: Adult patients with blunt MTBI, Glasgow
Coma Scale score 13–15, non-focal neurologic exam,
and receiving CT imaging for trauma at the discretion
of the treating physician from March 2006-April 2007.
OBSERVATIONS: Proportion of cases meeting criteria
for CT based on the CCTHR, ACEP Clinical Policy,
and NOC were to be reported using descriptive
statistics.
Results: All 346 patients enrolled in the original study
were included in the analysis. The proportion of cases
meeting criteria for CT were: CCTHR 64.7% (95% CI
0.60–0.70), ACEP 74.3% (95% CI 0.70–0.79), and NOC
90.5% (95% CI 0.87–0.94). Sensitivities of the guidelines
were: CCTHR 79.2% (95% CI 0.58–0.93), ACEP 95.8%
(95% CI 0.80–0.99), and NOC 91.7% (95% CI 0.74–0.98).
Conclusion: 10–35% of CTs obtained in the ED
for MTBI are not guideline-compliant. Successful
implementation of these guidelines could decrease CT
use in MTBI by up to 35%. Performing CT according
to guidelines would save up to $394 million annually
and prevent roughly 36 radiation-induced cancers.
LIMITATIONS: This analysis was limited by the
data collected in the original study. To account for
this limitation, guideline compliance was overestimated.
73
Doctor Knows Best: Published Guidelines
vs. ED Physicians’ Predictions Of ACS
Amisha Parekh, Robert Birkhahn, Paris Ayana
Datillo
New York Methodist Hospital, Brooklyn, NY
Background: Guidelines are meant to encompass the
needs of the majority of patients in many situations.
While they may be generalizable to a primary care setting, in the unique environment of an emergency
department (ED), the feasibility of incorporating AHA/
ACC/ACEP guidelines into the clinical workflow
remains in question. Often the varied patient presentation and acuity of presentation requires a physician’s
insight for accurate diagnosis, rendering guidelines
ineffective for use in the ED.
Objectives: To compare an ACS risk stratification rating assigned by ED physicians to the AHA/ACC/ACEP
guidelines for acute coronary syndrome (ACS), and
assess each for accuracy in predicting a definitive diagnosis of ACS.
Methods: We conducted a prospective observational
cohort study on all patients evaluated for ACS, over a
14-week time period in an urban teaching hospital
ED. The patient’s risk stratification for ACS classified
by the evaluating ED physician was compared to
AHA/ACC/ACEP guidelines. Patients were contacted
at 48 hours and 30 days following the index ED visit
to determine all-cause mortality, unscheduled hospital/ED revisits, and objective cardiac testing performed.
Results: There was poor agreement between the physician’s unstructured assessment used in clinical practice
and the guidelines put forth by the AHA/ACC/ACEP
task force. ED physicians were more likely to assess a
patient as low risk (42%), while AHA guidelines were
more likely to classify patients as intermediate (50%) or
high (40%) risk. However, when comparing the
patient’s final ACS diagnosis and the relation to the risk
assessment value, ED physicians proved better predictors of high-risk patients who in fact had ACS, while
the AHA/ACC/ACEP guidelines proved better at correctly identifying low-risk patients who did not have
ACS.
Conclusion: In the ED, physicians are far more efficient
at correctly placing patients with underlying ACS into a
high-risk category, while established criteria may be
overly conservative when applied to an acute care population. Further research is indicated to look at ED physicians’ risk stratification and ensuing patient care to
assess for appropriate decision making and ultimate
outcomes.
74
Comparative Accuracy of the Wells Score
and AMUSE Score for the Detection of
Acute Lower Limb Deep Vein Thrombosis
Gabriel E. Blecher1, Ian G. Stiell1,
Michael Y. Woo1, Paul Pageau1, Phil Wells1,
Matthew D.F. McInnes1, George A. Wells2
1University of Ottawa, Ottawa, ON, Canada; 2University of Ottawa Heart Institute, Ottawa, ON, Canada
Background: The Wells clinical decision rule is commonly used to risk-stratify patients with potential lower
limb deep vein thrombosis (DVT). Recently another rule
that incorporates a d-dimer was developed to address
the reported lowered sensitivity amongst primary care
patients.
Objectives: We evaluated the test characteristics of
both the Wells and AMUSE clinical decision rules in a
cohort of emergency department patients who presented with suspected acute lower limb DVT.
Methods: We conducted a prospective cohort study on
a sample of adult patients who presented to an academic ED with suspected acute lower limb DVT. Demographic and clinical data were collected and d-dimer
was measured on all patients. The outcome of interest
was any fatal or non-fatal venous thromboembolism at
90 days, as determined by radiology database and medical record review, as well as structured telephone
interview. The sensitivity, specificity, positive and negative predictive values were calculated for patients classified as high risk for DVT by the Wells and AMUSE
scores.
Results: The 148 study patients had the following characteristics: mean age 58.6 and male sex 45.3%; 1.4%
had congestive heart failure, 2.7% peripheral vascular
disease, and 0.7% systemic lupus erythematosus. 19.6%
were taking antiplatelet agents and 5.4% anticoagulants.
41.7% had an elevated d-dimer; 33.3% of the cohort were classified as high-risk by the Wells score and
45.5% by the AMUSE score. The Wells score had a sensitivity of 81.0% (95% CI 58.1–94.6) and specificity of
76.3%, (95% CI 65.2–85.3) whereas the AMUSE score
had a sensitivity of 90.5% (95% CI 69.6–98.8) and specificity of 64.0% (95% CI 52.1–74.8).
Conclusion: The AMUSE score was more specific, but
the Wells score was more sensitive for acute lower limb
DVT in this cohort. There is no significant advantage in
using the AMUSE over the Wells score in ED patients with suspected DVT.
Table - Abstract 74:
                            Wells instrument            AMUSE instrument
Sensitivity                 81.0% (95% CI 58.1–94.6)    90.5% (95% CI 69.6–98.8)
Specificity                 76.3% (95% CI 65.2–85.3)    64.0% (95% CI 52.1–74.8)
Positive predictive value   48.6% (95% CI 31.4–66.0)    41.3% (95% CI 27.0–56.8)
Negative predictive value   93.6% (95% CI 84.3–98.2)    96.0% (95% CI 86.3–99.5)
75
Facial Action Coding to Enhance The
Science of Pretest Probability Assessment
(the FACES initiative)
Melissa Haug, Virginia Krabill, David Kammer,
Jeffrey Kline
Carolinas Medical Center, Charlotte, NC
Background: Pretest probability scoring systems do
not explicitly measure the appearance of sickness or wellness
as a predictor variable. However, emergency physicians
frequently employ the observation of ‘‘(s)he looks
good’’ to make real-time decisions.
Objectives: Test if patient facial expressions can be
quantized and used to improve pretest probability
assessment.
Methods: This was a prospective, non-interventional,
proof of concept study of diagnostic accuracy. We
programmed a laptop computer to simultaneously videotape the patient's face while the patient viewed an automated, 15-second presentation containing three visual
stimuli slides, two humorous, and one sad. Patients
were adults with a chief complaint of chest pain or
dyspnea. Criterion standard: 1. D-: No significant cardiopulmonary disease within 7 d, or 2. D+: Acute disease requiring hospital care for >24 hours (PE, ACS,
aortic disaster, or pneumonia, CHF, COPD or arrhythmia requiring IV medication). Two observers analyzed
the videos and for each slide, they computed the Facial
Action Coding System (FACS) score, based upon action
units (AU, 0–5) of 43 individual facial muscle actions.
IOV assessed with weighted kappa, medians with
Mann-Whitney U, and diagnostic accuracy with ROC
and sensitivity (sens) and specificity (spec).
Results: We enrolled 50 patients (52 ± 16 years, 38%
female, 46% Caucasian, 94% triage code 3 or 4 [4 highest possible]). Complete FACS scores were obtained
from all 50. 12/50 (24%) were D+, and their median total AU score (7.5, IQR 1–36) was significantly lower than that of D- patients (31.5, IQR 14–62; p = 0.21, MWU), with ROC = 0.71.
FACS scores were clustered for expressions of Smile,
Surprise, or Frown, each with AU scores ranging from
0 (completely absent, test negative) to 22 (strongest
expression). Using best response to appropriate slide,
Smile had ROC = 0.85 (0.6 to 1.0), sens = 91%,
spec = 64%; Surprise had ROC = 0.72 (0.40 to 1.0),
sens = 82%, spec = 59%, and Frown had ROC = 0.54,
sens = 82%, spec = 77%. For total AU score, weighted
kappa = 0.23 (0.05 to 0.40) and for Smile, interobserver
agreement within ±1 AU was 80% and weighted
kappa = 0.28 (95% CI 0.1 to 0.47).
Conclusion: Type and magnitude of facial expression contain useful diagnostic information. These data specifically support the diagnostic value of documenting
presence of patient smile on physical examination for
patients with chest complaints.
76
Acute Stroke Research and Treatment Consent: The Accuracy of Surrogate Decision Makers
Jessica Bryant
University of Michigan Medical School, Ann Arbor, MI
Background: Surrogate consent for treatment and
research options on behalf of mentally incapacitated
patients with acute stroke is currently the standard of
practice in emergency departments across the nation.
Many studies have, however, indicated the failure of
surrogates to accurately predict patient preferences in
a variety of other clinical settings.
Objectives: This study is designed to investigate the
hypothesis that such a failure extends to the acute
stroke setting as well–that significant discrepancies
exist between surrogate decisions and the preferences
of the patients they represent.
Methods: We performed a cross-sectional verbal survey of 200 patients in the University of Michigan ED
without stroke, and the family member, friend, or significant other who had accompanied the patient, resulting
in a total enrollment of 400. The patient was presented
with five scenarios for treatment decisions in the event
of an acute stroke or cardiac arrest. After each scenario,
he or she indicated the likelihood of consenting to the
treatment/protocol for each scenario. The same procedure was then performed separately on the surrogate.
Results: Overall, surrogates predicted patients’ treatment preferences with 80.2% accuracy. Patient/surrogate agreement for scenarios 1, 2, and 5 was 96%, 87%,
and 95% respectively. Scenarios 3 and 4–regarding a
standard pharmacotherapy RCT and an adaptive RCT–
gave rise to the vast majority of disagreement between
patients and surrogates. In scenario 3, 69.5% of pairs
refused the trial while 4.5% of pairs consented to the
trial, resulting in an agreement rate of 74%. The adaptive clinical trial (scenario 4) represented the lowest rate of agreement, at 49%.
Conclusion: The accuracy with which surrogates were
able to predict patient preferences was highly
dependent on the type of treatment being offered. Our study found substantially more agreement for standard treatments (scenarios 1 and 5) than for experimental/research protocols (scenarios 2, 3, and 4). The adaptive clinical trial was more acceptable than a standard RCT, although there was substantially more patient/surrogate disagreement regarding participation in the adaptive trial, potentially due to the increased complexity of the design. Further research is needed into optimal consent approaches in time-sensitive, high-stakes diseases such as stroke and other acute neurological conditions.
77
Validation of Mid-Arm Circumference for a Rapid Estimate of Emergency Patient Weight
Matthew N. Graber1, Christopher Belcher1, Matthew Mart1, Roba Rowe1, Patrick Jenkins1, Melchor Ortiz2
1University of Kentucky, Lexington, KY; 2Texas Tech, El Paso, TX
Background: Emergency patients are often treated with weight-based fluids and medications with only a guess as to their actual weight. The patients who are too ill for a triage-obtained weight are those most likely to receive weight-based, life-saving medications such as heparin, low molecular weight heparins, thrombolytics, vasopressors, and anti-arrhythmics. There are no validated methods by which to estimate adult patient weight in the emergency department. We have previously derived a rule for converting mid-arm circumference (MAC) to weight in adult emergency patients in a mostly Hispanic population in southwest Texas. This formula is: Weight Kg = (MAC cm x 3.5) – 30.
Objectives: To validate the previously derived formula for converting MAC to weight in a different patient population.
Methods: This was an IRB-approved study conducted at a large, university-affiliated hospital in Kentucky seeing a predominantly Caucasian population. Triaged patients had bilateral MAC measured. Age and measured triage weight were also collected. Using regression analysis, a plot and formula for the conversion of MAC to weight has been derived. This conversion factor will be compared to the previously derived conversion formula.
Results: 294 patients have been evaluated. A formula converting MAC to weight has been derived and compared favorably to the previously derived formula. Age does not appear to affect the conversion formula.
Conclusion: The formula for converting MAC to weight, originally derived in a mostly Hispanic population in Southwest Texas, has an excellent fit to the data from a mostly Caucasian population in Kentucky. The formula is therefore validated in varying populations and can be used to rapidly estimate a patient's weight in the emergency department. Weight Kg = (MAC cm x 3.5) – 30
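The validated rule quoted above is simple enough to express directly; a minimal Python sketch (the function name is ours, not the authors'):

def estimate_weight_kg(mac_cm):
    # Weight (kg) = mid-arm circumference (cm) x 3.5 - 30, per the abstract.
    return mac_cm * 3.5 - 30

print(estimate_weight_kg(30))  # a 30 cm MAC suggests roughly a 75 kg patient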
78
Hospice and Palliative Care in the
Emergency Department
Afzal Beemath1, Robert Zalenski2,
Cheryl Courage2, Alexandra Edelen2
1Detroit Medical Center and Seasons Hospice and Palliative Care, Detroit, MI; 2Division of Palliative Medicine, Department of Emergency Medicine, Wayne State University School of Medicine, Detroit, MI
Background: An estimated 2.4–3.6 million visits are
made to hospital EDs across the United States each
year by nursing home (NH) residents. Little is known
regarding the frequency at which NH patients are eligible for a hospice and palliative care (HPC) referral and
how many actually receive an appropriate HPC intervention during their hospitalizations.
Objectives: The goal of the investigation was to examine and quantify hospice eligibility and code status in a
cohort of patients transferred from nursing homes and
other extended care facilities to EDs who are subsequently admitted to the hospital. Additionally, it was
hypothesized that demographic variables would influence whether an HPC intervention was received and
that hospital course would differ for those patients who
received an HPC intervention.
Methods: A retrospective chart review of a random
sample (n = 97) of non-cancer terminally ill NH patients
admitted to a Level I trauma center via the ED was performed. Nursing home residents who presented to the
ED were excluded if they were transferred back to the
NH after a period of observation of less than 24 hours.
Eligibility for HPC referral was assessed using the Hospice Referral Guidelines for Non-Cancer Diagnoses, a
checklist of markers of severity of diagnosis. Demographics, hospital course, and changes to code status
during hospital stay were also examined.
Results: Almost all patients were eligible for an HPC
referral (97.9%), and only 8.2% (n = 8) of patients actually received an HPC intervention. Patients who received
the intervention did not differ in terms of demographics
or hospital course from those who did not receive the
HPC intervention. Most patients were ‘‘full code’’ at
admission (72.1%), and 18.6% of patients changed their
status to DNR while admitted. Those who had a goal of
care discussion were significantly more likely (χ2 (1, 81) = 6.99, p = 0.007) to change their code status to DNR
than those who did not have a discussion. Those who
had HPC referrals had higher rates of goals of care discussions, DNR status, and higher mortality.
Conclusion: These findings suggest that health care
providers should consider an HPC referral for all nursing home patients who enter the ED with a terminal
diagnosis. Providing goals of care discussions and HPC
services to patients is likely to prevent unwanted medical intervention and improve quality of life, and may
also be cost effective.
79
Prospective Validation of a Prediction
Instrument for Endocarditis in Febrile
Injection Drug Users
Hangyul M. Chung-Esaki1, Robert M.
Rodriguez2, Jonathan Fortman2, Bitou Cisse3
1UCSF, San Francisco, CA; 2San Francisco General Hospital, San Francisco, CA; 3Alameda County Medical Center, Oakland, CA
Background: Diagnosis of infectious endocarditis (IE)
among injection drug users (IDUs) in the ED is
challenging due to poor sensitivity and specificity of
individual clinical signs and laboratory tests. Our pilot
derivation study derived a decision instrument (DI)
based on tachycardia, lack of skin infection, and cardiac
murmur, which yielded 100% sensitivity (95% CI 84–
100) and 100% negative predictive value (95% CI 88–
100) for IE in the derivation cohort.
Objectives: To evaluate and validate the diagnostic performance of a previously derived DI to rule out IE in
febrile IDUs using recursive partitioning and multiple
logistic regression models.
Table - Abstract 79: Recursive partitioning model results (value, 95% CI)
                            Tachycardia (HR >= 100)   Cardiac murmur          Absence of skin infection   Decision instrument (all three)
Sensitivity                 0.889 (0.654–0.986)       0.722 (0.465–0.903)     0.944 (0.727–0.999)         1.000 (0.815–1.000)
Specificity                 0.428 (0.363–0.495)       0.706 (0.642–0.764)     0.433 (0.368–0.500)         0.134 (0.089–0.180)
Positive predictive value   0.109 (0.064–0.171)       0.160 (0.088–0.259)     0.115 (0.068–0.178)         0.083 (0.049–0.127)
Negative predictive value   0.980 (0.930–0.998)       0.970 (0.932–0.990)     0.990 (0.946–1.000)         1.000 (0.884–1.000)
+Likelihood ratio           1.554 (1.275–1.894)       2.453 (1.730–3.489)     1.665 (1.421–1.952)         1.149 (1.093–1.208)
-Likelihood ratio           0.260 (0.070–0.967)       0.394 (0.186–0.833)     0.128 (0.019–0.867)         0.000
Odds ratio                  6.107 (1.345–26.637)      6.232 (2.139–18.161)    12.977 (1.698–99.156)       Infinite
Methods: Febrile IDUs admitted from two urban EDs were prospectively enrolled from June 2007 to March 2011 if they were admitted to rule out endocarditis. Clinical
prediction data from the first 6 hours of ED presentation
and outcome data from inpatient records were prospectively collected. Diagnosis of IE was based on Modified
Duke criteria and discharge diagnosis of endocarditis. In
this new cohort, we determined the screening performance of the previously derived recursive partitioning
model, and derived and validated a multiple logistic
regression model with the same criteria.
Results: Of the 249 subjects, 18 (7.3%) had IE. The
recursive partitioning derived DI had 100% sensitivity
(95% CI 83.7–100), 100% negative predictive value (95%
CI 90.6–100), but low specificity (13.4%, 95% CI 12.2–
13.4). The logistic regression model with the three predictors using a probability threshold of 0.13 yielded
94.4% sensitivity (95% CI 72.7–99.9) with higher specificity (58.4%, 95% CI 51.8–64.9).
Conclusion: In this internal validation study, a DI using
tachycardia, cardiac murmur, and absence of skin infection ruled out IE with high sensitivity (94.4–100%) and
low to moderate specificity (13.4–58.4%), using recursive partitioning and logistic regression models. If
external validation demonstrates similar screening performance, the DI may decrease admissions for ‘‘rule
out endocarditis’’ in febrile IDU.
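One plausible reading of the combined instrument, consistent with its 100% sensitivity and low specificity, is that a patient is positive (IE cannot be ruled out) if any of the three predictors is present; a minimal Python sketch (function and argument names are ours, not the authors'):

def cannot_rule_out_ie(heart_rate, murmur, skin_infection):
    # Positive if ANY criterion is present; only a patient with none of
    # the three (no tachycardia, no murmur, and a skin infection present
    # as an alternative source) is classified low risk.
    return (heart_rate >= 100) or murmur or (not skin_infection)

print(cannot_rule_out_ie(88, False, True))   # False: low risk
print(cannot_rule_out_ie(112, False, True))  # True: admit to rule out IE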
80
Early Predictors of Post-Concussive
Syndrome in Mild Traumatic Brain Injuries
Presenting to the ED
Boris Vidri1, Syed Ayaz2, Patrick Medado2,
Scott Millis2, Brian J. O’Neil2
1Wayne State, Detroit, MI; 2Detroit Medical Center, Detroit, MI
Background: Negative head CT does not exclude
intracranial injury and does not predict post-concussive syndrome (PCS) in mild traumatic brain injury
(mTBI).
Objectives: The Concussion Symptom Inventory (CSI) is a self-reported symptom score using a Likert scale that
has been validated in mTBI. Standard Assessment of
Concussion (SAC) is another validated scale to test neurocognitive impairment. We studied whether CSI or
SAC obtained in the emergency department (ED) were
predictive of PCS.
Methods: The CSI and SAC surveys were assessed
upon ED presentation, 3 hours post, and at 7 and
45 days after presentation. Patients presenting with
potential mTBI and a Conley Score = 10 were enrolled.
Controls consisted of ED patients with pain, but without head injury. PCS was defined as a CSI>12 at
45 days. Data were analyzed by simple statistics, ROC,
and a linear regression model.
Results: 174 patients were enrolled (55 controls, 119
mTBI) over 18 months. ROC analysis revealed a CSI
cutoff >12 differentiated control from TBI patients. Of
the individual CSI variables, lack of headache and light
sensitivity appropriately classified 73% of mTBI and
76% of controls. The baseline CSI showed significance
(p < 0.0001) in predicting 7-day CSI. Baseline CSI had
moderate correlation to PCS (r = 0.39), while 7-day CSI
was highly correlated to PCS (r = 0.54). The baseline
SAC did not differentiate TBI from controls, mean for
controls was 24.75 ± 3 and for TBI was 23.77 ± 4
(p = 0.08). The initial SAC score was not able to predict
the 7-day CSI (p = 0.25). A ROC curve analysis of SAC
gave c statistics for detection of PCS of: initial
SAC = 0.44, 3-hour SAC = 0.39, and 45-day SAC = 0.41.
Conclusion: A CSI of >12 best differentiated head injury from controls. Lack of headache and light sensitivity
are the best symptoms to differentiate the head injured
from sham control. Baseline CSI predicts symptom
retention at day 7. 7-day CSI is a better predictor of
PCS than baseline CSI. The SAC does not differentiate
head-injured from control and does not predict PCS.
Patients with a CSI >12 would probably benefit from
outpatient referral, particularly if CSI remains high at
day 7.
81
Clinical Characteristics Associated with
Moderate to Severe Carotid Stenosis or
Dissection in Transient Ischemic Attack
Patients Presenting to the Emergency
Department
Heather Heipel1, Mukul Sharma1, Ian G. Stiell1,
Jane Sutherland2, Marco L.A. Sivilotti3,
Marcel Emond4, Andrew Worster5,
Grant Stotts1, Jeffrey J. Perry1
1University of Ottawa, Ottawa, ON, Canada; 2Ottawa Hospital Research Institute, Ottawa, ON, Canada; 3Queen's University, Kingston, ON, Canada; 4Laval University, Laval, QC, Canada; 5McMaster University, Hamilton, ON, Canada
Background: The recent emphasis on early surgical
intervention for symptomatic carotid stenosis >50%
highlights the need to prioritize imaging of transient
ischemic attack (TIA) patients. While clinical decision
rules have sought to identify patients at high risk for
stroke, there are limited data on clinical characteristics
associated with significant carotid findings on imaging,
including moderate to severe stenosis and dissection.
Objectives: We compared clinical features of TIA
patients with significant carotid findings on carotid
artery imaging to those with normal or insignificant
abnormalities.
Methods: We prospectively enrolled adult TIA patients
at eight tertiary-care academic EDs over 5 years as part
of a larger TIA decision rule study. We actively followed patients with health record review and telephone
calls to identify interventions and outcomes for 90 days
following enrolment. Significant carotid findings were
defined as stenosis >50% or dissection by catheter angiography, CTA, MRA, or carotid Doppler at the discretion of the treating physician. Characteristics were
recorded on standardized data forms prior to review of
carotid imaging results. We conducted logistic regression for adjusted odds ratios (OR).
Results: We enrolled 3,704 patients in the main TIA
clinical decision rule study (mean age 78.2 years; 50.5%
male), of whom 3,069 (82.9%) underwent carotid imaging and represent the study population for this analysis.
Overall, 2,580 (69.7%) had normal or <50% stenosis, 41
(1.1%) had near to complete stenosis, 435 (11.7%) had
50–99% stenosis, and 13 (0.4%) had dissection.
Conclusion: TIA patients with aphasia, SBP ≥160 mmHg, and a history of hypertension, coronary
artery disease, dyslipidemia, or previous carotid stenosis are more likely to have carotid artery stenosis or
dissection. Patients with confusion and vertigo were at
lower risk. Despite the size and power of this study,
many patient characteristics are only weakly associated
with positive imaging. For now, urgent carotid imaging
should remain an essential component of early risk
stratification of all TIA patients.
82
Boarding Costs: ED Care Is More Expensive
Richard Martin, James Miranda
Temple University, Philadelphia, PA
Background: The direct cost of medical care is not
accurately reflected in charges or reimbursement. The
cost of boarding admitted patients in the ED has been
studied in terms of opportunity costs, which are indirect. The actual direct effect on hospital expenses has
not been well defined.
Objectives: We calculate the difference to the hospital
in the cost of caring for an admitted patient in the ED
and in a non-critical care in-patient unit.
Methods: Time-driven activity-based costing (TDABC) has recently been proposed as a method of determining the actual cost of providing medical services. TDABC was used to calculate the cost per patient
bed-hour both in the ED and for an in-patient unit. The
costs include nursing, nursing assistants, clerks, attending and resident physicians, supervisory salaries, and
equipment maintenance. Boarding hours were determined from placement of admission order to transfer to
in-patient unit. A convenience sample of 100 consecutive non-critical care admissions was assessed to find
the degree of ED physician involvement with boarded
patients.
Results: The overhead cost per patient bed-hour in the
ED was $60.80. The equivalent cost per bed-hour inpatient was $23.39, a differential of $37.41. There were
27,618 boarding hours for medical-surgical patients in
2010, a differential of $1,033,189.38 for the year. For
the short-stay unit (no residents), the cost per patient
hour was $11.36 and the boarding hours were 11,804.
This resulted in a differential cost of $583,389.76, a
total direct cost to the hospital of $1,616,579.14.
Review of 100 consecutive admissions showed no
orders placed by the ED physician after the decision to admit.
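The annual figures above follow from a single multiplication: the per-bed-hour cost gap times the boarding hours. A minimal sketch (illustrative only; the function name is ours, not the authors'):

# Reproduces the med-surg differential reported in the Results above.
def annual_boarding_differential(ed_cost_per_hour, unit_cost_per_hour, boarding_hours):
    """Extra direct cost of boarding: (ED rate - in-patient rate) x hours."""
    return (ed_cost_per_hour - unit_cost_per_hour) * boarding_hours

print(f"${annual_boarding_differential(60.80, 23.39, 27_618):,.2f}")  # -> $1,033,189.38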
Conclusion: Concentration of resources in the ED
means considerably higher cost per unit of care as
compared to an in-patient unit. Keeping admitted
patients boarding in the ED results in expensive underutilization. This is exclusive of significant opportunity
costs of lost revenue from walk-out and diverted
patients. This study includes the cost of teaching attendings and residents (ED and in-patient). In a non-teaching setting, the differential would be less and the cost
of boarding would be shared by a fee-for-service ED
physician group as well as the hospital.
83
Improving Identification of Frequent
Emergency Department Users Using
a Regional Health Information Exchange
William Fleischman1, John Angiollilo2, Gilad
Kuperman3, Arit Onyile1, Jason S. Shapiro1
1Mount Sinai School of Medicine, New York, NY; 2Columbia University College of Physicians and Surgeons, New York, NY; 3New York Presbyterian Hospital, New York, NY
Background: Frequent ED users consume a disproportionate amount of health care resources. Interventions
are being designed to identify such patients and direct
them to more appropriate treatment settings. Because
some frequent users visit more than one ED, a health
information exchange (HIE) may improve the ability to
identify frequent ED users across sites of care.
Objectives: To demonstrate the extent to which a HIE
can identify the marginal increase in frequent ED users
beyond that which can be detected with data from a
single hospital.
Methods: Data from 6/1/10 to 5/31/11 from the New
York Clinical Information Exchange (NYCLIX), a HIE in
New York City that includes ten hospitals, were analyzed to calculate the number of frequent ED users (≥4
visits in 30 days) at each site and across the HIE.
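As a sketch of the frequent-user definition applied across sites (≥4 visits within any 30-day window, counted over HIE-linked records); the data layout and names below are assumptions for illustration, not NYCLIX code:

from datetime import date

def is_frequent_user(visit_dates, k=4, window_days=30):
    """True if any k consecutive visits fall within window_days of each other."""
    d = sorted(visit_dates)
    return any((d[i + k - 1] - d[i]).days <= window_days
               for i in range(len(d) - k + 1))

# Pooling visits across hospitals under a shared patient identifier is what
# lets the HIE flag users that any single ED would miss:
visits = {"p1": [date(2010, 6, 1), date(2010, 6, 5),    # seen at one hospital
                 date(2010, 6, 9), date(2010, 6, 20)]}  # and at two others
print({pid: is_frequent_user(ds) for pid, ds in visits.items()})  # {'p1': True}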
Results: There were 10,555 (1% of total patients) frequent ED users, with 7,518 (71%) of frequent users having all their visits at a single ED, while 3,037 (29%)
frequent users were identified only after counting visits
to multiple EDs (Table 1). Site-specific increases varied
from 7% to 62% (SD 16.5). Frequent ED users
accounted for 1% of patients, but for 6% of visits, averaging 9.74 visits per year, versus 1.55 visits per year for
all other patients. 28.5% of frequent users visited two
or more EDs during the study period, compared to
10.6% of all other patients.
Conclusion: Frequent ED users commonly visited multiple NYCLIX EDs during the study period. The use of a
HIE helped identify many additional frequent users,
though the benefits were lower for hospitals not
located in the relative vicinity of another NYCLIX hospital. Measures that take a community, rather than a single institution, into account may be more reflective of
the care that the patient experiences.
84
Indocyanine Green Dye Angiography
Effective at Early Prediction of Second
Degree Burn Outcome
Mitchell S. Fourman, Brett T. Phillips,
Laurie Crawford, Filippo Romainelli, Fubao Lin,
Adam J. Singer, Richard A. Clark
Stony Brook University Medical Center, Stony
Brook, NY
Background: Due to their complex nature and high
associated morbidity, burn injuries must be handled
quickly and efficiently. Partial thickness burns are currently treated based upon visual judgment of burn
depth by the clinician. However, such judgment is
only 67% accurate and not expeditious. Laser Doppler
Imaging (LDI) is far more accurate - nearly 96% after
3 days. However, it is too cumbersome for routine
clinical use. Laser Assisted Indocyanine Green
Angiography (LAICGA) has been indicated as an
alternative for diagnosing the depth of burn injuries,
and possesses greater utility for clinical translation.
As the preferred outcome of burn healing is aesthetic,
it is of interest to determine if wound contracture can
be predicted early in the course of a burn by LAICGA.
Objectives: Determine the utility of early burn analysis
using LAICGA in the prediction of 28-day wound
contracture.
Methods: A prospective animal experiment was performed using six anesthetized pigs, each with 20 standardized wounds. Differences in burn depth were
created by using a 2.5 × 2.5 cm aluminum bar at three
exposure times and temperatures: 70 degrees C for
30 seconds, 80 degrees C for 20 seconds, and 80
degrees C for 30 seconds. We have shown in prior validation experiments that these burn temperatures and
times create distinct burn depths. LAICGA scanning,
using Lifecell SPY Elite, took place at 1 hour, 24 hours,
48 hours, 72 hours, and 1 week post burn. Imaging was
read by a blinded investigator, and perfusion trends
were compared with day 28 post-burn contraction outcomes measured using ImageJ software. Biopsies were
taken on day 28 to measure scar tissue depth.
Results: Deep burns were characterized by a blue center indicating poor perfusion while more superficial
burns were characterized by a yellow-red center indicating perfusion that was close to that of the normal uninjured adjacent skin (see figure). A linear relationship
between contraction outcome and burn perfusion could
be discerned as early as 1 hour post burn, peaking in
strength at 24–48 hours post-burn. Burn intensity could
be effectively identified at 24 hours post-burn, although
there was no relationship with scar tissue depth.
Conclusion: Pilot data indicate that LAICGA using Lifecell SPY has the ability to determine the depth of injury
and predict the degree of contraction of deep dermal
burns within 1–2 days of injury with greater accuracy
than clinical scoring.
85
The Sepsis Alert: Real-time Electronic
Health Record Surveillance in the
Emergency Department and Sepsis
Outcomes
Thomas Yeich, Suzan Brown, Kristin Thomas,
Robin Walker
York Hospital, York, PA
Background: Early intervention (e.g. fluid resuscitation, antibiotics, and goal directed therapy) improves
outcomes in ED patients with severe sepsis and septic
shock. Clinical definitions of severe sepsis and septic
shock such as SIRS criteria, hypotension, elevated lactate, and end-organ damage (with the exception of
mental status changes) are objectively defined and
quantifiable.
Objectives: We hypothesize that real-time monitoring
of an integrated electronic medical records system and
the subsequent firing of a "sepsis alert" icon on the
electronic ED tracking board results in improved mortality for patients who present to the ED with severe
sepsis or septic shock.
Methods: We retrospectively reviewed our hospital’s
sepsis registry and included all patients diagnosed
with severe sepsis or septic shock presenting to an
academic community ED with an annual census of
73,000 visits and who were admitted to a medical ICU
or stepdown ICU bed between June 2009 and October 2011. In May 2010 an algorithm was added to
our integrated medical records system that identifies
patients with two SIRS criteria and evidence of end-organ damage or shock on lab data. When these criteria are met, a "sepsis alert" icon (prompt) appears
next to that patient’s name on the ED tracking board.
The system also pages an in-house, specially trained
ICU nurse who can respond on a PRN basis and
assist in the patient's management. Eighteen months of
intervention data were compared with 11 months of
baseline data. Statistical analysis was via z-test for
proportions.
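A minimal sketch of the two-proportion z-test named above (the abstract does not state whether a continuity correction was applied, so reported p-values may differ slightly from this uncorrected version):

from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Return (z, two-sided p) for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p

# e.g. two_proportion_z(19, 125, 34, 378) for the severe sepsis comparison below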
Results: For ED patients with severe sepsis, the pre- and post-alert mortality was 19 of 125 (15%) and 34 of
378 (9%), respectively (p = 0.084; n = 503). In the septic
shock group, the pre- and post-alert mortality was 27
of 92 (29%) and 48 of 172 (28%), respectively
(p = 0.977). With ED and inpatient sepsis alerts combined, the severe sepsis subgroup mortality was
reduced from 17% to 9% (p = 0.013; n = 622).
Conclusion: Real-time ED EHR screening for severe
sepsis and septic shock patients did not improve mortality. A positive trend in the severe sepsis subgroup
was noted, and the combined inpatient plus ED data
suggests statistical significance may be reached as more
patients enter the registry. Limitations: retrospective
study, potential increased data capture post-intervention, and no "gold standard" to test the sepsis alert
sensitivity and specificity.
86
Emergency Department Physician
Experience With A Real-time, Electronic
Pneumonia Decision Support Tool
Caroline G. Vines1, Dave S. Collingridge2,
Barbara E. Jones1, Leanne Struck2,
Todd L. Allen2, Nathan C. Dean2
1University of Utah, Salt Lake City, UT; 2Intermountain Medical Center, Salt Lake City, UT
Background: Variability in the decision to admit
patients with pneumonia has been documented. We
developed a real-time, electronic decision support tool
to aid ED physicians in triage, diagnostic studies, and
antibiotic selection for patients presenting with pneumonia. The tool identifies patients likely to have pneumonia and alerts the physician via an electronic tracker
board (ETB). Utilization of the tool in the first 2 months
was low.
Objectives: To describe experience and impressions of
physicians regarding the tool.
Methods: An online survey was sent to all ED physicians and EM residents who work at one or more of
four EDs in Salt Lake Valley 6 months after tool implementation. Surveys were confidential through REDCap
(Research Electronic Data Capture). The survey utilized
discrete questions with continuous, binary, or polychotomous response formats (see table). Descriptive statistics were calculated. Principal component analysis was
used to determine questions with continuous response
formats that could be aggregated. Aggregated outcomes were regressed onto predictor demographic
variables using multiple linear regression.
Results: 80/100 physicians completed the survey. Physicians had a mean of 9.8 ± 9.0 years experience in the
ED. 23.8% were female. Eight physicians (10%)
reported never having used the tool, while 70.8% of
users estimated having used it more than five times.
75% of users cited the "P" alert on the ETB as the most
common notification method. Most felt the "P" alert
did not help them identify patients with pneumonia earlier (mean = 2.5 ± 1.2), but found it moderately useful in
reminding them to use the tool (3.5 ± 1.3). Physicians
found the tool helpful in making decisions regarding
triage, diagnostic studies, and antibiotic selection for
outpatients and inpatients (3.7 ± 1.0, 3.6 ± 1.1, 3.6 ± 1.1,
and 4.2 ± 0.9, respectively). They did not feel it negatively affected their ability to perform other tasks
(1.6 ± 0.9). Using multiple linear regression, neither age,
sex, years experience, nor tool use frequency significantly predicted responses to questions about triage
and antibiotic selection, technical difficulties, or diagnostic ordering.
Conclusion: ED physicians perceived the tool to be
helpful in managing patients with pneumonia without
negatively affecting workflow. Perceptions appear consistent across demographic variables and experience.
Table - Abstract 86: Sample of continuous response format survey questions*

9. Does the "P" alert on the PTS board help you to identify pneumonia patients earlier?
10. Does the "P" alert on the PTS board remind you to start and use the tool?
13. Do you feel the tool helped you to appropriately triage your patients?
14. Do you feel the tool helped you to order the appropriate diagnostic studies for your patients?
15. How useful was the tool in helping you to prescribe appropriate antibiotics for OUTPATIENTS?
16. How useful was the tool in helping you to prescribe appropriate antibiotics for INPATIENTS?
17. How frequently do you experience technical difficulties when using the tool?
18. Did the tool negatively impact your ability to perform other tasks?
* Response options were 1 through 5, with 1 indicating "not at all" and 5 "very much so".
87
Improving Identification of Hospital
Readmissions Using a Regional Health
Information Exchange
John Angiollilo1, William Fleischman2, Gilad
Kuperman3, Arit Onyile2, Jason S. Shapiro2
1Columbia University College of Physicians and Surgeons, New York, NY; 2Mount Sinai School of Medicine, New York, NY; 3New York Presbyterian Hospital, New York, NY
Background: Hospital readmissions within 30 days,
many of which occur via the ED, are proposed as a target for improvement via "payment incentives" by the
Centers for Medicare and Medicaid Services. A portion
of readmissions, however, occur away from the discharging hospital, making it difficult for hospitals to
identify part of this population.
Objectives: To demonstrate the extent to which a
health information exchange system (HIE) can identify
the marginal increase in 30-day readmissions and ED
visits of recently discharged patients, which potentially
lead to these readmissions.
Methods: Data from 5/1/10 to 4/30/11 from the New
York Clinical Information Exchange (NYCLIX), a HIE in
New York City that includes ten hospitals with a total of
7,717 inpatient beds, were analyzed to calculate hospital
index discharges and subsequent readmissions and ED
visits to the discharging hospital versus other hospitals.
Results: There were 320,967 inpatient admissions/discharges by 271,460 patients. There were 41,630 (13% of
total discharges) readmissions within 30 days of discharge, with 37,829 readmissions occurring at the same
hospital, while 3,801 patients (9.1% of total readmissions and 1.2% of total discharges) were readmitted to
a different hospital. Site-specific increases in identification of readmissions ranged from 3%–25%, SD 8.7 (see
table). There were 37,697 ED visits within 30 days of
discharge, with 34,143 (91%) visits to the discharging
hospital’s ED and 3,554 (9%) visits to a different hospital’s ED.
Conclusion: Readmissions and ED visits within thirty
days of discharge to non-discharging hospitals were
common among this group of hospitals and patients
during the study period. The use of a HIE helped identify many additional readmissions, though the benefits
were lower for hospitals not located in the relative
vicinity of another NYCLIX hospital. Measures that take
a community, rather than a single institution, into
account may be more reflective of the care that the
patient experiences.
88
A Comparison Of Outcomes In Post-cardiac
Arrest Patients With And Without
Significant Intracranial Pathology On Head
CT
Sean Doran1, Sameer Syed1, Chris Martin2,
Shelley McLeod1, Matthew Strauss1, Neil Parry1,
Bryan Young1
1The University of Western Ontario, London, ON, Canada; 2Royal Victoria Hospital, Barrie, ON, Canada
Background: When survivors of sudden cardiac arrest
arrive to the emergency department (ED) with return of
spontaneous circulation (ROSC), physicians should
ascertain the etiology of the cardiac arrest (cardiac,
neurologic, respiratory, etc). Consequently, some clinicians advocate for post-cardiac arrest head CT in all
patients with ROSC for diagnostic, treatment, and
prognostic purposes.
Objectives: To explore clinical practice regarding the
use of head CT in the immediate post-cardiac arrest
period in patients with ROSC. A secondary objective
was to compare outcomes of patients with significant
(SAH, edema, or infarct) versus non-significant findings
on head CT.
Methods: A retrospective medical record review was
conducted for all adult (≥18 years) post-cardiac arrest
(initial rhythm VT/VF, PEA, or asystole) patients admitted to the intensive care unit (ICU) of an academic
tertiary care centre (annual ED census 150,000) from
2006–2007. Data were extracted using a standardized
data collection tool by trained research personnel.
Results: 200 patients were enrolled. Mean (SD) age was 66 (16) years and 56.5% were male. 79 (39.5%) had a head CT within 24 hours of ICU admission. 14 (17.7%) had significant findings on head CT. Of these, 1 (7.1%) patient survived to ICU discharge, compared to 11/65 (16.9%) patients with non-significant findings (Δ −9.8%; 95% CI −22.2%, 15.6%). Of those with significant findings on head CT, median (IQR) ICU length of stay was 1 (1, 2.5) day compared to 4 (2, 4) days for patients with non-significant findings. Survival to hospital discharge was not different for patients with significant findings on head CT (1; 7.1%) compared to those with non-significant (9; 13.8%) findings (Δ −6.7%; 95% CI −18.7%, 18.5%). No patients with significant head CT findings survived to 1 year, compared to 9 (13.8%) patients with non-significant findings (Δ −13.8%; 95% CI −24.3%, 8.6%).
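The asymmetric intervals above are consistent with Newcombe's square-and-add hybrid of Wilson score intervals for a difference of proportions; the method is inferred from the reported values, not stated in the abstract:

from math import sqrt

def wilson(x, n, z=1.96):
    """Wilson score interval for a single proportion x/n."""
    p = x / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return center - half, center + half

def newcombe_diff_ci(x1, n1, x2, n2, z=1.96):
    """95% CI for p1 - p2 by combining the two Wilson intervals."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson(x1, n1, z)
    l2, u2 = wilson(x2, n2, z)
    d = p1 - p2
    return (d - sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2),
            d + sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2))

# ICU-discharge survival, significant vs. non-significant findings:
print(newcombe_diff_ci(1, 14, 11, 65))  # ~(-0.222, 0.156), i.e. -22.2% to 15.6%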
Conclusion: The use of post-cardiac arrest head CT
was variable with less than half of patients undergoing
head CT within 24 hours. There were no differences in
outcomes between those with significant versus non-significant findings on head CT in this small pilot study.
Further research is required to more accurately determine the prognostic utility of this imaging modality and
determine if there is a difference in other outcomes
such as ICU or hospital length of stay.
89
Bandemia Does Not Predict Mortality,
Positive Cultures, Or Source Location In
Septic Patients Presenting To The ED
Scott Teanu Mataoa, David Conner,
Charles R. Wira
Yale New Haven Hospital, New Haven, CT
Background: Bandemia is believed to be a clinical predictor of mortality in septic patients and has been incorporated into severity-of-illness scoring systems. To date there are few data looking
specifically at whether bandemia predicts mortality in
this patient population.
Objectives: Early identification and risk stratification
of septic patients in the emergency department (ED)
is important when implementing early clinical interventions to prevent mortality. In this study we evaluated whether bandemia >10% is associated with
mortality in patients with severe sepsis or septic
shock presenting to the ED. Secondary analyses
included evaluating bandemia as a predictor of culture
positivity, gram-negative source of infection, or source
location.
Methods: Retrospective cross-sectional study utilizing
patients identified in the Yale-New Haven Hospital sepsis
registry.
Results: 521 patients from the sepsis registry were
included in the study with 19.2% (n = 100) meeting
criteria of bandemia. The in-hospital 28-day mortality
rate in patients with bandemia was 18% (18/100),
compared to 14.5% (61/421) in patients without bandemia (p = 0.43). The rate of culture positivity defined
as any positive culture (blood, urine, sputum, wound,
catheter, abscess) was 50% (50/100) in patients with
bandemia and 40% (170/421) in patients without bandemia (p = 0.09). With respect to blood cultures
specifically the rate of positive cultures was 20% (20/
100) in patients with bandemia and 18% (76/421) in
patients without bandemia (p = 0.56). Additionally
the rate of gram-negative organisms from positive
blood cultures was 25% (25/100) in patients with bandemia and 24.7% (105/421) in patients without bandemia (p = 1.00). Regarding source location the
respective rates of incidence among patients with
bandemia were pneumonia 19.6% (31/158), GU 19.5%
(16/82), abdominal 25.4% (14/55), soft tissue 23.5% (8/
34) with no significant difference between the groups
(p = NS).
Conclusion: In septic patients presenting to the ED
with bandemia there was no observed difference in
rates of mortality, positive cultures, or source location.
90
Impact of ED Volumes on Sepsis
Resuscitation Bundle Compliance at an
Urban Level I Trauma Center
Hima Rao, Manu Malhotra, Howard A.
Klausner, Joseph Tsao, Emanuel Rivers
Henry Ford Hospital, Detroit, MI
Background: The Early Goal Directed Therapy (EGDT)
Resuscitation Bundle (RB) has been proven to reduce
mortality in patients with severe sepsis or septic shock.
Universal implementation, however, has proved elusive,
resulting in what we consider a preventable loss of life.
Objectives: The purpose of this study is to determine if
there is an association between ED patient volumes and
RB compliance.
Methods: Our study was a retrospective chart review
performed at an urban, Level I trauma center teaching hospital with an annual volume of >90,000
patients. All patients ≥18 years old who presented to
the ED between July 1, 2010 and December 31, 2010
with diagnoses of severe sepsis or septic shock were
included. ED volume data for patients who received
complete compliance with the RB were compared
with those who did not. The electronic medical record
and sepsis registry were used to obtain data regarding bundle compliance, daily census, new arrivals during stay, and nursing and physician to patient ratios.
Wilcoxon rank sum tests were used to compare differences between the RB compliance and control
groups.
Results: During the review period 224 eligible patients
presented (112 compliance group, 112 control group).
Average daily ED census was comparable (245.2
compliance, 249.86 control, p = 0.206), as was Category
1 (ED high acuity area) census (39.94 compliance, 40.86
control, p = 0.302). New Category 1 patients (10.1
compliance, 10.39 control, p = 0.737) and new resuscitation patients during ED stay (1.73 compliance, 2.12 control, p = 0.117) did not show significant differences.
Finally, nursing to patient (2.2 compliance, 2.19 control,
p = 0.843) and physician to patient ratios (4.33 compliance, 4.27 control, p = 0.076) were also similar between
groups.
Conclusion: This study did not show a statistically significant difference between ED volume data for patients
receiving complete resuscitation bundle compliance and
those who did not. Limitations include the retrospective
design of the study and that our sample size may be
too small to detect a threshold of overcrowding over
which bundle compliance is adversely affected. Given
the effect on mortality of this intervention, further study
is needed to identify barriers to RB compliance.
91
An Experimental Comparison of
Endotracheal Intubation Using a Blind
Supraglottic Airway Device During Ongoing
CPR With Manual Compression versus
Automated Compression
Bob Cambridge, Amy Chelin, Austin Lamb,
John Hafner
OSF St. Francis Medical Center, Peoria, IL
Background: There are a variety of devices on the
market, designed to quickly provide an airway during
resuscitation efforts. A new device, the Supraglottic
Airway Laryngopharyngeal Tube (SALT; Life Assist) is
designed for blind tracheal placement of an airway. Use
of the device in previous studies has been in static models.
Objectives: We seek to examine whether use of the
SALT device can provide reliable tracheal intubation
during ongoing CPR. The dynamic model tested the
device with human powered CPR (manual) and with an
automated chest compression device (Physio Control
Lucas 2). The hypothesis is that the predictable movement of an automated chest compression device will
make tracheal intubation easier than the random
movement from manual CPR.
Methods: The project was an experimental controlled
trial and took place in the ED at a tertiary referral center in Peoria, Illinois. This project was an expansion
arm of a similarly structured study using traditional laryngoscopy. Emergency medicine residents, attending
physicians, paramedics, and other ACLS-trained staff
were eligible for participation. In randomized order,
each participant attempted intubation on a mannequin
using the SALT device with no CPR ongoing, during
CPR with a manual compression, and during CPR with
an automatic chest compression. Participants were
timed in their attempt and success was determined
after each attempt.
Results: There were 43 participants in the trial. The
success rates in the control group and the automated
CPR group were both 86% (37/43) and the success
rate in the manual CPR group was 79% (34/43). The
differences in success rates were not statistically significant (p = 0.99 and p = 0.41). The automated CPR
group had the fastest average time but the difference
was not significant (8.051 sec; p = 0.144). The mean
time for intubation with manual CPR and no CPR
were not statistically different (9.042 sec, 8.770 sec;
p = 0.738).
Conclusion: Using the SALT device, the success rate of
tracheal intubation with ongoing chest compression
was similar to the success rate of intubation without
CPR. The SALT device did not guarantee a secure airway every time in a dynamic scenario and it had the
drawback of not providing direct visualization during
placement. The SALT device is a helpful adjunct but
should not replace a direct visualization method for a
first line intubation attempt in the ED.
92
Comparison of Baseline Aortic Velocity
Profiles and Response to Weight-Based
Volume Loading in Fasting Subjects:
A Randomized, Prospective Double-Blinded
Trial
Anthony J. Weekes, Margaret R. Lewis, Zachary
Kahler, Donald Stader, Dale P. Quirke, Courtney
Almond, H. James Norton, Dawn Middleton,
Vivek S. Tayal
Carolinas Medical Center, Charlotte, NC
Background: Cardiac output increases have been used
to define fluid responsiveness in mechanically ventilated
septic patients. The aortic velocity profile is an important variable in cardiac output calculations. We evaluated noninvasive measurements of aortic velocity using
bedside cardiac ultrasound.
Objectives: Our primary hypothesis was that in fasting,
asymptomatic subjects, larger fluid boluses would lead
to proportional aortic velocity changes. Our secondary
endpoints were to determine inter- and intra-subject
variation in aortic velocity measurements.
Methods: The authors performed a prospective randomized double-blinded trial using healthy volunteers.
We measured the velocity time integral (VTI) and maximal velocity (Vmax) with an estimated 0–20 degree pulsed
wave Doppler interrogation of the left ventricular outflow in the apical-5 cardiac window. Three physicians
reviewed optimal sampling gate position, Doppler angle
and verified the presence of an aortic closure spike.
Angle correction technology was not used. Subjects
with no history of cardiac disease or hypertension
fasted for 12 hours and were then randomly assigned
to receive a normal saline bolus of 2 ml/kg, 10 ml/kg or
30 ml/kg over 30 minutes. Aortic velocity profiles were
measured before and after each fluid bolus.
Results: Forty-two subjects were enrolled. Mean age
was 33 ± 10 (range 24 to 61) and mean body mass index
24.7 ± 3.2 (range 18.7 to 32). Mean volume (in ml) for
groups receiving 2 ml/kg, 10 ml/kg, and 30 ml/kg were
151, 748, and 2162, respectively. Mean baseline Vmax
(in cm/s) of the 42 subjects was 108.4 ± 12.5 (range 87
to 133). Mean baseline VTI (in cm) was 23.2 ± 2.8 (range
18.2 to 30.0). Pre- and post-fluid mean differences for
Vmax were −1.7 (± 10.3) and for VTI 0.7 (± 2.7). Aortic
velocity changes in groups receiving 2 ml/kg, 10 ml/kg,
and 30 ml/kg were not statistically significant (see
table). Heart rate changes were not significant.
Conclusion: Aortic velocity changes associated with
fluid loading were neither proportional nor statistically
significant within the different groups. Pulsed wave
Doppler values showed temporal variations that were
likely unrelated to volume loading. Angle of Doppler
interrogation was difficult to standardize and may have
influenced variability. There are technical limitations to
serial cardiac output calculations and comparisons with
echocardiography.
93
Identification of Critical Areas for
Improvement in ED Severe Sepsis
Resuscitation Utilizing In Situ Simulation
Emilie Powell, David H. Salzman, Susan Eller,
Lanty O’Connor, John Vozenilek
Northwestern University, Chicago, IL
Background: Clinicians recognize that septic shock is a
highly prevalent, high mortality disease state. Evidence
supports early ED resuscitation, yet care delivery is
often inconsistent and incomplete. The objective of this
study was to discover latent critical barriers to successful ED resuscitation of septic shock.
Objectives: To discover latent critical barriers to successful ED resuscitation of septic shock.
Methods: We conducted five 90-minute risk-informed
in-situ simulations. ED physicians and nurses working
in the real clinical environment cared for a standardized
patient, introduced into their existing patient workload,
with signs and symptoms of septic shock. Immediately
after case completion, clinicians participated in a 30-minute debriefing session. Transcripts of these sessions
were analyzed using grounded theory, a method of
qualitative analysis, to identify critical barrier themes.
Results: Fifteen clinicians participated in the debriefing
sessions: four attending physicians, five residents, five
nurses, and one nurse practitioner. The most prevalent
critical barrier themes were: anchoring bias and
difficulty with cognitive framework adaptation as the
patient progressed to septic shock (n = 26), difficult
interactions between the ED and ancillary departments (n = 22), difficulties with physician-nurse communication and teamwork (n = 18), and delays in placing
the central venous catheter due to perceptions surrounding equipment availability and the desire to
attend to other competing interests in the ED prior to
initiation of the procedure (n = 17 and 14). Each theme
was represented in at least four of the five debriefing
sessions. Participants reported the in-situ simulations to
be a realistic representation of ED sepsis care.
Conclusion: In-situ simulation and subsequent debriefing provide a method of identifying latent critical areas
for improvement in a care process. Improvement strategies for ED-based septic shock resuscitation will need
to address the difficulties in shock recognition and cognitive framework adaptation, physician and nurse teamwork, and prioritization of team effort.
94
The Presenting Signs and Symptoms of
Ruptured Abdominal Aortic Aneurysms:
A Meta-analysis of the Literature
Utpal Inamdar, Charles R. Wira
Yale New Haven Hospital Department of Emergency Medicine, New Haven, CT
Background: Ruptured abdominal aortic aneurysms
(AAA) are the 15th leading cause of death in the United
States, causing more than 10,000 deaths annually. Misdiagnosis rates can be as high as 30–60%. The classic
clinical presentation involves abdominal pain, back
pain, syncope, hypotension, and a pulsatile abdominal
mass. However, the frequency of these signs and symptoms has never been evaluated in a meta-analysis.
Objectives: To identify the frequency of presenting
signs and symptoms of patients with ruptured AAAs.
Methods: A review of the literature from 1980 to present was performed using the MeSH heading of "Ruptured Abdominal Aortic Aneurysm."
Results: 1547 articles were identified. After content
analysis and the application of inclusion and exclusion
criteria, 28 articles were identified regarding the presenting symptoms of abdominal pain, back pain, or syncope (n = 3600), 30 articles were identified regarding
hypotension (n = 3419), and 13 articles were identified
regarding the presence of a pulsatile mass (n = 1926).
58% (1669 of 2857, 95% CI 0.46 to 0.71) of patients presented with abdominal pain, 47% had back pain (1090
of 2299, 95% CI 0.38 to 0.63), and 26% had syncope
(476 of 1797, 95% CI 0.20 to 0.37). 52% were initially
hypotensive with a systolic blood pressure less than 80–
100 mmHg (1771 of 3419, 95% CI 0.43 to 0.55), and 66%
had a palpable abdominal pulsatile mass (1262 of 1926,
95% CI 0.46 to 0.79).
Conclusion: The diagnosis of ruptured AAAs can be
elusive and requires a high index of suspicion. The classic presenting signs and symptoms may not always be
present. If a ruptured AAA is suspected, further
diagnostic imaging is mandatory for confirmation of
diagnosis.
95
Normal Initial Blood Sugar Level and
History of Diabetes Might Reduce
In-hospital Mortality of Septic Patients
Visiting the Emergency Department
Hsiao-Yun Chao1, Sheng-Che Lin1, Chun-Kuei
Chen1, Peng-Hui Liu1, Jih-Chang Chen1, Yi-Lin
Chan1, Kuan-Fu Chen2
1Chang-Gung Memorial Hospital, Taoyuan County, Taiwan; 2Chang-Gung Memorial Hospital, Keelung City, Taiwan
Background: The association between blood glucose
level and mortality in critically ill patients is highly
debated. Several studies have investigated the association between history of diabetes, blood sugar level, and
mortality of septic patients; however, no consistent
conclusion could be drawn so far.
Objectives: To investigate the association between diabetes and initial glucose level and in-hospital mortality
in patients with suspected sepsis from the ED.
Methods: We conducted a retrospective cohort study
that consisted of all adult septic patients who visited
the ED at a tertiary hospital during the year 2010 with
two sets of blood cultures ordered by physicians. Basic
demographics, ED vital signs, symptoms and signs,
underlying illnesses, laboratory findings, microbiological results, and discharge status were collected. Logistic
regressions were used to evaluate the association
between risk factors, initial blood sugar level, and history of diabetes and mortality, as well as the effect
modification between initial blood sugar level and
history of diabetes.
Results: A total of 4997 patients with available blood
sugar levels were included, of whom 48% had diabetes,
46% were older than 65 years of age, and 56% were
male. The mortality was 6% (95% CI 5.3–6.7%). Patients
with a history of diabetes tended to be older, female,
and more likely to have chronic kidney disease, lower
sepsis severity (MEDS score), and positive blood culture
test results (all p < 0.05). Patients with a history of diabetes tended to have lower in-hospital mortality after
ED visits with sepsis, controlling for initial blood sugar
level (aOR 0.72, 95% CI 0.56–0.92, p = 0.01). Initial normal blood sugar seemed to be beneficial compared to
lower blood sugar level for in-hospital mortality,
controlling for history of diabetes, sex, severity of sepsis,
and age (aOR 0.61, 95% CI 0.44–0.84, p = 0.002). The
effect modification of diabetes on blood sugar level and
mortality, however, was found to be not statistically
significant (p = 0.09).
Conclusion: Normal initial blood sugar level in ED and
history of diabetes might be protective against mortality in
septic patients who visited the ED. Further investigation is warranted to determine the mechanism for these
effects.
96
Sedation and Paralytic Use During
Hypothermia After Cardiac Arrest
William A. Knight1, Shaun Keegan2,
Opeolu Adeoye1, Jordan Bonomo1, Kimberly
Hart1, Lori Shutter1, Christopher Lindsell1
1University of Cincinnati, Cincinnati, OH; 2UC Health - University Hospital, Cincinnati, OH
Background: Therapeutic hypothermia improves outcomes in comatose survivors of cardiac arrest, yet
fewer than half of successfully resuscitated patients survive to hospital discharge. Deep sedation and/or
paralytics may be used to avoid shivering and maintain
target temperature, but how often this occurs is
unknown.
Objectives: We aim to determine the frequency of deep
sedation and paralytic usage in patients undergoing
hypothermia.
Methods: This IRB-approved retrospective chart review included all patients treated with therapeutic
hypothermia after cardiac arrest during 2010 at an
urban, academic teaching hospital. Every patient undergoing therapeutic hypothermia is treated by neurocritical care specialists. Patients were identified by review
of neurocritical care consultation logs. Clinical data
were dually abstracted by trained clinical study assistants using a standardized data dictionary and case
report form. Medications reviewed during hypothermia
were midazolam, lorazepam, propofol, fentanyl, cisatracurium, and vecuronium.
Results: There were 33 patients in the cohort. Median
age was 57 (range 28–86 years), 67% were white, 55%
were male, and 49% had a history of coronary artery
disease. Seizures were documented by continuous
EEG in 11/33 (33%), and 20/33 (61%) died during hospitalization. Most, 30/33 (91%), received fentanyl, 21/33
(64%) received benzodiazepine pharmacotherapy, and
23/33 (70%) received propofol. Paralytics were administered to 23/33 (70%) patients, 14/33 (42%) with cisatracurium and 9/33 (27%) with vecuronium. Of note,
one patient required pentobarbital for seizure
management.
Conclusion: Sedation and neuromuscular blockade are
common during management of patients undergoing
therapeutic hypothermia after cardiac arrest. Patients
in this cohort often received analgesia with fentanyl,
and sedation with a benzodiazepine or propofol. Given
the frequent use of sedatives and paralytics in survivors of cardiac arrest undergoing hypothermia, future
studies should investigate the potential effect of these
drugs on prognostication and survival after cardiac
arrest.
97
The Implementation of Therapeutic
Hypothermia in the Emergency Department:
A Multi-Institution Case Review
Sara W. Johnson1, Daniel Joseph1, Dina Seif1,
Melissa Joseph1, Meena Zareh1, Christine
Kulstad2, Mike Nelson3, David Barnes4,
Christine Riguzzi5, Adrian Elliot6, Eric Kochert7,
David Slattery8, Sean O. Henderson1
1Keck School of Medicine of the University of Southern California, Los Angeles, CA; 2Advocate Christ Medical Center, Oak Lawn, IL; 3Cook County Hospital, Chicago, IL; 4University of California Davis, Davis, CA; 5Highland General Hospital, Oakland, CA; 6Shands Jacksonville Medical Center, Jacksonville, FL; 7York Hospital, York, ME; 8University of Nevada School of Medicine, Las Vegas, NV
Background: The use of therapeutic hypothermia (TH)
is a burgeoning treatment modality for post-cardiac
arrest patients.
Objectives: We performed a retrospective chart review
of patients who underwent post cardiac arrest TH at
eight different institutions across the United States. Our
objective was to assess how TH is currently being
implemented in emergency departments and assess the
feasibility of conducting more extensive TH research
using multi-institution retrospective data.
Methods: A total of 94 charts with dates from 2008–
2011 were sent for review by participating institutions
of the Peri-Resuscitation Consortium. Of those
reviewed, eight charts were excluded for missing data.
Two independent reviewers performed the review and
the results were subsequently compared and discrepancies resolved by a third reviewer. We assessed patient
demographics, initial presenting rhythm, time until TH
initiation, duration of TH, cooling methods and temperature reached, survival to hospital discharge, and neurological status on discharge.
Results: The majority of cases of TH had initial cardiac
rhythms of asystole or pulseless electrical activity
(55.2%), followed by ventricular tachycardia or fibrillation (34.5%), and in 10.3% the inciting cardiac rhythm
was unknown. Time to initiation of TH ranged from
0–783 minutes with a mean time of 99 min (SD 132.5).
Length of TH ranged from 25–2171 minutes with a
mean time of 1191 minutes (SD 536). Average minimum
temperature achieved was 32.5°C, with a range from 27.6–36.7°C (SD 1.5°C). Of the charts reviewed, 29
(33.3%) of the patients survived to hospital discharge
and 19 (21.8%) were discharged relatively neurologically intact.
Conclusion: Research surrounding cardiac arrest has
always been difficult given the time and location span
from pre-hospital care to emergency department to
intensive care unit. Also, as witnessed cardiac arrest
events are relatively rare with poor survival outcomes,
very large sample sizes are needed to make any meaningful conclusions about TH. Our varied and inconsistent results show that a multi-center retrospective
review is also unlikely to provide useful information.
A prospective multi-center trial with a uniform TH
protocol is needed if we are ever to make any evidence-based conclusions on the utility of TH for post-cardiac arrest patients.
98
Serum Lactate as a Screening Tool and
Predictor of Outcome in Pediatric Patients
Presenting to the Emergency Department
with Suspected Infection
Myto Duong, Loren Reed, Jennifer Carroll,
Antonio Cummings, Stephen Markwell, Jarrod
Wall
Southern Illinois University, Springfield, IL
Background: No single reliable sepsis biomarker has
been identified for risk stratification and prognostication in pediatric patients presenting to the ED. A biomarker allowing for early diagnosis would facilitate
aggressive management and improve outcomes. Serum
lactate (LA) is an inexpensive and readily available measure of tissue hypoxia that is predictive of mortality in
adults with sepsis.
Objectives: To determine if ED LA correlates with sepsis, admission, and outcome in pediatric patients with
suspected infection.
Hypothesis: LA is a useful predictor of pediatric sepsis,
admission, and outcome in the ED.
Methods: This retrospective study was performed in a
community hospital ED in which a sepsis protocol
involving concurrent LA draw with every blood culture
obtained in the ED was initiated in 2009–2010. A total
of 289 pediatric (<18 years old) patients with suspected
infection had LA obtained in the ED. Pearson correlation coefficients were used to determine the relationship between LA and variables such as vital signs,
basic labs, admission, length of stay (LOS), and 3-day
return rate. Patients were dichotomized into those with
LA >3 or ≤3. T-tests were used to compare the
groups.
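The Methods name two procedures: Pearson correlations of LA against each variable, and t-tests across the dichotomized lactate groups. A minimal SciPy sketch on synthetic stand-in arrays (placeholder values, not study data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)                 # synthetic stand-in data
lactate = rng.gamma(2.0, 1.0, 289)
pulse = 120 + 8 * lactate + rng.normal(0, 20, 289)

r, p = stats.pearsonr(lactate, pulse)          # LA vs. one variable
high, low = pulse[lactate > 3], pulse[lactate <= 3]
t, p_t = stats.ttest_ind(high, low)            # dichotomized-group comparison
print(f"r = {r:.2f} (p = {p:.4g}); t = {t:.2f} (p = {p_t:.4g})")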
Results: Mean LA was 2.04, SD = 1.45. Mean age was
4.5 years old, SD = 5.20. A statistically significant positive correlation was found between LA and pulse,
respiratory rate (RR), WBC, platelets, and LOS, while
a significant negative correlation was seen with temperature and HCO3-. When two subjects were dropped
as possible outliers with LA >10, it resulted in non-significant temperature correlation, but a significant negative correlation with age and BUN was revealed.
Patients in the higher LA group were more likely to
be admitted (p = 0.0001) and have longer LOS. Of the
discharged patients, there was no difference in mean
LA level between those who returned (n = 25, mean
LA of 1.88, SD = 0.88) and those who did not (n = 154,
mean LA of 1.88, SD = 1.35), p = 0.99. Furthermore,
mean LA levels for those with sepsis (n = 138, mean
LA of 2.18, SD = 1.75) did not differ from those without sepsis (n = 147, mean LA of 1.9, SD = 1.08),
p = 0.11.
Conclusion: Higher LA in pediatric patients presenting
to the ED with suspected infection correlated with
increased pulse, RR, WBC, platelets, and decreased
BUN, HCO3-, and age. LA may be predictive of hospitalization, but not of 3-day return rates or pediatric
sepsis screening in the ED.
Table 1 - Abstract 98: Lactate correlation to various variables

Variables            Correlation factor (r)   p-value   Sample size
Age                  −0.09                    0.109     288
Temperature          −0.13                    0.022     287
Pulse                 0.18                    0.003     283
Respiratory rate      0.25                    0.0001    272
WBC                   0.17                    0.0045    286
Platelet              0.22                    0.0004    269
HCO3−                −0.18                    0.004     245
BUN                  −0.11                    0.78      247
LOS                   0.33                    0.0001    288
Table 2 - Abstract 98: Dichotomized lactate levels and correlation to various variables

Variables          Mean LA≤3 (n = 249)   SD      Mean LA>3 (n = 40)   SD      p-value
Age (years)        4.7                   5.1     3.3                  5.2     0.10
Temperature        38                    1.4     37.5                 1.9     0.13
Pulse              135.6                 31.3    149.5                35      0.01
WBC                11.4                  5.8     14.2                 7.6     0.03
Platelets (K)      290.6                 119.1   376.6                165.2   0.006
HCO3−              23.8                  3.8     21.6                 3.2     0.001
BUN                10.6                  6.1     9.9                  4.0     0.34
LOS                0.83                  1.72    2.5                  4.4     0.02
Respiratory rate   29                    9.7     35.9                 16.6    0.009
Table 3 - Abstract 98: Dichotomized lactate levels and correlation to sepsis, admission, and return ED visits

Categories           n (LA≤3, n = 249)   Percent   n (LA>3, n = 40)   Percent   p-value
Sepsis               119                 48%       20                 51%       0.71
Admission            80                  33%       28                 75%       0.0001
Return to ED visit   28                  11%       4                  10%       1.0

99
Failure To Document The Presence Of
Sepsis Decreases Adherence To Process Of
Care Measures In Emergency Department
Patients
Stephanie Dreher, James O’Brien,
Jeffrey Caterino
The Ohio State University, Columbus, OH
Background: As early identification and treatment of
sepsis improves outcomes, physician recognition of
sepsis early in its course is essential. For emergency
department patients, one suggested measure of early
recognition is documentation of the presence of sepsis.
Objectives: To determine (1) the frequency with which
admitted ED patients who receive antibiotics meet criteria for sepsis; (2) the frequency with which early documentation of sepsis occurs in ED patients; and (3) the
association between documentation of sepsis and
provided care.
Methods: We conducted a retrospective cohort study
of ED patients who received antibiotics within 24 hours
of admission. We classified patients as "septic" based on current sepsis criteria. "Documentation of sepsis" was considered present if the word "sepsis" or "septic" was documented in the ED notes or admission history
and physical. We determined the effect documentation
of sepsis had on process outcomes: ordering of blood
cultures and lactate in the ED, antibiotic ordering in
<2 hours, antibiotic administration in <2 hours, and
fluid administration in <2 hours. We derived a non-parsimonious propensity model to adjust for recognition of
sepsis using patient vital signs, age, sex, and WBC
count. We then constructed a propensity-adjusted multivariable logistic regression analysis for each outcome.
The model included the propensity score, triage level,
and recognition of sepsis.
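As a sketch of the propensity-adjusted analysis described above; the file and column names are invented for illustration, and this is not the authors' code:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ed_sepsis_cohort.csv")  # hypothetical extract

# 1) Propensity model: probability that sepsis was documented, from
#    vital signs, age, sex, and WBC count (as in the Methods).
ps = smf.logit("documented ~ age + sex + heart_rate + resp_rate + sbp + temp + wbc",
               data=df).fit()
df["propensity"] = ps.predict(df)

# 2) Outcome model for one process measure, adjusted for the
#    propensity score and triage level.
outcome = smf.logit("blood_cultures ~ documented + propensity + C(triage_level)",
                    data=df).fit()
print(outcome.summary())  # exp(coef) on `documented` is the adjusted OR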
Results: Out of 500 admitted patients receiving antibiotics, 27.4% met criteria for sepsis and were included in
the study. Of these, 25.5% had sepsis documented. In
univariate analysis, documentation of sepsis was associated with ordering of blood cultures (97% vs. 71%,
p = 0.001), lactate (66% vs. 29%, p = 0.001), and antibiotics within 2 hours (77% vs. 49%, p = 0.011). In the multivariate models, recognition of sepsis was associated
with ordering of blood cultures (OR 8.52, 95% CI 1.08–
67.49)(p = 0.042) and ordering lactate (OR 2.60, 95% CI
1.09–6.22)(p = 0.031). There was a non-significant trend
towards ordering antibiotics within 2 hours (OR 2.22,
95% CI 0.89–5.56)(p = 0.088). There was no association
with 2-hour antibiotic or fluid administration.
Conclusion: ED and admitting physicians failed to document the presence of sepsis 74.5% of the time. Failure
to document sepsis is associated with significantly
lower rates of several recommended processes of care.
100
Sonogram Measured Inferior Vena Cava
Diameter Response to Intravenous Fluid
Bolus
Christopher T. Vogt1, Brandon Smallwood2,
Michael Sanders2, Cathy Rehmeyer1
1University of Pikeville Kentucky College of Osteopathic Medicine, Pikeville, KY; 2Pikeville Medical Center, Pikeville, KY
Background: Prior research data have shown sonogram measured inferior vena cava (IVC) diameter to be
a reliable non-invasive measure of a patient’s fluid
status compared to invasive techniques.
Objectives: The goal of this study was to determine the
normal maximum/exhaled IVC diameter (MIVCD), the
possible relationship to a patient’s age, sex, and BMI,
and the change in the MIVCD after a one liter intravenous fluid bolus (IVF) of normal saline.
Methods: This prospective observational study was
performed in a rural ED and on a college campus.
Two cohorts were evaluated: a convenience sample of
any consenting adult (control group), and any patients
receiving an IVF bolus in the ED (IVF group). Portable
bedside sonogram was used to measure IVC diameter
one centimeter distal to the junction of the hepatic
veins with a curved abdominal probe during exhalation using a longitudinal view. All subjects in the IVF
group received a scan before and after receiving a
one liter fluid bolus of normal saline as prescribed by
the emergency medicine provider present. Subjects'
sex, race, age, weight, height, time of last meal, IV
gauge, bolus time, as well as pre-IVF and post-IVF
heart rate and blood pressure were collected. Subjects
were stratified into subgroups within heart rate, blood
pressure, and BMI classifications for data analysis. The
t-test for independent means was the statistical test
used for statistical significance. The level of risk was
set at 0.05 and the power of this pilot study was
approximately 20%.
Results: A total of 213 adults (94 m, 119 f) consented to
the study; 153 (64 m, 89 f) in the control group, 51
(22 m, 29 f) in the IVF group. Subjects' ages ranged
from 18 to 78 years of age with a mean age of 34. There
were 208 Caucasians, 3 African Americans, and 2 Asian
subjects. There was no statistically significant relationship
between MIVCD and the subject’s sex, age, race, or
BMI. The mean MIVCD for our control group was
found to be 16.3 mm (11 mm–21.6 mm 95% CI). The
mean change in MIVCD after the one liter IVF bolus
was 3 mm (−1.3 mm to 7.3 mm 95% CI).
Conclusion: These data support previous researchers’
findings that demonstrated the standard MIVCD in an
adult is not correlated to the subject’s sex, age, race, or
BMI. These data should be used as a guide for clinicians
treating patients requiring intravenous fluid resuscitation where portable bedside ultrasound is available.
101
Ultrasound-Guided Vascular Access On
A Phantom: A Training Model For Medical
Student Education
Lydia M. Sahlani, David P. Bahner,
Eric J. Adkins, Diane Gorgas, Clint Allred
The Ohio State University Medical Center,
Columbus, OH
Background: Ensuring patient safety and prevention of
medical errors has become an integral part of medical
education. Vascular access is an area in which medical
errors can result in serious complications. Medical student training has been increasing early exposure to the
clinical aspects of medicine in hopes of reducing medical
errors.
Objectives: We reviewed a cohort of second year medical students (MS2s) to assess their proficiency with
ultrasound-guided vascular access.
Methods: This study was an observational cohort study
of MS2s during their Introduction to Clinical Medicine
(ICM) program. Students were required to review an
online training module from EMSONO.com about ultrasound-guided vascular access. Students completed a
quiz testing material presented in the online module.
Students participated in a didactic session on ultrasound-guided vascular access using a blue phantom
block gel model. Students were divided into groups of
4–5 participants and allowed to practice the skills on
the vascular access model while being proctored by an
experienced provider. They had no time limitations during their practice session. After the practice session the
students were graded by the same proctor using a standardized scoring sheet. The students were evaluated on
their ability to visualize the simulated vessel in different
planes, perform vascular cannulation in both the short
and long axis, the number of needle sticks attempted,
and successful cannulation.
Results: A total of 134 MS2s were evaluated. Twenty-seven students were excluded due to incomplete data.
Of the 107 students with complete data, 100% (107/107)
could visualize the vessel in long axis, short axis, and
visualize the needle in the near field. 103/107 (96.26%)
could visualize the needle entering in the long axis
while 101/107 (94.39%) could visualize the needle entering the short axis. Students were able to cannulate the
vessel in two sticks or less for 99/107 (92.52%) in the
long axis and 100/107 (93.46%) in the short axis.
Conclusion: A structured ultrasound curriculum can
help MS2s learn the psychomotor skills necessary to
cannulate a vessel on a phantom using ultrasound guidance. Future studies could be developed to assess
whether this skill translates into clinical practice during
clerkship experiences.
102
The Tongue Blade Test: Still Useful As
A Screening Tool For Mandibular
Fractures?
Nicholas Caputo1, Andaleeb Raja1,
Christopher P. Shields1, Nathan Menke2
1Lincoln Medical and Mental Health Center, Bronx, NY; 2University of Pittsburgh, Pittsburgh, PA
Background: Mandibular fractures are one of the most
frequently seen injuries in the trauma setting. In terms
of facial trauma, mandibular fractures account for 40–
62% of all facial bone fractures. Prior studies have demonstrated that the use of a tongue blade to screen these
patients to determine whether a mandibular fracture is
present may be as sensitive as x-ray. One study showed
the sensitivity and specificity of the test to be 95.7%
and 63.5%, respectively. In the last ten years, high-resolution computed tomography (HCT) has replaced panoramic tomography (PT) as the gold standard for
imaging of patients with suspected mandibular fractures. This study determines if the tongue blade test
(TBT) remains as sensitive a screening tool when compared to the new gold standard of CT.
Objectives: The purpose of the study was to determine
the sensitivity and specificity of the TBT as compared to
the new gold standard of radiologic imaging, HCT. The
question being asked: is the TBT still useful as a screening tool for patients with suspected mandibular fractures when compared to the new gold standard of
HCT?
Methods: Design: Prospective cohort study. Setting:
An urban tertiary care Level I trauma center. Subjects:
This study took place from 8/1/10 to 8/31/11 and enrolled persons presenting with facial trauma.
Intervention: A TBT was performed by the resident
physician and confirmed by the supervising attending
physician. CT facial bones were then obtained for the
ultimate diagnosis. Inter-rater reliability (kappa) was
calculated, along with sensitivity, specificity, accuracy,
PPV, NPV, positive likelihood ratio (LR+), and negative likelihood ratio (LR−) based on the 2 × 2 contingency table generated.
Results: Over the study period 85 patients were
enrolled. Inter-rater reliability was kappa = 0.93 (SE
0.11). The table demonstrates the outcomes of both
the TBT and CT facial bones for mandibular fracture.
The following parameters were then calculated based
on the contingency table: sensitivity 0.97 (CI 0.81–0.99),
specificity 0.72 (CI 0.58–0.83), PPV 0.67 (CI 0.52–0.78),
NPV 0.97 (CI 0.87–0.99), accuracy 0.81, LR(+) 3.48 (CI
2.26–5.38), LR (-) 0.04 (CI 0.01–0.31).
Conclusion: The TBT is still a useful screening tool to
rule out mandibular fractures in patients with facial
trauma as compared to the current gold standard of
HCT.
Table - Abstract 102: Two-by-Two Contingency Table

                      Mandibular Fractures (CT Facial Bones)
Tongue Blade Test     +          −          Total
+                     30         15         45
−                     1          39         40
Total                 31         54         85
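The reported screening statistics follow directly from this table; a worked check (the function name is ours, for illustration):

def screening_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values, accuracy, and likelihood ratios."""
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    return {"sensitivity": sens,                          # 30/31 = 0.97
            "specificity": spec,                          # 39/54 = 0.72
            "PPV": tp / (tp + fp),                        # 30/45 = 0.67
            "NPV": tn / (tn + fn),                        # 39/40 = 0.97
            "accuracy": (tp + tn) / (tp + fp + fn + tn),  # 69/85 = 0.81
            "LR+": sens / (1 - spec),                     # 3.48
            "LR-": (1 - sens) / spec}                     # 0.04

print(screening_stats(tp=30, fp=15, fn=1, tn=39))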
103
Bedside Ultrasound Evaluation Of Lung
Infiltrates
Stephanie G. Cohen1, Adam B. Sivitz2
1Emory University Medical School, Atlanta, GA; 2Newark Beth Israel Medical Center, Newark, NJ
Background: CXR is often obtained in pediatric patients to look for infiltrates that may require antibiotics. A radiographic finding of consolidation may be difficult to differentiate from atelectasis, leading the clinician to prescribe antibiotics unnecessarily. This is
especially true for vague haziness that is commonly
seen in the area of the right middle lobe (RML) of the
lung.
Objectives: Determine the utility of bedside US by
pediatric emergency physicians (EP) to differentiate
pneumonia from other causes of consolidation.
Methods: This is a prospective observational study of
patients aged 0–18 years presenting to an urban pediatric ED. Patients were eligible if CXR revealed a RML
infiltrate noted by the pediatric EP or radiologist. Ultrasonography-trained pediatric emergency physicians
performed focused lung US examination of the lower
right parasternal area to specifically investigate the
right middle lobe. Ultrasound findings were classified
as positive if consolidation consistent with pneumonia
was present, and negative when sonographic features
consistent with atelectasis or lung sliding were present.
Ultrasound images used for review were compared to
the final pediatric radiologist read which was considered the gold standard. Images were reviewed by
blinded study personnel unaware of CXR findings or
clinical condition of the patient.
Results: Fifty-five patients were enrolled in the study.
The median age was 20 months (range 3–219 months),
56% of patients were male, and 83% had fever by history or presentation to the pediatric ED. EP read of the CXR was positive for pneumonia in 80% of patients; radiology read of the CXR was positive for pneumonia in 45% of study patients. In comparison to radiology read, the sensitivity of an EP read of pneumonia was 0.88 (95% CI 0.75–0.97) with a specificity of 0.27 (95% CI 0.15–0.34). The kappa statistic was 0.1, indicating poor agreement. Bedside US compared to radiology read had a sensitivity of 0.8 (95% CI 0.66–0.87) and a specificity of 0.93 (95% CI 0.81–0.99), with a kappa statistic of 0.74, indicating moderate agreement.
Conclusion: Bedside US showed high correlation with radiology interpretation and may be a useful adjunct to radiographic evaluation in order to differentiate pneumonic infiltrates from other causes of consolidation.

104
Inter-rater Reliability of Emergency
Physician Ultrasonography For Diagnosing
Lower Extremity Deep Venous and Great
Saphenous Vein Thromboses Compared To
Ultrasonographic Studies Performed By
Radiology
Mary R. Mulcare, Tomas Borda, Debbie Hana
Yi, Dana L. Sacco, Jennifer F. Kherani,
David C. Riley
New York Presbyterian Hospital, New York, NY
Background: Several studies have compared the accuracy of lower extremity ultrasonography performed by emergency physicians (EPs) with an institution-specific criterion standard; however, none of
these studies have included evaluation of the proximal
great saphenous vein. A clot in the proximal great saphenous vein is usually an indication for anticoagulation.
Objectives: The goal of this study is to assess the interrater reliability of EP-performed ultrasound examinations for lower extremity venous thromboembolism,
including assessment of the common femoral vein
(CFV), femoral vein of the thigh (FV), great saphenous
vein (GSV), and popliteal vein (PV), compared to the
clinical standard of ultrasound technologist performed,
radiologist read examinations. We aim to demonstrate
whether EPs can reliably identify venous thromboembolism in the studied veins.
Methods: This is a prospective, blinded, convenience-sample study in an urban teaching hospital emergency department (ED), NewYork-Presbyterian Hospital/
Columbia University Medical Center and Allen Hospital,
with an adult ED volume of approximately 120,000
patients/year. Patients with a clinical suspicion of a lower
extremity venous thrombosis were enrolled. Each
patient underwent an ED bedside ultrasonography exam
for DVT in the four identified venous locations by an EP
blinded to an official sonography exam completed by the
department of radiology on the same patient. The data
were analyzed using the two-rater unweighted kappa
statistic to compare the two groups conducting the
examination. To detect a kappa of 0.9, with a power of
0.8 and an acceptable type I error of 0.05, we need a sample size of 433 patients. This is an interim analysis.
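For readers unfamiliar with the statistic named above, a minimal sketch of a two-rater unweighted kappa computation; the paired readings are hypothetical, not study data:

```python
from collections import Counter

# Hypothetical paired readings for one vein segment: 1 = thrombus, 0 = none
ep  = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # EP bedside ultrasound
rad = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]   # radiology-read examination
n = len(ep)

p_obs = sum(a == b for a, b in zip(ep, rad)) / n        # observed agreement
ep_c, rad_c = Counter(ep), Counter(rad)
# chance agreement from each rater's marginal proportions
p_exp = sum((ep_c[k] / n) * (rad_c[k] / n) for k in (0, 1))
kappa = (p_obs - p_exp) / (1 - p_exp)
print(f"raw agreement={p_obs:.2f}, kappa={kappa:.2f}")  # 0.90, 0.74
```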
Results: There were a total of 102 enrollments included
in this analysis, with 21 enrollments excluded for
incomplete data. The observed overall agreement and
the inter-rater reliability (kappa) were: CFV, 95.1%,
kappa of 0.64 (95% CI 0.35–0.93), FV, 94.9%, kappa of
0.59 (95% CI 0.26–0.92), PV, 90.9%, kappa of 0.43 (95% CI
0.12–0.73), and GSV, 91.1%, kappa of 0.42 (95% CI 0.11–
0.73).
Conclusion: EPs can reliably evaluate for lower
extremity venous thromboembolism, with substantial
inter-rater agreement for the CFV and moderate interrater agreement for the FV, PV, and GSV.
105
A Prospective Evaluation of Emergency
Department Bedside Ultrasonography for
the Detection of Acute Pediatric
Appendicitis
David J. McLario1, Richard L. Byyny2,
Michael Liao2, John L. Kendall2
1Denver Health Medical Center, Denver, CO; 2Denver Health Medical Center, Denver, CO
Background: Appendicitis is the most common surgical emergency occurring in children. The diagnosis of
pediatric appendicitis is often difficult and computerized tomography (CT) scanning is utilized frequently.
CT, although accurate, is expensive, time-consuming,
and exposes children to ionizing radiation. Radiologists
utilize ultrasound for the diagnosis of appendicitis, but
it may be less accurate than CT, and may not incorporate emergency physician (EP) clinical impression
regarding degree of risk.
Objectives: The current study compared EP clinical
diagnosis of pediatric appendicitis pre- and post-bedside ultrasonography (BUS).
Methods: Children 3–17 years of age were enrolled if
their clinical attending physician planned to obtain a
consultative ultrasound, CT scan, or surgical consult
specific for appendicitis. Most children in the study
received narcotic analgesia to facilitate BUS. Subjects
were initially graded for likelihood of appendicitis
based on research physician-obtained history and physical using a Visual Analogue Scale (VAS). Immediately
subsequent to initial grading, research physicians performed a BUS and recorded a second VAS impression
of appendicitis likelihood. Two outcome measures were
combined as the gold standard for statistical analysis.
The post-operative pathology report served as the gold
standard for subjects who underwent appendectomy,
while post 2-week telephone follow-up was used for
subjects who did not undergo surgery. Various specific
ultrasound measures used for the diagnosis of appendicitis were assessed as well.
Results: 29/56 subjects had pathology-proven appendicitis. One subject was pathology-negative post-appendectomy. Of the 26 subjects who did not undergo
surgery, none had developed appendicitis at the post
2-week telephone follow-up. Pre-BUS sensitivity was
48% (29–68%) while post-BUS sensitivity was 79% (60–
92%). Both pre- and post-BUS specificity was 96% (81–
100%). Pre-BUS LR+ was 13 (2–93), while post-BUS LR+
was 21 (3–148). Pre- and post-BUS LR- were 0.5 and
0.2, respectively. BUS changed the diagnosis for 20% of
subjects (9–32%).
Conclusion: BUS improved the sensitivity of evaluation
for pediatric appendicitis. BUS also confirmed EP
clinical suspicion of appendicitis, with specificity
comparable to historical norms for CT evaluation.
106
Sonographic Measurement of Glenoid to
Humeral Head Distance in Normal and
Dislocated Shoulders in the Emergency
Department
Brent Becker, Alan Chiem, Art Youssefian,
Lynne Le, Michael Peyton, Negean
Vandordaklou, Graciela Maldonaldo, Chris Fox
UC Irvine, Orange, CA
Background: There are few data on the normal
distance between the glenoid rim and the posterior
aspect of the humeral head in normal and dislocated
shoulders. While shoulder x-rays are commonly used to
detect shoulder dislocations, they may be inadequate,
exacerbate pain in the acquisition of some views, and
lead to delay in treatment, compared to bedside ultrasound evaluation.
Objectives: Our objective was to compare the glenoid
rim to humeral head distance in normal shoulders and
in anteriorly dislocated shoulders. This is the first study
proposing to set normal and abnormal limits.
Methods: Subjects were enrolled in this prospective observational study if they had a chief complaint of
shoulder pain or injury, and received a shoulder ultrasound as well as a shoulder x-ray. The sonographers
were undergraduate students given ten hours of training to perform the shoulder ultrasound. They were
blinded to the x-ray interpretation, which was used as
the gold standard.
We used a posterior-lateral approach, capturing an
image with the glenoid rim, the humeral head, as well
as the infraspinatus muscle. Two parallel lines were
applied to the most posterior aspect of the humeral
head and the most posterior aspect of the glenoid rim.
A line perpendicular to these lines was applied, and the
distance measured. In anterior dislocations, a negative
measurement was used to denote the fact that the
glenoid rim is now posterior to the most posterior
aspect of the humeral head. Descriptive analysis was
applied to estimate the mean and 25th to 75th interquartile range of normal and anteriorly dislocated
shoulders.
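As a sketch of the descriptive analysis just described, with hypothetical distances (not study measurements) and the stated sign convention (negative values denote a glenoid rim posterior to the humeral head):

```python
import numpy as np

normal     = np.array([8.0, 6.7, 11.9, 9.1, 7.8])    # mm, hypothetical
dislocated = np.array([-10.0, -11.0, -12.0, -11.0])  # mm, hypothetical

for label, d in (("normal", normal), ("anterior dislocation", dislocated)):
    q25, q75 = np.percentile(d, [25, 75])
    print(f"{label}: mean = {d.mean():.1f} mm, "
          f"25th-75th IQR = {q25:.1f} to {q75:.1f} mm")
```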
Results: Eighty subjects were enrolled in this study.
There were six shoulder dislocations; however, only four were anterior dislocations. The average distance between the posterior glenoid rim and the posterior humeral head in normal shoulders was 8.7 mm, with a 25th to 75th interquartile range of 6.7 mm to 11.9 mm. The distance in our four cases of anterior dislocation was -11 mm, with a 25th to 75th interquartile range of -10 mm to -12 mm.
Conclusion: The distance between the posterior humeral head and the posterior glenoid rim may be 7 mm to 12 mm in patients presenting to the ED with shoulder pain but no dislocation. In contrast, this distance in anterior dislocations was greater than -10 mm. Shoulder ultrasound may be a useful adjunct to x-ray for diagnosing anterior shoulder dislocations.

107

Confirmation Of Intraosseous Needle Placement With Color Doppler Ultrasound In An Adult Fresh Cadaver Model
Kenton L. Anderson, Catherine Jacob, Joseph D. Novak, Daniel G. Conway, Jason D. Heiner
San Antonio Military Medical Center, San Antonio, TX

Background: Intraosseous (IO) needle placement is a valuable method of obtaining vascular access when an IV cannot promptly be established during resuscitation. Traditional methods of confirming IO placement include aspiration of blood/marrow or the free flow of infused crystalloid. Color Doppler ultrasound (CD-US) has recently been proposed as a method to confirm IO needle placement.
Objectives: We hypothesized that CD-US would have better test characteristics than traditional methods of confirming IO placement.
Methods: DESIGN: A prospective observational study comparing confirmation methods for IO needles randomly placed either intraosseously or in adjacent soft tissue (ST) of the proximal tibia and humeral heads of five adult fresh unembalmed cadavers. Needle placement was verified by cutdown. OBSERVATIONS: Two emergency physicians (EPs) blinded to needle placement attempted to confirm IO placement either by successful aspiration or by crystalloid flow greater than one drop/second. Another blinded EP recorded two 6-second video clips of the intraosseous CD-US signal: one during normal crystalloid flow and another during a 10 cc flush. Twenty blinded EPs later reviewed a randomized file of all video clips and rated each as (+) IO flow or (-) no IO flow.
Results: Forty needles were placed and interrogated using the three confirmation methods. Eighty CD-US signal video clips were reviewed. The respective sensitivity and specificity of identifying IO placement by CD-US in the tibia were 89% (95CI 85–92) and 89% (95CI 85–92), in the humerus were 76% (95CI 71–80) and 80% (95CI 76–84), and combined were 82% (95CI 79–85) and 84% (95CI 82–87). There was no difference between the crystalloid flow and flush methods. The sensitivity and specificity of confirming placement with aspiration were 85% (95CI 61–96) and 95% (95CI 73–100). The sensitivity of confirming placement with crystalloid rate was 100% (95CI 80–100), but the specificity was only 10% (95CI 1.8–33).
Conclusion: In fresh cadavers, we found that visualization of intraosseous CD-US flow by EPs may be a reliable method of confirming IO placement. This method appears to have superior confirmation characteristics compared with crystalloid rate and is comparable to aspiration. Since CD-US can be performed rapidly at any time during resuscitation, it may have the most utility of the available confirmation methods.

108
Introducing Bedside Limited Compression
Ultrasound by Emergency Physicians into
the Diagnostic Algorithm for Patients with
Suspected DVT: A Prospective Cohort Trial
Rachel Poley, Joseph Newbigging,
Marco L.A. Sivilotti
Queen’s University, Kingston, ON, Canada
Background: Diagnosing deep venous thrombosis
(DVT) relies on clinical characteristics (modified Wells
score), serum D-dimer, and formal imaging, but can be
inefficient.
Objectives: To evaluate whether a novel diagnostic
approach that incorporates bedside limited compression ultrasound (LCU) could be used to improve diagnostic efficiency for DVT.
Methods: We performed a prospective cohort study of
ED patients with suspected DVT. We excluded patients
on anticoagulants, with a chronic DVT, leg cast or
amputation, or when the results of formal imaging
were already known. All patients were treated in the
usual fashion based on the protocol in use at our centre: treating physicians classified patients as ‘‘DVT unlikely’’ or ‘‘DVT likely’’ using the modified Wells score,
then obtained serum D-dimer (latex immunoassay) and/
or formal ultrasound imaging per protocol. Seventeen
physicians were trained and performed LCU in all subjects. DVT was considered ruled out in ‘‘DVT unlikely’’
patients if the LCU was negative, and in ‘‘DVT likely’’
patients if both the LCU and D-dimer were negative.
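Stated compactly, the rule-out logic described above can be sketched as follows; the function and argument names are hypothetical, not taken from the study protocol:

```python
from typing import Optional

def dvt_ruled_out(wells_category: str, lcu_positive: bool,
                  ddimer_positive: Optional[bool] = None) -> bool:
    """Return True if DVT is considered ruled out under the novel approach."""
    if wells_category == "DVT unlikely":
        # a negative bedside LCU alone rules out DVT
        return not lcu_positive
    if wells_category == "DVT likely":
        # both the bedside LCU and the D-dimer must be negative
        return (not lcu_positive) and (ddimer_positive is False)
    raise ValueError("wells_category must be 'DVT unlikely' or 'DVT likely'")

print(dvt_ruled_out("DVT unlikely", lcu_positive=False))                      # True
print(dvt_ruled_out("DVT likely", lcu_positive=False, ddimer_positive=True))  # False
```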
Results: We enrolled 227 patients (47% ‘‘DVT likely’’),
of whom 24 had DVT. The sensitivity and specificity of
the novel approach were 0.96 [95% CI 0.77, 1.00] and
0.66 [0.59, 0.72] respectively, compared with the current
protocol 1.00 [0.83, 1.00] and 0.35 [0.28, 0.42]. Overall,
the stand-alone sensitivity and specificity of LCU were
0.91 [0.70, 0.98] and 0.97 [0.92, 0.99]. Incorporating LCU
into the diagnostic approach would have reduced the
rate of formal imaging from 67% to 40%, the mean
time to diagnostic certainty by 5.0 hours, and eliminated 24 (11%) return visits of whom 10 were empirically anticoagulated. The interobserver disagreement
rate between the treating and scanning physicians for
the Wells score was 19% (kappa 0.62 [0.48, 0.77]),
including the one patient with DVT who would have
been missed on the index visit using the new approach.
Conclusion: Limited compression ultrasound holds
promise as one component of the diagnostic approach to
DVT, but should not be used as a stand-alone test due to
imperfect sensitivity. Tradeoffs in diagnostic efficiency
for the sake of perfect sensitivity remain a difficult issue
collectively in emergency medicine, but need to be scrutinized carefully in light of the costs of over-investigation.
109
Point of Care Focused Cardiac Ultrasound
for Pulmonary Embolism Short-Term
Adverse Outcomes
Jennifer M. Davis, Vishal Gupta, Rachel Liu,
Christopher Moore, Andrew Taylor
Yale University School of Medicine,
New Haven, CT
Background: Pulmonary embolism remains a potentially lethal disease, yet prior studies have demonstrated
that for normotensive patients or patients without signs
of right ventricular strain (RVS), adverse outcomes are
less than 1%. Recent prognostic scoring systems have sought to identify patients who may be treated as outpatients, but have not incorporated echocardiographic measures of RVS.
Objectives: To determine the sensitivity, specificity,
and likelihood ratios of RVS using Point-of-Care
Focused Cardiac Ultrasound (FOCUS) and hypotension
for pulmonary embolism short-term adverse outcomes,
and to compare them with the Pulmonary Embolism Severity Index (PESI) and Geneva predictor rule.
Methods: Retrospective record review of ED patients
between 1/2007–12/2010 who had both a diagnosis of
pulmonary embolism by ICD9 code and a FOCUS exam.
Adverse outcomes were defined as shock (SBP persistently less than 100 mmHg refractory to volume loading
and requiring vasopressors), respiratory failure requiring intubation, death, recurrent venous thromboembolism, transition to higher level of care, or major bleeding
within 7 days of admission. RVS on FOCUS was defined
as the presence of either RV greater than or equal to LV,
RV hypokinesis, or the presence of McConnell's sign.
Results: 1318 records were identified with a diagnosis
of pulmonary embolism of which 171 had a FOCUS
performed. Mean age was 61 ± 18 and 47% were male.
There were 27 adverse outcomes. The prevalence of
RVS on FOCUS was 23%. Likelihood ratios (95%CI) for
RVS, hypotension, RVS or hypotension, RVS + hypotension, and for neither RVS nor hypotension are
3.55(2.16–5.83), 2.08(1.15–3.77), 2.23(1.45–3.42), 4.33(1.43–
13), and 0.29(0.13–0.67) respectively. The table shows
test characteristics for the prognostic rules.
Conclusion: In this retrospective study, the presence of
RV strain on FOCUS significantly increases the likelihood of an adverse short term event from pulmonary
embolism and its combination with hypotension performs similarly to other prognostic rules.
Table - Abstract 109:

Prognostic Rule            Sensitivity (95%CI)   Specificity (95%CI)
RV strain or hypotension   0.81 (0.61–0.93)      0.69 (0.60–0.77)
PESI                       0.78 (0.57–0.91)      0.38 (0.30–0.46)
Geneva Predictor Rule      0.63 (0.42–0.80)      0.54 (0.45–0.63)

110
Indocyanine Green Dye Angiography
Accurately Predicts Jackson Zone Survival
in a Horizontal Burn Comb Model
Mitchell S. Fourman, Brett T. Phillips,
Laurie Crawford, Fubao Lin, Adam J. Singer,
Richard A. Clark
Stony Brook University Medical Center, Stony
Brook, NY
Background: Burns are expensive and debilitating injuries, compromising both the structural integrity and
vascular supply to skin. They exhibit a substantial
potential to deteriorate if left untreated. Jackson
defined three ‘‘zones’’ to a burn. While the innermost
coagulation zone and the outermost zone of hyperemia
display generally predictable healing outcomes, the
zone of stasis has been shown to be salvageable via
clinical intervention. It has therefore been the focus of
most acute therapies for burn injuries. While Laser
Doppler Imaging (LDI) - the current gold standard for
burn analysis - has been 96% effective at predicting the
need for second degree burn excision, its clinical translation is problematic, and there is little information
regarding its ability to analyze the salvage of the stasis
zone in acute injury. Laser Assisted Indocyanine Green
Dye Angiography (LAICGA) also shows potential to
predict such outcomes with greater clinical utility.
Objectives: To test the ability of LDI and LAICGA to
predict interspace (zone of stasis) survival in a horizontal
burn comb model.
Methods: A prospective animal experiment was performed using four pigs. Each pig had a set of six dorsal
burns created using a brass ‘‘comb’’ - creating four rectangular 10 × 20 mm full thickness burns separated by 5 × 20 mm interspaces. LAICGA and LDI scanning
took place at 1 hour, 24 hours, 48 hours, and 1 week
post burn using Novadaq SPY and Moor LDI respectively. Imaging was read by a blinded investigator, and
perfusion trends were compared with interspace viability and contraction. Burn outcomes were read clinically,
evaluated via histopathology, and interspace contraction was measured using Image J software.
Results: LAICGA data showed significant predictive
potential for interspace survival. It was 83.3% predictive at 24 hours post burn, 75% predictive 48 hours
post burn, and 100% predictive 7 days post burn using
a standardized perfusion threshold. LDI imaging failed
to predict outcome or contraction trends with any
degree of reliability. The pattern of perfusion also
appears to be correlated with the presence of significant interspace contraction at 28 days, with an 80%
adherence to a power trendline.
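A power trendline of the kind mentioned above is a fit of the form y = a * x^b, conventionally obtained by linear regression on log-transformed data. A minimal sketch with hypothetical values (not study measurements):

```python
import numpy as np

# Hypothetical perfusion values and 28-day contraction percentages
perfusion   = np.array([0.4, 0.6, 0.9, 1.3, 1.8])       # relative units
contraction = np.array([42.0, 30.0, 21.0, 16.0, 12.0])  # percent

# Fit y = a * x**b by least squares on the log-log scale
b, log_a = np.polyfit(np.log(perfusion), np.log(contraction), 1)
a = np.exp(log_a)
pred = a * perfusion ** b
ss_res = np.sum((contraction - pred) ** 2)
ss_tot = np.sum((contraction - contraction.mean()) ** 2)
print(f"y = {a:.1f} * x^{b:.2f}, R^2 = {1 - ss_res / ss_tot:.2f}")
```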
Conclusion: Preliminary data suggest that LAICGA can
potentially be used to predict burn extension, as well as
to test the effectiveness of acute burn therapy.
Figure – Abstract 110: LDI (left) and LAICGA (right) 48 hours post-burn
111
Ultrasound Experts Rapidly And
Accurately Interpret Ultrasound Images
Obtained Using Cellphone Video Cameras
Transmitted By Cellular Networks
Stephen Leech1, Jillian Davison1,
Michelle P. Wan1, F. Eike Flach2, Linda Papa1
1Orlando Regional Medical Center, Orlando, FL; 2University of Florida Shands-Gainesville, Gainesville, FL
Background: Emergency ultrasound (US) is an integral
skill to the practice of EM, but is heavily user-dependent
and has a steep initial learning curve. Direct supervision
by an expert is not always available and may limit US
use. Newer cell phones have built-in video cameras and
allow video transmission via cellular networks, which
would allow remote review, interpretation, and management by experts if not available on-site.
Objectives: We hypothesize that experts can correctly
interpret US images and guide management of patients
in real time using US videos captured and transmitted
via cellphone networks.
Methods: This prospective observational study was a
blinded image review. US images were captured using
a cell phone video camera (iPhone 3GS) and transmitted
to two fellowship-trained US experts via text messaging
using a 3G data network. Experts interpreted images,
returned interpretation, provided next step in patient
management, and rated images on a five-point scale
(1-poor, 2-fair, 3-adequate, 4-good, 5-excellent) via text
message in real time. Experts were blinded to each
other’s interpretations and ratings. Outcome measures
included correct interpretation (normal/abnormal), correct next step in patient management, time from initial
transmission to interpretation, and ratings of image
quality. Data were analyzed using descriptive statistics, raw agreement, and Cohen's kappa.
Results: Two experts reviewed 50 videos from six core
applications in emergency US (aorta, cardiac, DVT,
FAST, GB, renal) for 100 total reviews. There were 14
normal and 36 abnormal US. Experts correctly interpreted 96/100 videos as normal or abnormal with excellent agreement, raw agreement 0.97 (95%CI 0.91–0.99), and Cohen's kappa 0.93 (95%CI 0.84–1). Experts recommended the correct next step in management in 97/100 cases with excellent agreement, raw agreement 0.97 (95%CI 0.91–0.99), and Cohen's kappa 0.94 (95%CI 0.87–1).
Mean time from initial image transmission to interpretation was 141 seconds (95%CI 127–155) with a range
from 45 seconds–504 seconds. Mean image quality rating was 4.0 (95%CI 3.9–4.1), with 98 images rated as
adequate or better.
Conclusion: Transmission of cell phone video camera
captured US videos via text messaging allows rapid and
accurate interpretation of US images by US experts. This
medium can provide support to EM physicians who may
not always be comfortable with US image interpretation.
112
Renal Colic: Does Urine Dip and/or Serum
WBC Predict the Need For CT To Identify
Kidney Stone Mimics?
Raashee S. Kedia1, Kaushal Shah1, Nelson
Wong1, David H. Newman1, Salah Baydoun2,
Barbara Kilian2
1Mount Sinai School of Medicine, New York, NY; 2St. Luke’s Roosevelt, New York, NY
Background: n/a
Objectives: Our primary objective was to identify the
percentage of patients with serious alternative diagnoses when presenting with renal colic, and our secondary objective was to determine if immediately available
clinical data can predict the presence or absence of
dangerous alternative diagnoses.
Methods: We conducted an observational study
between January 2007 and June 2008 in two academic,
inner city EDs with an annual census of 185,000. Inclusion criterion was ‘patient with possible renal colic’ per
treating physicians. Exclusions were non-English
speaking, non-literate, prisoners, and abdominal aortic
aneurysm believed to be among the top three possibilities. Trained research assistants (RAs) staffed EDs on a
university calendar from 8 am to midnight (62% of calendar days) and monitored the trackboard for potential
cases of renal colic. If an attending physician or senior
resident confirmed suspicion of renal colic, a data form
was completed by the medical provider. Urine dipstick results, serum WBC, and CT scan results were recorded when obtained. ‘‘Serious alternative diagnosis’’ on CT scan was defined a priori. Discharged
patients were contacted after 4 weeks for follow-up.
Descriptive statistics including 95% confidence intervals
were calculated for all measures. Leukocytosis was
defined as WBC >12 and abnormal urine dipstick was
defined as presence of either leukocytes or nitrites.
Results: 444 patients with suspected renal colic were
enrolled. 300 (67.5%) received a CT scan of which 118
(39%) had a confirmed stone and 15 (5% [95%CI = 3–
8%]) had an alternative serious diagnosis for their flank
pain. The other 124 imaged patients (41%) had no clear
etiology on CT scan. Of the 144 (32.5%) that did not
receive a CT scan, 42 (9.5%) were contacted and found
to have no adverse outcome (hospital visit, future CT
scan, or surgery within 14 days), and none appeared in
the social security death index. Leukocytosis (+LR 1.3,
-LR 0.9, sensitivity 29%) and abnormal urine dipstick
(+LR 1.0, -LR 0.99, sensitivity 24%) either individually or
combined (+LR 1.0, -LR 0.99, sensitivity 39%) yielded
poor sensitivity and unhelpful likelihood ratios as a
predictor for kidney stone or adverse outcome.
Conclusion: Serious alternative diagnoses identified by
CT were uncommon (5%) in this cohort of suspected
renal colic patients, and leukocytosis and urine dipstick
could not predict these individuals.
113
‘‘Child in Hand’’ - A Prospective Cohort
Study To Assess The Health Status And
Psychosocial Distress Of Haitian Children
One Year After 2010 Earthquake
Srihari Cattamanchi1, Robert D. Macy1, Dicki
Johnson Macy2, Amalia Ciottone3, Svetlana
Bivens3, Bader S. Al-Otaibi1, Majed Al-johani1,
Shannon Straszewski1, Gregory Ciottone1
1Harvard Medical School / Beth Israel Deaconess Medical Center, Boston, MA; 2Boston Childrens Foundation, Boston, MA; 3Child in Hand, Boston, MA
Background: The 2010 earthquake that struck Haiti
caused more than 200,000 deaths and significant damage to health care infrastructure. One year later Haitian
children and youth are most at risk, with an estimated
500,000 orphans exposed to debilitating diseases.
Objectives: To assess the health and psychosocial
status of a sample of Haitian children, one year after
the 2010 earthquake.
Methods: A prospective cohort study, assessing the
health and psychosocial status among Haitian children,
one year after 2010 earthquake, from seven orphanages
and two schools in and around Port Au Prince, Haiti.
Children, ages 1–18 years, from these sites were included in the
study. These children were assessed for any medical
illness, which was diagnosed based on their chief
complaints, history of presenting illness, vital signs, and
physical examination by medical teams. Based on their
findings, children were either treated on-site or sent to
medical clinics for further evaluation and treatment.
Some children (ages 7–18) were also screened for psychosocial distress by psychosocial teams using Child
Psychosocial Distress Screener (CPDS).
Results: A total of 423 Haitian children were assessed,
out of whom 28 were excluded because of age >18, leaving 395 children included in the study. There were 209
(53%) males and 186 (47%) females. The mean age was
10.59 years (SD 3.84). Most common clinical findings
were symptoms of abdominal pain (22.28%), joint and
muscle pain (14.94%), fever (14.18%), headache (12.66%),
and malnutrition (11.65%). From the above 395 children,
357 children completed CPDS screening. The seven-item
CPDS rates psychosocial distress into three main public
mental health treatment categories: general treatment
(scores of 1–3) in 45 children, indicated treatment (scores
of 4–7) in 238 children, and selected treatment (scores of
8–10) in 74 children. Mean CPDS score was 5.79 (SD
2.06), with a median score of 6.0 (range 1 to 10).
Conclusion: We found only 20% of the symptoms correlated with clinical findings. Earthquake exposure,
including entrapment, witnessing relatives die, displacement, and orphan status are all primary causes contributing to psychosocial distress and degradation among
Haitian children. Coupled with poor hygiene, environmental and nutritional factors, our findings argue for
simultaneous improvement of psychosocial well-being
and child health care.
114
Systematic Review of Interventions to
Mitigate the Effect of Emergency
Department Crowding in the Event of a
Respiratory Disease Outbreak
Melinda J. Morton1, Kevin Jeng2, Raphaelle
Beard1, Andrea Dugas1, Jesse M. Pines3,
Richard E. Rothman1
1Johns Hopkins School of Medicine, Baltimore, MD; 2Duke University School of Medicine, Durham, NC; 3George Washington University School of Medicine, Washington, DC
Background: Seasonal influenza is a common cause of
ED crowding; however, increased patient volumes
associated with a true influenza pandemic will require
additional planning and ED response resources.
Objectives: This systematic review aimed to describe
the breadth and diversity of interventions that have
been reported to improve patient flow during a respiratory outbreak. Secondarily, we qualitatively assessed
the effectiveness of various types of interventions to
determine which interventions may be most effective
in different settings to mitigate surge during an
outbreak.
Methods: We conducted a formal literature search
including MEDLINE, EMBASE, Cochrane, PubMed,
Global Health Library (WHO), ISI Web of Science,
and CINAHL databases. Interventions to mitigate
influenza or any known respiratory pathogen were
included. Initial search results were screened by title
and abstract; studies were excluded based on criteria
listed in Table 1. Six intervention categories were
identified a priori: Triage and Screening, Clinic-Based,
Testing, Treatment, Isolation, and ‘‘Other’’ Interventions. Data on outbreak
and intervention
characteristics, facility characteristics, ‘‘triggers’’ for
implementing interventions, and input / output
measures were extracted.
Results: 1761 articles were identified via the search
algorithm. 1638 were excluded based on title and
abstract. Of the 173 articles remaining, full text was
reviewed on 136 (full text not available on 37 articles);
24 articles were selected for the final review. For full
results, see Table 2. Sixteen Triage and Screening Interventions, 12 Clinic-Based, 11 Isolation, 4 Testing, 4
Treatment, and 1 ‘‘Other’’ category intervention were
identified. One intervention involving school closures
was associated with a 28% decrease in pediatric ED
visits for respiratory illness.
Conclusion: Most interventions were not tested in
isolation, so the effect of individual interventions was
difficult to differentiate. Interventions associated with
statistically significant decreases in ED crowding were
school closures, as well as interventions in all categories studied. Further study and standardization of
intervention input, process, and outcome measures
may assist in identifying the most effective methods of mitigating ED crowding and improving surge capacity during an influenza or other respiratory disease outbreak.
Table 1 - Abstract 114: Study Inclusion / Exclusion Criteria

Inclusion Criteria:
• Include studies with interventions designed to impact ED workflow in response to a known respiratory outbreak. Studies must include a discussion of the observed intervention effect on ED surge capacity.
• Include peer-reviewed journal articles, peer-reviewed reports/papers by nongovernmental organizations, and policy and procedure documents.
• Include all study types.

Exclusion Criteria:
• Exclude non-English publications.
• Exclude studies that do not take place during a known, documented respiratory outbreak.
• Exclude studies that do not describe interventions that impact ED workflow.
• Exclude studies that detail interventions implemented independently of ED surge; i.e., exclude studies with interventions that did not require a trigger for activation.
• Exclude interventions that were published in 2000 or earlier.
• Exclude animal studies.
• Exclude studies that have not been field-validated.
Table 2 - Abstract 114: Output Measures, Interventions, and Results (by study, among studies reporting this data). Each result is listed with its associated intervention category in parentheses.

Change in length of stay:
• Decreased from 241 to 212 minutes (Triage and Screening)
• Decreased by 2 hours (Clinic, Triage and Screening, Treatment)
• Decreased for all levels of acuity, from 0.4 to 2.1 hours (All types of interventions)
• Decreased by up to 3.5 hours (Clinic, Testing, Isolation)
• Decreased from 92.8 to 81.2 min (Triage and Screening)
• ‘‘Decreased’’ but not quantified in 4 studies (Various)

Change in wait time:
• 44% decrease (All types implemented)

Change in left-without-being-seen rates:
• Decreased from 12% to 1% (Triage and Screening, Clinic)
• Decreased to 4.8% (Isolation, Clinic, Triage and Screening)

Change in hospital admissions:
• Decreased by 3% (All types implemented)
• Decreased by 17% (Triage and Screening, Clinic, Isolation)
• Decreased from 48.6% to 12.2% (Triage and Screening)
• Decreased from 21% to 18% (Isolation, Clinic, Triage and Screening)

Cost:
• $59,000 / 30 days (Triage and Screening)
• $280,000 (All types implemented)
115
Communication Practices and Planning in
US Hospitals Caring for Children During
the H1N1 Influenza Pandemic
Marie M. Lozon1, Sarita Chung2,
Daniel Fagbuyi3, Steven Krug4
1University of Michigan, Ann Arbor, MI; 2Children’s Hospital Boston, Boston, MA; 3Children’s National Medical Center, Washington, DC; 4Children’s Memorial Hospital, Chicago, IL
Background: The H1N1 influenza pandemic produced
surges of children presenting to US emergency departments (EDs) in both spring and fall of 2009. Communication strategies were critical to relay the most current
information to both the public and ED and hospital
staff.
Objectives: To identify communication practices used
by hospitals caring for children during the 2009 H1N1
influenza pandemic.
Methods: A retrospective survey tool was developed
and refined using a modified Delphi method with nine
subject matter experts in pediatric disaster preparedness and distributed to ED medical directors and/or
hospital disaster medical leaders who reported on institutional preparedness practices. The survey tool was
sent to hospitals across the ten Federal Emergency
Management Agency (FEMA) regions. We describe
hospitals’ use and modification of communication strategies during the pandemic.
Results: Seventy-eight hospitals were surveyed, with
52 responding (69%). 47% of participants reported having an external communication plan (ECP) for families,
primary care physicians, and the public prior to the
pandemic. 79% reported that the ECP required further
modifications. Modifications include treatment and testing plans for primary care providers (76%), hotline/
central call center for community health care providers
(58%), creation of a central website (53%), direct telephone messages to families (39%), and use of social
media (26%). 84% of participants reported having an
established internal communication plan (ICP) for staff
prior to the pandemic. 74% reported that the ICP
required further modifications. Modifications include
creation of algorithms for treatment and testing (89%),
flexibility to make institutional changes for testing and
treatment based on CDC guidelines (69%), algorithms
for patient placement to inpatient units (61%), e-mail
messaging to all staff members (53%), and central
website (36%).
Conclusion: During the H1N1 pandemic, more institutions reported having an established internal communication plan than an external plan. Most plans required further modifications, predominantly around testing and treatment policies, with some participants reporting modifications of the external communication plan to include the use of social media.
116
An Investigation of the Association between
Extended Shift Lengths, Sleepiness, and
Occupational Injury and Illness among
Nationally Certified EMS Professionals
Antonio Ramon Fernandez1, J. Mac Crawford2,
Jonathan R. Studnek3, Michael L. Pennell2,
Timothy J. Buckley2, Melissa A. Bentley4,
John R. Wilkins III2
1EMS Performance Improvement Center, University of North Carolina - Chapel Hill, Chapel Hill, NC; 2The Ohio State University, Columbus, OH; 3The Center for Prehospital Medicine, Carolinas Medical Center and the Mecklenburg EMS Agency, Mecklenburg, NC; 4The National Registry of EMTs, Columbus, OH
Background: The link between extended shift lengths,
sleepiness, and occupational injury or illness has been
shown, in other health care populations, to be an
important and preventable public health concern but
heretofore has not been fully described in emergency
medical services (EMS).
Objectives: Evaluate the influence of extended shift
lengths and sleepiness on occupational injury or illness
among nationally certified EMS professionals.
Methods: In 2009, previous respondents to the Longitudinal EMT Attributes and Demographics Study were
mailed a survey. This survey included the Epworth
Sleepiness Scale (ESS), items inquiring about health
and safety, and work-life characteristics. Occupational
injury or illness items (involved in ambulance crash
while driving, missed work due to occupational injury
or illness, needle-stick while performing EMS duties)
were combined generating one dichotomous outcome
variable. Multiple logistic regression modeling was performed, forcing ESS and shift length (<24 hours,
≥24 hours) into the model.
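As an illustration of the modeling described above (forcing ESS and shift length into a logistic model, with a shift length by mandatory overtime interaction), a minimal sketch on synthetic data; the variable names are ours, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "injury":     rng.integers(0, 2, n),   # 1 = occupational injury/illness
    "ess":        rng.integers(0, 21, n),  # Epworth Sleepiness Scale score
    "long_shift": rng.integers(0, 2, n),   # 1 = shifts of 24 hours or more
    "mand_ot":    rng.integers(0, 2, n),   # 1 = worked mandatory overtime
})

# Main effects plus the shift-length x overtime interaction
model = smf.logit("injury ~ ess + long_shift * mand_ot", data=df).fit(disp=0)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```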
Results: The response rate was 67.2% (n = 1,078).
Ambulance crashes, missed work due to occupational
injury or illness, and needle sticks were reported in
1.8%, 13.1%, and 4.0% of respondents, respectively.
Combining these variables revealed 17.5% (186/1,060)
experienced occupational injury or illness in the past
12 months. The relationship between shift length and
odds of occupational injury or illness differed according
to overtime work (p = 0.01): among individuals who did
not work mandatory overtime, the odds of occupational
injury or illness for those who worked ≥24 hours was
1.72 (95% CI 1.01–2.95) times that of individuals who
worked <24-hours. There was no statistically significant
difference when comparing shift lengths for those who
worked mandatory overtime (OR 1.13, 95% CI 0.59–
2.18). For every ESS point increase, the odds of reporting an occupational injury or illness increased by 7%
adjusting for shift-length and overtime work (OR 1.07;
95%CI 1.03–1.12).
Conclusion: This study revealed significant associations
between occupational injury or illness and shift length,
working mandatory overtime, and sleepiness. Results
suggest that EMS professionals’ health and safety can
be protected by preventing mandatory overtime and
extended shift lengths.
117
Does An Intimate Partner Violence Kiosk
Intervention in the ED Impact Subsequent
Safety Behaviors?
Justin Schrager, Debra Houry, Shakiyla Smith
Emory University School of Medicine, Atlanta,
GA
Background: Computer kiosks in the ED have not previously been employed to screen for intimate partner
violence (IPV) or disseminate health information.
Objectives: To assess the effect of an ED-based computer screening and referral intervention for IPV victims and to determine what characteristics resulted in a
positive change in their safety. We hypothesized that
women who were experiencing severe IPV and/or were
in contemplation or action stages would be more likely
to endorse safety behaviors.
Methods: We conducted the intervention for female
IPV victims at three urban EDs using a computer kiosk
to deliver targeted education about IPV and violence
prevention as well as referrals to local resources. All
adult English-speaking non-critically ill women triaged
to the ED waiting room were eligible to participate.
The validated Universal Violence Prevention Screening
Protocol was used for IPV screening. Any who disclosed IPV further responded to validated questionnaires for alcohol and drug abuse, depression, and IPV
severity. The women were assigned a baseline stage of
change (precontemplation, contemplation, action, or
maintenance) based on the URICA scale for readiness
to change behavior surrounding IPV. Participants were
contacted at 1 week and 3 months to assess a variety of pre-determined protective actions, such as moving out, taken to prevent IPV during that period. Statistical analysis (chi-square testing) was performed to compare participant characteristics with stage of change and with whether or not they took protective action.
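A minimal sketch of the chi-square comparison described above, using hypothetical counts rather than study data:

```python
from scipy.stats import chi2_contingency

# Rows: characteristic present/absent; columns: took action / did not
table = [[34, 27],
         [27, 22]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```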
Results: A total of 1,474 people were screened and 154
disclosed IPV and participated in the full survey. 53.3%
of the IPV victims were in the precontemplative stage
of change, and 40.3% were in the contemplation stage.
110 women returned at 1 week of follow-up (71.4%),
and 63 (40.9%) women returned at 3 months of follow-up. 55.5% of those who returned at 1 week, and 73% of
those who returned at 3 months took protective action
against further IPV. There was no association between
the various demographic characteristics and whether
or not a woman took protective action.
Conclusion: ED-based kiosk screening and health
information delivery is both a feasible and effective
method of health information dissemination for women
experiencing IPV. Stage of change was not associated
with actual IPV protective measures.
118
Prime Time Television Programming Fails to Lead Safely By Example: No Seat Belts, No Helmets
David Milzman1, Han Huang1, Kyle Pasternac1,
Michael Phillipone2, Jenica Ferretti-Gallon2,
Anna Ruff2
1Georgetown University School of Medicine, Washington, DC; 2Georgetown University, Washington, DC
Background: A 1998 Michigan State University study
recorded prime time TV portrayal of 25% seat belt usage
when actual national usage was 65% that year. In the 13 years since, US reported usage has approached 90%. Other safety precautions for personal protection, such as helmet use for motor and pedal cycles, remain near 50%.
Objectives: Compare prime time TV traffic/safety exposures and seat belt and helmet use with USDOT NHTSA
figures.
Methods: Researchers watched non-news, non-reality TV totaling 53 programs across 10 weeks of spring 2011 prime time (8–11 PM EST) from the following networks: ABC, CBS, NBC, FOX. Commercials were
excluded. All instances of seat belt usage (driver and
passenger), helmets (bikes and motorcycle), and miscellaneous pedestrian and vehicular traffic infractions
were also recorded.
Results: A total of 273 hours of prime time TV was viewed, with an overall rate of proper seat belt usage of 37.6% (95% CI 32.4–42.9) for drivers and 22.3% (95% CI 18.5–26.0) for passengers. Proper seating and child-seat usage, not noted in the original 1998 study, was only 14%. Helmets were used by 15.9% of bicyclists and 70.3% of motorcyclists. There were also a 17% rate of pedestrian and a 22% rate of vehicular traffic violations. Overall proper 2011 restraint use was 30.1% (95% CI 25.4–34.6). This figure represents only a 4.2% rise, a nonsignificant increase since the prior study. Portrayal of prime time TV seatbelt usage rose 4.8% (p ≤ 0.11) from 1998 to 2011, while actual US seat belt use increased significantly.
Conclusion: Recent studies have found that traffic safety behaviors continue to increase in the US population; however, major TV network programs have not incorporated such simple safety changes into current programming despite prior study of these deficiencies. A poor example continues to be set.
119
The Effect Of Young Unlicensed Drivers
On Passenger Safety Restraint Use In U.S.
Fatal Crashes: Concern For Risk Spillover
Effect?
Jonathan Fu1, Michael J. Crowley1, Jim
Dziura1, Craig L. Anderson2, Federico E. Vaca1
1Yale University School of Medicine, New Haven, CT; 2University of California, Irvine School of Medicine, Irvine, CA
Background: Despite recent prevention gains, motor
vehicle crashes continue to top the list of causes of
death for US adolescents and young adults. Many of
these deaths involve young unlicensed drivers who are
more likely to be in fatal crashes and to engage in
high-risk driving behaviors like impaired driving,
speeding, and driving unrestrained. In a crash context,
the influence of these high-risk behaviors may spill over
to adversely affect passengers’ safety restraint use.
Objectives: To examine the effect of young unlicensed
drivers on safety restraint use and mortality of their
front seat passengers.
Methods: A retrospective analysis of the National
Highway Traffic Safety Administration’s Fatality Analysis Reporting System from years 1996–2008 was conducted. Fatal crashes involving unlicensed drivers (15–
24 yrs) and their front seat passengers (15–24 yrs) were
included. Contingency tables, univariate, and multivariate logistic regression were undertaken to assess the
relationship between unlicensed driving and passenger
restraint use, controlling for established predictors of
restraint use, including sex, time of day, alcohol use,
number of occupants, crash year, and crash location
(rural vs. urban).
Results: 85,563 15–24 year-old front seat passenger
crash fatalities occurred from 1996–2008. 14,447 (19%)
of their drivers were unlicensed or inadequately
licensed. Rates of unlicensed driving ranged from 17%
to 21% and trended upwards. Compared to passengers
of licensed drivers, passengers of unlicensed drivers
had decreased odds of wearing a safety restraint (OR
0.65, 95% CI 0.63–0.67). Other significant factors were
male passenger (0.75, 0.73–0.77), driver drinking (0.37,
0.36–0.39), rural location (0.62, 0.60–0.64), and crash
year (1.06, 1.06–1.07).
Conclusion: We found a strong negative correlation
between unlicensed driving and front seat passenger
restraint use, suggesting a significant risk spillover
effect. Unlicensed driving is involved in a disproportionate number of fatal crashes and plays an important
role in the safety of not only the drivers but also their
passengers. Our findings highlight an alarming trend
that has considerable implications for US highway
safety and the public’s health. Further in-depth study in
this area can guide future countermeasures and traffic
safety programs.
120
Emerging Conducted Electrical Weapon
Technology: Is it Effective at Stopping
Further Violence?
Jeffrey D. Ho1, Donald M. Dawes2, James D.
Sweeney3, Paul C. Nystrom1, James R. Miner1
1Hennepin County Medical Center, Minneapolis, MN; 2Lompoc Valley Medical Center, Lompoc, CA; 3Florida Gulf Coast University, Ft. Myers, FL
Background: Conducted electrical weapons (CEWs)
are effective law enforcement tools used to control
violent persons, thus preventing further violence and
injury. Older generation TASER X26 CEWs are most
widely in use. New generation TASER X2 CEWs will
begin to replace them. X2 technology differences are
a multi-shot feature and redesigned electrical waveform/output characteristics. It is not known if the X2
will be as effective in preventing further violence and
injury.
Objectives: We present a pilot, head-to-head comparison of X26 and X2 effectiveness in stopping a motivated
person. The objective is to determine comparative
injury prevention effectiveness of the newer CEW.
Methods: Four humans had metal CEW probe pairs
placed. Each volunteer had two probe pairs placed (one
pair each on the right and left of the abdomen/inguinal
region). Superior probes were at the costal margin, 5
inches lateral of midline. Inferior probes were vertically
inferior at predetermined distances of 6, 9, 12, and 16
inches apart. Each volunteer was given the goal of
slashing a target 10 feet away with a rubber knife during CEW exposure. As a means of motivation, they
believed the exposure would continue until they
reached the goal (in reality, the exposure was terminated once no further progress was made). Each volunteer received one exposure from a X26 and a X2 CEW.
The exposure order was randomized with a 2-minute
rest between them. Exposures were recorded on a
high-speed, high-resolution video. Videos were reviewed and scored by six physician, kinesiology, and law officer experts using standardized criteria for effectiveness, including degree of upper extremity, lower extremity, and total body incapacitation, and degree of goal achievement. Reviews were descriptively compared independently for probe spread distances and between devices.
Results: There were 8 exposures (4 pairs) for evaluation and no discernible, descriptive reviewer differences
in effectiveness between the X26 and the X2 CEWs
when compared.
Conclusion: New generation CEWs have improved
safety technology while exhibiting similar performance
when compared to older generation CEWs. There is no
discernible effectiveness difference between them. New
generation CEWs appear to be as effective in stopping
a motivated person when compared to older generation
CEWs.
121
Barriers to Colorectal Cancer Screening as
Preventive Health Measure among Adult
Patients Presenting to the Emergency
Department
Nidhi Garg, Sanjey Gupta
New York Hospital Queens, Flushing, NY
Background: Colorectal cancer is the second leading
cause of cancer death in the United States. Health promotion and disease prevention are increasingly recognized activities that fall within the scope of emergency
medicine.
Objectives: To identify barriers to colorectal cancer screening in adult patients (≥50 yrs of age) presenting to an ED that services a large immigrant and non-English-speaking population.
Methods: A prospective, cross-sectional, survey-based study was conducted at an urban Level I trauma center with annual ED visits of 120,000. Trained research assistants
annual ED visits of 120,000/year. Trained research assistants interviewed a convenience sample of patients over
36 months with a three-page survey recording demographics, knowledge of occult blood in stool testing (OCBST), and colonoscopy based on current preventive
health recommendations from the Agency for Healthcare Research and Quality (AHRQ). Chi-square test was
used for categorical data as appropriate. Logistic regression was performed for the significant factors.
Results: A total of 904 males ≥50 yrs were interviewed
over the study period with a median age of 68 yrs (IQR
59–79), 384 (34%) immigrants and 825 (91%) insured.
Overall, 647 (71%) subjects and 714 (79%) subjects had
the knowledge about OCBST and colonoscopy, respectively and 24 (2.6%) and 27 (3%) did not answer the
respective questions. Total 581 (64%) and 606 (67%) subjects had OCBST and colonoscopy in the past, and 28
(3.1%), 29 (3.2%) did not answer the respective questions. There was no significant difference in OCBST and
colonoscopy stratified by immigrant status. Separate
logistic regression models were compared with OCBST
and colonoscopy as outcomes, adjusted with knowledge
of obtaining these tests by AHRQ recommendations,
age, immigration status, insurance status, and smoking
status. OCBST and colonoscopy were associated with
having knowledge about OCBST and colonoscopy, OR
36 (CI 23–56), p < 0.001 and OR 26 (CI 15–43), p < 0.001
and insurance status OR 2.2 (CI 1.2–4.2), p = 0.02 and OR
3.5 (CI 1.9–6.2), p < 0.001. Immigration status was associated with OCBST OR 1.6 (CI 1.1–2.4), p = 0.02 but was
not associated with colonoscopy.
Conclusion: Males are more likely to have colorectal cancer screening if they have insurance and are educated about it. These interventions and preventive health practice reinforcements can be easily accomplished in the ED while interviewing a patient.
122
Impact of Rising Gasoline Prices on
Bicycle Injuries in the United States,
1997–2009
Mairin Smith, M. Kathryn Mutter,
Jae Lee, Jing Dai, Mark Sochor,
Matthew J. Trowbridge
University of Virginia, Charlottesville, VA
Background: The trend towards higher gasoline prices
over the past decade in the U.S. has been associated
with higher rates of bicycle use for utilitarian trips. This
shift towards non-motorized transportation should be
encouraged from a physical activity promotion and sustainability perspective. However, gas price induced
changes in travel behavior may be associated with
higher rates of bicycle-related injury. Increased consideration of injury prevention will be a critical component
of developing healthy communities that help safely support more active lifestyles.
Objectives: The purpose of this analysis was to a)
describe bicycle-related injuries treated in U.S. emergency departments between 1997 and 2009 and b) investigate the association between gas prices and both the
incidence and severity of adult bicycle injuries. We
hypothesized that as gas prices increase, adults are
more likely to shift away from driving for utilitarian travel toward more economical non-motorized modes of
transportation, resulting in increased risk exposure for
bicycle injuries.
Methods: Bicycle injury data for adults (16–65 years)
were obtained from the National Electronic Injury
Surveillance System (NEISS) database for emergency
department visits between 1997–2009. The relationship
between national seasonally adjusted monthly rates of
bicycle injuries, obtained by a seasonal decomposition
of time series, and average national gasoline prices,
reported by the Energy Information Administration,
was examined using a linear regression analysis.
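A minimal sketch of the analysis pipeline described above (seasonal decomposition of a monthly injury series, then linear regression of the seasonally adjusted series on average gasoline price), on synthetic data rather than NEISS or EIA figures:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.seasonal import seasonal_decompose

idx = pd.date_range("1997-01", "2009-12", freq="MS")   # monthly, 156 points
rng = np.random.default_rng(1)
gas = np.linspace(1.0, 3.0, len(idx)) + rng.normal(0, 0.05, len(idx))
injuries = pd.Series(3000 + 1100 * gas
                     + 400 * np.sin(2 * np.pi * idx.month / 12)  # seasonality
                     + rng.normal(0, 50, len(idx)), index=idx)

decomp = seasonal_decompose(injuries, model="additive", period=12)
adjusted = injuries - decomp.seasonal          # seasonally adjusted series

ols = sm.OLS(adjusted.values, sm.add_constant(gas)).fit()
print(ols.params)   # slope ~ extra injuries per $1 rise in gas price
```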
Results: Monthly rates of bicycle injuries requiring
emergency care among adults increase significantly as
gas prices rise (p < 0.0001, see figure). An additional
1,149 adult injuries (95% CI 963–1,336) can be predicted
to occur each month in the U.S. (>13,700 injuries annually) for each $1 rise in average gasoline price. Injury
severity also increases during periods of high gas prices,
with a higher percentage of injuries requiring admission.
Conclusion: Increases in adult bicycle use in response
to higher gas prices are accompanied by higher rates of
significant bicycle-related injuries. Supporting the use of
non-motorized transportation will be imperative to
address public health concerns such as obesity and climate change; however, resources must also be dedicated
to improve bicycle-related injury care and prevention.
123
Obesity and Seatbelt Use: A Fatal
Relationship
Dietrich Jehle, Joseph Consiglio, Jenna
Karagianis, Gabrielle Jehle
SUNY@Buffalo, Williamsville, NY
Background: Motor vehicle crashes are a leading
cause of mortality in the United States. Although seatbelts significantly reduce the risk of death, a number of
subgroups of individuals tend not to wear their seatbelts. A third of the population is now considered to be
obese and obese drivers may find it more difficult to
buckle up a standard seatbelt.
Objectives: In this study, we hypothesized that obese
drivers were less likely to wear seatbelts than their
normal weight counterparts.
Methods: A retrospective study was conducted on the
drivers in severe motor vehicle crashes entered into the
FARS (Fatality Analysis Reporting System) database
between 2003 and 2009. This database includes all
motor vehicle crashes in United States that resulted in a
death within 30 days. The study was limited to drivers
(336,913) of passenger vehicles in severe crashes. A
number of pre-crash variables were found to be significantly associated with seatbelt use. These were entered
into a multivariate logistic regression model using stepwise selection. Drivers were grouped into weight categories based on the World Health Organization
definitions of obesity by BMI. Seatbelt use was then
examined by BMI, adjusted for 12 pre-crash variables
that were significantly associated with seatbelt use.
Results: The odds of seatbelt use for normal weight
individuals were found to be 67% higher than the odds
of seatbelt use in the morbidly obese. The table below
displays the relationship of seatbelt use between the
different weight groups and the morbidly obese. Odds
ratios (OR) for each comparison are displayed with the
lower and upper 95% confidence limits.
Conclusion: Seatbelt use is significantly less likely in
obese individuals. Automobile manufacturers need to
investigate methods of making seatbelt use easier for the
obese driver in order to save lives in this population.
Table - Abstract 123:

Comparison                             OR      Lower 95% CL   Upper 95% CL
Underweight vs. Morbidly Obese         1.616   1.464          1.784
Normal Wt. vs. Morbidly Obese          1.666   1.538          1.805
Overweight vs. Morbidly Obese          1.596   1.472          1.730
Slightly Obese vs. Morbidly Obese      1.397   1.284          1.520
Moderately Obese vs. Morbidly Obese    1.233   1.120          1.358

124
Days Out of Work Do Not Correlate with
Emergency Department Pain Scores for
Patients with Musculoskeletal Back Pain
Barnet Eskin, John R. Allegra
Morristown Memorial Hospital, Morristown, NJ
Background: This is a secondary analysis of data
collected for a randomized trial of oral steroids in
emergency department (ED) musculoskeletal back pain
patients. We hypothesized that higher pain scores in the
ED would be associated with more days out of work.
Objectives: To determine the degree to which days out
of work for ED back pain patients are correlated with
ED pain scores.
Methods: Design: Prospective cohort. Setting: Suburban ED with 80,000 annual visits. Participants: Patients
aged 18–55 years with moderately severe musculoskeletal back pain from a bending or twisting injury ≤2 days before presentation. Exclusion criteria included non-musculoskeletal etiology, direct trauma, motor deficits,
and employer-initiated visits. Observations: We captured
initial and discharge ED visual analog pain scores (VAS)
on a 0–10 scale. Patients were contacted approximately
5 days after discharge and queried about the days out of
work. We plotted days out of work versus initial VAS,
discharge VAS, and change in VAS and calculated correlation coefficients. Using the Bonferroni correction
because of multiple comparisons, alpha was set at 0.02.
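A minimal Python sketch of this analysis (toy data standing in for the 67-patient cohort; the authors' actual values differ) might read:

    # Toy stand-ins for the study variables; real values came from the cohort.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    initial_vas = rng.uniform(5, 10, 67)               # 0-10 visual analog scale
    discharge_vas = initial_vas - rng.uniform(0, 4, 67)
    days_out = rng.integers(0, 6, 67).astype(float)

    alpha = 0.05 / 3  # Bonferroni correction for three comparisons (~0.02)
    for label, x in [("initial VAS", initial_vas),
                     ("discharge VAS", discharge_vas),
                     ("change in VAS", initial_vas - discharge_vas)]:
        r, p = pearsonr(x, days_out)
        print(f"{label}: R^2={r * r:.3f}, p={p:.3f}, significant={p < alpha}")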
Results: We analyzed 67 patients for whom complete
data were available. The mean age was 40 ± 9 years
and 30% were female. The average initial and discharge
ED pain scales were 8.0 ± 1.5 and 5.7 ± 2.2, respectively.
On follow-up, 88% of patients were back to work and
36% did not lose any days of work. For the plots of the
days out of work versus the initial and discharge VAS
and the change in the VAS, the correlation coefficients
(R²) were 0.03 (p = 0.17), 0.08 (p = 0.04), and 0.001
(p = 0.87), respectively.
Conclusion: For ED patients with musculoskeletal back
pain, we found no statistically significant correlation
between days out of work and ED pain scores.
125
Prevalence of Cardiovascular Disease Risk
Factors Among a Population of Volunteer
Firefighters
David Jaslow, Melissa Kohn, Molly Furin
Albert Einstein Medical Center, Philadelphia, PA
Background: Cardiovascular disease (CVD) is cited
annually by NIOSH as the most common cause of line
of duty death among U.S. firefighters (FF). There is
scant scientific literature and no state or national databases to document CVD risk factors among the volunteer FF workforce, which represents 71% of all FF
nationwide.
Objectives: To describe CVD risk factors among a population of volunteer FF.
Methods: 24 (100%) FF age 18 and older of a rural
Pennsylvania volunteer fire department completed an
NFPA 1582 fitness for duty exam in 2011 that was
funded by a FEMA Assistance to Firefighters Grant.
The exams consisted of an OSHA 1910.134 respiratory protection questionnaire, a complete physical exam, blood work and urinalysis, vision and hearing tests, a 12-lead ECG, hepatitis titers, colon cancer screening, chest
x-ray and PFTs if indicated, and a personal wellness
profile. We performed a cross-sectional observational
study of this population to determine the prevalence of
eight ‘‘major’’ CVD risk factors cited by the AHA: age,
sex, blood pressure or history of hypertension, BMI,
elevated blood sugar or history of diabetes, cholesterol
levels, smoking history, and level of physical activity.
Descriptive statistics, effect size, and 95% confidence
intervals are reported.
Results: All (100%) FF in this volunteer department are
male and their median age is 44 (range 19–87). 23/24
(96%, 95% CI: 78, 99%) have BMIs in the overweight or
obese category. In their personal wellness profile, 17/24
(71%, 95% CI: 50, 85%) report physical inactivity, 12/24
(50%, 95% CI: 31, 69%) have a family history of CVD, 8/
24 (33%, 95% CI:18, 53%) are smokers, and only one FF
was diabetic. 14/24 (58%, 95% CI: 39, 76%) FF had
either a history of hypertension or were hypertensive
upon exam. 14/24 (58%, 95% CI: 39, 76%) FF also had
either a history of hyperlipidemia or elevated total cholesterol, triglycerides, or LDL on their blood panel.
Conclusion: 22/24 (92%, 95% CI: 74, 98%) firefighters
in a small rural volunteer fire department have four or
more CVD risk factors and 15/24 (63%, 95% CI:43, 91%)
have five or more CVD risk factors. Fire department
physicians and EMS medical directors who serve as
consultants to the volunteer fire service should promote
annual or biannual fitness for duty examinations as a
tool to evaluate both personal and agency-wide CVD
risk. The FEMA AFG program is one option to fund
such an endeavor.
126
Prevalence Rates Of Intimate Partner
Violence Perpetration Among Male
Emergency Department Patients
Daniel S. Bell, Shakiyla Smith, Debra E. Houry
Emory University, Atlanta, GA
Background: The Institute of Medicine recently recommended screening women for IPV in health care settings,
but little research has been conducted on screening
men for perpetration.
Objectives: To assess the feasibility of ED computerized screening for IPV perpetration (IPVP) and the prevalence of IPVP among urban ED male patients. We
hypothesized there would be an 80% acceptance rate in
screening and that we would identify an IPVP rate of
20% in male patients.
Methods: We conducted a cross-sectional study over a
6-week period in an urban ED at a Level I trauma center using computer kiosks loaded with validated scales
to identify male perpetrators of IPVP. All male adult
patients over 18 years of age who presented to the ED
during study hours triaged to the waiting room were
eligible to participate. Patients were excluded if they were non-English speaking, acutely intoxicated, critically ill, or otherwise medically or psychiatrically unable to
complete a 20-minute questionnaire. At a private computer kiosk patients answered questions on general
health, substance abuse, mental health, and IPVP using
an eight-item IPVP screen developed by KV Rhodes,
three substance abuse screens (HONC, TWEAK, and
DAST), and two mental health surveys (PC-PTSD and
PC-BDI). Patients received target health and resource
information after the survey. IPVP prevalence was
evaluated with descriptive statistics. Chi-square tests
analyzed differences between perpetrators and
non-perpetrators.
Results: 113 men were approached for survey, and 94
were eligible for the study, of whom 67 (71%) completed questionnaires; 19 were not eligible and 27 were
not interested. Of the men who had been in a relationship in the past year (n = 25), 16% screened positive for
intimate partner violence perpetration and 44%
endorsed at least one IPV perpetration behavior.
Conclusion: We found that men accepted a screening
protocol for perpetration in the ED and self-disclosed
perpetration at relatively high rates.
127
Human Physiologic Effects of a New
Generation Conducted Electrical Weapon
Jeffrey D. Ho1, Donald M. Dawes2,
Paul C. Nystrom1, James R. Miner1
1Hennepin County Medical Center, Minneapolis, MN; 2Lompoc Valley Medical Center, Lompoc, CA
Background: Conducted Electrical Weapons (CEWs)
are common law enforcement tools used to subdue and
repel violent subjects and, therefore, prevent further
injury or violence from occurring in certain situations.
The TASER X2 is a new generation of CEW that has the
capability of firing two cartridges in a ‘‘semi-automatic’’
mode, and has a different electrical waveform and different output characteristics than older generation technology. There have been no data presented on the human
physiologic effects of this new generation CEW.
Objectives: The objective of this study was to evaluate
the human physiologic effects of this new CEW.
Methods: This was a prospective, observational study
of human subjects. An instructor shot subjects in the
abdomen and upper thigh with one cartridge, and subjects received a 10-second exposure from the device.
Measured variables included: vital signs, continuous
spirometry, pre- and post-exposure ECG, intra-exposure echocardiography, venous pH, lactate, potassium,
CK, and troponin.
Results: Ten subjects completed the study (median age
31.5, median BMI 29.4, 80% male). There were no
important changes in vital signs or in potassium. The
median increase in lactate during the exposure was 1.2,
range 0.6 to 2.8. The median change in pH was −0.031, range −0.011 to −0.067. No subject had a clinically relevant ECG change, evidence of cardiac capture, or positive troponin up to 24 hours after exposure. The median change in creatine kinase (CK) at 24 hours was 313, range −40 to 3418. There was no evidence of
impairment of breathing by spirometry. Baseline median minute ventilation was 14.2, which increased to 21.6
during the exposure (p = 0.05), and remained elevated
at 21.6 post-exposure (p = 0.01).
Conclusion: We detected a small increase in lactate
and decrease in pH during the exposure, and an
increase in CK 24 hours after the exposure. The physiologic effects of the X2 device appear similar to previous
reports for ECD devices.
128
Use of Urinary Catheters in U.S. EDs 1995–
2009: A Potentially Modifiable Cause of
Catheter-Associated Urinary Tract
Infection?
Jennifer Gibson Chambers1, Jeremiah Schuur2
1University of New England, Biddeford, ME; 2Brigham and Women's Hospital, Boston, MA
Background: Catheter-associated urinary tract infection (CAUTI) is the most prevalent hospital-acquired
infection. In 2007, the Centers for Disease Control
(CDC) published guidelines for reducing CAUTI, including appropriateness criteria for urinary catheters (UCs).
Objectives: To calculate frequency and trends of UC
placement and potentially avoidable UC (PAUC) placement in US EDs. We hypothesized that ED use of UCs
and PAUCs in admitted patients did not decrease after
the CDC’s guideline publication.
Methods: We analyzed the National Hospital Ambulatory Medical Care Survey (NHAMCS), a weighted probability sample of US ED visits, from 1995–2009 for use of
UCs in adults. UCs were classified as PAUC if the primary diagnosis did not meet CDC appropriateness criteria. Use of UCs and PAUCs before (2005–7) and after
(2008–9) the CDC guideline were compared with a
chi-square test. Predictors of ED placement of UC for
admitted patients were assessed with multivariate logistic regression, results shown as odds ratio (OR) and 95%
CI. Statistics controlled for the survey sampling design.
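A design-based analysis like this is usually run in survey-aware software; a simplified Python sketch using the NHAMCS visit weights (column names hypothetical, and without the strata/PSU handling a full design-based analysis requires) could look like:

    # Simplified sketch: weighted logistic model for UC placement in admitted
    # patients. A full NHAMCS analysis would also use the design's strata/PSUs.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    visits = pd.read_csv("nhamcs_admitted_1995_2009.csv")  # hypothetical extract
    fit = smf.glm(
        "uc_placed ~ C(age_group) + female + C(race) + ambulance"
        " + C(region) + teaching + urban",
        data=visits,
        family=sm.families.Binomial(),
        freq_weights=visits["patwt"],  # NHAMCS per-visit sampling weight
    ).fit()
    print(np.exp(fit.params))  # adjusted odds ratios, as reported in Results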
Results: UC placement varied from 22 to 33 per 1000
adult ED visits, peaking in 2003. Overall, 1.6% (CI 1.5–
1.7%) of discharged patients and 8.5% (CI 7.9–9.1%) of
admitted patients received UCs. More than half of ED-placed UCs were for potentially avoidable diagnoses.
There was not a significant change in UC placement
(8.7% vs. 8.0%, p = 0.4) or PAUC placement (4.8% vs.
4.7%, p = 0.4) in admitted ED patients after CDC guideline publication. Predictors of UCs in admitted patients
included increasing age (≥80 y vs. 18–59 y, OR 3.1, CI
2.7–3.6), female sex (OR 1.3, CI 1.2–1.4), race (Hispanic
vs. white, 0.8, CI 0.6–0.9), arrival by ambulance (OR 2.5,
CI 2.3–2.8), increasing urgency (≥2 h vs. immediate, OR 0.8, CI 0.6–1.1), and longer ED visits (≥4 h vs. <2 h, OR
1.4, CI 1.2–1.7); facility characteristics included region
(South vs. Northeast, OR 2.2, CI 1.9–2.5), teaching ED
(OR 1.3, CI 1.2–1.5), and urban location (OR 1.3, CI 1.1–
1.5). The most common reasons for visit and diagnosis
categories among patients receiving UCs are shown in
the table.
Conclusion: The high rates of PAUC suggest a potential
for reduction of UCs in admitted ED patients - a proven
strategy to reduce CAUTI. Publication of the CDC CAUTI
guideline in 2007 did not affect ED use of UCs.
Table - Abstract 128: Most Common Reasons for Visit and Discharge Diagnoses Resulting in UC Placement in U.S. EDs

Reason for visit                                      Average # receiving UC per year   % with RFV who received UC
Stomach and abdominal pain, cramps and spasms         256,800                           4.2%
Other urinary dysfunctions                            177,600                           60.1%
Shortness of breath                                   144,500                           5.8%
Chest pain and related symptoms (not referable
  to a specific body system)                          78,000                            1.4%

Discharge diagnosis (Clinical Classification
Software groupings)                                   Average # receiving UC per year   % with diagnosis who received UC
Genitourinary symptoms and ill-defined conditions     236,600                           44.6%
Urinary tract infections                              167,900                           8.8%
Abdominal pain                                        119,400                           3.5%
Congestive heart failure; non-hypertensive            96,700                            14.0%
129
Child Passenger Restraint Misuse in Rural vs. Urban Children: A Multisite Case-Control Study
John W. Hafner1, Stephanie Kok1, Huaping Wang1, Dale Wren1, Kathy Baker1, Mary E. Aitken2, Byron L. Anderson2, Beverly K. Miller2, Kathy W. Monroe3
1University of Illinois College of Medicine at Peoria, Peoria, IL; 2University of Arkansas for Medical Sciences, Little Rock, AR; 3University of Alabama Medical Center and Children's Health Center, Birmingham, AL
Background: Motor vehicle crashes (MVC) are the
leading cause of childhood fatality, making child passenger safety restraint (CPS) usage a public health priority. While MVCs in rural environments are associated
with increased injuries and fatalities, no published literature has specifically examined CPS misuse by geographic location.
Objectives: We hypothesize that proper CPS usage will
be lower in a rural population as compared to a similar
matched urban population.
Methods: A multisite (Alabama, Arkansas, Illinois),
observational, case-control study was performed using
rural (economically and population controlled) CPS
unscheduled check data collected during the Strike
Out Child Passenger Injury Trial and unscheduled
urban CPS check data matched by age, site, and year.
All CPS checks were performed using nationally certified CPS technicians who utilized the best practice
standards of the American Academy of Pediatrics and
collected subject demographics, misuse patterns, and
interventions using identical definitions. Misuse patterns were defined using National Highway Traffic
Safety Administration (NHTSA) standardized criteria
and examined by state, location, age, and type. Pearson chi-square and Fisher's exact tests were conducted using SAS 9.2. Two-tailed p values were calculated, with p < 0.05 considered statistically significant.
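For illustration, the headline rural-vs-urban comparison can be reproduced from a 2×2 table (Python sketch; the cell counts below are approximations back-calculated from the reported percentages, not the study data):

    # 2x2 comparison of any CPS misuse, rural vs. urban (approximate counts).
    import numpy as np
    from scipy.stats import chi2_contingency

    #                 misuse  no misuse
    table = np.array([[277,   26],    # rural children (~91.5% misuse)
                      [244,   56]])   # urban children (~81.2% misuse)

    chi2, p, dof, expected = chi2_contingency(table)
    odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
    print(f"chi2={chi2:.1f}, p={p:.4f}, OR={odds_ratio:.2f}")  # OR ~2.4-2.5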
Results: Four-hundred eighty-four CPS checks (242
rural and 242 urban) involving 603 total children
(<1 year 46 (8%), 1–3 years 215 (36%), 4–8 years 321 (53%), ≥9 years 21 (3%)) from three states (AL 43 (7%),
AR 442 (73%), IL 118 (20%)) were examined; of which
86% had at least one documented CPS misuse (arrived
unrestrained 6%, improper direction 6%, harness incorrect 66%, LATCH incorrect 52%, airbag location 11%,
seatbelt loose 49%, incorrect tether 63%). CPS misuse
did not vary by age category (p = 0.31) but did by state
(p = 0.001). Rural CPS misuse was more common than
urban CPS misuse (91.5% vs. 81.2%; p = 0.0002,
OR = 2.5, 95% CI = 1.5–4.1).
Conclusion: In this multisite study, rural location was
associated with higher CPS misuse. CPS education and
resources that target rural populations specifically
appear to be justified.
130
A National Estimate of Injuries to Elders from Falls in the Home that Presented to U.S. Emergency Departments in 2009
Uwe Stolz, Alejandro Gonzalez, Sief Naser, Katy Orr
University of Arizona, Tucson, AZ
Background: An estimated 40% of community dwelling adults 65 or older fall at least once each year in the
US, and 1 in 40 falls result in a hospitalization. In fact,
25% of all elderly hospital admissions are directly
related to falls. Falls are an important preventable cause
of morbidity and mortality in the elderly and they pose
a tremendous financial burden on our health care system and society in general.
Objectives: To investigate the basic epidemiology of
home falls that occur in those 65 years of age or older
with injuries that present to US emergency departments (EDs). We hypothesized that the severity of injuries would be significantly related to age and the
consumer product associated with each injury.
Methods: We used the National Electronic Injury Surveillance System (NEISS), a national probability sample
of US EDs that collects patient data for all consumer
product-related injuries. Inclusion criteria were all injuries that occurred in a residential setting in 2009 to
adults 65 or older. Exclusion criteria were all injuries
not related to or caused by a home fall. Fall-related
injuries were identified by key word searches. National
estimates and 95% confidence intervals (CIs) for frequencies and proportions were calculated by accounting for the complex survey data.
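Conceptually, NEISS national estimates are sums of case weights; a simplified Python sketch (hypothetical file, column names, and disposition labels, and without the strata/PSU information needed for exact CIs) is:

    # Sketch: national estimates from a hypothetical NEISS extract of home-fall
    # injuries in adults 65+. Exact CIs require the full complex survey design.
    import pandas as pd

    cases = pd.read_csv("neiss_2009_home_falls_65plus.csv")  # hypothetical

    national = cases["weight"].sum()  # ~938,577 in the abstract
    severe = cases["disposition"].isin(["ADMITTED", "OBSERVED"])  # hypothetical codes
    pct_severe = 100 * cases.loc[severe, "weight"].sum() / national
    print(f"Estimated injuries: {national:,.0f}; severe: {pct_severe:.0f}%")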
Results: There were 22,134 observations corresponding
to a national estimate of 938,577 (95% CI 769,981–
1,107,173) residential fall-related injuries in adults 65 or
older that presented to US EDs in 2009. Females
accounted for 65% (95% CI 63–66) of injuries. A total of
30% (95% CI 27–33) were severe injuries requiring hospitalization or further observation. Frequency of severe
injuries was 36.3% in those 85 or older, 31.2% for those
75–84, and 22.3% for those 65–74 (p < 0.001). A total of
92% (95% CI 89–94) of falls were mechanical falls while
8% (95% CI 6–11) had a physiological cause (syncope,
alcohol, etc.). The table shows the top eight home-related consumer products associated with home falls.
After excluding floors and walls, stairs were the most
common injury-related home product (10.6%, see table).
Toilets had the highest proportion of severe injuries
(40%).
Conclusion: Nearly 1 million Americans 65 and older
sustained a home fall-related injury in 2009 requiring
treatment at a US ED. Further efforts to study and prevent home falls in the elderly are needed.
Table - Abstract 130: Top Eight Products Related to Home Falls (Excluding Floors & Walls)

Consumer Products     Actual Observations (n)   National Estimate (N)   Weighted Percentage (95% CI)
Total                 22,134                    938,577                 100.0
Stairs                2,983                     120,444                 10.6 (9.3–12.0)
Bed                   2,270                     93,928                  9.0 (8.2–9.8)
Chair                 1,072                     44,677                  4.5 (4.1–4.9)
Bathtubs/Showers      1,045                     44,503                  4.0 (3.5–4.6)
Tables/Night Stands   527                       22,845                  2.1 (1.9–2.4)
Toilet                558                       22,531                  2.1 (1.9–2.4)
Rugs                  539                       21,467                  2.0 (1.6–2.4)
Ladders               435                       17,604                  1.6 (1.3–1.8)

131
Prevalence of Bicycle Helmet Use By Users
of Public Bicycle Sharing Programs
Christopher M. Fischer1, Czarina E. Sanchez1,
Mark Pittman2, David Milzman2,
Kathryn A. Volz1, Han Huang2, Shiva Gautam1,
Leon D. Sanchez1
1Beth Israel Deaconess Medical Center/Harvard Medical School, Boston, MA; 2Georgetown University/Washington Hospital Center/MedStar, Washington, DC
Background: Public bicycle sharing (bikeshare) programs are becoming increasingly common in the US
and around the world. These programs make bicycles
easily accessible for hourly rental to the public. There
are currently 15 active bikeshare programs in cities in
the US, and more than 30 programs are being developed in cities including New York and Chicago. Despite
the importance of helmet use, bikeshare programs do
not provide the opportunity to purchase or rent helmets. While the programs encourage helmet use, no
helmets are provided at the rental kiosks.
Objectives: We sought to describe the prevalence of
helmet use among adult users of bikeshare programs
and users of personal bicycles in two cities with
recently introduced bicycle sharing programs (Boston,
MA and Washington, DC).
Methods: We performed a prospective observational
study of bicyclists in Boston, MA and Washington,
DC. Trained observers collected data during various
times of the day and days of the week. Observers
recorded the sex of the bicycle operator, type of bicycle, and helmet use. All bicycles that passed a single
stationary location in any direction for a period of
between 30 and 90 minutes were recorded. Data are
presented as frequencies of helmet use by sex, type
of bicycle (bikeshare or personal), time of the week
(weekday or weekend), and city. Logistic regression
was used to estimate the odds ratio for helmet use
controlling for type of bicycle, sex, day of week, and
city.
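A minimal Python sketch of such a regression (hypothetical column names, with the outcome coded as riding unhelmeted to match the odds ratios reported below) might be:

    # Sketch: odds of riding unhelmeted by bicycle type, sex, day, and city.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    riders = pd.read_csv("helmet_observations.csv")  # hypothetical extract
    fit = smf.logit(
        "unhelmeted ~ bikeshare + female + weekend + C(city)", data=riders
    ).fit()

    ors = np.exp(fit.params).rename("OR")
    ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
    print(pd.concat([ors, ci], axis=1))  # e.g., bikeshare OR ~4.3 in the abstract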
Results: There were 43 observation periods in two cities at 36 locations. 3,073 bicyclists were observed.
There were 562 (18.2%) bicyclists riding bikeshare bicycles. Overall helmet use was 45.5%, although helmet
use varied significantly with sex, day of use, and type of
bicycle (see figure). Bikeshare users were helmeted at a
lower rate compared to users of personal bicycles
(19.2% vs 51.4%). Logistic regression, controlling for
type of bicycle, sex, day of week, and city demonstrate
that bikeshare users had higher odds of riding unhelmeted (OR 4.34, 95% CI 3.47–5.50). Women had lower
odds of riding unhelmeted (OR 0.62, 0.52–0.73), while
weekend riders were more likely to ride unhelmeted
(OR 1.32, 1.12–1.55).
Conclusion: Use of bicycle helmets by users of public
bikeshare programs is low. As these programs become
more popular and prevalent, efforts to increase helmet use among riders should be expanded.
132
Keeping Infants Safe and Secure for
Schools (KISS): A School-Based Abusive
Head Trauma Prevention Program
Faisal Mawri1, Elaine Pomeranz1, Brian Nolan2, Rachel Stanley1
1University of Michigan Health System, Ann Arbor, MI; 2Hurley Medical Center, Flint, MI
Background: Abusive head trauma (AHT) represents
one of the most severe forms of traumatic brain injury
(TBI) among abused infants with 30% mortality. Young
adult males account for 75% of the perpetrators. Most
AHT prevention programs are hospital-based and reach
a predominantly female audience. There are no published reports of school-based AHT prevention programs to date.
Objectives: 1. To determine whether a high school-based AHT educational program will improve students'
knowledge of AHT and parenting skills. 2. To evaluate
the feasibility and acceptability of a school-based AHT
prevention program.
Methods: This program was based on an inexpensive
commercially available program developed by the
National Center on Shaken Baby Syndrome. The program was modified to include a 60-minute interactive
presentation that teaches teenagers about AHT, parenting skills, and caring for inconsolable crying infants.
The program was administered in three high schools in
Flint, Michigan during spring 2011. Students' knowledge was evaluated with a 17-item written test administered pre-intervention, post-intervention, and two
months after program completion. Program feasibility
and acceptability were evaluated through interviews
and surveys with Flint area school social workers,
parent educators, teachers, and administrators.
Results: In all, 342 high school students (40% male)
participated. Of these, 317 (92.7%) completed the pre-test and post-test, with 171 (50%) completing the two-month follow-up test. The mean pre-intervention, post-intervention, and two-month follow-up scores were 53%, 87%, and 90%, respectively. From pre-test to post-test, mean score improved 34%, p < 0.001. This
improvement was even more profound in young males,
whose mean post-test score improved by 38%,
p < 0.001. Of the 69 participating social workers, parent
educators, teachers, and administrators, 97% ranked
the program as feasible and acceptable.
Conclusion: Students participating in our program
showed an improvement in knowledge of AHT and parenting skills which was retained after two months.
Teachers, social workers, parent educators, and school
administrators supported the program. This local pilot
program has the potential to be implemented on a
larger scale in Michigan with the ultimate goal of
reducing AHT amongst infants.
133
Will Patients Exaggerate Their Symptoms
To Increase The Likelihood Of A Cash
Settlement?
Thomas Gilmore1, J. Matthew Fields1, Ian
Storch2, Wayne Bond Lau1, Gerald O’Malley1
1Thomas Jefferson University Hospital, Philadelphia, PA; 2Thomas Jefferson Medical College, Philadelphia, PA
Background: Fear of litigation has been shown to
affect physician practice patterns, and subsequently
influence patient care. The likelihood of medical malpractice litigation has previously been linked with
patient and provider characteristics. One common concern is that a patient may exaggerate symptoms in
order to obtain monetary payouts; however, this has
never been studied.
Objectives: We hypothesize that patients are willing to
exaggerate injuries for cash settlements and that there
are predictive patient characteristics including age, sex,
income, education level, and previous litigation.
Methods: This prospective cross-sectional study spanned June 1 to December 1, 2011 in a Philadelphia
urban tertiary care center. Any patient medically stable
enough to fill out a survey during study investigator
availability was included. Two closed-ended paper surveys were administered over the research period. Standard descriptive statistics were utilized to report
incidence of: patients who desired to file a lawsuit,
patients previously having filed lawsuits, and patients
willing to exaggerate the truth in a lawsuit for a cash
settlement. Chi-square analysis was performed to determine the relationship between patient characteristics and
willingness to exaggerate injuries for a cash settlement.
Results: Of 126 surveys, 11 were excluded due to
incomplete data, leaving 115 for analysis. The mean age
was 39 with a standard deviation of 16, and 40% were
male. The incidence of patients who had the desire to
sue at the time of treatment was 9%. The incidence of
patients who had filed a lawsuit in the past was 35%.
Of those patients, 26% had filed multiple lawsuits. Fifteen percent [95% CI 9–23%] of all patients were willing
to exaggerate injuries for cash settlement. Sex and
income were found to be statistically significant predictors of willingness to exaggerate symptoms: 22% of
females vs. 4% of males were willing to exaggerate
(p = 0.01), and 20% of people with income less than
$100,000/yr vs. 0% of those with income over $100,000/
yr were willing to exaggerate (p = 0.03).
Conclusion: Patients at a Philadelphia urban tertiary
center admit to willingness to exaggerate symptoms for
a cash settlement. Willingness to exaggerate symptoms
is associated with female sex and lower income.
134
Expert Consensus Meeting Recommendations on Community Consultation for
Emergency Research Conducted with an
Exception from Informed Consent
Lynne D. Richardson1, Ilene Wilets1, Meg
Smirnoff1, Rosamond Rhodes1, Cindy Clesca1,
Patria Gerardo1, Katherine Lamond2, Robert
Lowe3, Jill Baren2
1Mount Sinai School of Medicine, New York, NY; 2University of Pennsylvania, Philadelphia, PA; 3Oregon Health & Science University, Portland, OR
Background: Federal regulations allow an exception
from informed consent (EFIC) for emergency research
on life-threatening conditions but require additional
protections, such as community consultation, prior to
granting such an exception.
Objectives: To develop consensus on effective methods
of conducting and evaluating community consultation
for EFIC research.
Methods: An expert meeting was convened to develop
recommendations about best practices for community
consultation. An invited panel of experienced EFIC
researchers including representation from federally
funded emergency care research networks, IRB members, and community representatives met to review the
experiences of emergency care networks in conducting
EFIC trials and the findings of the Community VOICES
study in order to identify areas of consensus.
Results: Twenty experts participated and a total of eleven recommendations were developed. All participants
agreed community consultation efforts should consist of
two-way open-ended communication between the PI or
senior study staff and community members utilizing several effective modalities such as focus groups, meetings
(group or individual) with community leaders/representatives, or in-person interviews conducted by a member
of the research team. Expert consensus did not endorse
some frequently used modalities such as random-digit
dialing telephone surveys, written questionnaires, web-based surveys, and social networks (e.g. Facebook,
Twitter) as meeting the federal requirements and recommended these not be used in isolation to conduct community consultation. Participants endorsed methodology
developed by the Community VOICES study showing that
five domains (feasibility, composition of participants, participant perception, investigator perception, and quality
of communication) are essential in evaluating the effectiveness of community consultation efforts.
Conclusion: This expert meeting promulgated recommendations regarding best practices for conducting and
evaluating community consultation for EFIC research.
135
Is A History Of Psychiatric Disease Or
Substance Abuse Associated With An
Increased Incidence Of Syncope Of
Unknown Etiology?
Zev Wiener1, Nathan I. Shapiro2,
Shamai A. Grossman2
1Harvard Medical School, Boston, MA; 2Harvard Medical School, Beth Israel Deaconess Medical Center, Boston, MA
Background: Current data suggest that as many as
50% of patients presenting to the ED with syncope
leave the hospital without a defined etiology. Prior studies have suggested a prevalence of psychiatric disease
as high as 26% in patients with syncope of unknown
etiology.
Objectives: To determine whether psychiatric disease
and substance abuse are associated with an increased
incidence of syncope of unknown etiology.
Methods: Prospective, observational, cohort study of
consecutive ED patients ≥18 presenting with syncope was conducted between 6/03 and 7/06. Patients were
queried in the ED and charts reviewed about a history
of psychiatric disease, use of psychiatric medication,
substance abuse, and duration. Data were analyzed
using SAS with chi-square and Fisher’s exact tests.
Results: We enrolled 519 patients who presented to the
ED after syncope, 159 of whom did not have an identifiable etiology for their syncopal event. 36.5% of those
without an identifiable etiology were male. 166 (32%)
patients had a history of or current psychiatric disease
(42% male), and 55 patients (11%) had a history of or
current substance abuse (60% male). Among males with
psychiatric disease, 39% had an unknown etiology of
their syncopal event, compared to 22% of males without psychiatric disease (p = 0.009). Similarly, among all
males with a history of substance abuse, 45% had an
unknown etiology, as compared to 24% of males without a history of substance abuse (p = 0.01). A similar
trend was not identified in elderly females with psychiatric disease (p = 0.96) or substance abuse (p = 0.19).
However, syncope of unknown etiology was more common among both men and women under age 65 with a
history of substance abuse (47%) compared to those
without a history of substance abuse (27%; p = 0.01).
Conclusion: Our results suggest that psychiatric disease
and substance abuse are associated with increased
incidence of syncope of unknown etiology. Patients evaluated in the ED or even hospitalized with syncope of
unknown etiology may benefit from psychiatric screening
and possibly detoxification referral. This is particularly
true in men. (Originally submitted as a ‘‘late-breaker.’’)
136
Scope of Practice and Autonomy of
Physician Assistants in Rural vs. Urban
Emergency Departments
Brandon T. Sawyer, Adit A. Ginde
Department of Emergency Medicine, University
of Colorado School of Medicine, Aurora, CO
Background: Physician assistants (PAs) are being utilized in greater capacity in both rural and urban EDs to
address EM workforce shortages, improve the efficiency of emergency physicians, and reduce the cost of
emergency care.
Objectives: We sought to compare the scope of practice and autonomy of EM PAs practicing in rural vs.
urban EDs, and hypothesized that rural PAs would
have a broader scope of practice and higher reported
autonomy, while receiving less direct supervision.
Methods: Using the American Academy of Physician
Assistants Masterfile, we surveyed a random sample of
400 U.S. PAs who self-identified EM as their specialty.
We classified location as rural or urban by zip code-based Rural-Urban Commuting Area codes, and oversampled 200 rural PAs to ensure adequate rural
representation. We asked all PAs about conditions
managed, procedures performed, and physician supervision, comparing groups using chi-square test.
Results: To date, 223 (56%) of 400 PAs in 44 states
responded, of whom 112 rural and 85 urban PAs currently practice in EDs. In the past year, rural PAs
more frequently managed acute coronary syndrome
(94% vs 84%); cardiac arrest (64% vs 43%); stroke
(85% vs 71%); anaphylaxis (80% vs 67%); multi-system
trauma (82% vs 66%); active labor (40% vs 25%); and
critically ill child (83% vs 61%; all p < 0.05). While
rural PAs were less likely to have performed bedside
ultrasound (49% vs 61%) and lumbar puncture (44%
vs 60%), they were more likely to have performed
bag-valve-mask ventilation (74% vs 52%); intubation
(64% vs 42%); needle thoracostomy (20% vs 6%); tube
thoracostomy (42% vs 27%); and thoracotomy (5% vs
0%) in the past year (all p < 0.05). Rural PAs more
often reported never having a physician present in
the ED compared to urban PAs (33% vs 2%), and less
often reported always having a physician present
(57% vs 93%; p < 0.001). When no physician was
present in the ED, rural PAs were also less likely to
have a physician onsite in less than 10 minutes (33%
vs 71%; p = 0.047). Additionally, rural PAs were less
likely to have at least one supervising physician be
EM-board certified (67% vs 99%; p < 0.001).
Conclusion: Rural PAs reported a broader scope of
practice, more autonomy, and less access to physician
supervision than urban PAs. Adequate training and
supervision of all EM PAs, particularly those in rural
EDs, should be considered a high priority to optimize
the quality of emergency care.
137
Video Education Intervention in the
Emergency Department
Nancy Stevens, Amy L. Drendel,
Steven J. Weisman
Medical College of Wisconsin, Milwaukee, WI
Background: After discharge from an emergency
department (ED), pain management often challenges
parents, who significantly under-treat their children’s
pain. Rapid patient turnover and anxiety make education about home pain treatment difficult in the ED.
Video education standardizes information and helps circumvent time constraints and literacy barriers.
Objectives: To evaluate the effectiveness of a 6-minute
instructional video for parents that targets common
misconceptions about home pain management.
Methods: We conducted a randomized, double-blinded
clinical trial of parents of children ages 1–18 years who
presented with a painful condition, were evaluated, and
discharged home in June and July 2011. Parents were
randomized to a pain management video or an injury
prevention control video. Primary outcome was the
proportion of parents who gave pain medication at
home. These data were recorded in a home pain diary
and analyzed using a chi-square test. Parents’ knowledge about pain treatment was tested before, immediately following, and 2 days after intervention.
McNemar's test was used to determine whether changes in knowledge were associated with the intervention group.
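For paired pre/post responses of this kind, McNemar's test operates on the discordant pairs; a small Python sketch with hypothetical counts:

    # McNemar's test on paired pre/post answers (counts below are hypothetical).
    from statsmodels.stats.contingency_tables import mcnemar

    #                post correct  post incorrect
    table = [[30,          2],     # pre correct
             [25,         10]]     # pre incorrect

    result = mcnemar(table, exact=True)  # exact binomial test on discordant cells
    print(f"statistic={result.statistic}, p={result.pvalue:.4f}")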
Results: 100 parents were enrolled: 59 watched the pain
education video, and 41 the control video. 72.9% completed follow-up, providing information about home pain medication use. Significantly more parents provided at
least one dose of pain medication to their children after
watching the educational video: 96% vs. 80% (difference
16%, 95% CI 7.8%, 31.3%). The odds the parent had correct knowledge about pain treatment significantly
improved immediately following the educational video
for knowledge about pain scores (p = 0.04), the effect of
pain on function (p < 0.01), and pain medication misconceptions (p < 0.01). These significant differences in
knowledge remained 3 days after the video intervention.
Conclusion: The educational video about home pain
treatment viewed by parents significantly increased the
proportion of children receiving pain medication at
home and significantly improved knowledge about
at-home pain management. Videos are an efficient tool
to provide medical advice to parents that improves outcomes for children.
138
Secondary Shockable Rhythms: Prognosis
in Out-of-Hospital Cardiac Arrests with
Initial Asystole or Pulseless Electrical
Activity and Subsequent Shockable
Rhythms
Andrew J. Thomas, Mohamud R. Daya,
Craig D. Newgard, Dana M. Zive, Rongwei Fu
Oregon Health & Science University, Portland,
OR
Background: Non-shockable cardiac arrest rhythms
(pulseless electrical activity and asystole) represent an
increasing proportion of reported cases of out-of-hospital cardiac arrest (OHCA). The prognostic significance
of conversion from non-shockable to shockable
rhythms during the course of resuscitation has been
debated in published literature.
Objectives: To evaluate whether OHCA survival with
an initially non-shockable cardiac arrest rhythm is
improved with subsequent conversion to a shockable
rhythm.
Methods: This study is a secondary analysis of the prospectively collected Epistry: Cardiac Arrest, an epidemiologic registry organized by the Resuscitation
Outcomes Consortium (ROC). Epistry collects data from
all out-of-hospital cardiac arrests at ten North American sites followed through hospital discharge. The sample for this analysis includes OHCA from six US and
two Canadian sites from December 1, 2005 through
May 31, 2007. The investigational cohort includes all
EMS-treated adult (18 and older) cardiac arrest patients
who presented with non-shockable cardiac arrest
rhythms and were treated by EMS personnel. We compared survival to hospital discharge between patients
who did versus those who did not develop a subsequent shockable rhythm which we defined as receiving
subsequent defibrillation (presumed conversion to VF/
VT). Missing data were handled using multiple imputation. Multivariable logistic regression was used to
adjust for potentially confounding variables: age, sex,
public location, witnessed status, bystander resuscitation, EMS response interval, and ROC site.
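A simplified Python sketch of the adjusted model (hypothetical column names; the multiple-imputation step described above is omitted here for brevity):

    # Sketch: survival to discharge vs. conversion to a shockable rhythm,
    # adjusted for the confounders named in the Methods.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    ohca = pd.read_csv("epistry_nonshockable.csv")  # hypothetical extract
    fit = smf.logit(
        "survived ~ converted + age + male + public_location + witnessed"
        " + bystander_cpr + ems_response_min + C(roc_site)",
        data=ohca,
    ).fit()
    print(np.exp(fit.params["converted"]))  # abstract reports adjusted OR 1.03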
Results: A total of 6,593 adult cardiac arrest cases presented in non-shockable rhythms, were treated by
EMS, and had known survival status. Survival to discharge in patients who converted to a shockable
rhythm during out-of-hospital resuscitation was 2.7%
compared to 2.7% for those who did not convert to a
shockable rhythm, a nonsignificant difference (chi-square, p = 0.90). After accounting for known confounders, conversion to a shockable rhythm was not associated with improved adjusted odds of survival (OR 1.03, 95% CI: 0.71–1.50).
Conclusion: Out-of-hospital cardiac arrest patients presenting in PEA/asystole had neither better nor worse survival
to hospital discharge with conversion to a shockable
rhythm during EMS resuscitation efforts.
139
Racial Disparities in Stress Test Utilization
in a Chest Pain Unit
Anthony Napoli1, Esther Choo1,
Bethany Desroches2, Jessica Dai1
1Warren Alpert Medical School of Brown University, Providence, RI; 2Downstate College of Medicine, New York, NY
Background: Epidemiologic studies have demonstrated
racial disparities in the workup of emergency department (ED) patients with chest pain and the referral of
admitted patients for subsequent catheterization and
bypass surgery.
Objectives: To determine if similar disparities occur in
the stress test utilization of low-risk chest pain patients
admitted to ED chest pain units (CPU).
Methods: This was a prospective, observational study
of consecutive admitted CPU patients in a large-volume
academic urban ED. Cardiology attendings round on
all patients and stress test utilization is driven by their
recommendation. Eligibility criteria included: age >18, AHA low/intermediate risk, nondynamic ECGs, and normal initial troponin I. Patients >75 years old and those with a history of CAD or a co-existing active medical problem were excluded. Based on prior studies and our estimated CPU census and demographic distribution, we estimated a sample size of 2,242 patients in order to detect a difference in stress utilization of 7% (two-tailed, α = 0.05, power = 0.8). We calculated a TIMI risk prediction score and a Diamond & Forrester (D&F) CAD likelihood score for each patient. T-tests were used for univariate comparisons of demographics, cardiac comorbidities, and risk scores. Logistic regression was used to estimate odds ratios (ORs) for receiving testing based on race, controlling for insurance and either TIMI or D&F score.
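For illustration, a sample-size calculation of this kind can be sketched in Python (the baseline utilization below is an assumption for the example; the study's larger target of 2,242 reflects its own assumptions, including unequal group sizes):

    # Sketch: n per group to detect a 7% absolute difference in stress testing.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    effect = proportion_effectsize(0.52, 0.45)  # assumed 52% vs. 45% utilization
    n_per_group = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.8,
        ratio=1.0, alternative="two-sided",
    )
    # With unequal groups (e.g., ~12% African American), set ratio accordingly;
    # the required total enrollment grows, consistent with the study's target.
    print(f"~{n_per_group:.0f} per group with equal groups")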
Results: Over 18 months, 2,451 patients were enrolled.
Mean age was 53 ± 12, and 54% (95% CI 52–56) were
female. Sixty percent (95% CI 58–62) were Caucasian,
12% (95% CI 10–13) African American, and 24% (95% CI
23–26) Hispanic. Mean TIMI and D&F scores were 0.5
(95% CI 0.5–0.6) and 38% (95% CI 37–39). The overall
stress testing rate was 52% (95% CI 50–54). After controlling for insurance status and TIMI or D&F scores,
African American patients had significantly decreased
odds of stress testing (OR [TIMI model] 0.67, 95% CI 0.52–0.88; OR [D&F model] 0.68, 95% CI 0.51–0.89). Hispanics had significantly decreased odds of stress testing in the model controlling for D&F (OR 0.78, 95% CI 0.63–0.98).
Conclusion: This study confirms that disparities in the
workup of African American patients in the CPU are
similar to those found in the general ED and the outpatient setting. Further investigation into the specific provider or patient level factors contributing to this bias is
necessary.
140
The Usefulness of the 3-Minute Walk Test
in Predicting Adverse Outcomes in ED
Patients with Heart Failure and COPD
Ian G. Stiell1, Catherine M. Clement2, Lisa A.
Calder1, Brian H. Rowe3, Jeffrey J. Perry1,
Robert J. Brison4, Bjug Borgundvaag5, Shawn
D. Aaron1, Eddy Lang6, Alan J. Forster1,
George A. Wells7
1University of Ottawa, Ottawa, ON, Canada; 2Ottawa Hospital Research Institute, Ottawa, ON, Canada; 3University of Alberta, Edmonton, AB, Canada; 4Queen's University, Kingston, ON, Canada; 5University of Toronto, Toronto, ON, Canada; 6University of Calgary, Calgary, AB, Canada; 7University of Ottawa Heart Institute, Ottawa, ON, Canada
Background: ED physicians frequently treat and make
disposition decisions for patients with acute exacerbations of heart failure (HF) and COPD.
Objectives: We sought to evaluate the usefulness of a
unique, structured 3-minute walk test in predicting the
risk for serious adverse events (SAE) amongst HF and
COPD patients.
Methods: We conducted a prospective cohort study in
six large, academic EDs and enrolled 1,504 adult
patients who presented with exacerbations of HF or
COPD. After treatment, each patient underwent a
3-minute walk test, supervised by registered nurses or
respiratory therapists, who monitored heart rate and
oxygen saturation and evaluated the Borg score.
Patients walked at their own pace around the ED on
room air or home oxygen levels, without physical support from staff. We evaluated patients for multiple clinical and routine laboratory findings. Both admitted and
discharged patients were followed for SAEs, defined as
death, intubation, admission to a monitored unit, myocardial infarction, or relapse back to the ED requiring
admission within 14 days. We evaluated both univariate
and multivariate associations of the walk test components with SAE.
Results: The characteristics, respectively, of the 559
HF and 945 COPD patients were: mean age 76.0, 72.6;
male 56.4%, 51.6%; too ill to start walk test 13.2%,
15.3%; unable to complete walk test 15.4%, 21.5%.
Outcomes for HF and COPD were SAE 11.6%, 7.8%;
death 2.3%, 1.0%. We found univariate associations
with SAE for these walk test components: too ill to
walk (both HF, COPD P < 0.0001); highest heart rate
≥110 (HF P = 0.02, COPD P = 0.10); lowest SaO2 <88% (HF P = 0.42, COPD P = 0.63); Borg score ≥5 (HF P = 0.47, COPD P = 0.52); walk test duration ≤1 minute (HF P = 0.07, COPD P = 0.22). After adjustment for multiple clinical covariates with logistic regression analyses, we found ''walk test heart rate ≥110'' had an odds ratio of 1.9 for HF patients and ''too ill to start the walk test'' had an odds ratio of 3.5 for COPD
patients.
Conclusion: We found the 3-minute walk test to be
easy to administer in the ED and that maximum heart
rate and inability to start the test were highly associated with adverse events in patients with exacerbations
of HF and COPD, respectively. We suggest that the
3-minute walk test be routinely incorporated into the
assessment of HF and COPD patients in order to
estimate risk of poor outcomes.
141
The Effect of Prone Maximal Restraint
(PMR, aka ‘‘Hog-Tie’’) Position on Cardiac
Output and Other Hemodynamic
Measurements
Davut J. Savaser, Colleen Campbell, Theodore
C. Chan, Virag Shah, Christian Sloane, Allan
V. Hansen, Edward M. Castillo, Gary M. Vilke
UCSD, San Diego, CA
Background: The prone maximal restraint (PMR), hog-tie, or hobble position has been used by law enforcement
and emergency care personnel to restrain the acutely
combative or agitated patient. The position places the
subject prone with wrists handcuffed behind the back
and secured to the ankles. Some have argued that PMR,
as well as the weight force required to place an individual
into the position, can negatively affect cardiac function.
Objectives: To measure the effect of PMR with and
without weight force on measures of cardiac function
including vital signs, oxygenation, stroke volume (SV),
cardiac output (CO), and left ventricular outflow tract
diameter (LVOTD).
Methods: We conducted a randomized prospective
cross-over study of healthy volunteers placed in five different body positions: supine, prone, PMR, PMR with
50 lbs added to the subject’s back (PMR50), and PMR
with 100 lbs added to the subject’s back (PMR100) for
3 minutes. Data were collected on subject vital signs
and echocardiographic measurement of SV, CO, and
LVOTD, measured by credentialed ED faculty sonographers. Anthropomorphic measurements of height,
weight, arm span, chest circumference, and BMI were
also collected. Data were analyzed using repeated measures ANOVA to evaluate changes with each variable
with respective positioning.
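A minimal Python sketch of the repeated-measures comparison (long-format data assumed, with one cardiac output value per subject per position; file and column names hypothetical):

    # Sketch: within-subject effect of position on cardiac output.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format extract with columns: subject_id,
    # position (supine/prone/PMR/PMR50/PMR100), cardiac_output
    co = pd.read_csv("pmr_cardiac_output_long.csv")
    res = AnovaRM(co, depvar="cardiac_output", subject="subject_id",
                  within=["position"]).fit()
    print(res)  # omnibus F-test; pairwise position contrasts follow separately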
Results: 25 male subjects were enrolled in the study,
ages ranging from 22 to 43 years. Cardiac output did
change from the supine to prone position, decreasing on
average by 0.61 L/min (p = 0.013, 95% CI [0.142, 1.086]).
However, there was no significant change in CO when
placing the patient in the PMR position (−0.11 L/min, p = 0.489, 95% CI [−0.43, 0.21]), PMR50 position (+0.19 L/min, p = 0.148, 95% CI [−0.07, 0.46]), or the PMR100 position (+0.14 L/min, p = 0.956, 95% CI [−0.29, 0.27]) as compared with the prone position. Also, mean HR (70.7, 68.4,
72.1, 71.8, 75.8 bpm) and mean arterial pressure (88.1,
86.0, 88.9, 84.7, 95.1 mmHg) for each respective position
did not demonstrate any hemodynamic compromise.
Conclusion: While there was a difference from supine
position, there was no difference in CO between prone
and any PMR position with and without weight force.
Moreover, there was no evidence of hemodynamic
compromise in any of these positions.
142
The Associations Between Numeracy and
Health Literacy and 30-Day Recidivism in
Adult Emergency Department Patients
Suspected of Having Acute Heart Failure
Candace McNaughton, Sean Collins, Cathy
Jenkins, Patrick Arbogast, Karen Miller, Sunil
Kripalani, Russell Rothman, Allen Naftilan,
Robert Dittus, Alan Storrow
Vanderbilt University, Nashville, TN
Background: Heart failure is a common, costly condition. While prior work suggests a relationship between
literacy and clinical outcomes, no large prospective
studies have evaluated the relationships between literacy or numeracy and 30-day recidivism.
Objectives: Evaluate the association between numeracy
and health literacy and 30-day recidivism in ED patients
with signs and symptoms of acute heart failure (AHF).
Methods: Convenience sample of a prospective cohort
of adult ED patients at three hospitals who presented
with signs and symptoms of AHF between 1/2008 and
12/2011. Patients meeting modified Framingham Criteria
were consented for enrollment within three hours of ED
evaluation. Research assistants administered subjective
measures of numeracy and health literacy. Thirty-day
follow-up information was obtained by phone interview
and confirmed with chart review. The primary outcome
was 30-day recidivism to the ED or hospital; secondary
outcomes evaluated components of the primary outcome
individually, and subgroup analysis evaluated the relationship between numeracy and literacy and 30-day
recidivism in patients with and without a clinical diagnosis of AHF. Adjustment was made for age, sex, race,
insurance status, diabetes, renal disease, abnormal
hemoglobin, low ejection fraction, number of days at risk
for recidivism, and hospital site.
Results: Of the 1823 patients enrolled in the cohort,
988 took the subjective numeracy and/or the subjective
health literacy test. Thirty patients were excluded
because they died during the index hospitalization or
were admitted for the entire follow-up period, leaving
958 patients for analysis. Mean age was 61 years, and
48% were female; additional clinical characteristics are
in Table 1. Results of adjusted analysis are found in
Table 2. Lower numeracy was associated with increased
30-day recidivism (OR 1.02, 95% CI 1.00–1.03 per point
change on a 43-point scale); the relationship for literacy
was not significant (OR 1.04, 95% CI 1.00–1.08 per point
change on the 13-point health literacy scale). For
patients with AHF, lower health literacy and numeracy
were associated with increased odds of 30-day
recidivism.
Conclusion: Lower numeracy and health literacy are
associated with higher risk of 30-day ED and hospital
recidivism for ED patients with AHF.
Table 1 - Abstract 142: Clinical Characteristics

Characteristic                          Numeracy/Literacy Cohort (n = 958)   Total Cohort (n = 1823)
Diabetes, No. (%)                       391 (41)                             745 (41)
Chronic kidney disease, No. (%)         205 (21)                             380 (21)
Hgb <13 or ≥17, No. (%)                 571 (60)                             1095 (60)
EF <30%, No. (%)                        193 (20)                             345 (20)
≤High school (9–12), No. (%)            625 (66)                             1200 (68)
Subjective literacy, median (IQR)       13 (9–15)                            n/a
Low subjective literacy, No. (%)        369 (39)                             n/a
Subjective numeracy, median (IQR)       31 (23–38)                           n/a
Low subjective numeracy, No. (%)        339 (37)                             n/a

Table 2 - Abstract 142: Outcomes, Adjusted Logistic Regression (OR (95% CI), P)

Outcome                                      Literacy, Continuous  P     Low Literacy (SLS<12)  P     Numeracy, Continuous  P     Low Numeracy (SNS<28)  P
Any 30-day recidivism, adjusted              1.04 (1.00–1.08)      0.06  1.19 (0.89–1.6)        0.24  1.02 (1.00–1.03)      0.03  1.28 (0.95–1.75)       0.11
Subgroup: patients with clinical
  diagnosis of heart failure                 1.05 (1.00–1.10)      0.05  1.05 (1.00–1.10)       0.05  1.02 (1.00–1.04)      0.05  1.29 (0.90–1.85)       0.17
Adjusted Secondary Outcomes
Any ED visit for AHF                         1.03 (0.96–1.11)      0.38  –                      –     1.02 (0.99–1.05)      0.17  1.44 (0.82–2.53)       0.2
Any ED visit for complaint other than AHF    1.02 (0.98–1.07)      0.31  –                      –     1.01 (0.99–1.02)      0.38  1.12 (0.80–1.56)       0.5
Any unscheduled hospitalization for AHF      0.98 (0.91–1.07)      0.7   –                      –     1.01 (0.98–1.04)      0.67  1.28 (0.72–2.28)       0.4
Any unscheduled hospitalization for
  complaint other than AHF                   1.05 (1.00–1.11)      0.03  –                      –     1.02 (1.00–1.04)      0.02  1.51 (1.04–2.18)       0.03

143
The Influence of Clinical Context on
Patients’ Code Status Preferences
John E. Jesus1, Matthew B. Allen2, Glen E.
Michael3, Michael W. Donnino4, Shamai A.
Grossman4, Caleb P. Hale4, Anthony C. Breu4,
Alexander Bracey4, Jennifer L. O’Connor4,
Jonathan Fisher4
1Christiana Care Health Systems, Newark, DE; 2Brigham and Women's Hospital, Boston, MA; 3University of Virginia University Hospital, Charlottesville, VA; 4Beth Israel Deaconess Medical Center, Boston, MA
Background: Many patients have preferences regarding the use of cardiopulmonary resuscitation (CPR) and
intubation that go undocumented and incompletely
understood, despite the presence of do-not-resuscitate
(DNR)/do-not-intubate (DNI) orders.
Objectives: To assess patients’ awareness and understanding of their code status as it applies to the hypothetical scenarios.
Methods: A prospective survey of patients with documented DNR/DNI code status was conducted from
October 2010 to October 2011. Patients were surveyed
by a research assistant starting with a validated cognitive assessment. The researcher then administered the
survey consisting of four scenarios of varying degrees
of severity and reversibility (angioedema, pneumonia,
severe stroke, and cardiac arrest). Each patient was
asked whether he or she would agree to specific treatments in specific situations, and about whom he or she
would want to make health care decisions (previous
declaration, family, or health care provider). Descriptive
statistics including SD and 95% CI were calculated
using Microsoft Excel and SPSS 17.
Results: 110 patients were identified and screened; 3 patients failed the cognitive screen, 5 patients had code statuses inconsistent with DNR/DNI, and 2 patients were unable to complete the survey. Patients had a mean age of 78 (SD 14). 2% (CI 0–5) of patients were not aware of their documented code status. While 98% of patients knew they had a documented code status of DNR/DNI, 58% (CI 48–68) would want to be intubated in the face of a life-threatening but potentially reversible situation, and 20% (CI 12–28) of patients would want a trial of resuscitation including intubation and CPR in the setting of cardiac arrest. Across all scenarios, 32% (CI 28–37) would want to be kept alive long enough for family members to say good-bye. The willingness to be intubated decreased with the potential reversibility of the disease process (P < 0.001). See table. No demographic factors including education or religion predicted such discrepancies.
Table - Abstract 143:

                                         Angioedema   Pneumonia   Severe Stroke   Cardiac Arrest
Pt. desires intubation                   58%          28%         5%              3%
Family decides                           52%          37%         30%             28%
Doctor decides                           31%          20%         13%             12%
One week trial of resuscitation          39%          47%         24%             20%
Long-term family care                    5%           7%          2%              1%
Opportunity for family to say goodbye    35%          33%         33%             28%
Conclusion: Important discrepancies exist between
patients' code status and their actual end-of-life preferences. These discrepancies may lead to patients being denied life-saving or life-prolonging care they would want. A better way of elucidating patient end-of-life preferences is
needed.
144
End of Life Decision-Making for Patients
Admitted Through the ED: Patient
Demographics, Hospital Attributes, and
Changes over Time
Derek K. Richardson, Dana Zive,
Craig D. Newgard
Oregon Health & Science University, Portland,
OR
Background: Early studies suggested that racial, economic, and hospital-based factors influence the do not
resuscitate (DNR) status of admitted patients, though it
remains unknown how these factors apply to patients
admitted through the ED and whether use is increasing
over time.
Objectives: To examine patient and hospital attributes
associated with DNR orders placed within the first
24 hours among patients admitted through the ED and
changes over time.
Methods: This was a population-based, retrospective
cross-sectional study of patients over 65 years old
admitted to acute care hospitals in California between
2002 and 2010; the subset of patients admitted through
the ED formed the primary sample. The primary variable (outcome) was placement of a DNR order within
24 hours of admission. We tested associations between
early DNR orders, hospital characteristics, patient
demographics, and year. For data analysis, we used
descriptive statistics and multivariable logistic regression
models with generalized estimating equations to
account for clustering within hospitals.
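A simplified Python sketch of a GEE logistic model with hospital-level clustering (hypothetical column names for the discharge data):

    # Sketch: early DNR order vs. patient and hospital attributes, with an
    # exchangeable correlation structure to account for clustering by hospital.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    adm = pd.read_csv("ca_ed_admissions_65plus.csv")  # hypothetical extract
    fit = smf.gee(
        "early_dnr ~ C(race) + hispanic + male + medical_insurance + C(year)",
        groups="hospital_id",
        data=adm,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    ).fit()
    print(np.exp(fit.params))  # adjusted odds ratios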
Results: 9,507,921 patients older than 65 were admitted to
California hospitals over the 9-year period, of whom
10.8% had an early DNR order; 83% of early DNR orders
were placed on patients admitted through the ED. Among
patients over 65 admitted through the ED (n = 6,396,910),
early DNR orders were less frequent at teaching hospitals
(9.5% vs. 13.7%), for-profit hospitals (8.6% vs. 14.6% nonprofit), and non-rural hospitals (12.0% vs. 26.2%), and were more frequent at large hospitals (15.0% vs. 11.1% smallest quartile hospitals) (all
p < 0.0001). In regression modeling adjusted for clustering, the prior trends were reproduced. Additionally,
decreased DNR frequency was associated with race (black
OR 0.59, 95% CI 0.51–0.67; Asian OR 0.70, 95% CI 0.59–
0.82), ethnicity (Hispanic OR 0.61, 95% CI 0.55–0.68), sex
(male OR 0.90, 95% CI 0.88–0.92), and MediCal insurance
(OR 0.70, 95% CI 0.57–0.85). Over the last ten years, rates of
early DNR use have steadily increased (figure, p < .0005).
Conclusion: While statewide rates of early DNR use
have increased over time among patients admitted
through the ED, there is variable penetrance of this
practice between hospital types and use is less common
among some patient groups. These patterns may
suggest barriers to end-of-life discussions or differing
cultural or institutional beliefs.
145
Disparities in the Successful Enrollment of
Study Subjects Involving Informed
Consent in the Emergency Department
Lea H. Becker, Marcus L. Martin,
Chris A. Ghaemmaghami, Pamela A. Ross,
Robert E. O’Connor
University of Virginia Health System,
Charlottesville, VA
Background: It has been established that certain populations are often under-represented in clinical trials for
many reasons.
Objectives: The objective of this study was to investigate differences in consent rates between patients of
different demographic groups who were invited to participate in minimal-risk clinical trials conducted in an
academic emergency department.
Methods: This descriptive study analyzed prospectively
collected data of all adult patients who were identified
as qualified participants in ongoing minimal risk clinical
trials. These trials were selected for this review because
they presented minimal factors known to be associated
with patient refusal (e.g., additional length of stay, travel, adverse drug reactions). Consenting patients underwent one to three blood draws and telephone follow-up, had no modification of treatment, and received no payment.
Prior to being invited to participate, patients self-identified their race according to the 2010 US Census definitions. Age, sex, and race were recorded without any
patient identifiers. The primary endpoint was whether
approached patients consented or declined participation. Statistical analysis was performed using Fisher’s
exact test.
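For illustration, the Fisher's exact comparison of consent by race can be reproduced approximately from the reported percentages; the cell counts below are back-calculated and therefore approximate:

```python
# 2x2 Fisher's exact test of consent (white vs. non-white). Counts are
# reconstructed from the reported percentages, so they are approximate.
from scipy.stats import fisher_exact

#            consented  declined
table = [[405, 83],   # white: ~83% of 488
         [93, 46]]    # non-white: ~67% of 139
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.1e}")
```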
Results: Of the 627 patients approached for enrollment,
291 (42%) were women and 336 (58%) were men, and
488 (78%) were self-described as white. A total of 496
(79%) consented and 131 (21%) declined. The consent
rate was 80% for women and 78% for men
(p = 0.49 ns). The consent rate was 83% for whites and
67% for non-whites (p < 0.0001). The average ages of
consenting vs. declining patients were 57.8 and 60.7,
respectively (p = 0.92 ns). Within each race classification, there were no differences in rates of consent by
age or sex.
Conclusion: Significant demographic differences continue to exist in the rate of consent in emergency
department based studies involving minimal risk and
time commitment for study subjects. These enrollment
disparities may severely limit study external validity.
Although this review may show some improvement
compared to previous studies, enrollment and consent
strategies aimed at reducing demographic disparities
are needed for the development and application of
evidence-based emergency care.
146
Informed Consent for Computerized Tomography via Video Educational Module in the Emergency Department
Lisa H. Merck, Michael Holdsworth, Joshua
Keithley, Debra Houry, Laura A. Ward,
Kimberly E. Applegate, Douglas W. Lowery-North, Kate Heilpern
Emory University, Atlanta, GA
Background: Increasing rates of patient exposure to
computerized tomography (CT) raise questions about
appropriateness of utilization, as well as patient awareness of radiation exposure. Despite rapid increases in
CT utilization and published risks, there is no national
standard to employ informed consent prior to radiation
exposure from diagnostic CT. Use of written informed
consent for CT (ICCT) in our ED has increased patient
understanding of the risks, benefits, and alternatives to
CT imaging. Our team has developed an adjunct video
educational module (VEM) to further educate ED
patients about the CT procedure.
Objectives: To assess patient knowledge and preferences regarding diagnostic radiation before and after
viewing VEM.
Methods: The VEM was based on ICCT currently utilized at our tertiary care ED (census 37,000 patients/
year). ICCT is written at an 8th grade reading level.
This fall, VEM/ICCT materials were presented to a convenience sample of patients in the ED waiting room
9 AM–7 PM, Monday-Sunday. Patients who were
<18 years of age, critically ill, or with language barrier
were excluded. To quantify the educational value of the
VEM, a six-question pretest was administered to assess
baseline understanding of CT imaging. The patients
then watched the VEM via iPad (Macintosh) and
reviewed the consent form. An eight-question post-test
was then completed by each subject. No PHI were collected. Pre- and post-test results were analyzed using
McNemar’s test for individual questions and a paired
t-test for the summed score (SAS version 9.2).
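The abstract reports SAS 9.2; a rough Python equivalent of the paired analyses is sketched below, with hypothetical stand-in data:

```python
# Sketch of McNemar's test (per question) and a paired t-test (summed
# scores). The arrays are hypothetical stand-ins, not study data.
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

pre_q1 = np.array([1, 0, 1, 0, 1, 1, 0, 0])   # correct (1) / incorrect (0)
post_q1 = np.array([1, 1, 1, 0, 1, 1, 1, 0])

tbl = np.zeros((2, 2))                         # (pre, post) agreement table
for a, b in zip(pre_q1, post_q1):
    tbl[a, b] += 1
print(mcnemar(tbl, exact=True).pvalue)

pre_total = np.array([4, 3, 5, 2, 4, 4, 3, 2])   # summed scores per subject
post_total = np.array([5, 4, 5, 3, 5, 5, 4, 3])
print(ttest_rel(pre_total, post_total).pvalue)
```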
Results: 100 patients consented and completed the survey. The average pre-test score for subjects was poor,
66% correct. Review of VEM/ICCT materials increased
patient understanding of medical radiation as evidenced
by improved post-test score to 79%. Mean improvement between tests was 13% (p < 0.0001). 78% of subjects responded that they found the materials helpful,
and that they would like to receive ICCT.
Conclusion: The addition of a video educational module improved patient knowledge regarding CT imaging
and medical radiation as quantified by pre- and posttesting. Patients in our study sample reported that they
prefer to receive ICCT. By educating patients about the
risks associated with CT imaging, we increase
informed, shared decision making – an essential component of patient-centered care.
147
Does Pain Intensity Reduce Willingness to
Participate in Emergency Medicine
Research?
Alexander T. Limkakeng, Caroline Freiermuth,
Weiying Drake, Giselle Mani, Debra Freeman,
Abhinav Chandra, Paula Tanabe, Ricardo
Pietrobon
Duke University, Durham, NC
Background: Multiple factors affect patients’ decisions
to participate in ED research. Prior studies have shown
that altruism and convenience to the patient may be
important factors. Patients’ pain levels have not been
explored as a reason for refusal.
Objectives: We sought to determine the relationship
between patients’ pain scores and their rate of consent
to ED research. We hypothesized that patients with
higher pain scores would be less likely to consent to
ED research.
Methods: Retrospective observational cohort study of
potential research subjects in an urban academic hospital ED with an average annual census of approximately
70,000 visits. Subjects were adults older than 18 years
with chief complaint of chest pain within the last
12 hours, making them eligible for one of two cardiac
biomarker research studies. The studies required only
blood draws and did not offer compensation. Two
reviewers extracted data from research screening logs.
Patients were grouped according to pain score at triage, pain score at the time of approach, and improvement in pain score (triage score - approach score). The
main outcome was consent to research. Simple proportions for consent rates by pain score tertiles were calculated. Two multivariate logistic regression analyses
were performed with consent as outcome and age,
race, sex, and triage or approach pain score as predictors.
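A minimal sketch of one of these adjusted models, assuming a hypothetical extract of the screening log:

```python
# Logistic regression of consent on age, race, sex, and triage pain score.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("screening_log.csv")
fit = smf.logit("consent ~ age + C(race) + C(sex) + triage_pain", data=df).fit()
print(fit.summary())
```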
Results: Overall, 396 potential subjects were approached for consent. Patients were 58% Caucasian,
49% female, and with an average age of 57 years. Six
patients did not have pain scores recorded at all and 48
did not have scores documented within 2 hours of
approach and were excluded from relevant analyses.
Overall, 80.1% of patients consented. Consent rates by
tertiles at triage, at time of approach, and by pain score
improvement are shown in Tables 1 and 2. After adjusting for age, race, and sex, neither triage (p = 0.75) nor
approach (p = 0.65) pain scores predicted consent.
Conclusion: Research enrollment is feasible even in ED
patients reporting high levels of pain. Patients with
modest improvements in pain levels may be more likely
to consent. Future research should investigate which
factors influence patients’ decisions to participate in ED
research.
Table 1 - Abstract 147: Consent Rates by Pain Scores

Pain Score    Triage Score (n)    Consent Rate by Triage Score    Approach Score (n)    Consent Rate by Approach Score
0                  105                   77.1%                          139                    76.3%
1–3                 86                   84.9%                           61                    85.2%
4–6                104                   79.8%                           76                    80.3%
7–10                95                   80.0%                           69                    78.3%

Table 2 - Abstract 147: Consent Rates by Pain Improvement

Pain Improvement               n      Consent Rate
Pain worsened (−1 to −10)      36        77.8%
0                             231        80.1%
1                              26        84.6%
2                              29        89.7%
3–10                           68        79.4%
148
A Multicenter Study To Predict Continuous Positive Airway Pressure And Intubation For Children Hospitalized With Bronchiolitis
Patricio De Hoyos1, Jonathan M. Mansbach2, Pedro A. Piedra3, Ashley F. Sullivan1, Sunday Clark4, Janice A. Espinola1, Tate F. Forgey1, Carlos A. Camargo Jr.1
1Massachusetts General Hospital, Boston, MA; 2Children’s Hospital Boston, Boston, MA; 3Baylor College of Medicine, Houston, TX; 4University of Pittsburgh, Pittsburgh, PA
Background: It is unclear which children hospitalized with bronchiolitis will require continuous positive airway pressure (CPAP) or intubation.
Objectives: To examine the historical, clinical, and infectious pathogen factors associated with CPAP or intubation.
Methods: We performed a 16-center, prospective
cohort study of hospitalized children age <2 years with
a physician diagnosis of bronchiolitis. For three consecutive years from November 1 until March 31, beginning
in 2007, researchers collected clinical data and a nasopharyngeal aspirate. Intensive care unit visits were
oversampled. Polymerase chain reaction was conducted
for 15 viruses and viral load testing for the five most
common viruses. Analysis used chi-square and Kruskal
Wallis tests and multivariable logistic regression. Model
validation was performed using bootstrapping. Results
are reported as odds ratios (OR) with bias-corrected
and accelerated 95% confidence intervals (95%CI).
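A simplified sketch of the bootstrap step (percentile intervals rather than the bias-corrected and accelerated intervals the authors report), with hypothetical file and column names:

```python
# Bootstrap resampling of a logistic model's coefficients. This uses plain
# percentile CIs; the study reports bias-corrected and accelerated CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bronchiolitis.csv")  # hypothetical analysis file
formula = ("cpap_or_intubation ~ age_lt_2mo + maternal_smoking"
           " + low_birth_weight + apnea + severe_retractions + sat_lt_85")

rng = np.random.default_rng(0)
coefs = []
for _ in range(1000):
    sample = df.sample(n=len(df), replace=True, random_state=rng)
    coefs.append(smf.logit(formula, data=sample).fit(disp=0).params)

boot = pd.DataFrame(coefs)
print(boot.quantile([0.025, 0.975]).T)  # 95% percentile bootstrap CIs
```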
Results: Of the 2,207 enrolled children, 161 (7%)
required CPAP or intubation. Overall, the median age
was 4 months; 59% were male; 61% were white, 24%
black and 36% Hispanic; 43% had respiratory syncytial
virus (RSV)-A, 30% had RSV-B, and 26% had rhinovirus. In the multivariable model predicting CPAP/intubation, the significant factors were: age <2 months (OR
4.3; 95%CI 1.7–11.5), mother smoked during pregnancy
(OR 1.4; 95%CI 1.1–1.9), birth weight <5 pounds (OR
1.7; 95%CI 1.0–2.6), breathing difficulty began
<24 hours prior to admission (OR 1.6; 95%CI 1.2–2.1),
presence of apnea (OR 4.8; 95%CI 2.5–8.5), inadequate
oral intake (OR 2.5; 95%CI 1.3–4.3), severe retractions
(OR 11.1; 95%CI 2.4–33.0), and room air oxygen saturation <85% (OR 3.3; 95%CI 2.0–4.8). Sex, race/ethnicity,
viral etiology, and viral load were not predictive.
Conclusion: In this multicenter study of children hospitalized with bronchiolitis neither specific viruses nor
their viral load predicted the need for CPAP or intubation, but young age, low birth weight, presence of
apnea, severe retractions, and oxygen saturation <85%
did. We also identified that children requiring CPAP or
intubation were more likely to have mothers who
smoked during pregnancy and a rapid respiratory
worsening. Mechanistic research in these high-risk children may yield important insights for the management
of severe bronchiolitis.
149
Prevalence of Abusive Injuries in Siblings
and Contacts of Abused Children
Daniel Lindberg
Brigham & Women’s Hospital, Boston, MA
Background: Siblings and children who share a home
with a physically abused child are thought to be at high
risk for abuse. However, rates of injury in these children are unknown. Disagreements between medical
and Child Protective Services professionals are common and screening is highly variable.
Objectives: Our objective was to measure the rates of
occult abusive injuries detected in contacts of abused
children using a common screening protocol.
Methods: This was a multi-center, observational cohort
study of 20 child abuse teams who shared a common
screening protocol. Data were collected between Jan
15, 2010 and April 30, 2011 for all children <10 years
undergoing evaluation for physical abuse and their
contacts.
For contacts of abused children, the protocol recommended physical examination for all children <5 years,
skeletal survey and physical exam for children
<24 months, and physical exam, skeletal survey, and
neuroimaging for children <6 months old.
Results: Among 2,825 children evaluated for abuse, 618
met criteria as ‘‘physically abused’’ and these had 477
contacts. For each screening modality, screening was
completed as recommended by the protocol in approximately 75% of cases. Of 134 contacts who met criteria
for skeletal survey, new injuries were identified in 16
(12.0%). None of these fractures had associated findings
on physical examination. Physical examination identified new injuries in 6.2% of eligible contacts. Neuroimaging failed to identify new injuries among 25 eligible
contacts less than 6 months old. Twins were at significantly increased risk of fracture relative to other non-twin contacts (OR 20.1).
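The proportions and intervals in the accompanying table can be reproduced from the counts; the authors' interval method is not stated, but Wilson score intervals closely (if not exactly) reproduce the reported values:

```python
# Screening yield with 95% CIs (Wilson score intervals assumed).
from statsmodels.stats.proportion import proportion_confint

for name, injured, eligible in [("Skeletal survey", 16, 134),
                                ("Neuroimaging", 0, 25),
                                ("Physical exam", 22, 355)]:
    lo, hi = proportion_confint(injured, eligible, method="wilson")
    print(f"{name}: {injured/eligible:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```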
Conclusion: These results support routine skeletal survey for contacts of physically abused children
<24 months old, regardless of physical examination
findings. Even for children where no injuries are identified, these results demonstrate that abuse is common
among children who share a home with an abused
child, and support including contacts in interventions
(foster care, safety planning, social support) designed to
protect physically abused children.
Table - Abstract 149: Results of Screening Tests

Screening Test             # Eligible   # Tested   # With Injury   % Injured   95% CI
Skeletal Survey                134         101           16           11.9     7.5–18.5
Neuroimaging (CT or MR)         25          19            0            0.0     0.0–13.7
Physical Examination           355         259           22            6.2     4.1–9.3

150
Validity of the Canadian Triage and Acuity
Scale for Children: A Multi-centre,
Database Study
Jocelyn Gravel1, Eleanor Fitzpatrick2, Serge
Gouin3, Kelly Millar4, Sarah Curtis5, Gary
Joubert6, Kathy Boutis7, Chantal Guimont8,
Ran D. Goldman9, Sasha Alexander
Dubrovsky10, Robert Porter11, Darcy Beer12,
Martin H. Osmond13
1Sainte-Justine UHC, Université de Montréal, Montreal, QC, Canada; 2IWK Health Centre, Halifax, NS, Canada; 3CHU Sainte-Justine, Montreal, QC, Canada; 4Alberta Children’s Hospital, Calgary, AB, Canada; 5Stollery Children’s Hospital, Edmonton, AB, Canada; 6Children’s Hospital of Western Ontario, London, ON, Canada; 7The Hospital for Sick Children, Toronto, ON, Canada; 8Centre Hospitalier Université Laval, Quebec, QC, Canada; 9BC Children’s Hospital, Vancouver, BC, Canada; 10Montreal Children’s Hospital, Montreal, QC, Canada; 11Janeway Children’s Health and Rehabilitation Centre, St. John’s, NL, Canada; 12Children’s Hospital of Winnipeg, Winnipeg, MB, Canada; 13Children’s Hospital of Eastern Ontario, Ottawa, ON, Canada
Background: The Canadian Triage and Acuity Scale
(CTAS) is a five-level triage tool constructed from a
consensus of experts.
Objectives: To evaluate the validity of the Canadian
Triage and Acuity Scale (CTAS) for children visiting
multiple pediatric Emergency Departments (ED) in
Canada.
Methods: This was a retrospective study evaluating
all children presenting to eight paediatric, university-affiliated EDs during one year in 2010–2011. In each
setting, information regarding triage and disposition
were prospectively registered by clerks in the ED
database. Anonymized data were retrieved from the
ED computerized database of each participating centre. In the absence of a gold standard for triage, hospitalisation, admission to intensive care unit (ICU),
length of stay in the ED, and proportion of patients
who left without being seen by a physician (LWBS)
were used as surrogate markers of severity. The primary outcome measure was the association between
triage level (from 1 to 5) and hospitalisation. The
association between triage level and dichotomous outcomes was evaluated by a chi-square test, while a
Student’s t-test was used to evaluate the association
between triage level and length of stay. It was estimated that the evaluation of all children visiting these
EDs for a one year period would provide a minimum
of 1,000 patients in each triage level and at least 10
events for outcomes having a proportion of 1% or
more.
Results: A total of 404,841 children visited the eight
EDs during the study period. Pooled data demonstrated
hospitalisation proportions of 59%, 30%, 10%, 2%, and
0.5% for patients triaged at levels 1, 2, 3, 4, and 5, respectively (p < 0.001). There was also a strong association
between triage levels and admission to ICU (p < 0.001),
the proportion of children who LWBS (p < 0.001), and
length of stay (p < 0.001).
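As a sketch of the primary analysis, a chi-square test across the five triage levels; the admitted/discharged counts below are illustrative stand-ins chosen to match the reported proportions, not the study's actual counts:

```python
# Chi-square test of hospitalisation across CTAS levels 1-5.
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([          # [admitted, discharged], illustrative only
    [590, 410],              # level 1: ~59% admitted
    [3000, 7000],            # level 2: ~30%
    [10000, 90000],          # level 3: ~10%
    [4000, 196000],          # level 4: ~2%
    [500, 99500],            # level 5: ~0.5%
])
chi2, p, dof, _ = chi2_contingency(counts)
print(chi2, p, dof)
```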
Conclusion: The CTAS is a valid triage tool for children as demonstrated by its good correlation with
markers of severity in multiple pediatric emergency
departments.
151
Diagnosing Intussusception by Bedside
Ultrasonography in the Pediatric
Emergency Department
Jessica Zerzan, Alexander Arroyo, Eitan
Dickman, Hector Vazquez
Maimonides Medical Center, Brooklyn, NY
Background: Intussusception (INT) is the most common cause of bowel obstruction in children ages
3 months to 6 years. If not recognized and treated expeditiously, edema and hypoperfusion can lead to
decreased rates of nonsurgical reduction, bowel perforation, necrosis, and even death. Many hospitals do not
have 24-hour availability of ultrasonography through
the radiology department (RADS). Bedside ultrasonography (BUS) performed by pediatric emergency medicine (PEM) physicians for the diagnosis of INT has not
previously been studied. This technique could decrease
time to diagnosis and definitive treatment.
Objectives: The primary objective of this study is to
compare the BUS results obtained by PEM physicians
with ultrasounds performed by staff in RADS when
evaluating patients for INT. Our hypothesis is that with
focused training, PEM physicians can perform and
interpret BUS for INT with results similar to that performed in RADS.
Methods: We conducted a prospective observational
diagnostic study in a community-based academic urban
pediatric ED with an annual census of 36,000 patients.
The principal investigator provided a brief in-service
consisting of a didactic and hands-on BUS training session for all PEM attendings and fellows. We enrolled
patients aged 3 months through 6 years in whom there
was clinical suspicion of INT. After obtaining informed
consent, the treating PEM physician performed a
focused BUS, recorded specific images, and documented an interpretation of the study. Then the patient
had an additional abdominal ultrasound in RADS.
Results between PEM BUS and RADS sonography were
compared using a kappa analysis.
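Cohen's kappa can be computed directly from the 2x2 table reported below; with those counts the calculation reproduces the published value of 0.825:

```python
# Cohen's kappa for BUS vs. RADS agreement, from the abstract's 2x2 table.
import numpy as np

tbl = np.array([[8, 2],     # BUS+: RADS+, RADS-
                [1, 88]])   # BUS-: RADS+, RADS-
n = tbl.sum()
po = np.trace(tbl) / n                        # observed agreement
pe = (tbl.sum(0) * tbl.sum(1)).sum() / n**2   # expected chance agreement
print(round((po - pe) / (1 - pe), 3))         # 0.825
```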
Results: 99 patients were enrolled. Kappa = 0.825 (0.63–
1.0). +LR = 40 (10–160), -LR = 0.11 (0.02–0.72). Sensitivity = 89% (0.05–0.99). Specificity = 98% (0.91–0.99).
Conclusion: This study demonstrates that PEM physicians can accurately utilize BUS to evaluate patients for
INT after a brief training period. This has the potential
to expedite definitive diagnosis and treatment of a true
pediatric abdominal emergency and offers further evidence of the utility of BUS in the pediatric emergency
department. Limitations of the study are those inherent
in convenience sampling and its small sample size. Further large studies are warranted to determine whether
these results can be replicated across other settings.
Objectives: To determine the test performance characteristics for point-of-care (POC) US performed by pediatric emergency medicine (PEM) physicians compared
to radiographic diagnosis of elbow fractures.
Methods: This was a prospective study of children
<21 years presenting to the ED with elbow injuries
requiring x-rays. Patients were excluded if they arrived
at the ED with elbow x-rays or a diagnosis of fracture.
Prior to the start of the study, PEM physicians received
a one-hour didactic and hands-on training session on
US examination of the elbow. Before obtaining x-rays,
the PEM physician performed a brief elbow US using a
linear 10–5 MHz transducer probe and recorded images
and clips in longitudinal and transverse views. A positive US for fracture at the elbow was defined as the
PEM physician’s determination of an elevated posterior
fat pad (PFP) and/or lipohemarthrosis (LH) of the PFP.
All study patients received a standard care elbow x-ray
in the ED and clinical telephone follow-up. The gold
standard for fracture in this study was defined as fracture on initial or follow-up x-rays as determined by a
radiologist.
Results: 122 patients were enrolled with a mean age of
7.6 (± 5.4) years. 42/122 (34%) had a positive x-ray for
fracture. Of the 65/122 (53%) patients with a positive US,
58/65 (89%) had an elevated PFP, 56/65 (86%) had LH,
and 49/65 (75%) had both an elevated PFP and LH. A
positive elbow ultrasound with an elevated PFP or LH
had a sensitivity of 0.98 (95% CI 0.88–1.00), specificity of
0.72 (95% CI 0.61–0.81), positive predictive value of 0.67
(95% CI 0.55–0.77), negative predictive value of 0.98
(95% CI 0.91–1.00), positive likelihood ratio of 3.54 (95%
CI 2.45–5.10), and negative likelihood ratio of 0.03 (95%
CI 0.01–0.22) for a fracture. The use of POC elbow US
would reduce the need for x-rays in 56/122 (46%)
patients with elbow injuries but would miss one fracture.
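The test characteristics above follow from a standard 2x2 layout; the helper below is generic, and the cell counts are an approximate reconstruction (the abstract does not report the exact cells):

```python
# Generic 2x2 diagnostic-performance helper; counts are approximate.
def test_performance(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sens": sens, "spec": spec,
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn),
            "lr_pos": sens / (1 - spec), "lr_neg": (1 - sens) / spec}

print(test_performance(tp=41, fp=24, fn=1, tn=56))
```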
Conclusion: POC US was found to be highly sensitive
for elbow fractures when performed by PEM physicians. A negative POC US may reduce the need for
x-rays in children with elbow injuries.
Table - Abstract 151: Results

          +RADS    −RADS    Total
+BUS         8        2       10
−BUS         1       88       89
Total        9       90       99

152
Accuracy of Point-of-Care Ultrasound for
Diagnosis of Elbow Fractures in Children
Joni E. Rabiner1, Hnin Khine1, Jeffrey R.
Avner1, James W. Tsung2
1Children’s Hospital at Montefiore, Bronx, NY; 2Mount Sinai Medical Center, New York, NY
Background: Ultrasound (US) has been shown to be
useful in the diagnosis of pediatric skeletal injuries. It
can be performed accurately and reliably by emergency department (ED) physicians with focused US
training.
153
Persistent Failure to Understand Key
Elements of Medication Safety after
Pediatric Emergency Department Visit
Margaret E. Samuels-Kalow1, Anne M. Stack2,
Stephen C. Porter3
1BWH/MGH Harvard Affiliated Emergency Medicine Residency, Boston, MA; 2Children’s Hospital Boston, Boston, MA; 3The Hospital for Sick Children, Toronto, ON, Canada
Background: Parents frequently leave the emergency
department (ED) with incomplete understanding of the
diagnosis and plan, but the relationship between comprehension and post-care outcomes has not been well
described.
Objectives: To explore the relationship between comprehension and post-discharge medication safety.
Methods: We completed a planned secondary analysis
of a prospective observational study of the ED discharge
process for children aged 2–24 months. After discharge,
parents completed a structured interview to assess
comprehension of the child’s condition, the medical
team’s advice, and the risk of medication error. Limited
understanding was defined as a score of 3–5 from 1
(excellent) to 5 (poor). Risk of medication error was
defined as a plan to use over-the-counter cough/cold
medication and/or an incorrect dose of acetaminophen
(measured by direct observation at discharge or
reported dose at follow-up call). Parents identified as at
risk received further instructions from their provider.
The primary outcome was persistent risk of medication
error assessed at phone interview 5–10 days post-discharge.
Results: The original cohort screened 270 patients, of
whom 204 (76%) were eligible and consented. 150/204
(74%) completed the assessment at discharge. 130/150
(87%) completed the follow-up interview. At discharge,
38 parents (25%) reported limited understanding and 48
(31%) had a risk of medication error. At follow-up, 42
parents (32%) were at risk of medication error, of
whom 22 (52%) had also been at risk at discharge. Limited parental understanding at discharge was significantly associated with risk of medication error at
follow-up [odds ratio (OR) 3.9, 95% CI 1.7, 8.9], and
remained so after adjustment for parental language,
health literacy, and error risk at discharge (aOR 3.1,
95% CI 1.1, 8.8). Risk of medication error at discharge
was significantly associated with risk at follow-up,
despite provider intervention after the risk was identified (OR 4.6, 95% CI 2.0, 10.3), with a trend toward significance after adjustment for language and literacy
(aOR 2.4, 95% CI 0.96, 6.0).
Conclusion: Parental report of limited understanding
predicted risk of medication errors after discharge from
the pediatric ED. Despite provider intervention, families
at risk of medication error at discharge appear to experience persistent risk of error after leaving the ED.
154
Use Of Oral Disintegrating Ondansetron To
Treat Nausea In Prehospital Patients
Steven Weiss1, Lynne Fullerton1, Phillip
Froman2
1University of New Mexico, Albuquerque, NM; 2Albuquerque Ambulance, Albuquerque, NM
Background: Use of various antiemetics or of crystalloid alone has been shown to improve nausea in the ED and hospital. Whether these treatments are similarly effective in the prehospital setting is not yet clear.
Objectives: Our hypotheses were that during EMS
transport: (1) the amount of saline given is not related
to a change in nausea and, (2) that the addition of
ondansetron Oral disintegrating tablet (ODT) decreases
the degree of nausea.
Methods: During the first phase of the study, EMS providers completed a form whenever a patient with nausea
and/or vomiting was assessed and transported to one of
the area hospitals. Patients were asked to rate their nausea on a visual analogue scale (VAS) and a Likert scale.
Saline administration and active vomiting were documented. During the second phase, ODT ondansetron
was added for all patients with moderate or severe nausea and providers continued to complete the same forms.
The forms completed during phase 1 (saline-only) and
phase 2 (ODT ondansetron) were evaluated and compared. For both phases, the primary outcome measures
were change in VAS nausea rating (0 = no nausea,
100 = most nausea ever) from beginning to end of the
transport, and the results on the Likert scale completed
at the end of the transport. Negative changes in VAS
scores indicated improvement in nausea. Spearman correlations and Wilcoxon tests were used for analysis.
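A sketch of these analyses with hypothetical arrays (the abstract does not publish patient-level data):

```python
# Spearman correlation of fluid volume vs. nausea change (phase 1), and a
# Wilcoxon rank-sum test comparing VAS change between phases.
import numpy as np
from scipy.stats import spearmanr, ranksums

saline_ml = np.array([0, 250, 500, 100, 300])        # phase 1 volumes
vas_change_p1 = np.array([-5, -10, 0, -8, 4])        # end minus start VAS
print(spearmanr(saline_ml, vas_change_p1))

vas_change_p2 = np.array([-30, -22, -40, -15, -28])  # ODT ondansetron phase
print(ranksums(vas_change_p1, vas_change_p2))
```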
Results: Data were collected from 274 transports in
phase 1, and 372 transports in phase 2. The average
age was 50 ± 12 yrs. In phase 1, 135/228 (59%) received
normal saline (mean volume ± SD = 265 ± 192 mL). There
was no significant correlation between either the VAS
change or the Likert scale results and amount of fluid
administration in the saline-only phase of the study.
Conversely, during phase 2, patients receiving ondansetron ODT showed significant improvement in both measures of nausea (difference in VAS change: −24.6; 95% CI −28.3 to −20.9). See figure.
Conclusion: There was no relationship between nausea
and quantity of saline during an EMS transport. However, ODT ondansetron resulted in a significant
improvement in nausea.
155
Does the Addition of the Mucosal
Atomizer Device Increase Fentanyl
Administration in Prehospital Pediatric
Patients?
Daniel O’Donnell, Luke Schafer, Andrew
Stevens
Indiana University School of Medicine,
Indianapolis, IN
Background: Prehospital research demonstrates that
providers inadequately treat pain in pediatric patients.
A major barrier to administering analgesics to children
is the perceived discomfort of intravenous access. The
delivery of intranasal analgesia may be a novel solution
to this problem.
Objectives: We investigated whether the addition of
the Mucosal Atomizer Device (MAD) as an alternative
for fentanyl delivery would improve overall fentanyl
administration rates in pediatric patients transported by
a large urban EMS system.
Methods: We performed a historical control trial comparing the rate of pediatric fentanyl administration
6 months before and 6 months after the introduction of
the MAD. Study subjects were pediatric trauma
patients (age <16 years) transported by a large urban
EMS agency. The control group was composed of
patients treated in the 6 months before introduction of
the MAD. The experimental group included patients
treated in the 6 months after the addition of the MAD.
Two physicians reviewed each chart and determined
whether the patient met predetermined criteria for the
administration of pain medication. A third reviewer
resolved any discrepancies. Fentanyl administration
rates were measured and compared between the two
groups. We used two-sample t-tests and chi-square
tests to analyze our data.
Results: 228 patients were included in the study: 137
patients in the pre-MAD group and 91 in the post-MAD
group. There were no significant differences in the
demographic and clinical characteristics of the two
groups. 42 (30.4%) patients in the control arm received
fentanyl. 34 (37.8%) of patients in the experimental arm
received fentanyl with 36% of the patients receiving
fentanyl via the intranasal route. The addition of the
MAD was not associated with a statistically significant
increase in analgesic administration. Age and mechanism of injury were statistically more predictive of analgesia administration.
Conclusion: While the addition of the Mucosal Atomizer Device as an alternative delivery method for fentanyl shows a trend towards increased analgesic
administration in a prehospital pediatric population,
age and mechanism of injury are more predictive in
who receives analgesia. Further research is necessary
to investigate the effect of the MAD on pediatric analgesic delivery.
Table - Abstract 155: Calculated Odds Ratio of Receiving Fentanyl

Predictor               Odds Ratio    95% CI          P Value
Post MAD                   1.27       (0.66–2.44)      0.468
Female                     0.84       (0.44–1.58)      0.580
Age                        1.17       (1.06–1.30)      0.002
Initial RR                 0.99       (0.92–1.06)      0.738
Initial GCS                1.32       (0.58–3.01)      0.512
Initial Pulse              1.02       (0.99–1.04)      0.083
Burn vs. Other             7.66       (1.52–38.57)     0.014
Fall/Musc. vs. Other       2.05       (0.99–4.25)      0.055
MVC vs. Other              0.36       (0.13–1.03)      0.053
156
EMS Provider Self-efficacy Retention after
Pediatric Pain Protocol Implementation
April Jaeger1, Maija Holsti2, Nanette Dudley2,
Xiaoming Sheng2, Kristin Gurley3, Kathleen
Adelgais4
1Sacred Heart Medical Center, Spokane, WA; 2University of Utah, Salt Lake City, UT; 3University of Pittsburgh Institute of Aging, Pittsburgh, PA; 4University of Colorado, Denver, CO
Background: Prehospital providers (PHPs) commonly
treat pain in pediatric patients. Self-efficacy (SE) is a
person’s judgment of his or her capability to perform
certain actions and is congruent with clinical performance. An initial pilot study showed that a pediatric
pain protocol (PPP) increases PHP-SE. The retention of
PHP-SE after PPP implementation is unknown.
Objectives: To evaluate the retention of PHP-SE after
PPP implementation.
Methods: This was a prospective study evaluating PHP-SE before (pre) and after (post) a PPP introduction and
13 months later (13-mo). PHP groups received either PPP
review and education or PPP review alone. The PPP
included a pain assessment tool. The SE tool, developed
and piloted by pediatric EMS experts, uses a ranked
ordinal scale ranging from ‘certain I cannot do it’ (0) to
‘completely certain I can do it’ (100) for 10 items: pain
assessment (3 items), medication administration (4) and
dosing (1), and reassessment (2). All 10 items and an
averaged composite were evaluated for three age groups
(adult, child, toddler). Paired sample t-tests compared
post- and 13-mo scores to pre-PPP scores.
Results: Of 264 PHPs who completed initial surveys,
146 PHPs completed 13-mo surveys. 106 (73%) received
education and PPP review and 40 (27%) review only.
PPP education did not affect PHP-SE (adult P = 0.87,
child P = 0.69, toddler P = 0.84). The largest SE increase
was in pain assessment. This increase persisted for
child and toddler groups at 13 months. The immediate
increase in composite SE scores for all age groups persisted for the toddler group at 13 months.
Conclusion: Increases in composite and pain assessment PHP-SE occur for all age groups immediately
after PPP introduction. The increase in pain assessment
SE persisted at 13 months for pediatric age groups.
Composite SE increase persisted for the toddler age
group alone.
Table - Abstract 156: Self-efficacy Scores

                                       Pre                    Post                    13-mo
Adult
  Averaged Composite, 0–100 (95% CI)   82.40 (78.55–86.35)    84.99 (81.37–88.62)*    84.28 (80.80–87.76)
  Pain Assessment, 0–100 (95% CI)      89.42 (85.42–93.67)    94.64 (91.55–97.74)*    92.57 (89.72–95.42)
Child
  Averaged Composite                   75.37 (71.14–79.59)    80.91 (77.09–84.73)*    77.62 (73.70–81.55)
  Pain Assessment                      77.86 (72.93–82.80)    89.14 (85.99–92.30)*    84.14 (80.68–87.61)*
Toddler
  Averaged Composite                   64.64 (60.19–69.10)    76.89 (73.02–80.76)*    71.39 (67.03–75.74)*
  Pain Assessment                      50.89 (45.21–56.57)    83.86 (80.24–87.47)*    72.50 (67.87–77.13)*

*P < 0.05
157
Hemodynamic Changes In Patients
Receiving Ketamine Sedation By
Emergency Medical Services
Jay L. Kovar, Guy R. Gleisberg, Eric R. Ardeel,
Abdullah Basnawi, Mark E.A. Escott
Baylor College Of Medicine / EMS Collaborative Research Group
Background: Ketamine is widely used to achieve sedation and analgesia. In the prehospital setting, ketamine
facilitates endotracheal intubation, pain control, and initial management of acutely agitated patients. Knowledge of the hemodynamic effects of ketamine in the
prehospital setting is limited and may prevent acceptance of the practice.
Objectives: Describe the hemodynamic effects of prehospital ketamine administration in patients treated by
paramedics while undergoing continuous monitoring.
Methods: Retrospective review of electronic prehospital care records for 98 consecutive patients receiving
prehospital ketamine from paramedics in Montgomery
County, Texas between August 1, 2010 and October 25,
2011. Ketamine administration indications were: control
of violent/agitated patients requiring treatment and
transport; sedation and analgesia after trauma; facilitation of intubation and mechanical ventilation. All
patients receiving ketamine were treated under unified
prehospital protocols and were eligible for study inclusion. Waveforms were excluded if they did not contain at least three hemodynamic data points within the drug's duration of action, were distorted by explainable events, or came from subsequent doses in a multi-dose administration. Linear regression
trend analysis modeled each waveform and the percentage difference algorithm was used to calculate each
hemodynamic parameter effect.
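The ‘‘percentage difference algorithm’’ is not specified; one plausible reading, sketched below with illustrative samples, is the percent change between the start and end of each waveform's fitted linear trend:

```python
# Linear trend per waveform, then percent difference across the window.
import numpy as np

t = np.array([0, 30, 60, 90, 120])        # seconds (illustrative)
hr = np.array([104, 102, 101, 103, 100])  # heart rate samples

slope, intercept = np.polyfit(t, hr, 1)   # linear regression trend
start, end = intercept, intercept + slope * t[-1]
print(f"{100 * (end - start) / start:.1f}%")
```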
Results: 219 of 490 (45%) waveforms met inclusion criteria. Observed hemodynamic percentage difference medians: heart rate decreased by 0.2% (range −99 to +87%); SpO2 increased by 0.6% (range −16 to +33%); etCO2 increased by 1.3% (range −75 to +123%); respiratory rate decreased by 14% (range −85 to +156%); and mean arterial pressure changed by 0.0% (range −78 to +80%). Mean
age was 41 years (range 3–94 years), 50 subjects (51%)
were male, and the mean ketamine dose was 150 mg
(range 25–500 mg).
Conclusion: Our findings demonstrate the feasibility of
prehospital hemodynamic monitoring in patients
receiving ketamine. Minimal hemodynamic changes
were observed during the dose response window. Further studies are necessary to discern the clinical implications of the findings in these specific parameters.
158
Do Paramedics Give Epinephrine When
Indicated For Anaphylaxis?
Herbert G. Hern1, Harrison Alter1,
Joseph Barger2
1Alameda County - Highland, Oakland, CA; 2Contra Costa County EMS Agency, Martinez, CA
Background: Allergic and anaphylactic reactions in the
pediatric population are a serious and potentially
deadly disease. Instances of allergic and anaphylactic
reactions have been increasing and the need for lifesaving intervention with epinephrine must remain an
important part of EMS provider training.
Objectives: To characterize dosing of and timing of
epinephrine, diphenhydramine, and albuterol in the
pediatric patient with severe allergy and anaphylaxis.
Methods: We studied medication administration in
pediatric patients with severe allergy or anaphylaxis
during our study period of 19 months, from January 1,
2010 to July 10, 2011. We compared rates of epinephrine, diphenhydramine, and albuterol given to patients
with allergic conditions (including anaphylaxis). In addition, we calculated the rate of epinephrine administration in cases of anaphylaxis and determined what
percentage of time the epinephrine was given by EMS
or prior to EMS arrival.
Results: Out of 239,320 total patient contacts, 12,898
were pediatric patients. Of the pediatric patient contacts, 199 were transported for allergic complaints. Of
those with allergic complaints, 97 of 199 (49%; 95% CI
42%, 56%) had symptoms consistent with anaphylaxis
and indications for epinephrine. Of these 97, 49 (51%,
95% CI 41%, 60%) were given epinephrine. Among
patients in whom epinephrine was indicated and given,
42 (86%; 95% CI 73%, 94%) were given epinephrine
prior to EMS arrival (by parent, school, or clinic). Of
the 55 patients who might have benefited from epinephrine from EMS, 7 (14%; 95% CI 6%, 24%) received
epinephrine, 28 (51%; 95% CI 38%, 64%) received
diphenhydramine, and 28 (51%; 95% CI 38%, 64%)
received albuterol.
Conclusion: In anaphylaxis patients who met criteria for epinephrine, only about half (51%) received epinephrine, and the
overwhelming majority received it prior to EMS arrival.
EMS providers were far more likely to give diphenhydramine or albuterol than epinephrine in patients who
met criteria for epinephrine. EMS personnel are not
treating anaphylaxis appropriately with epinephrine.
159
Weight-Based Pediatric Dosing Errors Are
Common Among EMS Providers
Joseph Barger1, Herbert G. Hern2, Patrick
Lickiss3, Harrison Alter2, Maria Fairbanks1,
Micheal Taigman3, Monica Teeves4, Karen
Hamilton4, Leslie Mueller4
1Contra Costa County EMS Agency, Martinez, CA; 2ACMC-Highland, Oakland, CA; 3American Medical Response, Oakland, CA; 4American Medical Response, Concord, CA
Background: Pediatric medications administered in the
prehospital setting are given infrequently and dosage
may be prone to error. Calculation of dose based on
known weight or with use of length-based tapes occurs
even less frequently and may present a challenge in
terms of proper dosing.
Objectives: To characterize dosing errors based on
weight-based calculations in pediatric patients in two
similar emergency medical service (EMS) systems.
Methods: We studied the five most commonly administered medications given to pediatric patients weighing
36 kg or less. Drugs studied were morphine, midazolam, epinephrine 1:10,000, epinephrine 1:1000, and
diphenhydramine. Cases from the electronic record
were studied for a total of 19 months, from January
2010 to July 2011. Each drug was administered via
intravenous, intramuscular, or intranasal routes. Drugs
that were permitted to be titrated were excluded. An
error was defined as greater than 25% above or below
the recommended mg/kg dosage.
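The error definition translates directly into code; the reference dose below is an assumed example, not a protocol value:

```python
# A dose is an error if it deviates >25% from the recommended mg/kg dose.
def is_dosing_error(dose_mg, weight_kg, recommended_mg_per_kg):
    expected = recommended_mg_per_kg * weight_kg
    return abs(dose_mg - expected) / expected > 0.25

# e.g., an assumed 0.1 mg/kg midazolam reference for a 20 kg child
print(is_dosing_error(dose_mg=1.0, weight_kg=20, recommended_mg_per_kg=0.1))
```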
Results: Out of 248,596 total patients, 13,321 were pediatric patients. 7885 had documented weights of <36 kg
and 241 patients were given these medications. We
excluded 72 patients for weight above the 97%ile or
below the 3%ile, or if the weight documentation was
missing. Of the 169 patients and 187 doses, errors were
noted in 53 (28%; 95% CI 22%, 35%). Midazolam was
the most common drug in errors (29 of 53 doses or
55%; 95% CI 40%, 68%), followed by diphenhydramine
(11/53 or 21%; 95% CI 11%, 34%), epinephrine (7/53 or
13%; 95% CI 5%, 25%), and morphine sulfate (6/53 or
11%; 95% CI, 4%, 23%). Underdosing was noted in 34
of 53 (64%; 95% CI 50%, 77%) of errors, while excessive dosing was noted in 19 of 53 (36%; 95% CI 23%,
50%).
Conclusion: Weight-based dosing errors in pediatric
patients are common. While the clinical consequences
of drug dosing errors in these patients are unknown, a
considerable amount of inaccuracy occurs. Strategies
beyond provision of reference materials are needed to
prevent pediatric medication errors and reduce the
potential for adverse outcomes.
160
Drivers Of Satisfaction: What Components
Of Patient Care Experience Have The
Greatest Influence On The Overall
Perceptions Of Care?
Timothy Cooke1, Alaina Aguanno2, John
Cowell2, Richard Schorn1, Markus Lahtinen1,
Andrew McRae2, Brian Rowe3, Eddy Lang2
1Health Quality Council of Alberta, Calgary, AB, Canada; 2University of Calgary, Calgary, AB, Canada; 3University of Alberta, Edmonton, AB, Canada
Background: Improving patient satisfaction is a core
mandate of all health care institutions and EDs. Which
specific components of patient care have the greatest
influence on the overall patient experience is poorly
understood.
Objectives: This study examines the magnitude of
influence of specific modifiable factors (e.g. wait time)
on the patient ED experience.
Methods: This was a cross-sectional survey of a random patient sample from 12 urban/regional EDs in winter 2007 and 2009. A previously validated questionnaire,
based on the British Healthcare Commission Survey,
was distributed according to a modified Dillman protocol. Exclusion criteria: age 0–15 years, left prior to
being seen/treated, died during the ED visit, no contact
information, presented with a ‘‘privacy’’ sensitive case.
Nine previously identified and validated composite
variables (staff care/communication, respect, pain
management, wait time/crowding, facility cleanliness,
discharge communication, wait time communication,
medication communication, and privacy) were evaluated for their influence on patients’ overall rating of
care. Composite variables and rating of care were rated
on a scale from 0 (lowest) to 100 (highest). Calculated influence is represented as a standardized composite coefficient.
Results: 21,639 surveys were distributed with a
response rate of 46%. Composite variable coefficients
were: staff care/communication (0.38), respect (0.17),
pain management (constituent variables not comparable
so decomposed for regression), wait time/crowding
(0.09), facility cleanliness (0.13), discharge communication (0.10), wait time communication (0.03), medication
communication (no significant influence), and privacy
(0.02). Thus, if a patient’s staff care/communication
composite score increases from 50/100 to 70/100, an initial global rating of care of 70/100 is predicted to
increase to 78/100. Path analysis showed cascading
effects of pain management and wait time on other
variables. Variable coefficients affect patients according
to situational relevance.
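The worked example above is simple arithmetic on the reported coefficient:

```python
# A 20-point gain on staff care/communication (coefficient 0.38) predicts
# roughly a 7.6-point gain in global rating: 70 -> ~78, as stated above.
coef_staff_comm = 0.38
print(70 + coef_staff_comm * (70 - 50))  # 77.6
```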
Conclusion: Global patient ED experience is most
strongly influenced by staff care/communication. Pain
management and wait time synergistically affect downstream variables. Efforts to improve patient satisfaction
should focus on these factors.
161
Street Outreach Rapid Response Team for
the Homeless
Lindsay M. Harmon1, Aaron Kalinowski1,
Dorian Herceg2, Anthony J. Perkins1
1Indiana University School of Medicine, Indianapolis, IN; 2Wishard Hospital, Indianapolis, IN
Background: Homelessness affects up to 3.5 million
people a year. The homeless present more frequently to
EDs, their ED visits are four times more likely to occur
within 3 days of a prior ED evaluation, and they are
admitted up to five times more frequently than others.
We evaluated the effect of a Street Outreach Rapid
Response Team (SORRT) on the health care utilization
of a homeless population. A nonmedical outreach staff
responds to the ED and intensely case manages the
patient: arranges primary care follow-up, social services, temporary housing opportunities, and drug/
alcohol rehabilitation services.
Objectives: We hypothesized that this program would
decrease the ED visits and hospital admissions of this
cohort of patients.
Methods: This was a before-and-after study at an urban teaching hospital in Indianapolis, Indiana, from June 2010 to December 2011. Upon identification of homeless status,
SORRT was immediately notified. Eligibility for SORRT
enrollment is determined by Housing and Urban Development homeless criteria and the outreach staff
attempted to enter all such identified patients into the
program. The patients’ health care utilization was evaluated in the 6 months prior to program entry as compared to the 6 months after enrollment by prospectively
collecting data and a retrospective medical record
query for any unreported visits. Since the data were
highly skewed, we used the nonparametric signed rank
test to test for paired differences between periods.
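A sketch of the paired comparison with hypothetical visit counts:

```python
# Wilcoxon signed-rank test on paired pre/post utilization counts.
from scipy.stats import wilcoxon

pre_visits = [6, 2, 44, 10, 1, 7, 3, 12]   # 6 months before enrollment
post_visits = [5, 0, 90, 8, 2, 5, 1, 10]   # 6 months after enrollment
print(wilcoxon(pre_visits, post_visits))
```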
Results: 22 patients met criteria but two refused participation. The 20-patient cohort had 388 total ED visits
(175 pre and 213 post) with a mean of 8.8 (SD 10.1) and
median of 6.5 (range 1–44) ED visits in 6 months pre-SORRT as compared to a mean of 10.7 (SD 19.5) and
median of 5.0 (0–90) in 6 months post-SORRT
(p = 0.815). There were 28 total inpatient admissions
pre-intervention and 27 post-intervention, with a mean
of 1.4 (SD 2.0) and median of 0.5 (range 0–7) per patient in the
pre-intervention period as compared to 1.4 (SD 1.9) and
1.0 (0–6) in the post-intervention period (p = 0.654). In
the pre-SORRT period 50.0% had at least one inpatient
admission as compared to 55.0% post-SORRT
(p = 1.00). There were no differences in ICU days or
overall length of stay between the two periods.
Conclusion: An aggressive case management program
beginning immediately with homeless status recognition in the ED has not demonstrated success in
decreasing utilization in our population.
162
Communicating with Patients with Limited
English Proficiency: Analysis of Interpreter
Use and Comfort with a Second Language
David T. Chiu, Jonathan Fisher, Alden Landry
Beth Israel Deaconess Medical Center, Boston,
MA
Background: The number of Limited English Proficiency (LEP) patients presenting to emergency departments is increasing. Professional interpreters decrease
errors in communication, improve patient satisfaction,
equalize utilization of health care resources, and
improve outcomes. Working with interpreters is often inconvenient, forcing providers to self-translate or use family members despite their lack of interpreter training.
Objectives: To determine interpreter usage and self-translation patterns among ED physicians.
Methods: This is a cross-sectional survey study that
was conducted at an urban, tertiary referral, academic
hospital with 55,000 ED visits. All ED physicians at the
institution were included; 45 attending and 37 postgraduate. The authors excluded themselves from the
study. The anonymous survey consisted of questions
regarding interpreter utilization, comfort with interpreters, self-translation, and use of family or nontrained staff for translation. Proportions and confidence
intervals were calculated using SPSS 17.
Results: 77 physicians completed the survey yielding a
96% response rate. 97% reported working with an interpreter at least once a month, with 47% working with an
interpreter daily. Only 5% (CI 0–10) reported always
working with an interpreter with 95% (CI 90–100) working with an interpreter sometimes or often. 77% (CI 67–
83) use ED staff to interpret, while 81% (CI 72–90) use
family to interpret. 99% responded that they were comfortable taking a history, performing a physical exam,
and discussing treatment plans or reassuring results
with an interpreter. 10% reported being uncomfortable
giving bad news with an interpreter. 48% reported self-translating with LEP patients. 47% of self-translators reported comfort with history taking, while 33% were comfortable giving results and discharging.
Conclusion: While ED physicians work with interpreters for LEP patients a majority of the time, only a small
minority of ED physicians work with them all of the
time. A large number use non-trained staff or family
for translation. Nearly half of physicians self-translate;
however, only half of those are comfortable taking a
history and only a third comfortable with giving results
or discharging. Future studies should examine barriers to interpreter use in the ED and test the language proficiency of those who self-translate.
163
Why Did You Go to the Emergency
Department? Findings from the Health
Quality Council of Alberta Urban and
Regional Emergency Department Patient
Experience Report
Eddy Lang1, Timothy Cooke2, Alaina
Aguanno1, John Cowell2, Brian Rowe3,
Andrew McRae1, Fareen Zaver4
1University of Calgary, Calgary, AB, Canada; 2Health Quality Council of Alberta, Calgary, AB, Canada; 3University of Alberta, Edmonton, AB, Canada; 4Mayo Clinic, Rochester, MN
Background: Understanding why patients seek ED
care is relevant to policy development. Large-scale population-based determinations of what motivates an ED
visit are lacking.
Objectives: To characterize and obtain a population-based and in-depth understanding of what drives
patients to seek ED care.
Methods: Cross-sectional survey of a random patient
sample from 12 urban/regional EDs in winter 2007 and
2009. A previously validated questionnaire, based on
the British Healthcare Commission Survey, was distributed according to a modified Dillman protocol. Exclusion criteria: age 0–15 years, left prior to being seen/
treated, died during the ED visit, no contact information, presented with a privacy sensitive case. Aggregate
responses are reported unless statistically significant
differences were identified. Sample weights were
applied adjusting for inter-facility sample proportion
differences.
Results: 21,639 surveys were distributed. Response rate
was 46%. Patients sought ED care on the advice of a
health care professional (35%), after consulting with a
friend/family member (34%), and/or based on own
assessment of need (34%). The ED was perceived as ‘‘the best place for my problem’’ (46% in 2007 and 48% in 2009; p = 0.03), ‘‘the only choice available at the time’’ (43%), and/or ‘‘the most convenient place to go’’ (12%).
Most patients consulted the ED for a new illness (32%)
or new injury (27%). The remainder of presentations
were associated with pre-existing conditions: worsened
chronic condition (23%), complication of recent medical
care (13%), routine care (2%), follow-up care (2%), or
other (2%). 89% of patients had a personal family
doctor/specialist whom they saw for most of their
health care needs and 96% of those patients had visited
this physician within the past 12 months.
Conclusion: Most patients consult with others prior to
presenting for ED care. The decision to visit an ED is
based on perceived need rather than convenience. The
majority of presentations are related to new conditions,
exacerbation of chronic conditions, or complications of
prior care.
164
Variation of Patient Preferences for
Written and Cell Phone Instructional
Modality of Discharge Instructions by
Patient Health Literacy Level
Travis D. Olives, Roma G. Patel,
Rebecca S. Nelson, Aileen Yew, Scott Joing,
James R. Miner
Hennepin County Medical Center, Minneapolis,
MN
Background: Outpatient antibiotics are frequently prescribed from the ED, and limited health literacy may
affect compliance with recommended treatments. It is
unknown whether the preference for discharge instructional modality varies by health literacy level.
Objectives: We sought to determine if patient preference for multimodality discharge instructions for outpatient antibiotic therapy varies by health literacy level.
Methods: This was a secondary analysis of a prospective randomized trial that included consenting patients
discharged with outpatient antibiotics from an urban
county ED with an annual census of 100,000. Patients
unable to receive text messages or voice-mails were
excluded. Health literacy was assessed using a validated
health literacy assessment, the Newest Vital Sign (NVS).
Patients were randomized to a discharge instruction
modality: 1) standard care, typed and verbal medication
and case-specific instructions; 2) standard care plus
text-messaged instructions sent to the patient’s cell
phone; or 3) standard care plus voice-mailed instructions sent to the patient’s cell. Patients were called at
30 days to determine preference for instruction delivery
modality. Preference for discharge instruction modality
was analyzed using z-tests for proportions.
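For illustration, the table's significant comparison (written plus another modality, NVS 0–1 vs. 2–6) can be reproduced from the tabulated counts:

```python
# Two-sample z-test for proportions; counts taken from the table below.
from statsmodels.stats.proportion import proportions_ztest

count = [11, 96]   # preferred written + another modality (NVS 0-1, NVS 2-6)
nobs = [80, 306]   # respondents per group
stat, p = proportions_ztest(count, nobs)
print(round(p, 4))  # ~0.0017, matching the table
```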
Results: 758 patients were included (55% female, median age 30, range 5 months to 71 years); 98 were
excluded. 23% had an NVS score of 0–1, 31% 2–3, and
46% 4–6. Among the 51.1% of participants reached at
30 days, 26% preferred a modality other than written.
There was a difference in the proportion of patients
who preferred discharge instructions in written plus
another modality (see table). With the exception of written plus another modality, patient preference was similar across all NVS score groups.
Conclusion: In this sample of urban ED patients, more
than one in four patients prefer non-traditional (text message, voice-mail) modalities of discharge instruction
delivery to standard care (written) modality alone. Additional research is needed to evaluate the effect of instructional modality on accessibility and patient compliance.
165
All Health Care is Not Local: An Evaluation of the Distribution of Emergency Department Care Delivered in Indiana
John T. Finnell, J Marc Overhage, Shaun Grannis
Indiana University, Indianapolis, IN
Background: The emergency department (ED) delivers
a major portion of health care - often with incomplete
knowledge about the patient. As such, EDs are particularly likely to benefit from a health information
exchange (HIE).
Objectives: To describe patient crossover rates throughout the entire state of Indiana over a three-year period.
This information provides one estimate of the opportunities for cross-institutional data to influence medical care.
Methods: The Indiana Public Health Emergency Surveillance System (PHESS) sends real-time registration information for emergency department encounters. We
validated these transactions, determined which were
unique visits, then matched the patients across the state.
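A minimal sketch of the crossover calculation on a visit-level extract; file and column names are hypothetical:

```python
# Share of ED visits made by patients with records at >1 institution.
import pandas as pd

visits = pd.read_csv("ed_visits.csv")  # columns: patient_id, institution_id
sites = visits.groupby("patient_id")["institution_id"].nunique()
crossover_ids = sites[sites > 1].index
share = visits["patient_id"].isin(crossover_ids).mean()
print(f"{share:.0%} of visits are by patients seen at >1 institution")
```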
Table - Abstract 164: Patient-reported Preferences for Discharge Instruction Modality

                                  Limited       Possibly         Adequate      p-val, NVS score 0–1
Modality, % (n)                   (0–1)         limited (2–3)    (4–6)         compared to 2–6
Written                           40.00 (32)    31.82 (49)       44.08 (67)    0.732
Written plus another modality     13.75 (11)    42.21 (65)       20.39 (31)    0.0017
Texted                            18.75 (15)    10.39 (16)       12.50 (19)    0.083
Voice-mailed                       7.50 (6)      3.25 (5)         5.92 (9)     0.294
Text and voice-mail                3.75 (3)      4.55 (7)         5.26 (8)     0.664
Text, voice-mail, and written      0.00 (0)      1.30 (2)         0.66 (1)     0.374
No preference                     16.25 (13)     6.49 (10)       11.18 (17)    0.052
Results: Over the three-year study period, we found
2.8 million patients generated 7.4 million ED visits. The
average number of visits was 2.6 visits/patient (range
1–385). We found more than 40% of ED visits during
the study period were for patients having data at multiple institutions. When examining the network density,
we found nearly all EDs share patients with more than
80 other EDs. (image)
Conclusion: Our results help clarify future health care
policy decisions regarding optimal NHIN architecture
and discount the notion that ‘all health care is local’.
166
Cumulative SAPS II Score Fails To Predict
Mortality In Out-of-Hospital Cardiac Arrest
Justin D. Salciccioli, Cristal Cristia, Andre
Dejam, Tyler Giberson, Tara Melillo, Amanda
Graver, Michael N. Cocchi, Michael W. Donnino
BIDMC Center for Resuscitation Science,
Boston, MA
Background: Severity of illness scores can predict outcomes in critically ill patients. However, the calibration
of existing scoring systems in post-cardiac arrest
patients is poorly established.
Objectives: To determine whether the Simplified Acute Physiology Score (SAPS II) predicts mortality in out-of-hospital cardiac arrest (OHCA).
Methods: We performed an observational study of adult
cardiac arrest at an urban tertiary care hospital during
the period from 12/2007 to 12/2010. Data were collected
prospectively and recorded in the Utstein style. Inclusion
criteria: 1. Adult (>18 years); 2. OHCA; 3. Return of spontaneous circulation. Traumatic cardiac arrests were
excluded. Patient demographics, co-morbid conditions,
vital signs, laboratory data, and in-hospital mortality
were recorded (Table). SAPS II scores were calculated.
We used simple descriptive statistics to describe the
study population and logistic regression to predict mortality with SAPS II as a continuous predictor variable.
Forward stepwise logistic regression selection was used
to identify individual SAPS II variables that contribute to
the sensitivity of the score. Discrimination was assessed
using area under the curve (AUC) of the receiver operating characteristic (ROC) curve.
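As a rough illustration of this modeling approach, the sketch below fits a logistic regression with cumulative SAPS II as a continuous predictor and reports the odds ratio and ROC AUC. The data are simulated placeholders, not the study cohort.

import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
saps = rng.integers(30, 100, size=115).astype(float)  # hypothetical SAPS II scores
died = rng.binomial(1, 0.61, size=115)                # hypothetical outcomes (61% mortality)

X = sm.add_constant(saps)
fit = sm.Logit(died, X).fit(disp=False)

odds_ratio = np.exp(fit.params[1])                    # OR per one-point SAPS II increase
ci_low, ci_high = np.exp(fit.conf_int()[1])           # 95% CI for the OR
auc = roc_auc_score(died, fit.predict(X))             # discrimination (area under ROC)
print(f"OR {odds_ratio:.3f} (95% CI {ci_low:.2f}-{ci_high:.2f}), AUC {auc:.2f}")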
Results: 115 patients were analyzed. The median age
was 68 years (IQR 55–79) and 28% were female. Median
SAPS II score was 67 (IQR 53–77) and 61% of patients
died. Cumulative SAPS II score was a poor predictor of
mortality (p = 0.19, OR 1.012, 95% CI 0.99–1.03) and
demonstrated poor discrimination (AUC 0.61). Stepwise
selection identified the following individual SAPS II
variables to predict mortality: HCO3 (p = 0.006, OR 1.46,
95%CI 1.12–1.90), GCS (p = 0.03, OR 1.12, 95%CI 1.01–
1.23), and age (p = 0.02, OR 1.12, 95%CI 1.02–1.23);
together these are strong predictors of mortality:
p = 0.02, AUC: 0.75 (Figure).
Conclusion: Cumulative SAPS II scoring fails to predict
mortality in OHCA. The risk scores assigned to age,
GCS, and HCO3 independently predict mortality and
combined are good mortality predictors. These findings
suggest that an alternative severity of illness score
should be used in post-cardiac arrest patients. Future
studies should determine optimal risk scores of SAPS II
variables in a larger cohort of OHCA.
Table - Abstract 166: Baseline Characteristics of Out-of-Hospital Cardiac Arrest Patients

Characteristics                           OHCA
Total number (n)                          115
Age, yr (±SD)                             66 (17)
Female, no. (%)                           32 (28)
Initial arrest rhythm, no. (%)
  Ventricular fibrillation/tachycardia    52 (44)
  Pulseless electrical activity           34 (30)
  Asystole                                21 (18)
  Unknown                                 7 (6)
Therapeutic hypothermia, no. (%)          54 (47)

167
Restoring Coronary Perfusion Pressure
Before Defibrillation After Chest
Compression Interruptions
Ryan A. Coute, Timothy J. Mader, Adam R.
Kellogg, Scot A. Millay, Lennard C. Jensen
Baystate Medical Center, Springfield, MA
Background: Generation of a threshold coronary perfusion pressure (CPP) is required for defibrillation success during VF cardiac arrest resuscitation. Rescue
shock (RS) outcomes are directly related to the CPP
achieved. Chest compression interruptions (for ECG
rhythm analysis) cause a precipitous drop in CPP.
Objectives: To determine the extent to which CPP
recovers to pre-pause levels with 20 seconds of CPR
after a 10-second interruption in chest compressions
for ECG rhythm analysis.
Methods: This was a secondary analysis of prospectively
collected data from an IACUC-approved protocol. Forty-two Yorkshire swine (weighing 25–30 kg) were instrumented under anesthesia. VF was electrically induced. After
12 minutes of untreated VF, CPR was initiated and a standard dose of epinephrine (SDE) (0.01 mg/kg) was given.
After 2.5 minutes of CPR to circulate the vasopressor,
compressions were interrupted for 10 seconds to analyze
the ECG rhythm. This was immediately followed by
20 seconds of CPR to restore CPP before the first RS was
delivered. If the RS failed, CPR resumed and additional
vasopressors (SDE, and vasopressin 0.57 mg/kg) were
given and the sequence repeated. The CPP was defined as
aortic diastolic pressure minus right atrial diastolic pressure. The CPP values were extracted at three time points:
immediately after the 2.5 minutes of CPR, following the
10-second pause, and immediately before defibrillation
for the first two RS attempts in each animal. Eighty-three
sets of measurements were logged from 42 animals.
Descriptive statistics were used to analyze the data.
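A small sketch of the CPP bookkeeping described above: CPP is the aortic diastolic minus the right atrial diastolic pressure, and recovery is expressed as a percentage of the pre-pause value. All numbers below are hypothetical, not study measurements.

def cpp(aortic_diastolic_mmHg, right_atrial_diastolic_mmHg):
    """Coronary perfusion pressure as defined in the abstract (mmHg)."""
    return aortic_diastolic_mmHg - right_atrial_diastolic_mmHg

pre_pause  = cpp(32.0, 12.0)   # after 2.5 min of CPR (hypothetical values)
post_pause = cpp(21.0, 12.0)   # after the 10-second pause
pre_shock  = cpp(31.0, 12.0)   # after 20 s of resumed CPR, before the rescue shock

recovery = 100.0 * pre_shock / pre_pause
print(f"CPP restored to {recovery:.1f}% of the pre-pause value")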
Results: The data were not normally distributed. Our
findings are presented in the table. Interrupting chest
compressions to analyze the ECG VF rhythm caused a
significant drop in CPP. Resuming CPR for 20 seconds
prior to delivery of a RS restored CPP to 94.0% (95%CI
89.7–97.9) of the pre-pause values.
Conclusion: Our data suggest that delaying defibrillation for 20 seconds after a pause in chest compressions
and resuming CPR while the defibrillator is charging
can restore CPP to levels more favorable to RS success.
Whether or not this actually translates into greater RS
success, higher rates of ROSC, and improved survival
remains to be determined.
168
Rescue Shock Timing And Outcomes
Following 12 Minutes Of Untreated
Ventricular Fibrillation
Ryan A. Coute, Timothy J. Mader, Adam R.
Kellogg, Scot A. Millay, Lennard C. Jensen
Baystate Medical Center, Springfield, MA
Background: According to the three-phase time-sensitive model of VF, the metabolic phase begins after
10 minutes of untreated cardiac arrest. Optimal CPR
duration prior to first rescue shock (RS) to maximize
the probability of successful defibrillation during this
phase remains unknown.
Objectives: The purpose of this study was to determine
if 3 minutes of CPR prior to first RS is sufficient to
achieve ROSC after 12 minutes of untreated VF.
Methods: This is a secondary analysis of prospectively
collected data from an IACUC-approved protocol.
Forty-eight Yorkshire swine (weighing 25–30 kg) were
instrumented under anesthesia. VF was electrically
induced. After 12 minutes of untreated VF, CPR was
initiated (and continued prn) and a standard dose of
epinephrine (SDE) (0.01 mg/kg) was given (and
repeated every 3 minutes prn). The first RS was delivered after 3 minutes of CPR (and every 3 minutes thereafter prn). A failed RS was followed (in series) by
vasopressin (VASO, 0.57 mg/kg), amiodarone (AMIO,
4.3 mg/kg), and sodium bicarbonate (BICARB, 1 mEq/
kg) prn. Resuscitation attempts continued until ROSC
was achieved or 20 minutes elapsed without ROSC. The
primary outcome measures were ROSC (SBP>80 mmHg
for >60s) and survival (SBP>60 mmHg for 20 minutes).
Data were analyzed using descriptive statistics.
Vasopressor support was available to maintain
SBP>60 mmHg following ROSC.
Results: ROSC was achieved in 25 of the 48 (52%) animals. Survival occurred in 23 of the 48 (48%) animals.
Our findings are summarized in the table.
Conclusion: Our data suggest that during the metabolic phase of VF, 3 minutes of CPR and 1 SDE may be
insufficient to achieve ROSC on first RS attempt.
A longer duration of CPR and/or additional vasopressors may increase the likelihood of successful defibrillation on first attempt.
Table - Abstract 168:
169
Effect of Time of Day on Prehospital Care
During Out-of-Hospital Cardiac Arrest
Sarah K. Wallace, Benjamin S. Abella, Frances
S. Shofer, Marion Leary, Robert W. Neumar,
C. C. Mechem, David F. Gaieski, Lance B.
Becker, Roger A. Band
Hospital of the University of Pennsylvania,
Philadelphia, PA
Background: There are over 300,000 out-of-hospital
cardiac arrests (OHCAs) each year in the United States.
In most cities, the proportion of patients who achieve
prehospital return of spontaneous circulation (ROSC) is
less than 10%. The association between time of day and
OHCA outcomes in the prehospital setting is unknown.
Objectives: We sought to determine whether rates of
prehospital ROSC varied by time of day. We hypothesized that night OHCAs would exhibit lower rates of
ROSC.
Methods: We performed a retrospective review of cardiac arrest data from a large, urban EMS system.
Included were all OHCAs occurring in individuals
>18 years of age from 1/1/2008 to 12/31/2010. Excluded
were traumatic arrests and cases where resuscitation
measures were not performed. Day was defined as
7:00 am–6:59 pm, while night was 7:00 pm–6:59 am. We
examined the association between time of day and
paramedic-perceived prehospital ROSC in unadjusted
and adjusted analyses. Variables included age, sex,
race, presenting rhythm, AED application by a bystander or first responder, defibrillation, and bystander
CPR performance. Analyses were performed using chi-square tests and logistic regression.
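The unadjusted comparison described above reduces to a chi-square test on a 2x2 table. A minimal sketch follows; the counts are hypothetical, not the study data.

import numpy as np
from scipy.stats import chi2_contingency

#                  bystander CPR   no bystander CPR
table = np.array([[ 73,            1521],   # night OHCAs (hypothetical counts)
                  [161,            1986]])  # day OHCAs (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")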
Results: Our study population comprised 3742 arrest
cases (42.6% at night). Mean age was 65.4 (SD 16.8)
years. Males comprised 55.3% of the cohort; 53.0%
were black. The unadjusted rate of ROSC was 3.1% at
night vs. 4.4% during the day (p = 0.034). Night OHCAs
were significantly less likely to receive bystander-initiated CPR than day OHCAs (4.6% vs. 7.5%, p < 0.001).
AED application did not vary significantly by time of
day. Patients with night OHCAs were significantly
younger (64.2 vs. 66.4 years, p < 0.001) and less likely to
present in a shockable rhythm (16.8% vs. 20.1%,
p = 0.012). After adjusting for significant prehospital
and patient-level risk factors, the association between
time of day and ROSC was no longer significant (odds
ratio [OR] 1.17, 95% CI 0.79–1.73, p = 0.43). Time of day
remained significantly associated with bystander CPR
(OR 2.59, 95% CI 1.66–4.03, p < 0.001).
Conclusion: In adjusted analysis, the significant difference observed between time of day and ROSC no
longer persisted. Night arrests remained significantly
less likely to receive important prehospital care measures, such as bystander CPR, suggesting they may
exhibit more variable prehospital management by lay
and first responders.
170
Lung Protective Ventilation is Uncommon
among ED Patients
Brian M. Fuller1, Nicholas M. Mohr2, Craig A.
McCammon3, Rebecca Bavolek1, Kevin
Cullison4, Matthew Dettmer1, Jacob Gadbaw5,
Sarah Kennedy1, Nicholas Rathert1, Christine
Taylor5
1Washington University School of Medicine, St. Louis, MO; 2University of Iowa Carver College of Medicine, Iowa City, IA; 3Barnes-Jewish Hospital, St. Louis, MO; 4St. Louis University School of Medicine, St. Louis, MO; 5Washington University in St. Louis, St. Louis, MO
Background: Endotracheal intubation is frequently
performed in the emergency department (ED), but no
evidence exists to guide ED mechanical ventilation.
Clinical data suggest that ventilator-induced lung injury
can occur quickly, and may be preventable with the use
of protective ventilatory strategies.
Objectives: (1) To describe current ED mechanical ventilation practice, and (2) to determine whether low tidal
volume ventilation as the initial mechanical ventilation
strategy in ED patients with severe sepsis is associated
with lower 28-day in-hospital mortality and more ventilator-free days.
Methods: Single-center, retrospective observational
cohort study of 250 adult patients with severe sepsis
who were intubated in a 90,000-visit urban academic
ED between June 2005 and June 2010. All patients with
suspected infection and either lactate ≥ 4 mmol/L or SBP ≤ 90 mmHg after a 30 mL/kg fluid bolus were included.
Results: Two hundred forty (96.0%) patients were ventilated initially with volume-targeted ventilation. The
median ED length of stay was 5.5 hours (IQR 4.2–7.5).
One hundred twenty-one (48.4%) patients ventilated in
the ED with severe sepsis died. Corrected for ideal
body weight (IBW), tidal volumes greater than 8 mL/kg
IBW were used in 51.2% of patients. Ventilator peak
pressures >35 cmH2O were observed in 22% of
patients. Obesity was not predictive of high tidal volumes (p = 0.60). Low tidal volume ventilation (<8 mL/kg
IBW) was not associated with 28-day in-hospital mortality (42% vs. 51%, p = 0.17) or ventilator-free days
(22.2 vs. 20.8 days, p = 0.34).
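The tidal-volume classification above hinges on an ideal-body-weight (IBW) correction. The abstract does not state which IBW formula was used; the sketch below assumes the common Devine formula, so treat it as illustrative rather than a description of the study's method.

def ibw_kg(height_cm, male):
    """Devine ideal body weight (assumed formula; kg)."""
    inches_over_5_ft = max(0.0, height_cm / 2.54 - 60.0)
    return (50.0 if male else 45.5) + 2.3 * inches_over_5_ft

def vt_per_kg_ibw(tidal_volume_ml, height_cm, male):
    return tidal_volume_ml / ibw_kg(height_cm, male)

# Example: a 500 mL tidal volume in a 160 cm woman exceeds the 8 mL/kg cutoff.
vt = vt_per_kg_ibw(500, 160, male=False)
print(f"{vt:.1f} mL/kg IBW -> {'high' if vt > 8 else 'low'} tidal volume")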
Conclusion: Low tidal volume ventilation was not associated with 28-day in-hospital mortality, but high tidal
volume and high peak pressure ventilation are common
among intubated ED patients. These findings are
hypothesis-generating for future clinical trials, as the
role of lung protective ventilation has been largely
unexplored in the ED (Grant UL1 RR024992, NIH-NCRR).
171
Life-Threatening Etiologies of Altered
Mental Status in Children
Antonio Muniz
Dallas Regional Medical Center, Mesquite, TX
Background: The epidemiology of children who present to an emergency department with altered mental
status is not well known. The diagnostic evaluation of
these children is controversial and is quite variable
between physicians.
Objectives: The study’s objective was to determine the
factors that may place a child at risk for life-threatening
causes (LTC) of altered mentation.
Methods: Prospective observational evaluation of all
children <17 years old presenting with a primary chief
complaint of altered mental status. No children
were excluded. Data were analyzed using Stata 11 with
continuous variables expressed as means, while categorical variables were summarized as frequencies of
occurrence and assessed for statistical significance
using the chi-square test or Fisher’s exact test.
Results: There were 203 children, 119 (58.6%) males,
140 (68.9%) African Americans, and 62 (30.5%) Caucasians. Mean age was 8.2 ± 5.9 years (95% CI 7.2–
9.1). Hypotension occurred in 6 (2.9%) and hypertension in 22 (10.8%). Tachycardia occurred in 38 (18.7%)
and tachypnea in 27 (13.3%). There were 31 different
diagnoses. Most common diagnoses were: overdoses
27 (13.3%), ethanol intoxication 22 (10.8%), seizures 20
(9.8%), dehydration 19 (9.3%), medication side effects
13 (6.4%), and psychiatric disorders 12 (5.9%). There
were 24 (11.8%) with life-threatening causes (LTC).
Hypotension occurred in 4 (1.9%) and tachycardia in
17 (8.3%) in those with life-threatening causes. WBC
count was elevated in 20 (9.8%); 15 of these had life-threatening causes. Tachycardia had a sensitivity of 70.5%, specificity of 89%, positive predictive value (PPV) of 48%, and negative predictive value (NPV) of 95.8% for LTC. Tachypnea had a sensitivity of 47%, specificity of 89%, PPV of 50%, and NPV of 93% for LTC. Leukocytosis had a sensitivity of 66.6%, specificity of 97.6%, PPV of 80%, and NPV of 95.3% for LTC. Combining tachycardia, tachypnea, and leukocytosis identified children with LTC with a sensitivity of 100%, specificity of 83.5%, PPV of 44%, and NPV of 100%.
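These test characteristics follow directly from 2x2 counts. The sketch below recomputes the leukocytosis row from counts inferred from the abstract (15 true positives among 20 elevated WBC counts, 24 children with LTC, 203 children total); because the abstract's own figures appear internally inconsistent by about one patient, the output only approximates the reported values.

def test_characteristics(tp, fp, fn, tn):
    sens = tp / (tp + fn)   # sensitivity
    spec = tn / (tn + fp)   # specificity
    ppv  = tp / (tp + fp)   # positive predictive value
    npv  = tn / (tn + fn)   # negative predictive value
    return sens, spec, ppv, npv

# Leukocytosis for life-threatening causes (counts inferred, illustrative only).
sens, spec, ppv, npv = test_characteristics(tp=15, fp=5, fn=9, tn=174)
print(f"sens {sens:.1%}, spec {spec:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")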
Conclusion: Altered mentation in children is uncommon but may have life-threatening etiologies. The combination of abnormal vital signs and leukocytosis identified all children with life-threatening causes of altered mentation.
172
The Impact of Early DNR Orders on Patient
Care and Outcomes Following
Resuscitation from Out of Hospital Cardiac
Arrest
Derek K. Richardson, Dana Zive, Mohamud
Daya, Craig D. Newgard
Oregon Health & Science University, Portland,
OR
Background: Among patients successfully resuscitated
from out-of-hospital cardiac arrest (OHCA) and surviving to hospital admission, prior research has suggested
that decisions regarding withdrawal of care are best
made 48 to 72 hours after admission.
Objectives: To evaluate use of early (<24 hours) DNR
(do not resuscitate) orders and their association with
therapeutic procedure counts and outcomes among
patients successfully resuscitated from OHCA surviving
to hospital admission.
Methods: This was a population-based, retrospective
cross-sectional study of adult patients admitted to 332
acute care hospitals in California through the ED with a
primary diagnosis of cardiac arrest from 2002 to 2010.
Our primary exposure variable was a DNR order
recorded within 24 hours of admission. We evaluated
in-hospital mortality, ventilation, and procedures based
on ICD-9 codes. We assessed associations between
early DNR orders and hospital characteristics (size,
rural vs. urban, teaching hospital, OHCA volume, ownership, hospital identity). We used descriptive statistics
and multivariable models to analyze the sample.
Results: 5,212 patients were admitted to California hospitals after OHCA over the 9-year period, of whom 1,692
(32.5%) had a DNR order documented within 24 hours of
admission. Compared to post-arrest patients without an
early DNR order, these patients were less likely to
undergo cardiac catheterization or stenting (1.1% vs.
4.3%), ICD/pacemaker placement (0.1% vs. 1.1%), transfusion (7.6% vs. 11.2%) or ventilation over 96 hours
(8.7% vs. 18.6%) (all p < 0.0001). Patients with early DNR
orders were less likely to survive to hospital discharge
(5.2% vs. 21.6%) with a short length of stay (median
1 day). Variability in hospital rates of early DNR placement was substantial, even after restricting to the top
quartile of OHCA volume (10.5%–64.0%); in multivariate
models, specific hospital factors were not associated
with early DNR order placement.
Conclusion: Successfully resuscitated OHCA patients
with early DNR orders had fewer therapeutic interventions and lower survival to hospital discharge. There
was substantial variability in early DNR use between
hospitals. Providers, patients, and surrogates should be
aware of the outcomes associated with early DNR
placement when making this crucial decision.
173
Does a Simulation Module Educational
Intervention Improve Physician
Compliance and Reduce Patient Mortality
in Severe Sepsis and Septic Shock?
Michelle Sergel1, Erik Nordquist1, Brian
Krieger1, Alan Senh1, Conal Roche1, Nicole
Lunceford1, Tamara Espinoza1, Rashid Kysia1,
Bharat Kumar2, Renaud Gueret1, John Bailitz1
1Cook County (Stroger), Chicago, IL; 2Rush University Research Mentoring Program, Chicago, IL
Background: The Institute for Healthcare Improvement
(IHI) recommends a ‘‘bundle’’ comprised of a checklist of
key clinical tasks to improve physician compliance and
reduce patient mortality in severe sepsis and septic shock
(S&S). In order to educate and improve compliance with
the six-hour goals of the IHI S&S bundle, we conducted a
simulation module educational intervention (SMEI) for
all emergency department (ED) physician staff.
Objectives: Determine whether a SMEI helps to
improve physician compliance with IHI bundle and
reduce patient mortality in ED patients with S&S.
Methods: We conducted a pre-SMEI retrospective
review of four months of ED patients with S&S to
determine baseline pre-SMEI physician compliance and
patient mortality. We designed and completed a SMEI
attended by 25 of 28 ED attending physicians and 28 of
30 ED resuscitation residents. Finally, we conducted a
twenty-month post-SMEI prospective study of ongoing
physician compliance and patient mortality in ED
patients with S&S.
Results: In the four-month pre-SMEI retrospective review, we identified 23 patients with S&S, with 61% overall physician compliance and a 30% mortality rate.
The average ED physician SMEI multiple-choice
pre-test score was 74%, and showed a significant
improvement in the post-test score of 94% (p = 0.0003).
Additionally, 87% of ED physicians were able to
describe three new clinical pearls learned and 85%
agreed that the SMEI would improve compliance. In
the twenty-month post-SMEI prospective study, we identified 144 patients with S&S, with 75% overall physician compliance and a 21% mortality rate. Relative physician compliance improved by 23% (p = 0.0001)
and relative patient mortality was reduced by 32%
(p < 0.0001) when comparing pre- and post-SMEI data.
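One way to check the compliance comparison above is a two-proportion z-test on the underlying counts. The numerators below are back-calculated from the reported percentages (61% of 23 and 75% of 144), and the abstract does not state which test was used, so this is an assumption.

from statsmodels.stats.proportion import proportions_ztest

compliant = [108, 14]   # post-SMEI, pre-SMEI (back-calculated, illustrative)
totals    = [144, 23]
z, p = proportions_ztest(compliant, totals)
print(f"z = {z:.2f}, p = {p:.3f}")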
Conclusion: Our data suggest that a SMEI improves
overall physician compliance with the six hour goals of
the IHI bundle and reduces patient mortality in ED
patients with S&S.
174
Direct Linkage of Low-Acuity Emergency
Department Patients with Primary Care:
A Pseudo-Randomized Controlled Trial
Kelly M. Doran1, Ashley C. Colucci2, Cherry
Huang2, Calvin K. Ngai2, Robert A. Hessler2,
Andrew B. Wallach3, Michael Tanner3, Lewis
R. Goldfrank2, Stephen P. Wall2
1Robert Wood Johnson Foundation Clinical Scholars Program, Yale University School of Medicine and U.S. Department of Veterans Affairs, New Haven, CT; 2Department of Emergency Medicine, New York University, New York, NY; 3General Internal Medicine, Bellevue Hospital Center, New York, NY
Background: Having a usual source of primary care is
known to improve health. Currently only two-thirds of
ED patients have a usual source of care outside the ED,
far short of Healthy People 2020’s target of 84%. Prior
attempts to link ED patients with primary care have
had mixed results.
Objectives: To determine if an intervention directly
linking low-acuity patients with a primary care clinic at
the time of an ED visit could lead to future primary care
linkage.
Methods: DESIGN: Pseudo-randomized controlled trial.
SETTING: Urban safety-net hospital. SUBJECTS: Adults
presenting to the ED 1/07–1/08 for select problems a layperson would identify as low-acuity. Patients were
excluded if they arrived by EMS, had a PCP outside our
hospital, were febrile, or the triage nurse felt they needed
ED care. Consecutive patients were enrolled during weekday
business hours when the primary care clinic was open.
Patients were assigned to usual care in the ED if a provider was ready to see them before they had completed
the baseline study survey. Otherwise they were offered
the intervention if a clinic slot was available. INTERVENTION: Patients agreeing to the intervention were
escorted to a primary care clinic in the same hospital
building. They were assigned a personal physician and
given an overview of clinic services. A patient navigator
ensured patients received timely same-day care. Intervention group patients could refuse the intervention and
instead remain in the ED for care. Both clinic and ED
patients were given follow-up clinic appointments, or a
phone number to call for one, as per usual provider practice. ANALYSIS: The main outcome measure was primary care linkage, defined as having one or more
primary care clinic visits within a year of the index ED
visit for patients with no prior PCP.
Results: 1,292 patients were potentially eligible and 853
were enrolled (662 intervention and 191 controls). Groups
had similar baseline characteristics. Nearly 75% in both
groups had no prior PCP. Using an intention to treat analysis, 50.3% of intervention group patients with no prior
PCP achieved successful linkage (95%CI 45.7–54.9%) vs.
36.9% of the control group (95%CI 28.9–45.4%).
Conclusion: A point-of-care program offering low-acuity ED patients the opportunity to instead be seen at
the hospital’s primary care clinic resulted in increased
future primary care linkage compared to standard ED
referral practices.
175
Ambulatory Care Sensitive Conditions
And The Likelihood Of 30-Day Hospital
Readmissions Through The ED
Joseph A. Tyndall, Wei Hou, Doug Dame,
Donna Carden
University of Florida, Gainesville, FL
Background: Emergency department (ED) visits have
increased dramatically over the last decade along with
growing concerns about available resources to support
care delivery within this safety net. Recent evidence
suggests that 30-day hospital readmissions through the
ED are also rising. Hospital admission for one of 14
ambulatory care sensitive conditions (ACSC) is thought
to be preventable with timely and effective outpatient
care. However, the risk of 30-day hospital readmission
in patients with an ACSC is unexplored.
Objectives: The objective of this study was to examine
30-day readmission rates through the ED for patients
discharged from an inpatient setting with an ACSC compared to patients without a preventable hospitalization.
Methods: Adult ED admissions between Jan 1, 2006
and August 1, 2010 were evaluated. Administrative data
were processed using AHRQ’s QI Windows Application
Version 4.1a and SAS to flag specific diagnosis codes
that qualified as an ACSC. Statistical analysis was performed using SAS v9.2. Chi-square testing was used to
test for significant differences in 30-day readmission
through the ED in patients previously admitted for an
ACSC compared to those admitted for a non-preventable condition. Charlson co-morbidity index was
applied to discharge data to control for coexisting
illness. Cumulative hazard curves for readmission
within 30 days were generated based on Nelson-Aalen estimators and compared using a log-rank test. A p value ≤ 0.05 was considered significant.
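A minimal sketch of the survival machinery named above, using the lifelines library with synthetic data (not the study sample): Nelson-Aalen cumulative hazards for time to 30-day readmission and a log-rank comparison between groups.

import numpy as np
from lifelines import NelsonAalenFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
t_acsc  = np.minimum(rng.exponential(60.0, 300), 30.0)    # days to readmission, censored at 30
t_other = np.minimum(rng.exponential(120.0, 3000), 30.0)
e_acsc  = t_acsc < 30.0                                   # event observed within 30 days
e_other = t_other < 30.0

naf = NelsonAalenFitter()
naf.fit(t_acsc, event_observed=e_acsc, label="ACSC")
print(naf.cumulative_hazard_.tail(1))                     # cumulative hazard at 30 days

result = logrank_test(t_acsc, t_other, event_observed_A=e_acsc, event_observed_B=e_other)
print(f"log-rank p = {result.p_value:.4f}")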
Results: 78,982 index admissions were analyzed with
5475 (6.9%) representing admissions for an ACSC. Of
12,574 readmissions within 30 days of index hospitalization, 990 represented admission for an ACSC. Patients
with ACSC-associated admissions were more likely to
be readmitted within 30 days than patients with nonpreventable admissions (p < 0.0001). Patients with an
ACSC-associated admission and a Charlson comorbidity index of >1 had the highest rate of readmission
within 30 days of a hospital discharge.
Conclusion: Patients with ACSC-associated hospital
admissions are more likely to present to the ED and be
readmitted within 30 days. A Charlson comorbidity
index of >1 exacerbates the risk of a 30-day readmission.
176
Visit Urgency Between Frequent
Emergency Department Users In A Large
Metropolitan Region Network
Edward M. Castillo, Gary M. Vilke, James P.
Killeen, Jesse J. Brennan, Chan T. Chan
University of California, San Diego, San Diego,
CA
Background: There is growing focus on so-called ‘‘Hot
spotter’’ or frequent flier (FF) patients who are high utilizers of health care resources, particularly acute care
services, from both care quality and cost standpoints.
Objectives: We sought to evaluate FF use of emergency services in a large, metropolitan area served by
multiple hospital emergency departments (EDs).
Methods: We conducted a region-wide retrospective
cohort study of all visits to 16 hospital EDs in a metropolitan region of 3.2 million from 1/1/08–12/31/10 using
data from the California Office of Statewide Health
Planning and Development (OSHPD) inpatient and ED
dataset. Data included demographics and visit specific
information. FF patients were defined as those having
at least 6 ED visits and Super Users (SU) were defined
as having at least 24 ED visits within any 12-month
span of the study period. Visits were compared
between groups as to primary reason for the ED visit
and visit urgency using the ‘‘Billings Algorithm’’ to
determine ED visit necessity defined as non-emergent
(NE), emergent: primary care treatable (EPCT), emergent: preventable (EP), and emergent: not preventable
(ENP). The probability of each urgency level is reported
and compared for differences.
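The Billings algorithm distributes each visit's primary diagnosis probabilistically across the four urgency categories. The sketch below shows that accounting in miniature; the diagnosis codes and weights are made up, since the published weight tables are not reproduced here.

# Hypothetical diagnosis-code weights: P(NE, EPCT, EP, ENP); illustrative only.
WEIGHTS = {
    "462":   (0.70, 0.30, 0.00, 0.00),   # pharyngitis (made-up weights)
    "428.0": (0.00, 0.10, 0.50, 0.40),   # heart failure (made-up weights)
}
UNIFORM = (0.25, 0.25, 0.25, 0.25)       # fallback for unclassified codes

visits = ["462", "428.0", "462"]
totals = [0.0, 0.0, 0.0, 0.0]
for dx in visits:
    for i, w in enumerate(WEIGHTS.get(dx, UNIFORM)):
        totals[i] += w

ne_share = totals[0] / len(visits)
print(f"P(non-emergent) across visits = {ne_share:.2f}")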
Results: During the study period, 925,719 patients with
a valid patient identifier were seen in the ED resulting
in 2,016,537 total visits. Of these, 28,969 patients (3%)
were classified as FF and responsible for 347,004 of all
visits (17.2%), and 1,261 patients (0.1%) were SU and
responsible for 77,080 (3.8%) of all visits. FF and SU
were more likely than all other ED patients to present
with mental health (5.2% vs 5.6% vs 2.3%, respectively)
and drug/alcohol-related problems (3.2% vs 5.7% vs
1.2%, respectively). With respect to visit necessity, visits
by FF and SU were more likely than other ED patients
to be classified as NE (21.4%, 23.9%, 18.9%, respectively), whereas there were no other differences in the
other necessity classifications. All comparisons were
statistically significant.
Conclusion: In this 3-year study conducted in a large
metropolitan area, FF and SU patients who are seen
across the EDs in the region present with mental
health, drug/alcohol, and non-emergent problems more
frequently than other ED patients.
177
Return Visits to the Emergency
Department and Costs: A Multi-State
Analysis
Reena Duseja, Ted Clay, Mitzi Dean, R. Adams
Dudley
UCSF, San Francisco, CA
Background: Return visits to the emergency department (ED) or hospital after an index ED visit affect
patient care and place a strain on the U.S. health system. However, little is known at a multi-state level
about the rates at which ED patients return for care,
the characteristics of patients who return, or the costs
associated with return visits.
Objectives: The primary objectives were to determine
the (1) rates of return for ED care or for hospitalization
after an index ED visit, (2) characteristics of patients who
return, and (3) costs associated with those return visits.
Methods: Data were used from six geographically dispersed states (Arizona, California, Florida, Nebraska,
Utah, and Hawaii) that had available linkage files, for
the years 2006–2008, in the Healthcare Cost and Utilization Project (HCUP) State Emergency Department Databases (SEDD) and State Inpatient Databases (SID). Patients 17 or younger were excluded. Costs
were calculated using publicly available Medicare rates
for the year 2008, using CPT codes for ED visits, and
DRG codes for inpatient admissions.
Results: Among 37,363,104 records from 2006–2008,
34,168,691 (91.5%) had encrypted identifiers to allow
linkage. The largest number of return visits occurred
between days 0 and 3; the cumulative return visit rate
was 8.2%, with 5.8% resulting in the patient being discharged back to the community. Return visits within 3 days ending in discharge occurred most often among patients aged 18–44 (60%), while 26.5% of return visits resulting in admission involved patients over 65 years of age.
Patients with Medicaid had the highest rate of return to
the ED within 3 days (10.5%), while those with Medicare
had the highest rate of admission if they returned (3.9%).
The Clinical Classification System (CCS) groups with the
highest 3-day revisit rates were skin and subcutaneous
tissue infections (20.7%), and administrative/social visits
accounted for 10.1% of ED returns. Cumulative ED costs for return visits within 3 days were $550 million, and return visits leading to an inpatient admission totaled $4 billion.
Conclusion: Using a population-level, longitudinal, and
multi-state analysis, the rate of return visits within
3 days is higher than previously reported, with nearly 1
in 12 patients returning to the ED. We also provide the
first estimation of health care costs for ED revisits.
178
Can Patients Accurately Assess Their Own
Acuity? Findings From A Large Scale
Emergency Department Patient Experience
Survey
Alaina Aguanno1, Timothy Cooke2, John
Cowell2, Andrew McRae1, Brian Rowe3,
Eddy Lang1
1University of Calgary, Calgary, AB, Canada; 2Health Quality Council of Alberta, Calgary, AB, Canada; 3University of Alberta, Edmonton, AB, Canada
Background: The ability of patients to accurately determine their level of urgency is important in planning
strategies that divert patients away from EDs. An understanding of patient self-triage abilities is needed to
inform health policies targeting how and where
patients access acute care services within the health
care system.
Objectives: To determine the accuracy of a patient’s
self-assessment of urgency compared with triage
nurses’ assessments.
Methods: Setting: ED patients are assigned a score by
trained nurses according to the Canadian Emergency
Department Triage and Acuity Scale (CTAS). We present a cross-sectional survey of a random patient sample
from 12 urban/regional EDs conducted during the winters of 2007 and 2009. This previously validated questionnaire, based on the British Healthcare Commission
Survey, was distributed according to a modified
Dillman protocol. Exclusion criteria consisted of: age 0–
15 years, left prior to being seen/treated, died during
ED visit, no contact information, presented with a privacy-sensitive case. Alberta Health Services provided
linked non-survey administrative data.
Results: 21,639 surveys were distributed, with a response rate
of 46%. Patients rated health problems as life-threatening (6%), possibly life-threatening (22%), urgent (30%),
somewhat urgent (37%), or not urgent (5%). Triage
nurses assigned the same patients CTAS scores of I
(<1%), II (20%), III (45%), IV (29%) or V (5%). Patients
self-rated their condition as 3 or 4 points less urgent
than the assigned CTAS score (<1% of the time), 2
points less urgent (5%), 1 point less urgent (25%),
exactly as urgent (38%), 1 point more urgent (24%), 2
points more urgent (7%), or 3 or 4 points more urgent
(1%). Among CTAS I or II patients, 54%
described their problem as life-threatening/possibly
life-threatening, 26% as urgent (risk of permanent damage), 18% as urgent (needed to be seen that day), and
2% as not urgent (wanted to be but did not need to be
seen that day).
Conclusion: The majority of ED patients are able to
accurately assess the acuity of their problem.
Encouraging patients with low-urgency conditions to
self-triage to lower-acuity sources of care may relieve
stress on EDs. However, physicians and patients must
be aware that a small minority of patients are unable to
self-triage safely.
179
Intervention to Integrate Health and Social
Services for Frequent ED Users with
Alcohol Use Disorders
Ryan McCormack, Lily Hoffman,
Lewis Goldfrank
NYU School of Medicine/Bellevue Hospital,
New York, NY
Background: The ED is a point of frequent contact for
medically vulnerable, chronically homeless patients
with alcohol use disorders, or chronic public inebriates
(CPI). Despite this population’s exposure to health and
social agencies, its outcomes suffer due, in part, to lack
of stable housing and fragmented, ‘treat and street’
medical care.
Objectives: NYU School of Medicine and the Bellevue
Hospital Center ED partnered with the Department of
Homeless Services (DHS) to implement a multifaceted
pilot initiative. This integration of services is hypothesized to improve access to housing and comprehensive medical care, thereby reducing costly ED and inpatient admissions and homelessness. Engaging the ED
as a point of intervention, a cohort of CPIs received
needs assessments, enhanced care management, and
coordination with DHS outreach.
Methods: CPIs were identified primarily through an
administrative database search and chart reviews. At
the time of this 10-month analysis, 20 of the 56 patients
who met inclusion criteria were enrolled. Enrolled
patients had a minimum of 20 ED visits in a 24-month
period with at least one visit within 5 months of the
pilot commencement in January 2011 and met the DHS
standard for chronic homelessness. Preference was
given to those with greater visit frequency, co-morbidities, or staff referral. The intervention for enrolled
patients included the ongoing implementation of individualized multidisciplinary action plans, case management, and coordination with the housing outreach team
upon discharge.
Results: Eighteen of the 20 enrolled patients were placed
in housing. After first housing placement (mean length,
4.7 months), monthly ED and inpatient use declined 48%
and 40%, respectively. ED and inpatient use by the non-enrolled remained stable throughout the study period.
Prior to intervention, hospital use had increased over
time for the enrolled patients (Figures 1,2).
Conclusion: ED-based collaboration amongst medical
and social services for a small cohort of CPIs resulted
in housing placements and reduced ED and inpatient
visits. While promising, the results of this interim pilot
data are limited by the non-random sampling method,
power, duration, and singular location. Further study is
needed to determine the intervention’s effect on public
health expenditures and patient outcomes.
180
A Comparison Of Hand-On-Syringe Versus
Hand-On-Needle Technique For Ultrasound-Guided Nerve Blocks
Brian Johnson, Arun Nagdev, Michael Stone,
Andrew Herring
Alameda County Medical Center, Oakland, CA
Background: Ultrasound-guided regional nerve blocks
are increasingly used in emergency care. The hand-on-syringe (HS) needling technique is ideally suited to the
fast-paced ED setting because it allows a single operator to perform the block without assistance. In the
anesthesia literature, the HS technique is commonly
assumed to provide less needle control than the alternative two-operator hand-on-needle (HN) technique; however, this assumption has never been directly tested.
Objectives: To compare needle control under ultrasound guidance by emergency medicine (EM) residents
using HN and HS techniques on a standardized phantom simulation model.
Methods: This prospective, randomized study evaluated task performance on a simulated ultrasound-guided nerve block phantom model comparing HN and
HS techniques. Participants were EM residents at a
large, urban, academic hospital. Each participant performed a set of structured needling maneuvers (both
simple and difficult) on a standardized phantom simulator. Parameters of task performance and needle control
were evaluated including time to task completion, needle visualization during advancement, and placement of
the needle tip at target. Additionally, resident technique
preference was assessed using a post-task survey.
Results: Sixty tasks performed by 10 EM residents
were analyzed. The HN technique did not demonstrate superior control compared to the HS technique. There was no difference in time to complete
the simple model (HN vs. HS, 18 seconds vs. 18 seconds, p = 0.92). There was no difference in time to
complete the difficult model (HN vs. HS, 56 seconds
vs. 50 seconds, p = 0.63). There was no difference in
first pass success between the HN (8%) and HS
(12%) technique. Needle visualization was similar for
both techniques. There were more instances of
advancing the needle tip into the target in the HN
(4/60) versus the HS (1/60) technique. Most residents
preferred the HS technique (60%), 40% preferred the
HN technique, and 10% had no preference.
Conclusion: For EM residents learning ultrasound-guided nerve blocks, the HN technique did not provide
superior needle control. Our results suggest that the
single-operator HS technique provides equivalent
needle control when compared to the two-operator HN
technique.
181
Does Level of Training Matter When EM
Residents Provide Patient Care While
Distracted?
Dustin Smith1, Jeffrey Cukor2, Gloria Kuhn3,
Daniel G. Miller4
1Loma Linda University Medical Center, Loma Linda, CA; 2University of Massachusetts, Worcester, MA; 3Wayne State University, Detroit, MI; 4University of Iowa Hospitals and Clinics, Iowa City, IA
Background: Work interruptions during patient care
have been correlated with error. It is unknown at what
level of training EM physicians become competent to
execute patient care tasks despite distractions.
Objectives: We hypothesized that level of training
affects EM resident physicians’ ability to execute
required patient care tasks in the simulation environment when a second patient with a STEMI interrupts a
septic shock case.
Methods: The study was a multisite prospective observational cohort study. The study population consisted
of EM residents in their first 3 years of EM training.
Data were collected spring 2011. Each subject performed a standardized simulated encounter by evaluating and treating a patient in septic shock. At a
predetermined point in every case the septic patient
became hypotensive. Shortly thereafter, the subject was
given a STEMI ECG for a separate chest pain patient in
triage and required to verbalize an interpretation and
action. Data were collected on the subjects’ treatment
of the septic shock patient with acceptable interventions
defined as administration of appropriate antibiotics and
fluids per EGDT and recognition of STEMI.
Results: 91 subjects participated (30 PGY1s, 32 PGY2s,
and 29 PGY3s). 87 properly managed the patient with
septic shock (90.0% PGY1s, 100% PGY2s, 96.6% PGY3s; p-value 0.22). Of the 87 who successfully managed
the septic shock, 80 correctly identified STEMI on the
simulated STEMI patient (86.7% PGY1s, 96.9% PGY2s,
93.1% PGY3s; p-value 0.35). PGY2s were 5.39 times
more likely than PGY1s to correctly identify a STEMI
(CI 0.56–51.5). PGY3s were 2.26 times more likely than
PGY1s to correctly identify a STEMI (CI 0.38–13.5).
Conclusion: When management of septic shock was
interrupted with a STEMI ECG in simulation we
observed no significant difference in completion of
early goal directed therapy or recognition of STEMI
when compared across different years of training.
182
Developing a Perfused Cadaver Training
Model for Invasive Lifesaving Procedures:
Uncontrolled Hemorrhage
Robert A. De Lorenzo, John A. Ward, Syed H.
Husaini, Allison L. Abplanalp, Suzanne McCall
Brooke Army Medical Center, Fort Sam
Houston, TX
Background: This is a project to develop a perfused
human cadaver model to train military medical personnel in invasive lifesaving procedures. The simulation
evaluated was hemorrhage control, and a swine carcass
model served as a prototype.
Objectives: Develop a perfused swine carcass model
for hemorrhage control which can be used to develop a
perfused human cadaver model for military medical
training purposes.
Methods: Carcasses were exsanguinated, eviscerated,
and refrigerated overnight. The next day, arteries supplying the neck and iliac regions were cannulated.
Regional circulations were perfused with red fluid
using a pulsatile blood pump. Surgical wounds were
made in the neck, thigh, and popliteal fossa to open
major arteries and simulate hemorrhage. QuickTime
digital video files were recorded for evaluation.
Results: Twenty-four 30- to 60-kg female swine carcasses were used to develop the model, and one fresh-frozen human cadaver was used to test the concept. Web
belts and combat application tourniquets (CAT) were
used for validation testing. After establishing simulated
hemorrhage, pulse pressure oscillations were observed.
The tourniquet was tightened until hemorrhage stopped.
When the tourniquet was released, blood spurted from
the injured artery as hydrostatic pressure decayed. Pressure and flow were recorded in three animals (see table).
The concept was proof-tested in a single fresh frozen
human cadaver with perfusion through the femoral
artery and hemorrhage from the popliteal artery. The
results were qualitatively and quantitatively similar to
the swine carcass model.
Conclusion: A perfused swine carcass can simulate
exsanguinating hemorrhage for training purposes and
serves as a prototype for a fresh-frozen human cadaver
model. Additional research and development are
required before the model can be widely applied.
Table - Abstract 182: Pressure and flow measurements from perfused swine carcass models

                                              Pressure (mmHg)                    Flow (L/min)
Animal   Perfused Artery   Statham Site       Low   High   Probe Site            Low    High   Rate (strokes/min)
1        Femoral a.        Femoral a.         40    215    Thoracic aorta        0.00   0.60   34.3
2        Thoracic aorta    Thoracic aorta     73    84     Thoracic aorta        0.00   1.03   36.9
3        Thoracic aorta    Thoracic aorta     57    99     Thoracic aorta        0.07   0.78
3        Thoracic aorta    Abdominal aorta    80    85     Abdominal aorta       0.00   1.20
183
Development of a Simulation-Enhanced
Multidisciplinary Teamwork Training
Program in a Pediatric Emergency
Department
Susan Duffy, Linda Brown, Frank Overly
Brown University, Providence, RI
Background: In the pediatric emergency department
(PED), clinicians must work together to provide safe
and effective care. Crisis resource management (CRM)
principles have been used to improve team performance in high-risk clinical settings, while simulation
allows practice and feedback of these behaviors.
Objectives: To develop a multidisciplinary educational
program in a PED using simulation-enhanced teamwork training to standardize communication and
behaviors and identify latent safety threats.
Methods: Over 6 months a workgroup of physicians
and nurses with experience in team training and simulation developed an educational program for clinical
staff of a tertiary PED. Goals included: create a didactic
curriculum to teach the principles of CRM, incorporate
principles of CRM into simulation-enhanced team training in-situ and center-based exercises, and utilize
assessment instruments to evaluate for teamwork, completion of critical actions, and presence of latent safety
threats during in-situ SIM resuscitations.
Results: During Phase I, 130 clinicians, divided into
teams, participated in 90-minute pre-training assessments of PALS-based in-situ simulations. In Phase II,
staff participated in a 6-hour curriculum reviewing key
CRM concepts, including team training exercises utilizing simulation and expert debriefing. In Phase III, staff
participated in post-training 90 minute teamwork and
clinical skills assessments in the PED. In all phases, critical action checklists (CAC) were tabulated by simulation educators. In-situ simulations were recorded for
later review using the assessment tools. After each simulation, educators facilitated discussion of perceptions
of teamwork and identification of systems issues and
latent hazards. Overall, 54 in-situ simulations were conducted capturing 97% of the physicians and 84% of the
nurses. CAC data were collected by an observer and
compared to video recordings. Over 20 significant systems issues, latent hazards, and knowledge deficits
were identified. All components of the program were
rated highly by 90% of the staff.
Conclusion: A workgroup of PEM, simulation, and
team training experts developed a multidisciplinary
team training program that used in-situ and center-based simulation and a refined CRM curriculum.
Unique features of this program include its multidisciplinary focus, the development of a variety of assessment tools, and use of in-situ simulation for evaluation
of systems issues and latent hazards. This program was
tested in a PED and findings will be used to refine care
and develop a sustainment program while addressing
issues identified.
184
ACLS Training: Does High-Fidelity
Simulation Matter?
Lauren N. Weinberger
Hospital of the University of Pennsylvania,
Philadelphia, PA
Background: An emerging area of simulation research
seeks to identify what type of simulation technology
offers the greatest benefit to the learner. Our study
intends to quantitatively assess the performance of
learners while varying the fidelity of simulation technology used to teach Advanced Cardiac Life Support
(ACLS).
Objectives: Our hypothesis is that participants trained
on high-fidelity mannequins will perform better than
participants trained on low-fidelity mannequins on both
the ACLS written exam and in performance of critical
actions during megacode testing.
Methods: The study was performed in the context of
an ACLS Initial Provider course for new PGY1 residents
at the Penn Medicine Clinical Simulation Center and
involved three training arms: 1) low fidelity (low-fi):
Torso-Rhythm Generator; 2) mid-fidelity (mid-fi): Laerdal SimMan turned OFF; and 3) high-fidelity (high-fi):
Laerdal SimMan turned ON. Training in each arm of
the study followed standard AHA protocol. Educational
outcomes were evaluated by written scores on the
ACLS written examination and expert rater reviews of
ACLS megacode videos performed by trainees during
the course. A sample of 54 subjects were randomized
to one of the three training arms: low-fi (n = 18), mid-fi
(n = 18), or high-fi (n = 18).
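The across-group comparison reported in the Results below is a one-way analysis of variance; a minimal sketch follows, with synthetic score vectors matching the n = 18 arms but not the study data.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
low_fi  = rng.normal(0.9, 0.1, 18)   # hypothetical written post-test scores
mid_fi  = rng.normal(0.9, 0.1, 18)
high_fi = rng.normal(0.8, 0.1, 18)

F, p = f_oneway(low_fi, mid_fi, high_fi)
print(f"F = {F:.2f}, p = {p:.3f}")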
Results: Statistical significance across the groups was
determined using analysis-of-variance (ANOVA). The
three groups had similar written pre-test scores [low-fi
0.4 (0.1), mid-fi 0.5 (0.1), and high-fi 0.4 (0.2)] and written post-test scores [low-fi 0.9 (0.1), mid-fi 0.9 (0.1), and
high-fi 0.8 (0.1)]. Similarly, test improvement was not
significantly different. After completion of the course,
high-fi subjects were more likely to report they felt
comfortable in their simulator environment (p = 0.005).
Low-fi subjects were less likely to perceive a benefit in
ACLS training from high-fi technology (p < 0.001).
ACLS Instructors were not rated significantly different
by the subjects using the Debriefing Assessment for
Simulation in Healthcare© (DASH) student version
except for element 6, where the high-fi group subjects
reported lower scores (6.1 vs 6.6 and 6.7 in the other
groups, p = 0.046).
Conclusion: Overall, there was no difference among
the three groups in test scores or test improvement.
Evaluations of the training modality were different in
regards to user comfort and utility of simulator
training.
ACADEMIC EMERGENCY MEDICINE • April 2012, Vol. 19, No. 4, Suppl. 1
185
Using Heart Rate Variability as a
Physiologic Marker of Stress During the
Performance of Complex Tasks
Douglas Gallo1, Walter Robey III1, Carmen
Russoniello2, Matthew Fish2, Kori Brewer1
1Pitt County Memorial Hospital, Greenville, NC; 2East Carolina University, Greenville, NC
Background: Managing critically ill patients requires
that complex, high-stress tasks be performed rapidly
and accurately. Clinical experience and training have
been assumed to translate into increased proficiency
with these tasks.
Objectives: We sought to determine if stress associated
with the performance of a complex procedural task can
be affected by level of medical training. Heart rate variability (HRV) is used as a measure of autonomic balance, and therefore an indicator of the level of stress.
Methods: Twenty-one medical students and emergency
medicine residents were enrolled. Participants performed airway procedures on an airway management
trainer. HRV data were collected using a continuous
heart rate variability monitoring system. Participant
HRV was monitored at baseline, during the unassisted
first attempt at endotracheal intubation, during supervised practice, and then during a simulated respiratory
failure clinical scenario. Standard deviation of beat-to-beat variability (SDNN), very low frequency (VLF), total
power (TP), and low frequency (LF) were analyzed to
determine the effect of practice and level of training on
the level of stress. A Cohen’s d test was used to determine differences between study groups.
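Two of the quantities above are easy to make concrete: SDNN is the standard deviation of the normal-to-normal (beat-to-beat) intervals, and Cohen's d is a standardized mean difference. The sketch below uses synthetic values, not the study recordings.

import numpy as np

def sdnn(rr_intervals_ms):
    """Standard deviation of beat-to-beat (NN) intervals, in ms."""
    return np.std(rr_intervals_ms, ddof=1)

def cohens_d(a, b):
    """Standardized mean difference with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                     / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

rng = np.random.default_rng(3)
residents = rng.normal(55.0, 10.0, 10)   # hypothetical per-subject SDNN values
students  = rng.normal(44.0, 10.0, 11)
print(f"Cohen's d = {cohens_d(residents, students):.2f}")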
Results: SDNN data showed that second-year residents
were less stressed during all stages than were fourth-year medical students (avg d = 1.12). VLF data showed
third-year residents exhibited less sympathetic activity
than did first-year residents (avg d = −0.68). The opportunity to practice resulted in less stress for all participants. TP data showed that residents had a greater
degree of control over their autonomic nervous system
(ANS) than did medical students (avg d = 0.85). LF data
showed that subjects were more engaged in the task at
hand as the level of training increased indicating autonomic balance (avg d = 0.80).
Conclusion: Our HRV data show that stress associated
with the performance of a complex procedural task is
reduced by increased training. HRV may provide a
quantitative measure of physiologic stress during the
learning process and thus serve as a marker of when a
subject is adequately trained to perform a particular
task.
186
An Experimental Comparison of
Endotracheal Intubation During Ongoing
CPR With Manual Compression versus
Automated Compression
Bob Cambridge, Amy Chelin, Austin Lamb,
John Hafner
OSF St. Francis Medical Center, Peoria, IL
Background: The ACLS recommendations for CPR are
that chest compressions should be uninterrupted to
help maintain perfusion pressures. Every time CPR
stops, the perfusion pressure drops, and it takes time to return to a clinically helpful level. One common
reason CPR is stopped in the ED or in the prehospital
setting is to place a definitive airway through tracheal
intubation. Intubation during ongoing compressions is
difficult due to the randomness of tracheal motion secondary to the chest compressions.
Objectives: We seek to examine whether intubation
during CPR can be done as efficiently as intubation
without ongoing CPR. The hypothesis is that the predictable movement of an automated chest compression
device will make intubation easier than the random
movement from manual CPR.
Methods: The project was an experimental controlled
trial and took place in the emergency department at a
tertiary referral center in Peoria, Illinois. Emergency
medicine residents, attendings, paramedics, and other
ACLS trained staff were eligible for participation. In
randomized order, each participant attempted intubation on a mannequin with no CPR ongoing, during CPR
with a human compressor, and during CPR with an
automatic chest compression device (Physio Control
Lucas 2). Participants could use whichever style laryngoscope they felt most comfortable with and they were
timed during the three attempts. Success was determined after each attempt.
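One reasonable way to compare the success rates reported in the Results below (38/43 vs. 32/43) is Fisher's exact test; the abstract does not name its test, so this choice is an assumption.

from scipy.stats import fisher_exact

#         success  failure
table = [[38, 5],   # no-CPR (control) group, 38/43
         [32, 11]]  # manual-CPR group, 32/43
odds_ratio, p = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p:.2f}")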
Results: There were 43 participants in the trial. The
success rates in the control group and the automated CPR group were both 88% (38/43), and the success rate
in the manual CPR group was 74% (32/43). The differences in success rates were not statistically significant
(p = 0.99 and p = 0.83). The automated CPR group had
the fastest average time (13.6 sec; p = 0.019). The mean
times for intubation with manual CPR and no CPR
were not statistically different (17.1 sec, 18.1 sec;
p = 0.606).
Conclusion: The success rate of tracheal intubation
with ongoing chest compression was the same as the
success rate of intubation without CPR. Although intubation with automatic chest compression was faster
than during other scenarios, all methods were close to
the 10 second timeframe recommended by ACLS.
Based on these findings, it may not always be necessary
to hold CPR to place a definitive airway; however,
further studies will be needed.
187
Fibroblast Growth Factor 2 Affects
Vascular Remodeling After Acute
Myocardial Infarction
Stacey L. House, Thomas Belanger, Carla
Weinheimer, David Ornitz
Washington University in St. Louis, St. Louis, MO
Background: After acute myocardial infarction, vascular remodeling in the peri-infarct area is essential to
provide adequate perfusion, prevent additional myocyte
loss, and aid in the repair process. We have previously
shown that endogenous fibroblast growth factor 2
(FGF2) is essential to the recovery of contractile
function and limitation of infarct size after cardiac
ischemia-reperfusion (IR) injury. The role of FGF2 in
vascular remodeling in this setting is currently
unknown.
Objectives: Determine the role of endogenous FGF2 in
vascular remodeling in a clinically relevant, closed-chest
model of acute myocardial infarction.
Methods: Mice with a targeted ablation of the Fgf2 gene
(Fgf2 knockout) and wild type controls were subjected to
a closed-chest model of regional cardiac IR injury. In this
model, mice were subjected to 90 minutes of occlusion
of the left anterior descending artery followed by reperfusion for either 1 or 7 days. Immunofluorescence was
performed on multiple histological sections from these
hearts to visualize capillaries (endothelium, anti-CD31
antibody), larger vessels (venules and arterioles, anti-smooth muscle actin antibody), and nuclei (DAPI). Digital
images were captured, and multiple images from each
heart were measured for vessel density and vessel size.
Results: Sham-treated Fgf2 knockout and wild type
mice show no differences in capillary or vessel density
suggesting no defect in vessel formation in the absence
of endogenous FGF2. When subjected to closed-chest
regional cardiac IR injury, Fgf2 knockout hearts had
normal capillary and vessel number and size in the
peri-infarct area after 1 day of reperfusion compared to
wild type controls. However, after 7 days, Fgf2 knockout hearts showed significantly decreased capillary and
vessel number and increased vessel size compared to
wild type controls (p < 0.05).
Conclusion: These data show the necessity of endogenous FGF2 in vascular remodeling in the peri-infarct
zone in a clinically relevant animal model of acute myocardial infarction. These findings may suggest a potential role for modulation of FGF2 signaling as a
therapeutic intervention to optimize vascular remodeling in the repair process after myocardial infarction.
188
The Diagnosis of Aortic Dissections by ED
Physicians is Rare
Scott M. Alter, Barnet Eskin, John R. Allegra
Morristown Medical Center, Morristown, NJ
Background: Aortic dissection is a rare event. The most
common symptom of dissection is chest pain, but chest
pain is a frequent emergency department (ED) chief
complaint and other diseases that cause chest pain, such
as acute coronary syndrome and pulmonary embolism,
occur much more frequently. Furthermore, 20% of dissections present without chest pain and 6% are entirely painless. For
all these reasons, diagnosing dissection can be difficult
for the ED physician. We wished to quantify the magnitude of this problem in a large ED database.
Objectives: Our goal was to determine the number of
patients diagnosed by ED physicians with aortic dissections compared to total ED patients and to the total
number of patients with a chest pain diagnosis.
Methods: Design: Retrospective cohort. Setting: 33 suburban, urban, and rural New York and New Jersey EDs
with annual visits between 8,000 and 75,000. Participants:
Consecutive patients seen by ED physicians from January 1, 1996 through December 31, 2010. Observations:
We identified aortic dissections using ICD-9 codes and
chest pain diagnoses by examining all ICD-9 codes used
over the period of the study and selecting those with a
non-traumatic chest pain diagnosis. We then calculated
the number of total ED patients and chest pain patients
for every aortic dissection diagnosed by emergency physicians. We determined 95% confidence intervals (CIs).
Results: From a database of 9.5 million ED visits, we
identified 782 (0.0082%) aortic dissections, or one for
every 12,200 (95% CI 11,400 to 13,100) visits. The mean
age of aortic dissection patients was 58 ± 19 years and
57% were female. Of the total visits there were 763,000
(8%) with a chest pain diagnosis. Thus there is one aortic dissection diagnosis for every 980 (95% CI 910 to
1,050) chest pain diagnoses.
Conclusion: The diagnosis of aortic dissections by ED
physicians is rare. An ED physician seeing 3,000 to
4,000 patients a year would diagnose an aortic dissection approximately once every 3 to 4 years. An aortic
dissection would be diagnosed once for approximately
every 1,000 ED chest pain patients.
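The rate arithmetic above is straightforward to reproduce. Below is a minimal sketch; the abstract does not state its CI method or the exact denominator, so an exact (Clopper-Pearson) binomial interval and the rounded 9.5 million figure are assumed for illustration.

```python
# Sketch of the "one dissection per N visits" arithmetic in abstract 188.
# Assumptions: Clopper-Pearson interval; rounded 9.5 million denominator.
from statsmodels.stats.proportion import proportion_confint

dissections, visits = 782, 9_500_000
rate = dissections / visits
lo, hi = proportion_confint(dissections, visits, alpha=0.05, method="beta")

print(f"rate: {rate:.4%}")                        # ~0.0082% of all ED visits
print(f"one dissection per {1 / rate:,.0f} visits")   # ~12,100 (reported: 12,200)
print(f"95% CI: {1 / hi:,.0f} to {1 / lo:,.0f}")      # ~11,400 to 13,100, as reported
```

Small differences from the published figures reflect rounding of the visit denominator.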
189
Prevalence and ECG Findings for Patients
with False-positive Cardiac Catheterization
Laboratory Activation among Patients
with Suspected ST-Segment Elevation
Myocardial Infarction
Kelly N. Sawyer1, Audra L. Robinson2,
Charlotte S. Roberts2, Michael C. Kurz2,
Michael C. Kontos2
1William Beaumont Hospital, Royal Oak, MI; 2Virginia Commonwealth University, Richmond, VA
Background: Patients presenting with an initial electrocardiogram (ECG) consistent with ST-elevation myocardial infarction (STEMI) represent a cardiovascular
emergency. Low levels of false cardiac catheterization
laboratory (CCL) activation are acceptable to ensure
maximum sensitivity.
Objectives: To describe the patients with false positive
CCL activation presenting to our institution with potential STEMI.
Methods: This study was a case series conducted from
June 2006 to December 2010 of consecutive CCL activations from the ED at our urban, academic, tertiary care
hospital. Patients were excluded if they suffered a cardiac arrest, were transferred from another hospital, or
if the CCL was activated for an inpatient or from EMS
in the field. FP CCL activation was defined as 1) a
patient for whom activation was cancelled in the ED
and ruled out for MI or 2) a patient who went to catheterization but no culprit vessel was identified and MI
was excluded. ECGs for FP patients were classified
using standard criteria. Demographic data, cardiac biomarkers, and all relevant time intervals were collected
according to an on-going quality assurance protocol.
Results: A total of 506 CCL activations were reviewed,
with 68% male, average age 57, and 59% black. There
were 210 (42%) true STEMIs and 86 (17%) FP activations. There were no significant differences between
the FP patients who did and did not have catheterization. For those FP patients who had a catheterization
(13%), ‘‘door to page’’ and ‘‘door to lab’’ times were
significantly longer than the STEMI patients (see table),
but there was substantial overlap. There was no difference in sex or age, but FP patients were more likely to
be black (p = 0.02). A total of 82 FP patients had ECGs
available for review; findings included anterior elevation with convex (21%) or concave (13%) elevation, ST
elevation from prior anterior (10%) or inferior (11%)
MI, pericarditis (16%), presumed new LBBB (15%),
early repolarization (5%), and other (9%).
Conclusion: False CCL activation occurred in a minority of patients, most of whom had ECG findings warranting emergent catheterization. The rate of false CCL
activation appears acceptable.
Table - Abstract 189:
Time Intervals (minutes) | STEMI Patients (n = 210), Median (IQR) | FP Patients (n = 86), Median (IQR) | p-value
Door to ECG | 9 (5, 15) | 12 (7, 17.5) | 0.281
Door to Page | 14.5 (8, 23) | 21.5 (10.5, 41) | 0.001
Door to CCL | 45 (32, 59) | 55 (40, 70)* | 0.004
*n = 59 patients who went to the CCL and MI was excluded
190
An Evaluation of an Atrial Fibrillation
Clinic for the Follow-up of Patients
Presenting to the Emergency Department
with Newly Diagnosed or Symptomatic
Arrhythmia
Brandon Hone1, Eddy Lang2, Anne Gillis2,
Renee Vilneff2, Trevor Langhan2, Russell
Quinn2, Vikas Kuriachin2, Laurie Burland2,
Beverly Arnburg2
1University of Alberta, Edmonton, AB, Canada; 2University of Calgary, Calgary, AB, Canada
Background: Atrial fibrillation (AF) is the most common cardiac arrhythmia treated in the ED, leading to
high rates of hospitalization and resource utilization.
Dedicated atrial fibrillation clinics offer the possibility
of reducing the admission burden for AF patients presenting to the ED. While the referral base for these AF
clinics is growing, it is unclear to what extent these
clinics contribute to reducing the number of ED visits
and hospitalizations related to AF.
Objectives: To compare the number of ED visits and
hospitalizations among discharged ED patients with a
primary diagnosis of AF who followed up with an AF
clinic and those who did not.
Methods: A retrospective cohort study and medical
records review including three major tertiary centres in
Calgary, Canada. A sample of 600 patients was taken
representing 200 patients referred to the AF clinic from
the Calgary Zone EDs and compared to 400 matched
control ED patients who were referred to other providers for follow-up. The controls were matched for age
and sex. Inclusion criteria included patients over
18 years of age, discharged during the index visit, and
seen by the AF clinic between January 1, 2009 and October 25, 2010. Exclusion criteria included non-residents
and patients hospitalized during the index visit. The
number of cardiovascular-related ED visits and hospitalizations was measured. All data are categorical, and
were compared using chi-square tests.
Results: Patients in the control and AF clinic cohorts
were similar for all baseline characteristics except for a
higher proportion of first episode patients in the intervention arm. In the six months following the index ED
visit, 55 study group patients (27.5%) visited an ED on 95
occasions, and 12 (6%) were hospitalized on 16 occasions.
Of the control group, 122 patients (30.5%) visited an ED
on 193 occasions, and 44 (11%) were hospitalized on 55
occasions. Using a chi-square test we found no significant
difference in ED visits (p = 0.5063) or hospitalizations
(p = 0.0664) between the control and AF clinic cohorts.
Conclusion: Based on our results, referral from the ED
to an AF clinic is not associated with a significant
reduction in subsequent cardiovascular-related ED visits and hospitalizations. Due to the possibility of residual confounding, randomized trials should be
performed to evaluate the efficacy of AF clinics.
Table - Abstract 190: Cardiovascular-related ED visits and hospitalizations after index ED visit
Outcome | AF Clinic Group (N = 200) | Usual Care (N = 400) | P-value
Subsequent CV-related ED visits:
Number (%) of patients | 55 (27.5) | 122 (30.5) | 0.5063
Total number of visits | 95 | 193
Number (%) of patients with 1 or more arrhythmic event leading to ED visit | 51 (25.5) | 98 (24.5) | 0.8673
Subsequent CV-related hospitalizations:
Number (%) of patients | 12 (6) | 44 (11) | 0.0664
Total number of admissions | 16 | 55
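The chi-square comparisons can be rebuilt from the patient counts in the table above. A minimal sketch follows; the default Yates continuity correction for 2x2 tables reproduces the published p-values.

```python
# Sketch of the chi-square tests in abstract 190, with 2x2 tables
# reconstructed from the reported counts (yes/no per cohort).
from scipy.stats import chi2_contingency

ed_visits = [[55, 145], [122, 278]]     # AF clinic vs. usual care: ED visit / none
admissions = [[12, 188], [44, 356]]     # AF clinic vs. usual care: admitted / not

for label, table in (("ED visits", ed_visits), ("hospitalizations", admissions)):
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{label}: p = {p:.4f}")      # 0.5063 and 0.0664, matching the abstract
```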
191
Repolarization Abnormalities in Previous
Electrocardiograms of Adult Victims of
Non-Traumatic Sudden Cardiac Death
Michael C. Plewa
Mercy St. Vincent Medical Center, Toledo, OH
Background: Repolarization abnormalities, such as
long or short QTc interval (LQTI, SQTI), early repolarization (ER), Brugada syndrome (BrS), increased angle
between R- and T-axes (QRS-T angle), and a prolonged
interval between the peak and end of the T-wave (TPE)
on the electrocardiogram (ECG) may increase the risk
of ventricular arrhythmia leading to sudden cardiac
death (SCD).
Objectives: To describe the incidence of LQTI, SQTI,
ER, BrS, wide QRS-T angle, and prolonged TPE interval
on the most recent previous ECG of adult SCD cases.
Methods: Retrospective, structured medical record
review of all adult SCD cases for a 5-year period from
8/2006–7/2011 of a 63,000 visit emergency department
(ED). Excluded were cases involving age <18 years, trauma, overdose, hemorrhage, terminal illness, lack of a prior ECG, paced rhythm, and bundle branch block; heart or renal disease and advanced age were not exclusions. Records were
reviewed for age, sex, race, and initial SCD rhythm.
Twelve-lead ECG tracings obtained at 25 mm/sec with
MAC 5500 ECG system (Marquette Medical Systems,
Inc) were reviewed for computer-derived QTc, R-axis,
and T-axis, and interpreted for LQTI, SQTI, ER, BrS,
planar QRS-T angle, and TPE interval according to published criteria. Data are expressed as mean ± standard
deviation, percentage, and 95% confidence interval.
Results: A total of 164 cases were reviewed, average
age 62 ± 16 years (range 25–99), with SCD initial
rhythm of ventricular fibrillation in 49%. Previous ECG
was an average of 36 ± 48 months prior to SCD, with
97% sinus rhythm and 3% atrial fibrillation. Average
QTc was 446 ± 38 ms, with QTc ≥ 500 ms in 7% (3–12%), LQTI in 39% (31–47%), and SQTI in none (0–2%). Previous ECG revealed ER in 12% (8–18%) and BrS type II morphology (without full BrS criteria) in 0.6% (0.3–4%). Average QRS-T angle was 56 ± 54°, with 22% (16–29%) widened ≥ 90°. Average TPE was 114 ± 22 ms, with 36% (29–44%) prolonged ≥ 120 ms.
Conclusion: Repolarization abnormalities, especially LQTI, wide QRS-T angle ≥ 90°, and prolonged TPE ≥ 120 ms, but not SQTI or Brugada syndrome morphology, are relatively common in the previous ECG of this population of older adults with SCD. Further research,
population of older adults with SCD. Further research,
including adults with similar medical illnesses and medications, is needed to clarify if these repolarization
abnormalities are truly predictive of SCD.
192
The Association of Health Literacy, Self
Care Behaviors, and Knowledge with
Emergency Department Readmission Rates
for Heart Failure
Carolyn Overman, Daniel C. Hootman, Lydia
Odenat, Douglas S. Ander
Emory University School of Medicine, Atlanta,
GA
Background: Readmission rates at 30 days for heart
failure (HF) are estimated at 13%. Changes to the Medicare payment policy for HF support study of readmission factors. Little is known regarding patient factors
that affect HF readmission.
Objectives: Determine the influence of
health literacy and self-care knowledge on HF readmission rates.
Methods: Prospective observational study of patients
with the clinical diagnosis of HF in an inner-city ED.
Forty-nine patients were enrolled in a 6-month period.
Patient assessment included the Health Literacy Assessment (HLA), Heart Failure Knowledge Test (HFKT), and
self-care behavior scores (SCBS). HLA, HFKT, and
SCBS were compared to 30-day, 31–90 day, and 90-day
readmission via independent binary logistic regressions. Chi-square tests were performed to identify associations between variables. The study was sufficiently
powered to show a difference in readmission rates.
Results: Of all participants, 69.4% were male, with a
mean age of 56.1 years (sd = 11.9), 93.9% were African
American, 38.8% were single, 26.5% were divorced or
separated, and 36.7% had completed some high school
education, with 30.6% having earned a high school
degree or equivalency diploma. Additionally, 68.6%
reported an income of less than $10,000. There were no
significant associations between sex, race, marital status, education level, income, insurance status, and subsequent 30- and 90-day readmission rates. HLA score
was not found to be significantly related to readmission
rates. The mean HLA score was 18.9 (sd = 7.87), equivalent to less than 6th grade literacy, meaning these
patients may not be able to read prescription labels.
For each unit increase in HFKT score, the odds of being
readmitted within 30 days decreased by 0.219
(p < 0.001) and for 31–90 days decreased by 0.440
(p < 0.001). For each unit increase in SCBS score, the
odds of being readmitted within 90 days decreased by
0.949 (p = 0.038).
Conclusion: Health care literacy in our patient population
is not associated with readmission, likely related to the low
literacy rate of our study population. Better HF knowledge
and self-care behaviors are associated with lower readmission rates. Greater emphasis should be placed on
patient education and self-care behaviors regarding HF as
a mechanism to decrease readmission rates.
193
Comparison of Door to Balloon Times in
Patients Presenting Directly or Transferred
to a Regional Heart Center with STEMI
Jennifer Ehlers, Adam V. Wurstle, Luis
Gruberg, Adam J. Singer
Stony Brook University, Stony Brook, NY
Background: Based on the evidence, a door-to-balloon time (DTBT) of less than 90 minutes is recommended by the AHA/ACC for patients with STEMI. In
many regions, patients with STEMI are transferred to a
regional heart center for percutaneous coronary intervention (PCI).
Objectives: We compared DTBT for patients presenting directly to a regional heart center with those for
patients transferred from other regional hospitals. We
hypothesized that DTBT would be significantly longer
for transferred patients.
Methods: Study Design-Retrospective medical record
review. Setting-Academic ED at a regional heart center
with an annual census of 80,000 that includes a catchment area of 12 hospitals up to 50 miles away.
Patients-Patients with acute STEMI identified on ED
12-lead ECG. Measures-Demographic and clinical data
including time from triage to ECG, from ECG to activation of regional catheterization lab, and from initial triage
to PCI (DTBT). Outcomes-Median DTBT and percentage
of patients with a DTBT under 90 minutes. Data Analysis-Median DTBT compared with Mann Whitney U tests
and proportions compared with chi-square tests.
Results: In 2010 there were 379 catheterization lab activations for STEMI: 183 were in patients presenting
directly, and 196 in transferred patients. Thrombolytics
were administered in 19 (9.7%) transfers. Compared
with patients presenting directly to the heart center,
transferred patients had longer median [IQR] DTBT (127
[105–151] vs. 64 [49–80]; P < 0.001). Transferred patients
also had longer door to ECG (9 [5–18] vs. 5 [2–8];
P < 0.001) and ECG to catheterization lab activation
times (18 [12–38] vs. 8 [4–17]; P < 0.001). The percentages
of patients with a DTBT within 90 minutes in direct and
transfer patients were 83% vs. 17%; P < 0.001.
Conclusion: Most patients transferred to a regional
heart center do not meet national DTBT guidelines.
Consideration should be given to administering thrombolytics in transfer patients, especially if the transport
time is prolonged.
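The Methods name Mann-Whitney U tests for median DTBT and chi-square tests for proportions. A minimal sketch follows; per-patient times are not published, so synthetic values centered on the reported medians, and counts back-calculated from the reported percentages, stand in.

```python
# Sketch of the analyses named in abstract 193's Methods.
# Synthetic DTBT values; group sizes from the abstract (183 direct, 196 transfer).
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(0)
direct = rng.normal(64, 20, 183)     # hypothetical DTBT, direct presenters (min)
transfer = rng.normal(127, 30, 196)  # hypothetical DTBT, transferred patients (min)

u, p = mannwhitneyu(direct, transfer)
print(f"Mann-Whitney U = {u:.0f}, p = {p:.1e}")

# Proportion with DTBT < 90 min (83% vs. 17%; counts are approximations)
chi2, p, *_ = chi2_contingency([[152, 31], [33, 163]])
print(f"chi-square p = {p:.1e}")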
194
A Comparison of the Management of
ST-elevation Myocardial Infarction
Between Patients Who Are English And
Non-English Speaking
Scott G. Weiner1, Kathryn A. Volz2, Matthew
B. Mostofi1, Leon D. Sanchez2, John J. Collins3
1Tufts Medical Center, Boston, MA; 2Beth Israel Deaconess Medical Center, Boston, MA; 3Robert Wood Johnson University Hospital, New Brunswick, NJ
Background: Prompt treatment with revascularization
in the catheterization lab (cath) is essential to preserve
cardiac function for patients who present with ST-elevation myocardial infarction (STEMI).
Objectives: To determine whether times from door to first EKG (D2E), door to cath (D2C), and door to intravascular balloon deployment (D2B) differ between English-speaking and non-English-speaking patients presenting with STEMI.
Methods: The study was performed in an inner-city academic ED between 1/1/07 and 12/31/10. Every patient for
whom ED activation of our STEMI system occurred was
included. All times data from a pre-existing quality assurance database were collected prospectively. Patient language was determined retrospectively by chart review.
Results: There were 132 patients between 1/1/07 and
12/31/10. 21 patients (16%) were deemed too sick or
unable to provide history and were excluded, leaving
111 patients for analysis. 85 (77%) spoke English and 26
(23%) did not. In the non-English group, Chinese was
the most common language, in 22 (20%) patients. There
was no difference in mode of arrival (EMS 44% English
vs. 46% non-English, p = 0.85) or arrival between
5:00 pm–9:00 am (47% English vs. 58% non-English,
p = 0.34). English patients were more likely to have a
documented chief complaint of chest pain or pressure
(81% vs. 42%, p < 0.001). D2E times were not different
(English median 8 min [2.5–13.5] vs. non-English median 11 min [4.5–17], p = 0.10). English speakers were
more likely to go to cath lab (79% vs. 54%, p = 0.01). Of those who went to cath, there was no significant D2C time difference (English median 53 min [33–71] vs. non-English median 45.5 min [36–87], p = 0.39). For those who had revascularization, D2B times were also similar (English median 78.5 min [57.8–94.5] vs. non-English median 65 min [55–126], p = 0.44). The percentage of patients with D2B time <90 min was not different (72% English vs. 64% non-English, p = 0.60).
Conclusion: We found that non-English speakers are
much less likely to complain of chest pain as an initial
chief complaint and are less likely to go to cath. Other
parameters such as time and mode of arrival are not
different. We discovered that our D2E, D2C, and D2B
times are not significantly different between these two
patient populations. Further research is needed to
determine why non-English speaking patients were less
likely to be taken to cath.
195
A Four-year Population-Based Analysis Of
Emergency Department Syncope:
Predictors Of Admission/Readmission, And
Regional Variations In Practice Patterns
Xin Feng1, Zhe Tian2, Brian Rowe3, Andrew
McRae1, Venkatesh
Thiruganasambandamoorthy4, Rhonda
Rosychuk3, Robert Sheldon1, Eddy Lang1
1University of Calgary, Calgary, AB, Canada; 2McGill University, Montreal, QC, Canada; 3University of Alberta, Edmonton, AB, Canada; 4University of Ottawa, Ottawa, ON, Canada
Background: Syncope is a common, potentially high-risk ED presentation. Hospitalization for syncope, although common, is rarely of benefit. No population-based study has examined disparities in regional admission practices for syncope care in the ED. Moreover, there are no population-based studies reporting prognostic factors for 7- and 30-day readmission after syncope.
Objectives: 1) To identify factors associated with
admission as well as prognostic factors for 7- and
30-day readmission to these hospitals; 2) To evaluate
variability in syncope admission practices across different sizes and types of hospitals.
Methods: DESIGN - Multi-center retrospective cohort
study using ED administrative data from 101 Albertan
EDs. PARTICIPANTS/SUBJECTS - patients >17 years of
age with syncope (ICD10: R55) as a primary or secondary diagnosis from 2007 to June 2011. Readmission was
defined as return visits to the ED or admission <7 days
or 7–30 days after the index visit (including against
medical advice and left without being seen during the
index visit). OUTCOMES - factors associated with hospital admission at index presentation, and readmission
following ED discharge, adjusted using multivariable
logistic regression.
Results: Overall, 44,521 syncope visits occurred over
4 years. Increased age, increased length of stay (LoS),
performance of CXR, transport by ground ambulance,
and treatment at a low-volume hospital (non-teaching or
non-large urban) were independently associated with
index hospitalization. These same factors, as well as hospital admission itself, were associated with 7-day readmission. Additionally, increased age, increased LoS,
performance of a head CT, treatment at a low-volume
hospital, hospital admission, and female sex were independently associated with 7–30 day readmission. Arrival
by ground ambulance was associated with a decreased
likelihood of both 7- and 7–30 day readmission.
Conclusion: Our data identify variations in practice as well as factors associated with hospitalization and readmission for syncope. The disparity in admission and readmission rates between centers may highlight a gap in quality of care or reflect inappropriate use of resources. Further research to compare patient outcomes and quality of patient care among urban and non-urban centers is needed.
Table 1 - Abstract 195: Multivariate analysis of prognostic factors for admission and readmission
Characteristic | Admission (odds ratio, 95% CI) | 7-day re-admission (odds ratio, 95% CI) | 7–30 day re-admission (odds ratio, 95% CI)
Increased age | 1.03 (1.02–1.05) per year | 1.006 (1.001–1.01) per year | 1.009 (1.001–1.03) per year
Increased length of stay | 1.04 (1.03–1.05) per hour | 1.025 (1.005–1.05) per hour | 1.026 (1.004–1.06) per hour
Treatment at a low-volume hospital (non-teaching or not large urban) compared to high-volume hospital (teaching or large urban) | 1.32 (1.09–1.60) | 1.93 (1.78–2.09) | 1.97 (1.85–2.10)
CXR performed | 1.54 (1.27–1.86) | 1.07 (1.009–1.127) | Not significant
Head CT performed | Not significant | Not significant | 1.08 (1.008–1.165)
Ground ambulance transportation | 1.31 (1.07–1.60) | 0.90 (0.83–0.97) | 0.87 (0.82–0.93)
Hospital admission | Not applicable | 1.79 (1.38–2.33) | 2.15 (1.73–2.67)
Sex (M:F) | Not significant | Not significant | 0.93 (0.88–0.99)
Table 2 - Abstract 195: Descriptive statistics on syncope in Alberta from 2007 to 2011 (statistical outliers removed)
Sex: Male 20,559 (46.2%); Female 23,962 (53.8%)
Age: Mean 54.2 years; Median 55 years; Range 18–104 years
Admission rate from ED, mean (range): High-volume (teaching or large urban) 19% (9%–27%); Low-volume (non-teaching, non-large urban) 15% (11%–43%)
7-day readmission rate, mean (range): High-volume (teaching or large urban) 10% (3%–17%); Low-volume (non-teaching, non-large urban) 16% (2%–29%)
7–30 day readmission rate, mean (range): High-volume (teaching or large urban) 12% (3%–30%); Low-volume (non-teaching, non-large urban) 16% (8%–34%)
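The multivariable logistic regression behind Table 1 can be sketched as follows. The administrative dataset is not public, so synthetic records with illustrative variable names stand in; coefficients are exponentiated to odds ratios as in Table 1, though the outputs here will not match the published values.

```python
# Sketch of a multivariable logistic model in the spirit of abstract 195's
# Methods. All data are synthetic; variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "los_hours": rng.exponential(6.0, n),
    "low_volume": rng.integers(0, 2, n),
    "cxr": rng.integers(0, 2, n),
    "ambulance": rng.integers(0, 2, n),
})
# Simulated admission outcome with assumed effect sizes
logit = -3 + 0.03 * df.age + 0.04 * df.los_hours + 0.3 * df.low_volume
df["admitted"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

fit = smf.logit("admitted ~ age + los_hours + low_volume + cxr + ambulance",
                df).fit(disp=0)
print(np.exp(fit.params))      # odds ratios (cf. Table 1's format)
print(np.exp(fit.conf_int()))  # 95% CIs
```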
196
Correlation Between Change in Dyspnea Severity and Clinical Outcome in Patients with Acute Heart Failure
Howard Smithline
Baystate Medical Center, Springfield, MA
Background: Change in dyspnea severity (DS) is a frequently used outcome measure in trials of acute heart failure (AHF). However, there is limited information concerning its validity.
Objectives: To assess the predictive validity of change in dyspnea severity.
Methods: This was a secondary analysis of a prospective observational study of a convenience sample of AHF patients presenting with dyspnea to the ED of an academic tertiary referral center with a mixed urban/suburban catchment area. Patients were enrolled weekdays, June through December 2006. Patients assessed their DS using a 10-cm visual analog scale at three times: the start of ED treatment (baseline) as well as at 1 and 4 hours after starting ED treatment. The difference between baseline and 1 hour was the 1-hour DS change. The difference between baseline and 4 hours was the 4-hour DS change. Two clinical outcome measures were obtained: 1) the number of days hospitalized or dead within 30 days of the index visit (30-day outcome), and 2) the number of days hospitalized or dead within 90 days of the index visit (90-day outcome).
Results: Data on 86 patients were analyzed. The median 30-day outcome variable was 6 days with an interquartile range (IQR) of 3 to 16. The median 90-day outcome variable was 10 days (IQR 4 to 27.5). The median 1-hour DS change was 2.6 cm (IQR 0.3 to 6.7). The median 4-hour DS change was 4.9 cm (IQR 2.2 to 8.2). The 30-day and 90-day mortality rates were 9% and 13% respectively. The Spearman rank correlations and 95% confidence intervals are presented in the table below.
Conclusion: While the point estimates for the correlations were below 0.5, the 95% CI for two of the correlations extended above 0.5. These pilot data support change in DS as a valid outcome measure for AHF when measured over 4 hours. A larger prospective study is needed to obtain a more accurate point estimate of the correlations.
Table - Abstract 196: Spearman Rank Correlations
1-hour DS change vs 30-day outcome: 0.027 (95% CI -0.186 to 0.238)
4-hour DS change vs 30-day outcome: -0.314 (95% CI -0.514 to -0.082)
1-hour DS change vs 90-day outcome: 0.016 (95% CI -0.196 to 0.227)
4-hour DS change vs 90-day outcome: -0.307 (95% CI -0.508 to -0.073)
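A minimal sketch of the Spearman analysis follows. Individual VAS changes and outcome days are not published, so loosely matched synthetic data (n = 86, 4-hour change centered near the reported median of 4.9 cm) stand in.

```python
# Sketch of the rank correlation in abstract 196, on synthetic data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
ds_change_4h = rng.normal(4.9, 3.0, 86)   # hypothetical 4-hour VAS change (cm)
days_hosp_30 = np.clip(12 - 1.5 * ds_change_4h + rng.normal(0, 8, 86), 0, 30)

rho, p = spearmanr(ds_change_4h, days_hosp_30)
print(f"rho = {rho:.3f}, p = {p:.3f}")    # cf. reported -0.314 (-0.514 to -0.082)
```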
197
Ability of a Triage Decision Rule for Rapid
Electrocardiogram (ECG) to Identify
Patients with Suspected ST-elevation
Myocardial Infarction (STEMI)
Karim Ali, Anwar D. Osborne, James P. Capes,
Douglas Lowery-North, Matthew Wheatley,
Rachel E. O’Malley, George Leach, Vicki
Hertzberg, Franks M. Nicole, Ryan Stroder,
Stephen R. Pitts, Michael A. Ross
Emory University, Atlanta, GA
Background: ACC/AHA guidelines for STEMI state
that an ECG should be performed upon presentation to
the ED within 10 minutes.
Objectives: To determine the performance of previously published rapid ECG screening criteria in a population of suspected STEMI patients. This rule was originally designed to identify patients with acute myocardial infarction needing a rapid ECG at ED triage, based on presenting complaints, in the lytic therapy era. We hypothesized that it would not have identified all patients in whom STEMI was suspected.
Methods: Three trained physician reviewers retrospectively applied the decision rule to 430 consecutive patients from a database of emergent cardiac catheterization lab (CCL) activations by ED physicians. The decision rule recommends a rapid ECG for patients between the ages of 30 and 49 who complain of chest pain, and for those aged 50 years or older who complain of chest pain, shortness of breath, palpitations, weakness, or syncope. The triage note or earliest medical contact documentation was used to determine if the patient's complaints would have resulted in a rapid ECG by the decision rule. Acute myocardial infarction (AMI) was defined as high-grade
stenosis on the subsequent emergent cardiac catheterization. A single data collection Microsoft Excel
spreadsheet was used and descriptive statistics were
performed in Excel.
Results: The triage ECG rule would have identified 97% of patients causing activation of the CCL (see figure). Among these patients, the rule was 98% sensitive (95% CI 95%–98%) for identifying patients who had a high-grade stenosis on catheterization (see table). Of the 430 STEMI activation patients, 412 would have been identified by this rule. Of the 18 patients who would not have been identified by the rule, four cases were unwitnessed cardiac arrests that could not provide a history. The remaining 14 patients largely presented with nausea/vomiting or were under 30 years old.
Conclusion: This triage rule would have identified
almost all of our patients where STEMI was suspected.
It would have failed to identify patients needing a rapid
EKG who presented with nausea and vomiting as a sole
complaint as well as patients who were under 30 years
old.
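The decision rule is simple enough to state as code. Below is a sketch of the rule exactly as the Methods describe it; the complaint strings are illustrative, not the study's actual coding scheme.

```python
# Sketch of the triage rule in abstract 197: ages 30-49 qualify with chest
# pain; ages >= 50 with any of five complaints; under 30 never triggers.
RAPID_ECG_COMPLAINTS_50_PLUS = {
    "chest pain", "shortness of breath", "palpitations", "weakness", "syncope",
}

def needs_rapid_ecg(age, complaints):
    """Return True if the decision rule recommends a rapid triage ECG."""
    if 30 <= age <= 49:
        return "chest pain" in complaints
    if age >= 50:
        return bool(complaints & RAPID_ECG_COMPLAINTS_50_PLUS)
    return False  # under 30: a source of misses noted in the Results

print(needs_rapid_ecg(45, {"chest pain"}))        # True
print(needs_rapid_ecg(62, {"weakness"}))          # True
print(needs_rapid_ecg(27, {"chest pain"}))        # False (under 30)
print(needs_rapid_ecg(55, {"nausea/vomiting"}))   # False (complaint not covered)
```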
198
Health Care Resource Utilization Among
Patients with Acute Decompensated Heart
Failure Managed by Two University
Affiliated Emergency Department
Observation Units, 2007–2011
Justin Schrager, Matthew Wheatley, Stephen
Pitts, Daniel Axelson, Anwar Osborne,
Andreas Kalogeropoulos, Vasiliki
Georgiopoulou, Javed Butler, Michael Ross
Emory University School of Medicine, Atlanta,
GA
Background: Emergency department observation units
(EDOU) have the potential to reduce costs and unnecessary admissions in the adult heart failure population. It
is not known whether EDOU treatment yields similar
readmission rates, which could limit this benefit.
Objectives: To compare readmission rates and
resource utilization between patients admitted following EDOU treatment of acute decompensated heart
failure (ADHF) and those successfully discharged from
OU.
Methods: DESIGN - Retrospective observational
cohort study. SETTING - Two university-affiliated
EDOUs. SUBJECTS - 358 patients treated for ADHF in
two protocol-driven OUs from 10/01/07–6/30/11. Thirty-one patients were excluded for a final diagnosis other
than ADHF. OBSERVATIONS - The exposure was
admission or discharge following OU treatment. The
outcome was readmission within 30 days of either OU
discharge or hospital discharge if initially admitted
from OU. Descriptive statistical analyses of covariates
included age, race, sex, clinical site, ED length of stay
(LOS), and OU LOS, as well as B-type natriuretic peptide (BNP), blood urea nitrogen (BUN), serum creatinine
(Cr), and ejection fraction (EF). Time to readmission
analysis was performed with Cox proportional hazards
regression. We also examined resource utilization. The
study was powered to show a difference in rate of readmission of 40%.
Results: Patients did not differ significantly by exposure based on age, race, sex, ED LOS, or OU LOS.
Admitted patients had a higher median BNP (1063 pg/
ml vs. 708 pg/ml, p = 0.0019), and higher BUN (19 mg/
dL vs. 17 mg/dL, p = 0.0445). Admitted patients had a
lower median EF (22.5% vs. 35%, p = 0.0020). In
adjusted Cox proportional hazards models, the 30-day
readmission rate was not significantly different between
those admitted and those discharged from OU
(HR = 0.95; 95% CI 0.46–1.99). Patients discharged from
OU spent a median of 1.7 days as inpatients compared
to 3.5 days among those admitted from OU, within
30 days (p < 0.0001). Among all readmitted patients the
total median inpatient time was not significantly
different.
Conclusion: ADHF patients treated in the EDOU and
discharged were not more likely to be readmitted
within 30 days than those admitted to the hospital.
Patients successfully treated in the EDOU for ADHF
used fewer hospital bed-days.
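The time-to-readmission analysis named in the Methods can be sketched with the lifelines implementation of Cox proportional hazards. Data below are synthetic and the column names illustrative; the fitted hazard ratios will not match the reported HR of 0.95.

```python
# Sketch of a Cox proportional hazards model for 30-day readmission,
# in the spirit of abstract 198's Methods. Synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 327  # 358 treated minus 31 excluded
df = pd.DataFrame({
    "days_to_readmit": rng.exponential(40, n).clip(1, 30),  # censored at 30 days
    "readmitted": rng.integers(0, 2, n),                    # event indicator
    "admitted_from_ou": rng.integers(0, 2, n),              # the exposure
    "bnp_log": rng.normal(6.7, 0.8, n),
    "ef": rng.normal(33, 12, n),
})
cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_readmit", event_col="readmitted")
cph.print_summary()   # hazard ratios with 95% CIs (cf. reported HR 0.95)
```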
199
Emergency Department Case Volume and
Short-term Outcomes in Patients with
Acute Heart Failure
Chu-Lin Tsai1, Wen-Ya Lee1,
George L. Delclos1, Carlos A. Camargo2
1Division of Epidemiology, Human Genetics and Environmental Sciences, The University of Texas School of Public Health, Houston, TX; 2Department of Emergency Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA
Background: The majority of volume-quality research
has focused on surgical outcomes in the inpatient setting; very few studies have examined the effect of emergency department (ED) case volume on patient
outcomes.
Objectives: To determine whether ED case volume of
acute heart failure (AHF) is associated with short-term
patient outcomes.
Methods: We analyzed the 2008 Nationwide Emergency
Department Sample (NEDS) and Nationwide Inpatient
Sample (NIS), the largest, all-payer, ED and inpatient databases in the US. ED visits for AHF were identified with
a principal diagnosis of ICD-9-CM code 428.xx. EDs
were categorized into quartiles by ED case volume of
AHF. The outcome measures were early inpatient mortality (within the first 2 days of admission), overall inpatient mortality, and hospital length of stay (LOS).
Results: There were an estimated 946,000 visits for
AHF from approximately 4,700 EDs in 2008; 80% were
hospitalized. Of these, the overall inpatient mortality
rate was 3.2%, and the median hospital LOS was
4 days. Early inpatient mortality was lower in the highest-volume EDs, compared with the lowest-volume EDs
(0.8% vs. 2.1%; P < 0.001). Similar patterns were
observed for overall inpatient mortality (3.0% vs. 4.1%;
P < 0.001). In a multivariable analysis adjusting for 37
patient and hospital characteristics, early inpatient mortality remained lower in patients admitted through the
highest-volume EDs (adjusted odds ratio [OR], 0.70;
95% confidence interval [CI], 0.52–0.96), as compared
with the lowest-volume EDs. There was a trend
towards lower overall inpatient mortality in the highest-volume EDs; however, this was not statistically significant (adjusted OR, 0.92; 95%CI, 0.75–1.14). By
contrast, using the NIS data including various sources
of admissions, a higher case volume of inpatient AHF
patients predicted lower overall inpatient mortality
(adjusted OR, 0.51; 95%CI, 0.40–0.65). The hospital LOS
in patients admitted through the highest-volume EDs
was slightly longer (adjusted difference, 0.7 day; 95%CI,
0.2–1.2), compared with the lowest-volume EDs.
Conclusion: ED patients who are hospitalized for AHF have an approximately 30% lower early inpatient mortality if they were admitted from an ED that handles a large volume of AHF cases. The ''practice-makes-perfect'' concept may hold in emergency management of AHF.
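A weighted, adjusted logistic model in the spirit of the NEDS analysis can be sketched as below. This is an approximation on synthetic data: NEDS discharge weights are stood in for by integer frequency weights, the variable names are illustrative, and a full survey design would also adjust the standard errors.

```python
# Sketch of a weight-adjusted logistic model (cf. adjusted OR 0.70 for
# highest- vs. lowest-volume EDs in abstract 199). Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 20_000
df = pd.DataFrame({
    "early_death": rng.binomial(1, 0.015, n),
    "highest_volume_ed": rng.integers(0, 2, n),
    "age": rng.integers(18, 95, n),
    "wt": rng.integers(1, 10, n),   # stand-in for NEDS discharge weights
})
fit = smf.glm("early_death ~ highest_volume_ed + age", df,
              family=sm.families.Binomial(),
              freq_weights=np.asarray(df["wt"], dtype=float)).fit()
print(np.exp(fit.params["highest_volume_ed"]))  # adjusted OR (~1 on random data)
```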
200
Emergency Department Disposition and
Charges for Heart Failure: Regional
Variability
Alan B. Storrow, Cathy A. Jenkins, Sean P.
Collins, Karen P. Miller, Candace McNaughton,
Naftilan Allen, Benjamin S. Heavrin
Vanderbilt University, Nashville, TN
Background: High inpatient admission rates for ED
patients with acute heart failure are felt partially
responsible for the large economic burden of this most
costly cardiovascular problem.
Objectives: We examined regional variability in ED disposition decisions and regional variability in total dollars spent on ED services for admitted patients with
primary heart failure.
Methods: The 2007 Nationwide Emergency Department Sample (NEDS) was used to perform a retrospective cohort analysis of patients with heart failure (ICD-9 code 428.x) listed as the primary ED diagnosis.
Demographics and disposition percentages (with SE)
were calculated for the overall sample and by region:
Northeast, South, Midwest, and West. To account for
the sample design and to obtain national and regional
estimates, a weighted analysis was conducted.
Results: There were 941,754 weighted ED visits with
heart failure listed as the primary diagnosis. Overall,
over eighty percent were admitted (see table).
Fifty-two percent of these patients were female; mean
age was 72.7 years (SE 0.20). Hospitalization rates were
higher in the Northeast (89.1%) and South (81.2%) than
in the Midwest (76.0%) and West (74.8%). Total monies spent on ED services were highest in the South ($69,078,042), followed by the Northeast ($18,233,807), West ($6,360,315), and Midwest ($5,899,481).
Conclusion: This large retrospective ED cohort suggests a very high national admission rate with significant regional variation in both disposition decisions as well as total monies spent on ED services for patients with a primary diagnosis of heart failure. Examining these estimates and variations further may provide strategies to reduce the economic burden of heart failure.
Table - Abstract 200:
Disposition | Weighted Frequency (SD) | Percent (SE)
Treated and released | 150,667 (4,449) | 16.0 (0.41)
Admitted to same hospital | 759,005 (19,599) | 80.6 (0.482)
Transferred | 27,392 (2,023) | 2.9 (0.22)
ED death | 1,180 (94) | 0.13 (0.01)
Discharged alive, destination unknown | 44 (24) | 0.005 (0.003)
Unknown | 3,466 (859) | 0.37 (0.09)
201
Hospital-Based Shootings in the United
States: 2000–2010
Gabor D. Kelen, Christina L. Catlett,
Joshua G. Cubit, Yu-Hsiang Hsieh
Johns Hopkins University, Baltimore, MD
Background: Workplace violence in health care settings
is a frequent occurrence. Gunfire in hospitals is of particular concern. However, information regarding such
workplace violence is limited. Accordingly, we characterized U.S. hospital-based shootings from 2000–2010.
Objectives: To determine extent of hospital-based
shootings in the U.S. and involvement of emergency
departments.
Methods: Using LexisNexis, Google, Netscape, PubMed, and ScienceDirect, we searched reports for acute
care hospital shooting events from January 2000
through December 2010, and those with at least one
injured victim were analyzed.
Results: We identified 140 hospital-related shootings (86
inside the hospital, 54 on hospital grounds), in 39 states,
with 216 victims, of whom 98 were perpetrators. In comparison to external shootings, shootings within the hospital have not increased over time (see figure).
Perpetrators were from all age groups, including the
elderly. Most of the events involved a determined shooter: grudge (26%), suicide (19%), ‘‘euthanizing’’ an ill relative (15%), and prisoner escape (12%). Ambient societal
violence (8%) and mentally unstable patients (4%) were
comparatively infrequent. The most common injured
was the perpetrator (45%). Hospital employees comprised only 21% of victims; physician (3%) and nurse
(5%) victims were relatively infrequent. The emergency
department was the most common site (29%), followed
by patient rooms (20%) and the parking lot (20%). In
13% of shootings within hospitals, the weapon was a
security officer’s gun grabbed by the perpetrator.
‘‘Grudge’’ motive was the only factor determinative of
hospital staff victims (OR = 4.34, 95% CI 1.85–10.17).
Conclusion: Although hospital-based shootings are relatively rare, emergency departments are the most likely
site. The unpredictable nature of this type of event represents a significant challenge to hospital security and deterrence practices, as most perpetrators proved determined,
and many hospital shootings occur outside the building.
202
Impact of Emergency Physician Board
Certification on Patient Perceptions of ED
Care Quality
Albert G. Sledge IV1, Carl A. Germann1,
Tania D. Strout1, John Southall2
1Maine Medical Center, Portland, ME; 2Mercy Hospital, Portland, ME
Background: The Hospital Value-Based Purchasing
Program mandated by the Affordable Care Act is the
latest example of how patients’ perceptions of care will
affect the future practice environment of all physicians.
The type of training of medical providers in the emergency department (ED) is one possible factor affecting
patient perceptions of care. A unique situation in a
Maine community ED led to the rapid transition from
non-emergency medicine (EM) residency trained physicians to all EM residency trained and American Board
of Emergency Medicine (ABEM) certified providers.
Objectives: The purpose of this study was to evaluate
the effect of the implementation of an all EM-trained,
ABEM-certified physician staff on patient perceptions
of the quality of care they received in the ED.
Methods: We retrospectively evaluated Press Ganey
data from surveys returned by patients receiving treatment in a single, rural ED. Survey items addressed
patient’s perceptions of physician courtesy, time spent
listening, concern for patient comfort, and informativeness. Additional items evaluated overall perceptions of
care and the likelihood that the respondent would recommend the ED to another. Data were compared for
the three years prior to and following implementation
of the all trained, certified staff. We used the independent samples t-test to compare mean responses during
the two time periods. Bonferroni’s correction was
applied to adjust for multiple comparisons.
Results: During the study period, 3,039 patients provided surveys for analysis: 1,666 during the pre-certification phase and 1,373 during the post-certification
phase. Across all six survey items, mean responses
increased following transition to the board-certified
staff. These improvements were noted to be statistically
significant in each case: courtesy p < 0.001, time listening p < 0.001, concern for comfort p < 0.001, informativeness p < 0.001, overall perception of care p < 0.001,
and likelihood to recommend p < 0.001.
Conclusion: Data from this community ED suggest that
transition from a non-residency trained, ABEM certified
staff to a fully trained and certified model has important
implications for patient’s perceptions of the care they
receive. We observed significant improvement in rating scores provided by patients across all physician-oriented and general ED measures.
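The Methods describe independent-samples t-tests with a Bonferroni correction across the six survey items. A minimal sketch follows; the scores are synthetic, as Press Ganey data are proprietary.

```python
# Sketch of abstract 202's per-item comparison with a Bonferroni-adjusted
# threshold. Synthetic item scores; group sizes from the abstract.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
pre = rng.normal(84, 12, 1666)    # hypothetical pre-certification item scores
post = rng.normal(87, 12, 1373)   # hypothetical post-certification item scores

t, p = ttest_ind(post, pre)
alpha_adj = 0.05 / 6              # Bonferroni correction for six items
print(f"t = {t:.2f}, p = {p:.2g}, significant at adjusted alpha: {p < alpha_adj}")
```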
203
Electronic, Verbal Discussion-Optional
Signout for Admitted Patients: Effects on
Patient Safety and ED Throughput
Christopher M. Fischer, Julius Yang,
Carrie Tibbles, Elizabeth O’Donnell, Ethan
Ellis, Larry Nathanson, Leon D. Sanchez
Beth Israel Deaconess Medical Center/Harvard
Medical School, Boston, MA
Background: Transfer of care from the ED to the inpatient floor is a critical transition when miscommunication
places patients at risk. The optimal form and content of
handoff between providers has not been defined. In July
2011, ED-to-floor signout for all admissions to the medicine and cardiology floors was changed at our urban,
academic, tertiary care hospital. Previously, signout was
via an unstructured telephone conversation between ED
resident and admitting housestaff. The new signout utilizes a web-based ED patient tracking system and
includes: 1) a templated description of ED course is completed by the ED resident; 2) when a bed is assigned, an
automated page is sent to the admitting housestaff; 3)
ED clinical information, including imaging, labs, medications, and nursing interventions (figure) is reviewed by
admitting housestaff; 4) if housestaff has specific questions about ED care, a telephone conversation between
the ED resident and housestaff occurs; 5) if there are no
specific questions, it is indicated electronically and the
patient is transferred to the floor.
Objectives: To describe the effects on patient safety
(floor-to-ICU transfer in 24 hours) and ED throughput
(ED length of stay (LOS) and time from bed assignment
to ED departure) resulting from a change to an electronic, discussion-optional handoff system.
Methods: Review of all patients admitted from the ED
to the medicine and cardiology floor from July-October
2010 and 2011. Rate of floor to ICU transfer in 24 hours,
ED LOS, and time from bed assignment to ED departure were calculated by review of medical records.
Results: There were 3334 admissions in 2010 and 3347
in 2011. Patient characteristics are in the table. After
July 2011, 28.2% of patients had a verbal signout
between the ED and inpatient teams. For the remaining
71.8%, there was review of clinical information and no
verbal discussion. The rate of floor-to-ICU transfer was
2.0% in 2010, and 2.2% in 2011 (p = 0.46). ED LOS was
similar (6:13 vs 6:10, p = 0.18). Median time from bed
assignment to ED departure decreased 5 minutes (1:36
in 2010 vs 1:31 in 2011, p < 0.01).
Conclusion: Transition to a system in which signout of
admitted patients is accomplished by accepting housestaff review of ED clinical information supplemented by
verbal discussion when needed resulted in no significant
change in rate of floor-to-ICU transfer or ED LOS and
reduced time from bed assignment to ED departure.
Table - Abstract 203: Patient Characteristics And Outcomes
Characteristic | 2010 | 2011
Total admissions | 3334 | 3347
Percent male | 48.5% | 44.8%
Age (median, IQR) | 64 (50–79) | 64 (50–79)
Percent white | 66.8% | 68.6%
Hospital LOS (median, IQR) | 3 (2–5) | 2 (1–4)
ED LOS (median, IQR) | 6:13 (4:52–8:04) | 6:10 (4:45–8:02)
Percent verbal discussion | 100% | 28.2%
Time from bed assign to ED departure (median, IQR) | 1:36 (1:15–2:02) | 1:31 (1:14–1:56)
% transfer from floor to ICU within 24 hours of admission | 2.0% | 2.2%
204
Does the Nature of Chief Complaint,
Gender, or Age Affect Time to be Seen in
the Emergency Department?
Ayesha Sattar1, John Marshall2, Kenneth
Sable2, Antonios Likourezos2, Christian
Fromm2
1Stanford University School of Medicine, Stanford, CA; 2Maimonides Medical Center, Brooklyn, NY
Background: Emergency physicians may be biased
against patients presenting with nonspecific complaints
or those requiring more extensive work-ups. This may
result in patients being seen less quickly than those with
more straightforward presentations, despite equal triage scores or potential for more dangerous conditions.
Objectives: The goal of our study was to ascertain
which patients, if any, were seen more quickly in the
ED based on chief complaint.
Methods: A retrospective report was generated from
the EMR for all moderate acuity (ESI 3) adult patients
who visited the ED from January 2005 through December 2010 at a large urban teaching hospital. The most
common complaints were: abdominal pain, alcohol
intoxication, back pain, chest pain, cough, dyspnea, dizziness, fall, fever, flank pain, headache, infection, pain
(nonspecific), psychiatric evaluation, ‘‘sent by MD,’’
vaginal bleeding, vomiting, and weakness. Non-parametric independent sample tests assessed median time
to be seen (TTBS) by a physician for each complaint.
Differences in the TTBS between genders and based on
age were also calculated. Chi-square testing compared
percentages of patients in the ED per hour to assess for
differences in the distribution of arrival times.
Results: We obtained data from 116,194 patients.
Patients with chief complaints of weakness and dizziness waited the longest (median 35 minutes), and patients with flank pain waited the shortest (median 24 minutes) (p < 0.0001) (Figure 1). Overall, males
waited 30 minutes and females waited 32 minutes
(p < 0.0001). Stratifying by gender and age, younger
females between the ages of 18–50 waited significantly
longer times when presenting with a chief complaint of
abdominal pain (p < 0.0001), chest pain (p < 0.05), or
flank pain (p < 0.0001) as compared to males in the
same age group (Figure 2). There was no difference in
the distribution of arrival times for these complaints.
Conclusion: While the absolute time differences are not
large, there is a significant bias toward seeing young
ACADEMIC EMERGENCY MEDICINE • April 2012, Vol. 19, No. 4, Suppl. 1
male patients more quickly than women or older males
despite the lower likelihood of dangerous conditions.
Triage systems should perhaps take age and gender better into account. Patients might benefit from efforts to
educate EM physicians on the delays and potential quality issues associated with this bias in an attempt to move
toward more egalitarian patient selection.
205
The Impact of a Neurology Consult in
Patients Placed in Observation for
Syncope or Near Syncope
Simon Katrib, Margarita E. Pena,
Robert B. Takla, Patrick Frank, Susan Szpunar
St. John Hospital and Medical Center, Detroit, MI
Background: In patients placed in an ED-staffed observation unit (OU) for syncope or near syncope (S/NS),
some physicians’ practice pattern includes a routine
neurology consult (NC) for all patients, including those
without a previous seizure history.
Objectives: We tested the hypothesis that very few
S/NS patients without a previous seizure history placed
in the OU would have a neurologic etiology, and a NC
would not contribute significantly to diagnosis or recidivism, but may add to length of stay (LOS) and cost.
Methods: This is a retrospective chart review study of
all adult patients placed in an OU staffed by ED physicians with a primary diagnosis of S/NS from October
2009 to October 2011. Patients with and without a NC
were compared. Patient data collected included demographics, past seizure history, NC, discharge summary
reports, tests ordered by neurology, hospital LOS,
48-hour return to the ED after discharge, 30-day readmission, and death. Financial data collected included
direct, indirect, and total cost. Chi-square analysis was
used to examine associations between the two groups
and Student’s t-test to examine differences between
mean values. All data were analyzed with SPSS v. 19.0.
Results: A NC was obtained in 38.1% of 247 study
patients. Of these, 12.1% (11/94) were diagnosed with
a neurologic etiology for their S/NS; 10/11 had a final
diagnosis of recurrent seizures and 1/11 had a new
neurologic diagnosis (subacute stroke seen on CT
after patient with new unilateral weakness arrived in
OU). Mean age was similar in both groups (p = 0.327).
Length of stay for discharged patients with a NC was
not affected (1.89 days vs. 1.83 days, p = 0.448), but
did increase for admitted patients with a NC
(5.24 days vs. 2.89 days, p = 0.01). Patients with a NC
were less likely to be discharged (77.7% vs. 93.2%,
p = 0.001) but tended to be more likely to return to
the ED within 48 hours (p = 0.06). There was no difference in 30-day readmission between the two
groups (p = 0.293). There were no deaths. A NC adds
to direct, indirect, and total costs (p = 0.001, 0.004,
and 0.001 respectively).
Conclusion: In patients presenting to an OU for evaluation of S/NS, only a small percentage have a neurologic etiology, most of these due to recurrent seizures.
A NC does add to cost, increases the likelihood of
admission, and prolongs LOS, but does not contribute
significantly to a neurologic diagnosis or decrease
recidivism.
206
Time Burden of Emergency Department
Hand Hygiene with Glove Use
Joseph M. Reardon1, Josephine E. Valenzuela1,
Siddharth Parmar2, Arjun Venkatesh3,
Jeremiah D. Schuur2, Daniel J. Pallin2
1Harvard Medical School, Boston, MA; 2Brigham and Women's Hospital, Boston, MA; 3Brigham and Women's Hospital-Massachusetts General Hospital-Harvard Affiliated Emergency Medicine Residency, Boston, MA
Background: Guidelines require ED personnel to perform hand hygiene before and after patient contact,
whether nonsterile gloves are used or not. This requirement is based on old studies in the operating room setting. ED staff are known to be less likely to comply
when using gloves vs. when not using gloves, perhaps
because this seems arbitrary and burdensome. Knowledge of time and materials costs, and benefits measured
in decreased disease transmission, are key to understanding and encouraging proper hand hygiene.
Objectives: To measure the time burden of alcohol-based handrub use before and after non-sterile glove use among ED staff.
Methods: Research assistants counted PGY-2 and -3
EM residents’ glove donning events per hour
for 42 hours of observation during clinical shifts.
Table - Abstract 206: Average time (seconds) for alcohol-based handrub use, glove donning, and glove removal
Provider Type | Handrub before gloves | Glove donning | Glove removal | Handrub after gloves | Glove donning alone | Glove removal alone | Added time for handrub before gloves | Added time for handrub after gloves
Attending | 9 | 16.5 | 2.5 | 12.5 | 17 | 4 | 8.5 | 11
Resident | 8 | 23 | 3.9 | 6.7 | 14.6 | 3.1 | 16.4 | 7.5
Physician | 8.8 | 17 | 3.2 | 9 | 11.2 | 2.8 | 14.5 | 9.5
Nurse | 13.4 | 22.2 | 3.7 | 7.8 | 15.4 | 6.6 | 13.9 | 4.8
Student | 7 | 23 | 2.3 | 7.7 | 20 | 3.3 | 17.3 | 6.7
Mean (95% CI) | 10.8 (9.0–12.6) | 21.7 (17.7–25.7) | 3.6 (3.0–4.1) | 7.8 (6.9–8.7) | 15.2 (12.8–17.6) | 4.9 (2.8–7.0) | 17.3 (12.9–21.7) | 6.5 (4.5–8.5)
Separately, in a controlled setting, the observers timed
40 ED physicians, physician assistants, residents, and
nurses donning and removing gloves with and without
handrub. We report glove donning events per hour,
and donning times and removal times with and without
handrub, with 95%CI.
Setting: Urban, academic ED, census 60,000.
Results: Residents used gloves 0.83 times per hour
(95%CI 0.6–1.1) at an ED occupancy rate of 71%. Simultaneously, the average number of ideal gloving events
based on a checklist of clinical necessity was 0.69 per
hour (95%CI 0.51–1.06). Handrub use added a mean of
17 seconds (95%CI 13–22) before gloving and 8 seconds
(95%CI 2–14) after gloving. Thus, handrub use added
14 seconds (95%CI 8–20) per physician per hour.
Among the 44% of residents who used sanitizer before
putting on gloves (95%CI 26–62), compliance with
WHO guidelines for minimum 30 seconds of rub was
3% (95%CI 0–8).
Conclusion: Alcohol-based handrub use represents a
small time burden for providers when combined with
nonsterile glove use. These data may be helpful in
efforts to motivate increased hand hygiene compliance.
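One plausible reading of the per-hour figure in the Results is hourly burden as gloving events per hour times added handrub seconds per event; this reconstruction is ours, as the authors do not show the calculation.

```python
# Sketch of the per-hour arithmetic in abstract 206 (assumed derivation:
# rate of gloving events x added handrub time before gloving).
gloves_per_hour = 0.83      # observed resident gloving events per hour
added_s_before = 17         # added seconds of handrub before gloving

print(f"{gloves_per_hour * added_s_before:.1f} s per physician-hour")  # ~14 s
```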
207
Door-to-Balloon Times for Primary
Percutaneous Coronary Intervention: How
Do Freestanding Emergency Departments
Perform?
Erin L. Simon, Peter Griffin, Thomas Lloyd
Akron General Medical Center, Akron, OH
Background: Freestanding emergency departments
(FEDs) have become increasingly popular as the need
for emergency care continues to grow. It is unclear
how door-to-balloon (D2B) times are affected or if the
D2B goal of <90 minutes is met when ST-elevation myocardial infarction patients present to a FED.
Objectives: The objective of this study was to determine the proportion of STEMI patients transported by
ground ambulance from two different FEDs to a single
percutaneous coronary intervention (PCI) center who
meet the 90-minute goal for D2B for PCI (utilizing a
rapid STEMI protocol). Secondary aims included analyzing individual time components from arrival to the
FED until the completion of the PCI and their effect on
D2B times.
Methods: We conducted a retrospective cohort review
and included all patients who presented to two FEDs
through April 2011 with a STEMI since the opening of
these facilities in July 2007 and August 2009, respectively. Demographic information and key time points
were abstracted and statistical evaluation was performed using chi-square analysis.
Results: Thirty-five patients met inclusion criteria. The
mean arrival time to initial ECG time was 2.77 minutes.
The average door-to-transfer time was 32.5 minutes.
The average D2B time was 84.97 minutes (SD 13.04 minutes), with 74.3% of patients having a D2B
time less than 90 minutes. The time between the ECG
recording and the STEMI alert call was found to be
highly significant (p < 0.005), as was the transport time
from the ambulance arrival to the PCI center to the
catheterization lab itself (p < 0.005). The door-to-transfer time, door-to-door time, and length of transport time
were also significant (p < 0.005; p < 0.001; p < 0.001
respectively). The average door-to-transfer time from
the FED was 29.5 minutes in the group achieving D2B
times under 90 minutes, and was 39.3 minutes in the
group who did not meet the 90 minute goal.
Conclusion: A total of 74.3% of STEMI patients seen in
two FEDs achieved D2B times under 90 minutes. Factors associated with D2B times under 90 minutes were
time from initial ECG to calling a STEMI alert, door-to-transfer time, ambulance arrival at the hospital to arrival in the catheterization lab, and overall transport time.
Minimizing delays in these areas may decrease D2B
times from FEDs.
208
Emergency Department Holding Orders
Reduce ED Length Of Stay By Decreasing
Time To Bed Order
Samir A. Haydar, Joel Botler, Tania D. Strout,
Karen D. Taylor
Maine Medical Center, Portland, ME
Background: Detailed analysis of emergency department (ED) event data identified the time from completion of emergency physician evaluation (Doc Done) to
the time patients leave the ED as a significant contributor to ED length of stay (LOS) and boarding at our
institution. Process flow mapping identified the time
from Doc Done to the time inpatient beds were ordered
(BO) as an interval amenable to specific process
improvements.
Objectives: The purpose of this study was to evaluate
the effect of ED holding orders for stable adult
ACADEMIC EMERGENCY MEDICINE • April 2012, Vol. 19, No. 4, Suppl. 1
inpatient medicine (AIM) patients on: a) the time to BO
and b) ED LOS.
Methods: A prospective, observational design was
used to evaluate the study questions. Data regarding
the time to BO and LOS outcomes were collected
before and after implementation of the ED holding
orders program. The intervention targeted stable AIM
patients being admitted to hospitalist, internal medicine, and family medicine services. ED holding orders
were placed following the admission discussion with
the accepting service and special attention was paid
to proper bed type, completion of the emergent
work-up and the expected immediate course of the
patient’s hospital stay. Holding orders were of limited
duration and expired 4 hours after arrival to the
inpatient unit.
Results: During the 6-month study period, 7321
patients were eligible for the ED holding orders intervention; 6664 (91.0%) were cared for using the standard adult medicine order set and 657 (9.0%) received
the intervention. The median time from Doc Done to
BO was significantly shorter for patients in the ED
holding orders group, 41 min (IQR 19, 88) vs. 95 min
(IQR 53, 154) for the standard adult medicine group,
p < 0.001. Similarly, the median ED LOS was significantly shorter for those in the ED holding orders
group, 413 min (IQR 331, 540) vs. 456 min (IQR 346,
581) for the standard adult medicine group, p < 0.001.
No lapses in patient care were reported in the intervention group.
Conclusion: In this cohort of ED patients being admitted to an AIM service, placing ED holding orders
rather than waiting for a traditional inpatient team
evaluation and set of admission orders significantly
reduced the time from the completion of the ED workup to placement of a BO. As a result, ED LOS was
also significantly shortened. While overall utilization of
the intervention was low, it improved with each
month.
209
Emergency Department Interruptions in
the Age of Electronic Health Records
Matthew Albrecht, John Shabosky, Jonathan
de la Cruz
Southern Illinois University School of Medicine,
Springfield, IL
Background: Interruptions of clinical care in the emergency department (ED) have been correlated with
increased medical errors and decreased patient satisfaction. Studies have also shown that most interruptions happen during physician documentation. With the
advent of the electronic health record and computerized documentation, ED physicians now spend much of
their clinical time in front of computers and are more
susceptible to interruptions. Voice recognition dictation
adjuncts to computerized charting boast increased provider efficiency; however, little is known about how
data input of computerized documentation affects physician interruptions.
Objectives: We present here observational interruptions
data comparing two separate ED sites, one that uses
computerized charting by conventional techniques and
one assisted by voice recognition dictation technology.
Methods: A prospective observational quality initiative
was conducted at two teaching hospital EDs located
less than 1 mile from each other. One site primarily
uses conventional computerized charting while the
other uses voice recognition dictation computerized
charting. Four trained observers followed ED physicians for 180 minutes during shifts. The tasks each ED
physician performed were noted and logged in 30 second intervals. Tasks listed were selected from a predetermined standardized list presented at observer
training. Tasks were also noted as either completed or
placed in queue after a change in task occurred. A total
of 4140 minutes were logged. Interruptions were noted
when a change in task occurred with the previous task
being placed in queue. Data were then compared
between sites.
Results: ED physicians averaged 5.33 interruptions/
hour with conventional computerized charting compared to 3.47 interruptions/hour with assisted voice
recognition dictation (p = 0.0165).
Conclusion: Computerized charting assisted with voice
recognition dictation significantly decreased total per
hour interruptions when compared to conventional techniques. Charting with voice recognition dictation has the
potential to decrease interruptions in the ED allowing for
more efficient workflow and improved patient care.
210
Physician Documentation of Critical Care
Time While Working in the Emergency
Department
Jonathan W. Heidt, Richard Griffey
Washington University School of Medicine in
Saint Louis, Saint Louis, MO
Background: The Current Procedural Terminology
(CPT) Code 99291, describing the first 30–74 minutes of
critical care provision, has been called ‘‘the most underreported code in the emergency department (ED).’’ We
are not aware of publications describing successful
interventions to improve documentation and coding in
this area within the ED. We evaluated the effect of having medical coders provide feedback to emergency physicians (EPs), identifying their ED visits that may have
qualified for code 99291, in order to improve critical
care billing.
Objectives: The aim of this study was to first determine
the proportion of emergency department charts that
would qualify for code 99291 based on services/care
rendered, but lacked necessary documentation. The
effect on proper critical care documentation was then
determined after physicians were provided with
individualized examples of patient records that may
have qualified for critical care billing but lacked
documentation.
Methods: We conducted a retrospective record review
at an urban academic ED with 90,000 annual patient
visits, consisting of two 3-month periods, preceding
and following an intervention consisting of feedback
provided by ED coders to providers on their critical
care documentation. We queried our electronic medical
record (EMR) system for eligible visits, which included
all ED patients admitted to an intensive care unit (ICU)
during the study period. The intervention consisted of
individualized e-mail feedback to EPs on their patient
encounters that may have qualified for 99291 billing
based upon complexity of services/care rendered but
lacked supporting documentation. This feedback was
ongoing during the post-intervention phase. The primary outcome measure was the proportion of visits
documenting critical care time (99291) before as compared to after the intervention.
Results: Among the 501 ICU admissions identified in
the 3-month pre-intervention period, 88 (18%) documented critical care time. Following the intervention,
among 382 ICU admissions identified, 243 (64%) charts
included such documentation (p < 0.0001).
Conclusion: In this single-center study, documentation
of critical care time (99291) among ED patients admitted to an ICU significantly improved following individualized e-mail feedback by medical coders providing
physician education.
211
Attitudes Toward Health Care Robot
Assistants In The ED: A Survey Of ED
Patients And Visitors
Karen F. Miller, Wesley H. Self,
Candace D. McNaughton, Lorraine C. Mion,
Alan B. Storrow
Vanderbilt University Medical Center,
Nashville, TN
Background: Using robot assistants in health care is
an emerging strategy to improve efficiency and quality
of care while optimizing the use of human work hours.
Robot prototypes capable of performing vital signs and
assisting with ED triage are under development. However, ED users’ attitudes toward robot assistants are
not well studied. Understanding of these attitudes is
essential to design user-friendly robots and to prepare
EDs for the implementation of robot assistants.
Objectives: To evaluate the attitudes of ED patients
and their accompanying family and friends toward the
potential use of robot assistants in the ED.
Methods: We surveyed a convenience sample of adult
ED patients and their accompanying adult family members and friends at a single, university-affiliated ED, 9/
26/11–10/27/11. The survey consisted of eight items from
the Negative Attitudes Towards Robots Scale (Nomura
et al.) modified to address robot use in the ED. Response
options included a 5-point Likert scale. A summary score
was calculated by summing the responses for all 8 items,
with a potential range of 8 (completely negative attitude)
to 40 (completely positive attitude). Research assistants
gave the written surveys to subjects during their ED visit.
Internal consistency was assessed using Cronbach’s
alpha. Bivariate analyses were performed to evaluate the
association between the summary score and the following variables: participant type (patient or visitor), sex,
race, time of day, and day of week.
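[Editor's note] For readers who want to reproduce this style of instrument scoring, a minimal Python sketch is given below. The reverse-scored items (items 1 and 8, per the asterisks in the table) and the 8-to-40 summary range follow the abstract; the response matrix itself is hypothetical, since the abstract reports only aggregate results.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) matrix of responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses: rows are subjects, columns are the 8 items (1-5 Likert).
responses = np.array([
    [2, 3, 3, 2, 3, 3, 3, 3],
    [1, 2, 2, 1, 2, 2, 2, 2],
    [4, 5, 5, 4, 4, 5, 5, 4],
], dtype=float)

# Items 1 and 8 (indices 0 and 7) are positively worded and reverse-scored,
# so a response x on a 1-5 scale becomes 6 - x before summing.
scored = responses.copy()
scored[:, [0, 7]] = 6 - scored[:, [0, 7]]

# Summary score per subject: 8 = completely negative, 40 = completely positive.
summary_scores = scored.sum(axis=1)
print(summary_scores, round(cronbach_alpha(scored), 2))
```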
Results: Of 121 potential subjects approached, 113
(93%) completed the survey. Participants were 37%
patients, 63% family members or friends, 62%
women, 79% white, and had a median age of
45.5 years (IQR 18–84). Cronbach’s alpha was 0.94.
The mean summary score was 22.2 (SD = 0.87), indicating subjects were between ‘‘occasionally’’ and
‘‘sometimes’’ comfortable with the idea of ED robot
assistants (see table). Men were more positive toward
robot use than women (summary score: 24.6 vs 20.8;
p = 0.033). No differences in the summary score were
detected based on participant type, race, time of day,
or day of week.
Conclusion: ED users reported significant apprehension about the potential use of robot assistants in the
ED. Future research is needed to explore how robot
designs and strategies to implement ED robots can help
alleviate this apprehension.
Table - Abstract 211: *Reverse scored.
Item | Mean score (SD)
I would feel relaxed talking to robots in the ED.* | 2.3 (1.3)
I would feel uneasy if I were in an ED where I had to use robots. | 2.9 (1.5)
I would feel nervous about using a robot in front of other people in the ED. | 3.1 (1.6)
I would dislike the idea that nurses and doctors rely on robots to make judgments in ED care. | 2.5 (1.5)
I feel that if nurses and doctors depend on robots too much in the ED bad things might happen. | 2.7 (1.3)
I would feel uncomfortable with a robot watching me in the ED. | 3.1 (1.6)
I am concerned robots would be a bad influence on my care in the ED. | 3.0 (1.5)
I feel that in the future, robots will be a valuable part of ED patient care.* | 2.8 (1.4)
Total (summary score) | 22.2 (0.87)
212
The Effectiveness Of A Nurse Telephone
Triage Protocol For Emergency
Department Disposition During The H1N1
Epidemic Of 2009
Jeremy R. Monroe, Christopher M. Verdick,
John W. Hafner, Huaping Wang
University of Illinois College of Medicine at
Peoria, Peoria, IL
Background: The H1N1 influenza epidemic affected
already overcrowded ED resources and highlighted the
need for effective patient triage during widespread contagious outbreaks. Adequate public triage is necessary
to avoid dangerous ED overcrowding while identifying
patients needing advanced medical care. Nurse-assisted
public telephone triage represents a simple but possibly
effective tool that can screen large numbers of patients
with influenza-like illness.
Objectives: This study evaluates the effectiveness of a
nurse telephone triage protocol during the H1N1 epidemic of 2009.
Methods: A retrospective observational cohort study was conducted of all patients contacting a hospital-based nurse telephone triage service during the 2009
H1N1 epidemic (peak community health department
prevalence 9/28/09–11/9/09). Patients were screened
using an adapted CDC criteria protocol for influenza-like illness and further assessed for illness severity and
complicating medical conditions; triage logs were
recorded in an associated hospital EMR (Epic 2010,
Epic Systems Inc.). Triage calls utilizing the protocol, as
well as any associated outpatient or ED visits over the
next 24 hours, were queried. Patient demographics,
interventions, and disposition were abstracted. Ten
nurse triage dispositions were grouped into four categories (home care, physician notification, outpatient
physician visit, and ED visit). Group differences were
analyzed using chi-square tests.
Results: Three-hundred fifty triage calls (287 pediatric,
mean age 7.8 years; 63 adult, mean age 41.3 years)
were documented. Overall, 254 (72.6%) patients followed the recommended disposition. Patients triaged to
outpatient physician visits had higher compliance than
those triaged to the ED (85.9% vs. 53.3%,
p < 0.01). Twenty-seven patients (18.5%) were evaluated
in the ED and 20/27 (74.1%) were diagnosed with influenza-related illness; no ED visits required hospital
admission. The telephone triage protocol was highly
specific (0.95, 95% CI 0.94–0.97) but poorly sensitive
(0.26, 95% CI 0.13–0.42) for predicting ultimate patient
disposition.
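[Editor's note] As an illustration of how sensitivity and specificity of this kind are derived, a hedged Python sketch follows. The 2x2 counts are hypothetical (chosen only to land near the reported 0.26 and 0.95), and the abstract's asymmetric intervals suggest an exact binomial method rather than the normal approximation used here.

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical counts for a 2x2 table of protocol advice vs. actual need.
tp, fn = 10, 28    # truly ED-level patients: triaged to ED / not triaged to ED
tn, fp = 300, 12   # truly lower-acuity patients: kept out of ED / sent to ED

sens, sens_lo, sens_hi = proportion_ci(tp, tp + fn)
spec, spec_lo, spec_hi = proportion_ci(tn, tn + fp)
print(f"sensitivity {sens:.2f} (95% CI {sens_lo:.2f}-{sens_hi:.2f})")
print(f"specificity {spec:.2f} (95% CI {spec_lo:.2f}-{spec_hi:.2f})")
```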
Conclusion: In our population, during the 2009 H1N1
influenza epidemic, a nurse telephone triage protocol was effective in delineating ultimate patient disposition. Nurse
telephone triage may be one means to adequately
distribute medical resources during epidemics.
213
ED Impact of Rh Factor Testing in Patients
with First Trimester Vaginal Bleeding
Raviraj Patel, Nicholas Genes
Mount Sinai School of Medicine,
New York, NY
Background: Management of patients with first trimester vaginal bleeding in the emergency department commonly involves checking blood for Rh factor and, if
negative, dosing Rho(D) immune globulin, to prevent
isoimmunization and protect future instances of Rh
hemolytic disease.
Despite the lack of evidence supporting its effectiveness, this practice continues. The effect of Rh testing on
ED resource utilization has not been studied, however,
and if significant, may help prompt a change in practice.
Objectives: We sought to determine the effect on ED
resources of Rh factor testing on patients with first
trimester vaginal bleeding.
Methods: We retrospectively reviewed operations data
from our large urban academic ED from June 1 to September 30, 2011, to determine the number of patients
discharged who presented with first trimester bleeding,
their length of stay compared to other discharged
patients, the turnaround time for blood typing, and
frequency of Rh immune globulin administration.
Results: Based on discharge diagnoses during the
study period, we identified 311 patients with first trimester vaginal bleeding. Their ED length-of-stay (LOS)
was 292 minutes (SD ± 176 min), 42.9 minutes longer
than the average LOS of 16,946 discharged patients in
that period (95% CI 11.8–73.9 minutes). Blood type
turnaround time was 198 minutes (SD ± 82.3 minutes).
Rh immune globulin was administered 15 times.
Conclusion: Patients discharged with first trimester
vaginal bleeding have a significantly longer than average LOS; the long turnaround time for Rh factor determination likely plays a major role in LOS. Just 4.8% of
patients with first trimester bleeding received Rh
immune globulin, reflecting the low prevalence of Rh
negativity in the population and underscoring the inefficiency of current practice. If blood type turnaround
time was subtracted from LOS for these patients, a total
of 29,300 bed minutes would be saved over the study
period, or 38.0 hours/week. (Originally submitted as a
‘‘late-breaker.’’)
214
Are We Punishing Hospitals for
Progressive Treatment of Atrial
Fibrillation?
Nicole E. Piela, Alfred Sacchetti, Darius
Sholevar, Reginald Blaber, Steven Levi
Our Lady of Lourdes Medical Center, Camden,
NJ
Background: Emergency department cardioversion
(EDC) of recent-onset atrial fibrillation or flutter (AF)
patients is an increasingly common management
approach to this arrhythmia. Patients who qualify for
EDC generally have few co-morbidities and are often
discharged directly from the ED. This results in a shift
towards a sicker population of patients admitted to the
hospital with this diagnosis.
Objectives: To determine whether hospital charges and
length of stay (LOS) profiles are affected by emergency
department discharge of AF patients.
Methods: Patients receiving treatment at an urban
teaching community hospital with a primary diagnosis
of atrial fibrillation or flutter were identified through the
hospital’s billing data base. Information collected on
each patient included date of service, patient status,
length of stay, and total charges. Patient status was categorized as inpatient (admitted to the hospital), observation (transferred from the ED to an inpatient bed but
placed in an observation status), or ED (discharged
directly from the ED). The hospital billing system automatically defaults to a length of stay of 0 for observation
patients. ED patients were assigned a length of stay of 0.
Total hospital charges and mean LOS were determined
for two different models: a standard model (SM) in
which patients discharged from the ED were excluded
from hospital statistics, and an inclusive model (IM) in
which discharged ED patients were included in the hospital statistics. Statistical analysis was performed with ANOVA.
Results: A total of 317 patients were evaluated for AF
over an 18-month period. Of these, 197 (62%) were
admitted, 22 (7%) were placed in observation status,
and 98 (31%) were discharged from the ED. Hospital
charges and LOS in days are summarized in the table.
All differences were statistically significant (p < 0.001).
Conclusion: Emergency department management can
lead to a population of AF patients discharged directly
from the ED. Exclusion of these patients from hospital
statistics skews performance profiles effectively punishing institutions for progressive care.
Table - Abstract 214:
Group | Mean Hospital Charges (95% CI) | Mean LOS (95% CI)
Standard Model | $47,542 ($41–54,000) | 3.42 (3.0–3.8)
Inclusive Model | $34,063 ($29–39,000) | 2.37 (2.0–2.7)
215
A Comparison of Two Hospital Electronic
Medical Record Systems and Their Effects
on the Relationship Between Physician
Charting and Patient Contact
John Shabosky, Matthew Albrecht, Jonathan
de la Cruz
Southern Illinois University School of Medicine,
Springfield, IL
Background: Recent health care reform has placed an
emphasis on the electronic health record (EHR). With
the advent of the EHR it is common to see ED providers spending more time in front of computers documenting and away from patients. Finding strategies to
decrease provider interaction with computers and
increase time with patients may lead to improved
patient outcomes and satisfaction. Computerized charting adjuncts, such as voice recognition software, have
been marketed as ways to improve provider efficiency
and patient contact.
Objectives: We present here observational data comparing two separate ED sites, one where computerized charting is done by conventional techniques and
one that is assisted with voice recognition dictation,
and their effects on physician charting and patient
contact.
Methods: A prospective observational quality initiative
was conducted at two teaching hospitals located less
than 1 mile from each other. One site primarily uses conventional computerized charting while the other uses
voice recognition dictation. Four trained quality assistants observed ED physicians for 180 minutes during
shifts. The tasks each physician performed were noted
and logged in 30 second intervals. Tasks listed were
identified from a predetermined standardized list presented at observer training. A total of 4140 minutes were
logged. Time allocated to charting and that allocated to
direct patient care were then compared between sites.
Results: ED physicians spent 28.6% of their time charting using conventional techniques vs 25.7% using voice
recognition dictation (p = 0.4349). Time allocated to
direct patient care was found to be 22.8% with conventional charting vs 25.1% using dictation (p = 4887). In
total, ED physicians using conventional charting techniques spent 668/2340 minutes charting. ED physicians
using voice recognition dictation spent 333/1800 minutes dictating and an additional 129.5/1800 minutes
reviewing or correcting their dictations.
Conclusion: The use of voice recognition assisted dictation rather than conventional techniques did not significantly change the amount of time physicians spent
charting or with direct patient care. Although voice
recognition dictation decreased initial input time of
documenting data, a considerable amount of time was
required to review and correct these dictations.
216
Emergency Department Rectal
Temperatures are Frequently Discordant
from Initial Triage Temperatures
Daniel Runde1, Daniel Rolston1, Graham
Walker2, Jarone Lee3
1St. Luke's Roosevelt, New York, NY; 2Stanford University, Palo Alto, CA; 3Massachusetts General Hospital, Boston, MA
Background: Fever in patients can provide important
clues to the etiology of a patient’s symptoms. Non-invasive temperature sites (oral, axillary, temporal) may be
insensitive due to a variety of factors. This has not been
well-studied in adult emergency department (ED)
patients.
Objectives: For our primary objective, we studied
whether emergency department triage temperatures
detected fever adequately when compared to a rectal
temperature. As secondary objectives, we examined the
temperature differences when a rectal temperature was
taken within an hour of non-invasive temperature, temperature site (oral, axillary, temporal), and also examined the patients that were initially afebrile but were
found to be febrile by rectal temperature.
Methods: We performed an electronic chart review at
our inner city, academic emergency department with
an annual census of 110,000 patients. We identified all
patients over the age of 18 who received a non-invasive
triage temperature and a subsequent rectal temperature
while in the ED from January 2002 through February
2011. Specific data elements included many aspects of
the patient’s medical record (e.g. subject demographics,
temperature, and source). We analyzed our data with
standard descriptive statistics, t-tests for continuous
variables, and Pearson chi-square tests for proportions.
Results: A total of 27,130 patients met our inclusion criteria. The mean difference in temperatures between the
initial temperature and the rectal temperature was
1.3°F, with 25.9% having higher rectal temperatures
≥2°F, and 5.0% having higher rectal temperatures ≥4°F.
The mean temperature difference among the 10,313
patients who had an initial noninvasive temperature and a
rectal temperature within one hour was 1.4°F. The
mean difference among patients that received oral, axillary, and temporal temperatures was 1.2°F, 1.8°F, and
1.2°F respectively. Approximately one in five patients
(18.1%) were initially afebrile and found to be febrile by
rectal temperature, with an average temperature difference of 2.5°F. These patients had a higher rate of
admission, and were more likely to be admitted to the
intensive care unit.
Conclusion: There are significant differences between
rectal temperatures and non-invasive triage temperatures in this emergency department cohort. In almost
one in five patients, fever was missed by triage
temperature.
217
Direct Bedding, Bedside Registration, and
Patient Pooling to Improve Pediatric
Emergency Department Length of Stay
Niel F. Miele, Neelam R. Patel,
Rachel D. Grieco, Ernest G. Leva
University of Medicine and Dentistry of New
Jersey, New Brunswick, NJ
Background: Pediatric emergency department (PED)
overcrowding has become a national crisis, and has
resulted in delays in treatment, and patients leaving
without being seen. Increased wait times have also
been associated with decreased patient satisfaction.
Optimizing PED throughput is one means by which to
handle the increased demands for services. Various
strategies have been proposed to increase efficiency
and reduce length of stay (LOS).
Objectives: To measure the effect of direct bedding,
bedside registration, and patient pooling on PED wait
times, length of stay, and patient satisfaction.
Methods: Data were extracted from a computerized
ED tracking system in an urban tertiary care PED.
Comparisons were made between metrics for 2010
(23,681 patients) and the 3 months following process
change (6,195 patients). During 2010, patients were triaged by one or two nurses, registered, and then sent
either to a 14-bed PED or a physically separate 5-bed
fast-track unit, where they were seen by a physician.
Following process change, patients were brought
directly to a bed in the 14-bed PED, triaged and registered, then seen by a physician. The fast-track unit was
only utilized to accommodate patient surges.
Results: Anticipating improved efficiencies, attending
physician coverage was decreased by 9%. After instituting process changes, improvements were noted immediately. Although daily patient volume increased by 3%,
median time to be seen by a physician decreased by
20%. Additionally, median LOS for discharged patients
decreased by 15%, and median time until the decisionto-admit decreased by 10%.
Press-Ganey satisfaction scores during this time
increased by greater than 5 mean score points, which
was reported to be a statistically significant increase.
Conclusion: Direct bedding, bedside registration, and
patient pooling were simple to implement process
changes. These changes resulted in more efficient PED
throughput, as evidenced by decreased times to be
seen by a physician, LOS for discharged patients, and
time until decision-to-admit. Additionally, patient satisfaction scores improved, despite decreased attending
physician coverage and a 30% decrease in room
utilization.
Table - Abstract 217:
Metric | 2010 | April 2011 | May 2011 | June 2011
Patients/day | 65 | 67 | 68 | 67
Median time to be seen–M.D. | 0:39 | 0:31 | 0:28 | 0:30
Median LOS for D/C | 2:00 | 1:42 | 1:42 | 1:42
Median decision to admit | 3:12 | 3:12 | 2:54 | 2:36
218
Are Emergency Physicians More Cost-Effective in Running an Observation Unit?
Margarita E. Pena, Robert B. Takla,
Susan Szpunar, Steve Kler
St. John Hospital and Medical Center, Detroit,
MI
Background: There are no studies comparing cost-effectiveness when an observation unit (OU) is
managed and staffed by EM physicians versus non-EM
physicians.
Objectives: To compare cost-effectiveness when the
same OU is managed and staffed by EM physicians versus non-EM physicians.
Methods: This was an observational, retrospective data
collection study of a 30-bed OU in an urban teaching
hospital. Three time periods were compared: November
2007 to August 2008 (period 1), November 2008 to
August 2009 (period 2), and November 2010 to August
2011 (period 3). During period 1, the OU was managed
by the internal medicine department and staffed by primary care physicians and physician assistants. During
periods 2 and 3, the OU was managed and staffed by
EM physicians. Data collected included OU patient volume, length of stay (LOS) for discharged and admitted
patients, admission rates, and 30-day readmission rates
for discharged patients. Cost data collected included
direct, indirect, and total cost per patient encounter.
Data were compared using chi-square and ANOVA
analysis followed by multiple pairwise comparisons
using the Bonferroni method of p-value adjustment.
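[Editor's note] The ANOVA-plus-Bonferroni workflow described here can be sketched in a few lines of Python with SciPy. The per-encounter cost samples below are hypothetical placeholders, not study data; the adjustment simply multiplies each pairwise p-value by the number of comparisons.

```python
from itertools import combinations
from scipy import stats

# Hypothetical per-encounter total costs for the three periods (placeholder data).
periods = {
    "period 1": [2100.0, 2250.5, 1980.0, 2400.0, 2310.0],
    "period 2": [1600.0, 1550.0, 1700.0, 1580.0, 1660.0],
    "period 3": [1590.0, 1610.0, 1575.0, 1605.0, 1585.0],
}

# Omnibus one-way ANOVA across the three periods.
f_stat, p_omnibus = stats.f_oneway(*periods.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# Pairwise t-tests with Bonferroni adjustment (raw p times number of pairs).
pairs = list(combinations(periods, 2))
for a, b in pairs:
    t_stat, p_raw = stats.ttest_ind(periods[a], periods[b])
    p_adj = min(1.0, p_raw * len(pairs))
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```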
Table - Abstract 218:
Characteristic | Period 1 | Period 2 | Period 3 | p-values
OU volume/month | 576.2 ± 10.4 | 620.1 ± 66.7 | 758.0 ± 34.2 | 1 vs 3, 2 vs 3, p < 0.0001
% of ED volume | 7.1% | 7.1% | 7.9% | p < 0.0001
LOS (hours) discharged | 27.3 ± 1.7 | 17.3 ± 1.3 | 16.9 ± 0.4 | 1 vs 2, 1 vs 3, p < 0.0001
LOS (hours) admitted | 20.7 ± 2.2 | 16.5 ± 3.0 | 15.0 ± 0.44 | 1 vs 2 p = 0.001, 1 vs 3 p < 0.0001
Admission rate | 32.5% | 21.6% | 19.6% | p < 0.0001
30-day readmission rate | 11.6% | 7.7% | 7.9% | p < 0.0001
Direct cost | 1367.9 ± 1055.0 | 1018.7 ± 759.6 | 938.0 ± 743.0 | All comparisons, p < 0.0001
Indirect cost | 817.4 ± 552.5 | 592.8 ± 462.9 | 938.0 ± 743.0 | All comparisons, p < 0.0001
Total cost | 2185.4 ± 1579.6 | 1611.5 ± 1156.3 | 1592.3 ± 1199.8 | All comparisons, p < 0.0001
Results: See table. The OU patient volume and percent
of ED volume were greater in period 3 compared to
periods 1 and 2. Length of stay, admission rates, 30-day
readmission rates, and costs were greater in period 1
compared to periods 2 and 3.
Conclusion: EM physicians provide more cost-effective care for patients in this large OU compared to
non-EM physicians, resulting in shorter LOS for
admitted and discharged patients, greater rates of
patients discharged, and lower 30-day readmission rates
for discharged patients. This is not affected by an
increase in OU volume and shows a trend towards
improvement.
219
A Long Term Analysis of Physician
Screening in the Emergency Department
Jonathan G. Rogg1, Benjamin A. White2, Paul
Biddinger2, Yuchiao Chang2,
David F. M. Brown2
1Harvard Affiliated Emergency Medicine Residency, Boston, MA; 2Massachusetts General Hospital, Boston, MA
Background: Emergency department (ED) crowding
continues to be a problem, and new intake models may
represent part of the solution. However, little data exist
on the sustainability and long-term effects of physician
triage and screening on standard ED performance
metrics, as most studies are short-term.
Objectives: We examined the hypothesis that a physician screening program (START) sustainably improves
standard ED performance metrics including patient
length of stay (LOS) and patients who left without
completing assessment (LWCA). We also investigated
the number of patients treated and dispositioned by
START without using a monitored bed and the median
patient door-to-room time.
Methods: Design and Setting: This study is a retrospective before-and-after analysis of START in a Level I tertiary care urban academic medical center with
approximately 90,000 annual patient visits. All adult
patients from December 2006 until November 2010 are
included, though only a subset was seen in START.
START began at our institution in December 2007.
Observations: Our outcome measures were length of
stay for ED patients, LWCA rates, patients treated and
dispositioned by START without using a monitored
bed, and door-to-room time. Statistics: Simple descriptive statistics were used. P-values for LOS were calculated with Wilcoxon test and p-value for LWCA was
calculated with chi-square.
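[Editor's note] For concreteness, the two tests named above can be run as in the sketch below. The LOS samples are hypothetical, and the LWCA counts are likewise hypothetical, chosen only to be consistent with the reported 4.8% and 2.9% rates.

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient LOS values in minutes (pre-START vs. most recent year).
los_pre = np.array([362, 410, 250, 544, 330, 290, 480])
los_post = np.array([306, 280, 200, 477, 260, 240, 410])

# Wilcoxon rank-sum test for the LOS comparison.
_, p_los = stats.ranksums(los_pre, los_post)

# Chi-square test for the LWCA proportions (hypothetical counts consistent
# with the reported 4.8% pre-START and 2.9% most-recent-year rates).
lwca_table = np.array([
    [1879, 37263],   # pre-START: LWCA, completed assessment
    [1457, 48792],   # most recent year: LWCA, completed assessment
])
chi2, p_lwca, dof, _ = stats.chi2_contingency(lwca_table)
print(f"LOS p = {p_los:.4f}, LWCA p = {p_lwca:.2e}")
```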
Results: Table 2 shows median length of stay for ED
patients was reduced by 56 minutes/patient (p-value
<0.0001) when comparing the most recent year to the
year before START. Patients who LWCA were reduced
from 4.8% to 2.9% (p-value <0.0001) during the same
time period.
We also found that in the first half-year of START, 18%
of patients screened in the ED were treated and dispositioned without using a monitored bed and by the end
of year 3, this number had grown to 29%. Median
door-to-room time decreased from 18.4 minutes to
9.9 minutes over the same period of time.
Conclusion: A START system can provide sustained
improvements in ED performance metrics, including a
significant reduction in ED LOS, LWCA rate, and doorto-room time. Additionally, START can decrease the
need for monitored ED beds and thus increase ED
capacity.
Table 1 - Abstract 219: Provides demographic data for patients eligible for START from December 2006–November 2007 and actual START volume after December 2007.
 | Dec 06–Nov 07 | Dec 07–Nov 08 | Dec 08–Nov 09 | Dec 09–Nov 10
START volume | 39142 | 42723 | 48756 | 50249
Age, median (IQR) | 43 (24–61) | 43 (24–61) | 41 (22–60) | 43 (23–61)
Male, population (%) | 19245 (49.2) | 20952 (49) | 23944 (49.1) | 24966 (49.7)
Hospital characteristics
ED volume | 81578 | 85551 | 91428 | 91395
Boarders per day, median (IQR) | 44 (39–49) | 44 (37–50) | 43 (36–50) | 43 (35–48)
Boarding hours per day, median (IQR) | 179.5 (118.6–246.9) | 180.7 (115.3–258.1) | 172 (111.7–250.3) | 179 (100.6–246.7)
Boarding hours per patient, median (IQR) | 2.31 (0.92–5.46) | 2.21 (0.88–5.47) | 2.05 (0.78–5.19) | 2.08 (0.8–5.76)
Table 2 - Abstract 219: Length of Stay and LWCA
 | Dec 06–Nov 07 | Dec 07–Nov 08 | Dec 08–Nov 09 | Dec 09–Nov 10 | diff* | p-value
ED length of stay overall (min), median (IQR) | 362 (234–544) | 342 (216–348) | 310 (193–478) | 306 (189–477) | 56 | <0.0001
Discharged patients (min), median (IQR) | 317 (208–475) | 293 (188–447) | 263 (168–399) | 257 (163–394) | 60 | <0.0001
Admitted patients (min), median (IQR) | 461 (320–678) | 456 (311–666) | 431 (298–635) | 425 (287–639) | 36 | <0.0001
Other disposition (min), median (IQR) | 208 (115–329) | 152 (82–267) | 138 (74–247) | 139 (70–251) | 69 | <0.0001
LWCA | 4.80% | 3.10% | 2.70% | 2.90% | 1.90% | <0.0001
* difference is comparing year prior to START to the most recent year
220
Professional Translation Does Not Result
In Decreased Length Of Stay For Spanish
Speaking Patients With Abdominal Pain
Otar Taktakishville, Gregory Garra,
Adam J. Singer
Stony Brook University, Stony Brook, NY
Background: Language discordance is the most frequently reported communication barrier (CB) with
patients. CBs are associated with decreased diagnostic
confidence, increased diagnostic test utilization, and
increased ED length of stay (LOS).
Objectives: Our primary objective was to determine
whether professional translation results in decreased
ED LOS for patients with abdominal pain. Our secondary objective was to determine differences in test/
consult utilization and disposition. Our hypothesis was
that professional translation would result in a 1-hour decrease in LOS.
Methods: Study design: Prospective observational. Setting: University ED with 90,000 visits/ yr. Subjects:
Spanish-speaking patients presenting to the ED for
abdominal pain. Measures: An anonymous survey tool
was completed by the treating physician. Data collected
included demographics, triage time, disposition time,
type of translation method utilized, and ancillary testing
and consultations obtained in the ED. Analysis: descriptive statistics. Continuous variables were compared
with analysis of variance (ANOVA), and binary variables were compared with phi coefficient.
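[Editor's note] A short sketch of the phi-coefficient comparison for one binary outcome follows; the 2x2 counts are hypothetical (the abstract reports group sizes of 47 and 44 but not the cross-tabulation).

```python
import math
import numpy as np
from scipy import stats

# Hypothetical cross-tabulation: translation group vs. hospital admission.
table = np.array([
    [9, 38],   # professional/fluent translation: admitted, discharged
    [8, 36],   # lay translation: admitted, discharged
])

chi2, p_value, dof, _ = stats.chi2_contingency(table, correction=False)

# For a 2x2 table, the phi coefficient is sqrt(chi-square / n).
phi = math.sqrt(chi2 / table.sum())
print(f"phi = {phi:.3f}, p = {p_value:.3f}")
```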
Results: Ninety-two patients were enrolled; mean age
was 35 (IQR 27–42), 76% were female. The median ED
LOS was 270 min (IQR 199–368). Labs were obtained in
98%, CT in 37%, US in 30%, and consultation in 23%.
18% of the cohort was admitted to the hospital. The
most commonly utilized source of translation was a layman (35%). A professional translator was used in 9%
and translation service (language line, MARTY) in 30%.
The examiner was fluent in the patient’s language in
11%. Both the patient and examiner were able to maintain basic communication in 11%. There were 47
patients in the professional/ fluent translation group
and 44 patients in the lay translation group. There was
no difference in ED LOS between groups (288 vs 304 min; p = 0.6). There was no difference in the frequency of lab tests, computerized tomography, ultrasound, consultations, or hospital admission. Frequencies did not differ by sex or age.
Conclusion: Translation method was not associated
with a difference in overall ED LOS, ancillary test use,
or specialist consultation in Spanish-speaking patients
presenting to the ED for abdominal pain.
221
Emergency Department Patients on
Warfarin - How Often Is the Visit Due to
the Medication?
Jim Killeen, Edward Castillo, Theodore Chan,
Gary Vilke
UCSD Medical Center, San Diego, CA
Background: Warfarin has important therapeutic value
for many patients, but has been associated with significant bleeding complications, hypersensitivity reactions, and drug-drug interactions, which can result in
patients seeking care in the emergency department
(ED).
Objectives: To determine how often ED patients on warfarin present for care as a result of the medication itself.
Methods: A multi-center prospective survey study in
two academic EDs over 6 months. Patients who presented to the ED taking warfarin were identified, and
ED providers were prospectively queried at the time of
disposition regarding whether the visit was the result
of a complication or side effect associated with warfarin. Data were also collected on patient demographics,
chief complaint, triage acuity, vital signs, disposition,
ED evaluation time, and length of stay (LOS). Patients
identified with a warfarin-related cause for their ED
visit were compared with those who were not. Statistical analysis was performed using descriptive statistics.
Results: During the study period, 31,500 patients were
cared for by ED staff, of whom 594 were identified as
taking warfarin as part of their medication regimen. Of
these, providers identified 54.7% (325 patients) who
presented with a warfarin-related complication as their
primary reason for the ED visit. 56.9% (338) of patients
were seen at an academic facility and 43.1% (256)
patients were seen at a community hospital. 53.4% (317)
were male patients and 46.6% (277) were females, with
42.3% (251) over the age of 65 vs. 57.7% (343) under the
age of 65. Providers admitted 33.8% (201) of patients to
the hospital while 63.1% (375) were discharged home.
Providers attributed 8.1% (48) of the admitted patients to a bleeding complication. Patients
with a warfarin-related ED visit were more likely to be
triaged at an urgent level 75.3% (447) vs. emergent
14.3% (85) or non-urgent 10.4% (62). There was no significant difference between ED evaluation times by
patients with active bleeding complaints vs. non-active
bleeding complaints. There was no statistical difference
between ED evaluation time for all patients identified as
taking warfarin from triage.
Conclusion: Half of all patients identified as taking
warfarin from triage were in the ED for a complication
related to warfarin use.
222
Ultrasound in Triage in Patients at Risk for
Ectopic Pregnancy Decreases Emergency
Department Length of Stay
Kenneth J. Cody, Daniel Jafari,
Nova L. Panebianco, Olan A. Soremekun,
Anthony J. Dean
University of Pennsylvania, Philadelphia, PA
Background: First trimester abdominal pain and vaginal
bleeding are common ED complaints. ED overcrowding
leads to long wait times, delayed treatment, and high left
without being seen (LWBS) rates in these patients.
Objectives: To determine whether receiving a pelvic
ultrasound in triage (TUS group) decreased ED length
of stay (LOS) and LWBS rate compared to routine ED
care (EDUS group).
Methods: We prospectively enrolled a convenience
sample of patients and matched with historic controls
receiving routine care. Study setting: urban academic
ED with an annual census of 58,000. Inclusion criteria
were women ages 16 to 49 arriving during pre-determined high volume periods with a positive urine
β-hCG, Emergency Severity Index of 3, and one or
more of the following: abdominal or pelvic pain, vaginal
bleeding, dizziness, or syncope. Exclusion criteria: documented intrauterine pregnancy (IUP), hemodynamic
instability, or assisted reproductive technique use. After
initial triage evaluation, eligible patients received a pelvic ultrasound while still in triage. Controls were
matched to cases by time of visit and ED census. Routine care for EDUS controls consisted of triage, then
transfer to the main ED when a bed became available.
Pelvic ultrasound was performed after room placement.
ED physicians performed all ultrasounds for cases and
controls. After the ultrasound, both groups received
similar care: those with an identified IUP were discharged. Those with no IUP received radiology ultrasound and gynecology evaluation. LOS was defined as
patient intake time to disposition time. LWBS rate was
determined during enrollment and control periods.
Results: 27 TUS cases were enrolled and compared to
27 EDUS controls. The groups were similar with respect
to age, ethnicity, parity, gestational age, presenting
symptoms, medical history, ED census, and final diagnosis. T-test results showed TUS group LOS was significantly shorter than that of EDUS (249 minutes [95%CI
203–293] vs. 497 minutes [95%CI 416–579]; p = 0.0003).
The LWBS rate for EDUS trended higher than that of TUS (20% vs. 6%; p = 0.1, Fisher's exact test).
Conclusion: Emergency ultrasound in triage significantly reduced LOS in patients presenting with possible
ectopic pregnancy. There was a trend towards
decreased LWBS rate in the TUS group.
223
Boarding and Press Ganey Patient
Satisfaction Scores Among Discharged
Patients - Quantifying the Relationship
Paris B. Lovett, Frederick T. Randolph,
Rex G. Mathew
Thomas Jefferson University, Philadelphia, PA
Background: Long wait times, long length of stay, use
of hallway beds, and physical crowding have all been
reported to negatively affect patient satisfaction. There
is a need for quantitative assessment of the relationship between boarding and satisfaction among patients discharged from the ED.
Objectives: To describe the association between total
boarding hours for given calendar days, and mean
Press Ganey patient satisfaction raw scores (PGs) on
those days. To determine a quantitative coefficient for
the relationship.
Methods: We measured total hours of boarding (stays
greater than two hours after admission decision) for
each calendar day in a nine month period. We obtained
mean PGs for the same dates (by date of visit). A linear
regression analysis was performed.
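[Editor's note] The regression itself is a single call in most statistical environments; a minimal Python sketch with hypothetical daily aggregates is shown below. A slope near -0.013 raw score points per boarding hour would correspond to the abstract's reported drop of about 1.3 points per 100 boarding hours.

```python
import numpy as np
from scipy import stats

# Hypothetical daily aggregates: total boarding hours and mean Press Ganey
# raw score for each calendar day in the study window.
boarding_hours = np.array([120.0, 250.0, 310.0, 90.0, 400.0, 180.0, 260.0])
mean_pg_score = np.array([88.1, 86.5, 85.9, 88.6, 84.7, 87.3, 86.2])

result = stats.linregress(boarding_hours, mean_pg_score)
print(f"slope = {result.slope:.4f} points per boarding hour")
print(f"r^2 = {result.rvalue**2:.3f}, p = {result.pvalue:.4f}")
```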
Results: Scatter plots with regression lines are shown
in the figure. The relationships were statistically significant. See the table for regression data.
Conclusion: Our research supports prior reports that
boarding has a negative effect on patient satisfaction.
Each 100 hours of daily boarding is associated with a
drop of 1.3 raw score points in both PG metrics. These
seemingly small drops in raw scores translate into
major changes in rankings on Press Ganey national
percentile scales (a difference of as much as 10 percentile points). Our institution commonly has hundreds of
hours of daily boarding. It is possible that patient-level
measurements of boarding impact would show stronger correlation with individual satisfaction scores, as
opposed to the daily aggregate measures we describe
here. Our research suggests that reducing the burden
of boarding on EDs will improve patient satisfaction.
224
Effect of Day of Week on Number of
Patients Transferred to a Tertiary Care
Emergency Department
Wendy L. Woolley, Daniel K. Pauze, Denis R.
Pauze, Dennis P. McKenna, Wayne R. Triner
Albany Medical Center, Albany, NY
Background: The lack of subspecialty coverage in
many EDs often results in patient transfers for higher
levels of care. Many hospitals do not have consistent
daily coverage by such specialists, and frequently lack
weekend coverage. As the gaps in coverage widen,
transfer rates to tertiary referral EDs are thought to
increase. This can significantly affect both the sending
and receiving EDs’ throughput, efficiency, and ultimately patient satisfaction and safety.
Objectives: We sought to determine whether the day
of the week has any effect on the number of patients
transferred and their disposition.
Methods: A retrospective chart review of patient transfers from January to December 2010 into a Level I tertiary care ED with an annual patient volume of 72,000
visits. The transfer center database was queried for day
and date of transfer, requested specialty service, and
patient disposition.
Results: In 2010 a total of 3888 patients were received
in transfer. Those cases where no specific specialty service was requested by the transferring provider were
excluded. The remaining 3707 patients were analyzed
using descriptive and statistical analysis. 33.4% of these
transfers (1238) were received on Saturday or Sunday
(weekend) as opposed to 66.6% (2469) patients received
Monday through Friday (weekday). Mean number of
patient transfers per day was as follows: Monday 9.8,
Tuesday 9.3, Wednesday 9.1, Thursday 9.1, Friday 9.9,
Saturday 11.5, Sunday 12.3. When dichotomized to
weekend or weekday, the mean for total number of
transfers per day was greater for weekends (p < 0.01).
The admission rate on weekends was 75.9% compared
to 80.5% on weekdays (risk ratio 1.23, CI 1.09, 1.29).
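[Editor's note] A note on the risk ratio: the reported 1.23 does not match the ratio of the admission rates (80.5/75.9 is about 1.06) but is reproduced almost exactly by the ratio of the non-admission (discharge) rates, as the sketch below shows. The counts are reconstructed from the reported denominators and percentages, so this reading is an inference, not a correction.

```python
import math

# Counts reconstructed from the abstract: 1238 weekend and 2469 weekday
# transfers, with admission rates of 75.9% and 80.5% respectively.
weekend_n, weekday_n = 1238, 2469
weekend_nonadmit = weekend_n - round(0.759 * weekend_n)   # about 298
weekday_nonadmit = weekday_n - round(0.805 * weekday_n)   # about 481

# Risk ratio for NOT being admitted, weekend vs. weekday.
rr = (weekend_nonadmit / weekend_n) / (weekday_nonadmit / weekday_n)

# Standard log-scale 95% CI for a risk ratio.
se_log = math.sqrt(1 / weekend_nonadmit - 1 / weekend_n
                   + 1 / weekday_nonadmit - 1 / weekday_n)
lo, hi = rr * math.exp(-1.96 * se_log), rr * math.exp(1.96 * se_log)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # about 1.24
```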
Conclusion: Our data show a significantly disproportionate increase in the number of patients received in
transfers to our tertiary care ED on the weekends with
a lower likelihood of hospital admission. Maintaining
subspecialty coverage is a national challenge that seems
to be worsening with the current health care crisis. The
potential for increased volume due to patient transfers
on the weekends should be considered by tertiary care
centers when making staffing and on-call decisions.
Further discussion may center on the types of specialty
coverage needed during these times.
225
The Impact Of Increased Output Capacity
Interventions On Emergency Department
Length Of Stay And Patient Flow
Hallam M. Gugelmann1, Olanrewaju A.
Soremekun1, Elizabeth M. Datner1, Asako C.
Matsuura1, Jesse M. Pines2
1Department of Emergency Medicine, University of Pennsylvania, Philadelphia, PA; 2Departments of Emergency Medicine and Health Policy, George Washington University, Washington, DC
Background: Prolonged emergency department (ED)
boarding is a key contributor to ED crowding. The
effect of output interventions (moving boarders out of
the ED into an intermediate area prior to admission or
adding additional capacity to an observation unit) has
not been well studied.
Objectives: We studied the effect of a combined observation-transition (OT) unit, consisting of observation
beds and an interim holding area for boarding ED
patients, on the length of stay (LOS) for admitted
patients, as well as secondary outcomes such as LOS for
discharged patients, and left without being seen rates.
Methods: We conducted a retrospective review
(12 months pre-, 12 months post-design) of an OT unit at
an urban teaching ED with 59,000 annual visits (study
ED). We compared outcomes to a nearby community-based ED with 38,000 annual visits in the same health system (control ED) where no capacity interventions were
performed. The OT had 17 beds, full monitoring capacity,
and was staffed 24 hours per day. The number of beds
allocated to transition and observation patients fluctuated
throughout the course of the intervention, based on
patient demands. All analyses were conducted at the level
of the ED-day. Wilcoxon rank-sum and analysis of covariance tests were used for comparisons; continuous variables were summarized with medians.
Results: In unadjusted analyses, median daily LOS of
admitted patients at the study ED was 31 minutes lower
in the 12 months after the OT opened, 6.98 to
6.47 hours (p < 0.0001). Control site daily LOS for
admitted patients increased 26 minutes from 4.52 to
4.95 hours (p < 0.0001). Results were similar after
adjusting for other covariates (day of week, ED volume,
and triage level). LOS of discharged patients at study
ED decreased by 14 minutes, from 4.1 hours to
3.8 hours (p < 0.001), while the control ED saw no significant changes in discharged patient LOS (2.6 hours
to 2.7 hours, p = 0.06). Left without being seen rates did
not decrease at either site.
Conclusion: Opening an OT unit was associated with a
30-minute reduction in average daily ED LOS for admitted patients and discharged patients in the study ED.
Given the large expense of opening an OT, future studies should compare capacity-dependent (e.g., OT) vs.
capacity-independent (e.g., organizational) interventions
to reduce ED crowding.
226
The Epidemiology of Pelvic Inflammatory
Disease in a Pediatric Emergency
Department
Fran Balamuth, Katie Hayes, Cynthia Mollen,
Monika Goyal
Children’s Hospital of Philadelphia, Philadelphia,
PA
Background: Lower abdominal pain and genitourinary
problems are common chief complaints in adolescent
females presenting to emergency departments. Pelvic
inflammatory disease (PID) is a potentially severe complication of lower genital tract infections, which
involves inflammation of the female upper genital tract
secondary to ascending STIs. PID has been associated
with severe sequelae including infertility, ectopic pregnancy, and chronic pelvic pain. We describe the prevalence and microbial patterns of PID in a cohort of
adolescent females presenting to an urban emergency
department with abdominal or genitourinary complaints.
Objectives: To describe the prevalence and microbial
patterns of PID in a cohort of adolescent patients presenting to an ED with lower abdominal or genitourinary complaints.
Methods: This is a secondary analysis of a prospective
study of females ages 14–19 years presenting to a pediatric ED with lower abdominal or genitourinary complaints. Diagnosis of PID was per 2006 CDC guidelines.
Patients underwent Chlamydia trachomatis (CT) and
Neisseria gonorrhea (GC) testing via urine APTIMA
Combo 2 Assay and Trichomonas vaginalis (TV) testing
using the vaginal OSOM Trichomonas rapid test.
Descriptive statistics were performed using STATA
11.0.
Results: The prevalence of PID in this cohort of 328
patients was 19.5% (95% CI 15.2%, 23.8%), 37.5%
(95% CI 25.3%, 49.7%) of whom had positive sexually
transmitted infection (STI) testing: 25% (95% CI 14.1%,
35.9%) with CT, 7.8% (95% CI 1.1, 14.6%) with GC,
and 12.5% (95% CI 4.2%, 20.8%) with TV. 84.4% (95%
CI 75.2, 93.5%) of patients diagnosed with PID
received antibiotics consistent with CDC recommendations. Patients with lower abdominal pain as their
chief complaint were more likely to have PID than
patients with genitourinary complaints (OR 3.3, 95%
CI 1.7, 6.4).
Conclusion: A substantial number of adolescent
females presenting to the emergency department with
lower abdominal pain were diagnosed with PID, with
microbial patterns similar to those previously reported
in largely adult, outpatient samples. Furthermore,
appropriate treatment for PID was observed in the
majority of patients diagnosed with PID.
227
Impact Of Maternal Ultrasound
Implementation In Rural Clinics In Mali
Melody Eckardt1, Roy Ahn1, Raquel Reyes1,
Elizabeth Cafferty1, Kathryn L. Conn1, Alison
Mulcahy2, Jean Crawford1, Thomas F. Burke1
1Department of Emergency Medicine, Division of Global Health and Human Rights, Massachusetts General Hospital, Boston, MA; 2Alameda County Medical Center, Highland Hospital, Oakland, CA
Background: In resource-poor settings, maternal
health care facilities are often underutilized, contributing to high maternal mortality. The effect of ultrasound
in these settings on patients, health care providers, and
communities is poorly understood.
Objectives: The purpose of this study was to assess the
effect of the introduction of maternal ultrasound in a
population not previously exposed to this intervention.
Methods: An NGO-led program trained nurses at four
remote clinics outside Koutiala, Mali, who performed
8,339 maternal ultrasound scans over three years. Our
researchers conducted an independent assessment of
this program, which involved log book review, sonographer skill assessment, referral follow-up, semi-structured interviews of clinic staff and patients, and focus
groups of community members in surrounding villages.
Analyses included the effect of ultrasound on clinic
function, job satisfaction, community utilization of prenatal care and maternity services, alterations in clinical
decision making, sonographer skill, and referral frequency. We used QRS NVivo9 to organize qualitative
findings, code data, and identify emergent themes, and
GraphPad software (La Jolla, CA) and Microsoft Excel
to tabulate quantitative findings.
Results: -Findings that triggered changes in clinical
practice were noted in 10.1% of ultrasounds, with a 3.5%
referral rate to comprehensive maternity care facilities.
-Skill retention and job satisfaction for ultrasound providers was high.
-The number of patients coming for antenatal care
increased, after introduction of ultrasound, in an area
where the birth rate has been decreasing.
-Over time, women traveled from farther distances to
access ultrasound and participate in antenatal care.
-Very high acceptance among staff, patients and community members.
-Ultrasound was perceived as most useful for finding
fetal position, sex, due date, and well-being.
-Improved confidence in diagnosis and treatment plan
for all cohorts.
-Improved compliance with referral recommendations.
-No evidence of gender selection motivation for ultrasound use.
Conclusion: Use of maternal ultrasound in rural and
resource-limited settings draws women to an initial
antenatal care visit, increases referral, and improves
job satisfaction among health care workers.
228
Predicting Return Visits for Patients
Evaluated in the Emergency Department
with Nausea and Vomiting of Pregnancy
Brian Sharp, Kristen Sharp, Suzanne Dooley-Hash
University of Michigan Hospital, Ann Arbor, MI
Background: Nausea and vomiting in pregnancy is a
condition experienced by up to 50% of pregnant
women. Despite adequate ED treatment of their symptoms and frequent utilization of observational emergency medicine, these patients have high rates of
subsequent repeat ED visits.
Objectives: To evaluate what factors are predictive of
return visits to the ED in patients presenting with nausea and vomiting of pregnancy.
Methods: A retrospective database analysis was conducted using the electronic medical record from a single, large academic hospital. ED patients who received
a billing diagnosis of ‘‘nausea and vomiting of pregnancy’’ or ‘‘hyperemesis gravidarum’’ between 1/1/10
and 12/31/10 were selected. A manual chart review was
conducted with demographic and treatment variables
collected. Statistical significance was determined using
multiple regression analysis for a primary outcome of
return visit to the emergency department for nausea
and vomiting of pregnancy.
Results: 113 patients were identified. The mean age
was 27.1 years (SD±5.25), mean gravidity 2.90
(SD±1.94), and mean gestational age 8.78 weeks
(SD±3.21). The average length of ED evaluation was
730 min (SD±513). Of the 113 patients, 38 (33.6%) had a
return ED visit for nausea and vomiting of pregnancy,
17 (15%) were admitted to the hospital, and 49 (43%)
were admitted to the ED observation protocol. Multiple
regression analysis showed that the presence of medical
co-morbidity
(p = 0.039),
patient
gravditity
(p = 0.016), gestational age (p = 0.038), and admission to
the hospital (p = 0.004) had small but significant effects
on the primary outcome (return visits to the emergency
department). No other variables were found to be predictive of return visits to the ED including admission to
the ED observation unit or factors classically thought to
be associated with severe forms of nausea and vomiting
in pregnancy including ketonuria, electrolyte abnormalities, or vital sign abnormalities.
Conclusion: Nausea and vomiting in pregnancy has a
high rate of return ED visits that can be predicted by
young patient age, low patient gravidity, early gestational age, and the presence of other comorbidities.
These patients may benefit from obstetric consultation
and/or optimization of symptom management after discharge in order to prevent recurrent utilization of the
ED.
229
Prevalence of Human Trafficking in Adult
Sexual Assault Victims
David E. Slattery1, Jeff Westin1, Jerri
Dermanelian2, Wesley Forred2, Toshia Shaw2,
Dale Carrison1
1University of Nevada School of Medicine, Las Vegas, NV; 2University Medical Center of Southern Nevada, Las Vegas, NV
Background: Sex trafficking, the major form of human
trafficking (HT), is defined as ‘‘the recruitment, harboring,
transportation, provision, or obtaining of a person for the
purpose of a commercial sex act’’. It is estimated that 2 million people are trafficked internationally and 25,000 within
US borders each year. The majority are women and children. It is important for emergency physicians to recognize the signs of HT; however, there is a paucity of data
regarding how commonly these victims present to the ED.
Objectives: To determine the prevalence of human trafficking (HT) in adult sexual assault (SA) victims in an
urban ED.
Methods: IRB-approved, prospective, observational
study conducted by sexual assault nurse examiners
(SANE) in an urban, academic ED which is the sole
provider of care and forensics for adult SA in our community. Inclusion criteria: Adult, SA victims evaluated
by the SANE nurses. Exclusion criteria: Victims known
to be pregnant, in custody, or psychiatric emergencies.
Using convenience sampling, at the end of the examination, four questions were asked. The data were not
associated with the medical record and there was no
link to any individual. The primary measure was the
prevalence of HT defined as a positive response to the
question ‘‘have you ever exchanged sex for money,
drugs, housing, transportation, clothes, food’’. Secondary measures were the prevalence of ever working in
the adult industry and characteristics of involvement.
Results: During the 15 month study period, 644
patients were seen by a SANE. Of those patients, 296
were screened for HT. For the primary outcome measure, 73 patients (31%; 95%CI 25,38) met the HT criteria. Of those, 64 (22%) admitted to involvement in the
adult industry, and an additional 33 admitted to
exchanging sex for material goods or needs. See table.
Limitations: Convenience sampling, question results not
directly linked to victims’ demographics.
Conclusion: There is a high prevalence of HT in adult
SA victims. Although our study design and data do not
allow us to make any inferences regarding causation,
this first report of HT ED prevalence suggests the
opportunity to clarify this relationship and the potential
opportunity to intervene.
Table - Abstract 229:
Adult Industry Involvement | N (%)
Total | 65 (22)
Dancer | 25 (8.4)
Escort | 5 (1.7)
Massage | 1 (0.34)
Phone Sex | 3 (1)
Prostitution | 41 (13.9)
Pornography | 2 (0.68)
Material Goods for Sex | N (%)
Total | 89 (30)
Money | 68 (22.9)
Drugs | 56 (18.9)
Housing | 21 (7.1)
Transportation | 17 (5.7)
Clothes | 17 (5.7)
Food | 24 (8.1)
230
Should Empiric Treatment of Gonorrhea
and Chlamydia Be Used in the Emergency
Department?
Scarlet Reichenbach, Leon D. Sanchez,
Kathryn A. Volz
BIDMC, Boston, MA
Background: Sexually transmitted infections (STI) are a
significant public health problem. Because of the risks
associated with STIs including PID, ectopic pregnancy,
and infertility the CDC recommends aggressive treatment with antibiotics in any patient with a suspected
STI.
Objectives: To determine the rates of positive gonorrhea and chlamydia (G/C) screening and rates of
empiric antibiotic use among patients of an urban academic ED with >55,000 visits in Boston, MA.
Methods: A retrospective study of all patients who had
G/C cultures in the ED over 12 months. Chi-square was
used in data analysis. Sensitivity and specificity were
also calculated.
Results: A positive rate of 9/712 (1.2%) was seen for
gonorrhea and 26/714 (3.6%) for chlamydia. Females
had positive rates of 2/602 (0.3%) and 17/603 (2.8%)
respectively. Males had higher rates of 7/110 (6.4%)
(p < 0.001) and 9/111 (8.1%) (p = 0.006). 284 patients with G/C cultures sent received an alternative diagnosis, the
most common being UTI (63), ovarian pathology (35),
vaginal bleeding (34), and vaginal candidiasis (33); 4
were excluded. This left 426 without definitive diagnosis. Of these, 24.2% (87/360) of females were treated
empirically with antibiotics for G/C, and a greater percentage of males (66%, 45/66) were treated empirically
(p < 0.001). Of those empirically treated, 109/132
(82.6%) had negative cultures. Meanwhile 9/32 (28.1%)
who ultimately had positive cultures were not treated
with antibiotics during their ED stay. Sensitivity of the provider decision to give empiric antibiotics for predicting the presence of disease was 71.9% (95% CI 53.0–85.6). Specificity was 72.3% (95% CI 67.6–76.6).
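The reported operating characteristics can be reconstructed from the counts above; a minimal check follows, treating culture as the reference standard among the 426 patients without a definitive alternative diagnosis.

```python
# Reconstructing the reported sensitivity and specificity from the
# abstract's counts: 132 patients treated empirically (109 of them
# culture-negative); 32 culture-positive overall (9 of them untreated).
treated, treated_negative = 132, 109
positives, positives_untreated = 32, 9

tp = positives - positives_untreated   # 23 treated and culture-positive
fp = treated_negative                  # 109 treated but culture-negative
negatives = 426 - positives            # 394 culture-negative patients
tn = negatives - fp                    # 285 untreated and culture-negative

print(f"sensitivity: {tp / positives:.1%}")   # ~71.9%
print(f"specificity: {tn / negatives:.1%}")   # ~72.3%
```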
Conclusion: Most patients screened in our ED for G/C
did not have positive cultures and 82.6% of those treated empirically were found not to have G/C. While
early treatment is important to prevent complications,
there are risks associated with antibiotic use such as
allergic reaction, C difficile infection, and development
of antibiotic resistance. Our results suggest that at our
institution we may be over-treating for G/C. Furthermore, despite high rates of treatment, 28% of patients
who ultimately had positive cultures did not receive
antibiotics during their ED stay. Further research into
predictive factors or development of a clinical decision
rule may be useful to help determine which patients
are best treated empirically with antibiotics for
presumed G/C.
231
Impact of Airline Travel on Outcome in
NHL and NFL Players Immediately Post
mTBI: Increased Recovery Times
David Milzman1, Jeremy Altman2,
Matt Milzman2, Carla Tilchin3, Greg Larkin4,
Jordy Sax5
1Georgetown University School of Medicine, Washington, DC; 2Georgetown University, Washington, DC; 3Bates College, Lewiston, ME; 4Yale University School of Medicine, New Haven, CT; 5Johns Hopkins Dept of EM, Baltimore, MD
Background: Air travel may be associated with unmeasured neurophysiological changes in an injured brain that may affect post-concussion recovery. No study has examined the effect of commercial air travel on concussion injuries, despite the plausible effects of decreased oxygen tension and increased dehydration on acute mTBI.
Objectives: To determine if air travel within 4–6 hours
of concussion is associated with increased recovery
time in professional football and hockey players.
Methods: Prospective cohort study of all active-roster National Football League and National Hockey League players during the 2010–2011 seasons. League internet sites were reviewed to identify concussive injuries and to determine when players returned to play, solely for mTBI. Team schedules and flight times were confirmed to include only players who flew immediately following the game (within 4–6 hr). Players with multiple injuries were excluded, as were players injured around the NHL all-star break or a scheduled off week in the NFL.
Results: During the 2010–2011 NFL and NHL seasons, 122 (7.2%) and 101 (13.0%) players experienced a concussion (percent of total players) in the respective leagues. Of these, 68 NFL players (57%) and 39 NHL players (39%) flew within 6 hours of the incident injury. The mean distance flown was shorter for the NFL (850 miles, SD 576) than the NHL (1,060 miles, SD 579), and all flights were in pressurized cabins. The mean number of games missed for NFL and NHL players who traveled by air immediately after concussion was 29% and 24% higher, respectively, than for those who did not travel by air (NFL: 3.8 games, SD 2.2, vs. 2.6 games, SD 1.8; NHL: 16.2 games, SD 22.0, vs. 12.4 games, SD 18.6; p < 0.03).
Conclusion: This is an initial report of prolonged recovery, in terms of more games missed, for professional athletes flying on commercial airlines post-mTBI compared to those who did not subject their recently injured brains to pressurized air flight. The changes of decreased oxygen tension at an altitude equivalent of 7,500 feet, decreased humidity with increased dehydration, and the duress of travel accompanying pressurized airline cabins all likely increase the concussion penumbra in acute mTBI. Early air travel post concussion should be further evaluated and likely postponed 48–72 hours, until initial symptoms subside.
232
Regional Differences in Emergency
Medical Services Use For Patients with
Acute Stroke: Findings from the National
Hospital Ambulatory Medical Care Survey
Emergency Department Data file
Prasanthi Govindarajan, Kristin Kuzma, Ralph
Gonzales, Judith Maselli, S Claiborne
Johnston, Jahan Fahimi, Sharon Poisson, John
Stein
UCSF Medical Center, San Francisco, CA
Background: Previous studies have shown better
in-hospital stroke time targets for those who arrive by
ambulance compared to other modes of transport.
However, regional studies report that less than half of
stroke patients arrive by ambulance.
Objectives: Our objectives were to describe the proportion of stroke patients who arrive by ambulance
nationwide, and to examine regional differences and
factors associated with the mode of transport to the
emergency department (ED).
Methods: This is a cross-sectional study of all patients
with a primary discharge diagnosis of stroke based on
previously validated ICD-9 codes abstracted from the
National Hospital Ambulatory Medical Care Survey for
2007–2009. We excluded subjects <18 years of age and
those with missing data. Study-related survey variables included patient demographics, community characteristics, mode of transport to the hospital, and
hospital characteristics.
Results: 566 patients met inclusion criteria, representing 2,153,234 patient records nationally. Of these, 50.4%
arrived by ambulance. After adjustment for potential
confounders, patients residing in the west and south
had lower odds of arriving by ambulance for stroke
when compared to the northeast (Southern Region, OR 0.45, 95% CI 0.26–0.76; Western Region, OR 0.45, 95% CI 0.25–0.84; Midwest Region, OR 0.56, 95% CI 0.31–1.01). Compared to the Medicare population, privately insured and self-pay patients had lower odds of arriving by ambulance (OR for private insurance 0.48, 95% CI 0.28–0.84; OR for self-pay 0.36, 95% CI 0.14–0.93).
Age, sex, race, urban or rural location of the ED, and safety
net status were not independently associated with
ambulance use.
Conclusion: Patients with stroke arrive by ambulance
more frequently in the northeast than in other regions
of the US. Identifying reasons for this regional difference may be useful in improving ambulance utilization
and overall stroke care nationwide.
233
Effect of Race and Ethnicity on the
Presentation of Stroke Among Adults
Presenting to the Emergency Department
Bradley Li, Sayed Hussain, Hani Judeh,
Kruti Joshi, Michael S. Radeos
New York Hospital Queens, Bayside, NY
Background: Stroke is recognized as a time-urgent disease that requires prompt diagnosis and treatment.
Management differs depending on whether the stroke
is hemorrhagic or ischemic.
Objectives: We sought to determine whether there
was a difference in type of stroke presentation based
upon race. We further sought to determine whether
there is an increase in hemorrhagic strokes among
Asian patients with limited English proficiency.
Methods: We performed a retrospective chart review of all patients age 18 and older diagnosed over 1 year with cerebrovascular accident (CVA) or intracranial hemorrhage (ICH). We collected data on patient demographics and past medical history. We then stratified patients according to
race (white, black, Latino, Asian, and other). We classified strokes as ischemic, intracranial hemorrhage (ICH),
subarachnoid hemorrhage (SAH), subdural hemorrhage
(SDH), and other (e.g., bleeding into metastatic
lesions). We used only the index visit. We present the data as percentages, medians, and interquartile ranges
(IQR). We tested the association of the outcome of
intracranial hemorrhage against demographic and clinical variables using chi-square and Kruskal-Wallis tests.
We performed a logistic regression model to determine
factors related to presentation with an intracranial
hemorrhage (ICH).
Results: A total of 457 patients presented between 7/1/
09 and 6/30/10. The median age was 74 years (IQR 60 to
83). 251 (55%) were female; 194 (42%) were white, 58
(12%) black, 76 (17%) Latino, 111 (24%) Asian/Pacific
Islander (of these 14% were Chinese and 6% were Korean, 3% Indian, and 2% other Asian/Pacific Islander),
2% missing, and 2% other race. 94 patients (21%) had a
primary language other than English. Of all strokes,
353 (77%) were ischemic, 69 (15%) were ICH, 17 (4%)
were SAH, 14 (3%) were SDH, and 4 (1%) were other.
There was no association between the presentation of
ICH and race, either in the univariate analysis (OR 1.31,
95% CI 0.80 to 2.15) or in a model adjusted for age, sex,
non-English speaking status and comorbidities (OR
1.29, 95% CI 0.49 to 3.40).
Conclusion: Asian patients, when compared to non-Asian patients, had no detectable difference in the rate of ICH on ED presentation. Further research in this area should continue to focus on primary prevention applied equally across all races while searching for risk factors that may be present in certain races.
234
Non-febrile Seizures In The Pediatric
Emergency Department: To Draw Labs And
CT Scan Or Not?
Vikramjit S. Gill, Ashley Strobel
University of Maryland, Baltimore, MD
Background: The practice of obtaining laboratory
studies and routine CT scan of the brain on every child
with a seizure has been called into question in the
patient who is alert, interactive, and back to functional
baseline. There is still no standard practice for the management of non-febrile seizure patients in the pediatric
emergency department (PED).
Objectives: We sought to determine the proportion of clinically significant laboratory studies and CT scans of the brain obtained in children who presented to the PED with a first or recurrent
non-febrile seizure. We hypothesize that the majority of
these children do not have clinically significant laboratory or imaging studies. If clinically significant values
were found, the history given would warrant further
laboratory and imaging assessment despite seizure
alone.
Methods: We performed a retrospective chart review
of 93 patients with first-time or recurrent non-febrile
seizures at an urban, academic PED between July 2007
to June 2011. Exclusion criteria included children who
presented to the PED with a fever and age less than
2 months. We looked at specific values that included a
complete blood count, basic metabolic panel, and liver
function tests, and if the child was on antiepileptics
along with a level for a known seizure disorder, and CT
scan. Abnormal laboratory and CT scan findings were
classified as clinically significant or not.
Results: The median age of our study population was 4 years, with a male-to-female ratio of 1.7. 70% of patients
had a generalized tonic-clonic seizure. Laboratory studies and CT scans were obtained in 87% and 35% of
patients, respectively. Five patients had clinically significant abnormal labs; however, one had ESRD, one
developed urosepsis, one had eclampsia, and two others had hyponatremia, which was secondary to diluted formula and Trileptal toxicity. Three children had an
abnormal head CT: two had a VP shunt and one had a
chromosomal abnormality with developmental delay.
Conclusion: The majority of the children analyzed did
not have clinically significant laboratory or imaging
studies in the setting of a first or recurrent non-febrile
seizure. Of those with clinically significant results, the
patient’s history suggested a possible etiology for their
seizure presentation and further workup was indicated.
235
Thrombolytics in Acute Ischemic Stroke Assessment and Decision-Making: EM vs. Neurology
Margaret Vo1, Yashwant Chathampally2, Raja Malkani3, David Robinson2, Emily Pavlik4
1Paul L. Foster School of Medicine, El Paso, TX; 2University of Texas Health Science Center at Houston, Houston, TX; 3University of Texas School of Public Health - Austin Regional Campus, Austin, TX; 4Emergency Physicians Affiliate, San Antonio, TX
Background: The amount of time elapsed after an ischemic stroke may increase the extent of irreversible neuronal loss, making the avoidance of delays in reperfusion pertinent to reducing mortality and morbidity and improving quality of life.
Objectives: The purpose of this study was to compare the emergency physician's and stroke team's assessments and independent decisions to administer or withhold thrombolytic therapy in patients presenting to the emergency department (ED) with stroke-like symptoms.
Methods: Stroke assessments and independent treatment decisions by EM and neurology providers at a tertiary care, university hospital were collected for a convenience sample of 77 patients presenting to the ED with stroke-like symptoms from fall 2006 through summer 2010. The product-moment correlation coefficient was used to estimate the strength of correlation between providers' NIHSS scores and between decisions to give or withhold rtPA. A Bland-Altman analysis was used to determine the level of agreement (LOA) between NIHSS scores, and frequency analyses were performed on emergency physicians' rationale for withholding rtPA.
Results: Correlation coefficients of NIHSS scores between providers were 0.914 for 2006–2007, 0.869 for 2010, and 0.907 for both time periods combined (P < 0.0001). Bland-Altman analysis demonstrated a slight, but statistically insignificant, increase in the difference of emergency department scores over stroke team scores as the average NIHSS scores increased. The overall point estimate of the LOA for NIHSS scores showed strong agreement (1.096, 95% CI -4.87 to 6.77), but the 95% limits of the LOA for NIHSS scores were ±5.82. For the decision to treat, the correlation coefficients were 0.282 for 2006–2007, 0.204 for 2010, and 0.250 for both time periods combined, and kappa coefficients were 0.245 for 2006–2007, 0.186 for 2010, and 0.226 for all years. Frequency analysis showed that the most frequent reason (29%) for EM physicians' decision to withhold rtPA was onset of the patient's symptoms >3 hours prior to the decision to give rtPA.
Conclusion: EM and neurology decisions to give or withhold rtPA cannot serve as substitutes for each other. Despite very similar initial neurologic assessments, EM physicians and the stroke team may weigh the risks of harm and benefit differently in making the decision to administer thrombolytics.
236
Lumbar Puncture or CT Angiography
Following a Negative CT Scan for
Suspected Subarachnoid Hemorrhage:
A Decision Analysis?
Foster R. Goss, John B. Wong
Tufts Medical Center, Boston, MA
Background: In patients with a negative CT scan for
suspected subarachnoid hemorrhage (SAH), CT angiography (CTA) has emerged as a controversial alternative diagnostic strategy in place of lumbar puncture
(LP).
Objectives: To determine the diagnostic accuracy for
SAH and aneurysm of LP alone, CTA alone, and LP followed by CTA if the LP is positive.
Methods: We developed a decision and Bayesian analysis to evaluate 1) LP, 2) CTA, and 3) LP followed by
CTA if the LP is positive. Data were obtained from the
literature. The model considers probability of SAH
(15%), aneurysm (85% if SAH), sensitivity and specificity of CT (92.9% and 100% overall), of LP (based on RBC and xanthochromia), and of CTA, as well as traumatic tap and its influence on SAH detection. Analyses considered all patients and those presenting at less than
6 hours or greater than 6 hours from symptom onset
by varying the sensitivity and specificity of CT and
CTA.
Results: Using the reported ranges of CT scan sensitivity and the specificity, the revised likelihood of SAH
following a negative CT ranged from 0.5–3.7%, and
the likelihood of aneurysm ranged from 2.3–5.4%. Following any of the diagnostic strategies, the likelihood
of missing SAH ranged from 0–0.7%. Either LP strategy diagnosed 99.8% of SAHs versus 83–84% with
CTA alone because CTA only detected SAH in the
presence of an aneurysm. False positive SAH with LP
ranged from 8.5–8.8% due to traumatic taps and with
CTA ranged from 0.2–6.0% due to aneurysms without
SAH. The positive predictive value for SAH ranged
from 5.7–30% with LP and from 7.9–63% with CTA.
For patients presenting within 6 hours of symptom
onset, the revised likelihood of SAH following a negative CT became 0.53%, and the likelihood of aneurysm
ranged from 2.3–2.7%. Following any of the diagnostic strategies, the likelihood of missing SAH ranged
from 0.01–0.095%. Either LP strategy diagnosed 99.8%
of SAH versus 83–84% with CTA alone. False positive
SAH with LP was 8.8% and with CTA ranged from
0.2–5.1%. The positive predictive value for SAH was
5.7% with LP and from 7.9–63% with CTA. CTA
following a positive LP diagnosed 8.5–24% of
aneurysms.
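As an illustration of the Bayesian updating step behind these revised likelihoods, a minimal sketch follows using the point estimates quoted in the Methods (prior 15%, CT sensitivity 92.9%, specificity 100%); the 0.5–3.7% range in the Results comes from varying those inputs across their literature ranges.

```python
# Post-test probability of SAH after a negative CT, from the Methods'
# point estimates. Varying sensitivity/specificity reproduces the
# reported 0.5-3.7% range.
prior, sens, spec = 0.15, 0.929, 1.00

p_neg_given_sah = 1 - sens                              # false-negative CT
p_neg = prior * p_neg_given_sah + (1 - prior) * spec    # P(negative CT)
posterior = prior * p_neg_given_sah / p_neg
print(f"P(SAH | negative CT) = {posterior:.1%}")        # ~1.2%
```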
Conclusion: LP strategies are more sensitive for detecting SAH but less specific than CTA because of traumatic taps, leading to lower positive predictive values for SAH with LP than with CTA. Either diagnostic
strategy results in a low likelihood of missing SAH, particularly within 6 hours of symptom onset.
237
Abnormalities on CT Perfusion in Patients
Presenting with Transient Ischemic Attack
(TIA)
Sharon N. Poisson1, Jane J. Kim2, Prasanthi
Govindarajan1, S. Claiborne Johnston1,
Mai N. Nguyen-Huynh3
1University of California San Francisco, San Francisco, CA; 2Kaiser Permanente Medical Care Plan, San Francisco, CA; 3Kaiser Permanente Division of Research, Oakland, CA
Background: Recent studies support perfusion imaging
as a prognostic tool in ischemic stroke, but little data
exist regarding its utility in transient ischemic attack
(TIA). CT perfusion (CTP), which is more available and
less costly to perform than MRI, has not been well
studied.
Objectives: To characterize CTP findings in TIA
patients, and identify imaging predictors of outcome.
Methods: This retrospective cohort study evaluated
TIA patients at a single ED over 15 months, who had
CTP at initial evaluation. A neurologist blinded to CTP
findings collected demographic and clinical data. CTP
images were analyzed by a neuroradiologist blinded to
clinical information. CTP maps were described as qualitatively normal, increased, or decreased in mean transit
time (MTT), cerebral blood volume (CBV), and cerebral
blood flow (CBF). Quantitative analysis involved measurements of average MTT (seconds), CBV (cc/100 g)
and CBF (cc/[100g x min]) in standardized regions of
interest within each vascular distribution. These were
compared with values in the other hemisphere for
relative measures of MTT difference, CBV ratio (rCBV), and CBF ratio (rCBF). An MTT difference ≥2 seconds, rCBV ≤0.60, and rCBF ≤0.48 were defined as abnormal
based on prior studies. Clinical outcomes including
stroke, TIA, or hospitalization during follow-up were
determined up to one year following the index event.
Dichotomous variables were compared using Fisher’s
exact test. Logistic regression was used to evaluate the
association of CTP abnormalities with outcome in TIA
patients.
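For concreteness, a hypothetical encoding of the quantitative abnormality thresholds defined above is sketched below; treating the three criteria as any-of is an assumption, and the field names are illustrative.

```python
# Hypothetical encoding of the quantitative CTP abnormality definition:
# MTT difference >=2 s, rCBV <=0.60, or rCBF <=0.48 versus the other
# hemisphere. The any-of reading and field names are assumptions.
def ctp_abnormal(mtt_diff_s: float, rcbv: float, rcbf: float) -> bool:
    return mtt_diff_s >= 2.0 or rcbv <= 0.60 or rcbf <= 0.48

print(ctp_abnormal(mtt_diff_s=2.4, rcbv=0.85, rcbf=0.70))  # True (prolonged MTT)
print(ctp_abnormal(mtt_diff_s=0.5, rcbv=0.90, rcbf=0.75))  # False
```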
Results: Of 99 patients with validated TIA, 53 had CTP
done. Mean age was 72 ± 12 years, 55% were women,
and 64% were Caucasian. Mean ABCD2 score was
4.7 ± 2.1, and 69% had an ABCD2 ≥ 4. Prolonged MTT
was the most common abnormality (19, 36%), and 5
(9.4%) had decreased CBV in the same distribution. On
quantitative analysis, 23 (43%) had a significant abnormality. Four patients (7.5%) had prolonged MTT and
decreased CBV in the same territory, while 17 (32%)
had mismatched abnormalities. When tested in a multivariate model, no significant associations between mismatch abnormalities on CTP and new stroke, TIA, or
hospitalizations were observed.
Conclusion: CTP abnormalities are common in TIA
patients. Although no association between these abnormalities and clinical outcomes was observed in this
small study, this needs to be studied further.
238
Withdrawn
239
Economic Benefit of an Educational
Intervention to Improve tPA Use as
Treatment For Acute Ischemic Stroke in
Community Hospitals: Secondary Analysis
of the INSTINCT Trial
Cemal B. Sozener, David W. Hutton,
William J. Meurer, Shirley M. Frederiksen,
Allison M. Kade, Phillip A. Scott
University of Michigan, Ann Arbor, MI
Background: Prior work demonstrates substantial economic benefit from tPA use in acute ischemic stroke
(AIS).
Objectives: We hypothesized a T2 knowledge translation (KT) program to increase community tPA treatment
in AIS would be cost-effective beyond research funds
spent.
Methods: Data were utilized from the INcreasing
Stroke Treatment through INterventional behavior
Change Tactics (INSTINCT) trial, a prospective, cluster
randomized, controlled trial involving 24 community
hospitals in matched pairs. Within pairs, hospitals were
randomly assigned to receive a barrier assessment-interactive educational intervention (BA-IEI) vs. control.
Cost analyses were conducted from a societal perspective for two cases: 1) using total trial costs, and 2) using
intervention costs alone (no research overhead) as an
estimate of the cost of generalization of the results.
Trial costs are defined as total INSTINCT funding combined with opportunity costs of health professionals
attending study events. Savings attributable to
increased tPA use were determined by applying published stroke economic data, adjusted for inflation, to
the study cohorts. These data were integrated in a Markov model to determine the long-term economic effect
of the INSTINCT BA-IEI versus control.
Results: The INSTINCT trial cost US$3.3 million. Intervention sites treated 2.30% (244/10,627) of patients with
tPA compared to 1.59% (160/10,071) at control sites (per
protocol analysis). This increase in tPA use resulted in
direct savings of approximately $540,000 due to reduced
length of hospital and nursing facility stay. Increased
tPA usage resulted in an estimated additional 81 quality-adjusted life years (QALYs), with an incremental cost-effectiveness ratio of $34,000/QALY. Using $50,000 as a
conservative estimate of societal value per QALY, an
additional benefit of $4,100,000, or net societal economic
benefit of $1.3 million, was realized. Generalizing the
intervention in a similar population (excluding research
overhead) would cost an estimated $680,000 and provide
a net benefit of $3.9 million, assuming similar effectiveness and treatment outcomes.
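The headline economics can be reproduced from the figures above; a minimal worked check follows, assuming the reported totals are exact rather than rounded.

```python
# Reproducing the reported cost-effectiveness figures.
trial_cost, direct_savings, qalys = 3.3e6, 0.54e6, 81
value_per_qaly = 50_000
intervention_only_cost = 0.68e6  # generalization estimate, no research overhead

icer = (trial_cost - direct_savings) / qalys
net_benefit = qalys * value_per_qaly + direct_savings - trial_cost
generalized = qalys * value_per_qaly + direct_savings - intervention_only_cost

print(f"ICER: ${icer:,.0f}/QALY")                       # ~$34,000/QALY
print(f"net societal benefit: ${net_benefit:,.0f}")     # ~$1.3 million
print(f"generalized net benefit: ${generalized:,.0f}")  # ~$3.9 million
```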
Conclusion: Due to the underlying cost-effectiveness of
tPA, community KT efforts with modest absolute gains
in tPA usage produce substantial societal economic
returns and are considered good value when compared
to spending on other health interventions.
240
Anti-hypertensive Treatment Prolongs tPA
Door-to-treatment Time: Secondary
Analysis Of The Increasing Stroke
Treatment Through Interventional
Behavior Change Tactics (INSTINCT) Trial
Lesli E. Skolarus, Phillip A. Scott,
James F. Burke, Eric E. Adelman,
Shirley M. Frederiksen, Allison M. Kade, Jack
D. Kalbfleisch, William J. Meurer
University of Michigan, Ann Arbor, MI
Background: Increased time to tPA treatment is
associated with worse outcomes. Thus, identifying
modifiable treatment delays may improve stroke
outcomes.
Objectives: We hypothesized that pre-thrombolytic
anti-hypertensive treatment (AHT) may prolong door to
treatment time (DTT).
Methods: Secondary data analysis of consecutive tPA-treated patients at 24 randomly selected Michigan
community hospitals in the INSTINCT trial. DTT
among stroke patients who received pre-thrombolytic
AHT were compared to those who did not receive
pre-thrombolytic AHT. We then calculated a propensity score for the probability of receiving pre-thrombolytic AHT using a logistic regression model with
covariates including demographics, stroke risk factors,
antiplatelet or beta blocker as home medication,
stroke severity (NIHSS), onset to door time, admission
glucose, pretreatment systolic and diastolic blood
pressure, EMS usage, and location at time of stroke.
A paired t-test was then performed to compare the
DTT between the propensity-matched groups. A separate generalized estimating equations (GEE) approach
was also used to estimate the differences between
patients receiving pre-thrombolytic AHT and those
who did not while accounting for within-hospital
clustering.
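A minimal sketch of the propensity-matching workflow described above follows; the covariates and data are simulated stand-ins, and the 1:1 nearest-neighbor matching (with replacement) is one of several reasonable implementations, not necessarily the authors' exact procedure.

```python
# Illustrative propensity-score match and paired comparison of DTT.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 534
X = rng.normal(size=(n, 5))                   # stand-in covariates
aht = rng.random(n) < 0.18                    # pre-thrombolytic AHT flag
dtt = 60 + 10 * aht + rng.normal(0, 15, n)    # door-to-treatment time, min

# 1) Propensity score: P(AHT | covariates) from logistic regression.
ps = LogisticRegression(max_iter=1000).fit(X, aht).predict_proba(X)[:, 1]

# 2) Match each AHT patient to the nearest control on the score.
t_idx, c_idx = np.where(aht)[0], np.where(~aht)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
_, pos = nn.kneighbors(ps[t_idx].reshape(-1, 1))
matched = c_idx[pos.ravel()]

# 3) Paired t-test of DTT within matched pairs.
t_stat, p_val = stats.ttest_rel(dtt[t_idx], dtt[matched])
diff = np.mean(dtt[t_idx] - dtt[matched])
print(f"mean DTT difference: {diff:.1f} min (p = {p_val:.3f})")
```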
Results: A total of 557 patients were included in
INSTINCT; however, onset, arrival, or treatment times
were not able to be determined in 23, leaving 534
patients for this analysis. The unmatched cohort consisted of 95 stroke patients who received pre-thrombolytic AHT and 439 stroke patients who did not receive
AHT from 2007–2010 (table). In the unmatched cohort,
patients who received pre-thrombolytic AHT had a
longer DTT (mean increase 9 minutes; 95% confidence
interval (CI) 2–16 minutes) than patients who did not
receive pre-thrombolytic AHT. After propensity matching (table), patients who received pre-thrombolytic AHT
had a longer DTT (mean increase 10.4 minutes, 95% CI
1.9–18.8) than patients who did not receive pre-thrombolytic AHT. This effect persisted and its magnitude
was not altered by accounting for clustering within
hospitals.
Conclusion: Pre-thrombolytic AHT is associated with
modest delays in DTT. This represents a feasible target
for physician educational interventions and quality
improvement initiatives. Further research evaluating optimal pre-thrombolytic hypertension management is warranted.
241
Protocol Deviations During and After IV
Thrombolysis in Community Hospitals
Eric E. Adelman, William Meurer,
Lesli E. Skolarus, Allison M. Kade,
Shirley M. Frederiksen, Jack D. Kalbfleisch,
Phillip A. Scott
University of Michigan, Ann Arbor, MI
Background: Protocol deviations (PDs) before and
immediately after IV thrombolysis for acute ischemic
stroke are common. Patient and hospital factors associated with PDs are not well described.
Objectives: We aimed to determine which patient or
hospital factors were associated with pre- and post-treatment PDs in a cohort of community-treated thrombolysis patients.
Methods: The INSTINCT (Increasing Stroke Treatment
through Interventional Behavior Change Tactics) study
was a multicenter, cluster-randomized controlled trial
in 24 Michigan community hospitals evaluating the efficacy of a barrier assessment and educational intervention to increase appropriate tPA use. PDs were defined
based on 2007 AHA guidelines with the addition of the
3–4.5 hour treatment window, for which the ECASS III
criteria were applied. PDs were categorized as pretreatment (Pre-PDs), post-treatment (Post-PDs), or both.
Multi-level logistic regression models were fitted to
determine whether patient and hospital variables were
associated with Pre-PDs or Post-PDs. The models
included all variables specified a priori to be potentially
clinically relevant; Pre-PD was included as a covariate
in the model for Post-PD.
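The abstract fits multi-level logistic models; as one concrete way to account for within-hospital clustering, the hedged sketch below uses a GEE logistic model instead (a related but distinct approach), with simulated, illustrative data and variable names.

```python
# Illustrative cluster-aware logistic model (GEE with exchangeable
# correlation), standing in for the multi-level models described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 557
df = pd.DataFrame({
    "pre_pd": (rng.random(n) < 0.16).astype(int),  # pre-treatment deviation
    "age": rng.normal(70, 12, n),
    "nihss": rng.integers(1, 30, n),
    "hospital": rng.integers(0, 24, n),            # 24 community hospitals
})

model = smf.gee("pre_pd ~ age + nihss", groups="hospital", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```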
Results: During the study, 557 patients (mean age 70;
52% male; median NIHSS 12) were treated with IV tPA.
PDs occurred in 233 (42%) patients: 26% had only
Post-PDs, 7% had only Pre-PDs, and 9% had both. The
most common PDs included failure to treat post-treatment hypertension (131, 24%), antiplatelet agent within
24 hours of treatment (61, 11%), pre-treatment blood
pressure over 185/110 (39, 7%), anticoagulant agent
within 24 hours of treatment (31, 6%), and treatment
outside the time window (29, 5%). Symptomatic intracranial hemorrhage (SICH) was observed in 7.3% of
patients with PDs and 6.5% of patients without any PD.
In-hospital case fatality was 12% with and 10% without
a PD. In the fully adjusted model, older age was significantly associated with Pre-PDs (Table). When Post-PDs
were evaluated with adjustment for Pre-PDs, age was
not associated with PDs; however, Pre-PDs were
associated with Post-PDs.
Conclusion: Older age was associated with increased
odds of Pre-PDs in Michigan community hospitals. Pre-PDs were associated with Post-PDs. SICH and in-hospital case fatality were not associated with PDs; however,
the low number of such events limited our ability to
detect a difference.
242
CT Is Sensitive For The Detection Of
Advanced Leukoaraiosis In Patients With
Transient Ischemic Attack
Matthew S. Siket1, Ross C. Avery1,
Eitan Auriel1, Johanna A. Helenius1,
Gyeong Moon Kim1, J. Alfredo Caceres1,
Octavio Pontes-Neto1, Hakan Ay2
1Massachusetts General Hospital, Boston, MA; 2Massachusetts General Hospital, AA Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA
Background: MRI has become the gold standard for
the detection of cerebral ischemia and is a component
of multiple imaging enhanced clinical risk prediction
rules for the short-term risk of stroke in patients with
transient ischemic attack (TIA). However, it is not
always available in the emergency department (ED) and
is often contraindicated. Leukoaraiosis (LA) is a radiographic term for white matter ischemic changes, and
has recently been shown to be independently predictive
of disabling stroke. Although it is easily detected by
both CT and MRI, their comparative ability is unknown.
Objectives: We sought to determine whether leukoaraiosis, when combined with evidence of acute or old
infarction as detected by CT, achieved similar sensitivity
to MRI in patients presenting to the ED with TIA.
Methods: We conducted a retrospective review of consecutive patients diagnosed with TIA between June
2009 and July 2011 that underwent both CT and MRI as
part of routine care within 1 calendar day of presentation to a single, academic ED. CT and MR images were
reviewed by a single emergency physician who was
blinded to the MR images at the time of CT interpretation. LA was graded using the van Swieten scale (VSS),
a validated grading scale applicable to both CT and
MRI. Anterior and posterior regions were graded independently from 0 to 2.
Results: 361 patients were diagnosed with TIA during
the study period. Of these, 194 had both CT and MRI
performed within 1 day of presentation. Images from
both modalities were available for review in 172.
Abnormalities defined as acute infarction, old infarction, or LA were present in 133 (77.3%) by CT compared to 161 (93.6%) by MRI. LA was detected in 115
(66.9%) by CT compared to 145 (84.3%) by MRI
(P < 0.0002). CT achieved sensitivity and specificity of
75.9% and 81.5% respectively in the overall detection of
LA. Positive and negative predictive values were 95.6%
and 38.6% respectively. Advanced LA (VSS 2) was similar in both modalities, with 68 (39.5%) by CT and 63
(36.6%) by MRI (p = 0.577).
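The reported test characteristics imply a 2x2 table that can be rebuilt from the counts above; a minimal check follows, with the true-positive count inferred from the stated PPV (an assumption).

```python
# Back-of-the-envelope 2x2 for CT detection of leukoaraiosis, treating
# MRI as the reference standard (172 patients with both studies).
total, mri_pos, ct_pos = 172, 145, 115
tp = 110                    # inferred from PPV 95.6% of 115 CT-positives
fp = ct_pos - tp            # 5
fn = mri_pos - tp           # 35
tn = total - tp - fp - fn   # 22

print(f"sensitivity: {tp / (tp + fn):.1%}")  # ~75.9%
print(f"specificity: {tn / (tn + fp):.1%}")  # ~81.5%
print(f"PPV: {tp / (tp + fp):.1%}, NPV: {tn / (tn + fn):.1%}")  # ~95.7%, ~38.6%
```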
Conclusion: MRI is significantly more sensitive than CT in detecting cerebral ischemia in patients with TIA. However, CT is sufficiently sensitive in detecting advanced LA, which may serve to strengthen CT-based stroke risk prediction strategies for TIA patients in the ED.
243
Compassion Fatigue: Emotional Exhaustion After Burnout
M. Fernanda Bellolio, Daniel Cabrera, Annie T. Sadosty, Erik P. Hess, Kharmene L. Sunga
Mayo Clinic, Rochester, MN
Background: Helping others is often a rewarding experience but can also come with a ‘‘cost of caring,’’ also known as compassion fatigue (CF). CF can be defined as the emotional and physical toll suffered by those helping others in distress. It is affected by three major components: compassion satisfaction (CS), burnout (BO), and traumatic experiences (TE). Previous literature has recognized an increase in BO related to work hours and stress among resident physicians.
Objectives: To assess the state of CF among residents with regard to differences in specialty training, hours worked, number of overnights, and demands of child care. We aimed to measure associations with the three components of CF (CS, BO, and TE).
Methods: We used the previously validated ProQOL 5 survey. The survey was sent to the residents after approval from the IRB and the program directors.
Results: A total of 193 responses were received (40% of the 478 surveyed). Five were excluded due to incomplete questionnaires. We found that residents who worked more hours per week had significantly higher BO levels (median 25 vs 21, p = 0.038) and higher TE (22 vs 19, p = 0.048) than those working fewer hours. There was no difference in CS (42 vs 40, p = 0.73). Eighteen percent of the residents worked a majority of the night shifts. These residents had higher levels of BO (median 23 vs 21, p = 0.042) but similar CS (39 vs 40, p = 0.16) and TE (19 vs 19, p = 0.38) compared to those working mostly day shifts. Residents with children had similar levels of BO (21 vs 22, p = 0.47) and CS (41 vs 40, p = 0.26), but higher TE (21 vs 19, p = 0.007) than those without children. Emergency medicine residents had a nonsignificant trend towards higher TE, but similar BO and CS, compared to those in primarily surgical or medical specialties.
Conclusion: Compassion satisfaction was similar in all groups. Burnout was associated with working more hours per week and working predominantly nights. Traumatic experiences appear to be more pronounced in those working in the emergency medicine specialty, those working more hours per week, and those with children.
244
Effective Methods To Improve Emergency Department Documentation In A Teaching Hospital To Enhance Education, Billing, And Medical Liability
Anurag Gupta, Joseph Habboushe
Beth Israel Medical Center, New York, NY
Background: Emergency department (ED) billing
includes both facility and professional fees. An algorithm derived from the medical provider’s chart generates the latter fee. Many private hospitals encourage
appropriate documentation by financially incentivizing
providers. Academic hospitals sometimes lag in this initiative, possibly resulting in less than optimal charting.
Past attempts to teach proper documentation using our
electronic medical record (EMR) were difficult in our
urban, academic ED of 80 providers (approximately 25
attending physicians, 36 residents, and 20 physician
assistants).
Objectives: We created a tutorial to teach documentation of ED charts, modified the EMR to encourage
appropriate documentation, and provided feedback
from the coding department. This was combined with
an incentive structure shared equally amongst all attendings based on increased collections. We hypothesized
this instructional intervention would lead to more
appropriate billing, improve chart content, decrease
medical liability, and increase the educational value of the charting process.
Methods: Documentation recommendations, divided
into two-month phases of 2–3 proposals, were administered to all ED providers by e-mails, lectures, and
reminders during sign-out rounds. Charts were
reviewed by coders who provided individual feedback
if specific phase recommendations were not followed.
Our endpoints included change in total RVU, RVUs/
patient, E/M level distribution, and subjective quality of
chart improvement. We did not examine effects on
procedure codes or facility fees.
Results: Our base average RVU/patient in our ED from
1/1/11–6/30/11 was 2.615 with monthly variability of
approximately 2%. Implementation of phase one
increased average RVU/patient within two weeks to
2.73 (4.4% increase from baseline, p < 0.05). The second
aggregate phase implemented 8 weeks later increased
average RVU/patient to 3.04 (16.4% increase from baseline, p < 0.05).
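A quick arithmetic check of the reported effect sizes (a minimal sketch; the small discrepancy in the second figure presumably reflects rounding of the underlying values):

```python
# Percentage increases implied by the reported RVU-per-patient averages.
base, phase_one, phase_two = 2.615, 2.73, 3.04
print(f"phase one: {phase_one / base - 1:.1%}")  # ~4.4%
print(f"phase two: {phase_two / base - 1:.1%}")  # ~16.3% (reported as 16.4%)
```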
Conclusion: Using our teaching methods, chart
reviews focused on 2–3 recommendations at a time,
and EMR adjustments, we were able to better reflect
the complexity of care that we deliver every day in our
medical charts. Future phases will focus on appropriate
documentation for procedures, critical care, fast track,
and pediatric patients, as well as examining correlations
between increase in RVUs with charge capture.
245
Identifying Mentoring ‘‘Best Practices’’ for Medical School Faculty
Julie L. Welch, Teresita Bellido, Cherri D. Hobgood
Indiana University, Indianapolis, IN
Background: Mentoring has been identified as an essential component of career success and satisfaction in academic medicine. Many institutions and departments struggle with providing both basic and transformative mentoring for their faculty.
Objectives: We sought to identify and understand the essential practices of successful mentoring programs.
Methods: Multidisciplinary institutional stakeholders in the school of medicine, including tenured professors, deans, and faculty acknowledged as successful mentors, were identified and participated in focused interviews between March and November 2011. The major area of inquiry involved their experiences with mentoring relationships, practices, and structure within the school, department, or division. Focused interview data were transcribed and grounded theory analysis was performed. Additional data collected by a 2009 institutional mentoring taskforce were examined. Key elements and themes were identified and organized for final review.
Results: Results identified mentoring practices in three categories: 1) general themes for all faculty, 2) specific practices for faculty groups (basic science researchers, clinician researchers, and clinician educators), and 3) national examples. Additional mentoring strategies that failed were identified. The general themes were quite universal among faculty groups. These included: clarify the best type of mentoring for the mentee, allow the mentee to choose the mentor, establish a panel of mentors with complementary skills, schedule regular meetings, establish a clear mentoring plan with expectations and goals, offer training and resources for both the mentor and mentee at institutional and departmental levels, ensure ongoing mentoring evaluation, and create a mechanism to identify and reward mentoring. National practice examples offered critical recommendations to address multi-generational attitudes and faculty diversity in terms of gender, race, and culture.
Conclusion: Mentoring strategies can be identified to serve a diverse faculty in academic medicine. Interventions to improve mentoring practices should be targeted at the level of the institution, the department, and individual faculty members. It is imperative to adopt results such as these to design effective mentoring programs that enhance the success of emergency medicine faculty seeking robust academic careers.
246
A Pilot Study to Survey Academic Emergency Medicine Department Chairs on Hiring New Attendings
Ryan D. Aycock, Moshe Weizberg, Brahim Ardolic
Staten Island University Hospital, Staten Island, NY
Background: For graduating residents applying for academic emergency medicine positions, there is a paucity of information on what attributes emergency department (ED) chairs are seeking.
Objectives: To determine which qualities academic ED chairs are looking for when hiring a new physician directly out of residency or fellowship.
Methods: An anonymous 15-item web-based survey was sent to the department chairs of all accredited civilian emergency medicine residency programs in March and April of 2011. Respondents were asked to rate different parts of the candidate's application using a five-point Likert scale, rank important attributes, and list any desirable fellowships. They were also asked to give the current number of available job openings.
Results: 84 of 152 eligible chairs responded, giving a response rate of 55%. The most important parts of a candidate's application were the interview (4.79 ± 0.41), another employee's recommendation (4.70 ± 0.52), and the program director's recommendation (4.54 ± 0.67). Less weight was given to the reputation of the residency program attended (3.80 ± 0.71) and to attending a 3- or 4-year program (2.78 ± 1.46). The single most important attribute possessed by a candidate was identified as ‘‘Ability to work in a team.’’ Other common responses included ‘‘Clinical productivity,’’ ‘‘Teaching potential,’’ and ‘‘Substantial publications.’’ Advanced training in ultrasound was listed as the most sought-after fellowship by 55% of the chairs. Other fellowships receiving >30% of affirmative votes included critical care, pediatrics, and toxicology. Overall, department chairs did not have a difficult time recruiting EM-trained physicians (2.5 ± 1.3 on a five-point scale), with 56% of respondents stating that they had no current job openings.
Conclusion: This study is the first attempt to examine what academic ED chairs are looking for when they hire new attendings. How a physician relates to others was consistently rated and ranked as the most important part of the candidate's application, with the interview and recommendations taking the lead. As they are tasked with training the next generation of physicians, academic programs are also concerned with teaching and research ability. Overall, finding a job in academic emergency medicine is difficult, with graduates having limited job prospects.
247
Reliability of the Revised Professional
Practice Environment Scale when Used
with Emergency Physicians
Tania D. Strout, Michael R. Baumann,
Julie E.O. Pelletier, Katie W.D. Dolbec
Maine Medical Center, Portland, ME
Background: The Revised Professional Practice Environment Scale (RPPE) is a 39-item multidimensional
measure designed to evaluate eight components of professional clinical practice in the acute care setting.
Developed for and used primarily with registered
nurses, the RPPE is a valuable tool for evaluating clinicians' perceptions of the practice environment and the efficacy of changes in practice, for strategic planning, and
for developing supports for the IOM’s six dimensions
of quality.
Objectives: We sought to evaluate the reliability and
validity of the RPPE when used with a sample of emergency physicians.
Methods: A psychometric evaluation of the RPPE was
undertaken with a sample of emergency physicians.
Participants completed the 39-item instrument in an
anonymous fashion using a pencil and paper format.
Analyses included evaluations of: a) internal consistency
reliability using Cronbach’s alpha coefficient and item
analysis, b) principal components analysis (PCA) with
Varimax rotation and Kaiser normalization, and c) internal consistency reliability of the components identified
through PCA.
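For readers unfamiliar with the internal-consistency statistic used here, a minimal sketch of Cronbach's alpha follows; the response matrix is simulated, not the study data.

```python
# Minimal sketch of Cronbach's alpha for a multi-item scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of scale responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                     # shared trait
responses = latent + rng.normal(0, 1, size=(200, 39))  # 39 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```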
Results: The initial Cronbach’s alpha for all 39 items
was 0.88; 11 items were then removed from analysis
due to low (<0.30) corrected item-to-total correlations.
Cronbach’s alpha for the remaining 28 items was
0.905, indicating high internal consistency reliability.
The 28 items subjected to PCA yielded a six-component solution explaining 76.1% of observed variance.
Cronbach’s alpha coefficient for the individual
components comprising the scale ranged from 0.70 to
0.89, indicating that the subscales may be used
independently to measure the various aspects of
the emergency physician’s professional practice
environment.
Conclusion: The results of this pilot study suggest that
a shortened, 28-item version of the RPPE scale is psychometrically sound when used with emergency physicians to evaluate their perceptions regarding their
practice environment. While the RPPE holds promise as
a method for evaluating changes to the practice environment and for creating an environment supportive of
the IOM’s dimensions of quality, additional research
with a larger, more diverse sample of physicians is indicated prior to widespread utilization of the instrument
in the physician population.
248
Gender Diversity in Emergency Medicine:
Measuring System’s Change
Lori Post, Nupur Garg, Gail D’Onofrio
Yale University School of Medicine, New
Haven, CT
Background: Women comprise half of the talent pool
from which the specialty of emergency medicine draws
future leaders, researchers, and educators and yet only
5% of full professors in US emergency medicine are
female. Both research and interventions are aimed at
reducing the gender gap; however, it will take decades for the benefits to be realized, which creates a methodological challenge in assessing systems change. Current
techniques to measure disparities are insensitive to systems change as they are limited to percentages and
trends over time.
Objectives: To determine whether the Relative Rate Index (RRI) identifies, better than traditional metrics, the stage of the academic pipeline at which women are not advancing.
Methods: RRI is a method of analysis that assesses the
percent of sub-populations in each stage relative to their
representation in the stage directly prior. Thus, there is a
better notion of the advancement given the availability to
advance. RRI also standardizes data for ease of interpretation. This study was conducted on the total population
of academic professors in all departments at Yale School
of Medicine during the academic year of 2010–2011. Data
were obtained from the Yale University Provost’s office.
Results: N = 1,305. There were a total of 402 full, 429 associate, and 484 assistant professors; males comprised 78%, 59%, and 54%, respectively. RRIs for the Department of Emergency Medicine (DEM) were 0.67, 1.93, and 0.78 for full, associate, and assistant professors, respectively, while the corresponding percentages of women were 44%, 60%, and 33%.
Conclusion: Relying solely on percentages masks improvements to the system. Women are most represented at the associate professor level in DEM, highlighting the importance of systems change evidence. Specifically, twice as many women are promoted to associate professor rank given the number who exist as assistant professors. Within 5 years, the DEM should have an equal system, as the number of associate professors has dramatically increased and they will become eligible for promotion to full professor. Additionally, DEM has a better record of retaining and promoting women than other Yale School of Medicine departments at both the associate and full professor ranks.
Table 1 - Abstract 248: Interpretation of Relative Rate Index (RRI)
RRI = 1: there is an equal rate of progression of females and males from one stage to the next
RRI < 1: there are fewer females progressing relative to males given their representation in the step prior
RRI > 1: there are more females progressing relative to males given their representation in the step prior
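An illustrative computation of the RRI as defined in the Methods follows; because it is computed here from the rounded percentages quoted for the DEM rather than from underlying counts, the values differ somewhat from the reported 1.93 and 0.67.

```python
# RRI per the Methods: a group's share at one stage relative to its
# share at the stage directly prior. Percentages are those quoted for
# the DEM; rounding explains the gap from the reported RRIs.
def rri(share_at_stage: float, share_at_prior_stage: float) -> float:
    return share_at_stage / share_at_prior_stage

pct_female = {"assistant": 0.33, "associate": 0.60, "full": 0.44}
print(f"associate vs assistant: {rri(0.60, 0.33):.2f}")  # >1: more women advance
print(f"full vs associate:      {rri(0.44, 0.60):.2f}")  # <1: fewer women advance
```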
249
Quantifying the Safety Net Role of the
Academic Emergency Department
Benjamin S. Heavrin, Cathy A. Jenkins
Vanderbilt University, Nashville, TN
Background: EDs have an important role in providing
care to safety net populations. The safety net role for
the academic ED has not been quantified at a national
level.
Objectives: We examine the payer mixes of community
non-rehabilitation EDs in metropolitan areas by region
to identify the proportion of academic and non-academic EDs that could be considered safety net EDs. We
hypothesize that the proportion of safety net academic
EDs is greater than that for non-academic EDs and is
increasing over time.
Methods: This is an ecological study examining US ED
visits from 2006 through 2008. Data were obtained from
the Nationwide Emergency Department Sample (NEDS).
We grouped each ED visit according to the unique hospital-based ED identifier, thus creating a payer mix for
each ED. We define a ‘‘Safety Net ED’’ as any ED where
the payer mix satisfied any one of the following three
conditions: 1) >30% of all ED visits are Medicaid patients;
2) >30% of all ED visits are self-pay patients; or 3) >40%
of all ED visits are either Medicaid or self-pay patients.
NEDS tags each ED with a hospital-based variable to
delineate metropolitan/non-metropolitan locations and
academic affiliation. We chose to examine a subpopulation of EDs tagged as either academic metropolitan or
non-academic metropolitan, because the teaching status
of non-metropolitan hospitals was not provided. We
then measured the proportion of EDs that met safety net
criteria by academic status and region.
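The safety net definition is easy to state programmatically; a direct encoding of the three payer-mix conditions follows.

```python
# Direct encoding of the study's "Safety Net ED" definition.
def is_safety_net_ed(medicaid_share: float, self_pay_share: float) -> bool:
    return (medicaid_share > 0.30
            or self_pay_share > 0.30
            or medicaid_share + self_pay_share > 0.40)

print(is_safety_net_ed(0.32, 0.05))  # True: Medicaid share alone exceeds 30%
print(is_safety_net_ed(0.25, 0.20))  # True: combined share exceeds 40%
print(is_safety_net_ed(0.20, 0.10))  # False
```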
Results: We examined 2,821, 2,793, and 2,844 weighted
metro EDs in years 2006–2008, respectively. Table 1
presents safety net proportions. The proportions of academic safety net EDs increased across the study period.
Widespread regional variability in safety net proportions existed across all years. The proportions of safety
net EDs were highest in the South and lowest in the
Northeast and Midwest. Table 2 describes these findings for 2008.
Conclusion: These data suggest that the proportion of
safety-net academic EDs may be greater than that of
non-academic EDs, is increasing over time, and is
Table 1 - Abstract 249: Hospital Frequencies and Safety Net Proportions by Academic Status

Academic Metropolitan EDs
Year   Frequency   Weighted Frequency   Safety Net Frequency   Safety Net Weighted Frequency   Academic Safety Net %   95% CI
2006   181         886.0                77                     376.1                           42.45                   (36.13, 48.77)
2007   167         805.0                83                     416.9                           51.79                   (45.06, 58.52)
2008   172         815.0                85                     409.4                           50.23                   (43.35, 57.11)

Non-Academic Metropolitan EDs
Year   Frequency   Weighted Frequency   Safety Net Frequency   Safety Net Weighted Frequency   Non-Academic Safety Net %   95% CI
2006   392         1,935.0              166                    825.7                           42.67                       (38.05, 47.30)
2007   404         1,988.0              167                    832.4                           41.87                       (37.40, 46.34)
2008   411         2,029.0              171                    842.8                           41.54                       (37.23, 45.85)
Table 2 - Abstract 249: Safety Net Proportions by Region, 2008

Academic Metropolitan EDs
Region      Weighted Frequency   Safety Net Weighted Frequency   Academic ED Safety Net %   95% CI
Northeast   205.0                61.3                            29.88                      (15.77, 43.99)
Midwest     225.0                109.5                           48.68                      (35.23, 62.13)
South       241.0                162.0                           67.22                      (56.33, 78.11)
West        144.0                76.6                            53.17                      (34.77, 71.58)

Non-Academic Metropolitan EDs
Region      Weighted Frequency   Safety Net Weighted Frequency   Non-Academic ED Safety Net %   95% CI
Northeast   272.0                39.4                            14.47                          (5.09, 23.85)
Midwest     460.0                89.2                            19.39                          (11.30, 27.48)
South       852.0                555.6                           65.21                          (58.14, 72.27)
West        445.0                158.7                           35.65                          (25.64, 45.67)
markedly variable by region. Such findings likely have
important policy implications. Our study has several
limitations: this is a preliminary descriptive analysis;
temporal confounders may affect our findings; our definition of ‘‘safety net’’ may not be optimal; and the academic status of the affiliated hospital may not reflect
the academic status of the ED.
250
Impact of Health Care Reform in
Massachusetts on Emergency Department
and Hospital Utilization
Peter Smulowitz1, Xiaowen Yang2,
James O’Malley3, Bruce Landon3
1Beth Israel Deaconess Medical Center, Boston, MA; 2Massachusetts Institute of Technology, Department of Economics, Boston, MA; 3Department of Health Care Policy, Harvard Medical School, Boston, MA
Background: Massachusetts’ health care reform dramatically reduced the number of uninsured in the state
and served as a model for national health reform legislation. The implementation of MA health reform provides a unique opportunity to study the effect of a
large-scale expansion of health insurance on the
patterns of seeking health care services.
Objectives: To examine the effect of MA health reform
implementation on ED and hospital utilization before
and after health reform, using an approach that relies
on differential changes in insurance rates across different areas of the state in order to make causal inferences
as to the effect of health reform on ED visits and hospitalizations. Our hypothesis was that health care reform
(i.e. reducing rates of uninsurance) would result in
increased rates of ED use and hospitalizations.
Methods: We used a novel difference-in-differences
approach, with geographic variation (at the zip code
level) in the percentage uninsured as our method of
identifying changes resulting from health reform, to
determine the specific effect of Massachusetts’ health
care reform on ED utilization and hospitalizations.
Using administrative data available from the Massachusetts Division of Health Care Finance and Policy Acute
Hospital Case Mix Databases, we compared a one-year
period before health reform with an identical period
after reform. We fit linear regression models at the
area-quarter level to estimate the effect of health
reform and the changing uninsurance rate (defined as
self-pay only) on ED visits and hospitalizations.
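A minimal sketch of the difference-in-differences idea described above follows; the variable names and simulated data are illustrative stand-ins for the area-quarter panel, not the Massachusetts case-mix data.

```python
# Illustrative difference-in-differences regression at the area-quarter
# level: the uninsured-rate/outcome relationship is allowed to change
# after reform via an interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # area-quarter observations
df = pd.DataFrame({
    "uninsured": rng.uniform(0.01, 0.12, n),   # zip-level uninsurance rate
    "post": rng.integers(0, 2, n),             # post-reform indicator
})
df["ed_visits"] = 100 - 50 * df["uninsured"] * df["post"] + rng.normal(0, 5, n)

# The coefficient on uninsured:post is the difference-in-differences
# estimate of how reform changed the association.
fit = smf.ols("ed_visits ~ uninsured * post", data=df).fit()
print(fit.params)
```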
Results: There were 2,562,330 ED visits and 777,357
hospitalizations pre-reform and 2,713,726 ED visits and
787,700 hospitalizations post-reform. The rate of uninsurance decreased from 6.2% to 3.7% in the ED group
and from 1.3% to 0.6% in the hospitalization group. A
reduction in the rate of the uninsured was associated
with a small but statistically significant increase in ED
utilization (p = 0.03) and no change in hospitalizations
(p = 0.13).
Conclusion: We find that increasing levels of insurance
coverage in Massachusetts were associated with small
but statistically significant increases in ED visits, but no
differences in rates of hospitalizations. These results
should aid in planning for anticipated changes that
might result from the implementation of health reform
nationally.
251
Access to Appointments Following a
Policy Change to Improve Medicaid
Reimbursement in Washington DC
Janice Blanchard, Rachelle Pierre Mathieu,
Lara Oyedele, Rachel Nash, Adith Sekaran,
Lauren Winter, Paige Diamant
George Washington, Washington, DC
Background: In April 2009, the District of Columbia adopted a policy that put Medicaid reimbursement rates on par with Medicare. It is unclear
whether this policy has improved access to care in this
population. This policy is similar to what will be
adopted as part of the nationwide health reform initiative.
Objectives: To evaluate changes in realized access to
care since implementation of a local policy change
affecting Medicaid reimbursement. We hypothesized
that access would be improved among persons with
Medicaid in 2011 as compared to years before implementation of the policy change.
Methods: We used a scripted hypothetical patient scenario of a patient with hypertension seen in the emergency department requiring close outpatient follow-up.
We compared our results in 2011 to prior results from
studies conducted with similar methodology in 2005
and 2008. Calls were made to private providers (Medicare, Medicaid, uninsured, and private insurance scenarios) and safety net clinics (Medicaid HMO and
uninsured scenarios). Appointment success rates were
compared across scenarios using bivariate (chi-square)
analysis.
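For concreteness, an illustrative chi-square comparison of appointment success between two scenarios is sketched below; the call counts are hypothetical, since the denominators behind the reported percentages are not broken out.

```python
# Hypothetical chi-square comparison of appointment success rates.
from scipy.stats import chi2_contingency

#        success  failure
table = [[8, 18],   # Medicaid fee-for-service calls (~31% success)
         [13, 13]]  # private insurance calls (~50% success)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```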
Results: Calls were made to a total of 31 private provider offices (Medicaid, private, uninsured, and Medicare scenarios) and 35 safety net clinics (uninsured,
Medicaid, and DC Alliance safety net scenarios).
When comparing 2011 appointment success rates, the
Medicaid fee for service scenario calls to private providers had the lowest success rate (30.8%) as compared to the highest success rate of 50.6% among the
private insurance scenario calls (p = 0.09). Analyzing
trends over time, access to appointments for private
providers for all insurance scenarios combined
decreased from 60% in 2005 to 53.8% in 2008 and
46.2% in 2011 (p = 0.02 comparing 2005 to 2011). For
the Medicaid fee for service scenario, there was no
significant change in appointment success rate before
and after implementation of the reimbursement
changes.
Conclusion: Despite high rates of insurance coverage
in Washington DC, our study indicates that accessing
care was more difficult for consumers in 2008 and 2011
as compared to 2005. Policy changes designed to
improve Medicaid reimbursement to private providers
did not improve appointment accessibility among the
Medicaid fee for service population. Health reform initiatives that expand insurance coverage should also
address realized access to care.
ACADEMIC EMERGENCY MEDICINE • April 2012, Vol. 19, No. 4, Suppl. 1
252
Access to Urgent Pediatric Primary Care
Appointments in the District of Columbia
Rachelle Pierre Mathieu, Janice Blanchard,
Adith Sekaran, Rachel Nash, Lauren Winter,
Christine Prideaux
George Washington, Washington, DC
Background: Timely access to acute primary care
appointments after an emergency department (ED) visit
has become a challenge for both providers and
patients. Previous studies have documented disparities
in accessing adult primary care and pediatric specialty
care, especially among those lacking private insurance.
There are few data regarding urgent pediatric primary
care access.
Objectives: This study measured pediatric access to
urgent primary care appointments within the District of
Columbia following an ED visit. We hypothesized there
would be a disparity in access for uninsured children
and those with Medicaid.
Methods: We used mystery caller methodology to
evaluate rates of appointment access for pediatric
patients. Calls were made to randomly selected private pediatric practices as well as pediatricians at
safety net clinics. Research assistants posed as a parent calling to secure an urgent appointment for their
child following a recent ED visit for a urinary tract
infection using a standardized clinical script varying
by insurance status. We calculated rates of appointment success as well as average appointment wait
time, and analyzed differences using bivariate (chi-square) analysis.
Results: We sampled 57 safety net clinics and 29 private clinics. As compared to private scenario calls made
to private providers (36.8% success rate), appointment
success rates were lowest for the Medicaid scenario
calls made to private providers (27.8%). Calls made to
safety net providers for the Medicaid patient scenario
(48.8%, p = 0.38) and uninsured patient scenario (47.7%,
p = 0.42) had higher appointment success rates but
longer wait times. Average appointment wait time at
safety net clinics was 12.3 days (95% CI, 3.5 to 21.1) for
Medicaid patients and 10.4 days (95% CI, 6.7 to 14.1)
for uninsured patients. Average appointment wait times
for the privately insured at private practices were
1.9 days (95% CI, 1.0 to 2.7).
Conclusion: This study demonstrated disparities in
access to urgent pediatric primary care appointments
by health insurance in the District. Although appointment success rates were not different by practice setting or insurance type, wait times were significantly
longer for callers to safety net providers as compared
to private practices. Pediatric provider access needs to
be improved with public and private insurance expansions in the wake of health reform.
253
Comparison Of Three Prehospital Cervical
Spine Protocols With Respect To
Immobilization Requirements And Missed
Injuries
Rick Hong, Molly Meenan, Erin Prince, Ronald
Murphy, Caitlin Tambussi, Rick Rohrbach,
Brigitte M. Baumann
Cooper University Hospital, Camden, NJ
Background: The ideal cervical spine immobilization
protocol avoids unnecessary immobilization and potential morbidity while properly immobilizing patients at
greatest risk of cervical spine injury.
Objectives: We compared three immobilization protocols, the Prehospital Trauma Life Support (PHTLS)
(mechanism-based), the Domeier protocol (parallels
NEXUS criteria), and the Hankins criteria (requires
immobilization for those <12 or >65 yrs, with altered
consciousness, focal neurologic deficit, distracting
injury, and midline or paraspinal tenderness) to determine the number of patients who would require cervical
immobilization. Our secondary objective was to determine the percentage of missed cervical spine injuries,
had each protocol been followed with 100% compliance.
Methods: Design: Cross sectional. Setting: Inner city
ED. Subjects: Patients ≥18 yrs transported by EMS after
a traumatic mechanism. For patients meeting inclusion
criteria, a structured data form which obtained demographics and data for all three protocols was completed
by the treating physician immediately after the history
and physical exam but before radiologic imaging. Medical record review ascertained cervical spine injuries.
Both physicians and data abstractors were blinded to
the objective of the study. Analysis: Chi-square.
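For readers who want to reproduce interval estimates of this kind, the following is a minimal Python sketch (not the authors' code); the numerator 475/498 is back-calculated from the reported 95.4% PHTLS proportion, and the Wilson interval matches the reported CI up to rounding and choice of interval method.

    # Minimal sketch (not the authors' analysis): Wilson 95% CI for an
    # immobilization proportion, using statsmodels.
    from statsmodels.stats.proportion import proportion_confint

    n_total = 498   # enrolled patients
    n_phtls = 475   # back-calculated from the reported 95.4%

    low, high = proportion_confint(n_phtls, n_total, alpha=0.05, method="wilson")
    print(f"PHTLS: {n_phtls / n_total:.1%} (95% CI {low:.1%}-{high:.1%})")
    # -> about 95.4% (93.2%-96.9%), matching the abstract up to rounding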
Results: Of the 498 enrolled patients, 58% were male
and the mean age was 48 ± 20 yrs. The following proportions of patients would have required cervical spine
immobilization based on the respective protocols:
PHTLS, 95.4% (95% CI 93.1–96.9%); Domeier, 68.7%
(95% CI 64.5–72.6%); Hankins, 81.5% (95% CI 77.9–
84.7%), p < 0.001. There were a total of 16 (3.2%) cervical spine injuries: 11 (2%) vertebral fractures, 2 (0.4%)
subluxations/dislocations, and 3 (0.6%) spinal cord injuries. Complete compliance with each of the three protocols would have led to appropriate cervical spine
immobilization of all injured patients.
Conclusion: The mechanism-based PHTLS protocol
required immobilization of the greatest percentage of
patients, as compared to the Domeier and Hankins protocols. Although physician-determined presence of cervical spine immobilization criteria cannot be
generalized to the findings obtained by EMS personnel,
our findings suggest that mechanism-based criteria
may result in unnecessary cervical spine immobilization
without any benefit to injured patients.
254
The Cost-Effectiveness Of Improvements
In Prehospital Trauma Triage In The U.S.
M. Kit Delgado1, David Spain1, Sharada Weir2,
Jeremy Goldhaber-Fiebert1
1Stanford University School of Medicine, Stanford, CA; 2University of Massachusetts Medical School, Shrewsbury, MA
Background: Trauma centers (TC) reduce mortality by
25% for severely injured patients but cost significantly
more than non-trauma centers (NTC). The CDC 2009
field triage guidelines set targets to reduce undertriage
of these patients to NTC to <5% and reduce overtriage
of minor injury patients to TC to <50%.
Objectives: Determine the cost-effectiveness of reaching CDC targets for prehospital trauma triage performance in U.S. regions with <1 hour EMS access to
Level I TC.
Methods: Using a decision-analytic Markov model, we
evaluated the effect of incremental improvements in
prehospital trauma triage performance on costs and
survival given a baseline undertriage rate of major
injury patients to NTC of 20% and overtriage rate of
minor trauma patients to TC of 75%. The model followed patients from injury through prehospital care,
hospitalization, first year post-discharge, and the
remainder of life. Patients were trauma victims with a
mean age of 43 (range: 18–85) with Abbreviated Injury
Scores (AIS) from 1–6. Cost and survival probability
inputs were derived from the National Study on the
Costs and Outcomes of Trauma for patients with moderate to severe injury (AIS 3–6), National Trauma Data
Bank, and published literature for patients with minor
injury (AIS 1–2). Outcomes included costs (2009$), quality adjusted life-years (QALY), and incremental costeffectiveness ratios.
Results: Reducing undertriage rates from 20% to 5%
would yield 4.0 (95% CI 3.5–4.4) QALYs gained (or 0.16
to 0.20 lives saved) per 100 patients transported by
EMS. Reducing overtriage rates from 75% to 50%
would save $108,000 per 100 patients transported.
Reducing undertriage is cost-effective at less than
$100,000/QALY as long as overtriage rates do not proportionally increase by a factor >0.7. Ideal simultaneous
reductions of undertriage to 5% and overtriage to 50%
would be cost-effective at $17,400/QALY gained. Results
were only sensitive to situations in which the cost of
treating patients with minor injures at TC, relative to
NTC, was smaller than expected.
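The cost-effectiveness figures above reduce to the standard incremental cost-effectiveness ratio, ICER = ΔCost/ΔQALY. A minimal sketch follows; the net-cost input is hypothetical, back-calculated from the reported 4.0 QALYs gained and $17,400/QALY, since the abstract does not report model-level costs.

    # Minimal sketch (not the authors' Markov model): ICER arithmetic.
    def icer(delta_cost_usd, delta_qaly):
        """Incremental cost-effectiveness ratio, $ per QALY gained."""
        return delta_cost_usd / delta_qaly

    qalys_gained = 4.0        # reported QALYs gained per 100 EMS transports
    net_cost = 17_400 * 4.0   # hypothetical net cost implied by $17,400/QALY
    print(f"ICER = ${icer(net_cost, qalys_gained):,.0f} per QALY")  # -> $17,400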
Conclusion: Reducing prehospital undertriage of trauma
patients would be cost-effective provided overtriage does
not proportionally increase by a factor >0.7. Improving
prehospital trauma triage should be a national priority;
reducing undertriage by 15% could save 7,000–8,800
lives/year and reducing overtriage by 25% could save up
to $4.8 billion/year.
255
Emergency Medical Services Compliance
With Prehospital Trauma Life Support
(PHTLS) Cervical Spine Immobilization
Guidelines
Rick Hong, Molly Meenan, Erin Prince, Caitlin
Tambussi, Rachel Haroz, Michael E. Chansky,
Brigitte M. Baumann
Cooper University Hospital, Camden, NJ
Background: PHTLS provides guidelines for prehospital cervical spine immobilization in trauma patients and
is followed by many EMS personnel. Yet, it is unknown
how closely EMS providers adhere to these guidelines.
Objectives: To determine EMS compliance with the
PHTLS guidelines in trauma patients transported to a
Level I trauma center ED who did not result in a trauma
alert. Our secondary objective was to identify criteria
associated with inadequate cervical spine immobilization by EMS which then led to immobilization in the ED.
Methods: Design: Prospective cohort. Setting: Urban,
academic ED. Subjects: Patients ≥18 years transported
by EMS after a traumatic mechanism. Trained research
associates screened all EMS patients for inclusion. For
patients meeting inclusion criteria, a structured data
form which obtained demographics and PHTLS data
was completed by the treating physician immediately
after the history and physical exam but before radiologic imaging. Both RAs and physicians were blinded
to the objective of the study. Analysis: Chi-square and
multivariable regression.
Results: Of the 498 patients who were enrolled, 58%
were male, mean patient age was 48 ± 20 years, and
mean GCS was 14.8 ± 0.8. Of the 475 patients with at
least one PHTLS criterion, 386 (81%) underwent cervical spine immobilization by EMS. Compliance with
PHTLS criteria is presented in the table below.
Table - Abstract 255: Implementation of PHTLS Cervical Spine Immobilization

Criterion                                              Noted by physician (n = 498)   C collar placed by EMS, n (%) (n = 386)   C collar placed by MD, n (%) (n = 82)   p value
Anatomic deformity of spine                            2                              2                                         0                                       1
Unable to communicate                                  40                             26 (7)                                    12 (15)                                 0.02
Violent impact to: Head                                292                            221 (57)                                  60 (73)                                 0.01
Violent impact to: Neck                                223                            186 (48)                                  34 (42)                                 0.27
Violent impact to: Torso                               153                            132 (34)                                  19 (23)                                 0.05
Violent impact to: Pelvis                              123                            107 (28)                                  12 (15)                                 0.01
Sustained a fall                                       193                            130 (34)                                  44 (54)                                 <0.001
Ejected or fell from vehicle                           38                             31 (8)                                    6 (7)                                   0.83
Sudden acceleration, deceleration, or bending forces   253                            214 (55)                                  31 (38)                                 0.004
Multivariable analysis of PHTLS criteria demonstrated that traumatic
impact to the head (OR 2.41, 95% CI 1.30–4.46) and
mechanism of fall (OR 1.82, 95% CI 1.04–3.21)
were associated with insufficient cervical spine immobilization by EMS.
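The odds ratios above come from a multivariable logistic model; the sketch below shows the general pattern on synthetic data (it is not the study dataset, and the variable names are ours).

    # Minimal sketch (synthetic data): logistic regression reported as
    # odds ratios with 95% CIs, as in the Results above.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 498
    df = pd.DataFrame({
        "head_impact": rng.integers(0, 2, n),  # criterion present (1) or absent (0)
        "fall": rng.integers(0, 2, n),
    })
    # Synthetic outcome: EMS failed to immobilize despite >=1 criterion
    logit = 0.9 * df["head_impact"] + 0.6 * df["fall"] - 2.0
    df["missed"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    fit = sm.Logit(df["missed"], sm.add_constant(df[["head_impact", "fall"]])).fit(disp=False)
    table = pd.concat([np.exp(fit.params).rename("OR"),
                       np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})], axis=1)
    print(table.drop("const"))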
Conclusion: Compliance by EMS with PHTLS guidelines is insufficient and results in nearly one fifth of
patients who are not properly immobilized. Trauma to
the head and a mechanism of fall are the two PHTLS
criteria that resulted in inadequate immobilization by
EMS.
256
A Computer-Assisted Self-Interview
Focused on Sexually Transmitted
Infections in the Pediatric Emergency
Department Is Easy To Use and Well-Accepted
Fahd A. Ahmad1, Katie Plax1,
Karen K. Collins1, Donna B. Jeffe1,
Kenneth B. Schechtman1, Jane Garbutt1,
Dwight E. Doerhoff2, David M. Jaffe1
1Washington University in St. Louis School of Medicine, St. Louis, MO; 2St. Louis Children’s Hospital, St. Louis, MO
Background: Chlamydia trachomatis and Neisseria gonorrhoeae are sexually transmitted infections (STIs)
with high levels of co-morbidity when untreated in adolescents. Despite broad CDC screening recommendations, many youth do not receive testing when
indicated. The pediatric emergency department (PED) is
a venue with a high volume of patients potentially in
need of STI testing, but assessing risk in the PED is difficult given constraints on time and privacy. We
hypothesized that patients visiting a PED would find an Audio-enhanced Computer-Assisted Self-Interview (ACASI) program to establish STI risk easy to use, and would report a preference for the ACASI over other methods of disclosing this information.
Objectives: To assess acceptability, ease of use, and
comfort level of an ACASI designed to assess adolescents’ risk for STIs in the PED.
Methods: We developed a branch-logic questionnaire
and ACASI system to determine whether patients aged
15–21 visiting the PED need STI testing, regardless of
chief complaint. We obtained consent from participants
and guardians. Patients completed the ACASI in private
on a laptop. They read a one-page computer introduction describing study details and completed the ACASI.
Patients rated use of the ACASI upon completion using
five-point Likert scales.
Results: 2030 eligible patients visited the PED during
the study period. We approached 873 (43%) and
enrolled and analyzed data for 460/873 (53%). The median time to read the introduction and complete the
ACASI was 8.2 minutes (interquartile range 6.4–
11.5 minutes). 90.7% of patients rated the ACASI ‘‘very
easy’’ or ‘‘easy’’ to use, 90.6% rated the wording as
‘‘very easy’’ or ‘‘easy’’ to understand, 60% rated the
ACASI ‘‘very short’’ or ‘‘short’’, 60.3% rated the audio
as ‘‘very helpful’’ or ‘‘helpful,’’ 82.9% were ‘‘very
comfortable’’ or ‘‘comfortable’’ with the system
confidentiality, and 71.2% said they would prefer a
computer interface over in-person interviews or written
surveys for collection of this type of information.
Conclusion: Patients rated the computer interface of
the ACASI as easy and comfortable to use. A median of
8.2 minutes was needed to obtain meaningful clinical
information. The ACASI is a promising approach to
enhance the collection of sensitive information in the
PED.
257
The Association Between Prehospital
Glasgow Coma Scale and Trauma Center
Outcomes in Victims of Moderate and
Severe TBI: A Statewide Trauma System
Analysis
Daniel W. Spaite1, Uwe Stolz1, Bentley J.
Bobrow2, Vatsal Chikani3, Michael Sotelo2,
Joshua B. Gaither1, Chad Viscusi1, David
Harden3, Jason Roosa4, Kurt R. Denninghoff1
1University of Arizona, Tucson, AZ; 2University of Arizona, Phoenix, AZ; 3Arizona Department of Health Services, Phoenix, AZ; 4Maricopa Integrated Health System, Phoenix, AZ
Background: The Glasgow Coma Scale (GCS) is utilized widely for evaluation and decision-making in
emergency medical services (EMS) systems. However,
since linkage of EMS and trauma center (TC) outcome
data is highly challenging, there is a paucity of large,
multisystem studies that directly assess the association
between EMS GCS and distal outcomes.
Objectives: To evaluate the association between EMS
GCS and outcomes in patients with moderate or severe
(m/s) TBI in a statewide EMS system.
Methods: The Arizona State Trauma Registry (ASTR)
contains EMS and TC data from all trauma patients
transported by 300 EMS agencies to any of the eight
formally designated Level I TCs in the state. We evaluated the associations between initial EMS GCS and various TC outcomes in all m/s TBI cases based upon final
diagnoses (CDC Barell Matrix Type 1, 1/1/09–12/31/10).
GCS was grouped into four commonly used categories:
15–13 (mild), 12–9 (mod), 8–4 (severe), 3 (ominous). We
compared survival, TC length of stay (LOS), TC
charges, and final disposition across groups using
Fisher’s exact or Kruskal-Wallis test.
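The four GCS strata used here translate directly into a binning rule; a minimal sketch (function name ours):

    # Minimal sketch: the four GCS categories used in this analysis.
    def gcs_category(gcs):
        if not 3 <= gcs <= 15:
            raise ValueError("GCS must be 3-15")
        if gcs >= 13:
            return "mild"      # 15-13
        if gcs >= 9:
            return "moderate"  # 12-9
        if gcs >= 4:
            return "severe"    # 8-4
        return "ominous"       # 3

    assert gcs_category(14) == "mild" and gcs_category(3) == "ominous"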
Results: There were a total of 6,985 m/s TBIs and 5,375
(77%) had documented EMS GCS (study group). The
proportion in each GCS group was 15–13: 68.5%, 12–9:
8.6%, 8–4: 7.9%, 3: 15.1%. Survival was 97.8%, 91.7%,
74.1%, and 41.8% respectively (p < 0.001). Proportion of
survivors discharged home vs. to rehab (R) or skilled
care facility (SCF) was 82.7%, 59.8%, 38.6%, and 41.0%
(p < 0.001). Median LOS (days, interquartile range) was
3.1 (1.6–6.1), 6.8 (3.2–13.3), 7.9 (2.4–15.9), and 2.0 (0.1–
10.3, early death very common) (p < 0.001). Median
hospital charges (US$) per patient were $37,024 (21,336–
75,976), $91,861 (47,800–181,137), $125,501 (59,536–
237,886), and $62,006 (21,437–150,553) (p < 0.001).
Conclusion: EMS GCS showed strong association with
survival and other outcomes, with lower GCS associated with lower survival, greater hospital LOS, higher
hospital charges, and greater proportion of survivors
discharged to R/SCF. While initial GCS was strongly
associated with outcomes, it is notable that two-thirds
of all patients with final diagnosis of moderate or
severe TBI had an initial GCS of ≥13. Furthermore,
17.3% of patients with ‘‘mild’’ GCS were discharged to
R/SCF. Thus, while EMS GCS is useful for risk stratification, a substantial number of patients with a ‘‘good’’
prehospital GCS actually have significant TBI.
258
Dating Violence: Outcomes Following a
Brief Motivational Interviewing
Intervention Among At-Risk Adolescents
in an Urban ED
Lauren K. Whiteside1, Rebecca Cunningham1,
Stephan T. Chermack2, Marc A. Zimmerman2,
Jean T. Shope2, C. Raymond Bingham2,
Frederick C. Blow2, Maureen A. Walton2
1University of Michigan and Hurley Medical Center, Ann Arbor and Flint, MI; 2University of Michigan, Ann Arbor, MI
Background: Dating violence is a serious cause of
emotional and physical injury among adolescents. Studies of nationally representative samples show that one
in ten high school students report being the victim of
violence from a dating partner. A recent study demonstrated the efficacy of the SafERteens intervention on
reducing peer violence among adolescents presenting
to the emergency department (ED).
Objectives: To determine the efficacy of this ED-based
brief intervention (BI) on dating violence one year following the ED visit, among the subsample of adolescents in the intervention reporting past year dating
violence.
Methods: Patients (14–18 years old) presenting for
medical illness or injury were recruited from an urban,
Level I trauma center ED. Participants were eligible for
the BI if they had past year violence and alcohol use.
Participants were randomized to one of three conditions, BI delivered by a computer (CBI), BI delivered by
a therapist assisted by a computer (TBI), or control, and
completed 3, 6, and 12 month follow-up. In addition to
content on alcohol misuse and peer violence, adolescents reporting dating violence received a tailored module on dating violence. The main outcome for this
analysis was frequency of moderate and severe dating
victimization and aggression at the baseline assessment
and 3, 6, and 12 months post ED visit.
Results: Among eligible adolescents, 55% (n = 397)
reported dating violence and were included in these
analyses. Compared to controls, after controlling for
baseline dating victimization, participants in the CBI
showed reductions in moderate dating victimization at
3 months (OR 0.7; CI 0.51–0.99; p < 0.05, effect size 0.12)
and 6 months (OR 0.56; CI 0.38–0.83; p < 0.01, effect
size 0.18); models examining interaction effects were
significant for the CBI on moderate dating victimization
at 3 and 6 months. Significant interaction effects were
found for the TBI on moderate dating victimization at 6
and 12 months and severe dating victimization at
3 months.
Conclusion: The computer-based intervention shows promise for delivering content that decreases moderate dating victimization over 6 months. The therapist BI is promising for decreasing moderate dating victimization over 12 months and severe dating victimization over 3 months. ED-based BIs delivered on a computer addressing multiple risk behaviors could have important public health effects.

259
Relationship of Intimate Partner Violence
to Health Status and Preventative
Screening Behaviors Among Emergency
Department Patients
Anitha E. Mathew, L. Shakiyla Smith, Brittany
Marsh, Debra Houry
Emory University, Atlanta, GA
Background: Intimate partner violence (IPV) is a health
problem that many ED patients experience. It is unclear
whether IPV victims engage in health screening behaviors at different rates than other women, placing them
at varying risks for other diseases.
Objectives: To assess the association of IPV with health
status and preventative screening behaviors in an ED
population. We hypothesized that IPV victims would
report poorer physical and mental health and have
lower frequencies of preventative screening behaviors
than nonvictims.
Methods: Adult female patients who presented to three
EDs on weekdays from 11 AM to 7 PM were asked by
trained research staff to participate in a computerized
survey about ‘‘women’s health’’ over a 14-month period. Women were excluded if they were critically ill, did not speak English, were intoxicated, or exhibited signs of psychosis. Validated measures were used, including the
Universal Violence Prevention Screen to identify IPV
and the Danger Assessment Scale to measure IPV
severity. We also assessed respondents’ physical and
mental health using the Short Form-12. Patients were
asked about chronic disease history, including diagnoses of HIV and diabetes, if they had a regular doctor,
and how often they had received pap smears, self-breast exams, and doctor- or nurse-performed breast
exams. We used chi-square tests, t-tests, and linear
regression analyses to measure associations.
Results: 1,474 women out of 3381 approached (43.6%)
agreed to take the survey. Age averaged 38 years
± 12.8 (range 18–68), and the majority of participants
were black (n = 1218, 83.9%). 153 out of 832 women
(18.4%) who had been in a relationship the previous
year had experienced IPV. IPV victims were more
likely to report positive HIV status (p = 0.017) and less
likely to conduct monthly self-breast exams (p = 0.003)
than nonvictims. Victims scored significantly lower in
physical and mental health (38.6 [SD 3.9] and 45.1 [SD
5.5], respectively; p < 0.001) than the population mean
of 50 on the Short Form-12. IPV victims who reported
HIV testing had significantly higher Danger Assessment scores than victims who had not been tested
(p = 0.048).
Conclusion: IPV victims are more likely to report lower
measures of health status, higher rates of HIV, and less
frequent self-breast exams than other female ED
patients, putting them at risk for other chronic diseases.
260
The 21-Only Ordinance Reduced
Alcohol-Related Adverse Consequences
among College-Aged Adults
Michael E. Takacs, Nicholas Edwards,
Christopher Peterson, Gregory Pelc
University of Iowa, Iowa City, IA
Background: In an effort to reduce underage drinking,
Iowa City, IA adopted a 21-only ordinance prohibiting
persons under 21 from being in bars after 10 PM. This
ordinance went into effect on June 1, 2010. Preliminary
data studied over a 3-month period showed a significant decrease in alcohol-related emergencies. Iowa City
is mainly a college town and home of the University of
Iowa (UI).
Objectives: The primary goal was to determine whether
there was a change in alcohol-related (AR) emergency
visits over a one year period from the start of the ordinance. Secondary goals were to measure the effect on
underage alcohol emergencies, student alcohol-related
emergencies, and arrests for public intoxication.
Methods: We performed a retrospective cohort study
of emergency department patients, ages 18 to 22, presenting for AR reasons from June 1, 2010 to May 31,
2011 (AY2010) and compared data to the same time period a year earlier (AY2009). Data were also obtained
from UI student records and public arrest data. Pearson
chi square analysis compared categorical variables.
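Because the abstract reports full counts, the headline comparison can be checked directly; a minimal sketch (not the authors' code) using the reported AR and total visit counts:

    # Minimal sketch: Pearson chi-square on the reported counts.
    from scipy.stats import chi2_contingency

    #            AR visits  non-AR visits
    table = [[453, 2954 - 453],   # AY2009
             [359, 2989 - 359]]   # AY2010
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.2g}")   # p < 0.001, as reported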
Results: There were 2954 total visits in AY2009 and
2989 total visits in AY2010. AR visits decreased from
453 in AY2009 (15.3%) to 359 in 2010 (12.0%, p < 0.001)
and underage AR visits decreased from 15.9% to
12.3%, p < 0.005, see table.
UI student AR visits decreased by 16% from 218 to 183,
and Non-UI student AR visits decreased by 25% from
235 to 176. Public intoxication bookings for 18 to
20 year olds decreased from 598 to 409 (32%), Figure 1.
Conclusion: The 21-only ordinance was associated with
a significant reduction of AR visits. This ordinance was
also associated with reduction in underage AR visits, UI
student visits, and public intoxication bookings. These
data suggest that other cities should consider similar ordinances to prevent unwanted consequences of alcohol.

Table - Abstract 260: Comparison of Visits by Year

         Total Visits   AR Visits   Rate per 100   P Value
AY2009   2954           453         15.3
AY2010   2989           359         12.0           <0.001

         Underage Visits   Underage AR Visits   Rate per 100   P Value
AY2009   1685              268                  15.9
AY2010   1664              204                  12.3           <0.005

261
Factors Affecting Success of Prehospital
Intubation in an Air and Land Critical Care
Transport Service: Results of a
Multivariate Analysis
Anna MacDonald1, RD MacDonald2,
Jacques S. Lee3
1University of Toronto, Toronto, ON, Canada; 2Ornge Transport Medicine, Mississauga, ON, Canada; 3Sunnybrook Health Sciences Centre, Toronto, ON, Canada
Background: Prehospital providers perform tracheal
intubation in the prehospital environment, and failed
attempts are of concern due to the danger of hypoxia and
hypotension. Some question the appropriateness of intubation in this setting due to the morbidity risk associated
with intubation in the field. Thus it is important to gain an
understanding of the factors that predict the success of
prehospital intubation attempts to inform this discussion.
Objectives: To determine the factors that affect success
rates on first attempt of paramedic intubations in a
rapid sequence intubation (RSI) capable critical care
transport service.
Methods: We conducted a multivariate logistic analysis on a prospectively collected database of airway
management from an air and land critical care transport service that provides scene responses and interfacility transport in the Province of Ontario. The study
population includes all intubations performed by flight
paramedics from January 2006 to July 2009. The primary outcome is success on first attempt. A list of
potential factors predicting success was obtained from
a review of the literature and included age, sex, Glasgow Coma Scale, location of intubation attempt, paralytics and sedation given, a difficult airway prediction
score, and type of call (trauma, medical, or cardiac
arrest).
Results: Data from 549 intubations were analysed. The
success rate on first attempt at intubation was 317/549
(57.7%) and the overall success rate was 87.4%. The
mean age was 43.5 years and 69.4% were male and
56.4% were trauma patients. Of these, 498 had complete
data for all predictive variables and were included in the
multivariate analysis. The factors that were found to be
statistically significant were age per decade (OR 1.12, CI
1.04–1.2), female sex (OR 1.5, CI 1.03–2.32), paralytics
given (OR 2.66, CI 1.5–4.7), and sedation given (OR 0.61,
CI 0.41–0.91). This model demonstrated a good fit (Hosmer Lemeshow = 8.906) with an AUC of 0.632.
Conclusion: Use of a paralytic agent, age, and sex were
associated with increased success of intubation. The
association of sedative use only with decreased success
of intubation was unexpected and may be due to confounding related to the indications for sedation, such as
patient agitation. Our findings may have implications
for RSI-capable paramedics and require further study.
262
Incidence and Predictors of Psychological
Distress after Motor Vehicle Collision
Gemma C. Lewis1, Timothy F. Platts-Mills1,
Robert Swor2, David Peak3, Jeffrey Jones4,
Neils Rathlev5, David Lee6, Robert Domieir7,
Phyllis Hendry8, Samuel A. McLean1
1University of North Carolina, Chapel Hill, NC; 2William Beaumont Hospital, Royal Oak, MI; 3Massachusetts General Hospital, Boston, MA; 4Spectrum Health - Butterworth Campus, Grand Rapids, MI; 5Baystate Medical Center, Springfield, MA; 6North Shore University Hospital, Manhasset, NY; 7St. Joseph’s Mercy Hospital, Ann Arbor, MI; 8University of Florida, Jacksonville, FL
Background: Motor vehicle collisions (MVCs) are one
of the most common types of trauma for which people
seek ED care. The vast majority of these patients are
discharged home after evaluation. Acute psychological
distress after trauma causes great suffering and is a
known predictor of posttraumatic stress disorder
(PTSD) development. However, the incidence and predictors of psychological distress among patients discharged to home from the ED after MVCs have not
been reported.
Objectives: To examine the incidence and predictors of
acute psychological distress among individuals seen in
the ED after MVCs and discharged to home.
Methods: We analyzed data from a prospective observational study of adults 18–64 years of age presenting
to one of eight ED study sites after MVC between 02/
2009 and 10/2011. English-speaking patients who were
alert and oriented, stable, and without injuries requiring hospital admission were enrolled. Patient interview
included assessment of patient sociodemographic and
psychological characteristics and MVC characteristics.
Level of psychological distress in the ED was assessed
using the 13-item Peritraumatic Distress Inventory
(PDI). PDI scores >23 are associated with increased risk
of PTSD and were used to define substantial psychological distress. Descriptive statistics and logistic regression were performed using Stata IC 11.0 (StataCorp LP,
College Station, Texas).
Results: 9339 MVC patients were screened, 1584 were
eligible, and 949 were enrolled. 361/949 (38%) participants had substantial psychological distress. After
adjusting for crash severity (severity of vehicle damage,
vehicle speed), substantial patient distress was predicted by sociodemographic factors, pre-MVC depressive symptoms, and arriving to the ED on a backboard
(table).
Conclusion: Substantial psychological distress is common among individuals discharged from the ED
after MVCs and is predicted by patient characteristics separate from MVC severity. A better understanding of the frequency and predictors of
substantial psychological distress is an important first
step in identifying these patients and developing effective interventions to reduce severe distress in the
aftermath of trauma. Such interventions have the
potential to reduce both immediate patient suffering
and the development of persistent psychological
sequelae.
Table - Abstract 262: Logistic Regression Analysis of Predictors of Peritraumatic Distress

Predictor                            Odds Ratio   95% CI        P value
Age                                  0.986        0.974–0.997   0.015
Female sex                           3.074        2.232–4.235   0.015
Educational attainment               0.698        0.599–0.813   <0.001
Pre-MVC depressive symptoms (CESD)   4.029        2.144–7.569   <0.001
Patient was driver                   1.839        1.174–2.882   0.008
Extent of vehicle damage             1.735        1.377–2.186   <0.001
Vehicle speed at impact              1.077        1.003–1.157   0.041
Patient backboarded                  1.384        1.024–1.871   0.035

263
Derivation of a Simplified Pulmonary
Embolism Triage Score (PETS) to Predict
the Mortality in Patients with Confirmed
Pulmonary Embolism from the Emergency
Medicine Pulmonary Embolism in the Real
World Registry (EMPEROR)
Beau Briese1, Don Schreiber2, Brian Lin3, Gigi
Liu2, Jane Fansler4, Samuel Z. Goldhaber5,
Brian J. O’Neil6, David Slattery7, Brian
Hiestand8, Jeff A. Kline9, Charles V. Pollack10
1Stanford/Kaiser Emergency Medicine Residency Program, Stanford, CA; 2Stanford University School of Medicine, Stanford, CA; 3University of California, San Francisco, San Francisco, CA; 4Christian Hospital, Saint Louis, MO; 5Brigham and Women’s Hospital, Boston, MA; 6Wayne State University School of Medicine, Detroit, MI; 7University Medical Center of Southern Nevada, Las Vegas, NV; 8Ohio State University College of Medicine, Columbus, OH; 9Carolinas Medical Center, Charlotte, NC; 10Pennsylvania Hospital, Philadelphia, PA
Background: The Pulmonary Embolism Severity Index
(PESI) with 11 variables and simplified PESI (sPESI)
with seven variables are the two leading risk stratification tools for mortality in patients with acute pulmonary embolism (PE).
Objectives: To derive a simple four-variable prognostic
model of mortality for patients with confirmed PE, the
‘‘Pulmonary Embolism Triage Score’’ (PETS), with equal
performance characteristics to the more complicated
PESI and sPESI.
Methods: PETS was retrospectively derived from 1438
patients with data to compute PESI in the Emergency
Medicine Pulmonary Embolism in the Real World Registry (EMPEROR), an observational database of 1880
image-confirmed PE patients prospectively enrolled at
22 U.S. academic and community hospitals from 1/1/
2005 to 12/29/2008. Logistic regression identified four
variables that when positive were independently associated with higher 30-day mortality: presence of massive
pulmonary embolism (SBP<90 mm Hg), tachypnea (respiratory rate ≥24 breaths per minute), history of cancer, and leukocytosis (WBC>11,000 cells per cubic mm).
PETS was defined as HIGH if any of the four variables
was positive, LOW if otherwise. (See figure) The predictive characteristics of PETS, PESI, and sPESI for 30-day
mortality in EMPEROR, including AUC, negative predictive value, sensitivity, and specificity were calculated.
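Because the rule is fully specified, PETS can be transcribed directly into code; the sketch below is a literal reading of the definition (argument names are ours, not from EMPEROR).

    # Minimal sketch: PETS as defined above -- HIGH if any variable positive.
    def pets(sbp_mmhg, rr_per_min, cancer_history, wbc_per_mm3):
        high = (sbp_mmhg < 90             # massive PE (hypotension)
                or rr_per_min >= 24       # tachypnea
                or cancer_history         # history of cancer
                or wbc_per_mm3 > 11_000)  # leukocytosis
        return "HIGH" if high else "LOW"

    assert pets(118, 18, False, 8_500) == "LOW"
    assert pets(85, 18, False, 8_500) == "HIGH"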
Results: The 646 of 1438 patients (44.9%; 95% CI
42.3%–47.5%) classified as PETS LOW had 30-day mortality of 0.5% (95% CI 0.1–1.5%), versus 10.2% (95% CI
8.0%–12.4%) in the PETS HIGH group, statistically similar to PESI and sPESI. PETS is significantly more specific for mortality than the sPESI (47.0% v 37.6%;
p < 0.0001), classifying far more patients as low-risk
while maintaining a sensitivity of 96% (95% CI 88.3%–
99.0%), not significantly different from sPESI or PESI
(p > 0.05).
Conclusion: With four variables, PETS in this derivation cohort is as sensitive for 30-day mortality as the
more complicated PESI and sPESI, with significantly
greater specificity than the sPESI for mortality, placing
25% more patients in the low-risk group. External
validation is necessary.
264
Denver Trauma Organ Failure Score
Outperforms Traditional Methods of Risk
Stratification in Trauma
Nicole Seleno, Jody Vogel, Michael Liao,
Emily Hopkins, Richard Byyny, Ernest Moore,
Craig Gravitz, Jason Haukoos
Denver Health Medical Center, Denver, CO
Background: The Sequential Organ Failure Assessment (SOFA) Score, base excess, and lactate have been
shown to be associated with mortality in critically ill
trauma patients. The Denver Emergency Department
(ED) Trauma Organ Failure (TOF) Score was recently
derived and internally validated to predict multiple
organ failure in trauma patients. The relationship
between the Denver TOF Score and mortality has not
been assessed or compared to other conventional
measures of mortality in trauma.
Objectives: To compare the prognostic accuracies of
the Denver ED TOF Score, ED SOFA Score, and ED
base excess and lactate for mortality in a large heterogeneous trauma population.
Methods: A secondary analysis of data from the
Denver Health Trauma Registry, a prospectively
collected database. Consecutive adult trauma patients
from 2005 through 2008 were included in the study.
Data collected included demographics, injury characteristics, prehospital care characteristics, response to
injury characteristics, ED diagnostic evaluation and
interventions, and in-hospital mortality. The values of
the four clinically relevant measures (Denver ED TOF
Score, ED SOFA score, ED base excess, and ED lactate)
were determined within four hours of patient arrival,
and prognostic accuracies for in-hospital mortality for
the four measures were evaluated with receiver operating characteristic (ROC) curves. Multiple imputation
was used for missing values.
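Prognostic accuracy here is summarized by the area under the ROC curve; a minimal sketch on synthetic data (not the registry) shows the computation pattern:

    # Minimal sketch (synthetic data): comparing scores by ROC AUC.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    died = rng.integers(0, 2, 500)                       # synthetic in-hospital mortality
    tof_like = died * 2.0 + rng.normal(0, 1.5, 500)      # stronger synthetic score
    lactate_like = died * 1.0 + rng.normal(0, 1.5, 500)  # weaker synthetic score

    print(f"AUC (TOF-like):     {roc_auc_score(died, tof_like):.2f}")
    print(f"AUC (lactate-like): {roc_auc_score(died, lactate_like):.2f}")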
Results: Of the 4,355 patients, the median age was 37
(IQR 26–51) years, median injury severity score was 9
(IQR 4–16), and 81% had blunt mechanisms. Thirty-eight
percent (1,670 patients) were admitted to the ICU with a
median ICU length of stay of 2.5 (IQR 1–8) days, and 3%
(138 patients) died. In the non-survivors, the median values for the four measures were ED SOFA 5.0 (IQR 0.0–
8.0); Denver ED TOF 4.0 (IQR 4.0–5.0); ED base excess
7.0 (IQR 8.0–19.0) mEq/L; and ED lactate 6.5 (IQR 4.5–
11.8) mmol/L. The areas under the ROC curves for these
measures are demonstrated in the figure.
Conclusion: The Denver ED TOF Score more accurately predicts in-hospital mortality in trauma patients
as compared to the ED SOFA Score, ED base excess, or
ED lactate. The Denver ED TOF Score may help identify
patients early who are at risk for mortality, allowing for
targeted resuscitation and secondary triage to improve
outcomes in these critically ill patients.
265
The Relationship Between Early Blood
Pressure Goals and Outcomes in
Post-Cardiac Arrest Syndrome Patients
Treated with Therapeutic Hypothermia
Maria Beylin, Anne Grossestreuer, Frances
Shofer, Benjamin S. Abella, David F. Gaieski
University of Pennsylvania School of Medicine,
Philadelphia, PA
Background: The 2010 American Heart Association
Guidelines for Post-Cardiac Arrest Care Consensus recommend immediate treatment of hypotension to maintain adequate tissue perfusion. If mean arterial pressure
(MAP) is <65 mmHg, they recommend infusion of fluid
and use of vasopressors to restore adequate pressure.
However, there is no literature to date examining the
relationship between early blood pressure goals and
outcomes in post-cardiac arrest syndrome (PCAS)
patients treated with therapeutic hypothermia (TH).
Objectives: We examined the relationship between
MAP at specific time points post-arrest and neurologic
and survival outcomes for PCAS patients.
Methods: All consecutive PCAS patients treated with
algorithmic post-arrest care including hemodynamic
optimization and TH at the University of Pennsylvania
Health System between May, 2005 and October, 2011
were included. Hemodynamic data, including MAP and
number of vasopressors, were analyzed at 1, 6, 12, and
24 hour time points after return of spontaneous circulation. Outcomes data collected included survival to hospital
discharge. Data were analyzed using logistic regression
analysis and ANOVA in repeated measures over time.
Results: 168 patients were included in the analysis;
45% (75/168) survived, and 35% (58/168) had a good
neurological outcome at hospital discharge. The majority of the 168 patients were at or above goal MAP (80–
100 mmHg) at all time points and between 51–59%
were on at least one vasopressor at any time point. In
the linear regression model, increasing vasopressor use
(point estimate [PE] 0.41; 95% confidence intervals [95%
CI] 0.269–0.626) and increasing age (PE 0.977; 95% CI
0.955–1.00) were associated with worsened survival. In
addition, higher MAP showed a trend toward improved
survival (PE 1.015; 95% CI 1.0–1.32). In the ANOVA
analyses, at all time points MAP was higher in
survivors than non-survivors.
Conclusion: In comatose PCAS patients undergoing TH, early optimization of MAP as a measure of adequacy of perfusion may improve outcomes. Further prospective studies with specific MAP goals and hemodynamic optimization algorithms need to be performed.

266
Initial Lactate Level Not Associated With
Mortality in Post-Arrest Patients Treated
with Therapeutic Hypothermia
David F. Gaieski, Anne Grossestreuer, Marion
Leary, Lance B. Becker, Benjamin S. Abella
University of Pennsylvania School of Medicine,
Philadelphia, PA
Background: Lactate levels rise during the no- or low-flow states associated with cardiac arrest. Prior studies
have demonstrated that initial post-arrest lactate levels
correlate with mortality (higher levels associated with
worse outcomes). These findings need further validation in
health systems delivering state-of-the-art post-arrest care
including therapeutic hypothermia (TH), hemodynamic
optimization, and other aspects of modern critical care.
Objectives: To assess the relationship between initial
lactate levels and outcomes in post-cardiac arrest syndrome patients.
Methods: A retrospective chart review was performed
of 155 post-cardiac arrest syndrome (PCAS) patients
admitted to an academic medical center between April
2005 and October 2010 who were treated with bundled
post-arrest care and had serial lactate values drawn
during the first 24 hours after return of spontaneous
circulation (ROSC). Unadjusted (t-test) analyses were
performed to determine the association between initial
and serial lactate values and mortality in these patients.
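The unadjusted comparison described here is a two-sample t-test; a minimal sketch with synthetic values (not patient data):

    # Minimal sketch (synthetic values): Welch t-test on lactate by survival.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(2)
    lactate_survivors = rng.normal(5.5, 4.0, 70).clip(min=0.3)
    lactate_nonsurvivors = rng.normal(6.5, 4.5, 51).clip(min=0.3)

    t, p = ttest_ind(lactate_survivors, lactate_nonsurvivors, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")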
Results: The mean post-ROSC lactate value (121
patients) was 9.5 mmol/L (SD 5.0; range 1.2–21.9). This
value did not correlate with mortality, including when
limited to initial lactate values >10 mmol/L. The mean
value for the second lactate measured (105 patients) was
6.0 mmol/L (SD 4.6; range 0.7–20.0); higher second lactate
values were associated with higher mortality (p = 0.049).
The mean value for 12-hour post-ROSC lactate (130
patients) was 3.6 mmol/L (SD 2.8; range 0.3–14.3); higher
12-hour lactate values were associated with higher mortality (p = 0.027). Similar results were seen at 24 hours.
Conclusion: Initial lactate levels are not associated with
mortality in PCAS patients treated with bundled post-arrest care. Subsequent lactate levels (second, 12-hour,
and 24-hour) were associated with mortality with lower
levels associated with better survival.
267
Prehospital Initiation Of Therapeutic Hypothermia In Adult Patients After Cardiac Arrest Does Not Improve Time To Target Temperature
Eric M. Schenfeld1, David A. Pearson1,
Jonathan Studnek2, Marcy Nussbaum3,
Kathi Kraft4, Alan C. Heffner5
1Carolinas Medical Center, Department of Emergency Medicine, Charlotte, NC; 2Carolinas Medical Center, The Center for Prehospital Medicine, Charlotte, NC; 3Carolinas Medical Center, Dickson Institute for Health Studies, Charlotte, NC; 4Carolinas Medical Center, Center for Clinical Data Analysis, Charlotte, NC; 5Carolinas Medical Center, Department of Emergency Medicine, Department of Internal Medicine, Division of Critical Care Medicine, Charlotte, NC
Background: Both animal and human studies suggest
that early initiation of therapeutic hypothermia (TH) and
rapid cooling improve outcomes after cardiac arrest.
Objectives: The objective was to determine if administration of cold IV fluids in a prehospital setting decreased
time-to-target-temperature (TT) with secondary analysis
of effects on mortality and neurological outcome.
Methods: Patients resuscitated after out-of-hospital
cardiac arrest (OOHCA) who received an in-hospital
post cardiac arrest bundle including TH were prospectively enrolled into a quality assurance database from
November 2007 to November 2011. On April 1, 2009 a
protocol for intra-arrest prehospital cooling with 4°C
normal saline on patients experiencing OOHCA was
initiated. We retrospectively compared TT for those
receiving prehospital cold fluids and those not receiving
cold fluids. TT was defined as 34°C measured via Foley
thermistor. Secondary outcomes included mortality,
good neurological outcome defined as Cerebral Performance Category (CPC) score of 1 or 2 at discharge, and
effects of pre-ROSC cooling.
Results: There were 132 patients who were included in
this analysis with 80 patients receiving prehospital cold
IV fluids and 52 who did not. Initially, 63% of patients
were in VF/VT and 36% asystole/PEA. Patients receiving
prehospital cooling did not have a significant improvement in TT (256 minutes vs 271 minutes, p = 0.64). Survival to discharge and good neurologic outcome were
not associated with prehospital cooling (54% vs 50%,
p = 0.67) and CPC of 1 or 2 in 49% vs 44%, (p = 0.61).
Initiating cold fluids prior to ROSC showed both a nonsignificant decrease in survival (48% vs 56%, p = 0.35)
and increase in poor neurologic outcomes (42% vs 50%,
p = 0.39). 77% of patients received ≤1 L of cooled IVF
prior to hospital arrival. Patients receiving prehospital
cold IVF had a longer time from arrest to hospital arrival
(44 vs 34 min, p < 0.001) in addition to a prolonged
ROSC to hospital time (20 vs 12 min, p = 0.005).
Conclusion: At our urban hospital, patients achieving
ROSC following OOHCA did not demonstrate faster TT
or outcome improvement with prehospital cooling compared to cooling initiated immediately upon ED arrival.
Further research is needed to assess the utility of
prehospital cooling.
268
Assessment of Building Type and Provision
of Bystander CPR and AED Use in Public
Out-of-Hospital Cardiac Arrest Events
Kenneth Jones1, Steven C. Brooks2, Jonathan
Hsu3, Jason Haukoos4, Bryan F. McNally5,
Comilla Sasson4
1University of Colorado, Aurora, CO; 2Division of Emergency Medicine, Department of Medicine, University of Toronto, Toronto, ON, Canada; 3Undergraduate Medical Education Program, Faculty of Medicine, University of Toronto, Toronto, ON, Canada; 4Department of Emergency Medicine, University of Colorado, Aurora, CO; 5Department of Emergency Medicine, Emory University, Atlanta, GA
Background: Approximately 20% of all out-of-hospital
cardiac arrests (OHCA) occur in a public location. However, little is known about how differences in building
type affect the provision of bystander CPR and AED
use for OHCA victims.
Objectives: To categorize the locations of public OHCA
events and to estimate and compare the prevalence of
AED use and bystander CPR performance between
these building types.
Methods: Design: Secondary analysis of the Cardiac
Arrest Registry to Enhance Survival (CARES) dataset.
Setting: CARES comprises 29 U.S. cities in 17 states
with a catchment of over 22 million people.
Population: Consecutive arrests restricted to those that
occurred in a public location (excluding street/highway)
and had an AED used by either layperson or first
responder from October 1, 2005 through December 31,
2009. Street address and latitude/longitude information
were used to identify the type of building. Buildings
were categorized using an adaptation of the City of
Toronto Employment Survey Classification Protocols.
Satellite and street imagery and property/tax records
were used to categorize the business/institution.
Results: A total of 20,018 arrests occurred during the
study period. Of the 971 public arrests which had an
AED used, 77 were unable to be geocoded, while an
additional 150 were found to be misclassified. The most
common reasons for misclassification were the use of
the categories ‘‘Public Building’’ and ‘‘Other’’ for structures that were non-public (e.g. private residences or
health care facility). The final sample included 744
OHCA events (see table). An AED was used by a
layperson in 224 (30.1%) or first responder in 520 (69.9%)
of these arrests, while bystander CPR was performed in
441 arrests (59.3%). Almost one in six arrests where an
AED was used occurred in a retail shopping center. Colleges (1.2%, n = 10) and convention centers (1.0%, n = 8)
made up a small percentage of the public arrests where
an AED was used, but a large proportion of events occurring in these locations received bystander CPR (80% and 87%, respectively) and had a layperson use an AED (60% and 62%, respectively).
Conclusion: Building types differ in layperson bystander CPR and AED use. Building types in which
larger proportions of OHCA victims are receiving these
life-saving interventions could be models for successful
public education campaigns.
Table - Abstract 268: Building Type and Percentage of Layperson Bystander CPR and AED Use

Location Type                   Events, n (%)   Layperson CPR, n (%)   Layperson AED Use, n (%)
Retail Shopping / Services      146 (14.9)      66 (44.3)              17 (13.9)
Office                          82 (8.6)        57 (69.0)              31 (38.0)
Jail/Prison                     63 (7.7)        52 (82.5)              35 (55.6)
Hotel/Motel                     58 (7.1)        18 (31.0)              0 (0)
Industrial/Warehouse Facility   48 (5.8)        24 (50.0)              16 (33.3)
Airport (Civil)                 45 (5.5)        34 (75.6)              22 (48.9)
Fitness Club                    40 (4.9)        30 (75.0)              22 (55.0)
Gambling Establishment          33 (4.0)        13 (39.4)              10 (30.3)
Place of Worship                32 (3.9)        15 (46.9)              4 (12.5)
Primary/Secondary School        30 (3.7)        22 (73.3)              16 (53.3)
Restaurant / Bar / Nightclub    27 (3.6)        14 (55.0)              2 (5.0)
Country Club / Golf Course      24 (2.9)        20 (83.3)              11 (45.8)
Outdoor Athletic Facility       22 (3.0)        17 (77.3)              6 (27.3)
Police/Fire/EMS Facility        12 (1.6)        3 (25.0)               0 (0)
College / University            10 (1.3)        8 (80.0)               6 (60.0)
Stadium                         9 (1.2)         5 (55.6)               4 (44.4)
Convention Center               8 (1.1)         7 (87.5)               5 (62.5)
Other                           55 (7.4)        36 (65.4)              17 (30.9)
269
Ethnic Disparities In The Utilization Of EMS Resources And The Impact On Door-to-PCI Times In STEMI Patients
Nick Testa, Garren MI Low, David Shavelle,
Stephanie Hall, Kim Newton, Linda Chan
Los Angeles County + USC Medical Center,
Los Angeles, CA
Background: EMS literature has shown that the use of
9-1-1 services varies with ethnicity.
Objectives: In this study, we examined whether ethnic
variability exists among STEMI patients utilizing EMS
and whether there is a difference in door-to-PCI time
between ethnic groups.
Methods: We analyzed data prospectively collected
from January 2009 through June 2011 to assess the difference in door-to-PCI time between the two largest
ethnic groups of STEMI patients (Hispanic and African
Americans).
Results: Of a total of 494 STEMI patients, 276 (56%)
were Hispanic and 95 (19%) were African American.
African Americans utilized EMS more than Hispanics
(75% versus 40%, p < 0.0001). To assess the difference
in door-to-PCI time between the two ethnic groups, we
compared the median and percent of door-to-PCI times
<90 minutes. The median door-to-PCI time was 46 minutes for African Americans and 70 minutes for Hispanics (p = 0.13). The median door-to-PCI time was
43 minutes for EMS patients and 83 minutes for walk-in
patients (p < 0.0001). The percentage of patients with
door-to-PCI time less than 90 minutes was 75% for Hispanics and 84% for African Americans (p = 0.48). The
percentage less than 90 minutes was 94% for patients
who arrived via EMS and was 60% for walk-in patients
(p < 0.0001). Both measures of door-to-PCI time point
to a significant difference in door-to-PCI time between
EMS and walk-in patients but not between the two ethnic groups. The odds ratio of having a door-to-PCI time
less than 90 minutes for Hispanics as compared to African Americans was 0.58 (95% CI: 0.16, 1.95; p = 0.48).
After adjustment for the difference in patients who
arrive via EMS and walk-in utilization, the adjusted
odds ratio was 1.79 (95% CI: 0.36, 8.16; p = 0.67). The
reverse relationship between the unadjusted and
adjusted odds ratio indicates an interaction between
ethnicity and patients' mode of entry. Among patients
who arrive via EMS, the odds ratio of having door-toPCI time less than 90 minutes for Hispanics as compared to African Americans was 0.69 (95% CI: 0.03,
7.29, p = 1.00) while the odds ratio was 4.80 (95% CI:
0.41, 126; p = 0.30) among the walk-in patients.
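The crude-versus-stratified reversal reported here is easiest to see by computing odds ratios within each arrival stratum; a minimal sketch with hypothetical 2x2 counts (the abstract does not report cell counts):

    # Minimal sketch (hypothetical counts): OR from a 2x2 table; computing
    # it within strata exposes the ethnicity-by-arrival-mode interaction.
    def odds_ratio(a, b, c, d):
        """[[a, b], [c, d]] = [[exposed/outcome+, exposed/outcome-],
        [unexposed/outcome+, unexposed/outcome-]]."""
        return (a * d) / (b * c)

    # Hypothetical: door-to-PCI < 90 min, Hispanic (exposed) vs African
    # American (unexposed), within each mode of arrival.
    print(odds_ratio(100, 10, 66, 6))  # EMS arrivals: ~0.91
    print(odds_ratio(90, 70, 10, 14))  # walk-ins:     ~1.80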
Conclusion: These findings indicate that there is a significant difference between EMS and walk-in STEMI
patients in door-to-PCI time but no significant difference between African American and Hispanic STEMI
patients after adjusting for EMS utilization rates.
270
Identification of Delirium in Elderly
Emergency Department Patients
Maura Kennedy, Richard A. Enander,
Richard E. Wolfe, Edward R. Marcantonio,
Nathan I. Shapiro
Beth Israel Deaconess Medical Center, Boston,
MA
Background: Delirium is common in the ED, associated with increased morbidity and mortality, and is
often under-diagnosed by ED physicians. A better
understanding of predictors of ED delirium would help
to target case-finding, prevention, and treatment
efforts.
Objectives: The objective of this study was to identify
independent risk factors for delirium in the ED utilizing
patient demographic and clinical characteristics.
Methods: We performed a prospective, observational
study of elderly patients in our urban university ED.
Inclusion criteria were: ED patient >= 65 years and ability of patient or surrogate to provide informed consent
and cooperate with testing. A trained research assistant
performed a structured mental status assessment
including the MMSE and Delirium Symptom Interview.
Delirium was determined using the Confusion Assessment Method (CAM). We collected data on patient
demographics, comorbidities, medications, and ED
course. Using the outcome of delirium from the CAM,
we identified univariate correlates of delirium. Variables
with a p < 0.1 were included in a multivariate logistic
regression model; backward selection was used to create a final model of significant predictor variables with
a p <= 0.05. We allowed approximately 1 predictor per
10 delirious patients to avoid over-fitting the model.
Model accuracy was assessed by the c-statistic and
Hosmer Lemeshow tests.
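Backward selection of the kind described can be sketched as a simple loop that refits the model after dropping the weakest predictor (synthetic data; not the study variables):

    # Minimal sketch (synthetic data): p-value-based backward elimination.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 706
    X = pd.DataFrame(rng.normal(size=(n, 4)),
                     columns=["age", "dementia", "rr_gt20", "noise"])
    y = (rng.random(n) < 1 / (1 + np.exp(-(0.5 * X["age"] + 0.8 * X["dementia"])))).astype(int)

    cols = list(X.columns)
    while True:
        fit = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=False)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= 0.05:
            break
        cols.remove(worst)   # drop the weakest predictor and refit
    print("retained:", cols)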
Results: There were 706 subjects enrolled with a complete delirium assessment, of whom 67 (9.5%) were
delirious. Mean age was 77 (SD 8), 49% were male,
87% were Caucasian, and 11% were African American.
The final multivariable predictor model consisted of age
(OR 1.6; 95%CI 1.1–2.2 per decade above 65); history of
dementia (4.3; 2.2–8.2), TIA or stroke (2.4; 1.1–5.2), or
seizure disorder (3.8; 1.3–11.5); respiratory rate >20 in
the ED (3.3; 1.6–6.9); and ED diagnosis of urinary tract
infection (3.0; 1.4–6.8) or intracranial hemorrhage (5.9;
1.3–27.1). The c-statistic for our model was 0.80 and the
Hosmer Lemeshow p-value was 0.17.
Conclusion: Patients with preexisting neurological disease, evidence of respiratory distress, or presenting
with a urinary tract infection or intracranial hemorrhage were most likely to present with ED delirium.
This simple risk prediction model, easily performed in
the emergency setting, demonstrated excellent discrimination and calibration.
271
Hospitalization Rates and Resource
Utilization of Delirious ED Patients
Maura Kennedy, Richard A. Enander,
Richard E. Wolfe, Nathan I. Shapiro,
Edward R. Marcantonio
Beth Israel Deaconess Medical Center, Boston,
MA
Background: Delirium is common in elderly ED
patients and under-diagnosed in the ED. Delirium in
the hospital is associated with higher health care
expenditures and mortality, but these associations have
not been well-studied in ED delirium.
Objectives: The purpose of this study is to compare
resource utilization and mortality among delirious and
non-delirious elderly ED patients.
Methods: We performed a prospective, observational
study of elderly patients in our urban university ED.
Inclusion criteria were ED patients >= 65 years and ability of patient/surrogate to provide informed consent and
cooperate with testing. A trained research assistant performed a structured mental status assessment after
which delirium was determined using the Confusion
Assessment Method. Patients were followed through
their ED and hospital courses, and by telephone at 7 and
30 days. Data were collected on hospital and ICU admission, length of stay, hospital charges, and 30-day re-hospitalization and mortality. Proportions and Fisher’s exact
tests are reported for nominal data. Medians, 25%–75%
interquartile ranges (IQR), and Wilcoxon Rank tests are
reported for non-normally distributed continuous data.
Results: There were 706 subjects enrolled, of whom 67
(9.5%) were delirious. Delirious subjects were more
likely to be admitted to the hospital than non-delirious
patients (85% versus 63%, p < 0.001). Of the admitted
patients, delirious subjects had a greater length of stay
(median 4 days, IQR 2–6 days) compared with nondelirious patients (2 days, IQR 1–4 days; p < 0.001) and
incurred greater hospital charges (median $17,094, IQR $11,747–$31,825 vs. $13,317, IQR $8,789–$20,111;
p = 0.004). There was a trend toward more ICU admissions among delirious patients (17% vs 9%, p = 0.06).
At discharge, patients with ED delirium were more
likely to be transferred to a new long term care or
rehabilitation facility (49% vs 20%, p < 0.001). At
30 days, patients with ED delirium were more likely to
have been re-admitted to the hospital (23% vs 11%,
p = 0.006) or have died (8.8% vs 1.1%, p < 0.001).
Conclusion: Delirium on presentation to the ED is associated with greater health care utilization and costs, and
higher rates of discharge to a long term care facility and
30-day mortality. Further research is needed to determine
if early diagnosis in the emergency setting may improve
the outcome of patients presenting with delirium.
272
Physician and Prehospital Provider
Unstructured Assessment of Delirium in
the Elderly
Adam N. Frisch, Brian P. Suffoletto,
Thomas Miller, Christian Martin-Gill,
Clifton W. Callaway
UPMC, Pittsburgh, PA
Background: An estimated 10% of emergency department (ED) patients 65 years of age and older have delirium, which is associated with short- and long-term risk
of morbidity and mortality. Early recognition could
result in improved outcomes, but the reliability of delirium recognition in the continuum of emergency care is
unknown.
Objectives: We tested whether delirium can be reliably
detected during emergency care of elderly patients by
measuring the agreement between prehospital providers, ED physicians, and trained research assistants
using the Confusion Assessment Method for the ICU
(CAM-ICU) to identify the presence of delirium. Our
hypothesis was that both ED physicians and prehospital
providers would have poor ability to detect elements of
delirium in an unstructured setting.
Methods: Prehospital providers and ED physicians
completed identical questionnaires regarding their clinical encounter with a convenience sample of elderly (age
>65 years) patients who presented via ambulance to
two urban, teaching EDs over a three-month period.
Respondents noted the presence or absence of (1) an
acute change in mental status, (2) inattention, (3) disorganized thinking, and (4) altered level of consciousness
(using the Richmond Agitation Sedation Scale). These
four components comprise the operational definition of
delirium. A research assistant trained in the CAM-ICU
rated each component for the same patients using a
standard procedure. We calculated inter-rater reliability
(kappa) between prehospital providers, ED physicians,
and research assistants for each component.
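For reference, a minimal sketch of Cohen's kappa for two raters' binary judgments on the same patients; the ratings here are hypothetical, not study data:

```python
# Cohen's kappa: chance-corrected agreement between two raters.
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    # Chance agreement from each rater's marginal positive/negative rates
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

md  = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # hypothetical physician ratings
ems = [1, 0, 1, 0, 0, 0, 0, 1, 0, 0]  # hypothetical prehospital ratings
print(round(cohens_kappa(md, ems), 3))
```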
Results: A total of 261 patients were enrolled. By CAM-ICU, 24 (9.2%) were positive for delirium. Physician and
prehospital unstructured assessment would have categorized fewer patients as delirious (5.36% and 7.28%, respectively). For each
component, there was poor to fair agreement (kappa
0.086–0.486) between observers (table).
Conclusion: Prehospital providers, ED physicians, and trained researchers using a validated delirium instrument have poor to fair agreement when assessing the components of delirium. Future studies should test whether more regimented bedside tools could improve agreement and recognition of delirium.
Table - Abstract 272: Inter-rater Agreement for Delirium Components (Kappa Values)

Delirium Component             MD/EMS    MD/CAM-ICU    EMS/CAM-ICU
Acute Mental Status Change     0.266     0.256         0.086
Agitation Score                0.486     0.313         0.213
Inattention                    0.34      0.223         0.189
Disorganized Thinking          0.218     0.331         0.119

273
Mode of Arrival: The Effect of Age on EMS
Use for Transportation to an Emergency
Department
Courtney Marie Cora Jones, Erin B. Philbrick,
Manish N. Shah
University of Rochester Medical Center,
Rochester, NY
Background: Previous studies have found older adults
are more likely to use EMS than younger adults; however, these studies are limited because the effect of
potentially important confounding variables, such as
patient acuity and presence of comorbid medical conditions, was not evaluated.
Objectives: This study aimed to assess the association
between age and EMS use while controlling for potential confounders. We hypothesized that this association
would persist after controlling for confounders.
Methods: A cross-sectional survey study was conducted at an academic medical center’s ED. An interview-based survey was administered and included
questions regarding demographic and clinical characteristics, mode of ED arrival, health care use, and the
perceived illness severity. Age was modeled as an ordinal variable (<60, 60–79, and ≥80 years). Bivariate analyses were used to identify potential confounders and
effect measure modifiers and a multivariable logistic
regression model was constructed. Odds ratios were
calculated as measures of effect.
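A hedged sketch of the unadjusted odds-ratio calculation with a Wald 95% CI; the 2x2 counts are hypothetical stand-ins, since the abstract reports only the resulting odds ratios:

```python
# Odds ratio and Wald CI from a 2x2 table of EMS use by age group.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b: EMS yes/no in the older group; c,d: EMS yes/no in the reference group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

print(odds_ratio_ci(160, 240, 120, 254))  # hypothetical counts -> OR ~1.41
```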
Results: A total of 1092 subjects were enrolled and had
usable data for all covariates, 465 (43%) of whom
arrived via EMS. The median age of the sample was
60 years and 52% were female. There was a statistically
significant linear trend in the proportion of subjects
who arrived via EMS by age (p < 0.0001). Compared to
adults aged less than 60 years, the unadjusted odds
ratio associating age and EMS use was 1.41 (95% CI:
1.19, 1.67) for subjects 60–79 years and 1.98 (95% CI:
1.66, 2.36) for subjects ≥80 years. After adjustment for
acuity, chief complaint, self-rated health, education,
place of residence, comorbidities, history of depression,
and perceived illness severity, age remained a statistically significant risk factor for EMS use (1.36; 95%
CI: 1.12, 1.65 for subjects 60–79 and 1.85; 95% CI: 1.25,
2.72 for subjects ≥80 years).
Conclusion: In this sample of ED patients, the proportion of subjects arriving via EMS was found to increase
linearly with age. In multivariable analysis, the relationship between increasing age and EMS use remained
statistically significant and was not confounded by
other variables. Additional research is needed to
account for potential unmeasured confounders (e.g.,
insurance status and access to alternative transportation) and to elucidate reasons for the increased likelihood of EMS use among older adults.
274
Impact Of A New Senior Emergency
Department On ED Recidivism, Rate Of
Hospital Admission, And Hospital Length
Of Stay
Daniel Keyes, Ani Habicht, Bonita Singal,
Mark Cowen
St Joseph Mercy Ann Arbor/ University of
Michigan EM Residency, Ann Arbor, MI
Background: Senior (geriatric) EDs are opening with
increasing frequency around the US, providing elders
with more comprehensive screening for depression,
polypharmacy, and other concerns, which may
decrease the frequency of return visits (recidivism).
Evaluation in the ED by social workers may also
decrease admissions and allow earlier hospital discharge. Currently little information describes the effect
on recidivism, rate of admission, or hospital length of
stay for this important new phenomenon.
Objectives: To investigate the effect of a Senior ED on
recidivism, rate of hospital admission, and hospital
length of stay, comparing a group before and after this
intervention in a large suburban community hospital.
Methods: We created a propensity variable for recidivism within 30 days for each of 66 DRG categories in
the pre-Senior ED group, only. These DRG categories
were then ranked by quintiles of propensity to return,
and the rankings were applied to each patient in both
cohorts. Multivariable analyses were used to balance chance differences in risk in the recidivism, admission rate, and LOS analyses. The Cox proportional hazard
model was used to test whether the intervention
affected time to return within 30 days. Length of stay,
propensity to return, and age were summarized using
median and interquartile range (IQR). Triage level, sex,
and insurance type were summarized using percentage.
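A hedged sketch of the Cox proportional hazards step using the lifelines Python package; the column names and rows are hypothetical stand-ins, not the study dataset:

```python
# Cox model for time to return within 30 days (censored at 30 days).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "days_to_return": [5, 30, 12, 30, 21, 30, 3, 30, 9, 30],
    "returned":       [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "senior_ed":      [0, 0, 1, 1, 0, 1, 1, 0, 1, 0],  # post-intervention flag
    "return_propensity_quintile": [1, 3, 2, 5, 4, 1, 3, 2, 5, 4],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_return", event_col="returned")
cph.print_summary()  # hazard ratio for senior_ed, analogous to the reported HR
```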
Results: A total of 8154 patients were included, 4252 in
the pre-Senior ED group and 3902 in the Senior ED
group. Subsequent exclusions were: 201 at the highest emergency severity index (ESI level 1), 274 in whom ESI was not determined, and one with an unrealistic
recorded age of 130 years, leaving 7678, with 3936 in
the pre-Senior ED group and 3742 in the Senior ED
group. The mean (SD) age of the patients was 77.2 (8.3),
ranging from 65 to 102, and 51% were female (see
table). The risk of being admitted was lower in the
Senior ED group (RR = 0.95, 95% CI: 0.92–0.98), but
there was no significant difference in time to return
within 180 days (HR = 1.08, 95% CI: 0.96–1.23) or time
to return within 30 days (HR = 1.08, 95% CI: 0.95–1.23)
or LOS (3.6 days for both, p = 0.85).
Conclusion: A new senior ED resulted in a small
decrease in rate of admission, but no significant effect on
either recidivism or LOS was seen. Benefits of this ED
may be apparent in other ways, including patient satisfaction, which will be valuable topics for future research.
275
Prospective Validation and Refinement of
a Clinical Decision Rule for Chest
Radiography in ED Patients with Chest
Pain and Possible Acute Coronary
Syndrome
Joseph K. Poku1, Venkatesh R. Bellamkonda
Athmaram1, Fernanda Bellolio1, Ronna L.
Campbell1, David M. Nestler1, Ian G. Stiell2,
Erik P. Hess1
1Mayo Clinic, Rochester, MN; 2University of
Ottawa, Ottawa, ON, Canada
Background: We previously derived a clinical decision
rule (CDR) for chest radiography (CXR) in patients with
chest pain and possible acute coronary syndrome
(ACS) consisting of the absence of three predictors: history of congestive heart failure, history of smoking, and
abnormalities on lung auscultation.
Objectives: To prospectively validate and refine a CDR
for CXR in an independent patient population.
Methods: We prospectively enrolled patients over
24 years of age with a primary complaint of chest pain
and possible ACS from September 2009 to January 2010
at a tertiary care ED with 73,000 annual patient visits.
Physicians completed standardized data collection forms
before ordering chest radiographs and were thus
blinded to CXR findings at the time of data collection.
Two investigators, blinded to the predictor variables,
independently classified CXRs as ‘‘normal,’’ ‘‘abnormal
not requiring intervention,’’ and ‘‘abnormal requiring
intervention’’ (e.g, heart failure, infiltrates) based on
review of the radiology report and the medical record.
Analyses included descriptive statistics, inter-rater reliability assessment (kappa), and recursive partitioning.
Results: Of 1159 visits for possible ACS, mean age (SD)
was 60.3 (15.6) and 51% were female. Twenty-four percent had a history of acute myocardial infarction, 10%
congestive heart failure, and 11% atrial fibrillation. Seventy-one (6.1%, 95% CI 4.9–7.7) patients had a radiographic abnormality requiring intervention. The kappa
statistic for CXR classification was 0.93 (95% CI 0.88–
0.97). The derived prediction rule (no history of congestive heart failure, no history of smoking, and no abnormalities on lung auscultation) was 67.6% sensitive (95%
CI 56.1–77.3), 43.1% specific (95% CI 40.2–46.1), LR+
1.19 (95% CI 1.00–1.41), LR- 0.75 (95% CI 0.53–1.06).
The refined rule (no shortness of breath, no history of
smoking, no abnormalities on lung auscultation, and
age<55) was 93.0% sensitive (95% CI 84.6–97.0), 10.4%
specific (95% CI 8.7–12.3), LR+ 1.04 (95% CI 0.97–1.11),
LR- 0.68 (95% CI 0.29–1.61).
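The reported likelihood ratios follow directly from the reported sensitivity and specificity; a quick illustrative check (small discrepancies in the last digit reflect rounding of the published inputs):

```python
# LR+ = sens / (1 - spec); LR- = (1 - sens) / spec
def likelihood_ratios(sens, spec):
    return sens / (1 - spec), (1 - sens) / spec

print(likelihood_ratios(0.676, 0.431))  # derived rule -> approx. (1.19, 0.75)
print(likelihood_ratios(0.930, 0.104))  # refined rule -> approx. (1.04, 0.67)
```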
Conclusion: The diagnostic accuracy of the refined CDR
was substantially improved but is insufficient to recommend for use in clinical practice. Although the prevalence of CXR abnormalities in patients with chest pain
and possible ACS is low, clinical data appear to inadequately predict abnormalities requiring intervention.
276
Cholesteryl Esters Associated with Acyl-CoA:cholesterol acyltransferase-2 Predict
Coronary Artery Stenosis in Patients with
Symptoms of Acute Coronary Syndrome
Chadwick D. Miller, Michael J. Thomas, Brian
Hiestand, Michael P. Samuel, Martha D.
Wilson, Janet Sawyer, Lawrence L. Rudel
Wake Forest Health Sciences, Winston-Salem, NC
Background: After excluding MI in patients with
symptoms of acute coronary syndrome (ACS), identify-
ing the likelihood of coronary artery disease (CAD)
could reduce the need for stress testing or coronary
imaging. Acyl-CoA:cholesterol acyltransferase-2 (ACAT2) activity has been shown in monkey and murine models to correlate with atherosclerosis.
Objectives: To determine if a novel cardiac biomarker
consisting of plasma cholesteryl ester levels (CE) typically derived from the activity of ACAT2 is predictive of
CAD in a clinical model.
Methods: A single center prospective observational
cohort design enrolled a convenience sample of subjects from a tertiary care center with symptoms of
acute coronary syndrome undergoing coronary CT
angiography or invasive angiography. Plasma samples
were analyzed for CE composition with mass spectrometry. The primary endpoint was any CAD determined at
angiography. Multivariable logistic regression analyses
were used to estimate the relationship between the sum
of the plasma concentrations from cholesteryl palmitoleate (16:1) and cholesteryl oleate (18:1) (defined as
ACAT2-CE) and the presence of CAD. The added value
of ACAT2-CE to the model was analyzed comparing
the C-statistics and integrated discrimination improvement (IDI).
Results: The study cohort was comprised of 113 participants enrolled over 24 months with a mean age 49
(±11.7) years, 59% with CAD at angiography. The median plasma concentration of ACAT2-CE was 938 µM (758, 1099) in patients with CAD and 824 µM (683, 998)
in patients without CAD (p = 0.03) (Figure). When considered with age, sex, and the number of conventional
CAD risk factors, ACAT2-CE were associated with a
6.5% increased odds of having CAD per 10 µM increase
in concentration. The addition of ACAT2-CE significantly improved the C-statistic (0.89 vs 0.95, p = 0.0035)
and IDI (0.15, p < 0.001) compared to the reduced
model. In the subgroup of low-risk observation unit
patients, the CE model had superior discrimination
compared to the Diamond Forrester classification (IDI
0.403, p < 0.001).
Conclusion: Plasma levels of ACAT2-CE, considered in
a clinical model, have strong potential to predict a
patient’s likelihood of having CAD. In turn, this could
reduce the need for cardiac imaging after the exclusion
of MI. Further study of ACAT2-CE as biomarkers in
patients with suspected ACS is needed.
277
The Role Of Bedside Carotid
Ultrasonography In The Emergency
Department To Risk Stratify Patients With
Chest Pain
Anita Datta1, Anjali Bharati1, Michelle Pearl1,
Kenneth Perry1, Cristina Sison2, Sanjey Gupta1,
Nidhi Garg1, Penelope Chun Lema1
1New York Hospital Queens, Flushing, NY; 2Feinstein Institute for Medical Research at the North Shore-LIJ Health System, Manhasset, NY
Background: Outpatient studies have demonstrated a
correlation between carotid intima-media thickness
(CIMT) on ultrasound and coronary artery disease
(CAD). There are no known published studies that
investigate the role of CIMT in the ED using cardiac CT
or percutaneous cardiac intervention (PCI) as a gold
standard.
Objectives: We hypothesized that CIMT can predict
cardiovascular events and serve as a noninvasive tool
in the ED.
Methods: This was a prospective study of adult
patients who presented to the ED and required evaluation for chest pain. The study location was an urban
ED with a census of 120,000 annual visits and 24-hour
cardiac catheterization. Patients who did not have CT
or PCI or had carotid surgery were excluded from the
study. Ultrasound CIMT measurements of right and
left common carotid arteries were taken with a 10-MHz
linear transducer (Zonare, Mountain View, CA). Anterior, medial, and posterior views of the near and far
wall were obtained (12 CIMT scores total). Images
were analyzed by Carotid Analyzer 5 (Medical Imaging Applications LLC, Coralville, Iowa). Patients were classified into two groups based on the results from CT or
PCI. A subject was classified as having significant
CAD if there was over 70% occlusion or multi-vessel
disease.
Results: Ninety of 102 patients were included in the study;
55.7% were males. Mean age was 56.6 ± 13 years. There
were 34 (37.8%) subjects with significant CAD and 56
(62.2%) with non-significant CAD. The mean of all 12 CIMT
measurements was significantly higher in the CAD group
than in the non-CAD group (0.60 ± 0.20 vs. 0.35 ± 0.23;
p < 0.00001). A logistic regression analysis was carried out
with significant CAD as the event of interest and the following explanatory variables in the model: sex, age group
(≥55 yrs vs. <55 yrs), DM, hypercholesterolemia, previous
CAD, vascular disease, right mean IMTs, and left mean
IMTs. The mean of all right IMT measurements and left
IMT measurements correlated with significant CAD
(p < 0.001). With each unit increase in CIMT value, subjects were 64 times more likely to have significant CAD
(OR = 64.77, 95% CI 6.03, 696.06; p = 0.0006). For every 0.1
unit increment in the mean CIMT measurement, a subject
would be 1.52 times more likely to have significant CAD
(64.77^0.1 = 1.52; 95% CI: 1.19,1.92).
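Because the logistic model is linear in log-odds, the per-0.1-unit odds ratio is the per-unit odds ratio raised to the 0.1 power, which is the computation shown above; written out (the lower CI bound differs from the published 1.19 only by rounding of the inputs):

```latex
\[
\mathrm{OR}_{0.1} = \exp\!\bigl(0.1\,\ln \mathrm{OR}_{1.0}\bigr)
                  = 64.77^{\,0.1} \approx 1.52,
\qquad
6.03^{\,0.1} \approx 1.20,
\qquad
696.06^{\,0.1} \approx 1.92.
\]
```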
Conclusion: Increased thickness of the CIMT on ultrasound correlates with cardiovascular disease in the
acute setting.
278
Are Echocardiography, Telemetry,
Ambulatory Electrocardiography
Monitoring, and Cardiac Enzymes in
Emergency Department Patients
Presenting with Syncope Useful Tests?
Shamai A. Grossman1, Benjamin Sun2,
David Chiu1, Nathan I. Shapiro1
1Harvard Medical School, Beth Israel Deaconess Medical Center, Boston, MA; 2Oregon Health and Science University, Portland, OR
Background: Prior studies of admitted geriatric
patients with syncope suggest diagnostic tests affect
management <5% of the time; whether this is true
among all ED patients with syncope remains unclear.
Objectives: To determine the diagnostic yield of routine testing in-hospital or following ED discharge
among patients presenting to an ED following syncope.
Methods: A prospective, observational, cohort study of
consecutive ED patients ≥18 years old presenting with
syncope was conducted. The four most commonly utilized tests (echocardiography, telemetry, ambulatory
electrocardiography monitoring, and cardiac markers)
were studied. Interobserver agreement as to whether
tests results determined the etiology of the syncope
was measured using kappa (k) values.
Results: Of 570 patients with syncope, 150 (26%) had
echocardiography with 33 (6%) demonstrating a likely
etiology of the syncopal event such as critical valvular
disease or significantly depressed left ventricular function (k = 0.78). On hospitalization, 349 (61%) patients
were placed on telemetry, 19 (3%) of these had worrisome dysrhythmias (k = 0.66). 317 (55%) patients had
troponin levels drawn of whom 19 (3%) had positive
results (k = 1); 56 (10%) patients were discharged with
monitoring with significant findings in only 2 (0.4%)
patients (k = 0.65). Overall, 73 (8%, 95% CI 7–10%) studies were diagnostic.
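The overall yield can be re-derived from the counts above; a minimal sketch, reconstructing the denominator as the sum of the four test counts (872) and assuming a Wilson score interval, since the authors' CI method is not stated:

```python
# Diagnostic yield with a 95% Wilson score interval.
from statsmodels.stats.proportion import proportion_confint

diagnostic, total = 73, 150 + 349 + 317 + 56  # 73 of 872 tests
lo, hi = proportion_confint(diagnostic, total, alpha=0.05, method="wilson")
print(f"{diagnostic / total:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # ~8% (7-10%)
```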
Conclusion: Although routine testing is prevalent in
ED patients with syncope, the diagnostic yield is relatively low. Nevertheless, some testing, particularly
echocardiography, may yield critical findings in some
cases. Current efforts to reduce the cost of medical
care by eliminating non-diagnostic medical testing and
increasing emphasis on practicing evidence-based
medicine argue for more discriminate testing when
evaluating syncope. (Originally submitted as a
‘‘late-breaker.’’)
279
Needles In A Needlestack: ‘‘Prodromal’’
Symptoms of Unusual Fatigue and
Insomnia Are Too Prevalent Among Adult
Women Visiting the ED to be Useful in
Diagnosing ACS Acutely
Paris B. Lovett1, Yvonne N. Ezeala1,
Rex G. Mathew1, Julia L. Moon2
1Thomas Jefferson University, Philadelphia, PA; 2Drexel University, School of Public Health, Philadelphia, PA
Background: In 2003, McSweeney et al. reported surveys on ‘‘prodromal’’ symptoms recalled by women
who had experienced myocardial infarctions (MI).
Unusual fatigue was reported by 70.7% (severe
29.7%) and insomnia by 47.8% (severe 21.0%). These
findings have led to risk management recommendations to consider these symptoms as predictive of
acute coronary syndromes (ACS) among women visiting the ED.
Objectives: To document the prevalence of these symptoms among all women visiting an ED. To analyze the
potential effect of using these symptoms in the ED
diagnostic process for ACS.
Methods: A survey on fatigue and insomnia symptoms
was administered to a convenience sample of all adult
women visiting an urban academic ED (all arrival
modes, acuity levels, all complaints). A sensitivity analysis was performed using published data and expert
opinion for inputs.
Results: We approached 548 women, with 379 enrollments. See table. The top box shows prevalences of
prodromal symptoms among all adult female ED
patients. The bottom box shows outputs from sensitivity analysis on the diagnostic effect of initiating an ACS
workup for all female ED patients reporting prodromal
symptoms.
Conclusion: Prodromal symptoms of ACS are highly
prevalent among all adult women visiting the ED in this
study. This likely limits their utility in ED settings. While
screening or admitting women with prodromal symptoms in the ED would probably increase sensitivity,
that increase would be accompanied by a dramatic
reduction in specificity. Such a reduction in specificity
would translate to admitting, observing, or working up
somewhere between 29% and 61% of all women visiting the ED, which is prohibitive in terms of personal
costs, risks of hospitalization, and financial costs. While
these symptoms may or may not have utility in other
settings such as primary care, their prevalence, and the
implied lack of specificity for ACS suggest they will not
be clinically useful in the ED.
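A hedged sketch of the logic behind the sensitivity analysis: if any reported prodromal symptom triggers an ACS workup, the workup burden equals the symptom prevalence among all women, and the positive predictive value falls accordingly. All inputs here are hypothetical placeholders (the study's measured prevalences appeared in a table not reproduced in this transcript):

```python
def screening_tradeoff(p_acs, sens, p_symptom_all):
    """Fraction of all women worked up, and PPV, if any symptom report
    triggers an ACS workup."""
    flagged = p_symptom_all            # everyone reporting the symptom
    ppv = (sens * p_acs) / flagged     # true positives among the flagged
    return flagged, ppv

# e.g., 3% ACS prevalence (hypothetical), 70.7% symptom sensitivity
# (per McSweeney), 45% symptom prevalence among all women (hypothetical)
flagged, ppv = screening_tradeoff(0.03, 0.707, 0.45)
print(f"work up {flagged:.0%} of all women; PPV {ppv:.1%}")
```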
280
Length of Stay for Observation Unit Chest
Pain Patients Tested with Coronary
Computed Tomography Angiography
Compared to Stress Testing Depends on
Time of Emergency Department
Presentation
Simon A. Mahler, Brian C. Hiestand, James W.
Hoekstra, David C. Goff, Chadwick D. Miller
Wake Forest University Medical School,
Winston-Salem, NC
Background: Coronary computed tomography angiography (CCTA) has been identified as a way of reducing
length of stay (LOS) relative to stress testing for emergency department (ED) and observation unit (OU)
patients with chest pain. However, this relationship
may differ based on the time of patient presentation to
the ED.
Objectives: To determine if the relationship between
LOS and testing modality varies based on the time of
ED patient presentation.
Methods: We examined a cohort of low-risk chest pain
patients evaluated in an ED-based OU using prospective
and retrospective OU registry data elements. Cox proportional hazard modeling was performed to assess the
effect of testing modality (stress testing vs. CCTA) on the
LOS in the CDU. As CCTA is not available on weekends,
only subjects presenting on weekdays were included.
Cox models were stratified on time of patient presentation to the ED, based on four hour blocks beginning at
midnight. The primary independent variable was first
test modality, either stress imaging (exercise echo, dobutamine echo, stress MRI) or CCTA. Age, sex, and race
were included as covariates. The proportional hazards
assumption was tested using scaled Schoenfeld residuals, and the models were graphically examined for outliers and overly influential covariate patterns. Test
selection was a time varying covariate in the 8AM strata,
and therefore the interaction with ln (LOS) was included
as a correction term. After correction for multiple comparisons, an alpha of 0.01 was held to be significant.
Results: Over the study period, 841 subjects (of 1,070 in
the registry) presented on non-weekend days. The median LOS was 18.5 hours (IQR 12.4–23.3 hours), 57%
were white, and 61% were female. The table shows the
number of subjects in each time strata, the number
tested, and the number undergoing stress testing vs.
CCTA. After adjusting all models for age, race, and sex,
the hazard ratio (HR) for LOS is as shown. Only those
patients presenting between 8AM and noon showed a
significant improvement in LOS with CCTA use
(p < 0.0001).
Conclusion: In OU management of low risk chest pain
patients, in a setting of weekday/standard business
hour CCTA availability, CCTA testing decreased LOS
only in patients presenting to the ED during the 8am–11:59am time period.
Table - Abstract 280:

Time block     Total n    Tested n    First test stress    HR (95% CI)
00:00–03:59    81         77          57                   0.67 (0.38–1.16)
04:00–07:59    74         67          34                   0.75 (0.44–1.27)
08:00–11:59    197        186         84                   0.75 (0.68–0.84)*
12:00–15:59    223        206         149                  1.23 (0.90–1.69)
16:00–19:59    152        142         99                   1.07 (0.74–1.54)
20:00–23:59    113        110         73                   0.93 (0.49–1.39)

* Time varying covariate; HR presented is for the interaction term of stress test*ln (LOS).
281
Correlation of Student OSCE Scores with
Other Performance Metrics in an EM
Clerkship: A Three Year Review
Joshua Wallenstein, Douglas Ander,
Connie Coralli
Emory University, Atlanta, GA
Background: A management-focused Objective Structured Clinical Examination (OSCE) became a component of our emergency medicine (EM) clerkship’s
student performance evaluation in 2005. Since that time
we have developed formal training protocols and standardization of our evaluators. There is limited literature
exploring the validity of an OSCE as a measure of
clinical skills in EM.
Objectives: Determine the validity of a management-focused EM OSCE as a measure of clinical skills by
determining the correlation between OSCE scores and
faculty assessment of student performance in the ED.
Methods: Medical students in a fourth year EM clerkship
were enrolled in the study. On the final day of the clerkship students participated in a five-station EM OSCE.
Student performance on the OSCE was evaluated using a
task-based evaluation system with 3–4 critical management tasks per case. Task performance was evaluated
using a three-point system: performed correctly/timely
(2), performed incorrectly/late (1), or not performed (0).
Descriptive anchors were used for performance criteria.
Communication skills were also graded on a three-point
scale. Student performance in the ED was based on traditional faculty assessment using our core-competency
evaluation instrument. A Pearson correlation coefficient
was calculated for the relationship between OSCE score
and ED performance score. Case item analysis included
determination of difficulty and discrimination.
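A minimal sketch of one common form of the item analysis named above (difficulty as percent answering correctly; discrimination as the upper-group minus lower-group difference); the abstract does not specify its exact index formulas, and the data here are simulated:

```python
import numpy as np

def item_analysis(item_scores, totals, group_frac=0.27):
    """item_scores: (n_students, n_items) 0/1 matrix; totals: overall scores."""
    item_scores, totals = np.asarray(item_scores), np.asarray(totals)
    k = max(1, int(len(totals) * group_frac))
    order = np.argsort(totals)
    low, high = item_scores[order[:k]], item_scores[order[-k:]]
    difficulty = item_scores.mean(axis=0) * 100
    discrimination = (high.mean(axis=0) - low.mean(axis=0)) * 100
    return difficulty, discrimination

rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(50, 5))  # 50 students, 5 simulated items
print(item_analysis(scores, scores.sum(axis=1)))
```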
Results: Between 2009 and 2011, 281 students completed
the OSCE. Complete OSCE data were available for 242
students. A moderate positive correlation was found
(r(237) = 0.400, p < 0.001), indicating a significant linear
relationship between OSCE score and ED performance
score. Difficulty index ranged from 21.9 to 97. Index of
discrimination ranged from 34 to 118.
Conclusion: Our study of a management-focused EM
OSCE revealed a moderate correlation between
students’ OSCE scores and ED performance scores. This
supports the use of the EM OSCE as a valid assessment
of clinical skills. The presence of a moderate rather than
a strong correlation may indicate the presence of confounding variables in both OSCE and ED performance
evaluation. It may also indicate that the OSCE measures
variables independent from traditional evaluations of
clinical performance. Item analysis reveals specific case
items that can be adjusted to refine the OSCE.
282
Rating The Emergency Medicine Core
Competencies: Hawks, Doves, And The
ACGME
Jason T. Nomura, Hannah Bollinger,
Debra Marco, James F. Reed, III
Christiana Care Health System, Newark, DE
Background: The ACGME has defined six core competencies (6CCs) of Patient Care, Medical Knowledge,
Practice Based Learning, Interpersonal Communications, Professionalism, and System Based Practice. The
ACGME also requires that trainees are evaluated on
these 6CCs during their residency. Trainee evaluations in the 6CCs are frequently on a subjective rating scale.
One of the recognized problems with a subjective scale
is the rating stringency of the rater, commonly known
as the Hawk-Dove effect. This has been seen in Standardized Clinical Exam scoring. Recent data have
shown that score variance can be related to evaluator
performance with a negative correlation. Higher-scoring physicians were more likely to be a stringent or
Hawk type rater on the same evaluation. It is unclear if
this pattern also occurs in the subjective ratings that
are commonly used in assessments of the 6CCs.
Objectives: To compare the scores attending physicians received on the ACGME 6CCs with the ratings those attendings gave residents, looking for a negative correlation, or Hawk-Dove effect.
Methods: Residents are routinely evaluated on the
6CCs with a 1–9 numerical rating scale as part of their
training. The evaluation database was retrospectively
reviewed. Residents anonymously scored attending
physicians on the 6CCs with a cross-sectional survey
that utilized the same rating scale, anchors, and
prompts as the resident evaluations. Average scores
for and by each attending were calculated and a
Pearson Correlation calculated by core competency
and overall.
Results: In this IRB-approved study, a total of 43
attending physicians were scored on the 6CCs with 447
evaluations by residents. Attendings evaluated 162 residents with a total of 1,678 evaluations completed over a
5-year period. Attending mode score was 9, ranging
from 2 to 9; resident scores had a mode of 8 with a
range of 1 to 9. There was no correlation between the
rated performance of the attendings overall or in each
6CCs and the scores they gave (p = 0.065–0.861).
Conclusion: Hawk-Dove effects can be seen in some
scoring systems and have the potential to affect trainee
evaluation on the ACGME core competencies. However, a negative correlation to support a Hawk-Dove
scoring pattern was not found in EM resident evaluations by attending physicians. This study is limited by
being a single center study and utilizing grouped data
to preserve resident anonymity.
283
Emergency Medicine Resident
Self-Assessment of Competency During
Training and Beyond
Jeremy Voros1, Rita Cydulka2, Debra Perina3,
John Moorhead4
1Denver Health Emergency Medicine Residency, Denver, CO; 2MetroHealth Medical Center, Cleveland, OH; 3University of Virginia Health Sciences Center, Charlottesville, VA; 4Oregon Health & Science University, Portland, OR
Background: All ACGME-accredited residency programs are required to provide competency-based education and evaluation. Graduating residents must
demonstrate competency in six key areas. Multiple
studies have outlined strategies for evaluating competency, but data regarding residents’ self-assessments of
these competencies as they progress through training
and beyond is scarce.
Objectives: Using data from longitudinal surveys by
the American Board of Emergency Medicine, the primary objective of this study was to evaluate if resident
self-assessments of performance in required competencies improve over the course of graduate medical training and in the years following. Additionally, resident
self-assessment of competency in academic medicine
was also analyzed.
Methods: This is a secondary data analysis of data
gathered from two rounds of the ABEM Longitudinal
Study of Emergency Medicine Residents (1996–98 and
2001–03) and three rounds of the ABEM Longitudinal
Study of Emergency Physicians (1999, 2004, 2009). In
both surveys, physicians were asked to rate a list of 18
items in response to the question, ‘‘What is your current level of competence in each of the following
aspects of work in EM?’’ The rated items were grouped
according to the ACGME required competencies of
Patient Care, Medical Knowledge, Practice-based
Learning and Improvement, Interpersonal and Communication Skills, and System-based Practice. An additional category for academic medicine was also added.
Results: Rankings improved in all categories during
residency training. Rankings in three of the six categories improved from the weak end of the scale to the
strong end of the scale. There is a consistent decline in
rankings one year after graduation from residency. The
greatest drop is in Medical Knowledge. Mean self-ranking in academic medicine competency is uniformly the
lowest ranked category for each year.
Conclusion: While self-assessment is of uncertain value
as an objective assessment, these increasing rankings
suggest that emergency medicine residency programs
are successful at improving residents’ confidence in the
required areas. Residents do not feel as confident about
academic medicine as they do about the ACGME
required competencies. The uniform decline in rankings
the first year after residency is an area worthy of
further inquiry.
284
Screening Medical Student Rotators From
Outside Institutions Improves Overall
Rotation Performance
Shaneen Doctor, Troy Madsen, Susan Stroud,
Megan L. Fix
University of Utah, Salt Lake City, UT
Background: Emergency medicine is a rapidly growing
field. Many student rotations are limited in their ability
to accommodate all students and must limit the number
of students they allow per rotation. We hypothesize
that pre-screening visiting student rotators will improve
overall student performance.
Objectives: To assess the effect of applicant screening
on overall rotation grade and mean end of shift card
scores.
Methods: We initiated a medical student screening
process for all visiting students applying to our 4-week
elective EM rotation starting in 2008. This consisted of
reviewing board scores and requiring a letter of
intent. Students from our home institution were not
screened. All end-of-shift evaluation cards and final
rotation grades (honors, high pass, pass, fail) from
2004 to 2011 were analyzed. We identified two
cohorts: home students (control) and visiting students.
We compared pre-intervention (2004–2008) and postintervention (2008–2011) scores and grades. End of
shift performance scores are recorded using a five-point scale that assesses indicators such as fund of
knowledge, judgment, and follow-through to disposition. Mean ranks were compared and P-values were
calculated using the Armitage test of trend and
confirmed using t-tests.
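A generic sketch of a Cochran-Armitage-style trend test (the abstract names the Armitage test of trend; its exact variant is not specified, and the counts here are hypothetical):

```python
import math

def trend_test(successes, totals, scores):
    """Test for a linear trend in proportions across ordered categories."""
    N, R = sum(totals), sum(successes)
    p = R / N
    s_bar = sum(s * n for s, n in zip(scores, totals)) / N
    t = sum(s * (r - n * p) for s, r, n in zip(scores, successes, totals))
    var = p * (1 - p) * sum(n * (s - s_bar) ** 2 for s, n in zip(scores, totals))
    z = t / math.sqrt(var)
    p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_two_sided

print(trend_test(successes=[10, 14, 20, 28], totals=[80, 80, 80, 80],
                 scores=[0, 1, 2, 3]))
```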
Results: We identified 162 visiting students (91 pre, 81
post) and 160 home students (90 pre, 80 post). 12
(13.2%) visiting students achieved honors pre-intervention while 31 (38.3%) achieved honors post-intervention
(p = 0.000093). No significant difference was seen in
home student grades: 28 (31.1%) received honors pre-2008 and 17 (21.3%) received honors post-2008
Table - Abstract 283: Mean Rankings by Category (reported with Mean CI)

       EM1               EM2               EM3               PR1               PR5               PR10
PC     2.81 (2.71–2.91)  3.19 (3.04–3.34)  3.44 (3.34–3.54)  3.37 (3.26–3.48)  3.44 (3.34–3.54)  3.43 (3.27–3.59)
MK     3.19 (3.11–3.27)  3.58 (3.45–3.70)  3.90 (3.82–3.99)  3.33 (3.23–3.44)  3.56 (3.46–3.66)  3.80 (3.64–3.96)
PBL    2.92 (2.85–2.99)  3.41 (3.32–3.51)  3.62 (3.56–3.68)  3.43 (3.37–3.50)  3.47 (3.39–3.55)  3.47 (3.35–3.59)
ICS    3.56 (3.49–3.64)  3.72 (3.62–3.83)  3.88 (3.81–3.94)  3.79 (3.72–3.88)  3.80 (3.72–3.88)  3.79 (3.66–3.91)
SBP    2.47 (2.40–2.53)  2.83 (2.73–2.92)  3.21 (3.15–3.28)  3.01 (2.95–3.08)  3.26 (3.19–3.33)  3.32 (3.21–3.44)
AM     2.20 (2.14–2.26)  2.48 (2.39–2.57)  2.76 (2.69–2.83)  2.47 (2.40–2.54)  2.27 (2.19–2.36)  2.37 (2.24–2.50)

PC = Patient Care; MK = Medical Knowledge; PBL = Practice-based Learning and Improvement; ICS = Interpersonal and Communication Skills; SBP = System-based Practice; AM = Academic Medicine. EM1–EM3 = residency years 1–3; PR1, PR5, PR10 = practice years 1, 5, and 10 after residency.
(p = 0.17). Mean numerical grade comparison confirmed these results with pre and post visiting student
means 4.13 and 4.27 (p = 0.00038) and home student
means 4.19 and 4.13 (p = 0.17), respectively. Mean shift performance scores for visiting students were 3.99 pre and 4.18 post (p < 0.0001), while mean scores for home students were 4.00 and 4.06, respectively (p = 0.016).
Conclusion: We found that implementation of a screening process for visiting medical students improved overall rotation scores and grades as compared to home
students who did not receive screening. Screening rotating students may improve the overall quality of applicants and thereby the residency program.
285
Creation of a National Emergency Medicine Fourth-year Medical Student Examination
Emily L. Senecal1, Corey Heitz2, Michael Beeson3
1Massachusetts General Hospital - Harvard Medical School, Boston, MA; 2Carilion Clinic - Virginia Tech Carilion School of Medicine, Roanoke, VA; 3Akron General Hospital, Akron, OH

Background: A National Board of Medical Examiners (NBME) subject examination does not exist for EM students. To fill this void, the Clerkship Directors in Emergency Medicine (CDEM) tasked a committee with development of an end-of-rotation examination geared towards fourth-year (M4) EM students, based on a national syllabus, and consisting of examination questions written according to published question writing guidelines.
Objectives: To describe the development of the examination and provide data concerning its initial usage and performance.
Methods: Exam Development: The CDEM Testing Committee systematically reviewed an existing EM, student-focused question database at www.saemtests.org. Question assessment included statistical performance analysis, compliance with published item writing guidelines, and topic inclusion within the published EM M4 syllabus. For syllabus topics without existing questions, committee members wrote new items. Committee members developed a system for secure online examination administration. Data Analysis: LXR 6.0 testing software was used to administer the examination. Data gathered included numbers of examinations completed and institutions participating, mean and median scores with standard deviation, and point biserial correlation (rpb) for each question.
Results: Thirty-six questions meeting the stated criteria were selected for inclusion in the National EM M4 Examination. An additional fourteen questions were written by committee members to generate a 50-question examination. The National EM M4 Examination was released online on August 1, 2011. Three months into its availability, the examination had been completed 703 times by students from 22 participating clerkships. The mean score was 80.2 (SD 4.0) and the median score was 82.0. The average rpb was 0.226 (range 0.021–0.385).
Conclusion: The National EM M4 Examination is a 50-question multiple choice high-stakes examination developed to help fill a void in the assessment of EM students. Initial data demonstrate acceptable mean and median scores as well as acceptable rpbs.

286
A Novel Approach To ‘‘See One, Do One’’: Video Instruction For Suturing Workshops
Amita Sudhir, Claire M. Plautz, William A. Woods
University of Virginia, Charlottesville, VA

Background: There are many descriptions in the literature of computer-assisted instruction in medical education, but few studies that compare them to traditional teaching methods.
Objectives: We sought to compare the suturing skills and confidence of students receiving video preparation before a suturing workshop versus a traditional instructional lecture.
Methods: 88 first and second year medical students were randomized into two groups. The control group was given a lecture followed by 40 minutes of suturing time. The video group was provided with an online suturing video at home, no lecture, and given 40 minutes of suturing time during the workshop. Both groups were asked to rate their confidence before and after the workshop, and their belief in the workshop's effectiveness. Each student was also videotaped suturing a pig's foot after the workshop and graded on a previously validated 16-point suturing checklist. 83 videos were scored.
Results: There was no significant difference between the test scores of the lecture group (M = 11.21, SD = 3.17, N = 42) and the video group (M = 11.27, SD = 2.53, N = 41) using the two-sample independent t-test for equal variances (t(81) = -0.09, p = 0.93). There was a statistically significant difference in the proportion of students scoring correctly for only one point, ‘‘Curvature of needle followed’’: 25/42 in the lecture group and 35/41 in the video group (chi-square = 6.92, df = 1, p = 0.008). Students in the video group were found to be 2.45 times more likely to have a neutral or favorable feeling of suturing confidence before the workshop (p = 0.067, CI 0.94–6.4) using a proportional odds model. No association was detected between group assignment and level of suturing confidence after the workshop (p = 0.475). There was also no association detected between group assignment and opinion of the suturing workshop (p = 0.681) using a logistic regression odds model. Among those students who indicated a lack of confidence before training, there was no detected association (p = 0.967) between group assignment and having an improved confidence using a logistic regression odds model.
Conclusion: Students in the video group and students in the control group achieved similar levels of suturing skill and confidence, and equal belief in the workshop's effectiveness. This study suggests that video instruction could be a reasonable substitute for lectures in procedural education.

287
Educational Technology Can Improve ECG
Diagnosis of ST Elevation MI Among
Medical Students
Ali Pourmand, Steven Davis, Kabir Yadav,
Hamid Shokoohi, Mary Tanski
George Washington University, Washington,
DC
Background: Accurate interpretation of the ECG in the
emergency department is not only clinically important
but also critical to assess medical knowledge competency. With limitations to expansion of formal didactics,
educational technology offers an innovative approach
to improve the quality of medical education.
Objectives: The aim of this study was to assess the effect of an online multimedia-based ECG training module on ST elevation myocardial infarction (STEMI) identification among medical students.
Methods: A convenience sample of fifty-two medical
students on their EM rotations at an academic medical
center with an EM residency program was evaluated in
a before-after fashion during a 6-month period. One
cardiologist and two ED attending physicians independently validated a standardized exam of ten ECGs: four
were normal ECGs, three were classic STEMIs, and
three were subtle STEMIs. The gold standard for diagnosis was confirmed acute coronary thrombus during
cardiac catheterization. After evaluating the 10 ECGs,
students completed a pre-intervention test wherein they
were asked to identify patients who required emergent
cardiac catheterization based on the presence or
absence of ST segment elevation on ECG. Students
then completed an online interactive multimedia module containing 13 minutes of STEMI training based on
American Heart Association/American College of Cardiology guidelines on STEMI. Medical students were
asked to complete a post-test of the 10 ECGs after
watching online multimedia.
Results: The participants included 52 medical students
in their fourth year of training, of whom 27 (52%) were
female. Overall, 36 (69%) had an improvement in their
score, with a pre-test median of 5 [IQR 5–6] out of 10 correct ECG interpretations and a post-test median 7 [IQR
6–8]. Treating data as ordinal, the Wilcoxon signed-rank test confirmed the post-test was significantly different (p < 0.001). Spearman's rank correlation and regression
analysis confirmed that sex did not influence test scores.
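An illustrative sketch of the paired pre/post comparison; the score pairs are hypothetical:

```python
# Wilcoxon signed-rank test on paired pre/post ECG test scores.
from scipy.stats import wilcoxon

pre  = [5, 5, 6, 4, 5, 6, 5, 7, 5, 6]
post = [7, 6, 8, 6, 7, 8, 5, 9, 7, 8]
stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")
```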
Conclusion: Educational technology using online multimedia modules improves medical students’ ability to
interpret ECGs and can augment traditional teaching
due to its ready availability and interactivity. Future
studies using standardized assessment tools are needed
to validate improved competency in ECG interpretation
in broader populations.
288
Better Than Expected: External Validation
of the PECARN Head Injury Criteria in a
Community Hospital Setting
Payal Shah1, David Donaldson1, Shawn
Munafo1, Teresa Thomas1, Robert Swor2,
William Anderson1, Aveh Bastani1
1Troy Beaumont Hospital, Troy, MI; 2William Beaumont Hospital, Royal Oak, MI
Background: The Pediatric Emergency Care Applied Research Network (PECARN) head injury criteria provide an algorithm to emergency physicians for identifying patients at very low risk for clinically important
traumatic brain injury (ciTBI). The PECARN investigators noted that by implementing their algorithm, head
CTs could potentially be avoided for 25% of pre-verbal
pediatric patients (age < 2).
Objectives: Our objective was to quantify the number
of pre-verbal pediatric head CTs performed at our community hospital that could have been avoided by utilizing the PECARN criteria.
Methods: We conducted a standardized chart review
of all children under the age of 2 who presented to our
community hospital and received a head CT
between Jan 1st, 2010 and Dec 31st, 2010. Following
recommended guidelines for conducting a chart review,
we: 1) utilized four blinded chart reviewers, 2) provided
specific training, 3) created a standardized data extraction tool, and 4) held periodic meetings to evaluate coding discrepancies. Our primary outcome measure was
the number of patients who were PECARN negative
and received a head CT at our institution. Our secondary outcome was to reevaluate the sensitivity and specificity of the PECARN criteria to detect ciTBI in our
cohort. Data were analyzed using descriptive statistics
and 95% confidence intervals were calculated around
proportions using the modified Wald method.
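A sketch of one Agresti-Coull-style "modified Wald" interval; published variants differ slightly, so the bounds may not match the abstract digit-for-digit:

```python
import math

def modified_wald(successes, n, z=1.96):
    """Adjusted-Wald CI: add z^2/2 successes and z^2 trials, then use Wald."""
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

print(modified_wald(3, 3))     # sensitivity: 3/3 ciTBI identified
print(modified_wald(74, 112))  # specificity: 74/112 PECARN-negative
```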
Results: A total of 138 patients under the age of 2
received a head CT at our institution during the study
period. 23 patients were excluded from the final analysis
because their head CTs were not for trauma. The prevalence of a ciTBI in our cohort was 2.6% (95% CI 0.6%–
7.7%) (Table). Of the three patients with ciTBI, all were
identified utilizing the PECARN criteria. A total of 74
patients (64.3%, 95% CI 55.3%–72.5%) were identified as
not requiring a head CT based on PECARN criteria. At
our community hospital the PECARN criteria retained their
high sensitivity of 100% (95% CI 31%–100%) and modest
specificity of 66.1% (95% CI 56.4%–74.6%).
Conclusion: Application of the PECARN criteria in the
community hospital setting can significantly decrease
the number of head CTs performed on pre-verbal
pediatric patients while retaining their high sensitivity for
ruling out ciTBI.
Table - Abstract 288:

              (+) ciTBI    (-) ciTBI    Total
(+) PECARN    3            38           41
(-) PECARN    0            74           74
Total         3            112          115

289
Prevalence of Non-traumatic Incidental
Findings Found on Pediatric Cranial CT
scans
Alexander Rogers1, Cormac O. Maher1, Jeff E.
Schunk2, Kimberly S. Quayle3, Elizabeth S.
Jacob4, Richard Lichenstein5, Elizabeth C.
Powell6, Michelle Miskin2, Peter S. Dayan7,
James F. Holmes8, Nathan Kuppermann8, the
PECARN9
1University of Michigan, Ann Arbor, MI; 2University of Utah, Salt Lake City, UT; 3Washington University School of Medicine, St. Louis, MO; 4Brown University, Providence, RI; 5University of Maryland School of Medicine, Baltimore, MD; 6Northwestern University, Chicago, IL; 7Columbia University Medical Center, New York, NY; 8UC Davis School of Medicine, Sacramento, CA; 9HRSA, Rockville, MD
Background: Blunt head trauma is a leading cause of
emergency department (ED) visits in children. Cranial
CT scans are often used to detect intracranial injuries,
but are also sensitive for unexpected non-traumatic findings that may be inconsequential or clinically important.
Objectives: To describe the prevalence and urgency of
incidental findings on CT in children evaluated in a
large, multi-center observational study of children with
blunt head trauma.
Methods: This was a planned secondary analysis of a
study of children with blunt head trauma. Radiologist
CT reports were reviewed. Reports were abstracted
and categorized into three groups of clinical urgency
based on an a priori list of diagnoses: Category
1) requires prompt evaluation/intervention, 2) requires
timely outpatient follow-up, 3) no specific follow-up
needed. Consensus was achieved on diagnosis and
urgency by a pediatric emergency medicine physician
and pediatric neurosurgeon.
Results: 43,904 patients were enrolled in the parent
study, of whom 16,137 (37%) received a cranial CT scan
in the ED. We identified incidental, non-traumatic findings on 830/16,137 (5%) CT scans. Of these, 125/830 (15%)
were reclassified as normal variants. Of the remaining
705, the most common non-traumatic findings were: ventricular abnormality (12%), arachnoid cyst (10%), and
increased extra-axial fluid (10%). Four percent were category 1, with hydrocephalus most common. Category 3
represented 70%, and category 2 represented 26%.
Conclusion: Incidental non-traumatic findings on cranial CT are present in 5% of children evaluated in the
ED for blunt head trauma. Most do not require specific
intervention. However, 30% (category 1 + 2) warranted
either prompt or timely subspecialty consultation. ED
providers should be alert to the possibility of incidental
findings on cranial CT obtained following trauma, and
engage mechanisms to ensure appropriate follow-up.
Funded by HRSA/MCHB (R40MC02461-01-00).
290
Evidence of Axonal Injury for Children
with Mild Traumatic Brain Injuries
Lynn Babcock1, Weihong Yuan1,
Nicole McClanahan1, Yingying Wang1,
Jeffrey Bazarian2, Shari Wade1
1Cincinnati Children’s Hospital Medical Center, Cincinnati, OH; 2University of Rochester, Rochester, NY
Background: Diffuse axonal injury (DAI) is hypothesized to be the underlying neuropathology in mild traumatic brain injury (mTBI). Diffusion tensor imaging
Table - Abstract 289: Findings by Urgency Category

Category 1 (N = 25): Hydrocephalus 12 (46%); Tumor/Mass 11 (42%); AVM 1 (4%); Cysticercosis 1 (4%)
Category 2 (N = 185): Arachnoid Cyst 71 (38%); Extra-Axial Fluid 67 (36%); Chiari 16 (9%); Other 31 (17%)
Category 3 (N = 495): Ventricular abnormality 83 (17%); Sinus Cyst 65 (13%); Skull Abnormality 53 (11%); Other 294 (59%)
(DTI) measures disruption of axonal integrity on the
basis of anisotropic diffusion properties. Findings on
DTI may relate to the injury, as well as the severity of
postconcussion syndrome (PCS) following mTBI.
Objectives: To examine acute anisotropic diffusion
properties based on DTI in youth with mTBI relative to
orthopedic controls and to examine associations
between white matter (WM) integrity and PCS
symptoms.
Methods: Interim analysis of a prospective case-control cohort involving 12 youth ages 11–16 years
with mTBI and 10 orthopedic controls requiring
extremity radiographs. Data collected in ED included
demographics, clinical information, and PCS symptoms measured by the postconcussion symptom scale.
Within 72 hours of injury, symptoms were re-assessed
and a 61-direction, diffusion weighted, spin-echo
imaging scan was performed on a 3T Philips scanner.
DTI images were analyzed using tract-based spatial
statistics. Fractional anisotropy (FA), mean diffusivity
(MD), axial diffusivity (AD), and radial diffusivity (RD) were measured.
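For reference, the standard definitions of these scalars in terms of the diffusion-tensor eigenvalues (background material; the abstract itself does not spell them out):

```latex
% Eigenvalues ordered \lambda_1 \ge \lambda_2 \ge \lambda_3; \bar\lambda = MD.
\[
\mathrm{MD} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}, \qquad
\mathrm{AD} = \lambda_1, \qquad
\mathrm{RD} = \frac{\lambda_2 + \lambda_3}{2},
\]
\[
\mathrm{FA} = \sqrt{\tfrac{3}{2}}\;
\frac{\sqrt{(\lambda_1 - \bar\lambda)^2 + (\lambda_2 - \bar\lambda)^2 + (\lambda_3 - \bar\lambda)^2}}
     {\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}.
\]
```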
Results: There were no group demographic differences
between mTBI cases and controls. Presenting symptoms within the mTBI group included GCS = 15 83%,
loss of consciousness 33%, amnesia 33%, post-traumatic seizure 8%, headache 83%, vomiting 33%, dizziness 42%, and confusion 42%. PCS symptoms were
greater in mTBI cases than in the controls at ED visit
(30.1 ± 17.0 vs. 15.5 ± 16.8, p < 0.06) and at the time of
scan (19.1 ± 12.9 vs. 5.7 ± 6.5, p < 0.01). The mTBI group
displayed decreased FA in cerebellum and increased
MD and AD in the cerebral WM relative to controls
(uncorrected p < 0.05). Increased FA in cerebral WM
was also observed in mTBI patients but the group difference was not significant. PCS symptoms at the time
of the scan were positively correlated with FA and
inversely correlated with RD in extensive cerebral WM
areas (p < 0.05, uncorrected). In addition, PCS symptoms in mTBI patients were also found to be inversely
correlated with MD, AD, and RD in cerebellum
(p < 0.05).
Conclusion: DTI detected axonal damage in youth with
mTBI which correlated with PCS symptoms. DTI performed acutely after injury may augment detection of
injury and help predict those with worse outcomes.
291
Knowledge Assessment of Sport-related Concussion among Parents of Children Aged 5–15 years Enrolled in Recreational Tackle Football
Carol Mannings, Colleen Kalynych, Madeline Joseph, Carmen Smotherman, Dale Kraemer
University of Florida, Jacksonville, FL

Background: Sports-related concussion among professional, collegiate, and more recently high school athletes has received much attention from the media and medical community. To our knowledge, there is a paucity of research in regard to sports-related concussion in younger athletes.
Objectives: The aim of this study was to evaluate parental knowledge of concussion in young children who participate in recreational tackle football.
Methods: Parents/legal guardians of children aged 5–15 years enrolled in recreational tackle football were asked to complete an anonymous questionnaire based on the CDC’s Heads Up: Concussion In Youth Sports quiz. Parents were asked about their level of agreement in regard to statements that represent definition, symptoms, and treatment of concussion.
Results: A total of 310 out of 369 parents voluntarily completed the questionnaire (84% response rate). Parent and child demographics are listed in Table 1. Ninety-four percent of parents believed their child had never suffered a concussion. However, when asked to agree or disagree with statements addressing various aspects of concussion, only 13% (n = 41) could correctly identify all seven statements. Most did not identify that a concussion is considered a mild traumatic brain injury and can be achieved from something other than a direct blow to the head. Race, sex, and zip code had no significant association with correctly answering statements. Education (0.24; p < 0.01) and number of years the child played (0.11; p < 0.05) had a small effect. Fifty-three percent of parents reported someone had discussed the definition of concussion with them and 58% the symptoms of concussion. See Table 2 for source of information to parents. No parent was able to classify all symptoms listed as correctly related or not related to concussion. However, identification of correct concussion definitions correlated with identification of correct symptoms (0.25; p < 0.05).
Conclusion: While most parents had received some education regarding concussion from a health care provider, important misconceptions remain among parents of young athletes regarding the definition, symptoms, and treatment of concussion. This study highlights the need for health care providers to increase educational efforts among parents of young athletes in regard to concussion.

Table 1 - Abstract 291: Demographics

Parent Demographics, N (%)
Race: White 133 (43); Black or African American 148 (48); Other 25 (8)
Education Level: High school or less 68 (22); Some college 117 (38); College graduate 124 (40)

Child Demographics, N (%)
Age (years): 5–7 45 (15); 8–10 156 (50); 11–15 109 (35)
Years child has played: This is the first year 103 (33); 1–2 85 (28); 3–4 75 (24); 5 or more 46 (15)
Table 2 - Abstract 291: Concussion-related information was discussed with me by someone

Source                         Definition, N = 165 (53%)    Symptoms, N = 179 (58%)
Coach                          38 (23)                      40 (22)
Doctor/health care provider    93 (56)                      95 (53)
Athletic trainer               23 (14)                      27 (15)
Friends/relative               18 (11)                      28 (16)

List of symptoms, only some of which are consistent with concussion: Headache, nausea, difficulty remembering things, ringing in the ears, concentration difficulty, increased irritability or emotional outburst, disorientation, sleeping difficulty, difficulty urinating, weakness in arms or legs, hearing voices, numbness in the arms or legs.

292
Young Children Have Difficulty Using the
Wong Baker Pain Faces Scale in an ED
Setting
Gregory Garra, Anna Domingo,
Henry C. Thode Jr, Adam J. Singer
Stony Brook University, Stony Brook, NY
Background: Pain is a multidimensional experience.
Fear and anxiety may bias pain reporting and interfere
with measuring pain. Few studies on pain severity reporting in children have evaluated discriminate validity.
Objectives: The objective of our study was to determine discriminate validity of the Wong Baker FACES
Pain Rating Scale (WBS). We hypothesized that children (3–6 years) would be able to appropriately use the
WBS for reporting pain.
Methods: Study Design: Prospective observational.
Setting: University-based, suburban pediatric ED. Subjects: Patients age 3–6 years. Measures: Research assistants recorded demographic variables along with
presence of pain (yes/no) and pain severity rating using
the six-category WBS. Patients also completed a 26-item Likert questionnaire assessing fears in medical settings: the Child Medical Fear Scale (CMFS), minimum
score 26, maximum 78. Analysis: Descriptive statistics.
Pearson’s correlation was used to measure agreement
between dichotomous and ordinal/categorical data.
Spearman’s correlations were used to measure agreement between WBS and CMFS.
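An illustrative sketch of the two agreement analyses; the ratings below are hypothetical:

```python
from scipy.stats import pearsonr, spearmanr

pain_present = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # reported pain, yes/no
wbs          = [4, 2, 0, 6, 2, 8, 0, 0, 10, 4]  # WBS faces (0-10 scale)
cmfs         = [40, 35, 30, 55, 33, 60, 28, 41, 48, 37]  # fear scores

r, p_r = pearsonr(pain_present, wbs)
rho, p_rho = spearmanr(wbs, cmfs)
print(f"Pearson r = {r:.2f} (p = {p_r:.3f}); "
      f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```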
Results: One hundred thirty-four children were
enrolled; 118 with pain, 16 without. There was an even
distribution of WBS ratings. Five children with no pain
utilized face categories reflective of pain. Likewise, 23
children (20%) with reported pain provided assessments
using the ‘‘no pain’’ face. The correlation between pain
presence and WBS reporting was poor (r = 0.38; 95%CI
0.23 to 0.52). Only 50% were able to complete the CMFS.
The median CMFS score was 41 (IQR 34–48). Correlation
between pain presence and CMFS score was moderate
(r = 0.54; 95%CI 0.35 to 0.69) although correlation
between CMFS and WBS was poor (ρ = 0.27).
Conclusion: Young children appear to have difficulty
reporting pain on the WBS. However, young children
do not appear to mistake fear for pain presence or misuse face categories for fear severity.
293
Timeliness and Effectiveness of Intranasal
Fentanyl Administration for Children
Ryan J. Scheper, Amy L. Drendel, Marc H.
Gorelick, Martha W. Stevens, Steven J.
Weisman, Raymond J. Hoffmann, Kim
Gejewski
Medical College of Wisconsin, Milwaukee, WI
Background: Inadequate treatment of children with
painful conditions is a significant problem in the emergency department (ED). Intranasal (IN) fentanyl is a
painless, rapid means to administer analgesia for moderate to severe pain that may be particularly useful for
the pediatric patient.
Objectives: Determine if IN fentanyl 1) improves the
timeliness of analgesic administration for children with
fractures and 2) improves the treatment of pain for children compared to usual treatment in the ED prior to its
utilization.
Methods: A retrospective observational study, before
and after the introduction of IN fentanyl on September
1, 2009, was performed. All children ages 0–18 years
with a fracture and an Emergency Severity Index triage
level 2 were identified. A random sample of children
evaluated during a 15-month period prior to initiation
of IN fentanyl (Group A) was compared to a consecutive sample of children administered IN fentanyl during
the first 15 months of IN fentanyl use (Group B).
A chart review identified demographic information, ED
analgesic used, time to analgesic, and pain scores.
Mann-Whitney and chi-square tests were used to test
for significant differences.
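A minimal Python sketch of these two comparisons, using hypothetical numbers rather than the study's data:

```python
# Hedged sketch: hypothetical times and counts, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

group_a_times = np.array([25, 31, 28, 40, 22, 35, 27])  # minutes to analgesic, usual care
group_b_times = np.array([12, 16, 15, 20, 11, 18, 14])  # minutes, IN fentanyl

u_stat, p_time = mannwhitneyu(group_a_times, group_b_times)

# 2x2 table: rows = group, columns = (>=2-point pain improvement, <2-point)
improvement = np.array([[155, 100],   # Group A (illustrative counts)
                        [103, 18]])   # Group B
chi2, p_improve, dof, expected = chi2_contingency(improvement)
print(p_time, p_improve)
```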
Results: 255 children (Group A) prior to, and 121 children (Group B) after initiation of IN fentanyl were
enrolled. Most children in Group A received morphine
(87.9%), oxycodone with acetaminophen (5.2%), or ibuprofen (3.6%). There was no difference in age, race/
ethnicity, sex, or initial pain score between the two
groups. Median time to analgesic administration was
28.2 minutes for Group A, compared to 15.0 minutes for
Group B (p < 0.0001). The median decrease in pain score
on a 10-point scale was 2 points for Group A, compared
to 3 points for Group B (p = 0.04). One hour after receiving an analgesic, only 60.83% of children in Group A
achieved a clinically relevant (2-point) improvement in
pain score, compared to 84.93% in Group B (p = 0.0006).
Conclusion: IN fentanyl was associated with a more
timely analgesic administration and more effective pain
relief for children with fractures compared to usual
care in the ED.
294
National Study of Emergency Department-Associated Deaths
Elaine Reno, Adit Ginde
University of Colorado, Denver, CO
Background: The ED centers on the stabilization and resuscitation of acutely ill and injured patients.
Deaths during and immediately following ED visits are
becoming increasingly common, and emergency physicians must be adept at identifying patients at risk for
death.
Objectives: To characterize the incidence of ED-associated deaths nationally and to compare characteristics of
patients who die during or immediately following ED
visits (within 3 days of admission) with those who are
admitted to the hospital and survive to discharge.
Methods: We analyzed adult participants of the 2006–
2009 National Hospital Ambulatory Medical Care Survey, an annual, nationally representative sample of all
U.S. ED visits. We grouped deaths as those who were
dead on arrival, died during the ED visit, and admitted
to the hospital and died prior to hospital discharge. We
analyzed the survey-weighted data to generate nationally representative estimates.
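As a sketch of how survey-weighted national estimates are typically produced from NHAMCS-style data: the rows below are hypothetical, and the weight column name (patwt) is an assumption based on NHAMCS public-use conventions, not the study's code.

```python
# Hedged sketch: toy visit records with survey weights.
import pandas as pd

visits = pd.DataFrame({
    "disposition": ["doa", "died_ed", "died_inpatient", "survived", "survived"],
    "patwt":       [31000, 42000, 95000, 310000, 280000],  # per-visit survey weight
})

# A national estimate for each group is the sum of weights, not the row count
national_estimates = visits.groupby("disposition")["patwt"].sum()
print(national_estimates)
```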
Results: Of the 124 million annual ED visits, 110,000
(95%CI, 78,000–140,000) patients/year were dead on
arrival, 140,000 (95%CI, 110,000–180,000) patients/year
died in the ED, and 350,000 (95%CI, 300,000–420,000)
patients/year died during hospitalization after an ED
visit. Of the 130,000 patients/year who died within
3 days of hospital admission, the majority (120,000)
were admitted to a critical care area. Patients who
died in the ED, compared to those who died during
hospital stay, were more likely to have been hypothermic (28% vs. 17%), bradycardic (29% vs. 6%), and
hypotensive (44% vs. 11%) on ED vital signs. Of note,
for patients who died during their ED stays, 71%
arrived normothermic (36–37.9°C), 20% had a normal
heart rate (60–89 beats/minute), 37% had a normal
range sBP (90–159 mmHg), 39% had a normal respiratory rate (12–20/minute), and 45% had a normal oxygen saturation (‡93%). Of note, only 73% of patients
who died during their ED visit were triaged as
category 1 (immediate).
Conclusion: Despite the large number of patients,
emergency physicians identified most patients that died
within 3 days of admission and triaged them to ICU
care. Many patients who died during the ED visit had
lower triage acuity and were without severe vital sign
abnormality at presentation; this finding was magnified
in those who were admitted and died during hospitalization.
295
Post-intubation Care In Mechanically
Ventilated Patients Boarding In The
Emergency Department
Rahul Bhat1, Munish Goyal1, Anu Bhooshan1,
Bill Frohna1, Jeff Dubin1, Daniel Moy1, Vikaas
Kataria1, Linn Lung1, Mihrye Mete2, David
Gaieski3
1Georgetown University Hospital/Washington Hospital Center, Washington, DC; 2Medstar Health Research Institute, Washington, DC; 3University of Pennsylvania, Philadelphia, PA
Background: Emergency department (ED) crowding
has led to increased length of stay for intubated
patients, leaving early post-intubation care to the emergency physician. The quality of post-intubation care in
the ED has not been extensively studied. We sought to
determine how frequently common post-intubation
interventions are performed while patients board in the
ED.
Objectives: We hypothesized that few intubated
patients receive all interventions while boarding in the
ED.
Methods: This is a retrospective chart review of all
patients intubated in the Washington Hospital Center
ED between 11/14/09 and 6/1/11. Patients were excluded
if they were in the ED for <2 hours post-intubation,
had emergent surgery within six hours post-intubation, were managed by the trauma team, had incomplete data, or had their status changed to ‘‘do not
resuscitate’’ (DNR) during their admission. Trained
research assistants blinded to the objectives reviewed
each chart to determine if the following interventions
were performed in the ED post-intubation: appropriate
initial tidal volume (6–10 cc/kg ideal body weight),
sedation given <30 minutes of intubation, oro/nasogastric tube (OGT) placement, chest x-ray (CXR), arterial
blood gas (ABG) sampling, and use of continuous end-tidal capnography (ETCO2). Additionally, ventilator
duration, ICU length of stay, mortality, and development of ventilator-associated pneumonia (VAP) were
recorded.
Results: 622 charts were reviewed, with 453 charts
excluded (158 in the ED for <2 hours post-intubation,
143 trauma patients, 112 DNR, 40 other reasons), leaving 169 patients in our cohort. Of the six post-intubation interventions, CXR was obtained most frequently
(96.4%), followed by OGT placement (84.0%), sedation
(83.4%), ABG sampling (76.9%), appropriate tidal volume (71.0%), and least frequently ETCO2 monitoring
(8.3%). The percentage of patients receiving all six interventions was 2.4%; 42.0% received five of the six.
Mean duration of ventilation and ICU length of stay
were 3.4 days, and 5.2 days respectively with a VAP
rate of 5.9% and a mortality rate of 8.9%.
Conclusion: In this single center study, few patients
received all six measured post-intubation interventions
while boarding in the ED.
296
Assessing Vitamin D Status In Sepsis And
Association With Systemic Inflammation
Justin D. Salciccioli1, Adit A. Ginde2, Amanda
Graver1, Tyler Giberson1, Cristal Cristia1,
Michael N. Cocchi1, Michael W. Donnino1
1BIDMC Center for Resuscitation Science, Boston, MA; 2University of Colorado School of Medicine, Denver, CO
Background: Recent evidence has suggested a potential role of vitamin D in modulating innate immune
function. However, the relationship between vitamin D
and systemic inflammation has yet to be assessed in
human sepsis.
Objectives: To determine vitamin D status in critically
ill patients with sepsis and the association between
25(OH)D level and systemic inflammation.
Methods: We performed a prospective study of
patients with sepsis at an urban tertiary care hospital
during the period from 10/2010 to 05/2011. Inclusion
criteria: 1. Adult (>18 years); 2. Presence of two or more
SIRS criteria; 3. Admission to ICU with sepsis. Patients
with a life-expectancy <24 hrs from arrival were
excluded. Patient demographics, co-morbid conditions,
vital signs and laboratory data, and in-hospital mortality were recorded. Serum 25(OH)D levels and inflammatory cytokines (TNF-α and IL-6) were measured with
previously described assays. Patients were categorized
by 25(OH)D level as follows: Deficient: <20 ng/mL;
Insufficient 20–29 ng/mL; Normal: >29 ng/mL. We used
simple descriptive statistics to describe the study population and multiple linear regression (adjustments for
age and baseline liver dysfunction) to assess the linear
association between 25(OH)D level and markers of
inflammation.
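A minimal sketch of such an adjusted linear model, assuming hypothetical data and column names (not the study's dataset or code):

```python
# Hedged sketch: hypothetical data frame; column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "vitd":  [12, 18, 25, 31, 22, 40, 15, 28],   # 25(OH)D, ng/mL
    "il6":   [95, 60, 40, 22, 55, 18, 80, 35],   # IL-6, pg/mL
    "age":   [70, 64, 58, 72, 80, 55, 67, 61],
    "liver": [1, 0, 0, 0, 1, 0, 0, 0],           # baseline liver dysfunction (0/1)
})

# Linear association of IL-6 with 25(OH)D, adjusted for age and liver dysfunction
fit = smf.ols("il6 ~ vitd + age + liver", data=df).fit()
print(fit.params["vitd"], fit.pvalues["vitd"])
```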
Results: 39 patients were enrolled. The median age was
68 years (IQR: 56–79) and 44% were female. The median
25(OH)D level was 25.2 ng/mL (IQR: 21.9–32.7). 15% of patients
expired in-hospital. 23/39 (59%) of patients were either
deficient or insufficient (Figure 1). 2/2 (100%) of patients
with baseline liver dysfunction were 25(OH)D deficient
and 5/6 (83%) of deaths were patients who had insufficient levels of 25(OH)D. There was an inverse association between 25(OH)D level and TNF-α (p = 0.03; Figure
2) and IL-6 (p = 0.04).
Conclusion: In this small cohort of critically ill adults with sepsis, there was a high prevalence of vitamin D deficiency or insufficiency. The majority (83%) of those who expired in-hospital were vitamin D insufficient. There was an inverse association between 25(OH)D levels and markers of inflammation. Whether vitamin D supplementation can attenuate systemic inflammation in critically ill patients with sepsis should be assessed in future investigations.

297
Antipyretic Use Does Not Increase
Mortality in Emergency Department
Patients with Severe Sepsis
Nicholas M. Mohr1, Brian M. Fuller2, Craig A.
McCammon3, Rebecca Bavolek2, Kevin
Cullison4, Matthew Dettmer2, Jacob Gadbaw5,
Elizabeth C. Hassebroek1, Sarah Kennedy2,
Nicholas Rathert2, Christine Taylor5
1University of Iowa Carver College of Medicine, Iowa City, IA; 2Washington University School of Medicine, St. Louis, MO; 3Barnes-Jewish Hospital, St. Louis, MO; 4Saint Louis University School of Medicine, St. Louis, MO; 5Washington University in St. Louis, St. Louis, MO
Background: Fever is common in the emergency
department (ED), and 90% of those diagnosed with
severe sepsis present with fever. Despite data suggesting that fever plays an important role in immunity,
human data conflict on the effect of antipyretics on
clinical outcomes in critically ill adults.
Objectives: To determine the effect of ED antipyretic
administration on 28-day in-hospital mortality in
patients with severe sepsis.
Methods: Single-center, retrospective observational
cohort study of 171 febrile severe sepsis patients presenting to an urban academic 90,000-visit ED between
June 2005 and June 2010. All ED patients meeting the
following criteria were included: age ≥ 18, temperature ≥ 38.3°C, suspected infection, and either systolic blood pressure ≤ 90 mmHg after a 30 mL/kg fluid bolus or lactate ≥ 4. Patients were excluded for a
history of cirrhosis or acetaminophen allergy. Antipyretics were defined as acetaminophen, ibuprofen, or
ketorolac.
Results: One hundred thirty-five (78.9%) patients were
treated with an antipyretic medication (89.4% acetaminophen). Intubated patients were less likely to receive
antipyretic therapy (51.9% vs. 84.0%, p < 0.01), but the
groups were otherwise well matched. Patients requiring
ED intubation (n = 27) had much higher in-hospital
mortality (51.9% vs. 7.6%, p < 0.01). Patients given an
antipyretic in the ED had lower mortality (11.9% vs.
25.0%, p < 0.05). When multivariable logistic regression
was used to account for APACHE-II, intubation status,
and fever magnitude, antipyretic therapy was not associated with mortality (adjusted OR 0.97, 95% CI 0.31–3.06,
p = 0.96).
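A minimal sketch of this kind of adjusted logistic model, on simulated data; the names, coefficients, and cohort are illustrative assumptions, not the study's:

```python
# Hedged sketch: simulated cohort; coefficients and names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 171
df = pd.DataFrame({
    "antipyretic": rng.integers(0, 2, n),
    "apache2":     rng.normal(20, 6, n),
    "intubated":   rng.integers(0, 2, n),
    "max_temp":    rng.normal(39.0, 0.5, n),
})
# Outcome driven by severity, not antipyretic use, in this toy model
lin = -6 + 0.15 * df["apache2"] + 1.2 * df["intubated"]
df["died"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

fit = smf.logit("died ~ antipyretic + apache2 + intubated + max_temp", data=df).fit(disp=0)
print(np.exp(fit.params["antipyretic"]))  # adjusted odds ratio for antipyretic therapy
```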
Conclusion: Although patients treated with antipyretic
therapy had lower 28-day in-hospital mortality, antipyretic therapy was not independently associated with
mortality in multivariable regression analysis. These
findings are hypothesis-generating for future clinical
trials, as the role of fever control has been largely unexplored in severe sepsis (Grant UL1 RR024992, NIH-NCRR).
298
Risk Factors for Unplanned Transfer to
Intensive Care Within 24 Hours of
Admission from the Emergency
Department in an Integrated Health Care
System
M. Kit Delgado1, Vincent Liu2, Jesse M. Pines3,
Patricia Kipnis2, Gabriel J. Escobar2
1Stanford University School of Medicine, Stanford, CA; 2Kaiser Permanente, Division of Research, Oakland, CA; 3George Washington University, Washington, DC
Background: ED patients admitted to hospital wards
who are subsequently transferred to the intensive care
unit (ICU) within 24 hours have higher mortality than
direct ICU admissions.
Objectives: Describe risk factors for unplanned transfer
to the ICU within 24 hours of ward arrival from the ED.
Methods: Retrospective cohort analysis of all ED non-ICU admissions (N = 178,315) to 14 U.S. community
hospitals from 2007–09. We tabulated patient demographics, clinical characteristics, and hospital volume
by the outcome of unplanned ICU transfer. We present
factors that were independently associated with
unplanned ICU transfer within 24 hours after adjusting
for patient and hospital differences in a multilevel
mixed-effects logistic regression model.
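One way to sketch such a model in Python is a hospital-level random-intercept logistic regression fitted by variational Bayes; the data below are simulated, and this is a simplified stand-in for the authors' multilevel model, not their code:

```python
# Hedged sketch: simulated admissions; not the study's variables or code.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "hospital": rng.integers(0, 14, n),   # 14 hospitals
    "sepsis":   rng.integers(0, 2, n),    # admitting diagnosis flag
    "night":    rng.integers(0, 2, n),    # ward arrival 11 PM-7 AM
})
lin = -3.8 + 0.95 * df["sepsis"] + 0.3 * df["night"]
df["icu_transfer"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

model = BinomialBayesMixedGLM.from_formula(
    "icu_transfer ~ sepsis + night",      # fixed effects
    {"hospital": "0 + C(hospital)"},      # random intercept per hospital
    df,
)
result = model.fit_vb()                   # variational Bayes fit
print(result.summary())
```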
Results: Of all ED non-ICU admissions, 4,252 (2.4%)
were transferred to the ICU within 24 hours. After
adjusting for patient and hospital differences, the top
five admitting diagnoses associated with unplanned
transfer were: sepsis (odds ratio [OR] 2.6; 95% CI 2.1–3.1), catastrophic conditions (OR 2.3; 95% CI 1.9–2.8),
pneumonia/acute respiratory infections (OR 1.6; 95% CI
1.4–1.8), acute myocardial infarction (AMI) (OR 1.6; 95%
CI 1.3–1.8), and chronic obstructive pulmonary disease
(COPD) (OR 1.5; 95% CI 1.3–1.7). Other factors associated with unplanned transfer included: male sex, Comorbidity Points Score (COPS) >145, Laboratory Acute
Physiology Score (LAPS) >7, and arriving on the ward
between 11 PM and 7 AM. Decreased risk of unplanned
transfer was found with admission to monitored transitional care units vs. non-monitored wards (OR 0.86;
95% CI 0.80–0.96) and admission to a high-volume vs.
low-volume hospital (OR 0.73; 95% CI 0.59–0.89).
Conclusion: ED patients admitted with respiratory conditions, sepsis, AMI, multiple comorbidities, and abnormal lab results are at higher risk for unplanned ICU
transfer and may benefit from better inpatient triage from
the ED, earlier intervention to prevent acute decompensation, or closer monitoring. More research is needed to
determine how intermediate care units, hospital volume,
time of day, and sex affect risk of unplanned ICU transfer.
299
Effect of Weight-Based Volume Loading on
the Inferior Vena Cava in Fasting Subjects:
A Randomized, Prospective Double-Blinded
Trial
Margaret R. Lewis1, Anthony J. Weekes1,
Zachary P. Kahler1, Donald E. Stader1, Dale P.
Quirke2, H. James Norton1, Courtney Almond1,
Dawn Middleton1, Vivek S. Tayal1
1Carolinas Medical Center, Charlotte, NC; 2Bay Area Emergency Physicians, Clearwater, FL
Background: Inferior vena cava ultrasound assessment
(IVC-US) has been proposed as a noninvasive method
of assessing volume status. Current literature is divided
on its ability to do so.
Objectives: Our primary hypothesis was that, in fasting
asymptomatic subjects, larger fluid boluses would lead
to proportional IVC-US changes. Our secondary endpoint was to determine inter-subject variation in IVC-US measurements.
Methods: The authors performed a prospective randomized double-blinded trial using fasting volunteers to
determine the inter-subject variation of baseline IVC-US and changes associated with different acute weight-based volume loads. Subjects with no history of cardiac
disease or hypertension fasted for 12 hours and were
then randomly assigned to receive a normal saline
bolus of 2 ml/kg, 10 ml/kg or 30 ml/kg over 30 minutes.
IVC-US was performed before and after each bolus.
Results: Forty-two fasting subjects were enrolled.
Mean (±SD) (in cm) baseline maximum IVC diameter
(IVC max) of the 42 subjects was 2.07 ± 0.57: range was
0.85 to 4.53 cm. Mean baseline minimum IVC diameter
(IVC min) was 1.4 ± 0.64. Mean baseline caval index
was 0.33 ± 0.15. The overall mean (±SD) (95% confidence interval [CI]) findings for pre- and post-fluid differences were: IVC max 0.18 ± 0.40 (CI 0.06, 0.31); IVC
min 0.31 ± 0.46 (CI 0.17, 0.46), and caval index
−0.09 ± 0.14 (CI −0.14, −0.05), and all were statistically
significant. The groups receiving 10 ml/kg and 30 ml/kg
had statistically significant changes in caval index;
ACADEMIC EMERGENCY MEDICINE • April 2012, Vol. 19, No. 4, Suppl. 1
however, the 30 ml/kg group had no significant change
in mean IVC diameter. One-way ANOVA differences
between the means of all groups were not statistically
different.
Conclusion: Overall, there were statistically significant
differences in mean IVC-US measurements before and
after fluid loading, but not between groups. Fasting
asymptomatic subjects had a wide inter-subject variation
in both baseline IVC-US measurements and fluid-related
changes. The wide differences within our 30 ml/kg group
may limit conclusions regarding proportionality.
300
An Early Look at Performance Variation on
Emergency Care Measures included in
Medicare’s Hospital Inpatient Value-Based
Purchasing Program
Megan McHugh, Jennifer Neimeyer,
Rahul K. Khare, Emilie Powell
Northwestern University, Chicago, IL
Background: Medicare’s Hospital Inpatient Value-Based Purchasing (VBP) program is the first national,
mandatory effort to reward and penalize hospitals for
their performance. In 2012, 1% of Medicare payments
to hospitals will be withheld and distributed back based
on achievement or improvement on twelve process
measures and eight patient satisfaction measures. Four
process measures are related to care provided in the
ED: fibrinolytic therapy within 30 minutes, percutaneous intervention within 90 minutes, blood cultures in
ED before initial antibiotic, and initial antibiotic selection for community acquired pneumonia.
Objectives: To identify variations in performance on
ED measures included in the VBP program. These early
data offer a first look at the types of hospitals where
improvement efforts may be best directed.
Methods: This was an exploratory, descriptive analysis.
We obtained 2008–2010 performance data from Hospital Compare for the population of hospitals that met the
criteria for the VBP program (i.e., paid under the Inpatient Prospective Payment System with adequate sample sizes). Data were merged with the 2009 American
Hospital Association Annual Survey to obtain hospital
characteristics. Two researchers independently calculated performance scores for each hospital following
the program guidelines published in the Federal Register, and compared with mean performance across all
hospitals. T-tests and ANOVA were used to detect differences in performance by hospital bed size, number of ED visits, teaching status, ownership, system affiliation, and region.
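A minimal sketch of these group comparisons, with hypothetical performance scores (not the study's data):

```python
# Hedged sketch: hypothetical hospital performance scores by ownership.
from scipy.stats import f_oneway, ttest_ind

for_profit = [0.78, 0.82, 0.75, 0.80, 0.77]
public     = [0.52, 0.49, 0.60, 0.55, 0.51]
nonprofit  = [0.65, 0.70, 0.62, 0.68, 0.66]

f_stat, p_anova = f_oneway(for_profit, public, nonprofit)  # across ownership types
t_stat, p_pair = ttest_ind(for_profit, public)             # two-group contrast
print(p_anova, p_pair)
```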
Results: 3,030 hospitals qualified for the VBP program.
There were significant differences in performance on
ED measures by ownership (P < 0.0001) and region
(P = 0.0002). Scores on ED process measures were
highest at for-profit hospitals (27% above average) and
hospitals in the south (5% above average), and lowest
at public hospitals (16% below average) and hospitals
in the northeast (8% below average).
Conclusion: There was considerable variation in performance on the ED measures included in the VBP program by hospital ownership and region. ED directors may come under increasing pressure to improve scores in order to reduce potential financial losses under the program. Our data provide early information on the types of hospitals with the greatest opportunity for improvement.

301

Do Improvements In Emergency Department Operations Influence The Patient Experience? Impacts Of Large Scale Implementation Of Overcapacity Protocols On Perceptions Of Crowding And Overall Ratings Of Care

Timothy Cooke1, Eddy Lang2, Christian Schmid3, Grant Innes2, Nancy Guebert3, Brian Holroyd4, Brian Rowe4, Andrew McRae2, John Cowell1

1Health Quality Council of Alberta, Calgary, AB, Canada; 2University of Calgary, Calgary, AB, Canada; 3Alberta Health Services, Calgary, AB, Canada; 4University of Alberta, Edmonton, AB, Canada
Background: In December 2010, the province-wide
health authority launched a series of measures, known
as the Overcapacity Protocols (OCP), to address crowding in Alberta EDs. Its effect on the patient experience
was evaluated as changes in throughput and output can
compromise components of care ranging from perceptions of crowding to communication and privacy.
Objectives: To determine if OCP and its resulting effect
on ED crowding influenced how patients perceived
their care. We hypothesized that OCP would result in
improved perceptions of ED crowding.
Methods: Design/Setting - An independent agency mandated by the government collected and analyzed ED
patient experience data using a comprehensive, validated
multidimensional instrument and a random periodic
sampling methodology of all ED patients. A prospective
pre-post experimental study design was employed in the
eight community and tertiary care hospitals most
affected by crowding. Two 5.5-month study periods were evaluated (Pre: 28/06/2010–12/12/2010; Post: 13/12/2010–29/05/2011). Outcomes - The primary outcome was patient
perception of wait times and crowding reported as a
composite mean score (0–100) from six survey items with
higher scores representing better ratings. The overall
rating of care by ED patients (composite score) and other
dimensions of care were collected as secondary outcomes. All outcomes were compared using chi-square
and two-tailed Student’s t-tests.
Results: A total of 3774 surveys were completed in
both the pre-OCP and post-OCP study periods representing a response rate of 45%. The composite for perceived wait times and crowding improved from 61.7 to
65.3 in Calgary and from 59.8 to 64.4 in Edmonton
(P < 0.001) with some variation among specific sites.
The overall composite score for care was 75.6 for Calgary patients and 74.1 for Edmonton patients in the pre-OCP phase and did not change significantly in the post-OCP period (77.3 and 76.1, respectively; P = NS). The global rating also
remained unchanged following the intervention for
both Calgary and Edmonton zones. Other dimensions
of care remained unchanged. (See tables.)
Table 1 - Abstract 301: Specific items: patient experience of wait times

Specific question item | Weighted N | Pre-OCP Calgary (%) | Post-OCP Calgary (%) | p-value | Pre-OCP Edmonton (%) | Post-OCP Edmonton (%) | p-value
Proportion who found the waiting room extremely or very crowded | 3696 | 21.7 | 16.1 | .015 | 36.8 | 28.8 | <0.001
Proportion who could not find a comfortable place to sit in the waiting room | 3215 | 21.4 | 21.0 | NS | 27.8 | 24.0 | .04
Proportion who considered leaving | 4268 | 24.0 | 17.6 | .003 | 25.5 | 21.8 | .02
Proportion who reported waiting more than 15 minutes to speak with the triage nurse | 3738 | 36.8 | 37.5 | NS | 34.8 | 31.1 | .05
Proportion who reported waiting longer than 2 hours to see a physician | 4070 | 41.6 | 29.5 | <0.001 | 40.4 | 33.8 | <0.001
Proportion whose emergency department visit was reported as longer than 4 hours | 4103 | 62.2 | 57.0 | .055 | 60.2 | 54.1 | 0.001
Table 2 - Abstract 301: Patient Experience Domain - Composite Scores

Domain | N | Pre-OCP Calgary | Post-OCP Calgary | Sig (2-tailed) | Pre-OCP Edmonton | Post-OCP Edmonton | Sig (2-tailed)
Staff care composite | 3761 | 79.3 | 80.5 | NS | 77.4 | 78.1 | NS
Wait and crowding composite | 3726 | 61.7 | 65.3 | <0.001 | 59.8 | 64.4 | <0.001
Wait time communication composite | 3572 | 52.9 | 52.7 | NS | 43.8 | 46.2 | NS
Discharge information and concerns | 2774 | 51.9 | 55.5 | NS | 52.6 | 53.8 | NS
Respect composite without LWBS | 3767 | 83.5 | 85.8 | 0.018 | 84.7 | 85.3 | NS
Medication communication composite | 1275 | 80.3 | 76.1 | NS | 77.4 | 76.4 | NS
Facility cleanliness composite | 3628 | 82.5 | 82.6 | NS | 76.0 | 78.5 | .008
Privacy composite | 3722 | 82.7 | 84.0 | NS | 80.3 | 84.3 | <0.001
Pain management composite | 2299 | 63.0 | 65.2 | NS | 59.2 | 63.1 | NS
Conclusion: Improving ED crowding results in significant, albeit modest, improvements in patients’ perceptions of crowding in busy urban hospital EDs. Site-level variability in these effects should be explored further.
302
Cost Savings Associated with Use of
Observation Units as Compared to 1-day
Inpatient Admission in Massachusetts
Philip Anderson, Leah S. Honigman,
Marie-France Petchy, Laura Burke, Shiva
Gautam, Peter Smulowitz
Beth Israel Deaconess Medical Center/Harvard
Medical School, Boston, MA
Background: ED observation units are increasingly utilized nationwide and allow continued medical care for
patients who are unable to be safely discharged but
who may not require a full hospital admission. In addition to providing effective care, observation units have
been tied to decreased ED crowding and lower rates of
diversion. Additionally, ED observation units have the
potential economic benefit of avoiding the high cost of
hospital admissions. Interventions to reduce admission
are vital to cost savings in health care; however, the cost savings associated with preferential use of observation units have not been specifically described.
Objectives: To determine the cost savings associated
with using ED observation stays as compared to a
1-day inpatient admission in the state of Massachusetts.
Methods: Using the Massachusetts Division of Health
Care Finance and Policy Acute Hospital Case Mix Databases, we analyzed patient visits in Fiscal Year 2008
from 65 hospitals for ED observation stays and 1-day
length-of-stay (LOS) inpatient admissions from the ED
as well as the total charges for each category of visit.
Within each hospital, we first calculated the average
charge for observation visits. The cost savings was then
calculated as the difference between the calculated
charges for 1-day LOS and the charges that would have
been accrued if these additional admissions had instead
been ED observation status. We determined total cost
savings, mean cost savings per hospital, and average
savings per patient with a 95%CI constructed around
the difference in savings.
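The savings arithmetic described here can be sketched as follows, with made-up charges and counts (the column names are assumptions for illustration):

```python
# Hedged sketch: per-hospital savings arithmetic with hypothetical charges.
import pandas as pd

hospitals = pd.DataFrame({
    "n_one_day_admits":   [900, 1200, 650],
    "avg_one_day_charge": [5200.0, 4800.0, 5600.0],   # mean charge, 1-day LOS admission
    "avg_obs_charge":     [4900.0, 4550.0, 5150.0],   # mean charge, ED observation stay
})

# Savings if each 1-day admission had instead been an observation stay
hospitals["savings"] = hospitals["n_one_day_admits"] * (
    hospitals["avg_one_day_charge"] - hospitals["avg_obs_charge"]
)
print(hospitals["savings"].sum())   # total savings
print(hospitals["savings"].mean())  # mean savings per hospital
```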
Results: A total of 103,150 ED observation stays and
67,352 1-day LOS inpatient admissions were examined.
The mean cost savings per hospital in Massachusetts
was $354,372 (95%CI, -$194,015 to $902,759), and the
total estimated savings for all 65 hospitals was $23
million (p = 0.07, paired t-test). This translated to an
average savings per patient of $289 (95%CI -$54 to
$633) for an observation stay as opposed to 1-day
admission.
Conclusion: Despite the range of potential savings
between hospitals in Massachusetts, significant cost
savings might be realized if patients who only require a
short-stay admission are instead placed in ED observation units. These results highlight the opportunity for
EDs to utilize observation units to their full capacity to
reduce overall health spending.
303
National Study of Non-Urgent Emergency
Department Visits
Leah S. Honigman1, Jennifer L. Wiler2,
Sean Rooks2, Adit A. Ginde2
1Beth Israel Deaconess Medical Center/Harvard Medical School, Boston, MA; 2University of Colorado School of Medicine, Denver, CO
Background: Policymakers have suggested reducing
‘‘unnecessary’’ ED visits as a way to generate significant cost savings for the U.S. health care system.
Although what constitutes an unnecessary ED visit is
ill-defined, visits classified as non-urgent at triage by
the emergency severity index (ESI) classification system
are often considered unnecessary.
Objectives: To compare resource utilization of ED visits characterized as non-urgent at triage to immediate,
emergent, or urgent (IEU) visits.
Methods: We performed a retrospective, cross-sectional analysis of the 2006–2009 National Hospital
Ambulatory Medical Care Survey. Visits were categorized by the assigned ESI five-level triage acuity score
representing immediate, emergent, urgent, semi-urgent,
or non-urgent. Within each triage categorization, clinical and hospital characteristics were analyzed.
Results: In 2006–2009, 10.1% (95%CI 9.2–11.2) of U.S.
ED visits were categorized as non-urgent. Most (87.8%,
95%CI 86.3–89.2) non-urgent visits had some diagnostic
testing or treatment in the ED. Although more common in IEU visits (52.9%, 95%CI 51.6–54.2), imaging
was also common in non-urgent visits (29.8%, 95%CI
27.8–31.8), with 7.3% (95%CI 6.2–8.6) of non-urgent visits requiring cross-sectional imaging. Similarly, procedures were performed more frequently in IEU (56.3%,
95%CI 53.5–59.0) compared to non-urgent (34.1%,
95%CI 31.8–36.4), while medication administration was
similar between the two groups (80.6%, 95%CI 79.5–
81.7 vs. 76.3%, 95% CI 74.7–77.8, respectively). The rate
of hospital admission was 4.0% (95%CI 3.3–4.8) for
non-urgent visits compared to 19.8% (95%CI 18.4–21.3)
for IEU visits. The proportion of visits requiring critical care, operating room, or catheterization lab intervention was 0.5%
(95%CI 0.3–0.6) for non-urgent vs. 3.4% (95%CI 3.1–
3.8) for IEU.
Conclusion: Most non-urgent ED patients required
some diagnostic or therapeutic interventions during
their visits. Relatively low, but not insignificant, proportions of non-urgent ED patients had advanced imaging
or were admitted to the hospital, sometimes to critical
care settings. These data call into question non-urgent
ED visits being categorized as ‘‘unnecessary,’’ particularly in the setting of limited access to primary care for
acute illness or injury.
304
‘If My ER Doesn’t Close, But Someone
Else’s Does, Am I At Higher Risk Of
Dying?’ An Analysis Of Cardiac Patients
And Access To Care
Renee Hsia1, Tanja Srebotnjak2, Judy Maselli1
1University of California San Francisco, San Francisco, CA; 2Ecologic Institute, San Mateo, CA
Background: Between 1990–2009, the number of nonrural emergency departments (EDs) dropped by 27%,
contributing to the national crisis of supply and
demand for ED services. Negative health outcomes
have been documented in patients who experience ED
closure directly, perhaps because they must travel farther to reach emergency care.
Objectives: We seek to determine if patients who experience ED closure indirectly - those who have a closure
in their vicinity but do not need to travel farther - have
higher inpatient mortality rates due to AMI than those
patients without closures in their communities.
Methods: We performed a retrospective study using all
admissions from non-federal hospitals in California for
acute myocardial infarction (AMI) between 1999–2008.
We compared in-patient mortality from AMI for
patients who lived within either 2.5 miles
or 5 miles of a closure but did not need to travel farther
to the nearest ED with those who did not. We used
patient-level data from the California Office of Statewide Health Planning and Development (OSHPD) patient discharge database, and locations of patient
residence and hospitals were geo-coded to determine
any changes in distance to the nearest ED. We applied
a generalized linear mixed effects model framework to
estimate a patient’s likelihood to die in the hospital of
AMI as a function of being affected by a neighborhood
closure event.
Results: Between 1998–2008, there were 352,064
patients with AMI and 29 ED closures in California. Of
the AMI patients, 5868 (1.7%) experienced an ED closure within a 2.5-mile radius and had 15% higher odds
(OR 1.15, 95% CI 1.04, 1.96) of mortality compared with
those who did not. Results of the 5-mile radii analyses
were similar (OR 1.13, 95% CI 1.03, 1.23).
Conclusion: Our findings suggest that patients indirectly experiencing ED closure do face poor health outcomes. These results should be factored into closure
decisions and regionalization plans. (Originally submitted as a ‘‘late-breaker.’’)
305
Emergency Department Visit Rates after
Common Inpatient Procedures for
Medicare Beneficiaries
Keith E. Kocher, Justin B. Dimick
University of Michigan, Ann Arbor, MI
Background: Fragmentation of care has been recognized as a problem in the US health care system. However, little is known about ED utilization after
hospitalization, a potential marker of poor outpatient
care coordination after discharge, particularly for
common inpatient-based procedures.
Objectives: To determine the frequency and variability
in ED visits after common inpatient procedures, how
often they result in readmission, and related payments.
Methods: Using national Medicare data for 2005–2007,
we examined ED visits within 30 days of hospital discharge after six common inpatient procedures: percutaneous coronary intervention, coronary artery bypass
grafting (CABG), elective abdominal aortic aneurysm
repair, back surgery, hip fracture repair, and colectomy. We categorized hospitals into risk-adjusted quintiles based on the frequency of ED visits after the index
hospitalization. We report visits by primary diagnosis
ICD-9 codes and rates of readmission. We also assessed
payments related to these ED visits.
Results: Overall, the highest quintile of hospitals had
30-day ED visit rates that ranged from a low of 17.8%
with an associated 7.3% readmission rate (back surgery) to a high of 27.8% with an associated 13.6% readmission rate (CABG). The greatest variability, more than 3-fold, was found among patients undergoing colectomy, in which the worst-performing hospitals saw 24.1% of their patients experience an ED visit within 30 days while the best-performing hospitals saw 7.4%.
Average total payments for the 30-day window from
initial discharge across all surgical cohorts varied from
$18,912 for patients discharged without a subsequent ED visit; $20,061 for those experiencing an ED visit(s);
$38,762 for those readmitted through the ED; and
$33,632 for those readmitted from another source. If all
patients who did not require readmission also did not
incur an ED visit within the 30-day window, this would
represent a potential cost savings of $125 million.
Conclusion: Among elderly Medicare recipients there
was significant variability between hospitals for 30-day
ED visits after six common inpatient procedures. The
ED visit may be a marker of poor care coordination in
the immediate discharge period. This presents an
opportunity to improve post-procedure outpatient care
coordination which may save costs related to preventable ED visits and subsequent readmissions.
306
Does Pharmacist Review of Medication
Orders Delay Medication Administration in
the Emergency Department?
James P. Killeen, Theodore C. Chan,
Gary M. Vilke, Sally Rafie, Ronald Dunlay,
Edward M. Castillo
University of California, San Diego, San Diego,
CA
Background: Despite the recommendation of The Joint
Commission (TJC), prospective pharmacist review of
medications administered in the ED remains controversial. Proponents believe pharmacist review reduces
medication errors, whereas others are concerned about
potential delays in medication delivery.
Objectives: We sought to assess the effect of pharmacist medication review on ED patient care, in particular
time from physician order to medication administration
for the patient (order-to-med time).
Methods: We conducted a multi-center, before-after
study in two EDs (urban academic teaching hospital
and suburban community hospital, combined census of
61,000) after implementation of the electronic prospective pharmacy review system (PRS). The system allowed
a pharmacist to review all ED medication orders electronically at the time of physician order and either
approve or alter the order. We studied a 5-month time
period before implementation of the system (pre-PRS,
7/1/10–11/30/10) and after implementation (post-PRS, 7/
1/11-11/30/11). We collected data on all ED medication
orders including dose, route, class, pharmacist review
action, time of physician order, and time of medication
administration. Differences in order-to-med times between the pre- and post-PRS study periods were
compared using a Mann-Whitney U test. Median times
and interquartile ranges (25, 75) are reported.
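A minimal sketch of the median/IQR summary and Mann-Whitney comparison, on hypothetical times (not the study's data):

```python
# Hedged sketch: hypothetical order-to-med times in minutes.
import numpy as np
from scipy.stats import mannwhitneyu

pre_prs  = np.array([8, 12, 18, 25, 35, 40, 9, 17, 22, 30])
post_prs = np.array([9, 14, 19, 28, 36, 44, 11, 21, 24, 33])

for label, times in (("pre-PRS", pre_prs), ("post-PRS", post_prs)):
    q25, median, q75 = np.percentile(times, [25, 50, 75])
    print(f"{label}: median {median} (IQR {q25}, {q75})")

u_stat, p = mannwhitneyu(pre_prs, post_prs)
print(p)
```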
Results: In the two study EDs, there were 17,428
patients seen in the ED resulting in 58,097 medication
orders during the pre-PRS period, and 18,862 ED
patients resulting in 65,968 medication orders in the
post-PRS period. Overall median order-to-med times
were minimally, but statistically, longer for the post-PRS period (19 minutes; IQR 9, 35) compared with the pre-PRS period (18 minutes; IQR 8, 35). When stratified by route, a similar pattern was seen for PO (17 [10, 31] vs 16 [7, 29])
and IM/SQ (18 [10, 33] vs 16 [8, 33]) routes, but no difference was seen for IV routes (19 [9, 37] vs 19 [9, 39],
p = 0.127) post- and pre-PRS respectively. Topical medications demonstrated a larger magnitude difference of
23 [10, 52] vs 13 [4, 28] minutes, post- vs pre-PRS
respectively.
Conclusion: In this multicenter ED study, an electronic
prospective pharmacy review system for medications
was associated with a minimal, but significant increase
in time from physician order to medication delivery.
307
Empiric Antibiotic Prescribing Practices
after the Introduction of a Computerized
Order Entry System in an Adult Emergency
Department
Sheeja Thomas, Ian Schwartz, Jeffrey Topal,
Vinnita Sinha, Caroline Pace, Charles R. Wira III
Yale University School of Medicine, New Haven,
CT
Background: Antibiotics for pneumonia are frequently
prescribed in the ED. Selection does not always comply
with existing guidelines. Incorporating computerized
decision support systems in the clinical arena may
bolster compliance to prescribing guidelines.
Objectives: The objective of this study was to assess
whether compliance to institutional empiric antibiotic
guidelines for pneumonia could be improved through
implementation of a computerized order entry system.
Methods: This is a prospective and consecutive pre- and post-interventional study evaluating empiric antibiotic prescribing patterns in a high-volume, academic
ED after the installation of an electronic order entry
screen for non-critically ill ED patients admitted with
pneumonia. Appropriateness of regimens was based on
adherence to Infectious Disease Society of America
(IDSA) and hospital guidelines. Established a priori, the
pre-intervention (PRE) enrollment period was from
September 1, 2009 to October 21, 2009. The intervention was launched on September 21, 2010 with a subsequent educational and transitional period. The
post-intervention (POST) enrollment period was from
February 1, 2011 to March 6, 2011.
Results: 205 patients were identified (PRE, n = 100;
POST, n = 105). Demographic variables in the PRE and
POST groups respectively were age of 70.1 + 20.2 vs.
67.5 + 18.8 (P = 0.35), with 65% and 44.7% being female
(P = 0.005). In the PRE group, 59% (59 of 100) received
appropriate antibiotics by ED providers. In the POST
group, 64.8% (68 of 105) received appropriate antibiotics (P = 0.47). Subgroup analysis revealed a significant
increase in ED provider compliance with hospital-acquired pneumonia guidelines from 5.1% (2/39) to
31.8% (14/44) (P = 0.002) after the implementation of the
computerized order entry screens.
Conclusion: The institution of a computerized physician order entry system did not change overall empiric
antibiotic prescribing compliance for patients admitted
from the ED with pneumonia. However, there was significantly higher compliance among POST group patients who were suspected of having hospital-acquired pneumonia. Implementation of a computerized order entry
system is feasible, and may improve provider compliance with more complicated treatment guidelines.
308
Factors Associated with Left Before
Treatment Complete Rates Differ
Depending on ED Patient Volume
Daniel A. Handel1, John R. Woods2,
James Augustine3, Charles M. Shufflebarger2
1Oregon Health & Science University School of Medicine, Portland, OR; 2Indiana University School of Medicine, Indianapolis, IN; 3EMP, Cincinnati, OH
Background: The proportion of patients leaving the
emergency department (ED) before treatment is
complete (LBTC) is known to vary as a function of ED
patient volume; high-volume EDs have generally higher
LBTC rates.
Objectives: The objective of this study was to look at
factors associated with higher rates of LBTC across a
large, national sample. We hypothesized that explanatory variables associated with LBTC rate would differ
depending on ED patient volume.
Methods: EDs (n = 161) from the 2010 national ED
Benchmarking Alliance database were grouped by
annual patient volume. Within each volume category,
separate linear regressions were performed to identify
factors that help explain differences in LBTC rates.
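A minimal sketch of fitting separate regressions within volume categories, on simulated ED-level data (variable names are illustrative assumptions):

```python
# Hedged sketch: simulated ED-level data; one regression per volume group.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 161
eds = pd.DataFrame({
    "volume_cat": rng.choice(["<20K", "20-39K", "40-59K", "60-79K", ">80K"], n),
    "median_los": rng.normal(180, 40, n),      # minutes
    "admit_pct":  rng.uniform(0.10, 0.40, n),  # proportion admitted
    "lbtc_rate":  rng.uniform(0.00, 0.08, n),
})

for category, group in eds.groupby("volume_cat"):
    fit = smf.ols("lbtc_rate ~ median_los + admit_pct", data=group).fit()
    print(category, round(fit.rsquared, 2))
```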
Results: ED metrics that were significantly associated
with LBTCs varied across ED patient-volume categories (Table). For EDs seeing less than 20K patients
annually, the percentage of EMS arrivals admitted to
the hospital and ED square footage were both weakly
associated with LBTCs (p = 0.09). For EDs seeing at
least 20K–39K patients, median ED length of stay
(LOS), percent of patients admitted to hospital through
the ED, percent of EMS arrivals admitted to hospital,
and percent of pediatric patients were all positively
associated, while percent of patients admitted to the
hospital was negatively associated with LBTCs. For
EDs seeing 40K–59K, median LOS and percent of
x-rays performed were positively associated, while
percent of EKGs performed was negatively associated
with LBTCs. For EDs seeing 60K–79K, percent of
patients admitted to the hospital through the ED was
negatively associated and percent of EKGs performed
was positively associated with LBTCs. For EDs with
volume greater than 80K, none of the selected variables were associated with LBTC.
Conclusion: ED factors that help explain high
LBTC rates differ depending on the size of an ED.
Interventions attempting to improve LBTC rates by
modifying ED structure or process will need to consider baseline ED volume as a potential moderating
influence.
Table - Abstract 308: Coefficients from regression analyses to predict variation in LBTC rates. Separate regressions were performed for categories of EDs grouped by annual patient volume (p-values shown only where reported).

Predictor Variable | <20K (n = 25) | 20–39K (n = 79) | 40–59K (n = 47) | 60–79K (n = 27) | >80K (n = 17)
Constant | 0.01238 | −0.05078 | −0.03475 | −0.00062 | −0.01270
Median LOS | −0.00009 | 0.00023 (p = 0.0002) | 0.00028 (p = .05) | −0.00004 | −0.00057
LOS (Patients released) | 0.00029 | −0.00008 | 0.00001 | 0.00027 | 0.00083
LOS (Patients admitted) | −0.00004 | 0.00002 | 0.00003 | 0.00004 | 0.00015
Hospital admits thru the ED (%) | −0.01318 | 0.02178 (p = 0.05) | 0.01451 | −0.09562 (p = 0.03) | −0.03242
Patients admitted to hospital (%) | −0.05402 | −0.08750 (p = 0.0005) | −0.06167 | −0.09336 | 0.18719
Patients transferred (%) | 0.04529 | 0.12118 | −0.21078 | −0.66502 | −0.83174
High CPT acuity (%) | −0.01008 | 0.01701 | 0.00740 | 0.02750 | 0.01789
EMS arrivals (%) | 0.04739 | 0.02291 | −0.08348 | 0.05302 | −0.37387
EMS arrivals (admitted %) | 0.05882 (p = 0.09) | 0.02192 (p < 0.05) | −0.00522 | 0.01902 | −0.10471
Pediatric patients (%) | −0.01061 | 0.02829 (p < .05) | −0.02314 | 0.05935 | 0.01148
EKGs per 100 patients | −0.00042 | 0.00003 | −0.00084 (p = .04) | 0.00141 (p = 0.03) | 0.00230
X-rays per 100 patients | −0.00009 | 0.00014 | 0.00064 (p = .03) | −0.00043 | −0.00067
ED square footage | −0.00000 (p = 0.09) | −0.00000 | 0.00000 | 0.00000 | 0.00000
Variance explained (R2) | 0.68 | 0.68 | 0.49 | 0.80 | —
309
Environmental Contamination in the
Emergency Department - A Non-toxic
Alternative to Surface Cleaning
Stephanie Benjamin, Ashlee Edgell,
Allie Thompson, Neil Batra, Victor Blanco,
Christopher Lindsell, Andra L. Blomkalns
University of Cincinnati, Cincinnati, OH
Background: Work environment surface contamination plays a central role in the transmission of many
health care acquired pathogens such as MRSA and
VRE. Current federal guidelines for hospitals recommend the use of quaternary ammonium compounds
(QUATs) for surface cleaning. These agents have limitations in the range of pathogens against which they are effective, as well as in toxicity, corrosiveness, and time needed for effectiveness.
No data exist comparing these sterilants in a working
ED setting where varied surfaces need to be cleaned
quickly and effectively.
Objectives: Our study sought to compare bacterial
growth of samples taken from surfaces after use of a
common approved QUAT compound and a virtually
non-toxic, commercially available solution containing
elemental silver (0.02%), hydrogen peroxide (15%), and
peroxyacetic acid (20%) (SHP) in a working ED. We
hypothesized that, based on controlled laboratory data
available, SHP compound would be more effective on
surfaces in an active urban ED.
Methods: We cleaned and then sampled three types of
surfaces in the ED (suture cart, wooden railing, and the
floor) during midday hours one minute after application
of tap water, QUAT, and SHP and then again at
24 hours without additional cleaning. Conventional
environmental surface surveillance RODAC media
plates were used for growth assessment. Images of
bacterial growth were quantified at 24 and 48 hours.
Standard cleaning procedures by hospital staff were
maintained as usual.
Results: SHP was superior to control and QUAT one
minute after application on all three surfaces. QUAT
and water had 10x and 40x more bacterial growth than
the surface cleaned with SHP, respectively. At 24 hours, the SHP-cleaned area again yielded fewer colonies from the wooden railing: QUAT produced 4x and water 5x more bacteria than SHP. 24-hour cultures
from the cart and floor had confluent growth and could
not be quantified.
Conclusion: SHP outperforms QUAT in sterilizing surfaces after a one-minute application. SHP, being non-toxic and non-corrosive, may be a superior agent for surfaces in the demanding ED setting. Further studies should examine sporicidal and virucidal properties in a similar environment.
310
Emergency Department Extended Waiting
Room Times Remain a Barrier to the
Opportunity for High Patient Satisfaction
Scores
John B. Addison, Scott P. Krall
Christus Spohn Memorial Hospital, Corpus
Christi, TX
Background: Hospitals subject to the Inpatient Prospective Payment System (IPPS) submit HCAHPS data
to CMS for IPPS annual payment update. IPPS hospitals
that fail to publicly report may have their annual payment update reduced. Improving and maintaining
patient satisfaction has become a prominent focus for
hospital administration.
Objectives: Evaluate the effect on patient satisfaction
of increasing waiting room times and physician evaluation times.
Methods: Emergency department flow metrics were
collected on a daily basis as well as average daily
patient satisfaction scores. The data were from July
2010 through February 2011, in a 44,000 census urban
hospital. The data were divided into equal intervals.
The arrival to room time was divided by 15 minute
intervals up to 135 minutes with the last group being
greater than 136 minutes. The physician evaluation
times were divided into 20 minute intervals, up to 110,
the last group greater than 111 with 46 days in the
group. Data were analyzed using means and standard
deviations, and well as ANOVA for comparison
between groups.
Table 1 - Abstract 310: Effect of Waiting Room Time

Minutes | Overall Visit Satisfaction | Provider Satisfaction
0–15 | 88.4 | 94
16–30 | 83 | 84
31–45 | 86 | 84
46–60 | 79 | 77
61–75 | 84 | 88
76–90 | 80 | 82
91–105 | 84 | 88
106–120 | 80 | 84
121–135 | 75 | 74
Results: The overall satisfaction score for the outpatient emergency visit was higher when the patient was placed in a room within 15 minutes of arrival (88.4, standard deviation 5.9); analysis of variance between the groups for the means of each interval had p = 0.13 (see Table 1). Total satisfaction with the visit, as well as satisfaction with the provider, dropped when the evaluation extended over 110 minutes, but the difference was not statistically significant on ANOVA analysis (see Table 2 for means).
Conclusion: Once a patient’s time in the waiting room
extends beyond 15 minutes, you have lost a significant
opportunity for patient satisfaction; once they have
been in the waiting room for over 120 minutes, you are
also much more likely to receive a poor score. Physician evaluation time scores are much more consistent, but as evaluation times extended beyond a total of 110 minutes, a downward trend in satisfaction scores emerged.
Table 2 - Abstract 310: Effect of Physician Evaluation Time

Minutes | Overall Visit Satisfaction | Satisfaction with Provider
51–70 | 82 | 84
71–90 | 83 | 85
91–110 | 83 | 83
>110 | 78 | 79

311
Pain Management Practices Vary
Significantly at Three Emergency
Departments In A Single Health Care
System
Melissa L. McCarthy1, Ru Ding1, Scott L.
Zeger2, Melinda J. Ortmann2, Donald M.
Steinwachs2, Julius C. Pham2, Rodica Retezar2,
Walter Atha2, Edward S. Bessman2, Sara C.
Bessman2, Gabor D. Kelen2, Nancy K. Roderer2
1George Washington University, Washington, DC; 2Johns Hopkins University, Baltimore, MD
Background: Previous studies have noted variation in
ED analgesic rates by patient factors such as age and race.
Objectives: To compare the pain management practices among three EDs in the same health care system
by patient, clinical, provider, and site characteristics.
We expected variation in pain management practices
across the sites and providers.
Methods: We conducted a randomized controlled trial
(RCT) to evaluate the effect of information prescriptions
on medication adherence. This study focused on a subgroup of the subjects who reported moderate (4–7) to
severe pain (8–10) at triage. Subjects were enrolled at
three EDs affiliated with one health care system. Site A
is an academic, tertiary care hospital, site B is a community teaching affiliate hospital, and site C is a community hospital. To be eligible for the RCT, adult ED
patients had to be discharged with a prescription medication. Patients were excluded if they were non-English
speaking, disoriented, or previously enrolled. Research
assistants prospectively enrolled subjects from November 15, 2010–September 9, 2011 (rotating among the
three EDs) during the hours of 10AM–10PM seven days
per week. Of the 3,940 enrollees, 2,603 subjects
reported moderate (35%) to severe pain (65%). The
main outcomes are pain medication received in the ED
or prescribed at discharge (Rx). Pain medication rates
were modeled as a function of patient, clinical, provider, and site factors using a hierarchical logistic
regression model that accounted for clustering of
patients treated by the same provider and clustering of
providers working in the same ED.
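A minimal sketch in the same spirit, on simulated data; GEE with provider clusters is used here as a simplified stand-in for the full provider-within-site hierarchical model, and all names are illustrative assumptions:

```python
# Hedged sketch: simulated visits; GEE with provider clusters stands in for
# the provider-within-site hierarchical model described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2603
df = pd.DataFrame({
    "provider": rng.integers(0, 60, n),        # treating provider id
    "site":     rng.choice(["A", "B", "C"], n),
    "severe":   rng.integers(0, 2, n),         # severe (vs moderate) pain
    "age":      rng.normal(45, 16, n),
})
lin = -0.2 + 0.9 * df["severe"] + 0.4 * (df["site"] == "A")
df["pain_med"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

model = smf.gee("pain_med ~ severe + age + C(site)",
                groups="provider", data=df,
                family=sm.families.Binomial())
print(model.fit().summary())
```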
Results: In all three EDs, pain medication rates (both in
ED and Rx) varied significantly by clinical factors
including location of pain, discharge diagnosis, pain
level, and acuity. We observed little to no variation in
pain medication rates by patient factors such as age,
sex, race, insurance, or prior ED visits. The table displays key pain management practices by site and provider. After adjusting for patient and clinical
characteristics, significant differences in pain medication rates remained by provider and site (see figure).
Conclusion: Within this health system, the approach to
pain management by both providers and sites is not
standardized. Investigation of the potential effect of this
variability on patient outcomes is warranted.
Table - Abstract 311: Pain Management Practices by Site and Provider

Pain Management Practice | Overall N = 2603 | Site A N = 879 | Site B N = 978 | Site C N = 746
≥1 pain assessment by treating nurse | 77% | 55% | 89% | 98%
Discharge pain score documented by nurse | 38% | 35% | 69% | 1%
Received pain medication in ED | 52% | 72% | 28% | 61%
ED pain medication rate by providers | — | 41–98% | 12–33% | 44–86%
Pain medication prescribed at discharge | 76% | 78% | 72% | 75%
Pain medication prescription rate by providers | — | 59–92% | 53–91% | 37–93%
312
Decreases in Provider Productivity After
Implementation of Computer Physician
Order Entry
Neil Roy, Thomas Damiano, Heather F. Farley,
Richard Bounds, Debra Marco, James F. Reed
III, Jason T. Nomura
Christiana Care Health Systems, Newark, DE
Background: Reported patient safety, care integration, and performance improvements attributed to health information technology have resulted in increasing adoption of computer provider order entry (CPOE). However, there are frequent anecdotes of decreased productivity due to workflow changes.
Objectives: This study compared the changes in provider productivity measures before, during, and after
CPOE implementation.
Methods: This was a retrospective cross-sectional
study at an academic tertiary care system with an ED
volume of >160,000 visits/year. Average productivity measures, including patients per hour (pt/h), charges per hour (charge/h), and relative value units (RVUs) per hour (RVU/h), were calculated for all providers and visits during the study period. Productivity was compared for three time periods: 6 months prior to CPOE implementation, a transitional period of 4 months immediately following CPOE implementation that included refinements and customizations, and 6 months post-transition. Data were analyzed for significant variances
by Kruskal-Wallis nonparametric testing. Significant groups then had time period pairs compared using Student-Newman-Keuls analysis with Wald-type 95% CIs.
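A minimal sketch of the omnibus test and a post-hoc step, on hypothetical values; pairwise Mann-Whitney with a Bonferroni correction substitutes here for Student-Newman-Keuls, which SciPy does not provide directly:

```python
# Hedged sketch: hypothetical pts/h by period; the post-hoc step is a
# stand-in for Student-Newman-Keuls, not the authors' exact method.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

periods = {
    "pre":        [1.95, 2.05, 1.88, 2.10, 1.92, 1.98],
    "transition": [1.70, 1.80, 1.75, 1.62, 1.72, 1.68],
    "post":       [1.76, 1.81, 1.71, 1.78, 1.74, 1.73],
}

h_stat, p_omnibus = kruskal(*periods.values())
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_omnibus:.4f}")

if p_omnibus < 0.05:
    pairs = list(combinations(periods, 2))
    for a, b in pairs:
        _, p_pair = mannwhitneyu(periods[a], periods[b])
        print(a, "vs", b, "adjusted p =", min(1.0, p_pair * len(pairs)))
```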
Results: All measures showed significant differences,
p < 0.01. Average pts/h decreased post-CPOE and did not recover after the transitional period, 1.92 ± 0.13 vs 1.75 ± 0.11, p < 0.05. RVU/h also decreased post-CPOE and did not recover after the transitional period, 5.23 ± 0.37 vs 4.79 ± 0.32 and 4.82 ± 0.33, p < 0.05. Charges/h also
decreased after CPOE implementation and did not
recover after system optimization. There was a sustained significant decrease in charges/h of 4.5% ± 6.5%
post CPOE and 3.6% ± 6.4% post optimization,
p < 0.05. Sub-group analysis for each provider group
was also evaluated and showed variability for different
providers.
Conclusion: There was a significant decrease in all productivity metrics four months after the implementation
of CPOE. The system did undergo optimization initiated
by providers with customization for ease and speed of
use. However, productivity measurements did not
recover after these changes were implemented. These
data show that with the implementation of a CPOE
system there is a decrease in productivity that
continues even after a transition period and system
customization.
313
Computer Order Entry Systems in the
Emergency Department Significantly
Reduce the Time to Medication Delivery
for High Acuity Patients
Shahbaz Syed, Dongmei Wang, Debbie
Goulard, Tom Rich, Grant Innes, Eddy Lang
University of Calgary, Calgary, AB, Canada
Background: Computerized physician order entry
(CPOE) systems are designed to increase safety and
quality of care; however, their effect on emergency
department (ED) efficiency has been evaluated in limited settings.
Objectives: Our objective was to assess the effect of
CPOE on process times for medication delivery, laboratory testing, and diagnostic imaging after region-wide
multi-hospital ED-CPOE implementation.
Methods: Setting: This administrative database study
was conducted in Calgary, Alberta, a city of 1.2 million
people served by three acute care hospitals. Intervention: In March 2010, our three tertiary care emergency
ACADEMIC EMERGENCY MEDICINE • April 2012, Vol. 19, No. 4, Suppl. 1
departments implemented a common ED CPOE system.
Patients: Eligible patients consisted of a stratified random sample of CTAS 2 emergent (40%) and CTAS 3
urgent (60%) patients seen 30 days preceding CPOE
implementation (Control group), 30 days immediately
after CPOE implementation (early CPOE group), and 5–
6 months after CPOE implementation (late CPOE
group). Overall, nine patient groups of 100 patients
were analyzed, including control, early, and late groups
from all three hospitals. Outcomes: Primary outcomes
were time from physician assessment to imaging and
lab completion. An ANOVA and t-test were employed
for statistical analysis.
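A minimal sketch of this comparison (invented process times; SciPy's one-way ANOVA followed by a single pairwise t-test):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# invented minutes from physician assessment to lab completion
control = rng.normal(74, 20, 100)
early = rng.normal(76, 20, 100)
late = rng.normal(85, 20, 100)

f, p = stats.f_oneway(control, early, late)   # omnibus ANOVA
t, p_t = stats.ttest_ind(control, late)       # one pairwise contrast
print(f"ANOVA F={f:.2f}, p={p:.4f}; control vs late: t={t:.2f}, p={p_t:.4f}")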
Results: After CPOE implementation, time to (TT) first medication decreased in the early and late CPOE groups (from 102.6 min to 62.8 and 65.7 min, respectively; p < 0.001). Time to first lab result increased from 73.8 min (control) to 76.4 and 85.3 min in the early and late groups, respectively (p < 0.001). TT first x-ray also increased, from 68.1 min to 80.4 and 84.8 min, respectively (p < 0.001).
Conclusion: Regional implementation of CPOE afforded important efficiencies in time to medication delivery for high-acuity ED patients. The increased times observed for laboratory and radiology results may reflect system issues outside the emergency department and, given potential confounding, may not reflect a CPOE effect.
314
Emergency Medicine Procedures:
Examination of Trends in Procedures
Performed by Emergency Medicine
Residents
Kristin Swor Wolf, Suzanne Dooley-Hash
University of Michigan, Ann Arbor, MI
Background: Procedural competency is a key component of emergency medicine residency training. Residents are required to log procedures to document procedure volume and identify potential weaknesses in their training. As emergency medicine evolves, the type and number of procedures performed are likely to change over time. Moreover, exposure to certain rare procedures during residency is not guaranteed.
Objectives: We sought to delineate trends in the type and volume of core EM procedures across a decade of emergency medicine residents graduating from an accredited four-year training program.
Methods: Deidentified procedure logs from 2003–2011 were analyzed to assess trends in the type and quantity of procedures. Procedure logs were self-reported by individual residents on a continuous basis during training using a computer program. Average numbers of procedures per resident in each graduating class were noted. Statistical analysis was performed using SPSS and included simple linear regression to evaluate for significant changes in the number of procedures over time, as well as an independent-samples two-tailed t-test of procedures performed before and after the required resident duty hour change.
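A minimal sketch of the trend analysis (invented per-class averages for a single procedure; SciPy's linregress in place of SPSS):

from scipy.stats import linregress

# invented mean counts per resident of one procedure, by graduating class
years = [2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011]
counts = [48, 46, 44, 41, 40, 37, 36, 33, 31]

fit = linregress(years, counts)
print(f"slope = {fit.slope:.2f} procedures/year, p = {fit.pvalue:.4f}")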
Results: A total of 112 procedure logs were analyzed and the frequency of 29 different procedures was evaluated. A significant increase was seen in one procedure, the venous cutdown. Significant decreases were seen in 12 procedures, including key procedures such as central venous catheters, tube thoracostomy, and procedural sedation. The frequency of five high-stakes/resuscitative procedures, including thoracotomy and cricothyroidotomy, remained steady but very low (<4 per resident over 4 years). Of the remaining 11 procedures, 8 showed a trend toward decreased frequency, while only 5 increased.
Conclusion: Over the past 9 years, EM residents in our
program have recorded significantly fewer opportunities to perform most procedures. Certain procedures in
our emergency medicine training program have
remained stable but uncommon over the course of
nearly a decade. To ensure competency in uncommon
procedures, innovative ways to expose residents to
these potentially life saving skills must be considered.
These may include practice on high-fidelity simulators,
increased exposure to procedures on patients during
residency (possibly on off-service rotations), or practice
in cadaver and animal labs.
315
A Brief Educational Intervention
Effectively Trains Senior Medical Students
to Perform Ultrasound-Guided Peripheral
Intravenous Access in the Simulation
Setting
Scott Kurpiel, William Manson,
Douglas Ander, Daniel Wood
Emory University School of Medicine, Atlanta,
GA
Background: Studies have shown ultrasound-guided
peripheral intravenous access (USGPIV) to be rapid,
safe, and effective in patients with difficult IV access.
Little data exist on training medical students in USGPIV.
Objectives: To study the effectiveness of a unique educational intervention using didactic and hands-on training in USGPIV. We hypothesized that senior medical
students would improve performance and confidence
with USGPIV after the simulation training.
Methods: Fourth-year medical students were enrolled in an experimental, prospective, before-and-after study conducted at a university medical school simulation center. Participants' baseline USGPIV skills on simulated vascular phantoms were graded by ultrasound expert faculty using standardized checklists. The primary outcome was time to cannulation; secondary outcomes were the ability to successfully cannulate, the number of needle attempts, and needle-tip visualization. Subjects then observed a 15-minute presentation on correct performance of USGPIV, followed by a 30-minute hands-on practical session using the vascular simulators with a 1:4 to 1:6 ultrasound instructor-to-student ratio. An expert blinded to each participant's initial performance graded post-intervention USGPIV ability. Pre- and post-intervention surveys were obtained to evaluate USGPIV confidence; previous experience with ultrasound, peripheral IV access, and USGPIV; and satisfaction with the educational format.
Results: 37 subjects (54% male; mean age 26.2 years) were enrolled. Time to cannulation improved from 70.1 seconds to 33.2 seconds (p = 0.048), with 100% cannulation success post-intervention (89% pre-intervention). Subjects required fewer needle attempts (p = 0.004) after the educational intervention. Participants reported improved confidence in their ability to perform USGPIV after education, from 1.92 to 4.32 (p < 0.001), and a satisfaction score of 4.92 with the teaching intervention on a five-point Likert scale. There was no statistically significant difference in needle-tip visualization before and after the intervention.
Conclusion: A brief educational intervention improves
time to cannulation, number of attempts, and confidence in senior medical students learning USGPIV. Skill
retention and translation to patient care remain to be
assessed.
316
Factors Predicting EM Resident Ultrasound
Competency: Number of Scans or PostGraduate Year?
Angela Cirilli1, Alvin Lomibao1, Andrew
Loftus1, David Francis2, Jordana Kaban3, Sarah
Sartain3, Christine Haines1, Ann Prokofieva1,
Mathew Nelson1, Christopher Raio1
1North Shore-LIJ Health System, Manhasset, NY; 2Stanford University Hospital, Palo Alto, CA; 3University of Missouri-Kansas City School of Medicine, Truman Medical Center, Kansas City, MO
Background: Although the ACGME requires EM residents to be competent in emergency ultrasound (US) applications, a standardized curriculum in emergency US, an educational methodology, and a validated competency assessment tool do not exist. Many EM residency programs deem a resident "US-competent" upon completion of a predetermined number of scans, commonly 25.
Objectives: To assess the numbers-based guideline for
competency in three US applications.
Methods: This prospective, multi-site study examined EM residents' ability to interpret right upper quadrant (RUQ), FAST, and aortic US to diagnose aneurysms (AAA). After viewing 20 one-minute video clips each of RUQ, FAST, and aorta exams, residents were asked for a diagnosis and their confidence in that diagnosis. The image interpretation score (IIS) for each competency was compared to the number of clinical scans in that area completed at the time of the study, as well as to post-graduate year (PGY). Analyses were performed on multiple relationships using Pearson correlations and t-tests.
Results: In total, 88 residents participated, representing PGY 1–4 (n = 32, 23, 29, and 4, respectively) in 3-year and 4-year EM residencies. On average, residents had previously completed 15.7 ± 18.9 RUQ, 20.1 ± 23.8 FAST, and 7.9 ± 12.0 AAA scans. Residents' IIS was 73%, 93%, and 92% for RUQ, FAST, and AAA, respectively. For the RUQ and AAA competencies, there was a significant association between the number of previous scans completed and IIS (r = 0.283, p < 0.008, and r = 0.280, p < 0.009; for FAST, r = 0.193, p < 0.07). Those with >25 scans had improved IIS for FAST and AAA (p < 0.08 for RUQ; p < 0.03 for FAST; p < 0.001 for AAA). IIS trended with PGY for each competency, significantly so only for RUQ (p < 0.03, p < 0.27, and p < 0.39 for RUQ, FAST, and AAA, respectively). In general, confidence in a diagnosis was directly related to accurate image interpretation, and incorrect interpretations were seldom made with a high level of confidence.
Conclusion: Competency in interpretation of RUQ and AAA images improved with an increased number of scans; the 25-scan threshold did not hold for RUQ. Conversely, PGY year was significant for RUQ alone. This difference may be explained by increased clinical knowledge or by other modalities such as didactics or simulation. A numbers-only guideline for competency assessment in EM US applications is not adequate.
317
Comparison of Resident Self, Peer, and
Faculty Evaluations in a Simulation-based
Curriculum
Dylan Cooper, Lee Wilbur, Kevin Rodgers,
Jennifer Walthall, Katie Pettit, Gretchen
Huffman, Carey Chisholm
Indiana University School of Medicine,
Indianapolis, IN
Background: The Accreditation Council for Graduate Medical Education recommends incorporation of 360-degree evaluations into residency training. These evaluations incorporate multiple perspectives, which can improve validity and reliability.
Objectives: To compare self, peer, and faculty evaluations of residents in a simulation session.
Methods: Emergency medicine (EM) residents at Indiana University participate in bimonthly simulation sessions throughout their training. Each session involves several simulation scenarios, with one resident functioning as the team leader in each case. Resident team leaders are evaluated with a written evaluation form covering competency categories: medical knowledge, clinical competence, patient advocacy/professionalism, systems-based practice, and interpersonal/communication skills. Each category is scored on a 1–10 scale: 1–4 (below expected level), 5–7 (expected level), 8–10 (above expected level). For this study, the team leader completed a "self" evaluation, fellow residents observing the case completed a "peer" evaluation, and faculty trainers completed a "faculty" evaluation. Scores were compared by evaluation type and year of training using a mixed-effects regression model, with fixed effects for evaluation type and year of training and a random effect for the resident being evaluated.
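A minimal sketch of such a model (toy long-format data, one row per completed evaluation; statsmodels' MixedLM):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score": [6, 8, 7, 5, 9, 7, 6, 8, 7, 5, 9, 8],
    "eval_type": ["self", "peer", "faculty"] * 4,
    "pgy": [1, 1, 1, 2, 2, 2, 3, 3, 3, 2, 2, 2],
    "resident": ["a", "a", "a", "b", "b", "b", "c", "c", "c", "d", "d", "d"],
})

# fixed effects for evaluation type and training year;
# random intercept for the resident being evaluated
model = smf.mixedlm("score ~ C(eval_type) + C(pgy)", df, groups=df["resident"])
print(model.fit().summary())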
Results: There were a total of 76 residents with 146
self, 157 peer, and 174 faculty evaluations completed.
All evaluation types differed significantly, with self evaluations significantly lower and peer evaluations significantly higher than faculty evaluations. When level of
training was taken into account, EM 1st year residents
scored significantly lower than EM 3rd year residents
within each type of evaluation.
Conclusion: There was no correlation between self, peer, and faculty evaluations in this study. This finding was consistent across all levels of training. Residents scored themselves significantly lower than faculty did, while peers rated fellow residents higher than faculty did. This tendency should be taken into account when peer and self evaluations are completed in 360-degree evaluations during graduate medical education.
318
Do Evaluations Of Residents Change When
The Evaluator Becomes Known?
Jonathan S. Jones, Bentley W. Curry,
Mary J. Johnson
University of Mississippi Medical Center,
Jackson, MS
Background: Effective evaluation of residents requires honest and candid assignment of grades as well as individualized feedback. Truly candid assignment of grades may depend on whether the identity of the evaluator is anonymous or made known to the resident.
Objectives: This study examines the grade distribution of resident evaluations when the identity of the evaluator was anonymous compared to when it was known to the resident. We hypothesized that there would be no change in the grades assigned to residents.
Methods: We retrospectively reviewed all faculty evaluations of residents and grades assigned from July 1,
2008 through November 15, 2011. Prior to July 1, 2010
the identity of the faculty evaluators was anonymous,
while after this date, the identity of the faculty evaluators was made known to the residents. Throughout this
time period, residents were graded on a five-point
scale. Each resident evaluation included grades in the
six ACGME core competencies as well as in select
other abilities. Specific abilities evaluated varied over
the dates analyzed. Evaluations of residents were
assigned to two groups, based on whether the evaluator was anonymous or made known to the resident.
Grades were compared between the two groups.
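A minimal sketch of the unsatisfactory-grade comparison, using the counts reported below (statsmodels two-proportion z-test):

from statsmodels.stats.proportion import proportions_ztest

count = [355, 100]      # unsatisfactory grades: anonymous, known
nobs = [10760, 7122]    # total grades assigned in each group

z, p = proportions_ztest(count, nobs)
print(f"3.3% vs 1.4%: z = {z:.2f}, p = {p:.2g}")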
Results: A total of 10,760 grades were assigned in the anonymous group, with an average grade of 3.90 (95% CI 3.88, 3.91). A total of 7,122 grades were assigned in the known group, with an average grade of 3.77 (95% CI 3.75, 3.79). Specific attention was paid to the assignment of unsatisfactory grades (1 or 2 on the five-point scale). The anonymous group assigned 355 grades in this category, comprising 3.3% of all grades assigned; the known group assigned 100 grades in this category, comprising 1.4% of all grades assigned. Unsatisfactory
grades were assigned by the anonymous group 1.9% (95% CI 1.5, 2.3) more often. Additionally, 5.8% (95% CI 3.8, 6.8) more exceptional grades (4 or 5 on the five-point scale) were assigned by the anonymous group.
Conclusion: The average grade assigned was closer to average (3 on a five-point scale) when the identity of the evaluator was made known to the residents. Additionally, fewer unsatisfactory and exceptional grades were assigned in this group. This decrease in both unsatisfactory and exceptional grades may make it more difficult for program directors to effectively identify struggling and strong residents, respectively.
319
Testing to Improve Knowledge Retention
from Traditional Didactic Presentations: A
Pilot Study
David Saloum, Amish Aghera, Brian Gillett
Maimonides Medical Center, Brooklyn, NY
Background: The ACGME requires an average of at
least 5 hours of planned educational experiences each
week for EM residents, which traditionally consist of formal lecture-based instruction. However, retention by adult learners is limited when material is presented in a lecture format. More effective methods such as small
group sessions, simulation, and other active learning
modalities are time- and resource-intensive and therefore not practical as a primary method of instruction.
Thus, the traditional lecture format remains heavily
relied upon. Efficient strategies to improve the effectiveness of lectures are needed. Testing utilized as a
learning tool to force immediate recall of lecture material is an example of such a strategy.
Objectives: To evaluate the effect of immediate post-lecture short answer quizzes on EM residents' retention
of lecture content.
Methods: In this prospective randomized controlled
study, EM residents from a community-based 3-year training program were randomized into two groups. Block
randomization provided a similar distribution of postgraduate year training levels and performance on both USMLE and in-training examinations between the two
groups. Each group received two identical 50-minute lectures on ECG interpretation and aortic disease. One group
of residents completed a five-question short answer quiz
immediately following each lecture (n = 13), while the
other group received the lectures without subsequent
quizzes (n = 16). The quizzes were not scored or reviewed
with the residents. Two weeks later, retention was
assessed by testing both groups with a 20-question multiple choice test (MCT) derived in equal part from each
lecture. Mean and median test results were then compared
between groups. Statistical significance was determined
using a paired t-test of median test scores from each group.
Results: Residents who received immediate post-lecture quizzes demonstrated significantly higher MCT
scores (mean = 57%, median = 58%, n = 10) compared to
those receiving lectures alone (mean = 48%, median = 50%, n = 15); p = 0.023.
Conclusion: Short answer testing immediately after a
traditional didactic lecture improves knowledge retention at a 2-week interval. Limitations of the study are
that it was a single-center study and that long-term retention
was not assessed.
320
Pediatric Emergency Medicine Core
Education Modules: Utility in
Asynchronous Learning
Michael Preis, Jessica Salzman, Zac Kahler,
Sean Fox
Carolinas Medical Center, Charlotte, NC
Background: The task of educating the next generation of physicians is becoming steadily more difficult given the inherent obstacles facing faculty educators and the work hour restrictions to which students must adhere. These obstacles make it very difficult to develop curricula that not only cover important topics but also support and reinforce clinical experiences. Several areas of medical education are using more asynchronous techniques and self-directed online educational modules to overcome these obstacles.
Objectives: The aim of this study was to demonstrate
that educational information pertaining to core pediatric emergency medicine topics could be as effectively
disseminated to medical students via self-directed
online educational modules as it could through traditional didactic lectures.
Methods: This was a prospective study conducted from
August 1, 2010 through December 31, 2010. Students participating in the emergency medicine rotation at Carolinas Medical Center were enrolled and received education
in a total of eight core concepts. The students were
divided into two groups which changed on a monthly
basis. Group 1 was taught four concepts via self-directed
online modules and four traditional didactic lectures.
Group 2 was taught the same core concepts, but in opposite fashion to Group 1. Each student was given a pre-test,
post-test, and survey at the conclusion of the rotation.
Results: A total of 28 students participated in the study. Students, regardless of group assignment, performed similarly on the pre-test, with no statistical difference among scores. Comparing summative total scores between the online and traditional didactic lectures, there was a trend toward significance for greater improvement among those taught online. The students' assessment of the online modules showed that the majority either felt neutral about or preferred the online method. The majority thought the depth and length of the modules were perfect. Most students thought having access to the online modules was valuable, and all but one stated that they would use them again.
Conclusion: This study demonstrates that self-directed,
online educational modules are able to convey important
concepts in emergency medicine similar to traditional
didactics. It is an effective learning technique that offers
several advantages to both the educator and student.
Table - Abstract 320: Difference in Total Scores Pre- and Post-Test and Overall Improvement in Online vs. Standard Method

                      N    Mean    Standard Deviation    P Score
Pre-Test              28   -0.46   3.2                   0.45
Post-Test             21    1.33   3.0                   0.05
Overall Improvement   21    1.33   3.9                   0.13
321
Asynchronous vs Didactic Education: It’s
Too Early To Throw In The Towel On
Tradition
Jaime Jordan1, Azadeh Jalali2, Samuel Clarke1,
Pamela Dyne3, Tahlia Spector2, Sebastian
Uijtdehaage2, Wendy Coates1
1Harbor-UCLA Medical Center, Torrance, CA; 2David Geffen School of Medicine at UCLA, Los Angeles, CA; 3OliveView-UCLA Medical Center, Sylmar, CA
Background: Computer-based asynchronous learning
(ASYNCH) is cost-effective, allows self-directed pacing
and review, and addresses preferences of millennial
learners. Current research suggests there is no significant difference from traditional classroom instruction.
Data are limited for novice learners in emergency medicine.
Objectives: We compared ASYNCH modules with traditional didactics for rising fourth year medical students
during a week-long intensive course in acute care, and
hypothesized they would be equivalent.
Methods: This was a prospective observational quasi-experimental study of fourth-year medical students who were novice learners with minimal prior exposure to the curricular elements. A pre-test assessed baseline knowledge. The acute care curriculum was delivered either in traditional classroom lecture format (shock, acute abdomen, dyspnea, field trauma) or via ASYNCH modules (chest pain, EKG interpretation, pain management, ED trauma). An interactive review of all topics was followed by a post-test, and retention at 10 weeks was also measured. Pre- and post-test items were written and validated by a panel of medical educators. Mean scores were analyzed using dependent t-tests, and attitudes were assessed on a five-point Likert scale.
Results: 44 of 48 students completed the protocol. Students initially acquired more knowledge from didactic education, as demonstrated by mean gain scores (didactic: 28.39% ± 18.06; asynchronous: 9.93% ± 23.22). The mean difference between didactic and ASYNCH was 18.45% (95% CI 10.40–26.50; p = 0.0001). Retention testing demonstrated similar knowledge attrition: mean gain scores were -14.94% (didactic) and -17.61% (ASYNCH), a difference that was not significant (2.68% ± 20.85, 95% CI -3.66 to 9.02, p = 0.399). Students had positive attitudes towards ASYNCH: 60.4% believed the asynchronous modules were educational and 95.8% enjoyed the flexibility. 39.6% preferred asynchronous education for required didactics, 37.5% were neutral, and 23% preferred traditional lectures.
Conclusion: ASYNCH education was not equivalent to
traditional lectures for novice learners of acute care
topics. Interactive didactic education was valuable;
however, retention rates were similar. Students had
mixed attitudes towards ASYNCH. We urge caution in
trading traditional didactic lectures for
ASYNCH for novice learners.
322
Access To Pediatric Equipment And
Medications In Critical Access Hospitals: Is
A Lack Of Resources A Valid Concern?
Jessica Katznelson1, C. Scott Forsythe2,
William A. Mills2
1Johns Hopkins School of Medicine, Baltimore, MD; 2University of North Carolina School of Medicine, Chapel Hill, NC
Background: Critical access hospitals (CAH) provide
crucial emergency care to rural populations that
would otherwise be without ready access to health
care. Data show that many CAH do not meet standard adult quality metrics. Adults treated at CAH
often have inferior outcomes to comparable patients
cared for at other community-based emergency
departments (EDs). Similar data do not exist for pediatric patients.
Objectives: As part of a pilot project to improve pediatric emergency care at CAH, we sought to determine
whether these institutions stock the equipment and
medications necessary to treat any ill or injured child
who presents to the ED.
Methods: Five North Carolina CAH volunteered to participate in an intensive educational program targeting pediatric emergency care. At the initial site visit to each hospital, an investigator, in conjunction with the ED nurse manager, completed a 109-item checklist of commonly required ED equipment and medications based on the 2009 ACEP "Guidelines for Care of Children in the Emergency Department." The list was categorized into monitoring and respiratory equipment, vascular access supplies, fracture and trauma management devices, and specialized kits. Where applicable, adult and pediatric sizes were listed; only hospitals stocking appropriate pediatric sizes of an item were counted as having that item. The pharmaceutical supply list included antibiotics, antidotes, antiemetics, antiepileptics, intubation and respiratory medications, IV fluids, and miscellaneous drugs not otherwise categorized.
Results: Overall, the hospitals reported having 91% of the items listed (range 87–96%). The two greatest deficiencies were fracture devices (range 33–66%), with no hospital stocking infant-sized cervical collars, and antidotes, with no hospital stocking pralidoxime, one of five stocking fomepizole, and two of five stocking pyridoxine and methylene blue. Only one of the five institutions had access to prostaglandin E. The hospitals cited cost and rarity of use as the reasons for not stocking these medications.
Conclusion: The ability of CAH to care for pediatric
patients does not appear to be hampered by a lack of
equipment. Ready access to infrequently used, but
potentially lifesaving, medications is a concern. Tertiary
care centers preparing to accept these patients should
be aware of these potential limitations as transport
decisions are made.
323
PEWS Program In The Pediatric Emergency
Department Halves Unanticipated
In-hospital Transfers To A Higher Level Of
Care For Patients With Respiratory
Complaints
Patrick Crocker1, Eric Higginbotham1, Diane
Taylor1, Ben King2, Fernanda Santos1, Manasi
Nabar2, Truman J. Milling2
1Dell Children's Medical Center, Austin, TX; 2University Medical Center at Brackenridge, Austin, TX
Background: Pediatric Early Warning Scores (PEWS) were developed to evaluate floor patients for impending clinical deterioration necessitating transfer to a higher level of care, but they have not been integrated into the pediatric emergency department (PED) as a triage tool.
Objectives: To measure the effect of a program using
PEWS, as an admission level of care triage tool in the
PED, on unanticipated transfers to a higher level of
care in the first 12 hours after admission, and proportions of level of care admissions.
Methods: This is an analysis of the years before and after (2008 and 2009) implementation of a PEWS program for patients being admitted with respiratory complaints to a dedicated children's hospital. Chief complaints from ED triage were used to identify all patients presenting with respiratory complaints in those two years. The year groups were divided first by discharge versus admission, and admitted patients were then classified by whether they experienced an unanticipated transfer to a higher level of care (floor to IMC, floor to ICU, or IMC to ICU) in the first 12 hours after admission. The z-test for differences in proportions was used, and p values less than 0.05 were considered significant.
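A minimal sketch of this test, using the transfer counts reported in the results below (statsmodels two-proportion z-test):

from statsmodels.stats.proportion import proportions_ztest

# unanticipated transfers among admissions: pre-PEWS 40/1198 vs PEWS 24/1476
z, p = proportions_ztest([40, 24], [1198, 1476])
print(f"z = {z:.2f}, p = {p:.4f}")   # p < 0.05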
Results: There were 8,021 patients presenting to the PED with respiratory complaints in 2008, the pre-PEWS group (PPG), with 6,823 discharges (85%) and 1,198 admissions (15%). There were 14,199 patients presenting with the same complaints in 2009, the PEWS group (PG), with 12,723 discharges (90%) and 1,476 admissions (10%). There were 40 unanticipated transfers in the PPG (25 floor to IMC; 7 floor to ICU; 8 IMC to ICU) and 24 in the PG (16 floor to IMC; 5 floor to ICU; 3 IMC to ICU) (p < 0.05). The distribution of admissions in the PPG (1,075 or 90% to floor; 99 or 8% to IMC; 7 or 0.6% to ICU) and the PG (1,344 or 91% to floor; 109 or 7% to IMC; 7 or 0.5% to ICU) was not significantly different (p > 0.05), indicating that patients in the PG were not routinely overtriaged.
Conclusion: Implementation of a PEWS program in
the PED using PEWS as a triage tool for in-patient
admission nearly halved the number of unanticipated
transfers to a higher level of care without significantly
over-triaging.
324
Children’s Emergency Department
Recidivism in Infancy and Early Childhood
Whitney V. Cabey1, Emily MacNeill2, Lindsey
N. White3, James Norton2, Alice M. Mitchell2
1University of Michigan, Ann Arbor, MI; 2Carolinas Medical Center, Charlotte, NC; 3Washington Hospital Center, Washington, DC
Background: Data defining frequent children's emergency department (CED) use, or recidivism, are lacking, particularly in early childhood.
Objectives: To define the threshold and risk factors for
CED recidivism in the first 36 months of life in patients
treated at an urban tertiary care center.
Methods: We conducted a retrospective cross-sectional study of children born between 2003 and 2006 who were seen at a single CED (40,000 visits per year) at any time during the first 36 months of life. Exclusion criteria were age greater than 36 months at the time of the index visit and residence outside of the CED's county during the study period. Patients were followed longitudinally for CED utilization in the first 36 months of life. The distribution of the data was used to determine the threshold for recidivism, defined as the 90th percentile for frequency of CED visits. Risk factors associated with CED recidivism were identified by multivariate logistic regression analysis.
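A minimal sketch of such a model (simulated patient-level data with invented effect sizes; statsmodels' logit, with coefficients exponentiated to odds ratios):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "public_ins": rng.integers(0, 2, n),
    "poverty_zip": rng.integers(0, 2, n),
    "index_lt_12m": rng.integers(0, 2, n),
})
# simulate recidivism with invented log-odds effects
lp = -2.5 + 1.2 * df.public_ins + 0.7 * df.poverty_zip + 2.1 * df.index_lt_12m
df["recidivist"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

fit = smf.logit("recidivist ~ public_ins + poverty_zip + index_lt_12m", df).fit(disp=0)
print(np.exp(fit.params))   # exponentiated coefficients = odds ratios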
Results: 16,664 patients met the inclusion/exclusion criteria. The data skewed heavily toward patients with no more than two CED visits, with 70% of patients falling within this range (range 1 to 39 visits; IQR 2 visits). The threshold for recidivism, 5 or more visits within the first 36 months of life, defined the 90th percentile for frequency of CED visits within the study population and occurred in 14% (95% CI 13 to 15%) of patients. Multivariate analysis identified the following risk factors: black race (OR 2.8, 95% CI 2.4 to 3.4), Hispanic ethnicity (OR 1.6, 95% CI 1.4 to 1.9), public insurance or lack of insurance (OR 3.5, 95% CI 2.9 to 4.2), residence in a zip code associated with greater than 70% poverty (OR 2.0, 95% CI 1.8 to 2.2), and an index visit in the first 12 months of life (OR 8.4, 95% CI 7.2 to 9.8).
Conclusion: CED recidivism, defined by the 90th percentile for CED utilization within the first 36 months of
life, was 5 or more visits. Black race, Hispanic ethnicity,
public insurance or lack of insurance, living in a zip code
associated with greater than 70% poverty, and an index
visit in the first 12 months of life were associated with
CED recidivism. These findings identify a population
for which frequent CED use begins early. Further study
is needed to better understand the connection between
socioeconomic and sociocultural risk factors for CED
recidivism and patterns of health care utilization, and to
identify potentially effective interventions.
325
Prevalence of Urinary Tract Infections in
Febrile Infants <90 days old with RSV
Antonio Muniz
Dallas Regional Medical Center, Mesquite, TX
Background: The prevalence of significant bacterial
infections in infants <90 days old who have RSV is
unknown in the era after the introduction of H. influenzae and S. pneumoniae vaccinations.
Objectives: The study’s hypothesis is that infants with
fever and RSV are at low risk for secondary bacterial
infections and may not require extensive and invasive
laboratory evaluations.
Methods: This was a prospective evaluation of all infants <90 days old who had sepsis evaluations for fever and an RSV antigen test performed. Data were analyzed using Stata 11, with continuous variables expressed as means and categorical variables summarized as frequencies of occurrence and assessed for statistical significance using the chi-square or Fisher's exact test.
Results: There were 360 infants; 140 (38.8%) were RSV (+) and 220 (61.1%) were RSV (-). The median temperature was 38.6 ± 0.6°C in the RSV (+) group and 38.5 ± 0.5°C in the RSV (-) group, and there was no significant difference in vital signs between groups. Coryza and coughing were more common in the RSV (+) group (p < 0.05). Sixteen (11.4%) infants in the RSV (+) group were admitted to the PICU versus 4 (1.8%) in the RSV (-) group. The laboratory evaluation in the RSV (+) group included a CBC in 130 (92.8%), urinalysis and urine culture in 124 (88.5%), blood culture in 126 (90.0%), chest radiograph in 102 (72.8%), and CSF fluid analysis in 62 (44.2%). In the RSV (-) group, the laboratory evaluation included a CBC in 200 (90.9%), urinalysis and urine culture in 200 (90.9%), blood culture in 204 (92.7%), chest radiograph in 100 (45.4%), and CSF fluid analysis in 148 (67.2%). More chest radiographs were performed in the RSV (+) group (p < 0.05), while all other testing rates were similar. There were 20 (14.2%) significant infections in the RSV (+) group versus 24 (10.9%) in the RSV (-) group. In the RSV (+) group there were 14 (10%) pneumonias and 6 (4.2%) UTIs; there were no significant (+) blood cultures, but there were 12 (8.5%) contaminants. In the RSV (-) group there were 12 (5.4%) pneumonias, 8 (3.6%) UTIs, 2 (0.9%) cases of bacteremia with group B streptococcus, and 2 (0.9%) cases of meningitis with enterococcus.
Conclusion: The risk of serious bacterial infection is low in the RSV (+) group, especially if pneumonias are excluded. There were no cases of bacteremia or meningitis; however, UTIs remain significant. It may therefore be prudent to exclude a UTI in febrile infants with a (+) RSV antigen test.
326
Impact of an English-Based Pediatric
Software on Physician Decision Making:
A Multicenter Study in Vietnam
Michelle Lin1, Trevor N. Brooks2, Alex C.
Miller2, Jamie L. Sharp2, Le Thanh Hai3,
Tu Nguyen3, Daniel R. Kievlan1,
Ronald A. Dieckmann1
1University of California, San Francisco, San Francisco, CA; 2KidsCareEverywhere, Oakland, CA; 3National Hospital of Pediatrics, Hanoi, Viet Nam
Background: Global health agencies and the Vietnam Ministry of Health have identified pediatric health care and universal health information technology access as high-priority goals. Medical decision support software (MDSS) provides physicians with rapid access to current, evidence-based literature, but there are limited data about its effectiveness in developing countries.
Objectives: We hypothesized that Vietnamese physicians, with only brief software training, can understand
and effectively apply MDSS information to answer clinical questions about pediatric emergencies.
Methods: This multicenter, prospective, pretest-posttest crossover study was conducted in 11 Vietnamese hospitals during November 2010–April 2011. A convenience sample of physicians volunteered to attend software training on a pediatric MDSS (PEMSoft; Brisbane, Australia) presented in English. Two 15-question multiple-choice exams (Tests A and B) were administered before and after a brief 80-minute software training session conducted in Vietnamese and English. Participants who received Test A as a pretest received Test B as a posttest, and vice versa. The primary outcome measure was the difference between pretest and posttest scores, calculated as the difference in two means using the Newcombe-Wilson method without continuity correction.
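A minimal sketch of a Newcombe-Wilson interval for a difference of proportions without continuity correction (hand-rolled from Wilson score intervals; treating the mean scores as proportions of correct answers is a simplification):

from math import sqrt
from scipy.stats import norm

def wilson(p, n, alpha=0.05):
    # Wilson score interval for a single proportion
    z = norm.ppf(1 - alpha / 2)
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

def newcombe_diff(p1, n1, p2, n2):
    # CI for p1 - p2 built from the two Wilson intervals
    l1, u1 = wilson(p1, n1)
    l2, u2 = wilson(p2, n2)
    d = p1 - p2
    return (d - sqrt((p1 - l1)**2 + (u2 - p2)**2),
            d + sqrt((u1 - p1)**2 + (p2 - l2)**2))

# posttest vs. pretest, using the reported means and sample size
print(newcombe_diff(0.70, 208, 0.38, 208))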
Results: For the 208 physician participants, the mean pretest score was 38% (95% CI: 36–40%), the mean posttest score after the brief software training session was 70% (95% CI: 69–72%), and the mean improvement was 32% (95% CI: 30–35%). Subgroup analyses by physician practice setting, years of clinical experience, comfort level with written English, and computer proficiency showed no associations.
Conclusion: Vietnamese physicians, after receiving minimal training, can effectively use a pediatric MDSS (PEMSoft) written in English for pediatric emergency care decisions on written tests. MDSS technologies may offer a highly scalable, sustainable, and potentially transformative tool in resource-poor environments.
327
Ability of Acutely Ill Children with Asthma
to Perform Acceptable Forced Expiratory
Maneuvers in the Emergency Department
Christopher A. Hollweg1, Jennifer Eng-Lunt2,
Maria T. Santiago2, Jahn T. Avarello1, Megan
McCullough1, Alvin Lomibao1, Mae F. Ward1,
Robert A. Silverman1
1NSLIJ Emergency Department, Long Island Jewish Medical Ctr, New Hyde Park, NY; 2Steven and Alexandra Cohen Children's Medical Center, New Hyde Park, NY
Background: Treatment guidelines recommend that objective measures be used to assess pulmonary function in children with acute asthma. Although FEV1 is considered the best noninvasive measure of obstruction, in practice it is not often measured in the emergency department, in part because of the perception that good performance is not achievable in the acute care setting.
Objectives: Our goal is to determine if acceptable and
reproducible FEV1 measurements can be obtained in
acutely ill asthmatic children.
Methods: A convenience sample of asthmatic children was drawn from the academic ED of a pediatric hospital with an annual census of 35,000 visits. Participants, ages 6–18 years, presenting with shortness of breath had spirometry performed by a trained research associate. Those with developmental or behavioral diagnoses were excluded. The first set of efforts was obtained on ED arrival when a research associate was available; otherwise, it was obtained after treatment was given. At least two efforts were performed at each time point. Acceptability for a time point was defined as two or more efforts with time to peak <120 ms, back-extrapolated volume <5%, and expiratory time >1 s. Reproducibility was defined as the two best acceptable FEV1 efforts being within 10% of each other. Chi-square tests were used for comparisons; for patients with multiple visits, only the first data set was included in the analysis.
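A minimal sketch of the acceptability and reproducibility rules applied to one set of efforts (field names and values invented for illustration):

def acceptable(effort):
    return (effort["time_to_peak_ms"] < 120
            and effort["back_extrap_vol_pct"] < 5
            and effort["exp_time_s"] > 1)

def classify(efforts):
    ok = sorted((e["fev1"] for e in efforts if acceptable(e)), reverse=True)
    is_acceptable = len(ok) >= 2          # two or more acceptable efforts
    # two best acceptable FEV1 efforts within 10% of each other
    is_reproducible = is_acceptable and (ok[0] - ok[1]) / ok[0] <= 0.10
    return is_acceptable, is_reproducible

efforts = [
    {"fev1": 1.51, "time_to_peak_ms": 95, "back_extrap_vol_pct": 3, "exp_time_s": 1.4},
    {"fev1": 1.45, "time_to_peak_ms": 110, "back_extrap_vol_pct": 4, "exp_time_s": 1.2},
    {"fev1": 1.10, "time_to_peak_ms": 150, "back_extrap_vol_pct": 6, "exp_time_s": 0.9},
]
print(classify(efforts))   # (True, True)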
Results: 61 children were eligible; one child was too ill to participate. The average age was 10 years (SD 3.1; range 6–18); 45% were female; and 32% required hospitalization. At the first testing time point, 53/60 (88%) children met acceptability criteria and 49/60 (82%) met reproducibility criteria. Reproducibility in younger children (≤10 years) trended lower compared to older children (76% vs. 95%, p = 0.08). Children with more severe obstruction (FEV1 ≤50% predicted) met reproducibility criteria less frequently than children with a higher FEV1 % predicted (67% vs. 90%, p = 0.03). Performance was similar when FEV1 was assessed at subsequent time points (data not shown).
Conclusion: In this pilot study the majority of children
with acute asthma in an ED setting obtained acceptable and reproducible FEV1 measurements. FEV1
determination should be considered as an option to
objectively assess airway obstruction in acutely ill
children.
328
Antibiotics For The Treatment Of
Abscesses: A Meta-analysis
Jahan Fahimi1, Amandeep Singh2,
Bradley Frazee2
1Alameda County Medical Center - Highland Hospital; University of California, San Francisco, Oakland/San Francisco, CA; 2Alameda County Medical Center - Highland Hospital, Oakland, CA
Background: While incision and drainage (I&D) alone
has been the mainstay of management of uncomplicated abscesses for decades, some advocate for adjunct
antibiotic use, arguing that available trials are underpowered and that antibiotics reduce treatment failures
and recurrence.
Objectives: To investigate the role of antibiotics in
addition to I&D in reducing treatment failure as compared to management with I&D alone.
Methods: With a medical librarian, we searched the MEDLINE, EMBASE, Web of Knowledge, and Google Scholar databases for trials and observational studies analyzing the effect of antibiotics in human subjects with skin and soft-tissue abscesses. Two investigators independently reviewed all records. We performed three overlapping meta-analyses: (1) randomized trials only, comparing antibiotics to placebo on improvement of the abscess during standard follow-up; (2) trials and observational studies comparing appropriate antibiotics to placebo, no antibiotics, or inappropriate antibiotics (as gauged by wound culture) on improvement during standard follow-up; and (3) trials only, with the outcome broadened to count recurrence or new lesions during a longer follow-up period as treatment failure. We report pooled risk ratios (RR) from a fixed-effects model as our point estimates, with Shore-adjusted 95% confidence intervals (CI).
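A minimal sketch of a fixed-effects (inverse-variance) pooled risk ratio on the log scale (study counts invented; a plain Wald CI stands in for the Shore adjustment):

import numpy as np

# each row: events_tx, n_tx, events_ctl, n_ctl (invented)
studies = np.array([
    [80, 100, 78, 100],
    [45, 60, 44, 60],
    [30, 40, 28, 40],
])

a, n1, c, n2 = studies.T
log_rr = np.log((a / n1) / (c / n2))
var = 1 / a - 1 / n1 + 1 / c - 1 / n2     # variance of each log RR
w = 1 / var                               # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")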
Results: We screened 1,937 records, of which 12 studies fit the inclusion criteria; 9 of these (5 trials, 4 observational studies) reported results that could be pooled and were meta-analyzed. Of the 9 studies, 5 enrolled subjects from the ED, 2 from a soft-tissue infection clinic, and 2 from a general hospital without definition of the enrollment site. Five studies enrolled primarily adults, 3 enrolled primarily children, and 1 did not specify ages. Pooling results from the randomized trials only gave RR = 1.03 (95% CI: 0.97–1.08). Using "appropriate" antibiotics as the exposure (trials and observational studies) resulted in a pooled RR = 1.01 (95% CI: 0.98–1.03). When we broadened our treatment failure criteria to include recurrence or new lesions at longer lengths of follow-up (trials only), we noted an RR = 1.05 (95% CI: 0.97–1.15).
Conclusion: Based on the available literature pooled for this analysis, there is no evidence to suggest any benefit from antibiotics in addition to I&D in the treatment of skin and soft tissue abscesses. (Originally submitted as a "late-breaker.")
329
Primary Versus Secondary Closure of
Cutaneous Abscesses in the Emergency
Department: A RCT
Adam J. Singer, Breena R. Taira, Stuart Chale,
Anna Domingo, Gillian Schmidt
Stony Brook University, Stony Brook, NY
Background: Cutaneous abscesses are common and have traditionally been treated with incision and drainage (I&D) and left to heal by secondary intention. Prior studies outside the emergency department (ED) and outside the US have shown faster healing and comparably low recurrence rates when drained abscesses are closed primarily.
Objectives: To compare wound healing and recurrence
rates after primary vs. secondary closure of drained
abscesses. We hypothesized the percentage of drained
ED abscesses that would be completely healed at 7 days
would be higher after primary closure.
Methods: This randomized clinical trial was undertaken in two academic emergency departments. Immunocompetent adult patients with simple, localized
cutaneous abscesses were randomly assigned to I & D
followed by primary or secondary closure. Randomization was balanced by center, with an allocation
sequence based on a block size of four, generated by a
computer random number generator. The primary outcome was percentage of healed wounds seven days
after drainage. A sample of 50 patients had 80% power
to detect an absolute difference of 40% in healing rates
assuming a baseline rate of 25%. All analyses were by
intention to treat.
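A minimal sketch of this power calculation (statsmodels, via the arcsine effect size for 25% vs. 65% healing):

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

h = proportion_effectsize(0.65, 0.25)   # Cohen's h for 65% vs. 25%
n_per_group = NormalIndPower().solve_power(effect_size=h, alpha=0.05,
                                           power=0.80, ratio=1.0)
print(f"h = {h:.2f}, n per group = {n_per_group:.1f}")   # ~23, i.e. ~46-50 total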
Results: Twenty-seven patients were allocated to primary and 29 to secondary closure, of whom 23 and 27, respectively, were followed to study completion. Healing rates at seven days were similar between the primary and secondary closure groups (69.6% [95% CI 49.1 to 84.4] vs. 59.3% [95% CI 40.7 to 75.5]; difference 10.3% [95% CI -15.8 to 34.1]). The rate of abscess recurrence at 7 days was also similar between the primary and secondary closure groups (4.3% [95% CI 0.8 to 21.0] vs. 14.3% [95% CI 5.7 to 31.5]; difference -9.9% [95% CI -27.5 to 8.8]).
Conclusion: The rates of wound healing and abscess
recurrence seven days after I&D of simple abscesses
are similar with either primary or secondary closure.
330
Low Incidence of MRSA Colonization
Among House Officers
Philip Giordano, Kurt Weber,
Rachel Semmons, Josef Thundiyil, Jay Ladde,
Jordan Rogers, Brian Batt
Orlando Regional Medical Center, Orlando, FL
Background: Due to the length of time spent in hospitals, house officers may be at high risk for becoming
colonized with MRSA and could potentially act as vectors for transmission. The risk of MRSA conversion by
house officers has not been widely studied.
Objectives: We sought to determine the incidence of
MRSA conversion among house officers after
18 months of training.
Methods: We conducted a prospective cohort study at an urban, tertiary-care teaching hospital over an 18-month period. The study population included all incoming residents in emergency medicine, internal medicine, obstetrics/gynecology, general surgery, orthopedics, and pediatrics. Subjects with an active skin infection were excluded. Trained investigators sampled the bilateral nares of each subject using standard Dacron swabs. Swabs were plated onto BBL CHROMagar MRSA plates (Becton Dickinson) and incubated for 48 hours. Cultures were interpreted by certified microbiologists and recorded at 24 and 48 hours. Subjects had baseline samples collected during their orientation in July 2010 and subsequent testing in December 2011. The incidence of MRSA conversion was determined and reported with 95% confidence intervals using z statistics.
Results: In July 2010, 64 of 70 incoming residents consented to participate. Baseline cultures were positive in 0 of 64 samples. Follow-up cultures were obtained 18 months later for 45 (70%) subjects; only one was positive. Of the 19 residents lost to follow-up, 15 had left the institution. The incidence of MRSA conversion was 2.2% (95% CI -2.1, 6.5). This contrasts with the 50% MRSA positivity rate for all hospital cultures during the study period.
Conclusion: We found a low baseline prevalence of MRSA colonization compared with rates reported for house officers and other health care workers. Our results suggest a low risk to house officers of becoming colonized with MRSA at our institution. This is consistent with published data from a different geographic location.
331
Do Physical Exam Findings Correlate with
the Presence of Fever in Patients with
Skin and Soft Tissue Infections?
Jillian Mongelluzo, Brian Tu, Jonathan
Fortman, Robert Rodriguez
University of California San Francisco, San
Francisco, CA
Background: Skin and soft tissue infections (SSTI) are common reasons for presentation to acute care facilities and for admission to inpatient hospital facilities. Few investigations have examined the relationship between clinical examination findings and the occurrence of fever.
Objectives: Given the effect that the presence of fever has on treatment and hospital admission decisions, our objective was to determine whether physical exam findings and laboratory tests in SSTI are associated with the development of fever.
Methods: We conducted a prospective observational study at an urban county trauma center from June 2007 until October 2011, enrolling adults >18 years who presented to the ED for evaluation of SSTIs. Treating providers prospectively completed a data sheet (with an attached tape measure for accuracy) recording demographic and clinical information (intravenous drug use (IDU), HIV, diabetes); area of erythema (cm-sq); location; and the presence of adenopathy, streaking, necrosis, bullae, and joint involvement. The highest temperature within the first 6 hours of ED presentation was recorded, and fever was defined as a temperature ≥38°C. Enrolled subjects were followed through their ED and hospital stays to determine laboratory, radiologic, microbiologic, and admission data.
Results: Of 734 patients enrolled, 96 (13%) were febrile.
Febrile patients were found to have a significantly larger area of erythema compared to those patients who
were afebrile (Kruskal-Wallis test p = 0.0001). Patients
with upper extremity infections were more likely to be
febrile when compared to patients with infections in
other locations (chi-square = 12.8, p < 0.0001). A leukocytosis was significantly more common in patients who
were febrile than in afebrile patients (chi-square = 27.2,
p < 0.0001). IDU patients were significantly more likely
to be febrile than non-IDU patients (chi-square = 9.2,
p = 0.0025).
Conclusion: Fever is uncommon in patients presenting
to the ED for evaluation of suspected SSTI. Larger
area of erythema, location of infection, leukocytosis,
and history of IDU are all correlated with the presence
of fever. Understanding the clinical factors associated with higher rates of fever may aid in the development of treatment and admission decision guidelines.
Table - Abstract 331: Characteristics of patients with and without fever

                            All Patients     Febrile          Afebrile
Age (years)                 45 (IQR 35–55)   44 (IQR 36–53)   45 (IQR 35–55)
Male                        529 (72%)        71/529 (13%)     457/529 (86%)
IDU                         211 (29%)        36/211 (17%)     175/211 (83%)
Diabetes mellitus           124 (17%)        15/124 (12%)     109/124 (88%)
HIV                         60 (8%)          10/60 (17%)      50/60 (83%)
Face/neck infection         68 (9%)          5/68 (7%)        63/68 (93%)
Area of erythema, mean      159 cm-sq        266 cm-sq        143 cm-sq
Area of erythema, median    32 cm-sq         80 cm-sq         30 cm-sq
Upper extremity infection   197 (24%)        40/197 (20%)     157/197 (80%)
Leukocytosis                151/337* (45%)   48/151 (32%)     103/151 (68%)

*Total CBCs sent
332
Preliminary Clinical Feasibility of an
Improved Blood Culture Time To Detection
Using a Novel Viability Protein Linked PCR
Assay Enabling Universal Detection of
Viable BSI Hematopathogens Three-fold
Earlier Than The Gold Standard
Jason Rubino1, John Morrison1, Zweitzig
Daniel2, Nichol Riccardello2, Bruce Sodowich2,
Jennifer Axelband1, Rebecca Jeanmonod1,
Mark Kopnitsky2, S. Mark O’Hara2
1St. Luke's Hospital and Health Network, Bethlehem, PA; 2Zeus Scientific, Branchburg, NJ
Background: The diagnosis of blood stream infection (BSI) is usually made on clinical grounds because of the long time to detection needed to obtain blood culture (BC) results, the gold standard test.
Objectives: Zeus Scientific Inc. has developed a viability protein-linked PCR (VP-PCR) assay that detects hematopathogens in model systems three times faster than BC. The aim of this study was to validate this novel test in patients with suspected BSI in a clinical setting.
Methods: This prospective cohort study was performed at a Level I community trauma center with an annual census of 75,000 and was reviewed and approved by the IRB. After informed consent, a convenience sample of patients with suspected BSI was enrolled. For each enrolled patient, routine hospital BC were obtained (4 BC bottles: 2 aerobic and 2 anaerobic); for the Zeus (Z) VP-PCR test, one additional aerobic BC bottle was obtained. The single Z BC bottle was blind-coded, refrigerated, and transported 50 miles twice weekly for independent incubation and VP-PCR time-course testing. The VP-PCR test was performed on 1 ml time-course aliquots from the single Z BC bottle. Patients' hospital BC laboratory results were decoded and compared to the Z BC and VP-PCR results. The sensitivity and specificity of Z VP-PCR were determined against the gold standard of hospital lab BC results as well as against Z BC results.
Results: Preliminary data from 223 patients were analyzed comparing hospital BC to Z VP-PCR as well as
comparing Z BC to Z VP-PCR. There were a total of 23/
223 (10.3%) gold standard positive hospital BC. Z
VP-PCR performed with a sensitivity of 88% and a
specificity of 99% as compared to hospital BC. Z
VP-PCR performed with a sensitivity of 100% and a
specificity of 100% when compared to Z BC. VP-PCR
positives detected BSI three-fold earlier than any of its
five corresponding BC bottle incubator flip times.
Conclusion: The Z VP-PCR test provides early detection of BSI with high sensitivity and specificity.
Discordance in Z BC and hospital BC may be secondary to specimen handling, but requires further study.
With successful validation, Z VP-PCR should be studied further to determine if its use improves patient
outcomes.
333
Operating Characteristics of History,
Physical, and Urinalysis for the Diagnosis
of Urinary Tract Infections in Adult
Women Presenting to the Emergency
Department: An Evidence-Based Review
Lisa Meister, Diane Scheer, Eric J. Morley,
Richard Sinert
SUNY Downstate, Brooklyn, NY
Background: Women often present to the ED with symptoms suggestive of urinary tract infection (UTI). History, physical exam, and laboratory studies are often used to diagnose UTI.
Objectives: To perform a systematic review of the literature to determine the utility of history, physical, and
urinalysis in diagnosing uncomplicated female UTI.
Methods: The medical literature was searched from
January 1965 to November 2011 in PUBMED and EMBASE using a strategy derived from the following
PICO formulation of our clinical question: Patients:
Females greater than 18 years in the ED suspected of
having a UTI. Interventions: Historical, physical exam
and laboratory findings commonly used to diagnose a
UTI. Comparator: UTI was defined as a urine culture
with greater than 100,000 colonies of bacteria per ml
of urine. Outcome: The operating characteristics of
the interventions in diagnosing a UTI. Studies
were assessed using the Quality Assessment Tool
for Diagnostic Accuracy Studies. Data analysis was
performed using Meta-DiSc and a random-effects
model.
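A minimal sketch of how these likelihood ratios derive from the operating characteristics (illustrative sensitivity/specificity values):

def likelihood_ratios(sens, spec):
    lr_pos = sens / (1 - spec)      # LR+ = sensitivity / (1 - specificity)
    lr_neg = (1 - sens) / spec      # LR- = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# e.g., a nitrite-like finding: insensitive but highly specific
print(likelihood_ratios(sens=0.30, spec=0.96))   # LR+ = 7.5, LR- ~ 0.73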
Results: Four studies with a total of 670 patients were included for review. The prevalence of UTI varied from 40% to 67%. Historical findings (previous UTI, dysuria, urgency, back pain, abdominal pain, fever, and hematuria) and physical exam findings (temperature >37.2°C and costovertebral angle tenderness) had positive likelihood ratios (LR+) from 0.8 to 2.2 and negative likelihood ratios (LR-) from 0.7 to 1.0. On urine test strip analysis (leukocyte esterase (LE), nitrite (N), blood (B), and protein (P)), only a positive nitrite reaction (LR+ = 7.5–24.5) was useful to rule in a UTI. To rule out a UTI, only the absence of any leukocyte esterase or blood reaction was sufficiently robust (LR- 0.2–0.4); when LE, N, and B were all non-reactive, the LR- was 0.1. Combining the elements of the urine test strip analysis did not significantly improve the LR+. Microscopic urine analysis showed that the presence of any WBCs had the highest LR+ (5.0), and the absence of both RBCs and WBCs had the best LR- (0.08–0.2).
Conclusion: No single historical or physical exam finding can rule out or rule in UTI. Positive nitrite and the
presence of any WBCs were good markers of a UTI.
The combination of a negative LE, N, and B or the
absence of any RBC or WBC can rule out UTI in
patients with low pretest probability.
334
Comparison of 64 and 320 Slice Coronary
Computed Tomography Angiography
Adam J. Singer, Summer J. Ferraro, Alexander
Abamowicz, Peter Viccellio, Michael Poon
Stony Brook University, Stony Brook, NY
Background: Many EDs have recently introduced coronary computed tomography angiography (CCTA) for
the evaluation of low to intermediate risk chest pain.
Objectives: We compared radiation doses and intravenous (IV) contrast volumes using 64 and 320 slice CCTA
in ED patients with chest pain. We hypothesized that
320 slice CCTA would reduce radiation doses and IV
contrast volumes.
Methods: Study Design-Prospective, observational. Setting-Suburban, academic ED with annual census of
90,000 and dedicated 64 and 320 slice CTs. Subjects-ED
patients with low-to-intermediate risk chest pain undergoing CCTA to exclude CAD. We compared 100 consecutive patients each scanned on the 64 or 320 slice
CCTA in 2010–2011. Measures and outcomes-Data were
prospectively collected using standardized data
collection forms required prior to performing CCTA.
The main outcomes were cumulative radiation doses
and volumes of intravenous contrast. Data analysis-Groups were compared with t-, Mann-Whitney U, and chi-square tests.
Results: The mean ages of patients imaged with the 64- and 320-slice scanners were 49 (SD 10) and 51 (SD 13) years, respectively (P = 0.27). Male:female ratios were also similar (57:43 vs. 51:49, respectively; P = 0.40). Both mean (P < 0.001) and median (P = 0.006) effective radiation doses were significantly lower with the 320-slice (6.8 and 6 mSv) than with the 64-slice scanner (12.2 and 10 mSv), respectively. Prospective gating was successful in 100% of the 320-slice scans but in only 38% of the 64-slice scans (P < 0.001). Mean IV contrast volumes were also lower for the 320- vs. the 64-slice scanner (74 ± 10 vs. 96 ± 12 ml; P < 0.001). The percentage of non-diagnostic scans was similarly low with both scanners (3% each). There were no differences in the use of beta-blockers or nitrates.
Conclusion: When compared with the 64-slice scanner,
the 320-slice scanner reduces the effective radiation
doses and IV contrast volumes in ED patients with chest pain undergoing CCTA. The need for beta-blockers and nitrates
was similar and both scanners achieved excellent diagnostic image quality.
335
Use of a Novel Ambulatory Cardiac
Monitor to Detect Arrhythmias in
Discharged ED Patients
Ayesha Sattar1, Dorian Drigalla2, Steven
Higgins3, Donald Schreiber1
1Stanford University School of Medicine, Stanford, CA; 2Scott & White Memorial Hospital, Texas A&M College of Medicine, Temple, TX; 3Scripps Memorial Hospital, La Jolla, CA
Background: Emergency department (ED) patients
who present with symptoms that may be due to arrhythmias (ARR), such as palpitations or syncope, may
be discharged after a negative evaluation and referred
for ambulatory cardiac monitoring. This approach is
costly and ineffective due to poor follow-up and compliance.
Objectives: To assess the utility of a novel, single-use
outpatient ambulatory cardiac monitoring patch applied
in the ED at discharge to detect arrhythmias.
Methods: Between February and October 2011, a continuous recording patch (Zio Patch - iRhythm Technologies, Inc.) was applied at discharge on a
consecutive sample of ED patients with suspected ARR.
Patients could wear the patch for up to 14 days and
press the integrated marker button when symptomatic.
Devices were mailed back for analysis for any significant ARR defined as: ventricular tachycardia (VT), SVT,
paroxysmal atrial fibrillation (PAF), ≥3 sec pause, 2nd
degree Mobitz II or 3rd degree AV block, or symptomatic bradycardia. Descriptive statistics and t-tests with
95% confidence intervals (95%CI) were used for analysis.
Results: 135 patients (65 male, 48%; mean age 48.6 years)
were enrolled and none were lost to follow-up. Palpitations (30%) or syncope (18%) were common indications. Average device wear time was 6.1 days (95%CI
5.8–6.4; max 14 days). 51 (38%) had ≥1 significant
ARR and 7 (13.7%) were symptomatic at the time.
Average time to first ARR episode was 1.8 days
(95%CI 1.6–2.0; max 9.8 days) and first symptomatic
ARR 2.1 days (95%CI 1.8–2.4; max 8.6 days). 44 SVT,
5 PAF, 3 VT, and 1 AV block were detected. Women
and patients with palpitations required significantly
longer monitoring time to detect ARR (Table 1). 81
symptomatic patients (60%) did not have any significant ARR. No patients who presented with syncope
and discharged from the ED were found to have a
significant ARR.
Conclusion: The Zio Patch is a novel, single-use
ambulatory cardiac monitor that successfully facilitates
ARR diagnosis upon ED discharge. Its ease of use yielded
excellent (100%) compliance, and it detected symptomatic and
asymptomatic ARR over wear periods of up to 14 days. Patients with palpitations and women required longer wear times to detect
ARR. The device ruled out arrhythmias in 60% of
symptomatic patients. The potential for documenting
clinical arrhythmias over a longer term than the
standard 24–48 hour Holter monitor may have clinical
benefit.
336
Inferior Vena Cava To Aorta Ratio Has
Limited Value For Assessing Dehydration
In Young Pediatric Emergency Department
Patients
Molly Theissen, Jonathan Theoret,
Michael M. Liao, John Kendall
Denver Health Medical Center, Denver, CO
Background: A few studies have demonstrated that
bedside ultrasound measurement of inferior vena cava
to aorta (IVC-to-Ao) ratio is associated with the level of
dehydration in pediatric patients, and a cutoff
of 0.8 has been proposed, below which a patient is
considered dehydrated.
Objectives: We sought to externally validate the ability
of IVC-to-Ao ratio to discriminate dehydration and the
proposed cutoff of 0.8 in an urban pediatric emergency
department (ED).
Methods: This was a prospective observational study at
an urban pediatric ED. We included patients aged 3 to
60 months with clinical suspicion of dehydration by the
ED physician and an equal number of control patients
with no clinical suspicion of dehydration. We excluded
children who were hemodynamically unstable, had
chronic malnutrition or failure to thrive, open abdominal
wounds, or were unable to provide patient or parental
consent. A validated clinical dehydration score (CDS)
(range 0 to 8) was used to measure initial dehydration status. An experienced sonographer blinded to the CDS and
not involved in the patient’s care measured the IVC-to-Ao
ratio on the patient prior to any hydration. CDS was
collapsed into a binary outcome of no dehydration or any
level of dehydration (1 or higher). The ability of IVC-to-Ao
ratio to discriminate dehydration was assessed using area
under the receiver operating characteristic curve (AUC)
and the sensitivity and specificity of IVC-to-Ao ratio was
calculated for three cutoffs (0.6, 0.8, 1.0). Calculation of
AUC was repeated after adjusting for age and sex.
Results: 92 patients were enrolled, 39 (42%) of whom had
a CDS of 1 or higher. Median age was 28 (interquartile
range 16–39) months, and 53 (58%) were female. The IVC-to-Ao ratio showed an unadjusted AUC of 0.66 (95% CI
0.54–0.77) and adjusted AUC of 0.67 (95% CI 0.56–0.79).
For a cutoff of 0.6 sensitivity was 26% (95% CI 13%–42%)
and specificity 92% (95% CI 82%–98%); for a cutoff of 0.8
sensitivity was 51% (95% CI 35%–68%) and specificity 74%
(95% CI 60%–85%); for a cutoff of 1.0 sensitivity was 79%
(95% CI 64%–91%) and specificity 40% (95% CI 26%–54%).
Conclusion: The ability of the IVC-to-Ao ratio to discriminate dehydration in young pediatric ED patients
was modest and the cutoff of 0.8 was neither sensitive
nor specific.
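The discrimination analysis reported here (AUC plus sensitivity and specificity at fixed cutoffs) can be illustrated on synthetic ratio data; the distributions below are assumptions for illustration, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic IVC-to-Ao ratios (not study data): dehydrated patients tend lower.
dehydrated = rng.normal(0.85, 0.25, 39)
hydrated   = rng.normal(1.05, 0.25, 53)

# AUC via the rank (Mann-Whitney) formulation: the probability that a randomly
# chosen dehydrated patient has a LOWER ratio than a hydrated one.
auc = np.mean(dehydrated[:, None] < hydrated[None, :])

for cutoff in (0.6, 0.8, 1.0):
    sens = np.mean(dehydrated < cutoff)   # dehydrated correctly below cutoff
    spec = np.mean(hydrated >= cutoff)    # hydrated correctly at/above cutoff
    print(f"cutoff {cutoff}: sens {sens:.0%}, spec {spec:.0%}")
print(f"AUC = {auc:.2f}")
```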
337
Multicenter Randomized Comparative
Effectiveness Trial of Cardiac CT vs
Alternative Triage Strategies in Acute Chest
Pain Patients in the Emergency Department:
Results from the ROMICAT II Trial
Udo Hoffmann1, Quynh A. Truong1, Hang Lee1,
Eric T. Chou2, Shant Kalanjian2, Pamela
Woodard3, John T. Nagurney1, James H. Pope4,
Thomas Hauser5, Charles White6, Mike
Mullens3, Nathan I. Shapiro5, Michael Bond6,
Scott Weiner7, Pearl Zakroysky1, Douglas
Hayden1, Stephen D. Wiviott8, Jerome Fleg9,
David Schoenfeld1, James Udelson7
1Massachusetts General Hospital, Boston, MA; 2Kaiser Permanente Fontana, Fontana, CA; 3Washington University School of Medicine, St. Louis, MO; 4Baystate Medical Center, Springfield, MA; 5Beth Israel Deaconess Medical Center, Boston, MA; 6University of Maryland, Baltimore, Baltimore, MD; 7Tufts Medical Center, Boston, MA; 8Brigham and Women's Hospital, Boston, MA; 9NHLBI, Bethesda, MD
Background: While early cardiac computed tomographic angiography (CCTA) could be more effective for
managing emergency department (ED) patients with
acute chest pain and intermediate (>4%) risk of acute
coronary syndrome (ACS) than current management
strategies, it also could result in increased testing, cost,
and radiation exposure.
Objectives: The purpose of the study was to determine
whether incorporation of CCTA early in the ED evaluation process leads to more efficient management and
earlier discharge than usual care in patients with acute
chest pain at intermediate risk for ACS.
Methods: Randomized comparative effectiveness trial
enrolling patients 40–75 years of age without
known CAD, presenting to the ED with chest pain but
without ischemic ECG changes or elevated initial troponin and requiring further risk stratification for decision
making, at nine US sites. Patients are being randomized
to either CCTA as the first diagnostic test or to usual
care, which could include no testing or functional testing such as exercise ECG, stress SPECT, and stress
echo following serial biomarkers. Test results were provided to physicians but management in neither arm
was driven by a study protocol. Data on time, diagnostic testing, and cost of index hospitalization, and the following 28 days are being collected. The primary
endpoint is length of hospital stay (LOS). The trial is
powered to allow for detection of a difference in LOS
of 10.1 hours between competing strategies with 95%
power assuming that 70% of projected LOS values are
true. Secondary endpoints are cumulative radiation
exposure, and cost of competing strategies. Tertiary
endpoints are institutional, caregiver, and patient characteristics associated with primary and secondary outcomes. Rate of missed ACS within 28 days is the safety
endpoint.
Results: As of November 21st, 2011, 880 of 1000
patients have been enrolled (mean age: 54 ± 8, 46.5%
female, ACS rate 7.55%). The anticipated completion of
the last patient visit is 02/28/12 and the database will be
locked in early March 2012. We will present the results
of the primary, secondary, and some tertiary endpoints
for the entire cohort.
Conclusion: ROMICAT II will provide rigorous data on
whether incorporation of CCTA early in the ED evaluation process leads to more efficient management and
triage than usual care in patients with acute chest pain
at intermediate risk for ACS. (Originally submitted as a
‘‘late-breaker.’’)
338
Meta-analysis Of Magnetic Resonance
Imaging For The Diagnosis Of Appendicitis
Michael D. Repplinger, James E. Svenson,
Scott B. Reeder
University of Wisconsin School of Medicine
and Public Health, Madison, WI
Background: The diagnosis of appendicitis is increasingly reliant on imaging confirmation. While CT is commonly used, it exposes patients to ionizing radiation,
which increases lifetime risk of cancer. MRI has been
evaluated in small studies as an alternative.
Objectives: This study aims to perform a meta-analysis
on all published studies since 2005 evaluating the use of
MRI to diagnose appendicitis. Calculated measures
include sensitivity, specificity, positive predictive value,
and negative predictive value.
Methods: This is a meta-analysis of all studies that
evaluated the use of MRI for diagnosing appendicitis.
All retrospective and prospective studies published in
English and listed in PubMed since 2005 were included.
Earlier studies were excluded in order to only report
on modern imaging sequences. Studies were also
excluded if a gold standard was not explicitly stated or
if raw numbers were not provided to calculate the
study outcomes. Data were abstracted by one investigator and confirmed by another. This included absolute
number of true positives, true negatives, false positives,
false negatives, number of equivocal cases, type of MRI
sequence, and demographic data including study setting and sex distribution. Data were analyzed using
Microsoft Excel.
Results: There were 11 studies that met inclusion and
exclusion criteria. A total of 626 subjects from six different countries were enrolled in these trials, 445 (70.1%) of
whom were women. A majority of participants were
pregnant. Nearly all studies reported routine use of
unenhanced MRI with optional use of contrast-enhanced
imaging or diffusion weighted imaging. Of the total
cohort, 33 subjects (5%, 95% CI 4–6%) had equivocal
imaging results. Sensitivity, specificity, positive predictive value, and negative predictive value were 96.9%
(95%CI 95.6–98.2%), 96.7% (95%CI 95.8–97.6%), 94.8%
(95%CI 93.5–96.1%), and 98% (95%CI 97.4–98.6%), respectively.
Conclusion: Appendicitis appears to be reliably diagnosed by MRI. Using this imaging modality instead of CT would decrease the overall burden of
ionizing radiation from medical imaging.
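Pooled diagnostic metrics of this form can be computed from aggregate true/false positive and negative counts; the sketch below uses hypothetical counts, and the Wilson score interval is one reasonable CI choice (the abstract does not state its interval method).

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical pooled 2x2 counts (not the meta-analysis data).
tp, fp, fn, tn = 180, 10, 6, 400
metrics = {
    "sensitivity": (tp, tp + fn),
    "specificity": (tn, tn + fp),
    "PPV":         (tp, tp + fp),
    "NPV":         (tn, tn + fn),
}
for name, (k, n) in metrics.items():
    lo, hi = wilson_ci(k, n)
    print(f"{name}: {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```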
339
Physician Variability In Positive Diagnostic
Yield Of Advanced Radiography To
Diagnose Pulmonary Embolus In Four
Hospitals: 2006–2009
Dana Kindermann1, Jordan Sax2,
Kevin Maloy3, Dave Milzman3, Jesse Pines4
1Georgetown University Hospital / Washington Hospital Center, Washington, DC; 2Johns Hopkins Hospital, Baltimore, MD; 3Washington Hospital Center, Washington, DC; 4George Washington University Hospital, Washington, DC
Background: Many studies have documented higher
rates of advanced radiography utilization across U.S.
emergency departments (EDs) in recent years, with an
associated decrease in diagnostic yield (positive tests /
total tests). Provider-to-provider variability in diagnostic yield has not been well studied, nor have the factors
that may explain these differences in clinical practice.
Objectives: We assessed the physician-level predictors
of diagnostic yield using advanced radiography to diagnose pulmonary embolus (PE) in the ED, including
demographics and D-dimer ordering rates.
Methods: We conducted a retrospective chart review of
all ED patients who had a CT chest or V/Q scan ordered
to rule out PE from 1/06 to 12/09 in four hospitals in the
Medstar health system. Attending physicians were
included in the study if they had ordered 50 or more scans
over the study period. The result of each CT and VQ scan
was recorded as positive, negative, or indeterminate, and
the identity of the ordering physician was also recorded.
Data on provider sex, residency type (EM or other), and
year of residency completion were collected. Each provider's positive diagnostic yield was calculated, and logistic regression analysis was done to assess the association
between positive scans and provider characteristics.
Results: During the study period, 15,015 scans (13,571
CTs and 1,443 V/Qs) were ordered by 93 providers. The
physicians were an average of 9.7 years from residency,
36% were female, and 98% were EM-trained. Diagnostic yield varied significantly among physicians
(p < 0.001), and ranged from 0% to 18%. The median
diagnostic yield was 5.9% (IQR 3.8%–7.8%). The use of
D-dimer by provider also varied significantly from 4%
to 48% (p < 0.001). The odds of a positive test were significantly lower among providers less than 10 years out
of residency graduation (OR 0.80, CI 0.68–0.95) after
controlling for provider sex, type of residency training,
D-dimer use, and total number of scans ordered.
Conclusion: We found significant provider variability
in diagnostic yield for PE and use of D-dimer in this
study population, with 25% of providers having diagnostic yield less than or equal to 3.8%. Providers who
were more recently graduated from residency appear
to have a lower diagnostic yield, suggesting a more
conservative approach in this group.
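Provider-to-provider variability in yield can be tested as a contingency table; a sketch with hypothetical per-provider counts follows (the abstract's modeling used logistic regression, which this simpler test does not reproduce).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical per-provider (positive, negative) scan counts, not study data.
counts = np.array([
    [3, 60],    # provider A: yield ~4.8%
    [12, 90],   # provider B: yield ~11.8%
    [5, 140],   # provider C: yield ~3.4%
    [9, 70],    # provider D: yield ~11.4%
])
chi2, p, dof, expected = chi2_contingency(counts)
yields = counts[:, 0] / counts.sum(axis=1)
print("per-provider yield:", np.round(yields, 3))
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```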
340
Association Between the ‘‘Seat Belt Sign’’
and Intra-abdominal Injury in Children with
Blunt Torso Trauma in Motor Vehicle
Collisions
Dominic Borgialli1, Angela Ellison2,
Peter Ehrlich3, Bema Bonsu4, Jay Menaker5,
David Wisner6, Shireen Atabaki7, Michelle
Miskin8, Peter Sokolove6, Kathy Lillis9, Nathan
Kuppermann6, James Holmes6, and the
PECARN IAI Study Group10
1University of Michigan School of Medicine and Hurley Medical Center, Flint, MI; 2University of Pennsylvania School of Medicine, Philadelphia, PA; 3University of Michigan School of Medicine, Ann Arbor, MI; 4Nationwide Children's Hospital, Columbus, OH; 5University of Maryland Medical Center, Shock Trauma, Baltimore, MD; 6UC Davis School of Medicine, Sacramento, CA; 7The George Washington University School of Medicine, Washington, DC; 8University of Utah School of Medicine, Salt Lake City, UT; 9University of New York at Buffalo School of Medicine, Buffalo, NY; 10EMSC, Washington, DC
Background: The presence of the ‘‘Seat Belt Sign’’
(SBS) is believed to be associated with significant injury
in children but a large, prospective multicenter study
has not been performed.
Objectives: To determine the association between the
abdominal SBS and intra-abdominal injury (IAI) in children presenting to emergency departments (ED) with
blunt torso trauma after a motor vehicle collision
(MVC).
Methods: We performed a prospective, multicenter,
observational study of children with blunt torso trauma
presenting after MVCs. Patient history and physical
examination findings were documented before abdominal CT or laparotomy. SBS was defined as a continuous
area of erythema, ecchymosis, or abrasion across the
abdomen secondary to a seat belt restraint. We calculated the relative risk (RR) of IAI with 95% confidence
intervals (CI) for children with and without SBS and, as
a secondary analysis, the risk of IAI in those patients
with SBS who were without abdominal pain or tenderness.
Results: 3740 children with blunt torso trauma in
MVCs with documentation about presence/absence of
SBS from 20 EDs were enrolled; 585 (16%) had SBS.
IAIs were more common in patients with SBS (14.4%
vs 5.2%, RR 2.7; 95% CI: 2.1 to 3.5). Patients with SBS
were more likely to have gastrointestinal injury (8.0%
vs 0.5%, RR 15.8; 95% CI: 9.0 to 27.7), pancreatic injury
(1.0% vs 0.3%, RR 3.6; 95% CI: 1.3 to 10.1), and solid
organ injury (6.5% vs 4.4%, RR 1.5; 95% CI: 1.05 to 2.1)
than children without SBS. Patients with SBS were
more likely to undergo therapeutic laparotomy than
those without SBS (6.3% vs 0.7%, RR 9.5; 95% CI: 5.6
to 16.1). IAI was diagnosed in 11/196 patients (5.6%,
95% CI: 2.8 to 9.8%) with SBS and no abdominal pain
or tenderness.
Conclusion: Patients with SBS after MVC are at
greater risk of IAI than those without SBS, predominantly due to gastrointestinal and pancreatic injuries. In
addition, those with SBS are more likely to undergo
therapeutic laparotomy than those without. Although
IAI is uncommon in SBS patients with no abdominal
pain or tenderness, the risk of IAI is such that additional evaluation is generally necessary.
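Relative risks with confidence intervals of the kind reported here follow directly from the 2x2 counts; the sketch below reconstructs approximate counts from the abstract's reported denominators and percentages (an illustrative reconstruction, not the study dataset).

```python
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """RR of outcome in exposed (a/n1) vs unexposed (b/n2), log-normal CI."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# IAI with vs without SBS: 84/585 vs 164/3155 is reconstructed from the
# reported rates (14.4% of 585; 5.2% of 3740 - 585), not taken verbatim.
rr, lo, hi = relative_risk(84, 585, 164, 3155)
print(f"RR = {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")   # ~2.7 (2.1-3.5)
```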
341
Pelvic CT Imaging In Adult Blunt Trauma:
Does It Add Clinically Useful Information?
Omayra L. Marrero, Alice M. Mitchell, Stacy
Reynolds
Carolinas Medical Center, Charlotte, NC
Background: CT diagnostic imaging for blunt trauma
routinely includes the abdomen (CTa) and pelvis (CTp)
to evaluate suspected intra-abdominal injury (IAI).
Objectives: We hypothesized that CTp contributes half
the effective dose (E) of radiation to imaging protocols
but adds to the diagnosis of IAI in less than 2% of adult
blunt trauma patients without suspected pelvic fracture.
Methods: We performed a retrospective study of
patients older than 17 years evaluated with CTa and
CTp at a Level I trauma center from March to June
2010 for blunt trauma. Two trained abstractors
recorded data on standardized forms. Patients were
excluded for known or suspected pelvic fracture
defined by pelvic x-ray, CT bony pelvis, pelvic angiography, or an unstable or tender pelvis on exam. The primary outcomes were IAI identified on CTp and the
mean percent reduction in E without CTp (Ep/Eap).
Duplicate CT images were excluded from E calculations. We defined IAI as solid organ, hollow viscus, or
vascular injury reported to the abdomen (dome of diaphragm to top of iliac crest) and pelvis (iliac crest to the
greater trochanter). As a secondary outcome, we
recorded occult pelvic fractures (OPF) on CTp. We estimated E based on the dose length product (DLP). We
calculated mean E for combined CTa and CTp studies
(Eap) and CTa (Ea) and CTp (Ep) studies alone. The mean
effective percent dose reductions (Ep/Eap) were
reported. A kappa statistic was calculated for the
primary outcome of IAI.
Results: 730 adult patients were evaluated at our center
for blunt trauma during the study period. 210 patients
met study criteria (84 female, 126 male). IAIs occurred
in 23 patients (11%, 95% CI: 7 to 16%). CTp diagnosed
IAI (traumatic ventral hernia) in 1 patient (0.5%, 95%
CI: 0 to 2.6%), and detected 2 OPF and 1 non-displaced
sacral fracture. Of the patients with OPF, 1 patient had
a negative pelvic x-ray. The observed agreement was
99.5% (κ = 0.66, 95% CI: 0.53 to 0.79). Eap was 10 mSv
and Ea was 6 mSv. Elimination of CTp reduced E by a
mean of 37% (95% CI: 29 to 46%).
Conclusion: In this pilot study, pelvic imaging diagnosed IAI in 0.5% of patients. Elimination of CTp
reduced the mean E by one third. The utility of CTp in
low risk adult trauma patients without suspected pelvic
fracture warrants additional investigation.
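Effective dose estimated from the dose-length product, as described in the methods, is a simple multiplication by a region-specific conversion coefficient; the k value and DLP inputs below are assumptions for illustration, not the study's values.

```python
# Effective dose estimated from dose-length product (DLP):
# E [mSv] ~= k * DLP [mGy*cm], where k is a region-specific coefficient.
# k = 0.015 mSv/(mGy*cm) is a commonly cited adult abdomen/pelvis value
# (an assumption here, not taken from the abstract).
K_ABDOMEN_PELVIS = 0.015

def effective_dose(dlp_mgy_cm, k=K_ABDOMEN_PELVIS):
    return k * dlp_mgy_cm

e_ap = effective_dose(667)   # hypothetical combined CTa+CTp DLP -> ~10 mSv
e_a  = effective_dose(400)   # hypothetical CTa-alone DLP -> 6 mSv
print(f"Eap = {e_ap:.0f} mSv, Ea = {e_a:.0f} mSv, "
      f"dose reduction without CTp = {1 - e_a/e_ap:.0%}")
```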
342
The Clinical Significance of Chest CT When
the CXR is Normal in Blunt Trauma
Patients
Bory Kea1, Ruwan Gamarallage1, Hemamalini
Vairamuthu1, Jonathan Fortman1, Eli Carrillo1,
Kevin Lunney2, Gregory W. Hendey2, Robert
M. Rodriguez1
1University of California, San Francisco, School of Medicine; San Francisco General Hospital, San Francisco, CA; 2University of California, San Francisco, School of Medicine-Fresno, Fresno Regional Community Medical Center, Fresno, CA
Background: Although computed tomography (CT)
has been shown to detect more injuries than plain radiography in blunt trauma patients, it is unclear whether
these injuries change patient management.
Objectives: We sought to determine: 1) the proportion
of patients with a normal initial CXR who are subsequently diagnosed with radiologic injuries on CT, 2) the
proportion of patients with an abnormal initial CXR
who are found not to have injuries on CT, and 3) the
clinical significance (major, minor, or no clinical significance) of radiologic injuries seen on CT as determined
by trauma expert panel.
Methods: At two urban Level I trauma centers, blunt
trauma victims over 14 years of age who received ED
chest imaging as part of their evaluation were enrolled.
An independent trauma expert panel consisting of six
EM physicians and four trauma surgeons classified
pairs of chest injuries and interventions/management
(e.g. pneumothorax and chest tube placement) as major,
minor, or no clinical significance. Charts of enrolled
subjects were reviewed to determine their appropriate
designation according to the trauma panel’s classification.
Results: Of the 3639 subjects enrolled, 2808 (77.2%)
had a CXR alone and 831 (22.8%) had both a CXR and
chest CT. 7.0% (256/3639) had CXRs suggesting blunt
trauma, of which 177 (69.1%, 95% CI: 63.2–74.5) had
chest CTs confirming injury and 25 (9.8%, 95% CI: 6.7–
14.0%) revealed no injury on chest CT. Of 590 patients
who had a chest CT after a normal CXR, 483 (81.6%,
95% CI: 78.6–84.8%) had CTs that were also read as
normal and 18.1% (107/590) had CTs that diagnosed
injuries, primarily rib fractures, pneumothorax, and
hemothorax. Of the 590 patients with a normal CXR,
chest CT led to a diagnosis of major injury in 13 (2.2%,
95% CI: 1.3–3.7%) and minor injury in 62 (10.5%, 95%
CI: 8.3–13.2%) cases.
Conclusion: Although chest CT frequently detects injuries missed on CXR in blunt trauma patients, it rarely
changes patient management. Given the relatively low
added value with the high costs and radiation risks,
development of a guideline for selective chest CT utilization in blunt trauma is warranted.
343
Occult Pneumothoraces Visualized in
Children with Blunt Torso Trauma
Lois Lee1, Alexander Rogers2, Peter Ehrlich2,
Maria Kwok3, Peter Sokolove4, Stephen
Blumberg5, Josh Kooistra6, Michelle Miskin7,
Sandra Wootton-Gorges4, Art Cooper8,
Nathan Kuppermann4, James Holmes4,
PECARN IAI Study Group4
1Children's Hospital Boston, Boston, MA; 2Mott Children's Hospital, Ann Arbor, MI; 3Columbia University Medical Center, New York City, NY; 4UC Davis Children's Hospital, Sacramento, CA; 5Jacobi Medical Center, Bronx, NY; 6Spectrum Health Helen Devos Children's Hospital, Grand Rapids, MI; 7University of Utah, Salt Lake City, UT; 8Columbia University Medical Center at Harlem Hospital, New York City, NY
Background: Chest radiography is frequently the initial
screening test to identify pneumothoraces (PTX) in
trauma patients, but it has limitations. Computed
tomography scans may identify PTX not seen on CXR
(‘‘occult PTX’’).
Objectives: The objectives of this study were to determine the prevalence of occult PTX in injured children
and the rate of treatment with tube thoracostomy
among those with occult PTX.
Methods: We conducted a planned sub-study of children with CXRs performed from a large prospective
multicenter observational cohort study of children
<18 years old evaluated in emergency departments for
blunt torso trauma from 5/07 to 1/10. The faculty radiologist interpretations of the initial CXRs and any subsequent imaging studies, including CT scans, were
reviewed for the presence of a PTX. An ‘‘occult PTX’’
was defined as a PTX not identified on initial CXR, but
visualized on CT scan. Prevalence rates and rate differences with 95% confidence intervals were calculated.
Results: Of 12,044 enrolled in the parent study, 8,030
(67%) children (median age 11.3 years) underwent CXR
in the ED. PTX on any imaging modality was identified
in 383 (4.8%, 95% CI 4.3%, 5.3%) patients. The initial
CXR demonstrated a PTX in 148 patients (1.8%, 95% CI
1.6%, 2.2%) including one false positive PTX. Occult
PTX was diagnosed in 235 (2.9%, 95% CI 2.6%, 3.3%)
patients. A tube thoracostomy was placed in 85 (57.8%,
95% CI 49.4%, 65.9%) patients with PTX on initial CXR
and in 42 (17.9%, 95% CI 13.2%, 23.4%) patients with
occult PTX (rate difference 39.9%, 95% CI 30.6%,
49.3%).
Conclusion: In pediatric patients with blunt torso
trauma, PTX are uncommon, but only one-third were
identified on initial CXR. CT will frequently identify
occult PTX, and although some undergo tube thoracostomy, most do not.
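A rate difference with a Wald confidence interval, as reported, can be checked directly from the abstract's counts (assuming the one false-positive CXR is excluded from the initial-CXR denominator, which is an interpretation on our part).

```python
import math

def rate_difference(k1, n1, k2, n2, z=1.96):
    """Difference of two proportions with a Wald 95% CI."""
    p1, p2 = k1 / n1, k2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Thoracostomy rates: PTX on initial CXR (85/147, assuming the false-positive
# is excluded) vs occult PTX (42/235), counts taken from the abstract.
diff, lo, hi = rate_difference(85, 147, 42, 235)
print(f"rate difference = {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# -> 39.9% (30.6% to 49.3%), matching the reported values
```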
344
Use and Impact of the FAST Exam in
Children with Blunt Abdominal Trauma
Jay Menaker1, Stephen Blumberg2, David
Wisner3, Peter Dayan4, Michael Tunik5,
Madelyn Garcia6, Prashant Mahajan7, Michelle
Miskin8, David Monroe9, Dominic Borgialli10,
Nathan Kuppermann3, James Holmes3,
PECARN IAI Study Group11
1University of Maryland, Baltimore, MD; 2Jacobi Medical Center, Bronx, NY; 3UC Davis School of Medicine, Sacramento, CA; 4Columbia Medical School, New York, NY; 5New York University, New York, NY; 6Rochester University, Rochester, NY; 7Wayne State University, Detroit, MI; 8University of Utah, Salt Lake City, UT; 9Howard County Hospital, Columbia, MD; 10Hurley Medical Center, Flint, MI; 11EMSC, Washington, DC
Background: Use of the focused assessment sonography for trauma (FAST) is controversial in children with
blunt abdominal trauma.
Objectives: To evaluate the effect of clinician-performed
FAST in children with abdominal trauma. We hypothesized that those undergoing a FAST would be less likely
to undergo abdominal computed tomography (CT).
Methods: We performed a planned analysis of a 20-center prospective study of children (<18 years) with blunt
abdominal trauma. Patients with GCS scores >8 were eligible but those with hypotension and taken directly to the
operating suite prior to CT were excluded. Patients from
eight centers which did FAST on <5% of patients were
excluded. Patients were risk stratified by clinician suspicion for intra-abdominal injury (IAI) as very low <1%, low
1–5%, moderate 5–10%, high 11–50%, and very high
>50%. The relative risk (RR) for CT use based on undergoing a FAST was determined in each of these strata.
Results: Of 11,025 potentially eligible patients, 6,558 (median age = 10.7 years,
IQR = 6.3–15.5) met eligibility. 3,076 (46.9%) underwent
abdominal CT and 381 (5.8%) were diagnosed with an
IAI. 887 (13.7%) underwent clinician-performed FAST
exam during emergency department evaluation. Use of
the FAST exam among the 12 participating sites ranged
from 5.5% to 58% and increased as clinician suspicion
for IAI increased: 11.0% with <1% risk, 13.5% with
1–5% risk, 20.5% with 6–10% risk, 23.2% with 11–50%
risk, and 30.7% with >50% risk. RRs for CT use stratified by clinician suspicion of IAI showed that patients
with a low to moderate risk of IAI were less likely to
receive a CT following a FAST exam compared to those
not undergoing FAST: RR = 0.83 (0.67, 1.03), RR = 0.81
(0.72, 0.91), RR = 0.85 (0.78, 0.94), RR = 0.99 (0.94, 1.05),
and RR = 0.97 (0.91, 1.05) for patients with suspicion for
IAI of <1%, 1–5%, 5–10%, 11–50%, and >50%.
Conclusion: The FAST exam is used in a small percentage of children with blunt abdominal trauma. While its
use is highly variable amongst centers, use increases as
clinician suspicion for IAI increases. Patients with a low
to moderate clinician suspicion of IAI are less likely to
undergo abdominal CT if they received a FAST exam.
A randomized controlled trial is required to determine
the benefits and drawbacks of the FAST exam in the
evaluation of children with blunt abdominal trauma.
345
History Of Anticoagulation And Head
Trauma In Elderly Patients: Is There Really
An Increased Risk Of Bleeding?
Laura D. Melville, Ronald Tanaka
New York Methodist Hospital, Brooklyn, NY
Background: The literature reports that anticoagulation increases the risk of mortality in patients presenting to emergency departments (ED) with head trauma
(HT). It has been suggested that such patients should be
treated in a protocolized fashion, including CT within
15 minutes, and anticipatory preparation of FFP before
CT results are available. There are significant logistical
and financial implications associated with implementation of such a protocol.
Objectives: Our primary objective was to determine the
effect of anticoagulant therapy on the risk of intracranial
hemorrhage (ICH) in elderly patients presenting to our
urban community hospital following blunt head injury.
Methods: This was a retrospective chart review study
of HT patients >60 years of age presenting to our ED
over a 6-month period. Charts reviewed were identified
using our electronic medical record via chief complaints
and ICD-9 codes and cross referencing with written CT
logs. Research assistants underwent review of at least
25% of their contributing data to validate reliability. We
collected information regarding use of warfarin, clopidogrel, and aspirin and CT findings of ICH. Using univariate logistic regression, we calculated odds ratios
(OR) for ICH with 95% CI.
Results: We identified 363 elderly HT patients. The
mean age of the population was 72; 34 (8.3%) reported
using anticoagulant therapy, and 23% were on antiplatelet drugs. 14 (3.8%) of the cohort had ICH, 3
patients required neurosurgical intervention, and 1 had
transfusion of blood products. Of the non-anticoagulated patients, 12 (3.6%) were found to have ICH; half of
those (6) were on an anti-platelet medication. Of thirty-four patients on warfarin, 2 (5.9%) had ICH. Six
patients (7.1%) taking anti-platelet medication had ICH.
OR for warfarin = 1.33 (95% CI 0.20, 5.17), p = 0.71; OR
for clopidogrel = 2.31 (0.34, 9.25), p = 0.30; OR for
ASA = 2.26 (0.72, 6.71), p = 0.14; and for any combination of these treatments OR = 2.33 (0.79, 7.26),
p = 0.13.
Conclusion: In our patient population, the risk of ICH
in our anticoagulated patients was not significantly
higher than in those not taking warfarin. Anticipatory
preparation of FFP may not be ideal resource utilization in our setting. Although not statistically significant,
the trend toward higher risk in the patients on antiplatelet therapy warrants continued data collection and
further investigation. (Originally submitted as a ‘‘late-breaker.’’)
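Univariate odds ratios of this kind come straight from 2x2 tables; the sketch below uses illustrative counts consistent with, but not identical to, the abstract's figures.

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """OR for a 2x2 table [[a, b], [c, d]] = (a*d)/(b*c), Woolf log CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (not the study's underlying table):
# rows = exposed/unexposed, cols = ICH yes/no.
or_, lo, hi = odds_ratio(a=2, b=32, c=12, d=317)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```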
346
The Value of microRNA for the Diagnosis
and Prognosis of Emergency Department
Patients with Sepsis
Mike Puskarich1, Nathan Shapiro2, Stephen
Trzeciak3, Jeffrey Kline4, Alan Jones1
1University of Mississippi Medical Center, Jackson, MS; 2BIDMC, Boston, MA; 3Cooper University Hospital, Camden, NJ; 4Carolinas Medical Center, Charlotte, NC
Background: MicroRNAs are short, non-coding RNA
sequences that regulate gene expression and protein
synthesis, and have been proposed as biomarkers for a
number of disease processes, including sepsis.
Objectives: To determine the diagnostic and prognostic ability of three previously identified microRNAs in ED
patients with sepsis.
Methods: Prospective observational study of a convenience sample of patients presenting to one of three
large, urban, tertiary care EDs. Inclusion criteria: 1)
Septic shock: suspected infection, two or more systemic
inflammatory response (SIRS) criteria, and systolic
blood pressure (SBP) <90 mmHg despite a 20 mL/kg
fluid bolus; 2) Sepsis: suspected infection, two or more
SIRS criteria, and SBP >90 mmHg; and 3) Control: ED
patients without suspected infection, no SIRS criteria,
and SBP >90 mmHg. Three microRNAs (miR-150, miR-146a, and miR-223) were measured using real-time
quantitative PCR from serum drawn at enrollment. IL-6,
IL-10, and TNF-α were measured using a Bio-Plex suspension system. Baseline characteristics, IL-6, IL-10,
TNF-α, and microRNAs were compared using one-way
ANOVA or Fisher exact test, as appropriate. Correlations between miRNAs and SOFA scores, IL-6, IL-10,
and TNF-α were determined using Spearman's rank.
A logistic regression model was constructed using
in-hospital mortality as the dependent variable and
miRNAs as the independent variables of interest. Bonferroni adjustments were made for multiple comparisons.
Results: Of 93 patients, 24 were controls, 29 had sepsis,
and 40 had septic shock. We found no difference in
serum miR-146a or miR-223 between cohorts, and
found no association between these microRNAs and
either inflammatory markers or SOFA score. miR-150
demonstrated a significant correlation with SOFA score
(ρ = 0.31, p = 0.01) and IL-10 (ρ = 0.37, p = 0.001), but not
IL-6 or TNF-α (p = 0.46, p = 0.59). Logistic regression
demonstrated miR-150 to be associated with mortality,
even after adjusting for SOFA score (p = 0.003).
Conclusion: Neither miR-146a nor miR-223 demonstrated
any diagnostic or prognostic ability in this cohort. miR-150 was associated with inflammation, increasing severity of illness, and mortality, and may represent a novel
marker for the diagnosis and prognosis of sepsis.
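Spearman rank correlations like those reported are a single call in scipy; the data below are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
# Synthetic data (not the study's): a biomarker loosely tracking SOFA score.
sofa = rng.integers(0, 15, size=69)
mir150 = 0.05 * sofa + rng.normal(0, 0.3, size=69)

rho, p = spearmanr(mir150, sofa)   # rank-based correlation and its p-value
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```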
347
Prognostic Value of Significantly Elevated
Serum Lactate Measurements in
Emergency Department Patients with
Suspected Infection
Michael Puskarich1, Jeffrey Kline2,
Richard Summers1, Alan Jones1
1University of Mississippi Medical Center, Jackson, MS; 2Carolinas Medical Center, Charlotte, NC
Background: Previous studies have confirmed the
prognostic significance of lactate concentrations categorized into groups (low, intermediate, high) among
emergency department (ED) patients with suspected
infection. Although the relationship between lactate
concentrations categorized into groups and mortality
appears to be linear, the relationship between lactate as
a continuous measurement and mortality is uncertain.
Objectives: We sought to evaluate the association
between blood lactate concentrations along an incremental continuum, up to a maximum value of 20 mmol/
L, and mortality in ED patients with suspected infection.
Methods: Retrospective cohort analysis of adult ED
patients with suspected infection from a large urban
emergency department during 2007–2010. Inclusion criteria: suspected infection evidenced by administration
of antibiotics in the ED and measurement of whole
blood lactate in the ED. Data were identified using a
query of the electronic medical record via a previously
compiled quality assurance database. All antibiotics
available on formulary over the study period were
included. If multiple lactate values were available for a
patient, only the first value was included. The primary
outcome was in-hospital mortality. Fractional polynomial regression was used to model the relationship
between lactate concentration and in-hospital mortality.
Results: 2,596 patients met inclusion criteria and were
analyzed. The initial median lactate concentration was
2.1 mmol/L (IQR 1.3, 3.3) and the overall mortality rate
was 14.4%. Of the entire cohort, 459 (17.6%) had an initial lactate >4 mmol/L. Mortality continued to rise
across the continuum of incremental elevations, 6% for
lactate <1.0 mmol/L up to 39% for a lactate of 18–
20 mmol/L. Polynomial regression analysis showed a
strong curvilinear correlation between lactate and mortality (R = 0.92, p < 0.0001).
Conclusion: In ED patients with suspected infection,
we found a strong curvilinear relationship between
incremental elevations in lactate concentration and
mortality up to 20 mmol/L. These data support the use
of lactate as a continuous variable rather than categorical variable when used for prognostic purposes.
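Fractional polynomial regression has no single standard implementation in the common Python statistics stack; a rough stand-in is a logistic model with lactate^0.5 and lactate terms (one candidate fractional-polynomial form), shown here on synthetic data rather than the study cohort.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
# Synthetic cohort (not the study's data): mortality risk rising in a
# curvilinear way with lactate, roughly 6% at <1 mmol/L up to ~40% at 20.
lactate = rng.uniform(0.5, 20, 2596)
true_p = 0.06 + 0.34 * np.sqrt(lactate / 20)
died = rng.random(2596) < true_p

# Stand-in for fractional polynomial regression: logistic model with
# sqrt(lactate) and lactate terms.
X = sm.add_constant(np.column_stack([np.sqrt(lactate), lactate]))
fit = sm.Logit(died.astype(float), X).fit(disp=0)
for lac in (1, 4, 10, 20):
    x = [1, np.sqrt(lac), lac]    # const, sqrt term, linear term
    print(f"lactate {lac:>2} mmol/L -> predicted mortality "
          f"{fit.predict([x])[0]:.0%}")
```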
348
Does Documented Physician Consideration
Of Sepsis Lead To More Aggressive
Treatment?
Aaron M. Stutz, Jenifer Luman, John Winkler,
Uwe Stolz, Lisa Stoneking, Kurt Denninghoff
University of Arizona, Tucson, AZ
Background: Timely treatment of sepsis is critical for
patient outcomes.
Objectives: To examine the association between emergency physician recognition of SIRS and sepsis and
subsequent treatment of septic patients.
Methods: A retrospective cohort study of all-age
patient medical records with positive blood cultures
drawn in the emergency department from 11/2008–1/
2009 at a Level I trauma center. Patient parameters
were reviewed including vital signs, mental status,
imaging, and laboratory data. Criteria for SIRS, sepsis,
severe sepsis, and septic shock were applied according
to established guidelines for pediatrics and adults.
These data were compared to physician differential
diagnosis documentation. The Mann-Whitney test was
used to compare time to antibiotic administration and
total volume of fluid resuscitation between two groups
of patients: those with recognized sepsis and those with
unrecognized sepsis.
Results: SIRS criteria were present in 233/338 reviewed
cases. Sepsis criteria were identified in 215/338 cases
and considered in the differential diagnosis in 121/215
septic patients. Severe sepsis was present in 89/338 cases
and septic shock was present in 42/338 cases. The sepsis
6-hour resuscitation bundle was completed in the emergency department in 16 cases of severe sepsis or septic
shock. 121 patients who met sepsis criteria and were recognized by the ED physician had a median time to antibiotics of 150 minutes (IQR: 89–282) and a median IVF of
1500 ml (IQR: 500–3000). The 94 patients who met sepsis
criteria but went unrecognized in the documentation
had a median time to antibiotics of 225 minutes (IQR:
135–355) and median volume of fluid resuscitation of
1000 ml (IQR: 300–2000). Median time to antibiotics and
median volume of fluid resuscitation differed significantly between recognized and unrecognized septic
patients (p = 0.003 and p = 0.002, respectively).
Conclusion: Emergency physicians correctly identify
and treat infection in most cases, but frequently do not
document SIRS and sepsis. Lack of documentation of
sepsis in the differential diagnosis is associated with
increased time to antibiotic delivery and a smaller total
volume of fluid administration, which may explain poor
sepsis bundle compliance in the emergency department.
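The Mann-Whitney comparison of skewed times used here can be sketched on synthetic data; the distributions below are assumptions chosen only to echo the reported medians.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
# Synthetic times to antibiotics (minutes), not the chart-review data:
# recognized sepsis skewed around ~150 min, unrecognized around ~225 min.
recognized   = rng.lognormal(np.log(150), 0.6, 121)
unrecognized = rng.lognormal(np.log(225), 0.6, 94)

u, p = mannwhitneyu(recognized, unrecognized, alternative="two-sided")
print(f"medians: {np.median(recognized):.0f} vs "
      f"{np.median(unrecognized):.0f} min; U = {u:.0f}, p = {p:.3g}")
```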
349
Elevated Inter-alpha Trypsin Inhibitor (ITI)
levels in Emergency Department Patients
with Severe Sepsis and Septic Shock
Anthony M. Napoli, Ling Zhang, Fenwick
Gardiner, Patrick Salome
Warren Alpert Medical School of Brown
University, Providence, RI
Background: Serine protease inhibitors, specifically
Inter-alpha trypsin inhibitor (ITI), are thought to be
protective in sepsis by mitigating the neutrophil-derived
proteinases that are upregulated during states of severe
inflammation. Prolonged negative feedback of the
inflammatory cascade is thought to exhaust the body’s
reserves leading to dysregulation of cellular immunity
seen in sepsis. Levels of ITI have been associated with
mortality in ICU sepsis patients, while small animal studies have
suggested improved mortality and hemodynamic stability when ITI is repleted.
Objectives: Characterize ITI levels in ED patients and
determine if these levels are associated with severity of
sepsis.
Methods: Prospective cohort of controls (C), acutely ill
inflammatory non-septic illnesses (AINS) with SIRS,
and patients with severe sepsis or septic shock (SS)
(lactate >4 mmol/L, SBP <90 mmHg after 2 L normal saline). A competitive ELISA assay using murine 69.26 antibody was used
to measure ITI levels. Based on prior results from inpatients with sepsis, a sample size of 100 patients would
be necessary (two-tailed, p < 0.05, power = 0.8), assuming a
standard error of difference of 140.
Results: 96 patients are enrolled to date with completed
assay analysis on 66. ITI levels are lower in patients with
SS vs. C (262 ± 73 vs. 318 ± 89, p = 0.03) but not vs. AINS
(264 ± 93, p = 0.94). There is a trend toward an association
of ITI levels with APACHE II score (r = 0.33, p = 0.11).
Conclusion: ITI levels in ED patients presenting with
sepsis are lower than in control patients but similar to AINS.
The degree of overlap with control patients and the lack
of distinction of ITI levels in SS vs. AINS may limit its
utility as a diagnostic and prognostic marker. Larger
studies are needed to confirm these results. (Originally
submitted as a ‘‘late-breaker.’’)
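A conventional two-sample power calculation, offered as a stand-in for the abstract's standard-error-based one, looks like the following; the difference to detect and the per-group SD are assumed inputs, not figures from the abstract.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative two-sample power calculation (assumed inputs): detect a
# 56-unit difference in mean ITI level given an assumed per-group SD of 90,
# alpha = 0.05, power = 0.80.
effect_size = 56 / 90          # Cohen's d under the assumed SD
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80,
    alternative="two-sided",
)
print(f"~{n_per_group:.0f} patients per group")
```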
350
The Microcirculation is preserved in Sepsis
Patients without Organ Dysfunction or
Shock
Michael R. Filbin1, Peter Hou2, Apurv Barche1,
Katherine Wai2, Allen Gold2, Siddharth
Parma2, Michael Massey3, Alex Bracey3,
Nicolaj Duujas3, Hendrick Zijlstra3, Nathan I.
Shapiro3
1Massachusetts General Hospital, Boston, MA; 2Brigham & Women's Hospital, Boston, MA; 3Beth Israel Deaconess Hospital, Boston, MA
Background: The advent of sidestream darkfield imaging (SDF) has allowed the direct observation of the
microcirculation at the bedside in a manner that was
previously not possible. Recent research with SDF has
shown that the microcirculation is dysfunctional in critically ill patients with septic shock; however, it is
unstudied in patients of lower acuity presenting to the
ED with uncomplicated sepsis.
Objectives: We hypothesize that patients with early
sepsis (without organ dysfunction or shock) have signs
of microcirculatory dysfunction compared to noninfected control patients.
Methods: Prospective, observational study of a convenience sample of sepsis patients meeting the following
criteria: clinical suspicion of infection, age 18 or older,
two or more SIRS criteria, no organ dysfunction or
shock; control patients were ED patients without infection or SIRS criteria. The study was conducted across
three urban tertiary care EDs. The images were
obtained using a videomicroscope and sent to a coordinating center for offline analysis. A trained, blinded
scorer analyzed tapes of microcirculatory flow using a
previously validated semi-quantitative scale (0–3) for
small vessel blood flow velocity. The flow velocity was
compared using an unpaired t-test alpha set at 0.05.
Results: There were 98 patients enrolled: 59 sepsis
patients and 39 controls. The mean age of the sepsis
(58.0, 95% CI 53.6–62.4) and control patients (58.3, 52.3–
64.3) was similar between groups, p = 0.94. There was
no difference in average flow velocity for the infected
patients 2.94 (2.90–2.97) versus non-infected controls
2.92 (2.88–2.96), with a mean difference of −0.017 (−0.067
to 0.033), p = 0.51.
Conclusion: We did not observe microcirculatory flow
disturbances in patients with sepsis by SIRS criteria
without organ dysfunction or shock. One may postulate
that infection with SIRS criteria alone is not indicative
of microcirculatory dysfunction. Further study is warranted as to the role of microcirculatory dysfunction in
progression of disease in sepsis.
351
Benchmarking The Incidence And
Mortality Of Severe Sepsis In The United
States
David F. Gaieski, J. Matthew Edwards,
Michael J. Kallan, Brendan G. Carr
University of Pennsylvania School of Medicine,
Philadelphia, PA
Background: Severe sepsis is a common clinical syndrome with substantial human and financial impact. In
1992 the first consensus definition of sepsis was published. Subsequent epidemiologic estimates were collected using administrative data, but ongoing
discrepancies in the definition of severe sepsis led to
large differences in estimates.
Objectives: We seek to describe the variations in incidence and mortality of severe sepsis in the US using
four methods of database abstraction.
Methods: Using a nationally representative sample,
four previously published methods (Angus, Martin,
Dombrovskiy, Wang) were used to gather cases of
severe sepsis over a 6-year period (2004–2009). In addition, the use of new ICD-9 sepsis codes was compared
to previous methods. Our main outcome measure was
annual national incidence and in-hospital mortality of
severe sepsis.
Results: The average annual incidence varied by as
much as 3.5-fold depending on the method used and ranged from 894,013 (300 / 100,000 population) to 3,110,630
(1,031 / 100,000) using the methods of Dombrovskiy and
Wang, respectively. Average annual increase in the
incidence of severe sepsis was similar (13.0–13.3%)
across all methods. Total mortality mirrored the
increase in incidence over the 6-year period (168,403–
229,044 [Dombrovskiy] and 299,939–375,608 [Wang]).
In-hospital mortality ranged from 14.7% to 29.9% using
abstraction methods of Wang and Dombrovskiy,
respectively. Using all methods, there was a decrease in
in-hospital mortality across the 6-year period (35.2–
25.6% [Dombrovskiy] and 17.8–12.1% [Wang]). Use of
ICD-9 sepsis codes more than doubled over the 6-year
period [158,722–489,632 (995.92 severe sepsis), 131,719–
303,615 (785.52 septic shock)].
Conclusion: There is substantial variability in incidence
and mortality of severe sepsis depending on method of
database abstraction used. A uniform, consistent
definition of severe sepsis is needed for use in national
registries to facilitate accurate assessment of clinical
interventions and outcomes comparisons between hospitals and regions.
352
Emergency Medical Services Focused
Assessment with Sonography in Trauma
and Cardiac Ultrasound in Cardiac Arrest:
The Training Phase
Barry Knapp, Donald Byars, Virginia Stewart,
Rebecca Ryszkiewicz, David Evans
Eastern Virginia Medical School, Suffolk, VA
Background: With improvement in ultrasound (US)
technology, portable units are now able to endure the
rigors of the prehospital setting. The ability of prehospital providers using US to make an accurate assessment of free abdominal and pericardial fluid in trauma
victims or to determine cardiac activity in arrest
patients may improve both resuscitative
efforts and resource utilization. No studies address
how best to train emergency medical services (EMS)
providers.
Objectives: The purpose of this study is to determine
whether EMS providers at the EMT-Intermediate and
EMT-Paramedic levels can, in a four-hour training
session, acquire the knowledge and skill to operate a
portable US machine and achieve a high level of accuracy performing cardiac and FAST exams.
Methods: This was a prospective cohort educational
study conducted on Paramedic and Intermediate EMS
providers in an urban city. Participants used a Sonosite
M-Turbo US unit with a phased-array probe. We
recruited 90 EMS providers, 70 EMT-Ps and 20 EMT-Is.
All enrolled participants were required to complete a
one-hour online web-based home study program. The
four hour training program consisted of: 1) didactic lecture, 2) practice scanning, and 3) testing scenarios. All
subjects also underwent pre- and post-training written
tests. The testing scenarios included one normal and
abnormal cardiac, and one normal and abnormal FAST.
All scenarios were performed on live standardized
patients (SP) and graded in an Observed Standardized
Clinical Examination (OSCE) format. In abnormal scenarios, clips were displayed once proper ultrasound
views were obtained on the live SP.
Results: The average score on the pre-test was 73%
(14.62 ± 2.34), compared with the post-test score of
95% (19.09 ± 1.29) (p-value <0.0001). Each station was
graded with an 18-point tool for the cardiac exam and
a 32-point tool for the FAST exam. In total, EMS providers (n = 90) scored on average 98.9 points out of 100
on the OSCE testing stations. EMT-Ps (n = 70) scored,
on average 98.9 points out of 100 on the OSCE stations.
Average score for EMT-Is (n = 20) was 99.1 points out
of 100 on the OSCE stations. There was no statistical
difference between the performance of EMT-Is and Ps
(p-value of 0.86).
Conclusion: EMS providers were able to successfully
demonstrate proper image acquisition and interpretation of both basic cardiac and FAST US exams after a
four-hour training module.
353
Videolaryngoscopy as an Educational Tool
for the Novice Pediatric Intubator: A
Comparative Study
Jendi L. Haug, Geetanjali Srivastava
University of Texas Southwestern, Dallas, TX
Background: Prior studies show that novice residents
have low intubation success rates. Videolaryngoscopy
is a technology that allows for enhanced airway visualization.
Objectives: We hypothesized that residents who
receive videolaryngoscope training (video group) would
have a higher success rate in intubating high-fidelity
simulation mannequins compared to residents trained
with traditional methods. We also predicted that the
video group would have a decrease in the mean time to
intubation, number of intubation attempts, and right
main stem (RMS) intubation rate, as well as a higher
rate of intubation before the onset of oxygen desaturation and an improved view of the glottis.
Methods: This was a randomized controlled study
comparing a traditional method of teaching intubation
to instruction with the adjunct of a videolaryngoscope.
Participants received a didactic presentation reviewing
intubation basics. They were then randomized into two
groups: traditional and video. All were taught and practiced direct laryngoscopy on a low-fidelity mannequin.
The video group received additional training with a videolaryngoscope. Afterwards, each participant intubated an
infant and adult high-fidelity mannequin. Data were collected on length of time to intubation, number of attempts,
desaturation before intubation, Cormack Lehane (CL)
grade view, and RMS intubations. We defined successful
intubation as tracheal intubation in ≤3 attempts.
Results: 32 residents (17 pediatric, 15 emergency medicine) presented for participation. There was no difference in intubation success rate between the groups for
either the adult (71% traditional vs. 77% video, p = 1) or
infant (100% vs. 100%, p = 1) mannequin. There were
differences in the following secondary outcomes for the
infant mannequin: mean intubation time (traditional vs.
video: 35 vs. 19 seconds, 15.8 second difference, 95% CI
7.9–23.7) and the rate of intubation before the onset of
desaturations (53% traditional vs. 100% video,
p = 0.003). There was no difference for the following:
number of attempts, CL grade, and RMS intubation.
Conclusion: There was no difference in intubation success rate between the group trained with videolaryngoscopy and those trained with traditional method. The
video group had both a lower mean time to intubation
and higher rate of intubation before the onset of oxygen desaturation with the infant mannequin.
354
CT Imaging In The ED: Perception Of
Radiation Risks By Provider
Richard Bounds1, Michael Trotter2, Wendy
Nichols1, James F. Reed III1
1Christiana Care Health System, Newark, DE; 2Taylor Hospital, Ridley Park, NJ
Background: Radiation exposure from medical imaging has been the subject of many major journal articles,
as well as the topic of mainstream media. Some
estimate that one-third of all CT scans are not medically
justified. It is important for practitioners ordering these
scans to be knowledgeable of currently discussed risks.
Objectives: To compare the knowledge, opinions, and
practice patterns of three groups of providers in
regards to CTs in the ED.
Methods: An anonymous electronic survey was sent to all
residents, physician assistants, and attending physicians
in emergency medicine (EM), surgery, and internal medicine (IM) at a single academic tertiary care referral Level I
trauma center with an annual ED volume of over 160,000
visits. The survey was pilot tested and validated. All data
were analyzed using the Pearson’s chi-square test.
Results: There was a response rate of 32% (220/668).
Data from surgery respondents were excluded due to a
low response rate. In comparison to IM, EM respondents
correctly equated one abdominal CT to between 100 and
500 chest x-rays, reported receiving formal training
regarding the risks of radiation from CTs, believe that
excessive medical imaging is associated with an increased
lifetime risk of cancer, and routinely discuss the risks of
CT imaging with stable patients more often (see Table 1).
Particular patient factors influenced whether radiation
risks were discussed with patients for 60% of respondents in each specialty (see Table 2). Before ordering an abdominal CT in
a stable patient, IM providers routinely review the
patient’s medical imaging history less often than EM
providers surveyed. Overall, 67% of respondents felt
that ordering an abdominal CT in a stable ED patient is
a clinical decision that should be discussed with the
patient, but should not require consent.
Conclusion: Compared with IM, EM practitioners report
greater awareness of the risks of radiation from CTs and
discuss risks with patients more often. They also review
patients’ imaging history more often and take this, as well
as patients’ age, into account when ordering CTs. These
results indicate a need for improved education for both
EM and IM providers in regards to the risks of radiation
from CT imaging.
Table 1 - Abstract 354:
Question | EM (n = 89) | IM (n = 115) | p-value
Formal training? | 63 (70.8%) | 45 (39.1%) | 0.001
Inc life risk of cancer belief? | 81 (91.0%) | 83 (72.2%) | 0.002
Correctly identified CT comparison with chest x-rays | 30 (34%) | 21 (18%) | 0.004
Discussion of risks with particular patients | 71 (80%) | 55 (49%) | 0.001
Discussion of risks with all stable patients | 14 (16%) | 10 (9%) | 0.001
History review prior to order | 70 (79%) | 67 (62%) | 0.014
Table 2 - Abstract 354:
Decision Factors | EM (n = 57) | IM (n = 45) | p-value
Female sex | 53 (93%) | 33 (73%) | 0.001
Previous # CTs | 57 (100%) | 38 (84%) | 0.001
355
A Mobile Lightly-embalmed Cadaver Lab
As A Possible Model For Training Rural
Providers
Wesley Zeger1, Paul Travis2, Michael
Wadman1, Carol Lomneth1, Sara Keim1,
Stephanie Vandermuelen1
1UNMC, Omaha, NE; 2Creighton Medical School, Omaha, NE
Background: In Nebraska, 80% of emergency departments have fewer than 10,000 annual visits, and most are in rural settings. General practitioners
working in rural emergency departments have reported
low confidence in several emergency medicine skills.
Current staffing patterns include using midlevels as the
primary provider with non-emergency medicine trained
physicians as back-up. Lightly-embalmed cadaver labs
are used for resident’s procedural training.
Objectives: To describe the effect of a lightly-embalmed cadaver workshop on physician assistants'
(PA) reported level of confidence in selected emergency
medicine procedures.
Methods: An emergency medicine procedure lab was
offered at the Nebraska Association of Physician Assistants annual conference. Each lab consisted of a 2-hour
hands-on session teaching endotracheal intubation
techniques, tube thoracostomy, intraosseous access,
and arthrocentesis of the knee, shoulder, ankle, and
wrist to PAs. IRB-approved surveys were distributed
pre-lab and a post-lab survey was distributed after lab
completion. Baseline demographic experience was collected. Pre- and post-lab procedural confidence was
rated on a six-point Likert scale (1–6) with 1 representing no confidence. The Wilcoxon Signed-Rank Test was
used to calculate p values.
Results: 26 PAs participated in the course. All completed a pre- and post-lab assessment. No PA had done
any one procedure more than 5 times in their career.
Pre-lab modes of confidence level were ≤3 for each procedure. Post-lab modes were >4 for each procedure
except arthrocentesis of the ankle and wrist. However,
post-lab assessments of procedural confidence improved significantly for all procedures (p < 0.05).
Conclusion: Midlevel providers’ level of confidence
improved for emergent procedures after completion of
a procedure lab using lightly-embalmed cadavers.
A mobile cadaver lab would be beneficial to train rural
providers with minimal experience.
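The Wilcoxon signed-rank test on paired Likert ratings can be sketched as follows; the ratings below are hypothetical, not the survey data.

```python
from scipy.stats import wilcoxon

# Pre/post confidence ratings (1-6 Likert) for one procedure across 26 PAs
# (hypothetical values, not the survey data).
pre  = [2, 3, 1, 2, 3, 2, 1, 3, 2, 2, 3, 1, 2,
        2, 3, 2, 1, 2, 3, 2, 2, 3, 1, 2, 2, 3]
post = [5, 5, 4, 4, 5, 4, 3, 5, 4, 5, 5, 4, 4,
        5, 5, 4, 4, 4, 5, 5, 4, 5, 4, 4, 5, 5]

stat, p = wilcoxon(pre, post)   # paired, non-parametric signed-rank test
print(f"W = {stat:.0f}, p = {p:.2g}")
```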
356
Creation Of A Valid And Reliable
Competency Assessment For Advanced
Airway Management
James K. Takayesu1, Demian Szyld2, Calvin
Brown III3, Benjamin Sandefur3, Eric Nadel3,
Ron M. Walls3
1Massachusetts General Hospital, Boston, MA; 2NYU, New York City, NY; 3Brigham and Women's Hospital, Boston, MA
Background: Difficult airway management is a core
competency in EM. Current assessment of individual
resident performance in managing the difficult airway
typically is by non-systematic global assessments by
varying faculty members in the ED, rather than by controlled, reliable assessment methods.
Objectives: To design an assessment method for difficult airway management that is reliable and valid to
determine competence during residency training.
Methods: Expert consensus was used to derive three
difficult airway simulated scenarios: unanticipated difficult airway (UDA) requiring alternative method,
anticipated difficult airway (ADA) using flexible fiberoptics, and failed airway requiring cricothyrotomy
(FAC). PGY3 resident performance on each of the
three scenarios was assessed using a standardized
performance checklist, including cognitive and
psychomotor items, by each of two faculty assessors.
Interobserver agreement was calculated for each
checklist item to determine reliability of assessment.
Validity was evaluated by global assessment of
performance.
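As an aside (a minimal sketch under assumed data, not the authors' code), per-item interobserver agreement of the kind described can be computed as Cohen's kappa alongside raw percent agreement:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical pass/fail ratings (1 = performed correctly) from two
# faculty assessors for one checklist item across residents; invented data.
rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]

kappa = cohen_kappa_score(rater_a, rater_b)
raw = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
# An item might be retained if kappa > 0.6 or raw agreement > 90%,
# mirroring the cutoff the abstract describes.
print(f"kappa = {kappa:.2f}, raw agreement = {raw:.0%}")
```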
Results: The checklist instruments had 24 items for UDA, 31 items for ADA, and 14 items for FAC. Using a cutoff of either kappa agreement greater than 0.6 or interobserver agreement greater than 90%, the checklists were refined to include 19 items for UDA, 18 items for ADA, and 8 items for FAC.
Conclusion: A simple, standardized checklist with high
inter-rater reliability can be used to determine competency in difficult airway management.
357
Implementing and Measuring Efficacy of a
Capnography Simulation Training Program
for Prehospital Health Care Providers
Jared T. Roeckner, Ivette Motola, Angel A.
Brotons, Hector F. Rivera, Robert E. Soto,
S. Barry Issenberg
University of Miami Miller School of Medicine,
Miami, FL
Background: Capnography is increasingly used in
prehospital and emergency department settings to aid
diagnosis and management of patients with respiratory emergencies. End-tidal CO2 (ETCO2) monitoring
can verify correct endotracheal tube placement, determine hypo/hyperventilation, and monitor effective
CPR. While there is evidence for the effectiveness of
capnography in the prehospital setting, little has been
published on educational methodologies for training
EMS personnel. To address this gap, the University of Miami Gordon Center for Research in Medical Education incorporated capnography training as part of its 8-hour simulation-based airway management course.
Objectives: Determine the effect of capnography skills
training on knowledge acquisition in practicing EMS
personnel. We hypothesized that there would be no difference between the results of the precourse and postcourse assessments.
Methods: From August 2008 to September 2011, 424
paramedics throughout Florida participated in the airway course. The educational components included a
didactic session of capnography principles utilizing
tracings, demonstration on a live capnography monitor,
case-based scenarios using an audience-response system, and participation in six simulation scenarios. We
measured cognitive outcomes using precourse and
postcourse assessments. Eleven of the 30 questions incorporated capnogram interpretation or decision-making that relied on participants' ability to interpret ETCO2. Statistical analysis was performed using a paired t-test.
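A paired comparison of this kind can be sketched as follows (illustrative only; the per-participant scores are invented, not study data):

```python
from scipy.stats import ttest_rel

# Hypothetical fractions of the 11 capnography questions answered
# correctly by each participant before and after the course; invented.
pre = [0.55, 0.64, 0.73, 0.64, 0.82, 0.55, 0.73, 0.64]
post = [0.91, 0.82, 1.00, 0.91, 1.00, 0.82, 0.91, 0.91]

t, p = ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4g}")
```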
Results: Eighteen of the 424 learners were excluded
from analysis because of incomplete assessments. The
precourse mean score for 406 participants was 68% (SD 15.0%) for all of the capnography questions and the postcourse mean was 91% (SD 10.4%). The mean improvement was 22.8% (SD 18.1%, 99% CI 20.4%–25.2%, p < 0.0001). Questions depicting capnograms illustrating esophageal intubation (Pre 29%, Post 82%) and
rebreathing (Pre 29%, Post 92%) showed the greatest
score improvement.
Conclusion: Training that incorporated didactic sessions and hands-on simulation scenarios significantly
increased knowledge of capnography. Marked improvement was seen in the interpretation of capnograms showing esophageal intubation, which has direct application in
verifying proper intubation.
358
Current Knowledge of and Willingness to
Perform Hands-Only CPR in Laypersons
Jennifer Urban, Adam J. Singer, Henry C.
Thode Jr
Stony Brook University, Stony Brook, NY
Background: Sudden cardiac arrest is a leading cause of death. Early and effective cardiopulmonary resuscitation (CPR) reduces mortality, although bystander participation in CPR remains low. Recent simplified guidelines recommend hands-only CPR (HCPR) for laypersons.
Objectives: We determined current knowledge of and
willingness to perform hands-only CPR.
Methods: Design: prospective anonymous survey. Setting: academic suburban emergency department. Subjects: adult patients and visitors in a suburban ED. Survey instrument: 33-item closed-question format based on prior studies, including baseline demographics and knowledge and experience of CPR. Main outcome: knowledge of and willingness to perform hands-only CPR. Data analysis: descriptive statistics; univariate and multivariate analyses were performed to determine the association between predictor variables and knowledge of and willingness to perform HCPR.
Results: We surveyed 532 subjects; mean age was 44 ± 16 years; 53.2% were female and 75.6% were white. 45.5% were college graduates, and 44.4% had an annual income of greater than $50,000. 41.9% had received prior CPR training; only 10.3% had performed CPR. Of all subjects, 124 (23.3%) had knowledge of HCPR, yet 414 (77.8%) would be willing to perform HCPR on a stranger. Age (p = 0.003) and income (p = 0.014) predicted knowledge of HCPR. A history of a cardiac-related event in the family (p = 0.003) and previous CPR training (p = 0.01) were associated with likelihood to perform HCPR.
Conclusion: Less than one quarter of surveyed laypersons knew of hands-only CPR, yet more than three quarters would be willing to perform hands-only CPR, even on a stranger. Efforts to increase layperson education are required to enhance CPR performance.
359
Use of Automated External Defibrillators
for Pediatric Out-of-Hospital Cardiac
Arrests: A Comparison to Adult Patients
Austin Johnson1, Brian Harahan2, Jason S.
Haukoos1, David Slattery3, Bryan McNally4,
Comilla Sasson5
1Denver Health Medical Center, Denver, CO; 2University of Minnesota, Minneapolis, MN; 3University of Nevada, Las Vegas, NV; 4Emory University, Atlanta, GA; 5University of Colorado, Aurora, CO
Background: Use of automated external defibrillators
(AED) improves survival in out-of-hospital cardiac arrest (OHCA). Since 2005, the American Heart
Association has recommended that individuals one year
of age or older who sustain OHCA have an AED
applied. Little is known about how often this occurs
and what factors are associated with AED use in the
pediatric population.
Objectives: Our objective was to describe AED use in
the pediatric population and to assess predictors of
AED use when compared to adult patients.
Methods: We conducted a secondary analysis of prospectively collected data from 29 U.S. cities that participate in the Cardiac Arrest Registry to Enhance Survival
(CARES). Patients were included if they had a documented resuscitation attempt from October 1, 2005
through December 31, 2009 and were ≥1 year old.
Patients were considered pediatric if they were less
than 19 years old. AED use included application by laypersons and first responders. Hierarchical multivariable
logistic regression analysis was used to estimate the
associations between age and AED use.
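For illustration, a simplified (non-hierarchical) version of such a model can be fit as below; the data and effect sizes are simulated and invented, and the abstract's actual analysis additionally accounted for clustering by site:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated CARES-like records; all values and coefficients are invented.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "pediatric": rng.binomial(1, 0.05, n),
    "female":    rng.binomial(1, 0.40, n),
    "public":    rng.binomial(1, 0.20, n),
    "witnessed": rng.binomial(1, 0.45, n),
})
# AED use simulated with effect directions matching the reported findings.
lp = (-1.0 - 0.5 * df.pediatric - 0.13 * df.female
      + 0.30 * df.public + 0.18 * df.witnessed)
df["aed_used"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

model = smf.logit("aed_used ~ pediatric + female + public + witnessed",
                  data=df).fit(disp=False)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```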
Results: There were 19,559 OHCAs included in this
analysis, of which 239 (1.2%) occurred in pediatric
patients. Overall, an AED was applied in 5,517 cases, and there were 1,751 (8.9%) total survivors. AEDs were
applied less often in pediatric patients (19.7%, 95% CI:
14.6%–24.7% vs 28.3%, 95% CI: 27.7%–29.0%). Within
the pediatric population, only 35.4% of patients with a
shockable rhythm had an AED used. In all pediatric
patients, regardless of presenting rhythm, AED use was associated with a statistically significant increase in
return of spontaneous circulation (AED used 29.8%,
95% CI: 16.2–43.4 vs AED not used 16.8%, 95% CI:
11.4–22.1, p < 0.05), although there was no significant
increase in survival to hospital discharge (AED used
12.8%; AED not used 5.2%; p = 0.057). In the adjusted
model, pediatric age was independently associated
with failure to use an AED (OR 0.61, 95% CI: 0.42–
0.87) as was female sex (OR 0.88, 95% CI: 0.81–0.95).
Public arrests (OR 1.35, 95% CI: 1.24–1.46) and arrests witnessed by a bystander (OR 1.20, 95% CI: 1.11–1.29) were also predictive of AED use.
Conclusion: Pediatric patients who experience OHCA
are less likely to have an AED used. Continued education of first responders and the lay public to increase
AED use in this population is necessary.
360
Does Implementation of a Therapeutic
Hypothermia Protocol Improve Survival
and Neurologic Outcomes in all Comatose
Survivors of Sudden Cardiac Arrest?
Ken Will, Michael Nelson, Abishek Vedavalli,
Renaud Gueret, John Bailitz
Cook County (Stroger), Chicago, IL
Background: The American Heart Association (AHA) currently recommends therapeutic hypothermia (TH) for out-of-hospital comatose survivors of sudden cardiac arrest (CSSCA) with an initial rhythm of ventricular fibrillation (VF). Based on currently limited data, the AHA further recommends that physicians consider TH for CSSCA from both the out-of-hospital and inpatient settings with an initial non-VF rhythm.
Objectives: To investigate whether a TH protocol improves both survival and neurologic outcomes for CSSCA, both out-of-hospital and inpatient, with any initial rhythm, in comparison to outcomes reported in the literature prior to TH.
Methods: We conducted a prospective observational
study of CSSCA between August 2009 and May 2011
whose care included TH. The study enrolled eligible
consecutive CSSCA survivors, from both out and inpatient settings with any initial arrest rhythm. Primary
endpoints included survival to hospital discharge and
neurologic outcomes, stratified by SCA location, and by
initial arrest rhythm.
Results: Overall, of 27 eligible patients, 11 (41%, 95% CI 22–66%) survived to discharge, 7 (26%, 95% CI 9–43%) with at least a good neurologic outcome. Twelve were out-of-hospital and 15 were inpatient arrests. Among the 12 out-of-hospital patients, 6 (50%, 95% CI 22–78%) survived to discharge, 5 (41%, 95% CI 13–69%) with at least a good neurologic outcome. Among the 15 inpatients, 5 (33%, 95% CI 9–57%) survived to discharge, 2 (13%, 95% CI 0–30%) with at least a good neurologic outcome. By initial rhythm, 6 patients had an initial rhythm of VF/VT and 21 non-VF/VT. Among the 6 patients with an initial rhythm of VF/VT, 4 (67%, CI 39–100%) survived to discharge, all 4 with at least a good outcome, including 3 out-of-hospital and 1 inpatient. Among the 21 patients with an initial rhythm of non-VF/VT, 7 (33%, CI 22–53%) survived to discharge, 3 (14%, CI 0–28%) with at least a good neurologic outcome, including 2 out-of-hospital and 1 inpatient.
Conclusion: Our preliminary data suggest that local implementation of a TH protocol improves survival and neurologic outcomes for CSSCA, both out-of-hospital and inpatient, with any initial rhythm, in comparison to outcomes reported in the literature prior to TH.
Subsequent research will include comparison to local
historical controls, additional data from other regional
TH centers, as well as comparison of different cooling
methods.
361
Protocolized Use of Sedation and Paralysis
with Therapeutic Hypothermia Following
Cardiac Arrest
Timothy J. Ellender1, Dustin Spencer2, Judith
Jacobi3, Michelle Deckard2, Elizabeth Taber2
1Indiana University Department of Emergency Medicine, Indianapolis, IN; 2IU Health Methodist Hospital, Indianapolis, IN; 3Indiana University, Indianapolis, IN
Background: Therapeutic hypothermia (TH) has been
shown to improve the neurologic recovery of cardiac
arrest patients who experience return of spontaneous
circulation (ROSC). It remains unclear how earlier
cooling and treatment optimization influence outcomes.
Objectives: To evaluate the effects of a protocolized
use of early sedation and paralysis on cooling optimization and clinical outcomes in survivors of cardiac
arrest.
Methods: A 3-year (2008–2010), pre-post intervention
study of patients with ROSC after cardiac arrest treated
with TH was performed. Those patients treated with a
standardized order set which lacked a uniform sedation
and paralytic order were included in the pre-intervention group, and those with a standardized order set
which included a uniform sedation and paralytic order
were included in the post-intervention group. Patient
demographics, initial and discharge Glasgow Coma
Scale (GCS) scores, resuscitation details, cooling time
variables, severity of illness as measured by the
APACHE II score, discharge disposition, functional status, and days to death were collected and analyzed
using Student’s t-tests, Man-Whitney U tests, and the
Log-Rank test.
Results: 232 patients treated with TH after ROSC were
included, with 107 patients in the pre-intervention
group and 125 in the post-intervention group. The
average time to goal temperature (33°C) was 227 minutes (pre-intervention) and 168 minutes (post-intervention) (p = 0.001). A 2-hour time target was achieved in 38.6% of the post-intervention patients compared to 24.5% in the pre-intervention group (p = 0.029). Twenty-eight day mortality was similar between groups (65.4% and 65.3%), though hospital length of stay (10 days pre- and 8 days post-intervention) and discharge GCS (13 pre- and 14 post-intervention) differed between cohorts. More post-intervention patients were discharged to home (55.8%) compared to 43.2% in the pre-intervention group.
Conclusion: Protocolized use of sedation and paralysis
improved time to goal temperature achievement. These
improved TH time targets were associated with
improved neuroprotection, GCS recovery, and disposition outcome. Standardized sedation and paralysis appear to be a useful adjunct in induced TH.
362
Physician Clinical Impression Compared to
a Novel Clinical Decision Rule for Use in
Sparing Pediatric Patients with Signs and
Symptoms of Acute Appendicitis Exposure
to Computed Tomography
Michael Wandell1, Michael Brown2, Harold
Simon3, Karen Copeland4, David Huckins5,
Brent Blumenstein6
1AspenBio Pharma, Castle Rock, CO; 2Michigan State University, Grand Rapids, MI; 3Emergency Medicine Children's Healthcare of Atlanta, Atlanta, GA; 4AspenBio Pharma, Castle Rock, CO; 5Newton Wellesley, Newton, MA; 6TriArch Consulting, Washington, DC
Background: CT is increasingly used to assess children
with signs and symptoms of acute appendicitis (AA)
though concerns regarding long-term risk of exposure
to ionizing radiation have generated interest in methods
to identify children at low risk.
Objectives: We sought to derive a clinical decision rule (CDR) using a minimum set of commonly used signs and symptoms from prior studies to predict which children with acute abdominal pain have a low likelihood of AA, and to compare it to physician clinical impression (PCI).
Methods: We prospectively analyzed 420 subjects aged
2 to 20 years in 11 U.S. emergency departments with
abdominal pain plus signs and symptoms suspicious for
AA within the prior 72 hours. Subjects were assessed
by study staff unaware of their diagnosis for 17 clinical
attributes drawn from published appendicitis scoring
systems and physicians responsible for physical examination estimated the probability of AA based on PCI
prior to their medical disposition. Based on medical
record entry rate, frequently used CDR attributes were
evaluated using recursive partitioning and logistic
regression to select the best minimum set capable of
discriminating subjects with and without AA. Subjects
were followed to determine whether imaging was used
and use was tabulated by both PCI and the CDR to
assess their ability to identify patients who did or did
not benefit based on diagnosis.
Results: This cohort had a 27.3% prevalence (118/431 subjects) of AA. We derived a CDR based on the absence of two out of three of the following attributes: abdominal tenderness, pain migration, and rigidity/guarding. The rule had a sensitivity of 89.8% (95% CI: 83.1–94.1), specificity of 47.6% (95% CI: 42.1–53.1), NPV of 92.5% (95% CI: 87.4–95.7), and negative likelihood ratio of 0.21 (95% CI: 0.12–0.37). The PCI, set at an AA pre-test probability of <30%, had a sensitivity of 94.1% (95% CI: 88.3–97.1), specificity of 49.4% (95% CI: 43.9–54.9), NPV of 95.7% (95% CI: 91.3–97.9), and negative likelihood ratio of 0.12 (95% CI: 0.06–0.25). Each method classified 37% of the patients as low risk for AA. Our CDR identified 29.1% (43/148) of low-risk subjects who received CT but, being AA-negative, could have been spared CT; the PCI identified 20.1% (30/149).
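As a consistency check (not part of the abstract), the reported negative likelihood ratio follows directly from the CDR's sensitivity and specificity:

```latex
\mathrm{LR}^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}}
              = \frac{1 - 0.898}{0.476} \approx 0.21
```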
Conclusion: Compared to physician clinical impression,
our clinical decision rule can identify more children at
low risk for appendicitis who could be managed more
conservatively with careful observation and avoidance
of CT.
363
Negative Predictive Value of a Low
Modified Alvarado Score For Adult ED
Patients with Suspected Appendicitis
Andrew C. Meltzer1, Brigitte M. Baumann2,
Esther H. Chen3, Frances S. Shofer4,
Angela M. Mills4
1George Washington University, Washington, DC; 2Cooper University Hospital, Camden, NJ; 3UCSF, San Francisco, CA; 4University of Pennsylvania, Philadelphia, PA
Background: Abdominal pain is the most common
complaint in the ED and appendicitis is the most
common indication for emergency surgery. A clinical
decision rule (CDR) identifying abdominal pain
patients at a low risk for appendicitis could lead to a
significant reduction in CT scans and could have a
significant public health impact. The Alvarado score is
one of the most widely applied CDRs for suspected
appendicitis, and a low modified Alvarado score (less
than 4) is sometimes used to rule out acute appendicitis. The modified Alvarado score has not been prospectively validated in ED patients with suspected
appendicitis.
Objectives: We sought to prospectively evaluate the
negative predictive value of a low modified Alvarado
score (MAS) in ED patients with suspected appendicitis.
We hypothesized that a low MAS (less than 4) would
have a sufficiently high NPV (>95%) to rule out acute
appendicitis.
Methods: We enrolled patients greater than or equal to
18 years old who were suspected of having appendicitis
(listed as one of the top three diagnoses by the treating
physician before ancillary testing) as part of a prospective cohort study in two urban academic EDs from
August 2009 to April 2010. Elements of the MAS and
the final diagnosis were recorded on a standard data
form for each subject. The sensitivity, specificity, negative predictive value (NPV), and positive predictive
value (PPV) were calculated with 95% CI for a low
MAS and final diagnosis of appendicitis.
Results: Of 290 enrolled patients, 28 were excluded for
missing a MAS variable. The remaining 262 patients
were included for analysis (mean age 35 years [range
18–89], 68% female, 52% white), of whom 54 (21%) had
acute appendicitis. The test characteristics were as follows: sensitivity 72.2% (95% CI 58–84%); specificity
54.3% (95% CI 47–61%); PPV 29.1% (95% CI 22–38%);
and NPV 88.3% (95% CI 81–93%).
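For illustration, the reported proportions correspond to an approximate 2×2 reconstruction (these counts are back-calculated from the percentages, not published data) of TP = 39, FN = 15, FP = 95, TN = 113 among the 262 patients:

```latex
\text{sens} = \frac{TP}{TP+FN} = \frac{39}{54} \approx 72.2\%, \quad
\text{spec} = \frac{TN}{TN+FP} = \frac{113}{208} \approx 54.3\%, \\
\text{PPV} = \frac{TP}{TP+FP} = \frac{39}{134} \approx 29.1\%, \quad
\text{NPV} = \frac{TN}{TN+FN} = \frac{113}{128} \approx 88.3\%.
```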
Conclusion: The negative predictive value of a MAS
less than 4 was 88.3%. Given the serious complications of a missed appendicitis, we believe that the
NPV is too low to recommend adoption of the low
MAS to clinically rule out appendicitis. These observations lay the groundwork for future studies to improve
clinical diagnosis of appendicitis and safely reduce
the number of CT scans for patients at low risk for
appendicitis.
364
A Novel BioMarker Panel to Rule Out
Acute Appendicitis in Pediatric Patients
with Abdominal Pain
Michael Wandell1, Michael Brown2, Roger
Lewis3, Harold Simon4, David Huckins5, Karen
Copeland1, Brent Blumenstein5, David Spiro6
1AspenBio Pharma, Castle Rock, CO; 2Michigan State University, Grand Rapids, MI; 3UCLA Harbor, Los Angeles, CA; 4Emergency Medicine Children's Healthcare of Atlanta, Atlanta, GA; 5Newton Wellesley, Newton, MA; 6Pediatric Emergency Services, Oregon Health Sciences University, Portland, OR
Background: Evaluating children for appendicitis is
difficult and strategies have been sought to improve the
precision of the diagnosis. Computed tomography is
now widely used but remains controversial due to the
large dose of ionizing radiation and risk of subsequent
radiation-induced malignancy.
Objectives: We sought to identify a biomarker panel
for use in ruling out pediatric acute appendicitis as a
means of reducing exposure to ionizing radiation.
Methods: We prospectively enrolled 431 subjects aged
2 to 20 years presenting in 11 U.S. emergency departments with abdominal pain and other signs and symptoms suspicious for acute appendicitis within the prior
72 hours. Subjects were assessed by study staff unaware of their diagnosis for 17 clinical attributes drawn
from appendicitis scoring systems and blood samples
were analyzed for CBC differential and 5 candidate proteins. Based on discharge diagnosis or post-surgical
pathology, the cohort exhibited a 27.3% prevalence
(118/431 subjects) of appendicitis. Clinical attributes and
biomarker values were evaluated using principal component, recursive partitioning, and logistic regression
to select the combination that best discriminated
between those subjects with and without disease. Mathematical combination of three inflammation-related
markers in a panel composed of myeloid-related protein 8/14 complex (MRP), C-reactive protein (CRP), and
white blood cell count (WBC) provided optimal discrimination.
Results: This panel exhibited a sensitivity of 98% (95%
CI, 94–100%), a specificity of 48% (95% CI, 42–53%),
and a negative predictive value of 99% (95% CI, 95–
100%) in this cohort. The observed performance was
then verified by testing the panel against a pediatric
subset drawn from an independent cohort of all ages
enrolled in an earlier study. In this cohort, the panel
exhibited a sensitivity of 95% (95% CI, 87–98%), a specificity of 41% (95% CI, 34–50%), and a negative predictive value of 95% (95% CI, 87–98%).
Conclusion: AppyScore is highly predictive of the
absence of acute appendicitis in these two cohorts. If
these results are confirmed by a prospective evaluation
currently underway, the AppyScore panel may be useful to classify pediatric patients presenting to the emergency department with signs and symptoms suggestive
of, or consistent with, acute appendicitis, thereby sparing many patients ionizing radiation.
365
Video Capsule Endoscopy in the
Emergency Department: A Novel Approach
to Diagnosing Acute Upper
Gastrointestinal Hemorrhage
Andrew C. Meltzer1, Gayatri Patel1, Jeff
Smith1, Payal Shah1, Amir Ali1, Roderick
Kresiberg1, Meaghan Smith1, David Fleischer2
1George Washington University, Washington, DC; 2Mayo Clinic, Scottsdale, AZ
Background: Video capsule endoscopy (VCE) has an
established role in diagnosing occult gastrointestinal
hemorrhage and other small bowel disease. It is a novel
and potentially useful method to diagnose an acute
upper GI hemorrhage. Potential advantages include the
ability to be performed 24 hours a day without sedation
and to be interpreted at the bedside by the ED physician.
Objectives: Our objectives were to demonstrate (1) ED
patient tolerance for VCE, (2) the agreement of VCE
interpretation between ED and GI physicians, and (3)
the ability of VCE to detect active bleeding compared
to subsequent upper endoscopy (EGD) or patient
follow-up.
Methods: This study was conducted over a 6-month
period at an urban academic ED. Investigators performed VCE (Pillcam Eso2, Given Imaging) on subjects
identified to have suspected acute upper GI hemorrhage (melena, hematemesis, or coffee-ground emesis
within past 24 hours). Following the VCE, subjects
completed a short survey regarding procedure tolerance. Approximately 30 minutes of video were recorded
per subject and reviewed by four blinded physicians
(two ED physicians with training in reading VCE studies but no formal endoscopic training, and two GI physicians with VCE experience). Subjects were followed
for clinical outcomes and EGD results.
Results: Twenty-five subjects (mean age 52 years old,
13 female) were enrolled. No eligible subjects declined
and 96% stated the VCE was well tolerated. No subjects
suffered any complications related to the VCE. Between
the two GI physicians, there was good agreement on
the presence of fresh blood (κ = 0.84). Compared to the GI physicians' interpretation, each of the two ED physicians demonstrated good agreement regarding the presence of fresh blood (κ = 0.83 and κ = 0.90). The
presence or absence of fresh blood on VCE showed a
sensitivity of 83%, specificity of 84%, PPV of 0.63, and
NPV of 0.94 compared to the gold standard of active
bleeding on EGD within 24 hours (20 subjects) or
patient follow-up (5 subjects).
Conclusion: VCE was well tolerated in ED patients with
suspected acute upper GI hemorrhage. ED physicians
with VCE training were able to interpret the presence of
fresh blood with good agreement with experienced GI
physicians. Finally, VCE was accurate compared to the
gold standard of EGD or patient follow-up for detection
of active bleeding. These observations lay the groundwork for prospective multi-center studies which could
allow ED physicians and GI physicians to further collaborate in the care of acute upper GI hemorrhage. (Originally submitted as a ‘‘late-breaker.’’)
366
A Novel Way to Track Patterns of ED
Patient Dispersal to Nearby Hospitals
When a Major ED Closes
Thomas Nguyen1, Okechukwu Echezona1, Arit
Onyile2, Gilad Kuperman3, Jason Shapiro2
1Beth Israel Medical Center, New York, NY; 2Mount Sinai Medical Center, NYCLIX Inc., New York, NY; 3Columbia-Presbyterian Hospital, New York, NY
Background: There are no current studies on the
tracking of emergency department (ED) patient dispersal when a major ED closes. This study demonstrates a novel way to track where patients sought
emergency care following the closure of Saint Vincent’s
Catholic Medical Center (SVCMC) in Manhattan by
using de-identified data from a health information
exchange, the New York Clinical Information Exchange
(NYCLIX). NYCLIX matches patients who have visited
multiple sites using their demographic information. On
April 30, 2010, SVCMC officially stopped providing
emergency and outpatient services. We report the patterns in which patients from SVCMC visited other sites
within NYCLIX.
Objectives: We hypothesized that patients often seek emergency care based on geography when a hospital closes.
Methods: A retrospective pre- and post-closure analysis was performed of SVCMC patients visiting other
hospital sites. The pre-closure study dates were January
1, 2010–March 31, 2010. The post closure study dates
were May 1, 2010–July 31, 2010. A SVCMC patient was
defined as a patient with any SVCMC encounter prior
to its closure. Using de-identified aggregate count data,
we calculated the average number of visits per week by
SVCMC patients at each site (Hospital A-H). We ran a
paired t-test to compare the pre- and post-closure averages by site. The following specifications were used to
write the database queries: for patients who had one or more prior visits to SVCMC, for each day within the study period, return the following:
a. EID: a unique and meaningless proprietary ID generated within the NYCLIX Master Patient Index (MPI).
b. Age: through the age of 89; persons over 90 were listed as ‘‘90+’’.
c. Ethnicity/race.
d. Type of visit: emergency.
e. Location of visit: specific NYCLIX site.
Results: Nearby hospitals within 2 miles of SVCMC saw the largest increases in ED visits after SVCMC closed; increases were observed out to about 5 miles. Hospitals >5 miles away did not see any significant changes in ED visits. See table.
Conclusion: When a hospital and its ED close down,
patients seem to seek emergency care at the nearest
hospital based on geography. Other factors may
include the patient’s primary doctor, availabilities of
outpatient specialty clinics, insurance contracts, or
preference of ambulance transports. This study is limited by the inclusion of data from only the eight hospitals participating in NYCLIX at the time of the SVCMC
closure.
367
Upstream Relief: Benefits On EMS Offload
Delay Of A Provincial ED Overcapacity
Protocol Aimed At Reducing ED Boarding
Andrew D. McRae1, Dongmei Wang2,
Ian E. Blanchard2, Wadhah Almansoori2,
Andrew Anton1, Eddy Lang1, Grant Innes1
1University of Calgary, Calgary, AB, Canada; 2Alberta Health Services, Calgary, AB, Canada
Background: EMS offload delays resulting from ED
overcrowding contribute to EMS system costs, operational inefficiencies, and compromised patient safety.
Overcapacity protocols (OCP) that enhance ED outflow
to inpatient units may improve EMS offload delays for
arriving patients.
Objectives: To examine the effect of a provincial, system-wide OCP policy implemented in December 2010
on ambulance offload delays at three urban EDs.
Methods: Data were collected on all ED EMS arrivals
from the metro Calgary (population 1.1 million) area to
its three urban adult hospitals. The study phases consisted of the 7 months from February to October 2010
(pre-OCP) compared against the same months in 2011
(post-OCP). Data from the EMS operational database
and the Regional Emergency Department Information
System (REDIS) database were linked. The primary
analysis examined the change in EMS offload delay
defined as the time from EMS triage arrival until
patient transfer to an ED bed. A secondary analysis
evaluated variability in EMS offload delay between
receiving EDs.
Table - Abstract 366: SVCMC Patients Visiting Other Hospitals Pre and Post Closure

Site        Distance from SVCMC (miles)  Pre-closure patients/week  Post-closure patients/week  Change in visits/week  Percentage change (p-value)
SVCMC       NA                           354.8                      460.0                       NA                     NA
Hospital A  1.3                          101.3                      177.5                       76.2                   75.2% (<0.0001)
Hospital B  2.0                          39.5                       53.5                        14.0                   35.4% (<0.0001)
Hospital C  2.8                          60.8                       68.8                        8.0                    13.1% (0.0179)
Hospital D  5.0                          25.2                       30.8                        5.6                    22.2% (0.0068)
Hospital E  5.7                          38.8                       38.1                        −0.7                   −1.80% (0.7849)
Hospital F  7.4                          14.2                       14.5                        0.3                    −2.10% (0.8521)
Hospital G  8.8                          69.5                       70.8                        1.3                    1.86% (0.4679)
Hospital H  11.2                         5.5                        6.0                         0.4                    9.09% (0.4669)

(Note: the SVCMC row equals the column totals across Hospitals A–H for SVCMC patients.)
Results: 11,431 patients had linked data in both the EMS and REDIS databases. Mean EMS offload delay decreased by 18.2 minutes (95% CI 16.4–19.9) following OCP implementation, from 33.5 minutes to 15.8 minutes. The decrease in EMS offload delay was observed at all three receiving EDs to varying degrees. At site one, which has the highest acuity, offload delay improved by 7.5 minutes, from 14.4 minutes (95% CI 13.1–15.7) to 6.9 minutes (95% CI 6.1–7.7). At site two, which has the next-highest acuity, offload delay improved by 22.0 minutes, from 36.8 minutes (95% CI 33.7–39.9) to 14.8 minutes (95% CI 13.8–16.0). At site three, which has the lowest acuity, offload delay improved by 29.6 minutes, from 59.2 minutes (95% CI 54.9–63.7) to 29.6 minutes (95% CI 27.4–31.8).
Conclusion: Implementation of a regional overcapacity
protocol to reduce ED crowding was associated with
an important reduction in EMS offload delay, suggesting that policies that target hospital processes have
bearing on EMS operations. Variability in offload delay
improvements is likely due to site-specific issues, and
the gains in efficiency correlate inversely with acuity.
368
What is the Impact of a Rapid Assessment
Zone on Wait Times to Care for the Acute
Care Unit of the Emergency Department?
Alex Guttman, Marc Afilalo, Antoinette
Colacone, Xiaoqing Xue, Nathalie Soucy, Eli
Segal, Bernard Unger
Jewish General Hospital, McGill University,
Montreal, QC, Canada
Background: Timely access to ED care is a serious and
persistent problem that continues to challenge health
care providers to identify new management strategies
that optimize patient flow.
Objectives: Evaluate the effect of a Rapid Assessment
Zone (RAZ) on wait times to cubicle access and nurse
and physician assessment for patients directed to the
Acute Care Unit (ACU) of the ED.
Methods: A pre-post intervention study was conducted
in the ED of an adult university teaching hospital in Montreal (annual visits = 69,000). The RAZ unit (intervention),
created to offload the ACU of the main ED, started operating in January 2011. Using a split-flow management
strategy, patients were directed to the RAZ unit based on
patient acuity level (CTAS code 3 and certain code 2), likelihood to be discharged within 12 hours, and not requiring an ED bed for continued care. Data were collected
weekdays from 9:00 to 21:00 for 4 months (September–December 2008) (pre-RAZ) and for 1.5 months (February–March 2011) (post-RAZ). In the ACU of the main ED,
research assistants observed and recorded cubicle access
time, and nurse and physician assessment times. Databases were used to extract socio-demographics, ambulance arrival, triage code, chief complaint, triage and
registration time, length of stay, and ED occupancy. Multiple linear regression analysis was used to compare the
wait times (calculated from Triage-End Time) between
pre-RAZ and post-RAZ periods with adjustment of
potential confounding factors: age, triage code and ED
occupancy (at Triage-End Time of a new patient).
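The adjusted comparison described above can be sketched with ordinary least squares; all data below are simulated and the coefficient values are invented, not study estimates:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "post_raz":  rng.binomial(1, 0.35, n),        # 1 = post-RAZ period
    "age":       rng.normal(68, 18, n),
    "triage12":  rng.binomial(1, 0.32, n),        # triage code 1-2 vs 3+
    "occupancy": rng.normal(130, 25, n),          # % ED occupancy
})
# Simulated wait time (minutes) with a built-in post-RAZ reduction.
df["wait_min"] = (120 - 50 * df.post_raz + 0.3 * df.age
                  - 10 * df.triage12 + 0.5 * df.occupancy
                  + rng.normal(0, 30, n))

model = smf.ols("wait_min ~ post_raz + age + triage12 + occupancy",
                data=df).fit()
# Adjusted change in wait time associated with the post-RAZ period:
print(model.params["post_raz"], model.conf_int().loc["post_raz"].values)
```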
Results: During the pre-RAZ and post-RAZ periods,
the ACU received 1692 and 876 visits respectively, with
mean age (±SD) 68 (±18) vs. 70 (±17); Triage code 1–2:
30% vs. 35%; ambulance arrival 36% vs. 46%; and %
ED occupancy 115% vs. 159%. ED staffing was re-distributed but not increased during the post-RAZ period
and hospital admission policy remained unchanged.
Compared to pre-RAZ, the post-RAZ period wait times
(in minutes) to cubicle access, nurse assessment, and
physician assessment had decreased on average by 50
(95% CI: 41–60), 46 (95% CI: 38–55), and 22 (95% CI:
13–31) respectively. Other factors associated with these
wait times were ED occupancy, age, and triage code.
Conclusion: Implementation of the RAZ unit resulted in a significant reduction in wait times to cubicle access and to nurse and physician assessment for patients directed to the ACU of the main ED.
369
Factors Influencing Completion of a
Follow-Up Telephone Interview of
Emergency Department Patients One Week
after ED Visit
Sara C. Bessman1, Julius C. Pham1, Ru Ding2,
Melissa L. McCarthy2, EDIP Study Group1
1Johns Hopkins University School of Medicine, Baltimore, MD; 2George Washington University Medical Center, Washington, DC
Background: Telephone follow-up after discharge from
the ED is useful for treatment and quality assurance
purposes. ED follow-up studies frequently do not
achieve high (i.e., ≥80%) completion rates.
Objectives: To determine the influence of different factors on the telephone follow-up rate of ED patients. We
hypothesized that with a rigorous follow-up system we
could achieve a high follow-up rate in a socioeconomically diverse study population.
Methods: Research assistants (RAs) prospectively
enrolled adult ED patients discharged with a medication prescription between November 15, 2010 and September 9, 2011 from one of three EDs affiliated with
one health care system: (A) academic Level I trauma
center, (B) community teaching affiliate, and (C) community hospital. Patients unable to provide informed
consent, non-English speaking, or previously enrolled
were excluded. RAs interviewed subjects prior to ED
discharge and conducted a telephone follow-up interview 1 week later. Follow-up procedures were standardized (e.g. number of calls per day, times to place
calls, obtaining alternative numbers) and each subject’s
follow-up status was monitored and updated daily
through a shared, web-based data system. Subjects
who completed follow-up were mailed a $10 gift card.
We examined the influence of patient (age, sex, race,
insurance, income, marital status, usual major activity,
education, literacy level, health status), clinical (acuity,
discharge diagnosis, ED length of stay, site), and procedural factors (number and type of phone numbers
received from subjects, offering two gift cards for difficult-to-reach subjects) on the odds of successful follow-up using multivariate logistic regression.
Results: Of the 3,940 enrolled, 45% were white, 59%
were covered by Medicaid or uninsured, and 44%
reported an annual household income of <$26,000. 86%
completed telephone follow-up with 41% completing
on the first attempt. The table displays the factors associated with successful follow-up. In addition to patient
demographics and lower acuity, obtaining a cell phone
or multiple phone numbers as well as offering two gift
cards to a small number of subjects increased the odds
of successful follow-up.
Conclusion: With a rigorous follow-up system and a
small monetary incentive, a high telephone follow-up
rate is achievable one week after an ED visit.
Table - Abstract 369: Adjusted OR and 95% CI for Significant Predictors of Completing Follow-Up (N = 3386)

Characteristic                                                Follow-Up Completion OR   95% CI
Age (10 year increase)                                        1.1                       1.1, 1.2
Female vs. male                                               1.4                       1.2, 1.7
Private vs. Medicaid, Medicare, self-pay                      1.6                       1.3, 2.0
Completed some college/beyond vs. 12th grade/below            1.5                       1.2, 1.9
Triage acuity 3–5 vs. acuity 1–2                              1.9                       1.2, 2.8
Multiple phones vs. home/work/other only                      1.8                       1.3, 2.5
Cell phone only vs. home/work/other only                      1.5                       1.2, 1.9
Policy change (2 gift cards for difficult to reach subjects)  1.6                       1.3, 2.0

370
Effect of the Implementation of an
Electronic Clinical Decision Support Tool
on Adherence to Joint Commission
Pneumonia Core Measures in an Academic
Emergency Department
Michael A. Gibbs1, Michael R. Baumann2,
James Lyden2, Tania D. Strout2, Daniel
Knowles2
1Carolinas Medical Center, Charlotte, NC; 2Maine Medical Center, Portland, ME
Background: In 2005, the Centers for Medicare and
Medicaid Services (CMS) introduced a series of ‘‘core
measures’’ designed to standardize the care of hospitalized patients with pneumonia (PNA). Several core measures are related to care provided in the emergency
department (ED), where work flow and complex patient
presentations may make adherence difficult.
Objectives: To evaluate the effect of the implementation of an enhanced electronic clinical decision-support
tool on adherence to CMS pneumonia core measures.
Methods: An interrupted time-series design was used
to evaluate the study question. Data regarding adherence with the following pneumonia core measures were
collected pre- and post-implementation of the enhanced
decision-support tool: blood cultures prior to antibiotic,
antibiotic within 6 hours of arrival, appropriate antibiotic selection, and mean time to antibiotic administration. Prescribing clinicians were educated on the use of
the decision-support tool at departmental meetings and
via direct feedback on their cases.
Results: During the 33-month study period, complete
data were collected for 1185 patients diagnosed with
CAP: 613 in the pre-implementation phase and 572
post-implementation. The mean time to antibiotic
administration decreased by approximately one minute
from the pre- to post-implementation phase, a change
that was not statistically significant (p = 0.824). The proportion of patients receiving blood cultures prior to
antibiotics improved significantly (p < 0.001) as did the
proportion of patients receiving antibiotics within
6 hours of ED arrival (p = 0.004). A significant improvement in appropriate antibiotic selection was noted with
100% of patients experiencing appropriate selection in
ACADEMIC EMERGENCY MEDICINE • April 2012, Vol. 19, No. 4, Suppl. 1
the post-phase, p = 0.0112. Use of the available support
tool increased throughout the study period (χ² = 78.13, df = 1, p < 0.0001). All improvements were maintained
15 months following the study intervention.
Conclusion: In this academic ED, introduction of an enhanced electronic clinical decision support tool significantly improved adherence to CMS pneumonia core measures: the proportions of patients receiving blood cultures prior to antibiotics, antibiotics within 6 hours of arrival, and appropriate antibiotic selection all improved significantly.
371
Continued Rise In The Use Of Midlevel
Providers In US Emergency Departments,
1993 To 2009
David F.M Brown, Ashley F. Sullivan, Janice
A. Espinola, Carlos A. Camargo Jr.
Massachusetts General Hospital, Boston, MA
Background: ED visits in the US have risen dramatically
over the past two decades. In order to meet the growing
demand for emergency care, mid-level providers (MLPs)
- both physician assistants (PAs) and nurse practitioners
(NPs) - have been utilized in EDs in various ways. We
previously demonstrated a striking increase in MLP utilization between 1993 and 2005. The extent to which MLPs
currently are used in US EDs and the degree to which
this use has changed since 2005 are not known.
Objectives: To test the hypothesis that MLP usage in
US EDs continues to rise.
Methods: We analyzed ED visits from the National
Hospital Ambulatory Medical Care Survey (NHAMCS)
to identify those seen by mid-level providers (MLP).
MLP visits were defined as those seen by PAs and/or
NPs, with or without the involvement of physicians.
Trends in all MLP visits were examined over the
17-year study period. Also, MLP-only visits, defined as
visits where the patient was seen by a MLP without
being seen by a physician, were compared with those
seen by physicians only. We compared weighted proportions with 95%CI and analyzed trends using
weighted logistic regression.
Results: During 1993 to 2009, 8.4% (95% CI, 7.6–9.2%) of all US ED visits were seen by MLPs; 6.3% (95% CI, 5.5–7.0%) were seen by PAs and 2.5% (95% CI, 2.1–2.8%) by NPs. These summary data include marked
changes in MLP utilization. During the 17-year study
period, PA visits rose more than 3-fold, from 2.9% to
9.9%, while NP visits rose more than 4-fold, from 1.1%
to 4.7% (both Ptrend<0.001). Together, MLP visits
accounted for approximately 15% of all US ED visits.
Among all ED visits involving MLPs during the study
period, most (58%) were seen by MLPs with physicians,
while 42% were seen by MLPs only. Compared to
physician-only visits, those seen by MLPs only arrived
by ambulance less frequently (6% vs. 16%), had lower
urgent acuity (34% vs. 60%), and were admitted less
often (3% vs. 14%).
Conclusion: MLP use has increased in US EDs, with
increases seen for both PAs and NPs. By 2009, approximately 1 in 7 ED visits involved an MLP, with a
substantial number seen by MLPs only. Although ED
visits seen by MLPs only are generally of lower acuity,
their involvement in some ED urgent visits and those
requiring admission confirms that the role of MLPs
extends beyond minor presentations.
372
Should Osteopathic Students Applying to
Emergency Medicine Take the USMLE
Exam?
Moshe Weizberg, Dara Kass, Abbas Husain,
Jennifer Cohen, Barry Hahn
Staten Island University Hospital, Staten Island,
NY
Background: Board scores represent an important
aspect of an emergency medicine (EM) residency application. Whereas allopathic (MD) students take the United States Medical Licensing Examination (USMLE),
osteopathic (DO) students take the Comprehensive
Osteopathic Medical Licensing Examination (COMLEX).
It is difficult to compare these board scores. Previous
literature proposed an equation to predict USMLE
scores based on COMLEX. Recent analyses suggested
that this may no longer be accurate. DO students applying to allopathic programs frequently ask whether they
should take USMLE in addition to COMLEX.
Objectives: #1: Compare the likelihood of MD and DO
students to match in allopathic EM residencies. #2:
Compare the likelihood to match of DO applicants who
took USMLE to those who did not.
Methods: IRB-approved review of ERAS and NRMP
data for application season 2010–2011 in conjunction
with a survey of all EM residency program leadership.
ERAS provided the number of DO applicants and how
many reported USMLE. NRMP supplied how many DO
students ranked and matched in allopathic EM programs and whether they reported USMLE. A questionnaire was sent to all allopathic EM residency programs
asking about the importance of DO students taking USMLE.
Results: 1,482 MD students ranked EM programs; 1,277 (86%, 95% CI 84.3–87.9%) matched. 350 DO students ranked EM programs; 181 (52%, 95% CI 46.4–57.0%) matched (difference 95% CI 29.8–39.0%, p < 0.0001). 208 DO students reported a USMLE score; 126 (61%, 95% CI 53.6–67.2%) matched. 142 did not report a USMLE score; 55 (39%, 95% CI 30.7–47.3%) matched (difference 95% CI 11.2–32.5%, p < 0.0001). Operating characteristics of USMLE to prevent not matching: absolute risk reduction = 21.9%, relative risk reduction = 55.6%, number needed to ‘‘take’’ = 4.6. Programs were surveyed about the importance of DO students taking USMLE: extremely important 40%, somewhat important 38%, not at all important 22%.
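For reference, the reported number needed to ‘‘take’’ is the reciprocal of the absolute risk reduction (arithmetic implied by the reported figures):

```latex
\mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.219} \approx 4.6
```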
Conclusion: In the 2010–2011 application season, MD
students were more likely than DO students to match in
an allopathic EM residency. DO students who took USMLE were more likely to match than those who did
not. DO students applying to allopathic EM programs
should consider taking USMLE to improve their
chances of a successful match. Limitations: Single application season, students may not have reported USMLE,
positions obtained outside the match, factors other than
boards may have contributed.
373
Emergency Medicine Residents’
Association (EMRA) Emergency Medicine
Qualifying and Certification Exam
Preparation Survey
Todd Guth
University of Colorado, Aurora, CO
Background: Emergency medicine (EM) residency
graduates need to pass both the written qualifying
exam and oral certification exam as the final benchmark
to achieve board certification. The purpose of this project is to obtain information about the exam preparation habits of recent EM graduates to allow current
residents to make informed decisions about their individual preparation for the ABEM written qualifying and
oral certification exams.
Objectives: The study sought to determine the amount
of residency and individual preparation, to determine
the extent of the use of various board review products,
and to elicit evaluations of the various board review
products used for the ABEM qualifying and certification exams.
Methods: Design: An online survey instrument was
used to ask respondents questions about residency
preparation and individual preparation habits, as well
as the types of board review products used in preparing for the EM boards. Participants: As greater than
95% of all EM graduates are EMRA members, an
online survey was sent to all EMRA members who had graduated in the past three years.
Observations: Descriptive statistics of types of preparation, types of resources, time, and quantitative and
qualitative ratings for the various board preparation
products were obtained from respondents.
Results: A total of 520 respondents spent an average of
9.1 weeks and 15 hours per week preparing for the
written qualifying exam and spent an average of
5 weeks and 7.8 hours per week preparing for the oral
certification exam. In preparing for the written qualifying exam, 90% used a preparation textbook, with
16% using more than one textbook and 47% using a
board preparation course. In preparing for the oral
certification exam, 56% used a preparation textbook
while 34% used a preparation course.
Sixty-seven percent of respondents reported that their
residency programs had a formalized written qualifying
exam preparation curriculum, of which 48% was centered on the annual in-training exam. Eighty-five percent of residency programs had formalized oral certification exam preparation.
Respondents reported spending on average $715 preparing for the qualifying exam and $509 for the certification exam.
Conclusion: EM residents spend significant amounts of
time and money and make use of a wide range of
residency and commercially available resources in preparing for the ABEM qualifying and certification exams.
374
Use Of The Multiple Mini Interview (MMI)
For Emergency Medicine Resident
Selection: Acceptability To Participants
And Comparison With Application Data
Laura R. Hopson1, Eve D. Losman1, R. Brent Stansfield1, Taher Vohra2, Danielle Turner-Lawrence3, John C. Burkhardt1
1University of Michigan, Ann Arbor, MI; 2Henry Ford Hospital, Detroit, MI; 3William Beaumont Hospital, Royal Oak, MI
Background: Communication and professionalism skills are essential for EM residents but are not well measured by selection processes. The Multiple Mini-Interview (MMI) uses multiple, short structured contacts to measure these skills. It predicts medical school success better than the interview and application. Its acceptability and utility in EM residency selection are unknown.
Objectives: We theorized that the MMI would provide
novel information and be acceptable to participants.
Methods: 71 interns from three programs in the first
month of training completed an eight-station MMI
developed to focus on EM topics. Pre- and post-surveys
assessed reactions using five-point scales. MMI scores
were compared to application data.
Results: EM grades correlated with MMI performance (F(1,66) = 4.18, p < 0.05), with honors students having higher MMI summary scores. Higher third-year clerkship grades trended toward higher MMI performance means, although not significantly. MMI performance
did not correlate with a match desirability rating and
did not predict other individual components of the
application including USMLE Step 1 or USMLE Step 2.
Participants preferred a traditional interview (mean difference = 1.36, p < 0.0001). A mixed format was preferred over a pure MMI (mean difference = 1.1,
p < 0.0001). Preference for a mixed format was similar
to a traditional interview. MMI performance did not
significantly correlate with preference for the MMI;
however, there was a trend for higher performance to
associate with higher preference (r = 0.15, t(65) = 1.19,
n.s.). Performance was not associated with preference
for a mix of interview methods (r = 0.08, t(65) = 0.63,
n.s.).
Conclusion: While the MMI alone was viewed less
favorably than a traditional interview, participants were
receptive to a mixed methods interview. The MMI
appears to measure skills important in successful completion of an EM clerkship and thus likely EM residency. Future work will determine whether MMI
performance correlates with clinical performance
during residency.
375
Novel Comprehensive Emergency Medicine
In-Training Exam Course Can Improve
Residency-Wide Scores
Rahul Sharma, Jeremy D. Sperling, Peter W.
Greenwald, Wallace A. Carter
Weill Cornell Medical College / NewYork-Presbyterian Hospital, New York, NY
Background: The annual American Board of Emergency Medicine (ABEM) in-training exam is a tool to
assess resident progress and knowledge. When the
New York-Presbyterian (NYP) EM Residency Program
started in 2003, the exam was not emphasized and resident performance was lower than expected. A course
was implemented to improve residency-wide scores
despite previous EM literature failing to exhibit
improvements with residency-sponsored in-training
exam interventions.
Objectives: To evaluate the effect of a comprehensive,
multi-faceted course on residency-wide in-training
exam performance.
Methods: The NYP EM Residency Program, associated
with Cornell and Columbia medical schools, has a 4-year format with 10–12 residents per year. An intensive
14-week in-training exam preparation program was
instituted outside of the required weekly residency conferences. The program included lectures, pre-tests,
high-yield study sheets, and remediation programs.
Lectures were interactive, utilizing an audience
response system, and consisted of 13 core lectures (2–
2.5 hours) and three review sessions. Residents with
previous in-training exam difficulty were counseled on
designing their own study programs. The effect on in-training exam scores was measured by comparing each
resident’s score to the national mean for their
postgraduate year (PGY). Scores before and after
course implementation were evaluated by repeated-measures regression modeling. Overall residency performance was evaluated by comparing the residency average
to the national average each year and by tracking
ABEM national written examination pass rates.
Results: Resident performance improved following
course implementation. Following the course's introduction, the odds of a resident beating the national mean increased 3.9-fold (95% CI 1.9–7.3), and the percentage of residents exceeding the national mean for their PGY increased by 37% (95% CI 23%–52%). Following course introduction, the overall residency mean
score has outperformed the national exam mean annually and the first-time ABEM written exam board pass
rate has been 100%.
Conclusion: A multi-faceted in-training exam program
centered around a 14-week course markedly improved
overall residency performance on the in-training exam.
Limitations: This was a before-and-after evaluation, as
randomizing residents to receive the course was not
logistically or ethically feasible.
376
Difference in Rates of Medical Board
Disciplinary Action between Emergency
Medicine Trained and Non-Emergency
Medicine Trained Physicians in the
Emergency Department
David J. Kammer, Jeffery A. Kline
Carolinas Medical Center, Charlotte, NC
Background: Quantifying the effect of residency training and board certification in emergency medicine on
the quality of practice of medicine in the emergency
department remains an unmet goal. The public North
Carolina Medical Board (NCMB) database archives
information about training status, type of practice, and
disciplinary actions from any state.
Objectives: Measure the frequency and incidence of
state board disciplinary actions for residency-trained,
board-certified or board-eligible physicians practicing
emergency medicine compared with non-residency
trained practitioners of emergency medicine in the state
of North Carolina.
Methods: We downloaded and analyzed the North Carolina Medical Board licensing database for state medical board disciplinary actions against all registered
physicians who self-reported the ED as one of their
areas of medical practice. A reviewer blinded to training status counted all unique disciplinary actions from
all states, and years of medical practice from the
reported date of medical school graduation. Groups
were compared using 95% CI for difference in independent proportions.
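The abstract does not specify which interval construction was used; the standard Wald form for a 95% CI on a difference in independent proportions is:

```latex
(\hat{p}_1 - \hat{p}_2) \;\pm\; z_{0.975}
\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}
```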
Results: The database contained 2,195 physicians who reported EM as their area of practice. Of these, 1,626 (74%) reported ABEM or AOA board certification or reported training at an accredited EM residency, with a mean of 18.0 ± 10.5 years of practice, whereas 569 (26%) reported no EM training or board certification, with a mean of 22.3 ± 11.0 years of practice. Among the non-residency trained, non-boarded EM physicians, the percentage of individuals with board actions against them was significantly higher (6.9% vs. 1.9%, 95% CI for difference of 5.0% = 3.1 to 7.5%), but the difference in incidence of actions was not significant (1.3 vs. 3.4 events/1000 years of practice, 95% CI for difference of 2.1/1000 = −3/1000 to +8/1000); the power to detect a difference in incidence was 30%.
Conclusion: In this study population, EM-trained physicians had significantly fewer total state medical board
disciplinary actions against them than non-EM trained
physicians, but when adjusted for years of practice
(incidence), the difference was not significantly different
at the 95% confidence level. The study was limited by
low power to detect a difference in incidence.
377
Does the Residency Selection Cycle
Impact What Information Is Accessed on
the Web?
Jim Killeen, Gary Vilke, Theodore Chan, Leslie
Oyama, Maegan Carey, Edward Castillo
UCSD Medical Center, San Diego, CA
Background: Information on EM residency programs
is widely available and most commonly accessed
through the internet. Less is known on what types of
information are accessed and when such access occurs
given the yearly residency selection cycle.
Objectives: To determine what information is most
commonly accessed as measured by internet web page
views, and when that access occurs for EM residency
programs.
Methods: A retrospective study of all internet access to
an EM department and residency website for a 1-year
period (7/1/10-6/30/11). Data collected included
frequency of website ‘‘hits’’, number of unique visitors,
duration of view, traffic source, and specific webpages
accessed over the study period. Data were stratified by
quarter to determine patterns of temporal variations.
Statistical analysis used chi-square tests to assess differences over quarters.
Results: During the study period, there were 28,827
website visits or ‘‘hits’’, from 16,581 unique visitors.
The majority of traffic source came from search engines
(52.7%, primarily Google), with the remaining coming
from direct traffic (32.3%) or a referring site (15.0%).
Visitors spent an average of 1:50 minutes on the website and visited 2.94 separate pageviews for a total of
84,610 pageviews cumulatively for 1 year. There were
820 unique visitors to the EM residency application
page. 46.8% of these occurred in the quarter prior to
applications being due, followed by 21.5%, 15.1%, and
16.6% in the following quarters. The percentages of visitors for the same quarter viewing specific residency
info webpages were: conferences (26.4%), current residents (28.7%), curriculum (28.2%), FAQs (29.1%), and
goals (25.1%). Visitor numbers for each of these pages were significantly lower than for the application page (p < 0.001).
Conclusion: Almost half of the unique annual visits to the residency application page occur in the quarter before applications are due, significantly more than access to the specific informational pages about the residency.
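As a sketch of the quarterly comparison, the code below (assuming SciPy is available) applies a chi-square goodness-of-fit test to application-page visitor counts back-calculated from the reported percentages of the 820 unique visitors; the abstract's exact test structure is not specified, so this is illustrative only.

    from scipy.stats import chisquare

    # Quarterly unique visitors to the application page,
    # back-calculated from 46.8%, 21.5%, 15.1%, and 16.6% of 820
    observed = [384, 176, 124, 136]
    stat, p = chisquare(observed)  # null hypothesis: equal visitors each quarter
    print(stat, p)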
378
A Focused Educational Intervention
Increases Paramedic Documentation of
Patient Pain Complaints
Herbert G. Hern1, Harrison Alter1, Joseph
Barger2, Monica Teves3, Karen Hamilton3,
Leslie Mueller3
1Alameda County - Highland, Oakland, CA; 2Contra Costa County EMS Agency, Martinez, CA; 3American Medical Response, Concord, CA
Background: Patients in the EMS setting are often
underassessed and undertreated for acute painful conditions. Multiple studies document the failure to assess,
quantify, or treat pain by EMS providers.
Objectives: We chose pain documentation as a long-term quality improvement project in our EMS system. Our objectives were to enhance the quality of pain
assessment, to reduce patient suffering and pain
through improved pain management, to improve pain
assessment documentation, to improve capture of initial
and repeat pain scales, and to improve the rate of pain
medication. This study addressed the aim of improving
pain assessment documentation.
Methods: This was a quasi-experimental study of paramedic documentation of the PQRST mnemonic and
pain scales. Our intervention consisted of mandatory
training on the importance and necessity of pain
assessment and treatment. In addition to classroom
training, we used rapid cycle individual feedback and
public posting of pain documentation rates (with unique
IDs) for individual feedback. The categories of chief
complaint studied were abdominal pain, blunt injury,
burn, chest pain, headache, non-traumatic body pain,
and penetrating injury. We compared the pain documentation rates in the 3 months prior to intervention,
the 3 months of intervention, and 3 months post intervention. Using repeated-measures ANOVA, we compared rates of paramedic documentation over time.
Results: Our EMS system transported 42,166 patients during the study period, of whom 15,490 were transported for painful conditions in the defined chief complaint categories.
There were 168 paramedics studied, of whom 149 had
complete data. Documentation increased from 1,819 of 5,122 painful cases (35.5%) in Qtr 1 to 4,625 of 5,180 painful cases (89.3%) in Qtr 3. The trend toward increased rates of
pain documentation over the three quarters was strongly
significant (p < 0.001). Paramedics were significantly
more likely to document pain scales and PQRST assessments over the course of the study with the highest rates
of documentation compliance in the final 3-month period.
Conclusion: A focused intervention of education and
individual feedback through classroom training, one-on-one training, and public posting improves paramedic
documentation rates of perceived patient pain.
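The repeated-measures ANOVA reported above requires per-paramedic data, but the aggregate quarterly improvement can be checked with a simple chi-square on the reported totals; a minimal sketch (assuming SciPy is available) follows.

    from scipy.stats import chi2_contingency

    # Documented vs. undocumented painful cases, Qtr 1 vs. Qtr 3 (reported totals)
    table = [[1819, 5122 - 1819],   # 35.5% documented
             [4625, 5180 - 4625]]   # 89.3% documented
    chi2, p, dof, expected = chi2_contingency(table)
    print(chi2, p)  # consistent with the reported p < 0.001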
379
Emergency Medical Service Providers’
Perspectives on Management Of The
Morbidly Obese
Graham Ingalsbe1, John Cienki2, Kathleen
Schrank1
1University of Miami, Miami, FL; 2Jackson Memorial Hospital, Miami, FL
Background: Obesity is epidemic in the United States,
with an increasing burden to the health care system.
Management and transport of the morbidly obese (MO)
poses unique challenges for EMS providers. Though
resources are being directed to improving transport of
the obese, little research exists to guide these efforts.
Objectives: We sought to identify EMS providers’ perspectives on the challenges of caring for MO patients in
the field and areas for improvement.
Methods: We administered an anonymous, web-based
survey to all active providers of prehospital transport
from a large, urban, fire-based EMS system to determine the specific challenges of managing MO patients.
This survey looked at various components of the transport: lifting, transport time, airway management, establishing IV access, drug administration, as well as
demographics, equipment, and education needs. The
survey contained yes/no, rank-order, and Likert scale
questions with free response available.
Results: Of approximately 550 EMS providers surveyed, 203 provided complete responses. All had transported an MO
patient. Providers felt MO patients frequently call EMS
for assistance around the home (29%) and for non-emergent transport to a health care facility (21%), with pain (29%)
and shortness of breath (23%) as the most common
emergent complaints. Of specific challenges to properly
care for MO patients, 91% thought lifting and/or moving
the patient was most difficult, followed by IV access, airway management, transport time, and measuring accurate vital signs. Respondents felt that transporting MO
patients requires at least six EMS personnel (87%). Of
EMS providers who responded, 73% had attempted IV
access on an MO patient, but only 36% had ever intubated one, and 25% had ever calculated or adjusted medication
dosages specifically for MO patients. More than 87% felt
it would be beneficial to receive more training on transport, intubation, and medication dosing for MO patients, and 96%
felt they needed more equipment. Of respondents, 89%
felt that MO patients were not able to receive the same
standard of care as other patients.
Conclusion: Surveyed EMS providers felt that difficulties in lifting, as well as lack of experience, hinder care
for MO patients in the prehospital setting, and most felt
that more training would be beneficial. Surveyed EMS
providers reported that MO patients are not able to
receive the same standard of care as non-MO patients.
(Originally submitted as a ‘‘late-breaker.’’)
380
Anaphylaxis Knowledge Among
Paramedics: Results of a National Survey
Ryan C. Jacobsen1, Serkan Toy2, Joseph
Salomone1, Aaron Bonham3, Jacob
Ruthstrom1, Matthew C. Gratton1
1Truman Medical Center, Kansas City, MO; 2Children’s Mercy Hospitals and Clinics, Kansas City, MO; 3University of Missouri-Kansas City, Kansas City, MO
Background: Anaphylaxis is a potentially life-threatening emergency and prompt recognition and treatment
by EMS providers can be critically important for early
initiation of appropriate life-saving interventions. Anaphylaxis has been shown to be both under-recognized and under-treated by physicians, and limited data exist on the prehospital care of anaphylaxis.
Objectives: The purpose of this study was to determine
how well paramedics recognize and treat anaphylaxis
in an adult population.
Methods: Blinded survey of a random sample of paramedics registered by the National Registry of
Emergency Medical Technicians (NREMT). A SurveyMonkey™ survey was designed and validated with local
paramedics and then distributed via e-mail to a random
sample of NREMT-certified paramedics. The survey contained three sections: demographic, self-assessment of
confidence regarding anaphylaxis care, and cognitive
assessment. Paramedics were contacted a total of four times to maximize survey completion.
Results: 3,538 of 9,655 responded (36.6% response rate).
79% were male; mean age 36; mean EMS experience
12.1 years; 78.5% full-time EMS; and 30.9% fire-based,
27.9% private, 17.2% county, and 12.9% hospital-based.
98% were confident that they could recognize anaphylaxis; 97% were confident that they could treat anaphylaxis; 95.4% were confident they could use an
epinephrine auto-injector (EAI). 98.9% correctly recognized a case of classic anaphylaxis while only 2.5% correctly identified the atypical case. Only 46.7% gave
epinephrine as the first treatment for anaphylactic
shock (airway already secured) and only 38.9% gave it
IM (58.4% SQ); and 60.5% gave it in the deltoid (11.6%
thigh).
Conclusion: While a large percentage of paramedics
recognized classic anaphylaxis, a very small percentage
recognized atypical anaphylaxis; less than half chose
epinephrine as the initial drug of choice and most
respondents were unable to identify the correct route
of administration. This survey points out a number of
areas for improved education and training.
381
Thermal Medication Stress in Air
Ambulances: The Mercy Saint Vincent Life
Flight Experience
Richard Tavernetti1, Edward Tavernetti2, David
J. Ledrick1
1MSVMC, Toledo, OH; 2University of California at Davis, Davis, CA
Background: Temperature variations can degrade the
potency of advanced life support medications. While
mitigating the effects of temperature on medications
carried by ground ambulances has been well studied,
there is little information to guide helicopter emergency
medical services (HEMS). To date a comprehensive
evaluation of thermal medication stress in the HEMS
environment has not been completed. Previous studies
have been limited by their brevity, attention only to
maximum and minimum temperatures, lack of replication, and variability in storage practices.
Objectives: The objective of this study was to provide a
high-resolution picture of medication temperatures
across a large HEMS system with sufficient replication
to allow for correlations to be made between thermal
medication stress and specific operational practices.
Methods: Our program is located in Northwest Ohio
and conducts approximately 2,500 annual patient transports across five aircraft. Bags containing medications
are located in the cabin and tail compartments of all
aircraft. Temperature loggers were secured inside all
the medication bags and temperatures were recorded
every 30 minutes for one year.
Results: 139,523 data points were included in the analysis. Temperatures ranged from −2.5°C to 46.5°C. The mean kinetic temperature (MKT) goal of 25°C was exceeded in the May-September period, with cabin MKT being significantly cooler than tail-compartment MKT (p < 0.003). Cabin
storage was associated with lower thermal stress
throughout the year. A larger cabin size appeared to be
associated with lower MKT. Maximum temperatures
during winter and summer were similar due to the wintertime use of heaters. Freezing temperatures (<0°C)
were only encountered in the tail compartments.
Conclusion: Current HEMS medication storage practices are not within U.S. Pharmacopeia standards, although compliance appears attainable with minimal changes
in operational practices. Storing medications in the aircraft’s cabin can reduce thermal medication stress.
Operational procedures aimed at reducing thermal
medication stress appear to be most important from May through September. Heaters can subject medications to
extremes of temperature and require greater care in
their implementation. Stock rotation protocols based on
previous data from the ground ambulance setting may
be inadequate for HEMS.
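Mean kinetic temperature is conventionally computed with the ICH formula MKT = (ΔH/R) / (−ln[(1/n) Σ exp(−ΔH/(R·T_k))]); whether this study used exactly that convention is an assumption. A minimal Python sketch for equally spaced logger readings:

    import math

    DH_OVER_R = 83144 / 8.3144  # ICH convention: ΔH = 83.144 kJ/mol, so ΔH/R ≈ 10,000 K

    def mean_kinetic_temperature(temps_c):
        # MKT in °C for equally spaced temperature readings given in °C
        temps_k = [t + 273.15 for t in temps_c]
        mean_exp = sum(math.exp(-DH_OVER_R / t) for t in temps_k) / len(temps_k)
        return DH_OVER_R / -math.log(mean_exp) - 273.15

    # A logger alternating between 20 °C and 40 °C yields an MKT near 34 °C,
    # above the 30 °C arithmetic mean, because hot excursions dominate
    print(mean_kinetic_temperature([20, 40] * 24))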
382
Opportunities for Emergency Medical
Services Care of Syncope
Brit Long, Luis Serrano, Fernanda Bellolio,
Erik Hess
Mayo Clinic College of Medicine, Rochester, MN
Background: Emergency medical services (EMS) systems are vital in the identification, assessment, and
treatment of trauma, stroke, myocardial infarction, and
sepsis, improving early recognition, resuscitation, and transport to appropriate medical facilities. EMS personnel provide similar first-line care for patients with
syncope, performing critical actions such as initial
assessment and treatment as well as gathering key
details of the event.
Objectives: To characterize emergency department patients with syncope who receive initial care from EMS, and to describe the role of EMS as initial providers.
Methods: We prospectively enrolled patients over
18 years of age who presented with syncope or near syncope to a tertiary care ED with 72,000 annual patient visits from June 2009 to June 2011. We compared patient
age, sex, comorbidities, and 30-day cardiopulmonary
adverse outcomes (defined as myocardial infarction, pulmonary embolism, significant cardiac arrhythmia, and
major cardiovascular procedure) between EMS and
non-EMS patients. Descriptive statistics, two-sided t-tests, and chi-square tests were used as appropriate.
Results: Of the 669 patients enrolled, 254 (38.0%) arrived
by ambulance. The most common complaints in patients transported by EMS were fainting (50.4%) and dizziness (45.7%); syncope was reported in 28 (11.0%). Compared
to non-EMS patients, those who arrived by ambulance
were older (mean age (SD) 64.5 (18.7), vs. 60.6 (19.5)
years, p = 0.012). There were no differences in the proportion of patients with hypertension (20.0% vs 32.0%,
p = 0.75), coronary artery disease (8.85% vs 15.3%,
p = 0.67), diabetes mellitus (6.5% vs 9.5%, p = 0.57), or
congestive heart failure (3.8% vs 6.6%, p = 0.74). Sixty-nine (10.8%) patients experienced a cardiopulmonary
event within 30 days. Twenty-eight (4.4%) patients who
arrived by ambulance and 41 (6.4%) non-EMS patients
had a subsequent cardiopulmonary adverse event (RR 1.08, 95% CI 0.68–1.69) within 30 days. The table lists interventions provided by EMS prior to ED arrival.
Conclusion: EMS providers care for more than one-third of ED syncope patients and often perform key
interventions. EMS systems offer opportunities for
advancing diagnosis, treatment, and risk stratification
in syncope patients.
Table - Abstract 382: Interventions provided by EMS prior to ED arrival [table not reproduced]
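A minimal sketch of the log-scale interval behind a risk ratio like the one reported: the group denominators (254 EMS, 415 non-EMS) are assumptions taken from the enrollment figures, so the output approximates rather than reproduces the reported 1.08 (0.68–1.69).

    import math

    def risk_ratio(a, n1, c, n2, z=1.96):
        # Risk ratio with a 95% CI computed on the log scale
        rr = (a / n1) / (c / n2)
        se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
        return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

    # 30-day events: 28 of 254 EMS arrivals vs. 41 of 415 non-EMS (denominators assumed)
    print(risk_ratio(28, 254, 41, 415))  # approximately (1.12, 0.71, 1.76)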
383
Electronic Accountability Tools Reduce CT
Overutilization in ED Patients with
Abdominal Pain
Angela M. Mills1, Daniel N. Holena1, Caroline
Kerner1, Hanna M. Zafar1, Frances S. Shofer1,
Brigitte M. Baumann2
1University of Pennsylvania, Philadelphia, PA; 2Cooper University Hospital, Camden, NJ
Background: Abdominal pain is the most common reason for visiting an emergency department (ED), and
abdominopelvic computed tomography (APCT) use has
increased dramatically over the past decade. Despite
this, there has been no significant change in rates of
admission or diagnosis of surgical conditions.
Objectives: To assess whether an electronic accountability tool affects APCT ordering in ED patients with
abdominal or flank pain. We hypothesized that implementation of an accountability tool would decrease
APCT ordering in these patients.
Methods: Before-and-after study using an electronic medical record at an urban academic ED from
Jul-Nov 2011, with the electronic accountability tool
implemented in Oct 2011 for any APCT order. Inclusion
criteria: age ≥18 years, non-pregnant, and chief complaint or triage pain location of abdominal or flank
pain. Starting Oct 17th, 2011, resident attempts to order
APCT triggered an electronic accountability tool which
only allowed the order to proceed if approved by the
ED attending physician. The attending was prompted to
enter the primary and secondary diagnoses indicating
APCT, agreement with need for CT and, if no agreement, who was requesting this CT (admitting or consulting physician), and their pretest probability (0–100)
of the primary diagnosis. Patients were placed into two
groups: those who presented prior to (PRE) and after
(POST) the deployment of the accountability tool. Primary outcome was percentage of APCT performed by
group. Continuous data were compared using the Mann-Whitney test, while categorical data were compared
using Fisher’s exact test.
Results: Of 1,419 patients enrolled (mean age
46.7 ± 17 years, 54% male, 58% black, 33% admitted),
the majority had a chief complaint of abdominal pain
(81%). There were no statistically significant differences
in age, sex, or race between the PRE (N = 1014; 71%)
and POST (N = 405; 29%) groups. There was a significant reduction in APCT use after the tool’s implementation (37.8% PRE vs. 31.9% POST, difference 6%, 95%
CI 0.4–11%) with no change in the rate of admission
(33.5% PRE vs. 30.9% POST, p = 0.35).
Conclusion: An electronic accountability tool with forced attending completion reduced unnecessary
APCT in ED patients with abdominal pain and did not
lead to increased rates of admission. Extension of these
tools into other areas could lead to further reductions
in unnecessary radiologic testing.
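The PRE/POST comparison can be sketched as follows (assuming SciPy is available); the cell counts are back-calculated from the reported group sizes and percentages (37.8% of 1,014 and 31.9% of 405) and are therefore approximate.

    from scipy.stats import fisher_exact

    # APCT ordered vs. not ordered, before and after the accountability tool
    pre = [383, 1014 - 383]    # 37.8% of 1,014 (back-calculated)
    post = [129, 405 - 129]    # 31.9% of 405 (back-calculated)
    odds_ratio, p = fisher_exact([pre, post])
    print(odds_ratio, p)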
384
Identification Of Patients Less Likely To
Have Significant Alternate Diagnoses On
CT For Renal Colic
Brock Daniels, Cary Gross, Dinesh Singh,
Chris Moore
Yale University School of Medicine, New
Haven, CT
Background: Previous studies on unselected populations have reported that more than 10% of CT scans for
suspected kidney stone may reveal significant alternate
pathology.
Objectives: To determine the relative effect on the prevalence of significant alternate diagnoses on CT for suspected renal colic after excluding patients without flank/back pain, or with evidence of infection (fever
and/or leukocytes on urine dip), active malignancy,
known renal disease, or prior urologic intervention.
Methods: Retrospective record review. All CT ‘‘flank pain protocol’’ (FPP) studies performed in patients >18 years old at an academic ED between 4/05 and 11/10 were electronically retrieved (n = 5,379). Of these, 4,585 records (85%) were randomly
selected for review by trained research assistants who
categorized dictated CT reports as ‘‘normal’’, ‘‘symptomatic stone’’, or ‘‘other’’. All ‘‘other’’ CT findings
were reviewed along with medical records by one of
three physician authors to determine if CT was diagnostic and, if so, whether an intervention or follow-up (f/u) was ‘‘required’’, ‘‘recommended’’, or ‘‘not needed’’.
A random subset (1868/4585) of records underwent full
review for exclusion criteria, as well as all records with
f/u ‘‘required’’ or ‘‘recommended’’. Any categorization
or exclusion that was not clear was adjudicated by the
three physicians. A subset of records randomly selected
from each category was blindly double reviewed by
two physicians to ensure agreement.
Results: Results are shown in the table. Overall, 58.2%
(95% CI 56.8–59.7) of the CTs reviewed were diagnostic,
with 51.8% (50.4–53.6) of all CTs revealing symptomatic
kidney stones, 4.1% (3.6–4.7) intervention or f/u
required, 1.4% (1.1–1.7) f/u recommended, and 1.0% (0.7–1.3) not needed. Of the 188 intervention or f/u
required, 139 or 73.9% (67.2–79.7) would be screened
by the exclusion rule, while on full record review 42.4%
(39.2–43.6) would be excluded. The data suggest that if
applied prospectively, the prevalence of alternate diagnoses requiring intervention or follow-up in CTs for
renal colic could be reduced from 4.1% to approximately 1.6%
(50 significant diagnoses out of 3101 not excluded).
Conclusion: On thorough record review, less than five
percent of CTs for renal colic reveal alternate diagnoses
requiring intervention or follow-up. Applying a rule
with basic historical data and point-of-care urine dip
could identify a population where this prevalence is
under 2%. Prospective study is needed.
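The headline reduction follows directly from the counts in the table below; a short worked sketch of that arithmetic, using the figures as reported:

    # Prevalence of significant alternate diagnoses before and after the proposed rule
    reviewed = 4585       # CT flank pain protocols reviewed
    needing_fu = 188      # intervention or follow-up required
    not_excluded = 3101   # scans the rule would still allow
    remaining_fu = 50     # significant diagnoses among those not excluded

    print(needing_fu / reviewed)        # about 0.041 (4.1%)
    print(remaining_fu / not_excluded)  # about 0.016 (1.6%)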
Table - Abstract 384: Review of CT FPPs (n = 4,585)
                                                             n =          %
Age ± SD                                              44.9 ± 15.8
Female                                                      2,363      51.5%
CTs diagnostic                                              2,670      58.2%
Symptomatic stone                                           2,376      51.8%
Intervention or f/u required                                  188       4.1%
f/u recommended                                                62       1.4%
f/u not needed                                                 44       1.0%
Expected* not excluded (derived from sample)                3,101      57.4%
Not excluded intervention or f/u required (actual)             50       1.6%
385
Delayed Outcomes For Patients With
Suspected Renal Colic After Discharge
From The Emergency Department
Justin Yan, Marcia Edmonds, Shelley McLeod,
Rob Sedran, Karl Theakston
The University of Western Ontario, London,
ON, Canada
Background: Although renal colic is a relatively benign
disease, it can have a significant effect on patient-related outcomes that may not be immediately apparent
during an isolated emergency department (ED) visit.
Objectives: The objective of this study was to determine the burden of disease for patients with suspected
renal colic after discharge from their initial ED visits.
Methods: This was a prospective observational cohort
study involving all adult (≥18 years) patients who presented to the EDs of a tertiary care centre (combined census 150,000) with suspected renal colic over a one-year period (Oct 2010–Oct 2011). Patients were contacted by telephone by trained research personnel at
48–72 hours and 10–14 days after their ED visit to
determine use of analgesics, missed days of school or
work, and repeat visits to a health care provider. Electronic patient charts were reviewed at 90 days after the
initial visit to determine if urologic intervention (lithotripsy, stents, etc.) was eventually required.
Results: Of 397 patients enrolled, 38 (9.6%) were
excluded for definite alternate diagnoses. Of 359
remaining patients, 12 (3.3%) were lost to follow-up,
leaving 347 (96.7%) providing post-discharge outcomes.
Ten patients (2.9%) were admitted from the ED. Of
those patients contacted, 95 (27.4%) reported passing a stone within 72 hours, and an additional 31 patients reported passing a stone within 14 days (126 cumulative, 36.3%). 270
(77.8%) patients required analgesia for a median (IQR)
duration of 4 (2, 7) days. There were 88 (25.4%) patients
who still