SIIM 2017 Scientific Session
Enterprise Imaging
Saturday, June 3 | 8:00 am – 9:30 am
Integrating Outcome and Quality Measures in an Enterprise Imaging Initiative
Victoria Barnosky, PhD, University of Pittsburgh Medical Center; Harrold Barrett; Malvika Sharma
Background
Value-based payment demands that every department in a hospital measure and report its
quality and outcomes to patients, referring physicians, health system management, and
payers. Currently, consensus on the appropriate set of quality and outcome measures is lacking,
and imaging systems, for example, are not well equipped to measure them. This session will
describe how a large health system in Western Pennsylvania created and implemented a
process for identifying, measuring, and benchmarking radiology quality and outcome measures
during a large system conversion. Attendees will hear about the processes used to identify and
prioritize measures, learn which measures were selected, and understand the challenges of
integrating these measures in a new enterprise-wide imaging initiative.
Case Presentation
While some radiology quality and efficiency measures exist, there is not yet consensus on what
these measures should be or how they should be defined. As we began transitioning to new
enterprise radiology VNA and PACS applications, we found it necessary both to measure the
success of this transition and to begin creating benchmarks for future quality initiatives.
Therefore, we set out to create a de novo process for identifying and selecting measures.
Reaching internal agreement on measure definition and prioritization was also a challenge,
which we overcame by implementing a qualitative, objective measure scoring system. Our final
challenges were engineering the new system to efficiently and routinely generate these
measures, and extracting data for selected measures from our legacy system so that we could
assess the impact of our new technology on quality, efficiency, and patient outcomes.
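The scoring system itself is not detailed here; as a minimal sketch of the idea, candidate measures
could be rated against weighted criteria and ranked by total score. The criteria names, weights,
and candidate measures below are illustrative assumptions, not the ones used in this project.

```python
# Minimal sketch of an objective measure-scoring sheet. Criteria names and
# weights are illustrative assumptions, not those used in this project.
CRITERIA_WEIGHTS = {
    "clinical_impact": 0.4,       # expected effect on patient outcomes
    "data_availability": 0.3,     # can the new PACS/VNA produce the data routinely?
    "benchmark_exists": 0.2,      # is there an external benchmark to compare against?
    "implementation_effort": 0.1, # lower effort rates higher
}

def score_measure(ratings: dict) -> float:
    """Weighted sum of 1-5 ratings; higher totals are prioritized first."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

candidates = {
    "report turn-around-time": {"clinical_impact": 4, "data_availability": 5,
                                "benchmark_exists": 5, "implementation_effort": 4},
    "reading velocity":        {"clinical_impact": 5, "data_availability": 3,
                                "benchmark_exists": 2, "implementation_effort": 2},
}

ranked = sorted(candidates, key=lambda m: score_measure(candidates[m]), reverse=True)
print(ranked)
```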
We determined that an area for improvement centered on current turn-around-time measures.
Many radiology metrics depend on the calculation of turn-around times in the imaging
department, but not all of these times reflect the rate at which radiologists interpret
exams or the actions that take place during interpretation. Because of these known
limitations, we defined a new metric, termed reading velocity, which measures the amount of time
that radiologists spend viewing primary exams, viewing comparison studies, and actively dictating
reports. Using this new criterion, we developed a tool that excludes non-clinical and
administrative tasks from the turn-around-time equation and focuses primarily on the exam at
hand.
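A minimal sketch of how interpretive time could be accumulated from workstation event logs
follows; the event names and the exclusion rule are assumptions for illustration, not the actual
implementation of our analytics tool.

```python
from datetime import datetime

# Hypothetical workstation event log: (timestamp, event_type) pairs for one exam.
# Only events tied to interpretation count toward reading velocity; non-clinical
# and administrative events are excluded from the calculation.
INTERPRETIVE_EVENTS = {"primary_exam_open", "comparison_open", "dictation_active"}

def reading_velocity_seconds(events):
    """Sum the time spent in interpretive states between consecutive events."""
    total = 0.0
    for (start, kind), (end, _next_kind) in zip(events, events[1:]):
        if kind in INTERPRETIVE_EVENTS:
            total += (end - start).total_seconds()
    return total

events = [
    (datetime(2017, 1, 9, 8, 0, 0), "primary_exam_open"),
    (datetime(2017, 1, 9, 8, 2, 30), "comparison_open"),
    (datetime(2017, 1, 9, 8, 4, 0), "phone_call"),          # excluded interval
    (datetime(2017, 1, 9, 8, 6, 0), "dictation_active"),
    (datetime(2017, 1, 9, 8, 9, 0), "report_signed"),
]
print(reading_velocity_seconds(events))  # 420.0 seconds of interpretive time
```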
Using prior turn-around-time metrics as our theoretical framework, we designed our research
project around the following questions (a sketch of how these questions map onto the captured
exam records follows the list):
1. How frequently do radiologists use comparison exams when interpreting exams?
a. How does the frequency vary by regular vs. stat exams, radiologist, radiologist
experience, modality, time of day, facility, or other factors?
b. What is the correlation between viewing a comparison and the age of the comparison
exam?
c. Did implementation of VNA affect patterns of comparison exam use?
d. Can comparison exam use be limited to certain exams or certain diagnoses?
e. Does comparison exam use increase or decrease after the introduction of a VNA?
2. How frequently are radiologists not looking at the most recent relevant comparison?
a. By exam or modality? Routine or stat?
3. Can velocity – the rate at which radiologists interpret exams – be measured accurately?
a. What are the mean/median values of velocity, and how do they vary by regular vs.
stat exams, radiologist, radiologist experience, modality, time of day, facility, or
other factors?
b. Did implementation of VNA affect velocity, overall or by subgroup (e.g., facility,
modality, time of day)?
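As a minimal sketch under assumed column names, questions 1 and 2 above reduce to
straightforward aggregations over the per-exam records captured by the analytics tool; the
DataFrame layout here is hypothetical.

```python
import pandas as pd

# Hypothetical per-exam records emitted by the analytics tool; column names are
# assumptions chosen to mirror the questions above.
exams = pd.DataFrame({
    "priority":        ["routine", "stat", "routine", "stat"],
    "modality":        ["CT", "XR", "MR", "CT"],
    "used_comparison": [True, False, True, True],
    "most_recent_comparison_used": [True, False, False, True],
})

# Q1/Q1a: how often comparisons are used, overall and by priority or modality.
print(exams["used_comparison"].mean())
print(exams.groupby("priority")["used_comparison"].mean())
print(exams.groupby("modality")["used_comparison"].mean())

# Q2: how often the most recent relevant comparison was NOT the one opened.
opened = exams[exams["used_comparison"]]
print(1 - opened["most_recent_comparison_used"].mean())
```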
Outcome
Upon successful development and quality testing of our analytics tool, we obtained IRB approval
and deployed the tool on 100% of radiologist workstations at our targeted hospital. All
participants signed consent to allow research data collection but, to reduce any Hawthorne
effect, were not educated on the details of what the analytics tool would measure. After only
two weeks of data collection, basic themes began to emerge.
The first theme that we immediately identified was a negative linear correlation with the age of
comparison exams: when the ages of comparison exams are charted on a scatter plot, several
predictions can be made (see Fig. 1). In general, the comparison exams used were under 5
days old for the average exam interpretation. We will investigate this finding further to
identify clinical best practices, as well as technical workflows, with regard to image life
cycle management.
Figure 1
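A minimal sketch of the calculation behind this theme, assuming a hypothetical list of
comparison-exam ages: compute the share of comparisons under 5 days old, bin the ages, and fit a
linear trend to the usage counts, where a negative slope reflects the correlation described above.

```python
import numpy as np

# Hypothetical ages (in days) of every comparison exam opened during the study.
comparison_ages = np.array([0.5, 1, 1, 2, 2, 3, 3, 4, 6, 9, 14, 30, 120])

# Share of comparisons less than 5 days old.
print((comparison_ages < 5).mean())

# Bin the ages and fit a linear trend to the usage counts; a negative slope
# reflects the negative correlation between comparison age and use.
bins = np.arange(0, 35, 5)
counts, edges = np.histogram(comparison_ages, bins=bins)
centers = (edges[:-1] + edges[1:]) / 2
slope, intercept = np.polyfit(centers, counts, 1)
print(slope)  # expected to be negative for this pattern
```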
The next theme reviewed in the initial dataset was the median reading velocity for all
radiologists. We compared velocity both by modality and by body-part specialty. Figure
2 demonstrates the duration of reading times by modality for each radiologist participating in
the research.
Figure 2
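A minimal sketch of the aggregation behind Figures 2 and 3 follows, again with hypothetical
column names and values for the per-exam reading-velocity records.

```python
import pandas as pd

# Hypothetical per-exam reading-velocity records (minutes of interpretive time).
velocity = pd.DataFrame({
    "radiologist":     ["A", "A", "B", "B", "C"],
    "modality":        ["CT", "XR", "MR", "CT", "US"],
    "body_part":       ["chest", "chest", "neuro", "abdomen", "abdomen"],
    "reading_minutes": [9.5, 1.8, 14.2, 8.1, 4.0],
})

# Figure 2: median reading time by modality for each radiologist.
print(velocity.groupby(["radiologist", "modality"])["reading_minutes"].median())

# Figure 3: the same relationship with body-part specialty added.
print(velocity.groupby(["modality", "body_part"])["reading_minutes"].median())
```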
As theorized, MR and CT exams required the most radiologist viewing time.
Likewise, Figure 3 displays this time relationship with the addition of body-part specialty. As we
continue our data collection, our intent is to focus on identifying outlying exams and
investigating the workflows that explain why these exams require substantially more or less reading
time.
Figure 3
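One possible way to flag such outlying exams is a per-modality interquartile-range rule; the
sketch below is one illustrative approach with hypothetical data, not the method we have settled
on.

```python
import pandas as pd

# Hypothetical reading-velocity records (minutes), as in the earlier sketches.
exams = pd.DataFrame({
    "modality":        ["CT"] * 6 + ["XR"] * 6,
    "reading_minutes": [8, 9, 10, 9, 8, 45,      # one unusually long CT read
                        2, 1.5, 2.5, 2, 1.8, 0.1],
})

def flag_outliers(group: pd.DataFrame) -> pd.DataFrame:
    """Keep exams whose reading time falls outside 1.5 * IQR for their modality."""
    q1, q3 = group["reading_minutes"].quantile([0.25, 0.75])
    iqr = q3 - q1
    mask = (group["reading_minutes"] < q1 - 1.5 * iqr) | (
        group["reading_minutes"] > q3 + 1.5 * iqr
    )
    return group[mask]

outliers = exams.groupby("modality", group_keys=False).apply(flag_outliers)
print(outliers)  # the 45-minute CT read and the 0.1-minute XR read are flagged
```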
Discussion
This research encompasses a holistic presentation that focuses on both the research planning
process and the actual outcome results. The development of an analytics tool to
accurately measure the velocity of radiologists’ interpretations is an innovative approach to a
traditional turn-around-time metric. Adding the use of comparison exams to the velocity
measurements will provide radiologists and system administrators with a new view into image
life cycle management and best-practice scenarios.
Conclusion
Although the research findings of this study are not finalized, we feel that the preliminary
findings, in addition to the process of defining and developing this research analytics tool, are
valuable to an audience. We intend to expand on our initial findings soon and are
confident in our ability to present both a structured methodology and pertinent findings to
SIIM 2017 attendees.
Keywords
quality, analytics, image management