Clinical Practice Guidelines Number 13
Quality Determinants of Mammography
U.S. Dept of Health & Human Services Pages 74-86, October 1994
This document was taken directly from pages 74-86 of Quality Determinants of Mammography. This
government book was the predecessor to most of our current MQSA Regulations. The FDA left out most
of the audit information. As you’ll see, it’s valuable information… (see Table 10 on page 10)
-Medical Audit-
STRONG RECOMMENDATION: Mammography facilities, regardless of their size or setting,
should perform medical audits at regular intervals, usually annually.
The mammography medical audit is a systematic collection and analysis of mammography results, which
are compared with outcomes data. It is the only way to measure both technical and interpretive aspects of
mammography performance.
Medical audits permit evaluation of the ability to detect very small cancers at the expected rate, which is an
important measure for any mammography practice. Interpreting physicians receive individualized feedback
on performance, which instills confidence if results meet expectations or which identifies a need for
additional training. Longitudinal audit studies may detect causes of false-negative errors, allowing technical
and interpretive shortcomings to be corrected. When audit data are reviewed and acted upon appropriately,
the audit is a powerful education tool. Audit results can indicate acceptable levels of performance to third
party payers and government agencies and assist in legal cases. Timely audits of abnormal mammograms
can result in optimal followup of individual patients as part of a risk management program. An audit also
can help monitor patient population outcomes of screening mammography, locally and through a national
database. Yearly review of audit data allows for more timely and meaningful longitudinal evaluation of
individual and group performance.
-Data Collection-
RECOMMENDATION: Data should be collected from the appropriate sources for all audits.
Data sources should include the mammography report; information recorded by the scheduler, receptionist,
and technologist; and biopsy results. If biopsy results indicate cancer, information can be found through a
regional or statewide tumor registry. If no such registry is available, a definitive diagnosis of cancer or
benign results can be obtained from, in order of preference, the pathology report, the referring physician or
surgeon, or the patient herself. Anyone who is involved in patient care and has access to biopsy results,
such as surgeons and pathologists, should cooperate in communicating these results to the mammography
facility.
RECOMMENDATION: Regional or national mammography databases should be established with a
major emphasis on the medical audit component.
RECOMMENDATION: A medical audit should use data from regional or national databases to
expedite data collection, ensure completeness, and compare individual medical audit data with those
of a wide variety of other practices.
Collection of some data elements may be difficult because of patient mobility, performance of biopsies off
site, and physicians’ referral patterns and because some mammography facilities only do screening
mammography. For comparisons, a mammography facility needs data that match its patient population and
practice setting. Regional or national databases permit the collection of data from a wide geographic area
and comparison over a wide variety of practices. A regional or national mammography database may be
Page 1 of 10
reproduced by D. Eldridge, M.S.
implemented as a voluntary, cooperative program linked to standardized cancer registries in collaboration
with State or local health departments.
A regional or national mammography database (Table 6) is a computerized method of data collection that
includes the collection and analysis of medical audit data from individual practices and may include a
system for patient monitoring and tracking. Such a database may also include a population registry, which
would identify women not in the screening system.
However, before any such system can be implemented, patient confidentiality and protection of medical
audit information from discovery must be ensured. Current State peer review statutes protecting peer review
activities, such as collection of medical audit data, apply to inpatient facilities. In many States, there is
virtually no protection from discovery of quality assurance activities in ambulatory and outpatient settings,
where most mammography facilities function.
STRONG RECOMMENDATION: Certain essential raw data and derived data should be obtained
for a meaningful medical audit.
The raw (unprocessed) data recommended as minimum requirements (Table 7) have been collected in most
major audits. They are necessary for proper calculation of the derived data (calculated from raw data) and
for meaningful analysis of all audit data.
The more complete data (those listed in Tables 8 and 9) have been useful as determinants of prevalent vs.
incident cancer rates, predictive value of various mammographic findings, and significance of various risk
factors. It is recommended that these data be collected because they provide information about variables
that can cause considerable fluctuation in audit results. For example, if more initial than follow-up
mammograms are performed, the rate of cancers detected will be higher.
Before any calculations of derived data can be made, every mammogram should be categorized as follows:
*True Positive (TP). Cancer diagnosed within 1 year after biopsy recommendation based on abnormal
mammogram.
*True Negative (TN). No known cancer diagnosed within 1 year of normal mammogram.
*False Negative (FN). Cancer diagnosed within 1 year of a normal mammogram. Although other
definitions of false negative exist, this definition is the most widely applied.
*False Positive (FP). Three separate definitions are seen in the literature:
(1) Cases recalled for additional imaging evaluation of an abnormal finding on a screening
mammogram in which no cancer was found within 1 year or recalled cases not shown to be
malignant within 1 year.
(2) No known cancer diagnosed within 1 year after an abnormal mammogram and recommendation
for biopsy or surgical consultation.
(3) Benign disease found at biopsy within 1 year after an abnormal mammogram and
recommendation for biopsy or surgical consultation. This last definition of false positive is similar
to the ACR BI-RADS language.
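For illustration only, the four audit categories above can be sketched as a small helper function. The field names are hypothetical, and false positive definition (2) is assumed:

```python
def classify_case(biopsy_recommended: bool, cancer_within_1_year: bool) -> str:
    """Assign a mammogram to one of the four audit categories.

    Assumes false positive definition (2): an abnormal mammogram with a
    recommendation for biopsy, but no known cancer diagnosed within 1 year.
    """
    if biopsy_recommended:
        return "TP" if cancer_within_1_year else "FP"
    return "FN" if cancer_within_1_year else "TN"
```

A practice using definition (1) or (3) would key the first branch on recall or on biopsy performed instead.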
The following derived data can then be calculated and will help to reflect the overall quality of a
mammography practice:
*Sensitivity. The probability of detecting a cancer when a cancer exists, or the percentage of all patients
found to have breast cancer within 1 year of screening who were correctly diagnosed at the screening
session.
Sensitivity = TP/(TP+FN)
*Positive predictive value (PPV). Three separate definitions may be applied, depending on the practice
conditions and based on the above three definitions of false positive. It is important to know which
definition of PPV is being used to accurately compare audit data from a mammography practice with
published data. For practices performing only screening mammography, only PPV1 will be of value in
evaluating data. For practices performing both screening and diagnostic mammography, all three
definitions of PPV can be applied.
*PPV1 (Abnormal). The percentage of all screening mammography cases that result in a
diagnosis of cancer based on abnormal screening examination.
PPV1 = TP/(number of abnormal screening exams), or
PPV1 = TP/(TP+FP1).
*PPV2 (Biopsy recommended). The percentage of all screening mammography cases that result in
a diagnosis of cancer based on a recommendation of consideration for biopsy.
PPV2 = TP/(number of cases recommended for biopsy after abnormal screening exams), or
PPV2 = TP/(TP+FP2).
*PPV3 (Biopsy done). The percentage of all screening mammography cases that result in a
diagnosis of cancer based on biopsies performed. This is also known as the biopsy yield of
malignancy, or the positive biopsy rate.
PPV3 = TP/(number of biopsies), or
PPV3 = TP/(TP+FP3).
*Specificity. The probability of a normal mammogram report when no cancer exists, or the percentage of
all patients not found to have breast cancer within 1 year of screening who were correctly identified as
normal at the time of screening.
Specificity = TN/(FP+TN).
Some variation in the range of specificity will exist depending on the definition of false positive
being applied, but the variation will be small because of the very small number of false positives
and the very large number of true negatives in most audit series.
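As a sketch under the definitions above, the derived measures reduce to a few ratios. A single false-positive count stands in for whichever FP definition a practice has chosen:

```python
def derived_stats(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Derived audit measures from raw category counts.

    The fp count must come from the same false-positive definition as
    the PPV being reported (PPV1, PPV2, or PPV3 above).
    """
    return {
        "sensitivity": tp / (tp + fn),   # TP/(TP+FN)
        "ppv": tp / (tp + fp),           # TP/(TP+FPi)
        "specificity": tn / (fp + tn),   # TN/(FP+TN)
    }
```

With invented counts of 90 TP, 10 FN, 270 FP2, and 9,630 TN, this yields a sensitivity of 90% and a PPV2 of 25%, both within the ranges discussed below.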
*Cancer detection rate. The number of cancers detected per 1,000 women screened by mammography.
This number should be available in all basic audits.
The following cancer detection rates can be calculated only if the appropriate raw data are collected.
Although not essential to a basic audit, they provide valuable information and should be calculated when
possible:
1. Prevalent vs. incident cancer rates, that is, rates of cancer in first time vs. follow-up exams.
2. Cancer detection rates in diagnostic exams.
3. Cancer detection rates by age group.
Separate sensitivities, PPVs, and specificities can be calculated for each of these three subgroups, yielding
yet another stratum of useful audit information.
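These per-1,000 rates are simple to compute once exams are stratified. The counts below are invented purely to show the arithmetic:

```python
def rate_per_1000(cancers_found: int, women_screened: int) -> float:
    """Cancers detected per 1,000 women screened."""
    return 1000 * cancers_found / women_screened

# Invented counts, stratified as in subgroup 1 above
prevalent_rate = rate_per_1000(8, 1000)   # first-time exams: 8 per 1,000
incident_rate = rate_per_1000(9, 3000)    # follow-up exams: 3 per 1,000
```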
RECOMMENDATION: To improve their interpretive skills, interpreting physicians should keep
track of the results of the breast biopsies they have recommended. They should attempt to increase
the positive predictive value of biopsied mammographic abnormalities without decreasing the
number of cancers detected.
Excessive biopsies are reflected by a low positive predictive value of biopsied mammographic
abnormalities, which can result in increased costs of screening mammography programs, decreased use of
screening mammography, decreased compliance with recommendations for necessary biopsies by referring
health care providers and women, and avoidable patient anxiety and discomfort or pain. The most
appropriate positive biopsy rate for abnormalities detected by mammography has not been determined.
Reported positive predictive values of biopsied mammographic abnormalities range from 10 to 40 percent,
even for experienced interpreting physicians; positive biopsy rates of 25-40 percent have been
recommended as appropriate.
Interpreting physicians should review mammograms and results of recommended biopsies to determine the
positive predictive value of a biopsied mammographic abnormality for individual radiologists. These rates
would be kept in logs or computer databases to evaluate cumulative results, changes over time, and the
effects of activities to improve the accuracy of interpretations.
-Data Analysis-
RECOMMENDATION: To achieve optimal benefit from a mammography audit, the data should be
analyzed from many perspectives. The analysis should use sensitivity (if and when measurable),
positive predictive value, tumor size, node positivity, cancers found per 1,000 women screened, and
recall rate.
RECOMMENDATION: Audit data should be analyzed not only for a mammography interpreter
group as a whole, but also for each individual within that group. If available, false-negative cases
should be evaluated individually to assess cause, thus further improving future outcomes.
Mammography interpreters should strive to reach the desirable numbers achieved by highly skilled experts
in the field of mammography (shown in table 10). Data must be analyzed to answer the three questions that
most directly measure the quality of a mammographer’s performance:
(1) Is the mammographer finding a high percentage of the cancers that exist in that particular
patient population? That is, what are the sensitivity and number of cancers found per 1,000
women screened?
(2) Are these cancers being found while the numbers of biopsies and requests for further imaging
evaluation of screened cases remain acceptably low? That is, what is the PPV and what is the
recall rate?
(3) Are a large proportion of the cancers being found still small and confined to the breast? That
is, what are the numbers of minimal (invasive cancer is less than or equal to 1 cm or in situ ductal
carcinoma) and node-negative cancers found?
Derived data are required to answer the above questions. An analysis of derived data found in the scientific
literature yielded the following:
*Sensitivity. The range of sensitivity (as defined above) in most recent mammography audits is 85-90%.
Sensitivity may vary with patient age and appears to decrease in populations with denser breast tissue. It is
among the most difficult data to obtain because their calculation requires knowledge of the number of false
negatives, information usually obtainable only through a computer linkage with a regional tumor registry. It
may be necessary instead to estimate the number of false-negatives to obtain an approximation of
sensitivity.
*PPV. This number is almost always measurable, using one or more of the above definitions. PPV2
(25-40% in most reported series) is most frequently used. Lower PPVs are acceptable when associated with
greater numbers of early-stage disease and node-negative cases. If a facility performs only screening
mammography, PPV1 (5-10% in most reported series) should be applied instead. Facilities performing both
screening and diagnostic mammography may find calculations of both PPV1 and PPV2 to be valuable.
Separate PPV statistics should be maintained for cases where core or fine needle aspiration biopsy is
recommended. PPV is subject to many variables, including age distribution, percentage of palpable
cancers, cancer detection rate, the size and node positivity of cancers found, and sensitivity.
*Tumor size. In most series, more than 50% of tumors diagnosed by mammography are stage 0 or 1 and
more than 30% of cancers diagnosed by mammography are minimal cancers (invasive cancer less than or
equal to 1 cm or in situ ductal carcinoma). Because mortality from breast cancer is directly related to tumor
size, these values should be considered valuable goals. Tumor size varies with the percentage of screening
and diagnostic examinations in a mammography practice; tumors found by diagnostic examinations are
invariably larger.
*Node positivity. Tumor size should be correlated with node positivity, which in most series is less than
25%. Because mortality from breast cancer is directly proportional to node positivity, this value must also
be considered a desirable goal.
*Cancer detection rate. (Cancers found per 1,000 women screened by mammography). Rates of 2-10
cancers per 1,000 have been reported in most series. Variability is due to differing rates of detection in
first-time-screened vs. already-screened women (prevalent vs. incident cancers). Prevalent cancer rates
vary from 6 to 10 per 1,000 and incident cancer rates vary from 2 to 4 per 1,000. The cancer detection rate
also varies in younger vs. older women and in asymptomatic vs. symptomatic patients. Despite these
variables, the cancer rate serves as a relative threshold for abnormal. For example, if an audit shows that
sensitivity and PPV are both acceptably high but the number of cancers found is less than 2 per 1,000, the
sensitivity figure is suspect; the number of cancers eluding detection in that particular population is
probably still too great, and the overall quality of the mammography should be further evaluated.
*Recall rate. The percentage of patients for whom further imaging evaluation (coned compression views,
magnification views, sonography, etc.) is recommended can be used to calculate one of the definitions of
PPV (see above). This is useful for all facilities, especially for those performing screening mammography
only. A disproportionately high recall rate can negatively affect the cost-effectiveness and credibility of
mammography. In most large reported series, the recall rate is 10% or less. This value is desirable and
seems to decrease with increasing experience of the interpreting physician.
*Specificity. Specificity is a difficult measure of quality to obtain; it is not even calculated in many large
studies. When calculated, it is usually found to be greater than 90%. Calculating specificity is difficult in
that it requires knowledge of all true negatives. The number of true negatives is based on the number of
false negatives, which is the least accessible information in any audit.
Two other important issues regarding analysis of audit data should be noted:
*Group audit vs. individual audit. A group audit gives greater statistical power to results, which enables
comparison with overall expected rates. However, the multiple variables described earlier (prevalent vs.
incident cancers, age of a population, ratio of screening to diagnostic mammograms, etc.) markedly
influence group audit results and may render comparisons with other group audits less valuable than an
intragroup comparison of individual interpreting physicians’ results. A major advantage of an individual
audit is that it provides a valid relative comparison among individual group members. If the performance of
certain group members shows considerable variation in sensitivity and other standards, measures can be
taken to improve their performance and thus improve future outcomes.
*Review of false negatives. As noted, these cases may be difficult to identify unless a complete tumor
registry is available. However, if available, false-negative cases should be evaluated thoroughly to assess
cause (technical vs. interpretive error). A thorough and critical review of false-negative cases can benefit
group members and improve overall quality.
-Medicolegal Considerations-
RECOMMENDATION: Very few broadly drawn statutes protecting audit information from
discovery are in place at this time. Therefore, it is recommended that complete mammography audits
be maintained primarily as internal audits. Interpreting physicians should not disseminate the data
more widely without being aware of confidentiality legislation in their state.
Mammography facilities and interpreting physicians should be aware of State legislation pertaining to
release of medical audit information. All States have statutes that protect the records of hospital peer
review committees from discovery. However, few statutes exist that protect other information collected for
quality review, including outpatient information. Congress provided protection to participants in quality
control programs and created qualified immunity for quality assurance records in the military health care
system and the Department of Veteran Affairs. Efforts are underway to enact legislation to more fully
protect audit material outside the military setting. The panel urges that these efforts be maximized to
expedite rapid passage of protective legislation. Enactment of such legislation would encourage all
mammography facilities to participate in medical audit without fear of increasing their medicolegal liability
and would allow them to more completely fulfill the quality assessment standards requirements of the Joint
Commission on Accreditation of Healthcare Organizations, ACR, and MQSA. It would also enable them
to obtain audit information beneficial to themselves, the medical community, and their patients.
Table 6. Medical audit component of a regional or national mammography database
<>Identification of practice (confidential identification)
<>Anonymous identification number of mammography interpreters
<>Dates of audit period and number of examinations audited for each practice
<>Patient identification number (confidential identification) compatible with
database(s) enabling linkage, for example, to tumor registry
<>Patient date of birth
<>Type of mammogram (baseline or first-time, routine screen or diagnostic)
<>Interpretation of mammogram
<>Followup recommendation
<>Dates and results of followup studies, if any
<>If abnormality found, palpable vs. nonpalpable
<>Results of cytology, core needle biopsy, or open biopsy
<>Cancer data:
-Size of each breast cancer diagnosed
-Histology of each breast cancer diagnosed
-Lymph node status
-Pathological stage
-False-negative rate
-True-positive biopsy rate
-False positive rate
-Positive predictive value
-Overall cancer detection rate
-Date of patient’s death
If the database is to include the function of patient monitoring and tracking, the following items
comprise a minimum data set to be collected:
<>Patient’s name, including all prior names
<>Patient’s current address
<>Patient’s date of birth
<>Patient identification number compatible with database(s) enabling linkage, for
example, to tumor registry
<>Patient’s telephone number
<>Patient’s spoken language
<>Dates of prior mammograms
<>Location of prior mammograms
<>Date of followup studies, if any
<>Date to send patient/clinician next reminder about when next study is due
Table 7. The essential mammography audit: the minimum desired raw and derived data
A. Raw data (unprocessed data)
1. Dates of audit period and total number of mammography examinations in that period.
2. Number of screened women; number of diagnostic patients*.
3. Number of recommendations for further imaging evaluation.
4. Number of recommendations for biopsy or surgical consultation.
5. Biopsy results: malignant or benign (keep separate data for fine needle aspiration or core biopsy
cases).
6. Tumor staging: histologic type in situ (ductal) or invasive (ductal or lobular); grade; size; and
nodal status.
B. Derived data (calculated from raw data)
1. True positives.
2. False positives.
3. Positive predictive value
a. If a screening/diagnostic facility, can define any of three ways:
i. Based on abnormal screening examination.
ii. Based on recommendation for biopsy or surgical consultation.
iii. Based on result of biopsy.
b. If screening facility only, can define only one: based on abnormal screening
examination (i).
4. Screening cancer detection rate.
5. Percent of minimal cancers found**.
6. Recall rate
*Separate audit statistics should be maintained for asymptomatic women and symptomatic patients.
**Minimal cancer is invasive cancer less than or equal to 1 cm or in situ ductal carcinoma.
Table 8. The more complete mammography audit: raw data to be collected. (NOTE: Bolded items
indicate data desired for the essential mammography audit analysis.)
1. Dates of audit period and total number of examinations in that period (usually a 6- or 12-month
period).
2. Risk factors:
   a. Patient age at time of exam
   b. Breast cancer history: personal or family (especially pre-menopausal cancer in a first-degree
   relative: mother, sister, or daughter)
   c. *Hormonal replacement therapy
   d. *Previous biopsy-proven lobular cancer in situ or atypical cells
3. Number of mammograms: screening (asymptomatic) or diagnostic (clinical breast signs or
symptoms of possible abnormality or abnormal screening mammogram)
4. First-time examination or followup (repeat) study?
5. Mammographic interpretation and recommendation: try to conform to ACR terminology for
comparison purposes (Note: This terminology is now superseded by the FDA 0-6 categories. -DE,
2/00):
   a. Needs additional evaluation (A) – Additional mammographic views and/or
   ultrasonography or other procedures
   b. Negative (N) – Routine followup
   c. Benign finding (B) – Routine followup
   d. Probably benign finding (P) – Short-interval followup
   e. Suspicious finding (S) – Biopsy should be considered
   f. Highly suggestive of malignancy (M) – Appropriate action should be taken
6. Biopsy results: benign or malignant (keep separate data for fine needle aspiration or core biopsy
cases)
7. Cancer data
   a. Mammographic findings: mass, calcifications, indirect signs of malignancy, no
   mammographic signs of malignancy.
   b. Palpable or nonpalpable tumor.
   c. Tumor staging (pathologic): histologic type, grade, size, and nodal status.
Table 9. The more complete mammography audit: derived data to be calculated (NOTE: Bolded
items indicate data desired for the essential mammography audit analysis.)
1. True positives, false positives, true negatives, false negatives
2. Sensitivity
3. Positive predictive value
   a. Based on abnormal screening exam
   b. Based on recommendation for biopsy or surgical consultation
   c. Based on results of biopsy
4. Specificity
5. Cancer detection rate
   a. Screening cancer detection rate
   b. Prevalent vs. incident
   c. Overall
   d. Rates within various age groups
6. Percent of minimal cancers found*
7. Recall rate
*Minimal cancer is invasive cancer ≤ 1 cm or in situ ductal carcinoma.
Table 10. Analysis of medical audit data: desirable goals
PPV1 based on abnormal screening examination…………………………….5-10%
PPV2 when biopsy recommended (surgical, FNA, or core)…………………..25-40%
Tumors found-Stage 0 or 1…………………………………………….…..….>50%
Tumors found-Minimal cancer*…………………………….………….……..>30%
Node positivity………………………………………………………….…….<25%
Cancers found per 1,000 cases…………………………………………………2-10
Prevalent cancers found per 1,000 first-time examinations……………………6-10
Incident cancers found per 1,000 follow-up examinations…………………….2-4
Recall Rate…………………………………………………………………… ≤10%
Sensitivity (if measurable)………………………………………………….…>85%
Specificity (if measurable)…………………………………………………….>90%
*Minimal cancer is invasive cancer less than or equal to 1 cm or in situ ductal carcinoma.
NOTE: FNA = fine needle aspiration. PPV = positive predictive value.
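A minimal sketch of how a facility might screen its own derived data against the Table 10 goals. The goal ranges are transcribed from the table, with single-sided goals expressed as open-ended ranges; the statistic names are hypothetical:

```python
# Desirable-goal ranges from Table 10, as (low, high) fractions or rates.
TABLE_10_GOALS = {
    "ppv1": (0.05, 0.10),
    "ppv2": (0.25, 0.40),
    "node_positivity": (0.0, 0.25),
    "cancers_per_1000": (2.0, 10.0),
    "recall_rate": (0.0, 0.10),
    "sensitivity": (0.85, 1.0),
    "specificity": (0.90, 1.0),
}

def flag_outliers(results: dict) -> list:
    """Return the statistics that fall outside their desirable-goal range."""
    return [name for name, value in results.items()
            if not (TABLE_10_GOALS[name][0] <= value <= TABLE_10_GOALS[name][1])]
```

Any flagged statistic would then be reviewed against the variables discussed under Data Analysis (prevalent vs. incident mix, patient age, screening vs. diagnostic ratio) before drawing conclusions.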