Chapter 2
Comparative Effectiveness Research
George J. Chang
Abstract Comparative effectiveness research (CER) is a type of research involving
human subjects or data from them that compares the effectiveness of one preventive,
diagnostic, therapeutic, or care delivery modality to another. The purpose
of this research is to improve health outcomes by developing and disseminating
evidence-based information to patients, clinicians, and other decision-makers to
improve decisions that affect medical care. CER studies utilize a variety of
data sources and methods to conduct timely and relevant research that can be
disseminated in a quickly usable form to improve outcomes and value for health
care systems.
Keywords Comparative effectiveness research • Comparative effectiveness •
Effectiveness • Patient centered outcomes research
2.1 Introduction
Perhaps the greatest challenge confronting the United States (U.S.) health care
system is delivering effective therapies that provide the best health outcomes at
high value.
G.J. Chang, M.D., M.S. ()
Department of Surgical Oncology, University of Texas, MD Anderson Cancer Center, 1515
Holcombe Blvd, Houston, TX 77230-1402, USA
Colorectal Center, University of Texas, MD Anderson Cancer Center, 1515 Holcombe Blvd,
Houston, TX 77230-1402, USA
Minimally Invasive and New Technologies in Oncologic Surgery Program, University of Texas,
MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX 77230-1402, USA
e-mail: [email protected]
J.B. Dimick and C.C. Greenberg (eds.), Success in Academic Surgery: Health Services
Research, Success in Academic Surgery, DOI 10.1007/978-1-4471-4718-3__2,
© Springer-Verlag London 2014
Many of the most innovative treatments originate in the U.S., yet despite having
among the highest per capita spending on health care, the U.S. still lags in
important health outcomes for treatable medical conditions. Although
debate remains about how it should be performed, in the push for health care reform
there is agreement that addressing the gaps in quality and efficiency in U.S. health
care should be a top priority. Too often there is inadequate information regarding
the best course of treatment for a particular patient’s medical condition given that
patient’s characteristics and coexisting conditions. In order to aid in these decisions
for both individuals and groups of patients, knowledge regarding both the benefits
and harms of the treatment options is necessary. Furthermore it is important to
understand the priorities of the relevant stakeholders in these decisions so that
the evidence that is generated is relevant and can be applied. Thus comparative
effectiveness research (CER) focuses on addressing these gaps to improve health
care outcomes while improving value.
2.2 What Is Comparative Effectiveness Research?
Broadly speaking, comparative effectiveness research is a type of research involving
human subjects or data from them that compares the effectiveness of one preventive,
diagnostic, therapeutic, or care delivery modality to another. Over the past decade
the definition has evolved in part due to influence from policy makers to become
more specific. As defined by the Federal Coordinating Council for Comparative
Effectiveness Research (FCCCER), comparative effectiveness research is “the
conduct and synthesis of research comparing the benefits and harms of different
interventions and strategies to prevent, diagnose, treat, and monitor health conditions in ‘real world’ settings [1]. The purpose of this research is to improve health
outcomes by developing and disseminating evidence-based information to patients,
clinicians, and other decision-makers, responding to their expressed needs, about
which interventions are most effective for which patients under specific circumstances.” It has been further refined by legislation through section 6301 of The
Patient Protection and Affordable Care Act signed into law by President Obama in
2010. CER now pertains to research comparing the clinical effectiveness of “two or
more medical treatments, services, and items.” These treatments, services, and items
include “health care interventions, protocols for treatment, care management, and
delivery, procedures, medical devices, diagnostic tools, pharmaceuticals (including
drugs and biologicals), integrative health practices, and any other strategies or items
being used in the treatment, management, and diagnosis of, or prevention of illness
or injury in, individuals.”
Comparative effectiveness research focuses on the practical comparison of two
or more health interventions to discern what works best for which patients and
populations and to inform health-care decisions at both the individual patient
and policy levels. It seeks to provide evidence for the effectiveness, benefits, and
harms of different treatment options. This differs from many traditional approaches
to treatment studies that have prioritized the efficacy of treatment without equal
emphasis on the potential harms and the associated burden to the patient or society.
In contrast to efficacy, which is typically assessed under ideal circumstances within
a stringent randomized study, effectiveness refers to measurement of the degree of
benefit in “real world” clinical settings. Unlike efficacy studies, CER extends
beyond just knowledge generation to determine the best way to disseminate this
new evidence in a usable way for individual patients and their providers.
The scope of CER is broad and refers to the health care of both individual
patients and populations. CER is highly relevant to healthcare policy and studies of
organizations, delivery, and payment of health services. Indeed, 50 of the 100 priority
topics recommended by the Institute of Medicine (IOM) in its Initial National
Priorities for Comparative Effectiveness Research report related to comparing some
aspect of the health care delivery system [3]. These topics relate to how or where
services are delivered, rather than which services are rendered.
Furthermore, in its scope, CER values stakeholder engagement (patients,
providers, and other decision makers) throughout the CER process. Its overarching
goal is to improve the ability of patients, providers, and policy makers to make
healthcare decisions that affect individual patients and to determine what works
best and for whom. CER measures both benefits and harms and the effectiveness
of interventions, procedures, regimens, or services in “real-world” circumstances.
This is an important distinction between traditional clinical research and CER, in that
CER places high value on external validity, or the ability to generalize the results
to real-world decisions. Another important feature of CER is the employment of a
variety of data sources and the generation of new evidence through observational
studies and pragmatic clinical trials, or those with more practical and generalizable
inclusion criteria and monitoring requirements [7, 8].
2.3 Why Comparative Effectiveness Research?
As surgeons, perhaps one of the most compelling arguments for CER can be made
in examining the centuries-old practice of bloodletting. While the lack
of benefit and the associated harms seem obvious today, it wasn’t until Scottish
surgeon Alexander Hamilton performed a comparative effectiveness study in 1809
that the harms of the practice were clearly identified. In this pragmatic clinical
trial, sick soldiers were admitted to the infirmary, where one surgeon employed “the
lancet” as frequently as he wished while two others were prohibited from bloodletting.
Mortality was ten-fold higher among the soldiers assigned to the bloodletting
service (30 %) than among those on the services where bloodletting was prohibited (3 %). While
this may be quite an extreme example, there are many practices today that are
performed with only marginal benefits that may be outweighed by the harms,
or with no clear evidence of benefit. This problem was highlighted in the 2001 IOM
report Crossing the Quality Chasm that concluded that the U.S. health care delivery
system does not provide consistent, high-quality medical care to all people and
that a chasm exists between what health care we now have and what we could
(should) have [4]. The IOM further identified aims for improvement and rules for
health care systems redesign. In summary, the goal is to provide safe, effective,
patient-centered, efficient, and equitable care that is evidence-based, coordinated,
and without waste. What physician, at least fundamentally, does not share these
goals? Yet it is clear that either sufficient evidence or the mechanisms for translating
that evidence into practice to cross the chasm are lacking.
For policy makers CER has become an important priority in an effort to identify
ways to address the rising cost of health care. The unsustainability of the rising costs
of U.S. health care has been widely recognized. With an aging population, resource
use-based reimbursement, and advancing medical technology, health care spending
accounted for 17.9 % of the U.S. Gross Domestic Product in 2012 and is projected
to rise to 19.9 % by 2022. The rising costs are also driven by widespread variation
in practice patterns and the use of new therapies leading to system waste due to the
lack of high-quality evidence regarding treatment options. In fact the IOM estimates
that fewer than 50 % of treatments delivered today are supported by evidence and
that as much as 30 % of health care dollars are spent on medical care of uncertain
value. It is imperative, therefore, to understand the incremental value of medical
treatments in diverse, real-world patient populations to identify and promote the use
of the most effective treatments and discourage or eliminate the use of ineffective
treatments. Comparative effectiveness research (CER) hopes to address this need.
2.4 Investments and Activities in Comparative Effectiveness
Research
Comparative effectiveness research is not a new idea. The principles of CER
applied to improve the quality and maximize the value of health care services have
been in place for nearly 50 years. Efforts began with the U.S. Congress Office
of Technology Assessment created in 1972 and abolished by Congress in 1995
(Fig. 2.1). The Agency for Health Care Policy and Research, later the Agency for
Healthcare Research and Quality (AHRQ), initially focused on developing guidelines for
clinical care but subsequently expanded its scope with the Medicare Modernization
Act (MMA) of 2003 that ensured funding for CER. More recently efforts in CER
have grown thanks in part to a greater federal emphasis on identifying value in
health care through CER. In 2009 the American Recovery and Reinvestment Act
(ARRA) provided $1.1 billion in research support for CER through the AHRQ to
identify new research topics, evidence, gaps, and develop pragmatic studies and
registries; the National Institutes of Health (NIH) to fund “challenge grants” and
“grand opportunity” grants to address the Institute of Medicine (IOM) CER priority
research areas; and the Department of Health and Human Services (HHS) to fund
infrastructure, collection, and dissemination of CER. ARRA thus established the
15-member Federal Coordinating Council for Comparative Effectiveness Research
(FCCCER), composed of senior representatives of several federal agencies to
Fig. 2.1 Timeline representing the evolution of comparative effectiveness research in each of its
forms within the U.S.
coordinate research activities and also allocated $1.5 million to the IOM to investigate
and recommend national CER priorities through extensive stakeholder engagement.
The FCCCER report in its strategic framework identified four core categories for
investment in CER: (1) research (generation or synthesis of evidence); (2) human
and scientific capital (to train new researchers in CER and further develop its
methods); (3) CER data infrastructure (to develop Electronic Health Records and
practice based data networks); and (4) dissemination and translation of CER (to
build tools and methods to disseminate CER findings to clinicians and patients to
translate CER into practice). Furthermore, it recommended that the activities be
related to themes that cut across the core categories.
While there have been many public sector activities in CER including those
funded by the AHRQ, NIH, the Veterans Health Administration (VHA), and the
Department of Defense (DoD), until recently it has not been possible to estimate the
total number of CER studies funded due to the lack of a standard, systematic means
for reporting CER across the funding agencies. Additionally, although a number of public
and private sector organizations have been engaged in CER, much of this work has been
fragmented and not aligned with a common definition or set of priorities for CER,
resulting in numerous gaps in the research being conducted.
Thus the Patient Protection and Affordable Care Act of 2010, commonly known
as the Affordable Care Act (ACA), established the Patient-
Centered Outcomes Research Institute (PCORI) to be the primary agency to oversee
and support the conduct of CER. The ACA was enacted with provisions for up to
$470 million per year of funding for Patient-Centered Outcomes Research (PCOR),
which includes greater involvement by patients, providers, and other stakeholders
in CER. PCORI is governed by a 21-member board and has supplanted the
FCCCER. The research that PCORI supports should improve the quality, increase
transparency, and increase access to better health care [2]. However, the creation
and use of cost-effectiveness thresholds or calculations of quality adjusted life
years are explicitly prohibited. There is also specific language in the act that the
reports and research findings may not be construed as practice guidelines or policy
recommendations and that the Secretary of HHS may not use the findings to deny
coverage, reflecting political fears that the research findings could be used to
ration health care.
CER has also been a national priority in many countries including the UK
(National Institute for Health and Clinical Excellence), Canada (Canadian
Agency for Drugs and Technologies in Health), Germany (Institute for
Quality and Efficiency in Health Care), and Australia (Pharmaceutical Benefits
Advisory Committee), to name just a few.
2.5 CER and Stakeholder Engagement
A major criticism of prior work in clinical and health services research, and a potential
explanation for the gap in practical knowledge and its limited translation to real-world
practice, is that studies have failed to maintain sustained and meaningful engagement
of key decision-makers in both the design and implementation of the studies.
Stakeholder engagement is felt to be a critical element for researchers to understand
what clinical outcomes matter most to patients, caregivers, and clinicians in order to
design “relevant” study endpoints that improve knowledge translation into usual
care. Stakeholders may include patients, caregivers, providers, researchers, and
policy-makers.
The goal of this emphasis on stakeholder engagement is to improve the dissemination and translation of CER. By involving stakeholders in identifying the
key priorities for research, the most relevant questions are identified. In fact this
was a major activity of the IOM when it established the initial CER priorities [5].
Now PCORI engages stakeholders in a similar fashion to identify new questions
that are aligned with its five National Priorities for Research: (1) assessing options
for prevention, diagnosis, and treatment; (2) improving health care systems; (3)
addressing disparities; (4) communicating and disseminating research; and (5)
improving patient-centered outcomes research methods and infrastructure [6].
Engagement of stakeholders can also help identify the best ways to disseminate
and translate knowledge into practice.
2.6 Types of CER Studies
Comparative effectiveness research requires the development, expansion, and use
of a variety of data sources and methods to conduct timely and relevant research
and disseminate the results in a form that is quickly usable by clinicians, patients,
policymakers, and health plans and other payers. The principal methodologies
employed in CER include randomized trials (experimental study), observational
research, data synthesis, and decision analysis [7]. These methods can be used to
generate new evidence, evaluate the existing evidence about the benefits
and harms of each choice for different patient groups, or to synthesize the existing
data to generate new evidence to inform choices. CER investigations may be based
on data from clinical trials, clinical studies, or other research. As more detailed
coverage of specific research methods is provided in subsequent chapters of this
text, we will focus the discussion here on aspects particularly relevant to the
conduct of CER.
2.6.1 Randomized Trials
Randomized comparative studies represent perhaps the earliest form of comparative
effectiveness research for evidence generation in medicine. Randomized trials
generally provide the highest level of evidence to establish the efficacy of the
intervention in question and thus can be considered the gold standard of efficacy
research. Randomized trials also provide investigators with an opportunity to study
patient-reported outcomes and quality of life associated with the intervention, and
also to measure potential harms of treatment. However, traditional randomized
trials have very strict inclusion and exclusion criteria, are typically performed
after selection of the healthiest patients, and involve detailed and rigorous patient
monitoring and management that is not routinely performed in day-to-day patient
management. Thus while the traditional randomized controlled trial may assess the
efficacy of an intervention, the real-world effectiveness of the intervention when
performed in community practice may be quite different.
One of the main limits to generalizability in traditional randomized controlled
trials is the strict patient selection criteria designed to isolate the effect of the
intervention from confounding. Moreover, treatment in randomized controlled trials
often occurs in ideal clinical conditions that are not readily replicated during real-world patient care. Some of this difference stems from the fact that traditional
randomized trials are often used for novel therapy development and for drug
registration or label extension. In contrast, the goals of CER trials are to
compare the effectiveness of various existing treatment options, to identify patient
and tumor subsets most likely to benefit from interventions, to study screening
and prevention strategies, and to focus on survivorship and quality of life. The
results of CER trials should be generalizable to the broader community and
easily disseminated for broad application without the stringent criteria inherent in
traditional randomized trials.
A number of alternative, non-traditional trial designs may be considered for CER
that overcome some of the limitations outlined above. In Cluster Randomized
Trials, the randomization is by group rather than by individual patient. Implementation
of a single intervention at each site improves external validity, as patients are
treated as they would be in the real world and there is less risk of contamination across the arms.
Statistical methods such as hierarchical models must be used to adjust for cluster
effects, effectively reducing the statistical power compared to studies with individual
randomization. Pragmatic Trials are highly aligned with the goals of CER as they
are performed in typical practice and in typical patients with eligibility criteria
designed to be inclusive. The study patients have the typical comorbid diseases and
characteristics of patients in usual practice. In keeping with the practical nature and
intent of pragmatic trials, data collection is tailored to only the most pertinent and
easily assessed or adjudicated outcomes. While these trials have good
internal (individual randomization) and external validity, the lack of complete
data collection precludes meaningful subsequent subgroup analysis for evaluation
of treatment heterogeneity. Adaptive Trials change in response to the accumulating
data by utilizing the Bayesian framework to formally account for prior knowledge.
Key design parameters change during the execution based upon predefined rules and
accumulating data from the trial. Adaptive designs can improve the efficiency of
the study and allow for more rapid completion. But adaptive designs have limitations,
particularly with respect to the potential for type I error. Sample size estimation
can thus be complex and requires careful planning and adjustment of the statistical
analyses.
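
To make the cluster-randomization tradeoff concrete, the short sketch below illustrates the standard design-effect calculation, DE = 1 + (m - 1) x ICC, used to inflate an individually randomized sample size for a cluster randomized trial; the cluster size, intraclass correlation, and baseline sample size are assumed values for illustration only.

```python
# Hypothetical illustration: inflate an individually randomized sample size
# for a cluster randomized trial using the standard design effect,
# DE = 1 + (m - 1) * ICC, where m is the average cluster size and ICC is
# the intraclass correlation coefficient. All numbers are assumptions.

def design_effect(cluster_size: float, icc: float) -> float:
    """Variance inflation factor for cluster randomization."""
    return 1.0 + (cluster_size - 1.0) * icc

n_individual = 400        # subjects per arm under individual randomization (assumed)
avg_cluster_size = 25     # average patients enrolled per site (assumed)
icc = 0.05                # assumed intraclass correlation for the outcome

de = design_effect(avg_cluster_size, icc)
n_cluster_trial = n_individual * de
clusters_per_arm = n_cluster_trial / avg_cluster_size

print(f"Design effect: {de:.2f}")
print(f"Subjects per arm needed: {n_cluster_trial:.0f} "
      f"(~{clusters_per_arm:.0f} clusters of {avg_cluster_size})")
```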
2.6.2 Observational Studies
While well-controlled experimental efficacy studies optimize internal validity, this
often comes at the price of generalizability and the therapies studied may perform
differently in general practice. Furthermore the patients for whom the benefits of
therapy may be the most limited (patients who are elderly or have many comorbid
conditions) are the least likely to be enrolled in the randomized trials. On the other
hand, observational studies use data from patient care as it occurs in real life.
Observational studies use data from medical records, insurance claims, surveys,
and registry databases. Although observational studies have the intrinsic benefit of
being resource efficient and well suited to CER, because the exposures or treatments
are assigned not by the investigator but rather by routine practice considerations,
threats to internal validity must be considered in the interpretation of the observed
findings.
A central assumption in observational CER studies is that the treatment groups being
compared have the same underlying risk for the outcome apart from the intervention itself.
Of course, only in a randomized trial is this completely possible. However, because
observational studies use data collected in real life without a priori intent for
the CER study, issues of bias are significant concerns, principally related to
confounding by indication in intervention studies and confounding by frailty in
prevention studies. Unfortunately, it is difficult to measure these effects, and therefore
a number of methods have been developed to handle these issues.
2.6.2.1 Measuring Associations and Managing Confounding
CER with observational data begins with descriptive statistics and graphical representation of the data to broadly describe the study subjects, assess their exposure to
covariates, and assess the potential for imbalances in these measures. Estimates of
treatment effects can then be determined by regression analysis.
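
As a simple illustration of this descriptive first step, the sketch below computes standardized mean differences between treatment groups to flag covariate imbalance; the data frame, covariate names, and threshold are hypothetical.

```python
# A minimal sketch of the descriptive first step described above: computing
# standardized mean differences (SMDs) between treatment groups to flag
# covariate imbalance before modeling. Data and column names are hypothetical.

import numpy as np
import pandas as pd

def standardized_mean_difference(x_treated: pd.Series, x_control: pd.Series) -> float:
    """SMD = difference in means divided by the pooled standard deviation."""
    pooled_sd = np.sqrt((x_treated.var(ddof=1) + x_control.var(ddof=1)) / 2.0)
    return (x_treated.mean() - x_control.mean()) / pooled_sd

# df is assumed to have a binary 'treatment' column and baseline covariates.
df = pd.DataFrame({
    "treatment": np.random.binomial(1, 0.4, 500),
    "age": np.random.normal(62, 10, 500),
    "comorbidity_count": np.random.poisson(2, 500),
})

treated, control = df[df.treatment == 1], df[df.treatment == 0]
for covariate in ["age", "comorbidity_count"]:
    smd = standardized_mean_difference(treated[covariate], control[covariate])
    print(f"{covariate}: SMD = {smd:.3f}")  # |SMD| > 0.1 is a common imbalance flag
```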
One increasingly common approach to cope with confounding for CER is
propensity score analysis. Propensity score analysis is mainly used in the context
of binary treatment choices to answer the question of why the patient was given
one treatment over another. It is defined as the probability of receiving the exposure
conditional on observed covariates and is estimated typically by logistic regression
models. The propensity score is the estimated probability or propensity for a patient
to receive one treatment over another. Patients with similar propensity scores may
be considered “similar” for the purpose of comparing treatment outcomes. The propensity
scores may be used for stratification, matching, or weighting, or included as a covariate
in a regression model for the outcome. Propensity models should include covariates that are either
confounders or are otherwise related to the outcome in addition to covariates that
are related to the exposure.
The distribution of propensity scores between the exposure groups may provide a
visual assessment of the risk for biased exposure estimates among cohorts with poor
overlap in propensity scores. Thus it is important, after propensity adjustment, that
balance in the study covariates between the exposure groups be carefully assessed.
One approach to using propensity scores to adjust for confounding is matching on
the propensity score. Good overlap in the propensity score distributions can facilitate
balance in the study covariates as is achieved in randomized treatment groups.
Use of propensity score adjustment can still result in residual imbalances in the
study covariates, while matching techniques reduce sample size and power. Furthermore,
one can only ensure that measured covariates are being balanced, and unmeasured
confounding may still need to be addressed. Stratification by quantiles also permits
comparisons among groups with heterogeneous response characteristics. Thus
propensity score analyses are well suited to CER because they attempt to model the
process of patient selection for therapy, focus on the treatment effect, and provide
insight into subgroups with heterogeneous response characteristics.
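
The following is a minimal sketch of one common propensity score workflow, estimating the score by logistic regression and applying stabilized inverse probability of treatment weights; the simulated data, variable names, and effect sizes are assumptions for illustration rather than a prescribed analysis.

```python
# A hypothetical sketch: estimate propensity scores with logistic regression and
# use them for inverse probability of treatment weighting (IPTW).

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(65, 10, n)
frailty = rng.normal(0, 1, n)
# Treatment assignment depends on covariates (confounding by indication).
treat = rng.binomial(1, 1 / (1 + np.exp(-(-0.03 * (age - 65) - 0.8 * frailty))))
# Outcome depends on treatment and on the same covariates.
outcome = 1 + 0.5 * treat - 0.02 * age - 0.7 * frailty + rng.normal(0, 1, n)

X = sm.add_constant(pd.DataFrame({"age": age, "frailty": frailty}))
ps_model = sm.Logit(treat, X).fit(disp=False)
ps = ps_model.predict(X)                      # estimated propensity scores

# Stabilized IPTW weights: treated weighted by P(treat)/ps, controls by (1-P)/(1-ps).
p_treat = treat.mean()
weights = np.where(treat == 1, p_treat / ps, (1 - p_treat) / (1 - ps))

# Weighted outcome regression on treatment alone estimates the marginal effect.
wls = sm.WLS(outcome, sm.add_constant(treat), weights=weights).fit()
print(wls.params)   # coefficient on treatment ~ 0.5 if confounding is controlled
```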
Disease risk scores (DRS) are similar to propensity scores in that they are
summary measures derived from the observed values of the covariates. The DRS estimates
the probability or rate of disease as a function of the covariates. It can be calculated
as a regression over the “full cohort,” also known as the multivariate confounder
score. This regression relates the study outcome to the exposure and covariates for the entire study
population. The resultant DRS is then derived for each individual subject and
stratified in order to obtain a stratified estimate of the exposure effect. The
DRS regression model can also be developed from the “unexposed cohort” only,
with the fitted values then determined for the entire cohort. The DRS method is
particularly favorable in studies with a common outcome and a rare exposure or
multiple exposures.
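
A hedged sketch of the “unexposed cohort” DRS approach is shown below: the outcome model is fit among unexposed subjects only, fitted values are derived for the entire cohort, and the exposure effect is then estimated within DRS strata; all data and variable names are simulated for illustration.

```python
# Hypothetical "unexposed cohort" disease risk score (DRS) sketch.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(60, 12, n),
    "diabetes": rng.binomial(1, 0.3, n),
})
df["exposure"] = rng.binomial(1, 0.05, n)                       # rare exposure
logit_risk = -4 + 0.04 * df.age + 0.8 * df.diabetes + 0.5 * df.exposure
df["event"] = rng.binomial(1, 1 / (1 + np.exp(-logit_risk.to_numpy())))

# 1. Fit the outcome model in the unexposed cohort only.
unexposed = df[df.exposure == 0]
X_unexp = sm.add_constant(unexposed[["age", "diabetes"]])
drs_model = sm.Logit(unexposed["event"], X_unexp).fit(disp=False)

# 2. Derive the fitted DRS for the entire cohort and cut into quintiles.
df["drs"] = drs_model.predict(sm.add_constant(df[["age", "diabetes"]]))
df["drs_stratum"] = pd.qcut(df["drs"], 5, labels=False)

# 3. Estimate the exposure effect adjusting for DRS stratum.
X = pd.get_dummies(df["drs_stratum"], prefix="drs", drop_first=True).astype(float)
X["exposure"] = df["exposure"]
fit = sm.Logit(df["event"], sm.add_constant(X)).fit(disp=False)
print(np.exp(fit.params["exposure"]))   # stratum-adjusted odds ratio
```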
Another approach to managing incomplete information or potentially unmeasured confounders in CER is instrumental variable analysis. An “instrument”
is an external cause of the intervention or exposure but is by itself unrelated to
the outcome. An important assumption, the so-called “exclusion restriction,” is that the
instrument does not affect the outcome except through the treatment. Even if there is
unmeasured confounding, the effect of the instrument on the treatment, combined with its
lack of direct effect on the outcome, can be used to essentially recreate the effect of
“randomization.” After the population is divided into subgroups according to the value
of the instrumental variable, the rates of treatment in the groups will differ. Because
the probability of treatment is not determined by individual characteristics, comparing
outcomes between groups that have different values of the instrumental variable is
analogous to comparing groups that are randomized with different probabilities of
receiving the intervention.
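
The sketch below illustrates this logic with a manual two-stage least squares estimate, using distance to a specialty center as an assumed instrument in simulated data; a production analysis would use dedicated instrumental variable software to obtain correct standard errors.

```python
# Hedged illustration of instrumental variable logic via manual two-stage least
# squares (2SLS) on simulated data. "Distance" is an assumed instrument: it
# influences treatment receipt but (by assumption) affects the outcome only
# through treatment. Point estimates only; standard errors from a manual second
# stage are not valid and are omitted here.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 10000
unmeasured = rng.normal(0, 1, n)                  # confounder we cannot observe
distance = rng.uniform(0, 1, n)                   # instrument
treat_prob = 1 / (1 + np.exp(-(1.5 - 3 * distance + 1.0 * unmeasured)))
treatment = rng.binomial(1, treat_prob)
outcome = 2.0 + 1.0 * treatment + 1.5 * unmeasured + rng.normal(0, 1, n)

# Naive regression is biased because 'unmeasured' drives both treatment and outcome.
naive = sm.OLS(outcome, sm.add_constant(treatment)).fit()

# Stage 1: predict treatment from the instrument.
stage1 = sm.OLS(treatment, sm.add_constant(distance)).fit()
treatment_hat = stage1.predict(sm.add_constant(distance))

# Stage 2: regress the outcome on the predicted treatment.
stage2 = sm.OLS(outcome, sm.add_constant(treatment_hat)).fit()

print(f"True effect: 1.0, naive: {naive.params[1]:.2f}, IV: {stage2.params[1]:.2f}")
```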
The choice of approach for coping with confounding should be determined by
the characteristics and availability of the data and there may be situations where
multiple analytic strategies should be utilized.
2.6.3 Research Synthesis
Approaches to research synthesis include systematic reviews, meta-analysis, and
technology assessments. Each of these methods relies upon the use of rigorous
methods to collect, evaluate, and synthesize studies in accordance with explicit
and structured methodology, some of which are outlined in AHRQ methods guides
and by the IOM. There is considerable overlap between the methods of research
synthesis for CER and for traditional evidence-based medicine (EBM). However, a
central priority of CER evidence synthesis is the focus on the best care options in
the context of usual care. Stakeholder input (e.g. citizen panels) is often solicited
to select and refine the questions to be relevant to daily practice and for the
improvement of the quality of care and system performance. Finally, CER studies
cast a broad net with respect to the types of evidence considered, with high-quality,
highly applicable evidence about effectiveness at the top of a hierarchy that may include
pragmatic trials and observational studies.
Systematic reviews in CER should have a pre-specified plan and analytic
framework for gathering and appraising the evidence that sets the stage for the
qualitative evidence synthesis. Stakeholder input may again be solicited to refine the
analysis. If the studies lend themselves to a quantitative synthesis, a meta-analysis
can provide a direct summary with a pooled relative risk. However, underlying
heterogeneity of the studies can lead to exaggeration of the findings and is an
important potential pitfall. By definition, CER reviews may include a broad range
of study designs, not just randomized controlled trials, and the risk for amplification
of bias and confounding must be carefully examined before quantitative synthesis.
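
For illustration, the sketch below pools relative risks from a handful of invented studies using inverse-variance weights and the DerSimonian-Laird estimate of between-study heterogeneity (tau^2); all study counts are hypothetical.

```python
# Minimal random-effects meta-analysis of relative risks with the
# DerSimonian-Laird heterogeneity estimator. Study data are invented.

import numpy as np

# Each tuple: (events_treated, n_treated, events_control, n_control).
studies = [(12, 100, 20, 100), (30, 250, 45, 240), (8, 80, 15, 85), (22, 300, 40, 310)]

log_rr, var_log_rr = [], []
for a, n1, c, n0 in studies:
    rr = (a / n1) / (c / n0)
    log_rr.append(np.log(rr))
    var_log_rr.append(1/a - 1/n1 + 1/c - 1/n0)   # approximate variance of log RR

log_rr, var_log_rr = np.array(log_rr), np.array(var_log_rr)

# Fixed-effect (inverse variance) weights, then DerSimonian-Laird tau^2.
w = 1 / var_log_rr
pooled_fixed = np.sum(w * log_rr) / np.sum(w)
q = np.sum(w * (log_rr - pooled_fixed) ** 2)
df = len(studies) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects weights incorporate between-study heterogeneity tau^2.
w_re = 1 / (var_log_rr + tau2)
pooled_re = np.sum(w_re * log_rr) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

print(f"Pooled RR: {np.exp(pooled_re):.2f} "
      f"(95% CI {np.exp(pooled_re - 1.96*se_re):.2f}-{np.exp(pooled_re + 1.96*se_re):.2f}), "
      f"tau^2 = {tau2:.3f}")
```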
2.6.4 Decision Analysis
Decision analysis is a method for model-based quantitative evaluation of the
outcomes that result from specific choices in a given situation. It is inherently
CER in that it is applied to the classic question of “which choice of treatment is right
for me?” It involves evaluating a decision that considers the benefits and harms
of each treatment choice for a given patient. For example, a patient may be faced with
the decision between two surgical treatment options, one that has a lower risk for
recurrent disease but a greater impact on function and another that has a higher risk
for recurrent disease but a lower impact on function. Similarly, a decision-analytic
question can be framed for groups of patients. Ultimately, the answer to the question
will depend on the probability of each outcome and the patient’s subjective value of
that outcome. The decisions that are faced thus involve a tradeoff: for example, a
procedure may have a higher risk for morbidity but a lower risk for recurrent
disease, and the evaluation of this tradeoff permits individualization of treatment.
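
A minimal decision-analytic sketch of this tradeoff is shown below; the recurrence probabilities, functional-outcome probabilities, and utility values are invented for illustration and would in practice come from the literature and from patient preference elicitation.

```python
# Hypothetical expected-utility comparison of two surgical options: one with a
# lower recurrence risk but greater functional impact, and one with a higher
# recurrence risk but better function. Recurrence and functional outcome are
# treated as independent here purely for simplicity; all values are assumptions.

# Utilities on a 0-1 scale (1 = perfect health).
utilities = {
    "no_recurrence_good_function": 0.95,
    "no_recurrence_poor_function": 0.75,
    "recurrence_good_function": 0.55,
    "recurrence_poor_function": 0.40,
}

options = {
    # Option A: radical resection - lower recurrence risk, greater functional impact.
    "radical_resection": {"p_recurrence": 0.10, "p_poor_function": 0.60},
    # Option B: organ-preserving approach - higher recurrence risk, better function.
    "organ_preserving": {"p_recurrence": 0.25, "p_poor_function": 0.10},
}

for name, p in options.items():
    expected_utility = (
        (1 - p["p_recurrence"]) * (1 - p["p_poor_function"]) * utilities["no_recurrence_good_function"]
        + (1 - p["p_recurrence"]) * p["p_poor_function"] * utilities["no_recurrence_poor_function"]
        + p["p_recurrence"] * (1 - p["p_poor_function"]) * utilities["recurrence_good_function"]
        + p["p_recurrence"] * p["p_poor_function"] * utilities["recurrence_poor_function"]
    )
    print(f"{name}: expected utility = {expected_utility:.3f}")
```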
2.7 Conclusion
The need to improve value in our health care treatment options has highlighted CER
as a major discipline in health services research. The goal of CER is to generate
the knowledge to deliver the right treatment to the right patient at the right time
and thus aid patients, providers, and policymakers in making the best healthcare
decisions. It emphasizes practical comparisons and generates evidence regarding the
effectiveness, benefits, and harms of different treatment options. It utilizes a variety
of research methodologies including pragmatic trials and observational research to
assess real-world effects of treatment decisions and engages key stakeholders to
improve the way that knowledge is disseminated and translated into practice. The
challenge for the future will be to continue to develop the infrastructure, networks,
methodologies and techniques that will help to close the gap between the health care
that we have now and the health care that we could and should have.
References
1. Federal Coordinating Council for Comparative Effectiveness Research. Report to The President
and Congress. U.S. Department of Health and Human Services. 2009.
2. Fleurence R, Selby JV, Odom-Walker K, et al. How the Patient-Centered Outcomes Research Institute is
engaging patients and others in shaping its research agenda. Health Aff. 2013;32(2):393–400.
3. Iglehart JK. Prioritizing comparative-effectiveness research–IOM recommendations. N Engl J
Med. 2009;361(4):325–8.
4. IOM (Institute of Medicine). Crossing the quality chasm. Washington, DC: The National
Academies Press; 2001. www.nap.edu/catalog/10027.html.
5. IOM (Institute of Medicine). Initial national priorities for comparative effectiveness research.
Washington, DC: The National Academies Press; 2009. www.nap.edu.
6. Patient-Centered Outcomes Research Institute. National priorities for research and research
agenda [Internet]. Washington, DC: PCORI; [cited 2013 Oct 1]. Available from: http://www.
pcori.org/what-we-do/priorities-agenda/.
7. Sox HC, Goodman SN. The methods of comparative effectiveness research. Annu Rev Public
Health. 2012;33:425–45.
8. Tunis SR, Benner J, McClellan M. Comparative effectiveness research: policy context, methods
development and research infrastructure. Stat Med. 2010;29:1963–76.
Landmark Studies
• Iglehart JK. Prioritizing comparative-effectiveness research–IOM recommendations. N Engl J
Med. 2009;361(4):325–8.
• IOM (Institute of Medicine). Initial national priorities for comparative effectiveness research.
Washington, DC: The National Academies Press; 2009. www.nap.edu.
• Sox HC, Goodman SN. The methods of comparative effectiveness research. Annu Rev Public
Health. 2012;33:425–45.