Fundamental Principles of
Epidemiologic Study Design
F. Bruce Coles, D.O.
Assistant Professor
Louise-Anne McNutt, PhD
Associate Professor
University at Albany
School of Public Health
Department of Epidemiology and Biostatistics
What are the primary goals of the Epidemiologist?
To describe the frequency of disease
and its distribution
To assess the determinants and possible
causes of disease
To identify/develop effective interventions
to prevent or control disease
The Basic Question…
Are exposure and disease linked?
(Does the exposure cause the disease?)
[Cartoon: a dialogue between an epidemiologist and the "Great God of Epidemiology"]
Epidemiologist: Oh please, Great God of Epidemiology… What is the true effect of the exposure on the occurrence of the disease?
God: My son, what do you mean by the true effect? For who or whom do you want it? An individual? A population? If the latter, which one? So, you want a descriptive incidence proportion ratio? The value of a causal incidence proportion ratio can be different for different groups of people and for different time periods. It is not necessarily a biological constant, you know.
Epidemiologist: No… causal! I want to know the real effect… the true risk if a person is exposed versus if they are not exposed… a true relative risk… you know… What is the true effect of the exposure, isolated from all other possible causes, on the occurrence of the disease… in the entire population?
God: Comparing what two exposure levels?
Epidemiologist: Uh… Everyone exposed versus unexposed… and calculate it for one year…
God: What do you mean by exposed and unexposed? Exposed how much, for how long, and in what time period? There are a lot of different ways you could define exposed and unexposed… and each of the possible corresponding ratios can have a different true value, you know.
Epidemiologist: Why does it have to be so hard? I'll take any exposure, for any amount of time, versus no exposure at all. How's that?
God: My son, you want an absolute counterfactual… I'm only the God of Epi… not a miracle worker.
Epidemiologist: What? Please! Show me!
God: Okay.
Based upon: Maldonado G, Greenland S. Estimating causal effects. Int J Epidemiol 2002;31:422-29.
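The incidence proportion ratio (relative risk) at the heart of this exchange is mechanically simple once "exposed" and "unexposed" have been pinned down; the hard part the dialogue dramatizes is the definition, not the arithmetic. A minimal sketch with hypothetical counts:

```python
def risk_ratio(a, b, c, d):
    """Incidence proportion (risk) ratio from 2x2 table counts.

    a: exposed with disease     b: exposed without disease
    c: unexposed with disease   d: unexposed without disease
    """
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    return risk_exposed / risk_unexposed

# Hypothetical one-year cohort: 20/100 exposed vs 10/100 unexposed develop disease
print(risk_ratio(20, 80, 10, 90))  # 2.0
```

As the God of Epi warns, a different exposure definition or time period yields a different 2x2 table, and therefore a different "true" ratio.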
Counterfactual Analysis
“We may define a cause to be an object followed by another,
and where all the objects, similar to the first, are followed by
objects similar to the second. Or, in other words, where, if the
first object had not been, the second never had existed.”
David Hume, Philosopher (1748)
Counterfactual Model of Causation
E is a cause of D if…
under actual (factual) conditions, when
E occurs, D follows…
and…
if under conditions contrary to the actual
conditions (counterfactual), when E does
not occur, D does not occur
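A toy simulation can make the counterfactual contrast concrete. The potential outcomes below are invented: each person carries both a disease status if exposed and a status if unexposed, which is exactly what reality never lets us observe for the same person at the same time. In simulation, both are visible, so the true causal risk ratio can be computed directly:

```python
import random

random.seed(1)

# Hypothetical potential outcomes per person: disease status if exposed (y1)
# and if unexposed (y0). In reality only one of the two is ever observable.
population = [(random.random() < 0.3, random.random() < 0.1)
              for _ in range(10_000)]

# Counterfactual (causal) contrast: the SAME people under both conditions.
risk_if_exposed = sum(y1 for y1, _ in population) / len(population)
risk_if_unexposed = sum(y0 for _, y0 in population) / len(population)
causal_risk_ratio = risk_if_exposed / risk_if_unexposed
print(round(causal_risk_ratio, 2))  # ~3 by construction (risk 0.3 vs 0.1)
```

Real studies must substitute a separate unexposed group for the unobservable counterfactual, which is the move the next slides introduce.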
If we can turn back time…
we can answer the question!
But, Oh Great God of
Epidemiology… If I
can’t go back in time
and observe the
unobservable…what
CAN I do to
determine the cause
of a disease?
You must, my son, move
from the dream of
theoretical perfection to the
next best thing…
Substitution!!!
Once you clearly define
your study question,
choose a target population
that corresponds to that
question…
…then
choose a study
design and sample subjects
from that target population
to balance the tradeoffs in
bias, variance, and loss to
follow-up…
Epidemiologic Study Designs
• Experimental (Randomized Controlled Trials)
• Observational
– Descriptive
– Analytical: Case-Control, Cohort
(+ cross-sectional & ecologic)
Epidemiologic Study Designs
Descriptive Studies
Examine patterns of disease
Analytical studies
Studies of suspected causes of diseases
Experimental studies
Compare treatment or intervention modalities
Epidemiologic Study Designs
[Figure: classification of study designs — Grimes & Schulz, 2002]
Hierarchy of Epidemiologic Study Design
[Figure: evidence hierarchy of study designs — Tower & Spector, 2007]
When considering any etiologic
study, keep in mind two issues
related to participant (patient)
heterogeneity: the effect of
chance and the effect of bias…
We will descend the ladder of
perfection. So we begin with…
Randomized Controlled Trials
• Recommended to achieve a valid determination of
the comparative benefit of competing intervention
strategies:
- Prevention
- Screening
- Treatment
- Management
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
RCT: Participant Characteristics
• Continuum: healthy > elevated risk > precursor
abnormality (preclinical) > disease
• Prevention or Screening trial:
– drawn from “normal” healthy population
– may be selected due to elevated (“high”) risk
• Treatment or Management trial:
– clinical trials
– diseased patients
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
RCT: Phase Objectives
• Phase I
– Safety test: investigate dosage, route of administration, and
toxicity
– Usually not randomized
• Phase II
– Look for evidence of “activity” of an intervention, e.g., evidence
of tumor shrinkage, change in biomarker
– Tolerability
– May be small randomized, blinded or non-randomized
• Phase III
– Randomized design to investigate “efficacy,” i.e., under the most ideal
conditions
• Phase IV
– Designed to assess “effectiveness” of proven intervention in
wide-scale (“real world”) conditions
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
RCT: Phases
When we talk about a “clinical trial” of medications, we almost always
mean a Phase III clinical trial.
Efficacy vs. Effectiveness
• Efficacy – does the intervention work in tightly
controlled conditions?
– Strict inclusion/exclusion criteria
– Highly standardized treatments
– Explicit procedures for ensuring compliance
– Focus on direct outcomes
Efficacy vs Effectiveness
• Effectiveness – does the intervention work in ‘real world’
conditions?
– Looser inclusion/exclusion criteria
– Treatments carried out by typical clinical personnel
– Little or no provision for ensuring compliance
– Focus on less direct outcomes (e.g., quality of life)
RCT: Advantages
• investigator controls the predictor variable
(intervention or treatment)
• randomization controls unmeasured
confounding
• ability to assess causality much greater
than observational studies
RCT: Design

Study population → RANDOMIZATION:
  Intervention group → outcome / no outcome
  Control group → outcome / no outcome

time: baseline → future
Study begins here (baseline point)
RCT: Steps in study procedures
1. Select participants
– High-risk for outcome (high incidence)
– Likely to benefit and not be harmed
– Likely to adhere
• Pre-trial run-in period?
RCT: Pre-trial “Run-in” Period
• Pro
– Provides stabilization and baseline
– Tests endurance/reliability of subjects
• Con
– Can be perceived as too demanding
RCT: Steps in study procedures
2. Measure baseline variables
3. Randomize
– Eliminates baseline confounding
– Types:
• Simple – two-arm
• Stratified – multi-arm; factorial
• Block – group
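The contrast between simple and block randomization can be sketched in a few lines. This is an illustrative toy, not a production allocation system; the function names are my own, and in a real trial the sequence must be concealed from staff enrolling participants:

```python
import random

def simple_randomize(n, seed=42):
    """Simple two-arm randomization: an independent coin flip per subject.
    Group sizes are balanced only in expectation, not exactly."""
    rng = random.Random(seed)
    return [rng.choice(["intervention", "control"]) for _ in range(n)]

def block_randomize(n_blocks, block_size=4, seed=42):
    """Permuted-block randomization: each block holds equal numbers per arm,
    shuffled, so group sizes stay balanced throughout enrollment."""
    rng = random.Random(seed)
    assignments = []
    for _ in range(n_blocks):
        block = ["intervention", "control"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments

arms = block_randomize(n_blocks=5)   # 20 subjects in blocks of 4
print(arms.count("intervention"))    # 10 — exactly balanced by construction
```

Stratified randomization simply runs a scheme like this separately within each stratum (e.g., by site or by risk level).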
RCT: Steps in study procedures
4. Assess need for blinding the intervention
– Can be as important as randomization
– Eliminates:
• co-intervention
• biased outcome ascertainment
• biased measurement of outcome
5. Follow subjects
– Adherence to protocol
– Loss to follow-up
6. Measure outcome
– Clinically important measures
– Adverse events
RCT: Design Concepts (5 questions)
• Why is the study being done?
- the objectives should be clearly defined
- will help determine the outcome measures
- single primary outcome with limited secondary outcome
measures
• What is being compared to what?
- two-arm trial: experimental intervention vs nothing, placebo
- standard intervention, different dose or duration
- multi-arm
- factorial
- groups
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
RCT: Design Concepts (5 questions)
• Which type of intervention is being assessed?
- well-defined
- tightly controlled: new intervention
- flexible: assessing one already in use
- multifaceted?
• Who is the target population?
- eligibility
= restriction: enhance statistical power
by having a more homogeneous group, higher
rate of outcome events, higher rate of benefit
= practical consideration: accessible
- include:
= potential to benefit
= effect can be detected
= those most likely to adhere
- exclude:
= unacceptable risk
= competing risk (condition)
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
RCT: Design Concepts (5 questions)
• How many should be enrolled?
- ensure power to detect the intervention effect
- increase sample size to increase the precision of the
estimate of the intervention effect (decreases
variability, i.e., the standard error)
- subgroups
= requires increased sample size
= risk of spurious results increases with
a greater number of subgroup analyses
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
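The "how many" question is usually answered with a standard textbook formula for comparing two proportions. The sketch below uses the common normal-approximation formula; it is an illustration with hypothetical risks, not a substitute for a statistician's sample-size plan:

```python
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm to detect a difference between two
    outcome proportions p1 and p2 (two-sided test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical trial: detect a drop in one-year event risk from 20% to 10%
print(round(n_per_arm(0.20, 0.10)))  # ~199 per arm
```

The formula also shows why subgroup analyses inflate the required n: each subgroup must itself be large enough to detect the effect.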
RCT: Factorial Design
• answers more than one question by addressing more than
one comparison of interventions:
– Intervention A: with Intervention B / with Intervention not-B
– Intervention not-A: with Intervention B / with Intervention not-B
• important that the two interventions can be given
together (mechanisms of action differ)
- no serious interactions expected, or
- the interaction effect is itself of interest
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
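The "two questions from one trial" logic of a 2x2 factorial design can be sketched by randomizing each subject independently to A vs not-A and to B vs not-B (function name and numbers are hypothetical):

```python
import random

def factorial_assign(n, seed=7):
    """2x2 factorial randomization: each subject is independently
    randomized to A vs not-A AND to B vs not-B."""
    rng = random.Random(seed)
    return [{"A": rng.random() < 0.5, "B": rng.random() < 0.5}
            for _ in range(n)]

assignments = factorial_assign(1000)
# Question 1: effect of A — compare all A subjects vs all not-A (pooling over B).
# Question 2: effect of B — compare all B subjects vs all not-B (pooling over A).
print(sum(p["A"] for p in assignments),
      sum(p["B"] for p in assignments))  # each roughly 500
```

Pooling across the other factor is what makes the design efficient, and it is also why a serious A-by-B interaction would undermine both comparisons.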
RCT: Group Randomization
• settings
- communities
- workplace
- religious institutions
- families
- village
- schools or classrooms
- social organizations
- clinics
• concerns
- less efficient statistically than individual randomization
- must account for correlation of individuals within a cluster
- must assure adequate sample size
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
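The statistical inefficiency and within-cluster correlation concerns above are usually quantified with the standard design-effect approximation, DEFF = 1 + (m − 1) × ICC. A minimal sketch with hypothetical numbers:

```python
def design_effect(cluster_size, icc):
    """Design effect for cluster (group) randomization:
    DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and
    ICC is the intracluster correlation coefficient."""
    return 1 + (cluster_size - 1) * icc

def inflated_n(n_individual, cluster_size, icc):
    """Sample size needed under cluster randomization, given the n an
    individually randomized trial would require."""
    return n_individual * design_effect(cluster_size, icc)

# Even a small ICC matters once clusters are large:
print(round(design_effect(50, 0.02), 2))   # 1.98 — nearly double
print(round(inflated_n(200, 50, 0.02)))    # 396
```

This is why a cluster trial "must assure adequate sample size": the effective sample size is the enrolled n divided by the design effect, not the raw head count.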
RCT: Group Randomization
• advantages
- feasibility of delivering intervention
- avoids contamination between those assigned to different
interventions
- decrease cost
- possibly greater generalizability
• intervention applications
- behavioral and lifestyle interventions
- infectious disease interventions (vaccines)
- studies of screening approaches
- health services research
- studies of new drugs (or other agents) in short supply
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
RCT: Maintaining the Integrity of Randomization
• The procedure for randomization should be:
- unbiased
- unpredictable (for participants and study personnel
recruiting and enrolling them)
• Timing
- randomize after determining eligibility
- avoid delays in implementation to minimize possibility
of participants becoming non-candidates
• Run-in period
- brief
- all participants started on same intervention
- those who comply are enrolled
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
RCT: Blinding (Masking)
Single blind – participants are unaware of treatment
group
Double blind – both participants and investigators
are unaware
Triple blind – various meanings
• person performing tests
• outcome auditors
• safety monitoring groups*
(* some clinical trials experts oppose this practice – inhibits
ability to weigh benefits and adverse effects and to assure
ethical standards are maintained)
RCT: Blinding (Masking)
Why blind? …
To avoid biased outcome ascertainment or adjudication
• If group assignment is known
- participants may report symptoms or outcomes differently
- physicians or investigators may elicit symptoms or outcomes
differently
- study staff or adjudicators may classify similar events
differently in treatment groups
• Problematic with “soft” outcomes
- investigator judgment
- participant reported symptoms, scales
RCT: Why Blind? … Co-interventions
• Unintended effective interventions
– participants use other therapy or change behavior
– study staff, medical providers, family or friends treat
participants differently
• Nondifferential - decreases power
• Differential - causes bias
RCT: Blinding (Masking)
• Feasibility depends upon study design
- yes: drug – placebo trial
- no: surgical vs medical intervention
- no: drug with obvious side effects
- trials with survival as an outcome are little affected by
inability to mask the observer
- independent, masked observer may be used for:
= studies with subjective outcome measures
= studies with objective endpoints (scans,
photographs, histopathology slides,
cardiograms)
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
RCT: Intention-to-treat Analysis
• includes all participants regardless of what occurs
after randomization
• maintains comparability in expectation across
intervention groups
• excluding participants after randomization introduces
bias that randomization was designed to avoid
• may not be necessary with trials using a
pre-randomization screening test with results
available after intervention begins provided
eligibility is not influenced by randomized
assignment
• must consider impact of noncompliance
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
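The defining move of intention-to-treat is analyzing everyone by assigned arm, deliberately ignoring whether the assigned intervention was actually received. A minimal sketch with hypothetical participant records:

```python
def itt_risks(participants):
    """Intention-to-treat: compute outcome risk per ASSIGNED arm,
    regardless of whether participants received the intervention."""
    risks = {}
    for arm in ("intervention", "control"):
        group = [p for p in participants if p["assigned"] == arm]
        risks[arm] = sum(p["outcome"] for p in group) / len(group)
    return risks

# Hypothetical records: the 'received' field is ignored by ITT on purpose.
participants = [
    {"assigned": "intervention", "received": True,  "outcome": 0},
    {"assigned": "intervention", "received": False, "outcome": 1},  # non-compliant, still counted
    {"assigned": "control",      "received": True,  "outcome": 1},
    {"assigned": "control",      "received": True,  "outcome": 0},
]
print(itt_risks(participants))  # {'intervention': 0.5, 'control': 0.5}
```

Dropping the non-compliant participant would break the comparability that randomization bought, which is exactly the bias ITT exists to avoid.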
RCT: Accounting for loss to follow-up (LTF)
• decreases power
- remedy: inflate the sample size to account for expected LTF
• increases bias
- remedy (difficult): design the study (and consent process) to follow
participants who drop out
Green SB. Design of Randomized Trials.
Epidemiol Reviews 2002;24:4-11
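The sample-size inflation for expected loss to follow-up is a one-line adjustment: divide the required n by the expected retention fraction. A quick sketch with hypothetical numbers:

```python
def inflate_for_ltf(n_required, expected_ltf):
    """Inflate the target enrollment so that, after the expected fraction
    is lost to follow-up, the required analyzable n still remains."""
    return n_required / (1 - expected_ltf)

# 200 analyzable subjects needed, 15% expected loss to follow-up:
print(round(inflate_for_ltf(200, 0.15)))  # enroll 235
```

Note this repairs only the power problem; if dropouts differ systematically from completers, the bias remains no matter how many extra subjects are enrolled.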
RCT: Analysis
• Intention to treat analysis
– Most conservative interpretation
– Include all persons assigned to intervention
group (including those who did not get
treatment or dropped out)
• Subgroup analysis
– Groups identified pre-randomization
The Ideal Randomized Trial
• Tamper-proof randomization
• Blinding of participants, study staff, lab staff,
outcome ascertainment and adjudication
• Adherence to study intervention and protocol
• Complete follow-up