Evaluating Poolability of
Continuous and Binary
Endpoints Across Centers
Roseann White
Clin/Reg Fellow
Abbott Vascular – Cardiac Therapies
Background
• MA in Statistics from UC Berkeley
• 2nd Generation statistician (My mother specialized in
econometrics, i.e. statistics for economics)
• 15 years as professional statistician in the Biotechnology
industry providing statistical support for research, analytical
method development, diagnostics, clinical, and manufacturing
• Favorite Quote:
“Statisticians speak for the data when the data can’t speak for
themselves”
Evaluating Poolability across Centers
• Key Issues
• Current Methods
• Proposed alternative method
• Potential Bayesian approach
Key Issues
• Centers are not chosen at random
– Sponsors try to include centers that represent the patient population across the relevant geography
– Often there are no centers in a given area, or only centers that see very few of the type of patients needed
• Clinical trials tend to initiate more centers than they may ultimately need
– Accelerate enrollment
– Involve key opinion leaders
– Provide visibility for the product
• In device trials, it’s often difficult to “blind” the clinician to the product being used.
Key Issues
• Assessing poolability is rarely discussed prospectively from both a clinical and a statistical perspective
– No definition of what constitutes a clinically meaningful difference among sites
– When assessing poolability, how should centers with a small number of patients be combined?
• Based on size of center?
• Based on the geographical region in which the center is located?
• Based on standard practices?
Current Methodology
• Centers that have fewer than a pre-specified number of patients are combined into a single “center”
• The interaction effect between center and treatment is tested:
$Y_{ijk} = T_i + (T \times \mathrm{Center})_{ij} + \varepsilon_{ijk}$
• If the p-value is less than a pre-specified value, then there is evidence of a lack of poolability across sites (a sketch of this test follows below)
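One common way to carry out this test for a binary endpoint is a likelihood-ratio test of nested logistic models. The sketch below is a minimal Python illustration; the DataFrame column names 'outcome', 'treatment', and 'center' are chosen for this example rather than taken from the slides.

```python
# Minimal sketch of the conventional poolability check for a binary endpoint:
# fit models with and without the treatment-by-center interaction and compare
# them with a likelihood-ratio test.  Column names are illustrative.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

def interaction_pvalue(df: pd.DataFrame) -> float:
    """P-value of the treatment-by-center interaction (likelihood-ratio test)."""
    full = smf.glm("outcome ~ C(treatment) * C(center)", data=df,
                   family=sm.families.Binomial()).fit()
    reduced = smf.glm("outcome ~ C(treatment) + C(center)", data=df,
                      family=sm.families.Binomial()).fit()
    lr_stat = 2 * (full.llf - reduced.llf)          # likelihood-ratio statistic
    extra_df = full.df_model - reduced.df_model     # number of interaction terms
    return stats.chi2.sf(lr_stat, extra_df)

# Lack of poolability is flagged when this p-value falls below the
# pre-specified threshold (e.g., 0.05 or 0.15).
```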
Challenges to the current methodology
• Reflexive – does not take into account whether a clinically meaningful interaction could be detected
• Combining all the smaller sites may dilute regional differences
• What p-value threshold does one choose?
– A threshold below 0.05 picks up only extreme differences, i.e., increases specificity and decreases sensitivity
– A threshold above 0.05 increases sensitivity but decreases specificity
Proposed Alternative Process
• Prospectively define what constitutes a clinically meaningful interaction, e.g., in a table by measure and site:
[Table template: Measure | Site 1 | Site 2]
• Determine the sample size necessary to detect that difference (a sketch follows below)
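As an illustration of the sample-size step, the sketch below sizes a simple two-site comparison of a continuous measure with a t-test power calculation. The clinically meaningful difference, standard deviation, power, and alpha are placeholder assumptions, not values from the trial.

```python
# Sketch: translate a pre-specified clinically meaningful site-to-site
# difference in a continuous measure into the number of patients needed
# per site (or per center grouping) to detect it.
from statsmodels.stats.power import TTestIndPower

meaningful_diff = 0.5   # assumed clinically meaningful difference between sites
sd = 1.2                # assumed common standard deviation of the measure
effect_size = meaningful_diff / sd

n_per_site = TTestIndPower().solve_power(effect_size=effect_size,
                                         alpha=0.05, power=0.80,
                                         alternative="two-sided")
print(f"Patients needed per site: {n_per_site:.0f}")
```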
Proposed Alternative Process (cont’d)
• Combine smaller centers (where enrollment is too low to detect differences) with larger “similar” centers, where similar is pre-specified as:
– Geographically similar (same country or region)
– Same patient population (urban vs. rural)
– Same standard practices (concomitant medication use)
• If center groupings are still too small, use the bootstrap method of resampling to get the “appropriate number” from each site (see the sketch below)
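A minimal sketch of the resampling idea, assuming each center grouping is held as a pandas DataFrame of patient records with an illustrative 'treatment' column. Resampling with replacement, stratified by arm, draws the needed number of patients whether the grouping is smaller or larger than that target.

```python
# Sketch: bootstrap-resample a center grouping up (or down) to the size
# needed to detect the pre-specified difference.
import pandas as pd

def resample_grouping(grouping: pd.DataFrame, needed_n: int) -> pd.DataFrame:
    """Draw a bootstrap sample of needed_n patients from one center grouping,
    stratified by treatment arm so the randomization ratio is preserved."""
    pieces = []
    for _, arm in grouping.groupby("treatment"):
        # Each arm contributes in proportion to its share of the grouping.
        arm_n = int(round(needed_n * len(arm) / len(grouping)))
        pieces.append(arm.sample(n=arm_n, replace=True))
    return pd.concat(pieces, ignore_index=True)
```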
Example – Binary Endpoint
• Primary endpoint – non-inferiority in the outcome rate
– Assumptions: T1 = 9%, T2 = 9%, margin = 5%
– N = 1400 (a back-of-the-envelope check follows below)
• Clinically meaningful interaction between treatment groups:
– The difference between the treatment groups varies across center groupings by more than twice the non-inferiority margin
• Minimum grouping size = 150
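A back-of-the-envelope check of these assumptions, using the standard normal-approximation sample-size formula for a noninferiority comparison of two proportions. The 90% power and one-sided 2.5% alpha are assumptions of this sketch (they are not stated on the slide), but under them the formula lands close to the N of 1400 above.

```python
# Sketch: noninferiority sample size for two proportions, normal approximation.
from scipy.stats import norm

p1 = p2 = 0.09              # assumed event rates in both treatment arms
margin = 0.05               # noninferiority margin
alpha, power = 0.025, 0.90  # assumed one-sided alpha and power

z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
n_per_arm = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / margin ** 2
print(f"Per arm: {n_per_arm:.0f}, total: {2 * n_per_arm:.0f}")  # ~688 / ~1377
```

An analogous two-proportion calculation, sized to detect a within-grouping treatment difference larger than twice the margin, is one plausible route to the minimum grouping size of 150, though the slide does not spell out that derivation.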
Bootstrap
• Use the bootstrap when
– Actual possible group size is lower than needed
– Actual possible group size is greater than needed
• Simulation results (N = 1000; a sketch of this kind of simulation follows below):

  Actual Grouping Size | Needed Grouping Size | % p < 0.05
  25                   | 150                  | 0.15
  50                   | 150                  | 0.09
  100                  | 150                  | 0.054
• Limitations
– For binary outcomes, grouping sizes of less than 50 can lead to misleading results
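The original simulation code is not shown; the sketch below illustrates the kind of simulation the table summarizes: generate two center groupings with no true treatment-by-center interaction, bootstrap each up to the needed grouping size, test the interaction, and tally how often p < 0.05. The 9% event rate, the simple z-test on the difference of treatment differences, and the grouping sizes are illustrative assumptions.

```python
# Sketch: estimate how often the interaction test rejects at 0.05 when there
# is no true interaction, as a function of the actual grouping size.
import numpy as np
import pandas as pd
from scipy import stats

RNG = np.random.default_rng(0)

def simulate_grouping(n, p_event=0.09):
    """One center grouping: 1:1 randomization, common event rate in both arms."""
    return pd.DataFrame({"treatment": RNG.integers(0, 2, n),
                         "outcome": RNG.binomial(1, p_event, n)})

def interaction_p(g1, g2):
    """Z-test comparing the treatment difference in event rates across groupings."""
    diffs, variances = [], []
    for g in (g1, g2):
        p = g.groupby("treatment")["outcome"].mean()
        n = g.groupby("treatment")["outcome"].count()
        diffs.append(p[1] - p[0])
        variances.append(p[1] * (1 - p[1]) / n[1] + p[0] * (1 - p[0]) / n[0])
    z = (diffs[0] - diffs[1]) / np.sqrt(sum(variances))
    return 2 * stats.norm.sf(abs(z))

def rejection_rate(actual_n, needed_n=150, sims=1000):
    hits = 0
    for _ in range(sims):
        # Bootstrap each simulated grouping up to the needed size.
        g1 = simulate_grouping(actual_n).sample(needed_n, replace=True)
        g2 = simulate_grouping(actual_n).sample(needed_n, replace=True)
        hits += interaction_p(g1, g2) < 0.05
    return hits / sims

# Very small groupings often produce arms with zero events, which makes the
# test unstable -- consistent with the limitation noted for sizes below 50.
print(rejection_rate(25), rejection_rate(50), rejection_rate(100))
```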
Does a Bayesian approach play a role in determining poolability?
• Modify the approach presented by Jen-Pei Liu et al. in “A Bayesian noninferiority approach to evaluation of bridging studies,” J Biopharm Stat. 2004 May;14(2):291-300.
Step 1: Develop a prior distribution for the treatment difference based on the largest center grouping
Step 2: Use the data from the next largest center grouping and prior
distribution to obtain the mean and variability of the posterior
distribution
Step 3: Evaluate the posterior probability that the difference is within some clinically acceptable limit
Step 4: If the posterior probability is sufficiently large, say 80%, then conclude similarity between the two center groupings.
Step 5: Repeat the same process with the next center grouping (a sketch of Steps 1-4 follows below).
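A minimal sketch of Steps 1 through 4 under a normal-normal approximation: the largest grouping's observed treatment difference and its standard error serve as the prior, the next grouping's as the likelihood, and the posterior probability that the difference lies within a clinically acceptable limit is compared with 0.80. The event counts and the 0.10 limit are hypothetical, and this conjugate shortcut is only an illustration of the idea, not the Liu et al. method verbatim.

```python
# Sketch: normal-normal posterior update for the treatment difference and the
# posterior probability that it stays within a clinically acceptable limit.
import numpy as np
from scipy import stats

def diff_and_se(events_t, n_t, events_c, n_c):
    """Observed difference in event rates and its standard error."""
    p_t, p_c = events_t / n_t, events_c / n_c
    se = np.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return p_t - p_c, se

def posterior_similarity_prob(prior, likelihood, limit):
    """Precision-weighted normal update; returns P(|difference| <= limit | data)."""
    (m0, s0), (m1, s1) = prior, likelihood
    w0, w1 = 1 / s0**2, 1 / s1**2
    post = stats.norm((w0 * m0 + w1 * m1) / (w0 + w1), np.sqrt(1 / (w0 + w1)))
    return post.cdf(limit) - post.cdf(-limit)

# Step 1: prior from the largest center grouping (hypothetical counts).
prior = diff_and_se(events_t=14, n_t=160, events_c=15, n_c=160)
# Step 2: likelihood from the next largest grouping (hypothetical counts).
likelihood = diff_and_se(events_t=9, n_t=80, events_c=7, n_c=80)
# Steps 3-4: conclude similarity if the posterior probability exceeds 0.80.
print(posterior_similarity_prob(prior, likelihood, limit=0.10) >= 0.80)
```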
Conclusion
• Pre-specify the clinically meaningful difference up front
• Group smaller sites by commonalities, not size
• If the group size is smaller or much larger than needed – the bootstrap is a potential solution, but it needs more investigation
• Explore a Bayesian approach to evaluating poolability.