Bayesian evidence synthesis in drug development and comparative effectiveness research
David Ohlssen (Novartis Pharmaceuticals Corporation)

Introduction – evidence synthesis in drug development
The ideas and principles behind evidence synthesis date back to the work of Eddy et al. (1992). However, widespread application has been driven by the need for quantitative health technology assessment:
• cost effectiveness
• comparative effectiveness
The ideas are often closely linked with Bayesian principles and methods:
• Good decision making should ideally be based on all relevant information
• MCMC computation

Recent developments in comparative effectiveness
Health agencies have become increasingly interested in health technology assessment and the comparative effectiveness of various treatment options. Statistical approaches include extensions of standard meta-analysis models allowing multiple treatments to be compared. The FDA Partnership in Applied Comparative Effectiveness Science (PACES) includes projects on utilizing historical data in clinical trials and on subgroup analysis.

Aims of this talk
• Introduce some basic concepts of evidence synthesis
• Illustrate them through a series of applications: a motivating public health example; meta-analysis and network meta-analysis; using historical data in the design and analysis of clinical trials; extrapolation; subgroup analysis
• Focus on principles and understanding of critical assumptions rather than technical details

Basic concepts – framework and notation for evidence synthesis
[Graphical model: data Y1,…,YS from S sources; source-specific parameters/effects of interest θ1,…,θS (e.g. a mean difference); and a question related to θ1,…,θS (e.g. the average effect, or the effect in a new study).]

Strategies for HIV screening – Ades and Cliffe (2002)
HIV: synthesizing evidence from multiple sources. The aim is to compare strategies for screening for HIV in prenatal clinics:
• universal screening of all women, or
• targeted screening of current injecting drug users (IDU) or women born in sub-Saharan Africa (SSA)
Use the synthesis to determine the optimal policy.

Key parameters (Ades and Cliffe, 2002)
a  Proportion of women born in sub-Saharan Africa (SSA)
b  Proportion of women who are intravenous drug users (IDU)
c  HIV infection rate in SSA
d  HIV infection rate in IDU
e  HIV infection rate in non-SSA, non-IDU
f  Proportion of HIV already diagnosed in SSA
g  Proportion of HIV already diagnosed in IDU
h  Proportion of HIV already diagnosed in non-SSA, non-IDU
There is NO direct evidence concerning e and h.

A subset of the data used in the synthesis (Ades and Cliffe, 2002)
Data source                                                              Expression                           Data
HIV prevalence, women not born in SSA, 1997-8                            [db + e(1 − a − b)]/(1 − a)          74 / 136139
Overall HIV prevalence in pregnant women, 1999                           ca + db + e(1 − a − b)               254 / 102287
Diagnosed HIV in SSA women as a proportion of all diagnosed HIV, 1999    fca/[fca + gdb + he(1 − a − b)]      43 / 60

Implementation of the evidence synthesis (Ades and Cliffe, 2002)
• The evidence was synthesized by placing all data sources within a single Bayesian model
• Easy to code in WinBUGS
• Key assumption: consistency of evidence across the different data sources
• This can be checked by comparing direct and indirect evidence at various "nodes" in the graphical model (conflict p-value)

Meta-analysis and network meta-analysis
Why use Bayesian statistics for meta-analysis?
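The role of the composite expressions above can be made concrete with a small sketch. This is not the full Ades and Cliffe model, only an illustration in R (with an arbitrary, purely illustrative parameter value) of how each aggregated data source contributes a binomial likelihood term for a function of the basic parameters a-h, and why e and h are identified only indirectly through these combinations.

# Minimal sketch (not the Ades and Cliffe model itself): each aggregated data item
# above contributes a binomial likelihood for the composite expression in the table;
# e and h enter only through these compositions, which is why they have no direct
# evidence of their own.
loglik <- function(p) {                       # p = c(a, b, c, d, e, f, g, h)
  a <- p[1]; b <- p[2]; c <- p[3]; d <- p[4]
  e <- p[5]; f <- p[6]; g <- p[7]; h <- p[8]
  dbinom(74,  136139, (d*b + e*(1 - a - b)) / (1 - a),           log = TRUE) +
  dbinom(254, 102287, c*a + d*b + e*(1 - a - b),                 log = TRUE) +
  dbinom(43,  60,     f*c*a / (f*c*a + g*d*b + h*e*(1 - a - b)), log = TRUE)
}

# Evaluate at an arbitrary, illustrative parameter value (not estimates from the paper)
loglik(c(a = 0.02, b = 0.005, c = 0.02, d = 0.05,
         e = 0.0002, f = 0.5, g = 0.5, h = 0.3))
# In the actual synthesis all such terms are combined with priors on a-h in a single
# Bayesian model (e.g. in WinBUGS) and the posterior is obtained by MCMC.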
• A natural approach for accumulating data / meta-analysis
• Unified modelling and the ability to explore a wide range of modelling structures
• Synthesis of evidence from multiple sources / multiple treatments
• Formal incorporation of other sources of evidence by utilizing prior distributions for model unknowns, e.g.:
- the ability to incorporate prior information regarding background event rates
- the ability to model between-study variability properly in random-effects models
• Probability statements about true treatment effects are easier to understand than confidence intervals and p-values

Bayesian random-effects meta-analysis for summary data
Let yi denote the observed treatment effect in trial i and si² the corresponding estimated standard error:
yi | θi ~ N(θi, si²)
θi ~ N(μ, τ²)
Add prior distributions for the unknowns:
μ ~ N(?, ?)
Heterogeneity: τ ~ half-N(0, ?) or τ ~ Unif(0, ?)
Carlin JB. Meta-analysis for 2x2 tables: a Bayesian approach. Statistics in Medicine 1992; 11: 141-58.

Bayesian methods – extending the basic model
Characterizing heterogeneity and prediction (see Higgins et al., 2009):
• Heterogeneity: quantification, but not a homogeneity test
• Mean effect: important, but an incomplete summary
• Study effects: maybe of interest, if studies are distinguishable
• Prediction: the effect in a new study is the most relevant and complete summary (predictive distribution)
Flexibility:
• Alternative scales and link functions – see Warn et al. (2002)
• Flexible random-effects distributions – see Lee et al. (2007) and Muthukumarana (2012)
• Combining individual patient data with aggregate data – see Sutton et al. (2008)
• Subgroup analysis – see Jones et al. (2011)
• Multiple treatments and network meta-analysis

Motivation for network meta-analysis
There are often many treatments for health conditions, yet published systematic reviews and meta-analyses typically focus on pair-wise comparisons:
• More than 20 separate Cochrane reviews for adult smoking cessation
• More than 20 separate Cochrane reviews for chronic asthma in adults
An alternative approach extends standard meta-analysis techniques to accommodate multiple treatments. This emerging field has been described as both network meta-analysis and mixed treatment comparisons.

Bayesian network meta-analysis
Systematic reviews are considered standard practice to inform evidence-based decision making regarding efficacy and safety. Bayesian network meta-analysis (mixed treatment comparisons) has been presented as an extension of traditional meta-analysis, including multiple different pairwise comparisons across a range of interventions. Several guidance and technical documents have recently been published.

Treatment comparison representation
[Figure, repeated over several slides: pairwise comparisons among treatments A, B, C, D and P (e.g. A vs P, B vs P, C vs A, D vs B) are combined into a single network linking P, A, B, C and D, with direct and indirect comparisons indicated – network meta-analysis (NMA).]

Network meta-analysis – key assumptions
Three key assumptions (Song et al., 2009):
Homogeneity assumption – Studies in the network meta-analysis which compare the same treatments must be sufficiently similar.
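As a concrete illustration of the model above, the sketch below (in R, with purely hypothetical effect estimates and standard errors) evaluates the joint posterior of μ and τ on a grid after integrating out the study effects θi, and computes the predictive distribution for the effect in a new study. A half-normal prior with scale 0.5 is assumed for τ and a flat prior for μ; these are illustrative choices, not the ones used in the talk.

# Minimal sketch of a Bayesian random-effects meta-analysis on a grid.
# After integrating out theta_i analytically: y_i | mu, tau ~ N(mu, s_i^2 + tau^2).
y <- c(0.30, 0.10, 0.45, 0.25, 0.05)   # hypothetical observed effects
s <- c(0.12, 0.15, 0.20, 0.10, 0.18)   # hypothetical standard errors

mu_grid  <- seq(-0.5, 1.0, length.out = 301)
tau_grid <- seq(0.001, 1.0, length.out = 300)

log_post <- outer(mu_grid, tau_grid, Vectorize(function(mu, tau) {
  sum(dnorm(y, mean = mu, sd = sqrt(s^2 + tau^2), log = TRUE)) +
    dnorm(tau, 0, 0.5, log = TRUE)      # half-normal(0, 0.5) prior on tau; flat prior on mu
}))
post <- exp(log_post - max(log_post))
post <- post / sum(post)

# Posterior summaries for the mean effect mu and the heterogeneity tau
mu_marg  <- rowSums(post)
tau_marg <- colSums(post)
cat("Posterior mean of mu :", sum(mu_grid * mu_marg), "\n")
cat("Posterior mean of tau:", sum(tau_grid * tau_marg), "\n")

# Predictive distribution for the effect in a new study (cf. Higgins et al., 2009):
# theta_new | mu, tau ~ N(mu, tau^2), averaged over the joint posterior grid
pred_mean <- sum(outer(mu_grid, tau_grid, function(m, t) m) * post)
pred_var  <- sum(outer(mu_grid, tau_grid, function(m, t) m^2 + t^2) * post) - pred_mean^2
cat("Predictive mean and SD for a new study:", pred_mean, sqrt(pred_var), "\n")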
Similarity assumption – When comparing A and C indirectly via B, the patient populations of the trial(s) investigating A vs B and those investigating B vs C must be sufficiently similar.
Consistency assumption – Direct and indirect comparisons, when done separately, must be roughly in agreement.

Network meta-analysis example – Trelle et al. (2011), cardiovascular safety of non-steroidal anti-inflammatory drugs
• Primary endpoint: myocardial infarction
• Data synthesis: 31 trials in 116 429 patients, with more than 115 000 patient-years of follow-up, were included
• A network random-effects meta-analysis was used in the analysis
• Critical aspect: the assumptions regarding the consistency of evidence across the network. How reasonable is it to rank and compare treatments with this technique?
Trelle, Reichenbach, Wandel, Hildebrand, Tschannen, Villiger, Egger, and Juni. Cardiovascular safety of non-steroidal anti-inflammatory drugs: network meta-analysis. BMJ 2011; 342: c7086. doi:10.1136/bmj.c7086

Poisson network meta-analysis model
Based on the work of Lu and Ades (LA) (2006 & 2009). Here b is the control treatment associated with trial i, μi is the effect of the baseline treatment b in trial i, and δibk is the trial-specific effect of treatment k relative to treatment b (the baseline treatment associated with trial i). Note that baseline treatments can vary from trial to trial. Different choices are possible for the μ's and δ's: they can be common (over studies), fixed (unconstrained), or "random". Consistency assumptions are required among the treatment effects, and prior distributions are required to complete the model specification.

Results from Trelle et al – myocardial infarction analysis
Relative risk with 95% confidence interval compared to placebo:
Treatment     RR estimate   Lower limit   Upper limit
Celecoxib     1.35          0.71          2.72
Diclofenac    0.82          0.29          2.20
Etoricoxib    0.75          0.23          2.39
Ibuprofen     1.61          0.50          5.77
Lumiracoxib   2.00          0.71          6.21
Naproxen      0.82          0.37          1.67
Rofecoxib     2.12          1.26          3.56
Authors' conclusion: Although uncertainty remains, little evidence exists to suggest that any of the investigated drugs are safe in cardiovascular terms. Naproxen seemed least harmful.

Comments on Trelle et al
• Drug doses could not be considered (data not available)
• The average duration of exposure differed across trials; therefore, the ranking of treatments relies on the strong assumption that the risk ratio is constant across time for all treatments
• The authors conducted extensive sensitivity analyses and the results appeared to be robust
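To make the consistency assumption concrete, the following sketch (hypothetical numbers, not data from Trelle et al.) shows a simple Bucher-type indirect comparison: under consistency, the indirect estimate of A vs C is obtained from the two direct log relative risks that share the common comparator B; the Lu and Ades model extends this logic to a whole network.

# Minimal sketch of an indirect comparison under the consistency assumption:
# d_AC = d_AB - d_CB on the log relative-risk scale (Bucher-type calculation).
logRR_AB <- log(0.80); se_AB <- 0.15   # hypothetical direct A vs B estimate
logRR_CB <- log(1.10); se_CB <- 0.20   # hypothetical direct C vs B estimate

logRR_AC <- logRR_AB - logRR_CB        # indirect A vs C estimate via the common comparator B
se_AC    <- sqrt(se_AB^2 + se_CB^2)    # variances add for independent sources

ci <- exp(logRR_AC + c(-1, 1) * 1.96 * se_AC)
cat("Indirect RR (A vs C):", round(exp(logRR_AC), 2),
    " 95% interval:", round(ci, 2), "\n")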
Two-way layout via the MAR assumption
An alternative parameterization, proposed by Jones et al. (2011) and Piepho et al. (2012), uses a classical two-way (TW) linear predictor with main effects for treatment and trial. Both papers focus on using the two-way model in the classical framework. By using the MAR property, a general approach to implementation in the Bayesian framework can be formed. All studies can in principle contain every arm, but in practice many arms will be missing. As the network meta-analysis model implicitly assumes MAR (Lu and Ades, 2009), a common (though possibly missing) baseline treatment can be assumed for every study (Hong and Carlin, 2012).

Comments on implementation and practical advantages
• In WinBUGS, include every treatment in every trial, with missing outcome cells for the missing treatments
• Utilize a set of conditional univariate normal distributions to form the multivariate normal (this speeds up convergence)
• The parameterization has several advantages when forming priors:
- In the Lu and Ades model, default "non-informative" priors must be used, as the trial baseline parameters are nuisance parameters with no interpretation
- In the two-way model, an informative prior for a single baseline treatment can be formed, as each trial has the same parameterization
- In the two-way model there is much greater control over non-informative priors; this can be valuable with rare safety events, where asymmetry in prior information can potentially lead to bias

Full multivariate meta-analysis
Instead of associating a concurrent-control parameter with each study, an alternative approach is to place random effects on every treatment main effect. This creates a so-called multivariate meta-analysis.

MI and stroke results from Trelle et al
[Figure: MI and stroke results comparing the Lu and Ades (LA) fixed-effect and random-effects models with the two-way (TW) random-effects model and the multivariate (MV) random-effects model.]

Discussion of the full multivariate meta-analysis model
• Allows borrowing of strength across baselines, as every treatment is considered random
• Therefore, in rare-event meta-analysis, it incorporates trials with zero total events through the random effects
• No consistency relations to deal with!
• Priors on the variance components can be formed using an inverse Wishart distribution or a Cholesky decomposition
• Breaks the concurrent-control structure, so it will automatically introduce some confounding

Future directions
• Network meta-analysis with multiple outcomes: sampling model (multinomial?); borrowing strength across treatment effects; surrogate-outcome meta-analysis combined with network meta-analysis
• Network meta-analysis with subgroup analysis
• Combining network meta-analysis, meta-analysis of subgroups and multivariate meta-analysis
• More work on informative priors for variance components and baseline parameters

Use of historical controls

Introduction – objective and problem statement
• Design a study with a control arm and treatment arm(s)
• Use historical control data in the design and analysis
• Ideally: a smaller trial comparable to a standard trial
• Used in some Novartis phase I and II trials
Design options:
• Standard design: "n vs. n"
• New design: "n* + (n − n*) vs. n", with n* = "prior sample size"
How can the historical information be quantified? How much is it worth?
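A rough answer to "how much is it worth?" is the prior effective sample size n*. The sketch below shows two common rules of thumb, with hypothetical numbers not tied to any particular trial: one for a Beta approximation of a MAP-type prior on a response rate, and one for a normal approximation of a prior on a mean.

# Minimal sketch: approximate prior effective sample size n* of a predictive (MAP-type) prior.
# If the derived prior is approximated by a Beta(a, b), a common rule of thumb is n* = a + b;
# for a normal approximation on a mean with per-patient SD sigma, n* = sigma^2 / (prior variance).
a <- 14; b <- 36                       # hypothetical Beta approximation of the MAP prior
cat("Approximate prior sample size (Beta):", a + b, "\n")

sigma    <- 1.2                        # hypothetical per-patient SD of a continuous endpoint
prior_sd <- 0.35                       # hypothetical SD of the MAP prior for the control mean
cat("Approximate prior sample size (normal):", round(sigma^2 / prior_sd^2, 1), "\n")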
The Meta-Analytic-Predictive approach – framework and notation
[Graphical model: historical control data Y1,…,YH from H trials; unknown control "effects" θ1,…,θH; an unknown relationship/similarity among the effects, ranging from no relation to the same effect in every trial; the unknown effect θ* in the new trial; and data Y* in the new study, yet to be observed.]
Design objective: the predictive distribution [θ* | Y1,…,YH].

Example – meta-analytic-predictive approach to form priors
Application: a random-effects meta-analysis provides prior information for the control group in the new study, corresponding to a prior sample size n*.

Bayesian setup using historical control data
[Schematic: observed control response rates from historical trials 1-8 enter a meta-analysis; the resulting predictive distribution of the control response rate in a new study serves as the prior distribution for the control response rate in the study analysis. Combined with the observed control and drug data and a prior distribution for the drug response rate, the Bayesian analysis yields posterior distributions for the control response rate, the drug response rate, and the difference in response.]

Utilization in a quick-kill quick-win PoC design
[Design schematic: decision rules at the 1st interim, 2nd interim and final analysis; positive PoC if P(d ≥ 0.2) exceeds its threshold, negative PoC if P(d < 0.2) exceeds its threshold, with thresholds of ≥ 50%, ≥ 70%, > 50% and ≥ 90% at the different looks.]
With N = 60, 2:1 active:placebo, and interim analyses after 20 and 40 patients, the operating characteristics (pPlacebo = 0.15, 10 000 simulation runs) are:

Scenario   1st interim: stop for efficacy / futility   2nd interim: stop for efficacy / futility   Final: claim efficacy / fail   Overall power
d = 0      1.6% / 49.0%                                1.4% / 26.0%                                0.2% / 21.9%                   3.2%
d = 0.2    33.9% / 5.1%                                27.7% / 3.0%                                8.8% / 21.6%                   70.4%
d = 0.5    96.0% / 0.0%                                4.0% / 0.0%                                 0.0% / 0.0%                    100.0%

An R package is available for design investigation.

Extrapolation
Thanks to Roland Fisch

General background: EMA concept paper on extrapolation
The EMA produced a "Concept paper on extrapolation of efficacy and safety in medicine development", with a specific focus on Pediatric Investigation Plans: 'Extrapolation from adults to children is a typical example ...'. Bayesian methods are mentioned: the extrapolation 'could be supported by "Bayesian" statistical approaches'.
Alternative approaches:
• No extrapolation: full development program in the target population
• Partial extrapolation: reduced study program in the target population, depending on the magnitude of expected differences and the certainty of assumptions
• Full extrapolation: some supportive data to validate the extrapolation concept
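The decision rules in the PoC design above are driven by the posterior probability that the treatment difference d exceeds 0.2. A minimal sketch of that calculation is given below, assuming hypothetical interim counts and simple Beta(1, 1) priors; in practice the placebo prior would be the MAP prior derived from the historical controls.

# Minimal sketch: posterior probability P(d >= 0.2 | data) for a binary endpoint,
# obtained by Monte Carlo sampling from independent Beta posteriors.
set.seed(1)
n_trt <- 14; r_trt <- 7        # hypothetical first-interim counts, active arm (2:1 allocation)
n_pbo <- 6;  r_pbo <- 1        # hypothetical first-interim counts, placebo arm

# Beta(1, 1) priors for illustration; a MAP prior could replace the placebo prior here.
p_trt <- rbeta(1e5, 1 + r_trt, 1 + n_trt - r_trt)
p_pbo <- rbeta(1e5, 1 + r_pbo, 1 + n_pbo - r_pbo)

prob_d_ge_02 <- mean(p_trt - p_pbo >= 0.2)
cat("P(d >= 0.2 | data) =", round(prob_d_ge_02, 3), "\n")
# The interim decision then compares this probability (and the corresponding
# P(d < 0.2)) with the pre-specified thresholds in the design above.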
Adult data – Bayesian meta-analytic-predictive approach
Model: mixed-effects logistic regression
Yi ~ Binomial(Ni, πi)
logit(πi) = μ + αi + xi β
where, for study i, Yi = number of events, Ni = number of patients and πi = event rate, with
• μ: intercept
• αi ~ N(0, σ²): random study effect
• xi: design matrix (study-level covariates)

The meta-analytic-predictive approach – framework and notation
[Graphical model linking the observed historical data yi (with study sizes ni and covariates xi), the parameters μ, β and σ, and the predicted outcome Yrep of the new study with sample size n* and covariates x*.]

Subgroup analysis
Based on Jones, Ohlssen, Neuenschwander, Racine, Branson (2011)

Introduction to subgroup analysis
For biological reasons, treatments may be more effective in some populations of patients than in others:
• Risk factors
• Genetic factors
• Demographic factors
This motivates interest in statistical methods that can explore and identify potential subgroups of interest.

Challenges with exploratory subgroup analysis – random high bias (Fleming, 2010)
Effects of 5-fluorouracil plus levamisole on patient survival, presented overall and within subgroups, by sex and age (hazard ratio, risk of mortality):

Analysis       North Central Treatment Group Study (n = 162)   Intergroup Study #0035 (n = 619)
All patients   0.72                                            0.67
Female         0.57                                            0.85
Male           0.91                                            0.50
Young          0.60                                            0.77
Old            0.87                                            0.59

Assumptions to deal with extremes (Jones et al., 2011)
Similar methods to those used when combining historical data; however, the focus is on the individual subgroup parameters θg1,…,θgG rather than on prediction for a new subgroup.
1) Unrelated parameters θg1,…,θgG (u): assumes a different treatment effect in each subgroup
2) Equal parameters θg1 = … = θgG (c): assumes the same treatment effect in each subgroup
3) Compromise (r): effects are similar/related to a certain degree

Comments on shrinkage estimation
• This type of approach is sometimes called shrinkage estimation
• Shrinkage estimation attempts to adjust for random high bias (see the numerical sketch below)
• When relating subgroups, it is often desirable and logical to use structures that allow greater similarity between some subgroups than others
• A variety of possible subgroup structures can be examined to assess robustness

Subgroup analysis – extension to multiple studies
Data summary from several studies:
• Subgroup analysis in a meta-analytic context
• Efficacy comparison T vs. C
• Data from 7 studies
• 8 subgroups, defined by 3 binary baseline covariates A, B, C, each high (+) or low (−), describing burden of disease (BOD)
• Idea: patients with a higher BOD at baseline might show better efficacy
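The "compromise" structure and the resulting shrinkage can be illustrated in a few lines of R. The subgroup estimates, standard errors and the between-subgroup SD τ below are hypothetical, chosen only to show how extreme estimates are pulled towards the overall effect.

# Minimal sketch of the compromise/shrinkage idea: observed subgroup effects are shrunk
# towards the overall mean, with the amount of shrinkage driven by the between-subgroup
# SD tau (tau -> 0 gives the "equal parameters" model, tau -> Inf the "unrelated" one).
est <- c(-0.35, -0.10, -0.45, 0.05)   # hypothetical subgroup estimates (e.g. log hazard ratios)
se  <- c(0.20, 0.25, 0.30, 0.22)      # hypothetical standard errors

tau <- 0.15                            # assumed between-subgroup SD (sensitivity parameter)
w       <- 1 / (se^2 + tau^2)
overall <- sum(w * est) / sum(w)       # precision-weighted overall effect

B      <- se^2 / (se^2 + tau^2)        # shrinkage factor for each subgroup
shrunk <- B * overall + (1 - B) * est

print(round(data.frame(subgroup = 1:4, observed = est, shrunken = shrunk), 3))
# Repeating this for a range of tau values (or richer structures that relate some
# subgroups more closely than others) gives the sensitivity analyses described above.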
Graphical model – subgroup analysis involving several studies
[Graphical model: data Y1,…,YS from S studies; study-specific parameters θ1,…,θS, which allow the data to be combined across studies; and subgroup parameters θg1,…,θgG, the main parameters of interest, for which various modelling structures can be examined.]

Extension to multiple studies
Example 3: sensitivity analyses across a range of subgroup structures
• 8 subgroups, defined by 3 binary baseline covariates A, B, C, each high (+) or low (−), describing burden of disease (BOD)

Summary – subgroup analysis
• It is important to distinguish between exploratory and confirmatory subgroup analysis
• Exploratory subgroup analysis can be misleading due to random high bias
• Evidence synthesis techniques that account for similarity among subgroups help adjust for random high bias
• Examine a range of subgroup models to assess the robustness of any conclusions

Overall conclusions
• There is general agreement that good decision making should be based on all relevant information
• However, this is not easy to do in a formal/quantitative way
• Evidence synthesis offers fairly well-developed methodologies, has many areas of application, is particularly useful for company-internal decision making (we have used and will increasingly use evidence synthesis in our phase I and II trials), and has become an important tool when making public health policy decisions

References

Evidence synthesis / meta-analysis
DerSimonian, Laird (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7: 177-88.
Gould (1991). Using prior findings to augment active-controlled trials and trials with small placebo groups. Drug Information Journal, 25: 369-380.
Normand (1999). Meta-analysis: formulating, evaluating, combining, and reporting (Tutorial in Biostatistics). Statistics in Medicine, 18: 321-359. See also the Letters to the Editor by Carlin (2000), 19: 753-59, and Stijnen (2000), 19: 759-761.
Spiegelhalter et al. (2004); see main reference.
Stangl, Berry (eds) (2000). Meta-Analysis in Medicine and Health Policy. Marcel Dekker.
Sutton, Abrams, Jones, Sheldon, Song (2000). Methods for Meta-analysis in Medical Research. John Wiley & Sons.
Trelle et al. (2011). Cardiovascular safety of non-steroidal anti-inflammatory drugs: network meta-analysis. BMJ, 342: c7086.

Meta-analysis and network meta-analysis
Carlin J (1992). Meta-analysis for 2x2 tables: a Bayesian approach. Statistics in Medicine, 11(2): 141-158. doi:10.1002/sim.4780110202.
Smith TC, Spiegelhalter DJ, Thomas A (1995). Bayesian approaches to random-effects meta-analysis: a comparative study. Statistics in Medicine, 14(24): 2685-2699. doi:10.1002/sim.4780142408.
Warn D, Thompson S, Spiegelhalter D (2002). Bayesian random effects meta-analysis of trials with binary outcomes: methods for the absolute risk difference and relative risk scales. Statistics in Medicine, 21(11): 1601-1623. doi:10.1002/sim.1189.
Lambert PC, Sutton AJ, Burton PR, Abrams KR, Jones DR (2005). How vague is vague? A simulation study of the impact of the use of vague prior distributions in MCMC using WinBUGS. Statistics in Medicine, 24(15): 2401-2428. doi:10.1002/sim.2112.
Turner RM, Davey J, Clarke MJ, Thompson SG, Higgins JP (2012). Predicting the extent of heterogeneity in meta-analysis, using empirical data from the Cochrane Database of Systematic Reviews. International Journal of Epidemiology, 41(3): 818-827.
Sutton A, Kendrick D, Coupland C (2008). Meta-analysis of individual- and aggregate-level data. Statistics in Medicine, 27(5): 651-669. doi:10.1002/sim.2916.
Turner R, Spiegelhalter D, Smith G, Thompson S (2009).
Bias modelling in evidence synthesis. Journal of the Royal Statistical Society: Series A (Statistics in Society), 172: 21-47.
Lee K, Thompson S (2007). Flexible parametric models for random-effects distributions. Statistics in Medicine, 27(3): 418-434.
Muthukumarana S, Tiwari RC (2012). Meta-analysis using Dirichlet process. Statistical Methods in Medical Research. doi:10.1177/0962280212453891.
Jones HE, Ohlssen DI, Neuenschwander B, Racine A, Branson M (2011). Bayesian models for subgroup analysis in clinical trials. Clinical Trials, 8(2): 129-143. doi:10.1177/1740774510396933.

Historical controls
Ibrahim, Chen (2000). Power prior distributions for regression models. Statistical Science, 15: 46-60.
Neuenschwander, Branson, Spiegelhalter (2009). A note on the power prior. Statistics in Medicine, 28: 3562-3566.
Neuenschwander, Capkun-Niggli, Branson, Spiegelhalter (2010). Summarizing historical information on controls in clinical trials. Clinical Trials, 7: 5-18.
Pocock (1976). The combination of randomized and historical controls in clinical trials. Journal of Chronic Diseases, 29: 175-88.
Spiegelhalter et al. (2004); see main reference.
Thall, Simon (1990). Incorporating historical control data in planning phase II studies. Statistics in Medicine, 9: 215-28.

Subgroup analyses
Berry, Berry (2004). Accounting for multiplicities in assessing drug safety: a three-level hierarchical mixture model. Biometrics, 60: 418-26.
Davis, Leffingwell (1990). Empirical Bayes estimates of subgroup effects in clinical trials. Controlled Clinical Trials, 11: 37-42.
Dixon, Simon (1991). Bayesian subgroup analysis. Biometrics, 47: 871-81.
Fleming (2010). Clinical trials: discerning hype from substance. Annals of Internal Medicine, 153: 400-406.
Hodges, Cui, Sargent, Carlin (2007). Smoothing balanced single-error-term analysis of variance. Technometrics, 49: 12-25.
Jones, Ohlssen, Neuenschwander, Racine, Branson (2011). Bayesian models for subgroup analysis in clinical trials. Clinical Trials, 8: 129-143.
Louis (1984). Estimating a population of parameter values using Bayes and empirical Bayes methods. JASA, 79: 393-98.
Pocock, Assman, Enos, Kasten (2002). Subgroup analysis, covariate adjustment and baseline comparisons in clinical trial reporting: current practice and problems. Statistics in Medicine, 21: 2917-2930.
Spiegelhalter et al. (2004); see main reference.
Thall, Wathen, Bekele, Champlin, Baker, Benjamin (2003). Hierarchical Bayesian approaches to phase II trials in diseases with multiple subtypes. Statistics in Medicine, 22: 763-80.

Acknowledgements
Stuart Bailey, Björn Bornkamp, Roland Fisch, Beat Neuenschwander, Heinz Schmidli, Min Wu, Andrew Wright