# Multiple Historical Studies and Meta-Analysis


## ABSTRACT

In this chapter, we consider the situation where there are multiple historical studies for estimating the effect size of the standard therapy as compared to placebo. Many concepts in the analyses of noninferiority (NI) trials are much simpler when there is only one historical study than when there are multiple historical studies. For example, what does the constancy assumption mean when there are multiple historical studies? However, multiple historical studies are needed to assess assay sensitivity, which depends on the historical evidence of sensitivity to drug effects (ICH E10 2001).

There are limitations in using meta-analysis to estimate the effect size of the standard therapy as compared to placebo, such as publication bias and heterogeneity. Section 7.2 discusses meta-analysis in general and the associated issues. Section 7.3 discusses the fixed-effect model, and Section 7.4 discusses the use of the random-effects model to deal with heterogeneity. The constancy assumption in the context of meta-analysis will be discussed in Section 7.5. Ng and Valappil (2011) proposed an alternative to deal with heterogeneity by discounting the historical studies individually before pooling. This approach will be discussed in Section 7.6. Finally, Section 7.7 concludes this chapter with a discussion.

7.2 Meta-Analysis

The definition and objective of a meta-analysis are given by Iyengar and Greenhouse (1988) in the following:

The application of statistical procedures to collections of results from individual studies for the purpose of integrating, synthesizing, and advancing a research domain is commonly known as meta-analysis. The objective of a meta-analysis is to summarize quantitatively research literature with respect to a particular question and to examine systematically the manner in which a collection of studies contributes to knowledge about that question.

There are variations in the definition of meta-analysis in the literature. Some examples are given in the following:

• DerSimonian and Laird (1986): Meta-analysis is defined as the statistical analysis of a collection of analytical results for the purpose of integrating the findings.

• Follmann and Proschan (1999): Meta-analysis is an important tool used in medical research to quantitatively summarize multiple related studies.

• Ziegler, Koch, and Victor (2001): Meta-analysis is the systematic synthesis of the results of several studies, especially of clinical trials.

• Schumi and Wittes (2011): Meta-analysis is a set of methods used to combine data from a group of studies to obtain an estimate of a treatment effect.

The studies included in a meta-analysis are typically derived from a systematic review. The Center for Outcomes Research and Education (CORE) contrasts a systematic review with a meta-analysis in the following (http://researchcore.org/faq/answers.php?recID=5; Accessed: September 7, 2013):

A systematic review is a thorough, comprehensive, and explicit way of interrogating the medical literature. It typically involves several steps, including (1) asking an answerable question (often the most difficult step); (2) identifying one or more databases to search; (3) developing an explicit search strategy; (4) selecting titles, abstracts, and manuscripts based on explicit inclusion and exclusion criteria; and (5) abstracting data in a standardized format.

A “meta-analysis” is a statistical approach to combining the data derived from a systematic review. Therefore, every meta-analysis should be based on an underlying systematic review, but not every systematic review leads to a meta-analysis. Bartolucci and Hillegass (2010, p. 17) elaborate the basic principle of a systematic review, contrasting it with an informal review, as follows:

The systematic review [follows] an explicit and reproducible protocol to locate and evaluate the available data. The collection, abstraction, and compilation of the data follow a rigorous and prospectively defined objective process. … Unlike an informal review of the literature, this systematic, disciplined approach is intended to reduce the potential for subjectivity or bias in the subsequent findings.

In addition to published data, O’Gorman et al. (2013) include unpublished data, conference proceedings, and abstracts in a systematic review. Furthermore, the meta-analysis, if performed, is considered part of the systematic review. Khan et al. (2003) explicitly include the meta-analysis as part of the systematic review.

7.3 Fixed-Effect Model

Prior to the introduction of the random-effects model (see Section 7.4) by DerSimonian and Laird in 1986, meta-analysis was mainly based on the fixed-effect model (FEM), which assumes there is one true effect size shared by all studies. The combined effect size is the estimate of this common effect size (Muthukumarana and Tiwari 2012). For a continuous endpoint under the normality assumption, the common effect size is estimated by the weighted average of the estimates from the individual studies, with weights being the inverses of the estimated variances of the individual estimates. See, for example, Rothmann, Wiens, and Chan (2012, 75). Without the normality assumption on the underlying distribution, the analysis relies on asymptotics in the sample sizes, regardless of the number of studies.
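As a rough sketch of the inverse-variance weighting just described (the effect estimates and variances below are hypothetical, not data from this chapter):

```python
# Fixed-effect meta-analysis: inverse-variance weighted average.
import math

estimates = [4.0, 5.5, 5.0]      # hypothetical per-study effect estimates (S - P)
variances = [0.50, 0.80, 0.40]   # hypothetical within-study variances

# Each study is weighted by the inverse of its estimated variance.
weights = [1.0 / v for v in variances]
pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))

# Two-sided 95% confidence interval; the lower limit is what would feed
# into the fixed-margin method (Section 5.3 of Chapter 5).
lower = pooled - 1.96 * se_pooled
upper = pooled + 1.96 * se_pooled
```

Note that larger studies (smaller variances) dominate the pooled estimate, which is appropriate only under the FEM assumption of one common true effect.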

The lower confidence limit for the common effect size should be used to account for the variability in determining the NI margin, with or without discounting, for the fixed-margin method discussed in Section 5.3 of Chapter 5. For the synthesis method discussed in Section 5.4 of Chapter 5, the point estimate, with or without discounting, may be incorporated into the test statistic, similar to the situation where there is only one historical study.

7.4 Random-Effects Model

Even when investigating the same disease and the same therapeutic intervention, different studies are almost never identical in design; they might, for example, differ with respect to (1) dosage scheme, (2) duration of follow-up, (3) diagnostic strategies, or (4) the risk profile of the patient population (Ziegler, Koch, and Victor 2001). Such differences could lead to different effect sizes, rendering the assumption of the FEM invalid. DerSimonian and Laird (1986) introduced a random-effects model (REM) to take the heterogeneity between the studies into consideration. Under this model, the true effect size may vary across studies and is assumed to follow a normal distribution. In addition, the between-study and within-study variabilities are assumed to be independent.

As in the FEM, the overall effect size is estimated by a weighted mean. See, for example, Rothmann, Wiens, and Chan (2012, 77). The studies in the meta-analysis (see Section 7.2) are assumed to be a random sample from the relevant distribution of effects, and the combined effect estimates the mean effect of this distribution (Muthukumarana and Tiwari 2012). This mean effect is referred to as the global mean by Rothmann, Wiens, and Chan (2012, 61).
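The DerSimonian and Laird (1986) moment estimator can be sketched as follows; again the effect estimates and variances are hypothetical:

```python
import math

estimates = [4.0, 5.5, 5.0, 2.5]       # hypothetical per-study effects
variances = [0.50, 0.80, 0.40, 0.60]   # hypothetical within-study variances

# Fixed-effect weights and pooled estimate, needed for Cochran's Q.
w = [1.0 / v for v in variances]
y_fe = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)

# Cochran's Q and the DerSimonian-Laird moment estimate of the
# between-study variance tau^2 (truncated at zero).
q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, estimates))
df = len(estimates) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each within-study variance.
w_re = [1.0 / (v + tau2) for v in variances]
mu_hat = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
se_mu = math.sqrt(1.0 / sum(w_re))
```

Because tau^2 is added to every study's variance, the REM weights are more nearly equal across studies than the FEM weights, which is the behavior criticized later in Section 7.7.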

Testing under either an FEM or REM typically depends on an asymptotic approximation of a test statistic to a standard normal distribution. For the FEM, the approximation is asymptotic in the total number of subjects, whereas for the REM, it is asymptotic in the number of studies (Follmann and Proschan 1999). Results under the REM based on an asymptotic approximation with a small number of studies are not reliable and are subject to non-negligible bias. DerSimonian and Laird (1986) described the main difficulty in meta-analysis as follows:

The main difficulty in integrating the results from various studies stems from the sometimes-diverse nature of the studies, both in terms of design and methods employed. Some are carefully controlled randomized experiments while others are less well controlled. Because of differing sample sizes and patient populations, each study has a different level of sampling error as well. Thus, one problem in combining studies for integrative purposes is the assignment of weights that reflect the relative “value” of the information provided in a study. A more difficult issue in combining evidence is that one may be using incommensurable studies to answer the same question.

Rothmann, Wiens, and Chan (2012, 81-82) discussed various concerns with the application of the REM to estimate the effect size in the setting of an NI trial. These are summarized as follows:

• “… the estimation of the mean effect across studies (the global mean) is frequently used to infer the effect of the active control in the NI trial from either the use of a 95% confidence interval or by the use of the point estimate with its standard error (e.g., in a synthesis test statistic).”

• “The lower limit of a two-sided 95% confidence interval for the global mean may increase as the largest estimated effect is reduced,” as shown in Example 4.4 of Rothmann, Wiens, and Chan (2012, 82-85), which is counterintuitive. In fact, it is not appropriate to use the global mean, assuming it is known, as the effect size to establish an NI margin, especially since the within-study effects are greatly heterogeneous.

• Including small studies in the systematic review tends to overestimate the effect size in the meta-analysis, as small studies with a smaller observed effect size (and mostly negative results) are less likely to be published. “This bias would be more profound in a random-effects meta-analysis than in a fixed-effect meta-analysis. A random-effects meta-analysis pulls the estimated effect away from that of a fixed-effect meta-analysis toward the observed effects from the smaller studies.”

Using an extreme hypothetical example with an infinite number of historical trials, each of infinite size, Brittain, Fay, and Follmann (2012) showed that the global mean (overall mean) should not be used as the effect size in the NI trial. Note that such an extreme hypothetical example effectively assumes that the global mean is known. They propose using a prediction interval for the missing standard-versus-placebo effect rather than a confidence interval for the mean. Use of the predictive interval is also discussed by Rothmann, Wiens, and Chan (2012, 108-109).
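The chapter does not prescribe a particular formula, but one common t-based formulation of the prediction interval for a new (missing) study effect can be sketched as follows; all numeric inputs are hypothetical:

```python
import math

# Suppose a random-effects meta-analysis of k = 4 historical studies gave:
k = 4
mu_hat = 4.3   # hypothetical estimated global mean effect
se_mu = 0.55   # hypothetical standard error of mu_hat
tau2 = 0.9     # hypothetical between-study variance estimate

# The prediction interval widens the confidence interval by the
# between-study variance; a t critical value with k - 2 degrees of
# freedom is commonly used (t_{0.975, 2} = 4.303 for k = 4).
t_crit = 4.303

half_width = t_crit * math.sqrt(tau2 + se_mu ** 2)
pred_lower = mu_hat - half_width
pred_upper = mu_hat + half_width

# For comparison, the 95% confidence interval for the global mean:
ci_lower = mu_hat - 1.96 * se_mu
```

The prediction interval is necessarily wider than the confidence interval for the mean, reflecting the extra uncertainty about where the effect in a new trial would fall.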

7.5 Constancy Assumption

For the FEM, the constancy assumption means that the common effect size is equal to the effect size in the current NI trial, which is similar to the situation where there is only one historical study (see Section 2.5.1 in Chapter 2). Since studies may have different designs, assessing the constancy assumption can be even more challenging than when there is only one historical study.

When the true effect of the active control varies across previous trials (i.e., the active control effect is not constant), such as in the REM, what does the constancy assumption mean? This question was raised by Rothmann, Wiens, and Chan (2012, 61). They gave two interpretations: (1) the active control effect in the NI trial equals the global mean active-control effect across studies (i.e., the location parameter in the REM); and (2) the true active-control effect in the NI trial has the same distribution as the true effects in the previous trials.

Whether to use the confidence interval for the global mean or the predictive interval for estimating the effect size in the current NI trial depends on the interpretation of the constancy assumption. With the first interpretation, we should use the confidence interval for the global mean, and discounting may be used if the constancy assumption does not hold, as in Section 2.5.2 of Chapter 2.

However, Brittain, Fay, and Follmann (2012) are against using the global mean even if it is known, as discussed in Section 7.4. With the second interpretation, we may consider the studies in the meta-analysis as a random sample from the relevant distribution of effects (Muthukumarana and Tiwari 2012) (see Section 7.4), with the effect size of the current NI trial being missing; therefore, we should use the predictive interval rather than the confidence interval. In this case, it is not clear what is meant by violation of the constancy assumption because there is no explicit formula to equate the effect size in the current NI trial and the “effect size” of the historical studies. Perhaps violation of the constancy assumption means that the assumption of the REM is not valid. In any case, the estimated effect size using the predictive interval can always be discounted. Therefore, the effect size in the current NI trial is obtained by first pooling the data from the historical studies using the predictive interval and then applying the discounting, if needed. Such an approach will be referred to as the Pooling-and-then-Discounting (PatD) approach.

It is not clear how realistic the assumption underlying the second interpretation is. Even if this assumption holds, the number of studies is typically limited. For example, DerSimonian and Laird (1986) described seven medical meta-analyses, six of which had fewer than 10 studies. With a limited number of studies, Ng and Valappil (2011) proposed discounting each individual study and then pooling the results. This approach will be discussed in Section 7.6.

When we use the confidence interval for the global mean or the predictive interval for estimating the effect size in the current NI trial, we implicitly use the fixed-margin method discussed in Section 5.3 of Chapter 5. Using the synthesis method with the first interpretation, we incorporate the point estimate for the global mean in the test statistic, as shown in Section 5.4 of Chapter 5. Using the synthesis method with the second interpretation, we incorporate the point estimate for a given percentile (e.g., the 10th percentile) of the distribution of the true means in the REM, rather than the global mean, in the test statistic, as shown in Section 5.4 of Chapter 5. In either case, the variability of the test statistic needs to be accounted for appropriately.
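Under the second interpretation, the percentile just mentioned is simply a lower quantile of the assumed normal distribution of true effects; a minimal sketch with hypothetical numbers:

```python
# 10th percentile of the REM distribution of true effects N(mu, tau^2).
# All numbers are hypothetical illustrations, not values from the chapter.
mu_hat = 4.3   # estimated global mean effect
tau = 0.95     # estimated between-study standard deviation (sqrt of tau^2)

z10 = -1.2816  # standard normal 10th-percentile z-value
p10 = mu_hat + z10 * tau  # a conservative stand-in for the active-control effect
```

In practice the sampling variability of both mu_hat and tau would also have to be reflected in the synthesis test statistic, as the text notes.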

7.6 Discounting-and-then-Pooling Approach

In both FEM and REM analyses, discounting of the estimated effect size in the meta-analysis may be considered. Since the discounting is done after the data are pooled, it is referred to as the Pooling-and-then-Discounting (PatD) approach (see Section 7.5). Ng and Valappil (2011) proposed the Discounting-and-then-Pooling (DatP) approach when the number of historical studies is small (e.g., three). A simple hypothetical example will be discussed in Section 7.6.1 to illustrate the DatP approach. In Section 7.6.2, an example based on published literature in the anti-infective therapeutic area is used to contrast the two approaches.

The focus is on estimating the effect size in the current NI trial using data from the historical studies, that is, (S – P), the mean difference for continuous outcomes or the difference in proportions for binary outcomes. Typically, the lower limit of the associated 95% confidence interval is used for such an estimate. This estimate may be subject to discounting, depending upon the validity of the constancy assumption (see Section 2.5 of Chapter 2), and is used to determine the NI margin for the fixed-margin method (see Section 5.3 of Chapter 5). Note that the notation “d” is used in this section instead of “γ” to avoid confusion with the three components (i.e., γ1, γ2, and γ3) discussed in Section 2.5.2 of Chapter 2.

7.6.1 A Simple Hypothetical Example

Suppose there are three previous trials. Let

(S – P)_1 = d1 (S – P)_h1
(S – P)_2 = d2 (S – P)_h2
(S – P)_3 = d3 (S – P)_h3

where 0 ≤ di ≤ 1 for i = 1, 2, 3, and the di's are the rates of discounting; (S – P)_hi denotes the effect size of historical study i, and (S – P)_i the corresponding discounted effect size. Accordingly, these studies are discounted by (1 – d1)100%, (1 – d2)100%, and (1 – d3)100%, respectively. After discounting, we then pool the three studies by taking the average, as given in the following:

(S – P)_c = [d1 (S – P)_h1 + d2 (S – P)_h2 + d3 (S – P)_h3] / 3

So, the effect size is the average of the effects of the individual studies after discounting. The effect size may be estimated by the weighted average (see Section 7.3) of the individual estimates with discounting.

How do we choose d1, d2, and d3? We consider a simplistic situation for illustration purposes. Let us assume that the three studies were identical in every aspect, except for the time when the studies were conducted. Suppose that studies 1, 2, and 3 were conducted 5, 10, and 15 years ago, respectively (see Table 7.1). If we decide to discount study 1 by 10% (so, d1 = 0.9), then we would proportionally discount studies 2 and 3 by 20% and 30%, respectively. Note that the determination of discounting for study 1 is subjective. However, once that is decided, a logical and simple way is to discount the other two studies proportionally.
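The proportional-discounting scheme just described can be sketched as follows; the discount rates follow the text, while the (S – P) estimates are hypothetical:

```python
# Proportional discounting by study age, then simple pooling (Section 7.6.1).
years_ago = [5, 10, 15]   # studies 1, 2, 3
d1 = 0.9                  # subjective choice: 10% discount for study 1

# Discount the other studies proportionally to elapsed time,
# giving 20% and 30% discounts (d = 0.9, 0.8, 0.7).
d = [1 - (1 - d1) * (t / years_ago[0]) for t in years_ago]

effects = [5.0, 5.2, 4.8]  # hypothetical (S - P) estimates per study
pooled = sum(di * ei for di, ei in zip(d, effects)) / len(effects)
```

Only the discount for the most recent study is chosen subjectively; the rest follow mechanically, which keeps the procedure transparent.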

When two or more factors might affect the constancy assumption, each factor can be assessed individually, and the composite discounting may then be calculated. This will be shown in the example in the next subsection.

7.6.2 Anti-Infective Example

This example contrasts two approaches that deal with discounting to determine the effect size of an NI trial for Clostridium difficile infection, which induces severe diarrhea in patients compromised by antibiotic usage and other underlying disease conditions. Vancomycin is chosen as the active control in the current NI trial example. The study objective is to show the efficacy of an experimental treatment as compared to putative placebo (i.e., ε = 1; see Section 2.4 in Chapter 2); therefore, the NI margin is determined using the estimated effect size in the NI trial with discounting.

There is limited information on placebo-controlled trials using vancomycin in the literature. Therefore, in this example, two large, phase 3, randomized, multicenter, double-blind, controlled studies, referred to as studies 301 and 302, comparing vancomycin to tolevamer were used to estimate the effect of vancomycin treatment over tolevamer (Weiss 2009). These studies were completed during March 2005 through August 2007 and were originally designed to demonstrate that tolevamer is noninferior to vancomycin using an NI margin of 15%; however, tolevamer was found to be inferior to vancomycin, and further development of tolevamer was stopped (Weiss 2009). In this example, tolevamer was assumed to be no worse than placebo and was considered as a placebo in determining the effect size for designing future NI trials.

TABLE 7.1

These two studies (301 and 302) utilized vancomycin 125 mg four times a day (q.i.d.) for 10 days and tolevamer 3 g three times a day (t.i.d.) for 14 days. An initial 9 g loading dose was utilized among the tolevamer-treated patients. The results of these studies have already been published or presented at conferences (e.g., Bouza et al. 2008; Louie et al. 2007; U.S. FDA 2011; Optimer 2011). These studies were originally designed as phase 3, multicenter, randomized, double-blind, parallel studies with patients enrolled from the United States, Canada, Europe, or Australia. Patients were randomized (2:1:1) to tolevamer (3 g t.i.d., 14 days), vancomycin (125 mg q.i.d., 10 days), or metronidazole (375 mg q.i.d., 10 days). Note that the metronidazole arm is not used in this example.

Clinical success was used as the outcome in those studies and was defined as resolution of diarrhea and absence of severe abdominal discomfort due to Clostridium difficile-associated diarrhea (CDAD) on day 10. This example considers two different metrics to evaluate the treatment effect, namely, risk difference and odds ratio. The estimated individual treatment effects, as well as the pooled treatment effect, with the associated 95% confidence intervals, are summarized in Table 7.2.

Comparing studies 301 and 302 to the recent NI trials, it appears that the strains of C. difficile and the susceptible populations are likely to be similar. However, there may be a few differences compared to recent trials in CDAD with respect to (1) entry criteria, (2) definitions of clinical success, (3) emerging resistance to vancomycin, and (4) other factors.

The proportions of patients with severe symptoms differed between the historical and current NI trials. Patients enrolled in Study 301 had more severe symptoms than those enrolled in Study 302, in terms of the percentage of patients with severe disease at baseline (32% versus 25%).

Study 301 reported a 37% dropout rate, where the dropouts included nonresponse, death, loss to follow-up, voluntary withdrawal, etc. (Louie et al. 2007; U.S. FDA 2011; Optimer 2011). The dropout rate in Study 302 was not available; it is assumed to be 44% in this example to illustrate the method. Recent NI trials have enrolled patients with baseline disease severity in the range of 35%–39%, with dropout rates less than 10% (see Table 7.3). For illustration of the DatP approach, disease severity (in terms of the proportion of patients with severe symptoms) and dropout rate are used in the determination of discounting.

For the PatD approach with a 20% discounting, the effect size in the current NI trial is estimated at 0.8 × (lower limit of the 95% confidence interval), which is equal to (1) δ = 0.8 × 30.5% = 24.4% for the difference metric (i.e., S – P; see Section 4.2 in Chapter 4); and (2) r = 3.79^0.8 = 2.90 for the odds ratio metric (see Section 4.3 in Chapter 4).
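The PatD arithmetic above can be reproduced directly; the inputs are the pooled lower 95% confidence limits from Table 7.2 (30.5% for the risk difference, 3.79 for the odds ratio):

```python
# PatD: pool first (meta-analysis), then apply a 20% discount.
discount = 0.8  # retain 80% of the pooled lower confidence limit

delta = discount * 30.5   # difference metric: 24.4 (percentage points)
r = 3.79 ** discount      # odds ratio metric, discounted on the log scale
```

Note that for the odds ratio the discount is applied as a power, i.e., multiplicatively on the log scale, which is why 3.79 is raised to 0.8 rather than multiplied by it.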

An alternative approach is to discount each individual study first and then pool (DatP). To illustrate this approach, each individual study was separately discounted based on disease severity and dropout rate, and the overall discounting was then calculated, as shown in Table 7.4. For example, assuming disease severity of about 39% (see Table 7.3) for the current NI trials and using a 5% discounting for study 301, study 302 should then be discounted by 10% (= 5% × 14%/7%). Similarly, assuming a dropout rate of 10% (see Table 7.3) for the current NI trials and using a 6% discounting for study 301, study 302 should then be discounted by 7.6% (= 6% × 34%/27%). The overall discounting for studies 301 and 302 is 10.7% and 16.84%, respectively, as shown in Table 7.4. Using the lower limit of the 95% confidence interval for the absolute difference in each study in Table 7.2, the effect size in the current NI trial is estimated at (1) δ = 0.5 × [0.8930(25.8%) + 0.8316(29.9%)] = 24.0% for the difference metric (i.e., S – P; see Section 4.2 in Chapter 4); and (2) r = (3.04^0.8930 × 3.53^0.8316)^0.5 = 2.76 for the odds ratio metric (see Section 4.3 in Chapter 4).
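The DatP arithmetic can be sketched likewise, using the per-study lower 95% confidence limits from Table 7.2 and the composite retention factors (1 minus the overall discounting) from Table 7.4:

```python
import math

# DatP: discount each study first, then pool across the two studies.
w301, w302 = 0.8930, 0.8316          # retained fractions after composite discounting
lo_diff = {"301": 25.8, "302": 29.9}  # lower limits, risk difference (%)
lo_or = {"301": 3.04, "302": 3.53}    # lower limits, odds ratio

# Difference metric: simple average of the discounted limits (about 24.0%).
delta = 0.5 * (w301 * lo_diff["301"] + w302 * lo_diff["302"])

# Odds ratio metric: geometric mean of the power-discounted limits,
# i.e., averaging on the log scale.
r = math.sqrt(lo_or["301"] ** w301 * lo_or["302"] ** w302)
```

With these rounded inputs, delta comes out near 24.0 and r near the 2.76 reported in the text.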

TABLE 7.2

Clinical Success Rates: Intent-to-Treat Analysis

TABLE 7.3

Patient Severity and Dropout Rates: Intent-to-Treat Analysis

Calculations of the NI margins δ and r, with the objective of showing efficacy (i.e., ε = 1; see Section 2.4 in Chapter 2) using the PatD and DatP approaches for discounting, are summarized in Table 7.5. It should be noted that showing efficacy of an anti-infective agent is, in general, not sufficient for approval by the FDA; the study should be designed to show that a certain percentage of the control effect is preserved, based on clinical judgment. In this example, if the study objective is to show greater than 50% preservation using the difference as the metric, then the NI margins are 12.2% and 12.0% using the PatD and DatP methods, respectively. However, the clinically acceptable NI margin for this type of bacterial infection trial is 10%; in other words, the NI margin should not be larger than 10 percentage points.

TABLE 7.4

Discounting Using DatP Approach

TABLE 7.5

PatD and DatP Approaches for Discounting

7.7 Discussion

Meta-analysis has its limitations. In addition to the concerns raised by Rothmann, Wiens, and Chan (2012, 81-82) (see Section 7.4), two well-known major problems with meta-analysis are given by Hung, Wang, and O’Neill (2009): (1) publication bias, due to the fact that negative studies are rarely published in the literature, and (2) how to weigh each study in the meta-analysis. Exclusion of negative studies may lead to overestimation of the treatment effect. Using an invalid FEM (see Section 7.3) in the meta-analysis can seriously underestimate the variance of the treatment effect estimate. On the other hand, the REM (see Section 7.4) may give similar weights to all studies regardless of sample size, which is undesirable when the sample sizes vary greatly (Hung, Wang, and O’Neill 2009).

The DatP approach proposed in Section 7.6 provides an alternative, especially when the number of historical studies is small. Although the discussion of the DatP approach in Section 7.6.2 focuses on the fixed-margin method, the synthesis method (see Section 5.4 in Chapter 5) may also be used; the details are omitted here.

References

Bartolucci AA and Hillegass WB (2010). Overview, Strengths, and Limitations of Systematic Reviews and Meta-Analyses, in Evidence-Based Practice: Toward Optimizing Clinical Outcomes, eds. Chiappelli F, Caldeira Brant XM, Neagos N, Oluwadara OO, and Ramchandani MH. Berlin Heidelberg: Springer.

Bouza E, Dryden M, Mohammed R, Peppe J, Chasan-Taber S, Donovan J, Davidson D, and Short G (2008). Results of a Phase III Trial Comparing Tolevamer, Vancomycin and Metronidazole in Patients with Clostridium Difficile-Associated Diarrhea. Poster Abstract Number: O464. 18th European Congress of Clinical Microbiology and Infectious Diseases, Barcelona, Spain, April 19-22, 2008.

Brittain EH, Fay MP, and Follmann DA (2012). A Valid Formulation of the Analysis of Noninferiority Trials Under Random-Effects Meta-analysis. Biostatistics, 13(4):637-649.

DerSimonian R and Laird N (1986). Meta-analysis in Clinical Trials. Controlled Clinical Trials, 7:177-188.

Follmann DA and Proschan MA (1999). Valid Inference in Random-Effects Meta-Analysis. Biometrics, 55:732-737.

Hung HMJ, Wang S-J, and O’Neill R (2009). Challenges and Regulatory Experiences with Non-Inferiority Trial Design Without Placebo Arm. Biometrical Journal, 51:324-334.

International Conference on Harmonization (ICH) E10 Guideline (2001). Choice of Control Groups in Clinical Trials. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM073139.pdf (Accessed: September 27, 2012).

Iyengar S and Greenhouse JB (1988). Selection Models and the File Drawer Problem. Statistical Science, 3:109-135.

Khan KS, Kunz R, Kleijnen J, and Antes G (2003). Five Steps to Conducting a Systematic Review. Journal of the Royal Society of Medicine, 96:118-121.

Louie TJ, Gerson M, Grimard D, Johnson S, Poirier A, Weiss K et al. (2007). Results of a Phase III Study Comparing Tolevamer, Vancomycin and Metronidazole in Clostridium Difficile-Associated Diarrhea (CDAD), in Program and Abstracts of the 47th Interscience Conference on Antimicrobial Agents and Chemotherapy (ICAAC); September 17-20, 2007; Chicago, IL. Washington, DC: ASM Press; Abstract K-4259.

Muthukumarana S and Tiwari RC (2012). Meta-analysis Using Dirichlet Process. Statistical Methods in Medical Research. http://smm.sagepub.com/content/early/2012/07/16/ (Accessed: July 24, 2012).

Ng T-H and Valappil T (2011). Discounting and Pooling of Historical Data in Noninferiority Clinical Trials. Unpublished manuscript.

O’Gorman CS, Macken AP, Cullen W, Saunders J, Dunne C, and Higgins MF (2013). What Are the Differences between a Literature Search, a Literature Review, a Systematic Review, and a Meta-analysis? And Why Is a Systematic Review Considered to Be So Good? Irish Medical Journal, 106(2):8-10. http://ulir.ul.ie/handle/10344/3011 (Accessed: September 7, 2013).

Optimer Pharmaceuticals, Inc. (2011). Dificid™ (Fidaxomicin Tablets) for the Treatment of Clostridium Difficile Infection (CDI), Also Known as Clostridium Difficile-Associated Diarrhea (CDAD), and for Reducing the Risk of Recurrence when Used for Treatment of Initial CDI, NDA 201699: Anti-Infective Drugs Advisory Committee Meeting Briefing Document, April 5, 2011. http://www.fda.gov/downloads/AdvisoryCommittees/CommitteesMeetingMaterials/Drugs/Anti-InfectiveDrugsAdvisoryCommittee/UCM249354.pdf (Accessed: August 16, 2013).

Rothmann MD, Wiens BL, and Chan ISF (2012). Design and Analysis of Non-Inferiority Trials. Boca Raton, FL: Chapman & Hall/CRC.

Schumi J and Wittes JT (2011). Through the Looking Glass: Understanding Noninferiority. Trials, 12:106. http://www.trialsjournal.com/content/12/1/106 (Accessed: August 25, 2013).

U.S. Food and Drug Administration (2011). Fidaxomicin for the Treatment of Clostridium Difficile-Associated Diarrhea (CDAD): FDA Briefing Document for Anti-Infective Drugs Advisory Committee Meeting, April 5, 2011. http://www.fda.gov/downloads/AdvisoryCommittees/CommitteesMeetingMaterials/Drugs/Anti-InfectiveDrugsAdvisoryCommittee/UCM249353.pdf (Accessed: August 16, 2013).

Weiss K (2009). Toxin-Binding Treatment for Clostridium difficile: A Review Including Reports of Studies with Tolevamer. International Journal of Antimicrobial Agents, 33:4-7.

Ziegler S, Koch A, and Victor N (2001). Deficits and Remedy of the Standard Random-Effects Methods in Meta-analysis. Methods of Information in Medicine, 40:148-155.