ABSTRACT

An effect size is a unit-free quantitative measure of the strength of a phenomenon that is independent of sample size. When combining effect sizes in a meta-analysis, the standard error of the effect size plays an important role. As in any statistical setting, effect sizes are estimated with sampling error and may be biased unless the effect-size estimator used is appropriate for the manner in which the data were sampled and the measurements were made. The term effect size can refer to a standardized measure of effect (such as r, Cohen’s d, or the odds ratio) or to an unstandardized measure (such as the raw difference between group means). The first step in meta-analyzing a sample of studies is to describe the general distribution of effect sizes. An important moderator that has a strong influence on effect size may be considered separately, with descriptive analyses carried out on each subpopulation.
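As an illustrative sketch (not drawn from any particular study discussed here), the contrast between an unstandardized and a standardized mean difference for two independent groups can be written, using the pooled standard deviation as one common choice of standardizer, as

% Illustrative only: raw (unstandardized) versus standardized mean difference,
% assuming two independent groups with means X̄_1, X̄_2, standard deviations s_1, s_2,
% and sample sizes n_1, n_2.
\[
D = \bar{X}_1 - \bar{X}_2,
\qquad
d = \frac{\bar{X}_1 - \bar{X}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1-1)s_1^{2} + (n_2-1)s_2^{2}}{n_1+n_2-2}} .
\]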

One should always put effort into interpreting the observed effect sizes. Three important effect sizes, namely Glass’s Δ, Cohen’s d, and Hedges’ g, are used as standardized mean differences for studies measured on different scales. Not all studies found in the literature report an appropriate effect size; some may instead report a Φ-value, a P-value, a χ²-value, or some other statistic. In such cases, a transformation to a common endpoint is necessary. The sampling distribution of a correlation coefficient is somewhat skewed, especially when the population correlation is large; it is therefore conventional in meta-analysis to convert correlations to z scores using Fisher’s r-to-z transformation. For a between-groups test statistic, if the means and standard deviations of the two groups are available, g can be calculated from its definitional formula. For a study with a binary outcome, the summary statistics include the proportion of events in the case of open (single-arm) trials, and the odds ratio, risk ratio, and risk difference in the case of controlled studies.
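As a minimal sketch of the conventional forms of these conversions (assuming n paired observations for the correlation r, and two independent groups of sizes n_1 and n_2 with pooled standard deviation s_p as above), Fisher’s r-to-z transformation and the definitional formula for Hedges’ g with its small-sample correction factor J can be written as

% Sketch only: conventional textbook forms, not formulas specific to this paper.
\[
z = \tfrac{1}{2}\,\ln\!\frac{1+r}{1-r},
\qquad
SE(z) = \frac{1}{\sqrt{n-3}},
\]
\[
g = J \cdot \frac{\bar{X}_1 - \bar{X}_2}{s_p},
\qquad
J \approx 1 - \frac{3}{4(n_1+n_2)-9}.
\]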