ABSTRACT

Meta-analysis of observational studies is also common in psychiatric research. Observational studies yield estimates of association that may deviate from the true underlying relationship beyond the play of chance, owing to confounding, bias, or both. Considering possible sources of heterogeneity between observational study results will therefore provide more insight than the mechanistic calculation of an overall measure of effect, which may often be biased.

Meta-analysis of prognostic variables carries a higher risk of missing studies than meta-analysis of randomized trials: prognostic studies are more difficult to identify by literature search, and most are methodologically poor. Many studies seek parsimonious prediction models by retaining only the most important prognostic factors. If a prognostic variable is continuous, the risk of an event would usually be expected to increase or decrease systematically as its level increases. The method of analysis for pooling values across several studies depends on whether the prognostic variable is binary, categorical, or continuous. Time-to-event data are analyzed using survival analysis methods, most often the log-rank test for simple comparisons or Cox regression for analysis of multiple predictor variables. Some meta-analyses consider sets of studies whose aim was to investigate many factors simultaneously in order to identify important risk factors.

Statistical power is rarely discussed in studies of diagnostic accuracy, as such studies do not compare two groups and do not formally test hypotheses. The choice of a statistical method for pooling results depends on the source of heterogeneity, especially variation in diagnostic thresholds.
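The inverse-variance pooling used throughout this kind of meta-analysis can be sketched as follows. The study estimates below are hypothetical, invented purely for illustration: each study contributes a log risk ratio and its standard error, and studies are weighted by the reciprocal of their variance (a fixed-effect sketch, not a full random-effects analysis).

```python
import math

# Hypothetical per-study results: (log risk ratio, standard error).
# These numbers are illustrative only, not from any real dataset.
studies = [
    (math.log(1.8), 0.20),
    (math.log(1.4), 0.15),
    (math.log(2.1), 0.30),
]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1.0 / se ** 2 for _, se in studies]
pooled_log_rr = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

pooled_rr = math.exp(pooled_log_rr)
ci_95 = (math.exp(pooled_log_rr - 1.96 * pooled_se),
         math.exp(pooled_log_rr + 1.96 * pooled_se))
```

Pooling is done on the log scale because log risk ratios are approximately normally distributed; the pooled estimate and its confidence limits are exponentiated back to the ratio scale at the end.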
There is also one important extra source of variation to consider in meta-analysis of diagnostic accuracy: variation introduced by changes in diagnostic threshold. Sensitivities and specificities, or positive and negative likelihood ratios, can be combined into a single summary of diagnostic performance, known as the diagnostic odds ratio. If there is any evidence that the diagnostic threshold varies between the studies, the best summary of their results will be an ROC curve rather than a single point. The simplest method of combining studies of diagnostic accuracy is to compute weighted averages of the sensitivities, specificities, or likelihood ratios. Likelihood ratios are ratios of probabilities and can be treated as risk ratios in a meta-analysis; a weighted average of the likelihood ratios can be computed using the standard Mantel–Haenszel or inverse variance methods for meta-analysis of risk ratios.
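A minimal sketch of the diagnostic odds ratio and its inverse-variance pooling, assuming hypothetical 2x2 counts (true positives, false negatives, false positives, true negatives) for each study. The DOR equals the ratio of the positive to the negative likelihood ratio, which reduces to the cross-product (tp*tn)/(fn*fp):

```python
import math

# Hypothetical 2x2 counts per study: (tp, fn, fp, tn) -- illustrative only.
studies = [(90, 10, 20, 80), (45, 15, 10, 30)]

def diagnostic_odds_ratio(tp, fn, fp, tn):
    """DOR = LR+ / LR- = (tp * tn) / (fn * fp)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)        # positive likelihood ratio
    lr_neg = (1 - sens) / spec        # negative likelihood ratio
    return lr_pos / lr_neg

# Pool on the log scale; var(log DOR) = 1/tp + 1/fn + 1/fp + 1/tn.
log_dors = [math.log(diagnostic_odds_ratio(*s)) for s in studies]
weights = [1.0 / sum(1.0 / cell for cell in s) for s in studies]
pooled_dor = math.exp(
    sum(w * d for w, d in zip(weights, log_dors)) / sum(weights)
)
```

For the first hypothetical study, sensitivity is 0.90 and specificity 0.80, giving LR+ = 4.5, LR- = 0.125, and a DOR of 36; the second study has a DOR of 9, and the pooled value falls between the two.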