**Summary:** ‘Meta Analysis’, at about 200 pages, was easy to read for a graduate-level textbook and managed to be rigorous without overwhelming the reader with formulae.

Most of the book is dedicated to mathematical methods for finding collective estimates of a value, or set of values, from related independent studies. The latter half of the book describes these methods in detail and presents a large Monte Carlo comparison of them under a range of conditions, including different sample sizes per study, different numbers of studies, and different correlation coefficients of interest.

The first half was much more useful on a first reading, but the detailed descriptions and comparisons would make an excellent reference if I were preparing or performing a meta-analysis.

The ‘soft’ aspects of meta-analysis are only briefly touched upon, but several promising references are given. References on the retrieval of studies (e.g. sampling, scraping, and coverage) and on assessing studies for quality and relevance include several chapters of [1] as well as [2].

**Take-home lessons** (i.e. what I learned):

The most common method of getting a collective estimate of a parameter is to take a weighted sum of the estimates from independent studies, with weights inversely proportional to the variance of each estimate. This method makes a very questionable assumption: that all of the studies included are estimating the same parameter.
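As a sketch (with made-up numbers), the inverse-variance weighting described above is just a few lines:

```python
# Hypothetical per-study estimates and their variances (illustrative numbers).
estimates = [0.42, 0.35, 0.51, 0.38]
variances = [0.010, 0.020, 0.015, 0.008]

# Weight each estimate by the inverse of its variance:
# more precise studies count for more.
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
pooled_var = 1.0 / sum(weights)  # variance of the pooled estimate
```

The pooled variance is always smaller than the smallest single-study variance, which is the whole appeal of pooling.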

The authors call this assumption the fixed effect model. Some of the methods described explicitly use this model, but all of the methods include a test (usually based on the chi-squared distribution) to detect whether a fixed effect model is inappropriate.
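A minimal sketch of such a chi-squared heterogeneity check, using Cochran's Q statistic with made-up numbers (the 7.815 cutoff is the standard chi-squared 95th percentile for 3 degrees of freedom):

```python
estimates = [0.42, 0.35, 0.51, 0.38]
variances = [0.010, 0.020, 0.015, 0.008]
w = [1.0 / v for v in variances]
pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)

# Cochran's Q: weighted squared deviations from the pooled estimate.
# Under the fixed effect model, Q ~ chi-squared with k - 1 degrees of freedom.
Q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, estimates))
df = len(estimates) - 1
CHI2_CRIT_05 = 7.815  # chi-squared 95th percentile for df = 3
heterogeneous = Q > CHI2_CRIT_05
```

With these particular numbers the studies agree well, so the test does not reject the fixed effect model.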

Other methods, such as Olkin and Pratt's and DerSimonian and Laird's, use the more complex but more realistic random effects model. Under this model, the parameters that the individual studies estimate are related but slightly different. The collective estimate that comes out of the meta-analysis is then an estimate of a parameter one layer of abstraction above the parameters described in each individual study.
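A sketch of the DerSimonian-Laird moment estimator under the random effects model (made-up numbers, deliberately more spread out so the between-study variance comes out positive; the formulas are the standard ones, not necessarily the book's exact presentation):

```python
estimates = [0.10, 0.35, 0.60, 0.45]
variances = [0.010, 0.020, 0.015, 0.008]
w = [1.0 / v for v in variances]
sw = sum(w)
fixed = sum(wi * e for wi, e in zip(w, estimates)) / sw

# DerSimonian-Laird moment estimate of the between-study variance tau^2.
Q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
df = len(estimates) - 1
c = sw - sum(wi ** 2 for wi in w) / sw
tau2 = max(0.0, (Q - df) / c)

# Random effects weights add tau^2 to each study's variance, then re-pool.
w_re = [1.0 / (v + tau2) for v in variances]
pooled_re = sum(wi * e for wi, e in zip(w_re, estimates)) / sum(w_re)
```

Adding tau^2 to every study's variance flattens the weights, so the random effects estimate leans less heavily on the most precise studies than the fixed effect estimate does.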

There are other, still more complex models that are viable, such as mixture models or hierarchical linear models, in which each study's parameter estimate is an estimate of some combination of abstract parameters, but these are only briefly covered in ‘Meta Analysis’.

Many of the methods described use Fisher's z transformation in some way, where

z = (1/2) ln((1 + r) / (1 - r)),

which is a pretty simple transformation of the Pearson correlation coefficient r that maps [-1, 1] to (-infty, +infty), converges to normality much faster than r does, and has an approximate variance that depends only on the sample size n (pages 22-23).
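A quick sketch of the transformation and its inverse; the 1/(n − 3) variance is the standard approximation alluded to above:

```python
import math

def fisher_z(r):
    """Fisher's z transformation of a Pearson correlation r in (-1, 1)."""
    return 0.5 * math.log((1 + r) / (1 - r))  # equivalently math.atanh(r)

def inverse_fisher_z(z):
    """Map z back to a correlation in (-1, 1)."""
    return math.tanh(z)

n = 30                   # hypothetical sample size
r = 0.6                  # hypothetical correlation
z = fisher_z(r)
var_z = 1.0 / (n - 3)    # approximate variance depends only on n
```

Because var(z) does not depend on the unknown correlation, studies can be pooled on the z scale with the inverse-variance weights described earlier, then mapped back to r at the end.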

Also, apparently transforming effect sizes into correlations by treating the treatment group as a continuous variable coded 0 or 1 isn't overly problematic (pages 30-32), and it can be very useful for bringing a wider range of studies into a meta-analysis when a collective correlation coefficient is desired.
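A sketch of this kind of conversion, using the common point-biserial formula for turning a standardized mean difference d into a correlation r (this is the standard textbook conversion, not necessarily the book's exact presentation; the group sizes are made up):

```python
import math

def d_to_r(d, n1, n2):
    """Convert a standardized mean difference d into a point-biserial r.

    a = (n1 + n2)^2 / (n1 * n2) reduces to 4 when the groups are equal.
    """
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d ** 2 + a)

# Hypothetical: a treatment/control study with d = 0.8 and 50 per group.
r = d_to_r(0.8, 50, 50)
```

For d = 0.8 with equal groups this gives r of roughly 0.37, which can then be pooled alongside studies that reported correlations directly.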

I didn't find any clear beacon that said "this is where replication work is published", but I found the following promising leads:

[1] The Handbook of Research Synthesis (1994)

[2] Chalmers et al. (1981) “A method for assessing the quality of a randomized control trial.” Controlled Clinical Trials, volume 2, pages 31-49.

[3] Quality & Quantity: International Journal of Methodology

[4] Educational and Psychological Measurement (Journal)

[5] International Journal of Selection and Assessment

[6] Validity Generalization (Book)

[7] Combining Information: Statistical Issues and Opportunities for Research (Book)