The following is a report on the reproduction of the statistical work in the paper "The use of informative priors and Bayesian updating: implications for behavioral research" by Charlotte O. Brand et al.

The original paper was accepted for publication by Meta-Psychology (https://open.lnu.se/index.php/metapsychology), a journal focused on methodology and reproductions of existing work. This report continues my attempt to establish a standard for replication reports according to the Psych Data Standards found at https://github.com/psych-ds/psych-DS

"Informative priors" is a simulation study that examines the differences between analysis methods applied to a collection of simple synthetic datasets. In each of 60 datasets there are 80 subjects who answer 25 questions each. The response is the number of the 25 questions answered correctly; the covariates are the sex of the respondent and the condition under which they were put (a neutral control or a stereotype threat). Each respondent also had a hidden latent variable, his or her aptitude: the log-odds of getting any one question correct under neutral conditions. Each dataset had 40 men and 40 women, and 20 respondents of each sex were subjected to each condition. Men under the stereotype threat condition enjoyed an advantage, while the remaining respondents were unaffected.

The same 60 datasets were analyzed under five different methods: ANOVA, GLMM, Bayesian GLMM, Posterior Passing, and "mega" Bayesian GLMM. The simulation was repeated under 25 different combinations of aptitude variance and stereotype threat strength. These analyses are described below as necessary.

For the computational work, I relied heavily on the code provided by the authors at https://github.com/thomasmorgan/freqbayes2. Primarily, I used the analyses.R file to extract details about the analyses. I also used main.R, a wrapper that collects arguments and passes them on to the simulation file (simulation.R) to generate the population, and that calls the analysis (analysis.R) and plotting R files.

My own version of the code for population simulation, experiment simulation, result extraction, and two of the five described analyses is included in the R code below. These are only meant to provide a shorthand for the authors' much more extensive and stable code.

#### **Analysis 1 (ANOVA)**

Going from the text alone, there was some minor difficulty in interpreting the analysis done. It's not clear whether the "effect size" mentioned refers to the interaction's F-statistic, the regression slope coefficient, or the regression t-statistic. Digging further, the analysis.R code uses the aov() function and extracts the sum of squares of the interaction as the effect size. While reproducing this analysis, I opted for the slope coefficient of the interaction (divided by 25 for normalization purposes) from the lm() function, which seemed more in line with the other analyses. I obtained results very similar to those reported.
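To make the distinction concrete, here is a minimal sketch (with stand-in data, not the paper's simulated datasets) of the two candidate "effect sizes": the interaction sum of squares from aov(), as in the authors' analysis.R, and the normalized interaction slope from lm(), as used in this report.

```r
set.seed(1)
sex  <- rep(c(0, 1), each = 20, times = 2)
cond <- rep(c(0, 1), each = 40)
y    <- rbinom(80, size = 25, prob = plogis(0.5 * sex * cond))  # scores out of 25

# Authors' choice: sum of squares of the interaction (row 3 of the aov table)
ss_int <- summary(aov(y ~ sex * cond))[[1]][3, "Sum Sq"]

# This report's choice: interaction slope, divided by 25 to land on the
# probability scale
slope_int <- coef(summary(lm(y ~ sex * cond)))["sex:cond", "Estimate"] / 25
```

The two quantities are on entirely different scales, which is why the choice matters when comparing against the other analyses.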

#### **Analysis 2 (GLMM)**

This analysis was clear and straightforward to reproduce. The version I programmed is quite slow and has some computational hiccups; the authors' version is much more stable. Both versions use the well-tested glmer() function from the lme4 package in R to conduct the analyses. As in the first analysis, any details necessary for reproduction that are missing from the text are made apparent in the supplementary code, such as the use of the Wald confidence interval.
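As a sketch of that detail (stand-in data; the variable names are mine, not the supplementary code's), the Wald interval is simply the estimate plus or minus z times the standard error taken straight from the glmer() fit:

```r
library(lme4)

set.seed(1)
sexlong  <- rep(rep(c(0, 1), each = 20, times = 2), each = 25)
condlong <- rep(rep(c(0, 1), each = 40), each = 25)
IDlong   <- rep(1:80, each = 25)
ylong    <- rbinom(80 * 25, size = 1, prob = plogis(0.3 * sexlong * condlong))

fit <- glmer(ylong ~ sexlong * condlong + (1 | IDlong), family = "binomial")
cs  <- coef(summary(fit))

# 95% Wald interval for the interaction: estimate +/- 1.96 * SE
wald_ci <- cs["sexlong:condlong", "Estimate"] +
  c(-1, 1) * qnorm(0.975) * cs["sexlong:condlong", "Std. Error"]
```

The same intervals for all fixed effects are available via confint(fit, method = "Wald").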

#### **Analysis 3 (Bayesian GLMM using MCMC)**

This analysis uses the jags.model() function from the well-tested rjags package. For reference, JAGS stands for "Just Another Gibbs Sampler". A Gibbs sampler is a standard Monte Carlo-based method for estimating parameters from otherwise intractable distributions. The sampler requires a prior distribution to operate, and a diffuse (non-informative) prior was used in order to reduce the effect of the prior to a negligible amount. Finally, the jags.model() function in this code calls a BUGS (Bayesian inference Using Gibbs Sampling) script, which is found in the "bglmm.R" file at the github link.
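For readers unfamiliar with BUGS, such a script has roughly the following shape. To be clear, this is a hypothetical sketch of mine, not the authors' bglmm.R (see the github link for the real script); note that dnorm() in BUGS takes a precision, so a tiny precision means a very diffuse prior.

```r
bglmm_sketch <- "
model {
  for (i in 1:Nobs) {
    y[i] ~ dbin(p[i], 25)                      # 25 questions per subject
    logit(p[i]) <- b0 + b_sex * sex[i] + b_cond * cond[i] +
                   b_int * sex[i] * cond[i] + u[id[i]]
  }
  for (j in 1:Nsubj) { u[j] ~ dnorm(0, tau) }  # subject random effect
  b0     ~ dnorm(0, 1.0E-6)                    # diffuse: variance 1e6
  b_sex  ~ dnorm(0, 1.0E-6)
  b_cond ~ dnorm(0, 1.0E-6)
  b_int  ~ dnorm(0, 1.0E-6)                    # interaction, the parameter of interest
  tau    ~ dgamma(0.001, 0.001)
}"
# rjags::jags.model(textConnection(bglmm_sketch), ...) would compile it
```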

#### **Analysis 4 (Posterior Passing)**

My commentary on this analysis is from a position of relative ignorance; posterior passing is a new concept to me, at least under this name. It appears to be an iterative process for updating a posterior, much as one would do as data becomes available sequentially in time, such as sports results or, in this case, a sequence of identical experiments on a common phenomenon.

Both this and the Bayesian GLMM analysis use the jags.model() function to make their inferences via Gibbs sampling. The difference between the analyses is in the BUGS script. In the "bglmm.R" script for Analysis 3, all four of the parameters (the intercept, the main effects for sex and for condition, and the interaction, respectively) are estimated using the same non-informative prior. In the "pp.R" script for Analysis 4, the first three parameters are estimated this way, but the fourth parameter, the interaction (the parameter of interest), is estimated using a prior based on the median and (inverse) variance of the interaction parameter estimates from the previous experiment.
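The update itself can be sketched with a normal approximation (my illustration, not the authors' pp.R): each experiment's posterior for the interaction becomes the next experiment's prior, combined by precision (inverse variance) weighting.

```r
set.seed(1)
true_int <- 1                             # true interaction, log-odds scale
n_exp <- 60
prior_mean <- 0; prior_prec <- 1e-6       # diffuse starting prior, as in Analysis 3

for (k in 1:n_exp) {
  est_k  <- rnorm(1, true_int, 0.5)       # stand-in per-experiment estimate
  prec_k <- 1 / 0.5^2                     # stand-in precision of that estimate
  post_prec <- prior_prec + prec_k        # precision-weighted update
  post_mean <- (prior_prec * prior_mean + prec_k * est_k) / post_prec
  prior_mean <- post_mean                 # pass the posterior forward
  prior_prec <- post_prec
}

c(post_mean, 1 / sqrt(post_prec))         # pooled estimate and its SD
```

After 60 experiments, the estimate should sit close to the true value with a much smaller standard deviation than any single experiment's.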

#### **Analysis 5 ("Mega" Bayesian GLMM)**

This analysis is very similar in structure to Analysis 3. The "mega" part refers to the fact that the results from all 60 of the experiments are fed into the Gibbs sampler together, as if they came from one very large sample of size 4800, instead of 60 distinct samples of size 80 analyzed separately. It works the same way and produces very similar results, but requires much more computational resources.

Analysis 5 also uses the jags.model() function and calls the same "bglmm.R" BUGS script as Analysis 3.
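The data handling can be sketched as follows (stand-in scores; in the real analysis the 60 experiments sharing one parameter combination would be pulled from manifestMat below): the 60 samples of 80 are stacked into one vector of 4800 subject scores before a single fit.

```r
Nexp <- 60; sampSize <- 80
scoreMat <- matrix(rbinom(Nexp * sampSize, 25, 0.5), nrow = Nexp)  # stand-in scores

y_mega    <- as.vector(t(scoreMat))      # 4800 counts, experiment by experiment
sex_mega  <- rep(rep(c(0, 1), each = 20, times = 2), times = Nexp)
cond_mega <- rep(rep(c(0, 1), each = 40), times = Nexp)
length(y_mega)  # 4800
```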

One final comment on the manuscript: two sets of effect sizes are given. However, these are the same set, just described in one location as a difference in probability from 0.5 (0, 0.12, 0.23, 0.32, 0.38), and later as a difference in log-odds from 0 (0, 0.5, 1, 1.5, 2).
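This can be checked directly: pushing the log-odds effects through the inverse-logit and subtracting the 0.5 baseline recovers the probability differences.

```r
round(plogis(c(0, 0.5, 1, 1.5, 2)) - 0.5, 2)
# 0.00 0.12 0.23 0.32 0.38
```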

### Population Generation

protoN = 250000
N = 4*protoN
protoLatent = rnorm(protoN)
sex = rep(c(0,1), each=2*protoN)   # first half of the population is sex 0, second half sex 1
latent = c(protoLatent, -protoLatent, protoLatent, -protoLatent)

### Sample generation

Nexp = 60
sampSize = 80
sampleMat = matrix(NA, nrow=Nexp, ncol=sampSize)
for(k in 1:Nexp)
{
  # 20 of each sex for condition 0 (columns 1:40) and for condition 1 (columns 41:80)
  sampleMat[k,1:40]  = c(sample(1:(N/2),20), sample((N/2)+(1:(N/2)),20))
  sampleMat[k,41:80] = c(sample(1:(N/2),20), sample((N/2)+(1:(N/2)),20))
}

sampleIdx = rep(1:Nexp, each=5*5*20)  # 60 experiments x 25 parameter combinations x 20 repetitions

### Experiment generation

# 25 parameter combinations: 5 aptitude variation levels x 5 effect strengths
vari = rep(rep(c(0,.25,.50,.75,1), times=5, each=20), times=Nexp)
effect = rep(rep(c(0,.5,1,1.5,2), each=100), times=Nexp)           # effects on the log-odds scale
effect_prob = rep(rep(c(0,.12,.23,.32,.38), each=100), times=Nexp) # same effects on the probability scale

Niter = length(sampleIdx)
manifestMat = matrix(NA, nrow=Niter, ncol=sampSize)
for(k in 1:Niter)
{
  thisSample = sampleMat[sampleIdx[k],]
  thisVari = vari[k]
  thisEffect = effect[k]
  thisSex = sex[thisSample]
  thisCond = rep(c(0,1), each=40)
  # latent log-odds: scaled aptitude, plus the effect for men under threat
  thisLatent = latent[thisSample]*thisVari + thisSex*thisCond*thisEffect
  thisProb = exp(thisLatent) / (1 + exp(thisLatent))
  thisManifest = rbinom(n=80, size=25, prob=thisProb)
  manifestMat[k,] = thisManifest
}

################

#### ANOVA Analysis

sex = rep(c(0,1), each=20, times=2)   # within-experiment sex indicator
cond = rep(c(0,1), each=40)           # condition indicator

Niter = 30000   # one analysis per simulated dataset
ESE1 = rep(NA,Niter)  # 1) Effect size estimate
PRR1 = rep(NA,Niter)  # 2) Positive result rate (stored as p-values; thresholded later)
UNC1 = rep(NA,Niter)  # 3) Uncertainty
EER1 = rep(NA,Niter)  # 4) Estimate error

tcrit = qt(.975, df=76)

for(k in 1:Niter)
{
  y = manifestMat[k,]
  res = summary(lm(y ~ sex*cond))
  ESE1[k] = res$coef[4,1]/25                   # interaction slope on the probability scale
  PRR1[k] = res$coef[4,4]                      # p-value of the interaction
  UNC1[k] = res$coef[4,2]/25 * 2 * tcrit       # width of the 95% confidence interval
  EER1[k] = res$coef[4,1]/25 - effect_prob[k]  # estimate minus true effect
  if(k %% 1000 == 0){print(k)}                 # progress report
}

### Results presentation (Analysis 1)

levels_vari = unique(vari)
levels_effect = unique(effect)

ESE1grid = matrix(NA, length(levels_vari), length(levels_effect))
PRR1grid = matrix(NA, length(levels_vari), length(levels_effect))
UNC1grid = matrix(NA, length(levels_vari), length(levels_effect))
EER1grid = matrix(NA, length(levels_vari), length(levels_effect))

for(i in 1:length(levels_vari))
{
  for(j in 1:length(levels_effect))
  {
    idx = which(vari == levels_vari[i] & effect == levels_effect[j])
    ESE1grid[i,j] = mean(ESE1[idx])
    PRR1grid[i,j] = mean(PRR1[idx] < .05)  # positive result rate: share of significant p-values
    UNC1grid[i,j] = mean(UNC1[idx])
    EER1grid[i,j] = mean(EER1[idx])
  }
}

round(ESE1grid,4)
round(PRR1grid,4)
round(UNC1grid,4)
round(EER1grid,4)

##### ANALYSIS 2: GLMM

library(lme4)   # provides glmer()

sex = rep(c(0,1), each=20, times=2)
cond = rep(c(0,1), each=40)
sexlong = rep(sex, each=25)     # one row per question response
condlong = rep(cond, each=25)
IDlong = rep(1:80, each=25)     # subject ID for the random intercept

Niter = 30000
ESE2 = rep(NA,Niter)  # 1) Effect size estimate
PRR2 = rep(NA,Niter)  # 2) Positive result rate (stored as p-values; thresholded later)
UNC2 = rep(NA,Niter)  # 3) Uncertainty
EER2 = rep(NA,Niter)  # 4) Estimate error

zcrit = qnorm(.975)

for(k in 1:Niter)
{
  y = manifestMat[k,]
  # expand each subject's count into 25 binary responses
  ylong = rep(0, 80*25)
  for(i in 1:80)
  {
    ylong[(i-1)*25+(1:25)] = c(rep(1,y[i]), rep(0,25-y[i]))
  }
  res = summary(glmer(ylong ~ sexlong*condlong + (1|IDlong), family="binomial"))
  ESE2[k] = res$coef[4,1]               # interaction estimate on the log-odds scale
  PRR2[k] = res$coef[4,4]               # p-value of the interaction
  UNC2[k] = res$coef[4,2] * 2 * zcrit   # width of the 95% Wald interval
  EER2[k] = res$coef[4,1] - effect[k]   # estimate minus true effect
  if(k %% 1000 == 0){print(k)}          # progress report
}

### Results presentation (Analysis 2)

levels_vari = unique(vari)
levels_effect = unique(effect)

ESE2grid = matrix(NA, length(levels_vari), length(levels_effect))
PRR2grid = matrix(NA, length(levels_vari), length(levels_effect))
UNC2grid = matrix(NA, length(levels_vari), length(levels_effect))
EER2grid = matrix(NA, length(levels_vari), length(levels_effect))

for(i in 1:length(levels_vari))
{
  for(j in 1:length(levels_effect))
  {
    idx = which(vari == levels_vari[i] & effect == levels_effect[j])
    ESE2grid[i,j] = mean(ESE2[idx])
    PRR2grid[i,j] = mean(PRR2[idx] < .05)  # positive result rate: share of significant p-values
    UNC2grid[i,j] = mean(UNC2[idx])
    EER2grid[i,j] = mean(EER2[idx])
  }
}

round(ESE2grid,4)
round(PRR2grid,4)
round(UNC2grid,4)
round(EER2grid,4)

The previous replication report on Jessica Witt's work is found here: https://www.stats-et-al.com/2019/02/replication-report-signal-detection.html
