
Bayesian t-test hypothesis testing for two independent groups


This method is based on the book "Bayesian Cognitive Modeling: A Practical Course" by Michael Lee and Eric-Jan Wagenmakers. A published scientific article is also available here. Some of the code has been changed to make the method easier to apply. If you want to learn more about the model and the code, read the book.

The data in both groups should be approximately normally distributed.

The procedure is carried out using R and OpenBUGS.
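Since the model assumes normally distributed data within each group, a quick informal check before fitting can be useful. A minimal sketch in R (using the same group1 and group2 vectors as in the Input section below; shapiro.test, qqnorm and qqline are base-R functions):

group1 = c(49,13,50,64,21,23,15,7,32,8,17,15,19,16,33)
group2 = c(41,20,53,41,20,44,31,24,24,44,15,35,32,25,35)
shapiro.test(group1)            # a large p-value gives no strong indication of non-normality
shapiro.test(group2)
qqnorm(group1); qqline(group1)  # visually, the points should roughly follow the line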

Input

setwd("/the/directory/for/both/of/your/files/") #this will help the program find the location of the model
group1 = c(49,13,50,64,21,23,15,7,32,8,17,15,19,16,33) #your set of values for group 1
group2 = c(41,20,53,41,20,44,31,24,24,44,15,35,32,25,35) #your set of values for group 2
openbugsmodel = "Ttest_2.txt" #the file that contains the OpenBUGS model code (see the end of this page)
priorforh1 = dcauchy(0) #prior for the case of delta<>0
priorforh2 = 2*dcauchy(0) #prior for the case of delta<0
priorforh3 = 2*dcauchy(0) #prior for the case of delta>0
#Advanced input
iterations = 10000 #Don't change these unless you know what you're doing
burnin = 1000 #Don't change these unless you know what you're doing

Set the first three lines according to your setup and data. You also need to point openbugsmodel at the file that contains the OpenBUGS model, which you can find at the end of this page. You may change the priors, but remember that the priors for hypothesis 2 and hypothesis 3 must apply only to negative or positive values respectively (hence the factor of 2), and that the prior on delta inside the OpenBUGS model has to match whatever you specify here. You can also increase the iterations and the burn-in to improve your results.
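With the default input, the prior heights at δ = 0 are the quantities that the Bayes factor computation below divides by (a Savage-Dickey density ratio of prior to posterior height at δ = 0). A quick way to see what the defaults encode:

dcauchy(0)    # ~0.318, height of the two-sided Cauchy(0,1) prior at delta = 0 (hypothesis 1)
2*dcauchy(0)  # ~0.637, height of the half-Cauchy prior at delta = 0 (hypotheses 2 and 3)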

Output


The hypotheses for this test are as follows:

  • H0: δ = 0, meaning there is no difference between the two groups
  • H1: δ ≠ 0, meaning there is a difference between the two groups
  • H2: δ < 0, meaning group 2 has larger values than group 1
  • H3: δ > 0, meaning group 1 has larger values than group 2

The output produces a set of textual results such as those below, along with a prior and posterior distribution plot for each comparison. Both are useful for deciding between the hypotheses. For example, you can read off the graph whether, at the point δ = 0, the posterior (your belief after seeing the data) is higher or lower than the prior (your initial belief); the Bayes factor in favor of H0 is exactly this ratio of posterior to prior height at δ = 0 (the Savage-Dickey density ratio). If the posterior is higher than the prior at δ = 0, the data strengthen the case for the null hypothesis (H0); if it is lower, the data weaken it.

-----------Results--------------
Bayes Factor(BF10) for H1 delta<>0 over H0 delta=0:  0.5346907 
Bayes Factor(BF01) for H0 delta=0 over H1 delta<>0:  1.87024 
---
Bayes Factor(BF20) for H2 delta<0 over H0 delta=0:  0.9432402 
Bayes Factor(BF02) for H0 delta=0 over H2 delta<0:  1.060175 
---
Bayes Factor(BF30) for H3 delta>0 over H0 delta=0:  0.1256324 
Bayes Factor(BF03) for H0 delta=0 over H3 delta>0:  7.959729 
---

You determine which hypothesis is more likely based on the Bayes factor. A Bayes factor written BFxy says that the data are that many times more likely under Hx than under Hy. Based on BF10 above, the data are 0.53 times as likely under H1 as under H0; equivalently, using BF01, the data are 1.87 times more likely under H0 than under H1 (the more natural reading here). One direction of the Bayes factor (e.g. BF01) is simply the reciprocal of the other (e.g. BF10).
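Using the numbers from the example output, the reciprocal relation looks like this:

BF10 = 0.5346907  # value reported above for H1 over H0
BF01 = 1/BF10     # 1.87024, the value reported for H0 over H1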

You determine how strong the evidence is by comparing the Bayes factor against the classification table here.

In the first two cases the evidence for H0 is "barely worth mentioning". The third result (about 7.96), however, counts as "substantial" evidence in favor of H0: when we ask whether group 1 has larger values than group 2 (H3), the data provide substantial support against that hypothesis and for the null.
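If it helps, the cut-offs commonly attributed to Jeffreys (the source of labels such as "barely worth mentioning" and "substantial") can be wrapped in a small helper. This is an illustrative sketch; bf_label is a hypothetical function, not part of the original script:

bf_label = function(bf) {
  cut(bf, breaks = c(0, 1, 3, 10, 30, 100, Inf),
      labels = c("Supports the other hypothesis", "Barely worth mentioning",
                 "Substantial", "Strong", "Very strong", "Decisive"),
      right = FALSE)
}
bf_label(1.87)  # "Barely worth mentioning"
bf_label(7.96)  # "Substantial"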

When publishing, you also need to report how the results were obtained: the number of MCMC iterations, the burn-in value, and the number of chains (in this case 3).
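A minimal way to collect those numbers from the variables already defined in the script (the number of chains is hard-coded as 3 in the bugs() call):

cat("MCMC iterations:", iterations, "\n")
cat("Burn-in:", burnin, "\n")
cat("Chains: 3\n")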

The model specification is (the data are the only observed quantities):

group1[i] ~ Normal(μ + α/2, σ^2)
group2[i] ~ Normal(μ - α/2, σ^2)
σ ~ Half-Cauchy(0,1)   #a Cauchy(0,1) restricted to positive values
μ ~ Cauchy(0,1)
α = δ * σ
δ ~ Cauchy(0,1)        #prior under H1; under H0, δ = 0
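A note on the OpenBUGS code at the end of the page: the model does not use a Cauchy density directly; each Cauchy(0,1) prior is written as a normal whose precision follows a chi-square(1) distribution, which is a scale-mixture representation of exactly the same distribution. A quick simulation in R makes this plausible (illustrative only, not part of the analysis):

set.seed(1)                              # reproducibility of this check only
lambda = rchisq(100000, df=1)            # precision ~ chi-square(1)
x = rnorm(100000, mean=0, sd=1/sqrt(lambda))
quantile(x, c(0.25, 0.5, 0.75))          # approximately -1, 0, 1 ...
qcauchy(c(0.25, 0.5, 0.75))              # ... the quartiles of a Cauchy(0,1)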

R code

# clears workspace:  
rm(list=ls(all=TRUE))

#---------------INPUT DATA------------------
setwd("location/of/both/files") #this will help the program find the location of the model
group1 = c(49,13,50,64,21,23,15,7,32,8,17,15,19,16,33) #your set of values for group 1
group2 = c(41,20,53,41,20,44,31,24,24,44,15,35,32,25,35) #your set of values for group 2
openbugsmodel = "Ttest_2.txt"
priorforh1 = dcauchy(0) #prior for the case of delta<>0
priorforh2 = 2*dcauchy(0) #prior for the case of delta<0
priorforh3 = 2*dcauchy(0) #prior for the case of delta>0
#Advanced input
iterations = 10000 #Don't change these unless you know what you're doing
burnin = 1000 #Don't change these unless you know what you're doing
#-------------------------------------------

library(R2OpenBUGS)

n1 = length(group1)
n2 = length(group2)

# Rescale: standardize both groups using the mean and sd of group 1
group2 = group2-mean(group1)
group2 = group2/sd(group1)
group1 = as.vector(scale(group1))

data=list("group1", "group2", "n1", "n2") # to be passed on to OpenBUGS

inits=function()
{
	list(delta=rnorm(1,0,1),mu=rnorm(1,0,1),sigma=runif(1,0,5))
}

# Parameters to be monitored
parameters=c("delta")

# The following command calls OpenBUGS with specific options.
# For a detailed description see Sturtz, Ligges, & Gelman (2005).
samples = bugs(data, inits, parameters,
               model.file=openbugsmodel,
               n.chains=3, n.iter=iterations, n.burnin=burnin, n.thin=1,
               DIC=T,
               codaPkg=F, debug=F)
# Now the values for the monitored parameters are in the "samples" object, 
# ready for inspection.

samples$summary # overview
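
# (Optional) Convergence check: with more than one chain the summary above typically
# includes an Rhat column; values close to 1 (say below 1.1) suggest the three chains
# have mixed well. If your version of R2OpenBUGS reports it, inspect:
# samples$summary[, "Rhat"]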

# Please work through the analyses below one at a time
######################################################
# Analysis 1. H1: delta ~ Cauchy (unrestricted case)
######################################################

# Collect posterior samples across all chains:
delta.posterior  = samples$sims.list$delta  

#============ BFs based on logspline fit ===========================
library(polspline) # this package can be installed from within R
fit.posterior = logspline(delta.posterior)

# 95% credible interval:
x0=qlogspline(0.025,fit.posterior)
x1=qlogspline(0.975,fit.posterior)

posterior     = dlogspline(0, fit.posterior) # this gives the pdf at point delta = 0
prior         = priorforh1                   # height of the unrestricted prior at delta = 0
BF10          = prior/posterior
BF01          = posterior/prior
cat("-----------Results--------------\r\n")
cat("Bayes Factor(BF10) for H1 delta<>0 over H0 delta=0: ",BF10,"\r\n")
cat("Bayes Factor(BF01) for H0 delta=0 over H1 delta<>0: ",BF01,"\r\n")
cat("---\r\n")

#============ Plot Prior and Posterior  ===========================
par(cex.main = 1.5, mar = c(5, 6, 4, 5) + 0.1, mgp = c(3.5, 1, 0), cex.lab = 1.5,
    font.lab = 2, cex.axis = 1.3, bty = "n", las=1)
xlow  = -3
xhigh = 3
yhigh = 4
Nbreaks = 80
y = hist(delta.posterior, Nbreaks, prob=T, border="white", ylim=c(0,yhigh), xlim=c(xlow,xhigh), lwd=2, lty=1, ylab="Density", xlab=" ", main=" ", axes=F) 
#white makes the original histogram -- with unwanted vertical lines -- invisible
lines(c(y$breaks, max(y$breaks)), c(0,y$density,0), type="S", lwd=2, lty=1)
axis(1, at = c(-4,-3,-2,-1,0,1,2,3,4), lab=c("-4","-3","-2","-1","0", "1", "2", "3", "4"))
axis(2)
mtext(expression(delta), side=1, line = 2.8, cex=2)
#now bring in log spline density estimation:
par(new=T)
plot(fit.posterior, ylim=c(0,yhigh), xlim=c(xlow,xhigh), lty=1, lwd=1, axes=F)
points(0, dlogspline(0, fit.posterior),pch=19, cex=2)
# plot the prior:
par(new=T)
plot ( function( x ) dcauchy( x, 0, 1 ), xlow, xhigh, ylim=c(0,yhigh), xlim=c(xlow,xhigh), lwd=1, lty=1, ylab=" ", xlab = " ", axes=F) 
axis(1, at = c(-4,-3,-2,-1,0,1,2,3,4), lab=c("-4","-3","-2","-1","0", "1", "2", "3", "4"))
axis(2)
points(0, dcauchy(0), pch=19, cex=2)

###########################################################################
# Analysis 2. H1: delta ~ Cauchy^- (restricted case, negative values only)
###########################################################################

# Collect posterior samples across all chains:
delta.posterior  = samples$sims.list$delta
# selects only negative delta's:
delta.posterior  = delta.posterior[which(delta.posterior<0)]

#============ BFs based on logspline fit ===========================
fit.posterior = logspline(delta.posterior,ubound=0) # NB. note the bound

# 95% credible interval:
x0=qlogspline(0.025,fit.posterior)
x1=qlogspline(0.975,fit.posterior)

posterior     = dlogspline(0, fit.posterior) # this gives the pdf at point delta = 0
prior         = priorforh2                 # height of order--restricted prior at delta = 0
BF10          = prior/posterior
BF01          = posterior/prior
cat("Bayes Factor(BF20) for H2 delta<0 over H0 delta=0: ",BF10,"\r\n")
cat("Bayes Factor(BF02) for H0 delta=0 over H2 delta<0: ",BF01,"\r\n")
cat("---\r\n")

#============ Plot Prior and Posterior  ===========================
par(cex.main = 1.5, mar = c(5, 6, 4, 5) + 0.1, mgp = c(3.5, 1, 0), cex.lab = 1.5,
    font.lab = 2, cex.axis = 1.3, bty = "n", las=1)
xlow  = -3
xhigh = 0
yhigh = 12
Nbreaks = 80
y = hist(delta.posterior, Nbreaks, prob=T, border="white", ylim=c(0,yhigh), xlim=c(xlow,xhigh), lwd=2, lty=1, ylab="Density", xlab=" ", main=" ", axes=F) 
#white makes the original histogram -- with unwanted vertical lines -- invisible
lines(c(y$breaks, max(y$breaks)), c(0,y$density,0), type="S", lwd=2, lty=1)
axis(1, at = c(-3,-2,-1,0), lab=c("-3","-2","-1","0"))
axis(2)
mtext(expression(delta), side=1, line = 2.8, cex=2)
#now bring in log spline density estimation:
par(new=T)
plot(fit.posterior, ylim=c(0,yhigh), xlim=c(xlow,xhigh), lty=1, lwd=1, axes=F)
points(0, dlogspline(0, fit.posterior),pch=19, cex=2)
# plot the prior:
par(new=T)
plot ( function( x ) 2*dcauchy( x, 0, 1 ), xlow, xhigh, ylim=c(0,yhigh), xlim=c(xlow,xhigh), lwd=1, lty=1, ylab=" ", xlab = " ", axes=F) 
axis(1, at = c(-3,-2,-1,0), lab=c("-3","-2","-1","0"))
axis(2)
points(0, 2*dcauchy(0), pch=19, cex=2)

###########################################################################
# Analysis 3. H1: delta ~ Cauchy^+ (restricted case, positive values only)
###########################################################################

# Collect posterior samples across all chains:
delta.posterior  = samples$sims.list$delta  
# selects only positive delta's:
delta.posterior  = delta.posterior[which(delta.posterior>0)]

#============ BFs based on logspline fit ===========================
fit.posterior = logspline(delta.posterior,lbound=0) # NB. note the bound

# 95% credible interval:
x0=qlogspline(0.025,fit.posterior)
x1=qlogspline(0.975,fit.posterior)

posterior     = dlogspline(0, fit.posterior) # this gives the pdf at point delta = 0
prior         = priorforh3                   # height of order--restricted prior at delta = 0
BF10          = prior/posterior #this is the value reported as BF30; BF01 below gives its reciprocal (BF03)
BF01          = posterior/prior
cat("Bayes Factor(BF30) for H3 delta>0 over H0 delta=0: ",BF10,"\r\n")
cat("Bayes Factor(BF03) for H0 delta=0 over H3 delta>0: ",BF01,"\r\n")
cat("---\r\n")

#============ Plot Prior and Posterior  ===========================
par(cex.main = 1.5, mar = c(5, 6, 4, 5) + 0.1, mgp = c(3.5, 1, 0), cex.lab = 1.5,
    font.lab = 2, cex.axis = 1.3, bty = "n", las=1)
xlow  = 0
xhigh = 3
yhigh = 12
Nbreaks = 80
y = hist(delta.posterior, Nbreaks, prob=T, border="white", ylim=c(0,yhigh), xlim=c(xlow,xhigh), lwd=2, lty=1, ylab="Density", xlab=" ", main=" ", axes=F) 
#white makes the original histogram -- with unwanted vertical lines -- invisible
lines(c(y$breaks, max(y$breaks)), c(0,y$density,0), type="S", lwd=2, lty=1)
axis(1, at = c(0,1,2,3,4), lab=c("0", "1", "2", "3", "4"))
axis(2)
mtext(expression(delta), side=1, line = 2.8, cex=2)
#now bring in log spline density estimation:
par(new=T)
plot(fit.posterior, ylim=c(0,yhigh), xlim=c(xlow,xhigh), lty=1, lwd=1, axes=F)
points(0, dlogspline(0, fit.posterior),pch=19, cex=2)
# plot the prior:
par(new=T)
plot ( function( x ) 2*dcauchy( x, 0, 1 ), xlow, xhigh, ylim=c(0,yhigh), xlim=c(xlow,xhigh), lwd=1, lty=1, ylab=" ", xlab = " ", axes=F) 
axis(1, at = c(0,1,2,3,4), lab=c("0", "1", "2", "3", "4"))
axis(2)
points(0, 2*dcauchy(0), pch=19, cex=2)

OpenBUGS code

model
{ 
  for (i in 1:n1)
  {
    group1[i] ~ dnorm(muX,lambdaXY)
  }
  for (i in 1:n2)
  {
    group2[i] ~ dnorm(muY,lambdaXY)
  }

  lambdaXY <- pow(sigmaXY,-2)   # dnorm in BUGS is parameterized by precision, not variance

  # Cauchy(0,1) prior on delta, written as a normal with a chi-square(1) precision
  delta       ~ dnorm(0,lambdaDelta)
  lambdaDelta ~ dchisqr(1)

  # half-Cauchy(0,1) prior on the common standard deviation (hence the abs below)
  sigma    ~ dnorm(0,sigmaChi)
  sigmaChi ~ dchisqr(1)
  sigmaXY <- abs(sigma)

  mu    ~ dnorm(0,muChi)
  muChi ~ dchisqr(1)
  
  alpha <- delta*sigmaXY   # the raw difference between group means (delta rescaled by sigma)

  muX <- mu + alpha*0.5	
  muY <- mu - alpha*0.5	
}
