A Compendium of Clean Graphs in R

[This is a guest post by Eric-Jan Wagenmakers and Quentin Gronau introducing the RGraphCompendium. Click here to see the full compendium!]

Every data analyst knows that a good graph is worth a thousand words, and perhaps a hundred tables. But how should one create a good, clean graph? In R, this task is anything but easy. Many users find it almost impossible to resist the siren song of adding grid lines, including grey backgrounds, using elaborate color schemes, and applying default font sizes that make the text much too small in relation to the graphical elements. As a result, many R graphs are an aesthetic disaster; they are difficult to parse and unfit for publication.

In contrast, a good graph obeys the golden rule: “create graphs unto others as you want them to create graphs unto you”. This means that a good graph is a simple graph, in the Einsteinian sense that a graph should be made as simple as possible, but not simpler. A good graph communicates the main message effectively, without fuss and distraction. In addition, a good graph balances its graphical and textual elements – large symbols demand an increase in line width, and these together require an increase in font size.

The graphing chaos is exacerbated by the default settings in R (and the graphical packages that it provides, such as ggplot2), which are decidedly suboptimal. For instance, the font size is often too small, and the graphical elements are not sufficiently prominent. As a result, creating a good graph in R requires a lot of tinkering, not unlike the process of editing the first draft of a novice writer.

Fortunately, many plots share the same underlying structure, and the tinkering that has led to a clean graph of time series A will generally provide useful starting values for a clean graph of time series B. To exploit the overlap in structure, however, the user needs to remember the settings that were used for the first graph. Usually, this means that the user has to recall the location of the relevant R code. Sometimes the search for this initial code can take longer than the tinkering that was required to produce a clean graph in the first place.

In order to reduce the time needed to find relevant R code, we have constructed a compendium of clean graphs in R. This compendium, available at http://shinyapps.org/apps/RGraphCompendium/index.html, can also be used for teaching or as inspiration for improving one’s own graphs. In addition, the compendium provides a selective overview of the kind of graphs that researchers often use; the graphs cover a range of statistical scenarios and feature contributions of different data analysts. We do not wish to presume the graphs in the compendium are in any way perfect; some are better than others, and overall much remains to be improved. The compendium is undergoing continual refinement. Nevertheless, we hope the graphs are useful in their current state.

As an example of what the compendium has to offer, consider the graph below. This graph shows the proportion of the popular vote as a function of the relative height of the US president against his most successful opponent. Note the large circles for the data, the thick line for the linear relation, and the large font size for the axis labels. Also, note that the line does not touch the y-axis (a subtlety that requires deviating from the default). As in the compendium, the R code that created the graph is displayed after clicking the box “Show R-code”.


# Presidential data up to and including 2008; data from Stulp et al. 2013
# rm(list=ls())
# height of president divided by height of most successful opponent:
height.ratio <- c(0.924324324, 1.081871345, 1, 0.971098266, 1.029761905,
   0.935135135, 0.994252874, 0.908163265, 1.045714286, 1.18404908,
   1.115606936, 0.971910112, 0.97752809, 0.978609626, 1,
   0.933333333, 1.071428571, 0.944444444, 0.944444444, 1.017142857,
   1.011111111, 1.011235955, 1.011235955, 1.089285714, 0.988888889,
   1.011111111, 1.032967033, 1.044444444, 1, 1.086705202,
   1.011560694, 1.005617978, 1.005617978, 1.005494505, 1.072222222,
   1.011111111, 0.983783784, 0.967213115, 1.04519774, 1.027777778,
   1.086705202, 1, 1.005347594, 0.983783784, 0.943005181, 1.057142857)

# proportion popular vote for president vs most successful opponent
# NB can be lower than .5 because popular vote does not decide election
pop.vote <- c(0.427780852, 0.56148981, 0.597141922, 0.581254292, 0.530344067,
  0.507425996, 0.526679292, 0.536690951, 0.577825976, 0.573225387,
  0.550410082, 0.559380032, 0.484823958, 0.500466176, 0.502934212,
  0.49569636, 0.516904414, 0.522050547, 0.531494442, 0.60014892,
  0.545079801, 0.604274986, 0.51635906, 0.63850958, 0.652184407,
  0.587920412, 0.5914898, 0.624614752, 0.550040193, 0.537771958,
  0.523673642, 0.554517134, 0.577511576, 0.500856251, 0.613444534,
  0.504063153, 0.617883695, 0.51049949, 0.553073235, 0.59166415,
  0.538982024, 0.53455133, 0.547304058, 0.497350649, 0.512424242,
  0.536914796)
           
#cor.test(height.ratio,pop.vote)
require(plotrix) # package plotrix is needed for the function "ablineclip"
# if the following line and the line containing "dev.off()" are executed, the plot will be saved as a png file in the current working directory
# png("Presidental.png", width = 18, height = 18, units = "cm", res = 800, pointsize = 10)
op <- par(cex.main = 1.5, mar = c(5, 6, 4, 5) + 0.1, mgp = c(3.5, 1, 0), cex.lab = 1.5 , font.lab = 2, cex.axis = 1.3, bty = "n", las=1)
plot(height.ratio, pop.vote, col="black", pch=21, bg = "grey", cex = 2,
     xlim=c(.90,1.20), ylim=c(.40,.70), ylab="", xlab="", axes=F)
axis(1)
axis(2)
reg1 <- lm(pop.vote~height.ratio)
ablineclip(reg1, lwd=2,x1 = .9, x2 = 1.2)
par(las=0)
mtext("Presidential Height Ratio", side=1, line=2.5, cex=1.5)
mtext("Relative Support for President", side=2, line=3.7, cex=1.5)
text(1.15, .65, "r = .39", cex=1.5)
# dev.off()
# For comparison, consider the default plot:
#par(op) # reset to default "par" settings
#plot(height.ratio, pop.vote) #yuk!


A more complicated example takes the same data, but uses it to plot the development of the Bayes factor, assessing the evidence for the hypothesis that taller presidential candidates attract more votes. This plot is based in part on code from Ruud Wetzels and Benjamin Scheibehenne. Note the annotations on the right side of the plot, and the subtle horizontal lines that indicate Jeffreys’ criteria for the strength of evidence. It took some time to figure out how to display the word “Evidence” in its current direction.


# rm(list=ls())
# height of president divided by height of most successful opponent:
height.ratio <- c(0.924324324, 1.081871345, 1, 0.971098266, 1.029761905, 0.935135135, 0.994252874, 0.908163265, 1.045714286, 1.18404908, 1.115606936, 0.971910112, 0.97752809, 0.978609626, 1, 0.933333333, 1.071428571, 0.944444444, 0.944444444, 1.017142857, 1.011111111, 1.011235955, 1.011235955, 1.089285714, 0.988888889, 1.011111111, 1.032967033, 1.044444444, 1, 1.086705202, 1.011560694, 1.005617978, 1.005617978, 1.005494505, 1.072222222, 1.011111111, 0.983783784, 0.967213115, 1.04519774, 1.027777778, 1.086705202, 1, 1.005347594, 0.983783784, 0.943005181, 1.057142857)
# proportion popular vote for president vs most successful opponent
pop.vote <- c(0.427780852, 0.56148981, 0.597141922, 0.581254292, 0.530344067, 0.507425996, 0.526679292, 0.536690951, 0.577825976, 0.573225387, 0.550410082, 0.559380032, 0.484823958, 0.500466176, 0.502934212, 0.49569636, 0.516904414, 0.522050547, 0.531494442, 0.60014892, 0.545079801, 0.604274986, 0.51635906, 0.63850958, 0.652184407, 0.587920412, 0.5914898, 0.624614752, 0.550040193, 0.537771958, 0.523673642, 0.554517134, 0.577511576, 0.500856251, 0.613444534, 0.504063153, 0.617883695, 0.51049949, 0.553073235, 0.59166415, 0.538982024, 0.53455133, 0.547304058, 0.497350649, 0.512424242, 0.536914796)
## now calculate BF sequentially; two-sided test
library("hypergeo")
BF10.HG.exact = function(n, r)
{
#Jeffreys' test for whether a correlation is zero or not
#Jeffreys (1961), pp. 289-292
#Note that if the means are subtracted, n needs to be replaced by n-1
  hypgeo = hypergeo((.25+n/2), (-.25+n/2), (3/2+n/2), r^2)
  BF10 = ( sqrt(pi) * gamma(n/2+1) * (hypgeo) ) / ( 2 * gamma(3/2+n/2) )
  return(as.numeric(BF10))
}
BF10 <- array()
BF10[1]<-1
BF10[2]<-1
for (i in 3:length(height.ratio))
{
  BF10[i] <- BF10.HG.exact(n=i-1, r=cor(height.ratio[1:i],pop.vote[1:i]))
}
# We wish to plot this Bayes factor sequentially, as it unfolds as more elections become available:
#============ Plot log Bayes factors  ===========================
par(cex.main = 1.3, mar = c(4.5, 6, 4, 7)+.1, mgp = c(3, 1, 0), #bottom, left, top, right
  cex.lab = 1.3, font.lab = 2, cex.axis = 1.3, las=1)
xhigh <- 60
plot(log(BF10), xlim=c(1,xhigh), ylim=c(-1*log(200),log(200)), xlab="", ylab="", cex.lab=1.3,cex.axis=1.3, las =1, yaxt="n", bty = "n", type="p", pch=21, bg="grey")

labelsUpper=log(c(100,30,10,3,1))
labelsLower=-1*labelsUpper
criticalP=c(labelsLower,0,labelsUpper)
for (idx in 1:length(criticalP))
{
  abline(h=criticalP[idx],col='darkgrey',lwd=1,lty=2)
}
abline(h=0)
axis(side=4, at=criticalP,tick=T,las=2,cex.axis=1, labels=F)
axis(side=4, at=labelsUpper+.602, tick=F, cex.axis=1, labels=c("Extreme","Very strong", "Strong","Moderate", "Anecdotal"))
axis(side=4, at=labelsLower-.602,tick=F, cex.axis=1, labels=c("Extreme","Very strong", "Strong","Moderate", "Anecdotal"))

axis(side=2, at=c(criticalP),tick=T,las=2,cex.axis=1,
labels=c("1/100","1/30","1/10","1/3","1","", "100","30","10","3",""))
 
mtext(expression(BF[1][0]), side=2, line=2.5, las=0, cex=1.3)
grid::grid.text("Evidence", 0.97, 0.5, rot = 270, gp=grid::gpar(cex=1.3))
mtext("No. of Elections", side=1, line=2.5, las=1, cex=1.3)

arrows(20, -log(10), 20, -log(100), length=.25, angle=30, code=2, lwd=2)
arrows(20, log(10), 20, log(100), length=.25, angle=30, code=2, lwd=2)
text(25, -log(70), "Evidence for H0", pos=4, cex=1.3)
text(25, log(70), "Evidence for H1", pos=4, cex=1.3)


A final example is borrowed from the graphs in JASP (http://jasp-stats.org), a free and open-source statistical software program with a GUI not unlike that of SPSS. In contrast to SPSS, JASP also includes Bayesian hypothesis tests, the results of which are summarized in graphs such as the one below.


.plotPosterior.ttest <- function(x= NULL, y= NULL, paired= FALSE, oneSided= FALSE, iterations= 10000, rscale= "medium", lwd= 2, cexPoints= 1.5, cexAxis= 1.2, cexYlab= 1.5, cexXlab= 1.5, cexTextBF= 1.4, cexCI= 1.1, cexLegend= 1.4, lwdAxis= 1.2){
   
    library(BayesFactor)
   
    if(rscale == "medium"){
        r <- sqrt(2) / 2
    }
    if(rscale == "wide"){
        r <- 1
    }
    if(rscale == "ultrawide"){
        r <- sqrt(2)
    }
    if(mode(rscale) == "numeric"){
        r <- rscale
    }
   
    if(oneSided == FALSE){
        nullInterval <- NULL
    }
    if(oneSided == "right"){
        nullInterval <- c(0, Inf)
    }
    if(oneSided == "left"){
        nullInterval <- c(-Inf, 0)
    }
   
    # sample from delta posterior
    samples <- BayesFactor::ttestBF(x=x, y=y, paired=paired, nullInterval= nullInterval, posterior = TRUE, iterations = iterations, rscale= r)
   
    delta <- samples[,"delta"]
   
    # fit density estimator
    fit.posterior <-  logspline::logspline(delta)
   
    # density function posterior
    dposterior <- function(x, oneSided= oneSided, delta= delta){
        if(oneSided == FALSE){
            k <- 1
            return(k*logspline::dlogspline(x, fit.posterior))
        }
        if(oneSided == "right"){
            k <- 1 / (length(delta[delta >= 0]) / length(delta))
            return(ifelse(x < 0, 0, k*logspline::dlogspline(x, fit.posterior)))
        }
        if(oneSided == "left"){
            k <- 1 / (length(delta[delta <= 0]) / length(delta))
            return(ifelse(x > 0, 0, k*logspline::dlogspline(x, fit.posterior)))
        }  
    }  
   
    # pdf cauchy prior
    dprior <- function(delta,r, oneSided= oneSided){
        if(oneSided == "right"){
            y <- ifelse(delta < 0, 0, 2/(pi*r*(1+(delta/r)^2)))
            return(y)
        }
        if(oneSided == "left"){
            y <- ifelse(delta > 0, 0, 2/(pi*r*(1+(delta/r)^2)))
            return(y)
        }   else{
            return(1/(pi*r*(1+(delta/r)^2)))
        }
    }
   
    # set limits plot
    xlim <- vector("numeric", 2)
    if(oneSided == FALSE){
        xlim[1] <- min(-2, quantile(delta, probs = 0.01)[[1]])
        xlim[2] <- max(2, quantile(delta, probs = 0.99)[[1]])
    }
    if(oneSided == "right"){
        xlim[1] <- min(-2, quantile(delta[delta >= 0], probs = 0.01)[[1]])
        xlim[2] <- max(2, quantile(delta[delta >= 0], probs = 0.99)[[1]])
    }
    if(oneSided == "left"){
        xlim[1] <- min(-2, quantile(delta[delta <= 0], probs = 0.01)[[1]])
        xlim[2] <- max(2, quantile(delta[delta <= 0], probs = 0.99)[[1]])
    }
   
    ylim <- vector("numeric", 2)
    ylim[1] <- 0
    ylim[2] <- max(dprior(0,r, oneSided= oneSided), 1.28*max(dposterior(x= delta, oneSided= oneSided, delta=delta)))
   
    # calculate position of "nice" tick marks and create labels
    xticks <- pretty(xlim)
    yticks <- pretty(ylim)
    xlabels <- formatC(pretty(xlim), 1, format= "f")
    ylabels <- formatC(pretty(ylim), 1, format= "f")
   
    # 95% credible interval:
    if(oneSided == FALSE){
        CIlow <- quantile(delta, probs = 0.025)[[1]]
        CIhigh <- quantile(delta, probs = 0.975)[[1]]
    }
    if(oneSided == "right"){
        CIlow <- quantile(delta[delta >= 0], probs = 0.025)[[1]]
        CIhigh <- quantile(delta[delta >= 0], probs = 0.975)[[1]]
    }
    if(oneSided == "left"){
        CIlow <- quantile(delta[delta <= 0], probs = 0.025)[[1]]
        CIhigh <- quantile(delta[delta <= 0], probs = 0.975)[[1]]
    }  
   
    par(mar= c(5, 5, 7, 4) + 0.1, las=1)
    xlim <- c(min(CIlow,range(xticks)[1]), max(range(xticks)[2], CIhigh))
    plot(1,1, xlim= xlim, ylim= range(yticks), ylab= "", xlab="", type= "n", axes= FALSE)
    lines(seq(min(xticks), max(xticks),length.out = 1000),dposterior(x=seq(min(xticks), max(xticks),length.out = 1000), oneSided = oneSided, delta=delta), lwd= lwd, xlim= xlim, ylim= range(yticks), ylab= "", xlab= "")
    lines(seq(min(xticks), max(xticks),length.out = 1000), dprior(seq(min(xticks), max(xticks),length.out = 1000), r=r, oneSided= oneSided), lwd= lwd, lty=3)
   
    axis(1, at= xticks, labels = xlabels, cex.axis= cexAxis, lwd= lwdAxis)
    axis(2, at= yticks, labels= ylabels, cex.axis= cexAxis, lwd= lwdAxis)
    mtext(text = "Density", side = 2, las=0, cex = cexYlab, line= 3)
    mtext(expression(paste("Effect size", ~delta)), side = 1, cex = cexXlab, line= 2.5)
   
    points(0, dprior(0,r, oneSided= oneSided), col="black", pch=21, bg = "grey", cex= cexPoints)
    points(0, dposterior(0, oneSided = oneSided, delta=delta), col="black", pch=21, bg = "grey", cex= cexPoints)
   
    # 95% credible interval
    dmax <- optimize(function(x)dposterior(x,oneSided= oneSided, delta=delta), interval= range(xticks), maximum = TRUE)$objective # get maximum density
    yCI <- grconvertY(dmax, "user", "ndc") + 0.08
    yCIt <- grconvertY(dmax, "user", "ndc") + 0.04
    y95 <- grconvertY(dmax, "user", "ndc") + 0.1
    yCI <- grconvertY(yCI, "ndc", "user")
    yCIt <- grconvertY(yCIt, "ndc", "user")
    y95 <- grconvertY(y95, "ndc", "user")
    arrows(CIlow, yCI , CIhigh, yCI, angle = 90, code = 3, length= 0.1, lwd= lwd)
    text(mean(c(CIlow, CIhigh)), y95,"95%", cex= cexCI)
   
    text(CIlow, yCIt, bquote(.(formatC(CIlow,2, format="f"))), cex= cexCI)
    text(CIhigh, yCIt, bquote(.(formatC(CIhigh,2, format= "f"))), cex= cexCI)
   
    # enable plotting in margin
    par(xpd=TRUE)
   
    # display BF10 value
    BF <- BayesFactor::ttestBF(x=x, y=y, paired=paired, nullInterval= nullInterval, posterior = FALSE, rscale= r)
    BF10 <- BayesFactor::extractBF(BF, logbf = FALSE, onlybf = F)[1, "bf"]
    BF01 <- 1 / BF10
   
    xx <- grconvertX(0.3, "ndc", "user")
    yy <- grconvertY(0.822, "ndc", "user")
    yy2 <- grconvertY(0.878, "ndc", "user")
   
    if(BF10 >= 1000000 | BF01 >= 1000000){
        BF10t <- format(BF10, digits= 3, scientific = TRUE)
        BF01t <- format(BF01, digits= 3, scientific = TRUE)
    }
    if(BF10 < 1000000 & BF01 < 1000000){
        BF10t <- formatC(BF10,2, format = "f")
        BF01t <- formatC(BF01,2, format = "f")
    }
   
    if(oneSided == FALSE){
        text(xx, yy2, bquote(BF[10]==.(BF10t)), cex= cexTextBF)
        text(xx, yy, bquote(BF[0][1]==.(BF01t)), cex= cexTextBF)
    }
    if(oneSided == "right"){
        text(xx, yy2, bquote(BF["+"][0]==.(BF10t)), cex= cexTextBF)
        text(xx, yy, bquote(BF[0]["+"]==.(BF01t)), cex= cexTextBF)
    }
    if(oneSided == "left"){
        text(xx, yy2, bquote(BF["-"][0]==.(BF10t)), cex= cexTextBF)
        text(xx, yy, bquote(BF[0]["-"]==.(BF01t)), cex= cexTextBF)
    }
   
    # probability wheel
    if(max(nchar(BF10t), nchar(BF01t)) <= 4){
        xx <- grconvertX(0.44, "ndc", "user")
    }
    # probability wheel
    if(max(nchar(BF10t), nchar(BF01t)) == 5){
        xx <- grconvertX(0.44 +  0.001* 5, "ndc", "user")
    }
    # probability wheel
    if(max(nchar(BF10t), nchar(BF01t)) == 6){
        xx <- grconvertX(0.44 + 0.001* 6, "ndc", "user")
    }
    if(max(nchar(BF10t), nchar(BF01t)) == 7){
        xx <- grconvertX(0.44 + 0.002* max(nchar(BF10t), nchar(BF01t)), "ndc", "user")
    }
    if(max(nchar(BF10t), nchar(BF01t)) == 8){
        xx <- grconvertX(0.44 + 0.003* max(nchar(BF10t), nchar(BF01t)), "ndc", "user")
    }
    if(max(nchar(BF10t), nchar(BF01t)) > 8){
        xx <- grconvertX(0.44 + 0.004* max(nchar(BF10t), nchar(BF01t)), "ndc", "user")
    }
    yy <- grconvertY(0.85, "ndc", "user")
   
    # make sure that colored area is centered
    radius <- 0.06*diff(range(xticks))
    A <- radius^2*pi
    alpha <- 2 / (BF01 + 1) * A / radius^2
    startpos <- pi/2 - alpha/2
   
    # draw probability wheel
    plotrix::floating.pie(xx, yy,c(BF10, 1),radius= radius, col=c("darkred", "white"), lwd=2,startpos = startpos)
   
    yy <- grconvertY(0.927, "ndc", "user")
    yy2 <- grconvertY(0.77, "ndc", "user")
   
    if(oneSided == FALSE){
        text(xx, yy, "data|H1", cex= cexCI)
        text(xx, yy2, "data|H0", cex= cexCI)
    }
    if(oneSided == "right"){
        text(xx, yy, "data|H+", cex= cexCI)
        text(xx, yy2, "data|H0", cex= cexCI)
    }
    if(oneSided == "left"){
        text(xx, yy, "data|H-", cex= cexCI)
        text(xx, yy2, "data|H0", cex= cexCI)
    }
   
    # add legend
    xx <- grconvertX(0.57, "ndc", "user")
    yy <- grconvertY(0.92, "ndc", "user")
    legend(xx, yy, legend = c("Posterior", "Prior"), lty=c(1,3), bty= "n", lwd = c(lwd,lwd), cex= cexLegend)
}

set.seed(1)
.plotPosterior.ttest(x= rnorm(30,0.15), rscale=1)


The compendium contains many more examples. We hope some R users will find them convenient. Finally, if you create a clean graph in R that you believe is a candidate for inclusion in this compendium, please do not hesitate to write an email to EJ.Wagenmakers@gmail.com. Your contribution will be acknowledged explicitly, alongside the code you provided.

Eric-Jan Wagenmakers and Quentin Gronau

University of Amsterdam, Department of Psychology.

What does a Bayes factor feel like?


A Bayes factor (BF) is a statistical index that quantifies the evidence for a hypothesis, compared to an alternative hypothesis (for introductions to Bayes factors, see here, here or here).

Although the BF is a continuous measure of evidence, humans love verbal labels, categories, and benchmarks. Labels provide interpretations of the objective index – and that is both the good and the bad thing about labels. The good thing is that labels can facilitate communication (but see @richardmorey), and people crave verbal interpretations to guide their understanding of those “boring” raw numbers.


The bad thing about labels is that an interpretation should always be context dependent (“30 min.” can be a long time for a train delay, but a short time for a concert, as @CaAl said). Once a categorical system has been established, however, it is no longer context dependent.

 

These labels can also be a dangerous tool, as they implicitly introduce cutoff values (“Hey, the BF jumped over the boundary of 3. It’s not anecdotal any more, it’s moderate evidence!”). But we do not want another sacred .05 criterion! (See also Andrew Gelman’s blog post and its critical comments.) The strength of the BF is precisely its non-binary nature.

Several labels for paraphrasing the size of a BF have been suggested. The most common system seems to be the suggestion of Harold Jeffreys (1961):

Table 1: Jeffreys’ (1961) evidence categories for the Bayes factor BF_{10}.

Bayes factor BF_{10}   Label
> 100                  Extreme evidence for H1
30 – 100               Very strong evidence for H1
10 – 30                Strong evidence for H1
3 – 10                 Moderate evidence for H1
1 – 3                  Anecdotal evidence for H1
1                      No evidence
1/3 – 1                Anecdotal evidence for H0
1/10 – 1/3             Moderate evidence for H0
1/30 – 1/10            Strong evidence for H0
1/100 – 1/30           Very strong evidence for H0
< 1/100                Extreme evidence for H0

 

Note: The original label for 3 < BF < 10 was “substantial evidence”. Lee and Wagenmakers (2013) changed it to “moderate”, as “substantial” already sounds too decisive. “Anecdotal” was formerly labeled “barely worth mentioning”.
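If you want these labels in R, a small helper can translate a BF_{10} into the corresponding Jeffreys category. This is only an illustrative sketch following Table 1 (the function bf_label is made up here; it is not part of any package):

bf_label <- function(BF10) {
  # cut points from Table 1, symmetric around BF10 = 1
  # (a BF10 of exactly 1, "no evidence", falls into the anecdotal-for-H1 bin here)
  cuts   <- c(0, 1/100, 1/30, 1/10, 1/3, 1, 3, 10, 30, 100, Inf)
  labels <- c("Extreme evidence for H0", "Very strong evidence for H0",
              "Strong evidence for H0", "Moderate evidence for H0",
              "Anecdotal evidence for H0", "Anecdotal evidence for H1",
              "Moderate evidence for H1", "Strong evidence for H1",
              "Very strong evidence for H1", "Extreme evidence for H1")
  labels[findInterval(BF10, cuts)]
}
bf_label(3.7)   # "Moderate evidence for H1"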

Kass and Raftery suggested a comparable classification, except that their “strong evidence” category already starts at BF > 20 (see also the Wikipedia entry).

Getting a feeling for Bayes factors

How much is a BF_{10} of 3.7? It indicates that the data are 3.7 times more likely under H_1 than under H_0, given the priors assumed in the model. Is that a lot of evidence for H_1? Or not?

Following Table 1, it can be labeled “moderate evidence” for an effect – whatever that means.

Some have argued that strong evidence, such as a BF > 10, is quite evident from eyeballing alone:

“If your result needs a statistician then you should design a better experiment.” (attributed to Ernest Rutherford)

Is that really the case? Can we just “see” it when there is an effect?

Let’s approach the topic a bit more experientially. What does such a BF look like, visually? We take the good old urn model as a first example.

Visualizing Bayes factors for proportions

Imagine the following scenario: When I give a present to my two boys (4 and 6 years old), it is not so important what it is. The most important thing is: “Is it fair?”. (And my boys are very sensitive detectors of unfairness).

Imagine you have bags with red and blue marbles. Obviously, the blue marbles are much better, so it is key to make sure that in each bag there is an equal number of red and blue marbles. Hence, for our familial harmony I should check whether reds and blues are distributed evenly or not. In statistical terms: H_0: p = 0.5, H_1: p != 0.5.

When drawing samples from the bags, the strongest evidence for an even distribution (H_0) is given when exactly the same number of red and blue marbles has been drawn. How much evidence for H_0 do I have when I draw n = 2 marbles, one red and one blue? The answer is in Figure 1, upper table, first row: the BF_{10} is 0.86 in favor of H_1, or equivalently a BF_{01} of 1.16 in favor of H_0 – i.e., anecdotal evidence for an equal distribution.

You can get these values easily with the famous BayesFactor package for R:

proportionBF(y=1, N=2, p=0.5)

 

What if I had drawn two reds instead? Then the BF would be 1.14 in favor of H_1 (see Figure 1, lower table, row 1).

proportionBF(y=2, N=2, p=0.5)

Obviously, with small sample sizes it is not possible to generate strong evidence, either for H_0 or for H_1. You need a minimal sample size to leave the region of “anecdotal evidence”. Figure 1 shows some examples of how the BF becomes more extreme with increasing sample size.

Figure 1: Marble distributions and the corresponding Bayes factors.

 

These visualizations indeed seem to indicate that for simple designs such as the urn model you do not really need a statistical test if your BF is > 10: you can just see it from looking at the data (although the “obviousness” is more pronounced for large BFs in small samples).

Maximal and minimal Bayes factors for a certain sample size

The dotted lines in Figure 2 show the maximal and the minimal BF that can be obtained for a given number of drawn marbles. The minimal BF is obtained when the sample is maximally consistent with H_0 (i.e., when exactly the same number of red and blue marbles has been drawn); the maximal BF is obtained when all drawn marbles have the same color.


Figure 2: Maximal and minimal BF for a certain sample size.
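These dotted lines can be sketched with a few lines of R. This is only a rough reconstruction, assuming the BFs are computed with proportionBF from the BayesFactor package (default prior), as in the examples above: a perfectly balanced draw gives the minimal BF_{10}, a single-color draw the maximal BF_{10}.

library(BayesFactor)
ns <- seq(2, 60, by = 2)   # even numbers of drawn marbles
minBF <- sapply(ns, function(n) extractBF(proportionBF(y = n/2, N = n, p = 0.5))$bf)
maxBF <- sapply(ns, function(n) extractBF(proportionBF(y = n,   N = n, p = 0.5))$bf)
plot(ns, log(maxBF), type = "l", lty = "dotted", ylim = range(log(c(minBF, maxBF))),
     xlab = "Number of drawn marbles", ylab = "log(BF10)")
lines(ns, log(minBF), lty = "dotted")
abline(h = 0)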

 

Figure 2 highlights two features:

  • If you have few data points, you cannot have strong evidence, either for H_1 or for H_0.
  • It is much easier to get strong evidence for H_1 than for H_0. This property depends somewhat on the choice of the prior distribution of effect sizes under H_1: if you expect very strong effects under H_1, it is easier to get evidence for H_0. Still, with every reasonable prior distribution it is easier to gather evidence for H_1.

 

Get a feeling yourself!

Here’s a Shiny widget that lets you draw marbles from the urn. Monitor how the BF evolves as you sequentially add marbles to your sample!

 

[Open app in separate window]

Teaching sequential sampling and Bayes factors


When I teach sequential sampling and Bayes factors, I bring an actual bag with marbles (or candies of two colors).

In my typical setup I ask some volunteers to test whether the bag contains the same number of marbles of each color. (The bag of course has a cover, so they cannot see the marbles.) They may sample as many marbles as they want, but each marble costs them 10 cents (an efficiency criterion: sample as much as necessary, but not more!). They should think aloud about when they have a first hunch and when they are relatively sure about the presence or absence of an effect. I use a color mixture of 2:1 – in my experience this gives a good chance to detect the difference, but it is not too obvious (some teams stop sampling and conclude “no difference”).

This exercise typically reveals the following insights (hopefully!):

  • By intuition, humans sample sequentially: when the evidence is not strong enough, they sample more data until they are sure enough about the (un)fairness of the distribution.
  • Intuitively, nobody runs a fixed-n design with an a priori power analysis.
  • Often, they stop quite soon, in the range of “anecdotal evidence”. This matches my own impression: BFs that are still in the “anecdotal” range already look quite conclusive for everyday hypothesis testing (e.g., a 2 vs. 9 distribution; BF_{10} = 2.7; see the sketch below). This might change, however, if a wrong decision is associated with higher costs in the scenario. Next time, I will try a scenario with prescription drugs that have potentially severe side effects.
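As a rough check of the 2 vs. 9 example, the BF can be computed with the same proportionBF call as before (a sketch; the exact value depends on the default prior scale):

library(BayesFactor)
proportionBF(y = 9, N = 11, p = 0.5)   # should be in the vicinity of the BF10 = 2.7 quoted above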

 

The “interocular traumatic test”

The analysis so far seems to support the “interocular traumatic test”: “when the data are so compelling that the conclusion hits you straight between the eyes” (attributed to Joseph Berkson; quoted from Wagenmakers, Verhagen, & Ly, 2014).

But the authors go on to quote Edwards et al. (1963, p. 217), who said: “…the enthusiast’s interocular trauma may be the skeptic’s random error. A little arithmetic to verify the extent of the trauma can yield great peace of mind for little cost.”

In the next visualization we will see that large Bayes factors are not always obvious.

Visualizing Bayes factors for group differences

What happens if we switch to group differences? European women have an average self-reported height of 165.8 cm, European men of 177.9 cm – a difference of 12.1 cm, with a pooled standard deviation of around 7 cm (source: European Community Household Panel; see Garcia & Quintana-Domeque, 2007; based on ~50,000 participants born between 1970 and 1980). This translates to a Cohen’s d of 1.72.

Unfortunately, this source only contains self-reported heights, which can be subject to biases (men over-report their height on average), but it was the only source I found that also reports the standard deviations within each sex. Meyer et al. (2001), however, report a similar effect size of d = 1.8 for objectively measured heights.
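To get a feeling for how such a group difference translates into a BF, you can simulate heights with the reported means and SD and run a default Bayesian t test. This is only a sketch with an assumed group size of 10 per sex (the post does not state the n underlying the plots below), so the resulting BF will vary from sample to sample:

library(BayesFactor)
set.seed(42)
n       <- 10                              # assumed group size, for illustration only
females <- rnorm(n, mean = 165.8, sd = 7)  # means and SD from the text
males   <- rnorm(n, mean = 177.9, sd = 7)
ttestBF(x = males, y = females)            # default two-sample Bayesian t test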

 

Now look at this plot. Would you say the blue lines are obviously higher than the red ones?

[Figure: individual heights plotted as blue and red lines (unsorted)]

I couldn’t say for sure. But the BF_{10} is 14.54 – “strong” evidence!

If we sort the lines by height, the effect is more visible:

[Figure: the same individual heights, sorted by height]

… and alternatively, we can plot the distributions of males’ and females’ heights:

 

 

Again, you can play around with the interactive app:

[Open app in separate window]

 

Can we get a feeling for Bayes factors?

To summarize: whether strong evidence “hits you between the eyes” depends on many things – the kind of test, the kind of visualization, and the sample size. Sometimes a BF of 2.5 seems obvious, and sometimes it is hard to spot a BF > 100 by eyeballing alone. Overall, I’m glad that we have a numeric measure of the strength of evidence and do not have to rely on eyeballing only.

Try it yourself – draw some marbles in the interactive app, or change the height difference between males and females, and calibrate your personal gut feeling with the resulting Bayes factor!


Reanalyzing the Schnall/Johnson “cleanliness” data sets: New insights from Bayesian and robust approaches


I want to present a re-analysis of the raw data from two studies that investigated whether physical cleanliness reduces the severity of moral judgments: the original study (n = 40; Schnall, Benton, & Harvey, 2008) and a direct replication (n = 208; Johnson, Cheung, & Donnellan, 2014). Both data sets are provided on the Open Science Framework. All of my analyses are based on the composite measure as dependent variable.

This re-analysis follows previous analyses by Tal Yarkoni, Yoel Inbar, and R. Chris Fraley, and is focused on one question: What can we learn from the data when we apply modern (i.e., Bayesian and robust) statistical approaches?

The complete and reproducible R code for these analyses is at the end of the post.

Disclaimer 1: This analysis assumes that the studies the data came from were internally valid. Of course the garbage-in-garbage-out principle holds. But as the original author reviewed the experimental material of the replication study and gave her OK, I assume that the replication data are as valid as the original data.

Disclaimer 2: I am not going to talk about tone, civility, or bullying here. Although these are important issues, a lot has already been written about them, including apologies from one side of the debate (not from the other, yet). For nice overviews of the debate, see for example a blog post by …, and the summary provided by Tal Yarkoni. I am completely unemotional about these data. False positives do exist, I am sure I have had my share of them, and replication is a key element of science. I do not suspect anybody of anything – I just look at the data.

That being said, let’s get to business:

Bayes factor analysis

The BF is a continuous measure of evidence for H0 or for H1; it quantifies how much more likely the data are under H1 than under H0. Typically, a BF of at least 3 is requested before one speaks of evidence (i.e., the data should be at least 3 times more likely under H1 than under H0 to speak of evidence for an effect). For an introduction to Bayes factors see here, here, or here.

Using the BayesFactor package, it is simple to compute a Bayes factor (BF) for the group comparison. In the original study, the Bayes factor against the H0, BF_{10}, is 1.08. That means the data are 1.08 times more likely under the H1 (“there is an effect”) than under the H0 (“there is no effect”). As the BF is virtually 1, the data occurred about equally likely under both hypotheses.

What researchers actually are interested in is not p(Data | Hypothesis), but rather p(Hypothesis | Data). Using Bayes’ theorem, the former can be transformed into the latter by assuming prior probabilities for the hypotheses. The BF then tells one how to update these prior probabilities after having seen the data. Given a BF very close to 1, one does not have to update one’s priors at all. If one holds, for example, equal priors (p(H1) = p(H0) = .5), these probabilities do not change after having seen the data of the original study. With these data it is not possible to decide between H0 and H1; being so close to 1, this BF is not even “anecdotal evidence” for H1 (although the original study was just skirting the boundary of significance, p = .06).

For the replication data, the situation looks different. The Bayes factor BF_{10} here is 0.11. That means H0 is (1 / 0.11) = 9 times more likely than H1. A BF_{10} of 0.11 would be labelled “moderate to strong evidence” for H0. If you had equal priors before, you should update your belief for H1 to 10% and for H0 to 90% (Berger, 2006).
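These 10% / 90% figures follow directly from Bayes’ theorem in its odds form; a minimal sketch:

BF10 <- 0.11                        # replication study
prior.odds     <- 1                 # equal priors: p(H1)/p(H0) = .5/.5
posterior.odds <- BF10 * prior.odds
(p.H1 <- posterior.odds / (1 + posterior.odds))   # ~ .10
(p.H0 <- 1 - p.H1)                                # ~ .90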

To summarize, neither the original nor the replication study show evidence for H1. In contrast, the replication study even shows quite strong evidence for H0.

A more detailed look at the data using robust statistics

Parametric tests, like the ANOVA employed in the original and the replication study, rest on assumptions. Unfortunately, these assumptions are very rarely met (Micceri, 1989), and ANOVA etc. are not as robust against these violations as many textbooks suggest (Erceg-Hurn & Mirosevich, 2008). Fortunately, over the last 30 years robust statistical methods have been developed that do not rest on such strict assumptions.

In the presence of violations and outliers, these robust methods have much lower Type I error rates and/or higher power than classical tests. Furthermore, a key advantage of these methods is that they are designed to be about equally efficient as the classical methods even when the assumptions are not violated. In a nutshell: when using robust methods, there is nothing to lose, but a lot to gain.

Rand Wilcox has pioneered many of these robust methods (see, for example, this blog post by him), and the methods are implemented in the WRS package for R (Wilcox & Schönbrodt, 2014).

Comparing the central tendency of two groups

A robust alternative to the independent-groups t test is to compare trimmed means via a percentile bootstrap. This method is robust against outliers and does not rest on parametric assumptions. Here we find a p value of .106 for the original study and p = .94 for the replication study. Hence, the same picture: no evidence against the H0.

Comparing other-than-central tendencies between two groups, a.k.a. comparing extreme quantiles between groups

When comparing data from two groups, approximately 99.6% of all psychological research compares the central tendency (that is a subjective estimate).

In some cases, however, it would be sensible to compare other parts of the distributions. For example, an intervention could have effects only on slow reaction times (RTs), but not on fast or medium RTs. Similarly, priming could have an effect only on very high responses, but not on low and average responses. Measures of central tendency might obscure or miss such patterns.

And indeed, descriptively there (only) seems to be a priming effect at the “extremely wrong” pole (large numbers on the x axis) of the original study (i.e., the black density line is higher than the red one at the “7” and “8” ratings):

[Figure: density plots of the composite ratings in the neutral (black) and clean (red) conditions, original study]

This visual difference can be tested. Here, I employed the qcomhd function from the WRS package (Wilcox, Erceg-Hurn, Clark, & Carlson, 2013). This method tests whether two samples differ at several quantiles (not only in central tendency). For an introduction to this method, see this blog post.

Here’s the result when comparing the 10th, 30th, 50th, 70th, and 90th quantile:

    q n1 n2 est.1 est.2 est.1_minus_est.2 ci.low ci.up p_crit p.value signif
1 0.1 20 20  3.86  3.15             0.712 -1.077  2.41 0.0500   0.457     NO
2 0.3 20 20  4.92  4.51             0.410 -0.341  1.39 0.0250   0.265     NO
3 0.5 20 20  5.76  5.03             0.721 -0.285  1.87 0.0167   0.197     NO
4 0.7 20 20  6.86  5.70             1.167  0.023  2.05 0.0125   0.047     NO
5 0.9 20 20  7.65  6.49             1.163  0.368  1.80 0.0100   0.008    YES

(Please note: Estimating 5 quantiles from 20 data points is not quite the epitome of precision. So treat this analysis with caution.)

As multiple comparisons are made, the Benjamini-Hochberg correction for the p value is applied. This correction gives new critical p values (column p_crit) against which the actual p values (column p.value) have to be compared. One quantile survives the correction: the 90th quantile. That means that there are fewer extreme answers in the cleanliness-priming group than in the control group.
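As a quick check, the significance decisions in the table can be reproduced by comparing each p value with its critical value (numbers taken from the output above):

p      <- c(0.457, 0.265, 0.197, 0.047, 0.008)     # column p.value
p.crit <- c(0.050, 0.025, 0.0167, 0.0125, 0.010)   # column p_crit
p <= p.crit   # only the last comparison (90th quantile) is TRUE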

This finding, of course, is purely exploratory, and as any other exploratory finding it awaits cross-validation in a fresh data set. Luckily, we have the replication data set! Let’s see whether we can replicate this effect.

The answer is: no. Not even a tendency:

    q  n1  n2 est.1 est.2 est.1_minus_est.2 ci.low ci.up p_crit p.value signif
1 0.1 102 106  4.75  4.88           -0.1309 -0.705 0.492 0.0125   0.676     NO
2 0.3 102 106  6.00  6.12           -0.1152 -0.571 0.386 0.0250   0.699     NO
3 0.5 102 106  6.67  6.61            0.0577 -0.267 0.349 0.0500   0.737     NO
4 0.7 102 106  7.11  7.05            0.0565 -0.213 0.411 0.0167   0.699     NO
5 0.9 102 106  7.84  7.73            0.1111 -0.246 0.431 0.0100   0.549     NO

Here’s a plot of the results:

[Figure: quantile comparison (qcomhd) plots for the original and the replication data]

 

Overall summary

Based on the Bayes factor analysis, neither the original nor the replication study shows evidence for the H1. The replication study actually shows moderate to strong evidence for the H0.

If anything, the original study shows some exploratory evidence that only the high end of the answer distribution (around the 90th quantile) is reduced by the cleanliness priming – not the central tendency. If one wants to interpret this effect, it would translate to: “Cleanliness primes reduce extreme morality judgements (but not average or low judgements)”. This exploratory effect, however, could not be cross-validated in the better powered replication study.

Outlook

Recently, Silberzahn, Uhlmann, Martin, and Nosek proposed “crowdstorming a data set” for cases in which a complex data set calls for different analytical approaches. Now, a simple two-group experimental design, usually analyzed with a t test, does not seem to have much complexity – still, it is interesting how different analytical approaches highlight different aspects of the data set.

And it is also interesting to see that the majority of these diverse approaches comes to the same conclusion: from this data base, we can conclude that we cannot conclude that the H0 is wrong. (This sentence, an homage to Cohen, 1990, was for my frequentist friends ;-))

And, thanks to Bayesian approaches, we can say (and even understand): There is strong evidence that the H0 is true. Very likely, there is no priming effect in this paradigm.

PS: Celebrate open science. Without open data, all of this would not be possible.

 

## (c) 2014 Felix Schönbrodt
## http://www.nicebread.de
##
## This is a reanalysis of raw data from
## - Schnall, S., Benton, J., & Harvey, S. (2008). With a clean conscience cleanliness reduces the severity of moral judgments. Psychological Science, 19(12), 1219-1222.
## - Johnson, D. J., Cheung, F., & Donnellan, M. B. (2014). Does Cleanliness Influence Moral Judgments? A Direct Replication of Schnall, Benton, and Harvey (2008). Social Psychology, 45, 209-215.


## ======================================================================
## Read raw data, provided on Open Science Framework
# - https://osf.io/4cs3k/
# - https://osf.io/yubaf/
## ======================================================================

library(foreign)
dat1 <- read.spss("raw/Schnall_Benton__Harvey_2008_Study_1_Priming.sav", to.data.frame=TRUE)
dat2 <- read.spss("raw/Exp1_Data.sav", to.data.frame=TRUE)
dat2 <- dat2[dat2[, "filter_."] == "Selected", ]
dat2$condition <- factor(dat2$Condition, labels=c("neutral priming", "clean priming"))
table(dat2$condition)

# build composite scores from the 6 vignettes:
dat1$DV <- rowMeans(dat1[, c("dog", "trolley", "wallet", "plane", "resume", "kitten")])
dat2$DV <- rowMeans(dat2[, c("Dog", "Trolley", "Wallet", "Plane", "Resume", "Kitten")])

# define shortcuts for DV in each condition
neutral <- dat1$DV[dat1$condition == "neutral priming"]
clean <- dat1$DV[dat1$condition == "clean priming"]

neutral2 <- dat2$DV[dat2$condition == "neutral priming"]
clean2 <- dat2$DV[dat2$condition == "clean priming"]


## ======================================================================
## Original analyses with t-tests/ ANOVA
## ======================================================================

# ---------------------------------------------------------------------
# Original study:

# Some descriptives ...
mean(neutral)
sd(neutral)

mean(clean)
sd(clean)

# Run the ANOVA from Schnall et al. (2008)
a1 <- aov(DV ~ condition, dat1)
summary(a1) # p = .0644

# --> everything as in original publication


# ---------------------------------------------------------------------
# Replication study

mean(neutral2)
sd(neutral2)

mean(clean2)
sd(clean2)

a2 <- aov(DV ~ condition, dat2)
summary(a2) # p = .947

# --> everything as in replication report

## ======================================================================
## Bayes factor analyses
## ======================================================================

library(BayesFactor)

ttestBF(neutral, clean, rscale=1)   # BF_10 = 1.08
ttestBF(neutral2, clean2, rscale=1) # BF_10 = 0.11

## ======================================================================
## Robust statistics
## ======================================================================


library(WRS)

# ---------------------------------------------------------------------
# robust tests: group difference of central tendency

# percentile bootstrap for comparing measures of location:
# 20% trimmed mean
trimpb2(neutral, clean)     # p = 0.106 ; CI: [-0.17; +1.67]
trimpb2(neutral2, clean2)   # p = 0.9375; CI: [-0.30; +0.33]

# median
medpb2(neutral, clean)      # p = 0.3265; CI: [-0.50; +2.08]
medpb2(neutral2, clean2)    # p = 0.7355; CI: [-0.33; +0.33]



# ---------------------------------------------------------------------
# Comparing other quantiles (not only the central tendency)

# plot of densities
par(mfcol=c(1, 2))
plot(density(clean, from=1, to=8), ylim=c(0, 0.5), col="red", main="Original data", xlab="Composite rating")
lines(density(neutral, from=1, to=8), col="black")
legend("topleft", col=c("black", "red"), lty="solid", legend=c("neutral", "clean"))

plot(density(clean2, from=1, to=8), ylim=c(0, 0.5), col="red", main="Replication data", xlab="Composite rating")
lines(density(neutral2, from=1, to=8), col="black")
legend("topleft", col=c("black", "red"), lty="solid", legend=c("neutral", "clean"))


# Compare quantiles of original study ...
par(mfcol=c(1, 2))
qcomhd(neutral, clean, q=seq(.1, .9, by=.2), xlab="Original: Neutral Priming", ylab="Neutral - Clean")
abline(h=0, lty="dotted")

# Compare quantiles of replication study
qcomhd(neutral2, clean2, q=seq(.1, .9, by=.2), xlab="Replication: Neutral Priming", ylab="Neutral - Clean")
abline(h=0, lty="dotted")

References

Berger, J. O. (2006). Bayes factors. In S. Kotz, N. Balakrishnan, C. Read, B. Vidakovic, & N. L. Johnson (Eds.), Encyclopedia of Statistical Sciences, vol. 1 (2nd ed.) (pp. 378–386). Hoboken, NJ: Wiley.

Erceg-Hurn, D. M., & Mirosevich, V. M. (2008). Modern robust statistical methods: An easy way to maximize the accuracy and power of your research. American Psychologist, 63, 591–601.

Micceri, T. (1989). The unicorn, the normal curve, and other improbable creatures. Psychological Bulletin, 105, 156–166. doi:10.1037/0033-2909.105.1.156

Schnall, S., Benton, J., & Harvey, S. (2008). With a clean conscience cleanliness reduces the severity of moral judgments. Psychological Science, 19(12), 1219–1222.

Wilcox, R. R., Erceg-Hurn, D. M., Clark, F., & Carlson, M. (2013). Comparing two independent groups via the lower and upper quantiles. Journal of Statistical Computation and Simulation, 1–9. doi:10.1080/00949655.2012.754026

Wilcox, R.R., & Schönbrodt, F.D. (2014). The WRS package for robust statistics in R (version 0.25.2). Retrieved from https://github.com/nicebread/WRS


A comment on “We cannot afford to study effect size in the lab” from the DataColada blog

In a recent post on the DataColada blog, Uri Simonsohn wrote about “We cannot afford to study effect size in the lab“. The central message is: if we want accurate effect size (ES) estimates, we need large sample sizes (he suggests four-digit n’s). As this is hardly possible in the lab, we have to use other research tools, such as online studies, archival data, or more within-subject designs.

While I agree with the main point of the post, I’d like to discuss and extend some of its conclusions. As the DataColada blog has no comments section, I’ll comment in my own blog …

 

“Does it make sense to push for effect size reporting when we run small samples? I don’t see how.”

“Properly powered studies teach you almost nothing about effect size.”

It is true that an ES estimate with n = 20 will be utterly imprecise, and reporting this ES estimate could misguide readers who give too much importance to the point estimate and do not take the huge CI into account (maybe because it has not been reported).

Still, and here’s my disagreement, I’d argue that small-n studies should also report the point estimate (along with the CI), as a meta-analysis of many imprecise small-n estimates can still give an unbiased and precise cumulative estimate. This, of course, requires that all estimates are reported, not only the significant ones (van Assen, van Aert, Nuijten, & Wicherts, 2014).

Here’s an example – we simulate a population with a true Cohen’s d of 0.8. Then we look at three scenarios: (a) a single study with n = 20 per group, (b) a single study with n = 400 per group, and (c) 20 studies with n = 20 each, which are meta-analyzed.

# simulate population with true ES d=0.8
set.seed(0xBEEF1)
library(compute.es)
library(metafor)
library(ggplot2)
X1 <- rnorm(1000000, mean=0, sd=1)
X2 <- rnorm(1000000, mean=0, sd=1) - 0.8

## Compute effect size for ...
# ... a single n=20 study
x1 <- sample(X1, 20)
x2 <- sample(X2, 20)
(t1 <- t.test(x1, x2))
(ES1 <- tes(t1$statistic, 20, 20, dig=3))

This single small-n study yields g = 1.337 [0.64, 2.034] (g is a bias-corrected estimate of d). The point estimate is quite far off, but the true ES is inside the CI.

# .. a single n=400 study
x1 <- sample(X1, 400)
x2 <- sample(X2, 400)
(t2 <- t.test(x1, x2))
(ES2 <- tes(t2$statistic, 400, 400, dig=3))

This single large-n study yields g = 0.76 [0.616, 0.903]. It is close to the true ES and has a quite narrow CI (i.e., high precision).

Now, here’s a meta-analysis of the 20 studies with n = 20 each:

# ... 20 studies with n = 20 each
dat <- data.frame()
for (i in 1:20) {
  x1 <- sample(X1, 20)
  x2 <- sample(X2, 20)
  dat <- rbind(dat, data.frame(
    study = i,
    m1i   = mean(x1),
    m2i   = mean(x2),
    sd1i  = sd(x1),
    sd2i  = sd(x2),
    n1i   = 20, n2i = 20
  ))
}

# Do a fixed-effect model meta-analysis
es <- escalc("SMD", m1i=m1i, m2i=m2i, sd1i=sd1i, sd2i=sd2i, n1i=n1i, n2i=n2i, data=dat, append=TRUE)
(meta <- rma(yi, vi, data=es, method="FE"))

The meta-analysis yields g = 0.775 [0.630; 0.920]. This CI has nearly exactly the same width as that of the n = 400 study, and a slightly different ES estimate.
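As a side note, a forest plot of the 20 simulated studies and their pooled estimate makes this gain in precision visible; a small sketch using metafor’s forest function:

forest(meta, xlab = "Hedges' g")   # one row per simulated study, diamond = pooled estimate
abline(v = 0.8, lty = "dotted")    # true effect size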

Here’s a plot of the results:

# Plot results
res <- data.frame(
n = factor(c("a) n=20", "b) n=400", "c) 20 * n=20\n(meta-analysis)"), ordered=TRUE),
point_estimate = c(ES1$d, ES2$d, meta$b),
ci.lower = c(ES1$l.d, ES2$l.d, meta$ci.lb),
ci.upper = c(ES1$u.d, ES2$u.d, meta$ci.ub)
)
ggplot(res, aes(x=n, y=point_estimate, ymin=ci.lower, ymax=ci.upper)) + geom_pointrange() + theme_bw() + xlab("") + ylab("Cohen's d") + geom_hline(yintercept=0.8, linetype="dotted", color="darkgreen")


To summarize: a single small-n study teaches us almost nothing about effect sizes – but many small-n studies do. Such meta-analyses are only possible, however, if the ES is reported.

“But just how big an n do we need to study effect size? I am about to show that the answer has four-digits.”

In Uri’s post (and the linked R code) the precision issue is approached from the power side – if you increase power, you also increase precision. But you can also directly compute the necessary sample size for a desired precision. This is called the AIPE framework (“accuracy in parameter estimation”), made popular by Ken Kelley, Scott Maxwell, and Joseph Rausch (Kelley & Maxwell, 2003; Kelley & Rausch, 2006; Maxwell, Kelley, & Rausch, 2008). The necessary functions are implemented in the MBESS package for R. If you want a CI with a total width of .10 around an expected ES of 0.5, you need 3170 participants per group:

library(MBESS)
ss.aipe.smd(delta=0.5, conf.level=.95, width=0.10)
[1] 3170

The same point has been made from a Bayesian point of view in a blog post from John Kruschke: notice the sample size on the x-axis.

In our own analysis on how correlations evolve with increasing sample size (Schönbrodt & Perugini, 2013; see also blog post), we conclude that for typical effect sizes in psychology, you need 250 participants to get sufficiently accurate and stable estimates of the ES:


How much precision is needed?

It is certainly hard to give general guidelines for how much precision is sensible, but here are the considerations on which we based our stability analyses. We used a CI-like “corridor of stability” (see Figure) with half-widths of w = .10, w = .15, and w = .20 (everything in the correlation metric).

w = .20 was chosen for the following reason: the average reported effect size in psychology is around r = .21 (Richard, Bond, & Stokes-Zoota, 2003). For this effect size, an accuracy of w = .20 results in a CI that is “just significant” and does not include a reversal of the sign of the effect. Hence, with the typical effect sizes we are dealing with in psychology, a CI with a half-width > .20 would not make much sense.

w = .10 was chosen as it corresponds to a “small effect size” à la Cohen. This is arbitrary, of course, but at least it is some anchor. And w = .15 is simply in between.

Using these numbers and an ES estimate of, say, r = .29, a just tolerable precision would be [.10; .46] (w = .20), a tolerable precision [.15; .42] (w = .15) and a moderate precision [.20; .38] (w = .10).
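A hedged sketch of where these interval bounds come from: they can be reproduced (up to rounding) by going ± w around r = .29 on the Fisher-z scale and transforming back – this is my reconstruction, not code from the original paper:

r <- .29
for (w in c(.20, .15, .10)) {
  print(round(tanh(atanh(r) + c(-1, 1) * w), 2))
}
# 0.10 0.46   (w = .20, just tolerable)
# 0.15 0.42   (w = .15, tolerable)
# 0.20 0.38   (w = .10, moderate)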

If we use this lower threshold of “just tolerable precision”, we would need around 200-250 participants per group for a two-sample group difference. While I am not sure whether we really need four-digit samples for typical scenarios, I am sure that we need at least three-digit samples when we want to talk about “precision”.

Regardless of the specific level of precision and the method used, however, one thing is clear: accuracy does not come cheaply. We need far fewer participants for a hypothesis test (is there a non-zero effect or not?) than for an accurate estimate.
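A short sketch of this contrast for d = 0.5, putting base R’s power.t.test next to the MBESS call from above:

power.t.test(delta = 0.5, sd = 1, power = .80)            # ~ 64 participants per group for 80% power
library(MBESS)
ss.aipe.smd(delta = 0.5, conf.level = .95, width = 0.10)  # ~ 3170 per group for a CI of total width .10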

With increasing sample size, unfortunately, you get diminishing returns on precision: as you can see in the dotted lines in Figure 2, the CI levels off, and you need disproportionately large sample sizes to squeeze out the last tiny percentages of a shrinking CI. If you follow Pareto’s principle, you should stop at some point. In the progress of science, accuracy will probably be achieved by meta-analyzing several studies (which also gives you an estimate of the ES variability and possible moderators) rather than by running one mega-study.

Hence: Always report your ES estimate, even in small-n studies.

 

References

Kelley, K., & Maxwell, S. E. (2003). Sample Size for Multiple Regression: Obtaining Regression Coefficients That Are Accurate, Not Simply Significant. Psychological Methods, 8, 305–321. doi:10.1037/1082-989X.8.3.305

Kelley, K., & Rausch, J. R. (2006). Sample size planning for the standardized mean difference: Accuracy in parameter estimation via narrow confidence intervals. Psychological Methods, 11, 363–385. doi:10.1037/1082-989X.11.4.363

Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample Size Planning for Statistical Power and Accuracy in Parameter Estimation. Annual Review of Psychology, 59(1), 537–563. doi:10.1146/annurev.psych.59.103006.093735

Richard, F. D., Bond, C. F., & Stokes-Zoota, J. J. (2003). One hundred years of social psychology quantitatively described. Review of General Psychology, 7, 331–363. doi:10.1037/1089-2680.7.4.331

van Assen, M. A. L. M., van Aert, R. C. M., Nuijten, M. B., & Wicherts, J. M. (2014). Why publishing everything is more effective than selective publishing of statistically significant results. PLoS ONE, 9, e84896. doi:10.1371/journal.pone.0084896
