A Compendium of Clean Graphs in R

[This is a guest post by Eric-Jan Wagenmakers and Quentin Gronau introducing the RGraphCompendium. Click here to see the full compendium!]

Every data analyst knows that a good graph is worth a thousand words, and perhaps a hundred tables. But how should one create a good, clean graph? In R, this task is anything but easy. Many users find it almost impossible to resist the siren song of adding grid lines, including grey backgrounds, using elaborate color schemes, and applying default font sizes that make the text much too small in relation to the graphical elements. As a result, many R graphs are an aesthetic disaster; they are difficult to parse and unfit for publication.

In contrast, a good graph obeys the golden rule: “create graphs unto others as you want them to create graphs unto you”. This means that a good graph is a simple graph, in the Einsteinian sense that a graph should be made as simple as possible, but not simpler. A good graph communicates the main message effectively, without fuss and distraction. In addition, a good graph balances its graphical and textual elements – large symbols demand an increase in line width, and these together require an increase in font size.

The graphing chaos is exacerbated by the default settings in R (and the graphical packages that it provides, such as ggplot2), which are decidedly suboptimal. For instance, the font size is often too small, and the graphical elements are not sufficiently prominent. As a result, creating a good graph in R requires a lot of tinkering, not unlike the process of editing the first draft of a novice writer.
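As a minimal illustration of the kind of tinkering meant here (this sketch is not from the compendium, whose code below uses base graphics), two ggplot2 adjustments already remove the grey background and grid lines and enlarge the fonts:

library(ggplot2)
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point(size = 3) +                  # larger plotting symbols
  theme_classic(base_size = 16) +         # no grey background, no grid, larger fonts
  labs(x = "Weight (1000 lbs)", y = "Miles per gallon")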

Fortunately, many plots share the same underlying structure, and the tinkering that has led to a clean graph of time series A will generally provide useful starting values for a clean graph of time series B. To exploit the overlap in structure, however, the user needs to remember the settings that were used for the first graph. Usually, this means that the user has to recall the location of the relevant R code. Sometimes the search for this initial code can take longer than the tinkering that was required to produce a clean graph in the first place.

In order to reduce the time needed to find relevant R code, we have constructed a compendium of clean graphs in R. This compendium, available at http://shinyapps.org/apps/RGraphCompendium/index.html, can also be used for teaching or as inspiration for improving one’s own graphs. In addition, the compendium provides a selective overview of the kind of graphs that researchers often use; the graphs cover a range of statistical scenarios and feature contributions of different data analysts. We do not claim that the graphs in the compendium are in any way perfect; some are better than others, and overall much remains to be improved. The compendium is undergoing continual refinement. Nevertheless, we hope the graphs are useful in their current state.

As an example of what the compendium has to offer, consider the graph below. This graph shows the proportion of the popular vote as a function of the relative height of the US president against his most successful opponent. Note the large circles for the data, the thick line for the linear relation, and the large font size for the axis labels. Also, note that the line does not touch the y-axis (a subtlety that requires deviating from the default). As in the compendium, the R code that created the graph is displayed after clicking the box “Show R-code”.

Show R-Code

# Presidential data up to and including 2008; data from Stulp et al. 2013
# rm(list=ls())
# height of president divided by height of most successful opponent:
height.ratio <- c(0.924324324, 1.081871345, 1, 0.971098266, 1.029761905,
   0.935135135, 0.994252874, 0.908163265, 1.045714286, 1.18404908,
   1.115606936, 0.971910112, 0.97752809, 0.978609626, 1,
   0.933333333, 1.071428571, 0.944444444, 0.944444444, 1.017142857,
   1.011111111, 1.011235955, 1.011235955, 1.089285714, 0.988888889,
   1.011111111, 1.032967033, 1.044444444, 1, 1.086705202,
   1.011560694, 1.005617978, 1.005617978, 1.005494505, 1.072222222,
   1.011111111, 0.983783784, 0.967213115, 1.04519774, 1.027777778,
   1.086705202, 1, 1.005347594, 0.983783784, 0.943005181, 1.057142857)

# proportion popular vote for president vs most successful opponent
# NB can be lower than .5 because popular vote does not decide election
pop.vote <- c(0.427780852, 0.56148981, 0.597141922, 0.581254292, 0.530344067,
  0.507425996, 0.526679292, 0.536690951, 0.577825976, 0.573225387,
  0.550410082, 0.559380032, 0.484823958, 0.500466176, 0.502934212,
  0.49569636, 0.516904414, 0.522050547, 0.531494442, 0.60014892,
  0.545079801, 0.604274986, 0.51635906, 0.63850958, 0.652184407,
  0.587920412, 0.5914898, 0.624614752, 0.550040193, 0.537771958,
  0.523673642, 0.554517134, 0.577511576, 0.500856251, 0.613444534,
  0.504063153, 0.617883695, 0.51049949, 0.553073235, 0.59166415,
  0.538982024, 0.53455133, 0.547304058, 0.497350649, 0.512424242,
  0.536914796)
           
#cor.test(height.ratio,pop.vote)
require(plotrix) # package plotrix is needed for the function "ablineclip"
# if the following line and the line containing "dev.off()" are executed, the plot will be saved as a png file in the current working directory
# png("Presidental.png", width = 18, height = 18, units = "cm", res = 800, pointsize = 10)
op <- par(cex.main = 1.5, mar = c(5, 6, 4, 5) + 0.1, mgp = c(3.5, 1, 0), cex.lab = 1.5 , font.lab = 2, cex.axis = 1.3, bty = "n", las=1)
plot(height.ratio, pop.vote, col="black", pch=21, bg = "grey", cex = 2,
     xlim=c(.90,1.20), ylim=c(.40,.70), ylab="", xlab="", axes=F)
axis(1)
axis(2)
reg1 <- lm(pop.vote~height.ratio)
ablineclip(reg1, lwd=2,x1 = .9, x2 = 1.2)
par(las=0)
mtext("Presidential Height Ratio", side=1, line=2.5, cex=1.5)
mtext("Relative Support for President", side=2, line=3.7, cex=1.5)
text(1.15, .65, "r = .39", cex=1.5)
# dev.off()
# For comparison, consider the default plot:
#par(op) # reset to default "par" settings
#plot(height.ratio, pop.vote) #yuk!


 

A more complicated example takes the same data, but uses it to plot the development of the Bayes factor, assessing the evidence for the hypothesis that taller presidential candidates attract more votes. This plot was created based in part on code from Ruud Wetzels and Benjamin Scheibehenne. Note the annotations on the right side of the plot, and the subtle horizontal lines that indicate Jeffreys’ criteria on the evidence. It took some time to figure out how to display the word “Evidence” in its current direction.

Show R-Code

# rm(list=ls())
# height of president divided by height of most successful opponent:
height.ratio <- c(0.924324324, 1.081871345, 1, 0.971098266, 1.029761905, 0.935135135, 0.994252874, 0.908163265, 1.045714286, 1.18404908, 1.115606936, 0.971910112, 0.97752809, 0.978609626, 1, 0.933333333, 1.071428571, 0.944444444, 0.944444444, 1.017142857, 1.011111111, 1.011235955, 1.011235955, 1.089285714, 0.988888889, 1.011111111, 1.032967033, 1.044444444, 1, 1.086705202, 1.011560694, 1.005617978, 1.005617978, 1.005494505, 1.072222222, 1.011111111, 0.983783784, 0.967213115, 1.04519774, 1.027777778, 1.086705202, 1, 1.005347594, 0.983783784, 0.943005181, 1.057142857)
# proportion popular vote for president vs most successful opponent
pop.vote <- c(0.427780852, 0.56148981, 0.597141922, 0.581254292, 0.530344067, 0.507425996, 0.526679292, 0.536690951, 0.577825976, 0.573225387, 0.550410082, 0.559380032, 0.484823958, 0.500466176, 0.502934212, 0.49569636, 0.516904414, 0.522050547, 0.531494442, 0.60014892, 0.545079801, 0.604274986, 0.51635906, 0.63850958, 0.652184407, 0.587920412, 0.5914898, 0.624614752, 0.550040193, 0.537771958, 0.523673642, 0.554517134, 0.577511576, 0.500856251, 0.613444534, 0.504063153, 0.617883695, 0.51049949, 0.553073235, 0.59166415, 0.538982024, 0.53455133, 0.547304058, 0.497350649, 0.512424242, 0.536914796)
## now calculate BF sequentially; two-sided test
library("hypergeo")
BF10.HG.exact = function(n, r)
{
#Jeffreys' test for whether a correlation is zero or not
#Jeffreys (1961), pp. 289-292
#Note that if the means are subtracted, n needs to be replaced by n-1
  hypgeo = hypergeo((.25+n/2), (-.25+n/2), (3/2+n/2), r^2)
  BF10 = ( sqrt(pi) * gamma(n/2+1) * (hypgeo) ) / ( 2 * gamma(3/2+n/2) )
  return(as.numeric(BF10))
}
BF10 <- array()
BF10[1]<-1
BF10[2]<-1
for (i in 3:length(height.ratio))
{
  BF10[i] <- BF10.HG.exact(n=i-1, r=cor(height.ratio[1:i],pop.vote[1:i]))
}
# We wish to plot this Bayes factor sequentially, as it unfolds as more elections become available:
#============ Plot log Bayes factors  ===========================
par(cex.main = 1.3, mar = c(4.5, 6, 4, 7)+.1, mgp = c(3, 1, 0), #bottom, left, top, right
  cex.lab = 1.3, font.lab = 2, cex.axis = 1.3, las=1)
xhigh <- 60
plot(log(BF10), xlim=c(1,xhigh), ylim=c(-1*log(200),log(200)), xlab="", ylab="", cex.lab=1.3,cex.axis=1.3, las =1, yaxt="n", bty = "n", type="p", pch=21, bg="grey")

labelsUpper=log(c(100,30,10,3,1))
labelsLower=-1*labelsUpper
criticalP=c(labelsLower,0,labelsUpper)
for (idx in 1:length(criticalP))
{
  abline(h=criticalP[idx],col='darkgrey',lwd=1,lty=2)
}
abline(h=0)
axis(side=4, at=criticalP,tick=T,las=2,cex.axis=1, labels=F)
axis(side=4, at=labelsUpper+.602, tick=F, cex.axis=1, labels=c("Extreme","Very strong", "Strong","Moderate", "Anecdotal"))
axis(side=4, at=labelsLower-.602,tick=F, cex.axis=1, labels=c("Extreme","Very strong", "Strong","Moderate", "Anecdotal"))

axis(side=2, at=c(criticalP),tick=T,las=2,cex.axis=1,
labels=c("1/100","1/30","1/10","1/3","1","", "100","30","10","3",""))
 
mtext(expression(BF[1][0]), side=2, line=2.5, las=0, cex=1.3)
grid::grid.text("Evidence", 0.97, 0.5, rot = 270, gp=grid::gpar(cex=1.3))
mtext("No. of Elections", side=1, line=2.5, las=1, cex=1.3)

arrows(20, -log(10), 20, -log(100), length=.25, angle=30, code=2, lwd=2)
arrows(20, log(10), 20, log(100), length=.25, angle=30, code=2, lwd=2)
text(25, -log(70), "Evidence for H0", pos=4, cex=1.3)
text(25, log(70), "Evidence for H1", pos=4, cex=1.3)


A final example is borrowed from the graphs in JASP (http://jasp-stats.org), a free and open-source statistical software program with a GUI not unlike that of SPSS. In contrast to SPSS, JASP also includes Bayesian hypothesis tests, the results of which are summarized in graphs such as the one below.

Show R-Code

.plotPosterior.ttest <- function(x= NULL, y= NULL, paired= FALSE, oneSided= FALSE, iterations= 10000, rscale= "medium", lwd= 2, cexPoints= 1.5, cexAxis= 1.2, cexYlab= 1.5, cexXlab= 1.5, cexTextBF= 1.4, cexCI= 1.1, cexLegend= 1.4, lwdAxis= 1.2){
   
    library(BayesFactor)
   
    if(rscale == "medium"){
        r <- sqrt(2) / 2
    }
    if(rscale == "wide"){
        r <- 1
    }
    if(rscale == "ultrawide"){
        r <- sqrt(2)
    }
    if(mode(rscale) == "numeric"){
        r <- rscale
    }
   
    if(oneSided == FALSE){
        nullInterval <- NULL
    }
    if(oneSided == "right"){
        nullInterval <- c(0, Inf)
    }
    if(oneSided == "left"){
        nullInterval <- c(-Inf, 0)
    }
   
    # sample from delta posterior
    samples <- BayesFactor::ttestBF(x=x, y=y, paired=paired, nullInterval= nullInterval, posterior = TRUE, iterations = iterations, rscale= r)
   
    delta <- samples[,"delta"]
   
    # fit density estimator
    fit.posterior <-  logspline::logspline(delta)
   
    # density function posterior
    dposterior <- function(x, oneSided= oneSided, delta= delta){
        if(oneSided == FALSE){
            k <- 1
            return(k*logspline::dlogspline(x, fit.posterior))
        }
        if(oneSided == "right"){
            k <- 1 / (length(delta[delta >= 0]) / length(delta))
            return(ifelse(x < 0, 0, k*logspline::dlogspline(x, fit.posterior)))
        }
        if(oneSided == "left"){
            k <- 1 / (length(delta[delta <= 0]) / length(delta))
            return(ifelse(x > 0, 0, k*logspline::dlogspline(x, fit.posterior)))
        }  
    }  
   
    # pdf cauchy prior
    dprior <- function(delta,r, oneSided= oneSided){
        if(oneSided == "right"){
            y <- ifelse(delta < 0, 0, 2/(pi*r*(1+(delta/r)^2)))
            return(y)
        }
        if(oneSided == "left"){
            y <- ifelse(delta > 0, 0, 2/(pi*r*(1+(delta/r)^2)))
            return(y)
        }   else{
            return(1/(pi*r*(1+(delta/r)^2)))
        }
    }
   
    # set limits plot
    xlim <- vector("numeric", 2)
    if(oneSided == FALSE){
        xlim[1] <- min(-2, quantile(delta, probs = 0.01)[[1]])
        xlim[2] <- max(2, quantile(delta, probs = 0.99)[[1]])
    }
    if(oneSided == "right"){
        xlim[1] <- min(-2, quantile(delta[delta >= 0], probs = 0.01)[[1]])
        xlim[2] <- max(2, quantile(delta[delta >= 0], probs = 0.99)[[1]])
    }
    if(oneSided == "left"){
        xlim[1] <- min(-2, quantile(delta[delta <= 0], probs = 0.01)[[1]])
        xlim[2] <- max(2, quantile(delta[delta <= 0], probs = 0.99)[[1]])
    }
   
    ylim <- vector("numeric", 2)
    ylim[1] <- 0
    ylim[2] <- max(dprior(0,r, oneSided= oneSided), 1.28*max(dposterior(x= delta, oneSided= oneSided, delta=delta)))
   
    # calculate position of "nice" tick marks and create labels
    xticks <- pretty(xlim)
    yticks <- pretty(ylim)
    xlabels <- formatC(pretty(xlim), 1, format= "f")
    ylabels <- formatC(pretty(ylim), 1, format= "f")
   
    # 95% credible interval:
    if(oneSided == FALSE){
        CIlow <- quantile(delta, probs = 0.025)[[1]]
        CIhigh <- quantile(delta, probs = 0.975)[[1]]
    }
    if(oneSided == "right"){
        CIlow <- quantile(delta[delta >= 0], probs = 0.025)[[1]]
        CIhigh <- quantile(delta[delta >= 0], probs = 0.975)[[1]]
    }
    if(oneSided == "left"){
        CIlow <- quantile(delta[delta <= 0], probs = 0.025)[[1]]
        CIhigh <- quantile(delta[delta <= 0], probs = 0.975)[[1]]
    }  
   
    par(mar= c(5, 5, 7, 4) + 0.1, las=1)
    xlim <- c(min(CIlow,range(xticks)[1]), max(range(xticks)[2], CIhigh))
    plot(1,1, xlim= xlim, ylim= range(yticks), ylab= "", xlab="", type= "n", axes= FALSE)
    lines(seq(min(xticks), max(xticks),length.out = 1000),dposterior(x=seq(min(xticks), max(xticks),length.out = 1000), oneSided = oneSided, delta=delta), lwd= lwd, xlim= xlim, ylim= range(yticks), ylab= "", xlab= "")
    lines(seq(min(xticks), max(xticks),length.out = 1000), dprior(seq(min(xticks), max(xticks),length.out = 1000), r=r, oneSided= oneSided), lwd= lwd, lty=3)
   
    axis(1, at= xticks, labels = xlabels, cex.axis= cexAxis, lwd= lwdAxis)
    axis(2, at= yticks, labels= ylabels, cex.axis= cexAxis, lwd= lwdAxis)
    mtext(text = "Density", side = 2, las=0, cex = cexYlab, line= 3)
    mtext(expression(paste("Effect size", ~delta)), side = 1, cex = cexXlab, line= 2.5)
   
    points(0, dprior(0,r, oneSided= oneSided), col="black", pch=21, bg = "grey", cex= cexPoints)
    points(0, dposterior(0, oneSided = oneSided, delta=delta), col="black", pch=21, bg = "grey", cex= cexPoints)
   
    # 95% credible interval
    dmax <- optimize(function(x)dposterior(x,oneSided= oneSided, delta=delta), interval= range(xticks), maximum = TRUE)$objective # get maximum density
    yCI <- grconvertY(dmax, "user", "ndc") + 0.08
    yCIt <- grconvertY(dmax, "user", "ndc") + 0.04
    y95 <- grconvertY(dmax, "user", "ndc") + 0.1
    yCI <- grconvertY(yCI, "ndc", "user")
    yCIt <- grconvertY(yCIt, "ndc", "user")
    y95 <- grconvertY(y95, "ndc", "user")
    arrows(CIlow, yCI , CIhigh, yCI, angle = 90, code = 3, length= 0.1, lwd= lwd)
    text(mean(c(CIlow, CIhigh)), y95,"95%", cex= cexCI)
   
    text(CIlow, yCIt, bquote(.(formatC(CIlow,2, format="f"))), cex= cexCI)
    text(CIhigh, yCIt, bquote(.(formatC(CIhigh,2, format= "f"))), cex= cexCI)
   
    # enable plotting in margin
    par(xpd=TRUE)
   
    # display BF10 value
    BF <- BayesFactor::ttestBF(x=x, y=y, paired=paired, nullInterval= nullInterval, posterior = FALSE, rscale= r)
    BF10 <- BayesFactor::extractBF(BF, logbf = FALSE, onlybf = F)[1, "bf"]
    BF01 <- 1 / BF10
   
    xx <- grconvertX(0.3, "ndc", "user")
    yy <- grconvertY(0.822, "ndc", "user")
    yy2 <- grconvertY(0.878, "ndc", "user")
   
    if(BF10 >= 1000000 | BF01 >= 1000000){
        BF10t <- format(BF10, digits= 3, scientific = TRUE)
        BF01t <- format(BF01, digits= 3, scientific = TRUE)
    }
    if(BF10 < 1000000 & BF01 < 1000000){
        BF10t <- formatC(BF10,2, format = "f")
        BF01t <- formatC(BF01,2, format = "f")
    }
   
    if(oneSided == FALSE){
        text(xx, yy2, bquote(BF[10]==.(BF10t)), cex= cexTextBF)
        text(xx, yy, bquote(BF[0][1]==.(BF01t)), cex= cexTextBF)
    }
    if(oneSided == "right"){
        text(xx, yy2, bquote(BF["+"][0]==.(BF10t)), cex= cexTextBF)
        text(xx, yy, bquote(BF[0]["+"]==.(BF01t)), cex= cexTextBF)
    }
    if(oneSided == "left"){
        text(xx, yy2, bquote(BF["-"][0]==.(BF10t)), cex= cexTextBF)
        text(xx, yy, bquote(BF[0]["-"]==.(BF01t)), cex= cexTextBF)
    }
   
    # probability wheel
    if(max(nchar(BF10t), nchar(BF01t)) <= 4){
        xx <- grconvertX(0.44, "ndc", "user")
    }
    # probability wheel
    if(max(nchar(BF10t), nchar(BF01t)) == 5){
        xx <- grconvertX(0.44 +  0.001* 5, "ndc", "user")
    }
    # probability wheel
    if(max(nchar(BF10t), nchar(BF01t)) == 6){
        xx <- grconvertX(0.44 + 0.001* 6, "ndc", "user")
    }
    if(max(nchar(BF10t), nchar(BF01t)) == 7){
        xx <- grconvertX(0.44 + 0.002* max(nchar(BF10t), nchar(BF01t)), "ndc", "user")
    }
    if(max(nchar(BF10t), nchar(BF01t)) == 8){
        xx <- grconvertX(0.44 + 0.003* max(nchar(BF10t), nchar(BF01t)), "ndc", "user")
    }
    if(max(nchar(BF10t), nchar(BF01t)) > 8){
        xx <- grconvertX(0.44 + 0.004* max(nchar(BF10t), nchar(BF01t)), "ndc", "user")
    }
    yy <- grconvertY(0.85, "ndc", "user")
   
    # make sure that colored area is centered
    radius <- 0.06*diff(range(xticks))
    A <- radius^2*pi
    alpha <- 2 / (BF01 + 1) * A / radius^2
    startpos <- pi/2 - alpha/2
   
    # draw probability wheel
    plotrix::floating.pie(xx, yy,c(BF10, 1),radius= radius, col=c("darkred", "white"), lwd=2,startpos = startpos)
   
    yy <- grconvertY(0.927, "ndc", "user")
    yy2 <- grconvertY(0.77, "ndc", "user")
   
    if(oneSided == FALSE){
        text(xx, yy, "data|H1", cex= cexCI)
        text(xx, yy2, "data|H0", cex= cexCI)
    }
    if(oneSided == "right"){
        text(xx, yy, "data|H+", cex= cexCI)
        text(xx, yy2, "data|H0", cex= cexCI)
    }
    if(oneSided == "left"){
        text(xx, yy, "data|H-", cex= cexCI)
        text(xx, yy2, "data|H0", cex= cexCI)
    }
   
    # add legend
    xx <- grconvertX(0.57, "ndc", "user")
    yy <- grconvertY(0.92, "ndc", "user")
    legend(xx, yy, legend = c("Posterior", "Prior"), lty=c(1,3), bty= "n", lwd = c(lwd,lwd), cex= cexLegend)
}

set.seed(1)
.plotPosterior.ttest(x= rnorm(30,0.15), rscale=1)


The compendium contains many more examples. We hope some R users will find them convenient. Finally, if you create a clean graph in R that you believe is a candidate for inclusion in this compendium, please do not hesitate to write an email to EJ.Wagenmakers@gmail.com. Your contribution will be acknowledged explicitly, alongside the code you provided.

Eric-Jan Wagenmakers and Quentin Gronau

University of Amsterdam, Department of Psychology.



What does a Bayes factor feel like?

A Bayes factor (BF) is a statistical index that quantifies the evidence for a hypothesis, compared to an alternative hypothesis (for introductions to Bayes factors, see here, here or here).

Although the BF is a continuous measure of evidence, humans love verbal labels, categories, and benchmarks. Labels provide interpretations of the objective index – and that is both the good and the bad thing about them. The good thing is that labels can facilitate communication (but see @richardmorey), and people just crave verbal interpretations to guide their understanding of those “boring” raw numbers.


The bad thing about labels is that an interpretation should always be context dependent (for example, “30 min.” can be both a long time (a train delay) and a short time (a concert), as @CaAl said). But once a categorical system has been established, it is no longer context dependent.

 

These labels can also be a dangerous tool, as they implicitly introduce cutoff values (“Hey, the BF jumped over the boundary of 3. It’s not anecdotal any more, it’s moderate evidence!”). But we do not want another sacred .05 criterion! (See also Andrew Gelman’s blog post and its critical comments.) The strength of the BF is precisely its non-binary nature.

Several labels for paraphrasing the size of a BF have been suggested. The most common system seems to be the suggestion of Harold Jeffreys (1961):

Table 1: Evidence categories for the Bayes factor BF_{10} (based on Jeffreys, 1961).

Bayes factor BF_{10}   Label
> 100                  Extreme evidence for H1
30 – 100               Very strong evidence for H1
10 – 30                Strong evidence for H1
3 – 10                 Moderate evidence for H1
1 – 3                  Anecdotal evidence for H1
1                      No evidence
1/3 – 1                Anecdotal evidence for H0
1/10 – 1/3             Moderate evidence for H0
1/30 – 1/10            Strong evidence for H0
1/100 – 1/30           Very strong evidence for H0
< 1/100                Extreme evidence for H0

 

Note: The original label for 3 < BF < 10 was “substantial evidence”. Lee and Wagenmakers (2013) changed it to “moderate”, as “substantial” already sounds too decisive. “Anecdotal” was formerly labeled “barely worth mentioning”.
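For quick reference in an analysis script, a small helper can map a BF onto these labels. The following is only a sketch (the function name and the treatment of boundary values are arbitrary choices):

# map a BF10 value onto the evidence categories of Table 1
jeffreys_label <- function(BF10) {
  cuts   <- c(0, 1/100, 1/30, 1/10, 1/3, 1, 3, 10, 30, 100, Inf)
  labels <- c("Extreme evidence for H0", "Very strong evidence for H0",
              "Strong evidence for H0", "Moderate evidence for H0",
              "Anecdotal evidence for H0", "Anecdotal evidence for H1",
              "Moderate evidence for H1", "Strong evidence for H1",
              "Very strong evidence for H1", "Extreme evidence for H1")
  labels[findInterval(BF10, cuts, rightmost.closed = TRUE)]
}
jeffreys_label(3.7)   # "Moderate evidence for H1"
jeffreys_label(0.04)  # "Strong evidence for H0"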

Kass and Raftery (1995) suggested a comparable classification, except that their “strong evidence” category starts at BF > 20 (see also the Wikipedia entry).

Getting a feeling for Bayes factors

How much is a BF_{10} of 3.7? It indicates that the data are 3.7 times more likely under H_1 than under H_0, given the priors assumed in the model. Is that a lot of evidence for H_1? Or not?

Following Table 1, it can be labeled “moderate evidence” for an effect – whatever that means.

Some have argued that strong evidence, such as a BF > 10, is quite evident from eyeballing alone:

“If your result needs a statistician then you should design a better experiment.” (attributed to Ernest Rutherford)

Is that really the case? Can we just “see” it when there is an effect?

Let’s approach the topic a bit more experientially. What does such a BF look like, visually? We take the good old urn model as a first example.

Visualizing Bayes factors for proportions

Imagine the following scenario: When I give a present to my two boys (4 and 6 years old), it is not so important what it is. The most important thing is: “Is it fair?”. (And my boys are very sensitive detectors of unfairness).

Imagine you have bags with red and blue marbles. Obviously, the blue marbles are much better, so it is key to make sure that in each bag there is an equal number of red and blue marbles. Hence, for our familial harmony I should check whether reds and blues are distributed evenly or not. In statistical terms: H_0: p = 0.5, H_1: p != 0.5.

When drawing samples from the bags, the strongest evidence for an even distribution (H_0) is obtained when exactly the same number of red and blue marbles has been drawn. How much evidence for H_0 do I have when I draw n = 2 marbles, 1 red and 1 blue? The answer is in Figure 1, upper table, first row: the BF_{10} is 0.86 in favor of H_1, or equivalently a BF_{01} of 1.16 in favor of H_0 – i.e., anecdotal evidence for an equal distribution.

You can get these values easily with the famous BayesFactor package for R:

proportionBF(y=1, N=2, p=0.5)

 

What if I had drawn two reds instead? Then the BF would be 1.14 in favor of H_1 (see Figure 1, lower table, row 1).

proportionBF(y=2, N=2, p=0.5)

Obviously, with small sample sizes it is not possible to generate strong evidence for either H_0 or H_1. You need a minimal sample size to leave the region of “anecdotal evidence”. Figure 1 shows some examples of how the BF gets more extreme with increasing sample size.
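You can check this numerically with the BayesFactor package; the following sketch (the sample sizes and the 2:1 mixture are chosen for illustration only) contrasts a perfectly even sample with a 2:1 color mixture as the number of draws grows:

library(BayesFactor)
# BF10 for an even split vs. a 2:1 color mixture, at increasing numbers of draws
for (N in c(6, 30, 60, 120)) {
  even <- extractBF(proportionBF(y = N/2,   N = N, p = 0.5))$bf  # maximally H0-consistent
  two1 <- extractBF(proportionBF(y = 2*N/3, N = N, p = 0.5))$bf  # 2:1 mixture
  cat(sprintf("N = %3d: BF10(even) = %5.2f, BF10(2:1) = %9.2f\n", N, even, two1))
}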

Figure 1: Marble distributions and the corresponding Bayes factors.

 

These visualizations indeed seem to indicate that for simple designs such as the urn model you do not really need a statistical test if your BF is > 10. You can just see it from looking at the data (although the “obviousness” is more pronounced for large BFs in small sample sizes).

Maximal and minimal Bayes factors for a certain sample size

The dotted lines in Figure 2 show the maximal and the minimal BF that can be obtained for a given number of drawn marbles. The minimal BF is obtained when the sample is maximally consistent with H_0 (i.e., when exactly the same number of red and blue marbles has been drawn); the maximal BF is obtained when all drawn marbles are of one color.
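These bounds are easy to verify numerically; here is a minimal sketch with the BayesFactor package (N = 20 draws is an arbitrary choice):

library(BayesFactor)
N <- 20
minBF <- extractBF(proportionBF(y = N/2, N = N, p = 0.5))$bf  # even split: most consistent with H0
maxBF <- extractBF(proportionBF(y = N,   N = N, p = 0.5))$bf  # all marbles of one color
c(minimum = minBF, maximum = maxBF)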

Figure 2: Maximal and minimal BF for a given sample size.

 

Figure 2 highlights two features:

  • If you have only few data points, you cannot obtain strong evidence for either H_1 or H_0.
  • It is much easier to get strong evidence for H_1 than for H_0. This property depends somewhat on the choice of the prior distribution for H_1 effect sizes: if you expect very strong effects under H_1, it is easier to get evidence for H_0. But still, with every reasonable prior distribution it is easier to gather evidence for H_1.

 

Get a feeling yourself!

Here’s a shiny widget that lets you draw marbles from the urn. Monitor how the BF evolves as you sequentially add marbles to your sample!

 

[Open app in separate window]

Teaching sequential sampling and Bayes factors


When I teach sequential sampling and Bayes factors, I bring an actual bag with marbles (or candies of two colors).

In my typical setup I ask some volunteers to test whether the bag contains the same number of both colors. (The bag of course has a cover so that they don’t see the marbles.) They may sample as many marbles as they want, but each marble costs them 10 cents (i.e., an efficiency criterion: sample as much as necessary, but not too much!). They should think aloud about when they have a first hunch and when they are relatively sure about the presence or absence of an effect. I use a color mixture of 2:1 – in my experience this gives a good chance to detect the difference, but it’s not too obvious (some teams stop sampling and conclude “no difference”).

This exercise typically reveals the following insights (hopefully!):

  • By intuition, humans sample sequentially: when the evidence is not strong enough, they sample more data until they are sure enough about the (un)fairness of the distribution.
  • Intuitively, nobody runs a fixed-n design with an a-priori power analysis.
  • Often they stop quite soon, in the range of “anecdotal evidence”. This matches my own impression: BFs that are still in the “anecdotal” range already look quite conclusive for everyday hypothesis testing (e.g., a 2 vs. 9 distribution; BF_{10} = 2.7). This might change, however, if a wrong decision is associated with higher costs in the scenario. Next time I will try a scenario involving prescription drugs with potentially severe side effects.

 

The “interocular traumatic test”

The analysis so far seems to support the “interocular traumatic test”: “when the data are so compelling that the conclusion hits you straight between the eyes” (attributed to Joseph Berkson; quoted from Wagenmakers, Verhagen, & Ly, 2014).

But the authors go on to quote Edwards et al. (1963, p. 217), who said: “…the enthusiast’s interocular trauma may be the skeptic’s random error. A little arithmetic to verify the extent of the trauma can yield great peace of mind for little cost.”

In the next visualization we will see that large Bayes factors are not always obvious.

Visualizing Bayes factors for group differences

What happens if we switch to group differences? European women have an average self-reported height of 165.8 cm, European men of 177.9 cm – a difference of 12.1 cm, with a pooled standard deviation of around 7 cm. (Source: European Community Household Panel; see Garcia, J., & Quintana-Domeque, C., 2007; based on ~50,000 participants born between 1970 and 1980.) This translates to a Cohen’s d of 1.72.

Unfortunately, this source contains only self-reported heights, which can be subject to biases (men over-report their height on average). But it was the only source I found that also contains the standard deviations within each sex. However, Meyer et al. (2001) report a similar effect size of d = 1.8 for objectively measured heights.
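For illustration, here is a sketch of how these summary statistics translate into Cohen’s d and how a default two-sample Bayes factor could be computed from raw heights. The group size (n = 10 per group) and the simulated data are assumptions for illustration only and will not reproduce the exact BF reported below:

(177.9 - 165.8) / 7                          # Cohen's d of about 1.73

library(BayesFactor)
set.seed(42)
males   <- rnorm(10, mean = 177.9, sd = 7)   # simulated heights, n = 10 per group
females <- rnorm(10, mean = 165.8, sd = 7)
ttestBF(x = males, y = females)              # default two-sample Bayes factor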

 

Now look at this plot. Would you say the blue lines are obviously higher than the red ones?


I couldn’t say for sure. But the BF_{10} is 14.54 – “strong” evidence!

If we sort the lines by height the effect is more visible:


… and alternatively, we can plot the distributions of males’ and females’ heights:
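A minimal sketch of such a density plot with simulated heights (n = 100 per group is an arbitrary choice):

set.seed(123)
m <- rnorm(100, mean = 177.9, sd = 7)   # simulated male heights
f <- rnorm(100, mean = 165.8, sd = 7)   # simulated female heights
plot(density(f), xlim = c(140, 210), col = "red", lwd = 2,
     main = "", xlab = "Height (cm)")
lines(density(m), col = "blue", lwd = 2)
legend("topright", legend = c("Females", "Males"),
       col = c("red", "blue"), lwd = 2, bty = "n")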

 

 

Again, you can play around with the interactive app:

[Open app in separate window]

 

Can we get a feeling for Bayes factors?

To summarize: Whether a strong evidence “hits you between the eyes” depends on many things – the kind of test, the kind of visualization, the sample size. Sometimes a BF of 2.5 seems obvious, and sometimes it is hard to spot a BF>100 by eyeballing only. Overall, I’m glad that we have a numeric measure of strength of evidence and do not have to rely on eyeballing only.

Try it yourself – draw some marbles in the interactive app, or change the height difference between males and females, and calibrate your personal gut feeling with the resulting Bayes factor!


In the era of #repligate: What are valid cues for the trustworthiness of a study?

[Update 2015/1/14: I consolidate feedback from Twitter, comments, email, and real life into the main text (StackExchange-style), so that we get a good and improving answer. Thanks to @TonyLFreitas, @PhDefunct, @bahniks, @JoeHilgard, @_r_c_a, @richardmorey, @R__INDEX, the commenters at the end of this post and on the OSF mailing list, and many others for their feedback!]

In a recent lecture I talked about the replication crisis in psychology. After the lecture my students asked: “We learn so much stuff in our lectures, and now you tell us that a considerable proportion of these ‘facts’ probably are just false positives, or highly exaggerated? Then, what can we believe at all?” A short discussion soon led to the crucial question:

In the era of #repligate: What are valid cues for the trustworthiness of a study?
Of course the best way to judge a study’s quality would be to read the paper thoroughly, make an informed judgement about its internal and statistical validity, invest some extra time in a literature review, and maybe take a look at the raw data, if available. However, such an investment is not possible in all scenarios.

 

Here, I will only focus on cues that are easy and fast to retrieve.

 

As a conceptual framework we can use the lens model (Brunswik, 1956), which differentiates the concepts of cue utilization and cue validity. We use some information as a manifest cue for a latent variable (“cue utilization”). But only some cues are also valid indicators (“cue validity”): valid cues correlate with the latent variable, invalid cues do not. Sometimes valid cues exist that we do not use, and sometimes we use cues that are not valid. Of course, each of the following cues can be criticized, and you can certainly give examples where each cue breaks down. Furthermore, the absence of a positive cue (e.g., if a study has not been pre-registered, which was uncommon until recently) does not necessarily indicate untrustworthiness.
But this is the nature of cues – they are not perfect, and only work on average.

 


Valid cues for trustworthiness of a single study:

  • Pre-registration. This might be one of the strongest cues for trustworthiness. Pre-registration makes p-hacking and HARKing unlikely (Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012), and takes care of a sufficient amount of statistical power (at least, some sort of sample size planning has been done; of course, this depends on the correctness of the a-priori effect size estimate).
  • Sample size / statistical power. Larger samples mean higher power, higher precision, and fewer false positives (Bakker, van Dijk, & Wicherts, 2012; Maxwell, Kelley, & Rausch, 2008; Schönbrodt & Perugini, 2013). Of course sample size alone is not a panacea. As always, the garbage-in/garbage-out principle holds, and a well-designed lab study with n = 40 can be much more trustworthy than a sloppy mTurk study with n = 800. But all other things being equal, I put more trust in larger studies.
  • Independent high-power replications. If a study has been independently replicated from another lab with high power and preferably pre-registered, this probably is the strongest evidence for the trustworthiness of a study (How to conduct a replication? See the Replication Recipe by Brandt et al., 2014).
  • I guess that studies with Open Data and Open Material have a higher replication rate:
    • “Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results” (Wicherts, Bakker, & Molenaar, 2011) —> this is not exactly Open Data, because here authors only shared data upon request (or not). But it points in the same direction.
    • Beyond publishing Open Data at all, the neatness of the data set and the quality of the analysis script are indicators (see also the comment by Richard Morey). The journal Quarterly Journal of Political Science requires authors to publish the raw data and analysis code that generate all results reported in the paper. Of these submissions, 54% “had results in the paper that differed from those generated by the author’s own code”! My fear is that analytical code that has not been refined and polished for publication contains even more errors (not to speak of unreproducible point-and-click analyses). Therefore, a well-prepared data set and analysis code should be a valid indicator.
    • Open Material could be an indicator that people are not afraid of replications and further scrutiny.
  • An abstract with reasonable conclusions that stick close to the data – see also below: “Red flags”. This includes visible efforts of the authors to explain how they could be wrong and what precautions were/were not taken.
    • A sensitivity analysis, which shows that the conclusions do not depend on specific analytical choices. For Bayesian analyses this means exploring how the conclusions depend on the choice of the prior. But you could also show how your results change when you do not exclude the outliers, or do not apply that debatable transformation to your data (see also comment).
  • Using the “21 Word Solution” of Simmons, Nelson, & Simonsohn (2012) leads to a better replication index.
These cues might be features of a specific study. Beyond that, they could also be used as indicators of an author’s general approach to science (e.g., does s/he in general embrace open practices and care about the replicability of his or her research? Does the author have a good replication record?). So the author’s open science reputation could be another valid indicator, and could be useful for hiring or tenure decisions.
(As a side note: I am not so interested in creating another formalized author index – “The super-objective-h-index-extending-altmetric-open-science-author-index!” But when I reflect on how I judge the trustworthiness of a study, I indeed take into account the open science reputation an author has.)

Valid cues for trustworthiness of a research programme/ multiple studies:


Valid cues for UNtrustworthiness of a single study/ red flags:

In a comment below, Dr. R introduced the idea of “red flags”, which I really like. These red flags are not proof of the untrustworthiness of a study – but they are definitely a warning sign to look closer and to be more sceptical.

  • Sweeping claims, counterintuitive, and shocking results (that don’t connect to the actual data)
  • Most p values are in the range of .03 – .05 (or, equivalently, most t values are in the 2–3 range, or most F values are in the 4–9 range; see the comment by Dr. R below).
    • What does a distribution of p values look like when there is a true effect? See Daniël Lakens’ blog, or the simulation sketch after this list. With large samples, p values just below .05 even indicate support for the null!
  • It’s a highly cited result, but no direct replications have been published so far. That could be an indicator that many unsuccessful replication attempts went into the file-drawer (see comment by Ruben below).
  • Too good to be true: If several low-power studies are combined in a paper, it can be very unlikely that all of them produce significant results. The “Test of Excess Significance” has been used to formally test for “too many significant results”. Although this formal test has been criticized (e.g., see The Etz-Files, and especially the long thread of comments, or this special issue on the test), I still think excess significance can be used as a red flag indicator to look closer.
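The second red flag is easy to check with a small simulation; in this sketch (effect size, group size, and number of simulations are arbitrary choices) p values under a true effect pile up near zero rather than just below .05, whereas under H0 they are uniform:

# p values from two-sample t-tests: true effect (d = 0.5) vs. no effect, n = 50 per group
set.seed(1)
p.H1 <- replicate(10000, t.test(rnorm(50, mean = 0.5), rnorm(50))$p.value)
p.H0 <- replicate(10000, t.test(rnorm(50), rnorm(50))$p.value)
mean(p.H1 > .03 & p.H1 < .05)  # under a true effect, few p values land just below .05 ...
mean(p.H1 < .03)               # ... most significant p values are much smaller
mean(p.H0 > .03 & p.H0 < .05)  # under H0, about 2% fall in any interval of width .02
hist(p.H1, breaks = 50, main = "p values under a true effect", xlab = "p")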

Possibly invalid cues (cues that are often used, but are only seemingly indicators of a study’s trustworthiness):

  • The journal’s impact factor. Impact factors correlate with retractions (Fang & Casadevall, 2011), but do not correlate with a single paper’s citation count (see here).
    • I’m not really sure whether this is a valid or an invalid cue for a study’s quality. The higher retraction rate might be due to the stronger public interest in, and tougher post-publication review of, papers in high-impact journals. The IF seems not to be predictive of a single paper’s citation count; but I’m not sure either whether the citation count is an index of a study’s quality. Furthermore, “Impact factors should have no place in grant-giving, tenure or appointment committees.” (ibid.); see also a recent article by @deevybee in Times Higher Education.
    • On the other hand, the current replicability estimate of a full volume of JPSP is only 20–30% (see Reproducibility Project: Psychology). A weak performance for one of our “best journals”.
  • The author’s publication record in high-impact journals or h-index. This might be a less valid cue than expected, or even an invalid cue.
  • Meta-analyses. Garbage in, garbage out: meta-analyses of a biased literature produce biased results, and typical correction methods do not work well. When looking at a meta-analysis, one has at least to check whether and how it was corrected for publication bias.
This list of cues was compiled in a collaborative effort. Some of them have empirical support; others are only a personal hunch.

 

So, if my students ask me again “What studies can we trust at all?”, I would say something like:
“If a study has a large sample size, Open Data, and maybe even has been pre-registered, I would put quite some trust in the results. If the study has been independently replicated, even better. In contrast to common practice, I do not care so much whether the paper has been published in a high-impact journal or whether the author has a long publication record. The next step, of course, is: Read the paper, and judge its validity and the quality of its arguments!”
What are your cues or tips for students?

This list certainly is not complete, and I would be interested in your ideas, additions, and links to relevant literature!

 

References

Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7, 543–554. doi:10.1177/1745691612459060
Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., Grange, J. A., et al. (2014). The Replication Recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217–224. doi:10.1016/j.jesp.2013.10.005
Brunswik, E. (1956). Perception and the representative design of psychological experiments. University of California Press.
Fang, F. C., & Casadevall, A. (2011). Retracted science and the retraction index. Infection and Immunity, 79, 3855–3859. doi:10.1128/IAI.05661-11
Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59, 537–563. doi:10.1146/annurev.psych.59.103006.093735
Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609–612. doi:10.1016/j.jrp.2013.05.009
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7, 632–638. doi:10.1177/1745691612463078

Gaze-cueing and trustworthiness: New paper + raw data + R script on OSF

Recently, a student of mine (Felix Süßenbach, now at the University of Edinburgh) and I published a little study on gaze-cueing, and how it is moderated by the trustworthiness of the gazing person.

In a nutshell: although instructed to ignore the gaze, participants shifted their attention in the direction where the other person was looking (this is the well-established gaze-cueing effect), but more so when the sender was introduced as being trustworthy (which is the new result).


 

We also found some exploratory evidence that participants’ trait anxiety moderates that effect, such that highly anxious participants did not differentiate between trustworthy and untrustworthy senders: highly anxious participants always followed the other person’s gaze. For low-anxious participants, in contrast, the gaze-cueing effect was reduced to zero for untrustworthy senders. (This exploratory finding, of course, awaits cross-validation.)


The paper, raw data, and R script for the analyses are on OSF.


 

Süßenbach, F., & Schönbrodt, F. (2014). Not afraid to trust you: Trustworthiness moderates gaze cueing but not in highly anxious participants. Journal of Cognitive Psychology, 26, 670–678. doi:10.1080/20445911.2014.945457

Publisher’s website: http://www.tandfonline.com/doi/abs/10.1080/20445911.2014.945457

Abstract: Gaze cueing (i.e., the shifting of person B’s attention by following person A’s gaze) is closely linked with human interaction and learning. To make the most of this connection, researchers need to investigate possible moderators enhancing or reducing the extent of this attentional shifting. In this study we used a gaze cueing paradigm to demonstrate that the perceived trustworthiness of a cueing person constitutes such a moderator for female participants. Our results show a significant interaction between perceived trustworthiness and the response time trade-off between valid and invalid gaze cues [gaze cueing effect (GCE)], as manifested in greater following of a person’s gaze if this person was trustworthy as opposed to the following of an untrustworthy person’s gaze. An additional exploratory analysis showed potentially moderating influences of trait-anxiety on this interaction (p = .057). The affective background of the experiment (i.e., using positive or negative target stimuli) had no influence.
