# A Compendium of Clean Graphs in R

[This is a guest post by Eric-Jan Wagenmakers and Quentin Gronau introducing the RGraphCompendium.]

Every data analyst knows that a good graph is worth a thousand words, and perhaps a hundred tables. But how should one create a good, clean graph? In R, this task is anything but easy. Many users find it almost impossible to resist the siren song of adding grid lines, including grey backgrounds, using elaborate color schemes, and applying default font sizes that make the text much too small in relation to the graphical elements. As a result, many R graphs are an aesthetic disaster; they are difficult to parse and unfit for publication.

In contrast, a good graph obeys the golden rule: “create graphs unto others as you want them to create graphs unto you”. This means that a good graph is a simple graph, in the Einsteinian sense that a graph should be made as simple as possible, but not simpler. A good graph communicates the main message effectively, without fuss and distraction. In addition, a good graph balances its graphical and textual elements – large symbols demand an increase in line width, and these together require an increase in font size.

The graphing chaos is exacerbated by the default settings in R (and the graphical packages that it provides, such as ggplot2), which are decidedly suboptimal. For instance, the font size is often too small, and the graphical elements are not sufficiently prominent. As a result, creating a good graph in R requires a lot of tinkering, not unlike the process of editing the first draft of a novice writer.
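To illustrate the kind of tinkering involved, here is a minimal base-R sketch; the data set and the specific `par` values are my own illustration, not taken from the compendium:

```r
# Override a few suboptimal defaults so that text and graphical
# elements are balanced (values are illustrative, not prescriptive).
op <- par(cex.lab = 1.5, cex.axis = 1.3, font.lab = 2,
          mar = c(5, 6, 4, 2) + 0.1, bty = "n", las = 1)
plot(mtcars$wt, mtcars$mpg, pch = 21, bg = "grey", cex = 2,
     xlab = "Weight (1000 lbs)", ylab = "Miles per Gallon", axes = FALSE)
axis(1)
axis(2)
par(op)  # restore the previous settings
```

Saving such settings in one place is exactly what motivates the compendium.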

Fortunately, many plots share the same underlying structure, and the tinkering that has led to a clean graph of time series A will generally provide useful starting values for a clean graph of time series B. To exploit the overlap in structure, however, the user needs to remember the settings that were used for the first graph. Usually, this means that the user has to recall the location of the relevant R code. Sometimes the search for this initial code can take longer than the tinkering that was required to produce a clean graph in the first place.

In order to reduce the time needed to find relevant R code, we have constructed a compendium of clean graphs in R. This compendium, available at http://shinyapps.org/apps/RGraphCompendium/index.html, can also be used for teaching or as inspiration for improving one’s own graphs. In addition, the compendium provides a selective overview of the kinds of graphs that researchers often use; the graphs cover a range of statistical scenarios and feature contributions from different data analysts. We do not wish to suggest that the graphs in the compendium are in any way perfect; some are better than others, and overall much remains to be improved. The compendium is undergoing continual refinement. Nevertheless, we hope the graphs are useful in their current state.

As an example of what the compendium has to offer, consider the graph below. This graph shows the proportion of the popular vote as a function of the relative height of the US president against his most successful opponent. Note the large circles for the data, the thick line for the linear relation, and the large font size for the axis labels. Also note that the line does not touch the y-axis (a subtlety that requires deviating from the default). The R code that created the graph is shown below; in the compendium, it is displayed after clicking the box “Show R-code”.


# Presidential data up to and including 2008; data from Stulp et al. 2013
# rm(list=ls())
# height of president divided by height of most successful opponent:
height.ratio <- c(0.924324324, 1.081871345, 1, 0.971098266, 1.029761905,
0.935135135, 0.994252874, 0.908163265, 1.045714286, 1.18404908,
1.115606936, 0.971910112, 0.97752809, 0.978609626, 1,
0.933333333, 1.071428571, 0.944444444, 0.944444444, 1.017142857,
1.011111111, 1.011235955, 1.011235955, 1.089285714, 0.988888889,
1.011111111, 1.032967033, 1.044444444, 1, 1.086705202,
1.011560694, 1.005617978, 1.005617978, 1.005494505, 1.072222222,
1.011111111, 0.983783784, 0.967213115, 1.04519774, 1.027777778,
1.086705202, 1, 1.005347594, 0.983783784, 0.943005181, 1.057142857)

# proportion popular vote for president vs most successful opponent
# NB can be lower than .5 because popular vote does not decide election
pop.vote <- c(0.427780852, 0.56148981, 0.597141922, 0.581254292, 0.530344067,
0.507425996, 0.526679292, 0.536690951, 0.577825976, 0.573225387,
0.550410082, 0.559380032, 0.484823958, 0.500466176, 0.502934212,
0.49569636, 0.516904414, 0.522050547, 0.531494442, 0.60014892,
0.545079801, 0.604274986, 0.51635906, 0.63850958, 0.652184407,
0.587920412, 0.5914898, 0.624614752, 0.550040193, 0.537771958,
0.523673642, 0.554517134, 0.577511576, 0.500856251, 0.613444534,
0.504063153, 0.617883695, 0.51049949, 0.553073235, 0.59166415,
0.538982024, 0.53455133, 0.547304058, 0.497350649, 0.512424242,
0.536914796)

#cor.test(height.ratio,pop.vote)
require(plotrix) # package plotrix is needed for function "ablineclip"
# if the following line and the line containing "dev.off()" are executed, the plot will be saved as a png file in the current working directory
# png("Presidental.png", width = 18, height = 18, units = "cm", res = 800, pointsize = 10)
op <- par(cex.main = 1.5, mar = c(5, 6, 4, 5) + 0.1, mgp = c(3.5, 1, 0), cex.lab = 1.5 , font.lab = 2, cex.axis = 1.3, bty = "n", las=1)
plot(height.ratio, pop.vote, col="black", pch=21, bg = "grey", cex = 2,
xlim=c(.90,1.20), ylim=c(.40,.70), ylab="", xlab="", axes=F)
axis(1)
axis(2)
reg1 <- lm(pop.vote~height.ratio)
ablineclip(reg1, lwd=2,x1 = .9, x2 = 1.2)
par(las=0)
mtext("Presidential Height Ratio", side=1, line=2.5, cex=1.5)
mtext("Relative Support for President", side=2, line=3.7, cex=1.5)
text(1.15, .65, "r = .39", cex=1.5)
# dev.off()
# For comparison, consider the default plot:
#par(op) # reset to default "par" settings
#plot(height.ratio, pop.vote) #yuk!

A more complicated example takes the same data, but uses it to plot the development of the Bayes factor, assessing the evidence for the hypothesis that taller presidential candidates attract more votes. This plot was created based in part on code from Ruud Wetzels and Benjamin Scheibehenne. Note the annotations on the right side of the plot, and the subtle horizontal lines that indicate Jeffreys’ criteria on the evidence. It took some time to figure out how to display the word “Evidence” in its current direction.


# rm(list=ls())
# height of president divided by height of most successful opponent:
height.ratio <- c(0.924324324, 1.081871345, 1, 0.971098266, 1.029761905, 0.935135135, 0.994252874, 0.908163265, 1.045714286, 1.18404908, 1.115606936, 0.971910112, 0.97752809, 0.978609626, 1, 0.933333333, 1.071428571, 0.944444444, 0.944444444, 1.017142857, 1.011111111, 1.011235955, 1.011235955, 1.089285714, 0.988888889, 1.011111111, 1.032967033, 1.044444444, 1, 1.086705202, 1.011560694, 1.005617978, 1.005617978, 1.005494505, 1.072222222, 1.011111111, 0.983783784, 0.967213115, 1.04519774, 1.027777778, 1.086705202, 1, 1.005347594, 0.983783784, 0.943005181, 1.057142857)
# proportion popular vote for president vs most successful opponent
pop.vote <- c(0.427780852, 0.56148981, 0.597141922, 0.581254292, 0.530344067, 0.507425996, 0.526679292, 0.536690951, 0.577825976, 0.573225387, 0.550410082, 0.559380032, 0.484823958, 0.500466176, 0.502934212, 0.49569636, 0.516904414, 0.522050547, 0.531494442, 0.60014892, 0.545079801, 0.604274986, 0.51635906, 0.63850958, 0.652184407, 0.587920412, 0.5914898, 0.624614752, 0.550040193, 0.537771958, 0.523673642, 0.554517134, 0.577511576, 0.500856251, 0.613444534, 0.504063153, 0.617883695, 0.51049949, 0.553073235, 0.59166415, 0.538982024, 0.53455133, 0.547304058, 0.497350649, 0.512424242, 0.536914796)
## now calculate BF sequentially; two-sided test
library("hypergeo")
BF10.HG.exact = function(n, r)
{
#Jeffreys' test for whether a correlation is zero or not
#Jeffreys (1961), pp. 289-292
#Note that if the means are subtracted, n needs to be replaced by n-1
hypgeo = hypergeo((.25+n/2), (-.25+n/2), (3/2+n/2), r^2)
BF10 = ( sqrt(pi) * gamma(n/2+1) * (hypgeo) ) / ( 2 * gamma(3/2+n/2) )
return(as.numeric(BF10))
}
BF10 <- array()
BF10[1] <- 1
BF10[2] <- 1
for (i in 3:length(height.ratio))
{
BF10[i] <- BF10.HG.exact(n=i-1, r=cor(height.ratio[1:i],pop.vote[1:i]))
}
# We wish to plot this Bayes factor sequentially, as it unfolds as more elections become available:
#============ Plot log Bayes factors  ===========================
par(cex.main = 1.3, mar = c(4.5, 6, 4, 7)+.1, mgp = c(3, 1, 0), #bottom, left, top, right
cex.lab = 1.3, font.lab = 2, cex.axis = 1.3, las=1)
xhigh <- 60
plot(log(BF10), xlim=c(1,xhigh), ylim=c(-1*log(200),log(200)), xlab="", ylab="", cex.lab=1.3,cex.axis=1.3, las =1, yaxt="n", bty = "n", type="p", pch=21, bg="grey")

labelsUpper=log(c(100,30,10,3,1))
labelsLower=-1*labelsUpper
criticalP=c(labelsLower,0,labelsUpper)
for (idx in 1:length(criticalP))
{
abline(h=criticalP[idx],col='darkgrey',lwd=1,lty=2)
}
abline(h=0)
axis(side=4, at=criticalP,tick=T,las=2,cex.axis=1, labels=F)
axis(side=4, at=labelsUpper+.602, tick=F, cex.axis=1, labels=c("Extreme","Very strong", "Strong","Moderate", "Anecdotal"))
axis(side=4, at=labelsLower-.602,tick=F, cex.axis=1, labels=c("Extreme","Very strong", "Strong","Moderate", "Anecdotal"))

axis(side=2, at=c(criticalP),tick=T,las=2,cex.axis=1,
labels=c("1/100","1/30","1/10","1/3","1","", "100","30","10","3",""))

mtext(expression(BF), side=2, line=2.5, las=0, cex=1.3)
grid::grid.text("Evidence", 0.97, 0.5, rot = 270, gp=grid::gpar(cex=1.3))
mtext("No. of Elections", side=1, line=2.5, las=1, cex=1.3)

arrows(20, -log(10), 20, -log(100), length=.25, angle=30, code=2, lwd=2)
arrows(20, log(10), 20, log(100), length=.25, angle=30, code=2, lwd=2)
text(25, -log(70), "Evidence for H0", pos=4, cex=1.3)
text(25, log(70), "Evidence for H1", pos=4, cex=1.3)

A final example is borrowed from the graphs in JASP (http://jasp-stats.org), a free and open-source statistical software program with a GUI not unlike that of SPSS. In contrast to SPSS, JASP also includes Bayesian hypothesis tests, the results of which are summarized in graphs such as the one below.


.plotPosterior.ttest <- function(x= NULL, y= NULL, paired= FALSE, oneSided= FALSE, iterations= 10000, rscale= "medium", lwd= 2, cexPoints= 1.5, cexAxis= 1.2, cexYlab= 1.5, cexXlab= 1.5, cexTextBF= 1.4, cexCI= 1.1, cexLegend= 1.4, lwdAxis= 1.2){

library(BayesFactor)

if(rscale == "medium"){
r <- sqrt(2) / 2
}
if(rscale == "wide"){
r <- 1
}
if(rscale == "ultrawide"){
r <- sqrt(2)
}
if(mode(rscale) == "numeric"){
r <- rscale
}

if(oneSided == FALSE){
nullInterval <- NULL
}
if(oneSided == "right"){
nullInterval <- c(0, Inf)
}
if(oneSided == "left"){
nullInterval <- c(-Inf, 0)
}

# sample from delta posterior
samples <- BayesFactor::ttestBF(x=x, y=y, paired=paired, nullInterval= nullInterval, posterior = TRUE, iterations = iterations, rscale= r)

delta <- samples[,"delta"]

# fit density estimator
fit.posterior <-  logspline::logspline(delta)

# density function posterior
dposterior <- function(x, oneSided= oneSided, delta= delta){
if(oneSided == FALSE){
k <- 1
return(k*logspline::dlogspline(x, fit.posterior))
}
if(oneSided == "right"){
k <- 1 / (length(delta[delta >= 0]) / length(delta))
return(ifelse(x < 0, 0, k*logspline::dlogspline(x, fit.posterior)))
}
if(oneSided == "left"){
k <- 1 / (length(delta[delta <= 0]) / length(delta))
return(ifelse(x > 0, 0, k*logspline::dlogspline(x, fit.posterior)))
}
}

# pdf cauchy prior
dprior <- function(delta,r, oneSided= oneSided){
if(oneSided == "right"){
y <- ifelse(delta < 0, 0, 2/(pi*r*(1+(delta/r)^2)))
return(y)
}
if(oneSided == "left"){
y <- ifelse(delta > 0, 0, 2/(pi*r*(1+(delta/r)^2)))
return(y)
}   else{
return(1/(pi*r*(1+(delta/r)^2)))
}
}

# set limits plot
xlim <- vector("numeric", 2)
if(oneSided == FALSE){
xlim[1] <- min(-2, quantile(delta, probs = 0.01)[])
xlim[2] <- max(2, quantile(delta, probs = 0.99)[])
}
if(oneSided == "right"){
xlim[1] <- min(-2, quantile(delta[delta >= 0], probs = 0.01)[])
xlim[2] <- max(2, quantile(delta[delta >= 0], probs = 0.99)[])
}
if(oneSided == "left"){
xlim[1] <- min(-2, quantile(delta[delta <= 0], probs = 0.01)[])
xlim[2] <- max(2, quantile(delta[delta <= 0], probs = 0.99)[])
}

ylim <- vector("numeric", 2)
ylim[1] <- 0
ylim[2] <- max(dprior(0,r, oneSided= oneSided), 1.28*max(dposterior(x= delta, oneSided= oneSided, delta=delta)))

# calculate position of "nice" tick marks and create labels
xticks <- pretty(xlim)
yticks <- pretty(ylim)
xlabels <- formatC(pretty(xlim), 1, format= "f")
ylabels <- formatC(pretty(ylim), 1, format= "f")

# 95% credible interval:
if(oneSided == FALSE){
CIlow <- quantile(delta, probs = 0.025)[]
CIhigh <- quantile(delta, probs = 0.975)[]
}
if(oneSided == "right"){
CIlow <- quantile(delta[delta >= 0], probs = 0.025)[]
CIhigh <- quantile(delta[delta >= 0], probs = 0.975)[]
}
if(oneSided == "left"){
CIlow <- quantile(delta[delta <= 0], probs = 0.025)[]
CIhigh <- quantile(delta[delta <= 0], probs = 0.975)[]
}

par(mar= c(5, 5, 7, 4) + 0.1, las=1)
xlim <- c(min(CIlow,range(xticks)), max(range(xticks), CIhigh))
plot(1,1, xlim= xlim, ylim= range(yticks), ylab= "", xlab="", type= "n", axes= FALSE)
lines(seq(min(xticks), max(xticks),length.out = 1000),dposterior(x=seq(min(xticks), max(xticks),length.out = 1000), oneSided = oneSided, delta=delta), lwd= lwd, xlim= xlim, ylim= range(yticks), ylab= "", xlab= "")
lines(seq(min(xticks), max(xticks),length.out = 1000), dprior(seq(min(xticks), max(xticks),length.out = 1000), r=r, oneSided= oneSided), lwd= lwd, lty=3)

axis(1, at= xticks, labels = xlabels, cex.axis= cexAxis, lwd= lwdAxis)
axis(2, at= yticks, labels= ylabels, cex.axis= cexAxis, lwd= lwdAxis)
mtext(text = "Density", side = 2, las=0, cex = cexYlab, line= 3)
mtext(expression(paste("Effect size", ~delta)), side = 1, cex = cexXlab, line= 2.5)

points(0, dprior(0,r, oneSided= oneSided), col="black", pch=21, bg = "grey", cex= cexPoints)
points(0, dposterior(0, oneSided = oneSided, delta=delta), col="black", pch=21, bg = "grey", cex= cexPoints)

# 95% credible interval
dmax <- optimize(function(x)dposterior(x,oneSided= oneSided, delta=delta), interval= range(xticks), maximum = TRUE)$objective # get maximum density
yCI <- grconvertY(dmax, "user", "ndc") + 0.08
yCIt <- grconvertY(dmax, "user", "ndc") + 0.04
y95 <- grconvertY(dmax, "user", "ndc") + 0.1
yCI <- grconvertY(yCI, "ndc", "user")
yCIt <- grconvertY(yCIt, "ndc", "user")
y95 <- grconvertY(y95, "ndc", "user")
arrows(CIlow, yCI , CIhigh, yCI, angle = 90, code = 3, length= 0.1, lwd= lwd)
text(mean(c(CIlow, CIhigh)), y95,"95%", cex= cexCI)

text(CIlow, yCIt, bquote(.(formatC(CIlow,2, format="f"))), cex= cexCI)
text(CIhigh, yCIt, bquote(.(formatC(CIhigh,2, format= "f"))), cex= cexCI)

# enable plotting in margin
par(xpd=TRUE)

# display BF10 value
BF <- BayesFactor::ttestBF(x=x, y=y, paired=paired, nullInterval= nullInterval, posterior = FALSE, rscale= r)
BF10 <- BayesFactor::extractBF(BF, logbf = FALSE, onlybf = F)[1, "bf"]
BF01 <- 1 / BF10

xx <- grconvertX(0.3, "ndc", "user")
yy <- grconvertY(0.822, "ndc", "user")
yy2 <- grconvertY(0.878, "ndc", "user")

if(BF10 >= 1000000 | BF01 >= 1000000){
BF10t <- format(BF10, digits= 3, scientific = TRUE)
BF01t <- format(BF01, digits= 3, scientific = TRUE)
}
if(BF10 < 1000000 & BF01 < 1000000){
BF10t <- formatC(BF10,2, format = "f")
BF01t <- formatC(BF01,2, format = "f")
}

if(oneSided == FALSE){
text(xx, yy2, bquote(BF==.(BF10t)), cex= cexTextBF)
text(xx, yy, bquote(BF==.(BF01t)), cex= cexTextBF)
}
if(oneSided == "right"){
text(xx, yy2, bquote(BF["+"]==.(BF10t)), cex= cexTextBF)
text(xx, yy, bquote(BF["+"]==.(BF01t)), cex= cexTextBF)
}
if(oneSided == "left"){
text(xx, yy2, bquote(BF["-"]==.(BF10t)), cex= cexTextBF)
text(xx, yy, bquote(BF["-"]==.(BF01t)), cex= cexTextBF)
}

# probability wheel
if(max(nchar(BF10t), nchar(BF01t)) <= 4){
xx <- grconvertX(0.44, "ndc", "user")
}
if(max(nchar(BF10t), nchar(BF01t)) == 5){
xx <- grconvertX(0.44 +  0.001* 5, "ndc", "user")
}
if(max(nchar(BF10t), nchar(BF01t)) == 6){
xx <- grconvertX(0.44 + 0.001* 6, "ndc", "user")
}
if(max(nchar(BF10t), nchar(BF01t)) == 7){
xx <- grconvertX(0.44 + 0.002* max(nchar(BF10t), nchar(BF01t)), "ndc", "user")
}
if(max(nchar(BF10t), nchar(BF01t)) == 8){
xx <- grconvertX(0.44 + 0.003* max(nchar(BF10t), nchar(BF01t)), "ndc", "user")
}
if(max(nchar(BF10t), nchar(BF01t)) > 8){
xx <- grconvertX(0.44 + 0.004* max(nchar(BF10t), nchar(BF01t)), "ndc", "user")
}
yy <- grconvertY(0.85, "ndc", "user")

# dimensions of the probability wheel; the radius value below is a
# reconstruction, as it is missing from the source
radius <- grconvertX(0.2, "ndc", "user") - grconvertX(0.16, "ndc", "user")
A <- radius^2 * pi

# make sure that colored area is centered
alpha <- 2 / (BF01 + 1) * A / radius^2
startpos <- pi/2 - alpha/2

# draw probability wheel (call reconstructed; requires package plotrix)
plotrix::floating.pie(xx, yy, c(BF10, 1), radius = radius,
col = c("darkred", "white"), startpos = startpos)

yy <- grconvertY(0.927, "ndc", "user")
yy2 <- grconvertY(0.77, "ndc", "user")

if(oneSided == FALSE){
text(xx, yy, "data|H1", cex= cexCI)
text(xx, yy2, "data|H0", cex= cexCI)
}
if(oneSided == "right"){
text(xx, yy, "data|H+", cex= cexCI)
text(xx, yy2, "data|H0", cex= cexCI)
}
if(oneSided == "left"){
text(xx, yy, "data|H-", cex= cexCI)
text(xx, yy2, "data|H0", cex= cexCI)
}

xx <- grconvertX(0.57, "ndc", "user")
yy <- grconvertY(0.92, "ndc", "user")
legend(xx, yy, legend = c("Posterior", "Prior"), lty=c(1,3), bty= "n", lwd = c(lwd,lwd), cex= cexLegend)
}

set.seed(1)
.plotPosterior.ttest(x= rnorm(30,0.15), rscale=1) The compendium contains many more examples. We hope some R users will find them convenient. Finally, if you create a clean graph in R that you believe is a candidate for inclusion in this compendium, please do not hesitate to write an email to EJ.Wagenmakers@gmail.com. Your contribution will be acknowledged explicitly, alongside the code you provided.

Eric-Jan Wagenmakers and Quentin Gronau

University of Amsterdam, Department of Psychology.

# Exploring the robustness of Bayes Factors: A convenient plotting function

One critique frequently heard about Bayesian statistics concerns the subjectivity of the assumed prior distribution. If one cherry-picks a prior, the posterior can of course be tweaked, especially when only a few data points are at hand. For example, see the Scholarpedia article on Bayesian statistics:

> In the uncommon situation that the data are extensive and of simple structure, the prior assumptions will be unimportant and the assumed sampling model will be uncontroversial. More generally we would like to report that any conclusions are robust to reasonable changes in both prior and assumed model: this has been termed inference robustness.

(David Spiegelhalter and Kenneth Rice (2009) Bayesian statistics. Scholarpedia, 4(8):5230.)

Therefore, it is suggested that …

> In particular, audiences should ideally fully understand the contribution of the prior distribution to the conclusions. (ibid)

In the example of Bayes factors for t tests (Rouder, Speckman, Sun, Morey, & Iverson, 2009), the assumption that has to be defined a priori is the effect size δ expected under the H1. In the BayesFactor package for R, this can be adjusted via the r parameter. By default, it is set to 0.5, but it can be made wider (larger r’s, which means one expects larger effects) or narrower (r’s close to zero, which means one expects smaller effects in the population).
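In code, the prior scale enters as the rscale argument of ttestBF. The following sketch is illustrative: the data are simulated, the scale values are arbitrary choices, and the BayesFactor package must be installed:

```r
library(BayesFactor)

# The same data evaluated under three prior scales: wider priors
# expect larger effects, narrower priors expect smaller ones.
set.seed(1)
x <- rnorm(30, mean = 0.3)
for (r in c(0.2, 0.5, 1)) {
  bf <- ttestBF(x, rscale = r)
  cat("rscale =", r, " BF10 =", extractBF(bf)$bf, "\n")
}
```

Comparing the three printed Bayes factors shows directly how sensitive the conclusion is to the prior scale.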

In their reanalysis of Bem’s ESP data, Wagenmakers, Wetzels, Borsboom, Kievit, and van der Maas (2011) proposed a robustness analysis for Bayes factors (BF), which simply shows the BF for a range of priors. If the conclusion is the same for a large range of priors, it can be judged robust (this is also called a “sensitivity analysis”).

I wrote an R function that can generate plots like this. Here’s a reproduction of Wagenmakers et al.’s (2011) analysis of Bem’s data – it looks practically identical:

## Bem data, two sided
# provide t values, sample sizes, and the location(s) of the red dot(s)
# set forH1 to FALSE in order to vertically flip the plot.
# Usually I prefer higher BF to be in favor of H1, but I flipped it in order to match Wagenmakers et al (2011)
BFrobustplot(
ts=c(2.51, 2.55, 2.23, 1.74, 1.92, 2.39, 2.03, 1.8, 1.31, 2.96),
ns=c(100, 97, 100, 150, 100, 150, 99, 150, 200, 50),
dots=1, forH1 = FALSE)

You can throw in as many t values and corresponding sample sizes as you want. Furthermore, the function can compute one-sided Bayes factors as described in Wagenmakers and Morey (2013). If this approach is applied to the Bem data, the plot looks as follows – everything is shifted a bit in the direction of H1:

# Bem data one-sided
BFrobustplot(
ts=c(2.51, 2.55, 2.23, 1.74, 1.92, 2.39, 2.03, 1.8, 1.31, 2.96),
ns=c(100, 97, 100, 150, 100, 150, 99, 150, 200, 50),
dots=1, sides="one", forH1 = FALSE)

Bayes factor robustness analysis, one-sided (cf. Wagenmakers et al., 2011; Wagenmakers and Morey, 2013)

Finally, here’s the function:

## (c) 2013 Felix Schönbrodt

#' @title Plots a comparison of a sequence of priors for t test Bayes factors
#'
#' @details
#'
#'
#' @param ts A vector of t values
#' @param ns A vector of corresponding sample sizes
#' @param rs The sequence of rs that should be tested. r should run up to 2 (higher values are implausible; E.-J. Wagenmakers, personal communication, Aug 22, 2013)
#' @param labels Names for the studies (displayed in the facet headings)
#' @param dots Values of r's which should be marked with a red dot
#' @param plot If TRUE, a ggplot is returned. If false, a data frame with the computed Bayes factors is returned
#' @param sides If set to "two" (default), a two-sided Bayes factor is computed. If set to "one", a one-sided Bayes factor is computed. In this case, it is assumed that positive t values correspond to results in the predicted direction and negative t values to results in the unpredicted direction. For details, see Wagenmakers, E. J., & Morey, R. D. (2013). Simple relation between one-sided and two-sided Bayesian point-null hypothesis tests.
#' @param nrow Number of rows of the faceted plot.
#' @param forH1 Defines the direction of the BF. If forH1 is TRUE, BF > 1 speak in favor of H1 (i.e., the quotient is defined as H1/H0). If forH1 is FALSE, it's the reverse direction.
#'
#' @references
#'
#' Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t-tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin and Review, 16, 225-237.
#' Wagenmakers, E.-J., & Morey, R. D. (2013). Simple relation between one-sided and two-sided Bayesian point-null hypothesis tests. Manuscript submitted for publication
#' Wagenmakers, E.-J., Wetzels, R., Borsboom, D., Kievit, R. & van der Maas, H. L. J. (2011). Yes, psychologists must change the way they analyze their data: Clarifications for Bem, Utts, & Johnson (2011)

BFrobustplot <- function(
ts, ns, rs=seq(0, 2, length.out=200), dots=1, plot=TRUE,
labels=c(), sides="two", nrow=2, xticks=3, forH1=TRUE)
{
library(BayesFactor)

# compute one-sided p-values from ts and ns
ps <- pt(ts, df=ns-1, lower.tail = FALSE)   # one-sided test

# add the dots location to the sequences of r's
rs <- c(rs, dots)

res <- data.frame()
for (r in rs) {

# first: calculate two-sided BF
B_e0 <- c()
for (i in 1:length(ts))
B_e0 <- c(B_e0, exp(ttest.tstat(t = ts[i], n1 = ns[i], rscale=r)$bf))

# second: calculate one-sided BF
B_r0 <- c()
for (i in 1:length(ts)) {
if (ts[i] > 0) {
# correct direction
B_r0 <- c(B_r0, (2 - 2*ps[i])*B_e0[i])
} else {
# wrong direction
B_r0 <- c(B_r0, (1 - ps[i])*2*B_e0[i])
}
}

res0 <- data.frame(t=ts, n=ns, BF_two=B_e0, BF_one=B_r0, r=r)
if (length(labels) > 0) {
res0$labels <- labels
res0$heading <- factor(1:length(labels), labels=paste0(labels, "\n(t = ", ts, ", df = ", ns-1, ")"), ordered=TRUE)
} else {
res0$heading <- factor(1:length(ts), labels=paste0("t = ", ts, ", df = ", ns-1), ordered=TRUE)
}
res <- rbind(res, res0)
}

# define the measure to be plotted: one- or two-sided?
res$BF <- res[, paste0("BF_", sides)]

# Flip BF if requested
if (forH1 == FALSE) {
res$BF <- 1/res$BF
}

if (plot==TRUE) {
library(ggplot2)
p1 <- ggplot(res, aes(x=r, y=log(BF))) + geom_line() + facet_wrap(~heading, nrow=nrow) + theme_bw() + ylab("log(BF)")
p1 <- p1 + geom_hline(yintercept=c(c(-log(c(30, 10, 3)), log(c(3, 10, 30)))), linetype="dotted", color="darkgrey")
p1 <- p1 + geom_hline(yintercept=log(1), linetype="dashed", color="darkgreen")

p1 <- p1 + geom_point(data=res[res$r %in% dots,], aes(x=r, y=log(BF)), color="red", size=2)

p1 <- p1 + annotate("text", x=max(rs)*1.8, y=-2.85, label=paste0("Strong~H[", ifelse(forH1==TRUE,0,1), "]"), hjust=1, vjust=.5, size=3, color="black", parse=TRUE)
p1 <- p1 + annotate("text", x=max(rs)*1.8, y=-1.7 , label=paste0("Moderate~H[", ifelse(forH1==TRUE,0,1), "]"), hjust=1, vjust=.5, size=3, color="black", parse=TRUE)
p1 <- p1 + annotate("text", x=max(rs)*1.8, y=-.55 , label=paste0("Anecdotal~H[", ifelse(forH1==TRUE,0,1), "]"), hjust=1, vjust=.5, size=3, color="black", parse=TRUE)
p1 <- p1 + annotate("text", x=max(rs)*1.8, y=2.86 , label=paste0("Strong~H[", ifelse(forH1==TRUE,1,0), "]"), hjust=1, vjust=.5, size=3, color="black", parse=TRUE)
p1 <- p1 + annotate("text", x=max(rs)*1.8, y=1.7  , label=paste0("Moderate~H[", ifelse(forH1==TRUE,1,0), "]"), hjust=1, vjust=.5, size=3, color="black", parse=TRUE)
p1 <- p1 + annotate("text", x=max(rs)*1.8, y=.55  , label=paste0("Anecdotal~H[", ifelse(forH1==TRUE,1,0), "]"), hjust=1, vjust=.5, size=3, color="black", parse=TRUE)

# set scale ticks
p1 <- p1 + scale_y_continuous(breaks=c(c(-log(c(30, 10, 3)), 0, log(c(3, 10, 30)))), labels=c("-log(30)", "-log(10)", "-log(3)", "log(1)", "log(3)", "log(10)", "log(30)"))
p1 <- p1 + scale_x_continuous(breaks=seq(min(rs), max(rs), length.out=xticks))

return(p1)
} else {
return(res)
}
}

References

Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t-tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin and Review, 16, 225-237.
Wagenmakers, E.-J., & Morey, R. D. (2013). Simple relation between one-sided and two-sided Bayesian point-null hypothesis tests. Manuscript submitted for publication.
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., Kievit, R. & van der Maas, H. L. J. (2011). Yes, psychologists must change the way they analyze their data: Clarifications for Bem, Utts, & Johnson (2011).

# Validating email addresses in R

I am currently programming automated report generation in R – participants fill out a questionnaire, and they receive a nicely formatted PDF with their personality profile. I use knitr, LaTeX, and the sendmailR package.

Some participants did not provide valid email addresses, which caused the sendmail function to crash. Therefore I wanted some validation of email addresses – here’s the function:

isValidEmail <- function(x) {
grepl("\\<[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,}\\>", as.character(x), ignore.case=TRUE)
}

Let’s test some valid and invalid addresses:
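For instance, with a few made-up addresses (my own illustrations; the function is repeated here so the snippet is self-contained):

```r
isValidEmail <- function(x) {
  grepl("\\<[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,}\\>", as.character(x), ignore.case=TRUE)
}

isValidEmail(c("jane.doe@example.com",   # TRUE
               "foo@bar.io",             # TRUE
               "no-at-sign.example.com", # FALSE: no @
               "foo@bar"))               # FALSE: no top-level domain
```

Note that a regex check like this only filters out obviously malformed input; it cannot guarantee that a syntactically valid address actually exists.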