
A Compendium of Clean Graphs in R

[This is a guest post by Eric-Jan Wagenmakers and Quentin Gronau introducing the RGraphCompendium. Click here to see the full compendium!]

Every data analyst knows that a good graph is worth a thousand words, and perhaps a hundred tables. But how should one create a good, clean graph? In R, this task is anything but easy. Many users find it almost impossible to resist the siren song of adding grid lines, including grey backgrounds, using elaborate color schemes, and applying default font sizes that make the text much too small in relation to the graphical elements. As a result, many R graphs are an aesthetic disaster; they are difficult to parse and unfit for publication.

In contrast, a good graph obeys the golden rule: “create graphs unto others as you want them to create graphs unto you”. This means that a good graph is a simple graph, in the Einsteinian sense that a graph should be made as simple as possible, but not simpler. A good graph communicates the main message effectively, without fuss and distraction. In addition, a good graph balances its graphical and textual elements – large symbols demand an increase in line width, and these together require an increase in font size.

The graphing chaos is exacerbated by the default settings in R (and the graphical packages that it provides, such as ggplot2), which are decidedly suboptimal. For instance, the font size is often too small, and the graphical elements are not sufficiently prominent. As a result, creating a good graph in R requires a lot of tinkering, not unlike the process of editing the first draft of a novice writer.
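As a small illustration of that tinkering, here is a sketch of the kind of par() adjustments used throughout the compendium (the exact values are illustrative, not a universal recommendation):

# Illustrative base-graphics defaults overridden via par()
op <- par(cex.lab = 1.5,              # larger axis labels
          cex.axis = 1.3,             # larger tick labels
          font.lab = 2,               # bold axis labels
          mar = c(5, 6, 4, 5) + 0.1,  # wider margins
          bty = "n",                  # no box around the plot region
          las = 1)                    # horizontal axis numbers
plot(rnorm(20), rnorm(20), pch = 21, bg = "grey", cex = 2,
     xlab = "Predictor", ylab = "Outcome")
par(op)  # restore the previous settings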

Fortunately, many plots share the same underlying structure, and the tinkering that has led to a clean graph of time series A will generally provide useful starting values for a clean graph of time series B. To exploit the overlap in structure, however, the user needs to remember the settings that were used for the first graph. Usually, this means that the user has to recall the location of the relevant R code. Sometimes the search for this initial code can take longer than the tinkering that was required to produce a clean graph in the first place.

In order to reduce the time needed to find relevant R code, we have constructed a compendium of clean graphs in R. This compendium, available at http://shinyapps.org/apps/RGraphCompendium/index.html, can also be used for teaching or as inspiration for improving one’s own graphs. In addition, the compendium provides a selective overview of the kinds of graphs that researchers often use; the graphs cover a range of statistical scenarios and feature contributions from different data analysts. We do not wish to suggest that the graphs in the compendium are in any way perfect; some are better than others, and overall much remains to be improved. The compendium is undergoing continual refinement. Nevertheless, we hope the graphs are useful in their current state.

As an example of what the compendium has to offer, consider the graph below. This graph shows the proportion of the popular vote as a function of the relative height of the US president against his most successful opponent. Note the large circles for the data, the thick line for the linear relation, and the large font size for the axis labels. Also, note that the line does not touch the y-axis (a subtlety that requires deviating from the default). As in the compendium, the R code that created the graph is displayed after clicking the box “Show R-code”.

Show R-Code

# Presidential data up to and including 2008; data from Stulp et al. 2013
# rm(list=ls())
# height of president divided by height of most successful opponent:
height.ratio <- c(0.924324324, 1.081871345, 1, 0.971098266, 1.029761905,
   0.935135135, 0.994252874, 0.908163265, 1.045714286, 1.18404908,
   1.115606936, 0.971910112, 0.97752809, 0.978609626, 1,
   0.933333333, 1.071428571, 0.944444444, 0.944444444, 1.017142857,
   1.011111111, 1.011235955, 1.011235955, 1.089285714, 0.988888889,
   1.011111111, 1.032967033, 1.044444444, 1, 1.086705202,
   1.011560694, 1.005617978, 1.005617978, 1.005494505, 1.072222222,
   1.011111111, 0.983783784, 0.967213115, 1.04519774, 1.027777778,
   1.086705202, 1, 1.005347594, 0.983783784, 0.943005181, 1.057142857)

# proportion popular vote for president vs most successful opponent
# NB can be lower than .5 because the popular vote does not decide the election
pop.vote <- c(0.427780852, 0.56148981, 0.597141922, 0.581254292, 0.530344067,
  0.507425996, 0.526679292, 0.536690951, 0.577825976, 0.573225387,
  0.550410082, 0.559380032, 0.484823958, 0.500466176, 0.502934212,
  0.49569636, 0.516904414, 0.522050547, 0.531494442, 0.60014892,
  0.545079801, 0.604274986, 0.51635906, 0.63850958, 0.652184407,
  0.587920412, 0.5914898, 0.624614752, 0.550040193, 0.537771958,
  0.523673642, 0.554517134, 0.577511576, 0.500856251, 0.613444534,
  0.504063153, 0.617883695, 0.51049949, 0.553073235, 0.59166415,
  0.538982024, 0.53455133, 0.547304058, 0.497350649, 0.512424242,
  0.536914796)
           
#cor.test(height.ratio,pop.vote)
require(plotrix) # package plotrix is needed for the function "ablineclip"
# if the following line and the line containing "dev.off()" are executed, the plot will be saved as a png file in the current working directory
# png("Presidental.png", width = 18, height = 18, units = "cm", res = 800, pointsize = 10)
op <- par(cex.main = 1.5, mar = c(5, 6, 4, 5) + 0.1, mgp = c(3.5, 1, 0), cex.lab = 1.5 , font.lab = 2, cex.axis = 1.3, bty = "n", las=1)
plot(height.ratio, pop.vote, col="black", pch=21, bg = "grey", cex = 2,
     xlim=c(.90,1.20), ylim=c(.40,.70), ylab="", xlab="", axes=F)
axis(1)
axis(2)
reg1 <- lm(pop.vote~height.ratio)
ablineclip(reg1, lwd=2,x1 = .9, x2 = 1.2)
par(las=0)
mtext("Presidential Height Ratio", side=1, line=2.5, cex=1.5)
mtext("Relative Support for President", side=2, line=3.7, cex=1.5)
text(1.15, .65, "r = .39", cex=1.5)
# dev.off()
# For comparison, consider the default plot:
#par(op) # reset to default "par" settings
#plot(height.ratio, pop.vote) #yuk!

[Figure: proportion of the popular vote as a function of presidential height ratio]

A more complicated example takes the same data, but uses it to plot the development of the Bayes factor, assessing the evidence for the hypothesis that taller presidential candidates attract more votes. This plot is based in part on code from Ruud Wetzels and Benjamin Scheibehenne. Note the annotations on the right side of the plot, and the subtle horizontal lines that indicate Jeffreys’ criteria on the evidence. It took some time to figure out how to display the word “Evidence” in its current direction.

Show R-Code

# rm(list=ls())
# height of president divided by height of most successful opponent:
height.ratio <- c(0.924324324, 1.081871345, 1, 0.971098266, 1.029761905, 0.935135135, 0.994252874, 0.908163265, 1.045714286, 1.18404908, 1.115606936, 0.971910112, 0.97752809, 0.978609626, 1, 0.933333333, 1.071428571, 0.944444444, 0.944444444, 1.017142857, 1.011111111, 1.011235955, 1.011235955, 1.089285714, 0.988888889, 1.011111111, 1.032967033, 1.044444444, 1, 1.086705202, 1.011560694, 1.005617978, 1.005617978, 1.005494505, 1.072222222, 1.011111111, 0.983783784, 0.967213115, 1.04519774, 1.027777778, 1.086705202, 1, 1.005347594, 0.983783784, 0.943005181, 1.057142857)
# proportion popular vote for president vs most successful opponent
pop.vote <- c(0.427780852, 0.56148981, 0.597141922, 0.581254292, 0.530344067, 0.507425996, 0.526679292, 0.536690951, 0.577825976, 0.573225387, 0.550410082, 0.559380032, 0.484823958, 0.500466176, 0.502934212, 0.49569636, 0.516904414, 0.522050547, 0.531494442, 0.60014892, 0.545079801, 0.604274986, 0.51635906, 0.63850958, 0.652184407, 0.587920412, 0.5914898, 0.624614752, 0.550040193, 0.537771958, 0.523673642, 0.554517134, 0.577511576, 0.500856251, 0.613444534, 0.504063153, 0.617883695, 0.51049949, 0.553073235, 0.59166415, 0.538982024, 0.53455133, 0.547304058, 0.497350649, 0.512424242, 0.536914796)
## now calculate BF sequentially; two-sided test
library("hypergeo")
BF10.HG.exact = function(n, r)
{
#Jeffreys' test for whether a correlation is zero or not
#Jeffreys (1961), pp. 289-292
#Note that if the means are subtracted, n needs to be replaced by n-1
  hypgeo = hypergeo((.25+n/2), (-.25+n/2), (3/2+n/2), r^2)
  BF10 = ( sqrt(pi) * gamma(n/2+1) * (hypgeo) ) / ( 2 * gamma(3/2+n/2) )
  return(as.numeric(BF10))
}
BF10 <- array()
BF10[1]<-1
BF10[2]<-1
for (i in 3:length(height.ratio))
{
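  # cor() subtracts the means, so n = i - 1 here (see the note in BF10.HG.exact)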
  BF10[i] <- BF10.HG.exact(n=i-1, r=cor(height.ratio[1:i],pop.vote[1:i]))
}
# We wish to plot this Bayes factor sequentially, as it unfolds as more elections become available:
#============ Plot log Bayes factors  ===========================
par(cex.main = 1.3, mar = c(4.5, 6, 4, 7)+.1, mgp = c(3, 1, 0), #bottom, left, top, right
  cex.lab = 1.3, font.lab = 2, cex.axis = 1.3, las=1)
xhigh <- 60
plot(log(BF10), xlim=c(1,xhigh), ylim=c(-1*log(200),log(200)), xlab="", ylab="", cex.lab=1.3,cex.axis=1.3, las =1, yaxt="n", bty = "n", type="p", pch=21, bg="grey")

labelsUpper=log(c(100,30,10,3,1))
labelsLower=-1*labelsUpper
criticalP=c(labelsLower,0,labelsUpper)
for (idx in 1:length(criticalP))
{
  abline(h=criticalP[idx],col='darkgrey',lwd=1,lty=2)
}
abline(h=0)
axis(side=4, at=criticalP,tick=T,las=2,cex.axis=1, labels=F)
axis(side=4, at=labelsUpper+.602, tick=F, cex.axis=1, labels=c("Extreme","Very strong", "Strong","Moderate", "Anecdotal"))
axis(side=4, at=labelsLower-.602,tick=F, cex.axis=1, labels=c("Extreme","Very strong", "Strong","Moderate", "Anecdotal"))

axis(side=2, at=c(criticalP),tick=T,las=2,cex.axis=1,
labels=c("1/100","1/30","1/10","1/3","1","", "100","30","10","3",""))
 
mtext(expression(BF[1][0]), side=2, line=2.5, las=0, cex=1.3)
grid::grid.text("Evidence", 0.97, 0.5, rot = 270, gp=grid::gpar(cex=1.3))
mtext("No. of Elections", side=1, line=2.5, las=1, cex=1.3)

arrows(20, -log(10), 20, -log(100), length=.25, angle=30, code=2, lwd=2)
arrows(20, log(10), 20, log(100), length=.25, angle=30, code=2, lwd=2)
text(25, -log(70), "Evidence for H0", pos=4, cex=1.3)
text(25, log(70), "Evidence for H1", pos=4, cex=1.3)

[Figure: sequential development of the Bayes factor as elections accumulate]

A final example is borrowed from the graphs in JASP (http://jasp-stats.org), a free and open-source statistical software program with a GUI not unlike that of SPSS. In contrast to SPSS, JASP also includes Bayesian hypothesis tests, the results of which are summarized in graphs such as the one below.

Show R-Code

.plotPosterior.ttest <- function(x= NULL, y= NULL, paired= FALSE, oneSided= FALSE, iterations= 10000, rscale= "medium", lwd= 2, cexPoints= 1.5, cexAxis= 1.2, cexYlab= 1.5, cexXlab= 1.5, cexTextBF= 1.4, cexCI= 1.1, cexLegend= 1.4, lwdAxis= 1.2){
   
    library(BayesFactor)
   
    if(rscale == "medium"){
        r <- sqrt(2) / 2
    }
    if(rscale == "wide"){
        r <- 1
    }
    if(rscale == "ultrawide"){
        r <- sqrt(2)
    }
    if(mode(rscale) == "numeric"){
        r <- rscale
    }
   
    if(oneSided == FALSE){
        nullInterval <- NULL
    }
    if(oneSided == "right"){
        nullInterval <- c(0, Inf)
    }
    if(oneSided == "left"){
        nullInterval <- c(-Inf, 0)
    }
   
    # sample from delta posterior
    samples <- BayesFactor::ttestBF(x=x, y=y, paired=paired, nullInterval= nullInterval, posterior = TRUE, iterations = iterations, rscale= r)
   
    delta <- samples[,"delta"]
   
    # fit density estimator
    fit.posterior <-  logspline::logspline(delta)
   
    # density function posterior
    dposterior <- function(x, oneSided= oneSided, delta= delta){
        if(oneSided == FALSE){
            k <- 1
            return(k*logspline::dlogspline(x, fit.posterior))
        }
        if(oneSided == "right"){
            k <- 1 / (length(delta[delta >= 0]) / length(delta))
            return(ifelse(x < 0, 0, k*logspline::dlogspline(x, fit.posterior)))
        }
        if(oneSided == "left"){
            k <- 1 / (length(delta[delta <= 0]) / length(delta))
            return(ifelse(x > 0, 0, k*logspline::dlogspline(x, fit.posterior)))
        }  
    }  
   
    # pdf cauchy prior
    dprior <- function(delta,r, oneSided= oneSided){
        if(oneSided == "right"){
            y <- ifelse(delta < 0, 0, 2/(pi*r*(1+(delta/r)^2)))
            return(y)
        }
        if(oneSided == "left"){
            y <- ifelse(delta > 0, 0, 2/(pi*r*(1+(delta/r)^2)))
            return(y)
        }   else{
            return(1/(pi*r*(1+(delta/r)^2)))
        }
    }
   
    # set limits plot
    xlim <- vector("numeric", 2)
    if(oneSided == FALSE){
        xlim[1] <- min(-2, quantile(delta, probs = 0.01)[[1]])
        xlim[2] <- max(2, quantile(delta, probs = 0.99)[[1]])
    }
    if(oneSided == "right"){
        xlim[1] <- min(-2, quantile(delta[delta >= 0], probs = 0.01)[[1]])
        xlim[2] <- max(2, quantile(delta[delta >= 0], probs = 0.99)[[1]])
    }
    if(oneSided == "left"){
        xlim[1] <- min(-2, quantile(delta[delta <= 0], probs = 0.01)[[1]])
        xlim[2] <- max(2, quantile(delta[delta <= 0], probs = 0.99)[[1]])
    }
   
    ylim <- vector("numeric", 2)
    ylim[1] <- 0
    ylim[2] <- max(dprior(0,r, oneSided= oneSided), 1.28*max(dposterior(x= delta, oneSided= oneSided, delta=delta)))
   
    # calculate position of "nice" tick marks and create labels
    xticks <- pretty(xlim)
    yticks <- pretty(ylim)
    xlabels <- formatC(pretty(xlim), 1, format= "f")
    ylabels <- formatC(pretty(ylim), 1, format= "f")
   
    # 95% credible interval:
    if(oneSided == FALSE){
        CIlow <- quantile(delta, probs = 0.025)[[1]]
        CIhigh <- quantile(delta, probs = 0.975)[[1]]
    }
    if(oneSided == "right"){
        CIlow <- quantile(delta[delta >= 0], probs = 0.025)[[1]]
        CIhigh <- quantile(delta[delta >= 0], probs = 0.975)[[1]]
    }
    if(oneSided == "left"){
        CIlow <- quantile(delta[delta <= 0], probs = 0.025)[[1]]
        CIhigh <- quantile(delta[delta <= 0], probs = 0.975)[[1]]
    }  
   
    par(mar= c(5, 5, 7, 4) + 0.1, las=1)
    xlim <- c(min(CIlow,range(xticks)[1]), max(range(xticks)[2], CIhigh))
    plot(1,1, xlim= xlim, ylim= range(yticks), ylab= "", xlab="", type= "n", axes= FALSE)
    lines(seq(min(xticks), max(xticks),length.out = 1000),dposterior(x=seq(min(xticks), max(xticks),length.out = 1000), oneSided = oneSided, delta=delta), lwd= lwd, xlim= xlim, ylim= range(yticks), ylab= "", xlab= "")
    lines(seq(min(xticks), max(xticks),length.out = 1000), dprior(seq(min(xticks), max(xticks),length.out = 1000), r=r, oneSided= oneSided), lwd= lwd, lty=3)
   
    axis(1, at= xticks, labels = xlabels, cex.axis= cexAxis, lwd= lwdAxis)
    axis(2, at= yticks, labels= ylabels, cex.axis= cexAxis, lwd= lwdAxis)
    mtext(text = "Density", side = 2, las=0, cex = cexYlab, line= 3)
    mtext(expression(paste("Effect size", ~delta)), side = 1, cex = cexXlab, line= 2.5)
   
    points(0, dprior(0,r, oneSided= oneSided), col="black", pch=21, bg = "grey", cex= cexPoints)
    points(0, dposterior(0, oneSided = oneSided, delta=delta), col="black", pch=21, bg = "grey", cex= cexPoints)
   
    # 95% credible interval
    dmax <- optimize(function(x)dposterior(x,oneSided= oneSided, delta=delta), interval= range(xticks), maximum = TRUE)$objective # get maximum density
    yCI <- grconvertY(dmax, "user", "ndc") + 0.08
    yCIt <- grconvertY(dmax, "user", "ndc") + 0.04
    y95 <- grconvertY(dmax, "user", "ndc") + 0.1
    yCI <- grconvertY(yCI, "ndc", "user")
    yCIt <- grconvertY(yCIt, "ndc", "user")
    y95 <- grconvertY(y95, "ndc", "user")
    arrows(CIlow, yCI , CIhigh, yCI, angle = 90, code = 3, length= 0.1, lwd= lwd)
    text(mean(c(CIlow, CIhigh)), y95,"95%", cex= cexCI)
   
    text(CIlow, yCIt, bquote(.(formatC(CIlow,2, format="f"))), cex= cexCI)
    text(CIhigh, yCIt, bquote(.(formatC(CIhigh,2, format= "f"))), cex= cexCI)
   
    # enable plotting in margin
    par(xpd=TRUE)
   
    # display BF10 value
    BF <- BayesFactor::ttestBF(x=x, y=y, paired=paired, nullInterval= nullInterval, posterior = FALSE, rscale= r)
    BF10 <- BayesFactor::extractBF(BF, logbf = FALSE, onlybf = F)[1, "bf"]
    BF01 <- 1 / BF10
   
    xx <- grconvertX(0.3, "ndc", "user")
    yy <- grconvertY(0.822, "ndc", "user")
    yy2 <- grconvertY(0.878, "ndc", "user")
   
    if(BF10 >= 1000000 | BF01 >= 1000000){
        BF10t <- format(BF10, digits= 3, scientific = TRUE)
        BF01t <- format(BF01, digits= 3, scientific = TRUE)
    }
    if(BF10 < 1000000 & BF01 < 1000000){
        BF10t <- formatC(BF10,2, format = "f")
        BF01t <- formatC(BF01,2, format = "f")
    }
   
    if(oneSided == FALSE){
        text(xx, yy2, bquote(BF[10]==.(BF10t)), cex= cexTextBF)
        text(xx, yy, bquote(BF[0][1]==.(BF01t)), cex= cexTextBF)
    }
    if(oneSided == "right"){
        text(xx, yy2, bquote(BF["+"][0]==.(BF10t)), cex= cexTextBF)
        text(xx, yy, bquote(BF[0]["+"]==.(BF01t)), cex= cexTextBF)
    }
    if(oneSided == "left"){
        text(xx, yy2, bquote(BF["-"][0]==.(BF10t)), cex= cexTextBF)
        text(xx, yy, bquote(BF[0]["-"]==.(BF01t)), cex= cexTextBF)
    }
   
    # probability wheel: shift the x-position depending on the width of the BF text
    if(max(nchar(BF10t), nchar(BF01t)) <= 4){
        xx <- grconvertX(0.44, "ndc", "user")
    }
    if(max(nchar(BF10t), nchar(BF01t)) == 5){
        xx <- grconvertX(0.44 +  0.001* 5, "ndc", "user")
    }
    if(max(nchar(BF10t), nchar(BF01t)) == 6){
        xx <- grconvertX(0.44 + 0.001* 6, "ndc", "user")
    }
    if(max(nchar(BF10t), nchar(BF01t)) == 7){
        xx <- grconvertX(0.44 + 0.002* max(nchar(BF10t), nchar(BF01t)), "ndc", "user")
    }
    if(max(nchar(BF10t), nchar(BF01t)) == 8){
        xx <- grconvertX(0.44 + 0.003* max(nchar(BF10t), nchar(BF01t)), "ndc", "user")
    }
    if(max(nchar(BF10t), nchar(BF01t)) > 8){
        xx <- grconvertX(0.44 + 0.004* max(nchar(BF10t), nchar(BF01t)), "ndc", "user")
    }
    yy <- grconvertY(0.85, "ndc", "user")
   
    # make sure that colored area is centered
    radius <- 0.06*diff(range(xticks))
    A <- radius^2*pi
    alpha <- 2 / (BF01 + 1) * A / radius^2
    startpos <- pi/2 - alpha/2
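    # Note: A/radius^2 = pi, so alpha = 2*pi/(BF01 + 1) = 2*pi*BF10/(BF10 + 1),
    # i.e., the angle of the darkred (H1) slice; startpos centers that slice at 12 o'clock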
   
    # draw probability wheel
    plotrix::floating.pie(xx, yy,c(BF10, 1),radius= radius, col=c("darkred", "white"), lwd=2,startpos = startpos)
   
    yy <- grconvertY(0.927, "ndc", "user")
    yy2 <- grconvertY(0.77, "ndc", "user")
   
    if(oneSided == FALSE){
        text(xx, yy, "data|H1", cex= cexCI)
        text(xx, yy2, "data|H0", cex= cexCI)
    }
    if(oneSided == "right"){
        text(xx, yy, "data|H+", cex= cexCI)
        text(xx, yy2, "data|H0", cex= cexCI)
    }
    if(oneSided == "left"){
        text(xx, yy, "data|H-", cex= cexCI)
        text(xx, yy2, "data|H0", cex= cexCI)
    }
   
    # add legend
    xx <- grconvertX(0.57, "ndc", "user")
    yy <- grconvertY(0.92, "ndc", "user")
    legend(xx, yy, legend = c("Posterior", "Prior"), lty=c(1,3), bty= "n", lwd = c(lwd,lwd), cex= cexLegend)
}

set.seed(1)
.plotPosterior.ttest(x= rnorm(30,0.15), rscale=1)

[Figure: posterior and prior density for the effect size, with Bayes factors, probability wheel, and 95% credible interval]

The compendium contains many more examples. We hope some R users will find them convenient. Finally, if you create a clean graph in R that you believe is a candidate for inclusion in this compendium, please do not hesitate to write an email to EJ.Wagenmakers@gmail.com. Your contribution will be acknowledged explicitly, alongside the code you provided.

Eric-Jan Wagenmakers and Quentin Gronau

University of Amsterdam, Department of Psychology.


Visually weighted/ Watercolor Plots, new variants: Please vote!

Update Oct-23: Added a new parameter, add, to the function. Now multiple groups can be plotted in a single plot (see the example in my comment).

As a follow-up on my R implementation of Solomon’s watercolor plots, I made some improvements to the function. I fine-tuned the graphical parameters (the median smoother line now diminishes faster with increasing CIs, and the shaded watercolors look prettier). Furthermore, the function is faster and has more features:

  • You can define any standard regression function for the bootstrap procedure.
    • vwReg(y ~ x, df, method=lm)
    • vwReg(y ~ x + I(x^2), df, method=lm)
  • Provide parameters for the fitting function.
    • You can make the smoother’s span larger. Then it takes more points into account when doing the local fitting. By default, the smoother fits a polynomial of degree two – that means that as you increase span, you will approach the overall quadratic fit: vwReg(y ~ x, df, span=2)
    • You can also make the smoother’s span smaller; then it uses fewer points for the local fitting. If it is too small, it will overfit and approach each single data point. The default span (.75) seemed to be the best choice for me for a variety of data sets: vwReg(y ~ x, df, span=0.5)
    • Use a robust M-estimator for the smoother; see ?loess for details: vwReg(y ~ x, df, family="symmetric")
  • Provide your own color scheme (or, for example, a black-and-white scheme). For examples, see the pictures below.
  • Quantize the color ramp, so that regions for 1, 2, and 3 SD have the same color (an idea proposed by John Mashey).
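To make these options concrete, here is a small sketch of example calls; df is a hypothetical data frame with numeric columns x and y, and df2 a hypothetical second group:

# Sketch: example calls for vwReg (df and df2 are hypothetical data frames)
df <- data.frame(x = rnorm(200))
df$y <- 0.6 * df$x + rnorm(200)

vwReg(y ~ x, df)                              # the current default (Plot 1 below)
vwReg(y ~ x, df, family = "symmetric")        # robust M-estimator smoother (Plot 2)
vwReg(y ~ x, df, span = 2)                    # larger span, approaches the quadratic fit (Plot 3)
vwReg(y ~ x, df, bw = TRUE, quantize = "SD")  # black-and-white 1-2-3-SD plot (Plot 11)

# Two groups in one plot: with add = TRUE, only the plot elements are returned
# and can be added to an existing ggplot:
# vwReg(y ~ x, df) +
#   vwReg(y ~ x, df2, add = TRUE, shade = FALSE, spag = TRUE, spag.color = "red")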

At the end of this post is the source code for the R function.

Some pictures – please vote!

Here are some variants of the watercolor plots – at the end, you can vote for your favorite (or write something into the comments). I am still fine-tuning the default parameters, and I am interested in your opinions on what the best default would be.

Plot 1: The current default

Plot 2: Using an M-estimator for the bootstrapped smoothers. Usually you get wider confidence intervals.

Plot 3: Increasing the span of the smoothers

Plot 4: Decreasing the span of the smoothers

Plot 5: Changing the color scheme, using a predefined ColorBrewer palette. You can see all available palettes by using this command: library(RColorBrewer); display.brewer.all()

Plot 6: Using a custom-made palette

Plot 7: Using a custom-made palette; with the parameter bias you can shift the color ramp to the “higher” colors:

Plot 8: A black and white version of the plot

Plot 9: The anti-Tufte-plot: Using as much ink as possible by reversing black and white (a.k.a. “the Milky-Way Plot”)

Plot 10: The Northern Light Plot/ fMRI plot. This plotting technique has already been used by a suspicious company (called IRET – never heard of it). I hurried to publish the R code under a FreeBSD license before they could patent it! Feel free to use, share, or change the code for whatever purpose you need. Isn’t that beautiful?

Plot 11: The 1-2-3-SD plot. You can use your own color schemes as well, e.g.: vwReg(y~x, df, bw=TRUE, quantize="SD")

Any comments or ideas? Or just a vote? If you produce some nice plots with your data, you can send them to me, and I will post a gallery of the most impressive “data art”!

Cheers,

Felix

Which watercolor plot do you like most?
# Copyright 2012 Felix Schönbrodt
# All rights reserved.
#
# FreeBSD License
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#      
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#      
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER "AS IS" AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# The views and conclusions contained in the software and documentation
# are those of the authors and should not be interpreted as representing
# official policies, either expressed or implied, of the copyright
# holder.

# Version history:
# 0.1: original code
# 0.1.1: changed license to FreeBSD; re-established compatibility with ggplot2 (new version 0.9.2)

## Visually weighted regression / Watercolor plots
## Idea: Solomon Hsiang, with additional ideas from many blog commenters


# B = number of bootstrapped smoothers
# shade: plot the shaded confidence region?
# shade.alpha: should the CI shading fade out at the edges? (by reducing alpha; 0 = no alpha decrease, 0.1 = medium alpha decrease, 0.5 = strong alpha decrease)
# spag: plot spaghetti lines?
# spag.color: color of spaghetti lines
# mweight: should the median smoother be visually weighted?
# show.lm: should the linear regression line be plotted?
# show.CI: should the 95% CI limits be plotted?
# show.median: should the median smoother be plotted?
# median.col: color of the median smoother
# shape: shape of points
# method: the fitting function for the spaghettis; default: loess
# bw = TRUE: define a default b&w-palette
# slices: number of slices in x and y direction for the shaded region. Higher numbers make a smoother plot, but take longer to draw. I wouldn't go beyond 500
# palette: provide a custom color palette for the watercolors
# ylim: restrict range of the watercoloring
# quantize: either "continuous", or "SD". In the latter case, we get three color regions for 1, 2, and 3 SD (an idea of John Mashey)
# add: if add == FALSE, a new ggplot is returned. If add == TRUE, only the elements are returned, which can be added to an existing ggplot (with the '+' operator)
# ...: further parameters passed to the fitting function, in the case of loess, for example, "span = .9", or "family = 'symmetric'"
vwReg <- function(formula, data, title="", B=1000, shade=TRUE, shade.alpha=.1, spag=FALSE, spag.color="darkblue", mweight=TRUE, show.lm=FALSE, show.median = TRUE, median.col = "white", shape = 21, show.CI=FALSE, method=loess, bw=FALSE, slices=200, palette=colorRampPalette(c("#FFEDA0", "#DD0000"), bias=2)(20), ylim=NULL, quantize = "continuous",  add=FALSE, ...) {
    IV <- all.vars(formula)[2]
    DV <- all.vars(formula)[1]
    data <- na.omit(data[order(data[, IV]), c(IV, DV)])
   
    if (bw == TRUE) {
        palette <- colorRampPalette(c("#EEEEEE", "#999999", "#333333"), bias=2)(20)
    }

    print("Computing boostrapped smoothers ...")
    newx <- data.frame(seq(min(data[, IV]), max(data[, IV]), length=slices))
    colnames(newx) <- IV
    l0.boot <- matrix(NA, nrow=nrow(newx), ncol=B)
   
    l0 <- method(formula, data)
    for (i in 1:B) {
        data2 <- data[sample(nrow(data), replace=TRUE), ]
        data2 <- data2[order(data2[, IV]), ]
        if (inherits(l0, "loess")) {
            m1 <- method(formula, data2, control = loess.control(surface = "i", statistics="a", trace.hat="a"), ...)
        } else {
            m1 <- method(formula, data2, ...)
        }
        l0.boot[, i] <- predict(m1, newdata=newx)
    }

    # compute median and CI limits of bootstrap
    library(plyr)
    library(reshape2)
    CI.boot <- adply(l0.boot, 1, function(x) quantile(x, prob=c(.025, .5, .975, pnorm(c(-3, -2, -1, 0, 1, 2, 3))), na.rm=TRUE))[, -1]
    colnames(CI.boot)[1:10] <- c("LL", "M", "UL", paste0("SD", 1:7))
    CI.boot$x <- newx[, 1]
    CI.boot$width <- CI.boot$UL - CI.boot$LL
   
    # scale the CI width to the range 0 to 1 and flip it (bigger numbers = narrower CI)
    CI.boot$w2 <- (CI.boot$width - min(CI.boot$width))
    CI.boot$w3 <- 1-(CI.boot$w2/max(CI.boot$w2))

   
    # convert bootstrapped spaghettis to long format
    b2 <- melt(l0.boot)
    b2$x <- newx[,1]
    colnames(b2) <- c("index", "B", "value", "x")

    library(ggplot2)
    library(RColorBrewer)
   
    # Construct ggplot
    # All plot elements are constructed as a list, so they can be added to an existing ggplot
   
    # if add == FALSE: provide the basic ggplot object
    p0 <- ggplot(data, aes_string(x=IV, y=DV)) + theme_bw()
   
    # initialize elements with NULL (if they are defined, they are overwritten with something meaningful)
    gg.tiles <- gg.poly <- gg.spag <- gg.median <- gg.CI1 <- gg.CI2 <- gg.lm <- gg.points <- gg.title <- NULL

    if (shade == TRUE) {
        quantize <- match.arg(quantize, c("continuous", "SD"))
        if (quantize == "continuous") {
            print("Computing density estimates for each vertical cut ...")
            flush.console()
       
            if (is.null(ylim)) {
                min_value <- min(min(l0.boot, na.rm=TRUE), min(data[, DV], na.rm=TRUE))
                max_value <- max(max(l0.boot, na.rm=TRUE), max(data[, DV], na.rm=TRUE))
                ylim <- c(min_value, max_value)
            }

            # vertical cross-sectional density estimate
            d2 <- ddply(b2[, c("x", "value")], .(x), function(df) {
                res <- data.frame(density(df$value, na.rm=TRUE, n=slices, from=ylim[1], to=ylim[2])[c("x", "y")])
                #res <- data.frame(density(df$value, na.rm=TRUE, n=slices)[c("x", "y")])
                colnames(res) <- c("y", "dens")
                return(res)
            }, .progress="text")

            maxdens <- max(d2$dens)
            mindens <- min(d2$dens)
            d2$dens.scaled <- (d2$dens - mindens)/maxdens  
       
            ## Tile approach
            d2$alpha.factor <- d2$dens.scaled^shade.alpha
            gg.tiles <-  list(geom_tile(data=d2, aes(x=x, y=y, fill=dens.scaled, alpha=alpha.factor)), scale_fill_gradientn("dens.scaled", colours=palette), scale_alpha_continuous(range=c(0.001, 1)))
        }
        if (quantize == "SD") {
            ## Polygon approach
           
            SDs <- melt(CI.boot[, c("x", paste0("SD", 1:7))], id.vars="x")
            count <- 0
            d3 <- data.frame()
            col <- c(1,2,3,3,2,1)
            for (i in 1:6) {
                seg1 <- SDs[SDs$variable == paste0("SD", i), ]
                seg2 <- SDs[SDs$variable == paste0("SD", i+1), ]
                seg <- rbind(seg1, seg2[nrow(seg2):1, ])
                seg$group <- count
                seg$col <- col[i]
                count <- count + 1
                d3 <- rbind(d3, seg)
            }
           
            gg.poly <-  list(geom_polygon(data=d3, aes(x=x, y=value, color=NULL, fill=col, group=group)), scale_fill_gradientn("dens.scaled", colours=palette, values=seq(-1, 3, 1)))
        }
    }
   
    print("Build ggplot figure ...")
    flush.console()
   
   
    if (spag==TRUE) {
        gg.spag <-  geom_path(data=b2, aes(x=x, y=value, group=B), size=0.7, alpha=10/B, color=spag.color)
    }
   
    if (show.median == TRUE) {
        if (mweight == TRUE) {
            gg.median <-  geom_path(data=CI.boot, aes(x=x, y=M, alpha=w3^3), size=.6, linejoin="mitre", color=median.col)
        } else {
            gg.median <-  geom_path(data=CI.boot, aes(x=x, y=M), size = 0.6, linejoin="mitre", color=median.col)
        }
    }
   
    # Confidence limits
    if (show.CI == TRUE) {
        gg.CI1 <- geom_path(data=CI.boot, aes(x=x, y=UL), size=1, color="red")
        gg.CI2 <- geom_path(data=CI.boot, aes(x=x, y=LL), size=1, color="red")
    }
   
    # plain linear regression line
    if (show.lm==TRUE) {gg.lm <- geom_smooth(method="lm", color="darkgreen", se=FALSE)}
       
    gg.points <- geom_point(data=data, aes_string(x=IV, y=DV), size=1, shape=shape, fill="white", color="black")       
       
    if (title != "") {
        gg.title <- labs(title = title)  # theme(title=) expects an element, not a string
    }
   

    gg.elements <- list(gg.tiles, gg.poly, gg.spag, gg.median, gg.CI1, gg.CI2, gg.lm, gg.points, gg.title, theme(legend.position="none"))
   
    if (add == FALSE) {
        return(p0 + gg.elements)
    } else {
        return(gg.elements)
    }
}

Validating email addresses in R

I am currently programming automated report generation in R – participants fill out a questionnaire and receive a nicely formatted PDF with their personality profile. I use knitr, LaTeX, and the sendmailR package.

Some participants did not provide valid email addresses, which caused the sendmail function to crash. Therefore I wanted some validation of email addresses – here’s the function:

isValidEmail <- function(x) {
    grepl("\\<[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,}\\>", as.character(x), ignore.case=TRUE)
}

Let’s test some valid and invalid addresses:

# Valid addresses
isValidEmail("felix@nicebread.de")
isValidEmail("felix.123.honeyBunny@nicebread.lmu.de")
isValidEmail("felix@nicebread.de  ")
isValidEmail("    felix@nicebread.de")
isValidEmail("felix+batman@nicebread.de")
isValidEmail("felix@nicebread.office")

# invalid addresses
isValidEmail("felix@nicebread")  
isValidEmail("felix@nicebread@de")
isValidEmail("felixnicebread.de")

The regexp is taken from www.regular-expressions.info and adapted to the R style of regexps. Please note the many comments (e.g., here or here) about “Is there a single regexp that matches all valid email addresses?” (the answer is no).
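In the report-generation script, the function can then act as a simple guard before sending – a minimal sketch (sendTo and the commented-out sendmail call are placeholders):

sendTo <- "felix@nicebread.de"
if (isValidEmail(sendTo)) {
  # sendmailR::sendmail(from, to = sendTo, subject, msg)  # send the report
} else {
  warning(paste("Invalid email address, report not sent:", sendTo))
}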
