# Optional stopping does not bias parameter estimates (if done correctly)

*tl;dr: Optional stopping does not bias parameter estimates from a frequentist point of view if all studies are reported (i.e., no publication bias exists) and effect sizes are appropriately meta-analytically weighted.*

- “… sequential testing appears to inflate the observed effect size”
- “discussion suggests to me that estimation is not straight forward?”
- “researchers who are interested in estimating population effect sizes should not use […] optional stopping”
- “we found that truncated RCTs provide biased estimates of effects on the outcome that precipitated early stopping” (Bassler et al., 2010)

**The good news is: if done correctly, optional stopping does not bias your effect size estimate at all, as I will demonstrate below.**

Given the recent discussion on optional stopping and Bayes, I wanted to solicit opinions on the following thought experiment.

Researcher A collects tap water samples in a city, tests them for lead, and stops collecting data once a t-test comparing the mean lead level to a “safe” level is significant at *p* < .05. After this optional stopping, researcher A computes a Bayesian posterior (with a weakly informative prior) and reports the median of the posterior as the best estimate of the lead level in the city.

Researcher B collects the same number of water samples, but with a pre-specified N, and then also computes a Bayesian estimate.

Researcher C collects water samples from every single household in the city (effectively collecting the whole population).

Hopefully we can all agree that the best estimate of the mean lead level in the city is obtained by researcher C. But do you think that the estimate of researcher B is closer to the one from researcher C and should be preferred over the estimate of researcher A? What – if anything – does this tell us about optional stopping and its influence on Bayesian estimates?

Let’s simulate the scenario (R code provided below) with the following settings:

- The true lead level in the city has a mean of 3 with a SD of 2
- The “safe” lead level is defined at 2.7 (or below)

**Strategy A:** Start with a sample of n.min = 3 and increase by 1. After every increase, compute a one-sided t-test (expecting that the lead level is smaller than the safe level), and stop if *p* < .05. Stop in any case once you reach n.max = 50.

**Strategy B:** Collect a fixed-*n* sample with the size of the final sample of strategy A. (This results in a collection of samples that have the same sizes as the samples from strategy A.)

For the estimation itself, we do *not* compute a Bayesian posterior, because the usage of a prior *will* bias the estimate. (A side note: when using a prior, this bias is deliberately accepted, because a small bias is traded in for a reduction in the variance of the estimates, as extreme and implausible sample estimates are shrunken towards more realistic numbers.) Here, we simply take the plain sample mean, because it is an unbiased estimator – at least in the typical textbook case of fixed sample sizes. But what happens with optional stopping?
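The shrinkage mentioned in the side note can be illustrated with a minimal normal-normal sketch (the numbers here are made up for illustration and are not part of the simulation below):

```r
# Posterior mean in a normal-normal model: a precision-weighted average of the
# sample mean and the prior mean (a deliberate, small bias towards the prior)
posterior.mean <- function(xbar, n, sigma, prior.mean, prior.sd) {
  w <- (n / sigma^2) / (n / sigma^2 + 1 / prior.sd^2)  # weight on the data
  w * xbar + (1 - w) * prior.mean
}

# An implausibly low estimate from a tiny sample is shrunken towards the prior:
posterior.mean(xbar = 1.2, n = 5, sigma = 2, prior.mean = 3, prior.sd = 1)  # 2.0
```

With larger n, the weight w approaches 1 and the shrinkage (and hence the bias) vanishes.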

**A naive analysis**

If we naively average the sample means of all fixed-*n* studies with the same sample sizes, we get a mean lead level of 3.00, which is exactly the true value. Averaging the sequential studies in the same naive way, however, yields 2.85 (see the table below) – so has optional stopping biased the estimate? Not if the analysis is done properly.

**A valid analysis**

**If effect sizes from samples with different sample sizes are combined, they must be meta-analytically weighted according to their sample size (or precision).** Optional stopping (e.g., based on *p*-values, but also based on Bayes factors) leads to a conditional bias: if the study stops very early, the effect size must be overestimated (otherwise it wouldn’t have stopped with a significant *p*-value). But early stops have a small sample size, and in a meta-analysis these extreme early stops will get a small weight.

**The determination of sample size (fixed vs. optional stopping) and the presence of publication bias are separate issues.** By comparing strategies A and B, two issues are (at least implicitly) conflated: A does optional stopping *and* has publication bias, as she only reports the result if the study hits the threshold; non-significant results go into the file drawer. B, in contrast, has a fixed sample size and reports all results, *without* publication bias. You can do optional stopping without publication bias (stop if significant, but also report the result if you didn’t hit the threshold before reaching n.max). Likewise, if B samples a fixed sample size but only reports trials in which the effect size is close to a foregone conclusion, the estimate will be very biased as well.

Overestimations from early terminations are balanced by *under*estimations from late terminations (Schönbrodt, Wagenmakers, Zehetleitner, & Perugini, 2015). Hence, optional stopping leads to a *conditional* bias (i.e., conditional on early or late termination), but it is *unconditionally* unbiased.
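This weighting argument can be condensed into a toy version of the full simulation reported below (same settings: true level 3, SD 2, safe level 2.7, n between 3 and 50; the number of iterations is reduced here for speed):

```r
set.seed(1)
nSim <- 2000                 # fewer iterations than the full simulation, for speed
ns <- means <- numeric(nSim)
for (i in 1:nSim) {
  x <- rnorm(50, mean = 3, sd = 2)   # true lead level = 3, SD = 2
  for (n in 3:50) {                  # optional stopping from n = 3 to n = 50
    if (t.test(x[1:n], mu = 2.7, alternative = "less")$p.value <= .05) break
  }
  ns[i] <- n
  means[i] <- mean(x[1:n])
}

mean(means)                   # naive average: too low, because early stops are overweighted
weighted.mean(means, w = ns)  # n-weighted average: close to the true value of 3
```

The unweighted average is pulled down by the extreme early stops; weighting each study mean by its sample size removes this distortion.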

In the simulation, the following reporting scenarios are compared:

- Strategies A and B *without publication bias* report all outcomes.
- Strategy A *with* publication bias reports only the studies that are significantly lower than the safe lead level.
- Strategy B *with* publication bias reports only studies whose sample mean is smaller than the safe lead level (regardless of significance).

### Some descriptive plots to illustrate the behavior of the strategies

The distribution of sample means from strategy B (fixed *n*) is well-behaved and symmetric. The distribution from strategy A (optional stopping), in contrast, shows a bump at small effect sizes (these are the early stops with a small lead level).

In the fixed-*n* design, in contrast, there is no conditional bias.

### The estimated mean levels

Here are the computed mean lead levels for our 8 combinations (true value = 3):

| Sampling plan | PubBias | Naive mean | Weighted mean |
|---|---|---|---|
| sequential | FALSE | 2.85 | 3.00 |
| fixed | FALSE | 3.00 | 3.00 |
| sequential | TRUE | 1.42 | 1.71 |
| fixed | TRUE | 2.46 | 2.55 |

**To summarize: the sampling plan (fixed sample size vs. optional stopping) and the presence or absence of publication bias are orthogonal issues.**

*The only problem for biased estimates is the publication bias – not the optional stopping!*

Concerning the sequential procedures described here, some authors have raised concerns that these procedures result in biased effect size estimates (e.g., Bassler et al., 2010; Kruschke, 2014). We believe these concerns are overstated, for at least two reasons.

First, it is true that studies that terminate early at the H1 boundary will, on average, overestimate the true effect. This conditional bias, however, is balanced by late terminations, which will, on average, underestimate the true effect. Early terminations have a smaller sample size than late terminations, and consequently receive less weight in a meta-analysis. When all studies (i.e., early and late terminations) are considered together, the bias is negligible (Berry, Bradley, & Connor, 2010; Fan, DeMets, & Lan, 2004; Goodman, 2007; Schönbrodt et al., 2015). Hence, the sequential procedure is approximately unbiased overall.

Second, the conditional bias of early terminations is conceptually equivalent to the bias that results when only significant studies are reported and non-significant studies disappear into the file drawer (Goodman, 2007). In all experimental designs – whether sequential, non-sequential, frequentist, or Bayesian – the average effect size inevitably increases when one selectively averages studies that show a larger-than-average effect size. Selective publishing is a concern across the board, and an unbiased research synthesis requires that one consider significant and non-significant results, as well as early and late terminations.

Although sequential designs have negligible unconditional bias, it may nevertheless be desirable to provide a principled “correction” for the conditional bias of early terminations, in particular when the effect size of a single study is evaluated. For this purpose, Goodman (2007) outlines a Bayesian approach that uses prior expectations about plausible effect sizes. This approach shrinks extreme estimates from early terminations towards more plausible regions. Smaller sample sizes are naturally more sensitive to prior-induced shrinkage, and hence the proposed correction fits the fact that the most extreme deviations from the true value are found in very early terminations that have a small sample size (Schönbrodt et al., 2015).

```r
library(dplyr)
library(ggplot2)
library(htmlTable)

# set seed for reproducibility
set.seed(0xBEEF)

trueLevel <- 3
trueSD    <- 2
safeLevel <- 2.7
maxN <- 50
minN <- 3
B <- 10000  # number of Monte Carlo simulations

res <- data.frame()
for (i in 1:B) {
  print(paste0(i, "/", B))
  maxSample <- rnorm(maxN, trueLevel, trueSD)

  # optional stopping: test after every additional observation
  for (n in minN:maxN) {
    t0 <- t.test(maxSample[1:n], mu=safeLevel, alternative="less")
    #print(paste0("n=", n, "; ", t0$estimate, ": ", t0$p.value))
    if (t0$p.value <= .05) break
  }
  finalSample.seq <- maxSample[1:n]

  # now construct a matched fixed-n sample of the same size
  finalSample.fixed <- rnorm(n, trueLevel, trueSD)

  # save results in long format
  # sequential design
  res <- rbind(res, data.frame(
    id = i,
    type = "sequential",
    n = n,
    p.value = t0$p.value,
    selected = t0$p.value <= .05,
    empMean = mean(finalSample.seq)
  ))

  # fixed design
  res <- rbind(res, data.frame(
    id = i,
    type = "fixed",
    n = n,
    p.value = NA,
    selected = mean(finalSample.fixed) <= safeLevel,  # an arbitrary publication bias selection
    empMean = mean(finalSample.fixed)
  ))
}

save(res, file="res.RData")
# load("res.RData")

# Figure 1: Sampling distribution
ggplot(res, aes(x=n, y=empMean)) + geom_jitter(height=0, alpha=0.15) +
  xlab("Sample size") + ylab("Sample mean") +
  geom_hline(yintercept=trueLevel, color="red") + facet_wrap(~type) + theme_bw()

# Figure 2: Individual study estimates
ggplot(res, aes(x=empMean)) + geom_density() + xlab("Sample mean") +
  geom_vline(xintercept=trueLevel, color="red") + facet_wrap(~type) + theme_bw()

# the mean estimate of all late terminations
res %>% group_by(type) %>% filter(n == 50) %>% summarise(lateEst = mean(empMean))

# how many strategy A studies were significant?
res %>% filter(type == "sequential") %>% .[["selected"]] %>% table()

# Compute estimated lead levels, with and without publication bias
est.noBias <- res %>% group_by(type) %>% dplyr::summarise(
  bias = FALSE,
  naive.mean = mean(empMean),
  weighted.mean = weighted.mean(empMean, w=n)
)
est.Bias <- res %>% filter(selected == TRUE) %>% group_by(type) %>% dplyr::summarise(
  bias = TRUE,
  naive.mean = mean(empMean),
  weighted.mean = weighted.mean(empMean, w=n)
)
est <- rbind(est.noBias, est.Bias)
est

# output an HTML table
est.display <- txtRound(data.frame(est), 2, excl.cols=1:2)
t1 <- htmlTable(est.display,
  header = c("Sampling plan", "PubBias", "Naive mean", "Weighted mean"),
  rnames = FALSE)
t1
cat(t1)
```

# What’s the probability that a significant p-value indicates a true effect?

If the *p*-value is < .05, then the probability of falsely rejecting the null hypothesis is < 5%, right? That means a maximum of 5% of all significant results are false positives (that’s what we control with the α rate).

Well, no. As you will see in a minute, the “false discovery rate” (aka false-positive rate), which indicates the probability that a significant *p*-value actually is a false positive, is usually much higher than 5%.

## A common misconception about *p*-values

Oakes (1986) asked the following question to students and senior scientists:

> You have a *p*-value of .01. Is the following statement true, or false?
>
> “You know, if you decide to reject the null hypothesis, the probability that you are making the wrong decision.”

The answer is “false” (you will learn why it’s false below). But 86% of all professors and lecturers in the sample who were teaching statistics (!) answered this question erroneously with “true”. Gigerenzer, Kraus, and Vitouch replicated this result in 2000 in a German sample (here, the “statistics lecturer” category had 73% wrong). Hence, it is a widespread error to confuse the *p*-value with the false discovery rate.

## The False Discovery Rate (FDR) and the Positive Predictive Value (PPV)

To answer the question “What’s the probability that a significant *p*-value indicates a true effect?”, we have to look at the **positive predictive value (PPV)** of a significant *p*-value. The PPV indicates the proportion of significant *p*-values which indicate a real effect amongst all significant *p*-values. Put in other words: *Given that a p-value is significant: What is the probability (in a frequentist sense) that it stems from a real effect?*

(The false discovery rate simply is 1-PPV: the probability that a significant *p*-value stems from a population with null effect).

That is, we are interested in a conditional probability Prob(effect is real | *p*-value is significant).

Inspired by Colquhoun (2014), one can visualize this conditional probability in the form of a tree diagram (see below). Let’s assume we carry out 1000 experiments for 1000 different research questions. We now have to make a couple of prior assumptions (which you can set differently in the app we provide below). For now, we assume that 30% of all studies have a real effect and the statistical test used has a power of 35% with an α level set to 5%. That is, of the 1000 experiments, 300 investigate a real effect and 700 a null effect. Of the 300 true effects, 0.35 × 300 = 105 are detected; the remaining 195 effects are non-significant false negatives. On the other branch of 700 null effects, 0.05 × 700 = 35 *p*-values are significant by chance (false positives) and 665 are non-significant (true negatives).

This path is visualized here (completely inspired by Colquhoun, 2014):

Now we can compute the false discovery rate (FDR): 35 of the (35 + 105) = 140 significant *p*-values actually come from a null effect. That means 35/140 = 25% of all significant *p*-values do not indicate a real effect! That is much more than the alleged 5% level (see also Lakens & Evers, 2014, and Ioannidis, 2005).
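The arithmetic of the tree diagram can be reproduced in a few lines of R (same numbers as above):

```r
nStudies <- 1000
prior <- 0.30  # proportion of studies with a real effect
power <- 0.35
alpha <- 0.05

hits        <- power * (prior * nStudies)        # 105 true positives
falseAlarms <- alpha * ((1 - prior) * nStudies)  # 35 false positives

FDR <- falseAlarms / (falseAlarms + hits)
PPV <- 1 - FDR
FDR  # 0.25: a quarter of all significant p-values come from a null effect
```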

## An interactive app

Together with Michael Zehetleitner I developed an interactive app that computes and visualizes these numbers. For the computations, you have to choose 4 parameters.

Let’s go through the settings!

Some of our investigated hypotheses are actually true, and some are false. As a first parameter, we have to estimate what proportion of our investigated hypotheses is actually true.

Now, what is a good setting for the a priori proportion of true hypotheses? It’s certainly not near 100% – in this case only trivial and obvious research questions would be investigated, which is obviously not the case. On the other hand, the rate can definitely drop close to zero. For example, in pharmaceutical drug development “only one in every 5,000 compounds that makes it through lead development to the stage of pre-clinical development becomes an approved drug” (Wikipedia). Here, only 0.02% of all investigated hypotheses are true.

Furthermore, the number depends on the field – some fields are highly speculative and risky (i.e., they have a low prior probability), some fields are more cumulative and work mostly on variations of established effects (i.e., in these fields a higher prior probability can be expected).

But given that many journals in psychology exert a selection pressure towards novel, surprising, and counter-intuitive results (which a priori have a low probability of being true), I guess that the proportion is typically lower than 50%. My personal grand average gut estimate is around 25%.

(Also see this comment and this reply for a discussion about this estimate).

That’s easy. The default α level usually is 5%, but you can play with the impact of stricter levels on the FDR!

The average power in psychology has been estimated at 35% (Bakker, van Dijk, & Wicherts, 2012). A median estimate for neuroscience is only 21% (Button et al., 2013). Even worse, both estimates can be expected to be inflated, as they are based on the average *published* effect size, which almost certainly is overestimated due to the significance filter (Ioannidis, 2008). Hence, the average true power is most likely smaller. Let’s assume an estimate of 25%.

Finally, let’s add some realism to the computations. We know that researchers employ “researcher degrees of freedom”, aka questionable research practices, to optimize their *p*-value and to push a “nearly significant result” across the magic boundary. How many reported significant *p*-values would not have been significant without *p*-hacking? That is hard to tell, and probably also field dependent. Let’s assume that 15% of all studies are *p*-hacked, intentionally or unintentionally.

When these values are defined, the app computes the FDR and PPV and shows a visualization:

With these settings, **only 39% of all significant studies are actually true**!

Wait – what was the success rate of the Reproducibility Project: Psychology? **36% of replication projects found a significant effect** in a direct replication attempt. Just a coincidence? Maybe. Maybe not.

The formulas to compute the FDR and PPV are based on Ioannidis (2005: “Why most published research findings are false”). A related, but different, approach was proposed by David Colquhoun in his paper “An investigation of the false discovery rate and the misinterpretation of p-values” [open access]. He asks: “How should one interpret the observation of, say, *p* = 0.047 in a *single* experiment?”. The Ioannidis approach implemented in the app, in contrast, asks: “What is the FDR in a *set of studies* with p ≤ .05 and a certain power, etc.?”. Both approaches make sense, but answer different questions.
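For readers who want to check the numbers, here is a sketch of the PPV formula from Ioannidis (2005), including the bias term u for the proportion of p-hacked analyses (the function name and argument names are mine):

```r
# PPV following Ioannidis (2005); R = pre-study odds, beta = 1 - power,
# u = proportion of biased ("p-hacked") analyses
ppv <- function(prior, power, alpha, u = 0) {
  R <- prior / (1 - prior)
  beta <- 1 - power
  ((1 - beta) * R + u * beta * R) /
    (R + alpha - beta * R + u - u * alpha + u * beta * R)
}

ppv(prior = .30, power = .35, alpha = .05)           # 0.75: the tree-diagram example (FDR = 25%)
ppv(prior = .25, power = .25, alpha = .05, u = .15)  # about 0.39, as in the app settings above
```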

### Other resources about PPV and FDR of *p*-values

- See also Daniël Lakens’ blog post about the same topic, and the interesting discussion below it.
- Will Gervais also wrote a blog post on the topic and also has a cool interactive app which displays the results in another style.
- Commenter SamGG pointed to another publication which has illuminating graphics:
Krzywinski, M., & Altman, N. (2013). Points of significance: Power and sample size.
*Nature Methods*,*10*, 1139–1140. doi:10.1038/nmeth.2738

# A Compendium of Clean Graphs in R

**[This is a guest post by Eric-Jan Wagenmakers and Quentin Gronau introducing the RGraphCompendium. Click here to see the full compendium!]**

Every data analyst knows that a good graph is worth a thousand words, and perhaps a hundred tables. But how should one create a good, clean graph? In R, this task is anything but easy. Many users find it almost impossible to resist the siren song of adding grid lines, including grey backgrounds, using elaborate color schemes, and applying default font sizes that make the text much too small in relation to the graphical elements. As a result, many R graphs are an aesthetic disaster; they are difficult to parse and unfit for publication.

In contrast, a good graph obeys the golden rule: “create graphs unto others as you want them to create graphs unto you”. This means that a good graph is a simple graph, in the Einsteinian sense that a graph should be made as simple as possible, but not simpler. A good graph communicates the main message effectively, without fuss and distraction. In addition, a good graph balances its graphical and textual elements – large symbols demand an increase in line width, and these together require an increase in font size.

The graphing chaos is exacerbated by the default settings in R (and the graphical packages that it provides, such as ggplot2), which are decidedly suboptimal. For instance, the font size is often too small, and the graphical elements are not sufficiently prominent. As a result, creating a good graph in R requires a lot of tinkering, not unlike the process of editing the first draft of a novice writer.

Fortunately, many plots share the same underlying structure, and the tinkering that has led to a clean graph of time series A will generally provide useful starting values for a clean graph of time series B. To exploit the overlap in structure, however, the user needs to remember the settings that were used for the first graph. Usually, this means that the user has to recall the location of the relevant R code. Sometimes the search for this initial code can take longer than the tinkering that was required to produce a clean graph in the first place.

In order to reduce the time needed to find relevant R code, we have constructed a compendium of clean graphs in R. This compendium, available at http://shinyapps.org/apps/RGraphCompendium/index.html, can also be used for teaching or as inspiration for improving one’s own graphs. In addition, the compendium provides a selective overview of the kind of graphs that researchers often use; the graphs cover a range of statistical scenarios and feature contributions of different data analysts. We do not wish to presume the graphs in the compendium are in any way perfect; some are better than others, and overall much remains to be improved. The compendium is undergoing continual refinement. Nevertheless, we hope the graphs are useful in their current state.

As an example of what the compendium has to offer, consider the graph below. This graph shows the proportion of the popular vote as a function of the relative height of the US president against his most successful opponent. Note the large circles for the data, the thick line for the linear relation, and the large font size for the axis labels. Also, note that the line does not touch the y-axis (a subtlety that requires deviating from the default). As in the compendium, the R code that created the graph is displayed after clicking the box “Show R-code”.

Show R-Code

```r
# rm(list=ls())

# height of president divided by height of most successful opponent:
height.ratio <- c(0.924324324, 1.081871345, 1, 0.971098266, 1.029761905,
                  0.935135135, 0.994252874, 0.908163265, 1.045714286, 1.18404908,
                  1.115606936, 0.971910112, 0.97752809, 0.978609626, 1,
                  0.933333333, 1.071428571, 0.944444444, 0.944444444, 1.017142857,
                  1.011111111, 1.011235955, 1.011235955, 1.089285714, 0.988888889,
                  1.011111111, 1.032967033, 1.044444444, 1, 1.086705202,
                  1.011560694, 1.005617978, 1.005617978, 1.005494505, 1.072222222,
                  1.011111111, 0.983783784, 0.967213115, 1.04519774, 1.027777778,
                  1.086705202, 1, 1.005347594, 0.983783784, 0.943005181, 1.057142857)

# proportion popular vote for president vs. most successful opponent
# NB: can be lower than .5 because the popular vote does not decide the election
pop.vote <- c(0.427780852, 0.56148981, 0.597141922, 0.581254292, 0.530344067,
              0.507425996, 0.526679292, 0.536690951, 0.577825976, 0.573225387,
              0.550410082, 0.559380032, 0.484823958, 0.500466176, 0.502934212,
              0.49569636, 0.516904414, 0.522050547, 0.531494442, 0.60014892,
              0.545079801, 0.604274986, 0.51635906, 0.63850958, 0.652184407,
              0.587920412, 0.5914898, 0.624614752, 0.550040193, 0.537771958,
              0.523673642, 0.554517134, 0.577511576, 0.500856251, 0.613444534,
              0.504063153, 0.617883695, 0.51049949, 0.553073235, 0.59166415,
              0.538982024, 0.53455133, 0.547304058, 0.497350649, 0.512424242,
              0.536914796)

#cor.test(height.ratio, pop.vote)

require(plotrix)  # package plotrix is needed for function "ablineclip"

# if the following line and the line containing "dev.off()" are executed,
# the plot will be saved as a png file in the current working directory
# png("Presidental.png", width = 18, height = 18, units = "cm", res = 800, pointsize = 10)

op <- par(cex.main = 1.5, mar = c(5, 6, 4, 5) + 0.1, mgp = c(3.5, 1, 0), cex.lab = 1.5,
          font.lab = 2, cex.axis = 1.3, bty = "n", las = 1)
plot(height.ratio, pop.vote, col = "black", pch = 21, bg = "grey", cex = 2,
     xlim = c(.90, 1.20), ylim = c(.40, .70), ylab = "", xlab = "", axes = FALSE)
axis(1)
axis(2)
reg1 <- lm(pop.vote ~ height.ratio)
ablineclip(reg1, lwd = 2, x1 = .9, x2 = 1.2)
par(las = 0)
mtext("Presidential Height Ratio", side = 1, line = 2.5, cex = 1.5)
mtext("Relative Support for President", side = 2, line = 3.7, cex = 1.5)
text(1.15, .65, "r = .39", cex = 1.5)
# dev.off()

# For comparison, consider the default plot:
# par(op)  # reset to default "par" settings
# plot(height.ratio, pop.vote)  # yuk!
```

A more complicated example takes the same data, but uses it to plot the development of the Bayes factor, assessing the evidence for the hypothesis that taller presidential candidates attract more votes. This plot was created based in part on code from Ruud Wetzels and Benjamin Scheibehenne. Note the annotations on the right side of the plot, and the subtle horizontal lines that indicate Jeffreys’ criteria on the evidence. It took some time to figure out how to display the word “Evidence” in its current direction.

Show R-Code

```r
# height of president divided by height of most successful opponent:
height.ratio <- c(0.924324324, 1.081871345, 1, 0.971098266, 1.029761905,
                  0.935135135, 0.994252874, 0.908163265, 1.045714286, 1.18404908,
                  1.115606936, 0.971910112, 0.97752809, 0.978609626, 1,
                  0.933333333, 1.071428571, 0.944444444, 0.944444444, 1.017142857,
                  1.011111111, 1.011235955, 1.011235955, 1.089285714, 0.988888889,
                  1.011111111, 1.032967033, 1.044444444, 1, 1.086705202,
                  1.011560694, 1.005617978, 1.005617978, 1.005494505, 1.072222222,
                  1.011111111, 0.983783784, 0.967213115, 1.04519774, 1.027777778,
                  1.086705202, 1, 1.005347594, 0.983783784, 0.943005181, 1.057142857)

# proportion popular vote for president vs. most successful opponent
pop.vote <- c(0.427780852, 0.56148981, 0.597141922, 0.581254292, 0.530344067,
              0.507425996, 0.526679292, 0.536690951, 0.577825976, 0.573225387,
              0.550410082, 0.559380032, 0.484823958, 0.500466176, 0.502934212,
              0.49569636, 0.516904414, 0.522050547, 0.531494442, 0.60014892,
              0.545079801, 0.604274986, 0.51635906, 0.63850958, 0.652184407,
              0.587920412, 0.5914898, 0.624614752, 0.550040193, 0.537771958,
              0.523673642, 0.554517134, 0.577511576, 0.500856251, 0.613444534,
              0.504063153, 0.617883695, 0.51049949, 0.553073235, 0.59166415,
              0.538982024, 0.53455133, 0.547304058, 0.497350649, 0.512424242,
              0.536914796)

## now calculate BF sequentially; two-sided test
library("hypergeo")

BF10.HG.exact <- function(n, r) {
  # Jeffreys' test for whether a correlation is zero or not
  # Jeffreys (1961), pp. 289-292
  # Note that if the means are subtracted, n needs to be replaced by n-1
  hypgeo <- hypergeo((.25 + n/2), (-.25 + n/2), (3/2 + n/2), r^2)
  BF10 <- (sqrt(pi) * gamma(n/2 + 1) * hypgeo) / (2 * gamma(3/2 + n/2))
  return(as.numeric(BF10))
}

BF10 <- array()
BF10[1] <- 1
BF10[2] <- 1
for (i in 3:length(height.ratio)) {
  BF10[i] <- BF10.HG.exact(n = i - 1, r = cor(height.ratio[1:i], pop.vote[1:i]))
}

# We wish to plot this Bayes factor sequentially, as it unfolds as more elections become available:
#============ Plot log Bayes factors ===========================
par(cex.main = 1.3, mar = c(4.5, 6, 4, 7) + .1, mgp = c(3, 1, 0),  # bottom, left, top, right
    cex.lab = 1.3, font.lab = 2, cex.axis = 1.3, las = 1)
xhigh <- 60
plot(log(BF10), xlim = c(1, xhigh), ylim = c(-1 * log(200), log(200)), xlab = "", ylab = "",
     cex.lab = 1.3, cex.axis = 1.3, las = 1, yaxt = "n", bty = "n", type = "p", pch = 21, bg = "grey")
labelsUpper <- log(c(100, 30, 10, 3, 1))
labelsLower <- -1 * labelsUpper
criticalP <- c(labelsLower, 0, labelsUpper)
for (idx in 1:length(criticalP)) {
  abline(h = criticalP[idx], col = 'darkgrey', lwd = 1, lty = 2)
}
abline(h = 0)
axis(side = 4, at = criticalP, tick = TRUE, las = 2, cex.axis = 1, labels = FALSE)
axis(side = 4, at = labelsUpper + .602, tick = FALSE, cex.axis = 1,
     labels = c("Extreme", "Very strong", "Strong", "Moderate", "Anecdotal"))
axis(side = 4, at = labelsLower - .602, tick = FALSE, cex.axis = 1,
     labels = c("Extreme", "Very strong", "Strong", "Moderate", "Anecdotal"))
axis(side = 2, at = criticalP, tick = TRUE, las = 2, cex.axis = 1,
     labels = c("1/100", "1/30", "1/10", "1/3", "1", "", "100", "30", "10", "3", ""))
mtext(expression(BF[1][0]), side = 2, line = 2.5, las = 0, cex = 1.3)
grid::grid.text("Evidence", 0.97, 0.5, rot = 270, gp = grid::gpar(cex = 1.3))
mtext("No. of Elections", side = 1, line = 2.5, las = 1, cex = 1.3)
arrows(20, -log(10), 20, -log(100), length = .25, angle = 30, code = 2, lwd = 2)
arrows(20, log(10), 20, log(100), length = .25, angle = 30, code = 2, lwd = 2)
text(25, -log(70), "Evidence for H0", pos = 4, cex = 1.3)
text(25, log(70), "Evidence for H1", pos = 4, cex = 1.3)
```

A final example is borrowed from the graphs in **JASP** (http://jasp-stats.org), a free and open-source statistical software program with a GUI not unlike that of SPSS. In contrast to SPSS, JASP also includes Bayesian hypothesis tests, the results of which are summarized in graphs such as the one below.

Show R-Code

library(BayesFactor)

if(rscale == "medium"){

r <- sqrt(2) / 2

}

if(rscale == "wide"){

r <- 1

}

if(rscale == "ultrawide"){

r <- sqrt(2)

}

if(mode(rscale) == "numeric"){

r <- rscale

}

if(oneSided == FALSE){

nullInterval <- NULL

}

if(oneSided == "right"){

nullInterval <- c(0, Inf)

}

if(oneSided == "left"){

nullInterval <- c(-Inf, 0)

}

# sample from delta posterior

samples <- BayesFactor::ttestBF(x=x, y=y, paired=paired, nullInterval= nullInterval, posterior = TRUE, iterations = iterations, rscale= r)

delta <- samples[,"delta"]

# fit denisty estimator

fit.posterior <- logspline::logspline(delta)

# density function posterior

dposterior <- function(x, oneSided= oneSided, delta= delta){

if(oneSided == FALSE){

k <- 1

return(k*logspline::dlogspline(x, fit.posterior))

}

if(oneSided == "right"){

k <- 1 / (length(delta[delta >= 0]) / length(delta))

return(ifelse(x < 0, 0, k*logspline::dlogspline(x, fit.posterior)))

}

if(oneSided == "left"){

k <- 1 / (length(delta[delta <= 0]) / length(delta))

return(ifelse(x > 0, 0, k*logspline::dlogspline(x, fit.posterior)))

}

}

# pdf cauchy prior

dprior <- function(delta,r, oneSided= oneSided){

if(oneSided == "right"){

y <- ifelse(delta < 0, 0, 2/(pi*r*(1+(delta/r)^2)))

return(y)

}

if(oneSided == "left"){

y <- ifelse(delta > 0, 0, 2/(pi*r*(1+(delta/r)^2)))

return(y)

} else{

return(1/(pi*r*(1+(delta/r)^2)))

}

}

# set limits plot

xlim <- vector("numeric", 2)

if(oneSided == FALSE){

xlim[1] <- min(-2, quantile(delta, probs = 0.01)[[1]])

xlim[2] <- max(2, quantile(delta, probs = 0.99)[[1]])

}

if(oneSided == "right"){

xlim[1] <- min(-2, quantile(delta[delta >= 0], probs = 0.01)[[1]])

xlim[2] <- max(2, quantile(delta[delta >= 0], probs = 0.99)[[1]])

}

if(oneSided == "left"){

xlim[1] <- min(-2, quantile(delta[delta <= 0], probs = 0.01)[[1]])

xlim[2] <- max(2, quantile(delta[delta <= 0], probs = 0.99)[[1]])

}

ylim <- vector("numeric", 2)

ylim[1] <- 0

ylim[2] <- max(dprior(0,r, oneSided= oneSided), 1.28*max(dposterior(x= delta, oneSided= oneSided, delta=delta)))

# calculate position of "nice" tick marks and create labels

xticks <- pretty(xlim)

yticks <- pretty(ylim)

xlabels <- formatC(pretty(xlim), 1, format= "f")

ylabels <- formatC(pretty(ylim), 1, format= "f")

# 95% credible interval:

if(oneSided == FALSE){

CIlow <- quantile(delta, probs = 0.025)[[1]]

CIhigh <- quantile(delta, probs = 0.975)[[1]]

}

if(oneSided == "right"){

CIlow <- quantile(delta[delta >= 0], probs = 0.025)[[1]]

CIhigh <- quantile(delta[delta >= 0], probs = 0.975)[[1]]

}

if(oneSided == "left"){

CIlow <- quantile(delta[delta <= 0], probs = 0.025)[[1]]

CIhigh <- quantile(delta[delta <= 0], probs = 0.975)[[1]]

}

par(mar = c(5, 5, 7, 4) + 0.1, las = 1)
xlim <- c(min(CIlow, range(xticks)[1]), max(range(xticks)[2], CIhigh))
plot(1, 1, xlim = xlim, ylim = range(yticks), ylab = "", xlab = "", type = "n", axes = FALSE)

# posterior (solid) and prior (dotted) density curves
lines(seq(min(xticks), max(xticks), length.out = 1000), dposterior(x = seq(min(xticks), max(xticks), length.out = 1000), oneSided = oneSided, delta = delta), lwd = lwd)
lines(seq(min(xticks), max(xticks), length.out = 1000), dprior(seq(min(xticks), max(xticks), length.out = 1000), r = r, oneSided = oneSided), lwd = lwd, lty = 3)

axis(1, at = xticks, labels = xlabels, cex.axis = cexAxis, lwd = lwdAxis)
axis(2, at = yticks, labels = ylabels, cex.axis = cexAxis, lwd = lwdAxis)

mtext(text = "Density", side = 2, las = 0, cex = cexYlab, line = 3)
mtext(expression(paste("Effect size", ~delta)), side = 1, cex = cexXlab, line = 2.5)

points(0, dprior(0, r, oneSided = oneSided), col = "black", pch = 21, bg = "grey", cex = cexPoints)
points(0, dposterior(0, oneSided = oneSided, delta = delta), col = "black", pch = 21, bg = "grey", cex = cexPoints)

# 95% credible interval
dmax <- optimize(function(x) dposterior(x, oneSided = oneSided, delta = delta), interval = range(xticks), maximum = TRUE)$objective # get maximum density

# place the CI arrow and its labels a fixed device-coordinate offset above the mode
yCI <- grconvertY(dmax, "user", "ndc") + 0.08
yCIt <- grconvertY(dmax, "user", "ndc") + 0.04
y95 <- grconvertY(dmax, "user", "ndc") + 0.1
yCI <- grconvertY(yCI, "ndc", "user")
yCIt <- grconvertY(yCIt, "ndc", "user")
y95 <- grconvertY(y95, "ndc", "user")

arrows(CIlow, yCI, CIhigh, yCI, angle = 90, code = 3, length = 0.1, lwd = lwd)
text(mean(c(CIlow, CIhigh)), y95, "95%", cex = cexCI)
text(CIlow, yCIt, bquote(.(formatC(CIlow, 2, format = "f"))), cex = cexCI)
text(CIhigh, yCIt, bquote(.(formatC(CIhigh, 2, format = "f"))), cex = cexCI)

# enable plotting in margin
par(xpd = TRUE)

# display BF10 value
BF <- BayesFactor::ttestBF(x = x, y = y, paired = paired, nullInterval = nullInterval, posterior = FALSE, rscale = r)
BF10 <- BayesFactor::extractBF(BF, logbf = FALSE, onlybf = FALSE)[1, "bf"]
BF01 <- 1 / BF10

xx <- grconvertX(0.3, "ndc", "user")
yy <- grconvertY(0.822, "ndc", "user")
yy2 <- grconvertY(0.878, "ndc", "user")

# switch to scientific notation for very large Bayes factors
if(BF10 >= 1000000 || BF01 >= 1000000){
BF10t <- format(BF10, digits = 3, scientific = TRUE)
BF01t <- format(BF01, digits = 3, scientific = TRUE)
} else {
BF10t <- formatC(BF10, 2, format = "f")
BF01t <- formatC(BF01, 2, format = "f")
}

if(oneSided == FALSE){
text(xx, yy2, bquote(BF[10] == .(BF10t)), cex = cexTextBF)
text(xx, yy, bquote(BF[0][1] == .(BF01t)), cex = cexTextBF)
}

if(oneSided == "right"){
text(xx, yy2, bquote(BF["+"][0] == .(BF10t)), cex = cexTextBF)
text(xx, yy, bquote(BF[0]["+"] == .(BF01t)), cex = cexTextBF)
}

if(oneSided == "left"){
text(xx, yy2, bquote(BF["-"][0] == .(BF10t)), cex = cexTextBF)
text(xx, yy, bquote(BF[0]["-"] == .(BF01t)), cex = cexTextBF)
}

# probability wheel: shift its x-position with the width of the BF labels
nchars <- max(nchar(BF10t), nchar(BF01t))
if(nchars <= 4) xx <- grconvertX(0.44, "ndc", "user")
if(nchars == 5) xx <- grconvertX(0.44 + 0.001 * 5, "ndc", "user")
if(nchars == 6) xx <- grconvertX(0.44 + 0.001 * 6, "ndc", "user")
if(nchars == 7) xx <- grconvertX(0.44 + 0.002 * nchars, "ndc", "user")
if(nchars == 8) xx <- grconvertX(0.44 + 0.003 * nchars, "ndc", "user")
if(nchars > 8) xx <- grconvertX(0.44 + 0.004 * nchars, "ndc", "user")

yy <- grconvertY(0.85, "ndc", "user")

# make sure that colored area is centered
radius <- 0.06 * diff(range(xticks))
A <- radius^2 * pi
alpha <- 2 / (BF01 + 1) * A / radius^2 # angle of the first slice: 2*pi*BF10/(BF10 + 1)
startpos <- pi/2 - alpha/2 # center that slice at the top of the wheel

# draw probability wheel
plotrix::floating.pie(xx, yy, c(BF10, 1), radius = radius, col = c("darkred", "white"), lwd = 2, startpos = startpos)

yy <- grconvertY(0.927, "ndc", "user")
yy2 <- grconvertY(0.77, "ndc", "user")

if(oneSided == FALSE){
text(xx, yy, "data|H1", cex = cexCI)
text(xx, yy2, "data|H0", cex = cexCI)
}

if(oneSided == "right"){
text(xx, yy, "data|H+", cex = cexCI)
text(xx, yy2, "data|H0", cex = cexCI)
}

if(oneSided == "left"){
text(xx, yy, "data|H-", cex = cexCI)
text(xx, yy2, "data|H0", cex = cexCI)
}

# add legend
xx <- grconvertX(0.57, "ndc", "user")
yy <- grconvertY(0.92, "ndc", "user")
legend(xx, yy, legend = c("Posterior", "Prior"), lty = c(1, 3), bty = "n", lwd = c(lwd, lwd), cex = cexLegend)
}

set.seed(1)
.plotPosterior.ttest(x = rnorm(30, 0.15), rscale = 1)
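One detail of the probability wheel that is easy to miss: because the circle's area is `A <- radius^2 * pi`, the slice angle `alpha <- 2 / (BF01 + 1) * A / radius^2` collapses to `2 * pi * BF10 / (BF10 + 1)`, so the dark slice always covers the posterior probability of H1 under 1:1 prior odds, regardless of the radius. A quick standalone check (the `BF10` value here is hypothetical, not taken from the plot above):

```r
# Verify that the wheel's slice angle reduces to 2*pi*BF10/(BF10 + 1),
# i.e. the dark slice is the fraction BF10/(BF10 + 1) of the full wheel.
BF10 <- 3                   # hypothetical Bayes factor for illustration
BF01 <- 1 / BF10
radius <- 0.24              # the radius cancels out below
A <- radius^2 * pi
alpha <- 2 / (BF01 + 1) * A / radius^2
alpha / (2 * pi)            # fraction of the wheel: BF10/(BF10 + 1) = 0.75
```

With BF10 = 3, posterior odds of 3:1 translate into a wheel that is three-quarters dark red.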

The compendium contains many more examples. We hope some R users will find them convenient. Finally, if you create a clean graph in R that you believe is a candidate for inclusion in this compendium, please do not hesitate to write an email to EJ.Wagenmakers@gmail.com. Your contribution will be acknowledged explicitly, alongside the code you provided.

*Eric-Jan Wagenmakers and Quentin Gronau*

*University of Amsterdam, Department of Psychology.*

Key links: