Optional stopping does not bias parameter estimates (if done correctly)

tl;dr: Optional stopping does not bias parameter estimates from a frequentist point of view if all studies are reported (i.e., no publication bias exists) and effect sizes are appropriately meta-analytically weighted.

Several recent discussions on the Psychological Methods Facebook group revolved around the question of whether an optional stopping procedure leads to biased effect size estimates (see also this recent blog post by Jeff Rouder). Optional stopping is a rather new technique, and potential users wonder about its potential downsides, as these (out-of-context) statements demonstrate:
  • “… sequential testing appears to inflate the observed effect size”
  • discussion suggests to me that estimation is not straight forward?
  • researchers who are interested in estimating population effect sizes should not use […] optional stopping
  • “we found that truncated RCTs provide biased estimates of effects on the outcome that precipitated early stopping” (Bassler et al., 2010)
Hence, the concern is that the usefulness of optional stopping is severely limited, because of this (alleged) bias in parameter estimation.
The good news is: if done correctly, optional stopping does not bias your effect size estimate at all, as I will demonstrate below.
Here’s a (slightly shortened) scenario from a Facebook discussion:
Given the recent discussion on optional stopping and Bayes, I wanted to solicit opinions on the following thought experiment.
Researcher A collects tap water samples in a city, tests them for lead, and stops collecting data once a t-test comparing the mean lead level to a “safe” level is significant at p < .05. After this optional stopping, researcher A computes a Bayesian posterior (with a weakly informative prior), and reports the median of the posterior as the best estimate of the lead level in the city.
Researcher B collects the same amount of water samples but with a pre-specified N, and then also computes a Bayesian estimate.
Researcher C collects water samples from every single household in the city (effectively collecting the whole population).
Hopefully we can all agree that the best estimate of the mean lead level in the city is obtained by researcher C. But do you think that the estimate of researcher B is closer to the one from researcher C and should be preferred over the estimate of researcher A? What – if anything – does this tell us about optional stopping and its influence on Bayesian estimates?

Let’s simulate the scenario (R code provided below) with the following settings:

  • The true lead level in the city has a mean of 3 with a SD of 2
  • The “safe” lead level is defined at 2.7 (or below)
Strategy A: Start with a sample of n.min = 3, and increase by 1. After every increase, compute a one-sided t-test (expecting that the lead level is smaller than the safe level), and stop if p < .05. Stop if you reach n.max of 50.
Strategy B: Collect a fixed-n sample with the size of the final sample of strategy A. (This results in a collection of samples that have the same sizes as the samples from strategy A).
We run 10,000 studies with strategy A and save the sample mean of the lead level along with the final sample size. We run 10,000 studies with strategy B (sample sizes matched to those of the 10,000 A-runs) and save the sample mean of the lead level along with the sample size.
In contrast to the quoted scenario above, I will not compute a Bayesian posterior, because the use of a prior would bias the estimate. (A side note: when using a prior, this bias is deliberately accepted, because a small bias is traded for a reduction in the variance of the estimates, as extreme and implausible sample estimates are shrunken towards more realistic values.) Here, we simply take the plain sample mean, because this is an unbiased estimator – at least in the typical textbook case of fixed sample sizes. But what happens with optional stopping?

A naive analysis

For strategy A, we compute the mean across all significant Monte Carlo simulations (~11% of all runs): it is 1.42.
This is much less than the true value of 3! When we look at the 10,000 fixed-n studies with the same sample sizes, we get a mean lead level of 3.00, which is exactly the true value.
The impact of optional stopping seems devastating – it screws up my effect size estimates, and leads to an underestimation of the true lead level!
Does it really?

A valid analysis

The naive analysis, however, ignores two crucial points:
  1. If effect sizes from samples with different sample sizes are combined, they must be meta-analytically weighted according to their sample size (or precision); a short sketch below illustrates this weighting. Optional stopping (e.g., based on p-values, but also based on Bayes factors) leads to a conditional bias: if the study stops very early, the effect size must be overestimated (otherwise it would not have stopped with a significant p-value). But early stops have a small sample size, and in a meta-analysis these extreme early stops will get a small weight.
  2. The determination of sample size (fixed vs. optional stopping) and the presence of publication bias are separate issues. By comparing strategies A and B, two issues are (at least implicitly) conflated: A does optional stopping and has publication bias, as she only reports the result if the study hits the threshold; non-significant results go into the file drawer. B, in contrast, has a fixed sample size and reports all results, without publication bias. You can do optional stopping without publication bias (stop if significant, but also report the result if you did not hit the threshold before reaching n.max). Likewise, if B samples a fixed sample size but only reports studies in which the effect size matches a foregone conclusion, her estimates will be biased as well.
As it turns out, the overestimations from early terminations are perfectly balanced by underestimations from late terminations (Schönbrodt, Wagenmakers, Zehetleitner, & Perugini, 2015). Hence, optional stopping leads to a conditional bias (i.e., conditional on early or late termination), but it is unconditionally unbiased.
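To make the weighting concrete, here is a minimal sketch with made-up numbers (not the simulation results): one extreme early stop and one late stop are averaged once naively and once weighted by sample size.

# Minimal sketch of naive vs. sample-size-weighted averaging (made-up numbers)
study.est <- c(1.2, 3.3)   # effect estimates: an extreme early stop and a late stop
study.n   <- c(4, 50)      # their sample sizes
mean(study.est)                        # naive mean: 2.25
weighted.mean(study.est, w = study.n)  # weighted mean: ~3.14; the early stop gets little weight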
Hence, let’s keep these factors separate in our analysis and look at the 2 (publication bias: yes vs. no) x 2 (optional stopping vs. fixed sample size) x 2 (naive average vs. meta-analytically weighted average) combinations.
For this purpose, we have to update the simulation:
  • The strategies A and B without publication bias report all outcomes
  • Strategy A with publication bias reports only those studies whose sample mean is significantly lower than the safe lead level
  • Strategy B with publication bias reports only those studies whose sample mean is smaller than the safe lead level (regardless of significance)

Some descriptive plots to illustrate the behavior of the strategies

This is the sampling distribution of the sample means, across all 10,000 replications:
[Figure: Density of the sample means for the sequential (optional stopping) and fixed-n designs]
The distribution from strategy B (fixed-n) is well-behaved and symmetric. The distribution from strategy A (optional stopping) shows a bump at low sample means (these are the early stops with a low estimated lead level).
Another way to look at this is to plot the single study estimates by sample size:
[Figure: Single study estimates (sample means) plotted against final sample size, for both designs]
In the sequential design, early terminations underestimate the true level, but late terminations at n = 50 overestimate it on average. This is the conditional bias: underestimation at early stops (because the optional stopping favored low lead levels), but overestimation at late stops. In the fixed-n design, there is no conditional bias.
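We can verify this conditional bias directly in the simulated data. A minimal check, assuming the res data frame produced by the R script at the end of this post:

# Conditional mean estimates, split by early vs. late termination
# (assumes the `res` data frame created by the simulation script below)
library(dplyr)
res %>%
    group_by(type, late = (n == 50)) %>%
    summarise(meanEstimate = mean(empMean), k = dplyr::n())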

The estimated mean levels

Here are the computed mean lead levels in our 8 combinations (true value = 3):

Sampling plan   PubBias   Naive mean   Weighted mean
sequential      FALSE     2.85         3.00
fixed           FALSE     3.00         3.00
sequential      TRUE      1.42         1.71
fixed           TRUE      2.46         2.55
If you selectively report only studies with a desired outcome (rows 3 & 4), the estimates cannot be trusted – all of them are way below the true value. Or, as Joachim Vandekerckhove put it: “I think it’s obvious that you can’t actively bias your data and expect magic to happen”.
If you report all studies (no publication bias), they must be properly weighted if they are combined. And then it does not matter whether sample sizes are fixed or optionally stopped! Both sampling plans lead to unbiased estimates.
(To be precise: it does not matter with respect to the unbiasedness of effect size estimates. It does matter concerning other properties, like the variance of estimates or the average sample size).
To summarize: the sampling plan (fixed sample size vs. optional stopping) and publication bias (present or not) are orthogonal issues. The only source of biased estimates is the publication bias, not the optional stopping!
A more detailed analysis of the impact of sequential testing on parameter estimates can be found in our paper “Sequential Hypothesis Testing with Bayes Factors“. Finally, I want to quote a paragraph from our recent paper on Bayes Factor Design Analysis (Schönbrodt & Wagenmakers, 2016), which also summarizes the discussion and provides some more references:
Concerning the sequential procedures described here, some authors have raised concerns that these procedures result in biased effect size estimates (e.g., Bassler et al., 2010; Kruschke, 2014). We believe these concerns are overstated, for at least two reasons.
First, it is true that studies that terminate early at the H1 boundary will, on average, overestimate the true effect. This conditional bias, however, is balanced by late terminations, which will, on average, underestimate the true effect. Early terminations have a smaller sample size than late terminations, and consequently receive less weight in a meta-analysis. When all studies (i.e., early and late terminations) are considered together, the bias is negligible (Berry, Bradley, & Connor, 2010; Fan, DeMets, & Lan, 2004; Goodman, 2007; Schönbrodt et al., 2015). Hence, the sequential procedure is approximately unbiased overall.
Second, the conditional bias of early terminations is conceptually equivalent to the bias that results when only significant studies are reported and non-significant studies disappear into the file drawer (Goodman, 2007). In all experimental designs –whether sequential, non-sequential, frequentist, or Bayesian– the average effect size inevitably increases when one selectively averages studies that show a larger-than-average effect size. Selective publishing is a concern across the board, and an unbiased research synthesis requires that one considers significant and non-significant results, as well as early and late terminations.
Although sequential designs have negligible unconditional bias, it may nevertheless be desirable to provide a principled “correction” for the conditional bias at early terminations, in particular when the effect size of a single study is evaluated. For this purpose, Goodman (2007) outlines a Bayesian approach that uses prior expectations about plausible effect sizes. This approach shrinks extreme estimates from early terminations towards more plausible regions. Smaller sample sizes are naturally more sensitive to prior-induced shrinkage, and hence the proposed correction fits the fact that most extreme deviations from the true value are found in very early terminations that have a small sample size (Schönbrodt et al., 2015).
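As a generic illustration of such shrinkage (this is not Goodman’s specific procedure, just a minimal normal-prior/normal-likelihood sketch with arbitrary values, treating the sample SD as known): the posterior mean is a precision-weighted compromise between the prior mean and the sample mean, so extreme estimates from small samples are pulled more strongly towards the prior.

# Minimal sketch of prior-induced shrinkage (normal prior, normal likelihood; arbitrary values)
prior.mean <- 3; prior.sd <- 1                    # weakly informative prior (arbitrary choice)
sample.mean <- 1.5; sample.sd <- 2; n.obs <- 5    # a made-up extreme estimate from an early stop
post.precision <- n.obs / sample.sd^2 + 1 / prior.sd^2
post.mean <- (sample.mean * n.obs / sample.sd^2 + prior.mean / prior.sd^2) / post.precision
post.mean   # ~2.17: the extreme small-sample estimate is shrunken towards the prior mean

Finally, here is the full R code of the simulation: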

# load required packages
library(ggplot2)
library(dplyr)
library(htmlTable)   # provides txtRound() and htmlTable()

# set seed for reproducibility (arbitrary value)
set.seed(42)

trueLevel <- 3
trueSD <- 2
safeLevel <- 2.7
maxN <- 50
minN <- 3

B <- 10000  # number of Monte Carlo simulations

res <- data.frame()
for (i in 1:B) {
    print(paste0(i, "/", B))
    maxSample <- rnorm(maxN, trueLevel, trueSD)

    # optional stopping: increase n until the one-sided t-test is significant or maxN is reached
    for (n in minN:maxN) {
        t0 <- t.test(maxSample[1:n], mu=safeLevel, alternative="less")
        #print(paste0("n=", n, "; ", t0$estimate, ": ", t0$p.value))
        if (t0$p.value <= .05) break
    }
    finalSample.seq <- maxSample[1:n]

    # now construct a matched fixed-n sample of the same size
    finalSample.fixed <- rnorm(n, trueLevel, trueSD)

    # ---------------------------------------------------------------------
    # save results in long format

    # sequential design
    res <- rbind(res, data.frame(
        id = i,
        type = "sequential",
        n = n,
        p.value = t0$p.value,
        selected = t0$p.value <= .05,
        empMean = mean(finalSample.seq)
    ))

    # fixed design
    res <- rbind(res, data.frame(
        id = i,
        type = "fixed",
        n = n,
        p.value = NA,
        selected = mean(finalSample.fixed) <= safeLevel,    # some arbitrary publication bias selection
        empMean = mean(finalSample.fixed)
    ))
}

save(res, file="res.RData")
# load("res.RData")

# Figure 1: Sampling distribution of the sample means
ggplot(res, aes(x=empMean)) + geom_density() + xlab("Sample mean") + geom_vline(xintercept=trueLevel, color="red") + facet_wrap(~type) + theme_bw()

# Figure 2: Individual study estimates by sample size
ggplot(res, aes(x=n, y=empMean)) + geom_jitter(height=0, alpha=0.15) + xlab("Sample size") + ylab("Sample mean") + geom_hline(yintercept=trueLevel, color="red") + facet_wrap(~type) + theme_bw()

# the mean estimate of all late terminations
res %>% group_by(type) %>% filter(n==50) %>% summarise(lateEst = mean(empMean))

# how many strategy A studies were significant?
res %>% filter(type=="sequential") %>% .[["selected"]] %>% table()

# Compute estimated lead levels
est.noBias <- res %>% group_by(type) %>% dplyr::summarise(
    bias = FALSE,
    naive.mean = mean(empMean),
    weighted.mean = weighted.mean(empMean, w=n)
)

est.Bias <- res %>% filter(selected==TRUE) %>% group_by(type) %>% dplyr::summarise(
    bias = TRUE,
    naive.mean = mean(empMean),
    weighted.mean = weighted.mean(empMean, w=n)
)

est <- rbind(est.noBias, est.Bias)

# output a html table
est.display <- txtRound(data.frame(est), 2, excl.cols=1:2)
t1 <- htmlTable(est.display,
          header =  c("Sampling plan", "PubBias", "Naive mean", "Weighted mean"),
          rnames = FALSE)

LMU psychology department distributes funding based on criteria of research transparency

The Psychology Department at LMU Munich continues to change the incentive structure towards reproducible and open science. The internal distribution of funding is now partly based on transparency criteria: publications with open data, open materials, and pre-registrations earn bonus points, which directly translate into a larger allocation of funds for that research unit.

Changing hiring practices towards research transparency: The first open science statement in a professorship advertisement

Engaging in open science practices increases knowledge as a common good, and ensures the reproducibility, verifiability, and credibility of research. But some fear that, on an individual strategic level (in particular from an early-career perspective), engaging in research transparency could reduce a researcher’s chances of getting a tenured position in academia.

University hiring decisions are often driven (amongst other criteria) by publication quantity and journal prestige: “Several universities base promotion decisions on threshold h-index values and on the number of articles in ‘high-impact’ journals” (Hicks, Wouters, Waltman, de Rijcke, & Rafols, 2015), and Nosek, Spies, & Motyl (2012) mention “[…] the prevailing perception that publication numbers and journal prestige are the key drivers for professional success”.

We all know where this focus on pure quantity and too-perfect results led us: “In a world where researchers are rewarded for how many papers they publish, this can lead to a decrease in the truth value of our shared knowledge” (Nelson, Simmons, & Simonsohn, 2012), which can be seen in ongoing debates about low replication rates in psychology, medicine, or economics.

Doing studies with high statistical power, preparing open data, and trying to publish realistic results that are not hacked to (unrealistic) perfection will slow scientists down. Researchers engaging in these good research practices will probably have a smaller quantity of publications, and if that is the major selection criterion, they are at a disadvantage in a competitive job market for tenured positions.

For this reason, hiring standards also have to change towards valuing research transparency, and the Department of Psychology at LMU München has taken a first step in this direction.

Based on a suggestion by our Open Science Committee, the department added a paragraph to a professorship job advertisement that asks candidates for an open science statement:
[Screenshot of the job advertisement with the open science paragraph (in German)]

Here’s a translation of the open science paragraph:

Our department embraces the values of open science and strives for replicable and reproducible research. To this end, we support transparent research with open data, open materials, and pre-registrations. Candidates are asked to describe in what ways they have already pursued, and plan to pursue, these goals.

This paragraph clearly communicates open science as a core value of our department.

Of course, criteria of research transparency will not be the only criteria of evaluation for candidates. But, to my knowledge, this is the first time that they are explicit criteria.

Jean-Claude Burgelman (Directorate General for Research and Innovation of the European Commission) says that “the career system has to gratify open science”. I hope that many more universities will follow the LMU’s lead with an explicit commitment to open science in their hiring practices.


