by Angelika Stefan & Felix Schönbrodt
Almost all researchers have experienced the tingling feeling of suspense that arises right before they take a look at long-awaited data: Will they support their favored hypothesis? Will they yield interesting or even groundbreaking results? In a perfect world (especially one without publication bias), the cause of this suspense should be nothing but scientific curiosity. However, the world, and specifically the incentive system in science, is not perfect. A lot of pressure rests on researchers to produce statistically significant results. For many researchers, statistical significance is the cornerstone of their academic career, so non-significant results in an important study can not only call their scientific convictions into question but also crash their hopes of professional promotion (although, fortunately, things are changing for the better).
Now, what does a researcher do when confronted with messy, non-significant results? According to several much-cited studies (for example John et al., 2012; Simmons et al., 2011), a common reaction is to start sampling again (and again, and again, …) in the hope that a somewhat larger sample can boost significance. Another reaction is to wildly conduct hypothesis tests on the existing sample until at least one of them becomes significant (see for example Simmons et al., 2011; Kerr, 1998). These practices, along with some others, are commonly known as p-hacking, because they are designed to drag the famous p-value right below the mark of .05 that usually indicates statistical significance. Undisputedly, p-hacking works (for a demonstration, try out the p-hacker app). The two questions we want to answer in this blog post are: How does it work, and why is that bad for science?
As many people may have heard, p-hacking works because it exploits a process called alpha error accumulation, which is covered in most introductory statistics classes (but also easily forgotten again). Basically, alpha error accumulation means that the more hypothesis tests one conducts, the higher the probability of making a wrong test decision at least once. Specifically, this wrong test decision is a false positive decision or alpha error, which means that you proclaim the existence of an effect although, in fact, there is none. In statistical terms, an alpha error occurs when a test yields a significant result although the null hypothesis (“there is no effect”) is true in the population. This means that p-hacking leads to the publication of an increased rate of false positive results, that is, studies that claim to have found an effect although their result is in fact just due to the randomness of the data. Such studies are unlikely to replicate.
At this point, the blog post could be over: p-hacking exploits alpha error accumulation and fosters the publication of false positive results, which is bad for science. However, we want to take a closer look at how bad it really is. In fact, some p-hacking techniques are worse than others (or, if you prefer the unscrupulous science villain perspective: some p-hacking techniques work better than others). As a showcase, we want to introduce two researchers: The HARKer takes existing data and conducts multiple independent hypothesis tests (based on multiple uncorrelated variables in the data set) with the goal of publishing the ones that become significant. For example, the HARKer tests for each possible correlation in a large data set whether it differs significantly from zero. The Accumulator, on the other hand, uses optional stopping: he collects data for a single hypothesis test in a sequential manner until either statistical significance or a maximum sample size is reached. For simplicity, we assume that neither of them uses any other p-hacking techniques or questionable research practices.
Let us start with the HARKer: Since the hypothesis tests in our scenario are essentially independent, the situation can be treated as a problem of multiple testing. This means it is comparatively easy to determine the exact probability that the HARKer will end up with at least one false positive result given a certain number of hypothesis tests. Assuming no effects in the population (for example, no correlation between the variables), one can picture the situation as a decision tree: At each branch level stands a hypothesis test, which can either yield a non-significant result with a probability of 95% or a (spurious) significant result with a probability of 5%, which is the α level.
No matter how many hypothesis tests the HARKer conducts, there will only be one path in the all-null scenario where no error occurs, that is, where all hypothesis tests yield non-significant results. The probability that this occurs can be calculated by (1 − α)^m, with m being the number of conducted hypothesis tests. The probability that at least one of the hypothesis tests is significant is the probability of the complementary event, that is, 1 − (1 − α)^m. For example, when the HARKer computes m = 10 hypothesis tests with an alpha level of α = .05, the overall probability to obtain at least one false positive result is 1 − (1 − .05)^10 ≈ .40. Of course, the formula can be adjusted for other suggested alpha levels, such as α = .005 or α = .01. We show this general formula in the R-code chunk below.
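Here is a minimal sketch of that chunk (the function name is our own choice):

```r
# Probability of at least one false positive among m independent tests
p.false.positive <- function(m, alpha = .05) {
  1 - (1 - alpha)^m
}
p.false.positive(m = 10)               # 0.401 with the default alpha = .05
p.false.positive(m = 10, alpha = .005) # the same for a stricter alpha level
```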
The Accumulator has a different tactic: Instead of conducting multiple hypothesis tests on different variables of one data set, he repeatedly conducts the same hypothesis test on the same variables in a growing sample. Starting with a minimum sample size, the Accumulator looks at the results of the analysis – if these are significant, data collection is stopped; if not, more data are collected until (finally) the results become significant or a maximum sample size is reached. This strategy is also called optional stopping. Of course, the more often a researcher peeks at the data, the higher the probability of obtaining a false positive result at least once. However, this overall probability is not the same as the one obtained through HARKing. The reason is that the hypothesis tests are not independent in this case. Why is that? The same hypothesis test is repeatedly conducted on only slightly different data. In fact, the data used in the first hypothesis test enter every single one of the subsequent hypothesis tests, so there is a spillover effect from the first test to every other hypothesis test in the set. Imagine your initial sample contains an outlier: This outlier will affect the results of every subsequent test. With multiple testing, in contrast, the outlier will affect only the test in question but none of the other tests in the set.
So does this dependency make optional stopping more or less effective than HARKing? Of course, people have been wondering about this for quite a while. A paper by Armitage et al. (1969) demonstrates error accumulation in optional stopping for three different tests. We can replicate their results for the z-test with a small simulation (a more flexible simulation can be found at the end of the blog post): We start by drawing a large number of samples (iter) with the maximum sample size (n.max) from the null hypothesis. Then we conduct a sequential testing procedure on each of the samples, starting with a minimum sample size (n.min) and increasing it in steps of a fixed size (step) up to the maximum sample size. The probability of obtaining a significant result at least once up to a certain step can be estimated by the percentage of iterations that show at least one significant result up to that point.
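A minimal sketch of such a simulation for the two-sided one-sample z-test could look like this (function and argument names are our own choices):

```r
# Draw iter samples of size n.max under H0 (mu = 0, sigma = 1 known) and
# test sequentially at each peek from n.min to n.max in steps of size step.
sim.seq.test <- function(n.min = 10, n.max = 100, step = 10,
                         alpha = .05, iter = 10000) {
  peeks <- seq(n.min, n.max, by = step)
  x <- matrix(rnorm(iter * n.max), nrow = iter)      # iter samples of size n.max
  sig <- sapply(peeks, function(n) {
    z <- rowMeans(x[, 1:n, drop = FALSE]) * sqrt(n)  # z statistic at peek n
    abs(z) > qnorm(1 - alpha / 2)                    # two-sided test decision
  })
  # For each iteration: has at least one test up to this peek been significant?
  hit <- t(apply(sig, 1, cummax))
  data.frame(n = peeks, fpr = colMeans(hit))         # cumulative false positive rate
}
```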
For example, the researcher conducts a two-sided one-sample z-test with an overall α level of .05 in a sequential way. He starts with 10 observations, then always adds another 10 if the result is not significant, up to a maximum of 100 observations. This means he has 10 chances to peek at the data and to end data collection if the hypothesis test is significant. Using our simulation function, we can determine the probability of having obtained at least one false positive result at any of these steps:
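With the simulation sketch from above, this scenario translates to:

```r
set.seed(1)
sim.seq.test(n.min = 10, n.max = 100, step = 10, alpha = .05)
# fpr starts at the nominal .05 with one peek and climbs to roughly .20
# with ten peeks (exact values vary with the simulation seed)
```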
We can see that with one single evaluation, the false positive rate is at the nominal 5%. However, when more in-between tests are calculated, the false positive rate rises to roughly 20% with ten peeks. This means that even if there is no effect at all in the population, the researcher would have stopped data collection with a significant result in 20% of the cases.
Let’s compare the false positive rates of HARKing and optional stopping: Since the researcher in our example above conducts one to ten dependent hypothesis tests, we can compare this to a situation where a HARKer conducts one to ten independent hypothesis tests. The figure below shows the results of both p-hacking strategies:
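A rough way to reproduce such a figure, reusing the simulation sketch from above:

```r
m <- 1:10
harking <- 1 - (1 - .05)^m            # independent tests (HARKer)
optstop <- sim.seq.test()$fpr         # sequential peeks (Accumulator)
plot(m, harking, type = "b", pch = 16, ylim = c(0, 0.45),
     xlab = "Number of tests / peeks", ylab = "False positive rate")
lines(m, optstop, type = "b", lty = 2)
legend("topleft", legend = c("HARKing", "Optional stopping"),
       lty = c(1, 2), pch = c(16, 1))
```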
We can see that HARKing produces higher false positive rates than optional stopping with the same number of tests. This can be explained by the dependency on the first sample in the case of optional stopping: Given that the null hypothesis is true, this sample is not very likely to show extreme effects in either direction (although there is a small probability that it does). Every extension of this sample has to “overcome” this property: it must not only be extreme in itself, but extreme enough to shift the test on the overall sample from non-significance to significance. In contrast, every sample in the multiple testing case only needs to be extreme in itself. Note, however, that false positive rates in optional stopping depend not only on the number of interim peeks, but also on the size of the initial sample and on the step size (how many observations are added between two peeks?). The difference between multiple testing and optional stopping that you see in the figure above is therefore only valid for this specific case. Going back to the two researchers from our example, we can say that the HARKer has a better chance of coming up with significant results than the Accumulator if both conduct the same number of hypothesis tests.
You can use the interactive p-hacker app to experience the efficiency of both p-hacking strategies yourself: You can increase the number of dependent variables and see whether one of them becomes significant (HARKing), or you can go to the “Now: p-hack!” tab and increase your sample until you obtain significance. Note that, unlike in our example above, the DVs in the p-hacker app are not completely independent; they correlate with r = .2, assuming that the DVs to some extent measure at least related constructs.
To conclude, we have shown how two p-hacking techniques work and why their application is bad for science. We found that p-hacking techniques based on multiple testing typically end up with higher rates of false positive results than p-hacking techniques based on optional stopping, if we assume the same number of hypothesis tests. We want to stress that this does not mean that naive optional stopping is okay (or even okay-ish) in frequentist statistics, even if it does have a certain appeal. For those who want to do guilt-free optional stopping, there are ways to control error accumulation within the frequentist framework (see, for example, Wald, 1945; Chow & Chang, 2008; Lakens, 2014), as well as sequential Bayesian hypothesis tests (see, for example, our paper on sequential hypothesis testing with Bayes factors, or Rouder, 2014).
Previous investigations typically looked only at publication bias or questionable research practices (QRPs), but not both; used non-representative study-level sample sizes; or compared only a few bias-correcting techniques, but not all of them. Our goal was to simulate a research literature that is as realistic as possible for psychology. In order to simulate several research environments, we fully crossed five experimental factors: (1) the true underlying effect, δ (0, 0.2, 0.5, 0.8); (2) between-study heterogeneity, τ (0, 0.2, 0.4); (3) the number of studies in the meta-analytic sample, k (10, 30, 60, 100); (4) the percentage of studies in the meta-analytic sample produced under publication bias (0%, 60%, 90%); and (5) the use of QRPs in the literature that produced the meta-analytic sample (none, medium, high).
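In R, such a fully crossed design can be written down compactly (the variable names here are illustrative, not the ones from our simulation code):

```r
# 4 x 3 x 4 x 3 x 3 = 432 simulated research environments
design <- expand.grid(
  delta    = c(0, 0.2, 0.5, 0.8),         # true underlying effect
  tau      = c(0, 0.2, 0.4),              # between-study heterogeneity
  k        = c(10, 30, 60, 100),          # studies per meta-analysis
  pub.bias = c(0, 0.6, 0.9),              # proportion of studies under publication bias
  qrp      = c("none", "medium", "high")  # QRP intensity
)
nrow(design)  # 432
```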
This blog post summarizes some insights from our study, internally called “meta-showdown”. Check out the preprint and the interactive app metaExplorer. The fully reproducible and reusable simulation code is on Github, and more information is on OSF.
In this blog post, I will highlight some lessons that we learned during the project, primarily focusing on what not to do when performing a meta-analysis.
Constraints on Generality disclaimer: These recommendations apply to typical sample sizes, effect sizes, and heterogeneities in psychology; other research literatures might have different settings and therefore a different performance of the methods. Furthermore, the recommendations rely on the modeling assumptions of our simulation. We went to great lengths to make these as realistic as possible, but other assumptions could lead to other results.
If studies have no publication bias, nothing can beat plain old random effects meta-analysis: it has the highest power, the least bias, and the highest efficiency compared to all other methods. Even in the presence of some (though not extreme) QRPs, naive RE performs better than all other methods. When can we expect no publication bias? If (and, in my opinion only if) we meta-analyze a set of registered reports.
In any setting other than registered reports, a considerable amount of publication bias must be expected. In the field of psychology/psychiatry, more than 90% of all published hypothesis tests are significant (Fanelli, 2011), although the average power is estimated at around 35% (Bakker, van Dijk, & Wicherts, 2012) – this gap points towards huge publication bias. In the presence of publication bias, naive random effects meta-analysis and trim-and-fill have false positive rates approaching 100%:
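The mechanism can be sketched in a few lines (a crude toy model, not our meta-showdown code): simulate studies with a true effect of zero, always publish significant positive results, and censor most of the rest – a naive random effects meta-analysis of the published studies then reliably finds an “effect”.

```r
library(metafor)
set.seed(42)
k <- 30; n <- 40; se <- sqrt(2 / n)   # k studies, two groups of n = 40 each
publish.one <- function() {
  repeat {
    g <- rnorm(1, mean = 0, sd = se)  # observed effect size under delta = 0
    sig.pos <- g / se > qnorm(.975)   # significant positive result?
    # significant results are always published, others only with 5% probability
    if (sig.pos || runif(1) < .05) return(g)
  }
}
yi <- replicate(k, publish.one())
rma(yi = yi, vi = rep(se^2, k))  # naive RE: pooled estimate > 0, typically
                                 # highly "significant" -- a false positive
```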
More thoughts about trim-and-fill’s inability to recover δ=0 are in Joe Hilgard’s blog post. (Note: this insight is not really new and has been shown multiple times before, for example by Moreno et al., 2009, and Simonsohn, Nelson, and Simmons, 2014).
Our recommendation: Never trust meta-analyses based on naive random effects and trim-and-fill, unless you can rule out publication bias. Results from previously published meta-analyses based on these methods should be treated with a lot of skepticism.
Update 2017/06/09: We had a productive exchange with Uri Simonsohn and Joe Simmons concerning what should be estimated in a meta-analysis with heterogeneity. Traditionally, meta-analysts have tried to arrive at techniques that recover the true average effect of all conducted studies (AEA – average effect of all studies). Simonsohn et al. (2014) propose estimating a different quantity: the average effect of the studies one observes, rather than of all studies (AEO – average effect of observed studies). See Simonsohn et al. (2014), the associated Supplementary Material 2, and also this blog post for arguments why they think this is the more useful quantity to estimate.
Hence, an investigation of the topic can be done on two levels: A) What is the more appropriate estimand (AEA or AEO)? And B) under what conditions are estimators able to recover the respective true value with the least bias and the least variance?
Instead of updating the section of the current blog post in the light of our discussion, I decided to cut it out and move the topic to a future blog post. Likewise, one part of our manuscript’s revision will be a more detailed discussion of exactly these differences.
I archived the previous version of the blog post here.
Many bias-correcting methods are driven by QRPs – the more QRPs, the stronger the downward correction. However, this effect can get so strong that the methods overadjust in the opposite direction, even if all studies in the meta-analysis are of the same sign:
Note: You need to set the option “Keep negative estimates” to get this plot.
Our recommendation: Ignore bias-corrected results that go in the opposite direction; set the estimate to zero and do not reject H₀.
Typical small-study effects (e.g., by p-hacking or publication bias) induce a negative correlation between sample size and effect size – the smaller the sample, the larger the observed effect size. PET-PEESE aims to correct for that relationship. In the absence of bias and QRPs, however, random fluctuations can lead to a positive correlation between sample size and effect size, which leads to a PET and PEESE slope of the unintended sign. Without publication bias, this reversal of the slope actually happens quite often.
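For reference, PET and PEESE boil down to weighted meta-regressions of effect size on the standard error (PET) or the sampling variance (PEESE); here is a toy version with made-up data (all names and values are illustrative):

```r
set.seed(1)
k  <- 30
ni <- sample(20:200, k)                  # made-up per-study sample sizes
vi <- 2 / ni                             # sampling variances (SMD-like)
yi <- rnorm(k, mean = 0, sd = sqrt(vi))  # no bias, no QRPs, true effect = 0
pet   <- lm(yi ~ sqrt(vi), weights = 1 / vi)  # moderator: standard error
peese <- lm(yi ~ vi,       weights = 1 / vi)  # moderator: sampling variance
coef(pet)[1]; coef(peese)[1]             # intercepts = bias-corrected estimates
# A positive slope on sqrt(vi) is the typical small-study pattern; by chance
# alone the slope can turn negative, and PET then "corrects" upward.
```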
See for example the next figure. The true effect size is zero (red dot), naive random effects meta-analysis slightly overestimates the true effect (see black dotted triangle), but PET and PEESE massively overadjust towards more positive effects:
As far as I know, PET-PEESE is typically not intended to correct in the reverse direction. An underlying biasing process would have to systematically remove small studies that show a significant result with larger effect sizes, and keep small studies with non-significant results. In the current incentive structure of psychological research, I see no reason for such a process, unless researchers are motivated to show that a (maybe politically controversial) effect does not exist.
Our recommendation: Ignore the PET-PEESE correction if it has the wrong sign, unless there are good reasons for an untypical selection process.
A bias can be more easily accepted if it is always conservative – then one could conclude: “This method might miss some true effects, but if it indicates an effect, we can be quite confident that it really exists.” Depending on the conditions (i.e., how much publication bias, how many QRPs, etc.), however, PET/PEESE sometimes shows huge overestimation and sometimes huge underestimation.
For example, with no publication bias, some heterogeneity (τ=0.2), and severe QRPs, PET/PEESE underestimates the true effect of δ = 0.5:
In contrast, if no effect exists in reality but there is strong publication bias, large heterogeneity, and no QRPs, PET/PEESE overestimates a lot:
In fact, the distribution of PET/PEESE estimates looks virtually identical for these two examples, although the underlying true effect is δ = 0.5 in the upper plot and δ = 0 in the lower plot. Furthermore, note the huge spread of PET/PEESE estimates (the error bars visualize the 95% quantiles of all simulated replications): Any single PET/PEESE estimate can be very far off.
Our recommendation: As one cannot know the condition of reality, it is probably safest not to use PET/PEESE at all.
Again, please consider the “Constraints on Generality” disclaimer above.
As with any general recommendations, there might be good reasons to ignore them.
by Angelika Stefan & Felix Schönbrodt
This is the second part of “Two meanings of priors”. The first part explained the first meaning – “priors as subjective probabilities of models”. While the first meaning of priors refers to a global appraisal of existing hypotheses, the second meaning refers to specific assumptions that are needed in the process of hypothesis building. The two kinds of priors have in common that they are both specified before concrete data are available. However, as will hopefully become evident from the following blog post, they differ substantially from each other and should be clearly distinguished during data analysis.
In order to know how well evidence supports one hypothesis compared to another, one must know the concrete specification of each hypothesis. For example, in the tea tasting experiment, each hypothesis was characterized by a specific probability (e.g., the success rate of exactly 0.5 in HFisher of the previous blog post). What might sound trivial at first – deciding on the concrete specification of a hypothesis – is in fact one of the major challenges in doing Bayesian statistics. Scientific theories are often imprecise, so there is more than one plausible way to derive a hypothesis. When deciding on one specific hypothesis, new auxiliary assumptions often have to be made. These assumptions, which are needed to specify a hypothesis adequately, are called “priors” as well. They influence the formulation and interpretation of the likelihood (which gives you the plausibility of the data under a specific hypothesis). We will illustrate this with an example.
A food company conducts market research in a large German city. They know from a recent representative survey by the German Federal Statistical Office that Germans spend on average 4.50 € on their lunch (standard deviation: 0.60 €). Now they want to know whether the inhabitants of this specific city spend more money on their lunch than the German average. They expect lunch expenses to be especially high in this city because of the generally high cost of living. In a traditional inferential testing procedure, the food company would formulate two hypotheses to test their assumption, a null and an alternative hypothesis: H0: µ ≤ 4.50 and H1: µ > 4.50.
In Bayesian hypothesis testing, the formulation of the hypotheses has to be more precise than this. We need precise hypotheses as a basis for the likelihood functions which assign probability values to possible states of reality. The traditional formulation, µ > 4.50, is too vague for that purpose: Is any lunch cost above 4.50€ a priori equally likely? Is it plausible that a lunch costs 1,000,000€ on average? Probably not. Not every state of reality is, a priori, equally plausible. “Models connect theory to data“ (Rouder, Morey, & Wagenmakers, 2016), and a model that predicts everything predicts nothing.
As Bayesian statisticians, we therefore must ask ourselves: Which values are more plausible given that our hypotheses are true? Of course, our knowledge differs from case to case on this point. Sometimes we may be able to commit to a very small range of plausible values or even to a single value (in this case, we would call the respective hypothesis a “point hypothesis”). Theories in physics sometimes predict a single state of reality: “If this theory is true, then the mass of a Chicks boson is exactly 1.52324E-16 gr”.
More often, however, our knowledge about plausible values under a certain theory might be less precise, leading to a wider range of plausible values. Hence, the prior in the second sense defines the probability of a parameter value given a hypothesis, p(θ | H1).
Let us come back to the food company example. Their null hypothesis might be that there is no difference between the city in the focus of their research and the German average. Hence, the null hypothesis predicts an average lunch cost of 4.50 €. The alternative hypothesis is slightly more complex. The company assumes that average lunch expenses in the city should be higher than the German average, so the most plausible value under the alternative hypothesis should be higher than 4.5. However, they may deem it very improbable that the mean lunch expenses are more than two standard deviations above the German average (so, for example, it should be very improbable that someone spends more than, say, 10 EUR for lunch even in the expensive city). With this knowledge, they can put most plausibility on values in the range from 4.5 to 5.7 (4.5 + 2 standard deviations). They could further specify their hypothesis by claiming that the most plausible value should be 5.1, i.e., one standard deviation above the German average. The elements of this verbal description of the alternative hypothesis can be summarized in a truncated normal distribution that is centered on 5.1 and truncated at 4.5 (as the directional hypothesis does not predict values in the opposite direction).
With this model specification, the researchers would place 13% of the probability mass on values larger than 2SD of the general population (i.e., > 5.7).
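The width of that truncated normal is not spelled out here; assuming a standard deviation of 0.5 for the distribution (our assumption for illustration), a quick check in R reproduces the quoted 13%:

```r
# P(mu > 5.7 | H1) for a normal(5.1, 0.5) truncated below at 4.5
(1 - pnorm(5.7, mean = 5.1, sd = 0.5)) /
  (1 - pnorm(4.5, mean = 5.1, sd = 0.5))  # ~0.13
```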
Making it even more complex, they could quantify their uncertainty about the most plausible value (i.e., the maximum of the density distribution) by assigning another distribution to it. For example, they could build a normal distribution around it, with a mean of 5.1 and a standard deviation of 0.3. This would imply that in their opinion, 5.1 is the “most plausible most plausible value” but that values between 4.8 and 5.4 are also potential candidates for the most plausible value.
What you can notice in this example of hypothesis development is that the market researchers have to make auxiliary assumptions on top of their original hypothesis (which was H1: µ > 4.5). If possible, these prior plausibilities should be informed by theory or by previous empirical data. Specifying alternative hypotheses in this way may seem an unnecessary burden compared to traditional hypothesis testing, where these extra assumptions seemingly are not necessary. Except that they are necessary. Without going into detail in this blog post, we recommend reading Rouder et al.’s (2016a) “Is there a free lunch in inference?”, whose bottom line is that principled and rational inference needs specified alternative hypotheses. (For example, in Neyman-Pearson testing, you also need to specify a precise alternative hypothesis that refers to the “smallest effect size of interest”.)
Furthermore, readers might object: “Researchers rarely have enough background knowledge to specify models that predict data“. Rouder et al. (2016b) argue that this critique is overstated, as (1) with proper elicitation, researchers often know much more than they initially think, (2) default models can be a starting point if really no information is available, and (3) several models can be explored without penalty.
A question that may come to your mind soon after you have understood the difference between the two kinds of priors is: If both are called “priors”, do they depend on each other in some way? Does the formulation of your “personal prior plausibility of a hypothesis” (like the skeptical observer’s prior on Mrs. Bristol’s tea tasting abilities) influence the specification of your model (like the hypothesis specification in the second example), or vice versa?
The straightforward answer to this question is: no, they don’t. This can be easily illustrated with a case where the prior conviction of a researcher runs against the hypothesis he or she wants to test. The food company in the second example has carefully specified the likelihoods of the two hypotheses (H0 and H1), which they want to pit against each other. They are probably considerably convinced that the specification of the alternative hypothesis describes reality better than the specification of the null hypothesis. In a simplified form, their prior odds (i.e., priors in the first sense) can be described as a ratio like 10:1. This would mean that they deem the alternative hypothesis ten times as likely as the null hypothesis. However, another food company may have prior odds of 3:5 while conducting the same test (i.e., using the same prior plausibilities of the model parameters). This shows that priors in the first sense are independent of priors in the second sense. Priors in the first sense change with different personal convictions, while priors in the second sense remain constant. Similarly, prior beliefs can change after seeing the data – the formulation of the model (i.e., what a theory predicts) stays the same. (As long as the theory from which the model specification is derived does not change. In an estimation context, the model parameters are updated by the data.)
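As a toy illustration (the Bayes factor value here is made up): with the same model specification, the data yield the same Bayes factor for both companies, but their different prior odds lead to different posterior odds.

```r
BF10 <- 6               # hypothetical Bayes factor, fixed by the model priors
prior.odds.1 <- 10 / 1  # first company's prior conviction
prior.odds.2 <- 3 / 5   # second company's prior conviction
prior.odds.1 * BF10     # posterior odds, first company: 60
prior.odds.2 * BF10     # posterior odds, second company: 3.6
```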
The term “prior” has two meanings in the context of Bayesian hypothesis testing. The first one, usually applied in Bayes factor analysis, is the prior subjective probability of a hypothesis (“how plausible do you deem a hypothesis, compared to another hypothesis, before seeing the data?”). The second meaning refers to the assumptions made in the specification of the models of the hypotheses, which are needed to derive the likelihood function. These two meanings of the term “prior” have to be clearly distinguished during data analysis, especially as they do not depend on each other in any way. Some researchers (e.g., Dienes, 2016) therefore suggest calling only priors in the first sense “priors” and speaking of the “specification of the model” when referring to the second meaning.
Dienes, Z. (2011). Bayesian versus orthodox statistics: Which side are you on? Perspectives on Psychological Science, 6(3), 274–290. https://doi.org/10.1177/1745691611406920
Dienes, Z. (2016). How Bayes factors change scientific practice. Journal of Mathematical Psychology, 72, 78–89. https://doi.org/10.1016/j.jmp.2015.10.003
Lindley, D. V. (1993). The analysis of experimental data: The appreciation of tea and wine. Teaching Statistics, 15(1), 22–25. https://doi.org/10.1111/j.1467-9639.1993.tb00252.x
Rouder, J. N., Morey, R. D., Verhagen, J., Province, J. M., & Wagenmakers, E.-J. (2016a). Is there a free lunch in inference? Topics in Cognitive Science, 8, 520–547. https://doi.org/10.1111/tops.12214
Rouder, J. N., Morey, R. D., & Wagenmakers, E.-J. (2016b). The interplay between subjectivity, statistical practice, and psychological science. Collabra, 2(1), 6–12. https://doi.org/10.1525/collabra.28