What’s the probability that a significant p-value indicates a true effect?

If the p-value is < .05, then the probability of falsely rejecting the null hypothesis is < 5%, right? That means that at most 5% of all significant results are false positives (that’s what we control with the α rate).

Well, no. As you will see in a minute, the “false discovery rate” (sometimes also called the false-positive rate), which indicates the probability that a significant p-value actually is a false positive, is usually much higher than 5%.

A common misconception about p-values

Oakes (1986) asked students and senior scientists the following question:

You have a p-value of .01. Is the following statement true or false?

You know, if you decide to reject the null hypothesis, the probability that you are making the wrong decision.

The answer is “false” (you will learn below why it is false). But 86% of all professors and lecturers in the sample who were teaching statistics (!) erroneously answered this question with “true”. Gigerenzer, Krauss, and Vitouch replicated this result in 2000 in a German sample (here, 73% of the “statistics lecturer” category answered incorrectly). Hence, it is a widespread error to confuse the p-value with the false discovery rate.

The False Discovery Rate (FDR) and the Positive Predictive Value (PPV)

To answer the question “What’s the probability that a significant p-value indicates a true effect?”, we have to look at the positive predictive value (PPV) of a significant p-value. The PPV is the proportion of significant p-values that reflect a real effect among all significant p-values. Put differently: given that a p-value is significant, what is the probability (in a frequentist sense) that it stems from a real effect?

(The false discovery rate is simply 1 − PPV: the probability that a significant p-value stems from a population with a null effect.)

That is, we are interested in a conditional probability Prob(effect is real | p-value is significant).
Inspired by Colquhoun (2014), one can visualize this conditional probability in the form of a tree diagram (see below). Let’s assume we carry out 1000 experiments for 1000 different research questions. We now have to make a couple of prior assumptions (which you can set differently in the app we provide below). For now, we assume that 30% of all studies investigate a real effect and that the statistical test used has a power of 35% with an α level set to 5%. That is, of the 1000 experiments, 300 investigate a real effect and 700 a null effect. Of the 300 true effects, 0.35 × 300 = 105 are detected; the remaining 195 are non-significant false negatives. On the other branch, of the 700 null effects, 0.05 × 700 = 35 p-values are significant by chance (false positives) and 665 are non-significant (true negatives).

These paths are visualized in the following tree diagram (completely inspired by Colquhoun, 2014):

[Figure: tree diagram splitting the 1000 hypothetical studies into true/null effects and significant/non-significant results]

Now we can compute the false discovery rate (FDR): 35 of the (35 + 105) = 140 significant p-values actually come from a null effect. That means that 35/140 = 25% of all significant p-values do not indicate a real effect! That is much more than the alleged 5% level (see also Lakens & Evers, 2014, and Ioannidis, 2005).
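
If you want to verify these numbers yourself, here is a minimal sketch in Python that reproduces the tree diagram and the resulting FDR (the variable names are mine, purely for illustration):

```python
# Reproduce the tree diagram: 1000 studies, 30% true effects, 35% power, 5% alpha
n_studies  = 1000
prior_true = 0.30   # proportion of research questions with a real effect
power      = 0.35   # probability that a real effect yields a significant p-value
alpha      = 0.05   # significance level

n_true = prior_true * n_studies         # 300 studies on real effects
n_null = (1 - prior_true) * n_studies   # 700 studies on null effects

true_positives  = power * n_true        # 105 significant real effects
false_negatives = (1 - power) * n_true  # 195 missed real effects
false_positives = alpha * n_null        #  35 significant by chance
true_negatives  = (1 - alpha) * n_null  # 665 correct non-significant results

fdr = false_positives / (false_positives + true_positives)  # 35 / 140
ppv = 1 - fdr
print(f"FDR = {fdr:.2f}, PPV = {ppv:.2f}")  # FDR = 0.25, PPV = 0.75
```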

An interactive app

Together with Michael Zehetleitner I developed an interactive app that computes and visualizes these numbers. For the computations, you have to choose four parameters.

Let’s go through the settings!


[Screenshot: app setting for the proportion of true hypotheses]
Some of our investigated hypotheses are actually true, and some are false. As a first parameter, we have to estimate what proportion of our investigated hypotheses is actually true.

Now, what is a good setting for the a priori proportion of true hypotheses? It is certainly not near 100%: in that case, only trivial and obvious research questions would be investigated, which is clearly not the case. On the other hand, the rate can definitely drop close to zero. For example, in pharmaceutical drug development, “only one in every 5,000 compounds that makes it through lead development to the stage of pre-clinical development becomes an approved drug” (Wikipedia). Here, only 0.02% of all investigated hypotheses are true.

Furthermore, the number depends on the field – some fields are highly speculative and risky (i.e., they have a low prior probability), some fields are more cumulative and work mostly on variations of established effects (i.e., in these fields a higher prior probability can be expected).

But given that many journals in psychology exert a selection pressure towards novel, surprising, and counter-intuitive results (which a priori have a low probability of being true), I guess that the proportion is typically lower than 50%. My personal grand average gut estimate is around 25%.

(Also see this comment and this reply for a discussion about this estimate).


[Screenshot: app setting for the α level]

That’s easy. The default α level usually is 5%, but you can play with the impact of stricter levels on the FDR!


[Screenshot: app setting for the statistical power]
The average power in psychology has been estimated at 35% (Bakker, van Dijk, & Wicherts, 2012). A median estimate for neuroscience is only 21% (Button et al., 2013). Even worse, both estimates can be expected to be inflated, as they are based on the average published effect size, which is almost certainly overestimated due to the significance filter (Ioannidis, 2008). Hence, the average true power is most likely smaller. Let’s assume an estimate of 25%.


[Screenshot: app setting for the proportion of p-hacked studies]
Finally, let’s add some realism to the computations. We know that researchers employ “researcher degrees of freedom”, a.k.a. questionable research practices, to optimize their p-values and to push “nearly significant” results across the magic boundary. How many reported significant p-values would not have been significant without p-hacking? That is hard to tell, and probably also field dependent. Let’s assume that 15% of all studies are p-hacked, intentionally or unintentionally.

When these values are defined, the app computes the FDR and PPV and shows a visualization:

[Screenshot: the app’s visualization of these settings]

With these settings, only 39% of all significant studies are actually true!
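
If you want to check this number without the app, the sketch below extends the calculation from above by a p-hacking parameter. The way p-hacking enters here is my own simplifying assumption (a p-hacked study always ends up significant, while the remaining studies behave as before), not necessarily the exact model of the app; with the settings above it nevertheless yields a PPV of roughly 39%.

```python
def ppv(prior_true, alpha, power, p_hacked):
    """PPV of a significant result, assuming (simplification) that a
    p-hacked study always yields a significant p-value."""
    sig_if_true = p_hacked + (1 - p_hacked) * power   # P(significant | real effect)
    sig_if_null = p_hacked + (1 - p_hacked) * alpha   # P(significant | null effect)
    sig_true = prior_true * sig_if_true               # share of true positives
    sig_null = (1 - prior_true) * sig_if_null         # share of false positives
    return sig_true / (sig_true + sig_null)

# Settings from the text: 25% true hypotheses, alpha = 5%, power = 25%, 15% p-hacked
print(round(ppv(prior_true=0.25, alpha=0.05, power=0.25, p_hacked=0.15), 2))  # ~0.39
```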

Wait – what was the success rate of the Reproducibility Project: Psychology? 36% of the replications found a significant effect in the direct replication attempt. Just a coincidence? Maybe. Maybe not.

The formulas for computing the FDR and PPV are based on Ioannidis (2005: “Why most published research findings are false“). A related, but different, approach was proposed by David Colquhoun in his paper “An investigation of the false discovery rate and the misinterpretation of p-values” [open access]. He asks: “How should one interpret the observation of, say, p = 0.047 in a single experiment?”. The Ioannidis approach implemented in the app, in contrast, asks: “What is the FDR in a set of studies with p <= .05 and a certain power, etc.?”. Both approaches make sense, but they answer different questions.
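
For reference, in the simplest case without bias, the Ioannidis (2005) formula behind these numbers can be written with the pre-study odds R = prior / (1 − prior) and the type II error rate β = 1 − power:

$$
\mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha}, \qquad R = \frac{\text{prior}}{1-\text{prior}}
$$

Ioannidis additionally introduces a bias parameter u for practices such as p-hacking; the p-hacking setting in the app plays an analogous role.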

Other resources about PPV and FDR of p-values



Reflections about our Open Science Committee’s first meeting

Yesterday, we had the first meeting of our department’s Open Science Committee. I am happy that the committee has 20 members, representing every research unit of the department, and all groups from PhD students to full professors.
In the meeting, I first gave a quick overview of the replication crisis in psychology. My impression was that there was broad consensus that we indeed have a problem and that we should think about possible consequences. Then we started an open discussion in which we collected questions, reservations, and ideas.
Here are some topics from our discussion, in no particular order. Not all of them could be resolved at the meeting (which was not the goal), but eventually these stubs could grow into an FAQ:
  • It is important to acknowledge the diversity of our (sub)fields. Even if we agree on the overarching values of open science, the specific implementations might differ. The current discussion is often focused on experimental laboratory (or online) research. What about existing large-scale data sets? What about sensitive video data from infant studies? What about I/O research in companies, where agreements with the works council forbid open data? Feasible protocols and solutions have to be developed in these fields.
  • Does the new focus on power lead to boring and low-risk “MTurk research”? This is certainly not the goal, but we should be aware that it could happen as an unintended side effect. For example, all Many Labs projects so far have focused on easy-to-implement computer studies. Given the focus of these projects this is understandable, but we should not forget “the other” research.
  • We had a longer discussion which could be framed as “strategic choices vs. intrinsic motivation (and social mandate) to increase knowledge”. From a moral point of view, the choice is clear. But we all are also individuals who have to feed our families (or, at least, ourselves), and the strategic perspective has an existential relevance to us.
    Related to this question is also the next point:
  • What about the “middle” generation who will soon be looking for a job in academia? Can we really recommend going the open way? Without the possibility to p-hack, and with the goal of running high-powered studies (which typically require a larger n), individual productivity (a.k.a. the number of published papers) will decline. (Of course, “productivity” in terms of “increase in valid knowledge” will rise.)
    This would be my current answer: I expect that the gain in reputation outweighs the potential loss in the number of published papers. Furthermore, we now have several techniques that allow us to assess the likelihood of p-hacking and the evidential value of a set of studies. If we present a paper with four studies and ps = .03, .04, .04, and .05, chances are high that we will not earn a lot of respect but rather sceptical frowns. Hence, with increasing knowledge about healthy p-curves and other indicators, the old strategy of packing together too-good-to-be-true studies might soon backfire.
    Finally, I’d advocate for an agentic position: It’s not some omnipotent external force that imposes an evil incentive structure on us. We are the incentive structure! At least at our department we can make sure that we do not incentivize massive p-hacking, but reward scientists that produce transparent, replicable, and honest research.
  • The new focus on replicability and transparency criteria does not imply that other quality indicators (such as good theoretical foundations) are less important.
  • Some change can be achieved through positive, voluntary incentives. For example, the Open Science Badges led to 40% of papers in the journal Psychological Science having open data. In other situations, we might need mandatory rules. Concerning this voluntary/mandatory dimension: when is which approach appropriate and more constructive?
  • One experience: registered reports can take a long time in the review process – a problem for publication-based dissertations?
  • We have to explain to the field the methods on which we base the conclusion that we have a replicability problem. In some discussions (not in our OSC ;-)) you can hear something like: “Some young greenhorns invent some fancy statistical index and tell us that everything we have done is crap. First they should show that their method is valid!” It is our responsibility to explain these methods to researchers who do not follow the current replicability discussion so closely or who are not so statistically savvy.
  • Idea: Should we give an annual Open-Science-Award at our department?
We have no ready-made answers for many of these questions. Most of them have to be tackled at multiple levels. Any comments and other perspectives on these open questions are appreciated!

Talks and Workshops

One goal of the committee is to train our researchers in new tools and topics. I am happy to announce that we will host at least 4 talks/workshops in our department in the remainder of 2015:
  • Sep 30, 2015: Jonathon Love (Amsterdam): JASP – A Fresh Way to Do Statistics (14-16, room 3322)
  • Nov 5, 2015: Daniel Lakens (TU Eindhoven): Practical Recommendations to Increase the Informational Value of Studies
  • End of November 2015 (TBA): Etienne LeBel: Introducing Curate Science (curatescience.org)
  • Dec 2015 (TBA): Felix Schönbrodt: How to detect p-hacking and publication bias: A practical introduction to p-curve, R-index, etc.
The plan for the next meeting is to discuss our voluntary commitment to research transparency.


A voluntary commitment to research transparency

The Reproducibility Project: Psychology was published last week, and it was another blow to the overall credibility of the current research system’s output.

Some interpretations of the results were in a “Hey, it’s all fine; nothing to see here; let’s just do business as usual” style. Without going into details about the “universal hidden moderator hypothesis” (see Sanjay’s blog for a reply) or “the results can easily be explained by regression to the mean” (see Moritz’s and Uli’s reply): I do not share these optimistic views, and I do not want to do “business as usual”.

What makes me much more optimistic about the state of our profession than unfalsifiable post-hoc “explanations” is that there has been considerable progress towards open science, such as the TOP guidelines for transparency and openness in scientific journals, the introduction of registered reports, and the introduction of the open science badges (Psychological Science has increased the sharing of data and materials from near zero to nearly 38% in 1.5 years, simply by awarding the badges). And all of this happened within the last 3 years!

Beyond these already beneficial changes, we asked ourselves: what can we do at the personal and local department level to make more published research true?

A first reaction was the foundation of our local Open Science Committee (more about this soon). As another step, together with some colleagues I developed a Voluntary Commitment to Research Transparency.

The idea of that public commitment is to signal to others that we follow these guidelines of open science. The signal is supposed to go to:

  • Colleagues in the department and other universities (With the hope that more and more will join)
  • Co-authors (This is how we will do science)
  • Funding agencies (We prefer quality over quantity)
  • Potential future employers (This is our research style, if you want that)
  • PhD students:
    • If you want to do your PhD here: these are the conditions
    • If you apply for a job after your PhD, you will get the open-science-reputation-badge from us.


Now, here’s the current version of our commitment:

[Update 2015/11/19: I uploaded a minor revision which reflects some feedback from new signatories]

Voluntary Commitment to Research Transparency and Open Science

We embrace the values of openness and transparency in science. We believe that such research practices increase the informational value and impact of our research, as the data can be reanalyzed and synthesized in future studies. Furthermore, they increase the credibility of the results, as independent verification of the findings is possible.

Here, we express a voluntary commitment about how we will conduct our research. Please note that there can be justified exceptions to every guideline. But whenever we deviate from one of the guidelines, we give an explicit justification for why we do so (e.g., in the manuscript, or in the README file of the project repository).

As signatories, we warrant to follow these guidelines from the day of signature on:

Own Research

  1. Open Data: Whenever possible, for every first-authored empirical publication we publish all raw data necessary to reproduce the reported results in a reliable repository with high data persistence standards (such as the Open Science Framework).

  2. Reproducible scripts: For every first-authored empirical publication, we publish reproducible data analysis scripts and, where applicable, reproducible code for simulations or computational modeling.

  3. We provide (and follow) the “21-word solution” in every empirical publication: “We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study.”1 If necessary, this statement is adjusted to ensure that it is accurate.

  4. As co-authors we try to convince the respective first authors to act accordingly.


Reviewing

  1. As reviewers, we add the “standard reviewer disclosure request” if necessary (https://osf.io/hadz3/). It asks the authors to add a statement to the paper confirming whether, for all experiments, they have reported all measures, conditions, data exclusions, and how they determined their sample sizes.

  2. As reviewers, we ask for Open Data (or a justification why it is not possible).2

Supervision of Dissertations

  1. As PhD supervisors we put particular emphasis on the propagation of methods that enhance the informational value and the replicability of studies. From the very beginning of a supervisor-PhD student relationship we discuss these requirements explicitly.

  2. From PhD students, we expect that they provide Open Data, Open Materials and reproducible scripts to the supervisor (they do not have to be public yet).

  3. If PhD projects result in publications, we expect that they follow points 1 to 3 of “Own Research”.

  4. In the case of a series of experiments with a confirmatory orientation, it is expected that at least one pre-registered study is conducted with a justifiable a priori power analysis (in the frequentist case), or a strong evidence threshold (e.g., if a sequential Bayes factor design is implemented). A pre-registration consists of the hypotheses, design, data collection stopping rule, and planned analyses.

  5. The grading of the final PhD thesis is independent of the studies’ statistical significance. Publications are aspired to; however, a successful publication is not a criterion for passing or grading.

Service to the field

  1. As members of committees (e.g., tenure track, appointment committees, teaching, professional societies) or editorial boards, we will promote the values of open science.

1Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2012). A 21 word solution. Retrieved from: http://dx.doi.org/10.2139/ssrn.2160588

2See also Peer Reviewers’ Openness Initiative: http://opennessinitiative.org/

So far, 4 members of our department and 8 researchers from other universities have signed the commitment – take us at our word!

We hope that many more will join the initiative, or think about crafting their own personal commitment, at the openness level they feel comfortable with.
