
What’s the probability that a significant p-value indicates a true effect?

If the p-value is < .05, then the probability of falsely rejecting the null hypothesis is < 5%, right? That means that at most 5% of all significant results are false positives (that’s what we control with the α level).

Well, no. As you will see in a minute, the “false discovery rate” – the probability that a significant p-value actually is a false positive – is usually much higher than 5%.

A common misconception about p-values

Oakes (1986) asked students and senior scientists the following question:

You have a p-value of .01. Is the following statement true or false?

“You know, if you decide to reject the null hypothesis, the probability that you are making the wrong decision.”

The answer is “false” (you will learn why below). But 86% of the professors and lecturers in the sample who were teaching statistics (!) erroneously answered “true”. Gigerenzer, Krauss, and Vitouch replicated this result in 2000 in a German sample (there, 73% of the “statistics lecturer” category got it wrong). Hence, confusing the p-value with the false discovery rate is a widespread error.

The False Discovery Rate (FDR) and the Positive Predictive Value (PPV)

To answer the question “What’s the probability that a significant p-value indicates a true effect?”, we have to look at the positive predictive value (PPV) of a significant p-value. The PPV is the proportion of all significant p-values that indicate a real effect. In other words: given that a p-value is significant, what is the probability (in a frequentist sense) that it stems from a real effect?

(The false discovery rate is simply 1 − PPV: the probability that a significant p-value stems from a population with a null effect.)

That is, we are interested in a conditional probability Prob(effect is real | p-value is significant).
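Written out with Bayes’ theorem, this is (in essence the formula behind Ioannidis, 2005, and behind all numbers below):

PPV = Prob(effect is real | p-value is significant) = (power × prior) / (power × prior + α × (1 − prior)), and FDR = 1 − PPV,

where “prior” is the proportion of investigated hypotheses that are actually true.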
Inspired by Colquhoun (2014), one can visualize this conditional probability in the form of a tree diagram (see below). Let’s assume we carry out 1000 experiments for 1000 different research questions. We now have to make a couple of prior assumptions (which you can set differently in the app we provide below). For now, we assume that 30% of all studies investigate a real effect and that the statistical test has a power of 35% at an α level of 5%. That is, of the 1000 experiments, 300 investigate a real effect and 700 a null effect. Of the 300 true effects, 0.35 × 300 = 105 are detected; the remaining 195 are non-significant false negatives. On the other branch, of the 700 null effects, 0.05 × 700 = 35 p-values are significant by chance (false positives) and 665 are non-significant (true negatives).

These paths are visualized below (completely inspired by Colquhoun, 2014):

[Tree diagram: 1000 hypothetical studies split into true/null effects and significant/non-significant outcomes]

Now we can compute the false discovery rate (FDR): 35 of the (35 + 105) = 140 significant p-values actually come from a null effect. That means that 35/140 = 25% of all significant p-values do not indicate a real effect! That is much more than the alleged 5% level (see also Lakens & Evers, 2014, and Ioannidis, 2005).
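If you prefer code over trees, the following short Python snippet (not the app’s code – just a back-of-the-envelope sketch with variable names of my own choosing) reproduces the numbers from the tree diagram:

```python
# Sketch of the tree-diagram computation above (assumed variable names, not the app's code)
n_studies = 1000   # hypothetical experiments
prior     = 0.30   # proportion of investigated hypotheses that are actually true
power     = 0.35   # probability of detecting a true effect
alpha     = 0.05   # significance level

true_effects = prior * n_studies        # 300 studies investigate a real effect
null_effects = (1 - prior) * n_studies  # 700 studies investigate a null effect

true_positives  = power * true_effects        # 105 significant real effects
false_negatives = (1 - power) * true_effects  # 195 missed real effects
false_positives = alpha * null_effects        #  35 significant null effects
true_negatives  = (1 - alpha) * null_effects  # 665 non-significant null effects

ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.0%}, FDR = {1 - ppv:.0%}")  # PPV = 75%, FDR = 25%
```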

An interactive app

Together with Michael Zehetleitner I developed an interactive app that computes and visualizes these numbers. For the computations, you have to set four parameters.

Let’s go through the settings!

 

[Screenshot: setting the proportion of true hypotheses]

Some of our investigated hypotheses are actually true, and some are false. As a first parameter, we have to estimate what proportion of our investigated hypotheses is actually true.

Now, what is a good setting for the a priori proportion of true hypotheses? It’s certainly not near 100% – in that case only trivial and obvious research questions would be investigated, which is clearly not what happens. On the other hand, the rate can drop close to zero: in pharmaceutical drug development, for example, “only one in every 5,000 compounds that makes it through lead development to the stage of pre-clinical development becomes an approved drug” (Wikipedia). Here, only 0.02% of all investigated hypotheses are true.

Furthermore, the number depends on the field – some fields are highly speculative and risky (i.e., they have a low prior probability), some fields are more cumulative and work mostly on variations of established effects (i.e., in these fields a higher prior probability can be expected).

But given that many journals in psychology exert a selection pressure towards novel, surprising, and counter-intuitive results (which a priori have a low probability of being true), I guess that the proportion is typically lower than 50%. My personal grand average gut estimate is around 25%.

(Also see this comment and this reply for a discussion about this estimate).

 

[Screenshot: setting the α level]

That’s easy. The default α level usually is 5%, but you can play with the impact of stricter levels on the FDR!

 

[Screenshot: setting the statistical power]

The average power in psychology has been estimated at 35% (Bakker, van Dijk, & Wicherts, 2012). A median estimate for neuroscience is only 21% (Button et al., 2013). Even worse, both estimates can be expected to be inflated, as they are based on the average published effect size, which is almost certainly overestimated due to the significance filter (Ioannidis, 2008). Hence, the average true power is most likely smaller. Let’s assume an estimate of 25%.

 

[Screenshot: setting the proportion of p-hacked studies]

Finally, let’s add some realism to the computations. We know that researchers employ “researcher degrees of freedom”, a.k.a. questionable research practices, to optimize their p-values and to push “nearly significant” results across the magic boundary. How many reported significant p-values would not have been significant without p-hacking? That is hard to tell, and probably also field-dependent. Let’s assume that 15% of all studies are p-hacked, intentionally or unintentionally.

When these values are defined, the app computes the FDR and PPV and shows a visualization:

[Screenshot: the app’s visualization of the FDR and PPV]

With these settings, only 39% of all significant results reflect a true effect!

Wait – what was the success rate of the Reproducibility Project: Psychology? 36% of the replications found a significant effect in a direct replication attempt. Just a coincidence? Maybe. Maybe not.
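The app’s exact formula lives in its source code; as a rough sanity check, one simple assumption that lands at roughly the same number is to treat a p-hacked study as one that always ends up significant, regardless of whether the effect is real. That simplification is mine, not necessarily what the app does:

```python
# Back-of-the-envelope check of the ~39% figure above.
# Simplifying assumption: a p-hacked study always yields a "significant" result.
prior, power, alpha, hacked = 0.25, 0.25, 0.05, 0.15

p_sig_if_true = (1 - hacked) * power + hacked * 1.0   # significant, given a real effect
p_sig_if_null = (1 - hacked) * alpha + hacked * 1.0   # significant, given a null effect

ppv = prior * p_sig_if_true / (prior * p_sig_if_true + (1 - prior) * p_sig_if_null)
print(f"PPV = {ppv:.0%}, FDR = {1 - ppv:.0%}")        # PPV = 39%, FDR = 61%
```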

The formulas used to compute the FDR and PPV are based on Ioannidis (2005: “Why most published research findings are false“). A related but different approach was proposed by David Colquhoun in his paper “An investigation of the false discovery rate and the misinterpretation of p-values” [open access]. He asks: “How should one interpret the observation of, say, p = 0.047 in a single experiment?”. The Ioannidis approach implemented in the app, in contrast, asks: “What is the FDR in a set of studies with p <= .05 and a certain power, etc.?”. Both approaches make sense, but they answer different questions.

Other resources about PPV and FDR of p-values



Best Paper Award for the “Evolution of correlations”

I am pleased to announce that Marco Perugini and I have received the 2015 Best Paper Award from the Association of Research in Personality (ARP) for our paper:

Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609–612. doi:10.1016/j.jrp.2013.05.009
[Animation: the evolution of a correlation estimate with increasing sample size]
The TL;DR summary of the paper: as sample size increases, correlation estimates wiggle up and down, but the wiggling gets smaller and smaller. In typical situations, stable estimates can be expected once n approaches 250. See this blog post for some more information and a video (or: read the paper – it’s short).
Interestingly (and in contrast to all of my other papers …), the paper has not only been cited in psychology, but also in medical chemistry, geophysical research, atmospheric physics, chronobiology, building research, and, most importantly, in the Indian Journal of Plant Breeding. Amazing.
And the best thing is: the paper is open access, and all simulation code and data are openly available on the Open Science Framework. Use them and run your own simulations!
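The original simulations are in R on the OSF; here is a minimal Python sketch of the basic idea (with an arbitrarily assumed true correlation of ρ = .30), showing how the estimate wanders around for small n and settles down as n grows:

```python
# Minimal re-enactment of the "evolution of correlations" idea (not the paper's code)
import numpy as np

rng = np.random.default_rng(2013)
rho = 0.30                                   # assumed true correlation
x, y = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=1000).T

for n in (20, 50, 100, 250, 1000):           # correlation estimate after the first n observations
    r = np.corrcoef(x[:n], y[:n])[0, 1]
    print(f"n = {n:4d}: r = {r:+.3f}")
```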

Grades of evidence – A cheat sheet

There are at least three traditions in statistics that work with some kind of likelihood ratio (LR): the “Bayes factor camp”, the “AIC camp”, and the “likelihood camp”. In my experience, unfortunately, most people do not have an intuitive understanding of LRs. When I give talks about Bayes factors, the most predictable question is: “And how much is a BF of 3.4? Is that something I can put confidence in?”

Recently, I tried to approach the topic from an experiential perspective (“What does a Bayes factor feel like?“) by letting people draw balls from an urn and monitor the Bayes factor for an equal distribution of colors. I then realized that I had re-discovered an approach that Richard Royall took in his 1997 book “Statistical Evidence: A Likelihood Paradigm”: he also derived labels for likelihood ratios by looking at simple experiments, including ball draws.
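For readers who want to re-enact the urn exercise at their desk, here is a minimal sketch under assumptions of my own choosing: an urn with two colors, H0: both colors equally likely (θ = .5), H1: θ uniform on [0, 1]. BF01 is the Bayes factor in favor of the “equal distribution” hypothesis:

```python
import math
import random

def bf01(k, n):
    """Bayes factor for theta = 0.5 vs. theta ~ Uniform(0, 1), after k 'red' balls in n draws."""
    log_m0 = n * math.log(0.5)                                                 # likelihood under H0
    log_m1 = math.lgamma(k + 1) + math.lgamma(n - k + 1) - math.lgamma(n + 2)  # Beta(k+1, n-k+1) integral under H1
    return math.exp(log_m0 - log_m1)

random.seed(1)
reds = 0
for n in range(1, 101):                      # draw 100 balls from a fair urn
    reds += random.random() < 0.5            # 1 if this ball is 'red'
    if n % 20 == 0:
        print(f"after {n:3d} draws ({reds} red): BF01 = {bf01(reds, n):.2f}")
```

With a fair urn, BF01 creeps up only slowly; with a clearly biased urn it collapses towards zero – which is exactly the “feel” the exercise is meant to convey.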

But beyond this experiential access to LRs, all of the traditions mentioned above have proposed labels or “grades” of evidence in some form.

These are summarized in my cheat sheet below.

[Image: the grades-of-evidence cheat sheet]

(There’s also a PDF of the cheat sheet).
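To make the shortcut character of such labels concrete, here is how one commonly cited scheme – Jeffreys’ (1961) grades for a Bayes factor BF10 – could be written down as a trivial lookup. This is just an illustration, not an endorsement; the other camps on the cheat sheet use different cut-offs and wordings:

```python
def jeffreys_label(bf10: float) -> str:
    """Verbal grade for a Bayes factor BF10, following Jeffreys (1961)."""
    if bf10 < 1:
        return "evidence favors H0 – grade 1/BF10 instead"
    if bf10 < 3:
        return "not worth more than a bare mention"
    if bf10 < 10:
        return "substantial"
    if bf10 < 30:
        return "strong"
    if bf10 < 100:
        return "very strong"
    return "decisive"

print(jeffreys_label(3.4))   # "substantial" – the BF from the question in my talks
```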

There’s considerable consensus across the camps about what counts as “strong evidence” (but this is not necessarily an “independent replication” – maybe they just copied each other).

But there’s also the position that we do not need labels at all – the numbers simply speak for themselves! For an elaboration of that position, see Richard Morey’s blog post. Note that Kass & Raftery (1995) are often cited for the grades shown in the cheat sheet, but according to Richard Morey they rather belong to the “need no labels” camp (see here and here). On the other hand, EJ Wagenmakers points out that they use their guidelines themselves for interpretation and asks: “when you propose labels and use them, how are you in the no-labels camp?”. Well, decide for yourself (or ask Kass and Raftery personally) whether they belong in the “labels” or the “no-labels” camp.

Now that I have some experience with LRs, I am inclined to follow the “no labels needed” position. But whenever I explain Bayes factors to people who are unacquainted with them, I really long for a descriptive label. I think the labels are shortcuts that relieve you of the burden of explaining how to interpret and judge an LR (you can decide for yourself whether that is a good or a bad property of labels).

To summarize, as LRs are not self-explanatory to the typical audience, I think you either need a label (which is self-explanatory, but probably too simplified and not sufficiently context-dependent), or you should give an introduction on how to interpret and judge these numbers correctly.

Literature on grades of evidence:

“AIC camp”

Burnham, K. P., & Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach. Springer Science & Business Media.

Burnham, K. P., Anderson, D. R., & Huyvaert, K. P. (2011). AIC model selection and multimodel inference in behavioral ecology: some background, observations, and comparisons. Behavioral Ecology and Sociobiology, 65, 23–35. doi:10.1007/s00265-010-1029-6

Symonds, M. R. E., & Moussalli, A. (2011). A brief guide to model selection, multimodel inference and model averaging in behavioural ecology using Akaike’s information criterion. Behavioral Ecology and Sociobiology, 65, 13–21. doi:10.1007/s00265-010-1037-6

“Bayesian camp”

Good, I. J. (1985). Weight of evidence: A brief survey. In J. M. Bernardo, M. H. DeGroot, D. V. Lindley, & A. F. M. Smith (Eds.), Bayesian Statistics 2 (pp. 249–270). Elsevier.

Jeffreys, H. (1961). The theory of probability. Oxford University Press.

Lee, M. D., & Wagenmakers, E.-J. (2013). Bayesian cognitive modeling: A practical course. Cambridge University Press.

“Likelihood camp”

Royall, R. M. (1997). Statistical evidence: A likelihood paradigm. London: Chapman & Hall.

Royall, R. M. (2000). On the probability of observing misleading statistical evidence. Journal of the American Statistical Association, 95, 760–768. doi:10.2307/2669456

“We need no labels camp”

Kass, R. E., & Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association, 90, 773–795.

Morey, R. D. (2015). On verbal categories for the interpretation of Bayes factors (Blog post). http://bayesfactor.blogspot.de/2015/01/on-verbal-categories-for-interpretation.html

Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16, 225–237.
