Felix Schönbrodt

Dr. Dipl.-Psych.

A short taxonomy of Bayes factors


I recently started to familiarize myself with Bayesian statistics. In this post I’ll share some insights I had about Bayes factors (BFs).

What are Bayes factors?

Bayes factors provide a numerical value that quantifies how well a hypothesis predicts the empirical data relative to a competing hypothesis. For example, a BF of 4 indicates: “These empirical data are 4 times more probable if H₁ were true than if H₀ were true.” Hence, the evidence points towards H₁. A BF of 1 means that the data are equally likely under both hypotheses; in this case, it would be impossible to decide between them.

More formally, the BF can be written as:

BF_{10} = \frac{p(D|H_1)}{p(D|H_0)}

where D is the data.

(for more formal introductions to BFs, see Wikipedia, Bayesian-Inference, or the classic paper by Kass and Raftery, 1995).

Hence, the BF is a ratio of probabilities and is related to the larger class of likelihood-ratio tests.
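For two *point* hypotheses the BF reduces to a plain likelihood ratio. As a quick illustration (my own toy numbers, not from the post): suppose H₀ says a coin is fair (θ = 0.5) and H₁ says θ = 0.75, and we observe 8 heads in 10 flips:

```python
from scipy.stats import binom

# likelihood of the data (8 heads in 10 flips) under each point hypothesis
p_d_h1 = binom.pmf(8, 10, 0.75)  # p(D | H1: theta = 0.75)
p_d_h0 = binom.pmf(8, 10, 0.50)  # p(D | H0: theta = 0.50)

bf10 = p_d_h1 / p_d_h0
print(round(bf10, 2))  # -> 6.41: the data are ~6.4 times more probable under H1
```

For composite hypotheses (as in the default BFs below), the likelihood under H₁ is instead *averaged* over a prior distribution of effect sizes, which is what makes the computation harder.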

Although many authors agree about the many theoretical advantages of BFs, until recently it was complicated and unclear how to compute a BF even for the simplest standard designs (Rouder, Morey, Speckman, & Province, 2012). Fortunately, over the last few years default Bayes factors for several standard designs have been developed (Rouder et al., 2012; Rouder, Speckman, Sun, Morey, & Iverson, 2009; Morey & Rouder, 2011). For example, for a two-sample t test, a BF can be derived simply by plugging the t value and the sample sizes into a formula. The BF is easy to compute with the R package BayesFactor (Morey & Rouder, 2013), or with online calculators [1][2].
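To make the “plug the t value and sample size into a formula” step concrete, here is a sketch of the default one-sample JZS Bayes factor of Rouder et al. (2009), computed by numerical integration. This is my own transcription of their formula (scale r = 1), not code from the post:

```python
import numpy as np
from scipy.integrate import quad

def jzs_bf01(t, N, r=1.0):
    """One-sample JZS Bayes factor BF01 (Rouder et al., 2009)."""
    nu = N - 1
    # numerator: marginal likelihood under H0 (up to a constant that cancels)
    numerator = (1 + t**2 / nu) ** (-(nu + 1) / 2)
    # denominator: likelihood under H1, averaged over the Cauchy(0, r) prior
    # on effect size, written as a scale mixture over g
    def integrand(g):
        return ((1 + N * g) ** -0.5
                * (1 + t**2 / ((1 + N * g) * nu)) ** (-(nu + 1) / 2)
                * r / np.sqrt(2 * np.pi) * g ** -1.5 * np.exp(-r**2 / (2 * g)))
    denominator, _ = quad(integrand, 0, np.inf)
    return numerator / denominator

print(jzs_bf01(t=2.51, N=100))  # ~0.614, matching the website calculator below
```

Note that this returns a BF01 (evidence for H₀ over H₁), which matters for the “flavors” discussed next.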

Flavors of Bayes factors

When I started to familiarize myself with BFs, I was sometimes confused, as the same number seemed to mean different things in different publications. And indeed, four types of Bayes factors can be distinguished. “Under the hood”, all four types are identical, but you have to be aware of which type has been employed in a specific case.

The first distinction is whether the BF indicates “H_0 over H_1” (= BF_{01}) or “H_1 over H_0” (= BF_{10}). A BF_{01} of 2 means “The data are 2 times more likely under H_0 than under H_1”, while the same situation corresponds to a BF_{10} of 0.5 (i.e., the reciprocal, 1/2). Intuitively, I prefer larger values to be “better”, and as I usually would like to have evidence for H_1 instead of H_0, I prefer the BF_{10}.

The second distinction is whether one reports the raw BF or the natural logarithm of the BF. The logarithm has the advantage that the scale above 1 (evidence for H_1) is identical to the scale below 1 (evidence for H_0). In the previous example, a BF_{10} of 2 is equivalent to a BF_{01} of 0.5. Taking the log of both values leads to log(BF_{10}) = 0.69 and log(BF_{01}) = -0.69: same value, reversed sign. This makes the log(BF) ideal for visualization, as the scale is linear in both directions. The following graphic shows the relationship between the raw and the log BF:

Figure 1

As you can see in the table of Figure 1, different authors use different flavors. Notably, the website calculator gives a “raw−” BF, while the BayesFactor package gives a “log+” BF. (That caused me some headache. To be clear: the help page of ?ttest.tstat clearly states that it computes the “log(e) Bayes factor (against the point null)”. Once you understand the system, everything falls into place, but it took me some time to figure out how to understand and convert the different metrics.)
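These two identities (take the reciprocal to switch direction, flip the sign on the log scale) are easy to verify, e.g. in Python:

```python
import math

bf10 = 2.0
bf01 = 1 / bf10  # switching direction (H1 over H0 -> H0 over H1): reciprocal

print(round(math.log(bf10), 2))  # -> 0.69
print(round(math.log(bf01), 2))  # -> -0.69: same magnitude, reversed sign
```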

Figure 2 shows conversion paths of the different BF flavors:

Figure 2

For example, if you enter t = 2.51 and n = 100 into the website calculator, you get a JZS Bayes factor of 0.6140779. The same numbers in the BayesFactor package give a BF of 0.4876335:

> library(BayesFactor)
> ttest.tstat(t=2.51, n1=100, r=1)
$bf
[1] 0.4876335

# the package returns log(BF10); convert to the website's raw BF01:
> 1/exp(0.4876335)
[1] 0.6140779

As you can see: after applying the conversion, the values match exactly.

Related posts: Exploring the robustness of Bayes Factors: A convenient plotting function

References

Morey, R. D. & Rouder, J. N. (2011). Bayes factor approaches for testing interval null hypotheses. Psychological Methods, 16(4), 406–419. PMID: 21787084. doi:10.1037/a0024377

Morey, R. D. & Rouder, J. N. (2013). BayesFactor: Computation of Bayes factors for common designs. R package version 0.9.4. Retrieved from http://CRAN.R-project.org/package=BayesFactor

Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors for ANOVA designs. Journal of Mathematical Psychology, 56(5), 356–374. doi:10.1016/j.jmp.2012.08.001

Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16(2), 225–237.

