
January 17, 2017

*by Angelika Stefan & Felix Schönbrodt*

This is the second part of “Two meanings of priors”. The first part explained the first meaning: priors as subjective probabilities of models. While that meaning of priors refers to a global appraisal of existing hypotheses, the second meaning refers to specific assumptions that are needed in the process of building a hypothesis. The two kinds of priors have in common that both are specified before concrete data are available. However, as will hopefully become evident from this blog post, they differ substantially from each other and should be clearly distinguished during data analysis.

In order to know how well evidence supports one hypothesis compared to another, one must know the concrete specifications of each hypothesis. For example, in the tea tasting experiment, each hypothesis was characterized by a specific probability (e.g., the success rate of exactly 0.5 in H_{Fisher} of the previous blog post). What might sound trivial at first – deciding on the concrete specifications of a hypothesis – is in fact one of the major challenges when doing Bayesian statistics. Scientific theories are often imprecise, so there is usually more than one plausible way to derive a hypothesis. When deciding on one specific hypothesis, new auxiliary assumptions often have to be made. **These assumptions, which are needed in order to specify a hypothesis adequately, are called “priors” as well.** They influence the formulation and interpretation of the likelihood (which gives you the plausibility of the data under a specific hypothesis). We will illustrate this with an example.

A food company conducts market research in a large German city. They know from a recent representative survey by the German Federal Statistical Office that Germans spend on average 4.50 € on their lunch (standard deviation: 0.60 €). Now they want to know whether the inhabitants of this specific city spend more money on their lunch than the German average. They expect lunch expenses to be especially high in this city because of the generally high cost of living. In a traditional inferential testing procedure, the food company would formulate two hypotheses to test their assumption, a null and an alternative hypothesis: H_{0}: µ ≤ 4.50 and H_{1}: µ > 4.50.

In Bayesian hypothesis testing, the formulation of the hypotheses has to be more precise than this. We need precise hypotheses as a basis for the likelihood functions, which assign probability values to possible states of reality. The traditional formulation, µ > 4.50, is too vague for that purpose: Is any lunch cost above 4.50 € a priori equally likely? Is it plausible that a lunch costs 1,000,000 € on average? Probably not. Not every state of reality is, a priori, equally plausible. “Models connect theory to data“ (Rouder, Morey, & Wagenmakers, 2016b), and a model that predicts everything predicts nothing.

As Bayesian statisticians, we therefore must ask ourselves: Which values are more plausible given that our hypothesis is true? Of course, our knowledge on this point differs from case to case. Sometimes we may be able to commit to a very small range of plausible values or even to a single value (in this case, we would call the respective hypothesis a “point hypothesis”). Theories in physics sometimes predict a single state of reality: “If this theory is true, then the mass of a Higgs boson is exactly 1.52324E-16 gr”.

More often, however, our knowledge about plausible values under a certain theory might be less precise, leading to a wider range of plausible values. Hence, the prior in the second sense defines the probability of a parameter value given a hypothesis, *p*(θ | H1).

Let us come back to the food company example. Their null hypothesis might be that there is no difference between the city that is the focus of their research and the German average. Hence, the null hypothesis predicts an average lunch cost of 4.50 €. With the alternative hypothesis, it becomes slightly more complex. The company assumes that average lunch expenses in the city are higher than the German average, so the most plausible value under the alternative hypothesis should be higher than 4.5. However, they may deem it very improbable that the mean lunch expenses are more than two standard deviations above the German average (so, for example, it should be very improbable that someone spends more than, say, 10 € for lunch even in the expensive city). With this knowledge, they can put most plausibility on values in the range from 4.5 to 5.7 (4.5 + 2 standard deviations). They could further specify their hypothesis by claiming that the most plausible value should be 5.1, i.e., one standard deviation above the German average. The elements of this verbal description of the alternative hypothesis can be summarized in a truncated normal distribution that is centered over 5.1 and truncated at 4.5 (as the directional hypothesis does not predict values in the opposite direction).

With this model specification, the researchers would place 13% of the probability mass on values larger than 2SD of the general population (i.e., > 5.7).

Making it even more complex, they could quantify their uncertainty about the most plausible value (i.e., the maximum of the density distribution) by assigning another distribution to it. For example, they could build a normal distribution around it, with a mean of 5.1 and a standard deviation of 0.3. This would imply that in their opinion, 5.1 is the “most plausible most plausible value” but that values between 4.8 and 5.4 are also potential candidates for the most plausible value.
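As a rough sketch (not part of the original example), this extra layer of uncertainty can be expressed by drawing the mode of the H1 prior from the Normal(5.1, 0.3) distribution and averaging the resulting truncated-normal priors; the model SD of 0.5 is the one used in the plotting code at the end of this post, and the helper function `dtrunc` is our own base-R stand-in for a truncated normal density:

```r
# Sketch: quantifying uncertainty about the most plausible value of the H1 prior.
# Assumption: the mode of the truncated normal is itself Normal(5.1, 0.3);
# the model SD of 0.5 matches the H1 specification used in the plot code below.

# Truncated normal density (base R, lower truncation at a)
dtrunc <- function(x, a, mean, sd) {
  ifelse(x < a, 0, dnorm(x, mean, sd) / (1 - pnorm(a, mean, sd)))
}

set.seed(42)
modes <- rnorm(2000, mean = 5.1, sd = 0.3)  # hyperprior on the mode
range <- seq(4.5, 7, by = .01)

# Average the H1 priors implied by each sampled mode ("marginal" prior)
marginal <- rowMeans(sapply(modes, function(m) dtrunc(range, 4.5, m, 0.5)))

plot(range, marginal, type = "l",
     xlab = "Lunch cost in €", ylab = "Plausibility")
```

Compared to a fixed prior centered on 5.1, this averaged prior is somewhat flatter, reflecting the added uncertainty about where the most plausible value lies.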

What you can notice in this example of hypothesis development is that the market researchers have to make auxiliary assumptions on top of their original hypothesis (which was H1: µ > 4.5). If possible, these prior plausibilities should be informed by theory or by previous empirical data. Specifying the alternative hypothesis in this way may seem an unnecessary burden compared to traditional hypothesis testing, where these extra assumptions seemingly are not necessary. Except that they are necessary. Without going into detail in this blog post, we recommend reading Rouder et al.’s (2016a) “Is there a free lunch in inference?“, with the bottom line that principled and rational inference needs specified alternative hypotheses. (For example, in Neyman–Pearson testing, you also need to specify a precise alternative hypothesis that refers to the “smallest effect size of interest”.)

Furthermore, readers might object: “Researchers rarely have enough background knowledge to specify models that predict data“. Rouder et al. (2016b) argue that this critique is overstated, as (1) with proper elicitation, researchers often know much more than they initially think, (2) default models can be a starting point if really no information is available, and (3) several models can be explored without penalty.

A question that may come to your mind soon after you have understood the difference between the two kinds of priors is: If both are called “priors”, do they depend on each other in some way? Does the formulation of your personal prior plausibility of a hypothesis (like the skeptical observer’s prior on Mrs. Bristol’s tea tasting abilities) influence the specification of your model (like the hypothesis specification in the second example), or vice versa?

The straightforward answer to this question is: no, they don’t. This can easily be illustrated with a case where the prior conviction of a researcher runs against the hypothesis he or she wants to test. The food company in the second example has carefully specified the two hypotheses (H0 and H1) that they want to pit against each other. They are probably quite convinced that the specification of the alternative hypothesis describes reality better than the specification of the null hypothesis. In a simplified form, their prior odds (i.e., priors in the first sense) can be described as a ratio like 10:1. This would mean that they deem the alternative hypothesis ten times as likely as the null hypothesis. However, another food company may have prior odds of 3:5 while conducting the same test (i.e., using the same prior plausibilities of model parameters). This shows that priors in the first sense are independent of priors in the second sense: priors in the first sense change with different personal convictions, while priors in the second sense remain constant. Similarly, prior beliefs change after seeing the data, while the formulation of the model (i.e., what a theory predicts) stays the same (as long as the theory from which the model specification is derived does not change; in an estimation context, the model parameters are updated by the data).

The term “prior” has two meanings in the context of Bayesian hypothesis testing. The first one, usually applied in Bayes factor analysis, is equivalent to a prior subjective probability of a hypothesis (“how plausible do you deem a hypothesis compared to another hypothesis before seeing the data?”). The second meaning refers to the assumptions made in specifying the models of the hypotheses, which are needed to derive the likelihood function. These two meanings of the term “prior” have to be clearly distinguished during data analysis, especially as they do not depend on each other in any way. Some researchers (e.g., Dienes, 2016) therefore suggest calling only priors in the first sense “priors” and speaking of the “specification of the model” when referring to the second meaning.

Read the first part of this blog post: Priors as the plausibility of models

Dienes, Z. (2011). Bayesian versus orthodox statistics: Which side are you on? *Perspectives on Psychological Science*, *6*(3), 274–290. https://doi.org/10.1177/1745691611406920

Dienes, Z. (2016). How Bayes factors change scientific practice. *Journal of Mathematical Psychology*, *72*, 78–89. https://doi.org/10.1016/j.jmp.2015.10.003

Lindley, D. V. (1993). The analysis of experimental data: The appreciation of tea and wine. *Teaching Statistics*, *15*(1), 22–25. https://doi.org/10.1111/j.1467-9639.1993.tb00252.x

Rouder, J. N., Morey, R. D., Verhagen, J., Province, J. M., & Wagenmakers, E.-J. (2016a). Is there a free lunch in inference? *Topics in Cognitive Science*, *8*, 520–547. https://doi.org/10.1111/tops.12214

Rouder, J. N., Morey, R. D., & Wagenmakers, E.-J. (2016b). The interplay between subjectivity, statistical practice, and psychological science. *Collabra*, *2*(1), 6–12. https://doi.org/10.1525/collabra.28

```r
library(truncnorm)

# Parameters for the H1 model
M  <- 5.1
SD <- 0.5

range <- seq(4.5, 7, by = .01)
plausibility <- dtruncnorm(range, a = 4.5, b = Inf, mean = M, sd = SD)

plot(range, plausibility, type = "l", xlim = c(4, 7), axes = FALSE,
     ylab = "Plausibility", xlab = "Lunch cost in €", mgp = c(2.5, 1, 0))
axis(1)

# Get the axis ranges; draw a y-axis with an arrowhead
u <- par("usr")
points(u[1], u[4], pch = 17, xpd = TRUE)
lines(c(u[1], u[1]), c(u[3], u[4]), xpd = TRUE)
abline(v = 4.5, lty = "dotted")

# What is the probability of values > 5.7?
1 - ptruncnorm(5.7, a = 4.5, b = Inf, mean = M, sd = SD)  # ≈ 0.13
```



January 10, 2017

*by Angelika Stefan & Felix Schönbrodt*

When reading about Bayesian statistics, you regularly come across terms like “objective priors“, “prior odds”, “prior distribution”, and “normal prior”. However, it may not be intuitively clear that the meaning of “prior” differs between these terms. In fact, there are two meanings of “prior” in the context of Bayesian statistics: (a) prior plausibilities of models, and (b) the quantification of uncertainty about model parameters. As this often leads to confusion for novices in Bayesian statistics, we want to explain these two meanings of priors in the next two blog posts*. The current blog post covers the first meaning of priors (link to part II).

In this context, the term “prior” incorporates the personal assumptions of a researcher on the probability of a hypothesis (p(H1)) relative to a competing hypothesis, which has the probability p(H2). Hence, **the meaning of this prior is “how plausible do you deem a model relative to another model before looking at your data”**. The ratio of the two priors of the models, that is “how probable do you consider H1 compared to H2”, is called “prior odds”: p(H1) / p(H2).

The first meaning of priors is used in the context of Bayes factor analysis, where you compare two different hypotheses. In Bayes factor analysis, the prior odds are updated by the likelihood ratio of the two hypotheses, which contains the information from the data, resulting in the posterior odds (“what you believe after looking at your data”):
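Written out, this updating rule is Bayes’ theorem in odds form (D denotes the data):

$$
\underbrace{\frac{p(H_1 \mid D)}{p(H_2 \mid D)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{p(D \mid H_1)}{p(D \mid H_2)}}_{\text{likelihood ratio}}
\;\times\;
\underbrace{\frac{p(H_1)}{p(H_2)}}_{\text{prior odds}}
$$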

The prior belief is called “subjective”, but this label does not imply that it is “arbitrary”, “unprincipled”, or “irrational”. In contrast, the prior belief can (and preferably should) be informed by previous data or experiences. For example, it can be a belief that started with an equipoise (50/50) position, but has been repeatedly updated by data. But within the bounds of rationality and consistency, people still can come to considerably different prior beliefs, and might have good arguments for their choice – that’s why it is called “subjective”. But initially differing degrees of belief will converge as more and more evidence comes in. We will observe this in the following example.

The **classical experiment of tea tasting** has already been described in the context of Bayesian hypothesis testing by Lindley (1993). We will present a simpler form here. Dr. Muriel Bristol, a scientist working in the field of algal biology who was acquainted with the statistician R. A. Fisher, claimed that she could discriminate whether milk is put in a cup before or after the tea infusion during the process of preparing tea with milk. However, Mr. Fisher considered this very unlikely.

So they decided to run an experiment: Muriel Bristol tasted several cups of tea in a row, making a guess about the preparation procedure for each cup. Unlike in the original story, where classical inferential statistics were consulted to solve the disagreement, we will employ Bayesian statistics to track how prior convictions change in this example. If Muriel Bristol makes her guesses only by chance, as Mr. Fisher supposes, she has a probability of success of 50% in each trial. Before observing her performance, Mr. Fisher should therefore expect her to be right about the procedure in about 50% of the trials. We can capture this in a point hypothesis: H_{Fisher}: success rate (SR) = 0.5. Muriel Bristol, on the other hand, is very confident in her divination skills. She claims to get 80% of trials correct, which can equally be captured in a point hypothesis: H_{Muriel}: SR = 0.8.

To introduce prior beliefs about hypotheses and show how they change with incoming evidence, we want to introduce two additional persons. The first one is a slightly skeptical observer who tends to favor H_{Fisher} but does not completely rule out that Mrs. Bristol could be right with her hypothesis. More formally, we could describe this position as P(H_{Fisher}) = 0.6 and P(H_{Muriel}) = 0.4. This means that his prior odds are P(H_{Fisher})/P(H_{Muriel}) = 3:2; Fisher’s hypothesis is 1.5 times as likely to him as Muriel Bristol’s hypothesis.

The second additional person we would like to introduce is William, Muriel Bristol’s loving husband, who fervently advocates her position. He knows his wife would never (or at least very rarely) make wrong claims, concerning tea preparation and all other issues of their marriage. He therefore assigns a much higher subjective probability to her hypothesis (P(H_{Muriel}) = 0.9) than to Mr. Fisher’s (P(H_{Fisher}) = 0.1). His prior odds are therefore P(H_{Fisher})/P(H_{Muriel}) = 1:9. Please note that the *content* of the hypotheses (the proposed success rates 0.5 and 0.8, which are the parameters of the models) is logically independent of the *probability of the hypotheses (the priors)* that our two observers assign to them.

During the process of hypothesis testing, these two priors are updated with the existing evidence. It is reported that Muriel Bristol’s performance at the experiment was extraordinarily good. We therefore assume that out of the first 6 trials of the experiment she got 5 correct.

With this information, we can now compute the likelihood of the data under each of the hypotheses (for more information on the computation of likelihoods, see Alexander Etz’s blog).

The computation of the likelihoods does not involve the prior model probabilities of our observers. What can be seen is that the data are more likely under Muriel Bristol’s hypothesis than under Mr. Fisher’s. This should not come as a surprise: Muriel Bristol claimed that she could make a very high percentage of correct guesses and the data show a very high percentage of correct guesses, whereas Mr. Fisher assumed a much lower percentage. To emphasize this difference in likelihoods and to assign it a numerical value, we can compute the likelihood ratio (Bayes factor):
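In R, the binomial likelihoods of Muriel Bristol’s performance (5 correct guesses in 6 trials) under the two point hypotheses, and their ratio, can be computed directly:

```r
# Likelihood of the data (5 of 6 correct) under each point hypothesis
L_fisher <- dbinom(5, size = 6, prob = 0.5)  # = 0.09375
L_muriel <- dbinom(5, size = 6, prob = 0.8)  # ≈ 0.3932

# Likelihood ratio (Bayes factor) in favor of Muriel Bristol's hypothesis
L_muriel / L_fisher  # ≈ 4.19
```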

This ratio means that the data are 4.19 times as likely under Mrs. Bristol’s hypothesis as under Mr. Fisher’s hypothesis. It does not matter how you order the likelihoods in the fraction, the meaning remains constant (see this blog post).

How does this likelihood ratio change the prior odds of both our slightly skeptical observer and William Bristol? Bayes’ theorem shows that prior odds are updated by multiplying them with the likelihood ratio (Bayes factor).

First, we will focus on the posterior odds of the slightly skeptical observer. Remember that the slightly skeptical observer had assigned a probability of 0.6 to Mr. Fisher’s hypothesis and a probability of 0.4 to Muriel Bristol’s hypothesis *before* seeing the data, which resulted in prior odds of 3:2 for Mr. Fisher’s hypothesis. How do these convictions change now that the experiment has been conducted? To examine this, we simply have to insert all known values into the equation:
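With prior odds of 3:2 and the likelihood ratio of 4.19 reported above (i.e., 1/4.19 in Mr. Fisher’s direction), the update reads:

$$
\frac{p(H_{\mathrm{Fisher}} \mid D)}{p(H_{\mathrm{Muriel}} \mid D)}
\;=\; \frac{1}{4.19} \times \frac{3}{2}
\;\approx\; 0.36
\;\approx\; \frac{1}{2.8}
$$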

This shows that the prior odds of the slightly skeptical observer changed from 3:2 to posterior odds of 1:2.8. This means that whereas before the experiment the slightly skeptical observer had deemed Mr. Fisher’s hypothesis more plausible than Mrs. Bristol’s hypothesis, he changed his opinion after seeing the data, now preferring Mrs. Bristol’s hypothesis over Mr. Fisher’s.

The same equation can be applied to William Bristol’s prior odds:
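With his prior odds of 1:9 and the same likelihood ratio of 4.19:

$$
\frac{p(H_{\mathrm{Fisher}} \mid D)}{p(H_{\mathrm{Muriel}} \mid D)}
\;=\; \frac{1}{4.19} \times \frac{1}{9}
\;\approx\; 0.027
\;\approx\; \frac{1}{37.7}
$$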

What we can notice is that, after taking the data into consideration, both observers agree more with Muriel Bristol’s hypothesis and have reduced confidence in Mr. Fisher’s. Whereas the convictions of the slightly skeptical observer were changed in favor of Muriel Bristol’s hypothesis by the experiment, William Bristol’s prior convictions were strengthened.

Something else you can notice is that, compared to William Bristol, the slightly skeptical observer still assigns a higher plausibility to Mr. Fisher’s hypothesis. This rank order between the two observers will remain no matter what the data look like. Even if Muriel Bristol made, say, 100 out of 100 correct guesses, the slightly skeptical observer would trust her hypothesis less than her husband does. However, with increasing evidence, the absolute difference between the two observers’ beliefs will shrink more and more.

This blog post explained the first meaning of “prior” in the context of Bayesian statistics. It can be defined as the subjective plausibility a researcher assigns to a hypothesis compared to another hypothesis before seeing the data. As illustrated in the tea-tasting example, these prior beliefs are updated with upcoming evidence in the research process. In the next blog post, we will explain a second meaning of “priors”: The quantification of uncertainty about model parameters.

Continue reading part II: Quantifying uncertainty about model parameters

*We want to thank Eric-Jan Wagenmakers for helpful comments on a previous version of the post.*

*As a note: Both meanings can in fact be unified, but for didactic purposes we think it makes sense to keep them distinct as a start.

Dienes, Z. (2011). Bayesian versus orthodox statistics: Which side are you on? *Perspectives on Psychological Science*, *6*(3), 274–290. https://doi.org/10.1177/1745691611406920

Dienes, Z. (2016). How Bayes factors change scientific practice. *Journal of Mathematical Psychology*, *72*, 78–89. https://doi.org/10.1016/j.jmp.2015.10.003

Lindley, D. V. (1993). The analysis of experimental data: The appreciation of tea and wine. *Teaching Statistics*, *15*(1), 22–25. https://doi.org/10.1111/j.1467-9639.1993.tb00252.x

Rouder, J. N., Morey, R. D., Verhagen, J., Province, J. M., & Wagenmakers, E.-J. (2016a). Is there a free lunch in inference? *Topics in Cognitive Science*, *8*, 520–547. https://doi.org/10.1111/tops.12214

Rouder, J. N., Morey, R. D., & Wagenmakers, E.-J. (2016b). The interplay between subjectivity, statistical practice, and psychological science. *Collabra*, *2*(1), 6–12. https://doi.org/10.1525/collabra.28

April 17, 2015

There are at least three traditions in statistics that work with some kind of likelihood ratio (LR): the “Bayes factor camp”, the “AIC camp”, and the “likelihood camp”. In my experience, unfortunately, most people do *not* have an intuitive understanding of LRs. When I give talks about Bayes factors, the most predictable question is: “And how much is a BF of 3.4? Is that something I can put confidence in?”.

Recently, I tried to approach the topic from an experiential perspective (“What does a Bayes factor feel like?“) by letting people draw balls from an urn and monitor the Bayes factor for an equal distribution of colors. Then I realized that I had re-discovered an approach that Richard Royall took in his 1997 book “Statistical Evidence: A Likelihood Paradigm”: he also derived labels for likelihood ratios by looking at simple experiments, including ball draws.

But beyond this approach of getting experiential access to LRs, all traditions mentioned above have proposed in some way **labels or “grades” of evidence**.

These are summarized in my cheat sheet below.

(There’s also a PDF of the cheat sheet).

There’s considerable consensus about what counts as “strong evidence” (but this is not necessarily an independent replication – maybe they just copied each other).

But there’s also the position that we **do not need labels at all** – the numbers simply speak for themselves! For an elaboration of that position, see Richard Morey’s blog post. Note that Kass & Raftery (1995) are often cited for their grades in the cheat sheet but, according to Richard Morey, rather belong to the “need no labels” camp (see here and here). On the other hand, EJ Wagenmakers mentions that they use their guidelines themselves for interpretation and asks: “when you propose labels and use them, how are you in the no-labels camp?”. Well, decide yourself (or ask Kass and Raftery personally) whether they belong in the “labels” or the “no-labels” camp.

Now that I have some experience with LRs, I am inclined to follow the “no labels needed” position. But whenever I *explain* Bayes factors to people who are unacquainted with them, I really long for a descriptive label. I think the labels are short-cuts which relieve you of the burden of explaining how to interpret and judge an LR (you can decide yourself whether that is a good or a bad property of labels).

To summarize, as LRs are not self-explanatory to the typical audience, I think you either need a label (which is self-explanatory, but probably too simplified and not sufficiently context-dependent), or you should give an introduction on how to interpret and judge these numbers correctly.

Burnham, K. P., & Anderson, D. R. (2002). *Model selection and multimodel inference: A practical information-theoretic approach*. Springer Science & Business Media.

Burnham, K. P., Anderson, D. R., & Huyvaert, K. P. (2011). AIC model selection and multimodel inference in behavioral ecology: Some background, observations, and comparisons. *Behavioral Ecology and Sociobiology*, *65*, 23–35. https://doi.org/10.1007/s00265-010-1029-6

Good, I. J. (1985). Weight of evidence: A brief survey. In J. M. Bernardo, M. H. DeGroot, D. V. Lindley, & A. F. M. Smith (Eds.), *Bayesian Statistics 2* (pp. 249–270). Elsevier.

Jeffreys, H. (1961). *The theory of probability*. Oxford University Press.

Kass, R. E., & Raftery, A. E. (1995). Bayes factors. *Journal of the American Statistical Association*, *90*, 773–795.

Lee, M. D., & Wagenmakers, E.-J. (2013). *Bayesian cognitive modeling: A practical course*. Cambridge University Press.

Morey, R. D. (2015). On verbal categories for the interpretation of Bayes factors (blog post). http://bayesfactor.blogspot.de/2015/01/on-verbal-categories-for-interpretation.html

Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. *Psychonomic Bulletin & Review*, *16*, 225–237.

Royall, R. M. (1997). *Statistical evidence: A likelihood paradigm*. London: Chapman & Hall.

Royall, R. M. (2000). On the probability of observing misleading statistical evidence. *Journal of the American Statistical Association*, *95*, 760–768. https://doi.org/10.2307/2669456

Symonds, M. R. E., & Moussalli, A. (2011). A brief guide to model selection, multimodel inference and model averaging in behavioural ecology using Akaike’s information criterion. *Behavioral Ecology and Sociobiology*, *65*, 13–21. https://doi.org/10.1007/s00265-010-1037-6
