Felix Schönbrodt

PD Dr. Dipl.-Psych.

Statistics

What does a Bayes factor feel like?

A Bayes factor (BF) is a statistical index that quantifies the evidence for a hypothesis, compared to an alternative hypothesis (for introductions to Bayes factors, see here, here or here).

Although the BF is a continuous measure of evidence, humans love verbal labels, categories, and benchmarks. Labels give interpretations of the objective index – and that is both the good and the bad thing about labels. The good thing is that these labels can facilitate communication (but see @richardmorey), and people just crave verbal interpretations to guide their understanding of those “boring” raw numbers.


The bad thing about labels is that an interpretation should always be context dependent (just as “30 min.” can be a long time (a train delay) or a short time (a concert), as @CaAl said). But once a categorical system has been established, it is no longer context dependent.

 

These labels can also be a dangerous tool, as they implicitly introduce cutoff values (“Hey, the BF jumped over the boundary of 3. It’s not anecdotal any more, it’s moderate evidence!”). But we do not want another sacred .05 criterion (see also Andrew Gelman’s blog post and its critical comments). The strength of the BF is precisely its non-binary nature.

Several labels for paraphrasing the size of a BF have been suggested. The most common system seems to be the suggestion of Harold Jeffreys (1961):

Table 1: Bayes factor categories (following Jeffreys, 1961).

Bayes factor BF_{10}     Label
> 100                    Extreme evidence for H1
30 – 100                 Very strong evidence for H1
10 – 30                  Strong evidence for H1
3 – 10                   Moderate evidence for H1
1 – 3                    Anecdotal evidence for H1
1                        No evidence
1/3 – 1                  Anecdotal evidence for H0
1/10 – 1/3               Moderate evidence for H0
1/30 – 1/10              Strong evidence for H0
1/100 – 1/30             Very strong evidence for H0
< 1/100                  Extreme evidence for H0

 

Note: The original label for 3 < BF < 10 was “substantial evidence”. Lee and Wagenmakers (2013) changed it to “moderate”, as “substantial” already sounds too decisive. “Anecdotal” was formerly labeled “barely worth mentioning”.

Kass and Raftery (1995) suggested a comparable classification, except that their “strong evidence” category starts at BF > 20 (see also the Wikipedia entry).

Getting a feeling for Bayes factors

How much is a BF_{10} of 3.7? It indicates that the data occurred 3.7 times more likely under H_1 than under H_0, given the priors assumed in the model. Is that a lot of evidence for H_1? Or not?
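For orientation: under equal prior odds, a BF_{10} can be converted into a posterior probability of H_1 via BF / (BF + 1). A one-line check in R:

BF10 <- 3.7
BF10 / (BF10 + 1)  # posterior p(H1 | data) = .79, assuming prior odds of 1:1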

Following Table 1, it can be labeled “moderate evidence” for an effect – whatever that means.

Some have argued that strong evidence, such as a BF > 10, is quite evident from eyeballing alone:

“If your result needs a statistician then you should design a better experiment.” (attributed to Ernest Rutherford)

Is that really the case? Can we just “see” it when there is an effect?

Let’s approach the topic a bit more experientially. What does such a BF look like, visually? We take the good old urn model as a first example.

Visualizing Bayes factors for proportions

Imagine the following scenario: When I give a present to my two boys (4 and 6 years old), it is not so important what it is. The most important thing is: “Is it fair?”. (And my boys are very sensitive detectors of unfairness).

Imagine you have bags with red and blue marbles. Obviously, the blue marbles are much better, so it is key to make sure that in each bag there is an equal number of red and blue marbles. Hence, for our familial harmony I should check whether reds and blues are distributed evenly or not. In statistical terms: H_0: p = 0.5, H_1: p != 0.5.

When drawing samples from the bags, the strongest evidence for an even distribution (H_0) is obtained when exactly the same number of red and blue marbles has been drawn. How much evidence for H_0 is it when I draw n=2, 1 red/1 blue? The answer is in Figure 1, upper table, first row: the BF_{10} is 0.86, which corresponds to a BF_{01} of 1.16 in favor of H_0 – i.e., anecdotal evidence for an equal distribution.

You can get these values easily with the famous BayesFactor package for R:

library(BayesFactor)
proportionBF(y=1, N=2, p=0.5)  # BF_10 = 0.86

 

What if I had drawn two reds instead? Then the BF would be 1.14 in favor of H_1 (see Figure 1, lower table, row 1).

proportionBF(y=2, N=2, p=0.5)  # BF_10 = 1.14

Obviously, with small sample sizes it is not possible to generate strong evidence, neither for H_0 nor for H_1. You need a minimal sample size to leave the region of “anecdotal evidence”. Figure 1 shows some examples of how the BF gets more extreme with increasing sample size.

[Figure 1: Marble distributions and the corresponding Bayes factors.]

 

These visualizations indeed seem to indicate that for simple designs such as the urn model you do not really need a statistical test if your BF is > 10. You can just see it from looking at the data (although the “obviousness” is more pronounced for large BFs in small sample sizes).

Maximal and minimal Bayes factors for a certain sample size

The dotted lines in Figure 2 show the maximal and the minimal BF that can be obtained for a given number of drawn marbles. The minimal BF is obtained when the sample is maximally consistent with H_0 (i.e., when exactly the same number of red and blue marbles has been drawn); the maximal BF is obtained when only marbles of one color are drawn.


Figure 2: Maximal and minimal BF for a certain sample size.

 

Figure 2 highlights two features:

  • If you have few data points, you cannot obtain strong evidence, either for H_1 or for H_0.
  • It is much easier to get strong evidence for H_1 than for H_0. This property depends somewhat on the choice of the prior distribution of effect sizes under H_1: if you expect very strong effects under H_1, it is easier to get evidence for H_0. But still, with every reasonable prior distribution it is easier to gather evidence for H_1.
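To reproduce the two extreme cases yourself, here is a minimal sketch using proportionBF (the exact numbers depend on the package’s default prior scale, so treat them as illustrative):

library(BayesFactor)

# minimal BF_10: perfectly balanced samples (evidence for H0 accrues slowly)
for (n in c(2, 10, 30, 100)) {
  bf <- extractBF(proportionBF(y = n/2, N = n, p = 0.5))$bf
  cat("balanced, n =", n, ": BF_10 =", round(bf, 2), "\n")
}

# maximal BF_10: samples of only one color (evidence for H1 accrues much faster)
for (n in c(2, 10, 30, 100)) {
  bf <- extractBF(proportionBF(y = n, N = n, p = 0.5))$bf
  cat("one color, n =", n, ": BF_10 =", round(bf, 2), "\n")
}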

 

Get a feeling yourself!

Here’s a Shiny widget that lets you draw marbles from the urn. Monitor how the BF evolves as you sequentially add marbles to your sample!

 

[Open app in separate window]

Teaching sequential sampling and Bayes factors


When I teach sequential sampling and Bayes factors, I bring an actual bag with marbles (or candies of two colors).

In my typical setup I ask some volunteers to test whether the same amount of both colors is in the bag. (The bag of course has a cover so that they cannot see the marbles.) They may sample as many marbles as they want, but each marble costs them 10 cents (i.e., an efficiency criterion: sample as many as necessary, but not too many!). They should think aloud about when they have a first hunch, and when they are relatively sure about the presence or absence of an effect. I use a color mixture of 2:1 – in my experience this gives a good chance to detect the difference, but it’s not too obvious (some teams stop sampling and conclude “no difference”).

This exercise typically reveals the following insights (hopefully!):

  • By intuition, humans sample sequentially. When the evidence is not strong enough, they sample more data until they are sure enough about the (un)fairness of the distribution.
  • Intuitively, nobody runs a fixed-n design with an a-priori power analysis.
  • Often, they stop quite soon, in the range of “anecdotal evidence”. This matches my own impression: BFs that are still in the “anecdotal” range already look quite conclusive for everyday hypothesis testing (e.g., a 2 vs. 9 distribution; BF_{10} = 2.7). This might change, however, if a wrong decision is associated with higher costs in the scenario. Next time, I will try a scenario of prescription drugs with potentially severe side effects.
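That 2-vs-9 split can be checked directly; assuming the default prior of proportionBF, this call should give (approximately) the BF_{10} of 2.7 mentioned above:

proportionBF(y=2, N=11, p=0.5)  # should be roughly BF_10 = 2.7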

 

The “interocular traumatic test”

The analysis so far seems to support the “interocular traumatic test”: “when the data are so compelling that the conclusion hits you straight between the eyes” (attributed to Joseph Berkson; quoted from Wagenmakers, Verhagen, & Ly, 2014).

But the authors go on and quote Edwards et al. (1963, p. 217), who said: “…the enthusiast’s interocular trauma may be the skeptic’s random error. A little arithmetic to verify the extent of the trauma can yield great peace of mind for little cost.”.

In the next visualization we will see that large Bayes factors are not always obvious.

Visualizing Bayes factors for group differences

What happens if we switch to group differences? European women have an average self-reported height of 165.8 cm, European men of 177.9 cm – a difference of 12.1 cm, with a pooled standard deviation of around 7 cm (source: European Community Household Panel; see Garcia & Quintana-Domeque, 2007; based on ~50,000 participants born between 1970 and 1980). This translates to a Cohen’s d of 1.72.
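The arithmetic behind that effect size is simply the mean difference divided by the pooled standard deviation:

(177.9 - 165.8) / 7  # = 1.73 with the rounded SD of 7; the exact pooled SD gives the reported d = 1.72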

Unfortunately, this source only contains self-reported heights, which can be subject to biases (males over-report their height on average). But it was the only source I found that also contains the standard deviations within sex. However, Meyer et al. (2001) report a similar effect size of d = 1.8 for objectively measured heights.

 

Now look at this plot. Would you say the blue lines are obviously higher than the red ones?

[Figure: individual heights, males in blue, females in red (unsorted)]

I couldn’t say for sure. But the BF_{10} is 14.54 – “strong” evidence!

If we sort the lines by height the effect is more visible:

[Figure: the same heights, sorted by size]

… and alternatively, we can plot the distributions of males’ and females’ heights:

[Figure: distributions of male and female heights]
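For completeness: the two-sample BFs in this section can be computed with ttestBF from the BayesFactor package. A generic sketch with hypothetical group sizes (n = 10 per group and the seed are arbitrary choices of mine, so this will not reproduce the BF of 14.54 from the plots above):

library(BayesFactor)
set.seed(42)
male   <- rnorm(10, mean = 177.9, sd = 7)  # simulated heights, true d = 1.72
female <- rnorm(10, mean = 165.8, sd = 7)
ttestBF(male, female)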

 

 

Again, you can play around with the interactive app:

[Open app in separate window]

 

Can we get a feeling for Bayes factors?

To summarize: Whether a strong evidence “hits you between the eyes” depends on many things – the kind of test, the kind of visualization, the sample size. Sometimes a BF of 2.5 seems obvious, and sometimes it is hard to spot a BF>100 by eyeballing only. Overall, I’m glad that we have a numeric measure of strength of evidence and do not have to rely on eyeballing only.

Try it yourself – draw some marbles in the interactive app, or change the height difference between males and females, and calibrate your personal gut feeling with the resulting Bayes factor!


In the era of #repligate: What are valid cues for the trustworthiness of a study?

[Update 2015/1/14: I consolidate feedback from Twitter, comments, email, and real life into the main text (StackExchange-style), so that we get a good and improving answer. Thanks to @TonyLFreitas, @PhDefunct, @bahniks, @JoeHilgard, @_r_c_a, @richardmorey, @R__INDEX, the commenters at the end of this post and on the OSF mailing list, and many others for their feedback!]

In a recent lecture I talked about the replication crisis in psychology. After the lecture my students asked: “We learn so much stuff in our lectures, and now you tell us that a considerable proportion of these ‘facts’ probably are just false positives, or highly exaggerated? Then, what can we believe at all?”. A short discussion soon led to the crucial question:

In the era of #repligate: What are valid cues for the trustworthiness of a study?
Of course, the best way to judge a study’s quality would be to read the paper thoroughly, make an informed judgement about its internal and statistical validity, invest some extra time in a literature review, and maybe take a look at the raw data, if available. However, such an investment is not possible in all scenarios.

 

Here, I will only focus on cues that are easy and fast to retrieve.

 

As a conceptual framework we can use the lens model (Brunswik, 1956), which differentiates the concepts of cue usage and cue validity. We use some piece of information as a manifest cue for a latent variable (“cue utilization”). But only some cues are also valid indicators (“cue validity”): valid cues correlate with the latent variable, invalid cues do not. Sometimes valid cues exist that we don’t use, and sometimes we use cues that are not valid. Of course, each of the following cues can be criticized, and you can certainly give examples where each cue breaks down. Furthermore, the absence of a positive cue (e.g., if a study has not been pre-registered, which was uncommon until recently) does not necessarily indicate untrustworthiness.
But this is the nature of cues – they are not perfect, and only work on average.

 


Valid cues for trustworthiness of a single study:

  • Pre-registration. This might be one of the strongest cues for trustworthiness. Pre-registration makes p-hacking and HARKing unlikely (Wagenmakers, Wetzels, Borsboom, Maas, & Kievit, 2012), and takes care of a sufficient amount of statistical power (at least, some sort of sample size planning has been done; of course, this depends on the correctness of the a-priori effect size estimate).
  • Sample size / statistical power. Larger samples mean higher power, higher precision, and fewer false positives (Bakker, van Dijk, & Wicherts, 2012; Maxwell, Kelley, & Rausch, 2008; Schönbrodt & Perugini, 2013). Of course, sample size alone is not a panacea. As always, the garbage-in/garbage-out principle holds, and a well designed lab study with n=40 can be much more trustworthy than a sloppy mTurk study with n=800. But all other things being equal, I put more trust in larger studies.
  • Independent high-powered replications. If a study has been independently replicated by another lab, with high power and preferably pre-registered, this is probably the strongest evidence for the trustworthiness of a study (How to conduct a replication? See the Replication Recipe by Brandt et al., 2014).
  • I guess that studies with Open Data and Open Material have a higher replication rate
    • “Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results” (Wicherts, Bakker, & Molenaar, 2011). This is not exactly Open Data, because here authors only shared data upon request (or not), but it points in the same direction.
    • Beyond publishing Open Data at all, the neatness of the data set and the quality of the analysis script are indicators (see also the comment by Richard Morey). The Quarterly Journal of Political Science requires authors to publish the raw data and analysis code that generate all results reported in the paper. Of these submissions, 54% “had results in the paper that differed from those generated by the author’s own code”! My fear is that analysis code that has not been refined and polished for publication contains even more errors (not to speak of unreproducible point-and-click analyses). Therefore, a well prepared data set and analysis code should be a valid indicator.
    • Open Material could be an indicator that people are not afraid of replications and further scrutiny
  • An abstract with reasonable conclusions that stick close to the data – see also below: “Red flags”. This includes visible efforts of the authors to explain how they could be wrong and what precautions were/were not taken.
    • A sensitivity analysis, which shows that conclusions do not depend on specific analytical choices. For Bayesian analyses this means exploring how the conclusions depend on the choice of the prior. But you could also show how your results change when you do not exclude the outliers, or do not apply that debatable transformation to your data (see also comment).
  • Using the “21 Word Solution” of Simmons, Nelson, & Simonsohn (2012) leads to a better replication index.
These cues might be features of a specific study. Beyond that, they could also be used as indicators of an author’s general approach to science (e.g., does s/he in general embrace open practices and care about the replicability of his or her research? Does the author have a good replication record?). So the author’s open science reputation could be another valid indicator, and could be useful for hiring or tenure decisions.
(As a side note: I am not so interested in creating another formalized author index “The super-objective-h-index-extending-altmetric-open-science-author-index!”. But when I reflect about how I judge the trustworthiness of a study, I indeed take into account the open science reputation an author has).

Valid cues for trustworthiness of a research programme/ multiple studies:


Valid cues for UNtrustworthiness of a single study/ red flags:

In a comment below, Dr. R introduced the idea of “red flags”, which I really like. These red flags aren’t proof of the untrustworthiness of a study – but they are definitely a warning sign to look closer and to be more sceptical.

  • Sweeping claims, counterintuitive, and shocking results (that don’t connect to the actual data)
  • Most p values are in the range of .03 – .05 (or, equivalently, most t values are in the 2–3 range, or most F values are in the 4–9 range; see the comment by Dr. R below).
    • What does a distribution of p values look like when there is an effect? See Daniël Lakens’ blog, or the simulation sketch after this list. With large samples, p values just below .05 can even indicate support for the null!
  • It’s a highly cited result, but no direct replications have been published so far. That could be an indicator that many unsuccessful replication attempts went into the file-drawer (see comment by Ruben below).
  • Too good to be true: If several low-power studies are combined in a paper, it can be very unlikely that all of them produce significant results. The “Test of Excess Significance” has been used to formally test for “too many significant results”. Although this formal test has been criticized (e.g., see The Etz-Files, and especially the long thread of comments, or this special issue on the test), I still think excess significance can be used as a red flag indicator to look closer.
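To get an intuition for this red flag, here is a small, purely illustrative simulation (the choice of n = 50 per group and d = 0.5, i.e., roughly 70% power, is mine): under H0, p values are uniformly distributed, so only 2% of them land between .03 and .05; under H1 with decent power, very small p values dominate. A paper in which most p values cluster just below .05 is therefore an unusual pattern under either hypothesis.

# Where do p values fall under H0 vs. under H1? (illustrative simulation)
set.seed(1)
p_H0 <- replicate(10000, t.test(rnorm(50), rnorm(50))$p.value)              # no effect
p_H1 <- replicate(10000, t.test(rnorm(50), rnorm(50, mean = 0.5))$p.value)  # d = 0.5

mean(p_H0 >= .03 & p_H0 <= .05)  # ~ .02 (p values are uniform under H0)
mean(p_H1 >= .03 & p_H1 <= .05)  # ~ .07 (a bit more than under H0, but still a minority)
mean(p_H1 < .01)                 # roughly half: very small p values are common under H1

hist(p_H1, breaks = 20, main = "p values under H1 (d = 0.5, n = 50 per group)")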

Possibly invalid cues (cues which are often used, but only seemingly are indicators for a study’s trustworthiness):

  • The journal’s impact factor. Impact factors correlate with retractions (Fang & Casadevall, 2011), but do not correlate with a single paper’s citation count (see here).
    • I’m not really sure whether this is a valid or an invalid cue for a study’s quality. The higher retraction rate might be due to the stronger public interest in, and tougher post-publication review of, papers in high-impact journals. The IF does not seem to be predictive of a single paper’s citation count; but I’m not sure either whether the citation count is an index of a study’s quality. Furthermore, “Impact factors should have no place in grant-giving, tenure or appointment committees.” (ibid.); see also a recent article by @deevybee in Times Higher Education.
    • On the other hand, the current replicability estimate of a full volume of JPSP is only at 20-30% (see Reproducibility Project: Psychology). A weak performance for one of our “best journals”.
  • The author’s publication record in high-impact journals or h-index. This might be a less valid cue than expected, or even an invalid cue.
  • Meta-analyses. Garbage in, garbage out: meta-analyses of a biased literature produce biased results. Typical correction methods do not work well. When looking at meta-analyses, one at least has to check whether and how publication bias was corrected for.
This list of cues was compiled in a collaborative effort. Some of them have empirical support; others are only a personal hunch.

 

So, if my students ask me again “What studies can we trust at all?”, I would say something like:
“If a study has a large sample size, Open Data, and maybe even has been pre-registered, I would put quite some trust in the results. If the study has been independently replicated, even better. In contrast to common practice, I do not care so much whether the paper has been published in a high-impact journal or whether the author has a long publication record. The next step, of course, is: read the paper, and judge its validity and the quality of its arguments!”
What are your cues or tips for students?

This list certainly is not complete, and I would be interested in your ideas, additions, and links to relevant literature!

 

References

Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7, 543–554. doi:10.1177/1745691612459060
Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., Grange, J. A., et al. (2014). The Replication Recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217–224. doi:10.1016/j.jesp.2013.10.005
Brunswik, E. (1956). Perception and the representative design of psychological experiments. University of California Press.
Fang, F. C., & Casadevall, A. (2011). Retracted science and the retraction index. Infection and Immunity, 79, 3855–3859. doi:10.1128/IAI.05661-11
Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59, 537–563. doi:10.1146/annurev.psych.59.103006.093735
Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609–612. doi:10.1016/j.jrp.2013.05.009
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., Maas, H. L. J. v. d., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7, 632–638. doi:10.1177/1745691612463078

#HIBAR: Why Using Age as a Proxy for Testosterone is a Bad Deal.

This is a post-publication peer review (HIBAR: “Had I Been A Reviewer”) of the following paper:

Levi, M., Li, K., & Zhang, F. (2010). Deal or no deal: Hormones and the mergers and acquisitions game. Management Science 56, 1462 -1483.

A citeable version of this post-publication peer review can be found at SSRN:

Schönbrodt, Felix D., Why Using Age as a Proxy for Testosterone is a Bad Deal (March 26, 2012). Available at SSRN: http://ssrn.com/abstract=2028882 or http://dx.doi.org/10.2139/ssrn.2028882

 

In their article “Deal or No Deal: Hormones and the Mergers and Acquisitions Game,” Levi, Li, and Zhang (2010) claimed that they investigated the effect of testosterone on CEOs’ decisions in mergers and acquisitions. However, they did not measure testosterone levels directly. Rather, they tried to use CEO age as a proxy, based on a previously documented negative correlation between age and testosterone level. In this comment, I argue that it is not reasonable to use age as a proxy for testosterone, and that Levi et al.’s study does not tell us anything about testosterone. General remarks on using proxy variables are given.

In their article “Deal or No Deal: Hormones and the Mergers and Acquisitions Game,” Levi, Li, and Zhang (2010) investigated the research question of whether the hormone testosterone (T) has an impact on decisions in mergers and acquisitions (M&As). Based on experimental results that T has an effect on behavior in ultimatum games (Burnham, 2007), Levi et al. hypothesized that CEOs with higher T levels should show more aggressive/dominant behavior in M&As. To investigate this hypothesis, the authors assembled a data set with 357 M&As and several economic variables related to them (e.g., the size of the target firm, the board size, and several other economic indicators). As they could not assess the T levels of the CEOs directly, they “[…] have suggested an alternative: specifically, to proxy testosterone by age” (p. 1476). Therefore, as the authors admitted themselves, their reasoning was based on a central assumption: “The validity of this approach clearly depends on the extent of the association of hormone levels with age.” (p. 1476). To summarize their findings, a significant but small effect of age on M&A decisions was found: younger CEOs made more bid withdrawals and more tender offers than their older counterparts (which has been interpreted as more dominant behavior). As the story has received wide press coverage, for example, in the Wall Street Journal, Financial Times, Time Magazine, and the Los Angeles Times, I feel the need to make some clarifications.

Empirically, Levi et al. have shown an effect of CEO age on the outcome of M&As. From the title to the conclusion of their article, however, they refer to the effect of testosterone (e.g., “[…] in M&As the testosterone of both parties could influence the course and outcome of negotiations,” p. 1463; “[…] we consider whether testosterone influences the likelihood that offers made are subsequently withdrawn,” p.1466; “This finding strongly supports an association between testosterone, as proxied by the bidder male CEO age, and M&As,” p. 1469).

In the following commentary, I argue that it is not appropriate to use age as a proxy for T level and that the conclusions of Levi et al. are taking it way too far. For the clarity of my arguments, I will focus only on the strongest reported effect. For all weaker effects, the same reasoning applies even more.

The Effect of Testosterone on Dominant Behavior is Rather Low

Is it a reasonable hypothesis to expect more dominant M&A behavior from CEOs with higher T levels? Early investigations with animals have shown a relation between testosterone level and aggressive or dominant behavior (e.g., Wingfield, Hegner, Dufty, & Ball, 1990). Recent review articles and meta-analyses on human testosterone, however, have shown that the effects in humans are rather small. For example, in a meta-analysis on the relation of male human aggression and T level (N = 9760; Archer, Graham-Kevan, & Davies, 2005; Book, Starzyk, & Quinsey, 2001), the weighted correlation was only .08. Other meta-analyses on the relation of T to dominance (weighted r = .13) and to a challenge in a sports competition (weighted r = .18) have supported this finding of a small effect (for an overview, see Archer, 2006). To summarize, if M&As are seen as competitive situations, indeed, an effect of T could be expected – but the relation is probably much less pronounced than common sense might suggest.

The Relation Between Age and Testosterone is Rather Low

The central assumption in their article is that T level can be proxied by age. How strong is that relation? A meta-analysis of 23 studies reporting the correlation between age and T level (N = 1900) revealed that the average correlation between these variables was -.18 (Gray, Berlin, McKinlay, & Longcope, 1991). That means only 3% of the variation in T level can be explained by age. Levi et al. refer to another study that shows a remarkably stronger correlation of -.50 (Harman, Metter, Tobin, Pearson, & Blackman, 2001).

All articles cited by Levi et al. report the correlation for a broad age range (e.g., 24 – 90 years, Ferrini & Barrett-Conner, 1998; 23 – 91 years, Harman et al., 2001). Given that 90% of CEOs in Levi et al.’s sample were between 46 and 64, a massive range restriction is present, which presumably lowers the correlation in that age range even more. Indeed, the scatter plots in Harman et al. (2001) strongly suggest that the correlation is mainly driven by the very young and very old participants.
But regardless of whether the true correlation is closer to -.18 or to -.50: Are these correlations high enough that age is warranted as a valid proxy for T level? As we will see in the next section, the answer is no.

A “Triangulation” of an Unobserved Correlation?

In their conclusions, the authors argue as though they had established a causal effect of T on M&A decisions. However, they had not even established a correlation between T and M&A behavior. Their approach seems to be something like a “triangulation” of an unobserved correlation from the knowledge of two other correlations. Indeed, there are dependencies and constraints among the bivariate correlations between three variables. If two of the three correlations are given (r_{13} and r_{23}), the possible range of the third correlation, r_{12}, is constrained by the following equation (Olkin, 1981):

r_{13}r_{23} - \sqrt{(1-r_{13}^2)(1-r_{23}^2)} \leq r_{12} \leq r_{13}r_{23} + \sqrt{(1-r_{13}^2)(1-r_{23}^2)}     (1)

Figure 1 graphically represents the relationship of variables given by this equation. The left plot shows the upper boundary of r_{12}, the right plot the lower boundary of r_{12}. As one can see, rather high values for either r_{13}, r_{23}, or both, have to be present to imply a positive sign (i.e., a lower boundary > 0) of r_{12}.

[Figure 1]

The left panel shows the upper boundary of a correlation r_{12} for the given correlations r_{13} and r_{23}; the right panel shows the lower boundary. The asterisk marks the position of Levi et al.’s study.

Given the observed estimates of r_{age,T} = -.50 and r_{age,bid} = -.12, the range of possible values for r_{T,bid} goes from -.80 to +.92 (the position is marked by the asterisk in Figure 1). If the probably more realistic estimate from the meta-analysis is used (r_{age,T} = -.18; Gray et al., 1991), the possible range for r_{T,bid} goes from -.95 to +1.00. These calculations clearly show that there is no basis to expect a positive correlation between T and M&A decisions. Actually, with the given data set, no conclusions about T can be drawn at all.
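These bounds follow directly from Equation 1; here is a minimal R sketch (the function name is mine, not from the paper):

# Olkin (1981) bounds for r_12, given r_13 and r_23 (Equation 1)
olkin_bounds <- function(r13, r23) {
  center    <- r13 * r23
  halfwidth <- sqrt((1 - r13^2) * (1 - r23^2))
  c(lower = center - halfwidth, upper = center + halfwidth)
}

olkin_bounds(r13 = -.50, r23 = -.12)  # r_T,bid can range from -.80 to +.92
olkin_bounds(r13 = -.18, r23 = -.12)  # r_T,bid can range from -.95 to +1.00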

The Effect of Age on M&A Decisions: An Artefact?

Given these analyses, it should be clear that one cannot speak of a T effect based on these data. What about the age effect reported in this article? The authors wrote that, “[… m]otivated by the studies of population testosterone levels reviewed earlier, we use the age of 45 years as the cut-off to separate young male CEOs from the rest.” (p. 1467). This seems to be a problematic choice to me. First of all, none of the cited studies gives a hint about why the age of 45 should be a particularly meaningful cut point. None of the reviewed studies suggests a significant break or stronger decline of T at that age. Furthermore, that cut point leads to a very skewed distribution of 16 “young” vs. 341 “old” CEOs.

The strongest reported effect was that of the dichotomized variable “CEO is young” (i.e., younger than 45) on bid withdrawals. Based on the reported descriptive statistics and correlations, it can be computed that 5 young CEOs withdrew, whereas 11 did not. Concerning old CEOs, 40 out of 341 withdrew. This distribution led to the reported Pearson correlation of r = .12 (p = .02). If only one young CEO had not withdrawn, the correlation would have been nonsignificant (r = .08, p = .13). If only three young CEOs had not withdrawn, age and withdrawing behavior would be completely unrelated. Hence, as Levi et al.’s conclusions stand and fall with the decision of one single CEO, these results do not seem very robust to me. Why is age categorized at all? Methodological papers on the topic clearly advise against dichotomizing a predictor variable that is available on an interval or ratio scale (e.g., Royston, Altman, & Sauerbrei, 2006).
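The fragility of this result is easy to verify from the reconstructed 2×2 table; a minimal R sketch using the cell counts given above:

# 2x2 table: 5 of 16 young CEOs withdrew; 40 of 341 old CEOs withdrew
young    <- rep(c(1, 1, 0, 0), times = c(5, 11, 40, 301))
withdrew <- rep(c(1, 0, 1, 0), times = c(5, 11, 40, 301))
cor(young, withdrew)               # r = .12
cor.test(young, withdrew)$p.value  # p = .02

# move a single young CEO from "withdrew" to "did not withdraw":
young2    <- rep(c(1, 1, 0, 0), times = c(4, 12, 40, 301))
withdrew2 <- rep(c(1, 0, 1, 0), times = c(4, 12, 40, 301))
cor(young2, withdrew2)             # r = .08, no longer significant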

Conclusion

Drawing on several meta-analyses, I have shown that the relation between T and dominant human behavior is much less pronounced than suggested by Levi and colleagues. Likewise, the relation between age and testosterone, especially in the restricted age range of their sample, is presumably close to zero. From my reading of the empirical evidence, the article should be rewritten as “Age and the mergers and acquisitions game,” and it should be acknowledged that the age effect, although significant, has a small effect size.

Of course, age is not a psychological variable per se – it is always a proxy for a true causal factor: “And although it is unquestionably useful to find that a phenomenon covaries with age, neither age nor the related variable time is a causal variable; changes occur in time, but not as a result of time.” (Hartmann & George, 1999, p. 132). Hence, it would be feasible to spend one or two paragraphs in the discussion speculating about the potential causal role of testosterone (and several other age-related variables). In that article, however, from the main title to the conclusion, the authors argue that T is the causal factor. Although the authors try to rule out some alternative age-related factors, the main criticism remains: With the current data, there is no way to provide evidence for or against a T hypothesis. It’s simply the wrong story for the data.

Based on the same data set, several other articles could have been written in the same style, for example, “Bicep strength and the M&A game.” Why bicep strength? The theory of embodied cognition predicts that physical dominance is a predictor for psychological dominance in negotiations (Barsalou, 2008). As CEO muscle strength cannot be measured directly, one could use CEO age as a proxy for muscle strength (r_{age,muscle strength} =.55; Lindle et al., 1997). Physically stronger CEOs, as proxied by age, would therefore be expected to make more dominant choices in M&As. This ad hoc hypothesis has the same argumentative structure and the same empirical justification as the “testosterone story,” and the creative reader can think of a multitude of other age-related variables that might have an influence on negotiation decisions (e.g., generational cohort effects in moral values, fluid intelligence, different educational levels between age groups, time spent in marriage, or the grayishness of the CEO’s hair).

To summarize, Levi et al. conclude that they “[…] have been able to conduct that age appears to be proxying for testosterone rather than experience, horizon, or some other effect” (p. 1478) and that their finding “strongly supports an association between testosterone, as proxied by the bidder male CEO age, and M&As.” (p. 1469). In the light of the analyses given above, that conclusion is not justified.

References

Archer, J. 2006. Testosterone and human aggression: An evaluation of the challenge hypothesis. Neuroscience & Biobehavioral Reviews 30(3) 319-345.

Archer, J., N. Graham-Kevan, M. Davies. 2005. Testosterone and aggression: A reanalysis of Book, Starzyk, and Quinsey’s (2001) study. Aggression and Violent Behavior 10(2) 241-261.

Barsalou, L.W. 2008. Grounded cognition. Annu. Rev. Psychol. 59 617–645.

Book, A.S., K.B. Starzyk, V.L. Quinsey. 2001. The relationship between testosterone and aggression: A meta-analysis. Aggression and Violent Behavior 6(6) 579–599.

Burnham, T. C. 2007. High-testosterone men reject low ultimatum game offers. Proceedings of the Royal Society B: Biological Sciences 274(1623) 2327-2330.

Cohen, J. 1992. A power primer. Psychological Bulletin 112(1) 155-159.

Ferrini, R. L., E. Barrett-Connor. 1998. Sex hormones and age: a cross-sectional study of testosterone and estradiol and their bioavailable fractions in community-dwelling men. American Journal of Epidemiology 147 750-754.

Gray, A., J. A. Berlin, J. B. McKinlay, C. Longcope. 1991. An examination of research design effects on the association of testosterone and male aging: results of a meta-analysis. Journal of Clinical Epidemiology 44(7) 671-684.

Harman, S. M., E. J. Metter, J. D. Tobin, J. Pearson, M. R. Blackman. 2001. Longitudinal effects of aging on serum total and free testosterone levels in healthy men. Journal of Clinical Endocrinology & Metabolism 86(2) 724-731.

Hartmann, D. P., T. P. George. 1999. Design, measurement, and analysis in developmental research. M. H. Bornstein, M. E. Lamb, ed. Developmental psychology: An advanced textbook (4th ed.). Mahwah, NJ US: Lawrence Erlbaum Associates Publishers, 125-195.

Levi, M., K. Li, F. Zhang. 2010. Deal or no deal: Hormones and the mergers and acquisitions game. Management Science 56(9) 1462 -1483.

Lindle, R. S., E. J. Metter, N. A. Lynch, J. L. Fleg, J. L. Fozard, J. Tobin, T. A. Roy, B. F. Hurley. 1997. Age and gender comparisons of muscle strength in 654 women and men aged 20–93 yr. Journal of Applied Physiology 83(5) 1581-1587.

Olkin, I. 1981. Range restrictions for product-moment correlation matrices. Psychometrika 46(4)  469-472.

Royston, P., D. G. Altman, W. Sauerbrei. 2006. Dichotomizing continuous predictors in multiple regression: A bad idea. Statistics in Medicine 25(1) 127-141.

Wingfield, J. C., R. E. Hegner, A. M. Dufty, G. F. Ball. 1990. The ‘Challenge Hypothesis’: Theoretical Implications for Patterns of Testosterone Secretion, Mating Systems, and Breeding Strategies. The American Naturalist 136(6) 829-846.

 


Reanalyzing the Schnall/Johnson “cleanliness” data sets: New insights from Bayesian and robust approaches


I want to present a re-analysis of the raw data from two studies that investigated whether physical cleanliness reduces the severity of moral judgments – from the original study (n = 40; Schnall, Benton, & Harvey, 2008), and from a direct replication (n = 208, Johnson, Cheung, & Donnellan, 2014). Both data sets are provided on the Open Science Framework. All of my analyses are based on the composite measure as dependent variable.

This re-analysis follows previous analyses by Tal Yarkoni, Yoel Inbar, and R. Chris Fraley, and is focused on one question: What can we learn from the data when we apply modern (i.e., Bayesian and robust) statistical approaches?

The complete and reproducible R code for these analyses is at the end of the post.

Disclaimer 1: This analysis assumes that the studies from which the data came were internally valid. Of course, the garbage-in-garbage-out principle holds. But as the original author reviewed the experimental material of the replication study and gave her OK, I assume that the replication data are as valid as the original data.

Disclaimer 2: I am not going to talk about tone, civility, or bullying here. Although these are important issues, a lot has already been written about them, including apologies from one side of the debate (not from the other, yet). For nice overviews of the debate, see for example the summary provided by Tal Yarkoni. I am completely unemotional about these data. False positives do exist, I am sure I have had my share of them, and replication is a key element of science. I do not suspect anybody of anything – I just look at the data.

That being said, let’s get to business:

Bayes factor analysis

The BF is a continuous measure of evidence for H0 or for H1, and quantifies how much more likely the data are under H1 than under H0. Typically, a BF of at least 3 is required to speak of evidence (i.e., the data should be at least 3 times more likely under H1 than under H0 to speak of evidence for an effect). For an introduction to Bayes factors see here, here, or here.

Using the BayesFactor package, it is simple to compute a Bayes factor (BF) for the group comparison. In the original study, the Bayes factor against the H0, BF_{10}, is 1.08. That means the data are 1.08 times more likely under the H1 (“there is an effect”) than under the H0 (“there is no effect”). As the BF is virtually 1, the data occurred about equally likely under both hypotheses.

What researchers actually are interested in is not p(Data | Hypothesis), but rather p(Hypothesis | Data). Using Bayes’ theorem, the former can be transformed into the latter by assuming prior probabilities for the hypotheses. The BF then tells one how to update one’s prior probabilities after having seen the data. Given a BF very close to 1, one does not have to update one’s priors at all. If one holds, for example, equal priors (p(H1) = p(H0) = .5), these probabilities do not change after having seen the data of the original study. With these data, it is not possible to decide between H0 and H1, and being so close to 1, this BF is not even “anecdotal evidence” for H1 (although the original study was just skirting the boundary of significance, p = .06).

For the replication data, the situation looks different. The Bayes factor BF_{10} here is 0.11. That means H0 is (1 / 0.11) = 9 times more likely than H1. A BF_{10} of 0.11 would be labelled “moderate to strong evidence” for H0. If you had equal priors before, you should update your belief for H1 to 10% and for H0 to 90% (Berger, 2006).

To summarize, neither the original nor the replication study show evidence for H1. In contrast, the replication study even shows quite strong evidence for H0.

A more detailed look at the data using robust statistics

Parametric tests, like the ANOVA employed in the original and the replication study, rest on assumptions. Unfortunately, these assumptions are very rarely met (Micceri, 1989), and ANOVA etc. are not as robust against these violations as many textbooks suggest (Erceg-Hurn & Mirosevich, 2008). Fortunately, over the last 30 years robust statistical methods have been developed that do not rest on such strict assumptions.

In the presence of violations and outliers, these robust methods have much lower Type I error rates and/or higher power than classical tests. Furthermore, a key advantage of these methods is that they are designed in such a way that they are about equally efficient as classical methods, even if the assumptions are not violated. In a nutshell, when using robust methods, there is nothing to lose, but a lot to gain.

Rand Wilcox has pioneered the development of many robust methods (see, for example, this blog post by him), and the methods are implemented in the WRS package for R (Wilcox & Schönbrodt, 2014).

Comparing the central tendency of two groups

A robust alternative to the independent-groups t test is to compare trimmed means via a percentile bootstrap. This method is robust against outliers and does not rest on parametric assumptions. Here we find a p value of .106 for the original study and p = .94 for the replication study. Hence, the same picture: no evidence against the H0.

Comparing other-than-central tendencies between two groups, aka.: Comparing extreme quantiles between groups

When comparing data from two groups, approximately 99.6% of all psychological research compares the central tendency (that is a subjective estimate).

In some cases, however, it would be sensible to compare other parts of the distributions. For example, an intervention could have effects only on slow reaction times (RTs), but not on fast or medium RTs. Similarly, priming could have an effect only on very high responses, but not on low and average responses. Measures of central tendency might obscure or miss this pattern.

And indeed, descriptively there (only) seems to be a priming effect at the “extremely wrong” pole (large numbers on the x axis) of the original study (i.e., the black density line is higher than the red one at “7” and “8” ratings):

[Figure: density plots of the composite ratings in both conditions, original data]

This visual difference can be tested. Here, I employed the qcomhd function from the WRS package (Wilcox, Erceg-Hurn, Clark, & Carlson, 2013). This method tests whether two samples differ in several quantiles (not only in the central tendency). For an introduction to this method, see this blog post.

Here’s the result when comparing the 10th, 30th, 50th, 70th, and 90th quantile:

    q n1 n2 est.1 est.2 est.1_minus_est.2 ci.low ci.up p_crit p.value signif
1 0.1 20 20  3.86  3.15             0.712 -1.077  2.41 0.0500   0.457     NO
2 0.3 20 20  4.92  4.51             0.410 -0.341  1.39 0.0250   0.265     NO
3 0.5 20 20  5.76  5.03             0.721 -0.285  1.87 0.0167   0.197     NO
4 0.7 20 20  6.86  5.70             1.167  0.023  2.05 0.0125   0.047     NO
5 0.9 20 20  7.65  6.49             1.163  0.368  1.80 0.0100   0.008    YES

(Please note: Estimating 5 quantiles from 20 data points is not quite the epitome of precision. So treat this analysis with caution.)

As multiple comparisons are made, the Benjamini-Hochberg correction for the p value is applied. This correction gives new critical p values (column p_crit) against which the actual p values (column p.value) have to be compared. One quantile survives the correction: the 90th quantile. That means that there are fewer extreme answers in the cleanliness priming group than in the control group.

This finding, of course, is purely exploratory, and as any other exploratory finding it awaits cross-validation in a fresh data set. Luckily, we have the replication data set! Let’s see whether we can replicate this effect.

The answer is: no. Not even a tendency:

    q  n1  n2 est.1 est.2 est.1_minus_est.2 ci.low ci.up p_crit p.value signif
1 0.1 102 106  4.75  4.88           -0.1309 -0.705 0.492 0.0125   0.676     NO
2 0.3 102 106  6.00  6.12           -0.1152 -0.571 0.386 0.0250   0.699     NO
3 0.5 102 106  6.67  6.61            0.0577 -0.267 0.349 0.0500   0.737     NO
4 0.7 102 106  7.11  7.05            0.0565 -0.213 0.411 0.0167   0.699     NO
5 0.9 102 106  7.84  7.73            0.1111 -0.246 0.431 0.0100   0.549     NO

Here’s a plot of the results:

[Figure: quantile comparison plots for the original and the replication data]

 

Overall summary

From the Bayes factor analysis, neither the original nor the replication study shows evidence for H1. The replication study actually shows moderate to strong evidence for H0.

If anything, the original study shows some exploratory evidence that only the high end of the answer distribution (around the 90th quantile) is reduced by the cleanliness priming – not the central tendency. If one wants to interpret this effect, it would translate to: “Cleanliness primes reduce extreme morality judgements (but not average or low judgements)”. This exploratory effect, however, could not be cross-validated in the better powered replication study.

Outlook

Recently, Silberzahn, Uhlmann, Martin, and Nosek proposed “crowdstorming a data set”, in cases where a complex data set calls for different analytical approaches. Now, a simple two group experimental design, usually analyzed with a t test, doesn’t seem to have too much complexity – still it is interesting how different analytical approaches highlight different aspects of the data set.

And it is also interesting to see that the majority of these diverse approaches comes to the same conclusion: From this data base, we can conclude that we cannot conclude that the H0 is wrong (this sentence, an homage to Cohen, 1990, was for my frequentist friends ;-)).

And, thanks to Bayesian approaches, we can say (and even understand): There is strong evidence that the H0 is true. Very likely, there is no priming effect in this paradigm.

PS: Celebrate open science. Without open data, all of this would not be possible.

 

## (c) 2014 Felix Schönbrodt
## http://www.nicebread.de
##
## This is a reanalysis of raw data from
## - Schnall, S., Benton, J., & Harvey, S. (2008). With a clean conscience cleanliness reduces the severity of moral judgments. Psychological Science, 19(12), 1219-1222.
## - Johnson, D. J., Cheung, F., & Donnellan, M. B. (2014). Does Cleanliness Influence Moral Judgments? A Direct Replication of Schnall, Benton, and Harvey (2008). Social Psychology, 45, 209-215.


## ======================================================================
## Read raw data, provided on Open Science Framework
# - https://osf.io/4cs3k/
# - https://osf.io/yubaf/
## ======================================================================

library(foreign)
dat1 <- read.spss("raw/Schnall_Benton__Harvey_2008_Study_1_Priming.sav", to.data.frame=TRUE)
dat2 <- read.spss("raw/Exp1_Data.sav", to.data.frame=TRUE)
dat2 <- dat2[dat2[, "filter_."] == "Selected", ]
dat2$condition <- factor(dat2$Condition, labels=c("neutral priming", "clean priming"))
table(dat2$condition)

# build composite scores from the 6 vignettes:
dat1$DV <- rowMeans(dat1[, c("dog", "trolley", "wallet", "plane", "resume", "kitten")])
dat2$DV <- rowMeans(dat2[, c("Dog", "Trolley", "Wallet", "Plane", "Resume", "Kitten")])

# define shortcuts for DV in each condition
neutral <- dat1$DV[dat1$condition == "neutral priming"]
clean <- dat1$DV[dat1$condition == "clean priming"]

neutral2 <- dat2$DV[dat2$condition == "neutral priming"]
clean2 <- dat2$DV[dat2$condition == "clean priming"]


## ======================================================================
## Original analyses with t-tests/ ANOVA
## ======================================================================

# ---------------------------------------------------------------------
# Original study:

# Some descriptives ...
mean(neutral)
sd(neutral)

mean(clean)
sd(clean)

# Run the ANOVA from Schnall et al. (2008)
a1 <- aov(DV ~ condition, dat1)
summary(a1) # p = .0644

# --> everything as in original publication


# ---------------------------------------------------------------------
# Replication study

mean(neutral2)
sd(neutral2)

mean(clean2)
sd(clean2)

a2 <- aov(DV ~ condition, dat2)
summary(a2) # p = .947

# --> everything as in replication report

## ======================================================================
## Bayes factor analyses
## ======================================================================

library(BayesFactor)

ttestBF(neutral, clean, rscale=1)   # BF_10 = 1.08
ttestBF(neutral2, clean2, rscale=1) # BF_10 = 0.11

## ======================================================================
## Robust statistics
## ======================================================================


library(WRS)

# ---------------------------------------------------------------------
# robust tests: group difference of central tendency

# percentile bootstrap for comparing measures of location:
# 20% trimmed mean
trimpb2(neutral, clean)     # p = 0.106 ; CI: [-0.17; +1.67]
trimpb2(neutral2, clean2)   # p = 0.9375; CI: [-0.30; +0.33]

# median
medpb2(neutral, clean)      # p = 0.3265; CI: [-0.50; +2.08]
medpb2(neutral2, clean2)    # p = 0.7355; CI: [-0.33; +0.33]



# ---------------------------------------------------------------------
# Comparing other quantiles (not only the central tendency)

# plot of densities
par(mfcol=c(1, 2))
plot(density(clean, from=1, to=8), ylim=c(0, 0.5), col="red", main="Original data", xlab="Composite rating")
lines(density(neutral, from=1, to=8), col="black")
legend("topleft", col=c("black", "red"), lty="solid", legend=c("neutral", "clean"))

plot(density(clean2, from=1, to=8), ylim=c(0, 0.5), col="red", main="Replication data", xlab="Composite rating")
lines(density(neutral2, from=1, to=8), col="black")
legend("topleft", col=c("black", "red"), lty="solid", legend=c("neutral", "clean"))


# Compare quantiles of original study ...
par(mfcol=c(1, 2))
qcomhd(neutral, clean, q=seq(.1, .9, by=.2), xlab="Original: Neutral Priming", ylab="Neutral - Clean")
abline(h=0, lty="dotted")

# Compare quantiles of replication study
qcomhd(neutral2, clean2, q=seq(.1, .9, by=.2), xlab="Replication: Neutral Priming", ylab="Neutral - Clean")
abline(h=0, lty="dotted")

References

Berger, J. O. (2006). Bayes factors. In S. Kotz, N. Balakrishnan, C. Read, B. Vidakovic, & N. L. Johnson (Eds.), Encyclopedia of Statistical Sciences, vol. 1 (2nd ed.) (pp. 378–386). Hoboken, NJ: Wiley.

Erceg-Hurn, D. M., & Mirosevich, V. M. (2008). Modern robust statistical methods: An easy way to maximize the accuracy and power of your research. American Psychologist, 63, 591–601.

Micceri, T. (1989). The unicorn, the normal curve, and other improbable creatures. Psychological Bulletin, 105, 156–166. doi:10.1037/0033-2909.105.1.156

Schnall, S., Benton, J., & Harvey, S. (2008). With a clean conscience cleanliness reduces the severity of moral judgments. Psychological Science, 19(12), 1219–1222.

Wilcox, R. R., Erceg-Hurn, D. M., Clark, F., & Carlson, M. (2013). Comparing two independent groups via the lower and upper quantiles. Journal of Statistical Computation and Simulation, 1–9. doi:10.1080/00949655.2012.754026

Wilcox, R.R., & Schönbrodt, F.D. (2014). The WRS package for robust statistics in R (version 0.25.2). Retrieved from https://github.com/nicebread/WRS


A comment on “We cannot afford to study effect size in the lab” from the DataColada blog

In a recent post on the DataColada blog, Uri Simonsohn wrote about “We cannot afford to study effect size in the lab“. The central message is: If we want accurate effect size (ES) estimates, we need large sample sizes (he suggests four-digit n’s). As this is hardly possible in the lab, we have to use other research tools, like online studies, archival data, or more within-subject designs.

While I agree with the main point of the post, I’d like to discuss and extend some of the conclusions. As the DataColada blog has no comments section, I’ll comment in my own blog …

 

“Does it make sense to push for effect size reporting when we run small samples? I don’t see how.”

“Properly powered studies teach you almost nothing about effect size.”

It is true that an ES estimate with n=20 will be utterly imprecise, and reporting this ES estimate could misguide readers who give too much importance to the point estimate and do not take the huge CI into account (maybe because it has not been reported).

Still, and here’s my disagreement, I’d argue that small-n studies should also report the point estimate (along with the CI), as a meta-analysis of many imprecise small-n estimates can still give an unbiased and precise cumulative estimate. This, of course, requires that all estimates are reported, not only the significant ones (van Assen, van Aert, Nuijten, & Wicherts, 2014).

Here’s an example – we simulate a population with a true Cohen’s d of 0.8. Then we look at three scenarios: a) a single study with n=20 per group, b) a single study with n=400 per group, and c) 20 studies with n=20 per group each, which are meta-analyzed.

# simulate population with true ES d=0.8
set.seed(0xBEEF1)
library(compute.es)
library(metafor)
library(ggplot2)
X1 <- rnorm(1000000, mean=0, sd=1)
X2 <- rnorm(1000000, mean=0, sd=1) - 0.8

## Compute effect size for ...
# ... a single n=20 study
x1 <- sample(X1, 20)
x2 <- sample(X2, 20)
(t1 <- t.test(x1, x2))
(ES1 <- tes(t1$statistic, 20, 20, dig=3))

This single small-n study has g = 1.337 [0.64, 2.034] (g is an unbiased estimate of d). Quite far off, but the true ES is inside the CI.

# .. a single n=400 study
x1 <- sample(X1, 400)
x2 <- sample(X2, 400)
(t2 <- t.test(x1, x2))
(ES2 <- tes(t2$statistic, 400, 400, dig=3))

This single large-n study has g = 0.76 [0.616, 0.903]. It is close to the true ES and has a quite narrow CI (i.e., high precision).

Now, here’s a meta-analyses of 20*n=20 studies:

# ... 20x n=20 studies
dat <- data.frame()
for (i in 1:20) {
  x1 <- sample(X1, 20)
  x2 <- sample(X2, 20)
  dat <- rbind(dat, data.frame(
    study = i,
    m1i   = mean(x1),
    m2i   = mean(x2),
    sd1i  = sd(x1),
    sd2i  = sd(x2),
    n1i   = 20, n2i = 20
  ))
}

# Do a fixed-effect model meta-analysis
es <- escalc("SMD", m1i=m1i, m2i=m2i, sd1i=sd1i, sd2i=sd2i, n1i=n1i, n2i=n2i, data=dat, append=TRUE)
(meta <- rma(yi, vi, data=es, method="FE"))

The meta-analysis reveals g = 0.775 [0.630; 0.920]. This has nearly exactly the same CI width as the n=400 study, and a slightly different ES estimate.

Here’s a plot of the results:

# Plot results
res <- data.frame(
  n = factor(c("a) n=20", "b) n=400", "c) 20 * n=20\n(meta-analysis)"), ordered=TRUE),
  point_estimate = c(ES1$d, ES2$d, meta$b),
  ci.lower = c(ES1$l.d, ES2$l.d, meta$ci.lb),
  ci.upper = c(ES1$u.d, ES2$u.d, meta$ci.ub)
)
ggplot(res, aes(x=n, y=point_estimate, ymin=ci.lower, ymax=ci.upper)) +
  geom_pointrange() + theme_bw() + xlab("") + ylab("Cohen's d") +
  geom_hline(yintercept=0.8, linetype="dotted", color="darkgreen")

[Figure: Cohen's d point estimates with 95% CIs for the three scenarios]

To summarize, a single small-n study hardly teaches us anything about effect sizes – but many small-n studies together do. Such meta-analyses, however, are only possible if the ES is reported.

“But just how big an n do we need to study effect size? I am about to show that the answer has four-digits.”

In Uri’s post (and the linked R code) the precision issue is approached from the power side – if you increase power, you also increase precision. But you can also directly compute the necessary sample size for a desired precision. This is called the AIPE framework (“accuracy in parameter estimation”), made popular by Ken Kelley, Scott Maxwell, and Joseph Rausch (Kelley & Rausch, 2006; Kelley & Maxwell, 2003; Maxwell, Kelley, & Rausch, 2008). The necessary functions are implemented in the MBESS package for R. If you want a CI width of .10 around an expected ES of 0.5, you need 3170 participants:

library(MBESS)
ss.aipe.smd(delta=0.5, conf.level=.95, width=0.10)
[1] 3170

The same point has been made from a Bayesian point of view in a blog post from John Kruschke: notice the sample size on the x-axis.

In our own analysis on how correlations evolve with increasing sample size (Schönbrodt & Perugini, 2013; see also blog post), we conclude that for typical effect sizes in psychology, you need 250 participants to get sufficiently accurate and stable estimates of the ES:

[Figure: how correlation estimates evolve with increasing sample size]
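As a rough cross-check of that number (my own back-of-the-envelope calculation, not part of the original paper): the half-width of a standard 95% Fisher-z confidence interval for a typical correlation of r = .21 at n = 250 is about .12, i.e., between the w = .10 and w = .15 corridors discussed below.

# approximate 95% CI for r = .21 at n = 250, via Fisher's z transformation
r <- .21; n <- 250
z  <- atanh(r)
se <- 1 / sqrt(n - 3)
tanh(z + c(-1, 1) * qnorm(.975) * se)  # roughly [.09, .33], half-width of about .12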

How much precision is needed?

It’s certainly hard to give general guidelines on how much precision is sensible, but here are the considerations on which we based our stability analyses. We used a CI-like “corridor of stability” (see Figure) with half-widths of w = .10, w = .15, and w = .20 (everything in the correlation metric).

w = .20 was chosen for the following reason: The average reported effect size in psychology is around r = .21 (Richard, Bond, & Stokes-Zoota, 2003). For this effect size, an accuracy of w = .20 would result in a CI that is “just significant” and does not include a reversal of the sign of the effect. Hence, with the typical effect sizes we are dealing with in psychology, a CI with a half-width > .20 would not make much sense.

w = .10 was chosen as it corresponds to a “small effect size” à la Cohen. This is arbitrary, of course, but at least some anchor. And w = .15 is just in between.

Using these numbers and an ES estimate of, say, r = .29, a just tolerable precision would be [.10; .46] (w = .20), a tolerable precision [.15; .42] (w = .15) and a moderate precision [.20; .38] (w = .10).

If we use this lower threshold of “just tolerable precision”, we would need around 200–250 participants per group for a two-sample group difference. While I am not sure whether we really need four-digit samples for typical scenarios, I am sure that we need at least three-digit samples when we want to talk about “precision”.

Regardless of the specific level of precision and method used, however, one thing is clear: Accuracy does not come cheaply. We need far fewer participants for a hypothesis test (Is there a non-zero effect or not?) than for an accurate estimate.

With increasing sample size, unfortunately, you get diminishing returns on precision: As you can see in the dotted lines in Figure 2, the CI levels off, and you need disproportionately large sample sizes to squeeze out the last tiny percentages of a shrinking CI. If you follow Pareto’s principle, you should stop at some point. In scientific practice, accuracy will probably be achieved by meta-analyzing several studies (which also gives you an estimate of the ES variability and possible moderators) rather than by running one mega-study.

Hence: Always report your ES estimate, even in small-n studies.

 

References

Kelley, K., & Maxwell, S. E. (2003). Sample Size for Multiple Regression: Obtaining Regression Coefficients That Are Accurate, Not Simply Significant. Psychological Methods, 8, 305–321. doi:10.1037/1082-989X.8.3.305

Kelley, K., & Rausch, J. R. (2006). Sample size planning for the standardized mean difference: Accuracy in parameter estimation via narrow confidence intervals. Psychological Methods, 11, 363–385. doi:10.1037/1082-989X.11.4.363

Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample Size Planning for Statistical Power and Accuracy in Parameter Estimation. Annual Review of Psychology, 59(1), 537–563. doi:10.1146/annurev.psych.59.103006.093735

Richard, F. D., Bond, C. F., & Stokes-Zoota, J. J. (2003). One hundred years of social psychology quantitatively described. Review of General Psychology, 7, 331–363. doi:10.1037/1089-2680.7.4.331

van Assen, M. A. L. M., van Aert, R. C. M., Nuijten, M. B., & Wicherts, J. M. (2014). Why publishing everything is more effective than selective publishing of statistically significant results. PLoS ONE, 9, e84896. doi:10.1371/journal.pone.0084896
