Best Paper Award for the “Evolution of correlations”

I am pleased to announce that Marco Perugini and I have received the 2015 Best Paper Award from the Association for Research in Personality (ARP) for our paper:

Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609–612. doi:10.1016/j.jrp.2013.05.009
[Animation (evolDemo): the evolution of a correlation estimate as sample size grows]
The TL;DR summary of the paper: as sample size increases, correlation estimates wiggle up and down before they settle. In typical situations, stable estimates can be expected once n approaches 250. See this blog post for more information and a video (or just read the paper; it's short).
Interestingly (and in contrast to all of my other papers …), the paper has been cited not only in psychology, but also in medical chemistry, geophysical research, atmospheric physics, chronobiology, building research, and, most importantly, in the Indian Journal of Plant Breeding. Amazing.
And the best thing is: the paper is open access, and all simulation code and data are openly available on the Open Science Framework. Use them and run your own simulations!
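
If you would like a quick, hands-on feel for what the simulations show without downloading the OSF materials, here is a minimal sketch in Python (the code on OSF is the authoritative version; the true correlation of .3, the ±.10 corridor, the maximum n of 1,000, and the number of runs below are illustrative choices of mine, not the paper's exact settings). It tracks how a correlation estimate evolves as observations accumulate and records the "point of stability": the smallest n after which the estimate never leaves the corridor around the true value again.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative defaults; not the paper's exact settings.
def point_of_stability(rho=0.3, n_max=1000, n_min=20, corridor=0.1):
    """One 'evolution of a correlation': draw bivariate normal data with
    true correlation rho, track the running correlation as the sample
    grows, and return the smallest n after which the estimate stays
    within +/- corridor of rho (None if that never happens by n_max)."""
    cov = [[1.0, rho], [rho, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n_max).T
    ns = np.arange(n_min, n_max + 1)
    r = np.array([np.corrcoef(x[:n], y[:n])[0, 1] for n in ns])
    outside = np.abs(r - rho) > corridor
    if not outside.any():
        return int(ns[0])          # already stable at the smallest n
    last_out = np.where(outside)[0][-1]
    if last_out == len(ns) - 1:
        return None                # still outside the corridor at n_max
    return int(ns[last_out + 1])   # first n of the final stable stretch

# Median point of stability across many simulated trajectories.
pos = [point_of_stability() for _ in range(100)]
pos = [p for p in pos if p is not None]
print("median point of stability:", int(np.median(pos)))
```

The point of stability varies a lot from run to run, which is why the paper summarizes its distribution with quantiles; the often-quoted n ≈ 250 refers to typical settings (a medium true correlation, a reasonably narrow corridor, high confidence), not a universal constant.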

3 thoughts on “Best Paper Award for the “Evolution of correlations””

  1. I am one of those who cited it (atmospheric physics), and I found it through your blog, which I have enjoyed reading for a year or so now. It's a good, clear paper, and the statistical principles transfer to a variety of disciplines. A lot of people either aren't quite as knowledgeable about statistics as they'd like to be (including me), or simply haven't thought things through, and so read too much into their statistics.

    1. That is really cool – I guess such cross-disciplinary fertilization hardly happened in the pre-WWW era.
      Most statisticians will think the content of the paper is trivial and/or obvious – but the way we presented the matter seems to have resonated with many researchers who are less statistics-oriented.

      1. I try to include caveats and disclaimers about the statistics in my papers, and caution readers not to, e.g., over-interpret small differences in statistics between data sets. But explaining *why* they shouldn't do that (which is something reviewers who aren't statisticians sometimes ask for) can take up pages, and there just isn't space in papers to go into those details. So it's great to have a reference like your paper which you can just cite and say "see here for why". So, congrats on the award. 🙂
        There is a whole host of statistical techniques that are misapplied in my field, but few papers that show concisely why they are wrong for a particular analysis. Another big problem is people running linear least-squares regressions on unsuitable data sets. When you point out why this is wrong (in person or as a reviewer), the authors often come back with "ok, maybe it is wrong, but lots of people in this field do it, so we will keep it in". So we definitely need papers like yours to help educate non-statisticians who use statistics, and to make sure the conclusions drawn are robust.
