[Update 2015/1/14: I consolidate feedback from Twitter, comments, email, and real life into the main text (StackExchange-style), so that we get a good and improving answer. Thanks to @TonyLFreitas, @PhDefunct, @bahniks, @JoeHilgard, @_r_c_a, @richardmorey, @R__INDEX, the commenters at the end of this post and on the OSF mailing list, and many others for their feedback!]
In a recent lecture I talked about the replication crisis in psychology. After the lecture my students asked: “We learn so much stuff in our lectures, and now you tell us that a considerable proportion of these ‘facts’ probably are just false positives, or highly exaggerated? Then, what can we believe at all?”. A short discussion soon led to the crucial question:
In the era of #repligate: What are valid cues for the trustworthiness of a study?
Valid cues for trustworthiness of a single study:
- Pre-registration. This might be one of the strongest cues for trustworthiness. Pre-registration makes p-hacking and HARKing unlikely (Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012), and takes care of a sufficient amount of statistical power (at least, some sort of sample size planning has been done; of course, this depends on the correctness of the a priori effect size estimate).
- Sample size / Statistical Power. Larger samples mean higher power, higher precision, and fewer false positives (Bakker, van Dijk, & Wicherts, 2012; Maxwell, Kelley, & Rausch, 2008; Schönbrodt & Perugini, 2013). Of course, sample size alone is not a panacea. As always, the garbage in/garbage out principle holds, and a well-designed lab study with n=40 can be much more trustworthy than a sloppy mTurk study with n=800. But all other things being equal, I put more trust in larger studies (see the power sketch after this list).
- Independent high-power replications. If a study has been independently replicated by another lab with high power, preferably in a pre-registered replication, this probably is the strongest evidence for the trustworthiness of a study (how to conduct a replication? See the Replication Recipe by Brandt et al., 2014).
- I guess that studies with Open Data and Open Material have a higher replication rate.
- “Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results” (Wicherts, Bakker, & Molenaar, 2011) —> this is not exactly Open Data, because here authors only shared data upon request (or not), but it points in the same direction.
- Beyond publishing Open Data at all, the neatness of the data set and the quality of the analysis script are an indicator (see also the comment by Richard Morey below). The Quarterly Journal of Political Science requires authors to publish the raw data and analysis code that generate all results reported in the paper. Of these submissions, 54% “had results in the paper that differed from those generated by the author’s own code”! My fear is that analytical code that has not been refined and polished for publication contains even more errors (not to speak of unreproducible point-and-click analyses). Therefore, a well-prepared data set and analysis code should be a valid indicator.
- Open Material could be an indicator that the authors are not afraid of replications and further scrutiny.
- An abstract with reasonable conclusions that stick close to the data – see also below: “Red flags”. This includes visible efforts by the authors to explain how they could be wrong and what precautions were/were not taken.
- A sensitivity analysis, which shows that the conclusions do not depend on specific analytical choices. For Bayesian analyses this means exploring how the conclusions depend on the choice of the prior. But you could also show how your results change when you do not exclude the outliers, or do not apply that debatable transformation to your data (see also comment)
- Using the “21 Word Solution” of Simmons, Nelson, & Simonsohn (2012) is associated with a better replication index.
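To make the power argument concrete, here is a minimal sketch in Python. The effect size of d = 0.4 and the 80% power target are arbitrary placeholder values, not estimates from any particular study:

```python
# Minimal sample-size sketch: how many participants per group does a
# two-sample t-test need to detect d = 0.4 with 80% power at alpha = .05?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.80,
                                    alternative='two-sided')
print(f"Required n per group: {n_per_group:.0f}")        # roughly 100 per group

# The flip side: how much power does an n = 40 per group study have for d = 0.4?
power_n40 = analysis.power(effect_size=0.4, nobs1=40, alpha=0.05, ratio=1.0)
print(f"Power with n = 40 per group: {power_n40:.2f}")   # roughly .42
```

The point is not the specific numbers, but the habit: some explicit calculation like this should have happened before data collection.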
Valid cues for trustworthiness of a research programme/ multiple studies:
- An R-Index > 50% (see the sketch after this short list)
- A well-behaved p-curve analysis
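For the curious, here is a rough sketch of the R-Index logic (median observed power minus the inflation rate), assuming the reported test statistics have already been converted to z-values. This is only an illustration of the idea, not the official implementation from the Replicability-Index blog:

```python
import numpy as np
from scipy.stats import norm

def r_index(z_values, alpha=0.05):
    """Sketch of the R-Index: median observed power minus the inflation rate
    (success rate minus median observed power), assuming two-sided z-tests."""
    z = np.abs(np.asarray(z_values, dtype=float))
    z_crit = norm.ppf(1 - alpha / 2)             # 1.96 for alpha = .05
    observed_power = 1 - norm.cdf(z_crit - z)    # one-tailed approximation
    success_rate = np.mean(z > z_crit)           # share of significant results
    median_power = np.median(observed_power)
    inflation = success_rate - median_power
    return median_power - inflation

# A set of exclusively just-significant results yields a low R-Index:
print(round(r_index([2.0, 2.1, 2.2, 2.3]), 2))   # ~0.15, far below the 50% benchmark
```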
Valid cues for UNtrustworthiness of a single study/ red flags:
In a comment below, Dr. R introduced the idea of “red flags”, which I really like. These red flags aren’t proof of the untrustworthiness of a study – but they are definitely a warning sign to look closer and to be more sceptical.
- Sweeping claims, counterintuitive, and shocking results (that don’t connect to the actual data)
- Most p-values are in the range of .03 – .05 (or, equivalently, most t-values are in the 2-3 range, or most F-values are in the 4-9 range; see the comment by Dr. R below).
- What does a distribution of p-values look like when there is a true effect? See Daniël Lakens’ blog. With large samples, p-values just below .05 can even indicate support for the null! (See the simulation sketch after this list.)
- It’s a highly cited result, but no direct replications have been published so far. That could be an indicator that many unsuccessful replication attempts went into the file-drawer (see comment by Ruben below).
- Too good to be true: If several low-power studies are combined in a paper, it can be very unlikely that all of them produce significant results. The “Test of Excess Significance” has been used to formally test for “too many significant results”. Although this formal test has been criticized (e.g., see The Etz-Files, and especially the long thread of comments, or this special issue on the test), I still think excess significance can be used as a red flag indicator to look closer.
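To make the last two red flags concrete, here is a small simulation sketch (the effect size d = 0.4 and n = 50 per group are placeholder values): under a true effect, significant p-values pile up well below .05, and it becomes quite unlikely that five such modestly powered studies all come out significant.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
d, n, n_sims = 0.4, 50, 10_000        # placeholder effect size and group size

# Simulate many two-group studies in which the effect is real (d = 0.4)
p_values = np.array([
    ttest_ind(rng.normal(d, 1, n), rng.normal(0, 1, n)).pvalue
    for _ in range(n_sims)
])

power = np.mean(p_values < .05)
print(f"Power: {power:.2f}")                                                      # ~0.50
print(f"Significant p-values below .01: {np.mean(p_values < .01) / power:.2f}")   # ~0.5

# 'Too good to be true': chance that ALL five such studies are significant
print(f"P(5 out of 5 significant): {power ** 5:.3f}")                             # ~0.03
```

So even with only 50% power, about half of the significant p-values land below .01; a paper where every p-value hovers just under .05, and all five studies “work”, should raise an eyebrow.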
Possibly invalid cues (cues which are often used, but only seemingly are indicators for a study’s trustworthiness):
- The journal’s impact factor. Impact factors correlate with retractions (Fang & Casadevall, 2011), but do not correlate with a single paper’s citation count (see here).
- I’m not really sure whether that is a valid or an invalid cue for a study’s quality. The higher retraction rate might be due to the stronger public interest and a tougher post-publication review of papers in high-impact journals. The IF does not seem to be predictive of a single paper’s citation count; but I’m not sure either whether the citation count is an index of a study’s quality. Furthermore, “Impact factors should have no place in grant-giving, tenure or appointment committees.” (ibid.); see also a recent article by @deevybee in Times Higher Education.
- On the other hand, the current replicability estimate of a full volume of JPSP is only at 20-30% (see Reproducibility Project: Psychology). A weak performance for one of our “best journals”.
- The author’s publication record in high-impact journals or h-index. This might be a less valid cue than expected, or even an invalid cue.
- Meta-analyses. Garbage in, garbage out: meta-analyses of a biased literature produce biased results, and typical correction methods do not work well. When looking at a meta-analysis, one has to at least check whether and how it was corrected for publication bias.
“If a study has a large sample size, Open Data, and maybe even has been pre-registered, I would put quite some trust into the results. If the study has been independently replicated, even better. In contrast to common practice, I do not care so much whether this paper has been published in a high-impact journal or whether the author has a long publication record. The next step, of course, is: Read the paper, and judge its validity and the quality of its arguments!”
This list certainly is not complete, and I would be interested in your ideas, additions, and links to relevant literature!
Dear Dr. Schönbrodt
Nice summary of initiatives to improve psychological science and to hear that students are learning about this important topic.
I fully agree that it is not helpful to tell students about untrustworthy research without giving them tools to distinguish trustworthy and untrustworthy results.
Aside from a formal bias analysis, it is helpful to give them some simple heuristics.
If most p-values are > .01, red flag. If p-values are not reported in sufficient detail, red flag. If most t-values are in the 2-3 range, red flag.
If most F-values (often just t-values squared) are in the 4-9 range, red flag.
Even a single-study article often reports several statistical results. If the results are very similar (p’s of .02, .03, .04), red flag.
Finally, we should not leave it to students to second guess the quality of published work. It is time to hold editors accountable for the replicability of studies published in their journals.
Sincerely, Dr. R
Hi Dr. R,
I like your “red flags” idea. That’s exactly what I was looking for: some rules of thumb that I can provide my students with. They usually will not do a formal bias analysis, but they should have an intuition about the studies they hear about in their lectures, for example.
I will include your ideas in the main text.
I tell students to use the Altmetric bookmarklet (do it myself too). Not to find out the score, but to find related tweets and blog posts. Works best if you already have some people whose judgment you trust, and works better for studies that make it into the media, but can especially help in areas where it would take you a long time to find weak spots yourself (e.g. biomedicine for me). Once you have a criticism it’s easier to verify it for yourself than coming up with it.
Also, I wouldn’t teach them to dichotomise into true and false positives, or into trustworthy and untrustworthy (but I kind of doubt you do anyway).
There are a number of people whose work I trust somewhat; I read it with a grain of salt, and to almost all studies I try to apply a downward effect-size correction in my mind, because of the statistical significance filter and so on.
You kind of said this already with the “is it replicated” thing, but if a finding stakes out very novel territory, I expect the estimates to be overestimated more. So some sort of novelty/surprise penalty. This depends on knowing the literature, though. Also, I would teach that if there is no published replication of a highly cited result after 10 years, it probably has been replicated and file-drawered.
And maybe I’m an ass, but the field/paradigm matters to me as a quick cue. I am more careful around “funny” embodiment studies, much of social neuroscience and social endocrinology, social psychology, and evolutionary psychology, my own field. Same in genetics for claims involving candidate genes, epigenetics or telomeres. This extra scrutiny list is incomplete, but to my mind all its members really belong there.
But I don’t want to give this as advice to students; I don’t even know what most of the other departments at my institute are doing – maybe they are one of the few teams doing really solid funny embodiment studies, and this sort of stereotype may be useful for me but pernicious in the local context.
Hi Ruben,
very good thoughts! I haven’t used the Altmetrics tool yet, but it seems to be $$ only? Do you have an institutional license?
I totally agree that we should avoid dichotomies. Let’s think in grades of evidence.
+1 for mentioning the decline effect – that’s a very important lesson from the reproducibility projects, and a good take-home message for students.
The bookmarklet is free! I think I recall using their API, so that must be free too because we certainly don’t have an institutional license.
http://www.altmetric.com/bookmarklet.php
Ha! Now it works.
I checked it on some papers that have been heavily discussed on several blogs (hint: has to do with cleanliness), and not a single one of these blog posts or tweets has been tracked by Altmetric …
But I think it is still useful to discover some posts/tweets one wasn’t aware of before.
This one looks okay to me? http://www.altmetric.com/details.php?citation_id=101738&src=bookmarklet
I think there are some arbitrary criteria for blogs to be on the list, apparently it’s not enough to be syndicated at researchblogging.org
http://www.altmetric.com/sources-blogs.php
Clues of validity: that the author bends over backwards to explain how they could be wrong and what precautions were/were not taken, from data generation through modeling to interpretation.
Not a bad idea! In a similar sense, a little bit of sensitivity analysis always makes me feel better about results.
For example, if the author wishes to exclude certain subjects, the tests and descriptive statistics are presented with and without those subjects. Cook’s distance is available so that the reader can see whether there are extremely influential data points. The actual data are scatterplotted on top of the regression line, etc.
These things help me to know I’m looking at the real data and not just its prettiest facet.
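To make this concrete, here is a minimal sketch with made-up data (none of the numbers come from a real study): flag influential data points with Cook’s distance and report the slope with and without them.

```python
import numpy as np
import statsmodels.api as sm

# Made-up data with one planted influential point, just for illustration
rng = np.random.default_rng(42)
x = rng.normal(size=40)
y = 0.3 * x + rng.normal(scale=1.0, size=40)
x[0], y[0] = 4.0, 8.0                          # the influential observation

X = sm.add_constant(x)
fit_all = sm.OLS(y, X).fit()
cooks_d, _ = fit_all.get_influence().cooks_distance

# Common rule of thumb: flag observations with Cook's distance > 4/n
keep = cooks_d < 4 / len(y)
fit_trimmed = sm.OLS(y[keep], X[keep]).fit()

print(f"Slope with all points:        {fit_all.params[1]:.2f}")
print(f"Slope without flagged points: {fit_trimmed.params[1]:.2f}")
```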
Regarding the fact that students (and most researchers) will never do a formal bias analysis:
We’re trying to make that easier with the Archival project. By collecting the test statistics in a machine-readable format, we can later automatically spit out p-curves and R-Indices, and make meta-analysts’ lives easier.
But we also plan to look for direct and conceptual replications of the articles that we code. That way we should be able to arbitrate which cues are valid predictors of replication. If you want to contribute by letting your students rate articles’ replicability, we can go full Brunswik 🙂
What a great project! Machine-readability is the future.
Oh and I’m surprised Dr. R didn’t mention this himself, but apparently the 21 word solution is a valid cue: https://replicationindex.wordpress.com/2014/12/17/the-r-index-of-simmons-et-al-s-21-word-solution/
Right, of course. It’s added!
You mentioned open data, but the *neatness* of the data is also a valid cue. If it is full of cut-and-pastes from various Excel spreadsheets, formulas that are impossible to follow, or badly written code, then I don’t have as much faith in the quality of the research. Messy data and code make it harder to get quality results without mistakes, whether you are the original researcher or a researcher trying to replicate the analyses later.
Agreed. Properly curated data and code are helpful twice over — once, because the analysis is reproducible and double-checked by the author, and twice, because the analysis can be more easily dissected and discussed by others outside the research group.
I thought I had a proper citation for this, but I was probably only thinking of the Wicherts, Bakker, and Molenaar (2011) paper.
Right! Added.