In the era of #repligate: What are valid cues for the trustworthiness of a study?

[Update 2015/1/14: I have consolidated feedback from Twitter, comments, email, and real life into the main text (StackExchange-style), so that we get a good and improving answer. Thanks to @TonyLFreitas, @PhDefunct, @bahniks, @JoeHilgard, @_r_c_a, @richardmorey, @R__INDEX, the commenters at the end of this post and on the OSF mailing list, and many others for their feedback!]
In a recent lecture I talked about the replication crisis in psychology. After the lecture my students asked: “We learn so much stuff in our lectures, and now you tell us that a considerable proportion of these ‘facts’ are probably just false positives, or highly exaggerated? Then, what can we believe at all?” A short discussion soon led to the crucial question:

In the era of #repligate: What are valid cues for the trustworthiness of a study?
Of course, the best way to judge a study’s quality would be to read the paper thoroughly, make an informed judgement about its internal and statistical validity, invest some extra time in a literature review, and maybe take a look at the raw data, if available. However, such an investment is not possible in every scenario.

 

Here, I will focus only on cues that are easy and fast to retrieve.

 

As a conceptual framework we can use the lens model (Brunswik, 1956), which differentiates between cue utilization and cue validity. We use some piece of information as a manifest cue for a latent variable (“cue utilization”), but only some cues are also valid indicators (“cue validity”): valid cues correlate with the latent variable, invalid cues do not. Sometimes valid cues exist that we do not use, and sometimes we use cues that are not valid. Of course, each of the following cues can be criticized, and you can certainly give examples where each cue breaks down. Furthermore, the absence of a positive cue (e.g., if a study has not been pre-registered, which was uncommon until recently) does not necessarily indicate untrustworthiness.
But this is the nature of cues: they are not perfect, and they only work on average.
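To make the distinction concrete, here is a minimal simulation sketch in Python (the setup and numbers are purely hypothetical, not data from this post): a valid cue is correlated with the latent trustworthiness of a study, an invalid cue is not.

    # Minimal lens-model sketch: a "valid" cue correlates with the latent
    # variable, an "invalid" cue does not. Everything is simulated for
    # illustration only.
    import numpy as np

    rng = np.random.default_rng(42)
    n_studies = 10_000

    # Latent variable: the unobservable trustworthiness of each study
    trustworthiness = rng.normal(size=n_studies)

    # A valid cue is a noisy reflection of the latent variable ...
    valid_cue = trustworthiness + rng.normal(size=n_studies)
    # ... an invalid cue is generated independently of it
    invalid_cue = rng.normal(size=n_studies)

    print("r(valid cue, trustworthiness)   =",
          round(np.corrcoef(valid_cue, trustworthiness)[0, 1], 2))   # ~ .71
    print("r(invalid cue, trustworthiness) =",
          round(np.corrcoef(invalid_cue, trustworthiness)[0, 1], 2)) # ~ .00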

 


Valid cues for trustworthiness of a single study:

  • Pre-registration. This might be one of the strongest cues for trustworthiness. Pre-registration makes p-hacking and HARKing unlikely (Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012), and it implies that some attention has been paid to statistical power (at least, some sort of sample size planning has been done; of course, this depends on the correctness of the a priori effect size estimate).
  • Sample size / statistical power. Larger samples mean higher power, higher precision, and fewer false positives (Bakker, van Dijk, & Wicherts, 2012; Maxwell, Kelley, & Rausch, 2008; Schönbrodt & Perugini, 2013); a minimal power-analysis sketch follows this list. Of course, sample size alone is not a panacea. As always, the garbage in/garbage out principle holds, and a well-designed lab study with n=40 can be much more trustworthy than a sloppy mTurk study with n=800. But all other things being equal, I put more trust in larger studies.
  • Independent high-powered replications. If a study has been independently replicated by another lab, with high power and preferably pre-registered, this is probably the strongest evidence for its trustworthiness (How to conduct a replication? See the Replication Recipe by Brandt et al., 2014).
  • I guess that studies with Open Data and Open Material have a higher replication rate.
    • “Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results” (Wicherts, Bakker, & Molenaar, 2011). This is not exactly Open Data, because here authors only shared data upon request (or not), but it points in the same direction.
    • Beyond publishing Open Data at all, the neatness of the data set and the quality of the analysis script are indicators (see also the comment by Richard Morey). The Quarterly Journal of Political Science requires authors to publish the raw data and analysis code that generate all results reported in the paper. Of these submissions, 54% “had results in the paper that differed from those generated by the author’s own code”! My fear is that analytical code which has not been refined and polished for publication contains even more errors (not to speak of unreproducible point-and-click analyses). Therefore, a well-prepared data set and analysis code should be a valid indicator.
    • Open Material could be an indicator that people are not afraid of replications and further scrutiny.
  • An abstract with reasonable conclusions that stay close to the data (see also the red flags below). This includes visible efforts by the authors to explain how they could be wrong and which precautions were or were not taken.
    • A sensitivity analysis, which shows that the conclusions do not depend on specific analytical choices. For Bayesian analyses this means exploring how the conclusions depend on the choice of the prior. But you could also show how your results change when you do not exclude the outliers, or do not apply that debatable transformation to your data (see also the comment below).
  • Using the “21 Word Solution” of Simmons, Nelson, & Simonsohn (2012) should also go along with better replicability.
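As announced in the sample size bullet above, here is a minimal sketch of a priori sample size planning in Python, using statsmodels (the assumed effect size of d = 0.5 is a hypothetical choice for illustration, not a recommendation from this post):

    # A priori power analysis for a two-group t-test: how many participants
    # per group are needed to detect an assumed effect of d = 0.5 with 80%
    # power? (The effect size is an assumption made for this illustration.)
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                       alternative='two-sided')
    print(f"n per group for d = 0.5, alpha = .05, power = .80: {n_per_group:.0f}")  # ~64

    # The flip side: the power a small study (n = 20 per group) actually has
    achieved_power = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05,
                                          alternative='two-sided')
    print(f"power with n = 20 per group: {achieved_power:.2f}")  # ~0.34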
These cues might be features of a specific study. Beyond that, they could also be used as indicators of an author’s general approach to science (e.g., does he or she generally embrace open practices and care about the replicability of his or her research? Does the author have a good replication record?). So the author’s open science reputation could be another valid indicator, and could be useful for hiring or tenure decisions.
(As a side note: I am not interested in creating yet another formalized author index, “The super-objective-h-index-extending-altmetric-open-science-author-index!”. But when I reflect on how I judge the trustworthiness of a study, I do take into account the open science reputation an author has.)

Valid cues for trustworthiness of a research programme / multiple studies:


Valid cues for UNtrustworthiness of a single study / red flags:

In a comment below, Dr. R introduced the idea of “red flags”, which I really like. These red flags are not proof of a study’s untrustworthiness, but they are definitely a warning sign to look closer and to be more sceptical.

  • Sweeping claims and counterintuitive or shocking results (that do not connect to the actual data).
  • Most p-values are in the range of .03 to .05 (or, equivalently, most t-values are in the 2-3 range, or most F-values are in the 4-9 range; see the comment by Dr. R below).
    • What does a distribution of p-values look like when there is an effect? See Daniël Lakens’ blog. With large samples, p-values just below .05 can even indicate support for the null (see the simulation sketch after this list)!
  • It’s a highly cited result, but no direct replications have been published so far. That could be an indicator that many unsuccessful replication attempts went into the file-drawer (see comment by Ruben below).
  • Too good to be true: If several low-powered studies are combined in a paper, it can be very unlikely that all of them produce significant results. The “Test of Excess Significance” has been used to formally test for “too many significant results”. Although this formal test has been criticized (e.g., see The Etz-Files, especially the long thread of comments, or this special issue on the test), I still think excess significance can be used as a red flag that warrants a closer look (the simple arithmetic is included in the sketch after this list).
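As referenced in the two red flags above, here is a rough simulation sketch in Python (all parameters are assumptions for illustration, not taken from this post). Part (a) shows that with a large sample and a real effect, p-values just below .05 are rare, so a pile of them is more consistent with a true null plus selective reporting; part (b) shows the simple excess-significance arithmetic.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def simulate_p_values(d, n_per_group, n_sims=20_000):
        """p-values of two-sample t-tests when the true standardized effect is d."""
        g1 = rng.normal(d, 1, size=(n_sims, n_per_group))
        g2 = rng.normal(0, 1, size=(n_sims, n_per_group))
        return stats.ttest_ind(g1, g2, axis=1).pvalue

    # (a) p-values just below .05 under a large-sample true effect vs. under the null
    p_effect = simulate_p_values(d=0.5, n_per_group=250)  # high-powered true effect
    p_null = simulate_p_values(d=0.0, n_per_group=250)    # no effect

    in_window = lambda p: np.mean((p > .03) & (p < .05))
    print("P(.03 < p < .05 | true effect) =", round(in_window(p_effect), 3))  # ~ .000
    print("P(.03 < p < .05 | null)        =", round(in_window(p_null), 3))    # ~ .020

    # (b) Excess significance: if each of k independent studies has 50% power,
    # the chance that *all* of them come out significant is 0.5 ** k
    for k in (3, 5):
        print(f"P(all {k} studies significant | power = .50) = {0.5 ** k:.3f}")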

Possibly invalid cues (cues that are often used, but are only seemingly indicators of a study’s trustworthiness):

  • The journal’s impact factor. Impact factors correlate with retractions (Fang & Casadevall, 2011), but do not correlate with a single paper’s citation count (see here).
    • I’m not really sure whether this is a valid or an invalid cue for a study’s quality. The higher retraction rate might be due to the stronger public interest in, and the tougher post-publication review of, papers in high-impact journals. The IF does not seem to be predictive of a single paper’s citation count; but I’m not sure either whether the citation count is an index of a study’s quality. Furthermore, “Impact factors should have no place in grant-giving, tenure or appointment committees” (ibid.); see also a recent article by @deevybee in Times Higher Education.
    • On the other hand, the current replicability estimate for a full volume of JPSP is only 20-30% (see the Reproducibility Project: Psychology). A weak performance for one of our “best journals”.
  • The author’s publication record in high-impact journals, or the h-index. This might be a less valid cue than expected, or even an invalid one.
  • Meta-analyses. Garbage in, garbage out: meta-analyses of a biased literature produce biased results, and typical correction methods do not work well. When looking at a meta-analysis, one at least has to check whether and how it was corrected for publication bias (see the toy simulation after this list).
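As referenced in the meta-analysis bullet above, here is a toy Python simulation of the garbage-in/garbage-out problem (the true effect size, sample size, and number of studies are made-up values for illustration): when only significant studies make it into the literature, the naive meta-analytic average of a modest true effect is inflated.

    # Toy publication-bias sketch: average the effect sizes of all simulated
    # studies vs. only the "published" (significant) ones.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    true_d, n_per_group, n_studies = 0.2, 30, 5_000

    observed_d, significant = [], []
    for _ in range(n_studies):
        g1 = rng.normal(true_d, 1, n_per_group)
        g2 = rng.normal(0.0, 1, n_per_group)
        pooled_sd = np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
        observed_d.append((g1.mean() - g2.mean()) / pooled_sd)
        significant.append(stats.ttest_ind(g1, g2).pvalue < .05)

    observed_d = np.array(observed_d)
    significant = np.array(significant)

    print(f"true effect size:                    d = {true_d:.2f}")
    print(f"average over all simulated studies:  d = {observed_d.mean():.2f}")
    print(f"average over the significant subset: d = {observed_d[significant].mean():.2f}")
    # With n = 30 per group, the power for d = 0.2 is low, so the studies that
    # reach p < .05 are mostly those that overestimated the effect; averaging
    # only them roughly triples the estimate in this toy setup.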
This list of cues was compiled in a collaborative effort. Some of these cues have empirical support; others are only a personal hunch.

 

So, if my students ask me again “What studies can we trust at all?”, I would say something like:
“If a study has a large sample size, Open Data, and maybe even has been pre-registered, I would put quite some trust in the results. If the study has been independently replicated, even better. In contrast to common practice, I do not care so much whether the paper has been published in a high-impact journal or whether the author has a long publication record. The next step, of course, is: read the paper, and judge its validity and the quality of its arguments!”
What are your cues or tips for students?

This list certainly is not complete, and I would be interested in your ideas, additions, and links to relevant literature!

 

References

Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7, 543–554. doi:10.1177/1745691612459060
Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., Grange, J. A., et al. (2014). The Replication Recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217–224. doi:10.1016/j.jesp.2013.10.005
Brunswik, E. (1956). Perception and the representative design of psychological experiments. University of California Press.
Fang, F. C., & Casadevall, A. (2011). Retracted science and the retraction index. Infection and Immunity, 79, 3855–3859. doi:10.1128/IAI.05661-11
Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59, 537–563. doi:10.1146/annurev.psych.59.103006.093735
Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609–612. doi:10.1016/j.jrp.2013.05.009
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., Maas, H. L. J. v. d., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7, 632–638. doi:10.1177/1745691612463078

16 thoughts on “In the era of #repligate: What are valid cues for the trustworthiness of a study?”

  1. Dear Dr. Schönbrodt
    Nice summary of initiatives to improve psychological science, and good to hear that students are learning about this important topic.
    I fully agree that it is not helpful to tell students about untrustworthy research without giving them tools to distinguish trustworthy and untrustworthy results.
    Aside from a formal bias analysis, it is helpful to give them some simple heuristics.
    If most p-values are > .01, red flag. If p-values are not reported in sufficient detail but most t-values are in the 2-3 range, red flag.
    If most F-values (often just t-values squared) are in the 4-9 range, red flag.
    Even a single-study article often reports several statistical results. If the results are very similar (p’s of .02, .03, .04), red flag.
    Finally, we should not leave it to students to second guess the quality of published work. It is time to hold editors accountable for the replicability of studies published in their journals.
    Sincerely, Dr. R

    1. Hi Dr. R,
      I like your “red flags” idea. That’s exactly what I was looking for: some rules of thumb that I can provide my students with. They usually will not do a formal bias analysis, but they should have an intuition about studies they hear in their lectures, for example.
      I will include your ideas in the main text.

  2. I tell students to use the Altmetric bookmarklet (do it myself too). Not to find out the score, but to find related tweets and blog posts. Works best if you already have some people whose judgment you trust, and works better for studies that make it into the media, but can especially help in areas where it would take you a long time to find weak spots yourself (e.g. biomedicine for me). Once you have a criticism it’s easier to verify it for yourself than coming up with it.
    Also, I wouldn’t teach to dichotomise into true and false positives or trustworthy or not (but I kind of doubt you do anyway).
    There are a number of people whose work I trust somewhat; I read it with a grain of salt, and to almost all studies I try to apply a downward effect-size correction in my mind, because of the statistical significance filter and so on.
    You kind of said this already with the “is it replicated” thing, but if a finding stakes out a very novel territory I expect the estimates to be overestimated more. So some sort of novelty/surprise penalty. Depends on knowing the literature though. Also, I would teach that if there is no replication to a highly cited result after 10 years, it probably has been replicated and file-drawered.
    And maybe I’m an ass, but the field/paradigm matters to me as a quick cue. I am more careful around “funny” embodiment studies, much of social neuroscience and social endocrinology, social psychology, and evolutionary psychology, my own field. Same in genetics for claims involving candidate genes, epigenetics or telomeres. This extra scrutiny list is incomplete, but to my mind all its members really belong there.
    But I don’t want to give this as advice to students, I don’t even know what most of the other departments at my institute are doing, maybe they are the one of the few teams doing really solid funny embodiment studies, and this sort of stereotype may be useful for me but pernicious in the local context.

    1. Hi Ruben,
      very good thoughts! I haven’t used the Altmetrics tool yet, but it seems to be $$ only? Do you have an institutional license?
      I totally agree that we should avoid dichotomies. Let’s think in grades of evidence.
      +1 for mentioning the decline effect – that’s a very important lesson from the reproducibility projects, and a good take-home message for students.

        1. Ha! Now it works.
          I checked it on some papers that have been heavily discussed on several blogs (hint: it has to do with cleanliness), and not a single one of these blog posts or tweets has been tracked by Altmetric …
          But I think it is still useful to discover some posts/tweets one wasn’t aware of before.

  3. Clues of validity: that the authors bend over backwards to explain how they could be wrong and which precautions were or were not taken, across data generation, modeling, and interpretation.

    1. Not a bad idea! In a similar sense, a little bit of sensitivity analysis always makes me feel better about results.
      For example, if the author wishes to exclude certain subjects, the tests and descriptive statistics are presented with and without those subjects. Cook’s distance is available so that the reader can see if there are extremely influential datapoints. The actual data are scatterplotted on top of the regression line, etc.
      These things help me to know I’m looking at the real data and not just its prettiest facet.

  4. Regarding the fact that students (and most researchers) will never do a formal bias analysis:
    We’re trying to make that easier with the Archival Project. By collecting the test statistics in a machine-readable format, we can later automatically spit out p-curves and the R-Index, and make meta-analysts’ lives easier.
    But we also plan to look for direct and conceptual replications of the articles that we code. That way we should be able to determine which cues are valid predictors of replication. If you want to contribute by letting your students rate articles’ replicability, we can go full Brunswik 🙂

  5. You mentioned open data, but the *neatness* of the data is also a valid cue. If it is full of cut-and-pastes from various Excel spreadsheets, formulas that are impossible to follow, or badly written code, then I don’t have as much faith in the quality of the research. Messy data and code make it harder to get quality results without mistakes, whether you are the original researcher or a researcher trying to replicate the analyses later.

    1. Agreed. Properly curated data and code are helpful twice over: first, because the analysis is reproducible and has been double-checked by the author, and second, because the analysis can be more easily dissected and discussed by others outside the research group.
      I thought I had a proper citation for this, but I was probably only thinking of the Wicherts, Bakker, and Molenaar (2011) paper.
