A voluntary commitment to research transparency

The Reproducibility Project: Psychology was published last week, and it was another blow to the overall credibility of the current research system’s output.

Some interpretations of the results were in a “Hey, it’s all fine; nothing to see here; let’s just do business as usual” style. Without going into details about the “universal hidden moderator hypothesis” (see Sanjay’s blog for a reply) or the claim that “the results can easily be explained by regression to the mean” (see Moritz’s and Uli’s reply): I do not share these optimistic views, and I do not want to do “business as usual”.

What makes me much more optimistic about the state of our profession than unfalsifiable post-hoc “explanations” is the considerable progress towards an open science, such as the TOP guidelines for transparency and openness in scientific journals, the introduction of Registered Reports, and the introduction of the open science badges (Psych Science increased sharing of data and materials from near zero to 38% in 1.5 years, simply by awarding the badges). And all of this happened within the last 3 years!

Beyond these already beneficial changes, we asked ourselves: What can we do on the personal and local department level to make more published research true?

A first reaction was the foundation of our local Open Science Committee (more about this soon). As another step, together with some colleagues I developed a Voluntary Commitment to Research Transparency.

The idea of that public commitment is to signal to others that we follow these guidelines of open science. The signal is supposed to go to:

  • Colleagues in the department and other universities (With the hope that more and more will join)
  • Co-authors (This is how we will do science)
  • Funding agencies (We prefer quality over quantity)
  • Potential future employers (This is our research style, if you want that)
  • PhD students:
    • If you want to do your PhD here: these are the conditions
    • If you apply for a job after your PhD, you will get the open-science-reputation-badge from us.


Now, here’s the current version of our commitment:

[Update 2015/11/19: I uploaded a minor revision which reflects some feedback from new signatories]

Voluntary Commitment to Research Transparency and Open Science

We embrace the values of openness and transparency in science. We believe that such research practices increase the informational value and impact of our research, as the data can be reanalyzed and synthesized in future studies. Furthermore, they increase the credibility of the results, as independent verification of the findings is possible.

Here, we express a voluntary commitment about how we will conduct our research. Please note that there can be justified exceptions to every guideline. But whenever we deviate from one of the guidelines, we give an explicit justification for why we do so (e.g., in the manuscript, or in the README file of the project repository).

As signatories, we warrant to follow these guidelines from the day of signature on:

Own Research

  1. Open Data: Whenever possible, we publish, for every first-authored empirical publication, all raw data which are necessary to reproduce the reported results on a reliable repository with high data persistence standards (such as the Open Science Framework).

  2. Reproducible scripts: For every first-authored empirical publication we publish reproducible data analysis scripts, and, where applicable, reproducible code for simulations or computational modeling.

  3. We provide (and follow) the “21-word solution” in every empirical publication: “We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study.”1 If necessary, this statement is adjusted to ensure that it is accurate.

  4. As co-authors we try to convince the respective first authors to act accordingly.


Reviewing

  1. As reviewers, we add the “standard reviewer disclosure request”, if necessary (https://osf.io/hadz3/). It asks the authors to add a statement to the paper confirming whether, for all experiments, they have reported all measures, conditions, data exclusions, and how they determined their sample sizes.

  2. As reviewers, we ask for Open Data (or a justification why it is not possible).2

Supervision of Dissertations

  1. As PhD supervisors we put particular emphasis on the propagation of methods that enhance the informational value and the replicability of studies. From the very beginning of a supervisor-PhD student relationship we discuss these requirements explicitly.

  2. From PhD students, we expect that they provide Open Data, Open Materials and reproducible scripts to the supervisor (they do not have to be public yet).

  3. If PhD projects result in publications, we expect that they follow points 1 to 3 under “Own Research” above.

  4. In the case of a series of experiments with a confirmatory orientation, we expect that at least one pre-registered study is conducted, with a justifiable a priori power analysis (in the frequentist case) or a strong evidence threshold (e.g., if a sequential Bayes factor design is implemented). A pre-registration consists of the hypotheses, design, data collection stopping rule, and planned analyses.

  5. The grading of the final PhD thesis is independent of the studies’ statistical significance. Publication of the results is encouraged; however, a successful publication is not a criterion for passing or grading.
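As an aside for readers unfamiliar with a priori power analysis (point 4 above): the core calculation is short enough to sketch in a few lines of Python. This is only an illustration, not part of the commitment; the effect size d = 0.4 is an arbitrary example value, and the normal approximation used here slightly underestimates the exact t-based sample size.

```python
from statistics import NormalDist
import math

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample t-test
    (normal approximation; slightly below the exact t-based value)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha/2
    z_beta = NormalDist().inv_cdf(power)           # quantile matching desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# e.g., to detect d = 0.4 with 80% power at alpha = .05:
print(n_per_group(0.4))  # ~99 participants per group
```

In practice one would use G*Power, the R package `pwr`, or `statsmodels` in Python, which compute the exact (slightly larger) t-based values.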

Service to the field

  1. As members of committees (e.g., tenure track, appointment committees, teaching, professional societies) or editorial boards, we will promote the values of open science.

1Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2012). A 21 word solution. Retrieved from: http://dx.doi.org/10.2139/ssrn.2160588

2See also Peer Reviewers’ Openness Initiative: http://opennessinitiative.org/

So far, 4 members of our department, and 8 researchers from other universities have signed the commitment – take us at our word!

We hope that many more will join the initiative, or think about crafting their own personal commitment, at the openness level they feel comfortable with.


Best Paper Award for the “Evolution of correlations”

I am pleased to announce that Marco Perugini and I have received the 2015 Best Paper Award from the Association of Research in Personality (ARP) for our paper:

Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609–612. doi:10.1016/j.jrp.2013.05.009

The TL;DR summary of the paper: as sample size increases, correlation estimates wiggle up and down. In typical situations, stable estimates can be expected when n approaches 250. See this blog post for some more information and a video (or just read the paper; it’s short).

Interestingly (and in contrast to all of my other papers …), the paper has been cited not only in psychology, but also in medicinal chemistry, geophysical research, atmospheric physics, chronobiology, building research, and, most importantly, in the Indian Journal of Plant Breeding. Amazing.

And the best thing is: the paper is open access, and all simulation code and data are open on the Open Science Framework. Use it and run your own simulations!
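The evolving-correlation idea is easy to reproduce yourself. Here is a rough sketch (in Python rather than the paper’s own simulation code; the true correlation rho = .3, the seed, and the sample sizes are arbitrary example values): draw one bivariate-normal sample and watch the running correlation estimate settle into a corridor around the true value as n grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def evolving_correlation(rho=0.3, n_max=1000, n_min=10):
    """Draw one bivariate-normal sample with true correlation rho and
    return the running correlation estimate for n = n_min..n_max."""
    cov = [[1.0, rho], [rho, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n_max).T
    ns = np.arange(n_min, n_max + 1)
    rs = np.array([np.corrcoef(x[:n], y[:n])[0, 1] for n in ns])
    return ns, rs

ns, rs = evolving_correlation()
# Early estimates swing widely; with growing n the trajectory narrows
# into a "corridor of stability" around the true rho.
```

Repeating this many times and plotting all trajectories makes the corridor visible, which is essentially what the paper’s simulations do on a larger scale.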

Introducing: The Open Science Committee at our department

Large-scale replication projects of recent years (e.g., ManyLabs I, II, and III, and the Reproducibility Project: Psychology) showed that the “replication crisis” in psychology is more than just hot air: according to recent estimates, ~60% of current psychological research is not replicable. (I will not go into details here about what counts as a “replication”; the 60% number can certainly be debated on many grounds, but the take-home message is: it’s devastating.) This spurred a lot of developments, such as the TOP guidelines, which define transparency and openness criteria for scientific publications.

The field is thinking about how we can ensure that we generate more actual knowledge and fewer false positives, or in the words of John Ioannidis: how to make more published research true.

In order to explore potential consequences for our own department of psychology at the Ludwig-Maximilians-Universität München, our department’s administration unanimously decided to establish an Open Science Committee (OSC).

The committee’s mission and goals include:

  • Monitor the international developments in the area of open science and communicate them to the department.
  • Organize workshops that teach skills for open science (e.g., How do I write a good pre-registration? What practical steps are necessary for Open Data? How can I apply for the Open Science badges? How do I do an advanced power analysis? What are Registered Reports?).
  • Develop concrete suggestions concerning tenure-track criteria, hiring criteria, PhD supervision and grading, teaching, curricula, etc.
  • Channel the discussion concerning standards of research quality and transparency in the department. Even if we share the same scientific values, the implementations might differ between research areas. A medium-term goal of the committee is to explore in what way a department-wide consensus can be established concerning certain points of open science.

The OSC developed some first suggestions about appropriate actions that could be taken in response to the replication crisis at the level of our department. We focused on five topics:

  • Supervision and grading of dissertations
  • Voluntary public commitments to research transparency and quality standards (this also includes supervision of PhDs and coauthorships)
  • Criteria for hiring decisions
  • Criteria for tenure track decisions
  • How to allocate the department’s money without setting incentives for p-hacking

Raising the bar naturally provokes backlash. Therefore we emphasize three points right from the beginning:

  1. The described proposals are not a “final program”, but a basis for discussion. We hope these suggestions will trigger a discussion within research units and the department as a whole. Since the proposals target a variety of issues, they of course need to be discussed in the appropriate committees before any actions are taken.
  2. Different areas of research differ in many aspects, and the actions taken can differ between these areas. Despite the probably different modes of implementation, there can be a consensus regarding the overarching goal – for example, that studies with higher statistical power offer higher gains in knowledge (ceteris paribus), and that research with larger gains in knowledge should be supported.
  3. There can be justified exceptions to every guideline. For example, some data cannot sufficiently be anonymized, in which case Open Data is not an option. The suggestions described here should not be interpreted as shackles on the freedom of research, but rather as a statement about which values we as a research community represent and actively strive for.

Two chairs are currently developing a voluntary commitment to research transparency and quality standards. These might serve as a blueprint, or at least as food for thought, for other research units. When finished, these commitments will be made public on the department’s website (and also on this blog). Furthermore, we will collect our suggestions, voluntary commitments, milestones, etc. in a public OSF project.

Do you have an Open Science Committee or a similar initiative at your university? We would love to bundle our efforts with other initiatives, share experiences, material, etc. Contact us!

— Felix Schönbrodt, Moritz Heene, Michael Zehetleitner, Markus Maier

Stay tuned – soon we will present a first major success of our committee!
(Follow me on Twitter for more updates on #openscience and our Open Science Committee: @nicebread303)



© 2018 Felix Schönbrodt | Impressum | Datenschutz | Contact