Reflections on our Open Science Committee's first meeting

Yesterday, we had the first meeting of our department's Open Science Committee. I am happy that the committee has 20 members, representing every research unit of the department and all career stages from PhD students to full professors.
In the meeting, I first gave a quick overview of the replication crisis in psychology. My impression was that there was broad consensus that we do indeed have a problem and that we should think about possible consequences. We then started an open discussion in which we collected questions, reservations, and ideas.
Here, in no particular order, are some of the topics we discussed. Not all of them could be resolved at the meeting (nor was that the goal), but these stubs could eventually grow into an FAQ:
  • It is important to acknowledge the diversity of our (sub)fields. Even if we agree on the overarching values of open science, the specific implementations might differ. The current discussion is often focused on experimental laboratory (or online) research. What about existing large-scale data sets? What about sensitive video data from infant studies? What about I/O research in companies, where agreements with the works council forbid open data? Feasible protocols and solutions have to be developed for these fields.
  • Does the new focus on power lead to boring, low-risk “MTurk research”? This is certainly not the goal, but we should be aware that it could happen as an unintended side effect. For example, the Many Labs projects have all focused on easy-to-implement computerized studies. Given the goals of those projects this is understandable, but we should not forget “the other” kind of research.
  • We had a longer discussion that could be framed as “strategic career choices vs. the intrinsic motivation (and social mandate) to increase knowledge”. From a moral point of view, the choice is clear. But we are all also individuals who have to feed our families (or at least ourselves), so the strategic perspective has existential relevance for us.
    Related to this is the next point:
  • What about the “middle” generation who will soon be looking for jobs in academia? Can we really recommend that they go the open way? Without the option to p-hack, and with the goal of running high-powered studies (which typically require larger samples; see the first sketch after this list), individual productivity (i.e., the number of published papers) will decline. (Of course, “productivity” in terms of the increase in valid knowledge will rise.)
    This would be my current answer: I expect that the gain in reputation outweighs the potential loss in the number of published papers. Furthermore, we now have several techniques that allow us to assess the likelihood of p-hacking and the evidential value of a set of studies. If we present a paper with four studies and ps = .03, .04, .04, and .05, chances are high that we will earn not respect but sceptical frowns (the second sketch after this list illustrates such a check). Hence, as awareness of healthy p-curves and other indicators grows, the old strategy of bundling together too-good-to-be-true studies might soon backfire.
    Finally, I’d advocate an agentic position: it is not some omnipotent external force that imposes an evil incentive structure on us. We are the incentive structure! At least in our department we can make sure that we do not incentivize massive p-hacking, but instead reward scientists who produce transparent, replicable, and honest research.
  • The new focus on replicability and transparency criteria does not imply that other quality indicators (such as good theoretical foundations) are less important.
  • Some change can be achieved through positive, voluntary incentives. For example, the Open Science Badges at the journal Psychological Science led to 40% of papers having open data. In other situations, we might need mandatory rules. Concerning this voluntary/mandatory dimension: when is which approach appropriate and more constructive?
  • One experience so far: Registered Reports can spend a long time in the review process, which could be a problem for publication-based dissertations.
  • We have to explain to the field which methods underlie the conclusion that we have a replicability problem. In some discussions (not in our OSC ;-)) you can hear things like: “Some young greenhorns invent some fancy statistical index and tell us that everything we have done is crap. First they should show that their method is valid!” It is our responsibility to explain the methods to researchers who do not follow the current replicability discussion so closely, or who are not so statistically savvy.
  • Idea: Should we give an annual Open Science Award in our department?
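Two of the points above can be made more concrete with small code sketches. First, on the claim that high-powered studies typically require larger samples: the following is a minimal illustration in Python (assuming scipy is installed), using the standard normal-approximation formula for a two-group, two-sided t-test. The numbers are textbook arithmetic for illustration, not something we computed in the meeting.

```python
# Approximate sample size per group for a two-sided, two-group t-test:
# n ≈ 2 * (z_{1 - alpha/2} + z_{power})^2 / d^2
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a standardized effect d."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * (z_alpha + z_power) ** 2 / d ** 2

for d in (0.8, 0.5, 0.3):
    print(f"d = {d}: ~{n_per_group(d):.0f} participants per group")
# Roughly 25, 63, and 174 per group: adequately powered studies of
# small-to-medium effects quickly require more than small convenience samples.
```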
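Second, on why a paper with ps = .03, .04, .04, and .05 invites sceptical frowns: below is a deliberately simplified sketch of the binomial “right-skew” idea behind p-curve (again Python with scipy; it is not the full p-curve analysis). If a true effect is present, most significant p-values should fall well below .05; this set instead piles up just under the threshold.

```python
# Crude right-skew check in the spirit of p-curve: how many of the
# "significant" p-values fall into the lower half of the (0, .05) interval?
from scipy.stats import binomtest  # requires scipy >= 1.7

p_values = [0.03, 0.04, 0.04, 0.05]           # the hypothetical four-study paper
k_low = sum(p < 0.025 for p in p_values)      # count of p-values below .025
result = binomtest(k_low, n=len(p_values), p=0.5, alternative="greater")
print(f"{k_low} of {len(p_values)} p-values below .025; "
      f"binomial test for right skew: p = {result.pvalue:.2f}")
# Output: 0 of 4 below .025 (p = 1.00) -- no sign of evidential value,
# exactly the pattern that should raise eyebrows in a review.
```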
We have no ready-made answers for many of these questions. Most of them have to be tackled at multiple levels. Any comments and other perspectives on these open questions are appreciated!

Talks and Workshops

One goal of the committee is to train our researchers in new tools and topics. I am happy to announce that we will host at least four talks/workshops in our department during the remainder of 2015:
  • Sep 30, 2015: Jonathon Love (Amsterdam): JASP – A Fresh Way to Do Statistics (14-16, room 3322)
  • Nov 5, 2015: Daniel Lakens (TU Eindhoven): Practical Recommendations to Increase the Informational Value of Studies
  • End of November 2015 (TBA): Etienne LeBel: Introducing Curate Science (curatescience.org)
  • Dec 2015 (TBA): Felix Schönbrodt: How to detect p-hacking and publication bias: A practical introduction to p-curve, R-index, etc.
The plan for the next meeting is to discuss our voluntary commitment to research transparency.
