Felix Schönbrodt

PD Dr. Dipl.-Psych.

Changing hiring practices towards research transparency: The first open science statement in a professorship advertisement

Engaging in open science practices increases knowledge as a common good and ensures the reproducibility, verifiability, and credibility of research. But some fear that, on an individual strategic level (in particular from an early-career perspective), engaging in research transparency could reduce a researcher’s chances of getting a tenured position in academia.

University hiring decisions are often driven (amongst other criteria) by publication quantity and journal prestige: “Several universities base promotion decisions on threshold h-index values and on the number of articles in ‘high-impact’ journals” (Hicks, Wouters, Waltman, de Rijcke, & Rafols, 2015), and Nosek, Spies, & Motyl (2012) mention “[…] the prevailing perception that publication numbers and journal prestige are the key drivers for professional success”.

We all know where this focus on pure quantity and too-perfect results led us: “In a world where researchers are rewarded for how many papers they publish, this can lead to a decrease in the truth value of our shared knowledge” (Nelson, Simmons, & Simonsohn, 2012), which can be seen in ongoing debates about low replication rates in psychology, medicine, or economics.

Doing studies with high statistical power, preparing open data, and trying to publish realistic results that are not hacked to (unrealistic) perfection will slow scientists down. Researchers engaging in these good research practices will probably have a smaller quantity of publications, and if that is the major selection criterion, they are at a disadvantage in a competitive job market for tenured positions.

For this reason, hiring standards also have to change towards valuing research transparency, and the Department of Psychology at LMU München has taken the first step in this direction.

Based on a suggestion from our Open Science Committee, the department added a paragraph to a professorship job advertisement that asks candidates for an open science statement:

[Screenshot of the W3 professorship job advertisement]

Here’s a translation of the open science paragraph:

Our department embraces the values of open science and strives for replicable and reproducible research. For this goal we support transparent research with open data, open material, and pre-registrations. Candidates are asked to describe in what way they already pursued and plan to pursue these goals.

This paragraph clearly communicates open science as a core value of our department.

Of course, research transparency will not be the only criterion by which candidates are evaluated. But, to my knowledge, this is the first time that it is an explicit criterion.

Jean-Claude Burgelman (Directorate General for Research and Innovation of the European Commission) says that “the career system has to gratify open science”. I hope that many more universities will follow the LMU’s lead with an explicit commitment to open science in their hiring practices.


Putting the ‘I’ in open science: How you can change the face of science

If we want to shift from a closed science to an open science, there has to be change at several levels. In this process, it’s easy to push the responsibility (and the power) for reform onto “the system”: “If only journals changed their policy …”, “It’s the responsibility of the granting agencies to change XYZ”, or “Unless university boards change their hiring practices, it is futile to …”.

Beyond doubt, change has to occur at the institutional level. In particular, many journals have already done a lot (see, for example, the TOP guidelines or the new registered reports article format). But journal policies aren’t enough, particularly since they are often not enforced.

In this blog post, I want to advocate for a complementary position of agency and empowerment: Let’s focus on steps each individual can do!

Here I want to show 9 steps that each individual can take, starting today, to foster open science:

What you can do today:

1) Join the community. Follow open science advocates on Twitter and blogs. While monitoring these tweets does not change anything per se, it can give you important updates about developments in open science, and useful hints about how to implement open science in practice. Here’s my personal, selective, and incomplete list of Twitter users that frequently tweet about open science: https://twitter.com/nicebread303/lists/openscience

2) Apply open values in peer review. I have come to realize that my work as a reviewer is very valuable. I review more than 6x the number of papers that I submit myself. I receive more requests than I can handle, so I have to decide anyway which requests to accept and which to decline. Where should I allocate my reviewing resources? I prefer not to allocate them to research that is closed and practically unverifiable. I’d rather allocate them to research that is transparent, verifiable, sustainable, and re-usable.

This is exactly the goal of the Peer Reviewers’ Openness (PRO) Initiative, which uses the reviewer’s role to foster open science practices. The vision of the initiative is to switch from an opt-in model to an opt-out model: openness is the new default; if authors don’t want it, they have to explicitly opt out. Signatories of the initiative only provide a comprehensive review of a manuscript if (a) open data and open materials are provided, or (b) a public justification is given in the manuscript for why this is not possible. In the two weeks since the initiative’s launch, more than 160 reviewers have signed it. I think this group already can have some impact, and I hope that more will sign.

[Read the paper — Sign the Initiative — More resources for open science]

Previous posts on the PRO initiative by Richard Morey, Candice Morey, Rolf Zwaan, and Daniel Lakens

What you can do this week:

3) Commit yourself to open science. In our “Voluntary Commitment to Research Transparency and Open Science” we explain which principles of research transparency we will follow from the day of signature onwards (see also my blog post). If you like it, sign it, and show the world that your research can be trusted and verified. Or use it as an inspiration to craft your own transparency commitment, at the level of openness you feel comfortable with.

4) Find local like-minded people. Find colleagues in your department who embrace the values of open science as you do. Found a local open science initiative where you can discuss challenges, help each other with practical problems (How did that pre-registration work?), and talk about ways open science can be implemented in your specific field. Use this “coalition of the willing” as the starting point for the next step …

What you can do this month:

5) Found a local Open Science Committee. Explore whether your local open science initiative could be installed as an official open science committee (OSC) at your department or institution. See our OSF project for information about our open science committee at the Department of Psychology at LMU Munich. Maybe you can reuse and adapt some of our material. Not all of our faculty members have the same opinion about this committee: some are enthusiastic, some are more skeptical. But still, the department’s board unanimously decided to establish this committee in order to keep the discussion going. Our OSC has 32 members from all chairs, and we meet twice each semester. Our OSC has 4 goals:

  • Monitor the international developments in the area of open science and communicate them to the department.
  • Organize workshops that teach skills for open science.
  • Develop concrete suggestions concerning tenure-track criteria, hiring criteria, PhD supervision and grading, teaching, curricula, etc.
  • Channel the discussion concerning standards of research quality and transparency in the department, and explore in what way a department-wide consensus can be established concerning certain points of open science.

6) Pre-register your next study. Pre-registration is a new skill we have to learn, so the first try does not have to be perfect. For example, I had to revise two of my registrations because I forgot important parts in the first version. In my experience, writing a few pre-registration documents gives you a better feeling for how long they take, what they should contain, what level of detail is appropriate, etc.

You can even win $1,000 if you participate in the Pre-Reg Challenge!

What you can do next semester and beyond:

7) Teach open science practices to students. You could plan your next research methods course as a pre-registered replication study. See also this OSF collection of syllabi, the “Good Science, Bad Science” course by EJ Wagenmakers, and the OSF Collaborative Replications and Education Project (CREP).

8) Submit a registered report. Think about submitting a registered report if there’s a journal in your field that supports this format. In this new article format, the introduction, methods section, and analysis plan are submitted before data are collected. This proposal is sent out for review, and in the positive case you get an in-principle acceptance and proceed to actual data collection. This means the paper is published independently of the results (unless you screw up your data collection or analysis).

9) Promote the values of open science in committees. As a member of a hiring committee, you can argue for open science criteria and evaluate candidates (amongst other criteria, of course) on whether they engage in open practices. For example, Betsy Levy Paluck wrote in her blog: “In a hiring capacity, I will appreciate applicants who, though they do not have a ton of publications, can link their projects to an online analysis registration, or have posted data and replication code. Why? I will infer that they were slowing down to do very careful work, that they are doing their best to build a cumulative science.”

These are 9 small and medium steps, which each researcher could implement to some extent. If enough researchers join us, we can change the face of research.


What’s the probability that a significant p-value indicates a true effect?

If the p-value is < .05, then the probability of falsely rejecting the null hypothesis is < 5%, right? That means that at most 5% of all significant results are false positives (that’s what we control with the α level).

Well, no. As you will see in a minute, the “false discovery rate” (a.k.a. false-positive rate), which indicates the probability that a significant p-value actually is a false positive, usually is much higher than 5%.

A common misconception about p-values

Oakes (1986) asked students and senior scientists the following question:

You have a p-value of .01. Is the following statement true or false?

You know, if you decide to reject the null hypothesis, the probability that you are making the wrong decision.

The answer is “false” (you will learn why it’s false below). But 86% of all professors and lecturers in the sample who were teaching statistics (!) answered this question erroneously with “true”. Gigerenzer, Krauss, and Vitouch replicated this result in 2000 in a German sample (here, the “statistics lecturer” category had 73% wrong). Hence, it is a widespread error to confuse the p-value with the false discovery rate.

The False Discovery Rate (FDR) and the Positive Predictive Value (PPV)

To answer the question “What’s the probability that a significant p-value indicates a true effect?”, we have to look at the positive predictive value (PPV) of a significant p-value. The PPV is the proportion of significant p-values that indicate a real effect among all significant p-values. Put another way: given that a p-value is significant, what is the probability (in a frequentist sense) that it stems from a real effect?

(The false discovery rate is simply 1 − PPV: the probability that a significant p-value stems from a population with a null effect.)

That is, we are interested in a conditional probability Prob(effect is real | p-value is significant).
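
Written as a formula (a standard calculation in the spirit of Ioannidis, 2005, with π for the proportion of true hypotheses, 1 − β for the statistical power, and α for the significance level; p-hacking is left out for the moment):

    \text{PPV} = P(\text{real effect} \mid \text{significant}) = \frac{\pi\,(1-\beta)}{\pi\,(1-\beta) + (1-\pi)\,\alpha}, \qquad \text{FDR} = 1 - \text{PPV}
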
Inspired by Colquhoun (2014), one can visualize this conditional probability in the form of a tree diagram (see below). Let’s assume we carry out 1000 experiments for 1000 different research questions. We now have to make a couple of prior assumptions (which you can set differently in the app we provide below). For now, we assume that 30% of all studies investigate a real effect and that the statistical test used has a power of 35% with an α level set to 5%. That is, of the 1000 experiments, 300 investigate a real effect and 700 a null effect. Of the 300 true effects, 0.35 × 300 = 105 are detected; the remaining 195 effects are non-significant false negatives. On the other branch of 700 null effects, 0.05 × 700 = 35 p-values are significant by chance (false positives) and 665 are non-significant (true negatives).

This path is visualized here (completely inspired by Colquhoun, 2014):

[Figure: tree diagram of the 1000 hypothetical experiments]

 

Now we can compute the false discovery rate (FDR): 35 of the (35 + 105) = 140 significant p-values actually come from a null effect. That means 35/140 = 25% of all significant p-values do not indicate a real effect! That is much more than the alleged 5% level (see also Lakens & Evers, 2014, and Ioannidis, 2005).
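
If you want to check these numbers yourself, here is a minimal sketch of the same tree-diagram arithmetic in Python (this is not the code behind the app, just the calculation with the assumptions from above):

    # Assumptions from the tree diagram above
    n_experiments = 1000
    prior_true = 0.30   # proportion of studies investigating a real effect
    power      = 0.35   # probability of detecting a real effect
    alpha      = 0.05   # significance level

    true_effects = prior_true * n_experiments          # 300 real effects
    null_effects = (1 - prior_true) * n_experiments    # 700 null effects

    true_positives  = power * true_effects             # 105 significant and real
    false_negatives = true_effects - true_positives    # 195 missed effects
    false_positives = alpha * null_effects             #  35 significant by chance
    true_negatives  = null_effects - false_positives   # 665 correctly non-significant

    significant = true_positives + false_positives     # 140 significant p-values
    FDR = false_positives / significant                # 35 / 140 = 0.25
    PPV = true_positives / significant                 # 105 / 140 = 0.75
    print(f"FDR = {FDR:.2f}, PPV = {PPV:.2f}")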

An interactive app

Together with Michael Zehetleitner I developed an interactive app that computes and visualizes these numbers. For the computations, you have to choose 4 parameters.

Let’s go through the settings!

 

Some of our investigated hypotheses are actually true, and some are false. As a first parameter, we have to estimate what proportion of our investigated hypotheses is actually true.

Now, what is a good setting for the a priori proportion of true hypotheses? It’s certainly not near 100% – in this case only trivial and obvious research questions would be investigated, which is obviously not the case. On the other hand, the rate can definitely drop close to zero. For example, in pharmaceutical drug development “only one in every 5,000 compounds that makes it through lead development to the stage of pre-clinical development becomes an approved drug” (Wikipedia). Here, only 0.02% of all investigated hypotheses are true.

Furthermore, the number depends on the field – some fields are highly speculative and risky (i.e., they have a low prior probability), some fields are more cumulative and work mostly on variations of established effects (i.e., in these fields a higher prior probability can be expected).

But given that many journals in psychology exert a selection pressure towards novel, surprising, and counter-intuitive results (which a priori have a low probability of being true), I guess that the proportion is typically lower than 50%. My personal grand average gut estimate is around 25%.

(Also see this comment and this reply for a discussion about this estimate).

 

The second parameter is the α level. That’s easy: the default α level usually is 5%, but you can play with the impact of stricter levels on the FDR!

 

The third parameter is the statistical power of the test. The average power in psychology has been estimated at 35% (Bakker, van Dijk, & Wicherts, 2012). A median estimate for neuroscience is only 21% (Button et al., 2013). Even worse, both estimates can be expected to be inflated, as they are based on the average published effect size, which almost certainly is overestimated due to the significance filter (Ioannidis, 2008). Hence, the average true power is most likely smaller. Let’s assume an estimate of 25%.

 

The fourth and final parameter adds some realism to the computations: the proportion of p-hacked studies. We know that researchers employ “researcher degrees of freedom”, a.k.a. questionable research practices, to optimize their p-values and to push “nearly significant” results across the magic boundary. How many reported significant p-values would not have been significant without p-hacking? That is hard to tell, and probably also field-dependent. Let’s assume that 15% of all studies are p-hacked, intentionally or unintentionally.

When these values are defined, the app computes the FDR and PPV and shows a visualization:

[App screenshot: FDR and PPV visualization for these settings]

With these settings, only 39% of all significant p-values indicate a true effect!
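
As a rough cross-check of that 39%, here is a small Python sketch of the Ioannidis (2005) PPV formula with a bias term u standing in for the proportion of p-hacked studies (the app’s exact implementation may differ in detail, but with the settings above this formula lands at about the same value):

    # PPV with bias, following Ioannidis (2005); u = proportion of "biased"
    # (here: p-hacked) studies. A sketch of the formula, not the app's code.
    def ppv_with_bias(prior, power, alpha, u):
        R = prior / (1 - prior)        # pre-study odds of a true effect
        beta = 1 - power
        num = (1 - beta) * R + u * beta * R
        den = R + alpha - beta * R + u - u * alpha + u * beta * R
        return num / den

    ppv = ppv_with_bias(prior=0.25, power=0.25, alpha=0.05, u=0.15)
    print(f"PPV = {ppv:.2f}")          # about 0.39

With u = 0, this formula reduces to the plain PPV formula from above.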

Wait – what was the success rate of the Reproducibility Project: Psychology? 36% of replication projects found a significant effect in a direct replication attempt. Just a coincidence? Maybe. Maybe not.

The formulas used to compute the FDR and PPV are based on Ioannidis (2005: “Why most published research findings are false”). A related, but different, approach was proposed by David Colquhoun in his paper “An investigation of the false discovery rate and the misinterpretation of p-values” [open access]. He asks: “How should one interpret the observation of, say, p = 0.047 in a single experiment?”. The Ioannidis approach implemented in the app, in contrast, asks: “What is the FDR in a set of studies with p ≤ .05 and a certain power, etc.?”. Both approaches make sense, but answer different questions.

Other resources about PPV and FDR of p-values

