German Psychological Society fully embraces open data, gives detailed recommendations

tl;dr: The German Psychological Society has developed and adopted new recommendations for data sharing that fully embrace openness, transparency, and scientific integrity. The key message is that raw data are an essential part of an empirical publication and must be openly shared. The recommendations also give very practical advice on how to implement these values, addressing questions such as “When should data providers be asked to be co-authors in a data reuse project?” and “How should participant privacy be handled?”.

In the last year, the discussion in our field has moved from “Do we have a replication crisis?” towards “Yes, we have a problem; what can and should we change, and how can we implement it?”. I think we need top-down changes at the institutional level, combined with bottom-up approaches such as local Open Science Initiatives. Here, I want to present one big institutional change concerning open data.

Funders Start Requiring Open Data: Recommendations for Psychology

The German Research Foundation (DFG), the largest public funder of research in Germany, has updated its policy on data sharing, which can be summarized in a single sentence: Publicly funded research, including the raw data, belongs to the public. Consequently, all research data from a DFG-funded project should be made open immediately, or at the latest a few months after the research project has been completed (see [1] and [2]). Furthermore, the DFG asked all scientific disciplines to develop more specific guidelines that implement these principles in their respective fields.

The German Psychological Society (Deutsche Gesellschaft für Psychologie, DGPs) set up a working group (Andrea Abele-Brehm, Mario Gollwitzer, and me) that worked for one year on such recommendations for psychology.

In developing the document, we tried to be very inclusive and to harvest the wisdom of the crowd. A first draft (February 2016) was discussed for 6 weeks in an internet forum where all DGPs members could comment. Based on this discussion (and many additional personal conversations), a revised version was circulated and discussed in person with a smaller group of interested members (July 2016) and a representative of the DFG. Furthermore, we were in regular contact with the “Fachkollegium Psychologie” of the DFG (i.e., the panel that makes funding decisions in psychology; its members have since changed on a rotational basis). Finally, the chairpersons of all DGPs sections and the representatives of the young members had another opportunity to comment. On September 17, the recommendations were officially adopted by the society.

I think this thorough and iterative process was very important for two reasons. First, it definitely improved the quality of the document, because we got so many great ideas and comments from the members, ironing out some inconsistencies and covering some edge cases. Second, it was important for getting people on board. As the new open data guideline of the DFG brings a major change to the way we do our everyday scientific work, we wanted to talk to and convince as many people as possible from the early stages on. Of course, not every single one of the >4,000 members is equally convinced, but the topic now receives considerable attention in the society.

Hence, one focus was consensus and inclusivity. At the same time, our goal was to develop bold and forward-looking guidelines that really address the current challenges of the field, rather than settling on the lowest common denominator. To achieve this, we had to find a balance between several, sometimes conflicting, values.

A Fine Balance of Values

Research transparency ⬌ privacy rights. A first peculiarity of psychology is that we do not investigate rocks or electrons, but human participants who have privacy rights. In a nutshell, privacy rights have to be respected, and in case of doubt they win over openness. But if data can be properly anonymized, there is no problem with open sharing; one possibility for sharing non-anonymous research data is “scientific use files”, where access is restricted to scientists. If data cannot be shared for privacy (or other) reasons, this has to be made transparent in the paper. (Hence, the recommendations are PRO compatible.) The recommendations give clear guidance on privacy issues and practical advice, for example, on how to word your informed consent form so that you are actually able to share the data afterwards.

Data reuse ⬌ right of first usage. A second balance concerns optimal reuse of data on the one hand and the right of first usage of the original authors on the other. During the discussion phase, several people expressed the fear of “research parasites” who “steal” the data from hard-working scientists. A very common gut feeling is: “The data belong to me”. But, as we are publicly funded researchers running publicly funded research projects, the answer is quite clear: the data belong to the public. There is no copyright on raw data. On the other hand, we also need incentives for original researchers to generate data in the first place. Data generators of course have the right of first usage, and the recommendations allow this right to be extended by an embargo of up to 5 additional years (see below). But at the end of the day, publicly funded research data belong to the public, and everybody can reuse them. If data are open by default, a guideline also must discuss and define how data reuse should be handled. Our recommendations suggest in which cases a co-authorship should be offered to the data providers and in which cases this is not necessary.

Verification ⬌ fair treatment of original authors. Finally, research should be verifiable, while the original authors are treated fairly. The guidelines say that whenever a reanalysis of a data set is going to be published (and that also includes blog posts or presentations), the original authors have to be informed about it. They cannot prevent the reanalysis, but they get the chance to react to it.

Two types of data sharing

We distinguish two types of data sharing:

Type 1 data sharing means that all raw data necessary to reproduce the results reported in a paper should be openly shared. Hence, this can be just a subset of all available variables in the full data set: the subset needed to reproduce these specific results. The primary data are an essential part of an empirical publication, and a paper without them simply is not complete.

Type 2 data sharing refers to the release of the full data set of a funded research project. The DGPs recommendations state that after the end of a DFG-funded project all data, even data which have not yet been used for publications, should be made open. Unpublished null results, or additional, exploratory variables, now have the chance to see the light of day and to be reused by other researchers. Experience shows that not all planned papers have been written by the official end date of a project. Therefore, the recommendations allow the right of first usage to be extended by an embargo period of up to 5 years, during which the (so far unpublished) data do not have to be made public. The embargo option only applies to data that have not yet been used for publications. Hence, an embargo typically cannot be applied to Type 1 data sharing.

Summary & the Next Steps

To summarize, I think these recommendations are the most complete, practical, and specific guidelines for data sharing in psychology to date. (Of course, many more details can be found in the recommendations themselves.) They fully embrace openness, transparency, and scientific integrity. Furthermore, they do not proclaim detached ethical principles, but give very practical guidance on how to actually implement data sharing in psychology.

What are the next steps? The president of the DGPs, Prof. Conny Antoni, and the secretary, Prof. Mario Gollwitzer, have already contacted other psychological societies (APA, APS, EAPP, EASP, EFPA, SIPS, SESP, SPSP) and introduced our recommendations to them. The Board of Scientific Affairs of EFPA, the European Federation of Psychologists’ Associations, has already expressed its appreciation of the recommendations and will post them on its website. Furthermore, it will discuss them in an invited symposium at the European Congress of Psychology in Amsterdam this year. A mid-term goal will also be to check compatibility with other existing guidelines and to think about harmonizing the various guidelines within psychology.

As other scientific disciplines in Germany are also working on their specific implementations of the DFG guidelines, it will be interesting to see whether there are common threads (although there certainly will be persisting, and necessary, differences between the requirements of the fields). Finally, we are in contact with the new Fachkollegium at the DFG to explore how the recommendations can and should be used in the process of making funding decisions.

If your field is also working on such recommendations or guidelines, don’t hesitate to contact us.

Download the Recommendations

Schönbrodt, F., Gollwitzer, M., & Abele-Brehm, A. (2017). Der Umgang mit Forschungsdaten im Fach Psychologie: Konkretisierung der DFG-Leitlinien. Psychologische Rundschau, 68, 20–35. doi:10.1026/0033-3042/a000341. [PDF German][PDF English]

(English translation by Malte Elson, Johannes Breuer, and Zoe Magraw-Mickelson)


Introducing the p-hacker app: Train your expert p-hacking skills

[This is a guest post by Ned Bicare, PhD]
My dear fellow scientists!
“If you torture the data long enough, it will confess.”
This aphorism, attributed to Ronald Coase, has sometimes been used in a disrespectful manner, as if it were wrong to do creative data analysis.
In fact, the art of creative data analysis has suffered despicable attacks over the last few years. A small but annoyingly persistent group of second-stringers tries to denigrate our scientific achievements. They drag psychological science through the mire.
These people propagate stupid method repetitions; and what was once one of the supreme disciplines of scientific investigation, a creative data analysis of a data set, has been reduced to the empty-headed execution of a step-by-step pre-registered analysis plan. (Come on: if I lay out the full analysis plan in a pre-registration, even an undergrad student can do the final analysis, right? Is that really the high-level scientific work we were trained so hard for?)
They broadcast at an annoying frequency that p-hacking leads to more significant results, and that researchers who use p-hacking have a higher chance of getting things published.
What are the consequences of these findings? The answer is clear. Everybody should be equipped with these powerful tools of research enhancement!

The art of creative data analysis

Some researchers describe a performance-oriented data analysis as “data-dependent analysis”. We go one step further, and call this technique data-optimal analysis (DOA), as our goal is to produce the optimal, most significant outcome from a data set.
I have developed an online app that allows you to practice creative data analysis and to polish your p-values. It is primarily aimed at young researchers who do not yet have our level of expertise, but I guess even old hands might learn one or two new tricks! It is called “The p-hacker” (please note that ‘hacker’ is meant in a very positive way here; you should think of the cool hackers who fight for world peace). You can use the app in teaching, or to practice p-hacking yourself.
Please test the app, and give me feedback! You can also send it to colleagues: http://shinyapps.org/apps/p-hacker
  Start the p-hacker app!
The full R code for this Shiny app is on GitHub.

Train your p-hacking skills: Introducing the p-hacker app

Here’s a quick walkthrough of the app. Please also see the quick manual at the top of the app for more details.
First, you have to run an initial study in the “New study” tab:
[Screenshot: the “New study” tab]
Once you have run your first study, inspect the results in the middle pane. Let’s take a look at our results, which are quite promising:
[Screenshot: results of the first study in the middle pane]
After excluding this obvious outlier, your first study is already a success! Click on “Save” next to your significant result to add the study to your study stack in the right panel:
[Screenshot: saving the significant result to the study stack]
Sometimes outlier exclusion is not enough to improve your result.
Now comes the magic. Click on the “Now: p-hack!” tab – this gives you all the great tools to improve your current study. Here you can fully utilize your data analytic skills and creativity.
In the following example, we could not get a significant result by outlier exclusion alone. But after adding 10 participants (in two batches of 5), controlling for age and gender, and focusing on the variable that worked best – voilà!
[Screenshot: the “Now: p-hack!” tab after adding participants, controlling for age and gender, and selecting the best DV]
Do you see how easy it is to craft a significant study?
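If you prefer plain R over clicking, here is a minimal simulation sketch of just two of these tools: testing several DVs and topping up the sample in small batches until something becomes significant. This is not the app’s actual code (that is on GitHub); the helper function p_hacker_once() and all settings are made up for illustration, and the simulated data contain no true effect at all.

set.seed(42)

# One "study" under the null: two groups, several DVs, no true effect anywhere.
# We p-hack by (a) testing every DV and keeping the best p-value, and
# (b) adding small batches of participants until p < .05 or the money runs out.
p_hacker_once <- function(n_start = 20, n_max = 30, batch = 5, n_dvs = 3) {
  n     <- n_start                                   # sample size per group
  group <- rep(0:1, each = n)
  dvs   <- matrix(rnorm(2 * n * n_dvs), ncol = n_dvs)
  repeat {
    ps <- apply(dvs, 2, function(y) t.test(y ~ group)$p.value)
    if (min(ps) < .05 || n >= n_max) return(min(ps))
    group <- c(group, rep(0:1, each = batch))        # "collect" another batch
    dvs   <- rbind(dvs, matrix(rnorm(2 * batch * n_dvs), ncol = n_dvs))
    n     <- n + batch
  }
}

p_values <- replicate(2000, p_hacker_once())
mean(p_values < .05)   # proportion of "significant" studies despite no effect

Run it yourself: the long-run proportion of “successful” studies comes out well above the nominal 5%, which is exactly the kind of research enhancement we are after.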
Now it is important to show even more productivity: go for the next conceptual replication (i.e., go back to Step 1 and collect a new sample, with a new manipulation and a new DV). Whenever a study reaches significance, click on the “Save” button next to the significant DV, and the study is saved to your stack, awaiting additional conceptual replications that show the robustness of the effect.
Many journals require multiple studies. Four to six studies should make a compelling case for your subtle, counterintuitive, and shocking effects:
[Screenshot: a study stack with several saved, significant studies]
Honor to whom honor is due: Find the best outlet for your achievements!
My friends, let’s stand together and Make Psychological Science Great Again! I really hope that the p-hacker app can play its part in bringing psychological science back to its old days of glory.
Start the p-hacker app!
Best regards,
Ned Bicare, PhD
 
PS: A similar app can be found on FiveThirtyEight: Hack Your Way To Scientific Glory

Best Paper Award for the “Evolution of correlations”

I am pleased to announce that Marco Perugini and I have received the 2015 Best Paper Award from the Association for Research in Personality (ARP) for our paper:

Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609–612. doi:10.1016/j.jrp.2013.05.009
[Animation: the evolution of a correlation as sample size increases]
The TL;DR summary of the paper: As sample size increases, correlations wiggle up and down. In typical situations, stable estimates can be expected when n approaches 250. See this blog post for some more information and a video (Or: read the paper. It’s short.)
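To get a quick feeling for this wiggling without downloading anything, here is a small illustrative R sketch (not the simulation code from the paper, which is on the OSF). It draws a single bivariate normal sample with an assumed true correlation of .3 and plots how the sample correlation evolves as more observations come in; the true value and the n range are chosen arbitrarily for the demo.

set.seed(1)
rho   <- 0.3     # assumed true correlation (illustrative only)
n_max <- 1000

# bivariate normal data with population correlation rho
x <- rnorm(n_max)
y <- rho * x + sqrt(1 - rho^2) * rnorm(n_max)

# correlation computed on the first n observations, for growing n
ns     <- 20:n_max
r_traj <- sapply(ns, function(n) cor(x[1:n], y[1:n]))

plot(ns, r_traj, type = "l",
     xlab = "Sample size n", ylab = "Sample correlation r",
     main = "One evolution of a correlation")
abline(h = rho, lty = 2)   # the true value the trajectory settles around

Rerun it with different seeds: early on the trajectory jumps around considerably and typically calms down only in the low hundreds, which is the intuition behind the n ≈ 250 rule of thumb.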
Interestingly (and in contrast to all of my other papers …), the paper has been cited not only in psychology, but also in medical chemistry, geophysical research, atmospheric physics, chronobiology, building research, and, most importantly, in the Indian Journal of Plant Breeding. Amazing.
And the best thing is: The paper is open access, and all simulation code and data are open on Open Science Framework. Use it and run your own simulations!