Honoured to receive the Leamer-Rosenthal Prize

I’m honoured that the Berkeley Initiative for Transparency in the Social Sciences (BITSS) chose me for one of the 2016 Leamer-Rosenthal Prizes for Open Social Science! This award comes with a prize of $10,000 and “recognizes important contributions by individuals in the open science movement”. For my contributions to Open Science, see this website. For more details about the prize, which is generously funded by the John Templeton Foundation, see here.

BITSS has become one of the central hubs of the global open science movement and does a great job providing open educational resources (e.g., “Tools and Resources for Data Curation” or “How to Write a Pre-Analysis Plan”) and grants, running its Catalyst program, and hosting its annual meeting in San Francisco.

Other recipients of the prize were Eric-Jan Wagenmakers, Lorena Barba, Zacharie Tsala Dimbuene, Abel Brodeur, Elaine Toomey, Graeme Blair, Beth Baribault, Michèle Nuijten and Sacha Epskamp.

I am looking optimistically toward a future of credible, reproducible, and transparent research. Stay tuned for news from the work of our department’s Open Science Committee at LMU Munich!


Open Science and research quality at the German psychology conference (DGPs congress in Leipzig)

From 17th to 22nd September, the 50th anniversary congress of the German Psychological Society (DGPs) takes place in Leipzig. At previous conferences in Germany over the last two or three years, the topic of the credibility crisis and research transparency was sometimes covered and sometimes completely ignored.


I am therefore quite happy that this topic now has a prominent place at the current conference. Here’s a list of some talks and events focusing on Open Science, research transparency, and what the future of science could look like – see you there!


Sunday, Sep 18: Pre-conference workshop “How to do Open Science: Chancen, Möglichkeiten, Standards” [i.e., opportunities, possibilities, standards] (Susann Fiedler, Kai Jonas, Erich Weichselgartner)


Sunday, Sep 18: Pre-conference workshop “Dos and Don’ts of data analysis: Lessons from the replication crisis” (Felix Schönbrodt)


Tuesday, Sep 20, 10-12: Invited symposium “Reproducibility and trust in psychological science“, chaired by Jelte Wicherts (Tilburg University)

From the abstract:

In this symposium we discuss issues related to reproducibility and trust in psychological science. In the first talk, Jelte Wicherts will present some empirical results from meta-science that perhaps lower the trust in psychological science. Next, Coosje Veldkamp will discuss results bearing on actual public trust in psychological science and in psychologists from an international perspective. After that, Felix Schönbrodt and Chris Chambers will present innovations that could strengthen reproducibility in psychology. Felix Schönbrodt will present Sequential Bayes Factors as a novel method to collect and analyze psychological data and Chris Chambers will discuss Registered Reports as a means to prevent p-hacking and publication bias. We end with a general discussion.


  • Reproducibility problems in psychology: what would Wundt think? (Jelte Wicherts)
  • Trust in psychology and psychologists (Coosje Veldkamp)
  • Never underpowered again: Sequential Bayes Factors guarantee compelling evidence (Felix Schönbrodt)
  • The Registered Reports project: Three years on (Chris Chambers)


For details on the talks, see here.
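If you wonder how the Sequential Bayes Factors design mentioned in the abstract works in practice, here is a minimal R sketch (my illustration, not the speakers’ code; the batch size, the evidence threshold, and the simulated effect of d = 0.4 are arbitrary choices): data are collected in small batches, and sampling stops as soon as the default Bayes factor signals compelling evidence in either direction.

```r
# Sequential Bayes Factors (SBF), minimal sketch:
# add participants in small batches and stop once the Bayes factor
# crosses an evidence threshold in either direction.
library(BayesFactor)
set.seed(42)

batch     <- 5     # participants added per group and step
threshold <- 10    # stop at BF10 > 10 or BF10 < 1/10
n_max     <- 200   # practical ceiling on the sample size

# start with 20 participants per group; simulated true effect d = 0.4
x <- rnorm(20, mean = 0.4)
y <- rnorm(20, mean = 0)

repeat {
  bf <- extractBF(ttestBF(x = x, y = y))$bf  # default JZS Bayes factor
  cat(sprintf("n per group = %3d, BF10 = %.2f\n", length(x), bf))
  if (bf > threshold || bf < 1 / threshold || length(x) >= n_max) break
  x <- c(x, rnorm(batch, mean = 0.4))        # collect another batch
  y <- c(y, rnorm(batch, mean = 0))
}
```

Unlike sequentially monitored p values, the Bayes factor can be inspected after every batch without invalidating its interpretation – that is the core appeal of the design.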


Tuesday, Sep 20, 13:30: Keynote by Brian Nosek: “Addressing the Reproducibility of Psychological Science”

The currency of science is publishing.  Producing novel, positive, and clean results maximizes the likelihood of publishing success because those are the best kind of results.  There are multiple ways to produce such results: (1) be a genius, (2) be lucky, (3) be patient, or (4) employ flexible analytic and selective reporting practices to manufacture beauty.  In a competitive marketplace with minimal accountability, it is hard to resist (4).  But, there is a way.  With results, beauty is contingent on what is known about their origin.  With methodology, if it looks beautiful, it is beautiful. The only way to be rewarded for something other than the results is to make transparent how they were obtained.  With openness, I won’t stop aiming for beautiful papers, but when I get them, it will be clear that I earned them.


Tuesday, Sep 20, 14:30: Panel discussion: “Assuring the Quality of Psychological Research”

Moderation: Manfred Schmitt
Discussants: Manfred Schmitt, Andrea Abele-Brehm, Klaus Fiedler, Kai Jonas, Brian Nosek, Felix Schönbrodt, Rolf Ulrich, Jelte Wicherts

When replicated, many findings seem to either diminish in magnitude or disappear altogether, as, for instance, recently shown in the Reproducibility Project: Psychology. Several reasons for false-positive results in psychology have been identified (e.g., p-hacking, selective reporting, underpowered studies) and call for reforms across the whole range of academic practices. These range from (1) new journal policies promoting an open research culture to (2) hiring, tenure and funding criteria that reward credibility and replicability rather than sexiness and quantity, to (3) actions for increasing transparent and open research practices within and across individual labs. Following Brian Nosek’s (Center for Open Science) keynote, titled “Addressing the Reproducibility of Psychological Science”, this panel discussion aims to explore the various ways in which our field may take advantage of the current debate. That is, the focus of the discussion will be on effective ways of improving the quality of psychological research in the future. Seven invited discussants will provide insights into different current activities aimed at improving scientific practice and will discuss their potential. The audience will be invited to contribute to the discussion.

On the panel, I will present the German Psychological Society’s new guidelines for data management. They will be published soon, but here’s the gist: open by default (raw data are an essential part of a publication), with exceptions to be justified. Furthermore, we define norms for data reuse. Stay tuned on this blog for more details!


Tuesday, Sep 20, 18:00: Position talk (“Positionsreferat”) – Towards Evidence-Based Peer Review (Malte Elson)


Wednesday, Sep 21, 10:45: Beyond Registered Experiments – The Foundations of Cumulative Empirical Research

  • What Does it Mean to Replicate? (Christoph Klauer)
  • About the Recognizable Reality in Measures of Latent Psychological Attributes (Florian G. Kaiser)
  • The Stochastic Theory of Causal Effects and its Implications for Assuring the Quality of Quasi-Experimental Research (Rolf Steyer)
  • Diagnosticity and A-Priori Theorizing (Klaus Fiedler)

Wednesday, Sep 21, 14:45: Reproducible Paper Writing (Sebastian Sauer)

[…] In any case, a reanalysis of the data must result in similar or identical results.
[…] In this talk, we present a method that is well suited for writing reproducible academic papers. This method is a combination of LaTeX, R, knitr, Git, and Pandoc. These software tools are robust, well established, and no more than reasonably complex. Additional approaches, such as using word processors (MS Word), Markdown, or online collaborative writing tools (Authorea), are presented briefly. The presentation is based on a practical software demonstration. A GitHub repository is provided for easy reproducibility.
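To give a flavour of the approach, here is a complete toy example in the knitr/R Markdown flavour of this stack (my own sketch, not material from the talk; the file raw_data.csv and the columns dv and group are made up). Every reported number is computed from the raw data at compile time:

````markdown
---
title: "A fully reproducible mini-paper"
output: pdf_document
---

```{r analysis, echo = FALSE}
# hypothetical raw data with a numeric column `dv` and a factor `group`
dat <- read.csv("raw_data.csv")
fit <- t.test(dv ~ group, data = dat)
```

The two groups differed,
t(`r round(unname(fit$parameter), 1)`) = `r round(unname(fit$statistic), 2)`,
p = `r format.pval(fit$p.value, digits = 2)`.
````

Put the whole folder under Git and any reader can regenerate the paper – including every statistic – from the raw data with a single compile, which is exactly the reproducibility requirement quoted above.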



These are not all sessions on the topic – go to http://www.dgpskongress.de/frontend/index.php# and search for “ASSURING THE QUALITY OF PSYCHOLOGICAL RESEARCH” to see all sessions associated with this topic. Unfortunately, the congress CMS does not allow direct linking to sessions, so you have to search for them yourself.


Want to meet me at the conference? Write me an email, or send me a PM on Twitter.


Introducing the p-hacker app: Train your expert p-hacking skills

[This is a guest post by Ned Bicare, PhD]
  Start the p-hacker app!
My dear fellow scientists!
“If you torture the data long enough, it will confess.”
This aphorism, attributed to Ronald Coase, has sometimes been used in a disrespectful manner, as if it were wrong to do creative data analysis.
In fact, the art of creative data analysis has suffered despicable attacks in recent years. A small but annoyingly persistent group of second-stringers is trying to denigrate our scientific achievements. They drag psychological science through the mire.
These people propagate stupid method repetitions; and what was once one of the supreme disciplines of scientific investigation – the creative analysis of a data set – has been crippled into the empty-headed execution of a step-by-step pre-registered analysis plan. (Come on: if I lay out the full analysis plan in a pre-registration, even an undergrad student can do the final analysis, right? Is that really the high-level scientific work we trained so hard for?)
They broadcast at an annoying frequency that p-hacking leads to more significant results, and that researchers who use p-hacking have better chances of getting things published.
What is the consequence of these findings? The answer is clear: everybody should be equipped with these powerful tools of research enhancement!

The art of creative data analysis

Some researchers describe a performance-oriented data analysis as “data-dependent analysis”. We go one step further, and call this technique data-optimal analysis (DOA), as our goal is to produce the optimal, most significant outcome from a data set.
I developed an online app that lets you practice creative data analysis and polish your p-values. It is primarily aimed at young researchers who do not yet have our level of expertise, but I guess even old hands might learn one or two new tricks! It’s called “The p-hacker” (please note that ‘hacker’ is meant in a very positive way here; think of the cool hackers who fight for world peace). You can use the app in teaching, or to practice p-hacking yourself.
Please test the app, and give me feedback! You can also send it to colleagues: http://shinyapps.org/apps/p-hacker
  Start the p-hacker app!
The full R code for this Shiny app is on GitHub.
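Skeptics are welcome to check the power of DOA for themselves. The following small simulation (my sketch, not part of the app’s code) assumes two DVs correlated at r = .5, a true effect of exactly zero, and a researcher who simply reports whichever DV ‘worked’:

```r
# False-positive rate when you measure two correlated DVs (r = .5)
# under a true effect of exactly zero and report the better one.
library(MASS)  # for mvrnorm()
set.seed(1)

sim_one <- function(n = 20, r = 0.5) {
  Sigma <- matrix(c(1, r, r, 1), 2, 2)
  g1 <- mvrnorm(n, mu = c(0, 0), Sigma = Sigma)  # control group
  g2 <- mvrnorm(n, mu = c(0, 0), Sigma = Sigma)  # treatment group
  p1 <- t.test(g1[, 1], g2[, 1])$p.value
  p2 <- t.test(g1[, 2], g2[, 2])$p.value
  min(p1, p2)  # report whichever DV looks better
}

mean(replicate(10000, sim_one()) < .05)
# comes out well above the nominal 5% – and with more DVs it only gets "better"
```

Stack outlier exclusion, covariates, and optional stopping on top, and the false-positive rate climbs much further – which is precisely what the app lets you experience hands-on.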

Train your p-hacking skills: Introducing the p-hacker app

Here’s a quick walkthrough of the app. Please see also the quick manual at the top of the app for more details.
First, you have to run an initial study in the “New study” tab:
[Screenshot: the “New study” tab]
Once you have run your first study, inspect the results in the middle pane. Let’s take a look at our results, which are quite promising:
[Screenshot: the results panel, showing an obvious outlier]
After excluding this obvious outlier, your first study is already a success! Click on “Save” next to your significant result to add the study to your study stack in the right panel:
[Screenshot: saving the significant result to the study stack]
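For the curious, here is what this outlier-exclusion step boils down to in plain R (a hypothetical sketch; the app’s actual exclusion rules may differ): drop the one observation that weakens the effect most, then test again.

```r
# Outlier exclusion as a research-enhancement tool: remove the one
# observation that weakens the group difference most, then re-test.
set.seed(3)
treatment <- rnorm(25, mean = 0.3)
control   <- rnorm(25, mean = 0)

t.test(treatment, control)$p.value        # p before "cleaning"

treatment_clean <- treatment[-which.min(treatment)]  # drop the lowest scorer
t.test(treatment_clean, control)$p.value  # p after – usually smaller
```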
Sometimes outlier exclusion is not enough to improve your result.
Now comes the magic. Click on the “Now: p-hack!” tab – this gives you all the great tools to improve your current study. Here you can fully utilize your data analytic skills and creativity.
In the following example, we could not get a significant result by outlier exclusion alone. But after adding 10 participants (in two batches of 5), controlling for age and gender, and focusing on the variable that worked best – voilà!
[Screenshot: the “Now: p-hack!” tab after adding participants and controlling for age and gender]
Do you see how easy it is to craft a significant study?
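The killjoys call the ‘add 10 participants in two batches of 5’ move optional stopping. A quick simulation (assuming a true effect of zero; the starting sample, batch size, and ceiling are arbitrary choices) shows how well it performs as a research-enhancement tool:

```r
# Optional stopping: test after every batch of 5 per group and stop
# as soon as p < .05. The true effect is zero, so every "success"
# is a false positive.
set.seed(2)

sim_optional_stopping <- function(n_start = 20, batch = 5, n_max = 50) {
  x <- rnorm(n_start); y <- rnorm(n_start)
  repeat {
    if (t.test(x, y)$p.value < .05) return(TRUE)   # significant – publish!
    if (length(x) >= n_max) return(FALSE)          # give up (file drawer)
    x <- c(x, rnorm(batch)); y <- c(y, rnorm(batch))
  }
}

mean(replicate(10000, sim_optional_stopping()))
# well above the nominal 5% false-positive rate
```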
Now it is important to show even more productivity: go for the next conceptual replication (i.e., go back to step 1 and collect a new sample with a new manipulation and a new DV). Whenever a study reaches significance, click on the “Save” button next to the significant DV and the study is saved to your stack, awaiting additional conceptual replications that show the robustness of the effect.
Many journals require multiple studies. Four to six studies should make a compelling case for your subtle, counterintuitive, and shocking effects:
[Screenshot: a study stack with several saved conceptual replications]
Honor to whom honor is due: Find the best outlet for your achievements!
My friends, let’s stand together and Make Psychological Science Great Again! I really hope that the p-hacker app can play its part in bringing psychological science back to its old days of glory.
Start the p-hacker app!
Best regards,
Ned Bicare, PhD
 
PS: A similar app can be found on FiveThirtyEight: Hack Your Way To Scientific Glory