Assessing the evidential value of journals with p-curve, R-index, TIVA, etc: A comment on Motyl et al. (2017) with new data

Recently, Matt Motyl et al. (2017) posted a pre-print paper in which they contrasted the evidential value of several journals in two time periods (2003-2004 vs. 2013-2014). The paper sparked a lot of discussion in Facebook groups [1][2], in blog posts commenting on the paper (DataColada, Dr. R), and in a reply from the authors. Some of the discussion was about which effect sizes should have been coded for their analyses. As it turns out, several working groups had similar ideas for checking the evidential value of journals (e.g., Leif Nelson pre-registered a very similar project here). In an act of precognition, we did a time-reversed conceptual replication of Motyl et al.’s study back in 2015:
Anna Bittner, a former student of LMU Munich, wrote her bachelor thesis on a systematic comparison of two journals in social psychology – one with a very high impact factor (Journal of Personality and Social Psychology (JPSP), IF = 5.0 at that time), another with a very low IF (Journal of Applied Social Psychology (JASP), IF = 0.8 at that time – not to be confused with the great statistical software JASP). By coincidence (or precognition), we also chose the year 2013 for our analyses. Read her full blog post about the study below.
In a Facebook discussion about the Motyl et al. paper, Michael Inzlicht wrote: “But, is there some way to systematically test a small sample of your p-values? […] Can you randomly sample 1-2% of your p-values, checking for errors and then calculating your error rate? I have no doubt there will be some errors, but the question is what the rate is.” As it turns out, we also did our analysis on the 2013 volume of JPSP. Anna followed the guidelines by Simonsohn et al.: she selected only one effect size per sample (the focal effect size) and wrote a detailed disclosure table. Hence, we do not think that the critique in the DataColada blog post applies to our coding scheme. The disclosure table includes quotes of the verbal hypotheses, quotes from the results section, and the test statistic that was selected, so you can directly validate the choice of effect sizes. Furthermore, all test statistics can be entered into the p-checker app, which gives you TIVA, R-Index, p-curve, and more diagnostic tests (grab the JPSP test statistics & JASP test statistics).

Is our field “Rotten to the core”? (CC-BY by Andy Hay; https://www.flickr.com/photos/andyhay/8196333166)

Now we can compare two completely independently coded p-curve disclosure tables covering a large set of articles. Any disagreement, of course, does not mean that one party is right and the other is wrong. But it will be interesting to see the degree of agreement.
Here is Anna’s blog post about her own study. Anna Bittner is now doing her Master of Finance at the University of Melbourne.


Don’t judge a journal by its cover (or its impact factor)

by Anna Bittner and Felix Schönbrodt

The recent findings of staggeringly low replicability in psychology have come as a shock to many and have led to a discussion on how to ensure that better research practices are employed in the future. To this end, it is necessary to find ways to efficiently distinguish good research from bad, and research that contains evidential value from research that does not.
In the past, the impact factor (IF) has often been the favored indicator of a journal’s quality. To check whether a journal with a higher IF does indeed publish the “better” research in terms of evidential value, we compared two academic journals from the domain of social psychology: the Journal of Personality and Social Psychology (JPSP, Impact Factor = 5.031) and the Journal of Applied Social Psychology (JASP, Impact Factor = 0.79).
For this comparison, Anna analysed and carefully hand-coded all studies with hypothesis tests, starting in January 2013 and progressing chronologically until about 110 independent test statistics per journal were acquired. See the full report (in German) in Anna’s bachelor thesis. These test statistics were fed into the p-checker app (Version 0.4; Schönbrodt, 2015), which analyzed them with the tools p-curve, TIVA, and R-Index.
All material and raw data is available on OSF: https://osf.io/pgc86/
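To give a concrete impression of what that input looks like, here is a minimal sketch (in Python, not the R code behind p-checker) of how coded test statistics such as t(45) = 2.30 or F(1, 80) = 5.60 can be converted into the two-sided p-values that p-curve, TIVA, and R-Index operate on. The function names and example statistics are made up for illustration.

```python
from scipy import stats

def t_to_p(t, df):
    """Two-sided p-value for a t-statistic."""
    return 2 * stats.t.sf(abs(t), df)

def f_to_p(F, df1, df2):
    """p-value for an F-statistic (one-tailed by construction)."""
    return stats.f.sf(F, df1, df2)

# hypothetical coded test statistics: "t(45) = 2.30" and "F(1, 80) = 5.60"
p_values = [t_to_p(2.30, 45), f_to_p(5.60, 1, 80)]
print(p_values)  # roughly [0.026, 0.020]
```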

P-Curve

P-curve (Simonsohn, Nelson, & Simmons, 2014) takes a closer look at all significant p-values and plots them against their relative frequency. This results in a curve that will ideally have many very small p-values (< .025) and much fewer larger p-values (> .025). Another possible shape is a flat curve, which occurs when researchers only investigate null effects and selectively publish those studies that obtained a p-value < .05 by chance: under the null hypothesis, each individual p-value is equally likely, so the distribution of significant p-values is uniform. P-curve allows one to test whether the empirical curve is flatter than the p-curve that would be expected at any chosen power.
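To make this logic concrete, here is a minimal sketch of a right-skew check in the spirit of p-curve. The published p-curve test combines pp-values with a Stouffer method; the binomial shortcut below (is the share of significant p-values below .025 larger than 50%?) is only an illustration, and the example p-values are made up.

```python
from scipy import stats

def right_skew_check(p_values, alpha=0.05):
    """Binomial shortcut in the spirit of p-curve: under H0 (no true effect),
    significant p-values are uniform on (0, alpha), so only about half of them
    should fall below alpha/2. More than that suggests a right-skewed p-curve."""
    sig = [p for p in p_values if p < alpha]
    n_small = sum(p < alpha / 2 for p in sig)
    test = stats.binomtest(n_small, n=len(sig), p=0.5, alternative="greater")
    return n_small, len(sig), test.pvalue

# hypothetical set of significant p-values
print(right_skew_check([0.001, 0.003, 0.004, 0.012, 0.018, 0.020, 0.031, 0.049]))
```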
Please note that p-curve assumes homogeneity (McShane, Böckenholt, & Hansen, 2016). Lumping together diverse studies from a whole journal, in contrast, guarantees heterogeneity. Hence, trying to recover the true underlying effect size/power is of limited usefulness here.
The p-curves of both JPSP and JASP were significantly right-skewed, which suggests that neither journal’s output can be explained by pure selective publication of null effects that became significant by chance. JASP’s curve, however, had a much stronger right skew, indicating stronger evidential value.

TIVA

TIVA (Schimmack, 2014a) tests for insufficient variance: the p-values are converted to z-scores, and if neither p-hacking nor publication bias were present, the variance of these z-scores should be at least 1. A value below 1 is seen as an indicator of publication bias and/or p-hacking. However, the variance can and will be much larger than 1 when studies with different sample sizes and effect sizes are included in the analysis (which was inevitably the case here). Hence, TIVA is a rather weak test of publication bias when heterogeneous studies are combined: publication bias and p-hacking reduce the variance, but heterogeneity and differing sample sizes can increase it to a value clearly above 1, even if all studies in the set are severely p-hacked.
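Here is a minimal sketch of how such a test can be computed (my reading of the procedure, assuming two-sided p-values as input; this is not the p-checker source code):

```python
import numpy as np
from scipy import stats

def tiva(p_values):
    """Test of Insufficient Variance (sketch): convert two-sided p-values to
    z-scores and test whether their variance is significantly smaller than 1."""
    p = np.asarray(p_values, dtype=float)
    z = stats.norm.isf(p / 2)              # z-score corresponding to each p-value
    k = len(z)
    var_z = np.var(z, ddof=1)              # sample variance of the z-scores
    chi2_stat = var_z * (k - 1)            # (k-1) * s^2 / sigma0^2, with sigma0^2 = 1
    p_left = stats.chi2.cdf(chi2_stat, df=k - 1)  # left tail: is the variance too small?
    return var_z, chi2_stat, p_left

# hypothetical p-values clustered just below .05, as expected under heavy selection
print(tiva([0.04, 0.03, 0.02, 0.045, 0.01, 0.035, 0.025, 0.015]))
```

As a sanity check, a variance of 6.03 across 113 p-values would give chi2(112) ≈ 675 under this formula, which matches the JASP result reported below up to rounding.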
As expected, neither JPSP nor JASP attained a significant TIVA result, which means that the variance was not significantly smaller than 1 for either journal. Descriptively, JASP had a higher variance of 6.03 (chi2(112) = 674, p = 1), compared to 1.09 (chi2(111) = 121, p = .759) for JPSP. Given the huge heterogeneity of the underlying studies, a TIVA variance close to 1 in JPSP signals a strong bias. This, again, is not very surprising: we already knew with certainty that our literature is plagued by huge publication bias.

The descriptive difference in TIVA variances can be due to less p-hacking, less publication bias, or more heterogeneity of effect sizes and sample sizes in JASP compared to JPSP. Hence, drawing firm conclusions from this numerical difference is difficult; but the much larger value in JASP can be seen as an indicator that the studies published there paint a more realistic picture.
(Note: The results here differ from the results reported in Anna’s bachelor thesis, as the underlying computation has been improved. p-checker now uses logged p-values, which allows for more precision with very small p-values; early versions of p-checker underestimated the variance when extremely low p-values were present.)
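The exact internals of p-checker are not shown here, but the numerical issue is easy to illustrate: converting a very small p-value to a z-score on the linear scale loses information, whereas working with logged p-values does not. The sketch below uses scipy.special.ndtri_exp, available in recent SciPy versions.

```python
import numpy as np
from scipy import stats, special

# A very small two-sided p-value, e.g. from a huge test statistic:
p = 1e-20

# Naive conversion on the linear scale: 1 - p/2 rounds to exactly 1.0
# in double precision, so the resulting z-score degenerates to infinity.
z_naive = stats.norm.ppf(1 - p / 2)           # inf

# Conversion via the log of the p-value keeps the information:
z_logged = -special.ndtri_exp(np.log(p / 2))  # about 9.3

print(z_naive, z_logged)
```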
Unfortunately, Motyl et al. do not report the actual variances from their TIVA test (only the test statistics), so a direct comparison of our results is not possible.

R-Index

The R-Index (Schimmack, 2014b) is a tool that aims to quantify the replicability of a set of studies. It calculates the difference between the success rate and the median estimated power, which results in the so-called inflation rate. This inflation rate is then subtracted from the median estimated power, resulting in the R-Index. Here is Uli Schimmack’s interpretation of certain R-Index values: “The R-Index is not a replicability estimate […] I consider an R-Index below 50 an F (fail). An R-Index in the 50s is a D, and an R-Index in the 60s is a C. An R-Index greater than 80 is considered an A”.
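A minimal sketch of this computation, assuming two-sided p-values as input and using post-hoc power as the power estimate (my reading of the procedure, not Schimmack’s own code):

```python
import numpy as np
from scipy import stats

def r_index(p_values, alpha=0.05):
    """R-Index sketch: median observed power minus the inflation rate
    (success rate minus median observed power)."""
    p = np.asarray(p_values, dtype=float)
    z = stats.norm.isf(p / 2)               # z-score per test
    z_crit = stats.norm.isf(alpha / 2)      # critical z for the two-sided test
    obs_power = stats.norm.sf(z_crit - z)   # post-hoc power given the observed z
    median_power = np.median(obs_power)
    success_rate = np.mean(p < alpha)       # share of significant results
    inflation = success_rate - median_power
    return median_power - inflation

# hypothetical p-values for illustration
print(r_index([0.001, 0.01, 0.02, 0.03, 0.04, 0.045, 0.06, 0.20]))
```

Equivalently, R-Index = 2 × median observed power − success rate.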
Here, again, JASP was ahead: it obtained an R-Index of .60, whereas JPSP landed at .49.

Both journals had success rates of around 80%, which is much higher than what would be expected with the average power and effect sizes found in psychology (Bakker, van Dijk, & Wicherts, 2012). It is known and widely accepted that journals tend to publish significant results over non-significant ones.
Motyl et al. report an R-Index of .52 for 2013-2014 for high impact journals, which is very close to our value of .49.


Summary

The comparison between JPSP and JASP revealed a better R-Index, a more realistic TIVA variance, and a more right-skewed p-curve for the journal with the much lower IF. As the studies had roughly comparable sample sizes (JPSP: Md = 86, IQR: 54-124; JASP: Md = 114, IQR: 65-184), I would bet some money that more studies from JASP replicate than from JPSP.
A journal’s prestige does not protect it from submissions that contain QRPs; on the contrary, it might lead to stronger competition between researchers and more pressure to submit a significant result by all means. Furthermore, a journal’s higher rejection rate also leaves more room for “selecting for significance”. In contrast, a journal that must publish more or less every submission it receives to fill its issues simply does not have much room for this filter. With the tools applied here, however, it is not possible to distinguish between p-hacking and publication bias: they only detect patterns in test statistics that can result from either practice.

References

Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7(6), 543-554.
McShane, B. B., Böckenholt, U., & Hansen, K. T. (2016). Adjusting for publication bias in meta-analysis: An evaluation of selection methods and some cautionary notes. Perspectives on Psychological Science, 11, 730–749. doi:10.1177/1745691616662243
Schimmack, U. (2014a, December 30). The Test of Insufficient Variance (TIVA): A new tool for the detection of questionable research practices. Retrieved from https://replicationindex.wordpress.com/2014/12/30/the-test-of-insufficient-variance-tiva-a-new-tool-for-the-detection-of-questionable-research-practices/
Schimmack, U. (2014b). Quantifying statistical research integrity: The Replicability-Index.
Schimmack, U. (2015, September 15). Replicability-Ranking of 100 Social Psychology Departments [Web log post]. Retrieved from https://replicationindex.wordpress.com/2015/09/15/replicability-ranking-of-100-social-psychology-departments/
Schönbrodt, F. (2015). p-checker [Online application]. Retrieved from http://shinyapps.org/apps/p-checker/
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). P-curve: A key to the file-drawer. Journal of Experimental Psychology: General, 143(2), 534.
