The post Gazing into the Abyss of P-Hacking: HARKing vs. Optional Stopping appeared first on Nicebread.

Almost all researchers have experienced the tingling feeling of suspense that arises right before they take a look at long-awaited data: Will they support their favored hypothesis? Will they yield interesting or even groundbreaking results? In a perfect world (especially one without publication bias), the cause of this suspense should be nothing but scientific curiosity. However, the world, and specifically the incentive system in science, is not perfect. A lot of pressure rests on researchers to produce statistically significant results. For many researchers, statistical significance is the cornerstone of their academic career, so non-significant results in an important study can not only question their scientific convictions but also dash their hopes of professional promotion (although, fortunately, things are changing for the better).

Now, what does a researcher do when confronted with messy, non-significant results? According to several much-cited studies (for example John et al., 2012; Simmons et al., 2011), a common reaction is to start sampling again (and again, and again, …) in the hope that a somewhat larger sample size can boost significance. Another reaction is to wildly conduct hypothesis tests on the existing sample until at least one of them becomes significant (see for example Simmons et al., 2011; Kerr, 1998). These practices, along with some others, are commonly known as *p-hacking*, because they are designed to drag the famous *p*-value right below the .05 mark that usually indicates statistical significance. Undisputedly, *p*-hacking works (for a demonstration, try out the p-hacker app). The two questions we want to answer in this blog post are: How does it work, and why is that bad for science?

As many people may have heard, *p*-hacking works because it exploits a process called *alpha error accumulation* which is covered in most introductory statistics classes (but also easily forgotten again). Basically, alpha error accumulation means that as one conducts more and more hypothesis tests, the probability increases that one makes a wrong test decision at least once. Specifically, this wrong test decision is a false positive decision or *alpha error*, which means that you proclaim the existence of an effect although, in fact, there is none. Speaking in statistical terms, an alpha error occurs when the test yields a significant result although the null hypothesis (“There is no effect”) is true in the population. This means that *p*-hacking leads to the publication of an increased rate of false positive results, that is, studies that claim to have found an effect although, in fact, their result is just due to the randomness of the data. Such studies will never replicate.

At this point, the blog post could be over. *P*-hacking exploits alpha error accumulation and fosters the publication of false positive results, which is bad for science. However, we want to take a closer look at *how* bad it really is. In fact, some *p*-hacking techniques are worse than others (or, if you like the unscrupulous science villain perspective: some *p*-hacking techniques work better than others). As a showcase, we want to introduce two researchers: The *HARKer* takes existing data and conducts multiple independent hypothesis tests (based on multiple uncorrelated variables in the data set) with the goal of publishing the ones that become significant. For example, the HARKer tests for each possible correlation in a large data set whether it differs significantly from zero. The *Accumulator*, on the other hand, uses optional stopping. This means that he collects data for a single research question in a sequential manner until either statistical significance or a maximum sample size is reached. For simplicity, we assume that they use neither other *p*-hacking techniques nor other questionable research practices.

Let us start with the HARKer: Since the conducted hypothesis tests in our defined scenario are essentially independent, the situation can be seen as a problem of multiple testing. This means it is comparatively easy to determine the exact probability that the HARKer will end up with at least one false positive result given a certain number of hypothesis tests. Assuming no effects in the population (for example, no correlation between the variables), one can picture the situation as a decision tree: At each branch level stands a hypothesis test which can either yield a non-significant result with 95% probability or a (spurious) significant result with 5% probability, which is the alpha level.

No matter how many hypothesis tests the HARKer conducts, there will only be one condition in the all-null scenario where no error occurs, that is, where all hypothesis tests yield non-significant results. The probability that this occurs is (1 − α)^k, with k being the number of conducted hypothesis tests. The probability that at least one of the hypothesis tests is significant is the probability of the complementary event, that is, 1 − (1 − α)^k. For example, when the HARKer computes k = 10 hypothesis tests with an alpha level of α = .05, the overall probability to obtain at least one false positive result is 1 − (1 − .05)^10 ≈ .40. Of course, the formula can be adjusted for other suggested alpha levels. We show this general formula in the R code chunk below.

[cc lang="rsplus" escaped="true"]

# The problem of HARKing

harker <- function(x, alpha){1-(1-alpha)^x}

[/cc]
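Plugging numbers into this function reproduces the example above; a quick sanity check:

```r
harker <- function(x, alpha){1-(1-alpha)^x}  # as defined above

# With k = 10 independent tests at alpha = .05, the probability of at
# least one false positive is 1 - 0.95^10, i.e., about 40%:
harker(10, alpha = 0.05)
#> [1] 0.4012631

# A single test stays at the nominal alpha level:
harker(1, alpha = 0.05)
#> [1] 0.05
```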

The Accumulator has a different tactic: Instead of conducting multiple hypothesis tests on different variables of one data set, he repeatedly conducts the same hypothesis test on the same variables in a growing sample. Starting with a minimum sample size, the Accumulator looks at the results of the analysis: if they are significant, data collection is stopped; if not, more data are collected until (finally) the results become significant, or a maximum sample size is reached. This strategy is also called *optional stopping*. Of course, the more often a researcher peeks at the data, the higher the probability to obtain a false positive result at least once. However, this overall probability is not the same as the one obtained through HARKing. The reason is that the hypothesis tests are not independent in this case. Why is that? The same hypothesis test is repeatedly conducted on only slightly different data. In fact, the data that were used in the first hypothesis test are used in every single one of the subsequent hypothesis tests, so that there is a spillover effect of the first test to every other hypothesis test in the set. Imagine your initial sample contains an outlier: This outlier will affect the test results in every subsequent test. With multiple testing, in contrast, the outlier will affect only the test in question but none of the other tests in the set.

So does this dependency make optional stopping more or less effective than HARKing? Of course, people have been wondering about this for quite a while. A paper by Armitage et al. (1969) demonstrates error accumulation in optional stopping for three different tests. We can replicate their results for the *z*-test with a small simulation (a more flexible simulation can be found at the end of the blog post): We start by drawing a large number of samples (`iter`) with the maximum sample size (`n.max`) from the null hypothesis. Then we conduct a sequential testing procedure on each of the samples, starting with a minimum sample size (`n.min`) and increasing in steps of size `step` up to the maximum sample size. The probability to obtain a significant result at least once up to a certain step can be estimated through the percentage of iterations that end up with at least one significant result at that point.

[cc lang="rsplus" escaped="true"]

# The Problem of optional stopping

accumulator <- function(n.min, n.max, step, alpha=0.05, iter=10000){

# Determine places of peeks

peeks <- seq(n.min, n.max, by=step)

# Initialize result matrix (non-sequential)

res <- matrix(NA, ncol=length(peeks), nrow=iter)

colnames(res) <- peeks

# Conduct sequential testing (always until n.max, with peeks at pre-determined places)

for(i in 1:iter){

sample <- rnorm(n.max, 0, 1)

res[i,] <- sapply(peeks, FUN=function(x){sum(sample[1:x])/sqrt(x)})

}

# Create matrix: Which tests are significant?

signif <- abs(res) >= qnorm(1-alpha)

# Create matrix: Has there been at least one significant test in the trial?

seq.signif <- matrix(NA, ncol=length(peeks), nrow=iter)

for(i in 1:iter){

for(j in 1:ncol(signif)){

seq.signif[i,j] <- TRUE %in% signif[i, 1:j]

}

}

# Determine the sequential alpha error probability for the sequential tests

seq.alpha <- apply(seq.signif, MARGIN = 2, function(x){sum(x)/iter})

# Return the sequential alpha error probability

return(list(seq.alpha = seq.alpha))

}

[/cc]

For example, the researcher conducts a two-sided one-sample *z*-test with an overall alpha level of .05 (which is why we pass alpha = 0.025 per tail to the function) in a sequential way. He starts with 10 observations, then always adds another 10 if the result is not significant, up to a maximum of 100 observations. This means he has 10 chances to peek at the data and end the data collection if the hypothesis test is significant. Using our simulation function, we can determine the probability to have obtained at least one false positive result at any of these steps:

[cc lang="rsplus" escaped="true"]

set.seed(1234567)

res.optstopp <- accumulator(n.min=10, n.max=100, step=10, alpha=0.025, iter=10000)

print(res.optstopp[[1]])

[/cc]

[cc]

[1] 0.0492 0.0824 0.1075 0.1278 0.1431 0.1574 0.1701 0.1788 0.1873 0.1949

[/cc]

We can see that with one single evaluation, the false positive rate is at the nominal 5%. However, when more in-between tests are calculated, the false positive rate rises to roughly 20% with ten peeks. This means that even if there is no effect at all in the population, the researcher would have stopped data collection with a significant result in 20% of the cases.

Let’s compare the false positive rates of HARKing and optional stopping: Since the researcher in our example above conducts one to ten dependent hypothesis tests, we can compare this to a situation where a HARKer conducts one to ten independent hypothesis tests. The figure below shows the results of both *p*-hacking strategies:

[cc lang="rsplus" escaped="true"]

# HARKing False Positive Rates

HARKs <- harker(1:10, alpha=0.05)

[/cc]

We can see that HARKing produces higher false positive rates than optional stopping with the same number of tests. This can be explained by the dependency on the first sample in the case of optional stopping: Given that the null hypothesis is true, this sample is not very likely to show extreme effects in any direction (however, there is a small probability that it does). Every extension of this sample has to “overcome” this property, not only by being extreme in itself but also by being extreme enough to shift the test on the overall sample from non-significance to significance. In contrast, every sample in the multiple testing case only needs to be extreme in itself. Note, however, that false positive rates in optional stopping depend not only on the number of interim peeks, but also on the size of the initial sample and on the step size (how many observations are added between two peeks?). The difference between multiple testing and optional stopping which you see in the figure above is therefore only valid for this specific case. Going back to the two researchers from our example, we can say that the HARKer has a better chance of coming up with significant results than the Accumulator, if both do the same number of hypothesis tests.
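To put numbers to this comparison, we can line up the analytical HARKing rates against the simulated optional stopping rates printed above (the optional stopping vector is simply hard-coded from our simulation output):

```r
harker <- function(x, alpha){1-(1-alpha)^x}  # as defined above

# HARKing: false positive rate after 1 to 10 independent tests
harking <- round(harker(1:10, alpha = 0.05), 3)
harking
#> [1] 0.050 0.098 0.143 0.185 0.226 0.265 0.302 0.337 0.370 0.401

# Optional stopping: simulated rates after 1 to 10 peeks (from above)
optstopp <- c(0.0492, 0.0824, 0.1075, 0.1278, 0.1431, 0.1574,
              0.1701, 0.1788, 0.1873, 0.1949)

# HARKing yields the higher false positive rate at every step
all(harking > optstopp)
#> [1] TRUE
```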

You can use the interactive *p*-hacker app to experience the efficiency of both *p*-hacking strategies yourself: You can increase the number of dependent variables and see whether one of them becomes significant (HARKing), or you can go to the “Now: p-hack!” tab and increase your sample until you obtain significance. Note that the DVs in the p-hacker app are not completely independent, as in our example above, but rather correlate with *r* = .2, assuming that the DVs to some extent measure at least related constructs.

To conclude, we have shown how two *p*-hacking techniques work and why their application is bad for science. We found that *p*-hacking techniques based on multiple testing typically end up with higher rates of false positive results than *p*-hacking techniques based on optional stopping, if we assume the same number of hypothesis tests. We want to stress that this does *not* mean that naive optional stopping is okay (or even okay-ish) in frequentist statistics, even if it does have a certain appeal. For those who want to do guilt-free optional stopping, there are ways to control for the error accumulation in the frequentist framework (see for example Wald, 1945; Chow & Chang, 2008; Lakens, 2014) and sequential Bayesian hypothesis tests (see for example our paper on sequential hypothesis testing with Bayes factors, or Rouder, 2014).

[cc lang="rsplus" escaped="true"]

sim.optstopping <- function(n.min, n.max, step, alpha=0.05, test="z.test", alternative="two.sided", iter=10000){

match.arg(test, choices=c("t.test", "z.test"))

match.arg(alternative, choices=c("two.sided", "directional"))

# Determine places of peeks

peeks <- seq(n.min, n.max, by=step)

# Initialize result matrix (non-sequential)

res <- matrix(NA, ncol=length(peeks), nrow=iter)

colnames(res) <- peeks

# Conduct sequential testing (always until n.max, with peeks at pre-determined places)

for(i in 1:iter){

sample <- rnorm(n.max, 0, 1)

if(test=="t.test"){res[i,] <- sapply(peeks, FUN=function(x){mean(sample[1:x])/sd(sample[1:x])*sqrt(x)})}

if(test=="z.test"){res[i,] <- sapply(peeks, FUN=function(x){sum(sample[1:x])/sqrt(x)})}

}

# Create matrix: Which tests are significant?

if(test=="z.test"){

if(alternative=="two.sided"){signif <- abs(res) >= qnorm(1-alpha)}else{signif <- res <= qnorm(alpha)}

}else if(test=="t.test"){

n <- matrix(rep(peeks, iter), nrow=iter, byrow=TRUE)

if(alternative=="two.sided"){signif <- abs(res) >= qt(1-alpha, df=n-1)}else{signif <- res <= qt(alpha, df=n-1)}

}

# Create matrix: Has there been at least one significant test in the trial?

seq.signif <- matrix(NA, ncol=length(peeks), nrow=iter)

for(i in 1:iter){

for(j in 1:ncol(signif)){

seq.signif[i,j] <- TRUE %in% signif[i, 1:j]

}

}

# Determine the sequential alpha error probability for the sequential tests

seq.alpha <- apply(seq.signif, MARGIN = 2, function(x){sum(x)/iter})

# Return a list of individual test p-values, sequential significance and sequential alpha error probability

return(list(p.values = res,

seq.significance = seq.signif,

seq.alpha = seq.alpha))

}

[/cc]


The post Hiring Policy at the LMU Psychology Department: Better have some open science track record appeared first on Nicebread.

Our department embraces the values of open science and strives for replicable and reproducible research. For this goal we support transparent research with open data, open materials, and study pre-registration. Candidates are asked to describe in what way they already pursued and plan to pursue these goals.

Since then, every professorship announcement has contained this paragraph (and we have had good experiences with it).

I am very happy to announce that my department has now turned this implicit policy into an explicit hiring policy, effective since May 2018: **The department’s steering committee unanimously voted for an explicit policy to always include this (or a similar) statement in all future professorship job advertisements.**

It is the task of the appointment committee to appropriately weigh the existing open science activities as well as the future commitments of applicants. By including this statement, our department aims to communicate core values of good scientific practice and to attract excellent researchers who aim for transparent and credible research.

In this respect, take a look at the current draft of a *Modular Certification Initiative* (initiated by Chris Chambers, Kyle MacDonald and me, with a lot of input from the open science community). With this TOP-like scheme, institutions, but also single researchers, can select a level of openness which they require in their hiring process.

So, if you want to join the LMU psychology department as a professor, you had better have some open science track record.


The post Correcting bias in meta-analyses: What not to do (meta-showdown Part 1) appeared first on Nicebread.

It is well known that publication bias and *p*-hacking inflate effect size estimates from meta-analyses. In recent years, methodologists have developed an ever-growing menu of statistical approaches to correct for such overestimation. However, to date it was unclear under which conditions they perform well, and what to do if they disagree. Born out of a Twitter discussion, Evan Carter, Joe Hilgard, Will Gervais and I did a large simulation project, where we compared the performance of naive random effects meta-analysis (RE), trim-and-fill (TF), *p*-curve, *p*-uniform, PET, PEESE, PET-PEESE, and the three-parameter selection model (3PSM).

Previous investigations typically looked only at publication bias *or* questionable research practices (QRPs), but not both; used non-representative study-level sample sizes; or compared only a few bias-correcting techniques, but not all of them. Our goal was to simulate a research literature that is as realistic as possible for psychology. In order to simulate several research environments, we fully crossed five experimental factors: (1) the true underlying effect, δ (0, 0.2, 0.5, 0.8); (2) between-study heterogeneity, τ (0, 0.2, 0.4); (3) the number of studies in the meta-analytic sample, *k* (10, 30, 60, 100); (4) the percentage of studies in the meta-analytic sample produced under publication bias (0%, 60%, 90%); and (5) the use of QRPs in the literature that produced the meta-analytic sample (none, medium, high).

This blog post summarizes some insights from our study, internally called “meta-showdown”. Check out the preprint and the interactive app metaExplorer. The fully reproducible and reusable simulation code is on Github, and more information is on OSF.

In this blog post, I will highlight some lessons that we learned during the project, primarily focusing on **what not to do when performing a meta-analysis**.

**Constraints on Generality disclaimer: These recommendations apply to typical sample sizes, effect sizes, and heterogeneities in psychology; other research literatures might have different settings and therefore a different performance of the methods. Furthermore, the recommendations rely on the modeling assumptions of our simulation. We went a long way to make them as realistic as possible, but other assumptions could lead to other results.**

If studies have no publication bias, nothing can beat plain old random effects meta-analysis: it has the highest power, the least bias, and the highest efficiency compared to all other methods. Even in the presence of some (though not extreme) QRPs, naive RE performs better than all other methods. When can we expect no publication bias? If (and, in my opinion *only if*) we meta-analyze a set of registered reports.

But.

In *any* other setting except registered reports, a consequential amount of publication bias must be expected. In the field of psychology/psychiatry, more than 90% of all published hypothesis tests are significant (Fanelli, 2011) despite the average power being estimated as around 35% (Bakker, van Dijk, & Wicherts, 2012) – the gap points towards a huge publication bias. In the presence of publication bias, naive random effects meta-analysis and trim-and-fill have false positive rates approaching 100%:

More thoughts about trim-and-fill’s inability to recover δ=0 are in Joe Hilgard’s blog post. (Note: this insight is not really new and has been shown multiple times before, for example by Moreno et al., 2009, and Simonsohn, Nelson, and Simmons, 2014).

**Our recommendation: Never trust meta-analyses based on naive random effects and trim-and-fill, unless you can rule out publication bias. Results from previously published meta-analyses based on these methods should be treated with a lot of skepticism.**

*Update 2017/06/09:* We had a productive exchange with Uri Simonsohn and Joe Simmons concerning what should be estimated in a meta-analysis with heterogeneity. Traditionally, meta-analysts have tried to arrive at techniques that recover the true average effect of all conducted studies (AEA – average effect of all studies). Simonsohn et al. (2014) propose estimating a different quantity: the average effect of the studies one observes, rather than of all studies (AEO – average effect of observed studies). See Simonsohn et al. (2014), the associated Supplementary Material 2, and also this blog post for arguments why they think this is a more useful quantity to estimate.

Hence, an investigation of the topic can be done on two levels: A) What is the more appropriate estimand (AEA or AEO?), and B) Under what conditions are estimators able to recover the respective true value with the least bias and least variance?

Instead of updating the section of the current blog post in the light of our discussion, I decided to cut it out and to move the topic to a future blog post. Likewise, one part of our manuscript’s revision will be a more detailed discussion of exactly these differences.

I archived the previous version of the blog post here.

Many bias-correcting methods are driven by QRPs – the more QRPs, the stronger the downward correction. However, this effect can get so strong that methods overadjust into the opposite direction, even if all studies in the meta-analysis have the same sign:

Note: You need to set the option “Keep negative estimates” to get this plot.

**Our recommendation: Ignore bias-corrected results that go into the opposite direction; set the estimate to zero, do not reject H₀.**

Typical small-study effects (e.g., by *p*-hacking or publication bias) induce a negative correlation between sample size and effect size – the smaller the sample, the larger the observed effect size. PET-PEESE aims to correct for that relationship. In the absence of bias and QRPs, however, random fluctuations can lead to a *positive* correlation between sample size and effect size, which leads to a PET and PEESE slope of the unintended sign. Without publication bias, this reversal of the slope actually happens quite often.

See for example the next figure. The true effect size is zero (red dot), naive random effects meta-analysis slightly overestimates the true effect (see black dotted triangle), but PET and PEESE massively overadjust towards more positive effects:

As far as I know, PET-PEESE is typically not intended to correct in the reverse direction. An underlying biasing process would have to systematically remove small studies that show a significant result with larger effect sizes, and keep small studies with non-significant results. In the current incentive structure of psychological research, I see no reason for such a process, unless researchers are motivated to show that a (maybe politically controversial) effect does not exist.
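For readers who want to see the mechanics: PET and PEESE are weighted meta-regressions of the observed effect sizes on their standard errors (PET) or sampling variances (PEESE), with the intercept serving as the bias-corrected estimate. Below is a minimal sketch, not the implementation used in our simulation; the vector names `d` and `se` are ours, and the conditional rule follows the commonly described one-sided test of the PET intercept:

```r
# PET regresses effect sizes on standard errors, PEESE on sampling variances;
# the intercept estimates the effect at SE = 0, i.e., an infinitely large study.
pet   <- function(d, se) lm(d ~ se,      weights = 1/se^2)
peese <- function(d, se) lm(d ~ I(se^2), weights = 1/se^2)

# Conditional PET-PEESE: report PEESE only if PET's intercept is
# significantly positive (one-sided test); otherwise report PET.
pet_peese <- function(d, se, alpha = .05) {
  ct <- summary(pet(d, se))$coefficients
  if (ct[1, 1] > 0 && ct[1, 4] / 2 < alpha) {
    unname(coef(peese(d, se))[1])
  } else {
    ct[1, 1]
  }
}

# Toy data without small-study effects: the estimate should stay near d = 0.5
set.seed(1)
se <- runif(50, 0.05, 0.3)
d  <- rnorm(50, mean = 0.5, sd = se)
pet_peese(d, se)
```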

**Our recommendation: Ignore the PET-PEESE correction if it has the wrong sign, unless there are good reasons for an untypical selection process.**

A bias can be more easily accepted if it is always conservative – then one could conclude: “This method might miss some true effects, but *if* it indicates an effect, we can be quite confident that it really exists”. Depending on the conditions (i.e., how much publication bias, how many QRPs, etc.), however, PET/PEESE sometimes shows huge overestimation and sometimes huge underestimation.

For example, with no publication bias, some heterogeneity (τ=0.2), and severe QRPs, PET/PEESE *underestimates* the true effect of δ = 0.5:

In contrast, if no effect exists in reality, but there is strong publication bias, large heterogeneity, and no QRPs, PET/PEESE *overestimates* a lot:

In fact, the distribution of PET/PEESE estimates looks virtually identical for these two examples, although the underlying true effect is δ = 0.5 in the upper plot and δ = 0 in the lower plot. Furthermore, note the huge spread of PET/PEESE estimates (the error bars visualize the 95% quantiles of all simulated replications): Any single PET/PEESE estimate can be very far off.

**Our recommendation: As one cannot know the condition of reality, it is probably safest not to use PET/PEESE at all.**

Again, please consider the “Constraints on Generality” disclaimer above.

- When you can exclude publication bias (i.e., in the context of registered reports), do not use bias-correcting techniques. Even in the presence of some QRPs they perform worse than plain random effects meta-analysis.
- In any other setting except registered reports, expect publication bias, and do not use random effects meta-analysis or trim-and-fill. Both will give you a 100% false positive rate in typical settings, and a biased estimation.
- Even if all studies entering a meta-analysis point into the same direction (e.g., all are positive), bias-correcting techniques sometimes overadjust and return a significant estimate of the *opposite* direction. Ignore these results, set the estimate to zero, do not reject H₀.
- Sometimes PET/PEESE adjusts into the wrong direction (i.e., increasing the estimated true effect size).

As with any general recommendations, there might be good reasons to ignore them.

- The *p*-uniform package (v. 0.0.2) very rarely does not provide a lower CI. In this case, ignore the estimate.
- Do not run *p*-curve or *p*-uniform on <= 3 significant and directionally consistent studies. Although computationally possible, this gives hugely variable results, which are often very biased. See our supplemental material for more information and plots.
- If the 3PSM method (in the implementation of McShane et al., 2016) returns an incomplete covariance matrix, ignore the result (even if a point estimate is provided).


The post Assessing the evidential value of journals with p-curve, R-index, TIVA, etc: A comment on Motyl et al. (2017) with new data appeared first on Nicebread.

Anna Bittner, a former student of LMU Munich, wrote her bachelor thesis on a systematic comparison of two journals in social psychology – one with a very high impact factor (Journal of Personality and Social Psychology (JPSP), IF = 5.0 at that time), another with a very low IF (Journal of Applied Social Psychology (JASP), IF = 0.8 at that time – not to be confused with the great statistical software JASP). By coincidence (or precognition), we also chose the year 2013 for our analyses. Read her full blog post about the study below.

In a Facebook discussion about the Motyl et al. paper, Michael Inzlicht wrote: “But, is there some way to systematically test a small sample of your p-values? […] Can you randomly sample 1-2% of your p-values, checking for errors and then calculating your error rate? I have no doubt there will be some errors, but the question is what the rate is.“. As it turns out, we also did our analysis on the 2013 volume of JPSP. Anna followed the guidelines by Simonsohn et al. That means she selected only one effect size per sample (the focal effect size) and wrote a detailed disclosure table. Hence, we do not think that the critique in the DataColada blog post applies to our coding scheme. The disclosure table includes quotes from the verbal hypotheses, quotes from the results sections, and which test statistic was selected. Hence, you can directly validate the choice of effect sizes. Furthermore, all test statistics can be entered into the p-checker app, where you get TIVA, R-Index, p-curve and more diagnostic tests (grab the JPSP test statistics & JASP test statistics).

Now we can compare two completely independently coded *p*-curve disclosure tables about a large set of articles. Any disagreement of course does not mean that one party is right and the other is wrong. But it will be interesting to see the amount of agreement.

Here comes Anna’s blog post about her own study. Anna Bittner is now doing her Master of Finance at the University of Melbourne.

*by Anna Bittner and Felix Schönbrodt*

The recent discoveries of staggeringly low replicability in psychology have come as a shock to many and led to a discussion on how to ensure better research practices in the future. To this end, it is necessary to find ways to efficiently distinguish good research from bad, and research that contains evidential value from research that does not.

In the past the impact factor (IF) has often been the favored indicator of a journal’s quality. To check whether a journal with a higher IF does indeed publish the “better” research in terms of evidential value, we compared two academic journals from the domain of social psychology: The Journal of Personality and Social Psychology (JPSP, Impact Factor = 5.031) and the Journal of Applied Social Psychology (JASP, Impact Factor = 0.79).

For this comparison, Anna analyzed and carefully hand-coded all studies with hypothesis tests, starting in January 2013 and progressing chronologically until about 110 independent test statistics for each journal were acquired. See the full report (in German) in Anna’s bachelor thesis. These test statistics were fed into the p-checker app (Version 0.4; Schönbrodt, 2015), which analyzed them with the tools *p*-curve, TIVA, and R-Index.

All material and raw data is available on OSF: https://osf.io/pgc86/

*P*-curve (Simonsohn, Nelson, & Simmons, 2014) takes a closer look at all significant *p*-values and plots them against their relative frequency. This results in a curve that will ideally have a lot of very small *p*-values (< .025) and much fewer larger *p*-values (> .025). Another possible shape is a flat curve, which will occur when researchers only investigate null effects and selectively publish those studies that obtained a *p*-value < .05 by chance: under the null hypothesis, each individual *p*-value is equally likely, so the distribution of significant *p*-values is even. *P*-curve allows testing whether the empirical curve is flatter than the *p*-curve that would be expected at any chosen power.
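This logic can be demonstrated with a small simulation (ours, not the p-curve app’s algorithm): simulate one-sample *z*-tests under a true effect and under the null, keep only the significant two-sided *p*-values, and check how many fall below .025:

```r
set.seed(1)

# Significant two-sided p-values from one-sample z-tests (n = 30, sd = 1)
sig_p <- function(delta, n = 30, iter = 20000) {
  z <- replicate(iter, mean(rnorm(n, delta, 1)) * sqrt(n))
  p <- 2 * pnorm(-abs(z))
  p[p < .05]
}

# Under the null, significant p-values are uniform: about half are < .025
mean(sig_p(0) < .025)

# Under a true effect (delta = 0.5), the curve is right-skewed:
# far more than half of the significant p-values are < .025
mean(sig_p(0.5) < .025)
```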

Please note that *p*-curve assumes homogeneity (McShane, Böckenholt, & Hansen, 2016). Lumping together diverse studies from a whole journal, in contrast, guarantees heterogeneity. Hence, trying to recover the true underlying effect size/power is of limited usefulness here.

The *p*-curves of both JPSP and JASP were significantly right-skewed, which suggests that both journal’s output cannot be explained by pure selective publication of null effects that got significant by chance. JASP’s curve, however, had a much stronger right-skew, indicating stronger evidential value:

TIVA (Schimmack, 2014a) tests for an insufficient variance of *p*-values: the *p*-values are converted to *z*-scores, and if no *p*-hacking and no publication bias were present, the variance of these *z*-scores should be at least 1. A value below 1 is seen as an indicator of publication bias and/or *p*-hacking. However, the variance can and will be much larger than 1 when studies of different sample and effect sizes are included in the analysis (which was inevitably the case here). Hence, TIVA is a rather weak test of publication bias when heterogeneous studies are combined: A *p*-hacked set of heterogeneous effect sizes can still result in a high variance in TIVA. Publication bias and *p*-hacking reduce the variance, but heterogeneity and different sample sizes can increase the variance in a way that TIVA is clearly above 1, even if all studies in the set are severely *p*-hacked.

As expected, neither JPSP nor JASP attained a significant TIVA result: the variance of *p*-values was not significantly smaller than 1 for either journal. Descriptively, JASP had a much higher variance of 6.03 (chi2(112)=674, *p*=1), compared to 1.09 (chi2(111)=121, *p*=.759) for JPSP. Given the huge heterogeneity of the underlying studies, a TIVA variance close to 1 in JPSP signals a strong bias. This, again, is not very surprising: we already knew with certainty that our literature is plagued by huge publication bias.

The descriptive difference in TIVA variances can be due to less *p*-hacking, less publication bias, or more heterogeneity of effect sizes and sample sizes in JASP compared to JPSP. Hence, drawing firm conclusions from this numerical difference is difficult; but the much larger value in JASP can be seen as an indicator that the studies published there paint a more realistic picture.

(Note: The results here differ from the results reported in Anna’s bachelor thesis, as the underlying computation has been improved. p-checker now uses logged p-values, which allows more precision with very small p-values. Early versions of p-checker underestimated the variance when extremely low p-values were present).

Unfortunately, Motyl et al. do not report the actual variances from their TIVA test (only the test statistics), so a direct comparison of our results is not possible.

The R-Index (Schimmack, 2014b) is a tool that aims to quantify the replicability of a set of studies. It computes the difference between the median estimated power and the success rate, which yields the so-called inflation rate. This inflation rate is then subtracted from the median estimated power, resulting in the R-Index. Here is Uli Schimmack’s interpretation of certain R-Index values: “The R-Index is not a replicability estimate […] I consider an R-Index below 50 an F (fail). An R-Index in the 50s is a D, and an R-Index in the 60s is a C. An R-Index greater than 80 is considered an A”.
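The arithmetic behind this is a one-liner (a sketch; the helper name and the illustrative input numbers are mine, not the journals' actual values):

```python
def r_index(median_power, success_rate):
    """R-Index = median estimated power minus the inflation rate,
    where inflation = success rate - median estimated power."""
    inflation = success_rate - median_power
    return median_power - inflation

# Example: median estimated power of .70 with an 80% success rate
r_index(0.70, 0.80)  # ≈ 0.60 -> a "C" in Schimmack's grading
```

Note that whenever the success rate exceeds the median estimated power (the typical published-literature situation), the R-Index lands below the power estimate.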

Here, again, JASP was ahead: it obtained an R-Index of .60, whereas JPSP landed at .49.

Both journals had success rates of around 80%, which is much higher than what would be expected with the average power and effect sizes found in psychology (Bakker, van Dijk, & Wicherts, 2012). It is known and widely accepted that journals tend to publish significant results over non-significant ones.

Motyl et al. report an R-Index of .52 for 2013-2014 for high impact journals, which is very close to our value of .49.

The comparison between JPSP and JASP revealed a better R-Index, a more realistic TIVA variance, and a more right-skewed *p*-curve for the journal with the much lower impact factor. As the studies had roughly comparable sample sizes (JPSP: Md = 86, IQR: 54–124; JASP: Md = 114, IQR: 65–184), I would bet some money that more studies from JASP replicate than from JPSP.

A journal’s prestige does not protect it from research submissions that contain QRPs – on the contrary, it might lead to stronger competition between researchers and more pressure to submit a significant result by all means. Furthermore, the higher rejection rates of a prestigious journal leave more room for “selecting for significance”. In contrast, a journal that must publish more or less every submission it gets to fill up its issues simply does not have much room for this filter. With the tools applied here, however, it is not possible to distinguish between *p*-hacking and publication bias: they only detect patterns in test statistics that can result from either practice.

Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. *Perspectives on Psychological Science, 7*(6), 543–554.

McShane, B. B., Böckenholt, U., & Hansen, K. T. (2016). Adjusting for publication bias in meta-analysis: An evaluation of selection methods and some cautionary notes. *Perspectives on Psychological Science, 11*, 730–749. doi:10.1177/1745691616662243

Schimmack, U. (2014b). Quantifying Statistical Research Integrity: The Replicabilty-Index.

Schimmack, U. (2014a, December 30). The Test of Insufficient Variance (TIVA): A New Tool for the Detection of Questionable Research Practices. Retrieved from https://replicationindex.wordpress.com/2014/12/30/the-test-of-insufficient-variance-tiva-a-new-tool-for-the-detection-of-questionable-research-practices/

Schimmack, U. (2015, September 15). Replicability-Ranking of 100 Social Psychology Departments [Web log post]. Retrieved from https://replicationindex.wordpress.com/2015/09/15/replicability-ranking-of-100-social-psychology-departments/

Schönbrodt, F. (2015). p-checker [Online application]. Retrieved from http://shinyapps.org/apps/p-checker/

Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). P-curve: A key to the file-drawer. *Journal of Experimental Psychology: General, 143*(2), 534.

The post Assessing the evidential value of journals with p-curve, R-index, TIVA, etc: A comment on Motyl et al. (2017) with new data appeared first on Nicebread.

The post German Psychological Society fully embraces open data, gives detailed recommendations appeared first on Nicebread.

In the last year, the discussion in our field moved from “Do we have a replication crisis?” towards “Yes, we have a problem – what can and should we change, and how can we implement it?”. I think that we need both top-down changes on an institutional level, combined with bottom-up approaches, such as local Open Science Initiatives. Here, I want to present one big institutional change concerning open data.

The German Research Foundation (DFG), the largest public funder of research in Germany, updated their policy on data sharing, which can be summarized in a single sentence: **Publicly funded research, including the raw data, belongs to the public**. Consequently, all research data from a DFG funded project should be made open immediately, or at least a couple of months after finalization of the research project (see [1] and [2]). Furthermore, the DFG asked all scientific disciplines to develop more specific guidelines which implement these principles in their respective discipline.

The German Psychological Society (Deutsche Gesellschaft für Psychologie, DGPs) installed a working group (Andrea Abele-Brehm, Mario Gollwitzer and me) who worked for one year on such recommendations for psychology.

In developing the document, we tried to be very inclusive and to harvest the wisdom of the crowd. A first draft (February 2016) was discussed for 6 weeks in an internet forum where all DGPs members could comment. Based on this discussion (and many additional personal conversations), a revised version was circulated and discussed in person with a smaller group of interested members (July 2016) and a representative of the DFG. Furthermore, we were in regular contact with the “Fachkollegium Psychologie” of the DFG (i.e., the panel that makes funding decisions in psychology; meanwhile, its members have changed on a rotational basis). Finally, the chairpersons of all sections of the DGPs and the speakers of the young members had another opportunity to comment. On September 17, the recommendations were officially adopted by the society.

I think this thorough and iterative process was very important for two reasons: First, it definitely improved the quality of the document, because we got so many great ideas and comments from the members, ironing out some inconsistencies and covering some edge cases. Second, it was important in order to get people on board. As this new open data guideline of the DFG causes a major change in the way we do our everyday scientific work, we wanted to talk to and convince as many people as possible from the early steps on. Of course not every single one of the >4,000 members is equally convinced, but the topic now has considerable attention in the society.

Hence, one focus was consensus and inclusivity. At the same time, we had the goal to develop bold and forward-looking guidelines that really address the current challenges of the field, and not to settle on the lowest common denominator. For this goal, we had to find a balance between several, sometimes conflicting, values.

*Research transparency ⬌ privacy rights.* A first specialty of psychology is that we do not investigate rocks or electrons, but human subjects, who have privacy rights. In a nutshell: privacy rights have to be respected, and in case of doubt they win over openness. But if data can be properly anonymized, there is no problem with open sharing; one possibility to share non-anonymized research data is “scientific use files”, where access is restricted to scientists. If data cannot be shared for privacy (or other) reasons, this has to be made transparent in the paper. (Hence, the recommendations are PRO compatible.) The recommendations give clear guidance on privacy issues and practical advice, for example, on how to phrase your informed consent so that you are actually able to share the data afterwards.

*Data reuse ⬌ right of first usage.* A second balance concerns optimal reuse of data on the one hand and the right of first usage of the original authors on the other. In the discussion phase during the development of the recommendations, several people expressed the fear of “research parasites” who “steal” the data from hard-working scientists. A very common gut feeling is: “The data belong to me”. But as we are publicly funded researchers running publicly funded research projects, the answer is quite clear: the data belong to the public. There is no copyright on raw data. On the other hand, we also need incentives for original researchers to generate data in the first place. Data generators of course have the right of first usage, and the recommendations allow this right to be extended by an embargo of up to 5 years (see below). But at the end of the day, publicly funded research data belong to the public, and everybody can reuse them. If data are open by default, a guideline must also discuss and define how data reuse should be handled. Our recommendations suggest in which cases a co-authorship should be offered to the data providers and in which cases this is not necessary.

*Verification ⬌ fair treatment of original authors.* Finally, research should be verifiable, but with a fair treatment of the original authors. The guidelines say that whenever a reanalysis of a data set is going to be published (and that also includes blog posts or presentations), the original authors have to be informed about this. They cannot prevent the reanalysis, but they have the chance to react to it.

We distinguish two types of data sharing:

*Type 1 data sharing* means that all raw data necessary to reproduce the results reported in a paper should be openly shared. Hence, this can be only a subset of all available variables in the full data set: the subset needed to reproduce these specific results. **The primary data are an essential part of an empirical publication, and a paper without them simply is not complete.**

*Type 2 data sharing* refers to the release of the full data set of a funded research project. The DGPs recommendations claim that after the end of a DFG-funded project, all data – even data which have not yet been used for publications – should be made open. Unpublished null results, or additional, exploratory variables, now have the chance to see the light of day and to be reused by other researchers. Experience tells us that not all planned papers have been written by the official end date of a project. Therefore, the recommendations allow the right of first usage to be extended by an embargo period of up to 5 years, during which the (so far unpublished) data do not have to be made public. The embargo option only applies to data that have not yet been used for publications. Hence, an embargo typically cannot be applied to Type 1 data sharing.

To summarize, I think these recommendations are the most complete, practical, and specific guidelines for data sharing in psychology to date. (Of course much more details are in the recommendations themselves). They fully embrace openness, transparency and scientific integrity. Furthermore, they do not proclaim detached ethical principles, but give very practical guidance on how to actually implement data sharing in psychology.

What are the next steps? The president of the DGPs, Prof. Conny Antoni, and the secretary, Prof. Mario Gollwitzer, have already contacted other psychological societies (APA, APS, EAPP, EASP, EFPA, SIPS, SESP, SPSP) and introduced our recommendations. The Board of Scientific Affairs of EFPA – the European Federation of Psychologists’ Associations – has already expressed its appreciation of the recommendations and will post them on its website. Furthermore, it will discuss them in an invited symposium at the European Congress of Psychology in Amsterdam this year. A mid-term goal will also be to check compatibility with other existing guidelines and to think about a harmonization of several guidelines within psychology.

As other scientific disciplines in Germany also work on their specific implementations of the DFG guidelines, it will be interesting to see whether there are common lines (although there certainly will be persisting and necessary differences between the requirements of the fields). Finally, we are in contact with the new Fachkollegium at the DFG, with the goal to see how the recommendations can and should be used in the process of funding decisions.

If your field also implements such recommendations/guidelines, don’t hesitate to contact us.

Schönbrodt, F., Gollwitzer, M., & Abele-Brehm, A. (2017). Der Umgang mit Forschungsdaten im Fach Psychologie: Konkretisierung der DFG-Leitlinien. *Psychologische Rundschau, 68*, 20–35. doi:10.1026/0033-3042/a000341. [PDF German][PDF English]

*(English translation by Malte Elson, Johannes Breuer, and Zoe Magraw-Mickelson)*


The post Two meanings of priors, part II: Quantifying uncertainty about model parameters appeared first on Nicebread.

This is the second part of “Two meanings of priors”. The first part explained a first meaning – “priors as subjective probabilities of models”. While the first meaning of priors refers to a global appraisal of existing hypotheses, the second meaning of priors refers to specific assumptions which are needed in the process of hypothesis building. The two kinds of priors have in common that they are both specified before concrete data are available. However, as it will hopefully become evident from the following blog post, they differ significantly from each other and should be distinguished clearly during data analysis.

In order to know how well evidence supports a hypothesis compared to another hypothesis, one must know the concrete specifications of each hypothesis. For example, in the tea tasting experiment, each hypothesis was characterized by a specific probability (e.g., the success rate of exactly 0.5 in H_{Fisher} of the previous blog post). What might sound trivial at first – deciding on the concrete specifications of a hypothesis – is in fact one of the major challenges when doing Bayesian statistics. Scientific theories are often imprecise, resulting in more than one plausible way to derive a hypothesis. When deciding on one specific hypothesis, new auxiliary assumptions are often made. **These assumptions, which are needed in order to specify a hypothesis adequately, are called “priors” as well.** They influence the formulation and interpretation of the likelihood (which gives you the plausibility of the data under a specific hypothesis). We will illustrate this with an example.

A food company conducts market research in a large German city. They know from a recent representative survey by the German Federal Statistical Office that Germans spend on average 4.50 € on their lunch (standard deviation: 0.60 €). Now they want to know whether the inhabitants of one specific city spend more money on their lunch than the German average. They expect lunch expenses to be especially high in this city because of the generally high living costs. In a traditional inferential testing procedure, the food company would formulate a null and an alternative hypothesis to test their assumption: H_{0}: µ ≤ 4.50 and H_{1}: µ > 4.50.

In Bayesian hypothesis testing, the formulation of the hypotheses has to be more precise than this. We need precise hypotheses as a basis for the likelihood functions, which assign probability values to possible states of reality. The traditional formulation, µ > 4.50, is too vague for that purpose: Is any lunch cost above 4.50€ a priori equally likely? Is it plausible that a lunch costs 1,000,000€ on average? Probably not. Not every state of reality is, a priori, equally plausible. “Models connect theory to data” (Rouder, Morey, & Wagenmakers, 2016), and a model that predicts everything predicts nothing.

As Bayesian statisticians, we therefore must ask ourselves: Which values are more plausible given that our hypotheses are true? Of course, our knowledge differs from case to case on this point. Sometimes, we may be able to commit to a very small range of plausible values or even to a single value (in this case, we would call the respective hypothesis a “point hypothesis”). Theories in physics sometimes predict a single state of reality: “If this theory is true, then the mass of a Chicks boson is exactly 1.52324E-16 gr”.

More often, however, our knowledge about plausible values under a certain theory might be less precise, leading to a wider range of plausible values. Hence, the prior in the second sense defines the probability of a parameter value given a hypothesis, *p*(θ | H1).

Let us come back to the food company example. Their null hypothesis might be that there is no difference between the city in the focus of their research project and the German average. Hence, the null hypothesis predicts an average lunch cost of 4.50€. With the alternative hypothesis, it becomes slightly more complex. They assume that average lunch expenses in the city should be higher than the German average, so the most plausible value under the alternative hypothesis should be higher than 4.5. However, they may deem it very improbable that the mean lunch expenses are more than two standard deviations higher than the German average (so, for example, it should be very improbable that someone spends more than, say, 10 EUR for lunch even in the expensive city). With this knowledge, they can put most plausibility on values in a range from 4.5 to 5.7 (4.5 + 2 standard deviations). They could further specify their hypothesis by claiming that the most plausible value should be 5.1, i.e., one standard deviation higher than the German average. The elements of these verbal descriptions of the alternative hypothesis can be summarized in a truncated normal distribution that is centered over 5.1 and truncated at 4.5 (as the directional hypothesis does not predict values in the opposite direction).

With this model specification, the researchers would place 13% of the probability mass on values larger than 2SD of the general population (i.e., > 5.7).
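The 13% figure can be checked numerically. Below is a sketch in Python (not from the original post): the location 5.1 and the truncation at 4.5 come from the example, while the spread of 0.5 is the modeling choice used in the R snippet at the end of the post (not the population SD of 0.6).

```python
from scipy import stats

mu, sd, lower = 5.1, 0.5, 4.5        # H1 model: normal(5.1, 0.5), truncated at 4.5
a = (lower - mu) / sd                # truncation point in standard units
h1_model = stats.truncnorm(a, float("inf"), loc=mu, scale=sd)

p_above = h1_model.sf(5.7)           # P(mean lunch cost > 5.7 | H1) ≈ 0.13
```

Note that `scipy.stats.truncnorm` takes its truncation bounds in standard units of the untruncated normal, hence the rescaling of 4.5.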

Making it even more complex, they could quantify their uncertainty about the most plausible value (i.e., the maximum of the density distribution) by assigning another distribution to it. For example, they could build a normal distribution around it, with a mean of 5.1 and a standard deviation of 0.3. This would imply that in their opinion, 5.1 is the “most plausible most plausible value” but that values between 4.8 and 5.4 are also potential candidates for the most plausible value.

What you can notice in this example of hypothesis development is that the market researchers have to make auxiliary assumptions on top of their original hypothesis (which was H1: µ > 4.5). If possible, these prior plausibilities should be informed by theory or by previous empirical data. Specifying alternative hypotheses in this way may seem to be an unnecessary burden compared to traditional hypothesis testing, where these extra assumptions seemingly are not necessary. Except that they are necessary. Without going into detail in this blog post, we recommend reading Rouder et al.’s (2016a) “Is there a free lunch in inference?”, with the bottom line that principled and rational inference needs specified alternative hypotheses. (For example, in Neyman-Pearson testing, you also need to specify a precise alternative hypothesis that refers to the “smallest effect size of interest”.)

Furthermore, readers might object: “Researchers rarely have enough background knowledge to specify models that predict data“. Rouder et al. (2016b) argue that this critique is overstated, as (1) with proper elicitation, researchers often know much more than they initially think, (2) default models can be a starting point if really no information is available, and (3) several models can be explored without penalty.

A question that may come to your mind soon after you understood the difference between the two kinds of priors is: If they both are called “priors”, do they depend on each other in some way? Does the formulation of your “personal prior plausibility of a hypothesis” (like the skeptical observer’s prior on Mrs. Bristol’s tea tasting abilities) influence the specification of your model (like the hypothesis specification in the second example) or vice versa?

The straightforward answer to this question is “no, they don’t”. This can be easily illustrated in a case where the prior conviction of a researcher runs against the hypothesis he or she wants to test. The food company in the second example has carefully specified the likelihoods of the two hypotheses (H0 and H1) which they want to pit against each other. They are probably considerably convinced that the specification of the alternative hypothesis describes reality better than the specification of the null hypothesis. In a simplified form, their prior odds (i.e., priors in the first sense) can be described as a ratio like 10:1. This would mean that they deem the alternative hypothesis ten times as likely as the null hypothesis. However, another food company may have prior odds of 3:5 while conducting the same test (i.e., using the same prior plausibilities of model parameters). This shows that priors in the first sense are independent of priors in the second sense: priors in the first sense change with different personal convictions, while priors in the second sense remain constant. Similarly, prior beliefs can change after seeing the data, while the formulation of the model (i.e., what a theory predicts) stays the same. (As long as the theory from which the model specification is derived does not change. In an estimation context, the model parameters are updated by the data.)

The term “prior” has two meanings in the context of Bayesian hypothesis testing. The first one, usually applied in Bayes factor analysis, is equivalent to a prior subjective probability of a hypothesis (“how plausible do you deem a hypothesis compared to another hypothesis before seeing the data”). The second meaning refers to the assumptions made in the specification of the model of the hypotheses which are needed to derive the likelihood function. These two meanings of the term “prior” have to be distinguished clearly during data analysis, especially as they do not depend on each other in any way. Some researchers (e.g., Dienes, 2016) therefore suggest to call only priors in the first sense “priors” and speak about “specification of the model” when referring to the second meaning.

Read the first part of this blog post: Priors as the plausibility of models

Dienes, Z. (2011). Bayesian versus orthodox statistics: Which side are you on? *Perspectives on Psychological Science, 6*(3), 274–290. doi:10.1177/1745691611406920

Dienes, Z. (2016). How Bayes factors change scientific practice. *Journal of Mathematical Psychology, 72*, 78–89. doi:10.1016/j.jmp.2015.10.003

Lindley, D. V. (1993). The analysis of experimental data: The appreciation of tea and wine. *Teaching Statistics, 15*(1), 22–25. doi:10.1111/j.1467-9639.1993.tb00252.x

Rouder, J. N., Morey, R. D., Verhagen, J., Province, J. M., & Wagenmakers, E.-J. (2016a). Is there a free lunch in inference? *Topics in Cognitive Science, 8*, 520–547. doi:10.1111/tops.12214

Rouder, J. N., Morey, R. D., & Wagenmakers, E.-J. (2016b). The interplay between subjectivity, statistical practice, and psychological science. *Collabra, 2*(1), 6–12. doi:10.1525/collabra.28

[cc lang="rsplus" escaped="true"]
library(truncnorm)

# Parameters for the H1 model
M <- 5.1
SD <- 0.5

range <- seq(4.5, 7, by = .01)
plausibility <- dtruncnorm(range, a = 4.5, b = Inf, mean = M, sd = SD)

plot(range, plausibility, type = "l", xlim = c(4, 7), axes = FALSE,
     ylab = "Plausibility", xlab = "Lunch cost in €", mgp = c(2.5, 1, 0))
axis(1)

# Get the axis ranges, draw the y-axis with an arrow head
u <- par("usr")
points(u[1], u[4], pch = 17, xpd = TRUE)
lines(c(u[1], u[1]), c(u[3], u[4]), xpd = TRUE)
abline(v = 4.5, lty = "dotted")

# What is the probability of values > 5.7?
1 - ptruncnorm(5.7, a = 4.5, b = Inf, mean = M, sd = SD)
[/cc]


The post Two meanings of priors, part I: The plausibility of models appeared first on Nicebread.

When reading about Bayesian statistics, you regularly come across terms like “objective priors“, “prior odds”, “prior distribution”, and “normal prior”. However, it may not be intuitively clear that the meaning of “prior” differs between these terms. In fact, there are two meanings of “prior” in the context of Bayesian statistics: (a) prior plausibilities of models, and (b) the quantification of uncertainty about model parameters. As this often leads to confusion for novices in Bayesian statistics, we want to explain these two meanings of priors in the next two blog posts*. The current blog post covers the first meaning of priors (link to part II).

In this context, the term “prior” incorporates the personal assumptions of a researcher on the probability of a hypothesis (p(H1)) relative to a competing hypothesis, which has the probability p(H2). Hence, **the meaning of this prior is “how plausible do you deem a model relative to another model before looking at your data”**. The ratio of the two priors of the models, that is “how probable do you consider H1 compared to H2”, is called “prior odds”: p(H1) / p(H2).

The first meaning of priors is used in the context of Bayes factor analysis, where you compare two different hypotheses. In Bayes factor analysis, the prior odds are updated by the likelihood ratio of the two hypotheses, which contains the information from the data, resulting in the posterior odds (“what you believe after looking at your data”): p(H1 | data) / p(H2 | data) = [p(data | H1) / p(data | H2)] × [p(H1) / p(H2)].

The prior belief is called “subjective”, but this label does not imply that it is “arbitrary”, “unprincipled”, or “irrational”. In contrast, the prior belief can (and preferably should) be informed by previous data or experiences. For example, it can be a belief that started with an equipoise (50/50) position, but has been repeatedly updated by data. But within the bounds of rationality and consistency, people still can come to considerably different prior beliefs, and might have good arguments for their choice – that’s why it is called “subjective”. But initially differing degrees of belief will converge as more and more evidence comes in. We will observe this in the following example.

The **classical experiment of tasting tea** has already been described in the context of Bayesian hypothesis testing by Lindley (1993). We will present a simpler form here. Dr. Muriel Bristol, a scientist working in the field of algae biology who was acquainted with the statistician R. A. Fisher, claimed that she could discriminate whether milk is put in a cup before or after the tea infusion during the process of preparing tea with milk. However, Mr. Fisher considered this very unlikely.

So they decided to run an experiment: Muriel Bristol tasted several cups of tea in a row, making a guess about the preparation procedure for each cup. Unlike in the original story, where frequentist inferential statistics were consulted to solve the disagreement, we will employ Bayesian statistics to track how prior convictions change in this example. If Muriel Bristol makes her guesses based on chance alone, as Mr. Fisher supposes, she has a probability of success of 50% in each trial. Before observing her performance, Mr. Fisher should therefore consider it very likely that she is right about the procedure in about 50% of the trials. We can therefore assume a point hypothesis: H_{Fisher}: Success rate (SR) = 0.5. Muriel Bristol, on the other hand, is very confident in her divination skills. She claims to get 80% of the trials correct, which can equally be captured in a point hypothesis: H_{Muriel}: SR = 0.8.

To introduce prior beliefs about hypotheses and show how they change with upcoming evidence, we want to introduce two additional persons. The first one is a slightly skeptical observer who tends to favor H_{Fisher}, but does not completely rule out that Mrs. Bristol could be right with her hypothesis. More formally, we could describe this position as: P(H_{Fisher}) = 0.6 and P(H_{Muriel}) = 0.4. This means that his prior odds are P(H_{Fisher})/P(H_{Muriel}) = 3:2. Fisher’s hypothesis is 1.5 times more likely to him than Muriel Bristol’s hypothesis.

The second additional person we would like to introduce is William, Muriel Bristol’s loving husband, who fervently advocates her position. He knows his wife would never (or at least very rarely) make wrong claims, concerning tea preparation and all other issues of their marriage. He therefore assigns a much higher subjective probability to her hypothesis (P(H_{Muriel}) = 0.9) than to the one of Mr. Fisher (P(H_{Fisher}) = 0.1). His prior odds are therefore P(H_{Fisher})/P(H_{Muriel}) = 1:9. Please note that the *content* of the hypotheses (the proposed success rates 0.5 and 0.8, which are the parameters of the models) is logically independent of the *probability of the hypotheses (priors)* that our two observers assign.

During the process of hypothesis testing, these two priors are updated with the existing evidence. It is reported that Muriel Bristol’s performance at the experiment was extraordinarily good. We therefore assume that out of the first 6 trials of the experiment she got 5 correct.

With this information, we can now compute the likelihood of the data under each of the hypotheses (for more information on the computation of likelihoods, see Alexander Etz’s blog): with 5 correct guesses out of 6 trials, the binomial likelihoods are p(data | H_{Fisher}) = C(6,5) × 0.5^5 × 0.5 ≈ 0.094 and p(data | H_{Muriel}) = C(6,5) × 0.8^5 × 0.2 ≈ 0.393.

The computation of the likelihoods does not involve the prior model probabilities of our observers. What can be seen is that the data are more likely under Muriel Bristol’s hypothesis than under Mr. Fisher’s. This should not come as a surprise, as Muriel Bristol claimed that she could make a very high percentage of right guesses and the data show a very high percentage of right guesses, whereas Mr. Fisher assumed a much lower percentage. To emphasize this difference in likelihoods and to assign it a numerical value, we can compute the likelihood ratio (Bayes factor): BF = p(data | H_{Muriel}) / p(data | H_{Fisher}) = 0.393 / 0.094 ≈ 4.19.

This ratio means that the data are 4.19 times as likely under Mrs. Bristol’s hypothesis as under Mr. Fisher’s hypothesis. It does not matter how you order the likelihoods in the fraction, the meaning remains constant (see this blog post).
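The 4.19 can be verified directly from the binomial likelihoods (a quick check in Python, not part of the original post):

```python
from scipy import stats

k, n = 5, 6                              # 5 of 6 cups classified correctly
lik_muriel = stats.binom.pmf(k, n, 0.8)  # ≈ 0.393
lik_fisher = stats.binom.pmf(k, n, 0.5)  # ≈ 0.094
bf = lik_muriel / lik_fisher             # ≈ 4.19 in favor of Mrs. Bristol
```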

How does this likelihood ratio change the prior odds of both our slightly skeptical observer and William Bristol? Bayes’ theorem shows that prior odds are updated by multiplying them with the likelihood ratio (Bayes factor): posterior odds = likelihood ratio × prior odds.

First, we will focus on the posterior odds of the slightly skeptical observer. Remember that the slightly skeptical observer had assigned a probability of 0.6 to Mr. Fisher’s hypothesis and a probability of 0.4 to Muriel Bristol’s hypothesis *before* seeing the data, which resulted in prior odds of 3:2 for Mr. Fisher’s hypothesis. How do these convictions change now that the experiment has been conducted? To examine this, we simply have to insert all known values into the equation:

This shows that the prior odds of the slightly skeptical observer changed from 3:2 to posterior odds of 1:2.8. This means that whereas before the experiment the slightly skeptical observer had deemed Mr. Fisher’s hypothesis more plausible than Mrs. Bristol’s hypothesis, he changed his opinion after seeing the data, now preferring Mrs. Bristol’s hypothesis over Mr. Fisher’s.

The same equation can be applied to William Bristol’s prior odds:

What we can notice is that, after taking the data into consideration, both observers’ posterior odds display greater agreement with Muriel Bristol’s hypothesis and reduced confidence in Mr. Fisher’s. Whereas the convictions of the slightly skeptical observer were reversed in favor of Muriel Bristol’s hypothesis after the experiment, William Bristol’s prior convictions were strengthened.
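Both updates can be reproduced in a few lines (a minimal Python sketch; the likelihoods follow from the binomial calculation for 5 correct guesses out of 6 trials, and the variable names are mine):

```python
from math import comb

def binom_likelihood(k, n, p):
    """Probability of exactly k correct guesses in n trials, given success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Likelihood ratio in the direction Fisher : Muriel (about 1/4.19).
lik_ratio = binom_likelihood(5, 6, 0.5) / binom_likelihood(5, 6, 0.8)

skeptic_prior_odds = 0.6 / 0.4  # 3:2 in favor of Mr. Fisher
william_prior_odds = 0.1 / 0.9  # 1:9, strongly favoring Mrs. Bristol

# Bayes' theorem: posterior odds = likelihood ratio * prior odds.
skeptic_post_odds = lik_ratio * skeptic_prior_odds  # ~0.36, i.e. about 1:2.8
william_post_odds = lik_ratio * william_prior_odds  # ~0.026, i.e. about 1:37.7
```

Both posterior odds are below 1 (favoring Mrs. Bristol), but William’s remain far more extreme than the skeptic’s, which reflects the unchanged rank order of the two priors.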

Something else you can notice is that, compared to William Bristol, the slightly skeptical observer still assigns a higher plausibility to Mr. Fisher’s hypothesis. This rank order between the two observers will remain no matter what the data look like: even if Muriel Bristol made, say, 100 out of 100 correct guesses, the slightly skeptical observer would still trust her hypothesis less than her husband does. However, with increasing evidence, the absolute difference between the two observers’ posterior probabilities will shrink more and more.

This blog post explained the first meaning of “prior” in the context of Bayesian statistics: the subjective plausibility a researcher assigns to a hypothesis, compared to another hypothesis, before seeing the data. As illustrated in the tea-tasting example, these prior beliefs are updated with incoming evidence in the research process. In the next blog post, we will explain a second meaning of “priors”: the quantification of uncertainty about model parameters.

Continue reading part II: Quantifying uncertainty about model parameters

*We want to thank Eric-Jan Wagenmakers for helpful comments on a previous version of the post.*

*As a note: Both meanings can in fact be unified, but for didactic purposes we think it makes sense to keep them distinct as a start.*


The post Two meanings of priors, part I: The plausibility of models appeared first on Nicebread.

The post Honoured to receive the Leamer-Rosenthal-Prize appeared first on Nicebread.

BITSS has become one of the central hubs of the global open science movement and does a great job by providing open educational resources (e.g., “Tools and Resources for Data Curation” or “How to Write a Pre-Analysis Plan”), offering grants, running the open science Catalysts program, and hosting its annual meeting in San Francisco.

Other recipients of the prize were Eric-Jan Wagenmakers, Lorena Barba, Zacharie Tsala Dimbuene, Abel Brodeur, Elaine Toomey, Graeme Blair, Beth Baribault, Michèle Nuijten, and Sacha Epskamp.

I am optimistically looking into a future of credible, reproducible, and transparent research. Stay tuned for some news from our work here at the department’s Open Science Committee at LMU Munich!

The post Open Science and research quality at the German conference on psychology (DGPs congress in Leipzig) appeared first on Nicebread.

Therefore I am quite happy that this topic now has a really prominent place at the current conference. Here’s a list of some talks and events focusing on Open Science, research transparency, and what a future science could look like – see you there!

From the abstract:

In this symposium we discuss issues related to reproducibility and trust in psychological science. In the first talk, Jelte Wicherts will present some empirical results from meta-science that perhaps lower the trust in psychological science. Next, Coosje Veldkamp will discuss results bearing on actual public trust in psychological science and in psychologists from an international perspective. After that, Felix Schönbrodt and Chris Chambers will present innovations that could strengthen reproducibility in psychology. Felix Schönbrodt will present Sequential Bayes Factors as a novel method to collect and analyze psychological data and Chris Chambers will discuss Registered Reports as a means to prevent p-hacking and publication bias. We end with a general discussion.

- Reproducibility problems in psychology: what would Wundt think? (*Jelte Wicherts*)
- Trust in psychology and psychologists (*Coosje Veldkamp*)
- Never underpowered again: Sequential Bayes Factors guarantee compelling evidence (*Felix Schönbrodt*)
- The Registered Reports project: Three years on (*Chris Chambers*)

For details on the talks, see here.

The currency of science is publishing. Producing novel, positive, and clean results maximizes the likelihood of publishing success because those are the best kind of results. There are multiple ways to produce such results: (1) be a genius, (2) be lucky, (3) be patient, or (4) employ flexible analytic and selective reporting practices to manufacture beauty. In a competitive marketplace with minimal accountability, it is hard to resist (4). But, there is a way. With results, beauty is contingent on what is known about their origin. With methodology, if it looks beautiful, it is beautiful. The only way to be rewarded for something other than the results is to make transparent how they were obtained. With openness, I won’t stop aiming for beautiful papers, but when I get them, it will be clear that I earned them.

Moderation: Manfred Schmitt

Discussants: Manfred Schmitt, Andrea Abele-Brehm, Klaus Fiedler, Kai Jonas, Brian Nosek, Felix Schönbrodt, Rolf Ulrich, Jelte Wicherts

When replicated, many findings seem to either diminish in magnitude or disappear altogether, as, for instance, recently shown in the Reproducibility Project: Psychology. Several reasons for false-positive results in psychology have been identified (e.g., p-hacking, selective reporting, underpowered studies) and call for reforms across the whole range of academic practices. These range from (1) new journal policies promoting an open research culture to (2) hiring, tenure, and funding criteria that reward credibility and replicability rather than sexiness and quantity to (3) actions for increasing transparent and open research practices within and across individual labs. Following Brian Nosek’s (Center for Open Science) keynote, titled “Addressing the Reproducibility of Psychological Science”, this panel discussion aims to explore the various ways in which our field may take advantage of the current debate. That is, the focus of the discussion will be on effective ways of improving the quality of psychological research in the future. Seven invited discussants provide insights into different current activities aimed at improving scientific practice and will discuss their potential. The audience will be invited to contribute to the discussion.

I will present the new data management guidelines of the German Psychological Society (DGPs). They will be published soon, but here’s the gist: open by default (raw data are an essential part of a publication); exceptions must be justified. Furthermore, we define norms for data reuse. Stay tuned on this blog for more details!

[…] In any case, a reanalysis of the data must result in similar or identical results.

[…] In this talk, we present a method that is well suited for writing reproducible academic papers: a combination of LaTeX, R, knitr, Git, and Pandoc. These software tools are robust, well established, and no more than reasonably complex. Additional approaches, such as using word processors (MS Word), Markdown, or online collaborative writing tools (Authorea), are presented briefly. The presentation is based on a practical software demonstration. A GitHub repository for easy reproducibility is provided.

These are not all the sessions on this topic – go to http://www.dgpskongress.de/frontend/index.php# and search for “ASSURING THE QUALITY OF PSYCHOLOGICAL RESEARCH” to see all associated sessions. Unfortunately, the congress CMS does not allow direct linking to sessions, so you have to search for them yourself.

Want to meet me at the conference? Write me an email, or send me a PM on Twitter.

The post Introducing the p-hacker app: Train your expert p-hacking skills appeared first on Nicebread.

“If you torture the data long enough, it will confess.”

This aphorism, attributed to Ronald Coase, has sometimes been used in a disrespectful manner, as if it were wrong to do creative data analysis.

In fact, the art of creative data analysis has experienced despicable attacks over the last few years. A small but annoyingly persistent group of second-stringers tries to denigrate our scientific achievements. They drag psychological science through the mire.

These people propagate stupid method repetitions; and what was once one of the supreme disciplines of scientific investigation – a creative data analysis of a data set – has been crippled to conducting an empty-headed step-by-step pre-registered analysis plan. (Come on: If I lay out the full analysis plan in a pre-registration, even an *undergrad* student can do the final analysis, right? Is that really the high-level scientific work we were trained for so hard?).

They broadcast at an annoying frequency that p-hacking leads to more significant results, and that researchers who use *p*-hacking have higher chances of getting things published.
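To be fair, the second-stringers have the numbers on their side: repeated sampling until significance exploits the alpha error accumulation described above. A minimal Python simulation (not part of the p-hacker app; for simplicity it uses a two-group z-test with known variance, and the function name is mine) shows how a single extra "look" at the data inflates the false-positive rate under the null:

```python
import math
import random

def optional_stopping_rate(n_experiments=5000, n1=20, n_extra=10, seed=1):
    """Estimate the false-positive rate when a non-significant two-group
    z-test (alpha = .05) is followed by adding participants and re-testing."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_experiments):
        # Both groups come from the same distribution: the null is true.
        a = [rng.gauss(0, 1) for _ in range(n1)]
        b = [rng.gauss(0, 1) for _ in range(n1)]
        # First look after n1 participants per group (known sigma = 1).
        z = (sum(a) / len(a) - sum(b) / len(b)) / math.sqrt(2 / len(a))
        if abs(z) <= 1.96:
            # Not significant? Sample again and take a second look.
            a += [rng.gauss(0, 1) for _ in range(n_extra)]
            b += [rng.gauss(0, 1) for _ in range(n_extra)]
            z = (sum(a) / len(a) - sum(b) / len(b)) / math.sqrt(2 / len(a))
        if abs(z) > 1.96:
            false_positives += 1
    return false_positives / n_experiments

rate = optional_stopping_rate()
print(rate)  # noticeably above the nominal .05
```

Even this mild version of optional stopping (one extra batch, two looks) pushes the effective alpha level well past .05; more looks inflate it further.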

What are the consequences of these findings? The answer is clear: everybody should be equipped with these powerful tools of research enhancement!

Some researchers describe a performance-oriented data analysis as “data-dependent analysis”. We go one step further, and call this technique *data-optimal analysis (DOA)*, as our goal is to produce the optimal, most significant outcome from a data set.

I developed an **online app that allows you to practice creative data analysis and polish your *p*-values**. It’s primarily aimed at young researchers who do not have our level of expertise yet, but I guess even old hands might learn one or two new tricks! It’s called “The p-hacker” (please note that ‘hacker’ is meant in a very positive way here; you should think of the cool hackers who fight for world peace). You can use the app in teaching, or to practice *p*-hacking yourself.

Please test the app, and give me feedback! You can also send it to colleagues: http://shinyapps.org/apps/p-hacker

The full R code for this Shiny app is on GitHub.

Here’s a quick walkthrough of the app. Please see also the quick manual at the top of the app for more details.

First, you have to run an initial study in the “New study” tab:

Once you have run your first study, inspect the results in the middle pane. Let’s take a look at our results, which are quite promising:

Sometimes outlier exclusion is not enough to improve your result.

Now comes the magic. Click on the “Now: p-hack!” tab – **this gives you all the great tools to improve your current study**. Here you can fully utilize your data analytic skills and creativity.

In the following example, we could not get a significant result by outlier exclusion alone. But after adding 10 participants (in two batches of 5), controlling for age and gender, and focusing on the variable that worked best – voilà!
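The boost from focusing on the variable that worked best can even be quantified: with k independent DVs each tested at alpha = .05 under the null hypothesis, the chance that at least one turns out significant is 1 − 0.95^k (a quick sketch with a helper name of my own; real DVs are usually correlated, which dampens the inflation somewhat):

```python
def alpha_inflated(k, alpha=0.05):
    """Chance of at least one significant result among k independent
    tests at level alpha when the null hypothesis is true for all of them."""
    return 1 - (1 - alpha) ** k

for k in (1, 2, 3, 5):
    print(k, round(alpha_inflated(k), 3))
# e.g. k = 3 -> 0.143: almost three times the nominal alpha level
```

With only three DVs to pick the best from, the effective alpha level nearly triples before any outlier exclusion or extra sampling is applied.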

Do you see how easy it is to craft a significant study?

Now it is important to **show even more productivity**: go for the next conceptual replication (i.e., go back to step 1 and collect a new sample, with a new manipulation and a new DV). Whenever your study reaches significance, click on the *Save* button next to each DV and the study is saved to your stack, awaiting some additional conceptual replications that show the robustness of the effect.

Honor to whom honor is due: Find the best outlet for your achievements!

My friends, let’s stand together and Make Psychological Science Great Again! I really hope that the *p*-hacker app can play its part in bringing psychological science back to its old days of glory.

Best regards,

PS: A similar app can be found on FiveThirtyEight: Hack Your Way To Scientific Glory
