A (non-viral) copyleft/sharealike license for open research data

by Felix Schönbrodt & Roland Ramthun

The open availability of scientific material (such as research data, code, or other artifacts) has often been identified as a cornerstone of trustworthy, reproducible, and verifiable science. At the same time, the actual availability of such material is still scarce (though on the rise).

To increase the availability of open scientific material, we propose a license for scientific research data that in turn increases the availability of other open scientific material. It borrows a mechanism from open source software development: the application of copyleft (or, in CC terminology, “sharealike”) licenses. These are so-called “sticky” licenses, because they require that every reuse of the licensed material carries the same license. This means that if you reuse material under such a license, your own product/derivative must also (a) be freely reusable and (b) carry that license, so that any derivative of your product is free as well, ad infinitum.

The promise of such a “viral” license is that it can inject more and more freedom into a system. It is meant as a strategy to reform the environment: the more artifacts carry a copyleft license, the more likely it is that future products carry the same license, until, in the end, everything is free.

[Image: virus illustration by Phoebus87 (https://de.wikipedia.org/wiki/Datei:Symian_virus.png), illustrating the idea of a “viral” license]

One criticism of such licenses stems from the definition of “freedom”: according to this point of view, the highest degree of freedom is being able to do anything with the material. This includes commercial usage, which is usually kept closed for competitive reasons, or integrating the material into a larger dataset that itself cannot be open because other parts of the data have restrictive licenses. We are not lawyers, but in our understanding such restrictions could, for example, also arise from privacy rights.

For example, imagine the compilation of an integrative database that includes both material from a copyleft source and material from another source containing individual-related data, which cannot be openly shared due to privacy rights (but could be shared as a restricted scientific use file). At least in our understanding, a strict copyleft license would preclude reuse in such a restricted way. Hence the copyleft license, although claiming to ensure freedom, precludes many potential reuse scenarios. From this point of view, a so-called permissive license (such as CC0, MIT, or BSD) provides more freedom than a copyleft license (see, e.g., The Whys and Hows of Licensing Scientific Code).

We propose a system that addresses both points of view, with the goal of providing some stickiness of open scientific sharing while still allowing work with scientific material that requires restrictions, for example due to privacy rights.

The proposed copyleft license for open data: Open data requires open analysis code.

We suggest the following clause for the reuse of open research data:

Upon publication of any scientific work under a broad definition (including, but not limited to journal papers, books or book chapters, conference proceedings, blog posts) that is based in full or in part on this data set, all data analysis scripts involved in the creation of this work must be made openly available under a license that allows reuse (e.g., BSD or MIT).

(Of course, more topics must be addressed in the license, such as the obligation to properly cite the authors of the data set, not to attempt to re-identify research participants, etc. But we focus only on the copyleft aspect here.)

This system has some differences from traditional copyleft licenses.

  • First, a reuser usually has to share the derivative work, which is typically of the same category as the reused material (you reuse a piece of software and have to share your own software product under an open license). In this proposal, you reuse open data and have to share open analysis code. Hence, you support the openness of the community in another currency. Because no derived data sets need to be published, integration scenarios combining usually incompatible open and closed data become possible.
  • Second, it restricts the copyleft property to a certain type of reuse, namely the creation of scientific work. This ensures, on the one hand, that open knowledge grows and that scientific claims are verifiable to a larger extent than before. On the other hand, commercial reuse is enabled; furthermore, there might be non-scientific reuse scenarios that do not involve analysis code, where the clause simply does not apply. Finally, even the most restrictive data set (where you have to go to a repository operator and analyze the data on dedicated computers in a secure room) can generate open derivatives.
  • Third, the license is not sticky: the published open analysis code itself does not require a copyleft license when it is reused; instead, it carries a permissive license.

Against the “research parasite” argument

The proposed system offers some protection against the “research parasites” argument. The parasite discussion refers to the free-rider problem in social dilemmas: While some people invest resources to provide a public good, others (the parasites/free-riders) profit from the public good, without giving back to the community (see also Linek et al., 2017). This often creates a feeling of injustice, and impulses to punish the free-riders. (An entire scientific field is devoted to the structural, sociological, political, and psychological properties and consequences of such social dilemma structures.)

In the proposed licensing system, those who profit from openness by reusing open data must give something back to the community. This increases overall openness, reusability, and reproducibility of scientific outputs, and probably decreases feelings of exploitation and unfairness for the data providers.

Do you think such a license would work? Do you see any drawbacks we didn’t think of?

You can leave feedback here as a comment, on Twitter (@nicebread303) or via email to felix@nicebread.de.


Differentiate the Power Motive into Dominance, Prestige, and Leadership: New Tool and Theory

This is a guest post from Felix Suessenbach.

DOWNLOAD THE DOMINANCE, PRESTIGE, AND LEADERSHIP SCALES HERE:

What is the dominance, prestige, and leadership account of the power motive?

Researchers of motivational psychology have long struggled with the power motive’s heterogeneous definition, which encompasses elements such as desires for dominance, reputation, prestige, leadership, and status (e.g., Winter, 1973). This heterogeneity has likely been responsible for researchers finding different relationships between the power motive and external variables depending on which power motive scale they used (e.g., Engeser & Langens, 2010). Thus, to provide a long-needed taxonomy of clearly distinguishable power motive components, we developed the dominance, prestige, and leadership (DoPL) account of social power motives. In particular, we differentiate between:

  • The dominance motive, defined as a desire for coercive power obtained through threats, intimidation, or deception

  • The prestige motive, defined as a desire for voluntary deference obtained through others’ admiration and respect particularly for one’s valued skills and knowledge

  • The leadership motive, defined as a desire for legitimised power granted by one’s group and obtained through taking responsibility in and for this group

In contrast to previous attempts to differentiate power motive components (e.g., socialised and personalised power; McClelland, 1970), the DoPL account of social power motives is based on a solid theoretical framework adapted from research on social hierarchies (e.g., Cheng, Tracy, & Henrich, 2010; Henrich & Gil-White, 2001). Thus, the DoPL account does not suffer from strongly diverging interpretations of how these components manifest themselves.

Empirical results:

Using newly developed DoPL questionnaires, we showed that the DoPL motives can be measured both reliably and distinctly (study 1). Moreover, we showed that these DoPL motives relate strongly to a common power desire (study 2), explaining more than 80% of the variance in two established power motive scales (UMS power, Schönbrodt & Gerstenberg, 2012; PRF dominance, Jackson, 1984). Assessing their nomological networks (studies 3 & 4), we demonstrated distinct associations, such as between…

  • the dominance motive and self-reported anger and verbal aggression

  • the prestige motive and self-reported fear of losing reputation and claiming to have higher moral concerns

  • the leadership motive and self-reported emotional stability and helping behaviour

Regarding observed behaviour and other external variables (studies 5 to 7), we found:

  • The dominance motive uniquely and negatively predicted the amount of money given to another player in a dictator game after having received nothing in two previous dictator games. This effect can be explained by a combination of general agonistic tendencies and retaliatory desires related to the dominance motive.

  • The leadership motive uniquely predicted the attainment of higher employment ranks across all kinds of professions. This effect was somewhat stronger in females, which might be explained by discrimination against females regarding promotions, such that females have to compensate by being more highly motivated to reach high leadership positions.

  • When donating behaviour to a charity was made overt, residualised dominance motives (i.e., controlling for shared prestige and leadership influences) related negatively to both the overall proportion donated to a charity and the probability of donating. Whereas residualised leadership motives related positively only to the overall amount donated, residualised prestige motives related positively only to the probability of donating. Thus, to some degree, dominance desires relate negatively, and leadership and prestige desires positively, to prosocial donating behaviour.

Conclusions:

This research shows that different power motive components in many (but not all) cases relate differently to a range of external variables. Thus, to improve the prediction of influential power-relevant behaviour as a function of individuals’ power desires, we invite researchers to employ this novel taxonomy of power motives to further advance this important field of research.

Bibliography:

Cheng, J. T., Tracy, J. L., & Henrich, J. (2010). Pride, personality, and the evolutionary foundations of human social status. Evolution and Human Behavior, 31, 334–347. https://doi.org/10.1016/

Engeser, S., & Langens, T. (2010). Mapping explicit social motives of achievement, power, and affiliation onto the five-factor model of personality. Scandinavian Journal of Psychology, 51, 309–318. https://doi.org/10.1111/j.1467-9450.2009.00773.x

Henrich, J., & Gil-White, F. J. (2001). The evolution of prestige: Freely conferred deference as a mechanism for enhancing the benefits of cultural transmission. Evolution and Human Behavior, 22, 165–196. https://doi.org/10.1016/S1090-5138(00)00071-4

Jackson, D. N. (1984). Personality research form manual (3rd ed.). Port Huron: Research Psychologists Press.

McClelland, D. C. (1970). The two faces of power. Journal of International Affairs, 24, 29–47.

Schönbrodt, F. D., & Gerstenberg, F. X. R. (2012). An IRT analysis of motive questionnaires: The unified motive scales. Journal of Research in Personality, 46, 725–742. https://doi.org/10.1016/j.jrp.2012.08.010 [Free PDF on OSF]

Winter, D. G. (1973). The power motive. New York: The Free Press.


Gazing into the Abyss of P-Hacking: HARKing vs. Optional Stopping

by Angelika Stefan & Felix Schönbrodt

Almost all researchers have experienced the tingling feeling of suspense that arises right before they take a look at long-awaited data: Will they support their favored hypothesis? Will they yield interesting or even groundbreaking results? In a perfect world (especially one without publication bias), the cause of this suspense should be nothing but scientific curiosity. However, the world, and specifically the incentive system in science, is not perfect. A lot of pressure rests on researchers to produce statistically significant results. For many researchers, statistical significance is the cornerstone of their academic career, so non-significant results in an important study can not only question their scientific convictions but also crush their hopes of professional advancement (although, fortunately, things are changing for the better).

Now, what does a researcher do when confronted with messy, non-significant results? According to several much-cited studies (for example, John et al., 2012; Simmons et al., 2011), a common reaction is to start sampling again (and again, and again, …) in the hope that a somewhat larger sample size can boost significance. Another reaction is to wildly conduct hypothesis tests on the existing sample until at least one of them becomes significant (see, for example, Simmons et al., 2011; Kerr, 1998). These practices, along with some others, are commonly known as p-hacking, because they are designed to drag the famous p-value right below the mark of .05 that usually indicates statistical significance. Undisputedly, p-hacking works (for a demonstration, try out the p-hacker app). The two questions we want to answer in this blog post are: How does it work, and why is that bad for science?

As many people may have heard, p-hacking works because it exploits a process called alpha error accumulation which is covered in most introductory statistics classes (but also easily forgotten again). Basically, alpha error accumulation means that as one conducts more and more hypothesis tests, the probability increases that one makes a wrong test decision at least once. Specifically, this wrong test decision is a false positive decision or alpha error, which means that you proclaim the existence of an effect although, in fact, there is none. Speaking in statistical terms, an alpha error occurs when the test yields a significant result although the null hypothesis (“There is no effect”) is true in the population. This means that p-hacking leads to the publication of an increased rate of false positive results, that is, studies that claim to have found an effect although, in fact, their result is just due to the randomness of the data. Such studies will never replicate.

At this point, the blog post could be over. P-hacking exploits alpha error accumulation and fosters the publication of false positive results, which is bad for science. However, we want to take a closer look at how bad it really is. In fact, some p-hacking techniques are worse than others (or, if you like the unscrupulous science villain perspective: some p-hacking techniques work better than others). As a showcase, we want to introduce two researchers: The HARKer takes existing data and conducts multiple independent hypothesis tests (based on multiple uncorrelated variables in the data set) with the goal of publishing the ones that become significant. For example, the HARKer tests for each possible correlation in a large data set whether it differs significantly from zero. The Accumulator, on the other hand, uses optional stopping. This means that he collects data for a single research question in a sequential manner until either statistical significance or a maximum sample size is reached. For simplicity, we assume that neither of them uses any other p-hacking techniques or questionable research practices.

The HARKer’s p-hacking strategy

Let us start with the HARKer: since the hypothesis tests in our defined scenario are essentially independent, the situation can be seen as a problem of multiple testing. This means that it is comparatively easy to determine the exact probability that the HARKer will end up with at least one false positive result, given a certain number of hypothesis tests. Assuming no effects in the population (for example, no correlation between the variables), one can picture the situation as a decision tree: at each branch level stands a hypothesis test, which can either yield a non-significant result with 95% probability or a (spurious) significant result with 5% probability, which is the \alpha level.

[Figure: decision tree depicting alpha error accumulation with multiple testing / HARKing]

No matter how many hypothesis tests the HARKer conducts, there will only be one outcome in the all-null scenario where no \alpha error occurs, namely that all hypothesis tests yield non-significant results. The probability that this occurs can be calculated by 0.95 \cdot 0.95 \cdot 0.95 \cdot ... \cdot 0.95 = 0.95^x, with x being the number of conducted hypothesis tests. The probability that at least one of the hypothesis tests is significant is the probability of the complementary event, that is, 1-(0.95^x). For example, when the HARKer computes 10 hypothesis tests with an alpha level of 0.05, the overall probability to obtain at least one false positive result is 1-(0.95^{10}) = 0.401. Of course, the formula can be adjusted for other suggested alpha levels, such as \alpha = 0.005 or \alpha = 0.1. We show this general formula in the R-code chunk below.

# The problem of HARKing

harker <- function(x, alpha){
  # Probability of at least one false positive among x independent tests at level alpha
  1 - (1 - alpha)^x
}
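As a quick check (our own addition, not part of the original post), the function reproduces the number computed above for ten tests at \alpha = 0.05, and a small simulation of the HARKer’s actual strategy — testing every pairwise correlation among several uncorrelated variables — lands in the same ballpark. Names such as harking.sim and n.vars are ours.

harker(10, alpha=0.05)
# [1] 0.4012631

# Our own sketch: HARKing on the correlation matrix of uncorrelated variables.
# How often is at least one pairwise correlation test significant although
# all true correlations are zero?
harking.sim <- function(n=100, n.vars=5, alpha=0.05, iter=2000){
  hits <- replicate(iter, {
    dat <- matrix(rnorm(n * n.vars), ncol=n.vars)
    pairs <- combn(n.vars, 2)  # all choose(n.vars, 2) variable pairs
    p <- apply(pairs, 2, function(idx) cor.test(dat[, idx[1]], dat[, idx[2]])$p.value)
    any(p < alpha)
  })
  mean(hits)  # proportion of iterations with at least one false positive
}

# With 5 variables there are choose(5, 2) = 10 tests, so the simulated rate
# should be close to harker(10, 0.05) = 0.40
# harking.sim()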

The Accumulator’s p-hacking strategy

The Accumulator has a different tactic: instead of conducting multiple hypothesis tests on different variables of one data set, he repeatedly conducts the same hypothesis test on the same variables in a growing sample. Starting with a minimum sample size, the Accumulator looks at the results of the analysis; if these are significant, data collection is stopped; if not, more data is collected until (finally) the results become significant or a maximum sample size is reached. This strategy is also called optional stopping. Of course, the more often a researcher peeks at the data, the higher the probability to obtain a false positive result at least once. However, this overall probability is not the same as the one obtained through HARKing. The reason is that the hypothesis tests are not independent in this case. Why is that? The same hypothesis test is repeatedly conducted on only slightly different data. In fact, the data that were used in the first hypothesis test are used in every single one of the subsequent hypothesis tests, so there is a spillover effect of the first test to every other hypothesis test in the set. Imagine your initial sample contains an outlier: this outlier will affect the test results in every other test. With multiple testing, in contrast, the outlier will affect only the test in question but none of the other tests in the set.

So does this dependency make optional stopping more or less effective than HARKing? Of course, people have been wondering about this for quite a while. A paper by Armitage et al. (1969) demonstrates error accumulation under optional stopping for three different tests. We can replicate their results for the z-test with a small simulation (a more flexible simulation can be found at the end of the blog post): We start by drawing a large number of samples (iter) of the maximum sample size (n.max) from the null hypothesis. Then we conduct a sequential testing procedure on each of the samples, starting with a minimum sample size (n.min) and going up in steps of a given size (step) to the maximum sample size. The probability to obtain a significant result at least once up to a certain step can be estimated as the percentage of iterations that show at least one significant result up to that point.

# The Problem of optional stopping

accumulator <- function(n.min, n.max, step, alpha=0.05, iter=10000){
  # Note: |z| is compared against qnorm(1-alpha), so for a two-sided test
  # with overall level .05, pass alpha = 0.025.

  # Determine places of peeks
  peeks <- seq(n.min, n.max, by=step)

  # Initialize result matrix (one row per iteration, one column per peek)
  res <- matrix(NA, ncol=length(peeks), nrow=iter)
  colnames(res) <- peeks

  # Conduct sequential testing (always until n.max, with peeks at pre-determined places)
  for(i in 1:iter){
    sample <- rnorm(n.max, 0, 1)
    # z statistic of the one-sample z-test at each peek: sum(x)/sqrt(n)
    res[i,] <- sapply(peeks, FUN=function(x){sum(sample[1:x])/sqrt(x)})
  }

  # Create matrix: Which individual tests are significant?
  signif <- abs(res) >= qnorm(1-alpha)

  # Create matrix: Has there been at least one significant test up to (and including) each peek?
  seq.signif <- matrix(NA, ncol=length(peeks), nrow=iter)

  for(i in 1:iter){
    for(j in 1:ncol(signif)){
      seq.signif[i,j] <- TRUE %in% signif[i, 1:j]
    }
  }

  # Determine the sequential alpha error probability for the sequential tests
  seq.alpha <- apply(seq.signif, MARGIN = 2, function(x){sum(x)/iter})

  # Return the sequential alpha error probability
  return(list(seq.alpha = seq.alpha))
}

For example, suppose the researcher conducts a two-sided one-sample z-test with an overall \alpha level of .05 in a sequential way. He starts with 10 observations, then always adds another 10 if the result is not significant, up to a maximum of 100 observations. This means that he has 10 chances to peek at the data and to end the data collection if the hypothesis test is significant. Using our simulation function, we can determine the probability of having obtained at least one false positive result at any of these steps (note that we pass alpha = 0.025 because the function compares |z| against qnorm(1-alpha), which corresponds to a two-sided test at the .05 level):

set.seed(1234567)
res.optstopp <- accumulator(n.min=10, n.max=100, step=10, alpha=0.025, iter=10000)
print(res.optstopp[[1]])
[1] 0.0492 0.0824 0.1075 0.1278 0.1431 0.1574 0.1701 0.1788 0.1873 0.1949

We can see that with one single evaluation, the false positive rate is at the nominal 5%. However, when more in-between tests are calculated, the false positive rate rises to roughly 20% with ten peeks. This means that even if there is no effect at all in the population, the researcher would have stopped data collection with a significant result in about 20% of the cases.

A comparison of the HARKer’s and the Accumulator’s strategy

Let’s compare the false positive rates of HARKing and optional stopping: Since the researcher in our example above conducts one to ten dependent hypothesis tests, we can compare this to a situation where a HARKer conducts one to ten independent hypothesis tests. The figure below shows the results of both p-hacking strategies:

# HARKing False Positive Rates

HARKs <- harker(1:10, alpha=0.05)
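For completeness, here is a minimal plotting sketch (our own addition, not the code that produced the original figure); it assumes the objects HARKs and res.optstopp from the chunks above are still in the workspace.

# Compare the two false positive rate curves
plot(1:10, HARKs, type="b", pch=16, ylim=c(0, 0.45),
     xlab="Number of tests / peeks", ylab="False positive rate",
     main="HARKing vs. optional stopping")
lines(1:10, res.optstopp$seq.alpha, type="b", pch=17, lty=2)
abline(h=0.05, lty=3)
legend("topleft", legend=c("HARKing (independent tests)", "Optional stopping"),
       pch=c(16, 17), lty=c(1, 2))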

[Figure: p-hacking efficiency with optional stopping (as described in the blog post) and HARKing]

We can see that HARKing produces higher false positive rates than optional stopping with the same number of tests. This can be explained through the dependency on the first sample in the case of optional stopping: given that the null hypothesis is true, this sample is not very likely to show extreme effects in any direction (although there is a small probability that it does). Every extension of this sample has to “overcome” this property, not only by being extreme in itself but by being extreme enough to shift the test on the overall sample from non-significance to significance. In contrast, every sample in the multiple testing case only needs to be extreme in itself. Note, however, that false positive rates in optional stopping depend not only on the number of interim peeks, but also on the size of the initial sample and on the step size (how many observations are added between two peeks?). The difference between multiple testing and optional stopping that you see in the figure above is therefore only valid for this specific case (the sketch below illustrates this dependence). Going back to the two researchers from our example, we can say that the HARKer has a better chance to come up with significant results than the Accumulator, if both do the same number of hypothesis tests.
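To illustrate this dependence (our own addition, reusing the accumulator function from above), the following sketch compares two sequential designs with the same number of peeks (ten) but different starting points and step sizes. Because the later, smaller steps make consecutive tests more strongly dependent, the cumulative false positive rate of the second design should end up lower.

# Our own sketch: same number of peeks, different initial sample and step sizes
set.seed(42)
wide   <- accumulator(n.min=10, n.max=100, step=10, alpha=0.025, iter=5000)  # peek every 10 obs from n = 10
narrow <- accumulator(n.min=55, n.max=100, step=5,  alpha=0.025, iter=5000)  # peek every 5 obs from n = 55
round(rbind(wide = wide$seq.alpha, narrow = narrow$seq.alpha), 3)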

Practice HARKing and Optional Stopping yourself

You can use the interactive p-hacker app to experience the efficiency of both p-hacking strategies yourself: you can increase the number of dependent variables and see whether one of them becomes significant (HARKing), or you can go to the “Now: p-hack!” tab and increase your sample until you obtain significance. Note that the DVs in the p-hacker app are not completely independent as in our example above, but rather correlate with r = .2, assuming that the DVs to some extent measure at least related constructs; a simulation sketch of this correlated-DV case follows below.
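A rough sketch of this situation (our own addition; the simplification to one-sample t-tests and names such as harking.corr.sim and rho are ours) uses MASS::mvrnorm to draw several DVs that correlate with r = .2 and checks how often at least one of them turns out significant, compared with fully independent DVs:

# Our own sketch: HARKing across several DVs that correlate with rho
library(MASS)

harking.corr.sim <- function(n=50, n.dvs=10, rho=0.2, alpha=0.05, iter=2000){
  # Equicorrelation matrix of the DVs
  Sigma <- matrix(rho, n.dvs, n.dvs)
  diag(Sigma) <- 1
  hits <- replicate(iter, {
    dvs <- mvrnorm(n, mu=rep(0, n.dvs), Sigma=Sigma)
    # One one-sample t-test per DV; HARKing "succeeds" if any p-value < alpha
    p <- apply(dvs, 2, function(x) t.test(x)$p.value)
    any(p < alpha)
  })
  mean(hits)
}

# Independent DVs vs. DVs correlating with r = .2
# harking.corr.sim(rho=0); harking.corr.sim(rho=0.2)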

Conclusion

To conclude, we have shown how two p-hacking techniques work and why their application is bad for science. We found that p-hacking techniques based on multiple testing typically end up with higher rates of false positive results than p-hacking techniques based on optional stopping, if we assume the same number of hypothesis tests. We want to stress that this does not mean that naive optional stopping is okay (or even okay-ish) in frequentist statistics, even if it does have a certain appeal. For those who want to do guilt-free optional stopping, there are ways to control for the \alpha error accumulation in the frequentist framework (see, for example, Wald, 1945; Chow & Chang, 2008; Lakens, 2014) and sequential Bayesian hypothesis tests (see, for example, our paper on sequential hypothesis testing with Bayes factors, or Rouder, 2014).
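As a crude illustration of such a correction (our own addition, not one of the procedures cited above), a Bonferroni-style adjustment divides the desired overall \alpha by the number of planned peeks; rerunning the simulation from above with this stricter per-peek level keeps the cumulative false positive rate below .05, at the price of being conservative because the tests are dependent:

# Our own sketch: Bonferroni-corrected peeking (10 peeks, overall two-sided alpha = .05)
set.seed(1234567)
res.bonf <- accumulator(n.min=10, n.max=100, step=10, alpha=0.025/10, iter=10000)
round(res.bonf$seq.alpha, 3)  # stays below .05 even at the tenth peek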

Alternative Simulation Code (also including one-sided tests and t-tests)

sim.optstopping <- function(n.min, n.max, step, alpha=0.05, test="z.test", alternative="two.sided", iter=10000){

  # Validate arguments
  match.arg(test, choices=c("t.test", "z.test"))
  match.arg(alternative, choices=c("two.sided", "directional"))

  # Determine places of peeks
  peeks <- seq(n.min, n.max, by=step)

  # Initialize result matrix (one row per iteration, one column per peek)
  res <- matrix(NA, ncol=length(peeks), nrow=iter)
  colnames(res) <- peeks

  # Conduct sequential testing (always until n.max, with peeks at pre-determined places)
  for(i in 1:iter){
    sample <- rnorm(n.max, 0, 1)
    if(test=="t.test"){res[i,] <- sapply(peeks, FUN=function(x){mean(sample[1:x])/sd(sample[1:x])*sqrt(x)})}
    if(test=="z.test"){res[i,] <- sapply(peeks, FUN=function(x){sum(sample[1:x])/sqrt(x)})}
  }

  # Create matrix: Which individual tests are significant?
  if(test=="z.test"){
    if(alternative=="two.sided"){
      signif <- abs(res) >= qnorm(1-alpha)
    } else {
      signif <- res <= qnorm(alpha)
    }
  } else if(test=="t.test"){
    n <- matrix(rep(peeks, iter), nrow=iter, byrow=TRUE)
    if(alternative=="two.sided"){
      signif <- abs(res) >= qt(1-alpha, df=n-1)
    } else {
      signif <- res <= qt(alpha, df=n-1)
    }
  }

  # Create matrix: Has there been at least one significant test up to (and including) each peek?
  seq.signif <- matrix(NA, ncol=length(peeks), nrow=iter)

  for(i in 1:iter){
    for(j in 1:ncol(signif)){
      seq.signif[i,j] <- TRUE %in% signif[i, 1:j]
    }
  }

  # Determine the sequential alpha error probability for the sequential tests
  seq.alpha <- apply(seq.signif, MARGIN = 2, function(x){sum(x)/iter})

  # Return a list of the individual test statistics (list element p.values), the sequential significance matrix, and the sequential alpha error probability
  return(list(p.values = res,
              seq.significance = seq.signif,
              seq.alpha = seq.alpha))
}
