Correcting bias in meta-analyses: What not to do (meta-showdown Part 1)

tl;dr: Publication bias and p-hacking can dramatically inflate effect size estimates in meta-analyses. Many methods have been proposed to correct for such bias and to estimate the underlying true effect. In a large simulation study, we investigated which methods do (and do not) work well under which conditions, and we give some recommendations on what not to use.
Estimated reading time: 7 min.

It is well known that publication bias and p-hacking inflate effect size estimates from meta-analyses. In recent years, methodologists have developed an ever-growing menu of statistical approaches to correct for such overestimation. To date, however, it has been unclear under which conditions these approaches perform well and what to do when they disagree. Born out of a Twitter discussion, Evan Carter, Joe Hilgard, Will Gervais, and I ran a large simulation project in which we compared the performance of naive random effects meta-analysis (RE), trim-and-fill (TF), p-curve, p-uniform, PET, PEESE, PET-PEESE, and the three-parameter selection model (3PSM).

Previous investigations typically looked at either publication bias or questionable research practices (QRPs), but not both, used non-representative study-level sample sizes, or compared only a few of the bias-correcting techniques. Our goal was to simulate a research literature that is as realistic as possible for psychology. To cover several research environments, we fully crossed five experimental factors: (1) the true underlying effect, δ (0, 0.2, 0.5, 0.8); (2) between-study heterogeneity, τ (0, 0.2, 0.4); (3) the number of studies in the meta-analytic sample, k (10, 30, 60, 100); (4) the percentage of studies in the meta-analytic sample produced under publication bias (0%, 60%, 90%); and (5) the use of QRPs in the literature that produced the meta-analytic sample (none, medium, high).
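For readers who want a feel for the design, here is a minimal sketch of the fully crossed simulation grid in R. The object and factor names are my own shorthand for this post; the actual, fully reproducible simulation code is the one on Github.

```r
# Sketch of the fully crossed design (hypothetical names; see the Github repo for the real code)
design <- expand.grid(
  delta    = c(0, 0.2, 0.5, 0.8),         # true underlying effect
  tau      = c(0, 0.2, 0.4),              # between-study heterogeneity
  k        = c(10, 30, 60, 100),          # number of studies per meta-analysis
  pub_bias = c(0, 0.6, 0.9),              # proportion of studies produced under publication bias
  qrp      = c("none", "medium", "high")  # intensity of QRPs
)
nrow(design)  # 4 * 3 * 4 * 3 * 3 = 432 simulated research environments
```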

This blog post summarizes some insights from our study, internally called “meta-showdown”. Check out the preprint and the interactive app metaExplorer. The fully reproducible and reusable simulation code is on Github, and more information is available on OSF.

In this blog post, I will highlight some lessons that we learned during the project, primarily focusing on what not to do when performing a meta-analysis.

Constraints on Generality disclaimer: These recommendations apply to the sample sizes, effect sizes, and heterogeneities that are typical of psychology; other research literatures may have different settings, and the methods may perform differently there. Furthermore, the recommendations rely on the modeling assumptions of our simulation. We went to great lengths to make these assumptions as realistic as possible, but other assumptions could lead to other results.

Never trust a naive random effects meta-analysis or trim-and-fill (unless you meta-analyze a set of registered reports)

If studies have no publication bias, nothing beats plain old random effects meta-analysis: it has the highest power, the least bias, and the highest efficiency of all the methods we compared. Even in the presence of some (though not extreme) QRPs, naive RE performs better than all other methods. When can we expect no publication bias? If (and, in my opinion, only if) we meta-analyze a set of registered reports.

But.

In any setting other than registered reports, a substantial amount of publication bias must be expected. In psychology/psychiatry, more than 90% of all published hypothesis tests are significant (Fanelli, 2011), even though average power is estimated at around 35% (Bakker, van Dijk, & Wicherts, 2012) – this gap points towards huge publication bias. In the presence of publication bias, naive random effects meta-analysis and trim-and-fill have false positive rates approaching 100%:

More thoughts about trim-and-fill’s inability to recover δ=0 are in Joe Hilgard’s blog post. (Note: this insight is not really new and has been shown multiple times before, for example by Moreno et al., 2009, and Simonsohn, Nelson, and Simmons, 2014).
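If you want to see the mechanism for yourself, here is a toy simulation – not our actual simulation code, and much cruder: every “published” study is forced to be significant and positive – using the rma() and trimfill() functions from the metafor package:

```r
# Toy illustration: true delta = 0, but only significant positive studies get published.
# Naive RE and trim-and-fill then "detect" an effect most of the time.
library(metafor)

set.seed(42)
one_biased_meta <- function(k = 30, n = 40) {   # k studies, n participants per group
  yi <- vi <- numeric(0)
  while (length(yi) < k) {
    d <- rnorm(1, mean = 0, sd = sqrt(2 / n))   # observed Cohen's d under delta = 0
    v <- 2 / n + d^2 / (4 * n)                  # approximate sampling variance of d
    if (d / sqrt(v) > qnorm(.975)) {            # extreme publication bias: only p < .05 and d > 0 survive
      yi <- c(yi, d); vi <- c(vi, v)
    }
  }
  re <- rma(yi, vi, method = "REML")            # naive random effects meta-analysis
  tf <- trimfill(re)                            # trim-and-fill correction
  c(RE = re$pval < .05, TF = tf$pval < .05)
}

res <- replicate(100, one_biased_meta())
rowMeans(res)   # false positive rates of RE and trim-and-fill (both close to 1)
```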

Our recommendation: Never trust meta-analyses based on naive random effects and trim-and-fill, unless you can rule out publication bias. Results from previously published meta-analyses based on these methods should be treated with a lot of skepticism.

 

Do not use p-curve to estimate the mean of all conducted studies under heterogeneity (it is not intended to do that)

Update 2017/06/09: We had a productive exchange with Uri Simonsohn and Joe Simmons concerning what should be estimated in a meta-analysis with heterogeneity. Traditionally, meta-analysts have tried to develop techniques that recover the true average effect of all conducted studies (AEA – average effect of all studies). Simonsohn et al. (2014) propose estimating a different quantity: the average effect of the studies one observes, rather than of all studies (AEO – average effect of observed studies). See Simonsohn et al. (2014), the associated Supplementary Material 2, and also this blog post for arguments why they consider this the more useful quantity to estimate.

Hence, the topic can be investigated on two levels: (A) Which is the more appropriate estimand, AEA or AEO? And (B) under which conditions can estimators recover the respective true value with the least bias and the least variance?

Instead of updating this section of the blog post in the light of our discussion, I decided to cut it and move the topic to a future blog post. Likewise, one part of our manuscript’s revision will be a more detailed discussion of exactly these differences.

I archived the previous version of the blog post here.

Ignore overadjustments in the opposite direction

Many bias-correcting methods are driven by QRPs – the more QRPs, the stronger the downward correction. This effect can become so strong, however, that the methods overadjust into the opposite direction, even if all studies in the meta-analysis have the same sign:

 

Note: You need to set the option “Keep negative estimates” to get this plot.

Our recommendation: Ignore bias-corrected results that point in the opposite direction; set the estimate to zero and do not reject H₀.
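As a sketch, this rule could be applied as a simple post-processing step to any bias-corrected estimate (the helper below and its arguments are hypothetical, not part of any package):

```r
# Hypothetical helper: truncate bias-corrected estimates that flip to the
# opposite sign of the observed studies, and do not reject H0 in that case.
truncate_overadjustment <- function(corrected_est, corrected_p, study_effects) {
  dominant_sign <- sign(sum(sign(study_effects)))   # direction of the observed studies
  if (sign(corrected_est) == -dominant_sign) {
    return(list(estimate = 0, reject_H0 = FALSE))
  }
  list(estimate = corrected_est, reject_H0 = corrected_p < .05)
}
```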

Do not take it seriously if PET-PEESE does a reverse correction

Typical small-study effects (e.g., from p-hacking or publication bias) induce a negative correlation between sample size and effect size – the smaller the sample, the larger the observed effect size. PET-PEESE aims to correct for that relationship. In the absence of bias and QRPs, however, random fluctuations can produce a positive correlation between sample size and effect size, which gives the PET and PEESE slopes the unintended sign. Without publication bias, this reversal of the slope happens quite often.
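For readers unfamiliar with the method: PET and PEESE are essentially meta-regressions of the effect sizes on their standard errors (PET) or sampling variances (PEESE), with the intercept taken as the bias-corrected estimate. Here is a rough sketch with metafor on toy data; the exact weighting and the decision rule of the conditional PET-PEESE estimator vary across implementations, so treat this as an illustration rather than the definitive recipe:

```r
# Sketch of PET, PEESE, and conditional PET-PEESE with metafor (toy data)
library(metafor)

set.seed(1)
k  <- 30
ni <- sample(20:200, k, replace = TRUE)          # per-group sample sizes
yi <- rnorm(k, mean = 0.3, sd = sqrt(2 / ni))    # toy observed effect sizes (d)
vi <- 2 / ni + yi^2 / (4 * ni)                   # approximate sampling variances

pet   <- rma(yi, vi, mods = ~ sqrt(vi), method = "FE")   # regress on standard error
peese <- rma(yi, vi, mods = ~ vi,       method = "FE")   # regress on sampling variance

pet_est   <- coef(pet)["intrcpt"]     # PET estimate: predicted effect at SE = 0
peese_est <- coef(peese)["intrcpt"]   # PEESE estimate: predicted effect at variance = 0

# One common conditional rule: use PEESE only if the PET intercept is
# significantly greater than zero (one-tailed test at alpha = .05).
pet_peese <- if (pet$pval[1] / 2 < .05 && pet_est > 0) peese_est else pet_est
pet_peese
```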

See, for example, the next figure: the true effect size is zero (red dot), naive random effects meta-analysis slightly overestimates the true effect (black dotted triangle), but PET and PEESE massively overadjust towards more positive effects:

 

As far as I know, PET-PEESE is not intended to correct in the reverse direction. For such a correction to be warranted, an underlying biasing process would have to systematically remove small studies that show significant results with larger effect sizes and keep small studies with non-significant results. In the current incentive structure of psychological research, I see no reason for such a process, unless researchers are motivated to show that a (perhaps politically controversial) effect does not exist.

Our recommendation: Ignore the PET-PEESE correction if it has the wrong sign, unless there are good reasons to expect an atypical selection process.

 

PET-PEESE sometimes overestimates, sometimes underestimates

A bias is easier to accept if it is always conservative – then one could conclude: “This method might miss some true effects, but if it indicates an effect, we can be quite confident that it really exists.” Depending on the conditions (i.e., how much publication bias, how many QRPs, etc.), however, PET/PEESE sometimes shows huge overestimation and sometimes huge underestimation.

For example, with no publication bias, some heterogeneity (τ=0.2), and severe QRPs, PET/PEESE underestimates the true effect of δ = 0.5:

In contrast, if no effect exists in reality but there is strong publication bias, large heterogeneity, and no QRPs, PET/PEESE overestimates a lot:

In fact, the distribution of PET/PEESE estimates looks virtually identical for these two examples, although the underlying true effect is δ = 0.5 in the upper plot and δ = 0 in the lower plot. Furthermore, note the huge spread of PET/PEESE estimates (the error bars visualize the 95% quantiles of all simulated replications): Any single PET/PEESE estimate can be very far off.

Our recommendation: As one cannot know the condition of reality, it is probably safest not to use PET/PEESE at all.

 

Recommendations in a nutshell: What you should not use in a meta-analysis

Again, please consider the “Constraints on Generality” disclaimer above.

  • When you can exclude publication bias (i.e., in the context of registered reports), do not use bias-correcting techniques. Even in the presence of some QRPs they perform worse than plain random effects meta-analysis.
  • In any setting other than registered reports, expect publication bias and do not use naive random effects meta-analysis or trim-and-fill. In typical settings, both give false positive rates approaching 100% and biased estimates.
  • Even if all studies entering a meta-analysis point in the same direction (e.g., all are positive), bias-correcting techniques sometimes overadjust and return a significant estimate in the opposite direction. Ignore these results; set the estimate to zero and do not reject H₀.
  • Sometimes PET/PEESE adjusts in the wrong direction (i.e., it increases the estimated true effect size); ignore such reverse corrections.

As with any general recommendations, there might be good reasons to ignore them.

Additional technical recommendations

  • In rare cases, the p-uniform package (v. 0.0.2) does not provide a lower CI. In these cases, ignore the estimate.
  • Do not run p-curve or p-uniform on three or fewer significant, directionally consistent studies. Although computationally possible, this gives hugely variable and often heavily biased results. See our supplemental material for more information and plots, and the sketch after this list for a simple input check.
  • If the 3PSM method (in the implementation of McShane et al., 2016) returns an incomplete covariance matrix, ignore the result (even if a point estimate is provided).
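As a small illustration of the second point, a guard like the following (a hypothetical helper, not part of any package) could be run before calling p-curve or p-uniform:

```r
# Hypothetical guard: only run p-curve / p-uniform if enough significant,
# directionally consistent studies are available.
enough_for_pcurve <- function(p_values, effect_signs, min_k = 4) {
  dominant <- sign(sum(effect_signs))                     # dominant direction of the studies
  usable   <- p_values < .05 & effect_signs == dominant   # significant & directionally consistent
  sum(usable) >= min_k
}

enough_for_pcurve(p_values = c(.01, .03, .20, .04), effect_signs = c(1, 1, -1, 1))  # FALSE: only 3 usable studies
```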

Assessing the evidential value of journals with p-curve, R-index, TIVA, etc: A comment on Motyl et al. (2017) with new data

Recently, Matt Motyl et al. (2017) posted a pre-print paper in which they contrasted the evidential value of several journals in two time periods (2003-2004 vs. 2013-2014). The paper sparked a lot of discussion in Facebook groups [1][2], blog posts commenting on the paper (DataColada, Dr. R), and a reply from the authors. Some of the discussion was about which effect sizes should have been coded for their analyses. As it turns out, several working groups had similar ideas for checking the evidential value of journals (e.g., Leif Nelson pre-registered a very similar project here). In an act of precognition, we had already done a time-reversed conceptual replication of Motyl et al.’s study in 2015:

Anna Bittner, a former student of LMU Munich, wrote her bachelor thesis on a systematic comparison of two journals in social psychology – one with a very high impact factor (Journal of Personality and Social Psychology (JPSP), IF = 5.0 at that time), another with a very low IF (Journal of Applied Social Psychology (JASP), IF = 0.8 at that time – not to be confused with the great statistical software JASP). By coincidence (or precognition), we also chose the year 2013 for our analyses. Read her full blog post about the study below.

In a Facebook discussion about the Motyl et al. paper, Michael Inzlicht wrote: “But, is there some way to systematically test a small sample of your p-values? […] Can you randomly sample 1-2% of your p-values, checking for errors and then calculating your error rate? I have no doubt there will be some errors, but the question is what the rate is.” As it turns out, we had also done our analysis on the 2013 volume of JPSP. Anna followed the guidelines by Simonsohn et al.: she selected only one effect size per sample (the focal effect size) and wrote a detailed disclosure table. Hence, we do not think that the critique in the DataColada blog post applies to our coding scheme. The disclosure table includes quotes of the verbal hypotheses, quotes from the results sections, and the test statistic that was selected, so you can directly validate the choice of effect sizes. Furthermore, all test statistics can be entered into the p-checker app, where you get TIVA, R-Index, p-curve, and more diagnostic tests (grab the JPSP test statistics & JASP test statistics).

Is our field “Rotten to the core”? (CC-BY by Andy Hay; https://www.flickr.com/photos/andyhay/8196333166)

Now we can compare two completely independently coded p-curve disclosure tables about a large set of articles. Any disagreement of course does not mean that one party is right and the other is wrong. But it will be interesting to see the amount of agreement.

Here comes Anna’s blog post about her own study. Anna Bittner is now doing her Master of Finance at the University of Melbourne.


Don’t judge a journal by its cover (or its impact factor)

by Anna Bittner and Felix Schönbrodt

The recent discoveries of staggeringly low replicability in psychology have come as a shock to many and have led to a discussion on how to ensure that better research practices are employed in the future. To this end, it is necessary to find ways to efficiently distinguish good research from bad, and research that contains evidential value from research that does not.

In the past, the impact factor (IF) has often been the favored indicator of a journal’s quality. To check whether a journal with a higher IF does indeed publish the “better” research in terms of evidential value, we compared two academic journals from the domain of social psychology: the Journal of Personality and Social Psychology (JPSP, Impact Factor = 5.031) and the Journal of Applied Social Psychology (JASP, Impact Factor = 0.79).

For this comparison, Anna analysed and carefully hand-coded all studies with hypothesis tests, starting in January 2013 and progressing chronologically until about 110 independent test statistics had been acquired for each journal (see the full report, in German, in Anna’s bachelor thesis). These test statistics were fed into the p-checker app (Version 0.4; Schönbrodt, 2015), which analyzed them with the tools p-curve, TIVA, and R-Index.

All material and raw data is available on OSF: https://osf.io/pgc86/

P-Curve

P-curve (Simonsohn, Nelson, & Simmons, 2014) takes a closer look at all significant p-values and plots them against their relative frequency. Ideally, this results in a curve with many very small p-values (< .025) and far fewer larger p-values (> .025). Another possible shape is a flat curve, which occurs when researchers investigate only null effects and selectively publish those studies that obtained a p-value < .05 by chance: under the null hypothesis, each significant p-value is equally likely and the distribution is uniform. P-curve allows one to test whether the empirical curve is flatter than the p-curve that would be expected at any chosen power.
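To make the logic concrete, here is a minimal sketch of one variant of the right-skew test, using Fisher’s method on the “pp-values”; the published p-curve app and p-checker use more elaborate procedures (e.g., Stouffer’s method and half p-curves), so this is only an illustration of the principle:

```r
# Minimal sketch of a p-curve right-skew test (Fisher's method on pp-values)
pcurve_right_skew <- function(p) {
  p  <- p[p < .05]               # p-curve only uses significant results
  pp <- p / .05                  # under H0, pp is uniform on (0, 1)
  chisq <- -2 * sum(log(pp))     # right skew -> small pp-values -> large chi-square
  pchisq(chisq, df = 2 * length(p), lower.tail = FALSE)   # combined p-value
}

pcurve_right_skew(c(.001, .002, .004, .008, .015, .030))  # clearly right-skewed set -> small p, evidential value
```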

Please note that p-curve assumes homogeneity (McShane, Böckenholt, & Hansen, 2016). Lumping together diverse studies from a whole journal, in contrast, guarantees heterogeneity. Hence, trying to recover the true underlying effect size/power is of limited usefulness here.

The p-curves of both JPSP and JASP were significantly right-skewed, which suggests that neither journal’s output can be explained by the mere selective publication of null effects that turned significant by chance. JASP’s curve, however, had a much stronger right skew, indicating stronger evidential value:

TIVA

TIVA (Schimmack, 2014a) tests for an insufficient variance of the z-scores that correspond to the p-values: if there were no p-hacking and no publication bias, the variance of these z-scores should be at least 1. A value below 1 is seen as an indicator of publication bias and/or p-hacking. However, the variance can and will be much larger than 1 when studies with different sample and effect sizes are included in the analysis (which was inevitably the case here). Hence, TIVA is a rather weak test of publication bias when heterogeneous studies are combined: a p-hacked set of heterogeneous effect sizes can still result in a high TIVA variance. Publication bias and p-hacking reduce the variance, but heterogeneity and different sample sizes can increase it to the point where TIVA is clearly above 1, even if all studies in the set are severely p-hacked.
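A minimal sketch of this test, consistent with the description above: convert two-sided p-values to z-scores, then test whether their variance is significantly smaller than 1 against a chi-square distribution (p-checker’s implementation additionally works with logged p-values for numerical precision, as noted further below):

```r
# Sketch of TIVA: variance of the z-scores implied by two-sided p-values
tiva <- function(p) {
  z  <- qnorm(1 - p / 2)                    # two-sided p-value -> absolute z-score
  v  <- var(z)
  df <- length(z) - 1
  list(variance = v,
       chisq    = v * df,
       p.value  = pchisq(v * df, df = df))  # small p = insufficient variance
}

tiva(c(.049, .032, .041, .020, .045, .038))  # a set of "just significant" p-values flags insufficient variance
```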

As expected, neither JPSP nor JASP attained a significant TIVA result, which means the variance was not significantly smaller than 1 for either journal. Descriptively, JASP had a higher variance of 6.03 (chi2(112) = 674, p = 1), compared to 1.09 (chi2(111) = 121, p = .759) for JPSP. Given the huge heterogeneity of the underlying studies, a TIVA variance close to 1 in JPSP signals strong bias. This, again, is not very surprising: we already knew with certainty that our literature is plagued by huge publication bias.

The descriptive difference in TIVA variances can be due to less p-hacking, less publication bias, or more heterogeneity of effect sizes and sample sizes in JASP compared to JPSP. Hence, drawing firm conclusions from this numerical difference is difficult; but the much larger value in JASP can be seen as an indicator that the studies published there paint a more realistic picture.

(Note: The results here differ from the results reported in Anna’s bachelor thesis, as the underlying computation has been improved. p-checker now uses logged p-values, which allows more precision with very small p-values. Early versions of p-checker underestimated the variance when extremely low p-values were present).

Unfortunately, Motyl et al. do not report the actual variances from their TIVA test (only the test statistics), so a direct comparison of our results is not possible.

R-Index

The R-Index (Schimmack, 2014b) is a tool that aims to quantify the replicability of a set of studies. It calculates the difference between the success rate and the median estimated power, which results in the so-called inflation rate. This inflation rate is then subtracted from the median estimated power, resulting in the R-Index. Here is Uli Schimmack’s interpretation of certain R-Index values: “The R-Index is not a replicability estimate […] I consider an R-Index below 50 an F (fail). An R-Index in the 50s is a D, and an R-Index in the 60s is a C. An R-Index greater than 80 is considered an A”.
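A sketch of the computation from a set of two-sided p-values; observed power is approximated here from the implied z-scores (ignoring the lower rejection region of the two-sided test), and p-checker’s implementation differs in details:

```r
# Sketch of the R-Index: median observed power minus the inflation rate
r_index <- function(p, alpha = .05) {
  z            <- qnorm(1 - p / 2)                  # implied absolute z-scores
  obs_power    <- pnorm(z - qnorm(1 - alpha / 2))   # approximate post-hoc power per study
  median_power <- median(obs_power)
  success_rate <- mean(p < alpha)                   # share of significant results
  inflation    <- success_rate - median_power       # inflation rate
  median_power - inflation                          # the R-Index
}

r_index(c(.001, .02, .03, .04, .04, .049))  # a set of mostly "just significant" results yields a low R-Index
```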

Here, again, JASP was ahead: it obtained an R-Index of .60, whereas JPSP landed at .49.

Both journals had success rates of around 80%, which is much higher than what would be expected with the average power and effect sizes found in psychology (Bakker, van Dijk, & Wicherts, 2012). It is known and widely accepted that journals tend to publish significant results over non-significant ones.

Motyl et al. report an R-Index of .52 for 2013-2014 for high impact journals, which is very close to our value of .49.


Summary

The comparison between JPSP and JASP revealed a better R-Index, a more realistic TIVA variance, and a more right-skewed p-curve for the journal with the much lower IF. As the studies had roughly comparable sample sizes (JPSP: Md = 86, IQR: 54 – 124; JASP: Md = 114, IQR: 65 – 184), I would bet some money that more studies from JASP would replicate than from JPSP.

A journal’s prestige does not protect it from submissions that contain QRPs – on the contrary, it might lead to higher competition between researchers and more pressure to obtain a significant result by all means. Furthermore, a journal’s higher rejection rate also leaves more room for “selecting for significance”. In contrast, a journal that must publish more or less every submission it receives to fill its issues simply does not have much room for this filter. With the tools applied here, however, it is not possible to distinguish between p-hacking and publication bias: they only detect patterns in test statistics that can result from either practice.

References

Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7(6), 543-554.

McShane, B. B., Böckenholt, U., & Hansen, K. T. (2016). Adjusting for publication bias in meta-analysis: An evaluation of selection methods and some cautionary notes. Perspectives on Psychological Science, 11, 730–749. doi:10.1177/1745691616662243

Schimmack, U. (2014a, December 30). The Test of Insufficient Variance (TIVA): A New Tool for the Detection of Questionable Research Practices. Retrieved from https://replicationindex.wordpress.com/2014/12/30/the-test-of-insufficient-variance-tiva-a-new-tool-for-the-detection-of-questionable-research-practices/

Schimmack, U. (2014b). Quantifying Statistical Research Integrity: The Replicability-Index.

Schimmack, U. (2015, September 15). Replicability-Ranking of 100 Social Psychology Departments [Web log post]. Retrieved from https://replicationindex.wordpress.com/2015/09/15/replicability-ranking-of-100-social-psychology-departments/

Schönbrodt, F. (2015). p-checker [Online application]. Retrieved from http://shinyapps.org/apps/p-checker/

Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). P-curve: A key to the file-drawer. Journal of Experimental Psychology: General, 143(2), 534.


German Psychological Society fully embraces open data, gives detailed recommendations

tl;dr: The German Psychological Society developed and adopted new recommendations for data sharing that fully embrace openness, transparency, and scientific integrity. The key message is that raw data are an essential part of an empirical publication and must be openly shared. The recommendations also give very practical advice on how to implement these values, for example on questions such as “When should data providers be asked to be co-authors in a data reuse project?” and “How should participant privacy be handled?”.

Over the last year, the discussion in our field has moved from “Do we have a replication crisis?” towards “Yes, we have a problem – what can and should we change, and how can we implement it?”. I think we need top-down changes on an institutional level, combined with bottom-up approaches such as local Open Science Initiatives. Here, I want to present one big institutional change concerning open data.

Funders Start Requiring Open Data: Recommendations for Psychology

The German Research Foundation (DFG), the largest public funder of research in Germany, updated its policy on data sharing, which can be summarized in a single sentence: publicly funded research, including the raw data, belongs to the public. Consequently, all research data from a DFG-funded project should be made open immediately, or at the latest a couple of months after completion of the research project (see [1] and [2]). Furthermore, the DFG asked all scientific disciplines to develop more specific guidelines that implement these principles in their respective discipline.

The German Psychological Society (Deutsche Gesellschaft für Psychologie, DGPs) set up a working group (Andrea Abele-Brehm, Mario Gollwitzer, and me) that worked for one year on such recommendations for psychology.

In developing the document, we tried to be very inclusive and to harvest the wisdom of the crowd. A first draft (Feb 2016) was discussed for 6 weeks in an internet forum where all DGPs members could comment. Based on this discussion (and many additional personal conversations), a revised version was circulated and discussed in person with a smaller group of interested members (July 2016) and a representative of the DFG. Furthermore, we were in regular contact with the “Fachkollegium Psychologie” of the DFG (i.e., the group of people that makes funding decisions in psychology; in the meantime, the members of the Fachkollegium have changed on a rotational basis). Finally, the chairpersons of all sections of the DGPs and the representatives of the young members had another opportunity to comment. On September 17, the recommendations were officially adopted by the society.

I think this thorough and iterative process was very important for two reasons. First, it definitely improved the quality of the document, because we received so many great ideas and comments from the members, ironing out inconsistencies and covering edge cases. Second, it was important for getting people on board. As the new open data guideline of the DFG causes a major change in the way we do our everyday scientific work, we wanted to talk to and convince as many people as possible from the early stages on. Of course, not every single one of the >4,000 members is equally convinced, but the topic now receives considerable attention in the society.

Hence, one focus was consensus and inclusivity. At the same time, our goal was to develop bold and forward-looking guidelines that really address the current challenges of the field, rather than settling on the lowest common denominator. To achieve this, we had to find a balance between several, sometimes conflicting, values.

A Fine Balance of Values

Research transparency ⬌ privacy rights. A first peculiarity of psychology is that we do not investigate rocks or electrons, but human subjects who have privacy rights. In a nutshell: privacy rights have to be respected, and in case of doubt they win over openness. But if data can be properly anonymized, there is no problem with open sharing; one possibility for sharing non-anonymized research data are “scientific use files”, where access is restricted to scientists. If data cannot be shared for privacy (or other) reasons, this has to be made transparent in the paper. (Hence, the recommendations are PRO compatible.) The recommendations give clear guidance on privacy issues and practical advice, for example, on how to write your informed consent form so that you are actually able to share the data afterwards.

Data reuse ⬌ right of first usage. A second balance concerns optimal reuse of data on the one hand and the original authors’ right of first usage on the other. In the discussion phase during the development of the recommendations, several people expressed a fear of “research parasites” who “steal” the data from hard-working scientists. A very common gut feeling is: “The data belong to me”. But as we are publicly funded researchers running publicly funded research projects, the answer is quite clear: the data belong to the public. There is no copyright on raw data. On the other hand, we also need incentives for original researchers to generate data in the first place. Data generators of course have the right of first usage, and the recommendations allow this right to be extended by an embargo of up to 5 years (see below). But at the end of the day, publicly funded research data belong to the public, and everybody can reuse them. If data are open by default, a guideline must also discuss and define how data reuse should be handled. Our recommendations suggest in which cases co-authorship should be offered to the data providers and in which cases this is not necessary.

Verification ⬌ fair treatment of original authors. Finally, research should be verifiable, but original authors should be treated fairly. The guidelines state that whenever a reanalysis of a data set is going to be published (which also includes blog posts and presentations), the original authors have to be informed about it. They cannot prevent the reanalysis, but they get the chance to react to it.

Two types of data sharing

We distinguish two types of data sharing:

Type 1 data sharing means that all raw data necessary to reproduce the results reported in a paper should be openly shared. This can be only a subset of all available variables in the full data set: the subset needed to reproduce these specific results. The primary data are an essential part of an empirical publication, and a paper without them simply is not complete.

Type 2 data sharing refers to the release of the full data set of a funded research project. The DGPs recommendations state that after the end of a DFG-funded project, all data – even data that have not yet been used for publications – should be made open. Unpublished null results and additional, exploratory variables now have the chance to see the light of day and to be reused by other researchers. Experience shows that not all planned papers have been written by the official end date of a project. Therefore, the recommendations allow the right of first usage to be extended by an embargo period of up to 5 years, during which the (so far unpublished) data do not have to be made public. The embargo option only applies to data that have not yet been used for publications; hence, an embargo typically cannot be applied to Type 1 data sharing.

Summary & the Next Steps

To summarize, I think these recommendations are the most complete, practical, and specific guidelines for data sharing in psychology to date. (Of course, many more details are in the recommendations themselves.) They fully embrace openness, transparency, and scientific integrity. Furthermore, they do not proclaim detached ethical principles, but give very practical guidance on how to actually implement data sharing in psychology.

What are the next steps? The president of the DGPs, Prof. Conny Antoni, and the secretary, Prof. Mario Gollwitzer, have already contacted other psychological societies (APA, APS, EAPP, EASP, EFPA, SIPS, SESP, SPSP) and introduced our recommendations. The Board of Scientific Affairs of EFPA – the European Federation of Psychologists’ Associations – has already expressed its appreciation of the recommendations and will post them on its website. Furthermore, it will discuss them in an invited symposium at the European Congress of Psychology in Amsterdam this year. A mid-term goal is also to check compatibility with other existing guidelines and to think about harmonizing the various guidelines within psychology.

As other scientific disciplines in Germany are also working on their specific implementations of the DFG guidelines, it will be interesting to see whether common lines emerge (although there will certainly be persisting and necessary differences between the requirements of the fields). Finally, we are in contact with the new Fachkollegium at the DFG to explore how the recommendations can and should be used in funding decisions.

If your field also implements such recommendations/guidelines, don’t hesitate to contact us.

Download the Recommendations

Schönbrodt, F., Gollwitzer, M., & Abele-Brehm, A. (2017). Der Umgang mit Forschungsdaten im Fach Psychologie: Konkretisierung der DFG-Leitlinien. Psychologische Rundschau, 68, 20–35. doi:10.1026/0033-3042/a000341. [PDF German][PDF English]

(English translation by Malte Elson, Johannes Breuer, and Zoe Magraw-Mickelson)
