# Parse pdf files with R (on a Mac)

Inspired by this blog post from theBioBucket, I created a script to parse all pdf files in a directory. Due to its reliance on the Terminal, it’s Mac-specific, but modifications for other systems shouldn’t be too hard (as a starting point for Windows, see BioBucket’s script).

First, you have to install the command line tool pdftotext (a binary can be found on Carsten Blüm’s website). Then, run the following script within a directory containing pdfs:

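A minimal sketch of such a script (assuming pdftotext is installed and on your PATH; the file handling details are illustrative):

    # convert every pdf in the working directory to text via pdftotext,
    # then read the results back into R
    pdfs <- list.files(getwd(), pattern = "\\.pdf$", full.names = TRUE)

    # call the command line tool on each file; this writes a .txt next to each pdf
    for (f in pdfs) {
      system(paste0("pdftotext '", f, "'"))
    }

    # read the resulting .txt files into a list of character vectors
    txts <- lapply(sub("\\.pdf$", ".txt", pdfs), readLines)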

# Amazing fMRI plots for everybody!

Dear valued customer,

It is a well-known scientific truth that research results accompanied by a fancy, colorful fMRI scan are perceived as more believable and more persuasive than simple bar graphs or text results (McCabe & Castel, 2007; Weisberg, Keil, Goodstein, Rawson, & Gray, 2008). Readers even agree more with fictitious and unsubstantiated claims, as long as you provide a colorful brain image – and it works even when the subject is a dead salmon.

## The power of brain images for everybody

What are the consequences of these troubling findings? The answer is clear: everybody should be equipped with these powerful tools of research communication! We at IRET have made it our mission to provide the latest, cutting-edge tools for your research analysis. In this case we adopted a new technology called “visually weighted regression” or “watercolor plots” (see here, here, or here) and simply applied a new color scheme.

But now, let’s get hands-on!

## The example

Imagine you invested a lot of effort in collecting data from 41 participants. Now you find the following pattern in 2 of your 87 variables:

You could show that plain scatterplot. But should you? Nay. Of course everybody would spot the outliers at the top right. But what is much more important: it is b-o-r-i-n-g!

What is the alternative? Reporting the correlation as text? “We found a correlation of r = .38 (p = .014)”. Yawn.

Or maybe: “We chose to use a correlation technique that is robust against outliers and violations of normality, the Spearman rank coefficient. It turned out that the correlation broke down and was not significant any more (r = .06, p = .708).”

Don’t be silly! With that style of scientific reporting, there would be nothing to write home about. But you can be sure: we have the right tools for you. Finally, the power of pictures is not limited to brain research – now you can turn any data into a magical fMRI plot like that:

Isn’t that beautiful? We recommend accompanying the figure with an elaborate description: “For local fitting, we used spline smoothers from 10000 bootstrap replications. For a robust estimation of vertical confidence densities, a re-descending M-estimator with Tukey’s biweight function was employed. As one can clearly see in the plot, there is significant confidence in the prediction of the x=0, y=0 region, as well as a minor hot spot in the x=15, y=60 region (also known as the supra-dextral data region).”

## Magical Data Enhancer Tool

With the Magical Data Enhancer Tool (MDET) you can …

• … turn boring, marginally significant, or just crappy results into a stunning research experience
• … publish in scientific journals with higher impact factors
• … receive the media coverage that you and your research deserve
• … achieve higher acceptance rates from funding agencies
• … impress young women at the bar (you wouldn’t show a plain scatterplot, dude?!)

## FAQ

Q: But – isn’t that approach unethical?
A: Not at all. On the contrary, we at IRET think that it is unethical that only some researchers are allowed to exploit the cognitive biases of their readers. We design our products with great respect for humanity, and we believe that every researcher who can afford our products should have the same powerful tools at hand.

Q: How much does your product cost?
A: The standard version of the Magical Data Enhancer ships for $12,998. We are aware that this is a significant investment. But, come on: you deserve it! Furthermore, we will soon publish a free trial version, including the full R code, on this blog. So stay tuned!

Best regards,

Lexis “Lex” Brycenet (CEO & CTO Research Communication)
International Research Enhancement Technology (IRET)

# Visually weighted regression in R (à la Solomon Hsiang)

[Update 1: Sep 5, 2012: Explore the Magical Data Enhancer by IRET, using this visualization technique]

[Update 2: Sep 6, 2012: See new improved plots, and new R code!]

Solomon Hsiang proposed an appealing method for visually displaying the uncertainty in regressions (see his blog [1][2], and also the discussions on the Statistical Modeling, Causal Inference, and Social Science Blog [1][2]).

I implemented the method in R (using ggplot2) and added an alternative way of determining the shading, following Andrew Gelman’s comment that traditional statistical summaries (such as 95% intervals) give too much weight to the edges. In the following I will show how to produce plots like this:

I used the following procedure:

1. Compute smoothers from 1000 bootstrap samples of the original sample (this results in a spaghetti plot)
2. Calculate a density estimate for each vertical cut through the bootstrapped smoothers. The area under each density curve is always 1, so the amount of ink is constant for each vertical slice.
3. Shade the figure according to these density estimates.
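
Here is a minimal sketch of steps 1 and 2 on simulated data (the variable names and smoother settings are illustrative; the full vwReg function follows at the end of this post):

    # simulated data
    set.seed(42)
    df <- data.frame(x = rnorm(100))
    df$y <- df$x + rnorm(100)

    # common x grid on which all smoothers are evaluated
    x.grid <- seq(min(df$x), max(df$x), length.out = 100)
    B <- 1000  # number of bootstrap samples

    # step 1: fit a loess smoother to each bootstrap sample
    # -> one row per bootstrapped smoother (the "spaghetti")
    boot.fits <- t(replicate(B, {
      bs <- df[sample(nrow(df), replace = TRUE), ]
      predict(loess(y ~ x, data = bs), newdata = data.frame(x = x.grid))
    }))

    # step 2: density estimate of the smoother values in one vertical slice;
    # doing this for every column of boot.fits gives the shading weights
    # (predict() returns NA outside a bootstrap sample's x range, hence na.rm)
    d <- density(boot.fits[, 50], na.rm = TRUE)  # slice at x.grid[50]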

## Now let’s construct some plots!

The basic scatter plot:

Now we show the bootstrapped smoothers (a “spaghetti plot”). Each spaghetti has a low alpha, so overlapping spaghettis produce a darker color and already give more visual weight to highly populated regions.

Here is the shading according to the smoother’s density:

Now, we can overplot the median smoother estimate for each x value (the “median smoother”):

Or, a visually weighted smoother:

Finally, we can add the plain linear regression line (which obviously does not reflect the data points very well):

At the end of this post is the function that produces all of these plots. The function returns a ggplot object, so you can modify it afterwards, e.g.:

    vwReg(y ~ x, df, shade=FALSE, spag=TRUE) +
      xlab("Implicit power motive") +
      ylab("Corrugator activity during preparation")

## Here are two plots with actual data I am working on:

The correlation of both variables is .22 (p = .003).

A) As a heat map (note: the vertical breaks at the left and right end occur due to single data points that get either sampled or not during the bootstrap):

B) As a spaghetti plot:

Finally, here’s the code. Comments and additions are welcome.

[Update: I removed the code, as an updated version has been published here (see the end of the post)]

# Validating email addresses in R

I am currently programming automated report generation in R – participants fill out a questionnaire, and they receive a nicely formatted pdf with their personality profile. I use knitr, LaTeX, and the sendmailR package.

Some participants did not provide valid email addresses, which caused the sendmail function to crash. Therefore I wanted some validation of email addresses – here’s the function:

    isValidEmail <- function(x) {
      grepl("\\<[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,}\\>", as.character(x), ignore.case=TRUE)
    }

Let’s test some valid and invalid addresses:

    # valid addresses
    isValidEmail("felix@nicebread.de")
    isValidEmail("felix.123.honeyBunny@nicebread.lmu.de")
    isValidEmail("felix@nicebread.de ")   # trailing whitespace still matches
    isValidEmail(" felix@nicebread.de")   # leading whitespace still matches
    isValidEmail("felix+batman@nicebread.de")
    isValidEmail("felix@nicebread.office")

    # invalid addresses
    isValidEmail("felix@nicebread")
    isValidEmail("felix@nicebread@de")
    isValidEmail("felixnicebread.de")

The regexp is taken from www.regular-expressions.info and adapted to the R style of regexps. Please note the many discussions (e.g., here or here) about “Is there a single regexp that matches all valid email addresses?” (the answer is no).
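
In the report pipeline, such a check can guard the actual send – a minimal sketch (the surrounding code is illustrative, not the original report script):

    # skip invalid addresses before trying to send the reports
    recipients <- c("felix@nicebread.de", "felixnicebread.de")
    valid <- isValidEmail(recipients)
    if (any(!valid)) {
      warning("Skipping invalid addresses: ", paste(recipients[!valid], collapse = ", "))
    }
    for (to in recipients[valid]) {
      # send the report here, e.g. with sendmailR's sendmail()
    }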

# Shading regions of the normal: The Stanine scale

For the presentation of norm values, stanines (standard nine) are often used. These values mark a person’s relative position in comparison to the sample or to norm values.
According to Wikipedia:

The underlying basis for obtaining stanines is that a normal distribution is divided into nine intervals, each of which has a width of 0.5 standard deviations excluding the first and last, which are just the remainder (the tails of the distribution). The mean lies at the centre of the fifth interval.

For illustration purposes, I wanted to plot the regions of the stanine values in the standard normal distribution – here’s the result:

First: Calculate the stanine boundaries and draw the normal curve:

    # First: calculate stanine breaks (on a z scale)
    stan.z <- c(-3, seq(-1.75, +1.75, length.out=8), 3)

    # Second: get cumulative probabilities for these z values
    stan.PR <- pnorm(stan.z)

    # define a color ramp from blue to red (... or anything else ...)
    c_ramp <- colorRamp(c("darkblue", "red"), space="Lab")

    # draw the normal curve without axes; reduce margins on left, top, and right
    par(mar=c(2,0,0,0))
    curve(dnorm(x,0,1), xlim=c(-3,3), ylim=c(-0.03, .45), xlab="", ylab="", axes=FALSE)

Next: Calculate the shaded regions and plot a polygon for each region:

    # calculate polygons for each stanine region
    # S.x = x values of polygon boundary points, S.y = y values
    for (i in 1:(length(stan.z)-1)) {
      S.x <- c(stan.z[i], seq(stan.z[i], stan.z[i+1], 0.01), stan.z[i+1])
      S.y <- c(0, dnorm(seq(stan.z[i], stan.z[i+1], 0.01)), 0)
      polygon(S.x, S.y, col=rgb(c_ramp(i/9), max=255))
    }

And finally: add some legends to the plot:

    # print stanine values in white
    # font = 2 prints numbers in boldface
    text(seq(-2, 2, by=.5), 0.015, label=1:9, col="white", font=2)

    # print cumulative probabilities in black below the curve
    text(seq(-1.75, 1.75, by=.5), -0.015,
         label=paste(round(stan.PR[-c(1, 10)], 2)*100, "%", sep=""),
         col="black", adj=.5, cex=.8)
    text(0, -0.035, label="Percentage of sample <= this value", adj=0.5, cex=.8)

As a bonus, here’s a short script for shading only one region (e.g., the lower 2.5%):

    # draw the normal curve
    curve(dnorm(x,0,1), xlim=c(-3,3), main="Normal density")

    # define the shaded region (here: the lower 2.5%)
    from.z <- -3
    to.z <- qnorm(.025)

    S.x <- c(from.z, seq(from.z, to.z, 0.01), to.z)
    S.y <- c(0, dnorm(seq(from.z, to.z, 0.01)), 0)
    polygon(S.x, S.y, col="red")

# Comparing all quantiles of two distributions simultaneously

Summary: A new function in the WRS package compares many quantiles of two distributions simultaneously while controlling the overall alpha error.

When comparing data from two groups, approximately 99.6% of all psychological research compares the central tendency (that number is a subjective estimate).

In some cases, however, it would be sensible to compare other parts of the distributions. For example, in reaction time (RT) experiments two groups may differ only in the fast RTs, but not in the slow ones. Measures of central tendency might obscure or miss this pattern, as the following example demonstrates.

Imagine RT distributions for two experimental conditions (“black” and “red”). Participants in the red condition have some very fast RTs:

    set.seed(1234)
    RT1 <- rnorm(100, 350, 52)
    RT2 <- c(rnorm(85, 375, 55), rnorm(15, 220, 25))
    plot(density(RT1), xlim=c(100, 600))
    lines(density(RT2), col=2)

A naïve (but common) approach would be to compare both distributions with a t test:

    t.test(RT1, RT2)
    ######################
    data:  RT1 and RT2
    t = -0.3778, df = 168.715, p-value = 0.706
    alternative hypothesis: true difference in means is not equal to 0
    95 percent confidence interval:
     -22.74478  15.43712
    sample estimates:
    mean of x mean of y
     341.8484  345.5022

Results show that both groups do not differ in their central tendency.

Now let’s do the same with a new method!

The function qcomhd from the WRS package compares user-defined quantiles of both distributions using a Harrell–Davis estimator in conjunction with a percentile bootstrap. The method seems to improve over other methods: “Currently, when there are tied values, no other method has been found that performs reasonably well. Even with no tied values, method HD can provide a substantial gain in power when q ≤ .25 or q ≥ .75 compared to other techniques that have been proposed”. The method is described in the paper “Comparing two independent groups via the upper and lower quantiles” by Wilcox, Erceg-Hurn, Clark and Carlson (2013).
You can use the function as soon as you install the latest version (17) of the WRS package:

    install.packages("WRS", repos="http://R-Forge.R-project.org")

Let’s compare all percentiles from the 10th to the 90th:

    qcomhd(RT1, RT2, q = seq(.1, .9, by=.1))

The graphical output shows how groups differ in the requested quantiles, and the confidence intervals for each quantile:

The text output (see below) also shows that the groups differ significantly at the 10th, the 50th, and the 60th percentile. The column labeled ‘p.value’ shows the p value for the single-quantile bootstrap test. As we do multiple tests (one for each quantile), the overall Type I error (defaulting to .05) is controlled by the Hochberg method: for each p value, a critical p value is calculated that must be undercut (see the column ‘p_crit’). The column ‘signif’ marks all tests which fulfill this condition:

        q  n1  n2    est.1    est.2 est.1.est.2    ci.low       ci.up      p_crit p.value signif
    1 0.1 100 100 285.8276 218.4852   67.342399  41.04707 84.67980495 0.005555556   0.001      *
    2 0.2 100 100 297.5061 264.7904   32.715724 -16.52601 68.80486452 0.025000000   0.217
    3 0.3 100 100 310.8760 320.0196   -9.143593 -33.63576 32.95577465 0.050000000   0.589
    4 0.4 100 100 322.5014 344.0439  -21.542475 -40.43463  0.03938696 0.010000000   0.054
    5 0.5 100 100 331.4413 360.3548  -28.913498 -44.78068 -9.11259108 0.007142857   0.006      *
    6 0.6 100 100 344.8502 374.7056  -29.855369 -46.88886 -9.69559705 0.006250000   0.005      *
    7 0.7 100 100 363.6210 388.0228  -24.401872 -47.41493 -4.13498039 0.008333333   0.016
    8 0.8 100 100 385.8985 406.3956  -20.497097 -47.09522  2.23935390 0.012500000   0.080
    9 0.9 100 100 419.4520 444.7892  -25.337206 -55.84177 11.49107833 0.016666667   0.174
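
As a cross-check, the same decisions can be reproduced with base R’s p.adjust – an illustration of Hochberg’s step-up logic only, not of what qcomhd does internally (qcomhd reports per-test critical values rather than adjusted p values):

    # p values from the output above, in the order of the quantiles
    p <- c(.001, .217, .589, .054, .006, .005, .016, .080, .174)

    # Hochberg-adjusted p values below .05 flag the same three tests
    # that are marked with '*' in the table (q = .1, .5, and .6)
    p.adjust(p, method="hochberg") < .05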

To summarize, we see significant differences between both groups: the red group has significantly faster RTs at the lower end of the distribution (10th percentile), but slower RTs around the central tendency (50th and 60th percentiles).

Recommendations for comparing groups:

1. Always plot the densities of both distributions.
2. Make a visual scan: Where do the groups differ? Is the central tendency a reasonable summary of the distributions and of the difference between both distributions?
3. If you are interested in the central tendency, consider the test for trimmed means, as in most cases the trimmed mean describes the central tendency better than the arithmetic mean (see the sketch after this list).
4. If you are interested in comparing quantiles in the tails of the distribution, use the qcomhd function.
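
For recommendation 3, a minimal sketch using Yuen’s test for trimmed means, which is also part of the WRS package (tr is the trimming proportion):

    # compare the 20% trimmed means of the two RT distributions
    library(WRS)
    yuen(RT1, RT2, tr=.2)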

### References

Wilcox, R. R., Erceg-Hurn, D. M., Clark, F., & Carlson, M. (in press). Comparing two independent groups via the lower and upper quantiles. Journal of Statistical Computation and Simulation. doi:10.1080/00949655.2012.754026

# The Evolution of Correlations

This is the evolution of a bivariate correlation between two questionnaire scales, “hope of power” and “fear of losing control”. Both scales were administered in an open online study. The video shows how the correlation evolves from r = .69*** (n=20) to r = .26*** (n=271). It does not stabilize until n = 150.

The data have not been rearranged – this is the random order in which participants dropped into the study. This is a rather extreme case of an unstable correlation – other scales in this study were stable right from the beginning. Maybe this video can serve as an anecdotal caveat for the careful interpretation of correlations with small n’s (and by ‘small’ I mean n < 100) …
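
The original data are not included here, but the underlying computation is simple – a minimal sketch with simulated stand-ins for the two scales:

    # evolution of the correlation as participants come in, in chronological order
    set.seed(1)
    n <- 271
    x <- rnorm(n)
    y <- .3 * x + rnorm(n)   # simulated stand-ins for the two scales

    r.evo <- sapply(20:n, function(i) cor(x[1:i], y[1:i]))
    plot(20:n, r.evo, type = "l", xlab = "n", ylab = "cumulative r")
    abline(h = cor(x, y), lty = 2)   # final correlation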

# Range restrictions for the correlations of 3 variables

A little follow-up to my testosterone comment (written in German):
When three variables are correlated with each other, and two of the three correlations are known, the range for the third correlation is restricted according to this formula (Olkin, 1981):

    r12·r13 − √((1 − r12²)·(1 − r13²))  ≤  r23  ≤  r12·r13 + √((1 − r12²)·(1 − r13²))

Now comes the new part: here’s the graphical representation of that range restriction:

As one can see, one or both of the two given correlations have to be fairly high before they imply a positive third correlation.
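
Here is a hedged sketch of how such a figure can be produced (the fixed values chosen for r13 are illustrative):

    # admissible range of r23 as a function of r12, for several fixed r13
    r12 <- seq(-1, 1, by = .01)
    plot(NULL, xlim = c(-1, 1), ylim = c(-1, 1),
         xlab = "r12", ylab = "admissible range of r23")
    for (r13 in c(0, .5, .8)) {
      bound <- sqrt((1 - r12^2) * (1 - r13^2))
      lines(r12, r12 * r13 + bound, lty = 1)   # upper bound
      lines(r12, r12 * r13 - bound, lty = 2)   # lower bound
    }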

Olkin, I. (1981). Range restrictions for product-moment correlation matrices. Psychometrika, 46, 469–472. doi:10.1007/BF02293804