September 16, 2013

Today a new version (0.23.1) of the WRS package (Wilcox’s Robust Statistics) has been released. This package is the companion to his rather exhaustive book on robust statistics, “Introduction to Robust Estimation and Hypothesis Testing” (Amazon Link de/us).

For a fail-safe installation of the package, follow these instructions.

As a guest post, Rand Wilcox describes the new functions of the newest WRS version – have fun!

“Hi everyone,

As you probably know, when standard assumptions are violated, classic methods for comparing groups and studying associations can have very poor power and yield highly misleading results. The better-known methods for dealing with these problems (transforming the data or testing assumptions) are ineffective compared to more modern methods. Simply removing outliers among the dependent variable and applying standard techniques to the remaining data is disastrous.

Methods I derive to deal with these problems can be applied with R functions stored in an R package (WRS) maintained by Felix Schönbrodt. Felix asked me to briefly describe my recent efforts for a newsletter he posts. In case this might help some of you, a brief description of my recently developed methods and the corresponding R functions is provided below. (The papers I cite illustrate that they can make a substantial difference compared to extant techniques.)

Sometimes it can be important and more informative to compare the tails (upper and lower quantiles) of two groups rather than a measure of location that is centrally located. Example: I have been involved in a study aimed at determining whether intervention reduced depressive symptoms. But the typical individual was not very depressed prior to intervention, and no difference is found using the more obvious techniques. Simply ignoring the less depressed individuals results in using the wrong standard error – a very serious problem. But when comparing quantiles, it was found that the more depressed individuals benefitted the most from intervention.

The new method beats the shift function. See

Wilcox, R. R., Erceg-Hurn, D., Clark, F. & Carlson, M. (2013). Comparing two independent groups via the lower and upper quantiles. Journal of Statistical Computation and Simulation. DOI: 10.1080/00949655.2012.754026

Use the R function `qcomhd`.

For **dependent groups**, one must use another method. There are, in fact, two distinct ways of viewing the problem. See

Wilcox, R. R. & Erceg-Hurn, D. (2012). Comparing two dependent groups via quantiles. Journal of Applied Statistics, 39, 2655–2664.

Use the R function `Dqcomhd`.

When comparing two groups based on a Likert scale, use the function `disc2com`.

It performs a global test of P(X=x)=P(Y=x) for all x using a generalization of the Storer–Kim method for comparing binomials.
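For intuition, the two-binomial building block that `disc2com` generalizes can be sketched in a few lines of base R. This is only an illustration of the Storer–Kim idea, not the WRS implementation: it evaluates, at the pooled estimate of the common probability, the chance of a difference in proportions at least as large as the one observed.

```r
# Minimal sketch of the Storer-Kim test for H0: p1 = p2 (two binomials).
# Illustration of the idea only, not the WRS implementation.
storer_kim <- function(x1, n1, x2, n2) {
  phat  <- (x1 + x2) / (n1 + n2)   # pooled estimate under H0
  d_obs <- abs(x1/n1 - x2/n2)      # observed difference in proportions
  p <- 0
  for (i in 0:n1) {
    for (j in 0:n2) {
      # accumulate the probability of outcomes at least as extreme as observed
      if (abs(i/n1 - j/n2) >= d_obs - 1e-12) {
        p <- p + dbinom(i, n1, phat) * dbinom(j, n2, phat)
      }
    }
  }
  p  # p-value
}

storer_kim(1, 20, 15, 20)   # very different proportions: a small p-value
```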

`binband`: a multiple comparison method for the individual cell probabilities.

`tshdreg`: a modification of the Theil–Sen estimator. When there are tied values among the dependent variable, this modification might result in substantially higher power. A paper (Wilcox & Clark, in press) provides details. The function `tsreg` now checks whether there are any tied values and prints a message suggesting that you might want to use `tshdreg` instead.
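For readers unfamiliar with Theil–Sen: the basic estimator is simply the median of all pairwise slopes. A minimal base-R sketch (the `tshdreg` modification, roughly, replaces the sample median of the slopes with the Harrell–Davis estimator; the sketch below shows only the unmodified idea):

```r
# Basic Theil-Sen estimator: median of all pairwise slopes
# (illustration only; not the WRS implementation)
theil_sen <- function(x, y) {
  n <- length(x)
  slopes <- c()
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) {
      if (x[j] != x[i]) {
        slopes <- c(slopes, (y[j] - y[i]) / (x[j] - x[i]))
      }
    }
  }
  b1 <- median(slopes)       # slope: median of pairwise slopes
  b0 <- median(y - b1 * x)   # intercept: median of residual offsets
  c(intercept = b0, slope = b1)
}

# a noise-free line is recovered exactly
theil_sen(1:10, 2 * (1:10) + 3)   # -> intercept 3, slope 2
```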

`qhdsm`: a quantile regression smoother. That is, it plots the regression line when predicting some quantile without specifying a parametric form for the regression line. Multiple quantile regression lines can be plotted. The method can be more satisfactory than using the function `qsmcobs` (a spline-type method), which often creates waves and curvature that give an incorrect sense of the association. Another advantage of `qhdsm` is that it can be used with more than one predictor; `qsmcobs` is limited to one predictor only. The strategy behind `qhdsm` is to get an initial approximation of the regression line using a running interval smoother in conjunction with the Harrell–Davis quantile estimator, which is then smoothed again via LOESS.
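The Harrell–Davis quantile estimator mentioned here is itself easy to sketch: it estimates the qth quantile as a Beta-weighted average of all order statistics rather than one or two of them, which is what makes it comparatively stable in the tails.

```r
# Harrell-Davis estimator of the q-th quantile (illustrative sketch):
# a weighted sum of all order statistics with Beta((n+1)q, (n+1)(1-q)) weights
hd <- function(x, q = 0.5) {
  n <- length(x)
  x <- sort(x)
  a <- (n + 1) * q
  b <- (n + 1) * (1 - q)
  # weight of the i-th order statistic: Beta probability mass on ((i-1)/n, i/n]
  w <- pbeta((1:n)/n, a, b) - pbeta((0:(n - 1))/n, a, b)
  sum(w * x)
}

hd(1:99, 0.5)   # -> 50 (matches the sample median on symmetric data)
```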

It is surprising how often an association is found when dealing with the higher and lower quantiles of the dependent variable that is not detected by least squares and other robust estimators.

`qhdsm2g`: plots regression lines for two groups using the function `qhdsm`.

`rplot` has been updated: setting the argument `LP=TRUE` gives a smoother regression line.

`rplotCI`: same as `rplot`, but includes lines indicating a confidence interval for the predicted Y values.

`rplotpbCI`: same as `rplotCI`, but uses a bootstrap method to compute the confidence intervals.
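A percentile bootstrap confidence interval, the general technique behind functions like `rplotpbCI`, works like this (a generic sketch for any estimator, not the WRS code):

```r
# Generic percentile bootstrap CI: resample with replacement,
# re-estimate, then take quantiles of the bootstrap distribution
pb_ci <- function(x, est = median, B = 2000, conf = 0.95) {
  boots <- replicate(B, est(sample(x, replace = TRUE)))
  alpha <- 1 - conf
  quantile(boots, c(alpha/2, 1 - alpha/2))
}

set.seed(1)
pb_ci(rnorm(100))   # 95% CI for the median
```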

`ancJN`: fits a robust regression line for each group and then determines whether the predicted Y values differ significantly at specified points, so it has connections to the classic Johnson–Neyman method. That is, the method provides an indication of where the regression lines cross. Both types of heteroscedasticity are allowed, which can result in improved power beyond the improved power stemming from a robust estimator. See

Wilcox, R. R. (2013). A heteroscedastic method for comparing regression lines at specified design points when using a robust regression estimator. Journal of Data Science, 11, 281–291.

`anctspb`: like `ancJN`, but uses a percentile bootstrap method that might help when there are tied values among the dependent variable.

`ancGLOB`: a robust global ANCOVA method. Like the function `ancova`, it provides a flexible way of dealing with curvature, and heteroscedasticity is allowed. But this function can reject in situations where `ancova` does not reject. The function returns a p-value, and the hypothesis of identical regression lines is rejected if the p-value is less than or equal to a critical p-value. In essence, it can beat reliance on improved versions of the Bonferroni method. (Details are in a paper submitted for publication.) It does not dominate my original ANCOVA method (applied with the R function `ancova`) in terms of power, but I have encountered situations where it makes a practical difference.

It determines a critical p-value via the R function `ancGLOB_pv`. In essence, simulations are used. By default, the number of replications is `iter=500`, but I suggest using `iter=2000` or larger. Execution time can be reduced substantially with `cpp=TRUE`, which calls a C++ version of the function written by Xiao He. Here are the commands to install the C++ version:

For a global test that two parametric regression lines are identical, see

Wilcox, R. R. & Clark, F. (in press). Heteroscedastic global tests that the regression parameters for two or more independent groups are identical. Communications in Statistics– Simulation and Computation.

`ancGpar` performs the robust method. The paper includes a different method for use with least squares regression. It is based in part on the HC4 estimator, which deals with heteroscedasticity. But if there are outliers among the dependent variable, you are much better off using a robust estimator.
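The HC4 estimator mentioned here can be sketched in a few lines of base R, following Cribari-Neto's (2004) definition: a sandwich covariance estimator whose residual weights are inflated for high-leverage points. This is an illustration only, not the implementation used by the WRS functions.

```r
# HC4 heteroscedasticity-consistent covariance matrix for an lm fit
# (sketch of Cribari-Neto's 2004 definition; illustration only)
hc4 <- function(fit) {
  X <- model.matrix(fit)
  e <- residuals(fit)
  h <- hatvalues(fit)                       # leverages
  n <- nrow(X); p <- ncol(X)
  delta <- pmin(4, n * h / p)               # leverage-dependent exponent
  meat  <- t(X) %*% diag(e^2 / (1 - h)^delta) %*% X
  bread <- solve(t(X) %*% X)
  bread %*% meat %*% bread                  # sandwich estimate of Cov(beta-hat)
}

set.seed(1)
x <- rnorm(50); y <- 1 + 2 * x + rnorm(50)
hc4(lm(y ~ x))   # 2x2 covariance matrix of (intercept, slope)
```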

`Dancova`: ANCOVA for two dependent groups that provides a flexible way of dealing with curvature. Both types of heteroscedasticity are allowed. Roughly, it approximates the regression lines with a running interval smoother and compares the regression lines at specified design points. This is an extension of the R function `ancova` to dependent groups. The function can do an analysis on either the marginal measures of location or a measure of location based on the difference scores. When using a robust estimator, the choice between these two approaches can be important. It defaults to using a trimmed mean.

`Dancovamp`: like `Dancova`, only designed to handle multiple covariates.

`Danctspb`: compares the regression lines of two dependent groups using a robust regression estimator. The default is to use Theil–Sen, but any estimator can be used via the argument `regfun`. So in contrast to `Dancova`, a parametric form for the regression line is assumed. As usual, one can eliminate outliers among the independent variable by setting the argument `xout=TRUE`. When a parametric regression line provides a more accurate fit, this approach can have more power compared to using a smoother. But when there is curvature that is not modeled well with a parametric fit, the reverse can happen.

Note: a version of `ancGLOB` for dependent groups is being studied.

`Rcoefalpha`: computes a robust analog of coefficient alpha. I developed this method some years ago but just got around to writing an R function. See

Wilcox, R. R. (1992). Robust generalizations of classical test reliability and Cronbach’s alpha. British Journal of Mathematical and Statistical Psychology, 45, 239–254.

*R. R. Wilcox*”

Have fun exploring these new methods!


August 23, 2013

One critique frequently heard about Bayesian statistics is the subjectivity of the assumed prior distribution. If one cherry-picks a prior, of course the posterior can be tweaked, especially when only a few data points are at hand. For example, see the Scholarpedia article on Bayesian statistics:

In the uncommon situation that the data are extensive and of simple structure, the prior assumptions will be unimportant and the assumed sampling model will be uncontroversial. More generally we would like to report that any conclusions are robust to reasonable changes in both prior and assumed model: this has been termed inference robustness

(David Spiegelhalter and Kenneth Rice (2009) Bayesian statistics. Scholarpedia, 4(8):5230.)

Therefore, it is suggested that …

In particular, audiences should ideally fully understand the contribution of the prior distribution to the conclusions. (ibid)

In the example of Bayes factors for t tests (Rouder, Speckman, Sun, Morey, & Iverson, 2009), the assumption that has to be defined a priori is the effect size δ expected under the H1. In the BayesFactor package for R, this can be adjusted via the `r` parameter. By default, it is set to 0.5, but it can be made wider (larger r’s, which means one expects larger effects) or narrower (r’s close to zero, which means one expects smaller effects in the population).
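Under the hood, the two-sided JZS Bayes factor of Rouder et al. (2009) is a one-dimensional integral that can be computed in base R without any package. The sketch below follows the one-sample version of their formula, with `r` as the Cauchy prior scale discussed above (using 0.5 as in the text); it is an illustration, not the BayesFactor package's code.

```r
# JZS Bayes factor (H1 over H0) for a one-sample t statistic,
# following Rouder et al. (2009): Cauchy(0, r) prior on the effect size delta
jzs_bf10 <- function(t, N, r = 0.5) {
  nu <- N - 1
  # marginal likelihood under H1: integrate over g (inverse-chi-square(1) prior)
  num <- integrate(function(g) {
    (1 + N * g * r^2)^(-1/2) *
      (1 + t^2 / ((1 + N * g * r^2) * nu))^(-(nu + 1)/2) *
      (2 * pi)^(-1/2) * g^(-3/2) * exp(-1 / (2 * g))
  }, 0, Inf)$value
  # likelihood under H0 (delta = 0)
  den <- (1 + t^2 / nu)^(-(nu + 1)/2)
  num / den
}

jzs_bf10(t = 2.51, N = 100)   # BF for a single study; > 1 favors H1
```

Varying `r` in this function is exactly the robustness analysis described below: the same t value can yield BFs on either side of 1 depending on the prior scale.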

In their reanalysis of Bem’s ESP data, Wagenmakers, Wetzels, Borsboom, Kievit, and van der Maas (2011, PDF) proposed a robustness analysis for Bayes factors (BF), which simply shows the BF for a range of priors. If the conclusion is the same for a large range of priors, it could be judged to be robust (this is also called a “sensitivity analysis”).

I wrote an R function that can generate plots like this. Here’s a reproduction of Wagenmakers et al.’s (2011) analysis of Bem’s data – it looks pretty much identical:

```r
## Bem data, two-sided
# provide t values, sample sizes, and the location(s) of the red dot(s)
# set forH1 to FALSE in order to vertically flip the plot.
# Usually I prefer higher BFs to be in favor of H1, but I flipped it
# in order to match Wagenmakers et al. (2011)
BFrobustplot(
    ts=c(2.51, 2.55, 2.23, 1.74, 1.92, 2.39, 2.03, 1.8, 1.31, 2.96),
    ns=c(100, 97, 100, 150, 100, 150, 99, 150, 200, 50),
    dots=1, forH1 = FALSE)
```


You can throw in as many t values and corresponding sample sizes as you want. Furthermore, the function can compute one-sided Bayes factors as described in Wagenmakers and Morey (2013). If this approach is applied to the Bem data, the plot looks as follows – everything is shifted a bit in the H1 direction:

Finally, here’s the function:

```r
## This source code is licensed under the FreeBSD license
## (c) 2013 Felix Schönbrodt

#' @title Plots a comparison of a sequence of priors for t test Bayes factors
#'
#' @param ts A vector of t values
#' @param ns A vector of corresponding sample sizes
#' @param rs The sequence of r's that should be tested. r should run up to 2 (higher values are implausible; E.-J. Wagenmakers, personal communication, Aug 22, 2013)
#' @param labels Names for the studies (displayed in the facet headings)
#' @param dots Values of r which should be marked with a red dot
#' @param plot If TRUE, a ggplot is returned. If FALSE, a data frame with the computed Bayes factors is returned
#' @param sides If set to "two" (default), a two-sided Bayes factor is computed. If set to "one", a one-sided Bayes factor is computed. In this case, it is assumed that positive t values correspond to results in the predicted direction and negative t values to results in the unpredicted direction. For details, see Wagenmakers, E.-J., & Morey, R. D. (2013). Simple relation between one-sided and two-sided Bayesian point-null hypothesis tests.
#' @param nrow Number of rows of the faceted plot
#' @param xticks Number of tick marks on the x axis
#' @param forH1 Defines the direction of the BF. If forH1 is TRUE, BFs > 1 speak in favor of H1 (i.e., the quotient is defined as H1/H0). If forH1 is FALSE, it's the reverse direction.
#'
#' @references
#' Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16, 225-237.
#' Wagenmakers, E.-J., & Morey, R. D. (2013). Simple relation between one-sided and two-sided Bayesian point-null hypothesis tests. Manuscript submitted for publication.
#' Wagenmakers, E.-J., Wetzels, R., Borsboom, D., Kievit, R., & van der Maas, H. L. J. (2011). Yes, psychologists must change the way they analyze their data: Clarifications for Bem, Utts, & Johnson (2011).
BFrobustplot <- function(
    ts, ns, rs=seq(0, 2, length.out=200), dots=1, plot=TRUE,
    labels=c(), sides="two", nrow=2, xticks=3, forH1=TRUE)
{
    library(BayesFactor)

    # compute one-sided p-values from ts and ns
    ps <- pt(ts, df=ns-1, lower.tail = FALSE)   # one-sided test

    # add the dots' locations to the sequence of r's
    rs <- c(rs, dots)

    res <- data.frame()
    for (r in rs) {

        # first: calculate two-sided BFs
        B_e0 <- c()
        for (i in 1:length(ts))
            B_e0 <- c(B_e0, exp(ttest.tstat(t = ts[i], n1 = ns[i], rscale=r)$bf))

        # second: calculate one-sided BFs
        B_r0 <- c()
        for (i in 1:length(ts)) {
            if (ts[i] > 0) {
                # correct direction
                B_r0 <- c(B_r0, (2 - 2*ps[i])*B_e0[i])
            } else {
                # wrong direction
                B_r0 <- c(B_r0, (1 - ps[i])*2*B_e0[i])
            }
        }

        res0 <- data.frame(t=ts, n=ns, BF_two=B_e0, BF_one=B_r0, r=r)
        if (length(labels) > 0) {
            res0$labels <- labels
            res0$heading <- factor(1:length(labels), labels=paste0(labels, "\n(t = ", ts, ", df = ", ns-1, ")"), ordered=TRUE)
        } else {
            res0$heading <- factor(1:length(ts), labels=paste0("t = ", ts, ", df = ", ns-1), ordered=TRUE)
        }
        res <- rbind(res, res0)
    }

    # define the measure to be plotted: one- or two-sided?
    res$BF <- res[, paste0("BF_", sides)]

    # flip the BF if requested
    if (forH1 == FALSE) {
        res$BF <- 1/res$BF
    }

    if (plot==TRUE) {
        library(ggplot2)
        p1 <- ggplot(res, aes(x=r, y=log(BF))) + geom_line() + facet_wrap(~heading, nrow=nrow) + theme_bw() + ylab("log(BF)")
        p1 <- p1 + geom_hline(yintercept=c(-log(c(30, 10, 3)), log(c(3, 10, 30))), linetype="dotted", color="darkgrey")
        p1 <- p1 + geom_hline(yintercept=log(1), linetype="dashed", color="darkgreen")

        # add the red dots
        p1 <- p1 + geom_point(data=res[res$r %in% dots, ], aes(x=r, y=log(BF)), color="red", size=2)

        # add annotations for the evidence categories
        p1 <- p1 + annotate("text", x=max(rs)*1.8, y=-2.85, label=paste0("Strong~H[", ifelse(forH1==TRUE, 0, 1), "]"), hjust=1, vjust=.5, size=3, color="black", parse=TRUE)
        p1 <- p1 + annotate("text", x=max(rs)*1.8, y=-1.7,  label=paste0("Moderate~H[", ifelse(forH1==TRUE, 0, 1), "]"), hjust=1, vjust=.5, size=3, color="black", parse=TRUE)
        p1 <- p1 + annotate("text", x=max(rs)*1.8, y=-.55,  label=paste0("Anecdotal~H[", ifelse(forH1==TRUE, 0, 1), "]"), hjust=1, vjust=.5, size=3, color="black", parse=TRUE)
        p1 <- p1 + annotate("text", x=max(rs)*1.8, y=2.86,  label=paste0("Strong~H[", ifelse(forH1==TRUE, 1, 0), "]"), hjust=1, vjust=.5, size=3, color="black", parse=TRUE)
        p1 <- p1 + annotate("text", x=max(rs)*1.8, y=1.7,   label=paste0("Moderate~H[", ifelse(forH1==TRUE, 1, 0), "]"), hjust=1, vjust=.5, size=3, color="black", parse=TRUE)
        p1 <- p1 + annotate("text", x=max(rs)*1.8, y=.55,   label=paste0("Anecdotal~H[", ifelse(forH1==TRUE, 1, 0), "]"), hjust=1, vjust=.5, size=3, color="black", parse=TRUE)

        # set scale ticks
        p1 <- p1 + scale_y_continuous(breaks=c(-log(c(30, 10, 3)), 0, log(c(3, 10, 30))), labels=c("-log(30)", "-log(10)", "-log(3)", "log(1)", "log(3)", "log(10)", "log(30)"))
        p1 <- p1 + scale_x_continuous(breaks=seq(min(rs), max(rs), length.out=xticks))
        return(p1)
    } else {
        return(res)
    }
}
```


*References*

Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. *Psychonomic Bulletin & Review, 16*, 225–237.

Wagenmakers, E.-J., & Morey, R. D. (2013). Simple relation between one-sided and two-sided Bayesian point-null hypothesis tests. Manuscript submitted for publication.

Wagenmakers, E.-J., Wetzels, R., Borsboom, D., Kievit, R., & van der Maas, H. L. J. (2011). Yes, psychologists must change the way they analyze their data: Clarifications for Bem, Utts, & Johnson (2011).

June 11, 2013

*[Update June 12: The data.table code has been improved (thanks to a comment by Matthew Dowle); for a similar approach see also Tal Galili’s post]*

The guys from RStudio now provide CRAN download logs (see also this blog post). Great work!

I always asked myself how many people actually download my packages. Now I can finally get an answer (… with some anxiety of getting frustrated 😉).

Here are the complete, self-contained R scripts to analyze these log data:

```r
## ======================================================================
## Step 1: Download all log files
## ======================================================================

# Here's an easy way to get all the URLs in R
start <- as.Date('2012-10-01')
today <- as.Date('2013-06-10')
all_days <- seq(start, today, by = 'day')
year <- as.POSIXlt(all_days)$year + 1900
urls <- paste0('http://cran-logs.rstudio.com/', year, '/', all_days, '.csv.gz')

# only download the files you don't have:
dir.create("CRANlogs", showWarnings = FALSE)
missing_days <- setdiff(as.character(all_days), tools::file_path_sans_ext(dir("CRANlogs"), compression = TRUE))

for (i in 1:length(missing_days)) {
    print(paste0(i, "/", length(missing_days)))
    # look up the URL by date, so that URLs and missing days stay in sync
    url <- urls[match(missing_days[i], as.character(all_days))]
    download.file(url, paste0('CRANlogs/', missing_days[i], '.csv.gz'))
}
```


```r
## ======================================================================
## Step 2: Load single data files into one big data.table
## ======================================================================

file_list <- list.files("CRANlogs", full.names=TRUE)

logs <- list()
for (file in file_list) {
    print(paste("Reading", file, "..."))
    logs[[file]] <- read.table(file, header = TRUE, sep = ",", quote = "\"",
        dec = ".", fill = TRUE, comment.char = "", as.is=TRUE)
}

# rbind together all files
library(data.table)
dat <- rbindlist(logs)

# add some keys and define variable types
dat[, date:=as.Date(date)]
dat[, package:=factor(package)]
dat[, country:=factor(country)]
dat[, weekday:=weekdays(date)]
dat[, week:=strftime(as.POSIXlt(date), format="%Y-%W")]

setkey(dat, package, date, week, country)

save(dat, file="CRANlogs/CRANlogs.RData")

# for later analyses: load the saved data.table
# load("CRANlogs/CRANlogs.RData")
```


```r
## ======================================================================
## Step 3: Analyze it!
## ======================================================================

library(ggplot2)
library(plyr)

str(dat)

# overall downloads of packages
d1 <- dat[, length(week), by=package]
d1 <- d1[order(V1), ]
d1[package=="TripleR", ]
d1[package=="psych", ]

# plot 1: compare downloads of selected packages on a weekly basis
agg1 <- dat[J(c("TripleR", "RSA")), length(unique(ip_id)), by=c("week", "package")]
ggplot(agg1, aes(x=week, y=V1, color=package, group=package)) + geom_line() + ylab("Downloads") + theme_bw() + theme(axis.text.x = element_text(angle=90, size=8, vjust=0.5))

# plot 2: the same comparison with the high-volume package psych included
agg1 <- dat[J(c("psych", "TripleR", "RSA")), length(unique(ip_id)), by=c("week", "package")]
ggplot(agg1, aes(x=week, y=V1, color=package, group=package)) + geom_line() + ylab("Downloads") + theme_bw() + theme(axis.text.x = element_text(angle=90, size=8, vjust=0.5))
```


These are the weekly downloads of my packages `TripleR` and `RSA`. Actually, ~30 downloads per week (from this single mirror) is much more than I’ve expected!

To put things in perspective, here is the same plot with the package `psych` included:
Some psychological sidenotes on social comparisons:

- Downward comparisons enhance well-being; extreme upward comparisons are detrimental. Hence, *never* include `ggplot2` in your graphic!
- Upward comparisons instigate your achievement motive and give you drive to get better. Hence, select some packages which are slightly above your own.
- Of course, things are a bit more complicated than that …

*All source code on this post is licensed under the FreeBSD license.*