Matt Dowle

Behind the scenes of CRAN

(Just from my point of view as a package maintainer.)

New users of R might not appreciate the full benefit of CRAN and new package maintainers may not appreciate the importance of keeping their packages updated and free of warnings and errors. This is something I only came to realize myself in the last few years so I thought I would write about it, by way of today’s example.

Since data.table was updated on CRAN on 3rd December, it has been passing all-OK. But today I noticed a new warning (converted to error by data.table) on one CRAN machine. This is displayed under the CRAN checks link.


Sometimes errors happen for mundane reasons. For example, one of the Windows machines was in error recently because somehow the install became locked. That either fixed itself or someone spent time fixing it (if so, thank you). Today’s issue appears to be different.

I can either look lower down on that page or click the link to see the message.

Calling 'structure(NULL, *)' is deprecated, as NULL cannot have attributes.

I’ve never seen this message before. But since it mentions a deprecation and the flavor name is r-devel, it looks likely that the warning has just been added to R’s own development version. Every day, all CRAN packages are retested with the very latest commits to R. I did a quick scan of the latest commit messages to R but couldn’t see anything relating to this apparently new warning. Some commit messages are long, with the detail right there in the message. Others are short and use reference numbers that require you to hop to that reference, such as “port r71828 from trunk”, which prevents fast scanning at times like this. There is more hunting I could do, but for now, let’s see if I get lucky.

The last line of data.table’s test suite output has been refined over the years for my future self, and on the same CRAN page without me needing to go anywhere else, it is showing me today:

5 errors out of 5940 (lastID=1748.4, endian==little, sizeof(long double)==16, sizeof(pointer)==8) in inst/tests/tests.Rraw on Tue Dec 27 18:09:48 2016. Search tests.Rraw for test numbers: 167, 167.1, 167.2, 168, 168.1.

The date and time are included so I can double-check that it really did run on CRAN’s machine recently and that I’m not seeing an old stale result that would disappear if simply rerun with the latest commits/fixes.

Next I do what I told myself and open the tests.Rraw file in my editor, searching for "test(167". Immediately, I see this test is within a block of code starting with :

if ("package:ggplot2" %in% search()) {
test(167, ...

In data.table we test compatibility of data.table with a bunch of popular packages. These packages are listed in the Suggests field of DESCRIPTION.

Suggests: bit64, knitr, chron, ggplot2 (>= 0.9.0), plyr, reshape, reshape2, testthat (>= 0.4), hexbin, fastmatch, nlme, xts, gdata, GenomicRanges, caret, curl, zoo, plm, rmarkdown, parallel

We do not necessarily suggest these packages in the English sense of the verb; i.e., ‘to recommend’. Perhaps a better name for that field would be Optional: you need those packages installed if you wish to run all the tests, build all the documentation and, in data.table’s case, run the compatibility tests.

Anyway, now that I know that the failing tests are testing compatibility with ggplot2, I’ll click over to ggplot2 and look at its status. I’m hoping I’ll get lucky and ggplot2 is in error too.


Indeed it is. And I can see the same message on ggplot2’s CRAN checks page.

Calling 'structure(NULL, *)' is deprecated, as NULL cannot have attributes.

It’s my lucky day. ggplot2 is in error too, with the same message, so the new warning has nothing to do with data.table per se. I got to this point in under 30 seconds. No typing was required to run anything at all; it was all done by clicking within CRAN’s pages and searching one file. My task is done and I can move on. Thanks to CRAN and the people that run it.

What if data.table or ggplot2 were already in error or warning before R-core made their change? R-core members wouldn’t have seen any status change. If they see no status change for any of the 9,787 CRAN packages then they don’t know for sure their change is ok. All they know is that it didn’t affect any of the passing packages; they can’t be sure about the packages which are already in error or warning for an unrelated reason. I get more requests from R-core and CRAN maintainers to update data.table than from users of data.table. I’m sorry that I could not find time to update data.table earlier in 2016 than I did (it was showing an error for many months).

Regarding today’s warning, it has been caught before it gets to users. You will never be aware it ever happened. Either R-core will revert this change, or ggplot2 will be asked to send an update to CRAN before this change in R is released.

This is one reason why packages need to be on CRAN not just on GitHub. Not just so they are available to users most easily but so they are under the watchful eye of CRAN daily tests on all platforms.

Now that data.table is used by 320 CRAN and Bioconductor packages, I’m experiencing the same (minor in comparison) frustration that R-core maintainers must have been having for many years: package maintainers, myself included, not keeping their packages clean of errors and warnings, no matter how insignificant those errors or warnings might appear. Sometimes, as in my case in 2016, I simply haven’t been able to assign time to start the process of releasing to CRAN. I have worked hard to reduce the time it takes to run the checks not covered by R CMD check, and this is getting faster now. One aspect of that script is reverse dependency checks: checking packages which use data.table in some way.

The current status() of data.table reverse dependency checks is as follows, using data.table in development on my laptop. These 320 packages themselves often depend on or suggest other packages, so my local revdep library has 2,108 packages.

> status()
CRAN:
ERROR : 6 : AFM mlr mtconnectR partools quanteda stremr
WARNING : 2 : ie2miscdata PGRdup
NOTE : 69
OK : 155
TOTAL : 232 / 237
NOT STARTED (first 5 of 5) : finch flippant gasfluxes propr rlas

Bioconductor:
WARNING : 4 : genomation methylPipe RiboProfiling S4Vectors
NOTE : 68
OK : 9
TOTAL : 82 / 83
NOT STARTED (first 1 of 1) : diffloop

Now that Jan Gorecki has joined H2O, he has been able to spend some time automating and improving this. Currently, the result he gets with a docker script is as follows.

> status()
CRAN:
ERROR : 18 : AFM blscrapeR brainGraph checkmate ie2misc lava mlr mtconnectR OptiQuantR panelaggregation partools pcrsim psidR quanteda simcausal stremr strvalidator xgboost
WARNING : 4 : data.table ie2miscdata msmtools PGRdup
NOTE : 72
OK : 141
TOTAL : 235 / 235

Bioconductor:
ERROR : 20 : CAGEr dada2 facopy flowWorkspace GenomicTuples ggcyto GOTHiC IONiseR LowMACA methylPipe minfi openCyto pepStat PGA phyloseq pwOmics QUALIFIER RTCGA SNPhood TCGAbiolinks
WARNING : 15 : biobroom bsseq Chicago genomation GenomicInteractions iGC ImmuneSpaceR metaX MSnID MSstats paxtoolsr Pviz r3Cseq RiboProfiling scater
NOTE : 27
OK : 3
TOTAL : 65 / 65

So, our next task is to make Jan’s result on docker match mine. I can’t quite remember how I got all these packages to pass locally for me. In some cases I needed to find and install Ubuntu libraries, and I tried my best to keep a note of them at the time here. Another case is that lava suggests mets but mets depends on lava; we currently solve such chicken-or-egg situations manually, one by one. A third example is that the permissions of /tmp seem to be different on docker, which at least one package appears to test and depend on. We have tried changing TEMPDIR from /tmp to ~/tmp to solve that and will wait for the rerun to see if that worked. I won’t be surprised if it takes a week of elapsed time to get our results to match. That’s two man-weeks of on-and-off time as we fix, automate the fix and wait to see if the rerun works. And this is work after data.table has already made it to CRAN, to make next time easier and less of a barrier to entry.

The point is, all this takes time behind the scenes. I’m sure other package maintainers have similar issues and have come up with various solutions. I’m aware of devtools::revdep_check, have used it gratefully for some years, and thanked Hadley for it in this tweet. But recently I’ve found it more reliable and simpler to run R CMD check at the command line directly, using the unix parallel command. Thank you to R-core and CRAN maintainers for keeping CRAN going in 2016; there must be much that nobody knows about. Thank you to the package maintainers that use data.table and have received my emails and fixed their warnings or errors (users will never know that happened). Sorry I myself didn’t keep data.table cleaner, faster. We’re working to improve that going forward.

Red herring bites

At the Bay Area R User Group in February I presented progress in big-join in H2O which is based on the algorithm in R’s data.table package. The presentation had two goals: i) describe one test in great detail so everyone understands what is being tested so they can judge if it is relevant to them or not; and ii) show how it scales with data size and number of nodes.

These were the final two slides :

Slide14     Slide15

I left a blank for 1e10 (10 billion high cardinality rows joined with 10 billion high cardinality rows returning 10 billion high cardinality rows) because it didn’t work at that time. Although each node has 256GB RAM (and 32 cores), the 10 billion row test involves joining two 10 billion row tables (each 200GB) and returning a third table (also ~10 billion rows) of 300GB, total 700GB. I was giving 200GB to each of the 4 H2O nodes (to leave 50GB on each node for the operating system and my forgiving colleagues), which meant the H2O cluster had 800GB RAM. The join algorithm needed more than the remaining 100GB to complete the task and hence failed. Given that a common rule of thumb is “3x data size” for working memory, to fail with 0.1x data size as working memory is very reasonable. Internally we refer to H2O as a fast calculator. Like R and Python it is in-memory. Unlike R and Python, a single data frame can be bigger than a single node.

So I scaled up to 10 nodes: 2TB RAM and 320 cores.

But it still didn’t work. It ran for 30 minutes and then failed with out-of-memory. I felt that 2TB of RAM really should be enough to complete this task: 200GB joined with 200GB returning 300GB. 2TB-700GB = 1.3TB; that amount of working memory should be enough. It pointed to something being wrong somewhere. Indeed with help from colleagues we identified one point where data was being duplicated and shouldn’t have been. This reduced the working memory needed and then it worked.

But it took 20 minutes.

This didn’t fit with my gut feel given I knew how the algorithm works (or was supposed to work). It should be nearer to linear scaling: 1 billion rows took under a minute, so 10 billion should take about 10 minutes. But it was taking twice that: 20 minutes. More to the point, I’m not just having a cup of coffee or lunch while it’s running; I’m watching the CPUs and the network traffic between the nodes. The puzzling thing was that the network wasn’t saturated and the CPUs were often cold. Something was wrong.

There ensued many proposals and discussions as to what it might be, focusing on the algorithm and its implementation.

Until one day I went back to the 1 billion row test and ran it on 4 nodes, but a different set of 4 nodes. I usually run on servers 6-9. There was no reason for picking those servers; the first time I did it I asked which ones I could use and then never changed it. This time I ran on servers 1-4 instead. It was 3 times slower. At first I thought there must be a difference in the libraries on the servers or in my code. After ruling out many things I rubbed my eyes, ran it a few times again, and then again on servers 6-9. It was repeatable and confirmed. How on earth could it make a difference which servers I ran on? All 10 servers are of identical spec in the same rack, connected to the same switch. Here’s a photo :


I was monitoring CPU and network usage. Nothing else (either human or artificial) was using the cluster. I was the only one. I had this physical cluster to myself. For sure.

I realized we might have been barking up the wrong tree when we were looking at the algorithm and its implementation. When I scaled up to 10bn rows and 10 nodes, perhaps I didn’t just scale up as intended, but also included a server that was somehow faulty?!

I reran on servers 2-5 and it was 3 times faster than on servers 1-4. The culprit appears to be server 1, then. We have a network tester in H2O that I had already run but ran it again. Our server names correspond to the last digit of their IP addresses: server 1 = .181, server 2 = .182, server 3 = .183, etc. Here was the image :


There doesn’t appear to be much wrong here. I asked colleagues and they advised to more thoroughly check the network speeds as they’d seen problems in the past. I Googled for how to test network speed which quickly returned iperf. I knew the problem might be server 1 so I chose server 3 to be the server and compared speeds from servers 1, 2 and 4 as clients to server 3 as the server. Here’s the result :


So server 1 is more than 10 times slower than the others. I trotted over to our server room again and had a look at the back of server 1.

rackBack   rackBackOrange

See that orange light? That’s what was wrong. Either the switch or the network card had auto negotiated itself down to 1G speed when all its friends in the rack are happy at 10G speed. Despite it being up 57 days, it hadn’t auto-negotiated itself back up to 10G speed. Or something like that. We think.

What’s the solution? Old school: I unplugged the ethernet cable and plugged it back in. The orange light turned green. I went back to my laptop and tested again with iperf. This time iperf reported 10G speed for server 1, consistent with the other servers. The non-physical way to do this is to use ethtool. Next time a problem occurs I’ll try it to save some footsteps to the server room.

Rerunning the 10 billion row to 10 billion row high cardinality join test now comes in twice as fast: 10 minutes instead of 20 minutes. I’m not really sure why that made such a big difference since the network wasn’t saturated enough for it to be a pure data transfer speed issue. I’ll chalk it up to something to do with that network card or switch and move on. I’ll ensure that iperf reports 10G speed between all nodes going forward.

In my talk in Chicago this week I connected to these servers and gave a live demo. That video along with all our other presentations that day are available here.

The event itself was quite packed as seen in the picture below:

Big Joins, Scalable Data Munging and Data.Table

Matt Dowle, Hacker,

I’ll be presenting on the same topic at Data by the Bay on Monday 16 May.

Fast csv writing for R

R has traditionally been very slow at reading and writing csv files of, say, 1 million rows or more. Getting data into R is often the first task a user needs to do and if they have a poor experience (either hard to use, or very slow) they are less likely to progress. The data.table package in R solved csv import convenience and speed in 2013 by implementing data.table::fread() in C. The examples at that time were 30 seconds down to 3 seconds to read 50MB, and over 1 hour down to 8 minutes to read 20GB. In 2015 Hadley Wickham implemented readr::read_csv() in C++.

But what about writing csv files?

It is often the case that the end goal of work in R is a report, a visualization or a summary of some form. Not anything large. So we don’t really need to export large data, right? It turns out that the combination of speed, expressiveness and advanced abilities of R (particularly data.table) mean that it can be faster (so I’m told) to export the data from another environment (e.g. a database), manipulate it in R, and export it back to the database, than it is to keep the data in the database. However, practitioners in the field increasingly report that the export step out of R is the biggest bottleneck preventing that workflow. The export step may also be needed to send clean data or derived features to other teams or systems in the user’s organization.

Yes: feather, I hear you thinking! Indeed feather is a new fast uncompressed binary format which is also cross-language, by Wes McKinney and Hadley Wickham. As Roger Peng pointed out R has had fast XDR binary format for many years but the innovation of feather is that it is compatible with Python and Julia and other systems will adopt the format too. This is a great thing. Note that it is very important to set compress=FALSE in save() and saveRDS() to achieve comparable times to feather using base R.

I was thinking about jumping on board with feather when to my surprise 2 weeks ago, the data.table project received a new user contribution from Otto Seiskari: fwrite(). Analogous to fread() it writes a data.frame or data.table to .csv and is implemented in C. I looked at the source and realized a few speed improvements could be made. Over the years I’ve occasionally mulled over different approaches. This contribution gave me the nudge to give those ideas a try.

The first change I made was to stop using the fprintf() function in C. This takes a file handle and a format specification string and writes a field to the file. If you have 1 million rows and 6 columns, that’s 6 million calls to that function. I’ve never looked at its source code but I guessed that, because it was being called 6 million times, some tasks like interpreting the format specification string and checking that the file handle is valid were being performed over and over again wastefully. I knew it maintains its own internal buffer automatically to save writing too many very small chunks of data to disk. But I wondered how much time was spent writing data to the disk versus how much time was spent formatting that data. So I added my own in-memory buffer, wrote to it directly and moved from fprintf() to the lower level C function write(). Note: not the C function fwrite() but write(). This writes a buffer of data to a file; it doesn’t have an internal buffer in the way. I did this so I could wrap the write() call with a timing block and see how much time was being spent writing to file versus formatting.

Here is the result :


Wow: 99% of the time is being spent formatting! The time is not spent writing to disk at all. Not one ounce.

Can we parallelize it then? On first glance, the data has to be written to the file in a serial fashion: row 1, then row 2, then row 3. Perhaps the data could be divided into N subsets, each subset written out in parallel, then joined together afterwards. Before rushing ahead and doing it that way, let’s think what the problems might be. First, the N subsets could be written at the same time (to different files) but that could saturate the internal bus (I don’t know much about that). Once the N files are obtained, they still have to be joined together into one file. I searched online and asked my colleagues and it appears there is no way to create a file by reference to existing files; you have to concatenate the N files into a new single file (a copy). That step would be serial and would also churn through disk space; it’s possible there would not be enough disk space available to complete the task. Having anticipated these problems, is there a better way I could implement from the get-go?

What did I have one of that I could make N of? Well, I had one buffer: the input to write(). If I make N threads and give each thread its own buffer can it be made to work? This is inherently thread safe since the only memory write would be to a thread’s private buffer. The only tricky aspect is ensuring that each piece is written to the output csv file in the correct order. I’m on good terms with Norman Matloff who has recommended OpenMP to me in the past and kindly gave me a copy of his excellent book Parallel Computing for Data Science. I turned to Google and quickly found that OpenMP has exactly the construct I needed :


OpenMP does a lot of work for the programmer. It directs the team of threads to work on each iteration of the for loop. The code after the #pragma omp ordered gets executed one-at-a-time, and in the order of the for loop, automatically. So if iteration 7 finishes quickly for some reason, that thread will wait for iterations 1-6 to finish before it writes its buffer and receives the next piece to work on.

I chose the buffer size for each thread to be small (1MB) to fit in each core’s cache. This makes the number of pieces N (size of data / 1MB) much larger than the number of threads, so that each piece is small enough not to get stuck and cause a blockage, yet not so small as to incur wasteful startup and scheduling overhead. Here is the result on my 4-core MacBook Pro (running bare-metal Ubuntu, of course):


This was one of those rare moments where the speedup factor was the best I hoped for. An ideal 4x improvement: 3.6 seconds down to 0.9 seconds.

I thought I’d have to add lots of tests for fwrite() but I looked at the test suite and found that Otto had already added 26 tests in his pull request that I’d merged. My changes had broken quite a few of them so I fixed the code to pass his tests. Then I pushed to GitHub for others to test on MacOS and Windows. Within hours feedback came back that the speedup and correctness were confirmed.

One outstanding itch was that each field was being written to the buffer by the C function sprintf(). Although this is writing to an in-memory buffer directly, I wondered how big the overhead of interpreting the format string and actually calling the library function was. I knew from fread() that specialized code that is tuned to be iterated can make a big difference. So I created specialized writeNumeric and writeInteger functions in C using base R’s C source code (e.g. format.c:scientific()) as a guide to give me some clues.


That’s better. Another 3x speedup: 0.9 seconds down to 0.3. This feels like the lower bound to me. So far we’ve been working with 1 million rows and an output csv file of around 150MB; quite small by many standards. Let’s scale up 10x to 10 million rows on my laptop with SSD, and 100x to 100 million rows on one of our 32 core / 256 GB physical servers. The reproducible code is at the end of this article. On the server we test writing to RAM disk (/run/shm) versus writing to hard disk. RAM disk may be appropriate for ephemeral needs; e.g. transferring to a database where there is no requirement to keep the transfer file.


To my surprise, writing a csv file using data.table::fwrite() appears to be faster than writing a binary file with feather. Your mileage may of course vary. If you ever need to quickly inspect the data using familiar command line text tools like head, tail, wc, grep and sed, then it seems that csv files are still in the running. Sometimes a csv format can turn out to be quite efficient. Consider the number 1.2 for example. In R that is an 8 byte ‘double’ but in the csv it’s just 3 characters wide, a 62% saving. This may be why the csv size (7.5GB) is slightly smaller than the binary (9.1GB).

Here’s a small display of the data being tested:


fwrite is available to test now in data.table v1.9.7 in development. Check out our installation notes for how to install the dev version and get OpenMP where necessary. Please try it out and let us know your findings. It works just as well on both data.frame and data.table. Note there are some outstanding items to complete before release to CRAN, collated here.

fwrite() is a great improvement over write.csv(): 63 seconds down to 2 seconds for the 10 million row test on my laptop. write.csv() converts the input to character format first in-memory. This creates new string objects for each and every unique integer or numeric which takes compute cycles to create and hits the global character cache (a hash table) and uses more RAM. There is an advantage of doing it that way, though. The character conversion is object oriented so many user or package defined classes with their own print methods in R code will just work automatically with no C programming required.

What’s the connection to H2O? Usually you would import data into H2O using h2o.importFile() called from either the R or Python H2O packages. h2o.importFile() has been parallel and distributed for some years. To test big-join I’m using it to load two 200GB files into a 10 node cluster with 2TB of RAM, for example. I’ll write more about this in a future article. Sometimes however, it’s necessary or convenient to transfer data between H2O and the R client. This step currently uses base R’s write.csv and read.csv. We intend to replace these calls with fwrite/fread. We’ll also look at the H2O Python package and see if we can improve that similarly.

If you’d like to hear more about big-join in H2O I’ll be presenting at our Open H2O Tour in Chicago on May 3rd, 2016 together with lots of other amazing people including Leland Wilkinson, Jeff Chapman, Erin LeDell, Mark Landry and others. And use the promo code H2OYEAH to get a 50% discount. Also please catch up with me at Data-by-the-Bay where I’ll be presenting on May 16th, 2016.

This article was also published on R-Bloggers here:


Reproducible code adapted from this Stack Overflow question

install.packages("data.table", type="source", repos="")
# For Mac and Windows, see
require(data.table) # v1.9.7
require(feather)    # v0.0.0.9000
require(readr)      # CRAN v0.2.2

DTn = function(N) data.table( # or data.frame. Both work just as well.
  str1=sample(sprintf("%010d",sample(N,1e5,replace=TRUE)), N, replace=TRUE),
  str2=sample(sprintf("%09d",sample(N,1e5,replace=TRUE)), N, replace=TRUE),
  str3=sample(sapply(sample(2:30, 100, TRUE), function(n) paste0(sample(LETTERS, n, TRUE), collapse="")), N, TRUE),
  num1=sample(round(rnorm(1e6,mean=6.5,sd=15),2), N, replace=TRUE),
  num2=sample(round(rnorm(1e6,mean=6.5,sd=15),10), N, replace=TRUE),
  int1=sample(ceiling(rexp(1e6)), N, replace=TRUE))
DT = DTn(1e6)

# Either RAM disk :
# or HDD/SSD :
system.time(write_feather(DT, "feather.bin"))

# NB: It works just as well on both data.frame and data.table