Behind the scenes of CRAN

(Just from my point of view as a package maintainer.)

New users of R might not appreciate the full benefit of CRAN and new package maintainers may not appreciate the importance of keeping their packages updated and free of warnings and errors. This is something I only came to realize myself in the last few years so I thought I would write about it, by way of today’s example.

Since data.table was updated on CRAN on 3rd December, it has been passing all-OK. But today I noticed a new warning (converted to error by data.table) on one CRAN machine. This is displayed under the CRAN checks link.

[Screenshot: data.table's CRAN package check results]

Sometimes errors happen for mundane reasons. For example, one of the Windows machines was in error recently because somehow the install became locked. That either fixed itself or someone spent time fixing it (if so, thank you). Today’s issue appears to be different.

I can either look lower down on that page or click the link to see the message.

Calling 'structure(NULL, *)' is deprecated, as NULL cannot have attributes.

I’ve never seen this message before. But given that it mentions something being deprecated, and the flavor name is r-devel, it looks likely that the warning has just been added to R itself in development. On a daily basis all CRAN packages are retested with the very latest commits to R that day. I did a quick scan of the latest commit messages to R but I couldn’t see anything relating to this apparently new warning. Some of the commit messages are long, with detail right there in the message. Others are short and use reference numbers that require you to hop to that reference, such as “port r71828 from trunk”, which prevents fast scanning at times like this. There is more hunting I could do, but for now, let’s see if I get lucky.

The last line of data.table’s test suite output has been refined over the years for my future self. On the same CRAN page, without me needing to go anywhere else, it is showing me today:

5 errors out of 5940 (lastID=1748.4, endian==little, sizeof(long double)==16, sizeof(pointer)==8) in inst/tests/tests.Rraw on Tue Dec 27 18:09:48 2016. Search tests.Rraw for test numbers: 167, 167.1, 167.2, 168, 168.1.

The date and time are included so I can double-check that it really did run on CRAN’s machine recently and that I’m not seeing an old stale result which would disappear when simply rerun with the latest commits/fixes.

Next I do what I told myself: open the tests.Rraw file in my editor and search for "test(167". Immediately, I see that this test is within a block of code starting with:

if ("package:ggplot2" %in% search()) {
test(167, ...
}

In data.table we test compatibility of data.table with a bunch of popular packages. These packages are listed in the Suggests field of DESCRIPTION.

Suggests: bit64, knitr, chron, ggplot2 (≥ 0.9.0), plyr, reshape, reshape2, testthat (≥ 0.4), hexbin, fastmatch, nlme, xts, gdata, GenomicRanges, caret, curl, zoo, plm, rmarkdown, parallel

We do not necessarily suggest these packages in the English sense of the verb, i.e. ‘to recommend’. Rather, perhaps a better name for that field would be Optional, in the sense that you need those packages installed if you wish to run all the tests, build all the documentation and, in data.table’s case, test compatibility.
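In code, the usual R idiom for this kind of optional usage looks like the following. This is a general sketch, not data.table’s actual test harness (which uses the search() check shown above):

if (requireNamespace("ggplot2", quietly = TRUE)) {
  library(ggplot2)
  # ... only now run the tests or examples that need ggplot2 ...
}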

Anyway, now that I know that the failing tests are testing compatibility with ggplot2, I’ll click over to ggplot2 and look at its status. I’m hoping I’ll get lucky and ggplot2 is in error too.

[Screenshot: ggplot2's CRAN package check results]

Indeed it is. And I can see the same message on ggplot2’s CRAN checks page.

Calling 'structure(NULL, *)' is deprecated, as NULL cannot have attributes.

It’s my lucky day: ggplot2 is in error too, with the same message, so thankfully the new warning has nothing to do with data.table per se. I got to this point in under 30 seconds. No typing was required to run anything at all; it was all done by clicking within CRAN’s pages and searching one file. My task is done and I can move on. Thanks to CRAN and the people who run it.

What if data.table or ggplot2 had already been in error or warning before R-core made their change? R-core members wouldn’t have seen any status change. And if they see no status change for any of the 9,787 CRAN packages, they still don’t know for sure that their change is ok: all they know is that it didn’t affect any of the passing packages; they can’t be sure about the packages which are already in error or warning for an unrelated reason. I get more requests to update data.table from R-core and CRAN maintainers than from users of data.table. I’m sorry that I could not find time to update data.table earlier in 2016 than I did (data.table was showing an error for many months).

Regarding today’s warning, it has been caught before it gets to users. You will never be aware it ever happened. Either R-core will revert this change, or ggplot2 will be asked to send an update to CRAN before this change in R is released.

This is one reason why packages need to be on CRAN, not just on GitHub: not only so they are most easily available to users, but so they are under the watchful eye of CRAN’s daily tests on all platforms.

Now that data.table is used by 320 CRAN and Bioconductor packages, I’m experiencing the same (minor in comparison) frustration that R-core maintainers must have been having for many years: package maintainers, myself included, not keeping their packages clean of errors and warnings, no matter how insignificant those errors or warnings might appear. Sometimes, as in my case in 2016, I simply haven’t been able to assign time to start the process of releasing to CRAN. I have worked hard to reduce the time it takes to run the checks not covered by R CMD check, and this is happening faster now. One aspect of that script is reverse dependency checks: checking the packages which use data.table in some way.
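At its core a reverse dependency check is just R CMD check run over every package that uses data.table. For illustration only (this is not the actual script, and the "revdep" directory of downloaded source tarballs is an assumption), the idea can be sketched from within R using the parallel package:

library(parallel)
# One source tarball per reverse dependency, downloaded beforehand
tarballs <- list.files("revdep", pattern = "\\.tar\\.gz$", full.names = TRUE)
# Run one R CMD check per tarball, spread across all cores;
# each check writes its results into the usual <pkg>.Rcheck directory
exit_codes <- mclapply(tarballs, function(tb)
  system2("R", c("CMD", "check", "--no-manual", tb)),
  mc.cores = detectCores())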

The current status() of data.table reverse dependency checks is as follows, using data.table in development on my laptop. These 320 packages themselves often depend or suggest other packages so my local revdep library has 2,108 packages.

> status()
CRAN:
ERROR : 6 : AFM mlr mtconnectR partools quanteda stremr
WARNING : 2 : ie2miscdata PGRdup
NOTE : 69
OK : 155
TOTAL : 232 / 237
RUNNING : 0
NOT STARTED (first 5 of 5) : finch flippant gasfluxes propr rlas

BIOC:
ERROR : 1 : RTCGA
WARNING : 4 : genomation methylPipe RiboProfiling S4Vectors
NOTE : 68
OK : 9
TOTAL : 82 / 83
RUNNING : 0
NOT STARTED (first 5 of 1) : diffloop

Now that Jan Gorecki has joined H2O he has been able to spend some time to automate and improve this. Currently, the result he gets with a docker script is as follows.

> status()
CRAN:
ERROR : 18 : AFM blscrapeR brainGraph checkmate ie2misc lava mlr mtconnectR OptiQuantR panelaggregation partools pcrsim psidR quanteda simcausal stremr strvalidator xgboost
WARNING : 4 : data.table ie2miscdata msmtools PGRdup
NOTE : 72
OK : 141
TOTAL : 235 / 235
RUNNING : 0

BIOC:
ERROR : 20 : CAGEr dada2 facopy flowWorkspace GenomicTuples ggcyto GOTHiC IONiseR LowMACA methylPipe minfi openCyto pepStat PGA phyloseq pwOmics QUALIFIER RTCGA SNPhood TCGAbiolinks
WARNING : 15 : biobroom bsseq Chicago genomation GenomicInteractions iGC ImmuneSpaceR metaX MSnID MSstats paxtoolsr Pviz r3Cseq RiboProfiling scater
NOTE : 27
OK : 3
TOTAL : 65 / 65
RUNNING : 0

So, our next task is to make Jan’s result on docker match mine. I can’t quite remember how I got all these packages to pass locally for me. In some cases I needed to find and install Ubuntu libraries and I tried my best to keep a note of them at the time here. Another case is that lava suggests mets but mets depends on lava. We currently solve chicken-or-egg situations like that manually, one-by-one. A third example is that the permissions of /tmp seem to be different on docker, which at least one package appears to test and depend on. We have tried changing TEMPDIR from /tmp to ~/tmp to solve that and will wait for the rerun to see if that worked. I won’t be surprised if it takes a week of elapsed time to get our results to match. That’s two man-weeks of on-and-off time as we fix, automate the fix and wait to see if the rerun works. And this is all work after data.table has already made it to CRAN, done to make next time easier and less of a barrier to entry to start.

The point is, all this takes time behind the scenes. I’m sure other package maintainers have similar issues and have come up with various solutions. I’m aware of devtools::revdep_check, have used it gratefully for some years and thanked Hadley for it in this tweet. But recently I’ve found it more reliable and simpler to run R CMD check at the command line directly, using the unix parallel command. Thank you to R-core and CRAN maintainers for keeping CRAN going in 2016. There must be much that nobody knows about. Thank you to the package maintainers that use data.table and have received my emails and fixed their warnings or errors (users will never know that happened). Sorry I myself didn’t keep data.table cleaner, faster. We’re working to improve that going forward.

Hyperparameter Optimization in H2O: Grid Search, Random Search and the Future

“Good, better, best. Never let it rest. ‘Til your good is better and your better is best.” – St. Jerome

tl;dr

H2O now has random hyperparameter search with time- and metric-based early stopping. Bergstra and Bengio[1] write on p. 281:

Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time.

Even smarter means of searching the hyperparameter space are in the pipeline, but for most use cases random search does just as well.

What Are Hyperparameters?

Nearly all model algorithms used in machine learning have a set of tuning “knobs” which affect how the learning algorithm fits the model to the data. Examples are the regularization settings alpha and lambda for Generalized Linear Modeling or ntrees and max_depth for Gradient Boosted Models. These knobs are called hyperparameters to distinguish them from internal model parameters, such as GLM’s beta coefficients or Deep Learning’s weights, which get learned from the data during the model training process.
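For example, in H2O’s R API the hyperparameters are simply arguments to the model-building call. A minimal sketch, assuming a running H2O cluster and that predictors, response and train already exist (as in the examples later in this post):

library(h2o)
# ntrees and max_depth are hyperparameters we choose up front; the trees
# themselves (the model's internal parameters) are learned during training.
gbm <- h2o.gbm(x = predictors, y = response, training_frame = train,
               ntrees = 100, max_depth = 5)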

What Is Hyperparameter Optimization?

The set of all combinations of values for these knobs is called the hyperparameter space. We’d like to find a set of hyperparameter values which gives us the best model for our data in a reasonable amount of time. This process is called hyperparameter optimization.

H2O contains good default values for many datasets, but to get the best performance for your data you will want to tune at least some of these hyperparameters to maximize the predictive performance of your models. You should start with the most important hyperparameters for your algorithm of choice, for example ntrees and max_depth for the tree models or the hidden layers for Deep Learning.

H2O provides some guidance by grouping the hyperparameters by their importance in the Flow UI. You should look carefully at the values of the ones marked critical, while the secondary or expert ones are generally used for special cases or fine tuning.

Note that some hyperparameters, such as learning_rate, have a very wide dynamic range. You should choose values that reflect this for your search (e.g., powers of 10 or of 2) to ensure that you cover the most relevant parts of the hyperparameter space. (Bergstra and Bengio p. 290)
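For example, a sketch of the idea (the values are illustrative, not recommendations):

# Cover several orders of magnitude rather than a narrow linear range:
learn_rate_opts <- 10 ^ seq(-4, -1)   # 0.0001, 0.001, 0.01, 0.1
max_depth_opts  <- 2 ^ (1:5)          # 2, 4, 8, 16, 32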

Measuring Model Quality

There are many different ways to measure model quality. If you don’t know which to use, H2O will choose a good general-purpose metric for you based on the category of your model (binomial or multinomial classification, regression, clustering, …). However, you may want to choose a metric to compare your models based on your specific goals (e.g., maximizing AUC, minimizing log loss, minimizing false negatives, minimizing mean squared error, …).

Overfitting

Overfitting is the phenomenon of fitting a model so thoroughly to your training data that it begins to memorize the fine details of that specific data, rather than finding general characteristics of that data which will also apply to future data on which you want to make predictions.

Overfitting not only applies to the model training process, but also to the model selection process. During the process of tuning the hyperparameters and selecting the best model you should avoid overfitting them to your training data. Otherwise, the hyperparameter values that you choose will be too highly tuned to your selection data, and will not generalize as well as they could to new data. Note that this is the same principle as, but subtly different from, overfitting during model training. Ideally you should use cross-validation or a validation set during training and then a final holdout test (validation) dataset for model selection. As Bergstra and Bengio write on p. 290,

The standard practice for evaluating a model found by cross-validation is to report [test set error] for the [hyperparameter vector] that minimizes [validation error].

You can read much more on this topic in Chapter 7 of Elements of Statistical Learning from H2O advisors and Stanford professors Trevor Hastie and Rob Tibshirani with Jerome Friedman [2].

Selecting Hyperparameters Manually and With Cartesian Grid

The traditional method of selecting the values for your hyperparameters has been to individually train a number of models with different combinations of values, and then to compare the model performance to choose the best model. For example, for a tree-based model you might choose ntrees of (50, 100 and 200) and max_depth of (5, 10, 15 and 20) for a total of 3 x 4 = 12 models. This process of trying out hyperparameter sets by hand is called manual search. By looking at the models’ predictive performance, as measured by test-set, cross-validation or validation metrics, you select the best hyperparameter settings for your data and needs.

As the number of hyperparameters and the lists of desired values increase this obviously becomes quite tedious and difficult to manage.

A Little Help?

For several years H2O has included grid search, also known as Cartesian Hyperparameter Search or exhaustive search. Grid search builds models for every combination of hyperparameter values that you specify.
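As a sketch, the 12-model example above could be written as a Cartesian grid with h2o.grid in R, assuming the same predictors, response and training/validation frames used in the larger R example later in this post:

cart_grid <- h2o.grid("gbm",
                      grid_id = "cartesian_gbm",
                      x = predictors,
                      y = response,
                      training_frame   = trainSplit,
                      validation_frame = validSplit,
                      hyper_params = list(ntrees    = c(50, 100, 200),
                                          max_depth = c(5, 10, 15, 20)))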

Bergstra and Bengio write on p. 283:

Grid search … typically finds a better [set of hyperparameters] than purely manual sequential optimization (in the same amount of time)

H2O keeps track of all the models resulting from the search, and allows you to sort the list based on any supported model metric (e.g., AUC or log loss). For the example above, H2O would build your 12 models and return the list sorted with the best first, either using the metric of your choice or automatically using one that’s generally appropriate for your model’s category.

H2O allows you to run multiple hyperparameter searches and to collect all the models for comparison in a single sortable result set: just name your grid and run multiple searches. You can even add models from manual searches to the result set by specifying a grid search with a single value for each interesting hyperparameter:

# Begin with a random search of a space of 6 * 5 = 30 possible models:
hyper_parameters = { 'alpha': [0.01,0.1,0.3,0.5,0.7,0.9], 
                     'lambda': [1e-4,1e-5,1e-6,1e-7,1e-8] }

search_criteria = { 'strategy': "RandomDiscrete", 'seed': 42,
                    'stopping_metric': "AUTO", 
                    'stopping_tolerance': 0.001,
                    'stopping_rounds': 2 }
            
random_plus_manual = H2OGridSearch(
    H2OGeneralizedLinearEstimator(family='binomial', nfolds=5),
    hyper_parameters,
    grid_id="random_plus_manual",
    search_criteria=search_criteria)
    
random_plus_manual.train(x=x,y=y, training_frame=training_data)

# Now add a manual search to the results:
manual_hyper_parameters = {'alpha': [0.05], 'lambda': [1e-4]}
random_plus_manual = H2OGridSearch(
    H2OGeneralizedLinearEstimator(family='binomial', nfolds=5),
    manual_hyper_parameters,
    grid_id="random_plus_manual")

random_plus_manual.train(x=x,y=y, training_frame=training_data)

random_plus_manual.show()
print(random_plus_manual.sort_by('F1', False))

Searching Large Hyperparameter Spaces

As the number of hyperparameters being tuned increases, and the number of values that you would like to explore for each hyperparameter increases, you obviously get a combinatorial explosion in the number of models required for an exhaustive search. Since we always have time constraints on the model tuning process, the obvious thing to do is to narrow down our choices by doing a coarser search of the space. Given a fixed amount of time, making random choices of hyperparameter values generally gives results that are better than the best results of a Cartesian (exhaustive) search.

Bergstra and Bengio write on p. 281:

Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time. Granting random search the same computational budget, random search finds better models by effectively searching a larger, less promising configuration space.

[F]or most data sets only a few of the hyper-parameters really matter, but … different hyper-parameters are important on different data sets. This phenomenon makes grid search a poor choice for configuring algorithms for new data sets.

We propose random search as a substitute and baseline that is both reasonably efficient (roughly equivalent to or better than combining manual search and grid search, in our experiments) and keeping the advantages of implementation simplicity and reproducibility of pure grid search.

[R]andom search … trades a small reduction in efficiency in low-dimensional spaces for a large improvement in efficiency in high-dimensional search spaces.

After doing a random search, if desired you can then iterate by “zooming in” on the regions of the hyperparameter space which look promising. You can do this by running additional, more targeted, random or Cartesian hyperparameter searches or manual searches. For example, if you started with alpha values of [0.0, 0.25, 0.5, 0.75, 1.0] and the middle values look promising, you can follow up with a finer grid of [0.3, 0.4, 0.5, 0.6, 0.7].
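A sketch of that zooming-in step for GLM’s alpha, again assuming the frames and column names used elsewhere in this post:

# First pass used alpha = c(0.0, 0.25, 0.5, 0.75, 1.0); now zoom in on the middle:
zoom_grid <- h2o.grid("glm",
                      grid_id = "glm_alpha_zoom",
                      x = predictors, y = response,
                      training_frame = train,
                      family = "binomial",
                      hyper_params = list(alpha = c(0.3, 0.4, 0.5, 0.6, 0.7)))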

Random Hyperparameter Search in H2O

H2O has supported random hyperparameter search since version 3.8.1.1. To use it, specify a grid search as you would with a Cartesian search, but add search criteria parameters to control the type and extent of the search. You can specify a max runtime for the grid, a max number of models to build, or metric-based automatic early stopping. If you choose several of these then H2O will stop when the first of the criteria is met. As an example, you might specify “stop when MSE has improved over the moving average of the best 5 models by less than 0.0001, but take no more than 12 hours”.
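That example could be written as a search_criteria list, using the same parameter names as the R example further down:

search_criteria <- list(strategy           = "RandomDiscrete",
                        stopping_metric    = "MSE",
                        stopping_tolerance = 1e-4,       # improved by less than 0.0001 ...
                        stopping_rounds    = 5,          # ... over the moving average of the best 5 models
                        max_runtime_secs   = 12 * 3600)  # but take no more than 12 hours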

H2O will choose a random set of hyperparameter values from the ones that you specify, without repeats, and build the models sequentially. You can look at the incremental results while the models are being built by fetching the grid with the h2o.getGrid (R) or h2o.get_grid (Python) functions. There’s also a getGrids command in Flow that will allow you to click on any of the grids you’ve built. H2O’s Flow UI will soon plot the error metric as the grid is being built to make the progress easy to visualize, something like this:

[Plot: error metric for a random search as models are built]
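In the meantime, from R you can poll the partial results yourself while the grid builds. A small sketch, using the grid_id "mygrid" from the R example below:

partial <- h2o.getGrid(grid_id = "mygrid", sort_by = "mse", decreasing = FALSE)
print(partial)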

Choosing Search Criteria

In general, metric-based early stopping optionally combined with max runtime is the best choice. The number of models it will take to converge toward a global best can vary a lot (see below), and metric-based early stopping accounts for this automatically by stopping the search process when the error curve (learning curve[3]) flattens out.

The number of models required for convergence depends on a number of things, but mostly on the “shape” of the error function in the hyperparameter space [Bergstra and Bengio p. 291]. While most algorithms perform well in a fairly large region of the hyperparameter space on most datasets, some combinations of dataset and algorithm are very sensitive: they have very “peaked” error functions. In tuning neural networks with a large number of hyperparameters on various datasets, Bergstra and Bengio find convergence within 2-64 trials (models built), depending largely on which hyperparameters they choose to tune. In some classes of search they reach convergence in 4-8 trials, even with a very large search space:

Random experiment efficiency curves of a single-layer neural network for eight of the data sets used in Larochelle et al. (2007) … (7 hyper-parameters to optimize). … Random searches of 8 trials match or outperform grid searches of (on average) 100 trials.

Simpler algorithms such as GBM and GLM should require few trials to get close to a global minimum.

Examples: R

This example is clipped from GridSearch.md:


# Construct a large Cartesian hyper-parameter space
ntrees_opts = c(10000)       # early stopping will stop earlier
max_depth_opts = seq(1,20)
min_rows_opts = c(1,5,10,20,50,100)
learn_rate_opts = seq(0.001,0.01,0.001)
sample_rate_opts = seq(0.3,1,0.05)
col_sample_rate_opts = seq(0.3,1,0.05)
col_sample_rate_per_tree_opts = seq(0.3,1,0.05)
#nbins_cats_opts = seq(100,10000,100) # no categorical features
                                      # in this dataset

hyper_params = list( ntrees = ntrees_opts, 
                     max_depth = max_depth_opts, 
                     min_rows = min_rows_opts, 
                     learn_rate = learn_rate_opts,
                     sample_rate = sample_rate_opts,
                     col_sample_rate = col_sample_rate_opts,
                     col_sample_rate_per_tree = col_sample_rate_per_tree_opts
                     #,nbins_cats = nbins_cats_opts
)


# Search a random subset of these hyper-parameters. Max runtime 
# and max models are enforced, and the search will stop after we 
# don't improve much over the best 5 random models.
search_criteria = list(strategy = "RandomDiscrete", 
                       max_runtime_secs = 600, 
                       max_models = 100, 
                       stopping_metric = "AUTO", 
                       stopping_tolerance = 0.00001, 
                       stopping_rounds = 5, 
                       seed = 123456)

gbm_grid <- h2o.grid("gbm", 
                     grid_id = "mygrid",
                     x = predictors, 
                     y = response, 

# faster to use an 80/20 split
                     training_frame = trainSplit,
                     validation_frame = validSplit,
                     nfolds = 0,

                     # alternatively, use N-fold cross-validation:
                     # training_frame = train,
                     # nfolds = 5,

                     # Gaussian is best for MSE loss, but can try 
                     # other distributions ("laplace", "quantile"):
                     distribution="gaussian",

                     # stop as soon as mse doesn't improve by 
                     # more than 0.1% on the validation set, 
                     # for 2 consecutive scoring events:
                     stopping_rounds = 2,
                     stopping_tolerance = 1e-3,
                     stopping_metric = "MSE",

                     # how often to score (affects early stopping):
                     score_tree_interval = 100, 
                     
                     ## seed to control the sampling of the 
                     ## Cartesian hyper-parameter space:
                     seed = 123456,
                     hyper_params = hyper_params,
                     search_criteria = search_criteria)

gbm_sorted_grid <- h2o.getGrid(grid_id = "mygrid", sort_by = "mse")
print(gbm_sorted_grid)

best_model <- h2o.getModel(gbm_sorted_grid@model_ids[[1]])
summary(best_model)

You can find another example here.

Examples: Python

This example is clipped from pyunit_benign_glm_grid.py:

hyper_parameters = {'alpha': [0.01,0.3,0.5], 'lambda': [1e-5,1e-6,1e-7,1e-8]}

# test search_criteria plumbing and max_models
search_criteria = { 'strategy': "RandomDiscrete", 'max_models': 3 }
max_models_g = H2OGridSearch(H2OGeneralizedLinearEstimator(family='binomial'),
                             hyper_parameters, search_criteria=search_criteria)
max_models_g.train(x=x,y=y, training_frame=training_data)

max_models_g.show()
print(max_models_g.grid_id)
print(max_models_g.sort_by('F1', False))

assert len(max_models_g.models) == 3, "expected 3 models, got: {}".format(len(max_models_g.models))
print(max_models_g.sorted_metric_table())
print(max_models_g.get_grid("r2"))

# test search_criteria plumbing and asymptotic stopping
search_criteria = { 'strategy': "RandomDiscrete", 'seed': 42,
            'stopping_metric': "AUTO", 'stopping_tolerance': 0.1,
            'stopping_rounds': 2 }
asymp_g = H2OGridSearch(H2OGeneralizedLinearEstimator(family='binomial', nfolds=5),
                    hyper_parameters, search_criteria=search_criteria)
asymp_g.train(x=x,y=y, training_frame=training_data)

asymp_g.show()
print(asymp_g.grid_id)
print(asymp_g.sort_by('F1', False))

assert len(asymp_g.models) == 5, "expected 5 models, got: {}".format(len(asymp_g.models))

Examples: Flow

Flow includes an example called GBM_GridSearch.flow which does both Cartesian and random searches:

[Screenshots: search criteria panels in Flow for the Cartesian and random searches]

What’s That Up In The Road? A Head? (Roadmap)

This section covers possible improvements for hyperparameter search in H2O and lays out a roadmap.

Ease of Use

With the addition of random hyperparameter search it becomes more practical for non-experts to get good, albeit not expert, results with the ML model training process:

Algorithmic approaches to hyper-parameter optimization make machine learning results easier to disseminate, reproduce, and transfer to other domains.[4] p. 8

We are looking into adding either fixed or heuristically-driven hyperparameter spaces for use with random search, essentially an “I’m Feeling Lucky” button for model building.

Covering the Space Better

One possibility for improving random search is choosing sets of hyperparameters which cover the space more efficiently than randomly choosing each value independently. Bergstra and Bengio cover this on pages 295-297, and find a potential improvement of only a few percentage points and only when doing searches of 100-500 models. This is because, as they state earlier, the number of hyperparameters which are important for a given dataset is quite small (1-4), and the random search process covers this low number of dimensions quite well. See the illustration of the projection of high hyperparameter space dimensions onto low on Bergstra and Bengio p. 284 and the plots of hyperparameter importance by dataset on p. 294. On p. 295 they show that the speed of convergence of the search is directly related to the number of hyperparameters which are important for the given dataset.

There is ongoing research on trying to predetermine the “variable importance” of hyperparameters for a given dataset. If this bears fruit we will be able to narrow the search so that we converge to a globally-good model more quickly.

Learning the Hyperparameter Space

Bergstra and Bengio and Bergstra, Bengio, Bardenet and Kegl note that random hyperparameter search works almost as well as more sophisticated methods for the types of algorithms available in H2O. For very complex algorithms like Deep Belief Networks (not available in H2O) it can be insufficient:

Random search has been shown to be sufficiently efficient for learning neural networks for several datasets, but we show it is unreliable for training DBNs.

  1) Random search is competitive with the manual optimization of DBNs … and 2) Automatic sequential optimization outperforms both manual and random search.

Automatic sequential optimization refers here to techniques which build a model of the hyperparameter space and use it to guide the search process. The most well-known of these is the use of Gaussian Process (GP) models. Bergstra, Bengio, Bardenet and Kegl compare random search against both Gaussian Process and Tree-structured Parzen Estimator (TPE) learning techniques. They train Deep Belief Networks with 10 hyperparameters on a very tiny dataset of 506 rows and 13 columns [Bergstra, Bengio, Bardenet and Kegl p. 5], initializing the GP and TPE models with the results of a 30-model random search.

They find that for this test case the TPE method outperforms GP and GP outperforms random search beyond the initial 30 models. However, they can’t explain whether TPE does better because it narrows in on good hyperparameters more quickly or conversely because it searches more randomly than GP. [Bergstra, Bengio, Bardenet and Kegl p. 7] Also note that the size of the dataset is very, very small compared with the number of internal model parameters and model tuning hyperparameters. It is a bit hard to believe that these results apply to datasets of typical sizes for users of H2O (hundreds of millions or billions of rows, and hundreds or thousands of columns).

Experimentation and prototyping is clearly needed here to see which of these techniques, if any, are worth adding to H2O.


  1. Bergstra and Bengio. Random Search for Hyper-Parameter Optimization, 2012
  2. Trevor Hastie, Rob Tibshirani and Jerome Friedman. The Elements of Statistical Learning, 2008
  3. Andrew Ng. Machine Learning, 2016
  4. Bergstra, Bengio, Bardenet and Kegl. Algorithms for Hyper-parameter Optimization, 2011

Red herring bites

At the Bay Area R User Group in February I presented progress in big-join in H2O which is based on the algorithm in R’s data.table package. The presentation had two goals: i) describe one test in great detail so everyone understands what is being tested so they can judge if it is relevant to them or not; and ii) show how it scales with data size and number of nodes.

These were the final two slides:

[Slides: join benchmark timings by data size and number of nodes]

I left a blank for 1e10 (10 billion high cardinality rows joined with 10 billion high cardinality rows returning 10 billion high cardinality rows) because it didn’t work at that time. Although each node has 256GB RAM (and 32 cores) the 10 billion row test involves joining two 10 billion row tables (each 200GB) and returning a third table (also ~10 billion rows) of 300GB, total 700GB. I was giving 200GB to each of the 4 H2O nodes (to leave 50GB on each node for the operating system and my forgiving colleagues) which meant the H2O cluster had 800GB RAM. The join algorithm needed more than a mere 100GB to complete the task and hence failed. Given that a common rule of thumb is “3x data size” for working memory, to fail with 0.1x data size as working memory is very reasonable. Internally we refer to H2O as a fast calculator. Like R and Python it is in-memory. Unlike R and Python a single data frame can be bigger than a single node.

So I scaled up to 10 nodes: 2TB RAM and 320 cores.

But it still didn’t work. It ran for 30 minutes and then failed with out-of-memory. I felt that 2TB of RAM really should be enough to complete this task: 200GB joined with 200GB returning 300GB. 2TB-700GB = 1.3TB; that amount of working memory should be enough. It pointed to something being wrong somewhere. Indeed with help from colleagues we identified one point where data was being duplicated and shouldn’t have been. This reduced the working memory needed and then it worked.

But it took 20 minutes.

This didn’t fit with my gut feel given that I knew how the algorithm works (or was supposed to work). It should be nearer to linear scaling: 1 billion rows took under a minute, so 10 billion should take about 10 minutes. But it was taking twice that: 20 minutes. More to the point, I’m not just having a cup of coffee or lunch while it’s running; I’m watching the CPUs and the network traffic between the nodes. The puzzling thing was that the network wasn’t saturated and the CPUs were often cold. Something was wrong.

There ensued many proposals and discussions as to what it might be, focusing on the algorithm and its implementation.

Until one day I went back to the 1 billion row test and ran it on 4 nodes, but a different set of 4 nodes. I usually run on servers 6-9. There was no reason for picking those servers; the first time I did it I asked which ones I could use and then never changed it. This time I ran on servers 1-4 instead. It was 3 times slower. At first I thought there must be a difference in the libraries on the servers or in my code. After ruling out many things I rubbed my eyes, ran it a few times again, and then again on servers 6-9. It was repeatable and confirmed. How on earth could it make a difference which servers I ran on? All 10 servers have identical spec, in the same rack, connected to the same switch. Here’s a photo:

[Photo: front of the server rack]

I was monitoring CPU and network usage. Nothing else (either human or artificial) was using the cluster. I was the only one. I had this physical cluster to myself. For sure.

I realized we might have been barking up the wrong tree when we were looking at the algorithm and its implementation. When I scaled up to 10bn rows and 10 nodes, perhaps I didn’t just scale up as intended; perhaps I also included a server that was somehow faulty?!

I reran on servers 2-5 and it was 3 times faster than on servers 1-4. The culprit appears to be server 1, then. We have a network tester in H2O that I had already run, but I ran it again. Our server names correspond to the last digit of their IP addresses: server 1 = .181, server 2 = .182, server 3 = .183, etc. Here was the image:

[Screenshot: H2O network test results across the 10 servers]

There doesn’t appear to be much wrong here. I asked colleagues and they advised me to check the network speeds more thoroughly, as they’d seen problems there in the past. I Googled for how to test network speed, which quickly returned iperf. I suspected server 1, so I made server 3 the iperf server and compared the speeds from servers 1, 2 and 4 as clients to it. Here’s the result:

[Screenshot: iperf throughput from servers 1, 2 and 4 to server 3]

So server 1 is more than 10 times slower than the others. I trotted over to our server room again and had a look at the back of server 1.

[Photos: back of the rack; the orange link light on server 1]

See that orange light? That’s what was wrong. Either the switch or the network card had auto negotiated itself down to 1G speed when all its friends in the rack are happy at 10G speed. Despite it being up 57 days, it hadn’t auto-negotiated itself back up to 10G speed. Or something like that. We think.

What’s the solution? Old school: I unplugged the ethernet cable and plugged it back in. The orange light turned green. I went back to my laptop and tested again with iperf. This time iperf reported 10G speed for server 1, consistent with the other servers. The non-physical way to do this is to use ethtool. Next time a problem occurs I’ll try that, to save some footsteps to the server room.

Rerunning the 10 billion row to 10 billion row high cardinality join test now comes in twice as fast: 10 minutes instead of 20 minutes. I’m not really sure why that made such a big difference since the network wasn’t saturated enough for it to be a pure data transfer speed issue. I’ll chalk it up to something to do with that network card or switch and move on. I’ll ensure that iperf reports 10G speed between all nodes going forward.

In my talk in Chicago this week I connected to these servers and gave a live demo. That video, along with all our other presentations from that day, is available here.

The event itself was quite packed as seen in the picture below:
[Photo: the packed room at the Chicago event]

Big Joins, Scalable Data Munging and Data.Table

Matt Dowle, Hacker, H2O.ai

I’ll be presenting on the same topic at Data by the Bay on Monday 16 May.

Fast csv writing for R

R has traditionally been very slow at reading and writing csv files of, say, 1 million rows or more. Getting data into R is often the first task a user needs to do and if they have a poor experience (either hard to use, or very slow) they are less likely to progress. The data.table package in R solved csv import convenience and speed in 2013 by implementing data.table::fread() in C. The examples at that time were 30 seconds down to 3 seconds to read 50MB, and over 1 hour down to 8 minutes to read 20GB. In 2015 Hadley Wickham implemented readr::read_csv() in C++.

But what about writing csv files?

It is often the case that the end goal of work in R is a report, a visualization or a summary of some form; not anything large. So we don’t really need to export large data, right? It turns out that the combination of speed, expressiveness and advanced abilities of R (particularly data.table) means that it can be faster (so I’m told) to export the data from another environment (e.g. a database), manipulate it in R, and export it back to the database, than it is to keep the data in the database. However, we increasingly hear from practitioners in the field that the export step out of R is the biggest bottleneck preventing that workflow. The export step may be needed to send clean data or derived features to other teams or systems in the user’s organization.

Yes: feather, I hear you thinking! Indeed feather is a new fast uncompressed binary format which is also cross-language, by Wes McKinney and Hadley Wickham. As Roger Peng pointed out, R has had a fast XDR binary format for many years, but the innovation of feather is that it is compatible with Python and Julia, and other systems will adopt the format too. This is a great thing. Note that it is very important to set compress=FALSE in save() and saveRDS() to achieve comparable times to feather using base R.
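For example, with the DT data.table defined in the appendix at the end of this post:

system.time(save(DT, file = "DT.Rdata", compress = FALSE))   # uncompressed XDR
system.time(saveRDS(DT, "DT.rds", compress = FALSE))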

I was thinking about jumping on board with feather when to my surprise 2 weeks ago, the data.table project received a new user contribution from Otto Seiskari: fwrite(). Analogous to fread() it writes a data.frame or data.table to .csv and is implemented in C. I looked at the source and realized a few speed improvements could be made. Over the years I’ve occasionally mulled over different approaches. This contribution gave me the nudge to give those ideas a try.

The first change I made was to stop using the fprintf() function in C. This takes a file handle and a format specification string and writes a field to the file. If you have 1 million rows and 6 columns that’s 6 million calls to that function. I’ve never looked at its source code but I guessed that because it was being iteratively called 6 million times, some tasks, like interpreting the format specification string and checking that the file handle is valid, were being performed over and over again wastefully. I knew it maintains its own internal buffer automatically to save writing too many very small chunks of data to disk. But I wondered how much time was spent writing data to the disk versus how much time was spent formatting that data. So I added my own in-memory buffer, wrote to it directly, and moved from fprintf() to the lower-level C function write(). Note: not the C function fwrite() but write(). This writes a buffer of data to a file; it doesn’t have an internal buffer in the way. I did this so I could wrap the write() call with a timing block and see how much time was being spent writing to file versus formatting.

Here is the result:

[Screenshot: timing breakdown of formatting versus the write() call]

Wow: 99% of the time is being spent formatting! The time is not spent writing to disk at all. Not one ounce.

Can we parallelize it then? On first glance, the data has to be written to the file in a serial fashion: row 1, then row 2, then row 3. Perhaps the data could be divided into N subsets, each subset written out in parallel, then joined together afterwards. Before rushing ahead and doing it that way, let’s think what the problems might be. First, the N subsets could be written at the same time (to different files) but that could saturate the internal bus (I don’t know much about that). Once the N files are obtained, they still have to be joined together into one file. I searched online and asked my colleagues and it appears there is no way to create a file by reference to existing files; you have to concatenate the N files into a new single file (a copy). That step would be serial and would also churn through disk space. It’s possible there would not be enough disk space available to complete the task. Having anticipated these problems, is there a better way I could implement from the get-go?

What did I have one of that I could make N of? Well, I had one buffer: the input to write(). If I make N threads and give each thread its own buffer, can it be made to work? This is inherently thread safe since the only memory writes are to each thread’s private buffer. The only tricky aspect is ensuring that each piece is written to the output csv file in the correct order. I’m on good terms with Norman Matloff, who has recommended OpenMP to me in the past and kindly gave me a copy of his excellent book Parallel Computing for Data Science. I turned to Google and quickly found that OpenMP has exactly the construct I needed:

[Screenshot: the OpenMP parallel loop with an ordered section in fwrite's C source]

OpenMP does a lot of work for the programmer. It directs the team of threads to work on each iteration of the for loop. The code after the #pragma omp ordered gets executed one at a time, in the order of the for loop, automatically. So if iteration 7 finishes quickly for some reason, that thread will wait for iterations 1-6 to finish before it writes its buffer and receives the next piece to work on.

I chose the buffer size for each thread to be small (1MB) to fit in each core’s cache. This makes the number of pieces N (size of data / 1MB) much larger than the number of threads, so that no piece is so big that it gets stuck and causes a blockage, nor so small that it incurs wasteful startup and scheduling overhead. Here is the result on my 4-core MacBook Pro (running bare-metal Ubuntu, of course):

[Screenshot: timing after parallelizing with OpenMP]

This was one of those rare moments where the speedup factor was the best I hoped for. An ideal 4x improvement: 3.6 seconds down to 0.9 seconds.

I thought I’d have to add lots of tests for fwrite() but I looked at the test suite and found that Otto had already added 26 tests in his pull request that I’d merged. My changes had broken quite a few of them so I fixed the code to pass his tests. Then I pushed to GitHub for others to test on MacOS and Windows. Within hours feedback came back that the speedup and correctness were confirmed.

One outstanding itch was that each field was being written to the buffer by the C function sprintf(). Although this is writing to an in-memory buffer directly, I wondered how big the overhead of interpreting the format string and actually calling the library function was. I knew from fread() that specialized code that is tuned to be iterated can make a big difference. So I created specialized writeNumeric and writeInteger functions in C using base R’s C source code (e.g. format.c:scientific()) as a guide to give me some clues.

[Screenshot: timing after specializing the formatting functions]

That’s better. Another 3x speedup: 0.9 seconds down to 0.3. This feels like the lower bound to me. So far we’ve been working with 1 million rows and an output csv file of around 150MB. Quite small by many standards. Let’s scale up 10x to 10 million rows on my laptop with SSD and 100x to 100 million rows on one of our 32 core / 256 GB physical servers. The reproducible code is at the end of this article. On the server we test writing to ram disk (/run/shm) versus writing to hard disk. Ram disk may be appropriate for ephemeral needs; e.g. transferring to a database where there is no requirement to keep the transfer file.

Results

To my surprise, writing a csv file using data.table::fwrite() appears to be faster than writing a binary file with feather. Your mileage may of course vary. If you ever need to quickly inspect the data using familiar command line text tools like head, tail, wc, grep and sed, then it seems that csv files are still in the running. Sometimes a csv format can turn out to be quite efficient. Consider the number 1.2 for example. In R that is an 8 byte ‘double’ but in the csv it’s just 3 characters wide, a 62% saving. This may be why the csv size (7.5GB) is slightly smaller than the binary (9.1GB).

Here’s a small display of the data being tested:

[Screenshot: a preview of the test data]

fwrite is available to test now in data.table v1.9.7 in development. Check out our installation notes for how to install the dev version and get OpenMP where necessary. Please try it out and let us know your findings. It works just as well on both data.frame and data.table. Note there are some outstanding items to complete before release to CRAN, collated here.

fwrite() is a great improvement over write.csv(): 63 seconds down to 2 seconds for the 10 million row test on my laptop. write.csv() converts the input to character format first, in memory. This creates new string objects for each and every unique integer or numeric, which takes compute cycles, hits the global character cache (a hash table) and uses more RAM. There is an advantage to doing it that way, though: the character conversion is object oriented, so many user- or package-defined classes with their own print methods in R code will just work automatically with no C programming required.

What’s the connection to H2O? Usually you would import data into H2O using h2o.importFile() called from either the R or Python H2O packages. h2o.importFile() has been parallel and distributed for some years. To test big-join I’m using it to load two 200GB files into a 10 node cluster with 2TB of RAM, for example. I’ll write more about this in a future article. Sometimes however, it’s necessary or convenient to transfer data between H2O and the R client. This step currently uses base R’s write.csv and read.csv. We intend to replace these calls with fwrite/fread. We’ll also look at the H2O Python package and see if we can improve that similarly.

If you’d like to hear more about big-join in H2O I’ll be presenting at our Open H2O Tour in Chicago on May 3rd, 2016 together with lots of other amazing people including Leland Wilkinson, Jeff Chapman, Erin LeDell, Mark Landry and others. And use the promo code H2OYEAH to get a 50% discount. Also please catch up with me at Data-by-the-Bay where I’ll be presenting on May 16th, 2016.

This article was also published on R-Bloggers here: http://www.r-bloggers.com/fast-csv-writing-for-r/

Appendix

Reproducible code adapted from this Stack Overflow question

install.packages("data.table", type="source", repos="https://Rdatatable.github.io/data.table")
# For Mac and Windows, see https://github.com/Rdatatable/data.table/wiki/Installation
require(data.table) # v1.9.7

devtools::install_github("wesm/feather/R")
require(feather) # v0.0.0.9000

require(readr) # CRAN v0.2.2

DTn = function(N) data.table( # or data.frame. Both work just as well.
  str1=sample(sprintf("%010d",sample(N,1e5,replace=TRUE)), N, replace=TRUE),
  str2=sample(sprintf("%09d",sample(N,1e5,replace=TRUE)), N, replace=TRUE),
  str3=sample(sapply(sample(2:30, 100, TRUE), function(n) paste0(sample(LETTERS, n, TRUE), collapse="")), N, TRUE),
  str4=sprintf("%05d",sample(sample(1e5,50),N,TRUE)),
  num1=sample(round(rnorm(1e6,mean=6.5,sd=15),2), N, replace=TRUE),
  num2=sample(round(rnorm(1e6,mean=6.5,sd=15),10), N, replace=TRUE),
  str5=sample(c("Y","N"),N,TRUE),
  str6=sample(c("M","F"),N,TRUE),
  int1=sample(ceiling(rexp(1e6)), N, replace=TRUE),
  int2=sample(N,N,replace=TRUE)-N/2
)
set.seed(21)
DT = DTn(1e6)

# Either ram disk:
setwd("/dev/shm")
# or HDD/SSD:
setwd("~")

system.time(fwrite(DT,"fwrite.csv"))
system.time(write_feather(DT, "feather.bin"))
system.time(save(DT,file="save1.Rdata",compress=F))
system.time(save(DT,file="save2.Rdata",compress=T))
system.time(write.csv(DT,"write.csv.csv",row.names=F,quote=F))
system.time(readr::write_csv(DT,"write_csv.csv"))

NB: It works just as well on both data.frame and data.table