Start Off 2017 with Our Stanford Advisors

We were very excited to meet with our advisors (Prof. Stephen Boyd, Prof. Rob Tibshirani and Prof. Trevor Hastie) at H2O.ai on Jan 6, 2017.

Our CEO, Sri Ambati, made two great observations at the start of the meeting:

  • First was the hardware trend: hardware companies like Intel, Nvidia, and AMD plan to implement machine learning algorithms directly in hardware/GPUs.
  • Second was the data trend: more and more datasets are images, text, or audio instead of traditional transactional data. For these new datasets, deep learning seems to be the go-to algorithm. However, while deep learning can work very well, it is often very difficult to explain to business or regulatory professionals how and why it works.

There were several techniques to get around this problem and make machine learning solutions interpretable to our customers:

  • Patrick Hall pointed out that monotonicity, not linearity, is what determines interpretability. He cited a credit scoring system built on a constrained neural network: when the input variables are monotonic with respect to the response variable, the system can automatically generate reason codes.
  • One could run both deep learning and simpler algorithms (like GLM, Random Forest, etc.) on the same dataset. When the performance is similar, choose the simpler model, since it tends to be more interpretable. These meetings were great learning opportunities for us.
  • Another suggestion is to use a layered approach:
    • Use deep learning to extract a small number of features from a high-dimensional dataset.
    • Next, use a simple model on those extracted features to perform specific tasks.
    This layered approach can provide a great speedup as well. Imagine being able to reuse feature sets for images/text/speech that others have already derived: all you need to do is build your simple model on top of those feature sets to perform the functions you desire. In this sense, deep learning plays the role of PCA for non-linear features. Prof. Boyd also seemed to like GLRM (check out H2O GLRM) for feature extraction.
    With this layered approach, there are more system parameters to tune. Our auto-ML toolbox would be perfect for this! Go team! A rough sketch of the layered idea follows this list.
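
To make the layered idea concrete, here is a minimal sketch in Python using H2O. The training file and response column are hypothetical; the deep learning model's hidden-layer activations (via deepfeatures) serve as the extracted features, and a GLM is then fit on top of them.

import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

h2o.init()
train = h2o.import_file("train.csv")                 # hypothetical dataset
y = "response"                                        # hypothetical binary response column
x = [c for c in train.columns if c != y]

# Step 1: deep learning as a non-linear feature extractor
dl = H2ODeepLearningEstimator(hidden=[200, 50], epochs=10)
dl.train(x=x, y=y, training_frame=train)
deep_feats = dl.deepfeatures(train, 1)                # activations of one hidden layer

# Step 2: a simple, interpretable model on the extracted features
glm_frame = deep_feats.cbind(train[y])
glm = H2OGeneralizedLinearEstimator(family="binomial")
glm.train(x=deep_feats.columns, y=y, training_frame=glm_frame)
print(glm.coef())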

Subsequently, the conversation turned to visualization of datasets. Patrick Hall brought up the approach of first using clustering to separate the dataset and then applying a simple model to each cluster. This is very similar to the hierarchical mixtures of experts algorithm described in their book, The Elements of Statistical Learning: build decision trees from your dataset, then fit linear models at the leaf nodes to perform specific tasks. A rough sketch of this idea follows.
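
A minimal, generic sketch of the cluster-then-model idea (here with scikit-learn on synthetic data, not the hierarchical mixture of experts itself):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.randn(1000, 5)
y = 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.randn(1000)   # synthetic response

# Step 1: cluster the data
labels = KMeans(n_clusters=3, random_state=0).fit_predict(X)

# Step 2: fit a simple, interpretable model within each cluster
for k in np.unique(labels):
    mask = labels == k
    model = LinearRegression().fit(X[mask], y[mask])
    print("cluster", k, "coefficients:", model.coef_)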

Our very own Dr. Wilkinson has built a dataset visualization tool that can summarize a big dataset while maintaining the characteristics of the original data (like outliers and other properties). Totally awesome!

Arno Candel brought up the issue of overfitting and how to detect it during the training process rather than at the end using a held-out set. Prof. Boyd mentioned that we should check out Bayesian trees/additive models.

Last Words of Wisdom from our esteemed advisors: Deep learning is powerful, but other algorithms like random forest can beat it depending on the dataset. Deep learning requires big datasets to train, and it works best on data with some inherent organization, such as spatial features (in images) or temporal trends (in speech/time series). Random forest, on the other hand, works perfectly well on datasets with no such organization.

What is new in Sparkling Water 2.0.3 Release?

This release is built on H2O core 3.10.1.2.

Important Feature:

This architectural change allows Sparkling Water to connect to an existing H2O cluster. The benefit is that we are no longer affected by Spark killing its executors, so the solution should be more stable in environments with many H2O/Spark nodes. We are working on an article describing how to use this very important feature in Sparkling Water 2.0.3.

Release notes: https://0xdata.atlassian.net/secure/ReleaseNote.jspa?projectId=12000&version=16601

2.0.3 (2017-01-04)

  • Bug
    • SW-152 – ClassNotFound with spark-submit
    • SW-266 – H2OContext shouldn’t be Serializable
    • SW-276 – ClassLoading issue when running code using SparkSubmit
    • SW-281 – Update sparkling water tests so they use correct frame locking
    • SW-283 – Set spark.sql.warehouse.dir explicitly in tests because of SPARK-17810
    • SW-284 – Fix CraigsListJobTitlesApp to use local file instead of trying to get one from hdfs
    • SW-285 – Disable timeline service also in python integration tests
    • SW-286 – Add missing test in pysparkling for conversion RDD[Double] -> H2OFrame
    • SW-287 – Fix bug in SparkDataFrame converter where key wasn’t random if not specified
    • SW-288 – Improve performance of Dataset tests and call super.afterAll
    • SW-289 – Fix PySparkling numeric handling during conversions
    • SW-290 – Fixes and improvements of task used to extended h2o jars by sparkling-water classes
    • SW-292 – Fix ScalaCodeHandlerTestSuite
  • New Feature
    • SW-178 – Allow external h2o cluster to act as h2o backend in Sparkling Water
  • Improvement
    • SW-282 – Integrate SW with H2O 3.10.1.2 ( Support for external cluster )
    • SW-291 – Use absolute value for random number in sparkling-water in internal backend
    • SW-295 – H2OConf should be parameterized by SparkConf and not by SparkContext

Please visit https://community.h2o.ai to learn more about it, provide feedback and ask for assistance as needed.

@avkashchauhan | @h2oai

What is new in H2O latest release 3.10.2.1 (Tutte) ?

Today we released H2O version 3.10.2.1 (Tutte). It’s available on our Downloads page, and release notes can be found here.

sz42-6-wheels-lightened

Photo Credit: https://en.wikipedia.org/wiki/W._T._Tutte

Top enhancements in this release:

GLM MOJO Support: GLM now supports our smaller, faster, more efficient MOJO (Model ObJect, Optimized) format for model publication and deployment (PUBDEV-3664, PUBDEV-3695).
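
A quick sketch of exporting a trained GLM as a MOJO with the H2O Python API (the file path and column layout here are placeholders):

import h2o
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

h2o.init()
frame = h2o.import_file("train.csv")                  # placeholder dataset
glm = H2OGeneralizedLinearEstimator(family="gaussian")
glm.train(x=frame.columns[:-1], y=frame.columns[-1], training_frame=frame)

mojo_path = glm.download_mojo(path=".")               # writes <model_id>.zip to the current dir
print("MOJO saved to", mojo_path)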

ISAX: We actually introduced ISAX (Indexable Symbolic Aggregate ApproXimation) support a couple of releases back, but this version features more improvements and is worth a look. ISAX allows you to represent complex time series patterns using a symbolic notation, reducing the dimensionality of your data and allowing you to run our ML algos or use the index for searching or data analysis. For more information, check out the blog entry here: Indexing 1 billion time series with H2O and ISAX. (PUBDEV-3367, PUBDEV-3377, PUBDEV-3376)
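
A small sketch of calling ISAX from the H2O Python API on a frame where each row is a time series and each column a time step; the parameter values are arbitrary, so check the documentation for your version:

import h2o
import numpy as np

h2o.init()
series = h2o.H2OFrame(np.random.randn(100, 256).tolist())   # 100 series, 256 time steps each
symbols = series.isax(num_words=10, max_cardinality=10)     # symbolic (ISAX) representation per row
symbols.head()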

GLM: Improved feature and parameter descriptions for GLM. Next focus will be on improving documentation for the K-Means algorithm (PUBDEV-3695, PUBDEV-3753, PUBDEV-3791).

Quasibinomial support in GLM:
The quasibinomial family is similar to the binomial family except that, where the binomial family only supports 0/1 values for the target, the quasibinomial family allows for two arbitrary values. This feature was requested by advanced users of H2O for applications such as implementing their own advanced estimators. (PUBDEV-3482, PUBDEV-3791)
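
A hedged sketch of fitting a quasibinomial GLM with the H2O Python API on synthetic data; here the two target values are 0.0 and 2.5 rather than 0/1:

import h2o
import numpy as np
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

h2o.init()
n = 500
X = np.random.randn(n, 3)
y = np.where(X[:, 0] + 0.1 * np.random.randn(n) > 0, 2.5, 0.0)   # two arbitrary target values
frame = h2o.H2OFrame(np.column_stack([X, y]).tolist(),
                     column_names=["x1", "x2", "x3", "y"])

glm = H2OGeneralizedLinearEstimator(family="quasibinomial")
glm.train(x=["x1", "x2", "x3"], y="y", training_frame=frame)
print(glm.coef())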

GBM/DRF high cardinality accuracy improvements: Fixed a bug in the handling of large categorical features (cardinality > 32) that had been there since the first release of H2O-3. Certain categorical tree split decisions were incorrect, essentially sending observations down the wrong path at any such split point in the decision tree. The error was systematic and consistent between in-H2O and POJO/MOJO scoring, and led to lower training accuracy (and often, lower validation accuracy). The handling of unseen categorical levels (in training and testing) was also inconsistent: unseen levels would go left or right without any reason, whereas now they consistently follow the path of missing values. Generally, models involving high-cardinality categorical features should have improved accuracy now. This change might require re-tuning of model parameters for best results, in particular the nbins_cats parameter, which controls the number of separable categorical levels at a given split and has a large impact on the amount of memorization of per-level behavior that is possible: higher values generally (over)fit more.
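
Since nbins_cats is the key knob mentioned above, here is a hedged sketch of setting it on an H2O GBM; the dataset, column names, and binary response are placeholders:

import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init()
frame = h2o.import_file("train.csv")                  # placeholder dataset with a high-cardinality
frame["category"] = frame["category"].asfactor()      # categorical column and a binary "response"

gbm = H2OGradientBoostingEstimator(ntrees=50, nbins_cats=1024)   # try e.g. 64 vs. 1024 when re-tuning
gbm.train(x=[c for c in frame.columns if c != "response"],
          y="response", training_frame=frame)
print(gbm.auc(train=True))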

Direct Download: http://h2o-release.s3.amazonaws.com/h2o/rel-tutte/1/index.html

For details on each PUBDEV-* item, please see the release note links at the top of this article.

According to VP of Engineering Bill Gallmeister, this release consists of significant work done by his engineering team. For more information on these features and all the other improvements in H2O version 3.10.2.1, review our documentation.

Happy Holidays from the entire H2O team!!

@avkashchauhan (Avkash Chauhan)

Creating a Binary Classifier to Sort Trump vs. Clinton Tweets Using NLP

The problem: Can we determine if a tweet came from the Donald Trump Twitter account (@realDonaldTrump) or the Hillary Clinton Twitter account (@HillaryClinton) using text analysis and Natural Language Processing (NLP) alone?

The Solution: Yes! We’ll divide this tutorial into three parts, the first on how to gather the necessary data, the second on data exploration, munging, & feature engineering, and the third on building our model itself. You can find all of our code on GitHub (https://git.io/vPwxr).


Part One: Collecting the Data
Note: We are going to be using Python. For the R version of this process, the concepts translate, and we have some code on Github that might be helpful. You can find the notebook for this part as “TweetGetter.ipynb” in our GitHub repository: https://git.io/vPwxr.

We used the Twitter API to collect tweets by both presidential candidates, which would become our dataset. Twitter only lets you access the latest ~3,000 tweets from a particular handle, even though it keeps all the Tweets in its own databases.

The first step is to create an app on Twitter, which you can do by visiting https://apps.twitter.com/. After completing the form you can access your app, and your keys and tokens. Specifically we’re looking for four things: the client key and secret (called consumer key and consumer secret) and the resource owner key and secret (called access token and access token secret).


screen-shot-2016-10-12-at-1-19-02-pm
We save this information in JSON format in a separate file

Then, we can use the Python libraries Requests and Pandas to gather the tweets into a DataFrame. We only really care about three things: the author of the Tweet (Donald Trump or Hillary Clinton), the text of the Tweet, and the unique identifier of the Tweet, but we can take in as much other data as we want (for the sake of data exploration, we also included the timestamp of each Tweet).

Once we have all this information, we can output it to a .csv file for further analysis and exploration. 
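
Here is a hedged sketch of that collection step with Requests and Pandas (plus requests_oauthlib for OAuth1 signing); the credentials file, its key names, and the output path are illustrative:

import json
import pandas as pd
import requests
from requests_oauthlib import OAuth1

with open("twitter_credentials.json") as f:            # consumer key/secret + access token/secret
    creds = json.load(f)
auth = OAuth1(creds["consumer_key"], creds["consumer_secret"],
              creds["access_token"], creds["access_token_secret"])

url = "https://api.twitter.com/1.1/statuses/user_timeline.json"
rows = []
for handle in ["realDonaldTrump", "HillaryClinton"]:
    max_id = None
    while True:                                         # page backwards, ~200 tweets per request
        params = {"screen_name": handle, "count": 200}
        if max_id is not None:
            params["max_id"] = max_id
        tweets = requests.get(url, auth=auth, params=params).json()
        if not tweets:
            break
        rows += [{"handle": handle, "id": t["id"], "text": t["text"],
                  "created_at": t["created_at"]} for t in tweets]
        max_id = tweets[-1]["id"] - 1

pd.DataFrame(rows).to_csv("tweets.csv", index=False)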


Part Two: Data Cleaning and Munging
You can find the notebook for this part as “NLPAnalysis.ipynb” in our GitHub repository: https://git.io/vPwxr.

To fully take advantage of machine learning, we need to add features to this dataset. For example, we might want to take into account the punctuation that each Twitter account uses, thinking that it might help us discriminate between Trump and Clinton. If we count the punctuation symbols in each Tweet, and take the average across all Tweets, we get the following graph:

screen-shot-2016-10-14-at-2-55-54-pm

Or perhaps we care about how many hashtags or mentions each account uses:

screen-shot-2016-10-14-at-2-56-21-pm
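
A small pandas sketch of the count features charted above (the file and column names follow the collection sketch in Part One):

import string
import pandas as pd

df = pd.read_csv("tweets.csv")
punct = set(string.punctuation)

df["punctuation_count"] = df["text"].apply(lambda t: sum(ch in punct for ch in t))
df["hashtag_count"] = df["text"].str.count("#")
df["mention_count"] = df["text"].str.count("@")

# average per account, as in the bar charts
print(df.groupby("handle")[["punctuation_count", "hashtag_count", "mention_count"]].mean())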

With our timestamp data, we can examine Tweets by their Retweet count, over time:


screen-shot-2016-10-14-at-2-28-12-pm
The tall blue skyscraper was Clinton’s “Delete Your Account” Tweet

screen-shot-2016-10-14-at-2-28-03-pm

The same graph, on a logarithmic scale

We can also compare the distribution of Tweets over time. We can see that Clinton tweets more frequently than Trump (this is also evidenced by us being able to access older Tweets from Trump, since there’s a hard limit on the number of Tweets we can access).


screen-shot-2016-10-14-at-2-27-53-pm
The Democratic National Convention was in session from July 25th to the 28th

We can construct heatmaps of when these candidates were posting:


screen-shot-2016-10-14-at-2-27-42-pm
Heatmap of Trump Tweets, by day and hour

All this light analysis was useful for intuition, but our real goal is to use only the text of the tweet (including derived features) for our classification. If we included features like the timestamp, the problem would become a lot easier, but it would defeat the point of using NLP alone.

We can utilize a process called tokenization, which lets us create features from the words in our text. To understand why this is useful, let’s pretend to only care about the mentions (for example, @h2oai) in each tweet. We would expect that Donald Trump would mention certain people (@GovPenceIN) more than others and certainly different people than Hillary Clinton. Of course, there might be people both parties tweet at (maybe @POTUS). These patterns could be useful in classifying Tweets. 

Now, we can apply that same line of thinking to words. To make sure that we are only including valuable words, we can exclude stop-words which are filler words, such as ‘and’ or ‘the.’ We can also use a metric called term frequency – inverse document frequency (TF-IDF) that computes how important a word is to a document. 
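
A minimal sketch of stop-word removal plus TF-IDF with scikit-learn (the DataFrame and column names follow the earlier sketches):

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

df = pd.read_csv("tweets.csv")
vectorizer = TfidfVectorizer(stop_words="english", min_df=2)   # drop filler words and very rare tokens
X = vectorizer.fit_transform(df["text"])                       # sparse matrix: tweets x terms
print(X.shape)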

There are also other ways to use and combine NLP. One approach might be sentiment analysis, where we interpret a tweet to be positive or negative. David Robinson did this to show that Trump’s personal tweets are angrier, as opposed to those written by his staff.

Another approach might be to create word trees that represent sentence structure. Once each tweet has been represented in this format you can examine metrics such as tree length or number of nodes, which are measures of the complexity of a sentence. Maybe Trump tweets a lot of clauses, as opposed to full sentences.


Part Three: Building, Training, and Testing the Model
You can find the notebooks for this part as “Python-TF-IDF.ipynb” and “TweetsNLP.flow” in our GitHub repository: https://git.io/vPwxr.

There were a lot of approaches to take, but we decided to keep it simple for now by only using TF-IDF vectorization. The actual code writing was relatively simple thanks to the excellent Scikit-Learn package alongside NLTK.

We could have also done some further cleaning of the data, such as excluding URLs from our Tweet text (right now, strings such as “zy7vpfrsdz” get their own feature columns because the vectorizer treats them as words). Not doing this won’t hurt our model, since those URL fragments are unique, but cleaning them out would save space and time. Another strategy could be to stem words, treating each word as its root (so ‘hearing’ and ‘heard’ would both be coded as ‘hear’).

Still, our model (created using H2O Flow) produces quite a good result without those improvements. We can use a variety of metrics to confirm this, including the Area Under the Curve (AUC) of the ROC curve, which plots the True Positive Rate (TPR) against the False Positive Rate (FPR). A score of 0.5 means that the model is equivalent to flipping a coin, and a score of 1 means that the model separates the two classes perfectly.


screen-shot-2016-10-13-at-2-33-56-pm
The model curve is blue, while the red curve represents 50–50 guessing
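
Our model was built in H2O Flow, but here is an equivalent hedged sketch in scikit-learn of the whole pipeline, from TF-IDF features to the AUC on a held-out set:

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("tweets.csv")
y = (df["handle"] == "realDonaldTrump").astype(int)            # 1 = Trump, 0 = Clinton

text_train, text_test, y_train, y_test = train_test_split(
    df["text"], y, test_size=0.25, random_state=42)

vec = TfidfVectorizer(stop_words="english")
X_train = vec.fit_transform(text_train)
X_test = vec.transform(text_test)

clf = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))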

For a more intuitive judgement of our model, we can look at its variable importances (what the model considers to be good discriminators of the data) and see if they make sense:


screen-shot-2016-10-13-at-1-41-46-pm
Can you guess which words (variables) correspond (are important) to which candidate?

Maybe the next step could be to build an app that takes in text and outputs whether the text is more likely to have come from Clinton or Trump. Perhaps we could even consider the Tweets of several politicians, assign them a ‘liberal/conservative’ score, and then build a model to predict whether a Tweet is more conservative or more liberal (important features would maybe include “Benghazi” or “climate change”). Another cool application might be a deep learning model, in the footsteps of @DeepDrumpf.

If this inspired you to create analysis or build models, please let us know! We might want to highlight your project 🎉📈.

sparklyr: R interface for Apache Spark

This post is reposted from RStudio’s announcement of sparklyr – RStudio’s R interface for Apache Spark

sparklyr-illustration

  • Connect to Spark from R. The sparklyr package provides a complete dplyr backend.
  • Filter and aggregate Spark datasets then bring them into R for analysis and visualization.
  • Use Spark’s distributed machine learning library from R.
  • Create extensions that call the full Spark API and provide interfaces to Spark packages.

Installation

You can install the sparklyr package from CRAN as follows:

install.packages("sparklyr")

You should also install a local version of Spark for development purposes:

library(sparklyr)
spark_install(version = "1.6.2")

To upgrade to the latest version of sparklyr, run the following command and restart your R session:

devtools::install_github("rstudio/sparklyr")

If you use the RStudio IDE, you should also download the latest preview release of the IDE which includes several enhancements for interacting with Spark (see the RStudio IDE section below for more details).

Connecting to Spark

You can connect to both local instances of Spark as well as remote Spark clusters. Here we’ll connect to a local instance of Spark via the spark_connect function:

library(sparklyr)
sc <- spark_connect(master = "local")

The returned Spark connection (sc) provides a remote dplyr data source to the Spark cluster.

For more information on connecting to remote Spark clusters see the Deployment section of the sparklyr website.

Using dplyr

We can now use all of the available dplyr verbs against the tables within the cluster.

We’ll start by copying some datasets from R into the Spark cluster (note that you may need to install the nycflights13 and Lahman packages in order to execute this code):

install.packages(c("nycflights13", "Lahman"))
library(dplyr)
iris_tbl <- copy_to(sc, iris)
flights_tbl <- copy_to(sc, nycflights13::flights, "flights")
batting_tbl <- copy_to(sc, Lahman::Batting, "batting")
src_tbls(sc)
## [1] "batting" "flights" "iris"

To start with here’s a simple filtering example:

# filter by departure delay and print the first few records
flights_tbl %>% filter(dep_delay == 2)
## Source:   query [?? x 19]
## Database: spark connection master=local[8] app=sparklyr local=TRUE
## 
##     year month   day dep_time sched_dep_time dep_delay arr_time
##    <int> <int> <int>    <int>          <int>     <dbl>    <int>
## 1   2013     1     1      517            515         2      830
## 2   2013     1     1      542            540         2      923
## 3   2013     1     1      702            700         2     1058
## 4   2013     1     1      715            713         2      911
## 5   2013     1     1      752            750         2     1025
## 6   2013     1     1      917            915         2     1206
## 7   2013     1     1      932            930         2     1219
## 8   2013     1     1     1028           1026         2     1350
## 9   2013     1     1     1042           1040         2     1325
## 10  2013     1     1     1231           1229         2     1523
## # ... with more rows, and 12 more variables: sched_arr_time <int>,
## #   arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>,
## #   origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>,
## #   minute <dbl>, time_hour <dbl>

Introduction to dplyr provides additional dplyr examples you can try. For example, consider the last example from the tutorial which plots data on flight delays:

delay <- flights_tbl %>% 
  group_by(tailnum) %>%
  summarise(count = n(), dist = mean(distance), delay = mean(arr_delay)) %>%
  filter(count > 20, dist < 2000, !is.na(delay)) %>%
  collect

# plot delays
library(ggplot2)
ggplot(delay, aes(dist, delay)) +
  geom_point(aes(size = count), alpha = 1/2) +
  geom_smooth() +
  scale_size_area(max_size = 2)

ggplot2-flights

Window Functions

dplyr window functions are also supported, for example:

batting_tbl %>%
  select(playerID, yearID, teamID, G, AB:H) %>%
  arrange(playerID, yearID, teamID) %>%
  group_by(playerID) %>%
  filter(min_rank(desc(H)) <= 2 & H > 0)
## Source:   query [?? x 7]
## Database: spark connection master=local[8] app=sparklyr local=TRUE
## Groups: playerID
## 
##     playerID yearID teamID     G    AB     R     H
##        <chr>  <int>  <chr> <int> <int> <int> <int>
## 1  abbotpa01   2000    SEA    35     5     1     2
## 2  abbotpa01   2004    PHI    10    11     1     2
## 3  abnersh01   1992    CHA    97   208    21    58
## 4  abnersh01   1990    SDN    91   184    17    45
## 5  abreujo02   2014    CHA   145   556    80   176
## 6  acevejo01   2001    CIN    18    34     1     4
## 7  acevejo01   2004    CIN    39    43     0     2
## 8  adamsbe01   1919    PHI    78   232    14    54
## 9  adamsbe01   1918    PHI    84   227    10    40
## 10 adamsbu01   1945    SLN   140   578    98   169
## # ... with more rows

For additional documentation on using dplyr with Spark see the dplyr section of the sparklyr website.

Using SQL

It’s also possible to execute SQL queries directly against tables within a Spark cluster. The spark_connection object implements a DBI interface for Spark, so you can use dbGetQuery to execute SQL and return the result as an R data frame:

library(DBI)
iris_preview <- dbGetQuery(sc, "SELECT * FROM iris LIMIT 10")
iris_preview
##    Sepal_Length Sepal_Width Petal_Length Petal_Width Species
## 1           5.1         3.5          1.4         0.2  setosa
## 2           4.9         3.0          1.4         0.2  setosa
## 3           4.7         3.2          1.3         0.2  setosa
## 4           4.6         3.1          1.5         0.2  setosa
## 5           5.0         3.6          1.4         0.2  setosa
## 6           5.4         3.9          1.7         0.4  setosa
## 7           4.6         3.4          1.4         0.3  setosa
## 8           5.0         3.4          1.5         0.2  setosa
## 9           4.4         2.9          1.4         0.2  setosa
## 10          4.9         3.1          1.5         0.1  setosa

Machine Learning

You can orchestrate machine learning algorithms in a Spark cluster via the machine learning functions within sparklyr. These functions connect to a set of high-level APIs built on top of DataFrames that help you create and tune machine learning workflows.

Here’s an example where we use ml_linear_regression to fit a linear regression model. We’ll use the built-in mtcars dataset, and see if we can predict a car’s fuel consumption (mpg) based on its weight (wt), and the number of cylinders the engine contains (cyl). We’ll assume in each case that the relationship between mpg and each of our features is linear.

# copy mtcars into spark
mtcars_tbl <- copy_to(sc, mtcars)

# transform our data set, and then partition into 'training', 'test'
partitions <- mtcars_tbl %>%
  filter(hp >= 100) %>%
  mutate(cyl8 = cyl == 8) %>%
  sdf_partition(training = 0.5, test = 0.5, seed = 1099)

# fit a linear model to the training dataset
fit <- partitions$training %>%
  ml_linear_regression(response = "mpg", features = c("wt", "cyl"))
fit
## Call: ml_linear_regression(., response = "mpg", features = c("wt", "cyl"))
## 
## Coefficients:
## (Intercept)          wt         cyl 
##   37.066699   -2.309504   -1.639546

For linear regression models produced by Spark, we can use summary() to learn a bit more about the quality of our fit, and the statistical significance of each of our predictors.

summary(fit)
## Call: ml_linear_regression(., response = "mpg", features = c("wt", "cyl"))
## 
## Deviance Residuals::
##     Min      1Q  Median      3Q     Max 
## -2.6881 -1.0507 -0.4420  0.4757  3.3858 
## 
## Coefficients:
##             Estimate Std. Error t value  Pr(>|t|)    
## (Intercept) 37.06670    2.76494 13.4059 2.981e-07 ***
## wt          -2.30950    0.84748 -2.7252   0.02341 *  
## cyl         -1.63955    0.58635 -2.7962   0.02084 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## R-Squared: 0.8665
## Root Mean Squared Error: 1.799

Spark machine learning supports a wide array of algorithms and feature transformations and as illustrated above it’s easy to chain these functions together with dplyr pipelines. To learn more see the machine learning section.

Reading and Writing Data

You can read and write data in CSV, JSON, and Parquet formats. Data can be stored in HDFS, S3, or on the local filesystem of cluster nodes.

temp_csv <- tempfile(fileext = ".csv")
temp_parquet <- tempfile(fileext = ".parquet")
temp_json <- tempfile(fileext = ".json")

spark_write_csv(iris_tbl, temp_csv)
iris_csv_tbl <- spark_read_csv(sc, "iris_csv", temp_csv)

spark_write_parquet(iris_tbl, temp_parquet)
iris_parquet_tbl <- spark_read_parquet(sc, "iris_parquet", temp_parquet)

spark_write_json(iris_tbl, temp_json)
iris_json_tbl <- spark_read_json(sc, "iris_json", temp_json)

src_tbls(sc)
## [1] "batting"      "flights"      "iris"         "iris_csv"    
## [5] "iris_json"    "iris_parquet" "mtcars"

Extensions

The facilities used internally by sparklyr for its dplyr and machine learning interfaces are available to extension packages. Since Spark is a general purpose cluster computing system there are many potential applications for extensions (e.g. interfaces to custom machine learning pipelines, interfaces to 3rd party Spark packages, etc.).

Here’s a simple example that wraps a Spark text file line counting function with an R function:

# write a CSV 
tempfile <- tempfile(fileext = ".csv")
write.csv(nycflights13::flights, tempfile, row.names = FALSE, na = "")

# define an R interface to Spark line counting
count_lines <- function(sc, path) {
  spark_context(sc) %>% 
    invoke("textFile", path, 1L) %>% 
      invoke("count")
}

# call spark to count the lines of the CSV
count_lines(sc, tempfile)
## [1] 336777

To learn more about creating extensions see the Extensions section of the sparklyr website.

dplyr Utilities

You can cache a table into memory with:

tbl_cache(sc, "batting")

and unload from memory using:

tbl_uncache(sc, "batting")

Connection Utilities

You can view the Spark web console using the spark_web function:

spark_web(sc)

You can show the log using the spark_log function:

spark_log(sc, n = 10)
## 16/09/24 07:50:59 INFO ContextCleaner: Cleaned accumulator 224
## 16/09/24 07:50:59 INFO ContextCleaner: Cleaned accumulator 223
## 16/09/24 07:50:59 INFO ContextCleaner: Cleaned accumulator 222
## 16/09/24 07:50:59 INFO BlockManagerInfo: Removed broadcast_64_piece0 on localhost:56324 in memory (size: 20.6 KB, free: 483.0 MB)
## 16/09/24 07:50:59 INFO ContextCleaner: Cleaned accumulator 220
## 16/09/24 07:50:59 INFO Executor: Finished task 0.0 in stage 67.0 (TID 117). 2082 bytes result sent to driver
## 16/09/24 07:50:59 INFO TaskSetManager: Finished task 0.0 in stage 67.0 (TID 117) in 122 ms on localhost (1/1)
## 16/09/24 07:50:59 INFO DAGScheduler: ResultStage 67 (count at NativeMethodAccessorImpl.java:-2) finished in 0.122 s
## 16/09/24 07:50:59 INFO TaskSchedulerImpl: Removed TaskSet 67.0, whose tasks have all completed, from pool 
## 16/09/24 07:50:59 INFO DAGScheduler: Job 47 finished: count at NativeMethodAccessorImpl.java:-2, took 0.125238 s

Finally, we disconnect from Spark:

spark_disconnect(sc)

RStudio IDE

The latest RStudio Preview Release of the RStudio IDE includes integrated support for Spark and the sparklyr package, including tools for:

  • Creating and managing Spark connections
  • Browsing the tables and columns of Spark DataFrames
  • Previewing the first 1,000 rows of Spark DataFrames

Once you’ve installed the sparklyr package, you should find a new Spark pane within the IDE. This pane includes a New Connection dialog which can be used to make connections to local or remote Spark instances:

spark-connect

Once you’ve connected to Spark you’ll be able to browse the tables contained within the Spark cluster:

spark-tab

The Spark DataFrame preview uses the standard RStudio data viewer:

spark-connect

The RStudio IDE features for sparklyr are available now as part of the RStudio Preview Release.

Focus

———- Forwarded message ———
From: SriSatish Ambati
Date: Thu, Sep 15, 2016 at 10:17 PM
Subject: changes and all hands tomorrow.
To: team

Team,

Our focus has changed towards fewer, larger deals & deeper engagements with a handful of finance and insurance customers.

We took a hard look at our marketing spend, PR programs and personnel. We let go most of our amazing inside sales talent, and two of our account executives. We are not building a vertical in IoT. In all, nine business folks were affected. No further changes are anticipated or necessary.

These were heroic partners in our journey. I spoke to most all of them today to personally convey the message. Some were with me for a short time, many for years – all great humans who diligently served me and H2O well. I’m grateful for their support and partnership towards my vision. I learnt a lot from each one of them and will not hesitate to assist them in any way possible, personally.

Thank you. heroes in bcc. may you find fulfillment & love in your path. It’s a small world and we will all meet very soon.

Our goal as a startup is to get to the most optimal business unit before nurturing and scaling it for growth. We will work tirelessly to get us to that state.

thank you for your partnership, Sri


culture . code . customer
c o m m u n i t y, h2o.ai,

Introducing H2O Community & Support Portals

At H2O, we enjoy serving our customers and the community, and we take pride in making them successful while using H2O products. Today, we are very excited to announce two great platforms for our customers and for the community to better communicate with H2O. Let’s start with our community first:

Community Badge

The success of every open source project depends on a vibrant community, and having an active community helps to convert an average product into a successful product. So to maintain our commitment to our H2O community, we are releasing an updated community platform at https://community.h2o.ai. This community platform is available for everyone, whether you are new to machine intelligence or are a seasoned veteran. If you are new to machine intelligence or H2O, you have an opportunity to learn from great minds, and if you are a seasoned industry veteran, you can not only enhance your skillset, you can also help others to achieve success.

Our objective is to develop this community in a way where every community member has the opportunity to establish himself or herself as a technology leader or expert by helping others. Every moment you spend here in the community, either by creating or consuming content, will not only help you to learn more, but will also help to establish your own brand as a respected member of our machine intelligence community. Here are some highlights for our community:

  • The community content is divided into 3 main sections:
    • Questions
    • Ideas
    • Knowledge Base Articles
  • The content in the above sections is distributed among various technology groups called spaces, e.g. Algorithms, H2O, Sparkling Water, Exception, Debugging, Build, etc.

  • Every piece of content needs to be part of a specific space so that experts in that space can provide faster and better responses. A list of all spaces is here.
  • As a visitor, you are welcome to visit every section of the community and learn from posts from community members.
  • Once logged in as a community member using OpenID®, you can ask questions, write knowledge base articles, and propose ideas or feature requests for our products.
  • You are welcome to provide feedback to others’ content by liking the KB, question, or answer or simply by up-voting an idea.
  • As you spend more time here in the community, you will be given larger roles in managing and improving your own community.
  • As a logged-in member of the community, every activity adds points toward your reputation, and as you spend more time in the community, you will rank higher among your peers and establish yourself as an expert or a technology leader.
  • Please make sure you read the Guidelines before posting a question.
  • We are working towards making the site more integrated with other social platforms such as Twitter® and Facebook®, as well as adding support to other OpenID providers.

Now let me introduce our updated enterprise support portal:

Support Badge

H2O has been used by over 60K data scientists since its initial release, and now more than 7,000 organizations worldwide use H2O applications, including H2O-3 and Sparkling Water. To assist our enterprise customers, we have revamped our enterprise support portal, which is available at https://support.h2o.ai. With this new portal, we are able to provide SLA-based, 24×7 support for our enterprise customers. Please visit this page to learn about the H2O enterprise support offering. While this support portal is specially catered to assist our enterprise customers, it is also open to everyone who is using any of the free, open source H2O applications.

You can open a support incident with the H2O support team in one of two ways:

  • Through the Support Portal
    • Please visit the support portal at https://support.h2o.ai and select “NEW SUPPORT INCIDENT”.
    • You don’t need to be logged in to the support portal to open a new incident; however, it is advisable to have an account so that you can monitor the ticket’s progress.
    • You will have an opportunity to set the incident priority, i.e. Low, Medium, High, or Urgent.
  • By Email
    • Send an email to support@h2o.ai describing your problem clearly.
    • Please attach (in zipped format) any other information that could help identify the root cause.

When opening a support incident, please provide your H2O version, your Hadoop or Spark version (if applicable) and any logs, stack dump, or other information that might be helpful when troubleshooting this problem. Whether you are an H2O enterprise customer or just using one of our free, open source H2O applications, both of these venues are open for you to bring your question or comments. We are listening and are here to help.

We look forward to working with you through our community and support portals.

Avkash Chauhan

H2O Support: Customer focused and Community Driven

Interview with Carolyn Phillips, Sr. Data Scientist, Neurensic

During Open Tour Chicago we conducted a series of interviews with data scientists attending the conference. This is the second of a multipart series recapping our conversations.

Be sure to keep an eye out for updates by checking our website or following us on Twitter @h2oai.

AAEAAQAAAAAAAAeRAAAAJGZmMWZiMGE1LTVlMDgtNGQwZi05NzYyLTEwMTMxNDhmODcwMw

H2O.ai: How did you become a data scientist?

Phillips: Until very close to two months ago I was a computational scientist working at Argonne National Laboratory.

H2O.ai: Okay.

Phillips: I was working in material science, physics, mathematics, etc., but I was getting bored with that and looking for new opportunities, and I got hired by a startup in Chicago.

H2O.ai: Yes.

Phillips: When they hired me they said, “We’re hiring you, and your title is Senior Quantitative Analyst,” but the very first day I showed up, they said, “Scratch that. We’ve changed your title. Your title is now Senior Data Scientist.” And I said, “Yes, all right.” It has senior in it, so I’m okay going with that.

H2O.ai: Nice. I like it.

Phillips: So I’m a mathematician, physicist and computer scientist by training who likes to solve problems with data and algorithms, and so now I’m a data scientist.

H2O.ai: That’s impressive. I don’t know if people have really wrapped their head around what it means to be a data scientist.

Phillips: I will say that one of the reasons why I started looking around for data scientist positions is that I come from an academic research background. I have a PhD in physics and computing, and a lot of my peers who have a very, very similar background to me – we did research together, we wrote papers together – became frustrated with academic research for various reasons. Many of them said, “Well, rats. I have a skill set that’s valuable,” and they’ve become data scientists. They work at places like Airbnb, they work at consulting firms, they work at startups. Each one of us has reached that point where we’ve said, “I’m frustrated with being an academic researcher.” I saw the direction that many of my peers had gone in saying, “I have a good skill set and it is valuable, and the place right now where that is being valued is in this area called data science, and I shall go into it,” and I said, “That’s a good idea. I’ll do that too.” There you go. That’s my story.

H2O.ai: Wow, that’s really cool, yeah. I mean, I’m finding that the more people I talk to the greater number of paths towards becoming a data scientist I find. So what’s your biggest pain point as a data scientist?

Phillips: Data preparation. We want to get more data from our companies, and theoretically all this data is being generated by the same software everywhere. But different companies configure that software differently, and it’s a lot of work to make sure all the data you get is formatted in the same way.

H2O.ai: Yes, I see.

Phillips: Everything I do has to have meaning. For example, I built this beautiful algorithm and I love it, and I applied it to the data, and we found this result in the data, and we said, “What is that? Look at that. Oh, my goodness, what is that? What is that? That’s crazy. That’s terrible, you know, we have to get right on that.”

H2O.ai: Yes.

Phillips: And I thought, well, before we get too excited, let me dig down to the original raw data that generated this. Dig, dig, dig, dig, dig. Oh, we assumed that data would always come in this format, and this data came in that format, and at the end of the day it looked like something it wasn’t, so I feel like that’s actually the big challenge.

H2O.ai: Oh, very interesting. Do you have methods of making your data more uniform?

Phillips: Well, I’m not responsible for that directly, but no. Every time we get in a new source of data it’s going to be this painful process of normalizing it so that it looks as much as possible like the other sources of data.

H2O.ai: Thank you so much, Carolyn. That was really helpful information. It was a pleasure meeting you.

Phillips: You too.

Interview with Svetlana Kharlamova, ­Sr. Data Scientist, Grainger

During Open Tour Chicago we conducted a series of interviews with data scientists attending the conference. This is the first of a multipart series recapping our conversations.

Be sure to keep an eye out for updates by checking our website or following us on Twitter @h2oai.

Svetlana Kharlamova

H2O.ai: How did you become a data scientist?

Kharlamova: I’m a physicist.

H2O.ai: Okay.

Kharlamova: I came here from the academia of physics. I worked for seven years in academia for physics and math, and four years ago I switched to finance to be more of a math person than a physics person.

H2O.ai: I see.

Kharlamova: And from finance I came to the data industry. At that time data science was booming.

H2O.ai: Oh, okay.

Kharlamova: And I got excited with all the new stuff and technologies coming up, and here I am.

H2O.ai: Okay, nice. So what business do you work for now?

Kharlamova: I work for Grainger. We’re focused on equipment distribution; serving as a connector between manufacturing plants, factories and consumers.

H2O.ai: So what are some of the problems that you guys are looking to solve?

Kharlamova: Building recommendation engines for customers. For that you need to leverage natural language processing and positive logic.

H2O.ai: What resources do you use to stay on top of the information in the data science world? Are there blogs that you read or like, or places that you go?

Kharlamova: Staff communities and data science communities are important sources of information.

H2O.ai: Yes. That’s great. And is there any advice that you would have for someone who’s an up and coming data scientist, or someone who’s just generally interested in the field?

Kharlamova: Advice to somebody who’s generally interested in the field?

H2O.ai: Yes, about becoming a data scientist.

Kharlamova: It’s a difficult question, because if a person takes a one year course on Coursera or somewhere else on data science, it doesn’t mean that they’re a data scientist yet, because you need to see the problem in the big picture.

H2O.ai: Yes.

Kharlamova: You need to be able to identify the challenges, the problem and various solutions. You cannot explore everything. You need to narrow down your choice.

H2O.ai: Yes, okay.

Kharlamova: You also need to have substantial knowledge of mathematics, statistics and computer science. But understand that you don’t need to immediately start using a sophisticated random forest model. Maybe you can just use simple algebra. Maybe it’s a question of two plus two.

H2O.ai: Right.

Kharlamova: And then you don’t need all these assumptions and approximations. Because I’m a physicist, I like a defined correct answer much more than something fuzzy. To be successful as a data scientist you need to decide how best to approach a problem then find a solution that’s as simple as possible.

H2O.ai: Okay. I see. That’s great advice. So it’s not just about having the knowledge, but it’s also about having an approach that is, like you said, simple, that you can probably use more often to provide a clear answer. That’s great, great advice.

H2O Day at Capital One

Here at H2O.ai one of our most important partners is Capital One, and we’re proud to have been working with them for over a year. One of the world’s leading financial services providers, Capital One has a strong reputation for being an extremely data- and technology-focused organization. That’s why, when the Capital One team invited us to their offices in McLean, Virginia for a full day of H2O talks and demos, we were delighted to accept. Many key members of Capital One’s technology team were among the 500+ attendees at the event, including Jeff Chapman, MVP of Shared Technology; Hiren Hiranandani, Lead Software Engineer; Mike Fulkerson, VP of Software Engineering; and Adam Wenchel, VP of Data Engineering.

A major theme throughout the day was “vertical is the new horizontal,” an idea presented by our CEO Sri Ambati about how every company is becoming a technology company. Sri pointed out that software is becoming increasingly ubiquitous at organizations at the same time that code is becoming a commodity. Today, the only assets that companies can defend are their community and brand. Airbnb is more valuable than most hospitality companies, despite owning no property, and Uber is more valuable than most transportation companies, despite owning no vehicles. And if “software is eating the world,” then artificial intelligence (AI) is eating software, as traditional rules-based models no longer cut it in today’s rapidly changing world.

Our partnership started about a year ago, when we met in California and learned about the value proposition of H2O. To be honest, I think we were all floored by what we saw. – Jeff Chapman

This was obviously an important message for attendees at Capital One, who were looking to learn more about AI and machine learning. Of particular interest was how machine learning and AI can help with use cases such as personalization and fraud detection and how the technology can drive future data-driven decision making. Attendees also had a chance to share their experiences using H2O to analyze and score models with their colleagues across business units. The event fit perfectly into H2O.ai’s vision of a grassroots community that encourages cooperation and the sharing of information. We look forward to continuing to work with Capital One, and all of our partners, to promote the democratization of data science and the growth of open source communities.

Visit us online to find a local event where you can meet with the makers of H2O in-person. Please also don’t forget to see the video of our time at Capital One here!