R has traditionally been very slow at reading and writing CSV files of, say, 1 million rows or more. Getting data into R is often a user's first task, and if the experience is poor (hard to use, or very slow) they are less likely to progress. The data.table package solved CSV import convenience and speed in 2013 by implementing data.table::fread() in C. The examples at that time went from 30 seconds down to 3 seconds to read 50MB, and from over 1 hour down to 8 minutes to read 20GB. In 2015 Hadley Wickham implemented readr::read_csv() in C++.
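As a quick illustration (a sketch, not a benchmark — timings depend on hardware and thread count, and the synthetic file here is deliberately small so it runs quickly), fread() can be compared against base read.csv() like this:

```r
library(data.table)

# Write a small synthetic CSV to compare the two readers on.
tmp <- tempfile(fileext = ".csv")
DT  <- data.table(id  = 1:100000,
                  x   = rnorm(100000),
                  grp = sample(letters, 100000, replace = TRUE))
write.csv(DT, tmp, row.names = FALSE)

system.time(base_df <- read.csv(tmp))  # base R parser
system.time(fast_dt <- fread(tmp))     # data.table's C parser
```

On real multi-gigabyte files the gap is far larger than on this toy example, because fread memory-maps the file and detects column types from a sample rather than scanning everything twice.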
But what about writing csv files?
It is often the case that the end goal of work in R is a report, a visualization or a summary of some form — not anything large. So we don't really need to export large data, right? It turns out that the combination of speed, expressiveness and advanced abilities of R (particularly data.table) means it can be faster (so I'm told) to export the data from another environment (e.g. a database), manipulate it in R, and export it back to the database than it is keeping the data…
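For the export side, a minimal sketch of data.table's CSV writer, fwrite(), against base write.csv() looks like the following (again illustrative rather than a benchmark; fwrite's advantage grows with file size and available threads):

```r
library(data.table)

# One million rows is roughly where base write.csv starts to feel slow.
DT <- data.table(id = 1:1000000, value = runif(1000000))

out_base <- tempfile(fileext = ".csv")
out_fast <- tempfile(fileext = ".csv")

system.time(write.csv(DT, out_base, row.names = FALSE))  # base R writer
system.time(fwrite(DT, out_fast))                        # multi-threaded C writer
```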
This is a guest post re-published with permission from our friends at Datapipe. The original lives here.
One of the advantages of public cloud is the ability to experiment and run various workloads without committing to purchasing hardware. However, to meet your data processing needs, a well-defined mapping between your objectives and the cloud vendor's offerings is a must. In collaboration with Denis Perevalov (Milliman), we'd like to share some details around one of the most recent – and largest – big-data projects we've worked on: a project with our client, Milliman, to build a machine-learning platform on Amazon Web Services.
Before we get into the details, let's introduce Datapipe's data and analytics consulting team. Its goal is to help customers with their data processing needs. Many of our engagements are data engineering efforts, where we help customers build data processing pipelines. In addition, our data science consultants help clients gain better insight into their existing datasets.
When we first started working…
Sparkling Water offers best-of-breed machine learning for Spark users, bringing all of H2O's advanced algorithms and capabilities to Spark. This means you can continue to use H2O from RStudio or any other IDE of your choice. This post will walk you through the steps to get H2O running from plain R or RStudio against Spark.
It works the same way as regular H2O: you just need to call h2o.init() from R with the right parameters, i.e. the IP and port of the H2O cluster.
For example, we start the Sparkling Shell (bin/sparkling-shell) and create an H2OContext:
Now the H2OContext is running and H2O's REST API is exposed on 172.162.223:54321.
So we can open RStudio and call h2o.init() (make sure you have the right R H2O package installed):
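From RStudio, the connection might look like the following (a sketch: the IP and port must match whatever your H2OContext printed, and the version of the h2o R package must match the H2O backend bundled in Sparkling Water):

```r
library(h2o)

# Attach to the cluster started by Sparkling Water rather than
# launching a new local H2O instance (hence startH2O = FALSE).
h2o.init(ip = "172.162.223", port = 54321, startH2O = FALSE)

# Confirm we are connected to the Sparkling Water cluster.
h2o.clusterInfo()
```

No test is included here since the snippet requires a live Sparkling Water cluster at that address.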
Let's now create a Spark DataFrame, publish it as an H2O frame and access it from R:
This is how you achieve that in sparkling-shell:
val df =...
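Once the Scala side has published the DataFrame into the cluster, the frame can be fetched from R by its key (a sketch: the key "df" is an assumption and must match whatever name the H2OContext assigned on the Scala side):

```r
library(h2o)
h2o.init(ip = "172.162.223", port = 54321, startH2O = FALSE)

# Fetch the frame the Scala side published into H2O's key-value store.
df_hex <- h2o.getFrame("df")

head(df_hex)
summary(df_hex)
```

From here df_hex behaves like any other H2OFrame, so it can be fed directly into H2O's R modeling functions. (No test included; this requires the running cluster from the previous step.)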
It's about to rain data in San Jose when Strata + Hadoop World comes to town, March 29–31.
H2O has a waterfall of action happening at the show. Here’s a rundown of what’s on tap.
Keep it handy so you have less chance of FOMO (fear of missing out).
Hang out with H2O at Booth #1225 to learn more about how machine learning can help transform your business and find us throughout the conference:
Tuesday, March 29th
Wednesday, March 30th
- 12:45pm – 1:15pm Meet the Makers: The brains and innovation behind the leading machine learning solution is on hand to hack with you
- #AskArno – Arno Candel, Chief Architect and H2O algorithm expert
- #RuReady with Matt Dowle, H2O Hacker and author of R…
Cliff resigned from the Company last week. He is parting on good terms and supports our success in the future. Cliff and I have worked closely since 2004, so this is a loss for me. It ends an era of prolific work supporting my vision as a partner.
Let's take this opportunity to congratulate Cliff on his work in helping me build something from nothing. Millions of little things we did together got us this far. (I still remember the U-Haul trip with the earliest furniture in the old building, and Cliff cranking out code furiously, running on Taco Bell & Fiesta Del Mar.) A lot of how I built the Company has to do with maximizing my partnership with Cliff. Lots of wins came out of that, and we'll cherish them. Like all good things, it came to an end. I only wish him the very best in the future.
Over the past four years, Cliff and the rest of you have helped me build an amazing technology, business, customer and investor team. Your creativity, passion, loyalty, spirited work, grit & determination are the pillars of support and wellspring of life for the Company. I’ll look for strong partners in each one of…
Now that we're a few months out from H2O World, we wanted to share which talks were the most popular by online viewership. The talks covered a variety of topics, from introductions to in-depth examinations of use cases to wide-ranging panels.
Introduction to Data Science
Featuring Erin LeDell, Statistician and Machine Learning Scientist, H2O.ai
An introductory talk for people new to the field of data science.
Intro to R, Python, Flow
Featuring Amy Wang, Math Hacker, H2O.ai
A hands-on demonstration of how to run H2O in R and Python and an introduction to the Flow GUI.
Machine Learning at Comcast
Featuring Andrew Leamon, Director of Engineering Analysis, Comcast and Chushi Ren, Software Engineer, Comcast
An inside look at how Comcast leverages machine learning across its business units.
Migrating from Proprietary Analytics Stacks to Open Source H2O
Featuring Fonda Ingram, Technical Manager, H2O.ai
A ten-year SAS veteran explains how to migrate from proprietary software to an open source environment.
Top 10 Data Science Pitfalls
Featuring Mark Landry, Product Manager, H2O.ai
A Kaggle champion offers an overview of ten top pitfalls to avoid when…
This tutorial introduces the Generalized Low Rank Model (GLRM), a new machine learning approach for reconstructing missing values and identifying important features in heterogeneous data. It demonstrates how to build a GLRM in H2O that condenses categorical information into a numeric representation, which can then be used in other modeling frameworks such as deep learning.
What is a Low Rank Model?
Across business and research, analysts seek to understand large tables of data with numeric and categorical values. Many entries in such a table may be noisy or even missing altogether. Low rank models facilitate the understanding of tabular data by producing a condensed vector representation for every row and column in the data set.
Specifically, given a data table A with m rows and n columns, a GLRM consists of a decomposition of A into numeric matrices X and Y. The matrix X has the same number of rows as A, but only a small, user-specified number of columns k. The matrix Y has k rows and d columns, where d is equal to the total dimension of the embedded features in A. For example, if A has 4 numeric columns and 1 categorical column with 3 distinct…
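In H2O's R API, fitting such a decomposition might look like this (a sketch assuming a running H2O cluster; the choice of iris — 4 numeric columns plus 1 categorical — and of k = 3 is purely illustrative):

```r
library(h2o)
h2o.init()

# Mixed numeric/categorical data: iris has 4 numeric columns and 1 factor.
iris_hex <- as.h2o(iris)

# Decompose the 150 x 5 table A into X (150 x k) and Y (k x d),
# where d expands the Species factor into its indicator columns.
glrm_model <- h2o.glrm(training_frame = iris_hex, k = 3,
                       loss = "Quadratic", max_iterations = 100)

# X: the condensed per-row representation, usable as numeric
# features in downstream models such as deep learning.
X <- h2o.getFrame(glrm_model@model$representation_name)
dim(X)  # 150 rows, k = 3 columns
```

No test is included since the snippet needs a Java-backed H2O cluster to run.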
*This blog post was first posted on the Databricks blog here.*
Databricks provides a cloud-based integrated workspace on top of Apache Spark for developers and data scientists. H2O.ai has been an early adopter of Apache Spark and has developed Sparkling Water to seamlessly integrate H2O.ai’s machine learning library on top of Spark.
In this blog, we will demonstrate an integration between the Databricks platform and H2O.ai's Sparkling Water that provides Databricks users with an additional set of machine learning libraries. The integration makes it easy for data scientists to utilize Sparkling Water with Spark in a notebook environment, seamlessly combining Spark with H2O to get the best of both worlds.
Let’s begin by preparing a Databricks environment to develop our spam predictor:
The first step is to log into your Databricks account and create a new library containing Sparkling Water. You can use the Maven coordinates of the Sparkling Water package, for example:
ai.h2o:sparkling-water-examples_2.10:1.5.6 (this version works with Spark 1.5)
The next step is to create a new cluster to run the example:
Data Science is like Rome: all roads lead to it. H2O WORLD is the crossroads, pulling in a confluence of math, statistics, science and computer science, and incorporating all avenues of business. From academic, research-oriented models to business and computer-science implementations of those ideas, H2O WORLD shows attendees how H2O can help users and customers explore their data and produce a prediction or answer a question.
I came to H2O World hoping to gain a better understanding of H2O's software and of Data Science in general. I thoroughly enjoyed attending the sessions, following along with the demos and playing with H2O myself. Learning from the hackers and Data Scientists about the algorithms and science behind H2O, and seeing the community spirit at the Hackathons, was enlightening. Listening to the keynote speakers, both women, describe our data-influenced future, and hearing customers' points of view on how H2O has impacted their work, was inspirational. I especially appreciated learning about the potential influence on scientific and medical research and social issues, and H2O's ability to influence positive change.
Curiosity led me to delve into the world of Data Science and as…