Lending Club : Predict Bad Loans to Minimize Loss to Defaulted Accounts

As a sales engineer on the H2O.ai team, I get asked a lot about the value add of H2O: how do you put a price tag on something that is open source? The answer typically revolves around the use case. If a use case pertains to improving user experience or building apps that streamline internal operations, there is no straightforward way to attach a dollar figure to a better experience. However, if the use case is focused on detecting fraud or maintaining enough supply for the next sales quarter, we can calculate the total amount of money saved by catching costly fraudulent cases, or the sales lost to incorrectly forecasted demand.

The H2O team has built a number of user-facing demonstrations, from our Ask Craig app to predicting flight delays, which are available in R, Sparkling Water, and Python. Today, we will use Lending Club's open data to obtain the probability of a loan request defaulting or being charged off. We will build an H2O model, calculate the dollar amount saved by using it to reject these loan requests (not including the opportunity cost), and then combine this with the profit lost by rejecting good loans to determine the net amount saved.

Summary of Approved Loan Applicants

The dataset has roughly half a million records spanning 2007 to 2015, which means that with H2O the data can be processed on your personal computer, using an H2O instance with at least 2GB of memory. The first step is to import the data and create a new column that labels each loan as either a good loan or a bad loan (the borrower has defaulted or the account has been charged off). The following is a code snippet in R:

    print("Start up H2O...")
    library(h2o)
    conn <- h2o.init(nthreads = -1)

    print("Import approved and rejected loan requests from Lending Club...")
    path   <- "/Users/amy/Desktop/lending_club/loanStats/"
    loanStats <- h2o.importFile(path = path, destination_frame = "LoanStats")

    print("Create bad loan label, this will include charged off, defaulted, and late repayments on loans...")
    loanStats$bad_loan <- ifelse(loanStats$loan_status == "Charged Off" |
                                 loanStats$loan_status == "Default" |
                                 loanStats$loan_status == "Does not meet the credit policy.  Status:Charged Off",
                                 1, 0)
    loanStats$bad_loan <- as.factor(loanStats$bad_loan)

    print("Create the applicant's risk score, if the credit score is 0 make it an NA...")
    loanStats$risk_score <- ifelse(loanStats$last_fico_range_low == 0, NA,
                               (loanStats$last_fico_range_high + loanStats$last_fico_range_low)/2)

Credit Score Summaries

In H2O Flow, you can grab the distribution of credit scores for good loans vs. bad loans. It is easy to see that holders of bad loans typically have the lowest credit scores, which will be the biggest driving force in predicting whether a loan is good or bad. However, we want a model that takes other features into account as well, so that loans aren't automatically cut off at a single credit-score threshold.

[Figure: credit score distributions for good vs. bad loans]
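The same summary can be pulled from R instead of Flow. A minimal sketch, assuming the loanStats frame and the bad_loan and risk_score columns created above:

```r
    print("Compare credit score distributions for good vs. bad loans...")
    good_scores <- loanStats[loanStats$bad_loan == "0", "risk_score"]
    bad_scores  <- loanStats[loanStats$bad_loan == "1", "risk_score"]

    # h2o.hist computes the histogram in the H2O cluster and plots it in R
    h2o.hist(good_scores, breaks = 50)
    h2o.hist(bad_scores,  breaks = 50)
```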

Modeling

Pending review: this section will be updated soon.
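In the meantime, here is a minimal sketch of what the modeling step could look like; the predictor list and GBM settings below are illustrative assumptions, not the final published model:

```r
    print("Split the data and train a GBM to predict bad loans...")
    # The predictor columns below are assumptions for this sketch
    predictors <- c("loan_amnt", "term", "int_rate", "annual_inc", "dti", "risk_score")

    splits <- h2o.splitFrame(loanStats, ratios = 0.75, seed = 1234)
    model  <- h2o.gbm(x = predictors, y = "bad_loan",
                      training_frame   = splits[[1]],
                      validation_frame = splits[[2]],
                      ntrees = 100, max_depth = 5, seed = 1234)

    # Validation AUC as a quick check of how well the model ranks risky loans
    h2o.auc(model, valid = TRUE)
```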

Introduction to Data Science using H2O – Chicago

Thank you, Chicago, for the great meetup on 29 July 2015. Slides have been posted on GitHub; the links to the sample scripts and data are contained in the slides. If you have any further questions about H2O, please join our Google Group or chat with us on Gitter.

The slides are also available on the H2O SlideShare.

Also, thank you to Serendipity Labs, a great space in a great location!

Enjoy H2O and let us know about your data science / machine learning journey!

-Hank
@hankroark

useR! Aalborg 2015 conference

The H2O team spent most of the useR! Aalborg 2015 conference at the booth, giving demos and discussing H2O. Amy had a 16-node EC2 cluster running with 8 cores per node, for a total of 128 CPUs. The demo consisted of loading large files in parallel and then running our distributed machine-learning algorithms in parallel.

At an R conference, most people wanted to script H2O from R, which is of course built in (as is Python), but we also conveyed the benefits that our user interface, Flow, can provide in this space (even for programmers) by automating and accelerating common tasks. We enjoyed discussing future directions with the attendees and bouncing ideas off them. There is nothing like seeing people's first reaction to the product, live and in person! As an open-source platform, H2O thrives on suggestions and contributions from our community.

All components of H2O are developed in the open on GitHub.

Continue reading useR! Aalborg 2015 conference

KFold Cross Validation With H2O-3 and R

This blog post also explains the solution to a Google Stream question we received.

Note: KFold Cross Validation will be added to H2O-3 as an argument soon

This is a terse guide to building KFold cross-validated models with H2O using the R interface. There's not much R code needed to get up and running, but it's by no means a one-magic-button method either. This guide is intended for the more "rustic" data scientist who likes to get their hands a bit dirty and build out their own tools.
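As a taste of the approach, a rough sketch of manual K-fold with the R interface; the dataset path, the response column y, and the GLM family here are all hypothetical placeholders:

```r
    k  <- 5
    df <- h2o.importFile("path/to/data.csv")       # hypothetical dataset

    # Assign each row a fold id in R, then push the ids into the cluster
    folds <- as.h2o(data.frame(fold = sample(rep(1:k, length.out = nrow(df)))))
    df    <- h2o.cbind(df, folds)

    # Train on k-1 folds, score the held-out fold, repeat k times
    for (i in 1:k) {
      train <- df[df$fold != i, ]
      valid <- df[df$fold == i, ]
      m <- h2o.glm(x = setdiff(names(df), c("y", "fold")), y = "y",
                   training_frame = train, validation_frame = valid,
                   family = "binomial")
      print(h2o.auc(m, valid = TRUE))              # held-out fold performance
    }
```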

Continue reading KFold Cross Validation With H2O-3 and R

‘Ask Craig’- Determining Craigslist Job Categories with Sparkling Water, Part 2

This is the second post in a two-part series. The first post covers building the models used in this application.

The presentation on this application can be downloaded and viewed at Slideshare

In the last blog post we learned how to build a set of H2O and Spark models to predict categories for jobs posted on Craigslist using Sparkling Water.

This blog post will show how to use the models to build a Spark streaming application which scores posted job titles on the fly.

Continue reading ‘Ask Craig’- Determining Craigslist Job Categories with Sparkling Water, Part 2

Sparkling Water Tutorials Updated

This is an updated version of the Sparkling Water tutorials originally published by Amy Wang here.

For the newest examples and updates, please visit the Sparkling Water GitHub page.

The blog post introduces 3 tutorials:

Continue reading Sparkling Water Tutorials Updated

‘Ask Craig’- Determining Craigslist Job Categories with Sparkling Water

This is the first post in a two-part series. The second post turns these models into a Spark streaming application.

The presentation on this application can be downloaded and viewed at Slideshare

One question we often get asked at Meetups or conferences is: “How are you guys different than other open-source machine-learning toolkits? Notably: Spark’s MLlib?” The answer to this question is not “black and white” but actually a shade of “gray”. The best way to showcase the power of Spark’s MLlib library and H2O.ai’s distributed algorithms is to build an app that utilizes both of their strengths in harmony, going end-to-end from data-munging and model building through deployment and scoring on real-time data using Spark Streaming. Enough chit-chat, let’s make it happen!

Continue reading ‘Ask Craig’- Determining Craigslist Job Categories with Sparkling Water

Scaling R with H2O

With the advent of H2O 3.0, it seems an appropriate time to reintroduce the R API for H2O and help users better understand the differences between R data.frames and H2OFrames. Typically, some of the first questions we get include:

  • Does H2O support all R packages and functions?
  • Is H2OFrame an extension of data.frame?
  • Are H2O supported algorithms written on top of preexisting packages in R like glmnet?
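The short answer to all three is no: an H2OFrame is a reference to data living in the H2O cluster, not an extension of data.frame, so packages like glmnet never see the data unless you pull it back into R. A minimal sketch of crossing that boundary (column names are illustrative):

```r
    library(h2o)
    h2o.init()

    # R data.frame -> H2OFrame: the data moves into the H2O cluster
    df <- data.frame(x = rnorm(100), y = rnorm(100))
    hf <- as.h2o(df)
    class(hf)             # an H2OFrame, not a data.frame

    # Operations on hf run in the cluster, not in R
    h2o.mean(hf$x)

    # H2OFrame -> data.frame: pulls the data back into R memory
    df2 <- as.data.frame(hf)
```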

Continue reading Scaling R with H2O

Using H2O for Kaggle: Guest Post by Gaston Besanson and Tim Kreienkamp

This post also appears on the GSE Data Science Blog

In this special H2O guest blog post, Gaston Besanson and Tim Kreienkamp talk about their experience using H2O for competitive data science. They are both students in the new Master of Data Science Program at the Barcelona Graduate School of Economics and used H2O in an in-class Kaggle competition for their Machine Learning class. Gaston’s team came in second, scoring 0.92838 in overall accuracy, slightly surpassed by Tim’s team with 0.92964, on a subset of the famous “Forest Cover” dataset.

Continue reading Using H2O for Kaggle: Guest Post by Gaston Besanson and Tim Kreienkamp