Deep learning algorithms have benefited significantly from the recent performance gains of GPUs. However, it has been uncertain whether GPUs can speed up powerful classical machine learning algorithms such as generalized linear modeling, random forests, gradient boosting machines, clustering, and singular value decomposition.
Today I’d love to share another interesting presentation from #H2OWorld focused on H2O4GPU.
H2O4GPU is a GPU-optimized machine learning library with a Python scikit-learn API, tailored for enterprise AI. It includes all the CPU algorithms from scikit-learn and adds selected algorithms that benefit greatly from GPU acceleration.
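To make the scikit-learn compatibility concrete, here is a minimal sketch of the familiar fit/predict workflow, shown with scikit-learn itself. With H2O4GPU installed, the intended drop-in pattern is to swap the import (e.g. `import h2o4gpu` and use `h2o4gpu.KMeans`); the exact class names available on a given install are an assumption here, so check the README for your version.

```python
# Sketch of the scikit-learn-style fit/predict workflow that H2O4GPU mirrors.
# With H2O4GPU, the analogous call would be h2o4gpu.KMeans(...) -- an
# assumption based on its advertised scikit-learn API, not verified here.
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated groups of points.
X = np.array([[1.0, 1.0], [1.2, 0.8],
              [8.0, 8.0], [8.2, 7.9]])

model = KMeans(n_clusters=2, n_init=10, random_state=0)
model.fit(X)
labels = model.predict(X)

# Each tight group should land in its own cluster.
assert labels[0] == labels[1]
assert labels[2] == labels[3]
assert labels[0] != labels[2]
```

Because the API surface matches, code written this way can move between CPU (scikit-learn) and GPU (H2O4GPU) backends with minimal changes.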
In the video below, Jon McKinney, Director of Research at H2O.ai, discusses the GPU-optimized machine learning algorithms in H2O4GPU and benchmarks their speed against scikit-learn running on CPUs.
We’re always receiving helpful feedback from the community and making updates.
Exciting updates to expect in Q1 2018 include:
– Kalman filters
– K-nearest neighbors
If you’d like to learn more about H2O4GPU, I invite you to explore these helpful links:
– H2O4GPU Readme: https://github.com/h2oai/h2o4gpu/blob/master/README.md
– Open Source License (Apache V2): https://github.com/h2oai/h2o4gpu/blob/master/LICENSE