Patients, physicians, nurses, health administrators, and policymakers are all beneficiaries of the rapid transformations in health and life sciences. These transformations are being driven by new discoveries (etiology, therapies, and drugs/implants), market reconfiguration and consolidation, the movement to value-based care, and access/affordability considerations. The people and systems driving these changes are generating new engagement models, workflows, data, and, most importantly, new needs for all participants in the care continuum.
Analytics 1.0 for healthcare (driven by business intelligence and reporting), as we describe in our book, is inadequate to address these transformations. A retrospective understanding of "what happened?" is limited in its usefulness, as it provides only for corrective action, usually driven by resource availability. To improve wellness, care outcomes, clinician satisfaction, and patient quality of life, we ought to be leveraging little and big data via Analytics 2.0 and 3.0. This journey will require machine/deep learning and other AI methods to separate signal from noise, integrate insights into workflows, address data fidelity, and develop contextually intelligent agents.
Automating machine learning and deep learning simplifies access to these advanced technologies for the Humans of Healthcare. They are key prerequisites for creating a data-driven, learning healthcare organization. The net results: better science, improved access and affordability, and evidence-based wellness and care.
Among the many participants in the care continuum, physicians are at the forefront of the coming health sciences revolution. Join our all-physician panel at the H2O offices in Mountain View, CA to hear their insights and interact with them. Our panel consists of three physician leaders who are driving clinical innovation with AI in their specialties and organizations:
Dr. Baber Ghauri, Physician Executive and Healthcare Innovator, Trinity Health
Dr. Esther Yu, Professor & Neuroradiologist, UCSF
Dr. Pratik Mukherjee, Professor, and Director of CIND, San Francisco VA
Moderator: Prashant Natarajan, Sr. Dir. AI Apps at H2O.ai and best-selling author/contributor to books on medical informatics & analytics
Your intelligence, support and love have been the strength behind an incredible year of growth, product innovation, partnerships, investments and customer wins for H2O and AI in 2017. Thank you for answering our rallying call to democratize AI with our maker culture.
Our mission to make AI ubiquitous is still fresh as dawn and our creativity new as spring. We are only getting started, learning, rising from each fall. H2O and Driverless AI are just the beginnings.
As we look into 2018, we see prolific innovation to make AI accessible to everyone. Simplicity that opens scale. Our focus is on making experiments faster, easier, and cheaper. We are so happy that you are at the center of our journey. We look forward to delivering many more magical customer experiences.
On behalf of the team and management at H2O, I wish you all a wonderful holiday: deep, meaningful time with yourself and your loved ones, and a refreshed return for a winning 2018!
Gratitude for your partnership in our beautiful journey: it's just begun!
To get started, request an AWS EC2 instance with GPU support. We used a single g2.2xlarge instance running Ubuntu 14.04. To set up TensorFlow with GPU support, the following software should be installed:
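Requesting the instance can also be scripted; a minimal sketch using the AWS CLI, where the AMI ID, key pair name, and security group are placeholders you must replace with your own:

```shell
# Placeholder values: substitute your own Ubuntu 14.04 AMI ID, key pair, and security group.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type g2.2xlarge \
  --key-name my-key \
  --security-groups my-sg \
  --count 1
```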
#To install Java, follow the steps below; type 'Y' at the installation prompt:
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
#Update JAVA_HOME in ~/.bashrc:
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
#Add JAVA_HOME to PATH:
export PATH=$PATH:$JAVA_HOME/bin
# Execute the following command to update the current session:
source ~/.bashrc
#Verify version and path:
java -version
echo $JAVA_HOME
#An AWS EC2 instance has Python installed by default. Verify that Python 2.7 is already installed:
python --version
sudo apt-get install python-pip
#Install IPython notebook
sudo pip install "ipython[notebook]"
#To run the H2O example notebooks, execute the following commands:
sudo pip install requests
sudo pip install tabulate
#Execute the following command to install unzip:
sudo apt-get install unzip
#To install Scala, follow the steps below; type 'Y' at the installation prompt:
sudo apt-get install scala
#Update SCALA_HOME in ~/.bashrc (adjust the path to your installation; /usr/share/scala is a common default) and execute the following command to update the current session:
export SCALA_HOME=/usr/share/scala
source ~/.bashrc
#Verify version and path:
scala -version
echo $SCALA_HOME
#Java and Scala should be installed before installing Spark.
#Get the Spark binary used in this walkthrough (Spark 1.6.1 pre-built for Hadoop 2.6):
wget https://archive.apache.org/dist/spark/spark-1.6.1/spark-1.6.1-bin-hadoop2.6.tgz
#Extract the file:
tar xvzf spark-1.6.1-bin-hadoop2.6.tgz
#Update SPARK_HOME in ~/.bashrc and execute the following command to update the current session:
export SPARK_HOME=~/spark-1.6.1-bin-hadoop2.6
source ~/.bashrc
#Add SPARK_HOME to PATH:
export PATH=$PATH:$SPARK_HOME/bin
#Verify the variables:
echo $SPARK_HOME
echo $PATH
#The latest Spark release pre-built for Hadoop should be installed, with SPARK_HOME pointing to it.
#To launch a local Spark cluster with 3 worker nodes, 2 cores, and 1 GB of memory per node, export the MASTER variable:
export MASTER="local-cluster[3,2,1024]"
#Download and run Sparkling Water
bin/sparkling-shell --conf "spark.executor.memory=1g"
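The download step is noted above but its commands are elided. As a sketch, assuming a Sparkling Water 1.6.x release (the branch matching Spark 1.6.1) fetched from the H2O release site — the exact build number below is an assumption, so verify the current link on the H2O download page before running:

```shell
# Assumed release URL and version; confirm the current Sparkling Water 1.6.x link at http://www.h2o.ai/download
wget http://h2o-release.s3.amazonaws.com/sparkling-water/rel-1.6/1/sparkling-water-1.6.1.zip
unzip sparkling-water-1.6.1.zip
cd sparkling-water-1.6.1
# Launch the Sparkling Water shell against the local cluster defined by MASTER:
bin/sparkling-shell --conf "spark.executor.memory=1g"
```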
#In order to build or run TensorFlow with GPU support, both NVIDIA’s Cuda Toolkit (>= 7.0) and cuDNN (>= v2) need to be installed.
#To install the CUDA toolkit, download the repository package from NVIDIA and run:
wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1410/x86_64/cuda-repo-ubuntu1410_7.0-28_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1410_7.0-28_amd64.deb
sudo apt-get update
sudo apt-get install cuda
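A quick sanity check after the install (assuming the NVIDIA driver loaded correctly; a reboot may be needed first):

```shell
# The driver should list the GRID K520, and nvcc should report toolkit release 7.0:
nvidia-smi
/usr/local/cuda/bin/nvcc --version
```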
#To install cuDNN, download the file cudnn-7.0-linux-x64-v4.0-prod.tgz after registering and completing the NVIDIA questionnaire.
#You need to transfer it to your EC2 instance’s home directory.
tar -zxf cudnn-7.0-linux-x64-v4.0-prod.tgz
sudo cp -R cuda/lib64/* /usr/local/cuda/lib64/
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
#Reboot the system:
sudo reboot
#Update environment variables as shown below:
export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64
export PATH=$PATH:$CUDA_HOME/bin
#Since we want to open the IPython notebook remotely, we use the IP and port options. To start the TensorFlow notebook:
IPYTHON_OPTS="notebook --no-browser --ip='*' --port=54321" bin/pysparkling
#Note that the port specified in the above command must be open to inbound traffic on the instance (e.g., in its security group).
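Opening the port can be done from the EC2 console or, as a sketch, with the AWS CLI; the security group name below is a placeholder:

```shell
# Placeholder group name; in practice, restrict --cidr to your own IP range rather than 0.0.0.0/0.
aws ec2 authorize-security-group-ingress \
  --group-name my-sg \
  --protocol tcp \
  --port 54321 \
  --cidr 0.0.0.0/0
```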
Open http://PublicIP:54321 in a browser to start the IPython notebook console (the port matches the --port option above).
Click on TensorFlowDeepLearning.ipynb
Refer to the accompanying video for demo details.
#Sample .bashrc contents (paths collected from the steps above; adjust to your installation locations):
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export SCALA_HOME=/usr/share/scala
export SPARK_HOME=~/spark-1.6.1-bin-hadoop2.6
export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib64
export PATH=$PATH:$JAVA_HOME/bin:$SPARK_HOME/bin:$CUDA_HOME/bin
1) ERROR: Getting java.net.UnknownHostException while starting spark-shell
Make sure /etc/hosts has entry for hostname.
Eg: 127.0.0.1 hostname
2) ERROR: "Could not find .egg-info directory in install record" during IPython installation
sudo pip install --upgrade setuptools pip
3) ERROR: Can’t find swig while configuring TF
sudo apt-get install swig
4) ERROR: “Ignoring gpu device (device: 0, name: GRID K520, pci bus id: 0000:00:03.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5”
Specify 3.0 when TensorFlow's ./configure script prompts for Cuda compute capabilities.
Please note that each additional compute capability significantly increases your build time and binary size.
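As a sketch, the capability can also be pinned non-interactively via an environment variable that many TensorFlow releases honor during configuration; verify against your TF version before relying on it:

```shell
# Assumed: this TF release reads TF_CUDA_COMPUTE_CAPABILITIES during ./configure,
# pinning the build to compute capability 3.0 (GRID K520).
TF_CUDA_COMPUTE_CAPABILITIES=3.0 ./configure
```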
5) ERROR: Could not insert ‘nvidia_352’: Unknown symbol in module, or unknown parameter (see dmesg)