Sentiment Analysis is a powerful Natural Language Processing technique that can be used to compute and quantify the emotions associated with a body of text. One of the reasons that Sentiment Analysis is so powerful is because its results are easy to interpret and can give you a big-picture metric for your dataset.
One recent event that surprised many people was the November 8th US Presidential election. Hillary Clinton, who ended up losing the race, had been given chances of victory ranging from 71.4% (FiveThirtyEight) to 85% (New York Times) to over 99% (Princeton Election Consortium).
To measure the shock of this upset, we decided to examine comments made during the announcements of the election results and see how (and whether) the sentiment changed. The sentiment of a comment is measured by how its words correspond to either a negative or positive connotation. A score of 0.00 means the comment is neutral, a higher score means the sentiment is more positive, and a negative score means the sentiment is negative.
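As a toy illustration of the idea (the word lists and the length-normalized scoring rule below are invented for demonstration; real analyses use full sentiment lexicons such as VADER's), a minimal scorer might look like:

```python
# Toy lexicon-based sentiment scorer. The word lists are invented;
# production tools use lexicons with thousands of weighted entries.
POSITIVE = {"win", "great", "happy", "hope"}
NEGATIVE = {"lose", "sad", "fear", "angry"}

def sentiment_score(comment: str) -> float:
    """Return a score in [-1, 1]: >0 positive, <0 negative, 0.0 neutral."""
    words = comment.lower().split()
    if not words:
        return 0.0
    hits = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return hits / len(words)

sentiment_score("great night we win")  # -> 0.5 (positive)
sentiment_score("the results are in")  # -> 0.0 (neutral)
```

Real sentiment tools also handle negation, intensifiers, and punctuation, but the core idea of mapping words to polarity and aggregating is the same.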
Our dataset is a .csv of all Reddit comments made during 11/8/2016 to 11/10/2016 (UTC) and is courtesy of /u/Stuck_In_the_Matrix. All times are in EST, and we’ve annotated the timeline (the height of the bars denotes the number of comments posted during that hour):
We examined five political subreddits to gauge their reactions. Our first target was /r/hillaryclinton, Clinton’s primary support base. The number of comments began to peak around 9pm EST, but the sentiment gradually fell as news came in that Donald Trump was winning more states than expected.
What is interesting is the low number of comments made after the election was called for Donald Trump. I suspect that it may have been a subreddit-wide pause on comments due to concerns about trolls, but I’m not sure; I contacted the moderators but haven’t received a response yet.
A few other left-leaning subreddits had interesting results as well. While /r/SandersforPresident was closed for the election season following Bernie Sanders’ concession, its successor, /r/Political_Revolution, remained open and experienced similar declines in comment sentiment.
On /r/The_Donald (Donald Trump’s base), the results were the opposite.
There are also a few subreddits that are less candidate- or ideology-specific: /r/politics and /r/PoliticalDiscussion. /r/PoliticalDiscussion didn’t show any clear shift, but /r/politics became noticeably more muted, at least compared to the previous night.
- Reddit political subreddits experienced a sizable increase in activity as the election results came in
- Subreddits differed in their reactions to the news along ideological lines, with pro-Trump subreddits showing more positive sentiment than pro-Clinton subreddits
What could be the next steps for this type of analysis?
- Can we use these patterns to classify the readership of the comments sections of newspapers as left- or right-leaning?
- Can we apply these time-series sentiment analyses to other events, such as sporting events (which also involve two ‘teams’)?
- Can we use sentiment analysis to evaluate the long-term health of communities, such as subreddits dedicated to eventually-losing candidates, like Bernie Sanders?
The problem: Can we determine if a tweet came from the Donald Trump Twitter account (@realDonaldTrump) or the Hillary Clinton Twitter account (@HillaryClinton) using text analysis and Natural Language Processing (NLP) alone?
The Solution: Yes! We’ll divide this tutorial into three parts, the first on how to gather the necessary data, the second on data exploration, munging, & feature engineering, and the third on building our model itself. You can find all of our code on GitHub (https://git.io/vPwxr).
Part One: Collecting the Data
Note: We are going to be using Python. For the R version of this process, the concepts translate, and we have some code on GitHub that might be helpful. You can find the notebook for this part as “TweetGetter.ipynb” in our GitHub repository: https://git.io/vPwxr.
We used the Twitter API to collect tweets by both presidential candidates, which would become our dataset. Twitter only lets you access the latest ~3,000 tweets from a particular handle, even though it keeps all the Tweets in its own databases.
The first step is to create an app on Twitter, which you can do by visiting https://apps.twitter.com/. After completing the form you can access your app, and your keys and tokens. Specifically we’re looking for four things: the client key and secret (called consumer key and consumer secret) and the resource owner key and secret (called access token and access token secret).
We save this information in JSON format in a separate file.
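For example (the key names and placeholder strings below are our own convention, not anything Twitter mandates), the credentials file could be written like this:

```python
import json

# Placeholder values -- substitute the keys and tokens from your own
# Twitter app. Keeping them in a separate file keeps secrets out of
# the notebook itself.
credentials = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

with open("credentials.json", "w") as f:
    json.dump(credentials, f, indent=2)
```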
Then, we can use the Python libraries Requests and Pandas to gather the tweets into a DataFrame. We only really care about three things: the author of the Tweet (Donald Trump or Hillary Clinton), the text of the Tweet, and the unique identifier of the Tweet, but we can take in as much other data as we want (for the sake of data exploration, we also included the timestamp of each Tweet).
Once we have all this information, we can output it to a .csv file for further analysis and exploration.
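A sketch of the DataFrame-and-CSV stage might look like the following. In the real notebook the list of records comes from paginated Requests calls to Twitter's user timeline endpoint; here two hand-made records with the same shape keep the example self-contained:

```python
import pandas as pd

# Two invented records mimicking the shape of parsed Twitter API responses.
raw_tweets = [
    {"id_str": "1", "text": "Make America Great Again!",
     "created_at": "Mon Nov 07 12:00:00 +0000 2016",
     "user": {"screen_name": "realDonaldTrump"}},
    {"id_str": "2", "text": "Delete your account.",
     "created_at": "Thu Jun 09 20:40:00 +0000 2016",
     "user": {"screen_name": "HillaryClinton"}},
]

# Keep only the fields we care about: id, author, text, and timestamp.
df = pd.DataFrame([
    {
        "id": t["id_str"],
        "author": t["user"]["screen_name"],
        "text": t["text"],
        "timestamp": t["created_at"],
    }
    for t in raw_tweets
])
df.to_csv("tweets.csv", index=False)
```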
Part Two: Data Cleaning and Munging
You can find the notebook for this part as “NLPAnalysis.ipynb” in our GitHub repository: https://git.io/vPwxr.
To fully take advantage of machine learning, we need to add features to this dataset. For example, we might want to take into account the punctuation that each Twitter account uses, thinking that it might help us discriminate between Trump and Clinton. If we count the punctuation symbols in each Tweet and take the average across all Tweets, we get the following graph:
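That per-account computation can be sketched in a few lines of pandas (the three tweets here are invented stand-ins for the real dataset):

```python
import string
import pandas as pd

# Toy tweets; the notebook computes the same per-account average
# over the full scraped dataset.
df = pd.DataFrame({
    "author": ["realDonaldTrump", "HillaryClinton", "realDonaldTrump"],
    "text": ["Sad!", "Delete your account.", "MAKE AMERICA GREAT AGAIN!!!"],
})

# Count punctuation characters per tweet, then average per author.
df["n_punct"] = df["text"].apply(
    lambda s: sum(ch in string.punctuation for ch in s)
)
avg_punct = df.groupby("author")["n_punct"].mean()
```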
Or perhaps we care about how many hashtags or mentions each account uses:
With our timestamp data, we can examine Tweets by their Retweet count, over time:
The tall blue skyscraper was Clinton’s “Delete Your Account” Tweet
We can also compare the distribution of Tweets over time. We can see that Clinton tweets more frequently than Trump (this is also evidenced by the fact that we can access older Tweets from Trump, since there’s a hard limit on the number of Tweets we can access per account).
The Democratic National Convention was in session from July 25th to the 28th
We can construct heatmaps of when these candidates were posting:
Heatmap of Trump Tweets, by day and hour
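The aggregation behind such a heatmap is a groupby over weekday and hour. A sketch with three invented timestamps:

```python
import pandas as pd

# Invented timestamps standing in for the real Tweet timestamps.
timestamps = pd.to_datetime([
    "2016-07-25 09:15", "2016-07-25 09:45", "2016-07-26 21:00",
])
df = pd.DataFrame({"ts": timestamps})

# Count tweets per (weekday, hour) cell; this table is what gets
# rendered as a heatmap.
heat = (
    df.groupby([df["ts"].dt.day_name(), df["ts"].dt.hour])
      .size()
      .unstack(fill_value=0)
)
```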
All this light analysis was useful for building intuition, but our real goal is to use only the text of the tweet (including features derived from it) for our classification. If we included features like the timestamp, the task would become much easier.
We can utilize a process called tokenization, which lets us create features from the words in our text. To understand why this is useful, let’s pretend we only care about the mentions (for example, @h2oai) in each tweet. We would expect Donald Trump to mention certain people (@GovPenceIN) more than others, and certainly different people than Hillary Clinton. Of course, there might be people both candidates tweet at (maybe @POTUS). These patterns could be useful in classifying Tweets.
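Extracting those mentions is a one-line regular expression (the sample tweet is invented; a simple `@\w+` pattern is good enough for a sketch, though it misses edge cases like dots in usernames, which Twitter doesn't allow anyway):

```python
import re

def extract_mentions(tweet: str) -> list:
    """Return all @mentions appearing in a tweet."""
    return re.findall(r"@\w+", tweet)

extract_mentions("Great meeting with @GovPenceIN and @POTUS today!")
# -> ['@GovPenceIN', '@POTUS']
```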
Now, we can apply that same line of thinking to words. To make sure that we are only including valuable words, we can exclude stop-words, which are filler words such as ‘and’ or ‘the.’ We can also use a metric called term frequency–inverse document frequency (TF-IDF) that computes how important a word is to a document within a collection of documents.
There are also other ways to use and combine NLP. One approach might be sentiment analysis, where we interpret a tweet to be positive or negative. David Robinson did this to show that Trump’s personal tweets are angrier, as opposed to those written by his staff.
Another approach might be to create word trees that represent sentence structure. Once each tweet has been represented in this format you can examine metrics such as tree length or number of nodes, which are measures of the complexity of a sentence. Maybe Trump tweets a lot of clauses, as opposed to full sentences.
Part Three: Building, Training, and Testing the Model
You can find the notebooks for this part as “Python-TF-IDF.ipynb” and “TweetsNLP.flow” in our GitHub repository: https://git.io/vPwxr.
There were a lot of approaches to take, but we decided to keep it simple for now by only using TF-IDF vectorization. The actual code was relatively simple to write, thanks to the excellent Scikit-Learn package alongside NLTK.
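A condensed sketch of that scikit-learn workflow on toy data (the four labeled tweets are invented stand-ins for the real dataset, and logistic regression here is just one reasonable choice of classifier, not necessarily the exact model from the notebook):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented labeled tweets standing in for the scraped dataset.
texts = [
    "make america great again",
    "crooked hillary is a disaster",
    "stronger together for every american",
    "love trumps hate delete your account",
]
authors = ["Trump", "Trump", "Clinton", "Clinton"]

# TF-IDF features feeding a linear classifier, chained in one pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, authors)

pred = model.predict(["america will be great"])
```

With the real dataset you'd hold out a test set (e.g. with `train_test_split`) before judging accuracy; with four tweets this is purely illustrative.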
We could have also done some further cleaning of the data, such as excluding URLs from our Tweets’ text (right now, strings such as “zy7vpfrsdz” get their own feature column because the NLTK vectorizer treats them as words). Skipping this step doesn’t hurt our model’s accuracy, since each URL string is unique, but doing it would save space and time. Another strategy could be to stem words, treating each word as its root (so ‘hearing’ and ‘hears’ would both be coded as ‘hear’).
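Stemming is a one-liner with NLTK's `PorterStemmer` (note that a rule-based stemmer only strips suffixes, so regular forms like 'hearing' and 'hears' collapse to 'hear', while irregular forms like 'heard' would need a lemmatizer instead):

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

# Suffix-stripped roots: 'hearing' and 'hears' both reduce to 'hear'.
stems = [stemmer.stem(w) for w in ["hearing", "hears", "heard"]]
```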
Still, our model (created using H2O Flow) produces quite a good result without those improvements. We can use a variety of metrics to confirm this, including the Area Under the Curve (AUC). The underlying ROC curve plots the True Positive Rate (TPR) against the False Positive Rate (FPR), and the AUC is the area under that curve. A score of 0.5 means that the model is equivalent to flipping a coin, and a score of 1 means that the model separates the two classes perfectly.
The model curve is blue, while the red curve represents 50–50 guessing
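Computing AUC from predicted probabilities is a single scikit-learn call (the labels and scores below are hypothetical, chosen so every Trump tweet is ranked above every Clinton tweet, which yields a perfect AUC):

```python
from sklearn.metrics import roc_auc_score

# Hypothetical data: 1 = Trump, 0 = Clinton; scores are predicted
# probabilities of the positive class.
y_true = [1, 1, 0, 0]
y_score = [0.9, 0.6, 0.4, 0.1]  # every positive outranks every negative

auc = roc_auc_score(y_true, y_score)  # -> 1.0
```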
For a more intuitive judgment of our model, we can look at its variable importances (what the model considers to be the strongest discriminators in the data) and see if they make sense:
Can you guess which words (variables) correspond (are important) to which candidate?
Maybe the next step could be to build an app that takes in text and predicts whether it is more likely to have come from Clinton or Trump. Perhaps we can even consider the Tweets of several politicians, assign them a ‘liberal/conservative’ score, and then build a model to predict whether a Tweet is more conservative or more liberal (important features would maybe include “Benghazi” or “climate change”). Another cool application might be a deep learning model, in the footsteps of @DeepDrumpf.
If this inspired you to create analysis or build models, please let us know! We might want to highlight your project 🎉📈.
A while ago I was looking for an apartment in San Francisco. There are a lot of problems with finding housing in San Francisco, mostly stemming from the fierce competition. I was checking Craigslist every single day. It still took me (and my girlfriend) a few months to find a place — and we had to sublet for three weeks in between. Thankfully we’re happily housed now but it was quite the journey. Others have talked about their search for SF housing, but I have a few tips myself:
1) While Craigslist continues to be the best resource for finding housing (it’s how I found my current apartment), there are quite a few Facebook groups that may also be useful. My experience settled into weekly cycles: I’d send out lots of emails, get a stream of responses, go visit 1-2 places per weekday evening, and then get a stream of rejections back. If you do check Craigslist, the best times to check are Tuesday and Wednesday evenings, and then the following mornings, as the following graphic shows.
2) Be prepared to apply to an apartment on the spot. I’ve been burned a few times when I took a day or two to fully think about the location and price, but by the time I applied I was at the back of the line. It really helps to know exactly what you want, and know how to spot it. The good news is that even if you’re not sure of your wants and needs at the beginning of your search, you’ll learn as you visit more and more apartments.
3) Make sure you know where the laundry machines are. I once lived in an apartment where I forgot to ask if they had laundry in the building (they didn’t). The result was that I spent an unanticipated few hours every few weeks cleaning my clothes. It’s not the end of the world, and in that market I doubt knowing would have changed my decision, but it’s still a very important amenity that some people overlook.
Last week, we started to examine the 7.2% increase in traffic fatalities from 2014 to 2015, the reversal of a near decade-long downward trend. We then broke out the data by various accident classifications, such as “speeding” or “driving with a positive BAC,” and identified those classifications that had the greatest increase. One label that showed promise for improvement was “involving a distracted driver.” According to Pew Research, the number of Americans who own a mobile device has pretty consistently risen over the past decade, as has the number of Americans who own a smartphone. Moreover, apps like Pokemon Go have built-in features that incentivize driving while playing, and these types of augmented reality games are only going to become more common.
The National Highway Traffic Safety Administration (NHTSA) defines distracted driving as “any activity that could divert a person’s attention away from the primary task of driving.” This includes several activities, from texting while driving to using one hand to place a call. The Governors Highway Safety Association (GHSA), an organization that “provides leadership and representation for the states and territories to improve traffic safety,” notes that states collect data on distracted driving in different ways. While most states split distracted driving into two or three categories, some states use only one (and others use as many as 15!). These categories include not just distraction by technology, but also events such as animals in the vehicle or the consumption of food and drink. Distracted driving is also thought to be under-reported, because drivers are less likely to admit to using their phone in the event of a crash.
Because of these discrepancies, it’s important to keep in mind that regulations vary from state to state and policy that successfully reduces accidents in one state may not automatically follow to another state. Still, sharing what works and what doesn’t can be important in saving lives, which is one reason why this data is collected and aggregated. So, which states are succeeding at reducing the number of fatalities caused by distracted driving?
In 2015, New Mexico had one of the highest rates of distracted driving fatalities per mile driven. New Mexico Governor Susana Martinez recognized this back in 2014, signing a bill that banned texting while driving and stating: “Texting while driving is now the leading cause of death for New Mexico’s teen drivers. Most other states have banned the practice of texting while driving.”
Did these laws end up working? Well, maybe. If we look at all crashes (not just fatal ones) in New Mexico from 2005 to 2014, the general trend was downward post-2007, seemingly leveling out during 2014. Unfortunately, data from 2015 on the total number of crashes in New Mexico isn’t available and so we aren’t able to examine whether the bill ended up succeeding in terms of reducing all crashes due to distracted driving.
If we examine only fatalities, we see that while the number of all fatalities decreased in 2015, the 2014 bill doesn’t seem to have affected distracted driving fatalities. The bill was signed in March and took effect in July, leaving several months for its effects to propagate. This ambiguous policy impact isn’t limited to New Mexico, either. Economists Rahi Abouk and Scott Adams ran a national study in which they found that “while the effects are strong for the month immediately following ban imposition, accident levels appear to return toward normal levels in about three months.” Still, New Mexico’s decrease in fatalities bucked the national trend by the greatest amount (we’ll cover which states experienced the largest increases in fatalities in a separate post).
Of course, legislation is only one tactic that can be used to prevent distracted driving. The GHSA notes that some states use other tactics, such as social media outreach or statewide campaigns. Several states have adopted slogans, which range from Wyoming’s passive “The road is no place for distractions” to Missouri’s more flavorful “U TXT UR NXT, NO DWT.” States sometimes aim these campaigns at specific demographics, such as teens and young adults, who have higher rates of distracted driving (quite a few states also pass legislation directly targeting young people).
This post is the second in our series on traffic fatalities, inspired by a call to action put out by the Department of Transportation. Watch out for another post highlighting a different aspect of the dataset next week. In the meantime if you have any questions or comments or suggestions you can find me at @JayMahabal or email me at jay@h2oai.
On Tuesday, August 30th, the National Highway Traffic Safety Administration released its annual dataset of traffic fatalities, asking interested parties to use the dataset to identify the causes of the 7.2% increase in fatalities from 2014 to 2015. As part of H2O.ai‘s vision of using artificial intelligence for the betterment of society, we were excited to tackle this problem.
This post is the first in our series on the Department of Transportation dataset and driving fatalities which will hopefully culminate in a hackathon in late September, where we’ll invite community members to join forces with the talented engineers and scientists at H2O.ai to find a solution to this problem and prescribe policy changes.
To begin, we started by reading some literature and getting familiar with the data. These documents served as excellent inspiration for possible paths of analysis and guided our thinking. Our introductory investigation was based around asking a series of questions, paving the way for detailed analysis down the road. The dataset includes every (reported) accident along with several labels, from “involving a distracted driver” to “involving a driver aged 15-20.” Even though fatalities as a whole fell during the last ten years, more progress has been made in some areas than others, and comparing 2014 incidents to 2015 incidents can reveal promising openings for policy action.
It’s important to keep in mind that regulations vary from state to state and policy that successfully reduces accidents in one state may not follow to another state. Still, sharing what works and what doesn’t can be important in saving lives, which is one reason why this data is collected and aggregated.
Next week we’ll examine distracted driving, and investigate whether or not the laws that prohibited texting while driving made a difference — and why those laws didn’t continue the downward trend in 2015. We’ll follow that with an investigation into speeding and motorcycle crashes. In the meantime if you have any questions or comments or suggestions you can find me at @JayMahabal or email me at jay@h2oai.
Clarification: September 14th, 2016
We shifted graph labels to reduce confusion.