{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Assignment 2: Naive Bayes\n", "Welcome to week two of this specialization. You will learn about Naive Bayes. Concretely, you will be using Naive Bayes for sentiment analysis on tweets. Given a tweet, you will decide if it has a positive sentiment or a negative one. Specifically you will: \n", "\n", "* Train a naive bayes model on a sentiment analysis task\n", "* Test using your model\n", "* Compute ratios of positive words to negative words\n", "* Do some error analysis\n", "* Predict on your own tweet\n", "\n", "You may already be familiar with Naive Bayes and its justification in terms of conditional probabilities and independence.\n", "* In this week's lectures and assignments we used the ratio of probabilities between positive and negative sentiments.\n", "* This approach gives us simpler formulas for these 2-way classification tasks.\n", "\n", "Load the cell below to import some packages.\n", "You may want to browse the documentation of unfamiliar libraries and functions." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from utils import process_tweet, lookup\n", "import pdb\n", "from nltk.corpus import stopwords, twitter_samples\n", "import numpy as np\n", "import pandas as pd\n", "import nltk\n", "import string\n", "from nltk.tokenize import TweetTokenizer\n", "from os import getcwd" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you are running this notebook in your local computer,\n", "don't forget to download the twitter samples and stopwords from nltk.\n", "\n", "```\n", "nltk.download('stopwords')\n", "nltk.download('twitter_samples')\n", "```" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# add folder, tmp2, from our local workspace containing pre-downloaded corpora files to nltk's data path\n", "filePath = f\"{getcwd()}/../tmp2/\"\n", "nltk.data.path.append(filePath)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# get the sets of positive and negative tweets\n", "all_positive_tweets = twitter_samples.strings('positive_tweets.json')\n", "all_negative_tweets = twitter_samples.strings('negative_tweets.json')\n", "\n", "# split the data into two pieces, one for training and one for testing (validation set)\n", "test_pos = all_positive_tweets[4000:]\n", "train_pos = all_positive_tweets[:4000]\n", "test_neg = all_negative_tweets[4000:]\n", "train_neg = all_negative_tweets[:4000]\n", "\n", "train_x = train_pos + train_neg\n", "test_x = test_pos + test_neg\n", "\n", "# avoid assumptions about the length of all_positive_tweets\n", "train_y = np.append(np.ones(len(train_pos)), np.zeros(len(train_neg)))\n", "test_y = np.append(np.ones(len(test_pos)), np.zeros(len(test_neg)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 1: Process the Data\n", "\n", "For any machine learning project, once you've gathered the data, the first step is to process it to make useful inputs to your model.\n", "- **Remove noise**: You will first want to remove noise from your data -- that is, remove words that don't tell you much about the content. These include all common words like 'I, you, are, is, etc...' 
"- We'll also remove stock market tickers, retweet symbols, hyperlinks, and hashtags because they do not tell you much about the sentiment.\n", "- You also want to remove all the punctuation from a tweet. We do this because we want to treat words with or without punctuation as the same word, instead of treating \"happy\", \"happy?\", \"happy!\", \"happy,\" and \"happy.\" as different words.\n", "- Finally, you want to use stemming to keep track of only one variation of each word. In other words, we'll treat \"motivation\", \"motivated\", and \"motivate\" similarly by grouping them within the same stem, \"motiv-\".\n", "\n", "We have given you the function `process_tweet()` that does this for you." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['hello', 'great', 'day', ':)', 'good', 'morn']\n" ] } ], "source": [ "custom_tweet = \"RT @Twitter @chapagain Hello There! Have a great day. :) #good #morning http://chapagain.com.np\"\n", "\n", "# print cleaned tweet\n", "print(process_tweet(custom_tweet))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 1.1 Implementing your helper functions\n", "\n", "To help train your Naive Bayes model, you will need to build a dictionary where the keys are (word, label) tuples and the values are the corresponding frequencies. Note that the labels we'll use here are 1 for positive and 0 for negative.\n", "\n", "You will also use the `lookup()` helper function (imported from `utils`), which takes in the `freqs` dictionary, a word, and a label (1 or 0) and returns the number of times that (word, label) tuple appears in the collection of tweets.\n", "\n", "For example: given a list of tweets `[\"i am rather excited\", \"you are rather happy\"]` and the label 1, the `count_tweets()` function you will implement below returns a dictionary that contains the following key-value pairs:\n", "\n", "{\n", "    (\"rather\", 1): 2,\n", "    (\"happi\", 1): 1,\n", "    (\"excit\", 1): 1\n", "}\n", "\n", "- Notice how each word in the given tweets is assigned the same label, 1.\n", "- Notice how the words \"i\" and \"am\" are not saved, since they were removed by `process_tweet()` because they are stopwords.\n", "- Notice how the word \"rather\" appears twice in the list of tweets, and so its count value is 2.\n", "\n", "#### Instructions\n", "Create a function `count_tweets()` that takes a list of tweets as input, cleans all of them, and returns a dictionary.\n", "- The key in the dictionary is a tuple containing the stemmed word and its class label, e.g. (\"happi\", 1).\n", "- The value is the number of times this word appears in the given collection of tweets (an integer)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [
\n", "\n", " Hints\n", "\n", "

\n", "

\n", "

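\n", "\n", "As a side note on the design choice: the if/else bookkeeping shown in the solution cell below can equivalently be written with Python's `dict.get()`, which returns a default value when a key is missing. Here is a minimal sketch of that pattern, using a made-up two-tweet corpus purely for illustration:\n", "\n", "```python\n", "result = {}\n", "for y, tweet in [(1, 'i am happy'), (0, 'i am sad')]:\n", "    for word in process_tweet(tweet):\n", "        # dict.get(key, 0) returns 0 if the key is not yet in the dictionary\n", "        result[(word, y)] = result.get((word, y), 0) + 1\n", "```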
" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "def count_tweets(result, tweets, ys):\n", " '''\n", " Input:\n", " result: a dictionary that will be used to map each pair to its frequency\n", " tweets: a list of tweets\n", " ys: a list corresponding to the sentiment of each tweet (either 0 or 1)\n", " Output:\n", " result: a dictionary mapping each pair to its frequency\n", " '''\n", "\n", " ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###\n", " for y, tweet in zip(ys, tweets):\n", " for word in process_tweet(tweet):\n", " # define the key, which is the word and label tuple\n", " pair = (word,y)\n", "\n", " # if the key exists in the dictionary, increment the count\n", " if pair in result:\n", " result[pair] += 1\n", "\n", " # else, if the key is new, add it to the dictionary and set the count to 1\n", " else:\n", " result[pair] = 1\n", " ### END CODE HERE ###\n", "\n", " return result" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{('happi', 1): 1, ('trick', 0): 1, ('sad', 0): 1, ('tire', 0): 2}" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Testing your function\n", "\n", "\n", "result = {}\n", "tweets = ['i am happy', 'i am tricked', 'i am sad', 'i am tired', 'i am tired']\n", "ys = [1, 0, 0, 0, 0]\n", "count_tweets(result, tweets, ys)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**: {('happi', 1): 1, ('trick', 0): 1, ('sad', 0): 1, ('tire', 0): 2}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 2: Train your model using Naive Bayes\n", "\n", "Naive bayes is an algorithm that could be used for sentiment analysis. It takes a short time to train and also has a short prediction time.\n", "\n", "#### So how do you train a Naive Bayes classifier?\n", "- The first part of training a naive bayes classifier is to identify the number of classes that you have.\n", "- You will create a probability for each class.\n", "$P(D_{pos})$ is the probability that the document is positive.\n", "$P(D_{neg})$ is the probability that the document is negative.\n", "Use the formulas as follows and store the values in a dictionary:\n", "\n", "$$P(D_{pos}) = \\frac{D_{pos}}{D}\\tag{1}$$\n", "\n", "$$P(D_{neg}) = \\frac{D_{neg}}{D}\\tag{2}$$\n", "\n", "Where $D$ is the total number of documents, or tweets in this case, $D_{pos}$ is the total number of positive tweets and $D_{neg}$ is the total number of negative tweets." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Prior and Logprior\n", "\n", "The prior probability represents the underlying probability in the target population that a tweet is positive versus negative. In other words, if we had no specific information and blindly picked a tweet out of the population set, what is the probability that it will be positive versus that it will be negative? That is the \"prior\".\n", "\n", "The prior is the ratio of the probabilities $\\frac{P(D_{pos})}{P(D_{neg})}$.\n", "We can take the log of the prior to rescale it, and we'll call this the logprior\n", "\n", "$$\\text{logprior} = log \\left( \\frac{P(D_{pos})}{P(D_{neg})} \\right) = log \\left( \\frac{D_{pos}}{D_{neg}} \\right)$$.\n", "\n", "Note that $log(\\frac{A}{B})$ is the same as $log(A) - log(B)$. 
"\n", "$$\\text{logprior} = \\log (P(D_{pos})) - \\log (P(D_{neg})) = \\log (D_{pos}) - \\log (D_{neg})\\tag{3}$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Positive and Negative Probability of a Word\n", "To compute the positive probability and the negative probability for a specific word in the vocabulary, we'll use the following inputs:\n", "\n", "- $freq_{pos}$ and $freq_{neg}$ are the frequencies of that specific word in the positive or negative class. In other words, the positive frequency of a word is the number of times the word is counted with the label of 1.\n", "- $N_{pos}$ and $N_{neg}$ are the total number of positive and negative words for all documents (for all tweets), respectively.\n", "- $V$ is the number of unique words in the entire set of documents, for all classes, whether positive or negative.\n", "\n", "We'll use these to compute the positive and negative probability for a specific word using these formulas:\n", "\n", "$$ P(W_{pos}) = \\frac{freq_{pos} + 1}{N_{pos} + V}\\tag{4} $$\n", "$$ P(W_{neg}) = \\frac{freq_{neg} + 1}{N_{neg} + V}\\tag{5} $$\n", "\n", "Notice that we add the \"+1\" in the numerator for additive smoothing. This [wiki article](https://en.wikipedia.org/wiki/Additive_smoothing) explains more about additive smoothing." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Log likelihood\n", "To compute the loglikelihood of that very same word, we can implement the following equation:\n", "\n", "$$\\text{loglikelihood} = \\log \\left(\\frac{P(W_{pos})}{P(W_{neg})} \\right)\\tag{6}$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Create `freqs` dictionary\n", "- Given your `count_tweets()` function, you can compute a dictionary called `freqs` that contains all the frequencies.\n", "- In this `freqs` dictionary, the key is the tuple (word, label).\n", "- The value is the number of times it has appeared.\n", "\n", "We will use this dictionary in several parts of this assignment."
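] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check of equations 4-6, here is a minimal sketch with made-up toy counts (not the assignment's real values): a word that appears 10 times in positive tweets and 2 times in negative tweets, with $N_{pos} = 100$, $N_{neg} = 100$, and $V = 50$.\n", "\n", "```python\n", "import numpy as np\n", "\n", "# toy counts, for illustration only\n", "freq_pos, freq_neg = 10, 2  # occurrences of the word in each class\n", "N_pos, N_neg, V = 100, 100, 50\n", "\n", "# equations 4 and 5: smoothed word probabilities\n", "p_w_pos = (freq_pos + 1) / (N_pos + V)  # (10+1)/(100+50) = 0.0733...\n", "p_w_neg = (freq_neg + 1) / (N_neg + V)  # (2+1)/(100+50) = 0.02\n", "\n", "# equation 6: a positive loglikelihood means the word leans positive\n", "print(np.log(p_w_pos / p_w_neg))  # log(11/3), about 1.2993\n", "```"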
] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# Build the freqs dictionary for later uses\n", "\n", "freqs = count_tweets({}, train_x, train_y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Instructions\n", "Given a freqs dictionary, `train_x` (a list of tweets) and a `train_y` (a list of labels for each tweet), implement a naive bayes classifier.\n", "\n", "##### Calculate $V$\n", "- You can then compute the number of unique words that appear in the `freqs` dictionary to get your $V$ (you can use the `set` function).\n", "\n", "##### Calculate $freq_{pos}$ and $freq_{neg}$\n", "- Using your `freqs` dictionary, you can compute the positive and negative frequency of each word $freq_{pos}$ and $freq_{neg}$.\n", "\n", "##### Calculate $N_{pos}$, $N_{neg}$, $V_{pos}$, and $V_{neg}$\n", "- Using `freqs` dictionary, you can also compute the total number of positive words and total number of negative words $N_{pos}$ and $N_{neg}$.\n", "- Similarly, use `freqs` dictionary to compute the total number of **unique** positive words, $V_{pos}$, and total **unique** negative words $V_{neg}$.\n", "\n", "##### Calculate $D$, $D_{pos}$, $D_{neg}$\n", "- Using the `train_y` input list of labels, calculate the number of documents (tweets) $D$, as well as the number of positive documents (tweets) $D_{pos}$ and number of negative documents (tweets) $D_{neg}$.\n", "- Calculate the probability that a document (tweet) is positive $P(D_{pos})$, and the probability that a document (tweet) is negative $P(D_{neg})$\n", "\n", "##### Calculate the logprior\n", "- the logprior is $log(D_{pos}) - log(D_{neg})$\n", "\n", "##### Calculate log likelihood\n", "- Finally, you can iterate over each word in the vocabulary, use your `lookup` function to get the positive frequencies, $freq_{pos}$, and the negative frequencies, $freq_{neg}$, for that specific word.\n", "- Compute the positive probability of each word $P(W_{pos})$, negative probability of each word $P(W_{neg})$ using equations 4 & 5.\n", "\n", "$$ P(W_{pos}) = \\frac{freq_{pos} + 1}{N_{pos} + V}\\tag{4} $$\n", "$$ P(W_{neg}) = \\frac{freq_{neg} + 1}{N_{neg} + V}\\tag{5} $$\n", "\n", "**Note:** We'll use a dictionary to store the log likelihoods for each word. The key is the word, the value is the log likelihood of that word).\n", "\n", "- You can then compute the loglikelihood: $log \\left( \\frac{P(W_{pos})}{P(W_{neg})} \\right)$." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "def train_naive_bayes(freqs, train_x, train_y):\n", " '''\n", " Input:\n", " freqs: dictionary from (word, label) to how often the word appears\n", " train_x: a list of tweets\n", " train_y: a list of labels correponding to the tweets (0,1)\n", " Output:\n", " logprior: the log prior. (equation 3 above)\n", " loglikelihood: the log likelihood of you Naive bayes equation. 
"    '''\n", "    loglikelihood = {}\n", "    logprior = 0\n", "\n", "    ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###\n", "\n", "    # calculate V, the number of unique words in the vocabulary\n", "    vocab = set([pair[0] for pair in freqs.keys()])\n", "    V = len(vocab)\n", "\n", "    # calculate N_pos, N_neg, V_pos, V_neg\n", "    N_pos = N_neg = V_pos = V_neg = 0\n", "    for pair in freqs.keys():\n", "        # if the label is positive (greater than zero)\n", "        if pair[1] > 0:\n", "            # increment the count of unique positive words by 1\n", "            V_pos += 1\n", "\n", "            # increment the number of positive words by the count for this (word, label) pair\n", "            N_pos += freqs[pair]\n", "\n", "        # else, the label is negative\n", "        else:\n", "            # increment the count of unique negative words by 1\n", "            V_neg += 1\n", "\n", "            # increment the number of negative words by the count for this (word, label) pair\n", "            N_neg += freqs[pair]\n", "\n", "    # calculate D, the number of documents\n", "    D = len(train_y)\n", "\n", "    # calculate D_pos, the number of positive documents\n", "    D_pos = len(list(filter(lambda x: x > 0, train_y)))\n", "\n", "    # calculate D_neg, the number of negative documents\n", "    D_neg = len(list(filter(lambda x: x <= 0, train_y)))\n", "\n", "    # calculate logprior\n", "    logprior = np.log(D_pos) - np.log(D_neg)\n", "\n", "    # for each word in the vocabulary...\n", "    for word in vocab:\n", "        # get the positive and negative frequency of the word\n", "        freq_pos = lookup(freqs, word, 1)\n", "        freq_neg = lookup(freqs, word, 0)\n", "\n", "        # calculate the probability that each word is positive, and negative\n", "        p_w_pos = (freq_pos + 1) / (N_pos + V)\n", "        p_w_neg = (freq_neg + 1) / (N_neg + V)\n", "\n", "        # calculate the log likelihood of the word\n", "        loglikelihood[word] = np.log(p_w_pos / p_w_neg)\n", "\n", "    ### END CODE HERE ###\n", "\n", "    return logprior, loglikelihood\n" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.0\n", "9089\n" ] } ], "source": [ "# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything\n", "logprior, loglikelihood = train_naive_bayes(freqs, train_x, train_y)\n", "print(logprior)\n", "print(len(loglikelihood))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", "0.0\n", "\n", "9089" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 3: Test your Naive Bayes\n", "\n", "Now that we have the `logprior` and `loglikelihood`, we can test the Naive Bayes function by making predictions on some tweets!\n", "\n", "#### Implement `naive_bayes_predict`\n", "**Instructions**:\n", "Implement the `naive_bayes_predict` function to make predictions on tweets.\n", "* The function takes in the `tweet`, `logprior`, and `loglikelihood`.\n", "* It returns a score indicating whether the tweet is more likely to belong to the positive or the negative class.\n", "* For each tweet, sum up the loglikelihoods of each word in the tweet.\n", "* Also add the logprior to this sum to get the predicted sentiment of that tweet.\n", "\n", "$$ p = \\text{logprior} + \\sum_{i=1}^{N} \\text{loglikelihood}_i$$\n", "\n", "#### Note\n", "Note that we calculate the prior from the training data, and that the training data is evenly split between positive and negative labels (4000 positive and 4000 negative tweets). This means that the ratio of positive to negative documents is 1, and the logprior is 0.\n",
"\n", "The value of 0.0 means that when we add the logprior to the log likelihood, we're just adding zero to the log likelihood. However, please remember to include the logprior, because whenever the data is not perfectly balanced, the logprior will be a non-zero value." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "def naive_bayes_predict(tweet, logprior, loglikelihood):\n", "    '''\n", "    Input:\n", "        tweet: a string\n", "        logprior: a number\n", "        loglikelihood: a dictionary of words mapping to numbers\n", "    Output:\n", "        p: the sum of all the loglikelihoods of each word in the tweet (if found in the dictionary) + logprior (a number)\n", "\n", "    '''\n", "    ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###\n", "    # process the tweet to get a list of words\n", "    word_l = process_tweet(tweet)\n", "\n", "    # initialize probability to zero\n", "    p = 0\n", "\n", "    # add the logprior\n", "    p += logprior\n", "\n", "    for word in word_l:\n", "\n", "        # check if the word exists in the loglikelihood dictionary\n", "        if word in loglikelihood:\n", "            # add the log likelihood of that word to the probability\n", "            p += loglikelihood[word]\n", "\n", "    ### END CODE HERE ###\n", "\n", "    return p\n" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The expected output is 1.5740278623499175\n" ] } ], "source": [ "# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything\n", "\n", "# Experiment with your own tweet.\n", "my_tweet = 'She smiled.'\n", "p = naive_bayes_predict(my_tweet, logprior, loglikelihood)\n", "print('The expected output is', p)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "- The expected output is around 1.57\n", "- The sentiment is positive." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Implement test_naive_bayes\n", "**Instructions**:\n", "* Implement `test_naive_bayes` to check the accuracy of your predictions.\n", "* The function takes in your `test_x`, `test_y`, `logprior`, and `loglikelihood`.\n", "* It returns the accuracy of your model.\n", "* First, use the `naive_bayes_predict` function to make predictions for each tweet in `test_x` (a minimal sketch of the thresholding step follows below)."
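] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is a minimal sketch of that thresholding step, using made-up scores rather than real predictions: a score greater than 0 maps to class 1, anything else to class 0, and accuracy is the fraction of predicted labels that match the true labels. For binary labels, `np.mean(np.array(y_hats) == labels)` is equivalent to the `1 - error` computation used in the graded cell below.\n", "\n", "```python\n", "import numpy as np\n", "\n", "scores = [2.15, -1.29, 0.57]  # hypothetical naive_bayes_predict outputs\n", "labels = np.array([1, 0, 1])  # hypothetical true labels\n", "\n", "# threshold the scores at 0 to get predicted classes\n", "y_hats = [1 if s > 0 else 0 for s in scores]\n", "\n", "# accuracy as the fraction of matching labels\n", "print(np.mean(np.array(y_hats) == labels))  # 1.0\n", "```"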
] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "# UNQ_C6 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "def test_naive_bayes(test_x, test_y, logprior, loglikelihood):\n", " \"\"\"\n", " Input:\n", " test_x: A list of tweets\n", " test_y: the corresponding labels for the list of tweets\n", " logprior: the logprior\n", " loglikelihood: a dictionary with the loglikelihoods for each word\n", " Output:\n", " accuracy: (# of tweets classified correctly)/(total # of tweets)\n", " \"\"\"\n", " accuracy = 0 # return this properly\n", "\n", " ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###\n", " y_hats = []\n", " for tweet in test_x:\n", " # if the prediction is > 0\n", " if naive_bayes_predict(tweet, logprior, loglikelihood) > 0:\n", " # the predicted class is 1\n", " y_hat_i = 1\n", " else:\n", " # otherwise the predicted class is 0\n", " y_hat_i = 0\n", "\n", " # append the predicted class to the list y_hats\n", " y_hats.append(y_hat_i)\n", "\n", " # error is the average of the absolute values of the differences between y_hats and test_y\n", " error = np.mean(np.absolute(y_hats-test_y))\n", "\n", " # Accuracy is 1 minus the error\n", " accuracy = 1-error\n", "\n", " ### END CODE HERE ###\n", "\n", " return accuracy\n" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Naive Bayes accuracy = 0.9940\n" ] } ], "source": [ "print(\"Naive Bayes accuracy = %0.4f\" %\n", " (test_naive_bayes(test_x, test_y, logprior, loglikelihood)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Accuracy**:\n", "\n", "0.9940" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "I am happy -> 2.15\n", "I am bad -> -1.29\n", "this movie should have been great. -> 2.14\n", "great -> 2.14\n", "great great -> 4.28\n", "great great great -> 6.41\n", "great great great great -> 8.55\n" ] } ], "source": [ "# UNQ_C7 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything\n", "\n", "# Run this cell to test your function\n", "for tweet in ['I am happy', 'I am bad', 'this movie should have been great.', 'great', 'great great', 'great great great', 'great great great great']:\n", " # print( '%s -> %f' % (tweet, naive_bayes_predict(tweet, logprior, loglikelihood)))\n", " p = naive_bayes_predict(tweet, logprior, loglikelihood)\n", "# print(f'{tweet} -> {p:.2f} ({p_category})')\n", " print(f'{tweet} -> {p:.2f}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "- I am happy -> 2.15\n", "- I am bad -> -1.29\n", "- this movie should have been great. 
"- great -> 2.14\n", "- great great -> 4.28\n", "- great great great -> 6.41\n", "- great great great great -> 8.55" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "-8.801622640492191" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Feel free to check the sentiment of your own tweet below\n", "my_tweet = 'you are bad :('\n", "naive_bayes_predict(my_tweet, logprior, loglikelihood)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 4: Filter words by ratio of positive to negative counts\n", "\n", "- Some words have more positive counts than others, and can be considered \"more positive\". Likewise, some words can be considered more negative than others.\n", "- One way for us to define the level of positiveness or negativeness, without calculating the log likelihood, is to compare the positive to negative frequency of the word.\n", "    - Note that we can also use the log likelihood calculations to compare relative positivity or negativity of words.\n", "- We can calculate the ratio of positive to negative frequencies of a word.\n", "- Once we're able to calculate these ratios, we can filter a subset of words whose ratio is at or above a given minimum.\n", "- Similarly, we can filter a subset of words whose ratio is at or below a given maximum (words that are at least as negative as a given threshold).\n", "\n", "#### Implement `get_ratio()`\n", "- Given the `freqs` dictionary of words and a particular word, use `lookup(freqs, word, 1)` to get the positive count of the word.\n", "- Similarly, use the `lookup()` function to get the negative count of that word.\n", "- Calculate the ratio of positive to negative counts:\n", "\n", "$$ ratio = \\frac{\\text{pos_words} + 1}{\\text{neg_words} + 1} $$\n", "\n", "Where pos_words and neg_words correspond to the frequency of the word in its respective class, as in the table below.\n", "\n", "| Words | Positive word count | Negative word count |\n", "|-------|---------------------|---------------------|\n", "| glad  | 41                  | 2                   |\n", "| arriv | 57                  | 4                   |\n", "| :(    | 1                   | 3663                |\n", "| :-(   | 0                   | 378                 |
" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "# UNQ_C8 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "def get_ratio(freqs, word):\n", " '''\n", " Input:\n", " freqs: dictionary containing the words\n", "\n", " Output: a dictionary with keys 'positive', 'negative', and 'ratio'.\n", " Example: {'positive': 10, 'negative': 20, 'ratio': 0.5}\n", " '''\n", " pos_neg_ratio = {'positive': 0, 'negative': 0, 'ratio': 0.0}\n", " ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###\n", " # use lookup() to find positive counts for the word (denoted by the integer 1)\n", " pos_neg_ratio['positive'] = lookup(freqs,word,1)\n", "\n", " # use lookup() to find negative counts for the word (denoted by integer 0)\n", " pos_neg_ratio['negative'] = lookup(freqs,word,0)\n", "\n", " # calculate the ratio of positive to negative counts for the word\n", " pos_neg_ratio['ratio'] = (pos_neg_ratio['positive'] + 1)/(pos_neg_ratio['negative'] + 1)\n", " ### END CODE HERE ###\n", " return pos_neg_ratio\n" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'positive': 161, 'negative': 18, 'ratio': 8.526315789473685}" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "get_ratio(freqs, 'happi')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Implement `get_words_by_threshold(freqs,label,threshold)`\n", "\n", "* If we set the label to 1, then we'll look for all words whose threshold of positive/negative is at least as high as that threshold, or higher.\n", "* If we set the label to 0, then we'll look for all words whose threshold of positive/negative is at most as low as the given threshold, or lower.\n", "* Use the `get_ratio()` function to get a dictionary containing the positive count, negative count, and the ratio of positive to negative counts.\n", "* Append a dictionary to a list, where the key is the word, and the dictionary is the dictionary `pos_neg_ratio` that is returned by the `get_ratio()` function.\n", "An example key-value pair would have this structure:\n", "```\n", "{'happi':\n", " {'positive': 10, 'negative': 20, 'ratio': 0.5}\n", "}\n", "```" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "# UNQ_C9 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)\n", "def get_words_by_threshold(freqs, label, threshold):\n", " '''\n", " Input:\n", " freqs: dictionary of words\n", " pos_neg_ratio: dictionary of positive counts, negative counts, and ratio of positive / negative counts.\n", " label: 1 for positive, 0 for negative\n", " threshold: ratio that will be used as the cutoff for including a word in the returned dictionary\n", " Output:\n", " word_set: dictionary containing the word and information on its positive count, negative count, and ratio of positive to negative counts.\n", " example of a key value pair:\n", " {'happi':\n", " {'positive': 10, 'negative': 20, 'ratio': 0.5}\n", " }\n", " '''\n", " word_list = {}\n", "\n", " ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###\n", " for key in freqs.keys():\n", " word, _ = key\n", "\n", " # get the positive/negative ratio for a word\n", " pos_neg_ratio = get_ratio(freqs, word)\n", "\n", " # if the label is 1 and the ratio is greater than or equal to the threshold...\n", " if label == 1 and pos_neg_ratio['ratio'] >= threshold :\n", "\n", " # Add the pos_neg_ratio to the dictionary\n", " word_list[word] = pos_neg_ratio\n", "\n", " # 
"        elif label == 0 and pos_neg_ratio['ratio'] <= threshold:\n", "            # add the pos_neg_ratio to the dictionary\n", "            word_list[word] = pos_neg_ratio\n", "\n", "        # otherwise, do not include this word in the list (do nothing)\n", "\n", "    ### END CODE HERE ###\n", "    return word_list\n" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{':(': {'positive': 1, 'negative': 3663, 'ratio': 0.0005458515283842794},\n", " ':-(': {'positive': 0, 'negative': 378, 'ratio': 0.002638522427440633},\n", " 'zayniscomingbackonjuli': {'positive': 0, 'negative': 19, 'ratio': 0.05},\n", " '26': {'positive': 0, 'negative': 20, 'ratio': 0.047619047619047616},\n", " '>:(': {'positive': 0, 'negative': 43, 'ratio': 0.022727272727272728},\n", " 'lost': {'positive': 0, 'negative': 19, 'ratio': 0.05},\n", " '♛': {'positive': 0, 'negative': 210, 'ratio': 0.004739336492890996},\n", " '》': {'positive': 0, 'negative': 210, 'ratio': 0.004739336492890996},\n", " 'beli̇ev': {'positive': 0, 'negative': 35, 'ratio': 0.027777777777777776},\n", " 'wi̇ll': {'positive': 0, 'negative': 35, 'ratio': 0.027777777777777776},\n", " 'justi̇n': {'positive': 0, 'negative': 35, 'ratio': 0.027777777777777776},\n", " 'see': {'positive': 0, 'negative': 35, 'ratio': 0.027777777777777776},\n", " 'me': {'positive': 0, 'negative': 35, 'ratio': 0.027777777777777776}}" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Test your function: find negative words at or below a threshold\n", "get_words_by_threshold(freqs, label=0, threshold=0.05)" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'followfriday': {'positive': 23, 'negative': 0, 'ratio': 24.0},\n", " 'commun': {'positive': 27, 'negative': 1, 'ratio': 14.0},\n", " ':)': {'positive': 2847, 'negative': 2, 'ratio': 949.3333333333334},\n", " 'flipkartfashionfriday': {'positive': 16, 'negative': 0, 'ratio': 17.0},\n", " ':D': {'positive': 498, 'negative': 0, 'ratio': 499.0},\n", " ':p': {'positive': 103, 'negative': 0, 'ratio': 104.0},\n", " 'influenc': {'positive': 16, 'negative': 0, 'ratio': 17.0},\n", " ':-)': {'positive': 543, 'negative': 0, 'ratio': 544.0},\n", " \"here'\": {'positive': 20, 'negative': 0, 'ratio': 21.0},\n", " 'youth': {'positive': 14, 'negative': 0, 'ratio': 15.0},\n", " 'bam': {'positive': 44, 'negative': 0, 'ratio': 45.0},\n", " 'warsaw': {'positive': 44, 'negative': 0, 'ratio': 45.0},\n", " 'shout': {'positive': 11, 'negative': 0, 'ratio': 12.0},\n", " ';)': {'positive': 22, 'negative': 0, 'ratio': 23.0},\n", " 'stat': {'positive': 51, 'negative': 0, 'ratio': 52.0},\n", " 'arriv': {'positive': 57, 'negative': 4, 'ratio': 11.6},\n", " 'via': {'positive': 60, 'negative': 1, 'ratio': 30.5},\n", " 'glad': {'positive': 41, 'negative': 2, 'ratio': 14.0},\n", " 'blog': {'positive': 27, 'negative': 0, 'ratio': 28.0},\n", " 'fav': {'positive': 11, 'negative': 0, 'ratio': 12.0},\n", " 'fback': {'positive': 26, 'negative': 0, 'ratio': 27.0},\n", " 'pleasur': {'positive': 10, 'negative': 0, 'ratio': 11.0}}" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Test your function; find positive words at or above a threshold\n", "get_words_by_threshold(freqs, label=1, threshold=10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice the difference between the positive and negative ratios. Emojis like :( and words like 'me' tend to have a negative connotation. Other words like 'glad', 'community', and 'arrives' tend to be found in the positive tweets."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 5: Error Analysis\n", "\n", "In this part you will see some tweets that your model misclassified. Why do you think the misclassifications happened? Were there any assumptions made by the Naive Bayes model?" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Truth Predicted Tweet\n", "1\t0.00\tb''\n", "1\t0.00\tb'truli later move know queen bee upward bound movingonup'\n", "1\t0.00\tb'new report talk burn calori cold work harder warm feel better weather :p'\n", "1\t0.00\tb'harri niall 94 harri born ik stupid wanna chang :D'\n", "1\t0.00\tb''\n", "1\t0.00\tb''\n", "1\t0.00\tb'park get sunlight'\n", "1\t0.00\tb'uff itna miss karhi thi ap :p'\n", "0\t1.00\tb'hello info possibl interest jonatha close join beti :( great'\n", "0\t1.00\tb'u prob fun david'\n", "0\t1.00\tb'pat jay'\n", "0\t1.00\tb'whatev stil l young >:-('\n" ] } ], "source": [ "# Some error analysis done for you\n", "print('Truth Predicted Tweet')\n", "for x, y in zip(test_x, test_y):\n", "    y_hat = naive_bayes_predict(x, logprior, loglikelihood)\n", "    if y != (np.sign(y_hat) > 0):\n", "        print('%d\\t%0.2f\\t%s' % (y, np.sign(y_hat) > 0, ' '.join(\n", "            process_tweet(x)).encode('ascii', 'ignore')))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 6: Predict with your own tweet\n", "\n", "In this part you can predict the sentiment of your own tweet." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "9.574768961173339\n" ] } ], "source": [ "# Test with your own tweet - feel free to modify `my_tweet`\n", "my_tweet = 'I am happy because I am learning :)'\n", "\n", "p = naive_bayes_predict(my_tweet, logprior, loglikelihood)\n", "print(p)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Congratulations on completing this assignment. See you next week!" ] } ], "metadata": { "anaconda-cloud": {}, "coursera": { "schema_names": [ "NLPC1-2" ] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.1" } }, "nbformat": 4, "nbformat_minor": 1 }