{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Sentiment analysis with TFLearn\n", "\n", "In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using [TFLearn](http://tflearn.org/), a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.\n", "\n", "We'll start off by importing all the modules we'll need, then load and prepare the data." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "import tensorflow as tf\n", "import tflearn\n", "from tflearn.data_utils import to_categorical" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preparing the data\n", "\n", "Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Read the data\n", "\n", "Use the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [], "source": [ "reviews = pd.read_csv('reviews.txt', header=None)\n", "labels = pd.read_csv('labels.txt', header=None)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Counting word frequency\n", "\n", "To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a [bag of words](https://en.wikipedia.org/wiki/Bag-of-words_model). We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the [Counter class](https://docs.python.org/2/library/collections.html#collections.Counter).\n", "\n", "> **Exercise:** Create the bag of words from the reviews data and assign it to `total_counts`. The reviews are stores in the `reviews` [Pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html). If you want the reviews as a Numpy array, use `reviews.values`. You can iterate through the rows in the DataFrame with `for idx, row in reviews.iterrows():` ([documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iterrows.html)). When you break up the reviews into words, use `.split(' ')` instead of `.split()` so your results match ours." 
] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Total words in data set: 74074\n" ] } ], "source": [ "from collections import Counter\n", "total_counts = Counter()\n", "for _, row in reviews.iterrows():\n", " total_counts.update(row[0].split(' '))\n", "print(\"Total words in data set: \", len(total_counts))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort `vocab` by the count value and keep the 10000 most frequent words." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['', 'the', '.', 'and', 'a', 'of', 'to', 'is', 'br', 'it', 'in', 'i', 'this', 'that', 's', 'was', 'as', 'for', 'with', 'movie', 'but', 'film', 'you', 'on', 't', 'not', 'he', 'are', 'his', 'have', 'be', 'one', 'all', 'at', 'they', 'by', 'an', 'who', 'so', 'from', 'like', 'there', 'her', 'or', 'just', 'about', 'out', 'if', 'has', 'what', 'some', 'good', 'can', 'more', 'she', 'when', 'very', 'up', 'time', 'no']\n" ] } ], "source": [ "vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]\n", "print(vocab[:60])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "float : 30\n" ] } ], "source": [ "print(vocab[-1], ': ', total_counts[vocab[-1]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.\n", "\n", "**Note:** When you run, you may see a different word from the one shown above, but it will also have the value `30`. That's because there are many words tied for that number of counts, and the `Counter` class does not guarantee which one will be returned in the case of a tie.\n", "\n", "Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.\n", "\n", "> **Exercise:** Create a dictionary called `word2idx` that maps each word in the vocabulary to an index. The first word in `vocab` has index `0`, the second word has index `1`, and so on." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [], "source": [ "word2idx = {word: i for i, word in enumerate(vocab)}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Text to vector function\n", "\n", "Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:\n", "\n", "* Initialize the word vector with [np.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html), it should be the length of the vocabulary.\n", "* Split the input string of text into a list of words with `.split(' ')`. 
"* For each word in that list, increment the element in the index associated with that word, which you get from `word2idx`.\n", "\n", "**Note:** Since not all words are in the `vocab` dictionary, you'll get a key error if you run into one of those words. You can use the `.get` method of the `word2idx` dictionary to specify a default value to return when a key isn't found. For example, `word2idx.get(word, None)` returns `None` if `word` doesn't exist in the dictionary." ] },
{ "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def text_to_vector(text):\n", "    word_vector = np.zeros(len(vocab), dtype=np.int_)\n", "    for word in text.split(' '):\n", "        idx = word2idx.get(word, None)\n", "        if idx is None:\n", "            continue\n", "        else:\n", "            word_vector[idx] += 1\n", "    return np.array(word_vector)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "If you do this right, the following code should return\n", "\n", "```\n", "text_to_vector('The tea is for a party to celebrate '\n", " 'the movie so she has no time for a cake')[:65]\n", " \n", "array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,\n", " 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,\n", " 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])\n", "``` " ] },
{ "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,\n", " 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,\n", " 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "text_to_vector('The tea is for a party to celebrate '\n", "               'the movie so she has no time for a cake')[:65]" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now, run through our entire review data set and convert each review to a word vector." ] },
{ "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [], "source": [ "word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)\n", "for ii, (_, text) in enumerate(reviews.iterrows()):\n", "    word_vectors[ii] = text_to_vector(text[0])" ] },
{ "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "array([[ 18, 9, 27, 1, 4, 4, 6, 4, 0, 2, 2, 5, 0,\n", " 4, 1, 0, 2, 0, 0, 0, 0, 0, 0],\n", " [ 5, 4, 8, 1, 7, 3, 1, 2, 0, 4, 0, 0, 0,\n", " 1, 2, 0, 0, 1, 3, 0, 0, 0, 1],\n", " [ 78, 24, 12, 4, 17, 5, 20, 2, 8, 8, 2, 1, 1,\n", " 2, 8, 0, 5, 5, 4, 0, 2, 1, 4],\n", " [167, 53, 23, 0, 22, 23, 13, 14, 8, 10, 8, 12, 9,\n", " 4, 11, 2, 11, 5, 11, 0, 5, 3, 0],\n", " [ 19, 10, 11, 4, 6, 2, 2, 5, 0, 1, 2, 3, 1,\n", " 0, 0, 0, 3, 1, 0, 1, 0, 0, 0]])" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Printing out the first 5 word vectors\n", "word_vectors[:5, :23]" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Train, Validation, Test sets\n", "\n", "Now that we have the word vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the training data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function `to_categorical` from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here; TFLearn will do that for us later." ] },
{ "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [], "source": [ "Y = (labels=='positive').astype(np.int_)\n", "records = len(labels)\n", "\n", "shuffle = np.arange(records)\n", "np.random.shuffle(shuffle)\n", "train_fraction = 0.9\n", "\n", "train_split, test_split = shuffle[:int(records*train_fraction)], shuffle[int(records*train_fraction):]\n", "trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split, 0], 2)\n", "testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split, 0], 2)" ] },
{ "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "array([[ 0., 1.],\n", " [ 1., 0.],\n", " [ 1., 0.],\n", " ..., \n", " [ 1., 0.],\n", " [ 1., 0.],\n", " [ 0., 1.]])" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "trainY" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Building the network\n", "\n", "[TFLearn](http://tflearn.org/) lets you build the network by [defining the layers](http://tflearn.org/layers/core/). \n", "\n", "### Input layer\n", "\n", "For the input layer, you just need to tell it how many units you have. For example, \n", "\n", "```\n", "net = tflearn.input_data([None, 100])\n", "```\n", "\n", "would create a network with 100 input units. The first element in the list, `None` in this case, sets the batch size. Setting it to `None` here means the network can accept batches of any size.\n", "\n", "The number of inputs to your network needs to match the size of your data. For this example, we're using 10000-element vectors to encode our input data, so we need 10000 input units.\n", "\n", "\n", "### Adding layers\n", "\n", "To add new hidden layers, you use \n", "\n", "```\n", "net = tflearn.fully_connected(net, n_units, activation='ReLU')\n", "```\n", "\n", "This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument `net` is the network you created in the `tflearn.input_data` call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with `n_units`, and set the activation function with the `activation` keyword. You can keep adding layers to your network by repeatedly calling `net = tflearn.fully_connected(net, n_units)`.\n", "\n", "### Output layer\n", "\n", "The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict which of two classes some input data belongs to, so we should use softmax.\n", "\n", "```\n", "net = tflearn.fully_connected(net, 2, activation='softmax')\n", "```\n", "\n", "### Training\n", "To set how you train the network, use \n", "\n", "```\n", "net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\n", "```\n", "\n", "Again, this is passing in the network you've been building. The keywords:\n",
"\n", "* `optimizer` sets the training method, here stochastic gradient descent\n", "* `learning_rate` is the learning rate\n", "* `loss` determines how the network error is calculated. In this example, we're using categorical cross-entropy.\n", "\n", "Finally, you put all this together to create the model with `tflearn.DNN(net)`. So it ends up looking something like \n", "\n", "```\n", "net = tflearn.input_data([None, 10]) # Input\n", "net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden\n", "net = tflearn.fully_connected(net, 2, activation='softmax') # Output\n", "net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\n", "model = tflearn.DNN(net)\n", "```\n", "\n", "> **Exercise:** Below in the `build_model()` function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc." ] },
{ "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Network building\n", "def build_model():\n", "    # This resets all parameters and variables, leave this here\n", "    tf.reset_default_graph()\n", "    \n", "    # Inputs\n", "    net = tflearn.input_data([None, 10000])\n", "\n", "    # Hidden layer(s)\n", "    net = tflearn.fully_connected(net, 200, activation='ReLU')\n", "    net = tflearn.fully_connected(net, 25, activation='ReLU')\n", "\n", "    # Output layer\n", "    net = tflearn.fully_connected(net, 2, activation='softmax')\n", "    net = tflearn.regression(net, optimizer='sgd', \n", "                             learning_rate=0.1, \n", "                             loss='categorical_crossentropy')\n", "    \n", "    model = tflearn.DNN(net)\n", "    return model" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Initializing the model\n", "\n", "Next we need to call the `build_model()` function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.\n", "\n", "> **Note:** You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon." ] },
{ "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From //anaconda3/envs/dl/lib/python3.5/site-packages/tflearn/summaries.py:46 in get_summary.: scalar_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.\n", "Instructions for updating:\n", "Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, passing a tensor or list of tags to a scalar summary op is no longer supported.\n", "WARNING:tensorflow:From //anaconda3/envs/dl/lib/python3.5/site-packages/tflearn/summaries.py:46 in get_summary.: scalar_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.\n", "Instructions for updating:\n", "Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, passing a tensor or list of tags to a scalar summary op is no longer supported.\n",
"WARNING:tensorflow:From //anaconda3/envs/dl/lib/python3.5/site-packages/tflearn/helpers/trainer.py:766 in create_summaries.: merge_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.\n", "Instructions for updating:\n", "Please switch to tf.summary.merge.\n", "WARNING:tensorflow:VARIABLES collection name is deprecated, please use GLOBAL_VARIABLES instead; VARIABLES will be removed after 2017-03-02.\n", "WARNING:tensorflow:From //anaconda3/envs/dl/lib/python3.5/site-packages/tflearn/helpers/trainer.py:130 in __init__.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.\n", "Instructions for updating:\n", "Use `tf.global_variables_initializer` instead.\n" ] } ], "source": [ "model = build_model()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Training the network\n", "\n", "Now that we've constructed the network, saved as the variable `model`, we can fit it to the data. Here we use the `model.fit` method. You pass in the training features `trainX` and the training targets `trainY`. Below I set `validation_set=0.1`, which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the `batch_size` and `n_epoch` keywords, respectively. Below is the code to fit the network to our word vectors.\n", "\n", "You can rerun `model.fit` to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. **Only use the test set after you're completely done training the network.**" ] },
{ "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false, "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Training Step: 15900 | total loss: \u001b[1m\u001b[32m0.28905\u001b[0m\u001b[0m\n", "| SGD | epoch: 100 | loss: 0.28905 - acc: 0.8768 | val_loss: 0.45618 - val_acc: 0.8351 -- iter: 20250/20250\n", "Training Step: 15900 | total loss: \u001b[1m\u001b[32m0.28905\u001b[0m\u001b[0m\n", "| SGD | epoch: 100 | loss: 0.28905 - acc: 0.8768 | val_loss: 0.45618 - val_acc: 0.8351 -- iter: 20250/20250\n", "--\n" ] } ], "source": [ "# Training\n", "model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Testing\n", "\n", "After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, *only do this after finalizing the hyperparameters*." ] },
{ "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test accuracy: 0.8504\n" ] } ], "source": [ "predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)\n", "test_accuracy = np.mean(predictions == testY[:,0], axis=0)\n", "print(\"Test accuracy: \", test_accuracy)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Try out your own text!"
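 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check first (a hedged sketch with a made-up sentence, not part of the data set), you can vectorize a single lower-case string with `text_to_vector` and look at the raw output of `model.predict`; the second element of the row is the probability the network assigns to the positive class. The helper below wraps this up." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Hedged sketch: vectorize one made-up sentence and inspect the raw prediction.\n", "# model.predict expects a batch, so the single vector is wrapped in a list.\n", "vec = text_to_vector('this movie was a delight from start to finish')\n", "probs = model.predict([vec])[0]\n", "print('P(negative), P(positive):', probs)"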
] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Helper function that uses your model to predict sentiment\n", "def test_sentence(sentence):\n", " positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]\n", " print('Sentence: {}'.format(sentence))\n", " print('P(positive) = {:.3f} :'.format(positive_prob), \n", " 'Positive' if positive_prob > 0.5 else 'Negative')" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sentence: Moonlight is by far the best movie of 2016.\n", "P(positive) = 0.932 : Positive\n", "Sentence: It's amazing anyone could be talented enough to make something this spectacularly awful\n", "P(positive) = 0.002 : Negative\n" ] } ], "source": [ "sentence = \"Moonlight is by far the best movie of 2016.\"\n", "test_sentence(sentence)\n", "\n", "sentence = \"It's amazing anyone could be talented enough to make something this spectacularly awful\"\n", "test_sentence(sentence)" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [default]", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 2 }