{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "name": "basic-text-classification.ipynb", "version": "0.3.2", "views": {}, "default_view": {}, "provenance": [], "private_outputs": true, "collapsed_sections": [], "toc_visible": true }, "kernelspec": { "name": "python3", "display_name": "Python 3" } }, "cells": [ { "metadata": { "id": "Ic4_occAAiAT", "colab_type": "text" }, "cell_type": "markdown", "source": [ "##### Copyright 2018 The TensorFlow Authors." ] }, { "metadata": { "id": "ioaprt5q5US7", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } }, "cellView": "form" }, "cell_type": "code", "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "yCl0eTNH5RS3", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } }, "cellView": "form" }, "cell_type": "code", "source": [ "#@title MIT License\n", "#\n", "# Copyright (c) 2017 François Chollet\n", "#\n", "# Permission is hereby granted, free of charge, to any person obtaining a\n", "# copy of this software and associated documentation files (the \"Software\"),\n", "# to deal in the Software without restriction, including without limitation\n", "# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n", "# and/or sell copies of the Software, and to permit persons to whom the\n", "# Software is furnished to do so, subject to the following conditions:\n", "#\n", "# The above copyright notice and this permission notice shall be included in\n", "# all copies or substantial portions of the Software.\n", "#\n", "# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n", "# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n", "# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n", "# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n", "# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n", "# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n", "# DEALINGS IN THE SOFTWARE." ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "ItXfxkxvosLH", "colab_type": "text" }, "cell_type": "markdown", "source": [ "# Text classification with movie reviews" ] }, { "metadata": { "id": "hKY4XMc9o8iB", "colab_type": "text" }, "cell_type": "markdown", "source": [ "\n", " \n", " \n", " \n", "
\n", " View on TensorFlow.org\n", " \n", " Run in Google Colab\n", " \n", " View source on GitHub\n", "
" ] }, { "metadata": { "id": "Eg62Pmz3o83v", "colab_type": "text" }, "cell_type": "markdown", "source": [ "\n", "This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem. \n", "\n", "We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews. \n", "\n", "This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/)." ] }, { "metadata": { "id": "2ew7HTbPpCJH", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "import tensorflow as tf\n", "from tensorflow import keras\n", "\n", "import numpy as np\n", "\n", "print(tf.__version__)" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "iAsKG535pHep", "colab_type": "text" }, "cell_type": "markdown", "source": [ "## Download the IMDB dataset\n", "\n", "The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.\n", "\n", "The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):" ] }, { "metadata": { "id": "zXXx5Oc3pOmN", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "imdb = keras.datasets.imdb\n", "\n", "(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "odr-KlzO-lkL", "colab_type": "text" }, "cell_type": "markdown", "source": [ "The argument `num_words=10000` keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable." ] }, { "metadata": { "id": "l50X3GfjpU4r", "colab_type": "text" }, "cell_type": "markdown", "source": [ "## Explore the data \n", "\n", "Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review." ] }, { "metadata": { "id": "y8qCnve_-lkO", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "print(\"Training entries: {}, labels: {}\".format(len(train_data), len(train_labels)))" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "RnKvHWW4-lkW", "colab_type": "text" }, "cell_type": "markdown", "source": [ "The text of reviews have been converted to integers, where each integer represents a specific word in a dictionary. 
{ "metadata": { "id": "l50X3GfjpU4r", "colab_type": "text" }, "cell_type": "markdown", "source": [ "## Explore the data \n", "\n", "Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review." ] }, { "metadata": { "id": "y8qCnve_-lkO", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "print(\"Training entries: {}, labels: {}\".format(len(train_data), len(train_labels)))" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "RnKvHWW4-lkW", "colab_type": "text" }, "cell_type": "markdown", "source": [ "The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:" ] }, { "metadata": { "id": "QtTS4kpEpjbi", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "print(train_data[0])" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "hIE4l_72x7DP", "colab_type": "text" }, "cell_type": "markdown", "source": [ "Movie reviews may be different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later." ] }, { "metadata": { "id": "X-6Ii9Pfx6Nr", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "len(train_data[0]), len(train_data[1])" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "4wJg2FiYpuoX", "colab_type": "text" }, "cell_type": "markdown", "source": [ "### Convert the integers back to words\n", "\n", "It may be useful to know how to convert integers back to text. Here, we'll create a helper function to query a dictionary object that contains the integer-to-string mapping:" ] }, { "metadata": { "id": "tr5s_1alpzop", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "# A dictionary mapping words to an integer index\n", "word_index = imdb.get_word_index()\n", "\n", "# The first indices are reserved\n", "word_index = {k:(v+3) for k,v in word_index.items()}\n", "word_index[\"<PAD>\"] = 0\n", "word_index[\"<START>\"] = 1\n", "word_index[\"<UNK>\"] = 2  # unknown\n", "word_index[\"<UNUSED>\"] = 3\n", "\n", "reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])\n", "\n", "def decode_review(text):\n", "    return ' '.join([reverse_word_index.get(i, '?') for i in text])" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "U3CNRvEZVppl", "colab_type": "text" }, "cell_type": "markdown", "source": [ "Now we can use the `decode_review` function to display the text for the first review:" ] }, { "metadata": { "id": "s_OqxmH6-lkn", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "decode_review(train_data[0])" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "lFP_XKVRp4_S", "colab_type": "text" }, "cell_type": "markdown", "source": [ "## Prepare the data\n", "\n", "The reviews—the arrays of integers—must be converted to tensors before being fed into the neural network. This conversion can be done a couple of ways:\n", "\n", "* One-hot-encode the arrays to convert them into vectors of 0s and 1s. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network—a Dense layer—that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix (see the sketch below).\n", "\n", "* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer in our network.\n", "\n", "In this tutorial, we will use the second approach." ] },
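{ "metadata": {}, "cell_type": "markdown", "source": [ "For comparison, here is a minimal sketch of the first approach: multi-hot encoding the integer sequences into fixed-size vectors of 0s and 1s. It is added for illustration only and is not used by the rest of the notebook; the helper name `multi_hot_encode` is our own:" ] }, { "metadata": {}, "cell_type": "code", "source": [ "def multi_hot_encode(sequences, dimension=10000):\n", "    # Start with an all-zeros matrix of shape (num_reviews, num_words)\n", "    results = np.zeros((len(sequences), dimension))\n", "    for i, sequence in enumerate(sequences):\n", "        # Set the positions of this review's word indices to 1\n", "        results[i, sequence] = 1.0\n", "    return results\n", "\n", "# The first training review becomes a single 10,000-dimensional vector\n", "multi_hot_encode(train_data[:1]).shape" ], "execution_count": 0, "outputs": [] },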
\n", "\n", "Since the movie reviews must be the same length, we will use the [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:" ] }, { "metadata": { "id": "2jQv-omsHurp", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "train_data = keras.preprocessing.sequence.pad_sequences(train_data,\n", " value=word_index[\"\"],\n", " padding='post',\n", " maxlen=256)\n", "\n", "test_data = keras.preprocessing.sequence.pad_sequences(test_data,\n", " value=word_index[\"\"],\n", " padding='post',\n", " maxlen=256)" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "VO5MBpyQdipD", "colab_type": "text" }, "cell_type": "markdown", "source": [ "Let's look at the length of the examples now:" ] }, { "metadata": { "id": "USSSBnkE-lky", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "len(train_data[0]), len(train_data[1])" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "QJoxZGyfjT5V", "colab_type": "text" }, "cell_type": "markdown", "source": [ "And inspect the (now padded) first review:" ] }, { "metadata": { "id": "TG8X9cqi-lk9", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "print(train_data[0])" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "LLC02j2g-llC", "colab_type": "text" }, "cell_type": "markdown", "source": [ "## Build the model\n", "\n", "The neural network is created by stacking layers—this requires two main architectural decisions:\n", "\n", "* How many layers to use in the model?\n", "* How many *hidden units* to use for each layer?\n", "\n", "In this example, the input data consists of an array of word-indices. The labels to predict are either 0 or 1. Let's build a model for this problem:" ] }, { "metadata": { "id": "xpKOoWgu-llD", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "# input shape is the vocabulary count used for the movie reviews (10,000 words)\n", "vocab_size = 10000\n", "\n", "model = keras.Sequential()\n", "model.add(keras.layers.Embedding(vocab_size, 16))\n", "model.add(keras.layers.GlobalAveragePooling1D())\n", "model.add(keras.layers.Dense(16, activation=tf.nn.relu))\n", "model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))\n", "\n", "model.summary()" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "6PbKQ6mucuKL", "colab_type": "text" }, "cell_type": "markdown", "source": [ "The layers are stacked sequentially to build the classifier:\n", "\n", "1. The first layer is an `Embedding` layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: `(batch, sequence, embedding)`.\n", "2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.\n", "3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.\n", "4. The last layer is densely connected with a single output node. 
{ "metadata": { "id": "0XMwnDOp-llH", "colab_type": "text" }, "cell_type": "markdown", "source": [ "### Hidden units\n", "\n", "The above model has two intermediate or \"hidden\" layers, between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation.\n", "\n", "If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, this makes the network more computationally expensive and may lead to learning unwanted patterns—patterns that improve performance on training data but not on the test data. This is called *overfitting*, and we'll explore it later." ] }, { "metadata": { "id": "L4EqVWg4-llM", "colab_type": "text" }, "cell_type": "markdown", "source": [ "### Loss function and optimizer\n", "\n", "A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function. \n", "\n", "This isn't the only choice for a loss function; you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities—it measures the \"distance\" between probability distributions, or in our case, between the ground-truth distribution and the predictions.\n", "\n", "Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error.\n", "\n", "Now, configure the model to use an optimizer and a loss function:" ] }, { "metadata": { "id": "Mr0GP-cQ-llN", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "model.compile(optimizer=tf.train.AdamOptimizer(),\n", "              loss='binary_crossentropy',\n", "              metrics=['accuracy'])" ], "execution_count": 0, "outputs": [] },
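{ "metadata": {}, "cell_type": "markdown", "source": [ "To build some intuition for `binary_crossentropy`, here is a small worked example (added for illustration; the helper name is our own). For a true label `y` and a predicted probability `p`, the loss for one example is `-(y*log(p) + (1-y)*log(1-p))`, so confident wrong predictions are penalized heavily:" ] }, { "metadata": {}, "cell_type": "code", "source": [ "# Binary cross-entropy for a single (label, predicted-probability) pair\n", "def binary_crossentropy_example(y_true, p):\n", "    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))\n", "\n", "# A confident correct prediction has a small loss...\n", "print(binary_crossentropy_example(1, 0.95))  # about 0.05\n", "# ...while a confident wrong prediction has a large loss\n", "print(binary_crossentropy_example(1, 0.05))  # about 3.0" ], "execution_count": 0, "outputs": [] },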
{ "metadata": { "id": "hCWYwkug-llQ", "colab_type": "text" }, "cell_type": "markdown", "source": [ "## Create a validation set\n", "\n", "When training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate our accuracy.)" ] }, { "metadata": { "id": "-NpcXY9--llS", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "x_val = train_data[:10000]\n", "partial_x_train = train_data[10000:]\n", "\n", "y_val = train_labels[:10000]\n", "partial_y_train = train_labels[10000:]" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "35jv_fzP-llU", "colab_type": "text" }, "cell_type": "markdown", "source": [ "## Train the model\n", "\n", "Train the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the `partial_x_train` and `partial_y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:" ] }, { "metadata": { "id": "tXSGrjWZ-llW", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "history = model.fit(partial_x_train,\n", "                    partial_y_train,\n", "                    epochs=40,\n", "                    batch_size=512,\n", "                    validation_data=(x_val, y_val),\n", "                    verbose=1)" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "9EEGuDVuzb5r", "colab_type": "text" }, "cell_type": "markdown", "source": [ "## Evaluate the model\n", "\n", "Let's see how the model performs on the test set. Two values are returned: the loss (a number representing the error; lower is better) and the accuracy." ] }, { "metadata": { "id": "zOMKywn4zReN", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "results = model.evaluate(test_data, test_labels)\n", "\n", "print(results)" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "z1iEXVTR0Z2t", "colab_type": "text" }, "cell_type": "markdown", "source": [ "This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%." ] },
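{ "metadata": {}, "cell_type": "markdown", "source": [ "Since the model ends in a single sigmoid unit, it outputs one probability per review. As an added illustration, we can inspect the raw predictions for a few test reviews; values near 1 mean the model is confident the review is positive:" ] }, { "metadata": {}, "cell_type": "code", "source": [ "# Predicted probability of a positive review for the first three test examples\n", "model.predict(test_data[:3])" ], "execution_count": 0, "outputs": [] },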
{ "metadata": { "id": "5KggXVeL-llZ", "colab_type": "text" }, "cell_type": "markdown", "source": [ "## Create a graph of accuracy and loss over time\n", "\n", "`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:" ] }, { "metadata": { "id": "VcvSXvhp-llb", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "history_dict = history.history\n", "history_dict.keys()" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "nRKsqL40-lle", "colab_type": "text" }, "cell_type": "markdown", "source": [ "There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:" ] }, { "metadata": { "id": "nGoYf2Js-lle", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "import matplotlib.pyplot as plt\n", "\n", "acc = history.history['acc']\n", "val_acc = history.history['val_acc']\n", "loss = history.history['loss']\n", "val_loss = history.history['val_loss']\n", "\n", "epochs = range(1, len(acc) + 1)\n", "\n", "# \"bo\" is for \"blue dot\"\n", "plt.plot(epochs, loss, 'bo', label='Training loss')\n", "# \"b\" is for \"solid blue line\"\n", "plt.plot(epochs, val_loss, 'b', label='Validation loss')\n", "plt.title('Training and validation loss')\n", "plt.xlabel('Epochs')\n", "plt.ylabel('Loss')\n", "plt.legend()\n", "\n", "plt.show()" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "6hXx-xOv-llh", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "plt.clf()   # clear the figure\n", "\n", "plt.plot(epochs, acc, 'bo', label='Training acc')\n", "plt.plot(epochs, val_acc, 'b', label='Validation acc')\n", "plt.title('Training and validation accuracy')\n", "plt.xlabel('Epochs')\n", "plt.ylabel('Accuracy')\n", "plt.legend()\n", "\n", "plt.show()" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "oFEmZ5zq-llk", "colab_type": "text" }, "cell_type": "markdown", "source": [ "\n", "In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.\n", "\n", "Notice the training loss *decreases* with each epoch and the training accuracy *increases* with each epoch. This is expected when using a gradient descent optimization—it should minimize the desired quantity on every iteration.\n", "\n", "This isn't the case for the validation loss and accuracy—they seem to peak after about twenty epochs. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations *specific* to the training data that do not *generalize* to test data.\n", "\n", "For this particular case, we could prevent overfitting by simply stopping the training after twenty or so epochs. Later, you'll see how to do this automatically with a callback; a minimal sketch follows below." ] },
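{ "metadata": {}, "cell_type": "markdown", "source": [ "As a preview, here is a minimal sketch (added for illustration, with the training call left commented out so it doesn't retrain the already-trained model above) of how `keras.callbacks.EarlyStopping` can stop training automatically once the validation loss stops improving:" ] }, { "metadata": {}, "cell_type": "code", "source": [ "# Stop training when the validation loss has not improved for 2 epochs\n", "early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)\n", "\n", "# Example usage on a freshly built and compiled copy of the model:\n", "# model.fit(partial_x_train, partial_y_train,\n", "#           epochs=40, batch_size=512,\n", "#           validation_data=(x_val, y_val),\n", "#           callbacks=[early_stop], verbose=1)" ], "execution_count": 0, "outputs": [] } ] }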