{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Recurrent Neural Networks: Character RNNs with Keras\n", "\n", "Often we are not interested in isolated datapoints, but rather datapoints within a context of others. A datapoint may mean something different depending on what's come before it. This can typically be represented as some kind of _sequence_ of datapoints, perhaps the most common of which is a time series.\n", "\n", "One of the most ubiquitous sequences of data where context is especially important is natural language. We have quite a few words in English where the meaning of a word may be totally different depending on it's context. An innocuous example of this is \"bank\": \"I went fishing down by the river bank\" vs \"I deposited some money into the bank\".\n", "\n", "If we consider that each word is a datapoint, most non-recurrent methods will treat \"bank\" in the first sentence exactly the same as \"bank\" in the second sentence - they are indistinguishable. If you think about it, in isolation they are indistinguishable to us as well - it's the same word!\n", "\n", "We can only start to discern them when we consider the previous word (or words). So we might want our neural network to consider that \"bank\" in the first sentence is preceded by \"river\" and that in the second sentence \"money\" comes a few words before it. That's basically what RNNs do - they \"remember\" some of the previous context and that influences the output it produces. This \"memory\" (called the network's \"_hidden state_\") works by retaining some of the previous outputs and combining it with the current input; this recursing (feedback) of the network's output back into itself is where its name comes from.\n", "\n", "This recursing makes RNNs quite deep, and thus they can be difficult to train. The gradient gets smaller and smaller the deeper it is pushed backwards through the network until it \"vanishes\" (effectively becomes zero), so long-term dependencies are hard to learn. The typical practice is to only extend the RNN back a certain number of time steps so the network is still trainable.\n", "\n", "Certain units, such as the LSTM (long short-term memory) and GRU (gated recurrent unit), have been developed to mitigate some of this vanishing gradient effect.\n", "\n", "Let's walkthrough an example of a character RNN, which is a great approach for learning a character-level language model. A language model is essentially some function which returns a probability over possible words (or in this case, characters), based on what has been seen so far. This function can vary from region to region (e.g. if terms like \"pop\" are used more commonly than \"soda\") or from person to person. You could say that a (good) language model captures the style in which someone writes.\n", "\n", "Language models often must make the simplifying assumption that only what came immediately (one time step) before matters (this is called the \"Markov assumption\"), but with RNNs we do not need to make such an assumption.\n", "\n", "We'll use Keras which makes building neural networks extremely easy (this example is an annotated version of Keras's [LSTM text generation example](https://github.com/fchollet/keras/blob/master/examples/lstm_text_generation.py)).\n", "\n", "First we'll do some simple preparation - import the classes we need and load up the text we want to learn from." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using Theano backend.\n", "WARNING (theano.sandbox.cuda): The cuda backend is deprecated and will be removed in the next release (v0.10). Please switch to the gpuarray backend. You can get more information about how to switch at this URL:\n", " https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29\n", "\n", "Using gpu device 0: GeForce GTX TITAN X (CNMeM is disabled, cuDNN 5103)\n" ] } ], "source": [ "import os\n", "\n", "#if using Theano with GPU\n", "#os.environ[\"THEANO_FLAGS\"] = \"mode=FAST_RUN,device=gpu,floatX=float32\"\n", "\n", "import random\n", "import numpy as np\n", "from glob import glob\n", "from keras.models import Sequential\n", "from keras.layers.recurrent import LSTM\n", "from keras.layers.core import Dense, Activation, Dropout\n", "\n", "# load up our text\n", "text_files = glob('../data/sotu/*.txt')\n", "text = '\\n'.join([open(f, 'r').read() for f in text_files])\n", "\n", "# extract all (unique) characters\n", "# these are our \"categories\" or \"labels\"\n", "chars = list(set(text))\n", "\n", "# set a fixed vector size\n", "# so we look at specific windows of characters\n", "max_len = 20" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we'll define our RNN. Keras makes this trivial:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": true }, "outputs": [], "source": [ "model = Sequential()\n", "model.add(LSTM(512, return_sequences=True, input_shape=(max_len, len(chars))))\n", "model.add(Dropout(0.2))\n", "model.add(LSTM(512, return_sequences=False))\n", "model.add(Dropout(0.2))\n", "model.add(Dense(len(chars)))\n", "model.add(Activation('softmax'))\n", "model.compile(loss='categorical_crossentropy', optimizer='rmsprop')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We're framing our task as a classification task. Given a sequence of characters, we want to predict the next character. We equate each character with some label or category (e.g. \"a\" is 0, \"b\" is 1, etc).\n", "\n", "We use the _softmax_ activation function on our output layer - this function is used for categorical output. It turns the output into a probability distribution over the categories (i.e. it makes the values the network outputs sum to 1). So the network will essentially tell us how strongly it feels about each character being the next one.\n", "\n", "The categorical cross-entropy loss the standard loss function for multilabel classification, which basically penalizes the network more the further off it is from the correct label.\n", "\n", "We use dropout here to prevent overfitting - we don't want the network to just return things already in the text, we want it to have some wiggle room and create novelty! Dropout is a technique where, in training, some percent (here, 20%) of random neurons of the associated layer are \"turned off\" for that epoch. This prevents overfitting by preventing the network from relying on particular neurons.\n", "\n", "That's it for the network architecture!\n", "\n", "To train, we have to do some additional preparation. We need to chop up the text into character sequences of the length we specified (`max_len`) - these are our training inputs. We match them with the character that immediately follows each sequence. 
These are our expected training outputs.\n", "\n", "For example, say we have the following text (this quote is from Zhuang Zi). With `max_len=20`, we could manually create the first couple of training examples like so:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": true }, "outputs": [], "source": [ "example_text = \"The fish trap exists because of the fish. Once you have gotten the fish you can forget the trap. The rabbit snare exists because of the rabbit. Once you have gotten the rabbit, you can forget the snare. Words exist because of meaning. Once you have gotten the meaning, you can forget the words. Where can I find a man who has forgotten words so that I may have a word with him?\"\n", "\n", "# step size here is 3, but we can vary that\n", "input_1 = example_text[0:20]\n", "true_output_1 = example_text[20]\n", "# >>> 'The fish trap exists'\n", "# >>> ' '\n", "\n", "input_2 = example_text[3:23]\n", "true_output_2 = example_text[23]\n", "# >>> ' fish trap exists be'\n", "# >>> 'c'\n", "\n", "input_3 = example_text[6:26]\n", "true_output_3 = example_text[26]\n", "# >>> 'sh trap exists becau'\n", "# >>> 's'\n", "\n", "# etc" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can generalize this like so:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "step = 3\n", "inputs = []\n", "outputs = []\n", "for i in range(0, len(text) - max_len, step):\n", "    inputs.append(text[i:i+max_len])\n", "    outputs.append(text[i+max_len])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also need to map each character to a label and create a reverse mapping to use later:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [], "source": [ "char_labels = {ch:i for i, ch in enumerate(chars)}\n", "labels_char = {i:ch for i, ch in enumerate(chars)}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can start constructing our numerical input 3-tensor and output matrix. Each input example (i.e. 
a sequence of characters) is turned into a matrix of one-hot vectors; that is, a bunch of vectors where the index corresponding to the character is set to 1 and all the rest are set to zero.\n", "\n", "For example, if we have the following:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[[0, 0, 1, 0, 0],\n", " [1, 0, 0, 0, 0],\n", " [0, 1, 0, 0, 0],\n", " [0, 0, 0, 0, 1],\n", " [0, 0, 0, 1, 0],\n", " [1, 0, 0, 0, 0],\n", " [0, 1, 0, 0, 0]]" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# assuming max_len = 7\n", "# so our examples have 7 characters\n", "example = 'cab dab'\n", "example_char_labels = {\n", "    'a': 0,\n", "    'b': 1,\n", "    'c': 2,\n", "    'd': 3,\n", "    ' ': 4\n", "}\n", "\n", "# matrix form\n", "# the example uses only five kinds of characters,\n", "# so the vectors only need to have five components,\n", "# and since the input phrase has seven characters,\n", "# the matrix has seven vectors.\n", "[\n", "    [0, 0, 1, 0, 0], # c\n", "    [1, 0, 0, 0, 0], # a\n", "    [0, 1, 0, 0, 0], # b\n", "    [0, 0, 0, 0, 1], # (space)\n", "    [0, 0, 0, 1, 0], # d\n", "    [1, 0, 0, 0, 0], # a\n", "    [0, 1, 0, 0, 0]  # b\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That matrix represents a _single_ training example, so for our full set of training examples, we'd have a stack of those matrices (hence a 3-tensor).\n", "\n", "![A 3-tensor of training examples](../assets/rnn_3tensor.png)\n", "\n", "And the output for each example is a one-hot vector (i.e. a single character). With that in mind:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# using bool to reduce memory usage\n", "X = np.zeros((len(inputs), max_len, len(chars)), dtype=np.bool)\n", "y = np.zeros((len(inputs), len(chars)), dtype=np.bool)\n", "\n", "# set the appropriate indices to 1 in each one-hot vector\n", "for i, example in enumerate(inputs):\n", "    for t, char in enumerate(example):\n", "        X[i, t, char_labels[char]] = 1\n", "    y[i, char_labels[outputs[i]]] = 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have our training data, we can start training. 
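\n", "\n", "As a quick sanity check on the tensors we just built (optional - the first dimension depends on your corpus, so treat these shapes as indicative):\n", "\n", "```python\n", "print(X.shape)  # (number of examples, max_len, len(chars))\n", "print(y.shape)  # (number of examples, len(chars))\n", "```\n", "\n", "This matches the `input_shape=(max_len, len(chars))` that the first LSTM layer was given above.\n", "\n", "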
Keras also makes this easy:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# more epochs is usually better, but training can be very slow if not on a GPU\n", "epochs = 10\n", "model.fit(X, y, batch_size=128, epochs=epochs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's much more fun to see your network's ramblings as it's training, so let's write a function to produce text from the network:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def generate(temperature=0.35, seed=None, num_chars=100):\n", "    predicate = lambda x: len(x) < num_chars\n", "\n", "    if seed is not None:\n", "        if len(seed) < max_len:\n", "            raise Exception('Seed text must be at least {} chars long'.format(max_len))\n", "\n", "    # if no seed text is specified, randomly select a chunk of text\n", "    else:\n", "        start_idx = random.randint(0, len(text) - max_len - 1)\n", "        seed = text[start_idx:start_idx + max_len]\n", "\n", "    # the network conditions on only max_len characters,\n", "    # so feed it the tail of the seed\n", "    sentence = seed[-max_len:]\n", "    generated = seed\n", "\n", "    while predicate(generated):\n", "        # generate the input tensor\n", "        # from the last max_len characters generated so far\n", "        x = np.zeros((1, max_len, len(chars)))\n", "        for t, char in enumerate(sentence):\n", "            x[0, t, char_labels[char]] = 1.\n", "\n", "        # this produces a probability distribution over characters\n", "        probs = model.predict(x, verbose=0)[0]\n", "\n", "        # sample the character to use based on the predicted probabilities\n", "        next_idx = sample(probs, temperature)\n", "        next_char = labels_char[next_idx]\n", "\n", "        generated += next_char\n", "        sentence = sentence[1:] + next_char\n", "    return generated\n", "\n", "def sample(probs, temperature):\n", "    \"\"\"samples an index from a vector of probabilities\n", "    (this is not the most efficient way but is more robust)\"\"\"\n", "    a = np.log(probs)/temperature\n", "    dist = np.exp(a)/np.sum(np.exp(a))\n", "    choices = range(len(probs))\n", "    return np.random.choice(choices, p=dist)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The _temperature_ controls how random we want the network to be. Lower temperatures favor more likely values, whereas higher temperatures introduce more and more randomness. At a high enough temperature, values will be chosen essentially at random.\n", "\n", "With this generation function we can modify how we train the network so that we see some output at each step:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "epoch 0\n", "Epoch 1/1\n", "980939/980939 [==============================] - 990s - loss: 1.6563 \n", "temperature: 0.20\n", "ons, neutrals and all of the progress of the first to the first to develop the country and the fisca\n", "temperature: 0.50\n", "that weren't, and people who for every can be a standards of American people because the community f\n", "temperature: 1.00\n", "would have created the somether you whan alls. Employment. 28 the health in the final changes to mee\n", "temperature: 1.20\n", "ure we pass a reasonk the Nation-\n", "But our stenfityen the efforts to sif, Mrkinn-; have To cave defen\n", "epoch 1\n", "Epoch 1/1\n", "980939/980939 [==============================] - 1017s - loss: 1.2809 \n", "temperature: 0.20\n", " we should not be as a major the construction of the continuing of the security of the productive of\n", "temperature: 0.50\n", ". 
This work must now have been recommended to protect the prompt and the same time, and to continue \n", "temperature: 1.00\n", "ot whether we shrink on the world, and when the range put a school as rises. The difficult activitie\n", "temperature: 1.20\n", " over his head and what haw the curdition of the President officed year lessing the had suching that\n", "epoch 2\n", "Epoch 1/1\n", "980939/980939 [==============================] - 1252s - loss: 1.2134 \n", "temperature: 0.20\n", " of Americans will be successfully and the program of the Congress and the Congress to respect the p\n", "temperature: 0.50\n", "f low-income families, and we have already been sever the world. It should be able to protect the mi\n", "temperature: 1.00\n", "ighbors who cannot to meet t gorus and our relief for a new 11rian deprivious with vigorously will s\n", "temperature: 1.20\n", "lity in this critical relatio bodden many of classt. They say. Thkreary school loun burge will creat\n", "epoch 3\n", "Epoch 1/1\n", "980939/980939 [==============================] - 1251s - loss: 1.1781 \n", "temperature: 0.20\n", "same cooperation and the control of the world we have to do the control of the control of the task o\n", "temperature: 0.50\n", "t of the hands of victory is the beginning of a bank to strengthening our children's security and ne\n", "temperature: 1.00\n", "egislation now in continued law, and whatever xeeping Federal Gulf to this Union wor that important \n", "temperature: 1.20\n", "Last year, I called such implementation. The organ - budget Amendmentmred task. \n", "And everywhere ere\n", "epoch 4\n", "Epoch 1/1\n", "980939/980939 [==============================] - 1245s - loss: 1.1542 \n", "temperature: 0.20\n", "cooperation with one more people will be the first time we have a security of the first time the exp\n", "temperature: 0.50\n", "ards and rising nations to the rest of the future of the most family of unity on the first commitmen\n", "temperature: 1.00\n", "and day.I shall not make the past year, batter years the was not aid, money, or helping new world fo\n", "temperature: 1.20\n", ", and our farms. \n", "But notconsuprior watening, more others are now given with America nosuccied are \n", "epoch 5\n", "Epoch 1/1\n", "980939/980939 [==============================] - 1243s - loss: 1.1368 \n", "temperature: 0.20\n", "uture. Support the same time to serve the first time to provide a stable program of the world we hav\n", "temperature: 0.50\n", "all civilian war workers and which now will be prepared to see the large of the same time we have a \n", "temperature: 1.00\n", "s.\n", "It is my conviction. 
In good life for the International \n", "but only payment around the world can be\n", "temperature: 1.20\n", "mic growth in the thousand of us would foint on relationship by clear major nuges of money of ;orn w\n", "epoch 6\n", "Epoch 1/1\n", "980939/980939 [==============================] - 1246s - loss: 1.1228 \n", "temperature: 0.20\n", "ing their demands for the world who would also be able to be able to provide a strong program will b\n", "temperature: 0.50\n", "ey could be allowed to be the past year, the tax cut to the country of a interests in the great serv\n", "temperature: 1.00\n", " the United States is a values of world tax cuts and to our national economies, conviction, and foe \n", "temperature: 1.20\n", "ncroached on one of the free people the future.\n", "To it to violence in the care of publicn jobs is a f\n", "epoch 7\n", "Epoch 1/1\n", "980939/980939 [==============================] - 1246s - loss: 1.1126 \n", "temperature: 0.20\n", "agement and the result of the prosperity of the people of the people of the people of the world. \n", "Th\n", "temperature: 0.50\n", ". \n", "This program is important to a strong program to develop the company of our people will be provid\n", "temperature: 1.00\n", " 36 days, and 2 days for agriculture of the $15 billion depending these great ststen our in both cau\n", "temperature: 1.20\n", "rican. \n", "The executive breakdys of but sourcess business.\n", "\n", "epecution irprovement is terrible.\n", "She pro\n", "epoch 8\n", "Epoch 1/1\n", "980939/980939 [==============================] - 1245s - loss: 1.1039 \n", "temperature: 0.20\n", "reat to the security of the enemies of the world. The strength of our country. \n", "And we must not be a\n", "temperature: 0.50\n", "h I'll announce very deficit in the country now are on the safety to national resources and the lead\n", "temperature: 1.00\n", "gely of payments to anything creritify, reveil and sufferonous nearly new generous use government an\n", "temperature: 1.20\n", "an pay us greater difterbny democracy--has sufficienty reulting - triticH in managing ways. I need a\n", "epoch 9\n", "Epoch 1/1\n", "980939/980939 [==============================] - 1248s - loss: 1.0954 \n", "temperature: 0.20\n", "e to cut harmful emission that we are the first time that we can succeed to the expenditures of the \n", "temperature: 0.50\n", "hat will lead to last year, the Soviet Union. \n", "So I recommend that the support of the world is the o\n", "temperature: 1.00\n", " the youth of our Nation's necessity for spirit prants would have had have greatly edgatting the ext\n", "temperature: 1.20\n", " the organization of our Administration to turn more faiile to reduce only Now is conductiongages, P\n" ] } ], "source": [ "epochs = 10\n", "for i in range(epochs):\n", "    print('epoch %d'%i)\n", "\n", "    # set epochs to 1 since we're iterating manually\n", "    # comment this out if you just want to generate text\n", "    model.fit(X, y, batch_size=128, epochs=1)\n", "\n", "    # preview\n", "    for temp in [0.2, 0.5, 1., 1.2]:\n", "        print('temperature: %0.2f'%temp)\n", "        print('%s'%generate(temperature=temp))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's about all there is to it. Let's try to generate one long sample passage with 2000 characters. We'll arbitrarily pick a temperature of 0.4, which seems to work decently well -- enough randomness without being incoherent. 
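\n", "\n", "If you want a feel for how temperature reshapes a distribution before committing to a value, you can replicate the rescaling that `sample` does on some toy probabilities (the numbers here are made up for illustration):\n", "\n", "```python\n", "probs = np.array([0.5, 0.3, 0.15, 0.05])\n", "for temp in [0.2, 0.5, 1.0, 1.2]:\n", "    rescaled = np.exp(np.log(probs) / temp)\n", "    rescaled /= np.sum(rescaled)\n", "    print('%s %s' % (temp, rescaled.round(3)))\n", "\n", "# low temperatures pile probability onto the most likely character,\n", "# temp=1.0 leaves the distribution unchanged,\n", "# and higher temperatures flatten it toward uniform\n", "```\n", "\n", "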
We'll also give it a seed this time (starting text): \"Today, we are facing an important challenge\"" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Council to convene our policies are all of us have been a promise of the last year of our people who have the first time that we have one of them meet the same program of the world. \n", "I have held the strongest nations to provide a new strategy of the past year, and we should start the public schools and well-being, and the security of the people of the Congress to maintain and a program to maintain the future of the world, and the actions of the lines of the world about the many of the price structure of the people of the situation to continue the first time that the Congress must be proud of the buildings of the operation of the area of the first process of the world. \n", "I cannot retain the problem of the fact of the demand of the course of the world. And the program of proposals will be made to the international scientific and police buildings of the loss of the world. \n", "I have proposed that no world that we must stand by a second region of the first time that the excessive and the same spending to our faith in the entire period of such a time that will also deserve the cause of the future of the future. \n", "I have the same time to strengthen the world that the present price and promote a stabilization and industrial programs that will see the future of the strong state of the United States that we have done a great part of the world. \n", "And the support of the freedom of the people. \n", "The process of the same time, the responsibilities that we can be a first contribution to the superiority of our country, some of the citizens of the world. It is the summer of our strength of the world. I have the many children and the economy that we can not be the deficit in the next four years. \n", "All people are going to the institutions and to our people are the committee of the right thing to maintain the energy state of the world. I have already in a construction of the process of our own people who have been the strongest programs of our country in the first time that is a consummer that\n" ] } ], "source": [ "print('%s' % generate(temperature=0.4, seed='Today, we are facing an important challenge.', num_chars=2000))" ] } ], "metadata": { "kernelspec": { "display_name": "Python [Root]", "language": "python", "name": "Python [Root]" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 1 }