{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 用numpy实现RNN\n", "本文针对上一个笔记[BPTT算法数学推导](./BackPropagation Through Time.ipynb),用numpy实现了一个RNN语言模型,实验数据为Google's BigQuery上下载的longish reddit 评论,我们要用RNN来生成类似reddit的评论。大部分代码参考自wildML,在其基础上进行了包装,并融入了自己的一些理解,力求写的通俗易懂。 \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 读取依赖包" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import csv\n", "import nltk\n", "import itertools\n", "import numpy as np\n", "import os\n", "import matplotlib.pyplot as plt\n", "import time\n", "import operator" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 定义全局变量" ] }, { "cell_type": "code", "execution_count": 40, "metadata": { "collapsed": true }, "outputs": [], "source": [ "_VOCABULARY_SIZE = int(os.environ.get('VOCABULARY_SIZE', '8000'))\n", "_HIDDEN_SIZE = int(os.environ.get('_HIDDEN_SIZE', '80'))\n", "_NBPTT_TRUNCATE = int(os.environ.get('NBPTT_TRUNCATE', '4'))\n", "_NEPOCH = int(os.environ.get('NEPOCH', '10'))\n", "\n", "vocabulary_size = _VOCABULARY_SIZE\n", "unknown_token = \"UNKNOWN_TOKEN\"\n", "sentence_start_token = \"SENTENCE_START\"\n", "sentence_end_token = \"SENTENCE_END\" " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 读取语料库\n", "首先是读取语料库的函数,这里直接摘抄自wildML的代码,这段代码写的十分精巧,将许多功能浓缩在一行,实在是令我叹为观止。" ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loading CSV file...\n", "Parsed 79170 sentences.\n" ] } ], "source": [ "'''\n", "读取语料库\n", "'''\n", "def load_corpus():\n", " print 'Loading CSV file...'\n", " with open('reddit-comments-2015-08.csv', 'rb') as f:\n", " reader = csv.reader(f)\n", " # 跳过表头\n", " reader.next()\n", " # 将整篇文档分成句子的列表\n", " sentences = itertools.chain(*[nltk.sent_tokenize(x[0].decode('utf-8').lower()) for x in reader])\n", " # 为每句话加上 SENTENCE_START 和 SENTENCE_END 符号\n", " sentences = [\"%s %s %s\" % (sentence_start_token, x, sentence_end_token) for x in sentences]\n", " print 'Parsed %d sentences.'%len(sentences)\n", " return sentences\n", "\n", "# 读取语料库\n", "sentences = load_corpus()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "将文档拆成句子列表的这一句话写的非常geek,也非常难懂\n", "```py\n", "sentences = itertools.chain(*[nltk.sent_tokenize(x[0].decode('utf-8').lower()) for x in reader])\n", "```\n", "我来分析一下这句话干了哪些事情:\n", "1. 调用nltk的`sent_tokenize`对段落x[0]分句,将其转变为句子的列表,x[0]表示csv文件里一行的内容\n", "2. 用列表生成式遍历csv文件每一行,生成句子列表的嵌套列表\n", "3. 用星号`*`对这个嵌套列表解包,将其变为一个一个的列表\n", "4. 用`chain()`函数将这些列表合并为一个列表 \n", "\n", "`chain()`函数的定义如下:\n", "\n", "```py\n", "def chain(*iterables):\n", " # chain('ABC', 'DEF') --> A B C D E F\n", " for it in iterables:\n", " for element in it:\n", " yield element\n", "``` " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "它的作用是将多个迭代器作为参数, 但只返回单个迭代器, 它产生所有参数迭代器的内容, 就好像他们是来自于一个单一的序列。 \n", "最后再用一个列表生成式为每一句话加上起始符和结束符。\n", "```py\n", "sentences = [\"%s %s %s\" % (sentence_start_token, x, sentence_end_token) for x in sentences]\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 文本的预处理\n", "文本预处理主要完成了:\n", "1. 对句子的分词\n", "2. 统计词频\n", "3. 获取高频词,建立索引\n", "4. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Text preprocessing\n", "Preprocessing does the following:\n", "1. Tokenize each sentence\n", "2. Count word frequencies\n", "3. Keep the most frequent words and build the index\n", "4. Replace out-of-vocabulary words" ] },
{ "cell_type": "code", "execution_count": 42, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Preprocessing...\n", "Found 65751 unique words tokens.\n", "Using vocabulary size 8000.\n", "The least frequent word in our vocabulary is 'devoted' and appeared 10 times.\n" ] } ], "source": [ "'''\n", "Preprocess the text\n", "'''\n", "def preprocessing(sentences):\n", "    print 'Preprocessing...'\n", "    # Tokenize each sentence into words\n", "    tokenized_sentences = [nltk.word_tokenize(sent) for sent in sentences]\n", "    # Count word frequencies\n", "    word_freq = nltk.FreqDist(itertools.chain(*tokenized_sentences))\n", "    print \"Found %d unique words tokens.\" % len(word_freq.items())\n", "    # Keep the most frequent words and build the index-to-word and word-to-index mappings\n", "    vocab = word_freq.most_common(vocabulary_size-1)\n", "    index_to_word = [x[0] for x in vocab]\n", "    index_to_word.append(unknown_token)\n", "    word_to_index = dict([(w,i) for i,w in enumerate(index_to_word)])\n", "\n", "    print \"Using vocabulary size %d.\" % vocabulary_size\n", "    print \"The least frequent word in our vocabulary is '%s' and appeared %d times.\" % (vocab[-1][0], vocab[-1][1])\n", "    # Replace all words not in the vocabulary with the unknown token\n", "    for i, sent in enumerate(tokenized_sentences):\n", "        tokenized_sentences[i] = [w if w in word_to_index else unknown_token for w in sent]\n", "    return tokenized_sentences, word_to_index, index_to_word\n", "\n", "# Preprocess the corpus\n", "tokenized_sentences, word_to_index, index_to_word = preprocessing(sentences)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "1. Tokenizing the sentences\n", "`nltk.word_tokenize(sent)` splits the input sentence into words and returns the tokens as a list. `tokenized_sentences` is therefore a nested list whose elements are the token lists of the individual sentences.\n", "2. Counting word frequencies\n", "First `*` unpacks `tokenized_sentences` into separate lists, then `itertools.chain()` joins them into a single list, which is passed to `nltk.FreqDist()`. The result behaves like a dictionary whose keys are the words and whose values are their frequencies.\n", "3. Keeping the most frequent words and building the index\n", "`most_common` returns the `vocabulary_size-1` most frequent words. The return value `vocab` is a list of tuples of the form:\n", "```py\n", "[(',', 18713), ('the', 13721), ('.', 6862), ('of', 6536), ('and', 6024),\n", "('a', 4569), ('to', 4542), (';', 4072), ('in', 3916), ('that', 2982)]\n", "```\n", "We then walk over this list, pull the word out of each tuple to form a separate list, and append unknown_token at the end to stand for out-of-vocabulary words:\n", "```py\n", "index_to_word = [x[0] for x in vocab]\n", "```\n", "Next we build the word index. `enumerate()` takes a list and produces a running index for each of its elements. A rather naive way of writing this would be:\n", "\n", "```py\n", "i = 0\n", "for item in iterable:\n", "    print i, item\n", "    i += 1\n", "```" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "With enumerate the code simplifies to:\n", "\n", "```py\n", "for i, item in enumerate(iterable):\n", "    print i, item\n", "```" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We then build a list of (word, index) tuples and use it to construct the dict `word_to_index`.\n", "\n", "4. Replacing out-of-vocabulary words\n", "This uses a slightly more advanced form of list comprehension:\n", "```py\n", "[w if w in word_to_index else unknown_token for w in sent]\n", "```\n", "A conditional expression inside the comprehension computes each element on the fly: if `w` appears in the index dictionary it is kept unchanged, otherwise it is replaced by `unknown_token`." ] },
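{ "cell_type": "markdown", "metadata": {}, "source": [ "The whole vocabulary-building pipeline can be traced on a tiny hand-made token list. This is an illustrative sketch only: the toy sentences and the orderings shown in the comments are assumptions, not output of this notebook.\n", "```py\n", "import nltk, itertools\n", "\n", "toy_sentences = [['i', 'like', 'cats'], ['i', 'like', 'dogs']]\n", "freq = nltk.FreqDist(itertools.chain(*toy_sentences))\n", "vocab = freq.most_common(3)                    # e.g. [('i', 2), ('like', 2), ('cats', 1)]\n", "index_to_word = [w for w, _ in vocab] + ['UNKNOWN_TOKEN']\n", "word_to_index = dict((w, i) for i, w in enumerate(index_to_word))\n", "print word_to_index                            # e.g. {'i': 0, 'like': 1, 'cats': 2, 'UNKNOWN_TOKEN': 3}\n", "```" ] },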
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Generating the training data\n", "The previous step gave us the tokenized sentences `tokenized_sentences` and the word index `word_to_index`; in this section we use them to build the training set." ] },
{ "cell_type": "code", "execution_count": 43, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Generating train data...\n", "Generated 79170 training instances.\n", "Example input: [0, 6, 3513, 7, 155, 794, 25, 223, 8, 32, 20, 202, 5025, 350, 91, 6, 66, 207, 5, 2]\n", "Example output: [6, 3513, 7, 155, 794, 25, 223, 8, 32, 20, 202, 5025, 350, 91, 6, 66, 207, 5, 2, 1]\n" ] } ], "source": [ "'''\n", "Generate the training data\n", "'''\n", "def gen_train_data(tokenized_sentences, word_to_index):\n", "    print 'Generating train data...'\n", "    # Build the training data\n", "    X_train = np.asarray([[word_to_index[w] for w in sent[:-1]] for sent in tokenized_sentences])\n", "    y_train = np.asarray([[word_to_index[w] for w in sent[1:]] for sent in tokenized_sentences])\n", "    print \"Generated %d training instances.\" % X_train.shape[0]\n", "    print \"Example input: %s\" % X_train[0]\n", "    print \"Example output: %s\" % y_train[0]\n", "    return X_train, y_train\n", "\n", "# Generate the training data\n", "X_train, y_train = gen_train_data(tokenized_sentences, word_to_index)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "`gen_train_data()` takes the tokenized sentences and the word index produced by the preprocessing step and turns them into the training set. `X_train` and `y_train` are both two-dimensional numpy arrays, built with list comprehensions that contain two nested loops:\n", "```py\n", "    X_train = np.asarray([[word_to_index[w] for w in sent[:-1]] for sent in tokenized_sentences])\n", "    y_train = np.asarray([[word_to_index[w] for w in sent[1:]] for sent in tokenized_sentences])\n", "```" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The outer loop walks over the token sequence of each sentence, and the inner loop converts every token in the sequence into its index.\n", "Note that `sent[:-1]` corresponds to `[sentence_start_token x ]` and `sent[1:]` corresponds to `[x sentence_end_token]`. This is because the task is to train a language model that predicts the next word at every step, so each word in `y_train` is the word that follows the corresponding position in `X_train`, i.e. $y_t=x_{t+1}$." ] },
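{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check of this shift-by-one relationship, here is what a single toy sentence would produce (the index values are made up for illustration; as in the example above, 0 and 1 stand for the start and end tokens):\n", "```py\n", "sent = [0, 51, 27, 16, 1]\n", "x = sent[:-1]   # [0, 51, 27, 16]  -> input at each time step\n", "y = sent[1:]    # [51, 27, 16, 1]  -> target, i.e. the next word\n", "```\n", "Every `y[t]` equals `x[t+1]`, which is exactly the pattern visible in the example input/output pair printed by `gen_train_data()`." ] },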
"cell_type": "code", "execution_count": 45, "metadata": { "collapsed": true }, "outputs": [], "source": [ "\n", "class RNNNumpy:\n", " def __init__(self, vocab_size, hidden_size = 100, bptt_truncate = 4):\n", " # 初始化模型超参数\n", " self.vocab_size = vocab_size\n", " self.hidden_size = hidden_size\n", " self.bptt_truncate = bptt_truncate\n", " # 输入权重矩阵, H x K 矩阵\n", " self.W_in = np.random.uniform(-np.sqrt(1.0/vocab_size), np.sqrt(1.0/vocab_size), (hidden_size, vocab_size))\n", " # 隐层权重矩阵, H x H 矩阵\n", " self.W_rec = np.random.uniform(-np.sqrt(1.0/hidden_size), np.sqrt(1.0/hidden_size), (hidden_size, hidden_size))\n", " # 输出权重矩阵, K x H 矩阵\n", " self.W_out = np.random.uniform(-np.sqrt(1.0/hidden_size), np.sqrt(1.0/hidden_size), (vocab_size, hidden_size))\n", "\n", "\n", " def forward_propagation(self, x):\n", " '''前向传播\n", " # 参数:\n", " x:输入句子序列\n", " y:输出句子序列\n", " # 返回值:\n", " h:各时刻隐藏层的输出\n", " o:各时刻softmax层的输出\n", " '''\n", " # 句子长度\n", " T = len(x)\n", " # 隐层各时刻输出,多加一列用于存放全0向量,以表示h[-1]\n", " h = np.zeros((T+1, self.hidden_size))\n", " # 各时刻softmax的输出\n", " o = np.zeros((T, self.vocab_size))\n", " # 计算各时刻隐单元值和输出\n", " for t in np.arange(T):\n", " h[t] = np.tanh(self.W_rec.dot(h[t-1]) + self.W_in[:, x[t]])\n", " o[t] = softmax(self.W_out.dot(h[t]))\n", " return h, o\n", "\n", " def calc_total_loss(self, X_train, y_train):\n", " '''计算总交叉熵损失\n", " # 参数\n", " X_train:训练集特征\n", " y_train:训练集标签\n", " # 返回值:\n", " E:训练集上的交叉熵\n", " '''\n", " E = 0\n", " for x, y in zip(X_train, y_train):\n", " h, o = self.forward_propagation(x)\n", " correct_word_predictions = o[np.arange(len(y)), y]\n", " E += -1 * np.sum(np.log(correct_word_predictions))\n", " return E\n", "\n", " def calc_avg_loss(self, X_train, y_train):\n", " '''计算每个词上的平均交叉熵损失\n", " '''\n", " num_total_words = np.sum((len(y_i) for y_i in y_train))\n", " return self.calc_total_loss(X_train, y_train)/ num_total_words\n", "\n", " def bptt(self, x, y):\n", " '''bptt:随时间反向传播\n", " # 参数\n", " x:输入句子序列\n", " y:输出句子序列\n", " # 返回值\n", " W_in_grad:交叉熵关于输入矩阵的梯度\n", " W_rec_grad:交叉熵关于隐层矩阵的梯度\n", " W_out_grad:交叉熵关于输出矩阵的梯度\n", " '''\n", " T = len(x)\n", " W_in_grad = np.zeros(self.W_in.shape)\n", " W_rec_grad = np.zeros(self.W_rec.shape)\n", " W_out_grad = np.zeros(self.W_out.shape)\n", " \n", " h, o = self.forward_propagation(x)\n", " residual = o\n", " residual[np.arange(T), y] -= 1\n", " for t in np.arange(T)[::-1]:\n", " W_out_grad += np.outer(residual[t], h[t])\n", " delta_k = self.W_out.T.dot(residual[t]) * (1 - h[t] ** 2)\n", " for k in np.arange(max(0, t-self.bptt_truncate), t+1)[::-1]:\n", " W_rec_grad += np.outer(delta_k, h[k-1])\n", " W_in_grad[:, x[k]] += delta_k\n", " delta_k = self.W_rec.T.dot(delta_k) * (1 - h[k-1]**2)\n", " return (W_in_grad, W_rec_grad, W_out_grad)\n", "\n", " def sgd(self, x, y, learning_rate):\n", " '''随机梯度下降\n", " # 参数:\n", " x:输入句子序列\n", " y:输出句子序列 \n", " learning_rate: 学习率 \n", " '''\n", " W_in_grad, W_rec_grad, W_out_grad = self.bptt(x, y)\n", " self.W_in -= learning_rate * W_in_grad\n", " self.W_rec -= learning_rate * W_rec_grad\n", " self.W_out -= learning_rate * W_out_grad \n", " \n", " def train_with_sgd(self, X_train, y_train, nb_epoch = 100, learning_rate = 0.005, evaluate_loss_after = 5):\n", " '''用随机梯度下降训练模型\n", " # 参数:\n", " X_train:训练集特征\n", " y_train:训练集标签\n", " nb_epoch:轮询次数\n", " learning_rate:学习率\n", " evaluate_loss_after:设定经过多少轮epoch后计算损失函数\n", " '''\n", " losses = []\n", " for epoch in range(nb_epoch):\n", " if epoch % evaluate_loss_after ==0:\n", " loss = self.calc_avg_loss(X_train, y_train)\n", " 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## The RNN implementation" ] },
{ "cell_type": "code", "execution_count": 45, "metadata": { "collapsed": true }, "outputs": [], "source": [
"\n",
"class RNNNumpy:\n",
"    def __init__(self, vocab_size, hidden_size = 100, bptt_truncate = 4):\n",
"        # Model hyperparameters\n",
"        self.vocab_size = vocab_size\n",
"        self.hidden_size = hidden_size\n",
"        self.bptt_truncate = bptt_truncate\n",
"        # Input weight matrix, H x K\n",
"        self.W_in = np.random.uniform(-np.sqrt(1.0/vocab_size), np.sqrt(1.0/vocab_size), (hidden_size, vocab_size))\n",
"        # Recurrent (hidden-layer) weight matrix, H x H\n",
"        self.W_rec = np.random.uniform(-np.sqrt(1.0/hidden_size), np.sqrt(1.0/hidden_size), (hidden_size, hidden_size))\n",
"        # Output weight matrix, K x H\n",
"        self.W_out = np.random.uniform(-np.sqrt(1.0/hidden_size), np.sqrt(1.0/hidden_size), (vocab_size, hidden_size))\n",
"\n",
"\n",
"    def forward_propagation(self, x):\n",
"        '''Forward propagation\n",
"        # Arguments:\n",
"            x: input word-index sequence\n",
"        # Returns:\n",
"            h: hidden-layer output at every time step\n",
"            o: softmax output at every time step\n",
"        '''\n",
"        # Sentence length\n",
"        T = len(x)\n",
"        # Hidden states at every time step, with one extra row of zeros standing for h[-1]\n",
"        h = np.zeros((T+1, self.hidden_size))\n",
"        # Softmax outputs at every time step\n",
"        o = np.zeros((T, self.vocab_size))\n",
"        # Compute the hidden state and the output at every time step\n",
"        for t in np.arange(T):\n",
"            # The input at step t is conceptually one-hot, so W_in.dot(onehot) reduces to the column W_in[:, x[t]]\n",
"            h[t] = np.tanh(self.W_rec.dot(h[t-1]) + self.W_in[:, x[t]])\n",
"            o[t] = softmax(self.W_out.dot(h[t]))\n",
"        return h, o\n",
"\n",
"    def calc_total_loss(self, X_train, y_train):\n",
"        '''Total cross-entropy loss\n",
"        # Arguments:\n",
"            X_train: training inputs\n",
"            y_train: training targets\n",
"        # Returns:\n",
"            E: cross-entropy over the training set\n",
"        '''\n",
"        E = 0\n",
"        for x, y in zip(X_train, y_train):\n",
"            h, o = self.forward_propagation(x)\n",
"            correct_word_predictions = o[np.arange(len(y)), y]\n",
"            E += -1 * np.sum(np.log(correct_word_predictions))\n",
"        return E\n",
"\n",
"    def calc_avg_loss(self, X_train, y_train):\n",
"        '''Average cross-entropy loss per word\n",
"        '''\n",
"        num_total_words = sum(len(y_i) for y_i in y_train)\n",
"        return self.calc_total_loss(X_train, y_train) / num_total_words\n",
"\n",
"    def bptt(self, x, y):\n",
"        '''Backpropagation through time\n",
"        # Arguments:\n",
"            x: input word-index sequence\n",
"            y: target word-index sequence\n",
"        # Returns:\n",
"            W_in_grad: gradient of the cross-entropy w.r.t. the input matrix\n",
"            W_rec_grad: gradient of the cross-entropy w.r.t. the recurrent matrix\n",
"            W_out_grad: gradient of the cross-entropy w.r.t. the output matrix\n",
"        '''\n",
"        T = len(x)\n",
"        W_in_grad = np.zeros(self.W_in.shape)\n",
"        W_rec_grad = np.zeros(self.W_rec.shape)\n",
"        W_out_grad = np.zeros(self.W_out.shape)\n",
"\n",
"        h, o = self.forward_propagation(x)\n",
"        # residual[t] = o[t] - onehot(y[t]), the error at the softmax layer (note this aliases o)\n",
"        residual = o\n",
"        residual[np.arange(T), y] -= 1\n",
"        for t in np.arange(T)[::-1]:\n",
"            W_out_grad += np.outer(residual[t], h[t])\n",
"            delta_k = self.W_out.T.dot(residual[t]) * (1 - h[t] ** 2)\n",
"            # Propagate the error back through at most bptt_truncate time steps\n",
"            for k in np.arange(max(0, t-self.bptt_truncate), t+1)[::-1]:\n",
"                W_rec_grad += np.outer(delta_k, h[k-1])\n",
"                W_in_grad[:, x[k]] += delta_k\n",
"                delta_k = self.W_rec.T.dot(delta_k) * (1 - h[k-1]**2)\n",
"        return (W_in_grad, W_rec_grad, W_out_grad)\n",
"\n",
"    def sgd(self, x, y, learning_rate):\n",
"        '''One step of stochastic gradient descent\n",
"        # Arguments:\n",
"            x: input word-index sequence\n",
"            y: target word-index sequence\n",
"            learning_rate: learning rate\n",
"        '''\n",
"        W_in_grad, W_rec_grad, W_out_grad = self.bptt(x, y)\n",
"        self.W_in -= learning_rate * W_in_grad\n",
"        self.W_rec -= learning_rate * W_rec_grad\n",
"        self.W_out -= learning_rate * W_out_grad\n",
"\n",
"    def train_with_sgd(self, X_train, y_train, nb_epoch = 100, learning_rate = 0.005, evaluate_loss_after = 5):\n",
"        '''Train the model with stochastic gradient descent\n",
"        # Arguments:\n",
"            X_train: training inputs\n",
"            y_train: training targets\n",
"            nb_epoch: number of training epochs\n",
"            learning_rate: learning rate\n",
"            evaluate_loss_after: evaluate the loss every this many epochs\n",
"        '''\n",
"        losses = []\n",
"        for epoch in range(nb_epoch):\n",
"            if epoch % evaluate_loss_after == 0:\n",
"                loss = self.calc_avg_loss(X_train, y_train)\n",
"                losses.append(loss)\n",
"                # If the loss went up, halve the learning rate\n",
"                if len(losses) > 1 and losses[-1] > losses[-2]:\n",
"                    learning_rate *= 0.5\n",
"                    print \"Setting learning rate to %f\" % learning_rate\n",
"                print time.strftime(\"%Y-%m-%d %H:%M:%S\", time.localtime(time.time())) + \\\n",
"                    \" epoch=%d, num_samples_seen=%d, loss=%f\" % (epoch, epoch * X_train.shape[0], loss)\n",
"            for x, y in zip(X_train, y_train):\n",
"                self.sgd(x, y, learning_rate)\n",
"\n",
"    def gradient_check(self, x, y, h=0.001, error_threshold=0.01):\n",
"        '''Gradient check\n",
"        # Arguments:\n",
"            h: perturbation used for the numerical gradient\n",
"            error_threshold: relative-error threshold\n",
"        '''\n",
"        # Gradients computed by backpropagation; these are what we want to check\n",
"        bptt_gradients = self.bptt(x, y)\n",
"        # Names of the parameters we want to check\n",
"        model_parameters = ['W_in', 'W_rec', 'W_out']\n",
"        # Run the gradient check for every parameter\n",
"        for pidx, pname in enumerate(model_parameters):\n",
"            # Get the actual parameter value, e.g. model.W_in\n",
"            parameter = operator.attrgetter(pname)(self)\n",
"            print \"Performing gradient check for parameter %s with size %d.\" % (pname, np.prod(parameter.shape))\n",
"            # Iterate over every element of the parameter matrix, e.g. (0,0), (0,1), ...\n",
"            it = np.nditer(parameter, flags=['multi_index'], op_flags=['readwrite'])\n",
"            while not it.finished:\n",
"                ix = it.multi_index\n",
"                # Save the original value so we can restore it later\n",
"                original_value = parameter[ix]\n",
"                # Estimate the gradient with (f(x+h) - f(x-h)) / (2*h)\n",
"                parameter[ix] = original_value + h\n",
"                gradplus = self.calc_total_loss([x],[y])\n",
"                parameter[ix] = original_value - h\n",
"                gradminus = self.calc_total_loss([x],[y])\n",
"                estimated_gradient = (gradplus - gradminus)/(2*h)\n",
"                # Restore the original value\n",
"                parameter[ix] = original_value\n",
"                # Gradient computed by backpropagation\n",
"                backprop_gradient = bptt_gradients[pidx][ix]\n",
"                # Relative error: |x - y| / (|x| + |y|)\n",
"                # (if both gradients are zero this is 0/0 and yields nan with a RuntimeWarning, which does not fail the check)\n",
"                relative_error = np.abs(backprop_gradient - estimated_gradient)/(np.abs(backprop_gradient) + np.abs(estimated_gradient))\n",
"                # If the error is above the threshold, the check fails\n",
"                if relative_error > error_threshold:\n",
"                    print \"Gradient Check ERROR: parameter=%s ix=%s\" % (pname, ix)\n",
"                    print \"+h Loss: %f\" % gradplus\n",
"                    print \"-h Loss: %f\" % gradminus\n",
"                    print \"Estimated_gradient: %f\" % estimated_gradient\n",
"                    print \"Backpropagation gradient: %f\" % backprop_gradient\n",
"                    print \"Relative Error: %f\" % relative_error\n",
"                    return\n",
"                it.iternext()\n",
"            print \"Gradient check for parameter %s passed.\" % (pname)" ] },
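{ "cell_type": "markdown", "metadata": {}, "source": [ "One detail of `forward_propagation` worth spelling out: the input word at step t is conceptually a one-hot vector, and multiplying `W_in` by a one-hot vector just selects one column, which is why the code uses `self.W_in[:, x[t]]` instead of an explicit matrix-vector product. A small sketch of this equivalence (the matrix values are random and purely illustrative):\n", "```py\n", "import numpy as np\n", "\n", "np.random.seed(0)\n", "W_in = np.random.randn(3, 5)        # hidden_size x vocab_size\n", "word_index = 2\n", "one_hot = np.zeros(5)\n", "one_hot[word_index] = 1\n", "print np.allclose(W_in.dot(one_hot), W_in[:, word_index])   # True\n", "```" ] },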
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Gradient checking\n", "The previous section implemented the RNN, including the forward pass, backpropagation through time, stochastic gradient descent and the loss computation, but we do not yet know whether the implementation is correct. We therefore use gradient checking to verify that the gradients returned by bptt agree with a numerical estimate. The idea is to estimate the gradient directly from the loss function:\n", "$$\\bigg|\\frac{\\partial E}{\\partial \\theta} - \\frac{E(\\theta+h)-E(\\theta-h)}{2h}\\bigg|<\\epsilon$$\n", "where $h$ is a small perturbation and $\\epsilon>0$ is an error threshold. If the analytical gradient and the numerical estimate agree to within the threshold, the check passes; otherwise something is wrong with our implementation. (The code above actually compares the relative error $|x-y|/(|x|+|y|)$ against the threshold, so the test does not depend on the scale of the gradients.)\n", "When the variable we differentiate with respect to is a matrix, every element of the matrix has to be checked, so running the gradient check directly on corpus data would be very expensive. In practice a small artificial test case is enough:" ] },
{ "cell_type": "code", "execution_count": 46, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Performing gradient check for parameter W_in with size 1000.\n", "Gradient check for parameter W_in passed.\n", "Performing gradient check for parameter W_rec with size 100.\n", "Gradient check for parameter W_rec passed.\n", "Performing gradient check for parameter W_out with size 1000.\n", "Gradient check for parameter W_out passed.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "C:\\Anaconda2\\lib\\site-packages\\ipykernel\\__main__.py:150: RuntimeWarning: invalid value encountered in double_scalars\n" ] } ], "source": [ "grad_check_vocab_size = 100\n", "np.random.seed(10)\n", "model = RNNNumpy(grad_check_vocab_size, 10, 1000)\n", "model.gradient_check([0,1,2,3], [1,2,3,4])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The nice thing about gradient checking is that it verifies the forward pass, backpropagation and the loss computation all at once, and these are exactly the core building blocks of the network." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Training the model\n", "First, let us estimate how long a single SGD step takes" ] },
{ "cell_type": "code", "execution_count": 47, "metadata": { "collapsed": false, "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1 loops, best of 3: 334 ms per loop\n" ] } ], "source": [ "np.random.seed(10)\n", "model = RNNNumpy(vocabulary_size)\n", "%timeit model.sgd(X_train[10], y_train[10], 0.005)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Without GPU acceleration a single SGD step takes roughly 300 ms, and we have close to 80,000 training samples, so one epoch would take well over six hours and training a good RNN would take weeks. To get a model quickly, we trim the training set and use only its first 100 samples:" ] },
{ "cell_type": "code", "execution_count": 48, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2016-08-23 10:33:50 epoch=0, num_samples_seen=0, loss=8.987291\n", "2016-08-23 10:34:05 epoch=1, num_samples_seen=100, loss=8.975247\n", "2016-08-23 10:34:20 epoch=2, num_samples_seen=200, loss=8.956812\n", "2016-08-23 10:34:37 epoch=3, num_samples_seen=300, loss=8.841700\n", "2016-08-23 10:34:52 epoch=4, num_samples_seen=400, loss=6.822989\n", "2016-08-23 10:35:07 epoch=5, num_samples_seen=500, loss=6.306852\n", "2016-08-23 10:35:24 epoch=6, num_samples_seen=600, loss=6.048093\n", "2016-08-23 10:35:41 epoch=7, num_samples_seen=700, loss=5.877046\n", "2016-08-23 10:36:01 epoch=8, num_samples_seen=800, loss=5.746562\n", "2016-08-23 10:36:20 epoch=9, num_samples_seen=900, loss=5.644863\n" ] } ], "source": [ "model = RNNNumpy(_VOCABULARY_SIZE, _HIDDEN_SIZE, _NBPTT_TRUNCATE)\n", "model.train_with_sgd(X_train[:100], y_train[:100], nb_epoch = _NEPOCH, evaluate_loss_after = 1)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Generating sentences\n", "To generate a sentence we repeatedly sample the next word from the multinomial distribution given by the output probabilities. Why sample instead of taking the most likely word? I think the reason is to add randomness and avoid producing the same sentence every time. I also found that if the sampling is replaced by always picking the highest-probability word, the model keeps predicting the same word over and over, e.g. \"i the the the the...\"; this is probably because the optimization got stuck in a poor local optimum, so other optimization methods would be worth trying." ] },
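{ "cell_type": "markdown", "metadata": {}, "source": [ "For reference, `np.random.multinomial(1, p)` draws a single sample from the categorical distribution `p` and returns it as a one-hot count vector, so `np.argmax` recovers the sampled index. A minimal sketch (the probabilities are made up for illustration):\n", "```py\n", "import numpy as np\n", "\n", "p = [0.1, 0.2, 0.7]                     # next-word probabilities\n", "sample = np.random.multinomial(1, p)    # e.g. array([0, 0, 1])\n", "next_word = np.argmax(sample)           # index of the sampled word\n", "```" ] },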
{ "cell_type": "code", "execution_count": 49, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "=== sentence 1 ===\n", "sharking objectively regions ease : films i like shared .\n", "=== sentence 2 ===\n", "is qb played local alive gatherer would the and ; it and is son a surprisingly logic and ; the what why device require rest only off-topic having is he bad zero having a paid for ranking sell places a she ? gt and decide 2faskreddit what .\n", "=== sentence 3 ===\n", "gifts lifted missing gauge .\n", "=== sentence 4 ===\n", "abandoned languages players i .\n", "=== sentence 5 ===\n", "^^^message ? the but .\n", "=== sentence 6 ===\n", "price magic efficient load ? the and of is always missing rights calm it the cab going the the 250 have the stressful have usual event the custom the do players indeed unable 's me the travelling was . anything than ; the users and attempts defense a into i house simplistic wrote house sound or\n", "=== sentence 7 ===\n", "update interesting and convince\n", "=== sentence 8 ===\n", "proposed enter would of the not why is , to site and is from article dodging suggests very early her but a n't .\n", "=== sentence 9 ===\n", "empire other and and silent own the a an would the pocket background say that nah true round anyone 18 cause do this think ) . for what felon in of a for the i faint and far bang bikes hole the of 're an had i from what there qb tournament legal to .\n", "=== sentence 10 ===\n", "atleast '' of that really .\n", "=== sentence 11 ===\n", "basically premade say say , a .\n", "=== sentence 12 ===\n", "than if ( the in completion n't the n't out of i strings lb me chance wo tu gt reasonable jazz i started like in the ! dust what nap one be allows\n", "=== sentence 13 ===\n", "feel reform reasonable n't what had and .\n", "=== sentence 14 ===\n", "along presentation the the sell or what like my is been n't have going please the poison they that where we the highest here this i slurs profitable the background him the the of robot been 's the was wo the you than do n't consent had any at .\n", "=== sentence 15 ===\n", "prepare //www.wolframalpha.com/input/ .\n", "=== sentence 16 ===\n", "aging de the of squat because low seemed selfish dodge as keep preferences if is .\n", "=== sentence 17 ===\n", "7200rpm or rendering you stored resolution that ] the the i would .\n", "=== sentence 18 ===\n", "fantastic issued for enterprise and wo n't like\n", "=== sentence 19 ===\n", "keeper and if settlement see pg\n", "=== sentence 20 ===\n", ".. completely ) that to hunter slide would if sound also batteries the first they as key why close be does if . probably do the all the is down of who , before that woke used concerned the to the n't ? of more n't demo to de to are think .\n" ] } ], "source": [ "def gen_sentence(model):\n", "    '''Generate one sentence\n", "    '''\n", "    new_sentence = [word_to_index[sentence_start_token]]\n", "    while new_sentence[-1] != word_to_index[sentence_end_token]:\n", "        h, next_word_probs = model.forward_propagation(new_sentence)\n", "        sampled_word = word_to_index[unknown_token]\n", "        # Keep sampling until we draw something other than the unknown token\n", "        while sampled_word == word_to_index[unknown_token]:\n", "            samples = np.random.multinomial(1, next_word_probs[-1])\n", "            sampled_word = np.argmax(samples)\n", "        new_sentence.append(sampled_word)\n", "    # Drop the start and end tokens and map indices back to words\n", "    sentence = [index_to_word[word] for word in new_sentence[1:-1]]\n", "    return sentence\n", "\n", "num_sentences = 20\n", "sentence_min_len = 2\n", "for i in xrange(num_sentences):\n", "    # Retry until we get a sentence longer than sentence_min_len\n", "    while True:\n", "        sentence = gen_sentence(model)\n", "        if len(sentence) > sentence_min_len:\n", "            break\n", "    sentence = ' '.join([word.encode('utf-8') for word in sentence])\n", "    print '=== sentence {} ==='.format(i + 1)\n", "    print sentence" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 0 }