{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 了解Keras中LSTM的返回序列和返回狀態之間的區別\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "長期短期記憶(LSTM)是由三個內部閘(internal gates)所構建成的循環神經網絡(recurrent neural network)。\n", "\n", "與基本RNN (vanilla RNN)不同的是,LSTM的這些內部閘的設計可以允許整個模型使用反向傳播(back propagation)來訓練模型,並避免梯度消失(gradients vanishing)的問題。\n", "\n", "在Keras深度學習庫中,可以使用LSTM()類別來創建LSTM神經層。而且每一層LSTM單元都允許我們指定圖層內存儲單元的數量。層中的每個LSTM單元的內部狀態,通常縮寫為“__c__”,並輸出隱藏狀態,通常縮寫為“__h__”。\n", "\n", "![lstm-cell](http://bit.ly/2i3kb1w)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Keras API允許我們訪問這些\"內部狀態\"數據,這些數據在開發複雜的循環神經網絡架構(如`encoder-decoder`模型)時可能有用,甚至是必需的。" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using TensorFlow backend.\n" ] }, { "data": { "text/plain": [ "'2.1.1'" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import keras\n", "keras.__version__" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 返回序列 (Return Sequences)\n", "\n", "每個LSTM單元將為每個輸入來輸出一個隱藏狀態\" __h__ \"。\n", "> h = LSTM(X)\n", "\n", "我們可以在Keras中用一個非常小的模型來觀察這一點,該模型具有單個LSTM層(其本身包含單個\"LSTM\"單元)。\n", "\n", "在這個例子中,我們將有一個有三個時間步(每個時間歩只有一個特徵)的輸入樣本:\n", "> timestep_1 = 0.1\n", "\n", "> timestep_2 = 0.2\n", "\n", "> timestep_3 = 0.3" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[-0.07730947]]\n" ] } ], "source": [ "from keras.models import Model\n", "from keras.layers import Input\n", "from keras.layers import LSTM\n", "import numpy as np\n", "\n", "# 定義模型架構\n", "input_x = Input(shape=(3, 1))\n", "lstm_1 = LSTM(1)(input_x)\n", "model = Model(inputs = input_x, outputs = lstm_1)\n", "\n", "# LSTM的模型需要的輸入張量格式為:\n", "# (batch_size,timesteps,input_features)\n", "data = np.array([0.1, 0.2, 0.3]).reshape((1,3,1))\n", "\n", "# 打印模型的輸出\n", "print(model.predict(data))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "運行以上以3個時間步長為輸入序列來讓LSTM輸出LSTM cell的內部隱藏狀態\" __h__ \"。\n", "\n", "由於LSTM權重和單元狀態的隨機初始化,你的具體輸出值會有所不同。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "如果有需要, 我們也可要求Keras來輸出每個輸入時間步的隱藏狀態。這可以通過在定義LSTM層時將`return_sequences`屬性設置為`True`,如下所示:\n", "\n", "> LSTM(1, return_sequences=True)\n", "\n", "我們可以修改更新前一個範例。" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[[ 0.00558797]\n", " [ 0.01459772]\n", " [ 0.02498127]]]\n" ] } ], "source": [ "from keras.models import Model\n", "from keras.layers import Input\n", "from keras.layers import LSTM\n", "import numpy as np\n", "\n", "# 定義模型架構\n", "input_x = Input(shape=(3, 1))\n", "lstm_1 = LSTM(1, return_sequences=True)(input_x)\n", "model = Model(inputs = input_x, outputs = lstm_1)\n", "\n", "# LSTM的模型需要的輸入張量格式為:\n", "# (batch_size,timesteps,input_features)\n", "data = np.array([0.1, 0.2, 0.3]).reshape((1,3,1))\n", "\n", "# 打印模型的輸出\n", "print(model.predict(data))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "運行該範例將返回包含了\"3\"個值的序列,每一個隱藏狀態輸出會對應到每個輸入時間步。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 返回狀態 (Return States)\n", "\n", "LSTM單元或單元層的輸出被稱為隱藏狀態。\n", "\n", "這很令人困惑,因為每個LSTM單元保留一個不輸出的內部狀態,稱為單元狀態或\"__c__\"。通常,我們不需要訪問單元狀態,除非我們正在開發複雜的模型,其中後續神經層可能需要使用另一層的最終單元狀態(例如`encoder-decoder`模型)來初始化其單元狀態。\n", "\n", "Keras為LSTM層提供了`return_state`參數,以提供對隱藏狀態輸出(state_h)和單元狀態(state_c)的訪問。 例如:\n", "> lstm1, state_h, state_c = LSTM(1, return_state=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "這可能看起來很混亂,因為lstm1和state_h都指向相同的隱藏狀態輸出。 這兩個張量分開的原因將在其它的文章中會進行清楚的解釋。我們可以通過下面列出的工作範例來演示如何訪問LSTM層中單元格的隱藏和單元狀態。" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[array([[ 0.03827229]], dtype=float32), array([[ 0.03827229]], dtype=float32), array([[ 0.06963068]], dtype=float32)]\n" ] } ], "source": [ "from keras.models import Model\n", "from keras.layers import Input\n", "from keras.layers import LSTM\n", "import numpy as np\n", "\n", "# 定義模型架構\n", "input_x = Input(shape=(3, 1))\n", "lstm_1, state_h, state_c = LSTM(1, return_state=True)(input_x)\n", "model = Model(inputs=input_x, outputs=[lstm_1, state_h, state_c])\n", "\n", "# LSTM的模型需要的輸入張量格式為:\n", "# (batch_size,timesteps,input_features)\n", "data = np.array([0.1, 0.2, 0.3]).reshape((1,3,1))\n", "\n", "# 打印模型的輸出\n", "print(model.predict(data))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "運行該範例將返回3個數組:\n", "\n", "1. 最後一步的LSTM隱藏狀態輸出。\n", "2. 最後一步(再次)的LSTM隱藏狀態輸出。\n", "3. 最後一步的LSTM單元狀態。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "隱藏狀態和單元狀態可以用來初始化具有相同單元數量的另一個LSTM層的狀態。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 返回狀態與序列 (Return States & Sequences)\n", "\n", "我們可以同時訪問隱藏狀態序列和單元狀態。\n", "\n", "這可以通過配置LSTM層來返回序列和返回狀態來完成。\n", "> lstm_1, state_h, state_c = LSTM(1, return_sequences=True, return_state=True)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[array([[[ 0.02167951],\n", " [ 0.05465259],\n", " [ 0.09158381]]], dtype=float32), array([[ 0.09158381]], dtype=float32), array([[ 0.20488389]], dtype=float32)]\n" ] } ], "source": [ "from keras.models import Model\n", "from keras.layers import Input\n", "from keras.layers import LSTM\n", "import numpy as np\n", "\n", "# 定義模型架構\n", "input_x = Input(shape=(3, 1))\n", "lstm_1, state_h, state_c = LSTM(1, return_sequences=True, return_state=True)(input_x)\n", "model = Model(inputs=input_x, outputs=[lstm_1, state_h, state_c])\n", "\n", "# LSTM的模型需要的輸入張量格式為:\n", "# (batch_size,timesteps,input_features)\n", "data = np.array([0.1, 0.2, 0.3]).reshape((1,3,1))\n", "\n", "# 打印模型的輸出\n", "print(model.predict(data))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "運行這個例子,我們現在可以看到為什麼LSTM輸出張量和隱藏狀態輸出張量被分開聲明。這次的LSTM該層會返回每個輸入時間步的隱藏狀態,然後分別返回最後一個時間步的隱藏狀態輸出和最後輸入時間步的單元狀態。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 參考:\n", "* [Understand the Difference Between Return Sequences and Return States for LSTMs in Keras](https://machinelearningmastery.com/return-sequences-and-return-states-for-lstms-in-keras/)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.3" } }, "nbformat": 4, "nbformat_minor": 2 }