{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Julia(Chars74K)字母圖像辨識\n", "\n", "Kaggle的[First Steps With Julia](https://www.kaggle.com/c/street-view-getting-started-with-julia)比賽是一個圖像的分類問題。這些字符圖像來自Chars74k數據集的一個子集。這個比賽通常作為如何使用Julia語言的教程,但這我們將透過Keras來建構一個卷積神經網絡(CNN)並使用圖像增強的手法來強化模型的辨識能力。\n", "\n", "![Chars74K](http://ankivil.com/wp-content/uploads/2016/09/Kaggle_FirstStepsJulia_Cover-816x459.png)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## 資料預處理 (Data Preprocessing)\n", "首先,你必須先從Kaggle下載[Julia(Chars74K)](https://www.kaggle.com/c/street-view-getting-started-with-julia/data/)相關數據並在解壓縮到本機的特定檔案目錄。" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# 資料預處理 (Data Preprocessing)\n", "import os\n", "import glob\n", "import pandas as pd\n", "import math\n", "import numpy as np\n", "from scipy.misc import imread, imsave, imresize\n", "from natsort import natsorted" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### 圖像的色彩 (Image Color)\n", "\n", "訓練和測試資料集中幾乎所有圖像都是彩色圖像。預處理的第一步是將所有圖像轉換為灰階。它簡化了輸入到網絡的數據,也讓模型更能夠一般化(generalize),因為一個藍色的字母與一個紅色字母在這個圖像的分類問題上都是相同的。因此把圖像顏色的通道(channel)進行縮減的這個預處理應該對最終的準確性沒有負面影響,因為大多數文本與背景具有高度對比。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 圖像大小的修改 (Image Resizing)\n", "\n", "由於圖像具有不同的形狀和大小,因此我們必須對圖像進行歸一化(normalize)處理以便可以決定模型的輸入。這個處理會有兩個主要的問題需要解決:我們選擇哪種圖像大小(size)?我們是否該保持圖像寬高比(aspect ratio)?\n", "\n", "起初,我也認為保持圖像的高寬比會更好,因為它不會任意扭曲圖像。這也可能導致O和O(大寫o和零)之間的混淆。不過,經過一番測試,似乎沒有保持寬高比的模型效果更好。\n", "\n", "關於圖像尺寸,16×16的圖像允許非常快速的訓練,但不能給出最好的結果。這些小圖像是快速測試想法的完美選擇。使用32×32的圖像使訓練相當快,並提供良好的準確性。最後,與32×32圖像相比,使用64×64圖像使得訓練相當緩慢並略微提高了結果。我選擇使用32×32的圖像,因為它是速度和準確性之間的最佳折衷。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 圖像資料的檔案路徑\n", "path = \"data\"\n", "\n", "# 圖像轉換後的目標大小 (32像素 x 32像素)\n", "img_height, img_width = 32, 32\n", "\n", "# 轉換圖像後的儲存目錄\n", "suffix = 
\"Preproc\"\n", "trainDataPath = path + \"/train\" + suffix\n", "testDataPath = path + \"/test\" + suffix\n", "\n", "# 產生目錄\n", "if not os.path.exists(trainDataPath):\n", " os.makedirs(trainDataPath)\n", "\n", "if not os.path.exists(testDataPath):\n", " os.makedirs(testDataPath)\n", " \n", "### 圖像大小與圖像的色彩的預處理 ###\n", "\n", "for datasetType in [\"train\",\"test\"]:\n", " # 透過natsorted可以讓回傳的檔案名稱的排序\n", " imgFiles = natsorted(glob.glob(path + \"/\" + datasetType + \"/*\"))\n", " \n", " # 初始一個ndarray物件來暫存讀進來的圖像資料\n", " imgData = np.zeros((len(imgFiles), img_height, img_width))\n", " \n", " # 使用迴圈來處理每一筆圖像檔\n", " for i, imgFilePath in enumerate(imgFiles):\n", " # 圖像的色彩 (Image Color)處理\n", " img = imread(imgFilePath, True) # True: 代表讀取圖像時順便將多階圖像, 打平成灰階(單一通道:one channel)\n", " \n", " # 圖像大小的修改 (Image Resizing)\n", " imgResized = imresize(img, (img_height, img_width))\n", " \n", " # 把圖像資料儲放在暫存記憶體中\n", " imgData[i] = imgResized\n", " \n", " # 將修改的圖像儲存到檔案系統 (方便視覺化了解)\n", " filename = os.path.basename(imgFilePath)\n", " filenameDotSplit = filename.split(\".\")\n", " newFilename = str(int(filenameDotSplit[0])).zfill(5) + \".\" + filenameDotSplit[-1].lower()\n", " newFilepath = path + \"/\" + datasetType + suffix + \"/\" + newFilename\n", " imsave(newFilepath, imgResized)\n", " \n", " # 新增加\"Channel\"的維度\n", " print(\"Before: \", imgData.shape)\n", " imgData = imgData[:,:,:,np.newaxis] # 改變前: []\n", " print(\"After: \", imgData.shape)\n", " \n", " # 進行資料(pixel值)標準化\n", " imgData = imgData.astype('float32')/255\n", " \n", " # 以numpy物件將圖像轉換後的ndarray物件保存在檔案系統中\n", " np.save(path + \"/\" + datasetType + suffix + \".npy\", imgData)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 標籤轉換 (Label Conversion)\n", "\n", "我們也必須將字符的標籤進行one-hot編碼的轉換。將標籤信息提供給CNN神經網絡是必要的。這個過程包括了兩個步驟。\n", "首先,我們將字符轉換為連續整數。由於要預測的字符是[0~9],[a~z]及[A~Z]共有62個字符, 所以我們將把每個字符\n", "對應到[0~61]的整數。 \n", "\n", "再來,我們將每個對應到某個字符的整數值去進行one-hot的編碼轉換成為一個向量。" ] }, { "cell_type": "code", "execution_count": 2, 
"metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using TensorFlow backend.\n" ] } ], "source": [ "# 標籤轉換 (Label Conversion)\n", "import keras\n", "\n", "def label2int(ch):\n", " asciiVal = ord(ch)\n", " if(asciiVal<=57): #0-9\n", " asciiVal-=48\n", " elif(asciiVal<=90): #A-Z\n", " asciiVal-=55\n", " else: #a-z\n", " asciiVal-=61\n", " return asciiVal\n", " \n", "def int2label(i):\n", " if(i<=9): #0-9\n", " i+=48\n", " elif(i<=35): #A-Z\n", " i+=55\n", " else: #a-z\n", " i+=61\n", " return chr(i)\n", "\n", "# 圖像資料的檔案路徑\n", "path = \"data\"\n", "\n", "# 載入標籤資料\n", "y_train = pd.read_csv(path + \"/trainLabels.csv\").values[:,1] #只保留\"標籤資料\"欄\n", "\n", "# 對標籤(Label)進行one-hot編碼\n", "Y_train = np.zeros((y_train.shape[0], 62)) # A-Z, a-z, 0-9共有62個類別\n", "\n", "for i in range(y_train.shape[0]):\n", " Y_train[i][label2int(y_train[i])] = 1 # One-hot\n", "\n", "# 把轉換過的標籤(Label)資料保存在檔案系統便於後續的快速載入與處理\n", "np.save(path + \"/\" + \"labelsPreproc.npy\", Y_train)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 圖像增強 (Data Augmentation)\n", "\n", "這次我們要對於訓練數據的使用手法上有一些不同,主要的是想要應用一些圖像增強的技術來人為地增加“新”圖像到訓練資料集中。增量是對於原始的圖像進行一些隨機變換以產生新的圖像。這些轉換可以是縮放,旋轉,位移..等或所有這些的組合。\n", "\n", "方便的是,Keras中有一個圖像增強類別:ImageDataGenerator 可以讓我們很輕鬆地就達成任務" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 使用 ImageDataGenerator\n", "\n", "ImageDataGenerator構建函數需要幾個參數來定義我們想要使用的增強效果。我只會通過對我們的案例有用的參數進行設定,如果您需要對您的圖像進行其他修改,請參閱[Keras文檔](https://keras.io/preprocessing/image/)。\n", "\n", "* `featurewise_center`,`featurewise_std_normalization`和`zca_whitening`不使用,因為在本案例裡它們不會增加網絡的性能。如果你想測試這些選項,一定要合適地計算相關的數量,並將這些修改應用到你的測試集中進行標準化。\n", "* `rotation_range` 20左右的值效果最好。\n", "* `width_shift_range` 0.15左右的值效果最好。\n", "* `height_shift_range` 0.15左右的值效果最好。\n", "* `shear_range 0.4` 左右的值效果最好。\n", "* `zoom_range 0.3` 左右的值效果最好。\n", "* `channel_shift_range` 0.1左右的值效果最好。\n", "\n", "當然,我沒有測試所有的組合,所以可能還有其他值的組合可以用來提高最終的準確度。但要小心,太多的增量(高參數值)會使學習變得緩慢甚至跑不出來。\n", "\n" ] }, { 
"cell_type": "markdown", "metadata": {}, "source": [ "## 網絡模型 (Model)\n", "\n", "我嘗試了兩種不同的網絡模型結構:\n", "* Vikesh的CNN-2你可以在他的[文章](http://cs231n.stanford.edu/reports/vikesh_final.pdf)中找到細節。整個Chars74k數據集的準確率為86.52%。然而,在這個Kaggle比賽裡,我們只使用Chars74k的一個子集,這個模型我只能設法獲得大約80%的驗證準確度。\n", "* Florian Muellerklein的類VGG網絡模型結構,你可以在[這裡](http://florianmuellerklein.github.io/cnn_streetview/)找到細節。這個是我跑出來最好分數的模型,超過85%的驗證準確性,所以我會詳細描述它。\n", "\n", "一圖勝千言,下面是模型的結構圖:\n", "![VGG-like](http://ankivil.com/wp-content/uploads/2016/09/CNN_final_model.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "這個模型與Florian Muellerklein非常相似。我在卷積層設定了zero-padding,並增加了密集層的大小。Dropout設定為0.5。\n", "\n", "這個模型給出了最好的結果,但是學習起來很慢很花時間。通過將所有濾波器數量和密集的圖層大小除以2或4,您也可以得到很好的結果。\n", "較小的網絡對於測試不同的超參數非常有用。值得注意的是,增加網絡的規模確實提高了驗證的準確性,但也成倍地增加了模型學習的時間。\n", "\n", "我也嘗試添加一些圖層,但產生的網絡難以收斂,並沒有給出好的結果。" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## 模型學習 (Learning)\n", "\n", "對於模型的訓練,我使用了分類交叉熵(cross-entropy)作為損失函數(loss function),最後一層使用softmax的激勵函數。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 演算法 (Algorithm)\n", "\n", "在這個模型裡我選擇使用`AdaMax`和`AdaDelta`來作為優化器(optimizer),而不是使用經典的隨機梯度下降(SGD)算法。\n", "同時我發現`AdaMax`比`AdaDelta`在這個問題上會給出更好的結果。\n", "\n", "但是,對於具有眾多濾波器和大型完全連接層的複雜網絡,AdaMax在訓練循環不太收斂,甚至無法完全收斂。因此在這次的網絡訓練過程我拆成二個階段。\n", "第一個階段,我先使用`AdaDelta`進行了20個循環的前期訓練為的是要比較快速的幫忙卷積網絡的模型收斂。第二個階段,則利用`AdaMax`來進行更多訓練循環與更細微的修正來得到更好的模型。\n", "\n", "如果將網絡的大小除以2,則不需要使用該策略。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 訓練批次量 (Batch Size)\n", "\n", "在保持訓練循環次數不變的同時,我試圖改變每次訓練循環的批量大小(batch size)。大的批量(batch)會使算法運行速度更快,但結果效能不佳。\n", "這可能是因為在相同數量的數據量下,更大的批量意味著更少的模型權重的更新。無論如何,在這個範例中最好的結果是在批量(batch size) 設成\n", "128的情況下達到的。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 網絡層的權重初始 (Layer Initialization)\n", "\n", "如果網絡未正確初始化,則優化算法可能無法找到最佳值。我發現使用`he_normal`來進行初始化會使模型的學習變得更容易。在Keras中,你只需要為每一層使用`kernel_initializer='he_normal'`參數。\n" ] }, { "cell_type": "markdown", "metadata": 
{}, "source": [ "### 學習率衰減 (Learning Rate Decay)\n", "\n", "在訓練期間逐漸降低學習率(learning rate)通常是一個好主意。它允許算法微調參數,並接近局部最小值。\n", "但是,我發現使用`AdaMax`的optimizer,在沒有設定學習速率衰減的情況下結果更好,所以我們現在不必擔心。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 訓練循環 (Number of Epochs)\n", "\n", "使用128的批量大小,沒有學習速度衰減,我測試了200到500個訓練循環。即使運行到第500個訓練循環,整個網絡模型似乎也沒出現過擬合(overfitting)的情形。\n", "我想這肯定要歸功於Dropout的設定發揮了功效。我發現500個訓練循環的結果比300個訓練循環略好。最後的模型我用了500個訓練循環,但是如果你在CPU上運行,300個訓練循環應該就足夠了。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 交叉驗證 (Cross-Validation)\n", "\n", "為了評估不同模型的質量和超參數的影響,我使用了蒙特卡洛交叉驗證:我隨機分配了初始數據1/4進行驗證,並將3/4進行學習。\n", "我還使用分裂技術,確保在我們的例子中,每個類別約有1/4圖像出現在測試集中。這導致更穩定的驗證分數。\n" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "_________________________________________________________________\n", "Layer (type) Output Shape Param # \n", "=================================================================\n", "conv2d_1 (Conv2D) (None, 32, 32, 128) 1280 \n", "_________________________________________________________________\n", "conv2d_2 (Conv2D) (None, 32, 32, 128) 147584 \n", "_________________________________________________________________\n", "max_pooling2d_1 (MaxPooling2 (None, 16, 16, 128) 0 \n", "_________________________________________________________________\n", "conv2d_3 (Conv2D) (None, 16, 16, 256) 295168 \n", "_________________________________________________________________\n", "conv2d_4 (Conv2D) (None, 16, 16, 256) 590080 \n", "_________________________________________________________________\n", "max_pooling2d_2 (MaxPooling2 (None, 8, 8, 256) 0 \n", "_________________________________________________________________\n", "conv2d_5 (Conv2D) (None, 8, 8, 512) 1180160 \n", "_________________________________________________________________\n", "conv2d_6 (Conv2D) (None, 8, 8, 512) 2359808 \n", "_________________________________________________________________\n", "conv2d_7 
(Conv2D) (None, 8, 8, 512) 2359808 \n", "_________________________________________________________________\n", "max_pooling2d_3 (MaxPooling2 (None, 4, 4, 512) 0 \n", "_________________________________________________________________\n", "flatten_1 (Flatten) (None, 8192) 0 \n", "_________________________________________________________________\n", "dense_1 (Dense) (None, 4096) 33558528 \n", "_________________________________________________________________\n", "dropout_1 (Dropout) (None, 4096) 0 \n", "_________________________________________________________________\n", "dense_2 (Dense) (None, 4096) 16781312 \n", "_________________________________________________________________\n", "dropout_2 (Dropout) (None, 4096) 0 \n", "_________________________________________________________________\n", "dense_3 (Dense) (None, 62) 254014 \n", "=================================================================\n", "Total params: 57,527,742\n", "Trainable params: 57,527,742\n", "Non-trainable params: 0\n", "_________________________________________________________________\n" ] } ], "source": [ "import numpy as np\n", "import os\n", "from keras.preprocessing.image import ImageDataGenerator\n", "from keras.models import Sequential\n", "from keras.layers.core import Dense, Dropout, Activation, Flatten\n", "from keras.layers.convolutional import Convolution2D, MaxPooling2D\n", "from keras.callbacks import ModelCheckpoint\n", "from sklearn.model_selection import train_test_split\n", "\n", "batch_size = 128 # batch size for training\n", "nb_classes = 62 # 62 classes: A-Z, a-z, 0-9\n", "nb_epoch = 500 # train for 500 epochs\n", "\n", "# Input image dimensions\n", "# image size fed into the first layer of the network (32 x 32 pixels)\n", "img_height, img_width = 32, 32\n", "\n", "# Path to the data\n", "path = "data/"\n", "\n", "# Load the preprocessed training data and labels\n", "X_train_all = np.load(path+"/trainPreproc.npy")\n", "Y_train_all = np.load(path+"/labelsPreproc.npy")\n", "\n", "# Split the data into a training set and a validation set\n", "X_train, X_val, Y_train, Y_val = train_test_split(X_train_all, 
Y_train_all, test_size=0.25, stratify=np.argmax(Y_train_all, axis=1))\n", "\n", "# Data augmentation settings\n", "datagen = ImageDataGenerator(\n", " rotation_range = 20,\n", " width_shift_range = 0.15,\n", " height_shift_range = 0.15,\n", " shear_range = 0.4,\n", " zoom_range = 0.3, \n", " channel_shift_range = 0.1)\n", "\n", "### CNN model architecture ###\n", "model = Sequential()\n", "\n", "model.add(Convolution2D(128,(3, 3), padding='same', kernel_initializer='he_normal', activation='relu', \n", " input_shape=(img_height, img_width, 1)))\n", "\n", "model.add(Convolution2D(128,(3, 3), padding='same', kernel_initializer='he_normal', activation='relu'))\n", "\n", "model.add(MaxPooling2D(pool_size=(2, 2)))\n", "\n", "model.add(Convolution2D(256,(3, 3), padding='same', kernel_initializer='he_normal', activation='relu'))\n", "model.add(Convolution2D(256,(3, 3), padding='same', kernel_initializer='he_normal', activation='relu'))\n", "\n", "model.add(MaxPooling2D(pool_size=(2, 2)))\n", "\n", "model.add(Convolution2D(512,(3, 3), padding='same', kernel_initializer='he_normal', activation='relu'))\n", "model.add(Convolution2D(512,(3, 3), padding='same', kernel_initializer='he_normal', activation='relu'))\n", "model.add(Convolution2D(512,(3, 3), padding='same', kernel_initializer='he_normal', activation='relu'))\n", "\n", "model.add(MaxPooling2D(pool_size=(2, 2)))\n", "\n", "model.add(Flatten())\n", "model.add(Dense(4096, kernel_initializer='he_normal', activation='relu'))\n", "model.add(Dropout(0.5))\n", "\n", "model.add(Dense(4096, kernel_initializer='he_normal', activation='relu'))\n", "model.add(Dropout(0.5))\n", "\n", "model.add(Dense(nb_classes, kernel_initializer='he_normal', activation='softmax'))\n", "\n", "# Display the full model architecture\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Train on 4712 samples, validate on 1571 samples\n", "Epoch 1/20\n", "4712/4712 
[==============================] - 7s 2ms/step - loss: 3.9692 - acc: 0.0548 - val_loss: 3.8141 - val_acc: 0.0535\n", "Epoch 2/20\n", "4712/4712 [==============================] - 4s 919us/step - loss: 3.8019 - acc: 0.0745 - val_loss: 3.6092 - val_acc: 0.1031\n", "Epoch 3/20\n", "4712/4712 [==============================] - 4s 925us/step - loss: 3.7895 - acc: 0.1089 - val_loss: 3.6961 - val_acc: 0.1254\n", "Epoch 4/20\n", "4712/4712 [==============================] - 4s 922us/step - loss: 3.0957 - acc: 0.2674 - val_loss: 2.3153 - val_acc: 0.3730\n", "Epoch 5/20\n", "4712/4712 [==============================] - 4s 926us/step - loss: 2.2733 - acc: 0.4185 - val_loss: 1.5714 - val_acc: 0.5659\n", "Epoch 6/20\n", "4712/4712 [==============================] - 4s 927us/step - loss: 1.6286 - acc: 0.5575 - val_loss: 1.7043 - val_acc: 0.5341\n", "Epoch 7/20\n", "4712/4712 [==============================] - 4s 925us/step - loss: 1.3224 - acc: 0.6382 - val_loss: 1.1770 - val_acc: 0.6677\n", "Epoch 8/20\n", "4712/4712 [==============================] - 4s 922us/step - loss: 1.0587 - acc: 0.7086 - val_loss: 1.0827 - val_acc: 0.6926\n", "Epoch 9/20\n", "4712/4712 [==============================] - 4s 923us/step - loss: 0.9026 - acc: 0.7421 - val_loss: 1.0253 - val_acc: 0.7053\n", "Epoch 10/20\n", "4712/4712 [==============================] - 4s 925us/step - loss: 0.7683 - acc: 0.7742 - val_loss: 0.9706 - val_acc: 0.7148\n", "Epoch 11/20\n", "4712/4712 [==============================] - 4s 925us/step - loss: 0.6277 - acc: 0.8079 - val_loss: 0.8502 - val_acc: 0.7575\n", "Epoch 12/20\n", "4712/4712 [==============================] - 4s 922us/step - loss: 0.5210 - acc: 0.8427 - val_loss: 0.9288 - val_acc: 0.7505\n", "Epoch 13/20\n", "4712/4712 [==============================] - 4s 929us/step - loss: 0.4484 - acc: 0.8553 - val_loss: 0.8608 - val_acc: 0.7836\n", "Epoch 14/20\n", "4712/4712 [==============================] - 4s 929us/step - loss: 0.3457 - acc: 0.8907 - val_loss: 0.8800 - 
val_acc: 0.7486\n", "Epoch 15/20\n", "4712/4712 [==============================] - 4s 925us/step - loss: 0.2642 - acc: 0.9174 - val_loss: 0.9797 - val_acc: 0.7721\n", "Epoch 16/20\n", "4712/4712 [==============================] - 4s 925us/step - loss: 0.2397 - acc: 0.9228 - val_loss: 1.0379 - val_acc: 0.7632\n", "Epoch 17/20\n", "4712/4712 [==============================] - 4s 926us/step - loss: 0.2042 - acc: 0.9310 - val_loss: 1.1407 - val_acc: 0.7467\n", "Epoch 18/20\n", "4712/4712 [==============================] - 4s 923us/step - loss: 0.1660 - acc: 0.9442 - val_loss: 1.1450 - val_acc: 0.7632\n", "Epoch 19/20\n", "4712/4712 [==============================] - 4s 922us/step - loss: 0.1479 - acc: 0.9531 - val_loss: 1.0959 - val_acc: 0.7721\n", "Epoch 20/20\n", "4712/4712 [==============================] - 4s 925us/step - loss: 0.1259 - acc: 0.9584 - val_loss: 1.1597 - val_acc: 0.7785\n", "Epoch 1/500\n", "36/36 [============================>.] - ETA: 0s - loss: 2.6030 - acc: 0.3466- ETA: 1s - loss: 2.8169 Epoch 00001: val_acc improved from -inf to 0.69637, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 8s 206ms/step - loss: 2.5830 - acc: 0.3501 - val_loss: 1.0911 - val_acc: 0.6964\n", "Epoch 2/500\n", "36/36 [============================>.] - ETA: 0s - loss: 1.8551 - acc: 0.497 - ETA: 0s - loss: 1.8574 - acc: 0.4967Epoch 00002: val_acc improved from 0.69637 to 0.75939, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 133ms/step - loss: 1.8493 - acc: 0.4991 - val_loss: 0.9028 - val_acc: 0.7594\n", "Epoch 3/500\n", "36/36 [============================>.] - ETA: 0s - loss: 1.5711 - acc: 0.5669Epoch 00003: val_acc improved from 0.75939 to 0.77912, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 133ms/step - loss: 1.5680 - acc: 0.5668 - val_loss: 0.7923 - val_acc: 0.7791\n", "Epoch 4/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 1.4082 - acc: 0.6021Epoch 00004: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 1.4037 - acc: 0.6033 - val_loss: 0.7607 - val_acc: 0.7721\n", "Epoch 5/500\n", "36/36 [============================>.] - ETA: 0s - loss: 1.3048 - acc: 0.6357Epoch 00005: val_acc improved from 0.77912 to 0.78358, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 130ms/step - loss: 1.3034 - acc: 0.6354 - val_loss: 0.7554 - val_acc: 0.7836\n", "Epoch 6/500\n", "36/36 [============================>.] - ETA: 0s - loss: 1.2425 - acc: 0.6462- ETA: 1s - loss: 1.2638 - Epoch 00006: val_acc improved from 0.78358 to 0.80522, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 131ms/step - loss: 1.2402 - acc: 0.6486 - val_loss: 0.7101 - val_acc: 0.8052\n", "Epoch 7/500\n", "36/36 [============================>.] - ETA: 0s - loss: 1.1642 - acc: 0.6638Epoch 00007: val_acc did not improve\n", "37/36 [==============================] - 4s 106ms/step - loss: 1.1617 - acc: 0.6628 - val_loss: 0.7095 - val_acc: 0.7963\n", "Epoch 8/500\n", "36/36 [============================>.] - ETA: 0s - loss: 1.1017 - acc: 0.6788- ETA: Epoch 00008: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 1.1050 - acc: 0.6788 - val_loss: 0.7013 - val_acc: 0.7791\n", "Epoch 9/500\n", "36/36 [============================>.] - ETA: 0s - loss: 1.0698 - acc: 0.6914Epoch 00009: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 1.0717 - acc: 0.6918 - val_loss: 0.6647 - val_acc: 0.8052\n", "Epoch 10/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.9987 - acc: 0.7007Epoch 00010: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.9960 - acc: 0.7014 - val_loss: 0.6507 - val_acc: 0.7995\n", "Epoch 11/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.9989 - acc: 0.7063Epoch 00011: val_acc improved from 0.80522 to 0.81604, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 131ms/step - loss: 0.9948 - acc: 0.7072 - val_loss: 0.6371 - val_acc: 0.8160\n", "Epoch 12/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.9585 - acc: 0.7235Epoch 00012: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.9563 - acc: 0.7238 - val_loss: 0.6774 - val_acc: 0.8097\n", "Epoch 13/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.9254 - acc: 0.7285Epoch 00013: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.9284 - acc: 0.7281 - val_loss: 0.6377 - val_acc: 0.8103\n", "Epoch 14/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.8643 - acc: 0.7444Epoch 00014: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.8717 - acc: 0.7435 - val_loss: 0.6371 - val_acc: 0.8084\n", "Epoch 15/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.8827 - acc: 0.7294Epoch 00015: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.8873 - acc: 0.7289 - val_loss: 0.6237 - val_acc: 0.8148\n", "Epoch 16/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.7999 - acc: 0.7495Epoch 00016: val_acc improved from 0.81604 to 0.82113, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 134ms/step - loss: 0.8029 - acc: 0.7487 - val_loss: 0.6319 - val_acc: 0.8211\n", "Epoch 17/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.7998 - acc: 0.7536Epoch 00017: val_acc improved from 0.82113 to 0.82877, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 133ms/step - loss: 0.7961 - acc: 0.7552 - val_loss: 0.6242 - val_acc: 0.8288\n", "Epoch 18/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.7705 - acc: 0.7561Epoch 00018: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.7742 - acc: 0.7555 - val_loss: 0.6713 - val_acc: 0.8052\n", "Epoch 19/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.7837 - acc: 0.7601Epoch 00019: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.7902 - acc: 0.7581 - val_loss: 0.6155 - val_acc: 0.8084\n", "Epoch 20/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.7709 - acc: 0.7578Epoch 00020: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.7647 - acc: 0.7597 - val_loss: 0.6137 - val_acc: 0.8148\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 21/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.7343 - acc: 0.7675Epoch 00021: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.7385 - acc: 0.7658 - val_loss: 0.6170 - val_acc: 0.8180\n", "Epoch 22/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.7017 - acc: 0.7788Epoch 00022: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.6998 - acc: 0.7791 - val_loss: 0.6357 - val_acc: 0.8129\n", "Epoch 23/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.7411 - acc: 0.7650Epoch 00023: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.7375 - acc: 0.7655 - val_loss: 0.6160 - val_acc: 0.8205\n", "Epoch 24/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.6815 - acc: 0.7855Epoch 00024: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.6805 - acc: 0.7856 - val_loss: 0.5630 - val_acc: 0.8230\n", "Epoch 25/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.6661 - acc: 0.7909- ETA: 1s - loss: 0.Epoch 00025: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.6637 - acc: 0.7909 - val_loss: 0.6000 - val_acc: 0.8281\n", "Epoch 26/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.6592 - acc: 0.7864Epoch 00026: val_acc improved from 0.82877 to 0.82941, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 131ms/step - loss: 0.6618 - acc: 0.7863 - val_loss: 0.6078 - val_acc: 0.8294\n", "Epoch 27/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.6180 - acc: 0.7993Epoch 00027: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.6197 - acc: 0.7999 - val_loss: 0.6164 - val_acc: 0.8256\n", "Epoch 28/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.6200 - acc: 0.8053Epoch 00028: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.6244 - acc: 0.8038 - val_loss: 0.5762 - val_acc: 0.8288\n", "Epoch 29/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.6200 - acc: 0.8020Epoch 00029: val_acc improved from 0.82941 to 0.83450, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 136ms/step - loss: 0.6226 - acc: 0.8021 - val_loss: 0.6069 - val_acc: 0.8345\n", "Epoch 30/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.6107 - acc: 0.8046Epoch 00030: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.6147 - acc: 0.8027 - val_loss: 0.6189 - val_acc: 0.8186\n", "Epoch 31/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.6057 - acc: 0.8039Epoch 00031: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.6005 - acc: 0.8056 - val_loss: 0.6189 - val_acc: 0.8199\n", "Epoch 32/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.5824 - acc: 0.8104Epoch 00032: val_acc improved from 0.83450 to 0.83641, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 132ms/step - loss: 0.5820 - acc: 0.8108 - val_loss: 0.5929 - val_acc: 0.8364\n", "Epoch 33/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.5598 - acc: 0.8156Epoch 00033: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.5602 - acc: 0.8147 - val_loss: 0.6052 - val_acc: 0.8269\n", "Epoch 34/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.5470 - acc: 0.8197Epoch 00034: val_acc improved from 0.83641 to 0.84278, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 136ms/step - loss: 0.5435 - acc: 0.8203 - val_loss: 0.5952 - val_acc: 0.8428\n", "Epoch 35/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.5618 - acc: 0.8136Epoch 00035: val_acc improved from 0.84278 to 0.84596, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 132ms/step - loss: 0.5593 - acc: 0.8142 - val_loss: 0.5882 - val_acc: 0.8460\n", "Epoch 36/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.5228 - acc: 0.8258Epoch 00036: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.5215 - acc: 0.8255 - val_loss: 0.5652 - val_acc: 0.8370\n", "Epoch 37/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.5121 - acc: 0.8272Epoch 00037: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.5148 - acc: 0.8266 - val_loss: 0.5932 - val_acc: 0.8288\n", "Epoch 38/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.5238 - acc: 0.8277Epoch 00038: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.5244 - acc: 0.8273 - val_loss: 0.6155 - val_acc: 0.8421\n", "Epoch 39/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.5167 - acc: 0.8246Epoch 00039: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.5187 - acc: 0.8251 - val_loss: 0.5526 - val_acc: 0.8370\n", "Epoch 40/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.5076 - acc: 0.8307Epoch 00040: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.5086 - acc: 0.8294 - val_loss: 0.6106 - val_acc: 0.8326\n", "Epoch 41/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4844 - acc: 0.8363Epoch 00041: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4857 - acc: 0.8361 - val_loss: 0.5678 - val_acc: 0.8428\n", "Epoch 42/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4839 - acc: 0.8400Epoch 00042: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4856 - acc: 0.8393 - val_loss: 0.6495 - val_acc: 0.8192\n", "Epoch 43/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4695 - acc: 0.8470Epoch 00043: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4728 - acc: 0.8467 - val_loss: 0.5671 - val_acc: 0.8415\n", "Epoch 44/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.4726 - acc: 0.8438Epoch 00044: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4741 - acc: 0.8429 - val_loss: 0.6026 - val_acc: 0.8358\n", "Epoch 45/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4665 - acc: 0.8483Epoch 00045: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.4629 - acc: 0.8482 - val_loss: 0.5849 - val_acc: 0.8447\n", "Epoch 46/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4731 - acc: 0.8429Epoch 00046: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4768 - acc: 0.8423 - val_loss: 0.5976 - val_acc: 0.8345\n", "Epoch 47/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4472 - acc: 0.8516Epoch 00047: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.4505 - acc: 0.8503 - val_loss: 0.6614 - val_acc: 0.8326\n", "Epoch 48/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4258 - acc: 0.8563Epoch 00048: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4277 - acc: 0.8556 - val_loss: 0.5981 - val_acc: 0.8415\n", "Epoch 49/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4521 - acc: 0.8507Epoch 00049: val_acc improved from 0.84596 to 0.85551, saving model to best.kerasModelWeights\n", "37/36 [==============================] - 5s 134ms/step - loss: 0.4540 - acc: 0.8505 - val_loss: 0.5959 - val_acc: 0.8555\n", "Epoch 50/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4382 - acc: 0.8553Epoch 00050: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4390 - acc: 0.8547 - val_loss: 0.5942 - val_acc: 0.8396\n", "Epoch 51/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.4236 - acc: 0.8584Epoch 00051: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4224 - acc: 0.8592 - val_loss: 0.5962 - val_acc: 0.8269\n", "Epoch 52/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4045 - acc: 0.8634Epoch 00052: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4038 - acc: 0.8640 - val_loss: 0.6647 - val_acc: 0.8173\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 53/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4034 - acc: 0.8602Epoch 00053: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4040 - acc: 0.8600 - val_loss: 0.6517 - val_acc: 0.8300\n", "Epoch 54/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4147 - acc: 0.8646- ETA: 2Epoch 00054: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4152 - acc: 0.8644 - val_loss: 0.5769 - val_acc: 0.8345\n", "Epoch 55/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4069 - acc: 0.8630Epoch 00055: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4047 - acc: 0.8637 - val_loss: 0.5980 - val_acc: 0.8364\n", "Epoch 56/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4194 - acc: 0.8605Epoch 00056: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4205 - acc: 0.8598 - val_loss: 0.6209 - val_acc: 0.8358\n", "Epoch 57/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.4000 - acc: 0.8669Epoch 00057: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.4039 - acc: 0.8661 - val_loss: 0.6333 - val_acc: 0.8364\n", "Epoch 58/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.3950 - acc: 0.8596Epoch 00058: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3916 - acc: 0.8615 - val_loss: 0.5998 - val_acc: 0.8332\n", "Epoch 59/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3566 - acc: 0.8763Epoch 00059: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3578 - acc: 0.8757 - val_loss: 0.5975 - val_acc: 0.8364\n", "Epoch 60/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3827 - acc: 0.8743Epoch 00060: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3805 - acc: 0.8747 - val_loss: 0.6034 - val_acc: 0.8428\n", "Epoch 61/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3578 - acc: 0.8779Epoch 00061: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3569 - acc: 0.8784 - val_loss: 0.6016 - val_acc: 0.8345\n", "Epoch 62/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3528 - acc: 0.8749Epoch 00062: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.3520 - acc: 0.8751 - val_loss: 0.6642 - val_acc: 0.8358\n", "Epoch 63/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3559 - acc: 0.8838Epoch 00063: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3543 - acc: 0.8849 - val_loss: 0.6594 - val_acc: 0.8307\n", "Epoch 64/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3746 - acc: 0.8724Epoch 00064: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3744 - acc: 0.8729 - val_loss: 0.6113 - val_acc: 0.8460\n", "Epoch 65/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.3479 - acc: 0.8793Epoch 00065: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3493 - acc: 0.8792 - val_loss: 0.6690 - val_acc: 0.8300\n", "Epoch 66/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3414 - acc: 0.8800Epoch 00066: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3418 - acc: 0.8795 - val_loss: 0.6082 - val_acc: 0.8415\n", "Epoch 67/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3346 - acc: 0.8839Epoch 00067: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.3352 - acc: 0.8841 - val_loss: 0.5873 - val_acc: 0.8409\n", "Epoch 68/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3649 - acc: 0.8779Epoch 00068: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3633 - acc: 0.8781 - val_loss: 0.6274 - val_acc: 0.8339\n", "Epoch 69/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3513 - acc: 0.8823Epoch 00069: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3529 - acc: 0.8817 - val_loss: 0.6459 - val_acc: 0.8396\n", "Epoch 70/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3364 - acc: 0.8819Epoch 00070: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3337 - acc: 0.8832 - val_loss: 0.6746 - val_acc: 0.8377\n", "Epoch 71/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3509 - acc: 0.8820Epoch 00071: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.3488 - acc: 0.8826 - val_loss: 0.6338 - val_acc: 0.8320\n", "Epoch 72/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.3196 - acc: 0.8923Epoch 00072: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.3228 - acc: 0.8916 - val_loss: 0.6401 - val_acc: 0.8339\n", "Epoch 73/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3173 - acc: 0.8954Epoch 00073: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.3184 - acc: 0.8946 - val_loss: 0.6636 - val_acc: 0.8256\n", "Epoch 74/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3132 - acc: 0.8882Epoch 00074: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.3150 - acc: 0.8876 - val_loss: 0.6343 - val_acc: 0.8402\n", "Epoch 75/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2999 - acc: 0.8949Epoch 00075: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2974 - acc: 0.8959 - val_loss: 0.6262 - val_acc: 0.8472\n", "Epoch 76/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3154 - acc: 0.8920Epoch 00076: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.3170 - acc: 0.8911 - val_loss: 0.6305 - val_acc: 0.8491\n", "Epoch 77/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3252 - acc: 0.8847Epoch 00077: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.3240 - acc: 0.8845 - val_loss: 0.6638 - val_acc: 0.8409\n", "Epoch 78/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3171 - acc: 0.8906Epoch 00078: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.3155 - acc: 0.8904 - val_loss: 0.6571 - val_acc: 0.8415\n", "Epoch 79/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.3110 - acc: 0.8961Epoch 00079: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3118 - acc: 0.8968 - val_loss: 0.6333 - val_acc: 0.8358\n", "Epoch 80/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3042 - acc: 0.8978Epoch 00080: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3030 - acc: 0.8979 - val_loss: 0.6800 - val_acc: 0.8313\n", "Epoch 81/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2977 - acc: 0.8976Epoch 00081: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2964 - acc: 0.8980 - val_loss: 0.6209 - val_acc: 0.8447\n", "Epoch 82/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2819 - acc: 0.9005Epoch 00082: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.2880 - acc: 0.8977 - val_loss: 0.6263 - val_acc: 0.8358\n", "Epoch 83/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3043 - acc: 0.8988Epoch 00083: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.3006 - acc: 0.9003 - val_loss: 0.6402 - val_acc: 0.8300\n", "Epoch 84/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.3073 - acc: 0.8977Epoch 00084: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.3102 - acc: 0.8958 - val_loss: 0.6481 - val_acc: 0.8402\n", "Epoch 85/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2837 - acc: 0.9021Epoch 00085: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2843 - acc: 0.9028 - val_loss: 0.7099 - val_acc: 0.8326\n", "Epoch 86/500\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "36/36 [============================>.] 
- ETA: 0s - loss: 0.2852 - acc: 0.8934Epoch 00086: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2827 - acc: 0.8946 - val_loss: 0.6542 - val_acc: 0.8313\n", "Epoch 87/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2605 - acc: 0.9100Epoch 00087: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2603 - acc: 0.9099 - val_loss: 0.7248 - val_acc: 0.8339\n", "Epoch 88/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2540 - acc: 0.9124Epoch 00088: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2527 - acc: 0.9126 - val_loss: 0.6729 - val_acc: 0.8281\n", "Epoch 89/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2785 - acc: 0.9067Epoch 00089: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2762 - acc: 0.9071 - val_loss: 0.6409 - val_acc: 0.8434\n", "Epoch 90/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2811 - acc: 0.9057Epoch 00090: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.2826 - acc: 0.9057 - val_loss: 0.6965 - val_acc: 0.8326\n", "Epoch 91/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2762 - acc: 0.9053Epoch 00091: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2760 - acc: 0.9047 - val_loss: 0.6528 - val_acc: 0.8409\n", "Epoch 92/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2481 - acc: 0.9175Epoch 00092: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2490 - acc: 0.9172 - val_loss: 0.7456 - val_acc: 0.8364\n", "Epoch 93/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.2682 - acc: 0.9093Epoch 00093: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.2645 - acc: 0.9103 - val_loss: 0.6381 - val_acc: 0.8300\n", "Epoch 94/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2412 - acc: 0.9162Epoch 00094: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2456 - acc: 0.9149 - val_loss: 0.6466 - val_acc: 0.8307\n", "Epoch 95/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2577 - acc: 0.9133Epoch 00095: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2563 - acc: 0.9135 - val_loss: 0.6787 - val_acc: 0.8396\n", "Epoch 96/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2504 - acc: 0.9117Epoch 00096: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2493 - acc: 0.9118 - val_loss: 0.6961 - val_acc: 0.8294\n", "Epoch 97/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2703 - acc: 0.9101Epoch 00097: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2702 - acc: 0.9102 - val_loss: 0.6318 - val_acc: 0.8313\n", "Epoch 98/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2509 - acc: 0.9138Epoch 00098: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2536 - acc: 0.9138 - val_loss: 0.6765 - val_acc: 0.8358\n", "Epoch 99/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2465 - acc: 0.9165Epoch 00099: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2473 - acc: 0.9164 - val_loss: 0.7415 - val_acc: 0.8453\n", "Epoch 100/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.2589 - acc: 0.9164Epoch 00100: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2592 - acc: 0.9157 - val_loss: 0.6162 - val_acc: 0.8434\n", "Epoch 101/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2313 - acc: 0.9192Epoch 00101: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2343 - acc: 0.9187 - val_loss: 0.7442 - val_acc: 0.8211\n", "Epoch 102/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2659 - acc: 0.9131Epoch 00102: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2676 - acc: 0.9121 - val_loss: 0.7387 - val_acc: 0.8358\n", "Epoch 103/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2458 - acc: 0.9161Epoch 00103: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.2468 - acc: 0.9159 - val_loss: 0.7013 - val_acc: 0.8307\n", "Epoch 104/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2610 - acc: 0.9126Epoch 00104: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2591 - acc: 0.9128 - val_loss: 0.6660 - val_acc: 0.8358\n", "Epoch 105/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2456 - acc: 0.9177Epoch 00105: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2472 - acc: 0.9169 - val_loss: 0.6947 - val_acc: 0.8320\n", "Epoch 106/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2656 - acc: 0.9108Epoch 00106: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2640 - acc: 0.9111 - val_loss: 0.6628 - val_acc: 0.8256\n", "Epoch 107/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.2191 - acc: 0.9208Epoch 00107: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2178 - acc: 0.9214 - val_loss: 0.6584 - val_acc: 0.8275\n", "Epoch 108/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2345 - acc: 0.9252Epoch 00108: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2340 - acc: 0.9251 - val_loss: 0.7057 - val_acc: 0.8351\n", "Epoch 109/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2327 - acc: 0.9211Epoch 00109: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2335 - acc: 0.9216 - val_loss: 0.6683 - val_acc: 0.8396\n", "Epoch 110/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2373 - acc: 0.9168Epoch 00110: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2360 - acc: 0.9171 - val_loss: 0.7181 - val_acc: 0.8447\n", "Epoch 111/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2314 - acc: 0.9206Epoch 00111: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2326 - acc: 0.9197 - val_loss: 0.6703 - val_acc: 0.8415\n", "Epoch 112/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2168 - acc: 0.9296Epoch 00112: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2162 - acc: 0.9292 - val_loss: 0.6879 - val_acc: 0.8230\n", "Epoch 113/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2379 - acc: 0.9163Epoch 00113: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2352 - acc: 0.9173 - val_loss: 0.7063 - val_acc: 0.8313\n", "Epoch 114/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.2372 - acc: 0.9215Epoch 00114: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2345 - acc: 0.9223 - val_loss: 0.6880 - val_acc: 0.8370\n", "Epoch 115/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2342 - acc: 0.9182Epoch 00115: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2340 - acc: 0.9183 - val_loss: 0.7267 - val_acc: 0.8358\n", "Epoch 116/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2214 - acc: 0.9253Epoch 00116: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2192 - acc: 0.9261 - val_loss: 0.7044 - val_acc: 0.8453\n", "Epoch 117/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2125 - acc: 0.9272Epoch 00117: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2096 - acc: 0.9283 - val_loss: 0.7022 - val_acc: 0.8294\n", "Epoch 118/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2182 - acc: 0.9238Epoch 00118: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2215 - acc: 0.9233 - val_loss: 0.6884 - val_acc: 0.8358\n", "Epoch 119/500\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "36/36 [============================>.] - ETA: 0s - loss: 0.2364 - acc: 0.9208Epoch 00119: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.2335 - acc: 0.9223 - val_loss: 0.6419 - val_acc: 0.8383\n", "Epoch 120/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2147 - acc: 0.9201Epoch 00120: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2187 - acc: 0.9197 - val_loss: 0.6482 - val_acc: 0.8511\n", "Epoch 121/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.2189 - acc: 0.9249Epoch 00121: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.2180 - acc: 0.9250 - val_loss: 0.7027 - val_acc: 0.8402\n", "Epoch 122/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2260 - acc: 0.9179Epoch 00122: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2254 - acc: 0.9182 - val_loss: 0.7382 - val_acc: 0.8345\n", "Epoch 123/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2152 - acc: 0.9280Epoch 00123: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2168 - acc: 0.9272 - val_loss: 0.7344 - val_acc: 0.8313\n", "Epoch 124/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2175 - acc: 0.9285Epoch 00124: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2173 - acc: 0.9287 - val_loss: 0.6971 - val_acc: 0.8307\n", "Epoch 125/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1990 - acc: 0.9331Epoch 00125: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1976 - acc: 0.9335 - val_loss: 0.7635 - val_acc: 0.8313\n", "Epoch 126/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2196 - acc: 0.9243Epoch 00126: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2176 - acc: 0.9251 - val_loss: 0.7031 - val_acc: 0.8300\n", "Epoch 127/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2033 - acc: 0.9295Epoch 00127: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2035 - acc: 0.9293 - val_loss: 0.6898 - val_acc: 0.8466\n", "Epoch 128/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.2109 - acc: 0.9255Epoch 00128: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2117 - acc: 0.9246 - val_loss: 0.7136 - val_acc: 0.8370\n", "Epoch 129/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2249 - acc: 0.9224Epoch 00129: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2263 - acc: 0.9226 - val_loss: 0.6668 - val_acc: 0.8549\n", "Epoch 130/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1981 - acc: 0.9323Epoch 00130: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1980 - acc: 0.9324 - val_loss: 0.7529 - val_acc: 0.8415\n", "Epoch 131/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1848 - acc: 0.9378Epoch 00131: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1881 - acc: 0.9365 - val_loss: 0.7328 - val_acc: 0.8428\n", "Epoch 132/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.2058 - acc: 0.9340Epoch 00132: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.2055 - acc: 0.9339 - val_loss: 0.7227 - val_acc: 0.8491\n", "Epoch 133/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1941 - acc: 0.9318Epoch 00133: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1931 - acc: 0.9317 - val_loss: 0.7607 - val_acc: 0.8364\n", "Epoch 134/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1900 - acc: 0.9344Epoch 00134: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1897 - acc: 0.9343 - val_loss: 0.7203 - val_acc: 0.8402\n", "Epoch 135/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1883 - acc: 0.9318Epoch 00135: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1906 - acc: 0.9309 - val_loss: 0.7415 - val_acc: 0.8460\n", "Epoch 136/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1877 - acc: 0.9328Epoch 00136: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1913 - acc: 0.9319 - val_loss: 0.7929 - val_acc: 0.8434\n", "Epoch 137/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1997 - acc: 0.9280Epoch 00137: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1991 - acc: 0.9278 - val_loss: 0.7579 - val_acc: 0.8428\n", "Epoch 138/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1918 - acc: 0.9334Epoch 00138: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1907 - acc: 0.9337 - val_loss: 0.7352 - val_acc: 0.8339\n", "Epoch 139/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1884 - acc: 0.9331Epoch 00139: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1907 - acc: 0.9322 - val_loss: 0.7431 - val_acc: 0.8434\n", "Epoch 140/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1881 - acc: 0.9339Epoch 00140: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1884 - acc: 0.9338 - val_loss: 0.7056 - val_acc: 0.8358\n", "Epoch 141/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1832 - acc: 0.9375Epoch 00141: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1850 - acc: 0.9375 - val_loss: 0.7142 - val_acc: 0.8351\n", "Epoch 142/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1873 - acc: 0.9376Epoch 00142: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1867 - acc: 0.9383 - val_loss: 0.7610 - val_acc: 0.8288\n", "Epoch 143/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1791 - acc: 0.9381Epoch 00143: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1794 - acc: 0.9381 - val_loss: 0.7605 - val_acc: 0.8313\n", "Epoch 144/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1867 - acc: 0.9356Epoch 00144: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1848 - acc: 0.9361 - val_loss: 0.7256 - val_acc: 0.8485\n", "Epoch 145/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1859 - acc: 0.9372Epoch 00145: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1863 - acc: 0.9366 - val_loss: 0.7343 - val_acc: 0.8370\n", "Epoch 146/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1837 - acc: 0.9375Epoch 00146: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1853 - acc: 0.9382 - val_loss: 0.7667 - val_acc: 0.8351\n", "Epoch 147/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1895 - acc: 0.9357Epoch 00147: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1910 - acc: 0.9351 - val_loss: 0.7825 - val_acc: 0.8402\n", "Epoch 148/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1888 - acc: 0.9374Epoch 00148: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1894 - acc: 0.9372 - val_loss: 0.7609 - val_acc: 0.8345\n", "Epoch 149/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1802 - acc: 0.9395Epoch 00149: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1784 - acc: 0.9396 - val_loss: 0.7651 - val_acc: 0.8288\n", "Epoch 150/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1922 - acc: 0.9367Epoch 00150: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1928 - acc: 0.9363 - val_loss: 0.7432 - val_acc: 0.8256\n", "Epoch 151/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1691 - acc: 0.9429Epoch 00151: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1704 - acc: 0.9423 - val_loss: 0.7938 - val_acc: 0.8421\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 152/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1963 - acc: 0.9369Epoch 00152: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.1966 - acc: 0.9367 - val_loss: 0.6949 - val_acc: 0.8523\n", "Epoch 153/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1867 - acc: 0.9341Epoch 00153: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1838 - acc: 0.9350 - val_loss: 0.7462 - val_acc: 0.8447\n", "Epoch 154/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1798 - acc: 0.9371Epoch 00154: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1779 - acc: 0.9378 - val_loss: 0.7737 - val_acc: 0.8447\n", "Epoch 155/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1666 - acc: 0.9420Epoch 00155: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1656 - acc: 0.9421 - val_loss: 0.8352 - val_acc: 0.8351\n", "Epoch 156/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1699 - acc: 0.9368Epoch 00156: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1688 - acc: 0.9371 - val_loss: 0.7778 - val_acc: 0.8320\n", "Epoch 157/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1670 - acc: 0.9420Epoch 00157: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1662 - acc: 0.9421 - val_loss: 0.7956 - val_acc: 0.8364\n", "Epoch 158/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1816 - acc: 0.9392Epoch 00158: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1835 - acc: 0.9387 - val_loss: 0.7609 - val_acc: 0.8281\n", "Epoch 159/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1667 - acc: 0.9459Epoch 00159: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1661 - acc: 0.9459 - val_loss: 0.7770 - val_acc: 0.8402\n", "Epoch 160/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1661 - acc: 0.9453Epoch 00160: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1647 - acc: 0.9459 - val_loss: 0.8009 - val_acc: 0.8390\n", "Epoch 161/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1838 - acc: 0.9371Epoch 00161: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1838 - acc: 0.9372 - val_loss: 0.7777 - val_acc: 0.8332\n", "Epoch 162/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1687 - acc: 0.9459Epoch 00162: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1672 - acc: 0.9465 - val_loss: 0.7428 - val_acc: 0.8332\n", "Epoch 163/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1613 - acc: 0.9414Epoch 00163: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1614 - acc: 0.9413 - val_loss: 0.7533 - val_acc: 0.8428\n", "Epoch 164/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1591 - acc: 0.9472Epoch 00164: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1599 - acc: 0.9465 - val_loss: 0.7655 - val_acc: 0.8345\n", "Epoch 165/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1641 - acc: 0.9431Epoch 00165: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1624 - acc: 0.9436 - val_loss: 0.7981 - val_acc: 0.8313\n", "Epoch 166/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1674 - acc: 0.9458Epoch 00166: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1663 - acc: 0.9458 - val_loss: 0.7350 - val_acc: 0.8421\n", "Epoch 167/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1652 - acc: 0.9427Epoch 00167: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1690 - acc: 0.9415 - val_loss: 0.7683 - val_acc: 0.8326\n", "Epoch 168/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1323 - acc: 0.9518Epoch 00168: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1330 - acc: 0.9514 - val_loss: 0.8201 - val_acc: 0.8370\n", "Epoch 169/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1656 - acc: 0.9437Epoch 00169: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1657 - acc: 0.9435 - val_loss: 0.7566 - val_acc: 0.8409\n", "Epoch 170/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1620 - acc: 0.9485Epoch 00170: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1610 - acc: 0.9487 - val_loss: 0.7926 - val_acc: 0.8313\n", "Epoch 171/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1669 - acc: 0.9436Epoch 00171: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1675 - acc: 0.9435 - val_loss: 0.7226 - val_acc: 0.8402\n", "Epoch 172/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1542 - acc: 0.9481Epoch 00172: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1562 - acc: 0.9474 - val_loss: 0.7653 - val_acc: 0.8377\n", "Epoch 173/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1555 - acc: 0.9431Epoch 00173: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1545 - acc: 0.9435 - val_loss: 0.8355 - val_acc: 0.8288\n", "Epoch 174/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1600 - acc: 0.9489Epoch 00174: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.1581 - acc: 0.9498 - val_loss: 0.7620 - val_acc: 0.8364\n", "Epoch 175/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1315 - acc: 0.9517Epoch 00175: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1343 - acc: 0.9514 - val_loss: 0.7376 - val_acc: 0.8358\n", "Epoch 176/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1520 - acc: 0.9505Epoch 00176: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1540 - acc: 0.9495 - val_loss: 0.7659 - val_acc: 0.8307\n", "Epoch 177/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1534 - acc: 0.9509Epoch 00177: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1542 - acc: 0.9507 - val_loss: 0.7919 - val_acc: 0.8326\n", "Epoch 178/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1575 - acc: 0.9492Epoch 00178: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1562 - acc: 0.9498 - val_loss: 0.8016 - val_acc: 0.8294\n", "Epoch 179/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1566 - acc: 0.9461Epoch 00179: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1536 - acc: 0.9476 - val_loss: 0.7470 - val_acc: 0.8339\n", "Epoch 180/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1376 - acc: 0.9518Epoch 00180: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1371 - acc: 0.9516 - val_loss: 0.7911 - val_acc: 0.8364\n", "Epoch 181/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1287 - acc: 0.9556Epoch 00181: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1282 - acc: 0.9553 - val_loss: 0.7832 - val_acc: 0.8307\n", "Epoch 182/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1497 - acc: 0.9500Epoch 00182: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1525 - acc: 0.9494 - val_loss: 0.8551 - val_acc: 0.8230\n", "Epoch 183/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1591 - acc: 0.9507Epoch 00183: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1593 - acc: 0.9510 - val_loss: 0.7805 - val_acc: 0.8332\n", "Epoch 184/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1356 - acc: 0.9531Epoch 00184: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1370 - acc: 0.9519 - val_loss: 0.7919 - val_acc: 0.8370\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 185/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1352 - acc: 0.9521Epoch 00185: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1388 - acc: 0.9517 - val_loss: 0.8131 - val_acc: 0.8339\n", "Epoch 186/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1512 - acc: 0.9466Epoch 00186: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.1549 - acc: 0.9464 - val_loss: 0.7539 - val_acc: 0.8498\n", "Epoch 187/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1510 - acc: 0.9508Epoch 00187: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1507 - acc: 0.9505 - val_loss: 0.8096 - val_acc: 0.8396\n", "Epoch 188/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1540 - acc: 0.9474Epoch 00188: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1544 - acc: 0.9471 - val_loss: 0.8447 - val_acc: 0.8402\n", "Epoch 189/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1331 - acc: 0.9541Epoch 00189: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1328 - acc: 0.9545 - val_loss: 0.8441 - val_acc: 0.8390\n", "Epoch 190/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1489 - acc: 0.9539Epoch 00190: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1484 - acc: 0.9533 - val_loss: 0.7995 - val_acc: 0.8440\n", "Epoch 191/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1501 - acc: 0.9524Epoch 00191: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1491 - acc: 0.9526 - val_loss: 0.7635 - val_acc: 0.8409\n", "Epoch 192/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1259 - acc: 0.9565Epoch 00192: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.1256 - acc: 0.9566 - val_loss: 0.8084 - val_acc: 0.8523\n", "Epoch 193/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1510 - acc: 0.9523Epoch 00193: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.1511 - acc: 0.9527 - val_loss: 0.8103 - val_acc: 0.8351\n", "Epoch 194/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1472 - acc: 0.9447Epoch 00194: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1474 - acc: 0.9443 - val_loss: 0.8283 - val_acc: 0.8415\n", "Epoch 195/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1469 - acc: 0.9516Epoch 00195: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1455 - acc: 0.9519 - val_loss: 0.8054 - val_acc: 0.8332\n", "Epoch 196/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1538 - acc: 0.9437Epoch 00196: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1541 - acc: 0.9439 - val_loss: 0.7705 - val_acc: 0.8428\n", "Epoch 197/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1502 - acc: 0.9480Epoch 00197: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1492 - acc: 0.9479 - val_loss: 0.7990 - val_acc: 0.8523\n", "Epoch 198/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1414 - acc: 0.9533- ETA: 2Epoch 00198: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1398 - acc: 0.9537 - val_loss: 0.7710 - val_acc: 0.8421\n", "Epoch 199/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1423 - acc: 0.9501Epoch 00199: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1403 - acc: 0.9507 - val_loss: 0.8304 - val_acc: 0.8358\n", "Epoch 200/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1361 - acc: 0.9558Epoch 00200: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1347 - acc: 0.9561 - val_loss: 0.8123 - val_acc: 0.8434\n", "Epoch 201/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1493 - acc: 0.9525Epoch 00201: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1488 - acc: 0.9527 - val_loss: 0.8227 - val_acc: 0.8364\n", "Epoch 202/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1299 - acc: 0.9535Epoch 00202: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1298 - acc: 0.9539 - val_loss: 0.8841 - val_acc: 0.8211\n", "Epoch 203/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1242 - acc: 0.9584Epoch 00203: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1250 - acc: 0.9576 - val_loss: 0.8624 - val_acc: 0.8370\n", "Epoch 204/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1338 - acc: 0.9536- ETA: 0s - loss: 0.1353 - acEpoch 00204: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1345 - acc: 0.9536 - val_loss: 0.8160 - val_acc: 0.8351\n", "Epoch 205/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1328 - acc: 0.9532Epoch 00205: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1331 - acc: 0.9530 - val_loss: 0.8733 - val_acc: 0.8390\n", "Epoch 206/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1309 - acc: 0.9543Epoch 00206: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1330 - acc: 0.9536 - val_loss: 0.8822 - val_acc: 0.8332\n", "Epoch 207/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1411 - acc: 0.9499Epoch 00207: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1402 - acc: 0.9502 - val_loss: 0.8166 - val_acc: 0.8326\n", "Epoch 208/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1333 - acc: 0.9585Epoch 00208: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1321 - acc: 0.9587 - val_loss: 0.8256 - val_acc: 0.8300\n", "Epoch 209/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1402 - acc: 0.9520Epoch 00209: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.1396 - acc: 0.9522 - val_loss: 0.8877 - val_acc: 0.8269\n", "Epoch 210/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1457 - acc: 0.9544Epoch 00210: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1479 - acc: 0.9538 - val_loss: 0.8132 - val_acc: 0.8370\n", "Epoch 211/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1376 - acc: 0.9527Epoch 00211: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1393 - acc: 0.9523 - val_loss: 0.8469 - val_acc: 0.8370\n", "Epoch 212/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1285 - acc: 0.9561Epoch 00212: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1269 - acc: 0.9564 - val_loss: 0.8404 - val_acc: 0.8370\n", "Epoch 213/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1182 - acc: 0.9595Epoch 00213: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1157 - acc: 0.9606 - val_loss: 0.8808 - val_acc: 0.8479\n", "Epoch 214/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1357 - acc: 0.9526Epoch 00214: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1347 - acc: 0.9530 - val_loss: 0.8195 - val_acc: 0.8498\n", "Epoch 215/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1451 - acc: 0.9508Epoch 00215: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1437 - acc: 0.9513 - val_loss: 0.7975 - val_acc: 0.8479\n", "Epoch 216/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1279 - acc: 0.9569Epoch 00216: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1283 - acc: 0.9568 - val_loss: 0.8050 - val_acc: 0.8504\n", "Epoch 217/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1358 - acc: 0.9534Epoch 00217: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1334 - acc: 0.9546 - val_loss: 0.8024 - val_acc: 0.8409\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 218/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1249 - acc: 0.9594Epoch 00218: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.1240 - acc: 0.9595 - val_loss: 0.8464 - val_acc: 0.8440\n", "Epoch 219/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1181 - acc: 0.9607Epoch 00219: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1192 - acc: 0.9603 - val_loss: 0.8091 - val_acc: 0.8453\n", "Epoch 220/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1306 - acc: 0.9551Epoch 00220: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1289 - acc: 0.9557 - val_loss: 0.8498 - val_acc: 0.8409\n", "Epoch 221/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1223 - acc: 0.9577Epoch 00221: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1224 - acc: 0.9578 - val_loss: 0.8433 - val_acc: 0.8485\n", "Epoch 222/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1338 - acc: 0.9606Epoch 00222: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1356 - acc: 0.9597 - val_loss: 0.7811 - val_acc: 0.8377\n", "Epoch 223/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1192 - acc: 0.9609Epoch 00223: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1207 - acc: 0.9603 - val_loss: 0.8315 - val_acc: 0.8358\n", "Epoch 224/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1242 - acc: 0.9570Epoch 00224: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1240 - acc: 0.9571 - val_loss: 0.7593 - val_acc: 0.8428\n", "Epoch 225/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1095 - acc: 0.9625Epoch 00225: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1097 - acc: 0.9622 - val_loss: 0.8327 - val_acc: 0.8409\n", "Epoch 226/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1282 - acc: 0.9588Epoch 00226: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1295 - acc: 0.9578 - val_loss: 0.8427 - val_acc: 0.8326\n", "Epoch 227/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1234 - acc: 0.9596Epoch 00227: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1225 - acc: 0.9596 - val_loss: 0.7906 - val_acc: 0.8421\n", "Epoch 228/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1211 - acc: 0.9618Epoch 00228: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1237 - acc: 0.9614 - val_loss: 0.8327 - val_acc: 0.8288\n", "Epoch 229/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1139 - acc: 0.9609Epoch 00229: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1140 - acc: 0.9605 - val_loss: 0.8111 - val_acc: 0.8402\n", "Epoch 230/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1225 - acc: 0.9551Epoch 00230: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1221 - acc: 0.9555 - val_loss: 0.8342 - val_acc: 0.8358\n", "Epoch 231/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1159 - acc: 0.9619- ETA: 2s - loEpoch 00231: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1170 - acc: 0.9616 - val_loss: 0.7884 - val_acc: 0.8472\n", "Epoch 232/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1200 - acc: 0.9612Epoch 00232: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1190 - acc: 0.9616 - val_loss: 0.8188 - val_acc: 0.8434\n", "Epoch 233/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1235 - acc: 0.9564Epoch 00233: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1217 - acc: 0.9572 - val_loss: 0.8487 - val_acc: 0.8326\n", "Epoch 234/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1117 - acc: 0.9620Epoch 00234: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1095 - acc: 0.9626 - val_loss: 0.8511 - val_acc: 0.8383\n", "Epoch 235/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1246 - acc: 0.9578Epoch 00235: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1238 - acc: 0.9579 - val_loss: 0.8630 - val_acc: 0.8415\n", "Epoch 236/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1203 - acc: 0.9571Epoch 00236: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1218 - acc: 0.9561 - val_loss: 0.8420 - val_acc: 0.8300\n", "Epoch 237/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1100 - acc: 0.9637Epoch 00237: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1093 - acc: 0.9636 - val_loss: 0.8694 - val_acc: 0.8383\n", "Epoch 238/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1165 - acc: 0.9558Epoch 00238: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1191 - acc: 0.9549 - val_loss: 0.9292 - val_acc: 0.8377\n", "Epoch 239/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1184 - acc: 0.9587Epoch 00239: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1175 - acc: 0.9590 - val_loss: 0.8920 - val_acc: 0.8383\n", "Epoch 240/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1192 - acc: 0.9591Epoch 00240: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1207 - acc: 0.9591 - val_loss: 0.8899 - val_acc: 0.8466\n", "Epoch 241/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1277 - acc: 0.9575- ETA: 1s - loss: 0.1236 - aEpoch 00241: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.1267 - acc: 0.9574 - val_loss: 0.8474 - val_acc: 0.8326\n", "Epoch 242/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1243 - acc: 0.9604Epoch 00242: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1259 - acc: 0.9604 - val_loss: 0.8295 - val_acc: 0.8421\n", "Epoch 243/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1198 - acc: 0.9618Epoch 00243: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1191 - acc: 0.9620 - val_loss: 0.8456 - val_acc: 0.8409\n", "Epoch 244/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1133 - acc: 0.9609Epoch 00244: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1127 - acc: 0.9611 - val_loss: 0.8914 - val_acc: 0.8434\n", "Epoch 245/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1031 - acc: 0.9657Epoch 00245: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1052 - acc: 0.9652 - val_loss: 0.8967 - val_acc: 0.8390\n", "Epoch 246/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1237 - acc: 0.9609Epoch 00246: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1236 - acc: 0.9607 - val_loss: 0.7850 - val_acc: 0.8491\n", "Epoch 247/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1190 - acc: 0.9618Epoch 00247: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1195 - acc: 0.9618 - val_loss: 0.8843 - val_acc: 0.8421\n", "Epoch 248/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1040 - acc: 0.9658- ETA: 2s - Epoch 00248: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1073 - acc: 0.9655 - val_loss: 0.9663 - val_acc: 0.8364\n", "Epoch 249/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1001 - acc: 0.9671Epoch 00249: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0998 - acc: 0.9673 - val_loss: 0.8615 - val_acc: 0.8479\n", "Epoch 250/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1058 - acc: 0.9631Epoch 00250: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1048 - acc: 0.9638 - val_loss: 0.8409 - val_acc: 0.8479\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 251/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1051 - acc: 0.9657Epoch 00251: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1063 - acc: 0.9647 - val_loss: 0.8615 - val_acc: 0.8479\n", "Epoch 252/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1125 - acc: 0.9626Epoch 00252: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1112 - acc: 0.9634 - val_loss: 0.9004 - val_acc: 0.8358\n", "Epoch 253/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1268 - acc: 0.9601Epoch 00253: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1269 - acc: 0.9601 - val_loss: 0.8422 - val_acc: 0.8364\n", "Epoch 254/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1173 - acc: 0.9607Epoch 00254: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1168 - acc: 0.9610 - val_loss: 0.8942 - val_acc: 0.8364\n", "Epoch 255/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1175 - acc: 0.9616Epoch 00255: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1160 - acc: 0.9618 - val_loss: 0.9011 - val_acc: 0.8396\n", "Epoch 256/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1230 - acc: 0.9581Epoch 00256: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1217 - acc: 0.9588 - val_loss: 0.8138 - val_acc: 0.8504\n", "Epoch 257/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1044 - acc: 0.9636Epoch 00257: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1051 - acc: 0.9633 - val_loss: 0.9437 - val_acc: 0.8358\n", "Epoch 258/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1051 - acc: 0.9620Epoch 00258: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.1052 - acc: 0.9624 - val_loss: 0.9118 - val_acc: 0.8390\n", "Epoch 259/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1019 - acc: 0.9631Epoch 00259: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1005 - acc: 0.9637 - val_loss: 0.8504 - val_acc: 0.8421\n", "Epoch 260/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1225 - acc: 0.9564Epoch 00260: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.1199 - acc: 0.9574 - val_loss: 0.8339 - val_acc: 0.8530\n", "Epoch 261/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1168 - acc: 0.9644Epoch 00261: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1160 - acc: 0.9648 - val_loss: 0.8456 - val_acc: 0.8453\n", "Epoch 262/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1030 - acc: 0.9647Epoch 00262: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1021 - acc: 0.9648 - val_loss: 0.8807 - val_acc: 0.8434\n", "Epoch 263/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1168 - acc: 0.9608Epoch 00263: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1175 - acc: 0.9612 - val_loss: 0.8921 - val_acc: 0.8383\n", "Epoch 264/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1062 - acc: 0.9669Epoch 00264: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.1068 - acc: 0.9670 - val_loss: 0.9141 - val_acc: 0.8345\n", "Epoch 265/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1192 - acc: 0.9587Epoch 00265: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1174 - acc: 0.9594 - val_loss: 0.8800 - val_acc: 0.8377\n", "Epoch 266/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1049 - acc: 0.9655Epoch 00266: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1046 - acc: 0.9656 - val_loss: 0.8957 - val_acc: 0.8498\n", "Epoch 267/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1043 - acc: 0.9653Epoch 00267: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1032 - acc: 0.9656 - val_loss: 0.9140 - val_acc: 0.8320\n", "Epoch 268/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1046 - acc: 0.9650Epoch 00268: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1054 - acc: 0.9651 - val_loss: 0.9021 - val_acc: 0.8428\n", "Epoch 269/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0888 - acc: 0.9701Epoch 00269: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0891 - acc: 0.9699 - val_loss: 0.9539 - val_acc: 0.8332\n", "Epoch 270/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0908 - acc: 0.9676Epoch 00270: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0906 - acc: 0.9679 - val_loss: 0.9218 - val_acc: 0.8358\n", "Epoch 271/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1024 - acc: 0.9664Epoch 00271: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1030 - acc: 0.9664 - val_loss: 0.9487 - val_acc: 0.8364\n", "Epoch 272/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1217 - acc: 0.9623Epoch 00272: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1205 - acc: 0.9624 - val_loss: 0.8738 - val_acc: 0.8339\n", "Epoch 273/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1086 - acc: 0.9637Epoch 00273: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1080 - acc: 0.9638 - val_loss: 0.9387 - val_acc: 0.8370\n", "Epoch 274/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1126 - acc: 0.9620Epoch 00274: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.1122 - acc: 0.9618 - val_loss: 0.9816 - val_acc: 0.8192\n", "Epoch 275/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1181 - acc: 0.9615Epoch 00275: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1168 - acc: 0.9617 - val_loss: 0.8479 - val_acc: 0.8269\n", "Epoch 276/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1032 - acc: 0.9652Epoch 00276: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1026 - acc: 0.9655 - val_loss: 0.8702 - val_acc: 0.8281\n", "Epoch 277/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1065 - acc: 0.9673Epoch 00277: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1043 - acc: 0.9678 - val_loss: 0.8493 - val_acc: 0.8370\n", "Epoch 278/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0981 - acc: 0.9654Epoch 00278: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0975 - acc: 0.9660 - val_loss: 0.8671 - val_acc: 0.8377\n", "Epoch 279/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0916 - acc: 0.9696Epoch 00279: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0908 - acc: 0.9700 - val_loss: 0.8617 - val_acc: 0.8402\n", "Epoch 280/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1162 - acc: 0.9645Epoch 00280: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1157 - acc: 0.9646 - val_loss: 0.8822 - val_acc: 0.8396\n", "Epoch 281/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0878 - acc: 0.9696Epoch 00281: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0874 - acc: 0.9696 - val_loss: 0.8706 - val_acc: 0.8396\n", "Epoch 282/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0903 - acc: 0.9716Epoch 00282: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0888 - acc: 0.9721 - val_loss: 0.9473 - val_acc: 0.8364\n", "Epoch 283/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0996 - acc: 0.9667Epoch 00283: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0985 - acc: 0.9672 - val_loss: 0.9164 - val_acc: 0.8453\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 284/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0990 - acc: 0.9708Epoch 00284: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0997 - acc: 0.9705 - val_loss: 0.9444 - val_acc: 0.8390\n", "Epoch 285/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0948 - acc: 0.9686Epoch 00285: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0932 - acc: 0.9690 - val_loss: 0.8981 - val_acc: 0.8491\n", "Epoch 286/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0924 - acc: 0.9703Epoch 00286: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0932 - acc: 0.9700 - val_loss: 0.8950 - val_acc: 0.8428\n", "Epoch 287/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0911 - acc: 0.9669Epoch 00287: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0901 - acc: 0.9672 - val_loss: 0.9313 - val_acc: 0.8383\n", "Epoch 288/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0883 - acc: 0.9709Epoch 00288: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0886 - acc: 0.9713 - val_loss: 0.9419 - val_acc: 0.8326\n", "Epoch 289/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1013 - acc: 0.9689Epoch 00289: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0996 - acc: 0.9694 - val_loss: 0.9295 - val_acc: 0.8294\n", "Epoch 290/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1083 - acc: 0.9654Epoch 00290: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1086 - acc: 0.9657 - val_loss: 0.8693 - val_acc: 0.8390\n", "Epoch 291/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1022 - acc: 0.9662Epoch 00291: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1014 - acc: 0.9662 - val_loss: 0.8900 - val_acc: 0.8415\n", "Epoch 292/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0853 - acc: 0.9739- ETA: 0s - loss: 0.0847 - acEpoch 00292: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.0866 - acc: 0.9739 - val_loss: 0.9220 - val_acc: 0.8434\n", "Epoch 293/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0890 - acc: 0.9705Epoch 00293: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0899 - acc: 0.9705 - val_loss: 0.9326 - val_acc: 0.8377\n", "Epoch 294/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1047 - acc: 0.9663Epoch 00294: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.1067 - acc: 0.9659 - val_loss: 0.9249 - val_acc: 0.8383\n", "Epoch 295/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1135 - acc: 0.9612Epoch 00295: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1151 - acc: 0.9612 - val_loss: 0.9587 - val_acc: 0.8396\n", "Epoch 296/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1041 - acc: 0.9665Epoch 00296: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.1065 - acc: 0.9661 - val_loss: 0.8999 - val_acc: 0.8377\n", "Epoch 297/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0937 - acc: 0.9673Epoch 00297: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0939 - acc: 0.9674 - val_loss: 0.9012 - val_acc: 0.8434\n", "Epoch 298/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0957 - acc: 0.9680Epoch 00298: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0941 - acc: 0.9687 - val_loss: 0.8999 - val_acc: 0.8364\n", "Epoch 299/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0816 - acc: 0.9722Epoch 00299: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0813 - acc: 0.9719 - val_loss: 0.9812 - val_acc: 0.8377\n", "Epoch 300/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0940 - acc: 0.9675Epoch 00300: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0935 - acc: 0.9677 - val_loss: 0.9308 - val_acc: 0.8472\n", "Epoch 301/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0916 - acc: 0.9692Epoch 00301: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0922 - acc: 0.9689 - val_loss: 0.9597 - val_acc: 0.8351\n", "Epoch 302/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1075 - acc: 0.9668Epoch 00302: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1088 - acc: 0.9668 - val_loss: 0.8956 - val_acc: 0.8370\n", "Epoch 303/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.1012 - acc: 0.9657Epoch 00303: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1034 - acc: 0.9653 - val_loss: 0.8450 - val_acc: 0.8415\n", "Epoch 304/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0999 - acc: 0.9688Epoch 00304: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0998 - acc: 0.9685 - val_loss: 0.8663 - val_acc: 0.8479\n", "Epoch 305/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1054 - acc: 0.9652Epoch 00305: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.1047 - acc: 0.9658 - val_loss: 0.9065 - val_acc: 0.8383\n", "Epoch 306/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0985 - acc: 0.9672Epoch 00306: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0981 - acc: 0.9675 - val_loss: 0.8601 - val_acc: 0.8517\n", "Epoch 307/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0861 - acc: 0.9711Epoch 00307: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0868 - acc: 0.9710 - val_loss: 0.9492 - val_acc: 0.8440\n", "Epoch 308/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0740 - acc: 0.9742Epoch 00308: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0749 - acc: 0.9741 - val_loss: 0.8616 - val_acc: 0.8460\n", "Epoch 309/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0851 - acc: 0.9715Epoch 00309: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0887 - acc: 0.9708 - val_loss: 0.8976 - val_acc: 0.8453\n", "Epoch 310/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0895 - acc: 0.9723Epoch 00310: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0890 - acc: 0.9720 - val_loss: 0.9182 - val_acc: 0.8472\n", "Epoch 311/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.1003 - acc: 0.9649Epoch 00311: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0984 - acc: 0.9652 - val_loss: 0.9337 - val_acc: 0.8479\n", "Epoch 312/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0871 - acc: 0.9708Epoch 00312: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0899 - acc: 0.9703 - val_loss: 0.9141 - val_acc: 0.8466\n", "Epoch 313/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0970 - acc: 0.9709Epoch 00313: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0957 - acc: 0.9713 - val_loss: 0.8519 - val_acc: 0.8421\n", "Epoch 314/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0875 - acc: 0.9711Epoch 00314: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0857 - acc: 0.9717 - val_loss: 0.9057 - val_acc: 0.8472\n", "Epoch 315/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0914 - acc: 0.9691Epoch 00315: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0915 - acc: 0.9689 - val_loss: 0.8846 - val_acc: 0.8472\n", "Epoch 316/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0897 - acc: 0.9709Epoch 00316: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0912 - acc: 0.9700 - val_loss: 0.9142 - val_acc: 0.8383\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 317/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0966 - acc: 0.9716Epoch 00317: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0953 - acc: 0.9719 - val_loss: 0.8820 - val_acc: 0.8440\n", "Epoch 318/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0978 - acc: 0.9690Epoch 00318: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0961 - acc: 0.9697 - val_loss: 0.8954 - val_acc: 0.8396\n", "Epoch 319/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0867 - acc: 0.9720Epoch 00319: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0861 - acc: 0.9723 - val_loss: 0.9056 - val_acc: 0.8517\n", "Epoch 320/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0841 - acc: 0.9721Epoch 00320: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0834 - acc: 0.9722 - val_loss: 0.9595 - val_acc: 0.8460\n", "Epoch 321/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0980 - acc: 0.9684Epoch 00321: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0964 - acc: 0.9690 - val_loss: 0.9432 - val_acc: 0.8383\n", "Epoch 322/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0766 - acc: 0.9726- ETA: 2s Epoch 00322: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0787 - acc: 0.9729 - val_loss: 0.9331 - val_acc: 0.8466\n", "Epoch 323/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0859 - acc: 0.9712Epoch 00323: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0864 - acc: 0.9711 - val_loss: 0.8698 - val_acc: 0.8460\n", "Epoch 324/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0787 - acc: 0.9743Epoch 00324: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0772 - acc: 0.9748 - val_loss: 0.9687 - val_acc: 0.8440\n", "Epoch 325/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0764 - acc: 0.9739Epoch 00325: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0763 - acc: 0.9744 - val_loss: 0.9621 - val_acc: 0.8523\n", "Epoch 326/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0876 - acc: 0.9699Epoch 00326: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0870 - acc: 0.9702 - val_loss: 0.9773 - val_acc: 0.8428\n", "Epoch 327/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0845 - acc: 0.9722Epoch 00327: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0844 - acc: 0.9723 - val_loss: 0.8746 - val_acc: 0.8517\n", "Epoch 328/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0917 - acc: 0.9736Epoch 00328: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0903 - acc: 0.9739 - val_loss: 0.8660 - val_acc: 0.8523\n", "Epoch 329/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0957 - acc: 0.9712Epoch 00329: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0967 - acc: 0.9709 - val_loss: 0.8659 - val_acc: 0.8428\n", "Epoch 330/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0771 - acc: 0.9752Epoch 00330: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0792 - acc: 0.9748 - val_loss: 1.0340 - val_acc: 0.8390\n", "Epoch 331/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0915 - acc: 0.9692Epoch 00331: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0927 - acc: 0.9691 - val_loss: 0.9530 - val_acc: 0.8460\n", "Epoch 332/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0901 - acc: 0.9688Epoch 00332: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0886 - acc: 0.9694 - val_loss: 0.9421 - val_acc: 0.8472\n", "Epoch 333/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0854 - acc: 0.9719Epoch 00333: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0858 - acc: 0.9720 - val_loss: 0.9137 - val_acc: 0.8530\n", "Epoch 334/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0838 - acc: 0.9717Epoch 00334: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0823 - acc: 0.9725 - val_loss: 0.8785 - val_acc: 0.8440\n", "Epoch 335/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0833 - acc: 0.9730Epoch 00335: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0834 - acc: 0.9727 - val_loss: 0.8523 - val_acc: 0.8421\n", "Epoch 336/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0865 - acc: 0.9723Epoch 00336: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0846 - acc: 0.9730 - val_loss: 0.9212 - val_acc: 0.8390\n", "Epoch 337/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0857 - acc: 0.9723Epoch 00337: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0854 - acc: 0.9720 - val_loss: 0.8496 - val_acc: 0.8511\n", "Epoch 338/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0757 - acc: 0.9751- ETA: 1s - loss: 0.0854Epoch 00338: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0756 - acc: 0.9752 - val_loss: 0.9761 - val_acc: 0.8498\n", "Epoch 339/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0818 - acc: 0.9773Epoch 00339: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0801 - acc: 0.9775 - val_loss: 0.9354 - val_acc: 0.8370\n", "Epoch 340/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0779 - acc: 0.9755Epoch 00340: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0777 - acc: 0.9758 - val_loss: 0.9492 - val_acc: 0.8390\n", "Epoch 341/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0871 - acc: 0.9719Epoch 00341: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0860 - acc: 0.9721 - val_loss: 0.9905 - val_acc: 0.8479\n", "Epoch 342/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0870 - acc: 0.9736Epoch 00342: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0878 - acc: 0.9731 - val_loss: 0.9397 - val_acc: 0.8415\n", "Epoch 343/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0847 - acc: 0.9709Epoch 00343: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0861 - acc: 0.9706 - val_loss: 0.9330 - val_acc: 0.8434\n", "Epoch 344/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0772 - acc: 0.9706- ETA: 2Epoch 00344: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0758 - acc: 0.9712 - val_loss: 0.9827 - val_acc: 0.8453\n", "Epoch 345/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0853 - acc: 0.9706Epoch 00345: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0846 - acc: 0.9709 - val_loss: 0.9131 - val_acc: 0.8415\n", "Epoch 346/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0874 - acc: 0.9714Epoch 00346: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0874 - acc: 0.9713 - val_loss: 0.9346 - val_acc: 0.8409\n", "Epoch 347/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0778 - acc: 0.9747- ETA: 0s - loss: 0.0778 - acc: 0.97Epoch 00347: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0774 - acc: 0.9747 - val_loss: 0.9450 - val_acc: 0.8485\n", "Epoch 348/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0879 - acc: 0.9716Epoch 00348: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0877 - acc: 0.9716 - val_loss: 1.0111 - val_acc: 0.8421\n", "Epoch 349/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0684 - acc: 0.9759Epoch 00349: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0684 - acc: 0.9755 - val_loss: 0.9461 - val_acc: 0.8402\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 350/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0861 - acc: 0.9696Epoch 00350: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0858 - acc: 0.9698 - val_loss: 0.9102 - val_acc: 0.8409\n", "Epoch 351/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0862 - acc: 0.9722Epoch 00351: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0861 - acc: 0.9719 - val_loss: 0.9229 - val_acc: 0.8460\n", "Epoch 352/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0868 - acc: 0.9719Epoch 00352: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.0858 - acc: 0.9720 - val_loss: 0.9204 - val_acc: 0.8460\n", "Epoch 353/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0818 - acc: 0.9736Epoch 00353: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0831 - acc: 0.9731 - val_loss: 0.9234 - val_acc: 0.8466\n", "Epoch 354/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0808 - acc: 0.9703Epoch 00354: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0818 - acc: 0.9701 - val_loss: 0.9223 - val_acc: 0.8434\n", "Epoch 355/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0780 - acc: 0.9760Epoch 00355: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0777 - acc: 0.9758 - val_loss: 0.9232 - val_acc: 0.8479\n", "Epoch 356/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0743 - acc: 0.9739Epoch 00356: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0731 - acc: 0.9741 - val_loss: 0.9899 - val_acc: 0.8390\n", "Epoch 357/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0900 - acc: 0.9711Epoch 00357: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0896 - acc: 0.9710 - val_loss: 0.9653 - val_acc: 0.8498\n", "Epoch 358/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0788 - acc: 0.9739Epoch 00358: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0775 - acc: 0.9742 - val_loss: 0.9697 - val_acc: 0.8447\n", "Epoch 359/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0732 - acc: 0.9757Epoch 00359: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0735 - acc: 0.9757 - val_loss: 0.9687 - val_acc: 0.8415\n", "Epoch 360/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0761 - acc: 0.9754Epoch 00360: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0769 - acc: 0.9752 - val_loss: 0.9207 - val_acc: 0.8479\n", "Epoch 361/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0666 - acc: 0.9791Epoch 00361: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0668 - acc: 0.9791 - val_loss: 0.9837 - val_acc: 0.8498\n", "Epoch 362/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0743 - acc: 0.9772Epoch 00362: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0745 - acc: 0.9770 - val_loss: 0.9656 - val_acc: 0.8402\n", "Epoch 363/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0763 - acc: 0.9752Epoch 00363: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0776 - acc: 0.9751 - val_loss: 0.9935 - val_acc: 0.8377\n", "Epoch 364/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0774 - acc: 0.9782Epoch 00364: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0774 - acc: 0.9779 - val_loss: 0.9557 - val_acc: 0.8504\n", "Epoch 365/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0812 - acc: 0.9741Epoch 00365: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0810 - acc: 0.9740 - val_loss: 0.9771 - val_acc: 0.8415\n", "Epoch 366/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0862 - acc: 0.9741Epoch 00366: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0861 - acc: 0.9740 - val_loss: 0.9246 - val_acc: 0.8472\n", "Epoch 367/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0678 - acc: 0.9791Epoch 00367: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0681 - acc: 0.9792 - val_loss: 1.0050 - val_acc: 0.8542\n", "Epoch 368/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0908 - acc: 0.9698Epoch 00368: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0894 - acc: 0.9704 - val_loss: 1.0069 - val_acc: 0.8415\n", "Epoch 369/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0842 - acc: 0.9737Epoch 00369: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0837 - acc: 0.9740 - val_loss: 1.0165 - val_acc: 0.8396\n", "Epoch 370/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0740 - acc: 0.9761Epoch 00370: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0727 - acc: 0.9763 - val_loss: 1.0084 - val_acc: 0.8466\n", "Epoch 371/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0723 - acc: 0.9776Epoch 00371: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0715 - acc: 0.9780 - val_loss: 1.0005 - val_acc: 0.8498\n", "Epoch 372/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0590 - acc: 0.9793Epoch 00372: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0625 - acc: 0.9793 - val_loss: 1.0397 - val_acc: 0.8434\n", "Epoch 373/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0789 - acc: 0.9718Epoch 00373: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0796 - acc: 0.9715 - val_loss: 0.9805 - val_acc: 0.8390\n", "Epoch 374/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0643 - acc: 0.9806Epoch 00374: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0637 - acc: 0.9807 - val_loss: 0.9575 - val_acc: 0.8409\n", "Epoch 375/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0710 - acc: 0.9760Epoch 00375: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0717 - acc: 0.9758 - val_loss: 1.0599 - val_acc: 0.8364\n", "Epoch 376/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0875 - acc: 0.9716Epoch 00376: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0859 - acc: 0.9721 - val_loss: 0.9575 - val_acc: 0.8364\n", "Epoch 377/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0756 - acc: 0.9769Epoch 00377: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0753 - acc: 0.9769 - val_loss: 0.9710 - val_acc: 0.8440\n", "Epoch 378/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0694 - acc: 0.9755Epoch 00378: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0688 - acc: 0.9756 - val_loss: 1.0158 - val_acc: 0.8421\n", "Epoch 379/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0813 - acc: 0.9734Epoch 00379: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0813 - acc: 0.9730 - val_loss: 0.9460 - val_acc: 0.8472\n", "Epoch 380/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0676 - acc: 0.9778Epoch 00380: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0674 - acc: 0.9776 - val_loss: 0.9326 - val_acc: 0.8530\n", "Epoch 381/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0794 - acc: 0.9755Epoch 00381: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0792 - acc: 0.9758 - val_loss: 0.8946 - val_acc: 0.8421\n", "Epoch 382/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0805 - acc: 0.9752Epoch 00382: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0801 - acc: 0.9755 - val_loss: 1.0161 - val_acc: 0.8415\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 383/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0729 - acc: 0.9759Epoch 00383: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0733 - acc: 0.9757 - val_loss: 0.9812 - val_acc: 0.8453\n", "Epoch 384/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0623 - acc: 0.9783Epoch 00384: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0644 - acc: 0.9778 - val_loss: 0.9901 - val_acc: 0.8491\n", "Epoch 385/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0699 - acc: 0.9772Epoch 00385: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0714 - acc: 0.9765 - val_loss: 0.9768 - val_acc: 0.8409\n", "Epoch 386/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0744 - acc: 0.9763- ETA: 0s - loss: 0.0722 - acc: Epoch 00386: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0767 - acc: 0.9759 - val_loss: 1.0505 - val_acc: 0.8504\n", "Epoch 387/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0676 - acc: 0.9771Epoch 00387: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0668 - acc: 0.9773 - val_loss: 1.0276 - val_acc: 0.8440\n", "Epoch 388/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0712 - acc: 0.9752Epoch 00388: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0705 - acc: 0.9755 - val_loss: 0.9827 - val_acc: 0.8396\n", "Epoch 389/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0678 - acc: 0.9761Epoch 00389: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0666 - acc: 0.9764 - val_loss: 0.9807 - val_acc: 0.8415\n", "Epoch 390/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0824 - acc: 0.9733Epoch 00390: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0852 - acc: 0.9730 - val_loss: 0.9078 - val_acc: 0.8370\n", "Epoch 391/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0831 - acc: 0.9755Epoch 00391: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0830 - acc: 0.9751 - val_loss: 0.9293 - val_acc: 0.8421\n", "Epoch 392/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0721 - acc: 0.9775Epoch 00392: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0707 - acc: 0.9779 - val_loss: 0.9756 - val_acc: 0.8415\n", "Epoch 393/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0638 - acc: 0.9807Epoch 00393: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0643 - acc: 0.9806 - val_loss: 0.9921 - val_acc: 0.8485\n", "Epoch 394/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0724 - acc: 0.9765Epoch 00394: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.0710 - acc: 0.9769 - val_loss: 0.9908 - val_acc: 0.8409\n", "Epoch 395/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0843 - acc: 0.9733Epoch 00395: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0836 - acc: 0.9735 - val_loss: 0.9478 - val_acc: 0.8370\n", "Epoch 396/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0708 - acc: 0.9775Epoch 00396: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0713 - acc: 0.9771 - val_loss: 0.9531 - val_acc: 0.8447\n", "Epoch 397/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0698 - acc: 0.9759Epoch 00397: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0731 - acc: 0.9746 - val_loss: 0.9580 - val_acc: 0.8428\n", "Epoch 398/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0770 - acc: 0.9769Epoch 00398: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0782 - acc: 0.9771 - val_loss: 1.0042 - val_acc: 0.8479\n", "Epoch 399/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0679 - acc: 0.9788Epoch 00399: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0671 - acc: 0.9792 - val_loss: 1.0230 - val_acc: 0.8396\n", "Epoch 400/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0686 - acc: 0.9772Epoch 00400: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0676 - acc: 0.9776 - val_loss: 0.9858 - val_acc: 0.8523\n", "Epoch 401/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0722 - acc: 0.9767Epoch 00401: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0715 - acc: 0.9771 - val_loss: 1.0203 - val_acc: 0.8428\n", "Epoch 402/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0718 - acc: 0.9764Epoch 00402: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0720 - acc: 0.9762 - val_loss: 0.9515 - val_acc: 0.8511\n", "Epoch 403/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0718 - acc: 0.9771Epoch 00403: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0757 - acc: 0.9768 - val_loss: 0.9485 - val_acc: 0.8447\n", "Epoch 404/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0659 - acc: 0.9735Epoch 00404: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0663 - acc: 0.9738 - val_loss: 0.9992 - val_acc: 0.8421\n", "Epoch 405/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0722 - acc: 0.9758Epoch 00405: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0712 - acc: 0.9759 - val_loss: 0.9426 - val_acc: 0.8447\n", "Epoch 406/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0545 - acc: 0.9810Epoch 00406: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0542 - acc: 0.9811 - val_loss: 1.0351 - val_acc: 0.8453\n", "Epoch 407/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0615 - acc: 0.9793- ETA: 1s - loss: 0.0719 Epoch 00407: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0612 - acc: 0.9792 - val_loss: 0.9705 - val_acc: 0.8479\n", "Epoch 408/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0684 - acc: 0.9767Epoch 00408: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0698 - acc: 0.9767 - val_loss: 0.9326 - val_acc: 0.8421\n", "Epoch 409/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0724 - acc: 0.9752Epoch 00409: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0715 - acc: 0.9757 - val_loss: 0.9615 - val_acc: 0.8466\n", "Epoch 410/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0690 - acc: 0.9785Epoch 00410: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0696 - acc: 0.9780 - val_loss: 0.9742 - val_acc: 0.8498\n", "Epoch 411/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0782 - acc: 0.9748Epoch 00411: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0778 - acc: 0.9748 - val_loss: 0.9607 - val_acc: 0.8402\n", "Epoch 412/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0722 - acc: 0.9779Epoch 00412: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0725 - acc: 0.9781 - val_loss: 0.9446 - val_acc: 0.8434\n", "Epoch 413/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0673 - acc: 0.9800Epoch 00413: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0668 - acc: 0.9801 - val_loss: 0.9991 - val_acc: 0.8428\n", "Epoch 414/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0646 - acc: 0.9782Epoch 00414: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0698 - acc: 0.9773 - val_loss: 0.9744 - val_acc: 0.8491\n", "Epoch 415/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0681 - acc: 0.9779Epoch 00415: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0693 - acc: 0.9776 - val_loss: 1.1077 - val_acc: 0.8370\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 416/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0714 - acc: 0.9753Epoch 00416: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0734 - acc: 0.9747 - val_loss: 0.9972 - val_acc: 0.8460\n", "Epoch 417/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0701 - acc: 0.9764Epoch 00417: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0687 - acc: 0.9770 - val_loss: 1.0265 - val_acc: 0.8402\n", "Epoch 418/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0710 - acc: 0.9780- ETA: 0s - loss: 0.0724 - acc: 0Epoch 00418: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0712 - acc: 0.9779 - val_loss: 1.0058 - val_acc: 0.8364\n", "Epoch 419/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0684 - acc: 0.9766Epoch 00419: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0686 - acc: 0.9764 - val_loss: 1.0285 - val_acc: 0.8491\n", "Epoch 420/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0534 - acc: 0.9818Epoch 00420: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0543 - acc: 0.9817 - val_loss: 1.0873 - val_acc: 0.8434\n", "Epoch 421/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0722 - acc: 0.9754Epoch 00421: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0718 - acc: 0.9754 - val_loss: 1.0576 - val_acc: 0.8383\n", "Epoch 422/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0828 - acc: 0.9727Epoch 00422: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0813 - acc: 0.9732 - val_loss: 1.0474 - val_acc: 0.8453\n", "Epoch 423/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0664 - acc: 0.9793Epoch 00423: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0651 - acc: 0.9797 - val_loss: 0.9831 - val_acc: 0.8542\n", "Epoch 424/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0691 - acc: 0.9778Epoch 00424: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0689 - acc: 0.9776 - val_loss: 0.9553 - val_acc: 0.8383\n", "Epoch 425/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0611 - acc: 0.9802Epoch 00425: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0656 - acc: 0.9798 - val_loss: 0.9904 - val_acc: 0.8523\n", "Epoch 426/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0625 - acc: 0.9800Epoch 00426: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0637 - acc: 0.9797 - val_loss: 1.0087 - val_acc: 0.8434\n", "Epoch 427/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0660 - acc: 0.9789Epoch 00427: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0655 - acc: 0.9788 - val_loss: 0.9859 - val_acc: 0.8440\n", "Epoch 428/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0599 - acc: 0.9804Epoch 00428: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0617 - acc: 0.9798 - val_loss: 1.0235 - val_acc: 0.8447\n", "Epoch 429/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0739 - acc: 0.9768Epoch 00429: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0743 - acc: 0.9771 - val_loss: 0.9108 - val_acc: 0.8472\n", "Epoch 430/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0582 - acc: 0.9808Epoch 00430: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0571 - acc: 0.9813 - val_loss: 1.0341 - val_acc: 0.8460\n", "Epoch 431/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0615 - acc: 0.9781Epoch 00431: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0614 - acc: 0.9779 - val_loss: 1.0049 - val_acc: 0.8491\n", "Epoch 432/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0525 - acc: 0.9821Epoch 00432: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0538 - acc: 0.9813 - val_loss: 1.0447 - val_acc: 0.8402\n", "Epoch 433/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0683 - acc: 0.9761Epoch 00433: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0713 - acc: 0.9757 - val_loss: 0.9919 - val_acc: 0.8491\n", "Epoch 434/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0705 - acc: 0.9767Epoch 00434: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0702 - acc: 0.9769 - val_loss: 0.9805 - val_acc: 0.8479\n", "Epoch 435/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0770 - acc: 0.9778Epoch 00435: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0776 - acc: 0.9777 - val_loss: 1.0278 - val_acc: 0.8491\n", "Epoch 436/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0769 - acc: 0.9767Epoch 00436: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0782 - acc: 0.9765 - val_loss: 0.9352 - val_acc: 0.8523\n", "Epoch 437/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0724 - acc: 0.9754Epoch 00437: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0735 - acc: 0.9752 - val_loss: 1.0333 - val_acc: 0.8472\n", "Epoch 438/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0748 - acc: 0.9749Epoch 00438: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0776 - acc: 0.9746 - val_loss: 0.9741 - val_acc: 0.8542\n", "Epoch 439/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0571 - acc: 0.9823Epoch 00439: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0559 - acc: 0.9828 - val_loss: 1.0823 - val_acc: 0.8466\n", "Epoch 440/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0579 - acc: 0.9825Epoch 00440: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0588 - acc: 0.9822 - val_loss: 1.0314 - val_acc: 0.8479\n", "Epoch 441/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0706 - acc: 0.9780Epoch 00441: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0734 - acc: 0.9773 - val_loss: 1.0435 - val_acc: 0.8517\n", "Epoch 442/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0722 - acc: 0.9753Epoch 00442: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0727 - acc: 0.9754 - val_loss: 1.0735 - val_acc: 0.8491\n", "Epoch 443/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0867 - acc: 0.9731Epoch 00443: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0854 - acc: 0.9736 - val_loss: 0.9624 - val_acc: 0.8530\n", "Epoch 444/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0768 - acc: 0.9770Epoch 00444: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0765 - acc: 0.9770 - val_loss: 1.0042 - val_acc: 0.8504\n", "Epoch 445/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0648 - acc: 0.9769Epoch 00445: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0635 - acc: 0.9775 - val_loss: 0.9644 - val_acc: 0.8491\n", "Epoch 446/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0497 - acc: 0.9843Epoch 00446: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0492 - acc: 0.9843 - val_loss: 0.9729 - val_acc: 0.8479\n", "Epoch 447/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0606 - acc: 0.9831Epoch 00447: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0601 - acc: 0.9827 - val_loss: 1.0150 - val_acc: 0.8504\n", "Epoch 448/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0494 - acc: 0.9826Epoch 00448: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0494 - acc: 0.9829 - val_loss: 0.9487 - val_acc: 0.8536\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 449/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0616 - acc: 0.9784Epoch 00449: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0606 - acc: 0.9787 - val_loss: 0.9821 - val_acc: 0.8498\n", "Epoch 450/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0657 - acc: 0.9796- ETA: 1s - loss: 0.0Epoch 00450: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0651 - acc: 0.9795 - val_loss: 1.0614 - val_acc: 0.8428\n", "Epoch 451/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0667 - acc: 0.9782Epoch 00451: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0663 - acc: 0.9782 - val_loss: 0.9880 - val_acc: 0.8402\n", "Epoch 452/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0607 - acc: 0.9779Epoch 00452: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0610 - acc: 0.9779 - val_loss: 1.0228 - val_acc: 0.8460\n", "Epoch 453/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0674 - acc: 0.9781Epoch 00453: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0661 - acc: 0.9787 - val_loss: 1.0576 - val_acc: 0.8300\n", "Epoch 454/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0610 - acc: 0.9802Epoch 00454: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0603 - acc: 0.9805 - val_loss: 1.0625 - val_acc: 0.8370\n", "Epoch 455/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0600 - acc: 0.9825Epoch 00455: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0598 - acc: 0.9826 - val_loss: 1.0247 - val_acc: 0.8511\n", "Epoch 456/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0661 - acc: 0.9802Epoch 00456: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0646 - acc: 0.9807 - val_loss: 1.0286 - val_acc: 0.8377\n", "Epoch 457/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0675 - acc: 0.9782- ETA: 2s - loss: - ETA: 1s - loss: 0.0626 - aEpoch 00457: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0681 - acc: 0.9777 - val_loss: 1.0013 - val_acc: 0.8332\n", "Epoch 458/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0542 - acc: 0.9823Epoch 00458: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0542 - acc: 0.9822 - val_loss: 1.0214 - val_acc: 0.8428\n", "Epoch 459/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0643 - acc: 0.9804- ETA: 0s - loss: 0.0656 - acc: 0.980Epoch 00459: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0633 - acc: 0.9807 - val_loss: 1.0546 - val_acc: 0.8491\n", "Epoch 460/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0658 - acc: 0.9797Epoch 00460: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0677 - acc: 0.9788 - val_loss: 1.0936 - val_acc: 0.8415\n", "Epoch 461/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0654 - acc: 0.9800Epoch 00461: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0663 - acc: 0.9797 - val_loss: 0.9827 - val_acc: 0.8383\n", "Epoch 462/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0476 - acc: 0.9850Epoch 00462: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0478 - acc: 0.9850 - val_loss: 1.0276 - val_acc: 0.8472\n", "Epoch 463/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0673 - acc: 0.9791Epoch 00463: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0673 - acc: 0.9792 - val_loss: 1.0005 - val_acc: 0.8377\n", "Epoch 464/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0573 - acc: 0.9811Epoch 00464: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0574 - acc: 0.9809 - val_loss: 1.0012 - val_acc: 0.8460\n", "Epoch 465/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0559 - acc: 0.9809Epoch 00465: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0559 - acc: 0.9807 - val_loss: 0.9813 - val_acc: 0.8415\n", "Epoch 466/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0616 - acc: 0.9835Epoch 00466: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0613 - acc: 0.9833 - val_loss: 1.0303 - val_acc: 0.8332\n", "Epoch 467/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0686 - acc: 0.9784Epoch 00467: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0689 - acc: 0.9784 - val_loss: 1.0318 - val_acc: 0.8415\n", "Epoch 468/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0648 - acc: 0.9789Epoch 00468: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0642 - acc: 0.9790 - val_loss: 1.0230 - val_acc: 0.8460\n", "Epoch 469/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0768 - acc: 0.9769Epoch 00469: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0792 - acc: 0.9765 - val_loss: 0.9590 - val_acc: 0.8498\n", "Epoch 470/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0645 - acc: 0.9807Epoch 00470: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0673 - acc: 0.9804 - val_loss: 1.0704 - val_acc: 0.8345\n", "Epoch 471/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0651 - acc: 0.9799Epoch 00471: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0639 - acc: 0.9802 - val_loss: 0.9353 - val_acc: 0.8434\n", "Epoch 472/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0711 - acc: 0.9789Epoch 00472: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0706 - acc: 0.9790 - val_loss: 0.9695 - val_acc: 0.8434\n", "Epoch 473/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0475 - acc: 0.9828Epoch 00473: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0476 - acc: 0.9826 - val_loss: 0.9992 - val_acc: 0.8440\n", "Epoch 474/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0633 - acc: 0.9802Epoch 00474: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0624 - acc: 0.9805 - val_loss: 0.9376 - val_acc: 0.8370\n", "Epoch 475/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0716 - acc: 0.9745Epoch 00475: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0707 - acc: 0.9748 - val_loss: 0.9366 - val_acc: 0.8377\n", "Epoch 476/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0605 - acc: 0.9782Epoch 00476: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0619 - acc: 0.9780 - val_loss: 1.0020 - val_acc: 0.8504\n", "Epoch 477/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0507 - acc: 0.9833Epoch 00477: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0503 - acc: 0.9833 - val_loss: 0.9810 - val_acc: 0.8434\n", "Epoch 478/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0555 - acc: 0.9829Epoch 00478: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0552 - acc: 0.9827 - val_loss: 1.0260 - val_acc: 0.8498\n", "Epoch 479/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0622 - acc: 0.9793Epoch 00479: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0628 - acc: 0.9790 - val_loss: 1.0310 - val_acc: 0.8364\n", "Epoch 480/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0631 - acc: 0.9799Epoch 00480: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0627 - acc: 0.9801 - val_loss: 1.0427 - val_acc: 0.8390\n", "Epoch 481/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0709 - acc: 0.9776Epoch 00481: val_acc did not improve\n", "37/36 [==============================] - 4s 103ms/step - loss: 0.0705 - acc: 0.9777 - val_loss: 1.0181 - val_acc: 0.8447\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 482/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0604 - acc: 0.9808Epoch 00482: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0606 - acc: 0.9806 - val_loss: 1.0099 - val_acc: 0.8428\n", "Epoch 483/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0542 - acc: 0.9819Epoch 00483: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0556 - acc: 0.9815 - val_loss: 1.0727 - val_acc: 0.8409\n", "Epoch 484/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0585 - acc: 0.9815Epoch 00484: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0581 - acc: 0.9813 - val_loss: 0.9782 - val_acc: 0.8511\n", "Epoch 485/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0609 - acc: 0.9802Epoch 00485: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0619 - acc: 0.9803 - val_loss: 0.9964 - val_acc: 0.8453\n", "Epoch 486/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0685 - acc: 0.9775Epoch 00486: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0674 - acc: 0.9779 - val_loss: 1.0003 - val_acc: 0.8453\n", "Epoch 487/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0648 - acc: 0.9835Epoch 00487: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0644 - acc: 0.9837 - val_loss: 0.9929 - val_acc: 0.8434\n", "Epoch 488/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0527 - acc: 0.9828- ETA: 1s - loss: 0.045Epoch 00488: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0532 - acc: 0.9826 - val_loss: 1.0625 - val_acc: 0.8440\n", "Epoch 489/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0584 - acc: 0.9817Epoch 00489: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0572 - acc: 0.9822 - val_loss: 1.0303 - val_acc: 0.8530\n", "Epoch 490/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0666 - acc: 0.9799Epoch 00490: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0659 - acc: 0.9798 - val_loss: 0.9709 - val_acc: 0.8460\n", "Epoch 491/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0573 - acc: 0.9830Epoch 00491: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0572 - acc: 0.9833 - val_loss: 1.0299 - val_acc: 0.8504\n", "Epoch 492/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0744 - acc: 0.9773Epoch 00492: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0730 - acc: 0.9779 - val_loss: 0.9579 - val_acc: 0.8390\n", "Epoch 493/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0544 - acc: 0.9825Epoch 00493: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0549 - acc: 0.9822 - val_loss: 1.0702 - val_acc: 0.8517\n", "Epoch 494/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0654 - acc: 0.9785- ETA: 0s - loss: 0.0691 - acc: 0Epoch 00494: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0680 - acc: 0.9782 - val_loss: 0.9963 - val_acc: 0.8447\n", "Epoch 495/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0564 - acc: 0.9810Epoch 00495: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0571 - acc: 0.9811 - val_loss: 1.0619 - val_acc: 0.8396\n", "Epoch 496/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0602 - acc: 0.9804Epoch 00496: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0592 - acc: 0.9807 - val_loss: 1.0286 - val_acc: 0.8428\n", "Epoch 497/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0531 - acc: 0.9819Epoch 00497: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0529 - acc: 0.9820 - val_loss: 1.0863 - val_acc: 0.8440\n", "Epoch 498/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0614 - acc: 0.9806Epoch 00498: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0606 - acc: 0.9805 - val_loss: 1.0873 - val_acc: 0.8402\n", "Epoch 499/500\n", "36/36 [============================>.] 
- ETA: 0s - loss: 0.0629 - acc: 0.9793Epoch 00499: val_acc did not improve\n", "37/36 [==============================] - 4s 105ms/step - loss: 0.0647 - acc: 0.9788 - val_loss: 1.0634 - val_acc: 0.8409\n", "Epoch 500/500\n", "36/36 [============================>.] - ETA: 0s - loss: 0.0559 - acc: 0.9817Epoch 00500: val_acc did not improve\n", "37/36 [==============================] - 4s 104ms/step - loss: 0.0548 - acc: 0.9820 - val_loss: 0.9885 - val_acc: 0.8364\n", "6220/6220 [==============================] - 2s 279us/step\n" ] } ], "source": [ "### Model training ###\n", "\n", "# Stage one: train with AdaDelta first, because starting directly with AdaMax tends to get stuck\n", "model.compile(loss='categorical_crossentropy', \n", " optimizer='adadelta', \n", " metrics=[\"accuracy\"])\n", "\n", "# 20 epochs are enough for this first stage\n", "model.fit(X_train, Y_train, batch_size=batch_size,\n", " epochs=20, \n", " validation_data=(X_val, Y_val),\n", " verbose=1)\n", "\n", "# Stage two: switch to AdaMax\n", "model.compile(loss='categorical_crossentropy', \n", " optimizer='adamax', \n", " metrics=[\"accuracy\"])\n", "\n", "# Save the weights whenever the validation accuracy improves\n", "saveBestModel = ModelCheckpoint(\"best.kerasModelWeights\", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=True)\n", "\n", "# Keep feeding augmented images from the ImageDataGenerator during training\n", "history = model.fit_generator(datagen.flow(X_train, Y_train, batch_size=batch_size),\n", " steps_per_epoch=math.ceil(len(X_train) / batch_size),\n", " epochs=nb_epoch, \n", " validation_data=(X_val, Y_val),\n", " callbacks=[saveBestModel],\n", " verbose=1)\n", "\n", "### Prediction ###\n", "\n", "# Load the weights that achieved the best validation accuracy during training\n", "model.load_weights(\"best.kerasModelWeights\")\n", "\n", "# Load the preprocessed Kaggle test set\n", "X_test = np.load(path + \"/testPreproc.npy\")\n", "\n", "# Predict the character class of each test image\n", "Y_test_pred = model.predict_classes(X_test)\n", "\n", "# Map class indices back to characters\n", "vInt2label = np.vectorize(int2label)\n", "Y_test_pred = vInt2label(Y_test_pred)\n", "\n", "# Write the predictions to a submission file (test image IDs start at 6284)\n", "np.savetxt(path + \"/jular_pred\" + \".csv\", np.c_[range(6284, len(Y_test_pred) + 6284), Y_test_pred], 
delimiter=',', header = 'ID,Class', comments = '', fmt='%s')\n" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAEICAYAAACktLTqAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAAIABJREFUeJzt3XmYFNXV+PHvYWCAAQSZQVGQRSUq\nEkAcUV8xohh/4IJxiYgYNUhQCIrGLERMXDHGLS4xKjG4hFbCG18VjIqKRNxlCDsEQRxkZF9kG2QY\nOL8/bhVd3dPbzPQw0z3n8zz1dC23qm9Vd5++devWLVFVjDHGZJcGtZ0BY4wx6WfB3RhjspAFd2OM\nyUIW3I0xJgtZcDfGmCxkwd0YY7KQBfcsJiI5IrJDRDqkM21tEpGjRSTt7XdF5GwRKQ5MLxWR01NJ\nW4X3ekZEbq3q+sakomFtZ8CEiciOwGQesBvY601fp6qhymxPVfcCzdOdtj5Q1WPSsR0RGQZcqap9\nA9selo5tG5OIBfc6RFX3B1evZDhMVd+Nl15EGqpq+YHImzHJ2PexbrFqmQwiIveIyD9E5CUR2Q5c\nKSKnisinIvKtiKwRkcdEpJGXvqGIqIh08qYnesvfFJHtIvKJiHSubFpv+QAR+UJEtorI4yLykYhc\nEyffqeTxOhFZLiJbROSxwLo5IvInEdkkIl8C/RMcn9tEZFLUvCdE5GFvfJiILPH250uvVB1vWyUi\n0tcbzxORv3t5WwScGON9V3jbXSQiA7353wf+DJzuVXltDBzbOwLrX+/t+yYReVVEDkvl2FTmOPv5\nEZF3RWSziKwVkV8H3ud33jHZJiJFInJ4rCowEfnQ/5y94znTe5/NwG0i0kVEZnj7stE7bi0D63f0\n9nGDt/xREWni5fm4QLrDRKRURPLj7a9JQlVtqIMDUAycHTXvHqAMuAD3x9wUOAk4GXcWdiTwBTDK\nS98QUKCTNz0R2AgUAo2AfwATq5D2EGA7cKG37BfAHuCaOPuSSh5fA1oCnYDN/r4Do4BFQHsgH5jp\nvrYx3+dIYAfQLLDt9UChN32Bl0aAs4BdQHdv2dlAcWBbJUBfb/xB4N/AwUBHYHFU2suAw7zP5Aov\nD4d6y4YB/47K50TgDm/8HC+PPYEmwF+A91I5NpU8zi2BdcBooDFwENDbW/ZbYB7QxduHnkBr4Ojo\nYw186H/O3r6VAyOAHNz38XtAPyDX+558BDwY2J+F3vFs5qU/zVs2HhgXeJ9bgFdq+3eYyUOtZ8CG\nOB9M/OD+XpL1fgn8rzceK2A/FUg7EFhYhbRDgQ8CywRYQ5zgnmIeTwks/z/gl974TFz1lL/s3OiA\nE7XtT4ErvPEBwBcJ0r4O/NwbTxTcvw5+FsDIYNoY210InOeNJwvuzwP3BpYdhLvO0j7Zsankcf4J\nUBQn3Zd+fqPmpxLcVyTJw6XALG/8dGAtkBMj3WnAV4B403OBi9P9u6pPg1XLZJ5VwQkROVZE/uWd\nZm8D7gIKEqy/NjBeSuKLqPHSHh7Mh7pfY0m8jaSYx5TeC1iZIL8ALwKDvfErgP0XoUXkfBH5zKuW\n+BZXak50rHyHJcqDiFwjIvO8qoVvgWNT3C64/du/PVXdBmwB2gXSpPSZJTnORwDL4+ThCFyAr4ro\n72NbEZksIt94eXguKg/F6i7eR1DVj3BnAX1EpBvQAfhXFfNksDr3TBTdDPBpXEnxaFU9CPg9riRd\nk9bgSpYAiIgQGYyiVSePa3BBwZesqeY/gLNFp
D2u2uhFL49NgX8Cf8BVmbQC3k4xH2vj5UFEjgSe\nxFVN5Hvb/W9gu8maba7GVfX422uBq/75JoV8RUt0nFcBR8VZL96ynV6e8gLz2kalid6/P+JaeX3f\ny8M1UXnoKCI5cfLxAnAl7ixjsqrujpPOpMCCe+ZrAWwFdnoXpK47AO/5OtBLRC4QkYa4etw2NZTH\nycBNItLOu7j2m0SJVXUdrurgWWCpqi7zFjXG1QNvAPaKyPm4uuFU83CriLQSdx/AqMCy5rgAtwH3\nPzcMV3L3rQPaBy9sRnkJuFZEuotIY9yfzweqGvdMKIFEx3kK0EFERolIrogcJCK9vWXPAPeIyFHi\n9BSR1rg/tbW4C/c5IjKcwB9RgjzsBLaKyBG4qiHfJ8Am4F5xF6mbishpgeV/x1XjXIEL9KYaLLhn\nvluAq3EXOJ/GlVxrlBdABwEP436sRwFzcCW2dOfxSWA6sACYhSt9J/Mirg79xUCevwVuBl7BXZS8\nFPcnlYrbcWcQxcCbBAKPqs4HHgM+99IcC3wWWPcdYBmwTkSC1Sv++m/hqk9e8dbvAAxJMV/R4h5n\nVd0K/BC4BHcB9wvgDG/xA8CruOO8DXdxs4lX3fYz4FbcxfWjo/YtltuB3rg/mSnAy4E8lAPnA8fh\nSvFf4z4Hf3kx7nMuU9WPK7nvJop/8cKYKvNOs1cDl6rqB7WdH5O5ROQF3EXaO2o7L5nObmIyVSIi\n/XGn2d/hmtKV40qvxlSJd/3iQuD7tZ2XbGDVMqaq+gArcKfr/YEf2QUwU1Ui8gdcW/t7VfXr2s5P\nNrBqGWOMyUJWcjfGmCxUa3XuBQUF2qlTp9p6e2OMyUizZ8/eqKqJmh4DtRjcO3XqRFFRUW29vTHG\nZCQRSXaXNpBCtYyITBCR9SKyMM5y8XqFWy4i80WkV2Uza4wxJr1SqXN/jgTdrOI6Z+riDcNxN50Y\nY4ypRUmDu6rOxN3RF8+FwAvqfAq0Eq8/amOMMbUjHa1l2hHZM1wJiTuRMsYYU8PSEdxj9aoXs/G8\niAz3nvJStGHDhjS8tTHGmFjSEdxLiOwOtT2un5EKVHW8qhaqamGbNklb8hhjTNqEQtCpEzRo4F5D\nMR43n0qaTJGO4D4FuMprNXMKsFVV16Rhu8aYeqI6gTcUgoICEHFDQQGMHBk5r3lzuPpqWLkSVN3r\nlVfC2We7bYm47V55ZcU0wW3679+8OeTkhNdr0iT8XskGf70a//NI9qgmXH/Ta3DPyCwBrgWuB673\nlgvwBO5JLgvwnleZbDjxxBPVGFN3TZyo2rGjKqjm5ES+irhXf8jPd+mD64m414kTVUeMiNxGv37h\nNPn5qrm5kdurL0NeXvi4pYo4j0uMHmqtb5nCwkK1m5iMqZ5QCMaOha+/hg4d4Nxz4Y03wtPjxsFH\nH8FTT7lwAq7U+ZOfuHQrV7qS5N69kJ/vlm/eDI0aQVlZ7e1XfdKxIxQXp55eRGaramHSdBbcjakZ\nwcDburWbt3lz5Hi8gDxkiFt/9GjYtKn29sHUPBHYt68y6S24G1OjokvNwaB83XWwc2dt59Bkgpoq\nuVuvkKZe8y/S+RfGghfl4l3U8y/Uxbr45l+Us8BuUiHiCgU1wYK7ySqxWlQEA3jDhuGWCmef7eqe\nV3rdMAVPYjdtCreUCA5XXpm4msQej2Aq4/rr3dleTbDH7JmMEK/+Oi8Pdu2KXWfpl6aD9u4NL/OD\nujHpJgK5ubA7zrPJ8vPh0UdrLrCDldzNAZCoDXOqJW2/hK3qSs6bNrnxnTsrdzHKZJ/mzd13pFkz\n9z1KxYgRqTdYnDjR1YuLuKCcn+/GO3Z0y2Kts28ffPdd/G1u3FizgR1I3s69pgZr514/TJzo2vJG\nt+0dMaL+tm3OtKFfP9VmzeIvj24D77dtD34Hgm3a8/Mj28DHahef6PuUatrqrFOXkWI796QJamqw\n4J49on88/
fpVvMnFhpobGjRwr7GOeU6O+yMNflbRgbp583DgDS4L3pgU63PO9CCZqVIN7tYU0lTK\nyJGRN8SY1DRr5m5Rr2ybdb+6YefOyOaWpv6yppAm7UaOhCeftMAelJsLXbvGX+7Xy+7Y4epZ/XLx\nxInuYnBQXl7FOtx9+2D7dvdaXGyB3aTOgruJKbozJhEX2LNdfr672Obfip8s7YQJsGhR5EW34IW2\neAF5yBAYPz5ynfHjLXibNEql7qYmBqtzP/Di1ZmOGJF9deTJOroKDtF1y6keN2NqA1bnbqLbhn/7\nbbidt08kO6pZRNwNIX/5S+J08boMMCZTWJ17PRcKwfDhkW3DowM71P3APmJE7CqP6Hl//3vywA4u\nkBcXWx22yX52h2oGi3fXZocOsG6du4mitom4vAV7QJw8OdxqpFkzl89YfzwjRoQDdrx6a2NMbFZy\nz0D+xc5gx1XBuzZXrqwbgT0/35WoN24Ml5T/8pfIViM7dsDzz0dewMzPdyXzVErixpjYrOSeYfzm\niHVN8+ZVb4s9ZIiVwo1JNyu51yGJ+lnxn9tYVwK734eHXwdubbGNqVus5F5H+BdAS0vddKweDQ9E\nH+H9+sHy5Ykf22bB25i6z4J7LfMvitaF7meDFzCNMZnNqmVqUSgEQ4fWTmBv1qxi80IL7MZkDyu5\n16LRo2vnCfO5ufD001a9Ykw2s5J7LTpQT7XPyYl8wMCECRbYjcl2KQV3EekvIktFZLmIjImxvKOI\nTBeR+SLybxFpn/6sZr7ozrhqQrNmbvDl57t25MG25hbYjcl+SYO7iOQATwADgK7AYBGJ7uT0QeAF\nVe0O3AX8Id0ZzUTRwTzZw5UrKze3YhexO3a4wZ8+II/zMsbUOamU3HsDy1V1haqWAZOAC6PSdAWm\ne+MzYiyvd0IhuPrqmqt68bubtcBtjIklleDeDlgVmC7x5gXNAy7xxi8CWohIhR6xRWS4iBSJSNGG\nDRuqkt+MMXp07P5SKqNJk/gP4rUSuTEmkVSCe6za4ei+BH8JnCEic4AzgG+A8gorqY5X1UJVLWzT\npk2lM1tXxbqztDol9gYNXJvzXbusntwYUzWpNIUsAY4ITLcHVgcTqOpq4GIAEWkOXKKqW9OVybos\nlTtLK6NjRxfMjTGmOlIpuc8CuohIZxHJBS4HpgQTiEiBiPjb+i0wIb3ZrLtGjw4H9urKzXW39xtj\nTHUlDe6qWg6MAqYBS4DJqrpIRO4SkYFesr7AUhH5AjgUqBchauTI9F0wtQukxph0ssfsVVEoVL3q\nF19dfxKSMaZuscfs1bDRo6u/jY4dq78NY4yJxYJ7FVS3NQxAXp7Vrxtjao4F9xT5zR39O00rq1+/\nyF4Yx4+3+nVjTM2xXiFT4HfNW5UeHEXg+uutO11jzIFlJfcUVLVrXv8B0RbYjTEHmgX3JCpTv96g\nQWR3AdZFgDGmtli1TAIjR1bugdSqrrsAY4ypbVZyjyEUgubNKxfYwT1Auq564w13VlHZVj6qsHZt\nzeTJt3lzuNpryxbYvbtm368qvv0WPvywtnMR39at7k7pb76BPXvcvE8/hfXr0/cemzbBxRfD6tWx\nl+/dC2++Wbl7N9avr3yBqLzc9bg6Y4brfymW1avhllvglVfc9Pz57tgAvPoqfPBB5d6zshYvhhUr\navY9klLVWhlOPPFErYsmTlRt1CjY/2JqQ26uWzee775T3bbtwOzD2LGqkyZFzjv9dJfPqVNVN2xI\nfVvjx7v1Fi5MX/5KSlT/8x83vnev2/7gwar79rnxAQOSb+Obb1RffFH19ttVd++ufp7WrlXds6fi\n/OefV50zR/WCC1ze1q1z8xcsUP3qq9S3/803lcvPK6+oNm2q+u23sZe/8ILqoYe675Vq5Hfxj39U\n3bTJjf/P/6jOmKE6aJBqcXFq711Wpvqvf7nPY/Zs1dWrVf/0J/cZgeqdd6r
On6/63/9GrvfMM275\n+++7958+PfFvYu3acJ6j0/373+57unt3+PtaVqa6fr3qp5+G1+vaNbzOqlWqGzeqvv125PHIzw+P\n/9//hce3b3frTZum+vOfu/HVq913MtrMmW7bvrffdt+NDz4Iz7vtNtXnnnPj/ntcdZVq48aV//wT\nAYo0hRhrwV1Vt25V3bzZjXfsWPnAnp/vvpzxAsTatao5Oap9+tT8vvjBEsLzSkoiv+D+F3vnzsh1\nlyxxy2bPDs8bONDNe+yx9OWxTRu3zX37VL/8Mpyn4uKKeY9l2zbVQw4Jp502TbWoSFVEdenSyLRl\nZe4PYO3ayPlffqk6b54b37FDtUUL1V//WvWMM1R79XLBbffu8HsUFLjXf/zDrZNKPn0zZ7q0//u/\n8dPs2+eC8qpVLmj427//ftXf/Mb94T31lOrw4aqffx7e/z/9SXXChMjP9rzzVO+5Jzz9k5+41yuv\nVP3xj936+/apvvqqauvW7ru/cqX74x83LrzekCHu9bjjIrc/bJh7bdIkch/OOMPNv+++yN+RH0Tf\nflu1eXPVNWtUly1zxzy43X373L762x882O0LqH79teoDD7h17r47cr3ly1WPP77yv9tnn3W/DX96\n7lz3+vvfq95xh8vj3r2qixa5+b/7XXhfg9vZtcv97v3pdesqvtcTT6T2XUmFBfcU7dnjvqTHHOOm\nKxPQ/fVLSlyghHAJIOiBB8LrbdvmvlTBL4qqKxlHByZV1TFjVHv3jiw1qLov3RVXqP7zn256xgy3\nfT+Q+D8WVdXDDquY/y5d3B/OggXuR75vn+pDD7llI0aE36dv3/A6fjD07dhRMV+p8Lf33/+qdu/u\nxhs0cPviL5s6Nf6ZzrvvujTt21fcrwceiAzk773n5p90Uuw8lJW5HzO44xEscc2bF572j2G/fi5f\n/vxkZ0EvvhhOO3BgeP6MGeFj99FHqlOmuDT/7/8l/+41a5b697RBA/dZR88//PDIwBMdaFMdVq5U\nLS9XHTUqfhr/O3rSSW7a/9OIHpLloUOH2POjvwd5ee7PrXVrt/+33eYCdUmJ+31+8ok764nezlVX\nVZx33XXh+ccco3rvvap//nNkmtGjVS+9NHHezz9f9Q9/cGcd1WXBPUWvvhr+AGL9CGINjRqFTyOv\nu87N+/jj8PLLLnOBq6zMpenXL/YX9NNPXalDNTzPLyGvXBlZCn/kkch8B089t21T7d/fjQ8dGp6/\nenXkthMNoZDqgw/q/iA0ZYoL+K1auR8IuNKM/96rV6ueeKKbf+qprlT4wAOuumDu3MTHPNXAMWhQ\n7PX9P6H16yuu07Che23b1gVk/7iAK8GPGxdZIo819O7tAtHf/x453y+N/upXkfP9P9GgGTNc6T8v\nL5yuTRtXjfL44276ootc9VRwW02aRE7/6EeqL7/sqtni5VckXJXYp094/tSp4eORynD11cnTtGwZ\nOd2qleqjj4anjz029nr+mU+yoWfP1PPrD716qb70Unh6xYrw5xCrikVV9ZprKv8+qQyXXaZ60EGq\nhYXuTCtWmhdeSPz7SMaCewpKS1VvvrnyH+CECa4edvHi8LxYJZdjjnE/5qZN3alm8IceHRyC034J\n9tZbw/N+/evIIHLRReFlr78eHj/iiPD4Sy+5tMFt33JLeHzs2PD4TTe5IZj2nXfc64MPurrN884L\nnyEkGz75JLJU79c5Bv8EYw3BElXLlq5U2bq16t/+5tZ//XVXj3zYYW76vvtUR45MLTD5w5FHxl/W\noYPqDTe48WCgbNbMVQ3EWvejj1wpf/p0l6cnn4y//TvvjDxDSDR06xY+fnv3uuoavzAB4YDuFwgW\nLVLdsiW8XNVV20BkNdaYMaqTJ7s/sZ/+1G3nkkvc92vHDpemdWvVX/wi8sxjzJhwYWjsWNVTTonM\n74svut/E8OGuqqxPH9Xrr49Mc/TR7rW
w0P0pgStVL1rkjl95uau39n8D992X+Bj5hSNV9/1t1Cj2\nn220lSsr993wq6b8qidw1Wz++OWXq158sTuT/+47l4fS0tj7LlK1M16fBfcEVq1yX06/PrkyQ7t2\nrqSaanr/NPuJJ8I/nOjhuecipw86KP72evVy+c/JCZ8RNG0ameaMM1zwGzjQ/Vj8H9G994b/SI4/\n3tV9BtcLnqqD6jnnhL/ElQmewWHZsnC1R7Da5Ve/cgFm3rzIP5zgjyc4NGrkzlD8wHjFFRU/12B9\ncbKhbVv3mpfnzrLef99VEV18serTT4fTde3qXh96yL3Hww8n3u7Che61fXt3Gg7uT3HxYtWjjgqn\ne+EFd3bQq1fFz2/YMNW77nLrRFu3zgXduXPdmcvNN4frtH2ffBI+e9q71/3Jv/uuO3bPP19xm9EX\npKdPj7z4evfd7g/PV1zszkpXrXLVDVCx2su3d6+7ONywoduuXyB66y130TX6WkhQebl7zc11fwbg\n/hxvusmd8dx5Z2qBPJ5duyp+fk895T7jSy4JX0AG1TPPdK+vveaC9FFHuT9SEfcd8s/So02b5go2\nCxe6Y+Gfqd1/f9XzbcE9jn37VHv0qFqgCgYfP9jl5bl/7WCaE08MXzT0hzffdO/v1zumOhx6aMWL\nRX7Qfeut8LxgvfqsWeFSv/8H8OST4WNQXOxKFapu+2efHQ6affq46wGdO4e399lnqm+8UfVjFmsI\n1t+vXRv+o/3jH10pbvTocFr/jOfgg93rSy/FPt3et88FEr/awx+KiiKnjz7aBSY/8PrWrXMXF3fs\ncGdnH3/sjlMoFL5QXlrqzpquuCL2ft17r3v94guXfvnycPAMnikFL7zv2+f+YObPd6XjXbuq8w2P\nzw+W6bR3r7vOs2ZN4nTBIByvqiSZZctiN1ioDv/z+M1v3OucORWXH3mkC8oDBrhra/v2hfdn5cqK\nf67JvPZaxcYMlctzPQ/u999f8aLlQw+lfkocPbRrFx4/9lj34W7fHj69GjQovNyvggiWVP2LpcuX\nq551VrhaJVgi/u1vXfVD8H0ffzz2n1FBQeQV+n/9Kzzu5+2008LzZs5MfLz8/Nx9t5t+4YXwuv4P\n16+TB/el/uwzV3XTv79L//nnkXXcfksHCO9Dhw7xf9zLl4eXBatvysoiW0h8/XXifdm7N1zfWVDg\n5k2f7pqtgSsVq7o66a1bE28rnl27XECYM8edhvufdZs2rkojVoly9WqXpmXLqr2nST//Iu6+fbEb\nNKxdG785am2p18E9WIft27y5cheXgsPEiS5g+KfPQ4fGfs89e1zQC/K34bdH9pWXh9tJFxW5QLhr\nl9vOwoXulHvaNLfcP51/881wHeY110Ruf+tW15TRbz/u27s38gJTPIsXu1YF/unl9u3hbQeD8eOP\nuwuTiRQVuYts+/a5P4bXXw//aX3ve8nzohpZZaTqtuXXh6Z6Kv7QQxXb569YUfWSYyKffx7O7+jR\nidMtWpT+9zdVs2lTuBl0pqjXwX3ZsorB/bbb3PSTT8a/ih1r8Js8qobrX//4x9TzMmdO9du4+hcZ\nly511Ql33OGqDlQr7mc6ffRR+trnrljh8nnZZamlj/UHXVqa3ptB0skvlUf/GRqTbqkG96zsW2ba\ntPD41q2uV8Z77oErrnDd70L4NZlHHw2PH3SQez366NTz0rOnG6rjrrvguutch2SNG8Ptt4eXLVrk\nuhWoCf/zP25Ih86d4e23oXfv1NKLwLHHwumnh+c1beqGuuiQQ8LjDaxTD1MHZM3XcN48F4g2bYLX\nXgvP/8lP4NZb3fgNN7jXUCi1bY4YEdmroz9+/PHVz29lDB/uyoSNG1dc1rUrHHfcgc1PVf3wh9Cy\nZerplyxxDzXJBDk58Ne/wn/+U9s5McbJmgdkDx0Kzz4Lv/gFPP64K+UuXx6ZpqwMGjVyT1RauTLx\n9iZ
OrNhdryqsWQOHH562bBtjTKXUuwdk+1UmDz/sesW79trwsrFjYdIkF9hHjkwe2Dt2jN0Pu4gF\ndmNMZkgpuItIfxFZKiLLRWRMjOUdRGSGiMwRkfkicm76s1rRpk3w/PPwxRfw5z+7eWed5apnLrgg\nnO53v4NBg+Dss5N345ubaw+uNsZkvqQXVEUkB3gC+CFQAswSkSmqujiQ7DZgsqo+KSJdgTeATjWQ\n3wg33+weY+crLITp0914aSl06QJjxri66lAovCyRCRPs6UnGmMyXSmuZ3sByVV0BICKTgAuBYHBX\nwKsYoSUQpzv/9Nq5M3I6+FCJvDxXoveNHZt8e/GqY4wxJtOkUi3TDlgVmC7x5gXdAVwpIiW4UvsN\nsTYkIsNFpEhEijZs2FCF7Eby69l9wVJ8tGT17CJWHWOMyR6pBPdYraijm9gMBp5T1fbAucDfRaTC\ntlV1vKoWqmphmzZtKp/bKOvWhccvugj69o2dbuTI5Nu6/nortRtjskcq1TIlwBGB6fZUrHa5FugP\noKqfiEgToABI4xMcKwoG9/z82GlCIXjqqfjbaNgQnnvOArsxJrukUnKfBXQRkc4ikgtcDkyJSvM1\n0A9ARI4DmgDVr3dJIhjcDz44dprRo1379HjatbPAbozJPkmDu6qWA6OAacASXKuYRSJyl4gM9JLd\nAvxMROYBLwHXaA3fHaXqnpyek+Om/Se+B4VCrrlkIl9/nf68GWNMbUupbxlVfQN3oTQ47/eB8cXA\naenNWqL8uFu99+yBbt1g4ULX9DFaKi1kOnRIf/6MMaa2ZeQdqlOmuI60AEaNcq8DB1ZMl6xUnpdn\nLWSMMdkpI4P7li3h8UGDXAn+vPMqpmvdOv42OnZ0nVJZfbsxJhtlZJe/wSbyrVrFThMKwbZtsZeN\nGOG6ATbGmGyVkSX31V5DzEQ3Jo0dG/siK7j+aFLt9tcYYzJRRgb3NWvcAzMSXQxNFPhLS1O72GqM\nMZkqY4P7YYfFX57KHanWBNIYk80yMrivWwdt28ZeFgol79YXrAmkMSa7ZWRwLy2FZs1iLxs9Ovn6\n1gTSGJPtMjK4f/cdNGlScf7IkcnvSM3PtyaQxpjsl5FNIWMF92QdhIEr7W/cWHP5MsaYuiJrSu5j\nxybuICwnB55+umbzZYwxdUXGBfe9e1379ejgnqj1S4MGrm27VcUYY+qLjAvuu3e71+jgHq/1iwi8\n8IIFdmNM/ZI1wX3cOMjNrZj+rLMssBtj6p+MC+7ffedeo4P7Rx9BWVnF9B98YF0NGGPqn6wI7ola\nypSVWVcDxpj6JyuCe7KWMtbVgDGmvsmK4J4seFtXA8aY+iYrgnui4J2ba10NGGPqn6wI7uPGuf5i\nojVvDhMmWGsZY0z9k7HBvXHjyPlNm4bH8/Nh4kTYvt0CuzGmfsq4vmWiS+6hEAwdGtkMcvv2A58v\nY4ypS1IquYtIfxFZKiLLRWRMjOV/EpG53vCFiHyb/qw60cF99OiK7dvLylLr+tcYY7JV0pK7iOQA\nTwA/BEqAWSIyRVUX+2lU9eYPoBD5AAAVS0lEQVRA+huAE2ogr0DF4B6vi99kXf8aY0w2S6Xk3htY\nrqorVLUMmARcmCD9YOCldGQulnh3qBpjjAlLJbi3A1YFpku8eRWISEegM/BenOXDRaRIRIo2bNhQ\n2bwCFYN7vCcy5edXafPGGJMVUgnuEmNevPtBLwf+qap7Yy1U1fGqWqiqhW3atEk1jxGaNoX27V1w\nD4Vi9yeTkwOPPlqlzRtjTFZIJbiXAEcEptsDq+OkvZwarJIBuP56WLXKBfexY13f7tFatbImkMaY\n+i2V4D4L6CIinUUkFxfAp0QnEpFjgIOBT9KbxfjidTuwefOByoExxtRNSYO7qpYDo4BpwBJgsqou\nEpG7RGRgIOlgYJJqoi680itetwPWl4wxpr5L6SYmVX0DeCNq3u+jp
u9IX7ZSc+658OSTsecbY0x9\nlnHdDwS98Ubl5htjTH2R0cE9Xp279d9ujKnvMjq4W527McbEltHB/dxzQaJa4eflWf/txhiTscE9\nFILnn498vJ4IXH21tXE3xpiMDe5jx0JpaeQ8VbuYaowxkMHB3S6mGmNMfBkb3Fu3jj3fLqYaY0yG\nBvdQCLZtqzjfHoZtjDFORgb3eB2GtWhhF1ONMQYyNLhbh2HGGJNYRgZ3u3nJGGMSy8jgHq9jMOsw\nzBhjnIwM7tZhmDHGJJaRwd3auBtjTGIZGdytzt0YYxLLyOA+bpzrICzIOgwzxpiwjAzuQ4bA+PHQ\nsaPrLKxjRzdtbdyNMcZJ6TF7ddGQIRbMjTEmnowsuRtjjEnMgrsxxmShjAzuoRB06gQNGrjXUKi2\nc2SMMXVLSsFdRPqLyFIRWS4iY+KkuUxEFovIIhF5Mb3ZDAuFYPhwWLnSPZxj5Uo3bQHeGGPCRIPP\nqYuVQCQH+AL4IVACzAIGq+riQJouwGTgLFXdIiKHqOr6RNstLCzUoqKiSme4UycX0KN17AjFxZXe\nnDHGZBQRma2qhcnSpVJy7w0sV9UVqloGTAIujErzM+AJVd0CkCywV4fdnWqMMcmlEtzbAasC0yXe\nvKDvAd8TkY9E5FMR6R9rQyIyXESKRKRow4YNVcqw3Z1qjDHJpRLcJca86LqchkAXoC8wGHhGRFpV\nWEl1vKoWqmphmzZtKptXwO5ONcaYVKQS3EuAIwLT7YHVMdK8pqp7VPUrYCku2Ked3Z1qjDHJpRLc\nZwFdRKSziOQClwNTotK8CpwJICIFuGqaFenMaNCQIe7i6b597tUCuzHGREoa3FW1HBgFTAOWAJNV\ndZGI3CUiA71k04BNIrIYmAH8SlU31VSmjTHGJJa0KWRNqWpTSGOMqc/S2RTSGGNMhsnI4G7dDxhj\nTGIZ1+Wv3/1Aaamb9rsfALuwaowxvowruY8dGw7svtJSN98YY4yTccHduh8wxpjkMi64W/cDxhiT\nXMYFd+t+wBhjksu44G7dDxhjTHIZ11oG7OHYxhiTTMaV3I0xxiRnwd0YY7KQBXdjjMlCFtyNMSYL\nWXA3xpgsZMHdGGOykAV3Y4zJQhbcjTEmC1lwN8aYLGTB3RhjspAFd2OMyUIW3I0xJgtZcDfGmCyU\nUnAXkf4islRElovImBjLrxGRDSIy1xuGpT+rxhhjUpU0uItIDvAEMADoCgwWka4xkv5DVXt6wzNp\nzud+oRB06gQNGrjXUKim3skYYzJXKv259waWq+oKABGZBFwILK7JjMUSCsHw4eEHZK9c6abB+nc3\nxpigVKpl2gGrAtMl3rxol4jIfBH5p4gcEWtDIjJcRIpEpGjDhg2VzuzYseHA7istdfONMcaEpRLc\nJcY8jZqeCnRS1e7Au8DzsTakquNVtVBVC9u0aVO5nAJff125+cYYU1+lEtxLgGBJvD2wOphAVTep\n6m5v8q/AienJXqQOHSo33xhj6qtUgvssoIuIdBaRXOByYEowgYgcFpgcCCxJXxbDxo2DvLzIeXl5\nbr4xxpiwpMFdVcuBUcA0XNCerKqLROQuERnoJbtRRBaJyDzgRuCamsjskCEwfjx07Agi7nX8eLuY\naowx0UQ1uvr8wCgsLNSioqJaeW9jjMlUIjJbVQuTpbM7VI0xJgtZcDfGmCxkwd0YY7KQBXdjjMlC\nFtyNMSYLWXA3xpgsZMHdGGOykAV3Y4zJQhbcjTEmC1lwN8aYLGTB3RhjspAFd2OMyUIW3I0xJgul\n8gxVY0wW2bNnDyUlJXz33Xe1nRWTQJMmTWjfvj2NGjWq0voW3I2pZ0pKSmjRogWdOnVCJNZTNE1t\nU1U2bdpESUkJnTt3rtI2rFrGmHrmu+++Iz8/3wJ7HSYi5OfnV+vsyoK7MfWQBfa6r7qfkQV3Y4zJ\nQhbcjTEJhULQqRM0aOBeQ6Hqb
W/Tpk307NmTnj170rZtW9q1a7d/uqysLKVt/PSnP2Xp0qUJ0zzx\nxBOEqpvZDGYXVI0xcYVCMHw4lJa66ZUr3TRU/cH0+fn5zJ07F4A77riD5s2b88tf/jIijaqiqjRo\nELv8+eyzzyZ9n5///OdVy2CWsJK7MSausWPDgd1XWurmp9vy5cvp1q0b119/Pb169WLNmjUMHz6c\nwsJCjj/+eO666679afv06cPcuXMpLy+nVatWjBkzhh49enDqqaeyfv16AG677TYeeeSR/enHjBlD\n7969OeaYY/j4448B2LlzJ5dccgk9evRg8ODBFBYW7v/jCbr99ts56aST9udPVQH44osvOOuss+jR\nowe9evWiuLgYgHvvvZfvf//79OjRg7E1cbBSYMHdGBPX119Xbn51LV68mGuvvZY5c+bQrl077rvv\nPoqKipg3bx7vvPMOixcvrrDO1q1bOeOMM5g3bx6nnnoqEyZMiLltVeXzzz/ngQce2P9H8fjjj9O2\nbVvmzZvHmDFjmDNnTsx1R48ezaxZs1iwYAFbt27lrbfeAmDw4MHcfPPNzJs3j48//phDDjmEqVOn\n8uabb/L5558zb948brnlljQdncpJKbiLSH8RWSoiy0VkTIJ0l4qIikhh+rJojKktHTpUbn51HXXU\nUZx00kn7p1966SV69epFr169WLJkSczg3rRpUwYMGADAiSeeuL/0HO3iiy+ukObDDz/k8ssvB6BH\njx4cf/zxMdedPn06vXv3pkePHrz//vssWrSILVu2sHHjRi644ALA3XSUl5fHu+++y9ChQ2natCkA\nrVu3rvyBSIOkwV1EcoAngAFAV2CwiHSNka4FcCPwWbozaYypHePGQV5e5Ly8PDe/JjRr1mz/+LJl\ny3j00Ud57733mD9/Pv3794/Z7js3N3f/eE5ODuXl5TG33bhx4wpp/OqVREpLSxk1ahSvvPIK8+fP\nZ+jQofvzEau5oqrWiaamqZTcewPLVXWFqpYBk4ALY6S7G7gfsHuajckSQ4bA+PHQsSOIuNfx46t+\nMbUytm3bRosWLTjooINYs2YN06ZNS/t79OnTh8mTJwOwYMGCmGcGu3btokGDBhQUFLB9+3Zefvll\nAA4++GAKCgqYOnUq4G4OKy0t5ZxzzuFvf/sbu3btAmDz5s1pz3cqUmkt0w5YFZguAU4OJhCRE4Aj\nVPV1EYm87B2ZbjgwHKBDTZ3XGWPSasiQAxPMo/Xq1YuuXbvSrVs3jjzySE477bS0v8cNN9zAVVdd\nRffu3enVqxfdunWjZcuWEWny8/O5+uqr6datGx07duTkk8PhLxQKcd111zF27Fhyc3N5+eWXOf/8\n85k3bx6FhYU0atSICy64gLvvvjvteU9Gkp2WiMiPgf+nqsO86Z8AvVX1Bm+6AfAecI2qFovIv4Ff\nqmpRou0WFhZqUVHCJMaYGrBkyRKOO+642s5GnVBeXk55eTlNmjRh2bJlnHPOOSxbtoyGDetGK/FY\nn5WIzFbVpNc1U9mDEuCIwHR7YHVgugXQDfi3V8/UFpgiIgOTBXhjjKlNO3bsoF+/fpSXl6OqPP30\n03UmsFdXKnsxC+giIp2Bb4DLgSv8haq6FSjwp1MtuRtjTG1r1aoVs2fPru1s1IikF1RVtRwYBUwD\nlgCTVXWRiNwlIgNrOoPGGGMqL6XzD1V9A3gjat7v46TtW/1sGWOMqQ67Q9UYY7KQBXdjjMlCFtyN\nMQdU3759K9yQ9MgjjzBy5MiE6zVv3hyA1atXc+mll8bddrIm1o888gilgd7Qzj33XL799ttUsp5R\nLLgbYw6owYMHM2nSpIh5kyZNYvDgwSmtf/jhh/PPf/6zyu8fHdzfeOMNWrVqVeXt1VXZ0aDTGFMl\nN90EMXq4rZaePcHraTemSy+9lNtuu43du3fTuHFjiouLWb16NX369GHHjh1ceOGFbNmyhT179nD
P\nPfdw4YWRvZ0UFxdz/vnns3DhQnbt2sVPf/pTFi9ezHHHHbf/ln+AESNGMGvWLHbt2sWll17KnXfe\nyWOPPcbq1as588wzKSgoYMaMGXTq1ImioiIKCgp4+OGH9/cqOWzYMG666SaKi4sZMGAAffr04eOP\nP6Zdu3a89tpr+zsG802dOpV77rmHsrIy8vPzCYVCHHrooezYsYMbbriBoqIiRITbb7+dSy65hLfe\neotbb72VvXv3UlBQwPTp09P3IWDB3RhzgOXn59O7d2/eeustLrzwQiZNmsSgQYMQEZo0acIrr7zC\nQQcdxMaNGznllFMYOHBg3I64nnzySfLy8pg/fz7z58+nV69e+5eNGzeO1q1bs3fvXvr168f8+fO5\n8cYbefjhh5kxYwYFBQUR25o9ezbPPvssn332GarKySefzBlnnMHBBx/MsmXLeOmll/jrX//KZZdd\nxssvv8yVV14ZsX6fPn349NNPERGeeeYZ7r//fh566CHuvvtuWrZsyYIFCwDYsmULGzZs4Gc/+xkz\nZ86kc+fONdL/jAV3Y+qxRCXsmuRXzfjB3S8tqyq33norM2fOpEGDBnzzzTesW7eOtm3bxtzOzJkz\nufHGGwHo3r073bt3379s8uTJjB8/nvLyctasWcPixYsjlkf78MMPueiii/b3THnxxRfzwQcfMHDg\nQDp37kzPnj2B+N0Kl5SUMGjQINasWUNZWRmdO3cG4N13342ohjr44IOZOnUqP/jBD/anqYlugTOq\nzj3dz3I0xtSOH/3oR0yfPp3//Oc/7Nq1a3+JOxQKsWHDBmbPns3cuXM59NBDY3bzGxSrVP/VV1/x\n4IMPMn36dObPn895552XdDuJ+tnyuwuG+N0K33DDDYwaNYoFCxbw9NNP73+/WF0AH4hugTMmuPvP\ncly5ElTDz3K0AG9M5mnevDl9+/Zl6NChERdSt27dyiGHHEKjRo2YMWMGK1euTLidH/zgB/sfgr1w\n4ULmz58PuO6CmzVrRsuWLVm3bh1vvvnm/nVatGjB9u3bY27r1VdfpbS0lJ07d/LKK69w+umnp7xP\nW7dupV27dgA8//zz++efc845/PnPf94/vWXLFk499VTef/99vvrqK6BmugXOmOB+IJ/laIypeYMH\nD2bevHn7n4QEMGTIEIqKiigsLCQUCnHssccm3MaIESPYsWMH3bt35/7776d3796Ae6rSCSecwPHH\nH8/QoUMjugsePnw4AwYM4Mwzz4zYVq9evbjmmmvo3bs3J598MsOGDeOEE05IeX/uuOMOfvzjH3P6\n6adH1OffdtttbNmyhW7dutGjRw9mzJhBmzZtGD9+PBdffDE9evRg0KBBKb9PqpJ2+VtTKtvlb4MG\nrsQeTQT27UtjxozJctblb+aoTpe/GVNyP9DPcjTGmEyWMcH9QD/L0RhjMlnGBPfafJajMdmmtqpj\nTeqq+xllVDv32nqWozHZpEmTJmzatIn8/Pwab45nqkZV2bRpE02aNKnyNjIquBtjqq99+/aUlJSw\nYcOG2s6KSaBJkya0b9++yutbcDemnmnUqNH+OyNN9sqYOndjjDGps+BujDFZyIK7McZkoVq7Q1VE\nNgCJO46IrwDYmMbsZALb5/rB9rl+qM4+d1TVNskS1Vpwrw4RKUrl9ttsYvtcP9g+1w8HYp+tWsYY\nY7KQBXdjjMlCmRrcx9d2BmqB7XP9YPtcP9T4PmdknbsxxpjEMrXkbowxJgEL7sYYk4UyLriLSH8R\nWSoiy0VkTG3nJ11EZIKIrBeRhYF5rUXkHRFZ5r0e7M0XEXnMOwbzRaRX7eW86kTkCBGZISJLRGSR\niIz25mftfotIExH5XETmeft8pze/s4h85u3zP0Qk15vf2Jte7i3vVJv5ryoRyRGROSLyujed1fsL\nICLFIrJAROaKSJE374B9tzMquItIDvAEMADoCgwWka61m6u
0eQ7oHzVvDDBdVbsA071pcPvfxRuG\nA08eoDymWzlwi6oeB5wC/Nz7PLN5v3cDZ6lqD6An0F9ETgH+CPzJ2+ctwLVe+muBLap6NPAnL10m\nGg0sCUxn+/76zlTVnoE27Qfuu62qGTMApwLTAtO/BX5b2/lK4/51AhYGppcCh3njhwFLvfGngcGx\n0mXyALwG/LC+7DeQB/wHOBl3t2JDb/7+7zkwDTjVG2/opZPaznsl97O9F8jOAl4HJJv3N7DfxUBB\n1LwD9t3OqJI70A5YFZgu8eZlq0NVdQ2A93qINz/rjoN3+n0C8BlZvt9eFcVcYD3wDvAl8K2qlntJ\ngvu1f5+95VuB/AOb42p7BPg14D/KPp/s3l+fAm+LyGwRGe7NO2Df7Uzrzz3WY2PqY1vOrDoOItIc\neBm4SVW3JXg6UFbst6ruBXqKSCvgFeC4WMm814zeZxE5H1ivqrNFpK8/O0bSrNjfKKep6moROQR4\nR0T+myBt2vc700ruJcARgen2wOpaysuBsE5EDgPwXtd787PmOIhII1xgD6nq/3mzs36/AVT1W+Df\nuOsNrUTEL2wF92v/PnvLWwKbD2xOq+U0YKCIFAOTcFUzj5C9+7ufqq72Xtfj/sR7cwC/25kW3GcB\nXbwr7bnA5cCUWs5TTZoCXO2NX42rk/bnX+VdYT8F2Oqf6mUScUX0vwFLVPXhwKKs3W8RaeOV2BGR\npsDZuAuNM4BLvWTR++wfi0uB99SrlM0EqvpbVW2vqp1wv9f3VHUIWbq/PhFpJiIt/HHgHGAhB/K7\nXdsXHapwkeJc4AtcPeXY2s5PGvfrJWANsAf3L34trq5xOrDMe23tpRVcq6EvgQVAYW3nv4r73Ad3\n6jkfmOsN52bzfgPdgTnePi8Efu/NPxL4HFgO/C/Q2JvfxJte7i0/srb3oRr73hd4vT7sr7d/87xh\nkR+rDuR327ofMMaYLJRp1TLGGGNSYMHdGGOykAV3Y4zJQhbcjTEmC1lwN8aYLGTB3RhjspAFd2OM\nyUL/H+7UtHiw4qHoAAAAAElFTkSuQmCC\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAXcAAAEICAYAAACktLTqAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAAIABJREFUeJztnXl4FUX2978nIRAgYQuoCELAFQgB\nQkQYUBaRQVxwQQWDiMIgMK7oKIPLOIy8o8IA4oKiiEsiyLgPgojAT8QFTFgCiCxKwLATIUAChiTn\n/eN0p/ve9F2S3OTm3pzP8/TT3dXV1dV3+Xb1qVOniJmhKIqihBcRwa6AoiiKEnhU3BVFUcIQFXdF\nUZQwRMVdURQlDFFxVxRFCUNU3BVFUcIQFXfFESKKJKKTRNQqkHmDCRFdQEQB9/0lov5ElGXb30ZE\nl/uTtxzXeoOIJpX3fC/lPkNEbwW6XCV41Ap2BZTAQEQnbbv1APwBoMjYv4eZ08pSHjMXAYgJdN6a\nADNfHIhyiGg0gOHM3MdW9uhAlK2EPyruYQIzl4ir0TIczcxfecpPRLWYubAq6qYoStWjZpkagvHa\n/T4RzSeiEwCGE1EPIvqBiI4R0X4imkVEUUb+WkTERBRv7Kcax5cQ0Qki+p6I2pQ1r3H8aiLaTkS5\nRPQiEX1LRCM91NufOt5DRDuJ6CgRzbKdG0lEM4goh4h+ATDQy+fzBBEtcEt7mYimG9ujiWircT+/\nGK1qT2VlE1EfY7seEb1r1G0LgK4O1/3VKHcLEV1vpHcE8BKAyw2T1xHbZ/u07fyxxr3nENEnRNTc\nn8/GF0R0g1GfY0S0goguth2bRET7iOg4Ef1su9fuRLTOSD9IRFP9vZ5SCTCzLmG2AMgC0N8t7RkA\nBQCugzzU6wK4FMBlkDe4tgC2A7jXyF8LAAOIN/ZTARwBkAwgCsD7AFLLkfcsACcADDaOTQBwBsBI\nD/fiTx0/BdAQQDyA3817B3AvgC0AWgKIA7BKfvKO12kL4CSA+rayDwFINvavM/IQgH4ATgFINI71\nB5BlKysbQB9jexqA/wPQGEBrAD+55b0VQHPjO7ndqMPZxrHRAP7PrZ6pAJ42tgcYdewMIBrAKwBW\n+PPZONz/MwDeMrbbGfXoZ3xHk4zPPQpABwC7AZxj5G0DoK2x/SOAYcZ2LIDLgv1fqMmLttxrFquZ\n+X/MXMzMp5j5R2Zew8yFzPwrgDkAens5/wNmTmfmMwDSIKJS1rzXAtjAzJ8ax2ZAHgSO+FnHfzNz\nLjNnQYTUvNatAGYwczYz5wB41st1fgWwGfLQAYCrABxj5nTj+P+Y+VcWVgBYDsCx09SNWwE8w8xH\nmXk3pDVuv+5CZt5vfCfvQR7MyX6UCwApAN5g5g3MfBrARAC9iailLY+nz8YbQwF8xswrjO/oWQAN\nIA/ZQsiDpINh2ttlfHaAPKQvJKI4Zj7BzGv8vA+lElBxr1n8Zt8hokuI6HMiOkBExwFMBtDUy/kH\nbNv58N6J6invufZ6MDNDWrqO+FlHv64FaXF64z0Aw4zt2yEPJbMe1xLRGiL6nYiOQVrN3j4rk+be\n6kBEI4loo2H+OAbgEj/LBeT+Sspj5uMAjgJoYctTlu/MU7nFkO+oBTNvA/Aw5Hs4ZJj5zjGy3gWg\nPYBtRLSWiAb5eR9KJaDiXrNwdwN8DdJavYCZGwB4CmJ2qEz2Q8wkAAAiIriKkTsVqeN+AOfZ9n25\nar4PoL/R8h0MEXsQUV0AHwD4N8Rk0gjAl37W44CnOhBRWwCzAYwDEGeU+7OtXF9um/sgph6zvFiI\n+WevH/UqS7kRkO9sLwAwcyoz94SYZCIhnwuYeRszD4WY3v4D4EMiiq5gXZRyouJes4kFkAsgj4ja\nAbinCq65CEASEV1HRLUAPACgWSXVcSGAB4moBRHFAXjMW2ZmP
ghgNYB5ALYx8w7jUB0AtQEcBlBE\nRNcCuLIMdZhERI1IxgHcazsWAxHww5Dn3GhIy93kIICWZgeyA/MBjCKiRCKqAxHZb5jZ45tQGep8\nPRH1Ma79N0g/yRoiakdEfY3rnTKWIsgN3EFETY2Wfq5xb8UVrItSTlTcazYPA7gT8sd9DdJyrVQM\nAb0NwHQAOQDOB7Ae4pcf6DrOhtjGN0E6+z7w45z3IB2k79nqfAzAQwA+hnRKDoE8pPzhH5A3iCwA\nSwC8Yys3E8AsAGuNPJcAsNuplwHYAeAgEdnNK+b5X0DMIx8b57eC2OErBDNvgXzmsyEPnoEArjfs\n73UAPA/pJzkAeVN4wjh1EICtJN5Y0wDcxswFFa2PUj5ITJ6KEhyIKBJiBhjCzN8Euz6KEi5oy12p\ncohoIBE1NF7tn4R4YKwNcrUUJaxQcVeCQS8Av0Je7QcCuIGZPZllFEUpB2qWURRFCUO05a4oihKG\nBC1wWNOmTTk+Pj5Yl1cURQlJMjIyjjCzN/dhAEEU9/j4eKSnpwfr8oqiKCEJEfkaaQ1AzTKKoihh\niYq7oihKGKLiriiKEoboTEyKUkM4c+YMsrOzcfr06WBXRfGD6OhotGzZElFRnkILeUfFXVFqCNnZ\n2YiNjUV8fDwkGKdSXWFm5OTkIDs7G23atPF9ggMhZZZJSwPi44GICFmnlWnKZ0Wp2Zw+fRpxcXEq\n7CEAESEuLq5Cb1kh03JPSwPGjAHy82V/927ZB4CUCsfBU5SagQp76FDR7ypkWu6PP24Ju0l+vqQr\niqIoroSMuO/ZU7Z0RVGqFzk5OejcuTM6d+6Mc845By1atCjZLyjwL+z7XXfdhW3btnnN8/LLLyMt\nQDbbXr16YcOGDQEpq6oJGbNMq1ZiinFKVxQl8KSlyZvxnj3yP5sypWIm0Li4uBKhfPrppxETE4NH\nHnnEJQ8zg5kREeHc7pw3b57P6/z1r38tfyXDiJBpuU+ZAtSr55pWr56kK4oSWMw+rt27AWarj6sy\nnBh27tyJhIQEjB07FklJSdi/fz/GjBmD5ORkdOjQAZMnTy7Ja7akCwsL0ahRI0ycOBGdOnVCjx49\ncOjQIQDAE088gZkzZ5bknzhxIrp164aLL74Y3333HQAgLy8PN998Mzp16oRhw4YhOTnZZws9NTUV\nHTt2REJCAiZNmgQAKCwsxB133FGSPmvWLADAjBkz0L59e3Tq1AnDhw8P+GfmDyEj7ikpwJw5QOvW\nAJGs58zRzlRFqQyquo/rp59+wqhRo7B+/Xq0aNECzz77LNLT07Fx40YsW7YMP/30U6lzcnNz0bt3\nb2zcuBE9evTAm2++6Vg2M2Pt2rWYOnVqyYPixRdfxDnnnIONGzdi4sSJWL9+vdf6ZWdn44knnsDK\nlSuxfv16fPvtt1i0aBEyMjJw5MgRbNq0CZs3b8aIESMAAM8//zw2bNiAjRs34qWXXqrgp1M+fIo7\nEZ1HRCuJaCsRbSGiBxzy9CGiXCLaYCxPVUZlU1KArCyguFjWKuyKUjlUdR/X+eefj0svvbRkf/78\n+UhKSkJSUhK2bt3qKO5169bF1VdfDQDo2rUrsrKyHMu+6aabSuVZvXo1hg4dCgDo1KkTOnTo4LV+\na9asQb9+/dC0aVNERUXh9ttvx6pVq3DBBRdg27ZteOCBB7B06VI0bNgQANChQwcMHz4caWlp5R6E\nVFH8abkXAniYmdsB6A7gr0TU3iHfN8zc2VgmOxxXFCVE8NSXVVl9XPXr1y/Z3rFjB1544QWsWLEC\nmZmZGDhwoKO/d+3atUu2IyMjUVhY6Fh2nTp1SuUp6yRFnvLHxcUhMzMTvXr1wqxZs3DPPfcAAJYu\nXYqxY8di7dq1SE5ORlFRUZmuFwh8ijsz72fmdcb2CQBbAbSo7IopihI8gtnHdfz4ccTGxqJBgwbY\nv38/li5dGvBr9OrVCwsXL
gQAbNq0yfHNwE737t2xcuVK5OTkoLCwEAsWLEDv3r1x+PBhMDNuueUW\n/POf/8S6detQVFSE7Oxs9OvXD1OnTsXhw4eR727jqgLK5C1DRPEAugBY43C4BxFthMxk/wgzb3E4\nfwyAMQDQSt1cFKXaYpo8A+kt4y9JSUlo3749EhIS0LZtW/Ts2TPg17jvvvswYsQIJCYmIikpCQkJ\nCSUmFSdatmyJyZMno0+fPmBmXHfddbjmmmuwbt06jBo1CswMIsJzzz2HwsJC3H777Thx4gSKi4vx\n2GOPITY2NuD34Au/51AlohgAXwOYwswfuR1rAKCYmU8S0SAALzDzhd7KS05OZp2sQ1Gqjq1bt6Jd\nu3bBrka1oLCwEIWFhYiOjsaOHTswYMAA7NixA7VqVS/vcKfvjIgymDnZ17l+3QkRRQH4EECau7AD\nADMft20vJqJXiKgpMx/xp3xFUZSq5OTJk7jyyitRWFgIZsZrr71W7YS9ovi8G5IAB3MBbGXm6R7y\nnAPgIDMzEXWD2PJzAlpTRVGUANGoUSNkZGQEuxqVij+Pqp4A7gCwiYhML/9JAFoBADO/CmAIgHFE\nVAjgFIChXNbuaEVRFCVg+BR3Zl4NwGt4MmZ+CUBwPPUVRVGUUoTMCFVFURTFf1TcFUVRwhAVd0VR\nqoQ+ffqUGpA0c+ZMjB8/3ut5MTExAIB9+/ZhyJAhHsv25Vo9c+ZMl8FEgwYNwrFjx/ypuleefvpp\nTJs2rcLlBBoVd0VRqoRhw4ZhwYIFLmkLFizAsGHD/Dr/3HPPxQcffFDu67uL++LFi9GoUaNyl1fd\nUXFXFKVKGDJkCBYtWoQ//vgDAJCVlYV9+/ahV69eJX7nSUlJ6NixIz799NNS52dlZSEhIQEAcOrU\nKQwdOhSJiYm47bbbcOrUqZJ848aNKwkX/I9//AMAMGvWLOzbtw99+/ZF3759AQDx8fE4ckSG4kyf\nPh0JCQlISEgoCReclZWFdu3a4S9/+Qs6dOiAAQMGuFzHiQ0bNqB79+5ITEzEjTfeiKNHj5Zcv337\n9khMTCwJWPb111+XTFbSpUsXnDhxotyfrRPh5bWvKIpfPPggEOgJhjp3BgxddCQuLg7dunXDF198\ngcGDB2PBggW47bbbQESIjo7Gxx9/jAYNGuDIkSPo3r07rr/+eo/ziM6ePRv16tVDZmYmMjMzkZSU\nVHJsypQpaNKkCYqKinDllVciMzMT999/P6ZPn46VK1eiadOmLmVlZGRg3rx5WLNmDZgZl112GXr3\n7o3GjRtjx44dmD9/Pl5//XXceuut+PDDD73GZx8xYgRefPFF9O7dG0899RT++c9/YubMmXj22Wex\na9cu1KlTp8QUNG3aNLz88svo2bMnTp48iejo6DJ82r7RlruiKFWG3TRjN8kwMyZNmoTExET0798f\ne/fuxcGDBz2Ws2rVqhKRTUxMRGJiYsmxhQsXIikpCV26dMGWLVt8BgVbvXo1brzxRtSvXx8xMTG4\n6aab8M033wAA2rRpg86dOwPwHlYYkPjyx44dQ+/evQEAd955J1atWlVSx5SUFKSmppaMhO3Zsycm\nTJiAWbNm4dixYwEfIastd0WpgXhrYVcmN9xwAyZMmIB169bh1KlTJS3utLQ0HD58GBkZGYiKikJ8\nfLxjmF87Tq36Xbt2Ydq0afjxxx/RuHFjjBw50mc53sZbmuGCAQkZ7Mss44nPP/8cq1atwmeffYZ/\n/etf2LJlCyZOnIhrrrkGixcvRvfu3fHVV1/hkksuKVf5TmjLXVGUKiMmJgZ9+vTB3Xff7dKRmpub\ni7POOgtRUVFYuXIldjtNmGzjiiuuKJkEe/PmzcjMzAQg4YLr16+Phg0b4uDBg1iyZEnJObGxsY52\n7SuuuAKffPIJ8vPzkZeXh48//hiXX355me+tYcOGaNy4cUmr/91330Xv3r1RXFyM3377DX3
79sXz\nzz+PY8eO4eTJk/jll1/QsWNHPPbYY0hOTsbPP/9c5mt6Q1vuiqJUKcOGDcNNN93k4jmTkpKC6667\nDsnJyejcubPPFuy4ceNw1113ITExEZ07d0a3bt0AyKxKXbp0QYcOHUqFCx4zZgyuvvpqNG/eHCtX\nrixJT0pKwsiRI0vKGD16NLp06eLVBOOJt99+G2PHjkV+fj7atm2LefPmoaioCMOHD0dubi6YGQ89\n9BAaNWqEJ598EitXrkRkZCTat29fMqtUoPA75G+g0ZC/ilK1aMjf0KMiIX/VLKMoihKGqLgriqKE\nISruilKD0EjcoUNFvysVd0WpIURHRyMnJ0cFPgRgZuTk5FRoYJN6yyhKDaFly5bIzs7G4cOHg10V\nxQ+io6PRsmXLcp+v4q4oNYSoqCi0adMm2NVQqgg1yyiKooQhKu6KoihhiIq7oihKGKLiriiKEoao\nuCuKooQhKu6KoihhiIq7oihKGKLiriiKEoaouCuKooQhKu6KoihhiIq7oihKGKLiriiKEoaouCuK\nooQhPsWdiM4jopVEtJWIthDRAw55iIhmEdFOIsokoqTKqa6iKIriD/6E/C0E8DAzryOiWAAZRLSM\nmX+y5bkawIXGchmA2cZaURRFCQI+W+7MvJ+Z1xnbJwBsBdDCLdtgAO+w8AOARkTUPOC1VRRFUfyi\nTDZ3IooH0AXAGrdDLQD8ZtvPRukHAIhoDBGlE1G6zgajKIpSefgt7kQUA+BDAA8y83H3ww6nlJqo\nkZnnMHMyMyc3a9asbDVVFEVR/MYvcSeiKIiwpzHzRw5ZsgGcZ9tvCWBfxaunKIqilAd/vGUIwFwA\nW5l5uodsnwEYYXjNdAeQy8z7A1hPRVEUpQz44y3TE8AdADYR0QYjbRKAVgDAzK8CWAxgEICdAPIB\n3BX4qiqKoij+4lPcmXk1nG3q9jwM4K+BqpSiKIpSMXSEqqIoShii4q4oihKGhJy4p6UB8fFARISs\n09KCXSNFUZTqhz8dqtWGtDRgzBggP1/2d++WfQBISQlevRRFUaobIdVyf/xxS9hN8vMlXVEURbEI\nKXHfs6ds6YqiKDWVkBL3Vq3Klq4oilJTCSlxnzIFqFfPNa1ePUlXFEVRLEJK3FNSgDlzgNatASJZ\nz5mjnamKoijuhJS3DCBCrmKuKIrinZBquSuKoij+oeKuKIoShqi4K4qihCEq7oqiKGGIiruiKEoY\nouKuKIoShqi4K4qihCEq7oqiKGGIiruiKEoYouKuKIoShqi4K4qihCEq7oqiKGGIiruiKEoYouKu\nKIoShqi4K4qihCEhKe5paUB8PBARIeu0tGDXSFEUpXoRcpN1pKUBY8YA+fmyv3u37AM6iYeiKIpJ\nyLXcH3/cEnaT/HxJVxRFUYSQE/c9e8qWriiKUhMJOXFv1aps6YqiKDURn+JORG8S0SEi2uzheB8i\nyiWiDcbyVOCraTFlClCvnmtavXqSriiKogj+tNzfAjDQR55vmLmzsUyueLU8k5ICzJkDtG4NEMl6\nzhztTFUURbHj01uGmVcRUXzlV8V/UlJUzBVFUbwRKJt7DyLaSERLiKiDp0xENIaI0oko/fDhwwG6\ntKIoiuJOIMR9HYDWzNwJwIsAPvGUkZnnMHMyMyc3a9YsAJdWFEVRnKiwuDPzcWY+aWwvBhBFRE0r\nXDNFURSl3FRY3InoHCIiY7ubUWZORctVFEVRyo8/rpDzAXwP4GIiyiaiUUQ0lojGGlmGANhMRBsB\nzAIwlJm58qqssWUURVF84Y+3zDAfx18C8FLAauQDjS2jKIrim5AboaqxZRRFUXwTcuKusWUURVF8\nE3LirrFlFEVRfBNy4q6xZRRFUXwTcuKusWUURVF8E3IzMQEaW0ZRFMUXIddyN1Ffd0VRFM+EZMtd\nfd0VRVG8E5Itd/V1VxRF8U5Iirv6uiuKongnJMVdfd0
VRVG8E5LiPmUKEBXlmhYVpb7uiqIoJiEp\n7oD4uHvbVxRFqcmEpLg//jhQUOCaVlCgHaqKoigmISnu2qGqKIrinZAUd+1QVRRF8U5IirsGD1MU\nRfFOSIq7GTwsLs5Kq1s3ePVRFEWpboSkuJucOmVt5+RICAKNMaMoihLC4q4hCBRFUTwTsuKuHjOK\noiieCVlx9+QZ06RJ1dZDURSlOhKy4u4UggAATpxQu7uiKErIintKCtCgQel0HamqKOHFd98BmzYF\nuxahR8iJ+9GjwNq14inz++/OedTurijhQ8+eQGJi1VyroABYtqxqrlXZhJy4f/klcNllwK+/erav\nq91dUcKTSZOAxx7zLy8z8Prrri7TvnjzTWDAACArq1zV88qkScCFFwKZmYEv24mQE/f69WXt7gap\nKEr48+9/A88/DxQX+8771Vcy9uWaa2TJy/N9znffyfrgQdf006fl2u4BC0327AHuuKP0g+TMGWDa\nNOCPP+RBs3OnPECqgpAV97w8z2YZT+mKogSW48eBYcNKi2GgKCqytgsLre2MDN/nnjkj65UrgcWL\ngQ0brGNLlgC9esmxiy4CPv5Y0n/4QdbuGjJ1qrS85851vta99wKpqcDy5cDmzUBMDLBjh+T/29+A\n6GjgyBHJ++mnvuseCEJO3M2YMnl5GkBMUYLJ4cMiXgsWOMd1eu01mWfh9OnyX+P4cWvb3pf2yCOl\n8/72G/DSS2KOcT8XcG1VDxsGfPstMG6ciPB//yuj3HfskOPu4p6TI2t3i0FxsZxrf7gtXCj6NGdO\n6Xs//3xg927rwVOZ+BR3InqTiA4R0WYPx4mIZhHRTiLKJKKkwFfTwm6W0QBiilKajz4CZsyo3Gvk\n5gJnnQX8/e+y72Qmee45WS9d6l+Ze/aIvdsurLm51vY338i6Vy9g1SpX4fzvf6VRd999wE8/iXnl\nrrtcy7eXG2Eo37Ztsp4/37VFvWaNPCQyMoCnngJ++UXSIyOtPMxy7NZbxckDkNa56aKdkSH1tHP5\n5XLegQO+P4+K4k/L/S0AA70cvxrAhcYyBsDsilfLM/aWu1MAsfx84IEH1NddqbncfDMwYULllZ+e\nbgnhH3/IurDQMpVkZopX28UXy/5nnzmXM3euiN2ECdKSnTJFPFXee8/KYxf3kSNlPWiQrM2W/JVX\nisCaJCSIh40p/i1ayPr330VYf/7ZEncAaN9e1qNGWekvvig29kceAf71L2DRotL1/+ST0g3Jw4fl\nDQIQk49p7jHp3VvWe/c6fiQBxae4M/MqAN6s2IMBvMPCDwAaEVHzQFXQHacOVffXr5wc4O67VeCV\nmo0/nY733gv8+c/+lccstudLLwXuvNP12GuvAcnJ0mHYqROQlGSZKpy8Q86cAUaPBlavlreMNWus\nyK5284ld3E3+9CdZb90qQr9ihfd679wp63feEbNNu3aWmQUQW/k558h2QoKV/uSTwLp1rmWdOGGt\nFy6Ulvwzz1jHDx6UejmxZ498NgCwb5/3OgcEZva5AIgHsNnDsUUAetn2lwNI9lVm165duTycOMEM\nMD//vOy3bi37Tkvr1uW6hKKENObv/+DB0sc2bmTOySmd186ePcwHDlj7mzczz5kj6Wb+2rWd/3PL\nlpVOq1uXubCQ+fRp5hdfZP7+e+bvvnPNk5RkbU+aJNf944/SZb39NvOuXbLdtavn/759sd+n+3L2\n2czFxcwrVsj+9Oml8/Tta21feCHzeecxX3GF7CckMBcVMT/3nH/1OHhQtmfNqsj3i3T2Q7cD0aHq\nNDU1O2YkGkNE6USUfvjw4XJdzHy6m25N3gYs6WAmpSp5913giy/Kd25GhrR+K0JGhqsdev9+a/vo\nUWmtduoE9OsnrWazRQu4vgm3amW1ZKdOldbsmDHS8gXERLFzp7gXurPZrWcuLk5a4jNmyP3ddx8w\ncGDpEaf2FvKePWL
mMe3cAHD77cCxY8CIEUDLltJidveYuftu4LzznD8bT1xxhXT69u0LrF8PPPhg\n6Tz9+lnbO3aI2cW0pR89KqacRx+18tSpA1x/vWsZr7wi66ZNxSZfFWaZQLTcXwMwzLa/DUBzX2WW\nt+XOzBwdzfy3v8m2t5Z7XFy5L6HUQLZuZf7hh/Kf79QKLuu5BQVlP3f/fuaLL5bzp061ylqypHT5\n9qV/f2v7559L5+3Rw/m8jAzJl5ZW+tjdd7vu/7//J+v4eOYOHaz0W29lrl+f+aOPSpeRnMycmMjc\nrp2VduSI6z23b+9aDsA8YQLz+vX+t9ynTHF+u5k7V5YtW5j79WPevduzxvzvf9Z511zDfM45sr1t\nW+k6mCxaxLx9e9m/Z+v78a/lHghxvwbAEkgLvjuAtf6UWRFxj4tjHj9etlNTPX/w9euX+xJKDaQ8\n4rxsGXNmJvPevWU/v6hIBGHJEtffrWk2KSgQ04QnJkxg/stfmN991zp30iRr+803S9+bfTEFEmBe\nulTy5eZ6/j+Zy759nvMmJ8t60SLJV1TE/OijkkZkmTPq1mW+9FIpp21b79f74IPS937VVXJs3jzm\nceNk+1//kmPp6da5pog6lZud7f935XS+2cA0KS62Hs5FRcxPPlm+35TvugTILENE8wF8D+BiIsom\nolFENJaIxhpZFgP4FcBOAK8DGB+AFwqv1KtnmWVSUjzny8vTTlWlYpw+Lb7QnlzXrrpK4p6YHhnu\ndOggZo4XX5RX+Jkzrd/uli3A55+Lz7Wd9HRZJydbDgROTJ8uox5XrrTS7OaVt98WafHEr79a2+Zw\ne9PT4513LC+S//xHzCImZ50l6wYNgJdflk7ZyZOBjh2tujduDDRvLiYL87Nhlnslks7ehx6S9Nat\nnevXuLEsN99c+tiMGeIRc911lodLTIxr/SIjZbg/AGRnyz3+/rtcH/D8nTnx/vtSFzvu9Say3CAj\nIuQzeeONIMaq8ecJUBlLRVrul1zCfMst1r52qipl4YsvpAPNHfM3U1RkpS1dKmnXXFM6f05O6d9b\ndLR1/PRp12ODB8v6wQfFBGSmX3SRZVoApNPQXp9Nm8REsXWr57IjIphbtWIeNkz24+JkvXGja1n2\n6wBiRgCYR41izs9nfust2f/2W6sV/vHHrmV44oknrDxbtljpH35opW/ezLx4sZgtTI4dk9b54MHM\nDz0k+bp2lbTp0z1fz+S+++ScGTNk/9Qp6y3CiV27XOvnL0OHSgdsy5ZS/hdflL2MQIBAmmUqY6mI\nuCclMQ8aZO17M80E+pVICX08/S7czSLM8gcGmJs2tdLWrBEzzLRppX9rsbFWvqws599jy5bMl19u\n7Z9/voizuT95smt9TNv4qFE3zx8yAAAYZElEQVTy6v/ZZ2IKAiy7dN26ImZ/+pPsT5gg6+eeY27W\nTLaTkpjz8pjnz7fKnjePuVev0nXcu5f5tttk+5tvpD5RUd7/T/aH3d69VvoPP1jpvvoUiorkIfHL\nL97z2bn/finb/iBYtoz50CH/y/CHHTuY164Vr6ElS8QDKBiEtbhffjlz796uaWZLxX0hEvFXFGYR\nR/O3kZdnpZ88aaVv3y4t16NHmUePdn0YONmZW7Rw3Tf/9O7ufgBz587W9qOPMt91l7X/0kvSMhw1\nymrBui9Nmsj60ku55C3APPbnPzOfe65sv/aa2NS7dLGOv/KKdb+mHTw3V/qvAOYBA5j/8x95eDCL\nWE+dar3J5OS4ukg6YV4rP99Ky86WtObNK/bdeWLOHCn/k08qp/zqRliL+6BB0gqx4631rqaZ0KW4\n2No+eZJ53bqKlXfggPW7mDRJhPiqq1zNFQ884Pw7ys1lfuMN17S4OOYvv3RNO3pUrvXBB7L/6afW\nsWeftbZXrHD93X75JfNllzF371762ikppdNatWJevtw5T1qadLba86elWZ/DmTNWP
Y8edTZTlQfz\nWvbvrbhYTCa7dwfmGu4UF8vDuKbgr7iHXOAwQOK1Hz3qf371dw9NBg+WzsqlS8U/+6qrZORjbq6M\nkFy9uuxl7tplbT//vMQEWbbMNRzs5587n7t5s0QGNPnHP2S4eXy8a77cXOmAHTJE9nv0kCH4s2fL\ntkn37tLBuH27+E/37w9ccIEVmdBO//6l0959V0ZbmtjDcMTEuF4LABo1srZr1bL2GzUSP+9AsGGD\ndPKSbfQLkfiPV1ZAPyJr1KpiUSvYFSgPTZq4Dh8GvE+tp1EiQxMzJslAt8hGX38t3hxLl5YtANOu\nXeJBAkjAp8mTJeCUO/bBPXZSU4FDh8QDYsUK4P77RVjOPVeON20qgaPcxT4uTrw6APHaAMSTxByQ\nZ3p0AMAll5S+bq1armW+/748QK64wgoxUKeOa56YGPHUsWMX98qiUydriL0SXEJW3I8fl/gUpuuR\nt9a5RomsHkyYIEGenEY2uuMtLsr8+bI2xfH0aRHNoiIrWJU7v/4q4VYBcVO75RYR97K4yr73npx7\n660SZMqkfn0Zcbh+PXDttVZ6mzbi+mgPUtWihbjx3Xij8zWcxP1vf7Pc7urUcQ2SFREBfPihuC3G\nx8ubyPbtsm+OMjVxj6CqhDn+2G4qY6mIzX3WLLHr2XvDPblD6ijVwFBYKC5k/lBQIJ4Fds6csb4T\nZvH2iI2VEYZO2AcFeVrat5d62dMyMmTU4cCB4tlw6pTEIzJHcJode4WFpd0CAec0QLxlABk56Yl1\n66z8o0dLH0FZMb1grr1WRlseOiQ25TNnmC+4QDxdysJ//mPVyVdnqBIaIJw7VM1hz/Yh06mpzPXq\nOYu7estUHNOHec8e33nNDkm7mJgeE6a42wMtFRaKh8aUKXJswQIRYF/i3rCheLb4ymcuzz0nowq/\n/16uc/PNkm76hgPi9nfbbcyff241IgDmxx+XtemD7ont26XjtbxucsXFMuL0+PHyne+pzNOnA1ee\nElzCWtzN4drffeeanprq2SVy3LhyX05hy4Vv5Urn46YXyB9/WMPJzRgkzK6+zq+/7urnbXcZ3LtX\nRNv9+6tVy3Xf9DF/5RX/xX3zZtc6797NPHasNBLsDx479gdQerqrF4iiBAN/xT1kbe5A6amwUlJk\nog4nZs+W4crewhUonjFnoMnIkGHmc+eKDT0iQrxWnnhCjv/2m8ghIDGrGzaUCYHtnWx/+YusL75Y\nZsKxezo8/bQV2XDiROmI/OQT8fzYsUOmRrvoIst7ZLyfwS7OPtsaTm/SqpX8LtwnNbYzYoTYriMj\nga5d/buWolQL/HkCVMZSkZb7jh3Smpo3z+mp5nnRQGIWxcXSGp871/m46QN94YUy0MY++KZxY1l/\n9JEV9c9p8dWqtge5Gj5cIgea+2PG+K6/OZDnvvtkOLnTNf77Xy4xvXjDU8tdUaobCGc/99atpeff\nnxnQ7WggMYu9e8Un2e71YfLBBxIk6bvvpLU8b57lwgdYYww++khmlfeE09RkdhITLU+Sa66xAj4B\nQGys93OJxB2xoACYNUta5RddZB2/5x7xMhk8WFr57vNpOuHkS64ooUpImmWiosTE8sUX4jJndzWL\niyvtA29nxAhZ1xTzzJIl4hM9dKiVlpFhTegLSJs1P1/MJz16iA84IJ+xyZEjlhnFZPly79d2F/7P\nP5eHxD33yP4FF4jv9e+/i0/2gAFWvTp29H1vZhRAk5UrrendLrvMEnR/ovLl5VlutYoSFvjTvK+M\npSJmGWaJwwEwP/20a7qvIGKATBFWXT1oDhywTCJ2yuNWx+xqbpg3zzlmyZYtcgyQqIaePrfMTPE4\nOfdc56BZTosZWdBu8jA7XI8eZV69mvnGG8XV78wZiYC4c2f5Oy7ff1/K3rChfOcrSnUH4ewtwyx/\n/o4dJfi/O2bwfm9LdY03A0hwKBO7n/JTT/lfz
g8/WNHyTG8Pb5/HZZe57q9ebUUYBCQ0rCm4xcXi\nCQMwDxli5fnxR2v7hhtkfffdEvXwkkusuv3yiwR7qiycHo6KEi6EvbgzS6dbkybOrTxfAk9U4csH\njDffZF61SrbdW7nu9bZTUGD5U//738yzZ1vHhg51PS8ry4rdbS6dOjFff73z53P6tPhr9+sn+z17\nul67uFj80XNzxV/8ggtc65uVJZE7t2+3WuWKolScGiHu5iATT6P2PI02rE4td9OM4N66NnGv92+/\nSXpurpiXrrpKpjMzj5vxst2nLjOnN7Mv8+aJSF9yiezffLMsdl/211+XY9df79/93HOPXFtRlMrB\nX3EPSW8ZkyuuEP/j22+XYFLuREd7Prc6xJv5+WfgttusfTNmCiC+10uXlj5nwQKZLqxhQ/EUWbbM\nClwFyEztBw64TqEGWLO1T5wondDZ2cDIkeJ10q2bHBs9Wjxl+vSxzjtzRtb2iIPeePVV11nrFUUJ\nDiEt7p06iWdM/foiSu64D3KyM3Zs8NwiT5yQ9d69rul33GFt33hj6WiIkZHi3temjXO5/fvLgCIz\nQqZTpMyzzxZBt88fOX068OOPpa8HiLsiIC6FiqKEDiEt7oC0YJOTgTVrgGefdY042KyZ5/NOngTu\nvjuwAs8MbNok2zk54orn/oDZvl0mFn7nHWtSYiecWu1FRc5527aVh9XixdKKf/NNSR81Supkd1+s\nU6f0+XFx8hk60bOnhLlVcVeU0CLkxR0Qs8KGDcDf/y4Cl5cnQ9gPHfJ+XkGB9zjwgAzYGTdOHga/\n/SZC6m4COnlShtBfdJG0dFeskIkg3npLBP7KK8WP/OabgZtuknM+/NB14gh/sPud23nrLRlGHxUF\nJCRIWlQUcN55sn3RRWIuAVwHCvmLt4ekoijVk7AQ9+HDXVu1mZnSQvaH3btLp2VnyyCn/HwR6Vdf\nldbw2rUiyAMGuObfvBn4/ntrkoeffwa2bpXtzz4TsZ87V0Z0btki6dHRkq8sfPSR2NwTEiRu+K5d\nwB9/AJdfbuUxJ2iIj5dJHkzGjJFBR+bDRVGU8CYsxL1jR9dh9DfcADz5pP/nuw87nzhRJmb49FMR\neEDeBPbtk+2CAllmz5ZWu7t5JSsL+Oor17RHH3XdX7hQWu/DhwPp6Vb6Qw/J2pyIws5ZZ0kH7KZN\nwP/+JwJeu7ZrHtO84j4hBpF0ttqnP1MUJXwJyfADTkyZImEI8vNlbklzijZ/WL5coguas+aYHiLF\nxdIyBsR2bn87uP9+4LXXxKuloMC1vKlTZX3nnSL0hYUSzbBFCxkW/9FHcrx9e3kjiIqSN4jcXJly\nrV49qU/v3mIjN1v7/jB0qMyr2bmz/+coihJ+hI24N2tm2ZW7dJE4Jrt3e54P053Zs8UV8dgxK+3A\nAbGzA6Vb56+/Lut160SUY2MtLxhAOjbfeku28/Kk/MGDRbzfe0/MPaawA67zvD7zjKy3bZMO0Vq1\n/A83GxEh968oSs2GxCe+6klOTuZ0uz2iEjh1SgS2QQPxjHFvYZeF5s2B/fvFDGKWY9/u2lVmsp89\nW/y8r7yytGmmvGzfLjHFGzQITHmKooQuRJTBzB782yzCwubuibp1xdMlJcWzq58/dO8uvvRRUdZE\nE02aAD/8IPb9hx4SE83DD8sa8H/Qjz9cdJEKu6IoZSNszDK+uPRSiU8eESG2dH9JSZF45lFRcl5B\nATBnjnRsdukCfPyxa/6+fSW+uHqlKIoSTGqMuJ99tqzT0sRE421qNZO6dYHUVGs/IkJcGL/+uvSU\nbfY8991X8foqiqJUhBoj7g8/LJ2Wt94qXi/Dh3vP36QJMG2a87EePQJfP0VRlEDil82diAYS0TYi\n2klEEx2OjySiw0S0wVhGB76qFaN2bTGxRETI2pdN/Pffxb6u0/IpihKK+BR3IooE8DKAqwG0BzCM\niJyMEu8zc
2djeSPA9Qw4L7zgO09REfDAA5VfF0VRlEDjT8u9G4CdzPwrMxcAWAAg5MNIpaSIJ40v\ncnKktR8fr614RVFCB3/EvQWA32z72UaaOzcTUSYRfUBE5wWkdpXMK6/4l49ZBkSNGaMCryhKaOCP\nuDtFI3Ef+fQ/APHMnAjgKwBvOxZENIaI0oko/fDhw2WraSVRFn/0/HwJKaACryhKdccfcc8GYG+J\ntwSwz56BmXOY2YjCgtcBOA6WZ+Y5zJzMzMnNqkkc2RdeKB18yxtFRRLGVwVeUZTqjD/i/iOAC4mo\nDRHVBjAUgEtYLiJqbtu9HsDWwFWxcklJkRgvrVv7f86ZM9rRqihK9canuDNzIYB7ASyFiPZCZt5C\nRJOJ6Hoj2/1EtIWINgK4H8DIyqpwZZCSIoHB/OlgNcnJkfC5TZtqK15RlOpHWAcOKw/9+0sI4LIy\nbpz/HbSKoijlRQOHlZOvvpKQA2WxwwMSDVJb8IqiVBdU3B1ISZFJOsoa2fGOO9QnXlGU6oGKuxde\neEFmRfIXZssnfvhwtccrihI8VNy9kJIi4X3LG5s9J0dEPjZWRV5RlKpFxd0HKSnAkSMys1J5OXlS\nwgyrwCuKUlWouPvJV1+JRww5jdf1g4ICYMQItckrilI1qLiXgVdekdmYyus9ap5r2uRjY1XsFUWp\nHFTcy0lZRrR64uRJDUqmKErloOJeTqZMkXlVA0V+voY0UBQlcKi4lxNz4uzyetI4YYY0iIwExo8P\nXLmKotQ8VNwrgOlJY/q3p6YGRuyLi2XEa0SEiL25ti/qQ68oijdU3AOIXezLEoTME2bHrVMHbk4O\nMHKkCryiKM6ouFcSr7wiLfn69SvvGoWF4nVjb9GbrXz1wFGUmo2KeyWSkiIeMamp4l1DVPaAZGXF\nbOXbQyCMHy9ir26XilJzUHGvAsx48cXFEpAsULZ5f8jJEfv97t2W26WOllWU8EfFPQiYtvmYmOBc\nv6DAatX7as2npWmrX1FCERX3IPLqq5VvpvFGTo7riFl3jxwiSbe3+s18tWqpbV9RqjMq7kHEPn8r\nUeV2vgaaoiJZ794tE4b78xagKErVoeIeZOz2ePfO19atxaXS3I+MDHZtnTlzxvNbQGSk5ZevDwBF\nqTpU3KsZdrHPyhKXSnP/7bfLNnlIdaC4WNY5Of6Zgby5dNrt//qwUBTv6ATZIUZaGvD448CePUCT\nJpL2++/WthnCIEhfa1CJiQHy8oBWrST2T0pKsGukKIFHJ8gOU+wt+yNHZLFvM1uhhavS5bI6YI+y\n6evNwNNSqxbQv7+8Ddg7js21PRSEGQLCfKPQjmalOqHiHsZUVuybcKaoCFi+XB4Q5r59bX8jMqdR\nND2K7PnsDxgn05L7QyUy0vmhog8JpbyoWUYB4GruadUKGDQIWLhQBEwJLhERQN26YnJyOlZcLJ3u\ngwYBixfLgyUyUh40rVuriSrcULOMUiacOnLNVr/pwaMEh+JiZ2E3jwEi6OZIZMD5DaIsi71De/x4\nMUE5vW14MlnZzVbjx7vmqcgMZOUZVFfRgXihOpBPW+5KhUhLk1mk8vODXROlplO/vrjlFhR4z0cE\n9OsHbNhgvZmaY0zcH6K1azuXZ74xmWZOu1ODuX36tGt5cXHACy9U/C1KW+5KlZCSAsyZ4+qbn5pa\n/tZ+hP4ilXKSl+db2AF5G12+3NXkmJfn/HbkqTwnF1/3bffyzD4a880mNrZy3wL0r6RUGHeTTkqK\nlWY369jF397Jaz/2zjuej6WmyqAuoiDfsKIEgJMnK3lOBmYOytK1a1dWlPKQmsrcujUzkaxTU0sf\nA5gjI2XdujXzuHHMcXHmY0O2U1Ml3cyniy7BWFq3LtvvH0A6s2+NVZu7ohi4ewyZXiZmut0LxZMt\ntqYOIFPKD5Fl5vEvfwBt7kQ0kIi2EdFOIprocLwOEb1vHF9DRPH+V1VRqgd
O5iV7OrPMfsUscfnH\njbPi/URGyn5xsfN4gtq1rf4EIqBOHetY/fqS3938pNQMWrWqpIJ9Ne0BRAL4BUBbALUBbATQ3i3P\neACvGttDAbzvq1w1yyiKd7yZn+x5nMxN3vLUry+LuR8RYZ1rT9el8pdatZy/V2/AT7OMP+LeA8BS\n2/7fAfzdLc9SAD2M7VoAjsBws/S0qLgrSuji7aHi6aHkq6/En4eU2Z9C5FkwIyKkL6W696fExJRd\n2JkDK+5DALxh278DwEtueTYDaGnb/wVAU2/lqrgrilKV+PMAMfP5ejjFxcni7a2qsvBX3H12qBLR\nLQD+zMyjjf07AHRj5vtsebYYebKN/V+MPDluZY0BMAYAWrVq1XW3OZxOURRF8YtAdqhmAzjPtt8S\nwD5PeYioFoCGAH53L4iZ5zBzMjMnN2vWzI9LK4qiKOXBH3H/EcCFRNSGiGpDOkw/c8vzGYA7je0h\nAFawr1cCRVEUpdKo5SsDMxcS0b2QTtNIAG8y8xYimgyx/XwGYC6Ad4loJ6TFPrQyK60oiqJ4x6e4\nAwAzLwaw2C3tKdv2aQC3BLZqiqIoSnnR2DKKoihhSNDCDxDRYQDldZdpCvGlr0noPdcM9J5rBhW5\n59bM7NMjJWjiXhGIKN0fV6BwQu+5ZqD3XDOointWs4yiKEoYouKuKIoShoSquM8JdgWCgN5zzUDv\nuWZQ6fcckjZ3RVEUxTuh2nJXFEVRvKDiriiKEoaEnLj7mhUqVCGiN4noEBFttqU1IaJlRLTDWDc2\n0omIZhmfQSYRJQWv5uWHiM4jopVEtJWIthDRA0Z62N43EUUT0Voi2mjc8z+N9DbGLGY7jFnNahvp\nYTHLGRFFEtF6Ilpk7If1/QIAEWUR0SYi2kBE6UZalf22Q0rciSgSwMsArgbQHsAwImof3FoFjLcA\nDHRLmwhgOTNfCGC5sQ/I/V9oLGMAzK6iOgaaQgAPM3M7AN0B/NX4PsP5vv8A0I+ZOwHoDGAgEXUH\n8ByAGcY9HwUwysg/CsBRZr4AwAwjXyjyAICttv1wv1+Tvszc2ebTXnW/bX+CvleXBX7MChXKC4B4\nAJtt+9sANDe2mwPYZmy/BmCYU75QXgB8CuCqmnLfAOoBWAfgMshoxVpGesnvHOWY5ay6LZAw4csB\n9AOwCACF8/3a7jsLbpMWVeVvO6Ra7gBaAPjNtp9tpIUrZzPzfgAw1mcZ6WH3ORiv310ArEGY37dh\notgA4BCAZZCZy44xc6GRxX5fJfdsHM8F4Db9drVnJoBHARQb+3EI7/s1YQBfElGGMVERUIW/bb+i\nQlYjyCGtJvpyhtXnQEQxAD4E8CAzHydyuj3J6pAWcvfNzEUAOhNRIwAfA2jnlM1Yh/Q9E9G1AA4x\ncwYR9TGTHbKGxf260ZOZ9xHRWQCWEdHPXvIG/L5DreXuz6xQ4cRBImoOAMb6kJEeNp8DEUVBhD2N\nmT8yksP+vgGAmY8B+D9If0MjYxYzwPW+/JrlrBrTE8D1RJQFYAHENDMT4Xu/JTDzPmN9CPIQ74Yq\n/G2Hmrj7MytUOGGf4epOiE3aTB9h9LB3B5BrvuqFEiRN9LkAtjLzdNuhsL1vImpmtNhBRHUB9Id0\nNK6EzGIGlL7nkJ3ljJn/zswtmTke8n9dwcwpCNP7NSGi+kQUa24DGABgM6rytx3sTodydFIMArAd\nYqd8PNj1CeB9zQewH8AZyFN8FMTWuBzADmPdxMhLEK+hXwBsApAc7PqX8557QV49MwFsMJZB4Xzf\nABIBrDfueTOAp4z0tgDWAtgJ4L8A6hjp0cb+TuN422DfQwXuvQ+ARTXhfo3722gsW0ytqsrftoYf\nUBRFCUNCzSyjKIqi+IGKu6IoShii4q4oihKGqLgriqKEISruiqIoYYiKu6IoShii4q4oihKG/H9R\nd3NRVTxengAAAABJRU5ErkJggg==\n"
, "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Plot the training/validation trends over the epochs (in particular, to check for \"overfitting\")\n", "import matplotlib.pyplot as plt\n", "\n", "# Extract the key metrics monitored during each training epoch\n", "acc = history.history['acc']\n", "val_acc = history.history['val_acc']\n", "loss = history.history['loss']\n", "val_loss = history.history['val_loss']\n", "\n", "# Total number of training epochs\n", "epochs = range(len(acc))\n", "\n", "# Plot the \"Training acc\" and \"Validation acc\" trend lines\n", "plt.plot(epochs, acc, 'bo', label='Training acc')\n", "plt.plot(epochs, val_acc, 'b', label='Validation acc')\n", "plt.title('Training and validation accuracy')\n", "plt.legend()\n", "\n", "plt.figure()\n", "\n", "# Plot the \"Training loss\" and \"Validation loss\" trend lines\n", "plt.plot(epochs, loss, 'bo', label='Training loss')\n", "plt.plot(epochs, val_loss, 'b', label='Validation loss')\n", "plt.title('Training and validation loss')\n", "plt.legend()\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looking at the training and validation accuracy curves, the validation accuracy stops improving after about 50-60 epochs, while the training accuracy keeps rising.\n", "Although a prediction accuracy of 83% already places around the top 10 of this Kaggle competition, possible directions for further improvement include:\n", "* Adding more character images\n", "* Tuning the image augmentation further (for example, adding the random \"flip\" of the image intensity channel mentioned in the original article; it was left out of this article's implementation for simplicity)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### Summary\n", "Some of the interesting points I learned from this article:\n", "* How to use Keras' ImageDataGenerator to enlarge the set of training images\n", "* Different network architectures give different results (VGG16, VGG19, etc.); when unsure how to design an effective network, it helps to start from these proven structures\n", "* The two-stage training idea and the choice of optimizers (adadelta, then adamax) differ from what typical examples demonstrate\n", "* Hardware (a GPU) matters a lot for complex deep learning; fast feedback leaves the time and energy to try other hyperparameters and combinations, which leads to better models" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "References: \n", "* [Kaggle First Steps With Julia (Chars74k): First Place using Convolutional Neural Networks](http://ankivil.com/kaggle-first-steps-with-julia-chars74k-first-place-using-convolutional-neural-networks/)\n", "* [Keras documentation](http://keras.io/)" ] }, { 
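"cell_type": "markdown", "metadata": {}, "source": [ "The intensity \"flip\" augmentation can be sketched as a small standalone helper. The cell below is a minimal, hypothetical sketch (the name `random_invert` and the probability `p` are assumptions, not from the original article); it assumes pixel values scaled to [0, 1] and could be passed to Keras' `ImageDataGenerator(preprocessing_function=...)`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Minimal sketch (assumed helper, not part of the original notebook):\n", "# randomly invert grayscale intensities as an extra augmentation step.\n", "import numpy as np\n", "\n", "def random_invert(img, p=0.5, rng=None):\n", "    # Pixel values are assumed to be scaled to [0, 1].\n", "    rng = rng or np.random\n", "    return 1.0 - img if rng.rand() < p else img\n", "\n", "# Hypothetical usage with Keras augmentation:\n", "# datagen = ImageDataGenerator(preprocessing_function=random_invert)" ] }, {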
"cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.4" } }, "nbformat": 4, "nbformat_minor": 2 }