{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "**Chapter 11 – Deep Learning**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_This notebook contains all the sample code and solutions to the exercises in chapter 11._\n", "\n", "\n", " \n", "
\n", " Run in Google Colab\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Warning**: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions. In particular, the 1st edition is based on TensorFlow 1, while the 2nd edition uses TensorFlow 2, which is much simpler to use." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# To support both python 2 and python 3\n", "from __future__ import division, print_function, unicode_literals\n", "\n", "# Common imports\n", "import numpy as np\n", "import os\n", "\n", "try:\n", " # %tensorflow_version only exists in Colab.\n", " %tensorflow_version 1.x\n", "except Exception:\n", " pass\n", "\n", "# to make this notebook's output stable across runs\n", "def reset_graph(seed=42):\n", " tf.reset_default_graph()\n", " tf.set_random_seed(seed)\n", " np.random.seed(seed)\n", "\n", "# To plot pretty figures\n", "%matplotlib inline\n", "import matplotlib\n", "import matplotlib.pyplot as plt\n", "plt.rcParams['axes.labelsize'] = 14\n", "plt.rcParams['xtick.labelsize'] = 12\n", "plt.rcParams['ytick.labelsize'] = 12\n", "\n", "# Where to save the figures\n", "PROJECT_ROOT_DIR = \".\"\n", "CHAPTER_ID = \"deep\"\n", "IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID)\n", "os.makedirs(IMAGES_PATH, exist_ok=True)\n", "\n", "def save_fig(fig_id, tight_layout=True, fig_extension=\"png\", resolution=300):\n", " path = os.path.join(IMAGES_PATH, fig_id + \".\" + fig_extension)\n", " print(\"Saving figure\", fig_id)\n", " if tight_layout:\n", " plt.tight_layout()\n", " plt.savefig(path, format=fig_extension, dpi=resolution)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Vanishing/Exploding Gradients Problem" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "def logit(z):\n", " return 1 / (1 + np.exp(-z))" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Saving figure sigmoid_saturation_plot\n" ] }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAagAAAEYCAYAAAAJeGK1AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzs3Xl4Tdf6wPHvisyDOVJEDTXGPLXILTFXUUNoqbHaKlo/LUqvoVeq1dYY91YHnaKCKkUNNZaosYSG0opWY4gIgpDIJMn6/bGPNMMJCSc5Gd7P8+wn2Xuvs9d7tiPvWXuvvZbSWiOEEEIUNDbWDkAIIYQwRxKUEEKIAkkSlBBCiAJJEpQQQogCSRKUEEKIAkkSlBBCiAJJEpR4IEqpIKXUR9aOA3IWi1LqhFJqRj6FlL7eAKXUxnyox0cppZVS5fOhrpFKqfNKqVRrnNNMsQxXSsVaMwaRd5Q8ByUyU0q5A37A00BFIBo4AXygtd5uKlMWuKO1jrFaoCY5iUUpdQJYrbWekUcx+AC7AHetdVS67aUw/p9FW7Cus8BHWuu56bbZA2WByzoP/1MrpcoAV4DxwGogRmudLwlCKaWB/lrr1em2OQFuWusr+RGDyF+21g5AFEjfA87Ai8BfQAWgHVDubgGt9XXrhJZVQYolM631zXyqJwmIzIeqqmL83diotb6UD/Xdk9Y6Hoi3dhwij2itZZElbQFKAxrodJ9yQRjf4u+uewDrMf5YnANewGh1zUhXRgOjgR+AOOA00B7wBLYCt4EQoFmmuvoCvwGJwAVgKqbWfzaxVDDVcTeWEZljMfN+HjO9JtIUx1GgR6Yy9sAs0zETgb+B/wOqmd5b+iXA9JoAjD/mACOBy0CJTMddDqzPSRym95qhLtN2H9N6+Vyct7PANOAz4BYQDrx5j3M03Mz7rAbMAE6YKRubbn2G6d9gAHAGiAHWpY/XVG5YupgvA0vSxZq+3rPm6jFtewXji1WS6efLmfZr07/FKtM5/hsYbO3/e7JkXeQelMgs1rQ8o5RyzMXrlmB8u+4A9AIGm9YzmwZ8CzQGgk2/fwl8DDQFIjD+qAOglGqO8YdkDdAQeAv4N/DaPWIJAGoCnYDewFCMP6T34gpsBjqbYvseWKOUqpvpPQ7FuLxVD6OFGY3xx9/XVKY+xmXRcWbqWAWUMtVx9/25YpyvwBzG0RcjkbxjqqeiuTeTi/P2BkZCaAZ8CMxWSrU2d0xgJfCU6ffHTXVfyKasOdWA54A+QBeMf+/30sX8Ckay/BpohHGJ+YRpd0vTz5dN9d5dz0Ap1Qf4CPAHGgALgY+VUj0zFX0b44tAY9P7+kop9Wgu3ovID9bOkLIUvAXjj+11IAE4AMwFnshUJghTqwWog/GttFW6/VWAFLK2oN5Pt97AtG18um0+pGsJAMuAnZnqngGEZxNLbdPrvdPtr5o5lhyeh4PANNPvtUzHfSqbshniTrc9AFMLyrS+Bliabn0wcBNwzEkcpvWzwMR71Z/D83YWWJGpzJ/p6zITSwtTPdUyHTcnLagEoFS6bVOBv9Kth2Pc58yubg30u089+4CvzPwb7L3H59AWo0UvragCtkgLSmShtf4eqAT0xPg23wY4qJSaks1L6gKpGC2iu8e4gNEayux4ut8vm37+ZmZbBdPPehh/dNLbC1RWSpU0c/x6plgOpYvlXDaxpFFKuSilZiulfldK3TD1DGsB3P1W3dR03F33Ok4OBAK9lVLOpvVBwPda64QcxpFTOT1vxzOVieCfc29p53TGe3JpdSmlKgCVgZ8eso7s3rdXpm1p71trnQxcJe/et3hAkqCEWVrrBK31dq31O1rrNhiX4WaYeos9jDvpq7nHtpx8Nu/VWy23PdnmAv2B6RgdQppgJLmHfb+ZbQKSgV6mP8qd+OfyXn7Fkf7c3DGzL7d/F1IBlWmbnZlylqjrQWX+PFgzFpFD8g8icup3jEsh5u5LncL4LDW/u0Ep5YnRCntYfwDembb9C+NSlblu5XdjeTxdLI/mIJZ/Ad9orb/XWh/HuNz0WLr9Iabjts/m9UmmnyXuVYnWOhHj3tAgjPsxkRiXKHMax9267lkPuT9vD+Mq4KGUSp+kmuTmANroJn4R6HiPYnd48Pf9e27iEQWDJCiRgVKqnFJqp1JqsFKqkVKqulKqPzAJ+ElrfSvza7TWoRi98D5VSrVSSjXBuNEdR+5bMpnNA9oppWYopWorpQYBE4DZ5gqbYtkCfKaUam2KJYD7d0U+DfRRSjVTSjXEaNWkJWOt9WngO+ALpZSv6bw8qZQaYipyDuO9dldKuZs6P2QnEOgKjMK4B5Sa0zhMzgJPKqUq3+PB3Fydt4cUhPEM1hSl1GNKqReBfg9wnPeA15VSb5hibqKUmpBu/1mgo1LqEdPzWObMAYYopV5VStVSSo3F+DKQF+9b5DFJUCKzWIyb8uOA3cBJjK7VyzG+8WdnOMa3/SCM7ubLMB7oTHiYYLTWRzEueflieljYtNxr5IjhQBiwE9hgiv3sfaoab4p3D8Z9t4Om39MbajrWfzFaagEYvfLQWl8E/oPxR/byfeLbg9Fa8CLj5b2cxvE2RieUMxitlywe8Lw9EK31HxiPD4zEuLfTGeMzk9vjfAK8itFT7wTGF4366YpMwGjBXgB+zeYY64CxGL0Tf8f4HI/RWm/IbTzC+mQkCZEnTN/sI4CBpk4XQgiRKzKShLAIpVQHwA2jR14FjJZEFMa3YCGEyDWLXeJTSr2mlApWSiUqpQLuUW6YUuqIUuqWUirc1KVWEmXhZwe8i5GgNmDcf2qrtb5t1aiEEIWWxS7xKaX6YnQ37Qo4aa2HZ1NuNMb15V8Ad4z7Fau01h9YJBAhhBBFgsVaLlrrNQBKqRYYY6tlV+6TdKsXlVLLyL7rrhBCiGKqIFxaa4vRU8wspdRIjN5BODk5Na9SpUp+xZUjqamp2NhIZ8j7kfOUMxcuXEBrzaOPyrBw95Pfn6nIhEicSzhT0s7cACYFV0H8v3f69OkorbX7/cpZNUEppUZgDOPyUnZltNaLgcUALVq00MHBwdkVtYqgoCB8fHysHUaBJ+cpZ3x8fIiOjiYkJMTaoRR4+fmZmrx9MrP3z2aCzwTebvd2vtRpKQXx/55S6lxOylktQSmlegPvY0zrEHW/8kIIYQ3zD8xn9v7ZjGkxhultp1s7nGLFKglKKfUU8DnQXWv92/3KCyGENSw7vowJ2ybQz6sf/+32XzKO5iTymsUSlKmruC3GWFklTHMJJZtGCk5frgPGKAN9tNaHsh5JCCEKhrDoMNpXa09gn0BK2NxvGEBhaZa8czYNY7yztzDmuIkHpimlHlVKxaabDGw6xvAwP5q2xyqlNlswDiGEeCgpqSkATGs7ja2Dt+Jg62DliIoniyUorfUMrbXKtMzQWp/XWrtqrc+byrXXWtuatt1dulkqDiGEeBihUaE0+KQBhy4aF3jsSpibOUTkh4LQzVwIIQqEiJgIugZ2Je5OHGUcsxswXeQXSVBCCAFEJ0TzVOBTXI
u/RtCwIGqVq2XtkIo9SVBCiGIvITmBXt/24lTUKTY9v4nmlZrf/0UizxWsx4uFEMJKyjuX55s+39D5sc7WDkWYSAtKCFFsaa2JuxOHi70Lq/uvluecChhpQQkhii2/3X60/rI10QnRkpwKIElQQohi6dPgT/Hb7UeLSi0o5VDK2uEIMyRBCSGKne9//54xm8bQo3YPFvdcLK2nAkoSlBCiWNlzbg/Pr3meVp6tWNlvJbY2ciu+oJIEJYQoVqqXqc4zdZ5h4/MbcbZztnY44h7kq4MQoli4HHuZ8s7l8Szpyar+q6wdjsgBaUEJIYq8K7ev8K+v/8UrG1+xdigiFyRBCSGKtJjEGLov787FWxcZ0XSEtcMRuSCX+IQQRVZSShK+3/ny66VfWfvcWtpUaWPtkEQuSIISQhRZozeOZvvf2/nqma/oWaentcMRuSQJSghRZL3U7CUaeTTihaYvWDsU8QAkQQkhipzfLv9GQ4+GtK7SmtZVWls7HPGApJOEEKJIWRKyhEafNmLtH2utHYp4SJKghBBFxqbTm3hx/Yt0qtGJ7rW7Wzsc8ZAkQQkhioSD4Qfpv6o/TR5pwppn12Bfwt7aIYmHJAlKCFHoXY+/To/lPahcsjI/DvoRNwc3a4ckLEA6SQghCr2yTmVZ0HUB3o96U8GlgrXDERYiLSghRKF1Pf46v4T/AsCQxkOoUaaGlSMSlmTRBKWUek0pFayUSlRKBdyn7BtKqUil1C2l1FdKKQdLxiKEKNoSUhLouaInTy17iuiEaGuHI/KApVtQEcC7wFf3KqSU6gq8BXQEqgI1AD8LxyKEKKKSU5OZ+cdMDlw4wOc9P6e0Y2lrhyTygNJaW/6gSr0LeGqth2ezfzlwVms9xbTeEVimtX7kXsd1c3PTzZs3z7Dt2WefZcyYMcTFxfH0009nec3w4cMZPnw4UVFR9OvXL8v+0aNH89xzz3HhwgWGDBmSZf+ECRPo2bMnoaGhvPJK1pGQe/bsyYQJEwgJCeH111/Psn/WrFm0adOG/fv3M2XKlCz7/f39adKkCTt27ODdd9/Nsv+zzz6jTp06bNiwgXnz5mXZv3TpUqpUqcLKlSv55JNPsuxfvXo15cuXJyAggICAgCz7f/zxR5ydnfn444/57rvvsuwPCgoCYO7cuWzcuDHDPicnJzZv3gzAzJkz+emnnzLsL1euHN9//z0AgwYN4uLFixn2e3p6EhgYCMDrr79OSEhIhv21a9dm8eLFAIwcOZLTp09n2N+kSRP8/f0BGDx4MOHh4Rn2t27dmvfffx8AX19frl27lmF/x44dmT59OgDdunUjPj4+w/4ePXowceJEAHx8fMgsLz57ISEhJCcn06JFi/t+9qZNm0anTp2K3WdPozlT/wwXK1zk46c/JmpL1D0/e//+9785cOBAhv3F6bPXqVMnSpfOmMAf9u/ew372du/efURr3SLLjkys1UmiPvBDuvVjgIdSqpzWOsO/pFJqJDASwM7OjujojE3506dPExQUREJCQpZ9AKdOnSIoKIibN2+a3X/y5EmCgoK4cuWK2f2//fYbbm5unD9/3uz++Ph4goKC+Ouvv8zuP3r0KElJSZw4ccLs/uDgYKKjozl27JjZ/b/88guXLl3it99+M7v/wIEDnDlzhpMnT5rdv2/fPkqVKsWpU6fM7v/5559xdHTk9OnTZvff/SNx5syZLPvvvneAsLCwLPtTU1PT9iclJWXZb2dnl7Y/PDw8y/6IiIi0/REREVn2h4eHp+2/fPlylv3nz59P23/16lVu3bqVYX9YWFja/uvXr5OYmJhh/5kzZ9L2mzs3efHZS05ORmtNdHT0fT97x44dw9bWtth99m543uBihYsMqDiAerfr8U3YN/f87Jk7f8Xps5eSkpKlzIP83dPahtRUF1JSnNm69QJ//nmEv/++xPnz9UlNdUBrR1JTHUhNdeTDD6FMmbNcvOjAyZMjSU11JDXVEa0dSE11AJ7MUqc51mpBnQFe1VpvMa3bAUlAda312eyO26JFCx0cHGzxeB9GUFCQ2W84IiM5Tznj4+NDdHR0lm/04h8pqSms+n0VHlc9aN++vbXDKfCCgoJo186H27fh2jW4ft1Y0v9+4wbExBjLrVv//J5+PS7OklGpAt2CigVKplu/+3uMFWIRQhQCG09vpLFHY6qUqsKABgPSWhjFVXw8XL4MkZH//Ey/REUZSSgysg2xsXDnzsPX6eb2z+LqCs7O4OT0z5J5PbttPXrkrD5rJaiTQGPg7oXnxsDlzJf3hBACYMffO+i7si++Xr6s8F1h7XDyXFIShIfDuXNw/ryx3P39wgW4dAlu3szp0YwRNZycoGxZKFfO+Hl3KVcOSpeGkiX/ST7mfndxAZuH6FZ3+vRpzp8/T6dOnXL8GosmKKWUremYJYASSilHIFlrnZyp6DdAgFJqGUbPv2lAgCVjEUIUDUcijtBnZR/qlq/LJ92zdsYorKKj4c8/jeX0aePnmTNGEoqMhPvdfbGzAw8PeOSRf5b06+XLG8knNHQ/3bu3wckpf96XOcuXL+eFF16gWbNm1ktQGInmP+nWBwN+SqmvgN8BL631ea31FqXUbGAX4AR8n+l1QgjBX9f/4unlT1POqRxbBm8pdN3JtTYuv/32m7GcOAGhoUYyuno1+9eVKAGVK8Ojj0LVqsbPu0uVKsa+MmVAqfvHcO1aktWSU0JCAmPGjGHlypUkJSWRmpqaq9dbNEFprWcAM7LZ7Zqp7HxgviXrF0IULW9uf5OU1BS2Dt5KJbdK1g7nnpKTjQR05AgcP/5PUoqKMl/eyQlq1TKW2rWNnzVrQrVqULEi2Bbygej+/vtvnn76ac6fP5/WjT63nfIK+SkQQhRlAb0COHfzHHXK17F2KBloDWFhcOjQP8vRo0bHhcxKloQGDaBhQ2OpV89ISJUqPdw9nYJszZo1DBs2jLi4uAytJqu2oIQQ4mElJifywd4PmOQ9iVKOpWjk2MjaIZGcbCSg3bvh55/h4EHzLaPHHoOWLaFRo38S0qOP5uxSXFFw584d3njjDb766qssDx+DtKCEEIVYSmoKg9cOZvXvq2lRqYXVJh1MTTUS0o4dRlLauxdiYzOWcXeHxx//Z2nZ0uiUUFydP3+eHj168Ndff5lNTiAJSghRSGmtGbdlHKt/X83cznPzPTlduQLbtsGWLcbPzJ0YatWCdu2MxdvbuFdUXFpG97Np0yYGDhxIXFwcKSkp2ZaTS3xCiEJp1p5ZLDq8iImtJzKhzYR8qfOPP2DNGli3DjIPUlO1KnTpAh06QNu2xj0jkdWUKVPw9/fPttWUnrSghBCFzrW4a/j/4s+QRkP4sPOHeVaP1kYvuzVrYO1aOHXqn32Ojkbr6KmnjKVOHWkh5UR4eDhaa0qUKHHP1hNIghJCFELlnMtx6KVDeJb0xEZZvmvbn3/C0qUQGGj0vrurbFl45hno2xc6djSG5RG588033zB16lSmTZvGxo0bSUxMzDYRSYISQhQae87tYdfZXUxvO53qZapb9NhRUbBypZGYfvnln+0VK
xoJqW9f49JdYX/eqCCoU6cO3333HY0aNeLEiRPZlpMEJYQoFH67/BvPfPsMHi4evN7qdUo6lLz/i+5DawgKgk8/NS7h3R0g1dUVfH1hyBDw8TFGahCWtXPnTsLSN08x5oy7c+cOycnGaHfSSUIIUeCdiz7HU8uewtnOma2Dtz50coqOhiVLjMR0976SjY1xL2nIEOjdWy7f5bVJkyZx+/btDNsqVKiAj48PK1eu5M6dO9KCEkIUbFFxUXQN7MrtpNvseWEPVUtXfeBjnToF8+bBsmX/jOJQsSK8/LKxeHpaKGhxT7t37yY0NDTDNldXV2bPns2zzz7LzJkz8fPzy1FPv/QkQQkh8tXB8INcjLnIj8//SEOPhg90jH37YNq0Buzb98+2Tp1g9Gjo2dMY6VvknzfffDNL66ls2bL069cPgCpVqvDFF1/k+riSoIQQ+apH7R6cHXeWcs65G3YhNRU2bIDZs2H/foDyODjA8OHwxhtGt3CR//bs2cPJkyczbHNxceGDDz7A5iEHGyyiQxUKIQqSVJ3KyA0jWfPHGoBcJSet4YcfoEkT417S/v3GVBNDhpzl3DnjvpMkJ+uZNGkScZnmgy9TpgzPPvvsQx9bEpQQIs+9teMtPj/6Ob9f/T3Hr9HaGHbo8ceNxPTbb8Y9JX9/Y1K/ESPO4uGRh0GL+9q/fz/Hjx/PsM3V1ZX333+fEhboKimX+IQQeWre/nnM2T+HV1u+ytQnp+boNbt3w7RpxiCtYMwUO3Wq0fHB0TEPgxW5Yq71VLJkSQYMGGCR40uCEkLkmcDjgUzcPpH+Xv1Z+NRC1H3GDvrrL5g40bikB8bo4JMnw6uvSjfxguaXX37h119/zbDN1dWVWbNmYWuhp58lQQkh8syxyGO0r9aepX2WUsIm+0s+N2/Cu+/CwoXGw7UuLjBpErz+ujHhnyh4Jk+enKX15OLiwqBBgyxWhyQoIYTFpepUbJQNszvPJiklCQdbB7PlUlLgq6+My3lXrhjbhg+HWbOM55lEwRQcHMyhQ4cybHN1deW9996zWOsJpJOEEMLCQqNCafpZU367/BtKqWyT06+/whNPwMiRRnLy9obDh+HrryU5FXSTJ08mISEhwzYnJyeGDh1q0XokQQkhLObirYt0CezCpZhLONk5mS0TF2dcvmvZ0pj6okoV+PZb2LMHWrTI54BFrv36668cOHAgw7BFLi4uzJw5EzsLPyEtl/iEEBYRnRDNU8ue4nr8dYKGBVGzbM0sZbZtg1GjjCkvbGyMe0wzZxqDuYrC4a233jLbenrhhRcsXpckKCHEQ4u/E88zK54hNCqUHwf9SPNKzTPsv34dxo0z5mMCaNQIvvjCaEWJwuP48ePs2bMnS+vJz88Pe3t7i9dn0Ut8SqmySqm1SqnbSqlzSqnnsynnoJT6VCl1WSl1XSm1QSlV2ZKxCCHyT3JqMo62jizts5RONTpl2LdtGzRoYCQnR0f44ANjenVJToXPW2+9RWJiYoZtDg4OvPjii3lSn6VbUIuAJMADaAJsUkod01qfzFRuHNAaaATcBBYD/wP6WjgeIUQe0lqTmJKIm4MbWwdvzfCcU1yc8QzTRx8Z697eEBAANbNe+ROFwPXr19myZUuW1tOMGTNwcDDfEeZhWawFpZRyAXyB6VrrWK31XmA9MMRM8erAVq31Za11ArASqG+pWIQQ+cNvtx8+AT7EJMZkSE5HjkDz5kZysrU1uo3v3i3JqTArW7Ys27dvp2nTpri4uABgb2/Pyy+/nGd1WrIFVRtI1lqfTrftGNDOTNkvgYVKqUpANDAI2GzuoEqpkcBIAA8PD4KCgiwY8sOLjY0tcDEVRHKeciY6OpqUlJRCca5+iPgB/z/96fZIN4L3B6OUIjUVVqx4lK+/rkZKig1Vq95mypQ/qF07lj17LFu/fKZyxpLnqUSJEsyfP5+QkBA+//xzunXrxsGDBy1ybLO01hZZgCeByEzbXgaCzJQtBXwLaCAZ+BUoe786mjdvrguaXbt2WTuEQkHOU860a9dON27c2Nph3Neqk6u0mqF0j+U99J2UO1prra9e1fqpp7Q2hnnV+v/+T+u4uLyLQT5TOVMQzxMQrHOQVyzZSSIWyDwoSUkgxkzZRYADUA5wAdaQTQtKCFGw7D67m0FrBtG6SmtW9luJrY0tBw5A06bG6OPlysGPPxrDFjmZfxRKiByxZII6DdgqpWql29YYyNxBAowOFAFa6+ta60SMDhKPK6XKWzAeIUQeqFyyMl0e68KGgRtwsnXG3x/atoXwcGjd2hghols3a0cpigKLJSit9W2MltA7SikXpZQ30AtYaqb4YWCoUqqUUsoOGANEaK2jLBWPEMKyouKi0FpTs2xNNgzcQImksvTvb8xmm5xs/AwKMkaGEMISLD3U0RjACbgCrABGa61PKqWeVErFpis3EUgA/gSuAk8DfSwcixDCQq7cvkLrL1szfut4AP78E1q1gu+/N0YbX70a5s+HPHhWUxRjFn0OSmt9HehtZvsewDXd+jWMnntCiAIuJjGGp5c9zcVbF3m2/rP89BP07w83bhgP4K5dK93HCyIfHx8aNGhAv379rB3KA5PBYoUQ2UpKSaLvd30JiQzhu36rOPJDa7p2NZJTz56wf3/RSk5Xr15lzJgxVKtWDQcHBzw8POjYsSPbt2/P0euDgoJQShEVlX93KwICAnA1M5jhmjVreP/99/MtjrwgY/EJIbL18oaX2fH3Dj5/egkb5ndn8WJj+7//bUwwaFPEvuL6+voSFxfHl19+Sc2aNbly5Qq7d+/m2rVr+R5LUlLSQ41vV7ZsWQtGYx1F7OMlhLCkwQ0HM7PVIpZNGsrixeDgAMuWGSNDFLXkFB0dzZ49e/jggw/o2LEjVatWpWXLlkycOJEBAwYAEBgYSMuWLXFzc6NChQr079+fixcvAnD27Fnat28PgLu7O0ophg8fDhiX21577bUM9Q0fPpwePXqkrfv4+DB69GgmTpyIu7s73t7eAMyfP59GjRrh4uJC5cqVeemll4iOjgaMFtsLL7zA7du3UUqhlGLGjBlm66xWrRrvvvsur7zyCiVLlsTT05M5c+ZkiOn06dO0a9cOR0dH6tSpw48//oirqysBAQGWOcm5VMQ+YkIISwiNCgWgtm1nlr8xhqAgYxLBPXvgebNDQBd+rq6uuLq6sn79+izTSdyVlJSEn58fx44dY+PGjURFRTFw4EAAqlSpwvfffw/AyZMnuXTpEgsXLsxVDIGBgWit2bNnD9988w0ANjY2+Pv7c/LkSZYvX86hQ4cYO3YsAG3atMHf3x9nZ2cuXbrEpUuXmDhxYrbHX7BgAQ0bNuTo0aNMnjyZSZMmceDAAQBSU1Pp06cPtra2HDx4kICAAPz8/LIMDpuf5BKfECKDgJAAXlz/Iv9t8jPvvuJNZKTRGWLzZvD0tHZ0ecfW1paAgABefvllFi9eTNOmTfH29qZ///488cQTAIwYMSKtfI0aNfjkk0+oV68e4eHheHp6pl1Wq1ChAuXL5/6xzurVqzNv3rwM215//fW0
36tVq8bs2bPp1asXS5Yswd7enlKlSqGU4pFHHrnv8bt06ZLWqho7diz//e9/+emnn2jdujXbt28nNDSUbdu2UbmyMbnEggUL0lpy1iAtKCFEmk2nN/HS+pdocvtN3hrUhshI8PExWk5FOTnd5evrS0REBBs2bKBbt27s37+fVq1aMWvWLACOHj1Kr169qFq1Km5ubrQwTQF8/vx5i9TfvHnzLNt27txJ586d8fT0xM3Njb59+5KUlERkZGSuj9+oUaMM65UqVeLKlSsAnDp1ikqVKqUlJ4CWLVtiY8VruZKghBAAHLhwgP6r+lPl7FSOz3+f2FjFgAHG8EWlS1s7uvzj6OhI586defvtt9m/fz8vvvgiM2bM4ObNm3Tt2hVnZ2eWLl3K4cOH2bJlC2Bc+rsXGxubDNNUANy5cydLubujhN917tw5unfvTr169Vi1ahVHjhxrAd4+AAAgAElEQVThq6++ylGd5mSekt0Y4Dc118fJL5KghBBcuX2FHit64PzLO5z92o/kZMWbbxodIvJoqp9Cw8vLi+TkZEJCQoiKimLWrFm0bduWunXrprU+7rrb6y4lJSXDdnd3dy5dupRh27Fjx+5bd3BwMElJSSxYsIDWrVtTu3ZtIiIistSZub4HUbduXSIiIjIcPzg42KoJTBKUEAJ35wq0OLmdaxsmopQx0Ovs2UWvp969XLt2jQ4dOhAYGMjx48cJCwtj1apVzJ49m44dO+Ll5YWDgwMfffQRf//9N5s2bWL69OkZjlG1alWUUmzatImrV68SG2sMoNOhQwc2b97M+vXrCQ0NZfz48Vy4cOG+MdWqVYvU1FT8/f0JCwtjxYoV+Pv7ZyhTrVo1EhIS2L59O1FRUcTFxT3Q++/cuTN16tRh2LBhHDt2jIMHDzJ+/HhsbW0zzPWVn4rRx08Ikdn1+OuEXDrOmDGw7Ztm2Noarab/+z9rR5b/XF1dadWqFQsXLqRdu3bUr1+fKVOm8Pzzz7Ny5Urc3d1ZsmQJ69atw8vLCz8/P+bPn5/hGJUrV8bPz4+pU6fi4eGR1iFhxIgRaYu3tzdubm706XP/0d0aNWrEwoULmT9/Pl5eXnzxxRfMnTs3Q5k2bdowatQoBg4ciLu7O7Nnz36g929jY8PatWtJTEzk8ccfZ9iwYUydOhWlFI6Ojg90zIeWkzk5Csoi80EVXnKeciY/54O6nXRbt/rsX9qh2XcatHZw0HrDhnyp2iLkM5UzD3OeQkJCNKCDg4MtF5DO+XxQ0s1ciGIoOTWZfisGcXDBeDjVB1dXWL8eTM+ZimJq7dq1uLi4UKtWLc6ePcv48eNp3LgxzZo1s0o8kqCEKGa01ryw6jU2+42Gv7tQpozxjJPpUR9RjMXExDB58mQuXLhAmTJl8PHxYcGCBVa7ByUJSohi5rMDgQROHgDnfPDwgG3bINPjMaKYGjp0KEOHDrV2GGkkQQlRjMTGwvK3BsE5GypV0uzapahd29pRCWGe9OITophY/etWOnVJYs8eGypXht27JTmJgk1aUEIUAz8c28mzvV3R5+3x9IRdu4rWPE6iaJIEJUQRtzv0V/o+44w+34rKnqkEBdnw2GPWjkqI+5NLfEIUYSHn/qJT1zuknm9FJc9kft4tyUkUHpKghCii4uOhY7dYks89TsXKd9iz25YaNawdlRA5JwlKiCIoMRH69oXrfzShvMcdfg6yk+QkCh1JUEIUMbHxiTTvEsqWLVC+POzeaScdIkShJAlKiCIk6U4KXp0Pc/LnOriWvMP27eDlZe2ohHgwFk1QSqmySqm1SqnbSqlzSqnn71G2mVLqZ6VUrFLqslJqnCVjEaK4SUnRNOnxCxf2/QsH5yR2bLOjSRNrRyXEg7N0N/NFQBLgATQBNimljmmtT6YvpJQqD2wB3gBWA/ZAMZhQWoi8oTV49w/mj21tsHVIYttmexlbTxR6FmtBKaVcAF9gutY6Vmu9F1gPDDFTfDywVWu9TGudqLWO0Vr/YalYhChOtIax42P5ZW1LbGzvsHG9LW3bWjsqIR6eJVtQtYFkrfXpdNuOAe3MlG0F/KaU2g/UBH4BXtVan89cUCk1EhgJ4OHhQVBQkAVDfnixsbEFLqaCSM5TzkRHR5OSkpKrcxUY+ChfflmDEiVS+c9/TuJgH01xONXymcqZwnyeLJmgXIFbmbbdBNzMlPUEmgGdgd+A2cAKwDtzQa31YmAxQIsWLbSPj4/lIraAoKAgClpMBZGcp5wpXbo00dHROT5Xkz74ky+/rIFSsGyZDc89V3xuOslnKmcK83myZIKKBUpm2lYSiDFTNh5Yq7U+DKCU8gOilFKltNY3LRiTEEXWgq/OMWeK8XDT/IWJPPecg5UjEsKyLNmL7zRgq5SqlW5bY+CkmbLHAZ1uXZspI4TIxooNkYwf+QjoEoz/dzSvj5XkJIoeiyUorfVtYA3wjlLKRSnlDfQClpop/jXQRynVRCllB0wH9krrSYj727n/BoOfdYUUBwaMuMbc90pbOyQh8oSlH9QdAzgBVzDuKY3WWp9USj2plIq9W0hrvROYAmwyla0JZPvMlBDC8Ndf0LenE6kJrnToeZVln5fDSrNxC5HnLPoclNb6OtDbzPY9GJ0o0m/7BPjEkvULUZRdugRdusDN64607ZDI5tXu2MhYMKIIk4+3EIXA9RupNGwTTlgYtGwJG9c5YG9v7aiEyFuSoIQo4OLjoXHbs1w760n5R6PYtAnczD28IUQRIwlKiAIsORladv2T8BM1cCl3g8O7y+Hubu2ohMgfkqCEKKC0ho79/+LknlrYucSyf1dJqlWTHhGi+JAEJUQBNWUK/LyuJjb2CWzfbE+jhiWsHZIQ+UoSlBAF0Lx5mg8+AFtbzerV0O5J6REhih9JUEIUMJfjujBxonEp7+uvFX16Olo5IiGsQxKUEAVI5M3mRP41C4Ap711l8GArBySEFUmCEqKA2LLrFqEnZoK25YWxkbw3RbrrieJNEpQQBUDwrwn07KEg2Rm3Siv5cuEj1g5JCKuTBCWElZ07B890tyc5zg3XSjuoXn62jK8nBBYei08IkTtXrmg6d4FLl2xo106TmjqbW7dSrB2WEAWCJCghrCQmBpq2jSDidGUaNkrlhx9s6NUrKUu59evXExISQsOGDalfvz6PPfYYJUrIM1Gi6JMEJYQVJCbC450uEBFaBTePK2zd4k6pUubLnjlzBj8/P1xdXUlJSSEpKQlPT08aNmzI448/ToMGDahfvz7Vq1eXxCWKFElQQuSzlBTweSacU4eq4FDqBof3lKVixexvOo0ePZp3332X69evp20LCwsjLCyMH3/8EWdn5wyJq1GjRjz++OMMGDCAGjVq5MdbEiJPSCcJIfKR1tB7yEUObvOkhFMMQTscqVPr3t8THR0deffdd3FxccmyLzk5mVu3bnH79m3u3LlDWFgYP/zwA9OmTePAgQN59TaEyBeSoITIR9OmwcYVlbGxS+SH9ZpWLZxy9LqXXnoJV1fX+xcE7O3t6dq1K88/L5NUi8JNEpQQ+eS92XHMmgUlSsC67x3
o3qlkjl9rZ2fHhx9+aLYVlVnJkiVZtmwZSvqqi0JOEpQQ+eCjz28xbbIzAF99BT175v4YgwcPpmzZsvcs4+DgQK1atR4kRCEKHElQQuSx79bEMXaUkZzG/ecsQ4c+2HFKlCjB3Llz79mKSkxM5MiRI9SpU4e9e/c+WEVCFBCSoITIQz/tusPAASUg1ZbnRv+F/4xqD3W8fv36UbFixXuWSUpKIioqii5dujB9+nRSUuTBX1E4SYISIo+EhEC3HndIveOAj28oKxbVfOhj2tjYsGDBgiytKEfHrFNyxMfHM3/+fJ544gnCw8Mfum4h8ptFE5RSqqxSaq1S6rZS6pxS6p7diJRS9kqpP5RS8r9HFCl//gldu8KdOGcadwhlx8o6Fhtfr3v37lSvXj1t3dnZmVdffRVXV1dsbDL+l46LiyMkJAQvLy/WrVtnmQCEyCeWbkEtApIAD2AQ8IlSqv49yr8JXLVwDEJYVUQEdOh0hytXoHNn+OXHOlhygAelFP7+/jg7O+Pk5MTAgQOZO3cuJ06coGHDhjg7O2con5KSQkxMDIMGDeKll14iPj7ecsEIkYcslqCUUi6ALzBdax2rtd4LrAeGZFO+OjAYeN9SMQhhbVeuQAvvaMLP21Gv8S3WrAEHB8vX07FjR+rXr0/FihX53//+B0DVqlUJDg5m7NixODllfb4qLi6O5cuX06BBA37//XfLByWEhSmttWUOpFRTYJ/W2jndtolAO611lk61SqmNwJfADSBQa+2ZzXFHAiMBPDw8mn/77bcWiddSYmNjc/wAZXFWHM7TrVu2jBpXi0tnPXCq+BdLF4VTrkzujvH666+TkpKSlnTu5e7QR+a6noeEhPD2228THx9PcnJyhn1KKezt7RkzZgw9e/YstM9LFYfPlCUUxPPUvn37I1rrFvctqLW2yAI8CURm2vYyEGSmbB9gs+l3HyA8J3U0b95cFzS7du2ydgiFQlE/Tzdval2v8S0NWjt6nNVnzsc+0HHatWunGzdubJGYrl69qjt06KCdnZ01kGVxdnbW3bt31zdu3LBIffmtqH+mLKUgnicgWOfgb74l70HFApkfjS8JxKTfYLoUOBv4PwvWLYTV3L4NXbol8scxN2zLXeDgz67UqHL/ER/yWvny5dmxYwfvvfdetpf8duzYQe3atdm/f78VIhTi3iyZoE4Dtkqp9I+xNwZOZipXC6gG7FFKRQJrgIpKqUilVDULxiNEnktIgN694Zf9DpRyj+GnHdC4djlrh5VGKcXrr7/OgQMHqFKlSpbu6ImJiVy9epVOnToxY8YMeWZKFCgWS1Ba69sYyeYdpZSLUsob6AUszVT0BFAFaGJaXgIum36/YKl4hMhrSUnQu28SO3ZAhQrwyx432japYu2wzGrcuDF//PEHvr6+WXr5gfHM1Jw5c2jTpg0XL160QoRCZGXpbuZjACfgCrACGK21PqmUelIpFQugtU7WWkfeXYDrQKppXb6+iUIhORkGPp/M1s32KOcbbNycQJ061o7q3lxcXAgMDOSLL77I9pmpo0eP4uXlxfr1660UpRD/sGiC0lpf11r31lq7aK0f1VovN23fo7U2241Eax2ks+nBJ0RBlJICw19IZc33tuBwkw8CjtKyWdaRHAqqgQMHcvz4cerXr5+lNXV3fqmBAwfyyiuvkJCQYKUohZChjooUHx8fXnvtNWuHUaSlpMALL2iWBdqAXSwTF+1iUv+O1g4r16pXr86RI0cYM2ZMth0oli5dSsOGDTl16pQVIhRCEhRXr15lzJgxVKtWDQcHBzw8POjYsSPbt2/P0etDQkJQShEVFZXHkf4jICDA7HMNa9as4f335bnnvJKSAsOGwdKlCuxiGT73O+a82NvaYT0wOzs75syZw4YNGyhTpgx2dnYZ9sfHx3PmzBmaN2/OF198cfcRESHyTbFPUL6+vhw6dIgvv/yS06dPs3HjRrp168a1a9fyPZakpKSHen3ZsmVxc3OzUDQiveRkGDoUli0DV1fNm5/8xFdjX7B2WBbRsWNHQkND8fb2znLJT2tNXFwc48aNo1evXty8edNKUYpiKScPSxWUxdIP6t64cUMDevv27dmWWbp0qW7RooV2dXXV7u7uul+/fjo8PFxrrXVYWFiWhx+HDRumtTYeuHz11VczHGvYsGG6e/fuaevt2rXTo0aN0hMmTNDly5fXLVq00FprPW/ePN2wYUPt7OysK1WqpF988cW0hyl37dqVpc7//Oc/ZuusWrWqnjlzph45cqR2c3PTlStX1rNnz84QU2hoqG7btq12cHDQtWvX1ps2bdIuLi7666+/fqBzmp2C+LBgTt25o/XAgVqD1q6uqXrv3ryry5IP6uZWamqqnjt3rnZycjL7YK+Dg4P28PDQBw4csEp8mRXmz1R+KojnCSs8qFvouLq64urqyvr167O9GZyUlISfnx/Hjh1j48aNREVFMXDgQACqVKmCn58fACdPnuTSpUssXLgwVzEEBgaitWbPnj188803gDGlgr+/PydPnmT58uUcOnSIsWPHAtCmTZu0gUIvXbrEpUuXmDhxYrbHX7BgAQ0bNuTo0aNMnjyZSZMmceDAAQBSU1Pp06cPtra2HDx4kICAAPz8/EhMTMzVeyjKkpNhyBBYsQKwj6Hj9Ll4e1s7qryhlGLChAns27ePypUrm31m6vLly3To0IGZM2eSmppqpUhFsZGTLFZQlrwY6mj16tW6TJky2sHBQbdq1UpPmDBBHzx4MNvyf/zxhwb0hQsXtNZaL1iwQAP66tWrGcrltAXVsGHD+8a4efNmbW9vr1NSUrTWWn/99dfaxcUlSzlzLagBAwZkKFOzZk09c+ZMrbXWW7Zs0SVKlEhrEWqt9b59+zQgLSitdUKC1n36GC0nHG7qxyYO0dHx0XlapzVbUOnFxMToAQMG3HOYpFatWumIiAirxVgYP1PWUBDPE9KCyhlfX18iIiLYsGED3bp1Y//+/bRq1YpZs2YBcPToUXr16kXVqlVxc3OjRQtjfMPz589bpP7mzZtn2bZz5046d+6Mp6cnbm5u9O3bl6SkJCIjI3N9/EaNGmVYr1SpEleuXAHg1KlTVKpUicqVK6ftb9myZZbnY4qj27fhmWdg7VpQjjd5ZPQwfn77A0o5lrJ2aPnC1dWVFStWsHjxYlxcXMw+MxUcHEzdunXZtGmTlaIURZ38JcKYjbRz5868/fbb7N+/nxdffJEZM2Zw8+ZNunbtirOzM0uXLuXw4cNs2bIFuH+HBhsbmyy9nu7cuZOlXOaZUc+dO0f37t2pV68eq1at4siRI3z11Vc5qtOczD2zlFJyaeY+oqONyQa3bQM7txuUGtWb3dM+pJJbJWuHlu8GDRrE8ePHqVevXrbPTPXv359XX31VLg0Li5MEZYaXlxfJycmEhIQQFRXFrFmzaNu2LXXr1k1rfdxla2sLkGUMM3d3dy5dupRh27Fjx+5bd3BwMElJSSxYsIDWrVtTu3ZtIiIiMpSxt7e3yJhpdevWJSIiIsPxg4ODi3UCu3oVOnSAffvA0xO270
rkp0nzqF2utrVDs5oaNWrw66+/MnLkSLPPTMXHx7N48WK2bdtmhehEUVasE9S1a9fo0KEDgYGBHD9+nLCwMFatWsXs2bPp2LEjXl5eODg48NFHH/H333+zadMmpk+fnuEYHh4eKKXYtGkTV69eJTY2FoAOHTqwefNm1q9fT2hoKOPHj+fChfsPNVirVi1SU1Px9/cnLCyMFStW4O/vn6FMtWrVSEhIYPv27URFRREXF/dA779z587UqVOHYcOGcezYMQ4ePMj48eOxtbUttHMEPYzwcGjbFn79Fcp7RrP75xTaNX+EZhWbWTs0q7Ozs2PBggWsW7eO0qVLZ2iZ29nZ0aZNG7p3727FCEVRVKwTlKurK61atWLhwoW0a9eO+vXrM2XKFJ5//nlWrlyJu7s7S5YsYd26dXh5eeHn58f8+fMzHMPd3R0/Pz+mTp2Kh4dH2kgOI0aMSFu8vb1xc3OjT58+942pUaNGLFy4kPnz5+Pl5cUXX3zB3LlzM5Rp06YNo0aNYuDAgbi7uzN79uwHev82NjasXbuWxMREHn/8cYYNG8bUqVNRSmXpwVXUhYbCk0/CqVNQ6tFzRD1Xlwtqr7XDKnC6dOlCaGgorVq1Srvk5+LiwqpVq+TepbC8nPSkKCiLTFiY90JCQjSgg4ODLXrcgnye9u7VumxZo7eeR52/NZPK6Pn751slloLSi+9+UlJS9Icffqjt7Oz0tm3brBJDQf5MFSQF8TyRw158ttZOkMK61q5di4uLC7Vq1eLs2bOMHz+exo0b06xZ8bistXYtPP+8Ma9T7danOd2+KZN8XuON1m9YO7QCzcbGhkmTJjFu3DgcHBysHY4ooqRNXszFxMTw2muv4eXlxaBBg6hXrx5bt24tFvegPvoIfH2N5DTohVjOP9WcYS3780GnD6wdWqEhyUnkJWlBFXNDhw5l6NCh1g4jX6Wmwr//DXdv3b37LkyZ4sqkK/uoV75esUjOQhQGkqBEsRIXBy+8AN99B7a28OYHoTzSfi9KvUgjj0b3P4AQIt/IJT5RbISHGz31vvsO3Nzgv0v/5uM7TzDvwDwSkmViPmupVq1alp6qQoC0oEQxcfAg9OkDkZHw2GPwSeBFhu37F672rmwZvAVH2+LVrT6/DR8+nKioKDZu3Jhl3+HDh7OMqCIEFIMWVGRkJN26dWPZsmUyFEsxtXQp+PgYyal9e9i0M4rXgjsQnxzP1sFbebTUo9YOsVhzd3fPMoySNTzsfGzC8op8gvr444/ZsWMHo0aNwt3dnTfeeCPLEESiaEpJgcmTjYkGExNhzBjYuhUOXN/IhZsX2DhwI/Ur1Ld2mMVe5kt8SikWL15M//79cXFxoUaNGgQGBmZ4zcWLF3nnnXcoU6YMZcqUoXv37vz5559p+8+cOUOvXr145JFHcHFxoVmzZllab9WqVWPGjBmMGDGC0qVLM2jQoLx9oyLXinSCSk5OZtGiRSQnJxMbG0tMTAyLFi1iyZIl1g5N5LHLl6FLF6OnXokS8PHHsGgR2NnB8CbDCX0tFO9Hi+jETkXAO++8Q69evTh27BjPPfccI0aMSJtBIC4ujvbt22Nvb8/u3bs5cOAAFStWpFOnTmnDfsXGxtKtWze2b9/OsWPH8PX1pW/fvpw6dSpDPfPnz6du3boEBwenzWAgCo4inaA2bdqUZQRxGxsbBg8ebKWIRH74+Wdo2hR27gQPD9ixA14ZlcprP77G/gv7AahSqoqVoxT3MmTIEAYPHkzNmjWZOXMmtra2/PzzzwB8++23aK2ZPHkyjRo1om7dunz22WfExsamtZIaN27MqFGjaNiwITVr1mTq1Kk0a9aM1atXZ6inXbt2TJo0iZo1a1KrVq18f5/i3op0gpo9ezYxMTEZtj355JN4enpaKSKRl1JT4YMPjPtMly79M/Crjw9M2j6JRYcX8fO5n60dpsiB9POY2dra4u7unjaTwJEjRwgLC+Ppp59OmxW7VKlS3LhxgzNnzgBw+/ZtJk2ahJeXF2XKlMHV1ZXg4OAs87jdnd9NFEwW7cWnlCoLfAl0AaKAf2utl5sp9yYwDKhqKvex1nqOJWP5+++/OXr0aIZtbm5u95weXRRe16/DsGFw9zbDW2/BzJnGs05z989l3oF5vNbyNSZ7T7ZuoCJH7jWPWWpqKk2aNOGNN97giSeeyFCubNmyAEycOJEtW7Ywd+5catWqhbOzM0OHDs3SEUJ6DxZslu5mvghIAjyAJsAmpdQxrfXJTOUUMBQ4DjwGbFNKXdBaf2upQP73v/9lmTPJ2dmZzp07W6oKUUDs3Gkkp/BwKFMGvvkGevQw9n1z7Bve3P4mz9Z/Fv+n/GWUiCKgWbNmrFixglKlSlGzZk2zZfbu3cvQoUPx9fUFICEhgTNnzlC7dvGd16swsliCUkq5AL5AA611LLBXKbUeGAK8lb6s1jr9/BChSqkfAG/AIgkqMTGRL7/8MsP9JycnJ8aNGydTAhQhCQkwZQosWGCsP/EEfPstVKtmrGut2fzXZjpU78A3vb+hhE0Jq8Uq4NatW4SEhGTYVrp06VwfZ9CgQcydO5epU6fi5ubGo48+yoULF/jhhx8YNWoUtWrVonbt2qxdu5ZevXphZ2eHn58fCQnyMHZhY8kWVG0gWWt9Ot22Y0C7e71IGV9pnwQ+y2b/SGAkGJMDBgUF3TeQHTt2kJycnGFbcnIy9erVy9HrcyM2NtbixyyKLH2e/vrLhffe8+LsWRdsbDRDh55l8ODznD2rOXvWSE5KKV4q+xJJpZM4sPeAxerOS9HR0aSkpBS5z1RkZCR79uyhadOmGba3bds2rXWT/j2fPHmS8uXLp61nLvP+++/z8ccf07t3b27fvk25cuVo0qQJv//+OxcvXqR///7MmTMHb29vXF1d6devH15eXkRGRqYdw1y9RVGh/huVkzk5crJgJJnITNteBoLu8zo/jETmcL86cjofVJMmTTSQtiildK9evXI6VUmuFMS5VgoiS52n5GStP/xQazs7Y/6mWrW0/uWXjGX+uPqHbvd1O33h5gWL1JmfCst8UAWB/N/LmYJ4nrDCfFCxQMlM20oCMWbKAqCUeg3jXtSTWmuLDPNw4sQJQkNDM2xzdnaWzhFFwPHj8PLLcOiQsT56NMyZA+nvc1+8dZGugV1JSE4gMVlGDhGiMLPkDZnTgK1SKv3DBI2BzB0kAFBKjcC4N9VRax1uqSD8/f2z9NQpX7483t7yUGZhFR9vTI/RvLmRnCpXNnrrffxxxuR0I/4GTy17ihvxN9gyaAuPlX3MekELIR6axRKU1vo2sAZ4RynlopTyBnoBSzOXVUoNAmYBnbXWf1sqhtu3b7N8+fIMvfecnZ2ZMGGC9N4qpH76CRo2NJ5vSkmBV1+F33+H7t0zlou/E88z3z7D6WunWTdgHU0rNjV/QCFEoWHpLm1jACfgCrACGK21PqmUelIpFZuu3LtAOeCwUirWtHz6sJUvX748Sy+91NTUYjchX1Fw6ZLRdbxTJzhzBurXh337jFlwS2a+kAzEJMUQmxTL0j5L6VC9Q
/4HLISwOIs+B6W1vg70NrN9D+Cabr26Jes1HZM5c+Zw+/bttG02Njb4+vpSqlQpS1cn8khCgtFtfNYsiI0FBweYPh3efBPs7bOW11qTqlOp4FKBwy8fxtZGZpARoqgoMg8FBQcHExERkWGbo6Mj48ePt1JEIje0hu+/By8v49mm2Fjo1QtOnICpU80nJ4D/BP0H3+98SUpJkuQkRBFTZBLUvHnziI+Pz7CtatWqNGvWzEoRiZwKDjbGz+vXD8LCoEEDY4DXdesgm4ECAFh0aBEzf55Jeefy2NnYZV9QCFEoFYkEdePGDX744Ye0sboAXF1dpWt5AXf8uDHLbcuWsHs3lCtn9Mz79Vfo2PHer111chVjN4/lmTrP8GmPT6UTjBBFUJG4JhIQEJClc4TWmgEDBlgpInEvp07BjBmwcqWx7uQEY8caA7yWKXP/1+8M28ngtYPxftSbb32/lUt7QhRRhf5/ttaa+fPnp01UBsbw/EOGDCkQ00iLf/z+O3z4IQQGGlNj2NsbD9u+9RY88kjOj+Ni50Jrz9asfW4tTnZOeRewEMKqCn2C2r17N9HR0Rm22dnZMW7cOCtFJNLTGvbuhSlTGnDANByera0xIsTUqVAlF/MGxiTG4ObgxhOeT7Br2C65rCdEEVfo70HNmzeP2NjYDNu8vLyoW7eulSISYDxUu2YNtGljTBx44EB5HB1hzBgIDYVPP81dcroce5mmn8yIIesAAA6sSURBVDVl9j5jIHxJTkIUfYUqQcXHx7Nt27a0zhCXL19mx44dGcq4uroyadIka4QngCtXjMt4tWqBry8cPAhly8LQoWc5fx4WLYIaNXJ3zFuJt+i2rBsRMRG0rdo2bwIXQhQ4heoS37Vr1+jWrRsVKlRg3LhxREVFZSljY2ND795ZnhUWeejuZbxPPoHVq+HuNFzVqsH48TBiBBw+fBZ392q5PnZiciJ9V/bl+OXjrB+4nlaerSwauxCi4CpUCcrW1hYbGxsiIyN55513SEpKyjDunr29PSNHjsQ+u6c6hUVdvAjLl8OSJXDSNCSwUsZstqNHQ9euUOIh5gjUWjP8h+H8FPYTS3ov4elaT1smcCFEoVDoEpSDgwPJyclZHsoF477EkCFDrBBZ8REba9xbWrrUGMjVmNILPDzgpZeMzg9Vq1qmLqUUTz32FC0qtmBoYxlPUYjiptAlqBL3+EpeokQJnnjiCfr168f48eOzzN4pHkxsLGzZYgxFtH493O3Rb29vtJYGDzZGF7dkwzX8VjieJT0Z1mSY5Q4qhChUClUnCVtb23v23oqLiyMhIYHly5fTrFkzPv/883yMrmi5ft24dNerF7i7Q//+8O23RnLy9jZ64f1/e/cfXFV95nH8/eSGREJ+CGIR5IdIYV2pJUgKSyklitXQahU7Si21ZbsV1wIdpkut1nXGarvd6XRKO9aRUtktgsViS3fBiFVrg9KOsrCbqKwIZRHFEeVXIAmBEPLsH+deSWKSe0MunHNzP6+Z7+Sek++9eXLm5Dz53vO9z/fdd4OkNXNmepPTsv9exugHR/PynpfT96IiknEybgTV+p5TZ8455xwmTZrELbfcchai6h1aWqCmJhgpPf10sLRF60M9eTLceGPQujsLrzvWvrGWuU/O5aqLr9KaTiJZLuMSVPvVctsrKCjgpptu4pFHHiE3N6N+vbNuz56gBt4zz8Af/gDvvXfqe7FYsBbTzJlwww0wZMiZj+fPb/2ZWb+dxYTBE/jdzb8jL6bJLiLZLKOu4LFYjBOJOcwdKCgo4O677+aee+7RBzk7sHt3kJASbefOtt8fOhQqKoI2fTqce+5ZjK12N9etuo5hxcOo/FIlhXmFyZ8kIr1aRiUoM6OgoKDNooQJBQUFLF26lNmzZ4cQWfQcPgxbtsCmTafaO++07VNUBJ/6FFx5JcyYEazFFFZeH1YyjAUTFzCndA7n9zs/nCBEJFIyKkEBlJSUfChBFRYWsm7dOsrLy8MJKmQHD8KrrwZt8+YgGW3bdmoKeEJJCUydCtOmQXk5lJYGdfHCdODoAeqb6hlx7gi+d8X3wg1GRCIl4xJU//79P1g5Nzc3l/79+1NVVcWll14acmRn3tGjQR27RDJKtHYLCQPBrLrS0mCtpYkTgzZmDOREaN5mQ1MD1666lvcb3uf1ea/rnpOItJFxCWrgwIEA5OfnM2LECKqqqhg8eHDIUaXP8ePBqrLbt8OOHUFLPN6zp+PnFBTA2LFw2WVw+eVBMvr4xyE//+zG3h0nTp5g1m9nsemdTTxx0xNKTiLyIRmXoC644AJycnKYNGkSlZWVFBZmzs109+DtuLfeatt27z71eO/eD781l5CbC6NGBYmodRs5smclhc42d+e2dbdRuaOSJZ9bwo1/e2PYIYlIBGVcgpoyZQqFhYUsWbIkMtPIGxth//4gubRu773Xdvvdd09VYehMTk5QKmjMmKCNHh20MWOC/RH5lXvkwU0PsrxmOfdNu4/by24POxwRiaiMu9wtWLAg7a/Z0gINDXDkCNTVnWq1tcGI58CB4Gv7xwcPwr59U0ny0aw2ioqCRDN8eNBaPx4+PPi8UW9IQl2ZUzqHHMth3ifmhR2KiERYWi+FZjYAWAZcDewH7nb3X3fQz4B/Bb4e3/UIcJd7Z29uBZqb4c03gxFLoh09mtp2Q0OQdFonocTjdusddlOMvDwYODBYtjzRBg1qu53YV1LSk5+V2Z7f9TyTLpxEcX4x8yfODzscEYm4dP+v/hDQBAwCSoFKM6tx963t+s0FbgDGAQ48C+wClnT14jU1wf2WM6Ffv2B0U1wcfC0qCpLJeecFC+4lviZaYvu1116gouLToX1+KFNsPriZ7774XeZ9Yh6LKxaHHY6IZABLMmhJ/YXM+gGHgI+5+/b4vhXAO+5+V7u+fwF+5e5L49v/ANzm7l2uRpeTM97z8taTk9NELHacnJxT7dR2U3z72AePE9u5uQ3EYo3EYkeJxRrIzU08bsSs5bR+79raWs49myUXMlBdUR3V46rpe6wvpf9TSu7JXv4eZg9UV1fT3NxMWVlZ2KFEnv72UhPF47Rhw4Yt7p70JE/nlWIM0JxITnE1wLQO+o6Nf691v7EdvaiZzSUYcdGnTx8uuaSix4G2tASti6pJKTt58iS1tbU9f6Fe6ni/4/z1sr8Sa4ox4sUR1B/v0fupvV5zczPurnMqBfrbS00mH6d0JqhC4Ei7fYeBok76Hm7Xr9DMrP19qPgoaylAWVmZb968OX0Rp0FVVVXWVrBIxt2ZvGwy/Q/15ydjf8KXf/TlsEOKvPLycmpra6murg47lMjT315qonicUq2Vms4EVQ8Ut9tXDNSl0LcYqE82SUIyi5mxYuYKjhw/Qt32jk4DEZHOpbPwzXYg18xGt9o3Dmg/QYL4vnEp9JMMdKz5GL/c8kvcndHnjWbCkAlhhyQiGShtCcrdG4A1wP1m1s/MpgDXAys66P4o8C0zu9DMhgD/BPwqXbFIeE62nGT2mtnMfXIuL+15KexwRCSDpbt06DeAvsD7wCrgDnffamZT
zaz13fFfAOuAV4HXgMr4Pslg7s78p+az5vU1LL5mMZOHTQ47JBHJYGmd7+vuBwk+39R+/4sEEyMS2w7cGW/SSzzwwgMs2bKE70z5Dgv/bmHY4YhIhovQ4guSyXYe3Mn3X/g+Xx33VX44/YdhhyMivYA+MSlpMWrAKDZ+bSPjLxif8hRSEZGuaAQlPbLhzQ2s3roagIkXTqRPrE/IEYlIb6ERlJy2mr01fP7xzzOseBgzL5mp5CQiaaURlJyWXYd2UfFYBUV5RTw1+yklJxFJO42gpNv2NezjmpXXcKz5GBv/fiPDS4aHHZKI9EJKUNJtv9n6G94+8jbP3focYz/SYY1fEZEeU4KSbps/cT4zPjqDUQNGhR2KiPRiugclKWnxFhY+vZDqvUGVbSUnETnTlKAkKXdn0TOL+NnLP+PZnc+GHY6IZAklKEnqx3/5MYtfWsyCiQtY9MlFYYcjIllCCUq6tLx6OXc+dyc3j72Zn1b8VFUiROSsUYKSTrk7T/zvE0wfOZ1Hb3iUHNPpIiJnj2bxSafMjDWz1tB0son83PywwxGRLKN/ieVDtu3fxozHZrCvYR95sTwK8wqTP0lEJM00gpI29hzZw9UrrqbpZBN1TXWc3+/8sEMSkSylBCUfONR4iIqVFdQeq2XDnA1c3P/isEMSkSymBCUANJ5o5LpV17Hj4A7Wz17P+MHjww5JRLKc7kEJAAcaD7D/6H5WzlzJlSOvDDscERGNoLKdu+M4Q4uH8sodr5AXyws7JBERQCOorHfvn+5lzn/MobmlWclJRCJFCSqL/XzTz/nBiz8gP5ZPzGJhhyMi0oYSVJZavXU131z/Ta7/m+t5+NqHVcJIRCInLQnKzAaY2e/NrMHMdpvZl7ro+20ze83M6sxsl5l9Ox0xSOqe3/U8t/7+VqYMn8KqL6wiN0e3IkUketJ1ZXoIaAIGAaVApZnVuPvWDvoa8BXgFWAU8IyZve3uj6cpFknCMMqGlLH2i2vp26dv2OGIiHSoxwnKzPoBXwA+5u71wEYzWwvcCtzVvr+7/6jV5htm9p/AFEAJ6gxrPNFI3z59uWLkFWy8aKPe1hORSEvHCGoM0Ozu21vtqwGmJXuiBVfIqcAvuugzF5gb36w3szd6EOuZMBDYH3YQGUDHKXUDzUzHKjmdU6mJ4nEakUqndCSoQuBIu32HgaIUnnsfwX2wf++sg7svBZaebnBnmpltdveysOOIOh2n1OlYpUbHKTWZfJySTpIwsyoz807aRqAeKG73tGKgLsnrzie4F/U5dz9+ur+AiIj0TklHUO5e3tX34/egcs1stLvviO8eB3Q0QSLxnK8R3J/6tLvvST1cERHJFj2eZu7uDcAa4H4z62dmU4DrgRUd9Tez2cC/AJ9x9//r6c+PgMi+/RgxOk6p07FKjY5TajL2OJm79/xFzAYA/wZ8BjgA3OXuv45/byqw3t0L49u7gKFA67f1Vrr7P/Y4EBER6TXSkqBERETSTaWOREQkkpSgREQkkpSg0szMRpvZMTNbGXYsUWNm+Wa2LF6vsc7Mqs1sRthxRUV3alpmK51D3ZfJ1yQlqPR7CPivsIOIqFzgbYIqIyXAPwOrzeyiEGOKktY1LWcDD5vZ2HBDihydQ92XsdckJag0MrMvArXAH8OOJYrcvcHd73P3N929xd2fBHYBE8KOLWytalre6+717r4RSNS0lDidQ92T6dckJag0MbNi4H7gW2HHkinMbBBBLcdOP9SdRTqraakRVBd0DnWuN1yTlKDS5wFgmSpjpMbM+gCPAcvdfVvY8URAT2paZiWdQ0ll/DVJCSoFyeoRmlkpcBWwOOxYw5RC3cZEvxyCSiNNwPzQAo6W06ppma10DnWtt1yTtJRqClKoR7gQuAh4K77GUiEQM7NL3f3yMx5gRCQ7TvDBEivLCCYCfNbdT5zpuDLEdrpZ0zJb6RxKSTm94JqkShJpYGYFtP3vdxHByXGHu+8LJaiIMrMlBKsuXxVf4FLizOxxwIGvExyjp4BPdrIyddbSOZRcb7kmaQSVBu5+FDia2DazeuBYJp0IZ4OZjQBuJ6jDuLfVir63u/tjoQUWHd8gqGn5PkFNyzuUnNrSOZSa3nJN0ghKREQiSZMkREQkkpSgREQkkpSgREQkkpSgREQkkpSgREQkkpSgREQkkpSgREQkkpSgREQkkv4fpnIt6Q3iZsAAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "z = np.linspace(-5, 5, 200)\n", "\n", "plt.plot([-5, 5], [0, 0], 'k-')\n", "plt.plot([-5, 5], [1, 1], 'k--')\n", "plt.plot([0, 0], [-0.2, 1.2], 'k-')\n", "plt.plot([-5, 5], [-3/4, 7/4], 'g--')\n", "plt.plot(z, logit(z), \"b-\", linewidth=2)\n", "props = dict(facecolor='black', shrink=0.1)\n", "plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha=\"center\")\n", "plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha=\"center\")\n", "plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha=\"center\")\n", "plt.grid(True)\n", "plt.title(\"Sigmoid activation function\", fontsize=14)\n", "plt.axis([-5, 5, -0.2, 1.2])\n", "\n", "save_fig(\"sigmoid_saturation_plot\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Xavier and He Initialization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: the book uses `tensorflow.contrib.layers.fully_connected()` rather than `tf.layers.dense()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.dense()`, because anything in the contrib module may change or be deleted without notice. The `dense()` function is almost identical to the `fully_connected()` function. The main differences relevant to this chapter are:\n", "* several parameters are renamed: `scope` becomes `name`, `activation_fn` becomes `activation` (and similarly the `_fn` suffix is removed from other parameters such as `normalizer_fn`), `weights_initializer` becomes `kernel_initializer`, etc.\n", "* the default `activation` is now `None` rather than `tf.nn.relu`.\n", "* it does not support `tensorflow.contrib.framework.arg_scope()` (introduced later in chapter 11).\n", "* it does not support regularizer params (introduced later in chapter 11)." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_hidden1 = 300\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From :3: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use keras.layers.Dense instead.\n", "WARNING:tensorflow:From /Users/ageron/miniconda3/envs/tf1/lib/python3.7/site-packages/tensorflow_core/python/layers/core.py:187: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Please use `layer.__call__` method instead.\n" ] } ], "source": [ "he_init = tf.variance_scaling_initializer()\n", "hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu,\n", " kernel_initializer=he_init, name=\"hidden1\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Nonsaturating Activation Functions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Leaky ReLU" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "def leaky_relu(z, alpha=0.01):\n", " return np.maximum(alpha*z, z)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Saving figure leaky_relu_plot\n" ] }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAagAAAEYCAYAAAAJeGK1AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3Xl8FPX9x/HXh3AlHCJyVMGCeKCgVSCetBiPUm9RUEG04sGhVeuB1oMKiGfFWjwBiyJyi1yitD9Fo+JVQVG8gFJQQUUEEgghAZLv74/voiHk2E2ymdnN+/l47IM9JjvvHTb7zsx8d8acc4iIiIRNraADiIiIlEQFJSIioaSCEhGRUFJBiYhIKKmgREQklFRQIiISSiooKZWZZZrZ40HnSAZmlmFmzsyaVcO8VpvZ4GqYz6Fm9p6Z5ZnZ6njPL4o8zsx6BZ1Dqo4KKkGZ2Xgzmxd0jlhFSs9FLtvNbKWZ3W9m9WJ8nn5mllPOfPYo1/J+riqUUhDvAvsCG6pwPsPM7LMSHjoaeLKq5lOGe4Bc4NDIPKtFGe/9fYGXqiuHxF/toANIjfQscAdQF//B9mzk/tsDSxRnzrntwA/VNK/11TEf4CBgjnNudTXNr0zOuWpZvlJ9tAaVpMxsLzMba2Y/mtkWM3vTzNKLPL6PmU0xszVmts3MPjezy8t5zlPMLMvMBplZNzPbYWa/KjbNvWb2aTnxcp1zPzjnvnHOvQi8CnQv9jytzGyqmW2KXF42s4NjXAwVYmYPmNmyyHJZbWZ/M7P6xaY5w8w+iEyzwcxeMrP6ZpYJtAEe2rWmGJn+5018ZtY48nNnF3vO7pFl2qK8HGbWDxgKdCyyRtov8thua3Bm9mszmxV5H2wxs5lm1rrI48PM7DMz6x1Zo91iZrPL2hwZeV1HAndF5j3MzNpGrqcXn3bXprci0/Q0s1fNLNfMvjCz3xf7mUPNbK6ZZZtZTmRT4hFmNgy4DDizyOvOKD6fyO0jzOy1yPLbGFnz2qvI4+PNbJ6Z/dnM1kbeZ8+aWVppr1uqlwoqCZmZAS8DrYCzgE7AW8DrZrZvZLL6wEeRxzsCo4AxZnZKKc/ZC5gFDHDOjXbOvQWsBP5YZJpakdvjYsh6JNAV2FHkvjTgDSAPOBE4HvgeeK2aPjy2AlcAhwHXAL2BO4vkOw2Yiy/WLsBJwJv436fzgTXA3fhNTvtSjHNuM35TVN9iD/UFXnXO/RhFjmnAw8CyIvOZVnxekf+TOUDLSM6TgP2A2ZH3yS5tgYuA8/B/LHQC7i1l+RCZ37JIhn2BkWVMW5J7gUfxJfchMNXMGkYy7wcsBBzwe6Az8ASQEpnPdOC1Iq/73RJedwPg30AOcEzkdZ0APFNs0t8BhwOn8svr/3OMr0XixTmnSwJegPHAvFIeOxn/i5la7P4lwK1lPOdU4J9FbmcCjwMDgGyge7HpBwNfFrl9OpAP7FPGPDKB7ZF8+fgPoQKgZ5FprgBWAFbkvhT8/psLI7f7ATnlzOfxEu4v8+dKea5BwH+L3H4HmFrG9KuBwcXuy4i81maR2+fg9980itxOBTYDF8eQYxjwWVnzx3/AFwBtizzeDigETi3yPHnAXkWmubPovErJ8xkwrMjttpHXmF5sOgf0KjbNwCKPt4rc99vI7XuBr4G6sbz3i82nf+Q926iE/4ODijzPt0BKkWmeBl6ryO+kLlV/0RpUcuoCpAHrI5tHcswPDDgcOBDAzFLM7E4z+zSyiSoH/9f/r4s9Vw/8X6+nOef+r9hjzwHtzOyEyO0
rgNnOufIGAkwDjsKvGU0HnnZ+U1/R/AcAW4pkzwb23pU/nsysl5ktNLMfIvN+hN2XSydgQSVnMx9fUOdFbp8DGDA7hhzROAz4zhXZT+Sc+x/wHdChyHRfO+eyi9z+DmgR47xiUXQz8HeRf3fNrxOw0Pn9dhV1GPCpc25LkfvexRdz0df9hXOuoFiWeL5uiYEGSSSnWsA6/OaL4jZH/h0M3IzfnLEUv0ZzH3v+cn4CHAFcaWbvu8ifmeB3xpvZXOAKM1uG/5A9m/JlO+f+C2BmlwCfm1k/59z4IvmX4DdpFbcxiucH/zr3KuH+JviyK5GZHYdfkxwO3Ahk4V9XrJuwyuSc22Fm0/Gb9SZE/p3lnMutxhxFT2Wwo4THYv0DtjDy78+bDs2sTinT/jw/55yLbG2srj+Yq/p1S5yooJLTR/h9DoWRv5ZL8lvgJefc8/DzfqtD8B+ERa0CrsNvMhtrZgOKlhR+k8gM4H/4UWqvxRI08kF9H3C/mU2PfEB/BPQBfnLOFc8TrWXAGWZmxfJ2jjxWmq7AWufciF13mFmbYtN8DJyCf+0l2Y7fJFmeicBbZtYBOA2/PzCWHNHM50tgPzNru2stysza4fdDfRFFxljsGj1YdL/bURV4no+BS8ysbilrUdG+7ivMrFGRtagT8OXzZQUySQD0l0Jia2xmRxW7tMWXxDvAHDM73cwOMLPjzWy4me1aq1oOnGJmvzWzQ/H7mg4oaSaRkjsJ/yE6ptjO9Vfx+4aGAuOdc4UlPEV5JuP/cr02cnsSfg1wjpmdGMnfzcwett1H8tUq4fUfHnnsKfy+lsfM7Egza29mN+KL76EysiwHWplZXzNrZ2ZXR36mqHuBC8zsHjPrYGYdzezGIgM4VgO/Mz8SsdSRcM65d/H7WiYDP7H7ZsNocqwG2phZZ/OjA0v6Ltlr+M1pk8ws3fwIu0n4PwJeL2M5xMw5tw14H/hLZJmcQMXW+J4EGgLTzexoMzvIzPqY2a6yWw0cHvk/bVbKWtok/CbUCeZH83UDxgAzd629S/ipoBLb7/B/bRa9jIysMZyB/wB6Gr/GMB1ozy/b++8B/oPfF/IWfsTYpNJm5Jxbid/JfDpFSioyr2eBOvzyfaaYRP5Kfhy4NfIXby7QDb9W9gLwFX5/197ApiI/mlrC68+MPOf/Is9xMPB/kdfaG7jAOTe/jCwv4QvsH/gP9t8DdxWb5hX8vqPTI/N8E1/gu8r5LmB//CjH8r6TNAk/km1q0X0h0eQAXgRewRfbevYssF3/P+dGHn8jcvkB6FFszbKqXBH590N8IQyJ9Qmcc2vx/3d18Xk/xq/F74xM8jR+LWgR/nV1LeE5coE/AI3x//dzgPeK5JMEYPF5j0pNYmZP4UdG/b7ciUVEoqR9UFJh5r/02AH/3acLA44jIklGBSWVMQf/JchxzrmXgw4jIslFm/hERCSUNEhCRERCKW6b+Jo1a+batm0br6evlK1bt9KgQYOgYyQkLbvYLVu2jIKCAjp06FD+xLIbvd8qrrRlt2oVbNwI9erBYYdBSjTf2Ktiixcv/sk517y86eJWUG3btmXRokXxevpKyczMJCMjI+gYCUnLLnYZGRlkZWWF9vchzPR+q7iSlt3DD8PgwdCgAXzwAXTsGEw2M/s6mum0iU9EpAZ49VW49VZ/fcKE4MopFiooEZEk97//wUUXQWEh/PWvcP75QSeKjgpKRCSJbd0KPXrApk1w1lkwbFjQiaKnghIRSVLOweWXw9Kl0L49TJwItRLoUz+BooqISCwefBBeeAEaNYLZs2Gvkk5AE2IxFZSZHWxmeWY2MV6BRESk8j74oCl33OGvT5oEhx4abJ6KiHUN6gn8UYpFRCSkVqyAe+45DOdg+HA4O5rTiIZQ1AVlZr3xJ7Or7KmuRUQkTrZs8YMicnLq0KMHDIn5hCfhEdUXdc2sMXA3cDJwVRnTDQAGALRs2ZLMzMwqiFj1cnJyQpst7LTsYpeVlUVBQYGWWwXo/RabwkIYOrQjX3zRnP3330L//kt4662C8n8wpKI9ksQI/BGr1+x+MtXdOefGAmMB0tPTXVi/Aa5vp1ecll3smjRpQlZWlpZbBej9FpsRI2DhQj8Y4r77vuCMM35X/g+FWLkFFTnN8qlAp/jHERGRinjpJRg6FMxgyhRITd0WdKRKi2YNKgNoC3wTWXtqCKSYWQfnXOf4RRMRkWh89RVccon/3tN998Hpp0MybBmNpqDGAlOL3B6ML6yr4xFIRESil53tB0Vs3gy9esFttwWdqOqUW1DOuVwgd9dtM8sB8pxz6+MZTEREylZY6Necli2DI46AZ5/1m/iSRcyn23DODYtDDhERidHw4TBvHuy9tz9SRMOGQSeqWjrUkYhIApo1C+6+2x9bb+pUaNcu6ERVTwUlIpJgPv8c/vhHf/3BB6F792DzxIsKSkQkgWzatOtIEdCnD9x8c9CJ4kcFJSKSIAoKoG9f+O9/4aij4J//TK5BEcWpoEREEsRf/wrz58M++/h9UGlpQSeKLxWUiEgCmD4d7r8fUlL89bZtg04UfyooEZGQ+/RTf2ZcgIcfhpNPDjZPdVFBiYiE2MaNflBEbq4fuXf99UEnqj4qKBGRkNq5E3r3hlWroEsXGD06uQdFFKeCEhEJqdtvh1dfhebN/aCI1NSgE1UvFZSISAhNngwjR0Lt2jBjBuy/f9CJqp8KSkQkZD7+GK6KnLv8H/+Abt2CzRMUFZSISIisX+8HRWzbBldcAddcE3Si4KigRERCYscOuOgi+OYbOPZYeOKJmjUoojgVlIhISNxyC7zxBvzqV/Dii1C/ftCJgqWCEhEJgQkTYNQoqFPHl1OrVkEnCp4KSkQkYIsWwYAB/vrjj8MJJwSbJyxUUCIiAVq3Ds47D/LzYeDAX4pKVFAiIoHZvh0uuADWrIGuXeHRR4NOFC4qKBGRgNx0E7z9Nuy3n/8ybt26QScKFxWUiEgAxo3zw8jr1oWZM/3IPdmdCkpEpJq9//4vX8AdPdp/50n2pIISEalG338P55/v9z9de+0v53mSPamgRESqSX4+9OzpS6pbN/j734NOFG4qKBGRanL99fDee9C6Nbzwgv9SrpROBSUiUg3GjIGxY/3hi2bPhhYtgk4UfiooEZE4e+cduO46f33sWH92XCmfCkpEJI7WrvX7nXbsgBtugEsvDTpR4lBBiYjESV6eH7G3bh2cfDI89FDQiRKLCkpEJA6cg6uvhv/8B9q0gWnT/OnbJXoqKBGROHjiCRg/HlJT/aCIZs2CTpR4VFAiIlXszTfhxhv99WeegaOOCjZPolJBiYhUoW++8Uco37nTnyG3d++gEyUuFZSISBXZts2f22n9eujeHe6/P+hEiU0FJSJSBZzzJxv86CNo1w6mTIGUlKBTJTYVlIhIFRg1CiZOhAYN/KCIpk2DTpT4VFAiIpW0YAEMHuyvjx8PRxwRaJykoYISEamEVavgoougoADuuAN69Qo6UfJQQYmIVFBurh8UsWEDnHEG3H130ImSS1QFZWYTzex7M9tsZsvN7Kp4Bx
MRCTPn4Mor4ZNP4OCDYdIkDYqoatGuQd0PtHXONQbOAe4xMx2PV0RqrJEjYepUaNjQD4po0iToRMknqoJyzn3unMvfdTNyOTBuqUREQuzf/4bbbvPXn38eOnQINk+yivrQhWb2JNAPSAU+Bl4pYZoBwACAli1bkpmZWSUhq1pOTk5os4Wdll3ssrKyKCgo0HKrgDC+39aurc+gQV0oLKzDZZetpkmT1YQsIhDOZRcrc85FP7FZCnA8kAE86JzbUdq06enpbtGiRZUOGA+ZmZlkZGQEHSMhadnFLiMjg6ysLJYsWRJ0lIQTtvdbTg4cfzx89hmccw7MmgW1QjrULGzLrigzW+ycSy9vupgWrXOuwDm3EGgNXF3RcCIiicY56NfPl9Ohh/pNe2Etp2RR0cVbG+2DEpEa5P774cUXoXFjPyiiceOgEyW/cgvKzFqYWW8za2hmKWb2B6APsCD+8UREgvfyyzBkCJjB5MnQvn3QiWqGaAZJOPzmvNH4QvsauME5NzeewUREwmD5cujb12/iGzECzjwz6EQ1R7kF5ZxbD5xYDVlEREJl82bo0QOys+H88/2hjKT6aBefiEgJCgvhj3+EL7+Ejh39QWA1KKJ6aXGLiJRgxAiYM8cfIWL2bGjUKOhENY8KSkSkmLlzYdgwPyhiyhQ46KCgE9VMKigRkSK+/BIuucRfv/9+OO20YPPUZCooEZGIrCw491zYsgUuvBBuvTXoRDWbCkpEBD8o4pJLYMUK+M1v4Jln/CY+CY4KSkQEGDrUfyG3aVM/KKJBg6ATiQpKRGq8F1+Ee+7xw8inTYMDDgg6kYAKSkRquM8+g8su89cfeghOPTXYPPILFZSI1FibNvkjRWzd6g9ndOONQSeSolRQIlIjFRRAnz6wciV06gRjx2pQRNiooESkRrrzTn/q9mbN/IkH09KCTiTFqaBEpMaZNg0efBBSUuCFF6BNm6ATSUlUUCJSo3zyCVx+ub/+yCMQ0rOiCyooEalBNmzwgyK2bfMj9669NuhEUhYVlIjUCDt3wkUXwerVcPTRMHq0BkWEnQpKRGqEv/wFFiyAFi1g5kyoXz/oRFIeFZSIJL2JE+Hvf4fatf1RI1q3DjqRREMFJSJJ7aOPoH9/f/3RR+G3vw02j0RPBSUiSevHH/2giLw8uOoqGDQo6EQSCxWUiCSlHTv8OZ2+/RaOOw4ef1yDIhKNCkpEktLNN8Obb8K++/r9TvXqBZ1IYqWCEpGkM348PPYY1Knjy2m//YJOJBWhghKRpPKf//yyr+nJJ+H444PNIxWnghKRpPHDD3D++ZCfD1df7QdGSOJSQYlIUti+HXr1grVr/VDyf/wj6ERSWSooEUkKN9wA77wDrVrBjBlQt27QiaSyVFAikvCefhqeesqP1Js1C1q2DDqRVAUVlIgktHffhT/9yV8fPdofCFaSgwpKRBLWd99Bz57+S7nXXw/9+gWdSKqSCkpEElJ+vi+nH36AE0+EkSODTiRVTQUlIgnHOb9Z7/334de/9qdtr1Mn6FRS1VRQIpJwRo+GceP8OZ1mzYLmzYNOJPGgghKRhPL2235/E8A//wmdOwebR+JHBSUiCWPNGv9l3J074aaboG/foBNJPKmgRCQh5OXBeef5czydcgo8+GDQiSTeVFAiEnrO+QPALloEbdvCtGn+9O2S3FRQIhJ6jz0Gzz0HaWkwezbss0/QiaQ6qKBEJNQyM/3+JoBnn4Ujjww0jlSjcgvKzOqZ2Tgz+9rMtpjZEjM7vTrCiUjN9sMP9bjgAigogL/8xZ/CXWqOaNagagPfAicCewFDgOlm1jZ+sUSkpsvNhbvuOpyffoI//AHuvTfoRFLdyt3N6JzbCgwrctc8M1sFdAFWxyeWiNRkzkH//rBiRSMOPBCmTIGUlKBTSXWLeRyMmbUEDgE+L+GxAcAAgJYtW5KZmVnZfHGRk5MT2mxhp2UXu6ysLAoKCrTcYjB9emsmTz6I+vV3cuedH/PJJ1uDjpRwkuF31Zxz0U9sVgeYD6x0zg0sa9r09HS3aNGiSsaLj8zMTDIyMoKOkZC07GKXkZFBVlYWS5YsCTpKQnjtNb9Jr7AQhg//jLvuOjzoSAkpzL+rZrbYOZde3nRRj+Izs1rA88B24NpKZBMRKdH//gcXXeTLacgQ6Nbtp6AjSYCiKigzM2Ac0BLo6ZzbEddUIlLjbN0KPXrAxo1w1lkwfHjQiSRo0e6Dego4DDjVObctjnlEpAZyDq64ApYuhUMOgYkToZa+pVnjRfM9qDbAQOAo4Aczy4lcdJhGEakSf/sbTJ8OjRr5I0XstVfQiSQMohlm/jVg1ZBFRGqgf/0Lbr/dX584EQ47LNg8Eh5aiRaRwPz3v9Cnj9/EN3w4nHNO0IkkTFRQIhKILVv8oIisLP/vkCFBJ5KwUUGJSLUrLITLLoPPP/eb9J57ToMiZE96S4hItbvvPpg1yw+GmD0bGjcOOpGEkQpKRKrVvHlw111gBpMn+2HlIiXROSlFpNosWwZ9+/pBEffeC2ecEXQiCTOtQYlItcjOhnPPhc2boVevX4aWi5RGBSUicVdYCJde6tegDj/cnxnX9O1KKYcKSkTibvhweOkl2HtvPyiiYcOgE0kiUEGJSFzNng133+2HkU+dCgceGHQiSRQqKBGJmy++8Jv2AB54ALp3DzaPJBYVlIjERVaWHxSRkwO9e8PgwUEnkkSjghKRKldQABdf7I+1d9RRMG6cBkVI7FRQIlLl7roL5s+HffbxR4xISws6kSQiFZSIVKkXXvCHMkpJgWnToG3boBNJolJBiUiV+fRT6NfPXx85Ek45JdA4kuBUUCJSJTZu9KfNyM31I/f+/OegE0miU0GJSKXt3OlH6q1aBV26wJgxGhQhlaeCEpFKu+MOePVVaN4cZs6E1NSgE0kyUEGJSKVMmQIPPQS1a8OMGfDrXwedSJKFCkpEKmzJErjySn/9H/+Abt2CzSPJRQUlIhXy009+UMS2bXD55XDNNUEnkmSjghKRmO3cCRdeCF9/DcccA08+qUERUvVUUCISs1tugTfegJYt/aCI+vWDTiTJSAUlIjF5/nm/v6lOHXjxRWjVKuhEkqxUUCIStUWLoH9/f/2xx6Br12DzSHJTQYlIVNatg/POg/x8GDAABg4MOpEkOxWUiJRrxw644AJYswZOOAEefTToRFITqKBEpFw33ghvvw377ee/jFuvXtCJpCZQQYlImZ55Bp54AurW9SP29t036ERSU6igRKRUH3wAV1/trz/1FBx7bLB5pGZRQYlIib7/Hs4/H7Zvhz/9Ca64IuhEUtOooERkD9u3Q69e8N13/vh6jzwSdCKpiVRQIrKH66+Hd9+F1q39Kdzr1Ak6kdREKigR2c2YMf5Srx7MmgUtWgSdSGoqFZSI/Oydd+C66/z1sWMhPT3YPFKzqaBEBIC1a6FnT/+l3BtugD/+MehEUtOpoESEvDw/Ym/dOjjpJH+GXJGgRVVQZnatmS0ys3wzGx/nTCJSjZzzw8j/8x9o0wamTfOnbxcJWrRvw++Ae4A/AKnxi
yMi1e3JJ/3RIlJT/aCI5s2DTiTiRVVQzrmZAGaWDrSOayIRqTZvveX3NwGMGwedOgWbR6Qo7YMSqaG+/dZ/GXfnThg8GPr0CTqRyO6qdEuzmQ0ABgC0bNmSzMzMqnz6KpOTkxPabGGnZRe7rKwsCgoKQrXc8vNrcf31nVi/vhHp6Rs57bSlZGa6oGPtQe+3ikuGZVelBeWcGwuMBUhPT3cZGRlV+fRVJjMzk7BmCzstu9g1adKErKys0Cw35/wQ8uXLoV07+Pe/m9K06YlBxyqR3m8VlwzLTpv4RGqYUaNg4kRIS4PZs6Fp06ATiZQsqjUoM6sdmTYFSDGz+sBO59zOeIYTkar1+ut+fxPA+PFwxBGBxhEpU7RrUEOAbcBtwCWR60PiFUpEqt7q1XDhhVBQALff7k/hLhJm0Q4zHwYMi2sSEYmb3Fzo0QM2bIDTT4cRI4JOJFI+7YMSSXLOwZVXwiefwMEHw+TJkJISdCqR8qmgRJLcww/D1KnQsKEfFNGkSdCJRKKjghJJYv/3f/CXv/jrEyZAhw7B5hGJhQpKJEmtXAm9e0NhIdx1F5x3XtCJRGKjghJJQjk5flDEpk1w9tkwdGjQiURip4ISSTLOweWXw2efQfv2/ku5tfSbLglIb1uRJPPAAzBjBjRuDHPm+H9FEpEKSiSJvPIK3HknmMGkSX4NSiRRqaCqQUZGBtdee23QMSTJrVgBF1/sN/HdfTecdVbQiUQqRwUF9OvXj7P02ywJbMsWOPdcyM72o/XuuCPoRCKVp4ISSXCFhf70GV9+6b/n9NxzGhQhyUFv43JkZ2czYMAAWrRoQaNGjTjxxBNZtGjRz49v2LCBPn360Lp1a1JTU+nYsSPPPvtsmc+5YMECmjRpwujRo+MdX2qAe+755QgRc+ZAo0ZBJxKpGiqoMjjnOPPMM1m7di3z5s3j448/plu3bpx88sl8//33AOTl5dG5c2fmzZvH559/zp///GcGDhzIggULSnzOGTNmcN555zF27FgGDRpUnS9HktDcuf47TmYwZQocdFDQiUSqTpWeUTfZvPHGGyxZsoT169eTmpoKwIgRI3jppZd4/vnnufXWW2nVqhW33HLLzz8zYMAAXn/9daZMmcIpp5yy2/ONHTuWW265hRkzZtC9e/dqfS2SfL76Ci65xF+/7z447bRg84hUNRVUGRYvXkxubi7Nmzff7f68vDxWrlwJQEFBAQ888ADTpk1j7dq15Ofns3379j1OtTx79mzGjBnDW2+9xfHHH19dL0GSVHa2HxSxZYs/r9Ou4+2JJBMVVBkKCwtp2bIlb7/99h6PNY58+3HkyJE8/PDDjBo1iiOOOIKGDRtyxx138OOPP+42/ZFHHsnSpUsZN24cxx13HGZWLa9Bkk9hIfTtC8uX+zPiPvus38QnkmxUUGXo3Lkz69ato1atWrRr167EaRYuXMjZZ5/NpZdeCvj9VsuXL6dJsXMaHHDAATz22GNkZGQwYMAAxo4dq5KSChk6FF5+GZo29YMjGjQIOpFIfGiQRMTmzZtZsmTJbpeDDjqIrl27cu655zJ//nxWrVrFe++9x9ChQ39eqzrkkENYsGABCxcu5KuvvuLaa69l1apVJc6jXbt2vPHGG/zrX/9i4MCBOOeq8yVKEpg504/aq1ULpk2DUv5uEkkKKqiIt99+m06dOu12ueWWW3jllVc4+eST6d+/P+3bt+fCCy9k2bJl7LfffgAMGTKEY445htNPP51u3brRoEED+vbtW+p8DjzwQDIzM5k/f75KSmLy2Wf++04Af/sbnHpqsHlE4k2b+IDx48czfvz4Uh8fNWoUo0aNKvGxvffem5kzZ5b5/JmZmbvdPvDAA/n2229jjSk12KZN/vQZW7f6wxnddFPQiUTiT2tQIiFXUAB9+vgTEHbqBE8/rUERUjOooERCbsgQ+Pe/oVkzmDUL0tKCTiRSPVRQIiE2fbo/v1NKir/epk3QiUSqT1IX1M6dOxkzZgwbNmwIOopIzD75xJ8ZF+Dvf4eTTgo2j0h1S9qC+vbbbznmmGO47rrruOCCCzRaThLKhg1+UERuLlx2GVx3XdCJRKpfUhbUnDlz6NixI59++ik7duzggw8+YOTIkUHHEonKzp3QuzesXg3p6TB6tAZFSM2UVAWVn5/PoEGDuPjii9myZQsFBQUA5ObmMnTo0N1OkyESVrfdBq/vsgn3AAAKYElEQVS9Bi1a+C/m1q8fdCKRYCRNQa1YsYIjjzySCRMmkJubu8fjzjlWrFgRQDKR6E2aBA8/DLVrw4wZsP/+QScSCU5SfFF34sSJDBo0iNzc3D32NdWpU4fGjRsza9Ysfve73wWUUKR8H30EV13lrz/6KOjtKjVdQhfU1q1b6d+/P3PmzClxrSktLY1jjz2WF154gX322SeAhCLRWb8ezjsP8vLgyitB57IUSeBNfEuXLqVDhw7MmjWrxHJKTU1l+PDhLFiwQOUkobZjB1x4IXzzDRx3HDzxhAZFiEACrkE553jqqacYPHgw27Zt2+PxevXq0bRpU+bOnUt6enoACUViM3gwZGbCr34FL74I9eoFnUgkHBKqoLKysrjkkkt44403SiyntLQ0fv/73zNhwoSfTygoEmbjx/v9TXXq+BF7kYPkiwgJVFAffPAB55xzDtnZ2eTn5+/xeFpaGo888gj9+/fXiQAlIXz44S/7mp54Ao4/Ptg8ImET+oIqLCzkgQce4J577ilxral+/frsu+++zJs3jw4dOgSQUCR269b5QRH5+b6k+vcPOpFI+IS6oH788Ud69erF4sWLS92k17NnT8aMGUNqamoACUVit3079OoFa9dC165QyqnGRGq8QEfxbdiwgY8++qjEx15//XUOPfRQ3n///T1G6dWqVYsGDRowbtw4JkyYoHKShHLDDbBwIbRq5b+MW7du0IlEwinQgrrmmmvo2rUrK1eu/Pm+nTt3ctttt3HWWWexadMmduzYsdvPpKWlceihh/Lpp5/Su3fv6o4sUin//Cc89ZQfqTdzph+5JyIlC6ygli9fzty5c9m+fTtnn30227dvZ82aNRx77LE89thjJW7SS01N5corr+Tjjz+mXbt2AaQWqbj33oM//clff+opOOaYYPOIhF1U+6DMrCkwDugO/ATc7pybXJkZ33bbbezYsYPCwkJWr15Njx49WLhwIbm5uT8f5PXnkLVrk5aWxuTJkznzzDMrM1uRQOzYUYuePf3+p+uu++U8TyJSumgHSTwBbAdaAkcBL5vZJ865zysy0y+++IL58+f/XETbtm3j9ddfL3X4eMeOHZk1axatWrWqyOxEApWXB6tWNWDbNjjxRH8wWBEpn5V3Ij8zawBsAg53zi2P3Pc8sNY5d1tpP9eoUSPXpUuXEh9bunQpGzduLDdcrVq1aN26NW3btq3S7zZlZWXRpEmTKnu+mkTLbnfO+fM3lXbZvh3WrFkCQL16R9Gli/9SrkRH77eKC/Oye/PNNxc758o91E80a1CHADt3lVPEJ8CJxSc0swHAAPBHEc/KytrjybZt28amTZvK
nWlKSgpt27alYcOGZGdnRxEzegUFBSVmk/Il27JzDgoKrMIX56L7wyklpZCDDspm61ad2TkWyfZ+q07JsOyiKaiGwOZi92UDjYpP6JwbC4wFSE9PdyWdILB79+7lnpepTZs2LFq0iGbNmkURL3aZmZlkZGTE5bmTXdiWXX4+ZGWVfcnOLv2xEsbixCQlBfbaC5o0Kf0yY0YGZlksWfJx1bzoGiRs77dEEuZlF+0WsWgKKgcofmC7xsCWGDOxePFiFi5cuMc5m4r78ccfWbp0KSeddFKss5AEk5dXfsGUVTZ5eZWbf0oK7L23L5LyiqakS4MG5R95fMECn1VEYhNNQS0HapvZwc65Xas+RwIxD5C4+eabyYviE2Xbtm307NmTZcuW0bx581hnI9XEudgLpnjRlDAuJia1a/9SMEUv0ZZNWppObSESVuUWlHNuq5nNBO42s6vwo/jOBU6IZUbvv/8+H374YblrT7ts2bKFG2+8kYkTJ8YyG4mBc5CbG92msF2Xb7/tTEHBL7eLfY86ZnXqlFww0ZZNaqoKRiRZRTvM/BrgGeBHYANwdaxDzG+66aYSTyxYr1496tWrR35+PnXr1uWQQw7h6KOPpkuXLtrEVw7nYOvW2Pa5FL/s3BnrXHff2lu3bvkFU1bR1K+vghGRkkVVUM65jUCPis7k3Xff5b333qNRo0YUFBRQUFBAu3bt6Ny5M0cffTS/+c1v6NixIy1atKjoLBKSc5CTU/Ed/FlZUOw7zTGrXz+2fS4rVy7mlFO6/Fw29etXzbIQESmuWo5mvs8++3DfffdxxBFHcPjhh9OmTZukOGdTYWH5BVNe0RQWVi5DWlrs+12KTh/r2VszM7fQvn3lMouIRKNaCqp9+/bcfvvt1TGrmBQWwpYtFd/Bn51d+YJp0CD2/S5Fp9GRsEUkWYX6fFDlKSiAzZtj3+/yww/HkZfnfzbKMRulatiwYvtedt2vowqIiJQs0IIqKNizWGIpms3Fvz4ctV92nDRqFNsmseK3ayd0xYuIhFfcPl7XrYO77iq7YLbE/FXfPe21V+z7Xr766n3+8IfjaNxYBSMiElZx+3heswZGjCh7GrPdyyXWomnUyB8JIFbZ2Xk0bVqx1yUiItUjbgXVogVcc035BVMr0HP6iohIWMWtoPbfH4YOjdezi4hIstP6i4iIhJIKSkREQkkFJSIioaSCEhGRUFJBiYhIKKmgREQklFRQIiISSiooEREJJRWUiIiEkgpKRERCyVxlT4hU2hObrQe+jsuTV14z4KegQyQoLbuK0XKrGC23igvzsmvjnGte3kRxK6gwM7NFzrn0oHMkIi27itFyqxgtt4pLhmWnTXwiIhJKKigREQmlmlpQY4MOkMC07CpGy61itNwqLuGXXY3cByUiIuFXU9egREQk5FRQIiISSiooEREJJRUUYGYHm1memU0MOkvYmVk9MxtnZl+b2RYzW2JmpwedK6zMrKmZzTKzrZFldnHQmcJO77GqkQyfayoo7wngw6BDJIjawLfAicBewBBgupm1DTBTmD0BbAdaAn2Bp8ysY7CRQk/vsaqR8J9rNb6gzKw3kAUsCDpLInDObXXODXPOrXbOFTrn5gGrgC5BZwsbM2sA9AT+6pzLcc4tBOYClwabLNz0Hqu8ZPlcq9EFZWaNgbuBm4LOkqjMrCVwCPB50FlC6BBgp3NueZH7PgG0BhUDvcdik0yfazW6oIARwDjn3JqggyQiM6sDTAKec859FXSeEGoIbC52XzbQKIAsCUnvsQpJms+1pC0oM8s0M1fKZaGZHQWcCjwSdNYwKW+5FZmuFvA8fv/KtYEFDrccoHGx+xoDWwLIknD0Hotdsn2u1Q46QLw45zLKetzMbgDaAt+YGfi/dlPMrINzrnPcA4ZUecsNwPwCG4ff8X+Gc25HvHMlqOVAbTM72Dm3InLfkWhTVbn0HquwDJLoc63GHurIzNLY/a/bwfj/2Kudc+sDCZUgzGw0cBRwqnMuJ+g8YWZmUwEHXIVfZq8AJzjnVFJl0HusYpLtcy1p16DK45zLBXJ33TazHCAvEf8Tq5OZtQEGAvnAD5G/0gAGOucmBRYsvK4BngF+BDbgPyhUTmXQe6ziku1zrcauQYmISLgl7SAJERFJbCooEREJJRWUiIiEkgpKRERCSQUlIiKhpIISEZFQUkGJiEgoqaBERCSU/h9r5scSI6iwhAAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "plt.plot(z, leaky_relu(z, 0.05), \"b-\", linewidth=2)\n", "plt.plot([-5, 5], [0, 0], 'k-')\n", "plt.plot([0, 0], [-0.5, 4.2], 'k-')\n", "plt.grid(True)\n", "props = dict(facecolor='black', shrink=0.1)\n", "plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha=\"center\")\n", "plt.title(\"Leaky ReLU activation function\", fontsize=14)\n", "plt.axis([-5, 5, -0.5, 4.2])\n", "\n", "save_fig(\"leaky_relu_plot\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Implementing Leaky ReLU in TensorFlow:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "def leaky_relu(z, name=None):\n", " return tf.maximum(0.01 * z, z, name=name)\n", "\n", "hidden1 = tf.layers.dense(X, n_hidden1, activation=leaky_relu, name=\"hidden1\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's train a neural network on MNIST using the Leaky ReLU. First let's create the graph:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_hidden1 = 300\n", "n_hidden2 = 100\n", "n_outputs = 10" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X, n_hidden1, activation=leaky_relu, name=\"hidden1\")\n", " hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=leaky_relu, name=\"hidden2\")\n", " logits = tf.layers.dense(hidden2, n_outputs, name=\"outputs\")" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /Users/ageron/miniconda3/envs/tf1/lib/python3.7/site-packages/tensorflow_core/python/ops/math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.where in 2.0, which has the same broadcast rule as np.where\n" ] } ], "source": [ "learning_rate = 0.01\n", "\n", "with tf.name_scope(\"train\"):\n", " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", " training_op = optimizer.minimize(loss)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's load the data:" ] }, { "cell_type": "markdown", "metadata": {}, 
"source": [ "**Warning**: `tf.examples.tutorials.mnist` is deprecated. We will use `tf.keras.datasets.mnist` instead." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()\n", "X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0\n", "X_test = X_test.astype(np.float32).reshape(-1, 28*28) / 255.0\n", "y_train = y_train.astype(np.int32)\n", "y_test = y_test.astype(np.int32)\n", "X_valid, X_train = X_train[:5000], X_train[5000:]\n", "y_valid, y_train = y_train[:5000], y_train[5000:]" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "def shuffle_batch(X, y, batch_size):\n", " rnd_idx = np.random.permutation(len(X))\n", " n_batches = len(X) // batch_size\n", " for batch_idx in np.array_split(rnd_idx, n_batches):\n", " X_batch, y_batch = X[batch_idx], y[batch_idx]\n", " yield X_batch, y_batch" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Batch accuracy: 0.86 Validation accuracy: 0.9044\n", "5 Batch accuracy: 0.94 Validation accuracy: 0.9496\n", "10 Batch accuracy: 0.92 Validation accuracy: 0.9654\n", "15 Batch accuracy: 0.94 Validation accuracy: 0.971\n", "20 Batch accuracy: 1.0 Validation accuracy: 0.9764\n", "25 Batch accuracy: 1.0 Validation accuracy: 0.9778\n", "30 Batch accuracy: 0.98 Validation accuracy: 0.978\n", "35 Batch accuracy: 1.0 Validation accuracy: 0.9788\n" ] } ], "source": [ "n_epochs = 40\n", "batch_size = 50\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " if epoch % 5 == 0:\n", " acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})\n", " acc_valid = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n", " print(epoch, \"Batch accuracy:\", acc_batch, \"Validation accuracy:\", acc_valid)\n", "\n", " save_path = saver.save(sess, \"./my_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### ELU" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "def elu(z, alpha=1):\n", " return np.where(z < 0, alpha * (np.exp(z) - 1), z)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Saving figure elu_plot\n" ] }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAagAAAEYCAYAAAAJeGK1AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3Xl8FeW9x/HPLwnKKiBorCJgXVDrwhWq1bqkalUWt7q2asUNKtqWqq0b9Gql2ipWqApKixcFF1CwKgh41XvABaWgIFAFRECQfTlAgARInvvHc4CQ9SSZZOac832/XueVyTxzZn5nGM43sz1jzjlERESiJivsAkRERMqjgBIRkUhSQImISCQpoEREJJIUUCIiEkkKKBERiSQFlIiIRJICSkREIkkBJSnDzIab2bg0Wk6WmT1rZuvMzJlZXl0vs5Ja6uUzJ5bV0sxWmdnh9bG86jKzV83szrDrEDD1JJGezGw4cH05TZ86536UaG/tnOtewftjwBzn3O2lxvcAnnLONQ204OSW3Ry/zcZTaTmVLL87MBbIA74B1jvnttflMhPLjVHqc9fXZ04s6zH8tndDXS+rnGWfCdwFdAIOBm5wzg0vNc3xwGTgMOfcxvquUfbICbsAqVPvAteVGlfnX4B1pb6+LOrxS+kIYIVz7uN6Wl6F6uszm1lj4GbgwvpYXjmaAnOAFxKvMpxzs83sG+Ba4Ol6rE1K0SG+9FbonFtZ6rW+rhdqZheY2QdmtsHM1pvZJDM7pkS7mdmdZrbAzArNbJmZPZJoGw6cBdyWOOzlzKz9rjYzG2dmPROHiLJLLfclM3szmTqSWU6J+exrZgMTyywws0/M7PQS7TEzG2xmD5vZWjNbbWYDzKzC/1+J5T8BtE0se3GJeT1Vetpd9SSzrJqs3+p+5pp+bqAr4ICPylknnczsPTPbZmZfm9mZZnalmZWZtqacc2875+5zzr0GFFcy6ZvAz4NartSMAkrqQhNgIHAy/vDVRuAtM9sn0f4w0A94BPgBcAWwNNH2W2Aq8D/A9xKvXW27vAo0B366a4SZNQUuBkYmWUcyy9nlUeAq4Ebgv4DZwEQz+16Jaa4BdgKnAbcDfRLvqchvgT8ByxLL/mEl05ZW1bJqu34huc+cTC2lnQHMcKXOLZjZD4EPgP8DTgA+AR4E7k98FkpNf5+Z5VfxOqOSOqoyDTjZzBrVYh5SSzrEl94uMLP8UuOeds7dXZcLdc6NKfm7md0AbML/h58J/A7o45x7LjHJ1/gvTZxzG81sO7DVObeygvlvMLO38V+OExOjL8F/Ub5ZYroK63DOfVjVchLvaQLcCtzsnBufGPcr4GzgNqBvYtL/OOf+mBieb2a3AOcAL1fwGTaa2WagqLLlV6DCZSWCutrr18xq8pmr/bmBdsDycsY/DrzlnOufWN5LwFvAFOfc++VM/wwwuoJl7PJdFe2VWQ40wJ+nWliL+UgtKKDS2xSgZ6lx9XES/HDgIeAU4AD8nnoW0BZ/Dmxf4L1aLmYk8LyZNXbObcWH1RjnXEGSdSTrcPwX1e7DTM65IjObChxbYrovSr1vOXBgNZZTHZUt61hqv36T/cxV1VKeRsCqkiPM7CD8ntVPSozejv+3KrP3lKhnPVCXh6u3JX5qDypECqj0ttU593UN37sJfxittBb4Q2WVGYc/dNUL/1fsTuA/wD6Vvamaxifme7GZvQecC5xfz3WUPEy1o5y2mhxCLwas1LgGpX4Palk1Ufqy3+rWshZoWWrcrvOT00uM6wDMc859WN5MzOw+4L7KS6WLc+6DKqapyP6Jn2tq+H4JgAJKKjIP6GpmVup8wUmJtnKZWSvgaKC3c+7/EuNOYs+29iVQiD8MtKCC2WwHsitoA8A5V2hmr+L3nFoDK4FYNepIajn4wzvbgR8nhjF/ccapwEtVvLcm1uDPC5V0IrA4yfcHsX7r8jN/DvQoNa4FPtiKEstqhj/3VNmhz7o+xHcc8J1zblWVU0qdUUClt30Th09KKnLO7fqrcD8z61iqPe6cWwwMwZ/0ftLM/gEU4K/A+jlwUSXL3ID/K/kWM1sKHAI8ht97wTm32cwGAY+YWSH+MGQroJNzbkhiHovx56vaA/n4+4PKu+JqJP5Q1mHAy6WmqbSOZJfjnNtiZkOAv5rZWmAR/hxPLjC4kvVQU+8DA83sIvwfAr2AQ0kyoGq6fkvNoy4/86TEfFs559Ylxs3E7zXea2Yv4v+dVgBHmNmRzrkyQVvTQ3yJc3RHJH7Nwl9F2RH/b/9tiUnPSNQqIdJVfOntXPx/9JKvz0u0n5H4veRrAIBz7hvgTOBI4B38VU1XA1c45yZUtMDEF/xV+Cux5uDvI+mH/6t+l3uBvybGfwmMAdqUaB+A/wv+P/g9iorOGX2A/yv5WPa+ei/ZOpJdzt3AKPyVbzMT87zAObeigulr47kSr4+AzcDr1ZxHEOu3Tj6zc242e7alXeMW4feYbgVm4T/zufh/t6DvEevMnm29Ef5Kwc/xV1QCYGYNgUuBfwS8bKkm9SQhIvXKzC4ABgHHOueKwq6nNDO7DbjYOXde2LVkOu1BiUi9cs5NxO/Rtqlq2pDsAH4ddhGiPSgREYko7UGJiEgkKaBERCSSQr/MvHXr1q59+/Zhl1HGli1baNKkSdhlpBSts+TNmzePoqIijj22dMcMUpFU276WLIG1ayE7Gzp0gEYh9EkR1XU2Y8aMtc65A6qaLvSAat++PdOnT696wnoWi8XIy8sLu4yUonWWvLy8POLxeCS3/ahKpe3rj3+Ehx6Chg3h3Xfhxz8Op46orjMzW5LMdDrEJyISoKef9uGUnQ2jR4cXTulAASUiEpBXX4VfJy5QHzoULgzrsYxpQgElIhKA99+Ha68F5+Dhh+HGG8OuKPUFGlBmNtLMVpjZJjObb2Y3Bzl/EZEo+vxzuOQS2L4dfvMbuOeesCtKD0HvQT0CtHfO7YfvULS/mXUKeBkiIpGxcCF06QKbN8NVV8ETT4CVfmCK1EigAeWcm+uc29UZp0u8Dg9yGSIiUbFqFZx/vv957rnw/POQpRMngQn8MnMzG4x/3ksjfC/Bb5czTU8ST3rNzc0lFosFXUat5efnR7KuKNM6S148HqeoqEjrqxqitn1t3ZpNnz4dWbiwGUceuZnf/W4mU6dGq+/bqK2z6qqTvvhKPNwsD/irc670Uzd369y5s4vivSBRvX8gyrTOkrfrPqiZM2eGXUrKiNL2VVgI3brBe+/B4YfDRx9Bbm7YVZUVpXVWkpnNcM51rmq6OtkZdc4VJR7V3Ab/jBcRkbRQXAzXX+/DKTcXJk2KZjilg7o+WpqDzkGJSJpwDvr0gVGjoFkzmDDB70FJ3QgsoMzsQDO72syamlm2mZ2Pfzz4e0EtQ0QkTH/5Czz5JOyzD/zrX/Bf/xV2RektyIskHP5w3jP44FsC9HHOvRngMkREQjFsGNx3n7+EfORIOPvssCtKf4EFlHNuDXBWUPMTEYmKN9+Enj398FNPwRVXhFtPptAV+yIilfjoI3
8DbnEx9OsHvXuHXVHmUECJiFRg7lzo3h0KCuCWW+DBB8OuKLMooEREyrF0KVxwAcTjcPHFMHiwujCqbwooEZFS1q3zXRgtWwannw4vvww5oT/eNfMooEREStiyxR/W+/JLOO44f4FEGI9rFwWUiMhuO3b4CyI++QTatoWJE6Fly7CrylwKKBERfC8Rt9wC48dDq1a+C6NDDgm7qsymgBIRAe691z8uo3FjH1JHHx12RaKAEpGM98QT8Ne/+gshxoyBU04JuyIBBZSIZLgXX4Q77vDDzz3nLy2XaFBAiUjGeucd6NHDDw8YANddF2o5UooCSkQy0r//DT/7GezcCXfe6V8SLQooEck48+dD167+nqdrr4VHHw27IimPAkpEMsqKFb6XiLVr/fmm556DLH0TRpL+WUQkY2zc6ENp8WI4+WR49VVo0CDsqqQiCigRyQgFBb7T1y++gA4d/L1OTZuGXZVURgElImmvqAiuuQYmT4aDD/a9RLRuHXZVUhUFlIikNefgtttg7Fho3tz3r9euXdhVSTIUUCKS1v70J3j2Wdh3X3jrLTj++LArkmQpoEQkbT37LDzwgL9K75VX4Iwzwq5IqkMBJSJpaexY6N3bDw8ZApdcEm49Un0KKBFJO5Mnwy9+AcXF/hBfz55hVyQ1oYASkbQyaxZcdBEUFvo9qL59w65IakoBJSJpY9EifyPupk1w+eXw97+DWdhVSU0poEQkLaxZ47swWrkSfvITGDkSsrPDrkpqQwElIikvP993/rpgAXTsCK+/7i8rl9SmgBKRlLZ9O1x2GUyfDocdBhMm+BtyJfUpoEQkZRUXww03+AcPHnCA/3nQQWFXJUFRQIlISnIO7roLXnrJd/o6YQIccUTYVUmQFFAikpIGDIAnnvCPyxg7Fjp1CrsiCZoCSkRSzvPPwx/+4IdfeAF++tNw65G6oYASkZQyfjzcdJMfHjQIrr463Hqk7gQWUGa2r5kNM7MlZrbZzGaaWZeg5i8i8skncMUV/vlO994Lv/lN2BVJXQpyDyoHWAqcBTQH+gKjzax9gMsQkQy1ZEljunWDbdvgxhvhz38OuyKpazlBzcg5twV4oMSocWa2COgELA5qOSKSeZYtgz/84QTWr4fu3f1jNNSFUfqrs3NQZpYLHAXMratliEj627DB96+3enVDTjsNRo2CnMD+tJYoq5N/ZjNrALwIPO+c+6qc9p5AT4Dc3FxisVhdlFEr+fn5kawryrTOkhePxykqKtL6qkJhYRZ33XUic+c259BDN3P33bOYNm1n2GWljFT/Pxl4QJlZFjAC2A7cXt40zrmhwFCAzp07u7y8vKDLqLVYLEYU64oyrbPktWjRgng8rvVViZ07fRdGc+ZAmzYwYMAcLrro9LDLSimp/n8y0IAyMwOGAblAV+fcjiDnLyKZwTn41a/gzTehZUuYNAlWry4MuyypZ0GfgxoCHANc6JzbFvC8RSRD9OsHw4ZBo0b+vqdjjw27IglDkPdBtQN6AR2BlWaWn3hdE9QyRCT9Pfmkv4Q8OxtGj4ZTTw27IglLkJeZLwF04aeI1NioUfDb3/rhf/7TX1IumUtdHYlIJLz7Llx3nT//9Je/QI8eYVckYVNAiUjoPvsMLr0UduyAPn32dAQrmU0BJSKhWrgQunTxj23/+c/h8cfVS4R4CigRCc2qVXDeebB6tX9kxvDhkKVvJUnQpiAiodi0ye85ffONf9jgmDGwzz5hVyVRooASkXpXWOjPOX3+uX9M+9tvQ7NmYVclUaOAEpF6VVTkr9Z7/3046CB45x048MCwq5IoUkCJSL1xzt/n9OqrsN9+MGECHHZY2FVJVCmgRKTePPwwPP20P9f0xhvQsWPYFUmUKaBEpF7885/Qt6+/hPyllyCFO9mWeqKAEpE698Yb0KuXHx482D9GQ6QqCigRqVMffghXXw3FxfDf/+0foyGSDAWUiNSZOXPgwguhoAB69vQBJZIsBZSI1IklS+D88yEe9/c8DR6sLoykehRQIhK4tWt9OC1fDmee6S+KyM4OuypJNQooEQnUli3+OU7z5sHxx/sLJBo2DLsqSUUKKBEJzI4dcMUV8Omn0K4dTJwILVqEXZWkKgWUiASiuBhuusn3DtG6te/C6OCDw65KUpkCSkQCcc89MGIENGkC48fDUUeFXZGkOgWUiNTa3/4Gjz0GOTn+sRknnxx2RZIOFFAiUisvvgh33umHhw/3V++JBEEBJSI1NmkS9Ojhhx9/HK65JtRyJM0ooESkRqZN833q7dwJv/893HFH2BVJulFAiUi1zZsH3br5e55++Uv4y1/CrkjSkQJKRKpl+XJ/nmntWujSxT9GI0vfJFIHtFmJSNLicbjgAt/P3imn+CfjNmgQdlWSrhRQIpKUbdvgootg9mw4+mh/r1OTJmFXJelMASUiVSoq8lfoffABHHKIv3qvVauwq5J0p4ASkUo5B717w+uv+371Jk6Etm3DrkoygQJKRCr14IMwdKjvkfytt+C448KuSDKFAkpEKjRkiA+orCwYNQpOPz3siiSTKKBEpFyvvQa33eaHn33WXyAhUp8UUCJSRizmL4pwDvr3h5tvDrsiyUSBBpSZ3W5m082s0MyGBzlvEakfM2fCxRfD9u1w++1w331hVySZKifg+S0H+gPnA40CnreI1LFvvvG9Q2zaBFdeCQMHglnYVUmmCjSgnHNjAcysM9AmyHmLSN1avdp3YbRyJZx9NrzwAmRnh12VZLKg96CSYmY9gZ4Aubm5xGKxMMqoVH5+fiTrijKts+TF43GKioois762bs3mjjtO5Ouv9+PIIzdzxx0zmTq1KOyy9qLtq/pSfZ2FElDOuaHAUIDOnTu7vLy8MMqoVCwWI4p1RZnWWfJatGhBPB6PxPravh26d/c9lB9+OHzwQTNyc88Iu6wytH1VX6qvM13FJ5LBiov9Awf/93/hwAN9F0a5uWFXJeIpoEQylHP+IYMvvwxNm8KECX4PSiQqAj3EZ2Y5iXlmA9lm1hDY6ZzbGeRyRKT2Hn0UBg3yj8v417/gpJPCrkhkb0HvQfUFtgH3ANcmhvsGvAwRqaX/+R+45x5/CfnIkXDOOWFXJFJW0JeZPwA8EOQ8RSRY48bBLbf44UGD/P1OIlGkc1AiGWTqVB9IRUVw//3w61+HXZFIxRRQIhniP/+Bbt38k3FvugkeeijsikQqp4ASyQBLl/peIjZs8L2SP/OMujCS6FNAiaS59evhggtg2TL/PKdXXoGcUG7RF6keBZRIGtu6FS680B/e+8EP4M03oZG6cZYUoYASSVM7d8JVV8HHH8Ohh8LEidCyZdhViSRPASWShpyDnj39JeX77++7MGqj5wtIilFAiaSh++/3N+M2agTjx8Mxx4RdkUj1KaBE0sygQfDII/5ZTq+9Bj/6UdgVidSMAkokjbzyCvTp44efew66dg23HpHaUECJpIl334Vf/tIPP/ronmGRVKWAEkkDM2bApZfCjh3+ERp33RV2RSK1p4ASSXELFkCXLpCfD9dcA489pl4iJD0ooERS2MqVv
gujNWvgvPP8eacs/a+WNKFNWSRFbdzouzBatAh++EMYMwb22SfsqkSCo4ASSUEFBXDJJTBrFhx5pL/XqWnTsKsSCZYCSiTFFBXBdddBLAbf+x688w4ccEDYVYkETwElkkKcg9/8xt+Au99+vn+99u3DrkqkbiigRFLIn/8MgwfDvvv6nslPOCHsikTqjgJKJEX84x/Qr5+/Su+ll+Css8KuSKRuKaBEUsC//gW/+pUfHjwYfvazcOsRqQ8KKJGImzIFrr4aiovhwQehV6+wKxKpHwookQibPRsuuggKC/0eVL9+YVckUn8UUCIRtXix7yVi40Z/SO+pp9SFkWQWBZRIBK1d68NpxQp/McSLL/rnO4lkEgWUSMTk50O3bjB/Ppx4IrzxBjRsGHZVIvVPASUSITt2wOWXw7Rp/gbcCROgefOwqxIJhwJKJCKKi+HGG2HSJN910Tvv+K6MRDKVAkokIu6+G0aOhCZN4O23fSewIplMASUSAQMG+FeDBvD669C5c9gViYRPASUSshEj4Pe/98PPPw8//Wm49YhEhQJKJEQTJvjzTgBPPAE//3m49YhESaABZWb7m9nrZrbFzJaY2S+CnL9IOtm6NZvLL4edO/35pz59wq5IJFpyAp7f08B2IBfoCIw3s1nOubkBL0ckpW3dCt9805SiIrj+enjkkbArEokec84FMyOzJsAG4Djn3PzEuBHAd865eyp6X7NmzVynTp0CqSFI8XicFi1ahF1GStE6S05BAUybNhPnYP/9O3LccerCKBnavqovquts8uTJM5xzVV4KFOQe1FHAzl3hlDALKPPUGjPrCfQEaNCgAfF4PMAyglFUVBTJuqJM66xqO3dmsWBBU5yDrCzHIYdsZOPGYP5ITHfavqov1ddZkAHVFNhUatxGoFnpCZ1zQ4GhAJ07d3bTp08PsIxgxGIx8vLywi4jpWidVS4e9/3qbd8OTZvm0b79Rr744vOwy0oZ2r6qL6rrzJI8ZBBkQOUD+5Uatx+wOcBliKSkjRuha1f44gvo0AFatYItW7TnJFKZIK/imw/kmFnJ+99PBHSBhGS0DRv8vU1Tp0Lbtr4LowYNwq5KJPoCCyjn3BZgLPAnM2tiZj8GLgZGBLUMkVSzdi2ccw78+99w2GEwebIPKRGpWtA36vYGGgGrgZeBW3WJuWSq1avh7LPh8899v3qTJ/seykUkOYHeB+WcWw9cEuQ8RVLRwoXQpQssWABHHw3vv6+eyUWqS10diQRs2jQ49VQfTh07QiymcBKpCQWUSIDeegvy8mDNGjjvPJgyBXJzw65KJDUpoEQC4Bz8/e9wySWwbRvccAOMGwfNytwFKCLJUkCJ1NK2bdCjB/z2t/6puH/8IwwbpkvJRWor6M5iRTLKkiXws5/BZ59B48Y+mK6+OuyqRNKDAkqkhiZNgmuv9fc6ff/7/km4J5wQdlUi6UOH+ESqqbAQ7rwTLrjAh9P55/sbcRVOIsHSHpRINcyb5596+/nnkJ0NDz0Ef/iDHxaRYCmgRJJQVARPPQX33ecfNvj978NLL8Epp4RdmUj6UkCJVOHLL+Gmm3xnrwDXXefDar/SffeLSKB0DkqkAoWF0L+/7w1i6lQ4+GB44w144QWFk0h90B6USDnGj4c+feDrr/3vN98Mjz0GEXx6tkjaUkCJlLBgAfzudz6gAI45xh/OO/vscOsSyUQ6xCcCrFwJt90Gxx7rw6lZM/jb32DWLIWTSFi0ByUZLR6HAQPgiSf81XlZWb4fvYcfhoMOCrs6kcymgJKMtHYtDBoETz4JGzf6cRdfDH/+M/zgB+HWJiKeAkoyynff+b2lIUP8HhPAT37ig+nUU8OtTUT2poCSjDBtGgwcCK++Cjt3+nFdu8L998Npp4Vbm4iUTwElaWvLFnjtNXjmGfjkEz8uOxuuvBLuvhtOOinc+kSkcgooSSvOwaef+sdejBoFmzf78S1bQs+e/kq9Qw8Nt0YRSY4CStLCt9/C6NHw3HO+a6JdTjsNbrzRP6OpSZPw6hOR6lNAScpavNgfwnv1VX+OaZcDD4Trr/fBdPTRoZUnIrWkgJKU4Zy/cXbCBBg7FqZP39PWuDF06wa/+IX/qceti6Q+BZRE2oYN8O67PpQmToQVK/a0NWkC3bvDFVdAly4+pEQkfSigJFI2boQPP4TJk2HKFL+XVFS0p/3gg/2TbLt18z8VSiLpSwEloXEOlizx54+mTvWhNGsWFBfvmSYnB846y+8hdekCxx8PZuHVLCL1RwEl9cI534vDF1/4vaJp0/xrzZq9p2vQAH70Ix9KZ54JP/6x77hVRDKPAkoCt2kTzJkDs2fv/dqwoey0rVvDD38IJ58MZ5zhuxvSYTsRAQWU1FBhIXzzjX9+0q7XtGknsnYtLF1a/nv2398fouvUyQfSySdD+/Y6ZCci5VNASbk2bfI3vy5dWvbn4sV+uOS5Iq8lAPvs45+rdPzxe14nnADf+57CSESSp4DKIEVFsG4drFq192v1av9z5UpYtsyHz6ZNlc8rKwu+/3048sg9r23bvuCyy06gXTvdhyQitaeASkGFhb6PuQ0b/Gv9+sp/btjgL0ZYs6a8vZ7yNWoEbdv6fuvK+3nYYX5PqaRYbD1HHBH85xWRzBRIQJnZ7UAP4HjgZedcjyDmm4qcgx07YNs2/yooqHp461bIz/ehs3nznuGKxu3YUfP6WraE3Nw9r4MO2vv3Qw7xAbT//jocJyLhCmoPajnQHzgfaFSdNxYWwvz5/vBT6Vdxcfnjq2qrqn3Hjj2v7dv3/rlr+LvvjmXQoKqn2zW8K3AKCpLfS6mpnBx/6XXLlj5IWrbce7i8ca1a+T7qSu/1iIhEVSAB5ZwbC2BmnYE21XnvnDnz6NAhr9TYK4HewFagaznv6pF4rQUuL6f9VuAqYClwXTntdwIXAvOAXuW09wXOBWYCfcppfxg4DfgYuK9Ma3b2QBo37khW1rsUFPQnK4vdr+xsOP74ZznggA6sW/cW8+Y9TnY2e71uvXUE7dodyowZo5g4cUiZ9rFjX6N169YMHz6c4cOHs337nvNJAG+//TaNGzdm8ODB/P3vo8vUF4vFABgwYADjxo3bq61Ro0ZMmDABgIceeoj33ntvr/ZWrVoxZswYAO69916mTp26uy0ej3PccccxcuRIAPr06cPMmTP3ev9RRx3F0KFDAejZsyfz58/fq71jx44MHDgQgGuvvZZly5bt1X7qqafyyCOPAHDZZZexbt26vdrPOecc+vXrB0CXLl3Ytm3bXu3du3fnrrvuAiAvL6/Murnyyivp3bs3W7dupWvXsttejx496NGjB2vXruXyy8tue7feeitXXXUVS5cu5brrym57d955JxdeeCFbt27l66+/LlND3759Offcc5k5cyZ9+pTd9h5++GFOO+00Pv74Y+67r+y2N3DgQDp27Mi7775L//79y7Q/++yzdOjQgbfeeovHH3+8TPuIESM49NBDGTVqFEOGDCnT/tpre297pZXc9kaPDnbbKy4uZsqUKUDZbQ+gTZs22vZKbXvxeJwW
LVoAe7a9efPm0atX2e+9+tz2khXKOSgz6wn09L81YZ99ihOHkxxm0KxZAS1bbga2sGzZzsR79hxyOuCAfA48cB1FReuYP3/H7vFmDoC2beMcfPAKCgtXMWvWdsxciWngmGPW0K7dYrZsWcbUqQW723fVcPrpSzj00M/YvPkbJk3akmhzu39eeumXdOjQgEWL5jJmzKbd47Oy/M9f/3o6RxwRZ8aMWYwYES/z+W+++VPatl3Bxx/PJh4v296mzVRatVpITs5ciovjFBfvfVjvo48+onnz5nz11Vflvn/KlCk0bNiQ+fPnl9u+60ti4cKFZdq3bdu2u33RokVl2ouLi3e3f/vtt3u1FxUVsWrVqt3ty5YtK/P+5cuX725fvnx5mfZly5btbl+1alWZ9m+//XZ3+5o1a9hU6mqORYsW7W5fv349hYWFe7UvXLhwd3t562b+/PlLNR+DAAAF6UlEQVTEYjEKCgrKbf/qq6+IxWJs3Lix3Pa5c+cSi8VYvXp1ue2zZ8+mWbNmbN68GedcmWlmzZpFTk4OX3/9dbnv/+yzz9i+fTtz5swpt3369OnE43FmzZpVbvunn37KihUrmD27/G1v6tSpLFy4kLlz55bbHua217hx4wq3PYAGDRpo2yu17RUVFe0e3rXtlbfuoH63vWSZcy7piaucmVl/oE11zkF17tzZTS/ZLXVExGKxcv/KkYppnSUvLy+PeDxe5q98qZi2r+qL6jozsxnOuc5VTZeVxIxiZuYqeH0YTLkiIiJ7q/IQn3Murx7qEBER2UtQl5nnJOaVDWSbWUNgp3NuZxDzFxGRzFPlIb4k9QW2AfcA1yaG+wY0bxERyUBBXWb+APBAEPMSERGB4PagREREAqWAEhGRSFJAiYhIJCmgREQkkhRQIiISSQooERGJJAWUiIhEkgJKREQiSQElIiKRpIASEZFIUkCJiEgkKaBERCSSFFAiIhJJCigREYkkBZSIiESSAkpERCJJASUiIpGkgBIRkUhSQImISCQpoEREJJIUUCIiEkkKKBERiSQFlIiIRJICSkREIkkBJSIikaSAEhGRSFJAiYhIJCmgREQkkhRQIiISSQooERGJJAWUiIhEkgJKREQiqdYBZWb7mtkwM1tiZpvNbKaZdQmiOBERyVxB7EHlAEuBs4DmQF9gtJm1D2DeIiKSoXJqOwPn3BbggRKjxpnZIqATsLi28xcRkcxU64AqzcxygaOAuZVM0xPoCZCbm0ssFgu6jFrLz8+PZF1RpnWWvHg8TlFRkdZXNWj7qr5UX2fmnAtuZmYNgAnAQudcr2Te07lzZzd9+vTAaghKLBYjLy8v7DJSitZZ8vLy8ojH48ycOTPsUlKGtq/qi+o6M7MZzrnOVU1X5TkoM4uZmavg9WGJ6bKAEcB24PZaVS8iIhmvykN8zrm8qqYxMwOGAblAV+fcjtqXJiIimSyoc1BDgGOAc51z2wKap4iIZLAg7oNqB/QCOgIrzSw/8bqm1tWJiEjGCuIy8yWABVCLiIjIburqSEREIkkBJSIikRTofVA1KsBsDbAk1CLK1xpYG3YRKUbrrHq0vqpH66v6orrO2jnnDqhqotADKqrMbHoyN5LJHlpn1aP1VT1aX9WX6utMh/hERCSSFFAiIhJJCqiKDQ27gBSkdVY9Wl/Vo/VVfSm9znQOSkREIkl7UCIiEkkKKBERiSQFlIiIRJICKklmdqSZFZjZyLBriSoz29fMhpnZEjPbbGYzzaxL2HVFjZntb2avm9mWxLr6Rdg1RZW2qdpJ9e8tBVTyngb+HXYREZcDLAXOApoDfYHRZtY+xJqi6Gn8gz1zgWuAIWb2g3BLiixtU7WT0t9bCqgkmNnVQBx4L+xaosw5t8U594BzbrFzrtg5Nw5YBHQKu7aoMLMmwGVAP+dcvnPuQ+BN4LpwK4smbVM1lw7fWwqoKpjZfsCfgDvCriXVmFkucBQwN+xaIuQoYKdzbn6JcbMA7UElQdtUctLle0sBVbWHgGHOuWVhF5JKzKwB8CLwvHPuq7DriZCmwKZS4zYCzUKoJaVom6qWtPjeyuiAMrOYmbkKXh+aWUfgXOCJsGuNgqrWV4npsoAR+PMst4dWcDTlA/uVGrcfsDmEWlKGtqnkpdP3Vq2fqJvKnHN5lbWbWR+gPfCtmYH/6zfbzI51zp1U5wVGTFXrC8D8ihqGvwCgq3NuR13XlWLmAzlmdqRzbkFi3InokFWFtE1VWx5p8r2lro4qYWaN2fuv3bvw//C3OufWhFJUxJnZM0BH4FznXH7Y9USRmb0COOBm/Lp6GzjNOaeQKoe2qepJp++tjN6DqopzbiuwddfvZpYPFKTaP3J9MbN2QC+gEFiZ+OsNoJdz7sXQCoue3sBzwGpgHf6LQ+FUDm1T1ZdO31vagxIRkUjK6IskREQkuhRQIiISSQooERGJJAWUiIhEkgJKREQiSQElIiKRpIASEZFIUkCJiEgk/T/XSE/diHEg1QAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "plt.plot(z, elu(z), \"b-\", linewidth=2)\n", "plt.plot([-5, 5], [0, 0], 'k-')\n", "plt.plot([-5, 5], [-1, -1], 'k--')\n", "plt.plot([0, 0], [-2.2, 3.2], 'k-')\n", "plt.grid(True)\n", "plt.title(r\"ELU activation function ($\\alpha=1$)\", fontsize=14)\n", "plt.axis([-5, 5, -2.2, 3.2])\n", "\n", "save_fig(\"elu_plot\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.elu, name=\"hidden1\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### SELU" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions." 
] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "from scipy.special import erfc\n", "\n", "# alpha and scale to self normalize with mean 0 and standard deviation 1\n", "# (see equation 14 in the paper):\n", "alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)\n", "scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "def selu(z, scale=scale_0_1, alpha=alpha_0_1):\n", " return scale * elu(z, alpha)" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Saving figure selu_plot\n" ] }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAagAAAEYCAYAAAAJeGK1AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3Xt8FPW5x/HPEwIIiEZBchTUeMN7RYhtxfaYVqyKl9qD1VpAsVoQqxaRVkUUDnCwtVTRFrAIgoJWqeIN0VZpo7WIFSTeRcWCICoXXTHhEhJ+54/fxixLLptkNjO7+b5fr3llmZnMPDtM9rsz++yMOecQERGJmpywCxAREamJAkpERCJJASUiIpGkgBIRkUhSQImISCQpoEREJJIUUCJ1MLOVZjaiGdYzxszebIb15JjZn8xso5k5MytK9zrrqWeWmc0PswaJLgWUpMTM9jGzKfEX7G1m9pmZLTSzUxPmKY6/6CUPDybM48zsvFrWMcjMSmuZVuvvBaGOgDgBmBLgegriz6UwadJE4OSg1lOHvsAlwNnAvsCiZlgnZlYUf96dkyb9EhjQHDVI5skNuwDJGI8A7YFLgQ+ALvgX1E5J880ERiaN25L26tLEObe+mdZTCtQYzgE7FPjEOdcswVQf59yXYdcg0aUjKKmXmeUB3wWud84tdM6tcs694pyb6Jx7MGn2zc65T5OGtL8ImdnpZvZPM/vCzD43s7+a2ZFJ8+xnZvfHT29tNrMSM/uemQ0CRgNHJxz1DYr/zten+MzsATN7JGmZOWa22syGp1jHf+I/X4mvpzj+ezsdwcWXe1N82dvM7A0z+2HC9KojsX5m9mz8+bydeERbwzaaBdwOHBD/3ZXx8cVm9sfkeRNPvcXnmWJmE8xsg5mtM7OJZpaTME+b+PRV8Zo/NLOrzawA+Ed8tvXxdc+qZT1tzWxS/Ah9q5ktNrPvJEyvOhI7xcxejj/vJWbWs7bnLZlLASWpqHp3f46Z7RZ2MbXoAEwCvgkUAV8CT5pZGwAz6wA8DxQA5wLHAmPjv/sQ8HtgOf60177xccnmAGea2Z4J406Oz//nVOqIjwc4Pf57/1PL8/kl8CvgunitjwLzzKxH0nz/B9wJHAe8AjxoZrvXscyxwJr4uk+oZb7a9AcqgN7AlcAw4IKE6fcCFwHDgSPxR9sxYDXQLz7P0fF1/7KWddwaX+bPgOOBN4BnzGzfpPluAa4HegIbgfvNzBr4fCTqnHMaNNQ74F9gPge2Ai/hPzP5VtI8xUA51YFWNVyRMI8DzqtlHYOA0lqm1fp7tczfAagEvhP/98+Br4DOtcw/BnizhvErgRHxx7nAZ8ClCdOnA39rQB0F8edSWNf6gY+Bm2vYvnOSljMkYXrX+Ljv1FHPCGBlDcv9Y9K4WcD8pHleSprnWWB6/PFh8XWfXst6i+LTO9e2nvi2KgcuSpjeClgBjE9azmkJ85wUH9ct7L8TDcEOOoKSlDjnHgH2w3+4/jT+XfRiM0v+vOkhoEfScH+66zOzQ+Kn4FaY2SZ8kOQAB8RnOR543Tm3obHrcM5V4J9f//g62+KDe04D6kjlueyB39b/Spr0InBU0rjXEx6vjf/skuq6Guj1pH+vTVjX8cAOqk/lNcYhQGsSnrdzrhL/hijM5y0hUZOEpMw5txX/rvlZYKyZTQfGmNlE51x5fLYvnXMfNHIVm4B2ZtbaObe9amT8MzDwp8tqMx9/6moI/uijAngbaFPH7zTGHOAlM+sKfCu+/HnNWEfy7Qe+3k7OORc/y9XQN547gOTTY61rmG970r9dI9bVWLU+74RpesOdZfQfKk3xNv5NTlCfSy3H75PHJ43vmTB9F2bWCTgCmOCce8459w7QkZ3fgC0DvlFDm3OVcvzppDo55/6N72K8EH8k9bjzHXip1lEV5LWuyzm3CX9UcFLSpO/gt3nQ1uM/F0p0XAOXUYL/v/teLdPrfd74U3nlJDxvM2sFnEh6nrdEnI6gpF7xF96/APfgT618BRQCvwYWxl9Qq7Q3s/9KWkS5c+7zhH8X1PBh/4fOubfM7G/A9HhX3AqgO3AHMNc591EtJX4BbAB+bmar8Z/F/A5/9FLlAfyH6o+b2fX4o5tjgK+cc//Af9Z0YLwb7KP4+G21rO9+4DL850CJTQ6p1LEO33Z/WryLbqurucvxd/ij1PeBpfjvCn2X6rAO0t+BSWZ2Dv5NwBBgf/w2SYlz7j0zm4v/v/sl8CrQDShwzs0GVuGPdM40syeBLVXBnrCMMjObCvzWzDbgOx6vAfIJ8LtokkHC/hBMQ/QHoC0wAd8l9gWwGXgfuA3YO2G+YvyLUPLwYsI8NU13wFnx6Xn4QPogvp73gN8Cu9dT4/eBN/FNHG8Cp+EbNAYlzNMN/xlSLL7sZUBRwnN8OP78XNXvkdAkkbCcg+PzfAbkNqKOy/AhWAkUx8eNYecmiRzgJnwHXDm+m+3chOkF1NxsUWczCTU3SbQGJuPDdQPwv9TcJFFfI0VbfBfex8A2/BuMKxOm3wR8gj+lOKuOZUyKb9ttwGISmj6oodmitm2hIfMHi/8Hi4iIRIo+gxIRkUhSQImISCQpoEREJJIUUCIiEkmht5l37tzZFRQUhF3
GLsrKyujQoUPYZWQUbbPULV++nMrKSo46KvkCCVKbqO5f5eXwzjtQUQGdOkGUXs6ius2WLl26wTm3T33zhR5QBQUFLFmyJOwydlFcXExRUVHYZWQUbbPUFRUVEYvFIrnvR1UU969Nm+Ckk3w4fe978Mwz0Cboa5c0QRS3GYCZrUplPp3iExFphIoKuOACePNNOOIIeOSRaIVTNlBAiYg0kHNw9dX+iKlzZ3jqKdhrr7Cryj4KKBGRBpo0CaZOhbZt4fHH4eCDw64oOymgREQa4LHH4Npr/eN774XevcOtJ5sFGlBmNsfMPjGzTWb2npldFuTyRUTCtHQp9O/vT/GNH+8/g5L0CfoI6hb81Yv3AM4BxptZr4DXISLS7FavhrPPhs2b4eKLYWTyrTolcIEGlHPuLVd9i4Kqq1QfEuQ6RESa26ZNcOaZ8MknUFQE06aBJd/iUQIX+PegzGwKMAhoh7+dwYIa5hkMDAbIz8+nuLg46DKarLS0NJJ1RZm2WepisRiVlZXaXg0Q1v5VWWmMHHkMb7zRif3338zw4a+yaFFF/b8YAZn+N5mW220k3AWzCPitS7h9d7LCwkIXxS8rRvULblGmbZa6qi/qlpSUhF1Kxghj/3IOrrwSpkzx7eSLF8MhGXROKKp/k2a21DlXWN98aenic85VOudexN8gbmg61iEikm533OHDqU0b372XSeGUDdLdZp6LPoMSkQz0+OMwfLh/PGuWv6SRNK/AAsrMupjZT8xsdzNrZWanARcCC4Nah4hIc1i6FH76U3+Kb9w4uPDCsCtqmYJsknD403l34YNvFTDMOfdEgOsQEUmr5HbyG28Mu6KWK7CAcs6tB04OankiIs3tq6/grLN8O/nJJ6udPGy61JGICP7q5D/5Cbz+OnTvDvPm6erkYVNAiUiL5xwMGwYLFvibDi5YAHvvHXZVooASkRbvzjth8mS1k0eNAkpEWrQnn4RrrvGPZ86E73wn3HqkmgJKRFqsV1/1nzs5B2PH+tZyiQ4FlIi0SGvWVLeTX3QRjBoVdkWSTAElIi1OVTv52rXw3/+tdvKoUkCJSItSUeGvDPHaa3DYYb6dvG3bsKuSmiigRKRFGT4cnnqqup28U6ewK5LaKKBEpMW48074wx+q28kPPTTsiqQuCigRaRHmz69uJ7/nHrWTZwIFlIhkvWXLfDv5jh0wZgz07x92RZIKBZSIZLU1a3zHXlkZDBgAN98cdkWSKgWUiGSt0lL/XaeqdvLp09VOnkkUUCKSlSorfTt5SYnayTOVAkpEstLw4b4xYu+9q9vKJbMooEQk6/zhD76lvKqd/LDDwq5IGkMBJSJZ5amn/L2dAGbMgO9+N9x6pPEUUCKSNUpK4IILfDv56NG+a08ylwJKRLLCxx9Xt5P37+8DSjKbAkpEMl5VO/nHH/srRMyYoXbybKCAEpGMVlnpbzS4bJm/tt5jj6mdPFsooEQko117rb9tu9rJs48CSkQy1uTJcMcd0Lo1PPoodO8edkUSJAWUiGSkBQvg6qv94xkz/KWMJLsooEQk47z2WnU7+c03w8CBYVck6aCAEpGMsnatbycvLfXNEWPGhF2RpIsCSkQyRlmZbydfswZOOknt5NlOASUiGaGqnfzVV+GQQ3w7+W67hV2VpJMCSkQywl13HcITT8Bee/kGic6dw65I0k0BJSKRN2UKPPzw/monb2EUUCISaU8/DVdd5R9Pnw4nnxxuPdJ8AgsoM2trZjPMbJWZfWVmJWZ2RlDLF5GW57XX4PzzfTv5wIErueiisCuS5hTkEVQusBo4GdgTGAXMNbOCANchIi1EYjv5hRfCJZesDLskaWaBBZRzrsw5N8Y5t9I5t8M5Nx/4D9ArqHWISMuQ3E5+zz1qJ2+JctO1YDPLB7oDb9UwbTAwGCA/P5/i4uJ0ldFopaWlkawryrTNUheLxaisrNT2qkFlJYwefQyvvtqZ/fbbwogRr7J48XbtX42Q6dvMnHPBL9SsNfA0sMI5N6SueQsLC92SJUsCr6GpiouLKSoqCruMjKJtlrqioiJisRglJSVhlxI5114Lt93m28lfegkOP9yP1/7VcFHdZma21DlXWN98gXfxmVkOMBsoB64Mevkikr2mTvXh1Lo1zJtXHU7SMgV6is/MDJgB5AN9nXPbg1y+iGSvZ56pbie/+26I4Bt/aWZBfwY1FTgS6OOc2xLwskUkS73xhm8nr6yEG2+Eiy8OuyKJgiC/B3UgMAToAXxqZqXxoX9Q6xCR7PPJJ3DmmfDVV/4WGmPHhl2RREVgR1DOuVWAGkFFJGVV7eSrV0Pv3jBrFuTo+jYSp11BREJRWQkDBsDSpXDwwbo6uexKASUiobjuOh9KeXnw1FOwzz5hVyRRo4ASkWZ3113w+99Dbi488ggccUTYFUkUKaBEpFn99a9wZfwbktOmwfe/H249El0KKBFpNm+8AT/+sf/8aeRIuOSSsCuSKFNAiUiz+PRTf3XyqnbycePCrkiiTgElImm3eTOccw589BF8+9swc6bayaV+2kVEJK127PDt5K+8AgcdBI8/Du3ahV2VZAIFlIik1XXXwaOPwp57+nbyLl3CrkgyhQJKRNJm2jSYONG3k8+bB0ceGXZFkkkUUCKSFn/7G1xxhX/8pz+pnVwaTgElIoF7883qdvIbboCf/SzsiiQTKaBEJFCffuqvTr5pkw+p8ePDrkgylQJKRAKT3E5+771qJ5fG064jIoHYsQMuusi3kxcUqJ1cmk4BJSKBuOEGf+FXtZNLUBRQItJkd98Nt95afXXyo44KuyLJBgooEWmSZ5+FoUP947vuglNOCbceyR4KKBFptLfegvPO8+3k110Hl14adkWSTRRQItIon31W3U5+3nkwYULYFUm2UUCJSINVtZOvWgXf+hbcd5/aySV42qVEpEGq2sn//W+1k0t6KaBEpEFGjvSdenvs4dvJ8/PDrkiylQJKRFI2fTr89rdqJ5fmoYASkZQ89xxcfrl/PHUq9OkTbj2S/RRQIlKvt9+ubif/9a/hssvCrkhaAgWUiNSpqp38yy+hXz+45ZawK5KWQgElIrXasgV++ENYuRK++U21k0vz0q4mIjWqaid/+WU48EB44glo3z7sqqQlUUCJSI1uvBEefljt5BIeBZSI7OKee+A3v4FWrXxIHX102BVJS6SAEpGdLFwIQ4b4x1OnwqmnhluPtFwKKBH52ttv+069igr41a/g5z8PuyJpyQINKDO70syWmNk2M5sV5LJFJL3WratuJ/+f//Gn+ETClBvw8tYC44HTAF0+UiRDJLaTn3ACzJ6tdnIJX6AB5ZybB2BmhUC3IJctIumxYwcMGgSLF8MBB6idXKIj6COolJjZYGAwQH5+PsXFxWGUUafS0tJI1hVl2mapi8ViVFZWRmJ73X33QcydeyAdOlQwZswy3n23jHffDbuqXWn/arhM32ahBJRzbhowDaCwsNAVFRWFUUadiouLiWJdUaZtlrq8vDxisVjo22vmTHjgAd9OPm9eLj/4wQmh1lMX7V8Nl+nbTGeZRVqov/
8dBg/2jydPhh/8INx6RJIpoERaoHfeqW4nHzGi+ntPIlES6Ck+M8uNL7MV0MrMdgMqnHMVQa5HRBqvqp08FoMf/cjfgFAkioI+ghoFbAGuBwbEH48KeB0i0khbt8K558J//gOFhTBnjtrJJbqCbjMfA4wJcpkiEoyqdvKXXoL991c7uUSf3juJtBA33wwPPQQdO/qrk++7b9gVidRNASXSAsycCf/3f76d/C9/gWOPDbsikfopoESy3D/+Ud1O/sc/wmmnhVuPSKoUUCJZ7N13/YVfKypg+HC4/PKwKxJJnQJKJEutX1/dTn7uuXDrrWFXJNIwCiiRLFTVTv7hh9Crl28nb9Uq7KpEGkYBJZJlduyASy6BRYt8O/mTT0KHDmFXJdJwCiiRLDN6NDz4oG8nnz9f7eSSuRRQIlnk3nth/Hh/Om/uXPjGN8KuSKTxFFAiWaK4GH7+c//4D3+A008PtRyRJlNAiWSB5ct9O/n27XDNNTB0aNgViTSdAkokw23Y4NvJv/gCzjkHfve7sCsSCYYCSiSDVbWTr1gBPXtW3x1XJBsooEQylHPws5/Bv/4F3bqpnVyyjwJKJEONHg1//jPsvru/Ovl++4VdkUiwFFAiGei++2DcOH+zwYceUju5ZCcFlEiGef55uOwy//jOO6Fv33DrEUkXBZRIBlm+HH70I99OPmwY/OIXYVckkj4KKJEMkdxOPnFi2BWJpJcCSiQDbNvmj5xWrIDjj4f771c7uWQ/BZRIxFW1k7/4InTt6tvJd9897KpE0k8BJRJxY8b4L+BWtZN37Rp2RSLNQwElEmGzZ8PYsdXt5McdF3ZFIs1HASUSUS+8AJde6h/fcYfayaXlUUCJRND771e3k199NVx5ZdgViTQ/BZRIxGzc6I+WPv8czj4bbrst7IpEwqGAEomQbdv81ck/+MC3k+vq5NKSKaBEIsI5fwkjtZOLeAookYgYOxbmzPG3zJg/X+3kIgookQiYM8d/3yknBx58EHr0CLsikfApoERC9s9/VreTT5oEZ50Vbj0iUaGAEgnR++/7pojycrjqKj+IiBdoQJnZ3mb2qJmVmdkqM/tpkMsXySYVFcaZZ/p28jPPhNtvD7sikWjJDXh5k4FyIB/oATxlZq85594KeD0iGc05WLmyA2Vl/vOmBx9UO7lIMnPOBbMgsw7AF8Axzrn34uNmAx87566v7fc6duzoevXqFUgNQYrFYuTl5YVdRkbRNkvdyy+XsHUrtGnTg549oW3bsCuKPu1fDRfVbfb8888vdc4V1jdfkEdQ3YGKqnCKew04OXlGMxsMDAZo3bo1sVgswDKCUVlZGcm6okzbLDXbtuWwdat/3LVrGVu2bGfLlnBrygTavxou07dZkAG1O7ApadyXQMfkGZ1z04BpAIWFhW7JkiUBlhGM4uJiioqKwi4jo2ib1c856NMH3n23iLy8cj78cFHYJWUM7V8NF9VtZmYpzRdkk0QpsEfSuD2ArwJch0hGmz8f/v53yM2Frl112CRSlyAD6j0g18wOSxh3HKAGCRGgshKuj38ae+CBkJsbzOe/ItkqsIByzpUB84CxZtbBzE4CfgjMDmodIpnsvvvg7behoAD22y/sakSiL+gv6l4BtAPWAX8GhqrFXATKyuDmm/3j8eP9JY1EpG6B/pk45z53zp3rnOvgnDvAOfdAkMsXyVS33gpr1kDPnnDhhWFXI5IZ9D5OJM1WrfIBBXDnnTp6EkmV/lRE0uxXv4KtW/2R00knhV2NSOZQQImkUXEx/OUv0L599VGUiKRGASWSJpWV8Mtf+sfXXw/duoVbj0imUUCJpMndd8Prr/vvPI0YEXY1IplHASWSBp9+Cjfc4B9PnAjt2oVbj0gmUkCJpMFVV0EsBmecAf36hV2NSGZSQIkE7LHH4OGHoUMHmDoVUrwupogkUUCJBOjLL+EXv/CPJ0zwnz+JSOMooEQCdP31sHYtfPvb1UElIo2jgBIJyAsvwF13QevWMH26buEu0lQKKJEAbNoEF1/sH99wAxx9dLj1iGQDBZRIAK66Clau9BeDvfHGsKsRyQ4KKJEmeughf6+ndu3g/vuhTZuwKxLJDgookSb46CO4/HL/+Lbb4Igjwq1HJJsooEQaqbISLrrIfyH37LNhyJCwKxLJLgookUaaMAGefx7y82HGDH0hVyRoCiiRRnj6aRg92ofSfffBPvuEXZFI9skNuwCRTPPhh9C/PzgH48bBD34QdkUi2UlHUCINsHmzv/jrF1/4z51Gjgy7IpHspYASSZFzMHQolJTAoYf6U3s5+gsSSRv9eYmk6PbbfSi1bw/z5kFeXtgViWQ3BZRICh55pPquuDNnwrHHhluPSEuggBKpx+LFMGCAP8X3m9/A+eeHXZFIy6CAEqnDihVwzjmwdSsMHgy//nXYFYm0HAookVqsWwd9+8L69XD66TB5sr6MK9KcFFAiNfj8c//9pvfeg+OO8xeEzdW3BkWalQJKJMmmTXDGGfDaa9C9O/z1r7DHHmFXJdLyKKBEEpSVwVlnwb//DQcdBAsX+mvtiUjzU0CJxJWVwQ9/CP/8J3Tt6sOpW7ewqxJpuXRWXQR/y4yzzoJ//Qu6dPHhdNBBYVcl0rLpCEpavPXr4Xvf8+G0//7+COrww8OuSkR0BCUt2po1cOqp8O67/vp6CxfCAQeEXZWIQEBHUGZ2pZktMbNtZjYriGWKpNvrr0Pv3j6cjj3WHzkpnESiI6hTfGuB8cA9AS1PJK0WLICTToLVq31IFRfDf/1X2FWJSKJAAso5N8859xiwMYjliaTT5Mn+Xk6lpXDhhf603t57h12ViCQL5TMoMxsMDAbIz8+nuLg4jDLqVFpaGsm6oizq26y83Jg8+VCeeKIrABddtJJBg1ayeHHz1xKLxaisrIz09oqaqO9fUZTp2yyUgHLOTQOmARQWFrqioqIwyqhTcXExUawryqK8zVatgh//GF55Bdq0genTYeDAAqAglHry8vKIxWKR3V5RFOX9K6oyfZvVe4rPzIrNzNUyvNgcRYo0xTPPQM+ePpwOPNC3kw8cGHZVIlKfeo+gnHNFzVCHSODKy+Hmm+HWW/29nM44A2bPhk6dwq5MRFIRyCk+M8uNL6sV0MrMdgMqnHMVQSxfpKHeegv69/cXfM3Jgf/9X7jxRv9YRDJDUH+uo4AtwPXAgPjjUQEtWyRlO3bApEnQq5cPp4MP9t9vuukmhZNIpgnkCMo5NwYYE8SyRBrrzTdhyBBYtMj/+9JL4fbboWPHcOsSkcbRe0rJeFu3wqhRcPzxPpz23Rcee8x36imcRDKXrsUnGcs5mD8fhg+HDz7w4y6/HG65BfLywq1NRJpOASUZ6Y034Jpr/FUgAI46CqZN85cvEpHsoFN8klHWrIHBg6FHDx9Oe+0Fd9wBJSUKJ5FsoyMoyQiffeZP3d11F2zbBq1awVVXwejR+l6TSLZSQEmkffKJ78SbPBk2b/bjzj/ff6/piCPCrU1E0ksBJZH0/vvwu9/Bvff6K0KAvwL5uHFw3HHh1iYizUMBJZGxYwc8+
yxMmQJPPum79MygXz+47jo44YSwKxSR5qSAktBt3AgzZ/rPl1as8ONat4aLL4YRI+Dww8OtT0TCoYCSUFRWwvPPw6xZMHeub3wAf8v1IUP8VSDy80MtUURCpoCSZuMcLFsG998PDz4Ia9f68Wb+SuNXXOF/tmoVbp0iEg0KKEkr5/yXah97DB54AJYvr5528MHw05/CJZf4xyIiiRRQErjycn/67okn/PDRR9XT9tkHLrjA3wrjW9/yR08iIjVRQEmTOeevhffEE/vxxz/6TrxNm6qnd+niW8T79YM+fXwDhIhIfRRQ0iirV8MLL8Bzz/lLDq1eDdD96+nHHAPnnOOD6Zvf1L2YRKThFFBSr/Jyf627RYv88NJL/pp4iTp1gmOOWceFF3bh1FP1mZKINJ0CSnaydau/8d+rr/ph2TJ/Z9qqNvAqeXlw4olwyil++MY34IUX3qaoqEs4hYtI1lFAtVDl5f5yQu+8A2+/XT288w5UVOw6/xFHQO/efjjxRP9vnbYTkXRSQGWxigr/2dCHH1YPy5f7IPrgA/9l2WQ5OXDkkdCzZ/XQo4duACgizU8BlaGc851yH3/sPw/6+GM/JAbSqlU1hxD49u5DDvFhdNRR/ueRR/rmhg4dmve5iIjURAEVMVu2wPr1sG6d/5k4rF27cyCVldW/vK5dfcNC1XDooT6QDj8c2rVL//MREWksBVTAnPMNBV9+CbFYasPGjdUhlEroVGnfHrp18yFUNXTrVh1GBQWw225pe6oiImnVYgJqxw4fHFu3+iHxcU3jli3LZ/lyf0RTWuqDI/FnbY/LympuMkhVmzb+agtVQ5cu1Y/33XfnMNpzT12JQUSyV+gB9ckncNNN/kV9+/bqn4mPGztu27bq0Km66V3qjmz0c8rN9U0FNQ177VXzuKow6thRoSMiAhEIqLVrlzN+fFHS2POBK4DNQN8afmtQfNgAnFfD9KHABcBqYODXY818l1rHjtey555nk5OznHXrhpCTw07D0UePom3bY+jY8VNefnkYrVr5K2zn5Pif/ftPoGfP3qxcuYiZM0d+Pb1quOOOSfTo0YPnnnuO8ePHs3179Sk8gD/96U8cfvjhPPnkk9x66+93qX727Nnsv//+PPTQQ0ydOnWX6Q8//DCdO3dm1qxZzJo1a5fpCxYsoH379kyZMoW5c+fuMr24uBiAiRMnMn/+/J2mtWvXjqeffhqAcePGsXDhwp2md+rUiUceeQSAG264gZdeeunrabFYjGOOOYY5c+YAMGzYMEpKSnb6/e7duzNt2jQABg8ezHvvvbfT9B49ejBp0iQABgwYwJqkbwSfeOKJ3HLLLQD069ePjRs37jT9lFNO4aabbgICTsdMAAAFTElEQVTgjDPOYMuWLTtNP+ussxgxYgQARUVFu2yb888/nyuuuILNmzfTt++u+96gQYMYNGgQGzZs4Lzzdt33hg4dygUXXMDq1asZOHDgLtOvvfZazj77bDZv3swHH3ywSw2jRo2iT58+lJSUMGzYsF1+f8KECfTu3ZtFixYxcuTIXaZPmrTzvpcscd/7/e8za9/bsWMHL7zwArDrvgfQrVs37XtJ+14sFiMv3oJbte8tX76cIUOG7PL7zbnvpSr0gGrTBvbbz4dH1dCrF3z/+/603J137jzNDE47Dfr29afTRo/eeVpODgwYAOeeCxs2wNVX+3GJRyXXXusvwbN8ub/3ULJRoyA3913y8vKo4f+JPn3894EWLYKHH07fthERacnMORdqAYWFhW7JkiWh1lCT4uLiGt/lSO20zVJXVFRELBbb5V2+1E77V8NFdZuZ2VLnXGF98+laACIiEkkKKBERiSQFlIiIRJICSkREIkkBJSIikdTkgDKztmY2w8xWmdlXZlZiZmcEUZyIiLRcQRxB5eK/EXsysCcwCphrZgUBLFtERFqoJn9R1zlXBoxJGDXfzP4D9AJWNnX5IiLSMgV+JQkzywe6A2/VMc9gYDBAfn7+15c/iZLS0tJI1hVl2mapi8ViVFZWans1gPavhsv0bRbolSTMrDXwNLDCOVfDRYR2pStJZA9ts9TpShINp/2r4aK6zQK7koSZFZuZq2V4MWG+HGA2UA5c2aTqRUSkxav3FJ9zrqi+eczMgBlAPtDXObe96aWJiEhLFtRnUFPxN1Dq45zbUt/MIiIi9Qnie1AHAkOAHsCnZlYaH/o3uToREWmxgmgzXwXoHrAiIhIoXepIREQiSQElIiKRFPoddc1sPbAq1CJq1hnYEHYRGUbbrGG0vRpG26vhorrNDnTO7VPfTKEHVFSZ2ZJUvkgm1bTNGkbbq2G0vRou07eZTvGJiEgkKaBERCSSFFC1mxZ2ARlI26xhtL0aRtur4TJ6m+kzKBERiSQdQYmISCQpoEREJJIUUCIiEkkKqBSZ2WFmttXM5oRdS1SZWVszm2Fmq8zsKzMrMbMzwq4rasxsbzN71MzK4tvqp2HXFFXap5om01+3FFCpmwy8EnYREZcLrAZOBvYERgFzzawgxJqiaDL+xp75QH9gqpkdHW5JkaV9qmky+nVLAZUCM/sJEAMWhl1LlDnnypxzY5xzK51zO5xz84H/AL3Cri0qzKwD0A+4yTlX6px7EXgCGBhuZdGkfarxsuF1SwFVDzPbAxgLDA+7lkxjZvlAd+CtsGuJkO5AhXPuvYRxrwE6gkqB9qnUZMvrlgKqfuOAGc65NWEXkknMrDVwP3Cvc+7dsOuJkN2BTUnjvgQ6hlBLRtE+1SBZ8brVogPKzIrNzNUyvGhmPYA+wO1h1xoF9W2vhPlygNn4z1muDK3gaCoF9kgatwfwVQi1ZAztU6nLptetJt9RN5M554rqmm5mw4AC4CMzA//ut5WZHeWc65n2AiOmvu0FYH5DzcA3APR1zm1Pd10Z5j0g18wOc869Hx93HDplVSvtUw1WRJa8bulSR3Uws/bs/G53BP4/fqhzbn0oRUWcmd0F9AD6OOdKw64niszsQcABl+G31QKgt3NOIVUD7VMNk02vWy36CKo+zrnNwOaqf5tZKbA10/6Tm4uZHQgMAbYBn8bfvQEMcc7dH1ph0XMFcA+wDtiIf+FQONVA+1TDZdPrlo6gREQkklp0k4SIiESXAkpERCJJASUiIpGkgBIRkUhSQImISCQpoEREJJIUUCIiEkkKKBERiaT/B9c9MCs0b4SXAAAAAElFTkSuQmCC\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "plt.plot(z, selu(z), \"b-\", linewidth=2)\n", "plt.plot([-5, 5], [0, 0], 'k-')\n", "plt.plot([-5, 5], [-1.758, -1.758], 'k--')\n", "plt.plot([0, 0], [-2.2, 3.2], 'k-')\n", "plt.grid(True)\n", "plt.title(r\"SELU activation function\", fontsize=14)\n", "plt.axis([-5, 5, -2.2, 3.2])\n", "\n", "save_fig(\"selu_plot\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Layer 0: mean -0.00, std deviation 1.00\n", "Layer 100: mean 0.02, std deviation 0.96\n", "Layer 200: mean 0.01, std deviation 0.90\n", "Layer 300: mean -0.02, std deviation 0.92\n", "Layer 400: mean 0.05, std deviation 0.89\n", "Layer 500: mean 0.01, std deviation 0.93\n", "Layer 600: mean 0.02, std deviation 0.92\n", "Layer 700: mean -0.02, std deviation 0.90\n", "Layer 800: mean 0.05, std deviation 0.83\n", "Layer 900: mean 0.02, std deviation 1.00\n" ] } ], "source": [ "np.random.seed(42)\n", "Z = np.random.normal(size=(500, 100)) # standardized inputs\n", "for layer in range(1000):\n", " W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization\n", " Z = selu(np.dot(Z, W))\n", " means = np.mean(Z, axis=0).mean()\n", " stds = np.std(Z, axis=0).mean()\n", " if layer % 100 == 0:\n", " print(\"Layer {}: mean {:.2f}, std deviation {:.2f}\".format(layer, means, stds))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `tf.nn.selu()` function was added in TensorFlow 1.4. For earlier versions, you can use the following implementation:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "def selu(z, scale=alpha_0_1, alpha=scale_0_1):\n", " return scale * tf.where(z >= 0.0, z, alpha * tf.nn.elu(z))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, the SELU activation function cannot be used along with regular Dropout (this would cancel the SELU activation function's self-normalizing property). Fortunately, there is a Dropout variant called Alpha Dropout proposed in the same paper. It is available in `tf.contrib.nn.alpha_dropout()` since TF 1.4 (or check out [this implementation](https://github.com/bioinf-jku/SNNs/blob/master/selu.py) by the Institute of Bioinformatics, Johannes Kepler University Linz)." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create a neural net for MNIST using the SELU activation function:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_hidden1 = 300\n", "n_hidden2 = 100\n", "n_outputs = 10\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "\n", "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X, n_hidden1, activation=selu, name=\"hidden1\")\n", " hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=selu, name=\"hidden2\")\n", " logits = tf.layers.dense(hidden2, n_outputs, name=\"outputs\")\n", "\n", "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "learning_rate = 0.01\n", "\n", "with tf.name_scope(\"train\"):\n", " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", " training_op = optimizer.minimize(loss)\n", "\n", "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n", "\n", "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()\n", "n_epochs = 40\n", "batch_size = 50" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Batch accuracy: 0.88 Validation accuracy: 0.923\n", "5 Batch accuracy: 0.98 Validation accuracy: 0.9578\n", "10 Batch accuracy: 1.0 Validation accuracy: 0.9664\n", "15 Batch accuracy: 0.96 Validation accuracy: 0.9682\n", "20 Batch accuracy: 1.0 Validation accuracy: 0.9694\n", "25 Batch accuracy: 1.0 Validation accuracy: 0.9688\n", "30 Batch accuracy: 1.0 Validation accuracy: 0.9694\n", "35 Batch accuracy: 1.0 Validation accuracy: 0.97\n" ] } ], "source": [ "means = X_train.mean(axis=0, keepdims=True)\n", "stds = X_train.std(axis=0, keepdims=True) + 1e-10\n", "X_val_scaled = (X_valid - means) / stds\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " X_batch_scaled = (X_batch - means) / stds\n", " sess.run(training_op, feed_dict={X: X_batch_scaled, y: y_batch})\n", " if epoch % 5 == 0:\n", " acc_batch = accuracy.eval(feed_dict={X: X_batch_scaled, y: y_batch})\n", " acc_valid = accuracy.eval(feed_dict={X: X_val_scaled, y: y_valid})\n", " print(epoch, \"Batch accuracy:\", acc_batch, \"Validation accuracy:\", acc_valid)\n", "\n", " save_path = saver.save(sess, \"./my_model_final_selu.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Batch Normalization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: the book uses `tensorflow.contrib.layers.batch_norm()` rather than `tf.layers.batch_normalization()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.batch_normalization()`, because anything in the contrib module may change or be deleted without notice. Instead of using the `batch_norm()` function as a regularizer parameter to the `fully_connected()` function, we now use `batch_normalization()` and we explicitly create a distinct layer. 
The parameters are a bit different, in particular:\n", "* `decay` is renamed to `momentum`,\n", "* `is_training` is renamed to `training`,\n", "* `updates_collections` is removed: the update operations needed by batch normalization are added to the `UPDATE_OPS` collection and you need to explicitly run these operations during training (see the execution phase below),\n", "* we don't need to specify `scale=True`, as that is the default.\n", "\n", "Also note that in order to run batch norm just _before_ each hidden layer's activation function, we apply the ELU activation function manually, right after the batch norm layer.\n", "\n", "Note: since the `tf.layers.dense()` function is incompatible with `tf.contrib.layers.arg_scope()` (which is used in the book), we now use Python's `functools.partial()` function instead. It makes it easy to create a `my_dense_layer()` function that just calls `tf.layers.dense()` with the desired parameters automatically set (unless they are overridden when calling `my_dense_layer()`). As you can see, the code remains very similar." ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From :15: batch_normalization (from tensorflow.python.layers.normalization) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).\n" ] } ], "source": [ "reset_graph()\n", "\n", "import tensorflow as tf\n", "\n", "n_inputs = 28 * 28\n", "n_hidden1 = 300\n", "n_hidden2 = 100\n", "n_outputs = 10\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "\n", "training = tf.placeholder_with_default(False, shape=(), name='training')\n", "\n", "hidden1 = tf.layers.dense(X, n_hidden1, name=\"hidden1\")\n", "bn1 = tf.layers.batch_normalization(hidden1, training=training, momentum=0.9)\n", "bn1_act = tf.nn.elu(bn1)\n", "\n", "hidden2 = tf.layers.dense(bn1_act, n_hidden2, name=\"hidden2\")\n", "bn2 = tf.layers.batch_normalization(hidden2, training=training, momentum=0.9)\n", "bn2_act = tf.nn.elu(bn2)\n", "\n", "logits_before_bn = tf.layers.dense(bn2_act, n_outputs, name=\"outputs\")\n", "logits = tf.layers.batch_normalization(logits_before_bn, training=training,\n", " momentum=0.9)" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "training = tf.placeholder_with_default(False, shape=(), name='training')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To avoid repeating the same parameters over and over again, we can use Python's `partial()` function:" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "from functools import partial\n", "\n", "my_batch_norm_layer = partial(tf.layers.batch_normalization,\n", " training=training, momentum=0.9)\n", "\n", "hidden1 = tf.layers.dense(X, n_hidden1, name=\"hidden1\")\n", "bn1 = my_batch_norm_layer(hidden1)\n", "bn1_act = tf.nn.elu(bn1)\n", "hidden2 = tf.layers.dense(bn1_act, n_hidden2, name=\"hidden2\")\n", "bn2 = my_batch_norm_layer(hidden2)\n", "bn2_act = tf.nn.elu(bn2)\n", "logits_before_bn = tf.layers.dense(bn2_act, n_outputs, name=\"outputs\")\n", "logits = my_batch_norm_layer(logits_before_bn)" ]
}, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's build a neural net for MNIST, using the ELU activation function and Batch Normalization at each layer:" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "batch_norm_momentum = 0.9\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "training = tf.placeholder_with_default(False, shape=(), name='training')\n", "\n", "with tf.name_scope(\"dnn\"):\n", " he_init = tf.variance_scaling_initializer()\n", "\n", " my_batch_norm_layer = partial(\n", " tf.layers.batch_normalization,\n", " training=training,\n", " momentum=batch_norm_momentum)\n", "\n", " my_dense_layer = partial(\n", " tf.layers.dense,\n", " kernel_initializer=he_init)\n", "\n", " hidden1 = my_dense_layer(X, n_hidden1, name=\"hidden1\")\n", " bn1 = tf.nn.elu(my_batch_norm_layer(hidden1))\n", " hidden2 = my_dense_layer(bn1, n_hidden2, name=\"hidden2\")\n", " bn2 = tf.nn.elu(my_batch_norm_layer(hidden2))\n", " logits_before_bn = my_dense_layer(bn2, n_outputs, name=\"outputs\")\n", " logits = my_batch_norm_layer(logits_before_bn)\n", "\n", "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "with tf.name_scope(\"train\"):\n", " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", " training_op = optimizer.minimize(loss)\n", "\n", "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n", " \n", "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: since we are using `tf.layers.batch_normalization()` rather than `tf.contrib.layers.batch_norm()` (as in the book), we need to explicitly run the extra update operations needed by batch normalization (`sess.run([training_op, extra_update_ops],...`)." 
] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "n_epochs = 20\n", "batch_size = 200" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Validation accuracy: 0.8952\n", "1 Validation accuracy: 0.9202\n", "2 Validation accuracy: 0.9318\n", "3 Validation accuracy: 0.9422\n", "4 Validation accuracy: 0.9468\n", "5 Validation accuracy: 0.954\n", "6 Validation accuracy: 0.9568\n", "7 Validation accuracy: 0.96\n", "8 Validation accuracy: 0.962\n", "9 Validation accuracy: 0.9638\n", "10 Validation accuracy: 0.9662\n", "11 Validation accuracy: 0.9682\n", "12 Validation accuracy: 0.9672\n", "13 Validation accuracy: 0.9696\n", "14 Validation accuracy: 0.9706\n", "15 Validation accuracy: 0.9704\n", "16 Validation accuracy: 0.9718\n", "17 Validation accuracy: 0.9726\n", "18 Validation accuracy: 0.9738\n", "19 Validation accuracy: 0.9742\n" ] } ], "source": [ "extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run([training_op, extra_update_ops],\n", " feed_dict={training: True, X: X_batch, y: y_batch})\n", " accuracy_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n", " print(epoch, \"Validation accuracy:\", accuracy_val)\n", "\n", " save_path = saver.save(sess, \"./my_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What!? That's not a great accuracy for MNIST. Of course, if you train for longer it will get much better accuracy, but with such a shallow network, Batch Norm and ELU are unlikely to have very positive impact: they shine mostly for much deeper nets." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that you could also make the training operation depend on the update operations:\n", "\n", "```python\n", "with tf.name_scope(\"train\"):\n", " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", " extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)\n", " with tf.control_dependencies(extra_update_ops):\n", " training_op = optimizer.minimize(loss)\n", "```\n", "\n", "This way, you would just have to evaluate the `training_op` during training, TensorFlow would automatically run the update operations as well:\n", "\n", "```python\n", "sess.run(training_op, feed_dict={training: True, X: X_batch, y: y_batch})\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One more thing: notice that the list of trainable variables is shorter than the list of all global variables. This is because the moving averages are non-trainable variables. If you want to reuse a pretrained neural network (see below), you must not forget these non-trainable variables." 
] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['hidden1/kernel:0',\n", " 'hidden1/bias:0',\n", " 'batch_normalization/gamma:0',\n", " 'batch_normalization/beta:0',\n", " 'hidden2/kernel:0',\n", " 'hidden2/bias:0',\n", " 'batch_normalization_1/gamma:0',\n", " 'batch_normalization_1/beta:0',\n", " 'outputs/kernel:0',\n", " 'outputs/bias:0',\n", " 'batch_normalization_2/gamma:0',\n", " 'batch_normalization_2/beta:0']" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "[v.name for v in tf.trainable_variables()]" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['hidden1/kernel:0',\n", " 'hidden1/bias:0',\n", " 'batch_normalization/gamma:0',\n", " 'batch_normalization/beta:0',\n", " 'batch_normalization/moving_mean:0',\n", " 'batch_normalization/moving_variance:0',\n", " 'hidden2/kernel:0',\n", " 'hidden2/bias:0',\n", " 'batch_normalization_1/gamma:0',\n", " 'batch_normalization_1/beta:0',\n", " 'batch_normalization_1/moving_mean:0',\n", " 'batch_normalization_1/moving_variance:0',\n", " 'outputs/kernel:0',\n", " 'outputs/bias:0',\n", " 'batch_normalization_2/gamma:0',\n", " 'batch_normalization_2/beta:0',\n", " 'batch_normalization_2/moving_mean:0',\n", " 'batch_normalization_2/moving_variance:0']" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "[v.name for v in tf.global_variables()]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Gradient Clipping" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create a simple neural net for MNIST and add gradient clipping. The first part is the same as earlier (except we added a few more layers to demonstrate reusing pretrained models, see below):" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_hidden1 = 300\n", "n_hidden2 = 50\n", "n_hidden3 = 50\n", "n_hidden4 = 50\n", "n_hidden5 = 50\n", "n_outputs = 10\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "\n", "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name=\"hidden1\")\n", " hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name=\"hidden2\")\n", " hidden3 = tf.layers.dense(hidden2, n_hidden3, activation=tf.nn.relu, name=\"hidden3\")\n", " hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name=\"hidden4\")\n", " hidden5 = tf.layers.dense(hidden4, n_hidden5, activation=tf.nn.relu, name=\"hidden5\")\n", " logits = tf.layers.dense(hidden5, n_outputs, name=\"outputs\")\n", "\n", "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [], "source": [ "learning_rate = 0.01" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we apply gradient clipping. 
For this, we need to get the gradients, use the `clip_by_value()` function to clip them, then apply them:" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [], "source": [ "threshold = 1.0\n", "\n", "optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", "grads_and_vars = optimizer.compute_gradients(loss)\n", "capped_gvs = [(tf.clip_by_value(grad, -threshold, threshold), var)\n", " for grad, var in grads_and_vars]\n", "training_op = optimizer.apply_gradients(capped_gvs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The rest is the same as usual:" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [], "source": [ "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [], "source": [ "n_epochs = 20\n", "batch_size = 200" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Validation accuracy: 0.288\n", "1 Validation accuracy: 0.7936\n", "2 Validation accuracy: 0.8798\n", "3 Validation accuracy: 0.906\n", "4 Validation accuracy: 0.9164\n", "5 Validation accuracy: 0.9218\n", "6 Validation accuracy: 0.9296\n", "7 Validation accuracy: 0.9358\n", "8 Validation accuracy: 0.9382\n", "9 Validation accuracy: 0.9414\n", "10 Validation accuracy: 0.9456\n", "11 Validation accuracy: 0.9474\n", "12 Validation accuracy: 0.9478\n", "13 Validation accuracy: 0.9534\n", "14 Validation accuracy: 0.9568\n", "15 Validation accuracy: 0.9566\n", "16 Validation accuracy: 0.9574\n", "17 Validation accuracy: 0.959\n", "18 Validation accuracy: 0.9622\n", "19 Validation accuracy: 0.9612\n" ] } ], "source": [ "with tf.Session() as sess:\n", " init.run()\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " accuracy_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n", " print(epoch, \"Validation accuracy:\", accuracy_val)\n", "\n", " save_path = saver.save(sess, \"./my_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reusing Pretrained Layers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reusing a TensorFlow Model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First you need to load the graph's structure. The `import_meta_graph()` function does just that, loading the graph's operations into the default graph, and returning a `Saver` that you can then use to restore the model's state. Note that by default, a `Saver` saves the structure of the graph into a `.meta` file, so that's the file you should load:" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [], "source": [ "reset_graph()" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [], "source": [ "saver = tf.train.import_meta_graph(\"./my_model_final.ckpt.meta\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next you need to get a handle on all the operations you will need for training. 
If you don't know the graph's structure, you can list all the operations:" ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "X\n", "y\n", "hidden1/kernel/Initializer/random_uniform/shape\n", "hidden1/kernel/Initializer/random_uniform/min\n", "hidden1/kernel/Initializer/random_uniform/max\n", "hidden1/kernel/Initializer/random_uniform/RandomUniform\n", "hidden1/kernel/Initializer/random_uniform/sub\n", "hidden1/kernel/Initializer/random_uniform/mul\n", "hidden1/kernel/Initializer/random_uniform\n", "hidden1/kernel\n", "hidden1/kernel/Assign\n", "hidden1/kernel/read\n", "hidden1/bias/Initializer/zeros\n", "hidden1/bias\n", "hidden1/bias/Assign\n", "hidden1/bias/read\n", "dnn/hidden1/MatMul\n", "dnn/hidden1/BiasAdd\n", "dnn/hidden1/Relu\n", "hidden2/kernel/Initializer/random_uniform/shape\n", "hidden2/kernel/Initializer/random_uniform/min\n", "hidden2/kernel/Initializer/random_uniform/max\n", "hidden2/kernel/Initializer/random_uniform/RandomUniform\n", "hidden2/kernel/Initializer/random_uniform/sub\n", "hidden2/kernel/Initializer/random_uniform/mul\n", "hidden2/kernel/Initializer/random_uniform\n", "hidden2/kernel\n", "hidden2/kernel/Assign\n", "hidden2/kernel/read\n", "hidden2/bias/Initializer/zeros\n", "hidden2/bias\n", "hidden2/bias/Assign\n", "hidden2/bias/read\n", "dnn/hidden2/MatMul\n", "dnn/hidden2/BiasAdd\n", "<<210 more lines>>\n", "GradientDescent/update_hidden4/bias/ApplyGradientDescent\n", "GradientDescent/update_hidden5/kernel/ApplyGradientDescent\n", "GradientDescent/update_hidden5/bias/ApplyGradientDescent\n", "GradientDescent/update_outputs/kernel/ApplyGradientDescent\n", "GradientDescent/update_outputs/bias/ApplyGradientDescent\n", "GradientDescent\n", "eval/in_top_k/InTopKV2/k\n", "eval/in_top_k/InTopKV2\n", "eval/Cast\n", "eval/Const\n", "eval/accuracy\n", "init\n", "save/filename/input\n", "save/filename\n", "save/Const\n", "save/SaveV2/tensor_names\n", "save/SaveV2/shape_and_slices\n", "save/SaveV2\n", "save/control_dependency\n", "save/RestoreV2/tensor_names\n", "save/RestoreV2/shape_and_slices\n", "save/RestoreV2\n", "save/Assign\n", "save/Assign_1\n", "save/Assign_2\n", "save/Assign_3\n", "save/Assign_4\n", "save/Assign_5\n", "save/Assign_6\n", "save/Assign_7\n", "save/Assign_8\n", "save/Assign_9\n", "save/Assign_10\n", "save/Assign_11\n", "save/restore_all\n" ] } ], "source": [ "for op in tf.get_default_graph().get_operations():\n", " print(op.name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Oops, that's a lot of operations! 
It's much easier to use TensorBoard to visualize the graph:" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [], "source": [ "from datetime import datetime\n", "\n", "root_logdir = os.path.join(os.curdir, \"tf_logs\")\n", "\n", "def make_log_subdir(run_id=None):\n", " if run_id is None:\n", " run_id = datetime.utcnow().strftime(\"%Y%m%d%H%M%S\")\n", " return \"{}/run-{}/\".format(root_logdir, run_id)\n", "\n", "def save_graph(graph=None, run_id=None):\n", " if graph is None:\n", " graph = tf.get_default_graph()\n", " logdir = make_log_subdir(run_id)\n", " file_writer = tf.summary.FileWriter(logdir, graph=graph)\n", " file_writer.close()\n", " return logdir" ] }, { "cell_type": "code", "execution_count": 51, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "'./tf_logs/run-20210325200138/'" ] }, "execution_count": 51, "metadata": {}, "output_type": "execute_result" } ], "source": [ "save_graph()" ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [], "source": [ "%load_ext tensorboard" ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Reusing TensorBoard on port 6007 (pid 46883), started 0:09:56 ago. (Use '!kill 46883' to kill it.)" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " \n", " \n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "%tensorboard --logdir {root_logdir}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once you know which operations you need, you can get a handle on them using the graph's `get_operation_by_name()` or `get_tensor_by_name()` methods:" ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [], "source": [ "X = tf.get_default_graph().get_tensor_by_name(\"X:0\")\n", "y = tf.get_default_graph().get_tensor_by_name(\"y:0\")\n", "\n", "accuracy = tf.get_default_graph().get_tensor_by_name(\"eval/accuracy:0\")\n", "\n", "training_op = tf.get_default_graph().get_operation_by_name(\"GradientDescent\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you are the author of the original model, you could make things easier for people who will reuse your model by giving operations very clear names and documenting them. Another approach is to create a collection containing all the important operations that people will want to get a handle on:" ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [], "source": [ "for op in (X, y, accuracy, training_op):\n", " tf.add_to_collection(\"my_important_ops\", op)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This way people who reuse your model will be able to simply write:" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [], "source": [ "X, y, accuracy, training_op = tf.get_collection(\"my_important_ops\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now you can start a session, restore the model's state and continue training on your data:" ] }, { "cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_model_final.ckpt\n" ] } ], "source": [ "with tf.Session() as sess:\n", " saver.restore(sess, \"./my_model_final.ckpt\")\n", " # continue training the model..." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Actually, let's test this for real!" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_model_final.ckpt\n", "0 Validation accuracy: 0.9636\n", "1 Validation accuracy: 0.9632\n", "2 Validation accuracy: 0.9658\n", "3 Validation accuracy: 0.9652\n", "4 Validation accuracy: 0.9646\n", "5 Validation accuracy: 0.965\n", "6 Validation accuracy: 0.969\n", "7 Validation accuracy: 0.9682\n", "8 Validation accuracy: 0.9682\n", "9 Validation accuracy: 0.9684\n", "10 Validation accuracy: 0.9704\n", "11 Validation accuracy: 0.971\n", "12 Validation accuracy: 0.9668\n", "13 Validation accuracy: 0.97\n", "14 Validation accuracy: 0.9712\n", "15 Validation accuracy: 0.9726\n", "16 Validation accuracy: 0.9718\n", "17 Validation accuracy: 0.971\n", "18 Validation accuracy: 0.9712\n", "19 Validation accuracy: 0.9712\n" ] } ], "source": [ "with tf.Session() as sess:\n", " saver.restore(sess, \"./my_model_final.ckpt\")\n", "\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " accuracy_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n", " print(epoch, \"Validation accuracy:\", accuracy_val)\n", "\n", " save_path = saver.save(sess, \"./my_new_model_final.ckpt\") " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alternatively, if you have access to the Python code that built the original graph, you can use it instead of `import_meta_graph()`:" ] }, { "cell_type": "code", "execution_count": 59, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_hidden1 = 300\n", "n_hidden2 = 50\n", "n_hidden3 = 50\n", "n_hidden4 = 50\n", "n_hidden5 = 50\n", "n_outputs = 10\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "\n", "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name=\"hidden1\")\n", " hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name=\"hidden2\")\n", " hidden3 = tf.layers.dense(hidden2, n_hidden3, activation=tf.nn.relu, name=\"hidden3\")\n", " hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name=\"hidden4\")\n", " hidden5 = tf.layers.dense(hidden4, n_hidden5, activation=tf.nn.relu, name=\"hidden5\")\n", " logits = tf.layers.dense(hidden5, n_outputs, name=\"outputs\")\n", "\n", "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")\n", "\n", "learning_rate = 0.01\n", "threshold = 1.0\n", "\n", "optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", "grads_and_vars = optimizer.compute_gradients(loss)\n", "capped_gvs = [(tf.clip_by_value(grad, -threshold, threshold), var)\n", " for grad, var in grads_and_vars]\n", "training_op = optimizer.apply_gradients(capped_gvs)\n", "\n", "saver = tf.train.Saver()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And continue training:" ] }, { "cell_type": "code", "execution_count": 60, "metadata": {}, 
"outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_model_final.ckpt\n", "0 Validation accuracy: 0.9642\n", "1 Validation accuracy: 0.963\n", "2 Validation accuracy: 0.9656\n", "3 Validation accuracy: 0.9652\n", "4 Validation accuracy: 0.9642\n", "5 Validation accuracy: 0.965\n", "6 Validation accuracy: 0.9686\n", "7 Validation accuracy: 0.9686\n", "8 Validation accuracy: 0.9684\n", "9 Validation accuracy: 0.9684\n", "10 Validation accuracy: 0.9702\n", "11 Validation accuracy: 0.9716\n", "12 Validation accuracy: 0.9676\n", "13 Validation accuracy: 0.97\n", "14 Validation accuracy: 0.9706\n", "15 Validation accuracy: 0.9724\n", "16 Validation accuracy: 0.972\n", "17 Validation accuracy: 0.9712\n", "18 Validation accuracy: 0.9712\n", "19 Validation accuracy: 0.9708\n" ] } ], "source": [ "with tf.Session() as sess:\n", " saver.restore(sess, \"./my_model_final.ckpt\")\n", "\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " accuracy_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n", " print(epoch, \"Validation accuracy:\", accuracy_val)\n", "\n", " save_path = saver.save(sess, \"./my_new_model_final.ckpt\") " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general you will want to reuse only the lower layers. If you are using `import_meta_graph()` it will load the whole graph, but you can simply ignore the parts you do not need. In this example, we add a new 4th hidden layer on top of the pretrained 3rd layer (ignoring the old 4th hidden layer). We also build a new output layer, the loss for this new output, and a new optimizer to minimize it. 
We also need another saver to save the whole graph (containing both the entire old graph and the new operations), and an initialization operation to initialize all the new variables:" ] }, { "cell_type": "code", "execution_count": 61, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_hidden4 = 20 # new layer\n", "n_outputs = 10 # new layer\n", "\n", "saver = tf.train.import_meta_graph(\"./my_model_final.ckpt.meta\")\n", "\n", "X = tf.get_default_graph().get_tensor_by_name(\"X:0\")\n", "y = tf.get_default_graph().get_tensor_by_name(\"y:0\")\n", "\n", "hidden3 = tf.get_default_graph().get_tensor_by_name(\"dnn/hidden3/Relu:0\")\n", "\n", "new_hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name=\"new_hidden4\")\n", "new_logits = tf.layers.dense(new_hidden4, n_outputs, name=\"new_outputs\")\n", "\n", "with tf.name_scope(\"new_loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=new_logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "with tf.name_scope(\"new_eval\"):\n", " correct = tf.nn.in_top_k(new_logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")\n", "\n", "with tf.name_scope(\"new_train\"):\n", " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", " training_op = optimizer.minimize(loss)\n", "\n", "init = tf.global_variables_initializer()\n", "new_saver = tf.train.Saver()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we can train this new model:" ] }, { "cell_type": "code", "execution_count": 62, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_model_final.ckpt\n", "0 Validation accuracy: 0.9126\n", "1 Validation accuracy: 0.9374\n", "2 Validation accuracy: 0.946\n", "3 Validation accuracy: 0.9498\n", "4 Validation accuracy: 0.953\n", "5 Validation accuracy: 0.9528\n", "6 Validation accuracy: 0.9564\n", "7 Validation accuracy: 0.96\n", "8 Validation accuracy: 0.9616\n", "9 Validation accuracy: 0.9612\n", "10 Validation accuracy: 0.9634\n", "11 Validation accuracy: 0.9626\n", "12 Validation accuracy: 0.9648\n", "13 Validation accuracy: 0.9656\n", "14 Validation accuracy: 0.9664\n", "15 Validation accuracy: 0.967\n", "16 Validation accuracy: 0.968\n", "17 Validation accuracy: 0.9678\n", "18 Validation accuracy: 0.9684\n", "19 Validation accuracy: 0.9678\n" ] } ], "source": [ "with tf.Session() as sess:\n", " init.run()\n", " saver.restore(sess, \"./my_model_final.ckpt\")\n", "\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " accuracy_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n", " print(epoch, \"Validation accuracy:\", accuracy_val)\n", "\n", " save_path = new_saver.save(sess, \"./my_new_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you have access to the Python code that built the original graph, you can just reuse the parts you need and drop the rest:" ] }, { "cell_type": "code", "execution_count": 63, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_hidden1 = 300 # reused\n", "n_hidden2 = 50 # reused\n", "n_hidden3 = 50 # reused\n", "n_hidden4 = 20 # new!\n", "n_outputs = 10 # new!\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, 
shape=(None), name=\"y\")\n", "\n", "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name=\"hidden1\") # reused\n", " hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name=\"hidden2\") # reused\n", " hidden3 = tf.layers.dense(hidden2, n_hidden3, activation=tf.nn.relu, name=\"hidden3\") # reused\n", " hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name=\"hidden4\") # new!\n", " logits = tf.layers.dense(hidden4, n_outputs, name=\"outputs\") # new!\n", "\n", "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")\n", "\n", "with tf.name_scope(\"train\"):\n", " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", " training_op = optimizer.minimize(loss)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, you must create one `Saver` to restore the pretrained model (giving it the list of variables to restore, or else it will complain that the graphs don't match), and another `Saver` to save the new model, once it is trained:" ] }, { "cell_type": "code", "execution_count": 64, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_model_final.ckpt\n", "0 Validation accuracy: 0.9024\n", "1 Validation accuracy: 0.9332\n", "2 Validation accuracy: 0.943\n", "3 Validation accuracy: 0.947\n", "4 Validation accuracy: 0.9516\n", "5 Validation accuracy: 0.9532\n", "6 Validation accuracy: 0.9558\n", "7 Validation accuracy: 0.9592\n", "8 Validation accuracy: 0.9586\n", "9 Validation accuracy: 0.9608\n", "10 Validation accuracy: 0.9626\n", "11 Validation accuracy: 0.962\n", "12 Validation accuracy: 0.964\n", "13 Validation accuracy: 0.9662\n", "14 Validation accuracy: 0.966\n", "15 Validation accuracy: 0.9662\n", "16 Validation accuracy: 0.9672\n", "17 Validation accuracy: 0.9674\n", "18 Validation accuracy: 0.9682\n", "19 Validation accuracy: 0.9678\n" ] } ], "source": [ "reuse_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,\n", " scope=\"hidden[123]\") # regular expression\n", "restore_saver = tf.train.Saver(reuse_vars) # to restore layers 1-3\n", "\n", "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " restore_saver.restore(sess, \"./my_model_final.ckpt\")\n", "\n", " for epoch in range(n_epochs): # not shown in the book\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size): # not shown\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) # not shown\n", " accuracy_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid}) # not shown\n", " print(epoch, \"Validation accuracy:\", accuracy_val) # not shown\n", "\n", " save_path = saver.save(sess, \"./my_new_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reusing Models from Other Frameworks" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, for each variable we want to reuse, we find its initializer's assignment operation, and we get its second input, which corresponds to the initialization value. 
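(In TensorFlow 1, a variable's initializer is an `Assign` operation whose first input is the variable itself and whose second input is the initial value, which is why the code below uses `inputs[1]`.) 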
When we run the initializer, we replace the initialization values with the ones we want, using a `feed_dict`:" ] }, { "cell_type": "code", "execution_count": 65, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 2\n", "n_hidden1 = 3" ] }, { "cell_type": "code", "execution_count": 66, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 61. 83. 105.]]\n" ] } ], "source": [ "original_w = [[1., 2., 3.], [4., 5., 6.]] # Load the weights from the other framework\n", "original_b = [7., 8., 9.] # Load the biases from the other framework\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name=\"hidden1\")\n", "# [...] Build the rest of the model\n", "\n", "# Get a handle on the assignment nodes for the hidden1 variables\n", "graph = tf.get_default_graph()\n", "assign_kernel = graph.get_operation_by_name(\"hidden1/kernel/Assign\")\n", "assign_bias = graph.get_operation_by_name(\"hidden1/bias/Assign\")\n", "init_kernel = assign_kernel.inputs[1]\n", "init_bias = assign_bias.inputs[1]\n", "\n", "init = tf.global_variables_initializer()\n", "\n", "with tf.Session() as sess:\n", " sess.run(init, feed_dict={init_kernel: original_w, init_bias: original_b})\n", " # [...] Train the model on your new task\n", " print(hidden1.eval(feed_dict={X: [[10.0, 11.0]]})) # not shown in the book" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: the weights variable created by the `tf.layers.dense()` function is called `\"kernel\"` (instead of `\"weights\"` when using the `tf.contrib.layers.fully_connected()`, as in the book), and the biases variable is called `bias` instead of `biases`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another approach (initially used in the book) would be to create dedicated assignment nodes and dedicated placeholders. This is more verbose and less efficient, but you may find this more explicit:" ] }, { "cell_type": "code", "execution_count": 67, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 61. 83. 105.]]\n" ] } ], "source": [ "reset_graph()\n", "\n", "n_inputs = 2\n", "n_hidden1 = 3\n", "\n", "original_w = [[1., 2., 3.], [4., 5., 6.]] # Load the weights from the other framework\n", "original_b = [7., 8., 9.] # Load the biases from the other framework\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name=\"hidden1\")\n", "# [...] Build the rest of the model\n", "\n", "# Get a handle on the variables of layer hidden1\n", "with tf.variable_scope(\"\", default_name=\"\", reuse=True): # root scope\n", " hidden1_weights = tf.get_variable(\"hidden1/kernel\")\n", " hidden1_biases = tf.get_variable(\"hidden1/bias\")\n", "\n", "# Create dedicated placeholders and assignment nodes\n", "original_weights = tf.placeholder(tf.float32, shape=(n_inputs, n_hidden1))\n", "original_biases = tf.placeholder(tf.float32, shape=n_hidden1)\n", "assign_hidden1_weights = tf.assign(hidden1_weights, original_weights)\n", "assign_hidden1_biases = tf.assign(hidden1_biases, original_biases)\n", "\n", "init = tf.global_variables_initializer()\n", "\n", "with tf.Session() as sess:\n", " sess.run(init)\n", " sess.run(assign_hidden1_weights, feed_dict={original_weights: original_w})\n", " sess.run(assign_hidden1_biases, feed_dict={original_biases: original_b})\n", " # [...] 
Train the model on your new task\n", " print(hidden1.eval(feed_dict={X: [[10.0, 11.0]]}))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that we could also get a handle on the variables using `get_collection()` and specifying the `scope`:" ] }, { "cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[<tf.Variable 'hidden1/kernel:0' shape=(2, 3) dtype=float32_ref>,\n", " <tf.Variable 'hidden1/bias:0' shape=(3,) dtype=float32_ref>]" ] }, "execution_count": 68, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=\"hidden1\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or we could use the graph's `get_tensor_by_name()` method:" ] }, { "cell_type": "code", "execution_count": 69, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "<tf.Tensor 'hidden1/kernel:0' shape=(2, 3) dtype=float32>" ] }, "execution_count": 69, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tf.get_default_graph().get_tensor_by_name(\"hidden1/kernel:0\")" ] }, { "cell_type": "code", "execution_count": 70, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "<tf.Tensor 'hidden1/bias:0' shape=(3,) dtype=float32>" ] }, "execution_count": 70, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tf.get_default_graph().get_tensor_by_name(\"hidden1/bias:0\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Freezing the Lower Layers" ] }, { "cell_type": "code", "execution_count": 71, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_hidden1 = 300 # reused\n", "n_hidden2 = 50 # reused\n", "n_hidden3 = 50 # reused\n", "n_hidden4 = 20 # new!\n", "n_outputs = 10 # new!\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "\n", "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name=\"hidden1\") # reused\n", " hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name=\"hidden2\") # reused\n", " hidden3 = tf.layers.dense(hidden2, n_hidden3, activation=tf.nn.relu, name=\"hidden3\") # reused\n", " hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name=\"hidden4\") # new!\n", " logits = tf.layers.dense(hidden4, n_outputs, name=\"outputs\") # new!\n", "\n", "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")" ] }, { "cell_type": "code", "execution_count": 72, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"train\"): # not shown in the book\n", " optimizer = tf.train.GradientDescentOptimizer(learning_rate) # not shown\n", " train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,\n", " scope=\"hidden[34]|outputs\")\n", " training_op = optimizer.minimize(loss, var_list=train_vars)" ] }, { "cell_type": "code", "execution_count": 73, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_model_final.ckpt\n", "0 Validation accuracy: 0.8964\n", "1 Validation accuracy: 0.9298\n", "2 Validation accuracy: 0.94\n", "3 Validation accuracy: 0.9442\n", "4 Validation accuracy: 0.948\n", "5 Validation accuracy: 0.951\n", "6 Validation accuracy: 0.9508\n", "7 Validation accuracy: 0.9538\n", "8 Validation accuracy: 0.9554\n", "9 Validation accuracy: 0.957\n", "10 
Validation accuracy: 0.9562\n", "11 Validation accuracy: 0.9566\n", "12 Validation accuracy: 0.9572\n", "13 Validation accuracy: 0.9578\n", "14 Validation accuracy: 0.959\n", "15 Validation accuracy: 0.9576\n", "16 Validation accuracy: 0.9574\n", "17 Validation accuracy: 0.9602\n", "18 Validation accuracy: 0.9592\n", "19 Validation accuracy: 0.9602\n" ] } ], "source": [ "reuse_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,\n", " scope=\"hidden[123]\") # regular expression\n", "restore_saver = tf.train.Saver(reuse_vars) # to restore layers 1-3\n", "\n", "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " restore_saver.restore(sess, \"./my_model_final.ckpt\")\n", "\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " accuracy_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n", " print(epoch, \"Validation accuracy:\", accuracy_val)\n", "\n", " save_path = saver.save(sess, \"./my_new_model_final.ckpt\")" ] }, { "cell_type": "code", "execution_count": 74, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_hidden1 = 300 # reused\n", "n_hidden2 = 50 # reused\n", "n_hidden3 = 50 # reused\n", "n_hidden4 = 20 # new!\n", "n_outputs = 10 # new!\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")" ] }, { "cell_type": "code", "execution_count": 75, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu,\n", " name=\"hidden1\") # reused frozen\n", " hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu,\n", " name=\"hidden2\") # reused frozen\n", " hidden2_stop = tf.stop_gradient(hidden2)\n", " hidden3 = tf.layers.dense(hidden2_stop, n_hidden3, activation=tf.nn.relu,\n", " name=\"hidden3\") # reused, not frozen\n", " hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu,\n", " name=\"hidden4\") # new!\n", " logits = tf.layers.dense(hidden4, n_outputs, name=\"outputs\") # new!" 
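] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `tf.stop_gradient()` function acts as the identity in the forward pass, but it blocks gradients from flowing through it during backpropagation, so everything below `hidden2_stop` (i.e., `hidden1` and `hidden2`) is effectively frozen even without passing an explicit `var_list` to the optimizer. Here is a minimal sketch (not in the book) of this behavior on a constant:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch: stop_gradient() is the identity in the forward pass,\n", "# but it contributes no gradient in the backward pass\n", "c = tf.constant(2.0)\n", "grad_full = tf.gradients(c * c, c)  # d(c*c)/dc = 2c = 4.0\n", "grad_stopped = tf.gradients(tf.stop_gradient(c) * c, c)  # treated as constant * c, i.e. 2.0\n", "\n", "with tf.Session() as sess:\n", "    print(sess.run(grad_full), sess.run(grad_stopped))  # [4.0] [2.0]"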
] }, { "cell_type": "code", "execution_count": 76, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")\n", "\n", "with tf.name_scope(\"train\"):\n", " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", " training_op = optimizer.minimize(loss)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The training code is exactly the same as earlier:" ] }, { "cell_type": "code", "execution_count": 77, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_model_final.ckpt\n", "0 Validation accuracy: 0.902\n", "1 Validation accuracy: 0.9302\n", "2 Validation accuracy: 0.9438\n", "3 Validation accuracy: 0.9478\n", "4 Validation accuracy: 0.9514\n", "5 Validation accuracy: 0.9522\n", "6 Validation accuracy: 0.9524\n", "7 Validation accuracy: 0.9556\n", "8 Validation accuracy: 0.9556\n", "9 Validation accuracy: 0.9558\n", "10 Validation accuracy: 0.957\n", "11 Validation accuracy: 0.9552\n", "12 Validation accuracy: 0.9572\n", "13 Validation accuracy: 0.9582\n", "14 Validation accuracy: 0.9582\n", "15 Validation accuracy: 0.957\n", "16 Validation accuracy: 0.9566\n", "17 Validation accuracy: 0.9578\n", "18 Validation accuracy: 0.9594\n", "19 Validation accuracy: 0.958\n" ] } ], "source": [ "reuse_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,\n", " scope=\"hidden[123]\") # regular expression\n", "restore_saver = tf.train.Saver(reuse_vars) # to restore layers 1-3\n", "\n", "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " restore_saver.restore(sess, \"./my_model_final.ckpt\")\n", "\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " accuracy_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n", " print(epoch, \"Validation accuracy:\", accuracy_val)\n", "\n", " save_path = saver.save(sess, \"./my_new_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Caching the Frozen Layers" ] }, { "cell_type": "code", "execution_count": 78, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_hidden1 = 300 # reused\n", "n_hidden2 = 50 # reused\n", "n_hidden3 = 50 # reused\n", "n_hidden4 = 20 # new!\n", "n_outputs = 10 # new!\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "\n", "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu,\n", " name=\"hidden1\") # reused frozen\n", " hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu,\n", " name=\"hidden2\") # reused frozen & cached\n", " hidden2_stop = tf.stop_gradient(hidden2)\n", " hidden3 = tf.layers.dense(hidden2_stop, n_hidden3, activation=tf.nn.relu,\n", " name=\"hidden3\") # reused, not frozen\n", " hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu,\n", " name=\"hidden4\") # new!\n", " logits = tf.layers.dense(hidden4, n_outputs, name=\"outputs\") # new!\n", "\n", 
"with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")\n", "\n", "with tf.name_scope(\"train\"):\n", " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", " training_op = optimizer.minimize(loss)" ] }, { "cell_type": "code", "execution_count": 79, "metadata": {}, "outputs": [], "source": [ "reuse_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,\n", " scope=\"hidden[123]\") # regular expression\n", "restore_saver = tf.train.Saver(reuse_vars) # to restore layers 1-3\n", "\n", "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()" ] }, { "cell_type": "code", "execution_count": 80, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_model_final.ckpt\n", "0 Validation accuracy: 0.902\n", "1 Validation accuracy: 0.9302\n", "2 Validation accuracy: 0.9438\n", "3 Validation accuracy: 0.9478\n", "4 Validation accuracy: 0.9514\n", "5 Validation accuracy: 0.9522\n", "6 Validation accuracy: 0.9524\n", "7 Validation accuracy: 0.9556\n", "8 Validation accuracy: 0.9556\n", "9 Validation accuracy: 0.9558\n", "10 Validation accuracy: 0.957\n", "11 Validation accuracy: 0.9552\n", "12 Validation accuracy: 0.9572\n", "13 Validation accuracy: 0.9582\n", "14 Validation accuracy: 0.9582\n", "15 Validation accuracy: 0.957\n", "16 Validation accuracy: 0.9566\n", "17 Validation accuracy: 0.9578\n", "18 Validation accuracy: 0.9594\n", "19 Validation accuracy: 0.958\n" ] } ], "source": [ "import numpy as np\n", "\n", "n_batches = len(X_train) // batch_size\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " restore_saver.restore(sess, \"./my_model_final.ckpt\")\n", " \n", " h2_cache = sess.run(hidden2, feed_dict={X: X_train})\n", " h2_cache_valid = sess.run(hidden2, feed_dict={X: X_valid}) # not shown in the book\n", "\n", " for epoch in range(n_epochs):\n", " shuffled_idx = np.random.permutation(len(X_train))\n", " hidden2_batches = np.array_split(h2_cache[shuffled_idx], n_batches)\n", " y_batches = np.array_split(y_train[shuffled_idx], n_batches)\n", " for hidden2_batch, y_batch in zip(hidden2_batches, y_batches):\n", " sess.run(training_op, feed_dict={hidden2:hidden2_batch, y:y_batch})\n", "\n", " accuracy_val = accuracy.eval(feed_dict={hidden2: h2_cache_valid, # not shown\n", " y: y_valid}) # not shown\n", " print(epoch, \"Validation accuracy:\", accuracy_val) # not shown\n", "\n", " save_path = saver.save(sess, \"./my_new_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Faster Optimizers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Momentum optimization" ] }, { "cell_type": "code", "execution_count": 81, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,\n", " momentum=0.9)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Nesterov Accelerated Gradient" ] }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,\n", " momentum=0.9, use_nesterov=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## AdaGrad" ] }, { "cell_type": "code", "execution_count": 83, 
"metadata": {}, "outputs": [], "source": [ "optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## RMSProp" ] }, { "cell_type": "code", "execution_count": 84, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate,\n", " momentum=0.9, decay=0.9, epsilon=1e-10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Adam Optimization" ] }, { "cell_type": "code", "execution_count": 85, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Learning Rate Scheduling" ] }, { "cell_type": "code", "execution_count": 86, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_hidden1 = 300\n", "n_hidden2 = 50\n", "n_outputs = 10\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "\n", "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name=\"hidden1\")\n", " hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name=\"hidden2\")\n", " logits = tf.layers.dense(hidden2, n_outputs, name=\"outputs\")\n", "\n", "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")" ] }, { "cell_type": "code", "execution_count": 87, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"train\"): # not shown in the book\n", " initial_learning_rate = 0.1\n", " decay_steps = 10000\n", " decay_rate = 1/10\n", " global_step = tf.Variable(0, trainable=False, name=\"global_step\")\n", " learning_rate = tf.train.exponential_decay(initial_learning_rate, global_step,\n", " decay_steps, decay_rate)\n", " optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)\n", " training_op = optimizer.minimize(loss, global_step=global_step)" ] }, { "cell_type": "code", "execution_count": 88, "metadata": {}, "outputs": [], "source": [ "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()" ] }, { "cell_type": "code", "execution_count": 89, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Validation accuracy: 0.959\n", "1 Validation accuracy: 0.9688\n", "2 Validation accuracy: 0.9726\n", "3 Validation accuracy: 0.9804\n", "4 Validation accuracy: 0.982\n" ] } ], "source": [ "n_epochs = 5\n", "batch_size = 50\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " accuracy_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n", " print(epoch, \"Validation accuracy:\", accuracy_val)\n", "\n", " save_path = saver.save(sess, \"./my_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Avoiding Overfitting Through Regularization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## $\\ell_1$ and $\\ell_2$ regularization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's implement $\\ell_1$ regularization 
manually. First, we create the model, as usual (with just one hidden layer this time, for simplicity):" ] }, { "cell_type": "code", "execution_count": 90, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_hidden1 = 300\n", "n_outputs = 10\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "\n", "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name=\"hidden1\")\n", " logits = tf.layers.dense(hidden1, n_outputs, name=\"outputs\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we get a handle on the layer weights, and we compute the total loss, which is equal to the sum of the usual cross entropy loss and the $\\ell_1$ loss (i.e., the sum of the absolute values of the weights, scaled by the regularization hyperparameter):" ] }, { "cell_type": "code", "execution_count": 91, "metadata": {}, "outputs": [], "source": [ "W1 = tf.get_default_graph().get_tensor_by_name(\"hidden1/kernel:0\")\n", "W2 = tf.get_default_graph().get_tensor_by_name(\"outputs/kernel:0\")\n", "\n", "scale = 0.001 # l1 regularization hyperparameter\n", "\n", "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,\n", " logits=logits)\n", " base_loss = tf.reduce_mean(xentropy, name=\"avg_xentropy\")\n", " reg_losses = tf.reduce_sum(tf.abs(W1)) + tf.reduce_sum(tf.abs(W2))\n", " loss = tf.add(base_loss, scale * reg_losses, name=\"loss\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The rest is just as usual:" ] }, { "cell_type": "code", "execution_count": 92, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")\n", "\n", "learning_rate = 0.01\n", "\n", "with tf.name_scope(\"train\"):\n", " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", " training_op = optimizer.minimize(loss)\n", "\n", "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()" ] }, { "cell_type": "code", "execution_count": 93, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Validation accuracy: 0.831\n", "1 Validation accuracy: 0.871\n", "2 Validation accuracy: 0.8838\n", "3 Validation accuracy: 0.8934\n", "4 Validation accuracy: 0.8966\n", "5 Validation accuracy: 0.8988\n", "6 Validation accuracy: 0.9016\n", "7 Validation accuracy: 0.9044\n", "8 Validation accuracy: 0.9058\n", "9 Validation accuracy: 0.906\n", "10 Validation accuracy: 0.9068\n", "11 Validation accuracy: 0.9054\n", "12 Validation accuracy: 0.907\n", "13 Validation accuracy: 0.9084\n", "14 Validation accuracy: 0.9088\n", "15 Validation accuracy: 0.9064\n", "16 Validation accuracy: 0.9066\n", "17 Validation accuracy: 0.9066\n", "18 Validation accuracy: 0.9066\n", "19 Validation accuracy: 0.9052\n" ] } ], "source": [ "n_epochs = 20\n", "batch_size = 200\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " accuracy_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n", " print(epoch, \"Validation accuracy:\", accuracy_val)\n", "\n", " save_path = saver.save(sess, \"./my_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ 
"Alternatively, we can pass a regularization function to the `tf.layers.dense()` function, which will use it to create operations that will compute the regularization loss, and it adds these operations to the collection of regularization losses. The beginning is the same as above:" ] }, { "cell_type": "code", "execution_count": 94, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_hidden1 = 300\n", "n_hidden2 = 50\n", "n_outputs = 10\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we will use Python's `partial()` function to avoid repeating the same arguments over and over again. Note that we set the `kernel_regularizer` argument:" ] }, { "cell_type": "code", "execution_count": 95, "metadata": {}, "outputs": [], "source": [ "scale = 0.001" ] }, { "cell_type": "code", "execution_count": 96, "metadata": {}, "outputs": [], "source": [ "my_dense_layer = partial(\n", " tf.layers.dense, activation=tf.nn.relu,\n", " kernel_regularizer=tf.contrib.layers.l1_regularizer(scale))\n", "\n", "with tf.name_scope(\"dnn\"):\n", " hidden1 = my_dense_layer(X, n_hidden1, name=\"hidden1\")\n", " hidden2 = my_dense_layer(hidden1, n_hidden2, name=\"hidden2\")\n", " logits = my_dense_layer(hidden2, n_outputs, activation=None,\n", " name=\"outputs\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we must add the regularization losses to the base loss:" ] }, { "cell_type": "code", "execution_count": 97, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"loss\"): # not shown in the book\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits( # not shown\n", " labels=y, logits=logits) # not shown\n", " base_loss = tf.reduce_mean(xentropy, name=\"avg_xentropy\") # not shown\n", " reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)\n", " loss = tf.add_n([base_loss] + reg_losses, name=\"loss\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And the rest is the same as usual:" ] }, { "cell_type": "code", "execution_count": 98, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")\n", "\n", "learning_rate = 0.01\n", "\n", "with tf.name_scope(\"train\"):\n", " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", " training_op = optimizer.minimize(loss)\n", "\n", "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()" ] }, { "cell_type": "code", "execution_count": 99, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Validation accuracy: 0.8274\n", "1 Validation accuracy: 0.8766\n", "2 Validation accuracy: 0.8952\n", "3 Validation accuracy: 0.9016\n", "4 Validation accuracy: 0.908\n", "5 Validation accuracy: 0.9096\n", "6 Validation accuracy: 0.9126\n", "7 Validation accuracy: 0.9154\n", "8 Validation accuracy: 0.9178\n", "9 Validation accuracy: 0.919\n", "10 Validation accuracy: 0.92\n", "11 Validation accuracy: 0.9224\n", "12 Validation accuracy: 0.9212\n", "13 Validation accuracy: 0.9228\n", "14 Validation accuracy: 0.9224\n", "15 Validation accuracy: 0.9216\n", "16 Validation accuracy: 0.9218\n", "17 Validation accuracy: 0.9228\n", "18 Validation accuracy: 0.9216\n", "19 Validation accuracy: 0.9214\n" ] } ], 
"source": [ "n_epochs = 20\n", "batch_size = 200\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " accuracy_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n", " print(epoch, \"Validation accuracy:\", accuracy_val)\n", "\n", " save_path = saver.save(sess, \"./my_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dropout" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: the book uses `tf.contrib.layers.dropout()` rather than `tf.layers.dropout()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.dropout()`, because anything in the contrib module may change or be deleted without notice. The `tf.layers.dropout()` function is almost identical to the `tf.contrib.layers.dropout()` function, except for a few minor differences. Most importantly:\n", "* you must specify the dropout rate (`rate`) rather than the keep probability (`keep_prob`), where `rate` is simply equal to `1 - keep_prob`,\n", "* the `is_training` parameter is renamed to `training`." ] }, { "cell_type": "code", "execution_count": 100, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")" ] }, { "cell_type": "code", "execution_count": 101, "metadata": {}, "outputs": [], "source": [ "training = tf.placeholder_with_default(False, shape=(), name='training')\n", "\n", "dropout_rate = 0.5 # == 1 - keep_prob\n", "X_drop = tf.layers.dropout(X, dropout_rate, training=training)\n", "\n", "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X_drop, n_hidden1, activation=tf.nn.relu,\n", " name=\"hidden1\")\n", " hidden1_drop = tf.layers.dropout(hidden1, dropout_rate, training=training)\n", " hidden2 = tf.layers.dense(hidden1_drop, n_hidden2, activation=tf.nn.relu,\n", " name=\"hidden2\")\n", " hidden2_drop = tf.layers.dropout(hidden2, dropout_rate, training=training)\n", " logits = tf.layers.dense(hidden2_drop, n_outputs, name=\"outputs\")" ] }, { "cell_type": "code", "execution_count": 102, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "with tf.name_scope(\"train\"):\n", " optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)\n", " training_op = optimizer.minimize(loss) \n", "\n", "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n", " \n", "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()" ] }, { "cell_type": "code", "execution_count": 103, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Validation accuracy: 0.9264\n", "1 Validation accuracy: 0.9446\n", "2 Validation accuracy: 0.9488\n", "3 Validation accuracy: 0.9556\n", "4 Validation accuracy: 0.9612\n", "5 Validation accuracy: 0.9598\n", "6 Validation accuracy: 0.9616\n", "7 Validation accuracy: 0.9674\n", "8 Validation accuracy: 0.967\n", "9 Validation accuracy: 0.9706\n", "10 Validation accuracy: 0.9674\n", "11 Validation accuracy: 0.9678\n", "12 Validation accuracy: 0.9698\n", "13 Validation 
accuracy: 0.97\n", "14 Validation accuracy: 0.971\n", "15 Validation accuracy: 0.9702\n", "16 Validation accuracy: 0.9718\n", "17 Validation accuracy: 0.9716\n", "18 Validation accuracy: 0.9734\n", "19 Validation accuracy: 0.972\n" ] } ], "source": [ "n_epochs = 20\n", "batch_size = 50\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch, training: True})\n", " accuracy_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})\n", " print(epoch, \"Validation accuracy:\", accuracy_val)\n", "\n", " save_path = saver.save(sess, \"./my_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Max norm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's go back to a plain and simple neural net for MNIST with just 2 hidden layers:" ] }, { "cell_type": "code", "execution_count": 104, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28\n", "n_hidden1 = 300\n", "n_hidden2 = 50\n", "n_outputs = 10\n", "\n", "learning_rate = 0.01\n", "momentum = 0.9\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "\n", "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name=\"hidden1\")\n", " hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name=\"hidden2\")\n", " logits = tf.layers.dense(hidden2, n_outputs, name=\"outputs\")\n", "\n", "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "with tf.name_scope(\"train\"):\n", " optimizer = tf.train.MomentumOptimizer(learning_rate, momentum)\n", " training_op = optimizer.minimize(loss) \n", "\n", "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's get a handle on the first hidden layer's weight and create an operation that will compute the clipped weights using the `clip_by_norm()` function. Then we create an assignment operation to assign the clipped weights to the weights variable:" ] }, { "cell_type": "code", "execution_count": 105, "metadata": {}, "outputs": [], "source": [ "threshold = 1.0\n", "weights = tf.get_default_graph().get_tensor_by_name(\"hidden1/kernel:0\")\n", "clipped_weights = tf.clip_by_norm(weights, clip_norm=threshold, axes=1)\n", "clip_weights = tf.assign(weights, clipped_weights)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can do this as well for the second hidden layer:" ] }, { "cell_type": "code", "execution_count": 106, "metadata": {}, "outputs": [], "source": [ "weights2 = tf.get_default_graph().get_tensor_by_name(\"hidden2/kernel:0\")\n", "clipped_weights2 = tf.clip_by_norm(weights2, clip_norm=threshold, axes=1)\n", "clip_weights2 = tf.assign(weights2, clipped_weights2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's add an initializer and a saver:" ] }, { "cell_type": "code", "execution_count": 107, "metadata": {}, "outputs": [], "source": [ "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now we can train the model. 
It's pretty much as usual, except that right after running the `training_op`, we run the `clip_weights` and `clip_weights2` operations:" ] }, { "cell_type": "code", "execution_count": 108, "metadata": {}, "outputs": [], "source": [ "n_epochs = 20\n", "batch_size = 50" ] }, { "cell_type": "code", "execution_count": 109, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Validation accuracy: 0.9568\n", "1 Validation accuracy: 0.9696\n", "2 Validation accuracy: 0.972\n", "3 Validation accuracy: 0.9768\n", "4 Validation accuracy: 0.9784\n", "5 Validation accuracy: 0.9786\n", "6 Validation accuracy: 0.9816\n", "7 Validation accuracy: 0.9808\n", "8 Validation accuracy: 0.981\n", "9 Validation accuracy: 0.983\n", "10 Validation accuracy: 0.9822\n", "11 Validation accuracy: 0.9854\n", "12 Validation accuracy: 0.9822\n", "13 Validation accuracy: 0.9842\n", "14 Validation accuracy: 0.984\n", "15 Validation accuracy: 0.9852\n", "16 Validation accuracy: 0.984\n", "17 Validation accuracy: 0.9844\n", "18 Validation accuracy: 0.9844\n", "19 Validation accuracy: 0.9844\n" ] } ], "source": [ "with tf.Session() as sess: # not shown in the book\n", " init.run() # not shown\n", " for epoch in range(n_epochs): # not shown\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size): # not shown\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " clip_weights.eval()\n", " clip_weights2.eval() # not shown\n", " acc_valid = accuracy.eval(feed_dict={X: X_valid, y: y_valid}) # not shown\n", " print(epoch, \"Validation accuracy:\", acc_valid) # not shown\n", "\n", " save_path = saver.save(sess, \"./my_model_final.ckpt\") # not shown" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The implementation above is straightforward and it works fine, but it is a bit messy. A better approach is to define a `max_norm_regularizer()` function:" ] }, { "cell_type": "code", "execution_count": 110, "metadata": {}, "outputs": [], "source": [ "def max_norm_regularizer(threshold, axes=1, name=\"max_norm\",\n", " collection=\"max_norm\"):\n", " def max_norm(weights):\n", " clipped = tf.clip_by_norm(weights, clip_norm=threshold, axes=axes)\n", " clip_weights = tf.assign(weights, clipped, name=name)\n", " tf.add_to_collection(collection, clip_weights)\n", " return None # there is no regularization loss term\n", " return max_norm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then you can call this function to get a max norm regularizer (with the threshold you want). 
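Note that the regularizer function returns `None` (max-norm introduces no loss term); instead, each call creates a clipping operation and adds it to the \"max_norm\" collection, so that all the clipping operations can be fetched and run together after each training step. 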
When you create a hidden layer, you can pass this regularizer to the `kernel_regularizer` argument:" ] }, { "cell_type": "code", "execution_count": 111, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28\n", "n_hidden1 = 300\n", "n_hidden2 = 50\n", "n_outputs = 10\n", "\n", "learning_rate = 0.01\n", "momentum = 0.9\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")" ] }, { "cell_type": "code", "execution_count": 112, "metadata": {}, "outputs": [], "source": [ "max_norm_reg = max_norm_regularizer(threshold=1.0)\n", "\n", "with tf.name_scope(\"dnn\"):\n", " hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu,\n", " kernel_regularizer=max_norm_reg, name=\"hidden1\")\n", " hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu,\n", " kernel_regularizer=max_norm_reg, name=\"hidden2\")\n", " logits = tf.layers.dense(hidden2, n_outputs, name=\"outputs\")" ] }, { "cell_type": "code", "execution_count": 113, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"loss\"):\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "with tf.name_scope(\"train\"):\n", " optimizer = tf.train.MomentumOptimizer(learning_rate, momentum)\n", " training_op = optimizer.minimize(loss) \n", "\n", "with tf.name_scope(\"eval\"):\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n", "\n", "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Training is as usual, except you must run the weights clipping operations after each training operation:" ] }, { "cell_type": "code", "execution_count": 114, "metadata": {}, "outputs": [], "source": [ "n_epochs = 20\n", "batch_size = 50" ] }, { "cell_type": "code", "execution_count": 115, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Validation accuracy: 0.9556\n", "1 Validation accuracy: 0.9698\n", "2 Validation accuracy: 0.9726\n", "3 Validation accuracy: 0.9744\n", "4 Validation accuracy: 0.9762\n", "5 Validation accuracy: 0.9772\n", "6 Validation accuracy: 0.979\n", "7 Validation accuracy: 0.9816\n", "8 Validation accuracy: 0.9814\n", "9 Validation accuracy: 0.9812\n", "10 Validation accuracy: 0.9818\n", "11 Validation accuracy: 0.9816\n", "12 Validation accuracy: 0.9802\n", "13 Validation accuracy: 0.9822\n", "14 Validation accuracy: 0.982\n", "15 Validation accuracy: 0.9812\n", "16 Validation accuracy: 0.9824\n", "17 Validation accuracy: 0.9836\n", "18 Validation accuracy: 0.9824\n", "19 Validation accuracy: 0.9826\n" ] } ], "source": [ "clip_all_weights = tf.get_collection(\"max_norm\")\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " for epoch in range(n_epochs):\n", " for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " sess.run(clip_all_weights)\n", " acc_valid = accuracy.eval(feed_dict={X: X_valid, y: y_valid}) # not shown\n", " print(epoch, \"Validation accuracy:\", acc_valid) # not shown\n", "\n", " save_path = saver.save(sess, \"./my_model_final.ckpt\") # not shown" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# Exercise solutions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 
1. to 7." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See appendix A." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 8. Deep Learning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 8.1." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_Exercise: Build a DNN with five hidden layers of 100 neurons each, He initialization, and the ELU activation function._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will need similar DNNs in the next exercises, so let's create a function to build this DNN:" ] }, { "cell_type": "code", "execution_count": 116, "metadata": {}, "outputs": [], "source": [ "he_init = tf.variance_scaling_initializer()\n", "\n", "def dnn(inputs, n_hidden_layers=5, n_neurons=100, name=None,\n", " activation=tf.nn.elu, initializer=he_init):\n", " with tf.variable_scope(name, \"dnn\"):\n", " for layer in range(n_hidden_layers):\n", " inputs = tf.layers.dense(inputs, n_neurons, activation=activation,\n", " kernel_initializer=initializer,\n", " name=\"hidden%d\" % (layer + 1))\n", " return inputs" ] }, { "cell_type": "code", "execution_count": 117, "metadata": {}, "outputs": [], "source": [ "n_inputs = 28 * 28 # MNIST\n", "n_outputs = 5\n", "\n", "reset_graph()\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "\n", "dnn_outputs = dnn(X)\n", "\n", "logits = tf.layers.dense(dnn_outputs, n_outputs, kernel_initializer=he_init, name=\"logits\")\n", "Y_proba = tf.nn.softmax(logits, name=\"Y_proba\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 8.2." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_Exercise: Using Adam optimization and early stopping, try training it on MNIST but only on digits 0 to 4, as we will use transfer learning for digits 5 to 9 in the next exercise. 
You will need a softmax output layer with five neurons, and as always make sure to save checkpoints at regular intervals and save the final model so you can reuse it later._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's complete the graph with the cost function, the training op, and all the other usual components:" ] }, { "cell_type": "code", "execution_count": 118, "metadata": {}, "outputs": [], "source": [ "learning_rate = 0.01\n", "\n", "xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", "loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "optimizer = tf.train.AdamOptimizer(learning_rate)\n", "training_op = optimizer.minimize(loss, name=\"training_op\")\n", "\n", "correct = tf.nn.in_top_k(logits, y, 1)\n", "accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")\n", "\n", "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's create the training set, validation and test set (we need the validation set to implement early stopping):" ] }, { "cell_type": "code", "execution_count": 119, "metadata": {}, "outputs": [], "source": [ "X_train1 = X_train[y_train < 5]\n", "y_train1 = y_train[y_train < 5]\n", "X_valid1 = X_valid[y_valid < 5]\n", "y_valid1 = y_valid[y_valid < 5]\n", "X_test1 = X_test[y_test < 5]\n", "y_test1 = y_test[y_test < 5]" ] }, { "cell_type": "code", "execution_count": 120, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0\tValidation loss: 0.116407\tBest loss: 0.116407\tAccuracy: 97.58%\n", "1\tValidation loss: 0.180534\tBest loss: 0.116407\tAccuracy: 97.11%\n", "2\tValidation loss: 0.227535\tBest loss: 0.116407\tAccuracy: 93.86%\n", "3\tValidation loss: 0.107346\tBest loss: 0.107346\tAccuracy: 97.54%\n", "4\tValidation loss: 0.302668\tBest loss: 0.107346\tAccuracy: 95.35%\n", "5\tValidation loss: 1.631054\tBest loss: 0.107346\tAccuracy: 22.01%\n", "6\tValidation loss: 1.635262\tBest loss: 0.107346\tAccuracy: 18.73%\n", "7\tValidation loss: 1.671200\tBest loss: 0.107346\tAccuracy: 22.01%\n", "8\tValidation loss: 1.695277\tBest loss: 0.107346\tAccuracy: 19.27%\n", "9\tValidation loss: 1.744607\tBest loss: 0.107346\tAccuracy: 20.91%\n", "10\tValidation loss: 1.629857\tBest loss: 0.107346\tAccuracy: 22.01%\n", "11\tValidation loss: 1.810803\tBest loss: 0.107346\tAccuracy: 22.01%\n", "12\tValidation loss: 1.675703\tBest loss: 0.107346\tAccuracy: 18.73%\n", "13\tValidation loss: 1.633233\tBest loss: 0.107346\tAccuracy: 20.91%\n", "14\tValidation loss: 1.652905\tBest loss: 0.107346\tAccuracy: 20.91%\n", "15\tValidation loss: 1.635937\tBest loss: 0.107346\tAccuracy: 20.91%\n", "16\tValidation loss: 1.718919\tBest loss: 0.107346\tAccuracy: 19.08%\n", "17\tValidation loss: 1.682458\tBest loss: 0.107346\tAccuracy: 19.27%\n", "18\tValidation loss: 1.675366\tBest loss: 0.107346\tAccuracy: 18.73%\n", "19\tValidation loss: 1.645800\tBest loss: 0.107346\tAccuracy: 19.08%\n", "20\tValidation loss: 1.722334\tBest loss: 0.107346\tAccuracy: 22.01%\n", "21\tValidation loss: 1.656418\tBest loss: 0.107346\tAccuracy: 22.01%\n", "22\tValidation loss: 1.643529\tBest loss: 0.107346\tAccuracy: 18.73%\n", "23\tValidation loss: 1.644233\tBest loss: 0.107346\tAccuracy: 19.27%\n", "Early stopping!\n", "INFO:tensorflow:Restoring parameters from ./my_mnist_model_0_to_4.ckpt\n", "Final test accuracy: 97.26%\n" ] } ], "source": [ "n_epochs = 1000\n", "batch_size = 20\n", "\n", 
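"\n", "# Early stopping: interrupt training when the validation loss has not improved\n", "# for max_checks_without_progress consecutive epochs\n",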
"max_checks_without_progress = 20\n", "checks_without_progress = 0\n", "best_loss = np.infty\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", "\n", " for epoch in range(n_epochs):\n", " rnd_idx = np.random.permutation(len(X_train1))\n", " for rnd_indices in np.array_split(rnd_idx, len(X_train1) // batch_size):\n", " X_batch, y_batch = X_train1[rnd_indices], y_train1[rnd_indices]\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid1, y: y_valid1})\n", " if loss_val < best_loss:\n", " save_path = saver.save(sess, \"./my_mnist_model_0_to_4.ckpt\")\n", " best_loss = loss_val\n", " checks_without_progress = 0\n", " else:\n", " checks_without_progress += 1\n", " if checks_without_progress > max_checks_without_progress:\n", " print(\"Early stopping!\")\n", " break\n", " print(\"{}\\tValidation loss: {:.6f}\\tBest loss: {:.6f}\\tAccuracy: {:.2f}%\".format(\n", " epoch, loss_val, best_loss, acc_val * 100))\n", "\n", "with tf.Session() as sess:\n", " saver.restore(sess, \"./my_mnist_model_0_to_4.ckpt\")\n", " acc_test = accuracy.eval(feed_dict={X: X_test1, y: y_test1})\n", " print(\"Final test accuracy: {:.2f}%\".format(acc_test * 100))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This test accuracy is not too bad, but let's see if we can do better by tuning the hyperparameters." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 8.3." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_Exercise: Tune the hyperparameters using cross-validation and see what precision you can achieve._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create a `DNNClassifier` class, compatible with Scikit-Learn's `RandomizedSearchCV` class, to perform hyperparameter tuning. Here are the key points of this implementation:\n", "* the `__init__()` method (constructor) does nothing more than create instance variables for each of the hyperparameters.\n", "* the `fit()` method creates the graph, starts a session and trains the model:\n", " * it calls the `_build_graph()` method to build the graph (much lile the graph we defined earlier). Once this method is done creating the graph, it saves all the important operations as instance variables for easy access by other methods.\n", " * the `_dnn()` method builds the hidden layers, just like the `dnn()` function above, but also with support for batch normalization and dropout (for the next exercises).\n", " * if the `fit()` method is given a validation set (`X_valid` and `y_valid`), then it implements early stopping. This implementation does not save the best model to disk, but rather to memory: it uses the `_get_model_params()` method to get all the graph's variables and their values, and the `_restore_model_params()` method to restore the variable values (of the best model found). This trick helps speed up training.\n", " * After the `fit()` method has finished training the model, it keeps the session open so that predictions can be made quickly, without having to save a model to disk and restore it for every prediction. You can close the session by calling the `close_session()` method.\n", "* the `predict_proba()` method uses the trained model to predict the class probabilities.\n", "* the `predict()` method calls `predict_proba()` and returns the class with the highest probability, for each instance." 
] }, { "cell_type": "code", "execution_count": 121, "metadata": {}, "outputs": [], "source": [ "from sklearn.base import BaseEstimator, ClassifierMixin\n", "from sklearn.exceptions import NotFittedError\n", "\n", "class DNNClassifier(BaseEstimator, ClassifierMixin):\n", " def __init__(self, n_hidden_layers=5, n_neurons=100, optimizer_class=tf.train.AdamOptimizer,\n", " learning_rate=0.01, batch_size=20, activation=tf.nn.elu, initializer=he_init,\n", " batch_norm_momentum=None, dropout_rate=None, random_state=None):\n", " \"\"\"Initialize the DNNClassifier by simply storing all the hyperparameters.\"\"\"\n", " self.n_hidden_layers = n_hidden_layers\n", " self.n_neurons = n_neurons\n", " self.optimizer_class = optimizer_class\n", " self.learning_rate = learning_rate\n", " self.batch_size = batch_size\n", " self.activation = activation\n", " self.initializer = initializer\n", " self.batch_norm_momentum = batch_norm_momentum\n", " self.dropout_rate = dropout_rate\n", " self.random_state = random_state\n", " self._session = None\n", "\n", " def _dnn(self, inputs):\n", " \"\"\"Build the hidden layers, with support for batch normalization and dropout.\"\"\"\n", " for layer in range(self.n_hidden_layers):\n", " if self.dropout_rate:\n", " inputs = tf.layers.dropout(inputs, self.dropout_rate, training=self._training)\n", " inputs = tf.layers.dense(inputs, self.n_neurons,\n", " kernel_initializer=self.initializer,\n", " name=\"hidden%d\" % (layer + 1))\n", " if self.batch_norm_momentum:\n", " inputs = tf.layers.batch_normalization(inputs, momentum=self.batch_norm_momentum,\n", " training=self._training)\n", " inputs = self.activation(inputs, name=\"hidden%d_out\" % (layer + 1))\n", " return inputs\n", "\n", " def _build_graph(self, n_inputs, n_outputs):\n", " \"\"\"Build the same model as earlier\"\"\"\n", " if self.random_state is not None:\n", " tf.set_random_seed(self.random_state)\n", " np.random.seed(self.random_state)\n", "\n", " X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", " y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "\n", " if self.batch_norm_momentum or self.dropout_rate:\n", " self._training = tf.placeholder_with_default(False, shape=(), name='training')\n", " else:\n", " self._training = None\n", "\n", " dnn_outputs = self._dnn(X)\n", "\n", " logits = tf.layers.dense(dnn_outputs, n_outputs, kernel_initializer=he_init, name=\"logits\")\n", " Y_proba = tf.nn.softmax(logits, name=\"Y_proba\")\n", "\n", " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,\n", " logits=logits)\n", " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", " optimizer = self.optimizer_class(learning_rate=self.learning_rate)\n", " training_op = optimizer.minimize(loss)\n", "\n", " correct = tf.nn.in_top_k(logits, y, 1)\n", " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")\n", "\n", " init = tf.global_variables_initializer()\n", " saver = tf.train.Saver()\n", "\n", " # Make the important operations available easily through instance variables\n", " self._X, self._y = X, y\n", " self._Y_proba, self._loss = Y_proba, loss\n", " self._training_op, self._accuracy = training_op, accuracy\n", " self._init, self._saver = init, saver\n", "\n", " def close_session(self):\n", " if self._session:\n", " self._session.close()\n", "\n", " def _get_model_params(self):\n", " \"\"\"Get all variable values (used for early stopping, faster than saving to disk)\"\"\"\n", " with self._graph.as_default():\n", " gvars = 
tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)\n", " return {gvar.op.name: value for gvar, value in zip(gvars, self._session.run(gvars))}\n", "\n", " def _restore_model_params(self, model_params):\n", " \"\"\"Set all variables to the given values (for early stopping, faster than loading from disk)\"\"\"\n", " gvar_names = list(model_params.keys())\n", " assign_ops = {gvar_name: self._graph.get_operation_by_name(gvar_name + \"/Assign\")\n", " for gvar_name in gvar_names}\n", " init_values = {gvar_name: assign_op.inputs[1] for gvar_name, assign_op in assign_ops.items()}\n", " feed_dict = {init_values[gvar_name]: model_params[gvar_name] for gvar_name in gvar_names}\n", " self._session.run(assign_ops, feed_dict=feed_dict)\n", "\n", " def fit(self, X, y, n_epochs=100, X_valid=None, y_valid=None):\n", " \"\"\"Fit the model to the training set. If X_valid and y_valid are provided, use early stopping.\"\"\"\n", " self.close_session()\n", "\n", " # infer n_inputs and n_outputs from the training set.\n", " n_inputs = X.shape[1]\n", " self.classes_ = np.unique(y)\n", " n_outputs = len(self.classes_)\n", " \n", " # Translate the labels vector to a vector of sorted class indices, containing\n", " # integers from 0 to n_outputs - 1.\n", " # For example, if y is equal to [8, 8, 9, 5, 7, 6, 6, 6], then the sorted class\n", " # labels (self.classes_) will be equal to [5, 6, 7, 8, 9], and the labels vector\n", " # will be translated to [3, 3, 4, 0, 2, 1, 1, 1]\n", " self.class_to_index_ = {label: index\n", " for index, label in enumerate(self.classes_)}\n", " y = np.array([self.class_to_index_[label]\n", " for label in y], dtype=np.int32)\n", " \n", " self._graph = tf.Graph()\n", " with self._graph.as_default():\n", " self._build_graph(n_inputs, n_outputs)\n", " # extra ops for batch normalization\n", " extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)\n", "\n", " # needed in case of early stopping\n", " max_checks_without_progress = 20\n", " checks_without_progress = 0\n", " best_loss = np.infty\n", " best_params = None\n", " \n", " # Now train the model!\n", " self._session = tf.Session(graph=self._graph)\n", " with self._session.as_default() as sess:\n", " self._init.run()\n", " for epoch in range(n_epochs):\n", " rnd_idx = np.random.permutation(len(X))\n", " for rnd_indices in np.array_split(rnd_idx, len(X) // self.batch_size):\n", " X_batch, y_batch = X[rnd_indices], y[rnd_indices]\n", " feed_dict = {self._X: X_batch, self._y: y_batch}\n", " if self._training is not None:\n", " feed_dict[self._training] = True\n", " sess.run(self._training_op, feed_dict=feed_dict)\n", " if extra_update_ops:\n", " sess.run(extra_update_ops, feed_dict=feed_dict)\n", " if X_valid is not None and y_valid is not None:\n", " loss_val, acc_val = sess.run([self._loss, self._accuracy],\n", " feed_dict={self._X: X_valid,\n", " self._y: y_valid})\n", " if loss_val < best_loss:\n", " best_params = self._get_model_params()\n", " best_loss = loss_val\n", " checks_without_progress = 0\n", " else:\n", " checks_without_progress += 1\n", " print(\"{}\\tValidation loss: {:.6f}\\tBest loss: {:.6f}\\tAccuracy: {:.2f}%\".format(\n", " epoch, loss_val, best_loss, acc_val * 100))\n", " if checks_without_progress > max_checks_without_progress:\n", " print(\"Early stopping!\")\n", " break\n", " else:\n", " loss_train, acc_train = sess.run([self._loss, self._accuracy],\n", " feed_dict={self._X: X_batch,\n", " self._y: y_batch})\n", " print(\"{}\\tLast training batch loss: {:.6f}\\tAccuracy: {:.2f}%\".format(\n", " epoch, 
loss_train, acc_train * 100))\n", " # If we used early stopping then rollback to the best model found\n", " if best_params:\n", " self._restore_model_params(best_params)\n", " return self\n", "\n", " def predict_proba(self, X):\n", " if not self._session:\n", " raise NotFittedError(\"This %s instance is not fitted yet\" % self.__class__.__name__)\n", " with self._session.as_default() as sess:\n", " return self._Y_proba.eval(feed_dict={self._X: X})\n", "\n", " def predict(self, X):\n", " class_indices = np.argmax(self.predict_proba(X), axis=1)\n", " return np.array([[self.classes_[class_index]]\n", " for class_index in class_indices], np.int32)\n", "\n", " def save(self, path):\n", " self._saver.save(self._session, path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see if we get the exact same accuracy as earlier using this class (without dropout or batch norm):" ] }, { "cell_type": "code", "execution_count": 122, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0\tValidation loss: 0.116407\tBest loss: 0.116407\tAccuracy: 97.58%\n", "1\tValidation loss: 0.180534\tBest loss: 0.116407\tAccuracy: 97.11%\n", "2\tValidation loss: 0.227535\tBest loss: 0.116407\tAccuracy: 93.86%\n", "3\tValidation loss: 0.107346\tBest loss: 0.107346\tAccuracy: 97.54%\n", "4\tValidation loss: 0.302668\tBest loss: 0.107346\tAccuracy: 95.35%\n", "5\tValidation loss: 1.631054\tBest loss: 0.107346\tAccuracy: 22.01%\n", "6\tValidation loss: 1.635262\tBest loss: 0.107346\tAccuracy: 18.73%\n", "7\tValidation loss: 1.671200\tBest loss: 0.107346\tAccuracy: 22.01%\n", "8\tValidation loss: 1.695277\tBest loss: 0.107346\tAccuracy: 19.27%\n", "9\tValidation loss: 1.744607\tBest loss: 0.107346\tAccuracy: 20.91%\n", "10\tValidation loss: 1.629857\tBest loss: 0.107346\tAccuracy: 22.01%\n", "11\tValidation loss: 1.810803\tBest loss: 0.107346\tAccuracy: 22.01%\n", "12\tValidation loss: 1.675703\tBest loss: 0.107346\tAccuracy: 18.73%\n", "13\tValidation loss: 1.633233\tBest loss: 0.107346\tAccuracy: 20.91%\n", "14\tValidation loss: 1.652905\tBest loss: 0.107346\tAccuracy: 20.91%\n", "15\tValidation loss: 1.635937\tBest loss: 0.107346\tAccuracy: 20.91%\n", "16\tValidation loss: 1.718919\tBest loss: 0.107346\tAccuracy: 19.08%\n", "17\tValidation loss: 1.682458\tBest loss: 0.107346\tAccuracy: 19.27%\n", "18\tValidation loss: 1.675366\tBest loss: 0.107346\tAccuracy: 18.73%\n", "19\tValidation loss: 1.645800\tBest loss: 0.107346\tAccuracy: 19.08%\n", "20\tValidation loss: 1.722334\tBest loss: 0.107346\tAccuracy: 22.01%\n", "21\tValidation loss: 1.656418\tBest loss: 0.107346\tAccuracy: 22.01%\n", "22\tValidation loss: 1.643529\tBest loss: 0.107346\tAccuracy: 18.73%\n", "23\tValidation loss: 1.644233\tBest loss: 0.107346\tAccuracy: 19.27%\n", "24\tValidation loss: 1.690035\tBest loss: 0.107346\tAccuracy: 18.73%\n", "Early stopping!\n" ] }, { "data": { "text/plain": [ "DNNClassifier(activation=,\n", " batch_norm_momentum=None, batch_size=20, dropout_rate=None,\n", " initializer=,\n", " learning_rate=0.01, n_hidden_layers=5, n_neurons=100,\n", " optimizer_class=,\n", " random_state=42)" ] }, "execution_count": 122, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dnn_clf = DNNClassifier(random_state=42)\n", "dnn_clf.fit(X_train1, y_train1, n_epochs=1000, X_valid=X_valid1, y_valid=y_valid1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model is trained, let's see if it gets the same accuracy as earlier:" ] }, { "cell_type": "code", 
"execution_count": 123, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.9725627553998832" ] }, "execution_count": 123, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.metrics import accuracy_score\n", "\n", "y_pred = dnn_clf.predict(X_test1)\n", "accuracy_score(y_test1, y_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Yep! Working fine. Now we can use Scikit-Learn's `RandomizedSearchCV` class to search for better hyperparameters (this may take over an hour, depending on your system):" ] }, { "cell_type": "code", "execution_count": 124, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Fitting 3 folds for each of 50 candidates, totalling 150 fits\n", "[CV] n_neurons=10, learning_rate=0.05, batch_size=100, activation= \n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "0\tValidation loss: 0.143224\tBest loss: 0.143224\tAccuracy: 95.82%\n", "1\tValidation loss: 0.143304\tBest loss: 0.143224\tAccuracy: 96.60%\n", "2\tValidation loss: 0.106488\tBest loss: 0.106488\tAccuracy: 96.95%\n", "3\tValidation loss: 0.307107\tBest loss: 0.106488\tAccuracy: 92.34%\n", "4\tValidation loss: 0.157948\tBest loss: 0.106488\tAccuracy: 95.50%\n", "5\tValidation loss: 0.131002\tBest loss: 0.106488\tAccuracy: 96.40%\n", "6\tValidation loss: 0.931847\tBest loss: 0.106488\tAccuracy: 58.29%\n", "7\tValidation loss: 0.872748\tBest loss: 0.106488\tAccuracy: 57.97%\n", "8\tValidation loss: 0.699336\tBest loss: 0.106488\tAccuracy: 58.29%\n", "9\tValidation loss: 0.853343\tBest loss: 0.106488\tAccuracy: 57.27%\n", "10\tValidation loss: 0.738493\tBest loss: 0.106488\tAccuracy: 59.19%\n", "11\tValidation loss: 0.670431\tBest loss: 0.106488\tAccuracy: 59.23%\n", "12\tValidation loss: 0.717334\tBest loss: 0.106488\tAccuracy: 59.11%\n", "13\tValidation loss: 0.718714\tBest loss: 0.106488\tAccuracy: 56.57%\n", "14\tValidation loss: 0.679313\tBest loss: 0.106488\tAccuracy: 59.07%\n", "15\tValidation loss: 0.732966\tBest loss: 0.106488\tAccuracy: 58.41%\n", "16\tValidation loss: 0.666333\tBest loss: 0.106488\tAccuracy: 60.48%\n", "17\tValidation loss: 0.677045\tBest loss: 0.106488\tAccuracy: 61.18%\n", "18\tValidation loss: 0.666103\tBest loss: 0.106488\tAccuracy: 59.97%\n", "19\tValidation loss: 0.710005\tBest loss: 0.106488\tAccuracy: 63.21%\n", "20\tValidation loss: 1.037921\tBest loss: 0.106488\tAccuracy: 64.03%\n", "21\tValidation loss: 1.626959\tBest loss: 0.106488\tAccuracy: 19.27%\n", "22\tValidation loss: 1.615710\tBest loss: 0.106488\tAccuracy: 18.73%\n", "23\tValidation loss: 1.609028\tBest loss: 0.106488\tAccuracy: 20.91%\n", "Early stopping!\n", "[CV] n_neurons=10, learning_rate=0.05, batch_size=100, activation=, total= 4.7s\n", "[CV] n_neurons=10, learning_rate=0.05, batch_size=100, activation= \n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 4.8s remaining: 0.0s\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "0\tValidation loss: 0.137274\tBest loss: 0.137274\tAccuracy: 96.33%\n", "1\tValidation loss: 0.145733\tBest loss: 0.137274\tAccuracy: 95.97%\n", "2\tValidation loss: 0.171077\tBest loss: 0.137274\tAccuracy: 95.90%\n", "3\tValidation loss: 0.139310\tBest loss: 0.137274\tAccuracy: 96.79%\n", "<<5140 more lines>>\n", "51\tValidation loss: 0.400818\tBest loss: 0.265362\tAccuracy: 
97.11%\n", "52\tValidation loss: 0.509595\tBest loss: 0.265362\tAccuracy: 96.99%\n", "Early stopping!\n", "[CV] n_neurons=90, learning_rate=0.1, batch_size=500, activation=.parametrized_leaky_relu at 0x13eb49f28>, total= 11.3s\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "[Parallel(n_jobs=1)]: Done 150 out of 150 | elapsed: 46.9min finished\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "0\tValidation loss: 0.069587\tBest loss: 0.069587\tAccuracy: 98.12%\n", "1\tValidation loss: 0.045462\tBest loss: 0.045462\tAccuracy: 98.48%\n", "2\tValidation loss: 0.046439\tBest loss: 0.045462\tAccuracy: 98.40%\n", "3\tValidation loss: 0.037278\tBest loss: 0.037278\tAccuracy: 98.59%\n", "4\tValidation loss: 0.039989\tBest loss: 0.037278\tAccuracy: 98.51%\n", "5\tValidation loss: 0.039621\tBest loss: 0.037278\tAccuracy: 98.79%\n", "6\tValidation loss: 0.035959\tBest loss: 0.035959\tAccuracy: 99.06%\n", "7\tValidation loss: 0.033321\tBest loss: 0.033321\tAccuracy: 99.06%\n", "8\tValidation loss: 0.044559\tBest loss: 0.033321\tAccuracy: 98.87%\n", "9\tValidation loss: 0.035999\tBest loss: 0.033321\tAccuracy: 99.10%\n", "10\tValidation loss: 0.042629\tBest loss: 0.033321\tAccuracy: 98.98%\n", "11\tValidation loss: 0.059839\tBest loss: 0.033321\tAccuracy: 98.71%\n", "12\tValidation loss: 0.044683\tBest loss: 0.033321\tAccuracy: 98.87%\n", "13\tValidation loss: 0.051294\tBest loss: 0.033321\tAccuracy: 98.75%\n", "14\tValidation loss: 0.050140\tBest loss: 0.033321\tAccuracy: 98.98%\n", "15\tValidation loss: 0.051109\tBest loss: 0.033321\tAccuracy: 98.79%\n", "16\tValidation loss: 0.072444\tBest loss: 0.033321\tAccuracy: 97.97%\n", "17\tValidation loss: 0.063308\tBest loss: 0.033321\tAccuracy: 98.71%\n", "18\tValidation loss: 0.051853\tBest loss: 0.033321\tAccuracy: 98.87%\n", "19\tValidation loss: 0.058982\tBest loss: 0.033321\tAccuracy: 98.91%\n", "20\tValidation loss: 0.046894\tBest loss: 0.033321\tAccuracy: 99.06%\n", "21\tValidation loss: 0.039036\tBest loss: 0.033321\tAccuracy: 99.02%\n", "22\tValidation loss: 0.057221\tBest loss: 0.033321\tAccuracy: 98.32%\n", "23\tValidation loss: 0.054618\tBest loss: 0.033321\tAccuracy: 98.75%\n", "24\tValidation loss: 0.039252\tBest loss: 0.033321\tAccuracy: 99.14%\n", "25\tValidation loss: 0.111809\tBest loss: 0.033321\tAccuracy: 98.05%\n", "26\tValidation loss: 0.060662\tBest loss: 0.033321\tAccuracy: 98.98%\n", "27\tValidation loss: 0.073774\tBest loss: 0.033321\tAccuracy: 99.02%\n", "28\tValidation loss: 0.048667\tBest loss: 0.033321\tAccuracy: 99.18%\n", "Early stopping!\n" ] }, { "data": { "text/plain": [ "RandomizedSearchCV(cv='warn', error_score='raise-deprecating',\n", " estimator=DNNClassifier(activation=,\n", " batch_norm_momentum=None, batch_size=20, dropout_rate=None,\n", " initializer=,\n", " learning_rate=0.01, n_hidden_layers=5, n_neurons=100,\n", " optimizer_class=,\n", " random_state=42),\n", " fit_params=None, iid='warn', n_iter=50, n_jobs=None,\n", " param_distributions={'n_neurons': [10, 30, 50, 70, 90, 100, 120, 140, 160], 'batch_size': [10, 50, 100, 500], 'learning_rate': [0.01, 0.02, 0.05, 0.1], 'activation': [, , .parametrized_leaky_relu at 0x133807c80>, .parametrized_leaky_relu at 0x13eb49f28>]},\n", " pre_dispatch='2*n_jobs', random_state=42, refit=True,\n", " return_train_score='warn', scoring=None, verbose=2)" ] }, "execution_count": 124, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.model_selection import RandomizedSearchCV\n", "\n", "def leaky_relu(alpha=0.01):\n", " def 
parametrized_leaky_relu(z, name=None):\n", " return tf.maximum(alpha * z, z, name=name)\n", " return parametrized_leaky_relu\n", "\n", "param_distribs = {\n", " \"n_neurons\": [10, 30, 50, 70, 90, 100, 120, 140, 160],\n", " \"batch_size\": [10, 50, 100, 500],\n", " \"learning_rate\": [0.01, 0.02, 0.05, 0.1],\n", " \"activation\": [tf.nn.relu, tf.nn.elu, leaky_relu(alpha=0.01), leaky_relu(alpha=0.1)],\n", " # you could also try exploring different numbers of hidden layers, different optimizers, etc.\n", " #\"n_hidden_layers\": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n", " #\"optimizer_class\": [tf.train.AdamOptimizer, partial(tf.train.MomentumOptimizer, momentum=0.95)],\n", "}\n", "\n", "rnd_search = RandomizedSearchCV(DNNClassifier(random_state=42), param_distribs, n_iter=50,\n", " cv=3, random_state=42, verbose=2)\n", "rnd_search.fit(X_train1, y_train1, X_valid=X_valid1, y_valid=y_valid1, n_epochs=1000)\n", "\n", "# If you have Scikit-Learn 0.18 or earlier, you should upgrade, or use the fit_params argument:\n", "# fit_params = dict(X_valid=X_valid1, y_valid=y_valid1, n_epochs=1000)\n", "# rnd_search = RandomizedSearchCV(DNNClassifier(random_state=42), param_distribs, n_iter=50,\n", "# fit_params=fit_params, random_state=42, verbose=2)\n", "# rnd_search.fit(X_train1, y_train1)\n" ] }, { "cell_type": "code", "execution_count": 125, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'n_neurons': 90,\n", " 'learning_rate': 0.01,\n", " 'batch_size': 500,\n", " 'activation': <function __main__.leaky_relu.<locals>.parametrized_leaky_relu(z, name=None)>}" ] }, "execution_count": 125, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rnd_search.best_params_" ] }, { "cell_type": "code", "execution_count": 126, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.9891029383148473" ] }, "execution_count": 126, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred = rnd_search.predict(X_test1)\n", "accuracy_score(y_test1, y_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Wonderful! Tuning the hyperparameters got us up to 98.91% accuracy! It may not sound like a great improvement to go from 97.26% to 98.91% accuracy, but consider the error rate: it went from roughly 2.7% to 1.1%. That's almost a 60% reduction in the number of errors this model will produce!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's a good idea to save this model:" ] }, { "cell_type": "code", "execution_count": 127, "metadata": {}, "outputs": [], "source": [ "rnd_search.best_estimator_.save(\"./my_best_mnist_model_0_to_4\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 8.4." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_Exercise: Now try adding Batch Normalization and compare the learning curves: is it converging faster than before? 
Does it produce a better model?_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's train the best model found, once again, to see how fast it converges (alternatively, you could tweak the code above to make it write summaries for TensorBoard, so you can visualize the learning curve):" ] }, { "cell_type": "code", "execution_count": 128, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0\tValidation loss: 0.083541\tBest loss: 0.083541\tAccuracy: 97.54%\n", "1\tValidation loss: 0.052198\tBest loss: 0.052198\tAccuracy: 98.40%\n", "2\tValidation loss: 0.044553\tBest loss: 0.044553\tAccuracy: 98.71%\n", "3\tValidation loss: 0.051113\tBest loss: 0.044553\tAccuracy: 98.48%\n", "4\tValidation loss: 0.046304\tBest loss: 0.044553\tAccuracy: 98.75%\n", "5\tValidation loss: 0.037796\tBest loss: 0.037796\tAccuracy: 98.91%\n", "6\tValidation loss: 0.048525\tBest loss: 0.037796\tAccuracy: 98.67%\n", "7\tValidation loss: 0.039877\tBest loss: 0.037796\tAccuracy: 98.75%\n", "8\tValidation loss: 0.038729\tBest loss: 0.037796\tAccuracy: 98.98%\n", "9\tValidation loss: 0.064167\tBest loss: 0.037796\tAccuracy: 98.24%\n", "10\tValidation loss: 0.057274\tBest loss: 0.037796\tAccuracy: 98.79%\n", "11\tValidation loss: 0.064388\tBest loss: 0.037796\tAccuracy: 98.55%\n", "12\tValidation loss: 0.056382\tBest loss: 0.037796\tAccuracy: 98.63%\n", "13\tValidation loss: 0.049408\tBest loss: 0.037796\tAccuracy: 98.91%\n", "14\tValidation loss: 0.038494\tBest loss: 0.037796\tAccuracy: 99.10%\n", "15\tValidation loss: 0.064619\tBest loss: 0.037796\tAccuracy: 98.67%\n", "16\tValidation loss: 0.055027\tBest loss: 0.037796\tAccuracy: 98.91%\n", "17\tValidation loss: 0.054773\tBest loss: 0.037796\tAccuracy: 98.91%\n", "18\tValidation loss: 0.076131\tBest loss: 0.037796\tAccuracy: 98.71%\n", "19\tValidation loss: 0.063031\tBest loss: 0.037796\tAccuracy: 98.59%\n", "20\tValidation loss: 0.120501\tBest loss: 0.037796\tAccuracy: 98.55%\n", "21\tValidation loss: 3.922006\tBest loss: 0.037796\tAccuracy: 94.14%\n", "22\tValidation loss: 0.395737\tBest loss: 0.037796\tAccuracy: 96.83%\n", "23\tValidation loss: 0.237014\tBest loss: 0.037796\tAccuracy: 96.56%\n", "24\tValidation loss: 0.159249\tBest loss: 0.037796\tAccuracy: 97.07%\n", "25\tValidation loss: 0.228444\tBest loss: 0.037796\tAccuracy: 95.74%\n", "26\tValidation loss: 0.134490\tBest loss: 0.037796\tAccuracy: 96.99%\n", "Early stopping!\n" ] }, { "data": { "text/plain": [ "DNNClassifier(activation=.parametrized_leaky_relu at 0x13e25ea60>,\n", " batch_norm_momentum=None, batch_size=500, dropout_rate=None,\n", " initializer=,\n", " learning_rate=0.01, n_hidden_layers=5, n_neurons=140,\n", " optimizer_class=,\n", " random_state=42)" ] }, "execution_count": 128, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dnn_clf = DNNClassifier(activation=leaky_relu(alpha=0.1), batch_size=500, learning_rate=0.01,\n", " n_neurons=140, random_state=42)\n", "dnn_clf.fit(X_train1, y_train1, n_epochs=1000, X_valid=X_valid1, y_valid=y_valid1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The best loss is reached at epoch 5." 
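] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to actually visualize this learning curve in TensorBoard, as suggested above, here is one minimal way to do it (a sketch, not part of the `DNNClassifier` class): since the validation loss is available as a plain Python float inside `fit()`, it can be written with a `tf.Summary` protocol buffer, without adding any ops to the graph. The loop below just replays a few hard-coded values for illustration; in `fit()` you would log the real `loss_val` at each epoch instead, then run `tensorboard --logdir tf_logs` to view the curve:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from datetime import datetime\n", "\n", "logdir = \"tf_logs/run-{}\".format(datetime.utcnow().strftime(\"%Y%m%d%H%M%S\"))\n", "file_writer = tf.summary.FileWriter(logdir)\n", "\n", "# illustrative values only; in fit() you would pass the real loss_val\n", "for epoch, loss_val in enumerate([0.0835, 0.0522, 0.0446, 0.0511, 0.0463]):\n", "    summary = tf.Summary(value=[tf.Summary.Value(tag=\"validation_loss\",\n", "                                                 simple_value=loss_val)])\n", "    file_writer.add_summary(summary, epoch)\n", "file_writer.close()"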
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check that we do indeed get 98.9% accuracy on the test set:" ] }, { "cell_type": "code", "execution_count": 129, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.9898812998637867" ] }, "execution_count": 129, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred = dnn_clf.predict(X_test1)\n", "accuracy_score(y_test1, y_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Good, now let's use the exact same model, but this time with batch normalization:" ] }, { "cell_type": "code", "execution_count": 130, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0\tValidation loss: 0.046685\tBest loss: 0.046685\tAccuracy: 98.63%\n", "1\tValidation loss: 0.040820\tBest loss: 0.040820\tAccuracy: 98.79%\n", "2\tValidation loss: 0.046557\tBest loss: 0.040820\tAccuracy: 98.67%\n", "3\tValidation loss: 0.032236\tBest loss: 0.032236\tAccuracy: 98.94%\n", "4\tValidation loss: 0.056148\tBest loss: 0.032236\tAccuracy: 98.44%\n", "5\tValidation loss: 0.035988\tBest loss: 0.032236\tAccuracy: 98.98%\n", "6\tValidation loss: 0.037958\tBest loss: 0.032236\tAccuracy: 98.94%\n", "7\tValidation loss: 0.034588\tBest loss: 0.032236\tAccuracy: 99.02%\n", "8\tValidation loss: 0.031261\tBest loss: 0.031261\tAccuracy: 99.34%\n", "9\tValidation loss: 0.050791\tBest loss: 0.031261\tAccuracy: 98.79%\n", "10\tValidation loss: 0.035324\tBest loss: 0.031261\tAccuracy: 99.02%\n", "11\tValidation loss: 0.039875\tBest loss: 0.031261\tAccuracy: 98.98%\n", "12\tValidation loss: 0.048575\tBest loss: 0.031261\tAccuracy: 98.94%\n", "13\tValidation loss: 0.028059\tBest loss: 0.028059\tAccuracy: 99.18%\n", "14\tValidation loss: 0.044112\tBest loss: 0.028059\tAccuracy: 99.14%\n", "15\tValidation loss: 0.039050\tBest loss: 0.028059\tAccuracy: 99.22%\n", "16\tValidation loss: 0.033278\tBest loss: 0.028059\tAccuracy: 99.14%\n", "17\tValidation loss: 0.031734\tBest loss: 0.028059\tAccuracy: 99.18%\n", "18\tValidation loss: 0.034500\tBest loss: 0.028059\tAccuracy: 99.14%\n", "19\tValidation loss: 0.032757\tBest loss: 0.028059\tAccuracy: 99.26%\n", "20\tValidation loss: 0.023842\tBest loss: 0.023842\tAccuracy: 99.53%\n", "21\tValidation loss: 0.026727\tBest loss: 0.023842\tAccuracy: 99.41%\n", "22\tValidation loss: 0.027016\tBest loss: 0.023842\tAccuracy: 99.41%\n", "23\tValidation loss: 0.033038\tBest loss: 0.023842\tAccuracy: 99.34%\n", "24\tValidation loss: 0.035490\tBest loss: 0.023842\tAccuracy: 99.18%\n", "25\tValidation loss: 0.060346\tBest loss: 0.023842\tAccuracy: 98.75%\n", "26\tValidation loss: 0.051341\tBest loss: 0.023842\tAccuracy: 99.26%\n", "27\tValidation loss: 0.033108\tBest loss: 0.023842\tAccuracy: 99.26%\n", "28\tValidation loss: 0.042162\tBest loss: 0.023842\tAccuracy: 99.18%\n", "29\tValidation loss: 0.036313\tBest loss: 0.023842\tAccuracy: 99.26%\n", "30\tValidation loss: 0.033812\tBest loss: 0.023842\tAccuracy: 99.26%\n", "31\tValidation loss: 0.038173\tBest loss: 0.023842\tAccuracy: 99.26%\n", "32\tValidation loss: 0.029853\tBest loss: 0.023842\tAccuracy: 99.37%\n", "33\tValidation loss: 0.026557\tBest loss: 0.023842\tAccuracy: 99.37%\n", "34\tValidation loss: 0.035003\tBest loss: 0.023842\tAccuracy: 99.37%\n", "35\tValidation loss: 0.027140\tBest loss: 0.023842\tAccuracy: 99.34%\n", "36\tValidation loss: 0.038988\tBest loss: 0.023842\tAccuracy: 99.34%\n", "37\tValidation loss: 0.048149\tBest loss: 0.023842\tAccuracy: 98.98%\n", "38\tValidation loss: 0.049070\tBest loss: 
0.023842\tAccuracy: 99.02%\n", "39\tValidation loss: 0.041233\tBest loss: 0.023842\tAccuracy: 99.26%\n", "40\tValidation loss: 0.038571\tBest loss: 0.023842\tAccuracy: 99.26%\n", "41\tValidation loss: 0.036886\tBest loss: 0.023842\tAccuracy: 99.34%\n", "Early stopping!\n" ] }, { "data": { "text/plain": [ "DNNClassifier(activation=<function leaky_relu.<locals>.parametrized_leaky_relu at 0x14030a378>,\n", " batch_norm_momentum=0.95, batch_size=500, dropout_rate=None,\n", " initializer=,\n", " learning_rate=0.01, n_hidden_layers=5, n_neurons=90,\n", " optimizer_class=<class 'tensorflow.python.training.adam.AdamOptimizer'>,\n", " random_state=42)" ] }, "execution_count": 130, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dnn_clf_bn = DNNClassifier(activation=leaky_relu(alpha=0.1), batch_size=500, learning_rate=0.01,\n", " n_neurons=90, random_state=42,\n", " batch_norm_momentum=0.95)\n", "dnn_clf_bn.fit(X_train1, y_train1, n_epochs=1000, X_valid=X_valid1, y_valid=y_valid1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The best params are reached at epoch 20; that's actually slower convergence than earlier. Let's check the accuracy:" ] }, { "cell_type": "code", "execution_count": 131, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.9941622883829538" ] }, "execution_count": 131, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred = dnn_clf_bn.predict(X_test1)\n", "accuracy_score(y_test1, y_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great, batch normalization improved accuracy! Let's see if we can find a good set of hyperparameters that will work even better with batch normalization:" ] }, { "cell_type": "code", "execution_count": 132, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Fitting 3 folds for each of 50 candidates, totalling 150 fits\n", "[CV] n_neurons=70, learning_rate=0.01, batch_size=50, batch_norm_momentum=0.99, activation= \n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "0\tValidation loss: 0.098522\tBest loss: 0.098522\tAccuracy: 97.81%\n", "1\tValidation loss: 0.080233\tBest loss: 0.080233\tAccuracy: 98.08%\n", "2\tValidation loss: 0.068767\tBest loss: 0.068767\tAccuracy: 98.01%\n", "3\tValidation loss: 0.057095\tBest loss: 0.057095\tAccuracy: 98.28%\n", "4\tValidation loss: 0.067008\tBest loss: 0.057095\tAccuracy: 98.12%\n", "5\tValidation loss: 0.058910\tBest loss: 0.057095\tAccuracy: 98.55%\n", "6\tValidation loss: 0.038421\tBest loss: 0.038421\tAccuracy: 98.91%\n", "7\tValidation loss: 0.071075\tBest loss: 0.038421\tAccuracy: 98.36%\n", "8\tValidation loss: 0.063073\tBest loss: 0.038421\tAccuracy: 98.28%\n", "9\tValidation loss: 0.057488\tBest loss: 0.038421\tAccuracy: 98.75%\n", "10\tValidation loss: 0.049557\tBest loss: 0.038421\tAccuracy: 98.75%\n", "11\tValidation loss: 0.039810\tBest loss: 0.038421\tAccuracy: 99.06%\n", "12\tValidation loss: 0.061837\tBest loss: 0.038421\tAccuracy: 98.55%\n", "13\tValidation loss: 0.062008\tBest loss: 0.038421\tAccuracy: 98.51%\n", "14\tValidation loss: 0.075937\tBest loss: 0.038421\tAccuracy: 98.44%\n", "15\tValidation loss: 0.053910\tBest loss: 0.038421\tAccuracy: 98.71%\n", "16\tValidation loss: 0.051419\tBest loss: 0.038421\tAccuracy: 98.94%\n", "17\tValidation loss: 0.049013\tBest loss: 0.038421\tAccuracy: 98.98%\n", "18\tValidation loss: 0.048979\tBest loss: 0.038421\tAccuracy: 99.10%\n", "19\tValidation loss: 
0.058969\tBest loss: 0.038421\tAccuracy: 98.59%\n", "20\tValidation loss: 0.060048\tBest loss: 0.038421\tAccuracy: 98.79%\n", "21\tValidation loss: 0.088256\tBest loss: 0.038421\tAccuracy: 98.32%\n", "22\tValidation loss: 0.055535\tBest loss: 0.038421\tAccuracy: 98.59%\n", "23\tValidation loss: 0.054632\tBest loss: 0.038421\tAccuracy: 98.94%\n", "24\tValidation loss: 0.092021\tBest loss: 0.038421\tAccuracy: 98.20%\n", "25\tValidation loss: 0.042263\tBest loss: 0.038421\tAccuracy: 99.02%\n", "26\tValidation loss: 0.041139\tBest loss: 0.038421\tAccuracy: 99.30%\n", "27\tValidation loss: 0.054255\tBest loss: 0.038421\tAccuracy: 99.06%\n", "Early stopping!\n", "[CV] n_neurons=70, learning_rate=0.01, batch_size=50, batch_norm_momentum=0.99, activation=, total= 39.0s\n", "[CV] n_neurons=70, learning_rate=0.01, batch_size=50, batch_norm_momentum=0.99, activation= \n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 39.1s remaining: 0.0s\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "<<7081 more lines>>\n", "36\tValidation loss: 0.032222\tBest loss: 0.021774\tAccuracy: 99.14%\n", "37\tValidation loss: 0.025638\tBest loss: 0.021774\tAccuracy: 99.34%\n", "38\tValidation loss: 0.031702\tBest loss: 0.021774\tAccuracy: 99.02%\n", "39\tValidation loss: 0.027012\tBest loss: 0.021774\tAccuracy: 99.30%\n", "40\tValidation loss: 0.027163\tBest loss: 0.021774\tAccuracy: 99.30%\n", "41\tValidation loss: 0.029205\tBest loss: 0.021774\tAccuracy: 99.26%\n", "42\tValidation loss: 0.024973\tBest loss: 0.021774\tAccuracy: 99.41%\n", "43\tValidation loss: 0.036898\tBest loss: 0.021774\tAccuracy: 98.94%\n", "44\tValidation loss: 0.040366\tBest loss: 0.021774\tAccuracy: 99.14%\n", "45\tValidation loss: 0.033711\tBest loss: 0.021774\tAccuracy: 99.02%\n", "46\tValidation loss: 0.046615\tBest loss: 0.021774\tAccuracy: 98.79%\n", "47\tValidation loss: 0.032732\tBest loss: 0.021774\tAccuracy: 99.26%\n", "48\tValidation loss: 0.020177\tBest loss: 0.020177\tAccuracy: 99.45%\n", "49\tValidation loss: 0.031700\tBest loss: 0.020177\tAccuracy: 99.37%\n", "50\tValidation loss: 0.035962\tBest loss: 0.020177\tAccuracy: 99.14%\n", "51\tValidation loss: 0.031128\tBest loss: 0.020177\tAccuracy: 99.18%\n", "52\tValidation loss: 0.038107\tBest loss: 0.020177\tAccuracy: 99.14%\n", "53\tValidation loss: 0.036671\tBest loss: 0.020177\tAccuracy: 99.18%\n", "54\tValidation loss: 0.029867\tBest loss: 0.020177\tAccuracy: 99.30%\n", "55\tValidation loss: 0.039179\tBest loss: 0.020177\tAccuracy: 99.10%\n", "56\tValidation loss: 0.028410\tBest loss: 0.020177\tAccuracy: 99.10%\n", "57\tValidation loss: 0.037625\tBest loss: 0.020177\tAccuracy: 99.06%\n", "58\tValidation loss: 0.035516\tBest loss: 0.020177\tAccuracy: 99.22%\n", "59\tValidation loss: 0.030096\tBest loss: 0.020177\tAccuracy: 99.37%\n", "60\tValidation loss: 0.032056\tBest loss: 0.020177\tAccuracy: 99.22%\n", "61\tValidation loss: 0.026143\tBest loss: 0.020177\tAccuracy: 99.37%\n", "62\tValidation loss: 0.022387\tBest loss: 0.020177\tAccuracy: 99.45%\n", "63\tValidation loss: 0.026331\tBest loss: 0.020177\tAccuracy: 99.41%\n", "64\tValidation loss: 0.034930\tBest loss: 0.020177\tAccuracy: 99.10%\n", "65\tValidation loss: 0.029928\tBest loss: 0.020177\tAccuracy: 99.30%\n", "66\tValidation loss: 0.028943\tBest loss: 0.020177\tAccuracy: 99.30%\n", "67\tValidation loss: 0.034912\tBest loss: 0.020177\tAccuracy: 99.18%\n", "68\tValidation loss: 0.037118\tBest loss: 0.020177\tAccuracy: 99.18%\n", "69\tValidation 
loss: 0.034165\tBest loss: 0.020177\tAccuracy: 99.37%\n", "Early stopping!\n" ] }, { "data": { "text/plain": [ "RandomizedSearchCV(cv='warn', error_score='raise-deprecating',\n", " estimator=DNNClassifier(activation=,\n", " batch_norm_momentum=None, batch_size=20, dropout_rate=None,\n", " initializer=,\n", " learning_rate=0.01, n_hidden_layers=5, n_neurons=100,\n", " optimizer_class=,\n", " random_state=42),\n", " fit_params={'X_valid': array([[0., 0., ..., 0., 0.],\n", " [0., 0., ..., 0., 0.],\n", " ...,\n", " [0., 0., ..., 0., 0.],\n", " [0., 0., ..., 0., 0.]], dtype=float32), 'y_valid': array([0, 4, ..., 1, 2], dtype=int32), 'n_epochs': 1000},\n", " iid='warn', n_iter=50, n_jobs=None,\n", " param_distributions={'n_neurons': [10, 30, 50, 70, 90, 100, 120, 140, 160], 'batch_size': [10, 50, 100, 500], 'learning_rate': [0.01, 0.02, 0.05, 0.1], 'activation': [, , .parametrized_leaky_relu at 0x1500bd2f0>, .parametrized_leaky_relu at 0x1500bd378>], 'batch_norm_momentum': [0.9, 0.95, 0.98, 0.99, 0.999]},\n", " pre_dispatch='2*n_jobs', random_state=42, refit=True,\n", " return_train_score='warn', scoring=None, verbose=2)" ] }, "execution_count": 132, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.model_selection import RandomizedSearchCV\n", "\n", "param_distribs = {\n", " \"n_neurons\": [10, 30, 50, 70, 90, 100, 120, 140, 160],\n", " \"batch_size\": [10, 50, 100, 500],\n", " \"learning_rate\": [0.01, 0.02, 0.05, 0.1],\n", " \"activation\": [tf.nn.relu, tf.nn.elu, leaky_relu(alpha=0.01), leaky_relu(alpha=0.1)],\n", " # you could also try exploring different numbers of hidden layers, different optimizers, etc.\n", " #\"n_hidden_layers\": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n", " #\"optimizer_class\": [tf.train.AdamOptimizer, partial(tf.train.MomentumOptimizer, momentum=0.95)],\n", " \"batch_norm_momentum\": [0.9, 0.95, 0.98, 0.99, 0.999],\n", "}\n", "\n", "rnd_search_bn = RandomizedSearchCV(DNNClassifier(random_state=42), param_distribs, n_iter=50, cv=3,\n", " random_state=42, verbose=2)\n", "rnd_search_bn.fit(X_train1, y_train1, X_valid=X_valid1, y_valid=y_valid1, n_epochs=1000)\n", "\n", "# If you have Scikit-Learn 0.18 or earlier, you should upgrade, or use the fit_params argument:\n", "# fit_params = dict(X_valid=X_valid1, y_valid=y_valid1, n_epochs=1000)\n", "# rnd_search_bn = RandomizedSearchCV(DNNClassifier(random_state=42), param_distribs, n_iter=50,\n", "# fit_params=fit_params, random_state=42, verbose=2)\n", "# rnd_search_bn.fit(X_train1, y_train1)\n" ] }, { "cell_type": "code", "execution_count": 133, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'n_neurons': 160,\n", " 'learning_rate': 0.01,\n", " 'batch_size': 10,\n", " 'batch_norm_momentum': 0.98,\n", " 'activation': }" ] }, "execution_count": 133, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rnd_search_bn.best_params_" ] }, { "cell_type": "code", "execution_count": 134, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.9949406499318934" ] }, "execution_count": 134, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred = rnd_search_bn.predict(X_test1)\n", "accuracy_score(y_test1, y_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Slightly better than earlier: 99.49% vs 99.42%. Let's see if dropout can do better." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 8.5." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_Exercise: is the model overfitting the training set? 
Try adding dropout to every layer and try again. Does it help?_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's go back to the model we trained earlier and see how it performs on the training set:" ] }, { "cell_type": "code", "execution_count": 135, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.9950781082816178" ] }, "execution_count": 135, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred = dnn_clf.predict(X_train1)\n", "accuracy_score(y_train1, y_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model performs significantly better on the training set than on the test set (99.51% vs 99.00%), which means it is overfitting the training set. A bit of regularization may help. Let's try adding dropout with a 50% dropout rate:" ] }, { "cell_type": "code", "execution_count": 136, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0\tValidation loss: 0.131152\tBest loss: 0.131152\tAccuracy: 96.91%\n", "1\tValidation loss: 0.105306\tBest loss: 0.105306\tAccuracy: 97.46%\n", "2\tValidation loss: 0.091219\tBest loss: 0.091219\tAccuracy: 97.73%\n", "3\tValidation loss: 0.089638\tBest loss: 0.089638\tAccuracy: 97.85%\n", "4\tValidation loss: 0.091288\tBest loss: 0.089638\tAccuracy: 97.69%\n", "5\tValidation loss: 0.081112\tBest loss: 0.081112\tAccuracy: 98.05%\n", "6\tValidation loss: 0.075575\tBest loss: 0.075575\tAccuracy: 98.24%\n", "7\tValidation loss: 0.084841\tBest loss: 0.075575\tAccuracy: 97.77%\n", "8\tValidation loss: 0.075269\tBest loss: 0.075269\tAccuracy: 97.65%\n", "9\tValidation loss: 0.076625\tBest loss: 0.075269\tAccuracy: 98.12%\n", "10\tValidation loss: 0.072509\tBest loss: 0.072509\tAccuracy: 97.97%\n", "11\tValidation loss: 0.071006\tBest loss: 0.071006\tAccuracy: 98.44%\n", "12\tValidation loss: 0.073272\tBest loss: 0.071006\tAccuracy: 98.08%\n", "13\tValidation loss: 0.076293\tBest loss: 0.071006\tAccuracy: 98.16%\n", "14\tValidation loss: 0.074955\tBest loss: 0.071006\tAccuracy: 98.05%\n", "15\tValidation loss: 0.066207\tBest loss: 0.066207\tAccuracy: 98.20%\n", "16\tValidation loss: 0.067388\tBest loss: 0.066207\tAccuracy: 98.08%\n", "17\tValidation loss: 0.061916\tBest loss: 0.061916\tAccuracy: 98.40%\n", "18\tValidation loss: 0.064908\tBest loss: 0.061916\tAccuracy: 98.40%\n", "19\tValidation loss: 0.064921\tBest loss: 0.061916\tAccuracy: 98.40%\n", "20\tValidation loss: 0.069939\tBest loss: 0.061916\tAccuracy: 98.40%\n", "21\tValidation loss: 0.069870\tBest loss: 0.061916\tAccuracy: 98.32%\n", "22\tValidation loss: 0.062807\tBest loss: 0.061916\tAccuracy: 98.24%\n", "23\tValidation loss: 0.065312\tBest loss: 0.061916\tAccuracy: 98.44%\n", "24\tValidation loss: 0.067044\tBest loss: 0.061916\tAccuracy: 98.44%\n", "25\tValidation loss: 0.072251\tBest loss: 0.061916\tAccuracy: 98.16%\n", "26\tValidation loss: 0.064444\tBest loss: 0.061916\tAccuracy: 98.20%\n", "27\tValidation loss: 0.069022\tBest loss: 0.061916\tAccuracy: 98.44%\n", "28\tValidation loss: 0.069079\tBest loss: 0.061916\tAccuracy: 98.28%\n", "29\tValidation loss: 0.148266\tBest loss: 0.061916\tAccuracy: 96.52%\n", "30\tValidation loss: 0.119943\tBest loss: 0.061916\tAccuracy: 96.72%\n", "31\tValidation loss: 0.167303\tBest loss: 0.061916\tAccuracy: 96.68%\n", "32\tValidation loss: 0.131897\tBest loss: 0.061916\tAccuracy: 96.52%\n", "33\tValidation loss: 0.146681\tBest loss: 0.061916\tAccuracy: 95.43%\n", "34\tValidation loss: 0.125731\tBest loss: 0.061916\tAccuracy: 96.64%\n", "35\tValidation loss: 
0.099879\tBest loss: 0.061916\tAccuracy: 97.89%\n", "36\tValidation loss: 0.096915\tBest loss: 0.061916\tAccuracy: 97.73%\n", "37\tValidation loss: 0.096422\tBest loss: 0.061916\tAccuracy: 97.85%\n", "38\tValidation loss: 0.108040\tBest loss: 0.061916\tAccuracy: 97.54%\n", "Early stopping!\n" ] }, { "data": { "text/plain": [ "DNNClassifier(activation=.parametrized_leaky_relu at 0x14e8501e0>,\n", " batch_norm_momentum=None, batch_size=500, dropout_rate=0.5,\n", " initializer=,\n", " learning_rate=0.01, n_hidden_layers=5, n_neurons=90,\n", " optimizer_class=,\n", " random_state=42)" ] }, "execution_count": 136, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dnn_clf_dropout = DNNClassifier(activation=leaky_relu(alpha=0.1), batch_size=500, learning_rate=0.01,\n", " n_neurons=90, random_state=42,\n", " dropout_rate=0.5)\n", "dnn_clf_dropout.fit(X_train1, y_train1, n_epochs=1000, X_valid=X_valid1, y_valid=y_valid1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The best params are reached during epoch 17. Dropout somewhat slowed down convergence." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check the accuracy:" ] }, { "cell_type": "code", "execution_count": 137, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.9861840825063242" ] }, "execution_count": 137, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred = dnn_clf_dropout.predict(X_test1)\n", "accuracy_score(y_test1, y_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are out of luck, dropout does not seem to help. Let's try tuning the hyperparameters, perhaps we can squeeze a bit more performance out of this model:" ] }, { "cell_type": "code", "execution_count": 138, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Fitting 3 folds for each of 50 candidates, totalling 150 fits\n", "[CV] n_neurons=70, learning_rate=0.01, dropout_rate=0.5, batch_size=100, activation= \n", "0\tValidation loss: 0.218595\tBest loss: 0.218595\tAccuracy: 93.63%\n", "1\tValidation loss: 0.210470\tBest loss: 0.210470\tAccuracy: 94.61%\n", "2\tValidation loss: 0.224635\tBest loss: 0.210470\tAccuracy: 95.50%\n", "3\tValidation loss: 0.200494\tBest loss: 0.200494\tAccuracy: 94.84%\n", "4\tValidation loss: 0.184056\tBest loss: 0.184056\tAccuracy: 95.58%\n", "5\tValidation loss: 0.187698\tBest loss: 0.184056\tAccuracy: 96.33%\n", "6\tValidation loss: 0.151692\tBest loss: 0.151692\tAccuracy: 96.17%\n", "7\tValidation loss: 0.176633\tBest loss: 0.151692\tAccuracy: 96.21%\n", "8\tValidation loss: 0.187090\tBest loss: 0.151692\tAccuracy: 96.01%\n", "9\tValidation loss: 0.204406\tBest loss: 0.151692\tAccuracy: 96.40%\n", "10\tValidation loss: 0.193938\tBest loss: 0.151692\tAccuracy: 95.74%\n", "11\tValidation loss: 0.190056\tBest loss: 0.151692\tAccuracy: 96.21%\n", "12\tValidation loss: 0.183601\tBest loss: 0.151692\tAccuracy: 96.05%\n", "13\tValidation loss: 0.179737\tBest loss: 0.151692\tAccuracy: 96.25%\n", "14\tValidation loss: 0.289718\tBest loss: 0.151692\tAccuracy: 96.29%\n", "15\tValidation loss: 0.188605\tBest loss: 0.151692\tAccuracy: 95.86%\n", "16\tValidation loss: 0.195911\tBest loss: 0.151692\tAccuracy: 96.01%\n", "17\tValidation loss: 0.158151\tBest loss: 0.151692\tAccuracy: 96.25%\n", "18\tValidation loss: 0.168049\tBest loss: 0.151692\tAccuracy: 96.25%\n", "19\tValidation 
loss: 0.170637\tBest loss: 0.151692\tAccuracy: 96.40%\n", "20\tValidation loss: 0.192890\tBest loss: 0.151692\tAccuracy: 96.21%\n", "21\tValidation loss: 0.178800\tBest loss: 0.151692\tAccuracy: 95.97%\n", "22\tValidation loss: 0.185295\tBest loss: 0.151692\tAccuracy: 96.44%\n", "23\tValidation loss: 0.150369\tBest loss: 0.150369\tAccuracy: 96.91%\n", "24\tValidation loss: 0.161164\tBest loss: 0.150369\tAccuracy: 96.52%\n", "25\tValidation loss: 0.180860\tBest loss: 0.150369\tAccuracy: 96.13%\n", "26\tValidation loss: 0.182730\tBest loss: 0.150369\tAccuracy: 96.52%\n", "27\tValidation loss: 0.184583\tBest loss: 0.150369\tAccuracy: 96.09%\n", "28\tValidation loss: 0.183952\tBest loss: 0.150369\tAccuracy: 95.39%\n", "29\tValidation loss: 0.211111\tBest loss: 0.150369\tAccuracy: 95.54%\n", "30\tValidation loss: 0.225760\tBest loss: 0.150369\tAccuracy: 95.97%\n", "31\tValidation loss: 0.170313\tBest loss: 0.150369\tAccuracy: 96.91%\n", "<<5625 more lines>>\n", "8\tValidation loss: 0.086624\tBest loss: 0.065724\tAccuracy: 98.01%\n", "9\tValidation loss: 0.069571\tBest loss: 0.065724\tAccuracy: 98.44%\n", "10\tValidation loss: 0.094720\tBest loss: 0.065724\tAccuracy: 98.20%\n", "11\tValidation loss: 0.070504\tBest loss: 0.065724\tAccuracy: 98.51%\n", "12\tValidation loss: 0.090169\tBest loss: 0.065724\tAccuracy: 98.24%\n", "13\tValidation loss: 0.080667\tBest loss: 0.065724\tAccuracy: 98.20%\n", "14\tValidation loss: 0.120917\tBest loss: 0.065724\tAccuracy: 96.60%\n", "15\tValidation loss: 0.105030\tBest loss: 0.065724\tAccuracy: 97.62%\n", "16\tValidation loss: 0.138571\tBest loss: 0.065724\tAccuracy: 97.85%\n", "17\tValidation loss: 0.078942\tBest loss: 0.065724\tAccuracy: 97.97%\n", "18\tValidation loss: 0.081645\tBest loss: 0.065724\tAccuracy: 97.89%\n", "19\tValidation loss: 0.054128\tBest loss: 0.054128\tAccuracy: 98.44%\n", "20\tValidation loss: 0.051510\tBest loss: 0.051510\tAccuracy: 98.44%\n", "21\tValidation loss: 0.071159\tBest loss: 0.051510\tAccuracy: 98.67%\n", "22\tValidation loss: 0.084647\tBest loss: 0.051510\tAccuracy: 98.28%\n", "23\tValidation loss: 0.081601\tBest loss: 0.051510\tAccuracy: 98.36%\n", "24\tValidation loss: 0.152964\tBest loss: 0.051510\tAccuracy: 97.93%\n", "25\tValidation loss: 0.173249\tBest loss: 0.051510\tAccuracy: 97.03%\n", "26\tValidation loss: 0.128901\tBest loss: 0.051510\tAccuracy: 96.13%\n", "27\tValidation loss: 0.110458\tBest loss: 0.051510\tAccuracy: 97.93%\n", "28\tValidation loss: 0.108197\tBest loss: 0.051510\tAccuracy: 97.30%\n", "29\tValidation loss: 0.104204\tBest loss: 0.051510\tAccuracy: 97.85%\n", "30\tValidation loss: 0.126637\tBest loss: 0.051510\tAccuracy: 98.32%\n", "31\tValidation loss: 0.142045\tBest loss: 0.051510\tAccuracy: 97.62%\n", "32\tValidation loss: 0.103701\tBest loss: 0.051510\tAccuracy: 97.69%\n", "33\tValidation loss: 0.120295\tBest loss: 0.051510\tAccuracy: 97.42%\n", "34\tValidation loss: 0.151388\tBest loss: 0.051510\tAccuracy: 97.85%\n", "35\tValidation loss: 0.096931\tBest loss: 0.051510\tAccuracy: 97.58%\n", "36\tValidation loss: 0.153569\tBest loss: 0.051510\tAccuracy: 97.11%\n", "37\tValidation loss: 0.120552\tBest loss: 0.051510\tAccuracy: 98.05%\n", "38\tValidation loss: 0.076677\tBest loss: 0.051510\tAccuracy: 98.55%\n", "39\tValidation loss: 0.071904\tBest loss: 0.051510\tAccuracy: 98.55%\n", "40\tValidation loss: 0.072618\tBest loss: 0.051510\tAccuracy: 98.12%\n", "41\tValidation loss: 0.086680\tBest loss: 0.051510\tAccuracy: 98.08%\n", "Early stopping!\n" ] }, { "data": { "text/plain": [ 
"RandomizedSearchCV(cv='warn', error_score='raise-deprecating',\n", " estimator=DNNClassifier(activation=,\n", " batch_norm_momentum=None, batch_size=20, dropout_rate=None,\n", " initializer=,\n", " learning_rate=0.01, n_hidden_layers=5, n_neurons=100,\n", " optimizer_class=,\n", " random_state=42),\n", " fit_params={'X_valid': array([[0., 0., ..., 0., 0.],\n", " [0., 0., ..., 0., 0.],\n", " ...,\n", " [0., 0., ..., 0., 0.],\n", " [0., 0., ..., 0., 0.]], dtype=float32), 'y_valid': array([0, 4, ..., 1, 2], dtype=int32), 'n_epochs': 1000},\n", " iid='warn', n_iter=50, n_jobs=None,\n", " param_distributions={'n_neurons': [10, 30, 50, 70, 90, 100, 120, 140, 160], 'batch_size': [10, 50, 100, 500], 'learning_rate': [0.01, 0.02, 0.05, 0.1], 'activation': [, , .parametrized_leaky_relu at 0x14e850620>, .parametrized_leaky_relu at 0x14e850d08>], 'dropout_rate': [0.2, 0.3, 0.4, 0.5, 0.6]},\n", " pre_dispatch='2*n_jobs', random_state=42, refit=True,\n", " return_train_score='warn', scoring=None, verbose=2)" ] }, "execution_count": 138, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.model_selection import RandomizedSearchCV\n", "\n", "param_distribs = {\n", " \"n_neurons\": [10, 30, 50, 70, 90, 100, 120, 140, 160],\n", " \"batch_size\": [10, 50, 100, 500],\n", " \"learning_rate\": [0.01, 0.02, 0.05, 0.1],\n", " \"activation\": [tf.nn.relu, tf.nn.elu, leaky_relu(alpha=0.01), leaky_relu(alpha=0.1)],\n", " # you could also try exploring different numbers of hidden layers, different optimizers, etc.\n", " #\"n_hidden_layers\": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n", " #\"optimizer_class\": [tf.train.AdamOptimizer, partial(tf.train.MomentumOptimizer, momentum=0.95)],\n", " \"dropout_rate\": [0.2, 0.3, 0.4, 0.5, 0.6],\n", "}\n", "\n", "rnd_search_dropout = RandomizedSearchCV(DNNClassifier(random_state=42), param_distribs, n_iter=50,\n", " cv=3, random_state=42, verbose=2)\n", "rnd_search_dropout.fit(X_train1, y_train1, X_valid=X_valid1, y_valid=y_valid1, n_epochs=1000)\n", "\n", "# If you have Scikit-Learn 0.18 or earlier, you should upgrade, or use the fit_params argument:\n", "# fit_params = dict(X_valid=X_valid1, y_valid=y_valid1, n_epochs=1000)\n", "# rnd_search_dropout = RandomizedSearchCV(DNNClassifier(random_state=42), param_distribs, n_iter=50,\n", "# fit_params=fit_params, random_state=42, verbose=2)\n", "# rnd_search_dropout.fit(X_train1, y_train1)" ] }, { "cell_type": "code", "execution_count": 139, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'n_neurons': 160,\n", " 'learning_rate': 0.01,\n", " 'dropout_rate': 0.2,\n", " 'batch_size': 100,\n", " 'activation': }" ] }, "execution_count": 139, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rnd_search_dropout.best_params_" ] }, { "cell_type": "code", "execution_count": 140, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.9889083479276124" ] }, "execution_count": 140, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred = rnd_search_dropout.predict(X_test1)\n", "accuracy_score(y_test1, y_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Oh well, dropout did not improve the model. Better luck next time! :)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But that's okay, we have ourselves a nice DNN that achieves 99.49% accuracy on the test set using Batch Normalization, or 98.91% without BN. Let's see if some of this expertise on digits 0 to 4 can be transferred to the task of classifying digits 5 to 9. 
For the sake of simplicity we will reuse the DNN without BN." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## 9. Transfer learning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 9.1." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_Exercise: create a new DNN that reuses all the pretrained hidden layers of the previous model, freezes them, and replaces the softmax output layer with a new one._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's load the best model's graph and get a handle on all the important operations we will need. Note that instead of creating a new softmax output layer, we will just reuse the existing one (since the new task has the same number of classes as the old one). We will reinitialize its parameters before training." ] }, { "cell_type": "code", "execution_count": 141, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "restore_saver = tf.train.import_meta_graph(\"./my_best_mnist_model_0_to_4.meta\")\n", "\n", "X = tf.get_default_graph().get_tensor_by_name(\"X:0\")\n", "y = tf.get_default_graph().get_tensor_by_name(\"y:0\")\n", "loss = tf.get_default_graph().get_tensor_by_name(\"loss:0\")\n", "Y_proba = tf.get_default_graph().get_tensor_by_name(\"Y_proba:0\")\n", "logits = Y_proba.op.inputs[0]\n", "accuracy = tf.get_default_graph().get_tensor_by_name(\"accuracy:0\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To freeze the lower layers, we will exclude their variables from the optimizer's list of trainable variables, keeping only the output layer's trainable variables:" ] }, { "cell_type": "code", "execution_count": 142, "metadata": {}, "outputs": [], "source": [ "learning_rate = 0.01\n", "\n", "output_layer_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=\"logits\")\n", "optimizer = tf.train.AdamOptimizer(learning_rate, name=\"Adam2\")\n", "training_op = optimizer.minimize(loss, var_list=output_layer_vars)" ] }, { "cell_type": "code", "execution_count": 143, "metadata": {}, "outputs": [], "source": [ "correct = tf.nn.in_top_k(logits, y, 1)\n", "accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")\n", "\n", "init = tf.global_variables_initializer()\n", "five_frozen_saver = tf.train.Saver()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 9.2." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_Exercise: train this new DNN on digits 5 to 9, using only 100 images per digit, and time how long it takes. Despite this small number of examples, can you achieve high precision?_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create the training, validation and test sets. We need to subtract 5 from the labels because TensorFlow expects integers from 0 to `n_classes-1`." ] }, { "cell_type": "code", "execution_count": 144, "metadata": {}, "outputs": [], "source": [ "X_train2_full = X_train[y_train >= 5]\n", "y_train2_full = y_train[y_train >= 5] - 5\n", "X_valid2_full = X_valid[y_valid >= 5]\n", "y_valid2_full = y_valid[y_valid >= 5] - 5\n", "X_test2 = X_test[y_test >= 5]\n", "y_test2 = y_test[y_test >= 5] - 5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Also, for the purpose of this exercise, we want to keep only 100 instances per class in the training set (and let's keep only 30 instances per class in the validation set). 
Let's create a small function to do that:" ] }, { "cell_type": "code", "execution_count": 145, "metadata": {}, "outputs": [], "source": [ "def sample_n_instances_per_class(X, y, n=100):\n", " Xs, ys = [], []\n", " for label in np.unique(y):\n", " idx = (y == label)\n", " Xc = X[idx][:n]\n", " yc = y[idx][:n]\n", " Xs.append(Xc)\n", " ys.append(yc)\n", " return np.concatenate(Xs), np.concatenate(ys)" ] }, { "cell_type": "code", "execution_count": 146, "metadata": {}, "outputs": [], "source": [ "X_train2, y_train2 = sample_n_instances_per_class(X_train2_full, y_train2_full, n=100)\n", "X_valid2, y_valid2 = sample_n_instances_per_class(X_valid2_full, y_valid2_full, n=30)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's train the model. This is the same training code as earlier, using early stopping, except for the initialization: we first initialize all the variables, then we restore the best model trained earlier (on digits 0 to 4), and finally we reinitialize the output layer variables." ] }, { "cell_type": "code", "execution_count": 147, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_best_mnist_model_0_to_4\n", "0\tValidation loss: 1.361167\tBest loss: 1.361167\tAccuracy: 43.33%\n", "1\tValidation loss: 1.154602\tBest loss: 1.154602\tAccuracy: 57.33%\n", "2\tValidation loss: 1.054218\tBest loss: 1.054218\tAccuracy: 53.33%\n", "3\tValidation loss: 0.981128\tBest loss: 0.981128\tAccuracy: 62.67%\n", "4\tValidation loss: 0.995353\tBest loss: 0.981128\tAccuracy: 59.33%\n", "5\tValidation loss: 0.967000\tBest loss: 0.967000\tAccuracy: 65.33%\n", "6\tValidation loss: 0.955700\tBest loss: 0.955700\tAccuracy: 61.33%\n", "7\tValidation loss: 1.015331\tBest loss: 0.955700\tAccuracy: 58.67%\n", "8\tValidation loss: 0.978280\tBest loss: 0.955700\tAccuracy: 62.00%\n", "9\tValidation loss: 0.923389\tBest loss: 0.923389\tAccuracy: 69.33%\n", "10\tValidation loss: 0.996236\tBest loss: 0.923389\tAccuracy: 63.33%\n", "11\tValidation loss: 0.976757\tBest loss: 0.923389\tAccuracy: 62.67%\n", "12\tValidation loss: 0.969096\tBest loss: 0.923389\tAccuracy: 63.33%\n", "13\tValidation loss: 1.023069\tBest loss: 0.923389\tAccuracy: 63.33%\n", "14\tValidation loss: 1.104664\tBest loss: 0.923389\tAccuracy: 55.33%\n", "15\tValidation loss: 0.950175\tBest loss: 0.923389\tAccuracy: 65.33%\n", "16\tValidation loss: 1.002944\tBest loss: 0.923389\tAccuracy: 63.33%\n", "17\tValidation loss: 0.895543\tBest loss: 0.895543\tAccuracy: 70.67%\n", "18\tValidation loss: 0.961151\tBest loss: 0.895543\tAccuracy: 66.67%\n", "19\tValidation loss: 0.896372\tBest loss: 0.895543\tAccuracy: 67.33%\n", "20\tValidation loss: 0.911938\tBest loss: 0.895543\tAccuracy: 69.33%\n", "21\tValidation loss: 0.929007\tBest loss: 0.895543\tAccuracy: 68.00%\n", "22\tValidation loss: 0.939231\tBest loss: 0.895543\tAccuracy: 65.33%\n", "23\tValidation loss: 0.919057\tBest loss: 0.895543\tAccuracy: 68.67%\n", "24\tValidation loss: 0.994529\tBest loss: 0.895543\tAccuracy: 65.33%\n", "25\tValidation loss: 0.901279\tBest loss: 0.895543\tAccuracy: 68.67%\n", "26\tValidation loss: 0.916238\tBest loss: 0.895543\tAccuracy: 68.67%\n", "27\tValidation loss: 1.007434\tBest loss: 0.895543\tAccuracy: 65.33%\n", "28\tValidation loss: 0.924729\tBest loss: 0.895543\tAccuracy: 70.00%\n", "29\tValidation loss: 0.974399\tBest loss: 0.895543\tAccuracy: 66.00%\n", "30\tValidation loss: 0.899418\tBest loss: 0.895543\tAccuracy: 68.00%\n", "31\tValidation loss: 
0.940563\tBest loss: 0.895543\tAccuracy: 66.00%\n", "32\tValidation loss: 0.920235\tBest loss: 0.895543\tAccuracy: 68.00%\n", "33\tValidation loss: 0.929848\tBest loss: 0.895543\tAccuracy: 68.67%\n", "34\tValidation loss: 0.930288\tBest loss: 0.895543\tAccuracy: 66.67%\n", "35\tValidation loss: 0.943884\tBest loss: 0.895543\tAccuracy: 64.67%\n", "36\tValidation loss: 0.939372\tBest loss: 0.895543\tAccuracy: 68.00%\n", "37\tValidation loss: 0.894239\tBest loss: 0.894239\tAccuracy: 67.33%\n", "38\tValidation loss: 0.888806\tBest loss: 0.888806\tAccuracy: 69.33%\n", "39\tValidation loss: 0.933829\tBest loss: 0.888806\tAccuracy: 66.00%\n", "40\tValidation loss: 0.911836\tBest loss: 0.888806\tAccuracy: 72.67%\n", "41\tValidation loss: 0.896729\tBest loss: 0.888806\tAccuracy: 70.00%\n", "42\tValidation loss: 0.929394\tBest loss: 0.888806\tAccuracy: 68.00%\n", "43\tValidation loss: 0.919418\tBest loss: 0.888806\tAccuracy: 69.33%\n", "44\tValidation loss: 0.907830\tBest loss: 0.888806\tAccuracy: 65.33%\n", "45\tValidation loss: 1.004304\tBest loss: 0.888806\tAccuracy: 71.33%\n", "46\tValidation loss: 0.871899\tBest loss: 0.871899\tAccuracy: 74.00%\n", "47\tValidation loss: 0.904889\tBest loss: 0.871899\tAccuracy: 67.33%\n", "48\tValidation loss: 0.914138\tBest loss: 0.871899\tAccuracy: 66.00%\n", "49\tValidation loss: 0.930001\tBest loss: 0.871899\tAccuracy: 69.33%\n", "50\tValidation loss: 0.962153\tBest loss: 0.871899\tAccuracy: 68.67%\n", "51\tValidation loss: 0.925021\tBest loss: 0.871899\tAccuracy: 65.33%\n", "52\tValidation loss: 0.974412\tBest loss: 0.871899\tAccuracy: 67.33%\n", "53\tValidation loss: 0.897499\tBest loss: 0.871899\tAccuracy: 68.67%\n", "54\tValidation loss: 0.933581\tBest loss: 0.871899\tAccuracy: 60.67%\n", "55\tValidation loss: 0.988574\tBest loss: 0.871899\tAccuracy: 68.67%\n", "56\tValidation loss: 0.927290\tBest loss: 0.871899\tAccuracy: 66.67%\n", "57\tValidation loss: 1.018713\tBest loss: 0.871899\tAccuracy: 64.00%\n", "58\tValidation loss: 0.964709\tBest loss: 0.871899\tAccuracy: 66.00%\n", "59\tValidation loss: 1.004696\tBest loss: 0.871899\tAccuracy: 59.33%\n", "60\tValidation loss: 1.008746\tBest loss: 0.871899\tAccuracy: 58.67%\n", "61\tValidation loss: 0.948558\tBest loss: 0.871899\tAccuracy: 68.00%\n", "62\tValidation loss: 0.966037\tBest loss: 0.871899\tAccuracy: 64.00%\n", "63\tValidation loss: 0.922541\tBest loss: 0.871899\tAccuracy: 68.00%\n", "64\tValidation loss: 0.892541\tBest loss: 0.871899\tAccuracy: 72.00%\n", "65\tValidation loss: 0.890340\tBest loss: 0.871899\tAccuracy: 70.67%\n", "66\tValidation loss: 0.957904\tBest loss: 0.871899\tAccuracy: 66.00%\n", "Early stopping!\n", "Total training time: 1.9s\n", "INFO:tensorflow:Restoring parameters from ./my_mnist_model_5_to_9_five_frozen\n", "Final test accuracy: 64.02%\n" ] } ], "source": [ "import time\n", "\n", "n_epochs = 1000\n", "batch_size = 20\n", "\n", "max_checks_without_progress = 20\n", "checks_without_progress = 0\n", "best_loss = np.infty\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " restore_saver.restore(sess, \"./my_best_mnist_model_0_to_4\")\n", " t0 = time.time()\n", " \n", " for epoch in range(n_epochs):\n", " rnd_idx = np.random.permutation(len(X_train2))\n", " for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size):\n", " X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices]\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid2, y: y_valid2})\n", " if 
loss_val < best_loss:\n", " save_path = five_frozen_saver.save(sess, \"./my_mnist_model_5_to_9_five_frozen\")\n", " best_loss = loss_val\n", " checks_without_progress = 0\n", " else:\n", " checks_without_progress += 1\n", " if checks_without_progress > max_checks_without_progress:\n", " print(\"Early stopping!\")\n", " break\n", " print(\"{}\\tValidation loss: {:.6f}\\tBest loss: {:.6f}\\tAccuracy: {:.2f}%\".format(\n", " epoch, loss_val, best_loss, acc_val * 100))\n", "\n", " t1 = time.time()\n", " print(\"Total training time: {:.1f}s\".format(t1 - t0))\n", "\n", "with tf.Session() as sess:\n", " five_frozen_saver.restore(sess, \"./my_mnist_model_5_to_9_five_frozen\")\n", " acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2})\n", " print(\"Final test accuracy: {:.2f}%\".format(acc_test * 100))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well that's not a great accuracy, is it? Of course with such a tiny training set, and with only one layer to tweak, we should not expect miracles." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 9.3." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_Exercise: try caching the frozen layers, and train the model again: how much faster is it now?_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's start by getting a handle on the output of the last frozen layer:" ] }, { "cell_type": "code", "execution_count": 148, "metadata": {}, "outputs": [], "source": [ "hidden5_out = tf.get_default_graph().get_tensor_by_name(\"hidden5_out:0\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's train the model using roughly the same code as earlier. The difference is that we compute the output of the top frozen layer at the beginning (both for the training set and the validation set), and we cache it. 
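Since the frozen layers' weights never change, the top frozen layer's output is the same at every epoch for any given instance, so there is no need to recompute it over and over. Note that TensorFlow lets us feed values for (almost) any tensor through `feed_dict`, not just placeholders, which is what makes this caching trick possible. 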
This makes training roughly 1.5 to 3 times faster in this example (this may vary greatly, depending on your system): " ] }, { "cell_type": "code", "execution_count": 149, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_best_mnist_model_0_to_4\n", "0\tValidation loss: 1.416103\tBest loss: 1.416103\tAccuracy: 44.00%\n", "1\tValidation loss: 1.099216\tBest loss: 1.099216\tAccuracy: 53.33%\n", "2\tValidation loss: 1.024954\tBest loss: 1.024954\tAccuracy: 59.33%\n", "3\tValidation loss: 0.969193\tBest loss: 0.969193\tAccuracy: 60.00%\n", "4\tValidation loss: 0.973461\tBest loss: 0.969193\tAccuracy: 64.67%\n", "5\tValidation loss: 0.949333\tBest loss: 0.949333\tAccuracy: 64.67%\n", "6\tValidation loss: 0.922953\tBest loss: 0.922953\tAccuracy: 66.67%\n", "7\tValidation loss: 0.957186\tBest loss: 0.922953\tAccuracy: 62.67%\n", "8\tValidation loss: 0.950264\tBest loss: 0.922953\tAccuracy: 68.00%\n", "9\tValidation loss: 1.053465\tBest loss: 0.922953\tAccuracy: 59.33%\n", "10\tValidation loss: 1.069949\tBest loss: 0.922953\tAccuracy: 54.00%\n", "11\tValidation loss: 0.965197\tBest loss: 0.922953\tAccuracy: 62.67%\n", "12\tValidation loss: 0.949233\tBest loss: 0.922953\tAccuracy: 63.33%\n", "13\tValidation loss: 0.926229\tBest loss: 0.922953\tAccuracy: 63.33%\n", "14\tValidation loss: 0.922854\tBest loss: 0.922854\tAccuracy: 67.33%\n", "15\tValidation loss: 0.965205\tBest loss: 0.922854\tAccuracy: 66.67%\n", "16\tValidation loss: 1.050026\tBest loss: 0.922854\tAccuracy: 59.33%\n", "17\tValidation loss: 0.946699\tBest loss: 0.922854\tAccuracy: 64.67%\n", "18\tValidation loss: 0.973966\tBest loss: 0.922854\tAccuracy: 64.00%\n", "19\tValidation loss: 0.902573\tBest loss: 0.902573\tAccuracy: 66.67%\n", "20\tValidation loss: 0.933625\tBest loss: 0.902573\tAccuracy: 65.33%\n", "21\tValidation loss: 0.938296\tBest loss: 0.902573\tAccuracy: 64.00%\n", "22\tValidation loss: 0.938790\tBest loss: 0.902573\tAccuracy: 66.67%\n", "23\tValidation loss: 0.936572\tBest loss: 0.902573\tAccuracy: 68.00%\n", "24\tValidation loss: 1.039109\tBest loss: 0.902573\tAccuracy: 65.33%\n", "25\tValidation loss: 1.146837\tBest loss: 0.902573\tAccuracy: 59.33%\n", "26\tValidation loss: 0.958702\tBest loss: 0.902573\tAccuracy: 68.67%\n", "27\tValidation loss: 0.915434\tBest loss: 0.902573\tAccuracy: 70.67%\n", "28\tValidation loss: 0.915402\tBest loss: 0.902573\tAccuracy: 66.00%\n", "29\tValidation loss: 0.920591\tBest loss: 0.902573\tAccuracy: 70.67%\n", "30\tValidation loss: 1.029216\tBest loss: 0.902573\tAccuracy: 64.67%\n", "31\tValidation loss: 1.039922\tBest loss: 0.902573\tAccuracy: 55.33%\n", "32\tValidation loss: 0.925041\tBest loss: 0.902573\tAccuracy: 64.00%\n", "33\tValidation loss: 0.944033\tBest loss: 0.902573\tAccuracy: 67.33%\n", "34\tValidation loss: 0.941914\tBest loss: 0.902573\tAccuracy: 66.67%\n", "35\tValidation loss: 0.866297\tBest loss: 0.866297\tAccuracy: 69.33%\n", "36\tValidation loss: 0.900787\tBest loss: 0.866297\tAccuracy: 70.67%\n", "37\tValidation loss: 0.889670\tBest loss: 0.866297\tAccuracy: 66.67%\n", "38\tValidation loss: 0.968139\tBest loss: 0.866297\tAccuracy: 62.00%\n", "39\tValidation loss: 0.929764\tBest loss: 0.866297\tAccuracy: 66.00%\n", "40\tValidation loss: 0.889130\tBest loss: 0.866297\tAccuracy: 68.00%\n", "41\tValidation loss: 0.940024\tBest loss: 0.866297\tAccuracy: 70.00%\n", "42\tValidation loss: 0.896472\tBest loss: 0.866297\tAccuracy: 69.33%\n", "43\tValidation loss: 0.893887\tBest loss: 
0.866297\tAccuracy: 67.33%\n", "44\tValidation loss: 0.925727\tBest loss: 0.866297\tAccuracy: 68.67%\n", "45\tValidation loss: 0.945748\tBest loss: 0.866297\tAccuracy: 66.00%\n", "46\tValidation loss: 0.897087\tBest loss: 0.866297\tAccuracy: 70.00%\n", "47\tValidation loss: 0.923855\tBest loss: 0.866297\tAccuracy: 68.67%\n", "48\tValidation loss: 0.944244\tBest loss: 0.866297\tAccuracy: 66.67%\n", "49\tValidation loss: 0.975582\tBest loss: 0.866297\tAccuracy: 66.67%\n", "50\tValidation loss: 0.889869\tBest loss: 0.866297\tAccuracy: 68.67%\n", "51\tValidation loss: 0.895552\tBest loss: 0.866297\tAccuracy: 69.33%\n", "52\tValidation loss: 0.943707\tBest loss: 0.866297\tAccuracy: 66.00%\n", "53\tValidation loss: 0.902883\tBest loss: 0.866297\tAccuracy: 70.67%\n", "54\tValidation loss: 0.958292\tBest loss: 0.866297\tAccuracy: 68.67%\n", "55\tValidation loss: 0.917368\tBest loss: 0.866297\tAccuracy: 67.33%\n", "Early stopping!\n", "Total training time: 1.1s\n", "INFO:tensorflow:Restoring parameters from ./my_mnist_model_5_to_9_five_frozen\n", "Final test accuracy: 61.16%\n" ] } ], "source": [ "import time\n", "\n", "n_epochs = 1000\n", "batch_size = 20\n", "\n", "max_checks_without_progress = 20\n", "checks_without_progress = 0\n", "best_loss = np.infty\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " restore_saver.restore(sess, \"./my_best_mnist_model_0_to_4\")\n", " t0 = time.time()\n", " \n", " hidden5_train = hidden5_out.eval(feed_dict={X: X_train2, y: y_train2})\n", " hidden5_valid = hidden5_out.eval(feed_dict={X: X_valid2, y: y_valid2})\n", " \n", " for epoch in range(n_epochs):\n", " rnd_idx = np.random.permutation(len(X_train2))\n", " for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size):\n", " h5_batch, y_batch = hidden5_train[rnd_indices], y_train2[rnd_indices]\n", " sess.run(training_op, feed_dict={hidden5_out: h5_batch, y: y_batch})\n", " loss_val, acc_val = sess.run([loss, accuracy], feed_dict={hidden5_out: hidden5_valid, y: y_valid2})\n", " if loss_val < best_loss:\n", " save_path = five_frozen_saver.save(sess, \"./my_mnist_model_5_to_9_five_frozen\")\n", " best_loss = loss_val\n", " checks_without_progress = 0\n", " else:\n", " checks_without_progress += 1\n", " if checks_without_progress > max_checks_without_progress:\n", " print(\"Early stopping!\")\n", " break\n", " print(\"{}\\tValidation loss: {:.6f}\\tBest loss: {:.6f}\\tAccuracy: {:.2f}%\".format(\n", " epoch, loss_val, best_loss, acc_val * 100))\n", "\n", " t1 = time.time()\n", " print(\"Total training time: {:.1f}s\".format(t1 - t0))\n", "\n", "with tf.Session() as sess:\n", " five_frozen_saver.restore(sess, \"./my_mnist_model_5_to_9_five_frozen\")\n", " acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2})\n", " print(\"Final test accuracy: {:.2f}%\".format(acc_test * 100))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 9.4." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_Exercise: try again reusing just four hidden layers instead of five. 
Can you achieve a higher precision?_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's load the best model again, but this time we will create a new softmax output layer on top of the 4th hidden layer:" ] }, { "cell_type": "code", "execution_count": 150, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_outputs = 5\n", "\n", "restore_saver = tf.train.import_meta_graph(\"./my_best_mnist_model_0_to_4.meta\")\n", "\n", "X = tf.get_default_graph().get_tensor_by_name(\"X:0\")\n", "y = tf.get_default_graph().get_tensor_by_name(\"y:0\")\n", "\n", "hidden4_out = tf.get_default_graph().get_tensor_by_name(\"hidden4_out:0\")\n", "logits = tf.layers.dense(hidden4_out, n_outputs, kernel_initializer=he_init, name=\"new_logits\")\n", "Y_proba = tf.nn.softmax(logits)\n", "xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", "loss = tf.reduce_mean(xentropy)\n", "correct = tf.nn.in_top_k(logits, y, 1)\n", "accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now let's create the training operation. We want to freeze all the layers except for the new output layer:" ] }, { "cell_type": "code", "execution_count": 151, "metadata": {}, "outputs": [], "source": [ "learning_rate = 0.01\n", "\n", "output_layer_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=\"new_logits\")\n", "optimizer = tf.train.AdamOptimizer(learning_rate, name=\"Adam2\")\n", "training_op = optimizer.minimize(loss, var_list=output_layer_vars)\n", "\n", "init = tf.global_variables_initializer()\n", "four_frozen_saver = tf.train.Saver()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And once again we train the model with the same code as earlier. Note: we could of course write a function once and use it multiple times, rather than copying almost the same training code over and over again, but as we keep tweaking the code slightly, the function would need multiple arguments and `if` statements, and it would have to be at the beginning of the notebook, where it would not make much sense to readers. In short it would be very confusing, so we're better off with copy & paste." 
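] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check, we can list the variables the optimizer will actually update: with `tf.layers.dense()` naming, these should be just the new output layer's kernel and bias:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sanity check: only the new output layer should be trainable here.\n", "# We expect new_logits/kernel and new_logits/bias.\n", "for var in output_layer_vars:\n", "    print(var.op.name)"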
] }, { "cell_type": "code", "execution_count": 152, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_best_mnist_model_0_to_4\n", "0\tValidation loss: 1.073254\tBest loss: 1.073254\tAccuracy: 51.33%\n", "1\tValidation loss: 1.039487\tBest loss: 1.039487\tAccuracy: 64.00%\n", "2\tValidation loss: 0.991418\tBest loss: 0.991418\tAccuracy: 59.33%\n", "3\tValidation loss: 0.902691\tBest loss: 0.902691\tAccuracy: 64.67%\n", "4\tValidation loss: 0.919874\tBest loss: 0.902691\tAccuracy: 63.33%\n", "5\tValidation loss: 0.879734\tBest loss: 0.879734\tAccuracy: 72.00%\n", "6\tValidation loss: 0.877940\tBest loss: 0.877940\tAccuracy: 70.67%\n", "7\tValidation loss: 0.899513\tBest loss: 0.877940\tAccuracy: 71.33%\n", "8\tValidation loss: 0.879717\tBest loss: 0.877940\tAccuracy: 67.33%\n", "9\tValidation loss: 0.826527\tBest loss: 0.826527\tAccuracy: 75.33%\n", "10\tValidation loss: 0.890165\tBest loss: 0.826527\tAccuracy: 67.33%\n", "11\tValidation loss: 0.876235\tBest loss: 0.826527\tAccuracy: 68.67%\n", "12\tValidation loss: 0.877598\tBest loss: 0.826527\tAccuracy: 71.33%\n", "13\tValidation loss: 0.898070\tBest loss: 0.826527\tAccuracy: 74.67%\n", "14\tValidation loss: 0.923526\tBest loss: 0.826527\tAccuracy: 68.00%\n", "15\tValidation loss: 0.859624\tBest loss: 0.826527\tAccuracy: 70.00%\n", "16\tValidation loss: 0.896264\tBest loss: 0.826527\tAccuracy: 67.33%\n", "17\tValidation loss: 0.800813\tBest loss: 0.800813\tAccuracy: 73.33%\n", "18\tValidation loss: 0.811318\tBest loss: 0.800813\tAccuracy: 74.00%\n", "19\tValidation loss: 0.809687\tBest loss: 0.800813\tAccuracy: 75.33%\n", "20\tValidation loss: 0.807125\tBest loss: 0.800813\tAccuracy: 72.67%\n", "21\tValidation loss: 0.819150\tBest loss: 0.800813\tAccuracy: 71.33%\n", "22\tValidation loss: 0.849812\tBest loss: 0.800813\tAccuracy: 76.67%\n", "23\tValidation loss: 0.801709\tBest loss: 0.800813\tAccuracy: 74.67%\n", "24\tValidation loss: 0.832877\tBest loss: 0.800813\tAccuracy: 74.00%\n", "25\tValidation loss: 0.792853\tBest loss: 0.792853\tAccuracy: 72.67%\n", "26\tValidation loss: 0.842031\tBest loss: 0.792853\tAccuracy: 76.00%\n", "27\tValidation loss: 0.872236\tBest loss: 0.792853\tAccuracy: 71.33%\n", "28\tValidation loss: 0.782557\tBest loss: 0.782557\tAccuracy: 78.00%\n", "29\tValidation loss: 0.802515\tBest loss: 0.782557\tAccuracy: 73.33%\n", "30\tValidation loss: 0.812652\tBest loss: 0.782557\tAccuracy: 72.67%\n", "31\tValidation loss: 0.825467\tBest loss: 0.782557\tAccuracy: 76.00%\n", "32\tValidation loss: 0.791320\tBest loss: 0.782557\tAccuracy: 76.67%\n", "33\tValidation loss: 0.785207\tBest loss: 0.782557\tAccuracy: 77.33%\n", "34\tValidation loss: 0.815450\tBest loss: 0.782557\tAccuracy: 76.67%\n", "35\tValidation loss: 0.865081\tBest loss: 0.782557\tAccuracy: 71.33%\n", "36\tValidation loss: 0.852323\tBest loss: 0.782557\tAccuracy: 74.67%\n", "37\tValidation loss: 0.836967\tBest loss: 0.782557\tAccuracy: 72.00%\n", "38\tValidation loss: 0.807404\tBest loss: 0.782557\tAccuracy: 77.33%\n", "39\tValidation loss: 0.821566\tBest loss: 0.782557\tAccuracy: 75.33%\n", "40\tValidation loss: 0.817326\tBest loss: 0.782557\tAccuracy: 76.00%\n", "41\tValidation loss: 0.807987\tBest loss: 0.782557\tAccuracy: 70.67%\n", "42\tValidation loss: 0.838029\tBest loss: 0.782557\tAccuracy: 74.00%\n", "43\tValidation loss: 0.820425\tBest loss: 0.782557\tAccuracy: 76.00%\n", "44\tValidation loss: 0.785871\tBest loss: 0.782557\tAccuracy: 76.00%\n", "45\tValidation loss: 
0.844337\tBest loss: 0.782557\tAccuracy: 78.67%\n", "46\tValidation loss: 0.764127\tBest loss: 0.764127\tAccuracy: 78.67%\n", "47\tValidation loss: 0.789726\tBest loss: 0.764127\tAccuracy: 77.33%\n", "48\tValidation loss: 0.839190\tBest loss: 0.764127\tAccuracy: 72.67%\n", "49\tValidation loss: 0.849353\tBest loss: 0.764127\tAccuracy: 75.33%\n", "50\tValidation loss: 0.869818\tBest loss: 0.764127\tAccuracy: 74.00%\n", "51\tValidation loss: 0.805526\tBest loss: 0.764127\tAccuracy: 76.67%\n", "52\tValidation loss: 0.850749\tBest loss: 0.764127\tAccuracy: 72.67%\n", "53\tValidation loss: 0.838693\tBest loss: 0.764127\tAccuracy: 71.33%\n", "54\tValidation loss: 0.791396\tBest loss: 0.764127\tAccuracy: 75.33%\n", "55\tValidation loss: 0.846888\tBest loss: 0.764127\tAccuracy: 76.00%\n", "56\tValidation loss: 0.826717\tBest loss: 0.764127\tAccuracy: 74.67%\n", "57\tValidation loss: 0.878286\tBest loss: 0.764127\tAccuracy: 70.67%\n", "58\tValidation loss: 0.878869\tBest loss: 0.764127\tAccuracy: 72.67%\n", "59\tValidation loss: 0.822241\tBest loss: 0.764127\tAccuracy: 72.67%\n", "60\tValidation loss: 0.864925\tBest loss: 0.764127\tAccuracy: 73.33%\n", "61\tValidation loss: 0.804545\tBest loss: 0.764127\tAccuracy: 73.33%\n", "62\tValidation loss: 0.891784\tBest loss: 0.764127\tAccuracy: 72.67%\n", "63\tValidation loss: 0.810186\tBest loss: 0.764127\tAccuracy: 74.00%\n", "64\tValidation loss: 0.810786\tBest loss: 0.764127\tAccuracy: 74.67%\n", "65\tValidation loss: 0.818044\tBest loss: 0.764127\tAccuracy: 74.00%\n", "66\tValidation loss: 0.853420\tBest loss: 0.764127\tAccuracy: 74.67%\n", "Early stopping!\n", "INFO:tensorflow:Restoring parameters from ./my_mnist_model_5_to_9_four_frozen\n", "Final test accuracy: 69.10%\n" ] } ], "source": [ "n_epochs = 1000\n", "batch_size = 20\n", "\n", "max_checks_without_progress = 20\n", "checks_without_progress = 0\n", "best_loss = np.infty\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " restore_saver.restore(sess, \"./my_best_mnist_model_0_to_4\")\n", " \n", " for epoch in range(n_epochs):\n", " rnd_idx = np.random.permutation(len(X_train2))\n", " for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size):\n", " X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices]\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid2, y: y_valid2})\n", " if loss_val < best_loss:\n", " save_path = four_frozen_saver.save(sess, \"./my_mnist_model_5_to_9_four_frozen\")\n", " best_loss = loss_val\n", " checks_without_progress = 0\n", " else:\n", " checks_without_progress += 1\n", " if checks_without_progress > max_checks_without_progress:\n", " print(\"Early stopping!\")\n", " break\n", " print(\"{}\\tValidation loss: {:.6f}\\tBest loss: {:.6f}\\tAccuracy: {:.2f}%\".format(\n", " epoch, loss_val, best_loss, acc_val * 100))\n", "\n", "with tf.Session() as sess:\n", " four_frozen_saver.restore(sess, \"./my_mnist_model_5_to_9_four_frozen\")\n", " acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2})\n", " print(\"Final test accuracy: {:.2f}%\".format(acc_test * 100))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Still not fantastic, but much better." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 9.5." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_Exercise: now unfreeze the top two hidden layers and continue training: can you get the model to perform even better?_" ] }, { "cell_type": "code", "execution_count": 153, "metadata": {}, "outputs": [], "source": [ "learning_rate = 0.01\n", "\n", "unfrozen_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=\"hidden[34]|new_logits\")\n", "optimizer = tf.train.AdamOptimizer(learning_rate, name=\"Adam3\")\n", "training_op = optimizer.minimize(loss, var_list=unfrozen_vars)\n", "\n", "init = tf.global_variables_initializer()\n", "two_frozen_saver = tf.train.Saver()" ] }, { "cell_type": "code", "execution_count": 154, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_mnist_model_5_to_9_four_frozen\n", "0\tValidation loss: 1.054859\tBest loss: 1.054859\tAccuracy: 74.00%\n", "1\tValidation loss: 0.812410\tBest loss: 0.812410\tAccuracy: 78.00%\n", "2\tValidation loss: 0.750377\tBest loss: 0.750377\tAccuracy: 80.67%\n", "3\tValidation loss: 0.570973\tBest loss: 0.570973\tAccuracy: 84.67%\n", "4\tValidation loss: 0.805442\tBest loss: 0.570973\tAccuracy: 79.33%\n", "5\tValidation loss: 0.920925\tBest loss: 0.570973\tAccuracy: 80.00%\n", "6\tValidation loss: 0.817471\tBest loss: 0.570973\tAccuracy: 81.33%\n", "7\tValidation loss: 0.777876\tBest loss: 0.570973\tAccuracy: 84.00%\n", "8\tValidation loss: 1.030498\tBest loss: 0.570973\tAccuracy: 74.67%\n", "9\tValidation loss: 1.074356\tBest loss: 0.570973\tAccuracy: 81.33%\n", "10\tValidation loss: 0.912521\tBest loss: 0.570973\tAccuracy: 83.33%\n", "11\tValidation loss: 1.356695\tBest loss: 0.570973\tAccuracy: 79.33%\n", "12\tValidation loss: 0.918798\tBest loss: 0.570973\tAccuracy: 82.00%\n", "13\tValidation loss: 0.971029\tBest loss: 0.570973\tAccuracy: 82.67%\n", "14\tValidation loss: 0.860108\tBest loss: 0.570973\tAccuracy: 83.33%\n", "15\tValidation loss: 1.074813\tBest loss: 0.570973\tAccuracy: 82.00%\n", "16\tValidation loss: 0.867760\tBest loss: 0.570973\tAccuracy: 84.00%\n", "17\tValidation loss: 0.858290\tBest loss: 0.570973\tAccuracy: 85.33%\n", "18\tValidation loss: 0.996560\tBest loss: 0.570973\tAccuracy: 85.33%\n", "19\tValidation loss: 1.304507\tBest loss: 0.570973\tAccuracy: 83.33%\n", "20\tValidation loss: 1.134808\tBest loss: 0.570973\tAccuracy: 80.67%\n", "21\tValidation loss: 1.189581\tBest loss: 0.570973\tAccuracy: 82.00%\n", "22\tValidation loss: 1.131344\tBest loss: 0.570973\tAccuracy: 81.33%\n", "23\tValidation loss: 1.240507\tBest loss: 0.570973\tAccuracy: 82.67%\n", "Early stopping!\n", "INFO:tensorflow:Restoring parameters from ./my_mnist_model_5_to_9_two_frozen\n", "Final test accuracy: 78.09%\n" ] } ], "source": [ "n_epochs = 1000\n", "batch_size = 20\n", "\n", "max_checks_without_progress = 20\n", "checks_without_progress = 0\n", "best_loss = np.infty\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " four_frozen_saver.restore(sess, \"./my_mnist_model_5_to_9_four_frozen\")\n", " \n", " for epoch in range(n_epochs):\n", " rnd_idx = np.random.permutation(len(X_train2))\n", " for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size):\n", " X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices]\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid2, y: y_valid2})\n", " if loss_val < best_loss:\n", " save_path = two_frozen_saver.save(sess, 
\"./my_mnist_model_5_to_9_two_frozen\")\n", " best_loss = loss_val\n", " checks_without_progress = 0\n", " else:\n", " checks_without_progress += 1\n", " if checks_without_progress > max_checks_without_progress:\n", " print(\"Early stopping!\")\n", " break\n", " print(\"{}\\tValidation loss: {:.6f}\\tBest loss: {:.6f}\\tAccuracy: {:.2f}%\".format(\n", " epoch, loss_val, best_loss, acc_val * 100))\n", "\n", "with tf.Session() as sess:\n", " two_frozen_saver.restore(sess, \"./my_mnist_model_5_to_9_two_frozen\")\n", " acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2})\n", " print(\"Final test accuracy: {:.2f}%\".format(acc_test * 100))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check what accuracy we can get by unfreezing all layers:" ] }, { "cell_type": "code", "execution_count": 155, "metadata": {}, "outputs": [], "source": [ "learning_rate = 0.01\n", "\n", "optimizer = tf.train.AdamOptimizer(learning_rate, name=\"Adam4\")\n", "training_op = optimizer.minimize(loss)\n", "\n", "init = tf.global_variables_initializer()\n", "no_frozen_saver = tf.train.Saver()" ] }, { "cell_type": "code", "execution_count": 156, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_mnist_model_5_to_9_two_frozen\n", "0\tValidation loss: 0.863416\tBest loss: 0.863416\tAccuracy: 86.00%\n", "1\tValidation loss: 0.695079\tBest loss: 0.695079\tAccuracy: 90.00%\n", "2\tValidation loss: 0.402921\tBest loss: 0.402921\tAccuracy: 92.00%\n", "3\tValidation loss: 0.606936\tBest loss: 0.402921\tAccuracy: 92.00%\n", "4\tValidation loss: 0.354645\tBest loss: 0.354645\tAccuracy: 90.67%\n", "5\tValidation loss: 0.376935\tBest loss: 0.354645\tAccuracy: 90.67%\n", "6\tValidation loss: 0.593208\tBest loss: 0.354645\tAccuracy: 90.00%\n", "7\tValidation loss: 0.388302\tBest loss: 0.354645\tAccuracy: 92.67%\n", "8\tValidation loss: 0.503276\tBest loss: 0.354645\tAccuracy: 91.33%\n", "9\tValidation loss: 1.440716\tBest loss: 0.354645\tAccuracy: 80.00%\n", "10\tValidation loss: 0.464323\tBest loss: 0.354645\tAccuracy: 92.00%\n", "11\tValidation loss: 0.410302\tBest loss: 0.354645\tAccuracy: 93.33%\n", "12\tValidation loss: 1.131754\tBest loss: 0.354645\tAccuracy: 88.00%\n", "13\tValidation loss: 0.511544\tBest loss: 0.354645\tAccuracy: 92.00%\n", "14\tValidation loss: 0.402083\tBest loss: 0.354645\tAccuracy: 94.00%\n", "15\tValidation loss: 1.149943\tBest loss: 0.354645\tAccuracy: 92.00%\n", "16\tValidation loss: 0.405171\tBest loss: 0.354645\tAccuracy: 94.00%\n", "17\tValidation loss: 0.304346\tBest loss: 0.304346\tAccuracy: 94.67%\n", "18\tValidation loss: 0.386952\tBest loss: 0.304346\tAccuracy: 94.67%\n", "19\tValidation loss: 0.387063\tBest loss: 0.304346\tAccuracy: 94.67%\n", "20\tValidation loss: 0.384417\tBest loss: 0.304346\tAccuracy: 94.67%\n", "21\tValidation loss: 0.381116\tBest loss: 0.304346\tAccuracy: 94.67%\n", "22\tValidation loss: 0.379346\tBest loss: 0.304346\tAccuracy: 94.67%\n", "23\tValidation loss: 0.378128\tBest loss: 0.304346\tAccuracy: 94.67%\n", "24\tValidation loss: 0.376642\tBest loss: 0.304346\tAccuracy: 94.67%\n", "25\tValidation loss: 0.375432\tBest loss: 0.304346\tAccuracy: 94.67%\n", "26\tValidation loss: 0.374804\tBest loss: 0.304346\tAccuracy: 94.67%\n", "27\tValidation loss: 0.373952\tBest loss: 0.304346\tAccuracy: 94.67%\n", "28\tValidation loss: 0.373471\tBest loss: 0.304346\tAccuracy: 94.67%\n", "29\tValidation loss: 0.373027\tBest loss: 0.304346\tAccuracy: 94.67%\n", "30\tValidation 
loss: 0.373124\tBest loss: 0.304346\tAccuracy: 94.67%\n", "31\tValidation loss: 0.373098\tBest loss: 0.304346\tAccuracy: 94.67%\n", "32\tValidation loss: 0.373206\tBest loss: 0.304346\tAccuracy: 94.67%\n", "33\tValidation loss: 0.372812\tBest loss: 0.304346\tAccuracy: 94.67%\n", "34\tValidation loss: 0.373109\tBest loss: 0.304346\tAccuracy: 94.67%\n", "35\tValidation loss: 0.372616\tBest loss: 0.304346\tAccuracy: 94.67%\n", "36\tValidation loss: 0.372491\tBest loss: 0.304346\tAccuracy: 94.67%\n", "37\tValidation loss: 0.372270\tBest loss: 0.304346\tAccuracy: 94.67%\n", "Early stopping!\n", "INFO:tensorflow:Restoring parameters from ./my_mnist_model_5_to_9_no_frozen\n", "Final test accuracy: 91.34%\n" ] } ], "source": [ "n_epochs = 1000\n", "batch_size = 20\n", "\n", "max_checks_without_progress = 20\n", "checks_without_progress = 0\n", "best_loss = np.infty\n", "\n", "with tf.Session() as sess:\n", " init.run()\n", " two_frozen_saver.restore(sess, \"./my_mnist_model_5_to_9_two_frozen\")\n", " \n", " for epoch in range(n_epochs):\n", " rnd_idx = np.random.permutation(len(X_train2))\n", " for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size):\n", " X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices]\n", " sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", " loss_val, acc_val = sess.run([loss, accuracy], feed_dict={X: X_valid2, y: y_valid2})\n", " if loss_val < best_loss:\n", " save_path = no_frozen_saver.save(sess, \"./my_mnist_model_5_to_9_no_frozen\")\n", " best_loss = loss_val\n", " checks_without_progress = 0\n", " else:\n", " checks_without_progress += 1\n", " if checks_without_progress > max_checks_without_progress:\n", " print(\"Early stopping!\")\n", " break\n", " print(\"{}\\tValidation loss: {:.6f}\\tBest loss: {:.6f}\\tAccuracy: {:.2f}%\".format(\n", " epoch, loss_val, best_loss, acc_val * 100))\n", "\n", "with tf.Session() as sess:\n", " no_frozen_saver.restore(sess, \"./my_mnist_model_5_to_9_no_frozen\")\n", " acc_test = accuracy.eval(feed_dict={X: X_test2, y: y_test2})\n", " print(\"Final test accuracy: {:.2f}%\".format(acc_test * 100))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's compare that to a DNN trained from scratch:" ] }, { "cell_type": "code", "execution_count": 157, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0\tValidation loss: 0.674618\tBest loss: 0.674618\tAccuracy: 80.67%\n", "1\tValidation loss: 0.584845\tBest loss: 0.584845\tAccuracy: 88.67%\n", "2\tValidation loss: 0.647296\tBest loss: 0.584845\tAccuracy: 84.00%\n", "3\tValidation loss: 0.530389\tBest loss: 0.530389\tAccuracy: 87.33%\n", "4\tValidation loss: 0.683215\tBest loss: 0.530389\tAccuracy: 90.67%\n", "5\tValidation loss: 0.538040\tBest loss: 0.530389\tAccuracy: 89.33%\n", "6\tValidation loss: 0.670196\tBest loss: 0.530389\tAccuracy: 90.67%\n", "7\tValidation loss: 0.836470\tBest loss: 0.530389\tAccuracy: 85.33%\n", "8\tValidation loss: 0.837684\tBest loss: 0.530389\tAccuracy: 92.67%\n", "9\tValidation loss: 0.588950\tBest loss: 0.530389\tAccuracy: 88.00%\n", "10\tValidation loss: 0.643213\tBest loss: 0.530389\tAccuracy: 90.67%\n", "11\tValidation loss: 1.010521\tBest loss: 0.530389\tAccuracy: 88.00%\n", "12\tValidation loss: 0.931423\tBest loss: 0.530389\tAccuracy: 90.00%\n", "13\tValidation loss: 1.563524\tBest loss: 0.530389\tAccuracy: 88.67%\n", "14\tValidation loss: 2.340119\tBest loss: 0.530389\tAccuracy: 89.33%\n", "15\tValidation loss: 1.402095\tBest loss: 0.530389\tAccuracy: 88.00%\n", 
"16\tValidation loss: 1.269974\tBest loss: 0.530389\tAccuracy: 86.00%\n", "17\tValidation loss: 1.036325\tBest loss: 0.530389\tAccuracy: 89.33%\n", "18\tValidation loss: 1.578565\tBest loss: 0.530389\tAccuracy: 88.67%\n", "19\tValidation loss: 0.993890\tBest loss: 0.530389\tAccuracy: 93.33%\n", "20\tValidation loss: 0.958130\tBest loss: 0.530389\tAccuracy: 87.33%\n", "21\tValidation loss: 1.505322\tBest loss: 0.530389\tAccuracy: 88.67%\n", "22\tValidation loss: 1.378772\tBest loss: 0.530389\tAccuracy: 89.33%\n", "23\tValidation loss: 0.999445\tBest loss: 0.530389\tAccuracy: 88.00%\n", "24\tValidation loss: 2.366345\tBest loss: 0.530389\tAccuracy: 90.00%\n", "Early stopping!\n" ] }, { "data": { "text/plain": [ "DNNClassifier(activation=,\n", " batch_norm_momentum=None, batch_size=20, dropout_rate=None,\n", " initializer=,\n", " learning_rate=0.01, n_hidden_layers=4, n_neurons=100,\n", " optimizer_class=,\n", " random_state=42)" ] }, "execution_count": 157, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dnn_clf_5_to_9 = DNNClassifier(n_hidden_layers=4, random_state=42)\n", "dnn_clf_5_to_9.fit(X_train2, y_train2, n_epochs=1000, X_valid=X_valid2, y_valid=y_valid2)" ] }, { "cell_type": "code", "execution_count": 158, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.8481793869574161" ] }, "execution_count": 158, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_pred = dnn_clf_5_to_9.predict(X_test2)\n", "accuracy_score(y_test2, y_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Transfer learning allowed us to go from 84.8% accuracy to 91.3%. Not too bad!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 10. Pretraining on an auxiliary task" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this exercise you will build a DNN that compares two MNIST digit images and predicts whether they represent the same digit or not. Then you will reuse the lower layers of this network to train an MNIST classifier using very little training data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 10.1.\n", "Exercise: _Start by building two DNNs (let's call them DNN A and B), both similar to the one you built earlier but without the output layer: each DNN should have five hidden layers of 100 neurons each, He initialization, and ELU activation. Next, add one more hidden layer with 10 units on top of both DNNs. You should use TensorFlow's `concat()` function with `axis=1` to concatenate the outputs of both DNNs along the horizontal axis, then feed the result to the hidden layer. Finally, add an output layer with a single neuron using the logistic activation function._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Warning**! There was an error in the book for this exercise: there was no instruction to add a top hidden layer. Without it, the neural network generally fails to start learning. If you have the latest version of the book, this error has been fixed." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You could have two input placeholders, `X1` and `X2`, one for the images that should be fed to the first DNN, and the other for the images that should be fed to the second DNN. It would work fine. 
However, another option is to have a single input placeholder to hold both sets of images (each row will hold a pair of images), and use `tf.unstack()` to split this tensor into two separate tensors, like this:" ] }, { "cell_type": "code", "execution_count": 159, "metadata": {}, "outputs": [], "source": [ "n_inputs = 28 * 28 # MNIST\n", "\n", "reset_graph()\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, 2, n_inputs), name=\"X\")\n", "X1, X2 = tf.unstack(X, axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also need the labels placeholder. Each label will be 0 if the images represent different digits, or 1 if they represent the same digit:" ] }, { "cell_type": "code", "execution_count": 160, "metadata": {}, "outputs": [], "source": [ "y = tf.placeholder(tf.int32, shape=[None, 1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's feed these inputs through two separate DNNs:" ] }, { "cell_type": "code", "execution_count": 161, "metadata": {}, "outputs": [], "source": [ "dnn1 = dnn(X1, name=\"DNN_A\")\n", "dnn2 = dnn(X2, name=\"DNN_B\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And let's concatenate their outputs:" ] }, { "cell_type": "code", "execution_count": 162, "metadata": {}, "outputs": [], "source": [ "dnn_outputs = tf.concat([dnn1, dnn2], axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each DNN outputs 100 activations (per instance), so the shape is `[None, 100]`:" ] }, { "cell_type": "code", "execution_count": 163, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "TensorShape([Dimension(None), Dimension(100)])" ] }, "execution_count": 163, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dnn1.shape" ] }, { "cell_type": "code", "execution_count": 164, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "TensorShape([Dimension(None), Dimension(100)])" ] }, "execution_count": 164, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dnn2.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And of course the concatenated outputs have a shape of `[None, 200]`:" ] }, { "cell_type": "code", "execution_count": 165, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "TensorShape([Dimension(None), Dimension(200)])" ] }, "execution_count": 165, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dnn_outputs.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's add an extra hidden layer with just 10 neurons, and the output layer with a single neuron:" ] }, { "cell_type": "code", "execution_count": 166, "metadata": {}, "outputs": [], "source": [ "hidden = tf.layers.dense(dnn_outputs, units=10, activation=tf.nn.elu, kernel_initializer=he_init)\n", "logits = tf.layers.dense(hidden, units=1, kernel_initializer=he_init)\n", "y_proba = tf.nn.sigmoid(logits)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The whole network predicts `1` if `y_proba >= 0.5` (i.e. the network predicts that the images represent the same digit), or `0` otherwise. 
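Note that the sigmoid function is strictly increasing and equals 0.5 exactly when its input is 0, so `y_proba >= 0.5` holds if and only if `logits >= 0`. 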
We therefore compute `logits >= 0` directly, which gives the same predictions at a slightly lower cost: " ] }, { "cell_type": "code", "execution_count": 167, "metadata": {}, "outputs": [], "source": [ "y_pred = tf.cast(tf.greater_equal(logits, 0), tf.int32)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's add the cost function:" ] }, { "cell_type": "code", "execution_count": 168, "metadata": {}, "outputs": [], "source": [ "y_as_float = tf.cast(y, tf.float32)\n", "xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y_as_float, logits=logits)\n", "loss = tf.reduce_mean(xentropy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we can now create the training operation using an optimizer:" ] }, { "cell_type": "code", "execution_count": 169, "metadata": {}, "outputs": [], "source": [ "learning_rate = 0.01\n", "momentum = 0.95\n", "\n", "optimizer = tf.train.MomentumOptimizer(learning_rate, momentum, use_nesterov=True)\n", "training_op = optimizer.minimize(loss)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will want to measure our classifier's accuracy." ] }, { "cell_type": "code", "execution_count": 170, "metadata": {}, "outputs": [], "source": [ "y_pred_correct = tf.equal(y_pred, y)\n", "accuracy = tf.reduce_mean(tf.cast(y_pred_correct, tf.float32))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And the usual `init` and `saver`:" ] }, { "cell_type": "code", "execution_count": 171, "metadata": {}, "outputs": [], "source": [ "init = tf.global_variables_initializer()\n", "saver = tf.train.Saver()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 10.2.\n", "_Exercise: split the MNIST training set in two sets: split #1 should contain 55,000 images, and split #2 should contain 5,000 images. Create a function that generates a training batch where each instance is a pair of MNIST images picked from split #1. Half of the training instances should be pairs of images that belong to the same class, while the other half should be images from different classes. For each pair, the training label should be 1 if the images are from the same class, or 0 if they are from different classes._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The MNIST dataset returned by TensorFlow's `input_data()` function is already split into 3 parts: a training set (55,000 instances), a validation set (5,000 instances) and a test set (10,000 instances). Let's use the first set to generate the training set composed of image pairs, and we will use the second set for the second phase of the exercise (to train a regular MNIST classifier). We will use the third set as the test set for both phases." ] }, { "cell_type": "code", "execution_count": 172, "metadata": {}, "outputs": [], "source": [ "X_train1 = X_train\n", "y_train1 = y_train\n", "\n", "X_train2 = X_valid\n", "y_train2 = y_valid\n", "\n", "X_test = X_test\n", "y_test = y_test" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's write a function that generates pairs of images: 50% representing the same digit, and 50% representing different digits. There are many ways to implement this. In this implementation, we first decide how many \"same\" pairs (i.e. pairs of images representing the same digit) we will generate, and how many \"different\" pairs (i.e. pairs of images representing different digits). We could just use `batch_size // 2` but we want to handle the case where it is odd (granted, that might be overkill!). 
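For example, with `batch_size=5` we start with `size1=2` \"same\" pairs and `size2=3` \"different\" pairs, and we randomly swap the two counts half the time so that neither class is systematically favored. 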
Then we draw random pairs of indices, keeping only as many \"same\" pairs as needed, and then we generate the required number of \"different\" pairs. Finally we shuffle the batch and return it:" ] }, { "cell_type": "code", "execution_count": 173, "metadata": {}, "outputs": [], "source": [ "def generate_batch(images, labels, batch_size):\n", " size1 = batch_size // 2 # number of \"same\" pairs\n", " size2 = batch_size - size1 # number of \"different\" pairs\n", " if size1 != size2 and np.random.rand() > 0.5:\n", " size1, size2 = size2, size1 # random swap, so neither class is favored when batch_size is odd\n", " X = []\n", " y = []\n", " while len(X) < size1: # draw random index pairs, keep those whose labels match\n", " rnd_idx1, rnd_idx2 = np.random.randint(0, len(images), 2)\n", " if rnd_idx1 != rnd_idx2 and labels[rnd_idx1] == labels[rnd_idx2]:\n", " X.append(np.array([images[rnd_idx1], images[rnd_idx2]]))\n", " y.append([1]) # 1 means \"same\"\n", " while len(X) < batch_size: # fill the rest with \"different\" pairs\n", " rnd_idx1, rnd_idx2 = np.random.randint(0, len(images), 2)\n", " if labels[rnd_idx1] != labels[rnd_idx2]:\n", " X.append(np.array([images[rnd_idx1], images[rnd_idx2]]))\n", " y.append([0]) # 0 means \"different\"\n", " rnd_indices = np.random.permutation(batch_size)\n", " return np.array(X)[rnd_indices], np.array(y)[rnd_indices] # shuffle the batch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's test it by generating a small batch of 5 image pairs:" ] }, { "cell_type": "code", "execution_count": 174, "metadata": {}, "outputs": [], "source": [ "batch_size = 5\n", "X_batch, y_batch = generate_batch(X_train1, y_train1, batch_size)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each row in `X_batch` contains a pair of images:" ] }, { "cell_type": "code", "execution_count": 175, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "((5, 2, 784), dtype('float32'))" ] }, "execution_count": 175, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X_batch.shape, X_batch.dtype" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at these pairs:" ] }, { "cell_type": "code", "execution_count": 176, "metadata": {}, "outputs": [ { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAANMAAAGiCAYAAAB9DvMJAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAHKxJREFUeJzt3XmYjef5B/AvwRjENvbYsiEIQkssEVmkspS4VJDKRl1SsRYhQWm0SDSi0UjbuGJUqxIhlliCBolYag2SyaZE5YqZjGUslRHR3x/5PffcJ+cdc86Z+z3nzDnfzz/5Xu+ZOefJjNtze9/nfd5i//vf/0BEhVc81gMgShQsJiIjLCYiIywmIiMsJiIjLCYiIywmIiMsJiIjLCYiIyViPYB8cFlG+IpF+H38WYfP82fNmYnICIuJyAiLicgIi4nICIuJyAiLicgIi4nICIuJyAiLicgIi4nICIuJyAiLicgIi4nICIuJyAiLichIvN7PFPdGjhwpecaMGZK7du0KAFi2bFnUx1SUbN26VfLKlSsl//73vwcA5ObmyrFixfJuH6pUqZLkiRMnAgAGDRokx0qUiN0fac5MREZYTERGisXpxv1xNahTp04BAB5++GE5tm7dOsm6JUlJSQEAbNmyRY7ddNNNfg8RKAK3rT/33HOSp02bJjknJyfoa/WfS93meRk+fLjk559/vjBDDBVvWyfyE4uJyAjP5uVj4cKFkocMGQIAyM7OlmPNmjWT7M7gAcDvfvc7AMD58+f9HmKRsWnTJgB5Z+qAwNauYsWKkqtWrQoAGDt2rBw7e/as5D/+8Y+SXXv9j3/8Q46dPHlS8qRJkyTXrVs34vGHijMTkRGegFAOHDgg+eabb5b83//+FwDQvHlzObZq1SrJ586dk9yzZ08AwL/+9S85VrJkSfvBBourExAZGRmSO3XqBCBwZu/Tp4/kUaNGSW7RokXIn+FODOnZav78+ZI3btwouWbNmiG/bwh4AoLITywmIiNJfwLim2++kTx69GjJrrXTZs2aJVm3De+9957kS5cuAYhaaxe3Zs+eLdm1d+7kAgA8++yzkq+66qqIPsMtHfr444/l2Oeffy45MzNTsnGb54kzE5ERFhORkaRv8+bNmyd5zZo1nl8zd+5cAECHDh08X2/cuLHkl156CQBw+vRpOVa+fPlCj7Mo+PTTTyXr63TOnDlzJEfa2mlu1bi+zhRLnJmIjLCYiIwkbZt39OhRAHmtAhC4OrlLly5BWV/U1Sug9cVB976bN2+WY+3atTMadXzTF69PnDgR9Hrt2rUL/RlnzpyRrG8wdHQr3rBhw0J/Xjg4MxEZSdqZ6fDhwwCArKwsz9fff/99yTfeeCOAwOUwBd1jQ/5Yu3at5O3btwMAqlevLsemT58uOTU1NXoDA2cmIjMsJiIjSdvmlSlTBgDQoEEDOaavk7jXgbwdcfT9OE2aNJHcunVrya7lqFevnvGI459eLqT//7/44otCva8+saF/B47+HerfRbRxZiIywmIiMpK0bV7Lli0BAPv375djO3bskKzbFK+lL/o6k+ZaQovlMkWNvo7UqFEjya7NGzBggBzTZ+X0xpLOhQsXJLutAIDAmy6deLmOx5mJyAiLichI0rZ5jr6JL5x2QS8X0vto5LeyPNmMHz9e8vr16wEAu3btkmNpaWmShw0bJrls2bIAgHfeeUeObdu2zfMzatWqBQDo37+/wYgLjzMTkRHuThSGgwcPSm7fvr1kvQec25Wnfv36URvX/4ur3Ym0xYsXAwg8kfDhhx9KvnjxYvCgQtgeecyYMQCAKVOmmIwzDNydiMhPLCYiI0l/AiIcup3Qq83vv/9+yTFo7+Jejx49Av4L5LV+QOD9YO7pIl9//bUcc5tN/pC7VhgvODMRGWExERnh2bwQuDZDb2Sor0/pB5s1bdo0egMLFLdn8yKhrx2lp6dL1r8Dd/3J4nb4MPFsHpGfeAIiBG4rX72Vsn6KQwxno4Tl9ioEAq8z3XDDDZJjMCNdFmcmIiMsJiIjbPPyoZ+CoR9s5rC184d+WJnjnmAPAE8++WQ0hxMWzkxERlhMREbY5uVjyZIlkvWt7c69994bzeEkDXfvk1a5cmXJnTt3juZwwsKZicgIi4nICJcT5aNatWqS3R7jescdvUtOuXLlojew/CXEcqIKFSoACHzahbs9Hch7ykiMcTkRkZ94AkLRfxt63Uqtn9kUJ7NRUvjRj34U6yGEhDMTkREWE5ERtnmK3qvN61ZptxsO+a9Tp06Si8rPnTMTkREWE5ERXmdKHAlxnamI4HUmIj+xmIiMsJiIjLCYiIywmIiMsJiIjLCYiIywmIiMsJiIjLCYiIxw1TgVKTt37pTsHjL38MMPy7EYPN9WcGYiMsKZieKevrdswIABkn/84x8DADIzM6M+Ji+cmYiMsJiIjPB+JgPjx4+XXKVKFQDA8OHDoz2MhL2fST/NPjU1VfKCBQsAALm5uXKsdOnS0RgS72ci8hOLicgIz+ZF6PTp05LnzZsnedSoUbEYTsJ5++23Ja9evVrye++9J9k96zZKrV2BODMRGWExERlhmxehEydOSNZPZtBPyqDIDRs2THLz5s0lt27dOhbDCQlnJiIjnJki5PVoTgBo2LBhlEeSWBYtWgQA+OSTT+TYsWPHYjWcsHBmIjLCYiIywjYvQrNmzYr1EBKS+7nqEznVq1eP1XDCwpmJyAiLicgI27wwnDt3TvKePXskt2jRQnL9+vWjOaSEcPz4ccnbtm0DEPjguaKCMxOREc5MYdiwYYPk7Oxsyf3794/FcBLGpEmTJFeqVAkA0LRp0xiNJnKcmYiMsJiIjLDNC8Mbb7whOSUlRfJjjz0Wi+EUaXqh8MKFCyVPmzYNAFCxYsWoj6mwODMRGWExERlhmxeG119/XXLNmjUlc6V4+NLT0yXr60zNmjUL+tqsrCzJ58+fl1y2bFkAeTtCxRpnJiIjLCYiI2zzQjBnzhwAgS0GL9QWzubNmyW3adNGcuPGjQEAL7/8shwbO3asZL0rlDvjN3nyZDk2ePBg+8GGiDMTkRHOTCFw2/CWKlVKjv3sZz+L1XCKrEOHDklesWKF5KlTp0ru3r07AGDfvn1yTM9S2uHDhwEAI0aMkGP16tWT/NOf/rRwAw4TZyYiIywmIiNs8/Kxd+9eye4fy64FAbg/XiT0NscXL16U3LFjR8nuqSz6AWYPPvig5/u5h6CNGzdOjuk9DKONMxORERYTkRG2eflYunSp5G+//RYA0KdPn1gNJyHoZUGaXo7ltj8ePXp0xO8XK5yZiIxwZsqHfg5Q1apVAQA33nhjrIaTEHbu3Cm5Ro0akvX1u3D87W9/C/r+bt26RTi6wuPMRGSExURkhG2esnbtWskbN26UPHDgQADAtddeG+0hJZTKlStL1lselyxZMuT32L17t2S3DOnpp5+WY7Vq1SrMEAuFMxORERYTkRG2ecqWLVskX7p0STJXiNto27at5Pnz50vOycmRnJaWFvR9U6ZMkay3Drjttt
sAABMmTDAdZ6Q4MxEZYTERGWGbp/z73/+WXKdOHcm6PaHIXXnllZ7He/bsKfmuu+4CEPgUjE2bNknu16+f5GeeeQYAUKJEfPwx5sxEZCQ+SjqGMjIyJOvtj/UmHqmpqVEdU6Lq3bu3ZP00db2HnnvSyJ133inHli9fLvknP/mJjyMsHM5MREZYTERGkr7Ne/vttyXrffEqVKgQi+EkNH2iQO91p3NRxpmJyAiLichI0rd5LVu2jPUQKEFwZiIywmIiMlLMbfoXZ+JyUHGuWITfx591+Dx/1pyZiIywmIiMsJiIjLCYiIywmIiMsJiIjLCYiIywmIiMsJiIjLCYiIywmIiMsJiIjLCYiIywmIiMsJiIjLCYiIywmIiMsJiIjLCYiIywmIiMJP2+efmZN2+e5A8++CDo9RdeeEFysWLB+2u0atVKsn7WUH7PKKLI3XPPPZJXr14tediwYZJnzpzp+zg4MxEZYTERGWGbl49Vq1ZJ1g9Bc3Rr59Xm7d69W/KcOXMkjxgxwmqI5EH/Lj766KOofjZnJiIjLCYiI2zz8qEfwHXvvfcCCGz9tG3btkn+z3/+E/T69ddfbzw6AoB9+/YBANatWxfjkXyPMxOREW7cH6EzZ85I7tixo2T3t6X23XffRWNISbdxf69evQAAixYt8nxdX1saOnSo5Udz434iP7GYiIzwBEQYdGtXvnx5yV7XmX79619HZUzJRrfRy5cvD3q9bNmykjt37hyVMTmcmYiMsJiIjLDNC8Hx48cBAN27d5dj+S0natu2LQBgzJgxURpd4jty5Ijk/v37S87NzQ362kaNGkm+4YYb/B3YD3BmIjLCYiIywjYvH661A4ApU6YAAN5//33Pr61SpUrQ16ampvo4uuTy1VdfSd61a1fQ602bNpW8bNmyqIzJC2cmIiNcTpSPfv36Sda3sDv655aSkiK5WrVqQV+7ZMkSyfp2dmMJtZxow4YNku+8807JXn9e9QLkLl26+Duw73E5EZGfWExERngCQtH3IqWnp1/2a3W7oa93eN3PtGnTJsk+tnkJ4dKlSwDyTuQA3q0dANSpUwcA0KFDB/8HFgLOTERGWExERtjmKWlpaZLbtWsneevWrZf9Pq9V4+G8Tnmef/55AMA///lPz9f172jx4sUAgHLlyvk/sBBwZiIywutM+fj0008lnzhx4rJfO2HCBMl6K2Tn6NGjkmvWrGkwOk9F9jrT/v37Jd9xxx0AgOzsbDl29dVXS9abp1xzzTVRGJ0nXmci8hOLicgIT0Dko0GDBpd9Xd/CrlsSLz62dkXWjh07JLt9CQHvn+UDDzwgOYatXYE4MxEZYTERGeHZvAjpZUF79uyRXKZMGQDAwoUL5dh9990XjSHF/dm8s2fPSr7uuuskZ2VlBX2t/pktXbpUcvHicfH3P8/mEfmJxURkhGfzwvDWW29J1q2dXi7k2pMotXZFwvnz5wEAjzzyiBzzau003QbGSWtXoKIxSqIigDNTCNyiyyFDhni+rjdUGTRoUFTGVJRs3LgRAPDmm28W+LXDhw8HAEycONHPIfmCMxORERYTkZGEb/NmzJghWZ8o6N27N4DQlvq4Nk9v06u59wICH3yWzPQJhvHjx1/2a/X9SH379gUAVKhQwZ+B+YgzE5ERFhORkYRcTqRbu5EjR0rWbZ5bFb5+/Xo5VqNGDcm//e1vJT/zzDNBn1G7dm3J+j0KWm3uo5gvJ8rMzJR89913S967d+9lv08/tKyIXJ/jciIiP7GYiIwkZJun92+4/fbbJeunKTi6LdNLWPT+1V4yMjI83yOGYt7m9ejRQ3JBF2gff/xxyTNnzpRcqlQpq+H4iW0ekZ8ScmbSDhw4IFn/49ZrG2P9syjoCepxuNwl5jOT3tLY69pSw4YNJeuZvQjizETkJxYTkZGEX06kH9GoH+H49NNPAwDmzJnj+X21atWS7FqWX/ziF34MMWHkdyKmbt26AAKvJyUizkxERlhMREYS/mxeEon52bwkwrN5RH5iMREZYTERGWExERlhMREZYTERGWExERlhMREZYTERGWExERlhMREZYTERGWExERlhMREZYTERGWExERlhMREZYTERGWExERlhMREZSfh98yj22rRpAwA4duyYHNOb/Outkrdu3QoAOHXqVJRGZ4czE5ERFhOREe6blzjidt+8m2++GQCwY8eOAr+2ePHv/34vW7asHNNPsL/rrruCvufnP/+55EqVKkU8zjBw3zwiP7GYiIzwbB75rmXLlgBCa/MuXboEADhz5owce+uttySvXLky6HvS0tIk9+nTJ+JxFhZnJiIjLCYiI0nf5ul2Qj//Vj+Ya+7cuUHfd8stt0jWD1QrXbo0AGDgwIFyrGLFijaDLaKmT58OANi3b58ccxdnLXz88cdm71UYnJmIjCT9dabBgwdLfvnll0P+voKezF6tWjXJ+jrJL3/5S8lVqlQBEDizFULcXmdyJx70NaLTp097fq17InudOnXkWH4/6zFjxgAA2rVrJ8dcZ+AzXmci8hOLichI0rd5H330keT82ry1a9cCAD7//HM5VlCblx/9fRUqVAAA1K5dW47pVqhbt26SdauYj7ht8x588EEAwGuvveb5eosWLSQvW7YMQODPJA6xzSPyE4uJyEjSt3laTk6O5HHjxkmePXt20Ne++OKLknVLcvDgQQBAenq6HDt+/Ljkr776SnI47aFbZnMZcdvmFbRqXP+sHnroIb+HY4FtHpGfWExERpJ+OdH69esljxgxQrI+y+fasVGjRsmx7t27S77qqquC3nfkyJGSv/zyS8n6jGAi27Bhg+RDhw4Fva5/Zu3bt4/KmPzGmYnISFKdgLhw4YLkzZs3AwDuu+8+OZabmyvZLfUBgCeeeAJA4FKgqlWr+jHEwoirExCNGzeW/MknnwS9XrlyZclvvPGG5Jtuuumy73vllVdKDucEjjGegCDyE4uJyEhStXkbN26UfMcdd3z/QfksC5o5c6bkIUOG+DEca0WqzQuH/h3pEzsTJkwAAJQvX75Q7x8BtnlEfmIxERlJqjbvs88+k+xuKNNLfXSbV6JE3iW4Bg0aBL3X/v37/RhiYcRVm6fPfP75z38Oer1Tp06SdfvtpaAV+k2aNJHsVvgDQM2aNUMZaiTY5hH5KalmJr0Swd038+6778qxDz/8UPLXX38tOSsrK+i99CYp7h/CQN5Wvfq29SiJq5np4sWLkt3PWt+rlZKSIllf3/OiFx2/8sorkvV1Q0dvq7xixQrJt956ayjDDhVnJiI/sZiIjCRVmxeOw4cPS3Z7vA0aNEiO6Xuf9D+KmzdvDgDYvXu3zyMMEldtnua2N27VqpUci/TkgL4fbNq0aQCA+fPnyzH9e3G/CwBYunQpAKBu3boRfe4PsM0j8hOLicgI27wwnDx5UvK6desk6/ucXBvSs2dPObZgwYIojC5+2zx323rfvn3lmN78s7D0Gb7HH3/c82vc7k76PqtCYJtH5CcWE5GRpL9tPRz6ealdunSRvH37dslutfmRI0fk2NGjRyXH+eaKvtK7P
PXq1UtyYW+01Ddy5qdHjx6F+oxQcGYiMpJQM5O7DqSXn3htdmJBL5fxeqKDvsdGL5pNZvq+Jr20aOjQoZIfe+yxkN8vOzsbgPe+hkDgkzS8ntJujTMTkREWE5GRhOo/3Krv2267TY7pf3hOnTo16Hv0oze1b775RrLevtd56aWXJHvdBzV8+HA5VqNGjYKGntC8TjDoR3LqNk+v9Hb0iu9t27ZJdltR79q1y/Nz9dIhvauRXzgzERlhMREZSajlRO62dH0NIyMjQ3Lbtm2DvkffMh3pQ8t0O/GrX/0KQGDrEiVxu5zIa4lVOE9bD+fBcvqmzEWLFknu0KFDyJ8XAi4nIvJTQs1MzqlTpyRv2bJF8vLlyyW7W6nD+VtPPxW9a9eukh955BHJMdjDzYnbmcnJzMyUrB98sGfPHslet6IX9Dtq3bq1ZPd7BXzdOoAzE5GfWExERhKyzUtScd/m5cfd1g7k3e6vryetWbNGsm7znnrqKQCBz9VKS0vzbZwK2zwiP7GYiIywzUscRbbNK4LY5hH5icVEZITFRGSExURkhMVEZITFRGSExURkhMVEZITFRGSExURkhMVEZITFRGSExURkhMVEZITFRGSExURkhMVEZITFRGSExURkhMVEZITFRGSExURkhMVEZCShHsNZWF988YXkwYMHS9bb9zoDBw6U3K1bN8nuqd5XXHGFH0NMGN9++61k/cjTP/zhDwCAyZMnyzG9t+OoUaMk33PPPQCANm3ayLGSJUvaDzZEnJmIjLCYiIwk/fbI+kndd999t+Ts7OyI3u/VV18FADz66KOFGlcEitT2yHPnzpXcv3//oNfLlSsnWf8ZPXfuXNDX6p/19OnTJfv4RAxuj0zkJxYTkZGkb/Ouv/56yQcPHiz0+1WqVAkAsGrVKjmmzzb5KO7bvKlTp0qeMWOG5OPHj0t+4YUXAABNmjSRY+5p7UDg84O9VK9eXfKmTZskN2jQIIIR54ttHpGfWExERpL+ou3FixdN3+/kyZMAgGnTpsmxN9980/QzipqMjAwAeS0cENja9e7dW3L79u0BBF4Ud8+5BQKfaVuvXj0AQE5OjhzLzMyUnJWVJdm4zfPEmYnISNLPTE888YRkt5QlFDNnzpSsl7joJUn0vdmzZwMIvHbnll0BQIsWLSTfcsstAIDc3Fw5dvvtt0sePXq05KZNmwIAtm/fLsd69uwp+U9/+lPQZ+jrV9Y4MxEZYTERGUn6Nk+3aDqHY9asWZLZ5gXzun7nrscBwNixYyW7kwp6idH48eMv+/66JaxatarkBQsWSO7evTsAoEePHqEOO2ycmYiMsJiIjCR9mxcO3U7oM1NHjhyJxXCKjEaNGgEA1qxZI8dee+01yfXr15e8evVqAEDDhg1Dfv9rr73W83179eoledGiRQDY5hEVCQk5M+l/8FauXFmy/kfvgQMHAAAvvviiHDt79qzk4sXz/p5xi2Hd35oAcOLECcmHDx82GHVi0Ssc9AoGx2s2AsKbkbx06tTJ7L3CxZmJyAiLichIQrZ5bvkKACxevFhymTJlJH/55ZcAgDNnzvgyBr14MxktWbJE8rvvvhv0euvWrSVHox1btmwZAODQoUNy7Oqrrzb9DM5MREZYTERGEqrNc9d+3nnnHTkW7WtAbjmMvsaRLPQZTr3Eypk0aZLkJ598MhpDEu4a4XfffefbZ3BmIjLCYiIyklBtntsR6IMPPojZGI4dOwYA2LJlixxr165drIYTVXrV/f79+4Ne1zf5lS5d2vfx6J23orELF2cmIiMJNTPFA/cPXb1EJllmJk1vfHLdddcBAK655pqYjUFnv3BmIjLCYiIyklBtnluJXKJE3v9WpPvi6W2TP/vss7C/361KJ6BmzZoAgFq1avn+WefPn5esH6Lmli9Vq1bNt8/mzERkhMVEZCSh2ryOHTsCCHzG7M6dOy/7Pf369ZM8dOhQyaVKlZJ84cKFoO/7y1/+Ivnvf/+75H379oUx4uTgbrrUN1/6tRnkypUrJevf/QMPPAAAKF++vC+fC3BmIjLDYiIyklBtnuN2orGSmpoadEzvea0fZta5c2cAecuKgMCzStFYRhNv9uzZAwDYu3evHOvQoYPZ++s9P/Te8dHGmYnISNI/htOau6ainxOk93LTj5d0t9TrEybNmjWL9KNj/hhO/TQKtx0xkDdL33///XIsPT1dcqQnBdyT1wcMGCDHFi5cKFlfU3K/g1tvvTWiz/oBPoaTyE8sJiIjbPOMPfXUUwCAZ599Vo6lpKRIrlKlimS3Q9K8efPk2EMPPRTpR8e8zdMmTpwoefLkyUGv65bvr3/9q+SCrj/p5ULuGqFuo/VTMF5//XXJRu2dwzaPyE8sJiIjCXmdKZbcU8J166H3InetXaLTDytze43rpT5Lly6V3LdvX8ldu3YNei+9nOu5556T7H6uaWlpcmzEiBGSjVu7AnFmIjLCExA+mT59uuQxY8Z4fs0VV1wBAFixYoUc69KlS6QfGVcnIDQ3s+iZQl+TKoj+M6pvP3erTSZMmCDHLFdWXAZPQBD5icVEZIRtnk9ycnIkjxs3TrJ+Qsejjz4KAHj11VctPjJu2zxHXyN65ZVXJP/mN7+RfPLkyaDvK1mypGS9wNgtWWrVqpXpOEPANo/ITywmIiNs8xJH3Ld5CYRtHpGfWExERlhMREZYTERGWExERlhMREZYTERG4vV+Jv+fTEUOf9ZGODMRGWExERlhMREZYTERGWExERlhMREZYTERGWExERlhMREZYTERGWExERlhMREZYTERGWExERlhMREZYTERGWExERlhMREZYTERGWExERlhMREZYTERGWExERlhMREZYTERGWExERn5P07Y2I3HjieNAAAAAElFTkSuQmCC\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "plt.figure(figsize=(3, 3 * batch_size))\n", "plt.subplot(121)\n", "plt.imshow(X_batch[:,0].reshape(28 * batch_size, 28), cmap=\"binary\", interpolation=\"nearest\")\n", "plt.axis('off')\n", "plt.subplot(122)\n", "plt.imshow(X_batch[:,1].reshape(28 * batch_size, 28), cmap=\"binary\", interpolation=\"nearest\")\n", "plt.axis('off')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And let's look at the labels (0 means \"different\", 1 means \"same\"):" ] }, { "cell_type": "code", "execution_count": 177, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[1],\n", " [0],\n", " [0],\n", " [1],\n", " [0]])" ] }, "execution_count": 177, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y_batch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Perfect!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 10.3.\n", "_Exercise: train the DNN on this training set. For each image pair, you can simultaneously feed the first image to DNN A and the second image to DNN B. The whole network will gradually learn to tell whether two images belong to the same class or not._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's generate a test set composed of many pairs of images pulled from the MNIST test set:" ] }, { "cell_type": "code", "execution_count": 178, "metadata": {}, "outputs": [], "source": [ "X_test1, y_test1 = generate_batch(X_test, y_test, batch_size=len(X_test))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now, let's train the model. There's really nothing special about this step, except for the fact that we need a fairly large `batch_size`, otherwise the model fails to learn anything and ends up with an accuracy of 50%:" ] }, { "cell_type": "code", "execution_count": 179, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Train loss: 0.69103277\n", "0 Test accuracy: 0.542\n", "1 Train loss: 0.6035354\n", "2 Train loss: 0.54946035\n", "3 Train loss: 0.47047246\n", "4 Train loss: 0.4060757\n", "5 Train loss: 0.38308156\n", "5 Test accuracy: 0.824\n", "6 Train loss: 0.39047274\n", "7 Train loss: 0.3390794\n", "8 Train loss: 0.3210671\n", "9 Train loss: 0.31792685\n", "10 Train loss: 0.24494292\n", "10 Test accuracy: 0.8881\n", "11 Train loss: 0.2929235\n", "12 Train loss: 0.23225449\n", "13 Train loss: 0.23180929\n", "14 Train loss: 0.19877923\n", "15 Train loss: 0.20065464\n", "15 Test accuracy: 0.9203\n", "16 Train loss: 0.19700499\n", "17 Train loss: 0.18893136\n", "18 Train loss: 0.19965452\n", "19 Train loss: 0.24071647\n", "20 Train loss: 0.18882024\n", "20 Test accuracy: 0.9367\n", "21 Train loss: 0.12419197\n", "22 Train loss: 0.14013417\n", "23 Train loss: 0.120789476\n", "24 Train loss: 0.15721135\n", "25 Train loss: 0.11507861\n", "25 Test accuracy: 0.948\n", "26 Train loss: 0.13891116\n", "27 Train loss: 0.1526081\n", "28 Train loss: 0.123436704\n", "<<50 more lines>>\n", "70 Test accuracy: 0.9743\n", "71 Train loss: 0.019732744\n", "72 Train loss: 0.039464083\n", "73 Train loss: 0.04187814\n", "74 Train loss: 0.05303406\n", "75 Train loss: 0.052625064\n", "75 Test accuracy: 0.9756\n", "76 Train loss: 0.038283084\n", "77 Train loss: 0.026332883\n", "78 Train loss: 0.07060841\n", "79 Train loss: 0.03239444\n", "80 Train loss: 0.03136283\n", "80 Test accuracy: 0.9731\n", "81 Train loss: 0.04390848\n", "82 Train loss: 0.015268046\n", "83 
Train loss: 0.04875638\n", "84 Train loss: 0.029360933\n", "85 Train loss: 0.0418443\n", "85 Test accuracy: 0.9759\n", "86 Train loss: 0.018274888\n", "87 Train loss: 0.038872603\n", "88 Train loss: 0.02969683\n", "89 Train loss: 0.020990817\n", "90 Train loss: 0.045234833\n", "90 Test accuracy: 0.9769\n", "91 Train loss: 0.039237432\n", "92 Train loss: 0.031329047\n", "93 Train loss: 0.033414133\n", "94 Train loss: 0.025883088\n", "95 Train loss: 0.019567214\n", "95 Test accuracy: 0.9765\n", "96 Train loss: 0.020650322\n", "97 Train loss: 0.0339851\n", "98 Train loss: 0.047079965\n", "99 Train loss: 0.03125228\n" ] } ], "source": [ "n_epochs = 100\n", "batch_size = 500\n", "\n", "with tf.Session() as sess:\n", "    init.run()\n", "    for epoch in range(n_epochs):\n", "        for iteration in range(len(X_train1) // batch_size):\n", "            X_batch, y_batch = generate_batch(X_train1, y_train1, batch_size)\n", "            loss_val, _ = sess.run([loss, training_op], feed_dict={X: X_batch, y: y_batch})\n", "        print(epoch, \"Train loss:\", loss_val)\n", "        if epoch % 5 == 0:\n", "            acc_test = accuracy.eval(feed_dict={X: X_test1, y: y_test1})\n", "            print(epoch, \"Test accuracy:\", acc_test)\n", "\n", "    save_path = saver.save(sess, \"./my_digit_comparison_model.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All right, we reach 97.6% accuracy on this digit comparison task. That's not too bad: this model knows a thing or two about comparing handwritten digits!\n", "\n", "Let's see if some of that knowledge can be useful for the regular MNIST classification task." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 10.4.\n", "_Exercise: now create a new DNN by reusing and freezing the hidden layers of DNN A and adding a softmax output layer on top with 10 neurons. Train this network on split #2 and see if you can achieve high performance despite having only 500 images per class._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create the model; it is pretty straightforward. There are many ways to freeze the lower layers, as explained in the book. In this example, we chose to use the `tf.stop_gradient()` function. Note that we need one `Saver` to restore the pretrained DNN A, and another `Saver` to save the final model:" ] }, { "cell_type": "code", "execution_count": 180, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_outputs = 10\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "\n", "dnn_outputs = dnn(X, name=\"DNN_A\")\n", "frozen_outputs = tf.stop_gradient(dnn_outputs)\n", "\n", "logits = tf.layers.dense(frozen_outputs, n_outputs, kernel_initializer=he_init)\n", "Y_proba = tf.nn.softmax(logits)\n", "\n", "xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", "loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "optimizer = tf.train.MomentumOptimizer(learning_rate, momentum, use_nesterov=True)\n", "training_op = optimizer.minimize(loss)\n", "\n", "correct = tf.nn.in_top_k(logits, y, 1)\n", "accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n", "\n", "init = tf.global_variables_initializer()\n", "\n", "dnn_A_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=\"DNN_A\")\n", "restore_saver = tf.train.Saver(var_list={var.op.name: var for var in dnn_A_vars})\n", "saver = tf.train.Saver()" ] },
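{ "cell_type": "markdown", "metadata": {}, "source": [ "By the way, `tf.stop_gradient()` is not the only option: we could instead leave the graph untouched and simply pass the optimizer the list of variables it is allowed to train. Here is a minimal sketch of that variant; note that the `\"dense\"` scope name is an assumption (the default name that `tf.layers.dense()` picks when none is given), so adjust it if the output layer is named differently:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Alternative sketch: freeze DNN A by optimizing only the output layer's variables,\n", "# instead of using tf.stop_gradient(). The \"dense\" scope name is an assumption\n", "# (the default chosen by tf.layers.dense() when no name is given).\n", "output_layer_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=\"dense\")\n", "training_op = optimizer.minimize(loss, var_list=output_layer_vars)" ] },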
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now on to training! We first initialize all variables (including the variables in the new output layer), and only then restore the pretrained DNN A; doing it in this order ensures that the initializer does not overwrite the pretrained weights. Next, we just train the model on the small MNIST dataset (containing just 5,000 images):" ] }, { "cell_type": "code", "execution_count": 181, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from ./my_digit_comparison_model.ckpt\n", "0 Test accuracy: 0.9455\n", "10 Test accuracy: 0.9634\n", "20 Test accuracy: 0.9659\n", "30 Test accuracy: 0.9656\n", "40 Test accuracy: 0.9655\n", "50 Test accuracy: 0.9656\n", "60 Test accuracy: 0.9655\n", "70 Test accuracy: 0.9656\n", "80 Test accuracy: 0.9654\n", "90 Test accuracy: 0.9654\n" ] } ], "source": [ "n_epochs = 100\n", "batch_size = 50\n", "\n", "with tf.Session() as sess:\n", "    init.run()  # initialize all variables, including the new output layer\n", "    restore_saver.restore(sess, \"./my_digit_comparison_model.ckpt\")  # then load DNN A\n", "\n", "    for epoch in range(n_epochs):\n", "        rnd_idx = np.random.permutation(len(X_train2))\n", "        for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size):\n", "            X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices]\n", "            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", "        if epoch % 10 == 0:\n", "            acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})\n", "            print(epoch, \"Test accuracy:\", acc_test)\n", "\n", "    save_path = saver.save(sess, \"./my_mnist_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well, 96.5% accuracy is not the best MNIST model we have trained so far, but recall that we are only using a small training set (just 500 images per digit). Let's compare this result with the same DNN trained from scratch, without using transfer learning:" ] }, { "cell_type": "code", "execution_count": 182, "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", "\n", "n_inputs = 28 * 28 # MNIST\n", "n_outputs = 10\n", "\n", "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", "y = tf.placeholder(tf.int32, shape=(None), name=\"y\")\n", "\n", "dnn_outputs = dnn(X, name=\"DNN_A\")\n", "\n", "logits = tf.layers.dense(dnn_outputs, n_outputs, kernel_initializer=he_init)\n", "Y_proba = tf.nn.softmax(logits)\n", "\n", "xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", "loss = tf.reduce_mean(xentropy, name=\"loss\")\n", "\n", "optimizer = tf.train.MomentumOptimizer(learning_rate, momentum, use_nesterov=True)\n", "training_op = optimizer.minimize(loss)\n", "\n", "correct = tf.nn.in_top_k(logits, y, 1)\n", "accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n", "\n", "init = tf.global_variables_initializer()\n", "\n", "dnn_A_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=\"DNN_A\")\n", "restore_saver = tf.train.Saver(var_list={var.op.name: var for var in dnn_A_vars})\n", "saver = tf.train.Saver()" ] }, { "cell_type": "code", "execution_count": 183, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Test accuracy: 0.8694\n", "10 Test accuracy: 0.9276\n", "20 Test accuracy: 0.9299\n", "30 Test accuracy: 0.935\n", "40 Test accuracy: 0.942\n", "50 Test accuracy: 0.9435\n", "60 Test accuracy: 0.9442\n", "70 Test accuracy: 0.9447\n", "80 Test accuracy: 0.9448\n", "90 Test accuracy: 0.945\n", "100 Test accuracy: 0.945\n", "110 Test accuracy: 0.9458\n", "120 Test accuracy: 0.9456\n", "130 Test accuracy: 0.9458\n", "140 Test accuracy: 0.9458\n" ] } ], "source": [ "n_epochs = 150\n", "batch_size = 50\n", "\n", "with tf.Session() as sess:\n", "    init.run()\n", "\n", "    for epoch in range(n_epochs):\n", "        rnd_idx = np.random.permutation(len(X_train2))\n", "        for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size):\n", "            X_batch, y_batch = X_train2[rnd_indices], y_train2[rnd_indices]\n", "            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n", "        if epoch % 10 == 0:\n", "            acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})\n", "            print(epoch, \"Test accuracy:\", acc_test)\n", "\n", "    save_path = saver.save(sess, \"./my_mnist_model_final.ckpt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Only 94.6% accuracy... So transfer learning helped us reduce the error rate from 5.4% to 3.5% (a relative error reduction of over 35%). Moreover, the model using transfer learning reached over 96% accuracy within about 10 epochs.\n", "\n", "Bottom line: transfer learning does not always work, but when it does it can make a big difference. So try it out!" ] },
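{ "cell_type": "markdown", "metadata": {}, "source": [ "One last side note: since the lower layers are frozen, their output for each training image never changes, so we could have sped up training by running the training set through them just once, caching the result, and feeding the cached activations instead of the raw images. Here is a minimal sketch of this idea; it assumes the transfer-learning graph (the one defining `frozen_outputs`) is the current default graph, and `hidden_cache` is just a name introduced here for illustration:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: cache the frozen layers' output for the whole training set, then train\n", "# the output layer on these cached activations. Assumes the transfer-learning\n", "# graph (the one with frozen_outputs) is the default graph; hidden_cache is a\n", "# hypothetical name introduced here.\n", "with tf.Session() as sess:\n", "    init.run()\n", "    restore_saver.restore(sess, \"./my_digit_comparison_model.ckpt\")\n", "    hidden_cache = sess.run(frozen_outputs, feed_dict={X: X_train2})\n", "    for epoch in range(n_epochs):\n", "        rnd_idx = np.random.permutation(len(X_train2))\n", "        for rnd_indices in np.array_split(rnd_idx, len(X_train2) // batch_size):\n", "            # feed the cached activations directly, bypassing the frozen layers\n", "            sess.run(training_op, feed_dict={frozen_outputs: hidden_cache[rnd_indices],\n", "                                             y: y_train2[rnd_indices]})" ] },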
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" }, "nav_menu": { "height": "360px", "width": "416px" }, "toc": { "navigate_menu": true, "number_sections": true, "sideBar": true, "threshold": 6, "toc_cell": false, "toc_section_display": "block", "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 1 }