{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Introduction to TensorFlow\n", "\n", "Welcome to this week's programming assignment! Up until now, you've always used Numpy to build neural networks, but this week you'll explore a deep learning framework that allows you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. TensorFlow 2.3 has made significant improvements over its predecessor, some of which you'll encounter and implement here!\n", "\n", "By the end of this assignment, you'll be able to do the following in TensorFlow 2.3:\n", "\n", "* Use `tf.Variable` to modify the state of a variable\n", "* Explain the difference between a variable and a constant\n", "* Apply TensorFlow decorators to speed up code\n", "* Train a Neural Network on a TensorFlow dataset\n", "\n", "Programming frameworks like TensorFlow not only cut down on time spent coding, but can also perform optimizations that speed up the code itself. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Table of Contents\n", "- [1- Packages](#1)\n", " - [1.1 - Checking TensorFlow Version](#1-1)\n", "- [2 - Basic Optimization with GradientTape](#2)\n", " - [2.1 - Linear Function](#2-1)\n", " - [Exercise 1 - linear_function](#ex-1)\n", " - [2.2 - Computing the Sigmoid](#2-2)\n", " - [Exercise 2 - sigmoid](#ex-2)\n", " - [2.3 - Using One Hot Encodings](#2-3)\n", " - [Exercise 3 - one_hot_matrix](#ex-3)\n", " - [2.4 - Initialize the Parameters](#2-4)\n", " - [Exercise 4 - initialize_parameters](#ex-4)\n", "- [3 - Building Your First Neural Network in TensorFlow](#3)\n", " - [3.1 - Implement Forward Propagation](#3-1)\n", " - [Exercise 5 - forward_propagation](#ex-5)\n", " - [3.2 Compute the Cost](#3-2)\n", " - [Exercise 6 - compute_cost](#ex-6)\n", " - [3.3 - Train the Model](#3-3)\n", "- [4 - Bibliography](#4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 1 - Packages" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Enabling eager execution\n", "INFO:tensorflow:Enabling v2 tensorshape\n", "INFO:tensorflow:Enabling resource variables\n", "INFO:tensorflow:Enabling tensor equality\n", "INFO:tensorflow:Enabling control flow v2\n" ] } ], "source": [ "import h5py\n", "import numpy as np\n", "import tensorflow as tf\n", "import matplotlib.pyplot as plt\n", "from tensorflow.python.framework.ops import EagerTensor\n", "from tensorflow.python.ops.resource_variable_ops import ResourceVariable\n", "import time" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 1.1 - Checking TensorFlow Version \n", "\n", "You will be using v2.3 for this assignment, for maximum speed and efficiency." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'2.6.0-dev20210419'" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tf.__version__" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 2 - Basic Optimization with GradientTape\n", "\n", "The beauty of TensorFlow 2 is in its simplicity. Basically, all you need to do is implement forward propagation through a computational graph. TensorFlow will compute the derivatives for you, by moving backwards through the graph recorded with `GradientTape`. 
All that's left for you to do then is specify the cost function and optimizer you want to use!\n", "\n", "When writing a TensorFlow program, the main object you'll work with and transform is the `tf.Tensor`. These tensors are the TensorFlow equivalent of Numpy arrays, i.e. multidimensional arrays of a given data type that also carry information about the computational graph.\n", "\n", "Below, you'll use `tf.Variable` to store the state of your variables. A variable is created only once; its initial value defines the variable's shape and type. Additionally, the `dtype` argument of `tf.Variable` can be set to convert the data to that type. If none is specified, the datatype is kept when the initial value is a Tensor; otherwise `convert_to_tensor` decides. It's generally best for you to specify the `dtype` directly, so nothing breaks!\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here you'll load a TensorFlow dataset created from an HDF5 file, which you can use in place of a Numpy array to store your datasets. You can think of this as a TensorFlow data generator! \n", "\n", "You will use the hand signs dataset, which is composed of images of shape 64x64x3." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "train_dataset = h5py.File('datasets/train_signs.h5', \"r\")\n", "test_dataset = h5py.File('datasets/test_signs.h5', \"r\")" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "<HDF5 dataset \"train_set_x\": shape (1080, 64, 64, 3), type \"|u1\">" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_dataset['train_set_x']" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "# tf.data.Dataset.from_tensor_slices( list_or_numpy_array ) creates TensorFlow Datasets\n", "x_train = tf.data.Dataset.from_tensor_slices(train_dataset['train_set_x'])\n", "y_train = tf.data.Dataset.from_tensor_slices(train_dataset['train_set_y'])\n", "\n", "x_test = tf.data.Dataset.from_tensor_slices(test_dataset['test_set_x'])\n", "y_test = tf.data.Dataset.from_tensor_slices(test_dataset['test_set_y'])" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensorflow.python.data.ops.dataset_ops.TensorSliceDataset" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "type(x_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since TensorFlow Datasets are generators, you can't directly access their contents unless you iterate over them in a for loop, or explicitly create a Python iterator using `iter` and consume its elements using `next`. You can also inspect the `shape` and `dtype` of each element using the `element_spec` attribute."
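, "\n", "For example, a quick way to peek at the first couple of elements without writing a full loop (a small sketch using the standard `tf.data` method `take`):\n", "\n", "```python\n", "for image in x_train.take(2):  # yields at most the first 2 elements\n", "    print(image.shape, image.dtype)\n", "```"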
] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TensorSpec(shape=(64, 64, 3), dtype=tf.uint8, name=None)\n" ] } ], "source": [ "print(x_train.element_spec)" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[[227 220 214]\n", " [227 221 215]\n", " [227 222 215]\n", " ...\n", " [232 230 224]\n", " [231 229 222]\n", " [230 229 221]]\n", "\n", " [[227 221 214]\n", " [227 221 215]\n", " [228 221 215]\n", " ...\n", " [232 230 224]\n", " [231 229 222]\n", " [231 229 221]]\n", "\n", " [[227 221 214]\n", " [227 221 214]\n", " [227 221 215]\n", " ...\n", " [232 230 224]\n", " [231 229 223]\n", " [230 229 221]]\n", "\n", " ...\n", "\n", " [[119 81 51]\n", " [124 85 55]\n", " [127 87 58]\n", " ...\n", " [210 211 211]\n", " [211 212 210]\n", " [210 211 210]]\n", "\n", " [[119 79 51]\n", " [124 84 55]\n", " [126 85 56]\n", " ...\n", " [210 211 210]\n", " [210 211 210]\n", " [209 210 209]]\n", "\n", " [[119 81 51]\n", " [123 83 55]\n", " [122 82 54]\n", " ...\n", " [209 210 210]\n", " [209 210 209]\n", " [208 209 209]]], shape=(64, 64, 3), dtype=uint8)\n" ] } ], "source": [ "print(next(iter(x_train)))" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[[227 220 214]\n", " [227 221 215]\n", " [227 222 215]\n", " ...\n", " [232 230 224]\n", " [231 229 222]\n", " [230 229 221]]\n", "\n", " [[227 221 214]\n", " [227 221 215]\n", " [228 221 215]\n", " ...\n", " [232 230 224]\n", " [231 229 222]\n", " [231 229 221]]\n", "\n", " [[227 221 214]\n", " [227 221 214]\n", " [227 221 215]\n", " ...\n", " [232 230 224]\n", " [231 229 223]\n", " [230 229 221]]\n", "\n", " ...\n", "\n", " [[119 81 51]\n", " [124 85 55]\n", " [127 87 58]\n", " ...\n", " [210 211 211]\n", " [211 212 210]\n", " [210 211 210]]\n", "\n", " [[119 79 51]\n", " [124 84 55]\n", " [126 85 56]\n", " ...\n", " [210 211 210]\n", " [210 211 210]\n", " [209 210 209]]\n", "\n", " [[119 81 51]\n", " [123 83 55]\n", " [122 82 54]\n", " ...\n", " [209 210 210]\n", " [209 210 209]\n", " [208 209 209]]], shape=(64, 64, 3), dtype=uint8)\n" ] } ], "source": [ "for element in x_train:\n", " print(element)\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There's one more additional difference between TensorFlow datasets and Numpy arrays: If you need to transform one, you would invoke the `map` method to apply the function passed as an argument to each of the elements." 
] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "def normalize(image):\n", " \"\"\"\n", " Transform an image into a tensor of shape (64 * 64 * 3, 1)\n", " and normalize its components.\n", " \n", " Arguments\n", " image - Tensor.\n", " \n", " Returns: \n", " result -- Transformed tensor \n", " \"\"\"\n", " image = tf.cast(image, tf.float32) / 256.0\n", " image = tf.reshape(image, [-1,1])\n", " return image" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [], "source": [ "new_train = x_train.map(normalize)\n", "new_test = x_test.map(normalize)" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "TensorSpec(shape=(12288, 1), dtype=tf.float32, name=None)" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "new_train.element_spec" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[0.88671875]\n", " [0.859375 ]\n", " [0.8359375 ]\n", " ...\n", " [0.8125 ]\n", " [0.81640625]\n", " [0.81640625]], shape=(12288, 1), dtype=float32)\n" ] } ], "source": [ "print(next(iter(new_train)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 2.1 - Linear Function\n", "\n", "Let's begin this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. \n", "\n", "\n", "### Exercise 1 - linear_function\n", "\n", "Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, this is how to define a constant X with the shape (3,1):\n", "```python\n", "X = tf.constant(np.random.randn(3,1), name = \"X\")\n", "\n", "```\n", "Note that the difference between `tf.constant` and `tf.Variable` is that you can modify the state of a `tf.Variable` but cannot change the state of a `tf.constant`.\n", "\n", "You might find the following functions helpful: \n", "- tf.matmul(..., ...) to do a matrix multiplication\n", "- tf.add(..., ...) to do an addition\n", "- np.random.randn(...) to initialize randomly" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "397d354ecaa1a28936096002cde11279", "grade": false, "grade_id": "cell-002e5736767021c0", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# GRADED FUNCTION: linear_function\n", "\n", "def linear_function():\n", " \"\"\"\n", " Implements a linear function: \n", " Initializes X to be a random tensor of shape (3,1)\n", " Initializes W to be a random tensor of shape (4,3)\n", " Initializes b to be a random tensor of shape (4,1)\n", " Returns: \n", " result -- Y = WX + b \n", " \"\"\"\n", "\n", " np.random.seed(1)\n", " \n", " \"\"\"\n", " Note, to ensure that the \"random\" numbers generated match the expected results,\n", " please create the variables in the order given in the starting code below.\n", " (Do not re-arrange the order).\n", " \"\"\"\n", " # (approx. 
4 lines)\n", " # X = ...\n", " # W = ...\n", " # b = ...\n", " # Y = ...\n", " # YOUR CODE STARTS HERE\n", " X = tf.constant(np.random.randn(3,1))\n", " W = tf.constant(np.random.randn(4,3))\n", " b = tf.constant(np.random.randn(4,1))\n", " Y = tf.add(tf.matmul(W,X),b)\n", " \n", " # YOUR CODE ENDS HERE\n", " return Y" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "3526a7fd39649d2a6516031720e46748", "grade": true, "grade_id": "cell-b4318ea155f136ab", "locked": true, "points": 10, "schema_version": 3, "solution": false, "task": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[-2.15657382]\n", " [ 2.95891446]\n", " [-1.08926781]\n", " [-0.84538042]], shape=(4, 1), dtype=float64)\n", "\u001b[92mAll test passed\n" ] } ], "source": [ "result = linear_function()\n", "print(result)\n", "\n", "assert type(result) == EagerTensor, \"Use the TensorFlow API\"\n", "assert np.allclose(result, [[-2.15657382], [ 2.95891446], [-1.08926781], [-0.84538042]]), \"Error\"\n", "print(\"\\033[92mAll test passed\")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**: \n", "\n", "```\n", "result = \n", "[[-2.15657382]\n", " [ 2.95891446]\n", " [-1.08926781]\n", " [-0.84538042]]\n", "```" ] }, 
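{ "cell_type": "markdown", "metadata": {}, "source": [ "To see the difference between a constant and a variable in action, here is a small sketch (not graded, just for intuition): a `tf.Variable` can be updated in place with `assign`, while a `tf.constant` has no such method.\n", "\n", "```python\n", "v = tf.Variable([1.0, 2.0])\n", "v.assign([3.0, 4.0])        # OK: variables hold mutable state\n", "c = tf.constant([1.0, 2.0])\n", "# c.assign(...) would raise an AttributeError: constants are immutable\n", "```" ] }, 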
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 2.2 - Computing the Sigmoid \n", "Amazing! You just implemented a linear function. TensorFlow offers a variety of commonly used neural network functions, like `tf.sigmoid` and `tf.nn.softmax`.\n", "\n", "For this exercise, compute the sigmoid of z: cast your tensor to type `float32` using `tf.cast`, then compute the sigmoid using `tf.keras.activations.sigmoid`. \n", "\n", "\n", "### Exercise 2 - sigmoid\n", "\n", "Implement the sigmoid function below. You should use the following: \n", "\n", "- `tf.cast(\"...\", tf.float32)`\n", "- `tf.keras.activations.sigmoid(\"...\")`" ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "34072bb90c73636c7e7e4517e58c454c", "grade": false, "grade_id": "cell-038bb4b7e61dd070", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# GRADED FUNCTION: sigmoid\n", "\n", "def sigmoid(z):\n", " \n", " \"\"\"\n", " Computes the sigmoid of z\n", " \n", " Arguments:\n", " z -- input value, scalar or vector\n", " \n", " Returns: \n", " a -- (tf.float32) the sigmoid of z\n", " \"\"\"\n", " # tf.keras.activations.sigmoid requires float16, float32, float64, complex64, or complex128.\n", " \n", " # (approx. 2 lines)\n", " # z = ...\n", " # result = ...\n", " # YOUR CODE STARTS HERE\n", " z = tf.cast(z,tf.float32)\n", " a = tf.keras.activations.sigmoid(z)\n", " # YOUR CODE ENDS HERE\n", " return a\n" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "ad1c73949744ba2205a0ad0d6f395915", "grade": true, "grade_id": "cell-a04f348c3fdbc2f2", "locked": true, "points": 10, "schema_version": 3, "solution": false, "task": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "type: <class 'tensorflow.python.framework.ops.EagerTensor'>\n", "dtype: <dtype: 'float32'>\n", "sigmoid(-1) = tf.Tensor(0.26894143, shape=(), dtype=float32)\n", "sigmoid(0) = tf.Tensor(0.5, shape=(), dtype=float32)\n", "sigmoid(12) = tf.Tensor(0.9999939, shape=(), dtype=float32)\n", "\u001b[92mAll test passed\n" ] } ], "source": [ "result = sigmoid(-1)\n", "print (\"type: \" + str(type(result)))\n", "print (\"dtype: \" + str(result.dtype))\n", "print (\"sigmoid(-1) = \" + str(result))\n", "print (\"sigmoid(0) = \" + str(sigmoid(0.0)))\n", "print (\"sigmoid(12) = \" + str(sigmoid(12)))\n", "\n", "def sigmoid_test(target):\n", " result = target(0)\n", " assert(type(result) == EagerTensor)\n", " assert (result.dtype == tf.float32)\n", " assert sigmoid(0) == 0.5, \"Error\"\n", " assert sigmoid(-1) == 0.26894143, \"Error\"\n", " assert sigmoid(12) == 0.9999939, \"Error\"\n", "\n", " print(\"\\033[92mAll test passed\")\n", "\n", "sigmoid_test(sigmoid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**: \n", "\n", "| quantity | value |\n", "| --- | --- |\n", "| type | class 'tensorflow.python.framework.ops.EagerTensor' |\n", "| dtype | dtype: 'float32' |\n", "| Sigmoid(-1) | 0.2689414 |\n", "| Sigmoid(0) | 0.5 |\n", "| Sigmoid(12) | 0.999994 |" ] }, 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 2.3 - Using One Hot Encodings\n", "\n", "Many times in deep learning you will have a $Y$ vector with numbers ranging from $0$ to $C-1$, where $C$ is the number of classes. If $C$ is for example 4, then you might have the following y vector which you will need to convert like this:\n", "\n", "\n", "\n", "\n", "This is called \"one hot\" encoding, because in the converted representation, exactly one element of each column is \"hot\" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In TensorFlow, you can use one line of code: \n", "\n", "- [tf.one_hot(labels, depth, axis=0)](https://www.tensorflow.org/api_docs/python/tf/one_hot)\n", "\n", "`axis=0` indicates the new axis is created at dimension 0\n", "\n", "\n", "### Exercise 3 - one_hot_matrix\n", "\n", "Implement the function below to take one label and the total number of classes $C$, and return the one hot encoding in a column whise matrix. Use `tf.one_hot()` to do this, and `tf.reshape()` to reshape your one hot tensor! \n", "\n", "- `tf.reshape(tensor, shape)`" ] }, { "cell_type": "code", "execution_count": 45, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "44bfa91af0e57ca117ebf3acce902a28", "grade": false, "grade_id": "cell-15d9db613d8007bb", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# GRADED FUNCTION: one_hot_matrix\n", "def one_hot_matrix(label, depth=6):\n", " \"\"\"\n", "    Computes the one hot encoding for a single label\n", "    \n", "    Arguments:\n", " label -- (int) Categorical labels\n", " depth -- (int) Number of different classes that label can take\n", "    \n", "    Returns:\n", " one_hot -- tf.Tensor A single-column matrix with the one hot encoding.\n", " \"\"\"\n", " # (approx. 1 line)\n", " # one_hot = ...\n", " # YOUR CODE STARTS HERE\n", " one_hot = tf.one_hot(label,depth,axis=0)\n", " one_hot = tf.reshape(one_hot,[depth,1])\n", " \n", " # YOUR CODE ENDS HERE\n", " return one_hot" ] }, { "cell_type": "code", "execution_count": 46, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "1fb1a7bda24387b5aee077ac4e6ca3af", "grade": true, "grade_id": "cell-100c1b3328215913", "locked": true, "points": 10, "schema_version": 3, "solution": false, "task": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[0.]\n", " [1.]\n", " [0.]\n", " [0.]], shape=(4, 1), dtype=float32)\n", "\u001b[92mAll test passed\n" ] } ], "source": [ "def one_hot_matrix_test(target):\n", " label = tf.constant(1)\n", " depth = 4\n", " result = target(label, depth)\n", " print(result)\n", " assert result.shape[0] == depth, \"Use the parameter depth\"\n", " assert result.shape[1] == 1, f\"Reshape to have only 1 column\"\n", " assert np.allclose(result, [[0.], [1.], [0.], [0.]] ), \"Wrong output. Use tf.one_hot\"\n", " result = target(3, depth)\n", " assert np.allclose(result, [[0.], [0.], [0.], [1.]] ), \"Wrong output. 
{ "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [], "source": [ "new_y_test = y_test.map(one_hot_matrix)\n", "new_y_train = y_train.map(one_hot_matrix)" ] }, { "cell_type": "code", "execution_count": 53, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[1.]\n", " [0.]\n", " [0.]\n", " [0.]\n", " [0.]\n", " [0.]], shape=(6, 1), dtype=float32)\n" ] } ], "source": [ "print(next(iter(new_y_test)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 2.4 - Initialize the Parameters \n", "\n", "Now you'll initialize the parameters of the network, using `tf.keras.initializers.GlorotNormal`. This initializer draws samples from a truncated normal distribution centered on 0, with `stddev = sqrt(2 / (fan_in + fan_out))`, where `fan_in` is the number of input units and `fan_out` is the number of output units, both in the weight tensor. \n", "\n", "To initialize with zeros or ones you could use `tf.zeros()` or `tf.ones()` instead. \n", "\n", "\n", "### Exercise 4 - initialize_parameters\n", "\n", "Implement the function below, which creates a `tf.Variable` of the requested shape for each parameter, filled with values drawn from the Glorot initializer: \n", "\n", " - `tf.keras.initializers.GlorotNormal(seed=1)`\n", " - `tf.Variable(initializer(shape=(...)))`" ] }, 
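{ "cell_type": "markdown", "metadata": {}, "source": [ "For intuition, here is the initializer applied to a toy shape (a sketch; `(3, 2)` is just an example, not one of the shapes you'll use):\n", "\n", "```python\n", "initializer = tf.keras.initializers.GlorotNormal(seed=1)\n", "sample = initializer(shape=(3, 2))  # stddev = sqrt(2 / (3 + 2)) ~ 0.63, centered on 0\n", "```" ] }, 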
{ "cell_type": "code", "execution_count": 62, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "da48416c74797c83152e1080b08afb9d", "grade": false, "grade_id": "cell-1d5716c48a16debf", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# GRADED FUNCTION: initialize_parameters\n", "\n", "def initialize_parameters():\n", " \"\"\"\n", " Initializes parameters to build a neural network with TensorFlow. The shapes are:\n", " W1 : [25, 12288]\n", " b1 : [25, 1]\n", " W2 : [12, 25]\n", " b2 : [12, 1]\n", " W3 : [6, 12]\n", " b3 : [6, 1]\n", " \n", " Returns:\n", " parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3\n", " \"\"\"\n", " \n", " initializer = tf.keras.initializers.GlorotNormal(seed=1) \n", " #(approx. 6 lines of code)\n", " W1 = tf.Variable(initializer(shape=(25, 12288)))\n", " b1 = tf.Variable(initializer(shape=(25, 1)))\n", " W2 = tf.Variable(initializer(shape=(12, 25)))\n", " b2 = tf.Variable(initializer(shape=(12, 1)))\n", " W3 = tf.Variable(initializer(shape=(6, 12)))\n", " b3 = tf.Variable(initializer(shape=(6, 1)))\n", " # YOUR CODE STARTS HERE\n", " \n", " \n", " # YOUR CODE ENDS HERE\n", "\n", " parameters = {\"W1\": W1,\n", " \"b1\": b1,\n", " \"W2\": W2,\n", " \"b2\": b2,\n", " \"W3\": W3,\n", " \"b3\": b3}\n", " \n", " return parameters" ] }, { "cell_type": "code", "execution_count": 63, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "dd3fe0b5ed777771156c071d9373e47a", "grade": true, "grade_id": "cell-11012e1fada40919", "locked": true, "points": 10, "schema_version": 3, "solution": false, "task": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "W1 shape: (25, 12288)\n", "b1 shape: (25, 1)\n", "W2 shape: (12, 25)\n", "b2 shape: (12, 1)\n", "W3 shape: (6, 12)\n", "b3 shape: (6, 1)\n", "\u001b[92mAll test passed\n" ] } ], "source": [ "def initialize_parameters_test(target):\n", " parameters = target()\n", "\n", " values = {\"W1\": (25, 12288),\n", " \"b1\": (25, 1),\n", " \"W2\": (12, 25),\n", " \"b2\": (12, 1),\n", " \"W3\": (6, 12),\n", " \"b3\": (6, 1)}\n", "\n", " for key in parameters:\n", " print(f\"{key} shape: {tuple(parameters[key].shape)}\")\n", " assert type(parameters[key]) == ResourceVariable, \"All parameters must be created using tf.Variable\"\n", " assert tuple(parameters[key].shape) == values[key], f\"{key}: wrong shape\"\n", " assert np.abs(np.mean(parameters[key].numpy())) < 0.5, f\"{key}: Use the GlorotNormal initializer\"\n", " assert np.std(parameters[key].numpy()) > 0 and np.std(parameters[key].numpy()) < 1, f\"{key}: Use the GlorotNormal initializer\"\n", "\n", " print(\"\\033[92mAll test passed\")\n", " \n", "initialize_parameters_test(initialize_parameters)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected output**\n", "```\n", "W1 shape: (25, 12288)\n", "b1 shape: (25, 1)\n", "W2 shape: (12, 25)\n", "b2 shape: (12, 1)\n", "W3 shape: (6, 12)\n", "b3 shape: (6, 1)\n", "```" ] }, { "cell_type": "code", "execution_count": 64, "metadata": {}, "outputs": [], "source": [ "parameters = initialize_parameters()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 3 - Building Your First Neural Network in TensorFlow\n", "\n", "In this part of the assignment you will build a neural network using TensorFlow. Remember that there are two parts to implementing a TensorFlow model:\n", "\n", "- Implement forward propagation\n", "- Retrieve the gradients and train the model\n", "\n", "Let's get into it!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 3.1 - Implement Forward Propagation \n", "\n", "One of TensorFlow's great strengths lies in the fact that you only need to implement the forward propagation function. \n", "\n", "Here, you'll use a TensorFlow decorator, `@tf.function`, which builds a computational graph to execute the function. `@tf.function` is polymorphic, which comes in very handy: it can support arguments with different data types or shapes, and, through AutoGraph, it converts ordinary Python control flow into graph operations, so you can still write data-dependent control flow statements.\n", "\n", "When you use `@tf.function` to implement forward propagation, the computational graph is activated, which keeps track of the operations. 
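For instance, the decorator in isolation (a minimal sketch, not part of the assignment):\n", "\n", "```python\n", "@tf.function\n", "def cube(x):\n", "    return x ** 3  # traced once per input signature, then run as a graph\n", "\n", "print(cube(tf.constant(2)))  # tf.Tensor(8, shape=(), dtype=int32)\n", "```\n", "\n", "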
This is so you can calculate your gradients with backpropagation.\n", "\n", "\n", "### Exercise 5 - forward_propagation\n", "\n", "Implement the `forward_propagation` function.\n", "\n", "**Note** Use only the TF API. \n", "\n", "- tf.math.add\n", "- tf.linalg.matmul\n", "- tf.keras.activations.relu\n" ] }, { "cell_type": "code", "execution_count": 65, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "7c3d7c28e47e314c17d3f35e5a033b15", "grade": false, "grade_id": "cell-23b6d82b3443e298", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# GRADED FUNCTION: forward_propagation\n", "\n", "@tf.function\n", "def forward_propagation(X, parameters):\n", " \"\"\"\n", " Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR\n", " \n", " Arguments:\n", " X -- input dataset placeholder, of shape (input size, number of examples)\n", " parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\"\n", " the shapes are given in initialize_parameters\n", "\n", " Returns:\n", " Z3 -- the output of the last LINEAR unit\n", " \"\"\"\n", " \n", " # Retrieve the parameters from the dictionary \"parameters\" \n", " W1 = parameters['W1']\n", " b1 = parameters['b1']\n", " W2 = parameters['W2']\n", " b2 = parameters['b2']\n", " W3 = parameters['W3']\n", " b3 = parameters['b3']\n", " \n", " #(approx. 5 lines) # Numpy Equivalents:\n", " Z1 = tf.add(tf.matmul(W1,X),b1) # Z1 = np.dot(W1, X) + b1\n", " A1 = tf.keras.activations.relu(Z1) # A1 = relu(Z1)\n", " Z2 = tf.add(tf.matmul(W2,A1),b2) # Z2 = np.dot(W2, A1) + b2\n", " A2 = tf.keras.activations.relu(Z2) # A2 = relu(Z2)\n", " Z3 = tf.add(tf.matmul(W3,A2),b3) # Z3 = np.dot(W3, A2) + b3\n", " # YOUR CODE STARTS HERE\n", " \n", " \n", " # YOUR CODE ENDS HERE\n", " \n", " return Z3" ] }, { "cell_type": "code", "execution_count": 66, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "204b6a74e3c6cbdb3654bdb2ed8f13af", "grade": true, "grade_id": "cell-728b002a6a88ceb1", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[-0.13082105]\n", " [ 0.21228725]\n", " [ 0.7050022 ]\n", " [-1.1224037 ]\n", " [-0.20386747]\n", " [ 0.95262206]], shape=(6, 1), dtype=float32)\n", "\u001b[92mAll test passed\n" ] } ], "source": [ "def forward_propagation_test(target, examples):\n", " for batch in examples:\n", " forward_pass = target(batch, parameters)\n", " assert type(forward_pass) == EagerTensor, \"Your output is not a tensor\"\n", " assert forward_pass.shape == (6, 1), \"Last layer must use W3 and b3\"\n", " assert np.any(forward_pass < 0), \"Don't use a ReLu layer at end of your network\"\n", " assert np.allclose(forward_pass, \n", " [[-0.13082162],\n", " [ 0.21228778],\n", " [ 0.7050022 ],\n", " [-1.1224034 ],\n", " [-0.20386729],\n", " [ 0.9526217 ]]), \"Output does not match\"\n", " print(forward_pass)\n", " break\n", " \n", "\n", " print(\"\\033[92mAll test passed\")\n", "\n", "forward_propagation_test(forward_propagation, new_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected output**\n", "```\n", "tf.Tensor(\n", "[[-0.13082162]\n", " [ 0.21228778]\n", " [ 0.7050022 ]\n", " [-1.1224034 ]\n", " [-0.20386732]\n", " [ 0.9526217 ]], shape=(6, 1), dtype=float32)\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, 
"source": [ "\n", "### 3.2 Compute the Cost\n", "\n", "Here again, the delightful `@tf.function` decorator steps in and saves you time. All you need to do is specify how to compute the cost, and you can do so in one simple step by using:\n", "\n", "`tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true = ..., y_pred = ..., from_logits=True))`\n", "\n", "\n", "### Exercise 6 - compute_cost\n", "\n", "Implement the cost function below. \n", "- It's important to note that the \"`y_pred`\" and \"`y_true`\" inputs of [tf.keras.losses.binary_crossentropy](https://www.tensorflow.org/api_docs/python/tf/keras/losses/binary_crossentropy) are expected to be of shape (number of examples, num_classes). Since both the transpose and the original tensors have the same values, just in different order, the result of calculating the binary_crossentropy should be the same if you transpose or not the logits and labels. Just for reference here is how the Binary Cross entropy is calculated in TensorFlow:\n", "\n", "``mean_reduce(max(logits, 0) - logits * labels + log(1 + exp(-abs(logits))), axis=-1)``\n", "\n", "- `tf.reduce_mean` basically does the summation over the examples." ] }, { "cell_type": "code", "execution_count": 67, "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "af252bad785c3ddf4a55fa7bc999477a", "grade": false, "grade_id": "cell-e6cc4d7fefeed231", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# GRADED FUNCTION: compute_cost \n", "\n", "@tf.function\n", "def compute_cost(logits, labels):\n", " \"\"\"\n", " Computes the cost\n", " \n", " Arguments:\n", " logits -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)\n", " labels -- \"true\" labels vector, same shape as Z3\n", " \n", " Returns:\n", " cost - Tensor of the cost function\n", " \"\"\"\n", " \n", " #(1 line of code)\n", " # cost = ...\n", " # YOUR CODE STARTS HERE\n", " cost = tf.reduce_mean(tf.keras.losses.binary_crossentropy(labels,logits,from_logits=True))\n", " \n", " # YOUR CODE ENDS HERE\n", " return cost" ] }, { "cell_type": "code", "execution_count": 68, "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "938580c5cbcf49a72c1fdcda782cfd8a", "grade": true, "grade_id": "cell-9bf72affa2e7b1b5", "locked": true, "points": 10, "schema_version": 3, "solution": false, "task": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(0.8419182681095858, shape=(), dtype=float64)\n", "\u001b[92mAll test passed\n" ] } ], "source": [ "def compute_cost_test(target):\n", " labels = np.array([[0., 1.], [0., 0.], [1., 0.]])\n", " logits = np.array([[0.6, 0.4], [0.4, 0.6], [0.4, 0.6]])\n", " result = compute_cost(logits, labels)\n", " print(result)\n", " assert(type(result) == EagerTensor), \"Use the TensorFlow API\"\n", " assert (np.abs(result - (0.7752516 + 0.9752516 + 0.7752516) / 3.0) < 1e-7), \"Test does not match. Did you get the mean of your cost functions?\"\n", "\n", " print(\"\\033[92mAll test passed\")\n", "\n", "compute_cost_test(compute_cost)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected output**\n", "```\n", "tf.Tensor(0.87525165, shape=(), dtype=float32)\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 3.3 - Train the Model\n", "\n", "Let's talk optimizers. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 3.3 - Train the Model\n", "\n", "Let's talk optimizers. You'll specify the type of optimizer in one line, in this case `tf.keras.optimizers.SGD` (though you can use others, such as Adam), and then call it within the training loop. \n", "\n", "Notice the `tape.gradient` function: it uses the operations recorded for automatic differentiation inside the `GradientTape` block to compute the gradients of the cost with respect to the trainable parameters. Then, calling the optimizer method `apply_gradients` applies the optimizer's update rules to each trainable parameter. At the end of this assignment, you'll find some documentation that explains this in more detail, but for now, a simple explanation will do. ;) \n", "\n", "\n", "Here you should take note of an important extra step that's been added to the batch training process: \n", "\n", "- `dataset = dataset.prefetch(8)` \n", "\n", "What this does is prevent a memory bottleneck that can occur when reading from disk. `prefetch()` sets aside up to the given number of elements and keeps them ready for when they're needed, so the preprocessing of upcoming batches overlaps with training on the current one. Because the iteration is streaming, the data doesn't need to fit into memory. " ] }, { "cell_type": "code", "execution_count": 69, "metadata": {}, "outputs": [], "source": [ "def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,\n", " num_epochs = 1500, minibatch_size = 32, print_cost = True):\n", " \"\"\"\n", " Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.\n", " \n", " Arguments:\n", " X_train -- training set, of shape (input size = 12288, number of training examples = 1080)\n", " Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)\n", " X_test -- test set, of shape (input size = 12288, number of test examples = 120)\n", " Y_test -- test labels, of shape (output size = 6, number of test examples = 120)\n", " learning_rate -- learning rate of the optimization\n", " num_epochs -- number of epochs of the optimization loop\n", " minibatch_size -- size of a minibatch\n", " print_cost -- True to print the cost every 10 epochs\n", " \n", " Returns:\n", " parameters -- parameters learnt by the model. They can then be used to predict.\n", " \"\"\"\n", " \n", " costs = [] # To keep track of the cost\n", " \n", " # Initialize your parameters\n", " #(1 line)\n", " parameters = initialize_parameters()\n", "\n", " W1 = parameters['W1']\n", " b1 = parameters['b1']\n", " W2 = parameters['W2']\n", " b2 = parameters['b2']\n", " W3 = parameters['W3']\n", " b3 = parameters['b3']\n", "\n", " optimizer = tf.keras.optimizers.SGD(learning_rate)\n", "\n", " X_train = X_train.batch(minibatch_size, drop_remainder=True).prefetch(8)# <<< extra step \n", " Y_train = Y_train.batch(minibatch_size, drop_remainder=True).prefetch(8) # loads memory faster \n", "\n", " # Do the training loop\n", " for epoch in range(num_epochs):\n", "\n", " epoch_cost = 0.\n", " \n", " for (minibatch_X, minibatch_Y) in zip(X_train, Y_train):\n", " # Select a minibatch\n", " with tf.GradientTape() as tape:\n", " # 1. predict\n", " Z3 = forward_propagation(minibatch_X, parameters)\n", " # 2. 
loss\n", " minibatch_cost = compute_cost(Z3, minibatch_Y)\n", " \n", " trainable_variables = [W1, b1, W2, b2, W3, b3]\n", " grads = tape.gradient(minibatch_cost, trainable_variables)\n", " optimizer.apply_gradients(zip(grads, trainable_variables))\n", " epoch_cost += minibatch_cost / minibatch_size\n", "\n", " # Print the cost every epoch\n", " if print_cost == True and epoch % 10 == 0:\n", " print (\"Cost after epoch %i: %f\" % (epoch, epoch_cost))\n", " if print_cost == True and epoch % 5 == 0:\n", " costs.append(epoch_cost)\n", "\n", " # Plot the cost\n", " plt.plot(np.squeeze(costs))\n", " plt.ylabel('cost')\n", " plt.xlabel('iterations (per fives)')\n", " plt.title(\"Learning rate =\" + str(learning_rate))\n", " plt.show()\n", "\n", " # Save the parameters in a variable\n", " print (\"Parameters have been trained!\")\n", "\n", " return parameters" ] }, { "cell_type": "code", "execution_count": 70, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cost after epoch 0: 0.742591\n", "Cost after epoch 10: 0.614557\n", "Cost after epoch 20: 0.598900\n", "Cost after epoch 30: 0.588907\n", "Cost after epoch 40: 0.579898\n", "Cost after epoch 50: 0.570628\n", "Cost after epoch 60: 0.560898\n", "Cost after epoch 70: 0.550808\n", "Cost after epoch 80: 0.540494\n", "Cost after epoch 90: 0.488080\n", "Cost after epoch 100: 0.478259\n", "Cost after epoch 110: 0.472858\n", "Cost after epoch 120: 0.468990\n", "Cost after epoch 130: 0.466013\n", "Cost after epoch 140: 0.463659\n", "Cost after epoch 150: 0.461677\n", "Cost after epoch 160: 0.459950\n", "Cost after epoch 170: 0.458390\n", "Cost after epoch 180: 0.456969\n", "Cost after epoch 190: 0.455646\n" ] }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYgAAAEWCAYAAAB8LwAVAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8QVMy6AAAACXBIWXMAAAsTAAALEwEAmpwYAAAsNElEQVR4nO3deZxU5Z3v8c+v972bnabZF0VjBLXFBXdNgmYxZlE0iYmZjCEzTCaTm8lNZu7NKzf3Zq5Z58ab3BjHiZq4J2rEXZMouKDQKAgIIiJL0400W6/0/rt/nNNQtNUrFKe66/t+verVVc95zqlfHaV+9TzPOc9j7o6IiEh3aVEHICIiyUkJQkRE4lKCEBGRuJQgREQkLiUIERGJSwlCRETiUoKQYc3Mzjezt6KOQ2QoUoKQhDGzrWZ2WZQxuPsL7n5ilDF0MbOLzKzyOL3XpWa20cyazOw5M5vSS92RZvawmTWa2TYzu66/x7LAj8xsb/j4sZlZzPb/aWZrzazdzL6fkA8rCaMEIUOamaVHHQMc+qJMin9PZjYaeAj478BIoAK4v5ddfgW0AuOAzwG/NrMP9PNYNwKfBOYApwIfA74as30z8G3g8aP8WBKBpPgfWlKLmaWZ2XfM7J3wV+cDZjYyZvsfzGyXmdWa2bKuL6tw2x1m9msze8LMGoGLw5bKt8zsjXCf+80sJ6x/xK/23uqG279tZtVmVmVmXzEzN7OZPXyO583sh2b2EtAETDezG8xsg5nVm9kWM/tqWDcfeBKYYGYN4WNCX+dikD4FrHf3P7h7M/B9YI6ZzY7zGfKBTwP/3d0b3P1FYAnwhX4e64vAz9y90t13Aj8DvtR1fHe/092fBOqP8jNJBJQgJApfJ/jVeSEwAdhP8Cu2y5PALGAs8Bpwd7f9rwN+CBQCL4ZlVwMLgGkEv2S/1Mv7x61rZguAbwKXATPD+PryBYJf0YXANmA3wa/oIuAG4N/N7HR3bwQuB6rcvSB8VPXjXBxiZpPN7EAvj66uoQ8Aa7r2C9/7nbC8uxOADnffFFO2JqZuX8c6Ynu3fWWIy4g6AElJXwUWu3slQNg3vd3MvuDu7e7+266K4bb9Zlbs7rVh8SPu/lL4vDns8r45/MLFzB4F5vby/j3VvRq43d3Xh9v+B/D5Pj7LHV31Q7FdKUvN7BngfIJEF0+v5yK2ortvB0r6iAegAKjpVlZLkMTi1a3tpW5fx+q+fy1QYGbmmuhtyFMLQqIwBXi465cvsAHoAMaZWbqZ3RR2udQBW8N9RsfsvyPOMXfFPG8i+OLqSU91J3Q7drz36e6IOmZ2uZm9Ymb7ws92BUfG3l2P56If792TBoIWTKwi4nfz9FV3oNuLgAYlh+FBCUKisAO43N1LYh45YR/2dcCVBN08xcDUcB+L2T9RXz7VwMSY15P6sc+hWMwsG3gQ+Ckwzt1LgCc4HHu8uHs7F0cIu5gaenl8Lqy6nmDQuGu/fGBGWN7dJiDDzGbFlM2JqdvXsY7Y3m1fGeKUICTRMs0sJ+aRAdwC/NDCyyXNbIyZXRnWLwRagL1AHvBvxzHWB4AbzOwkM8sDvjfA/bOAbIIumXYzuxz4cMz294BRZlYcU9bbuTiCu2+PGb+I9+gaq3kYOMXMPh0OwH8PeMPdN8Y5ZiPBVUo/MLN8M5tPkKB/389j/Q74ppmVmdkE4L8Ad3Qd38wyw/3SCBJRjiXJlWfSNyUISbQngIMxj+8DvyC4UuYZM6sHXgHOCuv/jmCwdyfwZrjtuAivtrkZeI7g8szl4aaWfu5fTzDo/ADBYPN1BJ+za/tG4F5gS9il
NIHez8VgP0cNwZVJPwzjOAtY2LXdzP7FzJ6M2eXvgFyCAfZ7ga91jav0dSzgN8CjwFpgHcEYzG9itv8HwX/3a4F/DZ9/ARkSTF2FIvGZ2UkEX3rZ3QeMRVKBWhAiMczsKjPLMrMRwI+AR5UcJFUpQYgc6asEYwjvEFxN9LVowxGJjrqYREQkLrUgREQkrmF1J/Xo0aN96tSpUYchIjJkrFq1ao+7j4m3bVgliKlTp1JRURF1GCIiQ4aZbetpW0K7mMxsgZm9ZWabzew7cbb/s5mtDh/rzKyjayZLC2bdXBtu07e+iMhxlrAWRHi35K+ADwGVwEozW+Lub3bVcfefAD8J638c+Cd33xdzmIvdfU+iYhQRkZ4lsgUxD9js7lvcvRW4j+AW/p5cS3AXp4iIJIFEJogyjpzpsjIse59w3psFBBOddXGC6QdWmdmNPb2Jmd1oZhVmVlFT031WYhERGaxEJgiLU9bTTRcfB17q1r00391PJ1hk5e/N7IJ4O7r7re5e7u7lY8bEHYgXEZFBSGSCqOTI6ZInAlU91F1It+6lrgVd3H03wYyS8xIQo4iI9CCRCWIlMMvMpplZFkESWNK9Ujj18YXAIzFl+WZW2PWcYMrkdQmMVUREuklYgggnOFsMPE2wStYD7r7ezBaZ2aKYqlcBz4Tz0ncZB7xoZmuAFcDj7v5UIuLs7HR++de3WbpJ4xciIrESeqOcuz9BsB5AbNkt3V7fQcwCI2HZFo5cpSph0tKM3yzbwlWnlXHhCRrDEBHpormYgAnFuVQdaI46DBGRpKIEAZSW5LCr7mDUYYiIJBUlCKC0OIdqtSBERI6gBAGUFueyt7GV5raOqEMREUkaShAELQiA9+rUihAR6aIEQdCCADRQLSISQwmCYJAa0EC1iEgMJQgOdzGpBSEicpgSBJCXlUFxbibVtWpBiIh0UYIIlRbnsKtWLQgRkS5KEKHS4hx1MYmIxFCCCJWW5LJLl7mKiByiBBGaUJzDPt0sJyJyiBJEaHx4L0S1xiFERAAliEMmhJe66komEZGAEkSotCRsQWigWkQEUII4ZHyRWhAiIrGUIEK5WemMyMvUGISISEgJIsb44lwlCBGRkBJEjAnFOUoQIiIhJYgYpSU5GoMQEQkpQcQoLc7lQFMbB1t1s5yIiBJEjFLdCyEicogSRIxS3U0tInKIEkSMwwsHqQUhIpLQBGFmC8zsLTPbbGbfibP9n81sdfhYZ2YdZjayP/smwvgwQWhdCBGRBCYIM0sHfgVcDpwMXGtmJ8fWcfefuPtcd58LfBdY6u77+rNvIuRkpjMyP4sqJQgRkYS2IOYBm919i7u3AvcBV/ZS/1rg3kHue8yUFutSVxERSGyCKAN2xLyuDMvex8zygAXAgwPd91grLc5VF5OICIlNEBanzHuo+3HgJXffN9B9zexGM6sws4qamppBhHmkYOlRtSBERBKZICqBSTGvJwJVPdRdyOHupQHt6+63unu5u5ePGTPmKMINlJbkUNfcTmNL+1EfS0RkKEtkglgJzDKzaWaWRZAElnSvZGbFwIXAIwPdNxEm6F4IEREggQnC3duBxcDTwAbgAXdfb2aLzGxRTNWrgGfcvbGvfRMVa6zxuptaRASAjEQe3N2fAJ7oVnZLt9d3AHf0Z9/jQS0IEZGA7qTuZlxxNqClR0VElCC6yc5IZ3RBlrqYRCTlKUHEUaqV5URElCDiGa+7qUVElCDi0dKjIiJKEHGVluRS39xOg26WE5EUpgQRx6GV5TTlhoikMCWIOLSynIiIEkRcWptaREQJIq5xRTmYQZVulhORFKYEEUdWRhqjC7K1LoSIpDQliB6UFudQpS4mEUlhShA9KC3OUQtCRFKaEkQPNN2GiKQ6JYgelBbn0NDSTl1zW9ShiIhEQgmiB6Ulwb0Q6mYSkVSlBNGDrnshqnQ3tYikKCWIHhy+WU4tCBFJTUoQPei6WU4JQkRSlRJEDzLT0xhTkK0J+0QkZSlB9KK0RJe6ikjqUoLoxQStLCciKUwJohfjw5Xl3D3qUEREjjsliF5MKM6lqbWDumatLCciqUcJohfjtS6EiKQwJYheTCjpWnpUA9UiknoSmiDMbIGZvWVmm83sOz3UucjMVpvZejNbGlO+1czWhtsqEhlnT7T0qIiksoxEHdjM0oFfAR8CKoGVZrbE3d+MqVMC/D9ggbtvN7Ox3Q5zsbvvSVSMfRlbmE2aqYtJRFJTIlsQ84DN7r7F3VuB+4Aru9W5DnjI3bcDuPvuBMYzYBnpaYwtzNHSoyKSkhKZIMqAHTGvK8OyWCcAI8zseTNbZWbXx2xz4Jmw/Mae3sTMbjSzCjOrqKmpOWbBdyktyWFXnVoQIpJ6EtbFBFicsu43FGQAZwCXArnAcjN7xd03AfPdvSrsdnrWzDa6+7L3HdD9VuBWgPLy8mN+w0JpcQ4bq+uP9WFFRJJeIlsQlcCkmNcTgao4dZ5y98ZwrGEZMAfA3avCv7uBhwm6rI67rpXldLOciKSaRCaIlcAsM5tmZlnAQmBJtzqPAOebWYaZ5QFnARvMLN/MCgHMLB/4MLAugbH2qLQ4h4NtHdQe1MpyIpJaEtbF5O7tZrYYeBpIB37r7uvNbFG4/RZ332BmTwFvAJ3Abe6+zsymAw+bWVeM97j7U4mKtTddl7pWHWimJC8rihBERCKRyDEI3P0J4IluZbd0e/0T4CfdyrYQdjVFrbTk8MpyJ08oijgaEZHjR3dS92HW2AJyM9N5ev2uqEMRETmulCD6UJiTyadOL+ORNVXsbWiJOhwRkeNGCaIfvnjuVFrbO7lv5Y6+K4uIDBNKEP1wwrhC5s8cxV2vbKO9ozPqcEREjgsliH760rnTqK5t5pk334s6FBGR40IJop8umT2WiSNyueOlrVGHIiJyXChB9FN6mvHFc6ayYus+1lfVRh2OiEjCKUEMwNXlk8jNTOfOl7dGHYqISMIpQQxAcV4mV51exp9WV7GvsTXqcEREEkoJYoC+dOiS1+1RhyIiklBKEAN0wrhCzp0xit8v1yWvIjK8KUEMwpfOnapLXkVk2FOCGIRLTxoXXPKqwWoRGcaUIAYhPc24/pwprHhXl7yKyPClBDFI15RP1iWvIjKsKUEMUtclr4/oklcRGaaUII7CF8+ZSosueRWRYUoJ4iicOD645PWu5dtobuuIOhwRkWNKCeIoffXCGVTVNnPNb5bzXl1z1OGIiBwzShBH6cITxvCbL5zB27sb+MQvX+SNygNRhyQickwoQRwDH/nAeB782rlkpKXx2VuWs2RNVdQhiYgctX4lCDP7bH/KUtlJpUU8sng+p04s5uv3vs5Pn36Lzk6POiwRkUHrbwviu/0sS2mjC7K5+ytnc035JH753GYW3bWKxpb2qMMSERmUjN42mtnlwBVAmZndHLOpCNA3XxxZGWnc9OkPcuL4Qv7X42/y6V+/zH9cX86kkXlRhyYiMiB9tSCqgAqgGVgV81gCfCSxoQ1dZsaXz5vG7TfMY+eBg3z05hf4j2VbaGnXpbAiMnT0miDcfY273wnMdPc
7w+dLgM3uvr+vg5vZAjN7y8w2m9l3eqhzkZmtNrP1ZrZ0IPsmuwtPGMOSxedx2uQR/PCJDVz6s6UsWVOFu8YmRCT5WX++rMzseeATBF1Sq4EaYKm7f7OXfdKBTcCHgEpgJXCtu78ZU6cEeBlY4O7bzWysu+/uz77xlJeXe0VFRZ+fJwovvF3Dvz2xkQ3VdcyZVMK/XnES86aNjDosEUlxZrbK3cvjbevvIHWxu9cBnwJud/czgMv62GceQUtji7u3AvcBV3arcx3wkLtvB3D33QPYd0g5f9YYHvuH8/jpZ+fwXm0zV/9mOTf+roItNQ1RhyYiEld/E0SGmZUCVwOP9XOfMmBHzOvKsCzWCcAIM3vezFaZ2fUD2BcAM7vRzCrMrKKmpqafoUUjPc34zBkTee5bF/HPHzmRlzbv4UP/vozvPrSWrXsaow5PROQIvV7FFOMHwNPAS+6+0symA2/3sY/FKeven5UBnAFcCuQCy83slX7uGxS63wrcCkEXUx8xJYXcrHT+/uKZXHPmJG7+y9vct2IH963czuWnjOerF8xgzqSSqEMUEelfgnD3PwB/iHm9Bfh0H7tVApNiXk8kuCqqe5097t4INJrZMmBOP/cd8kYXZPODK09h8SUzueOlrfz+lW08sXYXZ08fyaILZ3DhCWMwi5crRUQSr793Uk80s4fNbLeZvWdmD5rZxD52WwnMMrNpZpYFLCS4AirWI8D5ZpZhZnnAWcCGfu47bIwtzOHbC2az/LuX8t8+ehJb9zTxpdtXcvkvXuDh1ytp6+iMOkQRSUH9HYO4neALegLBWMCjYVmP3L0dWEzQNbUBeMDd15vZIjNbFNbZADwFvAGsAG5z93U97TvQDzfUFGRn8JXzp7Ps2xfz08/OoaPT+af71zD/pr/y789u0myxInJc9fcy19XuPrevsqgl82Wug9HZ6Ty/aTd3vryNpZtqyEgzPnLKeK4/ewrzpo1U95OIHLXeLnPt7yD1HjP7PHBv+PpaYO+xCE56lpZmXDJ7HJfMHsfWPY3c9co2HqjYweNvVDN7fCGfP3sKV51WRn52f/8zioj0X39bEJOBXwLnEFxN9DLw9a77F5LFcGtBxHOwtYNHVu/kd8u38WZ1HYXZGXzytDKunTeZkycURR2eiAwxvbUg+psg7gS+0TW9hpmNBH7q7l8+ppEepVRIEF3cnde27+f3y7fxxLpdtLZ3MmdiMdfOm8zH50xQq0JE+uVYJIjX3f20vsqilkoJItaBplYeem0n963czqb3GsjPSucTc8u4bt5kPjixOOrwRCSJHYsxiDQzG9GtBaGfqEmiJC+LL583jRvmT+W17fu5d8UOHn69kntXbOcDE4q4dt5krpw7gcKczKhDFZEhpL8tiOsJFgj6I8EYxNXAD93994kNb2BStQURT+3BNpas3sndr25n4656cjPT+ficUq6dN5m5k0p0BZSIAMegiyk8yMnAJQTTYPylr5lVo6AE8X7uzprKWu5bsZ0la6poau1g9vhCrp03mU+eVkZxrloVIqnsmCSIoUAJonf1zW08uqaae1dsZ+3OWrIz0vjoqaV87qwpnD5ZrQqRVKQEIe+zbmct96zYzpLVVTS0tDN7fCHXnRW0Koo0ViGSMpQgpEeNLe0sWVPF3a9uY93OOnIz0/nEnAl87uzJnDqxJOrwRCTBlCCkX96oPMA9r27nkdVVHGzr4JSyIj531hSunDuBvCxdtCYyHClByIDUNbfxyOuHr4AqzM7g02dM5PNnT2bm2MKowxORY0gJQgbF3Vm1bT93hetUtHZ0cvb0kXzh7Kl8+APjyEzv72TAIpKslCDkqO1taOGBikrufnUblfsPMqYwm4VnTuK6syZTWpwbdXgiMkhKEHLMdHQ6Szft5q5XtvPcW7tJM+PDJ4/ji+dO5SxNQS4y5ByLqTZEAEiPmYJ8x74m7nplG/et3MGT63Zx4rhCrj83mIJcg9oiQ59aEHLUDrZ28OiaKu54eWswBXlOBleXT+L6c6YwZVR+1OGJSC/UxSTHRdeg9p3Lt/Hk2mo63Ln4xLHcMH8q580cre4nkSSkBCHH3e66Zu56dTv3vLqNPQ2tzBpbwA3zp3HVaWXkZqVHHZ6IhJQgJDIt7R08uqaa2196l/VVdZTkZbLwzMlcf84UJpTo6ieRqClBSOTcnZVb93P7S+/y9PpdmBkLPjCeL583jTOmjIg6PJGUpauYJHJmxrxpI5k3bSSV+5v4/fJt3LtiO4+vrWbupBK+fN40Lj9lvG6+E0kiakFIZBpb2nnwtUp+++K7bN3bxITiHK4/dyrXnjmZ4jzNKCtyPKiLSZJaZ6fz1427+c8X32X5lr3kZaXzmTMmcsP8aUwbrctkRRJJCUKGjPVVtfz2xa0sWbOT9k7nspPG8bfnT+fMqSN0maxIAkSWIMxsAfALIB24zd1v6rb9IuAR4N2w6CF3/0G4bStQD3QA7T19gFhKEMPH7vpmfr98G3e9so39TW2cOrGYvzlvGld8sFTjFCLHUCQJwszSgU3Ah4BKYCVwbexa1mGC+Ja7fyzO/luBcnff09/3VIIYfg62dhwap9iyp5EJxTncMH8a18ybpJXvRI6B3hJEIn+KzQM2u/sWd28F7gOuTOD7yTCUm5XO58+ewp+/eSG3XV/O5FF5/PCJDZz7v//K/3zsTSr3N0UdosiwlcjLXMuAHTGvK4Gz4tQ7x8zWAFUErYn1YbkDz5iZA79x91sTGKskubQ047KTx3HZyeNYt7OW217Ywp0vb+WOl7dyxQdL+dvzp2mJVJFjLJEJIt6IYvf+rNeAKe7eYGZXAH8CZoXb5rt7lZmNBZ41s43uvux9b2J2I3AjwOTJk49Z8JK8Tikr5v8sPI1vL5jNHS9v5d5Xt/PomirOmjaSvz1/OpfMHktamga0RY5WIscgzgG+7+4fCV9/F8Dd/3cv+2wlzriDmX0faHD3n/b2nhqDSE31zW3cv3IHv33xXapqm5k+Jp+vnDedT51eRk6m5n0S6U1UYxArgVlmNs3MsoCFwJJugY238NpFM5sXxrPXzPLNrDAszwc+DKxLYKwyhBXmZPKV86ez9NsX84uFc8nLSudfHl7L/Jv+ys1/eZv9ja1RhygyJCWsi8nd281sMfA0wWWuv3X39Wa2KNx+C/AZ4Gtm1g4cBBa6u5vZOODhMHdkAPe4+1OJilWGh8z0NK6cW8Yn5kxg+Za93LpsCz9/dhO/fv4dri6fyFfOn86kkXlRhykyZOhGORnW3tpVz63LtrBkzU46Op3LP1jKVy+YrgFtkZDupJaUt6u2mdtfepd7Xt1OfUs7Z08fyc+unkuZphyXFBfVGIRI0hhfnMN3rziJl797Cf96xUm8vv0Av/zr5qjDEklqShCSUgpzMvnbC6bzsVMnsGT1Thpb2qMOSSRpKUFISlo4bxKNrR08vrY66lBEkpYShKSk8ikjmDEmn/tX7ui7skiKUoKQlGRmLDxzMqu27eft9+qjDkckKSlBSMq66vQyMtNNrQiRHihBSMoaXZDNh04ex0Ov76SlvS
PqcESSjhKEpLRrzpzMvsZW/vzm7qhDEUk6ShCS0s6bOZqyklzuW7k96lBEko4ShKS09DTjs+UTeXHzHnbs0+JDIrGUICTlfbZ8EgB/WFUZcSQiyUUJQlJeWUkuF8wawx8qdtDROXzmJhM5WkoQIsDCMydRXdvMsrdrog5FJGkoQYgAl540jlH5Wdy/QvdEiHRRghABsjLS+PQZE/nzhveoqW+JOhyRpKAEIRK6unwS7Z3OQ69psFoElCBEDpk5toAzp47g/pU7GE4LaYkMlhKESIxrzpzMlj2NrNy6P+pQRCKnBCES44oPjqcwO0N3VougBCFyhLysDD4xdwJPrK2m9mBb1OGIREoJQqSb686aTHNbJ3e9si3qUEQipQQh0s0HJhRz8YljuO2FLTS1as1qSV1KECJxLL5kFvub2rj7FY1FSOpSghCJ44wpI5g/cxS3vrCF5jYtJiSpSQlCpAeLL55FTX2LliSVlJXQBGFmC8zsLTPbbGbfibP9IjOrNbPV4eN7/d1XJNHOnj6SM6eO4Jal79Da3hl1OCLHXcIShJmlA78CLgdOBq41s5PjVH3B3eeGjx8McF+RhDEzFl8yi+raZh7U9BuSghLZgpgHbHb3Le7eCtwHXHkc9hU5Zi6YNZo5E4v5f89vpr1DrQhJLYlMEGVAbOdtZVjW3TlmtsbMnjSzDwxwX8zsRjOrMLOKmhrN5S/HVlcrYse+gzyyuirqcESOq0QmCItT1n0GtNeAKe4+B/i/wJ8GsG9Q6H6ru5e7e/mYMWMGG6tIjy47aSwnlRbxq+c3a8U5SSmJTBCVwKSY1xOBI36CuXuduzeEz58AMs1sdH/2FTlezIzFF89kS00jT66rjjockeMmkQliJTDLzKaZWRawEFgSW8HMxpuZhc/nhfHs7c++IsfT5aeMZ+bYAn751810qhUhKSJhCcLd24HFwNPABuABd19vZovMbFFY7TPAOjNbA9wMLPRA3H0TFatIX9LSjL+/eAYbd9Xz5w3vRR2OyHFhw2lhlPLycq+oqIg6DBmm2js6ufTnSynKyWTJ4vmEjV+RIc3MVrl7ebxtupNapJ8y0tP4u4tmsHZnLUs36Yo5Gf6UIEQG4KrTJlJWksu/P7uJ/Y2tUYcjklBKECIDkJWRxrc+cgJv7Kzlgh8/x81/eZuGFk0JLsOTEoTIAF112kSe/sYFnDNjFD9/dhMX/vg5/vPFdzXrqww7GqQWOQqvb9/PT55+i5ff2cuE4hy+cdkJfOr0MjLS9dtLhobeBqmVIESOgZc27+HHT21kTWUtM8bk83cXzeQjp4ynIDsj6tBEeqUEIXIcuDtPr3+Pnz3zFm/vbiA7I42LTxzLR08t5dKTxpKXpWQhyae3BKH/Y0WOETNjwSnj+fDJ41i1fT+Pv1HN42ureWr9LnIy07h09jg+dmopF504ltys9KjDFemTWhAiCdTR6azcuo/H3qjiybW72NvYSl5WOufNHM38maOZP3MUM8YU6KY7iYy6mESSQHtHJyve3cdja6tZ+lYNOw8cBGBcUTbnzhjNuTNGMX/maCaU5EYcqaQSdTGJJIGM9DTOnTmac2eOxt3Zvq+Jlzbv5aV39rB0Uw0Pv74TgGmj8zljygjmTiph7qQSZo8v1FVREgm1IESSQGens3FXPS+/s4fl7+zl9R0H2BfeqZ2TmcYHy4qZM7GEuZODpFFWkqtuKTkm1MUkMsS4Ozv2HeT1HftZveMAq3ccYH1VHa3twbKnRTkZzC4t4uTSImaPL+Sk0iJOGFeowW8ZMHUxiQwxZsbkUXlMHpXHlXOD1XZb2zvZuKuONZW1bKiuY2N1HQ9U7KCpNbiDO81g6uh8Zo8vZOaYAmaMLWDm2AKmjy5Q4pBBUYIQGSKyMtI4dWIJp04sOVTW2ens2N/Ehuo6NlTXs6G6jvVVdTy1bhdd6xqZQVlJLjPHFjBzTAHTxxQwdVQeU0bnU1qUQ1qauqokPiUIkSEsLc2YMiqfKaPyWXBK6aHy5rYOtu5tZPPuBt7Z3cjmmgY2725g+Tt7aQm7qSBIOpNH5gUJY1Q+U0flMXFkHhNLcikbkaub+1Kc/uuLDEM5menMHl/E7PFFR5R3djrVdc1s29PI1r1NbNvbyNa9jWzb28SLm/fQ3NZ5RP0ReZlMHJFHWZgwykpymVCSw/jiXEqLcxhdkE26WiDDlhKESApJS7Pgy74kl3NnHrnN3dld38KOfU3sPHCQyv0HD/19e3c9z2/a/b4Ekp5mjCvMZlxxDqXFOYwvymVsUTbjirIZW5jD2MLgb1Fuhq66GoKUIEQECAbGxxXlMK4oh3iXtLg7+xpbqa5tZldtM9V1zbxX2xy8rjvIxl31PP9WzaFB81jZGWmMDZPG6IIsRhdkM7ogmzGFXX8Pl+VlpSuZJAklCBHpFzNjVEE2owqyOaWsuMd6DS3t7K5rZnd9C+/VNVNT38Lu+pZDZe/uaWTFu/vY39QWd/+czDRG5WczqiCLkflZh56Pyg9ej8zPYkR+FiPzgr9FOWqdJIoShIgcUwXZGRSEV0v1pq2jk32NrdTUt1DT0MKe+hb2NLSyr7GFvY2t7G0IHpt21bOnsfXQPSDdZaQZJXlZjMzPpCQvi5LcTEbkZVGSH/wdkReUj8jLoiQvk5LcTIrzMsnO0KW/fVGCEJFIZKanHerS6ou709jawf7GVvY1trKvqfXQ8/1NrexrbGNfYwv7m9rYtreJ1TsOcKCpjdaO+EkFIC8rPUwWQVIpycukODd4FOUeft79UZiTkTJTnyhBiEjSM7OgZZKdwaSRef3ax91pau1gf1MrB5ra2N/USu3BNg40tXEgLDsQvq492MrbuxuoPdhG7cG2HlsrXQqyMyjKyaAoTCZFOZkU5WaEfzODbWFZYU6wvTAnI3xkkpUxNBKMEoSIDEtmRn52BvnZGUwcMbB9m9s6DiWL2oNt1DYFf+ua26g72B7zPCiv3N9EfXU7dc1tNLS009cMRtkZaWHiOJw0CnOCBFgYk0wOJ5ZMCrrqZmdQkJNBbmbiB/OVIEREusnJTCcnM71f3V/ddXY6Da3t1B1so745+FvX3E59c/C6629dc1d5ULarrpmG8HljnCvBuktPO9yqKivJ5YFF5wzmo/YqoQnCzBYAvwDSgdvc/aYe6p0JvAJc4+5/DMu2AvVAB9De02RSIiLJJC3Ngu6lnMxBH6Oj02loiU0q7TS0xD5vP5RM6lvayUrQmEjCEoSZpQO/Aj4EVAIrzWyJu78Zp96PgKfjHOZid9+TqBhFRJJRepodGhSPUiJHSuYBm919i7u3AvcBV8ap9w/Ag8DuBMYiIiIDlMgEUQbsiHldGZYdYmZlwFXALXH2d+AZM1tlZjf29CZmdqOZVZhZRU1NzTEIW0REILEJIt7wevex/f8D/Fd3jzciM9/dTwcuB/7ezC6I9ybufqu7l7t7+ZgxY44qYBEROSyRg9SVwKSY1xOBqm51yoH7wku1RgNXmFm7u//J3asA3H23mT1M0GW1LIHxiohIjES2IFYCs8xsmpllAQuBJbEV3H2au
09196nAH4G/c/c/mVm+mRUCmFk+8GFgXQJjFRGRbhLWgnD3djNbTHB1UjrwW3dfb2aLwu3xxh26jAMeDlsWGcA97v5UomIVEZH3M+/rlr8hpLy83CsqKqIOQ0RkyDCzVT3dZzY0JgQREZHjbli1IMysBtg2yN1HA8l6U55iGxzFNjiKbXCGamxT3D3uJaDDKkEcDTOrSNbpPBTb4Ci2wVFsgzMcY1MXk4iIxKUEISIicSlBHHZr1AH0QrENjmIbHMU2OMMuNo1BiIhIXGpBiIhIXEoQIiISV8onCDNbYGZvmdlmM/tO1PHEMrOtZrbWzFabWeS3iJvZb81st5mtiykbaWbPmtnb4d8Brv6b0Ni+b2Y7w/O32syuiCCuSWb2nJltMLP1ZvaPYXnk562X2JLhvOWY2QozWxPG9j/C8mQ4bz3FFvl5i4kx3cxeN7PHwteDOm8pPQYRrma3iZhV74Bru696F5Vw2dXyZFlVL5xyvQH4nbufEpb9GNjn7jeFCXaEu//XJInt+0CDu//0eMcTE1cpUOrur4UTUK4CPgl8iYjPWy+xXU30582AfHdvMLNM4EXgH4FPEf156ym2BUR83rqY2TcJZssucvePDfbfaaq3IPq76p0A7r4M2Net+ErgzvD5nQRfMMddD7FFzt2r3f218Hk9sIFg4azIz1svsUXOAw3hy8zw4STHeesptqRgZhOBjwK3xRQP6ryleoLoc9W7iPVrVb2IjXP3agi+cICxEcfT3WIzeyPsgoqk+6uLmU0FTgNeJcnOW7fYIAnOW9hNsppgOeJn3T1pzlsPsUESnDeChdi+DXTGlA3qvKV6gujPqndR6teqetKjXwMzgLlANfCzqAIxswKCtde/4e51UcURT5zYkuK8uXuHu88lWGxsnpmdEkUc8fQQW+Tnzcw+Bux291XH4nipniD6s+pdZGJX1QO6VtVLNu+Ffdldfdq7I47nEHd/L/yH3An8BxGdv7Cf+kHgbnd/KCxOivMWL7ZkOW9d3P0A8DxBH39SnLcusbElyXmbD3wiHL+8D7jEzO5ikOct1RNEn6veRcWGzqp6S4Avhs+/CDwSYSxH6PoHEbqKCM5fOKD5n8AGd/95zKbIz1tPsSXJeRtjZiXh81zgMmAjyXHe4saWDOfN3b/r7hPDVToXAn91988z2PPm7in9AK4guJLpHeBfo44nJq7pwJrwsT4ZYgPuJWg6txG0vv4GGAX8BXg7/DsyiWL7PbAWeCP8B1IaQVznEXRbvgGsDh9XJMN56yW2ZDhvpwKvhzGsA74XlifDeesptsjPW7c4LwIeO5rzltKXuYqISM9SvYtJRER6oAQhIiJxKUGIiEhcShAiIhKXEoSIiMSlBCFJzcxeDv9ONbPrjvGx/yXeeyWKmX3SzL6XoGN/NpyV9TkzKzezm4/hsceY2VPH6ngydOgyVxkSzOwi4Fvu/rEB7JPu7h29bG9w94JjEF5/43kZ+IQf5ey88T5X+AX+I3d/7miO3ct73g7c5u4vJeL4kpzUgpCkZmZds2beBJwfzrP/T+FkaT8xs5Xh5GhfDetfFP6KvofgpiXM7E/hhIfruyY9NLObgNzweHfHvpcFfmJm6yxYj+OamGM/b2Z/NLONZnZ3eDcyZnaTmb0ZxvK+6Z7N7ASgpSs5mNkdZnaLmb1gZpvCOXS6JoHr1+eKOfb3CG56uyXc9yIze8zM0ixYU6Qkpu5mMxsXtgoeDN9npZnND7dfaIfXM3i9625+4E/A5wb/X1KGpCjv9NNDj74eBPPrQ8xdoeHrG4H/Fj7PBiqAaWG9RmBaTN2R4d9cgjtfR8UeO857fRp4FkgHxgHbgdLw2LUEc3alAcsJvphHAm9xuEVeEudz3AD8LOb1HcBT4XFmEdz9nTOQz9Xt+M8TrB1yxLkCfgHcED4/C/hz+Pwe4Lzw+WSC6TYAHiWYJBKgAMgIn5cBa6P+/0GP4/vI6DuFiCSlDwOnmtlnwtfFBF+0rcAKd383pu7Xzeyq8PmksN7eXo59HnCvB90475nZUuBMoC48diWABdM9TwVeAZqB28zsceCxOMcsBWq6lT3gwcRub5vZFmD2AD9Xf9wPfA+4nWBunvvD8suAk8MGEEBR2Fp4Cfh52Kp6qOuzEkzuNmGA7y1DnBKEDFUG/IO7P31EYTBW0djt9WXAOe7eZGbPE/xS7+vYPWmJed5B8Au73czmAZcSfAkvBi7ptt9Bgi/7WN0HAJ1+fq4BWA7MNLMxBIvE/K+wPI3gnBzsVv+mMMldAbxiZpe5+0aCc9a9rgxzGoOQoaIeKIx5/TTwNQumq8bMTghnve2uGNgfJofZwNkx29q69u9mGXBNOB4wBrgAWNFTYBasp1Ds7k8A3yBYD6C7DcDMbmWfDccJZhBMzvjWAD5Xv7i7E0wV/3OCbqSultMzBIms6zPMDf/OcPe17v4jgu6t2WGVE0jO2YQlgdSCkKHiDaDdzNYQ9N//gqB757VwoLiG+MsoPgUsMrM3CL6AX4nZdivwhpm95u6xA7APA+cQzKTrwLfdfVeYYOIpBB4xsxyCFsA/xamzDPiZmVn4pU0Yz1KCcY5F7t5sZrf183MNxP0EU9t/Kabs68CvwvOSEca3CPiGmV1M0Dp6E3gyrH8x8PhRxiFDjC5zFTlOzOwXwKPu/mczu4NgIPmPEYfVL2a2DLjS3fdHHYscP+piEjl+/g3IizqIgQq72X6u5JB61IIQEZG41IIQEZG4lCBERCQuJQgREYlLCUJEROJSghARkbj+Pxs8XNBMMNreAAAAAElFTkSuQmCC\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Parameters have been trained!\n" ] }, { "data": { "text/plain": [ "{'W1': ,\n", " 'b1': ,\n", " 'W2': ,\n", " 'b2': ,\n", " 'W3': ,\n", " 'b3': }" ] }, "execution_count": 70, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model(new_train, new_y_train, new_test, new_y_test, num_epochs=200)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected output**\n", "\n", "```\n", "Cost after epoch 0: 0.742591\n", "Cost after epoch 10: 0.614557\n", "Cost after epoch 20: 0.598900\n", "Cost after epoch 30: 0.588907\n", "Cost after epoch 40: 0.579898\n", "...\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Congratulations**! You've made it to the end of this assignment, and to the end of this week's material. Amazing work building a neural network in TensorFlow 2.3! \n", "\n", "Here's a quick recap of all you just achieved:\n", "\n", "- Used `tf.Variable` to modify your variables\n", "- Applied TensorFlow decorators and observed how they sped up your code\n", "- Trained a Neural Network on a TensorFlow dataset\n", "- Applied batch normalization for a more robust network\n", "\n", "You are now able to harness the power of TensorFlow's computational graph to create cool things, faster. Nice! " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 4 - Bibliography \n", "\n", "In this assignment, you were introducted to `tf.GradientTape`, which records operations for differentation. Here are a couple of resources for diving deeper into what it does and why: \n", "\n", "Introduction to Gradients and Automatic Differentiation: \n", "https://www.tensorflow.org/guide/autodiff \n", "\n", "GradientTape documentation:\n", "https://www.tensorflow.org/api_docs/python/tf/GradientTape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "oldHeight": 201.4, "position": { "height": "43px", "left": "1239px", "right": "20px", "top": "113px", "width": "279px" }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "varInspector_section_display": "none", "window_display": true } }, "nbformat": 4, "nbformat_minor": 4 }