{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Implementing Back Propagation\n", "\n", "For this recipe, we will show how to do TWO separate examples, a regression example, and a classification example.\n", "\n", "To illustrate how to do back propagation with TensorFlow, we start by loading the necessary libraries and resetting the computational graph." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import tensorflow as tf\n", "from tensorflow.python.framework import ops\n", "ops.reset_default_graph()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create a Graph Session" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "sess = tf.Session()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A Regression Example\n", "\n", "------------------------------\n", "\n", "We create a regression example as follows. The input data will be 100 random samples from a normal (mean of 1.0, stdev of 0.1). The target will be 100 constant values of 10.0.\n", "\n", "We will fit the regression model: `x_data * A = target_values`\n", "\n", "Theoretically, we know that A should be equal to 10.0.\n", "\n", "We start by creating the data and targets with their respective placholders" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "x_vals = np.random.normal(1, 0.1, 100)\n", "y_vals = np.repeat(10., 100)\n", "x_data = tf.placeholder(shape=[1], dtype=tf.float32)\n", "y_target = tf.placeholder(shape=[1], dtype=tf.float32)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now create the variable for our computational graph, `A`." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create variable (one model parameter = A)\n", "A = tf.Variable(tf.random_normal(shape=[1]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We add the model operation to the graph. This is just multiplying the input data by A to get the output." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Add operation to graph\n", "my_output = tf.multiply(x_data, A)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we have to specify the loss function. This will allow TensorFlow to know how to change the model variables. We will use the L2 loss function here. Note: to use the L1 loss function, change `tf.square()` to `tf.abs()`." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Add L2 loss operation to graph\n", "loss = tf.square(my_output - y_target)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we initialize all our variables. For specificity here, this is initializing the variable `A` on our graph with a random standard normal number." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Initialize variables\n", "init = tf.global_variables_initializer()\n", "sess.run(init)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We need to create an optimizing operations. Here we use the standard `GradientDescentOptimizer()`, and tell TensorFlow to minimize the loss. Here we use a learning rate of `0.02`, but feel free to experiment around with this rate, and see the learning curve at the end. However, note that learning rates that are too large will result in the algorithm not converging." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create Optimizer\n", "my_opt = tf.train.GradientDescentOptimizer(0.02)\n", "train_step = my_opt.minimize(loss)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Running the Regression Graph!\n", "\n", "Here we will run the regression computational graph for 100 iterations, printing out the A-value and loss every 25 iterations. We should see the value of A get closer and closer to the true value of 10, as the loss goes down." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Step #25 A = [ 5.40734911]\n", "Loss = [ 26.12220001]\n", "Step #50 A = [ 8.33076572]\n", "Loss = [ 7.52755737]\n", "Step #75 A = [ 9.41368389]\n", "Loss = [ 4.23305845]\n", "Step #100 A = [ 9.93231583]\n", "Loss = [ 1.10820949]\n" ] } ], "source": [ "# Run Loop\n", "for i in range(100):\n", " rand_index = np.random.choice(100)\n", " rand_x = [x_vals[rand_index]]\n", " rand_y = [y_vals[rand_index]]\n", " sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})\n", " if (i+1)%25==0:\n", " print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)))\n", " print('Loss = ' + str(sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Classification Example\n", "\n", "--------------------\n", "\n", "For the classification example, we will create an x-sample made of two different normal distribution inputs, `Normal(mean = -1, sd = 1)` and `Normal(mean = 3, sd = 1)`. For each of these the target will be the class `0` or `1` respectively.\n", "\n", "The model will fit the binary classification: If `sigmoid(x+A) < 0.5` then predict class `0`, else class `1`.\n", "\n", "Theoretically, we know that `A` should take on the value of the negative average of the two means: `-(mean1 + mean2)/2`.\n", "\n", "We start by resetting the computational graph:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": true }, "outputs": [], "source": [ "ops.reset_default_graph()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Start a graph session" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create graph\n", "sess = tf.Session()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We generate the data that we will feed into the graph. Note that the `x_vals` are the combination of two separate normals, and the y_vals are the combination of two separate constants (two classes).\n", "\n", "We also create the relevant placeholders for the model." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create data\n", "x_vals = np.concatenate((np.random.normal(-1, 1, 50), np.random.normal(3, 1, 50)))\n", "y_vals = np.concatenate((np.repeat(0., 50), np.repeat(1., 50)))\n", "x_data = tf.placeholder(shape=[1], dtype=tf.float32)\n", "y_target = tf.placeholder(shape=[1], dtype=tf.float32)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now create the one model variable, used for classification. We also set the initialization function, a random normal, to have a mean far from the expected theoretical value.\n", "\n", "- Initialized to be around 10.0\n", "- Theoretically around -1.0" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create variable (one model parameter = A)\n", "A = tf.Variable(tf.random_normal(mean=10, shape=[1]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we add the model operation to the graph. This will be the adding of the variable `A` to the data. Note that the `sigmoid()` is left out of this operation, because we will use a loss function that has it built in.\n", "\n", "We also have to add the batch dimension to each of the target and input values to use the built in functions." ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Add operation to graph\n", "# Want to create the operstion sigmoid(x + A)\n", "# Note, the sigmoid() part is in the loss function\n", "my_output = tf.add(x_data, A)\n", "\n", "# Now we have to add another dimension to each (batch size of 1)\n", "my_output_expanded = tf.expand_dims(my_output, 0)\n", "y_target_expanded = tf.expand_dims(y_target, 0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Add classification loss (cross entropy)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": true }, "outputs": [], "source": [ "xentropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=my_output_expanded, labels=y_target_expanded)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we declare the optimizer function. Here we will be using the standard gradient descent operator with a learning rate of `0.05`." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Create Optimizer\n", "my_opt = tf.train.GradientDescentOptimizer(0.05)\n", "train_step = my_opt.minimize(xentropy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we create an operation to initialize the variables and then run that operation" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Initialize variables\n", "init = tf.global_variables_initializer()\n", "sess.run(init)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Running the Classification Graph!\n", "\n", "Now we can loop through our classification graph and print the values of A and the loss values." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Step #200 A = [ 4.98884058]\n", "Loss = [[ 9.61972692e-05]]\n", "Step #400 A = [ 1.25237942]\n", "Loss = [[ 0.13927367]]\n", "Step #600 A = [-0.43401867]\n", "Loss = [[ 0.06842836]]\n", "Step #800 A = [-0.78914857]\n", "Loss = [[ 0.18796471]]\n", "Step #1000 A = [-1.01327193]\n", "Loss = [[ 0.03451081]]\n", "Step #1200 A = [-1.02339005]\n", "Loss = [[ 0.03584756]]\n", "Step #1400 A = [-0.94646472]\n", "Loss = [[ 0.03685168]]\n" ] } ], "source": [ "# Run loop\n", "for i in range(1400):\n", " rand_index = np.random.choice(100)\n", " rand_x = [x_vals[rand_index]]\n", " rand_y = [y_vals[rand_index]]\n", " \n", " sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})\n", " if (i+1)%200==0:\n", " print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)))\n", " print('Loss = ' + str(sess.run(xentropy, feed_dict={x_data: rand_x, y_target: rand_y})))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can also see how well we did at predicting the data by creating an accuracy function and evaluating them on the known targets." ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Ending Accuracy = 0.98\n" ] } ], "source": [ "# Evaluate Predictions\n", "predictions = []\n", "for i in range(len(x_vals)):\n", " x_val = [x_vals[i]]\n", " prediction = sess.run(tf.round(tf.sigmoid(my_output)), feed_dict={x_data: x_val})\n", " predictions.append(prediction[0])\n", " \n", "accuracy = sum(x==y for x,y in zip(predictions, y_vals))/100.\n", "print('Ending Accuracy = ' + str(np.round(accuracy, 2)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.5" } }, "nbformat": 4, "nbformat_minor": 2 }