{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# TensorFlow Assignment: Multilayer Perceptron (MLP) and Convolutional Neural Network (CNN)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**[Duke Community Standard](http://integrity.duke.edu/standard.html): By typing your name below, you are certifying that you have adhered to the Duke Community Standard in completing this assignment.**\n", "\n", "Name: [YOUR_NAME_HERE]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that you've run through a simple logistic regression model on MNIST, let's see if we can do better (Hint: we can). For this assignment, you'll build a multilayer perceptron (MLP) and a convolutional neural network (CNN), two popular types of neural networks, and compare their performance. Some potentially useful code:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import tensorflow as tf\n", "from tensorflow.examples.tutorials.mnist import input_data\n", "\n", "# Import data\n", "mnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Helper functions for creating weight variables\n", "def weight_variable(shape):\n", " \"\"\"weight_variable generates a weight variable of a given shape.\"\"\"\n", " initial = tf.truncated_normal(shape, stddev=0.1)\n", " return tf.Variable(initial)\n", "\n", "def bias_variable(shape):\n", " \"\"\"bias_variable generates a bias variable of a given shape.\"\"\"\n", " initial = tf.constant(0.1, shape=shape)\n", " return tf.Variable(initial)\n", "\n", "# Tensorflow Functions that might also be of interest:\n", "# tf.nn.sigmoid()\n", "# tf.nn.relu()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Multilayer Perceptron\n", "\n", "Build a multilayer perceptron for MNIST digit classfication. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Multilayer Perceptron\n", "\n", "Build a multilayer perceptron for MNIST digit classification. Feel free to play around with the model architecture and see how the training time/performance changes, but to begin, try the following:\n", "\n", "Image -> fully connected (500 hidden units) -> nonlinearity (sigmoid/ReLU) -> fully connected (10 hidden units) -> softmax\n", "\n", "Skeleton framework for you to fill in (the code you need to provide is marked by `###`):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Model inputs\n", "x = ### MNIST images enter graph here ###\n", "y_ = ### MNIST labels enter graph here ###\n", "\n", "# Define the graph\n", "\n", "### Create your MLP here ###\n", "### Make sure to name your MLP output as y_mlp ###\n", "\n", "# Loss\n", "cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_mlp))\n", "\n", "# Optimizer\n", "train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)\n", "\n", "# Evaluation\n", "correct_prediction = tf.equal(tf.argmax(y_mlp, 1), tf.argmax(y_, 1))\n", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n", "\n", "with tf.Session() as sess:\n", "    # Initialize all variables\n", "    sess.run(tf.global_variables_initializer())\n", "\n", "    # Training regimen\n", "    for i in range(4000):\n", "        # Validate every 250th minibatch\n", "        if i % 250 == 0:\n", "            validation_accuracy = 0\n", "            for v in range(10):\n", "                batch = mnist.validation.next_batch(100)\n", "                validation_accuracy += (1/10) * accuracy.eval(feed_dict={x: batch[0], y_: batch[1]})\n", "            print('step %d, validation accuracy %g' % (i, validation_accuracy))\n", "\n", "        # Train\n", "        batch = mnist.train.next_batch(50)\n", "        train_step.run(feed_dict={x: batch[0], y_: batch[1]})\n", "\n", "    print('test accuracy %g' % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))" ] },
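{ "cell_type": "markdown", "metadata": {}, "source": [ "Before answering the comparison question below, it may help to look at the two nonlinearities side by side. A minimal, self-contained sketch (the tensor `z` is an illustrative stand-in for a layer's pre-activation, `tf.matmul(x, W) + b`):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Illustrative sketch only: the two nonlinearities applied to the same pre-activations.\n", "z = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])\n", "h_sigmoid = tf.nn.sigmoid(z)  # squashes into (0, 1); saturates for large |z|, so gradients can vanish\n", "h_relu = tf.nn.relu(z)        # max(0, z); gradient is 1 wherever z > 0\n", "\n", "with tf.Session() as sess:\n", "    print(sess.run([h_sigmoid, h_relu]))" ] },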
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Comparison\n", "\n", "How do the sigmoid and rectified linear unit (ReLU) nonlinearities compare?" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "***\n", "\n", "[Your answer here]\n", "\n", "***" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Convolutional Neural Network\n", "\n", "Build a simple 2-layer CNN for MNIST digit classification. Feel free to play around with the model architecture and see how the training time/performance changes, but to begin, try the following:\n", "\n", "Image -> CNN (32 5x5 filters) -> nonlinearity (ReLU) -> (2x2 max pool) -> CNN (64 5x5 filters) -> nonlinearity (ReLU) -> (2x2 max pool) -> fully connected (1024 hidden units) -> nonlinearity (ReLU) -> fully connected (10 hidden units) -> softmax\n", "\n", "Some additional functions that you might find helpful:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Convolutional neural network functions\n", "def conv2d(x, W):\n", "    \"\"\"conv2d returns a 2d convolution layer with full stride.\"\"\"\n", "    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')\n", "\n", "def max_pool_2x2(x):\n", "    \"\"\"max_pool_2x2 downsamples a feature map by 2x.\"\"\"\n", "    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')\n", "\n", "# TensorFlow function that might also be of interest:\n", "# tf.reshape()" ] },
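{ "cell_type": "markdown", "metadata": {}, "source": [ "As an illustration of the shapes these functions expect, here is a sketch of reshaping flattened MNIST vectors into images and applying a first convolutional layer. The names `x_flat`, `x_image`, `W_conv1`, `h_conv1`, and `h_pool2` are purely illustrative, not part of the skeleton below:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Illustrative sketch only: getting tensor shapes right for conv2d.\n", "# x_flat stands in for a batch of flattened MNIST images.\n", "x_flat = tf.placeholder(tf.float32, [None, 784])\n", "x_image = tf.reshape(x_flat, [-1, 28, 28, 1])  # [batch, height, width, channels]\n", "\n", "# conv2d expects a kernel shaped [filter_height, filter_width, in_channels, out_channels]\n", "W_conv1 = weight_variable([5, 5, 1, 32])\n", "h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + bias_variable([32]))\n", "\n", "# After two rounds of 2x2 max pooling, 28x28 maps shrink to 7x7, so flattening\n", "# before the fully connected layers might look like:\n", "# h_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])" ] },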
{ "cell_type": "markdown", "metadata": {}, "source": [ "Skeleton framework for you to fill in (the code you need to provide is marked by `###`):\n", "\n", "*Hint: Convolutional neural networks are spatial models. You'll want to transform the flattened MNIST vectors into images for the CNN, and you'll want to flatten the feature maps back into vectors before the final softmax. You also might want to look at the [conv2d() documentation](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d) to see what shape kernel/filter it expects; the sketch above shows the idea.*" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Model inputs\n", "x = ### MNIST images enter graph here ###\n", "y_ = ### MNIST labels enter graph here ###\n", "\n", "# Define the graph\n", "\n", "### Create your CNN here ###\n", "### Make sure to name your CNN output as y_conv ###\n", "\n", "# Loss\n", "cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))\n", "\n", "# Optimizer\n", "train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)\n", "\n", "# Evaluation\n", "correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))\n", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n", "\n", "with tf.Session() as sess:\n", "    # Initialize all variables\n", "    sess.run(tf.global_variables_initializer())\n", "\n", "    # Training regimen\n", "    for i in range(10000):\n", "        # Validate every 250th minibatch\n", "        if i % 250 == 0:\n", "            validation_accuracy = 0\n", "            for v in range(10):\n", "                batch = mnist.validation.next_batch(50)\n", "                validation_accuracy += (1/10) * accuracy.eval(feed_dict={x: batch[0], y_: batch[1]})\n", "            print('step %d, validation accuracy %g' % (i, validation_accuracy))\n", "\n", "        # Train\n", "        batch = mnist.train.next_batch(50)\n", "        train_step.run(feed_dict={x: batch[0], y_: batch[1]})\n", "\n", "    print('test accuracy %g' % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Some differences from the logistic regression model to note:\n", "\n", "- The CNN model might take a while to train. Depending on your machine, this could take up to half an hour. If you see your validation performance start to plateau, you can kill the training.\n", "\n", "- The logistic regression model we used previously was pretty basic, so we were able to get away with the GradientDescentOptimizer, which implements the basic gradient descent algorithm. For more difficult optimization landscapes (such as the ones deep networks pose), we might want more sophisticated algorithms; here we use the AdamOptimizer. Prof. David Carlson has a lecture on this later.\n", "\n", "- Because of the larger size of our network, notice that our minibatch size has shrunk.\n", "\n", "- We've added a validation step every 250 minibatches. This lets us see how our model is doing during training, rather than sitting around twiddling our thumbs and hoping for the best until training finishes. This becomes especially significant as training regimens start approaching days or weeks in length. Normally, we validate on the entire validation set, but for the sake of time we'll stick to 10 validation minibatches (500 images) for this homework assignment." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Comparison\n", "\n", "How do the MLP and CNN compare in accuracy? In training time? Why would you use one versus the other? Is there a problem you see with MLPs when applied to other image datasets?" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "***\n", "\n", "[Your answer here]\n", "\n", "***" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 2 }