{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "

# MNIST digit recognition with LeNet
\n", "\n", "In this practical session we will build a convolutional neural network that is able to recognise the digits 0-9 in images using TensorFlow and Keras.\n", "\n", "You can run the code in a cell by selecting the cell and pressing Shift+Enter.\n", "\n", "

## 1) Import statements
\n", "First, import some of the packages we will need (run the cell below).\n", "\n", "Documentation for each of these packages can be found online:
\n", "For numpy: https://docs.scipy.org/doc/numpy-dev/user/quickstart.html
\n", "For matplotlib: http://matplotlib.org/api/pyplot_api.html
\n", "For Keras: https://keras.io/
\n", "For random: https://docs.python.org/2/library/random.html
" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using TensorFlow backend.\n" ] } ], "source": [ "import pickle\n", "import gzip\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "\n", "import tensorflow as tf\n", "import keras\n", "\n", "import time\n", "import random\n", "random.seed(0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## 2) Loading the data
\n", "Download the data from: http://deeplearning.net/data/mnist/mnist.pkl.gz and save it somewhere on your disc. The function below loads the data from the location where you have saved it (path) and stores it in numpy arrays. The data is already split in a train set, a validation set and a test set. Each of these three sets are saved in two separate variables, one containing the labels and one containing the images. The labels are lists of numbers between 0 and 9. The images are 4-dimensional arrays (of the same length) with the image dimensions in the last 2 dimensions.\n", "\n", "Change the path in the second cell below to the location where you have saved it and run the two cells." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "def loadMNIST(path):\n", " f = gzip.open(path, 'rb')\n", " train_set, valid_set, test_set = pickle.load(f, encoding='latin1')\n", " f.close()\n", " \n", " train_set_labels = train_set[1]\n", " train_set_images = np.resize(train_set[0],(len(train_set_labels),28,28,1))\n", " train_set_images = np.pad(train_set_images,((0,0),(2,2),(2,2),(0,0)),'constant', constant_values=0)\n", " \n", " valid_set_labels = valid_set[1]\n", " valid_set_images = np.resize(valid_set[0],(len(valid_set_labels),28,28,1))\n", " valid_set_images = np.pad(valid_set_images,((0,0),(2,2),(2,2),(0,0)),'constant', constant_values=0)\n", "\n", " test_set_labels = test_set[1]\n", " test_set_images = np.resize(test_set[0],(len(test_set_labels),28,28,1))\n", " test_set_images = np.pad(test_set_images,((0,0),(2,2),(2,2),(0,0)),'constant', constant_values=0)\n", " \n", " return train_set_labels, train_set_images, valid_set_labels, valid_set_images, test_set_labels, test_set_images" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "train_set_labels, train_set_images, valid_set_labels, valid_set_images, test_set_labels, test_set_images = loadMNIST(r'/Users/mitko/mnist.pkl.gz')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## 3) Visualising the data
\n", "Let's look at the data we've just loaded! \n", "\n", "How many samples are in each set? (Use .shape to see the dimensions) \n", "\n", "How large are the images? \n", "\n", "How many samples are there for each of the 10 digits? \n", "\n", "Show some of the images with plt.imshow (use cmap='gray_r' for black digits on a white background and interpolation='none' to see the real pixels), you can access one of the training images as: train_set_images[i,:,:,0]." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## 4) One-hot encoding
\n", "Convert the labels from a number between 0 and 9 to 'one-hot encoding'. This means that for a label with number 3, there should be a 1 at element 3 and 0 everywhere else, i.e. [0, 0, 0, 1, 0, 0, 0, 0 ,0 ,0]. These are our target nodes, the node at position 3 should be active when the input image shows a 3. The code below does this for the training labels. \n", "\n", "Do the same for the validation labels!" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[5 0 4 1 9 2 1 3 1 4]\n", "[[0 0 0 0 0 1 0 0 0 0]\n", " [1 0 0 0 0 0 0 0 0 0]\n", " [0 0 0 0 1 0 0 0 0 0]\n", " [0 1 0 0 0 0 0 0 0 0]\n", " [0 0 0 0 0 0 0 0 0 1]\n", " [0 0 1 0 0 0 0 0 0 0]\n", " [0 1 0 0 0 0 0 0 0 0]\n", " [0 0 0 1 0 0 0 0 0 0]\n", " [0 1 0 0 0 0 0 0 0 0]\n", " [0 0 0 0 1 0 0 0 0 0]]\n" ] } ], "source": [ "train_set_labels_output = np.zeros((len(train_set_labels),10),dtype=np.int16) \n", "for n in range(10):\n", " train_set_labels_output[:,n] = train_set_labels==n\n", "\n", "print(train_set_labels[:10])\n", "print(train_set_labels_output[:10])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## 5) Building the network
\n", "The function below builds the LeNet network as we looked at in the lecture. Using the Sequential model from Keras, new layers can be assed using .add. The output_shape statements show the dimensions after the current layer. \n", "\n", "Can you recognise all the elements of the network from the lecture?" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(None, 32, 32, 1)\n", "(None, 28, 28, 6)\n", "(None, 14, 14, 6)\n", "(None, 10, 10, 16)\n", "(None, 5, 5, 16)\n", "(None, 400)\n", "(None, 120)\n", "(None, 84)\n", "(None, 10)\n" ] } ], "source": [ "cnn = keras.models.Sequential()\n", "\n", "layer0 = keras.layers.Conv2D(6, (5, 5), activation='relu', input_shape=(32, 32, 1))\n", "cnn.add(layer0)\n", "print(layer0.input_shape)\n", "print(layer0.output_shape)\n", "\n", "layer1 = keras.layers.MaxPooling2D(pool_size=(2, 2))\n", "cnn.add(layer1)\n", "print(layer1.output_shape)\n", "\n", "layer2 = keras.layers.Conv2D(16, (5, 5), activation='relu')\n", "cnn.add(layer2)\n", "print(layer2.output_shape)\n", "\n", "layer3 = keras.layers.MaxPooling2D(pool_size=(2, 2))\n", "cnn.add(layer3)\n", "print(layer3.output_shape)\n", "\n", "layer4 = keras.layers.Flatten() \n", "cnn.add(layer4)\n", "print(layer4.output_shape)\n", "\n", "layer5 = keras.layers.Dense(120, activation='relu')\n", "cnn.add(layer5)\n", "print(layer5.output_shape)\n", "\n", "layer6 = keras.layers.Dense(84, activation='relu')\n", "cnn.add(layer6)\n", "print(layer6.output_shape)\n", "\n", "layer7 = keras.layers.Dense(10, activation='softmax')\n", "cnn.add(layer7)\n", "print(layer7.output_shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## 6) Define optimiser and loss function
\n", "We will use stochastic gradient descent with momentum as optimiser and negative log likelihood (called categorical cross-entropy in keras) as loss function." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "sgd = keras.optimizers.SGD(lr=0.001, momentum=0.9)\n", "cnn.compile(loss='categorical_crossentropy', optimizer=sgd)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## 7) Training the network
\n", "Do the training in random batches of a specific number of samples (we set the values below to 250 batches of 100 samples). \n", "\n", "Use random.sample(a,n) to select a random batch of n samples from array a. \n", "\n", "Next, use the cnn.train_on_batch(X,Y) function to perform an update of the network based on a random batch of training images X and training labels Y. Remember to use the one-hot encoding of the training labels!\n", "\n", "The train function returns the loss. Save the loss of each training batch in the variable 'losslist' so we can look at them later (you can use .append() to add the current loss to the list).\n", "\n", "Also keep track of the loss for random batches from the validation set to see if your network is not overfitting on the training set. You can use cnn.test_on_batch(X,Y) to compute the loss on the validation set (without doing an update).\n", "\n", "Remember that if you restart the training process from the beginning you also need to reinitialise the network by running the cells starting from 5) again." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "trainingsamples = list(range(len(train_set_labels))) #numbers from 0 until the number of samples\n", "validsamples = list(range(len(valid_set_labels)))\n", "\n", "minibatches = 250\n", "minibatchsize = 100 \n", "\n", "losslist = []\n", "validlosslist = []\n", "\n", "t0 = time.time()\n", "\n", "for i in range(minibatches):\n", " #select random training en validation samples and perform training and validation steps here. \n", "\n", "t1 = time.time()\n", "print('Training time: {} seconds'.format(t1-t0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## 8) Loss curves
\n", "Plot the loss curves for the training and validation sets (use plt.plot(losslist) for the training loss). \n", "\n", "Is 250 batches enough to train the network? How many do we need? \n", "\n", "What happens if you change the learning rate in 6)? \n", "\n", "What happens if you change the minibatchsize? \n", "\n", "What happens if you use another optimizer? \n", "\n", "Try to get the loss as low as possible! \n", "\n", "What happens if you make changes to the network? Use for example more or less filters or nodes, remove a layer, etc." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## 9) Evaluation on the test set
\n", "Evaluate the network on the test set with the cnn.predict(X, batch_size=10000) function. Depending on the available memory the batch size can be as large as the whole test set (10 000 in this case). You can use np.argmax() to select the node with the highest probability. \n", "\n", "How well did it do? How many of the 10 000 test samples did it label correctly? \n", "\n", "Have a look at which of the samples that it did not label correctly. Look at which label it selected and which label it should have selected. Can you see why it made the error?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## 10) Visualising what the network has learned
\n", "To see what is happening within the network we can visualise the learned filters and their feature maps. We now define additional functions that obtain the feature maps after each layer." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "layer0f = keras.backend.function([keras.backend.learning_phase(), cnn.layers[0].input], [cnn.layers[0].output])\n", "layer1f = keras.backend.function([keras.backend.learning_phase(), cnn.layers[0].input], [cnn.layers[1].output])\n", "layer2f = keras.backend.function([keras.backend.learning_phase(), cnn.layers[0].input], [cnn.layers[2].output])\n", "layer3f = keras.backend.function([keras.backend.learning_phase(), cnn.layers[0].input], [cnn.layers[3].output])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## 11) Visualising the features
\n", "Let's look at the feature maps after the first layer for one of the images from the test set. We have defined the function 'layer0f' for that. You can apply it to the test images with layer0f([cnn,test_set_images]).\n", "\n", "Look at the shape of the output of this function. \n", "\n", "Visualise the 6 features maps for some of the 10 000 test samples with plt.imshow" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## 12) Visualising the filters
\n", "Let's look at the filters that are learned. We can use the function 'get_weights' for that. These are the filters that are applied to the images to obtain the feature maps that we saw above.\n", "\n", "Look at the shape of the filters and biases.\n", "\n", "Visualise the 6 filters of the first layer with plt.imshow. Do you see any structure in the learned filters?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

## 13) Saving a trained model to disc
\n", "Training a network can often take a very long time. It's therefore useful to be able save a trained model to disc. You can use the functions below to save and load a trained network. This is especially useful for the project.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "cnn.save(r'D:\\trained_network.h5')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "cnn = keras.models.load_model(r'D:\\trained_network.h5')" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8" } }, "nbformat": 4, "nbformat_minor": 1 }