{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "95354f8c477678abe1961a6c7e92d373", "grade": false, "grade_id": "cell-6add8fb517bf450e", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "# Part 2: Implementing an MLP classifier" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "e98069b3fc4364255bbf43f61ea7dcdc", "grade": false, "grade_id": "cell-46f0e98056392225", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "# Execute this code block to install dependencies when running on colab\n", "try:\n", " import torch\n", "except:\n", " from os.path import exists\n", " from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag\n", " platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())\n", " cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\\.\\([0-9]*\\)\\.\\([0-9]*\\)$/cu\\1\\2/'\n", " accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'\n", "\n", " !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-1.0.0-{platform}-linux_x86_64.whl torchvision\n", "\n", "try: \n", " import torchbearer\n", "except:\n", " !pip install torchbearer" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "0d119ab5340e86f17f558c4ac39d144a", "grade": false, "grade_id": "cell-f5db4e68a397f737", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "## Getting Started with a Baseline Multi-Layer Perceptron Model\n", "\n", "In addition to being a defacto tensor processing and automatic differentiation library, PyTorch provides a general purpose neural network toolbox. Before we start to look at deep convolutional architectures, we can start with something much simpler - a basic multilayer perceptron. Because the MNIST images are relatively small, a fully connected MLP network will have relatively few weights to train; with bigger images, an MLP might not be practical due to the number of weights.\n", "\n", "In this section we will create a simple multi-layer perceptron model with a single hidden layer that achieves an error rate of 1.88%. We will use this as a baseline for comparing more complex convolutional neural network models later.\n", "\n", "Let's start off by importing the classes and functions we will need." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "438bd2f2f83ff9c7556aad79f11a461d", "grade": false, "grade_id": "cell-6d036cd09e565ad0", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "import torch\n", "import torch.nn.functional as F\n", "import torchvision.transforms as transforms\n", "from torch import nn\n", "from torch import optim\n", "from torch.utils.data import DataLoader\n", "from torchvision.datasets import MNIST" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "5f5c4769adb2c33437c8a4eb54d7bd1b", "grade": false, "grade_id": "cell-e58de1bdcfa00145", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "When developing, it is always a good idea to initialize the random number generator to a constant to ensure that the results of your script are reproducible each time you run it. 
Once the model is implemented and tested, we might remove the seeding so that the variance across multiple training runs can be measured." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "add29d09a6d33468d43d434ce137ce4c", "grade": false, "grade_id": "cell-4a955106f453e11e", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "# fix random seed for reproducibility\n", "seed = 7\n", "torch.manual_seed(seed)\n", "torch.backends.cudnn.deterministic = True\n", "torch.backends.cudnn.benchmark = False\n", "import numpy as np\n", "np.random.seed(seed)" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "09c2bd3128bb5efb31eb3ecf621ab156", "grade": false, "grade_id": "cell-360d3101d4a6a192", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "## Loading the MNIST data\n", "\n", "For the MLP network that we'll shortly define, we need a transformation of the data provided by the `MNIST` object that flattens each image into a vector. Before that happens, our transformation must also convert each image from a PIL image to a PyTorch `Tensor`. We can use the classes provided by the `torchvision.transforms` package to compose a set of transforms that, given a PIL image, will convert it to a `Tensor` and reshape that tensor into a vector:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "751bb0c3298d0e6d999dd85a448f734b", "grade": false, "grade_id": "cell-cb6a61ef1807b97c", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "# flatten 28*28 images to a 784 vector for each image\n", "transform = transforms.Compose([\n", "    transforms.ToTensor(),                   # convert PIL image to tensor (pixel values scaled to [0, 1])\n", "    transforms.Lambda(lambda x: x.view(-1))  # flatten into vector\n", "])" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "a7128b572c5b3e57bebfb1d451c92439", "grade": false, "grade_id": "cell-4310f448a194990b", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "Note that for the latter part of the transform we use `Lambda`, which allows us to construct a custom transform (rather than using one provided by torchvision). In this case our lambda function returns a new view of the input tensor, which is a vector of length equal to the product of the input tensor's dimensions. Here the input tensor has shape `[28, 28]`, so the output will have shape `[784]`. Passing `-1` to the `view` method makes it infer that dimension's size automatically from the number of elements in the input.\n"
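,
    "\n",
    "As a quick sanity check (a sketch, separate from the lab code), you can confirm the effect of `view(-1)` on a dummy tensor with the same shape as an MNIST image:\n",
    "\n",
    "```python\n",
    "t = torch.rand(28, 28)  # stand-in for a single image tensor\n",
    "v = t.view(-1)          # -1 infers the length from the element count: 28*28 = 784\n",
    "print(v.shape)          # torch.Size([784])\n",
    "```"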
, "\n", "Now we can load the training and test splits of the MNIST dataset using the torchvision `MNIST` utility class. We can also provide our `transform` object so that it will be applied automatically when the data loads." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "33578b54f937770aa5fde263ed002cac", "grade": false, "grade_id": "cell-a540569d1b188246", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "trainset = MNIST(\".\", train=True, download=True, transform=transform)\n", "testset = MNIST(\".\", train=False, download=True, transform=transform)" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "b60571c8f2deca9d3c9fd8748e7977d3", "grade": false, "grade_id": "cell-0f00ba68c3ebd2c3", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "When we train and evaluate our model, we will need the data to be provided in batches. As previously mentioned, the `DataLoader` class can automatically provide batches fetched from a `Dataset` object. The `DataLoader` can fetch batches in the background (using multiple worker processes if required) to optimise throughput to the deep network model. It also supports shuffling of the data order (very useful for gradient-based learning!)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "f1ef180171d80ac56c32f91ffc1667fe", "grade": false, "grade_id": "cell-c847b428a365a8fb", "locked": true, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "# create data loaders\n", "trainloader = DataLoader(trainset, batch_size=128, shuffle=True)\n", "testloader = DataLoader(testset, batch_size=128, shuffle=True)" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "7651ad04347043d64356ee5f2b943684", "grade": false, "grade_id": "cell-705fe000588218c5", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "DataLoaders are iterable and can be used within a loop to fetch batches of data. Each batch is a tuple containing the images in the first element and the labels in the second. __Complete the following code block to calculate the number of instances of each class by iterating over the training data__:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "checksum": "1e0e833370653c8e9a724f3476ea0302", "grade": false, "grade_id": "cell-cd6a3d883391addc", "locked": false, "schema_version": 1, "solution": true } }, "outputs": [], "source": [ "class_counts = torch.zeros(10, dtype=torch.int32)\n", "\n", "for (images, labels) in trainloader:\n", "    # YOUR CODE HERE\n", "    raise NotImplementedError()\n", "\n", "print(class_counts)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "7d033c8be175589387360e1d856b91b6", "grade": true, "grade_id": "cell-8e6586ecf9f2b748", "locked": true, "points": 3, "schema_version": 1, "solution": false } }, "outputs": [], "source": [ "assert class_counts.sum()==60000" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "48f410cc3673297634aa8bbcfb84fdc9", "grade": false, "grade_id": "cell-759ef8481db756ca", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "__Answer the following questions (enter the answer in the box below each one):__\n", "\n", "__1.__ Do all of the batches have the same size?"
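,
    "\n",
    "As a sketch of one way to investigate, you could record every batch size the `trainloader` produces over an epoch and inspect the distinct values:\n",
    "\n",
    "```python\n",
    "sizes = set()\n",
    "for images, labels in trainloader:\n",
    "    sizes.add(images.shape[0])  # number of images in this batch\n",
    "print(sizes)\n",
    "```"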
] }, { "cell_type": "markdown", "metadata": { "deletable": false, "nbgrader": { "checksum": "cc25cac17fd079fb98fa6240cacfcf3a", "grade": true, "grade_id": "cell-d7476483eabfb245", "locked": false, "points": 1, "schema_version": 1, "solution": true } }, "source": [ "YOUR ANSWER HERE" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "1022cbf8a7c9ef754b5ea4c8def0e0c6", "grade": false, "grade_id": "cell-29abc8adeede2c0b", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "__2.__ How can we configure `DataLoader` to control this?" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "nbgrader": { "checksum": "9f6f03a41e4aee72c044c6cb523d1246", "grade": true, "grade_id": "cell-c87b3cc38b91cb03", "locked": false, "points": 1, "schema_version": 1, "solution": true } }, "source": [ "YOUR ANSWER HERE" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "dcf7111376d81a83638025a7dec1cc88", "grade": false, "grade_id": "cell-6d11f32b10eb427e", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "## Defining the MLP Model\n", "\n", "We are now ready to create our simple neural network model. We will define our model in a class that extends `nn.Module`. Subclasses of `nn.Module` must, at a minimum, implement the `forward` method, which takes a batch of data and performs the forward pass. PyTorch's autograd system will take care of computing the gradients of the forward pass for us. In the code below we'll also make use of the constructor of our model to instantiate the hidden and output layers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# define baseline model\n", "class BaselineModel(nn.Module):\n", "    def __init__(self, input_size, hidden_size, num_classes):\n", "        super(BaselineModel, self).__init__()\n", "        self.fc1 = nn.Linear(input_size, hidden_size)\n", "        self.fc2 = nn.Linear(hidden_size, num_classes)\n", "\n", "    def forward(self, x):\n", "        out = self.fc1(x)\n", "        out = F.relu(out)\n", "        out = self.fc2(out)\n", "        if not self.training:\n", "            out = F.softmax(out, dim=1)\n", "        return out" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "d9b2a915cb2a149ac1415ff18351cc2e", "grade": false, "grade_id": "cell-e92861120f90c294", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "The model is a simple neural network with one hidden layer. A rectified linear unit (ReLU) activation function is used for the neurons in the hidden layer. Note how we use the PyTorch 'Functional API' for stateless operations like applying the ReLU; we could construct a `torch.nn.ReLU` in the constructor and save it in an instance variable instead, but ultimately this would lead to more lines of code.\n", "\n",
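"If you prefer the module style, an equivalent definition of the same network would look like the sketch below (`alt_model` is just an illustrative name, and the evaluation-time softmax described next is omitted):\n",
    "\n",
    "```python\n",
    "alt_model = nn.Sequential(\n",
    "    nn.Linear(784, 784),  # hidden layer, as in BaselineModel(784, 784, 10)\n",
    "    nn.ReLU(),            # activation stored as a module rather than calling F.relu\n",
    "    nn.Linear(784, 10),   # output layer\n",
    ")\n",
    "```\n",
    "\n",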
"The `nn.Module` class defines an instance variable called `training` that is set to `True` when the model is being trained and `False` when it is being evaluated after training. In our model definition we've used a softmax activation function on the output layer to turn the outputs into probability-like values, but we have only enabled this when we are not training the model. We've done this because we will use PyTorch's implementation of Cross Entropy Loss (`nn.CrossEntropyLoss`) during training, which implicitly adds a softmax before a logarithmic loss (technically it adds a log-softmax activation followed by a negative log-likelihood loss, as this has much better numerical properties). If you've used a library like Keras before, you should note that this approach is different to the one taken there - Keras expects you to explicitly add a `Softmax` layer to use with its `categorical_crossentropy` loss.\n", "\n", "In our case the softmax isn't actually necessary for model evaluation if we're only interested in the most likely class; the _logits_ (unscaled log probabilities) produced by the final fully connected layer can be used directly, as the largest logit corresponds to the most likely class.\n", "\n", "## Training and Evaluating the Model\n", "We can now fit and evaluate the model. One of the design decisions of PyTorch is that everything should be explicit, so we have full control over our models and the training process. This means that we need to write the training loop by hand, performing each step explicitly: the forward pass, the loss computation, the backward pass, and the weight update. In the code below we'll fit the model to the data over 10 epochs using batches of 128 images provided by the `DataLoader` defined previously. We'll use the Adam optimiser, as it tends to work well in practice despite its known limitations. Running the code below should take just a couple of minutes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# build the model\n", "model = BaselineModel(784, 784, 10)\n", "\n", "# define the loss function and the optimiser\n", "loss_function = nn.CrossEntropyLoss()\n", "optimiser = optim.Adam(model.parameters())\n", "\n", "# the epoch loop\n", "for epoch in range(10):\n", "    running_loss = 0.0\n", "    for data in trainloader:\n", "        # get the inputs\n", "        inputs, labels = data\n", "\n", "        # zero the parameter gradients\n", "        optimiser.zero_grad()\n", "\n", "        # forward + loss + backward + optimise (update weights)\n", "        outputs = model(inputs)\n", "        loss = loss_function(outputs, labels)\n", "        loss.backward()\n", "        optimiser.step()\n", "\n", "        # keep track of the loss this epoch\n", "        running_loss += loss.item()\n", "    print(\"Epoch %d, loss %4.2f\" % (epoch, running_loss))\n", "print('**** Finished Training ****')" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "2462a62df8582d5597cae7ea8b0a4b79", "grade": false, "grade_id": "cell-bc1ac2c5ef9f4cc8", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "In the above we added a crude indicator of progress by printing the total loss at the end of each epoch. Hopefully the loss went down as the model learned!\n", "\n", "At this point it would be good to compute the overall accuracy on the test set. __Use the following code block to finish implementing the accuracy computation.__ Note that before the code you'll implement we've made a call to `model.eval()` - this puts the model into evaluation mode, switching off training-only behaviour such as dropout (and, in our model, enabling the softmax). Be aware that `model.eval()` does not disable gradient tracking; that is done separately with the `torch.no_grad()` context manager."
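,
    "\n",
    "For reference, the usual inference pattern combines the two; here is a sketch on a single batch from the `testloader` (not the full accuracy computation, which you should write yourself below):\n",
    "\n",
    "```python\n",
    "model.eval()\n",
    "with torch.no_grad():                  # disable gradient tracking for inference\n",
    "    images, labels = next(iter(testloader))\n",
    "    outputs = model(images)            # class probabilities (softmax applies in eval mode)\n",
    "    predicted = outputs.argmax(dim=1)  # most likely class for each image\n",
    "```"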
] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "checksum": "bf7f8d7a58c1000cdc77b53eb16b3e74", "grade": true, "grade_id": "cell-13268e61111fdb7e", "locked": false, "points": 5, "schema_version": 1, "solution": true } }, "outputs": [], "source": [ "model.eval()\n", "\n", "# Compute the model accuracy on the test set\n", "correct = 0\n", "total = 0\n", "\n", "# YOUR CODE HERE\n", "raise NotImplementedError()\n", "\n", "print('Test Accuracy: %2.2f %%' % ((100.0 * correct) / total))" ] }, { "cell_type": "markdown", "metadata": { "deletable": false, "editable": false, "nbgrader": { "checksum": "bc70fc40020d8fdecbe95f65eddc8d8d", "grade": false, "grade_id": "cell-8c5589e8e96c3bff", "locked": true, "schema_version": 1, "solution": false } }, "source": [ "__In multi-class classification tasks it is often instructive to explore the accuracy of each class. Use the code block below to complete the code that produces the per-class accuracy:__" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "nbgrader": { "checksum": "3f15c7920dfc01dc1f8a11c1b359e22b", "grade": true, "grade_id": "cell-64fb2d2effc2543d", "locked": false, "points": 5, "schema_version": 1, "solution": true } }, "outputs": [], "source": [ "# Compute the per-class accuracy on the test set\n", "class_correct = torch.zeros(10)\n", "class_total = torch.zeros(10)\n", "\n", "# YOUR CODE HERE\n", "raise NotImplementedError()\n", "\n", "for i in range(10):\n", "    print('Class %d accuracy: %2.2f %%' % (i, 100.0 * class_correct[i] / class_total[i]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }