{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "MHvuWIUHdIpt" }, "source": [ "Adapted by Tias Guns , based (very heavily) on:\n", "\n", "## Handwritten Digit Recognition\n", "- Author = Amitrajit Bose\n", "- Dataset = MNIST\n", "- [Medium Article Link](https://medium.com/@amitrajit_bose/handwritten-digit-mnist-pytorch-977b5338e627)\n", "- Frameworks = PyTorch\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "oGjRmijsaXJ3" }, "source": [ "### Installation\n", "\n", "Recommended installation instructions:\n", "https://pytorch.org/get-started/locally\n", "\n", "This typically involves installing python3, python3-numpy, python3-matplotlib through an installer (anaconda) or system manager (apt), then installing torch and torchvision from python through conda or pip." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "TOyGrPT5ASDc" }, "outputs": [], "source": [ "# Import necessary packages\n", "%matplotlib inline\n", "%config InlineBackend.figure_format = 'retina'\n", "\n", "import os\n", "import numpy as np\n", "import torch\n", "import torchvision\n", "import matplotlib.pyplot as plt\n", "from time import time" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "uLdtrS4zaeEs" }, "source": [ "### Download The Dataset & Define The Transforms\n", "\n", "It is best to do this before the practical; + it allows you to test whether your setup works!\n", "\n", "It will download the files into the __same directory as where you stored this notebook__.\n", "\n", "It will take a minute or two. It will only download it once, so you can rerun this cell over and over." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 119 }, "colab_type": "code", "id": "sZD2NGz2Ak6w", "outputId": "74eec0da-d867-406b-be2c-4013a2162bf1" }, "outputs": [], "source": [ "from torchvision import datasets, transforms\n", "\n", "# Define a transform to normalize the data (effect: all values between -1 and 1)\n", "transform = transforms.Compose([transforms.ToTensor(),\n", " transforms.Normalize((0.5,), (0.5,))])\n", "\n", "# Download and load the training data\n", "trainset = datasets.MNIST('.', download=True, train=True, transform=transform)\n", "testset = datasets.MNIST('.', download=True, train=False, transform=transform)\n", "trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True)\n", "testloader = torch.utils.data.DataLoader(testset, batch_size=128, shuffle=True)\n", "print(\"Data loaded.\")" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "GcAfrn2falkK" }, "source": [ "### Exploring The Data" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 68 }, "colab_type": "code", "id": "xOjlOyjcCezX", "outputId": "fde25724-ea30-46f6-cf0f-4a7807c2ee0e" }, "outputs": [], "source": [ "dataiter = iter(trainloader)\n", "images, labels = dataiter.next()\n", "print(type(images))\n", "print(images.shape)\n", "print(labels.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above shows that one batch of training data contains 128 images, and that each image is 28x28 pixels.\n", "\n", "Let's display one image:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 265 }, "colab_type": "code", "id": "EuBvOWmGDHOq", "outputId": "127e0264-be67-4f12-b03f-dac28b7280af" }, "outputs": [], "source": [ "plt.imshow(images[0].numpy().squeeze(), cmap='gray_r');" ] }, { "cell_type": 
"markdown", "metadata": {}, "source": [ "Let's display a grid of images!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 265 }, "colab_type": "code", "id": "F9CppCcqDLtB", "outputId": "0f59838b-90b6-4370-e23a-0e4a0ba90c2a", "scrolled": true }, "outputs": [], "source": [ "def show_grid_img(images):\n", " dim = 9\n", " figure = plt.figure()\n", " num_of_images = dim*dim\n", " for index in range(num_of_images):\n", " plt.subplot(dim, dim, index+1)\n", " plt.axis('off')\n", " plt.imshow(images[index].numpy().squeeze(), cmap='gray_r')\n", "\n", "dataiter = iter(trainloader)\n", "images, labels = dataiter.next()\n", "show_grid_img(images)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you see a grid of images, your system is __ready__ for the practical of Wednesday.\n", "\n", "__Optional:__ To use the CPMpy modeling environment and _ortools_ CP solver, you should do install them as such:\n", "
    python3 -m pip install -U --user cpmpy ortools
\n", "\n", "Test in the following cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from cpmpy import *\n", "\n", "x = IntVar(1,3, name=\"x\") # x \\in {1,2,3}\n", "csp = Model([\n", " x > 1,\n", " x != 2,\n", "])\n", "\n", "if csp.solve():\n", " print(\"X =\",x.value())\n", "else:\n", " print(\"CSP infeasible\")" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "handwritten_digit_recognition_CPU.ipynb", "provenance": [], "version": "0.3.2" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 1 }