{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<!--NAVIGATION-->\n", "# | Basics | [Autograd](2-Autograd.ipynb) >" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References and other resources\n", "- [PyTorch Tutorials](https://pytorch.org/tutorials/)\n", "- [Torchvision](https://pytorch.org/docs/stable/torchvision/index.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Alternatives" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- [Tensorflow](https://www.tensorflow.org/)\n", "- [Keras](https://keras.io/)\n", "- [Theano](http://deeplearning.net/software/theano/)\n", "- [Caffe](http://caffe.berkeleyvision.org/)\n", "- [Caffe2](https://caffe2.ai/)\n", "- [MXNet](https://mxnet.apache.org/)\n", "- [many more...](https://www.google.com/search?q=deep+learning+frameworks&oq=deep+learning+frame&aqs=chrome.0.0j69i57j69i61l2j0l2.2284j0j1&sourceid=chrome&ie=UTF-8)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## So why PyTorch?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- Simple Python\n", "- Easy to use + debug\n", "- Supported/developed by Facebook\n", "- Nice and extensible interface (modules, etc.)\n", "- A lot of research code is published as PyTorch projects" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "____" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Google Colab only!"
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# execute only if you're using Google Colab\n", "!wget -q https://raw.githubusercontent.com/ahug/amld-pytorch-workshop/master/binder/requirements.txt -O requirements.txt\n", "!pip install -qr requirements.txt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "___" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import torch" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "print(\"PyTorch Version:\", torch.__version__)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "PyTorch's tensor API is very similar to numpy's (if that helps!)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Tensor Creation " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## First of all, what is a tensor?\n", "\n", "A **matrix** is a grid of numbers, let's say (3x5). In simple terms, a **tensor** can be seen as a generalization of a matrix to higher dimensions. It can be of arbitrary shape, e.g. (3 x 6 x 2 x 10). \n", "\n", "To start with, you can think of tensors as multidimensional arrays."
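] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For instance, the (3 x 6 x 2 x 10) shape mentioned above can be created like this (a minimal sketch; the shape itself is arbitrary):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# a 4-dimensional tensor filled with zeros\n", "T = torch.zeros((3, 6, 2, 10))\n", "T.shape"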
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = torch.tensor([1, 2, 3, 4, 5])\n", "X" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = torch.tensor([[1, 2, 3], [4, 5, 6]])\n", "X" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# numpy\n", "np.eye(3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# torch\n", "torch.eye(3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# numpy\n", "5 * np.eye(3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# torch\n", "5 * torch.eye(3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# numpy\n", "np.ones(5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# torch\n", "torch.ones(5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# numpy\n", "np.zeros(5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# torch\n", "torch.zeros(5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# numpy\n", "np.empty((3, 5))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# torch\n", 
"torch.empty((3, 5))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# numpy\n", "X = np.random.random((5, 3))\n", "X" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# torch\n", "Y = torch.rand((5, 3))\n", "Y" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# numpy\n", "X.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# torch\n", "Y.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "___" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## But wait: Why do we even need tensors if we can do exactly the same with numpy arrays?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`torch.tensor` behaves like numpy arrays under mathematical operations. However, `torch.tensor` can additionally keep track of gradients (see the next notebook) and provides GPU support."
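] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a small preview (gradients are the topic of the next notebook): gradient tracking has to be requested explicitly via `requires_grad`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# preview: gradients are only tracked when requires_grad=True\n", "x = torch.ones(3, requires_grad=True)\n", "y = (2 * x).sum()\n", "y.backward()\n", "x.grad  # tensor([2., 2., 2.])"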
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "____" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Linear Algebra Operations" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = np.random.rand(3, 5)\n", "Y = torch.rand(3, 5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# numpy (matrix multiplication)\n", "X.T @ X" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "Y.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# torch (matrix multiplication)\n", "Y.t() @ Y" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "Y.t().matmul(Y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# CAUTION: Operator '*' does element-wise multiplication, just like in numpy!\n", "# Y.t() * Y # error, dimensions do not match for element-wise multiplication" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "np.linalg.inv(X.T @ X)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "torch.inverse(Y.t() @ Y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "np.arange(2, 10, 2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "torch.arange(2, 10, 2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "np.linspace(0, 1, 10)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ 
"torch.linspace(0, 1, 10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Your turn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**_Create the tensor:_**\n", "\n", "$ \\begin{bmatrix}\n", "5 & 7 & 9 & 11 & 13 & 15 & 17 & 19\n", "\\end{bmatrix} $" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# YOUR TURN" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## More on PyTorch Tensors" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each operation is also available as a function." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = torch.rand(3, 2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "torch.exp(X)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X.exp()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X.sqrt()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "(X.exp() + 2).sqrt() - 2 * X.log().sigmoid() # be creative :-)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Many more functions available: sin, cos, tanh, log, etc." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A = torch.eye(3)\n", "A" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A.add(5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Functions that mutate (in-place) the passed object end with an underscore, e.g. *add_*, *div_*, etc." 
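] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A quick side-by-side of the two variants (a fresh tensor `B` is used here so the cells below, which operate on `A`, are unaffected):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "B = torch.zeros(2, 2)\n", "B.add(1)   # out-of-place: returns a new tensor, B is unchanged\n", "B.add_(1)  # in-place: modifies B itself\n", "B  # now all ones"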
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A.add_(5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A.div_(3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A.uniform_() # fills the tensor with random uniform numbers in [0, 1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Indexing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again, it works just like in numpy." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A = torch.randint(100, (3, 3))\n", "A" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A[0, 0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A[2, 1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A[1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A[:, 1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A[1:2, :], A[1:2, :].shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A[1:, 1:]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": true }, 
"outputs": [], "source": [ "A[:2, :2]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_____" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reshaping & Expanding" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = torch.tensor([1, 2, 3, 4])\n", "X" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = X.repeat(3, 1) # repeat it 3 times along the 0th dimension and once along the 1st dimension\n", "X, X.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# equivalent of 'reshape' in numpy (view does not allocate new memory!)\n", "Y = X.view(2, 6)\n", "Y" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "Y = X.view(-1) # -1 tells PyTorch to infer the number of elements along that dimension\n", "Y, Y.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "Y = X.view(-1, 2)\n", "Y, Y.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "Y = X.view(-1, 4)\n", "Y, Y.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "Y = torch.ones(5)\n", "Y, Y.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "Y = Y.view(-1, 1)\n", "Y, Y.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "Y.expand(5, 5) # similar to repeat but does not actually allocate new memory" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = torch.eye(4)\n", "Y = X[3:, :]\n", "Y, Y.shape" ] }, { "cell_type":
"code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "Y = Y.squeeze() # removes all dimensions of size '1'\n", "Y, Y.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "Y = Y.unsqueeze(1)\n", "Y, Y.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Your turn!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**_Create the tensor:_**\n", "\n", "$ \\begin{bmatrix}\n", "7 & 5 & 5 & 5 & 5 \\\\\n", "5 & 7 & 5 & 5 & 5 \\\\\n", "5 & 5 & 7 & 5 & 5 \\\\\n", "5 & 5 & 5 & 7 & 5 \\\\\n", "5 & 5 & 5 & 5 & 7 \n", "\\end{bmatrix} $\n", "\n", "Hint: You can use matrix sum and scalar multiplication" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# YOUR TURN" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**_Create the tensor:_**\n", "\n", "$ \\begin{bmatrix}\n", "4 & 6 & 8 & 10 & 12 \\\\\n", "14 & 16 & 18 & 20 & 22 \\\\\n", "24 & 26 & 28 & 30 & 32\n", "\\end{bmatrix}$" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# YOUR TURN" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**_Create the tensor:_**\n", "\n", "$ \\begin{bmatrix}\n", "2 & 2 & 2 & 2 & 2 \\\\\n", "4 & 4 & 4 & 4 & 4 \\\\\n", "6 & 6 & 6 & 6 & 6 \\\\\n", "8 & 8 & 8 & 8 & 8\n", "\\end{bmatrix} $" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# YOUR TURN" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_____" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reductions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = torch.randint(10, (3, 4)).float()\n", "X" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, 
"outputs": [], "source": [ "X.sum()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X.sum().item()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X.sum(0) # column-wise sum" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X.sum(dim=1) # row-wise sum" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X.mean()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X.mean(dim=1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X.norm(dim=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Your turn!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compute the norms of the row-vectors in matrix **X** without using _torch.norm()_.\n", "\n", "Remember: $$||\\vec{v}||_2 = \\sqrt{v_1^2 + v_2^2 + \\dots + v_n^2}$$\n", "\n", "Hint: _X\\*\\*2_ computes the element-wise square."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = torch.eye(4) + torch.arange(4).repeat(4, 1).float()\n", "\n", "# YOUR TURN\n", "\n", "# SOLUTION: tensor([3.8730, 4.1231, 4.3589, 4.5826])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Masking" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = torch.randint(100, (5, 3))\n", "X" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "mask = (X > 25) & (X < 75)\n", "mask" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X[mask] # returns all elements matching the criteria in a 1D-tensor" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "mask.sum() # number of elements that fulfill the condition" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "(X == 25) | (X > 60)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Your turn!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Get the number of non-zeros in **X**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = torch.tensor([[1, 0, 2], [0, 6, 0]])\n", "# YOUR TURN" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compute the sum of all entries in X that are larger than the mean of all values in X."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# YOUR TURN" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "______" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Some useful properties of tensors" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "x = torch.Tensor([[0,1,2], [3,4,5]])\n", "\n", "print(\"x.shape: \\n%s\\n\" % (x.shape,))\n", "print(\"x.size(): \\n%s\\n\" % (x.size(),))\n", "print(\"x.size(1): \\n%s\\n\" % x.size(1))\n", "print(\"x.dim(): \\n%s\\n\" % x.dim())\n", "\n", "print(\"x.dtype: \\n%s\\n\" % x.dtype)\n", "print(\"x.device: \\n%s\\n\" % x.device)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `nonzero` function returns the indices of the non-zero elements." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "x = torch.Tensor([[0,1,2], [3,4,5]])\n", "\n", "print(\"x.nonzero(): \\n%s\\n\" % x.nonzero())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# press tab to autocomplete\n", "# x." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "___" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Converting between PyTorch and numpy" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = np.random.random((5,3))\n", "X" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# numpy ---> torch\n", "Y = torch.from_numpy(X) # Y is actually a DoubleTensor (i.e.
64-bit representation)\n", "Y" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "Y = torch.rand((2,4))\n", "Y" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# torch ---> numpy\n", "X = Y.numpy()\n", "X" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "____" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using GPUs " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using a **GPU** in PyTorch is as simple as calling **`.cuda()`** on your tensor.\n", "\n", "But first, you may want to check: \n", " - that CUDA can actually be used: `torch.cuda.is_available()`\n", " - how many GPUs are available: `torch.cuda.device_count()`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "torch.cuda.is_available()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "torch.cuda.device_count()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "x = torch.Tensor([[1,2,3], [4,5,6]])\n", "print(x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### tensor.cuda\n", "\n", "_Note: If you don't have CUDA on the machine, the following examples won't work._" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "x.cuda(0)  # not in-place: x itself stays on the CPU\n", "print(x.device)\n", "x = x.cuda(0)\n", "print(x.device)\n", "x = x.cuda(1)  # requires a second GPU\n", "print(x.device)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "x = torch.Tensor([[1,2,3], [4,5,6]])\n", "\n", "# This will generate an error since you cannot do operations on tensors that are not on the same device\n", "x + x.cuda()" ] }, { "cell_type": "markdown", "metadata": {},
"source": [ "#### Write an if statement that moves x to the GPU if CUDA is available" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# YOUR TURN" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These kinds of if statements used to be all over the place in people's PyTorch code. More recently, a more flexible way was introduced:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### torch.device" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A **`torch.device`** is an object representing the device on which a torch.tensor is or will be allocated.\n", "\n", "You can easily move a tensor from one device to another by using the **`tensor.to()`** function." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "cpu = torch.device('cpu')\n", "cuda_0 = torch.device('cuda:0')\n", "\n", "x = x.to(cpu)\n", "print(x.device)\n", "x = x.to(cuda_0)\n", "print(x.device)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is more flexible, since you only need to check once in your code whether CUDA is available." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n", "x = x.to(device) # We don't need to care anymore about whether cuda is available or not\n", "print(x.device)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Timing GPU" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How much faster is the GPU? See for yourself ..."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "A = torch.rand(100, 1000, 1000)\n", "B = A.cuda()  # copy to the default GPU (cuda:0)\n", "A.size()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%timeit -n 3 torch.bmm(A, A)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%timeit -n 30 torch.bmm(B, B)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "___" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Don't forget to download the notebook, otherwise your changes will be lost!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<!--NAVIGATION-->\n", "# | Basics | [Autograd](2-Autograd.ipynb) >" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8" } }, "nbformat": 4, "nbformat_minor": 2 }