{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\nWhat is PyTorch?\n================\n\nIt\u2019s a Python-based scientific computing package targeted at two sets of\naudiences:\n\n- A replacement for numpy to use the power of GPUs\n- A deep learning research platform that provides maximum flexibility\n and speed\n\nGetting Started\n---------------\n\nTensors\n^^^^^^^\n\nTensors are similar to numpy\u2019s ndarrays, with the addition being that\nTensors can also be used on a GPU to accelerate computing.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from __future__ import print_function\nimport torch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Construct a 5x3 matrix, uninitialized:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "x = torch.Tensor(5, 3)\nprint(x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Construct a randomly initialized matrix:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "x = torch.rand(5, 3)\nprint(x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Get its size:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(x.size())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

<div class=\"alert alert-info\"><h4>Note</h4><p>``torch.Size`` is in fact a tuple, so it supports all tuple operations.</p></div>

\n\nOperations\n^^^^^^^^^^\nThere are multiple syntaxes for operations. Let's look at addition as an example.\n\nAddition: syntax 1\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "y = torch.rand(5, 3)\nprint(x + y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Addition: syntax 2\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(torch.add(x, y))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Addition: giving an output tensor\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "result = torch.Tensor(5, 3)\ntorch.add(x, y, out=result)\nprint(result)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Addition: in-place\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# adds x to y\ny.add_(x)\nprint(y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

<div class=\"alert alert-info\"><h4>Note</h4><p>Any operation that mutates a tensor in-place is post-fixed with an ``_``.\n For example: ``x.copy_(y)`` and ``x.t_()`` will change ``x``.</p></div>

\n\nYou can use standard numpy-like indexing with all bells and whistles!\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(x[:, 1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Read later:**\n\n\n 100+ Tensor operations, including transposing, indexing, slicing,\n mathematical operations, linear algebra, random numbers, etc., are described\n `here `_\n\nNumpy Bridge\n------------\n\nConverting a torch Tensor to a numpy array and vice versa is a breeze.\n\nThe torch Tensor and numpy array will share their underlying memory\nlocations, and changing one will change the other.\n\nConverting torch Tensor to numpy Array\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "a = torch.ones(5)\nprint(a)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "b = a.numpy()\nprint(b)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See how the numpy array changed in value.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "a.add_(1)\nprint(a)\nprint(b)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Converting numpy Array to torch Tensor\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nSee how changing the np array changed the torch Tensor automatically.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\na = np.ones(5)\nb = torch.from_numpy(a)\nnp.add(a, 1, out=a)\nprint(a)\nprint(b)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All Tensors on the CPU except CharTensor support converting to\nNumPy and back.\n\nCUDA Tensors\n------------\n\nTensors can be moved onto the GPU using the ``.cuda`` function.\n\n" ] }, { 
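"cell_type": "markdown", "metadata": {}, "source": [ "A minimal sketch first (assuming nothing beyond ``torch`` itself): ``torch.cuda.is_available()`` reports whether a CUDA-capable GPU is usable, which the conditional cell below relies on.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# True only when a CUDA-capable GPU and matching drivers are present\nprint(torch.cuda.is_available())" ] }, { 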
"cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# let us run this cell only if CUDA is available\nif torch.cuda.is_available():\n x = x.cuda()\n y = y.cuda()\n print(x + y)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 0 }