{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 2.张量简介与创建\n", "\n", "Q:张量是什么?\n", "- 一个多维数组,它是标量、向量、矩阵的高维拓展\n", "- ![](http://anki190912.xuexihaike.com/20200918142143.png?imageView2/2/h/150)\n", "\n", "Q:Pytorch中的Variable是什么?与Tensor的关系是什么?\n", "- Variable是torch.autograd中的数据类型主要用于封装Tensor,进行自动求导\n", "- data:被包装的Tensor\n", "- grad:data的梯度\n", "- grad_fn:创建Tensor的Function,是自动求导的关键\n", "- requires_grad:指示是否需要梯度\n", "- is_leaf:指示是否是叶子结点(张量)\n", "- ![](http://anki190912.xuexihaike.com/20200918143346.png?imageView2/2/w/200)\n", "\n", "Q:Pytorch中的Tensor是什么?\n", "- PyTorch 0.4.0开始,Variable并入Tensor\n", "- dtype: 张量的数据类型,如torch.FloatTensor, torch.cuda.FloatTensor\n", "- shape: 张量的形状,如(64,3, 224, 224)\n", "- device: 张量所在设备,GPU/CPU,是加速的关键\n", "- ![](http://anki190912.xuexihaike.com/20200918143722.png?imageView2/2/h/100)\n", "\n", "Q:Tensor的函数原型是怎样?\n", "- `torch.tensor(data, dtype=None, device=None, requires_grad=False, pin_memory=False)`\n", "- 功能:从data创建tensor\n", "- data: 数据,可以是list,numpy\n", "- dtype: 数据类型,默认与data一致\n", "- device: 所在设备,cuda/cpu\n", "- requires_grad: 是否需要梯度\n", "- pin_memory:是否存于锁页内存\n", "\n", "Q:通过torch.tensor创建Tensor的代码是什么?" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[1. 1. 1.]\n", " [1. 1. 1.]\n", " [1. 1. 1.]]\n", "ndarray的数据类型: float64\n", "tensor([[1., 1., 1.],\n", " [1., 1., 1.],\n", " [1., 1., 1.]], dtype=torch.float64)\n", "tensor([[1., 1., 1.],\n", " [1., 1., 1.],\n", " [1., 1., 1.]], device='cuda:0', dtype=torch.float64)\n" ] } ], "source": [ "import torch\n", "import numpy as np\n", "\n", "arr = np.ones((3, 3))\n", "print(arr)\n", "print('ndarray的数据类型:', arr.dtype)\n", "\n", "t = torch.tensor(arr)\n", "print(t)\n", "\n", "# 放到gpu上\n", "t = torch.tensor(arr, device='cuda')\n", "print(t)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Q:如何通过torch.from_numpy创建张量?\n", "- 函数原型:`torch.from_numpy(ndarray)`\n", "- 功能:从numpy创建tensor\n", "- 注意事项:从torch.from_numpy创建的tensor于原ndarray共享内存,当修改其中一个的数据,另外一个也将会被改动\n", "- ![](http://anki190912.xuexihaike.com/20200918151039.png?imageView2/2/h/150)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "numpy array:\n", "[[1 2 3]\n", " [4 5 6]]\n", "tensor:\n", "tensor([[1, 2, 3],\n", " [4, 5, 6]])\n", "修改arr:\n", "numpy array:\n", "[[0 2 3]\n", " [4 5 6]]\n", "tensor:\n", "tensor([[0, 2, 3],\n", " [4, 5, 6]])\n", "修改tensor:\n", "numpy array:\n", "[[ 0 2 3]\n", " [ 4 -10 6]]\n", "tensor:\n", "tensor([[ 0, 2, 3],\n", " [ 4, -10, 6]])\n" ] } ], "source": [ "arr = np.array([[1,2,3],[4,5,6]])\n", "t = torch.from_numpy(arr)\n", "print(\"numpy array:\")\n", "print(arr)\n", "print(\"tensor:\")\n", "print(t)\n", "\n", "print(\"修改arr:\")\n", "arr[0, 0] = 0\n", "print(\"numpy array:\")\n", "print(arr)\n", "print(\"tensor:\")\n", "print(t)\n", "\n", "print(\"修改tensor:\")\n", "arr[1, 1] = -10\n", "print(\"numpy array:\")\n", "print(arr)\n", "print(\"tensor:\")\n", "print(t)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Q:如何通过torch.zeros或torch.ones创建张量?\n", "- 函数原型:`torch.zeros(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)`\n", "- 函数原型:`torch.ones(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)`\n", "- 功能:依size创建全0张量和全1\n", "- size:张量的形状\n", "- out:输出的张量,貌似其原始类型必须为tensor,通过out得到的和返回值得到的是完全一样的,相当于赋值\n", "- layout:内存中布局形式,有strided,sparse_coo等\n", "- 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you create tensors with torch.zeros or torch.ones?\n",
"- Signature: `torch.zeros(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)`\n",
"- Signature: `torch.ones(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)`\n",
"- Purpose: create an all-zeros or all-ones tensor of the given size\n", "- size: the shape of the tensor\n",
"- out: an existing tensor to write the result into; it must already be a tensor, and the returned tensor is the very same object as out\n",
"- layout: the memory layout, e.g. strided, sparse_coo\n", "- device: the target device, gpu/cpu\n",
"- requires_grad: whether a gradient is required" ] },
{ "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"tensor([[0, 0, 0],\n", " [0, 0, 0],\n", " [0, 0, 0]])\n", "tensor([[0, 0, 0],\n", " [0, 0, 0],\n", " [0, 0, 0]])\n",
"140556294938560 140556294938560 True\n" ] } ],
"source": [ "out_t = torch.tensor([1])\n", "t = torch.zeros((3,3), out=out_t)\n", "print(t)\n", "print(out_t)\n",
"print(id(t), id(out_t), id(t) == id(out_t))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you create tensors with torch.zeros_like or torch.ones_like?\n",
"- Signature: `torch.zeros_like(input, dtype=None, layout=None, device=None, requires_grad=False)`\n",
"- Signature: `torch.ones_like(input, dtype=None, layout=None, device=None, requires_grad=False)`\n",
"- Purpose: create an all-zeros or all-ones tensor with the same shape as input, where input is a tensor\n",
"- input: the reference tensor whose shape is matched" ] },
{ "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"tensor([[0., 0., 0.],\n", " [0., 0., 0.]])\n", "tensor([[1., 1., 1.],\n", " [1., 1., 1.]])\n" ] } ],
"source": [ "t = torch.empty(2,3)\n", "print(torch.zeros_like(t))\n", "print(torch.ones_like(t))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you create a tensor with torch.full?\n",
"- `torch.full(size, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)`\n",
"- `torch.full_like(input, fill_value, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format)`\n",
"- Purpose: create a tensor in which every element equals fill_value\n",
"- size: the shape of the tensor, e.g. (3,3)\n", "- fill_value: the value to fill with" ] },
{ "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [
"/pytorch/aten/src/ATen/native/TensorFactories.cpp:361: UserWarning: Deprecation warning: In a future PyTorch release torch.full will no longer return tensors of floating dtype by default. Instead, a bool fill_value will return a tensor of torch.bool dtype, and an integral fill_value will return a tensor of torch.long dtype. Set the optional `dtype` or `out` arguments to suppress this warning.\n" ] },
{ "data": { "text/plain": [ "tensor([[8., 8., 8.],\n", " [8., 8., 8.],\n", " [8., 8., 8.]])" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ],
"source": [ "torch.full((3,3), 8)" ] },
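{ "cell_type": "markdown", "metadata": {}, "source": [ "The deprecation warning above asks for an explicit `dtype`. A minimal sketch (added, not from the original notes) passing one, plus `torch.full_like`:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [],
"source": [ "# an explicit dtype silences the deprecation warning\n",
"print(torch.full((3, 3), 8, dtype=torch.long))\n",
"# full_like takes the shape of an existing tensor\n",
"print(torch.full_like(torch.empty(2, 2), 3.14))" ] },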
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you create a 1-D arithmetic sequence with torch.arange?\n",
"- Signature: `torch.arange(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)`\n",
"- Purpose: create a 1-D tensor of evenly spaced values with common difference step\n",
"- Note: the value range is [start, end)\n", "- start: first value of the sequence\n",
"- end: end value of the sequence (exclusive)\n", "- step: common difference of the sequence, default 1" ] },
{ "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([2, 4, 6, 8])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ],
"source": [ "torch.arange(2,10,2)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you create an evenly spaced 1-D tensor with torch.linspace?\n",
"- Signature: `torch.linspace(start, end, steps=100, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)`\n",
"- Purpose: create a 1-D tensor of steps values evenly spaced over [start, end]\n",
"- Note: the value range is [start, end], inclusive\n", "- start: first value of the sequence\n",
"- end: last value of the sequence\n", "- steps: number of points\n", "- the step size is (end-start)/(steps-1)" ] },
{ "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([ 2.0000, 3.3333, 4.6667, 6.0000, 7.3333, 8.6667, 10.0000])" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ],
"source": [ "torch.linspace(2, 10, 7)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you create a logarithmically spaced 1-D tensor with torch.logspace?\n",
"- Signature: `torch.logspace(start, end, steps=100, base=10.0, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)`\n",
"- Purpose: create a 1-D tensor whose values are evenly spaced on a logarithmic scale, from base^start to base^end\n",
"- Note: the length is steps and the base is base\n", "- base: base of the logarithm, default 10\n", "\n",
"Q: How do you create an identity matrix with torch.eye?\n",
"- Signature: `torch.eye(n, m=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)`\n",
"- Purpose: create an identity matrix (2-D tensor)\n", "- Note: square by default\n",
"- n: number of rows\n", "- m: number of columns\n", "- A quick sketch of both follows below." ] },
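{ "cell_type": "markdown", "metadata": {}, "source": [ "A minimal sketch for both (added here; the expected values follow from the definitions above):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [],
"source": [ "# logspace: steps points from base**start to base**end\n",
"print(torch.logspace(0, 3, steps=4))  # expected: tensor([1., 10., 100., 1000.])\n",
"# eye: identity matrix, optionally non-square\n",
"print(torch.eye(3))\n", "print(torch.eye(2, 3))" ] },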
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you draw a normally distributed tensor with torch.normal?\n",
"- Signature: `torch.normal(mean, std, *, generator=None, out=None)`\n",
"- Purpose: draw samples from a normal (Gaussian) distribution\n", "- mean: the mean\n", "- std: the standard deviation\n",
"- Since mean and std can each be a scalar or a tensor, there are 4 different combinations" ] },
{ "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"mean:tensor([1., 2., 3., 4.])\n", "std:tensor([1., 2., 3., 4.])\n", "tensor([ 0.6063, 2.9914, 4.0138, -0.5877])\n", "\n",
"tensor([ 1.1977, -0.1746, 1.5572, -1.1905])\n", "\n", "mean:tensor([1., 2., 3., 4.])\n", "std:1\n",
"tensor([0.7165, 1.5649, 3.2308, 3.2504])\n" ] } ],
"source": [ "# mean: tensor, std: tensor\n", "# here t[i] is drawn from a normal distribution with mean mean[i] and std std[i]\n",
"mean = torch.arange(1, 5, dtype=torch.float)\n", "std = torch.arange(1, 5, dtype=torch.float)\n",
"t = torch.normal(mean, std)\n", "print(\"mean:{}\\nstd:{}\".format(mean, std))\n", "print(t)\n", "print()\n", "\n",
"# mean: scalar, std: scalar; here a size must be specified\n", "t_normal = torch.normal(0., 1., size=(4,))\n",
"print(t_normal)\n", "print()\n", "\n", "# mean: tensor, std: scalar\n",
"mean = torch.arange(1, 5, dtype=torch.float)\n", "std = 1\n", "t_normal = torch.normal(mean, std)\n",
"print(\"mean:{}\\nstd:{}\".format(mean, std))\n", "print(t_normal)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you create a standard-normal tensor?\n",
"- `torch.randn(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor`\n",
"- `torch.randn_like(input, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor`\n",
"- size: the shape of the tensor" ] },
{ "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"tensor([-0.2387, -0.3638, 0.3597, 0.1225])\n", "tensor([[ 0.4709, 0.8593, -0.5970],\n", " [-0.1133, 0.3273, 0.0106]])\n" ] } ],
"source": [ "print(torch.randn(4))\n", "print(torch.randn(2,3))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you generate uniform and integer-uniform tensors?\n",
"- Uniform distribution on [0, 1):\n",
"- `torch.rand(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor`\n",
"- `torch.rand_like(input, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor`\n",
"- Integer uniform distribution on [low, high):\n",
"- `torch.randint(low=0, high, size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor`\n",
"- `torch.randint_like(input, low=0, high, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor`\n",
"- where size is the shape of the tensor\n", "\n",
"Q: How do you generate a random permutation of 0 to n-1?\n",
"- `torch.randperm(n, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False) → LongTensor`\n",
"- n is the length of the tensor\n", "- commonly used to generate shuffled indices\n", "\n",
"Q: How do you generate a Bernoulli-distributed tensor?\n",
"- `torch.bernoulli(input, *, generator=None, out=None) → Tensor`\n",
"- draws from a Bernoulli (0-1, two-point) distribution with probabilities given by input\n",
"- A combined sketch of these four samplers follows below." ] },
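{ "cell_type": "markdown", "metadata": {}, "source": [ "A minimal combined sketch of the four samplers above (added, not from the original notes):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [],
"source": [ "print(torch.rand(2, 3))              # uniform on [0, 1)\n",
"print(torch.randint(0, 10, (2, 3)))  # integers uniform on [0, 10)\n",
"print(torch.randperm(5))             # random permutation of 0..4\n",
"p = torch.tensor([0.1, 0.5, 0.9])\n",
"print(torch.bernoulli(p))            # entry i is 1 with probability p[i]" ] },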
{ "cell_type": "markdown", "metadata": {}, "source": [ "# 3. Tensor Operations and Linear Regression\n", "\n",
"## Tensor operations: concatenation, splitting, indexing, and transformation\n", "\n",
"Q: How do you concatenate tensors with torch.cat?\n", "- `torch.cat(tensors, dim=0, out=None) → Tensor`\n",
"- Purpose: concatenate tensors along an existing dimension dim\n",
"- tensors: a sequence of tensors\n", "- dim: the dimension along which to concatenate" ] },
{ "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"tensor([[-0.6851, 0.0099, -1.4586],\n", " [ 0.2965, 0.4991, -0.4938]])\n",
"tensor([[-0.6851, 0.0099, -1.4586],\n", " [ 0.2965, 0.4991, -0.4938],\n", " [-0.6851, 0.0099, -1.4586],\n", " [ 0.2965, 0.4991, -0.4938]])\n",
"shape: torch.Size([4, 3])\n",
"tensor([[-0.6851, 0.0099, -1.4586, -0.6851, 0.0099, -1.4586],\n", " [ 0.2965, 0.4991, -0.4938, 0.2965, 0.4991, -0.4938]])\n",
"shape: torch.Size([2, 6])\n" ] } ],
"source": [ "t = torch.randn(2,3)\n", "print(t)\n", "t1 = torch.cat([t,t], dim=0)\n", "print(t1)\n",
"print(\"shape:\", t1.shape)\n", "t2 = torch.cat([t,t], dim=1)\n", "print(t2)\n", "print(\"shape:\", t2.shape)\n",
"# dim selects the direction along which the tensors are joined" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you stack tensors with torch.stack?\n",
"- `torch.stack(tensors, dim=0, out=None) → Tensor`\n", "- Purpose: concatenate along a newly created dimension dim\n",
"- tensors: a sequence of tensors\n", "- dim: the dimension to create and stack along\n",
"- Note: cat does not add a dimension, while stack does; it behaves like an insert" ] },
{ "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"tensor([[ 0.6266, 0.8918, 0.6165],\n", " [ 1.1646, -1.8152, -0.7309]])\n",
"tensor([[[ 0.6266, 0.8918, 0.6165],\n", " [ 1.1646, -1.8152, -0.7309]],\n", "\n",
" [[ 0.6266, 0.8918, 0.6165],\n", " [ 1.1646, -1.8152, -0.7309]]])\n", "shape: torch.Size([2, 2, 3])\n" ] } ],
"source": [ "t = torch.randn(2,3)\n", "print(t)\n", "t1 = torch.stack([t,t], dim=0)\n", "print(t1)\n", "print(\"shape:\", t1.shape)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you split a tensor with torch.chunk?\n",
"- `torch.chunk(input, chunks, dim=0) → List of Tensors`\n",
"- Purpose: split a tensor into equal chunks along dimension dim\n", "- Returns: a list of tensors\n",
"- Note: if the dimension is not evenly divisible, the last chunk is smaller than the others\n",
"- input: the tensor to split\n", "- chunks: the number of chunks\n", "- dim: the dimension along which to split" ] },
{ "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"tensor([[1., 1., 1., 1., 1., 1., 1.],\n", " [1., 1., 1., 1., 1., 1., 1.]])\n",
"tensor 0:\n", "tensor([[1., 1., 1.],\n", " [1., 1., 1.]])\n",
"tensor 1:\n", "tensor([[1., 1., 1.],\n", " [1., 1., 1.]])\n",
"tensor 2:\n", "tensor([[1.],\n", " [1.]])\n" ] } ],
"source": [ "a = torch.ones((2,7))\n", "print(a)\n", "list_of_tensors = torch.chunk(a, dim=1, chunks=3)\n",
"for i, t in enumerate(list_of_tensors):\n", "    print(f\"tensor {i}:\")\n", "    print(t)\n",
"# chunk length = ceil(7/3) = 3, so the last chunk keeps the remaining 1 column" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you split a tensor with torch.split?\n",
"- `torch.split(tensor, split_size_or_sections, dim=0)`\n",
"- Purpose: split a tensor along dimension dim into pieces of a given size\n", "- Returns: a list of tensors\n",
"- tensor: the tensor to split\n",
"- split_size_or_sections: if an int, the length of each piece; if a list, split according to its elements, whose sum must equal the size of that dimension\n",
"- dim: the dimension along which to split" ] },
{ "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"tensor([[1., 1., 1., 1., 1., 1., 1.],\n", " [1., 1., 1., 1., 1., 1., 1.]])\n",
"tensor 0:\n", "tensor([[1., 1.],\n", " [1., 1.]])\n",
"tensor 1:\n", "tensor([[1., 1., 1., 1.],\n", " [1., 1., 1., 1.]])\n",
"tensor 2:\n", "tensor([[1.],\n", " [1.]])\n" ] } ],
"source": [ "a = torch.ones((2,7))\n", "print(a)\n", "list_of_tensors = torch.split(a, [2,4,1], dim=1)\n",
"for i, t in enumerate(list_of_tensors):\n", "    print(f\"tensor {i}:\")\n", "    print(t)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you index data along dimension dim with an index tensor?\n",
"- `torch.index_select(input, dim, index, out=None) → Tensor`\n",
"- Returns: a tensor assembled from the entries selected by index\n",
"- input: the tensor to index\n", "- dim: the dimension to index along\n", "- index: the indices of the entries to select" ] },
{ "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"tensor([[1, 2, 8],\n", " [2, 3, 0],\n", " [0, 2, 0]])\n", "tensor([0, 2])\n", "tensor([[1, 2, 8],\n", " [0, 2, 0]])\n" ] } ],
"source": [ "t = torch.randint(0,9, size=(3,3))\n", "idx = torch.tensor([0,2], dtype=torch.long)\n",
"t_select = torch.index_select(t, dim=0, index=idx)\n", "print(t)\n", "print(idx)\n", "print(t_select)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you index a tensor by the True entries of a mask?\n",
"- `torch.masked_select(input, mask, out=None) → Tensor`\n", "- Returns: a 1-D tensor\n",
"- input: the tensor to index\n", "- mask: a boolean tensor with the same shape as input" ] },
{ "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"tensor([[2, 5, 0],\n", " [5, 8, 5],\n", " [2, 8, 5]])\n",
"tensor([[ True, True, True],\n", " [ True, False, True],\n", " [ True, False, True]])\n",
"tensor([2, 5, 0, 5, 5, 2, 5])\n" ] } ],
"source": [ "t = torch.randint(0, 9, size=(3, 3))\n", "print(t)\n",
"mask = t.le(5)  # le means less-than-or-equal; there are also lt, gt, ge\n", "print(mask)\n",
"t_select = torch.masked_select(t, mask)\n", "print(t_select)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: How do you change the shape of a tensor?\n",
"- `torch.reshape(input, shape) → Tensor`\n",
"- Note: when the tensor is contiguous in memory, the new tensor shares its data memory with input; see the sketch after the next cell\n",
"- input: the tensor to reshape\n",
"- shape: the new shape; one dimension may be -1, meaning it is inferred from the others" ] },
{ "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"tensor([2, 0, 7, 5, 6, 4, 3, 1])\n", "tensor([[2, 0, 7, 5],\n", " [6, 4, 3, 1]])\n" ] } ],
"source": [ "t = torch.randperm(8)\n", "t_reshape = torch.reshape(t, (2,4))\n", "print(t)\n", "print(t_reshape)" ] },
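{ "cell_type": "markdown", "metadata": {}, "source": [ "A small sketch (added) of the memory-sharing note above:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [],
"source": [ "t = torch.arange(8)\n", "t_reshape = torch.reshape(t, (2, 4))\n",
"t[0] = 100  # modify the original tensor\n",
"print(t_reshape)  # expected to show 100 too, since the memory is shared" ] },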
"source": [ "Q:如何交换张量的两个维度?\n", "- `torch.transpose(input, dim0, dim1) → Tensor`\n", "- input:要交换的张量\n", "- dim0,dim1:要交换的维度\n", "- 若为2维张量转置,即矩阵转置,可使用`torch.t(input) → Tensor`,等价于`torch.transpose(input, 0, 1)`" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[[0.8657, 0.5869, 0.1105, 0.4381],\n", " [0.7276, 0.6606, 0.3778, 0.3643],\n", " [0.6180, 0.6693, 0.9983, 0.4252]],\n", "\n", " [[0.3526, 0.6365, 0.6643, 0.5310],\n", " [0.4653, 0.5056, 0.1065, 0.7873],\n", " [0.6175, 0.6650, 0.1325, 0.5837]]])\n", "tensor([[[0.8657, 0.7276, 0.6180],\n", " [0.5869, 0.6606, 0.6693],\n", " [0.1105, 0.3778, 0.9983],\n", " [0.4381, 0.3643, 0.4252]],\n", "\n", " [[0.3526, 0.4653, 0.6175],\n", " [0.6365, 0.5056, 0.6650],\n", " [0.6643, 0.1065, 0.1325],\n", " [0.5310, 0.7873, 0.5837]]])\n" ] } ], "source": [ "t = torch.rand((2,3,4))\n", "t_transpose = torch.transpose(t, dim0=1,dim1=2)\n", "print(t)\n", "print(t_transpose)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Q:如何压缩长度为1的维度(轴)?\n", "- `torch.squeeze(input, dim=None, out=None) → Tensor`\n", "- dim: 若为None,移除所有长度为1的轴;若指定维度,当且仅当该轴长度为1时,可以被移除" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([1, 2, 3, 1])\n", "torch.Size([2, 3])\n", "torch.Size([2, 3, 1])\n", "torch.Size([1, 2, 3, 1])\n" ] } ], "source": [ "t = torch.rand((1, 2, 3, 1))\n", "t_sq = torch.squeeze(t)\n", "t_0 = torch.squeeze(t, dim=0)\n", "t_1 = torch.squeeze(t, dim=1)\n", "print(t.shape)\n", "print(t_sq.shape)\n", "print(t_0.shape)\n", "print(t_1.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Q:如何根据dim扩展维度?\n", "- `torch.unsqueeze(input, dim) → Tensor`\n", "- dim:扩展的维度" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([1, 2, 3, 4])\n", "tensor([[1, 2, 3, 4]])\n", "tensor([[1],\n", " [2],\n", " [3],\n", " [4]])\n" ] } ], "source": [ "t = torch.tensor([1, 2, 3, 4])\n", "print(t)\n", "t1 = torch.unsqueeze(t, 0)\n", "print(t1)\n", "t2 = torch.unsqueeze(t, 1)\n", "print(t2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 张量的数学运算\n", "\n", "Q:有哪些常见的数学运算?\n", "- 一、加减乘除\n", " - torch.add()\n", " - torch.addcdiv()\n", " - torch.addcmul()\n", " - torch.sub() \n", " - torch.div()\n", " - torch.mul()\n", "- 二、对数,指数,幂函数\n", " - torch.log(input, out=None)\n", " - torch.log10(input, out=None)\n", " - torch.log2(input, out=None)\n", " - torch.exp(input, out=None)\n", " - torch.pow()\n", "- 三、三角函数\n", " - torch.abs(input, out=None)\n", " - torch.acos(input, out=None)\n", " - torch.cosh(input, out=None)\n", " - torch.cos(input, out=None)\n", " - torch.asin(input, out=None)\n", " - torch.atan(input, out=None)\n", " - torch.atan2(input, other, out=None)\n", "\n", "Q:如何逐元素计算input + alpha x other?\n", "- `torch.add(input, other, *, alpha=1, out=None)`\n", "- input:第一个张量\n", "- alpha:乘项因子\n", "- other:第二个张量" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "t_0:\n", "tensor([[ 0.5570, -0.4743, 1.0113],\n", " [-1.2665, 0.1997, -0.6957],\n", " [-0.0714, -0.7002, -1.4687]])\n", "t_1:\n", "tensor([[1., 1., 1.],\n", " [1., 1., 1.],\n", " [1., 1., 1.]])\n", "t_add_10:\n", "tensor([[10.5570, 9.5257, 11.0113],\n", " [ 8.7335, 10.1997, 9.3043],\n", " [ 9.9286, 9.2998, 8.5313]])\n" ] } ], "source": [ 
"t_0 = torch.randn((3, 3))\n", "t_1 = torch.ones_like(t_0)\n", "t_add = torch.add(t_0, t_1, alpha=10)\n", "\n", "print(\"t_0:\\n{}\\nt_1:\\n{}\\nt_add_10:\\n{}\".format(t_0, t_1, t_add))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Q:如何计算$\\text { out }_{i}=\\text { input }_{i}+\\text { value } \\times \\text { tensor } 1_{i} \\times \\text { tensor } 2_{i}$\n", "- `torch.addcmul(input, tensor1, tensor2, *, value=1, out=None) → Tensor`\n", "\n", "Q:如何计算$\\text { out }_{i}=\\text { input }_{i}+\\text { value } \\times \\frac{\\text { tensor } 1}{\\text { tensor } 2_{i}}$\n", "- `torch.addcdiv(input, tensor1, tensor2, *, value=1, out=None) → Tensor`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 线性回归的Pytorch实现" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "import matplotlib.pyplot as plt\n", "torch.manual_seed(10)\n", "\n", "lr = 0.05 # 学习率\n", "\n", "# 创建训练数据\n", "x = torch.rand(20, 1) * 10 # x data (tensor), shape=(20, 1)\n", "y = 2*x + (5 + torch.randn(20, 1)) # y data (tensor), shape=(20, 1)\n", "\n", "# 构建线性回归参数的初始值\n", "w = torch.randn((1), requires_grad=True)\n", "b = torch.zeros((1), requires_grad=True)\n", "\n", "for iteration in range(1000):\n", "\n", " # 前向传播,计算y_pred=w * x+b\n", " wx = torch.mul(w, x)\n", " y_pred = torch.add(wx, b)\n", "\n", " # 计算 MSE loss\n", " loss = (0.5 * (y - y_pred) ** 2).mean()\n", "\n", " # 反向传播\n", " loss.backward()\n", "\n", " # 更新参数\n", " b.data.sub_(lr * b.grad)\n", " w.data.sub_(lr * w.grad)\n", "\n", " # 清零张量的梯度\n", " w.grad.zero_()\n", " b.grad.zero_()\n", "\n", " # 绘图\n", " if iteration % 20 == 0:\n", " plt.scatter(x.data.numpy(), y.data.numpy())\n", " plt.plot(x.data.numpy(), y_pred.data.numpy(), 'r-', lw=5)\n", " plt.text(2, 20, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size': 20, 'color': 'red'})\n", " plt.xlim(1.5, 10)\n", " plt.ylim(8, 28)\n", " plt.title(f\"Iteration: {iteration}\\nw: {w.data.numpy()} b: {b.data.numpy()}\")\n", " plt.pause(0.5)\n", "\n", " if loss.data.numpy() < 1:\n", " break\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![](http://anki190912.xuexihaike.com/20200920095346.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 4.计算图与动态图机制\n", "\n", "Q:计算图是什么?\n", "- 用来描述运算的有向无环图\n", "- 有两个主要元素:结点(Node)和边(Edge)\n", "- 结点表示数据,如向量、矩阵、张量,边表示运算,如加减乘除卷积等\n", "\n", "Q:如何用计算图表示$y = (x+w)*(w+1)$?\n", "- $a = x + w, b = w + 1, y = a * b$\n", "- ![](http://anki190912.xuexihaike.com/20200919144309.png?imageView2/2/h/200)\n", "\n", "Q:如何用计算图进行梯度求导,如$y = (x+w)*(w+1)$\n", "- $a = x + w, b = w + 1, y = a * b$\n", "- $$\\begin{aligned}\n", "\\frac{\\partial \\mathrm{y}}{\\partial w} &=\\frac{\\partial \\mathrm{y}}{\\partial a} \\frac{\\partial a}{\\partial w}+\\frac{\\partial \\mathrm{y}}{\\partial b} \\frac{\\partial b}{\\partial w} \\\\\n", "&=b * 1+\\mathrm{a} * 1 \\\\\n", "&=\\mathrm{b}+\\mathrm{a} \\\\\n", "&=(\\mathrm{w}+1)+(\\mathrm{x}+\\mathrm{w}) \\\\\n", "&=2 * \\mathrm{w}+\\mathrm{x}+1 \\\\\n", "&=2 * 1+2+1=5\n", "\\end{aligned}$$\n", "- ![](http://anki190912.xuexihaike.com/20200919144541.png?imageView2/2/h/200)\n", "- y对w求导在计算图中其实就是找到y到w的所有路径上的导数,进行求和\n", "\n", "Q:叶子结点是什么?\n", "- ![](http://anki190912.xuexihaike.com/20200919144541.png?imageView2/2/h/200)\n", "- 用户创建的结点称为叶子结点,如X和W\n", "- torch.Tensor中有is_leaf指示张量是否为叶子结点\n", "- 设置叶子结点主要是为了节省内存,因为非叶子结点的梯度在反向传播后会被释放掉\n", "- 若需要保留非叶子结点的梯度,可使用retain_grad()方法\n", "\n", "Q:torch.Tensor中的grad_fn作用是什么?\n", "- 记录创建该张量时所用的方法(函数)\n", "- 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Linear regression implemented in PyTorch" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [],
"source": [ "import torch\n", "import matplotlib.pyplot as plt\n", "torch.manual_seed(10)\n", "\n",
"lr = 0.05  # learning rate\n", "\n",
"# create the training data\n", "x = torch.rand(20, 1) * 10  # x data (tensor), shape=(20, 1)\n",
"y = 2*x + (5 + torch.randn(20, 1))  # y data (tensor), shape=(20, 1)\n", "\n",
"# initialize the linear regression parameters\n",
"w = torch.randn((1), requires_grad=True)\n", "b = torch.zeros((1), requires_grad=True)\n", "\n",
"for iteration in range(1000):\n", "\n",
"    # forward pass: compute y_pred = w * x + b\n", "    wx = torch.mul(w, x)\n", "    y_pred = torch.add(wx, b)\n", "\n",
"    # compute the MSE loss\n", "    loss = (0.5 * (y - y_pred) ** 2).mean()\n", "\n",
"    # backward pass\n", "    loss.backward()\n", "\n",
"    # update the parameters\n", "    b.data.sub_(lr * b.grad)\n", "    w.data.sub_(lr * w.grad)\n", "\n",
"    # zero the gradients of the tensors\n", "    w.grad.zero_()\n", "    b.grad.zero_()\n", "\n",
"    # plot\n", "    if iteration % 20 == 0:\n",
"        plt.scatter(x.data.numpy(), y.data.numpy())\n",
"        plt.plot(x.data.numpy(), y_pred.data.numpy(), 'r-', lw=5)\n",
"        plt.text(2, 20, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size': 20, 'color': 'red'})\n",
"        plt.xlim(1.5, 10)\n", "        plt.ylim(8, 28)\n",
"        plt.title(f\"Iteration: {iteration}\\nw: {w.data.numpy()} b: {b.data.numpy()}\")\n",
"        plt.pause(0.5)\n", "\n",
"        if loss.data.numpy() < 1:\n", "            break\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "![](http://anki190912.xuexihaike.com/20200920095346.png)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# 4. Computational Graphs and the Dynamic Graph Mechanism\n", "\n",
"Q: What is a computational graph?\n", "- A directed acyclic graph that describes computation\n",
"- It has two main elements: nodes and edges\n",
"- Nodes represent data, such as vectors, matrices, and tensors; edges represent operations, such as addition, subtraction, multiplication, division, and convolution\n", "\n",
"Q: How is $y = (x+w)*(w+1)$ represented as a computational graph?\n",
"- $a = x + w, b = w + 1, y = a * b$\n",
"- ![](http://anki190912.xuexihaike.com/20200919144309.png?imageView2/2/h/200)\n", "\n",
"Q: How are gradients computed on a computational graph, e.g. for $y = (x+w)*(w+1)$?\n",
"- $a = x + w, b = w + 1, y = a * b$\n",
"- $$\\begin{aligned}\n",
"\\frac{\\partial y}{\\partial w} &=\\frac{\\partial y}{\\partial a} \\frac{\\partial a}{\\partial w}+\\frac{\\partial y}{\\partial b} \\frac{\\partial b}{\\partial w} \\\\\n",
"&=b * 1+a * 1 \\\\\n", "&=b+a \\\\\n", "&=(w+1)+(x+w) \\\\\n", "&=2 * w+x+1 \\\\\n", "&=2 * 1+2+1=5\n", "\\end{aligned}$$\n",
"- ![](http://anki190912.xuexihaike.com/20200919144541.png?imageView2/2/h/200)\n",
"- On the graph, differentiating y with respect to w amounts to finding all paths from y to w, multiplying the derivatives along each path, and summing over the paths\n", "\n",
"Q: What is a leaf node?\n",
"- ![](http://anki190912.xuexihaike.com/20200919144541.png?imageView2/2/h/200)\n",
"- Nodes created by the user are called leaf nodes, e.g. X and W\n",
"- torch.Tensor has an is_leaf attribute indicating whether a tensor is a leaf node\n",
"- Leaf nodes exist mainly to save memory: the gradients of non-leaf nodes are freed after backpropagation\n",
"- To keep the gradient of a non-leaf node, call its retain_grad() method\n", "\n",
"Q: What is the role of grad_fn in torch.Tensor?\n",
"- It records the method (function) that created the tensor\n",
"- ![](http://anki190912.xuexihaike.com/20200919144541.png?imageView2/2/h/200)\n",
"- y.grad_fn = \\<MulBackward0\\>\n", "- a.grad_fn = \\<AddBackward0\\>\n", "\n",
"Q: A code example of the computational graph for $y = (x+w)*(w+1)$, computing the gradient of y with respect to w?" ] },
{ "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"tensor([5.])\n", "\n", "is_leaf:\n", " True True False False False\n", "\n",
"gradient:\n", " tensor([5.]) tensor([2.]) None None None\n", "\n",
"grad_fn:\n", " None None <AddBackward0 object at 0x...> <AddBackward0 object at 0x...> <MulBackward0 object at 0x...>\n" ] },
{ "name": "stderr", "output_type": "stream", "text": [
"/home/tqc/miniconda3/envs/py36/lib/python3.6/site-packages/torch/tensor.py:746: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.\n",
" warnings.warn(\"The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad \"\n" ] } ],
"source": [ "import torch\n", "\n", "w = torch.tensor([1.], requires_grad=True)\n", "x = torch.tensor([2.], requires_grad=True)\n", "\n",
"a = torch.add(w, x)\n",
"# uncomment to keep the gradient of the non-leaf node a; otherwise a.grad is None\n",
"# a.retain_grad()\n", "b = torch.add(w, 1)\n", "y = torch.mul(a, b)\n", "\n",
"y.backward()\n", "print(w.grad)\n", "\n",
"# inspect the leaf nodes\n", "print(\"\\nis_leaf:\\n\", w.is_leaf, x.is_leaf, a.is_leaf, b.is_leaf, y.is_leaf)\n", "\n",
"# inspect the gradients\n", "print(\"\\ngradient:\\n\", w.grad, x.grad, a.grad, b.grad, y.grad)\n", "\n",
"# inspect grad_fn\n", "print(\"\\ngrad_fn:\\n\", w.grad_fn, x.grad_fn, a.grad_fn, b.grad_fn, y.grad_fn)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# 5. autograd and Logistic Regression\n", "\n",
"Q: What is torch.autograd.backward?\n",
"- torch.autograd.backward(tensors: Union[torch.Tensor, Sequence[torch.Tensor]], grad_tensors: Union[torch.Tensor, Sequence[torch.Tensor], None] = None, retain_graph: Optional[bool] = None, create_graph: bool = False, grad_variables: Union[torch.Tensor, Sequence[torch.Tensor], None] = None) → None\n",
"- Purpose: compute gradients automatically\n", "- tensors: the tensors to differentiate, e.g. loss\n",
"- retain_graph: keep the computational graph; if it is not kept, calling backward() again right away raises an error\n",
"- create_graph: build a graph of the derivative computation, used for higher-order derivatives\n",
"- grad_tensors: weights for multiple gradients\n", "\n",
"Q: A code example of retain_graph in torch.autograd.backward?" ] },
{ "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"tensor([5.])\n", "tensor([10.])\n" ] } ],
"source": [ "w = torch.tensor([1.], requires_grad=True)\n", "x = torch.tensor([2.], requires_grad=True)\n", "\n",
"a = torch.add(w, x)\n", "b = torch.add(w, 1)\n", "y = torch.mul(a, b)\n", "\n",
"y.backward(retain_graph=True)\n", "print(w.grad)\n", "y.backward()\n", "print(w.grad)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: A code example of grad_tensors in torch.autograd.backward?" ] },
{ "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([9.])\n" ] } ],
"source": [ "w = torch.tensor([1.], requires_grad=True)\n", "x = torch.tensor([2.], requires_grad=True)\n", "\n",
"a = torch.add(w, x)  # retain_grad()\n", "b = torch.add(w, 1)\n", "\n",
"y0 = torch.mul(a, b)  # y0 = (x+w) * (w+1)\n", "y1 = torch.add(a, b)  # y1 = (x+w) + (w+1), dy1/dw = 2\n", "\n",
"loss = torch.cat([y0, y1], dim=0)  # [y0, y1]\n", "grad_tensors = torch.tensor([1., 2.])\n", "\n",
"# gradient is passed through to grad_tensors in torch.autograd.backward()\n",
"loss.backward(gradient=grad_tensors)\n",
"# effectively computes 1 * dy0/dw + 2 * dy1/dw = 1*5 + 2*2 = 9\n", "\n",
"print(w.grad)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: What is torch.autograd.grad?\n",
"- torch.autograd.grad(outputs: Union[torch.Tensor, Sequence[torch.Tensor]], inputs: Union[torch.Tensor, Sequence[torch.Tensor]], grad_outputs: Union[torch.Tensor, Sequence[torch.Tensor], None] = None, retain_graph: Optional[bool] = None, create_graph: bool = False, only_inputs: bool = True, allow_unused: bool = False) → Tuple[torch.Tensor, ...]\n",
"- Purpose: compute gradients\n", "- outputs: the tensors to differentiate, e.g. loss\n",
"- inputs: the tensors that need gradients\n",
"- create_graph: build a graph of the derivative computation, used for higher-order derivatives\n",
"- retain_graph: keep the computational graph\n", "- grad_outputs: weights for multiple gradients\n", "\n",
"Q: How do you use torch.autograd.grad to take the first- and second-order derivatives of $y=x^2$?" ] },
{ "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"(tensor([6.], grad_fn=<MulBackward0>),)\n", "(tensor([2.]),)\n" ] } ],
"source": [ "x = torch.tensor([3.], requires_grad=True)\n", "y = torch.pow(x, 2)  # y = x**2\n", "\n",
"# grad_1 = dy/dx = 2x = 2 * 3 = 6\n", "grad_1 = torch.autograd.grad(y, x, create_graph=True)\n", "print(grad_1)\n", "\n",
"# grad_2 = d(dy/dx)/dx = d(2x)/dx = 2\n", "# autograd.grad returns a tuple, so take its first element\n",
"grad_2 = torch.autograd.grad(grad_1[0], x)\n", "print(grad_2)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: What are the 3 usage tips for autograd?\n",
"- 1. Gradients are not zeroed automatically; they keep accumulating across backward passes, so zero them manually after use with w.grad.zero_(), where the trailing underscore marks an in-place operation\n",
"- 2. Nodes that depend on leaf nodes have requires_grad=True by default\n",
"- 3. Leaf nodes must not be modified in place\n",
"- A short sketch of the three tips follows below." ] },
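{ "cell_type": "markdown", "metadata": {}, "source": [ "A minimal sketch (added) illustrating the three tips:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [],
"source": [ "w = torch.tensor([1.], requires_grad=True)\n", "x = torch.tensor([2.], requires_grad=True)\n", "\n",
"# tip 1: gradients accumulate across backward calls\n",
"for _ in range(2):\n", "    y = torch.mul(torch.add(w, x), torch.add(w, 1))\n", "    y.backward()\n",
"print(w.grad)  # expected tensor([10.]): 5 + 5 rather than 5\n",
"w.grad.zero_()  # zero it manually\n", "\n",
"# tip 2: requires_grad propagates from leaf nodes\n",
"a = torch.add(w, x)\n", "print(a.requires_grad)  # expected: True\n", "\n",
"# tip 3: in-place ops on leaf tensors are forbidden\n",
"# w.add_(1)  # uncommenting this raises a RuntimeError" ] },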
{ "cell_type": "markdown", "metadata": {}, "source": [ "Q: What does a PyTorch implementation of logistic regression look like?" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [],
"source": [ "import torch\n", "import torch.nn as nn\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "torch.manual_seed(10)\n", "\n",
"# generate the data\n", "sample_nums = 100\n", "mean_value = 1.7\n", "bias = 1\n", "n_data = torch.ones(sample_nums, 2)\n",
"# class 0 data, shape=(100, 2)\n", "x0 = torch.normal(mean_value * n_data, 1) + bias\n",
"# class 0 labels, shape=(100)\n", "y0 = torch.zeros(sample_nums)\n",
"# class 1 data, shape=(100, 2)\n", "x1 = torch.normal(-mean_value * n_data, 1) + bias\n",
"# class 1 labels, shape=(100)\n", "y1 = torch.ones(sample_nums)\n",
"train_x = torch.cat((x0, x1), 0)\n", "train_y = torch.cat((y0, y1), 0)\n", "\n",
"# define the model\n", "class LR(nn.Module):\n",
"    def __init__(self):\n", "        super(LR, self).__init__()\n",
"        self.features = nn.Linear(2, 1)\n", "        self.sigmoid = nn.Sigmoid()\n", "\n",
"    def forward(self, x):\n", "        x = self.features(x)\n", "        x = self.sigmoid(x)\n", "        return x\n", "\n",
"# instantiate the logistic regression model\n", "lr_net = LR()\n", "\n",
"# choose the loss function: binary cross-entropy\n", "loss_fn = nn.BCELoss()\n", "\n",
"# choose the optimizer\n", "lr = 0.01  # learning rate\n",
"optimizer = torch.optim.SGD(lr_net.parameters(), lr=lr, momentum=0.9)\n", "\n",
"# train the model\n", "for iteration in range(1000):\n",
"    # forward pass\n", "    y_pred = lr_net(train_x)\n", "\n",
"    # compute the loss\n", "    loss = loss_fn(y_pred.squeeze(), train_y)\n", "\n",
"    # backward pass\n", "    loss.backward()\n", "\n",
"    # update the parameters\n", "    optimizer.step()\n", "\n",
"    # zero the gradients\n", "    optimizer.zero_grad()\n", "\n",
"    # plot\n", "    if iteration % 50 == 0:\n", "\n",
"        # classify with a threshold of 0.5\n", "        mask = y_pred.ge(0.5).float().squeeze()\n",
"        # count the correctly predicted samples\n", "        correct = (mask == train_y).sum()\n",
"        # compute the classification accuracy\n", "        acc = correct.item() / train_y.size(0)\n", "\n",
"        plt.scatter(x0.data.numpy()[:, 0], x0.data.numpy()[:, 1], c='r', label='class 0')\n",
"        plt.scatter(x1.data.numpy()[:, 0], x1.data.numpy()[:, 1], c='b', label='class 1')\n", "\n",
"        w0, w1 = lr_net.features.weight[0]\n",
"        w0, w1 = float(w0.item()), float(w1.item())\n",
"        plot_b = float(lr_net.features.bias[0].item())\n",
"        plot_x = np.arange(-6, 6, 0.1)\n",
"        plot_y = (-w0 * plot_x - plot_b) / w1\n", "\n",
"        plt.xlim(-5, 7)\n", "        plt.ylim(-7, 7)\n", "        plt.plot(plot_x, plot_y)\n", "\n",
"        plt.text(-5, 5, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size':20, 'color':'red'})\n",
"        plt.title(\"Iteration: {}\\nw0:{:.2f} w1:{:.2f} b: {:.2f} accuracy:{:.2%}\".format(iteration, w0, w1, plot_b, acc))\n",
"        plt.legend()\n", "\n",
"        plt.show()\n", "        plt.pause(0.5)\n", "\n",
"        if acc > 0.99:\n", "            break" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "![](http://anki190912.xuexihaike.com/20200920100028.png)" ] } ],
"metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.12" }, "toc-autonumbering": false, "toc-showcode": false, "toc-showmarkdowntxt": false, "toc-showtags": false }, "nbformat": 4, "nbformat_minor": 4 }