{ "cells": [ { "cell_type": "markdown", "id": "b4abb964", "metadata": {}, "source": [ "Installing (updating) the following libraries for your Sagemaker\n", "instance." ] }, { "cell_type": "code", "execution_count": null, "id": "a4e39343", "metadata": {}, "outputs": [], "source": [ "!pip install .. # installing d2l\n" ] }, { "cell_type": "markdown", "id": "3c1c4d8e", "metadata": { "origin_pos": 0 }, "source": [ "# 多GPU的简洁实现\n", ":label:`sec_multi_gpu_concise`\n", "\n", "每个新模型的并行计算都从零开始实现是无趣的。此外,优化同步工具以获得高性能也是有好处的。下面我们将展示如何使用深度学习框架的高级API来实现这一点。数学和算法与 :numref:`sec_multi_gpu`中的相同。本节的代码至少需要两个GPU来运行。\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "f997430f", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T09:28:14.308570Z", "iopub.status.busy": "2023-08-18T09:28:14.307945Z", "iopub.status.idle": "2023-08-18T09:28:17.803880Z", "shell.execute_reply": "2023-08-18T09:28:17.802763Z" }, "origin_pos": 3, "tab": [ "paddle" ] }, "outputs": [], "source": [ "import warnings\n", "from d2l import paddle as d2l\n", "\n", "warnings.filterwarnings(\"ignore\")\n", "import paddle\n", "from paddle import nn" ] }, { "cell_type": "markdown", "id": "0ac704e6", "metadata": { "origin_pos": 4 }, "source": [ "## [**简单网络**]\n", "\n", "让我们使用一个比 :numref:`sec_multi_gpu`的LeNet更有意义的网络,它依然能够容易地和快速地训练。我们选择的是 :cite:`He.Zhang.Ren.ea.2016`中的ResNet-18。因为输入的图像很小,所以稍微修改了一下。与 :numref:`sec_resnet`的区别在于,我们在开始时使用了更小的卷积核、步长和填充,而且删除了最大汇聚层。\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "1f054417", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T09:28:17.808279Z", "iopub.status.busy": "2023-08-18T09:28:17.807748Z", "iopub.status.idle": "2023-08-18T09:28:17.816874Z", "shell.execute_reply": "2023-08-18T09:28:17.816075Z" }, "origin_pos": 7, "tab": [ "paddle" ] }, "outputs": [], "source": [ "#@save\n", "def resnet18(num_classes, in_channels=1):\n", " \"\"\"稍加修改的ResNet-18模型\"\"\"\n", " def resnet_block(in_channels, out_channels, num_residuals,\n", " first_block=False):\n", " blk = []\n", " for i in range(num_residuals):\n", " if i == 0 and not first_block:\n", " blk.append(d2l.Residual(in_channels, out_channels,\n", " use_1x1conv=True, strides=2))\n", " else:\n", " blk.append(d2l.Residual(out_channels, out_channels))\n", " return nn.Sequential(*blk)\n", "\n", " # 该模型使用了更小的卷积核、步长和填充,而且删除了最大汇聚层\n", " net = nn.Sequential(\n", " nn.Conv2D(in_channels, 64, kernel_size=3, stride=1, padding=1),\n", " nn.BatchNorm2D(64),\n", " nn.ReLU())\n", " net.add_sublayer(\"resnet_block1\", resnet_block(\n", " 64, 64, 2, first_block=True))\n", " net.add_sublayer(\"resnet_block2\", resnet_block(64, 128, 2))\n", " net.add_sublayer(\"resnet_block3\", resnet_block(128, 256, 2))\n", " net.add_sublayer(\"resnet_block4\", resnet_block(256, 512, 2))\n", " net.add_sublayer(\"global_avg_pool\", nn.AdaptiveAvgPool2D((1, 1)))\n", " net.add_sublayer(\"fc\", nn.Sequential(nn.Flatten(),\n", " nn.Linear(512, num_classes)))\n", " return net" ] }, { "cell_type": "markdown", "id": "6e0484f4", "metadata": { "origin_pos": 8 }, "source": [ "## 网络初始化\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "7cd01bc7", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T09:28:17.820388Z", "iopub.status.busy": "2023-08-18T09:28:17.819773Z", "iopub.status.idle": "2023-08-18T09:28:18.904196Z", "shell.execute_reply": "2023-08-18T09:28:18.903296Z" }, "origin_pos": 13, "tab": [ "paddle" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "W0818 09:28:17.822042 95393 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.8, Runtime API Version: 11.8\n", "W0818 09:28:17.852774 95393 gpu_resources.cc:91] device: 0, cuDNN Version: 8.7.\n" ] } ], "source": [ "net = resnet18(10)\n", "# 获取GPU列表\n", "devices = d2l.try_all_gpus()\n", "# 我们将在训练代码实现中初始化网络" ] }, { "cell_type": "markdown", "id": "a1f70a4c", "metadata": { "origin_pos": 20 }, "source": [ "## [**训练**]\n", "\n", "如前所述,用于训练的代码需要执行几个基本功能才能实现高效并行:\n", "\n", "* 需要在所有设备上初始化网络参数;\n", "* 在数据集上迭代时,要将小批量数据分配到所有设备上;\n", "* 跨设备并行计算损失及其梯度;\n", "* 聚合梯度,并相应地更新参数。\n", "\n", "最后,并行地计算精确度和发布网络的最终性能。除了需要拆分和聚合数据外,训练代码与前几章的实现非常相似。\n" ] }, { "cell_type": "code", "execution_count": 4, "id": "8e36331f", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T09:28:18.908343Z", "iopub.status.busy": "2023-08-18T09:28:18.907698Z", "iopub.status.idle": "2023-08-18T09:28:18.916081Z", "shell.execute_reply": "2023-08-18T09:28:18.915310Z" }, "origin_pos": 23, "tab": [ "paddle" ] }, "outputs": [], "source": [ "def train(net, num_gpus, batch_size, lr):\n", " train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)\n", " devices = [d2l.try_gpu(i) for i in range(num_gpus)]\n", "\n", " init_normal = nn.initializer.Normal(mean=0.0, std=0.01)\n", " for i in net.sublayers():\n", " if type(i) in [nn.Linear, nn.Conv2D]:\n", " init_normal(i.weight)\n", "\n", " # 在多个 GPU 上设置模型\n", " net = paddle.DataParallel(net)\n", " trainer = paddle.optimizer.SGD(parameters=net.parameters(), learning_rate=lr)\n", " loss = nn.CrossEntropyLoss()\n", " timer, num_epochs = d2l.Timer(), 10\n", " animator = d2l.Animator('epoch', 'test acc', xlim=[1, num_epochs])\n", " for epoch in range(num_epochs):\n", " net.train()\n", " timer.start()\n", " for X, y in train_iter:\n", " trainer.clear_grad()\n", " X, y = paddle.to_tensor(X, place=devices[0]), paddle.to_tensor(y, place=devices[0])\n", " l = loss(net(X), y)\n", " l.backward()\n", " trainer.step()\n", " timer.stop()\n", " animator.add(epoch + 1, (d2l.evaluate_accuracy_gpu(net, test_iter),))\n", " print(f'测试精度:{animator.Y[0][-1]:.2f}, {timer.avg():.1f}秒/轮,'\n", " f'在{str(devices)}')" ] }, { "cell_type": "markdown", "id": "a8965206", "metadata": { "origin_pos": 24 }, "source": [ "接下来看看这在实践中是如何运作的。我们先[**在单个GPU上训练网络**]进行预热。\n" ] }, { "cell_type": "code", "execution_count": 5, "id": "bf53c350", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T09:28:18.919218Z", "iopub.status.busy": "2023-08-18T09:28:18.918903Z", "iopub.status.idle": "2023-08-18T09:30:49.973025Z", "shell.execute_reply": "2023-08-18T09:30:49.971682Z" }, "origin_pos": 26, "tab": [ "paddle" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "测试精度:0.91, 13.1秒/轮,在[Place(gpu:0)]\n" ] }, { "data": { "image/svg+xml": [ "\n", "\n", "\n", " \n", " \n", " \n", " \n", " 2023-08-18T09:30:49.941033\n", " image/svg+xml\n", " \n", " \n", " Matplotlib v3.5.1, https://matplotlib.org/\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n" ], "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "train(net, num_gpus=1, batch_size=256, lr=0.1)" ] }, { "cell_type": "markdown", "id": "55647fad", "metadata": { "origin_pos": 27 }, "source": [ "接下来我们[**使用2个GPU进行训练**]。与 :numref:`sec_multi_gpu`中评估的LeNet相比,ResNet-18的模型要复杂得多。这就是显示并行化优势的地方,计算所需时间明显大于同步参数需要的时间。因为并行化开销的相关性较小,因此这种操作提高了模型的可伸缩性。\n" ] }, { "cell_type": "markdown", "id": "b0813a93", "metadata": { "origin_pos": 30 }, "source": [ "## 小结\n" ] }, { "cell_type": "markdown", "id": "97c92556", "metadata": { "origin_pos": 32, "tab": [ "paddle" ] }, "source": [ "* 神经网络可以在(可找到数据的)单GPU上进行自动评估。\n", "* 每台设备上的网络需要先初始化,然后再尝试访问该设备上的参数,否则会遇到错误。\n", "* 优化算法在多个GPU上自动聚合。\n" ] }, { "cell_type": "markdown", "id": "b2429ea2", "metadata": { "origin_pos": 33 }, "source": [ "## 练习\n" ] }, { "cell_type": "markdown", "id": "31d9f7c0", "metadata": { "origin_pos": 35, "tab": [ "paddle" ] }, "source": [ "1. 本节使用ResNet-18,请尝试不同的迭代周期数、批量大小和学习率,以及使用更多的GPU进行计算。如果使用$16$个GPU(例如,在AWS p2.16xlarge实例上)尝试此操作,会发生什么?\n", "1. 有时候不同的设备提供了不同的计算能力,我们可以同时使用GPU和CPU,那应该如何分配工作?为什么?\n" ] }, { "cell_type": "markdown", "id": "8af69412", "metadata": { "origin_pos": 38, "tab": [ "paddle" ] }, "source": [ "[Discussions](https://discuss.d2l.ai/t/11861)\n" ] } ], "metadata": { "kernelspec": { "display_name": "conda_paddle_p36", "name": "conda_paddle_p36" }, "language_info": { "name": "python" }, "required_libs": [] }, "nbformat": 4, "nbformat_minor": 5 }