{
"cells": [
{
"cell_type": "markdown",
"id": "b4abb964",
"metadata": {},
"source": [
"Installing (updating) the following libraries for your SageMaker\n",
"instance."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a4e39343",
"metadata": {},
"outputs": [],
"source": [
"!pip install .. # installing d2l\n"
]
},
{
"cell_type": "markdown",
"id": "3c1c4d8e",
"metadata": {
"origin_pos": 0
},
"source": [
"# Concise Implementation for Multiple GPUs\n",
":label:`sec_multi_gpu_concise`\n",
"\n",
"Implementing parallel computation from scratch for every new model is no fun. Moreover, there is much to be gained from optimizing synchronization tools for high performance. Below we will show how to do this using the high-level APIs of a deep learning framework. The mathematics and the algorithms are the same as in :numref:`sec_multi_gpu`. The code in this section requires at least two GPUs to run.\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f997430f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T09:28:14.308570Z",
"iopub.status.busy": "2023-08-18T09:28:14.307945Z",
"iopub.status.idle": "2023-08-18T09:28:17.803880Z",
"shell.execute_reply": "2023-08-18T09:28:17.802763Z"
},
"origin_pos": 3,
"tab": [
"paddle"
]
},
"outputs": [],
"source": [
"import warnings\n",
"from d2l import paddle as d2l\n",
"\n",
"warnings.filterwarnings(\"ignore\")\n",
"import paddle\n",
"from paddle import nn"
]
},
{
"cell_type": "markdown",
"id": "0ac704e6",
"metadata": {
"origin_pos": 4
},
"source": [
"## [**A Toy Network**]\n",
"\n",
"Let us use a slightly more meaningful network than LeNet from :numref:`sec_multi_gpu`, one that is still sufficiently easy and fast to train. We pick ResNet-18 from :cite:`He.Zhang.Ren.ea.2016`. Since the input images are tiny, we modify it slightly. The difference from :numref:`sec_resnet` is that we use a smaller convolution kernel, stride, and padding at the beginning, and we remove the max-pooling layer.\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1f054417",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T09:28:17.808279Z",
"iopub.status.busy": "2023-08-18T09:28:17.807748Z",
"iopub.status.idle": "2023-08-18T09:28:17.816874Z",
"shell.execute_reply": "2023-08-18T09:28:17.816075Z"
},
"origin_pos": 7,
"tab": [
"paddle"
]
},
"outputs": [],
"source": [
"#@save\n",
"def resnet18(num_classes, in_channels=1):\n",
"    \"\"\"A slightly modified ResNet-18 model\"\"\"\n",
" def resnet_block(in_channels, out_channels, num_residuals,\n",
" first_block=False):\n",
" blk = []\n",
" for i in range(num_residuals):\n",
" if i == 0 and not first_block:\n",
" blk.append(d2l.Residual(in_channels, out_channels,\n",
" use_1x1conv=True, strides=2))\n",
" else:\n",
" blk.append(d2l.Residual(out_channels, out_channels))\n",
" return nn.Sequential(*blk)\n",
"\n",
"    # This model uses a smaller convolution kernel, stride, and padding and\n",
"    # removes the max-pooling layer\n",
" net = nn.Sequential(\n",
" nn.Conv2D(in_channels, 64, kernel_size=3, stride=1, padding=1),\n",
" nn.BatchNorm2D(64),\n",
" nn.ReLU())\n",
" net.add_sublayer(\"resnet_block1\", resnet_block(\n",
" 64, 64, 2, first_block=True))\n",
" net.add_sublayer(\"resnet_block2\", resnet_block(64, 128, 2))\n",
" net.add_sublayer(\"resnet_block3\", resnet_block(128, 256, 2))\n",
" net.add_sublayer(\"resnet_block4\", resnet_block(256, 512, 2))\n",
" net.add_sublayer(\"global_avg_pool\", nn.AdaptiveAvgPool2D((1, 1)))\n",
" net.add_sublayer(\"fc\", nn.Sequential(nn.Flatten(),\n",
" nn.Linear(512, num_classes)))\n",
" return net"
]
},
{
"cell_type": "markdown",
"id": "6e0484f4",
"metadata": {
"origin_pos": 8
},
"source": [
"## Network Initialization\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "7cd01bc7",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T09:28:17.820388Z",
"iopub.status.busy": "2023-08-18T09:28:17.819773Z",
"iopub.status.idle": "2023-08-18T09:28:18.904196Z",
"shell.execute_reply": "2023-08-18T09:28:18.903296Z"
},
"origin_pos": 13,
"tab": [
"paddle"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"W0818 09:28:17.822042 95393 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.8, Runtime API Version: 11.8\n",
"W0818 09:28:17.852774 95393 gpu_resources.cc:91] device: 0, cuDNN Version: 8.7.\n"
]
}
],
"source": [
"net = resnet18(10)\n",
"# Get a list of GPUs\n",
"devices = d2l.try_all_gpus()\n",
"# We will initialize the network inside the training code"
]
},
{
"cell_type": "markdown",
"id": "a1f70a4c",
"metadata": {
"origin_pos": 20
},
"source": [
"## [**Training**]\n",
"\n",
"As before, the training code needs to perform several basic functions for efficient parallelism:\n",
"\n",
"* Network parameters need to be initialized across all devices.\n",
"* While iterating over the dataset, minibatches are to be divided across all devices.\n",
"* We compute the loss and its gradient in parallel across devices.\n",
"* Gradients are aggregated and parameters are updated accordingly.\n",
"\n",
"In the end, we compute the accuracy (again in parallel) to report the final performance of the network. Aside from the need to split and aggregate data, the training code is quite similar to the implementations in previous chapters.\n"
]
},
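{
"cell_type": "markdown",
"id": "grad-agg-sketch-md",
"metadata": {},
"source": [
"The gradient-aggregation step above is handled internally by the framework (e.g. via an all-reduce across devices). As a framework-free sketch of just that step, assuming each device holds a list of per-parameter gradients (`allreduce_sketch` is a hypothetical helper, not part of d2l):\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "grad-agg-sketch-code",
"metadata": {},
"outputs": [],
"source": [
"# Sketch of gradient aggregation (\"all-reduce\"): every device contributes\n",
"# its local gradients, and every device receives the element-wise sum.\n",
"def allreduce_sketch(device_grads):\n",
"    summed = [sum(gs) for gs in zip(*device_grads)]\n",
"    return [list(summed) for _ in device_grads]\n",
"\n",
"grads = [[1.0, 2.0], [3.0, 4.0]]  # two devices, two parameters each\n",
"allreduce_sketch(grads)  # [[4.0, 6.0], [4.0, 6.0]]"
]
},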
{
"cell_type": "code",
"execution_count": 4,
"id": "8e36331f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T09:28:18.908343Z",
"iopub.status.busy": "2023-08-18T09:28:18.907698Z",
"iopub.status.idle": "2023-08-18T09:28:18.916081Z",
"shell.execute_reply": "2023-08-18T09:28:18.915310Z"
},
"origin_pos": 23,
"tab": [
"paddle"
]
},
"outputs": [],
"source": [
"def train(net, num_gpus, batch_size, lr):\n",
" train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)\n",
" devices = [d2l.try_gpu(i) for i in range(num_gpus)]\n",
"\n",
" init_normal = nn.initializer.Normal(mean=0.0, std=0.01)\n",
" for i in net.sublayers():\n",
" if type(i) in [nn.Linear, nn.Conv2D]:\n",
" init_normal(i.weight)\n",
"\n",
"    # Set up the model on multiple GPUs\n",
" net = paddle.DataParallel(net)\n",
" trainer = paddle.optimizer.SGD(parameters=net.parameters(), learning_rate=lr)\n",
" loss = nn.CrossEntropyLoss()\n",
" timer, num_epochs = d2l.Timer(), 10\n",
" animator = d2l.Animator('epoch', 'test acc', xlim=[1, num_epochs])\n",
" for epoch in range(num_epochs):\n",
" net.train()\n",
" timer.start()\n",
" for X, y in train_iter:\n",
" trainer.clear_grad()\n",
" X, y = paddle.to_tensor(X, place=devices[0]), paddle.to_tensor(y, place=devices[0])\n",
" l = loss(net(X), y)\n",
" l.backward()\n",
" trainer.step()\n",
" timer.stop()\n",
" animator.add(epoch + 1, (d2l.evaluate_accuracy_gpu(net, test_iter),))\n",
"    print(f'test acc: {animator.Y[0][-1]:.2f}, {timer.avg():.1f} sec/epoch '\n",
"          f'on {str(devices)}')"
]
},
{
"cell_type": "markdown",
"id": "a8965206",
"metadata": {
"origin_pos": 24
},
"source": [
"Next, let's see how this works in practice. As a warm-up we [**train the network on a single GPU**].\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "bf53c350",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T09:28:18.919218Z",
"iopub.status.busy": "2023-08-18T09:28:18.918903Z",
"iopub.status.idle": "2023-08-18T09:30:49.973025Z",
"shell.execute_reply": "2023-08-18T09:30:49.971682Z"
},
"origin_pos": 26,
"tab": [
"paddle"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"test acc: 0.91, 13.1 sec/epoch on [Place(gpu:0)]\n"
]
},
{
"data": {
"image/svg+xml": [
"\n",
"\n",
"\n"
],
"text/plain": [
"