{ "cells": [ { "cell_type": "markdown", "id": "b22066fd", "metadata": {}, "source": [ "# Zero Redundancy Optimization (ZeRO)\n", "\n", "이번 세션에는 Microsoft의 뉴럴넷 학습 최적화 솔루션인 ZeRO에 대해서 알아보도록 하겠습니다." ] }, { "cell_type": "markdown", "id": "7bea8763", "metadata": {}, "source": [ "## 1. Mixed Precision\n", "\n", "최신 GPU들이 Lower precision에 대한 계산을 지원하면서 현대의 뉴럴넷 학습은 대부분 FP16(half)과 FP32(single)을 함께 사용하는 Mixed precision 방식을 사용합니다. V100 기준으로 FP32에서 속도가 14TFLOPS 정도라면, FP16에서는 100TFLOPS의 속도로 모델을 학습할 수 있습니다. 또한 FP16을 사용하면 모델의 사이즈가 줄기 때문에 학습 뿐만 아니라 배포시에도 장점이 있죠.\n", "\n", "
\n", "\n", "![](../images/mixed_precision_1.png)\n", "\n", "
\n", "\n", "### 그런데 왜 Mixed?\n", "그런데 여기에서 의문이 듭니다. FP16으로만 모델을 학습시키면 되지, 굳이 FP32와 FP16을 같이 쓸 필요가 있을까요? 결과부터 말하자면 FP16만으로 학습시 Loss가 심하게 발산하여 학습이 거의 불가능합니다. Gradient를 FP16로 유지하면 대부분의 소수점을 버리는 것이기 때문에 정밀한 학습이 불가능해집니다. 따라서 속도가 빠른 FP16과 정확도가 높은 FP32를 모두 사용해서 두 방식의 장점만을 취하려고 하는 것이죠. \n", "\n", "![](../images/ddp_analysis_3.png)\n", "\n", "Computation cost가 큰 Forward와 Backward는 FP16 모델로 하고, 계산된 Gradient를 정밀도가 높은 FP32 모델에 복사해서 weight를 업데이트 합니다. 그런데 여기서 궁금한 점이 생깁니다. FP16의 Gradient를 FP32에 적용하려면 어떻게 해야할까요? 연구진이 실험한 결과, FP16으로 계산된 Loss를 Backward 하면 크기가 크기가 작았던 일부 값들(그림에서 왼쪽)은 계산이 되면서 0으로 변해버렸다고 합니다.\n", "\n", "![](../images/mixed_precision_4.png)\n", "\n", "
\n", "\n", "### Loss Scaling\n", "이러한 문제를 어떻게 해결할 수 있을까요? 매우 심플한 아이디어로, Loss Gradient에 매우 큰 값을 곱해줘서 분포를 오른쪽으로 밀어주면 됩니다. 이러한 기술의 이름을 Loss scaling이라고 합니다. FP16의 Loss에 매우 큰 값을 곱하면, FP32에 적용 했을 때 사라져 버릴 수 있는 값들도 잘 살려낼 수 있죠.\n", "\n", "![](../images/mixed_precision_5.png)\n" ] }, { "cell_type": "code", "execution_count": null, "id": "52e3d53a", "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "참고: apex/apex/amp/opt.py\n", "\"\"\"\n", "\n", "import contextlib\n", "\n", "@contextlib.contextmanager\n", "def scale_loss(self, loss):\n", " if not self._amp_handle.is_active():\n", " yield loss\n", " return\n", "\n", " # When there are multiple losses per-optimizer, we need\n", " # to save out current grad accumulation, since we won't be\n", " # able to unscale this particulare loss once the grads are\n", " # all mixed together.\n", " cached_grads = []\n", " if self._loss_idx > 0:\n", " for p in master_params(self._optimizer):\n", " if p.grad is not None:\n", " cached_grads.append(p.grad.data.detach().clone())\n", " else:\n", " cached_grads.append(None)\n", " self._optimizer.zero_grad()\n", "\n", " loss_scale = self._cur_loss_scaler().loss_scale()\n", " yield loss * loss_scale" ] }, { "cell_type": "code", "execution_count": null, "id": "5f4813d2", "metadata": { "scrolled": true }, "outputs": [], "source": [ "\"\"\"\n", "참고: apex/tests/L0/run_amp/test_fused_sgd.py\n", "\"\"\"\n", "\n", "with amp.scale_loss(loss0, optimizer, loss_id=loss_ids[0]) as scaled_loss:\n", " scaled_loss.backward()\n", " if i == inject_inf and which_backward == 0:\n", " if inject_inf_loc == \"fp32\":\n", " model0.weight0.grad[0] = float('inf')\n", " elif inject_inf_loc == \"fp16\":\n", " model0.weight1.grad[0] = float('inf')" ] }, { "cell_type": "markdown", "id": "d8e2f8c3", "metadata": {}, "source": [ "실제로 아래 그림처럼 Loss에 큰 값을 곱해주면 발산하지 않고 학습이 잘 되었다고 합니다. 회색 그래프는 scaling을 하지 않았을때, 녹색은 scaling 했을때의 성능입니다. 놀랍게도 FP32와 성능이 거의 흡사하죠.\n", "\n", "![](../images/mixed_precision_2.png)\n", "\n", "이러한 이유로 FP16과 FP32를 함께 사용하는 Mixed precision은 현대 뉴럴넷 학습에 거의 필수가 되었습니다. FP16 정도의 저장 용량으로 FP32의 커버리지를 커버하는 bfloat16 (Google TPU) 방식이 지금보다 더 다양한 GPU에서 지원되고 대중화 되기 전까지는 FP16 + 32의 Mixed precision training은 뉴럴넷 학습에 필수적으로 쓰이는 기술일 것입니다.\n", "\n", "
\n", "\n", "### Mixed Precision의 동작방식\n", "\n", "다음은 Mixed Precision의 동작 방식을 나타낸 그림입니다. 코드와 수식을 이용해 진행 과정을 자세히 살펴봅시다.\n", "\n", "
\n", "\n", "![](../images/mixed_precision_33.png)" ] }, { "cell_type": "markdown", "id": "c3b82a79", "metadata": {}, "source": [ "### 0) 모델과 옵티마이저 생성 \n", "\n", "2개의 레이어를 가진 뉴럴넷을 정의합니다." ] }, { "cell_type": "code", "execution_count": 1, "id": "13f68e55", "metadata": {}, "outputs": [], "source": [ "import torch\n", "import torch.nn as nn\n", "\n", "class Net(nn.Module):\n", " def __init__(self):\n", " super().__init__()\n", " self.w1 = nn.Linear(512, 512, bias=False)\n", " self.w2 = nn.Linear(512, 1, bias=False)\n", " \n", " def forward(self, x):\n", " z1 = self.w1(x)\n", " z2 = self.w2(z1)\n", " return z2" ] }, { "cell_type": "markdown", "id": "82ecdfd8", "metadata": {}, "source": [ "학습할 뉴럴넷과, 옵티마이저 생성합니다." ] }, { "cell_type": "code", "execution_count": 2, "id": "a9969279", "metadata": {}, "outputs": [], "source": [ "from torch.optim import SGD\n", "\n", "fp32_model= Net().to(\"cuda\")\n", "optimizer = SGD(fp32_model.parameters(), lr=1e-2)" ] }, { "cell_type": "code", "execution_count": 3, "id": "68989b0d", "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "'GPU = 1.001953125 GiB'" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "f\"GPU = {torch.cuda.memory_allocated(0) / (1024 ** 2)} GiB\"" ] }, { "cell_type": "markdown", "id": "76eb3afb", "metadata": {}, "source": [ "
\n", "\n", "### 1) Float2Half\n", "\n", "이 과정은 단순히 `0.524796132`와 같은 파라미터를 `0.5247`과 같이 잘라내는 작업입니다.\n", "\n", "보시다시피 용량도 FP32 모델의 절반정도 사이즈를 가집니다. (1.0 GB + 0.5 GB)" ] }, { "cell_type": "code", "execution_count": 4, "id": "b9e1e4cc", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fp16_model = Net().half().to(\"cuda\")\n", "fp16_model.load_state_dict(fp32_model.state_dict())" ] }, { "cell_type": "code", "execution_count": 5, "id": "b9da7079", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'GPU = 1.5029296875 GiB'" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "f\"GPU = {torch.cuda.memory_allocated(0) / (1024 ** 2)} GiB\"" ] }, { "cell_type": "markdown", "id": "310dbf91", "metadata": {}, "source": [ "
\n", "\n", "### 2) Forward\n", "\n", "fp16으로 복사된 모델을 이용하여 forward pass를 수행합니다.\n", "\n", "$z_1 = w_1 \\cdot x \\; $ (FWD: layer1)\n", "\n", "$z_2 = w_2 \\cdot z_1 \\; $ (FWD: layer2)" ] }, { "cell_type": "code", "execution_count": 6, "id": "799f9620", "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "'logits type = torch.float16'" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import torch\n", "\n", "# example input sizes\n", "batch_size, hidden_size = 4, 512\n", "\n", "# create dummy data (bsz=4, hid=256)\n", "x = torch.randn(batch_size,hidden_size, dtype=torch.half, device=\"cuda\") \n", "\n", "# do forward\n", "z2 = fp16_model(x)\n", "\n", "# check dtypr of output logits\n", "f\"logits type = {z2.dtype}\"" ] }, { "cell_type": "markdown", "id": "859da756", "metadata": {}, "source": [ "계산된 FP16의 출력값을 이용하여 Loss를 계산합니다.\n", "\n", "$L = \\frac{(y - z_2)^2}{2} \\; $ (Loss computation)" ] }, { "cell_type": "code", "execution_count": 7, "id": "bc12ca00", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'loss type = torch.float16'" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# craete dummy data (bsz=4)\n", "y = torch.tensor([[1.9], [9.5], [0.9], [1.2]], dtype=torch.half, device=\"cuda\")\n", "\n", "# compute mean square error loss\n", "L = torch.nn.functional.mse_loss(z2, y)\n", "\n", "# check dtype of loss\n", "f\"loss type = {L.dtype}\"" ] }, { "cell_type": "markdown", "id": "83f14130", "metadata": {}, "source": [ "
\n", "\n", "### 3) Backward \n", "\n", "이제 $w_n := w_n - lr \\cdot \\frac{dL}{dw_n}$와 같은 Gradient Descent Rule로 모델의 파라미터를 업데이트 해야 합니다.\n", "\n", "따라서 $\\frac{dL}{dw_1}$과 $\\frac{dL}{dw_2}$와 같은 Gradient를 구해야 하는데요. 이들은 대략 아래와 같습니다. (chain rule에 의해서 원하는 결과를 얻을 수 있습니다.)\n", "\n", "$\\frac{dL}{dw_2} = \\frac{dL}{dz_2} \\cdot \\frac{dz_2}{dw_2}$\n", "\n", "$\\frac{dL}{dw_1} = \\frac{dL}{dz_2} \\cdot \\frac{dz_2}{dz_1} \\cdot \\frac{dz_1}{dw_1}$\n", "\n", "\n", "
\n", "\n", "\n", "구체적으로는 아래와 같습니다.\n", "\n", "$\\frac{dL}{dz_2} = y - z_2 \\; $ (BWD-activation: layer2)\n", "\n", "$\\frac{dz_2}{dw_2} = z_1 \\;$ (BWD-weight: layer2)\n", "\n", "$\\frac{dz_2}{dz_1} = w_2 \\;$ (BWD-activation: layer1)\n", "\n", "$\\frac{dz_1}{dw_1} = x \\; $ (BWD-weight: layer1)\n", "\n", "
\n", "\n", "$\\frac{dL}{dw_2} = (y - z_2) \\cdot z_1$\n", "\n", "$\\frac{dL}{dw_1} = (y - z_2) \\cdot w_2 \\cdot x$\n" ] }, { "cell_type": "code", "execution_count": 8, "id": "2bb6019c", "metadata": {}, "outputs": [], "source": [ "# loss scaling\n", "L *= 1024\n", "\n", "# do backward\n", "L.backward()" ] }, { "cell_type": "markdown", "id": "ce2ce2c3", "metadata": {}, "source": [ "
\n", "\n", "### 4) Update Weight\n", "\n", "마지막으로 파라미터를 업데이트하기 위해 `optimizer.step()`를 수행합니다.\n", "\n", "$w_1 := w_1 - lr \\cdot \\frac{dL}{dw_1} \\; $ (Weight Update)\n", "\n", "$w_2 := w_2 - lr \\cdot \\frac{dL}{dw_2} \\; $ (Weight Update)" ] }, { "cell_type": "code", "execution_count": 9, "id": "0a2b932a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "before: Parameter containing:\n", "tensor([[-0.0285, -0.0272, 0.0099, ..., -0.0160, -0.0233, -0.0210],\n", " [-0.0306, -0.0086, 0.0071, ..., -0.0419, -0.0042, -0.0374],\n", " [-0.0373, -0.0028, 0.0178, ..., 0.0378, 0.0006, -0.0308],\n", " ...,\n", " [-0.0375, 0.0126, 0.0283, ..., -0.0325, 0.0352, -0.0250],\n", " [ 0.0003, 0.0387, -0.0165, ..., 0.0273, 0.0281, -0.0034],\n", " [-0.0269, -0.0369, 0.0181, ..., 0.0065, -0.0368, 0.0088]],\n", " device='cuda:0', requires_grad=True)\n", "\n", "after: Parameter containing:\n", "tensor([[-0.0285, -0.0272, 0.0099, ..., -0.0160, -0.0233, -0.0210],\n", " [-0.0306, -0.0086, 0.0071, ..., -0.0419, -0.0042, -0.0374],\n", " [-0.0373, -0.0028, 0.0178, ..., 0.0378, 0.0006, -0.0308],\n", " ...,\n", " [-0.0375, 0.0126, 0.0283, ..., -0.0325, 0.0352, -0.0250],\n", " [ 0.0003, 0.0387, -0.0165, ..., 0.0273, 0.0281, -0.0034],\n", " [-0.0269, -0.0369, 0.0181, ..., 0.0065, -0.0368, 0.0088]],\n", " device='cuda:0', requires_grad=True)\n", "\n" ] } ], "source": [ "print(f'before: {fp32_model.w1.weight}\\n')\n", "optimizer.step()\n", "print(f'after: {fp32_model.w1.weight}\\n')" ] }, { "cell_type": "markdown", "id": "69e1cf04", "metadata": {}, "source": [ "생각해보면, FP32 모델은 forward & backward를 수행한적이 없었죠. 따라서 gradient 텐서를 갖고있지 않습니다. 그래서 `optimizer.step()`을 수행 해도 값이 변하지 않았습니다. 따라서 `optimizer.step()`을 수행하기 전에, `backward()`를 거친 FP16모델의 gradient를 복사해야 합니다.\n", "\n", "참고로 PyTorch는 파라미터(`nn.Parameter`) 중 `requires_grad=True`로 설정된 파라미터들은 모두 `grad`라는 애트리뷰트를 가지고 있습니다. 모델이 출력한 텐서의 `backward`가 호출되면 graph를 타고 뒤로 돌아오면서 미분 계산을 수행하고 결과 값을 `grad`라는 공간에 저장합니다. `grad`는 해당 텐서와 동일한 사이즈이기 때문에 모델의 용량이 10GB라면 gradient도 10GB 만큼 필요합니다. 우리가 인퍼런스 할 때 보다 학습할때 메모리가 훨씬 많이 필요한 이유 중 하나입니다. 
For tensors that are not needed for training, be sure to set `requires_grad` to `False` so you avoid this unnecessary memory cost.\n" ] }, { "cell_type": "code", "execution_count": 10, "id": "1e68d266", "metadata": { "scrolled": true }, "outputs": [], "source": [ "# copy the gradients to the FP32 model, dividing by the loss scale (1024)\n", "# to undo the scaling before the weights are updated\n", "fp32_model.w1.weight.grad = fp16_model.w1.weight.grad.float() / 1024\n", "fp32_model.w2.weight.grad = fp16_model.w2.weight.grad.float() / 1024" ] }, { "cell_type": "code", "execution_count": null, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "before: Parameter containing:\n", "tensor([[-0.0285, -0.0272, 0.0099, ..., -0.0160, -0.0233, -0.0210],\n", " [-0.0306, -0.0086, 0.0071, ..., -0.0419, -0.0042, -0.0374],\n", " [-0.0373, -0.0028, 0.0178, ..., 0.0378, 0.0006, -0.0308],\n", " ...,\n", " [-0.0375, 0.0126, 0.0283, ..., -0.0325, 0.0352, -0.0250],\n", " [ 0.0003, 0.0387, -0.0165, ..., 0.0273, 0.0281, -0.0034],\n", " [-0.0269, -0.0369, 0.0181, ..., 0.0065, -0.0368, 0.0088]],\n", " device='cuda:0', requires_grad=True)\n", "\n", "after: Parameter containing:\n", "tensor([[ 0.3496, 0.8134, 0.5690, ..., 1.9390, 0.6417, 0.6271],\n", " [ 0.9069, 2.0751, 1.3934, ..., 4.8056, 1.6446, 1.5701],\n", " [ 0.8458, 1.9610, 1.3240, ..., 4.6053, 1.5543, 1.4842],\n", " ...,\n", " [-0.8581, -1.8124, -1.1848, ..., -4.2750, -1.4086, -1.4325],\n", " [ 0.7041, 1.6037, 1.0241, ..., 3.6648, 1.2662, 1.2035],\n", " [-0.5163, -1.1244, -0.7056, ..., -2.5235, -0.8974, -0.8299]],\n", " device='cuda:0', requires_grad=True)\n", "\n" ] } ], "source": [ "print(f'before: {fp32_model.w1.weight}\\n')\n", "optimizer.step()\n", "print(f'after: {fp32_model.w1.weight}\\n')" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "source": [ "### Mixed Precision Training in PyTorch\n", "\n", "PyTorch lets you run mixed precision training easily, as shown below." ], "metadata": { "collapsed": false } }, { "cell_type": "code", "execution_count": null, "outputs": [], "source": [ "# Reference: https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/\n", "import torch\n", "# Creates once at the beginning of training\n", "scaler = torch.cuda.amp.GradScaler()\n", "\n", "for data, label in data_iter:\n", "    optimizer.zero_grad()\n", "    # Casts operations to mixed precision\n", "    with torch.cuda.amp.autocast():\n", "        loss = model(data)\n", "\n", "    # Scales the loss, and calls backward()\n", "    # to create scaled gradients\n", "    scaler.scale(loss).backward()\n", "\n", "    # Unscales gradients and calls\n", "    # or skips optimizer.step()\n", "    scaler.step(optimizer)\n", "\n", "    # Updates the scale for next iteration\n", "    scaler.update()" ], "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } } }, { "cell_type": "markdown", "id": "26b4c9e4", "metadata": {}, "source": [ "
\n", "\n", "### Dynamic Loss Scaling\n", "\n", "Loss Scaling은 Mixed Precision 학습을 매우 효과적으로 만들어줬습니다. 그러나 scale 수치를 몇으로 설정하는 것이 가장 좋을지 알기가 매우 어렵습니다. 따라서 몇몇 오픈소스에는 이러한 문제를 해결하기 위해 Dynamic Loss Scaling 기법을 제안합니다. 이는 NVIDIA의 `amp`나 MS의 `deepspeed`에도 구현되어 있습니다. \n", "\n", "Dynamic Loss Scaling의 아이디어는 매우 간단합니다. **목표는 Gradient의 소수점들이 Overflow 되지 않는 선에서 scale값을 최대로 유지하는 것**입니다. Gradient 값을 키우면 키울수록 좋지만 너무 커지면 Overflow가 발생하기 때문에 Overflow가 되지 않는 선에서 최대로 키워주는 것이죠. \n", "\n", "따라서 학습 초반에 매우 큰 값을 scale 값으로 설정합니다. `deepspeed`의 경우 기본 값이 $2^{32}$로 설정되어 있습니다. 이 값으로 Loss를 backward 해보고 만약 Gradient가 Overflow 되었다면 scale 값을 2배 줄입니다. 이 과정을 여러번 반복하면서 Overflow가 발생하지 않는 최대의 scale값을 찾아내는 것이 바로 Dynamic Loss Scaling입니다." ] }, { "cell_type": "markdown", "id": "d5516c0f", "metadata": {}, "source": [ "
\n", "\n", "### AMP (Apex Mixed Precision)\n", "\n", "`apex`는 NVIDIA에서 개발한 라이브러리로, Mixed Precision 라이브러리 중에서 가장 유명한 인지도를 가지고 있습니다. 요즘에는 `torch`자체에 mixed precision 기능이 내장되기도 하고 DeepSpeed, Pytorch-Lightning 등의 도구가 많이 나오게 돼서 `apex`를 예전만큼은 자주 사용하지 않지만 그래도 여전히 많이 사용되고 있는 라이브러리입니다. 사용법은 아래와 같이 매우 간단합니다." ] }, { "cell_type": "code", "execution_count": null, "id": "cbb27e10", "metadata": {}, "outputs": [], "source": [ "import torch\n", "from apex import amp\n", "\n", "\n", "# Declare model and optimizer as usual, with default (FP32) precision\n", "model = torch.nn.Linear(D_in, D_out).cuda()\n", "optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)\n", "\n", "# Allow Amp to perform casts as required by the opt_level\n", "model, optimizer = amp.initialize(model, optimizer, opt_level=\"O1\")\n", "\n", "# loss.backward() becomes:\n", "with amp.scale_loss(loss, optimizer) as scaled_loss:\n", " scaled_loss.backward()" ] }, { "cell_type": "markdown", "id": "31402eec", "metadata": {}, "source": [ "위 코드를 보면 `opt_level`이라는 파라미터가 보입니다. `apex`에는 mixed precision의 level을 설정할 수 있는 기능이 있는데 이를 알아두면 추후에 `apex`를 사용할 일이 생길때 매우 유용할 것입니다. (참고로 알파벳 O + 숫자 0,1,2,3입니다.)\n", "\n", "![](../images/apex.png)\n", "\n", "- `O0`: FP32 학습\n", "- `O1`: FP16을 잘 지원하는 Tensor Core 연산들은 FP16 / 나머지는 FP32\n", "- `O2`: Normalization의 weight를 제외한 모든 파라미터를 FP16으로 설정\n", "- `O3`: FP16 학습" ] }, { "cell_type": "markdown", "id": "c84dd1d5", "metadata": {}, "source": [ "
\n", "\n", "## 2. Zero Redundancy Optimization\n", "\n", "FP16과 FP32를 함께 사용하게 됨으로써 학습 속도는 매우 빨라지게 되었지만 단점이 생겼습니다. 바로 메모리인데요. FP32의 master weight과 FP16 파라미터, Gradient를 모두 GPU에 올려둔 상태이기 때문에 메모리가 기존보다 더 많이 필요해집니다. \n", "\n", "![](../images/zero_1.png)\n", "\n", "그리고 모델 파라미터가 FP16로 존재한다고 해도, Optimization은 FP32에서 일어나기 때문에 AdaGrad, Adam 등의 Adaptive optimizer 들이 필요로 하는 Variance 및 Momentum과 같은 텐서들은 여전히 FP32로 보관되어야 합니다.\n", "\n", "![](../images/adam.png)\n" ] }, { "cell_type": "code", "execution_count": null, "id": "327fa59f", "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "참고: pytorch/torch/optim/adam.py \n", "\"\"\"\n", "\n", "@torch.no_grad()\n", "def step(self, closure=None):\n", " \"\"\"Performs a single optimization step.\n", "\n", " Args:\n", " closure (callable, optional): A closure that reevaluates the model\n", " and returns the loss.\n", " \"\"\"\n", " loss = None\n", " if closure is not None:\n", " with torch.enable_grad():\n", " loss = closure()\n", "\n", " for group in self.param_groups:\n", " params_with_grad = []\n", " grads = []\n", " exp_avgs = []\n", " exp_avg_sqs = []\n", " max_exp_avg_sqs = []\n", " state_steps = []\n", " beta1, beta2 = group['betas']\n", "\n", " for p in group['params']:\n", " if p.grad is not None:\n", " params_with_grad.append(p)\n", " if p.grad.is_sparse:\n", " raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead')\n", " grads.append(p.grad)\n", "\n", " state = self.state[p]\n", " # Lazy state initialization \n", " # 모든 파라미터에 대해서 동일 사이즈로 `exp_avg`와 `exp_avg_sq`로 가지고 있음\n", " # 이 때문에 Adam 기반의 optimizer를 사용하면 모델 2개에 해당하는 GPU 메모리가 더 필요해짐. \n", " if len(state) == 0:\n", " state['step'] = 0\n", " # Exponential moving average of gradient values\n", " state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)\n", " # Exponential moving average of squared gradient values\n", " state['exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)\n", " if group['amsgrad']:\n", " # Maintains max of all exp. moving avg. of sq. grad. values\n", " state['max_exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)\n", "\n", " exp_avgs.append(state['exp_avg'])\n", " exp_avg_sqs.append(state['exp_avg_sq'])\n", "\n", " if group['amsgrad']:\n", " max_exp_avg_sqs.append(state['max_exp_avg_sq'])\n", "\n", " # update the steps for each param group update\n", " state['step'] += 1\n", " # record the step after step update\n", " state_steps.append(state['step'])\n", "\n", " F.adam(params_with_grad,\n", " grads,\n", " exp_avgs,\n", " exp_avg_sqs,\n", " max_exp_avg_sqs,\n", " state_steps,\n", " amsgrad=group['amsgrad'],\n", " beta1=beta1,\n", " beta2=beta2,\n", " lr=group['lr'],\n", " weight_decay=group['weight_decay'],\n", " eps=group['eps'])\n", " return loss" ] }, { "cell_type": "markdown", "id": "5b62bdaf", "metadata": {}, "source": [ "지금까지 FP16 parameter, gradient, FP32 parameter, gradient, momentum, variance 등 우리가 모델을 학습 할 때 메모리에 할당되는 텐서들의 종류에 대해서 조사했습니다. 놀라운 것은 진짜 모델이 차지하는 영역은 얼마 안된다는 것이죠. 이렇게 학습시에는 모델 외에도 **부가적으로 어마어마한 양의 텐서가 GPU 메모리에 할당됩니다.**\n", "\n", "
\n", "\n", "![](../images/memory.png)\n", "\n", "
\n", "\n", "추가로 **Data 텐서**와 **Activation 텐서**도 메모리에 할당됩니다. Data 텐서는 모델에 입력되기 전의 토큰 상태의 텐서를 의미하며, Activation 텐서는 Forward & Bacward 과정에서 연산되는 Hidden states 등의 텐서를 의미합니다. 추가로 분산처리를 수행하면 **통신 중에 텐서들을 담아둘 Bucket 공간** 등도 필요합니다. 버킷에 대해서는 이미 Data Parallelism 세션에서 Gradient Bucketing 등으로 다루었던 적이 있죠. 따라서 **모델과 데이터만 병렬화 할 것이 아니라 이러한 Optimizer States(분산, 모멘텀), Data & Activation Memory 등도 관리할 필요**가 있습니다. \n", "\n", "
\n", "\n", "Zero Redundancy Optimization (이하 ZeRO)는 이러한 부분들을 매우 효율적으로 관리 할 수 있도록 도와주는 **메모리 최적화 기술의 집합체**입니다. 크게 **ZeRO-DP** (ZeRO Data Parallelism)과 **ZeRO-R** (ZeRO Residual States) 등의 솔루션이 존재합니다. 이제부터 차근 차근 알아봅시다." ] }, { "cell_type": "markdown", "id": "b3344324", "metadata": {}, "source": [ "
\n", "\n", "## 3. ZeRO Data Parallelism\n", "\n", "가장 먼저 메모리 상태를 조사해보면, 위 그림에서 왼편 (FP16, 32, model & optimizer & gradient)가 가장 큰 공간을 차지합니다. 따라서 이들을 효율적으로 쪼개서 관리해야 합니다. ZeRO-DP는 Data Parallel과 함께 이러한 텐서들을 디바이스마다 쪼개서 관리 할 수 있도록 도와줍니다.\n", "\n", "![](../images/zero_2.png)\n", "\n", "ZeRO-DP는 4개의 stage로 나누어서 제공되고 있으며 `DeepSpeed` 라이브러리를 통해 선택적으로 적용 할 수 있습니다.\n", "\n", "- **Stage 0**: \n", " - No Partitioning\n", " - ZeRO-DP를 적용하지 않습니다.\n", "- **Stage 1**: \n", " - Optimizer States Partitioning\n", " - Optimizer Stages(모멘텀, 분산) 텐서를 여러 GPU로 분할합니다.\n", " - 메모리 소비량 4배 감소 \n", " - 기존과 비슷한 양의 Communication Cost\n", "- **Stage 2**: \n", " - Stage 1 + Gradient partitioning\n", " - Gradient(기울기) 텐서를 여러 GPU로 분할합니다.\n", " - 메모리 소비량 2배 더 감소\n", " - 기존과 비슷한 양의 Communication Cost\n", "- **Stage 3**: \n", " - Parameter partitioning\n", " - Parameter(모델) 텐서를 여러 GPU로 분할합니다.\n", " - 메모리 소비량 분할 수준에 따라 선형적 감소\n", " - 기존보다 1.5배 많은 Communication Cost\n", " \n", "ZeRO-DP의 동작은 매우 복잡하기 때문에 영상으로 확인하겠습니다.\n", "\n", "https://www.microsoft.com/en-us/research/uploads/prod/2020/02/Turing-Animation.mp4?_=1" ] }, { "cell_type": "code", "execution_count": 15, "id": "28a48886", "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "
\n", "
" ], "text/plain": [ "" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from IPython.display import HTML\n", "\n", "HTML(\"\"\"\n", "
\n", "
\"\"\")\n" ] }, { "cell_type": "markdown", "id": "2021a227", "metadata": {}, "source": [ "![](../images/zero_3.png)\n", "\n", "결론적으로 ZeRO-DP를 적용하면 기존보다 훨씬 큰 모델을 작은 GPU에서 학습시킬 수 있습니다. 바로 실습해봅시다. 먼저 configuration 파일을 만듭니다. 저는 learning rate scheduler, fp16, zero optimization (stage 1) 등을 활성화 시켰습니다. 이외에도 deepspeed configuration에는 매우 다양한 옵션들이 있습니다. 더 많은 옵션들은 https://www.deepspeed.ai/docs/config-json 여기에서 확인하세요." ] }, { "cell_type": "markdown", "id": "df7a1054", "metadata": {}, "source": [ "```\n", "{\n", " \"train_batch_size\": 16,\n", " \"gradient_accumulation_steps\": 1,\n", " \"scheduler\": {\n", " \"type\": \"WarmupDecayLR\",\n", " \"params\": {\n", " \"total_num_steps\": 300,\n", " \"warmup_min_lr\": 0,\n", " \"warmup_max_lr\": 3e-5,\n", " \"warmup_num_steps\": 30\n", " }\n", " },\n", " \"fp16\": {\n", " \"enabled\": true,\n", " \"initial_scale_power\": 32,\n", " \"loss_scale_window\": 1000,\n", " \"hysteresis\": 2,\n", " \"min_loss_scale\": 1\n", " },\n", " \"zero_optimization\": {\n", " \"stage\": 1\n", " },\n", " \"zero_allow_untested_optimizer\": true,\n", " \"wall_clock_breakdown\": false,\n", " \"steps_per_print\": 9999999999\n", "}\n", "\n", "```" ] }, { "cell_type": "markdown", "id": "dc7c1512", "metadata": {}, "source": [ "그리고 다음과 같은 코드를 작성합니다. argument parser의 옵션으로 `--local_rank`와 `--deepspeed_config`가 반드시 필요하며, 이 중 `--local_rank`는 스크립트 실행시에 자동으로 입력됩니다." ] }, { "cell_type": "code", "execution_count": null, "id": "fc6b403a", "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "src/zero_dp_args.py\n", "\"\"\"\n", "from argparse import ArgumentParser\n", "from datasets import load_dataset\n", "from torch.optim import Adam\n", "from torch.utils.data import DataLoader\n", "from transformers import GPT2LMHeadModel, GPT2Tokenizer\n", "import deepspeed\n", "import torch.distributed as dist\n", "\n", "model = GPT2LMHeadModel.from_pretrained(\"gpt2\")\n", "tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\n", "tokenizer.pad_token = tokenizer.eos_token\n", "\n", "parser = ArgumentParser()\n", "parser.add_argument(\n", " \"--deepspeed_config\", default=\"../src/zero_dp_config.json\", type=str\n", ")\n", "parser.add_argument(\"--local_rank\", default=0, type=int)\n", "args = parser.parse_args()\n", "\n", "optimizer = Adam(model.parameters(), lr=3e-5, weight_decay=3e-7)\n", "\n", "engine, optimizer, _, scheduler = deepspeed.initialize(\n", " args=args,\n", " model=model,\n", " optimizer=optimizer,\n", ")\n", "\n", "datasets = load_dataset(\"squad\").data[\"train\"][\"context\"]\n", "datasets = [str(sample) for sample in datasets]\n", "data_loader = DataLoader(datasets, batch_size=8, num_workers=8)\n", "\n", "for i, data in enumerate(data_loader):\n", " tokens = tokenizer(\n", " data,\n", " return_tensors=\"pt\",\n", " truncation=True,\n", " padding=True,\n", " max_length=1024,\n", " )\n", "\n", " loss = engine(\n", " input_ids=tokens.input_ids.cuda(),\n", " attention_mask=tokens.attention_mask.cuda(),\n", " labels=tokens.input_ids.cuda(),\n", " ).loss\n", "\n", " engine.backward(loss)\n", " engine.step()\n", "\n", " if i % 10 == 0 and dist.get_rank() == 0:\n", " print(f\"step:{i}, loss:{loss}\")\n", "\n", " if i >= 300:\n", " break\n" ] }, { "cell_type": "code", "execution_count": 24, "id": "9761d852", "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[2021-10-27 22:23:20,777] [WARNING] [runner.py:122:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.\n", "[2021-10-27 22:23:20,955] [INFO] 
[runner.py:360:main] cmd = /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgM119 --master_addr=127.0.0.1 --master_port=29500 ../src/zero_args.py --deepspeed_config=../src/zero_dp_config.json\n", "[2021-10-27 22:23:22,061] [INFO] [launch.py:80:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]}\n", "[2021-10-27 22:23:22,061] [INFO] [launch.py:89:main] nnodes=1, num_local_procs=4, node_rank=0\n", "[2021-10-27 22:23:22,062] [INFO] [launch.py:101:main] global_rank_mapping=defaultdict(, {'localhost': [0, 1, 2, 3]})\n", "[2021-10-27 22:23:22,062] [INFO] [launch.py:102:main] dist_world_size=4\n", "[2021-10-27 22:23:22,062] [INFO] [launch.py:105:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3\n", "[2021-10-27 22:23:27,188] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 22:23:27,191] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 22:23:27,255] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 22:23:27,259] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 22:23:27,266] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 22:23:27,270] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 22:23:27,273] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 22:23:27,276] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 22:23:32,824] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed groups\n", "[2021-10-27 22:23:32,824] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed model parallel group with size 1\n", "[2021-10-27 22:23:32,903] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed expert parallel group with size 1\n", "[2021-10-27 22:23:32,903] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert data parallel process group with ranks: [0, 1, 2, 3]\n", "[2021-10-27 22:23:32,903] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [0]\n", "[2021-10-27 22:23:32,903] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [1]\n", "[2021-10-27 22:23:32,904] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [2]\n", "[2021-10-27 22:23:32,904] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [3]\n", "[2021-10-27 22:23:33,170] [INFO] [engine.py:205:__init__] DeepSpeed Flops Profiler Enabled: False\n", "[2021-10-27 22:23:33,170] [INFO] [engine.py:849:_configure_optimizer] Removing param_group that has no 'params' in the client Optimizer\n", "[2021-10-27 22:23:33,171] [INFO] [engine.py:854:_configure_optimizer] Using client Optimizer as basic optimizer\n", "[2021-10-27 22:23:33,175] [INFO] [engine.py:871:_configure_optimizer] DeepSpeed Basic Optimizer = Adam\n", "[2021-10-27 22:23:33,175] [INFO] [utils.py:44:is_zero_supported_optimizer] Checking ZeRO support for optimizer=Adam type=\n", "[2021-10-27 22:23:33,176] [INFO] [logging.py:68:log_dist] [Rank 0] Creating fp16 ZeRO stage 1 optimizer\n", "[2021-10-27 22:23:33,176] [INFO] 
[stage2.py:111:__init__] Reduce bucket size 500000000\n", "[2021-10-27 22:23:33,176] [INFO] [stage2.py:112:__init__] Allgather bucket size 500000000\n", "[2021-10-27 22:23:33,176] [INFO] [stage2.py:113:__init__] CPU Offload: False\n", "[2021-10-27 22:23:33,176] [INFO] [stage2.py:114:__init__] Round robin gradient partitioning: False\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Emitting ninja build file /home/ubuntu/.cache/torch_extensions/utils/build.ninja...\n", "Building extension module utils...\n", "Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\n", "ninja: no work to do.\n", "Loading extension module utils...\n", "Time to load utils op: 0.3419780731201172 seconds\n", "Loading extension module utils...\n", "Time to load utils op: 0.402141809463501 seconds\n", "Loading extension module utils...\n", "Time to load utils op: 0.4021260738372803 seconds\n", "Loading extension module utils...\n", "Time to load utils op: 0.4021601676940918 seconds\n", "Rank: 0 partition count [4] and sizes[(31109952, False)] \n", "Rank: 2 partition count [4] and sizes[(31109952, False)] \n", "Rank: 3 partition count [4] and sizes[(31109952, False)] \n", "Rank: 1 partition count [4] and sizes[(31109952, False)] \n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "Time to load utils op: 0.000469207763671875 seconds\n", "Time to load utils op: 0.0004677772521972656 seconds\n", "Time to load utils op: 0.0004513263702392578 seconds\n", "[2021-10-27 22:23:34,521] [INFO] [utils.py:806:see_memory_usage] Before initializing optimizer states\n", "[2021-10-27 22:23:34,522] [INFO] [utils.py:811:see_memory_usage] MA 0.36 GB Max_MA 0.42 GB CA 0.61 GB Max_CA 1 GB \n", "[2021-10-27 22:23:34,522] [INFO] [utils.py:816:see_memory_usage] CPU Virtual Memory: used = 16.61 GB, percent = 6.9%\n", "[2021-10-27 22:23:34,563] [INFO] [utils.py:806:see_memory_usage] After initializing optimizer states\n", "[2021-10-27 22:23:34,564] [INFO] [utils.py:811:see_memory_usage] MA 0.59 GB Max_MA 1.06 GB CA 1.31 GB Max_CA 1 GB \n", "[2021-10-27 22:23:34,564] [INFO] [utils.py:816:see_memory_usage] CPU Virtual Memory: used = 16.61 GB, percent = 6.9%\n", "[2021-10-27 22:23:34,565] [INFO] [stage2.py:474:__init__] optimizer state initialized\n", "[2021-10-27 22:23:34,601] [INFO] [utils.py:806:see_memory_usage] After initializing ZeRO optimizer\n", "[2021-10-27 22:23:34,602] [INFO] [utils.py:811:see_memory_usage] MA 0.59 GB Max_MA 0.59 GB CA 1.31 GB Max_CA 1 GB \n", "[2021-10-27 22:23:34,602] [INFO] [utils.py:816:see_memory_usage] CPU Virtual Memory: used = 16.61 GB, percent = 6.9%\n", "[2021-10-27 22:23:34,602] [INFO] [logging.py:68:log_dist] [Rank 0] 
DeepSpeed Final Optimizer = Adam\n", "[2021-10-27 22:23:34,602] [INFO] [engine.py:587:_configure_lr_scheduler] DeepSpeed using configured LR scheduler = WarmupDecayLR\n", "[2021-10-27 22:23:34,602] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = \n", "[2021-10-27 22:23:34,602] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[3e-05], mom=[(0.9, 0.999)]\n", "[2021-10-27 22:23:34,602] [INFO] [config.py:940:print] DeepSpeedEngine configuration:\n", "[2021-10-27 22:23:34,604] [INFO] [config.py:944:print] activation_checkpointing_config {\n", " \"partition_activations\": false, \n", " \"contiguous_memory_optimization\": false, \n", " \"cpu_checkpointing\": false, \n", " \"number_checkpoints\": null, \n", " \"synchronize_checkpoint_boundary\": false, \n", " \"profile\": false\n", "}\n", "[2021-10-27 22:23:34,604] [INFO] [config.py:944:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}\n", "[2021-10-27 22:23:34,604] [INFO] [config.py:944:print] allreduce_always_fp32 ........ False\n", "[2021-10-27 22:23:34,604] [INFO] [config.py:944:print] amp_enabled .................. False\n", "[2021-10-27 22:23:34,604] [INFO] [config.py:944:print] amp_params ................... False\n", "[2021-10-27 22:23:34,604] [INFO] [config.py:944:print] checkpoint_tag_validation_enabled True\n", "[2021-10-27 22:23:34,604] [INFO] [config.py:944:print] checkpoint_tag_validation_fail False\n", "[2021-10-27 22:23:34,604] [INFO] [config.py:944:print] curriculum_enabled ........... False\n", "[2021-10-27 22:23:34,604] [INFO] [config.py:944:print] curriculum_params ............ False\n", "[2021-10-27 22:23:34,604] [INFO] [config.py:944:print] dataloader_drop_last ......... False\n", "[2021-10-27 22:23:34,604] [INFO] [config.py:944:print] disable_allgather ............ False\n", "[2021-10-27 22:23:34,604] [INFO] [config.py:944:print] dump_state ................... False\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] dynamic_loss_scale_args ...... {'init_scale': 4294967296, 'scale_window': 1000, 'delayed_shift': 2, 'min_scale': 1}\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] eigenvalue_enabled ........... False\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] eigenvalue_gas_boundary_resolution 1\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] eigenvalue_layer_name ........ bert.encoder.layer\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] eigenvalue_layer_num ......... 0\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] eigenvalue_max_iter .......... 100\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] eigenvalue_stability ......... 1e-06\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] eigenvalue_tol ............... 0.01\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] eigenvalue_verbose ........... False\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] elasticity_enabled ........... False\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] flops_profiler_config ........ {\n", " \"enabled\": false, \n", " \"profile_step\": 1, \n", " \"module_depth\": -1, \n", " \"top_modules\": 1, \n", " \"detailed\": true, \n", " \"output_file\": null\n", "}\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] fp16_enabled ................. 
True\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] fp16_master_weights_and_gradients False\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] fp16_mixed_quantize .......... False\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] global_rank .................. 0\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] gradient_accumulation_steps .. 1\n", "[2021-10-27 22:23:34,605] [INFO] [config.py:944:print] gradient_clipping ............ 0.0\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] gradient_predivide_factor .... 1.0\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] initial_dynamic_scale ........ 4294967296\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] loss_scale ................... 0\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] memory_breakdown ............. False\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] optimizer_legacy_fusion ...... False\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] optimizer_name ............... None\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] optimizer_params ............. None\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] pld_enabled .................. False\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] pld_params ................... False\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] prescale_gradients ........... False\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] quantize_change_rate ......... 0.001\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] quantize_groups .............. 1\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] quantize_offset .............. 1000\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] quantize_period .............. 1000\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] quantize_rounding ............ 0\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] quantize_start_bits .......... 16\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] quantize_target_bits ......... 8\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] quantize_training_enabled .... False\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] quantize_type ................ 0\n", "[2021-10-27 22:23:34,606] [INFO] [config.py:944:print] quantize_verbose ............. False\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] scheduler_name ............... WarmupDecayLR\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] scheduler_params ............. {'total_num_steps': 300, 'warmup_min_lr': 0, 'warmup_max_lr': 3e-05, 'warmup_num_steps': 30}\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] sparse_attention ............. None\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] sparse_gradients_enabled ..... False\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] steps_per_print .............. 9999999999\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] tensorboard_enabled .......... False\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] tensorboard_job_name ......... DeepSpeedJobName\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] tensorboard_output_path ...... 
\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] train_batch_size ............. 16\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] train_micro_batch_size_per_gpu 4\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] use_quantizer_kernel ......... False\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] wall_clock_breakdown ......... False\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] world_size ................... 4\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] zero_allow_untested_optimizer True\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] zero_config .................. {\n", " \"stage\": 1, \n", " \"contiguous_gradients\": true, \n", " \"reduce_scatter\": true, \n", " \"reduce_bucket_size\": 5.000000e+08, \n", " \"allgather_partitions\": true, \n", " \"allgather_bucket_size\": 5.000000e+08, \n", " \"overlap_comm\": false, \n", " \"load_from_fp32_weights\": true, \n", " \"elastic_checkpoint\": true, \n", " \"offload_param\": null, \n", " \"offload_optimizer\": null, \n", " \"sub_group_size\": 1.000000e+09, \n", " \"prefetch_bucket_size\": 5.000000e+07, \n", " \"param_persistence_threshold\": 1.000000e+05, \n", " \"max_live_parameters\": 1.000000e+09, \n", " \"max_reuse_distance\": 1.000000e+09, \n", " \"gather_fp16_weights_on_model_save\": false, \n", " \"ignore_unused_parameters\": true, \n", " \"round_robin_gradients\": false, \n", " \"legacy_stage1\": false\n", "}\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] zero_enabled ................. True\n", "[2021-10-27 22:23:34,607] [INFO] [config.py:944:print] zero_optimization_stage ...... 1\n", "[2021-10-27 22:23:34,608] [INFO] [config.py:952:print] json = {\n", " \"train_batch_size\": 16, \n", " \"gradient_accumulation_steps\": 1, \n", " \"scheduler\": {\n", " \"type\": \"WarmupDecayLR\", \n", " \"params\": {\n", " \"total_num_steps\": 300, \n", " \"warmup_min_lr\": 0, \n", " \"warmup_max_lr\": 3e-05, \n", " \"warmup_num_steps\": 30\n", " }\n", " }, \n", " \"fp16\": {\n", " \"enabled\": true, \n", " \"initial_scale_power\": 32, \n", " \"loss_scale_window\": 1000, \n", " \"hysteresis\": 2, \n", " \"min_loss_scale\": 1\n", " }, \n", " \"zero_optimization\": {\n", " \"stage\": 1, \n", " \"allgather_partitions\": true, \n", " \"overlap_comm\": false, \n", " \"reduce_scatter\": true\n", " }, \n", " \"zero_allow_untested_optimizer\": true, \n", " \"wall_clock_breakdown\": false, \n", " \"steps_per_print\": 1.000000e+10\n", "}\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "Time to load utils op: 0.000453948974609375 seconds\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", "100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 545.35it/s]\n", "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", "100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 618.17it/s]\n", "100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 
576.02it/s]\n", "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", "100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 578.09it/s]\n", "step:0, loss:5.453125\n", "step:10, loss:3.6484375\n", "step:20, loss:3.546875\n", "step:30, loss:3.76953125\n", "step:40, loss:2.880859375\n", "step:50, loss:2.408203125\n", "step:60, loss:2.5234375\n", "step:70, loss:2.265625\n", "step:80, loss:2.505859375\n", "step:90, loss:2.939453125\n", "step:100, loss:2.791015625\n", "step:110, loss:2.48828125\n", "step:120, loss:2.95703125\n", "step:130, loss:2.361328125\n", "step:140, loss:2.92578125\n", "step:150, loss:3.8515625\n", "step:160, loss:3.044921875\n", "step:170, loss:3.052734375\n", "step:180, loss:1.65625\n", "step:190, loss:3.509765625\n", "step:200, loss:3.716796875\n", "step:210, loss:3.560546875\n", "step:220, loss:2.98046875\n", "step:230, loss:3.251953125\n", "step:240, loss:2.564453125\n", "step:250, loss:3.19921875\n", "step:260, loss:3.564453125\n", "step:270, loss:3.23828125\n", "step:280, loss:2.615234375\n", "step:290, loss:2.23046875\n", "step:300, loss:3.48828125\n" ] } ], "source": [ "!deepspeed --num_gpus=4 ../src/zero_args.py --deepspeed_config=../src/zero_dp_config.json" ] }, { "cell_type": "markdown", "id": "6dd4b739", "metadata": {}, "source": [ "Alternatively, you can pass the configuration directly to `deepspeed.initialize()`." ] }, { "cell_type": "code", "execution_count": null, "id": "7fb4f4e5", "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "src/zero_config.py\n", "\"\"\"\n", "from datasets import load_dataset\n", "from torch.optim import Adam\n", "from torch.utils.data import DataLoader\n", "from transformers import GPT2LMHeadModel, GPT2Tokenizer\n", "import deepspeed\n", "import torch.distributed as dist\n", "\n", "model = GPT2LMHeadModel.from_pretrained(\"gpt2\")\n", "tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\n", "tokenizer.pad_token = tokenizer.eos_token\n", "optimizer = Adam(model.parameters(), lr=3e-5, weight_decay=3e-7)\n", "\n", "engine, optimizer, _, scheduler = deepspeed.initialize(\n", "    optimizer=optimizer,\n", "    model=model,\n", "    config={\n", "        \"train_batch_size\": 16,\n", "        \"gradient_accumulation_steps\": 1,\n", "        \"scheduler\": {\n", "            \"type\": \"WarmupDecayLR\",\n", "            \"params\": {\n", "                \"total_num_steps\": 300,\n", "                \"warmup_min_lr\": 0,\n", "                \"warmup_max_lr\": 3e-5,\n", "                \"warmup_num_steps\": 30,\n", "            },\n", "        },\n", "        \"fp16\": {\n", "            \"enabled\": True,\n", "            \"initial_scale_power\": 32,\n", "            \"loss_scale_window\": 1000,\n", "            \"hysteresis\": 2,\n", "            \"min_loss_scale\": 1,\n", "        },\n", "        \"zero_optimization\": {\n", "            \"stage\": 1,\n", "            \"allgather_partitions\": True,\n", "            \"allgather_bucket_size\": 5e8,\n", "            \"overlap_comm\": False,\n", "            \"reduce_scatter\": True,\n", "            \"reduce_bucket_size\": 5e8,\n", "            \"contiguous_gradients\": True,\n", "        },\n", "        \"zero_allow_untested_optimizer\": True,\n", "        \"wall_clock_breakdown\": False,\n", "        \"steps_per_print\": 9999999999,\n", "    },\n", ")\n", "\n", "datasets = load_dataset(\"squad\").data[\"train\"][\"context\"]\n", "datasets = [str(sample) for sample in datasets]\n", "data_loader = DataLoader(datasets, batch_size=8, num_workers=8)\n", "\n", "for i, data in enumerate(data_loader):\n", "    tokens = tokenizer(\n", "        data,\n", "        return_tensors=\"pt\",\n", "        truncation=True,\n", "        padding=True,\n", "        max_length=1024,\n", "    )\n", "\n", "    loss = engine(\n", "        
input_ids=tokens.input_ids.cuda(),\n", " attention_mask=tokens.attention_mask.cuda(),\n", " labels=tokens.input_ids.cuda(),\n", " ).loss\n", "\n", " engine.backward(loss)\n", " engine.step()\n", "\n", " if i % 10 == 0 and dist.get_rank() == 0:\n", " print(f\"step:{i}, loss:{loss}\")\n", "\n", " if i >= 300:\n", " break\n" ] }, { "cell_type": "code", "execution_count": 20, "id": "fc75c22f", "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[2021-10-27 22:17:23,924] [WARNING] [runner.py:122:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.\n", "[2021-10-27 22:17:24,099] [INFO] [runner.py:360:main] cmd = /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgM119 --master_addr=127.0.0.1 --master_port=29500 ../src/zero_dp_config.py\n", "[2021-10-27 22:17:25,207] [INFO] [launch.py:80:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]}\n", "[2021-10-27 22:17:25,208] [INFO] [launch.py:89:main] nnodes=1, num_local_procs=4, node_rank=0\n", "[2021-10-27 22:17:25,208] [INFO] [launch.py:101:main] global_rank_mapping=defaultdict(, {'localhost': [0, 1, 2, 3]})\n", "[2021-10-27 22:17:25,208] [INFO] [launch.py:102:main] dist_world_size=4\n", "[2021-10-27 22:17:25,208] [INFO] [launch.py:105:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3\n", "[2021-10-27 22:17:30,319] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 22:17:30,322] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 22:17:30,413] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 22:17:30,416] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 22:17:30,439] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 22:17:30,442] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 22:17:30,454] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 22:17:30,457] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 22:17:36,269] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed groups\n", "[2021-10-27 22:17:36,270] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed model parallel group with size 1\n", "[2021-10-27 22:17:36,294] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed expert parallel group with size 1\n", "[2021-10-27 22:17:36,295] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert data parallel process group with ranks: [0, 1, 2, 3]\n", "[2021-10-27 22:17:36,295] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [0]\n", "[2021-10-27 22:17:36,295] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [1]\n", "[2021-10-27 22:17:36,296] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [2]\n", "[2021-10-27 22:17:36,296] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [3]\n", "[2021-10-27 22:17:36,559] [INFO] [engine.py:205:__init__] DeepSpeed Flops Profiler Enabled: False\n", 
"[2021-10-27 22:17:36,560] [INFO] [engine.py:849:_configure_optimizer] Removing param_group that has no 'params' in the client Optimizer\n", "[2021-10-27 22:17:36,560] [INFO] [engine.py:854:_configure_optimizer] Using client Optimizer as basic optimizer\n", "[2021-10-27 22:17:36,564] [INFO] [engine.py:871:_configure_optimizer] DeepSpeed Basic Optimizer = Adam\n", "[2021-10-27 22:17:36,565] [INFO] [utils.py:44:is_zero_supported_optimizer] Checking ZeRO support for optimizer=Adam type=\n", "[2021-10-27 22:17:36,565] [INFO] [logging.py:68:log_dist] [Rank 0] Creating fp16 ZeRO stage 1 optimizer\n", "[2021-10-27 22:17:36,565] [INFO] [stage2.py:111:__init__] Reduce bucket size 500000000.0\n", "[2021-10-27 22:17:36,565] [INFO] [stage2.py:112:__init__] Allgather bucket size 500000000.0\n", "[2021-10-27 22:17:36,565] [INFO] [stage2.py:113:__init__] CPU Offload: False\n", "[2021-10-27 22:17:36,565] [INFO] [stage2.py:114:__init__] Round robin gradient partitioning: False\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Emitting ninja build file /home/ubuntu/.cache/torch_extensions/utils/build.ninja...\n", "Building extension module utils...\n", "Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)\n", "ninja: no work to do.\n", "Loading extension module utils...\n", "Time to load utils op: 0.3468191623687744 seconds\n", "Loading extension module utils...\n", "Time to load utils op: 0.40213942527770996 seconds\n", "Loading extension module utils...\n", "Time to load utils op: 0.40210413932800293 seconds\n", "Loading extension module utils...\n", "Time to load utils op: 0.4021165370941162 seconds\n", "Rank: 0 partition count [4] and sizes[(31109952, False)] \n", "Rank: 2 partition count [4] and sizes[(31109952, False)] \n", "Rank: 3 partition count [4] and sizes[(31109952, False)] \n", "Rank: 1 partition count [4] and sizes[(31109952, False)] \n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "Time to load utils op: 0.00046753883361816406 seconds\n", "Time to load utils op: 0.0004527568817138672 seconds\n", "Time to load utils op: 0.00045871734619140625 seconds\n", "[2021-10-27 22:17:37,930] [INFO] [utils.py:806:see_memory_usage] Before initializing optimizer states\n", "[2021-10-27 22:17:37,931] [INFO] [utils.py:811:see_memory_usage] MA 0.36 GB Max_MA 0.42 GB CA 0.61 GB Max_CA 1 GB \n", "[2021-10-27 22:17:37,931] [INFO] [utils.py:816:see_memory_usage] CPU Virtual Memory: used = 15.74 GB, percent = 6.6%\n", "[2021-10-27 22:17:37,971] [INFO] [utils.py:806:see_memory_usage] After initializing optimizer states\n", "[2021-10-27 22:17:37,971] [INFO] [utils.py:811:see_memory_usage] MA 0.59 GB Max_MA 1.06 GB CA 1.31 
GB Max_CA 1 GB \n", "[2021-10-27 22:17:37,972] [INFO] [utils.py:816:see_memory_usage] CPU Virtual Memory: used = 15.74 GB, percent = 6.6%\n", "[2021-10-27 22:17:37,972] [INFO] [stage2.py:474:__init__] optimizer state initialized\n", "[2021-10-27 22:17:38,009] [INFO] [utils.py:806:see_memory_usage] After initializing ZeRO optimizer\n", "[2021-10-27 22:17:38,010] [INFO] [utils.py:811:see_memory_usage] MA 0.59 GB Max_MA 0.59 GB CA 1.31 GB Max_CA 1 GB \n", "[2021-10-27 22:17:38,010] [INFO] [utils.py:816:see_memory_usage] CPU Virtual Memory: used = 15.74 GB, percent = 6.6%\n", "[2021-10-27 22:17:38,010] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = Adam\n", "[2021-10-27 22:17:38,010] [INFO] [engine.py:587:_configure_lr_scheduler] DeepSpeed using configured LR scheduler = WarmupDecayLR\n", "[2021-10-27 22:17:38,010] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = \n", "[2021-10-27 22:17:38,010] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[3e-05], mom=[(0.9, 0.999)]\n", "[2021-10-27 22:17:38,011] [INFO] [config.py:940:print] DeepSpeedEngine configuration:\n", "[2021-10-27 22:17:38,012] [INFO] [config.py:944:print] activation_checkpointing_config {\n", " \"partition_activations\": false, \n", " \"contiguous_memory_optimization\": false, \n", " \"cpu_checkpointing\": false, \n", " \"number_checkpoints\": null, \n", " \"synchronize_checkpoint_boundary\": false, \n", " \"profile\": false\n", "}\n", "[2021-10-27 22:17:38,012] [INFO] [config.py:944:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}\n", "[2021-10-27 22:17:38,012] [INFO] [config.py:944:print] allreduce_always_fp32 ........ False\n", "[2021-10-27 22:17:38,012] [INFO] [config.py:944:print] amp_enabled .................. False\n", "[2021-10-27 22:17:38,012] [INFO] [config.py:944:print] amp_params ................... False\n", "[2021-10-27 22:17:38,012] [INFO] [config.py:944:print] checkpoint_tag_validation_enabled True\n", "[2021-10-27 22:17:38,012] [INFO] [config.py:944:print] checkpoint_tag_validation_fail False\n", "[2021-10-27 22:17:38,012] [INFO] [config.py:944:print] curriculum_enabled ........... False\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] curriculum_params ............ False\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] dataloader_drop_last ......... False\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] disable_allgather ............ False\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] dump_state ................... False\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] dynamic_loss_scale_args ...... {'init_scale': 4294967296, 'scale_window': 1000, 'delayed_shift': 2, 'min_scale': 1}\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] eigenvalue_enabled ........... False\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] eigenvalue_gas_boundary_resolution 1\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] eigenvalue_layer_name ........ bert.encoder.layer\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] eigenvalue_layer_num ......... 0\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] eigenvalue_max_iter .......... 100\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] eigenvalue_stability ......... 1e-06\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] eigenvalue_tol ............... 
0.01\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] eigenvalue_verbose ........... False\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] elasticity_enabled ........... False\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] flops_profiler_config ........ {\n", " \"enabled\": false, \n", " \"profile_step\": 1, \n", " \"module_depth\": -1, \n", " \"top_modules\": 1, \n", " \"detailed\": true, \n", " \"output_file\": null\n", "}\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] fp16_enabled ................. True\n", "[2021-10-27 22:17:38,013] [INFO] [config.py:944:print] fp16_master_weights_and_gradients False\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] fp16_mixed_quantize .......... False\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] global_rank .................. 0\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] gradient_accumulation_steps .. 1\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] gradient_clipping ............ 0.0\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] gradient_predivide_factor .... 1.0\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] initial_dynamic_scale ........ 4294967296\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] loss_scale ................... 0\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] memory_breakdown ............. False\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] optimizer_legacy_fusion ...... False\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] optimizer_name ............... None\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] optimizer_params ............. None\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] pld_enabled .................. False\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] pld_params ................... False\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] prescale_gradients ........... False\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] quantize_change_rate ......... 0.001\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] quantize_groups .............. 1\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] quantize_offset .............. 1000\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] quantize_period .............. 1000\n", "[2021-10-27 22:17:38,014] [INFO] [config.py:944:print] quantize_rounding ............ 0\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] quantize_start_bits .......... 16\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] quantize_target_bits ......... 8\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] quantize_training_enabled .... False\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] quantize_type ................ 0\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] quantize_verbose ............. False\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] scheduler_name ............... WarmupDecayLR\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] scheduler_params ............. {'total_num_steps': 300, 'warmup_min_lr': 0, 'warmup_max_lr': 3e-05, 'warmup_num_steps': 30}\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] sparse_attention ............. 
None\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] sparse_gradients_enabled ..... False\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] steps_per_print .............. 9999999999\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] tensorboard_enabled .......... False\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] tensorboard_job_name ......... DeepSpeedJobName\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] tensorboard_output_path ...... \n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] train_batch_size ............. 16\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] train_micro_batch_size_per_gpu 4\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] use_quantizer_kernel ......... False\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] wall_clock_breakdown ......... False\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] world_size ................... 4\n", "[2021-10-27 22:17:38,015] [INFO] [config.py:944:print] zero_allow_untested_optimizer True\n", "[2021-10-27 22:17:38,016] [INFO] [config.py:944:print] zero_config .................. {\n", " \"stage\": 1, \n", " \"contiguous_gradients\": true, \n", " \"reduce_scatter\": true, \n", " \"reduce_bucket_size\": 5.000000e+08, \n", " \"allgather_partitions\": true, \n", " \"allgather_bucket_size\": 5.000000e+08, \n", " \"overlap_comm\": false, \n", " \"load_from_fp32_weights\": true, \n", " \"elastic_checkpoint\": true, \n", " \"offload_param\": null, \n", " \"offload_optimizer\": null, \n", " \"sub_group_size\": 1.000000e+09, \n", " \"prefetch_bucket_size\": 5.000000e+07, \n", " \"param_persistence_threshold\": 1.000000e+05, \n", " \"max_live_parameters\": 1.000000e+09, \n", " \"max_reuse_distance\": 1.000000e+09, \n", " \"gather_fp16_weights_on_model_save\": false, \n", " \"ignore_unused_parameters\": true, \n", " \"round_robin_gradients\": false, \n", " \"legacy_stage1\": false\n", "}\n", "[2021-10-27 22:17:38,016] [INFO] [config.py:944:print] zero_enabled ................. True\n", "[2021-10-27 22:17:38,016] [INFO] [config.py:944:print] zero_optimization_stage ...... 
1\n", "[2021-10-27 22:17:38,016] [INFO] [config.py:952:print] json = {\n", " \"train_batch_size\": 16, \n", " \"gradient_accumulation_steps\": 1, \n", " \"scheduler\": {\n", " \"type\": \"WarmupDecayLR\", \n", " \"params\": {\n", " \"total_num_steps\": 300, \n", " \"warmup_min_lr\": 0, \n", " \"warmup_max_lr\": 3e-05, \n", " \"warmup_num_steps\": 30\n", " }\n", " }, \n", " \"fp16\": {\n", " \"enabled\": true, \n", " \"initial_scale_power\": 32, \n", " \"loss_scale_window\": 1000, \n", " \"hysteresis\": 2, \n", " \"min_loss_scale\": 1\n", " }, \n", " \"zero_optimization\": {\n", " \"stage\": 1, \n", " \"allgather_partitions\": true, \n", " \"allgather_bucket_size\": 5.000000e+08, \n", " \"overlap_comm\": false, \n", " \"reduce_scatter\": true, \n", " \"reduce_bucket_size\": 5.000000e+08, \n", " \"contiguous_gradients\": true\n", " }, \n", " \"zero_allow_untested_optimizer\": true, \n", " \"wall_clock_breakdown\": false, \n", " \"steps_per_print\": 1.000000e+10\n", "}\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "Time to load utils op: 0.0004813671112060547 seconds\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", "100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 533.39it/s]\n", "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", "100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 534.03it/s]\n", "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", "100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 625.36it/s]\n", "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", "100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 500.99it/s]\n", "step:0, loss:5.453125\n", "step:10, loss:3.6484375\n", "step:20, loss:3.546875\n", "step:30, loss:3.76953125\n", "step:40, loss:2.880859375\n", "step:50, loss:2.408203125\n", "step:60, loss:2.5234375\n", "step:70, loss:2.265625\n", "step:80, loss:2.505859375\n", "step:90, loss:2.939453125\n", "step:100, loss:2.791015625\n", "step:110, loss:2.48828125\n", "step:120, loss:2.95703125\n", "step:130, loss:2.361328125\n", "step:140, loss:2.92578125\n", "step:150, loss:3.8515625\n", "step:160, loss:3.044921875\n", "step:170, loss:3.052734375\n", "step:180, loss:1.65625\n", "step:190, loss:3.509765625\n", "step:200, loss:3.716796875\n", "step:210, loss:3.560546875\n", "step:220, loss:2.98046875\n", "step:230, loss:3.251953125\n", "step:240, loss:2.564453125\n", "step:250, loss:3.19921875\n", "step:260, loss:3.564453125\n", "step:270, loss:3.23828125\n", "step:280, loss:2.615234375\n", "step:290, loss:2.23046875\n", "step:300, loss:3.48828125\n" ] } ], "source": [ "!deepspeed --num_gpus=4 ../src/zero_config.py" ] }, { "cell_type": "markdown", "id": "5cd8505c", "metadata": {}, "source": [ "
\n", "\n", "## 4. Activation Checkpointing\n", "\n", "FP 16과 32의 model, gradient, optimizer state 이외에 또 하나의 큰 메모리 영역은 Activation Memory 영역입니다. Activation은 model weight에 곱해지는 입력텐서들을 의미하는데요. 만약 $y = w_1 \\cdot (w_2 \\cdot x)$와 같은 뉴럴넷이 있다면, $w_1$과 곱해지는 $x$와 $w_2$와 곱해지는 $w_2 \\cdot x$ 등의 텐서들이 Activation Memory에 해당합니다. " ] }, { "cell_type": "code", "execution_count": null, "id": "4b791ae6", "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "참고: https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html\n", "\"\"\"\n", "\n", "import torch\n", "\n", "\n", "class ReLU(torch.autograd.Function):\n", "\n", " @staticmethod\n", " def forward(ctx, input):\n", " ctx.save_for_backward(input)\n", " # input 값을 저장하고 있음.\n", " \n", " return input.clamp(min=0)\n", "\n", " @staticmethod\n", " def backward(ctx, grad_output):\n", " input, = ctx.saved_tensors\n", " grad_input = grad_output.clone()\n", " grad_input[input < 0] = 0\n", " return grad_input" ] }, { "cell_type": "markdown", "id": "bda44386", "metadata": {}, "source": [ "\n", "![](../images/max_pooling.png)\n", "\n", "이전에 Pipeline parallelism 세션에서 Backward 패스시에 Forward 때 사용했던 Activation 텐서를 저장한다고 말씀드린적 있습니다. 위와 같이 Maxpooling 레이어의 미분계수를 구하려면 pooling된 값들의 원래 위치가 필요하므로 반드시 Forward 때 입력되었던 텐서가 필요합니다. 또한 위의 `ReLU` 구현을 보면 `ctx.save_for_backward`를 통해 `input` 텐서를 저장하고 있는 것을 볼 수 있습니다.\n", "\n", "![](../images/checkpoint_full_act.gif)\n", "\n", "**즉, Backward 단계를 수행하기 위해 Forward 단계의 입력들을 저장해야 합니다.** 위는 그것을 영상으로 보여줍니다. 그러나 이렇게 모든 곳에서 Activation을 저장하고 있으면 메모리 소비량이 매우 커집니다.\n", "\n", "![](../images/checkpoint_no_act.gif)\n", "\n", "따라서 Activation 텐서를 저장하지 않는다면 메모리 소비량을 훨씬 아낄 수 있습니다. 그러나 Activation 텐서를 저장하지 않으면, 위와 같이 Backward 시점에 Forward를 한번 더 해서 Activation 텐서를 구해야합니다. Activation Checkpointing은 두가지 장점을 결합한 방식으로 중간 중간마다 Activation을 저장해둡니다.\n", "\n", "![](../images/checkpoint_act.gif)\n", "\n", "위와 같이 중간 중간에만 저장을 하게 되면 매번 Forwad를 처음부터 하지 않고 중간부터 수행하게 하여 연산 시간을 아낄 수 있고, 거의 대부분의 Activation을 제거함으로써 메모리 소비량을 크게 줄일 수 있습니다. 이렇게 **Activation를 중간 중간마다 저장** 해놓고 Forward가 필요하면 체크포인트 된 곳부터 Forward를 수행해나가게끔 하는 기법을 Activation Checkpointing이라고 합니다. 파이토치에는 이미 checkpointing 기능이 내장되어 있습니다. 
Let's try it out using PyTorch.\n" ] }, { "cell_type": "code", "execution_count": 19, "id": "6b04b3d6", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "output: tensor([[[-1.4583,  0.6489, -1.3392,  ..., -0.6039,  0.2626,  0.6193],\n", "         [ 1.2056, -1.7527,  1.4104,  ..., -0.1405, -0.9028, -1.6564],\n", "         [-1.8641,  0.6331, -0.3740,  ..., -0.1908, -0.4829, -0.6025],\n", "         [ 0.8196,  1.9792,  0.1852,  ...,  0.8961,  0.6273, -1.2254],\n", "         [ 0.7911, -0.3338, -0.7460,  ...,  0.6872, -1.0973,  1.7147],\n", "         [-1.4739, -1.9196, -0.4886,  ..., -1.6297, -0.0368,  1.1412]]],\n", "       grad_fn=)\n" ] } ], "source": [ "\"\"\"\n", "src/checkpointing.py\n", "\"\"\"\n", "from torch import nn\n", "from torch.utils.checkpoint import checkpoint\n", "from transformers import BertTokenizer, BertLayer, BertConfig\n", "\n", "config = BertConfig.from_pretrained(\"bert-base-cased\")\n", "tokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\n", "tokens = tokenizer(\"Hello I am Kevin\", return_tensors=\"pt\")\n", "\n", "embedding = nn.Embedding(tokenizer.vocab_size, config.hidden_size)\n", "layers = nn.ModuleList([BertLayer(config) for _ in range(6)])\n", "\n", "hidden_states = embedding(tokens.input_ids)\n", "attention_mask = tokens.attention_mask\n", "\n", "for i, layer_module in enumerate(layers):\n", "    layer_outputs = checkpoint(\n", "        layer_module,\n", "        hidden_states,\n", "        attention_mask,\n", "    )\n", "\n", "    hidden_states = layer_outputs[0]\n", "\n", "print(f\"output: {hidden_states}\")" ] }, { "cell_type": "markdown", "id": "0fd693ff", "metadata": {}, "source": [ "As the example shows, all you have to do is **change what used to be called as `module(a, b, c)` into `checkpoint(module, a, b, c)`**.\n", "\n", "In addition, Hugging Face `transformers`, which we use all the time, ships this Activation Checkpointing feature for almost every model. **You can simply turn it on and off with `model.gradient_checkpointing_enable()` and `model.gradient_checkpointing_disable()`.** Really easy, right?\n",
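"\n", "Below is a minimal sketch of that toggle (assuming any Hugging Face model, here `BertModel`; while it is enabled, memory is traded for recomputation in the backward pass):\n", "\n", "```\n", "from transformers import BertModel\n", "\n", "model = BertModel.from_pretrained(\"bert-base-cased\")\n", "\n", "model.gradient_checkpointing_enable()   # activations are recomputed in backward\n", "# ... run the memory-hungry training steps here ...\n", "model.gradient_checkpointing_disable()  # back to storing all activations\n", "```" ] }, { "cell_type": "markdown", "id": "16c467f1", "metadata": {}, "source": [ "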
\n", "\n", "## 4. ZeRO-R\n", "\n", "ZeRO-R은 Activation Memory, Communication Bucket 등의 영역을 고도로 최적화 하기 위한 기술들의 집합입니다.\n", "\n", "![](../images/zero_r_1.png)\n", "\n", "이전 챕터에서 알아본 ZeRO-DP를 통해 Model state memory (FP16 & 32 Parameters, Gradient, Optimizer States)를 효율적으로 개선했습니다. ZeRO-R에서는 다음과 같은 세가지 솔루션을 제안합니다.\n", "\n", "- **Activation Memory Partitioning**\n", "- **Constant Size Buffer**\n", "- **Memory Defragmentation**\n", "\n", "각각에 대해 알아보겠습니다.\n", "\n", "
\n", "\n", "### 1) Activation Memory Partitioning\n", "\n", "
\n", "\n", "![](../images/zero_r_2.png)\n", "\n", "Activation Checkpointing이 메모리 효율성과 속도 향상에 도움이 될 수도 있지만, 큰 모델을 학습할 때는 상당한 메모리 문제를 야기할 수 있습니다. 특히 모델 병렬화와 결합될 경우 Forward가 계산되고나서 여기저기에 Activation Tensor의 사본들이 많이 생겨나게 됩니다. **ZeRO-R은 이러한 Activation Tensor들을 All-gather하여 그 중 필요한 것들만 추려서 GPU로 Partitioning합니다.** 또한 너무 커다란 Activation 들은 속도를 약간 희생하더라도 CPU RAM에 Checkpointing 시켜서 GPU 메모리를 절약합니다.\n", "\n", "
\n", "\n", "### 2) Constant Memory Buffer\n", "\n", "Constant Memory Buffer는 All-reduce, All-gather 등에 사용되는 **버킷의 사이즈를 Constant하게 유지하는 기법**을 의미합니다. 일반적으로 모델이 커질수록 통신에 사용하는 Bucket도 함께 커지는 것이 좋습니다. 그러나 모델의 크기가 매우 커지면 Buffer의 사이즈가 너무 커져서 GPU의 상당부분을 차지하게 되는 경우도 있습니다. 따라서 **Bucket 사이즈의 최대값을 제한하여 고정된 값보다 더 크게는 할당되지 않도록** 합니다. Bucket의 사이즈가 일정 수준 이상으로 커지면 더 키우지 않고 유지만해도 충분히 좋은 효율성을 얻을 수 있습니다.\n", "\n", "
\n", "\n", "### 3) Memory Defragmentation (Contiguous Checkpointing)\n", "\n", "![](../images/zero_r_3.jpeg)\n", "\n", "모델을 학습하다보면 텐서들이 많이 생겨나고 제거됨에 따라 **GPU 메모리의 단편화가 매우 자주 발생**합니다. 때로는 GPU 내의 용량이 충분히 많지만 공간이 단편화 되어있어서 Contiguous한 텐서를 올리지 못하는 문제가 발생할 수 있습니다. 따라서 ZeRO-R은 **빈 공간에 Activation, Gradient 등을 담을 수 있는 빈 메모리 공간을 미리 만들어두고** 비슷한 사이즈의 텐서들이 생성되면 **해당 공간으로 옮겨서 단편화**를 최대한 방지합니다." ] }, { "cell_type": "markdown", "id": "2a41959d", "metadata": {}, "source": [ "ZeRO-DP와 마찬가지로 간단한 Configuration만 작성하면 됩니다. \n", "\n", "- **Constant Buffer Size** \n", " - `allgather_bucket_size`와 `reduce_bucket_size`를 통해 버킷 사이즈의 최대값을 결정하였음\n", "- **Activation Memory** \n", " - `partition_activations`을 통해 activation 메모리의 GPU 간에 분할함.\n", " - `cpu_checkpointing`을 통해 매우 큰 activation 텐서는 CPU로 오프로딩함\n", "- **Memory Defragmentation**:\n", " - `contiguous_memory_optimization`를 통해 메모리 단편화 완화.\n", " \n", "이러한 기법들 외에도 수 많은 기법이 존재합니다. 더 다양한 옵션들에 대해 자세히 알고 싶으시면 논문과 도큐먼트를 참고하세요." ] }, { "cell_type": "markdown", "id": "3ebd4db7", "metadata": {}, "source": [ "\n", "\n", "```\n", "{\n", " \"train_batch_size\": 16,\n", " \"gradient_accumulation_steps\": 1,\n", " \"scheduler\": {\n", " \"type\": \"WarmupDecayLR\",\n", " \"params\": {\n", " \"total_num_steps\": 300,\n", " \"warmup_min_lr\": 0,\n", " \"warmup_max_lr\": 3e-5,\n", " \"warmup_num_steps\": 30\n", " }\n", " },\n", " \"fp16\": {\n", " \"enabled\": true,\n", " \"initial_scale_power\": 32,\n", " \"loss_scale_window\": 1000,\n", " \"hysteresis\": 2,\n", " \"min_loss_scale\": 1\n", " },\n", " \"zero_optimization\": {\n", " \"stage\": 1,\n", " \"allgather_bucket_size\": 5e8,\n", " \"reduce_bucket_size\": 5e8\n", " },\n", " \"activation_checkpointing\": {\n", " \"partition_activations\": true,\n", " \"cpu_checkpointing\": true,\n", " \"contiguous_memory_optimization\": true,\n", " \"number_checkpoints\": 4\n", " },\n", " \"zero_allow_untested_optimizer\": true,\n", " \"wall_clock_breakdown\": false,\n", " \"steps_per_print\": 9999999999\n", "}\n", "```" ] }, { "cell_type": "code", "execution_count": 26, "id": "a72f6948", "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[2021-10-27 22:30:25,615] [WARNING] [runner.py:122:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.\n", "[2021-10-27 22:30:25,791] [INFO] [runner.py:360:main] cmd = /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgM119 --master_addr=127.0.0.1 --master_port=29500 ../src/zero_args.py --deepspeed_config=../src/zero_r_config.json\n", "[2021-10-27 22:30:26,909] [INFO] [launch.py:80:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]}\n", "[2021-10-27 22:30:26,909] [INFO] [launch.py:89:main] nnodes=1, num_local_procs=4, node_rank=0\n", "[2021-10-27 22:30:26,909] [INFO] [launch.py:101:main] global_rank_mapping=defaultdict(, {'localhost': [0, 1, 2, 3]})\n", "[2021-10-27 22:30:26,909] [INFO] [launch.py:102:main] dist_world_size=4\n", "[2021-10-27 22:30:26,910] [INFO] [launch.py:105:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3\n", "[2021-10-27 22:30:32,066] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 22:30:32,069] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 22:30:32,126] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 22:30:32,129] [INFO] 
[distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 22:30:32,144] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 22:30:32,148] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 22:30:32,153] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 22:30:32,156] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 22:30:37,512] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed groups\n", "[2021-10-27 22:30:37,512] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed model parallel group with size 1\n", "[2021-10-27 22:30:37,517] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed expert parallel group with size 1\n", "[2021-10-27 22:30:37,517] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert data parallel process group with ranks: [0, 1, 2, 3]\n", "[2021-10-27 22:30:37,517] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [0]\n", "[2021-10-27 22:30:37,518] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [1]\n", "[2021-10-27 22:30:37,518] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [2]\n", "[2021-10-27 22:30:37,518] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [3]\n", "[2021-10-27 22:30:37,781] [INFO] [engine.py:205:__init__] DeepSpeed Flops Profiler Enabled: False\n", "[2021-10-27 22:30:37,781] [INFO] [engine.py:849:_configure_optimizer] Removing param_group that has no 'params' in the client Optimizer\n", "[2021-10-27 22:30:37,781] [INFO] [engine.py:854:_configure_optimizer] Using client Optimizer as basic optimizer\n", "[2021-10-27 22:30:37,786] [INFO] [engine.py:871:_configure_optimizer] DeepSpeed Basic Optimizer = Adam\n", "[2021-10-27 22:30:37,786] [INFO] [utils.py:44:is_zero_supported_optimizer] Checking ZeRO support for optimizer=Adam type=\n", "[2021-10-27 22:30:37,786] [INFO] [logging.py:68:log_dist] [Rank 0] Creating fp16 ZeRO stage 1 optimizer\n", "[2021-10-27 22:30:37,786] [INFO] [stage2.py:111:__init__] Reduce bucket size 500000000.0\n", "[2021-10-27 22:30:37,786] [INFO] [stage2.py:112:__init__] Allgather bucket size 500000000.0\n", "[2021-10-27 22:30:37,786] [INFO] [stage2.py:113:__init__] CPU Offload: False\n", "[2021-10-27 22:30:37,786] [INFO] [stage2.py:114:__init__] Round robin gradient partitioning: False\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Emitting ninja build file /home/ubuntu/.cache/torch_extensions/utils/build.ninja...\n", "Building extension module utils...\n", "Allowing ninja to set a default number of workers... 
(overridable by setting the environment variable MAX_JOBS=N)\n", "ninja: no work to do.\n", "Loading extension module utils...\n", "Time to load utils op: 0.34302282333374023 seconds\n", "Loading extension module utils...\n", "Time to load utils op: 0.40213775634765625 seconds\n", "Loading extension module utils...\n", "Time to load utils op: 0.4021179676055908 seconds\n", "Loading extension module utils...\n", "Time to load utils op: 0.4021291732788086 seconds\n", "Rank: 0 partition count [4] and sizes[(31109952, False)] \n", "Rank: 3 partition count [4] and sizes[(31109952, False)] \n", "Rank: 1 partition count [4] and sizes[(31109952, False)] \n", "Rank: 2 partition count [4] and sizes[(31109952, False)] \n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "Loading extension module utils...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "Time to load utils op: 0.00044083595275878906 seconds\n", "Time to load utils op: 0.0004825592041015625 seconds\n", "Time to load utils op: 0.00045371055603027344 seconds\n", "[2021-10-27 22:30:39,142] [INFO] [utils.py:806:see_memory_usage] Before initializing optimizer states\n", "[2021-10-27 22:30:39,142] [INFO] [utils.py:811:see_memory_usage] MA 0.36 GB Max_MA 0.42 GB CA 0.61 GB Max_CA 1 GB \n", "[2021-10-27 22:30:39,143] [INFO] [utils.py:816:see_memory_usage] CPU Virtual Memory: used = 16.61 GB, percent = 6.9%\n", "[2021-10-27 22:30:39,182] [INFO] [utils.py:806:see_memory_usage] After initializing optimizer states\n", "[2021-10-27 22:30:39,183] [INFO] [utils.py:811:see_memory_usage] MA 0.59 GB Max_MA 1.06 GB CA 1.31 GB Max_CA 1 GB \n", "[2021-10-27 22:30:39,183] [INFO] [utils.py:816:see_memory_usage] CPU Virtual Memory: used = 16.61 GB, percent = 6.9%\n", "[2021-10-27 22:30:39,183] [INFO] [stage2.py:474:__init__] optimizer state initialized\n", "[2021-10-27 22:30:39,219] [INFO] [utils.py:806:see_memory_usage] After initializing ZeRO optimizer\n", "[2021-10-27 22:30:39,220] [INFO] [utils.py:811:see_memory_usage] MA 0.59 GB Max_MA 0.59 GB CA 1.31 GB Max_CA 1 GB \n", "[2021-10-27 22:30:39,220] [INFO] [utils.py:816:see_memory_usage] CPU Virtual Memory: used = 16.61 GB, percent = 6.9%\n", "[2021-10-27 22:30:39,221] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = Adam\n", "[2021-10-27 22:30:39,221] [INFO] [engine.py:587:_configure_lr_scheduler] DeepSpeed using configured LR scheduler = WarmupDecayLR\n", "[2021-10-27 22:30:39,221] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = \n", "[2021-10-27 22:30:39,221] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[3e-05], mom=[(0.9, 0.999)]\n", "[2021-10-27 22:30:39,221] [INFO] [config.py:940:print] DeepSpeedEngine configuration:\n", "[2021-10-27 22:30:39,221] [INFO] [config.py:944:print] activation_checkpointing_config {\n", " \"partition_activations\": true, \n", " \"contiguous_memory_optimization\": true, \n", " \"cpu_checkpointing\": true, \n", " \"number_checkpoints\": 4, \n", " \"synchronize_checkpoint_boundary\": false, \n", " \"profile\": false\n", "}\n", 
"[2021-10-27 22:30:39,221] [INFO] [config.py:944:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}\n", "[2021-10-27 22:30:39,221] [INFO] [config.py:944:print] allreduce_always_fp32 ........ False\n", "[2021-10-27 22:30:39,221] [INFO] [config.py:944:print] amp_enabled .................. False\n", "[2021-10-27 22:30:39,221] [INFO] [config.py:944:print] amp_params ................... False\n", "[2021-10-27 22:30:39,221] [INFO] [config.py:944:print] checkpoint_tag_validation_enabled True\n", "[2021-10-27 22:30:39,221] [INFO] [config.py:944:print] checkpoint_tag_validation_fail False\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] curriculum_enabled ........... False\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] curriculum_params ............ False\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] dataloader_drop_last ......... False\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] disable_allgather ............ False\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] dump_state ................... False\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] dynamic_loss_scale_args ...... {'init_scale': 4294967296, 'scale_window': 1000, 'delayed_shift': 2, 'min_scale': 1}\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] eigenvalue_enabled ........... False\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] eigenvalue_gas_boundary_resolution 1\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] eigenvalue_layer_name ........ bert.encoder.layer\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] eigenvalue_layer_num ......... 0\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] eigenvalue_max_iter .......... 100\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] eigenvalue_stability ......... 1e-06\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] eigenvalue_tol ............... 0.01\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] eigenvalue_verbose ........... False\n", "[2021-10-27 22:30:39,222] [INFO] [config.py:944:print] elasticity_enabled ........... False\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] flops_profiler_config ........ {\n", " \"enabled\": false, \n", " \"profile_step\": 1, \n", " \"module_depth\": -1, \n", " \"top_modules\": 1, \n", " \"detailed\": true, \n", " \"output_file\": null\n", "}\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] fp16_enabled ................. True\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] fp16_master_weights_and_gradients False\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] fp16_mixed_quantize .......... False\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] global_rank .................. 0\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] gradient_accumulation_steps .. 1\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] gradient_clipping ............ 0.0\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] gradient_predivide_factor .... 1.0\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] initial_dynamic_scale ........ 4294967296\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] loss_scale ................... 0\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] memory_breakdown ............. 
False\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] optimizer_legacy_fusion ...... False\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] optimizer_name ............... None\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] optimizer_params ............. None\n", "[2021-10-27 22:30:39,224] [INFO] [config.py:944:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] pld_enabled .................. False\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] pld_params ................... False\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] prescale_gradients ........... False\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] quantize_change_rate ......... 0.001\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] quantize_groups .............. 1\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] quantize_offset .............. 1000\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] quantize_period .............. 1000\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] quantize_rounding ............ 0\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] quantize_start_bits .......... 16\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] quantize_target_bits ......... 8\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] quantize_training_enabled .... False\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] quantize_type ................ 0\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] quantize_verbose ............. False\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] scheduler_name ............... WarmupDecayLR\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] scheduler_params ............. {'total_num_steps': 300, 'warmup_min_lr': 0, 'warmup_max_lr': 3e-05, 'warmup_num_steps': 30}\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] sparse_attention ............. None\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] sparse_gradients_enabled ..... False\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] steps_per_print .............. 9999999999\n", "[2021-10-27 22:30:39,225] [INFO] [config.py:944:print] tensorboard_enabled .......... False\n", "[2021-10-27 22:30:39,226] [INFO] [config.py:944:print] tensorboard_job_name ......... DeepSpeedJobName\n", "[2021-10-27 22:30:39,226] [INFO] [config.py:944:print] tensorboard_output_path ...... \n", "[2021-10-27 22:30:39,226] [INFO] [config.py:944:print] train_batch_size ............. 16\n", "[2021-10-27 22:30:39,226] [INFO] [config.py:944:print] train_micro_batch_size_per_gpu 4\n", "[2021-10-27 22:30:39,226] [INFO] [config.py:944:print] use_quantizer_kernel ......... False\n", "[2021-10-27 22:30:39,226] [INFO] [config.py:944:print] wall_clock_breakdown ......... False\n", "[2021-10-27 22:30:39,226] [INFO] [config.py:944:print] world_size ................... 4\n", "[2021-10-27 22:30:39,226] [INFO] [config.py:944:print] zero_allow_untested_optimizer True\n", "[2021-10-27 22:30:39,226] [INFO] [config.py:944:print] zero_config .................. 
{\n", " \"stage\": 1, \n", " \"contiguous_gradients\": true, \n", " \"reduce_scatter\": true, \n", " \"reduce_bucket_size\": 5.000000e+08, \n", " \"allgather_partitions\": true, \n", " \"allgather_bucket_size\": 5.000000e+08, \n", " \"overlap_comm\": false, \n", " \"load_from_fp32_weights\": true, \n", " \"elastic_checkpoint\": true, \n", " \"offload_param\": null, \n", " \"offload_optimizer\": null, \n", " \"sub_group_size\": 1.000000e+09, \n", " \"prefetch_bucket_size\": 5.000000e+07, \n", " \"param_persistence_threshold\": 1.000000e+05, \n", " \"max_live_parameters\": 1.000000e+09, \n", " \"max_reuse_distance\": 1.000000e+09, \n", " \"gather_fp16_weights_on_model_save\": false, \n", " \"ignore_unused_parameters\": true, \n", " \"round_robin_gradients\": false, \n", " \"legacy_stage1\": false\n", "}\n", "[2021-10-27 22:30:39,226] [INFO] [config.py:944:print] zero_enabled ................. True\n", "[2021-10-27 22:30:39,226] [INFO] [config.py:944:print] zero_optimization_stage ...... 1\n", "[2021-10-27 22:30:39,227] [INFO] [config.py:952:print] json = {\n", " \"train_batch_size\": 16, \n", " \"gradient_accumulation_steps\": 1, \n", " \"scheduler\": {\n", " \"type\": \"WarmupDecayLR\", \n", " \"params\": {\n", " \"total_num_steps\": 300, \n", " \"warmup_min_lr\": 0, \n", " \"warmup_max_lr\": 3e-05, \n", " \"warmup_num_steps\": 30\n", " }\n", " }, \n", " \"fp16\": {\n", " \"enabled\": true, \n", " \"initial_scale_power\": 32, \n", " \"loss_scale_window\": 1000, \n", " \"hysteresis\": 2, \n", " \"min_loss_scale\": 1\n", " }, \n", " \"zero_optimization\": {\n", " \"stage\": 1, \n", " \"allgather_bucket_size\": 5.000000e+08, \n", " \"reduce_bucket_size\": 5.000000e+08\n", " }, \n", " \"activation_checkpointing\": {\n", " \"partition_activations\": true, \n", " \"cpu_checkpointing\": true, \n", " \"contiguous_memory_optimization\": true, \n", " \"number_checkpoints\": 4\n", " }, \n", " \"zero_allow_untested_optimizer\": true, \n", " \"wall_clock_breakdown\": false, \n", " \"steps_per_print\": 1.000000e+10\n", "}\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "Time to load utils op: 0.0004620552062988281 seconds\n", "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", "100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 565.38it/s]\n", "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", "100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 609.24it/s]\n", "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", "100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 550.14it/s]\n", "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", "100%|████████████████████████████████████████████| 2/2 [00:00<00:00, 549.78it/s]\n", "step:0, loss:5.453125\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "step:10, loss:3.6484375\n", "step:20, loss:3.546875\n", "step:30, loss:3.76953125\n", "step:40, loss:2.880859375\n", "step:50, loss:2.408203125\n", "step:60, loss:2.5234375\n", 
"step:70, loss:2.265625\n", "step:80, loss:2.505859375\n", "step:90, loss:2.939453125\n", "step:100, loss:2.791015625\n", "step:110, loss:2.48828125\n", "step:120, loss:2.95703125\n", "step:130, loss:2.361328125\n", "step:140, loss:2.92578125\n", "step:150, loss:3.8515625\n", "step:160, loss:3.044921875\n", "step:170, loss:3.052734375\n", "step:180, loss:1.65625\n", "step:190, loss:3.509765625\n", "step:200, loss:3.716796875\n", "step:210, loss:3.560546875\n", "step:220, loss:2.98046875\n", "step:230, loss:3.251953125\n", "step:240, loss:2.564453125\n", "step:250, loss:3.19921875\n", "step:260, loss:3.564453125\n", "step:270, loss:3.23828125\n", "step:280, loss:2.615234375\n", "step:290, loss:2.23046875\n", "step:300, loss:3.48828125\n" ] } ], "source": [ "!deepspeed --num_gpus=4 ../src/zero_args.py --deepspeed_config=../src/zero_r_config.json" ] }, { "cell_type": "markdown", "id": "588bccfc", "metadata": {}, "source": [ "
\n", "\n", "## 5. ZeRO Offload\n", "\n", "이전챕터에서 봤던 `Activation Memory Partitioning` 기술은 너무 큰 Activation 텐서를 CPU로 내리는 기능을 포함하였습니다. ZeRO-R의 후속작인 Zero Offload는 **Model의 일부분을 CPU RAM으로 오프로드 시키는 방법**을 통해 GPU의 용량 한계를 깨부술 수 있었습니다. ZeRO Offload의 핵심 아이디어는 다음과 같습니다.\n", "\n", "![](../images/zero_off_1.png)\n", "\n", "#### GPU-side\n", "- GPU에 FP16 Parameter & Gradients가 상주한다.\n", "- GPU에서 그들을 이용해 Forward & Backward를 수행한다. (무거운 연산이기 때문)\n", "\n", "#### CPU-side\n", "- CPU에 FP32 Paramter & Gradient & Optimizer States가 상주한다.\n", "- CPU에서 Weight Update를 수행한다. (가벼운 연산이기 때문)\n", "- 특히 CPU에서 매우 빠르게 작동할 수 있는 CPU Adam 옵티마이저를 구현했다.\n", "\n", "일반적으로 CPU의 처리속도는 GPU의 처리속도에 비해 수십배는 느립니다. 따라서 아주 큰 Computation은 반드시 GPU에서 수행해야 합니다. 이러한 이유로 Forward & Backward 연산은 GPU에서 수행합니다. 생각해보면 GPU의 대부분을 FP32의 Parameter & Gradient & Optimizer States가 차지하는데, 정작 그들이 수행하는 연산은 Computation Cost가 적은 Weight Update 파트입니다. \n", "\n", "![](../images/ddp_analysis_3.png)\n", "\n", "따라서 FP32 부분을 모두 CPU로 내려버리면 GPU는 정말 GPU 연산이 반드시 필요한 FP16만 남기 때문에 매우 널널한 상태가 됩니다. \n", "\n", "
\n", "\n", "### DPU: Delayed Paramter Update\n", "\n", "![](../images/zero_off_2.png)\n", "\n", "GPU에서 Forward & Backward이 모두 완료되고나서 CPU로 보내기 시작하면 통신하는 시간동안 GPU가 기다려야 합니다. ZeRO Offload는 Delayed Paramter Update(DPU)라는 기법을 도입했는데, 이는 DDP의 Gradient Bucketing이 그랬던 것과 비슷하게 **Communication과 Computation을 오버랩해서 전체 처리 시간을 단축**시키는 전략입니다.\n", "\n", "![](../images/zero_off_3.png)\n", "\n", "실험 결과 DPU를 적용해도 성능에는 문제가 없었으며, 속도를 다소 개선 할 수 있었다고 합니다.\n", "\n", "
\n", "\n", "### ZeRO Offload + ZeRO DP\n", "\n", "\n", "![](../images/zero_off_4.png)\n", "\n", "ZeRO Offload 기술은 ZeRO DP와 결합할 수 있습니다. 만약 ZeRO-DP를 적용한 상태로 Optimizer States와 Gradient를 CPU로 Offload하면 위와 같은 형태를 띄게 됩니다. 참고로 ZeRO DP와 Offload 간의 결합은 stage 2부터 가능하며, 파라미터까지 Offload 시키려면 ZeRO stage를 3로 설정해야 합니다.\n", "\n", "- **ZeRO stage 2**: Optimizer States Offload\n", "- **ZeRO stage 3**: Optimizer States + Parameter Offlaod\n", "\n", "
\n", "\n", "### CPU Adam\n", "\n", "현재의 Adam Optimizer는 GPU에서 최적화 되어있기 때문에 CPU에서 동작시키면 다소 느린 것이 사실입니다. 다양한 최적화 기법들을 적용하여 CPU에서 매우 빠르게 동작하는 Adam Optimizer를 제공합니다. CPU Adam의 구현은 머신러닝이나 분산처리 분야가 아닌 거의 컴퓨터 아키텍처나 운영체제에 가까운 영역이라서 본 자료에서 자세히 다루지 않겠습니다. 더 자세한 내용은 논문을 참고해주세요. (사실 저도 이 부분은 자세 안보고 넘어가서 잘 모릅니다.. 이 부분 깊게 공부하신 분 계시면 이슈로 알려주세요.)\n", "\n", "\n", "![](../images/cpu_adam.png)\n" ] }, { "cell_type": "markdown", "id": "8dcc0314", "metadata": {}, "source": [ "ZeRO Offload를 실습해봅시다. 마찬가지로 먼저 Configuration을 변경합니다. Optimizer와 Parameter를 모두 Offload 하기 위해서 ZeRO stage를 3으로 설정하였으며 `offload_param`와 `offload_optimizer`을 추가하였습니다." ] }, { "cell_type": "markdown", "id": "9b182165", "metadata": {}, "source": [ "```\n", "{\n", " \"train_batch_size\": 16,\n", " \"gradient_accumulation_steps\": 1,\n", " \"scheduler\": {\n", " \"type\": \"WarmupDecayLR\",\n", " \"params\": {\n", " \"total_num_steps\": 300,\n", " \"warmup_min_lr\": 0,\n", " \"warmup_max_lr\": 3e-5,\n", " \"warmup_num_steps\": 30\n", " }\n", " },\n", " \"fp16\": {\n", " \"enabled\": true,\n", " \"initial_scale_power\": 32,\n", " \"loss_scale_window\": 1000,\n", " \"hysteresis\": 2,\n", " \"min_loss_scale\": 1\n", " },\n", " \"zero_optimization\": {\n", " \"stage\": 3,\n", " \"allgather_bucket_size\": 5e8,\n", " \"reduce_bucket_size\": 5e8,\n", " \"offload_param\": {\n", " \"device\": \"cpu\",\n", " \"pin_memory\": true\n", " },\n", " \"offload_optimizer\": {\n", " \"device\": \"cpu\",\n", " \"pin_memory\": true\n", " }\n", " },\n", " \"activation_checkpointing\": {\n", " \"partition_activations\": true,\n", " \"cpu_checkpointing\": true,\n", " \"contiguous_memory_optimization\": true,\n", " \"number_checkpoints\": 4\n", " },\n", " \"zero_allow_untested_optimizer\": true,\n", " \"wall_clock_breakdown\": false,\n", " \"steps_per_print\": 9999999999\n", "}\n", "```" ] }, { "cell_type": "code", "execution_count": 32, "id": "3ba30a46", "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[2021-10-27 23:25:24,828] [WARNING] [runner.py:122:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.\n", "[2021-10-27 23:25:25,004] [INFO] [runner.py:360:main] cmd = /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgM119 --master_addr=127.0.0.1 --master_port=29500 ../src/zero_args.py --deepspeed_config=../src/zero_off_config.json\n", "[2021-10-27 23:25:26,109] [INFO] [launch.py:80:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]}\n", "[2021-10-27 23:25:26,109] [INFO] [launch.py:89:main] nnodes=1, num_local_procs=4, node_rank=0\n", "[2021-10-27 23:25:26,109] [INFO] [launch.py:101:main] global_rank_mapping=defaultdict(, {'localhost': [0, 1, 2, 3]})\n", "[2021-10-27 23:25:26,109] [INFO] [launch.py:102:main] dist_world_size=4\n", "[2021-10-27 23:25:26,109] [INFO] [launch.py:105:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3\n", "[2021-10-27 23:25:31,292] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 23:25:31,295] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 23:25:31,337] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 23:25:31,340] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 23:25:31,355] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: 
version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 23:25:31,358] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 23:25:31,366] [INFO] [logging.py:68:log_dist] [Rank -1] DeepSpeed info: version=0.5.4, git-hash=unknown, git-branch=unknown\n", "[2021-10-27 23:25:31,369] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl\n", "[2021-10-27 23:25:36,773] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed groups\n", "[2021-10-27 23:25:36,774] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed model parallel group with size 1\n", "[2021-10-27 23:25:36,779] [INFO] [logging.py:68:log_dist] [Rank 0] initializing deepspeed expert parallel group with size 1\n", "[2021-10-27 23:25:36,780] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert data parallel process group with ranks: [0, 1, 2, 3]\n", "[2021-10-27 23:25:36,780] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [0]\n", "[2021-10-27 23:25:36,780] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [1]\n", "[2021-10-27 23:25:36,780] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [2]\n", "[2021-10-27 23:25:36,780] [INFO] [logging.py:68:log_dist] [Rank 0] creating expert parallel process group with ranks: [3]\n", "[2021-10-27 23:25:37,092] [INFO] [engine.py:205:__init__] DeepSpeed Flops Profiler Enabled: False\n", "[2021-10-27 23:25:37,092] [INFO] [engine.py:849:_configure_optimizer] Removing param_group that has no 'params' in the client Optimizer\n", "[2021-10-27 23:25:37,092] [INFO] [engine.py:854:_configure_optimizer] Using client Optimizer as basic optimizer\n", "[2021-10-27 23:25:37,097] [INFO] [engine.py:871:_configure_optimizer] DeepSpeed Basic Optimizer = Adam\n", "[2021-10-27 23:25:37,097] [INFO] [utils.py:44:is_zero_supported_optimizer] Checking ZeRO support for optimizer=Adam type=\n", "[2021-10-27 23:25:37,097] [INFO] [logging.py:68:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer\n", "Initializing ZeRO Stage 3\n", "[2021-10-27 23:25:37,101] [INFO] [stage3.py:638:__init__] Reduce bucket size 500000000.0\n", "[2021-10-27 23:25:37,101] [INFO] [stage3.py:639:__init__] Allgather bucket size 50000000\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "Emitting ninja build file /home/ubuntu/.cache/torch_extensions/utils/build.ninja...\n", "Building extension module utils...\n", "Allowing ninja to set a default number of workers... 
(overridable by setting the environment variable MAX_JOBS=N)\n", "ninja: no work to do.\n", "Loading extension module utils...\n", "Time to load utils op: 0.3441762924194336 seconds\n", "Loading extension module utils...\n", "Time to load utils op: 0.4021937847137451 seconds\n", "Loading extension module utils...\n", "Time to load utils op: 0.40210652351379395 seconds\n", "Loading extension module utils...\n", "Time to load utils op: 0.40212202072143555 seconds\n", "[2021-10-27 23:25:38,717] [INFO] [stage3.py:831:__init__] optimizer state initialized\n", "[2021-10-27 23:25:38,942] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = Adam\n", "[2021-10-27 23:25:38,942] [INFO] [engine.py:587:_configure_lr_scheduler] DeepSpeed using configured LR scheduler = WarmupDecayLR\n", "[2021-10-27 23:25:38,942] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = \n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "[2021-10-27 23:25:38,943] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[3e-05], mom=[(0.9, 0.999)]\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "[2021-10-27 23:25:38,943] [INFO] [config.py:940:print] DeepSpeedEngine configuration:\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "Time to load utils op: 0.00048089027404785156 seconds\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "[2021-10-27 23:25:38,943] [INFO] [config.py:944:print] activation_checkpointing_config {\n", " \"partition_activations\": true, \n", " \"contiguous_memory_optimization\": true, \n", " \"cpu_checkpointing\": true, \n", " \"number_checkpoints\": 4, \n", " \"synchronize_checkpoint_boundary\": false, \n", " \"profile\": false\n", "}\n", "[2021-10-27 23:25:38,943] [INFO] [config.py:944:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}\n", "Time to load utils op: 0.0004799365997314453 seconds\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "[2021-10-27 23:25:38,943] [INFO] [config.py:944:print] allreduce_always_fp32 ........ False\n", "[2021-10-27 23:25:38,943] [INFO] [config.py:944:print] amp_enabled .................. False\n", "[2021-10-27 23:25:38,943] [INFO] [config.py:944:print] amp_params ................... False\n", "Time to load utils op: 0.00047278404235839844 seconds\n", "[2021-10-27 23:25:38,943] [INFO] [config.py:944:print] checkpoint_tag_validation_enabled True\n", "[2021-10-27 23:25:38,943] [INFO] [config.py:944:print] checkpoint_tag_validation_fail False\n", "[2021-10-27 23:25:38,943] [INFO] [config.py:944:print] curriculum_enabled ........... False\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] curriculum_params ............ False\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] dataloader_drop_last ......... False\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] disable_allgather ............ False\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] dump_state ................... False\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] dynamic_loss_scale_args ...... 
{'init_scale': 4294967296, 'scale_window': 1000, 'delayed_shift': 2, 'min_scale': 1}\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] eigenvalue_enabled ........... False\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] eigenvalue_gas_boundary_resolution 1\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] eigenvalue_layer_name ........ bert.encoder.layer\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] eigenvalue_layer_num ......... 0\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] eigenvalue_max_iter .......... 100\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] eigenvalue_stability ......... 1e-06\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] eigenvalue_tol ............... 0.01\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] eigenvalue_verbose ........... False\n", "[2021-10-27 23:25:38,944] [INFO] [config.py:944:print] elasticity_enabled ........... False\n", "[2021-10-27 23:25:38,946] [INFO] [config.py:944:print] flops_profiler_config ........ {\n", " \"enabled\": false, \n", " \"profile_step\": 1, \n", " \"module_depth\": -1, \n", " \"top_modules\": 1, \n", " \"detailed\": true, \n", " \"output_file\": null\n", "}\n", "[2021-10-27 23:25:38,946] [INFO] [config.py:944:print] fp16_enabled ................. True\n", "[2021-10-27 23:25:38,946] [INFO] [config.py:944:print] fp16_master_weights_and_gradients False\n", "[2021-10-27 23:25:38,946] [INFO] [config.py:944:print] fp16_mixed_quantize .......... False\n", "[2021-10-27 23:25:38,946] [INFO] [config.py:944:print] global_rank .................. 0\n", "[2021-10-27 23:25:38,946] [INFO] [config.py:944:print] gradient_accumulation_steps .. 1\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] gradient_clipping ............ 0.0\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] gradient_predivide_factor .... 1.0\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] initial_dynamic_scale ........ 4294967296\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] loss_scale ................... 0\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] memory_breakdown ............. False\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] optimizer_legacy_fusion ...... False\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] optimizer_name ............... None\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] optimizer_params ............. None\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] pld_enabled .................. False\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] pld_params ................... False\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] prescale_gradients ........... False\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] quantize_change_rate ......... 0.001\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] quantize_groups .............. 1\n", "[2021-10-27 23:25:38,947] [INFO] [config.py:944:print] quantize_offset .............. 1000\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] quantize_period .............. 1000\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] quantize_rounding ............ 0\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] quantize_start_bits .......... 
16\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] quantize_target_bits ......... 8\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] quantize_training_enabled .... False\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] quantize_type ................ 0\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] quantize_verbose ............. False\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] scheduler_name ............... WarmupDecayLR\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] scheduler_params ............. {'total_num_steps': 300, 'warmup_min_lr': 0, 'warmup_max_lr': 3e-05, 'warmup_num_steps': 30}\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] sparse_attention ............. None\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] sparse_gradients_enabled ..... False\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] steps_per_print .............. 9999999999\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] tensorboard_enabled .......... False\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] tensorboard_job_name ......... DeepSpeedJobName\n", "[2021-10-27 23:25:38,948] [INFO] [config.py:944:print] tensorboard_output_path ...... \n", "[2021-10-27 23:25:38,949] [INFO] [config.py:944:print] train_batch_size ............. 16\n", "[2021-10-27 23:25:38,949] [INFO] [config.py:944:print] train_micro_batch_size_per_gpu 4\n", "[2021-10-27 23:25:38,949] [INFO] [config.py:944:print] use_quantizer_kernel ......... False\n", "[2021-10-27 23:25:38,949] [INFO] [config.py:944:print] wall_clock_breakdown ......... False\n", "[2021-10-27 23:25:38,949] [INFO] [config.py:944:print] world_size ................... 4\n", "[2021-10-27 23:25:38,949] [INFO] [config.py:944:print] zero_allow_untested_optimizer True\n", "[2021-10-27 23:25:38,950] [INFO] [config.py:944:print] zero_config .................. {\n", " \"stage\": 3, \n", " \"contiguous_gradients\": true, \n", " \"reduce_scatter\": true, \n", " \"reduce_bucket_size\": 5.000000e+08, \n", " \"allgather_partitions\": true, \n", " \"allgather_bucket_size\": 5.000000e+08, \n", " \"overlap_comm\": true, \n", " \"load_from_fp32_weights\": true, \n", " \"elastic_checkpoint\": true, \n", " \"offload_param\": {\n", " \"device\": \"cpu\", \n", " \"nvme_path\": null, \n", " \"buffer_count\": 5, \n", " \"buffer_size\": 1.000000e+08, \n", " \"max_in_cpu\": 1.000000e+09, \n", " \"pin_memory\": true\n", " }, \n", " \"offload_optimizer\": {\n", " \"device\": \"cpu\", \n", " \"nvme_path\": null, \n", " \"buffer_count\": 4, \n", " \"pin_memory\": true, \n", " \"pipeline_read\": false, \n", " \"pipeline_write\": false, \n", " \"fast_init\": false, \n", " \"pipeline\": false\n", " }, \n", " \"sub_group_size\": 1.000000e+09, \n", " \"prefetch_bucket_size\": 5.000000e+07, \n", " \"param_persistence_threshold\": 1.000000e+05, \n", " \"max_live_parameters\": 1.000000e+09, \n", " \"max_reuse_distance\": 1.000000e+09, \n", " \"gather_fp16_weights_on_model_save\": false, \n", " \"ignore_unused_parameters\": true, \n", " \"round_robin_gradients\": false, \n", " \"legacy_stage1\": false\n", "}\n", "[2021-10-27 23:25:38,950] [INFO] [config.py:944:print] zero_enabled ................. True\n", "[2021-10-27 23:25:38,950] [INFO] [config.py:944:print] zero_optimization_stage ...... 
3\n", "[2021-10-27 23:25:38,950] [INFO] [config.py:952:print] json = {\n", " \"train_batch_size\": 16, \n", " \"gradient_accumulation_steps\": 1, \n", " \"scheduler\": {\n", " \"type\": \"WarmupDecayLR\", \n", " \"params\": {\n", " \"total_num_steps\": 300, \n", " \"warmup_min_lr\": 0, \n", " \"warmup_max_lr\": 3e-05, \n", " \"warmup_num_steps\": 30\n", " }\n", " }, \n", " \"fp16\": {\n", " \"enabled\": true, \n", " \"initial_scale_power\": 32, \n", " \"loss_scale_window\": 1000, \n", " \"hysteresis\": 2, \n", " \"min_loss_scale\": 1\n", " }, \n", " \"zero_optimization\": {\n", " \"stage\": 3, \n", " \"allgather_bucket_size\": 5.000000e+08, \n", " \"reduce_bucket_size\": 5.000000e+08, \n", " \"offload_param\": {\n", " \"device\": \"cpu\", \n", " \"pin_memory\": true\n", " }, \n", " \"offload_optimizer\": {\n", " \"device\": \"cpu\", \n", " \"pin_memory\": true\n", " }\n", " }, \n", " \"activation_checkpointing\": {\n", " \"partition_activations\": true, \n", " \"cpu_checkpointing\": true, \n", " \"contiguous_memory_optimization\": true, \n", " \"number_checkpoints\": 4\n", " }, \n", " \"zero_allow_untested_optimizer\": true, \n", " \"wall_clock_breakdown\": false, \n", " \"steps_per_print\": 1.000000e+10\n", "}\n", "Using /home/ubuntu/.cache/torch_extensions as PyTorch extensions root...\n", "No modifications detected for re-loaded extension module utils, skipping build step...\n", "Loading extension module utils...\n", "Time to load utils op: 0.0004980564117431641 seconds\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Reusing dataset squad (/home/ubuntu/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\n", " 0%| | 0/2 [00:00\n", "\n", "## 6. ZeRO Infinity\n", "\n", "ZeRO Infinity는 NVMe(SSD) 메모리에 파라미터를 저장하는 방식을 채택하였습니다. NVMe는 CPU 메모리보다도 훨씬 커다란 메모리 용량을 가지기 때문에 메모리의 한계를 한번 더 돌파 하였다고 평가됩니다. ZeRO Infinity 알고리즘 역시 매우 복잡하기 영샹으로 확인하겠습니다. https://www.microsoft.com/en-us/research/uploads/prod/2021/04/1400x788_deepspeed_nologo-1.mp4" ] }, { "cell_type": "code", "execution_count": 34, "id": "3ac31629", "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "\n", "
\n", "
" ], "text/plain": [ "" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from IPython.display import HTML\n", "\n", "HTML(\"\"\"\n", "
\n", "
\"\"\")\n" ] }, { "cell_type": "markdown", "id": "093cd70f", "metadata": {}, "source": [ "### ZeRO Infinity의 핵심 아이디어\n", "\n", "ZeRO Infinity는 ZeRO Offload의 확장판입니다. 기존에 ZeRO Offload는 CPU RAM과 GPU VRAM를 다음과 같이 운용하였습니다.\n", "\n", "- **GPU**: FP16 parameter & gradient 상주, **Forward & Backward 수행**.\n", "- **CPU**: FP32 parameter & gradient & optimizer 상주, **Weight Update 수행**.\n", "\n", "ZeRO Infinity는 NVMe가 추가되어 세개의 디바이스를 운용합니다. 활용법은 아래와 같습니다.\n", "\n", "- **NVMe**: 기본적으로 모든 파라미터는 사용되지 않을때 NVMe에 상주.\n", "- **GPU**: FP16 parameter & gradient가 Forward & Backward를 수행해야 할 때 NVMe에서 필요한 부분만 GPU로 업로드.\n", "- **CPU**: FP32 parameter & gradient, optimizer가 Weight Update를 수행해야 할 때 NVMe에서 필요한 부분만 CPU로 업로드.\n", "\n", "즉, 기본적으로 모든 텐서를 NVMe로 내리고 있다가, 그들이 필요할때만 CPU & GPU 등의 연산 장비로 올리는 방식을 사용합니다.\n", "\n", "
\n", " \n", "### Offload & ZeRO-DP 등 과의 비교\n", "\n", "\n", "![](../images/zero_infinity.png)\n", "\n", "ZeRO Infinity는 거의 모든 텐서를 NVMe에 내려놓고 있기 때문에 실제로 CPU와 GPU는 거의 텅텅 빈 상태가 됩니다. 따라서 위 그림 처럼 기존의 기법들로는 아예 학습이 불가능 하던 수준의 모델도 학습할 수 있습니다. 또한 실험 결과에 의하면 ZeRO Offload와 비교했을때 속도가 더 빨랐다고 합니다.\n", "\n", "ZeRO Infinity는 NVMe와 연결된 디바이스가 필요하기 때문에 본 자료에서 실습을 하지는 않겠습니다. NVMe가 탑재된 디바이스라면 아래와 같이 `offload_param`과 `offload_optimizer`의 동작 디바이스를 `nvme`로 변경하고 `nvme_path`를 알맞게 설정해주시면 됩니다.\n", "\n", "```\n", "\"offload_param\": {\n", " \"device\": \"nvme\", \n", " \"nvme_path\": \"/local_nvme\",\n", " \"pin_memory\": true\n", "}, \n", "\"offload_optimizer\": {\n", " \"device\": \"nvme\", \n", " \"nvme_path\": \"/local_nvme\",\n", " \"pin_memory\": true\n", "}\n", "```" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.9" } }, "nbformat": 4, "nbformat_minor": 5 }