{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%reload_ext autoreload\n", "%autoreload 2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# FP16" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Intro" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook we are going to implement mixed precision floating points. \n", "\n", "By default, all computations are done in single-precision which means that all the floats (inputs, activations and weights) are 32-bit floats. If we could use 16-bit floats for each of these values, we would save half the space in RAM and this would enable us to double the size of our model and double the batch size (the first helping to get better results and the second to train quicker).\n", "\n", "However, half-precision floating points might lead to not-as-accurate results. Specifically, half-precision floating points can only represent 1, 1+2e-10, 1+2\\*2e-10 ... which in standard notation are 1, 1.0009765625, 1.001953125 ... (for more information on the limitations of the encoding of half-precision floating point numbers click [here](https://en.wikipedia.org/wiki/Half-precision_floating-point_format)). There are some specific calculations where this lack of acccuracy will impact our results. These are:\n", "\n", "1. When updating the weights we basically do _w=w-lr*w.grad_ for each weight and usually _lr*w.grad_ is several orders of magnitude below *w*. When this happens (e.g. _w=1_ and _lr*w.grad=0.0001_) the update will make no effect.\n", "2. Your gradients may be replaced by 0 because they are too low (underflow).\n", "3. Activations or loss may hit nan/infinity (overflow) and training might more easily diverge.\n", "\n", "To address these problems we will use a combination of different strategies.\n", "\n", "To take care of 1 and 3, we will use sigle-precision floating points for some parameters in the training. \n", "\n", "For 1, it’s okay if *w* and *grad* are both half floats, but when we do the operation _w = w - lr * grad_, we need to compute it in FP32. To achieve this, we will keep a copy of the weights in FP32 precision (from now on, master model) where we will update and then copy over to the original model. When we copy the weights into the original model we will lose precision, but the updated weight will be kept in FP32 in the master model so that, when the updates add up to a value that can be represented in FP16, the original model can tell the difference (i.e. if the update is +0.0001, the new weight value is updated it will be 1.0001 and the original model will not be able to tell the difference but if it is updated five times the new weight value will be 1.0005 and the original model will incorporate it as 1.0005).\n", "\n", "For 3, we will simply keep our batchnorms in single-precision (so our activations are in single precision) and our loss in single-precision (done by converting the last output of the model to single precision before passing it to the loss).\n", "\n", "For 2, we will take a different approach, called gradient scaling. We multiply the loss by a scale factor to place the values in a scale that FP16 can handle with more precision. We will then calculate the gradients by backpropagation and, before updating the weights, we will rescale the gradients to the original scale by dividing by _scale_ (remember that, because of the solution proposed in 1, we will update the weights in FP32 in the master model)." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set up" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "from nb_004a import *" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DATA_PATH = Path('data')\n", "PATH = DATA_PATH/'cifar10'\n", "\n", "data_mean,data_std = map(tensor, ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261]))\n", "cifar_norm,cifar_denorm = normalize_funcs(data_mean, data_std)\n", "\n", "train_tfms = [flip_lr(p=0.5),\n", " pad(padding=4),\n", " crop(size=32, row_pct=(0,1.), col_pct=(0,1.))]\n", "valid_tfms = []\n", "\n", "bs = 64" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def to_half(b:Collection[Tensor])->Collection[Tensor]: \n", " \"[x,y] -> [x.half(),y] (half precision)\"\n", " return [b[0].half(), b[1]]\n", "\n", "def compose(*funcs:Callable)->Callable:\n", " \"Compose list of funcs\"\n", " def compose_(funcs, x, *args, **kwargs):\n", " for f in listify(funcs): x = f(x, *args, **kwargs)\n", " return x\n", " return partial(compose_, funcs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_ds = ImageDataset.from_folder(PATH/'train', classes=['airplane','dog'])\n", "valid_ds = ImageDataset.from_folder(PATH/'test', classes=['airplane','dog'])\n", "data = DataBunch.create(train_ds, valid_ds, bs=bs, num_workers=0, \n", " train_tfm=train_tfms, valid_tfm=valid_tfms, dl_tfms=cifar_norm)\n", "len(data.train_dl), len(data.valid_dl)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "metrics = [accuracy]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = Darknet([1, 2, 2, 2, 2], num_classes=2, nf=16)\n", "learn = Learner(data, model, metrics=metrics)\n", "sched = OneCycleScheduler(learn, 0.1, 5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# FP16" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Utils" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def bn2float(module:nn.Module)->nn.Module:\n", " \"If a module is batchnorm don't use half precision\"\n", " if isinstance(module, torch.nn.modules.batchnorm._BatchNorm): module.float()\n", " for child in module.children(): bn2float(child)\n", " return module\n", "\n", "def model2half(model:nn.Module)->nn.Module:\n", " \"Converts the model to half precision except the batchnorm layers\"\n", " return bn2float(model.half())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Helper function to save the master model in FP32 with flat tensors (apparently it helps with performance)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "from torch._utils import _unflatten_dense_tensors\n", "from torch.nn.utils import parameters_to_vector" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will implement the three changes we noted above. A summary of the steps we will follow is:\n", "\n", "\n", " 1. Compute the output with the FP16 model, then the loss\n", " 2. Back-propagate the gradients in half-precision\n", " 3. Copy the gradients in FP32 precision\n", " 4. Do the update on the master model (in FP32 precision)\n", " 5. 
"5. Copy the master model into the FP16 model\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def get_master(layer_groups:ModuleList, flat_master:bool=False) -> Tuple[List[List[Tensor]], List[List[Tensor]]]:\n", "    \"Returns two lists of parameter groups, one with the model parameters in FP16 and one with their master copies in FP32\"\n", "    split_groups = split_bn_bias(layer_groups)\n", "    model_params = [[param for param in lg.parameters() if param.requires_grad] for lg in split_groups]\n", "    if flat_master:\n", "        master_params = []\n", "        for lg in model_params:\n", "            if len(lg) != 0:\n", "                mp = parameters_to_vector([param.data.float() for param in lg])\n", "                mp = torch.nn.Parameter(mp, requires_grad=True)\n", "                if mp.grad is None: mp.grad = mp.new(*mp.size())\n", "                master_params.append([mp])\n", "            else: master_params.append([])\n", "        return model_params, master_params\n", "    else:\n", "        master_params = [[param.clone().float().detach() for param in lg] for lg in model_params]\n", "        for mp in master_params:\n", "            for param in mp: param.requires_grad = True\n", "        return model_params, master_params\n", "\n", "def model_g2master_g(model_params:Sequence[Tensor], master_params:Sequence[Tensor], flat_master:bool=False)->None:\n", "    \"Copies the model gradients to the master parameters for the optimizer step\"\n", "    if flat_master:\n", "        for model_group,master_group in zip(model_params,master_params):\n", "            if len(master_group) != 0:\n", "                master_group[0].grad.data.copy_(parameters_to_vector([p.grad.data.float() for p in model_group]))\n", "    else:\n", "        for model_group,master_group in zip(model_params,master_params):\n", "            for model, master in zip(model_group, master_group):\n", "                if model.grad is not None:\n", "                    if master.grad is None: master.grad = master.data.new(*master.data.size())\n", "                    master.grad.data.copy_(model.grad.data)\n", "                else: master.grad = None\n", "\n", "def master2model(model_params:Sequence[Tensor], master_params:Sequence[Tensor], flat_master:bool=False)->None:\n", "    \"Copy master parameters to model parameters\"\n", "    if flat_master:\n", "        for model_group,master_group in zip(model_params,master_params):\n", "            if len(model_group) != 0:\n", "                for model, master in zip(model_group, _unflatten_dense_tensors(master_group[0].data, model_group)):\n", "                    model.data.copy_(master)\n", "    else:\n", "        for model_group,master_group in zip(model_params,master_params):\n", "            for model, master in zip(model_group, master_group): model.data.copy_(master.data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## MixedPrecision" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "from torch._utils import _unflatten_dense_tensors\n", "from torch.nn.utils import parameters_to_vector\n", "\n", "@dataclass\n", "class MixedPrecision(Callback):\n", "    \"Callback that handles mixed-precision training\"\n", "    learn:Learner\n", "    loss_scale:float=512.\n", "    flat_master:bool=False\n", "    def __post_init__(self): assert torch.backends.cudnn.enabled, \"Mixed precision training requires cudnn.\"\n", "    \n", "    def on_train_begin(self, **kwargs:Any)->None:\n", "        \"Ensures everything is in half precision mode\"\n", "#         self.learn.data.train_dl.half = True\n", "        self.learn.data.train_dl.add_tfm(to_half)\n", "        if hasattr(self.learn.data, 'valid_dl') and self.learn.data.valid_dl is not None:\n", "#             self.learn.data.valid_dl.half = True\n", "            self.learn.data.valid_dl.add_tfm(to_half)\n",
params in FP32\n", " self.model_params, self.master_params = get_master(self.learn.layer_groups, self.flat_master)\n", " #Changes the optimizer so that the optimization step is done in FP32.\n", " opt = self.learn.opt\n", " mom,wd,beta = opt.mom,opt.wd,opt.beta\n", " lrs = [lr for lr in self.learn.opt._lr for _ in range(2)]\n", " opt_params = [{'params': mp, 'lr': lr} for mp,lr in zip(self.master_params, lrs)]\n", " self.learn.opt.opt = self.learn.opt_fn(opt_params)\n", " opt.mom,opt.wd,opt.beta = mom,wd,beta\n", " \n", " def on_train_end(self, **kwargs:Any)->None:\n", " \"Removes half precision transforms added at `on_train_begin`\"\n", " self.learn.data.train_dl.remove_tfm(to_half)\n", " if hasattr(self.learn.data, 'valid_dl') and self.learn.data.valid_dl is not None:\n", " self.learn.data.valid_dl.remove_tfm(to_half)\n", " \n", " def on_loss_begin(self, last_output:Tensor, **kwargs:Any) -> Tensor:\n", " \"Converts half precision output to FP32 to avoid reduction overflow.\"\n", " return last_output.float()\n", " \n", " def on_backward_begin(self, last_loss:Rank0Tensor, **kwargs:Any) -> Rank0Tensor:\n", " \"Scale gradients up by `loss_scale` to prevent underflow\"\n", " #To avoid gradient underflow, we scale the gradients\n", " return last_loss * self.loss_scale\n", " \n", " def on_backward_end(self, **kwargs:Any ):\n", " \"Convert the gradients back to FP32 and divide them by the scale.\"\n", " model_g2master_g(self.model_params, self.master_params, self.flat_master)\n", " for group in self.master_params:\n", " for param in group: param.grad.div_(self.loss_scale)\n", " \n", " def on_step_end(self, **kwargs:Any)->None:\n", " \"Update the params from master to model and zero grad\"\n", " #Zeros the gradients of the model since the optimizer is disconnected.\n", " self.learn.model.zero_grad()\n", " #Update the params from master to model.\n", " master2model(self.model_params, self.master_params, self.flat_master)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def mixed_precision(loss_scale:float=512., flat_master:bool=False, **kwargs:Any)->MixedPrecision:\n", " return partial(MixedPrecision, loss_scale=loss_scale, flat_master=flat_master, **kwargs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cbs = [one_cycle_scheduler(0.1)]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = Darknet([1, 2, 2, 2, 2], num_classes=2, nf=16)\n", "model = model2half(model)\n", "learn = Learner(data, model, metrics=accuracy, callback_fns=cbs)\n", "mp_cb = MixedPrecision(learn, flat_master=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.fit(1, 1e-2, callbacks=mp_cb)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.model.layers[0][0].weight.type()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mp_cb.master_params[0][0].size(),mp_cb.master_params[0][0].type()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## to_fp16" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def to_fp16(learn:Learner, loss_scale:float=512., flat_master:bool=False)->Learner:\n", " \"Transforms the learner in FP16 precision\"\n", " learn.model = model2half(learn.model)\n", " learn.mp_cb = MixedPrecision(learn, loss_scale=loss_scale, 
flat_master=flat_master)\n", " learn.callbacks.append(learn.mp_cb)\n", " return learn\n", "\n", "Learner.to_fp16 = to_fp16" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = Darknet([1, 2, 2, 2, 2], num_classes=2, nf=16)\n", "learn = Learner(data, model, metrics=accuracy, callback_fns=cbs)\n", "learn.to_fp16(flat_master=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.fit(1, 1e-2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.mp_cb.master_params[0][0].size()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Test with discriminative lrs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = Darknet([1, 2, 2, 2, 2], num_classes=2, nf=16)\n", "model = model2half(model)\n", "learn = Learner(data, model, metrics=accuracy)\n", "\n", "learn.split(lambda m: split_model(m,[m.layers[9],m.layers[15]]))\n", "cbs = [MixedPrecision(learn, flat_master=True), OneCycleScheduler(learn, 0.1)]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.fit(1, 1e-2, callbacks=cbs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.model.layers[0][0].weight.type()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for master in cbs[0].master_params:\n", " print(master[0].size(),master[0].type())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }