{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Mixup / Label smoothing" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%load_ext autoreload\n", "%autoreload 2\n", "\n", "%matplotlib inline" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "from exp.nb_10 import *" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]\n", "bs = 64\n", "\n", "il = ImageList.from_files(path, tfms=tfms)\n", "sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))\n", "ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())\n", "data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Mixup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Jump_to lesson 12 video](https://course19.fast.ai/videos/?lesson=12&t=226)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### What is mixup?\n", "\n", "As the name kind of suggests, the authors of the [mixup article](https://arxiv.org/abs/1710.09412) propose to train the model on a mix of the pictures of the training set. Let's say we're on CIFAR10 for instance, then instead of feeding the model the raw images, we take two (which could be in the same class or not) and do a linear combination of them: in terms of tensor it's\n", "``` python\n", "new_image = t * image1 + (1-t) * image2\n", "```\n", "where t is a float between 0 and 1. Then the target we assign to that image is the same combination of the original targets:\n", "``` python\n", "new_target = t * target1 + (1-t) * target2\n", "```\n", "assuming your targets are one-hot encoded (which isn't the case in pytorch usually). And that's as simple as this." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "img1 = PIL.Image.open(ll.train.x.items[0])\n", "img1" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "img2 = PIL.Image.open(ll.train.x.items[4000])\n", "img2" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mixed_up = ll.train.x[0] * 0.3 + ll.train.x[4000] * 0.7\n", "plt.imshow(mixed_up.permute(1,2,0));" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "French horn or tench? The right answer is 70% french horn and 30% tench ;)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Implementation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Jump_to lesson 12 video](https://course19.fast.ai/videos/?lesson=12&t=490)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The implementation relies on something called the *beta distribution* which in turns uses something which Jeremy still finds mildly terrifying called the *gamma function*. To get over his fears, Jeremy reminds himself that *gamma* is just a factorial function that (kinda) interpolates nice and smoothly to non-integers too. How it does that exactly isn't important..." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# PyTorch has a log-gamma but not a gamma, so we'll create one\n", "Γ = lambda x: x.lgamma().exp()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "NB: If you see math symbols you don't know you can google them like this: [Γ function](https://www.google.com/search?q=Γ+function).\n", "\n", "If you're not used to typing unicode symbols, on Mac type ctrl-cmd-space to bring up a searchable emoji box. On Linux you can use the [compose key](https://help.ubuntu.com/community/ComposeKey). On Windows you can also use a compose key, but you first need to install [WinCompose](https://github.com/samhocevar/wincompose). By default the compose key is the right-hand Alt key.\n", "\n", "You can search for symbol names in WinCompose. The greek letters are generally compose-\\*-letter (where *letter* is, for instance, a to get greek α alpha)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "facts = [math.factorial(i) for i in range(7)]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.plot(range(7), facts, 'ro')\n", "plt.plot(torch.linspace(0,6), Γ(torch.linspace(0,6)+1))\n", "plt.legend(['factorial','Γ']);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "torch.linspace(0,0.9,10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the original article, the authors suggested three things:\n", " 1. Create two separate dataloaders and draw a batch from each at every iteration to mix them up\n", " 2. Draw a t value following a beta distribution with a parameter α (0.4 is suggested in their article)\n", " 3. Mix up the two batches with the same value t.\n", " 4. Use one-hot encoded targets\n", "\n", "Why the beta distribution with the same parameters α? Well it looks like this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_,axs = plt.subplots(1,2, figsize=(12,4))\n", "x = torch.linspace(0,1, 100)\n", "for α,ax in zip([0.1,0.8], axs):\n", " α = tensor(α)\n", "# y = (x.pow(α-1) * (1-x).pow(α-1)) / (gamma_func(α ** 2) / gamma_func(α))\n", " y = (x**(α-1) * (1-x)**(α-1)) / (Γ(α)**2 / Γ(2*α))\n", " ax.plot(x,y)\n", " ax.set_title(f\"α={α:.1}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With a low `α`, we pick values close to 0. and 1. with a high probability, and the values in the middle all have the same kind of probability. With a greater `α`, 0. and 1. get a lower probability ." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While the approach above works very well, it's not the fastest way we can do this. The main point that slows down this process is wanting two different batches at every iteration (which means loading twice the amount of images and applying to them the other data augmentation function). To avoid this slow down, we can be a little smarter and mixup a batch with a shuffled version of itself (this way the images mixed up are still different). This was a trick suggested in the MixUp paper.\n", "\n", "Then pytorch was very careful to avoid one-hot encoding targets when it could, so it seems a bit of a drag to undo this. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "While the approach above works very well, it's not the fastest way we can do this. The main thing that slows this process down is needing two different batches at every iteration (which means loading twice the number of images and applying the other data augmentation functions to them). To avoid this slowdown, we can be a little smarter and mix up a batch with a shuffled version of itself (this way the images mixed up are still different). This was a trick suggested in the MixUp paper.\n", "\n", "Also, PyTorch was very careful to avoid one-hot encoding targets when it could, so it seems a bit of a drag to undo this. Fortunately for us, if the loss is a classic cross-entropy, we have\n", "```python\n", "loss(output, new_target) = t * loss(output, target1) + (1-t) * loss(output, target2)\n", "```\n", "so we won't one-hot encode anything; we'll just compute those two losses and take their linear combination.\n", "\n", "Using the same parameter t for the whole batch also seemed a bit inefficient. In our experiments, we noticed that the model trains faster if we draw a different t for every image in the batch (both options reach the same accuracy in the end, it's just that one arrives there more slowly).\n", "The last trick we have to apply deals with the duplicates this strategy can create: let's say the shuffle happens to mix image0 with image1 and image1 with image0, and that we draw t=0.1 for the first and t=0.9 for the second. Then\n", "```python\n", "image0 * 0.1 + shuffle0 * (1-0.1) = image0 * 0.1 + image1 * 0.9\n", "image1 * 0.9 + shuffle1 * (1-0.9) = image1 * 0.9 + image0 * 0.1\n", "```\n", "will be the same. Of course we have to be a bit unlucky for this to happen, but in practice we saw a drop in accuracy when using this strategy without removing those near-duplicates. To avoid them, the trick is to replace the vector of parameters we drew with\n", "``` python\n", "t = max(t, 1-t)\n", "```\n", "The beta distribution with the two parameters equal is symmetric in any case, and this way we ensure that the biggest coefficient always goes with the first image (the non-shuffled batch).\n" ] },
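{ "cell_type": "markdown", "metadata": {}, "source": [ "Before implementing all of this, here is a quick numeric check (on random data, not exported) of the cross-entropy trick above: the loss against a mixed one-hot target really is the same linear combination of the two separate losses, so we never need to one-hot encode anything:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sanity check: CE against a mixed one-hot target == the mix of the two separate CE losses.\n", "out = torch.randn(1, 10)\n", "t, c1, c2 = 0.3, 2, 7\n", "logp = F.log_softmax(out, dim=-1)\n", "mixed_target = t * torch.eye(10)[c1] + (1-t) * torch.eye(10)[c2]\n", "loss_mixed = -(mixed_target * logp[0]).sum()\n", "loss_lin = t * F.nll_loss(logp, tensor([c1])) + (1-t) * F.nll_loss(logp, tensor([c2]))\n", "assert torch.allclose(loss_mixed, loss_lin)" ] },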
{ "cell_type": "markdown", "metadata": {}, "source": [ "In `MixUp` we have to handle loss functions that have a `reduction` attribute (like `nn.CrossEntropyLoss()`). To temporarily use `reduction='none'` with various types of loss functions, without modifying the actual loss function outside of the scope where we need the per-item losses, we create a context manager:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "class NoneReduce():\n", "    def __init__(self, loss_func): \n", "        self.loss_func,self.old_red = loss_func,None\n", "    \n", "    def __enter__(self):\n", "        if hasattr(self.loss_func, 'reduction'):\n", "            self.old_red = getattr(self.loss_func, 'reduction')\n", "            setattr(self.loss_func, 'reduction', 'none')\n", "            return self.loss_func\n", "        else: return partial(self.loss_func, reduction='none')\n", "    \n", "    def __exit__(self, type, value, traceback):\n", "        if self.old_red is not None: setattr(self.loss_func, 'reduction', self.old_red)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Then we can use it in `MixUp`:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "from torch.distributions.beta import Beta\n", "\n", "def unsqueeze(input, dims):\n", "    for dim in listify(dims): input = torch.unsqueeze(input, dim)\n", "    return input\n", "\n", "def reduce_loss(loss, reduction='mean'):\n", "    return loss.mean() if reduction=='mean' else loss.sum() if reduction=='sum' else loss" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "class MixUp(Callback):\n", "    _order = 90 #Runs after normalization and cuda\n", "    def __init__(self, α:float=0.4): self.distrib = Beta(tensor([α]), tensor([α]))\n", "    \n", "    def begin_fit(self): self.old_loss_func,self.run.loss_func = self.run.loss_func,self.loss_func\n", "    \n", "    def begin_batch(self):\n", "        if not self.in_train: return #Only mixup things during training\n", "        λ = self.distrib.sample((self.yb.size(0),)).squeeze().to(self.xb.device)\n", "        λ = torch.stack([λ, 1-λ], 1)\n", "        self.λ = unsqueeze(λ.max(1)[0], (1,2,3))\n", "        shuffle = torch.randperm(self.yb.size(0)).to(self.xb.device)\n", "        xb1,self.yb1 = self.xb[shuffle],self.yb[shuffle]\n", "        self.run.xb = lin_comb(self.xb, xb1, self.λ)\n", "    \n", "    def after_fit(self): self.run.loss_func = self.old_loss_func\n", "    \n", "    def loss_func(self, pred, yb):\n", "        if not self.in_train: return self.old_loss_func(pred, yb)\n", "        with NoneReduce(self.old_loss_func) as loss_func:\n", "            loss1 = loss_func(pred, yb)\n", "            loss2 = loss_func(pred, self.yb1)\n", "        #self.λ is shaped (bs,1,1,1) for the images; flatten it so each item's losses get their own weight\n", "        loss = lin_comb(loss1, loss2, self.λ.squeeze())\n", "        return reduce_loss(loss, getattr(self.old_loss_func, 'reduction', 'mean'))" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nfs = [32,64,128,256,512]" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_learner(nfs, data, lr, layer, loss_func=F.cross_entropy,\n", "                cb_funcs=None, opt_func=optim.SGD, **kwargs):\n", "    model = get_cnn_model(data, nfs, layer, **kwargs)\n", "    init_cnn(model)\n", "    return Learner(model, data, loss_func, lr=lr, cb_funcs=cb_funcs, opt_func=opt_func)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cbfs = [partial(AvgStatsCallback,accuracy),\n", "        CudaCallback,\n", "        ProgressCallback,\n", "        partial(BatchTransformXCallback, norm_imagenette),\n", "        MixUp]" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn = get_learner(nfs, data, 0.4, conv_layer, cb_funcs=cbfs)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.fit(1)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Questions: How does softmax interact with all this? Should we jump straight from mixup to inference?" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Label smoothing" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Another regularization technique that's often used is label smoothing. It's designed to make the model a little bit less certain of its decisions by slightly changing its targets: instead of aiming at 1 for the correct class and 0 for all the others, we aim at `1-ε` for the correct class and spread the remaining `ε` uniformly over all the classes (so each class gets an extra `ε/N`), with `ε` a (small) positive number and N the number of classes. This can be written as:\n", "\n", "$$loss = (1-ε) ce(i) + ε \\sum ce(j) / N$$\n", "\n", "where `ce(x)` is the cross-entropy of `x` (i.e. $-\\log(p_{x})$), and `i` is the correct class." ] },
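{ "cell_type": "markdown", "metadata": {}, "source": [ "To make the formula concrete before wrapping it in a module, here is a throwaway sketch (on a random prediction, not exported) with N=10 classes and ε=0.1:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Label smoothing loss computed by hand: (1-ε)*ce(i) + ε*sum_j ce(j)/N, with i a made-up correct class.\n", "ε, i = 0.1, 3\n", "out = torch.randn(1, 10)\n", "logp = F.log_softmax(out, dim=-1)[0]\n", "smoothed_loss = (1-ε) * (-logp[i]) + ε * (-logp).mean()\n", "smoothed_loss" ] },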
{ "cell_type": "markdown", "metadata": {}, "source": [ "This can be coded in a loss function:" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "[Jump_to lesson 12 video](https://course19.fast.ai/videos/?lesson=12&t=1121)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "class LabelSmoothingCrossEntropy(nn.Module):\n", "    def __init__(self, ε:float=0.1, reduction='mean'):\n", "        super().__init__()\n", "        self.ε,self.reduction = ε,reduction\n", "    \n", "    def forward(self, output, target):\n", "        c = output.size()[-1]\n", "        log_preds = F.log_softmax(output, dim=-1)\n", "        loss = reduce_loss(-log_preds.sum(dim=-1), self.reduction)\n", "        nll = F.nll_loss(log_preds, target, reduction=self.reduction)\n", "        return lin_comb(loss/c, nll, self.ε)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Note: we implement the various `reduction` modes so that it plays nicely with `MixUp` afterwards." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cbfs = [partial(AvgStatsCallback,accuracy),\n", "        CudaCallback,\n", "        ProgressCallback,\n", "        partial(BatchTransformXCallback, norm_imagenette)]" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn = get_learner(nfs, data, 0.4, conv_layer, cb_funcs=cbfs, loss_func=LabelSmoothingCrossEntropy())" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "learn.fit(1)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "And we can check that our loss function's `reduction` attribute hasn't been changed outside of the training loop:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "assert learn.loss_func.reduction == 'mean'" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Export" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!./notebook2script.py 10b_mixup_label_smoothing.ipynb" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }