{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Mixup data augmentation" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.callbacks.mixup import *\n", "from fastai.vision import *\n", "from fastai import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What is Mixup?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This module contains the implementation of a data augmentation technique called [Mixup](https://arxiv.org/abs/1710.09412). It is extremely effective at regularizing models in computer vision (we used it to bring our time to train CIFAR10 to 94% accuracy on one GPU down to 6 minutes).\n", "\n", "As the name kind of suggests, the authors of the mixup article propose training the model on a mix of the pictures of the training set. On CIFAR10, for instance, instead of feeding the model the raw images, we take two (which may or may not be in the same class) and compute a linear combination of them: in terms of tensors, it’s\n", "\n", "`new_image = t * image1 + (1-t) * image2`\n", "\n", "where `t` is a float between 0 and 1. Then the target we assign to that image is the same combination of the original targets:\n", "\n", "`new_target = t * target1 + (1-t) * target2`\n", "\n", "assuming your targets are one-hot encoded (which isn’t usually the case in PyTorch). And that’s all there is to it (a minimal code sketch of this mixing is shown just before the baseline training below).\n", "\n", "![mixup](imgs/mixup.png)\n", "\n", "Dog or cat? The right answer here is 70% dog and 30% cat!\n", "\n", "As the picture above shows, it’s a bit hard for the human eye to make sense of the pictures obtained (although we do see the shapes of a dog and a cat), but somehow it makes a lot of sense to the model, which trains more efficiently. The final loss (training or validation) will be higher than when training without mixup, even if the accuracy is far better, which means that a model trained like this will make predictions that are a bit less confident." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Basic Training" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To test this method, we will first build a [`simple_cnn`](/layers.html#simple_cnn) and train it like we did with [`basic_train`](/basic_train.html#basic_train) so we can compare its results with a network trained with Mixup."
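] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before the baseline training, here is a minimal sketch of the mixing formula described above (illustrative only, not the library’s implementation; the images, targets and the value of `t` below are made-up placeholders).\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "\n", "# Placeholder images (3 x 32 x 32) and one-hot targets (10 classes), for illustration only\n", "image1, image2 = torch.rand(3, 32, 32), torch.rand(3, 32, 32)\n", "target1, target2 = torch.zeros(10), torch.zeros(10)\n", "target1[3], target2[7] = 1., 1.\n", "\n", "t = 0.7  # mixing coefficient; drawn from a beta distribution in practice\n", "new_image = t * image1 + (1 - t) * image2\n", "new_target = t * target1 + (1 - t) * target2  # e.g. 70% of one class, 30% of the other"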
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = untar_data(URLs.MNIST_SAMPLE)\n", "data = ImageDataBunch.from_folder(path)\n", "model = simple_cnn((3,16,16,2))\n", "learn = Learner(data, model, metrics=[accuracy])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(HBox(children=(IntProgress(value=0, max=8), HTML(value='0.00% [0/8 00:00<00:00]'))), HTML(value…" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Total time: 00:18\n", "epoch train loss valid loss accuracy\n", "0 0.154147 0.132811 0.953876 (00:02)\n", "1 0.114623 0.088874 0.970069 (00:02)\n", "2 0.087302 0.082290 0.975466 (00:02)\n", "3 0.055337 0.041746 0.986261 (00:02)\n", "4 0.046112 0.053610 0.980864 (00:02)\n", "5 0.036025 0.032832 0.989205 (00:02)\n", "6 0.034067 0.038534 0.985770 (00:02)\n", "7 0.029102 0.024582 0.992149 (00:02)\n", "\n" ] } ], "source": [ "learn.fit(8)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Mixup implementation in the library" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the original article, the authors suggested four things:\n", "\n", " 1. Create two separate dataloaders and draw a batch from each at every iteration to mix them up\n", " 2. Draw a value `t` following a beta distribution with parameter alpha (0.4 is suggested in their article)\n", " 3. Mix up the two batches with the same value `t`\n", " 4. Use one-hot encoded targets\n", "\n", "The implementation of this module is based on these suggestions, modified where experiments showed a positive impact on performance." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The authors suggest using the beta distribution with both parameters equal to alpha. Why do they suggest this? Well, it looks like this:\n", "\n", "![betadist](imgs/betadist-mixup.png)\n", "\n", "so there is a very high probability of picking values close to 0 or 1 (in which case the mixed image is almost entirely from one category) and a roughly constant probability of picking something in the middle (0.33 is about as likely as 0.5, for instance).\n", "\n", "While this works very well, it’s not the fastest way we can do this, and this is the first suggestion we will adjust. The main thing that slows down this process is requiring two different batches at every iteration (which means loading twice the number of images and applying the other data augmentation functions to them). To avoid this slowdown, we can be a little smarter and mix a batch with a shuffled version of itself (this way the images mixed up are still different).\n", "\n", "Using the same value of `t` for the whole batch is another suggestion we will modify. In our experiments, we noticed that the model can train faster if we draw a different `t` for every image in the batch (both options reach the same accuracy; one just gets there more slowly). A minimal sketch of this shuffle-based, per-image mixing is shown below, followed by one last trick to deal with duplicates.\n", "\n", "
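This sketch is illustrative, not the library’s code; the batch `x` (images) and the one-hot targets `y` below are placeholder assumptions.\n", "\n", "```python\n", "import torch\n", "\n", "# Placeholder batch: 64 images of shape 3 x 32 x 32 and one-hot targets over 10 classes\n", "x = torch.rand(64, 3, 32, 32)\n", "y = torch.eye(10)[torch.randint(0, 10, (64,))]\n", "\n", "alpha, bs = 0.4, x.size(0)\n", "t = torch.distributions.Beta(alpha, alpha).sample((bs,))  # one t per image\n", "t = torch.max(t, 1 - t)  # avoid duplicates (see the trick explained below)\n", "shuffle = torch.randperm(bs)  # mix the batch with a shuffled version of itself\n", "x_mixed = t.view(bs, 1, 1, 1) * x + (1 - t).view(bs, 1, 1, 1) * x[shuffle]\n", "y_mixed = t.view(bs, 1) * y + (1 - t).view(bs, 1) * y[shuffle]\n", "```\n", "\n", "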
The last thing we have to handle is that this strategy can create duplicates: let’s say we decide to mix `image0` with `image1` and then `image1` with `image0`, and that we draw `t=0.1` for the first and `t=0.9` for the second. Then\n", "\n", "`image0 * 0.1 + shuffle0 * (1-0.1) = image0 * 0.1 + image1 * 0.9`\n", "\n", "and\n", "\n", "`image1 * 0.9 + shuffle1 * (1-0.9) = image1 * 0.9 + image0 * 0.1`\n", "\n", "will be the same. Of course, we have to be a bit unlucky for this to happen, but in practice we saw a drop in accuracy when using this strategy without removing those duplicates. To avoid them, the trick is to replace the vector of parameters `t` we drew with:\n", "\n", "`t = max(t, 1-t)`\n", "\n", "The beta distribution with the two parameters equal is symmetric in any case, and this way we ensure that the biggest coefficient is always applied to the first image (the non-shuffled batch)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Adding Mixup to the Mix" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will add [`MixUpCallback`](/callbacks.mixup.html#MixUpCallback) to our Learner so that it modifies our input and target accordingly. The [`mixup`](/train.html#mixup) function does that for us behind the scenes, with a few other tweaks detailed below." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": false }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(HBox(children=(IntProgress(value=0, max=8), HTML(value='0.00% [0/8 00:00<00:00]'))), HTML(value…" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Total time: 00:22\n", "epoch train loss valid loss accuracy\n", "0 0.372338 0.166498 0.951423 (00:02)\n", "1 0.349636 0.145830 0.965653 (00:02)\n", "2 0.333157 0.112585 0.977920 (00:02)\n", "3 0.318903 0.104405 0.982336 (00:02)\n", "4 0.317793 0.111996 0.988224 (00:02)\n", "5 0.315467 0.095693 0.986261 (00:02)\n", "6 0.312827 0.091224 0.990186 (00:02)\n", "7 0.309499 0.088683 0.991168 (00:02)\n", "\n" ] } ], "source": [ "model = simple_cnn((3,16,16,2))\n", "learner = Learner(data, model, metrics=[accuracy]).mixup()\n", "learner.fit(8)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Training the net with Mixup improves the best accuracy. Note that the validation loss is higher than without MixUp, because the model makes less confident predictions: without mixup, most predicted probabilities are very close to 0 or 1, whereas the model trained with MixUp will give predictions that are more nuanced. Be sure to know which quantity you want to optimize (lower loss or better accuracy) before using it." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
class MixUpCallback[source]
\n", "\n", "> MixUpCallback(`learner`:[`Learner`](/basic_train.html#Learner), `alpha`:`float`=`0.4`, `stack_x`:`bool`=`False`, `stack_y`:`bool`=`True`) :: [`Callback`](/callback.html#Callback)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MixUpCallback, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a [`Callback`](/callback.html#Callback) for mixup on `learn` with a parameter `alpha` for the beta distribution. `stack_x` and `stack_y` determine whether we stack our inputs/targets with the vector of lambdas drawn or take the linear combination (in general, we stack the inputs or outputs when they correspond to categories or classes and take the linear combination otherwise)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
on_batch_begin[source]
\n", "\n", "> on_batch_begin(`last_input`, `last_target`, `train`, `kwargs`)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MixUpCallback.on_batch_begin, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Draws a vector of lambdas following a beta distribution with `self.alpha` and applies mixup to `last_input` and `last_target` according to `self.stack_x` and `self.stack_y`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dealing with the loss" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We often have to modify the loss so that it is compatible with Mixup: PyTorch was very careful to avoid one-hot encoding targets when it could, so it seems a bit of a drag to undo this. Fortunately for us, if the loss is a classic [cross-entropy](https://pytorch.org/docs/stable/nn.html#torch.nn.functional.cross_entropy), we have\n", "\n", "`loss(output, new_target) = t * loss(output, target1) + (1-t) * loss(output, target2)`\n", "\n", "so we won’t one-hot encode anything: we just compute those two losses and take the same linear combination.\n", "\n", "The following class is used to adapt the loss to mixup. Note that the [`mixup`](/train.html#mixup) function will use it to change the `Learner.loss_func` if necessary." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
class MixUpLoss[source]
\n", "\n", "> MixUpLoss(`crit`) :: [`Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MixUpLoss, doc_string=False, title_level=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a loss function from `crit` that is compatible with MixUp." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
forward[source]
\n", "\n", "> forward(`output`, `target`, `reduction`=`'elementwise_mean'`)\n", "\n", "Defines the computation performed at every call. Should be overridden by all subclasses.\n", "\n", ".. note::\n", " Although the recipe for forward pass needs to be defined within\n", " this function, one should call the :class:`Module` instance afterwards\n", " instead of this since the former takes care of running the\n", " registered hooks while the latter silently ignores them. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MixUpLoss.forward)" ] } ], "metadata": { "jekyll": { "keywords": "fastai", "summary": "Implementation of mixup", "title": "callbacks.mixup" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }