{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Mixed precision training" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This module allows the forward and backward passes of your neural net to be done in fp16 (also known as *half precision*). This is particularly important if you have an NVIDIA GPU with [tensor cores](https://www.nvidia.com/en-us/data-center/tensorcore/), since it can speed up your training by 200% or more." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.callbacks.fp16 import *\n", "from fastai import *\n", "from fastai.vision import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To train your model in mixed precision you just have to call [`Learner.to_fp16`](/train.html#to_fp16), which converts the model and modifies the existing [`Learner`](/basic_train.html#Learner) to add [`MixedPrecision`](/callbacks.fp16.html#MixedPrecision)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

to_fp16[source]

\n", "\n", "> to_fp16(`learn`:[`Learner`](/basic_train.html#Learner), `loss_scale`:`float`=`512.0`, `flat_master`:`bool`=`False`) → [`Learner`](/basic_train.html#Learner)\n", "\n", "Transform `learn` in FP16 precision. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Learner.to_fp16)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(HBox(children=(IntProgress(value=0, max=1), HTML(value='0.00% [0/1 00:00<00:00]'))), HTML(value…" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Total time: 00:02\n", "epoch train loss valid loss accuracy\n", "1 0.172265 0.146274 0.947498 (00:02)\n", "\n" ] } ], "source": [ "path = untar_data(URLs.MNIST_SAMPLE)\n", "data = ImageDataBunch.from_folder(path)\n", "model = simple_cnn((3,16,16,2))\n", "learn = Learner(data, model, metrics=[accuracy]).to_fp16()\n", "learn.fit_one_cycle(1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Details about mixed precision training are available in [NVIDIA's documentation](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html). We will just summarize the basics here.\n", "\n", "The only parameter you may want to tweak is `loss_scale`. It is used to scale the loss up so that it doesn't underflow in fp16 and lose accuracy; the scaling is undone once the gradients have been converted back to fp32 for the final update. The default `512` generally works well. You can also store the master parameters as a single flattened tensor with `flat_master=True`, though in our testing the difference was negligible.\n", "\n", "Internally, the callback ensures that all model parameters (except batchnorm layers, which require fp32) are converted to fp16, and an fp32 copy is also saved. The fp32 copy (the `master` parameters) is what is used for actually updating with the optimizer; the fp16 parameters are used for calculating gradients. This helps avoid underflow with small learning rates.\n", "\n", "All of this is implemented by the following Callback (a rough sketch of the scheme in plain PyTorch is shown first, for reference)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "&#13;
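Before the callback itself, here is a rough sketch of the scheme just described, written in plain PyTorch. It is only an illustration of the idea, not the fastai implementation; the names `fp16_train_step`, `model_params` and `master_params` are made up for the example, and `opt` is assumed to be an optimizer built on the fp32 master parameters.

```python
def fp16_train_step(xb, yb, model, loss_func, opt, model_params, master_params, loss_scale=512.0):
    'One hypothetical training step with fp16 weights, an fp32 master copy and loss scaling.'
    out = model(xb.half()).float()            # forward pass in fp16, loss computed in fp32
    loss = loss_func(out, yb)
    (loss * loss_scale).backward()            # scale the loss so fp16 gradients do not underflow
    for master_p, model_p in zip(master_params, model_params):
        if model_p.grad is not None:          # move gradients to fp32 and undo the scaling
            master_p.grad = model_p.grad.detach().float() / loss_scale
    opt.step()                                # the optimizer only ever updates the fp32 copy
    opt.zero_grad(); model.zero_grad()
    for master_p, model_p in zip(master_params, model_params):
        model_p.data.copy_(master_p.data)     # copy the updated fp32 weights back into the fp16 model
    return loss.detach()
```

The real callback splits these steps across the events documented below.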

class MixedPrecision[source]

\n", "\n", "> MixedPrecision(`learn`:[`Learner`](/basic_train.html#Learner), `loss_scale`:`float`=`512.0`, `flat_master`:`bool`=`False`) :: [`Callback`](/callback.html#Callback)\n", "\n", "Callback that handles mixed-precision training. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MixedPrecision)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You don't have to call the following functions yourself - they're called by the callback framework automatically. They're just documented here so you can see exactly what the callback is doing." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
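Note that `loss_scale` and `flat_master` here mirror the arguments of [`Learner.to_fp16`](/train.html#to_fp16) shown earlier, so in practice you would tweak them through that call rather than by constructing the callback yourself. A small usage sketch (the values are purely illustrative):

```python
from fastai import *
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = Learner(data, simple_cnn((3,16,16,2)), metrics=[accuracy])
# Try a larger loss scale and a flattened master tensor; the defaults
# (loss_scale=512.0, flat_master=False) are usually fine.
learn = learn.to_fp16(loss_scale=1024.0, flat_master=True)
learn.fit_one_cycle(1)
```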

on_backward_begin[source]

\n", "\n", "> on_backward_begin(`last_loss`:`Rank0Tensor`, `kwargs`:`Any`) → `Rank0Tensor`\n", "\n", "Scale gradients up by `loss_scale` to prevent underflow. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MixedPrecision.on_backward_begin)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
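In other words, the loss handed back to the training loop is multiplied by `loss_scale`, so every gradient produced by the following `backward()` call is scaled by the same factor. A minimal sketch of the idea (not the actual callback code):

```python
def scale_loss(last_loss, loss_scale=512.0):
    # Multiplying the loss multiplies every gradient computed from it, keeping
    # small fp16 gradients above the underflow threshold (about 6e-8).
    return last_loss * loss_scale
```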

on_backward_end[source]

\n", "\n", "> on_backward_end(`kwargs`:`Any`)\n", "\n", "Convert the gradients back to FP32 and divide them by the scale. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MixedPrecision.on_backward_end)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
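Roughly, this step hands the scaled fp16 gradients over to the fp32 master parameters, as in the sketch earlier (`model_params` and `master_params` are illustrative names, not fastai API):

```python
def grads_to_master(model_params, master_params, loss_scale=512.0):
    # Copy each fp16 gradient into the matching fp32 master parameter,
    # dividing by loss_scale to undo the scaling applied before backward().
    for model_p, master_p in zip(model_params, master_params):
        if model_p.grad is not None:
            master_p.grad = model_p.grad.detach().float() / loss_scale
```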

on_loss_begin[source]

\n", "\n", "> on_loss_begin(`last_output`:`Tensor`, `kwargs`:`Any`) → `Tensor`\n", "\n", "Convert half precision output to FP32 to avoid reduction overflow. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MixedPrecision.on_loss_begin)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
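The cast matters because reductions such as the sum inside a softmax or cross-entropy can exceed fp16's largest finite value (65504). A minimal sketch of the idea:

```python
def loss_in_fp32(last_output, target, loss_func):
    # Cast the fp16 activations up to fp32 before the loss so that the
    # reduction inside loss_func cannot overflow half precision.
    return loss_func(last_output.float(), target)
```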

on_step_end[source]

\n", "\n", "> on_step_end(`kwargs`:`Any`)\n", "\n", "Update the params from master to model and zero grad. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MixedPrecision.on_step_end)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
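After the optimizer has stepped on the fp32 master copy, the new weights are mirrored back into the fp16 model and the old gradients are cleared, roughly like this (illustrative names again, not the fastai code):

```python
def master_to_model(model_params, master_params):
    # Copy the freshly updated fp32 master weights back into the fp16 model
    # (copy_ casts down to half) and clear the stale gradients.
    for model_p, master_p in zip(model_params, master_params):
        model_p.data.copy_(master_p.data)
        if model_p.grad is not None:
            model_p.grad.zero_()
```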

on_train_begin[source]

\n", "\n", "> on_train_begin(`kwargs`:`Any`)\n", "\n", "Ensure everything is in half precision mode. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MixedPrecision.on_train_begin)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
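This is the conversion described in the overview: weights go to fp16 while batchnorm layers stay in fp32. A rough sketch of such a conversion (not the fastai helper itself):

```python
import torch.nn as nn

def to_half_keep_bn(model: nn.Module) -> nn.Module:
    # Convert the whole model to fp16, then put batchnorm layers back in
    # fp32, since their running statistics need the extra precision.
    model.half()
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            module.float()
    return model
```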

on_train_end[source]

\n", "\n", "> on_train_end(`kwargs`:`Any`)\n", "\n", "Remove half precision transforms added at `on_train_begin`. " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(MixedPrecision.on_train_end)" ] } ], "metadata": { "jekyll": { "keywords": "fastai", "summary": "Training in mixed precision implementation", "title": "callbacks.fp16" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }