{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Mixed precision training" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This module allows the forward and backward passes of your neural net to be done in fp16 (also known as *half precision*). This is particularly important if you have an NVIDIA GPU with [tensor cores](https://www.nvidia.com/en-us/data-center/tensorcore/), since it can speed up your training by 200% or more." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.callbacks.fp16 import *\n", "from fastai.vision import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To train your model in mixed precision you just have to call [`Learner.to_fp16`](/train.html#to_fp16), which converts the model and modifies the existing [`Learner`](/basic_train.html#Learner) to add [`MixedPrecision`](/callbacks.fp16.html#MixedPrecision)." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
#### `to_fp16`

`to_fp16`(**`learn`**:[`Learner`](/basic_train.html#Learner), **`loss_scale`**:`float`=***`None`***, **`max_noskip`**:`int`=***`1000`***, **`dynamic`**:`bool`=***`True`***, **`clip`**:`float`=***`None`***, **`flat_master`**:`bool`=***`False`***, **`max_scale`**:`float`=***`16777216`***, **`loss_fp32`**:`bool`=***`True`***) → [`Learner`](/basic_train.html#Learner)

Put `learn` in FP16 precision mode.
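For example, a short run like the sketch below produces output like the table that follows. This is an illustrative setup, not prescribed by this page: the MNIST_SAMPLE dataset, the `simple_cnn` architecture, and the single `fit_one_cycle` epoch are all assumptions chosen to keep the example small.

```python
from fastai.vision import *

# Illustrative example: train a small CNN in mixed precision.
# Dataset and architecture are assumptions, not mandated by the API.
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
model = simple_cnn((3, 16, 16, 2))
learn = Learner(data, model, metrics=[accuracy]).to_fp16()  # enable mixed precision
learn.fit_one_cycle(1)
```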
| epoch | train_loss | valid_loss | accuracy | time  |
|-------|------------|------------|----------|-------|
| 0     | 0.126117   | 0.117945   | 0.956820 | 00:03 |
## class `MixedPrecision`

`MixedPrecision`(**`learn`**:[`Learner`](/basic_train.html#Learner), **`loss_scale`**:`float`=***`None`***, **`max_noskip`**:`int`=***`1000`***, **`dynamic`**:`bool`=***`True`***, **`clip`**:`float`=***`None`***, **`flat_master`**:`bool`=***`False`***, **`max_scale`**:`float`=***`16777216`***, **`loss_fp32`**:`bool`=***`True`***) :: [`LearnerCallback`](/basic_train.html#LearnerCallback)

Callback that handles mixed-precision training. You normally don't create it yourself; [`to_fp16`](/train.html#to_fp16) adds it to the [`Learner`](/basic_train.html#Learner) for you. Its hooks, documented below, implement the loss-scaling loop sketched next.
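To make the division of labor among the hooks concrete, here is a self-contained plain-PyTorch sketch of one training step with a static loss scale. Every name in it is illustrative; the callback's actual implementation lives in `fastai.callbacks.fp16` and also supports dynamic scaling, gradient clipping, and a flattened master copy.

```python
import torch

# Illustrative sketch of one mixed-precision step (assumed names, static scale).
loss_scale = 512.0

w = torch.randn(4, dtype=torch.float16, requires_grad=True)  # fp16 "model" weight
master = w.detach().float().requires_grad_(True)             # fp32 master copy (on_train_begin)
opt = torch.optim.SGD([master], lr=0.1)                      # optimizer steps the fp32 copy

x = torch.randn(4, dtype=torch.float16)
out = (w * x).sum().float()                 # on_loss_begin: compute the loss in fp32
loss = (out - 1.0) ** 2
(loss * loss_scale).backward()              # on_backward_begin: scale the loss before backward

master.grad = w.grad.float() / loss_scale   # on_backward_end: unscale into the fp32 master
opt.step()

with torch.no_grad():
    w.copy_(master.half())                  # on_step_end: refresh the fp16 weight...
    w.grad = None                           # ...and clear its gradient for the next batch
```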
#### `on_backward_begin`

`on_backward_begin`(**`last_loss`**:`Rank0Tensor`, **\*\*`kwargs`**:`Any`) → `Rank0Tensor`

Scale the loss by `loss_scale` before the backward pass to prevent gradient underflow.
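The reason scaling is needed: fp16 flushes values below roughly 6e-8 to zero, so small gradients vanish unless the loss (and hence every gradient) is multiplied up first and divided back down in fp32. A two-line demonstration:

```python
import torch

# fp16 underflow: 1e-8 is representable in fp32 but flushes to zero in fp16.
g = torch.tensor(1e-8)
print(g.half())                          # tensor(0., dtype=torch.float16)
print((g * 512).half().float() / 512)    # ~1e-8: scaled up, the value survives the fp16 trip
```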
#### `on_backward_end`

`on_backward_end`(**\*\*`kwargs`**:`Any`)

Convert the gradients back to FP32 and divide them by the scale.
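With `dynamic=True`, this step also guards against overflow: if any gradient came back as inf or NaN, the optimizer step is skipped and the scale halved; after `max_noskip` consecutive clean batches the scale is doubled again, up to `max_scale`. A hedged sketch of that bookkeeping, with assumed names and policy, not fastai's exact code:

```python
import torch

# Hedged sketch of a dynamic loss-scale update (assumed names and policy).
def update_scale(grads, loss_scale, noskip, max_noskip=1000, max_scale=2**24):
    if any(torch.isinf(g).any() or torch.isnan(g).any() for g in grads):
        return loss_scale / 2, 0, True                    # overflow: halve and skip the step
    noskip += 1
    if noskip >= max_noskip:                              # long stable run: try a larger scale
        return min(loss_scale * 2, max_scale), 0, False
    return loss_scale, noskip, False

scale, noskip, skip = update_scale([torch.tensor([float('inf')])], 1024.0, 0)
print(scale, skip)  # 512.0 True
```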
#### `on_loss_begin`

`on_loss_begin`(**`last_output`**:`Tensor`, **\*\*`kwargs`**:`Any`) → `Tensor`

Convert the half precision output to FP32 to avoid reduction overflow.
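The upcast matters because fp16 tops out at 65504, so the reductions inside a loss function can overflow even when every individual value is modest. For instance:

```python
import torch

# fp16 overflow in a reduction: adding two representable fp16 numbers
# already exceeds the fp16 maximum of 65504; the fp32 sum is exact.
big = torch.tensor(65504.0, dtype=torch.float16)
print(big + big)                        # tensor(inf, dtype=torch.float16)
x = torch.ones(100_000, dtype=torch.float16)
print(x.float().sum())                  # tensor(100000.) — safe once upcast to fp32
```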
#### `on_step_end`

`on_step_end`(**\*\*`kwargs`**:`Any`)

Update the model parameters from the master copies and zero the gradients.
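A hedged sketch of this copy-back, with assumed names: once the optimizer has stepped the fp32 masters, the fp16 model weights are refreshed from them so the next forward pass sees the update.

```python
import torch

# Illustrative only (assumed names): refresh the fp16 weight from its
# post-step fp32 master and clear the stale gradient.
w = torch.zeros(3, dtype=torch.float16, requires_grad=True)
master = torch.full((3,), 0.25)     # pretend the optimizer just produced this value
with torch.no_grad():
    w.copy_(master.half())          # master -> model
    w.grad = None                   # zero/clear the model gradient
print(w)                            # tensor([0.2500, 0.2500, 0.2500], dtype=torch.float16, ...)
```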
#### `on_train_begin`

`on_train_begin`(**\*\*`kwargs`**:`Any`)

Prepare the master model: create FP32 copies of the model's parameters for the optimizer to update.
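A sketch of what that preparation amounts to, with assumed names: the model's parameters stay in fp16 while detached fp32 copies are handed to the optimizer (with `flat_master=True`, the copies are flattened into a single fp32 tensor instead).

```python
import torch
import torch.nn as nn

# Hedged sketch (assumed names): fp16 model parameters paired with detached
# fp32 master copies that the optimizer will actually update.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()).half()
master_params = [p.detach().float().requires_grad_(True)
                 for p in model.parameters()]
opt = torch.optim.SGD(master_params, lr=0.1)
```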