{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Training tweaks for an RNN" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.callbacks.rnn import * " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.callbacks.rnn import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This callback regroups a few tweaks to properly train RNNs. They all come from [this article](https://arxiv.org/abs/1708.02182) by Stephen Merity et al.\n", "\n", "**Adjusting the learning rate to sequence length:** since we're modifying the bptt at each batch, sometimes by a lot (we divide it by 2 randomly), the learning rate has to be adjusted to take this into account, mainly being multiplied by the ratio `seq_len/bptt`.\n", "\n", "**Activation Regularization:** on top of weight decay, we apply another form of regularization that is pretty similar and consists in adding to the loss a scaled factor of the sum of all the squares of the ouputs (with dropout applied) of the various layers of the RNN. Intuitively, weight decay tries to get the network to learn small weights, this is to get the model to learn to produce smaller activations.\n", "\n", "**Temporal Activation Regularization:** lastly, we add to the loss a scaled factor of the sum of the squares of the `h_(t+1) - h_t`, where `h_i` is the output (before dropout is applied) of one layer of the RNN at the time step i (word i of the sentence). This will encourage the model to produce activations that don’t vary too fast between two consecutive words of the sentence. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

class RNNTrainer[source]


\n", "\n", "> RNNTrainer(`learn`:[`Learner`](/basic_train.html#Learner), `bptt`:`int`, `alpha`:`float`=`0.0`, `beta`:`float`=`0.0`, `adjust`:`bool`=`True`) :: [`Callback`](/callback.html#Callback)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNTrainer, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a [`Callback`](/callback.html#Callback) that adds to `learn` the RNN tweaks for training on data with `bptt`. `alpha` is the scale for AR, `beta` is the scale for TAR. If `adjust` is `False`, the learning rate isn't adjusted to the sequence length." ] },
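{ "cell_type": "markdown", "metadata": {}, "source": [ "For example, to attach the callback to an existing `Learner` by hand (a sketch: `learn` is assumed to already wrap an AWD-LSTM-style language model, and `alpha=2., beta=1.` are the values suggested by Merity et al.; fastai's language-model learners normally attach this callback for you):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# assuming `learn` is an existing language-model Learner\n", "learn.callbacks.append(RNNTrainer(learn, bptt=70, alpha=2., beta=1.))\n", "learn.fit_one_cycle(1, 1e-2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "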

on_epoch_begin[source]

\n", "\n", "> on_epoch_begin(`kwargs`)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNTrainer.on_epoch_begin, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Reset the underlying model before training." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "

on_loss_begin[source]


\n", "\n", "> on_loss_begin(`last_output`:`Tuple`\[`Tensor`, `Tensor`, `Tensor`\], `kwargs`)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNTrainer.on_loss_begin, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The fastai RNNs return a `last_output` that is a tuple of three elements: the true output (which is returned to the loss function) and the hidden states before and after dropout (which are saved internally for `on_backward_begin`)." ] },
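{ "cell_type": "markdown", "metadata": {}, "source": [ "Schematically (a sketch with illustrative names, not the fastai source):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def on_loss_begin_sketch(last_output):\n", "    # keep the hidden states (before and after dropout) for the AR/TAR\n", "    # penalties, and pass only the true output on to the loss function\n", "    out, raw_hiddens, dropped_hiddens = last_output\n", "    return out" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "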

on_backward_begin[source]


\n", "\n", "> on_backward_begin(`last_loss`:`Rank0Tensor`, `last_input`:`Tensor`, `kwargs`)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNTrainer.on_backward_begin, doc_string=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Adjusts the learning rate to the sequence length of `last_input` (multiplying it by the ratio `seq_len/bptt`), then adds the AR and TAR penalties to `last_loss`." ] },
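{ "cell_type": "markdown", "metadata": {}, "source": [ "The learning-rate adjustment amounts to the following (`adjust_lr` is a hypothetical helper shown only to make the formula concrete, not a fastai function):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def adjust_lr(base_lr, seq_len, bptt):\n", "    # scale the learning rate by the ratio of actual to base sequence length\n", "    return base_lr * seq_len / bptt\n", "\n", "adjust_lr(0.01, 35, 70)  # a batch at half the bptt trains at half the LR" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Undocumented Methods - Methods moved below this line will intentionally be hidden" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New Methods - Please document or move to the undocumented section" ] }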
], "metadata": { "jekyll": { "keywords": "fastai", "summary": "Implementation of a callback for RNN training", "title": "callbacks.rnn" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }