{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Training tweaks for an RNN" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.callbacks.rnn import * " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from fastai.gen_doc.nbdoc import *\n", "from fastai.callbacks.rnn import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This callback regroups a few tweaks to properly train RNNs. They all come from [this article](https://arxiv.org/abs/1708.02182) by Stephen Merity et al.\n", "\n", "**Adjusting the learning rate to sequence length:** since we're modifying the bptt at each batch, sometimes by a lot (we divide it by 2 randomly), the learning rate has to be adjusted to take this into account, mainly being multiplied by the ratio `seq_len/bptt`.\n", "\n", "**Activation Regularization:** on top of weight decay, we apply another form of regularization that is pretty similar and consists in adding to the loss a scaled factor of the sum of all the squares of the ouputs (with dropout applied) of the various layers of the RNN. Intuitively, weight decay tries to get the network to learn small weights, this is to get the model to learn to produce smaller activations.\n", "\n", "**Temporal Activation Regularization:** lastly, we add to the loss a scaled factor of the sum of the squares of the `h_(t+1) - h_t`, where `h_i` is the output (before dropout is applied) of one layer of the RNN at the time step i (word i of the sentence). This will encourage the model to produce activations that don’t vary too fast between two consecutive words of the sentence. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "
{ "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "class RNNTrainer[source]RNNTrainer(`learn`:[`Learner`](/basic_train.html#Learner), `bptt`:`int`, `alpha`:`float`=`0.0`, `beta`:`float`=`0.0`, `adjust`:`bool`=`True`) :: [`Callback`](/callback.html#Callback)" ], "text/plain": [ "<IPython.core.display.Markdown object>" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNTrainer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "on_epoch_begin[source]on_epoch_begin(`kwargs`)\n", "\n", "At the beginning of each epoch." ], "text/plain": [ "<IPython.core.display.Markdown object>" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNTrainer.on_epoch_begin)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "on_loss_begin[source]on_loss_begin(`last_output`:`Tuple`\\[`Tensor`, `Tensor`, `Tensor`\\], `kwargs`)" ], "text/plain": [ "<IPython.core.display.Markdown object>" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNTrainer.on_loss_begin)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [ { "data": { "text/markdown": [ "on_backward_begin[source]on_backward_begin(`last_loss`:`Rank0Tensor`, `last_input`:`Tensor`, `kwargs`)" ], "text/plain": [ "<IPython.core.display.Markdown object>" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(RNNTrainer.on_backward_begin)" ] },
"on_epoch_begin[source]on_epoch_begin(`kwargs`)\n",
"\n",
"At the beginning of each epoch. "
],
"text/plain": [
"