{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "from fastai2.basics import *" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nbdev.showdoc import *" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#default_exp callback.hook" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Model hooks\n", "\n", "> Callback and helper functions to add hooks in models" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from fastai2.test_utils import *" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What are hooks?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Hooks are functions you can attach to a particular layer in your model, and that will be executed in the forward pass (for forward hooks) or backward pass (for backward hooks). We begin with an introduction to hooks, but you can jump straight to `HookCallback` if you want to implement one quickly (and read the `ActivationStats` example that follows it).\n", "\n", "Forward hooks are functions that take three arguments: the layer they're applied to, the input of that layer and the output of that layer." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Linear(in_features=5, out_features=3, bias=True) (tensor([[ 1.5469e+00, -8.8636e-01, 5.0203e-01, 1.5994e-01, 1.2272e+00],\n", " [ 4.8025e-01, 7.8592e-04, -1.0296e+00, -1.9297e+00, 1.2433e-01],\n", " [ 7.4777e-01, -1.6163e-02, -2.2598e+00, -9.2172e-01, 1.4019e+00],\n", " [ 3.3838e-01, -7.0636e-01, 1.6084e-01, 7.3097e-02, 2.9105e-02]]),) tensor([[ 0.1569, -0.0761, -0.3640],\n", " [ 0.0530, 0.4646, 0.8272],\n", " [ 0.7636, 1.3767, 0.9218],\n", " [ 0.0565, -0.4380, -0.1608]], grad_fn=)\n" ] } ], "source": [ "tst_model = nn.Linear(5,3)\n", "def example_forward_hook(m,i,o): print(m,i,o)\n", "\n", "x = torch.randn(4,5)\n", "hook = tst_model.register_forward_hook(example_forward_hook)\n", "y = tst_model(x)\n", "hook.remove()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Backward hooks are functions that take three arguments: the layer they're applied to, the gradients of the loss with respect to the input, and the gradients with respect to the output." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Linear(in_features=5, out_features=3, bias=True) (tensor([ 0.0418, 0.2453, -0.5478]), None, tensor([[-0.2083, -0.0263, -0.3125],\n", " [-0.3270, 0.4916, -0.7432],\n", " [-0.1454, -0.2618, -0.4977],\n", " [ 0.0569, -0.1084, -0.4065],\n", " [ 0.2697, 0.0396, -0.0799]])) (tensor([[ 0.0223, 0.2184, -0.0859],\n", " [ 0.0111, -0.0302, -0.2742],\n", " [-0.1179, 0.0389, -0.2361],\n", " [ 0.1264, 0.0182, 0.0485]]),)\n" ] } ], "source": [ "def example_backward_hook(m,gi,go): print(m,gi,go)\n", "hook = tst_model.register_backward_hook(example_backward_hook)\n", "\n", "x = torch.randn(4,5)\n", "y = tst_model(x)\n", "loss = y.pow(2).mean()\n", "loss.backward()\n", "hook.remove()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Hooks can change the input/output of a layer, or the gradients, and print values or shapes. If you want to store something related to these inputs/outputs, it's best to associate your hook with a class, so that it can keep that information in the state of an instance of that class." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Hook -" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "@docs\n", "class Hook():\n", "    \"Create a hook on `m` with `hook_func`.\"\n", "    def __init__(self, m, hook_func, is_forward=True, detach=True, cpu=False, gather=False):\n", "        store_attr(self,'hook_func,detach,cpu,gather')\n", "        f = m.register_forward_hook if is_forward else m.register_backward_hook\n", "        self.hook = f(self.hook_fn)\n", "        self.stored,self.removed = None,False\n", "\n", "    def hook_fn(self, module, input, output):\n", "        \"Applies `hook_func` to `module`, `input`, `output`.\"\n", "        if self.detach:\n", "            input,output = to_detach(input, cpu=self.cpu, gather=self.gather),to_detach(output, cpu=self.cpu, gather=self.gather)\n", "        self.stored = self.hook_func(module, input, output)\n", "\n", "    def remove(self):\n", "        \"Remove the hook from the model.\"\n", "        if not self.removed:\n", "            self.hook.remove()\n", "            self.removed=True\n", "\n", "    def __enter__(self, *args): return self\n", "    def __exit__(self, *args): self.remove()\n", "\n", "    _docs = dict(__enter__=\"Register the hook\",\n", "                 __exit__=\"Remove the hook\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This will be called during the forward pass if `is_forward=True`, the backward pass otherwise, and will optionally `detach`, `gather` and put on the `cpu` the (gradient of the) input/output of the model before passing them to `hook_func`. The result of `hook_func` will be stored in the `stored` attribute of the `Hook`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tst_model = nn.Linear(5,3)\n", "hook = Hook(tst_model, lambda m,i,o: o)\n", "y = tst_model(x)\n", "test_eq(hook.stored, y)" ] }
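, { "cell_type": "markdown", "metadata": {}, "source": [ "The same class handles backward hooks: with `is_forward=False`, `hook_func` receives the gradients with respect to the input and the output. Here is a minimal sketch of our own (mirroring the `hook_output(grad=True)` test later in this notebook) that stores the gradients with respect to the output:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#A minimal sketch of a backward Hook: store the gradients w.r.t. the output\n", "tst_model = nn.Linear(5,3)\n", "hook = Hook(tst_model, lambda m,gi,go: go, is_forward=False)\n", "y = tst_model(torch.randn(4,5))\n", "loss = y.pow(2).mean()\n", "loss.backward()\n", "#The gradient of the mean-squared loss w.r.t. y is 2*y/y.numel()\n", "test_close(hook.stored[0], 2*y/y.numel())\n", "hook.remove()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "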
Hook.hook_fn[source]
\n", "\n", "> Hook.hook_fn(**`module`**, **`input`**, **`output`**)\n", "\n", "Applies `hook_func` to [`module`](/layers#module), `input`, `output`." ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Hook.hook_fn)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "
Hook.remove[source]
\n", "\n", "> Hook.remove()\n", "\n", "Remove the hook from the model." ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Hook.remove)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> Note: It's important to properly remove your hooks from your model when you're done with them, both to avoid them being called again the next time your model is applied to some inputs, and to free the memory associated with their state." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tst_model = nn.Linear(5,10)\n", "x = torch.randn(4,5)\n", "y = tst_model(x)\n", "hook = Hook(tst_model, example_forward_hook)\n", "test_stdout(lambda: tst_model(x), f\"{tst_model} ({x},) {y.detach()}\")\n", "hook.remove()\n", "test_stdout(lambda: tst_model(x), \"\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Context Manager" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since it's very important to remove your `Hook` even if your code is interrupted by some bug, `Hook` can be used as a context manager." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "
Hook.__enter__[source]
\n", "\n", "> Hook.__enter__(**\\*`args`**)\n", "\n", "Register the hook" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Hook.__enter__)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "
Hook.__exit__[source]
\n", "\n", "> Hook.__exit__(**\\*`args`**)\n", "\n", "Remove the hook" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Hook.__exit__)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tst_model = nn.Linear(5,10)\n", "x = torch.randn(4,5)\n", "y = tst_model(x)\n", "with Hook(tst_model, example_forward_hook) as h:\n", " test_stdout(lambda: tst_model(x), f\"{tst_model} ({x},) {y.detach()}\")\n", "test_stdout(lambda: tst_model(x), \"\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def _hook_inner(m,i,o): return o if isinstance(o,Tensor) or is_listy(o) else list(o)\n", "\n", "def hook_output(module, detach=True, cpu=False, grad=False):\n", " \"Return a `Hook` that stores activations of `module` in `self.stored`\"\n", " return Hook(module, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The activations stored are the gradients if `grad=True`, otherwise the output of `module`. If `detach=True` they are detached from their history, and if `cpu=True`, they're put on the CPU." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tst_model = nn.Linear(5,10)\n", "x = torch.randn(4,5)\n", "with hook_output(tst_model) as h:\n", " y = tst_model(x)\n", " test_eq(y, h.stored)\n", " assert not h.stored.requires_grad\n", " \n", "with hook_output(tst_model, grad=True) as h:\n", " y = tst_model(x)\n", " loss = y.pow(2).mean()\n", " loss.backward()\n", " test_close(2*y / y.numel(), h.stored[0])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#cuda\n", "with hook_output(tst_model, cpu=True) as h:\n", " y = tst_model.cuda()(x.cuda())\n", " test_eq(h.stored.device, torch.device('cpu'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Hooks -" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "@docs\n", "class Hooks():\n", " \"Create several hooks on the modules in `ms` with `hook_func`.\"\n", " def __init__(self, ms, hook_func, is_forward=True, detach=True, cpu=False):\n", " self.hooks = [Hook(m, hook_func, is_forward, detach, cpu) for m in ms]\n", "\n", " def __getitem__(self,i): return self.hooks[i]\n", " def __len__(self): return len(self.hooks)\n", " def __iter__(self): return iter(self.hooks)\n", " @property\n", " def stored(self): return L(o.stored for o in self)\n", "\n", " def remove(self):\n", " \"Remove the hooks from the model.\"\n", " for h in self.hooks: h.remove()\n", "\n", " def __enter__(self, *args): return self\n", " def __exit__ (self, *args): self.remove()\n", "\n", " _docs = dict(stored = \"The states saved in each hook.\",\n", " __enter__=\"Register the hooks\",\n", " __exit__=\"Remove the hooks\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]\n", "tst_model = nn.Sequential(*layers)\n", "hooks = Hooks(tst_model, lambda m,i,o: o)\n", "y = tst_model(x)\n", "test_eq(hooks.stored[0], layers[0](x))\n", "test_eq(hooks.stored[1], F.relu(layers[0](x)))\n", "test_eq(hooks.stored[2], y)\n", "hooks.remove()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "
Hooks.stored[source]
\n", "\n", "The states saved in each hook." ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Hooks.stored, name='Hooks.stored')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "
Hooks.remove[source]
\n", "\n", "> Hooks.remove()\n", "\n", "Remove the hooks from the model." ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Hooks.remove)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Context Manager" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Like `Hook`, you can use `Hooks` as a context manager." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "
Hooks.__enter__[source]
\n", "\n", "> Hooks.__enter__(**\\*`args`**)\n", "\n", "Register the hooks" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Hooks.__enter__)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "
Hooks.__exit__[source]
\n", "\n", "> Hooks.__exit__(**\\*`args`**)\n", "\n", "Remove the hooks" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(Hooks.__exit__)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]\n", "tst_model = nn.Sequential(*layers)\n", "with Hooks(layers, lambda m,i,o: o) as h:\n", " y = tst_model(x)\n", " test_eq(h.stored[0], layers[0](x))\n", " test_eq(h.stored[1], F.relu(layers[0](x)))\n", " test_eq(h.stored[2], y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def hook_outputs(modules, detach=True, cpu=False, grad=False):\n", " \"Return `Hooks` that store activations of all `modules` in `self.stored`\"\n", " return Hooks(modules, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The activations stored are the gradients if `grad=True`, otherwise the output of `modules`. If `detach=True` they are detached from their history, and if `cpu=True`, they're put on the CPU." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]\n", "tst_model = nn.Sequential(*layers)\n", "x = torch.randn(4,5)\n", "with hook_outputs(layers) as h:\n", " y = tst_model(x)\n", " test_eq(h.stored[0], layers[0](x))\n", " test_eq(h.stored[1], F.relu(layers[0](x)))\n", " test_eq(h.stored[2], y)\n", " for s in h.stored: assert not s.requires_grad\n", " \n", "with hook_outputs(layers, grad=True) as h:\n", " y = tst_model(x)\n", " loss = y.pow(2).mean()\n", " loss.backward()\n", " g = 2*y / y.numel()\n", " test_close(g, h.stored[2][0])\n", " g = g @ layers[2].weight.data\n", " test_close(g, h.stored[1][0])\n", " g = g * (layers[0](x) > 0).float()\n", " test_close(g, h.stored[0][0])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#cuda\n", "with hook_outputs(tst_model, cpu=True) as h:\n", " y = tst_model.cuda()(x.cuda())\n", " for s in h.stored: test_eq(s.device, torch.device('cpu'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def dummy_eval(m, size=(64,64)):\n", " \"Evaluate `m` on a dummy input of a certain `size`\"\n", " ch_in = in_channels(m)\n", " x = one_param(m).new(1, ch_in, *size).requires_grad_(False).uniform_(-1.,1.)\n", " with torch.no_grad(): return m.eval()(x)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def model_sizes(m, size=(64,64)):\n", " \"Pass a dummy input through the model `m` to get the various sizes of activations.\"\n", " with hook_outputs(m) as hooks:\n", " _ = dummy_eval(m, size=size)\n", " return [o.stored.shape for o in hooks]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = nn.Sequential(ConvLayer(3, 16), ConvLayer(16, 32, stride=2), ConvLayer(32, 32))\n", "test_eq(model_sizes(m), [[1, 16, 64, 64], [1, 32, 32, 32], [1, 32, 32, 32]])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def num_features_model(m):\n", " \"Return the number of output features for `m`.\"\n", " sz,ch_in = 32,in_channels(m)\n", " while True:\n", " #Trying for a few sizes in case the model requires a big input size.\n", " try:\n", " return model_sizes(m, 
(sz,sz))[-1][1]\n", "        except Exception as e:\n", "            sz *= 2\n", "            if sz > 2048: raise e" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = nn.Sequential(nn.Conv2d(5,4,3), nn.Conv2d(4,3,3))\n", "test_eq(num_features_model(m), 3)\n", "m = nn.Sequential(ConvLayer(3, 16), ConvLayer(16, 32, stride=2), ConvLayer(32, 32))\n", "test_eq(num_features_model(m), 32)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## HookCallback -" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make hooks easy to use, we wrapped a version in a `Callback` where you just have to implement a `hook` function (plus any element you might need)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def has_params(m):\n", "    \"Check if `m` has at least one parameter\"\n", "    return len(list(m.parameters())) > 0" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "assert has_params(nn.Linear(3,4))\n", "assert has_params(nn.LSTM(4,5,2))\n", "assert not has_params(nn.ReLU())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "@funcs_kwargs\n", "class HookCallback(Callback):\n", "    \"`Callback` that can be used to register hooks on `modules`\"\n", "    _methods = [\"hook\"]\n", "    hook = noops\n", "    def __init__(self, modules=None, every=None, remove_end=True, is_forward=True, detach=True, cpu=True, **kwargs):\n", "        store_attr(self, 'modules,every,remove_end,is_forward,detach,cpu')\n", "        assert not kwargs\n", "\n", "    def begin_fit(self):\n", "        \"Register the `Hooks` on `self.modules`.\"\n", "        if self.modules is None: self.modules = [m for m in flatten_model(self.model) if has_params(m)]\n", "        if self.every is None: self._register()\n", "\n", "    def begin_batch(self):\n", "        if self.every is None: return\n", "        if self.training and self.train_iter%self.every==0: self._register()\n", "\n", "    def after_batch(self):\n", "        if self.every is None: return\n", "        if self.training and self.train_iter%self.every==0: self._remove()\n", "\n", "    def after_fit(self):\n", "        \"Remove the `Hooks`.\"\n", "        if self.remove_end: self._remove()\n", "\n", "    def _register(self): self.hooks = Hooks(self.modules, self.hook, self.is_forward, self.detach, self.cpu)\n", "    def _remove(self):\n", "        if getattr(self, 'hooks', None): self.hooks.remove()\n", "\n", "    def __del__(self): self._remove()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can either subclass and implement a `hook` function (along with any event you want), or pass a `hook` function when initializing. Such a function needs to take three arguments: a layer, its input and its output (for a backward hook, the input is the gradient with respect to the inputs and the output is the gradient with respect to the output), and can either modify them or update the state according to them.\n", "\n", "If not provided, `modules` will default to the layers of `self.model` that have at least one parameter. Depending on `remove_end`, the hooks will be properly removed at the end of training (or in case of error). `is_forward`, `detach` and `cpu` are passed to `Hooks`.\n", "\n", "The function called at each forward (or backward) pass is `self.hook` and must be implemented when subclassing this callback."
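] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For instance, here is a minimal sketch of our own (not part of the library) of passing a `hook` function at init time rather than subclassing; `shape_hook` is just an illustrative name:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Minimal sketch: pass a hook function at init instead of subclassing\n", "def shape_hook(m, i, o): return o.shape  #illustrative helper, not part of the library\n", "cb = HookCallback(hook=shape_hook)\n", "learn = synth_learner(n_trn=5, cbs=cb)\n", "learn.fit(1)\n", "#The hooks stored the output shape of each hooked module for the last batch seen\n", "cb.hooks.stored" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(#4) [0,4.099071979522705,3.5469274520874023,'00:00']\n" ] } ], "source": [ "class TstCallback(HookCallback):\n", "    def hook(self, m, i, o): return o\n", "    def after_batch(self): test_eq(self.hooks.stored[0], self.pred)\n", "\n", "learn = synth_learner(n_trn=5, cbs = TstCallback())\n", "learn.fit(1)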
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(#4) [0,12.485884666442871,11.434316635131836,'00:00']\n" ] } ], "source": [ "class TstCallback(HookCallback):\n", "    def __init__(self, modules=None, remove_end=True, detach=True, cpu=False):\n", "        super().__init__(modules, None, remove_end, False, detach, cpu)\n", "    def hook(self, m, i, o): return o\n", "    def after_batch(self):\n", "        if self.training:\n", "            test_eq(self.hooks.stored[0][0], 2*(self.pred-self.y)/self.pred.shape[0])\n", "\n", "learn = synth_learner(n_trn=5, cbs = TstCallback())\n", "learn.fit(1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "
HookCallback.begin_fit[source]
\n", "\n", "> HookCallback.begin_fit()\n", "\n", "Register the [`Hooks`](/callback.hook#Hooks) on `self.modules`." ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(HookCallback.begin_fit)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "
HookCallback.after_fit[source]
\n", "\n", "> HookCallback.after_fit()\n", "\n", "Remove the [`Hooks`](/callback.hook#Hooks)." ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "show_doc(HookCallback.after_fit)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model summary" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def total_params(m):\n", "    \"Give the number of parameters of a module and whether it's trainable\"\n", "    params = sum([p.numel() for p in m.parameters()])\n", "    trains = [p.requires_grad for p in m.parameters()]\n", "    return params, (False if len(trains)==0 else trains[0])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_eq(total_params(nn.Linear(10,32)), (32*10+32,True))\n", "test_eq(total_params(nn.Linear(10,32, bias=False)), (32*10,True))\n", "test_eq(total_params(nn.BatchNorm2d(20)), (20*2, True))\n", "test_eq(total_params(nn.BatchNorm2d(20, affine=False)), (0,False))\n", "test_eq(total_params(nn.Conv2d(16, 32, 3)), (16*32*3*3 + 32, True))\n", "test_eq(total_params(nn.Conv2d(16, 32, 3, bias=False)), (16*32*3*3, True))\n", "#First ih layer 20--10, all else 10--10. *4 for the four gates\n", "test_eq(total_params(nn.LSTM(20, 10, 2)), (4 * (20*10 + 10) + 3 * 4 * (10*10 + 10), True))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "def layer_info(model, *xb):\n", "    \"Return layer info of `model` on `xb` (only supports batch-first inputs)\"\n", "    def _track(m, i, o):\n", "        return (m.__class__.__name__,)+total_params(m)+(apply(lambda x:x.shape, o),)\n", "    layers = [m for m in flatten_model(model)]\n", "    with Hooks(layers, _track) as h:\n", "        _ = model.eval()(*apply(lambda o:o[:1], xb))\n", "        return xb,h.stored" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = nn.Sequential(nn.Linear(1,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))\n", "sample_input = torch.randn((16, 1))\n", "test_eq(layer_info(m, sample_input)[1], [\n", "    ('Linear', 100, True, [1, 50]),\n", "    ('ReLU', 0, False, [1, 50]),\n", "    ('BatchNorm1d', 100, True, [1, 50]),\n", "    ('Linear', 51, True, [1, 1])\n", "])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Test for a model with multiple inputs\n", "class _2InpModel(Module):\n", "    def __init__(self):\n", "        super().__init__()\n", "        self.seq = nn.Sequential(nn.Linear(2,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))\n", "    def forward(self, *inps):\n", "        outputs = torch.cat(inps, dim=-1)\n", "        return self.seq(outputs)\n", "\n", "\n", "m = _2InpModel()\n", "sample_inputs = (torch.randn(16, 1), torch.randn(16, 1))\n", "test_eq(layer_info(m, *sample_inputs)[1], [\n", "    ('Linear', 150, True, [1, 50]),\n", "    ('ReLU', 0, False, [1, 50]),\n", "    ('BatchNorm1d', 100, True, [1, 50]),\n", "    ('Linear', 51, True, [1, 1])\n", "])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "def _print_shapes(o, bs):\n", "    if isinstance(o, torch.Size): return ' x '.join([str(bs)] + [str(t) for t in o[1:]])\n", "    else: return str([_print_shapes(x, bs) for x in o])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#hide\n", "#Individual parameters wrapped in ParameterModule aren't called through the hooks in `layer_info`, thus are not counted inside the summary\n", "#TODO: find 
a way to have them counted in param number somehow" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# export\n", "def module_summary(self, *xb):\n", " \"Print a summary of `self` using `xb`\"\n", " sample_inputs,infos = layer_info(self, *xb)\n", " n,bs = 64,find_bs(xb)\n", " inp_sz = _print_shapes(apply(lambda x:x.shape, xb), bs)\n", " res = f\"{self.__class__.__name__} (Input shape: {inp_sz})\\n\"\n", " res += \"=\" * n + \"\\n\"\n", " res += f\"{'Layer (type)':<20} {'Output Shape':<20} {'Param #':<10} {'Trainable':<10}\\n\"\n", " res += \"=\" * n + \"\\n\"\n", " ps,trn_ps = 0,0\n", " infos = [o for o in infos if o is not None] #see comment in previous cell\n", " for typ,np,trn,sz in infos:\n", " if sz is None: continue\n", " ps += np\n", " if trn: trn_ps += np\n", " res += f\"{typ:<20} {_print_shapes(sz, bs)[:19]:<20} {np:<10,} {str(trn):<10}\\n\"\n", " res += \"_\" * n + \"\\n\"\n", " res += f\"\\nTotal params: {ps:,}\\n\"\n", " res += f\"Total trainable params: {trn_ps:,}\\n\"\n", " res += f\"Total non-trainable params: {ps - trn_ps:,}\\n\\n\"\n", " return PrettyString(res)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Sequential (Input shape: ['16 x 1'])\n", "================================================================\n", "Layer (type) Output Shape Param # Trainable \n", "================================================================\n", "Linear 16 x 50 100 False \n", "________________________________________________________________\n", "ReLU 16 x 50 0 False \n", "________________________________________________________________\n", "BatchNorm1d 16 x 50 100 True \n", "________________________________________________________________\n", "Linear 16 x 1 51 True \n", "________________________________________________________________\n", "\n", "Total params: 251\n", "Total trainable params: 151\n", "Total non-trainable params: 100\n" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "m = nn.Sequential(nn.Linear(1,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))\n", "for p in m[0].parameters(): p.requires_grad_(False)\n", "sample_input = torch.randn((16, 1))\n", "module_summary(m, sample_input)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#export\n", "@patch\n", "def summary(self:Learner):\n", " \"Print a summary of the model, optimizer and loss function.\"\n", " xb = self.dls.train.one_batch()[:self.dls.train.n_inp]\n", " res = module_summary(self.model, *xb)\n", " res += f\"Optimizer used: {self.opt_func}\\nLoss function: {self.loss_func}\\n\\n\"\n", " if self.opt is not None:\n", " res += f\"Model \" + (\"unfrozen\\n\\n\" if self.opt.frozen_idx==0 else f\"frozen up to parameter group number {self.opt.frozen_idx}\\n\\n\")\n", " res += \"Callbacks:\\n\" + '\\n'.join(f\" - {cb}\" for cb in sort_by_run(self.cbs))\n", " return PrettyString(res)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Sequential (Input shape: ['16 x 1'])\n", "================================================================\n", "Layer (type) Output Shape Param # Trainable \n", "================================================================\n", "Linear 16 x 50 100 False \n", "________________________________________________________________\n", "ReLU 16 x 50 0 False \n", 
"________________________________________________________________\n", "BatchNorm1d 16 x 50 100 True \n", "________________________________________________________________\n", "Linear 16 x 1 51 True \n", "________________________________________________________________\n", "\n", "Total params: 251\n", "Total trainable params: 151\n", "Total non-trainable params: 100\n", "\n", "Optimizer used: functools.partial(, mom=0.9)\n", "Loss function: FlattenedLoss of MSELoss()\n", "\n", "Model unfrozen\n", "\n", "Callbacks:\n", " - TrainEvalCallback\n", " - Recorder" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "m = nn.Sequential(nn.Linear(1,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))\n", "for p in m[0].parameters(): p.requires_grad_(False)\n", "learn = synth_learner()\n", "learn.create_opt()\n", "learn.model=m\n", "learn.summary()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "_NOutModel (Input shape: ['16 x 1'])\n", "================================================================\n", "Layer (type) Output Shape Param # Trainable \n", "================================================================\n", "_NOutModel ['16 x 16 x 256', ' 0 False \n", "________________________________________________________________\n", "\n", "Total params: 0\n", "Total trainable params: 0\n", "Total non-trainable params: 0\n", "\n", "Optimizer used: functools.partial(, mom=0.9)\n", "Loss function: FlattenedLoss of MSELoss()\n", "\n", "Callbacks:\n", " - TrainEvalCallback\n", " - Recorder" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Test for multiple output\n", "class _NOutModel(nn.Module):\n", " def forward(self, x1):\n", " seq_len, bs, hid_size = 50, 16, 256\n", " num_layer = 1\n", " return torch.randn((seq_len, bs, hid_size)), torch.randn((num_layer, bs, hid_size))\n", "m = _NOutModel()\n", "learn = synth_learner()\n", "learn.model = m\n", "learn.summary() # Output Shape should be (50, 16, 256), (1, 16, 256)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Activation graphs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is an example of a `HookCallback`, that stores the mean, stds and histograms of activations that go through the network." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#exports\n", "@delegates()\n", "class ActivationStats(HookCallback):\n", "    \"Callback that records the mean and std of activations.\"\n", "    run_before=TrainEvalCallback\n", "    def __init__(self, with_hist=False, **kwargs):\n", "        super().__init__(**kwargs)\n", "        self.with_hist = with_hist\n", "\n", "    def begin_fit(self):\n", "        \"Initialize stats.\"\n", "        super().begin_fit()\n", "        self.stats = L()\n", "\n", "    def hook(self, m, i, o):\n", "        o = o.float()\n", "        res = {'mean': o.mean().item(), 'std': o.std().item(),\n", "               'near_zero': (o<=0.05).long().sum().item()/o.numel()}\n", "        if self.with_hist: res['hist'] = o.histc(40,0,10)\n", "        return res\n", "\n", "    def after_batch(self):\n", "        \"Take the stored results and put them in `self.stats`\"\n", "        if self.training and (self.every is None or self.train_iter%self.every == 0):\n", "            self.stats.append(self.hooks.stored)\n", "        super().after_batch()\n", "\n", "    def layer_stats(self, idx):\n", "        lstats = self.stats.itemgot(idx)\n", "        return L(lstats.itemgot(o) for o in ('mean','std','near_zero'))\n", "\n", "    def hist(self, idx):\n", "        res = self.stats.itemgot(idx).itemgot('hist')\n", "        return torch.stack(tuple(res)).t().float().log1p()\n", "\n", "    def color_dim(self, idx, figsize=(10,5), ax=None):\n", "        \"The 'colorful dimension' plot\"\n", "        res = self.hist(idx)\n", "        if ax is None: ax = subplots(figsize=figsize)[1][0]\n", "        ax.imshow(res, origin='lower')\n", "        ax.axis('off')\n", "\n", "    def plot_layer_stats(self, idx):\n", "        _,axs = subplots(1, 3, figsize=(12,3))\n", "        for o,ax,title in zip(self.layer_stats(idx),axs,('mean','std','% near zero')):\n", "            ax.plot(o)\n", "            ax.set_title(title)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(#4) [0,7.390630722045898,6.929471969604492,'00:00']\n" ] } ], "source": [ "learn = synth_learner(n_trn=5, cbs = ActivationStats(every=4))\n", "learn.fit(1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(#2) [(#1) [{'mean': 0.5417343378067017, 'std': 0.9456170797348022, 'near_zero': 0.25}],(#1) [{'mean': 0.6769891977310181, 'std': 1.007888913154602, 'near_zero': 0.3125}]]" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "learn.activation_stats.stats" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each element of `stats` corresponds to one recorded training batch (here every fourth batch, because of `every=4`) and contains, for each hooked module, a dictionary with the mean, standard deviation and fraction of near-zero activations of its output."
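] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The plotting helpers defined above can then be used to visualize these statistics. A quick sketch (our own example): we pass `with_hist=True` so that `color_dim` has histograms to draw." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Sketch: visualize the recorded statistics of the first (and only) hooked module\n", "learn = synth_learner(n_trn=5, cbs = ActivationStats(with_hist=True))\n", "learn.fit(1)\n", "learn.activation_stats.plot_layer_stats(0) #mean/std/% near zero across training batches\n", "learn.activation_stats.color_dim(0)        #the 'colorful dimension' histogram plot"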
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(#4) [0,15.302834510803223,15.13727855682373,'00:00']\n", "(#4) [0,24.2937068939209,16.286128997802734,'00:00']\n", "(#4) [0,11.251078605651855,9.528843879699707,'00:00']\n", "(#4) [0,9.512333869934082,8.019964218139648,'00:00']\n", "(#4) [0,4.20447301864624,2.8407559394836426,'00:00']\n", "(#4) [0,11.206255912780762,8.915884971618652,'00:00']\n" ] } ], "source": [ "import math\n", "\n", "def test_every(n_tr, every):\n", " \"create a learner, fit, then check number of stats collected\"\n", " learn = synth_learner(n_trn=n_tr, cbs=ActivationStats(every=every))\n", " learn.fit(1)\n", " expected_stats_len = math.ceil(n_tr / every)\n", " test_eq(expected_stats_len, len(learn.activation_stats.stats))\n", " \n", "for n_tr in [11, 12, 13]:\n", " test_every(n_tr, 4)\n", " test_every(n_tr, 1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(#4) [0,6.47100305557251,6.345976829528809,'00:00']\n" ] } ], "source": [ "#hide\n", "class TstCallback(HookCallback):\n", " def hook(self, m, i, o): return o\n", " def begin_fit(self):\n", " super().begin_fit()\n", " self.means,self.stds = [],[]\n", " \n", " def after_batch(self):\n", " if self.training:\n", " self.means.append(self.hooks.stored[0].mean().item())\n", " self.stds.append (self.hooks.stored[0].std() .item())\n", "\n", "learn = synth_learner(n_trn=5, cbs = [TstCallback(), ActivationStats()])\n", "learn.fit(1)\n", "test_eq(learn.activation_stats.stats.itemgot(0).itemgot(\"mean\"), learn.tst.means)\n", "test_eq(learn.activation_stats.stats.itemgot(0).itemgot(\"std\"), learn.tst.stds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Export -" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Converted 00_torch_core.ipynb.\n", "Converted 01_layers.ipynb.\n", "Converted 02_data.load.ipynb.\n", "Converted 03_data.core.ipynb.\n", "Converted 04_data.external.ipynb.\n", "Converted 05_data.transforms.ipynb.\n", "Converted 06_data.block.ipynb.\n", "Converted 07_vision.core.ipynb.\n", "Converted 08_vision.data.ipynb.\n", "Converted 09_vision.augment.ipynb.\n", "Converted 09b_vision.utils.ipynb.\n", "Converted 09c_vision.widgets.ipynb.\n", "Converted 10_tutorial.pets.ipynb.\n", "Converted 11_vision.models.xresnet.ipynb.\n", "Converted 12_optimizer.ipynb.\n", "Converted 13_callback.core.ipynb.\n", "Converted 13a_learner.ipynb.\n", "Converted 13b_metrics.ipynb.\n", "Converted 14_callback.schedule.ipynb.\n", "Converted 14a_callback.data.ipynb.\n", "Converted 15_callback.hook.ipynb.\n", "Converted 15a_vision.models.unet.ipynb.\n", "Converted 16_callback.progress.ipynb.\n", "Converted 17_callback.tracker.ipynb.\n", "Converted 18_callback.fp16.ipynb.\n", "Converted 18a_callback.training.ipynb.\n", "Converted 19_callback.mixup.ipynb.\n", "Converted 20_interpret.ipynb.\n", "Converted 20a_distributed.ipynb.\n", "Converted 21_vision.learner.ipynb.\n", "Converted 22_tutorial.imagenette.ipynb.\n", "Converted 23_tutorial.vision.ipynb.\n", "Converted 24_tutorial.siamese.ipynb.\n", "Converted 24_vision.gan.ipynb.\n", "Converted 30_text.core.ipynb.\n", "Converted 31_text.data.ipynb.\n", "Converted 32_text.models.awdlstm.ipynb.\n", "Converted 33_text.models.core.ipynb.\n", "Converted 34_callback.rnn.ipynb.\n", "Converted 35_tutorial.wikitext.ipynb.\n", 
"Converted 36_text.models.qrnn.ipynb.\n", "Converted 37_text.learner.ipynb.\n", "Converted 38_tutorial.text.ipynb.\n", "Converted 39_tutorial.transformers.ipynb.\n", "Converted 40_tabular.core.ipynb.\n", "Converted 41_tabular.data.ipynb.\n", "Converted 42_tabular.model.ipynb.\n", "Converted 43_tabular.learner.ipynb.\n", "Converted 44_tutorial.tabular.ipynb.\n", "Converted 45_collab.ipynb.\n", "Converted 46_tutorial.collab.ipynb.\n", "Converted 50_tutorial.datablock.ipynb.\n", "Converted 60_medical.imaging.ipynb.\n", "Converted 61_tutorial.medical_imaging.ipynb.\n", "Converted 65_medical.text.ipynb.\n", "Converted 70_callback.wandb.ipynb.\n", "Converted 71_callback.tensorboard.ipynb.\n", "Converted 72_callback.neptune.ipynb.\n", "Converted 73_callback.captum.ipynb.\n", "Converted 74_callback.cutmix.ipynb.\n", "Converted 97_test_utils.ipynb.\n", "Converted 99_pytorch_doc.ipynb.\n", "Converted index.ipynb.\n", "Converted tutorial.ipynb.\n" ] } ], "source": [ "#hide\n", "from nbdev.export import notebook2script\n", "notebook2script()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "jupytext": { "split_at_heading": true }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }