{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Computed Parameters - a PyTorch hack\n", "\n", "*by Thomas Viehmann*\n", "\n", "If you are anything like me, you like PyTorch and you enjoy an occasional clever hack. So here we go.\n", "\n", "The other day we joked online what breaking changes might be worth forking PyTorch for. So for all the ease with which to define networks and keep the parameters in `nn.Modules`, there isn't an easy way to create _constrained parameters_ which magically enforce their constraints.\n", "\n", "While many networks (obviously) work without them, constrained parameters are not entirely niche, either.\n", "Relatively simple constraints, such as values being positive, often occur when parametrizing distributions - e.g. for [Mixture Density Networks](https://nbviewer.jupyter.org/github/t-vi/pytorch-tvmisc/blob/master/misc/Mixture_Density_Network_Gaussian_1d.ipynb) used e.g. for Grave's famous [Handwriting Generation RNN](https://nbviewer.jupyter.org/github/t-vi/pytorch-tvmisc/blob/master/misc/graves_handwriting_generation.ipynb) or for Gaussian Process modelling.\n", "But there are also more elaborate uses, e.g. spectral normalizaiton to impose a Lipschitz constraint in convolutions and linear layers.\n", "\n", "PyTorch supports spectral norm contraints, but the mechanism it uses seems very elaborate for what should be a very simple thing. We get to this below.\n", "\n", "But first the small print:\n", "\n", "**License**: (c) 2020 by Thomas Viehmann, all rights reserved. I do not recommend its use. This is only displayed for educational purposes. You must not use it in academic or commercial work or display it without reproducing this license and referencing https://lernapparat.de/computed-parameters.\n", "\n", "**Acknowledgement:** This post is dedicated to my one true [fan on GitHub sponsors](https://github.com/sponsors/t-vi/)." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import torch\n", "import numpy\n", "import inspect # this should raise the \"we'll do gross things Python internals flag\"\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Spectral normalization and the PyTorch implementation\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A while ago, people found that when training GANs, it was useful to have a continuity constraint on the regularity of the discriminator, or more precisely, a Lipschitz constraint. This led to WGAN (which constrained via weight clipping), WGAN-GP (constrained via a penalty on the \"sampled\" Lipschitz constant) [which I discuss in an old blog post](https://lernapparat.de/improved-wasserstein-gan/), and eventually Spectral Normalization ([T. Miyato et. al: Spectral Normalization for Generative Adversarial Networks, ICLR 2018](https://openreview.net/forum?id=B1QRgziT-)).\n", "\n", "Spectral normalization bounds the continuity of linear or convolution operators when the domain and the image is equipped with the $l^2$ norm. The operator norm in this setting is the _spectral norm_, the largest spectral value of the operator. Now direct methods to compute the singular value decomposition are rather expensive, and so Miyato et al. 
use the fact that you can get the largest singular value by power iteration:\n",
    "Starting from some (hopefully not pathological) $u$ of the right dimension, we can iterate\n",
    "$v = \\frac{A^T u}{|A^T u|}$, $u = \\frac{A v}{|A v|}$, so that $u$ and $v$ converge to the leading left and right singular vectors and $u^T A v \\rightarrow \\sigma(A)$. To bound our weight's spectral norm, we then divide it by $\\sigma(A)$.\n",
    "\n",
    "The key observation of Miyato et al. is that as little as one power iteration per training step is enough to keep the weights in check. And this is relatively straightforward to implement as a PyTorch module whose `forward` takes no arguments and returns a spectrally normalized weight (I took the computation from PyTorch):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Based on the original implementation from PyTorch,\n",
    "# so portions are copyright by the PyTorch contributors; in particular, Simon Wang worked on it a lot.\n",
    "# Errors probably are my doing.\n",
    "\n",
    "class SpectralNormWeight(torch.nn.Module):\n",
    "    def __init__(self, shape, dim=0, eps=1e-12, n_power_iterations=1):\n",
    "        super().__init__()\n",
    "        self.n_power_iterations = n_power_iterations\n",
    "        self.eps = eps\n",
    "        self.dim = dim\n",
    "        self.shape = shape\n",
    "        self.permuted_shape = (shape[dim],) + shape[:dim] + shape[dim+1:]\n",
    "        h = shape[dim]\n",
    "        w = numpy.prod(self.permuted_shape[1:])\n",
    "        self.weight_mat = torch.nn.Parameter(torch.randn(h, w))\n",
    "        self.register_buffer('u', torch.nn.functional.normalize(torch.randn(h), dim=0, eps=self.eps))\n",
    "        self.register_buffer('v', torch.nn.functional.normalize(torch.randn(w), dim=0, eps=self.eps))\n",
    "\n",
    "    def forward(self):\n",
    "        u = self.u\n",
    "        v = self.v\n",
    "        if self.training:\n",
    "            with torch.no_grad():\n",
    "                for _ in range(self.n_power_iterations):\n",
    "                    # The spectral norm of the weight equals `u^T W v`, where `u` and `v`\n",
    "                    # are the first left and right singular vectors.\n",
    "                    # This power iteration produces approximations of `u` and `v`.\n",
    "                    v = torch.nn.functional.normalize(torch.mv(self.weight_mat.t(), u), dim=0, eps=self.eps, out=v)\n",
    "                    u = torch.nn.functional.normalize(torch.mv(self.weight_mat, v), dim=0, eps=self.eps, out=u)\n",
    "                # Clone `u` and `v` so that the values used for `sigma` below are not\n",
    "                # affected by later in-place power-iteration updates of the buffers.\n",
    "                u = u.clone(memory_format=torch.contiguous_format)\n",
    "                v = v.clone(memory_format=torch.contiguous_format)\n",
    "\n",
    "        sigma = torch.dot(u, torch.mv(self.weight_mat, v))\n",
    "        weight = (self.weight_mat / sigma).view(self.permuted_shape)\n",
    "        if self.dim != 0:\n",
    "            weight = weight.transpose(0, self.dim)\n",
    "        return weight"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "But how to apply this to our weight?\n",
    "\n",
    "We can, of course, now do this:\n",
    "\n",
    "```python\n",
    "w = SpectralNormWeight((out_feat, in_feat))\n",
    "res = torch.nn.functional.linear(inp, w(), bias)\n",
    "```\n",
    "\n",
    "But wouldn't it be nice to be able to just do this:\n",
    "\n",
    "```python\n",
    "l = torch.nn.Linear(3, 4)\n",
    "l.weight = ComputedParameter(SpectralNormWeight(l.weight.shape))\n",
    "```\n",
    "\n",
    "Well, it never is that easy in real life. The PyTorch developers used hooks to implement spectral norm in a way that is convenient for the user. But using hooks that way can be brittle because you need to deal with replacing the weight discreetly, making sure that backpropagation works, saving and loading models, multi-GPU, ...\n",
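    "\n",
    "To get an idea of what is involved, here is a rough sketch of the hook idea - emphatically *not* PyTorch's actual `spectral_norm` code, and `constrain` and `compute_weight` are made-up names: keep the raw parameter under a different name and recompute the constrained weight in a forward pre-hook.\n",
    "\n",
    "```python\n",
    "def constrain(module, name, compute_weight):\n",
    "    # move the raw parameter out of the way ...\n",
    "    orig = getattr(module, name)\n",
    "    delattr(module, name)\n",
    "    module.register_parameter(name + '_orig', torch.nn.Parameter(orig.detach()))\n",
    "    # ... and recompute the constrained weight before every forward pass\n",
    "    def hook(mod, inputs):\n",
    "        setattr(mod, name, compute_weight(getattr(mod, name + '_orig')))\n",
    "    hook(module, None)  # set the weight once so it exists before the first forward\n",
    "    module.register_forward_pre_hook(hook)\n",
    "    return module\n",
    "```\n",
    "\n",
    "Even this toy version renames the parameter, keeps extra state on the module and relies on the hook firing reliably - and the real implementation additionally has to get serialization and replication right.\n",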
    "All in all, PyTorch's [spectral norm implementation](https://github.com/pytorch/pytorch/blob/master/torch/nn/utils/spectral_norm.py) runs to 300 lines at the time of writing, almost 10x what we have above. Also, it isn't easily extended if we want to use a different constraint.\n",
    "\n",
    "Now the complexity involved here has not gone unnoticed, and so there is a [2-year-old issue](https://github.com/pytorch/pytorch/issues/7313) with a discussion of what an abstraction could look like. Yours truly once explored solutions with some code; more recently, Mario Lezcano has worked on an implementation available [in a draft PR](https://github.com/pytorch/pytorch/pull/33344) clocking in at 350 lines. Mario does great work there and it looks like a great improvement - it works with `Module`s as the things providing the parametrizations, so it can be customized, and it removes the fiddling with hooks. It does retain some of the boring properties of the hook-based spectral norm implementation, notably the idea that the input to the computation should be the \"original weight\", and a buffering scheme.\n",
    "\n",
    "Couldn't we skip all that?\n",
    "\n",
    "## A better way?\n",
    "\n",
    "Why can't we just assign our module above as the weight, then? Because we want to update existing modules such as `torch.nn.Conv2d` and `torch.nn.Linear` by replacing their parameters, and they would need to do `weight = self.weight()` instead of `weight = self.weight`, as `Module`s obviously aren't `Tensor`s.\n",
    "\n",
    "But what if our `Module`s could be `Tensor`s, too?\n",
    "\n",
    "And here is where our clever hack comes in. PyTorch wants to enable people to extend it, and one of the things people wanted was a way of building on `Tensor`s (subclassing them works as you expect and is done e.g. by `torch.nn.Parameter`) without quite subclassing them (because the new classes are more restricted in a sense), but still have regular PyTorch operations return the new class (whereas they usually return `Tensor`s). Incidentally, one of the use-cases is to have constrained tensors, e.g. skew-symmetric ones.\n",
    "\n",
    "(I should point out here that things like `FloatTensor` aren't subclasses of `Tensor`; they're a gross hack to make `isinstance` work. Don't use them! In fact, you should have stopped using them when PyTorch 0.4 came out.)\n",
    "\n",
    "The interesting thing is that these new classes _aren't_ subclasses of tensors. Rather, in true Python fashion, they have a special `__torch_function__` method that converts the inputs into `Tensor`s, calls the PyTorch function, and then post-processes the (`Tensor`) results into whatever it wants. But if they aren't subclasses, our class can just as well be a subclass of `torch.nn.Module` and define the special method. Bingo!\n",
    "\n",
    "We also implement caching. The hard part about caching isn't keeping the result around; it is figuring out when we need to re-compute. We do this when\n",
    "- the parameters used in our module are updated (e.g. 
by our optimizer), because then the result will be different - handily, PyTorch keeps a version counter on tensors that we can use to track this,\n",
    "- the cached tensor has been backpropagated through, because the next backpropagation would fail (ideally, we would only do this if `retain_graph` hadn't been used, and there might be corner cases where the module does funny things that keep the hook from working, but hey...).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ComputedParameter(torch.nn.Module):\n",
    "    def __init__(self, m):\n",
    "        super().__init__()\n",
    "        self.m = m\n",
    "        self.needs_update = True\n",
    "        self.cache = None\n",
    "        self.param_versions = None  # should we also treat buffers?\n",
    "\n",
    "    def require_update(self, *args):  # dummy args for use as hook\n",
    "        self.needs_update = True\n",
    "\n",
    "    def check_param_versions(self):\n",
    "        if self.param_versions is None:\n",
    "            self.require_update()\n",
    "            return\n",
    "        for p, v in zip(self.parameters(), self.param_versions):\n",
    "            if p._version != v:\n",
    "                self.require_update()\n",
    "                return\n",
    "\n",
    "    def tensor(self):\n",
    "        self.check_param_versions()\n",
    "        if self.needs_update:\n",
    "            self.cache = self.m()\n",
    "            # invalidate the cache once the result has been backpropagated through\n",
    "            self.cache.register_hook(self.require_update)\n",
    "            self.param_versions = [p._version for p in self.parameters()]\n",
    "            self.needs_update = False\n",
    "        return self.cache\n",
    "\n",
    "    @classmethod\n",
    "    def __torch_function__(cls, func, types, args=(), kwargs=None):\n",
    "        if kwargs is None:\n",
    "            kwargs = {}\n",
    "        # replace ComputedParameter arguments by their computed tensor value\n",
    "        args = tuple(a.tensor() if isinstance(a, ComputedParameter) else a for a in args)\n",
    "        return func(*args, **kwargs)\n",
    "\n",
    "    def __hash__(self):\n",
    "        return super().__hash__()\n",
    "\n",
    "    def __eq__(self, other):\n",
    "        if isinstance(other, torch.nn.Module):\n",
    "            return super().__eq__(other)\n",
    "        return torch.eq(self, other)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Unsurprisingly, as we don't subclass `Tensor`, we don't have all the methods. Typing them up would be a lot of work, but happily Python lets us patch them in programmatically.\n",
    "(And here I cut corners by ignoring properties, documentation, type annotations etc., but this is just to show off a gross hack, remember?)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# This is overly simplistic; it should also take care of signatures and docstrings and handle classmethods and properties.\n",
    "for name, member in inspect.getmembers(torch.Tensor):\n",
    "    if not hasattr(ComputedParameter, name):\n",
    "        if inspect.ismethoddescriptor(member):\n",
    "            def get_proxy(name):\n",
    "                def new_fn(self, *args, **kwargs):\n",
    "                    return getattr(self.tensor(), name)(*args, **kwargs)\n",
    "                return new_fn\n",
    "            setattr(ComputedParameter, name, get_proxy(name))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We also want to be able to replace parameters with our neat new `ComputedParameter`s, so we monkey-patch `Module`'s `__setattr__` routine."
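,
    "\n",
    "\n",
    "Why the patch? `Module.__setattr__` refuses to overwrite an existing `Parameter` with anything that isn't a `Parameter` (or `None`), so without the patch the assignment we are after fails, roughly like this (a sketch; the exact error message may differ):\n",
    "\n",
    "```python\n",
    "l = torch.nn.Linear(3, 4)\n",
    "l.weight = ComputedParameter(SpectralNormWeight(l.weight.shape))\n",
    "# TypeError: cannot assign '...ComputedParameter' as parameter 'weight'\n",
    "#            (torch.nn.Parameter or None expected)\n",
    "```"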
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "def replace_setattr():\n", " # make old_setattr local..\n", " old_setattr = torch.nn.Module.__setattr__\n", " def new_setattr(self, n, v):\n", " oldval = getattr(self, n, 1)\n", " if isinstance(v, ComputedParameter) and oldval is None or isinstance(oldval, torch.nn.Parameter):\n", " delattr(self, n)\n", " old_setattr(self, n, v)\n", " torch.nn.Module.__setattr__ = new_setattr\n", "\n", "replace_setattr()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Done!\n", "\n", "Now we can get use computed parameters in the most elegant way:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/usr/local/lib/python3.8/dist-packages/torch/functional.py:1241: UserWarning: torch.norm is deprecated and may be removed in a future PyTorch release. Use torch.linalg.norm instead.\n", " warnings.warn((\n" ] } ], "source": [ "l = torch.nn.Linear(3, 4)\n", "l.weight = ComputedParameter(SpectralNormWeight(l.weight.shape))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our computed parameters show up in the module structure:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Linear(\n", " in_features=3, out_features=4, bias=True\n", " (weight): ComputedParameter(\n", " (m): SpectralNormWeight()\n", " )\n", ")" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "l" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try this on an admittedly silly and trivial example, to fit a spectrally normalized target:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "target = torch.randn(3, 4)\n", "target /= torch.svd(target).S[0]" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.9860309362411499\n", "0.2534644603729248\n", "0.12309236824512482\n", "0.05442012473940849\n", "0.03477747365832329\n", "0.013770624995231628\n", "0.00956854410469532\n", "0.0020953379571437836\n", "0.0014274963177740574\n", "0.00041423551738262177\n" ] } ], "source": [ "opt = torch.optim.SGD(l.parameters(), 1e-1)\n", "for i in range(1000):\n", " inp = torch.randn(20, 3)\n", " t = inp @ target\n", " p = l(inp)\n", " loss = torch.nn.functional.mse_loss(p, t)\n", " opt.zero_grad()\n", " loss.backward()\n", " opt.step()\n", " if i % 100 == 0:\n", " print(loss.item())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It works:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([[ 0.0062, -0.0094, 0.0040],\n", " [-0.0025, 0.0056, -0.0054],\n", " [ 0.0026, 0.0012, -0.0014],\n", " [ 0.0192, -0.0099, 0.0039]], grad_fn=)" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "l.weight - target.t()" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Parameter containing:\n", "tensor([ 6.9609e-05, -1.2957e-04, -1.0454e-04, -2.2778e-04],\n", " requires_grad=True)" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "l.bias" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "\n", "I hope you enjoyed the our little hack for computed parameters.\n", "\n", "We looked at constrained 
parameters, with spectral normalization as an example.\n",
    "We learned how to make our own `Tensor`-ish class using `__torch_function__` and used that (and a bit of exploiting the opportunities to rewire almost anything that Python offers) to make it easy to use parameters computed by `Module`s.\n",
    "\n",
    "Remember that this is a fun hack and for educational purposes only.\n",
    "\n",
    "If you want to take your PyTorch skills to the next level - check out my [workshop offerings](http://mathinf.com/workshops.en.html).\n",
    "\n",
    "I appreciate your feedback."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}