{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Why you need a good init" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To understand why initialization is important in a neural net, we'll focus on the basic operation you have there: matrix multiplications. So let's just take a vector `x`, and a matrix `a` initiliazed randomly, then multiply them 100 times (as if we had 100 layers). " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Jump_to lesson 9 video](https://course19.fast.ai/videos/?lesson=9&t=1132)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = torch.randn(512)\n", "a = torch.randn(512,512)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for i in range(100): x = a @ x" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(tensor(nan), tensor(nan))" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x.mean(),x.std()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The problem you'll get with that is activation explosion: very soon, your activations will go to nan. We can even ask the loop to break when that first happens:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = torch.randn(512)\n", "a = torch.randn(512,512)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for i in range(100): \n", " x = a @ x\n", " if x.std() != x.std(): break" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "28" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "i" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It only takes 27 multiplications! On the other hand, if you initialize your activations with a scale that is too low, then you'll get another problem:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = torch.randn(512)\n", "a = torch.randn(512,512) * 0.01" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for i in range(100): x = a @ x" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(tensor(0.), tensor(0.))" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x.mean(),x.std()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, every activation vanished to 0. So to avoid that problem, people have come with several strategies to initialize their weight matices, such as:\n", "- use a standard deviation that will make sure x and Ax have exactly the same scale\n", "- use an orthogonal matrix to initialize the weight (orthogonal matrices have the special property that they preserve the L2 norm, so x and Ax would have the same sum of squares in that case)\n", "- use [spectral normalization](https://arxiv.org/pdf/1802.05957.pdf) on the matrix A (the spectral norm of A is the least possible number M such that `torch.norm(A@x) <= M*torch.norm(x)` so dividing A by this M insures you don't overflow. You can still vanish with this)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The magic number for scaling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we will focus on the first one, which is the Xavier initialization. It tells us that we should use a scale equal to `1/math.sqrt(n_in)` where `n_in` is the number of inputs of our matrix." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Jump_to lesson 9 video](https://course19.fast.ai/videos/?lesson=9&t=1273)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import math" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = torch.randn(512)\n", "a = torch.randn(512,512) / math.sqrt(512)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for i in range(100): x = a @ x" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(tensor(-0.0171), tensor(0.9270))" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x.mean(),x.std()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And indeed it works. Note that this magic number isn't very far from the 0.01 we had earlier." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.044194173824159216" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "1/ math.sqrt(512)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But where does it come from? It's not that mysterious if you remember the definition of the matrix multiplication. When we do `y = a @ x`, the coefficients of `y` are defined by\n", "\n", "$$y_{i} = a_{i,0} x_{0} + a_{i,1} x_{1} + \\cdots + a_{i,n-1} x_{n-1} = \\sum_{k=0}^{n-1} a_{i,k} x_{k}$$\n", "\n", "or in code:\n", "```\n", "y[i] = sum([c*d for c,d in zip(a[i], x)])\n", "```\n", "\n", "Now at the very beginning, our `x` vector has a mean of roughly 0. and a standard deviation of roughly 1. (since we picked it that way)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(tensor(0.0144), tensor(1.0207))" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = torch.randn(512)\n", "x.mean(), x.std()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "NB: This is why it's extremely important to normalize your inputs in Deep Learning, the intialization rules have been designed with inputs that have a mean 0. and a standard deviation of 1.\n", "\n", "If you need a refresher from your statistics course, the mean is the sum of all the elements divided by the number of elements (a basic average). The standard deviation represents if the data stays close to the mean or on the contrary gets values that are far away. It's computed by the following formula:\n", "\n", "$$\\sigma = \\sqrt{\\frac{1}{n}\\left[(x_{0}-m)^{2} + (x_{1}-m)^{2} + \\cdots + (x_{n-1}-m)^{2}\\right]}$$\n", "\n", "where m is the mean and $\\sigma$ (the greek letter sigma) is the standard deviation. Here we have a mean of 0, so it's just the square root of the mean of x squared.\n", "\n", "If we go back to `y = a @ x` and assume that we chose weights for `a` that also have a mean of 0, we can compute the standard deviation of `y` quite easily. Since it's random, and we may fall on bad numbers, we repeat the operation 100 times." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(0.01184942677617073, 516.7942666625977)" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "mean,sqr = 0.,0.\n", "for i in range(100):\n", " x = torch.randn(512)\n", " a = torch.randn(512, 512)\n", " y = a @ x\n", " mean += y.mean().item()\n", " sqr += y.pow(2).mean().item()\n", "mean/100,sqr/100" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that looks very close to the dimension of our matrix 512. And that's no coincidence! When you compute y, you sum 512 product of one element of a by one element of x. So what's the mean and the standard deviation of such a product? We can show mathematically that as long as the elements in `a` and the elements in `x` are independent, the mean is 0 and the std is 1. This can also be seen experimentally:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(0.013266791818584533, 0.9896199178691623)" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "mean,sqr = 0.,0.\n", "for i in range(10000):\n", " x = torch.randn(1)\n", " a = torch.randn(1)\n", " y = a*x\n", " mean += y.item()\n", " sqr += y.pow(2).item()\n", "mean/10000,sqr/10000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we sum 512 of those things that have a mean of zero, and a mean of squares of 1, so we get something that has a mean of 0, and mean of square of 512, hence `math.sqrt(512)` being our magic number. If we scale the weights of the matrix `a` and divide them by this `math.sqrt(512)`, it will give us a `y` of scale 1, and repeating the product has many times as we want won't overflow or vanish." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding ReLU in the mix" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can reproduce the previous experiment with a ReLU, to see that this time, the mean shifts and the standard deviation becomes 0.5. This time the magic number will be `math.sqrt(2/512)` to properly scale the weights of the matrix." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(0.313308998029304, 0.4902977162997965)" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "mean,sqr = 0.,0.\n", "for i in range(10000):\n", " x = torch.randn(1)\n", " a = torch.randn(1)\n", " y = a*x\n", " y = 0 if y < 0 else y.item()\n", " mean += y\n", " sqr += y ** 2\n", "mean/10000,sqr/10000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can double check by running the experiment on the whole matrix product." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(9.005213165283203, 257.3701454162598)" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "mean,sqr = 0.,0.\n", "for i in range(100):\n", " x = torch.randn(512)\n", " a = torch.randn(512, 512)\n", " y = a @ x\n", " y = y.clamp(min=0)\n", " mean += y.mean().item()\n", " sqr += y.pow(2).mean().item()\n", "mean/100,sqr/100" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or that scaling the coefficient with the magic number gives us a scale of 1." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(0.5643280637264252, 1.0055165421962737)" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "mean,sqr = 0.,0.\n", "for i in range(100):\n", " x = torch.randn(512)\n", " a = torch.randn(512, 512) * math.sqrt(2/512)\n", " y = a @ x\n", " y = y.clamp(min=0)\n", " mean += y.mean().item()\n", " sqr += y.pow(2).mean().item()\n", "mean/100,sqr/100" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The math behind is a tiny bit more complex, and you can find everything in the [Kaiming](https://arxiv.org/abs/1502.01852) and the [Xavier](http://proceedings.mlr.press/v9/glorot10a.html) paper but this gives the intuition behing those results." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }