{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "import numpy as np\n", "#import mplcyberpunk\n", "import torch\n", "from torch import nn\n", "from torch.nn import functional as F\n", "from matplotlib import pyplot as plt\n", "from matplotlib import rcParams\n", "\n", "#plt.style.use(\"cyberpunk\")\n", "\n", "#rcParams[\"font.sans-serif\"] = \"Roboto\"\n", "rcParams[\"xtick.labelsize\"] = 14.\n", "rcParams[\"ytick.labelsize\"] = 14.\n", "rcParams[\"axes.labelsize\"] = 14.\n", "rcParams[\"legend.fontsize\"] = 14\n", "rcParams[\"axes.titlesize\"] = 16." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.random.seed(42)\n", "\n", "_ = torch.manual_seed(42)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Auxiliary Functions\n", "\n", "This notebook is a compilation of a few aspects that are \"secondary\" to deep learning models. This provides a quick overview using PyTorch." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exponential Activation Functions\n", "\n", "These activation functions all contain exponential functions, and so vary relatively smoothly with $x$. The general purpose of these functions is to compress output values such that they lie within a certain range, and eventually asymptote/saturate at some point. The sigmoid and tanh functions in particular are classic activation functions, but as you will see below, their derivatives go to zero quite quickly with larger values of $x$.\n", "\n", "### Sigmoid\n", "\n", "$$ \\sigma(x) = \\frac{1}{1 + \\exp(-x)} $$\n", "\n", "Collapses values to the range of [0,1]; used traditionally to mimic neurons firing, although saturates very easily with values of $x$ not near zero and so is less commonly used now between layers, but more for binary classification (0 or 1) or [Bernoulli trial.](https://en.wikipedia.org/wiki/Bernoulli_distribution)\n", "\n", "### Softmax\n", "\n", "$$ \\mathrm{softmax}(x) = \\frac{\\exp(x)}{\\sum \\exp(x)} $$\n", "\n", "Forces the vector to sum to 1 like probabilities of multiple independent events. 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Softplus\n", "\n", "$$ \\mathrm{softplus}(x) = \\frac{1}{\\beta}\\log(1 + \\exp(\\beta x))$$\n", "\n", "A smooth approximation to ReLU, where $\\beta$ controls how sharp the transition is.\n", "\n", "### tanh\n", "\n", "$$ \\tanh(x) = \\frac{\\exp(x) - \\exp(-x)}{\\exp(x) + \\exp(-x)} $$" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "funcs = [torch.sigmoid, F.softmax, F.softmin, F.softplus, torch.tanh, torch.cos]\n", "\n", "fig, axarray = plt.subplots(1, len(funcs), figsize=(16,5.))\n", "\n", "with torch.no_grad():\n", "    for ax, func in zip(axarray, funcs):\n", "        X = torch.arange(-5, 5., step=0.1)\n", "        Y = func(X)\n", "        ax.plot(X, Y, lw=1.5, alpha=1., label=\"$f(x)$\")\n", "        ax.set_title(func.__name__)\n", "        ax.set_xlabel(\"X\")\n", "\n", "# Compute gradients this time\n", "for ax, func in zip(axarray, funcs):\n", "    X = torch.arange(-5, 5., step=0.1)\n", "    X.requires_grad_(True)\n", "    Y = func(X)\n", "    Y.backward(torch.ones(Y.size()))\n", "    grads = X.grad\n", "    ax.plot(X.detach(), grads.numpy(), lw=1.5, alpha=1., label=\"$\\\\nabla f(x)$\", ls=\"--\")\n", "\n", "axarray[0].legend(loc=2)\n", "fig.tight_layout()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Activation Functions\n", "\n", "### Rectified Linear Unit (ReLU)\n", "\n", "$$ \\mathrm{ReLU}(x) = \\max(0, x) $$\n", "\n", "Despite having little statistical interpretation (compared to sigmoid, for example), it works well for most tasks.\n", "\n", "### Exponential Linear Unit (ELU)\n", "\n", "$$ \\mathrm{ELU}(x) = \\max(0, x) + \\min(0, \\alpha (\\exp(x) - 1)) $$\n", "\n", "where $\\alpha$ is a hyperparameter that controls how quickly the negative branch saturates. Unlike ReLU, the gradient is non-zero for small negative values of $x$, and the function only saturates at very negative values.\n", "\n", "### Leaky ReLU\n", "\n", "$$ \\mathrm{LeakyReLU}(x) = \\max(0, x) + \\alpha \\min(0, x) $$\n", "\n", "where $\\alpha$ is a hyperparameter tuned to allow some gradient to backpropagate for negative inputs; with plain ReLU the gradient there would be exactly zero.\n", "\n", "### ReLU6\n", "\n", "$$ \\mathrm{ReLU6}(x) = \\min(\\max(0, x), 6) $$\n", "\n", "### Scaled Exponential Linear Unit (SELU)\n", "\n", "$$ \\mathrm{SELU}(x) = \\lambda \\left( \\max(0, x) + \\min(0, \\alpha (\\exp(x) - 1)) \\right) $$\n", "\n", "where $\\lambda$ and $\\alpha$ are hyperparameters, although they were numerically derived as $\\alpha \\approx 1.6733$ and $\\lambda \\approx 1.0507$\n", "for [self-normalizing neural networks](https://arxiv.org/pdf/1706.02515.pdf); a short numerical demonstration of this behaviour follows below.\n", "\n", "### Gaussian Error Linear Unit (GELU)\n", "\n", "$$ \\mathrm{GELU}(x) = x \\Phi(x) $$\n", "\n", "where $\\Phi(x)$ is the cumulative distribution function of the standard Gaussian distribution. Allows for [self-regularization to a certain extent](https://arxiv.org/pdf/1606.08415.pdf), similar to dropout."
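] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before plotting these functions, here is a small numerical sketch of the self-normalizing behaviour that motivates SELU (the depth, layer width, and batch size below are arbitrary choices for illustration): with LeCun-style weight initialization, activations passed through a stack of linear + SELU layers keep a mean near 0 and a standard deviation near 1." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch of SELU's self-normalizing property: track the mean and standard\n", "# deviation of the activations through a stack of Linear + SELU layers.\n", "x = torch.randn(1024, 256)\n", "print(f\"input:   mean={x.mean().item():+.3f}, std={x.std().item():.3f}\")\n", "\n", "with torch.no_grad():\n", "    for depth in range(1, 6):\n", "        layer = nn.Linear(256, 256)\n", "        # LeCun normal initialization: std = 1 / sqrt(fan_in)\n", "        nn.init.normal_(layer.weight, std=(1.0 / layer.in_features) ** 0.5)\n", "        nn.init.zeros_(layer.bias)\n", "        x = torch.selu(layer(x))\n", "        print(f\"layer {depth}: mean={x.mean().item():+.3f}, std={x.std().item():.3f}\")"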
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "funcs = [F.relu, F.elu, F.leaky_relu, F.relu6, torch.selu, F.gelu]\n", "\n", "fig, axarray = plt.subplots(1, len(funcs), figsize=(18,5.))\n", "\n", "with torch.no_grad():\n", " for ax, func in zip(axarray, funcs):\n", " X = torch.arange(-7, 7., step=0.1)\n", " Y = func(X)\n", " ax.plot(X, Y, lw=1.5, alpha=1., label=\"$f(x)$\")\n", " ax.set_title(func.__name__)\n", " ax.set_xlabel(\"X\")\n", "\n", "for ax, func in zip(axarray, funcs):\n", " X = torch.autograd.Variable(torch.arange(-7, 7., step=0.1), requires_grad=True)\n", " Y = func(X)\n", " Y.backward(torch.ones(Y.size()))\n", " grads = X.grad\n", " ax.plot(X.detach(), grads.numpy(), lw=1.5, alpha=1., label=\"$\\\\nabla f(x)$\", ls=\"--\")\n", " ax.set_ylim([-2., 7.])\n", "\n", "axarray[0].legend(loc=2)\n", "fig.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loss functions\n", "\n", "Designing an appropriate loss function is probably the most important aspect of deep learning: you can be creative about how to encourage your model to learn about the problem. You can find all the loss functions implemented in PyTorch [here](https://pytorch.org/docs/stable/nn.html#loss-functions), and mix and match them to make your model learn what you want it to learn.\n", "\n", "### Mean squared error\n", "\n", "Probably the most commonly used loss in machine learning: individual predictions scale quadratically away from the ground truth, but the actual objective of this function is to learn to reproduce the _mean_ of your data.\n", "\n", "$$ \\mathcal{L} = \\frac{1}{N}\\sqrt{\\sum^I_i(\\hat{y}_i - y_i)^2} $$\n", "\n", "Something worth noting here, is that minimizing the mean squared error is equivalent to maximizing the $\\log$ likelihood in _most_ circumstances assuming normally distributed errors. In other words, predictions with models trained on the MSE loss can be thought of as the maximum likelihood estimate. [Read here](https://www.jessicayung.com/mse-as-maximum-likelihood/) for more details.\n", "\n", "### Kullback-Leibler divergence\n", "\n", "Less commonly encountered in the physical sciences unless you work with some information theory. This is an asymmetric loss function, contrasting the mean squared error, and is useful for estimating ``distances'' between probability distributions. Effectively, this loss function measures the extra amount of information you need for a distribution $q$ to encode another distribution $p$; when the two distributions are exactly equivalent, then $D_\\mathrm{KL}$ is zero.\n", "\n", "$$ D_\\mathrm{KL}(p \\vert \\vert q) = p \\frac{\\log p}{\\log q} $$\n", "\n", "This loss is asymmetric, because $D_\\mathrm{KL}(p \\vert \\vert q) \\neq D_\\mathrm{KL}(q \\vert \\vert p)$. 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To illustrate the asymmetry, we plot a Gaussian $p(x)$ against a bimodal distribution $q(x)$ (an unnormalized sum of two Gaussians) below:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import entropy, norm" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = np.linspace(-5., 5., 1000)\n", "p = norm(loc=-1.5, scale=0.7).pdf(x)\n", "q = norm(loc=2., scale=0.3).pdf(x) + norm(loc=-2.5, scale=1.3).pdf(x)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "\n", "ax.fill_between(x, p, 0., label=\"p(x)\", alpha=0.6)\n", "ax.fill_between(x, q, 0., label=\"q(x)\", alpha=0.6)\n", "fig.legend(loc=\"upper center\");" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The $D_\\mathrm{KL}$ for each direction:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dpq = entropy(p, q)\n", "dqp = entropy(q, p)\n", "\n", "print(f\"D(p||q) = {dpq:.3f}, D(q||p) = {dqp:.3f}\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The way to interpret this is coverage: $D_\\mathrm{KL}(p \\vert \\vert q)$ is smaller than $D_\\mathrm{KL}(q \\vert \\vert p)$ because, if you wanted to express $p$ with $q$, you would do an okay job (at least with respect to the left Gaussian). Conversely, if you wanted to use $p$ to represent $q$ (that is, $D_\\mathrm{KL}(q \\vert \\vert p)$), it would do a poor job of representing the right Gaussian. Another way of looking at it is through their cumulative distribution functions:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cdf_p = norm(loc=-1.5, scale=0.7).cdf(x)\n", "cdf_q = norm(loc=2., scale=0.3).cdf(x) + norm(loc=-2.5, scale=1.3).cdf(x)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "\n", "ax.fill_between(x, cdf_p, 0., label=\"p(x)\", alpha=0.6)\n", "ax.fill_between(x, cdf_q, 0., label=\"q(x)\", alpha=0.6)\n", "fig.legend(loc=\"upper center\");" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The (unnormalized) CDFs show how $p$ places essentially no probability mass at higher values of $x$, leading to a larger $D_\\mathrm{KL}$ in that direction.\n", "\n", "### Binary cross entropy\n", "\n", "This is a special, yet very common, loss for binary targets in $[0, 1]$. It is usually used for classification tasks, but also for predicting pixel intensities (if they fall in the $[0, 1]$ range).\n", "\n", "$$ \\mathcal{L} = -[y \\cdot \\log \\hat{y} + (1 - y) \\cdot \\log(1 - \\hat{y})] $$" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### A note on implementations\n", "\n", "In many deep learning libraries you will often see an \"X with logits\" implementation of some loss function X. For example, the binary cross entropy in PyTorch has both a `BCEWithLogitsLoss` and a `BCELoss` implementation. When possible, you should use the former, as it ensures numerical stability by working with the $\\log$ of very small numbers. For example, when multiplying likelihoods you can lose precision to rounding errors: for $p_a = 10^{-5}$ and $p_b = 10^{-7}$, computing $p_a \\times p_b$ in $\\log_{10}$ space amounts to $(-5) + (-7) = -12$, which preserves precision better than working directly with $10^{-12}$.\n", "\n", "If you use `BCEWithLogitsLoss`, the sigmoid activation is folded into the loss function, so your model's output layer should emit raw logits (i.e. $\\log$ odds) rather than probabilities."
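] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A minimal sketch of why this matters numerically (the logits and targets below are arbitrary illustrative values): computing the loss from probabilities saturates for extreme logits in single precision, while the \"with logits\" version remains well behaved." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Compare BCELoss on probabilities against BCEWithLogitsLoss on raw logits.\n", "# The two are equivalent in exact arithmetic, but the fused version is\n", "# numerically safer: it never materializes probabilities that round to 0 or 1.\n", "logits = torch.tensor([-20., -1., 0., 2., 20.])\n", "targets = torch.tensor([1., 0., 1., 1., 0.])\n", "\n", "bce = nn.BCELoss(reduction=\"none\")\n", "bce_logits = nn.BCEWithLogitsLoss(reduction=\"none\")\n", "\n", "print(bce(torch.sigmoid(logits), targets))\n", "print(bce_logits(logits, targets))"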
] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.18" } }, "nbformat": 4, "nbformat_minor": 4 }