{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "onY4OU_JZUCB" }, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/fonnesbeck/Bios8366/blob/master/notebooks/Section7_3-Bayesian-Neural-Networks.ipynb)\n", "\n", "# Bayesian Neural Networks in PyMC\n", "\n", "Bayesian deep learning combines deep neural networks with probabilistic methods to provide information about the **uncertainty** associated with its predictions. Not only is accounting for prediction uncertainty important for real-world applications, it is also be useful in training. For example, we could train the model specifically on samples it is most uncertain about.\n", "\n", "We can also quantify the uncertainty in our estimates of network weights, which could inform us about the **stability** of the learned representations of the network.\n", "\n", "In classical neural networks, weights are often L2-regularized to avoid overfitting, which corresponds exactly to Gaussian priors over the weight coefficients. We could, however, imagine all kinds of other priors, like spike-and-slab to enforce sparsity (this would be more like using the L1-norm).\n", "\n", "If we wanted to train a network on a new object recognition data set, we could bootstrap the learning by placing informed priors centered around weights retrieved from other pre-trained networks, like [GoogLeNet](https://arxiv.org/abs/1409.4842). \n", "\n", "Additionally, the application of hierarichical modeling allows for the pooling of things that were learned on sub-groups to the overall population. Applied here, individual neural nets can be applied to sub-groups based on sharing information from the overall population. \n", "\n", "Let's generate another simulated classification dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "s1yJfBW3ZUCT" }, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "sns.set(style='ticks')\n", "\n", "import aesara\n", "import aesara.tensor as at\n", "import pymc as pm\n", "from sklearn import datasets\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.preprocessing import scale\n", "\n", "floatX = aesara.config.floatX" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 301 }, "id": "iskzLA1aZUCV", "outputId": "14bb7315-d31f-436f-dc2b-fa64c4ff036a" }, "outputs": [], "source": [ "X, y = datasets.make_moons(noise=0.2, n_samples=1000, random_state=2009)\n", "X = scale(X)\n", "\n", "fig, ax = plt.subplots()\n", "ax.scatter(X[y==0, 0], X[y==0, 1], label='Class 0')\n", "ax.scatter(X[y==1, 0], X[y==1, 1], color='r', label='Class 1')\n", "sns.despine(); ax.legend()\n", "ax.set(xlabel='X1', ylabel='X2', title='Toy binary classification data set');" ] }, { "cell_type": "markdown", "metadata": { "id": "ccVcdTLfZUCW" }, "source": [ "We first create training and test sets using scikit-learn, and convert the training set to Aesara tensors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "B6qXiumHZUCX" }, "outputs": [], "source": [ "X = X.astype(floatX)\n", "y = y.astype(floatX)\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3)" ] }, { "cell_type": "markdown", "metadata": { "id": "DsWTaWrmZUCX" }, "source": [ "Using standard normal deviates for initial values will facilitate convergence." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "61Ro4ftIZUCY" }, "outputs": [], "source": [ "n_hidden = 5\n", "\n", "init_1 = np.random.randn(X.shape[1], n_hidden).astype(floatX)\n", "init_2 = np.random.randn(n_hidden, n_hidden).astype(floatX)\n", "init_out = np.random.randn(n_hidden).astype(floatX)" ] }, { "cell_type": "markdown", "metadata": { "id": "J5NL_8PTZUCY" }, "source": [ "Here we will use 2 hidden layers with 5 neurons each which is sufficient for such a simple problem." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9M-To4QuZUCZ" }, "outputs": [], "source": [ "with pm.Model() as neural_network:\n", " # Mutable input data\n", " X_data = pm.Data('X_data', X_train, mutable=True)\n", " \n", " # Weights from input to hidden layer\n", " weights_in_1 = pm.Normal('w_in_1', 0, sigma=1, \n", " shape=(X.shape[1], n_hidden), \n", " initval=init_1)\n", "\n", " # Weights from 1st to 2nd layer\n", " weights_1_2 = pm.Normal('w_1_2', 0, sigma=1, \n", " shape=(n_hidden, n_hidden), \n", " initval=init_2)\n", "\n", " # Weights from hidden layer to output\n", " weights_2_out = pm.Normal('w_2_out', 0, sigma=1, \n", " shape=(n_hidden,), \n", " initval=init_out)\n", "\n", " # Build neural-network using tanh activation function\n", " act_1 = pm.math.tanh(pm.math.dot(X_data, \n", " weights_in_1))\n", " act_2 = pm.math.tanh(pm.math.dot(act_1, \n", " weights_1_2))\n", " act_out = pm.math.sigmoid(pm.math.dot(act_2, \n", " weights_2_out))\n", "\n", " # Binary classification -> Bernoulli likelihood\n", " out = pm.Bernoulli('out', \n", " act_out,\n", " observed=y_train,\n", " )" ] }, { "cell_type": "markdown", "metadata": { "id": "WyL-1WAmZUCa" }, "source": [ "We could use Markov chain Monte Carlo sampling, which works pretty well in this case, but this will become very slow as we scale our model up to deeper architectures with more layers.\n", "\n", "Instead, we will use the the ADVI variational inference algorithm. This is much faster and will scale better. Note, that this is a mean-field approximation so we ignore correlations in the posterior." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 109 }, "id": "PhGo_5RYZUCb", "outputId": "e0fc3294-a7ac-4eb7-9004-244d218e2923" }, "outputs": [], "source": [ "with neural_network:\n", " approx = pm.fit(n=30_000)" ] }, { "cell_type": "markdown", "metadata": { "id": "yFS1C7jQZUCc" }, "source": [ "As samples are more convenient to work with, we can very quickly draw samples from the variational approximation using the `sample` method." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "QmNC86ZwZUCc" }, "outputs": [], "source": [ "trace = approx.sample(draws=5000)" ] }, { "cell_type": "markdown", "metadata": { "id": "1RzhgRMMZUCd" }, "source": [ "Plotting the objective function (ELBO) we can see that the optimization slowly improves the fit over time." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 302 }, "id": "ocJT_y9XZUCd", "outputId": "57d0957d-16d6-45ee-90db-1f4e6d0fd073" }, "outputs": [], "source": [ "plt.plot(approx.hist, alpha=.3)\n", "plt.ylabel('ELBO')\n", "plt.xlabel('iteration');" ] }, { "cell_type": "markdown", "metadata": { "id": "ztRtgFJFZUCd" }, "source": [ "Now that we trained our model, lets predict on the hold-out set using a posterior predictive check (PPC). 
" ] }, { "cell_type": "markdown", "metadata": { "id": "qg0O3kAZh8qH" }, "source": [ "Next, switch out the training observations for the testing observations and use `sample_posterior_predictive()` to generate new data (in this case class predictions) from the posterior (sampled from the variational estimation)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 37 }, "id": "HA4WMcFJefMi", "outputId": "cc637d8a-c259-4bd0-866f-6c15a879436d" }, "outputs": [], "source": [ "with neural_network:\n", " pm.set_data({\"X_data\": X_test})\n", " post_pred = pm.sample_posterior_predictive(trace)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The mean of the samples for each observation is an estimate of the underlying probability of being in class 1." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "n5je4HYFerlF" }, "outputs": [], "source": [ "y_pred = post_pred.posterior_predictive['out'].values.squeeze().mean(0) > 0.5" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "tc0_1m0cZUCe", "outputId": "18a1b07b-d60b-4ba8-bc71-8cabd3e6982d" }, "outputs": [], "source": [ "print('Accuracy = {:0.1f}%'.format((y_test == y_pred).mean() * 100))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 301 }, "id": "0x3-AfQUZUCe", "outputId": "ff1bf96b-3eba-492e-ba8c-9698c9aea504" }, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "ax.scatter(X_test[y_pred==0, 0], X_test[y_pred==0, 1])\n", "ax.scatter(X_test[y_pred==1, 0], X_test[y_pred==1, 1], color='r')\n", "sns.despine()\n", "ax.set(title='Predicted labels in testing set', xlabel='X1', ylabel='X2');" ] }, { "cell_type": "markdown", "metadata": { "id": "9MTz08DeZUCf" }, "source": [ "Let's look at what the classifier has learned.\n", "\n", "For this, we evaluate the class probability predictions on a grid over the whole input space." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "RtS-eaYhZUCf" }, "outputs": [], "source": [ "grid = pm.floatX(np.mgrid[-3:3:100j,-3:3:100j])\n", "grid_2d = grid.reshape(2, -1).T" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 37 }, "id": "e7y5G2XBgNa_", "outputId": "7ea65b9b-51a3-4eb8-8237-20028078fdc8" }, "outputs": [], "source": [ "with neural_network:\n", " pm.set_data({\"X_data\": grid_2d})\n", " grid_pred = pm.sample_posterior_predictive(trace)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SsceEWv6ZUCf" }, "outputs": [], "source": [ "ppc = grid_pred.posterior_predictive['out'].squeeze().values" ] }, { "cell_type": "markdown", "metadata": { "id": "jLqaq5n_ZUCg" }, "source": [ "The result is a probability surface corresponding to the model predictions." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 561 }, "id": "gAj8znu7ZUCg", "outputId": "e15f454f-4865-49c7-b84d-02f683a49e69" }, "outputs": [], "source": [ "cmap = sns.diverging_palette(250, 12, s=85, l=25, as_cmap=True)\n", "fig, ax = plt.subplots(figsize=(16, 9))\n", "contour = ax.contourf(grid[0], grid[1], ppc.mean(axis=0).reshape(100, 100), cmap=cmap)\n", "ax.scatter(X_test[y_pred==0, 0], X_test[y_pred==0, 1])\n", "ax.scatter(X_test[y_pred==1, 0], X_test[y_pred==1, 1], color='r')\n", "cbar = plt.colorbar(contour, ax=ax)\n", "_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X1', ylabel='X2');\n", "cbar.ax.set_ylabel('Posterior predictive mean probability of class label = 0');" ] }, { "cell_type": "markdown", "metadata": { "id": "nk1TNGvuZUCg" }, "source": [ "However, unlike a classical neural network, we can also look at the standard deviation of the posterior predictive to get a sense for the uncertainty in our predictions. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 561 }, "id": "EFDskHd1ZUCg", "outputId": "59af9d74-92ab-40ad-9d3e-73155988ce8a" }, "outputs": [], "source": [ "cmap = sns.cubehelix_palette(light=1, as_cmap=True)\n", "fig, ax = plt.subplots(figsize=(16, 9))\n", "contour = ax.contourf(grid[0], grid[1], ppc.std(axis=0).reshape(100, 100), cmap=cmap)\n", "ax.scatter(X_test[y_pred==0, 0], X_test[y_pred==0, 1])\n", "ax.scatter(X_test[y_pred==1, 0], X_test[y_pred==1, 1], color='r')\n", "cbar = plt.colorbar(contour, ax=ax)\n", "_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');\n", "cbar.ax.set_ylabel('Uncertainty (posterior predictive standard deviation)');" ] }, { "cell_type": "markdown", "metadata": { "id": "GFqNEkFYZUCg" }, "source": [ "---\n", "## References\n", "\n", "T. Wiecki and M. Kochurov. (2017) [Variational Inference: Bayesian Neural Networks](http://docs.pymc.io/notebooks/bayesian_neural_network_advi.html)\n", "\n", "D. Rodriguez. (2013) [Basic [1 hidden layer] neural network on Python](http://danielfrg.com/blog/2013/07/03/basic-neural-network-python/).\n", "\n", "D. Britz. (2015) [Implementing a Neural Network from Scratch](http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/)" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "Copy of Section7_3-Bayesian-Neural-Networks.ipynb", "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.9" } }, "nbformat": 4, "nbformat_minor": 4 }