{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "# A tutorial on 'Soft weight-sharing for Neural Network compression' \n", "\n", "## Introduction\n", "\n", "Recently, compression of neural networks has been a much-discussed issue. One reason for this is the desire to run and store them on mobile devices such as smartphones, robots or [Rasberry Pis](https://github.com/samjabrahams/tensorflow-on-raspberry-pi). Another problem is the energy consumption of a multimillion parameter network. When inference is run at scale the costs can quickly add up.\n", "\n", "Compressing a given neural network is a challenging task. While the idea of storing weights with low precision has been around for some time, ideas to store the weight matrix in a different storage format are recent. \n", "The latter proposal has led to the most successful compression scheme so far. \n", "Under the assumption that there is great redundancy within the weights, one can prune most of them. Consequently, only non-zero weights are being stored. These weights can further be compressed by qunatizing them.\n", "However, it is not clear how to infer the redundant weights given a trained neural network or to quantize the remaining weights. In the following, we will identify the problem more clearly and review one recent attempt to tackle it. For the following illustrations, I work with MNIST and LeNet-300-100 a simple 2-fully connected neural network with 300 and 100 units.\n", "\n", "Usually, when training an unregularized neural network, the distribution of weights looks somewhat like a Normal distribution centered at zero. For the proposed storage format this is not ideal. We would like a distribution that is sharply (ideally delta) peaked around some values with significant mass in the zero peak.\n", "\n", "Weight distribution while common training | | Desired distribution \n", ":-------------------------:|:------------:|:-------------------------:\n", "![](./figures/han_pretraining.gif \"title-1\")|| ![](./figures/han_clustering.png \"title-1\")\n", "\n", "Following we will shortly review how [Han et. al. (2016)](https://arxiv.org/abs/1510.00149), the proposers of this compression format and current state-of-the-art in compression, tackle the problem. \n", "The authors use a multistage pipeline: (i) re-training a pre-trained network with Gaussian prior aka L2-norm on the weights (ii) repeatedly cutting off all weights around a threshold close to zero and after that continue training with L2-norm, and (iii) clustering all weights and retraining again the cluster means.\n", "\n", "(i) Re-training with L2 regularization | (ii) Repetitiv Cutting and training \n", " :-------------------------:|:-------------------------:\n", "![](./figures/han_retraining.gif \"title-1\")|![](./figures/han_cutandtrain.gif \"title-1\")\n", "**(ii) Final stage before clustering** | **(iii) Clustering**\n", "![](./figures/han_final_cut.png \"title-1\")|![](./figures/han_clustering.png \"title-1\")\n", "\n", "\n", "Note that this pipeline is not a differentiable function. Furthermore, pruning and quantization are distinct stages. \n", "\n", "In contrast, we propose to sparsify and cluster weights in one differentiable retraining procedure. More precisely, we train the network weights with a Gaussian mixture model prior. \n", "This is an instance of an empirical Bayesian prior because the parameters in the prior are being learned as well. 
\n", "With this prior present, weights will naturally cluster together since that will allow the gaussian mixture to lower the variance and thus achieve higher probability. \n", "It is important, to carefully initialize the learning procedure for those priors because one might end up in a situation where the weights \"chase\" the mixture and the mixture the weights. \n", "\n", "Note that, even though compression seems to be a core topic of information theory, so far there has been little attention on this angle on things. While in our paper the emphasis lays on this information theoretic view, here we will restrict ourselves to a somewhat practical one.\n", "\n", "Following, we give a tutorial that shall serve as a practical guide to implementing empirical priors and in particular a Gaussian Mixture with an Inverse-Gamma prior on the variances. It is divided into 3 parts.\n", "\n", "\n", "* **PART 1:** Pretraining a Neural Network\n", "\n", "* **PART 2:** Re-train the network with an empirical Gaussian Mixture Prior with Inverse-Gamma hyper-prior on the variances. \n", "\n", "* **PART 3:** Post-process the re-trained network weights\n", "\n", "## PART 1: Pretraining a Neural Network \n", "\n", "\n", "First of all, we need a parameter heavy network to compress. In this first part of the tutorial, we train a simple -2 convolutional, 2 fully connected layer- neural network on MNIST with 642K paramters. \n", "___________________________\n", "\n", "We start by loading some essential libraries." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using Theano backend.\n", "Using gpu device 0: GeForce GTX TITAN X (CNMeM is enabled with initial size: 70.0% of memory, cuDNN 5103)\n", "/home/karen/anaconda2/lib/python2.7/site-packages/theano/sandbox/cuda/__init__.py:600: UserWarning: Your cuDNN version is more recent than the one Theano officially supports. If you see any problems, try updating Theano or downgrading cuDNN to version 5.\n", " warnings.warn(warn)\n" ] } ], "source": [ "from __future__ import print_function\n", "import numpy as np\n", "%matplotlib inline\n", "import keras\n", "from IPython import display" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "______________________\n", "Following, we load the MNIST dataset into memory." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Successfully loaded 60000 train samples and 10000 test samples.\n" ] } ], "source": [ "from dataset import mnist\n", "\n", "[X_train, X_test], [Y_train, Y_test], [img_rows, img_cols], nb_classes = mnist()" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ " ___________________________________________________\n", "\n", "Next, we choose a model. We decide in favor of a classical 2 convolutional, 2 fully connected layer network with ReLu activation." 
] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "____________________________________________________________________________________________________\n", "Layer (type) Output Shape Param # Connected to \n", "====================================================================================================\n", "input (InputLayer) (None, 1, 28, 28) 0 \n", "____________________________________________________________________________________________________\n", "convolution2d_1 (Convolution2D) (None, 25, 12, 12) 650 input[0][0] \n", "____________________________________________________________________________________________________\n", "convolution2d_2 (Convolution2D) (None, 50, 5, 5) 11300 convolution2d_1[0][0] \n", "____________________________________________________________________________________________________\n", "flatten_1 (Flatten) (None, 1250) 0 convolution2d_2[0][0] \n", "____________________________________________________________________________________________________\n", "dense_1 (Dense) (None, 500) 625500 flatten_1[0][0] \n", "____________________________________________________________________________________________________\n", "activation_1 (Activation) (None, 500) 0 dense_1[0][0] \n", "____________________________________________________________________________________________________\n", "dense_2 (Dense) (None, 10) 5010 activation_1[0][0] \n", "____________________________________________________________________________________________________\n", "error_loss (Activation) (None, 10) 0 dense_2[0][0] \n", "====================================================================================================\n", "Total params: 642,460\n", "Trainable params: 642,460\n", "Non-trainable params: 0\n", "____________________________________________________________________________________________________\n" ] } ], "source": [ "from keras import backend as K\n", "\n", "from keras.models import Model\n", "from keras.layers import Input, Dense, Activation, Flatten, Convolution2D\n", "\n", "# We configure the input here to match the backend. If properly done this is a lot faster. \n", "if K._BACKEND == \"theano\":\n", " InputLayer = Input(shape=(1, img_rows, img_cols), name=\"input\")\n", "elif K._BACKEND == \"tensorflow\":\n", " InputLayer = Input(shape=(img_rows, img_cols,1), name=\"input\")\n", "\n", "# A classical architecture ...\n", "# ... with 3 convolutional layers,\n", "Layers = Convolution2D(25, 5, 5, subsample = (2,2), activation = \"relu\")(InputLayer)\n", "Layers = Convolution2D(50, 3, 3, subsample = (2,2), activation = \"relu\")(Layers)\n", "# ... and 2 fully connected layers.\n", "Layers = Flatten()(Layers)\n", "Layers = Dense(500)(Layers)\n", "Layers = Activation(\"relu\")(Layers)\n", "Layers = Dense(nb_classes)(Layers)\n", "PredictionLayer = Activation(\"softmax\", name =\"error_loss\")(Layers)\n", "\n", "# Fianlly, we create a model object:\n", "model = Model(input=[InputLayer], output=[PredictionLayer])\n", "\n", "model.summary()" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "---------------------------------------------------------------------------------------------------\n", "Next, we train the network for 100 epochs with the Adam optimizer. Let's see where our model gets us..." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "from keras import optimizers\n", "\n", "epochs = 100\n", "batch_size = 256\n", "opt = optimizers.Adam(lr=0.001)\n", "\n", "model.compile(optimizer= opt,\n", " loss = {\"error_loss\": \"categorical_crossentropy\",},\n", " metrics=[\"accuracy\"])" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test accuracy: 0.9908\n" ] } ], "source": [ "model.fit({\"input\": X_train, }, {\"error_loss\": Y_train},\n", " nb_epoch = epochs, batch_size = batch_size,\n", " verbose = 0, validation_data=(X_test, Y_test))\n", "\n", "score = model.evaluate(X_test, Y_test, verbose=0)\n", "print('Test accuracy:', score[1])" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "***Note: *** The model should end up with approx. 0.9% error rate.\n", "___________________________________\n", "Fianlly, we save the model in case we need to reload it later, e.g. if you want to play around with the code ..." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "keras.models.save_model(model, \"./my_pretrained_net\")" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "________________________________________\n", "__________________________________________\n", "\n", "## PART 2: Re-training the network with an empirical prior\n", "\n", "_____________________________________________________\n", "\n", "First of all, we load our pretrained model " ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "pretrained_model = keras.models.load_model(\"./my_pretrained_net\")" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Following, we will initialize a 16 component Gaussian mixture as our empirical prior. We will learn all parameters in the prior but the mean and the mixing proportion of the zero component, we set $\\mu_0=0$ and $\\pi_0=0.99$, respectively. Furthermore, we put a Gamma hyper-prior on the precisions of the Gaussian mixture. We set the mean such that the expected variance is $0.02$. The variance of the hyper-prior is an estimate of how strongly the variance is regularized. Note that, the variance of the zero component has much more data (i.e. weight) evidence than the other components thus we put a stronger prior on it. Somewhat counterintuitive we found it beneficial to have wider and thus noisier expected variances. 
" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "from empirical_priors import GaussianMixturePrior\n", "from extended_keras import extract_weights\n", "\n", "pi_zero = 0.99\n", "\n", "RegularizationLayer = GaussianMixturePrior(nb_components=16, \n", " network_weights=extract_weights(model),\n", " pretrained_weights=pretrained_model.get_weights(), \n", " pi_zero=pi_zero,\n", " name=\"complexity_loss\")(Layers)\n", "\n", "model = Model(input = [InputLayer], output = [PredictionLayer, RegularizationLayer])" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "We optimize the network again with ADAM, the learning rates for the network parameters, means, variances and mixing proportions may differ though." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "import optimizers \n", "from extended_keras import identity_objective\n", "\n", "tau = 0.003\n", "N = X_train.shape[0] \n", "\n", "opt = optimizers.Adam(lr = [5e-4,1e-4,3e-3,3e-3], #[unnamed, means, log(precition), log(mixing proportions)]\n", " param_types_dict = ['means','gammas','rhos'])\n", "\n", "model.compile(optimizer = opt,\n", " loss = {\"error_loss\": \"categorical_crossentropy\", \"complexity_loss\": identity_objective},\n", " loss_weights = {\"error_loss\": 1. , \"complexity_loss\": tau/N},\n", " metrics = ['accuracy'])" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "We train our network for 30 epochs, each taking about 45s. You can watch the progress yourself. At each epoch, we compare the original weight distribution (histogram top) to the current distribution (log-scaled histogram right). The joint scatter plot in the middle shows how each weight changed.\n", "\n", "*Note* that we had to scale the histogram logarithmically otherwise it would be little informative due to the zero spike." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "from extended_keras import VisualisationCallback\n", "\n", "epochs = 30\n", "model.fit({\"input\": X_train,},\n", " {\"error_loss\" : Y_train, \"complexity_loss\": np.zeros((N,1))},\n", " nb_epoch = epochs,\n", " batch_size = batch_size,\n", " verbose = 1., callbacks=[VisualisationCallback(model,X_test,Y_test, epochs)])\n", "\n", "display.clear_output()" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "display.Image(url='./figures/retraining.gif')" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## PART 3: Post-processing\n", "\n", "Now, the only thing that is left to do is setting each weight to the mean of the component that takes most responsibility for it i.e. quantising the weights. 
\n" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "from helpers import discretesize\n", "\n", "retrained_weights = np.copy(model.get_weights())\n", "compressed_weights = np.copy(model.get_weights())\n", "compressed_weights[:-3] = discretesize(compressed_weights, pi_zero = pi_zero)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us compare the accuracy of the reference, the retrained and the post-processed network." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "MODEL ACCURACY\n", "Reference Network: 0.9908\n", "Retrained Network: 0.9903\n", "Post-processed Network: 0.9902\n" ] } ], "source": [ "print(\"MODEL ACCURACY\")\n", "score = pretrained_model.evaluate({'input': X_test, },{\"error_loss\" : Y_test,}, verbose=0)[1]\n", "print(\"Reference Network: %0.4f\" %score)\n", "score = model.evaluate({'input': X_test, },{\"error_loss\" : Y_test, \"complexity_loss\": Y_test,}, verbose=0)[3]\n", "print(\"Retrained Network: %0.4f\" %score)\n", "model.set_weights(compressed_weights)\n", "score = model.evaluate({'input': X_test, },{\"error_loss\" : Y_test, \"complexity_loss\": Y_test,}, verbose=0)[3]\n", "print(\"Post-processed Network: %0.4f\" %score)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Finally let us see how many weights have been pruned." ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Non-zero weights: 7.43 %\n" ] } ], "source": [ "from helpers import special_flatten\n", "weights = special_flatten(compressed_weights[:-3]).flatten()\n", "print(\"Non-zero weights: %0.2f %%\" % (100.*np.count_nonzero(weights)/ weights.size) )" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "As we can see in this naive implementation we got rid of 19 out of 20 weights. Furthermore note that we quantize weights with only 16 cluster means (aka 4 bit indexes). \n", "\n", "For better results (up to 0.5%) one may anneal $\\tau$, learn the mixing proportion for the zero spike with a beta prior on it for example and ideally optimize with some hyperparamter optimization of choice such as spearmint (I also wrote some example code for deep learning and spearmint).\n", "\n", "We finish this tutorial with a series of histograms showing the results of our procedure." 
] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from helpers import save_histogram\n", "\n", "save_histogram(pretrained_model.get_weights(),save=\"figures/reference\")\n", "save_histogram(retrained_weights,save=\"figures/retrained\")\n", "save_histogram(compressed_weights,save=\"figures/post-processed\")" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "|Weight distribution before retraining | Weight distribution after retraining| Weight distribution after post-processing \n", ":-------------------------:|:-------------------------:|:------------:|:-------------------------:\n", "histogram|![](./figures/reference.png)|| ![](./figures/post-processed.png)\n", "log-scaled histogram|![](./figures/reference_log.png)|| ![](./figures/post-processed_log.png)\n", "_______________________________\n", "### *Reference*\n", "\n", "The paper \"Soft weight-sharing for Neural Network compression\" has been accepted to ICLR 2017.\n", "\n", "\n", " @inproceedings{ullrich2017soft,\n", " title={Soft Weight-Sharing for Neural Network Compression},\n", " author={Ullrich, Karen and Meeds, Edward and Welling, Max},\n", " booktitle={ICLR 2017},\n", " year={2017}\n", " }" ] } ], "metadata": { "anaconda-cloud": {}, "celltoolbar": "Hide code", "kernelspec": { "display_name": "Python [Root]", "language": "python", "name": "Python [Root]" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.11" } }, "nbformat": 4, "nbformat_minor": 0 }