{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook explains how to add batch normalization to VGG. The code shown here is implemented in [vgg_bn.py](https://github.com/fastai/courses/blob/master/deeplearning1/nbs/vgg16bn.py), and there is a version of ``vgg_ft`` (our fine tuning function) with batch norm called ``vgg_ft_bn`` in [utils.py](https://github.com/fastai/courses/blob/master/deeplearning1/nbs/utils.py)." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using gpu device 0: Tesla K80 (CNMeM is disabled, cuDNN 5103)\n", "/home/ubuntu/anaconda2/lib/python2.7/site-packages/theano/sandbox/cuda/__init__.py:600: UserWarning: Your cuDNN version is more recent than the one Theano officially supports. If you see any problems, try updating Theano or downgrading cuDNN to version 5.\n", " warnings.warn(warn)\n" ] } ], "source": [ "from theano.sandbox import cuda" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using Theano backend.\n" ] } ], "source": [ "%matplotlib inline\n", "import utils; reload(utils)\n", "from utils import *\n", "from __future__ import print_function, division" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# The problem, and the solution" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The problem" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The problem that we faced in the lesson 3 is that when we wanted to add batch normalization, we initialized *all* the dense layers of the model to random weights, and then tried to train them with our cats v dogs dataset. But that's a lot of weights to initialize to random - out of 134m params, around 119m are in the dense layers! Take a moment to think about why this is, and convince yourself that dense layers are where most of the weights will be. Also, think about whether this implies that most of the *time* will be spent training these weights. What do you think?\n", "\n", "Trying to train 120m params using just 23k images is clearly an unreasonable expectation. The reason we haven't had this problem before is that the dense layers were not random, but were trained to recognize imagenet categories (other than the very last layer, which only has 8194 params)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The solution" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The solution, obviously enough, is to add batch normalization to the VGG model! To do so, we have to be careful - we can't just insert batchnorm layers, since their parameters (*gamma* - which is used to multiply by each activation, and *beta* - which is used to add to each activation) will not be set correctly. Without setting these correctly, the new batchnorm layers will normalize the previous layer's activations, meaning that the next layer will receive totally different activations to what it would have without new batchnorm layer. And that means that all the pre-trained weights are no longer of any use!\n", "\n", "So instead, we need to figure out what beta and gamma to choose when we insert the layers. The answer to this turns out to be pretty simple - we need to calculate what the mean and standard deviation of that activations for that layer are when calculated on all of imagenet, and then set beta and gamma to these values. 
That means that the new batchnorm layer will normalize the data with the mean and standard deviation, and then immediately un-normalize the data using the beta and gamma parameters we provide. So the output of the batchnorm layer will be identical to its input - which means that all the pre-trained weights will continue to work just as well as before.\n", "\n", "The benefit of this is that when we wish to fine-tune our own networks, we will have all the benefits of batch normalization (higher learning rates, more resilient training, and less need for dropout) plus all the benefits of a pre-trained network." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To calculate the mean and standard deviation of the activations on imagenet, we need to download imagenet. You can download imagenet from http://www.image-net.org/download-images . The file you want is the one titled **Download links to ILSVRC2013 image data**. You'll need to request access from the imagenet admins for this, although it seems to be an automated system - I've always found that access is provided instantly. Once you're logged in and have gone to that page, look for the **CLS-LOC dataset** section. Both training and validation images are available, and you should download both. There's not much reason to download the test images, however.\n", "\n", "Note that this will not be the entire imagenet archive, but just the 1000 categories that are used in the annual competition. Since that's what VGG16 was originally trained on, that seems like a good choice - especially since the full dataset is 1.1 terabytes, whereas the 1000 category dataset is 138 gigabytes." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Adding batchnorm to Imagenet" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sample" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As per usual, we create a sample so we can experiment more rapidly." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%pushd data/imagenet\n", "%cd train" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%mkdir ../sample\n", "%mkdir ../sample/train\n", "%mkdir ../sample/valid\n", "\n", "from shutil import copyfile\n", "\n", "# create a matching class directory structure inside the sample\n", "g = glob('*')\n", "for d in g: \n", "    os.mkdir('../sample/train/'+d)\n", "    os.mkdir('../sample/valid/'+d)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# paths look like 'class_dir/img.JPEG', so copying preserves the class subdirectory\n", "g = glob('*/*.JPEG')\n", "shuf = np.random.permutation(g)\n", "for i in range(25000): copyfile(shuf[i], '../sample/train/' + shuf[i])" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/data/jhoward/imagenet/valid\n", "/data/jhoward/imagenet\n" ] } ], "source": [ "%cd ../valid\n", "\n", "g = glob('*/*.JPEG')\n", "shuf = np.random.permutation(g)\n", "for i in range(5000): copyfile(shuf[i], '../sample/valid/' + shuf[i])\n", "\n", "%cd .."
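] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A quick sanity check (a sketch - it assumes we're back in the imagenet root after the ``%cd ..`` above): the sample should now contain 25,000 training and 5,000 validation images." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "len(glob('sample/train/*/*.JPEG')), len(glob('sample/valid/*/*.JPEG'))"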
] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%mkdir sample/results" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%popd" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We set up our paths, data, and labels in the usual way. Note that we don't try to read all of Imagenet into memory! We only load the sample into memory." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [], "source": [ "sample_path = 'data/jhoward/imagenet/sample/'\n", "# This is the path to my fast SSD - I put datasets there when I can to get the speed benefit\n", "fast_path = '/home/jhoward/ILSVRC2012_img_proc/'\n", "#path = '/data/jhoward/imagenet/sample/'\n", "path = 'data/jhoward/imagenet/'" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "batch_size=64" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Found 25000 images belonging to 1000 classes.\n", "Found 5000 images belonging to 1000 classes.\n" ] } ], "source": [ "samp_trn = get_data(path+'train')\n", "samp_val = get_data(path+'valid')" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": true }, "outputs": [], "source": [ "save_array(samp_path+'results/trn.dat', samp_trn)\n", "save_array(samp_path+'results/val.dat', samp_val)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "samp_trn = load_array(sample_path+'results/trn.dat')\n", "samp_val = load_array(sample_path+'results/val.dat')" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Found 1281167 images belonging to 1000 classes.\n", "Found 50000 images belonging to 1000 classes.\n", "Found 0 images belonging to 0 classes.\n" ] } ], "source": [ "(val_classes, trn_classes, val_labels, trn_labels, \n", " val_filenames, filenames, test_filenames) = get_classes(path)" ] }, { "cell_type": "code", "execution_count": 58, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Found 25000 images belonging to 1000 classes.\n", "Found 5000 images belonging to 1000 classes.\n", "Found 0 images belonging to 0 classes.\n" ] } ], "source": [ "(samp_val_classes, samp_trn_classes, samp_val_labels, samp_trn_labels, \n", " samp_val_filenames, samp_filenames, samp_test_filenames) = get_classes(sample_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Model setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we're just working with the dense layers, we should pre-compute the output of the convolutional layers." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "scrolled": true }, "outputs": [], "source": [ "vgg = Vgg16()\n", "model = vgg.model" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true }, "outputs": [], "source": [ "layers = model.layers\n", "last_conv_idx = [index for index,layer in enumerate(layers) \n", " if type(layer) is Convolution2D][-1]\n", "conv_layers = layers[:last_conv_idx+1]" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": true }, "outputs": [], "source": [ "dense_layers = layers[last_conv_idx+1:]" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [], "source": [ "conv_model = Sequential(conv_layers)" ] }, { "cell_type": "code", "execution_count": 68, "metadata": { "collapsed": true }, "outputs": [], "source": [ "samp_conv_val_feat = conv_model.predict(samp_val, batch_size=batch_size*2)\n", "samp_conv_feat = conv_model.predict(samp_trn, batch_size=batch_size*2)" ] }, { "cell_type": "code", "execution_count": 70, "metadata": { "collapsed": true }, "outputs": [], "source": [ "save_array(sample_path+'results/conv_val_feat.dat', samp_conv_val_feat)\n", "save_array(sample_path+'results/conv_feat.dat', samp_conv_feat)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [], "source": [ "samp_conv_feat = load_array(sample_path+'results/conv_feat.dat')\n", "samp_conv_val_feat = load_array(sample_path+'results/conv_val_feat.dat')" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "(5000, 512, 14, 14)" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "samp_conv_val_feat.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is our usual Vgg network just covering the dense layers:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def get_dense_layers():\n", " return [\n", " MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n", " Flatten(),\n", " Dense(4096, activation='relu'),\n", " Dropout(0.5),\n", " Dense(4096, activation='relu'),\n", " Dropout(0.5),\n", " Dense(1000, activation='softmax')\n", " ]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "dense_model = Sequential(get_dense_layers())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "for l1, l2 in zip(dense_layers, dense_model.layers):\n", " l2.set_weights(l1.get_weights())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Check model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's a good idea to check that your models are giving reasonable answers, before using them." ] }, { "cell_type": "code", "execution_count": 75, "metadata": { "collapsed": true }, "outputs": [], "source": [ "dense_model.compile(Adam(), 'categorical_crossentropy', ['accuracy'])" ] }, { "cell_type": "code", "execution_count": 76, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "4992/5000 [============================>.] 
- ETA: 0s" ] }, { "data": { "text/plain": [ "[1.5168307008743287, 0.64359999999999995]" ] }, "execution_count": 76, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dense_model.evaluate(samp_conv_val_feat, samp_val_labels)" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": true }, "outputs": [], "source": [ "model.compile(Adam(), 'categorical_crossentropy', ['accuracy'])" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "5000/5000 [==============================] - 52s \n" ] }, { "data": { "text/plain": [ "[1.5168307008743287, 0.64359999999999995]" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# should be identical to above\n", "model.evaluate(val, val_labels)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "24992/25000 [============================>.] - ETA: 0s" ] }, { "data": { "text/plain": [ "[1.0947775667953492, 0.71711999999999998]" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# should be a little better than above, since VGG authors overfit\n", "dense_model.evaluate(conv_feat, trn_labels)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Adding our new layers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Calculating batchnorm params" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To calculate the output of a layer in a Keras sequential model, we have to create a function that defines the input layer and the output layer, like this:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": true }, "outputs": [], "source": [ "k_layer_out = K.function([dense_model.layers[0].input, K.learning_phase()], \n", " [dense_model.layers[2].output])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we can call the function to get our layer activations:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [], "source": [ "d0_out = k_layer_out([samp_conv_val_feat, 0])[0]" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": true }, "outputs": [], "source": [ "k_layer_out = K.function([dense_model.layers[0].input, K.learning_phase()], \n", " [dense_model.layers[4].output])" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [], "source": [ "d2_out = k_layer_out([samp_conv_val_feat, 0])[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we've got our activations, we can calculate the mean and standard deviation for each (note that due to a bug in keras, it's actually the variance that we'll need)." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [], "source": [ "mu0,var0 = d0_out.mean(axis=0), d0_out.var(axis=0)\n", "mu2,var2 = d2_out.mean(axis=0), d2_out.var(axis=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating batchnorm model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we're ready to create and insert our layers just after each dense layer." 
] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": true }, "outputs": [], "source": [ "nl1 = BatchNormalization()\n", "nl2 = BatchNormalization()" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false }, "outputs": [], "source": [ "bn_model = insert_layer(dense_model, nl2, 5)\n", "bn_model = insert_layer(bn_model, nl1, 3)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false }, "outputs": [], "source": [ "bnl1 = bn_model.layers[3]\n", "bnl4 = bn_model.layers[6]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After inserting the layers, we can set their weights to the variance and mean we just calculated." ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false }, "outputs": [], "source": [ "bnl1.set_weights([var0, mu0, mu0, var0])\n", "bnl4.set_weights([var2, mu2, mu2, var2])" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": true }, "outputs": [], "source": [ "bn_model.compile(Adam(1e-5), 'categorical_crossentropy', ['accuracy'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We should find that the new model gives identical results to those provided by the original VGG model." ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "5000/5000 [==============================] - 1s \n" ] }, { "data": { "text/plain": [ "[4.7633913375854489, 0.63419999999999999]" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bn_model.evaluate(samp_conv_val_feat, samp_val_labels)" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "24992/25000 [============================>.] - ETA: 0s" ] }, { "data": { "text/plain": [ "[3.7052530924750959, 0.70011999999999996]" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bn_model.evaluate(samp_conv_feat, samp_trn_labels)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Optional - additional fine-tuning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have a VGG model with batchnorm, we might expect that the optimal weights would be a little different to what they were when originally created without batchnorm. So we fine tune the weights for one epoch." 
] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": true }, "outputs": [], "source": [ "feat_bc = bcolz.open(fast_path+'trn_features.dat')" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "collapsed": true }, "outputs": [], "source": [ "labels = load_array(fast_path+'trn_labels.dat')" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": true }, "outputs": [], "source": [ "val_feat_bc = bcolz.open(fast_path+'val_features.dat')" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "collapsed": true }, "outputs": [], "source": [ "val_labels = load_array(fast_path+'val_labels.dat')" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Train on 2522348 samples, validate on 98200 samples\n", "Epoch 1/1\n", "2522348/2522348 [==============================] - 2521s - loss: 1.0574 - acc: 0.7191 - val_loss: 1.3572 - val_acc: 0.6720\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bn_model.fit(feat_bc, labels, nb_epoch=1, batch_size=batch_size,\n", " validation_data=(val_feat_bc, val_labels))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The results look quite encouraging! Note that these VGG weights are now specific to how keras handles image scaling - that is, it squashes and stretches images, rather than adding black borders. So this model is best used on images created in that way." ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "collapsed": true }, "outputs": [], "source": [ "bn_model.save_weights(path+'models/bn_model2.h5')" ] }, { "cell_type": "code", "execution_count": 40, "metadata": { "collapsed": true }, "outputs": [], "source": [ "bn_model.load_weights(path+'models/bn_model2.h5')" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### Create combined model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our last step is simply to copy our new dense layers on to the end of the convolutional part of the network, and save the new complete set of weights, so we can use them in the future when using VGG. (Of course, we'll also need to update our VGG architecture to add the batchnorm layers)." 
] }, { "cell_type": "code", "execution_count": 54, "metadata": { "collapsed": false }, "outputs": [], "source": [ "new_layers = copy_layers(bn_model.layers)\n", "for layer in new_layers:\n", " conv_model.add(layer)" ] }, { "cell_type": "code", "execution_count": 56, "metadata": { "collapsed": true }, "outputs": [], "source": [ "copy_weights(bn_model.layers, new_layers)" ] }, { "cell_type": "code", "execution_count": 63, "metadata": { "collapsed": true }, "outputs": [], "source": [ "conv_model.compile(Adam(1e-5), 'categorical_crossentropy', ['accuracy'])" ] }, { "cell_type": "code", "execution_count": 65, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "5000/5000 [==============================] - 52s \n" ] }, { "data": { "text/plain": [ "[1.5741772070884705, 0.65639999999999998]" ] }, "execution_count": 65, "metadata": {}, "output_type": "execute_result" } ], "source": [ "conv_model.evaluate(samp_val, samp_val_labels)" ] }, { "cell_type": "code", "execution_count": 66, "metadata": { "collapsed": true }, "outputs": [], "source": [ "conv_model.save_weights(path+'models/inet_224squash_bn.h5')" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "The code shown here is implemented in [vgg_bn.py](https://github.com/fastai/courses/blob/master/deeplearning1/nbs/vgg16bn.py), and there is a version of ``vgg_ft`` (our fine tuning function) with batch norm called ``vgg_ft_bn`` in [utils.py](https://github.com/fastai/courses/blob/master/deeplearning1/nbs/utils.py)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [default]", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" }, "nav_menu": {}, "toc": { "navigate_menu": true, "number_sections": true, "sideBar": true, "threshold": 6, "toc_cell": false, "toc_section_display": "block", "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 0 }