{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "name": "Lucid Tutorial", "version": "0.3.2", "views": {}, "default_view": {}, "provenance": [] }, "kernelspec": { "name": "python2", "display_name": "Python 2" }, "accelerator": "GPU" }, "cells": [ { "metadata": { "id": "JndnmDMp66FL", "colab_type": "text" }, "cell_type": "markdown", "source": [ "##### Copyright 2018 Google LLC.\n", "\n", "Licensed under the Apache License, Version 2.0 (the \"License\");" ] }, { "metadata": { "id": "hMqWDc_m6rUC", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } }, "cellView": "both" }, "cell_type": "code", "source": [ "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "_vAVmphMywZR", "colab_type": "text" }, "cell_type": "markdown", "source": [ "# Lucid: A Quick Tutorial\n", "\n", "This tutorial quickly introduces [**Lucid**](https://github.com/tensorflow/lucid), a library for visualizing neural networks. Lucid is a kind of spiritual successor to DeepDream, but provides flexible abstractions so that it can be used for a wide range of interpretability research.\n", "\n", "**Note**: The easiest way to use this tutorial is [as a colab notebook](https://colab.sandbox.google.com/github/tensorflow/lucid/blob/master/notebooks/tutorial.ipynb), which allows you to dive in with no setup. We recommend you enable a free GPU by going to:\n", "\n", "> **Runtime**   →   **Change runtime type**   →   **Hardware Accelerator: GPU**\n", "\n", "Thanks for trying Lucid!\n", "\n", "\n", "\n" ] }, { "metadata": { "id": "FsFc1mE51tCd", "colab_type": "text" }, "cell_type": "markdown", "source": [ "## Install, Import, Load Model" ] }, { "metadata": { "id": "tavMPe3KQ8Cs", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "# Install Lucid\n", "\n", "!pip install --quiet lucid==0.2.3\n", "#!pip install --quiet --upgrade-strategy=only-if-needed git+https://github.com/tensorflow/lucid.git" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "RBr8QbboRAdU", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "# Imports\n", "\n", "import numpy as np\n", "import tensorflow as tf\n", "\n", "import lucid.modelzoo.vision_models as models\n", "from lucid.misc.io import show\n", "import lucid.optvis.objectives as objectives\n", "import lucid.optvis.param as param\n", "import lucid.optvis.render as render\n", "import lucid.optvis.transform as transform" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "yNALaA0QRJVT", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 } } }, "cell_type": "code", "source": [ "# Let's import a model from the Lucid modelzoo!\n", "\n", "model = models.InceptionV1()\n", "model.load_graphdef()" ], "execution_count": 0, "outputs": [] }, { "metadata": { "id": "1l31v18X42gc", "colab_type": "text" }, "cell_type": "markdown", "source": [ "In this tutorial, we will be visualizing InceptionV1, also known as GoogLeNet.\n", "\n", "While we will focus on a few neurons, you may wish to experiment with visualizing others. 
If you'd like, you can try any of the following layers: `conv2d0, maxpool0, conv2d1, conv2d2, maxpool1, mixed3a, mixed3b, maxpool4, mixed4a, mixed4b, mixed4c, mixed4d, mixed4e, maxpool10, mixed5a, mixed5b`.\n", "\n", "You can learn more about GoogLeNet in the [paper](https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf). You can also find visualizations of all neurons in mixed3a-mixed5b [here](https://distill.pub/2017/feature-visualization/appendix/)." ] }, { "metadata": { "id": "VcUL29K612SI", "colab_type": "text" }, "cell_type": "markdown", "source": [ "## Visualize a Neuron" ] }, { "metadata": { "id": "CLDYzkKoRQtw", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 }, "base_uri": "https://localhost:8080/", "height": 166 }, "outputId": "47739b06-c868-4627-924c-dc28ada359d2", "executionInfo": { "status": "ok", "timestamp": 1520528085592, "user_tz": 480, "elapsed": 9883, "user": { "displayName": "", "photoUrl": "", "userId": "" } } }, "cell_type": "code", "source": [ "# Visualizing a neuron is easy!\n", "\n", "_ = render.render_vis(model, \"mixed4a_pre_relu:476\")" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "512 1150.7921\n" ], "name": "stdout" }, { "output_type": "display_data", "data": { "text/plain": [ "" ], "text/html": [ "" ] }, "metadata": { "tags": [] } } ] }, { "metadata": { "id": "6gmaVSej19us", "colab_type": "text" }, "cell_type": "markdown", "source": [ "## Getting a bit deeper\n", "\n", "Lucid splits visualizations into a few components which you can fiddle with completely independently:\n", "\n", "* **objectives** -- What do you want the model to visualize?\n", "* **parameterization** -- How do you describe the image?\n", "* **transforms** -- What transformations do you want your visualization to be robust to?\n", "\n", "In this section, we'll experiment with each one."
] }, { "metadata": { "id": "6zO3np2D2tGh", "colab_type": "text" }, "cell_type": "markdown", "source": [ "**Experimenting with objectives**" ] }, { "metadata": { "id": "YyexdOXIcH2i", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 }, "base_uri": "https://localhost:8080/", "height": 166 }, "outputId": "79641344-277d-4c33-993b-da8930e569c7", "executionInfo": { "status": "ok", "timestamp": 1518144808685, "user_tz": 480, "elapsed": 102011, "user": { "displayName": "Christopher Olah", "photoUrl": "//lh6.googleusercontent.com/-BDHAgNAk34E/AAAAAAAAAAI/AAAAAAAAAMw/gTWZ3IeP8dY/s50-c-k-no/photo.jpg", "userId": "104989755527098071788" } } }, "cell_type": "code", "source": [ "# Let's visualize another neuron using a more explicit objective:\n", "\n", "obj = objectives.channel(\"mixed4a_pre_relu\", 465)\n", "_ = render.render_vis(model, obj)" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "512 1785.2615\n" ], "name": "stdout" }, { "output_type": "display_data", "data": { "text/plain": [ "" ], "text/html": [ "" ] }, "metadata": { "tags": [] } } ] }, { "metadata": { "id": "YdERJ3_7cLdy", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 }, "base_uri": "https://localhost:8080/", "height": 166 }, "outputId": "8201d3be-9487-4ca7-819f-b332b459ab6f", "executionInfo": { "status": "ok", "timestamp": 1518144939030, "user_tz": 480, "elapsed": 101841, "user": { "displayName": "Christopher Olah", "photoUrl": "//lh6.googleusercontent.com/-BDHAgNAk34E/AAAAAAAAAAI/AAAAAAAAAMw/gTWZ3IeP8dY/s50-c-k-no/photo.jpg", "userId": "104989755527098071788" } } }, "cell_type": "code", "source": [ "# Or we could do something weirder:\n", "# (Technically, objectives are a class that implements addition.)\n", "\n", "channel = lambda n: objectives.channel(\"mixed4a_pre_relu\", n)\n", "obj = channel(476) + channel(465)\n", "_ = render.render_vis(model, obj)" ], "execution_count": 0, "outputs": [ { "output_type": 
"stream", "text": [ "512 2312.0425\n" ], "name": "stdout" }, { "output_type": "display_data", "data": { "text/plain": [ "" ], "text/html": [ "" ] }, "metadata": { "tags": [] } } ] }, { "metadata": { "id": "9Rvhpqtn3XXm", "colab_type": "text" }, "cell_type": "markdown", "source": [ "**Transformation Robustness**\n", "\n", "Recommended reading: the Feature Visualization article's discussion of \"Transformation Robustness\" in the section titled [The Enemy of Feature Visualization](https://distill.pub/2017/feature-visualization/#enemy-of-feature-vis). In particular, there's an interactive diagram that allows you to easily explore how different kinds of transformation robustness affect visualizations." ] }, { "metadata": { "id": "DDBH4gpD3X2O", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 }, "base_uri": "https://localhost:8080/", "height": 166 }, "outputId": "4eb10b16-79e6-4b86-daa8-8e52d258fe69", "executionInfo": { "status": "ok", "timestamp": 1518145016924, "user_tz": 480, "elapsed": 77826, "user": { "displayName": "Christopher Olah", "photoUrl": "//lh6.googleusercontent.com/-BDHAgNAk34E/AAAAAAAAAAI/AAAAAAAAAMw/gTWZ3IeP8dY/s50-c-k-no/photo.jpg", "userId": "104989755527098071788" } } }, "cell_type": "code", "source": [ "# No transformation robustness\n", "\n", "transforms = []\n", "_ = render.render_vis(model, \"mixed4a_pre_relu:476\", transforms=transforms)" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "512 2420.1245\n" ], "name": "stdout" }, { "output_type": "display_data", "data": { "text/plain": [ "" ], "text/html": [ "" ] }, "metadata": { "tags": [] } } ] }, { "metadata": { "id": "mDOviz8d4n4c", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 }, "base_uri": "https://localhost:8080/", "height": 166 }, "outputId": "89be067a-b356-443d-d589-ab55016190a1", "executionInfo": { "status": "ok", "timestamp": 1518145095337, "user_tz": 480, "elapsed": 78302, "user": { 
"displayName": "Christopher Olah", "photoUrl": "//lh6.googleusercontent.com/-BDHAgNAk34E/AAAAAAAAAAI/AAAAAAAAAMw/gTWZ3IeP8dY/s50-c-k-no/photo.jpg", "userId": "104989755527098071788" } } }, "cell_type": "code", "source": [ "# Jitter 2\n", "\n", "transforms = [\n", " transform.jitter(2)\n", "]\n", "_ = render.render_vis(model, \"mixed4a_pre_relu:476\", transforms=transforms)" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "512 1853.4551\n" ], "name": "stdout" }, { "output_type": "display_data", "data": { "text/plain": [ "" ], "text/html": [ "" ] }, "metadata": { "tags": [] } } ] }, { "metadata": { "id": "325tbTiE5GpJ", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 }, "base_uri": "https://localhost:8080/", "height": 166 }, "outputId": "780d820d-643d-4e99-ff26-ca7b0ebf2453", "executionInfo": { "status": "ok", "timestamp": 1518145209289, "user_tz": 480, "elapsed": 113882, "user": { "displayName": "Christopher Olah", "photoUrl": "//lh6.googleusercontent.com/-BDHAgNAk34E/AAAAAAAAAAI/AAAAAAAAAMw/gTWZ3IeP8dY/s50-c-k-no/photo.jpg", "userId": "104989755527098071788" } } }, "cell_type": "code", "source": [ "# Pulling out all the stops\n", "\n", "transforms = [\n", " transform.pad(16),\n", " transform.jitter(8),\n", " transform.random_scale([n/100. for n in range(80, 120)]),\n", " transform.random_rotate(range(-10,10) + range(-5,5) + 10*range(-2,2)),\n", " transform.jitter(2)\n", "]\n", "_ = render.render_vis(model, \"mixed4a_pre_relu:476\", transforms=transforms)" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "512 1195.9929\n" ], "name": "stdout" }, { "output_type": "display_data", "data": { "text/plain": [ "" ], "text/html": [ "" ] }, "metadata": { "tags": [] } } ] }, { "metadata": { "id": "lW2Fmtv124Bo", "colab_type": "text" }, "cell_type": "markdown", "source": [ "**Experimenting with parameterization**\n", "\n", "Recommended reading: the Feature Visualization article's section on [Preconditioning and Parameterization](https://distill.pub/2017/feature-visualization/#preconditioning)" ] }, { "metadata": { "id": "N-BTF_W0fHZh", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 }, "base_uri": "https://localhost:8080/", "height": 166 }, "outputId": "e3c92004-c89e-4c7b-da4f-21d364f30906", "executionInfo": { "status": "ok", "timestamp": 1518145357388, "user_tz": 480, "elapsed": 91118, "user": { "displayName": "Christopher Olah", "photoUrl": "//lh6.googleusercontent.com/-BDHAgNAk34E/AAAAAAAAAAI/AAAAAAAAAMw/gTWZ3IeP8dY/s50-c-k-no/photo.jpg", "userId": "104989755527098071788" } } }, "cell_type": "code", "source": [ "# Using alternate parameterizations is one of the primary ingredients for\n", "# effective visualization\n", "\n", "param_f = lambda: param.image(128, fft=False, decorrelate=False)\n", "_ = render.render_vis(model, \"mixed4a_pre_relu:2\", param_f)" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "512 808.84076\n" ], "name": "stdout" }, { "output_type": "display_data", "data": { "text/plain": [ "" ], "text/html": [ "" ] }, "metadata": { "tags": [] } } ] }, { "metadata": { "id": "8hrCwdxhcUHn", "colab_type": "code", "colab": { "autoexec": { "startup": false, "wait_interval": 0 }, "base_uri": 
"https://localhost:8080/", "height": 166 }, "outputId": "9e0fd16a-55b2-43e3-fcd8-c29164d35fcc", "executionInfo": { "status": "ok", "timestamp": 1517984295016, "user_tz": 480, "elapsed": 6988, "user": { "displayName": "Christopher Olah", "photoUrl": "//lh6.googleusercontent.com/-BDHAgNAk34E/AAAAAAAAAAI/AAAAAAAAAMw/gTWZ3IeP8dY/s50-c-k-no/photo.jpg", "userId": "104989755527098071788" } } }, "cell_type": "code", "source": [ "# The same objective, now parameterized as a decorrelated image in a\n", "# Fourier (FFT) basis\n", "\n", "param_f = lambda: param.image(128, fft=True, decorrelate=True)\n", "_ = render.render_vis(model, \"mixed4a_pre_relu:2\", param_f)" ], "execution_count": 0, "outputs": [ { "output_type": "stream", "text": [ "512 1191.0022\n" ], "name": "stdout" }, { "output_type": "display_data", "data": { "text/plain": [ "" ], "text/html": [ "" ] }, "metadata": { "tags": [] } } ] } ] }