{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Variational Autoencoder for pan-cancer gene expression\n", "\n", "**Gregory Way 2017**\n", "\n", "This script trains and outputs results for a [variational autoencoder (VAE)](https://arxiv.org/abs/1312.6114)\n", "applied to gene expression data across 33 different cancer-types from The Cancer Genome Atlas (TCGA).\n", "\n", "A VAE aproximates the data generating function for the cancer data and learns the lower dimensional manifold a tumor occupies in gene expression space. By compressing the gene expression space into lower dimensional space, the VAE would, ideally, learn biological principles, such as cancer hallmark pathway activations, that help explain how tumors are similar and different. The VAE is also a generative model with a latent space that can be interpolated to observe transitions between cancer states.\n", "\n", "The particular model trained in this notebook consists of gene expression input (5000 most variably expressed genes by median absolute deviation) compressed down into two length 100 vectors (mean and variance encoded spaces) which are made deterministic through the reparameterization trick of sampling an epsilon vector from the uniform distribution. The encoded layer is then decoded back to original 5000 dimensions through a single reconstruction layer. I included a layer of batch normalization in the encoding step to prevent dead nodes. The encoding scheme also uses relu activation while the decoder uses a sigmoid activation to enforce positive activations. All weights are glorot uniform initialized. \n", "\n", "Another trick used here to encourage manifold learning is _warm start_ as discussed in [Sonderby et al. 2016](https://arxiv.org/abs/1602.02282). With warm starts, we add a parameter _beta_, which controls the contribution of the KL divergence loss in the total VAE loss (reconstruction + (beta * KL)). In this setting, the model begins training deterministically as a vanilla autoencoder (_beta_ = 0) and slowly ramps up after each epoch linearly until _beta_ = 1. 
After a parameter sweep, we observed that kappa has little influence in training, therefore, we set _kappa_ = 1, which is a full VAE.\n", "\n", "Much of this script is inspired by the [keras variational_autoencoder.py example](https://github.com/fchollet/keras/blob/master/examples/variational_autoencoder.py)" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using TensorFlow backend.\n" ] } ], "source": [ "import os\n", "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "\n", "import tensorflow as tf\n", "from keras.layers import Input, Dense, Lambda, Layer, Activation\n", "from keras.layers.normalization import BatchNormalization\n", "from keras.models import Model\n", "from keras import backend as K\n", "from keras import metrics, optimizers\n", "from keras.callbacks import Callback\n", "import keras\n", "\n", "import pydot\n", "import graphviz\n", "from keras.utils import plot_model\n", "from keras_tqdm import TQDMNotebookCallback\n", "from IPython.display import SVG\n", "from keras.utils.vis_utils import model_to_dot" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2.1.3\n" ] }, { "data": { "text/plain": [ "'1.4.0'" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "print(keras.__version__)\n", "tf.__version__" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "plt.style.use('seaborn-notebook')" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "sns.set(style=\"white\", color_codes=True)\n", "sns.set_context(\"paper\", rc={\"font.size\":14,\"axes.titlesize\":15,\"axes.labelsize\":20,\n", " 'xtick.labelsize':14, 'ytick.labelsize':14})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Functions and Classes\n", "\n", "This will facilitate connections between layers and also custom hyperparameters" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# Function for reparameterization trick to make model differentiable\n", "def sampling(args):\n", " \n", " import tensorflow as tf\n", " # Function with args required for Keras Lambda function\n", " z_mean, z_log_var = args\n", "\n", " # Draw epsilon of the same shape from a standard normal distribution\n", " epsilon = K.random_normal(shape=tf.shape(z_mean), mean=0.,\n", " stddev=epsilon_std)\n", " \n", " # The latent vector is non-deterministic and differentiable\n", " # in respect to z_mean and z_log_var\n", " z = z_mean + K.exp(z_log_var / 2) * epsilon\n", " return z\n", "\n", "\n", "class CustomVariationalLayer(Layer):\n", " \"\"\"\n", " Define a custom layer that learns and performs the training\n", " This function is borrowed from:\n", " https://github.com/fchollet/keras/blob/master/examples/variational_autoencoder.py\n", " \"\"\"\n", " def __init__(self, **kwargs):\n", " # https://keras.io/layers/writing-your-own-keras-layers/\n", " self.is_placeholder = True\n", " super(CustomVariationalLayer, self).__init__(**kwargs)\n", "\n", " def vae_loss(self, x_input, x_decoded):\n", " reconstruction_loss = original_dim * metrics.binary_crossentropy(x_input, x_decoded)\n", " kl_loss = - 0.5 * K.sum(1 + z_log_var_encoded - K.square(z_mean_encoded) - \n", " K.exp(z_log_var_encoded), axis=-1)\n", " return 
K.mean(reconstruction_loss + (K.get_value(beta) * kl_loss))\n", "\n", " def call(self, inputs):\n", " x = inputs[0]\n", " x_decoded = inputs[1]\n", " loss = self.vae_loss(x, x_decoded)\n", " self.add_loss(loss, inputs=inputs)\n", " # We won't actually use the output.\n", " return x" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Implementing Warm-up as described in Sonderby et al. LVAE\n", "\n", "This is modified code from https://github.com/fchollet/keras/issues/2595" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "class WarmUpCallback(Callback):\n", " def __init__(self, beta, kappa):\n", " self.beta = beta\n", " self.kappa = kappa\n", " # Behavior on each epoch\n", " def on_epoch_end(self, epoch, logs={}):\n", " if K.get_value(self.beta) <= 1:\n", " K.set_value(self.beta, K.get_value(self.beta) + self.kappa)" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "np.random.seed(123)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Gene Expression Data" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(10459, 5000)\n" ] }, { "data": { "text/html": [ "
\n", " | RPS4Y1 | \n", "XIST | \n", "KRT5 | \n", "AGR2 | \n", "CEACAM5 | \n", "KRT6A | \n", "KRT14 | \n", "CEACAM6 | \n", "DDX3Y | \n", "KDM5D | \n", "... | \n", "FAM129A | \n", "C8orf48 | \n", "CDK5R1 | \n", "FAM81A | \n", "C13orf18 | \n", "GDPD3 | \n", "SMAGP | \n", "C2orf85 | \n", "POU5F1B | \n", "CHST2 | \n", "
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
TCGA-02-0047-01 | \n", "0.678296 | \n", "0.289910 | \n", "0.034230 | \n", "0.0 | \n", "0.0 | \n", "0.084731 | \n", "0.031863 | \n", "0.037709 | \n", "0.746797 | \n", "0.687833 | \n", "... | \n", "0.440610 | \n", "0.428782 | \n", "0.732819 | \n", "0.634340 | \n", "0.580662 | \n", "0.294313 | \n", "0.458134 | \n", "0.478219 | \n", "0.168263 | \n", "0.638497 | \n", "
TCGA-02-0055-01 | \n", "0.200633 | \n", "0.654917 | \n", "0.181993 | \n", "0.0 | \n", "0.0 | \n", "0.100606 | \n", "0.050011 | \n", "0.092586 | \n", "0.103725 | \n", "0.140642 | \n", "... | \n", "0.620658 | \n", "0.363207 | \n", "0.592269 | \n", "0.602755 | \n", "0.610192 | \n", "0.374569 | \n", "0.722420 | \n", "0.271356 | \n", "0.160465 | \n", "0.602560 | \n", "
2 rows × 5000 columns
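The preview above shows the loaded expression matrix: 10,459 tumors by 5,000 genes, with values already scaled to the [0, 1] range. As a minimal sketch of how a matrix like this could be loaded and zero-one scaled (the file path and preprocessing details below are assumptions, not the exact upstream pipeline), scaling to [0, 1] matters because the decoder ends in a sigmoid and the reconstruction loss is binary cross-entropy:

```python
import os
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Hypothetical path: a tab-separated matrix of tumors (rows) by the 5000
# most variable genes (columns); the real file name may differ
rnaseq_file = os.path.join('data', 'pancan_rnaseq_top5000_mad_genes.tsv.gz')
rnaseq_df = pd.read_table(rnaseq_file, index_col=0)

# Scale each gene to [0, 1] so the sigmoid decoder and binary cross-entropy
# reconstruction loss are appropriate
rnaseq_df = pd.DataFrame(MinMaxScaler().fit_transform(rnaseq_df),
                         index=rnaseq_df.index,
                         columns=rnaseq_df.columns)

print(rnaseq_df.shape)  # expected to be (10459, 5000) for this dataset
```

The second preview below is a separate dataframe from the original notebook output (the same two tumors with different values), not the result of this sketch.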
\n", "\n", " | RPS4Y1 | \n", "XIST | \n", "KRT5 | \n", "AGR2 | \n", "CEACAM5 | \n", "KRT6A | \n", "KRT14 | \n", "CEACAM6 | \n", "DDX3Y | \n", "KDM5D | \n", "... | \n", "FAM129A | \n", "C8orf48 | \n", "CDK5R1 | \n", "FAM81A | \n", "C13orf18 | \n", "GDPD3 | \n", "SMAGP | \n", "C2orf85 | \n", "POU5F1B | \n", "CHST2 | \n", "
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
TCGA-02-0047-01 | \n", "0.691638 | \n", "0.196529 | \n", "0.194204 | \n", "0.035744 | \n", "0.027784 | \n", "0.055622 | \n", "0.064214 | \n", "0.043093 | \n", "0.721925 | \n", "0.690764 | \n", "... | \n", "0.451367 | \n", "0.486186 | \n", "0.788487 | \n", "0.680068 | \n", "0.575140 | \n", "0.324611 | \n", "0.385545 | \n", "0.605859 | \n", "0.210192 | \n", "0.644549 | \n", "
TCGA-02-0055-01 | \n", "0.099189 | \n", "0.592460 | \n", "0.188617 | \n", "0.106664 | \n", "0.044037 | \n", "0.100698 | \n", "0.115720 | \n", "0.069524 | \n", "0.075663 | \n", "0.055188 | \n", "... | \n", "0.564506 | \n", "0.519069 | \n", "0.646698 | \n", "0.590475 | \n", "0.624337 | \n", "0.385787 | \n", "0.570150 | \n", "0.246259 | \n", "0.181708 | \n", "0.649352 | \n", "
2 rows × 5000 columns
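To connect the description in the introduction with the `sampling` function and `CustomVariationalLayer` defined above, here is a minimal sketch of how the encoder, reparameterization, and decoder could be wired together. The dimensions follow the introduction (5000 input genes, two length-100 encodings); the learning rate and variable names are assumptions, and the notebook's actual model-building cell is not shown in this excerpt.

```python
# Sketch only: relies on the imports, sampling(), and CustomVariationalLayer
# defined earlier in this notebook
original_dim = 5000   # number of input genes
latent_dim = 100      # size of the encoded mean and log-variance vectors
epsilon_std = 1.0     # standard deviation of the epsilon samples
beta = K.variable(0)  # KL weight, ramped up by WarmUpCallback during training

rnaseq_input = Input(shape=(original_dim,))

# Encoder: batch-normalized, relu-activated mean and log-variance encodings
z_mean_dense = Dense(latent_dim, kernel_initializer='glorot_uniform')(rnaseq_input)
z_mean_encoded = Activation('relu')(BatchNormalization()(z_mean_dense))

z_log_var_dense = Dense(latent_dim, kernel_initializer='glorot_uniform')(rnaseq_input)
z_log_var_encoded = Activation('relu')(BatchNormalization()(z_log_var_dense))

# Reparameterization trick: sample epsilon from a standard normal and combine
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean_encoded, z_log_var_encoded])

# Decoder: a single sigmoid reconstruction layer back to the 5000 input genes
rnaseq_reconstruct = Dense(original_dim, kernel_initializer='glorot_uniform',
                           activation='sigmoid')(z)

# The custom layer attaches the combined reconstruction + (beta * KL) loss
vae_layer = CustomVariationalLayer()([rnaseq_input, rnaseq_reconstruct])
vae = Model(rnaseq_input, vae_layer)
vae.compile(optimizer=optimizers.Adam(lr=0.0005), loss=None)  # learning rate is a placeholder
```

Batch normalization on the encoded means and log variances is what the introduction refers to as preventing dead nodes, and glorot uniform is the Keras default initializer for `Dense` layers.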
\n", "\n", " | gene mean | \n", "gene abs(sum) | \n", "
---|---|---|
PPAN-P2RY11 | \n", "-0.020364 | \n", "0.230511 | \n", "
GSTT1 | \n", "0.024710 | \n", "0.229753 | \n", "
GSTM1 | \n", "0.005650 | \n", "0.216558 | \n", "
TBC1D3G | \n", "-0.010398 | \n", "0.194532 | \n", "
RPS28 | \n", "0.012242 | \n", "0.176380 | \n", "
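Warm-up during training is driven by the `WarmUpCallback` defined earlier: `beta` starts at 0 and is incremented by `kappa` at the end of every epoch until it reaches 1. A minimal training sketch under the assumptions above (the hold-out fraction, epoch count, and batch size are placeholders, and `vae` and `beta` refer to the model sketched previously):

```python
kappa = 1  # beta step per epoch; kappa = 1 reaches a full VAE after the first epoch

# Hypothetical hold-out split of the scaled expression matrix
rnaseq_test_df = rnaseq_df.sample(frac=0.1, random_state=123)
rnaseq_train_df = rnaseq_df.drop(rnaseq_test_df.index)

# No explicit target is passed because CustomVariationalLayer adds the loss internally
hist = vae.fit(np.array(rnaseq_train_df),
               shuffle=True,
               epochs=50,      # placeholder
               batch_size=50,  # placeholder
               callbacks=[WarmUpCallback(beta, kappa)])
```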