{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Deep GPs\n", "========\n", "\n", "### [Neil D. Lawrence](http://inverseprobability.com), University of\n", "\n", "Cambridge \\#\\#\\# 2020-09-16" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Abstract**: In this talk we introduce deep Gaussian processes, an\n", "approach to stochastic process modelling that relies on the composition\n", "of individual stochastic proceses." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$$\n", "\\newcommand{\\tk}[1]{}\n", "\\newcommand{\\Amatrix}{\\mathbf{A}}\n", "\\newcommand{\\KL}[2]{\\text{KL}\\left( #1\\,\\|\\,#2 \\right)}\n", "\\newcommand{\\Kaast}{\\kernelMatrix_{\\mathbf{ \\ast}\\mathbf{ \\ast}}}\n", "\\newcommand{\\Kastu}{\\kernelMatrix_{\\mathbf{ \\ast} \\inducingVector}}\n", "\\newcommand{\\Kff}{\\kernelMatrix_{\\mappingFunctionVector \\mappingFunctionVector}}\n", "\\newcommand{\\Kfu}{\\kernelMatrix_{\\mappingFunctionVector \\inducingVector}}\n", "\\newcommand{\\Kuast}{\\kernelMatrix_{\\inducingVector \\bf\\ast}}\n", "\\newcommand{\\Kuf}{\\kernelMatrix_{\\inducingVector \\mappingFunctionVector}}\n", "\\newcommand{\\Kuu}{\\kernelMatrix_{\\inducingVector \\inducingVector}}\n", "\\newcommand{\\Kuui}{\\Kuu^{-1}}\n", "\\newcommand{\\Qaast}{\\mathbf{Q}_{\\bf \\ast \\ast}}\n", "\\newcommand{\\Qastf}{\\mathbf{Q}_{\\ast \\mappingFunction}}\n", "\\newcommand{\\Qfast}{\\mathbf{Q}_{\\mappingFunctionVector \\bf \\ast}}\n", "\\newcommand{\\Qff}{\\mathbf{Q}_{\\mappingFunctionVector \\mappingFunctionVector}}\n", "\\newcommand{\\aMatrix}{\\mathbf{A}}\n", "\\newcommand{\\aScalar}{a}\n", "\\newcommand{\\aVector}{\\mathbf{a}}\n", "\\newcommand{\\acceleration}{a}\n", "\\newcommand{\\bMatrix}{\\mathbf{B}}\n", "\\newcommand{\\bScalar}{b}\n", "\\newcommand{\\bVector}{\\mathbf{b}}\n", "\\newcommand{\\basisFunc}{\\phi}\n", "\\newcommand{\\basisFuncVector}{\\boldsymbol{ \\basisFunc}}\n", "\\newcommand{\\basisFunction}{\\phi}\n", "\\newcommand{\\basisLocation}{\\mu}\n", "\\newcommand{\\basisMatrix}{\\boldsymbol{ \\Phi}}\n", "\\newcommand{\\basisScalar}{\\basisFunction}\n", "\\newcommand{\\basisVector}{\\boldsymbol{ \\basisFunction}}\n", "\\newcommand{\\activationFunction}{\\phi}\n", "\\newcommand{\\activationMatrix}{\\boldsymbol{ \\Phi}}\n", "\\newcommand{\\activationScalar}{\\basisFunction}\n", "\\newcommand{\\activationVector}{\\boldsymbol{ \\basisFunction}}\n", "\\newcommand{\\bigO}{\\mathcal{O}}\n", "\\newcommand{\\binomProb}{\\pi}\n", "\\newcommand{\\cMatrix}{\\mathbf{C}}\n", "\\newcommand{\\cbasisMatrix}{\\hat{\\boldsymbol{ \\Phi}}}\n", "\\newcommand{\\cdataMatrix}{\\hat{\\dataMatrix}}\n", "\\newcommand{\\cdataScalar}{\\hat{\\dataScalar}}\n", "\\newcommand{\\cdataVector}{\\hat{\\dataVector}}\n", "\\newcommand{\\centeredKernelMatrix}{\\mathbf{ \\MakeUppercase{\\centeredKernelScalar}}}\n", "\\newcommand{\\centeredKernelScalar}{b}\n", "\\newcommand{\\centeredKernelVector}{\\centeredKernelScalar}\n", "\\newcommand{\\centeringMatrix}{\\mathbf{H}}\n", "\\newcommand{\\chiSquaredDist}[2]{\\chi_{#1}^{2}\\left(#2\\right)}\n", "\\newcommand{\\chiSquaredSamp}[1]{\\chi_{#1}^{2}}\n", "\\newcommand{\\conditionalCovariance}{\\boldsymbol{ \\Sigma}}\n", "\\newcommand{\\coregionalizationMatrix}{\\mathbf{B}}\n", "\\newcommand{\\coregionalizationScalar}{b}\n", "\\newcommand{\\coregionalizationVector}{\\mathbf{ \\coregionalizationScalar}}\n", "\\newcommand{\\covDist}[2]{\\text{cov}_{#2}\\left(#1\\right)}\n", "\\newcommand{\\covSamp}[1]{\\text{cov}\\left(#1\\right)}\n", 
"\\newcommand{\\covarianceScalar}{c}\n", "\\newcommand{\\covarianceVector}{\\mathbf{ \\covarianceScalar}}\n", "\\newcommand{\\covarianceMatrix}{\\mathbf{C}}\n", "\\newcommand{\\covarianceMatrixTwo}{\\boldsymbol{ \\Sigma}}\n", "\\newcommand{\\croupierScalar}{s}\n", "\\newcommand{\\croupierVector}{\\mathbf{ \\croupierScalar}}\n", "\\newcommand{\\croupierMatrix}{\\mathbf{ \\MakeUppercase{\\croupierScalar}}}\n", "\\newcommand{\\dataDim}{p}\n", "\\newcommand{\\dataIndex}{i}\n", "\\newcommand{\\dataIndexTwo}{j}\n", "\\newcommand{\\dataMatrix}{\\mathbf{Y}}\n", "\\newcommand{\\dataScalar}{y}\n", "\\newcommand{\\dataSet}{\\mathcal{D}}\n", "\\newcommand{\\dataStd}{\\sigma}\n", "\\newcommand{\\dataVector}{\\mathbf{ \\dataScalar}}\n", "\\newcommand{\\decayRate}{d}\n", "\\newcommand{\\degreeMatrix}{\\mathbf{ \\MakeUppercase{\\degreeScalar}}}\n", "\\newcommand{\\degreeScalar}{d}\n", "\\newcommand{\\degreeVector}{\\mathbf{ \\degreeScalar}}\n", "\\newcommand{\\diag}[1]{\\text{diag}\\left(#1\\right)}\n", "\\newcommand{\\diagonalMatrix}{\\mathbf{D}}\n", "\\newcommand{\\diff}[2]{\\frac{\\text{d}#1}{\\text{d}#2}}\n", "\\newcommand{\\diffTwo}[2]{\\frac{\\text{d}^2#1}{\\text{d}#2^2}}\n", "\\newcommand{\\displacement}{x}\n", "\\newcommand{\\displacementVector}{\\textbf{\\displacement}}\n", "\\newcommand{\\distanceMatrix}{\\mathbf{ \\MakeUppercase{\\distanceScalar}}}\n", "\\newcommand{\\distanceScalar}{d}\n", "\\newcommand{\\distanceVector}{\\mathbf{ \\distanceScalar}}\n", "\\newcommand{\\eigenvaltwo}{\\ell}\n", "\\newcommand{\\eigenvaltwoMatrix}{\\mathbf{L}}\n", "\\newcommand{\\eigenvaltwoVector}{\\mathbf{l}}\n", "\\newcommand{\\eigenvalue}{\\lambda}\n", "\\newcommand{\\eigenvalueMatrix}{\\boldsymbol{ \\Lambda}}\n", "\\newcommand{\\eigenvalueVector}{\\boldsymbol{ \\lambda}}\n", "\\newcommand{\\eigenvector}{\\mathbf{ \\eigenvectorScalar}}\n", "\\newcommand{\\eigenvectorMatrix}{\\mathbf{U}}\n", "\\newcommand{\\eigenvectorScalar}{u}\n", "\\newcommand{\\eigenvectwo}{\\mathbf{v}}\n", "\\newcommand{\\eigenvectwoMatrix}{\\mathbf{V}}\n", "\\newcommand{\\eigenvectwoScalar}{v}\n", "\\newcommand{\\entropy}[1]{\\mathcal{H}\\left(#1\\right)}\n", "\\newcommand{\\errorFunction}{E}\n", "\\newcommand{\\expDist}[2]{\\left<#1\\right>_{#2}}\n", "\\newcommand{\\expSamp}[1]{\\left<#1\\right>}\n", "\\newcommand{\\expectation}[1]{\\left\\langle #1 \\right\\rangle }\n", "\\newcommand{\\expectationDist}[2]{\\left\\langle #1 \\right\\rangle _{#2}}\n", "\\newcommand{\\expectedDistanceMatrix}{\\mathcal{D}}\n", "\\newcommand{\\eye}{\\mathbf{I}}\n", "\\newcommand{\\fantasyDim}{r}\n", "\\newcommand{\\fantasyMatrix}{\\mathbf{ \\MakeUppercase{\\fantasyScalar}}}\n", "\\newcommand{\\fantasyScalar}{z}\n", "\\newcommand{\\fantasyVector}{\\mathbf{ \\fantasyScalar}}\n", "\\newcommand{\\featureStd}{\\varsigma}\n", "\\newcommand{\\gammaCdf}[3]{\\mathcal{GAMMA CDF}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gammaDist}[3]{\\mathcal{G}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gammaSamp}[2]{\\mathcal{G}\\left(#1,#2\\right)}\n", "\\newcommand{\\gaussianDist}[3]{\\mathcal{N}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gaussianSamp}[2]{\\mathcal{N}\\left(#1,#2\\right)}\n", "\\newcommand{\\given}{|}\n", "\\newcommand{\\half}{\\frac{1}{2}}\n", "\\newcommand{\\heaviside}{H}\n", "\\newcommand{\\hiddenMatrix}{\\mathbf{ \\MakeUppercase{\\hiddenScalar}}}\n", "\\newcommand{\\hiddenScalar}{h}\n", "\\newcommand{\\hiddenVector}{\\mathbf{ \\hiddenScalar}}\n", "\\newcommand{\\identityMatrix}{\\eye}\n", "\\newcommand{\\inducingInputScalar}{z}\n", 
"\\newcommand{\\inducingInputVector}{\\mathbf{ \\inducingInputScalar}}\n", "\\newcommand{\\inducingInputMatrix}{\\mathbf{Z}}\n", "\\newcommand{\\inducingScalar}{u}\n", "\\newcommand{\\inducingVector}{\\mathbf{ \\inducingScalar}}\n", "\\newcommand{\\inducingMatrix}{\\mathbf{U}}\n", "\\newcommand{\\inlineDiff}[2]{\\text{d}#1/\\text{d}#2}\n", "\\newcommand{\\inputDim}{q}\n", "\\newcommand{\\inputMatrix}{\\mathbf{X}}\n", "\\newcommand{\\inputScalar}{x}\n", "\\newcommand{\\inputSpace}{\\mathcal{X}}\n", "\\newcommand{\\inputVals}{\\inputVector}\n", "\\newcommand{\\inputVector}{\\mathbf{ \\inputScalar}}\n", "\\newcommand{\\iterNum}{k}\n", "\\newcommand{\\kernel}{\\kernelScalar}\n", "\\newcommand{\\kernelMatrix}{\\mathbf{K}}\n", "\\newcommand{\\kernelScalar}{k}\n", "\\newcommand{\\kernelVector}{\\mathbf{ \\kernelScalar}}\n", "\\newcommand{\\kff}{\\kernelScalar_{\\mappingFunction \\mappingFunction}}\n", "\\newcommand{\\kfu}{\\kernelVector_{\\mappingFunction \\inducingScalar}}\n", "\\newcommand{\\kuf}{\\kernelVector_{\\inducingScalar \\mappingFunction}}\n", "\\newcommand{\\kuu}{\\kernelVector_{\\inducingScalar \\inducingScalar}}\n", "\\newcommand{\\lagrangeMultiplier}{\\lambda}\n", "\\newcommand{\\lagrangeMultiplierMatrix}{\\boldsymbol{ \\Lambda}}\n", "\\newcommand{\\lagrangian}{L}\n", "\\newcommand{\\laplacianFactor}{\\mathbf{ \\MakeUppercase{\\laplacianFactorScalar}}}\n", "\\newcommand{\\laplacianFactorScalar}{m}\n", "\\newcommand{\\laplacianFactorVector}{\\mathbf{ \\laplacianFactorScalar}}\n", "\\newcommand{\\laplacianMatrix}{\\mathbf{L}}\n", "\\newcommand{\\laplacianScalar}{\\ell}\n", "\\newcommand{\\laplacianVector}{\\mathbf{ \\ell}}\n", "\\newcommand{\\latentDim}{q}\n", "\\newcommand{\\latentDistanceMatrix}{\\boldsymbol{ \\Delta}}\n", "\\newcommand{\\latentDistanceScalar}{\\delta}\n", "\\newcommand{\\latentDistanceVector}{\\boldsymbol{ \\delta}}\n", "\\newcommand{\\latentForce}{f}\n", "\\newcommand{\\latentFunction}{u}\n", "\\newcommand{\\latentFunctionVector}{\\mathbf{ \\latentFunction}}\n", "\\newcommand{\\latentFunctionMatrix}{\\mathbf{ \\MakeUppercase{\\latentFunction}}}\n", "\\newcommand{\\latentIndex}{j}\n", "\\newcommand{\\latentScalar}{z}\n", "\\newcommand{\\latentVector}{\\mathbf{ \\latentScalar}}\n", "\\newcommand{\\latentMatrix}{\\mathbf{Z}}\n", "\\newcommand{\\learnRate}{\\eta}\n", "\\newcommand{\\lengthScale}{\\ell}\n", "\\newcommand{\\rbfWidth}{\\ell}\n", "\\newcommand{\\likelihoodBound}{\\mathcal{L}}\n", "\\newcommand{\\likelihoodFunction}{L}\n", "\\newcommand{\\locationScalar}{\\mu}\n", "\\newcommand{\\locationVector}{\\boldsymbol{ \\locationScalar}}\n", "\\newcommand{\\locationMatrix}{\\mathbf{M}}\n", "\\newcommand{\\variance}[1]{\\text{var}\\left( #1 \\right)}\n", "\\newcommand{\\mappingFunction}{f}\n", "\\newcommand{\\mappingFunctionMatrix}{\\mathbf{F}}\n", "\\newcommand{\\mappingFunctionTwo}{g}\n", "\\newcommand{\\mappingFunctionTwoMatrix}{\\mathbf{G}}\n", "\\newcommand{\\mappingFunctionTwoVector}{\\mathbf{ \\mappingFunctionTwo}}\n", "\\newcommand{\\mappingFunctionVector}{\\mathbf{ \\mappingFunction}}\n", "\\newcommand{\\scaleScalar}{s}\n", "\\newcommand{\\mappingScalar}{w}\n", "\\newcommand{\\mappingVector}{\\mathbf{ \\mappingScalar}}\n", "\\newcommand{\\mappingMatrix}{\\mathbf{W}}\n", "\\newcommand{\\mappingScalarTwo}{v}\n", "\\newcommand{\\mappingVectorTwo}{\\mathbf{ \\mappingScalarTwo}}\n", "\\newcommand{\\mappingMatrixTwo}{\\mathbf{V}}\n", "\\newcommand{\\maxIters}{K}\n", "\\newcommand{\\meanMatrix}{\\mathbf{M}}\n", "\\newcommand{\\meanScalar}{\\mu}\n", 
"\\newcommand{\\meanTwoMatrix}{\\mathbf{M}}\n", "\\newcommand{\\meanTwoScalar}{m}\n", "\\newcommand{\\meanTwoVector}{\\mathbf{ \\meanTwoScalar}}\n", "\\newcommand{\\meanVector}{\\boldsymbol{ \\meanScalar}}\n", "\\newcommand{\\mrnaConcentration}{m}\n", "\\newcommand{\\naturalFrequency}{\\omega}\n", "\\newcommand{\\neighborhood}[1]{\\mathcal{N}\\left( #1 \\right)}\n", "\\newcommand{\\neilurl}{http://inverseprobability.com/}\n", "\\newcommand{\\noiseMatrix}{\\boldsymbol{ E}}\n", "\\newcommand{\\noiseScalar}{\\epsilon}\n", "\\newcommand{\\noiseVector}{\\boldsymbol{ \\epsilon}}\n", "\\newcommand{\\norm}[1]{\\left\\Vert #1 \\right\\Vert}\n", "\\newcommand{\\normalizedLaplacianMatrix}{\\hat{\\mathbf{L}}}\n", "\\newcommand{\\normalizedLaplacianScalar}{\\hat{\\ell}}\n", "\\newcommand{\\normalizedLaplacianVector}{\\hat{\\mathbf{ \\ell}}}\n", "\\newcommand{\\numActive}{m}\n", "\\newcommand{\\numBasisFunc}{m}\n", "\\newcommand{\\numComponents}{m}\n", "\\newcommand{\\numComps}{K}\n", "\\newcommand{\\numData}{n}\n", "\\newcommand{\\numFeatures}{K}\n", "\\newcommand{\\numHidden}{h}\n", "\\newcommand{\\numInducing}{m}\n", "\\newcommand{\\numLayers}{\\ell}\n", "\\newcommand{\\numNeighbors}{K}\n", "\\newcommand{\\numSequences}{s}\n", "\\newcommand{\\numSuccess}{s}\n", "\\newcommand{\\numTasks}{m}\n", "\\newcommand{\\numTime}{T}\n", "\\newcommand{\\numTrials}{S}\n", "\\newcommand{\\outputIndex}{j}\n", "\\newcommand{\\paramVector}{\\boldsymbol{ \\theta}}\n", "\\newcommand{\\parameterMatrix}{\\boldsymbol{ \\Theta}}\n", "\\newcommand{\\parameterScalar}{\\theta}\n", "\\newcommand{\\parameterVector}{\\boldsymbol{ \\parameterScalar}}\n", "\\newcommand{\\partDiff}[2]{\\frac{\\partial#1}{\\partial#2}}\n", "\\newcommand{\\precisionScalar}{j}\n", "\\newcommand{\\precisionVector}{\\mathbf{ \\precisionScalar}}\n", "\\newcommand{\\precisionMatrix}{\\mathbf{J}}\n", "\\newcommand{\\pseudotargetScalar}{\\widetilde{y}}\n", "\\newcommand{\\pseudotargetVector}{\\mathbf{ \\pseudotargetScalar}}\n", "\\newcommand{\\pseudotargetMatrix}{\\mathbf{ \\widetilde{Y}}}\n", "\\newcommand{\\rank}[1]{\\text{rank}\\left(#1\\right)}\n", "\\newcommand{\\rayleighDist}[2]{\\mathcal{R}\\left(#1|#2\\right)}\n", "\\newcommand{\\rayleighSamp}[1]{\\mathcal{R}\\left(#1\\right)}\n", "\\newcommand{\\responsibility}{r}\n", "\\newcommand{\\rotationScalar}{r}\n", "\\newcommand{\\rotationVector}{\\mathbf{ \\rotationScalar}}\n", "\\newcommand{\\rotationMatrix}{\\mathbf{R}}\n", "\\newcommand{\\sampleCovScalar}{s}\n", "\\newcommand{\\sampleCovVector}{\\mathbf{ \\sampleCovScalar}}\n", "\\newcommand{\\sampleCovMatrix}{\\mathbf{s}}\n", "\\newcommand{\\scalarProduct}[2]{\\left\\langle{#1},{#2}\\right\\rangle}\n", "\\newcommand{\\sign}[1]{\\text{sign}\\left(#1\\right)}\n", "\\newcommand{\\sigmoid}[1]{\\sigma\\left(#1\\right)}\n", "\\newcommand{\\singularvalue}{\\ell}\n", "\\newcommand{\\singularvalueMatrix}{\\mathbf{L}}\n", "\\newcommand{\\singularvalueVector}{\\mathbf{l}}\n", "\\newcommand{\\sorth}{\\mathbf{u}}\n", "\\newcommand{\\spar}{\\lambda}\n", "\\newcommand{\\trace}[1]{\\text{tr}\\left(#1\\right)}\n", "\\newcommand{\\BasalRate}{B}\n", "\\newcommand{\\DampingCoefficient}{C}\n", "\\newcommand{\\DecayRate}{D}\n", "\\newcommand{\\Displacement}{X}\n", "\\newcommand{\\LatentForce}{F}\n", "\\newcommand{\\Mass}{M}\n", "\\newcommand{\\Sensitivity}{S}\n", "\\newcommand{\\basalRate}{b}\n", "\\newcommand{\\dampingCoefficient}{c}\n", "\\newcommand{\\mass}{m}\n", "\\newcommand{\\sensitivity}{s}\n", "\\newcommand{\\springScalar}{\\kappa}\n", 
"\\newcommand{\\springVector}{\\boldsymbol{ \\kappa}}\n", "\\newcommand{\\springMatrix}{\\boldsymbol{ \\mathcal{K}}}\n", "\\newcommand{\\tfConcentration}{p}\n", "\\newcommand{\\tfDecayRate}{\\delta}\n", "\\newcommand{\\tfMrnaConcentration}{f}\n", "\\newcommand{\\tfVector}{\\mathbf{ \\tfConcentration}}\n", "\\newcommand{\\velocity}{v}\n", "\\newcommand{\\sufficientStatsScalar}{g}\n", "\\newcommand{\\sufficientStatsVector}{\\mathbf{ \\sufficientStatsScalar}}\n", "\\newcommand{\\sufficientStatsMatrix}{\\mathbf{G}}\n", "\\newcommand{\\switchScalar}{s}\n", "\\newcommand{\\switchVector}{\\mathbf{ \\switchScalar}}\n", "\\newcommand{\\switchMatrix}{\\mathbf{S}}\n", "\\newcommand{\\tr}[1]{\\text{tr}\\left(#1\\right)}\n", "\\newcommand{\\loneNorm}[1]{\\left\\Vert #1 \\right\\Vert_1}\n", "\\newcommand{\\ltwoNorm}[1]{\\left\\Vert #1 \\right\\Vert_2}\n", "\\newcommand{\\onenorm}[1]{\\left\\vert#1\\right\\vert_1}\n", "\\newcommand{\\twonorm}[1]{\\left\\Vert #1 \\right\\Vert}\n", "\\newcommand{\\vScalar}{v}\n", "\\newcommand{\\vVector}{\\mathbf{v}}\n", "\\newcommand{\\vMatrix}{\\mathbf{V}}\n", "\\newcommand{\\varianceDist}[2]{\\text{var}_{#2}\\left( #1 \\right)}\n", "\\newcommand{\\vecb}[1]{\\left(#1\\right):}\n", "\\newcommand{\\weightScalar}{w}\n", "\\newcommand{\\weightVector}{\\mathbf{ \\weightScalar}}\n", "\\newcommand{\\weightMatrix}{\\mathbf{W}}\n", "\\newcommand{\\weightedAdjacencyMatrix}{\\mathbf{A}}\n", "\\newcommand{\\weightedAdjacencyScalar}{a}\n", "\\newcommand{\\weightedAdjacencyVector}{\\mathbf{ \\weightedAdjacencyScalar}}\n", "\\newcommand{\\onesVector}{\\mathbf{1}}\n", "\\newcommand{\\zerosVector}{\\mathbf{0}}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Deep Gaussian Processes\n", "=======================" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Download some utilty files" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import urllib.request" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "urllib.request.urlretrieve('https://raw.githubusercontent.com/lawrennd/talks/gh-pages/teaching_plots.py','teaching_plots.py')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "urllib.request.urlretrieve('https://raw.githubusercontent.com/lawrennd/talks/gh-pages/mlai.py','mlai.py')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "urllib.request.urlretrieve('https://raw.githubusercontent.com/lawrennd/talks/gh-pages/gp_tutorial.py','gp_tutorial.py')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "urllib.request.urlretrieve('https://raw.githubusercontent.com/lawrennd/talks/gh-pages/deepgp_tutorial.py','deepgp_tutorial.py')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "for path in ['gp', 'datasets', 'deepgp']:\n", " if not os.path.exists(path):\n", " os.mkdir(path)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade git+https://github.com/sods/ods" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pip install gpy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "GPy: A Gaussian Process 
Framework in Python\n", "-------------------------------------------\n", "\n", "Gaussian processes are a flexible tool for non-parametric analysis with\n", "uncertainty. The GPy software was started in Sheffield to provide an\n", "easy-to-use interface to GPs, one which allowed the user to focus on the\n", "modelling rather than the mathematics.\n", "\n", "\n", "\n", "Figure: GPy is a BSD licensed software code base for implementing\n", "Gaussian process models in Python. It is designed for teaching and\n", "modelling. We welcome contributions which can be made through the GitHub\n", "repository\n", "https://github.com/SheffieldML/GPy\n", "\n", "GPy is a BSD licensed software code base for implementing Gaussian\n", "process models in Python. This allows GPs to be combined with a wide\n", "variety of software libraries.\n", "\n", "The software itself is available on\n", "[GitHub](https://github.com/SheffieldML/GPy) and the team welcomes\n", "contributions.\n", "\n", "The aim for GPy is to be a probabilistic-style programming language,\n", "i.e. you specify the model rather than the algorithm. As well as a large\n", "range of covariance functions, the software allows for non-Gaussian\n", "likelihoods, multivariate outputs, dimensionality reduction and\n", "approximations for larger data sets.\n", "\n", "The documentation for GPy can be found\n", "[here](https://gpy.readthedocs.io/en/latest/).\n", "\n", "This notebook depends on PyDeepGP. This library can be installed via\n", "pip." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade git+https://github.com/SheffieldML/PyDeepGP.git" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Universe isn’t as Gaussian as it Was\n", "------------------------------------\n", "\n", "The [Planck\n", "spacecraft](https://en.wikipedia.org/wiki/Planck_(spacecraft)) was a European\n", "Space Agency space telescope that mapped the cosmic microwave background\n", "(CMB) from 2009 to 2013. The [Cosmic Microwave\n", "Background](https://en.wikipedia.org/wiki/Cosmic_microwave_background)\n", "is the first observable echo we have of the big bang. It dates to\n", "approximately 400,000 years after the big bang, at which time the universe\n", "was approximately $10^8$ times smaller and the temperature of the\n", "Universe was high, around $3 \\times 10^8$ degrees Kelvin. The Universe\n", "was in the form of a hydrogen plasma. The echo we observe is the moment\n", "when the Universe was cool enough for protons and electrons to combine\n", "to form hydrogen atoms. At this moment, the Universe became transparent\n", "for the first time, and photons could travel through space.\n", "\n", "\n", "\n", "Figure: Artist’s impression of the Planck spacecraft which measured\n", "the Cosmic Microwave Background between 2009 and 2013.\n", "\n", "The objective of the Planck spacecraft was to measure the anisotropy\n", "and statistics of the Cosmic Microwave Background. This was important,\n", "because if the standard model of the Universe is correct, the variations\n", "of the CMB around the very high temperature of the Universe should be\n", "distributed according to a Gaussian process.[1] Currently our best\n", "estimates show this to be the case (Elsner et al., 2016, 2015; Jaffe et\n", "al., 1998; Pontzen and Peiris, 2010).\n", "\n", "To the high degree of precision that we could measure with the Planck\n", "space telescope, the CMB appears to be a Gaussian process. 
The\n", "parameters of its covariance function are given by the fundamental\n", "parameters of the universe, for example the amount of dark matter and\n", "matter in the universe.\n", "\n", "\n", "\n", "Figure: The cosmic microwave background is, to a very high degree of\n", "precision, a Gaussian process. The parameters of its covariance function\n", "are given by fundamental parameters of the universe, such as the amount\n", "of dark matter and mass.\n", "\n", "[1] Most of my understanding of this is taken from conversations with\n", "Kyle Cranmer, a physicist who makes extensive use of machine learning\n", "methods in his work. See e.g. Mishra-Sharma and Cranmer (2020) from Kyle\n", "and Siddharth Mishra-Sharma. Of course, any errors in the above text are\n", "mine and do not stem from Kyle." ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Simulating a CMB Map\n", "--------------------\n", "\n", "The simulation was created by [Boris\n", "Leistedt](https://ixkael.github.io/), see the [original Jupyter notebook\n", "here](https://github.com/ixkael/Prob-tools/blob/master/notebooks/The%20CMB%20as%20a%20Gaussian%20Process.ipynb).\n", "\n", "Here we use that code to simulate our own universe and sample from what\n", "it looks like.\n", "\n", "First we install some specialist software, as well as `matplotlib`,\n", "`scipy` and `numpy`, which we require\n", "\n", "- `camb`:\n", " http://camb.readthedocs.io/en/latest/\n", "- `healpy`:\n", " https://healpy.readthedocs.io/en/latest/" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pip install camb" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pip install healpy" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "%config IPython.matplotlib.backend = 'retina'\n", "%config InlineBackend.figure_format = 'retina'\n", "\n", "import matplotlib\n", "import matplotlib.pyplot as plt\n", "from matplotlib import rc\n", "from cycler import cycler\n", "\n", "import numpy as np\n", "\n", "rc(\"font\", family=\"serif\", size=14)\n", "rc(\"text\", usetex=False)\n", "matplotlib.rcParams['lines.linewidth'] = 2\n", "matplotlib.rcParams['patch.linewidth'] = 2\n", "matplotlib.rcParams['axes.prop_cycle'] =\\\n", " cycler(\"color\", ['k', 'c', 'm', 'y'])\n", "matplotlib.rcParams['axes.labelsize'] = 16\n", "\n", "import healpy as hp\n", "\n", "import camb\n", "from camb import model, initialpower" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now we use the theoretical power spectrum to design the covariance\n", "function." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nside = 512 # Healpix parameter, giving 12*nside**2 equal-area pixels on the sphere.\n", "lmax = 3*nside # band-limit. Should be 2*nside < lmax < 4*nside to get information content." ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now we design our Universe. It is parameterised according to the\n", "[$\\Lambda$CDM model](https://en.wikipedia.org/wiki/Lambda-CDM_model).\n", "The variables are as follows. `H0` is the Hubble parameter (in\n", "km/s/Mpc). `ombh2` is the physical baryon density parameter. `omch2`\n", "is the physical dark matter density parameter. `mnu` is the sum of the\n", "neutrino masses (in electron volts). 
`omk` is $\\Omega_k$, the\n", "curvature parameter, which is here set to 0, giving the minimal six\n", "parameter Lambda-CDM model. `tau` is the reionization optical depth.\n", "\n", "Then we set `ns`, the “scalar spectral index”. This was estimated by\n", "Planck to be 0.96. Then there’s `r`, the ratio of the tensor power\n", "spectrum to the scalar power spectrum. This has been estimated by Planck to\n", "be under 0.11. Here we set it to zero. These parameters are associated\n", "[with inflation](https://en.wikipedia.org/wiki/Primordial_fluctuations)." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Mostly following http://camb.readthedocs.io/en/latest/CAMBdemo.html with parameters from https://en.wikipedia.org/wiki/Lambda-CDM_model\n", "\n", "pars = camb.CAMBparams()\n", "pars.set_cosmology(H0=67.74, ombh2=0.0223, omch2=0.1188, mnu=0.06, omk=0, tau=0.066)\n", "pars.InitPower.set_params(ns=0.96, r=0)" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Having set the parameters, we now use the Python software “Code for\n", "Anisotropies in the Microwave Background” to get the results." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pars.set_for_lmax(lmax, lens_potential_accuracy=0);\n", "results = camb.get_results(pars)\n", "powers = results.get_cmb_power_spectra(pars)\n", "totCL = powers['total']\n", "unlensedCL = powers['unlensed_scalar']\n", "\n", "ells = np.arange(totCL.shape[0])\n", "Dells = totCL[:, 0]\n", "Cells = Dells * 2*np.pi / ells / (ells + 1) # change of convention to get C_ell\n", "Cells[0:2] = 0" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cmbmap = hp.synfast(Cells, nside, \n", " lmax=lmax, mmax=None, alm=False, pol=False, \n", " pixwin=False, fwhm=0.0, sigma=None, new=False, verbose=True)" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hp.mollview(cmbmap)\n", "fig = plt.gcf()\n", "fig.savefig('./physics/mollweide-sample-cmb.png', transparent=True)" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: A simulation of the Cosmic Microwave Background obtained\n", "through sampling from the relevant Gaussian process covariance (in polar\n", "co-ordinates).\n", "\n", "The world we see today, of course, is not a Gaussian process. There are\n", "many discontinuities, for example, in the density of matter, and\n", "therefore in the temperature of the Universe.\n", "\n", "Figure: What we observe today is some non-linear function of the\n", "cosmic microwave background.\n", "\n", "We can think of today’s observed Universe, though, as being a\n", "consequence of those temperature fluctuations in the CMB. Those\n", "fluctuations are only order $10^{-6}$ of the scale of the overall\n", "temperature of the Universe. But minor fluctuations in that density are\n", "what triggered the pattern of formation of the galaxies, how stars\n", "formed, and the creation of the elements that are the building blocks of our\n", "Earth (Vogelsberger et al., 2020)." ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Low Rank Gaussian Processes\n", "---------------------------\n", "\n", "\n", "\n", "Figure: In recent years, approximations for Gaussian process models\n", "haven’t been the most fashionable approach to machine learning. 
Image\n", "credit: Kai Arulkumaran\n", "\n", "Inference in a Gaussian process has computational complexity of\n", "$\\mathcal{O}(n^3)$ and storage demands of $\\mathcal{O}(n^2)$. This is\n", "too large for many modern data sets.\n", "\n", "Low rank approximations allow us to work with Gaussian processes with\n", "computational complexity of $\\mathcal{O}(nm^2)$ and storage demands of\n", "$\\mathcal{O}(nm)$, where $m$ is a user-chosen parameter.\n", "\n", "In machine learning, low rank approximations date back to Smola and\n", "Bartlett (n.d.) and Williams and Seeger (n.d.), who considered the Nyström\n", "approximation, and to Csató and Opper (2002); Csató (2002), who considered low\n", "rank approximations in the context of on-line learning. Selection of\n", "active points for the approximation was considered by Seeger et al.\n", "(n.d.), and Snelson and Ghahramani (n.d.) first proposed that the active\n", "set could be optimized directly. Those approaches were reviewed by\n", "Quiñonero Candela and Rasmussen (2005) under a unifying likelihood\n", "approximation perspective. General rules for deriving the maximum\n", "likelihood for these sparse approximations were given in Lawrence\n", "(n.d.).\n", "\n", "Modern variational interpretations of these low rank approaches were\n", "first explored in Titsias (n.d.). A more modern summary which considers\n", "each of these approximations as an $\\alpha$-divergence is given by Bui\n", "et al. (2017)." ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Variational Compression\n", "-----------------------\n", "\n", "Inducing variables are a compression of the real observations. The basic\n", "idea is: can I create a new data set that summarizes all the information\n", "in the original data set? If this data set is smaller, I’ve compressed\n", "the information in the original data set.\n", "\n", "Inducing variables can be thought of as pseudo-data, indeed in Snelson\n", "and Ghahramani (n.d.) they were referred to as *pseudo-points*.\n", "\n", "The only requirement for inducing variables is that they are jointly\n", "distributed as a Gaussian process with the original data. This means\n", "that they can be from the space $\\mathbf{ f}$ or a space that is related\n", "through a linear operator (see e.g. Álvarez et al. (2010)). For example\n", "we could choose to store the gradient of the function at particular\n", "points or a value from the frequency spectrum of the function\n", "(Lázaro-Gredilla et al., 2010)." ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Variational Compression II\n", "--------------------------\n", "\n", "Inducing variables don’t only allow for the compression of the\n", "non-parametric information into a reduced data set, but they also allow\n", "for computational scaling of the algorithms through, for example,\n", "stochastic variational approaches (Hensman et al., n.d.) or\n", "parallelization (Gal et al., n.d.; Dai et al., 2014; Seeger et al.,\n", "2017).\n", "\n", "\n", "\n", "We’ve seen how we go from parametric to non-parametric. The limit\n", "implies infinite dimensional $\\mathbf{ w}$. Gaussian processes are\n", "generally non-parametric: they combine the data with the covariance function to give\n", "the model. This representation *cannot* be summarized by a parameter vector\n", "of a fixed size.\n", "\n", "Parametric models have a representation that does not respond to\n", "increasing training set size. 
Bayesian posterior distributions over\n", "parameters contain the information about the training data, for example\n", "if we use Bayes’ rule from training data, $$\n", "p\\left(\\mathbf{ w}|\\mathbf{ y}, \\mathbf{X}\\right),\n", "$$ to make predictions on test data $$\n", "p\\left(y_*|\\mathbf{X}_*, \\mathbf{ y}, \\mathbf{X}\\right) = \\int\n", " p\\left(y_*|\\mathbf{ w},\\mathbf{X}_*\\right)p\\left(\\mathbf{ w}|\\mathbf{ y},\n", " \\mathbf{X}\\right)\\text{d}\\mathbf{ w}\n", "$$ then $\\mathbf{ w}$ becomes a bottleneck for information about the\n", "training set to pass to the test set. The solution is to increase $m$ so\n", "that the bottleneck is so large that it no longer presents a problem.\n", "How big is big enough for $m$? Non-parametrics says\n", "$m\\rightarrow \\infty$.\n", "\n", "It is now no longer possible to manipulate the model through the standard\n", "parametric form. However, it *is* possible to express *parametric* models as\n", "GPs: $$\n", "k\\left(\\mathbf{ x}_i,\\mathbf{ x}_j\\right)=\\phi_:\\left(\\mathbf{ x}_i\\right)^\\top\\phi_:\\left(\\mathbf{ x}_j\\right).\n", "$$ These are known as degenerate covariance matrices. Their rank is at\n", "most $m$; non-parametric models have full rank covariance matrices. Most\n", "well known is the “linear kernel”, $$\n", "k(\\mathbf{ x}_i, \\mathbf{ x}_j) = \\mathbf{ x}_i^\\top\\mathbf{ x}_j.\n", "$$ For non-parametrics, prediction at a new point, $\\mathbf{ f}_*$, is\n", "made by conditioning on $\\mathbf{ f}$ in the joint distribution. In GPs\n", "this involves combining the training data with the covariance function\n", "and the mean function. The parametric model is a special case where conditional\n", "prediction can be summarized in a *fixed* number of parameters.\n", "The complexity of the parametric model remains fixed regardless of the size of\n", "our training data set. For a non-parametric model the required number of\n", "parameters grows with the size of the training data." ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Augment Variable Space\n", "----------------------\n", "\n", "In inducing variable approximations, we augment the variable space with\n", "a set of inducing points, $\\mathbf{ u}$. These inducing points are\n", "jointly Gaussian distributed with the points from our function,\n", "$\\mathbf{ f}$. So we have a joint Gaussian process with covariance, $$\n", "\\begin{bmatrix}\n", "\\mathbf{ f}\\\\\n", "\\mathbf{ u}\n", "\\end{bmatrix} \\sim \\mathcal{N}\\left(\\mathbf{0},\\mathbf{K}\\right)\n", "$$ where the kernel matrix itself can be decomposed into $$\n", "\\mathbf{K}=\n", "\\begin{bmatrix}\n", "\\mathbf{K}_{\\mathbf{ f}\\mathbf{ f}}& \\mathbf{K}_{\\mathbf{ f}\\mathbf{ u}}\\\\\n", "\\mathbf{K}_{\\mathbf{ u}\\mathbf{ f}}& \\mathbf{K}_{\\mathbf{ u}\\mathbf{ u}}\n", "\\end{bmatrix}\n", "$$\n", "\n", "This defines a joint density between the original function points,\n", "$\\mathbf{ f}$, and our inducing points, $\\mathbf{ u}$. This can be\n", "decomposed through the product rule to give $$\n", "p(\\mathbf{ f}, \\mathbf{ u}) = p(\\mathbf{ f}| \\mathbf{ u}) p(\\mathbf{ u}).\n", "$$\n", 
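"Writing the conditional density out explicitly, using the standard Gaussian\n", "conditioning identity, gives $$\n", "p(\\mathbf{ f}|\\mathbf{ u}) = \\mathcal{N}\\left(\\mathbf{ f}|\\mathbf{K}_{\\mathbf{ f}\\mathbf{ u}}\\mathbf{K}_{\\mathbf{ u}\\mathbf{ u}}^{-1}\\mathbf{ u}, \\mathbf{K}_{\\mathbf{ f}\\mathbf{ f}} - \\mathbf{K}_{\\mathbf{ f}\\mathbf{ u}}\\mathbf{K}_{\\mathbf{ u}\\mathbf{ u}}^{-1}\\mathbf{K}_{\\mathbf{ u}\\mathbf{ f}}\\right),\n", "$$ a form that is useful to keep in mind when we integrate over $\\mathbf{ f}$ below.\n", "\n", 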
"The Gaussian process is (typically) given by a noise corrupted form\n", "of $\\mathbf{ f}$, i.e., $$\n", "y(\\mathbf{ x}) = f(\\mathbf{ x}) + \\epsilon,\n", "$$ which can be written probabilistically as, $$\n", "p(\\mathbf{ y}) = \\int p(\\mathbf{ y}|\\mathbf{ f}) p(\\mathbf{ f}) \\text{d}\\mathbf{ f},\n", "$$ where for the independent case we have\n", "$p(\\mathbf{ y}| \\mathbf{ f}) = \\prod_{i=1}^np(y_i|f_i)$.\n", "\n", "Inducing variables are like auxiliary variables in Monte Carlo\n", "algorithms. We introduce the inducing variables by augmenting this\n", "integral with an additional integral over $\\mathbf{ u}$, $$\n", "p(\\mathbf{ y}) = \\int p(\\mathbf{ y}|\\mathbf{ f}) p(\\mathbf{ f}|\\mathbf{ u}) p(\\mathbf{ u}) \\text{d}\\mathbf{ u}\\text{d}\\mathbf{ f}.\n", "$$ Now, conceptually speaking we are going to integrate out\n", "$\\mathbf{ f}$, initially leaving $\\mathbf{ u}$ in place. This gives, $$\n", "p(\\mathbf{ y}) = \\int p(\\mathbf{ y}|\\mathbf{ u}) p(\\mathbf{ u}) \\text{d}\\mathbf{ u}.\n", "$$\n", "\n", "Note the similarity between this form and our standard *parametric*\n", "form. If we had defined our model through standard basis functions we\n", "would have, $$\n", "y(\\mathbf{ x}) = \\mathbf{ w}^\\top\\boldsymbol{ \\phi}(\\mathbf{ x}) + \\epsilon\n", "$$ and the resulting probabilistic representation would be $$\n", "p(\\mathbf{ y}) = \\int p(\\mathbf{ y}|\\mathbf{ w}) p(\\mathbf{ w}) \\text{d} \\mathbf{ w}\n", "$$ allowing us to predict $$\n", "p(\\mathbf{ y}^*|\\mathbf{ y}) = \\int p(\\mathbf{ y}^*|\\mathbf{ w}) p(\\mathbf{ w}|\\mathbf{ y}) \\text{d} \\mathbf{ w}\n", "$$\n", "\n", "The new prediction algorithm involves $$\n", "p(\\mathbf{ y}^*|\\mathbf{ y}) = \\int p(\\mathbf{ y}^*|\\mathbf{ u}) p(\\mathbf{ u}|\\mathbf{ y}) \\text{d} \\mathbf{ u}\n", "$$ but *importantly* the length of $\\mathbf{ u}$ is not fixed at\n", "*design* time like the number of parameters was. We can vary the number\n", "of inducing variables we use to give us the model capacity we require.\n", "\n", "Unfortunately, computation of $p(\\mathbf{ y}|\\mathbf{ u})$ turns out to\n", "be intractable. As a result, we need to turn to approximations to make\n", "progress." ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Variational Bound on $p(\\mathbf{ y}|\\mathbf{ u})$\n", "-------------------------------------------------\n", "\n", "The conditional density of the data given the inducing points can be\n", "*lower* bounded variationally $$\n", "\\begin{aligned}\n", " \\log p(\\mathbf{ y}|\\mathbf{ u}) & = \\log \\int p(\\mathbf{ y}|\\mathbf{ f}) p(\\mathbf{ f}|\\mathbf{ u}) \\text{d}\\mathbf{ f}\\\\ & = \\int q(\\mathbf{ f}) \\log \\frac{p(\\mathbf{ y}|\\mathbf{ f}) p(\\mathbf{ f}|\\mathbf{ u})}{q(\\mathbf{ f})}\\text{d}\\mathbf{ f}+ \\text{KL}\\left( q(\\mathbf{ f})\\,\\|\\,p(\\mathbf{ f}|\\mathbf{ y}, \\mathbf{ u}) \\right).\n", "\\end{aligned}\n", "$$\n", "\n", "The key innovation from Titsias (n.d.) was to then make a particular\n", "choice for $q(\\mathbf{ f})$. 
If we set\n", "$q(\\mathbf{ f})=p(\\mathbf{ f}|\\mathbf{ u})$, $$\n", " \\log p(\\mathbf{ y}|\\mathbf{ u}) \\geq \\int p(\\mathbf{ f}|\\mathbf{ u}) \\log p(\\mathbf{ y}|\\mathbf{ f})\\text{d}\\mathbf{ f}.\n", " $$ $$\n", " p(\\mathbf{ y}|\\mathbf{ u}) \\geq \\exp \\int p(\\mathbf{ f}|\\mathbf{ u}) \\log p(\\mathbf{ y}|\\mathbf{ f})\\text{d}\\mathbf{ f}.\n", " $$" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Optimal Compression in Inducing Variables\n", "-----------------------------------------\n", "\n", "Maximizing the lower bound minimizes the Kullback-Leibler divergence (or\n", "*information gain*) between our approximating density,\n", "$p(\\mathbf{ f}|\\mathbf{ u})$, and the true posterior density,\n", "$p(\\mathbf{ f}|\\mathbf{ y}, \\mathbf{ u})$.\n", "\n", "$$\n", " \\text{KL}\\left( p(\\mathbf{ f}|\\mathbf{ u})\\,\\|\\,p(\\mathbf{ f}|\\mathbf{ y}, \\mathbf{ u}) \\right) = \\int p(\\mathbf{ f}|\\mathbf{ u}) \\log \\frac{p(\\mathbf{ f}|\\mathbf{ u})}{p(\\mathbf{ f}|\\mathbf{ y}, \\mathbf{ u})}\\text{d}\\mathbf{ f}\n", " $$\n", "\n", "This divergence is minimized when the information stored about $\\mathbf{ y}$\n", "is already stored in $\\mathbf{ u}$. In other words, maximizing the bound\n", "seeks an *optimal compression* from the *information gain* perspective.\n", "\n", "For the case where $\\mathbf{ u}= \\mathbf{ f}$ the bound is exact\n", "($\\mathbf{ f}$ $d$-separates $\\mathbf{ y}$ from $\\mathbf{ u}$)." ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Choice of Inducing Variables\n", "----------------------------\n", "\n", "The quality of the resulting bound is determined by the choice of the\n", "inducing variables. You are free to choose whichever heuristics you like\n", "for the inducing variables, as long as they are drawn jointly from a\n", "valid Gaussian process, i.e. such that $$\n", "\\begin{bmatrix}\n", "\\mathbf{ f}\\\\\n", "\\mathbf{ u}\n", "\\end{bmatrix} \\sim \\mathcal{N}\\left(\\mathbf{0},\\mathbf{K}\\right)\n", "$$ where the kernel matrix itself can be decomposed into $$\n", "\\mathbf{K}=\n", "\\begin{bmatrix}\n", "\\mathbf{K}_{\\mathbf{ f}\\mathbf{ f}}& \\mathbf{K}_{\\mathbf{ f}\\mathbf{ u}}\\\\\n", "\\mathbf{K}_{\\mathbf{ u}\\mathbf{ f}}& \\mathbf{K}_{\\mathbf{ u}\\mathbf{ u}}\n", "\\end{bmatrix}\n", "$$ Choosing the inducing variables amounts to specifying\n", "$\\mathbf{K}_{\\mathbf{ f}\\mathbf{ u}}$ and\n", "$\\mathbf{K}_{\\mathbf{ u}\\mathbf{ u}}$ such that $\\mathbf{K}$ remains\n", "positive definite. The typical choice is to choose $\\mathbf{ u}$ in the\n", "same domain as $\\mathbf{ f}$, associating each inducing output, $u_i$,\n", "with a corresponding input location, $\\mathbf{ z}_i$. However, more\n", "imaginative choices are absolutely possible. In particular, if\n", "$\\mathbf{ u}$ is related to $\\mathbf{ f}$ through a linear operator (see\n", "e.g. Álvarez et al. (2010)), then valid\n", "$\\mathbf{K}_{\\mathbf{ u}\\mathbf{ u}}$ and\n", "$\\mathbf{K}_{\\mathbf{ u}\\mathbf{ f}}$ can be constructed. For example we\n", "could choose to store the gradient of the function at particular points\n", "or a value from the frequency spectrum of the function (Lázaro-Gredilla\n", "et al., 2010)."
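] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "To make the role of the inducing variables concrete, the short sketch below\n", "builds the low rank approximation\n", "$\\mathbf{Q}_{\\mathbf{ f}\\mathbf{ f}} = \\mathbf{K}_{\\mathbf{ f}\\mathbf{ u}}\\mathbf{K}_{\\mathbf{ u}\\mathbf{ u}}^{-1}\\mathbf{K}_{\\mathbf{ u}\\mathbf{ f}}$\n", "implied by a set of inducing inputs $\\mathbf{Z}$ and compares it with the full\n", "covariance. It is an illustrative addition in plain `numpy`: the kernel, the\n", "input grid and the inducing locations are arbitrary choices, not taken from\n", "the models above." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "def rbf_kernel(X, X2, variance=1.0, lengthscale=1.0):\n", "    # Exponentiated quadratic covariance between two sets of inputs.\n", "    sqdist = (X**2).sum(1)[:, None] + (X2**2).sum(1)[None, :] - 2*X @ X2.T\n", "    return variance*np.exp(-0.5*sqdist/lengthscale**2)\n", "\n", "X_demo = np.linspace(0, 10, 200)[:, None]  # input locations of f\n", "Z_demo = np.linspace(0, 10, 10)[:, None]   # inducing input locations\n", "\n", "Kff = rbf_kernel(X_demo, X_demo)\n", "Kfu = rbf_kernel(X_demo, Z_demo)\n", "Kuu = rbf_kernel(Z_demo, Z_demo) + 1e-6*np.eye(Z_demo.shape[0])  # jitter for numerical stability\n", "\n", "# Low rank approximation to Kff implied by the inducing points.\n", "Qff = Kfu @ np.linalg.solve(Kuu, Kfu.T)\n", "\n", "# tr(Kff - Qff) measures the signal the inducing points fail to capture;\n", "# for a Gaussian likelihood it appears as a penalty term in the variational bound.\n", "print('tr(Kff - Qff) =', np.trace(Kff - Qff))" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Increasing the number of inducing inputs, or placing them more carefully,\n", "shrinks this trace and moves the approximation closer to the full Gaussian\n", "process."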
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Variational Compression II\n", "--------------------------\n", "\n", "Inducing variables don’t only allow for the compression of the\n", "non-parametric information into a reduced data set, but they also allow\n", "for computational scaling of the algorithms through, for example\n", "stochastic variational approaches (Hensman et al., n.d.; Hoffman et al.,\n", "2012) or parallelization (Dai et al., 2014; Gal et al., n.d.; Seeger et\n", "al., 2017).\n", "\n", "" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "A Simple Regression Problem\n", "---------------------------\n", "\n", "Here we set up a simple one dimensional regression problem. The input\n", "locations, $\\mathbf{X}$, are in two separate clusters. The response\n", "variable, $\\mathbf{ y}$, is sampled from a Gaussian process with an\n", "exponentiated quadratic covariance." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import GPy" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.random.seed(101)" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "N = 50\n", "noise_var = 0.01\n", "X = np.zeros((N, 1))\n", "X[:25, :] = np.linspace(0,3,25)[:,None] # First cluster of inputs/covariates\n", "X[25:, :] = np.linspace(7,10,25)[:,None] # Second cluster of inputs/covariates\n", "\n", "# Sample response variables from a Gaussian process with exponentiated quadratic covariance.\n", "k = GPy.kern.RBF(1)\n", "y = np.random.multivariate_normal(np.zeros(N),k.K(X)+np.eye(N)*np.sqrt(noise_var)).reshape(-1,1)" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "First we perform a full Gaussian process regression on the data. We\n", "create a GP model, `m_full`, and fit it to the data, plotting the\n", "resulting fit." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m_full = GPy.models.GPRegression(X,y)\n", "_ = m_full.optimize(messages=True) # Optimize parameters of covariance function" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import mlai\n", "import teaching_plots as plot \n", "from gp_tutorial import gpplot" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m_full, ax=ax, xlabel='$x$', ylabel='$y$', fontsize=20, portion=0.2)\n", "xlim = ax.get_xlim()\n", "ylim = ax.get_ylim()\n", "mlai.write_figure(figure=fig,\n", " filename='./gp/sparse-demo-full-gp.svg', \n", " transparent=True, frameon=True)" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Full Gaussian process fitted to the data set.\n", "\n", "Now we set up the inducing variables, $\\mathbf{u}$. Each inducing\n", "variable has its own associated input location, collected in $\\mathbf{Z}$, which lives\n", "in the same space as $\\mathbf{X}$. Here we are using the true covariance\n", "function parameters to generate the fit."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "kern = GPy.kern.RBF(1)\n", "Z = np.hstack(\n", " (np.linspace(2.5,4.,3),\n", " np.linspace(7,8.5,3)))[:,None]\n", "m = GPy.models.SparseGPRegression(X,y,kernel=kern,Z=Z)\n", "m.noise_var = noise_var\n", "m.inducing_inputs.constrain_fixed()\n", "display(m)" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m, ax=ax, xlabel='$x$', ylabel='$y$', fontsize=20, portion=0.2, xlim=xlim, ylim=ylim)\n", "mlai.write_figure(figure=fig,\n", " filename='./gp/sparse-demo-constrained-inducing-6-unlearned-gp.svg', \n", " transparent=True, frameon=True)" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Sparse Gaussian process fitted with six inducing variables,\n", "no optimization of parameters or inducing variables." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_ = m.optimize(messages=True)\n", "display(m)" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m, ax=ax, xlabel='$x$', ylabel='$y$', fontsize=20, portion=0.2, xlim=xlim, ylim=ylim)\n", "mlai.write_figure(figure=fig,\n", " filename='./gp/sparse-demo-constrained-inducing-6-learned-gp.svg', \n", " transparent=True, frameon=True)" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Gaussian process fitted with inducing variables fixed and\n", "parameters optimized." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.randomize()\n", "m.inducing_inputs.unconstrain()\n", "_ = m.optimize(messages=True)" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m, ax=ax, xlabel='$x$', ylabel='$y$', fontsize=20, portion=0.2,xlim=xlim, ylim=ylim)\n", "mlai.write_figure(figure=fig,\n", " filename='./gp/sparse-demo-unconstrained-inducing-6-gp.svg', \n", " transparent=True, frameon=True)" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Gaussian process fitted with location of inducing variables\n", "and parameters both optimized.\n", "\n", "Now we will vary the number of inducing points used to form the\n", "approximation." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.num_inducing=8\n", "m.randomize()\n", "M = 8\n", "m.set_Z(np.random.rand(M,1)*12)\n", "\n", "_ = m.optimize(messages=True)" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m, ax=ax, xlabel='$x$', ylabel='$y$', fontsize=20, portion=0.2, xlim=xlim, ylim=ylim)\n", "mlai.write_figure(figure=fig,\n", " filename='./gp/sparse-demo-sparse-inducing-8-gp.svg', \n", " transparent=True, frameon=True)" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "Figure: Comparison of the full Gaussian process fit with a sparse\n", "Gaussian process using eight inducing variables. Both inducing variables\n", "and parameters are optimized.\n", "\n", "And we can compare the log likelihood of the result to that of the full model."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(m.log_likelihood(), m_full.log_likelihood())" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Modern Review\n", "-------------\n", "\n", "- *A Unifying Framework for Gaussian Process Pseudo-Point\n", " Approximations using Power Expectation Propagation* Bui et\n", " al. (2017)\n", "\n", "- *Deep Gaussian Processes and Variational Propagation of Uncertainty*\n", " Damianou (2015)\n", "\n", "Even in the early days of Gaussian processes in machine learning, it was\n", "understood that we were throwing something fundamental away. This is\n", "perhaps captured best by David MacKay in his 1997 NeurIPS tutorial on\n", "Gaussian processes, where he asked “Have we thrown out the baby with the\n", "bathwater?”. The quote below is from his summarization paper.\n", "\n", "> According to the hype of 1987, neural networks were meant to be\n", "> intelligent models which discovered features and patterns in data.\n", "> Gaussian processes in contrast are simply smoothing devices. How can\n", "> Gaussian processes possibly replace neural networks? What is going on?\n", ">\n", "> MacKay (n.d.)" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.deep_nn(diagrams='./deepgp/')" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: A deep neural network. Input nodes are shown at the bottom.\n", "Each hidden layer is the result of applying an affine transformation to\n", "the previous layer and passing the result through an activation function.\n", "\n", "Mathematically, each layer of a neural network is given by computing\n", "the activation function, $\\phi(\\cdot)$, contingent on the\n", "previous layer, or the inputs. In this way the activation functions are\n", "composed to generate more complex interactions than would be possible\n", "with any single layer. $$\n", "\\begin{align}\n", " \\mathbf{ h}_{1} &= \\phi\\left(\\mathbf{W}_1 \\mathbf{ x}\\right)\\\\\n", " \\mathbf{ h}_{2} &= \\phi\\left(\\mathbf{W}_2\\mathbf{ h}_{1}\\right)\\\\\n", " \\mathbf{ h}_{3} &= \\phi\\left(\\mathbf{W}_3 \\mathbf{ h}_{2}\\right)\\\\\n", " \\mathbf{ y}&= \\mathbf{ w}_4 ^\\top\\mathbf{ h}_{3}\n", "\\end{align}\n", "$$" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Overfitting\n", "-----------\n", "\n", "One potential problem is that as the number of nodes in two adjacent\n", "layers increases, the number of parameters in the affine transformation\n", "between layers, $\\mathbf{W}$, increases. If there are $k_{i-1}$ nodes in\n", "one layer, and $k_i$ nodes in the following, then that matrix contains\n", "$k_i k_{i-1}$ parameters. When we have layer widths in the 1000s, that\n", "leads to millions of parameters.\n", "\n", "One proposed solution is known as *dropout*, where only a sub-set of the\n", "neural network is trained at each iteration. An alternative solution\n", "would be to reparameterize $\\mathbf{W}$ with its *singular value\n", "decomposition*. $$\n", " \\mathbf{W}= \\mathbf{U}\\boldsymbol{ \\Lambda}\\mathbf{V}^\\top\n", " $$ or $$\n", " \\mathbf{W}= \\mathbf{U}\\mathbf{V}^\\top\n", " $$ where if $\\mathbf{W}\\in \\Re^{k_1\\times k_2}$ then\n", "$\\mathbf{U}\\in \\Re^{k_1\\times q}$ and $\\mathbf{V}\\in \\Re^{k_2\\times q}$,\n", "i.e. we have a low rank matrix factorization for the weights."
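] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "As a small illustration of this low rank idea, the sketch below (an added\n", "example in plain `numpy`, using an arbitrary random matrix rather than weights\n", "from a trained network) truncates the singular value decomposition of a weight\n", "matrix and counts the parameters in each representation." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "np.random.seed(0)\n", "k1, k2, q = 1000, 1000, 50            # layer widths and bottleneck size (illustrative)\n", "W = np.random.randn(k1, k2)           # a stand-in weight matrix\n", "\n", "# Truncated singular value decomposition, W ~ U V^T, with U of size k1 x q and V of size k2 x q.\n", "U_full, s, Vt = np.linalg.svd(W, full_matrices=False)\n", "U = U_full[:, :q]*s[:q]               # absorb the singular values into U\n", "V = Vt[:q, :].T\n", "W_approx = U @ V.T\n", "\n", "print('parameters in W:', k1*k2)\n", "print('parameters in U and V:', q*(k1 + k2))\n", "# A random matrix is close to full rank, so the error here is large;\n", "# the point of the sketch is the reduction in parameter count.\n", "print('relative error:', np.linalg.norm(W - W_approx)/np.linalg.norm(W))" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "The factorized form stores $q(k_1 + k_2)$ numbers instead of $k_1 k_2$, and it\n", "is this saving that the bottleneck layers below exploit."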
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.low_rank_approximation(diagrams='../slides/diagrams')" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Pictorial representation of the low rank form of the matrix\n", "$\\mathbf{W}$." ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Bottleneck Layers in Deep Neural Networks\n", "-----------------------------------------" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.deep_nn_bottleneck(diagrams='./deepgp')" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Inserting the bottleneck layers introduces a new set of\n", "variables.\n", "\n", "Including the low rank decomposition of $\\mathbf{W}$ in the neural\n", "network, we obtain a new mathematical form. Effectively, we are adding\n", "additional *latent* layers, $\\mathbf{ z}$, in between each of the\n", "existing hidden layers. In a neural network these are sometimes known as\n", "*bottleneck* layers. The network can now be written mathematically as $$\n", "\\begin{align}\n", " \\mathbf{ z}_{1} &= \\mathbf{V}^\\top_1 \\mathbf{ x}\\\\\n", " \\mathbf{ h}_{1} &= \\phi\\left(\\mathbf{U}_1 \\mathbf{ z}_{1}\\right)\\\\\n", " \\mathbf{ z}_{2} &= \\mathbf{V}^\\top_2 \\mathbf{ h}_{1}\\\\\n", " \\mathbf{ h}_{2} &= \\phi\\left(\\mathbf{U}_2 \\mathbf{ z}_{2}\\right)\\\\\n", " \\mathbf{ z}_{3} &= \\mathbf{V}^\\top_3 \\mathbf{ h}_{2}\\\\\n", " \\mathbf{ h}_{3} &= \\phi\\left(\\mathbf{U}_3 \\mathbf{ z}_{3}\\right)\\\\\n", " \\mathbf{ y}&= \\mathbf{ w}_4^\\top\\mathbf{ h}_{3}.\n", "\\end{align}\n", "$$\n", "\n", "$$\n", "\\begin{align}\n", " \\mathbf{ z}_{1} &= \\mathbf{V}^\\top_1 \\mathbf{ x}\\\\\n", " \\mathbf{ z}_{2} &= \\mathbf{V}^\\top_2 \\phi\\left(\\mathbf{U}_1 \\mathbf{ z}_{1}\\right)\\\\\n", " \\mathbf{ z}_{3} &= \\mathbf{V}^\\top_3 \\phi\\left(\\mathbf{U}_2 \\mathbf{ z}_{2}\\right)\\\\\n", " \\mathbf{ y}&= \\mathbf{ w}_4 ^\\top \\mathbf{ z}_{3}\n", "\\end{align}\n", "$$" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Cascade of Gaussian Processes\n", "-----------------------------\n", "\n", "Now we replace each of these neural network functions with a Gaussian process.\n", "This is equivalent to taking the limit as the width of each layer goes\n", "to infinity, while appropriately scaling down the outputs.\n", "\n", "$$\n", "\\begin{align}\n", " \\mathbf{ z}_{1} &= \\mathbf{ f}_1\\left(\\mathbf{ x}\\right)\\\\\n", " \\mathbf{ z}_{2} &= \\mathbf{ f}_2\\left(\\mathbf{ z}_{1}\\right)\\\\\n", " \\mathbf{ z}_{3} &= \\mathbf{ f}_3\\left(\\mathbf{ z}_{2}\\right)\\\\\n", " \\mathbf{ y}&= \\mathbf{ f}_4\\left(\\mathbf{ z}_{3}\\right)\n", "\\end{align}\n", "$$" ] }, 
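{ "cell_type": "markdown", "metadata": {}, "source": [ "To get a feel for the functions such a cascade produces, the sketch below (an\n", "illustrative addition in plain `numpy`, separate from the GPy and PyDeepGP code\n", "used elsewhere in this notebook) draws a sample from a two layer composition,\n", "$\\mathbf{ f}_2(\\mathbf{ f}_1(\\mathbf{ x}))$, with an exponentiated quadratic\n", "covariance at each layer." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "def rbf_kernel(X, X2, variance=1.0, lengthscale=1.0):\n", "    # Exponentiated quadratic covariance between two sets of inputs.\n", "    sqdist = (X**2).sum(1)[:, None] + (X2**2).sum(1)[None, :] - 2*X @ X2.T\n", "    return variance*np.exp(-0.5*sqdist/lengthscale**2)\n", "\n", "np.random.seed(4)\n", "x = np.linspace(-1, 1, 200)[:, None]\n", "jitter = 1e-8*np.eye(x.shape[0])\n", "\n", "# Layer 1: draw z1 = f1(x) from a GP prior over the inputs.\n", "z1 = np.random.multivariate_normal(np.zeros(x.shape[0]), rbf_kernel(x, x) + jitter)[:, None]\n", "\n", "# Layer 2: draw y_deep = f2(z1) from a GP prior whose inputs are the layer 1 outputs.\n", "y_deep = np.random.multivariate_normal(np.zeros(x.shape[0]), rbf_kernel(z1, z1) + jitter)\n", "\n", "# Plot the composite draw, a sample from a two layer deep GP prior.\n", "plt.plot(x, y_deep)\n", "plt.xlabel('$x$')\n", "plt.ylabel('$f_2(f_1(x))$')" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Although each layer is Gaussian, the composite draw is a sample from a\n", "non-Gaussian process, which is the point developed in the rest of this\n", "notebook." ] }, 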
{ "cell_type": "markdown", "metadata": {}, "source": [ "Deep Learning\n", "=============\n", "\n", "" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "DeepFace\n", "--------\n", "\n", "\n", "\n", "Figure: The DeepFace architecture (Taigman et al., 2014), visualized\n", "through colors to represent the functional mappings at each layer. There\n", "are 120 million parameters in the model.\n", "\n", "The DeepFace architecture (Taigman et al., 2014) consists of layers that\n", "deal with *translation* and *rotational* invariances. These layers are\n", "followed by three locally-connected layers and two fully-connected\n", "layers. Color illustrates feature maps produced at each layer. The\n", "neural network includes more than 120 million parameters, where more\n", "than 95% come from the local and fully connected layers." ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Deep Learning as Pinball\n", "\n", "\n", "\n", "Figure: Deep learning models are compositions of simple functions. We\n", "can think of a pinball machine as an analogy. Each layer of pins\n", "corresponds to one of the layers of functions in the model. Input data\n", "is represented by the location of the ball from left to right when it is\n", "dropped in from the top. Output class comes from the position of the\n", "ball as it leaves the pins at the bottom.\n", "\n", "Sometimes deep learning models are described as being like the brain, or\n", "too complex to understand, but one analogy I find useful to help convey the\n", "gist of these models is to think of them as being similar to early pinball\n", "machines.\n", "\n", "In a deep neural network, we input a number (or numbers), whereas in\n", "pinball, we input a ball.\n", "\n", "Think of the location of the ball on the left-right axis as a single\n", "number. Our simple pinball machine can only take one number at a time.\n", "As the ball falls through the machine, each layer of pins can be thought\n", "of as a different layer of ‘neurons’. Each layer acts to move the ball\n", "from left to right.\n", "\n", "In a pinball machine, when the ball gets to the bottom it might fall\n", "into a hole defining a score. In a neural network, that is equivalent to\n", "the decision: a classification of the input object.\n", "\n", "An image has more than one number associated with it, so it is like\n", "playing pinball in a *hyper-space*." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('pinball{sample:0>3}.svg', \n", " directory='.',\n", " sample=IntSlider(1, 1, 2, 1))" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: At initialization, the pins, which represent the parameters\n", "of the function, aren’t in the right place to bring the balls to the\n", "correct decisions.\n", "\n", "\n", "\n", "Figure: After learning, the pins are now in the right place to bring\n", "the balls to the correct decisions.\n", "\n", "Learning involves moving all the pins to be in the correct position, so\n", "that the ball ends up in the right place when it’s fallen through the\n", "machine. But moving all these pins in hyperspace can be difficult.\n", "\n", "In a hyper-space you have to put a lot of data through the machine for it\n", "to explore the positions of all the pins. Even when you feed many\n", "millions of data points through the machine, there are likely to be\n", "regions in the hyper-space where no ball has passed. When future test\n", "data passes through the machine in a new route, unusual things can\n", "happen.\n", "\n", "*Adversarial examples* exploit this high dimensional space. 
If you have\n", "access to the pinball machine, you can use gradient methods to find a\n", "position for the ball in the hyper space where the image looks like one\n", "thing, but will be classified as another.\n", "\n", "Probabilistic methods explore more of the space by considering a range\n", "of possible paths for the ball through the machine. This helps to make\n", "them more data efficient and gives some robustness to adversarial\n", "examples.\n", "\n", "Mathematically, a deep Gaussian process can be seen as a composite\n", "*multivariate* function, $$\n", " \\mathbf{g}(\\mathbf{ x})=\\mathbf{ f}_5(\\mathbf{ f}_4(\\mathbf{ f}_3(\\mathbf{ f}_2(\\mathbf{ f}_1(\\mathbf{ x}))))).\n", " $$ Or if we view it from the probabilistic perspective we can see that\n", "a deep Gaussian process is specifying a factorization of the joint\n", "density, the standard deep model takes the form of a Markov chain." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from matplotlib import rc\n", "\n", "rc(\"font\", **{'family':'sans-serif','sans-serif':['Helvetica'],'size':30})\n", "rc(\"text\", usetex=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pgm = plot.horizontal_chain(depth=5)\n", "pgm.render().figure.savefig(\"./deepgp/deep-markov.svg\", transparent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$$\n", " p(\\mathbf{ y}|\\mathbf{ x})= p(\\mathbf{ y}|\\mathbf{ f}_5)p(\\mathbf{ f}_5|\\mathbf{ f}_4)p(\\mathbf{ f}_4|\\mathbf{ f}_3)p(\\mathbf{ f}_3|\\mathbf{ f}_2)p(\\mathbf{ f}_2|\\mathbf{ f}_1)p(\\mathbf{ f}_1|\\mathbf{ x})\n", " $$\n", "\n", "\n", "\n", "Figure: Probabilistically the deep Gaussian process can be\n", "represented as a Markov chain." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from matplotlib import rc\n", "rc(\"font\", **{'family':'sans-serif','sans-serif':['Helvetica'], 'size':15})\n", "rc(\"text\", usetex=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pgm = plot.vertical_chain(depth=5)\n", "pgm.render().figure.savefig(\"./deepgp/deep-markov-vertical.svg\", transparent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: More usually deep probabilistic models are written vertically\n", "rather than horizontally as in the Markov chain." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Why Deep?\n", "---------\n", "\n", "If the result of composing many functions together is simply another\n", "function, then why do we bother? The key point is that we can change the\n", "class of functions we are modeling by composing in this manner. A\n", "Gaussian process is specifying a prior over functions, and one with a\n", "number of elegant properties. For example, the derivative process (if it\n", "exists) of a Gaussian process is also Gaussian distributed. That makes\n", "it easy to assimilate, for example, derivative observations. But that\n", "also might raise some alarm bells. That implies that the *marginal\n", "derivative distribution* is also Gaussian distributed. If that’s the\n", "case, then it means that functions which occasionally exhibit very large\n", "derivatives are hard to model with a Gaussian process. For example, a\n", "function with jumps in.\n", "\n", "A one off discontinuity is easy to model with a Gaussian process, or\n", "even multiple discontinuities. 
They can be introduced in the mean\n", "function, or independence can be forced between two covariance functions\n", "that apply in different areas of the input space. But in these cases we\n", "will need to specify the number of discontinuities and where they occur.\n", "In other words we need to *parameterise* the discontinuities. If we do\n", "not know the number of discontinuities and don’t wish to specify where\n", "they occur, i.e. if we want a non-parametric representation of\n", "discontinuities, then the standard Gaussian process doesn’t help." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Stochastic Process Composition\n", "------------------------------\n", "\n", "The deep Gaussian process leads to *non-Gaussian* models, and\n", "non-Gaussian characteristics in the covariance function. In effect, what\n", "we are proposing is that we change the properties of the functions we\n", "are considering by *composing stochastic processes*. This is an approach\n", "to creating new stochastic processes from well-known processes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import daft\n", "\n", "pgm = plot.vertical_chain(depth=5, shape=[2, 7])\n", "pgm.add_node(daft.Node('y_2', r'$\\mathbf{y}_2$', 1.5, 3.5, observed=True))\n", "pgm.add_edge('f_2', 'y_2')\n", "pgm.render().figure.savefig(\"./deepgp/deep-markov-vertical-side.svg\", transparent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Additionally, we are not constrained to the formalism of the chain. For\n", "example, we can easily add single nodes emerging from some point in the\n", "depth of the chain. This allows us to combine the benefits of the\n", "graphical modelling formalism with a powerful framework for\n", "relating one set of variables to another, that of Gaussian processes.\n", "\n", "\n", "\n", "Figure: More generally we aren’t constrained by the Markov chain. We\n", "can design structures that respect our belief about the underlying\n", "conditional dependencies. Here we are adding a node to the side of the\n", "chain." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Difficulty for Probabilistic Approaches\n", "---------------------------------------" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.non_linear_difficulty_plot_3(diagrams='./dimred/')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The challenge for composition of probabilistic models is that you need\n", "to propagate probability densities through non-linear mappings. This\n", "allows you to create broader classes of probability density.\n", "Unfortunately it renders the resulting densities *intractable*.\n", "\n", "\n", "\n", "Figure: A two dimensional grid mapped into three dimensions to form a\n", "two dimensional manifold." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.non_linear_difficulty_plot_2(diagrams='./dimred/')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: A one dimensional line mapped into two dimensions by two\n", "separate independent functions. Each point can be mapped exactly through\n", "the mappings."
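] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a small added illustration of this effect (not part of the original notebook), we can push samples from a one dimensional Gaussian through an arbitrary non-linear function and check how far from Gaussian the output is. The warping function below is an illustrative choice only." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# An added sketch: push Gaussian samples through an arbitrary non-linear map\n", "# and check that the output density is far from Gaussian (here, multimodal).\n", "rng = np.random.default_rng(0)\n", "z = rng.normal(0.0, 1.0, 100000)\n", "f = np.sin(3.0*z) + 0.2*z  # illustrative warping function only\n", "hist, edges = np.histogram(f, bins=40, density=True)\n", "modes = np.sum((hist[1:-1] > hist[:-2]) & (hist[1:-1] > hist[2:]))\n", "print('rough count of modes in the output density:', int(modes))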
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.non_linear_difficulty_plot_1(diagrams='./dimred')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: A Gaussian density over the input of a non-linear function\n", "leads to a very non-Gaussian output. Here the output is multimodal." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Standard Variational Approach Fails\n", "-----------------------------------\n", "\n", "- The standard variational bound has the form: $$\n", " \\mathcal{L}= \\left<\\log p(\\mathbf{ y}|\\mathbf{Z})\\right>_{q(\\mathbf{Z})} - \\text{KL}\\left( q(\\mathbf{Z})\\,\\|\\,p(\\mathbf{Z}) \\right)\n", " $$\n", "\n", "The standard variational approach would require the expectation of\n", "$\\log p(\\mathbf{ y}|\\mathbf{Z})$ under $q(\\mathbf{Z})$. $$\n", " \\begin{align}\n", " \\log p(\\mathbf{ y}|\\mathbf{Z}) = & -\\frac{1}{2}\\mathbf{ y}^\\top\\left(\\mathbf{K}_{\\mathbf{ f}, \\mathbf{ f}}+\\sigma^2\\mathbf{I}\\right)^{-1}\\mathbf{ y}\\\\ & -\\frac{1}{2}\\log \\det{\\mathbf{K}_{\\mathbf{ f}, \\mathbf{ f}}+\\sigma^2 \\mathbf{I}} -\\frac{n}{2}\\log 2\\pi\n", " \\end{align}\n", " $$ But this is extremely difficult to compute because\n", "$\\mathbf{K}_{\\mathbf{ f}, \\mathbf{ f}}$ is dependent on $\\mathbf{Z}$ and\n", "it appears in the inverse." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Variational Bayesian GP-LVM\n", "---------------------------\n", "\n", "The alternative approach is to consider the collapsed variational bound\n", "(used for low rank (sparse is a misnomer) Gaussian process\n", "approximations). $$\n", " p(\\mathbf{ y})\\geq \\prod_{i=1}^nc_i \\int \\mathcal{N}\\left(\\mathbf{ y}|\\left<\\mathbf{ f}\\right>,\\sigma^2\\mathbf{I}\\right)p(\\mathbf{ u}) \\text{d}\\mathbf{ u}\n", " $$ $$\n", " p(\\mathbf{ y}|\\mathbf{Z})\\geq \\prod_{i=1}^nc_i \\int \\mathcal{N}\\left(\\mathbf{ y}|\\left<\\mathbf{ f}\\right>_{p(\\mathbf{ f}|\\mathbf{ u}, \\mathbf{Z})},\\sigma^2\\mathbf{I}\\right)p(\\mathbf{ u}) \\text{d}\\mathbf{ u}\n", " $$ $$\n", " \\int p(\\mathbf{ y}|\\mathbf{Z})p(\\mathbf{Z}) \\text{d}\\mathbf{Z}\\geq \\int \\prod_{i=1}^nc_i \\mathcal{N}\\left(\\mathbf{ y}|\\left<\\mathbf{ f}\\right>_{p(\\mathbf{ f}|\\mathbf{ u}, \\mathbf{Z})},\\sigma^2\\mathbf{I}\\right) p(\\mathbf{Z})\\text{d}\\mathbf{Z}p(\\mathbf{ u}) \\text{d}\\mathbf{ u}\n", " $$\n", "\n", "To integrate across $\\mathbf{Z}$ we apply the lower bound to the inner\n", "integral. 
$$\n", " \\begin{align}\n", " \\log \\int \\prod_{i=1}^nc_i \\mathcal{N}\\left(\\mathbf{ y}|\\left<\\mathbf{ f}\\right>_{p(\\mathbf{ f}|\\mathbf{ u}, \\mathbf{Z})},\\sigma^2\\mathbf{I}\\right) p(\\mathbf{Z})\\text{d}\\mathbf{Z}\\geq & \\left<\\sum_{i=1}^n\\log c_i\\right>_{q(\\mathbf{Z})}\\\\ & +\\left<\\log\\mathcal{N}\\left(\\mathbf{ y}|\\left<\\mathbf{ f}\\right>_{p(\\mathbf{ f}|\\mathbf{ u}, \\mathbf{Z})},\\sigma^2\\mathbf{I}\\right)\\right>_{q(\\mathbf{Z})}\\\\& - \\text{KL}\\left( q(\\mathbf{Z})\\,\\|\\,p(\\mathbf{Z}) \\right) \n", " \\end{align}\n", " $$\n", "\n", "This is analytically tractable for Gaussian $q(\\mathbf{Z})$ and\n", "some covariance functions.\n", "\n", "- Need expectations under $q(\\mathbf{Z})$ of: $$\n", " \\log c_i = \\frac{1}{2\\sigma^2} \\left[k_{i, i} - \\mathbf{ k}_{i, \\mathbf{ u}}^\\top \\mathbf{K}_{\\mathbf{ u}, \\mathbf{ u}}^{-1} \\mathbf{ k}_{i, \\mathbf{ u}}\\right]\n", " $$ and $$\n", " \\log \\mathcal{N}\\left(\\mathbf{ y}|\\left<\\mathbf{ f}\\right>_{p(\\mathbf{ f}|\\mathbf{ u},\\mathbf{Z})},\\sigma^2\\mathbf{I}\\right) = -\\frac{1}{2}\\log 2\\pi\\sigma^2 - \\frac{1}{2\\sigma^2}\\left(y_i - \\mathbf{K}_{\\mathbf{ f}, \\mathbf{ u}}\\mathbf{K}_{\\mathbf{ u},\\mathbf{ u}}^{-1}\\mathbf{ u}\\right)^2\n", " $$\n", "\n", "- This requires the expectations $$\n", " \\left<\\mathbf{K}_{\\mathbf{ f},\\mathbf{ u}}\\right>_{q(\\mathbf{Z})}\n", " $$ and $$\n", " \\left<\\mathbf{K}_{\\mathbf{ f},\\mathbf{ u}}\\mathbf{K}_{\\mathbf{ u},\\mathbf{ u}}^{-1}\\mathbf{K}_{\\mathbf{ u},\\mathbf{ f}}\\right>_{q(\\mathbf{Z})}\n", " $$ which can be computed analytically for some covariance functions\n", " (Damianou et al., 2016) or through sampling (Damianou, 2015;\n", " Salimbeni and Deisenroth, 2017).\n", "\n", "Variational approximations aren’t the only approach to approximate\n", "inference. The original work on deep Gaussian processes made use of MAP\n", "approximations (Lawrence and Moore, 2007), which couldn’t propagate the\n", "uncertainty through the model at the data points but sustained uncertainty\n", "elsewhere. Since the variational approximation was proposed, researchers\n", "have also considered sampling approaches (Havasi et al., 2018) and\n", "expectation propagation (Bui et al., 2016).\n", "\n", "\n", "\n", "Figure: Even the latest work on Bayesian neural networks has severe\n", "problems handling uncertainty. In this example (Izmailov et al., 2019),\n", "methods even fail to interpolate through the data correctly or provide\n", "well-calibrated error bars in regions where data is observed.\n", "\n", "The argument in the deep learning revolution is that deep architectures\n", "allow us to develop an abstraction of the feature set through model\n", "composition. Composing Gaussian processes is analytically intractable.\n", "To form deep Gaussian processes we use a variational approach to stack\n", "the models."
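] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a minimal added sketch (not from the original slides) of the first of these expectations, the one dimensional exponentiated quadratic covariance has a closed form expectation under a Gaussian $q(\\mathbf{Z})$, which we can check against Monte Carlo. All parameter values below are arbitrary assumptions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# An added sketch: closed form of <k(z_i, z_m)> under q(z_i) = N(mu, s2) for a\n", "# one dimensional exponentiated quadratic kernel, checked by Monte Carlo.\n", "# All parameter values are illustrative assumptions.\n", "alpha, ell2 = 1.0, 0.5  # kernel variance and squared lengthscale\n", "mu, s2 = 0.3, 0.2       # mean and variance of q(z_i)\n", "z_u = 1.0               # inducing input location\n", "\n", "def k(z, z_prime):\n", "    return alpha*np.exp(-0.5*(z - z_prime)**2/ell2)\n", "\n", "psi1 = alpha*np.sqrt(ell2/(ell2 + s2))*np.exp(-0.5*(mu - z_u)**2/(ell2 + s2))\n", "\n", "rng = np.random.default_rng(0)\n", "z_samples = rng.normal(mu, np.sqrt(s2), 200000)\n", "print('closed form :', psi1)\n", "print('monte carlo :', k(z_samples, z_u).mean())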
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.stack_gp_sample(kernel=GPy.kern.Linear,\n", " diagrams=\"./deepgp\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('stack-gp-sample-Linear-{sample:0>1}.svg', \n", " directory='./deepgp', sample=(0,4))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Stacked PCA\n", "-----------\n", "\n", "\n", "\n", "Figure: Composition of linear functions just leads to a new linear\n", "function. Here you see the result of multiple affine transformations\n", "applied to a square in two dimensions.\n", "\n", "Stacking a series of linear functions simply leads to a new linear\n", "function. The use of multiple linear functions merely changes the\n", "covariance of the resulting Gaussian. If $$\n", "\\mathbf{Z}\\sim \\mathcal{N}\\left(\\mathbf{0},\\mathbf{I}\\right)\n", "$$ and the $i$th hidden layer is a multivariate linear transformation\n", "defined by $\\mathbf{W}_i$, $$\n", "\\mathbf{Y}= \\mathbf{Z}\\mathbf{W}^\\top_1 \\mathbf{W}^\\top_2 \\dots \\mathbf{W}^\\top_\\ell\n", "$$ then the rules of multivariate Gaussians tell us that $$\n", "\\mathbf{Y}\\sim \\mathcal{N}\\left(\\mathbf{0},\\mathbf{W}_\\ell\\dots \\mathbf{W}_1 \\mathbf{W}^\\top_1 \\dots \\mathbf{W}^\\top_\\ell\\right).\n", "$$ So the model can be replaced by one where we set\n", "$\\mathbf{V}= \\mathbf{W}_\\ell\\dots \\mathbf{W}_2 \\mathbf{W}_1$. So is such\n", "a model trivial? The answer is that it depends. There are two cases in\n", "which such a model remains interesting. The first is if we make intermediate\n", "observations stemming from the chain. So, for example, if we decide\n", "that $$\n", "\\mathbf{Z}_i = \\mathbf{W}_i \\mathbf{Z}_{i-1}\n", "$$ and set\n", "$\\mathbf{Z}_{0} = \\mathbf{X}\\sim \\mathcal{N}\\left(\\mathbf{0},\\mathbf{I}\\right)$,\n", "then the matrices $\\mathbf{W}$ inter-relate a series of jointly Gaussian\n", "observations in an interesting way, stacking the full data matrix to\n", "give $$\n", "\\mathbf{Z}= \\begin{bmatrix}\n", "\\mathbf{Z}_0 \\\\\n", "\\mathbf{Z}_1 \\\\\n", "\\vdots \\\\\n", "\\mathbf{Z}_\\ell\n", "\\end{bmatrix}\n", "$$ we can obtain\n", "$$\\mathbf{Z}\\sim \\mathcal{N}\\left(\\mathbf{0},\\begin{bmatrix}\n", "\\mathbf{I}& \\mathbf{W}^\\top_1 & \\mathbf{W}_1^\\top\\mathbf{W}_2^\\top & \\dots & \\mathbf{V}^\\top \\\\\n", "\\mathbf{W}_1 & \\mathbf{W}_1 \\mathbf{W}_1^\\top & \\mathbf{W}_1 \\mathbf{W}_1^\\top \\mathbf{W}_2^\\top & \\dots & \\mathbf{W}_1 \\mathbf{V}^\\top \\\\\n", "\\mathbf{W}_2 \\mathbf{W}_1 & \\mathbf{W}_2 \\mathbf{W}_1 \\mathbf{W}_1^\\top & \\mathbf{W}_2 \\mathbf{W}_1 \\mathbf{W}_1^\\top \\mathbf{W}_2^\\top & \\dots & \\mathbf{W}_2 \\mathbf{W}_1 \\mathbf{V}^\\top \\\\\n", "\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n", "\\mathbf{V}& \\mathbf{V}\\mathbf{W}_1^\\top & \\mathbf{V}\\mathbf{W}_1^\\top \\mathbf{W}_2^\\top& \\dots & \\mathbf{V}\\mathbf{V}^\\top\n", "\\end{bmatrix}\\right)$$ which is a highly structured Gaussian covariance\n", "with hierarchical dependencies between the variables $\\mathbf{Z}_i$."
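] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can check this collapse numerically. The following added sketch (using arbitrary random matrices) samples $\\mathbf{Z}$, pushes it through three linear layers and compares the empirical covariance of the result with $\\mathbf{V}\\mathbf{V}^\\top$." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# An added check: stacked linear maps of a Gaussian give a Gaussian with\n", "# covariance W_l ... W_1 W_1^T ... W_l^T. Random matrices are for illustration.\n", "rng = np.random.default_rng(0)\n", "d = 3\n", "W1, W2, W3 = (rng.normal(size=(d, d)) for _ in range(3))\n", "V = W3 @ W2 @ W1                  # composite map\n", "Z = rng.normal(size=(100000, d))  # rows are samples of z ~ N(0, I)\n", "Y = Z @ V.T                       # each row is y = V z\n", "print(np.round(np.cov(Y, rowvar=False), 2))  # empirical covariance\n", "print(np.round(V @ V.T, 2))                  # V V^T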
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Stacked GP\n", "----------" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.stack_gp_sample(kernel=GPy.kern.RBF,\n", " diagrams=\"./deepgp\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('stack-gp-sample-RBF-{sample:0>1}.svg', \n", " directory='./deepgp', sample=(0,4))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Stacking Gaussian process models leads to non linear mappings\n", "at each stage. Here we are mapping from two dimensions to two dimensions\n", "in each layer.\n", "\n", "Note that once the box has folded over on itself, it cannot be unfolded.\n", "So a feature that is generated near the top of the model cannot be\n", "removed further down the model.\n", "\n", "This folding over effect happens in low dimensions. In higher dimensions\n", "it is less common.\n", "\n", "Observation of this effect at a talk in Cambridge was one of the things\n", "that caused David Duvenaud (and collaborators) to consider the behavior\n", "of deeper Gaussian process models (Duvenaud et al., 2014).\n", "\n", "Such folding over in the latent spaces necessarily forces the density to\n", "be non-Gaussian. Indeed, since folding-over is avoided as we increase\n", "the dimensionality of the latent spaces, such processes become more\n", "Gaussian. If we take the limit of the latent space dimensionality as it\n", "tends to infinity, the entire deep Gaussian process returns to a\n", "standard Gaussian process, with a covariance function given as a deep\n", "kernel (such as those described by Cho and Saul (2009)).\n", "\n", "Further analysis of these deep networks has been conducted by Dunlop et\n", "al. (n.d.), who use analysis of the deep network’s stationary density\n", "(treating it as a Markov chain across layers), to explore the nature of\n", "the implied process prior for a deep GP.\n", "\n", "Both of these works, however, make constraining assumptions on the form\n", "of the Gaussian process prior at each layer (e.g. same covariance at\n", "each layer). In practice, the form of this covariance can be learnt and\n", "the densities described by the deep GP are more general than those\n", "mentioned in either of these papers." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Stacked GPs (video by David Duvenaud)\n", "-------------------------------------" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.lib.display import YouTubeVideo\n", "YouTubeVideo('XhIvygQYFFQ')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Figure: Visualization of mapping of a two dimensional space through a\n", "deep Gaussian process.\n", "\n", "David Duvenaud also created a YouTube video to help visualize what\n", "happens as you drop through the layers of a deep GP." 
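] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To get a feel for this folding effect without the plotting utilities, here is a minimal numpy-only sketch (an added illustration, not part of the original notebook) that samples a one dimensional function from one Gaussian process and then feeds its values in as the inputs of a second." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# An added sketch: sample z = f_1(x) from one GP, then sample y = f_2(z) from a\n", "# second GP whose inputs are the first layer outputs. Parameters are arbitrary.\n", "rng = np.random.default_rng(0)\n", "\n", "def rbf_cov(inputs, lengthscale, variance=1.0, jitter=1e-8):\n", "    diff = inputs[:, None] - inputs[None, :]\n", "    return variance*np.exp(-0.5*(diff/lengthscale)**2) + jitter*np.eye(len(inputs))\n", "\n", "x = np.linspace(-3, 3, 200)\n", "z = rng.multivariate_normal(np.zeros(len(x)), rbf_cov(x, lengthscale=1.0))\n", "y = rng.multivariate_normal(np.zeros(len(x)), rbf_cov(z, lengthscale=0.5))\n", "# Wherever z folds back on itself, y is forced to repeat the same value, so the\n", "# composite mapping from x to y is no longer a draw from a Gaussian process.\n", "print(np.round(y[:5], 2))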
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "urllib.request.urlretrieve('https://raw.githubusercontent.com/lawrennd/talks/gh-pages/deepgp_tutorial.py','deepgp_tutorial.py')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Late bind setup methods to DeepGP object\n", "from deepgp_tutorial import initialize\n", "from deepgp_tutorial import staged_optimize\n", "from deepgp_tutorial import posterior_sample\n", "from deepgp_tutorial import visualize\n", "from deepgp_tutorial import visualize_pinball\n", "\n", "import deepgp\n", "deepgp.DeepGP.initialize=initialize\n", "deepgp.DeepGP.staged_optimize=staged_optimize\n", "deepgp.DeepGP.posterior_sample=posterior_sample\n", "deepgp.DeepGP.visualize=visualize\n", "deepgp.DeepGP.visualize_pinball=visualize_pinball" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Olympic Marathon Data\n", "---------------------\n", "\n", "\n", "\n", "\n", "\n", "\n", "
\n", "\n", "- Gold medal times for Olympic Marathon since 1896.\n", "- Marathons before 1924 didn’t have a standardised distance.\n", "- Present results using pace per km.\n", "- In 1904 Marathon was badly organised leading to very slow times.\n", "\n", "\n", "\n", "\n", "Image from Wikimedia Commons\n", "http://bit.ly/16kMKHQ\n", "\n", "
\n", "\n", "The first thing we will do is load a standard data set for regression\n", "modelling. The data consists of the pace of Olympic Gold Medal Marathon\n", "winners for the Olympics from 1896 to present. First we load in the data\n", "and plot." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pods" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pods.datasets.olympic_marathon_men()\n", "x = data['X']\n", "y = data['Y']\n", "\n", "offset = y.mean()\n", "scale = np.sqrt(y.var())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import teaching_plots as plot\n", "import mlai" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xlim = (1875,2030)\n", "ylim = (2.5, 6.5)\n", "yhat = (y-offset)/scale\n", "\n", "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "_ = ax.plot(x, y, 'r.',markersize=10)\n", "ax.set_xlabel('year', fontsize=20)\n", "ax.set_ylabel('pace min/km', fontsize=20)\n", "ax.set_xlim(xlim)\n", "ax.set_ylim(ylim)\n", "\n", "mlai.write_figure(figure=fig, \n", " filename='olympic-marathon.svg', \n", " diagrams='./datasets',\n", " transparent=True, \n", " facecolor=(1, 1, 1, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Olympic marathon pace times since 1892.\n", "\n", "Things to notice about the data include the outlier in 1904, in this\n", "year, the olympics was in St Louis, USA. Organizational problems and\n", "challenges with dust kicked up by the cars following the race meant that\n", "participants got lost, and only very few participants completed.\n", "\n", "More recent years see more consistently quick marathons." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alan Turing\n", "-----------\n", "\n", "\n", "\n", "\n", "\n", "\n", "
\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
\n", "\n", "Figure: Alan Turing, in 1946 he was only 11 minutes slower than the\n", "winner of the 1948 games. Would he have won a hypothetical games held in\n", "1946? Source:\n", "Alan\n", "Turing Internet Scrapbook.\n", "\n", "If we had to summarise the objectives of machine learning in one word, a\n", "very good candidate for that word would be *generalization*. What is\n", "generalization? From a human perspective it might be summarised as the\n", "ability to take lessons learned in one domain and apply them to another\n", "domain. If we accept the definition given in the first session for\n", "machine learning, $$\n", "\\text{data} + \\text{model} \\stackrel{\\text{compute}}{\\rightarrow} \\text{prediction}\n", "$$ then we see that without a model we can’t generalise: we only have\n", "data. Data is fine for answering very specific questions, like “Who won\n", "the Olympic Marathon in 2012?”, because we have that answer stored,\n", "however, we are not given the answer to many other questions. For\n", "example, Alan Turing was a formidable marathon runner, in 1946 he ran a\n", "time 2 hours 46 minutes (just under four minutes per kilometer, faster\n", "than I and most of the other [Endcliffe Park\n", "Run](http://www.parkrun.org.uk/sheffieldhallam/) runners can do 5 km).\n", "What is the probability he would have won an Olympics if one had been\n", "held in 1946?\n", "\n", "To answer this question we need to generalize, but before we formalize\n", "the concept of generalization let’s introduce some formal representation\n", "of what it means to generalize in machine learning.\n", "\n", "Our first objective will be to perform a Gaussian process fit to the\n", "data, we’ll do this using the [GPy\n", "software](https://github.com/SheffieldML/GPy)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import GPy" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m_full = GPy.models.GPRegression(x,yhat)\n", "_ = m_full.optimize() # Optimize parameters of covariance function" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first command sets up the model, then `m_full.optimize()` optimizes\n", "the parameters of the covariance function and the noise level of the\n", "model. Once the fit is complete, we’ll try creating some test points,\n", "and computing the output of the GP model in terms of the mean and\n", "standard deviation of the posterior functions between 1870 and 2030. We\n", "plot the mean function and the standard deviation at 200 locations. We\n", "can obtain the predictions using `y_mean, y_var = m_full.predict(xt)`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xt = np.linspace(1870,2030,200)[:,np.newaxis]\n", "yt_mean, yt_var = m_full.predict(xt)\n", "yt_sd=np.sqrt(yt_var)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we plot the results using the helper function in `teaching_plots`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m_full, scale=scale, offset=offset, ax=ax, xlabel='year', ylabel='pace min/km', fontsize=20, portion=0.2)\n", "ax.set_xlim(xlim)\n", "ax.set_ylim(ylim)\n", "mlai.write_figure(figure=fig,\n", " filename='./gp/olympic-marathon-gp.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Gaussian process fit to the Olympic Marathon data. The error\n", "bars are too large, perhaps due to the outlier from 1904." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Fit Quality\n", "-----------\n", "\n", "In the fit we see that the error bars (coming mainly from the noise\n", "variance) are quite large. This is likely due to the outlier point in\n", "1904, ignoring that point we can see that a tighter fit is obtained. To\n", "see this make a version of the model, `m_clean`, where that point is\n", "removed." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x_clean=np.vstack((x[0:2, :], x[3:, :]))\n", "y_clean=np.vstack((y[0:2, :], y[3:, :]))\n", "\n", "m_clean = GPy.models.GPRegression(x_clean,y_clean)\n", "_ = m_clean.optimize()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m_clean, scale=scale, offset=offset, ax=ax, xlabel='year', ylabel='pace min/km', fontsize=20, portion=0.2)\n", "ax.set_xlim(xlim)\n", "ax.set_ylim(ylim)\n", "mlai.write_figure(figure=fig,\n", " filename='./gp/olympic-marathon-gp.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Deep GP Fit\n", "-----------\n", "\n", "Let’s see if a deep Gaussian process can help here. We will construct a\n", "deep Gaussian process with one hidden layer (i.e. one Gaussian process\n", "feeding into another).\n", "\n", "Build a Deep GP with an additional hidden layer (one dimensional) to fit\n", "the model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import GPy\n", "import deepgp" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hidden = 1\n", "m = deepgp.DeepGP([y.shape[1],hidden,x.shape[1]],Y=yhat, X=x, inits=['PCA','PCA'], \n", " kernels=[GPy.kern.RBF(hidden,ARD=True),\n", " GPy.kern.RBF(x.shape[1],ARD=True)], # the kernels for each layer\n", " num_inducing=50, back_constraint=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import deepgp" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Call the initalization\n", "m.initialize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now optimize the model." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for layer in m.layers:\n", " layer.likelihood.variance.constrain_positive(warning=False)\n", "m.optimize(messages=True,max_iters=10000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.staged_optimize(messages=(True,True,True))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m, scale=scale, offset=offset, ax=ax, xlabel='year', ylabel='pace min/km', \n", " fontsize=20, portion=0.2)\n", "ax.set_xlim(xlim)\n", "\n", "ax.set_ylim(ylim)\n", "mlai.write_figure(figure=fig, filename='./deepgp/olympic-marathon-deep-gp.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Olympic Marathon Data Deep GP\n", "-----------------------------\n", "\n", "\n", "\n", "Figure: Deep GP fit to the Olympic marathon data. Error bars now\n", "change as the prediction evolves." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_sample(m, scale=scale, offset=offset, samps=10, ax=ax, \n", " xlabel='year', ylabel='pace min/km', portion = 0.225)\n", "ax.set_xlim(xlim)\n", "ax.set_ylim(ylim)\n", "mlai.write_figure(figure=fig, filename='./deepgp/olympic-marathon-deep-gp-samples.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Olympic Marathon Data Deep GP\n", "-----------------------------\n", "\n", "\n", "\n", "Figure: Point samples run through the deep Gaussian process show the\n", "distribution of output locations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Fitted GP for each layer\n", "------------------------\n", "\n", "Now we explore the GPs the model has used to fit each layer. First of\n", "all, we look at the hidden layer." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.visualize(scale=scale, offset=offset, xlabel='year',\n", " ylabel='pace min/km',xlim=xlim, ylim=ylim,\n", " dataset='olympic-marathon',\n", " diagrams='./deepgp')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('olympic-marathon-deep-gp-layer-{sample:0>1}.svg', \n", " './deepgp', sample=(0,1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: The mapping from input to the latent layer is broadly, with\n", "some flattening as time goes on. Variance is high across the input\n", "range.\n", "\n", "\n", "\n", "Figure: The mapping from the latent layer to the output layer." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "m.visualize_pinball(ax=ax, scale=scale, offset=offset, points=30, portion=0.1,\n", " xlabel='year', ylabel='pace min/km', vertical=True)\n", "mlai.write_figure(figure=fig, filename='./deepgp/olympic-marathon-deep-gp-pinball.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Olympic Marathon Pinball Plot\n", "-----------------------------\n", "\n", "\n", "\n", "Figure: A pinball plot shows the movement of the ‘ball’ as it passes\n", "through each layer of the Gaussian processes. Mean directions of\n", "movement are shown by lines. Shading gives one standard deviation of\n", "movement position. At each layer, the uncertainty is reset. The overall\n", "uncertainty is the cumulative uncertainty from all the layers. There is\n", "some grouping of later points towards the right in the first layer,\n", "which also injects a large amount of uncertainty. Due to flattening of\n", "the curve in the second layer towards the right the uncertainty is\n", "reduced in the final output.\n", "\n", "The pinball plot shows the flow of any input ball through the deep\n", "Gaussian process. In a pinball plot a series of vertical parallel lines\n", "would indicate a purely linear function. For the Olympic marathon data\n", "we can see the first layer begins to shift from input towards the right.\n", "Note it also does so with some uncertainty (indicated by the shaded\n", "backgrounds). The second layer has less uncertainty, but bunches the\n", "inputs more strongly to the right. This input layer of uncertainty,\n", "followed by a layer that pushes inputs to the right, is what gives the\n", "heteroscedastic noise." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Gene Expression Example\n", "-----------------------\n", "\n", "We now consider an example in gene expression. Gene expression is the\n", "measurement of mRNA levels expressed in cells. These mRNA levels show\n", "which genes are ‘switched on’ and producing data. In the example we will\n", "use a Gaussian process to determine whether a given gene is active, or\n", "we are merely observing a noise response." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Della Gatta Gene Data\n", "---------------------\n", "\n", "- Given gene expression levels in the form of a time series from\n", " Della Gatta et al. (2008)."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pods" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pods.datasets.della_gatta_TRP63_gene_expression(data_set='della_gatta',gene_number=937)\n", "\n", "x = data['X']\n", "y = data['Y']\n", "\n", "offset = y.mean()\n", "scale = np.sqrt(y.var())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import teaching_plots as plot\n", "import mlai" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xlim = (-20,260)\n", "ylim = (5, 7.5)\n", "yhat = (y-offset)/scale\n", "\n", "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "_ = ax.plot(x, y, 'r.',markersize=10)\n", "ax.set_xlabel('time/min', fontsize=20)\n", "ax.set_ylabel('expression', fontsize=20)\n", "ax.set_xlim(xlim)\n", "ax.set_ylim(ylim)\n", "\n", "mlai.write_figure(figure=fig, \n", " filename='./datasets/della-gatta-gene.svg', \n", " transparent=True, \n", " frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Gene expression levels over time for a gene from data\n", "provided by Della Gatta et al. (2008). We would like to understand\n", "whethere there is signal in the data, or we are only observing\n", "noise.\n", "\n", "- Want to detect if a gene is expressed or not, fit a GP to each gene\n", " Kalaitzis and Lawrence (2011).\n", "\n", "\n", "\n", "Figure: The example is taken from the paper “A Simple Approach to\n", "Ranking Differentially Expressed Gene Expression Time Courses through\n", "Gaussian Process Regression.” Kalaitzis and Lawrence (2011).\n", "\n", "
\n", "\n", "http://www.biomedcentral.com/1471-2105/12/180\n", "\n", "
\n", "\n", "Our first objective will be to perform a Gaussian process fit to the\n", "data, we’ll do this using the [GPy\n", "software](https://github.com/SheffieldML/GPy)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import GPy" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m_full = GPy.models.GPRegression(x,yhat)\n", "m_full.kern.lengthscale=50\n", "_ = m_full.optimize() # Optimize parameters of covariance function" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Initialize the length scale parameter (which here actually represents a\n", "*time scale* of the covariance function) to a reasonable value. Default\n", "would be 1, but here we set it to 50 minutes, given points are arriving\n", "across zero to 250 minutes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xt = np.linspace(-20,260,200)[:,np.newaxis]\n", "yt_mean, yt_var = m_full.predict(xt)\n", "yt_sd=np.sqrt(yt_var)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we plot the results using the helper function in `teaching_plots`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m_full, scale=scale, offset=offset, ax=ax, xlabel='time/min', ylabel='expression', fontsize=20, portion=0.2)\n", "ax.set_xlim(xlim)\n", "ax.set_ylim(ylim)\n", "ax.set_title('log likelihood: {ll:.3}'.format(ll=m_full.log_likelihood()), fontsize=20)\n", "mlai.write_figure(figure=fig,\n", " filename='./gp/della-gatta-gene-gp.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Result of the fit of the Gaussian process model with the time\n", "scale parameter initialized to 50 minutes.\n", "\n", "Now we try a model initialized with a longer length scale." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m_full2 = GPy.models.GPRegression(x,yhat)\n", "m_full2.kern.lengthscale=2000\n", "_ = m_full2.optimize() # Optimize parameters of covariance function" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m_full2, scale=scale, offset=offset, ax=ax, xlabel='time/min', ylabel='expression', fontsize=20, portion=0.2)\n", "ax.set_xlim(xlim)\n", "ax.set_ylim(ylim)\n", "ax.set_title('log likelihood: {ll:.3}'.format(ll=m_full2.log_likelihood()), fontsize=20)\n", "mlai.write_figure(figure=fig,\n", " filename='./gp/della-gatta-gene-gp2.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Result of the fit of the Gaussian process model with the time\n", "scale parameter initialized to 2000 minutes.\n", "\n", "Now we try a model initialized with a lower noise." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m_full3 = GPy.models.GPRegression(x,yhat)\n", "m_full3.kern.lengthscale=20\n", "m_full3.likelihood.variance=0.001\n", "_ = m_full3.optimize() # Optimize parameters of covariance function" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m_full3, scale=scale, offset=offset, ax=ax, xlabel='time/min', ylabel='expression', fontsize=20, portion=0.2)\n", "ax.set_xlim(xlim)\n", "ax.set_ylim(ylim)\n", "ax.set_title('log likelihood: {ll:.3}'.format(ll=m_full3.log_likelihood()), fontsize=20)\n", "mlai.write_figure(figure=fig,\n", " filename='./gp/della-gatta-gene-gp3.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Result of the fit of the Gaussian process model with the\n", "noise initialized low (standard deviation 0.1) and the time scale\n", "parameter initialized to 20 minutes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.multiple_optima(diagrams='./gp')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: \n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "layers = [y.shape[1], 1,x.shape[1]]\n", "inits = ['PCA']*(len(layers)-1)\n", "kernels = []\n", "for i in layers[1:]:\n", " kernels += [GPy.kern.RBF(i)]\n", "m = deepgp.DeepGP(layers,Y=yhat, X=x, \n", " inits=inits, \n", " kernels=kernels, # the kernels for each layer\n", " num_inducing=20, back_constraint=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.initialize()\n", "m.staged_optimize()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m, scale=scale, offset=offset, ax=ax, fontsize=20, portion=0.5)\n", "ax.set_ylim(ylim)\n", "ax.set_xlim(xlim)\n", "mlai.write_figure(filename='./deepgp/della-gatta-gene-deep-gp.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Della Gatta Gene Data Deep GP\n", "-----------------------------\n", "\n", "\n", "\n", "Figure: Deep Gaussian process fit to the Della Gatta gene expression\n", "data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_sample(m, scale=scale, offset=offset, samps=10, ax=ax, portion = 0.5)\n", "ax.set_ylim(ylim)\n", "ax.set_xlim(xlim)\n", "mlai.write_figure(figure=fig, filename='./deepgp/della-gatta-gene-deep-gp-samples.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Della Gatta Gene Data Deep GP\n", "-----------------------------\n", "\n", "\n", "\n", "Figure: Deep Gaussian process samples fitted to the Della Gatta gene\n", "expression data." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.visualize(offset=offset, scale=scale, xlim=xlim, ylim=ylim,\n", " dataset='della-gatta-gene',\n", " diagrams='./deepgp')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Della Gatta Gene Data Latent 1\n", "------------------------------\n", "\n", "\n", "\n", "Figure: Gaussian process mapping from input to latent layer for the\n", "della Gatta gene expression data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Della Gatta Gene Data Latent 2\n", "------------------------------\n", "\n", "\n", "\n", "Figure: Gaussian process mapping from latent to output layer for the\n", "della Gatta gene expression data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "m.visualize_pinball(offset=offset, ax=ax, scale=scale, xlim=xlim, ylim=ylim, portion=0.1, points=50)\n", "mlai.write_figure(figure=fig, filename='./deepgp/della-gatta-gene-deep-gp-pinball.svg', \n", " transparent=True, frameon=True, ax=ax)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "TP53 Gene Pinball Plot\n", "----------------------\n", "\n", "\n", "\n", "Figure: A pinball plot shows the movement of the ‘ball’ as it passes\n", "through each layer of the Gaussian processes. Mean directions of\n", "movement are shown by lines. Shading gives one standard deviation of\n", "movement position. At each layer, the uncertainty is reset. The overal\n", "uncertainty is the cumulative uncertainty from all the layers. Pinball\n", "plot of the della Gatta gene expression data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Step Function\n", "-------------\n", "\n", "Next we consider a simple step function data set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "num_low=25\n", "num_high=25\n", "gap = -.1\n", "noise=0.0001\n", "x = np.vstack((np.linspace(-1, -gap/2.0, num_low)[:, np.newaxis],\n", " np.linspace(gap/2.0, 1, num_high)[:, np.newaxis]))\n", "y = np.vstack((np.zeros((num_low, 1)), np.ones((num_high,1))))\n", "scale = np.sqrt(y.var())\n", "offset = y.mean()\n", "yhat = (y-offset)/scale" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "_ = ax.plot(x, y, 'r.',markersize=10)\n", "_ = ax.set_xlabel('$x$', fontsize=20)\n", "_ = ax.set_ylabel('$y$', fontsize=20)\n", "xlim = (-2, 2)\n", "ylim = (-0.6, 1.6)\n", "ax.set_ylim(ylim)\n", "ax.set_xlim(xlim)\n", "mlai.write_figure(figure=fig, filename='./datasets/step-function.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Step Function Data\n", "------------------\n", "\n", "\n", "\n", "Figure: Simulation study of step function data artificially\n", "generated. Here there is a small overlap between the two lines." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Step Function Data GP\n", "---------------------\n", "\n", "We can fit a Gaussian process to the step function data using `GPy` as\n", "follows." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m_full = GPy.models.GPRegression(x,yhat)\n", "_ = m_full.optimize() # Optimize parameters of covariance function" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here `GPy.models.GPRegression()` gives us a standard GP regression\n", "model with an exponentiated quadratic covariance function.\n", "\n", "The model is optimized using `m_full.optimize()` which calls an L-BFGS\n", "gradient-based solver in Python." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m_full, scale=scale, offset=offset, ax=ax, fontsize=20, portion=0.5)\n", "ax.set_ylim(ylim)\n", "ax.set_xlim(xlim)\n", "\n", "mlai.write_figure(figure=fig,filename='./gp/step-function-gp.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Gaussian process fit to the step function data. Note the\n", "large error bars and the over-smoothing of the discontinuity. Error bars\n", "are shown at two standard deviations.\n", "\n", "The resulting fit to the step function data shows some challenges. In\n", "particular, note the over-smoothing at the discontinuity. If we know how many\n", "discontinuities there are, we can parameterize them in the step\n", "function. But by doing this, we form a semi-parametric model. The\n", "parameters indicate how many discontinuities there are, and where they are.\n", "They can be optimized as part of the model fit. But if new, unforeseen,\n", "discontinuities arise when the model is being deployed in practice,\n", "these won’t be accounted for in the predictions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Step Function Data Deep GP\n", "--------------------------\n", "\n", "First we initialize a deep Gaussian process with three latent layers\n", "(four layers total). Within each layer we create a GP with an\n", "exponentiated quadratic covariance (`GPy.kern.RBF`).\n", "\n", "At each layer we use 20 inducing points for the variational\n", "approximation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "layers = [y.shape[1], 1, 1, 1,x.shape[1]]\n", "inits = ['PCA']*(len(layers)-1)\n", "kernels = []\n", "for i in layers[1:]:\n", " kernels += [GPy.kern.RBF(i)]\n", " \n", "m = deepgp.DeepGP(layers,Y=yhat, X=x, \n", " inits=inits, \n", " kernels=kernels, # the kernels for each layer\n", " num_inducing=20, back_constraint=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once the model is constructed we initialize the parameters, and perform\n", "the staged optimization which starts by optimizing variational\n", "parameters with a low noise and proceeds to optimize the whole model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.initialize()\n", "m.staged_optimize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We plot the output of the deep Gaussian process fitted to the step data\n", "as follows."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m, scale=scale, offset=offset, ax=ax, fontsize=20, portion=0.5)\n", "ax.set_ylim(ylim)\n", "ax.set_xlim(xlim)\n", "mlai.write_figure(filename='./deepgp/step-function-deep-gp.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The deep Gaussian process does a much better job of fitting the data. It\n", "handles the discontinuity easily, and error bars drop to smaller values\n", "in the regions of data.\n", "\n", "\n", "\n", "Figure: Deep Gaussian process fit to the step function data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Step Function Data Deep GP\n", "--------------------------\n", "\n", "The samples of the model can be plotted with the helper function from\n", "`teaching_plots.py`, `model_sample`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "\n", "plot.model_sample(m, scale=scale, offset=offset, samps=10, ax=ax, portion = 0.5)\n", "ax.set_ylim(ylim)\n", "ax.set_xlim(xlim)\n", "mlai.write_figure(figure=fig, filename='./deepgp/step-function-deep-gp-samples.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The samples from the model show that the error bars, which are\n", "informative for Gaussian outputs, are less informative for this model.\n", "They make clear that the data points lie, in output mainly at 0 or 1, or\n", "occasionally in between.\n", "\n", "\n", "\n", "Figure: Samples from the deep Gaussian process model for the step\n", "function fit.\n", "\n", "The visualize code allows us to inspect the intermediate layers in the\n", "deep GP model to understand how it has reconstructed the step function." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.visualize(offset=offset, scale=scale, xlim=xlim, ylim=ylim,\n", " dataset='step-function',\n", " diagrams='./deepgp')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "\n", "\n", "Figure: From top to bottom, the Gaussian process mapping function\n", "that makes up each layer of the resulting deep Gaussian process.\n", "\n", "A pinball plot can be created for the resulting model to understand how\n", "the input is being translated to the output across the different layers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "m.visualize_pinball(offset=offset, ax=ax, scale=scale, xlim=xlim, ylim=ylim, portion=0.1, points=50)\n", "mlai.write_figure(figure=fig, filename='./deepgp/step-function-deep-gp-pinball.svg', \n", " transparent=True, frameon=True, ax=ax)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Pinball plot of the deep GP fitted to the step function data.\n", "Each layer of the model pushes the ‘ball’ towards the left or right,\n", "saturating at 1 and 0. This causes the final density to be be peaked at\n", "0 and 1. Transitions occur driven by the uncertainty of the mapping in\n", "each layer." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pods.datasets.mcycle()\n", "x = data['X']\n", "y = data['Y']\n", "scale=np.sqrt(y.var())\n", "offset=y.mean()\n", "yhat = (y - offset)/scale" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "_ = ax.plot(x, y, 'r.',markersize=10)\n", "_ = ax.set_xlabel('time', fontsize=20)\n", "_ = ax.set_ylabel('acceleration', fontsize=20)\n", "xlim = (-20, 80)\n", "ylim = (-175, 125)\n", "ax.set_xlim(xlim)\n", "ax.set_ylim(ylim)\n", "mlai.write_figure(filename='./datasets/motorcycle-helmet.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Motorcycle Helmet Data\n", "----------------------\n", "\n", "\n", "\n", "Figure: Motorcycle helmet data. The data consists of acceleration\n", "readings on a motorcycle helmet undergoing a collision. The data\n", "exhibits heteroscedastic (time-varying) noise levels and\n", "non-stationarity." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m_full = GPy.models.GPRegression(x,yhat)\n", "_ = m_full.optimize() # Optimize parameters of covariance function" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m_full, scale=scale, offset=offset, ax=ax, xlabel='time', ylabel='acceleration/$g$', fontsize=20, portion=0.5)\n", "xlim=(-20,80)\n", "ylim=(-180,120)\n", "ax.set_ylim(ylim)\n", "ax.set_xlim(xlim)\n", "mlai.write_figure(figure=fig,filename='./gp/motorcycle-helmet-gp.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Motorcycle Helmet Data GP\n", "-------------------------\n", "\n", "\n", "\n", "Figure: Gaussian process fit to the motorcycle helmet accelerometer\n", "data."
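] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before moving to the deep model, a rough added check of the time-varying noise (assuming `x`, `yhat` and `m_full` from the cells above are in scope) is to bin the residuals of the constant-noise GP in time and compare their standard deviations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# An added rough check, assuming x, yhat and m_full from the cells above: bin\n", "# the residuals of the constant-noise GP in time and compare their spread.\n", "mean_pred, _ = m_full.predict(x)\n", "residuals = (yhat - mean_pred).flatten()\n", "order = np.argsort(x.flatten())\n", "for block in np.array_split(order, 5):\n", "    times = x.flatten()[block]\n", "    print('time {:5.1f} to {:5.1f}: residual std {:.2f}'.format(\n", "        times.min(), times.max(), residuals[block].std()))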
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import deepgp" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "layers = [y.shape[1], 1, x.shape[1]]\n", "inits = ['PCA']*(len(layers)-1)\n", "kernels = []\n", "for i in layers[1:]:\n", " kernels += [GPy.kern.RBF(i)]\n", "m = deepgp.DeepGP(layers,Y=yhat, X=x, \n", " inits=inits, \n", " kernels=kernels, # the kernels for each layer\n", " num_inducing=20, back_constraint=False)\n", "\n", "\n", "\n", "m.initialize()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.staged_optimize(iters=(1000,1000,10000), messages=(True, True, True))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot\n", "import mlai" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m, scale=scale, offset=offset, ax=ax, xlabel='time', ylabel='acceleration/$g$', fontsize=20, portion=0.5)\n", "ax.set_ylim(ylim)\n", "ax.set_xlim(xlim)\n", "mlai.write_figure(filename='./deepgp/motorcycle-helmet-deep-gp.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Motorcycle Helmet Data Deep GP\n", "------------------------------\n", "\n", "\n", "\n", "Figure: Deep Gaussian process fit to the motorcycle helmet\n", "accelerometer data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot\n", "import mlai" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_sample(m, scale=scale, offset=offset, samps=10, ax=ax, xlabel='time', ylabel='acceleration/$g$', portion = 0.5)\n", "ax.set_ylim(ylim)\n", "ax.set_xlim(xlim)\n", "\n", "mlai.write_figure(figure=fig, filename='./deepgp/motorcycle-helmet-deep-gp-samples.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Motorcycle Helmet Data Deep GP\n", "------------------------------\n", "\n", "\n", "\n", "Figure: Samples from the deep Gaussian process as fitted to the\n", "motorcycle helmet accelerometer data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.visualize(xlim=xlim, ylim=ylim, scale=scale,offset=offset, \n", " xlabel=\"time\", ylabel=\"acceleration/$g$\", portion=0.5,\n", " dataset='motorcycle-helmet',\n", " diagrams='./deepgp')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Motorcycle Helmet Data Latent 1\n", "-------------------------------\n", "\n", "\n", "\n", "Figure: Mappings from the input to the latent layer for the\n", "motorcycle helmet accelerometer data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Motorcycle Helmet Data Latent 2\n", "-------------------------------\n", "\n", "\n", "\n", "Figure: Mappings from the latent layer to the output layer for the\n", "motorcycle helmet accelerometer data." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "m.visualize_pinball(ax=ax, xlabel='time', ylabel='acceleration/g', \n", " points=50, scale=scale, offset=offset, portion=0.1)\n", "mlai.write_figure(figure=fig, filename='./deepgp/motorcycle-helmet-deep-gp-pinball.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Motorcycle Helmet Pinball Plot\n", "------------------------------\n", "\n", "\n", "\n", "Figure: Pinball plot for the mapping from input to output layer for\n", "the motorcycle helmet accelerometer data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Robot Wireless Data\n", "-------------------\n", "\n", "The robot wireless data is taken from an experiment run by Brian Ferris\n", "at the University of Washington. It consists of the measurements of WiFi\n", "access point signal strengths as Brian walked in a loop." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data=pods.datasets.robot_wireless()\n", "\n", "x = np.linspace(0,1,215)[:, np.newaxis]\n", "y = data['Y']\n", "offset = y.mean()\n", "scale = np.sqrt(y.var())\n", "yhat = (y-offset)/scale" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The ground truth is recorded in the data; the actual loop is shown in\n", "the plot below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_figsize)\n", "plt.plot(data['X'][:, 1], data['X'][:, 2], 'r.', markersize=5)\n", "ax.set_xlabel('x position', fontsize=20)\n", "ax.set_ylabel('y position', fontsize=20)\n", "mlai.write_figure(figure=fig, filename='./datasets/robot-wireless-ground-truth.svg', transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Robot Wireless Ground Truth\n", "---------------------------\n", "\n", "\n", "\n", "Figure: Ground truth movement of the robot, showing the positions\n", "recorded while measuring the multivariate time-course of wireless\n", "access point signal strengths.\n", "\n", "We will ignore this ground truth in making our predictions, but see if\n", "the model can recover something similar in one of the latent layers." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "output_dim=1\n", "xlim = (-0.3, 1.3)\n", "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "_ = ax.plot(x.flatten(), y[:, output_dim], \n", " 'r.', markersize=5)\n", "\n", "ax.set_xlabel('time', fontsize=20)\n", "ax.set_ylabel('signal strength', fontsize=20)\n", "xlim = (-0.2, 1.2)\n", "ylim = (-0.6, 2.0)\n", "ax.set_xlim(xlim)\n", "ax.set_ylim(ylim)\n", "\n", "mlai.write_figure(figure=fig, filename='./datasets/robot-wireless-dim-' + str(output_dim) + '.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Robot WiFi Data\n", "---------------\n", "\n", "\n", "\n", "Figure: Output dimension 1 from the robot wireless data. This plot\n", "shows signal strength changing over time.\n", "\n", "Perform a Gaussian process fit on the data using GPy."
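] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before doing so, note that the data is multivariate: each output\n", "dimension records the signal strength of one access point, and it can\n", "help to look at a few of them together. This is a minimal sketch using\n", "the x and y arrays defined above; the number of dimensions plotted is\n", "arbitrary." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "# Plot the first few access points' signal strengths over time\n", "for i in range(5):\n", "    _ = ax.plot(x.flatten(), y[:, i], '.', markersize=5, label='dimension ' + str(i))\n", "_ = ax.set_xlabel('time', fontsize=20)\n", "_ = ax.set_ylabel('signal strength', fontsize=20)\n", "_ = ax.legend()"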
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m_full = GPy.models.GPRegression(x,yhat)\n", "_ = m_full.optimize() # Optimize parameters of covariance function" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m_full, output_dim=output_dim, scale=scale, offset=offset, ax=ax, \n", " xlabel='time', ylabel='signal strength', fontsize=20, portion=0.5)\n", "ax.set_ylim(ylim)\n", "ax.set_xlim(xlim)\n", "mlai.write_figure(filename='./gp/robot-wireless-gp-dim-' + str(output_dim)+ '.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Robot WiFi Data GP\n", "------------------\n", "\n", "\n", "\n", "Figure: Gaussian process fit to the Robot Wireless dimension 1." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "layers = [y.shape[1], 10, 5, 2, 2, x.shape[1]]\n", "inits = ['PCA']*(len(layers)-1)\n", "kernels = []\n", "for i in layers[1:]:\n", " kernels += [GPy.kern.RBF(i, ARD=True)]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = deepgp.DeepGP(layers,Y=y, X=x, inits=inits, \n", " kernels=kernels,\n", " num_inducing=50, back_constraint=False)\n", "m.initialize()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.staged_optimize(messages=(True,True,True))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_output(m, output_dim=output_dim, scale=scale, offset=offset, ax=ax, \n", " xlabel='time', ylabel='signal strength', fontsize=20, portion=0.5)\n", "ax.set_ylim(ylim)\n", "ax.set_xlim(xlim)\n", "mlai.write_figure(figure=fig, filename='./deepgp/robot-wireless-deep-gp-dim-' + str(output_dim)+ '.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Robot WiFi Data Deep GP\n", "-----------------------\n", "\n", "\n", "\n", "Figure: Fit of the deep Gaussian process to dimension 1 of the robot\n", "wireless data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax=plt.subplots(figsize=plot.big_wide_figsize)\n", "plot.model_sample(m, output_dim=output_dim, scale=scale, offset=offset, samps=10, ax=ax,\n", " xlabel='time', ylabel='signal strength', fontsize=20, portion=0.5)\n", "ax.set_ylim(ylim)\n", "ax.set_xlim(xlim)\n", "mlai.write_figure(figure=fig, filename='./deepgp/robot-wireless-deep-gp-samples-dim-' + str(output_dim)+ '.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Robot WiFi Data Deep GP\n", "-----------------------\n", "\n", "\n", "\n", "Figure: Samples from the deep Gaussian process fit to dimension 1 of\n", "the robot wireless data." 
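] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because each layer’s kernel was constructed with ARD=True, the learned\n", "lengthscales indicate which latent dimensions each layer actually uses:\n", "a very large lengthscale means the corresponding dimension is\n", "effectively ignored. The sketch below assumes each layer of the deepgp\n", "model exposes its kernel as layer.kern, matching the usage elsewhere in\n", "this notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Print the ARD lengthscales for each layer of the deep GP\n", "for i, layer in enumerate(m.layers):\n", "    print('layer', i)\n", "    print(layer.kern.lengthscale)"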
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Robot WiFi Data Latent Space\n", "----------------------------" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_figsize)\n", "ax.plot(m.layers[-2].latent_space.mean[:, 0], \n", " m.layers[-2].latent_space.mean[:, 1], \n", " 'r.-', markersize=5)\n", "\n", "ax.set_xlabel('latent dimension 1', fontsize=20)\n", "ax.set_ylabel('latent dimension 2', fontsize=20)\n", "\n", "mlai.write_figure(figure=fig, filename='./deepgp/robot-wireless-latent-space.svg', \n", " transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Inferred two-dimensional latent space of the model for the\n", "robot wireless data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "‘High Five’ Motion Capture Data\n", "-------------------------------\n", "\n", "- ‘High five’ data.\n", "- Model learns structure between two interacting subjects." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Shared LVM\n", "----------\n", "\n", "\n", "\n", "Figure: Shared latent variable model structure. Here two related data\n", "sets are brought together with a set of latent variables that are\n", "partially shared and partially specific to one of the data sets.\n", "\n", "\n", "\n", "Figure: Latent spaces of the ‘high five’ data. The structure of the\n", "model is automatically learnt. One of the latent spaces coordinates\n", "how the two figures walk together, while the other latent spaces contain\n", "latent variables that are specific to each of the figures\n", "separately." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Subsample of the MNIST Data\n", "---------------------------\n", "\n", "We will look at a sub-sample of the MNIST digit data set.\n", "\n", "First load in the MNIST data set from scikit-learn. This can take a\n", "little while because it is a large download." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import fetch_openml" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# fetch_mldata was removed from scikit-learn; fetch_openml provides the same data\n", "mnist = fetch_openml('mnist_784', version=1, as_frame=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Sub-sample the dataset to make the training faster." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.random.seed(0)\n", "digits = [0,1,2,3,4]\n", "N_per_digit = 100\n", "Y = []\n", "labels = []\n", "for d in digits:\n", " imgs = mnist['data'][mnist['target'].astype(int)==d] # fetch_openml returns string labels\n", " Y.append(imgs[np.random.permutation(imgs.shape[0])][:N_per_digit])\n", " labels.append(np.ones(N_per_digit)*d)\n", "Y = np.vstack(Y).astype(np.float64)\n", "labels = np.hstack(labels)\n", "Y /= 255." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Fitting a Deep GP to the MNIST Digits Subsample\n", "-------------------------------------------------\n", "\n", "Thanks to: Zhenwen Dai and Neil D.\n", "Lawrence\n", "\n", "We now look at the deep Gaussian processes’ capacity to perform\n", "unsupervised learning." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Fit a Deep GP\n", "-------------\n", "\n", "We’re going to fit a Deep Gaussian process model to the MNIST data with\n", "two hidden layers. Each of the two Gaussian processes (one from the\n", "first hidden layer to the second, one from the second hidden layer to\n", "the data) has an exponentiated quadratic covariance." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import deepgp\n", "import GPy" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "num_latent = 2\n", "num_hidden_2 = 5\n", "m = deepgp.DeepGP([Y.shape[1],num_hidden_2,num_latent],\n", " Y,\n", " kernels=[GPy.kern.RBF(num_hidden_2,ARD=True), \n", " GPy.kern.RBF(num_latent,ARD=False)], \n", " num_inducing=50, back_constraint=False, \n", " encoder_dims=[[200],[200]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Initialization\n", "--------------\n", "\n", "Just as with deep neural networks, there are some tricks to initializing\n", "these models. The tricks we use here include some early training of the\n", "model with model parameters constrained. This gives the variational\n", "inducing parameters some scope to tighten the bound for the case where\n", "the noise variance is small and the variances of the Gaussian processes\n", "are around 1." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.obslayer.likelihood.variance[:] = Y.var()*0.01\n", "for layer in m.layers:\n", " layer.kern.variance.fix(warning=False)\n", " layer.likelihood.variance.fix(warning=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now optimize for a hundred iterations with the constrained model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.optimize(messages=False,max_iters=100)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we remove the fixed constraint on the kernel variance parameters,\n", "but keep the output noise variance constrained, and run for a further\n", "100 iterations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for layer in m.layers:\n", " layer.kern.variance.constrain_positive(warning=False)\n", "m.optimize(messages=False,max_iters=100)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally we unconstrain the layer likelihoods and allow the full model to\n", "be trained for up to 10,000 iterations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for layer in m.layers:\n", " layer.likelihood.variance.constrain_positive(warning=False)\n", "m.optimize(messages=True,max_iters=10000)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Visualize the latent space of the top layer\n", "-------------------------------------------\n", "\n", "Now that the model is trained, let’s plot the mean of the posterior\n", "distributions in the top latent layer of the model."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "from matplotlib import rc\n", "import teaching_plots as plot\n", "import mlai" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rc(\"font\", **{'family':'sans-serif','sans-serif':['Helvetica'],'size':20})\n", "fig, ax = plt.subplots(figsize=plot.big_figsize)\n", "for d in digits:\n", " ax.plot(m.layer_1.X.mean[labels==d,0],m.layer_1.X.mean[labels==d,1],'.',label=str(d))\n", "_ = plt.legend()\n", "mlai.write_figure(figure=fig, filename=\"./deepgp/mnist-digits-subsample-latent.svg\", transparent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Latent space for the deep Gaussian process learned through\n", "unsupervised learning and fitted to the MNIST digits subsample." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Visualize the latent space of the intermediate layer\n", "----------------------------------------------------\n", "\n", "We can also visualize dimensions of the intermediate layer. First, the\n", "lengthscales of those dimensions are given by" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m.obslayer.kern.lengthscale" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import mlai" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=plot.big_figsize)\n", "for i in range(5):\n", " for j in range(i):\n", " dims=[i, j]\n", " ax.cla()\n", " for d in digits:\n", " ax.plot(m.obslayer.X.mean[labels==d,dims[0]],\n", " m.obslayer.X.mean[labels==d,dims[1]],\n", " '.', label=str(d))\n", " plt.legend()\n", " plt.xlabel('dimension ' + str(dims[0]))\n", " plt.ylabel('dimension ' + str(dims[1]))\n", " mlai.write_figure(figure=fig, filename=\"./deepgp/mnist-digits-subsample-hidden-\" + str(dims[0]) + '-' + str(dims[1]) + '.svg', transparent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Visualisation of the intermediate layer, plotting pairs of\n", "latent dimensions against each other (for example dimension 1 vs\n", "dimension 0)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Generate From Model\n", "-------------------\n", "\n", "Now we can take a look at a sample from the model by drawing a Gaussian\n", "random sample in the latent space and propagating it through the model."
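] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The simplest version of this is to draw an independent Gaussian sample\n", "for each latent point and push it through the model. Below is a minimal\n", "sketch assuming the trained model m and num_latent from above; m.predict\n", "returns the mean and variance of the mapping into data space and we\n", "visualize the means as images. The cell that follows instead takes a\n", "smooth tour of the latent space by correlating the latent points with a\n", "Gaussian process sample." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "num_samples = 10\n", "# Independent Gaussian samples in the two-dimensional latent space\n", "z = np.random.normal(size=(num_samples, num_latent))\n", "mean, var = m.predict(z)\n", "\n", "fig, axs = plt.subplots(1, num_samples, figsize=(10, 2))\n", "for i in range(num_samples):\n", "    axs[i].imshow(mean[i, :].reshape(28, 28),\n", "                  cmap='gray', interpolation='none', aspect='equal')\n", "    axs[i].set_axis_off()"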
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rows = 10\n", "cols = 20\n", "t=np.linspace(-1, 1, rows*cols)[:, None]\n", "kern = GPy.kern.RBF(1,lengthscale=0.05)\n", "cov = kern.K(t, t)\n", "x = np.random.multivariate_normal(np.zeros(rows*cols), cov, num_latent).T" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import mlai" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "yt = m.predict(x)\n", "fig, axs = plt.subplots(rows,cols,figsize=(10,6))\n", "for i in range(rows):\n", " for j in range(cols):\n", " #v = np.random.normal(loc=yt[0][i*cols+j, :], scale=np.sqrt(yt[1][i*cols+j, :]))\n", " v = yt[0][i*cols+j, :]\n", " axs[i,j].imshow(v.reshape(28,28), \n", " cmap='gray', interpolation='none',\n", " aspect='equal')\n", " axs[i,j].set_axis_off()\n", "mlai.write_figure(figure=fig, filename=\"./deepgp/digit-samples-deep-gp.svg\", transparent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: These digits are produced by taking a tour of the\n", "two-dimensional latent space (as described by a Gaussian process sample)\n", "and mapping the tour into the data space. We visualize the mean of the\n", "mapping in the images." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Deep Health\n", "-----------\n", "\n", "\n", "\n", "Figure: The deep health model uses different layers of abstraction in\n", "the deep Gaussian process to represent information about diagnostics and\n", "treatment, modelling the interrelationships between a patient’s\n", "different data modalities.\n", "\n", "From a machine learning perspective, we’d like to be able to interrelate\n", "all the different modalities that are informative about the state of the\n", "disease. For deep health, the notion is that the state of the disease\n", "appears at the more abstract levels; as we descend the model, we express\n", "relationships between this more abstract concept, which sits within the\n", "physician’s mind, and the data we can measure." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Thanks!\n", "-------\n", "\n", "For more information on these subjects and more you might want to check\n", "the following resources.\n", "\n", "- twitter: [@lawrennd](https://twitter.com/lawrennd)\n", "- podcast: [The Talking Machines](http://thetalkingmachines.com)\n", "- newspaper: [Guardian Profile\n", " Page](http://www.theguardian.com/profile/neil-lawrence)\n", "- blog:\n", " [http://inverseprobability.com](http://inverseprobability.com/blog.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "References\n", "----------" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Álvarez, M.A., Luengo, D., Titsias, M.K., Lawrence, N.D., 2010.\n", "Efficient multioutput Gaussian processes through variational inducing\n", "kernels, in:. pp. 25–32.\n", "\n", "Bui, T.D., Yan, J., Turner, R.E., 2017. A unifying framework for\n", "Gaussian process pseudo-point approximations using power expectation\n", "propagation. Journal of Machine Learning Research 18, 1–72.\n", "\n", "Bui, T., Hernández-Lobato, D., Hernández-Lobato, J., Li, Y., Turner, R.,\n", "2016. Deep Gaussian processes for regression using approximate\n", "expectation propagation, in: Balcan, M.F., Weinberger, K.Q. (Eds.),\n", "Proceedings of the 33rd International Conference on Machine Learning,\n", "Proceedings of Machine Learning Research. PMLR, New York, New York, USA,\n", "pp. 1472–1481.\n", "\n", "Cho, Y., Saul, L.K., 2009. Kernel methods for deep learning, in: Bengio,\n", "Y., Schuurmans, D., Lafferty, J.D., Williams, C.K.I., Culotta, A.\n", "(Eds.), Advances in Neural Information Processing Systems 22. Curran\n", "Associates, Inc., pp. 342–350.\n", "\n", "Csató, L., 2002. Gaussian processes — iterative sparse approximations\n", "(PhD thesis). Aston University.\n", "\n", "Csató, L., Opper, M., 2002. Sparse on-line Gaussian processes. Neural\n", "Computation 14, 641–668.\n", "\n", "Dai, Z., Damianou, A., Hensman, J., Lawrence, N.D., 2014. Gaussian\n", "process models with parallelization and GPU acceleration.\n", "\n", "Damianou, A., 2015. Deep Gaussian processes and variational propagation\n", "of uncertainty (PhD thesis). University of Sheffield.\n", "\n", "Damianou, A., Titsias, M.K., Lawrence, N.D., 2016. Variational inference\n", "for latent variables and uncertain inputs in Gaussian processes. Journal\n", "of Machine Learning Research 17.\n", "\n", "Della Gatta, G., Bansal, M., Ambesi-Impiombato, A., Antonini, D.,\n", "Missero, C., Bernardo, D. di, 2008. Direct targets of the trp63\n", "transcription factor revealed by a combination of gene expression\n", "profiling and reverse engineering. Genome Research 18, 939–948.\n", "\n", "Dunlop, M.M., Girolami, M.A., Stuart, A.M., Teckentrup, A.L., n.d. How\n", "deep are deep Gaussian processes? Journal of Machine Learning Research\n", "19, 1–46.\n", "\n", "Duvenaud, D., Rippel, O., Adams, R., Ghahramani, Z., 2014. Avoiding\n", "pathologies in very deep networks, in:.\n", "\n", "Elsner, F., Leistedt, B., Peiris, H.V., 2016. Unbiased pseudo-$C_\ell$\n", "power spectrum estimation with mode projection. Monthly Notices of the\n", "Royal Astronomical Society 465, 1847–1855.\n", "\n", "Elsner, F., Leistedt, B., Peiris, H.V., 2015. Unbiased methods for\n", "removing systematics from galaxy clustering measurements. Monthly\n", "Notices of the Royal Astronomical Society 456, 2095–2104.\n", "\n", "Gal, Y., Wilk, M. van der, Rasmussen, C.E., n.d. Distributed variational\n", "inference in sparse Gaussian process regression and latent variable\n", "models, in:.\n", "\n", "Havasi, M., Hernández-Lobato, J.M., Murillo-Fuentes, J.J., 2018.\n", "Inference in deep Gaussian processes using stochastic gradient\n", "Hamiltonian Monte Carlo, in: Bengio, S., Wallach, H., Larochelle, H.,\n", "Grauman, K., Cesa-Bianchi, N., Garnett, R. (Eds.), Advances in Neural\n", "Information Processing Systems 31. Curran Associates, Inc., pp.\n", "7506–7516.\n", "\n", "Hensman, J., Fusi, N., Lawrence, N.D., n.d. Gaussian processes for big\n", "data, in:.\n", "\n", "Hoffman, M., Blei, D.M., Wang, C., Paisley, J., 2012. Stochastic\n", "variational inference, arXiv preprint arXiv:1206.7051.\n", "\n", "Izmailov, P., Maddox, W.J., Kirichenko, P., Garipov, T., Vetrov, D.P.,\n", "Wilson, A.G., 2019. Subspace inference for Bayesian deep learning. CoRR\n", "abs/1907.07504.\n", "\n", "Jaffe, A.H., Bond, J.R., Ferreira, P.G., Knox, L.E., 1998. CMB\n", "likelihood functions for beginners and experts, in:.\n", "\n", "Kalaitzis, A.A., Lawrence, N.D., 2011. A simple approach to ranking\n", "differentially expressed gene expression time courses through Gaussian\n", "process regression. BMC Bioinformatics 12.\n", "\n", "Lawrence, N.D., n.d. Learning for larger datasets with the Gaussian\n", "process latent variable model, in:. pp. 243–250.\n", "\n", "Lawrence, N.D., Moore, A.J., 2007. Hierarchical Gaussian process latent\n", "variable models, in:. pp. 481–488.\n", "\n", "Lázaro-Gredilla, M., Quiñonero-Candela, J., Rasmussen, C.E., 2010.\n", "Sparse spectrum Gaussian processes. Journal of Machine Learning Research\n", "11, 1865–1881.\n", "\n", "MacKay, D.J.C., n.d. Introduction to Gaussian processes, in:. pp.\n", "133–166.\n", "\n", "Mishra-Sharma, S., Cranmer, K., 2020. Semi-parametric $\gamma$-ray\n", "modeling with Gaussian processes and variational inference.\n", "\n", "Pontzen, A., Peiris, H.V., 2010. The cut-sky cosmic microwave background\n", "is not anomalous. Phys. Rev. D 81, 103008.\n", "\n", "Quiñonero Candela, J., Rasmussen, C.E., 2005. A unifying view of sparse\n", "approximate Gaussian process regression. Journal of Machine Learning\n", "Research 6, 1939–1959.\n", "\n", "Salimbeni, H., Deisenroth, M., 2017. Doubly stochastic variational\n", "inference for deep Gaussian processes, in: Guyon, I., Luxburg, U.V.,\n", "Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R.\n", "(Eds.), Advances in Neural Information Processing Systems 30. Curran\n", "Associates, Inc., pp. 4591–4602.\n", "\n", "Seeger, M.W., Hetzel, A., Dai, Z., Lawrence, N.D., 2017.\n", "Auto-differentiating linear algebra. CoRR abs/1710.08717.\n", "\n", "Seeger, M., Williams, C.K.I., Lawrence, N.D., n.d. Fast forward\n", "selection to speed up sparse Gaussian process regression, in:.\n", "\n", "Smola, A.J., Bartlett, P.L., n.d. Sparse greedy Gaussian process\n", "regression, in:. pp. 619–625.\n", "\n", "Snelson, E., Ghahramani, Z., n.d. Sparse Gaussian processes using\n", "pseudo-inputs, in:.\n", "\n", "Taigman, Y., Yang, M., Ranzato, M., Wolf, L., 2014. DeepFace: Closing\n", "the gap to human-level performance in face verification, in: Proceedings\n", "of the IEEE Computer Society Conference on Computer Vision and Pattern\n", "Recognition.\n", "\n", "Titsias, M.K., n.d. Variational learning of inducing variables in sparse\n", "Gaussian processes, in:. pp. 567–574.\n", "\n", "Vogelsberger, M., Marinacci, F., Torrey, P., Puchwein, E., 2020.\n", "Cosmological simulations of galaxy formation. Nature Reviews Physics\n", "42–66.\n", "\n", "Williams, C.K.I., Seeger, M., n.d. Using the Nyström method to speed up\n", "kernel machines, in:. pp. 682–688." ] } ], "nbformat": 4, "nbformat_minor": 5, "metadata": {} }