{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Bayesian Methods\n", "### [Neil D. Lawrence](http://inverseprobability.com), Amazon Cambridge and University of Sheffield\n", "### 2018-06-04\n", "\n", "**Abstract**: In his philosophical essay on probabilities, Laplace motivated the\n", "deterministic universe as a *straw man* in terms of driving predictions.\n", "He suggested ignorance of data and models drives the need to turn to\n", "probability. Bayesian formalisms deal with uncertainty in parameters of\n", "the model. In this lecture we review the Bayesian formalism in the\n", "context of linear models, reviewing initially maximum likelihood and\n", "introducing basis functions as a way of driving non-linearity in the\n", "model.\n", "\n", "$$\n", "\\newcommand{\\Amatrix}{\\mathbf{A}}\n", "\\newcommand{\\KL}[2]{\\text{KL}\\left( #1\\,\\|\\,#2 \\right)}\n", "\\newcommand{\\Kaast}{\\kernelMatrix_{\\mathbf{ \\ast}\\mathbf{ \\ast}}}\n", "\\newcommand{\\Kastu}{\\kernelMatrix_{\\mathbf{ \\ast} \\inducingVector}}\n", "\\newcommand{\\Kff}{\\kernelMatrix_{\\mappingFunctionVector \\mappingFunctionVector}}\n", "\\newcommand{\\Kfu}{\\kernelMatrix_{\\mappingFunctionVector \\inducingVector}}\n", "\\newcommand{\\Kuast}{\\kernelMatrix_{\\inducingVector \\bf\\ast}}\n", "\\newcommand{\\Kuf}{\\kernelMatrix_{\\inducingVector \\mappingFunctionVector}}\n", "\\newcommand{\\Kuu}{\\kernelMatrix_{\\inducingVector \\inducingVector}}\n", "\\newcommand{\\Kuui}{\\Kuu^{-1}}\n", "\\newcommand{\\Qaast}{\\mathbf{Q}_{\\bf \\ast \\ast}}\n", "\\newcommand{\\Qastf}{\\mathbf{Q}_{\\ast \\mappingFunction}}\n", "\\newcommand{\\Qfast}{\\mathbf{Q}_{\\mappingFunctionVector \\bf \\ast}}\n", "\\newcommand{\\Qff}{\\mathbf{Q}_{\\mappingFunctionVector \\mappingFunctionVector}}\n", "\\newcommand{\\aMatrix}{\\mathbf{A}}\n", "\\newcommand{\\aScalar}{a}\n", "\\newcommand{\\aVector}{\\mathbf{a}}\n", "\\newcommand{\\acceleration}{a}\n", "\\newcommand{\\bMatrix}{\\mathbf{B}}\n", "\\newcommand{\\bScalar}{b}\n", "\\newcommand{\\bVector}{\\mathbf{b}}\n", "\\newcommand{\\basisFunc}{\\phi}\n", "\\newcommand{\\basisFuncVector}{\\boldsymbol{ \\basisFunc}}\n", "\\newcommand{\\basisFunction}{\\phi}\n", "\\newcommand{\\basisLocation}{\\mu}\n", "\\newcommand{\\basisMatrix}{\\boldsymbol{ \\Phi}}\n", "\\newcommand{\\basisScalar}{\\basisFunction}\n", "\\newcommand{\\basisVector}{\\boldsymbol{ \\basisFunction}}\n", "\\newcommand{\\activationFunction}{\\phi}\n", "\\newcommand{\\activationMatrix}{\\boldsymbol{ \\Phi}}\n", "\\newcommand{\\activationScalar}{\\basisFunction}\n", "\\newcommand{\\activationVector}{\\boldsymbol{ \\basisFunction}}\n", "\\newcommand{\\bigO}{\\mathcal{O}}\n", "\\newcommand{\\binomProb}{\\pi}\n", "\\newcommand{\\cMatrix}{\\mathbf{C}}\n", "\\newcommand{\\cbasisMatrix}{\\hat{\\boldsymbol{ \\Phi}}}\n", "\\newcommand{\\cdataMatrix}{\\hat{\\dataMatrix}}\n", "\\newcommand{\\cdataScalar}{\\hat{\\dataScalar}}\n", "\\newcommand{\\cdataVector}{\\hat{\\dataVector}}\n", "\\newcommand{\\centeredKernelMatrix}{\\mathbf{ \\MakeUppercase{\\centeredKernelScalar}}}\n", "\\newcommand{\\centeredKernelScalar}{b}\n", "\\newcommand{\\centeredKernelVector}{\\centeredKernelScalar}\n", "\\newcommand{\\centeringMatrix}{\\mathbf{H}}\n", "\\newcommand{\\chiSquaredDist}[2]{\\chi_{#1}^{2}\\left(#2\\right)}\n", "\\newcommand{\\chiSquaredSamp}[1]{\\chi_{#1}^{2}}\n", "\\newcommand{\\conditionalCovariance}{\\boldsymbol{ \\Sigma}}\n", "\\newcommand{\\coregionalizationMatrix}{\\mathbf{B}}\n", "\\newcommand{\\coregionalizationScalar}{b}\n", 
"\\newcommand{\\coregionalizationVector}{\\mathbf{ \\coregionalizationScalar}}\n", "\\newcommand{\\covDist}[2]{\\text{cov}_{#2}\\left(#1\\right)}\n", "\\newcommand{\\covSamp}[1]{\\text{cov}\\left(#1\\right)}\n", "\\newcommand{\\covarianceScalar}{c}\n", "\\newcommand{\\covarianceVector}{\\mathbf{ \\covarianceScalar}}\n", "\\newcommand{\\covarianceMatrix}{\\mathbf{C}}\n", "\\newcommand{\\covarianceMatrixTwo}{\\boldsymbol{ \\Sigma}}\n", "\\newcommand{\\croupierScalar}{s}\n", "\\newcommand{\\croupierVector}{\\mathbf{ \\croupierScalar}}\n", "\\newcommand{\\croupierMatrix}{\\mathbf{ \\MakeUppercase{\\croupierScalar}}}\n", "\\newcommand{\\dataDim}{p}\n", "\\newcommand{\\dataIndex}{i}\n", "\\newcommand{\\dataIndexTwo}{j}\n", "\\newcommand{\\dataMatrix}{\\mathbf{Y}}\n", "\\newcommand{\\dataScalar}{y}\n", "\\newcommand{\\dataSet}{\\mathcal{D}}\n", "\\newcommand{\\dataStd}{\\sigma}\n", "\\newcommand{\\dataVector}{\\mathbf{ \\dataScalar}}\n", "\\newcommand{\\decayRate}{d}\n", "\\newcommand{\\degreeMatrix}{\\mathbf{ \\MakeUppercase{\\degreeScalar}}}\n", "\\newcommand{\\degreeScalar}{d}\n", "\\newcommand{\\degreeVector}{\\mathbf{ \\degreeScalar}}\n", "% Already defined by latex\n", "%\\newcommand{\\det}[1]{\\left|#1\\right|}\n", "\\newcommand{\\diag}[1]{\\text{diag}\\left(#1\\right)}\n", "\\newcommand{\\diagonalMatrix}{\\mathbf{D}}\n", "\\newcommand{\\diff}[2]{\\frac{\\text{d}#1}{\\text{d}#2}}\n", "\\newcommand{\\diffTwo}[2]{\\frac{\\text{d}^2#1}{\\text{d}#2^2}}\n", "\\newcommand{\\displacement}{x}\n", "\\newcommand{\\displacementVector}{\\textbf{\\displacement}}\n", "\\newcommand{\\distanceMatrix}{\\mathbf{ \\MakeUppercase{\\distanceScalar}}}\n", "\\newcommand{\\distanceScalar}{d}\n", "\\newcommand{\\distanceVector}{\\mathbf{ \\distanceScalar}}\n", "\\newcommand{\\eigenvaltwo}{\\ell}\n", "\\newcommand{\\eigenvaltwoMatrix}{\\mathbf{L}}\n", "\\newcommand{\\eigenvaltwoVector}{\\mathbf{l}}\n", "\\newcommand{\\eigenvalue}{\\lambda}\n", "\\newcommand{\\eigenvalueMatrix}{\\boldsymbol{ \\Lambda}}\n", "\\newcommand{\\eigenvalueVector}{\\boldsymbol{ \\lambda}}\n", "\\newcommand{\\eigenvector}{\\mathbf{ \\eigenvectorScalar}}\n", "\\newcommand{\\eigenvectorMatrix}{\\mathbf{U}}\n", "\\newcommand{\\eigenvectorScalar}{u}\n", "\\newcommand{\\eigenvectwo}{\\mathbf{v}}\n", "\\newcommand{\\eigenvectwoMatrix}{\\mathbf{V}}\n", "\\newcommand{\\eigenvectwoScalar}{v}\n", "\\newcommand{\\entropy}[1]{\\mathcal{H}\\left(#1\\right)}\n", "\\newcommand{\\errorFunction}{E}\n", "\\newcommand{\\expDist}[2]{\\left<#1\\right>_{#2}}\n", "\\newcommand{\\expSamp}[1]{\\left<#1\\right>}\n", "\\newcommand{\\expectation}[1]{\\left\\langle #1 \\right\\rangle }\n", "\\newcommand{\\expectationDist}[2]{\\left\\langle #1 \\right\\rangle _{#2}}\n", "\\newcommand{\\expectedDistanceMatrix}{\\mathcal{D}}\n", "\\newcommand{\\eye}{\\mathbf{I}}\n", "\\newcommand{\\fantasyDim}{r}\n", "\\newcommand{\\fantasyMatrix}{\\mathbf{ \\MakeUppercase{\\fantasyScalar}}}\n", "\\newcommand{\\fantasyScalar}{z}\n", "\\newcommand{\\fantasyVector}{\\mathbf{ \\fantasyScalar}}\n", "\\newcommand{\\featureStd}{\\varsigma}\n", "\\newcommand{\\gammaCdf}[3]{\\mathcal{GAMMA CDF}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gammaDist}[3]{\\mathcal{G}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gammaSamp}[2]{\\mathcal{G}\\left(#1,#2\\right)}\n", "\\newcommand{\\gaussianDist}[3]{\\mathcal{N}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gaussianSamp}[2]{\\mathcal{N}\\left(#1,#2\\right)}\n", "\\newcommand{\\given}{|}\n", "\\newcommand{\\half}{\\frac{1}{2}}\n", 
"\\newcommand{\\heaviside}{H}\n", "\\newcommand{\\hiddenMatrix}{\\mathbf{ \\MakeUppercase{\\hiddenScalar}}}\n", "\\newcommand{\\hiddenScalar}{h}\n", "\\newcommand{\\hiddenVector}{\\mathbf{ \\hiddenScalar}}\n", "\\newcommand{\\identityMatrix}{\\eye}\n", "\\newcommand{\\inducingInputScalar}{z}\n", "\\newcommand{\\inducingInputVector}{\\mathbf{ \\inducingInputScalar}}\n", "\\newcommand{\\inducingInputMatrix}{\\mathbf{Z}}\n", "\\newcommand{\\inducingScalar}{u}\n", "\\newcommand{\\inducingVector}{\\mathbf{ \\inducingScalar}}\n", "\\newcommand{\\inducingMatrix}{\\mathbf{U}}\n", "\\newcommand{\\inlineDiff}[2]{\\text{d}#1/\\text{d}#2}\n", "\\newcommand{\\inputDim}{q}\n", "\\newcommand{\\inputMatrix}{\\mathbf{X}}\n", "\\newcommand{\\inputScalar}{x}\n", "\\newcommand{\\inputSpace}{\\mathcal{X}}\n", "\\newcommand{\\inputVals}{\\inputVector}\n", "\\newcommand{\\inputVector}{\\mathbf{ \\inputScalar}}\n", "\\newcommand{\\iterNum}{k}\n", "\\newcommand{\\kernel}{\\kernelScalar}\n", "\\newcommand{\\kernelMatrix}{\\mathbf{K}}\n", "\\newcommand{\\kernelScalar}{k}\n", "\\newcommand{\\kernelVector}{\\mathbf{ \\kernelScalar}}\n", "\\newcommand{\\kff}{\\kernelScalar_{\\mappingFunction \\mappingFunction}}\n", "\\newcommand{\\kfu}{\\kernelVector_{\\mappingFunction \\inducingScalar}}\n", "\\newcommand{\\kuf}{\\kernelVector_{\\inducingScalar \\mappingFunction}}\n", "\\newcommand{\\kuu}{\\kernelVector_{\\inducingScalar \\inducingScalar}}\n", "\\newcommand{\\lagrangeMultiplier}{\\lambda}\n", "\\newcommand{\\lagrangeMultiplierMatrix}{\\boldsymbol{ \\Lambda}}\n", "\\newcommand{\\lagrangian}{L}\n", "\\newcommand{\\laplacianFactor}{\\mathbf{ \\MakeUppercase{\\laplacianFactorScalar}}}\n", "\\newcommand{\\laplacianFactorScalar}{m}\n", "\\newcommand{\\laplacianFactorVector}{\\mathbf{ \\laplacianFactorScalar}}\n", "\\newcommand{\\laplacianMatrix}{\\mathbf{L}}\n", "\\newcommand{\\laplacianScalar}{\\ell}\n", "\\newcommand{\\laplacianVector}{\\mathbf{ \\ell}}\n", "\\newcommand{\\latentDim}{q}\n", "\\newcommand{\\latentDistanceMatrix}{\\boldsymbol{ \\Delta}}\n", "\\newcommand{\\latentDistanceScalar}{\\delta}\n", "\\newcommand{\\latentDistanceVector}{\\boldsymbol{ \\delta}}\n", "\\newcommand{\\latentForce}{f}\n", "\\newcommand{\\latentFunction}{u}\n", "\\newcommand{\\latentFunctionVector}{\\mathbf{ \\latentFunction}}\n", "\\newcommand{\\latentFunctionMatrix}{\\mathbf{ \\MakeUppercase{\\latentFunction}}}\n", "\\newcommand{\\latentIndex}{j}\n", "\\newcommand{\\latentScalar}{z}\n", "\\newcommand{\\latentVector}{\\mathbf{ \\latentScalar}}\n", "\\newcommand{\\latentMatrix}{\\mathbf{Z}}\n", "\\newcommand{\\learnRate}{\\eta}\n", "\\newcommand{\\lengthScale}{\\ell}\n", "\\newcommand{\\rbfWidth}{\\ell}\n", "\\newcommand{\\likelihoodBound}{\\mathcal{L}}\n", "\\newcommand{\\likelihoodFunction}{L}\n", "\\newcommand{\\locationScalar}{\\mu}\n", "\\newcommand{\\locationVector}{\\boldsymbol{ \\locationScalar}}\n", "\\newcommand{\\locationMatrix}{\\mathbf{M}}\n", "\\newcommand{\\variance}[1]{\\text{var}\\left( #1 \\right)}\n", "\\newcommand{\\mappingFunction}{f}\n", "\\newcommand{\\mappingFunctionMatrix}{\\mathbf{F}}\n", "\\newcommand{\\mappingFunctionTwo}{g}\n", "\\newcommand{\\mappingFunctionTwoMatrix}{\\mathbf{G}}\n", "\\newcommand{\\mappingFunctionTwoVector}{\\mathbf{ \\mappingFunctionTwo}}\n", "\\newcommand{\\mappingFunctionVector}{\\mathbf{ \\mappingFunction}}\n", "\\newcommand{\\scaleScalar}{s}\n", "\\newcommand{\\mappingScalar}{w}\n", "\\newcommand{\\mappingVector}{\\mathbf{ \\mappingScalar}}\n", 
"\\newcommand{\\mappingMatrix}{\\mathbf{W}}\n", "\\newcommand{\\mappingScalarTwo}{v}\n", "\\newcommand{\\mappingVectorTwo}{\\mathbf{ \\mappingScalarTwo}}\n", "\\newcommand{\\mappingMatrixTwo}{\\mathbf{V}}\n", "\\newcommand{\\maxIters}{K}\n", "\\newcommand{\\meanMatrix}{\\mathbf{M}}\n", "\\newcommand{\\meanScalar}{\\mu}\n", "\\newcommand{\\meanTwoMatrix}{\\mathbf{M}}\n", "\\newcommand{\\meanTwoScalar}{m}\n", "\\newcommand{\\meanTwoVector}{\\mathbf{ \\meanTwoScalar}}\n", "\\newcommand{\\meanVector}{\\boldsymbol{ \\meanScalar}}\n", "\\newcommand{\\mrnaConcentration}{m}\n", "\\newcommand{\\naturalFrequency}{\\omega}\n", "\\newcommand{\\neighborhood}[1]{\\mathcal{N}\\left( #1 \\right)}\n", "\\newcommand{\\neilurl}{http://inverseprobability.com/}\n", "\\newcommand{\\noiseMatrix}{\\boldsymbol{ E}}\n", "\\newcommand{\\noiseScalar}{\\epsilon}\n", "\\newcommand{\\noiseVector}{\\boldsymbol{ \\epsilon}}\n", "\\newcommand{\\norm}[1]{\\left\\Vert #1 \\right\\Vert}\n", "\\newcommand{\\normalizedLaplacianMatrix}{\\hat{\\mathbf{L}}}\n", "\\newcommand{\\normalizedLaplacianScalar}{\\hat{\\ell}}\n", "\\newcommand{\\normalizedLaplacianVector}{\\hat{\\mathbf{ \\ell}}}\n", "\\newcommand{\\numActive}{m}\n", "\\newcommand{\\numBasisFunc}{m}\n", "\\newcommand{\\numComponents}{m}\n", "\\newcommand{\\numComps}{K}\n", "\\newcommand{\\numData}{n}\n", "\\newcommand{\\numFeatures}{K}\n", "\\newcommand{\\numHidden}{h}\n", "\\newcommand{\\numInducing}{m}\n", "\\newcommand{\\numLayers}{\\ell}\n", "\\newcommand{\\numNeighbors}{K}\n", "\\newcommand{\\numSequences}{s}\n", "\\newcommand{\\numSuccess}{s}\n", "\\newcommand{\\numTasks}{m}\n", "\\newcommand{\\numTime}{T}\n", "\\newcommand{\\numTrials}{S}\n", "\\newcommand{\\outputIndex}{j}\n", "\\newcommand{\\paramVector}{\\boldsymbol{ \\theta}}\n", "\\newcommand{\\parameterMatrix}{\\boldsymbol{ \\Theta}}\n", "\\newcommand{\\parameterScalar}{\\theta}\n", "\\newcommand{\\parameterVector}{\\boldsymbol{ \\parameterScalar}}\n", "\\newcommand{\\partDiff}[2]{\\frac{\\partial#1}{\\partial#2}}\n", "\\newcommand{\\precisionScalar}{j}\n", "\\newcommand{\\precisionVector}{\\mathbf{ \\precisionScalar}}\n", "\\newcommand{\\precisionMatrix}{\\mathbf{J}}\n", "\\newcommand{\\pseudotargetScalar}{\\widetilde{y}}\n", "\\newcommand{\\pseudotargetVector}{\\mathbf{ \\pseudotargetScalar}}\n", "\\newcommand{\\pseudotargetMatrix}{\\mathbf{ \\widetilde{Y}}}\n", "\\newcommand{\\rank}[1]{\\text{rank}\\left(#1\\right)}\n", "\\newcommand{\\rayleighDist}[2]{\\mathcal{R}\\left(#1|#2\\right)}\n", "\\newcommand{\\rayleighSamp}[1]{\\mathcal{R}\\left(#1\\right)}\n", "\\newcommand{\\responsibility}{r}\n", "\\newcommand{\\rotationScalar}{r}\n", "\\newcommand{\\rotationVector}{\\mathbf{ \\rotationScalar}}\n", "\\newcommand{\\rotationMatrix}{\\mathbf{R}}\n", "\\newcommand{\\sampleCovScalar}{s}\n", "\\newcommand{\\sampleCovVector}{\\mathbf{ \\sampleCovScalar}}\n", "\\newcommand{\\sampleCovMatrix}{\\mathbf{s}}\n", "\\newcommand{\\scalarProduct}[2]{\\left\\langle{#1},{#2}\\right\\rangle}\n", "\\newcommand{\\sign}[1]{\\text{sign}\\left(#1\\right)}\n", "\\newcommand{\\sigmoid}[1]{\\sigma\\left(#1\\right)}\n", "\\newcommand{\\singularvalue}{\\ell}\n", "\\newcommand{\\singularvalueMatrix}{\\mathbf{L}}\n", "\\newcommand{\\singularvalueVector}{\\mathbf{l}}\n", "\\newcommand{\\sorth}{\\mathbf{u}}\n", "\\newcommand{\\spar}{\\lambda}\n", "\\newcommand{\\trace}[1]{\\text{tr}\\left(#1\\right)}\n", "\\newcommand{\\BasalRate}{B}\n", "\\newcommand{\\DampingCoefficient}{C}\n", "\\newcommand{\\DecayRate}{D}\n", 
"\\newcommand{\\Displacement}{X}\n", "\\newcommand{\\LatentForce}{F}\n", "\\newcommand{\\Mass}{M}\n", "\\newcommand{\\Sensitivity}{S}\n", "\\newcommand{\\basalRate}{b}\n", "\\newcommand{\\dampingCoefficient}{c}\n", "\\newcommand{\\mass}{m}\n", "\\newcommand{\\sensitivity}{s}\n", "\\newcommand{\\springScalar}{\\kappa}\n", "\\newcommand{\\springVector}{\\boldsymbol{ \\kappa}}\n", "\\newcommand{\\springMatrix}{\\boldsymbol{ \\mathcal{K}}}\n", "\\newcommand{\\tfConcentration}{p}\n", "\\newcommand{\\tfDecayRate}{\\delta}\n", "\\newcommand{\\tfMrnaConcentration}{f}\n", "\\newcommand{\\tfVector}{\\mathbf{ \\tfConcentration}}\n", "\\newcommand{\\velocity}{v}\n", "\\newcommand{\\sufficientStatsScalar}{g}\n", "\\newcommand{\\sufficientStatsVector}{\\mathbf{ \\sufficientStatsScalar}}\n", "\\newcommand{\\sufficientStatsMatrix}{\\mathbf{G}}\n", "\\newcommand{\\switchScalar}{s}\n", "\\newcommand{\\switchVector}{\\mathbf{ \\switchScalar}}\n", "\\newcommand{\\switchMatrix}{\\mathbf{S}}\n", "\\newcommand{\\tr}[1]{\\text{tr}\\left(#1\\right)}\n", "\\newcommand{\\loneNorm}[1]{\\left\\Vert #1 \\right\\Vert_1}\n", "\\newcommand{\\ltwoNorm}[1]{\\left\\Vert #1 \\right\\Vert_2}\n", "\\newcommand{\\onenorm}[1]{\\left\\vert#1\\right\\vert_1}\n", "\\newcommand{\\twonorm}[1]{\\left\\Vert #1 \\right\\Vert}\n", "\\newcommand{\\vScalar}{v}\n", "\\newcommand{\\vVector}{\\mathbf{v}}\n", "\\newcommand{\\vMatrix}{\\mathbf{V}}\n", "\\newcommand{\\varianceDist}[2]{\\text{var}_{#2}\\left( #1 \\right)}\n", "% Already defined by latex\n", "%\\newcommand{\\vec}{#1:}\n", "\\newcommand{\\vecb}[1]{\\left(#1\\right):}\n", "\\newcommand{\\weightScalar}{w}\n", "\\newcommand{\\weightVector}{\\mathbf{ \\weightScalar}}\n", "\\newcommand{\\weightMatrix}{\\mathbf{W}}\n", "\\newcommand{\\weightedAdjacencyMatrix}{\\mathbf{A}}\n", "\\newcommand{\\weightedAdjacencyScalar}{a}\n", "\\newcommand{\\weightedAdjacencyVector}{\\mathbf{ \\weightedAdjacencyScalar}}\n", "\\newcommand{\\onesVector}{\\mathbf{1}}\n", "\\newcommand{\\zerosVector}{\\mathbf{0}}\n", "$$\n", "\n", "## What is Machine Learning?\n", "\n", "### What is Machine Learning?\n", "\n", ". . .\n", "\n", "$$ \\text{data} + \\text{model} \\xrightarrow{\\text{compute}} \\text{prediction}$$\n", "\n", ". . .\n", "\n", "- **data** : observations, could be actively or passively acquired\n", " (meta-data).\n", "\n", ". . .\n", "\n", "- **model** : assumptions, based on previous experience (other data!\n", " transfer learning etc), or beliefs about the regularities of the\n", " universe. Inductive bias.\n", "\n", ". . .\n", "\n", "- **prediction** : an action to be taken or a categorization or a\n", " quality score.\n", "\n", ". . .\n", "\n", "- Royal Society Report: [Machine Learning: Power and Promise of\n", " Computers that Learn by\n", " Example](https://royalsociety.org/~/media/policy/projects/machine-learning/publications/machine-learning-report.pdf)\n", "\n", "### What is Machine Learning?\n", "\n", "$$\\text{data} + \\text{model} \\xrightarrow{\\text{compute}} \\text{prediction}$$\n", "\n", ". . .\n", "\n", "- To combine data with a model need:\n", "\n", ". . .\n", "\n", "- **a prediction function** $\\mappingFunction(\\cdot)$ includes our\n", " beliefs about the regularities of the universe\n", "\n", ". . .\n", "\n", "- **an objective function** $\\errorFunction(\\cdot)$ defines the cost\n", " of misprediction." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import pods\n", "import teaching_plots as plot\n", "import mlai" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Olympic Marathon Data\n", "\n", "The first thing we will do is load a standard data set for regression\n", "modelling. The data consists of the pace of Olympic Gold Medal Marathon\n", "winners for the Olympics from 1896 to present. First we load in the data\n", "and plot." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pods.datasets.olympic_marathon_men()\n", "x = data['X']\n", "y = data['Y']\n", "\n", "offset = y.mean()\n", "scale = np.sqrt(y.var())\n", "\n", "xlim = (1875,2030)\n", "ylim = (2.5, 6.5)\n", "yhat = (y-offset)/scale\n", "\n", "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "_ = ax.plot(x, y, 'r.',markersize=10)\n", "ax.set_xlabel('year', fontsize=20)\n", "ax.set_ylabel('pace min/km', fontsize=20)\n", "ax.set_xlim(xlim)\n", "ax.set_ylim(ylim)\n", "\n", "mlai.write_figure(figure=fig, filename='../slides/diagrams/datasets/olympic-marathon.svg', transparent=True, frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Olympic Marathon Data\n", "\n", "\n", "\n", "\n", "\n", "\n", "
\n", "- Gold medal times for Olympic Marathon since 1896.\n", "\n", "- Marathons before 1924 didn’t have a standardised distance.\n", "\n", "- Present results using pace per km.\n", "\n", "- In 1904 Marathon was badly organised leading to very slow times.\n", "\n", "\n", "![image](../slides/diagrams/Stephen_Kiprotich.jpg) Image from\n", "Wikimedia Commons \n", "
\n", "### Olympic Marathon Data\n", "\n", "\n", "\n", "### Regression: Linear Releationship\n", "\n", "$$\\dataScalar_i = m \\inputScalar_i + c$$\n", "\n", "- $\\dataScalar_i$ : winning pace.\n", "\n", "- $\\inputScalar_i$ : year of Olympics.\n", "\n", "- $m$ : rate of improvement over time.\n", "\n", "- $c$ : winning time at year 0.\n", "\n", "## Overdetermined System\n", "\n", "###" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.over_determined_system(diagrams='../slides/diagrams/ml')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from ipywidgets import IntSlider\n", "import pods" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('over_determined_system{samp:0>3}.svg',\n", " directory='../slides/diagrams/ml', \n", " samp=IntSlider(1,1,8,1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\\\startslides{over\\_determined\\_system}{1}{8}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "### $\\dataScalar = m\\inputScalar + c$\n", "\n", ". . .\n", "\n", "point 1: $\\inputScalar = 1$, $\\dataScalar=3$ $$\n", "3 = m + c\n", "$$\n", "\n", ". . .\n", "\n", "point 2: $\\inputScalar = 3$, $\\dataScalar=1$ $$\n", "1 = 3m + c\n", "$$\n", "\n", ". . .\n", "\n", "point 3: $\\inputScalar = 2$, $\\dataScalar=2.5$\n", "\n", "$$2.5 = 2m + c$$\n", "\n", "### \n", "\n", "\n", "\n", "### \n", "\n", "\n", "\n", "### \n", "\n", "\n", "\n", "### \n", "\n", "\n", "\n", "### $\\dataScalar = m\\inputScalar + c + \\noiseScalar$\n", "\n", ". . .\n", "\n", "point 1: $\\inputScalar = 1$, $\\dataScalar=3$ $$\n", "3 = m + c + \\noiseScalar_1\n", "$$\n", "\n", ". . .\n", "\n", "point 2: $\\inputScalar = 3$, $\\dataScalar=1$ $$\n", "1 = 3m + c + \\noiseScalar_2\n", "$$\n", "\n", ". . .\n", "\n", "point 3: $\\inputScalar = 2$, $\\dataScalar=2.5$ $$\n", "2.5 = 2m + c + \\noiseScalar_3\n", "$$\n", "\n", "### A Probabilistic Process\n", "\n", ". . .\n", "\n", "Set the mean of Gaussian to be a function. $$p\n", "\\left(\\dataScalar_i|\\inputScalar_i\\right)=\\frac{1}{\\sqrt{2\\pi\\dataStd^2}}\\exp \\left(-\\frac{\\left(\\dataScalar_i-\\mappingFunction\\left(\\inputScalar_i\\right)\\right)^{2}}{2\\dataStd^2}\\right).\n", "$$\n", "\n", ". . .\n", "\n", "This gives us a 'noisy function'.\n", "\n", ". . .\n", "\n", "This is known as a stochastic process.\n", "\n", "### The Gaussian Density\n", "\n", "- Perhaps the most common probability density.\n", "\n", ". . .\n", "\n", "$$\\begin{align}\n", " p(\\dataScalar| \\meanScalar, \\dataStd^2) & = \\frac{1}{\\sqrt{2\\pi\\dataStd^2}}\\exp\\left(-\\frac{(\\dataScalar - \\meanScalar)^2}{2\\dataStd^2}\\right)\\\\& \\buildrel\\triangle\\over = \\gaussianDist{\\dataScalar}{\\meanScalar}{\\dataStd^2}\n", " \\end{align}$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.gaussian_of_height(diagrams='../../slides/diagrams/ml')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Gaussian Density\n", "\n", "\n", "\n", "
\n", "*The Gaussian PDF with ${\\meanScalar}=1.7$ and variance\n", "${\\dataStd}^2=0.0225$. Mean shown as cyan line. It could represent the\n", "heights of a population of students. *\n", "
\n", "### Gaussian Density\n", "\n", "$$\n", "\\gaussianDist{\\dataScalar}{\\meanScalar}{\\dataStd^2} = \\frac{1}{\\sqrt{2\\pi\\dataStd^2}} \\exp\\left(-\\frac{(\\dataScalar-\\meanScalar)^2}{2\\dataStd^2}\\right)\n", "$$\n", "\n", ". . .\n", "\n", "
\n", "$\\dataStd^2$ is the variance of the density and $\\meanScalar$ is the\n", "mean.\n", "
\n", "### Two Important Gaussian Properties\n", "\n", "### Sum of Gaussians\n", "\n", ". . .\n", "\n", "[Sum of Gaussian variables is also Gaussian.]{align=\"left\"}\n", "\n", "$$\\dataScalar_i \\sim \\gaussianSamp{\\meanScalar_i}{\\sigma_i^2}$$\n", "\n", ". . .\n", "\n", "[And the sum is distributed as]{align=\"left\"}\n", "\n", "$$\\sum_{i=1}^{\\numData} \\dataScalar_i \\sim \\gaussianSamp{\\sum_{i=1}^\\numData \\meanScalar_i}{\\sum_{i=1}^\\numData \\sigma_i^2}$$\n", "\n", ". . .\n", "\n", "(*Aside*: As sum increases, sum of non-Gaussian, finite variance\n", "variables is also Gaussian because of [central limit\n", "theorem](https://en.wikipedia.org/wiki/Central_limit_theorem).)\n", "\n", "### Scaling a Gaussian\n", "\n", ". . .\n", "\n", "[Scaling a Gaussian leads to a Gaussian.]{align=\"left\"}\n", "\n", ". . .\n", "\n", "$$\\dataScalar \\sim \\gaussianSamp{\\meanScalar}{\\sigma^2}$$\n", "\n", ". . .\n", "\n", "[And the scaled variable is distributed as]{align=\"left\"}\n", "\n", "$$\\mappingScalar \\dataScalar \\sim \\gaussianSamp{\\mappingScalar\\meanScalar}{\\mappingScalar^2 \\sigma^2}.$$\n", "\n", "## Laplace's Idea\n", "\n", "### A Probabilistic Process\n", "\n", "Set the mean of Gaussian to be a function.\n", "\n", ". . .\n", "\n", "$$p\\left(\\dataScalar_i|\\inputScalar_i\\right)=\\frac{1}{\\sqrt{2\\pi\\dataStd^2}}\\exp\\left(-\\frac{\\left(\\dataScalar_i-f\\left(\\inputScalar_i\\right)\\right)^{2}}{2\\dataStd^2}\\right).$$\n", "\n", ". . .\n", "\n", "This gives us a 'noisy function'.\n", "\n", ". . .\n", "\n", "This is known as a stochastic process.\n", "\n", "### Height as a Function of Weight\n", "\n", "In the standard Gaussian, parametized by mean and variance.\n", "\n", "Make the mean a linear function of an *input*.\n", "\n", "This leads to a regression model. $$\n", "\\begin{align*}\n", " \\dataScalar_i=&\\mappingFunction\\left(\\inputScalar_i\\right)+\\noiseScalar_i,\\\\\n", " \\noiseScalar_i \\sim & \\gaussianSamp{0}{\\dataStd^2}.\n", " \\end{align*}\n", "$$\n", "\n", "Assume $\\dataScalar_i$ is height and $\\inputScalar_i$ is weight.\n", "\n", "### Data Point Likelihood\n", "\n", "Likelihood of an individual data point $$\n", "p\\left(\\dataScalar_i|\\inputScalar_i,m,c\\right)=\\frac{1}{\\sqrt{2\\pi \\dataStd^2}}\\exp\\left(-\\frac{\\left(\\dataScalar_i-m\\inputScalar_i-c\\right)^{2}}{2\\dataStd^2}\\right).\n", "$$ Parameters are gradient, $m$, offset, $c$ of the function and noise\n", "variance $\\dataStd^2$.\n", "\n", "### Data Set Likelihood\n", "\n", "If the noise, $\\epsilon_i$ is sampled independently for each data point.\n", "Each data point is independent (given $m$ and $c$). For *independent*\n", "variables: $$\n", "p(\\dataVector) = \\prod_{i=1}^\\numData p(\\dataScalar_i)\n", "$$ $$\n", "p(\\dataVector|\\inputVector, m, c) = \\prod_{i=1}^\\numData p(\\dataScalar_i|\\inputScalar_i, m, c)\n", "$$\n", "\n", "### For Gaussian\n", "\n", "i.i.d. 
assumption $$\n", "p(\\dataVector|\\inputVector, m, c) = \\prod_{i=1}^\\numData \\frac{1}{\\sqrt{2\\pi \\dataStd^2}}\\exp \\left(-\\frac{\\left(\\dataScalar_i- m\\inputScalar_i-c\\right)^{2}}{2\\dataStd^2}\\right).\n", "$$ $$\n", "p(\\dataVector|\\inputVector, m, c) = \\frac{1}{\\left(2\\pi \\dataStd^2\\right)^{\\frac{\\numData}{2}}}\\exp\\left(-\\frac{\\sum_{i=1}^\\numData\\left(\\dataScalar_i-m\\inputScalar_i-c\\right)^{2}}{2\\dataStd^2}\\right).\n", "$$\n", "\n", "### Log Likelihood Function\n", "\n", "- Normally work with the log likelihood: $$\n", " L(m,c,\\dataStd^{2})=-\\frac{\\numData}{2}\\log 2\\pi -\\frac{\\numData}{2}\\log \\dataStd^2 -\\sum_{i=1}^{\\numData}\\frac{\\left(\\dataScalar_i-m\\inputScalar_i-c\\right)^{2}}{2\\dataStd^2}.\n", " $$\n", "\n", "### Consistency of Maximum Likelihood\n", "\n", "- If data was really generated according to probability we specified.\n", "- Correct parameters will be recovered in limit as\n", " $\\numData \\rightarrow \\infty$.\n", "- This can be proven through sample based approximations (law of large\n", " numbers) of \"KL divergences\".\n", "- Mainstay of classical statistics.\n", "\n", "### Probabilistic Interpretation of the Error Function\n", "\n", "- Probabilistic Interpretation for Error Function is Negative Log\n", " Likelihood.\n", "- *Minimizing* error function is equivalent to *maximizing* log\n", " likelihood.\n", "- Maximizing *log likelihood* is equivalent to maximizing the\n", " *likelihood* because $\\log$ is monotonic.\n", "- Probabilistic interpretation: Minimizing error function is\n", " equivalent to maximum likelihood with respect to parameters.\n", "\n", "### Error Function\n", "\n", "- Negative log likelihood is the error function leading to an error\n", " function\n", " $$\\errorFunction(m,c,\\dataStd^{2})=\\frac{\\numData}{2}\\log \\dataStd^2+\\frac{1}{2\\dataStd^2}\\sum _{i=1}^{\\numData}\\left(\\dataScalar_i-m\\inputScalar_i-c\\right)^{2}.$$\n", "- Learning proceeds by minimizing this error function for the data set\n", " provided.\n", "\n", "### Connection: Sum of Squares Error\n", "\n", "- Ignoring terms which don’t depend on $m$ and $c$ gives\n", " $$\\errorFunction(m, c) \\propto \\sum_{i=1}^\\numData (\\dataScalar_i - \\mappingFunction(\\inputScalar_i))^2$$\n", " where $\\mappingFunction(\\inputScalar_i) = m\\inputScalar_i + c$.\n", "- This is known as the *sum of squares* error function.\n", "- Commonly used and is closely associated with the Gaussian\n", " likelihood.\n", "\n", "### Reminder\n", "\n", "- Two functions involved:\n", " - *Prediction function*: $\\mappingFunction(\\inputScalar_i)$\n", " - Error, or *Objective function*: $\\errorFunction(m, c)$\n", "- Error function depends on parameters through prediction function.\n", "\n", "### Mathematical Interpretation\n", "\n", "- What is the mathematical interpretation?\n", "- There is a cost function.\n", " - It expresses mismatch between your prediction and reality. 
$$\n", " \\errorFunction(m, c)=\\sum_{i=1}^\\numData \\left(\\dataScalar_i - m\\inputScalar_i-c\\right)^2\n", " $$\n", " - This is known as the sum of squares error.\n", "\n", "## Sum of Squares Error\n", "\n", "## Linear Algebra" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pods.datasets.olympic_marathon_men()\n", "x = data['X']\n", "y = data['Y']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(x)\n", "print(y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline \n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.plot(x, y, 'rx')\n", "plt.xlabel('year')\n", "plt.ylabel('pace in min/km')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = -0.4\n", "c = 80" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Coordinate Descent\n", "\n", "### Learning is Optimization\n", "\n", "- Learning is minimization of the cost function.\n", "- At the minima the gradient is zero.\n", "- Coordinate ascent, find gradient in each coordinate and set to zero.\n", " $$\\frac{\\text{d}\\errorFunction(c)}{\\text{d}c} = -2\\sum_{i=1}^\\numData \\left(\\dataScalar_i- m \\inputScalar_i - c \\right)$$\n", " $$0 = -2\\sum_{i=1}^\\numData\\left(\\dataScalar_i- m\\inputScalar_i - c \\right)$$\n", "\n", "### Learning is Optimization\n", "\n", "- Fixed point equations\n", " $$0 = -2\\sum_{i=1}^\\numData \\dataScalar_i +2\\sum_{i=1}^\\numData m \\inputScalar_i +2n c$$\n", " $$c = \\frac{\\sum_{i=1}^\\numData \\left(\\dataScalar_i - m\\inputScalar_i\\right)}{\\numData}$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# set c to the minimum\n", "c = (y - m*x).mean()\n", "print(c)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Learning is Optimization\n", "\n", "- Learning is minimization of the cost function.\n", "- At the minima the gradient is zero.\n", "- Coordinate ascent, find gradient in each coordinate and set to zero.\n", " $$\\frac{\\text{d}\\errorFunction(m)}{\\text{d}m} = -2\\sum_{i=1}^\\numData \\inputScalar_i\\left(\\dataScalar_i- m \\inputScalar_i - c \\right)$$\n", " $$0 = -2\\sum_{i=1}^\\numData \\inputScalar_i \\left(\\dataScalar_i-m \\inputScalar_i - c \\right)$$\n", "\n", "### Learning is Optimization\n", "\n", "- Fixed point equations\n", " $$0 = -2\\sum_{i=1}^\\numData \\inputScalar_i\\dataScalar_i+2\\sum_{i=1}^\\numData m \\inputScalar_i^2+2\\sum_{i=1}^\\numData c\\inputScalar_i$$\n", " $$m = \\frac{\\sum_{i=1}^\\numData \\left(\\dataScalar_i -c\\right)\\inputScalar_i}{\\sum_{i=1}^\\numData\\inputScalar_i^2}$$\n", "\n", "$$m^* = \\frac{\\sum_{i=1}^\\numData (\\dataScalar_i - c)\\inputScalar_i}{\\sum_{i=1}^\\numData \\inputScalar_i^2}$$\n", "\n", "### Fixed Point Updates\n", "\n", "[Worked example.]{align=\"left\"} $$\n", "\\begin{aligned}\n", " c^{*}=&\\frac{\\sum\n", "_{i=1}^{\\numData}\\left(\\dataScalar_i-m^{*}\\inputScalar_i\\right)}{\\numData},\\\\\n", " m^{*}=&\\frac{\\sum\n", "_{i=1}^{\\numData}\\inputScalar_i\\left(\\dataScalar_i-c^{*}\\right)}{\\sum _{i=1}^{\\numData}\\inputScalar_i^{2}},\\\\\n", "\\left.\\dataStd^2\\right.^{*}=&\\frac{\\sum\n", "_{i=1}^{\\numData}\\left(\\dataScalar_i-m^{*}\\inputScalar_i-c^{*}\\right)^{2}}{\\numData}\n", "\\end{aligned}\n", "$$" ] }, { "cell_type": "code", 
"execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = ((y - c)*x).sum()/(x**2).sum()\n", "print(m)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "x_test = np.linspace(1890, 2020, 130)[:, None]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f_test = m*x_test + c" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.plot(x_test, f_test, 'b-')\n", "plt.plot(x, y, 'rx')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for i in np.arange(10):\n", " m = ((y - c)*x).sum()/(x*x).sum()\n", " c = (y-m*x).sum()/y.shape[0]\n", "print(m)\n", "print(c)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f_test = m*x_test + c\n", "plt.plot(x_test, f_test, 'b-')\n", "plt.plot(x, y, 'rx')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Important Concepts Not Covered\n", "\n", "- Other optimization methods:\n", " - Second order methods, conjugate gradient, quasi-Newton and\n", " Newton.\n", "- Effective heuristics such as momentum.\n", "- Local vs global solutions.\n", "\n", "### Reading\n", "\n", "- Section 1.1-1.2 of @Rogers:book11 for fitting linear models.\n", "- Section 1.2.5 of @Bishop:book06 up to equation 1.65.\n", "\n", "### Multi-dimensional Inputs\n", "\n", "- Multivariate functions involve more than one input.\n", "- Height might be a function of weight and gender.\n", "- There could be other contributory factors.\n", "- Place these factors in a feature vector $\\inputVector_i$.\n", "- Linear function is now defined as\n", " $$\\mappingFunction(\\inputVector_i) = \\sum_{j=1}^p w_j \\inputScalar_{i, j} + c$$\n", "\n", "### Vector Notation\n", "\n", "- Write in vector notation,\n", " $$\\mappingFunction(\\inputVector_i) = \\mappingVector^\\top \\inputVector_i + c$$\n", "- Can absorb $c$ into $\\mappingVector$ by assuming extra input\n", " $\\inputScalar_0$ which is always 1.\n", " $$\\mappingFunction(\\inputVector_i) = \\mappingVector^\\top \\inputVector_i$$\n", "\n", "### Objective Functions and Regression\n", "\n", "- Classification: map feature to class label.\n", "- Regression: map feature to real value our *prediction function* is" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "$$\\mappingFunction(\\inputScalar_i) = m\\inputScalar_i + c$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- Need an *algorithm* to fit it.\n", "\n", "- Least squares: minimize an error.\n", "\n", "$$\\errorFunction(m, c) = \\sum_{i=1}^\\numData (\\dataScalar_i * \\mappingFunction(\\inputScalar_i))^2$$\n", "\n", "### Regression\n", "\n", "- Create an artifical data set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import mlai" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = np.random.normal(size=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now need to decide on a *true* value for $m$ and a *true* value for\n", "$c$ to use for generating the data." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m_true = 1.4\n", "c_true = -3.1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use these values to create our artificial data. The formula\n", "$$\\dataScalar_i = m\\inputScalar_i + c$$ is translated to code as\n", "follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y = m_true*x+c_true" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Plot of Data\n", "\n", "We can now plot the artifical data we've created." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.plot(x, y, 'r.', markersize=10) # plot data as red dots\n", "plt.xlim([-3, 3])\n", "mlai.write_figure(filename=\"../slides/diagrams/ml/regression.svg\", transparent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "These points lie exactly on a straight line, that's not very realistic,\n", "let's corrupt them with a bit of Gaussian 'noise'.\n", "\n", "### Noise Corrupted Plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "noise = np.random.normal(scale=0.5, size=4) # standard deviation of the noise is 0.5\n", "y = m_true*x + c_true + noise\n", "plt.plot(x, y, 'r.', markersize=10)\n", "plt.xlim([-3, 3])\n", "mlai.write_figure(filename=\"../slides/diagrams/ml/regression_noise.svg\", transparent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "### Contour Plot of Error Function\n", "\n", "- Visualise the error function surface, create vectors of values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# create an array of linearly separated values around m_true\n", "m_vals = np.linspace(m_true-3, m_true+3, 100) \n", "# create an array of linearly separated values ae\n", "c_vals = np.linspace(c_true-3, c_true+3, 100)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- create a grid of values to evaluate the error function in 2D." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m_grid, c_grid = np.meshgrid(m_vals, c_vals)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- compute the error function at each combination of $c$ and $m$." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "E_grid = np.zeros((100, 100))\n", "for i in range(100):\n", " for j in range(100):\n", " E_grid[i, j] = ((y - m_grid[i, j]*x - c_grid[i, j])**2).sum()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Contour Plot of Error\n", "\n", "- We can now make a contour plot." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%load -s regression_contour teaching_plots.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots(figsize=(5,5))\n", "regression_contour(f, ax, m_vals, c_vals, E_grid)\n", "mlai.write_figure(filename='../slides/diagrams/ml/regression_contour.svg')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "### Steepest Descent\n", "\n", "- Minimize the sum of squares error function.\n", "- One way of doing that is gradient descent.\n", "- Initialize with a guess for $m$ and $c$\n", "- update that guess by subtracting a portion of the gradient from the\n", " guess.\n", "- Like walking down a hill in the steepest direction of the hill to\n", " get to the bottom.\n", "\n", "### Algorithm\n", "\n", "- We start with a guess for $m$ and $c$." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m_star = 0.0\n", "c_star = -5.0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Offset Gradient\n", "\n", "- Now we need to compute the gradient of the error function, firstly\n", " with respect to $c$,\n", "\n", "$$\\frac{\\text{d}\\errorFunction(m, c)}{\\text{d} c} =\n", "-2\\sum_{i=1}^\\numData (\\dataScalar_i - m\\inputScalar_i - c)$$\n", "\n", "- This is computed in python as follows" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "c_grad = -2*(y-m_star*x - c_star).sum()\n", "print(\"Gradient with respect to c is \", c_grad)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Deriving the Gradient\n", "\n", "To see how the gradient was derived, first note that the $c$ appears in\n", "every term in the sum. So we are just differentiating\n", "$(\\dataScalar_i - m\\inputScalar_i - c)^2$ for each term in the sum. The\n", "gradient of this term with respect to $c$ is simply the gradient of the\n", "outer quadratic, multiplied by the gradient with respect to $c$ of the\n", "part inside the quadratic. The gradient of a quadratic is two times the\n", "argument of the quadratic, and the gradient of the inside linear term is\n", "just minus one. 
This is true for all terms in the sum, so we are left\n", "with the sum in the gradient.\n", "\n", "### Slope Gradient\n", "\n", "The gradient with respect to $m$ is similar, but now the gradient of\n", "the quadratic's argument is $-\\inputScalar_i$ so the gradient with\n", "respect to $m$ is\n", "\n", "$$\\frac{\\text{d}\\errorFunction(m, c)}{\\text{d} m} = -2\\sum_{i=1}^\\numData \\inputScalar_i(\\dataScalar_i - m\\inputScalar_i -\n", "c)$$\n", "\n", "which can be implemented in python (numpy) as" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m_grad = -2*(x*(y-m_star*x - c_star)).sum()\n", "print(\"Gradient with respect to m is \", m_grad)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Update Equations\n", "\n", "- Now we have gradients with respect to $m$ and $c$.\n", "- Can update our initial guesses for $m$ and $c$ using the gradient.\n", "- We don't want to just subtract the gradient from $m$ and $c$.\n", "- We need to take a *small* step in the gradient direction.\n", "- Otherwise we might overshoot the minimum.\n", "- We want to follow the gradient to get to the minimum, but the gradient\n", " changes all the time.\n", "\n", "### Move in Direction of Gradient" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots(figsize=plot.big_figsize)\n", "plot.regression_contour(f, ax, m_vals, c_vals, E_grid)\n", "ax.plot(m_star, c_star, 'g*', markersize=20)\n", "ax.arrow(m_star, c_star, -m_grad*0.1, -c_grad*0.1, head_width=0.2)\n", "mlai.write_figure(filename='../slides/diagrams/ml/regression_contour_step001.svg', transparent=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "### Update Equations\n", "\n", "- The step size has already been introduced, it's again known as the\n", " learning rate and is denoted by $\\learnRate$. 
$$\n", " c_\\text{new}\\leftarrow c_{\\text{old}} - \\learnRate \\frac{\\text{d}\\errorFunction(m, c)}{\\text{d}c}\n", " $$\n", "\n", "- gives us an update for our estimate of $c$ (which in the code we've\n", " been calling `c_star` to represent a common way of writing a\n", " parameter estimate, $c^*$) and $$\n", " m_\\text{new} \\leftarrow m_{\\text{old}} - \\learnRate \\frac{\\text{d}\\errorFunction(m, c)}{\\text{d}m}\n", " $$\n", "- Giving us an update for $m$.\n", "\n", "### Update Code\n", "\n", "- These updates can be coded as" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Original m was\", m_star, \"and original c was\", c_star)\n", "learn_rate = 0.01\n", "c_star = c_star - learn_rate*c_grad\n", "m_star = m_star - learn_rate*m_grad\n", "print(\"New m is\", m_star, \"and new c is\", c_star)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Iterating Updates\n", "\n", "- Fit model by descending gradient.\n", "\n", "### Gradient Descent Algorithm" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "num_plots = plot.regression_contour_fit(x, y, diagrams='../slides/diagrams/ml')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('regression_contour_fit{num:0>3}.svg', directory='../slides/diagrams/ml', num=IntSlider(0, 0, num_plots, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\\\startslides{regression\\_contour\\_fit}{1}{28}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "### Stochastic Gradient Descent\n", "\n", "- If $\\numData$ is small, gradient descent is fine.\n", "- But sometimes (e.g. on the internet $\\numData$ could be a billion.\n", "- Stochastic gradient descent is more similar to perceptron.\n", "- Look at gradient of one data point at a time rather than summing\n", " across *all* data points)\n", "- This gives a stochastic estimate of gradient.\n", "\n", "### Stochastic Gradient Descent\n", "\n", "- The real gradient with respect to $m$ is given by\n", "\n", "$$\\frac{\\text{d}\\errorFunction(m, c)}{\\text{d} m} = -2\\sum_{i=1}^\\numData \\inputScalar_i(\\dataScalar_i -\n", "m\\inputScalar_i - c)$$\n", "\n", "but it has $\\numData$ terms in the sum. 
Substituting in the gradient we\n", "can see that the full update is of the form\n", "\n", "$$m_\\text{new} \\leftarrow\n", "m_\\text{old} + 2\\learnRate \\left[\\inputScalar_1 (\\dataScalar_1 - m_\\text{old}\\inputScalar_1 - c_\\text{old}) + \\inputScalar_2 (\\dataScalar_2 - m_\\text{old}\\inputScalar_2 - c_\\text{old}) + \\dots + \\inputScalar_n (\\dataScalar_n - m_\\text{old}\\inputScalar_n - c_\\text{old})\\right]$$\n", "\n", "This could be split up into lots of individual updates\n", "$$m_1 \\leftarrow m_\\text{old} + 2\\learnRate \\left[\\inputScalar_1 (\\dataScalar_1 - m_\\text{old}\\inputScalar_1 -\n", "c_\\text{old})\\right]$$\n", "$$m_2 \\leftarrow m_1 + 2\\learnRate \\left[\\inputScalar_2 (\\dataScalar_2 -\n", "m_\\text{old}\\inputScalar_2 - c_\\text{old})\\right]$$\n", "$$m_3 \\leftarrow m_2 + 2\\learnRate\n", "\\left[\\dots\\right]$$\n", "$$m_n \\leftarrow m_{n-1} + 2\\learnRate \\left[\\inputScalar_n (\\dataScalar_n -\n", "m_\\text{old}\\inputScalar_n - c_\\text{old})\\right]$$\n", "\n", "which would lead to the same final update.\n", "\n", "### Updating $c$ and $m$\n", "\n", "- In the sum we don't change the $m$ and $c$ we use for computing the\n", " gradient term at each update.\n", "- In stochastic gradient descent we *do* change them.\n", "- This means it's not quite the same as steepest descent.\n", "- But we can present each data point in a random order, like we did\n", " for the perceptron.\n", "- This makes the algorithm suitable for large scale web use (recently\n", " this domain is known as 'Big Data') and algorithms like this are\n", " widely used by Google, Microsoft, Amazon, Twitter and Facebook.\n", "\n", "### Stochastic Gradient Descent\n", "\n", "- Or more accurately, since the data is normally presented in a random\n", " order, we can just write $$\n", " m_\\text{new} = m_\\text{old} + 2\\learnRate\\left[\\inputScalar_i (\\dataScalar_i - m_\\text{old}\\inputScalar_i - c_\\text{old})\\right]\n", " $$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# choose a random point for the update \n", "i = np.random.randint(x.shape[0])\n", "# update m\n", "m_star = m_star + 2*learn_rate*(x[i]*(y[i]-m_star*x[i] - c_star))\n", "# update c\n", "c_star = c_star + 2*learn_rate*(y[i]-m_star*x[i] - c_star)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### SGD for Linear Regression\n", "\n", "Putting it all together in an algorithm, we can do stochastic gradient\n", "descent for our regression data."
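] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The plotting helper below wraps the whole procedure. For reference, here is an explicit stochastic gradient descent loop over the artificial data (a minimal sketch added for these notes; the starting guesses, learning rate and number of passes are hypothetical choices):\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "m_sgd, c_sgd = 0.0, -5.0   # hypothetical starting guesses\n", "learn_rate_sgd = 0.01      # hypothetical learning rate\n", "for epoch in range(100):   # hypothetical number of passes through the data\n", "    # present the data points in a random order, one at a time\n", "    for i in np.random.permutation(x.shape[0]):\n", "        f_i = m_sgd*x[i] + c_sgd\n", "        m_sgd = m_sgd + 2*learn_rate_sgd*x[i]*(y[i] - f_i)\n", "        c_sgd = c_sgd + 2*learn_rate_sgd*(y[i] - f_i)\n", "print(m_sgd, c_sgd)"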
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "num_plots = plot.regression_contour_sgd(x, y, diagrams='../slides/diagrams/ml')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('regression_sgd_contour_fit{num:0>3}.svg', \n", " directory='../slides/diagrams/mlai', num=IntSlider(0, 0, num_plots, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\\\startslides{regression\\_sgd\\_contour\\_fit}{0}{58}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "### Reflection on Linear Regression and Supervised Learning\n", "\n", "Think about:\n", "\n", "1. What effect does the learning rate have in the optimization? What's\n", " the effect of making it too small, what's the effect of making it\n", " too big? Do you get the same result for both stochastic and steepest\n", " gradient descent?\n", "\n", "2. The stochastic gradient descent doesn't help very much for such a\n", " small data set. It's real advantage comes when there are many,\n", " you'll see this in the lab.\n", "\n", "### Log Likelihood for Multivariate Regression\n", "\n", "The likelihood of a single data point is\n", "\n", ". . .\n", "\n", "$$p\\left(\\dataScalar_i|\\inputScalar_i\\right)=\\frac{1}{\\sqrt{2\\pi\\dataStd^2}}\\exp\\left(-\\frac{\\left(\\dataScalar_i-\\mappingVector^{\\top}\\inputVector_i\\right)^{2}}{2\\dataStd^2}\\right).$$\n", "\n", ". . .\n", "\n", "Leading to a log likelihood for the data set of\n", "\n", ". . 
.\n", "\n", "$$L(\\mappingVector,\\dataStd^2)= -\\frac{\\numData}{2}\\log \\dataStd^2-\\frac{\\numData}{2}\\log 2\\pi -\\frac{\\sum_{i=1}^{\\numData}\\left(\\dataScalar_i-\\mappingVector^{\\top}\\inputVector_i\\right)^{2}}{2\\dataStd^2}.$$\n", "\n", "### Error Function\n", "\n", "And a corresponding error function of\n", "$$\\errorFunction(\\mappingVector,\\dataStd^2)=\\frac{\\numData}{2}\\log\\dataStd^2 + \\frac{\\sum_{i=1}^{\\numData}\\left(\\dataScalar_i-\\mappingVector^{\\top}\\inputVector_i\\right)^{2}}{2\\dataStd^2}.$$\n", "\n", "### Expand the Brackets\n", "\n", "$$\n", "\\begin{align*}\n", " \\errorFunction(\\mappingVector,\\dataStd^2) = &\n", "\\frac{\\numData}{2}\\log \\dataStd^2 + \\frac{1}{2\\dataStd^2}\\sum\n", "_{i=1}^{\\numData}\\dataScalar_i^{2}-\\frac{1}{\\dataStd^2}\\sum\n", "_{i=1}^{\\numData}\\dataScalar_i\\mappingVector^{\\top}\\inputVector_i\\\\&+\\frac{1}{2\\dataStd^2}\\sum\n", "_{i=1}^{\\numData}\\mappingVector^{\\top}\\inputVector_i\\inputVector_i^{\\top}\\mappingVector\n", "+\\text{const}.\\\\\n", " = & \\frac{\\numData}{2}\\log \\dataStd^2 + \\frac{1}{2\\dataStd^2}\\sum\n", "_{i=1}^{\\numData}\\dataScalar_i^{2}-\\frac{1}{\\dataStd^2}\n", "\\mappingVector^\\top\\sum_{i=1}^{\\numData}\\inputVector_i\\dataScalar_i\\\\&+\\frac{1}{2\\dataStd^2}\n", "\\mappingVector^{\\top}\\left[\\sum\n", "_{i=1}^{\\numData}\\inputVector_i\\inputVector_i^{\\top}\\right]\\mappingVector +\\text{const}.\n", "\\end{align*}\n", "$$\n", "\n", "## Multiple Input Solution with Linear Algebra" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# define the vector w\n", "w = np.zeros(shape=(2, 1))\n", "w[0] = m\n", "w[1] = c" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Design Matrix" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X = np.hstack((np.ones_like(x), x))\n", "print(X)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f = np.dot(X, w) # np.dot does matrix multiplication in python" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "resid = (y-f)\n", "E = np.dot(resid.T, resid) # matrix multiplication on a single vector is equivalent to a dot product.\n", "print(\"Error function is:\", E)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Objective Optimisation\n", "\n", "### Multivariate Derivatives\n", "\n", "- We will need some multivariate calculus.\n", "- For now some simple multivariate differentiation:\n", " $$\\frac{\\text{d}{\\mathbf{a}^{\\top}}{\\mappingVector}}{\\text{d}\\mappingVector}=\\mathbf{a}$$\n", " and\n", " $$\\frac{\\mappingVector^{\\top}\\mathbf{A}\\mappingVector}{\\text{d}\\mappingVector}=\\left(\\mathbf{A}+\\mathbf{A}^{\\top}\\right)\\mappingVector$$\n", " or if $\\mathbf{A}$ is symmetric (*i.e.*\n", " $\\mathbf{A}=\\mathbf{A}^{\\top}$)\n", " $$\\frac{\\text{d}\\mappingVector^{\\top}\\mathbf{A}\\mappingVector}{\\text{d}\\mappingVector}=2\\mathbf{A}\\mappingVector.$$\n", "\n", "### Differentiate the Objective\n", "\n", "[Differentiating with respect to the vector $\\mappingVector$ we\n", "obtain]{align=\"left\"} $$\n", "\\frac{\\partial L\\left(\\mappingVector,\\dataStd^2 \\right)}{\\partial\n", "\\mappingVector}=\\frac{1}{\\dataStd^2} \\sum _{i=1}^{\\numData}\\inputVector_i \\dataScalar_i-\\frac{1}{\\dataStd^2}\n", "\\left[\\sum _{i=1}^{\\numData}\\inputVector_i\\inputVector_i^{\\top}\\right]\\mappingVector\n", "$$ Leading to $$\n", 
"\\mappingVector^{*}=\\left[\\sum\n", "_{i=1}^{\\numData}\\inputVector_i\\inputVector_i^{\\top}\\right]^{-1}\\sum\n", "_{i=1}^{\\numData}\\inputVector_i\\dataScalar_i,\n", "$$\n", "\n", "### Differentiate the Objective\n", "\n", "Rewrite in matrix notation: $$\n", "\\sum_{i=1}^{\\numData}\\inputVector_i\\inputVector_i^\\top = \\inputMatrix^\\top \\inputMatrix\n", "$$ $$\n", "\\sum_{i=1}^{\\numData}\\inputVector_i\\dataScalar_i = \\inputMatrix^\\top \\dataVector\n", "$$\n", "\n", "## Update Equation for Global Optimum\n", "\n", "### Update Equations\n", "\n", "- Update for $\\mappingVector^{*}$.\n", " $$\\mappingVector^{*} = \\left(\\inputMatrix^\\top \\inputMatrix\\right)^{-1} \\inputMatrix^\\top \\dataVector$$\n", "- The equation for $\\left.\\dataStd^2\\right.^{*}$ may also be found\n", " $$\\left.\\dataStd^2\\right.^{{*}}=\\frac{\\sum_{i=1}^{\\numData}\\left(\\dataScalar_i-\\left.\\mappingVector^{*}\\right.^{\\top}\\inputVector_i\\right)^{2}}{\\numData}.$$\n", "\n", "### Solving the Multivariate System" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.linalg.solve?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "w = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))\n", "print(w)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = w[1]; c=w[0]\n", "f_test = m*x_test + c\n", "print(m)\n", "print(c)\n", "plt.plot(x_test, f_test, 'b-')\n", "plt.plot(x, y, 'rx')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pods.datasets.movie_body_count()\n", "movies = data['Y']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(', '.join(movies.columns))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "select_features = ['Year', 'Body_Count', 'Length_Minutes']\n", "X = movies[select_features]\n", "X['Eins'] = 1 # add a column for the offset\n", "y = movies[['IMDB_Rating']]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "w = pd.DataFrame(data=np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y)), # solve linear regression here\n", " index = X.columns, # columns of X become rows of w\n", " columns=['regression_coefficient']) # the column of X is the value of regression coefficient" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(y - np.dot(X, w)).hist()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "w" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import scipy as sp\n", "Q, R = np.linalg.qr(X)\n", "w = sp.linalg.solve_triangular(R, np.dot(Q.T, y)) \n", "w = pd.DataFrame(w, index=X.columns)\n", "w" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reading\n", "\n", "- Section 1.3 of @Rogers:book11 for Matrix & Vector Review.\n", "\n", "### Basis Functions\n", "\n", "### Quadratic Basis\n", "\n", "- Basis functions can be global. E.g. 
quadratic basis: $$\n", " \\basisVector = [1, \\inputScalar, \\inputScalar^2]\n", " $$\n", "\n", "$$\n", "\\begin{align*}\n", "\\basisFunc_1(\\inputScalar) = 1, \\\\\n", "\\basisFunc_2(\\inputScalar) = x, \\\\\n", "\\basisFunc_3(\\inputScalar) = \\inputScalar^2.\n", "\\end{align*}\n", "$$\n", "\n", "$$\n", "\\basisVector(\\inputScalar) = \\begin{bmatrix} 1\\\\ x \\\\ \\inputScalar^2\\end{bmatrix}.\n", "$$\n", "\n", "### Matrix Valued Function\n", "\n", "$$\n", "\\basisMatrix(\\inputVector) = \n", "\\begin{bmatrix} 1 & \\inputScalar_1 &\n", "\\inputScalar_1^2 \\\\\n", "1 & \\inputScalar_2 & \\inputScalar_2^2\\\\\n", "\\vdots & \\vdots & \\vdots \\\\\n", "1 & \\inputScalar_n & \\inputScalar_n^2\n", "\\end{bmatrix}\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def quadratic(x, **kwargs):\n", " \"\"\"Take in a vector of input values and return the design matrix associated \n", " with the basis functions.\"\"\"\n", " return np.hstack([np.ones((x.shape[0], 1)), x, x**2])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Functions Derived from Quadratic Basis\n", "\n", "$$\n", "\\mappingFunction(\\inputScalar) = {\\color{cyan}\\mappingScalar_0} + {\\color{green}\\mappingScalar_1 \\inputScalar} + {\\color{yellow}\\mappingScalar_2 \\inputScalar^2}\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "loc =[[0, 1.4,],\n", " [0, -0.7],\n", " [0.75, -0.2]]\n", "text =['$\\phi(x) = 1$',\n", " '$\\phi(x) = x$',\n", " '$\\phi(x) = x^2$']\n", "\n", "plot.basis(quadratic, x_min=-1.3, x_max=1.3, \n", " fig=f, ax=ax, loc=loc, text=text,\n", " diagrams='../slides/diagrams/ml')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\\\startslides{quadratic\\_basis}{0}{2}\n", "\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('quadratic_basis{num_basis:0>3}.svg', \n", " directory='../slides/diagrams/ml', \n", " num_basis=IntSlider(0,0,2,1))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# first let's generate some inputs\n", "n = 100\n", "x = np.zeros((n, 1)) # create a data set of zeros\n", "x[:, 0] = np.linspace(-1, 1, n) # fill it with values between -1 and 1\n", "\n", "Phi = quadratic(x)\n", "\n", "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "ax.set_ylim([-1.2, 1.2]) # set y limits to ensure basis functions show.\n", "ax.plot(x[:,0], Phi[:, 0], 'r-', label = '$\\phi=1$', linewidth=3)\n", "ax.plot(x[:,0], Phi[:, 1], 'g-', label = '$\\phi=x$', linewidth=3)\n", "ax.plot(x[:,0], Phi[:, 2], 'b-', label = '$\\phi=x^2$', linewidth=3)\n", "ax.legend(loc='lower right')\n", "_ = ax.set_title('Quadratic Basis Functions')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Quadratic Functions\n", "\n", "\\\\startslides{quadratic\\_function}{0}{2}\n", "\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": 
[], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('quadratic_function{num_function:0>3}.svg', \n", " directory='../slides/diagrams/ml', \n", " num_basis=IntSlider(0,0,2,1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Polynomial Fits to Olympic Data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from matplotlib import pyplot as plt\n", "import teaching_plots as plot\n", "import mlai\n", "import pods" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "basis = mlai.polynomial\n", "\n", "data = pods.datasets.olympic_marathon_men()\n", "\n", "x = data['X']\n", "y = data['Y']\n", "\n", "xlim = [1892, 2020]\n", "\n", "\n", "basis=mlai.basis(mlai.polynomial, number=1, data_limits=xlim)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.rmse_fit(x, y, param_name='number', param_range=(1, 27), \n", " model=mlai.LM, \n", " basis=basis,\n", " xlim=xlim, objective_ylim=[0, 0.8],\n", " diagrams='../slides/diagrams/ml')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('olympic_LM_polynomial_num_basis{num_basis:0>3}.svg',\n", " directory='../slides/diagrams/ml', \n", " num_basis=IntSlider(1,1,27,1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\\\startslides{olympic\\_LM\\_polynomial\\_num\\_basis}{1}{26}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "## Underdetermined System" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.under_determined_system(diagrams='../slides/diagrams/ml')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Underdetermined System\n", "\n", "- What about two unknowns and *one* observation?\n", " $$\\dataScalar_1 = m\\inputScalar_1 + c$$\n", "\n", ". . .\n", "\n", "Can compute $m$ given $c$.\n", "$$m = \\frac{\\dataScalar_1 - c}{\\inputScalar}$$\n", "\n", "### Underdetermined System" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('under_determined_system{samp:0>3}.svg', \n", " directory='../slides/diagrams/ml', samp=IntSlider(0, 0, 10, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\\\startslides{under\\_determined\\_system}{1}{3}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "### Alan Turing\n", "\n", "\n", "\n", "\n", "\n", "\n", "
\n", "\n", "\n", "\n", "
\n", "
\n", "*Alan Turing, in 1946 he was only 11 minutes slower than the winner of\n", "the 1948 games. Would he have won a hypothetical games held in 1946?\n", "Source: [Alan Turing Internet\n", "Scrapbook](http://www.turing.org.uk/scrapbook/run.html).*\n", "
\n", "### Probability Winning Olympics?\n", "\n", "- He was a formidable Marathon runner.\n", "- In 1946 he ran a time 2 hours 46 minutes.\n", " - That's a pace of 3.95 min/km.\n", "- What is the probability he would have won an Olympics if one had\n", " been held in 1946?\n", "\n", "### Prior Distribution\n", "\n", "- Bayesian inference requires a prior on the parameters.\n", "\n", "- The prior represents your belief *before* you see the data of the\n", " likely value of the parameters.\n", "\n", "- For linear regression, consider a Gaussian prior on the intercept:\n", " $$c \\sim \\gaussianSamp{0}{\\alpha_1}$$\n", "\n", "### Posterior Distribution\n", "\n", "- Posterior distribution is found by combining the prior with the\n", " likelihood.\n", "- Posterior distribution is your belief *after* you see the data of\n", " the likely value of the parameters.\n", "- The posterior is found through **Bayes’ Rule** $$\n", " p(c|\\dataScalar) = \\frac{p(\\dataScalar|c)p(c)}{p(\\dataScalar)}\n", " $$\n", "\n", "### Bayes Update" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.bayes_update(diagrams='../slides/diagrams/ml')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from ipywidgets import IntSlider\n", "import pods" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('dem_gaussian{stage:0>2}.svg', \n", " diagrams='../slides/diagrams/ml', \n", " stage=IntSlider(1, 1, 3, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\\\startslides{dem\\_gaussian}{1}{3}\n", "\n", "\n", "\n", "\n", "### Stages to Derivation of the Posterior\n", "\n", "- Multiply likelihood by prior\n", "- they are “exponentiated quadratics”, the answer is always also an\n", " exponentiated quadratic because\n", " $\\exp(a^2)\\exp(b^2) = \\exp(a^2 + b^2)$.\n", "- Complete the square to get the resulting density in the form of a\n", " Gaussian.\n", "- Recognise the mean and (co)variance of the Gaussian. 
"### Main Trick\n", "\n", "$$p(c) = \\frac{1}{\\sqrt{2\\pi\\alpha_1}} \\exp\\left(-\\frac{1}{2\\alpha_1}c^2\\right)$$\n", "$$p(\\dataVector|\\inputVector, c, m, \\dataStd^2) = \\frac{1}{\\left(2\\pi\\dataStd^2\\right)^{\\frac{\\numData}{2}}} \\exp\\left(-\\frac{1}{2\\dataStd^2}\\sum_{i=1}^\\numData(\\dataScalar_i - m\\inputScalar_i - c)^2\\right)$$\n", "\n", "### \n", "\n", "$$p(c| \\dataVector, \\inputVector, m, \\dataStd^2) = \\frac{p(\\dataVector|\\inputVector, c, m, \\dataStd^2)p(c)}{p(\\dataVector|\\inputVector, m, \\dataStd^2)}$$\n", "\n", "$$p(c| \\dataVector, \\inputVector, m, \\dataStd^2) = \\frac{p(\\dataVector|\\inputVector, c, m, \\dataStd^2)p(c)}{\\int p(\\dataVector|\\inputVector, c, m, \\dataStd^2)p(c) \\text{d} c}$$\n", "\n", "### \n", "\n", "$$p(c| \\dataVector, \\inputVector, m, \\dataStd^2) \\propto p(\\dataVector|\\inputVector, c, m, \\dataStd^2)p(c)$$\n", "\n", "$$\\begin{aligned}\n", " \\log p(c | \\dataVector, \\inputVector, m, \\dataStd^2) =&-\\frac{1}{2\\dataStd^2} \\sum_{i=1}^\\numData(\\dataScalar_i-c - m\\inputScalar_i)^2-\\frac{1}{2\\alpha_1} c^2 + \\text{const}\\\\\n", " = &-\\frac{1}{2\\dataStd^2}\\sum_{i=1}^\\numData(\\dataScalar_i-m\\inputScalar_i)^2 -\\left(\\frac{\\numData}{2\\dataStd^2} + \\frac{1}{2\\alpha_1}\\right)c^2\\\\\n", " & + c\\frac{\\sum_{i=1}^\\numData(\\dataScalar_i-m\\inputScalar_i)}{\\dataStd^2},\n", " \\end{aligned}$$\n", "\n", "### \n", "\n", "Complete the square of the quadratic form to obtain\n", "$$\\log p(c | \\dataVector, \\inputVector, m, \\dataStd^2) = -\\frac{1}{2\\tau^2}(c - \\mu)^2 +\\text{const},$$\n", "where $\\tau^2 = \\left(\\numData\\dataStd^{-2} +\\alpha_1^{-1}\\right)^{-1}$\n", "and\n", "$\\mu = \\frac{\\tau^2}{\\dataStd^2} \\sum_{i=1}^\\numData(\\dataScalar_i-m\\inputScalar_i)$.\n", "\n", "### Two Dimensional Gaussian\n", "\n", "- Consider height, $h/m$, and weight, $w/kg$.\n", "- Could sample height from a distribution: $$\n", " h \\sim \\gaussianSamp{1.7}{0.0225}\n", " $$\n", "- And similarly weight: $$\n", " w \\sim \\gaussianSamp{75}{36}\n", " $$\n", "\n", "### Height and Weight Models" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.height_weight(diagrams='../slides/diagrams/ml')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "*Gaussian distributions for height and weight.*\n", "\n", "### Sampling Two Dimensional Variables" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.independent_height_weight(num_samps=8, \n", " diagrams='../slides/diagrams/ml')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('independent_height_weight{fig:0>3}.png', '../slides/diagrams/ml', fig=IntSlider(0, 0, 8, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\\\startslides{independent\\_height\\_weight}{0}{7}\n", "\n", 
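"The independent samples shown above can also be drawn directly with numpy: height and weight each come from their own Gaussian, using the means and variances quoted on the Two Dimensional Gaussian slide, so the joint samples are produced without any interaction between the two variables. This is a minimal sketch rather than the `teaching_plots` utility.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# independent draws: h ~ N(1.7, 0.0225), w ~ N(75, 36)\n", "np.random.seed(1)\n", "num_samples = 8\n", "h_samples = np.random.normal(loc=1.7, scale=np.sqrt(0.0225), size=num_samples)\n", "w_samples = np.random.normal(loc=75.0, scale=np.sqrt(36.0), size=num_samples)\n", "\n", "# because the draws are independent, pairing them gives samples from p(h)p(w)\n", "for h_i, w_i in zip(h_samples, w_samples):\n", "    print('height {:.2f} m, weight {:.1f} kg'.format(h_i, w_i))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ 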
"### Independence Assumption\n", "\n", "- This assumes height and weight are independent.\n", " $$p(h, w) = p(h)p(w)$$\n", "\n", "- In reality they are dependent: for example, body mass index is $\\frac{w}{h^2}$.\n", "\n", "### Sampling Two Dimensional Variables" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.correlated_height_weight(num_samps=8, \n", " diagrams='../slides/diagrams/ml')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('correlated_height_weight{fig:0>3}.png', '../slides/diagrams/ml', fig=IntSlider(0, 0, 8, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\\\startslides{correlated\\_height\\_weight}{0}{7}\n", "\n", "### Independent Gaussians\n", "\n", "$$\n", "p(w, h) = p(w)p(h)\n", "$$\n", "\n", "### Independent Gaussians\n", "\n", "$$\n", "p(w, h) = \\frac{1}{\\sqrt{2\\pi \\dataStd_1^2}\\sqrt{2\\pi\\dataStd_2^2}} \\exp\\left(-\\frac{1}{2}\\left(\\frac{(w-\\meanScalar_1)^2}{\\dataStd_1^2} + \\frac{(h-\\meanScalar_2)^2}{\\dataStd_2^2}\\right)\\right)\n", "$$\n", "\n", "### Independent Gaussians\n", "\n", "$$\n", "p(w, h) = \\frac{1}{\\sqrt{2\\pi\\dataStd_1^2 \\cdot 2\\pi\\dataStd_2^2}} \\exp\\left(-\\frac{1}{2}\\left(\\begin{bmatrix}w \\\\ h\\end{bmatrix} - \\begin{bmatrix}\\meanScalar_1 \\\\ \\meanScalar_2\\end{bmatrix}\\right)^\\top\\begin{bmatrix}\\dataStd_1^2& 0\\\\0&\\dataStd_2^2\\end{bmatrix}^{-1}\\left(\\begin{bmatrix}w \\\\ h\\end{bmatrix} - \\begin{bmatrix}\\meanScalar_1 \\\\ \\meanScalar_2\\end{bmatrix}\\right)\\right)\n", "$$\n", "\n", "### Independent Gaussians\n", "\n", "$$\n", "p(\\dataVector) = \\frac{1}{\\det{2\\pi \\mathbf{D}}^{\\frac{1}{2}}} \\exp\\left(-\\frac{1}{2}(\\dataVector - \\meanVector)^\\top\\mathbf{D}^{-1}(\\dataVector - \\meanVector)\\right)\n", "$$\n", "\n", "### Correlated Gaussian\n", "\n", "Form a correlated density from the original by rotating the data space using the matrix\n", "$\\rotationMatrix$.\n", "\n", "$$\n", "p(\\dataVector) = \\frac{1}{\\det{2\\pi\\mathbf{D}}^{\\frac{1}{2}}} \\exp\\left(-\\frac{1}{2}(\\dataVector - \\meanVector)^\\top\\mathbf{D}^{-1}(\\dataVector - \\meanVector)\\right)\n", "$$\n", "\n", "### Correlated Gaussian\n", "\n", "Form a correlated density from the original by rotating the data space using the matrix\n", "$\\rotationMatrix$.\n", "\n", "$$\n", "p(\\dataVector) = \\frac{1}{\\det{2\\pi\\mathbf{D}}^{\\frac{1}{2}}} \\exp\\left(-\\frac{1}{2}(\\rotationMatrix^\\top\\dataVector - \\rotationMatrix^\\top\\meanVector)^\\top\\mathbf{D}^{-1}(\\rotationMatrix^\\top\\dataVector - \\rotationMatrix^\\top\\meanVector)\\right)\n", "$$\n", "\n", "### Correlated Gaussian\n", "\n", "Form a correlated density from the original by rotating the data space using the matrix\n", "$\\rotationMatrix$.\n", "\n", "$$\n", "p(\\dataVector) = \\frac{1}{\\det{2\\pi\\mathbf{D}}^{\\frac{1}{2}}} \\exp\\left(-\\frac{1}{2}(\\dataVector - \\meanVector)^\\top\\rotationMatrix\\mathbf{D}^{-1}\\rotationMatrix^\\top(\\dataVector - \\meanVector)\\right)\n", "$$ this gives a covariance matrix: $$\n", "\\covarianceMatrix^{-1} = \\rotationMatrix \\mathbf{D}^{-1} \\rotationMatrix^\\top\n", "$$\n", "\n", "### Correlated Gaussian\n", "\n", "Form a correlated density from the original by rotating the data space using the matrix\n", "$\\rotationMatrix$.\n", "\n", "$$\n", "p(\\dataVector) = \\frac{1}{\\det{2\\pi\\covarianceMatrix}^{\\frac{1}{2}}} \\exp\\left(-\\frac{1}{2}(\\dataVector - \\meanVector)^\\top\\covarianceMatrix^{-1} (\\dataVector - \\meanVector)\\right)\n", "$$ this gives a covariance matrix: $$\n", "\\covarianceMatrix = \\rotationMatrix \\mathbf{D} \\rotationMatrix^\\top\n", "$$\n", "\n", 
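"A correlated covariance can be constructed exactly this way: take a diagonal covariance $\\mathbf{D}$, rotate it with an orthogonal matrix $\\rotationMatrix$, and use $\\covarianceMatrix = \\rotationMatrix \\mathbf{D} \\rotationMatrix^\\top$. The sketch below does this for the two dimensional case with an illustrative rotation angle and then draws samples from the resulting correlated Gaussian.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# diagonal covariance and an illustrative rotation angle\n", "D = np.diag([0.0225, 36.0])\n", "theta = 0.3\n", "R = np.array([[np.cos(theta), -np.sin(theta)],\n", "              [np.sin(theta), np.cos(theta)]])\n", "\n", "# rotate the diagonal covariance to induce correlation: C = R D R^T\n", "C = R @ D @ R.T\n", "mean = np.array([1.7, 75.0])\n", "\n", "np.random.seed(2)\n", "samples = np.random.multivariate_normal(mean, C, size=8)\n", "print('covariance matrix:')\n", "print(C)\n", "print('samples:')\n", "print(samples)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ 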
"p(\\dataVector) = \\frac{1}{\\det{2\\pi\\covarianceMatrix}^{\\frac{1}{2}}} \\exp\\left(-\\frac{1}{2}(\\dataVector - \\meanVector)^\\top\\covarianceMatrix^{-1} (\\dataVector - \\meanVector)\\right)\n", "$$ this gives a covariance matrix: $$\n", "\\covarianceMatrix = \\rotationMatrix \\mathbf{D} \\rotationMatrix^\\top\n", "$$\n", "\n", "### Sampling the Prior\n", "\n", "- Always useful to perform a ‘sanity check’ and sample from the prior\n", " before observing the data.\n", "- Since $\\dataVector = \\basisMatrix \\mappingVector + \\noiseVector$\n", " just need to sample $$\n", " \\mappingVector \\sim \\gaussianSamp{0}{\\alpha\\eye}\n", " $$ $$\n", " \\noiseVector \\sim \\gaussianSamp{\\zerosVector}{\\dataStd^2}\n", " $$ with $\\alpha=1$ and $\\dataStd^2 = 0.01$.\n", "\n", "### Computing the Posterior\n", "\n", "$$\n", "p(\\mappingVector | \\dataVector, \\inputVector, \\dataStd^2) = \\gaussianDist{\\mappingVector}{\\meanVector_\\mappingScalar}{\\covarianceMatrix_\\mappingScalar}\n", "$$ with $$\n", "\\covarianceMatrix_\\mappingScalar = \\left(\\dataStd^{-2}\\basisMatrix^\\top \\basisMatrix + \\alpha^{-1}\\eye\\right)^{-1}\n", "$$ and $$\n", "\\meanVector_\\mappingScalar = \\covarianceMatrix_\\mappingScalar \\dataStd^{-2}\\basisMatrix^\\top \\dataVector\n", "$$\n", "\n", "### Olympic Data with Bayesian Polynomials" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import mlai\n", "import pods" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pods.datasets.olympic_marathon_men()\n", "x = data['X']\n", "y = data['Y']\n", "num_data = x.shape[0]\n", "\n", "data_limits = [1892, 2020]\n", "basis = mlai.basis(mlai.polynomial, number=1, data_limits=data_limits)\n", "\n", "max_basis = y.shape[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import teaching_plots as plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.rmse_fit(x, y, param_name='number', param_range=(1, max_basis+1),\n", " model=mlai.BLM, \n", " basis=basis, \n", " alpha=1, \n", " sigma2=0.04, \n", " data_limits=data_limits,\n", " xlim=data_limits, \n", " objective_ylim=[0.5,1.6]\n", " diagrams='../slides/diagrams/ml')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('olympic_BLM_polynomial_number{num_basis:0>3}.svg', \n", " directory='../slides/diagrams/ml/', \n", " num_basis=IntSlider(1, 1, 27, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\\\startslides{olympic\\_BLM\\_polynomial\\_number}{1}{26}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "### Hold Out Validation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot.holdout_fit(x, y, param_name='number', param_range=(1, 27),\n", " diagrams='../slides/diagrams/ml',\n", " model=mlai.BLM, \n", " basis=basis, \n", " alpha=1, \n", " sigma2=0.04,\n", " xlim=data_limits, \n", " objective_ylim=[0.1,0.6], \n", " permute=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] 
}, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('olympic_val_BLM_polynomial_number{num_basis:0>3}.svg', \n", " directory='../slides/diagrams/ml', \n", " num_basis=IntSlider(1, 1, 27, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\\\startslides{olympic\\_val\\_BLM\\_polynomial\\_number}{1}{26}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "### 5-fold Cross Validation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "num_parts=5\n", "plot.cv_fit(x, y, param_name='number', param_range=(1, 27), \n", " diagrams='../slides/diagrams/ml',\n", " model=mlai.BLM, \n", " basis=basis, \n", " alpha=1, \n", " sigma2=0.04, \n", " xlim=data_limits, \n", " objective_ylim=[0.2,0.6], \n", " num_parts=num_parts)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('olympic_5cv{part:0>2}_BLM_polynomial_number{num_basis:0>3}.svg', \n", " directory='../slides/diagrams/ml', \n", " part=(0, 5), \n", " num_basis=IntSlider(1, 1, 27, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\\\startslides{olympic\\_5cv05\\_BLM\\_polynomial\\_number}{1}{26}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "### Marginal Likelihood\n", "\n", "- The marginal likelihood can also be computed, it has the form: $$\n", " p(\\dataVector|\\inputMatrix, \\dataStd^2, \\alpha) = \\frac{1}{(2\\pi)^\\frac{n}{2}\\left|\\kernelMatrix\\right|^\\frac{1}{2}} \\exp\\left(-\\frac{1}{2} \\dataVector^\\top \\kernelMatrix^{-1} \\dataVector\\right)\n", " $$ where\n", " $\\kernelMatrix = \\alpha \\basisMatrix\\basisMatrix^\\top + \\dataStd^2 \\eye$.\n", "\n", "- So it is a zero mean $\\numData$-dimensional Gaussian with covariance\n", " matrix $\\kernelMatrix$.\n", "\n", "### References {#references .unnumbered}" ] } ], "metadata": {}, "nbformat": 4, "nbformat_minor": 2 }