{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Generalization: Model Validation\n", "### [Neil D. Lawrence](http://inverseprobability.com), University of Sheffield\n", "### 2015-10-27\n", "\n", "**Abstract**: Generalization is the main objective of a machine learning algorithm.\n", "The models we design should work on data they have not seen before.\n", "Confirming whether a model generalizes well or not is the domain of\n", "*model validation*. In this lecture we introduce approaches to model\n", "validation such as hold out validation and cross validation.\n", "\n", "$$\n", "\\newcommand{\\tk}[1]{}\n", "%\\newcommand{\\tk}[1]{\\textbf{TK}: #1}\n", "\\newcommand{\\Amatrix}{\\mathbf{A}}\n", "\\newcommand{\\KL}[2]{\\text{KL}\\left( #1\\,\\|\\,#2 \\right)}\n", "\\newcommand{\\Kaast}{\\kernelMatrix_{\\mathbf{ \\ast}\\mathbf{ \\ast}}}\n", "\\newcommand{\\Kastu}{\\kernelMatrix_{\\mathbf{ \\ast} \\inducingVector}}\n", "\\newcommand{\\Kff}{\\kernelMatrix_{\\mappingFunctionVector \\mappingFunctionVector}}\n", "\\newcommand{\\Kfu}{\\kernelMatrix_{\\mappingFunctionVector \\inducingVector}}\n", "\\newcommand{\\Kuast}{\\kernelMatrix_{\\inducingVector \\bf\\ast}}\n", "\\newcommand{\\Kuf}{\\kernelMatrix_{\\inducingVector \\mappingFunctionVector}}\n", "\\newcommand{\\Kuu}{\\kernelMatrix_{\\inducingVector \\inducingVector}}\n", "\\newcommand{\\Kuui}{\\Kuu^{-1}}\n", "\\newcommand{\\Qaast}{\\mathbf{Q}_{\\bf \\ast \\ast}}\n", "\\newcommand{\\Qastf}{\\mathbf{Q}_{\\ast \\mappingFunction}}\n", "\\newcommand{\\Qfast}{\\mathbf{Q}_{\\mappingFunctionVector \\bf \\ast}}\n", "\\newcommand{\\Qff}{\\mathbf{Q}_{\\mappingFunctionVector \\mappingFunctionVector}}\n", "\\newcommand{\\aMatrix}{\\mathbf{A}}\n", "\\newcommand{\\aScalar}{a}\n", "\\newcommand{\\aVector}{\\mathbf{a}}\n", "\\newcommand{\\acceleration}{a}\n", "\\newcommand{\\bMatrix}{\\mathbf{B}}\n", "\\newcommand{\\bScalar}{b}\n", "\\newcommand{\\bVector}{\\mathbf{b}}\n", "\\newcommand{\\basisFunc}{\\phi}\n", "\\newcommand{\\basisFuncVector}{\\boldsymbol{ \\basisFunc}}\n", "\\newcommand{\\basisFunction}{\\phi}\n", "\\newcommand{\\basisLocation}{\\mu}\n", "\\newcommand{\\basisMatrix}{\\boldsymbol{ \\Phi}}\n", "\\newcommand{\\basisScalar}{\\basisFunction}\n", "\\newcommand{\\basisVector}{\\boldsymbol{ \\basisFunction}}\n", "\\newcommand{\\activationFunction}{\\phi}\n", "\\newcommand{\\activationMatrix}{\\boldsymbol{ \\Phi}}\n", "\\newcommand{\\activationScalar}{\\basisFunction}\n", "\\newcommand{\\activationVector}{\\boldsymbol{ \\basisFunction}}\n", "\\newcommand{\\bigO}{\\mathcal{O}}\n", "\\newcommand{\\binomProb}{\\pi}\n", "\\newcommand{\\cMatrix}{\\mathbf{C}}\n", "\\newcommand{\\cbasisMatrix}{\\hat{\\boldsymbol{ \\Phi}}}\n", "\\newcommand{\\cdataMatrix}{\\hat{\\dataMatrix}}\n", "\\newcommand{\\cdataScalar}{\\hat{\\dataScalar}}\n", "\\newcommand{\\cdataVector}{\\hat{\\dataVector}}\n", "\\newcommand{\\centeredKernelMatrix}{\\mathbf{ \\MakeUppercase{\\centeredKernelScalar}}}\n", "\\newcommand{\\centeredKernelScalar}{b}\n", "\\newcommand{\\centeredKernelVector}{\\centeredKernelScalar}\n", "\\newcommand{\\centeringMatrix}{\\mathbf{H}}\n", "\\newcommand{\\chiSquaredDist}[2]{\\chi_{#1}^{2}\\left(#2\\right)}\n", "\\newcommand{\\chiSquaredSamp}[1]{\\chi_{#1}^{2}}\n", "\\newcommand{\\conditionalCovariance}{\\boldsymbol{ \\Sigma}}\n", "\\newcommand{\\coregionalizationMatrix}{\\mathbf{B}}\n", "\\newcommand{\\coregionalizationScalar}{b}\n", "\\newcommand{\\coregionalizationVector}{\\mathbf{ \\coregionalizationScalar}}\n", 
"\\newcommand{\\covDist}[2]{\\text{cov}_{#2}\\left(#1\\right)}\n", "\\newcommand{\\covSamp}[1]{\\text{cov}\\left(#1\\right)}\n", "\\newcommand{\\covarianceScalar}{c}\n", "\\newcommand{\\covarianceVector}{\\mathbf{ \\covarianceScalar}}\n", "\\newcommand{\\covarianceMatrix}{\\mathbf{C}}\n", "\\newcommand{\\covarianceMatrixTwo}{\\boldsymbol{ \\Sigma}}\n", "\\newcommand{\\croupierScalar}{s}\n", "\\newcommand{\\croupierVector}{\\mathbf{ \\croupierScalar}}\n", "\\newcommand{\\croupierMatrix}{\\mathbf{ \\MakeUppercase{\\croupierScalar}}}\n", "\\newcommand{\\dataDim}{p}\n", "\\newcommand{\\dataIndex}{i}\n", "\\newcommand{\\dataIndexTwo}{j}\n", "\\newcommand{\\dataMatrix}{\\mathbf{Y}}\n", "\\newcommand{\\dataScalar}{y}\n", "\\newcommand{\\dataSet}{\\mathcal{D}}\n", "\\newcommand{\\dataStd}{\\sigma}\n", "\\newcommand{\\dataVector}{\\mathbf{ \\dataScalar}}\n", "\\newcommand{\\decayRate}{d}\n", "\\newcommand{\\degreeMatrix}{\\mathbf{ \\MakeUppercase{\\degreeScalar}}}\n", "\\newcommand{\\degreeScalar}{d}\n", "\\newcommand{\\degreeVector}{\\mathbf{ \\degreeScalar}}\n", "% Already defined by latex\n", "%\\newcommand{\\det}[1]{\\left|#1\\right|}\n", "\\newcommand{\\diag}[1]{\\text{diag}\\left(#1\\right)}\n", "\\newcommand{\\diagonalMatrix}{\\mathbf{D}}\n", "\\newcommand{\\diff}[2]{\\frac{\\text{d}#1}{\\text{d}#2}}\n", "\\newcommand{\\diffTwo}[2]{\\frac{\\text{d}^2#1}{\\text{d}#2^2}}\n", "\\newcommand{\\displacement}{x}\n", "\\newcommand{\\displacementVector}{\\textbf{\\displacement}}\n", "\\newcommand{\\distanceMatrix}{\\mathbf{ \\MakeUppercase{\\distanceScalar}}}\n", "\\newcommand{\\distanceScalar}{d}\n", "\\newcommand{\\distanceVector}{\\mathbf{ \\distanceScalar}}\n", "\\newcommand{\\eigenvaltwo}{\\ell}\n", "\\newcommand{\\eigenvaltwoMatrix}{\\mathbf{L}}\n", "\\newcommand{\\eigenvaltwoVector}{\\mathbf{l}}\n", "\\newcommand{\\eigenvalue}{\\lambda}\n", "\\newcommand{\\eigenvalueMatrix}{\\boldsymbol{ \\Lambda}}\n", "\\newcommand{\\eigenvalueVector}{\\boldsymbol{ \\lambda}}\n", "\\newcommand{\\eigenvector}{\\mathbf{ \\eigenvectorScalar}}\n", "\\newcommand{\\eigenvectorMatrix}{\\mathbf{U}}\n", "\\newcommand{\\eigenvectorScalar}{u}\n", "\\newcommand{\\eigenvectwo}{\\mathbf{v}}\n", "\\newcommand{\\eigenvectwoMatrix}{\\mathbf{V}}\n", "\\newcommand{\\eigenvectwoScalar}{v}\n", "\\newcommand{\\entropy}[1]{\\mathcal{H}\\left(#1\\right)}\n", "\\newcommand{\\errorFunction}{E}\n", "\\newcommand{\\expDist}[2]{\\left<#1\\right>_{#2}}\n", "\\newcommand{\\expSamp}[1]{\\left<#1\\right>}\n", "\\newcommand{\\expectation}[1]{\\left\\langle #1 \\right\\rangle }\n", "\\newcommand{\\expectationDist}[2]{\\left\\langle #1 \\right\\rangle _{#2}}\n", "\\newcommand{\\expectedDistanceMatrix}{\\mathcal{D}}\n", "\\newcommand{\\eye}{\\mathbf{I}}\n", "\\newcommand{\\fantasyDim}{r}\n", "\\newcommand{\\fantasyMatrix}{\\mathbf{ \\MakeUppercase{\\fantasyScalar}}}\n", "\\newcommand{\\fantasyScalar}{z}\n", "\\newcommand{\\fantasyVector}{\\mathbf{ \\fantasyScalar}}\n", "\\newcommand{\\featureStd}{\\varsigma}\n", "\\newcommand{\\gammaCdf}[3]{\\mathcal{GAMMA CDF}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gammaDist}[3]{\\mathcal{G}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gammaSamp}[2]{\\mathcal{G}\\left(#1,#2\\right)}\n", "\\newcommand{\\gaussianDist}[3]{\\mathcal{N}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gaussianSamp}[2]{\\mathcal{N}\\left(#1,#2\\right)}\n", "\\newcommand{\\given}{|}\n", "\\newcommand{\\half}{\\frac{1}{2}}\n", "\\newcommand{\\heaviside}{H}\n", "\\newcommand{\\hiddenMatrix}{\\mathbf{ \\MakeUppercase{\\hiddenScalar}}}\n", 
"\\newcommand{\\hiddenScalar}{h}\n", "\\newcommand{\\hiddenVector}{\\mathbf{ \\hiddenScalar}}\n", "\\newcommand{\\identityMatrix}{\\eye}\n", "\\newcommand{\\inducingInputScalar}{z}\n", "\\newcommand{\\inducingInputVector}{\\mathbf{ \\inducingInputScalar}}\n", "\\newcommand{\\inducingInputMatrix}{\\mathbf{Z}}\n", "\\newcommand{\\inducingScalar}{u}\n", "\\newcommand{\\inducingVector}{\\mathbf{ \\inducingScalar}}\n", "\\newcommand{\\inducingMatrix}{\\mathbf{U}}\n", "\\newcommand{\\inlineDiff}[2]{\\text{d}#1/\\text{d}#2}\n", "\\newcommand{\\inputDim}{q}\n", "\\newcommand{\\inputMatrix}{\\mathbf{X}}\n", "\\newcommand{\\inputScalar}{x}\n", "\\newcommand{\\inputSpace}{\\mathcal{X}}\n", "\\newcommand{\\inputVals}{\\inputVector}\n", "\\newcommand{\\inputVector}{\\mathbf{ \\inputScalar}}\n", "\\newcommand{\\iterNum}{k}\n", "\\newcommand{\\kernel}{\\kernelScalar}\n", "\\newcommand{\\kernelMatrix}{\\mathbf{K}}\n", "\\newcommand{\\kernelScalar}{k}\n", "\\newcommand{\\kernelVector}{\\mathbf{ \\kernelScalar}}\n", "\\newcommand{\\kff}{\\kernelScalar_{\\mappingFunction \\mappingFunction}}\n", "\\newcommand{\\kfu}{\\kernelVector_{\\mappingFunction \\inducingScalar}}\n", "\\newcommand{\\kuf}{\\kernelVector_{\\inducingScalar \\mappingFunction}}\n", "\\newcommand{\\kuu}{\\kernelVector_{\\inducingScalar \\inducingScalar}}\n", "\\newcommand{\\lagrangeMultiplier}{\\lambda}\n", "\\newcommand{\\lagrangeMultiplierMatrix}{\\boldsymbol{ \\Lambda}}\n", "\\newcommand{\\lagrangian}{L}\n", "\\newcommand{\\laplacianFactor}{\\mathbf{ \\MakeUppercase{\\laplacianFactorScalar}}}\n", "\\newcommand{\\laplacianFactorScalar}{m}\n", "\\newcommand{\\laplacianFactorVector}{\\mathbf{ \\laplacianFactorScalar}}\n", "\\newcommand{\\laplacianMatrix}{\\mathbf{L}}\n", "\\newcommand{\\laplacianScalar}{\\ell}\n", "\\newcommand{\\laplacianVector}{\\mathbf{ \\ell}}\n", "\\newcommand{\\latentDim}{q}\n", "\\newcommand{\\latentDistanceMatrix}{\\boldsymbol{ \\Delta}}\n", "\\newcommand{\\latentDistanceScalar}{\\delta}\n", "\\newcommand{\\latentDistanceVector}{\\boldsymbol{ \\delta}}\n", "\\newcommand{\\latentForce}{f}\n", "\\newcommand{\\latentFunction}{u}\n", "\\newcommand{\\latentFunctionVector}{\\mathbf{ \\latentFunction}}\n", "\\newcommand{\\latentFunctionMatrix}{\\mathbf{ \\MakeUppercase{\\latentFunction}}}\n", "\\newcommand{\\latentIndex}{j}\n", "\\newcommand{\\latentScalar}{z}\n", "\\newcommand{\\latentVector}{\\mathbf{ \\latentScalar}}\n", "\\newcommand{\\latentMatrix}{\\mathbf{Z}}\n", "\\newcommand{\\learnRate}{\\eta}\n", "\\newcommand{\\lengthScale}{\\ell}\n", "\\newcommand{\\rbfWidth}{\\ell}\n", "\\newcommand{\\likelihoodBound}{\\mathcal{L}}\n", "\\newcommand{\\likelihoodFunction}{L}\n", "\\newcommand{\\locationScalar}{\\mu}\n", "\\newcommand{\\locationVector}{\\boldsymbol{ \\locationScalar}}\n", "\\newcommand{\\locationMatrix}{\\mathbf{M}}\n", "\\newcommand{\\variance}[1]{\\text{var}\\left( #1 \\right)}\n", "\\newcommand{\\mappingFunction}{f}\n", "\\newcommand{\\mappingFunctionMatrix}{\\mathbf{F}}\n", "\\newcommand{\\mappingFunctionTwo}{g}\n", "\\newcommand{\\mappingFunctionTwoMatrix}{\\mathbf{G}}\n", "\\newcommand{\\mappingFunctionTwoVector}{\\mathbf{ \\mappingFunctionTwo}}\n", "\\newcommand{\\mappingFunctionVector}{\\mathbf{ \\mappingFunction}}\n", "\\newcommand{\\scaleScalar}{s}\n", "\\newcommand{\\mappingScalar}{w}\n", "\\newcommand{\\mappingVector}{\\mathbf{ \\mappingScalar}}\n", "\\newcommand{\\mappingMatrix}{\\mathbf{W}}\n", "\\newcommand{\\mappingScalarTwo}{v}\n", "\\newcommand{\\mappingVectorTwo}{\\mathbf{ 
\\mappingScalarTwo}}\n", "\\newcommand{\\mappingMatrixTwo}{\\mathbf{V}}\n", "\\newcommand{\\maxIters}{K}\n", "\\newcommand{\\meanMatrix}{\\mathbf{M}}\n", "\\newcommand{\\meanScalar}{\\mu}\n", "\\newcommand{\\meanTwoMatrix}{\\mathbf{M}}\n", "\\newcommand{\\meanTwoScalar}{m}\n", "\\newcommand{\\meanTwoVector}{\\mathbf{ \\meanTwoScalar}}\n", "\\newcommand{\\meanVector}{\\boldsymbol{ \\meanScalar}}\n", "\\newcommand{\\mrnaConcentration}{m}\n", "\\newcommand{\\naturalFrequency}{\\omega}\n", "\\newcommand{\\neighborhood}[1]{\\mathcal{N}\\left( #1 \\right)}\n", "\\newcommand{\\neilurl}{http://inverseprobability.com/}\n", "\\newcommand{\\noiseMatrix}{\\boldsymbol{ E}}\n", "\\newcommand{\\noiseScalar}{\\epsilon}\n", "\\newcommand{\\noiseVector}{\\boldsymbol{ \\epsilon}}\n", "\\newcommand{\\norm}[1]{\\left\\Vert #1 \\right\\Vert}\n", "\\newcommand{\\normalizedLaplacianMatrix}{\\hat{\\mathbf{L}}}\n", "\\newcommand{\\normalizedLaplacianScalar}{\\hat{\\ell}}\n", "\\newcommand{\\normalizedLaplacianVector}{\\hat{\\mathbf{ \\ell}}}\n", "\\newcommand{\\numActive}{m}\n", "\\newcommand{\\numBasisFunc}{m}\n", "\\newcommand{\\numComponents}{m}\n", "\\newcommand{\\numComps}{K}\n", "\\newcommand{\\numData}{n}\n", "\\newcommand{\\numFeatures}{K}\n", "\\newcommand{\\numHidden}{h}\n", "\\newcommand{\\numInducing}{m}\n", "\\newcommand{\\numLayers}{\\ell}\n", "\\newcommand{\\numNeighbors}{K}\n", "\\newcommand{\\numSequences}{s}\n", "\\newcommand{\\numSuccess}{s}\n", "\\newcommand{\\numTasks}{m}\n", "\\newcommand{\\numTime}{T}\n", "\\newcommand{\\numTrials}{S}\n", "\\newcommand{\\outputIndex}{j}\n", "\\newcommand{\\paramVector}{\\boldsymbol{ \\theta}}\n", "\\newcommand{\\parameterMatrix}{\\boldsymbol{ \\Theta}}\n", "\\newcommand{\\parameterScalar}{\\theta}\n", "\\newcommand{\\parameterVector}{\\boldsymbol{ \\parameterScalar}}\n", "\\newcommand{\\partDiff}[2]{\\frac{\\partial#1}{\\partial#2}}\n", "\\newcommand{\\precisionScalar}{j}\n", "\\newcommand{\\precisionVector}{\\mathbf{ \\precisionScalar}}\n", "\\newcommand{\\precisionMatrix}{\\mathbf{J}}\n", "\\newcommand{\\pseudotargetScalar}{\\widetilde{y}}\n", "\\newcommand{\\pseudotargetVector}{\\mathbf{ \\pseudotargetScalar}}\n", "\\newcommand{\\pseudotargetMatrix}{\\mathbf{ \\widetilde{Y}}}\n", "\\newcommand{\\rank}[1]{\\text{rank}\\left(#1\\right)}\n", "\\newcommand{\\rayleighDist}[2]{\\mathcal{R}\\left(#1|#2\\right)}\n", "\\newcommand{\\rayleighSamp}[1]{\\mathcal{R}\\left(#1\\right)}\n", "\\newcommand{\\responsibility}{r}\n", "\\newcommand{\\rotationScalar}{r}\n", "\\newcommand{\\rotationVector}{\\mathbf{ \\rotationScalar}}\n", "\\newcommand{\\rotationMatrix}{\\mathbf{R}}\n", "\\newcommand{\\sampleCovScalar}{s}\n", "\\newcommand{\\sampleCovVector}{\\mathbf{ \\sampleCovScalar}}\n", "\\newcommand{\\sampleCovMatrix}{\\mathbf{s}}\n", "\\newcommand{\\scalarProduct}[2]{\\left\\langle{#1},{#2}\\right\\rangle}\n", "\\newcommand{\\sign}[1]{\\text{sign}\\left(#1\\right)}\n", "\\newcommand{\\sigmoid}[1]{\\sigma\\left(#1\\right)}\n", "\\newcommand{\\singularvalue}{\\ell}\n", "\\newcommand{\\singularvalueMatrix}{\\mathbf{L}}\n", "\\newcommand{\\singularvalueVector}{\\mathbf{l}}\n", "\\newcommand{\\sorth}{\\mathbf{u}}\n", "\\newcommand{\\spar}{\\lambda}\n", "\\newcommand{\\trace}[1]{\\text{tr}\\left(#1\\right)}\n", "\\newcommand{\\BasalRate}{B}\n", "\\newcommand{\\DampingCoefficient}{C}\n", "\\newcommand{\\DecayRate}{D}\n", "\\newcommand{\\Displacement}{X}\n", "\\newcommand{\\LatentForce}{F}\n", "\\newcommand{\\Mass}{M}\n", "\\newcommand{\\Sensitivity}{S}\n", 
"\\newcommand{\\basalRate}{b}\n", "\\newcommand{\\dampingCoefficient}{c}\n", "\\newcommand{\\mass}{m}\n", "\\newcommand{\\sensitivity}{s}\n", "\\newcommand{\\springScalar}{\\kappa}\n", "\\newcommand{\\springVector}{\\boldsymbol{ \\kappa}}\n", "\\newcommand{\\springMatrix}{\\boldsymbol{ \\mathcal{K}}}\n", "\\newcommand{\\tfConcentration}{p}\n", "\\newcommand{\\tfDecayRate}{\\delta}\n", "\\newcommand{\\tfMrnaConcentration}{f}\n", "\\newcommand{\\tfVector}{\\mathbf{ \\tfConcentration}}\n", "\\newcommand{\\velocity}{v}\n", "\\newcommand{\\sufficientStatsScalar}{g}\n", "\\newcommand{\\sufficientStatsVector}{\\mathbf{ \\sufficientStatsScalar}}\n", "\\newcommand{\\sufficientStatsMatrix}{\\mathbf{G}}\n", "\\newcommand{\\switchScalar}{s}\n", "\\newcommand{\\switchVector}{\\mathbf{ \\switchScalar}}\n", "\\newcommand{\\switchMatrix}{\\mathbf{S}}\n", "\\newcommand{\\tr}[1]{\\text{tr}\\left(#1\\right)}\n", "\\newcommand{\\loneNorm}[1]{\\left\\Vert #1 \\right\\Vert_1}\n", "\\newcommand{\\ltwoNorm}[1]{\\left\\Vert #1 \\right\\Vert_2}\n", "\\newcommand{\\onenorm}[1]{\\left\\vert#1\\right\\vert_1}\n", "\\newcommand{\\twonorm}[1]{\\left\\Vert #1 \\right\\Vert}\n", "\\newcommand{\\vScalar}{v}\n", "\\newcommand{\\vVector}{\\mathbf{v}}\n", "\\newcommand{\\vMatrix}{\\mathbf{V}}\n", "\\newcommand{\\varianceDist}[2]{\\text{var}_{#2}\\left( #1 \\right)}\n", "% Already defined by latex\n", "%\\newcommand{\\vec}{#1:}\n", "\\newcommand{\\vecb}[1]{\\left(#1\\right):}\n", "\\newcommand{\\weightScalar}{w}\n", "\\newcommand{\\weightVector}{\\mathbf{ \\weightScalar}}\n", "\\newcommand{\\weightMatrix}{\\mathbf{W}}\n", "\\newcommand{\\weightedAdjacencyMatrix}{\\mathbf{A}}\n", "\\newcommand{\\weightedAdjacencyScalar}{a}\n", "\\newcommand{\\weightedAdjacencyVector}{\\mathbf{ \\weightedAdjacencyScalar}}\n", "\\newcommand{\\onesVector}{\\mathbf{1}}\n", "\\newcommand{\\zerosVector}{\\mathbf{0}}\n", "$$\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "## Review\n", "\n", "- Last time: introduced basis functions.\n", "- Showed how to maximize the likelihood of a non-linear model that's\n", " linear in parameters.\n", "- Explored the different characteristics of different basis function\n", " models\n", "\n", "## Alan Turing \\[edit\\]\n", "\n", "\n", "\n", "\n", "\n", "\n", "
\n", "\n", "\n", "\n", "
\n", "Figure: Alan Turing, in 1946 he was only 11 minutes slower than the\n", "winner of the 1948 games. Would he have won a hypothetical games held in\n", "1946? Source: [Alan Turing Internet\n", "Scrapbook](http://www.turing.org.uk/scrapbook/run.html).\n", "\n", "If we had to summarise the objectives of machine learning in one word, a\n", "very good candidate for that word would be *generalization*. What is\n", "generalization? From a human perspective it might be summarised as the\n", "ability to take lessons learned in one domain and apply them to another\n", "domain. If we accept the definition given in the first session for\n", "machine learning, $$\n", "\\text{data} + \\text{model} \\xrightarrow{\\text{compute}} \\text{prediction}\n", "$$ then we see that without a model we can't generalise: we only have\n", "data. Data is fine for answering very specific questions, like \"Who won\n", "the Olympic Marathon in 2012?\", because we have that answer stored,\n", "however, we are not given the answer to many other questions. For\n", "example, Alan Turing was a formidable marathon runner, in 1946 he ran a\n", "time 2 hours 46 minutes (just under four minutes per kilometer, faster\n", "than I and most of the other [Endcliffe Park\n", "Run](http://www.parkrun.org.uk/sheffieldhallam/) runners can do 5 km).\n", "What is the probability he would have won an Olympics if one had been\n", "held in 1946?\n", "\n", "To answer this question we need to generalize, but before we formalize\n", "the concept of generalization let's introduce some formal representation\n", "of what it means to generalize in machine learning.\n", "\n", "## Expected Loss \\[edit\\]\n", "\n", "Our objective function so far has been the negative log likelihood,\n", "which we have minimized (via the sum of squares error) to obtain our\n", "model. However, there is an alternative perspective on an objective\n", "function, that of a *loss function*. A loss function is a cost function\n", "associated with the penalty you might need to pay for a particular\n", "incorrect decision. One approach to machine learning involves specifying\n", "a loss function and considering how much a particular model is likely to\n", "cost us across its lifetime. We can represent this with an expectation.\n", "If our loss function is given as\n", "$L(\\dataScalar, \\inputScalar, \\mappingVector)$ for a particular model\n", "that predicts $\\dataScalar$ given $\\inputScalar$ and $\\mappingVector$\n", "then we are interested in minimizing the expected loss under the likely\n", "distribution of $\\dataScalar$ and $\\inputScalar$. To understand this\n", "formally we define the *true* distribution of the data samples,\n", "$\\dataScalar$, $\\inputScalar$. This is a particularl distribution that\n", "we don't have access to very often, and to represent that we define it\n", "with a variant of the letter 'P',\n", "$\\mathbb{P}(\\dataScalar, \\inputScalar)$. 
If we genuinely pay\n", "$L(\\dataScalar, \\inputScalar, \\mappingVector)$ for every mistake we\n", "make, and the future test data is genuinely drawn from\n", "$\\mathbb{P}(\\dataScalar, \\inputScalar)$, then we can define our expected\n", "loss, or risk, to be $$\n", "R(\\mappingVector) = \\int L(\\dataScalar, \\inputScalar, \\mappingVector) \\mathbb{P}(\\dataScalar, \\inputScalar) \\text{d}\\dataScalar\n", "\\text{d}\\inputScalar.\n", "$$ Of course, in practice, this value can't be computed, *but* it serves\n", "as a reminder of what it is we are aiming to minimize, and under certain\n", "circumstances it can be approximated.\n", "\n", "## Sample Based Approximations\n", "\n", "A sample based approximation to an expectation involves replacing the\n", "true expectation with a sum over samples from the distribution,\n", "$$\n", "\\int \\mappingFunction(z) p(z) \\text{d}z \\approx \\frac{1}{s}\\sum_{i=1}^s \\mappingFunction(z_i),\n", "$$ if $\\{z_i\\}_{i=1}^s$ are a set of $s$ independent and identically\n", "distributed samples from the distribution $p(z)$. This approximation\n", "becomes better for larger $s$, although the *rate of convergence* to the\n", "true integral will be very dependent on the distribution $p(z)$ *and*\n", "the function $\\mappingFunction(z)$.\n", "\n", "That said, this means we can approximate our true integral with the sum,\n", "$$\n", "R(\\mappingVector) \\approx \\frac{1}{\\numData}\\sum_{i=1}^{\\numData} L(\\dataScalar_i, \\inputScalar_i, \\mappingVector),\n", "$$\n", "if $\\dataScalar_i$ and $\\inputScalar_i$ are independent samples from the\n", "true distribution $\\mathbb{P}(\\dataScalar, \\inputScalar)$. Minimizing\n", "this sum directly is known as *empirical risk minimization*. The sum of\n", "squares error we have been using can be recovered for this case by\n", "considering a *squared loss*, $$\n", "L(\\dataScalar, \\inputScalar, \\mappingVector) = (\\dataScalar-\\mappingVector^\\top\\boldsymbol{\\phi}(\\inputScalar))^2,\n", "$$ which gives an empirical risk of the form $$\n", "R(\\mappingVector) \\approx \\frac{1}{\\numData} \\sum_{i=1}^{\\numData}\n", "(\\dataScalar_i - \\mappingVector^\\top \\boldsymbol{\\phi}(\\inputScalar_i))^2,\n", "$$ which up to the constant $\\frac{1}{\\numData}$ is identical to the\n", "objective function we have been using so far." ] },
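{ "cell_type": "markdown", "metadata": {}, "source": [ "To see the sample based approximation in action, the short sketch below\n", "(an illustrative addition, not part of the original notes) approximates\n", "$\\int \\mappingFunction(z) p(z) \\text{d}z$ for $\\mappingFunction(z) = z^2$ with $p(z)$ a standard\n", "Gaussian, where the true value of the integral is 1. The approximation\n", "improves as the number of samples $s$ grows." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Monte Carlo approximation of an expectation: a minimal sketch.\n", "# Here f(z) = z**2 and p(z) is a standard Gaussian, so the true\n", "# value of the integral is 1 (the variance of z).\n", "np.random.seed(0)\n", "for s in [10, 1000, 100000]:\n", "    z = np.random.normal(size=s)  # s i.i.d. samples from p(z)\n", "    # the sample based approximation (1/s) sum_i f(z_i)\n", "    print(s, (z**2).mean())" ] },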
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Estimating Risk through Validation \\[edit\\]\n", "\n", "Unfortunately, minimising the empirical risk only guarantees something\n", "about our performance on the training data. If we don't have enough data\n", "for the approximation to the risk to be valid, then we can end up\n", "performing significantly worse on test data. Fortunately, we can also\n", "estimate the risk for test data by estimating the risk for unseen\n", "data. The main trick here is to 'hold out' a portion of our data from\n", "training and use the model's performance on that subset of the data as a\n", "proxy for the true risk. This data is known as 'validation' data. It\n", "contrasts with test data because its values are known at model design\n", "time. However, in contrast to test data, we don't use it to fit our\n", "model. This means that it doesn't exhibit the same bias that the\n", "empirical risk does when estimating the true risk.\n", "\n", "## Validation \\[edit\\]\n", "\n", "In this lab we will explore techniques for model selection that make use\n", "of validation data: data that isn't seen by the model in the learning\n", "(or fitting) phase, but is used to *validate* our choice of model from\n", "amongst the different designs we have selected.\n", "\n", "In machine learning, we are looking to minimise the value of our\n", "objective function $E$ with respect to its parameters $\\mappingVector$.\n", "We do this by considering our training data. We minimize the value of\n", "the objective function as it's observed at each training point. However,\n", "we are really interested in how the model will perform on future data.\n", "To evaluate that, we choose to *hold out* a portion of the data for\n", "evaluating the quality of the model.\n", "\n", "We will review the different methods of model selection on the Olympic\n", "marathon data. Firstly we import the Olympic marathon data.\n", "\n", "## Olympic Marathon Data\n", "\n", "\n", "\n", "\n", "\n", "\n", "
\n", "- Gold medal times for Olympic Marathon since 1896.\n", "- Marathons before 1924 didn't have a standardised distance.\n", "- Present results using pace per km.\n", "- In 1904 Marathon was badly organised leading to very slow times.\n", "\n", "\n", "\n", "Image from Wikimedia Commons \n", "
\n", "The first thing we will do is load a standard data set for regression\n", "modelling. The data consists of the pace of Olympic Gold Medal Marathon\n", "winners for the Olympics from 1896 to present. First we load in the data\n", "and plot." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pods" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pods.datasets.olympic_marathon_men()\n", "x = data['X']\n", "y = data['Y']\n", "\n", "offset = y.mean()\n", "scale = np.sqrt(y.var())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import teaching_plots as plot\n", "import mlai" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\n", "xlim = (1875,2030)\n", "ylim = (2.5, 6.5)\n", "yhat = (y-offset)/scale\n", "\n", "fig, ax = plt.subplots(figsize=plot.big_wide_figsize)\n", "_ = ax.plot(x, y, 'r.',markersize=10)\n", "ax.set_xlabel('year', fontsize=20)\n", "ax.set_ylabel('pace min/km', fontsize=20)\n", "ax.set_xlim(xlim)\n", "ax.set_ylim(ylim)\n", "\n", "mlai.write_figure(figure=fig, \n", " filename='../slides/diagrams/datasets/olympic-marathon.svg', \n", " transparent=True, \n", " frameon=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Olympic marathon pace times since 1892.\n", "\n", "Things to notice about the data include the outlier in 1904, in this\n", "year, the olympics was in St Louis, USA. Organizational problems and\n", "challenges with dust kicked up by the cars following the race meant that\n", "participants got lost, and only very few participants completed.\n", "\n", "More recent years see more consistently quick marathons.\n", "\n", "## Validation on the Olympic Marathon Data \\[edit\\]\n", "\n", "The first thing we'll do is fit a standard linear model to the data. We\n", "recall from previous lectures and lab classes that to do this we need to\n", "solve the system $$\n", "\\basisMatrix^\\top \\basisMatrix \\mappingVector = \\basisMatrix^\\top \\dataVector\n", "$$ for $\\mappingVector$ and use the resulting vector to make predictions\n", "at the training points and test points, $$\n", "\\mappingFunctionVector = \\basisMatrix \\mappingVector.\n", "$$ The prediction function can be used to compute the objective\n", "function, $$\n", "E(\\mappingVector) = \\sum_{i}^{\\numData} (\\dataScalar_i - \\mappingVector^\\top\\phi(\\dataVector_i))^2\n", "$$ by substituting in the prediction in vector form we have $$\n", "E(\\mappingVector) = (\\dataVector - \\mappingFunctionVector)^\\top(\\dataVector - \\mappingFunctionVector)\n", "$$\n", "\n", "### Question 1\n", "\n", "In this question you will construct some flexible general code for\n", "fitting linear models.\n", "\n", "Create a python function that computes $\\basisMatrix$ for the linear\n", "basis,\n", "$$\\basisMatrix = \\begin{bmatrix} \\dataVector & \\mathbf{1}\\end{bmatrix}$$\n", "Name your function `linear`. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Question 1\n", "\n", "In this question you will construct some flexible general code for\n", "fitting linear models.\n", "\n", "Create a python function that computes $\\basisMatrix$ for the linear\n", "basis,\n", "$$\\basisMatrix = \\begin{bmatrix} \\inputVector & \\mathbf{1}\\end{bmatrix}.$$\n", "Name your function `linear`. `Phi` should be in the form of a *design\n", "matrix* and `x` should be in the form of a `numpy` two dimensional array\n", "with $\\numData$ rows and 1 column. Calls to your function should be in\n", "the following form:\n", "\n", "`Phi = linear(x)`\n", "\n", "Create a python function that accepts, as arguments, a python function\n", "that defines a basis (like the one you've just created called `linear`)\n", "as well as a set of inputs and a vector of parameters. Your new python\n", "function should return a prediction. Name your function `prediction`.\n", "The return value `f` should be a two dimensional `numpy` array with\n", "$\\numData$ rows and $1$ column, where $\\numData$ is the number of data\n", "points. Calls to your function should be in the following form:\n", "\n", "`f = prediction(w, x, linear)`\n", "\n", "Create a python function that computes the sum of squares objective\n", "function (or error function). It should accept your input data (or\n", "covariates) and target data (or response variables) and your parameter\n", "vector `w` as arguments. It should also accept a python function that\n", "represents the basis. Calls to your function should be in the following\n", "form:\n", "\n", "`e = objective(w, x, y, linear)`\n", "\n", "Create a function that solves the linear system for the set of\n", "parameters that minimizes the sum of squares objective. It should accept\n", "input data, target data and a python function for the basis as the\n", "inputs. Calls to your function should be in the following form:\n", "\n", "`w = fit(x, y, linear)`\n", "\n", "Fit a linear model to the olympic data using these functions and plot\n", "the resulting prediction between 1890 and 2020. Set the title of the\n", "plot to be the error of the fit on the *training data*.\n", "\n", "*15 marks*" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write code for your answer to Question 1 in this box\n", "# provide the answers so that the code runs correctly otherwise you will lose marks!\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Polynomial Fit: Training Error\n", "\n", "### Question 2\n", "\n", "In this question we extend the code above to a non-linear basis (a\n", "quadratic function).\n", "\n", "Start by creating a python function called `quadratic`. It should\n", "compute the quadratic basis, 
$$\n", "\\basisMatrix = \\begin{bmatrix} \\mathbf{1} & \\dataVector & \\dataVector^2\\end{bmatrix}\n", "$$ It should be called in the following form:\n", "\n", "`Phi = quadratic(x)`\n", "\n", "Use this to compute the quadratic fit for the model, again plotting the\n", "result titled by the error.\n", "\n", "*10 marks*" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write code for your answer to Question 2 in this box\n", "# provide the answers so that the code runs correctly otherwise you will loose marks!\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Polynomial Fits to Olympics Data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('olympic_LM_polynomial_number{num_basis:0>3}.svg', \n", " directory='../slides/diagrams/ml', \n", " num_basis=IntSlider(1, 1, max_basis, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Polynomial fit to olympic data with 26 basis functions.\n", "\n", "## Hold Out Validation on Olympic Marathon Data \\[edit\\]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('olympic_val_extra_LM_polynomial_number{num_basis:0>3}.svg', \n", " directory='../slides/diagrams/ml', \n", " num_basis=IntSlider(1, 1, max_basis, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Olympic marathon data with validation error for\n", "extrapolation.\n", "\n", "## Extrapolation\n", "\n", "## Interpolation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('olympic_val_inter_LM_polynomial_number{num_basis:0>3}.svg', \n", " directory='../slides/diagrams/ml', \n", " num_basis=IntSlider(1, 1, max_basis, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: Olympic marathon data with validation error for\n", "interpolation.\n", "\n", "## Choice of Validation Set\n", "\n", "## Hold Out Data\n", "\n", "You have a conclusion as to which model fits best under the training\n", "error, but how do the two models perform in terms of validation? In this\n", "section we consider *hold out* validation. In hold out validation we\n", "remove a portion of the training data for *validating* the model on. The\n", "remaining data is used for fitting the model (training). Because this is\n", "a time series prediction, it makes sense for us to hold out data at the\n", "end of the time series. This means that we are validating on future\n", "predictions. We will hold out data from after 1980 and fit the model to\n", "the data before 1980." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# select indices of data to 'hold out'\n", "indices_hold_out = np.flatnonzero(x>1980)\n", "\n", "# Create a training set\n", "x_train = np.delete(x, indices_hold_out, axis=0)\n", "y_train = np.delete(y, indices_hold_out, axis=0)\n", "\n", "# Create a hold out set\n", "x_valid = np.take(x, indices_hold_out, axis=0)\n", "y_valid = np.take(y, indices_hold_out, axis=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Question 3\n", "\n", "For both the linear and quadratic models, fit the model to the data up\n", "until 1980 and then compute the error on the held out data (from 1980\n", "onwards). Which model performs better on the validation data?\n", "\n", "*10 marks*" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write code for your answer to Question 3 in this box\n", "# provide the answers so that the code runs correctly otherwise you will loose marks!\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Richer Basis Set\n", "\n", "Now we have an approach for deciding which model to retain, we can\n", "consider the entire family of polynomial bases, with arbitrary degrees.\n", "\n", "### Question 4\n", "\n", "Now we are going to build a more sophisticated form of basis function,\n", "one that can accept arguments to its inputs (similar to those we used in\n", "[this lab](./week4.ipynb)). Here we will start with a polynomial basis." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def polynomial(x, degree, loc, scale):\n", " degrees =np.arange(degree+1)\n", " return ((x-loc)/scale)**degrees" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The basis as we've defined it has three arguments as well as the input.\n", "The degree of the polynomial, the scale of the polynomial and the\n", "offset. These arguments need to be passed to the basis functions\n", "whenever they are called. Modify your code to pass these additional\n", "arguments to the python function for creating the basis. Do this for\n", "each of your functions `predict`, `fit` and `objective`. You will find\n", "`*args` (or `**kwargs`) useful.\n", "\n", "Write code that tries to fit different models to the data with\n", "polynomial basis. Use a maximum degree for your basis from 0 to 17. For\n", "each polynomial store the *hold out validation error* and the *training\n", "error*. When you have finished the computation plot the hold out error\n", "for your models and the training error for your p. When computing your\n", "polynomial basis use `offset=1956.` and `scale=120.` to ensure that the\n", "data is mapped (roughly) to the -1, 1 range.\n", "\n", "Which polynomial has the minimum training error? 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Write code that tries to fit different models to the data with the\n", "polynomial basis. Use a maximum degree for your basis from 0 to 17. For\n", "each polynomial store the *hold out validation error* and the *training\n", "error*. When you have finished the computation, plot the hold out error\n", "and the training error for your models. When computing your polynomial\n", "basis use `loc=1956.` and `scale=120.` to ensure that the data is mapped\n", "(roughly) to the -1, 1 range.\n", "\n", "Which polynomial has the minimum training error? Which polynomial has\n", "the minimum validation error?\n", "\n", "*25 marks*" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write code for your answer to Question 4 in this box\n", "# provide the answers so that the code runs correctly otherwise you will lose marks!\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Leave One Out Validation \\[edit\\]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from ipywidgets import IntSlider\n", "import pods" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('olympic_loo{part:0>3}_LM_polynomial_number{num_basis:0>3}.svg', \n", "                            directory='../slides/diagrams/ml', \n", "                            num_basis=IntSlider(1, 1, max_basis, 1), \n", "                            part=IntSlider(0, 0, x.shape[0], 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Hold out validation uses a portion of the data to hold out and a portion\n", "of the data to train on. There is always a compromise between how much\n", "data to hold out and how much data to train on. The more data you hold\n", "out, the better the estimate of your performance at 'run-time' (when the\n", "model is used to make predictions in real applications). However, by\n", "holding out more data, you leave less data to train on, so you have a\n", "better validation, but a poorer quality model fit than you could have\n", "had if you'd used all the data for training. Leave one out cross\n", "validation leaves as much data in the training phase as possible: you\n", "only take *one point* out for your validation set. However, if you do\n", "this for hold-out validation, then the quality of your validation error\n", "is very poor because you are testing the model quality on one point\n", "only. In *cross validation* the approach is to improve this estimate by\n", "doing more than one model fit. In *leave one out cross validation* you\n", "fit $\\numData$ different models, where $\\numData$ is the number of your\n", "data points. For each model fit you take out one data point, and train\n", "the model on the remaining $\\numData-1$ data points. You validate the\n", "model on the data point you've held out, but you do this $\\numData$\n", "times, once for each different model. You then take the *average* of all\n", "the $\\numData$ poorly estimated hold out validation errors. This average\n", "gives a good estimate of the performance of those models on the test\n", "data." ] },
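{ "cell_type": "markdown", "metadata": {}, "source": [ "The loop has the following structure (a sketch, an illustrative addition\n", "assuming `fit` and `objective` functions with the signatures used in\n", "this lab):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch of the leave one out loop for a single model, assuming your own\n", "# fit(x, y, basis, **kwargs) and objective(w, x, y, basis, **kwargs).\n", "errors = []\n", "for i in range(x.shape[0]):\n", "    x_train = np.delete(x, i, axis=0)  # train on all but point i\n", "    y_train = np.delete(y, i, axis=0)\n", "    w = fit(x_train, y_train, polynomial, degree=2, loc=1956., scale=120.)\n", "    # validate on the single held out point\n", "    errors.append(objective(w, x[i:i+1, :], y[i:i+1, :], polynomial,\n", "                            degree=2, loc=1956., scale=120.))\n", "cv_error = np.mean(errors)  # average of the n hold out errors" ] },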
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Question 5\n", "\n", "Write code that computes the *leave one out* validation error for the\n", "olympic data and the polynomial basis. Use the functions you have\n", "created above: `objective`, `fit` and `polynomial`. Compute the\n", "*leave-one-out* cross validation error for basis functions containing a\n", "maximum degree from 0 to 17.\n", "\n", "*20 marks*" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write code for your answer to Question 5 in this box\n", "# provide the answers so that the code runs correctly otherwise you will lose marks!\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## $k$-fold Cross Validation \\[edit\\]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from ipywidgets import IntSlider\n", "import pods\n", "\n", "num_parts = 5  # five fold cross validation (assumed from the figures)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('olympic_{num_parts}'.format(num_parts=num_parts) + 'cv{part:0>2}_LM_polynomial_number{number:0>3}.svg', \n", "                            directory='../slides/diagrams/ml', \n", "                            part=IntSlider(0,0,5,1),\n", "                            number=IntSlider(1, 1, max_basis, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Leave one out cross validation produces a very good estimate of the\n", "performance at test time, and is particularly useful if you don't have a\n", "lot of data. In these cases you need to make as much use of your data\n", "for model fitting as possible, and having a large hold out data set (to\n", "validate model performance) can have a significant effect on the size of\n", "the data set you have left to fit your model, and correspondingly, the\n", "complexity of the model you can fit. However, leave one out cross\n", "validation involves fitting $\\numData$ models, where $\\numData$ is the\n", "number of training data points. For the olympics example, this is only\n", "27 model fits, but in practice many data sets consist of thousands or\n", "millions of data points, and fitting many millions of models for\n", "estimating validation error isn't really practical. One option is to\n", "return to *hold out* validation, but another approach is to perform\n", "$k$-fold cross validation. In $k$-fold cross validation you split your\n", "data into $k$ parts. Then you use $k-1$ of those parts for training, and\n", "hold out one part for validation, just as we did for the hold out\n", "validation above. In *cross* validation, however, you repeat this\n", "process. You swap the part of the data you just used for validation back\n", "into the training set and select another part for validation. You then\n", "fit the model to the new training data and validate on the portion of\n", "data you've just extracted. Each split of training/validation data is\n", "called a *fold* and since you do this process $k$ times, the procedure\n", "is known as $k$-fold cross validation. The term *cross* refers to the\n", "fact that you cross over your validation portion back into the training\n", "data every time you perform a fold." ] },
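{ "cell_type": "markdown", "metadata": {}, "source": [ "One way to generate the folds (a sketch, an illustrative addition using\n", "`np.random.permutation` and `np.array_split`, which allows unequal\n", "partition sizes):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch of splitting n data points into k (possibly unequal) folds.\n", "n = x.shape[0]\n", "k = 5\n", "indices = np.random.permutation(n)  # shuffle the data indices\n", "folds = np.array_split(indices, k)  # some folds get one extra point\n", "for fold in folds:\n", "    x_valid_fold = np.take(x, fold, axis=0)    # validation part\n", "    x_train_fold = np.delete(x, fold, axis=0)  # remaining training part" ] },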
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Question 6\n", "\n", "Perform $k$-fold cross validation on the olympic data with your\n", "polynomial basis. Use $k$ set to 5 (i.e. five fold cross validation). Do\n", "the different forms of validation select different models? Does five\n", "fold cross validation always select the same model?\n", "\n", "*Note*: The data doesn't divide into 5 equally sized partitions for the\n", "five fold cross validation error. Don't worry about this too much. Two\n", "of the partitions will have an extra data point. You might find\n", "`np.random.permutation?` useful.\n", "\n", "*20 marks*" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write code for your answer to Question 6 in this box\n", "# provide the answers so that the code runs correctly otherwise you will lose marks!\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bias Variance Decomposition \\[edit\\]\n", "\n", "Expected test error for different variations of the *training data*\n", "sampled from $\\Pr(\\inputVector, \\dataScalar)$:\n", "$$\\mathbb{E}\\left[ \\left(\\dataScalar - \\mappingFunction^*(\\inputVector)\\right)^2 \\right]$$\n", "Decompose as\n", "$$\\mathbb{E}\\left[ \\left(\\dataScalar - \\mappingFunction^*(\\inputVector)\\right)^2 \\right] = \\text{bias}\\left[\\mappingFunction^*(\\inputVector)\\right]^2 + \\text{variance}\\left[\\mappingFunction^*(\\inputVector)\\right] +\\sigma^2$$\n", "\n", "- The bias is given by $$\\text{bias}\\left[\\mappingFunction^*(\\inputVector)\\right] =\n", "  \\mathbb{E}\\left[\\mappingFunction^*(\\inputVector)\\right] - \\mappingFunction(\\inputVector),$$\n", "  where $\\mappingFunction(\\inputVector)$ is the true function.\n", "- Error due to bias comes from a model that's too simple.\n", "\n", "- The variance is given by\n", "  $$\\text{variance}\\left[\\mappingFunction^*(\\inputVector)\\right] = \\mathbb{E}\\left[\\left(\\mappingFunction^*(\\inputVector) - \\mathbb{E}\\left[\\mappingFunction^*(\\inputVector)\\right]\\right)^2\\right].$$\n", "- Slight variations in the training set cause changes in the\n", "  prediction. Error due to variance comes from an overly complex\n", "  model.\n", "\n", "## Bias vs Variance Error Plots \\[edit\\]\n", "\n", "Helper function for sampling data from two different classes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def create_data(per_cluster=30):\n", "    \"\"\"Create a randomly sampled data set\n", "    \n", "    :param per_cluster: number of points in each cluster\n", "    \"\"\"\n", "    X = []\n", "    y = []\n", "    scale = 3\n", "    prec = 1/(scale*scale)\n", "    pos_mean = [[-1, 0],[0,0.5],[1,0]]\n", "    pos_cov = [[prec, 0.], [0., prec]]\n", "    neg_mean = [[0, -0.5],[0,-0.5],[0,-0.5]]\n", "    neg_cov = [[prec, 0.], [0., prec]]\n", "    for mean in pos_mean:\n", "        X.append(np.random.multivariate_normal(mean=mean, cov=pos_cov, size=per_cluster))\n", "        y.append(np.ones((per_cluster, 1)))\n", "    for mean in neg_mean:\n", "        X.append(np.random.multivariate_normal(mean=mean, cov=neg_cov, size=per_cluster))\n", "        y.append(np.zeros((per_cluster, 1)))\n", "    return np.vstack(X), np.vstack(y).flatten()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Helper function for plotting the decision boundary of the SVM."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_contours(ax, cl, xx, yy, **params):\n", " \"\"\"Plot the decision boundaries for a classifier.\n", "\n", " :param ax: matplotlib axes object\n", " :param cl: a classifier\n", " :param xx: meshgrid ndarray\n", " :param yy: meshgrid ndarray\n", " :param params: dictionary of params to pass to contourf, optional\n", " \"\"\"\n", " Z = cl.decision_function(np.c_[xx.ravel(), yy.ravel()])\n", " Z = Z.reshape(xx.shape)\n", " # Plot decision boundary and regions\n", " out = ax.contour(xx, yy, Z, \n", " levels=[-1., 0., 1], \n", " colors='black', \n", " linestyles=['dashed', 'solid', 'dashed'])\n", " out = ax.contourf(xx, yy, Z, \n", " levels=[Z.min(), 0, Z.max()], \n", " colors=[[0.5, 1.0, 0.5], [1.0, 0.5, 0.5]])\n", " return out" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import mlai\n", "import os" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def decision_boundary_plot(models, X, y, axs, filename, titles, xlim, ylim):\n", " \"\"\"Plot a decision boundary on the given axes\n", " \n", " :param axs: the axes to plot on.\n", " :param models: the SVM models to plot\n", " :param titles: the titles for each axis\n", " :param X: input training data\n", " :param y: target training data\"\"\"\n", " for ax in axs.flatten():\n", " ax.clear()\n", " X0, X1 = X[:, 0], X[:, 1]\n", " if xlim is None:\n", " xlim = [X0.min()-1, X0.max()+1]\n", " if ylim is None:\n", " ylim = [X1.min()-1, X1.max()+1]\n", " xx, yy = np.meshgrid(np.arange(xlim[0], xlim[1], 0.02),\n", " np.arange(ylim[0], ylim[1], 0.02))\n", " for cl, title, ax in zip(models, titles, axs.flatten()):\n", " plot_contours(ax, cl, xx, yy,\n", " cmap=plt.cm.coolwarm, alpha=0.8)\n", " ax.plot(X0[y==1], X1[y==1], 'r.', markersize=10)\n", " ax.plot(X0[y==0], X1[y==0], 'g.', markersize=10)\n", " ax.set_xlim(xlim)\n", " ax.set_ylim(ylim)\n", " ax.set_xticks(())\n", " ax.set_yticks(())\n", " ax.set_title(title)\n", " mlai.write_figure(os.path.join(filename),\n", " figure=fig,\n", " transparent=True)\n", " return xlim, ylim" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib\n", "font = {'family' : 'sans',\n", " 'weight' : 'bold',\n", " 'size' : 22}\n", "\n", "matplotlib.rc('font', **font)\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create an instance of SVM and fit the data. 
\n", "C = 100.0 # SVM regularization parameter\n", "gammas = [0.001, 0.01, 0.1, 1]\n", "\n", "\n", "per_class=30\n", "num_samps = 20\n", "# Set-up 2x2 grid for plotting.\n", "fig, ax = plt.subplots(1, 4, figsize=(10,3))\n", "xlim=None\n", "ylim=None\n", "for samp in range(num_samps):\n", " X, y=create_data(per_class)\n", " models = []\n", " titles = []\n", " for gamma in gammas:\n", " models.append(svm.SVC(kernel='rbf', gamma=gamma, C=C))\n", " titles.append('$\\gamma={}$'.format(gamma))\n", " models = (cl.fit(X, y) for cl in models)\n", " xlim, ylim = decision_boundary_plot(models, X, y, \n", " axs=ax, \n", " filename='../slides/diagrams/ml/bias-variance{samp:0>3}.svg'.format(samp=samp), \n", " titles=titles,\n", " xlim=xlim,\n", " ylim=ylim)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('bias-variance{samp:0>3}.svg', \n", " directory='../slides/diagrams/ml', \n", " samp=IntSlider(0,0,10,1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "Figure: In each figure the more simple model is on the left, and the\n", "more complex model is on the right. Each fit is done to a different\n", "version of the data set. The simpler model is more consistent in its\n", "errors (bias error), whereas the more complex model is varying in its\n", "errors (variance error).\n", "\n", "\\addreading{@Rogers:book11}{Section 1.5}\n", "\\reading\n", "# References" ] } ], "metadata": {}, "nbformat": 4, "nbformat_minor": 2 }