{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Artificial Intelligence, Data Science and Machine Learning Systems\n", "Design\n", "### [Neil D. Lawrence](http://inverseprobability.com), Amazon Cambridge and University of Sheffield\n", "### 2018-11-30\n", "\n", "**Abstract**: Machine learning solutions, in particular those based on deep learning\n", "methods, form an underpinning of the current revolution in “artificial\n", "intelligence” that has dominated popular press headlines and is having a\n", "significant influence on the wider tech agenda. In this talk I will give\n", "an overview of where we are now with machine learning solutions, and\n", "what challenges we face both in the near and far future. These include\n", "practical application of existing algorithms in the face of the need to\n", "explain decision making, mechanisms for improving the quality and\n", "availability of data, dealing with large unstructured datasets.\n", "\n", "$$\n", "\\newcommand{\\Amatrix}{\\mathbf{A}}\n", "\\newcommand{\\KL}[2]{\\text{KL}\\left( #1\\,\\|\\,#2 \\right)}\n", "\\newcommand{\\Kaast}{\\kernelMatrix_{\\mathbf{ \\ast}\\mathbf{ \\ast}}}\n", "\\newcommand{\\Kastu}{\\kernelMatrix_{\\mathbf{ \\ast} \\inducingVector}}\n", "\\newcommand{\\Kff}{\\kernelMatrix_{\\mappingFunctionVector \\mappingFunctionVector}}\n", "\\newcommand{\\Kfu}{\\kernelMatrix_{\\mappingFunctionVector \\inducingVector}}\n", "\\newcommand{\\Kuast}{\\kernelMatrix_{\\inducingVector \\bf\\ast}}\n", "\\newcommand{\\Kuf}{\\kernelMatrix_{\\inducingVector \\mappingFunctionVector}}\n", "\\newcommand{\\Kuu}{\\kernelMatrix_{\\inducingVector \\inducingVector}}\n", "\\newcommand{\\Kuui}{\\Kuu^{-1}}\n", "\\newcommand{\\Qaast}{\\mathbf{Q}_{\\bf \\ast \\ast}}\n", "\\newcommand{\\Qastf}{\\mathbf{Q}_{\\ast \\mappingFunction}}\n", "\\newcommand{\\Qfast}{\\mathbf{Q}_{\\mappingFunctionVector \\bf \\ast}}\n", "\\newcommand{\\Qff}{\\mathbf{Q}_{\\mappingFunctionVector \\mappingFunctionVector}}\n", "\\newcommand{\\aMatrix}{\\mathbf{A}}\n", "\\newcommand{\\aScalar}{a}\n", "\\newcommand{\\aVector}{\\mathbf{a}}\n", "\\newcommand{\\acceleration}{a}\n", "\\newcommand{\\bMatrix}{\\mathbf{B}}\n", "\\newcommand{\\bScalar}{b}\n", "\\newcommand{\\bVector}{\\mathbf{b}}\n", "\\newcommand{\\basisFunc}{\\phi}\n", "\\newcommand{\\basisFuncVector}{\\boldsymbol{ \\basisFunc}}\n", "\\newcommand{\\basisFunction}{\\phi}\n", "\\newcommand{\\basisLocation}{\\mu}\n", "\\newcommand{\\basisMatrix}{\\boldsymbol{ \\Phi}}\n", "\\newcommand{\\basisScalar}{\\basisFunction}\n", "\\newcommand{\\basisVector}{\\boldsymbol{ \\basisFunction}}\n", "\\newcommand{\\activationFunction}{\\phi}\n", "\\newcommand{\\activationMatrix}{\\boldsymbol{ \\Phi}}\n", "\\newcommand{\\activationScalar}{\\basisFunction}\n", "\\newcommand{\\activationVector}{\\boldsymbol{ \\basisFunction}}\n", "\\newcommand{\\bigO}{\\mathcal{O}}\n", "\\newcommand{\\binomProb}{\\pi}\n", "\\newcommand{\\cMatrix}{\\mathbf{C}}\n", "\\newcommand{\\cbasisMatrix}{\\hat{\\boldsymbol{ \\Phi}}}\n", "\\newcommand{\\cdataMatrix}{\\hat{\\dataMatrix}}\n", "\\newcommand{\\cdataScalar}{\\hat{\\dataScalar}}\n", "\\newcommand{\\cdataVector}{\\hat{\\dataVector}}\n", "\\newcommand{\\centeredKernelMatrix}{\\mathbf{ \\MakeUppercase{\\centeredKernelScalar}}}\n", "\\newcommand{\\centeredKernelScalar}{b}\n", "\\newcommand{\\centeredKernelVector}{\\centeredKernelScalar}\n", "\\newcommand{\\centeringMatrix}{\\mathbf{H}}\n", "\\newcommand{\\chiSquaredDist}[2]{\\chi_{#1}^{2}\\left(#2\\right)}\n", 
"\\newcommand{\\chiSquaredSamp}[1]{\\chi_{#1}^{2}}\n", "\\newcommand{\\conditionalCovariance}{\\boldsymbol{ \\Sigma}}\n", "\\newcommand{\\coregionalizationMatrix}{\\mathbf{B}}\n", "\\newcommand{\\coregionalizationScalar}{b}\n", "\\newcommand{\\coregionalizationVector}{\\mathbf{ \\coregionalizationScalar}}\n", "\\newcommand{\\covDist}[2]{\\text{cov}_{#2}\\left(#1\\right)}\n", "\\newcommand{\\covSamp}[1]{\\text{cov}\\left(#1\\right)}\n", "\\newcommand{\\covarianceScalar}{c}\n", "\\newcommand{\\covarianceVector}{\\mathbf{ \\covarianceScalar}}\n", "\\newcommand{\\covarianceMatrix}{\\mathbf{C}}\n", "\\newcommand{\\covarianceMatrixTwo}{\\boldsymbol{ \\Sigma}}\n", "\\newcommand{\\croupierScalar}{s}\n", "\\newcommand{\\croupierVector}{\\mathbf{ \\croupierScalar}}\n", "\\newcommand{\\croupierMatrix}{\\mathbf{ \\MakeUppercase{\\croupierScalar}}}\n", "\\newcommand{\\dataDim}{p}\n", "\\newcommand{\\dataIndex}{i}\n", "\\newcommand{\\dataIndexTwo}{j}\n", "\\newcommand{\\dataMatrix}{\\mathbf{Y}}\n", "\\newcommand{\\dataScalar}{y}\n", "\\newcommand{\\dataSet}{\\mathcal{D}}\n", "\\newcommand{\\dataStd}{\\sigma}\n", "\\newcommand{\\dataVector}{\\mathbf{ \\dataScalar}}\n", "\\newcommand{\\decayRate}{d}\n", "\\newcommand{\\degreeMatrix}{\\mathbf{ \\MakeUppercase{\\degreeScalar}}}\n", "\\newcommand{\\degreeScalar}{d}\n", "\\newcommand{\\degreeVector}{\\mathbf{ \\degreeScalar}}\n", "% Already defined by latex\n", "%\\newcommand{\\det}[1]{\\left|#1\\right|}\n", "\\newcommand{\\diag}[1]{\\text{diag}\\left(#1\\right)}\n", "\\newcommand{\\diagonalMatrix}{\\mathbf{D}}\n", "\\newcommand{\\diff}[2]{\\frac{\\text{d}#1}{\\text{d}#2}}\n", "\\newcommand{\\diffTwo}[2]{\\frac{\\text{d}^2#1}{\\text{d}#2^2}}\n", "\\newcommand{\\displacement}{x}\n", "\\newcommand{\\displacementVector}{\\textbf{\\displacement}}\n", "\\newcommand{\\distanceMatrix}{\\mathbf{ \\MakeUppercase{\\distanceScalar}}}\n", "\\newcommand{\\distanceScalar}{d}\n", "\\newcommand{\\distanceVector}{\\mathbf{ \\distanceScalar}}\n", "\\newcommand{\\eigenvaltwo}{\\ell}\n", "\\newcommand{\\eigenvaltwoMatrix}{\\mathbf{L}}\n", "\\newcommand{\\eigenvaltwoVector}{\\mathbf{l}}\n", "\\newcommand{\\eigenvalue}{\\lambda}\n", "\\newcommand{\\eigenvalueMatrix}{\\boldsymbol{ \\Lambda}}\n", "\\newcommand{\\eigenvalueVector}{\\boldsymbol{ \\lambda}}\n", "\\newcommand{\\eigenvector}{\\mathbf{ \\eigenvectorScalar}}\n", "\\newcommand{\\eigenvectorMatrix}{\\mathbf{U}}\n", "\\newcommand{\\eigenvectorScalar}{u}\n", "\\newcommand{\\eigenvectwo}{\\mathbf{v}}\n", "\\newcommand{\\eigenvectwoMatrix}{\\mathbf{V}}\n", "\\newcommand{\\eigenvectwoScalar}{v}\n", "\\newcommand{\\entropy}[1]{\\mathcal{H}\\left(#1\\right)}\n", "\\newcommand{\\errorFunction}{E}\n", "\\newcommand{\\expDist}[2]{\\left<#1\\right>_{#2}}\n", "\\newcommand{\\expSamp}[1]{\\left<#1\\right>}\n", "\\newcommand{\\expectation}[1]{\\left\\langle #1 \\right\\rangle }\n", "\\newcommand{\\expectationDist}[2]{\\left\\langle #1 \\right\\rangle _{#2}}\n", "\\newcommand{\\expectedDistanceMatrix}{\\mathcal{D}}\n", "\\newcommand{\\eye}{\\mathbf{I}}\n", "\\newcommand{\\fantasyDim}{r}\n", "\\newcommand{\\fantasyMatrix}{\\mathbf{ \\MakeUppercase{\\fantasyScalar}}}\n", "\\newcommand{\\fantasyScalar}{z}\n", "\\newcommand{\\fantasyVector}{\\mathbf{ \\fantasyScalar}}\n", "\\newcommand{\\featureStd}{\\varsigma}\n", "\\newcommand{\\gammaCdf}[3]{\\mathcal{GAMMA CDF}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gammaDist}[3]{\\mathcal{G}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gammaSamp}[2]{\\mathcal{G}\\left(#1,#2\\right)}\n", 
"\\newcommand{\\gaussianDist}[3]{\\mathcal{N}\\left(#1|#2,#3\\right)}\n", "\\newcommand{\\gaussianSamp}[2]{\\mathcal{N}\\left(#1,#2\\right)}\n", "\\newcommand{\\given}{|}\n", "\\newcommand{\\half}{\\frac{1}{2}}\n", "\\newcommand{\\heaviside}{H}\n", "\\newcommand{\\hiddenMatrix}{\\mathbf{ \\MakeUppercase{\\hiddenScalar}}}\n", "\\newcommand{\\hiddenScalar}{h}\n", "\\newcommand{\\hiddenVector}{\\mathbf{ \\hiddenScalar}}\n", "\\newcommand{\\identityMatrix}{\\eye}\n", "\\newcommand{\\inducingInputScalar}{z}\n", "\\newcommand{\\inducingInputVector}{\\mathbf{ \\inducingInputScalar}}\n", "\\newcommand{\\inducingInputMatrix}{\\mathbf{Z}}\n", "\\newcommand{\\inducingScalar}{u}\n", "\\newcommand{\\inducingVector}{\\mathbf{ \\inducingScalar}}\n", "\\newcommand{\\inducingMatrix}{\\mathbf{U}}\n", "\\newcommand{\\inlineDiff}[2]{\\text{d}#1/\\text{d}#2}\n", "\\newcommand{\\inputDim}{q}\n", "\\newcommand{\\inputMatrix}{\\mathbf{X}}\n", "\\newcommand{\\inputScalar}{x}\n", "\\newcommand{\\inputSpace}{\\mathcal{X}}\n", "\\newcommand{\\inputVals}{\\inputVector}\n", "\\newcommand{\\inputVector}{\\mathbf{ \\inputScalar}}\n", "\\newcommand{\\iterNum}{k}\n", "\\newcommand{\\kernel}{\\kernelScalar}\n", "\\newcommand{\\kernelMatrix}{\\mathbf{K}}\n", "\\newcommand{\\kernelScalar}{k}\n", "\\newcommand{\\kernelVector}{\\mathbf{ \\kernelScalar}}\n", "\\newcommand{\\kff}{\\kernelScalar_{\\mappingFunction \\mappingFunction}}\n", "\\newcommand{\\kfu}{\\kernelVector_{\\mappingFunction \\inducingScalar}}\n", "\\newcommand{\\kuf}{\\kernelVector_{\\inducingScalar \\mappingFunction}}\n", "\\newcommand{\\kuu}{\\kernelVector_{\\inducingScalar \\inducingScalar}}\n", "\\newcommand{\\lagrangeMultiplier}{\\lambda}\n", "\\newcommand{\\lagrangeMultiplierMatrix}{\\boldsymbol{ \\Lambda}}\n", "\\newcommand{\\lagrangian}{L}\n", "\\newcommand{\\laplacianFactor}{\\mathbf{ \\MakeUppercase{\\laplacianFactorScalar}}}\n", "\\newcommand{\\laplacianFactorScalar}{m}\n", "\\newcommand{\\laplacianFactorVector}{\\mathbf{ \\laplacianFactorScalar}}\n", "\\newcommand{\\laplacianMatrix}{\\mathbf{L}}\n", "\\newcommand{\\laplacianScalar}{\\ell}\n", "\\newcommand{\\laplacianVector}{\\mathbf{ \\ell}}\n", "\\newcommand{\\latentDim}{q}\n", "\\newcommand{\\latentDistanceMatrix}{\\boldsymbol{ \\Delta}}\n", "\\newcommand{\\latentDistanceScalar}{\\delta}\n", "\\newcommand{\\latentDistanceVector}{\\boldsymbol{ \\delta}}\n", "\\newcommand{\\latentForce}{f}\n", "\\newcommand{\\latentFunction}{u}\n", "\\newcommand{\\latentFunctionVector}{\\mathbf{ \\latentFunction}}\n", "\\newcommand{\\latentFunctionMatrix}{\\mathbf{ \\MakeUppercase{\\latentFunction}}}\n", "\\newcommand{\\latentIndex}{j}\n", "\\newcommand{\\latentScalar}{z}\n", "\\newcommand{\\latentVector}{\\mathbf{ \\latentScalar}}\n", "\\newcommand{\\latentMatrix}{\\mathbf{Z}}\n", "\\newcommand{\\learnRate}{\\eta}\n", "\\newcommand{\\lengthScale}{\\ell}\n", "\\newcommand{\\rbfWidth}{\\ell}\n", "\\newcommand{\\likelihoodBound}{\\mathcal{L}}\n", "\\newcommand{\\likelihoodFunction}{L}\n", "\\newcommand{\\locationScalar}{\\mu}\n", "\\newcommand{\\locationVector}{\\boldsymbol{ \\locationScalar}}\n", "\\newcommand{\\locationMatrix}{\\mathbf{M}}\n", "\\newcommand{\\variance}[1]{\\text{var}\\left( #1 \\right)}\n", "\\newcommand{\\mappingFunction}{f}\n", "\\newcommand{\\mappingFunctionMatrix}{\\mathbf{F}}\n", "\\newcommand{\\mappingFunctionTwo}{g}\n", "\\newcommand{\\mappingFunctionTwoMatrix}{\\mathbf{G}}\n", "\\newcommand{\\mappingFunctionTwoVector}{\\mathbf{ \\mappingFunctionTwo}}\n", 
"\\newcommand{\\mappingFunctionVector}{\\mathbf{ \\mappingFunction}}\n", "\\newcommand{\\scaleScalar}{s}\n", "\\newcommand{\\mappingScalar}{w}\n", "\\newcommand{\\mappingVector}{\\mathbf{ \\mappingScalar}}\n", "\\newcommand{\\mappingMatrix}{\\mathbf{W}}\n", "\\newcommand{\\mappingScalarTwo}{v}\n", "\\newcommand{\\mappingVectorTwo}{\\mathbf{ \\mappingScalarTwo}}\n", "\\newcommand{\\mappingMatrixTwo}{\\mathbf{V}}\n", "\\newcommand{\\maxIters}{K}\n", "\\newcommand{\\meanMatrix}{\\mathbf{M}}\n", "\\newcommand{\\meanScalar}{\\mu}\n", "\\newcommand{\\meanTwoMatrix}{\\mathbf{M}}\n", "\\newcommand{\\meanTwoScalar}{m}\n", "\\newcommand{\\meanTwoVector}{\\mathbf{ \\meanTwoScalar}}\n", "\\newcommand{\\meanVector}{\\boldsymbol{ \\meanScalar}}\n", "\\newcommand{\\mrnaConcentration}{m}\n", "\\newcommand{\\naturalFrequency}{\\omega}\n", "\\newcommand{\\neighborhood}[1]{\\mathcal{N}\\left( #1 \\right)}\n", "\\newcommand{\\neilurl}{http://inverseprobability.com/}\n", "\\newcommand{\\noiseMatrix}{\\boldsymbol{ E}}\n", "\\newcommand{\\noiseScalar}{\\epsilon}\n", "\\newcommand{\\noiseVector}{\\boldsymbol{ \\epsilon}}\n", "\\newcommand{\\norm}[1]{\\left\\Vert #1 \\right\\Vert}\n", "\\newcommand{\\normalizedLaplacianMatrix}{\\hat{\\mathbf{L}}}\n", "\\newcommand{\\normalizedLaplacianScalar}{\\hat{\\ell}}\n", "\\newcommand{\\normalizedLaplacianVector}{\\hat{\\mathbf{ \\ell}}}\n", "\\newcommand{\\numActive}{m}\n", "\\newcommand{\\numBasisFunc}{m}\n", "\\newcommand{\\numComponents}{m}\n", "\\newcommand{\\numComps}{K}\n", "\\newcommand{\\numData}{n}\n", "\\newcommand{\\numFeatures}{K}\n", "\\newcommand{\\numHidden}{h}\n", "\\newcommand{\\numInducing}{m}\n", "\\newcommand{\\numLayers}{\\ell}\n", "\\newcommand{\\numNeighbors}{K}\n", "\\newcommand{\\numSequences}{s}\n", "\\newcommand{\\numSuccess}{s}\n", "\\newcommand{\\numTasks}{m}\n", "\\newcommand{\\numTime}{T}\n", "\\newcommand{\\numTrials}{S}\n", "\\newcommand{\\outputIndex}{j}\n", "\\newcommand{\\paramVector}{\\boldsymbol{ \\theta}}\n", "\\newcommand{\\parameterMatrix}{\\boldsymbol{ \\Theta}}\n", "\\newcommand{\\parameterScalar}{\\theta}\n", "\\newcommand{\\parameterVector}{\\boldsymbol{ \\parameterScalar}}\n", "\\newcommand{\\partDiff}[2]{\\frac{\\partial#1}{\\partial#2}}\n", "\\newcommand{\\precisionScalar}{j}\n", "\\newcommand{\\precisionVector}{\\mathbf{ \\precisionScalar}}\n", "\\newcommand{\\precisionMatrix}{\\mathbf{J}}\n", "\\newcommand{\\pseudotargetScalar}{\\widetilde{y}}\n", "\\newcommand{\\pseudotargetVector}{\\mathbf{ \\pseudotargetScalar}}\n", "\\newcommand{\\pseudotargetMatrix}{\\mathbf{ \\widetilde{Y}}}\n", "\\newcommand{\\rank}[1]{\\text{rank}\\left(#1\\right)}\n", "\\newcommand{\\rayleighDist}[2]{\\mathcal{R}\\left(#1|#2\\right)}\n", "\\newcommand{\\rayleighSamp}[1]{\\mathcal{R}\\left(#1\\right)}\n", "\\newcommand{\\responsibility}{r}\n", "\\newcommand{\\rotationScalar}{r}\n", "\\newcommand{\\rotationVector}{\\mathbf{ \\rotationScalar}}\n", "\\newcommand{\\rotationMatrix}{\\mathbf{R}}\n", "\\newcommand{\\sampleCovScalar}{s}\n", "\\newcommand{\\sampleCovVector}{\\mathbf{ \\sampleCovScalar}}\n", "\\newcommand{\\sampleCovMatrix}{\\mathbf{s}}\n", "\\newcommand{\\scalarProduct}[2]{\\left\\langle{#1},{#2}\\right\\rangle}\n", "\\newcommand{\\sign}[1]{\\text{sign}\\left(#1\\right)}\n", "\\newcommand{\\sigmoid}[1]{\\sigma\\left(#1\\right)}\n", "\\newcommand{\\singularvalue}{\\ell}\n", "\\newcommand{\\singularvalueMatrix}{\\mathbf{L}}\n", "\\newcommand{\\singularvalueVector}{\\mathbf{l}}\n", "\\newcommand{\\sorth}{\\mathbf{u}}\n", 
"\\newcommand{\\spar}{\\lambda}\n", "\\newcommand{\\trace}[1]{\\text{tr}\\left(#1\\right)}\n", "\\newcommand{\\BasalRate}{B}\n", "\\newcommand{\\DampingCoefficient}{C}\n", "\\newcommand{\\DecayRate}{D}\n", "\\newcommand{\\Displacement}{X}\n", "\\newcommand{\\LatentForce}{F}\n", "\\newcommand{\\Mass}{M}\n", "\\newcommand{\\Sensitivity}{S}\n", "\\newcommand{\\basalRate}{b}\n", "\\newcommand{\\dampingCoefficient}{c}\n", "\\newcommand{\\mass}{m}\n", "\\newcommand{\\sensitivity}{s}\n", "\\newcommand{\\springScalar}{\\kappa}\n", "\\newcommand{\\springVector}{\\boldsymbol{ \\kappa}}\n", "\\newcommand{\\springMatrix}{\\boldsymbol{ \\mathcal{K}}}\n", "\\newcommand{\\tfConcentration}{p}\n", "\\newcommand{\\tfDecayRate}{\\delta}\n", "\\newcommand{\\tfMrnaConcentration}{f}\n", "\\newcommand{\\tfVector}{\\mathbf{ \\tfConcentration}}\n", "\\newcommand{\\velocity}{v}\n", "\\newcommand{\\sufficientStatsScalar}{g}\n", "\\newcommand{\\sufficientStatsVector}{\\mathbf{ \\sufficientStatsScalar}}\n", "\\newcommand{\\sufficientStatsMatrix}{\\mathbf{G}}\n", "\\newcommand{\\switchScalar}{s}\n", "\\newcommand{\\switchVector}{\\mathbf{ \\switchScalar}}\n", "\\newcommand{\\switchMatrix}{\\mathbf{S}}\n", "\\newcommand{\\tr}[1]{\\text{tr}\\left(#1\\right)}\n", "\\newcommand{\\loneNorm}[1]{\\left\\Vert #1 \\right\\Vert_1}\n", "\\newcommand{\\ltwoNorm}[1]{\\left\\Vert #1 \\right\\Vert_2}\n", "\\newcommand{\\onenorm}[1]{\\left\\vert#1\\right\\vert_1}\n", "\\newcommand{\\twonorm}[1]{\\left\\Vert #1 \\right\\Vert}\n", "\\newcommand{\\vScalar}{v}\n", "\\newcommand{\\vVector}{\\mathbf{v}}\n", "\\newcommand{\\vMatrix}{\\mathbf{V}}\n", "\\newcommand{\\varianceDist}[2]{\\text{var}_{#2}\\left( #1 \\right)}\n", "% Already defined by latex\n", "%\\newcommand{\\vec}{#1:}\n", "\\newcommand{\\vecb}[1]{\\left(#1\\right):}\n", "\\newcommand{\\weightScalar}{w}\n", "\\newcommand{\\weightVector}{\\mathbf{ \\weightScalar}}\n", "\\newcommand{\\weightMatrix}{\\mathbf{W}}\n", "\\newcommand{\\weightedAdjacencyMatrix}{\\mathbf{A}}\n", "\\newcommand{\\weightedAdjacencyScalar}{a}\n", "\\newcommand{\\weightedAdjacencyVector}{\\mathbf{ \\weightedAdjacencyScalar}}\n", "\\newcommand{\\onesVector}{\\mathbf{1}}\n", "\\newcommand{\\zerosVector}{\\mathbf{0}}\n", "$$\n", "\n", "\n", "\n", "\n", "\n", "\n", "## The Gartner Hype Cycle\n", "\n", "\n", "\n", "The [Gartner Hype Cycle](https://en.wikipedia.org/wiki/Hype_cycle) tries\n", "to assess where an idea is in terms of maturity and adoption. It splits\n", "the evolution of technology into a technological trigger, a peak of\n", "expectations followed by a trough of disillusionment and a final\n", "ascension into a useful technology. It looks rather like a classical\n", "control response to a final set point." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "from ipywidgets import IntSlider" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pods.notebook.display_plots('ai-bd-dm-dl-ml-google-trends{sample:0>3}.svg', \n", " '../slides/diagrams/data-science/', sample=IntSlider(1, 1, 4, 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "Google trends for different technological terms on the hype\n", "cycle.\n", "
\n", "Google trends gives us insight into how far along various technological\n", "terms are on the hype cycle.\n", "\n", "\n", "### Lies and Damned Lies\n", "\n", "> There are three types of lies: lies, damned lies and statistics\n", ">\n", "> Benjamin Disraeli 1804-1881\n", "\n", "The quote lies, damned lies and statistics was credited to Benjamin\n", "Disraeli by Mark Twain in his autobiography. It characterizes the idea\n", "that statistic can be made to prove anything. But Disraeli died in 1881\n", "and Mark Twain died in 1910. The important breakthrough in overcoming\n", "our tendency to overinterpet data came with the formalization of the\n", "field through the development of *mathematical statistics*.\n", "\n", "### *Mathematical* Statistics\n", "\n", "\n", "\n", "[Karl Pearson](https://en.wikipedia.org/wiki/Karl_Pearson) (1857-1936),\n", "[Ronald Fisher](https://en.wikipedia.org/wiki/Ronald_Fisher) (1890-1962)\n", "and others considered the question of what conclusions can truly be\n", "drawn from data. Their mathematical studies act as a restraint on our\n", "tendency to over-interpret and see patterns where there are none. They\n", "introduced concepts such as randomized control trials that form a\n", "mainstay of the our decision making today, from government, to\n", "clinicians to large scale A/B testing that determines the nature of the\n", "web interfaces we interact with on social media and shopping.\n", "\n", "Today the statement \"There are three types of lies: lies, damned lies\n", "and 'big data'\" may be more apt. We are revisiting many of the mistakes\n", "made in interpreting data from the 19th century. Big data is laid down\n", "by happenstance, rather than actively collected with a particular\n", "question in mind. That means it needs to be treated with care when\n", "conclusions are being drawn. For data science to succede it needs the\n", "same form of rigour that Pearson and Fisher brought to statistics, a\n", "\"mathematical data science\" is needed.\n", "\n", "## What is Machine Learning?\n", "\n", "What is machine learning? At its most basic level machine learning is a\n", "combination of\n", "\n", "$$ \\text{data} + \\text{model} \\xrightarrow{\\text{compute}} \\text{prediction}$$\n", "\n", "where *data* is our observations. They can be actively or passively\n", "acquired (meta-data). The *model* contains our assumptions, based on\n", "previous experience. That experience can be other data, it can come from\n", "transfer learning, or it can merely be our beliefs about the\n", "regularities of the universe. In humans our models include our inductive\n", "biases. The *prediction* is an action to be taken or a categorization or\n", "a quality score. The reason that machine learning has become a mainstay\n", "of artificial intelligence is the importance of predictions in\n", "artificial intelligence. The data and the model are combined through\n", "computation.\n", "\n", "In practice we normally perform machine learning using two functions. To\n", "combine data with a model we typically make use of:\n", "\n", "**a prediction function** a function which is used to make the\n", "predictions. It includes our beliefs about the regularities of the\n", "universe, our assumptions about how the world works, e.g. smoothness,\n", "spatial similarities, temporal similarities.\n", "\n", "**an objective function** a function which defines the cost of\n", "misprediction. 
Typically it includes knowledge about the world's\n", "generating processes (probabilistic objectives) or the costs we pay for\n", "mispredictions (empiricial risk minimization).\n", "\n", "The combination of data and model through the prediction function and\n", "the objectie function leads to a *learning algorithm*. The class of\n", "prediction functions and objective functions we can make use of is\n", "restricted by the algorithms they lead to. If the prediction function or\n", "the objective function are too complex, then it can be difficult to find\n", "an appropriate learning algorithm. Much of the acdemic field of machine\n", "learning is the quest for new learning algorithms that allow us to bring\n", "different types of models and data together.\n", "\n", "A useful reference for state of the art in machine learning is the UK\n", "Royal Society Report, [Machine Learning: Power and Promise of Computers\n", "that Learn by\n", "Example](https://royalsociety.org/~/media/policy/projects/machine-learning/publications/machine-learning-report.pdf).\n", "\n", "You can also check my blog post on [\"What is Machine\n", "Learning?\"](http://inverseprobability.com/2017/07/17/what-is-machine-learning)\n", "\n", "### Artificial Intelligence and Data Science\n", "\n", "Machine learning technologies have been the driver of two related, but\n", "distinct disciplines. The first is *data science*. Data science is an\n", "emerging field that arises from the fact that we now collect so much\n", "data by happenstance, rather than by *experimental design*. Classical\n", "statistics is the science of drawing conclusions from data, and to do so\n", "statistical experiments are carefully designed. In the modern era we\n", "collect so much data that there's a desire to draw inferences directly\n", "from the data.\n", "\n", "As well as machine learning, the field of data science draws from\n", "statistics, cloud computing, data storage (e.g. streaming data),\n", "visualization and data mining.\n", "\n", "In contrast, artificial intelligence technologies typically focus on\n", "emulating some form of human behaviour, such as understanding an image,\n", "or some speech, or translating text from one form to another. The recent\n", "advances in artifcial intelligence have come from machine learning\n", "providing the automation. But in contrast to data science, in artifcial\n", "intelligence the data is normally collected with the specific task in\n", "mind. In this sense it has strong relations to classical statistics.\n", "\n", "Classically artificial intelligence worried more about *logic* and\n", "*planning* and focussed less on data driven decision making. Modern\n", "machine learning owes more to the field of *Cybernetics*\n", "[@Wiener:cybernetics48] than artificial intelligence. Related fields\n", "include *robotics*, *speech recognition*, *language understanding* and\n", "*computer vision*.\n", "\n", "There are strong overlaps between the fields, the wide availability of\n", "data by happenstance makes it easier to collect data for designing AI\n", "systems. These relations are coming through wide availability of sensing\n", "technologies that are interconnected by celluar networks, WiFi and the\n", "internet. This phenomenon is sometimes known as the *Internet of\n", "Things*, but this feels like a dangerous misnomer. 
We must never forget\n", "that we are interconnecting people, not things.\n", "\n", "## Natural and Artificial Intelligence: Embodiment Factors\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
\n", "\n", "\n", "\n", "\n", "
\n", "compute\n", "\n", "$$\\approx 100 \\text{ gigaflops}$$\n", "\n", "$$\\approx 16 \\text{ petaflops}$$\n", "
\n", "communicate\n", "\n", "$$1 \\text{ gigbit/s}$$\n", "\n", "$$100 \\text{ bit/s}$$\n", "
\n", "(compute/communicate)\n", "\n", "$$10^{4}$$\n", "\n", "$$10^{14}$$\n", "
\n", "There is a fundamental limit placed on our intelligence based on our\n", "ability to communicate. Claude Shannon founded the field of information\n", "theory. The clever part of this theory is it allows us to separate our\n", "measurement of information from what the information pertains to[^1].\n", "\n", "Shannon measured information in bits. One bit of information is the\n", "amount of information I pass to you when I give you the result of a coin\n", "toss. Shannon was also interested in the amount of information in the\n", "English language. He estimated that on average a word in the English\n", "language contains 12 bits of information.\n", "\n", "Given typical speaking rates, that gives us an estimate of our ability\n", "to communicate of around 100 bits per second [@Reed-information98].\n", "Computers on the other hand can communicate much more rapidly. Current\n", "wired network speeds are around a billion bits per second, ten million\n", "times faster.\n", "\n", "When it comes to compute though, our best estimates indicate our\n", "computers are slower. A typical modern computer can process make around\n", "100 billion floating point operations per second, each floating point\n", "operation involves a 64 bit number. So the computer is processing around\n", "6,400 billion bits per second.\n", "\n", "It's difficult to get similar estimates for humans, but by some\n", "estimates the amount of compute we would require to *simulate* a human\n", "brain is equivalent to that in the UK's fastest computer\n", "[@Ananthanarayanan-cat09], the MET office machine in Exeter, which in\n", "2018 ranks as the 11th fastest computer in the world. That machine\n", "simulates the world's weather each morning, and then simulates the\n", "world's climate. It is a 16 petaflop machine, processing around 1,000\n", "*trillion* bits per second.\n", "\n", "So when it comes to our ability to compute we are extraordinary, not\n", "compute in our conscious mind, but the underlying neuron firings that\n", "underpin both our consciousness, our sbuconsciousness as well as our\n", "motor control etc. By analogy I sometimes like to think of us as a\n", "Formula One engine. But in terms of our ability to deploy that\n", "computation in actual use, to share the results of what we have\n", "inferred, we are very limited. So when you imagine the F1 car that\n", "represents a psyche, think of an F1 car with bicycle wheels.\n", "\n", "\n", "\n", "In contrast, our computers have less computational power, but they can\n", "communicate far more fluidly. They are more like a go-kart, less well\n", "powered, but with tires that allow them to deploy that power.\n", "\n", "\n", "\n", "For humans, that means much of our computation should be dedicated to\n", "considering *what* we should compute. To do that efficiently we need to\n", "model the world around us. The most complex thing in the world around us\n", "is other humans. So it is no surprise that we model them. We second\n", "guess what their intentions are, and our communication is only necessary\n", "when they are departing from how we model them. Naturally, for this to\n", "work well, we need to understand those we work closely with. So it is no\n", "surprise that social communication, social bonding, forms so much of a\n", "part of our use of our limited bandwidth.\n", "\n", "There is a second effect here, our need to anthropomorphise objects\n", "around us. 
Our tendency to model our fellow humans extends to when we\n", "interact with other entities in our environment. To our pets as well as\n", "inanimate objects around us, such as computers or even our cars. This\n", "tendency to overinterpret could be a consequence of our limited ability\n", "to communicate.\n", "\n", "For more details see this paper [\"Living Together: Mind and Machine\n", "Intelligence\"](https://arxiv.org/abs/1705.07996), and this [TEDx\n", "talk](http://inverseprobability.com/talks/lawrence-tedx17/living-together.html).\n", "\n", "## Evolved Relationship with Information\n", "\n", "The high bandwidth of computers has resulted in a close relationship\n", "between the computer and data. Large amounts of information can flow\n", "between the two. The degree to which the computer is mediating our\n", "relationship with data means that we should consider it an intermediary.\n", "\n", "Originaly our low bandwith relationship with data was affected by two\n", "characteristics. Firstly, our tendency to over-interpret driven by our\n", "need to extract as much knowledge from our low bandwidth information\n", "channel as possible. Secondly, by our improved understanding of the\n", "domain of *mathematical* statistics and how our cognitive biases can\n", "mislead us.\n", "\n", "With this new set up there is a potential for assimilating far more\n", "information via the computer, but the computer can present this to us in\n", "various ways. If it's motives are not aligned with ours then it can\n", "misrepresent the information. This needn't be nefarious it can be simply\n", "as a result of the computer pursuing a different objective from us. For\n", "example, if the computer is aiming to maximize our interaction time that\n", "may be a different objective from ours which may be to summarize\n", "information in a representative manner in the *shortest* possible length\n", "of time.\n", "\n", "For example, for me it was a common experience to pick up my telephone\n", "with the intention of checking when my next appointment was, but to soon\n", "find myself distracted by another application on the phone, and end up\n", "reading something on the internet. By the time I'd finished reading, I\n", "would often have forgotten the reason I picked up my phone in the first\n", "place.\n", "\n", "There are great benefits to be had from the huge amount of information\n", "we can unlock from this evolved relationship between us and data. In\n", "biology, large scale data sharing has been driven by a revolution in\n", "genomic, transcriptomic and epigenomic measurement. The improved\n", "inferences that that can be drawn through summarizing data by computer\n", "have fundamentally changed the nature of biological science, now this\n", "phenomenon is also infuencing us in our daily lives as data measured by\n", "*happenstance* is increasingly used to characterize us.\n", "\n", "Better mediation of this flow actually requires a better understanding\n", "of human-computer interaction. This in turn involves understanding our\n", "own intelligence better, what its cognitive biases are and how these\n", "might mislead us.\n", "\n", "For further thoughts see [this Guardian\n", "article](https://www.theguardian.com/media-network/2015/jul/23/data-driven-economy-marketing)\n", "from 2015 on marketing in the internet era and [this blog\n", "post](http://inverseprobability.com/2015/12/04/what-kind-of-ai) on\n", "System Zero.\n", "\n", "
\n", "New direction of information flow, information is reaching us\n", "mediated by the computer\n", "
\n", "### What does Machine Learning do?\n", "\n", "Any process of automation allows us to scale what we do by codifying a\n", "process in some way that makes it efficient and repeatable. Machine\n", "learning automates by emulating human (or other actions) found in data.\n", "Machine learning codifies in the form of a mathematical function that is\n", "learnt by a computer. If we can create these mathematical functions in\n", "ways in which they can interconnect, then we can also build systems.\n", "\n", "Machine learning works through codifing a prediction of interest into a\n", "mathematical function. For example, we can try and predict the\n", "probability that a customer wants to by a jersey given knowledge of\n", "their age, and the latitude where they live. The technique known as\n", "logistic regression estimates the odds that someone will by a jumper as\n", "a linear weighted sum of the features of interest.\n", "\n", "$$ \\text{odds} = \\frac{p(\\text{bought})}{p(\\text{not bought})} $$\n", "$$ \\log \\text{odds} = \\beta_0 + \\beta_1 \\text{age} + \\beta_2 \\text{latitude}$$\n", "\n", "Here $\\beta_0$, $\\beta_1$ and $\\beta_2$ are the parameters of the model.\n", "If $\\beta_1$ and $\\beta_2$ are both positive, then the log-odds that\n", "someone will buy a jumper increase with increasing latitude and age, so\n", "the further north you are and the older you are the more likely you are\n", "to buy a jumper. The parameter $\\beta_0$ is an offset parameter, and\n", "gives the log-odds of buying a jumper at zero age and on the equator. It\n", "is likely to be negative\\[\\^logarithms\\] indicating that the purchase is\n", "odds-against. This is actually a classical statistical model, and models\n", "like logistic regression are widely used to estimate probabilities from\n", "ad-click prediction to risk of disease.\n", "\n", "This is called a generalized linear model, we can also think of it as\n", "estimating the *probability* of a purchase as a nonlinear function of\n", "the features (age, lattitude) and the parameters (the $\\beta$ values).\n", "The function is known as the *sigmoid* or [logistic\n", "function](https://en.wikipedia.org/wiki/Logistic_regression), thus the\n", "name *logistic* regression.\n", "\n", "$$ p(\\text{bought}) = \\sigmoid{\\beta_0 + \\beta_1 \\text{age} + \\beta_2 \\text{latitude}}$$\n", "\n", "In the case where we have *features* to help us predict, we sometimes\n", "denote such features as a vector, $\\inputVector$, and we then use an\n", "inner product between the features and the parameters,\n", "$\\boldsymbol{\\beta}^\\top \\inputVector = \\beta_1 \\inputScalar_1 + \\beta_2 \\inputScalar_2 + \\beta_3 \\inputScalar_3 ...$,\n", "to represent the argument of the sigmoid.\n", "\n", "$$ p(\\text{bought}) = \\sigmoid{\\boldsymbol{\\beta}^\\top \\inputVector}$$\n", "\n", "More generally, we aim to predict some aspect of our data,\n", "$\\dataScalar$, by relating it through a mathematical function,\n", "$\\mappingFunction(\\cdot)$, to the parameters, $\\boldsymbol{\\beta}$ and\n", "the data, $\\inputVector$.\n", "\n", "$$ \\dataScalar = \\mappingFunction\\left(\\inputVector, \\boldsymbol{\\beta}\\right)$$\n", "\n", "We call $\\mappingFunction(\\cdot)$ the *prediction function*\n", "\n", "To obtain the fit to data, we use a separate function called the\n", "*objective function* that gives us a mathematical representation of the\n", "difference between our predictions and the real data.\n", "\n", "$$\\errorFunction(\\boldsymbol{\\beta}, \\dataMatrix, 
\\inputMatrix)$$ A\n", "commonly used examples (for example in a regression problem) is least\n", "squares,\n", "$$\\errorFunction(\\boldsymbol{\\beta}, \\dataMatrix, \\inputMatrix) = \\sum_{i=1}^\\numData \\left(\\dataScalar_i - \\mappingFunction(\\inputVector_i, \\boldsymbol{\\beta})\\right)^2.$$\n", "\n", "If a linear prediction function is combined with the least squares\n", "objective function then that gives us a classical *linear regression*,\n", "another classical statistical model. Statistics often focusses on linear\n", "models because it makes interpretation of the model easier.\n", "Interpretation is key in statistics because the aim is normally to\n", "validate questions by analysis of data. Machine learning has typically\n", "focussed more on the prediction function itself and worried less about\n", "the interpretation of parameters, which are normally denoted by\n", "$\\mathbf{w}$ instead of $\\boldsymbol{\\beta}$. As a result *non-linear*\n", "functions are explored more often as they tend to improve quality of\n", "predictions but at the expense of interpretability.\n", "\n", "- These are interpretable models: vital for disease etc.\n", "\n", "- Modern machine learning methods are less interpretable\n", "\n", "- Example: face recognition\n", "\n", "\n", "
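\n",
"To make the prediction function and objective function concrete, here is a minimal sketch of the jumper example above. The $\\beta$ values and the three customers are invented for illustration, and the least squares objective is used simply to mirror the regression objective given above:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def sigmoid(z):\n",
"    # The logistic function: maps log-odds to a probability.\n",
"    return 1.0 / (1.0 + np.exp(-z))\n",
"\n",
"def prediction_function(age, latitude, beta):\n",
"    # p(bought) as a nonlinear function of the features and parameters.\n",
"    return sigmoid(beta[0] + beta[1] * age + beta[2] * latitude)\n",
"\n",
"def objective_function(beta, age, latitude, y):\n",
"    # Squared difference between predictions and observed purchases (0 or 1).\n",
"    return np.sum((y - prediction_function(age, latitude, beta)) ** 2)\n",
"\n",
"age = np.array([23., 55., 40.])          # invented customer data\n",
"latitude = np.array([52.2, 57.1, 35.0])\n",
"y = np.array([0., 1., 0.])\n",
"beta = np.array([-6.0, 0.05, 0.08])      # hypothetical parameter values\n",
"\n",
"print(prediction_function(age, latitude, beta))\n",
"print(objective_function(beta, age, latitude, y))\n",
"```\n",
"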
\n", "The DeepFace architecture [@Taigman:deepface14], visualized through\n", "colors to represent the functional mappings at each layer. There are 120\n", "million parameters in the model.\n", "
\n", "The DeepFace architecture [@Taigman:deepface14] consists of layers that\n", "deal with *translation* and *rotational* invariances. These layers are\n", "followed by three locally-connected layers and two fully-connected\n", "layers. Color illustrates feature maps produced at each layer. The net\n", "includes more than 120 million parameters, where more than 95% come from\n", "the local and fully connected layers.\n", "\n", "\n", "
\n", "Deep learning models are composition of simple functions. We can\n", "think of a pinball machine as an analogy. Each layer of pins corresponds\n", "to one of the layers of functions in the model. Input data is\n", "represented by the location of the ball from left to right when it is\n", "dropped in from the top. Output class comes from the position of the\n", "ball as it leaves the pins at the bottom.\n", "
\n", "We can think of what these models are doing as being similar to early\n", "pin ball machines. In a neural network, we input a number (or numbers),\n", "whereas in pinball, we input a ball. The location of the ball on the\n", "left-right axis can be thought of as the number. As the ball falls\n", "through the machine, each layer of pins can be thought of as a different\n", "layer of neurons. Each layer acts to move the ball from left to right.\n", "\n", "In a pinball machine, when the ball gets to the bottom it might fall\n", "into a hole defining a score, in a neural network, that is equivalent to\n", "the decision: a classification of the input object.\n", "\n", "An image has more than one number associated with it, so it's like\n", "playing pinball in a *hyper-space*." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pods\n", "pods.notebook.display_plots('pinball{sample:0>3}.svg', \n", " '../slides/diagrams', sample=(1,2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "
\n", "At initialization, the pins, which represent the parameters of the\n", "function, aren't in the right place to bring the balls to the correct\n", "decisions.\n", "
\n", "\n", "
\n", "After learning the pins are now in the right place to bring the balls\n", "to the correct decisions.\n", "
\n", "Learning involves moving all the pins to be in the right position, so\n", "that the ball falls in the right place. But moving all these pins in\n", "hyperspace can be difficult. In a hyper space you have to put a lot of\n", "data through the machine for to explore the positions of all the pins.\n", "Adversarial learning reflects the fact that a ball can be moved a small\n", "distance and lead to a very different result.\n", "\n", "Probabilistic methods explore more of the space by considering a range\n", "of possible paths for the ball through the machine.\n", "\n", "\n", "\n", "### Data Science\n", "\n", "- Industrial Revolution 4.0?\n", "- *Industrial Revolution* (1760-1840) term coined by Arnold Toynbee,\n", " late 19th century.\n", "- Maybe: But this one is dominated by *data* not *capital*\n", "- That presents *challenges* and *opportunities*\n", "\n", "compare [digital\n", "oligarchy](https://www.theguardian.com/media-network/2015/mar/05/digital-oligarchy-algorithms-personal-data)\n", "vs [how Africa can benefit from the data\n", "revolution](https://www.theguardian.com/media-network/2015/aug/25/africa-benefit-data-science-information)\n", "\n", "- Apple vs Nokia: How you handle disruption.\n", "\n", "Disruptive technologies take time to assimilate, and best practices, as\n", "well as the pitfalls of new technologies take time to share.\n", "Historically, new technologies led to new professions. [Isambard Kingdom\n", "Brunel](https://en.wikipedia.org/wiki/Isambard_Kingdom_Brunel) (born\n", "1806) was a leading innovator in civil, mechanical and naval\n", "engineering. Each of these has its own professional institutions founded\n", "in 1818, 1847, and 1860 respectively.\n", "\n", "[Nikola Tesla](https://en.wikipedia.org/wiki/Nikola_Tesla) developed the\n", "modern approach to electrical distribution, he was born in 1856 and the\n", "American Instiute for Electrical Engineers was founded in 1884, the UK\n", "equivalent was founded in 1871.\n", "\n", "[William Schockley Jr](https://en.wikipedia.org/wiki/William_Shockley),\n", "born 1910, led the group that developed the transistor, referred to as\n", "\"the man who brought silicon to Silicon Valley\", in 1963 the American\n", "Institute for Electical Engineers merged with the Institute of Radio\n", "Engineers to form the Institute of Electrical and Electronic Engineers.\n", "\n", "[Watts S. Humphrey](https://en.wikipedia.org/wiki/Watts_Humphrey), born\n", "1927, was known as the \"father of software quality\", in the 1980s he\n", "founded a program aimed at understanding and managing the software\n", "process. The British Computer Society was founded in 1956.\n", "\n", "Why the need for these professions? Much of it is about codification of\n", "best practice and developing trust between the public and practitioners.\n", "These fundamental characteristics of the professions are shared with the\n", "oldest professions (Medicine, Law) as well as the newest (Information\n", "Technology).\n", "\n", "So where are we today? My best guess is we are somewhere equivalent to\n", "the 1980s for Software Engineering. In terms of professional deployment\n", "we have a basic understanding of the equivalent of \"programming\" but\n", "much less understanding of *machine learning systems design* and *data\n", "infrastructure*. How the components we ahve developed interoperate\n", "together in a reliable and accountable manner. 
Best practice is still\n", "evolving, but perhaps isn't being shared widely enough.\n", "\n", "One problem is that the art of data science is superficially similar to\n", "regular software engineering. Although in practice it is rather\n", "different. Modern software engineering practice operates to generate\n", "code which is well tested as it is written, agile programming techniques\n", "provide the appropriate degree of flexibility for the individual\n", "programmers alongside sufficient formalization and testing. These\n", "techniques have evolved from an overly restrictive formalization that\n", "was proposed in the early days of software engineering.\n", "\n", "While data science involves programming, it is different in the\n", "following way. Most of the work in data science involves understanding\n", "the data and the appropriate manipulations to apply to extract knowledge\n", "from the data. The eventual number of lines of code that are required to\n", "extract that knowledge are often very few, but the amount of thought and\n", "attention that needs to be applied to each line is much more than a\n", "traditional line of software code. Testing of those lines is also of a\n", "different nature, provisions have to be made for evolving data\n", "environments. Any development work is often done on a static snapshot of\n", "data, but deployment is made in a live environment where the nature of\n", "data changes. Quality control involves checking for degradation in\n", "performance arising form unanticipated changes in data quality. It may\n", "also need to check for regulatory conformity. For example, in the UK the\n", "General Data Protection Regulation stipulates standards of\n", "explainability and fairness that may need to be monitored. These\n", "concerns do not affect traditional software deployments.\n", "\n", "Others are also pointing out these challenges, [this\n", "post](https://medium.com/@karpathy/software-2-0-a64152b37c35) from\n", "Andrej Karpathy (now head of AI at Tesla) covers the notion of \"Software\n", "2.0\". Google researchers have highlighted the challenges of \"Technical\n", "Debt\" in machine learning [@Sculley:debt15]. Researchers at Berkeley\n", "have characterized the systems challenges associated with machine\n", "learning [@Stoica:systemsml17].\n", "\n", "Data science is not only about technical expertise and analysis of data,\n", "we need to also generate a culture of decision making that acknowledges\n", "the true challenges in data-driven automated decision making. In\n", "particular, a focus on algorithms has neglected the importance of data\n", "in driving decisions. The quality of data is paramount in that poor\n", "quality data will inevitably lead to poor quality decisions. Anecdotally\n", "most data scientists will suggest that 80% of their time is spent on\n", "data clean up, and only 20% on actually modelling.\n", "\n", "### The Software Crisis\n", "\n", "> The major cause of the software crisis is that the machines have\n", "> become several orders of magnitude more powerful! 
To put it quite\n", "> bluntly: as long as there were no machines, programming was no problem\n", "> at all; when we had a few weak computers, programming became a mild\n", "> problem, and now we have gigantic computers, programming has become an\n", "> equally gigantic problem.\n", ">\n", "> Edsger Dijkstra (1930-2002), The Humble Programmer\n", "\n", "In the late sixties early software programmers made note of the\n", "increasing costs of software development and termed the challenges\n", "associated with it as the \"[Software\n", "Crisis](https://en.wikipedia.org/wiki/Software_crisis)\". Edsger Dijkstra\n", "referred to the crisis in his 1972 Turing Award winner's address.\n", "\n", "### The Data Crisis\n", "\n", "> The major cause of the data crisis is that machines have become more\n", "> interconnected than ever before. Data access is therefore cheap, but\n", "> data quality is often poor. What we need is cheap high quality data.\n", "> That implies that we develop processes for improving and verifying\n", "> data quality that are efficient.\n", ">\n", "> There would seem to be two ways for improving efficiency. Firstly, we\n", "> should not duplicate work. Secondly, where possible we should automate\n", "> work.\n", "\n", "What I term \"The Data Crisis\" is the modern equivalent of this problem.\n", "The quantity of modern data, and the lack of attention paid to data as\n", "it is initially \"laid down\" and the costs of data cleaning are bringing\n", "about a crisis in data-driven decision making. Just as with software,\n", "the crisis is most correctly addressed by 'scaling' the manner in which\n", "we process our data. Duplication of work occurs because the value of\n", "data cleaning is not correctly recognised in management decision making\n", "processes. Automation of work is increasingly possible through\n", "techniques in \"artificial intelligence\", but this will also require\n", "better management of the data science pipeline so that data about data\n", "science (meta-data science) can be correctly assimilated and processed.\n", "The Alan Turing institute has a program focussed on this area, [AI for\n", "Data\n", "Analytics](https://www.turing.ac.uk/research_projects/artificial-intelligence-data-analytics/).\n", "\n", "- Reusability of Data\n", "- Deployment of Machine Learning Systems\n", "\n", "- Reusability of Data\n", "- Deployment of Machine Learning Systems\n", "\n", "## Data Readiness Levels\n", "\n", "[Data Readiness\n", "Levels](http://inverseprobability.com/2017/01/12/data-readiness-levels)\n", "[@Lawrence:drl17] are an attempt to develop a language around data\n", "quality that can bridge the gap between technical solutions and decision\n", "makers such as managers and project planners. The are inspired by\n", "Technology Readiness Levels which attempt to quantify the readiness of\n", "technologies for deployment.\n", "\n", "Data-readiness describes, at its coarsest level, three separate stages\n", "of data graduation.\n", "\n", "- Grade C - accessibility\n", "\n", "- Grade B - validity\n", "\n", "- Grade A - usability\n", "\n", "### Accessibility: Grade C\n", "\n", "The first grade refers to the accessibility of data. Most data science\n", "practitioners will be used to working with data-providers who, perhaps\n", "having had little experience of data-science before, state that they\n", "\"have the data\". More often than not, they have not verified this. 
A\n", "convenient term for this is \"Hearsay Data\", someone has *heard* that\n", "they have the data so they *say* they have it. This is the lowest grade\n", "of data readiness.\n", "\n", "Progressing through Grade C involves ensuring that this data is\n", "accessible. Not just in terms of digital accessiblity, but also for\n", "regulatory, ethical and intellectual property reasons.\n", "\n", "### Validity: Grade B\n", "\n", "Data transits from Grade C to Grade B once we can begin digital analysis\n", "on the computer. Once the challenges of access to the data have been\n", "resolved, we can make the data available either via API, or for direct\n", "loading into analysis software (such as Python, R, Matlab, Mathematica\n", "or SPSS). Once this has occured the data is at B4 level. Grade B\n", "involves the *validity* of the data. Does the data really represent what\n", "it purports to? There are challenges such as missing values, outliers,\n", "record duplication. Each of these needs to be investigated.\n", "\n", "Grade B and C are important as if the work done in these grades is\n", "documented well, it can be reused in other projects. Reuse of this\n", "labour is key to reducing the costs of data-driven automated decision\n", "making. There is a strong overlap between the work required in this\n", "grade and the statistical field of [*exploratory data\n", "analysis*](https://en.wikipedia.org/wiki/Exploratory_data_analysis)\n", "[@Tukey:exploratory77].\n", "\n", "### Usability: Grade A\n", "\n", "Once the validity of the data is determined, the data set can be\n", "considered for use in a particular task. This stage of data readiness is\n", "more akin to what machine learning scientists are used to doing in\n", "Universities. Bringing an algorithm to bear on a well understood data\n", "set.\n", "\n", "In Grade A we are concerned about the utility of the data given a\n", "particular task. Grade A may involve additional data collection\n", "(experimental design in statistics) to ensure that the task is\n", "fulfilled.\n", "\n", "This is the stage where the data and the model are brought together, so\n", "expertise in learning algorithms and their application is key. Further\n", "ethical considerations, such as the fairness of the resulting\n", "predictions are required at this stage. At the end of this stage a\n", "prototype model is ready for deployment.\n", "\n", "Deployment and maintenance of machine learning models in production is\n", "another important issue which Data Readiness Levels are only a part of\n", "the solution for.\n", "\n", "To find out more, or to contribute ideas go to\n", "\n", "\n", "Throughout the data preparation pipeline, it is important to have close\n", "interaction between data scientists and application domain experts.\n", "Decisions on data preparation taken outside the context of application\n", "have dangerous downstream consequences. This provides an additional\n", "burden on the data scientist as they are required for each project, but\n", "it should also be seen as a learning and familiarization exercise for\n", "the domain expert. Long term, just as biologists have found it necessary\n", "to assimilate the skills of the bioinformatician to be effective in\n", "their science, most domains will also require a familiarity with the\n", "nature of data driven decision making and its application. 
Working\n",
"closely with data-scientists on data preparation is one way to begin\n",
"this sharing of best practice.\n",
"\n",
"The processes involved in Grades C and B are often badly taught in\n",
"courses on data science. Perhaps not due to a lack of interest in the\n",
"areas, but maybe more due to a lack of access to real world examples\n",
"where data quality is poor.\n",
"\n",
"These stages of data science are also ridden with ambiguity. In the long\n",
"term they could do with more formalization, and automation, but best\n",
"practice needs to be understood by a wider community before that can\n",
"happen.\n",
"\n",
"- Challenges in deploying AI.\n",
"- Currently this is in the form of \"machine learning systems\"\n",
"\n",
"- Fog computing: barrier between cloud and device blurring.\n",
" - Computing on the Edge\n",
"- Complex feedback between algorithm and implementation\n",
"\n",
"- Major new challenge for systems designers.\n",
"- Internet of Intelligence but currently:\n",
" - AI systems are *fragile*\n",
"\n",
"## Machine Learning System Design\n",
"\n",
"The way we are deploying artificial intelligence systems in practice is\n",
"to build up systems of machine learning components. To build a machine\n",
"learning system, we decompose the task into parts, each of which we can\n",
"emulate with ML methods. These parts are typically independently\n",
"constructed and verified. For example, in a driverless car we can\n",
"decompose the tasks into components such as \"pedestrian detection\" and\n",
"\"road line detection\". Each of these components can be constructed with,\n",
"for example, an independent classifier. We can then superimpose a logic\n",
"on top. For example, \"Follow the road line unless you detect a\n",
"pedestrian in the road\" (a minimal sketch of this kind of composition is\n",
"given in code below).\n",
"\n",
"This allows for verification of car performance, as long as we can\n",
"verify the individual components. However, it also implies that the AI\n",
"systems we deploy are *fragile*.\n",
"\n",
"Our intelligent systems are composed by \"pigeonholing\" each individual\n",
"task, then substituting it with a machine learning model.\n",
"\n",
"### Rapid Reimplementation\n",
"\n",
"This is also the classical approach to automation, but in traditional\n",
"automation we also ensure the *environment* in which the system operates\n",
"becomes controlled. For example, trains run on railway lines, fast cars\n",
"run on motorways, goods are manufactured in a controlled factory\n",
"environment.\n",
"\n",
"The difference with modern automated decision making systems is that our\n",
"intention is to deploy them in the *uncontrolled* environment that makes\n",
"up our own world.\n",
"\n",
"This exposes us to either unforeseen circumstances or adversarial action.\n",
"And yet it is unclear whether our intelligent systems are capable of\n",
"adapting to this.\n",
"\n",
"We become exposed to mischief and adversaries. Adversaries intentionally\n",
"may wish to take over the artificial intelligence system, and mischief\n",
"is the constant practice of many in our society. Simply watching a 10\n",
"year old interact with a voice agent such as Alexa or Siri shows that\n",
"they are delighted when they can make the \"intelligent\" agent seem\n",
"foolish.\n",
"\n",
"\n",
"
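\n",
"As referenced above, here is a minimal sketch of composing independently built components under a hand-written logic. The detector functions are hypothetical stand-ins for independently trained classifiers:\n",
"\n",
"```python\n",
"def pedestrian_detected(image):\n",
"    # Stand-in for an independently trained pedestrian classifier.\n",
"    return False\n",
"\n",
"def road_line_direction(image):\n",
"    # Stand-in for an independently trained road-line detector;\n",
"    # returns a steering angle that follows the line.\n",
"    return 0.0\n",
"\n",
"def controller(image):\n",
"    # The logic superimposed on the learnt components:\n",
"    # \"Follow the road line unless you detect a pedestrian in the road.\"\n",
"    if pedestrian_detected(image):\n",
"        return 'brake'\n",
"    return road_line_direction(image)\n",
"\n",
"print(controller(image=None))\n",
"```\n",
"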
\n", "*Watt's Governor as held by \"Science\" on Holborn Viaduct*\n", "
\n", "\n", "
\n", "*Watt's Steam Engine which made Steam Power Efficient and Practical*\n", "
\n", "One of the first automated decision making systems was Watt's governor,\n", "as held by \"Science\" on Holborns viaduct. Watt's governor was a key\n", "component in his steam engine. It senses increases in speed in the\n", "engine and closed the steam valve to prevent the engine overspeeding and\n", "destroying itself. Until the invention of this device, it was a human\n", "job to do this.\n", "\n", "The formal study of governors and other feedback control devices was\n", "then began by [James Clerk\n", "Maxwell](https://en.wikipedia.org/wiki/James_Clerk_Maxwell), the\n", "Scottish physicist. This field became the foundation of our modern\n", "techniques of artificial intelligence through Norbert Wiener's book\n", "*Cybernetics* [@Wiener:cybernetics48]. Cybernetics is Greek for\n", "governor, a word that in itself simply means helmsman in English.\n", "\n", "The recent WannaCry virus that had a wide impact on our health services\n", "ecosystem was exploiting a security flaw in Windows systems that was\n", "first exploited by a virus called Stuxnet.\n", "\n", "Stuxnet was a virus designed to infect the Iranian nuclear program's\n", "Uranium enrichment centrifuges. A centrifuge is prevented from overspeed\n", "by a controller, just like Watt's governor. Only now it is implemented\n", "in control logic, in this case on a Siemens PLC controller.\n", "\n", "Stuxnet infected these controllers and took over the response signal in\n", "the centrifuge, fooling the system into thinking that no overspeed was\n", "occuring. As a result, the centrifuges destroyed themselves through\n", "spinning too fast.\n", "\n", "This is equivalent to detaching Watt's governor from the steam engine.\n", "Such sabotage would be easily recognized by a steam engine operator. The\n", "challenge for the operators of the Iranian Uranium centrifuges was that\n", "the sabotage was occurring inside the electronics.\n", "\n", "That is the effect of an adversary on an intelligent system, but even\n", "without adveraries, the mischief of a 10 year old can confuse our AIs." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.lib.display import YouTubeVideo\n", "YouTubeVideo('1y2UKz47gew')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Asking Siri \"What is a trillion to the power of a thousand minus one?\"\n", "leads to a 30 minute response consisting of only 9s. I found this out\n", "because my nine year old grabbed my phone and did it. The only way to\n", "stop Siri was to force closure. This is an interesting example of a\n", "system feature that's *not* a bug, in fact it requires clever processing\n", "from Wolfram Alpha. But it's an unexpected result from the system\n", "performing correctly.\n", "\n", "This challenge of facing a circumstance that was unenvisaged in design\n", "but has consequences in deployment becomes far larger when the\n", "environment is uncontrolled. Or in the extreme case, where actions of\n", "the intelligent system effect the wider environment and change it.\n", "\n", "These unforseen circumstances are likely to lead to need for much more\n", "efficient turn-around and update for our intelligent systems. Whether we\n", "are correcting for security flaws (which *are* bugs) or unenvisaged\n", "circumstantial challenges: an issue I'm referring to as *peppercorns*.\n", "Rapid deployment of system updates is required. 
For example, Apple have\n", "\"fixed\" the problem of Siri returning long numbers.\n", "\n", "The challenge is particularly acute because of the *scale* at which we\n", "can deploy AI solutions. This means when something does go wrong, it may\n", "be going wrong in billions of households simultaneously.\n", "\n", "See also [this blog on the differences between natural and artificial\n", "intelligence](http://inverseprobability.com/2018/02/06/natural-and-artificial-intelligence)\n", "and this paper [on the need for diversity in decision\n", "making](http://inverseprobability.com/2017/11/15/decision-making).\n", "\n", "### Uncertainty Quantification\n", "\n", "- Deep nets are powerful approach to images, speech, language.\n", "\n", "- Proposal: Deep GPs may also be a great approach, but better to\n", " deploy according to natural strengths.\n", "\n", "### Uncertainty Quantification\n", "\n", "- Probabilistic numerics, surrogate modelling, emulation, and UQ.\n", "\n", "- Not a fan of AI as a term.\n", "\n", "- But we are faced with increasing amounts of *algorithmic decision\n", " making*.\n", "\n", "### ML and Decision Making\n", "\n", "- When trading off decisions: compute or acquire data?\n", "\n", "- There is a critical need for uncertainty.\n", "\n", "### Uncertainty Quantification\n", "\n", "> Uncertainty quantification (UQ) is the science of quantitative\n", "> characterization and reduction of uncertainties in both computational\n", "> and real world applications. It tries to determine how likely certain\n", "> outcomes are if some aspects of the system are not exactly known.\n", "\n", "- Interaction between physical and virtual worlds of major interest\n", " for Amazon.\n", "\n", "We will to illustrate different concepts of [Uncertainty\n", "Quantification](https://en.wikipedia.org/wiki/Uncertainty_quantification)\n", "(UQ) and the role that Gaussian processes play in this field. Based on a\n", "simple simulator of a car moving between a valley and a mountain, we are\n", "going to illustrate the following concepts:\n", "\n", "- **Systems emulation**. Many real world decisions are based on\n", " simulations that can be computationally very demanding. We will show\n", " how simulators can be replaced by *emulators*: Gaussian process\n", " models fitted on a few simulations that can be used to replace the\n", " *simulator*. Emulators are cheap to compute, fast to run, and always\n", " provide ways to quantify the uncertainty of how precise they are\n", " compared the original simulator.\n", "\n", "- **Emulators in optimization problems**. We will show how emulators\n", " can be used to optimize black-box functions that are expensive to\n", " evaluate. This field is also called Bayesian Optimization and has\n", " gained an increasing relevance in machine learning as emulators can\n", " be used to optimize computer simulations (and machine learning\n", " algorithms) quite efficiently.\n", "\n", "- **Multi-fidelity emulation methods**. 
In many scenarios we have\n", " simulators of different quality about the same measure of interest.\n", " In these cases the goal is to merge all sources of information under\n", " the same model so the final emulator is cheaper and more accurate\n", " than an emulator fitted only using data from the most accurate and\n", " expensive simulator.\n", "\n", "### Example: Formula One Racing\n", "\n", "- Designing an F1 Car requires CFD, Wind Tunnel, Track Testing etc.\n", "\n", "- How to combine them?\n", "\n", "### Mountain Car Simulator\n", "\n", "To illustrate the above mentioned concepts we we use the [mountain car\n", "simulator](https://github.com/openai/gym/wiki/MountainCarContinuous-v0).\n", "This simulator is widely used in machine learning to test reinforcement\n", "learning algorithms. The goal is to define a control policy on a car\n", "whose objective is to climb a mountain. Graphically, the problem looks\n", "as follows:\n", "\n", "\n", "\n", "The goal is to define a sequence of actions (push the car right or left\n", "with certain intensity) to make the car reach the flag after a number\n", "$T$ of time steps.\n", "\n", "At each time step $t$, the car is characterized by a vector\n", "$\\inputVector_{t} = (p_t,v_t)$ of states which are respectively the the\n", "position and velocity of the car at time $t$. For a sequence of states\n", "(an episode), the dynamics of the car is given by\n", "\n", "$$\\inputVector_{t+1} = \\mappingFunction(\\inputVector_{t},\\textbf{u}_{t})$$\n", "\n", "where $\\textbf{u}_{t}$ is the value of an action force, which in this\n", "example corresponds to push car to the left (negative value) or to the\n", "right (positive value). The actions across a full episode are\n", "represented in a policy $\\textbf{u}_{t} = \\pi(\\inputVector_{t},\\theta)$\n", "that acts according to the current state of the car and some parameters\n", "$\\theta$. In the following examples we will assume that the policy is\n", "linear which allows us to write $\\pi(\\inputVector_{t},\\theta)$ as\n", "\n", "$$\\pi(\\inputVector,\\theta)= \\theta_0 + \\theta_p p + \\theta_vv.$$\n", "\n", "For $t=1,\\dots,T$ now given some initial state $\\inputVector_{0}$ and\n", "some some values of each $\\textbf{u}_{t}$, we can **simulate** the full\n", "dynamics of the car for a full episode using\n", "[Gym](https://gym.openai.com/envs/). The values of $\\textbf{u}_{t}$ are\n", "fully determined by the parameters of the linear controller.\n", "\n", "After each episode of length $T$ is complete, a reward function\n", "$R_{T}(\\theta)$ is computed. In the mountain car example the reward is\n", "computed as 100 for reaching the target of the hill on the right hand\n", "side, minus the squared sum of actions (a real negative to push to the\n", "left and a real positive to push to the right) from start to goal. 
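\n", "\n", "To make these pieces concrete, the sketch below shows how the linear\n", "policy drives an episode and how the reward accumulates. It is\n", "illustrative only: the episodes we actually use are run by the Gym\n", "simulator through `mountain_car.run_simulation`, and `dynamics_step`\n", "here is a hypothetical stand-in for the simulator's transition function\n", "$\\mappingFunction$.\n", "\n", "```python\n", "import numpy as np\n", "\n", "def linear_policy(state, theta):\n", "    position, velocity = state\n", "    theta_0, theta_p, theta_v = theta\n", "    return theta_0 + theta_p * position + theta_v * velocity\n", "\n", "def episode_reward(theta, initial_state, dynamics_step, T=500):\n", "    state, actions = initial_state, []\n", "    for t in range(T):\n", "        u = np.clip(linear_policy(state, theta), -1, 1)  # the force is bounded to [-1, 1]\n", "        actions.append(u)\n", "        state = dynamics_step(state, u)  # x_{t+1} = f(x_t, u_t)\n", "    reached_goal = state[0] >= 0.45  # approximate flag position\n", "    return 100.0 * reached_goal - np.sum(np.square(actions))\n", "```\n", "\n", "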
Note\n", "that our reward depend on $\\theta$ as we make it dependent on the\n", "parameters of the linear controller.\n", "\n", "### Emulate the Mountain Car" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import gym" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "env = gym.make('MountainCarContinuous-v0')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our goal in this section is to find the parameters $\\theta$ of the\n", "linear controller such that\n", "\n", "$$\\theta^* = arg \\max_{\\theta} R_T(\\theta).$$\n", "\n", "In this section, we directly use Bayesian optimization to solve this\n", "problem. We will use [GPyOpt](https://sheffieldml.github.io/GPyOpt/) so\n", "we first define the objective function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import mountain_car as mc\n", "import GPyOpt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "obj_func = lambda x: mc.run_simulation(env, x)[0]\n", "objective = GPyOpt.core.task.SingleObjective(obj_func)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For each set of parameter values of the linear controller we can run an\n", "episode of the simulator (that we fix to have a horizon of $T=500$) to\n", "generate the reward. Using as input the parameters of the controller and\n", "as outputs the rewards we can build a Gaussian process emulator of the\n", "reward.\n", "\n", "We start defining the input space, which is three-dimensional:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## --- We define the input space of the emulator\n", "\n", "space= [{'name':'postion_parameter', 'type':'continuous', 'domain':(-1.2, +1)},\n", " {'name':'velocity_parameter', 'type':'continuous', 'domain':(-1/0.07, +1/0.07)},\n", " {'name':'constant', 'type':'continuous', 'domain':(-1, +1)}]\n", "\n", "design_space = GPyOpt.Design_space(space=space)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we initizialize a Gaussian process emulator." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = GPyOpt.models.GPModel(optimize_restarts=5, verbose=False, exact_feval=True, ARD=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In Bayesian optimization an acquisition function is used to balance\n", "exploration and exploitation to evaluate new locations close to the\n", "optimum of the objective. In this notebook we select the expected\n", "improvement (EI). For further details have a look to the review paper of\n", "[Shahriari et al\n", "(2015)](http://www.cs.ox.ac.uk/people/nando.defreitas/publications/BayesOptLoop.pdf)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "aquisition_optimizer = GPyOpt.optimization.AcquisitionOptimizer(design_space)\n", "acquisition = GPyOpt.acquisitions.AcquisitionEI(model, design_space, optimizer=aquisition_optimizer)\n", "evaluator = GPyOpt.core.evaluators.Sequential(acquisition) # Collect points sequentially, no parallelization." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To initalize the model we start sampling some initial points (25) for\n", "the linear controler randomly." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from GPyOpt.experiment_design.random_design import RandomDesign" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n_initial_points = 25\n", "random_design = RandomDesign(design_space)\n", "initial_design = random_design.get_samples(n_initial_points)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we start any optimization, lets have a look to the behavior of\n", "the car with the first of these initial points that we have selected\n", "randomly." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "random_controller = initial_design[0,:]\n", "_, _, _, frames = mc.run_simulation(env, np.atleast_2d(random_controller), render=True)\n", "anim=mc.animate_frames(frames, 'Random linear controller')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.core.display import HTML" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "HTML(anim.to_jshtml())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "As we can see the random linear controller does not manage to push the\n", "car to the top of the mountain. Now, let's optimize the regret using\n", "Bayesian optimization and the emulator for the reward. We try 50 new\n", "parameters chosen by the EI." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "max_iter = 50\n", "bo = GPyOpt.methods.ModularBayesianOptimization(model, design_space, objective, acquisition, evaluator, initial_design)\n", "bo.run_optimization(max_iter = max_iter )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we visualize the result for the best controller that we have found\n", "with Bayesian optimization." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_, _, _, frames = mc.run_simulation(env, np.atleast_2d(bo.x_opt), render=True)\n", "anim=mc.animate_frames(frames, 'Best controller after 50 iterations of Bayesian optimization')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "HTML(anim.to_jshtml())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "he car can now make it to the top of the mountain! Emulating the reward\n", "function and using the EI helped as to find a linear controller that\n", "solves the problem.\n", "\n", "### Data Efficient Emulation\n", "\n", "In the previous section we solved the mountain car problem by directly\n", "emulating the reward but no considerations about the dynamics\n", "$\\inputVector_{t+1} = \\mappingFunction(\\inputVector_{t},\\textbf{u}_{t})$\n", "of the system were made. Note that we had to run 75 episodes of 500\n", "steps each to solve the problem, which required to call the simulator\n", "$500\\times 75 =37500$ times. In this section we will show how it is\n", "possible to reduce this number by building an emulator for $f$ that can\n", "later be used to directly optimize the control.\n", "\n", "The inputs of the model for the dynamics are the velocity, the position\n", "and the value of the control so create this space accordingly." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import gym" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "env = gym.make('MountainCarContinuous-v0')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import GPyOpt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "space_dynamics = [{'name':'position', 'type':'continuous', 'domain':[-1.2, +0.6]},\n", " {'name':'velocity', 'type':'continuous', 'domain':[-0.07, +0.07]},\n", " {'name':'action', 'type':'continuous', 'domain':[-1, +1]}]\n", "design_space_dynamics = GPyOpt.Design_space(space=space_dynamics)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The outputs are the velocity and the position. Indeed our model will\n", "capture the change in position and velocity on time. That is, we will\n", "model\n", "\n", "$$\\Delta v_{t+1} = v_{t+1} - v_{t}$$\n", "\n", "$$\\Delta x_{t+1} = p_{t+1} - p_{t}$$\n", "\n", "with Gaussian processes with prior mean $v_{t}$ and $p_{t}$\n", "respectively. As a covariance function, we use a Matern52. We need\n", "therefore two models to capture the full dynamics of the system." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "position_model = GPyOpt.models.GPModel(optimize_restarts=5, verbose=False, exact_feval=True, ARD=True)\n", "velocity_model = GPyOpt.models.GPModel(optimize_restarts=5, verbose=False, exact_feval=True, ARD=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we sample some input parameters and use the simulator to compute\n", "the outputs. Note that in this case we are not running the full\n", "episodes, we are just using the simulator to compute\n", "$\\inputVector_{t+1}$ given $\\inputVector_{t}$ and $\\textbf{u}_{t}$." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from GPyOpt.experiment_design.random_design import RandomDesign\n", "import mountain_car as mc" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### --- Random locations of the inputs\n", "n_initial_points = 500\n", "random_design_dynamics = RandomDesign(design_space_dynamics)\n", "initial_design_dynamics = random_design_dynamics.get_samples(n_initial_points)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### --- Simulation of the (normalized) outputs\n", "y = np.zeros((initial_design_dynamics.shape[0], 2))\n", "for i in range(initial_design_dynamics.shape[0]):\n", " y[i, :] = mc.simulation(initial_design_dynamics[i, :])\n", "\n", "# Normalize the data from the simulation\n", "y_normalisation = np.std(y, axis=0)\n", "y_normalised = y/y_normalisation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general we might use much smarter strategies to design our emulation\n", "of the simulator. For example, we could use the variance of the\n", "predictive distributions of the models to collect points using\n", "uncertainty sampling, which will give us a better coverage of the space.\n", "For simplicity, we move ahead with the 500 randomly selected points.\n", "\n", "Now that we have a data set, we can update the emulators for the\n", "location and the velocity." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "position_model.updateModel(initial_design_dynamics, y[:, [0]], None, None)\n", "velocity_model.updateModel(initial_design_dynamics, y[:, [1]], None, None)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now have a look to how the emulator and the simulator match.\n", "First, we show a contour plot of the car aceleration for each pair of\n", "can position and velocity. You can use the bar bellow to play with the\n", "values of the controler to compare the emulator and the simulator." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.html.widgets import interact" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "We can see how the emulator is doing a fairly good job approximating the\n", "simulator. On the edges, however, it struggles to captures the dynamics\n", "of the simulator.\n", "\n", "Given some input parameters of the linear controlling, how do the\n", "dynamics of the emulator and simulator match? In the following figure we\n", "show the position and velocity of the car for the 500 time steps of an\n", "episode in which the parameters of the linear controller have been fixed\n", "beforehand. The value of the input control is also shown." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "controller_gains = np.atleast_2d([0, .6, 1]) # change the valus of the linear controller to observe the trayectories." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "We now make explicit use of the emulator, using it to replace the\n", "simulator and optimize the linear controller. Note that in this\n", "optimization, we don't need to query the simulator anymore as we can\n", "reproduce the full dynamics of an episode using the emulator. For\n", "illustrative purposes, in this example we fix the initial location of\n", "the car.\n", "\n", "We define the objective reward function in terms of the simulator." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### --- Optimize control parameters with emulator\n", "car_initial_location = np.asarray([-0.58912799, 0]) \n", "\n", "### --- Reward objective function using the emulator\n", "obj_func_emulator = lambda x: mc.run_emulation([position_model, velocity_model], x, car_initial_location)[0]\n", "objective_emulator = GPyOpt.core.task.SingleObjective(obj_func_emulator)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And as before, we use Bayesian optimization to find the best possible\n", "linear controller." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### --- Elements of the optimization that will use the multi-fidelity emulator\n", "model = GPyOpt.models.GPModel(optimize_restarts=5, verbose=False, exact_feval=True, ARD=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The design space is the three continuous variables that make up the\n", "linear controller." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "space= [{'name':'linear_1', 'type':'continuous', 'domain':(-1/1.2, +1)},\n", " {'name':'linear_2', 'type':'continuous', 'domain':(-1/0.07, +1/0.07)},\n", " {'name':'constant', 'type':'continuous', 'domain':(-1, +1)}]\n", "\n", "design_space = GPyOpt.Design_space(space=space)\n", "aquisition_optimizer = GPyOpt.optimization.AcquisitionOptimizer(design_space)\n", "\n", "random_design = RandomDesign(design_space)\n", "initial_design = random_design.get_samples(25)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We set the acquisition function to be expected improvement using\n", "`GPyOpt`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "acquisition = GPyOpt.acquisitions.AcquisitionEI(model, design_space, optimizer=aquisition_optimizer)\n", "evaluator = GPyOpt.core.evaluators.Sequential(acquisition)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bo_emulator = GPyOpt.methods.ModularBayesianOptimization(model, design_space, objective_emulator, acquisition, evaluator, initial_design)\n", "bo_emulator.run_optimization(max_iter=50)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_, _, _, frames = mc.run_simulation(env, np.atleast_2d(bo_emulator.x_opt), render=True)\n", "anim=mc.animate_frames(frames, 'Best controller using the emulator of the dynamics')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.core.display import HTML" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "HTML(anim.to_jshtml())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "And the problem is again solved, but in this case we have replaced the\n", "simulator of the car dynamics by a Gaussian process emulator that we\n", "learned by calling the simulator only 500 times. Compared to the 37500\n", "calls that we needed when applying Bayesian optimization directly on the\n", "simulator this is a great gain.\n", "\n", "In some scenarios we have simulators of the same environment that have\n", "different fidelities, that is that reflect with different level of\n", "accuracy the dynamics of the real world. Running simulations of the\n", "different fidelities also have a different cost: hight fidelity\n", "simulations are more expensive the cheaper ones. If we have access to\n", "these simulators we can combine high and low fidelity simulations under\n", "the same model.\n", "\n", "So let's assume that we have two simulators of the mountain car\n", "dynamics, one of high fidelity (the one we have used) and another one of\n", "low fidelity. The traditional approach to this form of multi-fidelity\n", "emulation is to assume that\n", "\n", "$$\\mappingFunction_i\\left(\\inputVector\\right) = \\rho\\mappingFunction_{i-1}\\left(\\inputVector\\right) + \\delta_i\\left(\\inputVector \\right)$$\n", "\n", "where $\\mappingFunction_{i-1}\\left(\\inputVector\\right)$ is a low\n", "fidelity simulation of the problem of interest and\n", "$\\mappingFunction_i\\left(\\inputVector\\right)$ is a higher fidelity\n", "simulation. The function $\\delta_i\\left(\\inputVector \\right)$ represents\n", "the difference between the lower and higher fidelity simulation, which\n", "is considered additive. 
The additive form of this model means that\n", "if $\\mappingFunction_{0}\\left(\\inputVector\\right)$ and\n", "$\\left\\{\\delta_i\\left(\\inputVector \\right)\\right\\}_{i=1}^m$ are all\n", "Gaussian processes, then the process over all fidelities of simulation\n", "will be a joint Gaussian process.\n", "\n", "But with Deep Gaussian processes we can consider the form\n", "\n", "$$\\mappingFunction_i\\left(\\inputVector\\right) = \\mappingFunctionTwo_{i}\\left(\\mappingFunction_{i-1}\\left(\\inputVector\\right)\\right) + \\delta_i\\left(\\inputVector \\right),$$\n", "\n", "where the low fidelity representation is non-linearly transformed by\n", "$\\mappingFunctionTwo(\\cdot)$ before use in the process. This is the\n", "approach taken in @Perdikaris:multifidelity17. But once we accept that\n", "these models can be composed, a highly flexible framework can emerge. A\n", "key point is that the data enters the model at different levels, and\n", "represents different aspects. For example, these correspond to the two\n", "fidelities of the mountain car simulator.\n", "\n", "We start by sampling both of them at 250 random input locations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import gym" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "env = gym.make('MountainCarContinuous-v0')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import GPyOpt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### --- Collect points from low and high fidelity simulator --- ###\n", "\n", "space = GPyOpt.Design_space([\n", " {'name':'position', 'type':'continuous', 'domain':(-1.2, +1)},\n", " {'name':'velocity', 'type':'continuous', 'domain':(-0.07, +0.07)},\n", " {'name':'action', 'type':'continuous', 'domain':(-1, +1)}])\n", "\n", "n_points = 250\n", "random_design = GPyOpt.experiment_design.RandomDesign(space)\n", "x_random = random_design.get_samples(n_points)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we evaluate the high and low fidelity simulators at those\n", "locations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import mountain_car as mc" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "d_position_hf = np.zeros((n_points, 1))\n", "d_velocity_hf = np.zeros((n_points, 1))\n", "d_position_lf = np.zeros((n_points, 1))\n", "d_velocity_lf = np.zeros((n_points, 1))\n", "\n", "# --- Collect high fidelity points\n", "for i in range(0, n_points):\n", " d_position_hf[i], d_velocity_hf[i] = mc.simulation(x_random[i, :])\n", "\n", "# --- Collect low fidelity points \n", "for i in range(0, n_points):\n", " d_position_lf[i], d_velocity_lf[i] = mc.low_cost_simulation(x_random[i, :])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is time to build the multi-fidelity model for both the position and\n", "the velocity; a sketch of one possible construction is given below.\n", "\n", "As we did in the previous section, we use the emulator to optimize the\n", "simulator. In this case we use the high fidelity output of the emulator."
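] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The multi-fidelity model itself can be constructed in several ways. The\n", "sketch below is one simple stand-in for the non-linear formulation\n", "above, using plain `GPy` regression models (an assumption about the\n", "tooling, not the only choice): one Gaussian process is fitted to the low\n", "fidelity deltas, and a second is fitted to the inputs augmented with the\n", "low fidelity posterior mean to explain the high fidelity deltas. For\n", "simplicity it conditions on the low fidelity mean rather than\n", "propagating its uncertainty, and it is shown for the position output\n", "only; the velocity output is handled in the same way." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import GPy\n", "\n", "# Low fidelity emulator of the change in position.\n", "lf_position_model = GPy.models.GPRegression(\n", "    x_random, d_position_lf, GPy.kern.Matern52(input_dim=3, ARD=True))\n", "lf_position_model.optimize()\n", "\n", "# High fidelity emulator: the inputs are augmented with the low fidelity\n", "# prediction, so this model learns the correction between the fidelities.\n", "lf_mean, _ = lf_position_model.predict(x_random)\n", "x_augmented = np.hstack([x_random, lf_mean])\n", "hf_position_model = GPy.models.GPRegression(\n", "    x_augmented, d_position_hf, GPy.kern.Matern52(input_dim=4, ARD=True))\n", "hf_position_model.optimize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With a pair of such models for the position and another for the\n", "velocity, the controller can be optimized exactly as before."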
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### --- Optimize controller parameters \n", "obj_func = lambda x: mc.run_simulation(env, x)[0]\n", "obj_func_emulator = lambda x: mc.run_emulation([position_model, velocity_model], x, car_initial_location)[0]\n", "objective_multifidelity = GPyOpt.core.task.SingleObjective(obj_func)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we optimize using Bayesian optimzation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from GPyOpt.experiment_design.random_design import RandomDesign" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = GPyOpt.models.GPModel(optimize_restarts=5, verbose=False, exact_feval=True, ARD=True)\n", "space= [{'name':'linear_1', 'type':'continuous', 'domain':(-1/1.2, +1)},\n", " {'name':'linear_2', 'type':'continuous', 'domain':(-1/0.07, +1/0.07)},\n", " {'name':'constant', 'type':'continuous', 'domain':(-1, +1)}]\n", "\n", "design_space = GPyOpt.Design_space(space=space)\n", "aquisition_optimizer = GPyOpt.optimization.AcquisitionOptimizer(design_space)\n", "\n", "n_initial_points = 25\n", "random_design = RandomDesign(design_space)\n", "initial_design = random_design.get_samples(n_initial_points)\n", "acquisition = GPyOpt.acquisitions.AcquisitionEI(model, design_space, optimizer=aquisition_optimizer)\n", "evaluator = GPyOpt.core.evaluators.Sequential(acquisition)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bo_multifidelity = GPyOpt.methods.ModularBayesianOptimization(model, design_space, objective_multifidelity, acquisition, evaluator, initial_design)\n", "bo_multifidelity.run_optimization(max_iter=50)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_, _, _, frames = mc.run_simulation(env, np.atleast_2d(bo_multifidelity.x_opt), render=True)\n", "anim=mc.animate_frames(frames, 'Best controller with multi-fidelity emulator')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.core.display import HTML" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "HTML(anim.to_jshtml())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "And problem solved! We see how the problem is also solved with 250\n", "observations of the high fidelity simulator and 250 of the low fidelity\n", "simulator.\n", "\n", "- Artificial Intelligence and Data Science are fundamentally\n", " different.\n", "- In one you are dealing with data collected by happenstance.\n", "- In the other you are trying to build systems in the real world,\n", " often by actively collecting data.\n", "- Our approaches to systems design are building powerful machines that\n", " will be deployed in evolving environments.\n", "\n", "[^1]: the challenge of understanding what information pertains to is\n", " known as knowledge representation." ] } ], "metadata": {}, "nbformat": 4, "nbformat_minor": 2 }