## Machine Learning Defined

1. Field of study that gives computers the ability to learn without being explicitly programmed. (Arthur Samuel, 1959)
2. A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with E. (Tom Mitchell, 1998)

**Supervised Learning**: training data includes the "correct" answer.

**Unsupervised Learning**: problems where the algorithm is given a data set without any "right" answers. The objective is to find some underlying structure in the data, e.g. clustering. (A short sketch contrasting these two settings follows the definitions below.)

**Reinforcement Learning**: problems where a sequence of decisions is made, as opposed to a single decision (or prediction).

**Learning Theory**: the study of how and why (mathematically) a learning algorithm works.
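To make the first two categories concrete, here is a minimal sketch, assuming NumPy and scikit-learn are available; the data and model choices are purely illustrative.

```python
# A minimal sketch contrasting supervised and unsupervised learning.
# Assumes NumPy and scikit-learn; data and models are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)

# Supervised: training data includes the "correct" answer y for each x.
X = rng.uniform(0, 10, size=(50, 1))
y = 3.0 * X.ravel() + rng.normal(0, 1, size=50)   # noisy line with slope ~3
model = LinearRegression().fit(X, y)
print("learned slope:", model.coef_[0])           # should be close to 3

# Unsupervised: no answers given; look for structure (here, two clusters).
Z = np.vstack([rng.normal(0, 1, size=(30, 2)),
               rng.normal(5, 1, size=(30, 2))])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print("cluster sizes:", np.bincount(labels))      # roughly 30 and 30
```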

## Introduction

The material presented will include both the *Bayesian* and *frequentist* approaches. The labels themselves, and even their *"proper"* use, are, in practical terms, of little consequence. The important considerations are, as in any modeling, what assumptions are made to create a given mathematical model and how those assumptions affect model quality. However, in an effort to be consistent with the literature, I will attempt to use these terms where appropriate. With respect to Bayesian vs. frequentist modeling, the most fundamental difference is how the data are viewed. In simplistic terms, the Bayesian approach asserts that there is some prior knowledge about the distribution of the data and the model parameters. This prior knowledge alone could be used to make predictions about future observations, however poor they may be. Once data is observed, it is used to *modulate* the prior assumptions. The frequentist approach tends to ignore any such prior knowledge, relying more heavily, or even completely, on the observed data. Both approaches, of course, have advantages and disadvantages that I will try to point out along the way.

## A Little Probability

Assume we have a random variable (RV) $X$. In the case where $X$ is a discrete random variable, the probability that $X$ takes some particular value $x$ is $p(X=x)=p_X(x)$, where $p_X$ is the *probability distribution function* of the RV $X$. For simplicity, we will usually drop the $X$ subscript when using the probability distribution function, i.e. $p(x)=p_X(x)$ and $p(x,y)=p_{X,Y}(x,y)$. Now assume we have a second RV, $Y$. The probability that $X$ and $Y$ together take some particular value $(x,y)$ is the *joint probability* $p_{X,Y}(x,y)=p(X=x, Y=y)$. The joint probability can be defined in terms of conditional and marginal probabilities as follows:

**product rule** $p(X,Y) = p_Y\left(Y|X\right)p_X\left(X\right) = p(Y,X) = p_X\left(X|Y\right)p_Y\left(Y\right)$

**sum rule** $p(X)=\sum_Y p\left(X,Y\right)$

where the *conditional probability*, $p_X(X|Y)$, is the probability distribution of $X$ given that $Y$ has taken some specific value, and $p_X(x)$ is the *marginal probability*. Note that the marginal probability is simply the probability distribution of $X$ as described above; when considered as part of a larger set of RVs, as in $(X,Y)$ here, the probability distribution of an isolated RV is referred to as the marginal probability. If $X$ and $Y$ are independent, then $p(X,Y)=p(X)p(Y)$.
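As a concrete check, here is a minimal sketch, assuming NumPy, that builds a small joint distribution table and verifies the product and sum rules numerically; the table values are made up purely for illustration.

```python
# Verify the product and sum rules on a small discrete joint distribution.
# The joint table p(X, Y) below is made up purely for illustration.
import numpy as np

# Rows index values of X, columns index values of Y; entries sum to 1.
p_xy = np.array([[0.10, 0.20],
                 [0.30, 0.40]])

# Sum rule: marginals are obtained by summing out the other variable.
p_x = p_xy.sum(axis=1)            # p(X) = sum_Y p(X, Y)
p_y = p_xy.sum(axis=0)            # p(Y) = sum_X p(X, Y)

# Product rule: p(X, Y) = p(Y|X) p(X), with p(Y|X) = p(X, Y) / p(X).
p_y_given_x = p_xy / p_x[:, None]
assert np.allclose(p_y_given_x * p_x[:, None], p_xy)

# X and Y are independent only if p(X, Y) = p(X) p(Y); not true here.
print(np.allclose(p_xy, np.outer(p_x, p_y)))   # False for this table
```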

If $X$ is a continuous RV, then the probability distribution gives way to a *probability density function*, $p_X(x)$. What's the difference? In the case of a discrete RV, we think of $p_X(x)$ as being a finite value in $[0,1]$ representing the probability that $X=x$. In the case of a continuous RV, we can only think of the probability that $X$ is in some range $(a,b)$, defined by

$P(X\in (a,b)) = \int_a^b p_X(x)\,dx$

From this definition, one sees that $P(X=a)=\int_a^a p_X(x)\,dx = 0$. The *product* rule remains the same for continuous RVs; however, the *sum* rule becomes:

**sum rule** $p(X) = \int p_{X,Y}(x,y)\,dy = \int p_X(X|Y=y)\, p_Y(y)\, dy = E_Y[p(X|Y)]$

where $E_{\Phi}[X]$ is the **expected value** of $X$ under the distribution $\Phi$. Normally, $\Phi$ is the marginal distribution of $X$, so that $E_X[X] = \int x\, p_X(x)\, dx$ for the RV $X$.

The following definition of a **compound distribution** will also be useful. Let $t$ be a RV with distribution $F$ parameterized by $\mathbf{w}$, and let $\mathbf{w}$ be a RV distributed by $G$ parameterized by $\mathbf{t}$. Then the compound distribution $H$, parameterized by $\mathbf{t}$, for the random variable $t$ is defined by:

$p_H(t|\mathbf{t}) = \int_{\mathbf{w}} p_F(t|\mathbf{w})\, p_G(\mathbf{w}|\mathbf{t})\, d\mathbf{w}$

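Both the integral definition of probability for a continuous RV and the compound distribution can be checked numerically. The following is a minimal sketch, assuming NumPy and SciPy; the Gaussian choice and all parameter values are illustrative. The fact used for the final check, that a Gaussian compounded with a Gaussian prior on its mean is again Gaussian with the variances added, is standard.

```python
# Numerically illustrate the two definitions above, using Gaussians.
# Assumes NumPy and SciPy; all parameter values are illustrative.
import numpy as np
from scipy import stats
from scipy.integrate import quad

# 1) For a continuous RV, probability is an integral of the density:
#    P(X in (a, b)) = integral from a to b of p_X(x) dx.
a, b = -1.0, 2.0
area, _ = quad(stats.norm(0, 1).pdf, a, b)
print(area, stats.norm(0, 1).cdf(b) - stats.norm(0, 1).cdf(a))  # equal

# 2) Compound distribution: if t | w ~ N(w, sigma^2) and w ~ N(mu0, tau^2),
#    the compound (marginal) distribution of t is N(mu0, sigma^2 + tau^2).
mu0, tau, sigma = 1.0, 0.5, 2.0
rng = np.random.RandomState(0)
w = rng.normal(mu0, tau, size=200_000)     # draw w ~ G
t = rng.normal(w, sigma)                   # then t | w ~ F
print(t.mean(), t.var())                   # ~ mu0 and sigma^2 + tau^2 = 4.25
```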

## Bayesian Modeling

Using the probability rules above, it is possible to obtain the following, known as *Bayes' theorem*:

$p(Y|X) = \frac{p\left(X|Y\right)p\left(Y\right)}{p\left(X\right)}$
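As a quick numeric illustration, here is a minimal sketch in plain Python; all of the numbers are made up for the example.

```python
# Bayes' theorem with made-up numbers: a test for a rare condition.
# Y = "has condition", X = "test is positive".
p_y = 0.01                      # prior p(Y): 1% prevalence
p_x_given_y = 0.95              # likelihood p(X|Y): test sensitivity
p_x_given_not_y = 0.05          # false positive rate

# Sum rule for the evidence: p(X) = p(X|Y)p(Y) + p(X|~Y)p(~Y).
p_x = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)

# Bayes' theorem: p(Y|X) = p(X|Y) p(Y) / p(X).
p_y_given_x = p_x_given_y * p_y / p_x
print(p_y_given_x)              # ~0.16: still unlikely despite a positive test
```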

Bayes' theorem will appear repeatedly in the discussion of machine learning. Not surprisingly, it plays a fundamental role in Bayesian modeling. Assume that we model some process where the model has free parameters contained in the vector $\mathbf{w}$. Now assume that we have some notion of the probability distribution of these parameters, $p(\mathbf{w})$, called the *prior*. That is, we assume that **any** set of values from some space (e.g. the real numbers) is a possible best choice for $\mathbf{w}$ with some probability $p(\mathbf{w})$. Finally, assume that we observe a set of data, $D$, for the output we are attempting to predict with our model. Our objective is to find the *best* set of parameters $\mathbf{w}$ given the observed data. How we choose the *best* set is the challenge, and is where Bayesian modeling departs from frequentist modeling. In the Bayesian approach we attempt to maximize *the probability of the parameters given the data*, $p(\mathbf{w}|D)$, known as the *posterior*. Using Bayes' theorem we can express the *posterior* as

$p(\mathbf{w}|D) = \frac{p(D|\mathbf{w})p(\mathbf{w})}{p(D)}$

In order to apply a fully Bayesian approach, we must formulate models for both the *prior*, $p(\mathbf{w})$, and the *likelihood function*, $p(D|\mathbf{w})$. Given these models and a set of data, we can compute appropriate values for our free parameter vector $\mathbf{w}$ by maximizing $p(\mathbf{w}|D) \propto p(D|\mathbf{w})p(\mathbf{w})$. How does this differ from frequentist modeling? The frequentist approach, or *maximum likelihood* approach, ignores the formulation of a *prior* and goes directly to maximizing the likelihood function to find the model parameters. Thus, the frequentist approach can be described as *maximizing the probability of the data given the parameters*. Under certain conditions the results of Bayesian and frequentist modeling will coincide, but this is not true in general.

One could obtain a point estimate for $\mathbf{w}$ by maximizing the *posterior probability* model, but this is not typical. Instead, a *predictive distribution* of the value of the target variable, $t$, is formed based on the compound distribution definition provided above. Taking the mean of this distribution provides a point estimate of $t$, while the distribution itself provides a measure of the uncertainty in the estimate, say by considering the standard deviation.

**TODO:** Add a simple example illustrating the difference. For now, a good illustration is available here.
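In the meantime, here is a minimal sketch, assuming NumPy, that approximates a posterior on a parameter grid; the Bernoulli (coin flip) model, the prior, and the data are made up for illustration.

```python
# Grid approximation of a posterior for a Bernoulli (coin flip) model.
# Assumes NumPy; the prior, the data, and the grid are illustrative only.
import numpy as np

heads, n = 9, 12                          # observed data D
w = np.linspace(0.001, 0.999, 999)        # grid over the parameter w = p(heads)

likelihood = w**heads * (1 - w)**(n - heads)   # p(D|w)
prior = w * (1 - w)                            # p(w): mild preference for fair
posterior = likelihood * prior                 # p(w|D), up to the constant p(D)
posterior /= np.trapz(posterior, w)            # normalize on the grid

w_mle = heads / n                              # frequentist point estimate
w_map = w[np.argmax(posterior)]                # posterior maximum
w_mean = np.trapz(w * posterior, w)            # posterior (predictive) mean
print(w_mle, w_map, w_mean)                    # 0.75, ~0.714, ~0.688
```

Here the posterior mean is pulled toward the prior's preference for a fair coin, while the maximum likelihood estimate reflects the data alone.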

## Maximum Likelihood

A maximum likelihood approach does not attempt to formulate models for the parameter *priors*, $p(\mathbf{w})$, or the parameter *posterior probabilities*, $p(\mathbf{w}|D)$. Rather, it views the data, $D$, as fixed and attempts to determine the model parameters, $\mathbf{w}$, by maximizing the likelihood function. As we will frequently use this approach, it is useful to understand the likelihood function.

We assume we have specified a probability density model, $p_{\mathbf{w}}(d)$, for the observed data elements, $d \in D$, that is parameterized by $\mathbf{w}$, i.e. $p$ is a parametric model for the distribution of $D$. As an example, if $D$ has a normal distribution with mean $\mu$ and variance $\sigma^2$, then

$\mathbf{w} = (\mu, \sigma^2)$

and

$p_{\mathbf{w}}(d) = \frac{1}{\sqrt{2 \pi} \sigma} e^{-(d-\mu)^2/2\sigma^2}$

The likelihood function, regardless of our choice of model $p$, is defined by

$L(\mathbf{w}; D) = \prod_{i=1}^N p_{\mathbf{w}}(d_i)$

where $N$ is the number of elements in $D$. Thus the likelihood function is simply the product of the probabilities of all the individual data points, $d_i \in D$, under the probability model $p_{\mathbf{w}}$. Note that this definition implicitly assumes the data points are independent events.

Out of mathematical convenience, we will most often work with the *log-likelihood* function (which turns the product into a sum, by properties of the log function), i.e. the logarithm of $L(\mathbf{w}; D)$, defined as

$l(\mathbf{w};D) = \sum_{i=1}^N l(\mathbf{w};d_i) = \sum_{i=1}^N \log p_{\mathbf{w}}(d_i)$

where we recall that $\log(ab) = \log(a) + \log(b)$.

The method of maximum likelihood chooses the value $\mathbf{w} = \widehat{\mathbf{w}}$ that maximizes the *log-likelihood* function. We will also often work with an **error function**, $E(\mathbf{w})$, defined as the negative of the log-likelihood function

$E(\mathbf{w}) = -l(\mathbf{w};D)$

where we note $-\log(a) = \log(1/a)$, so that maximizing the log-likelihood is equivalent to minimizing the error function.
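To make this concrete, here is a minimal sketch, assuming NumPy and SciPy, that fits a normal model by minimizing the error function $E(\mathbf{w})$ and compares the result to the closed-form maximum likelihood estimates (the sample mean and the biased sample standard deviation); the synthetic data and starting point are illustrative only.

```python
# Maximum likelihood for a normal model: minimize E(w) = -l(w; D).
# Assumes NumPy and SciPy; the synthetic data below are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.RandomState(0)
D = rng.normal(loc=2.0, scale=1.5, size=500)      # data from N(2, 1.5^2)

def error(w):
    """E(w) = -sum_i log p_w(d_i) for w = (mu, sigma)."""
    mu, sigma = w
    if sigma <= 0:                                # density undefined otherwise
        return np.inf
    return -np.sum(norm.logpdf(D, loc=mu, scale=sigma))

result = minimize(error, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = result.x

# Closed-form MLE for the normal: sample mean and *biased* sample std.
print(mu_hat, D.mean())                           # nearly identical
print(sigma_hat, D.std(ddof=0))                   # nearly identical
```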