{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# EXERCISES WITH SOLUTIONS\n", "\n", "### Bayesian Machine Learning and Information Processing (5SSD0)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Prelude \n", "\n", "- In this notebook, we provide a set of exercises that should help you prepare for the exam. There are two sets of exercises: \n", " 1. (**5SSB0**). The first set contains questions from previous exams of class 5SSB0 (Adaptive Information Processing). 5SSB0 was last taught at TU/e in the academic year 2018/19. Starting with the academic year 2019/20, the current class (5SSD0) is offered instead. In comparison to 5SSB0, the emphasis of the current class is more on the Bayesian framework, rather than maximum likelihood. Below, we present a selection of exercises that were used in the 5SSB0 exams. While these exercises may not represent the emphasis on Bayesian methods, they do represent the \"style of questions\" and the \"level of difficulty\" that you may expect in upcoming 5SSD0 exams. Also, there is nothing in the questions below that is considered outside the scope of 5SSD0. \n", " 2. (**Rehearsal**). A second set of exercises are categorized by lesson headers, e.g., \"Probability Theory\" or \"Generative Classification\". These exercises intend to test you on the contents of the corresponding lesson. Of course, for some exercises you may need some contents of some other (usually earlier) lessons or background materials. A perfect categorization is not feasible, but we've tried to link each question to a lesson so as to make it easier to test yourself after studying a specific lesson. \n", " \n", "- For some of these exercises you will be able to find solutions quickly on the internet. Try to resist this route to solving the problems. You will not be graded for these exercises and solutions will be made available in a separate notebook. **Your ability to solve these exercises without external help provides an excellent indicator of your readiness to pass the exam**. \n", "\n", "- This notebook is still under construction. We will be adding more exercises to help you prepare. Also, not all solutions are provided yet. We are working on this too. If you absolutely need the solution for a problem that doesn't have a solution yet, please contact one of the TAs (see http://bmlip.nl for contact info.)\n", "\n", "- As 2019/20 is the first time we teach this class, please be alert to errors (and let us know if you find any!) or more generally, let us know if things are unclear to you or if a question can be improved. \n", "\n", "- We are aware there are some rendering problems in some browsers. We are trying to fix that as well. \n", "\n", "- Good luck!" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Cheatsheet\n", "\n", "- You are not allowed to bring books or notes to the exam. Instead, feel free to make use of the following cheatsheet as we will provide this or a similar cheatsheet in an appendix of the exam papers.\n", "\n", "- Some Matrix Calculus, see also Bishop, appendix C. 
\n", "$$\\begin{align*}\n", "|A^{-1}|&=|A|^{-1} \\\\\n", "\\nabla_A \\log |A| &= (A^{T})^{-1} = (A^{-1})^T \\\\\n", "\\mathrm{Tr}[ABC]&= \\mathrm{Tr}[CAB] = \\mathrm{Tr}[BCA] \\\\\n", "\\nabla_A \\mathrm{Tr}[AB] &=\\nabla_A \\mathrm{Tr}[BA]= B^T \\\\\n", "\\nabla_A \\mathrm{Tr}[ABA^T] &= A(B+B^T)\\\\\n", " \\nabla_x x^TAx &= (A+A^T)x\\\\\n", "\\nabla_X a^TXb &= \\nabla_X \\mathrm{Tr}[ba^TX] = ab^T\n", "\\end{align*}$$\n", "\n", "- Definition of the Multivariate Gaussian Distribution (MVG)\n", "$$\n", "\\mathcal{N}(x|\\,\\mu,\\Sigma) = |2 \\pi \\Sigma|^{-\\frac{1}{2}} \\exp\\left\\{-\\frac{1}{2}(x-\\mu)^T\n", "\\Sigma^{-1} (x-\\mu) \\right\\}\n", "$$\n", "\n", "- A **linear transformation** $z=Ax+b$ of a Gaussian variable $\\mathcal{N}(x|\\mu,\\Sigma)$ is Gaussian distributed as\n", "$$\n", "p(z) = \\mathcal{N} \\left(z \\,|\\, A\\mu+b, A\\Sigma A^T \\right) \n", "$$\n", "\n", "- **Multiplication** of 2 Gaussian distributions\n", "$$ \n", " \\mathcal{N}(x|\\mu_a,\\Sigma_a) \\cdot \\mathcal{N}(x|\\mu_b,\\Sigma_b) = \\alpha \\cdot \\mathcal{N}(x|\\mu_c,\\Sigma_c)\n", "$$\n", "with\n", "$$\\begin{align*}\n", "\\Sigma_c^{-1} &= \\Sigma_a^{-1} + \\Sigma_b^{-1} \\\\\n", "\\Sigma_c^{-1}\\mu_c &= \\Sigma_a^{-1}\\mu_a + \\Sigma_b^{-1}\\mu_b \\\\\n", "\\alpha &= \\mathcal{N}(\\mu_a | \\mu_b, \\Sigma_a + \\Sigma_b)\n", "\\end{align*}$$\n", "\n", "- **Conditioning** and **marginalization** of Gaussians. Let $z = \\begin{bmatrix} x \\\\ y \\end{bmatrix}$ be jointly normal distributed as\n", "$$\\begin{align*}\n", "p(z) &= \\mathcal{N}(z | \\mu, \\Sigma) \n", " =\\mathcal{N} \\left( \\begin{bmatrix} x \\\\ y \\end{bmatrix} \\,\\left|\\, \\begin{bmatrix} \\mu_x \\\\ \\mu_y \\end{bmatrix}, \n", " \\begin{bmatrix} \\Sigma_x & \\Sigma_{xy} \\\\ \\Sigma_{yx} & \\Sigma_y \\end{bmatrix} \\right. \\right)\\,,\n", "\\end{align*}$$\n", "then $p(z) = p(y|x)\\cdot p(x)$, with \n", "$$\\begin{align*}\n", "p(y|x) &= \\mathcal{N}\\left(y\\,|\\,\\mu_y + \\Sigma_{yx}\\Sigma_x^{-1}(x-\\mu_x),\\, \\Sigma_y - \\Sigma_{yx}\\Sigma_x^{-1}\\Sigma_{xy} \\right) \\\\\n", "p(x) &= \\mathcal{N}\\left( x\\,|\\,\\mu_x, \\Sigma_x \\right)\n", "\\end{align*}$$\n", "\n", "- For a binary variable $x \\in \\{0,1\\}$, the **Bernoulli** distribution is given by\n", "$$ \n", "p(x|\\mu) = \\mu^{x}(1-\\mu)^{1-x}\n", "$$\n", "\n", "- The conjugate prior for $\\mu$ is the **Beta** distribution, given by\n", "$$\n", "p(\\mu) = \\mathcal{B}(\\mu|\\alpha,\\beta) = \\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)} \\mu^{\\alpha-1}(1-\\mu)^{\\beta-1}\n", "$$\n", "where $\\alpha$ and $\\beta$ are \"hyperparameters\" that you can set to reflect your prior beliefs about $\\mu$. ",
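"\n", "- As a quick numerical sanity check of the multiplication rule above, the following minimal Python sketch (our own illustration, not part of the exam cheatsheet; `numpy` and `scipy` are assumed to be available and all variable names are just illustrative) verifies the identity at a few test points:\n", "\n", "```python\n", "import numpy as np\n", "from scipy.stats import norm\n", "\n", "# Two Gaussians over the same scalar variable x\n", "mu_a, var_a = 1.0, 2.0\n", "mu_b, var_b = -0.5, 0.5\n", "\n", "# Multiplication rule: precisions add, precision-weighted means add\n", "var_c = 1.0 / (1.0 / var_a + 1.0 / var_b)\n", "mu_c = var_c * (mu_a / var_a + mu_b / var_b)\n", "alpha = norm.pdf(mu_a, loc=mu_b, scale=np.sqrt(var_a + var_b))\n", "\n", "# Check N(x|mu_a,var_a) * N(x|mu_b,var_b) == alpha * N(x|mu_c,var_c)\n", "x = np.linspace(-3.0, 3.0, 7)\n", "lhs = norm.pdf(x, mu_a, np.sqrt(var_a)) * norm.pdf(x, mu_b, np.sqrt(var_b))\n", "rhs = alpha * norm.pdf(x, mu_c, np.sqrt(var_c))\n", "assert np.allclose(lhs, rhs)\n", "```"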
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Selected exercises from previous exams \"5SSB0\"\n", "\n", "\n", "- **[1]** Answer briefly (max. 3 sentences): What is the difference between supervised and unsupervised learning? \n", " > Supervised learning concerns learning a map from inputs to targets. Unsupervised learning concerns analysis of data without targets, such as pattern discovery and compression. (Alternative answers may also be accepted.) \n", "\n", "- **[2]** Which of the following statements is true (or justified)? \n", " (a) If $X$ and $Y$ are independent Gaussian distributed variables, then $Z = 3X+Y$ is also a Gaussian distributed variable. \n", " (b) The sum of two Gaussian functions is always also a Gaussian function. \n", " (c) Discriminative classification is more similar to regression than to density estimation. \n", " (d) Density estimation is more similar to generative classification than to discriminative classification. \n", " (e) Clustering is more similar to supervised classification than to unsupervised classification. \n", " > a, c, d\n", "\n", "- **[3]** Consider a binary classification problem with two classes $\\{y_1,y_2\\}$ and input vector $x$. Outputs $y_k$ are recorded by a one-hot encoding scheme. We are given a data set to train the parameters $\\theta$ for a likelihood model of the form\n", "$$\n", "p(y_k=1|x,\\theta) = \\frac{1}{1 + e^{-\\theta_k^T x}}\n", "$$\n", "There are two fundamentally different ways to train $\\theta$, namely through a generative model or by discriminative training. \n", " (a) Briefly explain how we train $\\theta$ through a generative model. No need to work out all equations for Gaussian models, but explain the strategy in probabilistic modeling terms. \n", " (b) Briefly explain how we train $\\theta$ through a discriminative approach. \n", "> (a) In a generative model, the class posterior is obtained through Bayes rule,\n", "$$\n", "p(y_k=1|x,\\theta) \\propto p(x|y_k=1,\\theta) p(y_k=1|\\theta)\n", "$$\n", "In terms of ML training, this means we maximize the *joint* log-likelihood $\\sum_n \\log p(x_n,y_n|\\theta)$ wrt $\\theta$. This leads to a structured breakdown of the model (and parameters) into a class-conditional likelihood $ p(x|y_k=1,\\theta) $ and class priors $p(y_k=1|\\theta)$. \n", "> (b) In a discriminative model, the posterior class density $ p(y_k=1|x,\\theta) $ is directly trained; in other words, we maximize the *conditional* log-likelihood $\\sum_n \\log p(y_{nk}|x_n,\\theta)$. There is no structured model breakdown.\n", "\n", "- **[4]** What is the difference between supervised and unsupervised learning? Express the goals of these two learning methods in terms of a probability distribution. (I'm looking here for a statement such as: \" Given $\\ldots$, the goal of supervised/unsupervised learning is to estimate $p(\\cdot|\\cdot)$\".) \n", "> Given data $D=\\{(x_1,y_1),\\ldots,(x_N,y_N)\\}$ and a model $p(y|x,\\theta)$, the goal of supervised learning is to estimate $p(\\theta|D)$. Given data $D=\\{x_1,\\ldots,x_N\\}$ and a model $p(x|\\theta)$, the goal of unsupervised learning is to estimate $p(\\theta|D)$. \n", "\n", "- **[5]** In a particular model with hidden variables, the log-likelihood can be worked out to the following expression:\n", "$$\n", " L(\\theta) = \\sum_n \\log \\left(\\sum_k \\pi_k\\,\\mathcal{N}(x_n|\\mu_k,\\Sigma_k)\\right)\n", "$$\n", "Do you prefer a gradient descent or an EM algorithm to estimate maximum likelihood values for the parameters? Explain your answer. (No need to work out the equations.)\n", "> Since the sum over $k$ appears inside the logarithm, the expression does not degenerate into simple MVG terms and setting the gradient to zero does not yield closed-form solutions; the EM approach is therefore in practice preferred.\n",
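"\n", "To make this comparison concrete, here is a minimal sketch of the EM iterations for a one-dimensional two-component Gaussian mixture (our own illustration, not part of the original exam; `numpy`/`scipy` assumed available and all names illustrative):\n", "\n", "```python\n", "import numpy as np\n", "from scipy.stats import norm\n", "\n", "rng = np.random.default_rng(0)\n", "# Synthetic data from a two-component mixture\n", "x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])\n", "\n", "# Initial guesses for mixture weights, means and standard deviations\n", "pi, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])\n", "for _ in range(50):\n", "    # E-step: responsibilities gamma[n, k] proportional to pi_k * N(x_n | mu_k, sd_k^2)\n", "    gamma = pi * norm.pdf(x[:, None], mu, sd)\n", "    gamma /= gamma.sum(axis=1, keepdims=True)\n", "    # M-step: closed-form re-estimation from the weighted data\n", "    Nk = gamma.sum(axis=0)\n", "    pi = Nk / len(x)\n", "    mu = (gamma * x[:, None]).sum(axis=0) / Nk\n", "    sd = np.sqrt((gamma * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)\n", "print(pi, mu, sd)  # should approach [0.3, 0.7], [-2, 3], [1, 1]\n", "```\n",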
"\n", "- **[6]** Consider a data set $D = \\{x_1,x_2,\\ldots,x_N\\}$ where we assume that each sample $x_n$ is IID distributed by a multivariate Gaussian (MVG) $\\mathcal{N}(x_n|\\,\\mu,\\Sigma)$. Prove that the maximum likelihood estimate (MLE) of the mean value of this distribution is given by\n", "$$\\begin{equation*}\n", "\\hat \\mu = \\frac{1}{N}\\sum_n x_n \n", "\\end{equation*}$$ \n", "(Note the list of [matrix calculus formulas](#matrix-calculus) above).\n", "> $$\\begin{align*}\n", "\\nabla_\\mu \\log p(D|\\theta) &\\propto -\\frac{1}{2}\\sum_n \\nabla_\\mu \\left( (x_n-\\mu)^T \\Sigma^{-1} (x_n-\\mu) \\right) \\\\\n", "&= -\\frac{1}{2}\\sum_n \\nabla_\\mu \\left( -2\\mu^T\\Sigma^{-1}x_n + \\mu^T\\Sigma^{-1}\\mu \\right) \\\\\n", "&= -\\frac{1}{2}\\sum_n \\left( -2\\Sigma^{-1} x_n + 2\\Sigma^{-1}\\mu \\right) \\\\\n", "&= \\Sigma^{-1}\\,\\sum_n \\left( x_n-\\mu \\right)\n", "\\end{align*}$$\n", "Setting this derivative to zero yields\n", "$$\\begin{equation*}\n", "\\hat \\mu = \\frac{1}{N} \\sum_n x_n\n", "\\end{equation*}$$\n", "\n", "- **[7]** Consider a data set $D = \\{(x_1,y_1), (x_2,y_2),\\dots,(x_N,y_N)\\}$ with 1-of-$K$ notation for the discrete classes, i.e.,\n", "\n", "$$ y_{nk} = \\begin{cases} 1 & \\text{if $y_n$ in $k$th class}\\\\\n", " 0 & \\text{otherwise} \n", " \\end{cases}$$\n", "\n", "together with class-conditional distribution $p(x_n| y_{nk}=1,\\theta) = \\mathcal{N}(x_n|\\mu_k,\\Sigma)$ and multinomial prior $p(y_{nk}=1) = \\pi_k$. \n", " (a) Prove that the joint log-likelihood is given by\n", "$$\\begin{equation*}\n", "\\log p(D|\\theta) = \\sum_{n,k} y_{nk} \\log \\mathcal{N}(x_n|\\mu_k,\\Sigma) + \\sum_{n,k} y_{nk} \\log \\pi_k\n", "\\end{equation*}$$\n", " > $$\\begin{align*}\n", " \\log p(D|\\theta) &= \\sum_n \\log \\prod_k p(x_n,y_{nk}|\\theta)^{y_{nk}} \\\\\n", " &= \\sum_{n,k} y_{nk} \\log p(x_n,y_{nk}|\\theta)\\\\\n", " &= \\sum_{n,k} y_{nk} \\log \\mathcal{N}(x_n|\\mu_k,\\Sigma) + \\sum_{n,k} y_{nk} \\log \\pi_k\n", " \\end{align*}$$ \n", " \n", " (b) Show now that the MLE of the *class-conditional* mean is given by\n", "$$\\begin{equation*}\n", " \\hat \\mu_k = \\frac{\\sum_n y_{nk} x_n}{\\sum_n y_{nk}} \n", "\\end{equation*}\n", "$$\n", " > Taking the derivative of the log-likelihood with respect to $\\mu_k$ gives\n", "$$\\begin{align*}\n", "\\nabla_{\\mu_k} \\log p(D|\\theta) &= \\sum_n y_{nk} \\nabla_{\\mu_k} \\log \\mathcal{N}(x_n|\\mu_k,\\Sigma) \\\\\n", "&= \\sum_n y_{nk} \\Sigma^{-1}(x_n - \\mu_k)\n", "\\end{align*}$$\n", "Setting this derivative to zero yields $\\sum_n y_{nk} x_n = \\left(\\sum_n y_{nk}\\right)\\mu_k$, and hence\n", "$$\\begin{equation*}\n", " \\hat \\mu_k = \\frac{\\sum_n y_{nk} x_n}{\\sum_n y_{nk}} \n", "\\end{equation*}$$\n",
"\n", "- **[8]** Consider an IID data set $D=\\{(x_1,y_1),\\ldots,(x_N,y_N)\\}$. We will model this data set by a model $$y_n =\\theta^T f(x_n) + e_n\\,,$$ where $f(x_n)$ is an $M$-dimensional feature vector of input $x_n$; $y_n$ is a scalar output and $e_n \\sim \\mathcal{N}(0,\\sigma^2)$. \n", " (a) Rewrite the model in matrix form by lumping input features in a matrix $F=[f(x_1),\\ldots,f(x_N)]^T$, outputs and noise in the vectors $y=[y_1,\\ldots,y_N]^T$ and $e=[e_1,\\ldots,e_N]^T$, respectively. \n", " > $y = F\\theta + e$\n", "\n", " (b) Now derive an expression for the log-likelihood $\\log p(y|\\,F,\\theta,\\sigma^2)$. \n", " > $$\\begin{align*}\n", " \\log p(D|\\theta,\\sigma^2) &= \\log \\mathcal{N}(y|\\,F\\theta ,\\sigma^2 I)\\\\\n", " &\\propto -\\frac{1}{2\\sigma^2}\\left( {y - F\\theta } \\right)^T \\left( {y - F\\theta } \\right)\n", "\\end{align*}$$\n", "\n", " (c) Prove that the maximum likelihood estimate for the parameters is given by \n", "$$\\hat\\theta_{\\text{ml}} = (F^TF)^{-1}F^Ty$$ \n", "(Note the list of [matrix calculus formulas](#matrix-calculus) above). \n", " > Taking the derivative with respect to $\\theta$,\n", "$$\n", "\\nabla_\\theta \\log p(D|\\theta) = \\frac{1}{\\sigma^2} F^T(y-F\\theta)\n", "$$\n", " Setting the derivative to zero gives the maximum likelihood estimate\n", "$$\n", "\\hat\\theta_{\\text{ml}} = (F^TF)^{-1}F^Ty\n", "$$\n", "\n", " (d) What is the predicted output value $y_\\bullet$, given an observation $x_\\bullet$ and the maximum likelihood parameters $\\hat \\theta_{\\text{ml}}$? Work this expression out in terms of $F$, $y$ and $f(x_\\bullet)$. \n", " > Prediction of the new data point: $\\hat y_\\bullet = \\hat \\theta^T f(x_\\bullet) = \\left((F^TF)^{-1}F^Ty\\right)^T f(x_\\bullet) $\n", "\n", " (e) Suppose that, before the data set $D$ was observed, we had reason to assume a prior distribution $p(\\theta)=\\mathcal{N}(\\theta|0,\\sigma_0^2 I)$. Derive the Maximum a posteriori (MAP) estimate $\\hat \\theta_{\\text{map}}$. (Hint: work this out in the $\\log$ domain.) \n", " > $$\\begin{align*}\n", "\\log p(\\theta|D) &\\propto \\log p(D|\\theta) p(\\theta) \\\\\n", " &\\propto -\\frac{1}{2\\sigma^2}\\left( {y - F\\theta } \\right)^T \\left( {y - F\\theta } \\right) - \\frac{1}{2 \\sigma_0^2}\\theta^T \\theta\n", "\\end{align*}$$\n", " The derivative is $\\nabla_\\theta \\log p(\\theta|D) = (1/\\sigma^2)F^T(y-F\\theta) - (1/ \\sigma_0^2) \\theta$. \n", " Setting the derivative to zero for the MAP estimate leads to \n", "$$\\hat\\theta_{\\text{map}} = \\left(F^T F + \\frac{\\sigma^2}{\\sigma_0^2} I\\right)^{-1}F^Ty$$\n",
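"\n", "The closed-form estimates of (c) and (e) are easy to check numerically. A minimal sketch (our own, not part of the original exam; names illustrative, `numpy` assumed):\n", "\n", "```python\n", "import numpy as np\n", "\n", "rng = np.random.default_rng(1)\n", "N, M = 100, 3\n", "F = rng.normal(size=(N, M))                  # feature matrix\n", "theta_true = np.array([0.5, -1.0, 2.0])\n", "y = F @ theta_true + rng.normal(0, 0.1, N)   # noisy observations\n", "\n", "# ML estimate: (F'F)^{-1} F'y\n", "theta_ml = np.linalg.solve(F.T @ F, F.T @ y)\n", "\n", "# MAP estimate with noise variance var and prior variance var0\n", "var, var0 = 0.1**2, 1.0\n", "theta_map = np.linalg.solve(F.T @ F + (var / var0) * np.eye(M), F.T @ y)\n", "\n", "print(theta_ml, theta_map)  # both close to theta_true; MAP is shrunk slightly toward 0\n", "```\n",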
"\n", "- **[9]** For each of the following sub-questions, provide a *short but essential* answer. \n", " (a) The joint distribution for feature vector $x$ and membership of $k$th class is given by (we use one-hot encoding of classes)\n", "$$p(x,y_k=1) = \\pi_k \\cdot \\mathcal{N}(x |\\mu_k,\\Sigma)$$ \n", "Write down an expression for the posterior class probability $p(y_k=1|x)$. (No derivations are needed, just a proper expression.) \n", " > $$\n", "p(y_k=1|x) = \\frac{\\pi_k \\cdot \\mathcal{N}(x | \\mu_k,\\Sigma)}{\\sum_j \\pi_j \\cdot \\mathcal{N}(x | \\mu_j,\\Sigma)}\n", "$$\n", "\n", " (b) Why does maximum likelihood estimation become a better approximation to Bayesian learning as you collect more data? \n", " > The likelihood tends to get 'narrower' and more informative than the prior, since the latter does not change with more data.\n", "\n", " (c) Given is a model \n", "$$\\begin{align*}\n", "p(x|z) &= \\mathcal{N}(x | W z,\\Psi) \\\\\n", "p(z) &= \\mathcal{N}(z|0,I)\n", "\\end{align*}$$ \n", "Work out an expression for the marginal distribution $p(x)$. \n", " > Since both the prior and likelihood are Gaussian, you know that $p(x)$ has to be Gaussian as well. That means we only need to compute the mean and variance for $p(x)$. The model can also be written as\n", "$$\\begin{align*}\n", "x &= W z + \\epsilon \\\\\n", "z &\\sim \\mathcal{N}(0,I) \\\\\n", "\\epsilon &\\sim \\mathcal{N}(0,\\Psi)\n", "\\end{align*}$$\n", "This leads to \n", "$$\\begin{align*}\n", "E[x] &= E[W z + \\epsilon] = W E[z] + E[ \\epsilon] = 0\\\\\n", "var[x] &= E[xx^T] - E[x]E[x]^T \\\\\n", "&= E[xx^T] \\\\\n", "&= E[(W z +\\epsilon)(W z +\\epsilon)^T] \\\\\n", "&= W E[zz^T] W^T + E[\\epsilon \\epsilon^T] = W W^T + \\Psi \n", "\\end{align*}$$\n", "(the cross terms vanish since $z$ and $\\epsilon$ are independent with zero mean), and therefore\n", "$$p(x) = \\mathcal{N}(x|\\,0,W W^T + \\Psi)$$\n", "\n", "\n", "- **[10]** Briefly explain how Bayes rule relates to machine learning in the context of an observed data set $D$ and a model $M$ with parameters $\\theta$. Your answer must contain the expression for Bayes rule. \n", "> Machine learning is inference over models (hypotheses, parameters, etc.) from a given data set. Bayes rule makes this statement precise. Let $\\theta \\in \\Theta$ and $D$ represent a model parameter vector and the given data set, respectively. Then, Bayes rule,\n", "$$\n", "p(\\theta|D,M) = \\frac{p(D|\\theta,M)}{p(D|M)} p(\\theta|M)\n", "$$\n", "relates the information that we have about $\\theta$ before we saw the data (i.e., the distribution $p(\\theta|M)$) to what we know after having seen the data, $p(\\theta|D,M)$.\n", "\n", "- **[11]** Consider the following state-space model:\n", "$$\\begin{align*}\n", "z_k &= A z_{k-1} + w_k \\\\\n", "x_k &= C z_k + v_k \n", "\\end{align*}$$\n", "where $k=1,2,\\ldots,n$ is the time step counter; $z_k$ is an *unobserved* state sequence; $x_k$ is an *observed* sequence; $w_k \\sim \\mathcal{N}(0,\\Sigma_w)$ and $v_k \\sim \\mathcal{N}(0,\\Sigma_v)$ are (unobserved) state and observation noise sequences respectively; $z_0 \\sim \\mathcal{N}(0,\\Sigma_0)$ is the initial state and $A$, $C$, $\\Sigma_v$, $\\Sigma_w$ and $\\Sigma_0$ are known parameters. The Forney-style factor graph (FFG) for one time step is depicted here:\n", "\n", "\n", "\n", " (a) Rewrite the state-space equations as a set of conditional probability distributions. \n", " $$\\begin{align*}\n", " p(z_k|z_{k-1},A,\\Sigma_w) &= \\ldots \\\\\n", " p(x_k|z_k,C,\\Sigma_v) &= \\ldots \\\\\n", " p(z_0|\\Sigma_0) &= \\ldots\n", "\\end{align*}$$ \n", " > $$\\begin{align*}\n", " p(z_k|z_{k-1},A,\\Sigma_w) &= \\mathcal{N}(z_k|A z_{k-1},\\Sigma_w) \\\\\n", " p(x_k|z_k,C,\\Sigma_v) &= \\mathcal{N}(x_k|C z_k,\\Sigma_v) \\\\\n", " p(z_0|\\Sigma_0) &= \\mathcal{N}(z_0|0,\\Sigma_0)\n", "\\end{align*}$$ \n", "\n", " (b) Define $z^n \\triangleq (z_0,z_1,\\ldots,z_n)$, $x^n \\triangleq (x_1,\\ldots,x_n)$ and $\\theta=\\{A,C,\\Sigma_w,\\Sigma_v\\}$. Now write out the generative model $p(x^n,z^n|\\theta)$ as a product of factors. \n", " > $$\\begin{align*}\n", "p(x^n,z^n|\\theta) &= p(z_0|\\Sigma_0) \\prod_{k=1}^n p(x_k|z_k,C,\\Sigma_v) \\,p(z_k|z_{k-1},A,\\Sigma_w) \\\\\n", " &= \\mathcal{N}(z_0|0,\\Sigma_0) \\prod_{k=1}^n \\mathcal{N}(x_k|C z_k,\\Sigma_v) \\,\\mathcal{N}(z_k|A z_{k-1},\\Sigma_w)\n", "\\end{align*}$$\n", "\n", " (c) We are interested in estimating $z_k$ from a given estimate for $z_{k-1}$ and the current observation $x_k$, i.e., we are interested in computing $p(z_k|z_{k-1},x_k,\\theta)$. Can $p(z_k|z_{k-1},x_k,\\theta)$ be expressed as a Gaussian distribution? Explain why or why not in one sentence. \n", " > Yes, since the generative model $p(x^n,z^n|\\theta)$ is (one big) Gaussian.\n", "\n", " (d) Copy the graph onto your exam paper and draw the message passing schedule for computing $p(z_k|z_{k-1},x_k,\\theta)$ by drawing arrows in the factor graph. Indicate the order of the messages by assigning numbers to the arrows. \n", " > \n", " \n", " > Some permutations of this order are also possible. The most important thing here is that you recognize the tree with $Z_k$ as a root of the tree and pass messages from the terminals (e.g., $Z_{k-1}$, $X_k$, etc.) towards the root. \n", " \n", " (e) Now assume that our belief about parameter $\\Sigma_v$ is instead given by a distribution $p(\\Sigma_v)$ (rather than a known value). Adapt the factor graph drawing of the previous answer to reflect our belief about $\\Sigma_v$. \n", " > See drawing in previous answer. \n", "\n",
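"For intuition, the Gaussian computation behind (c) and (d) can be sketched numerically. The following scalar one-step sketch (our own illustration with made-up parameter values, not part of the original exam) combines the state-transition message and the observation message with the Gaussian multiplication rule from the cheatsheet:\n", "\n", "```python\n", "# Model parameters (scalar case): z_k = A z_{k-1} + w_k,  x_k = C z_k + v_k\n", "A, C, var_w, var_v = 0.9, 1.0, 0.5, 0.2\n", "z_prev, x_k = 1.0, 1.2          # given previous state and current observation\n", "\n", "# Forward message along the state transition: p(z_k | z_{k-1}) = N(A z_prev, var_w)\n", "m_fwd, v_fwd = A * z_prev, var_w\n", "# Message from the observation: N(x_k | C z_k, var_v), as a function of z_k,\n", "# is proportional to N(z_k | x_k / C, var_v / C**2)\n", "m_obs, v_obs = x_k / C, var_v / C**2\n", "\n", "# Multiply the two Gaussian messages: precisions and precision-weighted means add\n", "v_post = 1.0 / (1.0 / v_fwd + 1.0 / v_obs)\n", "m_post = v_post * (m_fwd / v_fwd + m_obs / v_obs)\n", "print(m_post, v_post)           # parameters of p(z_k | z_{k-1}, x_k, theta)\n", "```"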
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Rehearsal Exercises\n", "\n", "- Below you will find more questions that test your knowledge of each lesson. My perception of the difficulty level of each question may differ from yours, but I'll indicate my ratings with three levels: (#) for easy, (##) for intermediate level and (###) for (very) challenging. While all questions are in principle within the scope of the lessons, there should be enough questions of levels (#) and (##) in the exams to pass the class. " ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Machine Learning Overview\n", "\n", "- **[1]** (##) Pick three applications from the [\"Some Machine Learning Applications\"](https://nbviewer.jupyter.org/github/bertdv/BMLIP/blob/master/lessons/notebooks/Machine-Learning-Overview.ipynb#some-ml-apps)-slide and (briefly) describe for each application how (a combination of) clustering, dimensionality reduction, regression, classification or reinforcement learning could accomplish the task." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Probability Theory Review\n", "\n", "\n", "- **[1]** (a) (#) Prove that the \"elementary\" sum rule $p(A) + p(\\bar{A}) = 1$ follows from the (general) sum rule $$p(A+B) = p(A) + p(B) - p(A,B)\\,.$$ \n", " (b) (###) Conversely, derive the general sum rule\n", " $p(A + B) = p(A) + p(B) - p(A,B)$\n", "from the elementary sum rule $p(A) + p(\\bar A) = 1$ and the product rule. Here, you may make use of the (Boolean logic) fact that $A + B = \\overline {\\bar A \\bar B }$. \n", " > $$\\begin{align*}\n", " p\\left( {A + B} \\right) &\\underset{\\mathrm{bool}}{=} p\\left( {\\overline {\\bar A \\bar B } } \\right) \\\\\n", " &\\underset{\\mathrm{sum}}{=} 1 - p\\left( {\\bar A \\bar B } \\right) \\\\\n", " &\\underset{\\mathrm{prod}}{=} 1 - p\\left( {\\bar A |\\bar B } \\right)p\\left( {\\bar B } \\right) \\\\\n", " &\\underset{\\mathrm{sum}}{=} 1 - \\left( {1 - p\\left( {A|\\bar B } \\right)} \\right)\\left( {1 - p\\left( B \\right)} \\right) \\\\\n", " &= p(B) + \\left( {1 - p\\left( B \\right)} \\right)p\\left( {A|\\bar B } \\right) \\\\\n", " &\\underset{\\mathrm{prod}}{=} p(B) + \\left( {1 - p\\left( B \\right)} \\right)p\\left( {\\bar B |A} \\right)\\frac{{p\\left( A \\right)}}\n", "{{p\\left( {\\bar B } \\right)}} \\\\\n", " &\\underset{\\mathrm{sum}}{=} p(B) + p\\left( {\\bar B |A} \\right)p\\left( A \\right) \\\\\n", " &\\underset{\\mathrm{sum}}{=} p(B) + \\left( {1 - p\\left( {B|A} \\right)} \\right)p\\left( A \\right) \\\\\n", " &\\underset{\\mathrm{prod}}{=} p\\left( A \\right) + p(B) - p\\left( {A,B} \\right) \n", "\\end{align*}$$\n", " Note that, aside from the first Boolean rewrite, everything follows from straightforward application of the sum and product rules. \n",
"\n", "- **[2]** Box 1 contains 8 apples and 4 oranges. Box 2 contains 10 apples and 2 oranges. Boxes are chosen with equal probability. \n", " (a) (#) What is the probability of choosing an apple? \n", " (b) (##) If an apple is chosen, what is the probability that it came from box 1?\n", "> The following probabilities are given in the problem statement,\n", "$$\\begin{align*}\n", "p(b_1) &= p(b_2) = 1/2\\\\\n", "p(a|b_1) &= 8/12, \\quad p(a|b_2)=10/12\\\\\n", "p(o|b_1) &= 4/12, \\quad p(o|b_2)=2/12\n", "\\end{align*}$$\n", "> \n", "> (a) $p(a) = \\sum_i p(a,b_i) = \\sum_i p(a|b_i)p(b_i)=\\frac{8}{12}\\cdot\\frac{1}{2} + \\frac{10}{12}\\cdot\\frac{1}{2} = \\frac{3}{4}$ \n", "> (b) $p(b_1|a) = \\frac{p(a,b_1)}{p(a)} = \\frac{p(a|b_1)p(b_1)}{p(a)} = \\frac{\\frac{8}{12}\\cdot\\frac{1}{2}}{\\frac{3}{4}} = \\frac{4}{9}$\n", "\n", "- **[3]** (###) The inhabitants of an island tell the truth one third of the time. They lie with probability $2/3$. On an occasion, after one of them made a statement, you ask another \"was that statement true?\" and he says \"yes\". What is the probability that the statement was indeed true?\n", "> We use variables $S_1$ and $S_2$ for statements 1 and 2 and shorthand \"y\", \"n\", \"t\" and \"f\" for \"yes\", \"no\", \"true\" and \"false\", respectively. The problem statement provides us with the following probabilities,\n", "$$\\begin{align*}\n", "p(S_1=\\text{t})&= 1/3\\\\\n", "p(S_1=\\text{f})&= 1 - p(S_1=\\text{t})= 2/3\\\\\n", "p(S_2=\\text{y}|S_1=\\text{t})&= 1/3 \\\\\n", "p(S_2=\\text{y}|S_1=\\text{f})&= 1-p(S_2=\\text{y}|S_1=\\text{t})= 2/3\n", "\\end{align*}$$\n", "We are asked to compute $p(S_1=\\text{t}|S_2=\\text{y})$. Use Bayes rule,\n", "$$\\begin{align*}\n", "p(S_1=\\text{t}|S_2=\\text{y}) &= \\frac{p(S_1=\\text{t},S_2=\\text{y})}{p(S_2=\\text{y})}\\\\\n", "&=\\frac{p(S_2=\\text{y}|S_1=\\text{t})p(S_1=\\text{t})}{p(S_2=\\text{y}|S_1=\\text{t})p(S_1=\\text{t})+p(S_2=\\text{y}|S_1=\\text{f})p(S_1=\\text{f})}\\\\\n", "&= \\frac{\\frac{1}{3}\\cdot\\frac{1}{3}}{\\frac{1}{3}\\cdot\\frac{1}{3}+\\frac{2}{3}\\cdot\\frac{2}{3}} = \\frac{1}{5}\n", "\\end{align*}$$\n", "\n", "- **[4]** (##) A bag contains one ball, known to be either white or black. A white ball is put in, the bag is shaken, and a ball is drawn out, which proves to be white. What is now the chance of drawing a white ball? (Note that the state of the bag, after the operations, is exactly identical to its state before.)\n", "> There are two hypotheses: let $H = 0$ mean that the original ball in the bag was white and $H = 1$ that it was black. Assume the prior probabilities are equal. The data is that when a randomly selected ball was drawn from the bag, which contained a white one and the unknown one, it turned out to be white. The probability of this result according to each hypothesis is:\n", "$$ P(D|H =0) = 1,\\quad P(D|H =1) = 1/2$$\n", "So by Bayes theorem, the posterior probability of $H$ is\n", "$$P(H =0|D) = 2/3,\\quad P(H =1|D) = 1/3$$\n",
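"\n", "A quick Monte-Carlo check of the answer to [4] (our own sketch, not part of the lesson; `numpy` assumed available):\n", "\n", "```python\n", "import numpy as np\n", "\n", "rng = np.random.default_rng(2)\n", "trials, white_original, white_drawn = 100_000, 0, 0\n", "for _ in range(trials):\n", "    original_is_white = rng.random() < 0.5   # equal prior on the original ball\n", "    bag = [original_is_white, True]          # add a white ball\n", "    drawn = bag[rng.integers(2)]             # draw one ball at random\n", "    if drawn:                                # condition on drawing white\n", "        white_drawn += 1\n", "        white_original += original_is_white\n", "print(white_original / white_drawn)          # approximately 2/3 = P(H=0 | D),\n", "# which is also the chance that the remaining ball (the next draw) is white\n", "```\n",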
"\n", "- **[5]** A dark bag contains five red balls and seven green ones. \n", " (a) (#) What is the probability of drawing a red ball on the first draw? \n", " (b) (##) Balls are not returned to the bag after each draw. If you know that on the second draw the ball was a green one, what is now the probability of drawing a red ball on the first draw? \n", "> (a) $p(S_1=R) = \\frac{N_R}{N_R+N_G}= \\frac{5}{12}$ \n", "> (b) The outcome of the $n$th draw is referred to by variable $S_n$. Use Bayes rule to get \n", "$$\\begin{align*}\n", "p(S_1=\\text{R}|S_2=\\text{G}) &=\\frac{p(S_2=\\text{G}|S_1=\\text{R})p(S_1=\\text{R})}{p(S_2=\\text{G}|S_1=\\text{R})p(S_1=\\text{R})+p(S_2=\\text{G}|S_1=\\text{G})p(S_1=\\text{G})}\\\\\n", "&= \\frac{\\frac{7}{11}\\cdot\\frac{5}{12}}{\\frac{7}{11}\\cdot\\frac{5}{12}+\\frac{6}{11}\\cdot\\frac{7}{12}} = \\frac{5}{11}\n", "\\end{align*}$$\n", "\n", "- **[6]** (#) Is it more correct to speak about the likelihood of a _model_ (or model parameters) than about the likelihood of an _observed data set_, and why? \n", "> When a data generating distribution is considered as a function of the model _parameters_ for given data, i.e. $L(\\theta) \\triangleq \\log p(D|\\theta)$, it is called a likelihood. Hence, it is more correct to speak about the likelihood of a _model_ (or of the likelihood of the parameters): the data set is given and fixed, while the model is the variable of interest. \n", "\n", "- **[7]** (##) Is a speech signal a 'probabilistic' (random) or a deterministic signal?\n", "> That depends. The term 'probabilistic' refers to a state-of-knowledge (or beliefs) about something (in this case, about the values of a speech signal). The fundamental issue here is to realize that the signal itself is not probabilistic (nor deterministic), but rather that these attributes reflect a state-of-knowledge. If you had a perfect microphone and recorded a speech signal perfectly at its source, then you would know all the signal values perfectly. You could say that the signal is deterministic since there is no uncertainty. However, before you would record the signal, how would you describe your state-of-knowledge about the signal values that you are going to record? There is uncertainty, so you would need to describe that speech signal by a probability distribution over all possible values. \n", " \n", "- **[8]** (##) Prove that, for any distribution of $x$ and $y$ and $z=x+y$\n", "$$\\begin{align*}\n", " \\mathbb{E}[z] &= \\mathbb{E}[x] + \\mathbb{E}[y] \\\\\n", " \\mathbb{V}[z] &= \\mathbb{V}[x] + \\mathbb{V}[y] + 2\\mathbb{V}[x,y] \n", "\\end{align*}$$\n", "where $\\mathbb{E}[\\cdot]$, $\\mathbb{V}[\\cdot]$ and $\\mathbb{V}[\\cdot,\\cdot]$ refer to the expectation (mean), variance and covariance operators respectively. You may make use of the more general theorem that the mean and variance of any distribution $p(x)$ is processed by a linear transformation as\n", " $$\\begin{align*}\n", "\\mathbb{E}[Ax +b] &= A\\mathbb{E}[x] + b \\\\\n", "\\mathbb{V}[Ax +b] &= A\\,\\mathbb{V}[x]\\,A^T \n", "\\end{align*}$$\n", "> Define $A = [I, I]$, $w = [x;y]$ (where the notation \";\" stacks the columns of $x$ and $y$ and $I$ is the identity matrix). Then $z = A w$. Now apply the formula for the mean and variance of a RV after a linear transformation. \n", "$$\\begin{align*}\n", "\\mathbb{E}[z] &= \\mathbb{E}[Aw] \\\\ \n", " &= \\mathbb{E}[x+y] \\\\\n", " &= \\mathbb{E}[x]+\\mathbb{E}[y]\\\\\n", " \\mathbb{V}[z] &= \\mathbb{V}[Aw] \\\\\n", " &= A\\mathbb{V}[w]A^T \\\\\n", " &= \\begin{bmatrix}I & I \\end{bmatrix}\\begin{bmatrix} \\mathbb{V}[x] & \\mathbb{V}[x,y] \\\\ \\mathbb{V}[x,y] & \\mathbb{V}[y]\\end{bmatrix}\\begin{bmatrix}I \\\\ I \\end{bmatrix} \\\\\n", " &= \\mathbb{V}[x]+\\mathbb{V}[y]+2\\mathbb{V}[x,y]\n", "\\end{align*}$$\n",
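"\n", "The identities of [8] hold for arbitrary distributions, which is easy to check by simulation. A minimal sketch (our own, with deliberately non-Gaussian $y$; `numpy` assumed):\n", "\n", "```python\n", "import numpy as np\n", "\n", "rng = np.random.default_rng(3)\n", "# Correlated x and y; exponentiate one coordinate so that y is not Gaussian\n", "u = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 2.0]], size=1_000_000)\n", "x, y = u[:, 0], np.exp(u[:, 1])\n", "z = x + y\n", "\n", "cov = np.cov(x, y, bias=True)   # bias=True matches the 1/N convention of .var()\n", "lhs = z.var()\n", "rhs = x.var() + y.var() + 2 * cov[0, 1]\n", "print(lhs, rhs)                 # agree up to Monte-Carlo error\n", "```"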
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Bayesian Machine Learning\n", "\n", "\n", "- **[1]** (#) (a) Briefly explain the relation between machine learning and Bayes rule. \n", " (b) How are Maximum a Posteriori (MAP) and Maximum Likelihood (ML) estimation related to Bayes rule and machine learning?\n", "> (a) Machine learning is inference over models (hypotheses, parameters, etc.) from a given data set. *Bayes rule* makes this statement precise. Let $\\theta \\in \\Theta$ and $D$ represent a model parameter vector and the given data set, respectively. Then, Bayes rule,\n", "$$\n", "p(\\theta|D) = \\frac{p(D|\\theta)}{p(D)} p(\\theta)\n", "$$\n", "relates the information that we have about $\\theta$ before we saw the data (i.e., the distribution $p(\\theta)$) to what we know after having seen the data, $p(\\theta|D)$. \n", "> (b) The *Maximum a Posteriori* (MAP) estimate picks a value $\\hat\\theta$ for which the posterior distribution $p(\\theta|D)$ is maximal, i.e.,\n", "$$ \\hat\\theta_{MAP} = \\arg\\max_\\theta p(\\theta|D)$$\n", "In a sense, MAP estimation approximates Bayesian learning, since we approximate $p(\\theta|D)$ by $\\delta(\\theta-\\hat\\theta_{\\text{MAP}})$. Note that, by Bayes rule, $$\\arg\\max_\\theta p(\\theta|D) = \\arg\\max_\\theta p(D|\\theta)p(\\theta)$$\n", "If we further assume that prior to seeing the data all values for $\\theta$ are equally likely (i.e., $p(\\theta)=\\text{const.}$), then the MAP estimate reduces to the *Maximum Likelihood* estimate,\n", "$$ \\hat\\theta_{ML} = \\arg\\max_\\theta p(D|\\theta)$$\n", "\n", "- **[2]** (#) What are the four stages of the Bayesian design approach?\n", "> (1) Model specification, (2) parameter estimation, (3) model evaluation and (4) application of the model to tasks.\n", "\n", "- **[3]** (##) The Bayes estimate is a summary of a posterior distribution by a delta distribution on its mean, i.e., \n", "$$\n", "\\hat \\theta_{bayes} = \\int \\theta \\, p\\left( \\theta |D \\right)\n", "\\,\\mathrm{d}{\\theta}\n", "$$\n", "Prove that the Bayes estimate minimizes the expected mean-squared error, i.e., show that\n", "$$\n", "\\hat \\theta_{bayes} = \\arg\\min_{\\hat \\theta} \\int_\\theta (\\hat \\theta -\\theta)^2 p \\left( \\theta |D \\right) \\,\\mathrm{d}{\\theta}\n", "$$\n", ">To minimize the expected mean-squared error we will look for $\\hat{\\theta}$ that makes the gradient of the integral with respect to $\\hat{\\theta}$ vanish.\n", "$$\\begin{align*}\n", " \\nabla_{\\hat{\\theta}} \\int_\\theta (\\hat \\theta -\\theta)^2 p \\left( \\theta |D \\right) \\,\\mathrm{d}{\\theta} &= 0 \\\\\n", " \\int_\\theta \\nabla_{\\hat{\\theta}} (\\hat \\theta -\\theta)^2 p \\left( \\theta |D \\right) \\,\\mathrm{d}{\\theta} &= 0 \\\\\n", " \\int_\\theta 2(\\hat \\theta -\\theta) p \\left( \\theta |D \\right) \\,\\mathrm{d}{\\theta} &= 0 \\\\\n", " \\int_\\theta \\hat \\theta p \\left( \\theta |D \\right) \\,\\mathrm{d}{\\theta} &= \\int_\\theta \\theta p \\left( \\theta |D \\right) \\,\\mathrm{d}{\\theta} \\\\\n", " \\hat \\theta \\underbrace{\\int_\\theta p \\left( \\theta |D \\right) \\,\\mathrm{d}{\\theta}}_{1} &= \\int_\\theta \\theta p \\left( \\theta |D \\right) \\,\\mathrm{d}{\\theta} \\\\\n", " \\hat \\theta &= \\int_\\theta \\theta p \\left( \\theta |D \\right) \\,\\mathrm{d}{\\theta}\n", " \\end{align*}$$\n",
"\n", "- **[4]** (###) We make $N$ IID observations $D=\\{x_1 \\dots x_N\\}$ and assume the following model\n", "$$\n", "x_k = A + \\epsilon_k \n", "$$\n", " where $\\epsilon_k \\sim \\mathcal{N}(0,\\sigma^2)$ with known $\\sigma^2=1$. We are interested in deriving an estimator for $A$. \n", " (a) Make a reasonable assumption for a prior on $A$ and derive a Bayesian (posterior) estimate. \n", " (b) (##) Derive the Maximum Likelihood estimate for $A$. \n", " (c) Derive the MAP estimate for $A$. \n", " (d) Now assume that we do not know the variance of the noise term. Describe the procedure for Bayesian estimation of both $A$ and $\\sigma^2$ (no need to fully work this out to closed-form estimates). \n", "> (a) Since there is no assumption on the values $A$ can take, it makes sense to assume a distribution that has support over the reals. A Gaussian prior is a good candidate. Let us assume $p(A) = \\mathcal{N}(A|m_A,v_A)$. Since $p(D|A) = \\prod_k \\mathcal{N}(x_k|A,\\sigma^2)$ is a Gaussian likelihood and $p(A)$ is a Gaussian prior, their multiplication is proportional to a Gaussian. We will work this out with the canonical parameterization of the Gaussian since it is easier to multiply Gaussians in that domain. This means the posterior $p(A|D)$ is\n", "$$\\begin{align*}\n", " p(A|D) &\\propto p(A) p(D|A) \\\\\n", " &= \\mathcal{N}(A|m_A,v_A) \\prod_{k=1}^N \\mathcal{N}(x_k|A,\\sigma^2) \\\\\n", " &= \\mathcal{N}(A|m_A,v_A) \\prod_{k=1}^N \\mathcal{N}(A|x_k,\\sigma^2) \\\\\n", " &= \\mathcal{N}_c\\big(A \\Bigm|\\frac{m_A}{v_A},\\frac{1}{v_A}\\big)\\prod_{k=1}^N \\mathcal{N}_c\\big(A\\Bigm| \\frac{x_k}{\\sigma^2},\\frac{1}{\\sigma^2}\\big) \\\\\n", " &\\propto \\mathcal{N}_c\\big(A \\Bigm| \\frac{m_A}{v_A} + \\frac{1}{\\sigma^2} \\sum_k x_k , \\frac{1}{v_A} + \\frac{N}{\\sigma^2} \\big) \\,, \n", "\\end{align*}$$\n", "where we have made use of the fact that precision-weighted means and precisions add when multiplying Gaussians. In principle this description of the posterior completes the answer. \n", "> (b) The ML estimate can be found by\n", " $$\\begin{align*}\n", " \\nabla \\log p(D|A) &=0\\\\\n", " \\nabla \\sum_k \\log \\mathcal{N}(x_k|A,\\sigma^2) &= 0 \\\\\n", " \\nabla \\frac{-1}{2}\\sum_k \\frac{(x_k-A)^2}{\\sigma^2} &=0\\\\\n", " \\sum_k(x_k-A) &= 0 \\\\\n", " \\hat{A}_{ML} &= \\frac{1}{N}\\sum_{k=1}^N x_k\n", " \\end{align*}$$ \n", "> (c) The MAP estimate is simply the location where the posterior has its maximum value, which for a Gaussian posterior is its mean value. We computed in (a) the precision-weighted mean, so we need to divide by the precision (or multiply by the variance) to get the location of the mean: \n", "$$\\begin{align*} \n", "\\hat{A}_{MAP} &= \\left( \\frac{m_A}{v_A} + \\frac{1}{\\sigma^2} \\sum_k x_k\\right)\\cdot \\left( \\frac{1}{v_A} + \\frac{N}{\\sigma^2} \\right)^{-1} \\\\\n", "&= \\frac{v_A \\sum_k x_k + \\sigma^2 m_A}{N v_A + \\sigma^2}\n", "\\end{align*}$$ \n", "> (d) A Bayesian treatment requires putting a prior on the unknown variance. The variance is constrained to be positive, hence the support of the prior distribution needs to be on the positive reals. (In a multivariate case, positivity needs to be extended to symmetric positive definiteness.) Choosing a conjugate prior will simplify matters greatly. In this scenario the inverse-Gamma distribution is the conjugate prior for the unknown variance (equivalently, a Gamma prior on the precision); the joint conjugate prior for the mean and precision is called a Normal-Gamma distribution. See https://www.seas.harvard.edu/courses/cs281/papers/murphy-2007.pdf for the analytical treatment. \n",
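"\n", "The posterior of (a) and the estimates of (b) and (c) can be checked numerically. A minimal sketch (our own, with made-up prior parameters; `numpy` assumed):\n", "\n", "```python\n", "import numpy as np\n", "\n", "rng = np.random.default_rng(4)\n", "A_true, var = 2.0, 1.0                  # noise variance sigma^2 = 1 is known\n", "x = A_true + rng.normal(0.0, np.sqrt(var), size=50)\n", "\n", "m_A, v_A = 0.0, 10.0                    # Gaussian prior N(A | m_A, v_A)\n", "# Posterior: precisions and precision-weighted means add\n", "prec_post = 1.0 / v_A + len(x) / var\n", "m_post = (m_A / v_A + x.sum() / var) / prec_post\n", "\n", "print(x.mean())                         # ML estimate (sample mean)\n", "print(m_post, 1.0 / prec_post)          # posterior mean (= MAP) and variance\n", "```\n",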
" \n", "- **[5]** (##) We consider the coin toss example from the notebook and use a conjugate prior for a Bernoulli likelihood function. \n", " (a) Derive the Maximum Likelihood estimate. \n", " (b) Derive the MAP estimate. \n", " (c) Do these two estimates ever coincide (if so, under what circumstances)? \n", "> (a) $$\\begin{align*}\n", " \\nabla \\log p(D|\\mu) &= 0 \\\\\n", " \\nabla \\left( n\\log \\mu + (N-n)\\log(1-\\mu)\\right) &= 0\\\\\n", " \\frac{n}{\\mu} - \\frac{N-n}{1-\\mu} &= 0 \\\\\n", " \\hat{\\mu}_{\\text{ML}} &= \\frac{n}{N}\n", " \\end{align*}$$ \n", "> (b) Assuming a beta prior $\\mathcal{B}(\\mu|\\alpha,\\beta)$, we can write the posterior as\n", " $$\\begin{align*}\n", " p(\\mu|D) &\\propto p(D|\\mu)p(\\mu) \\\\\n", " &\\propto \\mu^n (1-\\mu)^{N-n} \\mu^{\\alpha-1} (1-\\mu)^{\\beta-1} \\\\\n", " &\\propto \\mathcal{B}(\\mu|n+\\alpha,N-n+\\beta)\n", " \\end{align*}$$ \n", " The MAP estimate for a beta distribution $\\mathcal{B}(a,b)$ is located at $\\frac{a - 1}{a+b-2}$, see [wikipedia](https://en.wikipedia.org/wiki/Beta_distribution). Hence, \n", " $$\\begin{align*}\n", "\\hat{\\mu}_{\\text{MAP}} &= \\frac{(n+\\alpha)-1}{(n+\\alpha) + (N-n+\\beta) -2} \\\\\n", " &= \\frac{n+\\alpha-1}{N + \\alpha +\\beta -2}\n", "\\end{align*}$$ \n", "> (c) Yes. For a uniform prior ($\\alpha=\\beta=1$) the two estimates are identical, since then $\\hat{\\mu}_{\\text{MAP}} = \\frac{n}{N} = \\hat{\\mu}_{\\text{ML}}$. Moreover, as $N$ gets larger, the influence of the prior vanishes and the MAP estimate converges to the ML estimate.\n",
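"\n", "A minimal numerical illustration of (a)-(c) (our own sketch with made-up hyperparameters; `numpy` assumed):\n", "\n", "```python\n", "import numpy as np\n", "\n", "rng = np.random.default_rng(5)\n", "mu_true, N = 0.7, 100\n", "x = rng.random(N) < mu_true             # N Bernoulli coin tosses\n", "n = int(x.sum())                        # number of heads\n", "\n", "a, b = 2.0, 2.0                         # Beta prior hyperparameters\n", "mu_ml = n / N\n", "mu_map = (n + a - 1) / (N + a + b - 2)\n", "print(mu_ml, mu_map)   # MAP is pulled slightly toward the prior mean; for a = b = 1 they coincide\n", "```"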
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "- **[6]** (###) Given a single observation $x_0$ from a uniform distribution $\\mathrm{Unif}[0,1/\\theta]$, where $\\theta > 0$. \n", " (a) Show that $\\mathbb{E}[g(x_0)] = \\theta$ if and only if $\\int_0^{1/\\theta} g(u) du =1$. \n", " (b) Show that there is no function $g$ that satisfies the condition for all $\\theta > 0$. \n", "> (a) $$\\begin{align*}\n", " \\mathbb{E}[g(x_0)] &= \\int g(x_0)p(x_0) \\mathrm{d}x_0 \\\\\n", " &= \\int_0^{1/\\theta} g(x_0)\\theta \\mathrm{d}x_0 \\\\\n", " &= \\theta \\int_0^{1/\\theta} g(x_0)\\mathrm{d}x_0\n", " \\end{align*}$$\n", " Hence $\\mathbb{E}[g(x_0)]=\\theta$ if and only if $\\int_0^{1/\\theta} g(u) \\mathrm{d}u=1$\n", " \n", "> (b) Suppose that there is a $g$ that satisfies the condition. Let $\\theta_1>\\theta_2$. Then we have\n", " $$\\begin{align*}\n", " \\int_0^{1/\\theta_1} g(u)\\mathrm{d}u &=1 \\\\\n", " \\int_0^{1/\\theta_2} g(u)\\mathrm{d}u &=1 \n", " \\end{align*}$$ \n", "> Subtracting the first line from the second, we obtain $\\int_{1/\\theta_1}^{1/\\theta_2} g(u)\\mathrm{d}u = 0$. Since this must hold for every pair $\\theta_1 > \\theta_2$, it implies $g(u) = 0$ almost everywhere, which contradicts $\\int_0^{1/\\theta} g(u)\\mathrm{d}u = 1$. Hence there is no function that satisfies the condition for all $\\theta > 0$." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Continuous Data and the Gaussian Distribution\n", "\n", "\n", "- **[1]** (##) We are given an IID data set $D = \\{x_1,x_2,\\ldots,x_N\\}$, where $x_n \\in \\mathbb{R}^M$. Let's assume that the data were drawn from a multivariate Gaussian (MVG),\n", "$$\\begin{align*}\n", "p(x_n|\\theta) = \\mathcal{N}(x_n|\\,\\mu,\\Sigma) = |2 \\pi \\Sigma|^{-\\frac{1}{2}} \\exp\\left\\{-\\frac{1}{2}(x_n-\\mu)^T\n", "\\Sigma^{-1} (x_n-\\mu) \\right\\}\n", "\\end{align*}$$ \n", " (a) Derive the log-likelihood of the parameters for these data. \n", " (b) Derive the maximum likelihood estimates for the mean $\\mu$ and variance $\\Sigma$ by setting the derivative of the log-likelihood to zero.\n", "\n", "> (a) Let $\\theta = \\{\\mu,\\Sigma\\}$. Then the log-likelihood can be worked out as \n", "$$\\begin{align*}\n", "\\log p(D|\\theta) &= \\log \\prod_n p(x_n|\\theta) \\\\\n", " &= \\log \\prod_n \\mathcal{N}(x_n|\\mu, \\Sigma) \\\\\n", "&= \\log \\prod_n (2\\pi)^{-M/2} |\\Sigma|^{-1/2} \\exp\\left( -\\frac{1}{2}(x_n-\\mu)^T \\Sigma^{-1}(x_n-\\mu)\\right) \\\\\n", "&= \\sum_n \\left( \\log (2\\pi)^{-M/2} + \\log |\\Sigma|^{-1/2} -\\frac{1}{2}(x_n-\\mu)^T \\Sigma^{-1}(x_n-\\mu)\\right) \\\\\n", "&\\propto \\frac{N}{2}\\log |\\Sigma|^{-1} - \\frac{1}{2}\\sum_n (x_n-\\mu)^T \\Sigma^{-1}(x_n-\\mu)\n", "\\end{align*}$$ \n", "> (b) First we take the derivative with respect to the mean.\n", "$$\\begin{align*}\n", "\\nabla_{\\mu} \\log p(D|\\theta) &\\propto - \\sum_n \\nabla_{\\mu} \\left(x_n-\\mu \\right)^T\\Sigma^{-1}\\left(x_n-\\mu \\right) \\\\\n", "&= - \\sum_n \\nabla_{\\mu} \\mathrm{Tr}\\left[-2 \\mu^T\\Sigma^{-1}x_n + \\mu^T \\Sigma^{-1}\\mu \\right] \\\\\n", "&= - \\sum_n \\left(-2 \\Sigma^{-1}x_n + 2\\Sigma^{-1}\\mu \\right) \\\\\n", "&= \\Sigma^{-1} \\sum_n (x_n - \\mu)\n", "\\end{align*}$$\n", "Setting the derivative to zero leads to $\\hat{\\mu} = \\frac{1}{N}\\sum_n x_n$. The derivative with respect to the covariance is a bit more involved. It's actually easier to compute this by taking the derivative with respect to the precision:\n", "$$\\begin{align*}\n", "\\nabla_{\\Sigma^{-1}} \\log p(D|\\theta) &= \\nabla_{\\Sigma^{-1}} \\left( \\frac{N}{2} \\log |\\Sigma| ^{-1} -\\frac{1}{2}\\sum_n (x_n-\\mu)^T\n", "\\Sigma^{-1} (x_n-\\mu)\\right) \\\\\n", "&= \\nabla_{\\Sigma^{-1}} \\left( \\frac{N}{2} \\log |\\Sigma| ^{-1} - \\frac{1}{2}\\sum_n \\mathrm{Tr}\\left[(x_n-\\mu)\n", "(x_n-\\mu)^T \\Sigma^{-1} \\right]\\right) \\\\\n", "&=\\frac{N}{2}\\Sigma - \\frac{1}{2}\\sum_n (x_n-\\mu)\n", "(x_n-\\mu)^T\n", "\\end{align*}$$\n", "Setting the derivative to zero leads to $\\hat{\\Sigma} = \\frac{1}{N}\\sum_n (x_n-\\hat{\\mu})\n", "(x_n-\\hat{\\mu})^T$.\n",
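"\n", "These ML estimates are easy to check on synthetic data. A minimal sketch (our own; `numpy` assumed):\n", "\n", "```python\n", "import numpy as np\n", "\n", "rng = np.random.default_rng(6)\n", "mu_true = np.array([1.0, -2.0])\n", "Sigma_true = np.array([[2.0, 0.8], [0.8, 1.0]])\n", "X = rng.multivariate_normal(mu_true, Sigma_true, size=5000)\n", "\n", "mu_hat = X.mean(axis=0)\n", "Sigma_hat = (X - mu_hat).T @ (X - mu_hat) / len(X)   # note 1/N, not 1/(N-1)\n", "print(mu_hat)       # close to mu_true\n", "print(Sigma_hat)    # close to Sigma_true\n", "```\n",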
"\n", "- **[2]** (#) Briefly explain why the Gaussian distribution is often preferred as a prior distribution over other distributions with the same support.\n", "> You can get this answer straight from the lesson notebook. Aside from the computational advantages (operations on distributions tend to make them more Gaussian, and Gaussians tend to remain Gaussian under computational manipulations), the Gaussian distribution is also the maximum-entropy distribution among distributions that are defined over real numbers. This means that there is no distribution with the same variance that assumes less information about its argument. \n", "\n", "- **[3]** (###) Prove that the Gaussian distribution is the maximum entropy distribution over the reals with specified mean and variance. \n", "> This is a challenging question (e.g., too difficult for a written exam :) which requires calculus of variations to solve rigorously. We will show how to maximize the entropy functional, which is $-\\int q(x) \\log q(x) \\mathrm{d}x$, subject to the specified constraints. We have three constraints: (1) we require $q(x)$ to be normalized, (2) $\\mathbb{E}[x] = m$ and (3) $\\mathbb{E}[x^2] = m^2+\\sigma^2$, where $m \\in \\mathbb{R}$ and $\\sigma^2 \\in \\mathbb{R}^{+}$ are arbitrary. Let us write entropy with the given constraints with undetermined multipliers as a Lagrangian\n", "$$\\begin{align*}\n", "L[q] = -\\int q(x)\\log q(x)\\mathrm{d}x + \\lambda \\left(\\int xq(x)\\mathrm{d}x - m\\right) + \\gamma \\left(\\int x^2q(x)\\mathrm{d}x - (\\sigma^2+m^2)\\right) + \\psi \\left(\\int q(x)\\mathrm{d}x -1 \\right).\n", "\\end{align*}$$\n", "We are searching for a distribution in a space of functions that makes the above Lagrangian stationary. This is a functional optimization problem defined over a function space, as opposed to an ordinary one over $\\mathbb{R}^N$. Even though the computational mechanics are somewhat different, the idea is the same as for ordinary optimization problems. We look at the functional derivative, which has a similar interpretation as a gradient (it can be thought of as the derivative of a functional with respect to a function). We want to solve\n", "$$\\begin{align*}\n", "\\frac{\\delta L[q]}{\\delta q} &= 0 \\\\\n", "-\\log q(x) + \\psi + \\lambda x + \\gamma x^2 &= 0 \\\\\n", "q(x) &= \\exp(+\\psi +\\lambda x + \\gamma x^2)\n", "\\end{align*}$$\n", "where $\\frac{\\delta L[q]}{\\delta q}$ is the functional derivative (the constant $-1$ that arises from differentiating $-q\\log q$ has been absorbed into $\\psi$). We can plug $q(x)$ back into the constraints and solve for the multipliers. Doing that we obtain $\\lambda=\\frac{m}{\\sigma^2}$, $\\gamma = -\\frac{1}{2\\sigma^2}$ and $\\psi = -\\frac{m^2}{2\\sigma^2}-\\log \\sqrt{2\\pi \\sigma^2}$. This means the distribution that maximizes entropy, $q(x)$, is a Gaussian distribution. \n", "\n", "- **[4]** (##) Prove that a linear transformation $z=Ax+b$ of a Gaussian variable $\\mathcal{N}(x|\\mu,\\Sigma)$ is Gaussian distributed as\n", "$$\n", "p(z) = \\mathcal{N} \\left(z \\,|\\, A\\mu+b, A\\Sigma A^T \\right) \n", "$$ \n", "\n", "> First, we show that a linear transformation of a Gaussian is a Gaussian. In general, for an invertible transformation $z=g(x)$, the transformed distribution is given by\n", "$$ p_Z(z) = \\frac{p_X(g^{-1}(z))}{\\left|\\det \\frac{\\partial g}{\\partial x}\\right|}\\,.$$ Since the transformation is linear, the Jacobian is simply $A$, so its determinant is independent of $z$, and consequently $p_Z(z)$ has the same functional form as $p_X(x)$, i.e. $p_Z(z)$ is also a Gaussian. The mean and variance can easily be determined by the calculation that we used in [question 8 of the Probability Theory exercises](#distribution-of-sum). This results in $p(z) = \\mathcal{N} \\left(z \\,|\\, A\\mu+b, A\\Sigma A^T \\right)$.\n", "\n", "- **[5]** (#) Given independent variables\n", "$x \\sim \\mathcal{N}(\\mu_x,\\sigma_x^2)$ and $y \\sim \\mathcal{N}(\\mu_y,\\sigma_y^2)$, what is the PDF for $z = A\\cdot(x -y) + b$? \n", "> $z$ is also Gaussian with \n", "$$\n", "p_z(z) = \\mathcal{N}(z \\,|\\, A(\\mu_x-\\mu_y)+b, \\, A (\\sigma_x^2 + \\sigma_y^2) A^T)\n", "$$\n", "\n", "\n", "- **[6]** (###) Compute\n", "\n", "\\begin{equation*}\n", " \\int_{-\\infty}^{\\infty} \\exp(-x^2)\\mathrm{d}x \\,.\n", " \\end{equation*}\n", " \n", "> For a Gaussian with zero mean and variance equal to $1$ we have\n", "$$\n", "\\int \\frac{1}{\\sqrt{2\\pi}}\\exp(-\\frac{1}{2}x^2)\\mathrm{d}x = 1\n", "$$\n", "Substituting $x = \\sqrt{2}\\,y$ with $\\mathrm{d}x=\\sqrt{2}\\,\\mathrm{d}y$ leads to $ \\int_{-\\infty}^{\\infty} \\exp(-y^2)\\mathrm{d}y=\\sqrt{\\pi}$. If you don't want to use the result of the Gaussian integral, you can still do this integral, see [youtube clip](https://www.youtube.com/watch?v=FYNHt4AMxc0). \n",
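"\n", "A quick numerical check of this integral (our own sketch, using `scipy`):\n", "\n", "```python\n", "import numpy as np\n", "from scipy.integrate import quad\n", "\n", "val, err = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)\n", "print(val, np.sqrt(np.pi))   # both approximately 1.772453850905516\n", "```\n",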
"\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Discrete Data and the Multinomial Distribution\n", "\n", "- **[1]** (##) We consider IID data $D = \\{x_1,x_2,\\ldots,x_N\\}$ obtained from tossing a $K$-sided die. We use a *binary selection variable*\n", "$$x_{nk} \\equiv \\begin{cases} 1 & \\text{if $x_n$ lands on $k$th face}\\\\\n", " 0 & \\text{otherwise}\n", "\\end{cases}\n", "$$\n", "with probabilities $p(x_{nk} = 1)=\\theta_k$. \n", " (a) Write down the probability for the $n$th observation $p(x_n|\\theta)$ and derive the log-likelihood $\\log p(D|\\theta)$. \n", " (b) Derive the maximum likelihood estimate for $\\theta$.\n", "> See lecture notes (on class homepage). \n", " (a) $p(x_n|\\theta) = \\prod_k \\theta_k^{x_{nk}} \\quad \\text{subject to} \\quad \\sum_k \\theta_k = 1$.\n", "$$\\ell(\\theta) = \\sum_k m_k \\log \\theta_k$$\n", "where $m_k = \\sum_n x_{nk}$. \n", " (b) $\\hat \\theta_k = \\frac{m_k}{N}$, the *sample proportion*.\n", "\n", "- **[2]** (#) In the notebook, Laplace's generalized rule of succession (the probability that we throw the $k$th face at the next toss) was derived as \n", "$$\\begin{align*}\n", "p(x_{\\bullet,k}=1|D) = \\frac{m_k + \\alpha_k }{ N+ \\sum_k \\alpha_k}\n", "\\end{align*}$$\n", "Provide an interpretation of the variables $m_k,N,\\alpha_k,\\sum_k\\alpha_k$.\n", "\n", "> $m_k$ is the total number of times that the $k$th face was thrown, and $\\alpha_k$ is the prior pseudo-count for the $k$th face, representing the number of observations of that face that we assume to have seen already. $\\sum_k m_k = N $ is the total number of rolls and $\\sum_k \\alpha_k $ is the total number of prior pseudo rolls.\n", "\n", "\n", "\n", "- **[3]** (##) Show that Laplace's generalized rule of succession can be worked out to a prediction that is composed of a prior prediction and data-based correction term. \n", "\n", "> $$\\begin{align*}\n", "p(x_{\\bullet,k}=1|D) &= \\frac{m_k + \\alpha_k }{ N+ \\sum_k \\alpha_k} \\\\\n", "&= \\frac{N}{N+\\sum_k \\alpha_k} \\frac{m_k}{N} + \\frac{\\sum_k \\alpha_k}{N+\\sum_k \\alpha_k}\\frac{\\alpha_k}{\\sum_k\\alpha_k} \\\\\n", "&= \\underbrace{\\frac{\\alpha_k}{\\sum_k\\alpha_k}}_{\\text{prior prediction}} + \\underbrace{\\frac{N}{N+\\sum_k \\alpha_k} \\cdot \\underbrace{\\left(\\frac{m_k}{N} - \\frac{\\alpha_k}{\\sum_k\\alpha_k}\\right)}_{\\text{prediction error}}}_{\\text{data-based correction}}\n", "\\end{align*}$$\n",
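"\n", "A small numerical illustration of [1] and [2] (our own sketch with made-up die probabilities; `numpy` assumed):\n", "\n", "```python\n", "import numpy as np\n", "\n", "rng = np.random.default_rng(7)\n", "theta = np.array([0.2, 0.3, 0.5])    # true face probabilities of a K = 3 die\n", "alpha = np.array([1.0, 1.0, 1.0])    # prior pseudo-counts\n", "m = rng.multinomial(30, theta)       # observed face counts, N = 30 tosses\n", "\n", "ml = m / m.sum()                                 # sample proportions\n", "laplace = (m + alpha) / (m.sum() + alpha.sum())  # rule of succession\n", "print(ml, laplace)   # Laplace shrinks the ML estimate toward the prior prediction\n", "```\n",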
"\n", "- **[4]** (#) Verify that \n", " (a) the categorical distribution is a special case of the multinomial for $N=1$. \n", " (b) the Bernoulli is a special case of the categorical distribution for $K=2$. \n", " (c) the binomial is a special case of the multinomial for $K=2$.\n", "\n", "> (a) The probability mass function of a multinomial distribution is $p(D_m|\\mu) =\\frac{N!}{m_1! m_2!\\ldots m_K!} \\,\\prod_k \\mu_k^{m_k}$ over the data frequencies $D_m=\\{m_1,\\ldots,m_K\\}$ with the constraint that $\\sum_k \\mu_k = 1$ and $\\sum_k m_k=N$. Setting $N=1$ we see that $p(D_m|\\mu) \\propto \\prod_k \\mu_k^{m_k}$ with $\\sum_k m_k=1$, making the sample space one-hot coded, as given by the categorical distribution. \n", "> (b) When $K=2$, the constraint for the categorical distribution takes the form $m_1=1-m_2$ leading to $p(D_m|\\mu) \\propto \\mu_1^{m_1}(1-\\mu_1)^{1-m_1}$ which is associated with the Bernoulli distribution. \n", "> (c) Plugging $K=2$ into the multinomial distribution leads to $p(D_m|\\mu) =\\frac{N!}{m_1!\\, m_2!}\\mu_1^{m_1}\\mu_2^{m_2}$ with the constraints $m_1+m_2=N$ and $\\mu_1+\\mu_2=1$. Then plugging the constraints back in we obtain $p(D_m|\\mu) = \\frac{N!}{m_1! (N-m_1)!}\\mu_1^{m_1}\\left(1-\\mu_1\\right)^{N-m_1}$, which is the binomial distribution.\n", "\n", "\n", "- **[5]** (###) Determine the mean, variance and mode of a Beta distribution.\n", "> The Beta distribution is given by $\\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)}x^{\\alpha-1}(1-x)^{\\beta-1}$. Define $\\frac{\\Gamma(\\alpha)\\Gamma(\\beta)}{\\Gamma(\\alpha+\\beta)} \\triangleq \\mathcal{B}(\\alpha,\\beta)$, which is the normalization constant. Notice that this definition makes $\\int_0^1 x^{\\alpha-1}(1-x)^{\\beta-1}\\mathrm{d}x = \\mathcal{B}(\\alpha,\\beta)$. Together with $\\Gamma(x+1) = x\\Gamma(x)$ we can use these identities to obtain the requested statistics:\n", "$$\\begin{align*}\n", "\\mathbb{E}[x] &= \\frac{1}{\\mathcal{B}(\\alpha,\\beta)}\\int_0^1 x x^{\\alpha-1}(1-x)^{\\beta-1}\\mathrm{d}x \\\\\n", "&= \\frac{1}{\\mathcal{B}(\\alpha,\\beta)}\\int_0^1x^{\\alpha}(1-x)^{\\beta-1}\\mathrm{d}x \\\\\n", "&= \\frac{\\mathcal{B}(\\alpha+1,\\beta)}{\\mathcal{B}(\\alpha,\\beta)} \\\\\n", "&= \\frac{\\Gamma(\\alpha+1)\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\alpha+\\beta+1)}\\\\\n", "&= \\frac{\\alpha \\Gamma(\\alpha)\\Gamma(\\alpha+\\beta) }{(\\alpha+\\beta)\\Gamma(\\alpha)\\Gamma(\\alpha+\\beta)}\\\\\n", "&= \\frac{\\alpha}{\\alpha+\\beta} \\\\\n", "\\mathbb{V}[x] &= \\mathbb{E}[x^2] - \\mathbb{E}[x]^2 \\\\\n", "&= \\frac{1}{\\mathcal{B}(\\alpha,\\beta)}\\int_0^1 x^2 x^{\\alpha-1}(1-x)^{\\beta-1}\\mathrm{d}x - \\frac{\\alpha^2}{(\\alpha+\\beta)^2} \\\\\n", "&= \\frac{\\mathcal{B}(\\alpha+2,\\beta)}{\\mathcal{B}(\\alpha,\\beta)} - \\frac{\\alpha^2}{(\\alpha+\\beta)^2} \\\\\n", "&= \\frac{\\alpha}{\\alpha+\\beta}\\left(\\frac{\\alpha+1}{\\alpha+\\beta+1} - \\frac{\\alpha}{\\alpha+\\beta}\\right) \\\\\n", "&= \\frac{\\alpha\\beta}{(\\alpha+\\beta)^2(\\alpha+\\beta+1)}\n", "\\end{align*}$$\n", "If $\\alpha=\\beta=1$, the Beta distribution reduces to the uniform distribution, which doesn't have a unique mode. If one of the parameters is $<1$, then the mode is at one of the edges. When both parameters are $>1$, the mode is well-defined and lies in the interior of the support. Assuming the parameters are $>1$ we can evaluate the mode as\n", "$$\\begin{align*}\n", "\\nabla_x x^{\\alpha-1}(1-x)^{\\beta-1} &= 0\\\\\n", "\\frac{\\alpha-1}{\\beta-1} &= \\frac{x}{1-x} \\\\\n", "\\alpha-1 &= x(\\alpha+\\beta-2) \\\\\n", "\\Rightarrow x_{mode} &= \\frac{\\alpha-1}{\\alpha+\\beta-2}.\n", "\\end{align*}$$\n",
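"\n", "These statistics can be cross-checked against `scipy` (our own sketch with made-up parameters):\n", "\n", "```python\n", "from scipy.stats import beta as beta_dist\n", "\n", "a, b = 3.0, 5.0\n", "mean = a / (a + b)\n", "var = a * b / ((a + b)**2 * (a + b + 1))\n", "mode = (a - 1) / (a + b - 2)\n", "\n", "print(mean, beta_dist.mean(a, b))   # identical\n", "print(var, beta_dist.var(a, b))     # identical\n", "print(mode)                         # location of the peak of beta_dist.pdf\n", "```\n",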
"\n", "- **[6]** (###) Consider a data set of binary variables $D=\\{x_1,x_2,\\ldots,x_N\\}$ with a Bernoulli distribution $\\mathrm{Ber}(x_k|\\mu)$ as data generating distribution and a Beta prior for $\\mu$. Assume that you make $n$ observations with $x=1$ and $N-n$ observations with $x=0$. Now consider a new draw $x_\\bullet$. We are interested in computing $p(x_\\bullet|D)$. Show that the mean value for $p(x_\\bullet|D)$ lies in between the prior mean and Maximum Likelihood estimate.\n", "\n", "> In the lectures we have seen that $p(x_\\bullet =1|D) = \\frac{a+n}{a+b+N}$, where $a$ and $b$ are parameters of the Beta prior. The ML estimate is $\\frac{n}{N}$ and the prior mean is $\\frac{a}{a+b}$. To show that the prediction lies in between the ML and prior estimates, we will try to write the prediction as a convex combination of the latter two. That is, we want to solve for $\\lambda$ in\n", "$$\\begin{align*}\n", "(1-\\lambda) \\frac{n}{N} + \\lambda\\frac{a}{a+b} &= \\frac{a+n}{a+b+N} \\\\\n", "\\lambda &= \\frac{1}{1+\\frac{N}{a+b}} \n", "\\end{align*}$$\n", "Since $a,b$ and $N$ are positive, it follows that $0<\\lambda <1$. This means the prediction is a convex combination of the prior and ML estimates and thus lies in between the two.\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Regression\n", "\n", "\n", "\n", "- **[1]** (#) (a) Write down the generative model for Bayesian linear ordinary regression (i.e., write the likelihood and prior). \n", " (b) State the inference task for the weight parameter in the model. \n", " (c) Why do we call this problem linear? \n", " \n", "> (a)\n", "$$\\begin{align*}\n", "\\text{likelihood: } p(y_n|x_n,w) &= \\mathcal{N}(y_n|w^T\\phi(x_n),\\beta^{-1}) \\\\\n", "\\text{prior: } p(w|\\alpha) &= \\mathcal{N}(w|0,\\alpha^{-1}I)\n", "\\end{align*}$$ \n", "> (b) The inference task is to compute\n", " $$p(w|D) = \\frac{p(D|w)p(w)}{p(D)}$$ \n", "> (c) The model is linear with respect to $w$, which is the reason we call it linear.\n", "\n", "\n", "- **[2]** (##) Consider a linear regression problem \n", "$$\\begin{align*}\n", "p(y\\,|\\,\\mathbf{X},w,\\beta) &= \\mathcal{N}(y\\,|\\,\\mathbf{X} w,\\beta^{-1} \\mathbf{I}) \\\\\n", " &= \\prod_n \\mathcal{N}(y_n\\,|\\,w^T x_n,\\beta^{-1})\n", "\\end{align*}$$\n", "with $y, X$ and $w$ as defined in the notebook. \n", " (a) Work out the maximum likelihood solution for linear regression by solving\n", "$$\n", "\\nabla_{w} \\log p(y|X,w) = 0 \\,.\n", "$$ \n", " (b) Work out the MAP solution. How does it relate to the ML solution?\n", "> (a) The gradient of the log-likelihood is \n", "$$\\begin{equation*}\n", "\\nabla_{w} \\log p(y|X,w) = \\beta X^T(y-Xw) \n", "\\end{equation*}$$ \n", "> Setting the derivative to zero leads to \n", "$$\\begin{equation*}\n", "w_{ML} = (X^TX)^{-1}X^Ty \n", "\\end{equation*}$$\n", "> (b) We now add a prior $w \\sim \\mathcal{N}(0,\\alpha^{-1}I)$, and a similar derivation leads to $$\\begin{equation*}\n", "\\nabla_{w} \\log p(y,w|X) = \\beta X^T(y-Xw) - \\alpha w\n", "\\end{equation*}$$ \n", "> Setting the derivative to zero leads to \n", "$$\\begin{equation*}\n", "w_{MAP} = (X^TX+\\frac{\\alpha}{\\beta}I)^{-1}X^Ty \n", "\\end{equation*}$$ \n", "> The MAP solution weighs both the prior and likelihood. If $\\frac{\\alpha}{\\beta} $ is close to zero (if the prior is uninformative), then the ML and MAP solutions are close to each other.\n",
"\n",
"\n",
"- **[3]** (###) Show that the variance of the predictive distribution for linear regression decreases as more data becomes available.\n",
"> The variance of the predictive distribution is given by\n",
"$$\\begin{align*}\n",
"\\sigma_{N+1}^2(x) &= 1/\\beta + \\phi(x)^TS_{N+1}\\phi(x) \\\\\n",
"S_{N+1} &= (S_N^{-1} + \\beta\\phi_{N+1}\\phi_{N+1}^T)^{-1} \\\\\n",
"&= S_N - \\frac{\\beta S_N\\phi_{N+1}\\phi_{N+1}^TS_N}{1+\\beta\\phi_{N+1}^TS_N\\phi_{N+1}}\n",
"\\end{align*}$$\n",
"where in the last equality we applied [Woodbury's matrix identity](https://en.wikipedia.org/wiki/Woodbury_matrix_identity), which is also listed in [Sam Roweis' matrix notes, eq. 10](https://github.com/bertdv/BMLIP/raw/master/lessons/notebooks/files/Roweis-1999-matrix-identities.pdf?dl=0). Using the recursive relation for $S_N$, we can write the variance for the next observation as \n",
"$$\\begin{align*}\n",
"\\sigma_{N+1}^2(x) &= \\sigma_N^2(x) - \\frac{\\beta\\phi(x)^TS_N\\phi_{N+1}\\phi_{N+1}^TS_N\\phi(x)}{1+\\beta\\phi_{N+1}^TS_N\\phi_{N+1}}.\n",
"\\end{align*}$$\n",
"Because $S_N$ is symmetric and positive definite, the numerator of the second term equals $\\beta\\left(\\phi_{N+1}^TS_N\\phi(x)\\right)^2 \\geq 0$ and the denominator is positive, hence \n",
"$\\sigma_N^2(x) \\geqslant \\sigma_{N+1}^2(x)$. This shows that the predictive variance decreases as more data becomes available.\n",
"\n",
"\n",
"- **[4]** (#) Assume a given data set $D=\\{(x_1,y_1),(x_2,y_2),\\ldots,(x_N,y_N)\\}$ with $x \\in \\mathbb{R}^M$ and $y \\in \\mathbb{R}$. We propose a model given by the following data generating distribution and weight prior functions: \n",
"$$\\begin{equation*} p(y_n|x_n,w)\\cdot p(w)\\,. \\end{equation*}$$\n",
" (a) Write down Bayes rule for generating the posterior $p(w|D)$ from a prior and likelihood. \n",
" (b) Work out how to compute a distribution for the predicted value $y_\\bullet$, given a new input $x_\\bullet$. \n",
"> (a) \n",
"$$\n",
"p(w|D) = \\frac{p(w) \\prod_{n=1}^N p(y_n|x_n,w)}{\\int p(w) \\prod_{n=1}^N p(y_n|x_n,w)\\mathrm{d}w}\n",
"$$\n",
"> (b)\n",
"$$\n",
"p(y_\\bullet|x_\\bullet,D) = \\int p(y_\\bullet|x_\\bullet,w) p(w|D) \\mathrm{d}w\n",
"$$\n",
"\n",
"- **[5]** (#) In the class we use the following prior for the weights:\n",
"$$\\begin{equation*}\n",
"p(w|\\alpha) = \\mathcal{N}\\left(w | 0, \\alpha^{-1} I \\right)\n",
"\\end{equation*}$$\n",
" (a) Give some considerations for choosing a Gaussian prior for the weights. \n",
" (b) We could have chosen a prior with full (not diagonal) covariance matrix $p(w|\\alpha) = \\mathcal{N}\\left(w | 0, \\Sigma \\right)$. Would that be better? Give your thoughts on that issue. \n",
" (c) Generally we choose $\\alpha$ as a small positive number. Give your thoughts on that choice as opposed to choosing a large positive value. How about choosing a negative value for $\\alpha$?\n",
"> (a) These considerations can be both computational (eg, a Gaussian prior times a Gaussian likelihood leads to a Gaussian posterior) or based on available information (eg, among all distributions with the same variance, the Gaussian distribution has the largest entropy. Roughly this means that the Gaussian makes the least amount of assumptions across all distributions with the same variance). \n",
"> (b) If you have no prior information about covariances, why make that assumption? If you do have some prior information, eg based on the physical process, then by all means feel free to add those constraints to the prior. Note that the posterior variance is given by $S_N = \\left( \\alpha \\mathbf{I} + \\beta \\mathbf{X}^T\\mathbf{X}\\right)^{-1}$. Importantly, the term $\\alpha \\mathbf{I}$ for small $\\alpha$ makes sure that the matrix is invertible, even for zero observations. \n",
"> (c) As you can see from the posterior variance (see answer to (b)), for smaller values of $\\alpha$, the data term $\\mathbf{X}^T\\mathbf{X}$ gets to play a role after fewer observations. Hence, if you have little prior information, it's better to choose a small value for $\\alpha$. A negative value for $\\alpha$ is not allowed, since $\\alpha$ is a precision (an inverse variance), which must be positive.\n",
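"\n",
"> The shrinkage of the predictive variance described in [3] and the role of a small $\\alpha$ in [5] can be checked numerically; a minimal sketch (the feature map and all constants are arbitrary example choices):\n",
"```julia\n",
"using LinearAlgebra, Random\n",
"\n",
"Random.seed!(2)\n",
"α, β = 1.0, 5.0\n",
"ϕ(x) = [1.0, x]                  # simple linear feature map\n",
"ϕ_test = ϕ(0.5)\n",
"\n",
"for N in (1, 10, 100)\n",
"    X = hcat(ϕ.(randn(N))...)'   # N×2 design matrix\n",
"    S_N = inv(α*I(2) + β*(X'X))  # posterior covariance of the weights\n",
"    pred_var = 1/β + ϕ_test' * S_N * ϕ_test\n",
"    println(\"N = $N:  predictive variance = $(round(pred_var, digits=4))\")\n",
"end\n",
"```\n",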
"\n",
"" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Generative Classification\n",
"\n",
"- **[1]** You have a machine that measures property $x$, the \"orangeness\" of liquids. You wish to discriminate between $C_1 = \\text{`Fanta'}$ and $C_2 = \\text{`Orangina'}$. It is known that\n",
"\n",
"$$\\begin{align*}\n",
"p(x|C_1) &= \\begin{cases} 10 & 1.0 \\leq x \\leq 1.1\\\\\n",
" 0 & \\text{otherwise}\n",
" \\end{cases}\\\\\n",
"p(x|C_2) &= \\begin{cases} 200(x - 1) & 1.0 \\leq x \\leq 1.1\\\\\n",
"0 & \\text{otherwise}\n",
"\\end{cases}\n",
"\\end{align*}$$\n",
"\n",
"The prior probabilities $p(C_1) = 0.6$ and $p(C_2) = 0.4$ are also known from experience. \n",
" (a) (##) A \"Bayes Classifier\" is given by\n",
" \n",
"$$ \\text{Decision} = \\begin{cases} C_1 & \\text{if } p(C_1|x)>p(C_2|x) \\\\\n",
" C_2 & \\text{otherwise}\n",
" \\end{cases}\n",
"$$\n",
"\n",
"Derive the optimal Bayes classifier. \n",
" (b) (###) The probability of making the wrong decision, given $x$, is\n",
" \n",
"$$\n",
"p(\\text{error}|x)= \\begin{cases} p(C_1|x) & \\text{if we decide $C_2$}\\\\\n",
" p(C_2|x) & \\text{if we decide $C_1$}\n",
"\\end{cases}\n",
"$$\n",
"\n",
"Compute the **total** error probability $p(\\text{error})$ for the Bayes classifier in this example.\n",
"\n",
"> (a) We choose $C_1$ if $p(C_1|x)/p(C_2|x) > 1$. This condition can be worked out as\n",
"$$\n",
"\\frac{p(C_1|x)}{p(C_2|x)} = \\frac{p(x|C_1)p(C_1)}{p(x|C_2)p(C_2)} = \\frac{10 \\times 0.6}{200(x-1)\\times 0.4}>1\\,,\n",
"$$\n",
"i.e., $6 > 80(x-1)$, or $x < 1.075$. So the Bayes classifier chooses\n",
"$$\\begin{align*}\n",
"C_1 &\\quad \\text{ if $1.0\\leq x < 1.075$}\\\\ \n",
"C_2 &\\quad \\text{ if $1.075 \\leq x \\leq 1.1$ }\n",
"\\end{align*}$$\n",
"The probability that $x$ falls outside the interval $[1.0,1.1]$ is zero. \n",
"> (b) The total probability of error is $p(\\text{error})=\\int_x p(\\text{error}|x)p(x) \\mathrm{d}{x}$. We can work this out as\n",
"\n",
"$$\\begin{align*}\n",
"p(\\text{error}) &= \\int_x p(\\text{error}|x)p(x)\\mathrm{d}{x}\\\\\n",
"&= \\int_{1.0}^{1.075} p(C_2|x)p(x) \\mathrm{d}{x} + \\int_{1.075}^{1.1} p(C_1|x)p(x) \\mathrm{d}{x}\\\\\n",
"&= \\int_{1.0}^{1.075} p(x|C_2)p(C_2) \\mathrm{d}{x} + \\int_{1.075}^{1.1} p(x|C_1)p(C_1) \\mathrm{d}{x}\\\\\n",
"&= \\int_{1.0}^{1.075}0.4\\cdot 200(x-1) \\mathrm{d}{x} + \\int_{1.075}^{1.1} 0.6\\cdot 10 \\mathrm{d}{x}\\\\\n",
"&=80\\cdot[x^2/2-x]_{1.0}^{1.075} + 6\\cdot[x]_{1.075}^{1.1}\\\\\n",
"&=0.225 + 0.15\\\\\n",
"&=0.375\n",
"\\end{align*}$$\n",
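"\n",
"> Both the decision boundary and the total error are easy to verify numerically; a minimal sketch in plain Julia (grid-based integration with an arbitrary step size):\n",
"```julia\n",
"p1, p2 = 0.6, 0.4                                # priors p(C1), p(C2)\n",
"px_C1(x) = (1.0 <= x <= 1.1) ? 10.0 : 0.0        # p(x|C1)\n",
"px_C2(x) = (1.0 <= x <= 1.1) ? 200(x - 1) : 0.0  # p(x|C2)\n",
"\n",
"decide_C1(x) = px_C1(x)*p1 > px_C2(x)*p2         # Bayes classifier\n",
"\n",
"# p(error) = ∫ min(p(x|C1)p(C1), p(x|C2)p(C2)) dx, approximated on a grid\n",
"Δ = 1e-5\n",
"p_error = sum(min(px_C1(x)*p1, px_C2(x)*p2) for x in 1.0:Δ:1.1) * Δ\n",
"\n",
"@show decide_C1(1.05)    # true:  below the boundary x = 1.075\n",
"@show decide_C1(1.09)    # false: above the boundary\n",
"@show p_error            # ≈ 0.375\n",
"```\n",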
\n", "> The Log-likelihood is given by\n", "$$\\log p(\\{\\phi_n,t,n\\} | \\{\\pi_k\\}) = \\sum_n \\sum_k t_{nk}\\left(\\log p(\\phi_n|C_k) + \\log \\pi_k\\right)\\,.$$ Using the method of Lagrange multipliers (Bishop app.E), we augment the log-likelihood with the constraint and obtain the Lagrangian $$\\log p(\\{\\phi_n,t_{nk}\\} | \\{\\pi_k\\})+\\lambda \\left(\\sum_k \\pi_k -1 \\right)\\,.$$ In order to maximize, we set the derivative with respect to $\\pi_k$ equal to zero and obtain \n", "$$\\begin{align*}\n", "\\sum_n \\frac{t_{nk}}{\\pi_k}+\\lambda &=0 \\\\\n", "-\\pi_k\\lambda &=\\sum_n t_{nk} = N_k \\\\\n", "-\\lambda \\sum_k \\pi_k &= \\sum_k \\sum_n t_{nk} \\\\\n", "\\lambda &= -N \n", "\\end{align*}$$\n", "\n", "- **[4]** (##) (see Bishop exercise 4.10). \n", "> We can write the log-likelihood as\n", "$$\\begin{align*}\n", "\\log p(\\{\\phi_n,t_n\\}|\\{\\pi_k\\}) \\propto -0.5\\sum_n\\sum_kt_{nk}\\left(\\log |\\Sigma|+(\\phi_n-\\mu_k)^T\\Sigma^{-1}(\\phi-\\mu)\\right)\n", "\\end{align*}$$\n", "The derivatives of the likelihood with respect to mean and shared covariance are respectively\n", "$$\\begin{align*}\n", "\\nabla_{\\mu_k}\\log p(\\{\\phi_n,t_n\\}|\\{\\pi_k\\}) &= \\sum_n\\sum_k t_{nk}\\Sigma^{-1}\\left(\\phi_n-\\mu_k\\right) = 0 \\\\\n", "\\sum_n t_{nk}\\left(\\phi_n-\\mu_k\\right))&=0 \\\\\n", "\\mu_k &= \\frac{1}{N_k}\\sum_n t_{nk}\\phi_n \\\\\n", "\\nabla_{\\Sigma}\\log p(\\{\\phi_n,t_n\\}|\\{\\pi_k\\})&=\\sum_n\\sum_k t_{nk}\\left(\\Sigma - (\\phi_n-\\mu_k)(\\phi_n-\\mu_k)^T\\right) = 0 \\\\\n", "\\sum_n\\sum_k t_{nk}\\Sigma &= \\sum_n\\sum_k t_{nk}(\\phi_n-\\mu_k)(\\phi_n-\\mu_k)^T \\\\\n", "\\Sigma &= \\frac{1}{N}\\sum_k\\sum_n t_{nk}(\\phi_n-\\mu_k)(\\phi_n-\\mu_k)^T\n", "\\end{align*}$$\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Discriminative Classification\n", "\n", "- **[1]** Given a data set $D=\\{(x_1,y_1),\\ldots,(x_N,y_N)\\}$, where $x_n \\in \\mathbb{R}^M$ and $y_n \\in \\{0,1\\}$. The probabilistic classification method known as *logistic regression* attempts to model these data as\n", "$$p(y_n=1|x_n) = \\sigma(\\theta^T x_n + b)$$\n", "where $\\sigma(x) = 1/(1+e^{-x})$ is the *logistic function*. Let's introduce shorthand notation $\\mu_n=\\sigma(\\theta^T x_n + b)$. So, for every input $x_n$, we have a model output $\\mu_n$ and an actual data output $y_n$. \n", " (a) Express $p(y_n|x_n)$ as a Bernoulli distribution in terms of $\\mu_n$ and $y_n$. \n", " (b) If furthermore is given that the data set is IID, show that the log-likelihood is given by\n", "$$\n", "L(\\theta) \\triangleq \\log p(D|\\theta) = \\sum_n \\left\\{y_n \\log \\mu_n + (1-y_n)\\log(1-\\mu_n)\\right\\}\n", "$$ \n", " (c) Prove that the derivative of the logistic function is given by\n", "$$\n", "\\sigma^\\prime(\\xi) = \\sigma(\\xi)\\cdot\\left(1-\\sigma(\\xi)\\right)\n", "$$ \n", " (d) Show that the derivative of the log-likelihood is\n", "$$\n", "\\nabla_\\theta L(\\theta) = \\sum_{n=1}^N \\left( y_n - \\sigma(\\theta^T x_n +b)\\right)x_n\n", "$$ \n", " (e) Design a gradient-ascent algorithm for maximizing $L(\\theta)$ with respect to $\\theta$. 
\n", "\n", " > (a) $p(y_n|x_n) = p(y_n=1|x_n)^{y_n} p(y_n=0|x_n)^{1-y_n} = \\mu_n^{y_n}(1-\\mu_n)^{1-y_n}$ \n", " > (b) The log-likelihood is given by\n", "$$\\begin{align*} L(\\theta) &= \\log p(D|\\theta) = \\sum_n \\log p(y_n|x_n,\\theta)\\\\\n", "&= \\sum_n \\left\\{y_n \\log \\mu_n + (1-y_n)\\log(1-\\mu_n)\\right\\}\n", "\\end{align*}$$ \n", " > (c) $$\\begin{align*}\n", "\\frac{d{}}{d{x}}\\left( \\frac{1}{1+e^{-x}}\\right) &= \\frac{(1+e^{-x})\\cdot 0 - (-e^{-x}\\cdot 1)}{(1+e^{-x})^2}\\\\\n", "&= \\frac{e^{-x}}{(1+e^{-x})^2} = \\frac{1}{1+e^{-x}}\\cdot \\frac{e^{-x}}{1+e^{-x}}\\\\\n", "&=\\sigma(x)\\left( 1-\\sigma(x)\\right)\n", "\\end{align*}$$ \n", " > (d)\n", "$$\\begin{align*}\n", "\\nabla_\\theta L(\\theta) &= \\sum_n \\frac{\\partial{L}}{\\partial{\\mu_n}}\\cdot \\frac{\\partial{\\mu_n}}{\\partial{(\\theta^T x_n +b)}} \\cdot \\frac{\\partial{(\\theta^T x_n +b)}}{\\partial{\\theta}}\\\\\n", "&= \\sum_n \\left(\\frac{y_n}{\\mu_n} - \\frac{1-y_n}{1-\\mu_n} \\right) \\cdot \\mu_n(1-\\mu_n) \\cdot x_n\\\\\n", "&= \\sum_n \\frac{y_n - \\mu_n}{\\mu_n(1-\\mu_n)} \\cdot \\mu_n(1-\\mu_n) \\cdot x_n\\\\\n", "&= \\sum_n (y_n - \\mu_n) \\cdot x_n\n", "\\end{align*}$$ \n", " > (e)\n", "$$ \\theta^{(t+1)} = \\theta^{(t)} + \\rho \\sum_n (y_n - \\mu_n^{(t)})x_n$$\n", "\n", "- **[2]** Describe shortly the similarities and differences between the discriminative and generative approach to classification.\n", " > Both aim to build an algorithm for $p(y|x)$ where $y$ is a discrete class label and $x$ is a vector of real (or possibly discretely valued) variables. In the discriminative approach, we propose a model $p(y|x,\\theta)$ and use a training data set $D=\\{(x_1,y_1),(x_2,y_2),\\ldots,(x_N,y_N)\\}$ to infer good values for the parameters. For instance, in a maximum likelihood setting, we choose the parameters $\\hat{\\theta}$ that maximize $p(D|\\theta)$. The classification algorithm is now given by $$p(y|x) = p(y|x,\\hat{\\theta})\\,.$$ In the generative approach, we also aim to design an algorithm $p(y|x)$ through a parametric model that is now given by $p(y,x|\\theta) = p(x|y,\\theta)p(y|\\theta)$. Again, we use the data set to train the parameters, eg, $\\hat{\\theta} = \\arg\\max_\\theta p(D|\\theta)$, and the classification algorithm is now given by Bayes rule: \n", " $$\n", " p(y|x) \\propto p(x|y,\\hat{\\theta})\\cdot p(y|\\hat{\\theta})\n", " $$\n", "\n", "- **[3]** (Bishop ex.4.7) (#) Show that the logistic sigmoid function $\\sigma(a) = \\frac{1}{1+\\exp(-a)}$ satisfies the property $\\sigma(-a) = 1-\\sigma(a)$ and that its inverse is given by $\\sigma^{-1}(y) = \\log\\{y/(1-y)\\}$.\n", " > $$\\begin{align*} \n", " 1- \\sigma(a) &= 1 - \\frac{1}{1 + \\exp(-a)} = \\frac{1+\\exp(-a) - 1}{1+\\exp(-a)} \\\\\n", " &= \\frac{\\exp(-a)}{1+\\exp(-a)} = \\frac{1}{\\exp(a)+1} = \\sigma(-a)\\end{align*}$$\n", " \n", " > Regarding the inverse, \n", " $$\\begin{align*} \n", " y = \\sigma(a) &= \\frac{1}{1+\\exp(-a)} \\\\\n", " \\Rightarrow \\frac1y - 1 &= \\exp(-a) \\\\\n", " \\Rightarrow \\log\\left( \\frac{1-y}{y}\\right) &= -a \\\\\n", " \\Rightarrow \\log\\left( \\frac{y}{1-y}\\right) &= a = \\sigma^{-1}(y)\n", "\\end{align*}$$\n", " \n", "- **[4]** (Bishop ex.4.16) (###) Consider a binary classification problem in which each observation $x_n$ is known to belong to one of two classes, corresponding to $y_n = 0$ and $y_n = 1$. Suppose that the procedure for collecting training data is imperfect, so that training points are sometimes mislabelled. 
"\n",
"- **[4]** (Bishop ex.4.16) (###) Consider a binary classification problem in which each observation $x_n$ is known to belong to one of two classes, corresponding to $y_n = 0$ and $y_n = 1$. Suppose that the procedure for collecting training data is imperfect, so that training points are sometimes mislabelled. For every data point $x_n$, instead of having a value $y_n$ for the class label, we have a value $\\pi_n$ representing the probability that $y_n = 1$. Given a probabilistic model $p(y_n = 1|x_n,\\theta)$, write down the log-likelihood function appropriate to such a data set.\n",
" > If the values of the $\\{y_n\\}$ were known, then each data point for which $y_n = 1$ would contribute $\\log p(y_n = 1|x_n,\\theta)$ to the log-likelihood, and each point for which $y_n = 0$ would contribute $\\log p(y_n=0|x_n,\\theta) = \\log(1 - p(y_n = 1|x_n,\\theta))$ to the log-likelihood. A data point whose probability of having $y_n = 1$ is given by $\\pi_n$ will therefore contribute\n",
" $$\\pi_n \\log p(y_n=1|x_n,\\theta) + (1 - \\pi_n) \\log p(y_n=0|x_n,\\theta)$$\n",
" and so the overall log-likelihood given the data set is \n",
"$$\\sum_{n=1}^N \\pi_n \\log p(y_n=1|x_n,\\theta) + (1 - \\pi_n) \\log p(y_n=0|x_n,\\theta)$$\n",
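"\n",
" > This soft-label log-likelihood is a one-line generalization of the IID log-likelihood from [1](b); a minimal sketch (reusing `σ`, `X`, `y` and `fit_logreg` from the sketch above, with hypothetical label probabilities `πn`):\n",
"```julia\n",
"# soft-label log-likelihood: Σ_n πn log μ_n + (1 - πn) log(1 - μ_n)\n",
"function soft_loglik(θ, X, πn)\n",
"    μ = σ.(X * θ)                    # p(y_n = 1 | x_n, θ)\n",
"    return sum(πn .* log.(μ) .+ (1 .- πn) .* log.(1 .- μ))\n",
"end\n",
"\n",
"πn = 0.9 .* y .+ 0.05                # example: mildly uncertain labels\n",
"@show soft_loglik(fit_logreg(X, y), X, πn)\n",
"```\n",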
\\\\\n", " \\frac{\\partial}{\\partial x} \\ln f(x) &= \\frac{\\frac{\\partial}{\\partial x} f(x)}{f(x)} = (y-2x) \\\\\n", " \\frac{\\partial^2}{\\partial x^2} \\ln f(x) &= \\frac{\\partial}{\\partial x} (y-2x) = -2.\n", "\\end{align*}$$\n", "So, we find $A=2$, see lecture notes, and thus\n", " $$\\begin{align*}\n", " p_Y(y) &= \\int_{-\\infty}^{\\infty}f(x)\\,dx\\approx f(\\frac{y}{2})\\sqrt{\\frac{2\\pi}{A}} \\\\\n", " &= g(\\frac{y}{2})h(\\frac{y}{2})\\sqrt{\\frac{2\\pi}{A}} \\\\\n", " &= \\frac{1}{\\sqrt{2\\pi\\cdot2}}e^{-y^2/4}.\n", " \\end{align*}$$\n", "So $Y$ is a Gaussian with mean $m=0$ and variance $\\sigma^2=2$.\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Latent Variable Models and Variational Bayes\n", "\n", "\n", "- **[1]** (##) For a Gaussian mixture model, given by generative equations\n", "\n", "$$\n", "p(x,z) = \\prod_{k=1}^K (\\underbrace{\\pi_k \\cdot \\mathcal{N}\\left( x | \\mu_k, \\Sigma_k\\right) }_{p(x,z_{k}=1)})^{z_{k}} \n", "$$\n", "\n", "proof that the marginal distribution for observations $x_n$ evaluates to \n", "\n", "$$\n", "p(x) = \\sum_{j=1}^K \\pi_k \\cdot \\mathcal{N}\\left( x | \\mu_j, \\Sigma_j \\right) \n", "$$\n", "\n", " > $$\\begin{align*}\n", " p(x) &= \\sum_{z} p(x,z) \\\\\n", " &= \\sum_{z} \\prod_{k=1}^K \\left(\\pi_k \\cdot \\mathcal{N}\\left( x | \\mu_k, \\Sigma_k\\right) \\right)^{z_{k}}\n", "\\end{align*}$$\n", " Exploiting the one-hot coding scheme for $z$, we can re-write the RHS as\n", " $$\\begin{equation*}\n", " \\sum_{j=1}^K \\prod_{k=1}^K \\left(\\pi_k \\cdot \\mathcal{N}\\left( x | \\mu_k, \\Sigma_k\\right) \\right)^{I_{kj}} = \\sum_{j=1}^K \\pi_j \\cdot \\mathcal{N}\\left( x | \\mu_j, \\Sigma_j\\right) \n", " \\end{equation*}$$\n", " where $I_{kj} = 1$ if $k=j$ and $0$ otherwise.\n", "\n", "- **[2]** (#) Given the free energy functional $F[q] = \\sum_z q(z) \\log \\frac{q(z)}{p(x,z)}$, proof the [EE, DE and AC decompositions](https://nbviewer.jupyter.org/github/bertdv/BMLIP/blob/master/lessons/notebooks/Latent-Variable-Models-and-VB.ipynb#fe-decompositions). \n", " > The Energy-Entropy decomposition follows simply from $\\log \\frac{a}{b} = \\log(a) - \\log(b)$. The Divergence-Evidence decomposition follows from $p(x,z) = p(z|x)p(x)$ and the Complexity-Accuracy decomposition follows from substituting $p(x,z)=p(x|z)p(z)$. Altogether leading to\n", " $$\\begin{align*}\n", "\\mathrm{F}[q] &= \\underbrace{-\\sum_z q(z) \\log p(x,z)}_{\\text{energy}} - \\underbrace{\\sum_z q(z) \\log \\frac{1}{q(z)}}_{\\text{entropy}} \\tag{EE} \\\\\n", "&= \\underbrace{\\sum_z q(z) \\log \\frac{q(z)}{p(z|x)}}_{\\text{KL divergence}\\geq 0} - \\underbrace{\\log p(x)}_{\\text{log-evidence}} \\tag{DE}\\\\\n", "&= \\underbrace{\\sum_z q(z)\\log\\frac{q(z)}{p(z)}}_{\\text{complexity}} - \\underbrace{\\sum_z q(z) \\log p(x|z)}_{\\text{accuracy}} \\tag{CA}\n", "\\end{align*}$$\n", "\n", "- **[3]** (#) The Free energy functional $\\mathrm{F}[q] = -\\sum_z q(z) \\log p(x,z) - \\sum_z q(z) \\log \\frac{1}{q(z)}$ decomposes into \"Energy minus Entropy\". So apparently the entropy of the posterior $q(z)$ is maximized. This entropy maximization may seem puzzling at first because inference should intuitively lead to *more* informed posteriors, i.e., posterior distributions whose entropy is smaller than the entropy of the prior. Explain why entropy maximization is still a reasonable objective. 
\n", " > Note that Free Energy minimization is a balancing act: FE minimization implies entropy maximization *and at the same time* energy minimization. Minimizing the energy term leads to aligning $q(z)$ with $\\log p(x,z)$, ie, it tries to move the bulk of the function $q(z)$ to areas in $z$-space where $p(x,z)$ is large ($p(x,z)$ is here just a function of $z$, since x is observed). However, aside from aligning with $p(x,z)$, we want $q(z)$ to be as uninformative as possible. Everything that can be inferred should be represented in $p(x,z)$ (which is prior times likelihood). We don't want to learn anything that is not in either the prior or the likelihood. The entropy term balances the energy term by favoring distributions that are as uninformative as possible. \n", "\n", "- **[4]** (#) Explain the following update rule for the mean of the Gaussian cluster-conditional data distribution (from the example about mean-field updating of a Gaussian mixture model):\n", "\n", "$$\n", "m_k = \\frac{1}{\\beta_k} \\left( \\beta_0 m_0 + N_k \\bar{x}_k \\right) \\tag{B-10.61} \n", "$$\n", " > We see here an example of \"precision-weighted means add\" when two sources of information are fused, just like precision-weighted means add when two Gaussians are multiplied, eg a prior and likelihood. In this case, the prior is $m_0$ and the likelihood estimate is $\\bar{x}$. $\\beta_0$ can be interpreted as the number of pseudo-observations in the prior. \n", "\n", "- **[5]** (##) Consider a model $p(x,z|\\theta)$, where $D=\\{x_1,x_2,\\ldots,x_N\\}$ is observed, $z$ are unobserved variables and $\\theta$ are parameters. The EM algorithm estimates the parameters by iterating over the following two equations ($i$ is the iteration index):\n", "\n", "$$\\begin{align*}\n", "q^{(i)}(z) &= p(z|D,\\theta^{(i-1)}) \\\\\n", "\\theta^{(i)} &= \\arg\\max_\\theta \\sum_z q^{(i)}(z) \\cdot \\log p(D,z|\\theta)\n", "\\end{align*}$$\n", "\n", "Proof that this algorithm minimizes the Free Energy functional \n", "$$\\begin{align*}\n", "F[q,\\theta] = \\sum_z q(z) \\log \\frac{q(z)}{p(D,z|\\theta)} \n", "\\end{align*}$$\n", " > Let's start with a prior estimate $\\theta^{(i-1)}$ and we want to minimize the free energy functional wrt $q$. This leads to\n", " $$\\begin{align*}\n", "q^{(i)}(z) &= \\arg\\min_q F[q,\\theta^{(i-1)}] \\\\\n", " &= \\arg\\min_q \\sum_z q(z) \\log \\frac{q(z)}{p(D,z|\\theta^{(i-1)})} \\\\\n", " &= \\arg\\min_q \\sum_z q(z) \\log \\frac{q(z)}{p(z|D,\\theta^{(i-1)}) \\cdot p(D|\\theta^{(i-1)})} \\\\\n", " &= p(z|D,\\theta^{(i-1)})\n", "\\end{align*}$$\n", "Next, we use $q^{(i)}(z)=p(z|D,\\theta^{(i-1)})$ and minimize the free energy w.r.t. $\\theta$, leading to\n", " $$\\begin{align*}\n", " \\theta^{(i)} &= \\arg\\min_\\theta F[q^{(i)}(z),\\theta] \\\\\n", " &= \\arg\\min_\\theta \\sum_z p(z|D,\\theta^{(i-1)}) \\log \\frac{p(z|D,\\theta^{(i-1)})}{p(D,z|\\theta)} \\\\\n", " &= \\arg\\max_\\theta \\sum_z \\underbrace{p(z|D,\\theta^{(i-1)})}_{q^{(i)}(z)} \\log p(D,z|\\theta)\n", "\\end{align*}$$\n", "\n", "- **[6]** (###) Consult the internet on what *overfitting* and *underfitting* is and then explain how FE minimization finds a balance between these two (unwanted) extremes.\n", " > Overfitting relates to learning a posterior that \"listens\" too much to the data (and not enough to the prior). Underfitting does the opposite. 
"\n",
"- **[6]** (###) Consult the internet on what *overfitting* and *underfitting* are and then explain how FE minimization finds a balance between these two (unwanted) extremes.\n",
" > Overfitting relates to learning a posterior that \"listens\" too much to the data (and not enough to the prior). Underfitting does the opposite. The CA decomposition \n",
"$$\\begin{equation*} \n",
"\\underbrace{\\sum_z q(z)\\log\\frac{q(z)}{p(z)}}_{\\text{complexity}} - \\underbrace{\\sum_z q(z) \\log p(x|z)}_{\\text{accuracy}} \\tag{CA}\n",
"\\end{equation*}$$\n",
"exposes this dilemma nicely. The complexity term tries to keep the posterior $q(z)$ near the prior $p(z)$, whereas the accuracy term tries to align the posterior $q(z)$ with the likelihood $p(x|z)$. Thus, minimizing Free Energy keeps an eye on avoiding both under- and over-fitting. \n",
"\n",
"- **[7]** (##) Consider a model $p(x,z|\\theta) = p(x|z,\\theta) p(z|\\theta)$ where $x$ and $z$ relate to observed and unobserved variables, respectively. Also available is an observed data set $D=\\left\\{x_1,x_2,\\ldots,x_N\\right\\}$. One iteration of the EM-algorithm for estimating the parameters $\\theta$ is described by ($m$ is the iteration counter)\n",
"$$\n",
"\\hat{\\theta}^{(m+1)} := \\arg \\max_\\theta \\left(\\sum_z p(z|x=D,\\hat{\\theta}^{(m)}) \\log p(x=D,z|\\theta) \\right) \\,.\n",
"$$\n",
"\n",
" (a) Apparently, in order to execute EM, we need to work out an expression for the 'responsibility' $p(z|x=D,\\hat{\\theta}^{(m)})$. Use Bayes rule to show how we can compute the responsibility that allows us to execute an EM step. \n",
" > Use Bayes rule:\n",
"$$p(z|x=D,\\hat{\\theta}^{(m)}) = \\frac{p(x=D|z,\\hat{\\theta}^{(m)}) \\,p(z|\\hat{\\theta}^{(m)})}{\\int p(x=D|z,\\hat{\\theta}^{(m)}) \\,p(z|\\hat{\\theta}^{(m)}) \\,\\mathrm{d}z}$$\n",
"Note that the RHS is an expression in $z$, since $D$ and $\\hat{\\theta}$ are given. If you want to evaluate the RHS, you need to make a specific choice for your model $$p(x,z|\\theta) = \\underbrace{p(x|z,\\theta)}_{\\text{likelihood}} \\underbrace{p(z|\\theta)}_{\\text{prior}}$$ \n",
"\n",
" (b) Why do we need multiple iterations in the EM algorithm? \n",
" > We must have a parameter estimate in order to compute the responsibilities, and vice versa, we need responsibilities to update the parameter estimate. Thus, in the EM algorithm, we iterate between updating responsibilities (beliefs about $z$) and parameter estimates (beliefs about $\\theta$). \n",
\n", "\n", " (c) Why can't we just use simple maximum log-likelihood to estimate parameters, as described by \n", "$$\n", "\\hat{\\theta} := \\arg \\max_\\theta \\log p(x=D,z|\\theta) \\,?\n", "$$ \n", " > Because $z$ is not observed.\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Dynamic Models \n", "\n", "- **[1]** (##) Given the Markov property\n", "\\begin{equation*}\n", "p(x_n|x_{n-1},x_{n-2},\\ldots,x_1) = p(x_n|x_{n-1}) \\tag{A1}\n", "\\end{equation*}\n", "proof that, for any $n$,\n", "\\begin{align*}\n", "p(x_n,x_{n-1},&\\ldots,x_{k+1},x_{k-1},\\ldots,x_1|x_k) = \\\\\n", "&p(x_n,x_{n-1},\\ldots,x_{k+1}|x_k) \\cdot p(x_{k-1},x_{k-2},\\ldots,x_1|x_k) \\tag{A2}\\,.\n", "\\end{align*}\n", "In other words, proof that, if the Markov property A1 holds, then, given the \"present\" ($x_k$), the \"future\" $(x_n,x_{n-1},\\ldots,x_{k+1})$ is _independent_ of the \"past\" $(x_{k-1},x_{k-2},\\ldots,x_1)$.\n", "> First, we rewrite A2 as\n", "\\begin{align*}\n", "p(&x_n,x_{n-1},\\ldots,x_{k+1},x_{k-1},\\ldots,x_1|x_k) = \\frac{p(x_n,x_{n-1},\\ldots,x_1)}{p(x_k)} \\\\\n", "&= \\frac{p(x_n,x_{n-1},\\ldots,x_{k+1}|x_k,\\ldots,x_1) \\cdot p(x_k,x_{k-1},\\ldots,x_1)}{p(x_k)} \\\\\n", "&= p(x_n,x_{n-1},\\ldots,x_{k+1}|x_k,\\ldots,x_1) \\cdot p(x_{k-1},\\ldots,x_1|x_k) \\tag{A3}\n", "\\end{align*}\n", "The first term in A3 can be simplified if A1 holds to \n", "\\begin{align*}\n", "p(x_n,&x_{n-1},\\ldots,x_{k+1}|x_k,x_{k-1},\\ldots,x_1) \\\\\n", "&= p(x_n|x_{n-1},x_{n-2},\\ldots,x_1) \\cdot p(x_{n-1}|x_{n-2},x_{n-3},\\ldots,x_1) \\cdots \\\\\n", "&\\quad \\cdots p(x_{k+1}|x_{k},x_{k-2},\\ldots,x_1) \\\\\n", "&= p(x_n|x_{n-1},x_{n-2},\\ldots,x_k) \\cdot p(x_{n-1}|x_{n-2},x_{n-3},\\ldots,x_k) \\cdots \\\\\n", "&\\quad \\cdots p(x_{k+1}|x_{k}) \\\\\n", "&= p(x_n,x_{n-1},\\ldots,x_{k+1}|x_k) \\tag{A4}\n", "\\end{align*}\n", "Substitution of A4 into A3 leads to A2. QED.\n", "\n", "- **[2]** (#) \n", " (a) What's the difference between a hidden Markov model and a linear Dynamical system? \n", " > HMM has binary-valued (on-off) states, where the LDS has continuously valued states. \n", " \n", " (b) For the same number of state variables, which of these two models has a larger memory capacity, and why? \n", " > The latter holds more capacity because, eg, a 16-bit representation of a continuously-valued variable holds $2^{16}$ different states.\n", " \n", " \n", "- **[3]** (#) \n", "(a) What is the 1st-order Markov assumption? \n", "(b) Derive the joint probability distribution $p(x_{1:T},z_{0:T})$ (where $x_t$ and $z_t$ are observed and latent variables respectively) for the state-space model with transition and observation models $p(z_t|z_{t-1})$ and $p(x_t|z_t)$. \n", "(c) What is a Hidden Markov Model (HMM)? \n", "(d) What is a Linear Dynamical System (LDS)? \n", "(e) What is a Kalman Filter? \n", "(f) How does the Kalman Filter relate to the LDS? \n", "(g) Explain the popularity of Kalman filtering and HMMs? \n", "(h) How relates a HMM to a GMM? \n", "> (a) An auto-regressive model is first-order Markov if $$p(x_t|x_{t-1},x_{t-2},\\ldots,x_1) = p(x_t|x_{t-1})\\,.$$ \n", "> (b) $$p(x_{1:T},z_{0:T}) = p(z_0)\\prod_{t=1}^Tp(z_t|z_{t-1}) \\prod_{t=1}^T p(x_t|z_t)$$ \n", "> (c) A HMM is a state-space model (as described in (b)) where the latent variable $z_t$ is discretely valued. Iow, the HMM has hidden clusters. 
\n", "> (d) An LDS is a state-space model (also described by the eq in (b)), but now the latent variable $z_t$ is continuously valued. \n", "> (e) A Kalman filter is a recursive solution to the inference problem $p(z_t|x_t,x_{t-1},\\dots,x_1)$, based on a state estimate at the previous time step $p(z_{t-1}|x_{t-1},x_{t-2},\\dots,x_1)$ and a new observation $x_t$. Basically, it's a recursive filter that updates the optimal Bayesian estimate of the current state $z_t$ based on all past observations $x_t,x_{t-1},\\dots,x_1$. \n", "> (f) The LDS describes a (generative) *model*. The Kalman filter does not describe a model, but rather describes an *inference task* on the LDS model. \n", "> (g) The LDS and HMM models are both quite general and flexible generative probabilistic models for time series. There exists very efficient algorithms for executing the latent state inference tasks (Kalman filter for LDS and there is a similar algorithm for the HMM). That makes these models flexible and practical. Hence the popularity of these models. \n", "> (h) An HMM can be interpreted as a Gaussian-Mixture-model-over-time. \n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Intelligent Agents and Active Inference\n", "\n", "- **[1]** (##) I asked you to watch a video segment (https://www.vibby.com/watch?vib=71iPtUJxd) where Karl Friston talks about two main approaches to goal-directed acting by agents: (1) choosing actions that maximize (the expectation of) a value function $V(s)$ of the state ($s$) of the environment; or (2) choosing actions that minimize a functional ($F[q(s)]$) of *beliefs* ($q(s)$) over environmental states ($s$). Discuss the advantage of the latter appraoch. \n", " > We'll discuss two advantages here. Either one would suffice for full credit and there are likely multiple alternative answers that would be adequate as well. (1) One advantage is that the value function $V$ needs to be uniquely chosen for each problem. Brains cannot afford to come up with a new value function for each problem as thousands of new problems are encountered each day. In contrast, $F[q(s)]$ holds the free-energy functional (a given cost functional) for posterior beliefs that technically are defined by a generative model $p$ and Bayes rule. In other words, in the latter approach, there is one value (cost) function for *all* problems. (2) A second advantage of $F[q(s)]$ is that inference for actions can take into account the uncertainty about our state-of-knowledge of the environment. This may lead to actions that are information seeking rather than goal-driven if our belief are very uncertain. For instance, if I want to cross a street, my first actions will be to seek information (look for cars and how fast they go), and only after enough information has been collected, the goal-driven action (decision) cross-vs-stay will be executed. When minimization of $F[q(s)]$ drives actions, both information-seeking and goal-driven actions can be accomodated in the same framework. This is very difficult when the value function is a direct function of the state of the world ($V(s)$, because there is no accomodation to represent our uncertainties about the state of the world. \n", "\n", "- **[2]** (#) The *good regulator theorem* states that a \"successful and efficient\" controller of the world must contain a model of the world. But it's hard to imagine how just learning a model of the world leads to goal-directed behavior, like learning how to read or drive a car. 
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Intelligent Agents and Active Inference\n",
"\n",
"- **[1]** (##) I asked you to watch a video segment (https://www.vibby.com/watch?vib=71iPtUJxd) where Karl Friston talks about two main approaches to goal-directed acting by agents: (1) choosing actions that maximize (the expectation of) a value function $V(s)$ of the state ($s$) of the environment; or (2) choosing actions that minimize a functional ($F[q(s)]$) of *beliefs* ($q(s)$) over environmental states ($s$). Discuss the advantage of the latter approach. \n",
" > We'll discuss two advantages here. Either one would suffice for full credit, and there are likely multiple alternative answers that would be adequate as well. (1) One advantage is that the value function $V$ needs to be uniquely chosen for each problem. Brains cannot afford to come up with a new value function for each problem, as thousands of new problems are encountered each day. In contrast, $F[q(s)]$ holds the free-energy functional (a given cost functional) for posterior beliefs that technically are defined by a generative model $p$ and Bayes rule. In other words, in the latter approach, there is one value (cost) function for *all* problems. (2) A second advantage of $F[q(s)]$ is that inference for actions can take into account the uncertainty about our state-of-knowledge of the environment. This may lead to actions that are information seeking rather than goal-driven if our beliefs are very uncertain. For instance, if I want to cross a street, my first actions will be to seek information (look for cars and how fast they go), and only after enough information has been collected will the goal-driven action (decision) cross-vs-stay be executed. When minimization of $F[q(s)]$ drives actions, both information-seeking and goal-driven actions can be accommodated in the same framework. This is very difficult when the value function is a direct function of the state of the world ($V(s)$), because there is no accommodation for representing our uncertainty about the state of the world. \n",
"\n",
"- **[2]** (#) The *good regulator theorem* states that a \"successful and efficient\" controller of the world must contain a model of the world. But it's hard to imagine how just learning a model of the world leads to goal-directed behavior, like learning how to read or drive a car. Which other ingredient do we need to get learning agents to behave as goal-directed agents? \n",
" > In the Free Energy Principle framework, goals (targets) are encoded in a generative model of the environment as prior distributions on future observations. Actions are inferred through free energy minimization in this extended model. As a result, the inferred actions aim to generate future observations that are maximally consistent with the goal priors. This kind of behavior can be interpreted as goal-directed behavior.\n",
" \n",
"\n",
"- **[3]** (##) The figure below reflects the state of a factor graph realization of an active inference agent after having pushed action $a_t$ onto the environment and having received observation $x_t$. In this graph, the variables $x_\\bullet$, $u_\\bullet$ and $s_\\bullet$ correspond to observations, and unobserved control and internal states respectively. Copy the figure onto your sheet and draw a message passing schedule to infer a posterior belief (i.e. after observing $x_t$) over the next control state $u_{t+1}$. \n",
"\n",
"\n",
"\n",
"> Imagine picking up the tree at the $u_{t+1}$ edge (call this edge the root of the tree). Then pass messages from the leaves of the tree towards the root, see the figure below. Note that the posterior belief over the next control (action) $u_{t+1}$\n",
" incorporates information from the recent past (blue messages), from prior information about what worked in the past (green arrow), and from expectations about future observations (red messages).\n",
" \n",
"\n",
"\n",
"- **[4]** (##) The Free Energy Principle (FEP) is a theory about biological self-organization, in particular about how brains develop through interactions with their environment. Which of the following statements is not consistent with FEP (and explain your answer): \n",
" (a) We act to fulfil our predictions about future sensory inputs. \n",
" (b) Perception is inference about the environmental causes of our sensations. \n",
" (c) Our actions aim to reduce the complexity of our model of the environment. \n",
" > Statement (c) is not consistent with the FEP formulation of biological self-organization. The Complexity-Accuracy decomposition of the Free Energy reveals that the \"data\" (observations) appears exclusively in the accuracy term (not in the complexity term). Observations are controlled by actions, and hence actions aim to maximize accuracy, rather than to reduce model complexity. \n",
" \n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "slideshow": { "slide_type": "skip" } }, "outputs": [], "source": [ "open(\"../../styles/aipstyle.html\") do f\n",
" display(\"text/html\", read(f,String))\n",
"end" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "@webio": { "lastCommId": null, "lastKernelId": null }, "anaconda-cloud": {}, "celltoolbar": "Slideshow", "kernelspec": { "display_name": "Julia 1.6.3", "language": "julia", "name": "julia-1.6" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.6.3" } }, "nbformat": 4, "nbformat_minor": 4 }