{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "a833dc9f", "metadata": {}, "source": [ "# Regression\n", "\n", "\n", "\n", "- **[1]** (#) (a) Write down the generative model for Bayesian linear ordinary regression (i.e., write the likelihood and prior). \n", " (b) State the inference task for the weight parameter in the model. \n", " (c) Why do we call this problem linear? \n", " \n", "> (a)\n", "$$\\begin{align*}\n", "\\text{likelihood: } p(y_n|x_n,w) &= \\mathcal{N}(y_n|w^T\\phi(x_n),\\beta^{-1}) \\\\\n", "\\text{prior: } p(w|\\alpha) &= \\mathcal{N}(w|0,\\alpha^{-1}I)\n", "\\end{align*}$$ \n", "> (b) The inference task is to compute\n", "$$p(w|D) = \\frac{p(D|w)p(w)}{p(D)}$$ \n", "> (c) The model is linear with respect to $w$, which is the reason we call it linear.\n", "\n", "\n", "- **[2]** (##) Consider a linear regression problem \n", "$$\\begin{align*}\n", "p(y\\,|\\,\\mathbf{X},w,\\beta) &= \\mathcal{N}(y\\,|\\,\\mathbf{X} w,\\beta^{-1} \\mathbf{I}) \\\\\n", " &= \\prod_n \\mathcal{N}(y_n\\,|\\,w^T x_n,\\beta^{-1})\n", "\\end{align*}$$\n", "with $y, X$ and $w$ as defined in the notebook. \n", " (a) Work out the maximum likelihood solution for linear regression by solving\n", "$$\n", "\\nabla_{w} \\log p(y|X,w) = 0 \\,.\n", "$$ \n", " (b) Work out the MAP solution. How does it relate to the ML solution?\n", "> (a) The gradient of the log-likelihood is \n", "> $$\\begin{equation*} \\nabla_{w} \\log p(y|X,w) = X^T(y-Xw) \\end{equation*}$$ \n", "> Setting the derivation to zero leads to \n", "> $$\\begin{equation*}w_{ML} = (X^TX)^{-1}X^Ty \\end{equation*}$$\n", "> (b) We now add a prior $w \\sim \\mathcal{N}(0,\\alpha^{-1})$, and a similar derivation leads to \n", "> $$\\begin{equation*}\\nabla_{w} \\log p(y,w|X) =-\\beta X^T(y-Xw)+\\alpha w \\end{equation*}$$ \n", "> Setting the derivation to zero leads to \n", "> $$\\begin{equation*} w_{MAP} = (X^TX+\\frac{\\alpha}{\\beta}I)^{-1}X^Ty \\end{equation*}$$ \n", "> The MAP solution weighs both the prior and likelihood. If $\\frac{\\alpha}{\\beta} $ is close to zero (if the prior is uninformative), then the ML solution and MAP solutions are close to each other.\n", "\n", "\n", "- **[3]** (###) Show that the variance of the predictive distribution for linear regression decreases as more data becomes available.\n", "> The variance of the predictive distribution is given by\n", "$$\\begin{align*} \\sigma_{N+1}^2(x) &= 1/\\beta + \\phi(x)^TS_{N+1}\\phi(x) \\\\\n", "S_{N+1} &= (S_N^{-1} + \\beta\\phi_{N+1}\\phi_{N+1}^T)^{-1} \\\\\n", "&= S_N - \\frac{\\beta S_N\\phi_{N+1}\\phi_{N+1}^TS_N}{1+\\beta\\phi_{N+1}^TS_N\\phi_{N+1}}\n", "\\end{align*}$$\n", "> where in the last equality, we applied [Woodbury's matrix identity](https://en.wikipedia.org/wiki/Woodbury_matrix_identity), which is also listed in [Sam Roweis' matrix notes, eq. 10](https://github.com/bertdv/BMLIP/raw/master/lessons/notebooks/files/Roweis-1999-matrix-identities.pdf?dl=0). Using the recursive relation for $S_N$ we can write the variance for the next observation as \n", "$$\\begin{align*}\n", "\\sigma_{N+1}^2(x) &= \\sigma_N^2(x) - \\frac{\\beta\\phi(x)^TS_N\\phi_{N+1}\\phi_{N+1}^TS_N\\phi(x)}{1+\\beta\\phi_{N+1}^TS_N\\phi_{N+1}}.\n", "\\end{align*}$$\n", "> Because $S_N$ is positive definite, the numerator and the denominator of the second term wil be non-negative, hence \n", "$\\sigma_N^2(x) \\geqslant \\sigma_{N+1}^2(x)$. 
"\n",
"\n",
"- **[4]** (#) Assume a given data set $D=\\{(x_1,y_1),(x_2,y_2),\\ldots,(x_N,y_N)\\}$ with $x \\in \\mathbb{R}^M$ and $y \\in \\mathbb{R}$. We propose a model given by the following data generating distribution and weight prior functions:  \n",
"$$\\begin{equation*} p(y_n|x_n,w)\\cdot p(w)\\,. \\end{equation*}$$\n",
"  (a) Write down Bayes rule for generating the posterior $p(w|D)$ from a prior and likelihood.  \n",
"  (b) Work out how to compute a distribution for the predicted value $y_\\bullet$, given a new input $x_\\bullet$.  \n",
"> (a)  \n",
"$$ p(w|D) = \\frac{p(w) \\prod_{n=1}^N p(y_n|x_n,w)}{\\int p(w) \\prod_{n=1}^N p(y_n|x_n,w)\\mathrm{d}w}$$\n",
"> (b)\n",
"$$p(y_\\bullet|x_\\bullet,D) = \\int p(y_\\bullet|x_\\bullet,w) p(w|D) \\mathrm{d}w$$\n",
"\n",
"- **[5]** (#) In class we use the following prior for the weights:\n",
"$$\\begin{equation*}\n",
"p(w|\\alpha) = \\mathcal{N}\\left(w | 0, \\alpha^{-1} I \\right)\n",
"\\end{equation*}$$\n",
"  (a) Give some considerations for choosing a Gaussian prior for the weights.  \n",
"  (b) We could have chosen a prior with a full (not diagonal) covariance matrix $p(w|\\alpha) = \\mathcal{N}\\left(w | 0, \\Sigma \\right)$. Would that be better? Give your thoughts on that issue.  \n",
"  (c) Generally we choose $\\alpha$ as a small positive number. Give your thoughts on that choice as opposed to choosing a large positive value. How about choosing a negative value for $\\alpha$?\n",
"> (a) The considerations can be either computational (e.g., a Gaussian prior times a Gaussian likelihood leads to a Gaussian posterior) or based on available information (e.g., among all distributions with the same variance, the Gaussian distribution has the largest entropy; roughly, this means that the Gaussian makes the fewest assumptions among all distributions with that variance).  \n",
"> (b) If you have no prior information about covariances, why make that assumption? If you do have some prior information, e.g. based on the physical process, then by all means feel free to add those constraints to the prior. Note that the posterior variance is given by $S_N = \\left( \\alpha \\mathbf{I} + \\beta \\mathbf{X}^T\\mathbf{X}\\right)^{-1}$. Importantly, the term $\\alpha \\mathbf{I}$ for small $\\alpha$ makes sure that the matrix is invertible, even for zero observations.  \n",
"> (c) As you can see from the posterior variance (see the answer to (b)), for smaller values of $\\alpha$, the data term $\\mathbf{X}^T\\mathbf{X}$ gets to play a role after fewer observations. Hence, if you have little prior information, it is better to choose a small value for $\\alpha$. Choosing a negative value for $\\alpha$ is not an option, since the prior covariance $\\alpha^{-1} I$ must be positive definite for the Gaussian prior to be a proper distribution.  \n",
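"\n",
"As a quick numerical illustration of (b) and (c), the Julia sketch below contrasts the ML solution $(X^TX)^{-1}X^Ty$ from exercise [2] with the MAP solution $(X^TX+\\frac{\\alpha}{\\beta}I)^{-1}X^Ty$. The values of $\\alpha$ and $\\beta$, the synthetic data and the helper name `ml_vs_map_demo` are arbitrary illustrative choices; the point is only that the $\\frac{\\alpha}{\\beta}I$ term keeps the matrix invertible even when there are fewer observations than features.\n",
"\n",
"```julia\n",
"using LinearAlgebra, Random\n",
"\n",
"# Sketch contrasting the ML and MAP estimates for linear regression.\n",
"# α is the (assumed) prior precision, β the (assumed) noise precision.\n",
"function ml_vs_map_demo(; α = 0.1, β = 10.0, N = 3)\n",
"    Random.seed!(2)\n",
"    w_true = [1.0, -2.0, 0.5]                  # arbitrary \"true\" weights (3 features)\n",
"    X = randn(N, 3)                            # N observations of 3 features\n",
"    y = X * w_true + randn(N) / sqrt(β)        # noisy targets with noise precision β\n",
"\n",
"    w_map = (X' * X + (α / β) * I) \\ (X' * y)  # MAP: (XᵀX + (α/β)I)⁻¹ Xᵀy\n",
"    println(\"N = $N, MAP estimate: \", round.(w_map, digits=3))\n",
"\n",
"    # The ML estimate (XᵀX)⁻¹Xᵀy requires XᵀX to be invertible, which fails\n",
"    # whenever N < 3; the (α/β)I term regularizes exactly this case.\n",
"    if rank(X' * X) == 3\n",
"        println(\"N = $N, ML estimate:  \", round.((X' * X) \\ (X' * y), digits=3))\n",
"    else\n",
"        println(\"N = $N: XᵀX is singular, so there is no unique ML estimate.\")\n",
"    end\n",
"end\n",
"\n",
"ml_vs_map_demo(N = 3)   # XᵀX invertible: both estimates exist\n",
"ml_vs_map_demo(N = 2)   # XᵀX singular: only the MAP estimate exists\n",
"```\n",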
\n", "$$\\begin{align*}\n", " \\log p(D|\\theta,\\sigma^2) &= \\log \\mathcal{N}(y|\\,F\\theta ,\\sigma^2)\\\\\n", " &\\propto -\\frac{1}{2\\sigma^2}\\left( {y - F\\theta } \\right)^T \\left( {y - F\\theta } \\right)\n", "\\end{align*}$$\n", "\n", " (c) Proof that the maximum likelihood estimate for the parameters is given by\n", "$$\\hat\\theta_{\\text{ml}} = (F^T F)^{-1}F^Ty \\,.$$ \n", "\n", " > Taking the derivative to $\\theta$\n", "$$\\nabla_\\theta \\log p(D|\\theta) = \\frac{1}{\\sigma^2} F^T(y-F\\theta)$$\n", " > Set derivative to zero for maximum likelihood estimate\n", "$$ \\hat\\theta_{\\text{ml}} = (F^TF)^{-1}F^Ty$$\n", "\n", " (d) What is the predicted output value $y_\\bullet$, given an observation $x_\\bullet$ and the maximum likelihood parameters $\\hat \\theta_{\\text{ml}}$. Work this expression out in terms of $F$, $y$ and $f(x_\\bullet)$. \n", " > Prediction of new data point: $\\hat y_\\bullet = \\hat \\theta^T f(x_\\bullet) = \\left((F^TF)^{-1}F^Ty\\right)^T f(x_\\bullet) $\n", "\n", " (e) Suppose that, before the data set $D$ was observed, we had reason to assume a prior distribution $p(\\theta)=\\mathcal{N}(0,\\sigma_0^2)$. Derive the Maximum a posteriori (MAP) estimate $\\hat \\theta_{\\text{map}}$.(hint: work this out in the $\\log$ domain.) \n", "$$\\begin{align*}\n", "\\log p(\\theta|D) &\\propto \\log p(D|\\theta) p(\\theta) \\\\\n", " &\\propto -\\frac{1}{2\\sigma^2}\\left( {y - F\\theta } \\right)^T \\left( {y - F\\theta } \\right) - \\frac{1}{2 \\sigma_0^2}\\theta^T \\theta\n", "\\end{align*}$$\n", " > Derivative $\\nabla_\\theta \\log p(\\theta|D) = (1/\\sigma^2)F^T(y-F\\theta) - (1/ \\sigma_0^2) \\theta$ \n", " > Set derivative to zero for MAP estimate leads to \n", "$$\\hat\\theta_{\\text{map}} = \\left(F^T F + \\frac{\\sigma^2}{\\sigma_0^2} I\\right)^{-1}F^Ty$$\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "id": "cd7401aa", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Julia 1.8.2", "language": "julia", "name": "julia-1.8" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.8.2" } }, "nbformat": 4, "nbformat_minor": 5 }