{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "219d803a",
   "metadata": {},
   "source": [
    "Sascha Spors,\n",
    "Professorship Signal Theory and Digital Signal Processing,\n",
    "Institute of Communications Engineering (INT),\n",
    "Faculty of Computer Science and Electrical Engineering (IEF),\n",
    "University of Rostock,\n",
    "Germany\n",
    "\n",
    "# Data Driven Audio Signal Processing - A Tutorial with Computational Examples\n",
    "\n",
    "Winter Semester 2023/24 (Master Course #24512)\n",
    "\n",
    "- lecture: https://github.com/spatialaudio/data-driven-audio-signal-processing-lecture\n",
    "- tutorial: https://github.com/spatialaudio/data-driven-audio-signal-processing-exercise\n",
    "\n",
    "Feel free to contact lecturer frank.schultz@uni-rostock.de"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5a6a27ac",
   "metadata": {},
   "source": [
    "# Trade-Off Between Bias^2 / Variance and Check for Model Complexity\n",
    "\n",
    "- we use plain ordinary **least squares** (OLS) based **linear regression** to discuss a very fundamental aspect when we learn from data, i.e. when we create prediction models\n",
    "- this aspect is known as bias-variance trade-off\n",
    "- in general we can split the **sum of (true model data - predicted model data)^2** into three components\n",
    "\n",
    "$$\\text{model bias}^2 + \\text{model variance} + \\text{noise variance}$$\n",
    "\n",
    "- a model will never explain all variance (which is actually not wanted for a useful robust model), so certain noise variance remains\n",
    "- we can influence the model bias and model variance obviously by the choice of the model  \n",
    "- however, we cannot at the same time have lowest model bias *and* lowest model variance to reduce the overall error for predictions\n",
    "- we therefore need to find a good compromise between bias and variance and especially we need to avoid two extremes\n",
    "    - underfit case, with typically too low model complexity yielding high bias and low variance\n",
    "    - overfit case, with typically too high model complexity yielding low bias and high variance"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "44e84b71",
   "metadata": {},
   "source": [
    "In this notebook we therefore check **over**-/**underfitting** via bias$^2$/variance quantities and $R_{\\text{adjusted}}^2$ on linear (and unregularized) models that were trained and predicted on noisy data (note here: **training data=test data** for convenience).\n",
    "\n",
    "For this toy example we know the true world (unnoisy) data, because we know the linear model equation that creates this data.\n",
    "Hence, we can be pretty confident about our interpretations on the performances of the different models.\n",
    "In real practice we typically deal with an unknown model equation.\n",
    "Then we need to properly check for over-/underfitting on our model estimates.\n",
    "\n",
    "A robust prediction model should have a **reasonable trade-off between bias^2 and variance** and reasonable **high** $R_{\\text{adjusted}}^2$ **mean** but **low** $R_{\\text{adjusted}}^2$ **variance** (see this notebook).\n",
    "\n",
    "Furthermore, a robust prediction model should predict **reasonable outcome to unknown input data**, such that it **generalizes well** on **unseen data**. This is discussed in the notebook [bias_variance_ridge_regression.ipynb](bias_variance_ridge_regression.ipynb)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b21fe3d6",
   "metadata": {},
   "source": [
    "Useful chapters in textbooks on this fundamental aspect:\n",
    "- [Bishop 2006] Christopher M. Bishop, *Pattern Recognition and Machine Learning*, Springer, 2006, Chapter 3.2\n",
    "- Sergios Theodoridis, *Machine Learning*, Academic Press, 2020, 2nd ed., Chapter 3.9\n",
    "- Kevin P. Murphy, *Machine Learning-A Probabilistic Perspective*, MIT Press, 2012, 1st ed., Chapter 6.4.4\n",
    "- Kevin P. Murphy, *Probabilistic Machine Learning-An Introduction*, MIT Press, 2022, Chapter 4.7.6.3\n",
    "- Trevor Hastie, Robert Tibshirani, Jerome Friedman, *The Elements of  Statistical Learning: Data Mining, Inference, and Prediction*, Springer, 2009, 2nd ed., Chapter 2.9\n",
    "- Gareth James, Daniela Witten, Trevor Hastie, Rob Tibshirani, *An Introduction to Statistical Learning with Applications in R*, Springer, 2021, 2nd ed., Chapter 2.2.2\n",
    "- Richard O. Duda, Peter E. Hart, David G. Stork, *Pattern Classification*, Wiley, 2000, 2nd ed., Chapter 9.3"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b375c3b7",
   "metadata": {},
   "source": [
    "## Expectation and Moments, Continuous Data\n",
    "\n",
    "For a (mapping) function $f(x)$ and and a (we assume a stationary) random process for drawing values $x$ described by the probability density function (PDF) $p(x)$ the expectation of $f(x)$ is defined as\n",
    "\n",
    "$$\\mathbb{E}_x\\{f(x)\\} = \\int f(x) p(x) \\mathrm{d}x.$$\n",
    "\n",
    "The so called **1st order raw moment** (also known as **linear mean**) can be derived from that basic expectation formula just by setting the mapping function $f(x) = x^1$.\n",
    "The 1st raw moment is thus defined as\n",
    "$$\\mu_x = \\mathbb{E}_x\\{x^1\\} = \\int x^1 p(x) \\mathrm{d}x.$$\n",
    "\n",
    "The so called **2nd order raw moment** (also known as quadratic mean) can be derived from that basic expectation formula just by setting the mapping function $f(x) = x^2$.\n",
    "The 2nd raw moment is thus defined as\n",
    "$$\\mathbb{E}_x\\{x^2\\} = \\int x^2 p(x) \\mathrm{d}x.$$\n",
    "\n",
    "The so called **2nd order centralized moment** plays a fundamental role in processing statistical.\n",
    "Its mapping function is $f(x) = (x^1 - \\mathbb{E}_x\\{x\\})^2$ and hence\n",
    "$$\\mathbb{E}_x\\{(x^1 - \\mathbb{E}_x\\{x\\})^2\\} = \\int (x^1 - \\mathbb{E}_x\\{x\\})^2 p(x) \\mathrm{d}x.$$\n",
    "With the linear mean $\\mu_x = \\mathbb{E}_x\\{x\\}$ from above we can rewrite\n",
    "$$\\sigma_x^2 = \\mathbb{E}_x\\{(x - \\mu_x)^2\\} = \\int (x - \\mu_x)^2 p(x) \\mathrm{d}x.$$\n",
    "The **2nd centralized moment** $\\sigma_x^2$ is known as **variance**, its square root $\\sqrt{\\sigma_x^2} = \\sigma_x$ is known as standard deviation."
   ]
  },
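  {
   "cell_type": "markdown",
   "id": "c0de0001",
   "metadata": {},
   "source": [
    "A minimal numerical sketch (added for illustration, not part of the derivation above): it approximates the three integrals for a Gaussian PDF on a dense grid and compares the results with the analytic values. The chosen `mu`, `sigma` and the grid limits are arbitrary assumptions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de0002",
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch: numerical evaluation of the raw / centralized moments for a\n",
    "# Gaussian PDF with arbitrarily chosen mu=1, sigma=0.5 (example values)\n",
    "import numpy as np\n",
    "\n",
    "mu, sigma = 1.0, 0.5\n",
    "x_grid = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 10000)\n",
    "pdf = np.exp(-((x_grid - mu) ** 2) / (2 * sigma**2)) / np.sqrt(\n",
    "    2 * np.pi * sigma**2\n",
    ")\n",
    "\n",
    "mean_1st_raw = np.trapz(x_grid * pdf, x_grid)  # approximates E{x}\n",
    "mean_2nd_raw = np.trapz(x_grid**2 * pdf, x_grid)  # approximates E{x^2}\n",
    "variance = np.trapz((x_grid - mean_1st_raw) ** 2 * pdf, x_grid)  # E{(x-mu)^2}\n",
    "\n",
    "print(mean_1st_raw, mean_2nd_raw, variance)\n",
    "print(mu, mu**2 + sigma**2, sigma**2)  # analytic reference values"
   ]
  },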
  {
   "cell_type": "markdown",
   "id": "867f0121",
   "metadata": {},
   "source": [
    "## Expectation and Moments, Sampled Data\n",
    "\n",
    "Above formulations hold for continuous $x$, thus continuously given arguments (data).\n",
    "Most often we deal with sampled data.\n",
    "Then, we need to approximate the above moments.\n",
    "This is an own theory by itself, see our lecture / tutorial for digital signal processing\n",
    "- https://github.com/spatialaudio/digital-signal-processing-lecture\n",
    "- https://github.com/spatialaudio/digital-signal-processing-exercises\n",
    "\n",
    "In short, we approximate the integral by a sum and the integral kernel with the signal values itself and an average (to overcome the missing information from the PDF that is used above).\n",
    "\n",
    "Then the basic expectation equation is\n",
    "$$\\mathbb{E}\\{f(x)\\} \\approx \\lim_{M \\to \\infty} \\frac{1}{M} \\sum_{m=1}^{M} f(x_m)$$\n",
    "denoting $x_m$ as the $m$-th sample of the data set with finite amount of samples.\n",
    "\n",
    "The raw and centralized moments can now be approximated by the very same mappings as above.\n",
    "\n",
    "The so called **1st order raw moment** (also known as **linear mean**) can be approximated from the basic expectation formula by setting the mapping function $f(x) = x^1$.\n",
    "The 1st raw moment thus is defined as\n",
    "$$\\bar{x} = \\mathbb{E}\\{x\\} = \\lim_{M \\to \\infty} \\frac{1}{M} \\sum_{m=1}^{M} x_m$$\n",
    "\n",
    "The so called **2nd order raw moment** (also known as quadratic mean) can be approximated from the basic expectation formula by setting the mapping function $f(x) = x^2$.\n",
    "The 2nd raw moment thus is defined as\n",
    "$$\\mathbb{E}\\{x^2\\} = \\lim_{M \\to \\infty} \\frac{1}{M} \\sum_{m=1}^{M} x^2_m$$\n",
    "\n",
    "The so called **2nd order centralized moment** (also known as **variance**) can be approximated from the basic expectation formula by setting the mapping function $f(x) = x - \\bar{x}$.\n",
    "The 2nd centralized moment thus is defined as\n",
    "$$\\mathbb{E}\\{(x-\\bar{x})^2\\} = \\lim_{M \\to \\infty} \\frac{1}{M} \\sum_{m=1}^{M} (x_m - \\bar{x})^2$$"
   ]
  },
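  {
   "cell_type": "markdown",
   "id": "c0de0003",
   "metadata": {},
   "source": [
    "A minimal sketch (added for illustration): the sample-based approximations above, computed for normally distributed samples drawn with numpy and compared with numpy's built-in estimators. The distribution parameters and the number of samples are arbitrary assumptions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de0004",
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch: sample-based approximation of the moments, arbitrary example with\n",
    "# normally distributed data (loc=1, scale=0.5) and a large number of samples\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(12345)\n",
    "M_samples = 10**6  # large M to get close to the limit above\n",
    "x_m = rng.normal(loc=1.0, scale=0.5, size=M_samples)\n",
    "\n",
    "mean_1st_raw = np.sum(x_m) / M_samples  # approximates E{x}\n",
    "mean_2nd_raw = np.sum(x_m**2) / M_samples  # approximates E{x^2}\n",
    "variance = np.sum((x_m - mean_1st_raw) ** 2) / M_samples  # E{(x-xbar)^2}\n",
    "\n",
    "print(mean_1st_raw, mean_2nd_raw, variance)\n",
    "print(np.mean(x_m), np.mean(x_m**2), np.var(x_m))  # numpy's estimators"
   ]
  },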
  {
   "cell_type": "markdown",
   "id": "8a877b0c",
   "metadata": {},
   "source": [
    "## Bias and Variance for OLS Model Example\n",
    "\n",
    "### Model Assumption\n",
    "We utilize our well known least-squares error problem\n",
    "\n",
    "$$\\min_{\\text{wrt }\\beta} (||\\mathbf{y} - \\mathbf{X} \\beta||_2^2)$$\n",
    "\n",
    "with the solution\n",
    "\n",
    "$$\\hat{\\beta} = (\\mathbf{X}^\\mathrm{H} \\mathbf{X})^{-1} \\mathbf{X}^\\mathrm{H} \\mathbf{y}$$\n",
    "\n",
    "as always assuming an $M \\times N$, full column rank $r=N$, $M \\gg N$ matrix $\\mathbf{X}$ (i.e. more data samples than features). \n",
    "\n",
    "### Data Handling\n",
    "In our toy example we set up matrices by a given data column vector $\\mathbf{x}$ and $N-1$ different (non)-linear mapping functions $f(\\mathbf{x})$, such that the feature matrix becomes\n",
    "\n",
    "$$\n",
    "\\mathbf{X} = \n",
    "\\begin{bmatrix}\n",
    "1 & f_1(\\mathbf{x}_1) & f_2(\\mathbf{x}_1) & \\dots & f_{N-1}(\\mathbf{x}_1)\\\\\n",
    "1 & f_1(\\mathbf{x}_2) & f_2(\\mathbf{x}_2) & \\dots & f_{N-1}(\\mathbf{x}_2)\\\\\n",
    "\\vdots & \\vdots & \\vdots & \\ddots & \\vdots\\\\\n",
    "1 & f_1(\\mathbf{x}_M) & f_2(\\mathbf{x}_M)  & \\dots & f_{N-1}(\\mathbf{x}_M)\\\\\n",
    "\\end{bmatrix}\n",
    "$$\n",
    "\n",
    "with the first column filled with ones to realize the intercept of the linear model.\n",
    "\n",
    "Note, that the **feature design** $f(\\mathbf{x})$ **might be non-linear**, but the **model itself is still linear**, because the outcome is the linear combination $\\hat{\\mathbf{y}} = \\mathbf{X} \\hat{\\beta}$."
   ]
  },
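  {
   "cell_type": "markdown",
   "id": "c0de0005",
   "metadata": {},
   "source": [
    "A minimal sketch (added for illustration, using an arbitrary small example): it builds a feature matrix with an intercept column plus two nonlinear features and verifies that the closed-form solution $\\hat{\\beta} = (\\mathbf{X}^\\mathrm{H} \\mathbf{X})^{-1} \\mathbf{X}^\\mathrm{H} \\mathbf{y}$ matches `np.linalg.lstsq`. Variable names and weights are example assumptions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de0006",
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch: closed-form OLS solution vs. numpy's least-squares solver for a\n",
    "# small arbitrary example (intercept + cos(x) + sin(2x) features)\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "x_small = np.linspace(0, 2 * np.pi, 50)\n",
    "X_small = np.column_stack(\n",
    "    (np.ones_like(x_small), np.cos(x_small), np.sin(2 * x_small))\n",
    ")\n",
    "beta_example = np.array([3.0, 2.0, 1.0])  # arbitrary example weights\n",
    "y_small = X_small @ beta_example + rng.normal(scale=0.1, size=x_small.size)\n",
    "\n",
    "# closed-form solution via the left inverse (X_small has full column rank)\n",
    "beta_hat = np.linalg.inv(X_small.T @ X_small) @ X_small.T @ y_small\n",
    "beta_lstsq = np.linalg.lstsq(X_small, y_small, rcond=None)[0]\n",
    "print(beta_hat)\n",
    "print(np.allclose(beta_hat, beta_lstsq))"
   ]
  },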
  {
   "cell_type": "markdown",
   "id": "9f92d4a4",
   "metadata": {},
   "source": [
    "### True Data\n",
    "The true data is created with a column vector $\\mathbf{x}$ containing $0$ to $2\\pi$ equidistantly sampled with $M=2^8$ samples. The corresponding true feature matrix $\\mathbf{X}_t$ is build with\n",
    "\n",
    "- intercept $\\mathbf{1}$ vector\n",
    "- $f_1(\\mathbf{x}) = \\cos(\\mathbf{x})$\n",
    "- $f_2(\\mathbf{x}) = \\sin(2\\mathbf{x})$\n",
    "- $f_3(\\mathbf{x}) = \\cos(5\\mathbf{x})$\n",
    "- $f_4(\\mathbf{x}) = \\cos(6\\mathbf{x})$\n",
    "\n",
    "and has rank 5.\n",
    "The true model weights are $\\beta_t = [3,2,1, \\frac{1}{2}, \\frac{1}{4}]^\\mathrm{T}$.\n",
    "The true outcome vector (model output) is then $\\mathbf{t} = \\mathbf{X}_t \\beta_t$, which should serve as *reference for the optimum prediction* (cf. $h(\\mathbf{x})$ in [Bishop 2006, Sec. 3.2]) in the discussion for bias/variance trade-off.\n",
    "\n",
    "We should realize that we do a Fourier series synthesis here."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d350e9a1",
   "metadata": {},
   "source": [
    "### Measured Data\n",
    "\n",
    "Measured data will be obtained by\n",
    "\n",
    "$$\\mathbf{y} = \\mathbf{t} + \\mathbf{n},$$\n",
    "\n",
    "where the column vector $\\mathbf{n}$ (for noise) contains a random signal drawn from normal distribution with zero mean and variance `noise_scale**2`.\n",
    "Thus $\\mathbf{y}$ potentially also lives in the left null space of $\\mathbf{X}_t$, whereas we know by design that $\\mathbf{t}$ purely lives in column space of $\\mathbf{X}_t$.\n",
    "\n",
    "We create $L$ different $\\mathbf{y}$ vectors, as we sample $L$ different $\\mathbf{n}$ vectors.\n",
    "\n",
    "We should denote the $l$-th measurement data vector with $$\\mathbf{y}^{(l)}$$\n",
    "\n",
    "We should denote the $m$-th entry in $\\mathbf{y}^{(l)}$ as $$\\mathbf{y}_m^{(l)}$$\n",
    "\n",
    "So, the indices $l=1...L$ and $m=1...M$ are used later on."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5bec1557",
   "metadata": {},
   "source": [
    "### Model Training / Fitting\n",
    "\n",
    "We specify a certain model with a dedicated feature matrix $\\mathbf{X}$. We always use above introduced column vector $\\mathbf{x}$ but different features and by that different model complexity.\n",
    "\n",
    "The $l$-th measurement leads to the $l$-th estimate of the model parameters\n",
    "\n",
    "$$\\hat{\\beta}^{(l)} = (\\mathbf{X}^\\mathrm{H} \\mathbf{X})^{-1} \\mathbf{X}^\\mathrm{H} \\mathbf{y}^{(l)}$$\n",
    "\n",
    "and thus the $l$-th prediction is given as\n",
    "\n",
    "$$\\hat{\\mathbf{y}}^{(l)} = \\mathbf{X} \\hat{\\beta}^{(l)} = \\mathbf{X} (\\mathbf{X}^\\mathrm{H} \\mathbf{X})^{-1} \\mathbf{X}^\\mathrm{H} \\mathbf{y}^{(l)}$$\n",
    "\n",
    "We should consistently denote the $m$-th entry in the predicted model output $\\hat{\\mathbf{y}}^{(l)}$ as $$\\hat{\\mathbf{y}}_m^{(l)}$$\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a5fa0a83",
   "metadata": {},
   "source": [
    "### Calculate Bias and Variance\n",
    "\n",
    "According to [Bishop 2006, Sec. 3.2] (it is worth to really think about these equations) we can calculate bias$^2$ and variance, when here assuming that the optimum prediction is precisely our true data (which we fortunately know here, as we designed it with $\\mathbf{t}$).\n",
    "\n",
    "Firstly, a **1st order raw moment** is calculated as [Bishop 2006, (3.45)] \n",
    "\n",
    "$$\\tilde{\\mathbf{y}} = \\frac{1}{L} \\sum_{l=1}^{L} \\hat{\\mathbf{y}}^{(l)},$$\n",
    "\n",
    "so we get a $M \\times 1$ column vector $\\tilde{\\mathbf{y}}$ (note the tilde above it), where the $m$-th entry is the average of all $m$-th entries of the $L$ models. Doing that and hoping that the prediction noise is averaging out, we get a fair idea what the true data vector could be.\n",
    "\n",
    "We could set up a **2nd order moment** to quantify the error between $\\tilde{\\mathbf{y}}$ and the true data $\\mathbf{t}$ in squared sense. This time we should sum over the $M$ samples contained in both vectors, so it is a **sum of squared deviation** operation, which very often occurs in machine learning, cf. typical loss functions. \n",
    "\n",
    "We then get the bias$^2$ [Bishop 2006, (3.46)]\n",
    "\n",
    "$$\\mathrm{bias}^2 = \\frac{1}{M} \\sum_{m=1}^{M} (\\mathbf{t}_m - \\tilde{\\mathbf{y}}_m)^2$$\n",
    "\n",
    "The variance is a little more complicated, cf. [Bishop 2006, (3.47)]. First we calculate a **2nd order centralized moment** over the $L$ models resulting in an $M \\times 1$ column vector\n",
    "\n",
    "$$ \\mathbf{v} = \\frac{1}{L} \\sum_{l=1}^{L} (\\hat{\\mathbf{y}}^{(l)} - \\tilde{\\mathbf{y}})^2$$\n",
    "\n",
    "and then a **1st order raw moment** over the $M$ samples\n",
    "\n",
    "$$\\mathrm{variance} = \\frac{1}{M} \\sum_{m=1}^{M} \\mathbf{v}_m$$\n",
    "\n"
   ]
  },
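  {
   "cell_type": "markdown",
   "id": "c0de0007",
   "metadata": {},
   "source": [
    "A minimal self-contained sketch of (3.45)-(3.47) (added for illustration) with arbitrarily chosen small sizes and a single-feature model; the full version including plotting and adjusted $R^2$ follows below in `bias_variance_of_model()`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de0008",
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch: bias^2 and variance as in [Bishop 2006, (3.45)-(3.47)], arbitrary\n",
    "# small example sizes and a simple intercept + cos(x) model\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(12345)\n",
    "M_s, L_s = 64, 32  # number of samples / number of noisy data sets\n",
    "x_s = np.linspace(0, 2 * np.pi, M_s)\n",
    "X_s = np.column_stack((np.ones(M_s), np.cos(x_s)))  # intercept + one feature\n",
    "t_s = X_s @ np.array([3.0, 2.0])  # 'true' data of this small example\n",
    "Yn_s = t_s[:, None] + rng.normal(scale=1 / 3, size=(M_s, L_s))  # L noisy sets\n",
    "\n",
    "Yhat_s = X_s @ np.linalg.pinv(X_s) @ Yn_s  # all L OLS predictions at once\n",
    "y_tilde = np.mean(Yhat_s, axis=1)  # average prediction, cf. (3.45)\n",
    "bias_squared = np.mean((t_s - y_tilde) ** 2)  # cf. (3.46)\n",
    "variance = np.mean(np.mean((Yhat_s - y_tilde[:, None]) ** 2, axis=1))  # (3.47)\n",
    "print(bias_squared, variance)"
   ]
  },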
  {
   "cell_type": "markdown",
   "id": "94b8d3d8",
   "metadata": {},
   "source": [
    "## Empirical Correlation Coefficient $R^2$ Between $\\mathbf{y}$ and  $\\hat{\\mathbf{y}}$\n",
    "\n",
    "Let us define a 1st raw moment for the measured data along its samples (note the bar above ${y}$ and do not confuse it with the tilde vector from above!), we get a scalar\n",
    "\n",
    "$$\\bar{{y}} = \\frac{1}{M} \\sum_{m=1}^{M} \\mathbf{y}_m$$\n",
    "\n",
    "We can define 2nd order moments, here rather called **sum of squares**\n",
    "\n",
    "- Sum of Squares **Total** (SST), we use the measured data and the mean of the measured data\n",
    "\n",
    "$$\\mathrm{SST} = \\sum_{m=1}^{M} (\\mathbf{y}_m - \\bar{{y}})^2$$\n",
    "\n",
    "- Sum of Squares (due to) **Regression** (SSR), we use the predicted data and the mean of the measured data\n",
    "\n",
    "$$\\mathrm{SSR} = \\sum_{m=1}^{M} (\\hat{\\mathbf{y}}_m - \\bar{{y}})^2$$\n",
    "\n",
    "- Sum of Squares **Error** (SSE), we use the measured data and the predicted data\n",
    "\n",
    "$$\\mathrm{SSE} = \\sum_{m=1}^{M} (\\mathbf{y}_m - \\hat{\\mathbf{y}}_m)^2$$\n",
    "\n",
    "SSE is actually the cost function for the optimization problem, because it can be rewriten as $||\\mathbf{y} - \\mathbf{X} \\hat{\\beta}||_2^2 = (\\mathbf{y} - \\mathbf{X} \\hat{\\beta})^\\mathrm{T} (\\mathbf{y} - \\mathbf{X} \\hat{\\beta})$, which was used above.\n",
    "\n",
    "**Important** note: in literature different abbreviations and phrases are used for these three quantities, we should  carefully check them. Sum of Squares of Residuals might be abbreviated as SSR, as well as we actually use it for Sum of Squares Regression, but that are totally different things! This might be very confusing at the beginning, the more we know what these equations actually do, the less important is the actual wording for them.\n",
    "\n",
    "It is known that\n",
    "\n",
    "$$\\mathrm{SST} = \\mathrm{SSR} + \\mathrm{SSE}$$\n",
    "\n",
    "We might be interested in the ratio\n",
    "\n",
    "$$R^2 = \\frac{\\mathrm{SSR}}{\\mathrm{SST}} = \\frac{\\mathrm{SST}-\\mathrm{SSE}}{\\mathrm{SST}} = 1^2 - \\frac{\\mathrm{SSE}}{\\mathrm{SST}}$$\n",
    "which is typically called **empirical correlation coefficient** or **coefficient of determination** and for which\n",
    "\n",
    "$$0 \\leq R^2  \\leq 1$$\n",
    "\n",
    "holds.\n",
    "For SSE = 0, the measured data and the predicted data are identical, so $R^2=1$.\n",
    "\n",
    "Typically, $R^2$ can be increased by increasing the number of data samples $M$ and the number of features $N$, so taking more data and making the model more complex. But that's somehow a trivial and naive approach to train models...which brings us easily into the region of overfitting.\n",
    "\n",
    "So, we rather need a robust measure which is somehow independent of $M$ and $N$, to draw reasonable conclusions of the models performance.\n",
    "A well know approach is to compensate for the so called **degrees of freedom** in the model, which brings up the so called **adjusted** $R^2$\n",
    "\n",
    "$$R_\\mathrm{adj}^2 = 1^2 - \\frac{\\frac{\\mathrm{SSE}}{M-N}}{\\frac{\\mathrm{SST}}{M-1}}$$\n",
    "\n",
    "for models that use a intercept parameter $\\hat{\\beta}_0$ (which we do here).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "15ad5d63",
   "metadata": {},
   "source": [
    "Let us getting into some coding business for this toy example..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "id": "7d969206",
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "from statsmodels.api import OLS"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "id": "f18dc6d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "def my_r2adj(X, beta_hat, y):\n",
    "    \"\"\"(Adjusted) coefficient of determination R^2.\n",
    "\n",
    "    also known as empirical correlation coefficient between y and yhat\n",
    "    \"\"\"\n",
    "    M = X.shape[0]  # number of samples / observations\n",
    "    N = X.shape[1]  # number of model parameters (including intercept beta[0])\n",
    "\n",
    "    yhat = X @ beta_hat  # do regression / prediction\n",
    "\n",
    "    # sum of squares T.otal:\n",
    "    SST = np.sum((y - np.mean(y)) ** 2)\n",
    "    # sum of squares due to R.egression:\n",
    "    SSR = np.sum((yhat - np.mean(y)) ** 2)\n",
    "    # sum of squared E.rrors:\n",
    "    SSE = np.sum((y - yhat) ** 2)\n",
    "    # SST = SSR + SSE holds (numerical errors might corrupt this equality)\n",
    "\n",
    "    # R2 = SSR / SST is the ratio between regression and total, 0<=R2<=1\n",
    "    # rearranged R2 = (SST - SSE) / SST = 1**2 - SSE / SST\n",
    "    # p.58 in [MT11], p.111 in [DB18], p.108 (1.34)-(1.36) in [FHT96],\n",
    "    # p.125 in [FKLM21], Ch.2.4.6 in [Agresti15]\n",
    "    R2 = 1**2 - SSE / SST\n",
    "\n",
    "    # R2 should be adjusted by number of samples and model complexity\n",
    "    # note that this equation holds for models that include an intercept term\n",
    "    # p.58 in [MT11], p.163 in [FKLM21], Ch.2.4.6 in [Agresti15]\n",
    "    # R2adj = 1**2 - (1-R2)*(n-1)/(n-d)\n",
    "    # or rewritten to see the adjustments in a more convenient way\n",
    "    R2adj = 1**2 - (SSE / (M - N)) / (SST / (M - 1))\n",
    "    return R2adj"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "daa15502",
   "metadata": {},
   "source": [
    "## True Model and Its Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "id": "5e9a45d9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# number of observations / samples:\n",
    "M = 2**8\n",
    "# true model with x as input variable\n",
    "x = np.linspace(0, 2 * np.pi, M)\n",
    "# to create 4 features in design/feature matrix X\n",
    "X = np.column_stack((np.cos(x), np.sin(2 * x), np.cos(5 * x), np.cos(6 * x)))\n",
    "# add a bias/intercept column to the design/feature matrix:\n",
    "X = np.hstack((np.ones((X.shape[0], 1)), X))\n",
    "hasconst = True\n",
    "# some nice numbers for the true model parameters beta:\n",
    "beta = np.array([3, 2, 1, 1 / 2, 1 / 4])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "22514278",
   "metadata": {},
   "outputs": [],
   "source": [
    "# generate 'true' data with the design matrix of 'true' model\n",
    "y = X @ beta\n",
    "plt.figure(figsize=(6, 3))\n",
    "plt.plot(y, \"k-\")\n",
    "plt.xlabel(\"independent features' input variable x\")\n",
    "plt.ylabel((\"dependent variable y, true data\"))\n",
    "plt.title(\"true model data as linear model (x -> 4 features + intercept)\")\n",
    "plt.xlim(0, M)\n",
    "plt.ylim(-2, 8)\n",
    "plt.grid(True)\n",
    "print(X.shape, y.shape)\n",
    "plt.tight_layout()\n",
    "plt.savefig('bias_variance_plots/true_data.png', dpi=300)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "49fe6e62",
   "metadata": {},
   "source": [
    "## Function for Train / Predict and Calc Bias^2 / Variance "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "afeca815",
   "metadata": {},
   "outputs": [],
   "source": [
    "def bias_variance_of_model(\n",
    "    X, noise_scale=1 / 3\n",
    "):  # noise_scale=2 is nice as well\n",
    "    # add bias column to the design matrix\n",
    "    X = np.hstack((np.ones((X.shape[0], 1)), X))\n",
    "    hasconst = True\n",
    "    print(\n",
    "        \"\\nshape of model/feature matrix X:\",\n",
    "        X.shape,\n",
    "        \"\\nrank of matrix X / # of model parameters:\",\n",
    "        np.linalg.matrix_rank(X),\n",
    "    )\n",
    "    # init random number generator to reproduce results\n",
    "    rng = np.random.default_rng(12345)\n",
    "    # generate L data sets with added noise\n",
    "    L = 2**7\n",
    "    noise = rng.normal(loc=0, scale=noise_scale, size=(M, L))\n",
    "    Yn = y[:, None] + noise\n",
    "    # alloc memory for all predictions\n",
    "    Yhat = np.zeros((M, L))\n",
    "    rsquared_adj = np.zeros(L)\n",
    "    # train and predict L models on these L data sets\n",
    "    for i in range(L):\n",
    "        model = OLS(Yn[:, i], X, hasconst=hasconst)  # OLS model\n",
    "        results = model.fit()  # train the model\n",
    "        Yhat[:, i] = results.predict(X)  # predict outcome\n",
    "        rsquared_adj[i] = my_r2adj(X, results.params, Yn[:, i])\n",
    "        # print(np.allclose(results.rsquared_adj, rsquared_adj[i]))\n",
    "\n",
    "    # get average prediction, i.e. mean over the L models\n",
    "    # which is a numerical eval of the expectation:\n",
    "    ym = np.mean(Yhat, axis=1)  # (3.45) in [Bishop 2006]\n",
    "\n",
    "    # get integrated squared bias (numerical eval of the expectation):\n",
    "    # note: y is the true model data\n",
    "    bias_squared = np.mean((ym - y) ** 2)  # (3.42), (3.46) in [Bishop 2006]\n",
    "\n",
    "    # get integrated variance (numerical eval of the expectation):\n",
    "    variance = np.mean(\n",
    "        np.mean((Yhat - ym[:, None]) ** 2, axis=1), axis=0\n",
    "    )  # (3.43), (3.47) in [Bishop 2006]\n",
    "\n",
    "    for i in range(L):\n",
    "        axs[0, 0].plot(Yn[:, i])\n",
    "        axs[0, 1].plot(Yhat[:, i])\n",
    "\n",
    "    axs[0, 1].plot(y, \"k-\", label=\"true model\")\n",
    "\n",
    "    axs[0, 1].plot(\n",
    "        np.mean(Yhat, axis=1), \":\", color=\"gold\", label=\"$\\mu(\\hat{Y})$\"\n",
    "    )\n",
    "\n",
    "    axs[0, 1].plot(\n",
    "        np.mean(Yhat, axis=1) + np.std(Yhat, axis=1),\n",
    "        \"--\",\n",
    "        lw=0.75,\n",
    "        color=\"gold\",\n",
    "        label=\"$\\mu(\\hat{Y}) \\pm \\sigma(\\hat{Y})$\",\n",
    "    )\n",
    "    axs[0, 1].plot(\n",
    "        np.mean(Yhat, axis=1) - np.std(Yhat, axis=1),\n",
    "        \"-.\",\n",
    "        lw=0.75,\n",
    "        color=\"gold\",\n",
    "    )\n",
    "\n",
    "    axs[0, 1].set_title(\n",
    "        r\"bias$^2$=\"\n",
    "        + \"{:4.3f}\".format(bias_squared)\n",
    "        + \", var=\"\n",
    "        + \"{:4.3f}\".format(variance)\n",
    "        + r\", bias$^2$+var=\"\n",
    "        + \"{:4.3f}\".format(bias_squared + variance)\n",
    "    )\n",
    "    for i in range(2):\n",
    "        axs[0, i].set_xlim(0, M)\n",
    "        axs[0, i].set_ylim(-2, 8)\n",
    "        axs[0, i].grid(True)\n",
    "        axs[0, i].set_xlabel(\"independent features' input variable x\")\n",
    "    axs[0, 0].set_ylabel(\"dependent variable yn\")\n",
    "    axs[0, 1].set_ylabel(\"predicted variable yhat\")\n",
    "    axs[0, 1].legend()\n",
    "\n",
    "    axs[1, 0].plot(rsquared_adj)\n",
    "    axs[1, 0].set_title(\n",
    "        r\"$\\hat{\\mu}(R_{adj}^2)=\"\n",
    "        + \"{:4.3f}$\".format(np.mean(rsquared_adj))\n",
    "        + r\", $\\hat{\\sigma}(R_{adj}^2)=\"\n",
    "        + \"{:4.3f}$\".format(np.std(rsquared_adj))\n",
    "    )\n",
    "    axs[1, 0].set_xlim(0, L)\n",
    "    axs[1, 0].set_ylim(0, 1)\n",
    "    axs[1, 0].set_xlabel(\"model index\")\n",
    "    axs[1, 0].set_ylabel(r\"$R_{adj}^2$\")\n",
    "    axs[1, 0].grid(True)\n",
    "\n",
    "    axs[1, 1].set_xlabel(\"intentionally empty\")\n",
    "\n",
    "    plt.tight_layout()\n",
    "\n",
    "    print(\"bias^2 + variance  = \", bias_squared + variance)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fbaefad6",
   "metadata": {},
   "source": [
    "## Check Models on Noisy Data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "16f0937e",
   "metadata": {},
   "source": [
    "### Too Simple\n",
    "\n",
    "typically\n",
    "- high bias\n",
    "- low variance\n",
    "\n",
    "which reads as: the model is far away from predicting the true data (high bias part), but this wrong prediction is very consistent (low variance part). In this example, the predicted data is along lines (obviously not matching the true data shape), but these lines are similarly aligned.\n",
    "This typically leads to non appropriate predictions on unseen data, because we model the true world to simple.\n",
    "We should avoid such modeling."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7c1edf6b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# we take just a simple line equation model y = beta1 x + beta0 here\n",
    "# note  that intercept is only added in function bias_variance_of_model(X)\n",
    "X = np.copy(x)[:, None]\n",
    "fig, axs = plt.subplots(2, 2, figsize=(10, 5))\n",
    "bias_variance_of_model(X)\n",
    "axs[0, 0].set_title(\"underfit, too low model complexity, high bias, low var\");\n",
    "plt.tight_layout()\n",
    "plt.savefig('bias_variance_plots/too_simple_model.png', dpi=300)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6dd86618",
   "metadata": {},
   "source": [
    "### Too Complex\n",
    "\n",
    "typically\n",
    "- low bias\n",
    "- high variance\n",
    "\n",
    "which reads as: the model is very (too) good to predict all the different **noisy** data sets (low bias part), but on average these different individual predictions differ too much from the true data (high variance). This typically leads to non-robust predictions on noisy unknown data.\n",
    "We should avoid such modeling."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1cfedd48",
   "metadata": {},
   "outputs": [],
   "source": [
    "# we take a Fourier series expansion model here\n",
    "X = np.column_stack((np.cos(x), np.sin(x)))  # init with first two features\n",
    "# add more features according to a Fourier series expansion\n",
    "# <=M//2 makes sure we do not use more model parameters than signal samples\n",
    "# in order to solve this as a least-squares problem, i.e. using left-inverse\n",
    "for m in range(2, M // 2):\n",
    "    X = np.column_stack((X, np.sin(m * x), np.cos(m * x)))\n",
    "# note  that intercept is only added in function bias_variance_of_model(X)\n",
    "fig, axs = plt.subplots(2, 2, figsize=(10, 5))\n",
    "bias_variance_of_model(X)\n",
    "axs[0, 0].set_title(\"overfit, too high model complexity, low bias, high var\");\n",
    "plt.tight_layout()\n",
    "plt.savefig('bias_variance_plots/too_complex_model.png', dpi=300)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0f41f622",
   "metadata": {},
   "source": [
    "### Prediction Model == True Model\n",
    "\n",
    "a rare case (and obviously here for didactic reasons): we know the exact model equation and thus can set up our feature matrix according to it.\n",
    "By concept this yields the \n",
    "\n",
    "- lowest possible bias\n",
    "- lowest possible variance\n",
    "\n",
    "The remaining variance is due to the added (measurement) noise, which we don't want to have explained by the model, because we then end up in the *too complex model* world, just discussed above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "07b84657",
   "metadata": {},
   "outputs": [],
   "source": [
    "# we take all features of the true model here\n",
    "# (we generally not know this exactly in practice)\n",
    "X = np.column_stack((np.cos(x), np.sin(2 * x), np.cos(5 * x), np.cos(6 * x)))\n",
    "# note  that intercept is only added in function bias_variance_of_model(X)\n",
    "fig, axs = plt.subplots(2, 2, figsize=(10, 5))\n",
    "bias_variance_of_model(X)  # lowest possible bias^2+variance, because we\n",
    "# know the true model (again: which in practice likely never will occur)\n",
    "# the remaining variance is from the added noise\n",
    "axs[0, 0].set_title(\"true model features, lowest bias, lowest var\");\n",
    "plt.tight_layout()\n",
    "plt.savefig('bias_variance_plots/true_model.png', dpi=300)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "691587c0",
   "metadata": {},
   "source": [
    "### Not Too Simple, Not Too Complex Model\n",
    "\n",
    "If the exact model equation is not known, we need to find a model that is neither to simple, nor too complex.\n",
    "\n",
    "We could consider the following model as the most robust for this example.\n",
    "\n",
    "A robust prediction model should have a **reasonable trade-off between bias^2 and variance** and reasonable **high** $R_{\\text{adjusted}}^2$ **mean** but **low** $R_{\\text{adjusted}}^2$ **variance**.\n",
    "\n",
    "This is in good agreement with the model here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "013c29be",
   "metadata": {},
   "outputs": [],
   "source": [
    "# we take only the first two features of the true model\n",
    "# as these oscillations explain much of the dependent variable y\n",
    "X = np.column_stack((np.cos(x), np.sin(2 * x)))\n",
    "# note  that intercept is only added in function bias_variance_of_model(X)\n",
    "fig, axs = plt.subplots(2, 2, figsize=(10, 5))\n",
    "bias_variance_of_model(X)\n",
    "axs[0, 0].set_title(\"reasonable bias/var trade-off if true model is unknown\");\n",
    "plt.tight_layout()\n",
    "plt.savefig('bias_variance_plots/robust_model.png', dpi=300)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f5af9dca",
   "metadata": {},
   "source": [
    "## Check Higher Noise Power\n",
    "\n",
    "We might check other noise power, for example `bias_variance_of_model(X, noise_scale=2)`.\n",
    "Although, the *Prediction Model == True Model* has by concept lowest bias and variance, the performance obviously degrades by increasing noise power. This is also indicated by the decreasing $R_{\\text{adjusted}}^2$ value.\n",
    "We should compare the performance of *The Not Too Simple, Not Too Complex Model* with the *Prediction Model == True Model*.\n",
    "Recall that we almost never have access to the true model equation. That's why the last model should be considered as the most robust in this toy example.\n",
    "We might check another model that includes the 3 most important features of the true model.\n",
    "What do we expect in terms of the performance measures?"
   ]
  },
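  {
   "cell_type": "markdown",
   "id": "c0de0009",
   "metadata": {},
   "source": [
    "One possible way to run these suggested checks is sketched below (it assumes that the cells above have been executed); the chosen feature set and the panel title are just examples, feel free to adapt `noise_scale` and the features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0de000a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch: model with the 3 most important features of the true model,\n",
    "# checked at higher noise power (assumes the cells above have been executed)\n",
    "X = np.column_stack((np.cos(x), np.sin(2 * x), np.cos(5 * x)))\n",
    "# note that intercept is only added in function bias_variance_of_model(X)\n",
    "fig, axs = plt.subplots(2, 2, figsize=(10, 5))\n",
    "bias_variance_of_model(X, noise_scale=2)\n",
    "axs[0, 0].set_title(\"3 most important true features, higher noise power\");\n",
    "plt.tight_layout()"
   ]
  },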
  {
   "cell_type": "markdown",
   "id": "2f4fb954",
   "metadata": {},
   "source": [
    "## Copyright\n",
    "\n",
    "- the notebooks are provided as [Open Educational Resources](https://en.wikipedia.org/wiki/Open_educational_resources)\n",
    "- the text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/)\n",
    "- the code of the IPython examples is licensed under the [MIT license](https://opensource.org/licenses/MIT)\n",
    "- feel free to use the notebooks for your own purposes\n",
    "- please attribute the work as follows: *Frank Schultz, Data Driven Audio Signal Processing - A Tutorial Featuring Computational Examples, University of Rostock* ideally with relevant file(s), github URL https://github.com/spatialaudio/data-driven-audio-signal-processing-exercise, commit number and/or version tag, year."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "myddasp",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}