{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {},
"id": "view-in-github"
},
"source": [
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"# Tutorial 3: Confidence intervals and bootstrapping\n",
"\n",
"**Week 1, Day 2: Model Fitting**\n",
"\n",
"**By Neuromatch Academy**\n",
"\n",
"**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith\n",
"\n",
"**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Ella Batty, Michael Waskom\n",
"\n",
"**Production editors:** Spiros Chavlis\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Tutorial Objectives\n",
"\n",
"*Estimated timing of tutorial: 23 minutes*\n",
"\n",
"This is Tutorial 3 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).\n",
"\n",
"In this tutorial, we will discuss how to assess how good our estimated model parameters are:\n",
"- Learn how to use bootstrapping to generate new sample datasets\n",
"- Estimate our model parameter on these new sample datasets\n",
"- Quantify the variance of our estimate using confidence intervals"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @title Tutorial slides\n",
"# @markdown These are the slides for the videos in all tutorials today\n",
"from IPython.display import IFrame\n",
"link_id = \"2mkq4\"\n",
"print(f\"If you want to download the slides: https://osf.io/download/{link_id}/\")\n",
"IFrame(src=f\"https://mfr.ca-1.osf.io/render?url=https://osf.io/{link_id}/?direct%26mode=render%26action=download%26mode=render\", width=854, height=480)"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @title Install and import feedback gadget\n",
"\n",
"!pip3 install vibecheck datatops --quiet\n",
"\n",
"from vibecheck import DatatopsContentReviewContainer\n",
"def content_review(notebook_section: str):\n",
" return DatatopsContentReviewContainer(\n",
" \"\", # No text prompt\n",
" notebook_section,\n",
" {\n",
" \"url\": \"https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab\",\n",
" \"name\": \"neuromatch_cn\",\n",
" \"user_key\": \"y1x3mpx5\",\n",
" },\n",
" ).render()\n",
"\n",
"\n",
"feedback_prefix = \"W1D2_T3\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"execution": {}
},
"outputs": [],
"source": [
"# Imports\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @title Figure Settings\n",
"import logging\n",
"logging.getLogger('matplotlib.font_manager').disabled = True\n",
"\n",
"%config InlineBackend.figure_format = 'retina'\n",
"plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/main/nma.mplstyle\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @title Plotting Functions\n",
"\n",
"def plot_original_and_resample(x, y, x_, y_):\n",
" \"\"\" Plot the original sample and the resampled points from this sample.\n",
"\n",
" Args:\n",
" x (ndarray): An array of shape (samples,) that contains the input values.\n",
" y (ndarray): An array of shape (samples,) that contains the corresponding\n",
" measurement values to the inputs.\n",
" x_ (ndarray): An array of shape (samples,) with a subset of input values from x\n",
"y_ (ndarray): An array of shape (samples,) with the subset of measurement\n",
"values from y corresponding to x_\n",
"\n",
" \"\"\"\n",
" fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))\n",
" ax1.scatter(x, y)\n",
" ax1.set(title='Original', xlabel='x', ylabel='y')\n",
"\n",
" ax2.scatter(x_, y_, color='c')\n",
"\n",
" ax2.set(title='Resampled', xlabel='x', ylabel='y',\n",
" xlim=ax1.get_xlim(), ylim=ax1.get_ylim())\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Introduction\n",
"\n",
"Up to this point we have been finding ways to estimate model parameters to fit some observed data. Our approach has been to optimize some criterion, either minimize the mean squared error or maximize the likelihood while using the entire dataset. How good is our estimate really? How confident are we that it will generalize to describe new data we haven't seen yet?\n",
"\n",
"One solution to this is to just collect more data and check the MSE on this new dataset with the previously estimated parameters. However this is not always feasible and still leaves open the question of how quantifiably confident we are in the accuracy of our model.\n",
"\n",
"In Section 1, we will explore how to implement bootstrapping. In Section 2, we will build confidence intervals of our estimates using the bootstrapping method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @title Video 1: Confidence Intervals & Bootstrapping\n",
"from ipywidgets import widgets\n",
"from IPython.display import YouTubeVideo\n",
"from IPython.display import IFrame\n",
"from IPython.display import display\n",
"\n",
"\n",
"class PlayVideo(IFrame):\n",
" def __init__(self, id, source, page=1, width=400, height=300, **kwargs):\n",
" self.id = id\n",
" if source == 'Bilibili':\n",
" src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'\n",
" elif source == 'Osf':\n",
" src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'\n",
" super(PlayVideo, self).__init__(src, width, height, **kwargs)\n",
"\n",
"\n",
"def display_videos(video_ids, W=400, H=300, fs=1):\n",
" tab_contents = []\n",
" for i, video_id in enumerate(video_ids):\n",
" out = widgets.Output()\n",
" with out:\n",
" if video_ids[i][0] == 'Youtube':\n",
" video = YouTubeVideo(id=video_ids[i][1], width=W,\n",
" height=H, fs=fs, rel=0)\n",
" print(f'Video available at https://youtube.com/watch?v={video.id}')\n",
" else:\n",
" video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,\n",
" height=H, fs=fs, autoplay=False)\n",
" if video_ids[i][0] == 'Bilibili':\n",
" print(f'Video available at https://www.bilibili.com/video/{video.id}')\n",
" elif video_ids[i][0] == 'Osf':\n",
" print(f'Video available at https://osf.io/{video.id}')\n",
" display(video)\n",
" tab_contents.append(out)\n",
" return tab_contents\n",
"\n",
"\n",
"video_ids = [('Youtube', 'hs6bVGQNSIs'), ('Bilibili', 'BV1vK4y1s7py')]\n",
"tab_contents = display_videos(video_ids, W=854, H=480)\n",
"tabs = widgets.Tab()\n",
"tabs.children = tab_contents\n",
"for i in range(len(tab_contents)):\n",
" tabs.set_title(i, video_ids[i][0])\n",
"display(tabs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Confidence_Intervals_and_Bootstrapping_Video\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Section 1: Bootstrapping\n",
"\n",
"*Estimated timing to here from start of tutorial: 7 min*\n",
"\n",
"[Bootstrapping](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)) is a widely applicable method to assess confidence/uncertainty about estimated parameters; it was originally [proposed](https://projecteuclid.org/euclid.aos/1176344552) by [Bradley Efron](https://en.wikipedia.org/wiki/Bradley_Efron). The idea is to generate many new synthetic datasets from the initial true dataset by randomly sampling from it, then to compute estimators on each of these new datasets, and finally to look at the distribution of all these estimators to quantify our confidence.\n",
"\n",
"Note that each resampled dataset will be the same size as our original one, with the new data points sampled with replacement, i.e., the same data point can appear multiple times. Also note that in practice we need many resampled datasets; here we use 2000.\n",
"\n",
"To explore this idea, we will start again with our noisy samples along the line $y_i = 1.2x_i + \\epsilon_i$, but this time we only use half as many data points as last time (15 instead of 30)."
]
},
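{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"As a quick illustration of sampling *with replacement* (a toy sketch on a made-up 5-element array, separate from the tutorial data): some entries typically appear several times while others are left out entirely.\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"np.random.seed(0)\n",
"data = np.array([10, 20, 30, 40, 50])\n",
"\n",
"# Draw len(data) indices with replacement: duplicates are allowed\n",
"idx = np.random.choice(len(data), size=len(data), replace=True)\n",
"resampled = data[idx]\n",
"\n",
"print(idx)        # some indices repeat\n",
"print(resampled)  # same size as data, but with repeats\n",
"```"
]
},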
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @title\n",
"\n",
"# @markdown Execute this cell to simulate some data\n",
"\n",
"# setting a fixed seed to our random number generator ensures we will always\n",
"# get the same pseudorandom number sequence\n",
"np.random.seed(121)\n",
"\n",
"# Let's set some parameters\n",
"theta = 1.2\n",
"n_samples = 15\n",
"\n",
"# Draw x and then calculate y\n",
"x = 10 * np.random.rand(n_samples) # sample from a uniform distribution over [0,10)\n",
"noise = np.random.randn(n_samples) # sample from a standard normal distribution\n",
"y = theta * x + noise\n",
"\n",
"fig, ax = plt.subplots()\n",
"ax.scatter(x, y) # produces a scatter plot\n",
"ax.set(xlabel='x', ylabel='y');"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Coding Exercise 1: Resample Dataset with Replacement\n",
"\n",
"In this exercise you will implement a method to resample a dataset with replacement. The method accepts $\\mathbf{x}$ and $\\mathbf{y}$ arrays. It should return a new set of $\\mathbf{x}'$ and $\\mathbf{y}'$ arrays that are created by randomly sampling from the originals with replacement.\n",
"\n",
"We will then compare the original dataset to a resampled dataset.\n",
"\n",
"**Hint:** The [numpy.random.choice](https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html) method would be useful here."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "both",
"execution": {}
},
"outputs": [],
"source": [
"def resample_with_replacement(x, y):\n",
" \"\"\"Resample data points with replacement from the dataset of `x` inputs and\n",
" `y` measurements.\n",
"\n",
" Args:\n",
" x (ndarray): An array of shape (samples,) that contains the input values.\n",
" y (ndarray): An array of shape (samples,) that contains the corresponding\n",
" measurement values to the inputs.\n",
"\n",
" Returns:\n",
" ndarray, ndarray: The newly resampled `x` and `y` data points.\n",
" \"\"\"\n",
" #######################################################\n",
" ## TODO for students: resample dataset with replacement\n",
" # Fill out function and remove\n",
" raise NotImplementedError(\"Student exercise: resample dataset with replacement\")\n",
" #######################################################\n",
"\n",
" # Get array of indices for resampled points\n",
" sample_idx = ...\n",
"\n",
" # Sample from x and y according to sample_idx\n",
" x_ = ...\n",
" y_ = ...\n",
"\n",
" return x_, y_\n",
"\n",
"x_, y_ = resample_with_replacement(x, y)\n",
"\n",
"plot_original_and_resample(x, y, x_, y_)"
]
},
{
"cell_type": "markdown",
"metadata": {
"cellView": "both",
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main/tutorials/W1D2_ModelFitting/solutions/W1D2_Tutorial3_Solution_81af3bd6.py)\n",
"\n",
"*Example output:*\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Resample_dataset_with_replacement_Exercise\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"In the resampled plot on the right, the total number of points is the same as the original, but some points were drawn multiple times; duplicates plot on top of each other, so fewer distinct points are visible.\n",
"\n",
"Now that we have a way to resample the data, we can use that in the full bootstrapping process."
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Coding Exercise 2: Bootstrap Estimates\n",
"\n",
"In this exercise you will implement a method to run the bootstrap process of generating a set of $\\hat\\theta$ values from a dataset of inputs ($\\mathbf{x}$) and measurements ($\\mathbf{y}$). You should use `resample_with_replacement` here, and you may also invoke the helper function `solve_normal_eqn` from Tutorial 1 to produce the MSE-based estimator.\n",
"\n",
"We will then use this function to look at the $\\hat{\\theta}$ values obtained from different resampled datasets.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @markdown Execute this cell for helper function `solve_normal_eqn`\n",
"def solve_normal_eqn(x, y):\n",
" \"\"\"Solve the normal equations to produce the value of theta_hat that minimizes\n",
" MSE.\n",
"\n",
" Args:\n",
" x (ndarray): An array of shape (samples,) that contains the input values.\n",
" y (ndarray): An array of shape (samples,) that contains the corresponding\n",
" measurement values to the inputs.\n",
"\n",
" Returns:\n",
"float: The value of theta_hat that minimizes MSE\n",
" \"\"\"\n",
" theta_hat = (x.T @ y) / (x.T @ x)\n",
" return theta_hat"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"def bootstrap_estimates(x, y, n=2000):\n",
" \"\"\"Generate a set of theta_hat estimates using the bootstrap method.\n",
"\n",
" Args:\n",
" x (ndarray): An array of shape (samples,) that contains the input values.\n",
" y (ndarray): An array of shape (samples,) that contains the corresponding\n",
" measurement values to the inputs.\n",
" n (int): The number of estimates to compute\n",
"\n",
" Returns:\n",
" ndarray: An array of estimated parameters with size (n,)\n",
" \"\"\"\n",
" theta_hats = np.zeros(n)\n",
"\n",
" ##############################################################################\n",
" ## TODO for students: implement bootstrap estimation\n",
" # Fill out function and remove\n",
" raise NotImplementedError(\"Student exercise: implement bootstrap estimation\")\n",
" ##############################################################################\n",
"\n",
" # Loop over number of estimates\n",
" for i in range(n):\n",
"\n",
" # Resample x and y\n",
" x_, y_ = ...\n",
"\n",
" # Compute theta_hat for this sample\n",
" theta_hats[i] = ...\n",
"\n",
" return theta_hats\n",
"\n",
"\n",
"# Set random seed\n",
"np.random.seed(123)\n",
"\n",
"# Get bootstrap estimates\n",
"theta_hats = bootstrap_estimates(x, y, n=2000)\n",
"print(theta_hats[0:5])"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"You should see `[1.27550888 1.17317819 1.18198819 1.25329255 1.20714664]` as the first five estimates."
]
},
{
"cell_type": "markdown",
"metadata": {
"cellView": "both",
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/main/tutorials/W1D2_ModelFitting/solutions/W1D2_Tutorial3_Solution_d73b40e4.py)\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Bootsrap_Estimates_Exercise\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Now that we have our bootstrap estimates, we can visualize all the potential models (models computed with different resampling) together to see how distributed they are."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @markdown Execute this cell to visualize all potential models\n",
"\n",
"fig, ax = plt.subplots()\n",
"\n",
"# For each theta_hat, plot model\n",
"theta_hats = bootstrap_estimates(x, y, n=2000)\n",
"for i, theta_hat in enumerate(theta_hats):\n",
" y_hat = theta_hat * x\n",
" ax.plot(x, y_hat, c='r', alpha=0.01, label='Resampled Fits' if i==0 else '')\n",
"\n",
"# Plot observed data\n",
"ax.scatter(x, y, label='Observed')\n",
"\n",
"# Plot true fit data\n",
"y_true = theta * x\n",
"ax.plot(x, y_true, 'g', linewidth=2, label='True Model')\n",
"\n",
"ax.set(\n",
" title='Bootstrapped Slope Estimation',\n",
" xlabel='x',\n",
" ylabel='y'\n",
")\n",
"\n",
"# Change legend line alpha property\n",
"handles, labels = ax.get_legend_handles_labels()\n",
"handles[0].set_alpha(1)\n",
"\n",
"ax.legend()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"This looks pretty good! The bootstrapped estimates spread around the true model, as we would have hoped. Note that here we have the luxury of knowing the ground-truth value of $\\theta$, but in applications we have to infer it from data. Assessing the quality of estimates based on finite data is therefore a task of fundamental importance in data analysis.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Section 2: Confidence Intervals\n",
"\n",
"*Estimated timing to here from start of tutorial: 17 min*\n",
"\n",
"Let us now quantify how uncertain our estimated slope is. We do so by computing [confidence intervals](https://en.wikipedia.org/wiki/Confidence_interval) (CIs) from our bootstrapped estimates. The most direct approach is to compute percentiles from the empirical distribution of bootstrapped estimates. Note that this is widely applicable as we are not assuming that this empirical distribution is Gaussian."
]
},
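{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"As a minimal sketch of the percentile approach (using synthetic stand-in estimates drawn from a Gaussian for illustration, not the tutorial's bootstrapped values): the 95% CI is simply the interval between the 2.5th and 97.5th percentiles of the estimate distribution.\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"np.random.seed(0)\n",
"# Stand-in estimates for illustration; in this tutorial these would be\n",
"# the bootstrapped theta_hats\n",
"estimates = np.random.normal(loc=1.2, scale=0.03, size=2000)\n",
"\n",
"lower, upper = np.percentile(estimates, [2.5, 97.5])\n",
"print(f\"95% CI: [{lower:.3f}, {upper:.3f}]\")\n",
"```"
]
},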
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @markdown Execute this cell to plot bootstrapped CI\n",
"\n",
"theta_hats = bootstrap_estimates(x, y, n=2000)\n",
"print(f\"mean = {np.mean(theta_hats):.2f}, std = {np.std(theta_hats):.2f}\")\n",
"\n",
"fig, ax = plt.subplots()\n",
"ax.hist(theta_hats, bins=20, facecolor='C1', alpha=0.75)\n",
"ax.axvline(theta, c='g', label=r'True $\\theta$')\n",
"ax.axvline(np.percentile(theta_hats, 50), color='r', label='Median')\n",
"ax.axvline(np.percentile(theta_hats, 2.5), color='b', label='95% CI')\n",
"ax.axvline(np.percentile(theta_hats, 97.5), color='b')\n",
"ax.legend()\n",
"ax.set(\n",
" title='Bootstrapped Confidence Interval',\n",
" xlabel=r'$\\hat{{\\theta}}$',\n",
" ylabel='count',\n",
" xlim=[1.0, 1.5]\n",
")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Looking at the distribution of bootstrapped $\\hat{\\theta}$ values, we see that the true $\\theta$ falls well within the 95% confidence interval, which is reassuring. We also see that the value $\\theta = 1$ does not fall within the confidence interval. From this we would reject the hypothesis that the slope was 1."
]
},
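{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"The \"reject $\\theta = 1$\" reasoning above amounts to a simple containment check, sketched here on synthetic stand-in estimates (not the tutorial's bootstrapped values):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"np.random.seed(0)\n",
"# Stand-in bootstrap estimates for illustration\n",
"estimates = np.random.normal(loc=1.2, scale=0.03, size=2000)\n",
"\n",
"lower, upper = np.percentile(estimates, [2.5, 97.5])\n",
"\n",
"for candidate in (1.0, 1.2):\n",
"    inside = lower <= candidate <= upper\n",
"    verdict = \"consistent with\" if inside else \"rejected by\"\n",
"    print(f\"theta = {candidate}: {verdict} the 95% CI\")\n",
"```"
]
},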
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Summary\n",
"\n",
"*Estimated timing of tutorial: 23 minutes*\n",
"\n",
"- Bootstrapping is a resampling procedure that allows us to build confidence intervals around inferred parameter values\n",
"- It is a widely applicable and very practical method that relies on computational power and pseudorandom number generators (as opposed to more classical approaches that depend on analytical derivations)"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Notation\n",
"\n",
"\\begin{align}\n",
"\\theta &\\quad \\text{parameter}\\\\\n",
"\\hat{\\theta} &\\quad \\text{estimated parameter}\\\\\n",
"x &\\quad \\text{input, independent variable}\\\\\n",
"y &\\quad \\text{response measurement, dependent variable}\\\\\n",
"\\mathbf{x} &\\quad \\text{vector of input values}\\\\\n",
"\\mathbf{y} &\\quad \\text{vector of measurements}\\\\\n",
"\\mathbf{x}' &\\quad \\text{vector of resampled input values }\\\\\n",
"\\mathbf{y}' &\\quad \\text{vector of resampled measurement values}\\\\\n",
"\\end{align}"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"**Suggested readings**\n",
"\n",
"[Computer Age Statistical Inference: Algorithms, Evidence and Data Science](https://doi.org/10.1017/CBO9781316576533) by Bradley Efron and Trevor Hastie\n"
]
}
],
"metadata": {
"celltoolbar": "Slideshow",
"colab": {
"collapsed_sections": [],
"include_colab_link": true,
"name": "W1D2_Tutorial3",
"provenance": [],
"toc_visible": true
},
"kernel": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.17"
}
},
"nbformat": 4,
"nbformat_minor": 0
}