{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Regression" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Think Bayes, Second Edition\n", "\n", "Copyright 2020 Allen B. Downey\n", "\n", "License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:43.792591Z", "iopub.status.busy": "2021-04-16T19:38:43.792140Z", "iopub.status.idle": "2021-04-16T19:38:43.794041Z", "shell.execute_reply": "2021-04-16T19:38:43.794390Z" }, "tags": [] }, "outputs": [], "source": [ "# If we're running on Colab, install empiricaldist\n", "# https://pypi.org/project/empiricaldist/\n", "\n", "import sys\n", "IN_COLAB = 'google.colab' in sys.modules\n", "\n", "if IN_COLAB:\n", " !pip install empiricaldist" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:43.798000Z", "iopub.status.busy": "2021-04-16T19:38:43.797535Z", "iopub.status.idle": "2021-04-16T19:38:43.800051Z", "shell.execute_reply": "2021-04-16T19:38:43.799576Z" }, "tags": [] }, "outputs": [], "source": [ "# Get utils.py\n", "\n", "from os.path import basename, exists\n", "\n", "def download(url):\n", " filename = basename(url)\n", " if not exists(filename):\n", " from urllib.request import urlretrieve\n", " local, _ = urlretrieve(url, filename)\n", " print('Downloaded ' + local)\n", " \n", "download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:43.803533Z", "iopub.status.busy": "2021-04-16T19:38:43.802836Z", "iopub.status.idle": "2021-04-16T19:38:44.488550Z", "shell.execute_reply": "2021-04-16T19:38:44.488120Z" }, "tags": [] }, "outputs": [], "source": [ "from utils import set_pyplot_params\n", "set_pyplot_params()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous chapter we saw several examples of logistic regression, which is based on the assumption that the likelihood of an outcome, expressed in the form of log odds, is a linear function of some quantity (continuous or discrete).\n", "\n", "In this chapter we'll work on examples of simple linear regression, which models the relationship between two quantities. Specifically, we'll look at changes over time in snowfall and the marathon world record.\n", "\n", "The models we'll use have three parameters, so you might want to review the tools we used for the three-parameter model in <<_MarkandRecapture>>." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## More Snow?\n", "\n", "I am under the impression that we don't get as much snow around here as we used to. By \"around here\" I mean Norfolk County, Massachusetts, where I was born, grew up, and currently live. And by \"used to\" I mean compared to when I was young, like in 1978 when we got [27 inches of snow](https://en.wikipedia.org/wiki/Northeastern_United_States_blizzard_of_1978) and I didn't have to go to school for a couple of weeks.\n", "\n", "Fortunately, we can test my conjecture with data. 
Norfolk County happens to be the location of the [Blue Hill Meteorological Observatory](https://en.wikipedia.org/wiki/Blue_Hill_Meteorological_Observatory), which keeps the oldest continuous weather record in North America.\n", "\n", "Data from this and many other weather stations is available from the [National Oceanic and Atmospheric Administration](https://www.ncdc.noaa.gov/cdo-web/search) (NOAA). I collected data from the Blue Hill Observatory from May 11, 1967 to May 11, 2020. " ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "To get more data, go to [National Oceanic and Atmospheric Administration](https://www.ncdc.noaa.gov/cdo-web/search), select daily summaries, choose a date range, and search for Stations with search term \"Blue Hill Coop\". Add it to the cart.\n", "\n", "Open cart and select \"Custom GHCN-Daily CSV\", then continue.\n", "\n", "Select all data types (but particularly Precipitation) and continue.\n", "\n", "Provide an email address and submit order.\n", "\n", "You'll get an email with a download link. Download the CSV file and move it into the current directory." ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "The following cell downloads the data as a CSV file." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.492216Z", "iopub.status.busy": "2021-04-16T19:38:44.491795Z", "iopub.status.idle": "2021-04-16T19:38:44.493838Z", "shell.execute_reply": "2021-04-16T19:38:44.493472Z" }, "tags": [] }, "outputs": [], "source": [ "download('https://github.com/AllenDowney/ThinkBayes2/raw/master/data/2239075.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use Pandas to read the data into `DataFrame`:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.497478Z", "iopub.status.busy": "2021-04-16T19:38:44.496929Z", "iopub.status.idle": "2021-04-16T19:38:44.536916Z", "shell.execute_reply": "2021-04-16T19:38:44.537394Z" } }, "outputs": [], "source": [ "import pandas as pd\n", "\n", "df = pd.read_csv('2239075.csv', parse_dates=[2])" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Here's what the last few rows look like." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.549351Z", "iopub.status.busy": "2021-04-16T19:38:44.541697Z", "iopub.status.idle": "2021-04-16T19:38:44.560204Z", "shell.execute_reply": "2021-04-16T19:38:44.559851Z" }, "tags": [] }, "outputs": [], "source": [ "df.tail(3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The columns we'll use are:\n", "\n", "* `DATE`, which is the date of each observation,\n", "\n", "* `SNOW`, which is the total snowfall in inches.\n", "\n", "I'll add a column that contains just the year part of the dates." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.563856Z", "iopub.status.busy": "2021-04-16T19:38:44.563284Z", "iopub.status.idle": "2021-04-16T19:38:44.568089Z", "shell.execute_reply": "2021-04-16T19:38:44.567683Z" } }, "outputs": [], "source": [ "df['YEAR'] = df['DATE'].dt.year" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And use `groupby` to add up the total snowfall in each year." 
] }, { "cell_type": "code", "execution_count": 9, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.572178Z", "iopub.status.busy": "2021-04-16T19:38:44.571162Z", "iopub.status.idle": "2021-04-16T19:38:44.574545Z", "shell.execute_reply": "2021-04-16T19:38:44.574115Z" } }, "outputs": [], "source": [ "snow = df.groupby('YEAR')['SNOW'].sum()" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "The first and last years are not complete, so I'll drop them." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.578061Z", "iopub.status.busy": "2021-04-16T19:38:44.577552Z", "iopub.status.idle": "2021-04-16T19:38:44.579968Z", "shell.execute_reply": "2021-04-16T19:38:44.580391Z" }, "tags": [] }, "outputs": [], "source": [ "snow = snow.iloc[1:-1]\n", "len(snow)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following figure shows total snowfall during each of the complete years in my lifetime." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.584664Z", "iopub.status.busy": "2021-04-16T19:38:44.583819Z", "iopub.status.idle": "2021-04-16T19:38:44.768641Z", "shell.execute_reply": "2021-04-16T19:38:44.768224Z" }, "tags": [] }, "outputs": [], "source": [ "from utils import decorate\n", "\n", "snow.plot(ls='', marker='o', alpha=0.5)\n", "\n", "decorate(xlabel='Year',\n", " ylabel='Total annual snowfall (inches)',\n", " title='Total annual snowfall in Norfolk County, MA')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looking at this plot, it's hard to say whether snowfall is increasing, decreasing, or unchanged. In the last decade, we've had several years with more snow than 1978, including 2015, which was the snowiest winter in the Boston area in modern history, with a total of 141 inches.\n", "\n", "This kind of question -- looking at noisy data and wondering whether it is going up or down -- is precisely the question we can answer with Bayesian regression." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.774042Z", "iopub.status.busy": "2021-04-16T19:38:44.773344Z", "iopub.status.idle": "2021-04-16T19:38:44.776141Z", "shell.execute_reply": "2021-04-16T19:38:44.776580Z" }, "tags": [] }, "outputs": [], "source": [ "snow.loc[[1978, 1996, 2015]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Regression Model\n", "\n", "The foundation of regression (Bayesian or not) is the assumption that a time series like this is the sum of two parts:\n", "\n", "1. A linear function of time, and\n", "\n", "2. A series of random values drawn from a distribution that is not changing over time.\n", "\n", "Mathematically, the regression model is\n", "\n", "$$y = a x + b + \\epsilon$$\n", "\n", "where $y$ is the series of measurements (snowfall in this example), $x$ is the series of times (years) and $\\epsilon$ is the series of random values.\n", "\n", "$a$ and $b$ are the slope and intercept of the line through the data. They are unknown parameters, so we will use the data to estimate them.\n", "\n", "We don't know the distribution of $\\epsilon$, so we'll make the additional assumption that it is a normal distribution with mean 0 and unknown standard deviation, $\\sigma$. 
\n", "To see whether this assumption is reasonable, I'll plot the distribution of total snowfall and a normal model with the same mean and standard deviation.\n", "\n", "Here's a `Pmf` object that represents the distribution of snowfall." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.780474Z", "iopub.status.busy": "2021-04-16T19:38:44.779794Z", "iopub.status.idle": "2021-04-16T19:38:44.786003Z", "shell.execute_reply": "2021-04-16T19:38:44.785532Z" } }, "outputs": [], "source": [ "from empiricaldist import Pmf\n", "\n", "pmf_snowfall = Pmf.from_seq(snow)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here are the mean and standard deviation of the data." ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.789914Z", "iopub.status.busy": "2021-04-16T19:38:44.789309Z", "iopub.status.idle": "2021-04-16T19:38:44.791963Z", "shell.execute_reply": "2021-04-16T19:38:44.791602Z" } }, "outputs": [], "source": [ "mean, std = pmf_snowfall.mean(), pmf_snowfall.std()\n", "mean, std" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I'll use the `norm` object from SciPy to compute the CDF of a normal distribution with the same mean and standard deviation." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.796359Z", "iopub.status.busy": "2021-04-16T19:38:44.795815Z", "iopub.status.idle": "2021-04-16T19:38:44.797503Z", "shell.execute_reply": "2021-04-16T19:38:44.797862Z" } }, "outputs": [], "source": [ "from scipy.stats import norm\n", "\n", "dist = norm(mean, std)\n", "qs = pmf_snowfall.qs\n", "ps = dist.cdf(qs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's what the distribution of the data looks like compared to the normal model." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.821740Z", "iopub.status.busy": "2021-04-16T19:38:44.812987Z", "iopub.status.idle": "2021-04-16T19:38:44.975037Z", "shell.execute_reply": "2021-04-16T19:38:44.974589Z" }, "tags": [] }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "plt.plot(qs, ps, color='C5', label='model')\n", "pmf_snowfall.make_cdf().plot(label='data')\n", "\n", "decorate(xlabel='Total snowfall (inches)',\n", " ylabel='CDF',\n", " title='Normal model of variation in snowfall')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We've had more winters below the mean than expected, but overall this looks like a reasonable model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Least Squares Regression\n", "\n", "Our regression model has three parameters: slope, intercept, and standard deviation of $\\epsilon$.\n", "Before we can estimate them, we have to choose priors.\n", "To help with that, I'll use StatsModel to fit a line to the data by [least squares regression](https://en.wikipedia.org/wiki/Least_squares).\n", "\n", "First, I'll use `reset_index` to convert `snow`, which is a `Series`, to a `DataFrame`." 
] }, { "cell_type": "code", "execution_count": 17, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.985446Z", "iopub.status.busy": "2021-04-16T19:38:44.984866Z", "iopub.status.idle": "2021-04-16T19:38:44.988417Z", "shell.execute_reply": "2021-04-16T19:38:44.988051Z" } }, "outputs": [], "source": [ "data = snow.reset_index()\n", "data.head(3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result is a `DataFrame` with two columns, `YEAR` and `SNOW`, in a format we can use with StatsModels.\n", "\n", "As we did in the previous chapter, I'll center the data by subtracting off the mean." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:44.994657Z", "iopub.status.busy": "2021-04-16T19:38:44.993833Z", "iopub.status.idle": "2021-04-16T19:38:44.996662Z", "shell.execute_reply": "2021-04-16T19:38:44.997259Z" } }, "outputs": [], "source": [ "offset = round(data['YEAR'].mean())\n", "data['x'] = data['YEAR'] - offset\n", "offset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And I'll add a column to `data` so the dependent variable has a standard name." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.002030Z", "iopub.status.busy": "2021-04-16T19:38:45.001189Z", "iopub.status.idle": "2021-04-16T19:38:45.002832Z", "shell.execute_reply": "2021-04-16T19:38:45.003449Z" } }, "outputs": [], "source": [ "data['y'] = data['SNOW']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we can use StatsModels to compute the least squares fit to the data and estimate `slope` and `intercept`." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.007222Z", "iopub.status.busy": "2021-04-16T19:38:45.006101Z", "iopub.status.idle": "2021-04-16T19:38:45.077844Z", "shell.execute_reply": "2021-04-16T19:38:45.078260Z" } }, "outputs": [], "source": [ "import statsmodels.formula.api as smf\n", "\n", "formula = 'y ~ x'\n", "results = smf.ols(formula, data=data).fit()\n", "results.params" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The intercept, about 64 inches, is the expected snowfall when `x=0`, which is the beginning of 1994.\n", "The estimated slope indicates that total snowfall is increasing at a rate of about 0.5 inches per year. \n", "\n", "`results` also provides `resid`, which is an array of residuals, that is, the differences between the data and the fitted line.\n", "The standard deviation of the residuals is an estimate of `sigma`." ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.082559Z", "iopub.status.busy": "2021-04-16T19:38:45.081856Z", "iopub.status.idle": "2021-04-16T19:38:45.085047Z", "shell.execute_reply": "2021-04-16T19:38:45.084586Z" } }, "outputs": [], "source": [ "results.resid.std()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll use these estimates to choose prior distributions for the parameters." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Priors\n", "\n", "I'll use uniform distributions for all three parameters." 
] }, { "cell_type": "code", "execution_count": 22, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.090543Z", "iopub.status.busy": "2021-04-16T19:38:45.089992Z", "iopub.status.idle": "2021-04-16T19:38:45.091913Z", "shell.execute_reply": "2021-04-16T19:38:45.092335Z" } }, "outputs": [], "source": [ "import numpy as np\n", "from utils import make_uniform\n", "\n", "qs = np.linspace(-0.5, 1.5, 51)\n", "prior_slope = make_uniform(qs, 'Slope')" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.097282Z", "iopub.status.busy": "2021-04-16T19:38:45.096524Z", "iopub.status.idle": "2021-04-16T19:38:45.098971Z", "shell.execute_reply": "2021-04-16T19:38:45.098521Z" } }, "outputs": [], "source": [ "qs = np.linspace(54, 75, 41)\n", "prior_inter = make_uniform(qs, 'Intercept')" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.103775Z", "iopub.status.busy": "2021-04-16T19:38:45.103183Z", "iopub.status.idle": "2021-04-16T19:38:45.105613Z", "shell.execute_reply": "2021-04-16T19:38:45.105017Z" } }, "outputs": [], "source": [ "qs = np.linspace(20, 35, 31)\n", "prior_sigma = make_uniform(qs, 'Sigma')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I made the prior distributions different lengths for two reasons. First, if we make a mistake and use the wrong distribution, it will be easier to catch the error if they are all different lengths.\n", "\n", "Second, it provides more precision for the most important parameter, `slope`, and spends less computational effort on the least important, `sigma`.\n", "\n", "In <<_ThreeParameterModel>> we made a joint distribution with three parameters. I'll wrap that process in a function:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.110768Z", "iopub.status.busy": "2021-04-16T19:38:45.109913Z", "iopub.status.idle": "2021-04-16T19:38:45.111841Z", "shell.execute_reply": "2021-04-16T19:38:45.112302Z" } }, "outputs": [], "source": [ "from utils import make_joint\n", "\n", "def make_joint3(pmf1, pmf2, pmf3):\n", " \"\"\"Make a joint distribution with three parameters.\"\"\"\n", " joint2 = make_joint(pmf2, pmf1).stack()\n", " joint3 = make_joint(pmf3, joint2).stack()\n", " return Pmf(joint3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And use it to make a `Pmf` that represents the joint distribution of the three parameters." ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.116443Z", "iopub.status.busy": "2021-04-16T19:38:45.115752Z", "iopub.status.idle": "2021-04-16T19:38:45.129978Z", "shell.execute_reply": "2021-04-16T19:38:45.129613Z" } }, "outputs": [], "source": [ "prior = make_joint3(prior_slope, prior_inter, prior_sigma)\n", "prior.head(3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The index of `Pmf` has three columns, containing values of `slope`, `inter`, and `sigma`, in that order.\n", "\n", "With three parameters, the size of the joint distribution starts to get big. Specifically, it is the product of the lengths of the prior distributions. In this example, the prior distributions have 51, 41, and 31 values, so the length of the joint prior is 64,821." 
] }, { "cell_type": "code", "execution_count": 27, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.133803Z", "iopub.status.busy": "2021-04-16T19:38:45.133137Z", "iopub.status.idle": "2021-04-16T19:38:45.135815Z", "shell.execute_reply": "2021-04-16T19:38:45.136173Z" }, "tags": [] }, "outputs": [], "source": [ "len(prior_slope), len(prior_inter), len(prior_sigma)" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.139897Z", "iopub.status.busy": "2021-04-16T19:38:45.139320Z", "iopub.status.idle": "2021-04-16T19:38:45.141694Z", "shell.execute_reply": "2021-04-16T19:38:45.142063Z" }, "tags": [] }, "outputs": [], "source": [ "len(prior_slope) * len(prior_inter) * len(prior_sigma)" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.145315Z", "iopub.status.busy": "2021-04-16T19:38:45.144835Z", "iopub.status.idle": "2021-04-16T19:38:45.147028Z", "shell.execute_reply": "2021-04-16T19:38:45.147371Z" }, "tags": [] }, "outputs": [], "source": [ "len(prior)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Likelihood\n", "\n", "Now we'll compute the likelihood of the data.\n", "To demonstrate the process, let's assume temporarily that the parameters are known." ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.150860Z", "iopub.status.busy": "2021-04-16T19:38:45.150270Z", "iopub.status.idle": "2021-04-16T19:38:45.153978Z", "shell.execute_reply": "2021-04-16T19:38:45.153346Z" } }, "outputs": [], "source": [ "inter = 64\n", "slope = 0.51\n", "sigma = 25" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I'll extract the `xs` and `ys` from `data` as `Series` objects:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.157932Z", "iopub.status.busy": "2021-04-16T19:38:45.157287Z", "iopub.status.idle": "2021-04-16T19:38:45.159173Z", "shell.execute_reply": "2021-04-16T19:38:45.159629Z" } }, "outputs": [], "source": [ "xs = data['x']\n", "ys = data['y']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And compute the \"residuals\", which are the differences between the actual values, `ys`, and the values we expect based on `slope` and `inter`." ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.164698Z", "iopub.status.busy": "2021-04-16T19:38:45.163879Z", "iopub.status.idle": "2021-04-16T19:38:45.166494Z", "shell.execute_reply": "2021-04-16T19:38:45.165985Z" } }, "outputs": [], "source": [ "expected = slope * xs + inter\n", "resid = ys - expected" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "According to the model, the residuals should follow a normal distribution with mean 0 and standard deviation `sigma`. So we can compute the likelihood of each residual value using `norm` from SciPy." 
] }, { "cell_type": "code", "execution_count": 33, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.171835Z", "iopub.status.busy": "2021-04-16T19:38:45.171084Z", "iopub.status.idle": "2021-04-16T19:38:45.173013Z", "shell.execute_reply": "2021-04-16T19:38:45.173490Z" } }, "outputs": [], "source": [ "densities = norm(0, sigma).pdf(resid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result is an array of probability densities, one for each element of the dataset; their product is the likelihood of the data." ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.177823Z", "iopub.status.busy": "2021-04-16T19:38:45.176986Z", "iopub.status.idle": "2021-04-16T19:38:45.179736Z", "shell.execute_reply": "2021-04-16T19:38:45.180173Z" } }, "outputs": [], "source": [ "likelihood = densities.prod()\n", "likelihood" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we saw in the previous chapter, the likelihood of any particular dataset tends to be small.\n", "If it's too small, we might exceed the limits of floating-point arithmetic.\n", "When that happens, we can avoid the problem by computing likelihoods under a log transform.\n", "But in this example that's not necessary." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Update\n", "\n", "Now we're ready to do the update. First, we need to compute the likelihood of the data for each possible set of parameters." ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:38:45.185541Z", "iopub.status.busy": "2021-04-16T19:38:45.184543Z", "iopub.status.idle": "2021-04-16T19:39:24.492328Z", "shell.execute_reply": "2021-04-16T19:39:24.491834Z" } }, "outputs": [], "source": [ "likelihood = prior.copy()\n", "\n", "for slope, inter, sigma in prior.index:\n", " expected = slope * xs + inter\n", " resid = ys - expected\n", " densities = norm.pdf(resid, 0, sigma)\n", " likelihood[slope, inter, sigma] = densities.prod()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This computation takes longer than many of the previous examples.\n", "We are approaching the limit of what we can do with grid approximations.\n", "\n", "Nevertheless, we can do the update in the usual way:" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:24.496186Z", "iopub.status.busy": "2021-04-16T19:39:24.495332Z", "iopub.status.idle": "2021-04-16T19:39:24.505907Z", "shell.execute_reply": "2021-04-16T19:39:24.506555Z" }, "tags": [] }, "outputs": [], "source": [ "posterior = prior * likelihood\n", "posterior.normalize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result is a `Pmf` with a three-level index containing values of `slope`, `inter`, and `sigma`.\n", "To get the marginal distributions from the joint posterior, we can use `Pmf.marginal`, which we saw in <<_ThreeParameterModel>>." 
] }, { "cell_type": "code", "execution_count": 37, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:24.511934Z", "iopub.status.busy": "2021-04-16T19:39:24.510975Z", "iopub.status.idle": "2021-04-16T19:39:24.521557Z", "shell.execute_reply": "2021-04-16T19:39:24.521978Z" } }, "outputs": [], "source": [ "posterior_slope = posterior.marginal(0)\n", "posterior_inter = posterior.marginal(1)\n", "posterior_sigma = posterior.marginal(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the posterior distribution for `sigma`:" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:24.565248Z", "iopub.status.busy": "2021-04-16T19:39:24.537773Z", "iopub.status.idle": "2021-04-16T19:39:24.888743Z", "shell.execute_reply": "2021-04-16T19:39:24.888197Z" }, "tags": [] }, "outputs": [], "source": [ "posterior_sigma.plot()\n", "\n", "decorate(xlabel='$\\sigma$, standard deviation of $\\epsilon$',\n", " ylabel='PDF',\n", " title='Posterior marginal distribution of $\\sigma$')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The most likely values for `sigma` are near 26 inches, which is consistent with our estimate based on the standard deviation of the data.\n", "\n", "However, to say whether snowfall is increasing or decreasing, we don't really care about `sigma`. It is a \"nuisance parameter\", so-called because we have to estimate it as part of the model, but we don't need it to answer the questions we are interested in.\n", "\n", "Nevertheless, it is good to check the marginal distributions to make sure \n", "\n", "* The location is consistent with our expectations, and \n", "\n", "* The posterior probabilities are near 0 at the extremes of the range, which indicates that the prior distribution covers all parameters with non-negligible probability.\n", "\n", "In this example, the posterior distribution of `sigma` looks fine." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the posterior distribution of `inter`:" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:24.912873Z", "iopub.status.busy": "2021-04-16T19:39:24.909890Z", "iopub.status.idle": "2021-04-16T19:39:25.019756Z", "shell.execute_reply": "2021-04-16T19:39:25.019133Z" }, "tags": [] }, "outputs": [], "source": [ "posterior_inter.plot(color='C1')\n", "decorate(xlabel='intercept (inches)',\n", " ylabel='PDF',\n", " title='Posterior marginal distribution of intercept')" ] }, { "cell_type": "code", "execution_count": 40, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:25.023742Z", "iopub.status.busy": "2021-04-16T19:39:25.023322Z", "iopub.status.idle": "2021-04-16T19:39:25.027308Z", "shell.execute_reply": "2021-04-16T19:39:25.027853Z" }, "tags": [] }, "outputs": [], "source": [ "from utils import summarize\n", " \n", "summarize(posterior_inter) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The posterior mean is about 64 inches, which is the expected amount of snow during the year at the midpoint of the range, 1994.\n", "\n", "And finally, here's the posterior distribution of `slope`:" ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:25.063785Z", "iopub.status.busy": "2021-04-16T19:39:25.044731Z", "iopub.status.idle": "2021-04-16T19:39:25.212957Z", "shell.execute_reply": "2021-04-16T19:39:25.213516Z" }, "tags": [] }, "outputs": [], "source": [ "posterior_slope.plot(color='C4')\n", "decorate(xlabel='Slope (inches per year)',\n", " ylabel='PDF',\n", " title='Posterior marginal distribution of slope')" ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:25.217691Z", "iopub.status.busy": "2021-04-16T19:39:25.217238Z", "iopub.status.idle": "2021-04-16T19:39:25.219918Z", "shell.execute_reply": "2021-04-16T19:39:25.219480Z" }, "tags": [] }, "outputs": [], "source": [ "summarize(posterior_slope)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The posterior mean is about 0.51 inches, which is consistent with the estimate we got from least squared regression. \n", "\n", "The 90% credible interval is from 0.1 to 0.9, which indicates that our uncertainty about this estimate is pretty high. In fact, there is still a small posterior probability (about 2\\%) that the slope is negative. " ] }, { "cell_type": "code", "execution_count": 43, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:25.224069Z", "iopub.status.busy": "2021-04-16T19:39:25.223483Z", "iopub.status.idle": "2021-04-16T19:39:25.226209Z", "shell.execute_reply": "2021-04-16T19:39:25.225770Z" }, "tags": [] }, "outputs": [], "source": [ "posterior_slope.make_cdf()(0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, it is more likely that my conjecture was wrong: we are actually getting more snow around here than we used to, increasing at a rate of about a half-inch per year, which is substantial. On average, we get an additional 25 inches of snow per year than we did when I was young.\n", "\n", "This example shows that with slow-moving trends and noisy data, your instincts can be misleading. \n", "\n", "Now, you might suspect that I overestimate the amount of snow when I was young because I enjoyed it, and underestimate it now because I don't. 
But you would be mistaken.\n", "\n", "During the Blizzard of 1978, we did not have a snowblower and my brother and I had to shovel. My sister got a pass for no good reason. Our driveway was about 60 feet long and three cars wide near the garage. And we had to shovel Mr. Crocker's driveway, too, for which we were not allowed to accept payment. Furthermore, as I recall it was during this excavation that I accidentally hit my brother on the head with a shovel, and it bled a lot because, you know, scalp wounds.\n", "\n", "Anyway, the point is that I don't think I overestimate the amount of snow when I was young because I have fond memories of it. " ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Optimization\n", "\n", "The way we computed the likelihood in the previous section was pretty slow. The problem is that we looped through every possible set of parameters in the prior distribution, and there were more than 60,000 of them.\n", "\n", "If we can do more work per iteration, and run the loop fewer times, we expect it to go faster.\n", "\n", "In order to do that, I'll unstack the prior distribution:" ] }, { "cell_type": "code", "execution_count": 44, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:25.229663Z", "iopub.status.busy": "2021-04-16T19:39:25.228880Z", "iopub.status.idle": "2021-04-16T19:39:25.265229Z", "shell.execute_reply": "2021-04-16T19:39:25.265554Z" }, "tags": [] }, "outputs": [], "source": [ "joint3 = prior.unstack()\n", "joint3.head(3)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "The result is a `DataFrame` with values of `slope` and `inter` down the rows and values of `sigma` across the columns.\n", "\n", "The following function, `update_optimized`, takes the joint prior distribution in this form and returns the joint posterior distribution in the same form." ] }, { "cell_type": "code", "execution_count": 45, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:25.270539Z", "iopub.status.busy": "2021-04-16T19:39:25.270009Z", "iopub.status.idle": "2021-04-16T19:39:25.272082Z", "shell.execute_reply": "2021-04-16T19:39:25.271704Z" }, "tags": [] }, "outputs": [], "source": [ "from utils import normalize\n", "\n", "def update_optimized(prior, data):\n", " \"\"\"Posterior distribution of regression parameters\n", " `slope`, `inter`, and `sigma`.\n", " \n", " prior: DataFrame representing the joint prior\n", " data: DataFrame with columns `x` and `y`\n", " \n", " returns: DataFrame representing the joint posterior\n", " \"\"\"\n", " xs = data['x']\n", " ys = data['y']\n", " sigmas = prior.columns \n", " likelihood = prior.copy()\n", "\n", " for slope, inter in prior.index:\n", " expected = slope * xs + inter\n", " resid = ys - expected\n", " resid_mesh, sigma_mesh = np.meshgrid(resid, sigmas)\n", " densities = norm.pdf(resid_mesh, 0, sigma_mesh)\n", " likelihood.loc[slope, inter] = densities.prod(axis=1)\n", " \n", " posterior = prior * likelihood\n", " normalize(posterior)\n", " return posterior" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "This version loops through all possible pairs of `slope` and `inter`, so the loop runs about 2000 times." 
] }, { "cell_type": "code", "execution_count": 46, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:25.276199Z", "iopub.status.busy": "2021-04-16T19:39:25.275543Z", "iopub.status.idle": "2021-04-16T19:39:25.278211Z", "shell.execute_reply": "2021-04-16T19:39:25.278659Z" }, "tags": [] }, "outputs": [], "source": [ "len(prior_slope) * len(prior_inter)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Each time through the loop, it uses a grid mesh to compute the likelihood of the data for all values of `sigma`. The result is an array with one column for each data point and one row for each value of `sigma`. Taking the product across the columns (`axis=1`) yields the probability of the data for each value of sigma, which we assign as a row in `likelihood`." ] }, { "cell_type": "code", "execution_count": 47, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:25.282991Z", "iopub.status.busy": "2021-04-16T19:39:25.282462Z", "iopub.status.idle": "2021-04-16T19:39:27.180270Z", "shell.execute_reply": "2021-04-16T19:39:27.179860Z" }, "tags": [] }, "outputs": [], "source": [ "%time posterior_opt = update_optimized(joint3, data)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "We get the same result either way." ] }, { "cell_type": "code", "execution_count": 48, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:27.185227Z", "iopub.status.busy": "2021-04-16T19:39:27.184354Z", "iopub.status.idle": "2021-04-16T19:39:27.188848Z", "shell.execute_reply": "2021-04-16T19:39:27.189315Z" }, "tags": [] }, "outputs": [], "source": [ "np.allclose(posterior, posterior_opt.stack())" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "But this version is about 25 times faster than the previous version. \n", "\n", "This optimization works because many functions in NumPy and SciPy are written in C, so they run fast compared to Python. If you can do more work each time you call these functions, and less time running the loop in Python, your code will often run substantially faster.\n", "\n", "In this version of the posterior distribution, `slope` and `inter` run down the rows and `sigma` runs across the columns. So we can use `marginal` to get the posterior joint distribution of `slope` and `intercept`." ] }, { "cell_type": "code", "execution_count": 49, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:27.192910Z", "iopub.status.busy": "2021-04-16T19:39:27.192447Z", "iopub.status.idle": "2021-04-16T19:39:27.200629Z", "shell.execute_reply": "2021-04-16T19:39:27.200147Z" }, "tags": [] }, "outputs": [], "source": [ "from utils import marginal\n", "\n", "posterior2 = marginal(posterior_opt, 1)\n", "posterior2.head(3)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "The result is a `Pmf` with two columns in the index.\n", "To plot it, we have to unstack it." ] }, { "cell_type": "code", "execution_count": 50, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:27.204066Z", "iopub.status.busy": "2021-04-16T19:39:27.203624Z", "iopub.status.idle": "2021-04-16T19:39:27.226398Z", "shell.execute_reply": "2021-04-16T19:39:27.225993Z" }, "tags": [] }, "outputs": [], "source": [ "joint_posterior = posterior2.unstack().transpose()\n", "joint_posterior.head(3)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Here's what it looks like." 
] }, { "cell_type": "code", "execution_count": 51, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:27.240820Z", "iopub.status.busy": "2021-04-16T19:39:27.240115Z", "iopub.status.idle": "2021-04-16T19:39:27.414765Z", "shell.execute_reply": "2021-04-16T19:39:27.414340Z" }, "tags": [] }, "outputs": [], "source": [ "from utils import plot_contour\n", "\n", "plot_contour(joint_posterior)\n", "decorate(title='Posterior joint distribution of slope and intercept')" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "The ovals in the contour plot are aligned with the axes, which indicates that there is no correlation between `slope` and `inter` in the posterior distribution, which is what we expect since we centered the values.\n", "\n", "In this example, the motivating question is about the slope of the line, so we answered it by looking at the posterior distribution of slope.\n", "\n", "In the next example, the motivating question is about prediction, so we'll use the joint posterior distribution to generate predictive distributions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Marathon World Record\n", "\n", "For many running events, if you plot the world record pace over time, the result is a remarkably straight line. People, [including me](http://allendowney.blogspot.com/2011/04/two-hour-marathon-in-2045.html), have speculated about possible reasons for this phenomenon.\n", "\n", "People have also speculated about when, if ever, the world record time for the marathon will be less than two hours.\n", "(Note: In 2019 Eliud Kipchoge ran the marathon distance in under two hours, which is an astonishing achievement that I fully appreciate, but for several reasons it did not count as a world record).\n", "\n", "So, as a second example of Bayesian regression, we'll consider the world record progression for the marathon (for male runners), estimate the parameters of a linear model, and use the model to predict when a runner will break the two-hour barrier. " ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "The following cell downloads a web page from Wikipedia that includes a table of marathon world records, and uses Pandas to put the data in a `DataFrame`." ] }, { "cell_type": "code", "execution_count": 51, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:27.418039Z", "iopub.status.busy": "2021-04-16T19:39:27.417570Z", "iopub.status.idle": "2021-04-16T19:39:27.824997Z", "shell.execute_reply": "2021-04-16T19:39:27.825376Z" }, "tags": [] }, "outputs": [], "source": [ "url = 'https://en.wikipedia.org/wiki/Marathon_world_record_progression#Men'\n", "tables = pd.read_html(url)\n", "len(tables)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "If that doesn't work, I have made a copy of this page available. The following cell downloads and parses it." 
] }, { "cell_type": "code", "execution_count": 52, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:27.828135Z", "iopub.status.busy": "2021-04-16T19:39:27.827662Z", "iopub.status.idle": "2021-04-16T19:39:27.829786Z", "shell.execute_reply": "2021-04-16T19:39:27.829411Z" }, "tags": [] }, "outputs": [], "source": [ "#import os\n", "\n", "#datafile = 'Marathon_world_record_progression.html'\n", "#download('https://github.com/AllenDowney/ThinkBayes2/raw/master/data/Marathon_world_record_progression.html')\n", "\n", "#tables = pd.read_html(datafile)\n", "#len(tables)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "The first table is the one we want." ] }, { "cell_type": "code", "execution_count": 53, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:27.838976Z", "iopub.status.busy": "2021-04-16T19:39:27.838551Z", "iopub.status.idle": "2021-04-16T19:39:27.842313Z", "shell.execute_reply": "2021-04-16T19:39:27.842643Z" }, "tags": [] }, "outputs": [], "source": [ "table = tables[0]\n", "table.tail(3)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "We can use Pandas to parse the dates.\n", "A few of them include notes that cause parsing problems, but the argument `errors='coerce'` tells Pandas to fill invalid dates with `NaT`, which is a version of `NaN` that represents \"not a time\". " ] }, { "cell_type": "code", "execution_count": 54, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:27.853767Z", "iopub.status.busy": "2021-04-16T19:39:27.853247Z", "iopub.status.idle": "2021-04-16T19:39:27.855693Z", "shell.execute_reply": "2021-04-16T19:39:27.856161Z" }, "tags": [] }, "outputs": [], "source": [ "table['date'] = pd.to_datetime(table['Date'], errors='coerce')\n", "table['date'].head()" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "We can also use Pandas to parse the record times." ] }, { "cell_type": "code", "execution_count": 55, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:27.860393Z", "iopub.status.busy": "2021-04-16T19:39:27.859869Z", "iopub.status.idle": "2021-04-16T19:39:27.861986Z", "shell.execute_reply": "2021-04-16T19:39:27.861604Z" }, "tags": [] }, "outputs": [], "source": [ "table['time'] = pd.to_timedelta(table['Time'])" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "And convert the times to paces in miles per hour." ] }, { "cell_type": "code", "execution_count": 56, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:27.868089Z", "iopub.status.busy": "2021-04-16T19:39:27.867544Z", "iopub.status.idle": "2021-04-16T19:39:27.869998Z", "shell.execute_reply": "2021-04-16T19:39:27.870375Z" }, "tags": [] }, "outputs": [], "source": [ "table['y'] = 26.2 / table['time'].dt.total_seconds() * 3600\n", "table['y'].head()" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "The following function plots the results." 
] }, { "cell_type": "code", "execution_count": 57, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:27.874390Z", "iopub.status.busy": "2021-04-16T19:39:27.873871Z", "iopub.status.idle": "2021-04-16T19:39:27.876092Z", "shell.execute_reply": "2021-04-16T19:39:27.875730Z" }, "tags": [] }, "outputs": [], "source": [ "def plot_speeds(df):\n", " \"\"\"Plot marathon world record speed as a function of time.\n", " \n", " df: DataFrame with date and mph\n", " \"\"\"\n", " plt.axhline(13.1, color='C5', ls='--')\n", " plt.plot(df['date'], df['y'], 'o', \n", " label='World record speed', \n", " color='C1', alpha=0.5)\n", " \n", " decorate(xlabel='Date',\n", " ylabel='Speed (mph)')" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Here's what the results look like.\n", "The dashed line shows the speed required for a two-hour marathon, 13.1 miles per hour." ] }, { "cell_type": "code", "execution_count": 58, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:27.896251Z", "iopub.status.busy": "2021-04-16T19:39:27.894071Z", "iopub.status.idle": "2021-04-16T19:39:28.025535Z", "shell.execute_reply": "2021-04-16T19:39:28.026172Z" }, "tags": [] }, "outputs": [], "source": [ "plot_speeds(table)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "It's not a perfectly straight line. In the early years of the marathon, the record speed increased quickly; since about 1970, it has been increasing more slowly.\n", "\n", "For our analysis, let's focus on the recent progression, starting in 1970." ] }, { "cell_type": "code", "execution_count": 59, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:28.039659Z", "iopub.status.busy": "2021-04-16T19:39:28.039051Z", "iopub.status.idle": "2021-04-16T19:39:28.041586Z", "shell.execute_reply": "2021-04-16T19:39:28.041953Z" }, "tags": [] }, "outputs": [], "source": [ "recent = table['date'] > pd.to_datetime('1970')\n", "data = table.loc[recent].copy()\n", "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the notebook for this chapter, you can see how I loaded and cleaned the data. The result is a `DataFrame` that contains the following columns (and additional information we won't use):\n", "\n", "* `date`, which is a Pandas `Timestamp` representing the date when the world record was broken, and\n", "\n", "* `speed`, which records the record-breaking pace in mph.\n", "\n", "Here's what the results look like, starting in 1970:" ] }, { "cell_type": "code", "execution_count": 60, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:28.057142Z", "iopub.status.busy": "2021-04-16T19:39:28.056331Z", "iopub.status.idle": "2021-04-16T19:39:28.210918Z", "shell.execute_reply": "2021-04-16T19:39:28.210485Z" }, "tags": [] }, "outputs": [], "source": [ "plot_speeds(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data points fall approximately on a line, although it's possible that the slope is increasing.\n", "\n", "To prepare the data for regression, I'll subtract away the approximate midpoint of the time interval, 1995." 
] }, { "cell_type": "code", "execution_count": 61, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:28.214686Z", "iopub.status.busy": "2021-04-16T19:39:28.214200Z", "iopub.status.idle": "2021-04-16T19:39:28.215796Z", "shell.execute_reply": "2021-04-16T19:39:28.216157Z" } }, "outputs": [], "source": [ "offset = pd.to_datetime('1995')\n", "timedelta = table['date'] - offset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When we subtract two `Timestamp` objects, the result is a \"time delta\", which we can convert to seconds and then to years." ] }, { "cell_type": "code", "execution_count": 62, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:28.221866Z", "iopub.status.busy": "2021-04-16T19:39:28.220942Z", "iopub.status.idle": "2021-04-16T19:39:28.223586Z", "shell.execute_reply": "2021-04-16T19:39:28.223108Z" } }, "outputs": [], "source": [ "data['x'] = timedelta.dt.total_seconds() / 3600 / 24 / 365.24" ] }, { "cell_type": "code", "execution_count": 63, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:28.231701Z", "iopub.status.busy": "2021-04-16T19:39:28.231064Z", "iopub.status.idle": "2021-04-16T19:39:28.234172Z", "shell.execute_reply": "2021-04-16T19:39:28.233719Z" }, "tags": [] }, "outputs": [], "source": [ "data['x'].describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As in the previous example, I'll use least squares regression to compute point estimates for the parameters, which will help with choosing priors." ] }, { "cell_type": "code", "execution_count": 64, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:28.243896Z", "iopub.status.busy": "2021-04-16T19:39:28.243090Z", "iopub.status.idle": "2021-04-16T19:39:28.246900Z", "shell.execute_reply": "2021-04-16T19:39:28.246348Z" } }, "outputs": [], "source": [ "import statsmodels.formula.api as smf\n", "\n", "formula = 'y ~ x'\n", "results = smf.ols(formula, data=data).fit()\n", "results.params" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The estimated intercept is about 12.5 mph, which is the interpolated world record pace for 1995. The estimated slope is about 0.015 mph per year, which is the rate the world record pace is increasing, according to the model.\n", "\n", "Again, we can use the standard deviation of the residuals as a point estimate for `sigma`." ] }, { "cell_type": "code", "execution_count": 65, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:28.251555Z", "iopub.status.busy": "2021-04-16T19:39:28.250995Z", "iopub.status.idle": "2021-04-16T19:39:28.253487Z", "shell.execute_reply": "2021-04-16T19:39:28.253872Z" } }, "outputs": [], "source": [ "results.resid.std()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These parameters give us a good idea where we should put the prior distributions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Priors\n", "\n", "Here are the prior distributions I chose for `slope`, `intercept`, and `sigma`." 
] }, { "cell_type": "code", "execution_count": 66, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:28.258022Z", "iopub.status.busy": "2021-04-16T19:39:28.257434Z", "iopub.status.idle": "2021-04-16T19:39:28.259420Z", "shell.execute_reply": "2021-04-16T19:39:28.259062Z" } }, "outputs": [], "source": [ "qs = np.linspace(0.012, 0.018, 51)\n", "prior_slope = make_uniform(qs, 'Slope')" ] }, { "cell_type": "code", "execution_count": 67, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:28.263808Z", "iopub.status.busy": "2021-04-16T19:39:28.263043Z", "iopub.status.idle": "2021-04-16T19:39:28.265744Z", "shell.execute_reply": "2021-04-16T19:39:28.265217Z" } }, "outputs": [], "source": [ "qs = np.linspace(12.4, 12.5, 41)\n", "prior_inter = make_uniform(qs, 'Intercept')" ] }, { "cell_type": "code", "execution_count": 68, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:28.270542Z", "iopub.status.busy": "2021-04-16T19:39:28.269999Z", "iopub.status.idle": "2021-04-16T19:39:28.272030Z", "shell.execute_reply": "2021-04-16T19:39:28.272481Z" } }, "outputs": [], "source": [ "qs = np.linspace(0.01, 0.21, 31)\n", "prior_sigma = make_uniform(qs, 'Sigma')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's the joint prior distribution." ] }, { "cell_type": "code", "execution_count": 69, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:28.277187Z", "iopub.status.busy": "2021-04-16T19:39:28.276205Z", "iopub.status.idle": "2021-04-16T19:39:28.287288Z", "shell.execute_reply": "2021-04-16T19:39:28.287636Z" } }, "outputs": [], "source": [ "prior = make_joint3(prior_slope, prior_inter, prior_sigma)\n", "prior.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can compute likelihoods as in the previous example:" ] }, { "cell_type": "code", "execution_count": 70, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:39:28.292255Z", "iopub.status.busy": "2021-04-16T19:39:28.291742Z", "iopub.status.idle": "2021-04-16T19:40:08.384282Z", "shell.execute_reply": "2021-04-16T19:40:08.384694Z" } }, "outputs": [], "source": [ "xs = data['x']\n", "ys = data['y']\n", "likelihood = prior.copy()\n", "\n", "for slope, inter, sigma in prior.index:\n", " expected = slope * xs + inter\n", " resid = ys - expected\n", " densities = norm.pdf(resid, 0, sigma)\n", " likelihood[slope, inter, sigma] = densities.prod()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can do the update in the usual way." 
] }, { "cell_type": "code", "execution_count": 71, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:08.387896Z", "iopub.status.busy": "2021-04-16T19:40:08.387383Z", "iopub.status.idle": "2021-04-16T19:40:08.394693Z", "shell.execute_reply": "2021-04-16T19:40:08.395061Z" }, "tags": [] }, "outputs": [], "source": [ "posterior = prior * likelihood\n", "posterior.normalize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And unpack the marginals:" ] }, { "cell_type": "code", "execution_count": 72, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:08.398478Z", "iopub.status.busy": "2021-04-16T19:40:08.397965Z", "iopub.status.idle": "2021-04-16T19:40:08.406205Z", "shell.execute_reply": "2021-04-16T19:40:08.406577Z" } }, "outputs": [], "source": [ "posterior_slope = posterior.marginal(0)\n", "posterior_inter = posterior.marginal(1)\n", "posterior_sigma = posterior.marginal(2)" ] }, { "cell_type": "code", "execution_count": 73, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:08.429912Z", "iopub.status.busy": "2021-04-16T19:40:08.424037Z", "iopub.status.idle": "2021-04-16T19:40:08.536904Z", "shell.execute_reply": "2021-04-16T19:40:08.536549Z" }, "tags": [] }, "outputs": [], "source": [ "posterior_sigma.plot();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the posterior distribution of `inter`:" ] }, { "cell_type": "code", "execution_count": 74, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:08.558330Z", "iopub.status.busy": "2021-04-16T19:40:08.557823Z", "iopub.status.idle": "2021-04-16T19:40:08.711888Z", "shell.execute_reply": "2021-04-16T19:40:08.712426Z" }, "tags": [] }, "outputs": [], "source": [ "posterior_inter.plot(color='C1')\n", "decorate(xlabel='intercept',\n", " ylabel='PDF',\n", " title='Posterior marginal distribution of intercept')" ] }, { "cell_type": "code", "execution_count": 75, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:08.717982Z", "iopub.status.busy": "2021-04-16T19:40:08.717261Z", "iopub.status.idle": "2021-04-16T19:40:08.721465Z", "shell.execute_reply": "2021-04-16T19:40:08.720768Z" }, "tags": [] }, "outputs": [], "source": [ "summarize(posterior_inter)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The posterior mean is about 12.5 mph, which is the world record marathon pace the model predicts for the midpoint of the date range, 1994.\n", "\n", "And here's the posterior distribution of `slope`." 
] }, { "cell_type": "code", "execution_count": 76, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:08.774456Z", "iopub.status.busy": "2021-04-16T19:40:08.760561Z", "iopub.status.idle": "2021-04-16T19:40:08.957028Z", "shell.execute_reply": "2021-04-16T19:40:08.957858Z" }, "tags": [] }, "outputs": [], "source": [ "posterior_slope.plot(color='C4')\n", "decorate(xlabel='Slope',\n", " ylabel='PDF',\n", " title='Posterior marginal distribution of slope')" ] }, { "cell_type": "code", "execution_count": 77, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:08.966418Z", "iopub.status.busy": "2021-04-16T19:40:08.965585Z", "iopub.status.idle": "2021-04-16T19:40:08.972250Z", "shell.execute_reply": "2021-04-16T19:40:08.972843Z" }, "tags": [] }, "outputs": [], "source": [ "summarize(posterior_slope)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The posterior mean is about 0.015 mph per year, or 0.15 mph per decade.\n", "\n", "That's interesting, but it doesn't answer the question we're interested in: When will there be a two-hour marathon? To answer that, we have to make predictions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prediction\n", "\n", "To generate predictions, I'll draw a sample from the posterior distribution of parameters, then use the regression equation to combine the parameters with the data.\n", "\n", "`Pmf` provides `choice`, which we can use to draw a random sample with replacement, using the posterior probabilities as weights." ] }, { "cell_type": "code", "execution_count": 78, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:08.976629Z", "iopub.status.busy": "2021-04-16T19:40:08.975627Z", "iopub.status.idle": "2021-04-16T19:40:08.979423Z", "shell.execute_reply": "2021-04-16T19:40:08.980266Z" }, "tags": [] }, "outputs": [], "source": [ "np.random.seed(17)" ] }, { "cell_type": "code", "execution_count": 79, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:08.983792Z", "iopub.status.busy": "2021-04-16T19:40:08.983006Z", "iopub.status.idle": "2021-04-16T19:40:08.987959Z", "shell.execute_reply": "2021-04-16T19:40:08.988740Z" } }, "outputs": [], "source": [ "sample = posterior.choice(101)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result is an array of tuples. Looping through the sample, we can use the regression equation to generate predictions for a range of `xs`." ] }, { "cell_type": "code", "execution_count": 80, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:08.995701Z", "iopub.status.busy": "2021-04-16T19:40:08.994987Z", "iopub.status.idle": "2021-04-16T19:40:09.151613Z", "shell.execute_reply": "2021-04-16T19:40:09.151148Z" } }, "outputs": [], "source": [ "xs = np.arange(-25, 50, 2)\n", "pred = np.empty((len(sample), len(xs)))\n", "\n", "for i, (slope, inter, sigma) in enumerate(sample):\n", " epsilon = norm(0, sigma).rvs(len(xs))\n", " pred[i] = inter + slope * xs + epsilon" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each prediction is an array with the same length as `xs`, which I store as a row in `pred`. So the result has one row for each sample and one column for each value of `x`.\n", "\n", "We can use `percentile` to compute the 5th, 50th, and 95th percentiles in each column." 
] }, { "cell_type": "code", "execution_count": 81, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.158240Z", "iopub.status.busy": "2021-04-16T19:40:09.157499Z", "iopub.status.idle": "2021-04-16T19:40:09.161662Z", "shell.execute_reply": "2021-04-16T19:40:09.162303Z" } }, "outputs": [], "source": [ "low, median, high = np.percentile(pred, [5, 50, 95], axis=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To show the results, I'll plot the median of the predictions as a line and the 90% credible interval as a shaded area." ] }, { "cell_type": "code", "execution_count": 82, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.193803Z", "iopub.status.busy": "2021-04-16T19:40:09.193041Z", "iopub.status.idle": "2021-04-16T19:40:09.467845Z", "shell.execute_reply": "2021-04-16T19:40:09.468385Z" }, "tags": [] }, "outputs": [], "source": [ "times = pd.to_timedelta(xs*365.24, unit='days') + offset\n", "\n", "plt.fill_between(times, low, high, \n", " color='C2', alpha=0.1)\n", "plt.plot(times, median, color='C2')\n", "\n", "plot_speeds(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The dashed line shows the two-hour marathon pace, which is 13.1 miles per hour.\n", "Visually we can estimate that the prediction line hits the target pace between 2030 and 2040.\n", "\n", "To make this more precise, we can use interpolation to see when the predictions cross the finish line. SciPy provides `interp1d`, which does linear interpolation by default." ] }, { "cell_type": "code", "execution_count": 83, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.477523Z", "iopub.status.busy": "2021-04-16T19:40:09.475247Z", "iopub.status.idle": "2021-04-16T19:40:09.489783Z", "shell.execute_reply": "2021-04-16T19:40:09.488592Z" } }, "outputs": [], "source": [ "from scipy.interpolate import interp1d\n", "\n", "future = np.array([interp1d(high, xs)(13.1),\n", " interp1d(median, xs)(13.1),\n", " interp1d(low, xs)(13.1)])" ] }, { "cell_type": "code", "execution_count": 84, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.501330Z", "iopub.status.busy": "2021-04-16T19:40:09.500502Z", "iopub.status.idle": "2021-04-16T19:40:09.504183Z", "shell.execute_reply": "2021-04-16T19:40:09.504946Z" }, "tags": [] }, "outputs": [], "source": [ "dts = pd.to_timedelta(future*365.24, unit='day') + offset\n", "pd.DataFrame(dict(datetime=dts),\n", " index=['early', 'median', 'late'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The median prediction is 2036, with a 90% credible interval from 2032 to 2043. So there is about a 5% chance we'll see a two-hour marathon before 2032." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "\n", "This chapter introduces Bayesian regression, which is based on the same model as least squares regression; the difference is that it produces a posterior distribution for the parameters rather than point estimates.\n", "\n", "In the first example, we looked at changes in snowfall in Norfolk County, Massachusetts, and concluded that we get more snowfall now than when I was young, contrary to my expectation.\n", "\n", "In the second example, we looked at the progression of world record pace for the men's marathon, computed the joint posterior distribution of the regression parameters, and used it to generate predictions for the next 20 years.\n", "\n", "These examples have three parameters, so it takes a little longer to compute the likelihood of the data.\n", "With more than three parameters, it becomes impractical to use grid algorithms. \n", "\n", "In the next few chapters, we'll explore other algorithms that reduce the amount of computation we need to do a Bayesian update, which makes it possible to use models with more parameters.\n", "\n", "But first, you might want to work on these exercises." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Exercise:** I am under the impression that it is warmer around here than it used to be. In this exercise, you can put my conjecture to the test.\n", "\n", "We'll use the same dataset we used to model snowfall; it also includes daily low and high temperatures in Norfolk County, Massachusetts during my lifetime." ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Here's the data." ] }, { "cell_type": "code", "execution_count": 85, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.508642Z", "iopub.status.busy": "2021-04-16T19:40:09.507790Z", "iopub.status.idle": "2021-04-16T19:40:09.598819Z", "shell.execute_reply": "2021-04-16T19:40:09.596426Z" }, "tags": [] }, "outputs": [], "source": [ "df = pd.read_csv('2239075.csv', parse_dates=[2])\n", "df.head(3)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Again, I'll create a column that contains the year part of the dates." ] }, { "cell_type": "code", "execution_count": 86, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.605621Z", "iopub.status.busy": "2021-04-16T19:40:09.604420Z", "iopub.status.idle": "2021-04-16T19:40:09.610189Z", "shell.execute_reply": "2021-04-16T19:40:09.610858Z" }, "tags": [] }, "outputs": [], "source": [ "df['YEAR'] = df['DATE'].dt.year" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "This dataset includes `TMIN` and `TMAX`, which are the daily low and high temperatures in degrees F.\n", "I'll create a new column with the daily midpoint of the low and high temperatures." ] }, { "cell_type": "code", "execution_count": 87, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.620934Z", "iopub.status.busy": "2021-04-16T19:40:09.620132Z", "iopub.status.idle": "2021-04-16T19:40:09.624087Z", "shell.execute_reply": "2021-04-16T19:40:09.625038Z" }, "tags": [] }, "outputs": [], "source": [ "df['TMID'] = (df['TMIN'] + df['TMAX']) / 2" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Now we can group by year and compute the mean of these daily temperatures." 
] }, { "cell_type": "code", "execution_count": 88, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.634750Z", "iopub.status.busy": "2021-04-16T19:40:09.632057Z", "iopub.status.idle": "2021-04-16T19:40:09.638953Z", "shell.execute_reply": "2021-04-16T19:40:09.639856Z" }, "tags": [] }, "outputs": [], "source": [ "tmid = df.groupby('YEAR')['TMID'].mean()\n", "len(tmid)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Again, I'll drop the first and last years, which are incomplete." ] }, { "cell_type": "code", "execution_count": 89, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.646871Z", "iopub.status.busy": "2021-04-16T19:40:09.642980Z", "iopub.status.idle": "2021-04-16T19:40:09.653320Z", "shell.execute_reply": "2021-04-16T19:40:09.654470Z" }, "tags": [] }, "outputs": [], "source": [ "complete = tmid.iloc[1:-1]\n", "len(complete)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Here's what the time series looks like." ] }, { "cell_type": "code", "execution_count": 90, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.672386Z", "iopub.status.busy": "2021-04-16T19:40:09.671591Z", "iopub.status.idle": "2021-04-16T19:40:09.886776Z", "shell.execute_reply": "2021-04-16T19:40:09.887123Z" }, "tags": [] }, "outputs": [], "source": [ "complete.plot(ls='', marker='o', alpha=0.5)\n", "\n", "decorate(xlabel='Year',\n", " ylabel='Annual average of daily temperature (deg F)')" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "As we did with the snow data, I'll convert the `Series` to a `DataFrame` to prepare it for regression." ] }, { "cell_type": "code", "execution_count": 91, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.899059Z", "iopub.status.busy": "2021-04-16T19:40:09.898484Z", "iopub.status.idle": "2021-04-16T19:40:09.901665Z", "shell.execute_reply": "2021-04-16T19:40:09.902018Z" }, "tags": [] }, "outputs": [], "source": [ "data = complete.reset_index()\n", "data.head()" ] }, { "cell_type": "code", "execution_count": 92, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.907118Z", "iopub.status.busy": "2021-04-16T19:40:09.906333Z", "iopub.status.idle": "2021-04-16T19:40:09.910678Z", "shell.execute_reply": "2021-04-16T19:40:09.910015Z" }, "tags": [] }, "outputs": [], "source": [ "offset = round(data['YEAR'].mean())\n", "offset" ] }, { "cell_type": "code", "execution_count": 93, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.918655Z", "iopub.status.busy": "2021-04-16T19:40:09.917835Z", "iopub.status.idle": "2021-04-16T19:40:09.922722Z", "shell.execute_reply": "2021-04-16T19:40:09.921649Z" }, "tags": [] }, "outputs": [], "source": [ "data['x'] = data['YEAR'] - offset\n", "data['x'].mean()" ] }, { "cell_type": "code", "execution_count": 94, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.929320Z", "iopub.status.busy": "2021-04-16T19:40:09.928415Z", "iopub.status.idle": "2021-04-16T19:40:09.935474Z", "shell.execute_reply": "2021-04-16T19:40:09.936590Z" }, "tags": [] }, "outputs": [], "source": [ "data['y'] = data['TMID']\n", "data['y'].std()" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Now we can use StatsModels to estimate the parameters." 
] }, { "cell_type": "code", "execution_count": 95, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.950115Z", "iopub.status.busy": "2021-04-16T19:40:09.948247Z", "iopub.status.idle": "2021-04-16T19:40:09.955668Z", "shell.execute_reply": "2021-04-16T19:40:09.954729Z" }, "tags": [] }, "outputs": [], "source": [ "import statsmodels.formula.api as smf\n", "\n", "formula = 'y ~ x'\n", "results = smf.ols(formula, data=data).fit()\n", "results.params" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "And compute the standard deviation of the parameters." ] }, { "cell_type": "code", "execution_count": 96, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.961504Z", "iopub.status.busy": "2021-04-16T19:40:09.960803Z", "iopub.status.idle": "2021-04-16T19:40:09.965281Z", "shell.execute_reply": "2021-04-16T19:40:09.964756Z" }, "tags": [] }, "outputs": [], "source": [ "results.resid.std()" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "According to the least squares regression model, annual average temperature is increasing by about 0.044 degrees F per year.\n", "\n", "To quantify the uncertainty of these parameters and generate predictions for the future, we can use Bayesian regression." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. Use StatsModels to generate point estimates for the regression parameters.\n", "\n", "2. Choose priors for `slope`, `intercept`, and `sigma` based on these estimates, and use `make_joint3` to make a joint prior distribution.\n", "\n", "3. Compute the likelihood of the data and compute the posterior distribution of the parameters.\n", "\n", "4. Extract the posterior distribution of `slope`. How confident are we that temperature is increasing?\n", "\n", "5. Draw a sample of parameters from the posterior distribution and use it to generate predictions up to 2067.\n", "\n", "6. Plot the median of the predictions and a 90% credible interval along with the observed data. \n", "\n", "Does the model fit the data well? How much do we expect annual average temperatures to increase over my (expected) lifetime?" 
] }, { "cell_type": "code", "execution_count": 97, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.973077Z", "iopub.status.busy": "2021-04-16T19:40:09.972198Z", "iopub.status.idle": "2021-04-16T19:40:09.976051Z", "shell.execute_reply": "2021-04-16T19:40:09.976805Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 98, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.983354Z", "iopub.status.busy": "2021-04-16T19:40:09.982599Z", "iopub.status.idle": "2021-04-16T19:40:09.987585Z", "shell.execute_reply": "2021-04-16T19:40:09.986707Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 99, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:09.994924Z", "iopub.status.busy": "2021-04-16T19:40:09.993916Z", "iopub.status.idle": "2021-04-16T19:40:09.997161Z", "shell.execute_reply": "2021-04-16T19:40:09.996368Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 100, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:10.005414Z", "iopub.status.busy": "2021-04-16T19:40:10.003358Z", "iopub.status.idle": "2021-04-16T19:40:10.031755Z", "shell.execute_reply": "2021-04-16T19:40:10.030704Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 101, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:10.145345Z", "iopub.status.busy": "2021-04-16T19:40:10.038506Z", "iopub.status.idle": "2021-04-16T19:40:51.077178Z", "shell.execute_reply": "2021-04-16T19:40:51.076708Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 102, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:51.080748Z", "iopub.status.busy": "2021-04-16T19:40:51.080082Z", "iopub.status.idle": "2021-04-16T19:40:51.088787Z", "shell.execute_reply": "2021-04-16T19:40:51.088316Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 103, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:51.092397Z", "iopub.status.busy": "2021-04-16T19:40:51.091793Z", "iopub.status.idle": "2021-04-16T19:40:51.100270Z", "shell.execute_reply": "2021-04-16T19:40:51.099775Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 104, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:51.135977Z", "iopub.status.busy": "2021-04-16T19:40:51.118308Z", "iopub.status.idle": "2021-04-16T19:40:51.267312Z", "shell.execute_reply": "2021-04-16T19:40:51.267692Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 105, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:51.272380Z", "iopub.status.busy": "2021-04-16T19:40:51.271814Z", "iopub.status.idle": "2021-04-16T19:40:51.274600Z", "shell.execute_reply": "2021-04-16T19:40:51.274242Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 106, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:51.306721Z", "iopub.status.busy": "2021-04-16T19:40:51.295008Z", "iopub.status.idle": "2021-04-16T19:40:51.412801Z", "shell.execute_reply": "2021-04-16T19:40:51.413372Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 107, "metadata": { "execution": { 
"iopub.execute_input": "2021-04-16T19:40:51.417909Z", "iopub.status.busy": "2021-04-16T19:40:51.417437Z", "iopub.status.idle": "2021-04-16T19:40:51.422135Z", "shell.execute_reply": "2021-04-16T19:40:51.421776Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 108, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:51.428829Z", "iopub.status.busy": "2021-04-16T19:40:51.426223Z", "iopub.status.idle": "2021-04-16T19:40:51.498420Z", "shell.execute_reply": "2021-04-16T19:40:51.498841Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 109, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:51.502786Z", "iopub.status.busy": "2021-04-16T19:40:51.502071Z", "iopub.status.idle": "2021-04-16T19:40:51.505269Z", "shell.execute_reply": "2021-04-16T19:40:51.505614Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 110, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:51.522757Z", "iopub.status.busy": "2021-04-16T19:40:51.520923Z", "iopub.status.idle": "2021-04-16T19:40:51.650753Z", "shell.execute_reply": "2021-04-16T19:40:51.651267Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": 111, "metadata": { "execution": { "iopub.execute_input": "2021-04-16T19:40:51.654627Z", "iopub.status.busy": "2021-04-16T19:40:51.654214Z", "iopub.status.idle": "2021-04-16T19:40:51.658356Z", "shell.execute_reply": "2021-04-16T19:40:51.658836Z" } }, "outputs": [], "source": [ "# Solution goes here" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 1 }