{ "metadata": { "name": "" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "heading", "level": 1, "metadata": {}, "source": [ "Homework 2: Desperately Seeking Silver" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Due Thursday, Oct 3, 11:59 PM" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "
\n", "
\n", "\n", "In HW1, we explored how to make predictions (with uncertainties) about upcoming elections based on the Real Clear Politics poll. This assignment also focuses on election prediction, but we are going to implement and evaluate a number of more sophisticated forecasting techniques. \n", "\n", "We are going to focus on the 2012 Presidential election. Analysts like Nate Silver, Drew Linzer, and Sam Wang developed highly accurate models that correctly forecasted most or all of the election outcomes in each of the 50 states. We will explore how hard it is to recreate similarly successful models. The goals of this assignment are:\n", "\n", "1. To practice data manipulation with Pandas\n", "1. To develop intuition about the interplay of **precision**, **accuracy**, and **bias** when making predictions\n", "1. To better understand how election forecasts are constructed\n", "\n", "The data for our analysis will come from demographic and polling data. We will simulate building our model on October 2, 2012 -- approximately one month before the election. \n", "\n", "### Instructions\n", "\n", "The questions in this assignment are numbered. The questions are also usually italicised, to help you find them in the flow of this notebook. At some points you will be asked to write functions to carry out certain tasks. Its worth reading a little ahead to see how the function whose body you will fill in will be used.\n", "\n", "**This is a long homework. Please do not wait until the last minute to start it!**\n", "\n", "The data for this homework can be found at [this link](https://www.dropbox.com/s/vng5x10b837ahnc/hw2_data.zip). Download it to the same folder where you are running this notebook, and uncompress it. You should find the following files there:\n", "\n", "1. us-states.json\n", "2. electoral_votes.csv\n", "3. predictwise.csv\n", "4. g12.csv\n", "5. g08.csv\n", "6. 2008results.csv\n", "7. nat.csv\n", "8. p04.csv\n", "9. 2012results.csv\n", "10. 
cleaned-state_data2012.csv" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Setup and Plotting code" ] }, { "cell_type": "code", "collapsed": false, "input": [ "%matplotlib inline\n", "from collections import defaultdict\n", "import json\n", "\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import pandas as pd\n", "\n", "from matplotlib import rcParams\n", "import matplotlib.cm as cm\n", "import matplotlib as mpl\n", "\n", "#colorbrewer2 Dark2 qualitative color table\n", "dark2_colors = [(0.10588235294117647, 0.6196078431372549, 0.4666666666666667),\n", " (0.8509803921568627, 0.37254901960784315, 0.00784313725490196),\n", " (0.4588235294117647, 0.4392156862745098, 0.7019607843137254),\n", " (0.9058823529411765, 0.1607843137254902, 0.5411764705882353),\n", " (0.4, 0.6509803921568628, 0.11764705882352941),\n", " (0.9019607843137255, 0.6705882352941176, 0.00784313725490196),\n", " (0.6509803921568628, 0.4627450980392157, 0.11372549019607843)]\n", "\n", "rcParams['figure.figsize'] = (10, 6)\n", "rcParams['figure.dpi'] = 150\n", "rcParams['axes.color_cycle'] = dark2_colors\n", "rcParams['lines.linewidth'] = 2\n", "rcParams['axes.facecolor'] = 'white'\n", "rcParams['font.size'] = 14\n", "rcParams['patch.edgecolor'] = 'white'\n", "rcParams['patch.facecolor'] = dark2_colors[0]\n", "rcParams['font.family'] = 'StixGeneral'\n", "\n", "\n", "def remove_border(axes=None, top=False, right=False, left=True, bottom=True):\n", " \"\"\"\n", " Minimize chartjunk by stripping out unnecesasry plot borders and axis ticks\n", " \n", " The top/right/left/bottom keywords toggle whether the corresponding plot border is drawn\n", " \"\"\"\n", " ax = axes or plt.gca()\n", " ax.spines['top'].set_visible(top)\n", " ax.spines['right'].set_visible(right)\n", " ax.spines['left'].set_visible(left)\n", " ax.spines['bottom'].set_visible(bottom)\n", " \n", " #turn off all ticks\n", " ax.yaxis.set_ticks_position('none')\n", " ax.xaxis.set_ticks_position('none')\n", " \n", " #now re-enable visibles\n", " if top:\n", " ax.xaxis.tick_top()\n", " if bottom:\n", " ax.xaxis.tick_bottom()\n", " if left:\n", " ax.yaxis.tick_left()\n", " if right:\n", " ax.yaxis.tick_right()\n", " \n", "pd.set_option('display.width', 500)\n", "pd.set_option('display.max_columns', 100)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 1 }, { "cell_type": "code", "collapsed": false, "input": [ "#this mapping between states and abbreviations will come in handy later\n", "states_abbrev = {\n", " 'AK': 'Alaska',\n", " 'AL': 'Alabama',\n", " 'AR': 'Arkansas',\n", " 'AS': 'American Samoa',\n", " 'AZ': 'Arizona',\n", " 'CA': 'California',\n", " 'CO': 'Colorado',\n", " 'CT': 'Connecticut',\n", " 'DC': 'District of Columbia',\n", " 'DE': 'Delaware',\n", " 'FL': 'Florida',\n", " 'GA': 'Georgia',\n", " 'GU': 'Guam',\n", " 'HI': 'Hawaii',\n", " 'IA': 'Iowa',\n", " 'ID': 'Idaho',\n", " 'IL': 'Illinois',\n", " 'IN': 'Indiana',\n", " 'KS': 'Kansas',\n", " 'KY': 'Kentucky',\n", " 'LA': 'Louisiana',\n", " 'MA': 'Massachusetts',\n", " 'MD': 'Maryland',\n", " 'ME': 'Maine',\n", " 'MI': 'Michigan',\n", " 'MN': 'Minnesota',\n", " 'MO': 'Missouri',\n", " 'MP': 'Northern Mariana Islands',\n", " 'MS': 'Mississippi',\n", " 'MT': 'Montana',\n", " 'NA': 'National',\n", " 'NC': 'North Carolina',\n", " 'ND': 'North Dakota',\n", " 'NE': 'Nebraska',\n", " 'NH': 'New Hampshire',\n", " 'NJ': 'New Jersey',\n", " 'NM': 'New Mexico',\n", " 'NV': 'Nevada',\n", " 'NY': 'New York',\n", " 'OH': 'Ohio',\n", " 'OK': 'Oklahoma',\n", " 
'OR': 'Oregon',\n", " 'PA': 'Pennsylvania',\n", " 'PR': 'Puerto Rico',\n", " 'RI': 'Rhode Island',\n", " 'SC': 'South Carolina',\n", " 'SD': 'South Dakota',\n", " 'TN': 'Tennessee',\n", " 'TX': 'Texas',\n", " 'UT': 'Utah',\n", " 'VA': 'Virginia',\n", " 'VI': 'Virgin Islands',\n", " 'VT': 'Vermont',\n", " 'WA': 'Washington',\n", " 'WI': 'Wisconsin',\n", " 'WV': 'West Virginia',\n", " 'WY': 'Wyoming'\n", "}" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 2 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is some code to plot [State Chloropleth](http://en.wikipedia.org/wiki/Choropleth_map) maps in matplotlib. `make_map` is the function you will use." ] }, { "cell_type": "code", "collapsed": false, "input": [ "#adapted from https://github.com/dataiap/dataiap/blob/master/resources/util/map_util.py\n", "\n", "#load in state geometry\n", "state2poly = defaultdict(list)\n", "\n", "data = json.load(file(\"data/us-states.json\"))\n", "for f in data['features']:\n", " state = states_abbrev[f['id']]\n", " geo = f['geometry']\n", " if geo['type'] == 'Polygon':\n", " for coords in geo['coordinates']:\n", " state2poly[state].append(coords)\n", " elif geo['type'] == 'MultiPolygon':\n", " for polygon in geo['coordinates']:\n", " state2poly[state].extend(polygon)\n", "\n", " \n", "def draw_state(plot, stateid, **kwargs):\n", " \"\"\"\n", " draw_state(plot, stateid, color=..., **kwargs)\n", " \n", " Automatically draws a filled shape representing the state in\n", " subplot.\n", " The color keyword argument specifies the fill color. It accepts keyword\n", " arguments that plot() accepts\n", " \"\"\"\n", " for polygon in state2poly[stateid]:\n", " xs, ys = zip(*polygon)\n", " plot.fill(xs, ys, **kwargs)\n", "\n", " \n", "def make_map(states, label):\n", " \"\"\"\n", " Draw a cloropleth map, that maps data onto the United States\n", " \n", " Inputs\n", " -------\n", " states : Column of a DataFrame\n", " The value for each state, to display on a map\n", " label : str\n", " Label of the color bar\n", "\n", " Returns\n", " --------\n", " The map\n", " \"\"\"\n", " fig = plt.figure(figsize=(12, 9))\n", " ax = plt.gca()\n", "\n", " if states.max() < 2: # colormap for election probabilities \n", " cmap = cm.RdBu\n", " vmin, vmax = 0, 1\n", " else: # colormap for electoral votes\n", " cmap = cm.binary\n", " vmin, vmax = 0, states.max()\n", " norm = mpl.colors.Normalize(vmin=vmin, vmax=vmax)\n", " \n", " skip = set(['National', 'District of Columbia', 'Guam', 'Puerto Rico',\n", " 'Virgin Islands', 'American Samoa', 'Northern Mariana Islands'])\n", " for state in states_abbrev.values():\n", " if state in skip:\n", " continue\n", " color = cmap(norm(states.ix[state]))\n", " draw_state(ax, state, color = color, ec='k')\n", "\n", " #add an inset colorbar\n", " ax1 = fig.add_axes([0.45, 0.70, 0.4, 0.02]) \n", " cb1=mpl.colorbar.ColorbarBase(ax1, cmap=cmap,\n", " norm=norm,\n", " orientation='horizontal')\n", " ax1.set_title(label)\n", " remove_border(ax, left=False, bottom=False)\n", " ax.set_xticks([])\n", " ax.set_yticks([])\n", " ax.set_xlim(-180, -60)\n", " ax.set_ylim(15, 75)\n", " return ax" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 3 }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Today: the day we make the prediction" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# We are pretending to build our model 1 month before the election\n", "import datetime\n", "today = datetime.datetime(2012, 10, 2)\n", "today" ], 
"language": "python", "metadata": {}, "outputs": [], "prompt_number": 4 }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Background: The Electoral College\n", "\n", "US Presidential elections revolve around the Electoral College . In this system, each state receives a number of Electoral College votes depending on it's population -- there are 538 votes in total. In most states, all of the electoral college votes are awarded to the presidential candidate who recieves the most votes in that state. A candidate needs 269 votes to be elected President. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Thus, to calculate the total number of votes a candidate gets in the election, we add the electoral college votes in the states that he or she wins. (This is not entirely true, with Nebraska and Maine splitting their electoral college votes, but, for the purposes of this homework, we shall assume that the winner of the most votes in Maine and Nebraska gets ALL the electoral college votes there.) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is the electoral vote breakdown by state:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*As a matter of convention, we will index all our dataframes by the state name*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "electoral_votes = pd.read_csv(\"data/electoral_votes.csv\").set_index('State')\n", "electoral_votes.head()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 5 }, { "cell_type": "markdown", "metadata": {}, "source": [ "To illustrate the use of `make_map` we plot the Electoral College" ] }, { "cell_type": "code", "collapsed": false, "input": [ "make_map(electoral_votes.Votes, \"Electoral Vlotes\");" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 6 }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Question 1: Simulating elections" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### The PredictWise Baseline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will start by examining a successful forecast that [PredictWise](http://www.predictwise.com/results/2012/president) made on October 2, 2012. This will give us a point of comparison for our own forecast models.\n", "\n", "PredictWise aggregated polling data and, for each state, estimated the probability that the Obama or Romney would win. Here are those estimated probabilities:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "predictwise = pd.read_csv('data/predictwise.csv').set_index('States')\n", "predictwise.head()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 7 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**1.1** Each row is the probability predicted by Predictwise that Romney or Obama would win a state. The votes column lists the number of electoral college votes in that state. *Use `make_map` to plot a map of the probability that Obama wins each state, according to this prediction*." ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 8 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Later on in this homework we will explore some approaches to estimating probabilities like these and quatifying our uncertainty about them. 
But for the time being, we will focus on how to make a prediction assuming these probabilities are known.\n", "\n", "Even when we assume the win probabilities in each state are known, there is still uncertainty left in the election. We will use simulations from a simple probabilistic model to characterize this uncertainty. From these simulations, we will be able to make a prediction about the expected outcome of the election, and make a statement about how sure we are about it.\n", "\n", "**1.2** We will assume that the outcome in each state is the result of an independent coin flip whose probability of coming up Obama is given by a Dataframe of state-wise win probabilities. *Write a function that uses this **predictive model** to simulate the outcome of the election given a Dataframe of probabilities*." ] }, { "cell_type": "code", "collapsed": false, "input": [ "\"\"\"\n", "Function\n", "--------\n", "simulate_election\n", "\n", "Inputs\n", "------\n", "model : DataFrame\n", " A DataFrame summarizing an election forecast. The dataframe has 51 rows -- one for each state and DC\n", " It has the following columns:\n", " Obama : Forecasted probability that Obama wins the state\n", " Votes : Electoral votes for the state\n", " The DataFrame is indexed by state (i.e., model.index is an array of state names)\n", " \n", "n_sim : int\n", " Number of simulations to run\n", " \n", "Returns\n", "-------\n", "results : Numpy array with n_sim elements\n", " Each element stores the number of electoral college votes Obama wins in each simulation. \n", "\"\"\"\n", "\n", "#Your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 9 }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following cells takes the necessary DataFrame for the Predictwise data, and runs 10000 simulations. We use the results to compute the probability, according to this predictive model, that Obama wins the election (i.e., the probability that he receives 269 or more electoral college votes)" ] }, { "cell_type": "code", "collapsed": false, "input": [ "result = simulate_election(predictwise, 10000)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 10 }, { "cell_type": "code", "collapsed": false, "input": [ "#compute the probability of an Obama win, given this simulation\n", "#Your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 11 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**1.3** **Now, write a function called `plot_simulation` to visualize the simulation**. 
This function should:\n", "\n", "* Build a histogram from the result of simulate_election\n", "* Overplot the \"victory threshold\" of 269 votes as a vertical black line (hint: use axvline)\n", "* Overplot the result (Obama winning 332 votes) as a vertical red line\n", "* Compute the number of votes at the 5th and 95th quantiles, and display the difference (this is an estimate of the outcome's uncertainty)\n", "* Display the probability of an Obama victory \n", " " ] }, { "cell_type": "code", "collapsed": false, "input": [ "\"\"\"\n", "Function\n", "--------\n", "plot_simulation\n", "\n", "Inputs\n", "------\n", "simulation: Numpy array with n_sim (see simulate_election) elements\n", " Each element stores the number of electoral college votes Obama wins in each simulation.\n", " \n", "Returns\n", "-------\n", "Nothing \n", "\"\"\"\n", "\n", "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 12 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets plot the result of the Predictwise simulation. Your plot should look something like this:\n", "\n", "" ] }, { "cell_type": "code", "collapsed": false, "input": [ "plot_simulation(result)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 13 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Evaluating and Validating our Forecast\n", "\n", "The point of creating a probabilistic predictive model is to simultaneously make a forecast and give an estimate of how certain we are about it. \n", "\n", "However, in order to trust our prediction or our reported level of uncertainty, the model needs to be *correct*. We say a model is *correct* if it honestly accounts for all of the mechanisms of variation in the system we're forecasting.\n", "\n", "In this section, we **evaluate** our prediction to get a sense of how useful it is, and we **validate** the predictive model by comparing it to real data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**1.4** Suppose that we believe the model is correct. Under this assumption, we can **evaluate** our prediction by characterizing its **accuracy** and **precision** (see [here](http://celebrating200years.noaa.gov/magazine/tct/accuracy_vs_precision_556.jpg) for an illustration of these ideas). *What does the above plot reveal about the **accuracy** and **precision** of the PredictWise model?*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Your Answer Here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**1.5** Unfortunately, we can never be *absolutely sure* that a model is correct, just as we can never be absolutely sure that the sun will rise tomorrow. But we can test a model by making predictions assuming that it is true and comparing it to real events -- this constitutes a hypothesis test. After testing a large number of predictions, if we find no evidence that says the model is wrong, we can have some degree of confidence that the model is right (the same reason we're still quite confident about the sun being here tomorrow). We call this process **model checking**, and use it to **validate** our model.\n", "\n", "*Describe how the graph provides one way of checking whether the prediction model is correct. How many predictions have we checked in this case? 
How could we increase our confidence in the model's correctness?*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Your Answer Here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Gallup Party Affiliation Poll" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will try to **estimate** our own win probabilities to plug into our predictive model.\n", "\n", "We will start with a simple forecast model. We will try to predict the outcome of the election based the estimated proportion of people in each state who identify with one one political party or the other.\n", "\n", "Gallup measures the political leaning of each state, based on asking random people which party they identify or affiliate with. [Here's the data](http://www.gallup.com/poll/156437/heavily-democratic-states-concentrated-east.aspx#2) they collected from January-June of 2012:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "gallup_2012=pd.read_csv(\"data/g12.csv\").set_index('State')\n", "gallup_2012[\"Unknown\"] = 100 - gallup_2012.Democrat - gallup_2012.Republican\n", "gallup_2012.head()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 14 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each row lists a state, the percent of surveyed individuals who identify as Democrat/Republican, the percent whose identification is unknown or who haven't made an affiliation yet, the margin between Democrats and Republicans (`Dem_Adv`: the percentage identifying as Democrats minus the percentage identifying as Republicans), and the number `N` of people surveyed.\n", "\n", "**1.6** This survey can be used to predict the outcome of each State's election. The simplest forecast model assigns 100% probability that the state will vote for the majority party. *Implement this simple forecast*." ] }, { "cell_type": "code", "collapsed": false, "input": [ "\"\"\"\n", "Function\n", "--------\n", "simple_gallup_model\n", "\n", "A simple forecast that predicts an Obama (Democratic) victory with\n", "0 or 100% probability, depending on whether a state\n", "leans Republican or Democrat.\n", "\n", "Inputs\n", "------\n", "gallup : DataFrame\n", " The Gallup dataframe above\n", "\n", "Returns\n", "-------\n", "model : DataFrame\n", " A dataframe with the following column\n", " * Obama: probability that the state votes for Obama. All values should be 0 or 1\n", " model.index should be set to gallup.index (that is, it should be indexed by state name)\n", " \n", "Examples\n", "---------\n", ">>> simple_gallup_model(gallup_2012).ix['Florida']\n", "Obama 1\n", "Name: Florida, dtype: float64\n", ">>> simple_gallup_model(gallup_2012).ix['Arizona']\n", "Obama 0\n", "Name: Arizona, dtype: float64\n", "\"\"\"\n", "\n", "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 15 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we run the simulation with this model, and plot it." ] }, { "cell_type": "code", "collapsed": false, "input": [ "model = simple_gallup_model(gallup_2012)\n", "model = model.join(electoral_votes)\n", "prediction = simulate_election(model, 10000)\n", "\n", "plot_simulation(prediction)\n", "plt.show()\n", "make_map(model.Obama, \"P(Obama): Simple Model\")" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 16 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**1.7** Attempt to **validate** the predictive model using the above simulation histogram. 
*Does the evidence contradict the predictive model?*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Your answer here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Adding Polling Uncertainty to the Predictive Model\n", "\n", "The model above is brittle -- it includes no accounting for uncertainty, and thus makes predictions with 100% confidence. This is clearly wrong -- there are numerous sources of uncertainty in estimating election outcomes from a poll of affiliations. \n", "\n", "The most obvious source of error in the Gallup data is the finite sample size -- Gallup did not poll *everybody* in America, and thus the party affilitions are subject to sampling errors. How much uncertainty does this introduce?\n", "\n", "On their [webpage](http://www.gallup.com/poll/156437/heavily-democratic-states-concentrated-east.aspx#2) discussing these data, Gallup notes that the sampling error for the states is between 3 and 6%, with it being 3% for most states. (The calculation of the sampling error itself is an exercise in statistics. Its fun to think of how you could arrive at the sampling error if it was not given to you. One way to do it would be to assume this was a two-choice situation and use binomial sampling error for the non-unknown answers, and further model the error for those who answered 'Unknown'.)\n", "\n", "**1.8** Use Gallup's estimate of 3% to build a Gallup model with some uncertainty. Assume that the `Dem_Adv` column represents the mean of a Gaussian, whose standard deviation is 3%. Build the model in the function `uncertain_gallup_model`. *Return a forecast where the probability of an Obama victory is given by the probability that a sample from the `Dem_Adv` Gaussian is positive.*\n", "\n", "\n", "**Hint**\n", "The probability that a sample from a Gaussian with mean $\\mu$ and standard deviation $\\sigma$ exceeds a threhold $z$ can be found using the the Cumulative Distribution Function of a Gaussian:\n", "\n", "$$\n", "CDF(z) = \\frac1{2}\\left(1 + {\\rm erf}\\left(\\frac{z - \\mu}{\\sqrt{2 \\sigma^2}}\\right)\\right) \n", "$$\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "\"\"\"\n", "Function\n", "--------\n", "uncertain_gallup_model\n", "\n", "A forecast that predicts an Obama (Democratic) victory if the random variable drawn\n", "from a Gaussian with mean Dem_Adv and standard deviation 3% is >0\n", "\n", "Inputs\n", "------\n", "gallup : DataFrame\n", " The Gallup dataframe above\n", "\n", "Returns\n", "-------\n", "model : DataFrame\n", " A dataframe with the following column\n", " * Obama: probability that the state votes for Obama.\n", " model.index should be set to gallup.index (that is, it should be indexed by state name)\n", "\"\"\"\n", "# your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 17 }, { "cell_type": "markdown", "metadata": {}, "source": [ "We construct the model by estimating the probabilities:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "model = uncertain_gallup_model(gallup_2012)\n", "model = model.join(electoral_votes)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 18 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once again, we plot a map of these probabilities, run the simulation, and display the results" ] }, { "cell_type": "code", "collapsed": false, "input": [ "make_map(model.Obama, \"P(Obama): Gallup + Uncertainty\")\n", "plt.show()\n", "prediction = simulate_election(model, 10000)\n", 
"plot_simulation(prediction)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 19 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**1.9** *Attempt to **validate** the above model using the histogram. Does the predictive distribution appear to be consistent with the real data? Comment on the accuracy and precision of the prediction.*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Your answers here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Biases\n", "\n", "While accounting for uncertainty is one important part of making predictions, we also want to avoid systematic errors. We call systematic over- or under-estimation of an unknown quantity **bias**. In the case of this forecast, our predictions would be biased if the estimates from this poll *systematically* over- or under-estimate vote proportions on election day. There are several reasons this might happen:\n", "\n", "1. **Gallup is wrong**. The poll may systematically over- or under-estimate party affiliation. This could happen if the people who answer Gallup phone interviews might not be a representative sample of people who actually vote, Gallup's methodology is flawed, or if people lie during a Gallup poll.\n", "1. **Our assumption about party affiliation is wrong**. Party affiliation may systematically over- or under-estimate vote proportions. This could happen if people identify with one party, but strongly prefer the candidate from the other party, or if undecided voters do not end up splitting evenly between Democrats and Republicans on election day.\n", "1. **Our assumption about equilibrium is wrong**. This poll was released in August, with more than two months left for the elections. If there is a trend in the way people change their affiliations during this time period (for example, because one candidate is much worse at televised debates), an estimate in August could systematically miss the true value in November.\n", "\n", "One way to account for bias is to calibrate our model by estimating the bias and adjusting for it. Before we do this, let's explore how sensitive our prediction is to bias." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**1.10** *Implement a `biased_gallup` forecast, which assumes the vote share for the Democrat on election day will be equal to `Dem_Adv` shifted by a fixed negative amount.* We will call this shift the \"bias\", so a bias of 1% means that the expected vote share on election day is `Dem_Adv`-1.\n", "\n", "**Hint** You can do this by wrapping the `uncertain_gallup_model` in a function that modifies its inputs." 
] }, { "cell_type": "code", "collapsed": false, "input": [ "\"\"\"\n", "Function\n", "--------\n", "biased_gallup_poll\n", "\n", "Subtracts a fixed amount from Dem_Adv, beofore computing the uncertain_gallup_model.\n", "This simulates correcting a hypothetical bias towards Democrats\n", "in the original Gallup data.\n", "\n", "Inputs\n", "-------\n", "gallup : DataFrame\n", " The Gallup party affiliation data frame above\n", "bias : float\n", " The amount by which to shift each prediction\n", " \n", "Examples\n", "--------\n", ">>> model = biased_gallup(gallup, 1.)\n", ">>> model.ix['Flordia']\n", ">>> .460172\n", "\"\"\"\n", "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 20 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**1.11** *Simulate elections assuming a bias of 1% and 5%, and plot histograms for each one.*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 21 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that even a small bias can have a dramatic effect on the predictions. Pundits made a big fuss about bias during the last election, and for good reason -- it's an important effect, and the models are clearly sensitive to it. Forecastors like Nate Silver would have had an easier time convincing a wide audience about their methodology if bias wasn't an issue.\n", "\n", "Furthermore, because of the nature of the electoral college, biases get blown up large. For example, suppose you mis-predict the party Florida elects. We've possibly done this as a nation in the past :-). Thats 29 votes right there. So, the penalty for even one misprediction is high." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Estimating the size of the bias from the 2008 election\n", "\n", "While bias can lead to serious inaccuracy in our predictions, it is fairly easy to correct *if* we are able to estimate the size of the bias and adjust for it. This is one form of **calibration**.\n", "\n", "One approach to calibrating a model is to use historical data to estimate the bias of a prediction model. We can use our same prediction model on historical data and compare our historical predictions to what actually occurred and see if, on average, the predictions missed the truth by a certain amount. Under some assumptions (discussed in a question below), we can use the estimate of the bias to adjust our current forecast.\n", "\n", "In this case, we can use data from the 2008 election. (The Gallup data from 2008 are from the whole of 2008, including after the election):" ] }, { "cell_type": "code", "collapsed": false, "input": [ "gallup_08 = pd.read_csv(\"data/g08.csv\").set_index('State')\n", "results_08 = pd.read_csv('data/2008results.csv').set_index('State')\n", "\n", "prediction_08 = gallup_08[['Dem_Adv']]\n", "prediction_08['Dem_Win']=results_08[\"Obama Pct\"] - results_08[\"McCain Pct\"]\n", "prediction_08.head()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 22 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**1.12** *Make a scatter plot using the `prediction_08` dataframe of the democratic advantage in the 2008 Gallup poll (X axis) compared to the democratic win percentage -- the difference between Obama and McCain's vote percentage -- in the election (Y Axis). 
Overplot a linear fit to these data.*\n", "\n", "**Hint**\n", "The `np.polyfit` function can compute linear fits, as can `sklearn.linear_model.LinearRegression`" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 23 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that in a lot of states where Gallup reported a Democratic advantage in affiliation, the election results went strongly in the opposite direction. Why might that be? You can read more about the reasons for this [here](http://www.gallup.com/poll/114016/state-states-political-party-affiliation.aspx#1)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A quick look at the graph will show you a number of states where Gallup showed a Democratic advantage, but where the elections were lost by the Democrats. Use Pandas to list these states." ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 24 }, { "cell_type": "markdown", "metadata": {}, "source": [ "We compute the average difference between the Democratic advantage in the Gallup poll and in the election:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "print (prediction_08.Dem_Adv - prediction_08.Dem_Win).mean()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 25 }, { "cell_type": "markdown", "metadata": {}, "source": [ "*your answer here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**1.13** * **Calibrate** your forecast of the 2012 election using the estimated bias from 2008. Validate the resulting model against the real 2012 outcome. Did the calibration help or hurt your prediction?*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 26 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**1.14** *Finally, given that we know the actual outcome of the 2012 race, and what you saw from the 2008 race, would you trust the results of an election forecast based on the 2012 Gallup party affiliation poll?*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Your answer here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Question 2: Logistic Considerations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous forecast, we used the strategy of taking some side information about an election (the partisan affiliation poll) and relating that to the predicted outcome of the election. We tied these two quantities together using a very simplistic assumption, namely that the vote outcome is deterministically related to estimated partisan affiliation.\n", "\n", "In this section, we use a more sophisticated approach to link side information -- usually called **features** or **predictors** -- to our prediction. This approach has several advantages, including the fact that we may use multiple features to perform our predictions. Such data may include demographic data, exit poll data, and data from previous elections.\n", "\n", "First, we'll construct a new feature called PVI, and use it and the Gallup poll to build predictions. Then, we'll use **logistic regression** to estimate win probabilities, and use these probabilities to build a prediction."
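] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a preview of how multiple features can be combined into a single win probability, the short sketch below forms a weighted sum of two features and squashes it through the logistic (sigmoid) function. The feature values and weights are invented purely for illustration; the real weights will be *fit* from the 2008 data later in this section." ] }, { "cell_type": "code", "collapsed": false, "input": [ "#hedged preview: turning a linear combination of features into a probability\n", "#the feature values and the weights (0.8, 1.2) are made up for illustration only\n", "import numpy as np\n", "\n", "def logistic(x):\n", "    return np.exp(x) / (1.0 + np.exp(x))\n", "\n", "dem_adv, pvi = 3.0, -1.0        #hypothetical (mean-centered) features for one state\n", "beta = np.array([0.8, 1.2])     #invented coefficients, one per feature\n", "score = np.dot(beta, [dem_adv, pvi])\n", "print \"linear score = %.2f, P(Obama) = %.2f\" % (score, logistic(score))" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Fitting a logistic regression amounts to choosing coefficients like these so that the predicted probabilities agree as well as possible with the observed 2008 outcomes; the fitting code we provide appears after the PVI feature is constructed below."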
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### The Partisan Voting Index\n", "\n", "The Partisan Voting Index (PVI) is defined as the excessive swing towards a party in the previous election in a given state. In other words:\n", "\n", "$$\n", "PVI_{2008} (state) = \n", "Democratic.Percent_{2004} ( state ) - Republican.Percent_{2004} ( state) - \\\\ \n", " \\Big ( Democratic.Percent_{2004} (national) - Republican.Percent_{2004} (national) \\Big )\n", "$$\n", "\n", "To calculate it, let us first load the national percent results for republicans and democrats in the last 3 elections and convert it to the usual `democratic - republican` format." ] }, { "cell_type": "code", "collapsed": false, "input": [ "national_results=pd.read_csv(\"data/nat.csv\")\n", "national_results.set_index('Year',inplace=True)\n", "national_results.head()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 27 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us also load in data about the 2004 elections from `p04.csv` which gets the results in the above form for the 2004 election for each state." ] }, { "cell_type": "code", "collapsed": false, "input": [ "polls04=pd.read_csv(\"data/p04.csv\")\n", "polls04.State=polls04.State.replace(states_abbrev)\n", "polls04.set_index(\"State\", inplace=True);\n", "polls04.head()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 28 }, { "cell_type": "code", "collapsed": false, "input": [ "pvi08=polls04.Dem - polls04.Rep - (national_results.xs(2004)['Dem'] - national_results.xs(2004)['Rep'])\n", "pvi08.head()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 29 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**2.1** *Build a new DataFrame called `e2008`.* The dataframe `e2008` must have the following columns:\n", "\n", "* a column named pvi with the contents of the partisan vote index `pvi08`\n", "* a column named `Dem_Adv` which has the Democratic advantage from the frame `prediction_08` of the last question **with the mean subtracted out**\n", "* a column named `obama_win` which has a 1 for each state Obama won in 2008, and 0 otherwise\n", "* a column named `Dem_Win` which has the 2008 election Obama percentage minus McCain percentage, also from the frame `prediction_08`\n", "* **The DataFrame should be indexed and sorted by State**" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 30 }, { "cell_type": "markdown", "metadata": {}, "source": [ "We construct a similar frame for 2012, obtaining `pvi` using the 2008 Obama win data which we already have. There is no `obama_win` column since, well, our job is to predict it!" ] }, { "cell_type": "code", "collapsed": false, "input": [ "pvi12 = e2008.Dem_Win - (national_results.xs(2008)['Dem'] - national_results.xs(2008)['Rep'])\n", "e2012 = pd.DataFrame(dict(pvi=pvi12, Dem_Adv=gallup_2012.Dem_Adv - gallup_2012.Dem_Adv.mean()))\n", "e2012 = e2012.sort_index()\n", "e2012.head()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 31 }, { "cell_type": "markdown", "metadata": {}, "source": [ "We load in the actual 2012 results so that we can compare our results to the predictions." 
] }, { "cell_type": "code", "collapsed": false, "input": [ "results2012 = pd.read_csv(\"data/2012results.csv\")\n", "results2012.set_index(\"State\", inplace=True)\n", "results2012 = results2012.sort_index()\n", "results2012.head()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 32 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exploratory Data Analysis" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**2.2** Lets do a little exploratory data analysis. *Plot a scatter plot of the two PVi's against each other. What are your findings? Is the partisan vote index relatively stable from election to election?*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 33 }, { "cell_type": "markdown", "metadata": {}, "source": [ "*your answer here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**2.3** Lets do a bit more exploratory data analysis. *Using a scatter plot, plot `Dem_Adv` against `pvi` in both 2008 and 2012. Use colors red and blue depending upon `obama_win` for the 2008 data points. Plot the 2012 data using gray color. Is there the possibility of making a linear separation (line of separation) between the red and the blue points on the graph?*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 34 }, { "cell_type": "markdown", "metadata": {}, "source": [ "*your answer here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### The Logistic Regression" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Logistic regression is a probabilistic model that links observed binary data to a set of features.\n", "\n", "Suppose that we have a set of binary (that is, taking the values 0 or 1) observations $Y_1,\\cdots,Y_n$, and for each observation $Y_i$ we have a vector of features $X_i$. The logistic regression model assumes that there is some set of **weights**, **coefficients**, or **parameters** $\\beta$, one for each feature, so that the data were generated by flipping a weighted coin whose probability of giving a 1 is given by the following equation:\n", "\n", "$$\n", "P(Y_i = 1) = \\mathrm{logistic}(\\sum \\beta_i X_i),\n", "$$\n", "\n", "where\n", "\n", "$$\n", "\\mathrm{logistic}(x) = \\frac{e^x}{1+e^x}.\n", "$$\n", "\n", "When we *fit* a logistic regression model, we determine values for each $\\beta$ that allows the model to best fit the *training data* we have observed (the 2008 election). Once we do this, we can use these coefficients to make predictions about data we have not yet observed (the 2012 election).\n", "\n", "Sometimes this estimation procedure will overfit the training data yielding predictions that are difficult to generalize to unobserved data. Usually, this occurs when the magnitudes of the components of $\\beta$ become too large. To prevent this, we can use a technique called *regularization* to make the procedure prefer parameter vectors that have smaller magnitude. We can adjust the strength of this regularization to reduce the error in our predictions.\n", "\n", "We now write some code as technology for doing logistic regression. By the time you start doing this homework, you will have learnt the basics of logistic regression, but not all the mechanisms of cross-validation of data sets. 
Thus we provide here the code for you to do the logistic regression, and the accompanying cross-validation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We first build the features from the 2008 data frame, returning `y`, the vector of labels, and `X` the feature-sample matrix where the columns are the features in order from the list `featurelist`, and each row is a data \"point\"." ] }, { "cell_type": "code", "collapsed": false, "input": [ "from sklearn.linear_model import LogisticRegression\n", "\n", "def prepare_features(frame2008, featureslist):\n", " y= frame2008.obama_win.values\n", " X = frame2008[featureslist].values\n", " if len(X.shape) == 1:\n", " X = X.reshape(-1, 1)\n", " return y, X" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 35 }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use the above function to get the label vector and feature-sample matrix for feeding to scikit-learn. We then use the usual scikit-learn incantation `fit` to fit a logistic regression model with regularization parameter `C`. The parameter `C` is a hyperparameter of the model, and is used to penalize too high values of the parameter co-efficients in the loss function that is minimized to perform the logistic regression. We build a new dataframe with the usual `Obama` column, that holds the probabilities used to make the prediction. Finally we return a tuple of the dataframe and the classifier instance, in that order." ] }, { "cell_type": "code", "collapsed": false, "input": [ "def fit_logistic(frame2008, frame2012, featureslist, reg=0.0001):\n", " y, X = prepare_features(frame2008, featureslist)\n", " clf2 = LogisticRegression(C=reg)\n", " clf2.fit(X, y)\n", " X_new = frame2012[featureslist]\n", " obama_probs = clf2.predict_proba(X_new)[:, 1]\n", " \n", " df = pd.DataFrame(index=frame2012.index)\n", " df['Obama'] = obama_probs\n", " return df, clf2" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 36 }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are not done yet. In order to estimate `C`, we perform a grid search over many `C` to find the best `C` that minimizes the loss function. For each point on that grid, we carry out a `n_folds`-fold cross-validation. What does this mean?\n", "\n", "Suppose `n_folds=10`. Then we will repeat the fit 10 times, each time randomly choosing 50/10 ~ 5 states out as a test set, and using the remaining 45/46 as the training set. We use the average score on the test set to score each particular choice of `C`, and choose the one with the best performance." ] }, { "cell_type": "code", "collapsed": false, "input": [ "from sklearn.grid_search import GridSearchCV\n", "\n", "def cv_optimize(frame2008, featureslist, n_folds=10, num_p=100):\n", " y, X = prepare_features(frame2008, featureslist)\n", " clf = LogisticRegression()\n", " parameters = {\"C\": np.logspace(-4, 3, num=num_p)}\n", " gs = GridSearchCV(clf, param_grid=parameters, cv=n_folds)\n", " gs.fit(X, y)\n", " return gs.best_params_, gs.best_score_\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 37 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally we write the function that we use to make our fits. It takes both the 2008 and 2012 frame as arguments, as well as the featurelist, and the number of cross-validation folds to do. 
It uses the `cv_optimize` function defined above to find the best-fit `C`, and then uses this value to return the tuple of result dataframe and classifier described above. This is the function you will be using." ] }, { "cell_type": "code", "collapsed": false, "input": [ "def cv_and_fit(frame2008, frame2012, featureslist, n_folds=5):\n", " bp, bs = cv_optimize(frame2008, featureslist, n_folds=n_folds)\n", " predict, clf = fit_logistic(frame2008, frame2012, featureslist, reg=bp['C'])\n", " return predict, clf" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 38 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**2.4** *Carry out a logistic fit using the `cv_and_fit` function developed above. As your featurelist, use the features we have: `Dem_Adv` and `pvi`.*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 39 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**2.5** *As before, plot a histogram and map of the simulation results, and interpret the results in terms of accuracy and precision.*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#code to make the histogram\n", "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 40 }, { "cell_type": "code", "collapsed": false, "input": [ "#code to make the map\n", "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 41 }, { "cell_type": "markdown", "metadata": {}, "source": [ "*your answer here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Classifier Decision Boundary" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One nice way to visualize a 2-dimensional logistic regression is to plot the probability as a function of each dimension. This shows the **decision boundary** -- the set of feature values where the logistic fit yields P=0.5 and shifts between a preference for Obama and a preference for McCain/Romney.\n", "\n", "The function below draws such a figure (it is adapted from the scikit-learn website), and overplots the data."
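] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A quick aside before the plotting helper: for a two-feature logistic regression the P=0.5 boundary is a straight line determined by the fitted coefficients and intercept, since it is the set of points where the linear score is zero. The helper below (our own sketch, not required for the homework) shows how to recover that line from a fitted classifier." ] }, { "cell_type": "code", "collapsed": false, "input": [ "#hedged aside: with two features, P=0.5 exactly where coef[0]*Dem_Adv + coef[1]*pvi + intercept = 0,\n", "#i.e. pvi = -(intercept + coef[0]*Dem_Adv) / coef[1]\n", "import numpy as np\n", "\n", "def boundary_pvi(clf, dem_adv_values):\n", "    \"\"\"pvi values on the P=0.5 line of a fitted two-feature LogisticRegression\"\"\"\n", "    b_adv, b_pvi = clf.coef_[0]\n", "    return -(clf.intercept_[0] + b_adv * np.asarray(dem_adv_values)) / b_pvi" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you pass the classifier from question 2.4 to a helper like this (the name `boundary_pvi` is ours, not part of the assignment), you can check that the 0.5 contour drawn by `points_plot` below is exactly this straight line; how quickly the other probability contours fall away from it is what determines how confidently nearby states are classified."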
] }, { "cell_type": "code", "collapsed": false, "input": [ "from matplotlib.colors import ListedColormap\n", "def points_plot(e2008, e2012, clf):\n", " \"\"\"\n", " e2008: The e2008 data\n", " e2012: The e2012 data\n", " clf: classifier\n", " \"\"\"\n", " Xtrain = e2008[['Dem_Adv', 'pvi']].values\n", " Xtest = e2012[['Dem_Adv', 'pvi']].values\n", " ytrain = e2008['obama_win'].values == 1\n", " \n", " X=np.concatenate((Xtrain, Xtest))\n", " \n", " # evenly sampled points\n", " x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\n", " y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\n", " xx, yy = np.meshgrid(np.linspace(x_min, x_max, 50),\n", " np.linspace(y_min, y_max, 50))\n", " plt.xlim(xx.min(), xx.max())\n", " plt.ylim(yy.min(), yy.max())\n", "\n", " #plot background colors\n", " ax = plt.gca()\n", " Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]\n", " Z = Z.reshape(xx.shape)\n", " cs = ax.contourf(xx, yy, Z, cmap='RdBu', alpha=.5)\n", " cs2 = ax.contour(xx, yy, Z, cmap='RdBu', alpha=.5)\n", " plt.clabel(cs2, fmt = '%2.1f', colors = 'k', fontsize=14)\n", " \n", " # Plot the 2008 points\n", " ax.plot(Xtrain[ytrain == 0, 0], Xtrain[ytrain == 0, 1], 'ro', label='2008 McCain')\n", " ax.plot(Xtrain[ytrain == 1, 0], Xtrain[ytrain == 1, 1], 'bo', label='2008 Obama')\n", " \n", " # and the 2012 points\n", " ax.scatter(Xtest[:, 0], Xtest[:, 1], c='k', marker=\"s\", s=50, facecolors=\"k\", alpha=.5, label='2012')\n", " plt.legend(loc='upper left', scatterpoints=1, numpoints=1)\n", "\n", " return ax" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 1 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**2.6** *Plot your results on the classification space boundary plot. How sharp is the classification boundary, and how does this translate into accuracy and precision of the results?*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 43 }, { "cell_type": "markdown", "metadata": {}, "source": [ "*your answer here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Question 3: Trying to catch Silver: Poll Aggregation\n", "\n", "In the previous section, we tried to use heterogeneous side-information to build predictions of the election outcome. In this section, we switch gears to bringing together homogeneous information about the election, by aggregating different polling result together.\n", "\n", "This approach -- used by the professional poll analysists -- involves combining many polls about the election itself. One advantage of this approach is that it addresses the problem of bias in individual polls, a problem we found difficult to deal with in problem 1. If we assume that the polls are all attempting to estimate the same quantity, any individual biases should cancel out when averaging many polls (pollsters also try to correct for known biases). This is often a better assumption than assuming constant bias between election cycles, as we did above." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following table aggregates many of the pre-election polls available as of October 2, 2012. We are most interested in the column \"obama_spread\". 
We will clean the data for you:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "multipoll = pd.read_csv('data/cleaned-state_data2012.csv', index_col=0)\n", "\n", "#convert state abbreviation to full name\n", "multipoll.State.replace(states_abbrev, inplace=True)\n", "\n", "#convert dates from strings to date objects, and compute midpoint\n", "multipoll.start_date = multipoll.start_date.apply(pd.datetools.parse)\n", "multipoll.end_date = multipoll.end_date.apply(pd.datetools.parse)\n", "multipoll['poll_date'] = multipoll.start_date + (multipoll.end_date - multipoll.start_date).values / 2\n", "\n", "#compute the poll age relative to Oct 2, in days\n", "multipoll['age_days'] = (today - multipoll['poll_date']).values / np.timedelta64(1, 'D')\n", "\n", "#drop any rows with data from after oct 2\n", "multipoll = multipoll[multipoll.age_days > 0]\n", "\n", "#drop unneeded columns\n", "multipoll = multipoll.drop(['Date', 'start_date', 'end_date', 'Spread'], axis=1)\n", "\n", "#add electoral vote counts\n", "multipoll = multipoll.join(electoral_votes, on='State')\n", "\n", "#drop rows with missing data\n", "multipoll.dropna()\n", "\n", "multipoll.head()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 44 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**3.1** Using this data, compute a new data frame that averages the obama_spread for each state. Also compute the standard deviation of the obama_spread in each state, and the number of polls for each state.\n", "\n", "*Define a function `state_average` which returns this dataframe*\n", "\n", "**Hint**\n", "\n", "[pd.GroupBy](http://pandas.pydata.org/pandas-docs/dev/groupby.html) could come in handy" ] }, { "cell_type": "code", "collapsed": false, "input": [ "\"\"\"\n", "Function\n", "--------\n", "state_average\n", "\n", "Inputs\n", "------\n", "multipoll : DataFrame\n", " The multipoll data above\n", " \n", "Returns\n", "-------\n", "averages : DataFrame\n", " A dataframe, indexed by State, with the following columns:\n", " N: Number of polls averaged together\n", " poll_mean: The average value for obama_spread for all polls in this state\n", " poll_std: The standard deviation of obama_spread\n", " \n", "Notes\n", "-----\n", "For states where poll_std isn't finite (because N is too small), estimate the\n", "poll_std value as .05 * poll_mean\n", "\"\"\"\n", "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 45 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets call the function on the `multipoll` data frame, and join it with the `electoral_votes` frame." ] }, { "cell_type": "code", "collapsed": false, "input": [ "avg = state_average(multipoll).join(electoral_votes, how='outer')\n", "avg.head()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 46 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Some of the reddest and bluest states are not present in this data (people don't bother polling there as much). 
The `default_missing` function gives them strong Democratic/Republican advantages." ] }, { "cell_type": "code", "collapsed": false, "input": [ "def default_missing(results):\n", " red_states = [\"Alabama\", \"Alaska\", \"Arkansas\", \"Idaho\", \"Wyoming\"]\n", " blue_states = [\"Delaware\", \"District of Columbia\", \"Hawaii\"]\n", " results.ix[red_states, [\"poll_mean\"]] = -100.0\n", " results.ix[red_states, [\"poll_std\"]] = 0.1\n", " results.ix[blue_states, [\"poll_mean\"]] = 100.0\n", " results.ix[blue_states, [\"poll_std\"]] = 0.1\n", "default_missing(avg)\n", "avg.head()" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 47 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Unweighted Aggregation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**3.2** *Build an `aggregated_poll_model` function that takes the `avg` DataFrame as input, and returns a forecast DataFrame*\n", "in the format you've been using to simulate elections. Assume that the probability that Obama wins a state\n", "is given by the probability that a draw from a Gaussian with $\\mu=$ `poll_mean` and $\\sigma=$ `poll_std` is positive." ] }, { "cell_type": "code", "collapsed": false, "input": [ "\"\"\"\n", "Function\n", "--------\n", "aggregated_poll_model\n", "\n", "Inputs\n", "------\n", "polls : DataFrame\n", " DataFrame indexed by State, with the following columns:\n", " poll_mean\n", " poll_std\n", " Votes\n", "\n", "Returns\n", "-------\n", "A DataFrame indexed by State, with the following columns:\n", " Votes: Electoral votes for that state\n", " Obama: Estimated probability that Obama wins the state\n", "\"\"\"\n", "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 48 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**3.3** *Run 10,000 simulations with this model, and plot the results. Describe the results in a paragraph -- compare the methodology and the simulation outcome to the Gallup poll. Also plot the usual map of the probabilities.*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 49 }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Your summary here*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 50 }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Weighted Aggregation\n", "\n", "Not all polls are equally valuable. A poll with a larger margin of error should not influence a forecast as heavily. Likewise, a poll further in the past is a less valuable indicator of current (or future) public opinion. For this reason, polls are often weighted when building forecasts. \n", "\n", "A weighted estimate of Obama's advantage in a given state is given by\n", "\n", "$$\n", "\\mu = \\frac{\\sum w_i \\times \\mu_i}{\\sum w_i}\n", "$$\n", "\n", "where the $\\mu_i$ are the individual polling measurements for a state, and the $w_i$ are the weights assigned to each poll. Assuming that the measurements are independent and that each $\\mu_i$ is an unbiased estimator of $\\mu$, the variance of the weighted mean is\n", "\n", "$$\\textrm{Var}(\\mu) = \\frac{1}{(\\sum_i w_i)^2} \\sum_{i=1}^n w_i^2 \\textrm{Var}(\\mu_i).$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### What's the matter with Kansas?" 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We need to find an estimator of the variance of $\\mu_i$, $Var(\\mu_i)$. In the case of states that have a lot of polls, we expect the bias in $\\mu$ to be negligible, and then the above formula for the variance of $\\mu$ holds. However, lets take a look at the case of Kansas." ] }, { "cell_type": "code", "collapsed": false, "input": [ "multipoll[multipoll.State==\"Kansas\"]" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 51 }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are only two polls in the last year! And, the results in the two polls are far, very far from the mean.\n", "\n", "Now, Kansas is a safely Republican state, so this dosent really matter, but if it were a swing state, we'd be in a pickle. We'd have no unbiased estimator of the variance in Kansas. So, to be conservative, and play it safe, we follow the same tack we did with the unweighted averaging of polls, and simply assume that the variance in a state is the square of the standard deviation of `obama_spread`.\n", "\n", "This will overestimate the errors for a lot of states, but unless we do a detailed state-by-state analysis, its better to be conservative. Thus, we use:\n", "\n", "$\\textrm{Var}(\\mu)$ = `obama_spread.std()`$^2$ .\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The weights $w_i$ should combine the uncertainties from the margin of error and the age of the forecast. One such combination is:\n", "\n", "$$\n", "w_i = \\frac1{MoE^2} \\times \\lambda_{\\rm age}\n", "$$\n", "\n", "where\n", "\n", "$$\n", "\\lambda_{\\rm age} = 0.5^{\\frac{{\\rm age}}{30 ~{\\rm days}}}\n", "$$\n", "\n", "This model makes a few ad-hoc assumptions:\n", "\n", "1. The equation for $\\sigma$ assumes that every measurement is independent. This is not true in the case that a given pollster in a state makes multiple polls, perhaps with some of the same respondents (a longitudinal survey). But its a good assumption to start with.\n", "1. The equation for $\\lambda_{\\rm age}$ assumes that a 30-day old poll is half as valuable as a current one\n", "\n", "**3.4** Nevertheless, it's worth exploring how these assumptions affect the forecast model. 
*Implement the model in the function `weighted_state_average`*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "\"\"\"\n", "Function\n", "--------\n", "weighted_state_average\n", "\n", "Inputs\n", "------\n", "multipoll : DataFrame\n", " The multipoll data above\n", " \n", "Returns\n", "-------\n", "averages : DataFrame\n", " A dataframe, indexed by State, with the following columns:\n", " N: Number of polls averaged together\n", " poll_mean: The average value for obama_spread for all polls in this state\n", " poll_std: The standard deviation of obama_spread\n", " \n", "Notes\n", "-----\n", "For states where poll_std isn't finite (because N is too small), estimate the\n", "poll_std value as .05 * poll_mean\n", "\"\"\"\n", "\n", "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 52 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**3.5** *Put this all together -- compute a new estimate of `poll_mean` and `poll_std` for each state, apply the `default_missing` function to handle missing rows, build a forecast with `aggregated_poll_model`, run 10,000 simulations, and plot the results, both as a histogram and as a map.*" ] }, { "cell_type": "code", "collapsed": false, "input": [ "#your code here\n" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 53 }, { "cell_type": "code", "collapsed": false, "input": [ "#your map code here\n", "make_map(model.Obama, \"P(Obama): Weighted Polls\")" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 54 }, { "cell_type": "markdown", "metadata": {}, "source": [ "**3.6** *Discuss your results in terms of bias, accuracy and precision, as before*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*your answer here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For fun, but not to hand in, play around with turning off the time decay weight and the sample error weight individually." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Parting Thoughts: What do the pros do?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The models we have explored in this homework have been fairly ad-hoc. Still, we have seen predicting by simulation, prediction using heterogeneous side-features, and finally by weighting polls that are made in the election season. The pros pretty much start from poll-averaging, adding in demographics and economic information, and moving onto trend-estimation as the election gets closer. They also employ models of likely voters vs registered voters, and how independents might break. At this point, you are prepared to go and read more about these techniques, so let us leave you with some links to read:\n", "\n", "1. Skipper Seabold's reconstruction of parts of Nate Silver's model: https://github.com/jseabold/538model . We've drawn direct inspiration from his work , and indeed have used some of the data he provides in his repository\n", "\n", "2. The simulation techniques are partially drawn from Sam Wang's work at http://election.princeton.edu . Be sure to check out the FAQ, Methods section, and matlab code on his site.\n", "\n", "3. Nate Silver, who we are still desperately seeking, has written a lot about his techniques: http://www.fivethirtyeight.com/2008/03/frequently-asked-questions-last-revised.html . Start there and look around\n", "\n", "4. 
Drew Linzer uses Bayesian techniques; check out his work at: http://votamatic.org/evaluating-the-forecasting-model/" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### How to submit\n", "\n", "To submit your homework, create a folder named lastname_firstinitial_hw2 and place this notebook file in the folder. Also put the data folder in this folder. **Make sure everything still works!** Select Kernel->Restart Kernel to restart Python, Cell->Run All to run all cells. You shouldn't hit any errors. Compress the folder (please use .zip compression) and submit to the CS109 dropbox in the appropriate folder. If we cannot access your work because these directions are not followed correctly, we will not grade your work." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "*css tweaks in this cell*\n", "" ] } ], "metadata": {} } ] }