{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "from scipy.stats import ttest_ind" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Assignment 4 - Hypothesis Testing\n", "This assignment requires more individual learning than previous assignments - you are encouraged to check out the [pandas documentation](http://pandas.pydata.org/pandas-docs/stable/) to find functions or methods you might not have used yet, or ask questions on [Stack Overflow](http://stackoverflow.com/) and tag them as pandas and python related. And of course, the discussion forums are open for interaction with your peers and the course staff.\n", "\n", "Definitions:\n", "* A _quarter_ is a specific three month period, Q1 is January through March, Q2 is April through June, Q3 is July through September, Q4 is October through December.\n", "* A _recession_ is defined as starting with two consecutive quarters of GDP decline, and ending with two consecutive quarters of GDP growth.\n", "* A _recession bottom_ is the quarter within a recession which had the lowest GDP.\n", "* A _university town_ is a city which has a high percentage of university students compared to the total population of the city.\n", "\n", "**Hypothesis**: University towns have their mean housing prices less effected by recessions. Run a t-test to compare the ratio of the mean price of houses in university towns the quarter before the recession starts compared to the recession bottom. (`price_ratio=quarter_before_recession/recession_bottom`)\n", "\n", "The following data files are available for this assignment:\n", "* From the [Zillow research data site](http://www.zillow.com/research/data/) there is housing data for the United States. In particular the datafile for [all homes at a city level](http://files.zillowstatic.com/research/public/City/City_Zhvi_AllHomes.csv), ```City_Zhvi_AllHomes.csv```, has median home sale prices at a fine grained level.\n", "* From the Wikipedia page on college towns is a list of [university towns in the United States](https://en.wikipedia.org/wiki/List_of_college_towns#College_towns_in_the_United_States) which has been copy and pasted into the file ```university_towns.txt```.\n", "* From Bureau of Economic Analysis, US Department of Commerce, the [GDP over time](http://www.bea.gov/national/index.htm#gdp) of the United States in current dollars (use the chained value in 2009 dollars), in quarterly intervals, in the file ```gdplev.xls```. For this assignment, only look at GDP data from the first quarter of 2000 onward.\n", "\n", "Each function in this assignment below is worth 10%, with the exception of ```run_ttest()```, which is worth 50%." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Use this dictionary to map state names to two letter acronyms\n", "states = {'OH': 'Ohio', 'KY': 'Kentucky', 'AS': 'American Samoa', 'NV': 'Nevada', 'WY': 'Wyoming', 'NA': 'National', 'AL': 'Alabama', 'MD': 'Maryland', 'AK': 'Alaska', 'UT': 'Utah', 'OR': 'Oregon', 'MT': 'Montana', 'IL': 'Illinois', 'TN': 'Tennessee', 'DC': 'District of Columbia', 'VT': 'Vermont', 'ID': 'Idaho', 'AR': 'Arkansas', 'ME': 'Maine', 'WA': 'Washington', 'HI': 'Hawaii', 'WI': 'Wisconsin', 'MI': 'Michigan', 'IN': 'Indiana', 'NJ': 'New Jersey', 'AZ': 'Arizona', 'GU': 'Guam', 'MS': 'Mississippi', 'PR': 'Puerto Rico', 'NC': 'North Carolina', 'TX': 'Texas', 'SD': 'South Dakota', 'MP': 'Northern Mariana Islands', 'IA': 'Iowa', 'MO': 'Missouri', 'CT': 'Connecticut', 'WV': 'West Virginia', 'SC': 'South Carolina', 'LA': 'Louisiana', 'KS': 'Kansas', 'NY': 'New York', 'NE': 'Nebraska', 'OK': 'Oklahoma', 'FL': 'Florida', 'CA': 'California', 'CO': 'Colorado', 'PA': 'Pennsylvania', 'DE': 'Delaware', 'NM': 'New Mexico', 'RI': 'Rhode Island', 'MN': 'Minnesota', 'VI': 'Virgin Islands', 'NH': 'New Hampshire', 'MA': 'Massachusetts', 'GA': 'Georgia', 'ND': 'North Dakota', 'VA': 'Virginia'}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "import re\n", "from scipy import stats\n", "\n", "def get_list_of_university_towns():\n", " '''Returns a DataFrame of towns and the states they are in from the \n", " university_towns.txt list. The format of the DataFrame is:\n", " DataFrame( [ [\"Michigan\",\"Ann Arbor\"], [\"Michigan\", \"Yipsilanti\"] ], \n", " columns=[\"State\",\"RegionName\"] )'''\n", "\n", " f = open('university_towns.txt').readlines()\n", " states = [line.split('[edit]')[0] for line in f if '[edit]' in line] # each state name ends in [edit]\n", " \n", " f2 = open('university_towns.txt').read()\n", " town_groups = re.split(r'[A-Z]{1}\\w*\\s?\\w*\\[edit\\]', f2)[1:]# split on state names to get list of towns in all states\n", " \n", " \n", " school = []\n", "\n", " for state in range(len(states)):\n", " for town in town_groups[state].split('\\n'):\n", " if (town != '') and (town != '\\n'):\n", " temp = {\"RegionName\" : town.split(' (')[0], \"State\" : states[state]}\n", " school.append(temp)\n", " \n", " col_order = ['State', 'RegionName']\n", " university_towns = pd.DataFrame(school)\n", " university_towns = university_towns.sort_values(by=['State']).reset_index()\n", " university_towns = university_towns[col_order]\n", "\n", " return university_towns\n", "get_list_of_university_towns().head(n=10)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gdp = pd.read_excel('gdplev.xls', skiprows=5)\n", "gdp = gdp.drop(gdp.index[0:2])\n", "gdp = gdp.reset_index()\n", "gdp = gdp.drop(['index', 'Unnamed: 3', 'Unnamed: 7'], axis=1)\n", "gdp = gdp.rename(columns={'Unnamed: 0' : 'Year', 'Unnamed: 4' : 'Quarterly'})\n", "\n", "\n", "gdp = gdp.drop(gdp.index[0:212])\n", "gdp = gdp.reset_index()\n", "\n", "col_to_keep = ['Quarterly', 'GDP in billions of current dollars.1', 'GDP in billions of chained 2009 dollars.1']\n", "gdp = gdp[col_to_keep]\n", "\n", "gdp['deltaGdp'] = gdp['GDP in billions of chained 2009 dollars.1'].diff()\n", "\n", "def get_recession_start():\n", " 
, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load quarterly GDP and keep only data from 2000q1 onward\n", "gdp = pd.read_excel('gdplev.xls', skiprows=5)\n", "gdp = gdp.drop(gdp.index[0:2])\n", "gdp = gdp.reset_index()\n", "gdp = gdp.drop(['index', 'Unnamed: 3', 'Unnamed: 7'], axis=1)\n", "gdp = gdp.rename(columns={'Unnamed: 0' : 'Year', 'Unnamed: 4' : 'Quarterly'})\n", "\n", "gdp = gdp.drop(gdp.index[0:212]) # rows before the first quarter of 2000\n", "gdp = gdp.reset_index()\n", "\n", "col_to_keep = ['Quarterly', 'GDP in billions of current dollars.1', 'GDP in billions of chained 2009 dollars.1']\n", "gdp = gdp[col_to_keep]\n", "\n", "# Quarter-over-quarter change in chained 2009 dollar GDP\n", "gdp['deltaGdp'] = gdp['GDP in billions of chained 2009 dollars.1'].diff()\n", "\n", "def get_recession_start():\n", "    '''Returns the year and quarter of the recession start time as a \n", "    string value in a format such as 2005q3'''\n", "\n", "    rec_years = []\n", "    for i in range(len(gdp) - 1): # stop one row short so gdp.loc[i+1] stays in bounds\n", "        if (gdp.loc[i, 'deltaGdp'] < 0) and (gdp.loc[i+1, 'deltaGdp'] < 0):\n", "            if gdp.loc[i, 'Quarterly'] not in rec_years:\n", "                rec_years.append(gdp.loc[i, 'Quarterly'])\n", "                rec_years.append(gdp.loc[i+1, 'Quarterly'])\n", "\n", "    # The first quarter of the first pair of consecutive declines\n", "    return rec_years[0]\n", "\n", "get_recession_start()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Inspect the cleaned GDP table built above\n", "gdp.head(n=10)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_recession_end():\n", "    '''Returns the year and quarter of the recession end time as a \n", "    string value in a format such as 2005q3'''\n", "\n", "    rec_years = []\n", "    for i in range(len(gdp) - 1):\n", "        if (gdp.loc[i, 'deltaGdp'] < 0) and (gdp.loc[i+1, 'deltaGdp'] < 0):\n", "            if gdp.loc[i, 'Quarterly'] not in rec_years:\n", "                rec_years.append(gdp.loc[i, 'Quarterly'])\n", "                rec_years.append(i)\n", "                rec_years.append(gdp.loc[i+1, 'Quarterly'])\n", "                rec_years.append(i+1)\n", "\n", "    # rec_years[-1] is the row index of the last declining quarter; the\n", "    # recession ends two quarters later, after two quarters of growth\n", "    return gdp.loc[rec_years[-1] + 2, 'Quarterly']\n", "\n", "get_recession_end()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_recession_bottom():\n", "    '''Returns the year and quarter of the recession bottom time as a \n", "    string value in a format such as 2005q3'''\n", "\n", "    rec_years = []\n", "    for i in range(len(gdp) - 1):\n", "        if (gdp.loc[i, 'deltaGdp'] < 0) and (gdp.loc[i+1, 'deltaGdp'] < 0):\n", "            if gdp.loc[i, 'Quarterly'] not in rec_years:\n", "                rec_years.append(gdp.loc[i, 'Quarterly'])\n", "                rec_years.append(i)\n", "                rec_years.append(gdp.loc[i+1, 'Quarterly'])\n", "                rec_years.append(i+1)\n", "\n", "    rec_start = rec_years[1] # row index of the first declining quarter\n", "    rec_end = rec_years[-1] # row index of the last declining quarter\n", "\n", "    # The bottom is the quarter with the lowest GDP within the recession\n", "    rec_slice = gdp.loc[rec_start : rec_end, ['GDP in billions of chained 2009 dollars.1', 'Quarterly']]\n", "    min_gdp = rec_slice['GDP in billions of chained 2009 dollars.1'].min()\n", "\n", "    k = rec_slice.where(rec_slice['GDP in billions of chained 2009 dollars.1'] == min_gdp).dropna().reset_index()\n", "\n", "    return k.loc[0, 'Quarterly']\n", "\n", "get_recession_bottom()" ] }
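, { "cell_type": "markdown", "metadata": {}, "source": [ "As a sanity check on the detection logic above, here is a minimal sketch on an invented GDP series (the numbers are made up): `diff()` flags declines, and the first pair of consecutive negative diffs marks the recession start." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sanity-check sketch of the recession-detection idea on invented numbers\n", "toy = pd.DataFrame({'Quarterly': ['2005q1', '2005q2', '2005q3', '2005q4', '2006q1', '2006q2'],\n", "                    'gdp': [100.0, 99.0, 97.0, 96.0, 98.0, 99.5]})\n", "toy['delta'] = toy['gdp'].diff()\n", "\n", "starts = [toy.loc[i, 'Quarterly']\n", "          for i in range(len(toy) - 1)\n", "          if toy.loc[i, 'delta'] < 0 and toy.loc[i+1, 'delta'] < 0]\n", "print(starts[0]) # '2005q2' -- the first quarter of two consecutive declines" ] }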
, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def convert_housing_data_to_quarters():\n", "    '''Converts the housing data to quarters and returns it as mean \n", "    values in a dataframe. This dataframe should be a dataframe with\n", "    columns for 2000q1 through 2016q3, and should have a multi-index\n", "    in the shape of [\"State\",\"RegionName\"].\n", "    \n", "    Note: Quarters are defined in the assignment description, they are\n", "    not arbitrary three month periods.\n", "    \n", "    The resulting dataframe should have 67 columns, and 10,730 rows.\n", "    '''\n", "    housing = pd.read_csv('City_Zhvi_AllHomes.csv')\n", "    \n", "    housing3 = housing.set_index([\"State\", \"RegionName\"]).loc[:, '2000-01' : ]\n", "    \n", "    '''\n", "    def quarter_rows(row):\n", "        for i in range(0, len(row), 3):\n", "            row.replace(row[i], np.mean(row[i:i+3]), inplace=True)\n", "        return row\n", "    housing = housing.apply(quarter_rows, axis=1)\n", "    This loop accomplishes the same purpose as the datetime conversion and\n", "    resampling below, but is much slower.\n", "    '''\n", "    \n", "    # Convert the monthly column labels to periods and average into quarters\n", "    housing3.columns = pd.to_datetime(housing3.columns).to_period('M')\n", "    housing3 = housing3.resample('q', axis=1).mean()\n", "    \n", "    # Map the two letter acronyms in the index to full state names, using\n", "    # the states dictionary defined near the top of the notebook\n", "    housing3 = housing3.rename(index=states)\n", "    return housing3\n", "\n", "convert_housing_data_to_quarters().head()" ] }
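, { "cell_type": "markdown", "metadata": {}, "source": [ "A tiny sketch of the monthly-to-quarterly averaging used above, on an invented one-row frame (values made up): the monthly column labels become a `PeriodIndex`, and `resample('q', axis=1).mean()` averages each calendar quarter." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch of the monthly-to-quarterly averaging on invented values\n", "demo = pd.DataFrame([[100.0, 110.0, 120.0, 130.0, 140.0, 150.0]],\n", "                    index=['town_a'],\n", "                    columns=['2000-01', '2000-02', '2000-03', '2000-04', '2000-05', '2000-06'])\n", "demo.columns = pd.to_datetime(demo.columns).to_period('M')\n", "demo.resample('q', axis=1).mean() # 2000Q1 -> 110.0, 2000Q2 -> 140.0" ] }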
The\n", " value for better should be either \"university town\" or \"non-university town\"\n", " depending on which has a lower mean price ratio (which is equivilent to a\n", " reduced market loss).'''\n", " \n", " better = ''\n", " \n", " univ_towns = get_list_of_university_towns()\n", " univ_towns['univ_town'] = 'university town' # create a new column to mark university towns\n", " hous_data = convert_housing_data_to_quarters()\n", " hous_data.columns = hous_data.columns.map(str) # convert the column headings from PeriodIndex to string\n", "\n", " hous_data['change'] = hous_data['2008Q1'] - hous_data['2009Q2'] # compute changes in housing price\n", " hous_data['mean_pr'] = hous_data['2008Q1'].div(hous_data['2009Q2'])\n", "\n", " all_towns = pd.merge(hous_data, univ_towns, how='outer', right_on=['State', 'RegionName'], left_index=True)\n", " all_towns = all_towns.set_index(['State', 'RegionName'])\n", "\n", " all_towns.ix[all_towns.univ_town != 'university town', 'univ_town'] = 'non-university town'\n", "\n", " uni = all_towns[all_towns['univ_town'] == 'university town'].dropna()\n", " non_uni = all_towns[all_towns['univ_town'] == 'non-university town'].dropna()\n", "\n", " m, p = stats.ttest_ind(non_uni['mean_pr'], uni['mean_pr'])\n", " \n", " if m > 0:\n", " better = 'university town'\n", " \n", " return True, p, better\n", "\n", "run_ttest()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "coursera": { "course_slug": "python-data-analysis", "graded_item_id": "Il9Fx", "launcher_item_id": "TeDW0", "part_id": "WGlun" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 1 }