{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>Goal - show examples of cross-validation for model selection and bootstrapping for prediction error estimation.\n",
    "</p>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "ename": "ModuleNotFoundError",
     "evalue": "No module named 'course_utils'",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mModuleNotFoundError\u001b[0m                       Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-1-560675e049d1>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m      2\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0msklearn\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mlinear_model\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      3\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mos\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 4\u001b[0;31m \u001b[0;32mimport\u001b[0m \u001b[0mcourse_utils\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mbd\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m      5\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mpandas\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mpd\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      6\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mnumpy\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mModuleNotFoundError\u001b[0m: No module named 'course_utils'"
     ]
    }
   ],
   "source": [
    "from sklearn.metrics import confusion_matrix, roc_auc_score\n",
    "from sklearn import linear_model\n",
    "import os\n",
    "import course_utils as bd\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import warnings\n",
    "warnings.filterwarnings('ignore')\n",
    "%matplotlib inline\n",
    "import imp\n",
    "imp.reload(bd)\n",
    "\n",
    "#We assume data is in a parallel directory to this one called 'data'\n",
    "cwd = os.getcwd()\n",
    "datadir = '/'.join(cwd.split('/')[0:-1]) + '/data/'\n",
    "#or you can hardcode the directory\n",
    "#datadir = \n",
    "\n",
    "\n",
    "\n",
    "#Load data and downsample for a 50/50 split, then split into a train/test\n",
    "f = datadir + 'ads_dataset_cut.txt'\n",
    "\n",
    "train_split = 0.8\n",
    "tdat = pd.read_csv(f, header = 0,sep = '\\t')\n",
    "moddat = bd.downSample(tdat, 'y_buy', 4)\n",
    "\n",
    "#We know the dataset is sorted so we can just split by index\n",
    "train = moddat[:int(np.floor(moddat.shape[0]*train_split))]\n",
    "test = moddat[int(np.floor(moddat.shape[0]*train_split)):]\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>Let's say we want to build a classifier that does ranking based on the probability of the label being 1. We can use Logistic Regression with AUC as our validation metric. We need to choose a regularization weight, but because we have limited data, we can do this via cross-validation.<br><br>\n",
    "\n",
    "The following function performs the cross-validation for various levels of C\n",
    "\n",
    "</p>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# from sklearn.cross_validation import *\n",
    "from sklearn.model_selection import *\n",
    "\n",
    "def xValAUC(tr, lab, k, cs):\n",
    "    '''\n",
    "    Perform k-fold cross validation on logistic regression, varies C,\n",
    "    returns a dictionary where key=c,value=[auc-c1, auc-c2, ...auc-ck].\n",
    "    '''\n",
    "#     cv = KFold(n = tr.shape[0], n_folds = k)\n",
    "    cv = KFold(n_splits = k).split(tr)\n",
    "    aucs = {}\n",
    "\n",
    "    for train_index, test_index in cv:\n",
    "        tr_f = tr.iloc[train_index]\n",
    "        va_f = tr.iloc[test_index]\n",
    "    \n",
    "        for c in cs:\n",
    "            logreg = linear_model.LogisticRegression(C = c)\n",
    "            logreg.fit(tr_f.drop(lab, 1),tr_f[lab])\n",
    "            met = roc_auc_score(va_f[lab], logreg.predict_proba(va_f.drop(lab,1))[:,1])\n",
    "\n",
    "            if c in aucs:\n",
    "                aucs[c].append(met)\n",
    "            else:\n",
    "                aucs[c] = [met]\n",
    "    \n",
    "    return aucs "
   ]
  },
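  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>As an aside, a similar sweep could be sketched with scikit-learn's built-in cross_val_score using scoring='roc_auc' (a hypothetical alternative to the xValAUC helper above, not used in the rest of this notebook). Note that for classifiers cross_val_score stratifies the folds by default, so fold-level numbers won't exactly match the plain KFold splits used above.\n",
    "</p>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#A minimal alternative sketch using sklearn's built-in CV utilities.\n",
    "#Assumes the train dataframe and label column defined in the cells above.\n",
    "from sklearn.model_selection import cross_val_score\n",
    "\n",
    "def xValAUC_sklearn(tr, lab, k, cs):\n",
    "    '''\n",
    "    Same idea as xValAUC, but cross_val_score handles the fold bookkeeping.\n",
    "    Note: for classifiers, cross_val_score stratifies the folds by default.\n",
    "    '''\n",
    "    X, y = tr.drop(lab, axis=1), tr[lab]\n",
    "    return {c: list(cross_val_score(linear_model.LogisticRegression(C = c),\n",
    "                                    X, y, cv = k, scoring = 'roc_auc'))\n",
    "            for c in cs}"
   ]
  },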
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>Now we can run our experiment</p>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "xval_dict = {'e':[], 'mu':[], 'sig':[]}\n",
    "k = 10\n",
    "auc_cv = xValAUC(train, 'y_buy', k, [10**i for i in range(-20,20)])\n",
    "for i in range(-20,20):\n",
    "    xval_dict['e'].append(i)\n",
    "    xval_dict['mu'].append(np.array(auc_cv[10**i]).mean())\n",
    "    xval_dict['sig'].append(np.sqrt(np.array(auc_cv[10**i]).var()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>We can now load the results from above into a dataframe and begin to analyze\n",
    "\n",
    "</p>"
   ]
  },
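  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>For each value of $C$ we have $k=10$ fold-level AUC estimates, so an approximate 95% confidence interval for the mean is $\\bar{AUC}_C \\pm 1.96\\, s_C/\\sqrt{k}$, where $s_C$ is the standard deviation of the fold-level AUCs. The next cell computes these bounds.\n",
    "</p>"
   ]
  },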
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "res = pd.DataFrame(xval_dict)\n",
    "\n",
    "#Get the confidence intervals\n",
    "res['low'] = res['mu'] - 1.96*res['sig']/np.sqrt(10)\n",
    "res['up'] = res['mu'] + 1.96*res['sig']/np.sqrt(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>Now let's plot the results to get a sense of how AUC varies with regularization strength. We'll also plot the lower and upper 95% confidence bands to get a sense of the variance.\n",
    "</p>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig = plt.figure()\n",
    "ax = fig.add_subplot(111)\n",
    "\n",
    "a1, = plt.plot(res['e'], res['mu'], label = 'mu')\n",
    "a2, = plt.plot(res['e'], res['low'], 'k--', label = 'low')\n",
    "a3, = plt.plot(res['e'], res['up'], 'k--', label = 'up')\n",
    "\n",
    "plt.legend(handles = [a1,a2,a3])\n",
    "ax.set_xlabel('log10 of C')\n",
    "ax.set_ylabel('Mean Val AUC')\n",
    "plt.title('X-validated AUC by C')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>We see that AUC is definitely affected by the choice of regularization strength (here we seem to do better with less regularization, probably due to very strong signal-to-noise in the data). But its hard to see what is best in the range $1$ to $10^{30}$. We can zoom in to see this better."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig = plt.figure()\n",
    "ax = fig.add_subplot(111)\n",
    "\n",
    "res2 = res[(res['e']>-6)]\n",
    "\n",
    "best = res2['mu'].max()\n",
    "best_1sde = best - res2['sig'][(res2['mu']==best)]/np.sqrt(10)\n",
    "\n",
    "plt.plot(res2['e'], res2['mu'], label='Avg AUC')\n",
    "plt.plot(res2['e'], res2['low'], 'k--', label='Low 95%')\n",
    "plt.plot(res2['e'], res2['up'], 'k--', label='Up 95%')\n",
    "plt.plot(res2['e'], best_1sde.values[0]*np.ones(res2.shape[0]), 'r', label='best_1sde')\n",
    "\n",
    "plt.legend(loc=4)\n",
    "ax.set_xlabel('log10 of C')\n",
    "ax.set_ylabel('Mean Val AUC')\n",
    "plt.title('X-validated AUC by C')\n",
    "\n",
    "e_max = res2['e'][(res2['mu']==best)].values[0]\n",
    "e_1sde = res2['e'][(res2['mu']>=best_1sde.values[0])].values.min()\n"
   ]
  },
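  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>As a quick optional check (a small addition, not essential to the workflow), we can print the two candidate settings computed above: the exponent that maximizes the cross-validated AUC and the smallest exponent within one standard error of that maximum.\n",
    "</p>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#Optional check: report the two candidate regularization settings chosen above.\n",
    "#e_max maximizes the cross-validated AUC; e_1sde is the smallest exponent within\n",
    "#one standard error of that maximum (the more conservative choice).\n",
    "print('C at max CV AUC      : 10^{}'.format(e_max))\n",
    "print('C under the 1-SE rule: 10^{}'.format(e_1sde))"
   ]
  },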
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>So now we have 2 ways to select our regularization:\n",
    "<ul>\n",
    "    <li>Choose C with $max(AUC_{xval})$</li>\n",
    "    <li>Choose min(C) where $AUC_{xval}^C>max(AUC_{xval})-stderr(max(AUC_{xval})$</li>\n",
    "</ul><br>\n",
    "This latter decision criteria is more conservative and accounts for the fact that we have variance in our cross-validated estimates of AUC. We argue that any $C$ where $AUC_{xval}^C>max(AUC_{xval})-stderr(max(AUC_{xval})$ is statistically equivalent to the max. Therefore we take the most conservative, least complex model option.<br><br>\n",
    "\n",
    "Now that we have selected a model, let's retrain on the full training set and evaluate on the test. We'll bootstrap the testing estimation so we can get a sense of the variance.\n",
    "</p>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def testAUCBoot(test, nruns, model, lab):\n",
    "    '''\n",
    "    Samples with replacement, runs multiple eval attempts\n",
    "    returns all bootstrapped results\n",
    "    '''\n",
    "    auc_res = []; oops = 0\n",
    "    for i in range(nruns):\n",
    "        test_samp = test.iloc[np.random.randint(0, len(test), size=len(test))]\n",
    "        try:\n",
    "            auc_res.append(roc_auc_score(test_samp[lab], model.predict_proba(test_samp.drop(lab,1))[:,1]))\n",
    "        except:\n",
    "            oops += 1\n",
    "    return auc_res"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lab = 'y_buy'\n",
    "logreg_max = linear_model.LogisticRegression(C = 10.0**(e_max))\n",
    "logreg_max.fit(train.drop(lab,1), train[lab])\n",
    "auc_max = testAUCBoot(test, 100, logreg_max, lab)\n",
    "\n",
    "logreg_1sde = linear_model.LogisticRegression(C = 10.0**(e_1sde))\n",
    "logreg_1sde.fit(train.drop(lab,1), train[lab])\n",
    "auc_1sde = testAUCBoot(test, 1000, logreg_1sde, lab)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>Now let's look at the distribution of AUC across the bootstrapped samples. Even though we can't use the test data for model selection, we can at least look at the test results for models built with the 2 selection criteria discussed above.\n",
    "\n",
    "</p>\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "r = [min(min(auc_max), min(auc_1sde)), max(max(auc_max), max(auc_1sde))]\n",
    "b = 15\n",
    "fig = plt.figure()\n",
    "frame = plt.gca()\n",
    "ax = fig.add_subplot(111)\n",
    "h1 = plt.hist(auc_max, range=r, bins=b, normed=True, histtype='step', stacked=True,\n",
    "              label='max({})'.format(np.round(np.mean(auc_max),3)))\n",
    "\n",
    "h2 = plt.hist(auc_1sde,range=r, bins=b, normed=True, histtype='step', stacked=True,\n",
    "              label='1sde({})'.format(np.round(np.mean(auc_1sde),3)))\n",
    "plt.legend()\n",
    "\n",
    "ax.set_xlabel('bootstrapped Test AUC')\n",
    "ax.set_ylabel('Pct Runs')\n",
    "plt.title('Distribution of BootStrap Test AUC')\n"
   ]
  },
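  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>One common way to summarize these bootstrap distributions (a short sketch, not part of the original flow) is a percentile interval: take the 2.5th and 97.5th percentiles of the bootstrapped AUCs as an approximate 95% interval for the test AUC of each model.\n",
    "</p>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#Sketch: 95% percentile intervals from the bootstrap distributions above.\n",
    "for name, aucs_b in [('max', auc_max), ('1sde', auc_1sde)]:\n",
    "    lo_b, hi_b = np.percentile(aucs_b, [2.5, 97.5])\n",
    "    print('{:5s}: mean AUC = {:.3f}, 95% bootstrap interval = [{:.3f}, {:.3f}]'.format(\n",
    "        name, np.mean(aucs_b), lo_b, hi_b))"
   ]
  },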
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}