{
 "metadata": {
  "name": "",
  "signature": "sha256:d17bd0d5a4ec0f9f6580a9537540e68fc12196323d9079f3a3394f09bd8c6696"
 },
 "nbformat": 3,
 "nbformat_minor": 0,
 "worksheets": [
  {
   "cells": [
    {
     "cell_type": "heading",
     "level": 1,
     "metadata": {},
     "source": [
      "HW4: kNN, Naive Bayes and Text Feature Selection"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "For this HW, we'll explore kNN and Naive Bayes further and look at different feature selection strategies. First of all, download the dataset for this HW from <a href=\"https://raw.githubusercontent.com/biddata/datascience/master/F15/hw4/data.tar.gz\">here</a> and unpack into a hw4 directory with \n",
      "<pre>\n",
      "tar xvf hw4data.tar.gz\n",
      "</pre>\n",
      "The dataset is from the RCV1 v2 corpus which is a collection of news articles and category labels. There are 103 categories, and each document can lie in one or more categories. \n",
      "\n",
      "The sparse train and test matrices are encoded as nnz x 3 integer matrices, where nnz is the total number of words in all docs. Each row of these matrices contains (doc, word, count) triples. We will store those in sparse matrices whose dimensions are ndocs x nwords\n"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "import numpy as np\n",
      "import scipy.sparse as sp\n",
      "import matplotlib.pyplot as plt\n",
      "%matplotlib inline"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Now read the data files. "
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "iwords = np.loadtxt(\"data/words.imat.txt\")          # training data matrix in nnz x 3 form - rows are (doc, word, count) triples\n",
      "cats = np.loadtxt(\"data/cats.imat.txt\")             # training labels in an ndocs x ncats matrix\n",
      "tiwords = np.loadtxt(\"data/testwords.imat.txt\")     # test data matrix in nnz x 3 form\n",
      "tcats = np.loadtxt(\"data/testcats.imat.txt\")        # testing labels in an ndocs x ncats matrix"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "iwords and tiwords encode the sparse train and test matrices. Each row of these matrices contains (doc, word, count) triples. We will store those in sparse matrices whose dimensions are ndocs x nwords. The exact matrix type is CSR which stands for compressed sparse rows. Look up its description here: https://en.wikipedia.org/wiki/Sparse_matrix"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "fwords = iwords[:,2]\n",
      "train = sp.csr_matrix((fwords, (iwords[:,1], iwords[:,0])))\n",
      "ndocs = train.shape[0]\n",
      "nwords = train.shape[1]\n",
      "\n",
      "test = sp.csr_matrix((tiwords[:,2], (tiwords[:,1], tiwords[:,0])),shape=(4000,nwords))  # need to match the number of cols (words)\n",
      "ntdocs = test.shape[0]"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
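     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "To see how the (data, (rows, cols)) constructor behaves, here is a tiny example with made-up triples (not part of the assignment data): entry k of data lands at position (rows[k], cols[k])."
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
       "# tiny illustration with made-up triples: data[k] goes to (rows[k], cols[k])\n",
       "demo = sp.csr_matrix(([1, 2, 3], ([0, 0, 1], [0, 2, 1])), shape=(2, 3))\n",
       "demo.todense()"
      ],
      "language": "python",
      "metadata": {},
      "outputs": []
     },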
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Although the dataset has 103 category labels, we'll focus on training a single category. We will use category 6, which is fairly evenly balanced between positives and negatives. "
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "cat6 = cats[:,6]\n",
      "tcat6 = tcats[:,6]"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "This time we'll train Scikit-Learn's builtin kNN classifier. "
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "from sklearn.neighbors import KNeighborsClassifier\n",
      "knnc = KNeighborsClassifier(n_neighbors=3)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "And then \"fit\" it to the data. Notice how fast this is! Of course, training kNN is basically a no-op, and predicting is where all the work lies:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "knnc.fit(train, cat6)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Now lets try predicting. This will actually take some time, but hopefully only a few tens of seconds..."
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "preds = knnc.predict(test);preds"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Now we can compute the accuracy on the actual test set labels. "
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "np.mean(tcat6 == preds)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Not bad, but kNN is very sensitive to the distance function as we argued in lab. Lets try first normalizing each row of \"train\" and \"test\" by the L2-norm of that row, i.e. we will divide by the sqrt of the sum of the squared word counts in the row. Comparing the normalized documents should do much better. \n",
      "\n",
      "> TODO: implement row normalization below. Compute the row norms first. Your implementation strategy is important for this to run in reasonable time. **Dont** try to construct an ndocs x ndocs dense matrix. You probably also dont want to iterate over all the docs or all the words. The fastest solution is to pre-multiply by a sparse diagonal matrix which contains the scale factors. No loops are necessary. Normalization should take a fraction of a second. "
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "trainnorm =                 # L2 norms for the rows of train\n",
      "testnorm =                  # L2 norms for the rows of test\n",
      "trainfact =                 # train correction factor (inverse of trainnorm)\n",
      "testfact =                  # test correction factor (inverse of testnorm)\n",
      "trainnmat =                 # scaled (normalized) training mat\n",
      "testnmat =                  # scaled (normalized) test mat\n",
      "testnmat.shape              # should be (10000,44718)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
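     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "If you are stuck, here is the diagonal-scaling trick on a tiny made-up matrix (a minimal sketch of the technique, not the graded solution): pre-multiplying by a sparse diagonal matrix whose entries are the inverse row norms rescales every row at once, with no loops."
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
       "# sketch on a toy matrix, using only numpy/scipy as imported above\n",
       "toy = sp.csr_matrix(np.array([[3., 4.], [0., 2.]]))\n",
       "toynorms = np.sqrt(np.asarray(toy.multiply(toy).sum(axis=1))).ravel()   # L2 norm of each row\n",
       "toyscale = sp.diags(1.0 / np.maximum(toynorms, 1e-12), 0)               # sparse diagonal of 1/norm; guard against empty rows\n",
       "(toyscale * toy).todense()                                              # each row now has unit L2 norm"
      ],
      "language": "python",
      "metadata": {},
      "outputs": []
     },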
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Now train and evaluate a knn model using the normalized datasets:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "knnc.fit(trainnmat, cat6)\n",
      "preds = knnc.predict(testnmat);preds\n",
      "np.mean(tcat6 == preds)     "
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "> TODO: What was the accuracy of your kNN classifier on the normalized data?"
     ]
    },
    {
     "cell_type": "heading",
     "level": 2,
     "metadata": {},
     "source": [
      "Naive Bayes Classification"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Lets create a Multinomial Naive Bayes classifier. The multinomial distribution models each word in a document as being drawn from a category-specific probability distribution over the vocabulary. "
     ]
    },
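     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "Concretely, if document $d$ contains word $w$ with count $n_w$, multinomial NB scores each category $c$ as\n",
       "\n",
       "$$P(c \\mid d) \\propto P(c) \\prod_{w} P(w \\mid c)^{n_w}$$\n",
       "\n",
       "and fitting the model just means estimating $P(w \\mid c)$ from (smoothed) per-category word counts."
      ]
     },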
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "from sklearn.naive_bayes import MultinomialNB\n",
      "mnb = MultinomialNB()"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Now we can fit the NB model, which simply means tabulating the word counts for each label."
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "mnb.fit(train, cat6)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "...and predict the category labels for cat6:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "preds = mnb.predict(test);preds"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "> TODO: Lets check the accuracy below. How did NB do compared to kNN? "
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "np.mean(tcat6 == preds)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "heading",
     "level": 2,
     "metadata": {},
     "source": [
      "Frequency-Based Feature Selection"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "We argued that frequency-based feature selection is useful approach to culling features to reduce the size of a model. In some cases, accuracy improves as well. \n",
      "\n",
      "Lets construct a matrix rcounts which contains the counts of features from the training set. "
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "rcounts = (train>0).sum(0)\n",
      "rcounts"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Next we'll find the indices of non-zero elements of the predicate (rcounts >= count). These are the features that occurred at least count times. ii is the indices of these features.\n",
      "\n",
      "We then extract the columns of the train and test matrices corresponding to those high-count features. The last line tells us how many features we kept."
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "threshold = 1\n",
      "ii0 = np.asarray(np.nonzero(rcounts >= threshold)[1])\n",
      "ii = ii0.reshape(ii0.size,)\n",
      "\n",
      "strain = train[:,ii]\n",
      "stest = test[:,ii]\n",
      "ii.size"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Finally we run training and evaluation on train/test data restricted to those columns to get an accuracy score."
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "mnb.fit(strain, cat6)\n",
      "preds = mnb.predict(stest);preds\n",
      "np.mean(tcat6 == preds)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "> TODO: For threshold counts of 1, 2, 5, 10, 20, tabulate the count, number of features kept, and the accuracy score. "
     ]
    },
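     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "Here is a minimal sketch of the sweep loop (it assumes mnb, rcounts, train, test, cat6 and tcat6 as defined above, and just repeats the selection recipe from the previous cells for each threshold):"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
       "# sweep the frequency thresholds and record (threshold, features kept, accuracy)\n",
       "results = []\n",
       "for thresh in [1, 2, 5, 10, 20]:\n",
       "    jj = np.asarray(np.nonzero(rcounts >= thresh)[1]).ravel()\n",
       "    mnb.fit(train[:, jj], cat6)\n",
       "    results.append((thresh, jj.size, np.mean(tcat6 == mnb.predict(test[:, jj]))))\n",
       "results"
      ],
      "language": "python",
      "metadata": {},
      "outputs": []
     },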
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "> TODO: How did frequency-based feature selection affect accuracy? How much was model size (proportional to words kept) reduced? "
     ]
    },
    {
     "cell_type": "heading",
     "level": 2,
     "metadata": {},
     "source": [
      "Feature Selection Based on Statistical Tests\n"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Lets try a more sophisticated method to select the (word) features. Since we have count data, the Chi-Squared test is appropriate. Remember that Chi-squared can be used on 2x2 contingency tables to determine if there is an interaction. In this case, we test if the label interacts with the feature. "
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "fcounts = np.zeros((4,nwords))           # holds the counts of various combinations of label and feature\n",
      "expected = np.zeros((4,nwords))          # holds the predicted counts of those same combinations"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "mcat6 = cat6.reshape(1,ndocs)\n",
      "fcounts[0,:] = mcat6 * (train > 0)                                   # label and word both = 1\n",
      "fcounts[1,:] = (1-mcat6) * (train > 0)                               # label = 0, word = 1\n",
      "fcounts[2,:] = np.sum(mcat6) * np.ones((1,nwords)) - fcounts[0,:]    # label = 1, word = 0\n",
      "fcounts[3,:] = np.sum(1-mcat6) * np.ones((1,nwords)) - fcounts[1,:]  # label = 0, word = 0\n"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Review the definition of the CHI-squared test here:\n",
      "https://en.wikipedia.org/wiki/Chi-squared_test\n",
      "Under the hull hypothesis that category label doesnt matter, we expect to be able to compute "
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "fcounts[0,:]"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "allf = np.sum(fcounts,axis=0)\n",
      "pword0 = (fcounts[2,:] + fcounts[3,:]) / allf    # prob word = 0\n",
      "pword1 = (fcounts[0,:] + fcounts[1,:]) / allf    # prob word = 1\n",
      "pcat0 = (fcounts[1,:] + fcounts[3,:]) / allf     # prob label = 0\n",
      "pcat1 = (fcounts[0,:] + fcounts[2,:]) / allf     # prob label = 1"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "If label and word occurence are independent, then the expected count should be the product of the total count and the corresponding probabilities:\n",
      "\n",
      "> TODO: Implement the expected counts here:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "expected[0,:] =   # expected count with label = 1, word = 1\n",
      "expected[1,:] =   # expected count with label = 0, word = 1\n",
      "expected[2,:] =   # expected count with label = 1, word = 0\n",
      "expected[3,:] =   # expected count with label = 0, word = 0"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
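     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "As a sanity check for your expected counts, you can compare against scipy's built-in test on a single 2x2 table (the counts below are made up). chi2_contingency returns the statistic, the p-value, the degrees of freedom, and the expected counts, which follow the row-total times column-total over grand-total rule."
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
       "from scipy.stats import chi2_contingency\n",
       "table = np.array([[30., 10.], [20., 40.]])             # made-up 2x2 contingency table\n",
       "chi2, p, dof, exp = chi2_contingency(table, correction=False)\n",
       "chi2, exp                                              # statistic and expected counts"
      ],
      "language": "python",
      "metadata": {},
      "outputs": []
     },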
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "To compute the chi-squared statistic value, we want the Sum of the squared differences between actual and predicted counts, normalized by the expected counts. The following two cells perform that calculation. We have to guard against divide-by-zero so we use a small lower-bound on the denominator."
     ]
    },
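     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "For reference, the statistic for each word $w$, with observed counts $O_{i,w}$ (the rows of fcounts) and expected counts $E_{i,w}$ (the rows of expected), is\n",
       "\n",
       "$$\\chi^2_w = \\sum_{i=0}^{3} \\frac{(O_{i,w} - E_{i,w})^2}{E_{i,w}}$$"
      ]
     },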
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "diff = fcounts - expected\n",
      "safeexpected = np.maximum(expected,1e-7)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "> TODO: Implement the CHI-squared statistic value here"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "chisquaredstat =  # implement the chisquared statistic value here\n",
      "chisquaredstat.shape"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "We can then use the chi-squared statistic to choose the best features. The CHI-squared stastistic is a measure of interaction between a word feature and the label. The higher its value, the better that word should be as a predictor of the category. We set a threshold and take the words whose chi-squared statistic is above this value:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "chisqthreshold = 1.5\n",
      "ii0 = np.asarray(np.nonzero(chisquaredstat > chisqthreshold)[0])\n",
      "ii = ii0.reshape(ii0.size,);ii.shape"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Finally we use the indices of good features selected by the chi-squared filter to train and evaluate the Naive Bayes model."
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "strain = train[:,ii]\n",
      "stest = test[:,ii]\n",
      "mnb.fit(strain, cat6)\n",
      "preds = mnb.predict(stest);preds\n",
      "np.mean(tcat6 == preds)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": []
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "> TODO: Compare the accuracy of CHI-squared feature selection to the frequency-based selection you did earlier. For each frequency threshold (1,2,5,10,20), adjust the chisquared threshold to retain the smallest number of features that is at least as large as the number of features kept by that frequency threshold. Then compute the accuracy in the last cell for that threhold. Finally tabulate both frequency-based and chi-squared accuracies. That is, make a table whose columns are:\n",
      "* freq threshold\n",
      "* number of features retained by this frequency threshold\n",
      "* chisquared threshold \n",
      "* number of features retained by this chisquared threshold (should be >= column 2)\n",
      "* accuracy of frequency thresholded model\n",
      "* accuracy of chisquared thresholded model\n",
      "and whose rows are indexed by the frequency thresholds 1, 2, 5, 10, 20\n"
     ]
    },
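     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "One way to tune the chi-squared threshold (a small hypothetical helper, assuming chisquaredstat from above): count how many features survive a candidate threshold, and adjust it until the count just exceeds the frequency-based count you are matching."
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
       "# hypothetical helper: number of features kept at chi-squared threshold t\n",
       "def nkept(t):\n",
       "    return np.sum(chisquaredstat > t)\n",
       "\n",
       "nkept(1.5)"
      ],
      "language": "python",
      "metadata": {},
      "outputs": []
     },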
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "> TODO: Which method gave better accuracy for a given number of features? "
     ]
    },
    {
     "cell_type": "heading",
     "level": 2,
     "metadata": {},
     "source": [
      "Submit"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Submit your HW4 notebook <a href=\"https://bcourses.berkeley.edu/courses/1377158/assignments/6675866\">here</a>."
     ]
    }
   ],
   "metadata": {}
  }
 ]
}