{ "cells": [ { "cell_type": "markdown", "metadata": { "toc": true }, "source": [ "

Table of Contents

\n", "
" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n" ], "text/plain": [ "" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# code for loading the format for the notebook\n", "import os\n", "\n", "# path : store the current path to convert back to it later\n", "path = os.getcwd()\n", "os.chdir(os.path.join('..', '..', 'notebook_format'))\n", "from formats import load_style\n", "load_style(plot_style = False)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Ethen 2017-10-24 14:51:01 \n", "\n", "CPython 3.5.2\n", "IPython 6.2.1\n", "\n", "numpy 1.13.3\n", "pandas 0.20.3\n", "sklearn 0.19.1\n" ] } ], "source": [ "os.chdir(path)\n", "\n", "# 1. magic for inline plot\n", "# 2. magic to print version\n", "# 3. magic so that the notebook will reload external python modules\n", "%matplotlib inline\n", "%load_ext watermark\n", "%load_ext autoreload \n", "%autoreload 2\n", "\n", "import numpy as np\n", "import pandas as pd\n", "from sklearn.naive_bayes import MultinomialNB\n", "from sklearn.preprocessing import LabelEncoder\n", "from sklearn.feature_extraction.text import CountVectorizer\n", "\n", "%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Naive Bayes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Naive Bayes** classifiers is based on Bayes’ theorem, and the adjective naive comes from the assumption that the features in a dataset are mutually independent. In practice, the independence assumption is often violated, but **Naive Bayes** still tend to perform very well in the fields of text/document classification. Common applications includes spam filtering (categorized a text message as spam or not-spam) and sentiment analysis (categorized a text message as positive or negative review). More importantly, the simplicity of the method means that it takes order of magnitude less time to train when compared to more complexed models like support vector machines." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Text/Document Representations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Text classifiers often don't use any kind of deep representation about language: often times a document is represented as a bag of words. (A bag is like a set that allows repeating elements.) This is an extremely simple representation as it throws away the word order and only keeps which words are included in the document and how many times each word occurs.\n", " \n", "We shall look at two probabilistic models of documents, both of which represent documents as a bag of words, using the **Naive Bayes** assumption. Both models represent documents using feature vectors\n", "whose components correspond to word types. If we have a document containing $|V|$ distinct vocabularies,\n", "then the feature vector dimension $d=|V|$.\n", "\n", "- **Bernoulli document model:** a document is represented by a feature vector with binary elements taking value 1 if the corresponding word is present in the document and 0 if the word is not present.\n", "- **Multinomial document model:** a document is represented by a feature vector with integer elements whose value is the frequency of that word in the document.\n", "\n", "Example: Consider the vocabulary V = {blue,red, dog, cat, biscuit, apple}. In this case |V| = d = 6. 
Now consider the (short) document \"the blue dog ate a blue biscuit\". If $d^B$ is the **Bernoulli** feature vector for this document, and $d^M$ is the **Multinomial** feature vector, then we would have:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "bernoulli [1, 0, 1, 0, 1, 0]\n", "multinomial [2, 0, 1, 0, 1, 0]\n" ] } ], "source": [ "vocab = ['blue', 'red', 'dog', 'cat', 'biscuit', 'apple']\n", "doc = \"the blue dog ate a blue biscuit\"\n", "\n", "# note that words that don't appear in the vocabulary are simply discarded\n", "bernoulli = [1 if v in doc else 0 for v in vocab]\n", "multinomial = [doc.count(v) for v in vocab]\n", "print('bernoulli', bernoulli)\n", "print('multinomial', multinomial)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bernoulli Model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Consider a corpus of documents (training data) whose class is given by $C = 1, 2, ..., K$. Using **Naive Bayes** (whether it's the Bernoulli model or the multinomial model, which we'll see later), we classify a document $D$ as the class with the highest posterior probability $argmax_{ k = 1, 2, ..., K} \\, p(C = k|D)$, which can be re-expressed using Bayes’ Theorem:\n", "\n", "$$p(C = k|D) = \\frac{ p(C = k) \\, p(D|C = k) }{p(D)} \\propto p(C = k) \\, p(D|C = k)$$\n", "\n", "Where:\n", "\n", "- $\\propto$ means is proportional to.\n", "- $p(C = k)$ represents class $k$'s **prior** probability.\n", "- $p(D|C = k)$ is the **likelihood** of the document given class $k$.\n", "- $p(D)$ is the **normalizing factor**, which we don't have to compute since it does not depend on the class $C$, i.e. this factor will be the same across all classes $C$, thus the numerator alone is enough to determine which $p(C = k|D)$ is the largest." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's start with $p(D|C)$. The spirit of **Naive Bayes** is that it assumes each of the features it uses is conditionally independent of the others given some class. More formally, if we wish to calculate the probability of observing features $X_1$ through $X_d$ given some class $C$, we can do so with the following formula:\n", "\n", "$$p(x_{1},x_{2},...,x_{d} \\mid C) = \\prod_{i=1}^{d}p(x_{i} \\mid C)$$ \n", "\n", "Suppose we have a vocabulary (features) $V$ containing a set of $|V|$ words and the $t_{th}$ dimension of a document vector corresponds to word $w_t$ in the vocabulary. Following the **Naive Bayes** assumption that the probability of each word occurring in the document is independent of the occurrences of the other words, we can then re-write the $i_{th}$ document's likelihood $p(D_i \\mid C)$ as:\n", "\n", "$$p(D_i \\mid C) = \\prod_{t=1}^{d} \\left[ b_{it} \\, p(w_t \\mid C) + (1 - b_{it})(1 - p(w_t \\mid C)) \\right]$$\n", "\n", "Where:\n", "\n", "- $p(w_t \\mid C)$ is the probability of word $w_t$ occurring in a document of class $C$.\n", "- $1 - p(w_t \\mid C)$ is the probability of $w_t$ not occurring in a document of class $C$.\n", "- $b_{it}$ is either 0 or 1, representing the absence or presence of word $w_t$ in the $i_{th}$ document.\n", "\n", "This product goes over all words in the vocabulary. 
If word $w_t$ is present, then $b_{it} = 1$ and the associated probability is $p(w_t \\mid C)$; if word $w_t$ is not present, then $b_{it} = 0$ and the associated probability becomes $1 - p(w_t \\mid C)$.\n", "\n", "\n", "As for the word likelihood $p(w_t \\mid C)$, we can learn (estimate) these parameters from a training set of documents labelled with class $C=k$:\n", "\n", "$$p(w_t \\mid C = k) = \\frac{n_k(w_t)}{N_k}$$\n", "\n", "Where:\n", "\n", "- $n_k(w_t)$ is the number of class $C=k$ documents in which $w_t$ is observed.\n", "- $N_k$ is the number of documents that belong to class $k$.\n", "\n", "Lastly, calculating $p(C)$ is relatively simple: if there are $N$ documents in total in the training set, then the prior probability of class $C=k$ may be estimated as the relative frequency of documents of class $C=k$:\n", "\n", "$$p(C = k) = \\frac{N_k}{N}$$\n", "\n", "Where $N$ is the total number of documents in the training set." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bernoulli Model Implementation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Consider a set of documents, each of which belongs either to Class 1 or to Class 0. Given\n", "a training set of 11 documents, we would like to train a Naive Bayes classifier, using the Bernoulli\n", "document model, to classify unlabelled documents as Class 1 or 0. We define a vocabulary of eight words.\n", "\n", "Thus the training data $X$ is presented below as an $11 \\times 8$ matrix, in which each row represents an 8-dimensional document vector, and $y$ contains the class of each document. We would then like to classify the two test documents." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "training data:\n", "[[1 0 0 0 1 1 1 1]\n", " [0 0 1 0 1 1 0 0]\n", " [0 1 0 1 0 1 1 0]\n", " [1 0 0 1 0 1 0 1]\n", " [1 0 0 0 1 0 1 1]\n", " [0 0 1 1 0 0 1 1]\n", " [0 1 1 0 0 0 1 0]\n", " [1 1 0 1 0 0 1 1]\n", " [0 1 1 0 0 1 0 0]\n", " [0 0 0 0 0 0 0 0]\n", " [0 0 1 0 1 0 1 0]]\n", "\n", "[1 1 1 1 1 1 0 0 0 0 0]\n", "\n", "testing data:\n", "[[1 0 0 1 1 1 0 1]\n", " [0 1 1 0 1 0 1 0]]\n" ] } ], "source": [ "train = np.genfromtxt('bernoulli.txt', dtype = np.int)\n", "X_train = train[:, :-1]\n", "y_train = train[:, -1] # the last column is the class\n", "print('training data:')\n", "print(X_train)\n", "print()\n", "print(y_train)\n", "print()\n", "print('testing data:')\n", "X_test = np.array([[1, 0, 0, 1, 1, 1, 0, 1], [0, 1, 1, 0, 1, 0, 1, 0]])\n", "print(X_test)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def bernoulli_nb(X_train, y_train, X_test):\n", "    \"\"\"\n", "    Pass in the training data and its labels, and\n", "    predict the testing data's class using bernoulli naive bayes\n", "    \"\"\"\n", "    \n", "    # calculate the prior probability p(C=k)\n", "    N = X_train.shape[0]\n", "    priors = np.bincount(y_train) / N\n", "    \n", "    # obtain the unique class labels (since they may not be 0 and 1)\n", "    class_type = np.unique(y_train)\n", "    class_nums = class_type.shape[0]\n", "    word_likelihood = np.zeros((class_nums, X_train.shape[1]))\n", "    \n", "    # compute the word likelihood p(w_t∣C)\n", "    for index, output in enumerate(class_type):\n", "        subset = X_train[np.equal(y_train, output)]\n", "        word_likelihood[index, :] = np.sum(subset, axis = 0) / subset.shape[0]\n", "    \n", "    # make predictions on the test set\n", "    # note that this code will break if the test set happens to 
\n", " # be a 1d-array, since the first for loop will not be \n", " # looping through each document, but each document's feature instead\n", " predictions = np.zeros(X_test.shape[0], dtype = np.int)\n", " for index1, document in enumerate(X_test):\n", " \n", " # stores the p(C|D) for each class\n", " posteriors = np.zeros(class_nums)\n", " \n", " # compute p(C = k|D) for the document for all class\n", " # and return the predicted class with the maximum probability\n", " for c in range(class_nums):\n", " \n", " # start with p(C = k)\n", " posterior = priors[c]\n", " word_likelihood_subset = word_likelihood[c, :]\n", " \n", " # loop through features to compute p(D∣C = k)\n", " for index2, feature in enumerate(document):\n", " if feature:\n", " prob = word_likelihood_subset[index2]\n", " else:\n", " prob = 1 - word_likelihood_subset[index2]\n", " \n", " posterior *= prob\n", "\n", " posteriors[c] = posterior\n", " \n", " # compute the maximum p(C|D)\n", " predicted_class = class_type[np.argmax(posteriors)]\n", " predictions[index1] = predicted_class\n", " \n", " return predictions" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([1, 0])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "predictions = bernoulli_nb(X_train, y_train, X_test)\n", "predictions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multinomial Distribution" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before discussing the multinomial document model, it is important to be familiar with the multinomial\n", "distribution. The multinomial distribution can be used to compute the probabilities in situations in which there are more than two possible outcomes. For example, suppose that two chess players had played numerous games and it was determined that the probability that Player A would win is 0.40, the probability that Player B would win is 0.35, and the probability that the game would end in a draw is 0.25. The multinomial distribution can be used to answer questions such as: \"If these two chess players played 12 games, what is the probability that Player A would win 7 games, Player B would win 2 games, and the remaining 3 games would be drawn?\" The following generalized formula gives the probability of obtaining a specific set of outcomes when there are $n$ possible outcomes for each event:\n", "\n", "$$P = \\frac{n!}{n_1!n_2!...n_d!}p_1^{n_1}p_2^{n_2}...p_d^{n_d}$$\n", "\n", "- n is the total number of events.\n", "- $n_1, ..., n_d$ is the number of times outcome 1 to d occurred.\n", "- $p_1, ..., p_d$ is the probability of outcome 1 to d occurred.\n", "\n", "Or more compactly written as:\n", "\n", "$$P = \\frac{n!}{\\prod_{t=1}^{d}n_t!}\\prod_{t=1}^{d}p_t^{n_t}$$\n", "\n", "If all of that is still unclear, refer to the following link for a worked example. [Youtube: Introduction to the Multinomial Distribution](https://www.youtube.com/watch?v=syVW7DgvUaY)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multinomial Model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall that for **Naive Bayes** $argmax_{k = 1, 2, ..., K} \\, p(D|C = k)p(C)$ is the objective function that we're trying to solve for. 
In the multinomial case, calculating $p(D|C = k)$ for the $i_{th}$ document becomes:\n", "\n", "$$p(D_i|C = k) = \\frac{x_i!}{\\prod_{t=1}^{d}x_{it}!}\\prod_{t=1}^{d}p(w_t|C)^{x_{it}} \\propto \\prod_{t=1}^{d}p(w_t|C)^{x_{it}}$$\n", "\n", "Where:\n", "\n", "- $x_{it}$ is the number of times word $w_t$ occurs in document $D_i$.\n", "- $x_i = \\sum_t x_{it}$ is the total number of words in document $D_i$.\n", "- Oftentimes, we don't need the normalization term $\\frac{x_i!}{\\prod_{t=1}^{d}x_{it}!}$, because it does not depend on the class, $C$.\n", "- $p(w_t \\mid C)$ is the probability of word $w_t$ occurring in a document of class $C$. This time it is estimated using the word frequency information from the documents' feature vectors. More specifically, this is: $\\text{Number of word } w_t \\text{ in class } C \\big/ \\text{Total number of words in class } C$.\n", "- $\\prod_{t=1}^{d}p(w_t|C)^{x_{it}}$ can be interpreted as the product of word likelihoods for each word in the document." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Laplace Smoothing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One drawback of the equation for the multinomial model is that the likelihood $p(D_i|C = k)$ involves taking a product of probabilities $p(w_t \\mid C)$. Hence if any one of the terms of the product is zero, then the whole product becomes zero. This means that the probability of the document belonging to the class in question is zero (impossible). Intuitively, just because a word does not occur in a document class in the training data does not mean that it cannot occur in any document of that class. \n", "\n", "Therefore, one way to alleviate the problem is **Laplace Smoothing** or **add one smoothing**, where we add a count of one to each word type and increase the denominator by $|V|$, the size of the vocabulary (the number of features), to ensure that the probabilities are still normalized. More formally, our $p(w_t \\mid C)$ becomes:\n", "\n", "$$p(w_t \\mid C) = \\frac{( \\text{Number of word } w_t \\text{ in class } C + 1 )}{( \\text{Total number of words in class } C) + |V|} $$\n", "\n", "In sum, by performing **Laplace Smoothing**, we ensure that our $p(w_t \\mid C)$ will never equal 0." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Log-Transformation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our original formula for classifying a document into a class using Multinomial Naive Bayes was (dropping the normalizing factor $p(D)$):\n", "\n", "$$p(C|D) \\propto p(C)\\prod_{t=1}^{d}p(w_t|C)^{x_{it}}$$\n", "\n", "In practice, when we have a lot of unique words, we create very small values by computing the product of many $p(w_t \\mid C)$ terms. On a computer, the values may become so small that they \"underflow\" (fall below the smallest magnitude the floating point representation can store, and thus get rounded to zero). To prevent this, we can simply throw a logarithm around everything:\n", "\n", "$$\\log \\left( p(C)\\prod_{t=1}^{d}p(w_t|C)^{x_{it}}\\right)$$\n", "\n", "Using the property that $\\log(ab) = \\log(a) + \\log(b)$, and since the logarithm is monotonically increasing (it does not change which class attains the maximum), the quantity we maximize becomes:\n", "\n", "$$\\log \\, p(C) + \\sum_{t=1}^d x_{it} \\log \\, p(w_t|C)$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multinomial Model Implementation" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
messagelabel
0Chinese Beijing Chinesec
1Chinese Chinese Shanghaic
2Chinese Macaoc
3Tokyo Japan Chinesej
\n", "
" ], "text/plain": [ " message label\n", "0 Chinese Beijing Chinese c\n", "1 Chinese Chinese Shanghai c\n", "2 Chinese Macao c\n", "3 Tokyo Japan Chinese j" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "text = pd.read_table('multinomial.txt', sep = ',', header = None, names = ['message', 'label'])\n", "X_train = text['message']\n", "y_train = text['label']\n", "text.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Given the four documents and its corresponding class (label), which class does the document with the message ```Chinese Chinese Chinese Tokyo Japan``` more likely belong to." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "feature name: ['beijing', 'chinese', 'japan', 'macao', 'shanghai', 'tokyo']\n", "training:\n", "[[1 2 0 0 0 0]\n", " [0 2 0 0 1 0]\n", " [0 1 0 1 0 0]\n", " [0 1 1 0 0 1]]\n", "\n", "testing:\n", "[[0 3 1 0 0 1]]\n" ] } ], "source": [ "vect = CountVectorizer()\n", "X_train_dtm = vect.fit_transform(X_train)\n", "X_test_dtm = vect.transform(['Chinese Chinese Chinese Tokyo Japan'])\n", "\n", "print('feature name: ', vect.get_feature_names())\n", "\n", "# convert to dense array for better visualize representation\n", "print('training:')\n", "print(X_train_dtm.toarray())\n", "print('\\ntesting:')\n", "print(X_test_dtm.toarray())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The implementation in the following code chunk is a very crude implementation while the one two code chunks below is a more efficient and robust implementation that leverages sparse matrix and matrix multiplication." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def mutinomial_nb(X_train_dtm, y_train, X_test_dtm):\n", " \"\"\"\n", " Pass in the training data, it's label and \n", " predict the testing data's class using mutinomial naive bayes\n", " \"\"\"\n", " \n", " # compute the priors\n", " # convert the character class to numbers (easier to work with)\n", " le = LabelEncoder()\n", " y = le.fit_transform(y_train)\n", " priors = np.bincount(y) / y.shape[0]\n", "\n", " class_type = np.unique(y)\n", " class_nums = class_type.shape[0]\n", " feature_nums = X_train_dtm.shape[1]\n", " likelihood = np.zeros((class_nums, feature_nums))\n", "\n", " # compute the word likelihood p(w_t∣C)\n", " # apply lapace smoothing\n", " for index, output in enumerate(class_type):\n", " subset = X_train_dtm[np.equal(y, output)]\n", " likelihood[index, :] = (np.sum(subset, axis = 0) + 1) / (np.sum(subset) + feature_nums)\n", " \n", " # make prediction on test set\n", " predictions = np.zeros(X_test_dtm.shape[0], dtype = np.int)\n", " for index1, document in enumerate(X_test_dtm):\n", " \n", " # stores the p(C|D) for each class\n", " posteriors = np.zeros(class_nums)\n", "\n", " # compute p(C = k|D) for the document for all class\n", " # and return the predicted class with the maximum probability\n", " for c in range(class_nums):\n", "\n", " # start with p(C = k)\n", " posterior = np.log(priors[c])\n", " likelihood_subset = likelihood[c, :]\n", "\n", " # compute p(D∣C = k)\n", " prob = document * np.log(likelihood_subset)\n", " posterior += np.sum(prob)\n", " posteriors[c] = posterior\n", "\n", " # compute the maximum p(C|D)\n", " prediction = np.argmax(posteriors)\n", " predictions[index1] = prediction\n", " \n", " # convert the prediction to the original class label\n", " 
predicted_class = le.inverse_transform(predictions)\n", " return predicted_class" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import numpy as np\n", "from scipy.misc import logsumexp\n", "from sklearn.preprocessing import LabelBinarizer\n", "\n", "\n", "class NaiveBayes:\n", " \"\"\"\n", " Multinomial Naive Bayes classifier [1]_.\n", "\n", " Parameters\n", " ----------\n", " smooth : float, default 1.0\n", " Additive Laplace smoothing.\n", "\n", " Attributes\n", " ----------\n", " classes_ : 1d ndarray, shape [n_class]\n", " Holds the original label for each class.\n", "\n", " class_log_prior_ : 1d ndarray, shape [n_class]\n", " Empirical log probability for each class.\n", "\n", " feature_log_prob_ : 1d ndarray, shape [n_classes, n_features]\n", " Smootheed empirical log probability of features given a class,\n", " ``P(feature | class)``.\n", "\n", " class_count_ : 1d ndarray, shape [n_classes]\n", " Number of samples encountered for each class during fitting.\n", "\n", " feature_count_ : 2d ndarray, shape [n_classes, n_features]\n", " Number of samples encountered for each class and feature\n", " during fitting.\n", "\n", " References\n", " ----------\n", " .. [1] `Scikit-learn MultinomialNB\n", " `_\n", " \"\"\"\n", "\n", " def __init__(self, smooth = 1.0):\n", " self.smooth = smooth\n", "\n", " def fit(self, X, y):\n", " \"\"\"\n", " Fit the model according to the training data X and\n", " training label y.\n", "\n", " Parameters\n", " ----------\n", " X : scipy sparse csr_matrix, shape [n_samples, n_features]\n", " Training data.\n", "\n", " y : 1d ndarray, shape [n_samples]\n", " Label values.\n", "\n", " Returns\n", " -------\n", " self\n", " \"\"\"\n", "\n", " # one hot encode the label column and for binary\n", " # label, also expand it to two columns since it\n", " # only returns a single column vector\n", " labelbin = LabelBinarizer()\n", " Y = labelbin.fit_transform(y).astype(np.float64)\n", " if Y.shape[1] == 1:\n", " Y = np.concatenate((1 - Y, Y), axis = 1)\n", "\n", " self.classes_ = labelbin.classes_\n", "\n", " # for sparse matrix, the \"*\" operator performs matrix multiplication\n", " # https://stackoverflow.com/questions/36782588/dot-product-sparse-matrices\n", " self.feature_count_ = Y.T * X\n", " self.class_count_ = Y.sum(axis = 0)\n", "\n", " # compute feature log probability:\n", " # number of a particular word in a particular class / total number of words in that class\n", " smoothed_count = self.feature_count_ + self.smooth\n", " smoothed_class = np.sum(smoothed_count, axis = 1)\n", " self.feature_log_prob_ = (np.log(smoothed_count) -\n", " np.log(smoothed_class.reshape(-1, 1)))\n", "\n", " # compute class log prior:\n", " # number of observation in a particular class / total number of observation\n", " self.class_log_prior_ = (np.log(self.class_count_) -\n", " np.log(self.class_count_.sum()))\n", " return self\n", "\n", " def predict(self, X):\n", " \"\"\"\n", " Perform classification for input data X.\n", "\n", " Parameters\n", " ----------\n", " X : 2d ndarray, shape [n_samples, n_features]\n", " Input data\n", "\n", " Returns\n", " -------\n", " pred_class : 1d ndarray, shape [n_samples]\n", " Predicted label for X\n", " \"\"\"\n", " joint_prob = self._joint_log_likelihood(X)\n", " pred_class = self.classes_[np.argmax(joint_prob, axis = 1)]\n", " return pred_class\n", "\n", " def predict_proba(self, X):\n", " \"\"\"\n", " Return probability estimates for input data X.\n", "\n", " 
Parameters\n", " ----------\n", " X : 2d ndarray, shape [n_samples, n_features]\n", " Input data\n", "\n", " Returns\n", " -------\n", " pred_proba : 2d ndarray, shape [n_samples, n_classes]\n", " Returns the probability of the samples for each class.\n", " The columns correspond to the classes in sorted\n", " order, as they appear in the attribute `classes_`.\n", " \"\"\"\n", " joint_prob = self._joint_log_likelihood(X)\n", "\n", " # a crude implementation would be to take a exponent\n", " # and perform a normalization\n", " # temp = np.exp(joint_prob)\n", " # temp / temp.sum(axis = 1, keepdims = True)\n", " # but this would be numerically instable\n", " # https://hips.seas.harvard.edu/blog/2013/01/09/computing-log-sum-exp/\n", " joint_prob_norm = logsumexp(joint_prob, axis = 1, keepdims = True)\n", " pred_proba = np.exp(joint_prob - joint_prob_norm)\n", " return pred_proba\n", "\n", " def _joint_log_likelihood(self, X):\n", " \"\"\"\n", " Compute the unnormalized posterior log probability of X, which is\n", " the features' joint log probability (feature log probability times\n", " the number of times that word appeared in that document) times the\n", " class prior (since we're working in log space, it becomes an addition)\n", " \"\"\"\n", " joint_prob = X * self.feature_log_prob_.T + self.class_log_prior_\n", " return joint_prob" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "crude implementation ['c']\n", "efficient implementation ['c']\n", "library implementation ['c']\n" ] } ], "source": [ "pred = mutinomial_nb(\n", " X_train_dtm.toarray(), y_train, X_test_dtm.toarray())\n", "print('crude implementation', pred)\n", "\n", "nb = NaiveBayes()\n", "nb.fit(X_train_dtm, y_train)\n", "pred = nb.predict(X_test_dtm)\n", "print('efficient implementation', pred)\n", "\n", "nb = MultinomialNB()\n", "nb.fit(X_train_dtm, y_train)\n", "pred = nb.predict(X_test_dtm)\n", "print('library implementation', pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pros and Cons of Naive Bayes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll end this notebook with this algorithm's pros and cons.\n", "\n", "**Pros:**\n", "\n", "- Extremely fast to train/apply and is reliably a high bias/low variance classifier (less likely to overfit).\n", "- Handles extraneous features well, meaning it's robust to irrelevant features.\n", "- Famously good at text classification. e.g. spam filtering. Or domains where you have many equally important features, which tends to be a problem for other kind of classifiers, in particular tree based algorithms.\n", "- No parameter tuning is required\n", "\n", "**Cons:**\n", "\n", "- Conditional independence is not always a valid assumption, thus can be outperformed by other methods.\n", "- Predicted probabilities are not well-calibrated." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Reference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- [Tutorial: Multinomial Distribution](http://onlinestatbook.com/2/probability/multinomial.html)\n", "- [Notes: Text Classification using Naive Bayes](http://www.inf.ed.ac.uk/teaching/courses/inf2b/learnnotes/inf2b-learn-note07-2up.pdf)\n", "- [Blog: naive-bayes for dummies a simple explanation](http://blog.aylien.com/post/120703930533/naive-bayes-for-dummies-a-simple-explanation)\n", "- [Youtube: Multinomial Naive Bayes, a worked example](https://www.youtube.com/watch?v=km2LoOpdB3A)\n", "- [Youtube: Introduction to the Multinomial Distribution](https://www.youtube.com/watch?v=syVW7DgvUaY)" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" }, "toc": { "nav_menu": { "height": "322px", "width": "252px" }, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": {}, "toc_section_display": "block", "toc_window_display": true }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 1 }