{ "metadata": { "name": "", "signature": "sha256:705c5562e760fdb129045cf6f0c1c5860012a52c78fc8d1ae65511ab26923071" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "heading", "level": 1, "metadata": {}, "source": [ "Naive Bayes" ] }, { "cell_type": "heading", "level": 4, "metadata": {}, "source": [ "by Chiyuan Zhang" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook illustrates multiclass learning using Naive Bayes in Shogun. A semi-formal introduction to Logistic Regression is provided at the end." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "$$\n", "P\\left( Y=k | X = x \\right) = \\frac{P(X=x|Y=k)P(Y=k)}{P(X=x)}\n", "$$\n", "\n", "The prediction is then made by\n", "\n", "$$\n", "y = \\operatorname*{argmax}_{k\\in\\{1,\\ldots,K\\}}\\; P(Y=k|X=x)\n", "$$\n", "\n", "Since $P(X=x)$ is a constant factor for all $P(Y=k|X=x)$, $k=1,\\ldots,K$, there\n", "is no need to compute it." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In `SHOGUN`, [CGaussianNaiveBayes](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianNaiveBayes.html) implements the Naive Bayes\n", "algorithm. It is prefixed with \"Gaussian\" because the probability model for\n", "$P(X=x|Y=k)$ for each $k$ is taken to be a multi-variate Gaussian distribution.\n", "Furthermore, each dimension of the feature vector $X$ is assumed to be\n", "independent. The *Naive* independence assumption enables us the learn the\n", "model by estimating the parameters for each feature dimension independently,\n", "thus the whole learning algorithm runs very quickly. And this is also the \n", "reason for its name. However, this assumption can be very restrictive. In\n", "this demo, we show a simple 2D example. There are 3 linearly\n", "separable classes. The scattered points are training samples with colors\n", "indicating their labels. The filled area indicate the hypothesis learned by\n", "the [CGaussianNaiveBayes](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianNaiveBayes.html). The training samples are actually\n", "generated from three Gaussian distributions. But since the covariance for those\n", "Gaussian distributions are not diagonal (i.e. there are *rotations*), the GNB\n", "algorithm cannot handle them properly.\n", "\n", "We first init the models for generating samples for this demo:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "%matplotlib inline\n", "import os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\n", "import numpy as np\n", "import pylab as pl\n", "\n", "np.random.seed(0)\n", "\n", "n_train = 300\n", "\n", "models = [{'mu': [8, 0], 'sigma':\n", " np.array([[np.cos(-np.pi/4),-np.sin(-np.pi/4)],\n", " [np.sin(-np.pi/4), np.cos(-np.pi/4)]]).dot(np.diag([1,4]))},\n", " {'mu': [0, 0], 'sigma':\n", " np.array([[np.cos(-np.pi/4),-np.sin(-np.pi/4)],\n", " [np.sin(-np.pi/4), np.cos(-np.pi/4)]]).dot(np.diag([1,4]))},\n", " {'mu': [-8,0], 'sigma':\n", " np.array([[np.cos(-np.pi/4),-np.sin(-np.pi/4)],\n", " [np.sin(-np.pi/4), np.cos(-np.pi/4)]]).dot(np.diag([1,4]))}]" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A helper function is defined to generate samples:" ] }, { "cell_type": "code", "collapsed": false, "input": [ " def gen_samples(n_samples):\n", " X_all = np.zeros((2, 0))\n", " Y_all = np.zeros(0)\n", " \n", " for i, model in enumerate(models):\n", " Y = np.zeros(n_samples) + i+1\n", " X = np.array(model['sigma']).dot(np.random.randn(2, n_samples)) + np.array(model['mu']).reshape((2,1))\n", " \n", " X_all = np.hstack((X_all, X))\n", " Y_all = np.hstack((Y_all, Y))\n", " \n", " return (X_all, Y_all)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we train the GNB model with `SHOGUN`:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "from shogun import GaussianNaiveBayes\n", "from shogun import RealFeatures\n", "from shogun import MulticlassLabels\n", "\n", "X_train, Y_train = gen_samples(n_train)\n", "\n", "machine = GaussianNaiveBayes()\n", "machine.set_features(RealFeatures(X_train))\n", "machine.set_labels(MulticlassLabels(Y_train))\n", "\n", "machine.train()\n" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run classification over the whole area to generate color regions:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "delta = 0.1\n", "x = np.arange(-20, 20, delta)\n", "y = np.arange(-20, 20, delta)\n", "X,Y = np.meshgrid(x,y)\n", "Z = machine.apply_multiclass(RealFeatures(np.vstack((X.flatten(), Y.flatten())))).get_labels()" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plot figure:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "pl.figure(figsize=(8,5))\n", "pl.contourf(X, Y, Z.reshape(X.shape), np.arange(0, len(models)+1))\n", "pl.scatter(X_train[0,:],X_train[1,:], c=Y_train)\n", "pl.axis('off')\n", "pl.tight_layout()" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Although the independent assumption is usually considered to be too optimistic \n", "in reality, Naive Bayes sometimes works very well in some applications. For \n", "example, in email spam filtering, Naive Bayes (More specifically, the \n", " discrete Naive Bayes is generally used in this scenario. The main \n", " difference with Gaussian Naive Bayes is that a tabular instead of a \n", " parametric Gaussian distribution is used to describe the likelihood \n", " $P(X=x|K=k)$) is a very popular and widely used method." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This algorithm is closely related to the *Gaussian Mixture Model* (GMM) learning\n", "algorithm. However, while GMM is an unsupervised learning algorithm, Gaussian\n", "Naive Bayes is supervised learning. It uses the training labels to directly\n", "estimate the Gaussian parameters for each class, thus avoids the iterative\n", "*Expectation Maximization* procedures in GMM.\n", "\n", "The merit of GNB is that both training and predicting are very fast, and it has\n", "no hyper-parameters." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Brief Introduction to Logistic Regression\n", "\n", "Although named logistic *regression*, it is actually a classification\n", "algorithm. Similar to *Naive Bayes*, logistic regression computes the\n", "posterior $P(Y=k|X=x)$ and makes prediction by\n", "\n", "$$\n", "y = \\operatorname*{argmax}_{k\\in\\{1,\\ldots,K\\}}\\; P(Y=k|X=x)\n", "$$\n", "\n", "However, Naive Bayes is a *generative model*, in which the distribution of\n", "the input variable $X$ is also modeled (by a Gaussian distribution in this\n", "case). But logistic regression is a *discriminative model*, which doesn't\n", "care about the distribution of $X$, and models the posterior directly.\n", "Actually, the two algorithms are a *generative-discriminative pair*. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To be specific, logistic regression uses *linear* functions in $X$ to model the\n", "posterior probabilities:\n", "\n", "\n", "\\begin{eqnarray}\n", " \\log\\frac{P(Y=1|X=x)}{P(Y=K|X=x)} &=& \\beta_{10} + \\beta_1^Tx \\\\\\\\\n", " \\log\\frac{P(Y=2|X=x)}{P(Y=K|X=x)} &=& \\beta_{20} + \\beta_2^Tx \\\\\\\\\n", " &\\vdots& \\nonumber\\\\\\\\\n", " \\log\\frac{P(Y=K-1|X=x)}{P(Y=K|X=x)} &=& \\beta_{(K-1)0} + \\beta_{K-1}^Tx\n", "\\end{eqnarray}\n", "\n", "\n", "The training of a logistic regression model is carried out via *maximum\n", " likelihood estimation* of the parameters\n", "$\\boldsymbol\\beta = \\{\\beta_{10},\\beta_1^T,\\ldots,\\beta_{(K-1)0},\\beta_{K-1}^T\\}$. There is no\n", "closed form solution for the estimated parameters.\n", "\n", "There is no independent implementation of logistic regression in `SHOGUN`, \n", "but the `CLibLinear` becomes a logistic regression model when\n", "constructed with the argument `L2R_LR`. This model also include a\n", "regularization term of the $\\ell_2$-norm of $\\boldsymbol\\beta$. If sparsity in\n", "$\\boldsymbol\\beta$ is needed, one can also use `L1R_LR`, which replaces\n", "the $\\ell_2$-norm regularizer with a $\\ell_1$-norm regularizer.\n", "\n", "Unfortunately, while the original LibLinear implementation of Logistic Regression support multiclass case, due to interface incompatability, one cannot use LibLinear as a multiclass-machine in `SHOGUN` directly so far. An easy work-around is to use multiclass-to-binary reduction instead. Please see the Multiclass Reduction tutorial for details." ] } ], "metadata": {} } ] }