{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# K-Nearest Neighbors (KNN)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### by Chiyuan Zhang and Sören Sonnenburg" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook illustrates the K-Nearest Neighbors (KNN) algorithm on the USPS digit recognition dataset in Shogun. Further, the effect of Cover Trees on speed is illustrated by comparing KNN with and without it. Finally, a comparison with Multiclass Support Vector Machines is shown. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The basics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The training of a KNN model basically does nothing but memorizing all the training points and the associated labels, which is very cheap in computation but costly in storage. The prediction is implemented by finding the K nearest neighbors of the query point, and voting. Here K is a hyper-parameter for the algorithm. Smaller values for K give the model low bias but high variance; while larger values for K give low variance but high bias.\n", "\n", "In `SHOGUN`, you can use [CKNN](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKNN.html) to perform KNN learning. To construct a KNN machine, you must choose the hyper-parameter K and a distance function. Usually, we simply use the standard [CEuclideanDistance](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CEuclideanDistance.html), but in general, any subclass of [CDistance](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistance.html) could be used. For demonstration, in this tutorial we select a random subset of 1000 samples from the USPS digit recognition dataset, and run 2-fold cross validation of KNN with varying K." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First we load and init data split:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n", "import os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\n", "\n", "from scipy.io import loadmat, savemat\n", "from numpy import random\n", "from os import path\n", "\n", "mat = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))\n", "Xall = mat['data']\n", "Yall = np.array(mat['label'].squeeze(), dtype=np.double)\n", "\n", "# map from 1..10 to 0..9, since shogun\n", "# requires multiclass labels to be\n", "# 0, 1, ..., K-1\n", "Yall = Yall - 1\n", "\n", "random.seed(0)\n", "\n", "subset = random.permutation(len(Yall))\n", "\n", "Xtrain = Xall[:, subset[:5000]]\n", "Ytrain = Yall[subset[:5000]]\n", "\n", "Xtest = Xall[:, subset[5000:6000]]\n", "Ytest = Yall[subset[5000:6000]]\n", "\n", "Nsplit = 2\n", "all_ks = range(1, 21)\n", "\n", "print(Xall.shape)\n", "print(Xtrain.shape)\n", "print(Xtest.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us plot the first five examples of the train data (first row) and test data (second row)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline\n", "import pylab as P\n", "def plot_example(dat, lab):\n", " for i in range(5):\n", " ax=P.subplot(1,5,i+1)\n", " P.title(int(lab[i]))\n", " ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest')\n", " ax.set_xticks([])\n", " ax.set_yticks([])\n", " \n", " \n", "_=P.figure(figsize=(17,6))\n", "P.gray()\n", "plot_example(Xtrain, Ytrain)\n", "\n", "_=P.figure(figsize=(17,6))\n", "P.gray()\n", "plot_example(Xtest, Ytest)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we import shogun components and convert the data to shogun objects:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from shogun import MulticlassLabels, RealFeatures\n", "from shogun import KNN, EuclideanDistance\n", "\n", "labels = MulticlassLabels(Ytrain)\n", "feats = RealFeatures(Xtrain)\n", "k=3\n", "dist = EuclideanDistance()\n", "knn = KNN(k, dist, labels)\n", "labels_test = MulticlassLabels(Ytest)\n", "feats_test = RealFeatures(Xtest)\n", "knn.train(feats)\n", "pred = knn.apply_multiclass(feats_test)\n", "print(\"Predictions\", pred.get_int_labels()[:5])\n", "print(\"Ground Truth\", Ytest[:5])\n", "\n", "from shogun import MulticlassAccuracy\n", "evaluator = MulticlassAccuracy()\n", "accuracy = evaluator.evaluate(pred, labels_test)\n", "\n", "print(\"Accuracy = %2.2f%%\" % (100*accuracy))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's plot a few missclassified examples - I guess we all agree that these are notably harder to detect." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "idx=np.where(pred != Ytest)[0]\n", "Xbad=Xtest[:,idx]\n", "Ybad=Ytest[idx]\n", "_=P.figure(figsize=(17,6))\n", "P.gray()\n", "plot_example(Xbad, Ybad)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform Cross-validation, we just use a small trick here that saves us lots of computation time: When we have to determine the $K\\geq k$ nearest neighbors we will know the nearest neigbors for all $k=1...K$ and can thus get the predictions for multiple k's in one step:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "knn.set_k(13)\n", "multiple_k=knn.classify_for_multiple_k()\n", "print(multiple_k.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have the prediction for each of the 13 k's now and can quickly compute the accuracies:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "for k in range(13):\n", " print(\"Accuracy for k=%d is %2.2f%%\" % (k+1, 100*np.mean(multiple_k[:,k]==Ytest)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So k=3 seems to have been the optimal choice." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Accellerating KNN" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Obviously applying KNN is very costly: for each prediction you have to compare the object against all training objects. While the implementation in `SHOGUN` will use all available CPU cores to parallelize this computation it might still be slow when you have big data sets. 
In `SHOGUN`, you can use *Cover Trees* to speed up the nearest neighbor searching process in KNN. Just call `set_use_covertree` on the KNN machine to enable or disable this feature. We also show the prediction time comparison with and without Cover Tree in this tutorial. So let's just have a comparison utilizing the data above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from shogun import Time, KNN_COVER_TREE, KNN_BRUTE\n", "start = Time.get_curtime()\n", "knn.set_k(3)\n", "knn.set_knn_solver_type(KNN_BRUTE)\n", "pred = knn.apply_multiclass(feats_test)\n", "print(\"Standard KNN took %2.1fs\" % (Time.get_curtime() - start))\n", "\n", "\n", "start = Time.get_curtime()\n", "knn.set_k(3)\n", "knn.set_knn_solver_type(KNN_COVER_TREE)\n", "pred = knn.apply_multiclass(feats_test)\n", "print(\"Covertree KNN took %2.1fs\" % (Time.get_curtime() - start))\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def evaluate(labels, feats, use_cover_tree=False):\n", " from shogun import MulticlassAccuracy, CrossValidationSplitting\n", " import time\n", " split = CrossValidationSplitting(labels, Nsplit)\n", " split.build_subsets()\n", " \n", " accuracy = np.zeros((Nsplit, len(all_ks)))\n", " acc_train = np.zeros(accuracy.shape)\n", " time_test = np.zeros(accuracy.shape)\n", " for i in range(Nsplit):\n", " idx_train = split.generate_subset_inverse(i)\n", " idx_test = split.generate_subset_indices(i)\n", "\n", " for j, k in enumerate(all_ks):\n", " #print \"Round %d for k=%d...\" % (i, k)\n", "\n", " feats.add_subset(idx_train)\n", " labels.add_subset(idx_train)\n", "\n", " dist = EuclideanDistance(feats, feats)\n", " knn = KNN(k, dist, labels)\n", " knn.set_store_model_features(True)\n", " if use_cover_tree:\n", " knn.set_knn_solver_type(KNN_COVER_TREE)\n", " else:\n", " knn.set_knn_solver_type(KNN_BRUTE)\n", " knn.train()\n", "\n", " evaluator = MulticlassAccuracy()\n", " pred = knn.apply_multiclass()\n", " acc_train[i, j] = evaluator.evaluate(pred, labels)\n", "\n", " feats.remove_subset()\n", " labels.remove_subset()\n", " feats.add_subset(idx_test)\n", " labels.add_subset(idx_test)\n", "\n", " t_start = time.clock()\n", " pred = knn.apply_multiclass(feats)\n", " time_test[i, j] = (time.clock() - t_start) / labels.get_num_labels()\n", "\n", " accuracy[i, j] = evaluator.evaluate(pred, labels)\n", "\n", " feats.remove_subset()\n", " labels.remove_subset()\n", " return {'eout': accuracy, 'ein': acc_train, 'time': time_test}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Evaluate KNN with and without Cover Tree. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Evaluate KNN with and without Cover Trees. This takes a few seconds:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# run the 2-fold cross-validation on the 1000 test examples\n", "labels = MulticlassLabels(Ytest)\n", "feats = RealFeatures(Xtest)\n", "print(\"Evaluating KNN...\")\n", "wo_ct = evaluate(labels, feats, use_cover_tree=False)\n", "wi_ct = evaluate(labels, feats, use_cover_tree=True)\n", "print(\"Done!\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Generate plots with the data collected in the evaluation:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig = P.figure(figsize=(8,5))\n", "P.plot(all_ks, wo_ct['eout'].mean(axis=0), 'r-*')\n", "P.plot(all_ks, wo_ct['ein'].mean(axis=0), 'r--*')\n", "P.legend([\"Test Accuracy\", \"Training Accuracy\"])\n", "P.xlabel('K')\n", "P.ylabel('Accuracy')\n", "P.title('KNN Accuracy')\n", "P.tight_layout()\n", "\n", "fig = P.figure(figsize=(8,5))\n", "P.plot(all_ks, wo_ct['time'].mean(axis=0), 'r-*')\n", "P.plot(all_ks, wi_ct['time'].mean(axis=0), 'b-d')\n", "P.xlabel(\"K\")\n", "P.ylabel(\"time per test example [s]\")\n", "P.title('KNN time')\n", "P.legend([\"Plain KNN\", \"CoverTree KNN\"], loc='center right')\n", "P.tight_layout()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Although simple and elegant, KNN is generally very resource-intensive. Because all the training samples are memorized literally, the memory cost of KNN *learning* becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, prediction is slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if, in addition, the data samples are very high-dimensional. Leaving aside computation time, however, k-NN is a very versatile and competitive algorithm: it can be applied to any kind of objects (not just numerical data), as long as one can design a suitable distance function. In practice, k-NN combined with bagging can produce improved and more robust results." ] },
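{ "cell_type": "markdown", "metadata": {}, "source": [ "To make that last remark a bit more concrete, below is a small sketch of bagged KNN. It is not part of the original tutorial: the helper name `bagged_knn_predict`, the ensemble size and the bootstrap scheme are illustrative assumptions. Each ensemble member is an ordinary Shogun `KNN` machine trained on a bootstrap resample of the training set, and the ensemble predicts by majority vote over the members." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# illustrative sketch of bagged KNN (assumption: not part of the original tutorial)\n", "def bagged_knn_predict(X, Y, Xquery, k=3, n_members=5, seed=0):\n", "    rng = np.random.RandomState(seed)\n", "    member_preds = []\n", "    for m in range(n_members):\n", "        # bootstrap resample of the training set (sampling with replacement)\n", "        idx = rng.randint(0, X.shape[1], X.shape[1])\n", "        f = RealFeatures(X[:, idx])\n", "        l = MulticlassLabels(Y[idx])\n", "        member = KNN(k, EuclideanDistance(), l)\n", "        # reuse the Cover Tree solver from above to keep predictions fast\n", "        member.set_knn_solver_type(KNN_COVER_TREE)\n", "        member.train(f)\n", "        member_preds.append(member.apply_multiclass(RealFeatures(Xquery)).get_labels())\n", "    member_preds = np.array(member_preds, dtype=int)\n", "\n", "    # majority vote across the members for every query point\n", "    voted = np.zeros(member_preds.shape[1])\n", "    for i in range(member_preds.shape[1]):\n", "        voted[i] = np.argmax(np.bincount(member_preds[:, i]))\n", "    return voted\n", "\n", "bag_pred = bagged_knn_predict(Xtrain, Ytrain, Xtest, k=3, n_members=5)\n", "print(\"Bagged KNN accuracy = %2.2f%%\" % (100*np.mean(bag_pred == Ytest)))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Comparison to Multiclass Support Vector Machines" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "In contrast to KNN, multiclass Support Vector Machines (SVMs) attempt to model the decision function that separates each class from the others. They compare examples using similarity measures (so-called kernels) instead of distances, as KNN does. When applied, they are in Big-O notation computationally as expensive as KNN, but they involve an additional (costly) training step. They do not scale very well to a huge number of classes, but usually lead to favorable results when the number of classes is small. So, for reference, let us compare how a standard multiclass SVM performs relative to KNN on the USPS data used above." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Let us first train a multiclass SVM using a Gaussian kernel (roughly the SVM counterpart of the Euclidean distance)."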
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from shogun import GaussianKernel, GMNPSVM\n", "\n", "width=80\n", "C=1\n", "\n", "gk=GaussianKernel()\n", "gk.set_width(width)\n", "\n", "svm=GMNPSVM(C, gk, labels)\n", "_=svm.train(feats)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's apply the SVM to the same test data set to compare results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "out=svm.apply(feats_test)\n", "evaluator = MulticlassAccuracy()\n", "accuracy = evaluator.evaluate(out, labels_test)\n", "\n", "print(\"Accuracy = %2.2f%%\" % (100*accuracy))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since the SVM performs way better on this task - let's apply it to all data we did not use in training." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "Xrem=Xall[:,subset[6000:]]\n", "Yrem=Yall[subset[6000:]]\n", "\n", "feats_rem=RealFeatures(Xrem)\n", "labels_rem=MulticlassLabels(Yrem)\n", "out=svm.apply(feats_rem)\n", "\n", "evaluator = MulticlassAccuracy()\n", "accuracy = evaluator.evaluate(out, labels_rem)\n", "\n", "print(\"Accuracy = %2.2f%%\" % (100*accuracy))\n", "\n", "idx=np.where(out.get_labels() != Yrem)[0]\n", "Xbad=Xrem[:,idx]\n", "Ybad=Yrem[idx]\n", "_=P.figure(figsize=(17,6))\n", "P.gray()\n", "plot_example(Xbad, Ybad)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The misclassified examples are indeed much harder to label even for human beings." ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 0 }