{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# Lecture 6: Classification\n", "In data science, it is essential to distinguish the type of problems we are solving for. So far in this course, we have learned some **regression** techniques such as linear regressions, and **classifiers** such as logistic regressions and decision trees. In this lecture we are exploring more methods to solve **classification** problems.\n", "\n", "## K Nearest Neighbors\n", "One of the simplest classification models is called **K Nearest Neighbors**, or kNN. In kNN, the prediction is made based on a majority vote amoung its neighbors, increasing the searching radius around the point until the number of neighbors sums up to k. As such, k tends to be a small integer, and often an odd number in order to be a *tie-breaker*. For example, if you implement kNN classifer with k-value of 3, the classifier will keep searching the neighbors around the prediction point until finding 3 neighbors. While kNN is very easy to interpret, it is called an **instance-based learning** or **lazy learning** since the algorithm is only applied locally and not all detasets are used for the prediction. In the \"training phase\" of a kNN classifier, it simply stores all data points and their categories. \n", "\n", "![image](https://mayuresha.files.wordpress.com/2013/04/knn.jpg)\n", "\n", "### K-value Selection\n", "Using the \"right\" k-value will significantly affect the output of the classifier. If you choose larger k-value, the model will become more robust to small noises; however, the boundary line between 2 classes will be less distinct. Intuitively, with the larger k-value, the borderline between classes becomes smoother while the smaller k-value will have a potential problem of over-fitting. For example, when k is 1, the closest neighbor in train data becomes the point itself. Thus, the accuracy of this model on training dataset will be 100%. However, this model simply predicts a datapoint to be the identical class with its closest neighbor. This method is only usuful for the initial analysis of data or classification of very large datasets.\n", "\n", "Smaller k value will increase **variance** in the outputs but tends to **overfit** the classifier. On the other hands, larger k value will increase **bias** while becoming more robust to the outliers. (Continued in lecture 8)\n", "\n", "\n", "\n", "\n", "\n", "\n", "### Weighted Nearest Classifier\n", "Intuitively speaking, if the point to predict is located very closely to some class *A*, it is very likely to belong to A. On the other hand, the confidence level decreases as the distance from the closest point increases. The algorithm to apply this \"intuition\" is called **weighted nearest classifier**. This model simply improves the normal kNN classifier by calcurating the weighted sum of neighbors as it multiples by 1/d where d is the distance from the center. This will account for the fact that the distance from the point and the confidence level of prediction is in the negative correlation. \n", "\n", "### kNN in R\n", "Before using R to craete classification models, we need to prepare a dataset that contains 2 classes. The following code will teach you how to create 100 data points with three values, x coordinate, y-coordinate, and class (-1 or 1)." 
] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAA0gAAANICAMAAADKOT/pAAAAOVBMVEX9/v0AAAAAAP9MTUxn\naGd7e3uLjIuZmpmmpqaxsrG7vLvFxsXOz87X2Nff4N/n6Ofu7+79/v3/AAA7dfO6AAAAE3RS\nTlP//////////////////////wD/DFvO9wAAAAlwSFlzAAASdAAAEnQB3mYfeAAAGsFJREFU\neJzt3eFC2soWgNGbAmqtR6Xv/7BXUCwgYkL2JHsma/3w0FYlevKZmUmI/9sCo/1v7g2AFggJ\nAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJ\nAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJ\nAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJ\nAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJ\nAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJ\nAggJAggJAggJAggJAggJAkwQUgeVGb6XTxFS+aeASEKCAEKCAEKCAEKCAEKCAEKCAEKCAEKC\nAEKCAEKCAEKCAEKCAEKCAEKCAEKCAEKCAEKCAEKiIn///p17E74hJKqxryhpSkKiGn+P3mYj\nJGrx9+y/qQiJWghpMCHxlZAGExIXmCMNJSQusGo3lJC4yHmkYYREZYQEAYQEAYQEAYQEAYQE\nAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQE\nAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQE\nAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQEAYQE\nAYTEdX///p17E2ogJK7ZVySln80Q0uOqWz9efxchZfH36C1XTBnS8123etz+7nY2V99TSEn8\nPfsv35kwpOd9QQ/d/ev25a67ekwSUhJC6mvCkO67h+32oVvtHr9262vvKqQkhNTXhCF1++fq\n7o7+cPrPR258CqKZI/U0eUh/3sd07wemb9/1xqcgmlW7niYd2r3Njt697od53xNSHs4j9TJh\nSK+rzyFbd/2AJCRqM+l5pIdDPqurxyMhUR1XNkAAIUEAIUEAIUEAIUEAIUEAIUEAIUEAIUEA\nIUEAIUEAIUEAIUEAIZFZNa+GEhJ5VfT6XCGRV0V3jBASadV0DyMhkZaQxhISWyGNJyR2zJFG\nEhI7Vu1GEhLvnEcaRUhURkgQQEjQ15WBppBIoYLJ0NWlDyGRQBXLc1cX44VEAjWcMLp+elhI\nzK+KSxiERHZCKkRIy1JFSOZIpFfDHMmqHelVsWrnPBL5VXAe6SohQQAhQQAhQQAhQQAhQQAh\nQQAhQQAhQQAhQQAhQQAhQQAhQQAhQQAhQQAhQQAhQQAhNaT2F8fVTEjNqOTl2o0SUjMmv4GI\nA+ARIbVi6ltaOQCeEFIrJg9p0mdLT0itmDikOu7pOB0hNWPaQ4SQTgmpGdNOWoR0SkgNmXQZ\nzRzphJC4jVW7E0LiVs4jHRESBBASBBASBBASBBASBBASBBASBBASBBASBBASBBASBBASBBAS\nBBASBBASBBASc2jutUxCYnoNvrpWSEyvwfs9CKltKYdQLd6BSEgtSzqEEtKekKqRdAglpD0h\n1SLtDps08DGE1LC8IeUcco4hpIZdCCnL4kOW7QgjpJadD6EaPBJkIaSWnYfT4NwkCyG17WQI\nlXbO1AAhLYiQyhHSggipHCEtyfdzpOZW0aYmpCX5btXOat5oQlqWy0ceq3mjCQlzpwBCQkgB\nhISQAggJc6QAQsKqXQAhseM80khCggBCggBCggBCggBThvR633Wbp4/nvfrEQqIyE4b0uup2\n7t6fV0i0ZMKQHrrHt5oeV5v98wppGOvTuU0Y0ur9uV5W6xchDeSMaXYThnRo53WzuRRSd+zG\np2jWQq/hqegwPGFI6+718GjjiDTIMq8qreowPGFIj939x6OXbiOkIRYa0tHb9KZc/n74rOfp\nh9GbkE4tMqS6vuhJT8g+3x0evdwLaYiqfjgHEdJ4QjqTfbpQYlVASOMJ6Yt+u+o8y1yFMq/q\nMCykhsx13Cq0x2c/DJ8QUkNm+hFebgzmPNJIQrrFXJOKuiYzhQjpBkl/UAppRkIaLO3QfbYd\nuqpVgUKENFje3WauLUv7o2VCQhoq8UBmvh066WB3QkIaKnFIduj5CGmo1CExFyENlneOxHyE\nNJipNV8J6QZmIpwTEgQQEgQQEgQQEgQQEgQQEgQQEgQQUuOc85qGkJqW8CqMRssWUhZFdrB0\n1wUmLDuGkHIos4Plu1I9XdlRhJRDmR0sXUjpNiiMkFIotIOl22/TbVAYIaVQagfLNpIS0hEh\nxSsWUra5fbaywwgph2I7WLLV5sJlz/fVCimHww7269evuTelsIL7+pzHXyFl8Xef0duD5lMq\nZs5xo5Ay+XX0lqFmXckQUiK/zv7LIEL6QkgMJ6QvhMQNzJHOLTQkc6RxrNqdW2xIVu3GcR7p\n1FJD2i7gPFKjhAQBhAQBhAQBhAQBhFRQsiuvKUhIxaR7LRAFCamYRK9hc2g8VeD7IaRS8ryq\n2qHxVJHvh5BKSRTS0VsKfT+EVMrYkMKucchTdA5lvh9CKmbUD77Aq+6Cd5zq51tCqsyooXjg\ndeChO04D8y0hVef2H96hr0yKnBO0MN8yR1qO2JDijiJNzLes2i1H8Gtlw+Y1TYTkPNKCJH2t\nbCMhFSCknLK+VraFOVIRQsoq52tlG1i1K0NIDFP9eaQyRoTUfTXjVsGchAQBRoV0/rmENETO\nSRC3EdJMsi7LcRuLDTNJeqKIGwlpHm7z3ZgxIb3ed93m6ePzhO77Qhr/BIaNkxoR0utqv1J3\n9/55hDRI4ZDMwKY2IqSH7vGtpsfVZv95hDRM2TmSGdjURoS0ev/Yl9X6RUiDFT1mmIFNLmD5\n+3WzEdINCs5ihDS5ESGtu9fDo42QUhHS5EaE9Njdfzx66TZCSsUcaWpjlr8fPut5CrzObkdI\nI1m1m9qoE7LPd4dHL/dCysV5pGm5sgECCAkCCInZtPRi26iQuu5zDS+AkBagrds/xIW0/XN3\n6R9uIqQFaOuGRIZ2zKOxW+QJiX
kISUgEEJJ7NhDBHOnwoUJiBKt2hw91XztGcR7p/UNrDqnC\n/4cVbnI/TXxhy1xsqHBUUeEm95PvC7sp7IWGdPS2EhVucj/ZvrAbwx4b0uN6u31Zd+v/hn+e\nKwqHVOHKa4Wb3E+6L+zGsEeG9LSbF+1vyxVakpDOVbjJ/WT7wm7dnpEhbbo/2+duvf3TbYZ/\nou8J6VyFm9xPti9sppB2B6Tn7qG2uwhlG5f3UOEm95PsC5sxpLvuqbqQ0q0U/ajCTe4n2xc2\nzxxp0z0/dattZUO7bZXnLirc5H5yfWHzrNrtbh/U/d4dkJ6Gf6LvubKB+cxyHulxtZshbdd/\nhn+eK4REZZZ5QhaCtRuSG7sxoYBVu73VKmJrPj/t6M/gVqOJ5FpNKCMopJdsy99ufp1GtvXt\nMkaE9HTyCor1zFt1yq9jyCPZGddCxhyR1scd5brWTkhpZLsGaKfAWDNqjhRLSO3IF1KRsWar\nq3Zp50iLW0xMGNLR2zDNhpRz1S7pZhWVbY5UpuyxIf1eR9+vYafZ80hpD5QFZVu1SxnS7/gb\nn+y0emXDQqduuc4jpQxp1T32f6r+dxwSEuVknCMNORA9Xg+pyH29khFSBhlX7e661/4f+Lzq\n+6KlVkNa5BwpoXznkV5WmwFnYvcvSu+j3ZAWuGq3DKOHdoNGY4/dc6mtqkXKxURGmzakvhoO\niTa1ekIWJiUkCDDy9yMZ2rUp1ynUGgiJL7Jd1FMDQzu+KHmZaavHOiGN1tyCdsEXPrR7rBsd\n0p/N27DuLva2djWF1OAp1pIhlfrEsxsb0uZjhhR6x+KqQjp624hyIeV7kV+YkSE9dqvdvYqf\nhlwF3kM9ITV5GWqx44aQjh2HtP645uc52V2EJtNmSKVmMkI6dvFlFEtd/m4ypHJra+ZIRy4f\nkZLdaXUyDc6RCrJqd8Qc6ViDq3ZFOY/0yardqUrPI1W62VmNP490t+zzSJVyIA3myoYkJh7y\nmNoFE1IK307CCw3AGl1snFHM0O4+9DfILjGko7dHig3AhBQtarHhLmqD9pYW0ncnKosNwIQU\nbWRID4tf/g7xTUgFd3dzpGCj77S6oEuEyi0YB4Q0cOOs2gVziVBfRXe9y3Ok/iGdblyvFcC0\n55HqPGU7emh3OCKFTpJShnT0Ntw3q3a9n/P4HS9+rrTZnKv1IqLRv9ZlP0f6r/fNiPtJGFLp\n6fnFn8N9j4InG3fh6FbRQK7Wy1rjbhAZeQOUBYb03dP22v2PN+7SfKuepYVqX2ghpJ5SLxj/\nEFLqbT+11JAKSRhS2p/q+2PW0cYJaRZC6uvbecas8/iPrTreuK+zjIpCWuocqZCMIX1TzMzz\n+M9D0a+jxe/t+brX8QEr+YrYUlftysgZ0kXzjvguH2u+xPJZew27afbULxPSODOPmno//ccB\nq9aBU35CGqeWkN5VO5XPT0jjzD2PHzayFFIxQhpp5lXxYWsdQipGSCPNfvXNoNV3c6RShDRa\nNdeDbutYtauTkBamzsXl/IQEAYQEAYQEAYQEAYQEAYQEAYQEAYSUWk0ne5dNSInNfvkRvQkp\nsay3ieCr5YVUz2hp7pdoMMDSQqpptCSkiiwupKO32QmpIgsLqa59s6bql05IidU0Dl06IaVW\nz8pIZlO8BmthIRktLc80rwpeXEhGS0szzX0qlhaS0dLSTHTnpOWFVIA2ExNSLYwWUxNSLaxf\n5GaOlMn3o7faVtQXx6pdHtdGb0JKz3mkLK6N3oTEVki9XG/FHAkh9fJDSFbtEFIfP43eSp1H\ncn6qHkLqY5bRmyNdTYTUxyz7tLlXTYTUz/SjLKuBVRFSVkKqipCyElJVhJSWOVJNhJRWwAqH\n9fPJCCmxkR1YP5+QkNplbDghITXLasWUhNQsIU1JSM0S0pSE1K7LcyQreUUIqV2XVu2s5BUi\npJZ9PfpYyStESLepdIA04bzp80YJU9wxYX5CukXSAdLPdU8W0uete6a5h8/8hHSLlAOkPnVP\nF9Lh7TR3lZufkG6Qc2G5V90T/Qj4vL3pRPc5nZ+QbpAypH4bNdGgVEg9CKnikD4mUqXXSoTU\ng5BSzpF+fSyP9diq48NSoTU1c6SfCSnlqt3f3eb8/dtno/79HCi2pmbV7mdC2mY8j/S3d91H\ng8CCxwvnkX4ipIz2o7q3jHrstP9CWswMpjwhNWJAE0IqQEiN6NfE+4j0c44kpDBCakWP6c5h\nFvVvNrWUNbXyhNSKHstj/1brfn0ufv/4QfQipHb8tDx28ZTtMtbUyhPScqS8IKMVQloOIRUk\npAXJeGVTK4S0IBmvbGqFkBIqd/1RviubWiGkdBw3aiSkdMxkajRpSP/9vut27h7+u/6OSw7J\n2lqVJgzpdd39s4nequT6T06EVKUJQ3roVn+e949enlbdw7V3bS2kIdMeIVVpwpBW3fPn4+du\nde1dmwvp6G3sO5PEhCF13Xd/+PibIzc+RVLDDjJW7WrkiDSBoaM1Z3vqM+0c6ell/2hpcyTT\nnvZNufy9ORq7rV+Dtyo1057mTXse6WF/Hml193th55FMe5rnyoZpmPY0TkjLc3hRrLgDCWlp\nDrdpSDbcrP0l70JamsONg1ItgNR/ExYhBes1XppxUHXYWXMtydd/WzAhheo1Xpp1UJUypAZu\nVCmkUJl+a95lQipDSJF67Z0z78IZ50hCKmTRIRWeQKVctTNHKmPBId2wfw8tL+F5JKt2ZdQa\nUsAcafCIK9WR5XbOI5VQb0hjV+2GT6AyzXUWTEjBRp5HGhxSqtW3BRNSLkKqlJCSGTpSE1IO\nQkpm8NqBOVIKQkpn4Kp0I6t2tRNS/RKdD1ouIUEAIUEAIS2McWAZQloUKxOlCGlRrJWXIqQl\ncfa2GCEtiZCKEdKSCKkYIS2KOVIpQsoudL3aql0pQsotfM93HqkMIeVmLFYJIaVmdaAWQhpg\n+mGRkGohpN7mmKgLqRZC6m2a6crZbanMkSohpL4mOTh8uVGi9epKCKmvaUI6ent4PhnVQEh9\nTRFSAzeTXyoh9TbBdEVI1RJSbxNMV4RULSENUH66Uv+vN1kqIaVS/683WSohJVP7rzdZKiFB\nACFBgIWG5CwnsRYZkutuiLbMkI7eQoQlhuS1CYQTEgQQ0rysejRiiSHlmSNZ9WjGMkPKsv+m\nKZqx6gopbCCUY0SVaYzJODWFlOZAEkVI7agqpKO3TRBSOyoKqcHdrrkfDcslpDk1N1hdLiHN\nK8eqB6NVFJKBEHlVFZKBUCGOi6PVFJL/4WX4ARWgrpAowZA5gJAWr8lFnMktOqQBI8WGB5VC\nirDgkAZMDZqeRQgpwpJDOnob964Vavurm8hyQxrwg/jGn9m1DAebPt5ORUilQqpp96wl+cSE\n9NMu9OvfXjYspOEfQr2WG1K/Pf34uHJDR0paiiWH1Gfs9av/u176wKIhud9+IgsOqc/U4DOH\nwbOI4iH5DTCpLDqkn43IofQcKeJ3k
jmmhRHSVWNCKrtqF/BbMhd4TCv3k0NI1405rhRdVI4I\naewnqE3JnxxCui7t2aDxIbXxm5+HHGNK/uQQ0k+ynqwcvVe0ENKgY0zRL1hItRo9TmkipKO3\nPd9ZSJwZO3Ouf440LA0hUUT9q3YD0zBHoozazyMNDcmqHVwy9BjjPFJ+WVf3mpZndCqkGGnP\nN7Uuy+hUSDG8+mjhhBTCq4+WrqWQZpyl3BJSlkEJEdoJadZZyvCQ8kyTidBQSEdvpzf42eu/\nrIBjzYQ08yxl6PGwhQvdOCKkKMNmaEJqjJDmIaTGNBNSbWdyzJHa0lBIdV1bYNWuLe2EVN3V\nbs4jtaSlkGA2QoIAQoIAQoIAQoIAQoIAQiqlssV4xpkwpO5U8FYlU9npYcaaMKTHRYV09JYF\nmHJo97za9HzP6kOq7BJaRpt0jvTcPfR7RyFRmWkXGx67517vJyQqk2fVrvcEqgrmSAuTJ6Rj\nDYRk1W5ZhFSK80iLMkdIP4/cWgiJRRESBBDSbQzcOCGkW1hK4IyQbmFxmzNCuoHTrZyz/H0D\nIXFOSDcQEueEdAtzJM4I6RZW7TgjpNs4j8QJIUEAIUEAIUEAIUEAIUEAIUEAIUEAIU3J2adm\nCWk6rodomJCm4wq9hglpMq4Zb5mQJiOklglpMkJqmZCmY47UMCFNx6pdw4Q0JeeRmiUkCCAk\nCCAkCCAkCCAkCCAkCCAkCCAkCCAkCCAkCCAkCCAkCCAkCCAkCCAkCCAkCCAkCCCkGnhlbXpC\nys+9HiogpPzcfagCQkrP/fBqIKRPWSciQqqBkD7knYgIqQZC+pB4IpJ40zgQ0rvMP/bzHiz5\nJKR3mUPKO33jk5De5Q6J9IT0wUSEMYT0wUSEMYT0yUSE2wkJAggJAggJAggJAggJAggJAggJ\nAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAggJAiQNCSozfC+fIKSrMh6uEm5Twk3K\nuE0zbpKQvkq4TQk3KeM2CSmVhNuUcJMybpOQUkm4TQk3KeM2CSmVhNuUcJMybpOQUkm4TQk3\nKeM2CSmVhNuUcJMybpOQUkm4TQk3KeM2CSmVhNuUcJMybpOQUkm4TQk3KeM2CSmVhNuUcJMy\nbpOQUkm4TQk3KeM2LTgkaIKQIICQIICQIICQIICQIICQIICQIICQIICQIICQIICQIICQIICQ\nIICQIICQIECCkB5TvULsYdWtHl7n3oozub5FO4/rbN+m1/uuu3+e7ennD+n5lnv/F7PZ/zKC\n9dybcSrXt2jnYf9tWmUqabXfpNlKmj2k51WmveS/bvW826T/5t6QY7m+RTvP3f3r7jh5P/eG\n/POw25iH7m6u5587pMduk2kveeie3t7+6X7PvSFHkn2Ldu7etyfTZq263eFxvi2aO6TuIdX/\njrvuZbv7gTvbD7YLkn2LjuTbrG411zPPHdJzrv8dXb4ftdm+Rf+8dpu5N+HMQ/c411PPHdI2\n116SMaRtvu1597gfB+fxp3s7eM9FSCeE1N/LKtMA+M3j3Wq+ya2QTgipt9dVtoHdm/vZxnYz\nhXT8u6Mz7SUrIfW1SXaybe91ttUGIZ14X7V7SbVqt831LXr3st68zL0Nl8z2nTK0O/F7P39+\nmnHSelGmb9HeU7oFu/fzSC+zXZQipBMpr2zI9S3aeUnX0fuVDa93S5sjHUu1l6z3Y85su0mq\nb9F2N6fvjgfnKazm/T8npFOv+6u/596Kc6m+RdvDFDdXSLvr9teznY/NEBLUT0gQQEgQQEgQ\nQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQ\nQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEgQQEi1OPv1eE/n/3r674/dxb+mFCHV\n4rSIdXf+ryf//twJaVpCqtN5H6d/fl59/llI0xBSna6G9NhthDQxIeXztu8/fP5q9cf1x+/q\n3hXRdS933er3x68Vf/vLp7diNk/bs166h62QJiakfLru966Tze7xpjs8fA9ptfvj70NIj/v/\ndo9nvTxvhTQ1IeXzlsvzbprzZ7v98+/he0ib17d61oc+Vt3z7n3WV4Z6QpqGkPLput1g7am7\n227vPh5uDiH9tz08+vee262QZiekfD6Xrr88PORzePTQdXfPz0cfc/45hDQVIeXTP6Tt792c\nafUipNkJKZ8BIb0N+x7W5kgJCCmf95nQU3f/b450921IF/58+LvTBxQlpHwOq3ZPF1bt3v99\nf0Zpu7tQ6I9VuxyElE/X7c8e3e0en59H2n48Wr/Vtmto77+TMeA3DyhKSPm87ft3H5czbLeP\nq5MrGw6P/lvvQnq/suGwJL4V0nyElM8t+/7hY758qJCmIaR8RoT05z7ikzGckPK5LaT9R91d\n/muKE1I+I0Lq+deEExIEEBIEEBIEEBIEEBIEEBIEEBIEEBIEEBIEEBIEEBIEEBIEEBIEEBIE\nEBIEEBIEEBIEEBIEEBIEEBIEEBIEEBIEEBIEEBIEEBIEEBIEEBIE+D+Bi9LJhIDhHgAAAABJ\nRU5ErkJggg==", "text/plain": [ "plot without title" ] }, "metadata": { "image/svg+xml": { "isolated": true } }, "output_type": "display_data" } ], "source": [ "# Set up parameters\n", "meanA <- 1.5 # mean for class A\n", "meanB <- 0 # mean for class B\n", "sdA <- 1 # standard deviation for A\n", "sdB <- 0.7 # standard deviation for B\n", "\n", "# Create 100 random points and store them into 50 x 2 matrix\n", "matA <- matrix(data = rnorm(n = 100, mean = meanA, sd = sdA), nrow = 50, ncol = 2)\n", "matB <- matrix(data = rnorm(n = 100, mean = meanB, sd = sdB), nrow = 50, ncol = 2)\n", "\n", "# Combine 2 matrix \n", "points <- rbind(matA, matB)\n", "\n", "# Label A as 1, B as -1\n", "labels <- matrix(c(rep(1, 50), rep(-1,50)))\n", "\n", "# Now plot class A as a red point, B as a blue point\n", "plot(points, col = ifelse(labels > 0, 2, 4))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we 
have a dataset, we are going to split it into training data and test data.\n",
"Here we simply sample 70 data points for the training set and use the remaining 30 as the test set." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [], "source": [
"# Splitting the dataset into a train set and a test set\n",
"sample_index <- sample(100,70) # randomly pick 70 indices\n",
"\n",
"train_data <- points[sample_index,]\n",
"test_data <- points[-sample_index,]\n",
"train_label <- labels[sample_index,]\n",
"test_label <- labels[-sample_index,]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [
"Finally, let's use the *knn* function from the *class* library.\n",
"We create 4 classifiers with different k values; each call returns the predicted labels for the test points. " ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "0.766666666666667" ], "text/latex": [ "0.766666666666667" ], "text/markdown": [ "0.766666666666667" ], "text/plain": [ "[1] 0.7666667" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "0.766666666666667" ], "text/latex": [ "0.766666666666667" ], "text/markdown": [ "0.766666666666667" ], "text/plain": [ "[1] 0.7666667" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "0.8" ], "text/latex": [ "0.8" ], "text/markdown": [ "0.8" ], "text/plain": [ "[1] 0.8" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "0.8" ], "text/latex": [ "0.8" ], "text/markdown": [ "0.8" ], "text/plain": [ "[1] 0.8" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [
"# Import class library\n",
"library(class)\n",
"\n",
"# kNN Classifier with k = 1\n",
"classifier_1 <- knn(train = train_data, test = test_data, cl = train_label, k = 1)\n",
"# kNN Classifier with k = 3\n",
"classifier_3 <- knn(train = train_data, test = test_data, cl = train_label, k = 3)\n",
"# kNN Classifier with k = 5\n",
"classifier_5 <- knn(train = train_data, test = test_data, cl = train_label, k = 5)\n",
"# kNN Classifier with k = 7\n",
"classifier_7 <- knn(train = train_data, test = test_data, cl = train_label, k = 7)\n",
"\n",
"# Here is a simple way to check accuracy\n",
"mean(classifier_1 == test_label)\n",
"mean(classifier_3 == test_label)\n",
"mean(classifier_5 == test_label)\n",
"mean(classifier_7 == test_label)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also inspect the results as confusion matrices using the *table* function." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ " classifier_1\n", "test_label -1 1\n", " -1 14 4\n", " 1 3 9" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [ " classifier_3\n", "test_label -1 1\n", " -1 15 3\n", " 1 4 8" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [ " classifier_5\n", "test_label -1 1\n", " -1 14 4\n", " 1 2 10" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [ " classifier_7\n", "test_label -1 1\n", " -1 15 3\n", " 1 3 9" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [
"table(test_label, classifier_1)\n",
"table(test_label, classifier_3)\n",
"table(test_label, classifier_5)\n",
"table(test_label, classifier_7)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [
"In this run, the classifiers with k = 5 and k = 7 are slightly more accurate (0.80) than those with k = 1 and k = 3 (about 0.77); because both the data and the train/test split are random, the exact numbers will change from run to run. If you want to weight closer neighbors more heavily, as described in the Weighted Nearest Classifier section, the *kknn* package provides a distance-weighted (kernel) kNN; a minimal from-scratch sketch of such a weighted vote is shown below."
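,
"\n",
"This sketch assumes Euclidean distance and simple 1/d weights; it is only illustrative and is not how the *kknn* package is implemented:\n",
"\n",
"```r\n",
"# Illustrative sketch of a distance-weighted kNN vote (weights = 1/d)\n",
"weighted_knn_one <- function(train_x, train_y, query, k = 3) {\n",
"  diffs <- sweep(train_x, 2, query)\n",
"  dists <- sqrt(rowSums(diffs^2))\n",
"  nearest <- order(dists)[1:k]\n",
"  # Each neighbor votes with weight 1/d instead of 1\n",
"  w <- 1 / (dists[nearest] + 1e-8)           # small constant guards against division by zero\n",
"  scores <- tapply(w, train_y[nearest], sum) # total weight per class\n",
"  names(scores)[which.max(scores)]\n",
"}\n",
"\n",
"# Hypothetical usage on the data from this section\n",
"# weighted_knn_one(train_data, train_label, query = c(0.5, 0.5), k = 5)\n",
"```"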
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Naive Bayes\n", "Our second classifier is called a naive Bayes classifier. This classifier applies Bayes' theorem with an assumption that all parameters are independent. \n", "\n", "### Bayes' Theorem (Quick Stats Review)\n", "Bayes' Theorem is a fundamental equation to solve for conditional probabilities. Conditional probability is a probability of some event *A* when you have literally a condition *B*. For example, when you want to find the probabilty of eating out at the college town (an event A), but with the condition that it is a final week (an event B). Probability of eating out on any random day, P(A), is probably low if you are a college student. However, the probability will most likely raise if it is a random day during the final week, because students don't have time to cook. The second analogy represents the conditional probability of A given B and the mathmatical notation is $P(A|B)$.\n", "\n", "Bayes' Theorem shows the mathmatical relationship of those conditional probabilities and it is represented as follows:\n", "\n", "$$P(A|B) = \\frac{P(B|A)\\times P(A)}{P(B)}$$\n", "\n", "### Independence (Quick Stats Review 2)\n", "By definition, two events are called independent if the occurance of one event has no effect on the probability of other event.\n", "If 2 events are independent, you can show the mathmatical relationship by the following equation:\n", "\n", "$$P(A\\cap B) = P(A)\\times P(B)$$\n", "\n", "This assumption will make the calculation of naive Bayes significatly easier.\n", "\n", "### Naive Bayes\n", "Let's imagine we want to identify the gender of a randomly selected person given his or hers social media profile (of course the gender information is missing!). We have many feature parameters such as name, picture, where this person is from, high school, favorite music they listen to, liked posts, shared events etc. Naive bayes will calcurate the probability of this particular individual being male given those parameters and compare it with the probability of this individual being female given the parameters. If the probability of male is higher than female, the classifier predicts this person to be male.\n", "\n", "Let's say we have n feature variables and call them $X_1, X_2, X_3, ... , X_n$. Also we define the probability of a person being male as P(M), and female as P(F).\n", "\n", "Naive Bayes will compare $P(M|X)$ and $P(F|X)$.\n", "\n", "From Bayes' Theorem, we can state following relationship.\n", "\n", "$$P(M|X) = \\frac{P(X|M)\\times P(M)}{P(X)} = \\frac{P(X_1\\cap X_2\\cap... X_n|M)\\times P(M)}{P(X)}$$\n", "\n", "By assuing all parameters are indepentent, we can also show following.\n", "\n", "$$P(X_1\\cap X_2\\cap... X_n|M) = P(X_1|M)\\times P(X_2|M)\\times ... \\times P(X_n|M)$$\n", "\n", "So what does this mean? In this particular problem, naive Bayes classifier will look at the social media profile, and for each parameters it calculates the probability of being male or female. For example, if this person's name is Peter, how liktely is this person to be male? If this person listens to jazz a lot, how likely is this person to be male? etc. The result of such questions will be multiplied all together to find the probability of this individual being male.\n", "\n", "There are several kinds of naive Bayes classifier based on how to deal with those questions. If the classifier assumes that the parameters are distributed *normally*, it is a **Gaussian Naive Bayes**. 
, "\n",
"### Naive Bayes in R\n",
"In R, we use the *naiveBayes* function from the *e1071* library. The example below fits a model on a random 80% of the built-in *iris* dataset and checks its predictions on the remaining 20%." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ " \n", "pred setosa versicolor virginica\n", " setosa 8 0 0\n", " versicolor 0 14 2\n", " virginica 0 0 6" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [
"library(e1071)\n",
"\n",
"# Random 80/20 train/test split of iris\n",
"ind <- sample(150, 150*0.8)\n",
"iris.train <- iris[ind,]\n",
"iris.test <- iris[-ind,]\n",
"\n",
"# Fit naive Bayes to predict Species from all other columns\n",
"fit <- naiveBayes(Species~., data=iris.train)\n",
"\n",
"# Confusion matrix on the held-out rows\n",
"pred <- predict(newdata=iris.test, object = fit)\n",
"table(pred, iris.test$Species)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [
"## SVM\n",
"The Support Vector Machine is another classification method, known for being memory-efficient. In lecture we introduced the maximal margin classifier, since the support vector machine is essentially a generalization of the maximal margin classifier to higher dimensions and non-separable data.\n",
"\n",
"### Maximal Margin Classifier\n",
"Let's take another look at the plot we made for the kNN classifier. In that graph we have 2 classes, A and B, plotted as red and blue points respectively. The maximal margin classifier looks for an optimal linear separator, called a **hyperplane**, that divides the classes. Here the dataset is 2-dimensional, so the hyperplane is a line. In general, in a p-dimensional space a hyperplane is a flat subspace of dimension p-1; for example, in three-dimensional space a hyperplane is an ordinary 2-dimensional plane.\n",
"\n",
"Looking back at the plot from our example: \n",
"in two dimensions, a hyperplane is represented by the equation\n",
"\n",
"$$B_0 + B_1X_1 + B_2X_2 = 0$$\n",
"\n",
"You can distinguish the two classes by whether the left-hand side comes out negative or positive for a given point.\n",
"Fitting a maximal margin classifier is essentially the process of finding optimal coefficient values $B_0$, $B_1$ and $B_2$.\n",
"\n",
"After finding candidate hyperplanes, the classifier predicts a new data point based on which side of the hyperplane the point falls. In some situations the maximal margin classifier can find a perfect linear borderline that divides the 2 classes; however, most datasets do not admit such a linear division, and some data points will end up on the \"wrong\" side of the borderline. \n",
"\n",
"The first step in optimizing a hyperplane is to minimize the number of points that land on the \"wrong\" side. To do this, we focus on the borderline cases, specifically the data points that lie near the hyperplane. These points are called **support vectors**.\n"
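,
"\n",
"To make the sign test concrete, here is a tiny sketch in R. The coefficients $B_0$, $B_1$, and $B_2$ below are made-up values chosen only for illustration; they are not the result of actually fitting a maximal margin classifier to our data:\n",
"\n",
"```r\n",
"# Hypothetical hyperplane coefficients for B0 + B1*x1 + B2*x2 = 0\n",
"B0 <- -1.5; B1 <- 1; B2 <- 1\n",
"\n",
"# Two example points, one on each side of the line x1 + x2 = 1.5\n",
"pts <- rbind(c(2.0, 1.0),   # should land on the positive side\n",
"             c(0.0, 0.5))   # should land on the negative side\n",
"\n",
"# Evaluate the hyperplane equation and classify by sign\n",
"score <- B0 + B1 * pts[, 1] + B2 * pts[, 2]\n",
"ifelse(score > 0, \"class A\", \"class B\")\n",
"```"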
, "\n",
"### Support Vector Machines\n",
"We can think of the Support Vector Machine as a generalized maximal margin classifier for cases where the classes cannot be separated linearly. In this note we introduce the concepts of the **soft margin** and the **kernel trick**.\n",
"\n",
"In most cases, datasets are not linearly separable. Even when a dataset is separable, there are cases where we intentionally do not choose such a separation because of the **bias** and **variance** trade-off (Lecture 8). Consider two classes, blue and red, where the training data contain a single outlier lying close to the other class. The existence of that one outlier can drastically change the maximal margin hyperplane. In this case it is better to allow the outlier to fall on the wrong side of the boundary; an SVM that tolerates such violations is called a **soft margin support vector machine**.\n",
"\n",
"The mathematical way to apply this idea is a **cost function**, which here can be written as $-\\log(1-\\frac{1}{1+e^{cx}})$. The penalty grows as a point moves further onto the wrong side of the margin, and the coefficient c controls how quickly it grows. This function dictates how heavily support vectors are penalized for being mislabeled: if the penalty is high the SVM behaves as a **hard margin** classifier, and if the penalty is low it behaves as a **soft margin** classifier. (This connects with the Lecture 8 material.)\n",
"\n",
"### Non-linear Hyperplane - Kernel Trick\n",
"Now consider an example in which the 2 classes cannot be separated by any linear hyperplane, even though there is an obvious pattern: the natural boundary between them is an oval shape.\n",
"\n",
"This problem is easily fixed by adding a new feature $z = x^2 + y^2$. Plotted against x and z, the two classes become linearly separable.\n",
"\n",
"This process is called the **kernel trick** or a **kernel method**. The Support Vector Machine applies functions that transform lower-dimensional data into higher-dimensional data (in this case, x-y 2D data into x-y-z 3D data). This step is what makes it possible to find a **non-linear hyperplane**: after transforming the data, a linear hyperplane in the new space corresponds to a non-linear boundary in the original space.\n",
"\n",
"#### Support Vector Regression\n",
"Even though we introduced the Support Vector Machine as a classifier, you can also use it as a regression model. In that situation, the hyperplane generated by the SVM plays the role of the regression line. \n",
"\n",
"### SVM in R\n",
"In R, we use the *svm* function from the *e1071* library. The function inspects the response you give it and automatically performs classification when the response is categorical (a factor) and regression when the response is numeric. " ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [], "source": [
"# Import library e1071\n",
"library(\"e1071\")\n",
"\n",
"# For classification, the labels must be passed as a factor (non-numeric) value\n",
"svm.classifier <- svm(train_data, as.factor(train_label))" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "0.866666666666667" ], "text/latex": [ "0.866666666666667" ], "text/markdown": [ "0.866666666666667" ], "text/plain": [ "[1] 0.8666667" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [
"# Now test your SVM model with test_data\n",
"svm.prediction <- predict(svm.classifier, test_data)\n",
"\n",
"# Check the accuracy (there are 30 test points)\n",
"sum(svm.prediction == test_label) / 30" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ " svm.prediction\n", "test_label -1 1\n", " -1 12 3\n", " 1 1 14" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [
"# Confusion matrix for the SVM predictions\n",
"table(test_label, svm.prediction)" ] }
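, { "cell_type": "markdown", "metadata": {}, "source": [
"Before inspecting the fitted model, here is a small sketch of the kernel-trick idea from the previous section: adding a derived feature $z = x^2 + y^2$ turns a circular class boundary into one that a single threshold (a linear hyperplane) can handle. The toy data are generated on the spot purely for illustration, and this is not the radial-kernel computation that *svm* performs internally:\n",
"\n",
"```r\n",
"# Toy data: an \"inner\" class near the origin and an \"outer\" class on a ring around it\n",
"set.seed(1)\n",
"theta <- runif(100, 0, 2 * pi)\n",
"r     <- c(runif(50, 0, 1), runif(50, 2, 3))   # radii: 50 inner, then 50 outer\n",
"x <- r * cos(theta)\n",
"y <- r * sin(theta)\n",
"cls <- rep(c(\"inner\", \"outer\"), each = 50)\n",
"\n",
"# No straight line in the (x, y) plane separates the classes, but the derived\n",
"# feature z = x^2 + y^2 separates them with a single cutoff\n",
"z <- x^2 + y^2\n",
"range(z[cls == \"inner\"])   # at most 1\n",
"range(z[cls == \"outer\"])   # between 4 and 9\n",
"```\n",
"\n",
"A linear hyperplane in (x, y, z) space, for example z = 2, therefore corresponds to a circular boundary in the original (x, y) plane." ] }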
] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "\n", "Call:\n", "svm.default(x = train_data, y = as.factor(train_label))\n", "\n", "\n", "Parameters:\n", " SVM-Type: C-classification \n", " SVM-Kernel: radial \n", " cost: 1 \n", " gamma: 0.5 \n", "\n", "Number of Support Vectors: 36\n", "\n", " ( 18 18 )\n", "\n", "\n", "Number of Classes: 2 \n", "\n", "Levels: \n", " -1 1\n", "\n", "\n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "summary(svm.classifier)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Optimization\n", "\n", "Under *Prameters* we see *cost* and *gamma*. These values are the parameters that we can optimize to improve the model. \n", "\n", "*cost* represents the parameter for the soft margin cost function. This function dictates how much they *penalize* support vectors for being mislabeled. The greater the cost value is, the smaller the margin will become. Intuitively, the classifier tends to overfit when c is too large, but the classifier will lose its prediction ability if it allows too many mislabled support vectors.\n", "\n", "*gamma* represents the coefficient of nonlinear kernel function (outside of the scope of this lecture). \n", "\n", "$$F(x,y) = e^{(-\\gamma |x-y|^2)}$$\n", "\n", "Intuitively, a small gamma means that the difference between parameters will have larger infuluence in variance of the data. This will allow support vectors to have larger influence on classification while larger gamma value will limit the effect of support vectors. In this case, support vectors will have their influence locally.\n", "\n", "\n", "*tune* function from *e1071* library will iteratively try the cost and gamma value within the range you define and give you the optimal parameter.\n", "\n", "For this example, we are going to try cost from 0.1, 10, 100 and 1000, and gamma value of .5 to 2." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "\n", "Parameter tuning of ‘svm’:\n", "\n", "- sampling method: 10-fold cross validation \n", "\n", "- best parameters:\n", " gamma cost\n", " 0.5 0.1\n", "\n", "- best performance: 0.07142857 \n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Optimize SVM classifier\n", "tune.svm(x = train_data, y = factor(train_label), cost = 10^(-1:3), gamma =c(.5:2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Look under the best parameters. Those values are the suggestions made by *tune* function. Now we can create new svm classifier with suggested values." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "0.8" ], "text/latex": [ "0.8" ], "text/markdown": [ "0.8" ], "text/plain": [ "[1] 0.8" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Create new classifier with different parameters\n", "tuned_svm_classifier <- svm(train_data, as.factor(train_label), cost = 0.1, gamma = 0.5)\n", "tuned_prediction <- predict(tuned_svm_classifier, test_data)\n", "\n", "# Check the accuracy\n", "sum(tuned_prediction == test_label) / 30" ] } ], "metadata": { "kernelspec": { "display_name": "R", "language": "R", "name": "ir" }, "language_info": { "codemirror_mode": "r", "file_extension": ".r", "mimetype": "text/x-r-source", "name": "R", "pygments_lexer": "r", "version": "3.3.1" } }, "nbformat": 4, "nbformat_minor": 0 }