{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Classification\n", "\n", "Smile's classification algorithms are in the package `smile.classification` and all algorithms implement the interface `Classifier` that has the method `predict` to predict the class label of an instance. An overloaded version in `SoftClassifier` can also calculate the a posteriori probabilities besides the class label.\n", "\n", "Some algorithms with online learning capability also implement the interface `OnlineClassifier`. Online learning is a model of induction that learns one instance at a time. The method `update` updates the model with a new instance.\n", "\n", "The high-level operators are defined in Scala package object of `smile.classification`. In what follows, we will discuss each algorithm, their high-level Scala API, and examples." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import $ivy.`com.github.haifengl::smile-scala:3.1.0`\n", "import $ivy.`org.slf4j:slf4j-simple:2.0.13` \n", "\n", "import scala.language.postfixOps\n", "import smile._\n", "import smile.math._\n", "import smile.math.distance._\n", "import smile.math.kernel._\n", "import smile.math.matrix._\n", "import smile.math.matrix.Matrix._\n", "import smile.math.rbf._\n", "import smile.stat.distribution._\n", "import smile.data._\n", "import smile.data.formula._\n", "import smile.data.measure._\n", "import smile.data.`type`._\n", "import smile.base.cart.SplitRule\n", "import smile.base.mlp._\n", "import smile.base.rbf.RBF\n", "import smile.classification._\n", "import smile.feature._\n", "import smile.validation._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In below examples, we will use the `iris` data as samples." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "val iris = read.arff(\"../data/weka/iris.arff\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also define a `Formula` that specifies the class labels and predictors. In the example, the right-hand side of formula is empty, which means that all the rest of variables in the data frame will be used as predictors." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "val formula: Formula = \"class\" ~\n", "val x = formula.x(iris).toArray\n", "val y = formula.y(iris).toIntArray" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lastly, we extract the predictors and class labels with the formula." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Nearest Neighbor\n", "\n", "The k-nearest neighbor algorithm (k-NN) is a method for classifying objects by a majority vote of its neighbors, with the object being assigned to the class most common amongst its `k` nearest neighbors (`k` is typically small). k-NN is a type of instance-based learning, or lazy learning where the function is only approximated locally and all computation is deferred until classification.val mat = matrix(100, 100)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cv.classification(10, x, y) { case (x, y) => knn(x, y, 3) }" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The funciton `cv.classification(k, x , y)` performs `k`-fold cross validation and takes a code block that trains the model." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Decision Trees\n", "\n", "A decision tree can be learned by splitting the training set into subsets based on an attribute value test. This process is repeated on each derived subset in a recursive manner called recursive partitioning. The recursion is completed when the subset at a node all has the same value of the target variable, or when splitting no longer adds value to the predictions.\n", "\n", "The algorithms that are used for constructing decision trees usually work top-down by choosing a variable at each step that is the next best variable to use in splitting the set of items. \"Best\" is defined by how well the variable splits the set into homogeneous subsets that have the same value of the target variable. Different algorithms use different formulae for measuring \"best\". Used by the CART (Classification and Regression Tree) algorithm, Gini impurity is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it were randomly labeled according to the distribution of labels in the subset. Gini impurity can be computed by summing the probability of each item being chosen times the probability of a mistake in categorizing that item. It reaches its minimum (zero) when all cases in the node fall into a single target category. Information gain is another popular measure, used by the ID3, C4.5 and C5.0 algorithms. Information gain is based on the concept of entropy used in information theory. For categorical variables with different number of levels, however, information gain are biased in favor of those attributes with more levels. Instead, one may employ the information gain ratio, which solves the drawback of information gain.\n", "```\n", "def cart(formula: Formula,\n", " data: DataFrame,\n", " splitRule: SplitRule = SplitRule.GINI,\n", " maxDepth: Int = 20,\n", " maxNodes: Int = 0,\n", " nodeSize: Int = 5): DecisionTree\n", "``` \n", "where `maxDepth` controls the maximum depth of the tree as a way of regularization. Similarly, the parameter `maxNodes`, if positive, is the maximum number of leaf nodes in the tree as a regularization control. If 0 or negative, `maxNodes` will be ignored. The parameter `nodeSize` controls the minimum number of samples in the leaf nodes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cv.classification(10, formula, iris) { case (formula, data) => cart(formula, data) }" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Decision tree techniques have a number of advantages over many of those alternative techniques.\n", "\n", "- **Simple to understand and interpret**:\n", "In most cases, the interpretation of results summarized in a tree is very simple. This simplicity is useful not only for purposes of rapid classification of new observations, but can also often yield a much simpler \"model\" for explaining why observations are classified or predicted in a particular manner.\n", "\n", "- **Able to handle both numerical and categorical data**:\n", "Other techniques are usually specialized in analyzing datasets that have only one type of variable.\n", "\n", "- **Nonparametric and nonlinear**:\n", "The final results of using tree methods for classification or regression can be summarized in a series of (usually few) logical if-then conditions (tree nodes). 
Therefore, there is no implicit assumption that the underlying relationships between the predictor variables and the dependent variable are linear, follow some specific non-linear link function, or that they are even monotonic in nature. Thus, tree methods are particularly well suited for data mining tasks, where there is often little a priori knowledge and no coherent set of theories or predictions regarding which variables are related and how. In those types of data analytics, tree methods can often reveal simple relationships between just a few variables that could have easily gone unnoticed using other analytic techniques.\n", "\n", "One major problem with classification and regression trees is their high variance. Often a small change in the data can result in a very different series of splits, making interpretation somewhat precarious. Besides, decision-tree learners can create over-complex trees that cause over-fitting. Mechanisms such as pruning are necessary to avoid this problem. Another limitation of trees is the lack of smoothness of the prediction surface.\n", "\n", "Some techniques such as bagging, boosting, and random forest use more than one decision tree for their analysis." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Random Forest\n", "\n", "Random forest is an ensemble classifier that consists of many decision trees and outputs the majority vote of individual trees. The method combines the bagging idea and the random selection of features.\n", "\n", "Each tree is constructed using the following algorithm:\n", "\n", " - If the number of cases in the training set is `N`, randomly sample `N` cases with replacement from the original data. This sample will be the training set for growing the tree.\n", " - If there are `M` input variables, a number `m` << `M` is specified such that at each node, `m` variables are selected at random out of the `M` and the best split on these `m` is used to split the node. The value of `m` is held constant during the forest growing.\n", " - Each tree is grown to the largest extent possible. There is no pruning.\n", " \n", "The high-level operator `randomForest` exposes these choices as hyperparameters: `ntrees` is the number of trees, and `mtry` is the number of attributes randomly selected at each node to choose the best split. Although the original random forest algorithm trains each tree fully without pruning, it is sometimes useful to control the tree size, which can be achieved by the parameter `maxNodes`. The tree can also be regularized by limiting the minimum number of observations in trees' terminal nodes with the parameter `nodeSize`. When `subsample = 1.0`, we use sampling with replacement to train each tree as described above. If `subsample < 1.0`, we instead select a subset of samples (without replacement) to train each tree. If the classes are not balanced, the user should provide the `classWeight` (proportional to the class priors) so that the sampling is done in a stratified way. Otherwise, small classes may not be sampled sufficiently."
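, "\n", "\n", "For example, a smaller forest of size-limited trees might be trained as follows (a minimal sketch: the parameter names follow the description above, the chosen values are arbitrary, and prediction on a whole `DataFrame` is assumed to be available):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "// train a random forest of 100 trees, each limited to 32 leaf nodes\n", "// (hypothetical hyperparameter values, for illustration only)\n", "val forest = randomForest(formula, iris, ntrees = 100, maxNodes = 32)\n", "\n", "// predict the class labels of the training data\n", "val labels = forest.predict(iris)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As with the decision tree, we estimate the accuracy with 10-fold cross validation:"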
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cv.classification(10, formula, iris) { case (formula, data) => randomForest(formula, data) }" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The advantages of random forest are:\n", "\n", "- For many data sets, it produces a highly accurate classifier.\n", "- It runs efficiently on large data sets.\n", "- It can handle thousands of input variables without variable deletion.\n", "- It gives estimates of what variables are important in the classification.\n", "- It generates an internal unbiased estimate of the generalization error as the forest building progresses.\n", "- It has an effective method for estimating missing data and maintains accuracy when a large proportion of the data are missing.\n", "\n", "The disadvantages are\n", "\n", "- Random forests are prone to over-fitting for some datasets. This is even more pronounced on noisy data.\n", "- For data including categorical variables with different number of levels, random forests are biased in favor of those attributes with more levels. Therefore, the variable importance scores from random forest are not reliable for this type of data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Gradient Boosted Trees\n", "\n", "The idea of gradient boosting originated in the observation that boosting can be interpreted as an optimization algorithm on a suitable cost function. In particular, the boosting algorithms can be abstracted as iterative functional gradient descent algorithms. That is, algorithms that optimize a cost function over function space by iteratively choosing a function (weak hypothesis) that points in the negative gradient direction Gradient boosting is typically used with CART regression trees of a fixed size as base learners.\n", "\n", "Generic gradient boosting at the *t-th* step would fit a regression tree to pseudo-residuals. Let `J` be the number of its leaves. The tree partitions the input space into J disjoint regions and predicts a constant value in each region. The parameter `J` controls the maximum allowed level of interaction between variables in the model. With `J = 2` (decision stumps), no interaction between variables is allowed. With `J = 3` the model may include effects of the interaction between up to two variables, and so on. Hastie et al. comment that typically `4 ≤ J ≤ 8` work well for boosting and results are fairly insensitive to the choice of in this range, `J = 2` is insufficient for many applications, and `J > 10` is unlikely to be required.\n", "\n", "Fitting the training set too closely can lead to degradation of the model's generalization ability. Several so-called regularization techniques reduce this over-fitting effect by constraining the fitting procedure. One natural regularization parameter is the number of gradient boosting iterations T (i.e. the number of trees in the model when the base learner is a decision tree). Increasing T reduces the error on training set, but setting it too high may lead to over-fitting. An optimal value of T is often selected by monitoring prediction error on a separate validation data set.\n", "\n", "Another regularization approach is the shrinkage which times a parameter `η` (called the \"learning rate\") to update term. Empirically it has been found that using small learning rates (such as `η < 0.1`) yields dramatic improvements in model's generalization ability over gradient boosting without shrinking (`η = 1`). 
However, it comes at the price of increasing computational time both during training and prediction: a lower learning rate requires more iterations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cv.classification(10, formula, iris) { case (formula, data) => gbm(formula, data) }" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Soon after the introduction of gradient boosting, Friedman proposed a minor modification to the algorithm, motivated by Breiman's bagging method. Specifically, he proposed that at each iteration of the algorithm, a base learner should be fit on a subsample of the training set drawn at random without replacement. Friedman observed a substantial improvement in gradient boosting's accuracy with this modification.\n", "\n", "The subsample size is some constant fraction `f` of the size of the training set. When `f = 1`, the algorithm is deterministic and identical to the one described above. Smaller values of `f` introduce randomness into the algorithm and help prevent over-fitting, acting as a kind of regularization. The algorithm also becomes faster, because regression trees have to be fit to smaller datasets at each iteration. Typically, `f` is set to 0.5, meaning that one half of the training set is used to build each base learner.\n", "\n", "Also, like in bagging, sub-sampling allows one to define an out-of-bag estimate of the prediction performance improvement by evaluating predictions on those observations which were not used in the building of the next base learner. Out-of-bag estimates help avoid the need for an independent validation dataset, but often underestimate actual performance improvement and the optimal number of iterations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## SVM\n", "\n", "The basic support vector machine (SVM) is a binary linear classifier which chooses the hyperplane that represents the largest separation, or margin, between the two classes. If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum margin classifier."
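, "\n", "\n", "Concretely, a linear SVM learns a hyperplane `w·x + b = 0`. For separable data scaled so that `y(w·x + b) ≥ 1` holds on every training instance, the distance between the two supporting hyperplanes `w·x + b = ±1` is `2/‖w‖`, so maximizing the margin is equivalent to minimizing `‖w‖²/2` subject to those constraints.\n", "\n", "In the example below, we train an SVM with a Gaussian kernel on the `svmguide1` data, converting its labels to `+1`/`-1`:"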
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "val train = read.libsvm(\"../data/libsvm/svmguide1\")\n", "val test = read.libsvm(\"../data/libsvm/svmguide1.t\")\n", "\n", "val n = train.size\n", "val x = Array.ofDim[Double](n, 4)\n", "val y = Array.ofDim[Int](n)\n", "(0 until n) foreach { i =>\n", " val sample = train.get(i)\n", " sample.x.forEach(e => x(i)(e.i) = e.x)\n", " y(i) = if (sample.label > 0) +1 else -1\n", "}\n", "\n", "val testn = test.size\n", "val testx = Array.ofDim[Double](testn, 4)\n", "val testy = Array.ofDim[Int](testn)\n", "(0 until testn) foreach { i =>\n", " val sample = test.get(i)\n", " sample.x.forEach(e => testx(i)(e.i) = e.x)\n", " testy(i) = if (sample.label > 0) +1 else -1\n", "}\n", "\n", "val kernel = new GaussianKernel(90)\n", "val model = SVM.fit(x, y, kernel, 100, 1E-3)\n", "\n", "val prediction = Validation.test(model, testx)\n", "println(s\"accuracy = ${accuracy(testy, prediction)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If there exists no hyperplane that can perfectly split the positive and negative instances, the soft margin method will choose a hyperplane that splits the instances as cleanly as possible, while still maximizing the distance to the nearest cleanly split instances.\n", "\n", "The nonlinear SVMs are created by applying the kernel trick to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a nonlinear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be nonlinear and the transformed space be high dimensional. For example, the feature space corresponding Gaussian kernel is a Hilbert space of infinite dimension. Thus though the classifier is a hyperplane in the high-dimensional feature space, it may be nonlinear in the original input space. Maximum margin classifiers are well regularized, so the infinite dimension does not spoil the results.\n", "\n", "The effectiveness of SVM depends on the selection of kernel, the kernel's parameters, and soft margin parameter `C`. Given a kernel, best combination of C and kernel's parameters is often selected by a grid-search with cross validation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Logistic Regression\n", "\n", "Logistic regression (logit model) is a generalized linear model used for binomial regression. Logistic regression applies maximum likelihood estimation after transforming the dependent into a logit variable. A logit is the natural log of the odds of the dependent equaling a certain value or not (usually 1 in binary logistic models, the highest value in multinomial models). In this way, logistic regression estimates the odds of a certain event (value) occurring.\n", "```\n", "def logit(x: Array[Array[Double]],\n", " y: Array[Int],\n", " lambda: Double = 0.0,\n", " tol: Double = 1E-5,\n", " maxIter: Int = 500): LogisticRegression\n", "```\n", "where the parameter `lambda` (> 0) gives a \"regularized\" estimate of linear weights which often has superior generalization performance, especially when the dimensionality is high." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cv.classification(10, x, y) { case (x, y) => logit(x, y) }" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Logistic regression has many analogies to ordinary least squares (OLS) regression. 
Unlike OLS regression, however, logistic regression does not assume a linear relationship between the raw values of the independent variables and the dependent variable, does not require normally distributed variables, does not assume homoscedasticity, and in general has less stringent requirements.\n", "\n", "Compared with linear discriminant analysis, logistic regression has several advantages:\n", "\n", "- It is more robust: the independent variables don't have to be normally distributed, or have equal variance in each group.\n", "- It does not assume a linear relationship between the independent variables and the dependent variable.\n", "- It may handle nonlinear effects since one can add explicit interaction and power terms.\n", "\n", "However, it requires much more data to achieve stable, meaningful results.\n", "\n", "Logistic regression also has strong connections with neural networks and maximum entropy modeling. For example, binary logistic regression is equivalent to a one-layer, single-output neural network with a logistic activation function trained under log loss. Similarly, multinomial logistic regression is equivalent to a one-layer, softmax-output neural network.\n", "\n", "Logistic regression estimation also obeys the maximum entropy principle, and thus logistic regression is sometimes called \"maximum entropy modeling\", and the resulting classifier the \"maximum entropy classifier\"." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Neural Networks\n", "\n", "A multilayer perceptron neural network consists of several layers of nodes, interconnected through weighted acyclic arcs from each preceding layer to the following, without lateral or feedback connections. Each node calculates a transformed weighted linear combination of its inputs (output activations from the preceding layer), with one of the weights acting as a trainable bias connected to a constant input. The transformation, called the activation function, is a bounded non-decreasing (non-linear) function, such as the sigmoid function (ranging from `0` to `1`). Another popular activation function is the hyperbolic tangent, which is equivalent to the sigmoid function in shape but ranges from `-1` to `1`. More specialized activation functions include radial basis functions, which are used in RBF networks.\n", "\n", "For neural networks, the input patterns should usually be scaled or standardized. Commonly, each input variable is scaled into the interval `[0, 1]` or standardized to have mean `0` and standard deviation `1`."
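, "\n", "\n", "The sketch below shows the idea of standardization in plain Scala (Smile also ships ready-made scalers in `smile.feature`, one of which is used in the next cell):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "// a plain Scala sketch: scale each column to mean 0 and standard deviation 1\n", "def standardize(data: Array[Array[Double]]): Array[Array[Double]] = {\n", "  val p = data(0).length\n", "  val mean = Array.tabulate(p)(j => data.map(_(j)).sum / data.length)\n", "  val sd = Array.tabulate(p) { j =>\n", "    math.sqrt(data.map(row => math.pow(row(j) - mean(j), 2)).sum / (data.length - 1))\n", "  }\n", "  // columns with zero variance would need special handling in real code\n", "  data.map(row => Array.tabulate(p)(j => (row(j) - mean(j)) / sd(j)))\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the following example, we instead use Smile's `WinsorScaler`, fit between the 1st and 99th percentiles to limit the influence of extreme values, to preprocess the `pendigits` data. We then train an MLP with a single sigmoid hidden layer of 50 neurons by online updates over several epochs:"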
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "val pendigits = read.csv(\"../data/classification/pendigits.txt\", delimiter = '\\t', header = false)\n", "\n", "val formula: Formula = \"V17\" ~\n", "val data = formula.x(pendigits).toArray\n", "val y = formula.y(pendigits).toIntArray\n", "\n", "val scaler = WinsorScaler.fit(data, 0.01, 0.99)\n", "val x = scaler.transform(data)\n", "\n", "val p = x(0).length\n", "val k = MathEx.max(y) + 1\n", "\n", "MathEx.setSeed(19650218); // to get repeatable results.\n", "cv.classification(10, x, y) { (x, y) =>\n", " val model = new MLP(p,\n", " Layer.sigmoid(50),\n", " Layer.mle(k, OutputFunction.SIGMOID)\n", " )\n", "\n", " (0 until 8) foreach { eopch =>\n", " val permutation = MathEx.permutate(x.length)\n", " for (i <- permutation) {\n", " model.update(x(i), y(i))\n", " }\n", " }\n", "\n", " model\n", "}" ] } ], "metadata": { "kernelspec": { "display_name": "Scala (2.13)", "language": "scala", "name": "scala213" }, "language_info": { "codemirror_mode": "text/x-scala", "file_extension": ".scala", "mimetype": "text/x-scala", "name": "scala", "nbconvert_exporter": "script", "version": "2.13.1" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": false, "sideBar": false, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": false, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 4 }