{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", " \n", "## [mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course \n", "\n", "Author: [Yury Kashnitsky](https://yorko.github.io). Translated and edited by [Christina Butsko](https://www.linkedin.com/in/christinabutsko/), [Nerses Bagiyan](https://www.linkedin.com/in/nersesbagiyan/), [Yulia Klimushina](https://www.linkedin.com/in/yuliya-klimushina-7168a9139), and [Yuanyuan Pao](https://www.linkedin.com/in/yuanyuanpao/). This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for any non-commercial purpose." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#
Topic 4. Linear Classification and Regression\n", "##
Part 4. Where Logistic Regression Is Good and Where It's Not\n", " \n", " \n", "## Article outline\n", "1. [Analysis of IMDB movie reviews](#1.-Analysis-of-IMDB-movie-reviews)\n", "2. [A Simple Count of Words](#2.-A-Simple-Count-of-Words)\n", "3. [XOR-Problem](#3.-XOR-Problem)\n", "4. [Demo assignment](#4.-Demo-assignment)\n", "5. [Useful resources](#5.-Useful-resources)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Analysis of IMDB movie reviews" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now for a little practice! We want to solve the problem of binary classification of IMDB movie reviews. We have a labeled training set: 12,500 reviews marked as good and another 12,500 marked as bad. Here, it's not easy to get started with machine learning right away because we don't have the matrix $X$ yet; we need to prepare it. We will use a simple approach: the bag-of-words model. Each review is represented by a vector of indicators of whether each word from the whole corpus appears in that review. The corpus is the set of all user reviews. A small toy example of this encoding is shown at the beginning of Section 2 below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "from sklearn.datasets import load_files\n", "from sklearn.feature_extraction.text import CountVectorizer\n", "from sklearn.linear_model import LogisticRegression" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**To get started, we automatically download the dataset from [here](http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz) and unpack it into the data folder alongside the other datasets. The dataset is briefly described [here](http://ai.stanford.edu/~amaas/data/sentiment/).
The training and test sets each contain 12.5k positive and 12.5k negative reviews.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import tarfile\n", "from io import BytesIO\n", "\n", "import requests\n", "\n", "url = \"http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\"\n", "\n", "\n", "def load_imdb_dataset(extract_path=\"../../data\", overwrite=False):\n", "    # check whether the dataset is already in place\n", "    if (\n", "        os.path.isfile(os.path.join(extract_path, \"aclImdb\", \"README\"))\n", "        and not overwrite\n", "    ):\n", "        print(\"IMDB dataset is already in place.\")\n", "        return\n", "\n", "    print(\"Downloading the dataset from: \", url)\n", "    response = requests.get(url)\n", "\n", "    tar = tarfile.open(mode=\"r:gz\", fileobj=BytesIO(response.content))\n", "    tar.extractall(extract_path)\n", "    tar.close()\n", "\n", "\n", "load_imdb_dataset()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# change if you have it in an alternative location\n", "PATH_TO_IMDB = \"../../data/aclImdb\"\n", "\n", "reviews_train = load_files(\n", "    os.path.join(PATH_TO_IMDB, \"train\"), categories=[\"pos\", \"neg\"]\n", ")\n", "text_train, y_train = reviews_train.data, reviews_train.target\n", "\n", "reviews_test = load_files(os.path.join(PATH_TO_IMDB, \"test\"), categories=[\"pos\", \"neg\"])\n", "text_test, y_test = reviews_test.data, reviews_test.target" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# # Alternatively, load data from previously pickled objects.\n", "# import pickle\n", "# with open('../../data/imdb_text_train.pkl', 'rb') as f:\n", "#     text_train = pickle.load(f)\n", "# with open('../../data/imdb_text_test.pkl', 'rb') as f:\n", "#     text_test = pickle.load(f)\n", "# with open('../../data/imdb_target_train.pkl', 'rb') as f:\n", "#     y_train = pickle.load(f)\n", "# with open('../../data/imdb_target_test.pkl', 'rb') as f:\n", "#     y_test = pickle.load(f)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Number of documents in training data: %d\" % len(text_train))\n", "print(np.bincount(y_train))\n", "print(\"Number of documents in test data: %d\" % len(text_test))\n", "print(np.bincount(y_test))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Here are a few examples of the reviews.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(text_train[1])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y_train[1]  # bad review" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "text_train[2]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y_train[2]  # good review" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# import pickle\n", "# with open('../../data/imdb_text_train.pkl', 'wb') as f:\n", "#     pickle.dump(text_train, f)\n", "# with open('../../data/imdb_text_test.pkl', 'wb') as f:\n", "#     pickle.dump(text_test, f)\n", "# with open('../../data/imdb_target_train.pkl', 'wb') as f:\n", "#     pickle.dump(y_train, f)\n", "# with open('../../data/imdb_target_test.pkl', 'wb') as f:\n", "#     pickle.dump(y_test, f)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. A Simple Count of Words" ] },
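{ "cell_type": "markdown", "metadata": {}, "source": [ "**Before processing the full corpus, here is a minimal toy illustration of the bag-of-words encoding described above (a made-up three-document example, not part of the IMDB data): each text becomes a vector of counts over a shared vocabulary.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# a toy bag-of-words sketch on three tiny \"documents\"\n", "toy_texts = [\"the movie was good\", \"the movie was bad\", \"good plot but bad acting\"]\n", "toy_cv = CountVectorizer()\n", "toy_matrix = toy_cv.fit_transform(toy_texts)\n", "# the shared vocabulary: one column per word\n", "print(toy_cv.get_feature_names())\n", "# each row is one document, each entry is the count of the corresponding word\n", "print(toy_matrix.toarray())" ] },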
{ "cell_type": "markdown", "metadata": {}, "source": [ "**First, we will create a dictionary of all the words using CountVectorizer**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cv = CountVectorizer()\n", "cv.fit(text_train)\n", "\n", "len(cv.vocabulary_)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**If you look at the examples of \"words\" (let's call them tokens), you can see that we have omitted many of the important steps in text processing (automatic text processing can itself be a completely separate series of articles).**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(cv.get_feature_names()[:50])\n", "print(cv.get_feature_names()[50000:50050])" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "**Second, we encode the training set texts as vectors of word counts using the dictionary built above. We'll use the sparse format.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train = cv.transform(text_train)\n", "X_train" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Let's see how our transformation worked.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(text_train[19726])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train[19726].nonzero()[1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train[19726].nonzero()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Third, we will apply the same operations to the test set.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_test = cv.transform(text_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**The next step is to train Logistic Regression.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "logit = LogisticRegression(solver=\"lbfgs\", n_jobs=-1, random_state=7)\n", "logit.fit(X_train, y_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Let's look at accuracy on both the training and the test sets.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "round(logit.score(X_train, y_train), 3), round(logit.score(X_test, y_test), 3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**The coefficients of the model can be beautifully displayed.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def visualize_coefficients(classifier, feature_names, n_top_features=25):\n", "    # get coefficients with large absolute values\n", "    coef = classifier.coef_.ravel()\n", "    positive_coefficients = np.argsort(coef)[-n_top_features:]\n", "    negative_coefficients = np.argsort(coef)[:n_top_features]\n", "    interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])\n", "    # plot them\n", "    plt.figure(figsize=(15, 5))\n", "    colors = [\"red\" if c < 0 else \"blue\" for c in coef[interesting_coefficients]]\n", "    plt.bar(np.arange(2 * n_top_features), coef[interesting_coefficients], color=colors)\n", "    feature_names = np.array(feature_names)\n", "    plt.xticks(\n", "        np.arange(1, 1 + 2 * n_top_features),\n", "        feature_names[interesting_coefficients],\n",
"        rotation=60,\n", "        ha=\"right\",\n", "    );" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_grid_scores(grid, param_name):\n", "    plt.plot(\n", "        grid.param_grid[param_name],\n", "        grid.cv_results_[\"mean_train_score\"],\n", "        color=\"green\",\n", "        label=\"train\",\n", "    )\n", "    plt.plot(\n", "        grid.param_grid[param_name],\n", "        grid.cv_results_[\"mean_test_score\"],\n", "        color=\"red\",\n", "        label=\"test\",\n", "    )\n", "    plt.legend();" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "visualize_coefficients(logit, cv.get_feature_names())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**To make our model better, we can optimize the regularization coefficient for the `Logistic Regression`. We'll use `sklearn.pipeline` because `CountVectorizer` should only be applied to the training data (so as to not \"peek\" into the test set and not count word frequencies there). In this case, `pipeline` determines the correct sequence of actions: apply `CountVectorizer`, then train `Logistic Regression`.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "from sklearn.pipeline import make_pipeline\n", "\n", "text_pipe_logit = make_pipeline(\n", "    CountVectorizer(),\n", "    # for some reason n_jobs > 1 won't work\n", "    # with GridSearchCV's n_jobs > 1\n", "    LogisticRegression(solver=\"lbfgs\", n_jobs=1, random_state=7),\n", ")\n", "\n", "text_pipe_logit.fit(text_train, y_train)\n", "print(text_pipe_logit.score(text_test, y_test))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "from sklearn.model_selection import GridSearchCV\n", "\n", "param_grid_logit = {\"logisticregression__C\": np.logspace(-5, 0, 6)}\n", "grid_logit = GridSearchCV(\n", "    text_pipe_logit, param_grid_logit, return_train_score=True, cv=3, n_jobs=-1\n", ")\n", "\n", "grid_logit.fit(text_train, y_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Let's print the best value of $C$ and the corresponding cross-validated accuracy:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grid_logit.best_params_, grid_logit.best_score_" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_grid_scores(grid_logit, \"logisticregression__C\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And the accuracy on the held-out test set:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grid_logit.score(text_test, y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Now let's do the same with a random forest. We see that, with logistic regression, we achieve better accuracy with less effort.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.ensemble import RandomForestClassifier" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "forest = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=17)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "forest.fit(X_train, y_train)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "round(forest.score(X_test, y_test), 3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. 
XOR-Problem\n", "Let's now consider an example where linear models work poorly.\n", "\n", "Linear classification methods still define a very simple separating surface: a hyperplane. The most famous toy example in which the classes cannot be separated by a hyperplane (or a line) without errors is the \"XOR problem\".\n", "\n", "XOR is the \"exclusive OR\", a Boolean function with the following truth table:\n", "\n", "| $x_1$ | $x_2$ | XOR |\n", "|:---:|:---:|:---:|\n", "| 0 | 0 | 0 |\n", "| 0 | 1 | 1 |\n", "| 1 | 0 | 1 |\n", "| 1 | 1 | 0 |\n", "\n", "XOR is also the name given to a simple binary classification problem in which the classes are presented as diagonally extended intersecting point clouds." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# creating the dataset\n", "rng = np.random.RandomState(0)\n", "X = rng.randn(200, 2)\n", "y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.scatter(X[:, 0], X[:, 1], s=30, c=y, cmap=plt.cm.Paired);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Obviously, one cannot draw a single straight line to separate one class from the other without errors. Therefore, logistic regression performs poorly on this task." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_boundary(clf, X, y, plot_title):\n", "    xx, yy = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))\n", "    clf.fit(X, y)\n", "    # predicted probability of the positive class for each point on the grid\n", "    Z = clf.predict_proba(np.vstack((xx.ravel(), yy.ravel())).T)[:, 1]\n", "    Z = Z.reshape(xx.shape)\n", "\n", "    image = plt.imshow(\n", "        Z,\n", "        interpolation=\"nearest\",\n", "        extent=(xx.min(), xx.max(), yy.min(), yy.max()),\n", "        aspect=\"auto\",\n", "        origin=\"lower\",\n", "        cmap=plt.cm.PuOr_r,\n", "    )\n", "    # the decision boundary is where the predicted probability equals 0.5\n", "    plt.contour(xx, yy, Z, levels=[0.5], linewidths=2, linestyles=\"--\")\n", "    plt.scatter(X[:, 0], X[:, 1], s=30, c=y, cmap=plt.cm.Paired)\n", "    plt.xticks(())\n", "    plt.yticks(())\n", "    plt.xlabel(r\"$x_1$\")\n", "    plt.ylabel(r\"$x_2$\")\n", "    plt.axis([-3, 3, -3, 3])\n", "    plt.colorbar(image)\n", "    plt.title(plot_title, fontsize=12);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_boundary(\n", "    LogisticRegression(solver=\"lbfgs\"), X, y, \"Logistic Regression, XOR problem\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But if we feed polynomial features (here, up to degree 2) as input, the problem is solved." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.pipeline import Pipeline\n", "from sklearn.preprocessing import PolynomialFeatures" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "logit_pipe = Pipeline(\n", "    [\n", "        (\"poly\", PolynomialFeatures(degree=2)),\n", "        (\"logit\", LogisticRegression(solver=\"lbfgs\")),\n", "    ]\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_boundary(logit_pipe, X, y, \"Logistic Regression + quadratic features. XOR problem\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, logistic regression has still produced a hyperplane, but in the 6-dimensional feature space spanned by $1, x_1, x_2, x_1^2, x_1 x_2$, and $x_2^2$. When we project back to the original feature space, $(x_1, x_2)$, the boundary is nonlinear.\n", "\n", "In practice, polynomial features do help, but building them explicitly is computationally inefficient. SVM with the kernel trick works much faster: only the pairwise similarities between objects (inner products in a high-dimensional space, defined by the kernel function) are computed, so there is no need to construct a combinatorially large number of features." ] },
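{ "cell_type": "markdown", "metadata": {}, "source": [ "For a quick illustration of this point, here is a minimal sketch (it uses `sklearn.svm.SVC`, which is not used elsewhere in this notebook): an SVM with an RBF kernel separates the same XOR data without any explicit polynomial features. `probability=True` is only set so that the `plot_boundary` helper, which calls `predict_proba`, can be reused." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# sketch: an RBF-kernel SVM on the same XOR data, with no explicit feature construction\n", "from sklearn.svm import SVC\n", "\n", "plot_boundary(\n", "    SVC(kernel=\"rbf\", probability=True, random_state=17),\n", "    X,\n", "    y,\n", "    \"SVM with RBF kernel, XOR problem\",\n", ")" ] },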
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Demo assignment\n", "To practice with linear models, you can complete [this assignment](https://www.kaggle.com/kashnitsky/a4-demo-sarcasm-detection-with-logit) where you'll build a sarcasm detection model. The assignment is just for practice; it comes with a [solution](https://www.kaggle.com/kashnitsky/a4-demo-sarcasm-detection-with-logit-solution).\n", "\n", "## 5. Useful resources\n", "- Medium [\"story\"](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-4-linear-classification-and-regression-44a41b9b5220) based on this notebook\n", "- Main course [site](https://mlcourse.ai), [course repo](https://github.com/Yorko/mlcourse.ai), and YouTube [channel](https://www.youtube.com/watch?v=QKTuw4PNOsU&list=PLVlY_7IJCMJeRfZ68eVfEcu-UcN9BbwiX)\n", "- Course materials as a [Kaggle Dataset](https://www.kaggle.com/kashnitsky/mlcourse)\n", "- If you read Russian: an [article](https://habrahabr.ru/company/ods/blog/323890/) on Habr.com with roughly the same material, and a [lecture](https://youtu.be/oTXGQ-_oqvI) on YouTube\n", "- A nice and concise overview of linear models is given in the book [\"Deep Learning\"](http://www.deeplearningbook.org) (I. Goodfellow, Y. Bengio, and A. Courville).\n", "- Linear models are covered in practically every ML book. We recommend \"Pattern Recognition and Machine Learning\" (C. Bishop) and \"Machine Learning: A Probabilistic Perspective\" (K. Murphy).\n", "- If you prefer a thorough overview of linear models from a statistician's viewpoint, then look at \"The Elements of Statistical Learning\" (T. Hastie, R. Tibshirani, and J. Friedman).\n", "- The book \"Machine Learning in Action\" (P. Harrington) will walk you through implementations of classic ML algorithms in pure Python.\n", "- The [scikit-learn](http://scikit-learn.org/stable/documentation.html) library. These guys work hard on writing really clear documentation.\n", "- SciPy 2017 [scikit-learn tutorial](https://github.com/amueller/scipy-2017-sklearn) by Alexandre Gramfort and Andreas Mueller.\n", "- One more [ML course](https://github.com/diefimov/MTH594_MachineLearning) with very good materials.\n", "- [Implementations](https://github.com/rushter/MLAlgorithms) of many ML algorithms. Search for linear regression and logistic regression." ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 1 }