{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Classification of text documents using sparse features\n\nThis is an example showing how scikit-learn can be used to classify documents by\ntopics using a [Bag of Words approach](https://en.wikipedia.org/wiki/Bag-of-words_model). This example uses a\nTf-idf-weighted document-term sparse matrix to encode the features and\ndemonstrates various classifiers that can efficiently handle sparse matrices.\n\nFor document analysis via an unsupervised learning approach, see the example\nscript `sphx_glr_auto_examples_text_plot_document_clustering.py`.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading and vectorizing the 20 newsgroups text dataset\n\nWe define a function to load data from `20newsgroups_dataset`, which\ncomprises around 18,000 newsgroups posts on 20 topics split in two subsets:\none for training (or development) and the other one for testing (or for\nperformance evaluation). Note that, by default, the text samples contain some\nmessage metadata such as `'headers'`, `'footers'` (signatures) and `'quotes'`\nto other posts. The `fetch_20newsgroups` function therefore accepts a\nparameter named `remove` to attempt stripping such information that can make\nthe classification problem \"too easy\". This is achieved using simple\nheuristics that are neither perfect nor standard, hence disabled by default.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from time import time\n\nfrom sklearn.datasets import fetch_20newsgroups\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\ncategories = [\n \"alt.atheism\",\n \"talk.religion.misc\",\n \"comp.graphics\",\n \"sci.space\",\n]\n\n\ndef size_mb(docs):\n return sum(len(s.encode(\"utf-8\")) for s in docs) / 1e6\n\n\ndef load_dataset(verbose=False, remove=()):\n \"\"\"Load and vectorize the 20 newsgroups dataset.\"\"\"\n\n data_train = fetch_20newsgroups(\n subset=\"train\",\n categories=categories,\n shuffle=True,\n random_state=42,\n remove=remove,\n )\n\n data_test = fetch_20newsgroups(\n subset=\"test\",\n categories=categories,\n shuffle=True,\n random_state=42,\n remove=remove,\n )\n\n # order of labels in `target_names` can be different from `categories`\n target_names = data_train.target_names\n\n # split target in a training set and a test set\n y_train, y_test = data_train.target, data_test.target\n\n # Extracting features from the training data using a sparse vectorizer\n t0 = time()\n vectorizer = TfidfVectorizer(\n sublinear_tf=True, max_df=0.5, min_df=5, stop_words=\"english\"\n )\n X_train = vectorizer.fit_transform(data_train.data)\n duration_train = time() - t0\n\n # Extracting features from the test data using the same vectorizer\n t0 = time()\n X_test = vectorizer.transform(data_test.data)\n duration_test = time() - t0\n\n feature_names = vectorizer.get_feature_names_out()\n\n if verbose:\n # compute size of loaded data\n data_train_size_mb = size_mb(data_train.data)\n data_test_size_mb = size_mb(data_test.data)\n\n print(\n f\"{len(data_train.data)} documents - \"\n f\"{data_train_size_mb:.2f}MB (training set)\"\n )\n print(f\"{len(data_test.data)} documents - {data_test_size_mb:.2f}MB (test set)\")\n print(f\"{len(target_names)} categories\")\n print(\n 
f\"vectorize training done in {duration_train:.3f}s \"\n f\"at {data_train_size_mb / duration_train:.3f}MB/s\"\n )\n print(f\"n_samples: {X_train.shape[0]}, n_features: {X_train.shape[1]}\")\n print(\n f\"vectorize testing done in {duration_test:.3f}s \"\n f\"at {data_test_size_mb / duration_test:.3f}MB/s\"\n )\n print(f\"n_samples: {X_test.shape[0]}, n_features: {X_test.shape[1]}\")\n\n return X_train, X_test, y_train, y_test, feature_names, target_names" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Analysis of a bag-of-words document classifier\n\nWe will now train a classifier twice, once on the text samples including\nmetadata and once after stripping the metadata. For both cases we will analyze\nthe classification errors on a test set using a confusion matrix and inspect\nthe coefficients that define the classification function of the trained\nmodels.\n\n### Model without metadata stripping\n\nWe start by using the custom function `load_dataset` to load the data without\nmetadata stripping.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X_train, X_test, y_train, y_test, feature_names, target_names = load_dataset(\n verbose=True\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our first model is an instance of the\n:class:`~sklearn.linear_model.RidgeClassifier` class. This is a linear\nclassification model that uses the mean squared error on {-1, 1} encoded\ntargets, one for each possible class. Contrary to\n:class:`~sklearn.linear_model.LogisticRegression`,\n:class:`~sklearn.linear_model.RidgeClassifier` does not\nprovide probabilistic predictions (no `predict_proba` method),\nbut it is often faster to train.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.linear_model import RidgeClassifier\n\nclf = RidgeClassifier(tol=1e-2, solver=\"sparse_cg\")\nclf.fit(X_train, y_train)\npred = clf.predict(X_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We plot the confusion matrix of this classifier to find if there is a pattern\nin the classification errors.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n\nfrom sklearn.metrics import ConfusionMatrixDisplay\n\nfig, ax = plt.subplots(figsize=(10, 5))\nConfusionMatrixDisplay.from_predictions(y_test, pred, ax=ax)\nax.xaxis.set_ticklabels(target_names)\nax.yaxis.set_ticklabels(target_names)\n_ = ax.set_title(\n f\"Confusion Matrix for {clf.__class__.__name__}\\non the original documents\"\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The confusion matrix highlights that documents of the `alt.atheism` class are\noften confused with documents with the class `talk.religion.misc` class and\nvice-versa which is expected since the topics are semantically related.\n\nWe also observe that some documents of the `sci.space` class can be misclassified as\n`comp.graphics` while the converse is much rarer. A manual inspection of those\nbadly classified documents would be required to get some insights on this\nasymmetry. 
It could be that the vocabulary of the space topic is\nmore specific than the vocabulary for computer graphics.\n\nWe can gain a deeper understanding of how this classifier makes its decisions\nby looking at the words with the highest average feature effects:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nimport pandas as pd\n\n\ndef plot_feature_effects():\n    # learned coefficients weighted by frequency of appearance\n    average_feature_effects = clf.coef_ * np.asarray(X_train.mean(axis=0)).ravel()\n\n    for i, label in enumerate(target_names):\n        top5 = np.argsort(average_feature_effects[i])[-5:][::-1]\n        if i == 0:\n            top = pd.DataFrame(feature_names[top5], columns=[label])\n            top_indices = top5\n        else:\n            top[label] = feature_names[top5]\n            top_indices = np.concatenate((top_indices, top5), axis=None)\n    top_indices = np.unique(top_indices)\n    predictive_words = feature_names[top_indices]\n\n    # plot feature effects\n    bar_size = 0.25\n    padding = 0.75\n    y_locs = np.arange(len(top_indices)) * (4 * bar_size + padding)\n\n    fig, ax = plt.subplots(figsize=(10, 8))\n    for i, label in enumerate(target_names):\n        ax.barh(\n            y_locs + (i - 2) * bar_size,\n            average_feature_effects[i, top_indices],\n            height=bar_size,\n            label=label,\n        )\n    ax.set(\n        yticks=y_locs,\n        yticklabels=predictive_words,\n        ylim=[\n            0 - 4 * bar_size,\n            len(top_indices) * (4 * bar_size + padding) - 4 * bar_size,\n        ],\n    )\n    ax.legend(loc=\"lower right\")\n\n    print(\"top 5 keywords per class:\")\n    print(top)\n\n    return ax\n\n\n_ = plot_feature_effects().set_title(\"Average feature effect on the original data\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can observe that the most predictive words are often strongly positively\nassociated with a single class and negatively associated with all the other\nclasses. Most of those positive associations are quite easy to interpret.\nHowever, some words such as `\"god\"` and `\"people\"` are positively associated with\nboth `\"talk.religion.misc\"` and `\"alt.atheism\"` since those two classes expectedly\nshare some common vocabulary. Notice, however, that there are also words such as\n`\"christian\"` and `\"morality\"` that are only positively associated with\n`\"talk.religion.misc\"`.
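\n\nSuch associations can also be checked numerically for a single word. The short\nsketch below is an addition to this example: it reuses the weighting defined in\n`plot_feature_effects` and prints the average feature effect of the word `\"god\"`\nfor each class, assuming that this word is present in the learned vocabulary:\n\n```python\n# Same weighting as in plot_feature_effects above.\naverage_feature_effects = clf.coef_ * np.asarray(X_train.mean(axis=0)).ravel()\n\n# Column index of the word \"god\" in the tf-idf vocabulary (assumed present).\nword_idx = list(feature_names).index(\"god\")\nfor label, effect in zip(target_names, average_feature_effects[:, word_idx]):\n    print(f\"{label:>20}: {effect:+.4f}\")\n```\n\n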
Furthermore, in this version of the dataset, the word\n`\"caltech\"` is one of the top predictive features for atheism. This is caused by\npollution in the dataset coming from metadata such as the email addresses of the\nsenders of previous messages in the discussion, as can be seen below:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data_train = fetch_20newsgroups(\n    subset=\"train\", categories=categories, shuffle=True, random_state=42\n)\n\nfor doc in data_train.data:\n    if \"caltech\" in doc:\n        print(doc)\n        break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Such headers, signature footers (and quoted metadata from previous messages)\ncan be considered side information that artificially reveals the newsgroup by\nidentifying the registered members, and we would rather have our text\nclassifier learn only from the \"main content\" of each text document instead\nof relying on the leaked identity of the writers.\n\n### Model with metadata stripping\n\nThe `remove` option of the 20 newsgroups dataset loader in scikit-learn allows\none to heuristically attempt to filter out some of this unwanted metadata that\nmakes the classification problem artificially easier. Be aware that such\nfiltering of the text contents is far from perfect.\n\nLet us try to leverage this option to train a text classifier that does not\nrely too much on this kind of metadata to make its decisions:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "(\n    X_train,\n    X_test,\n    y_train,\n    y_test,\n    feature_names,\n    target_names,\n) = load_dataset(remove=(\"headers\", \"footers\", \"quotes\"))\n\nclf = RidgeClassifier(tol=1e-2, solver=\"sparse_cg\")\nclf.fit(X_train, y_train)\npred = clf.predict(X_test)\n\nfig, ax = plt.subplots(figsize=(10, 5))\nConfusionMatrixDisplay.from_predictions(y_test, pred, ax=ax)\nax.xaxis.set_ticklabels(target_names)\nax.yaxis.set_ticklabels(target_names)\n_ = ax.set_title(\n    f\"Confusion Matrix for {clf.__class__.__name__}\\\\non filtered documents\"\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This confusion matrix makes it more evident that the scores of the\nmodel trained with metadata were over-optimistic. The classification problem\nwithout access to the metadata is less accurate but more representative of the\nintended text classification problem.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "_ = plot_feature_effects().set_title(\"Average feature effects on filtered documents\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the next section we keep the dataset without metadata to compare several\nclassifiers.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Benchmarking classifiers\n\nScikit-learn provides many different kinds of classification algorithms. In\nthis section we will train a selection of those classifiers on the same text\nclassification problem and measure both their generalization performance\n(accuracy on the test set) and their computation performance (speed), at\ntraining time and at testing time.
\nFor this purpose, we define the following benchmarking utilities:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn import metrics\nfrom sklearn.utils.extmath import density\n\n\ndef benchmark(clf, custom_name=False):\n    print(\"_\" * 80)\n    print(\"Training: \")\n    print(clf)\n    t0 = time()\n    clf.fit(X_train, y_train)\n    train_time = time() - t0\n    print(f\"train time: {train_time:.3}s\")\n\n    t0 = time()\n    pred = clf.predict(X_test)\n    test_time = time() - t0\n    print(f\"test time: {test_time:.3}s\")\n\n    score = metrics.accuracy_score(y_test, pred)\n    print(f\"accuracy: {score:.3}\")\n\n    if hasattr(clf, \"coef_\"):\n        print(f\"dimensionality: {clf.coef_.shape[1]}\")\n        print(f\"density: {density(clf.coef_)}\")\n        print()\n\n    print()\n    if custom_name:\n        clf_descr = str(custom_name)\n    else:\n        clf_descr = clf.__class__.__name__\n    return clf_descr, score, train_time, test_time" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now train and test 8 different classification models on this dataset and\ngather performance results for each of them. The goal of this study is to\nhighlight the computation/accuracy tradeoffs of different types of classifiers\nfor such a multi-class text classification problem.\n\nNotice that the most important hyperparameter values were tuned using a grid\nsearch procedure not shown in this notebook for the sake of simplicity. See\nthe example script\n`sphx_glr_auto_examples_model_selection_plot_grid_search_text_feature_extraction.py`\nfor a demo on how such tuning can be done.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegression, SGDClassifier\nfrom sklearn.naive_bayes import ComplementNB\nfrom sklearn.neighbors import KNeighborsClassifier, NearestCentroid\nfrom sklearn.svm import LinearSVC\n\nresults = []\nfor clf, name in (\n    (LogisticRegression(C=5, max_iter=1000), \"Logistic Regression\"),\n    (RidgeClassifier(alpha=1.0, solver=\"sparse_cg\"), \"Ridge Classifier\"),\n    (KNeighborsClassifier(n_neighbors=100), \"kNN\"),\n    (RandomForestClassifier(), \"Random Forest\"),\n    # L2 penalty Linear SVC\n    (LinearSVC(C=0.1, dual=False, max_iter=1000), \"Linear SVC\"),\n    # L2 penalty Linear SGD\n    (\n        SGDClassifier(\n            loss=\"log_loss\", alpha=1e-4, n_iter_no_change=3, early_stopping=True\n        ),\n        \"log-loss SGD\",\n    ),\n    # NearestCentroid (aka Rocchio classifier)\n    (NearestCentroid(), \"NearestCentroid\"),\n    # Sparse naive Bayes classifier\n    (ComplementNB(alpha=0.1), \"Complement naive Bayes\"),\n):\n    print(\"=\" * 80)\n    print(name)\n    results.append(benchmark(clf, name))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plot accuracy, training and test time of each classifier\n\nThe scatter plots show the trade-off between the test accuracy and the\ntraining and testing time of each classifier.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "indices = np.arange(len(results))\n\nresults = [[x[i] for x in results] for i in range(4)]\n\nclf_names, score, training_time, test_time = results\ntraining_time = np.array(training_time)\ntest_time = np.array(test_time)\n\nfig, ax1 = plt.subplots(figsize=(10, 8))\nax1.scatter(score, training_time, s=60)\nax1.set(\n    title=\"Score-training time trade-off\",\n    yscale=\"log\",
accuracy\",\n ylabel=\"training time (s)\",\n)\nfig, ax2 = plt.subplots(figsize=(10, 8))\nax2.scatter(score, test_time, s=60)\nax2.set(\n title=\"Score-test time trade-off\",\n yscale=\"log\",\n xlabel=\"test accuracy\",\n ylabel=\"test time (s)\",\n)\n\nfor i, txt in enumerate(clf_names):\n ax1.annotate(txt, (score[i], training_time[i]))\n ax2.annotate(txt, (score[i], test_time[i]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The naive Bayes model has the best trade-off between score and\ntraining/testing time, while Random Forest is both slow to train, expensive to\npredict and has a comparatively bad accuracy. This is expected: for\nhigh-dimensional prediction problems, linear models are often better suited as\nmost problems become linearly separable when the feature space has 10,000\ndimensions or more.\n\nThe difference in training speed and accuracy of the linear models can be\nexplained by the choice of the loss function they optimize and the kind of\nregularization they use. Be aware that some linear models with the same loss\nbut a different solver or regularization configuration may yield different\nfitting times and test accuracy. We can observe on the second plot that once\ntrained, all linear models have approximately the same prediction speed which\nis expected because they all implement the same prediction function.\n\nKNeighborsClassifier has a relatively low accuracy and has the highest testing\ntime. The long prediction time is also expected: for each prediction the model\nhas to compute the pairwise distances between the testing sample and each\ndocument in the training set, which is computationally expensive. Furthermore,\nthe \"curse of dimensionality\" harms the ability of this model to yield\ncompetitive accuracy in the high dimensional feature space of text\nclassification problems.\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }