{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Scaling the regularization parameter for SVCs\n\nThe following example illustrates the effect of scaling the regularization\nparameter when using `svm` for `classification `.\nFor SVC classification, we are interested in a risk minimization for the\nequation:\n\n\n\\begin{align}C \\sum_{i=1, n} \\mathcal{L} (f(x_i), y_i) + \\Omega (w)\\end{align}\n\nwhere\n\n- $C$ is used to set the amount of regularization\n- $\\mathcal{L}$ is a `loss` function of our samples and our model parameters.\n- $\\Omega$ is a `penalty` function of our model parameters\n\nIf we consider the loss function to be the individual error per sample, then the\ndata-fit term, or the sum of the error for each sample, increases as we add more\nsamples. The penalization term, however, does not increase.\n\nWhen using, for example, `cross validation `, to set the\namount of regularization with `C`, there would be a different amount of samples\nbetween the main problem and the smaller problems within the folds of the cross\nvalidation.\n\nSince the loss function dependens on the amount of samples, the latter\ninfluences the selected value of `C`. The question that arises is \"How do we\noptimally adjust C to account for the different amount of training samples?\"\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data generation\n\nIn this example we investigate the effect of reparametrizing the regularization\nparameter `C` to account for the number of samples when using either L1 or L2\npenalty. For such purpose we create a synthetic dataset with a large number of\nfeatures, out of which only a few are informative. We therefore expect the\nregularization to shrink the coefficients towards zero (L2 penalty) or exactly\nzero (L1 penalty).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import make_classification\n\nn_samples, n_features = 100, 300\nX, y = make_classification(\n n_samples=n_samples, n_features=n_features, n_informative=5, random_state=1\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## L1-penalty case\nIn the L1 case, theory says that provided a strong regularization, the\nestimator cannot predict as well as a model knowing the true distribution\n(even in the limit where the sample size grows to infinity) as it may set some\nweights of otherwise predictive features to zero, which induces a bias. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## L1-penalty case\nIn the L1 case, theory says that given strong regularization, the estimator\ncannot predict as well as a model knowing the true distribution (even in the\nlimit where the sample size grows to infinity), as it may set some weights of\notherwise predictive features to zero, which induces a bias. It does say,\nhowever, that it is possible to find the right set of non-zero parameters as\nwell as their signs by tuning `C`.\n\nWe define a linear SVC with the L1 penalty.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.svm import LinearSVC\n\nmodel_l1 = LinearSVC(penalty=\"l1\", loss=\"squared_hinge\", dual=False, tol=1e-3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We compute the mean test score for different values of `C` via\ncross-validation.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nimport pandas as pd\n\nfrom sklearn.model_selection import ShuffleSplit, validation_curve\n\nCs = np.logspace(-2.3, -1.3, 10)\ntrain_sizes = np.linspace(0.3, 0.7, 3)\nlabels = [f\"fraction: {train_size}\" for train_size in train_sizes]\nshuffle_params = {\n    \"test_size\": 0.3,\n    \"n_splits\": 150,\n    \"random_state\": 1,\n}\n\nresults = {\"C\": Cs}\nfor label, train_size in zip(labels, train_sizes):\n    cv = ShuffleSplit(train_size=train_size, **shuffle_params)\n    train_scores, test_scores = validation_curve(\n        model_l1,\n        X,\n        y,\n        param_name=\"C\",\n        param_range=Cs,\n        cv=cv,\n        n_jobs=2,\n    )\n    results[label] = test_scores.mean(axis=1)\nresults = pd.DataFrame(results)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(12, 6))\n\n# plot results without scaling C\nresults.plot(x=\"C\", ax=axes[0], logx=True)\naxes[0].set_ylabel(\"CV score\")\naxes[0].set_title(\"No scaling\")\n\nfor label in labels:\n    best_C = results.loc[results[label].idxmax(), \"C\"]\n    axes[0].axvline(x=best_C, linestyle=\"--\", color=\"grey\", alpha=0.7)\n\n# plot results by scaling C\nfor train_size_idx, label in enumerate(labels):\n    train_size = train_sizes[train_size_idx]\n    results_scaled = results[[label]].assign(\n        C_scaled=Cs * float(n_samples * np.sqrt(train_size))\n    )\n    results_scaled.plot(x=\"C_scaled\", ax=axes[1], logx=True, label=label)\n    best_C_scaled = results_scaled[\"C_scaled\"].loc[results[label].idxmax()]\n    axes[1].axvline(x=best_C_scaled, linestyle=\"--\", color=\"grey\", alpha=0.7)\n\naxes[1].set_title(\"Scaling C by sqrt(1 / n_samples)\")\n\n_ = fig.suptitle(\"Effect of scaling C with L1 penalty\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the region of small `C` (strong regularization) all the coefficients\nlearned by the models are zero, leading to severe underfitting. Indeed, the\naccuracy in this region is at the chance level.\n\nUsing the default scale results in a somewhat stable optimal value of `C`,\nwhereas the transition out of the underfitting region depends on the number of\ntraining samples. The reparametrization leads to even more stable results.\n\nSee e.g. theorem 3 of :arxiv:`On the prediction performance of the Lasso\n<1402.1700>` or :arxiv:`Simultaneous analysis of Lasso and Dantzig selector\n<0801.1095>`, where the regularization parameter is always assumed to be\nproportional to 1 / sqrt(n_samples).\n\n" ] },
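{ "cell_type": "markdown", "metadata": {}, "source": [ "As a side check (our addition, not part of the original example), we can refit\nthe L1 model at the unscaled `C` that maximizes the mean CV score for the\nlargest training fraction and count the non-zero coefficients, confirming the\nsparsity induced by the L1 penalty.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Sketch: refit on the full dataset at the selected C and inspect how many\n# coefficients the L1 penalty leaves non-zero.\nbest_C = results.set_index(\"C\")[labels[-1]].idxmax()\nmodel_l1.set_params(C=best_C).fit(X, y)\nn_nonzero = np.count_nonzero(model_l1.coef_)\nprint(f\"C={best_C:.4f}: {n_nonzero} non-zero coefficients out of {n_features}\")" ] },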
{ "cell_type": "markdown", "metadata": {}, "source": [ "## L2-penalty case\nWe can do a similar experiment with the L2 penalty. In this case, the theory\nsays that in order to achieve prediction consistency, the penalty parameter\nshould be kept constant as the number of samples grows.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model_l2 = LinearSVC(penalty=\"l2\", loss=\"squared_hinge\", dual=True)\nCs = np.logspace(-8, 4, 11)\n\nlabels = [f\"fraction: {train_size}\" for train_size in train_sizes]\nresults = {\"C\": Cs}\nfor label, train_size in zip(labels, train_sizes):\n    cv = ShuffleSplit(train_size=train_size, **shuffle_params)\n    train_scores, test_scores = validation_curve(\n        model_l2,\n        X,\n        y,\n        param_name=\"C\",\n        param_range=Cs,\n        cv=cv,\n        n_jobs=2,\n    )\n    results[label] = test_scores.mean(axis=1)\nresults = pd.DataFrame(results)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(12, 6))\n\n# plot results without scaling C\nresults.plot(x=\"C\", ax=axes[0], logx=True)\naxes[0].set_ylabel(\"CV score\")\naxes[0].set_title(\"No scaling\")\n\nfor label in labels:\n    best_C = results.loc[results[label].idxmax(), \"C\"]\n    axes[0].axvline(x=best_C, linestyle=\"--\", color=\"grey\", alpha=0.8)\n\n# plot results by scaling C\nfor train_size_idx, label in enumerate(labels):\n    results_scaled = results[[label]].assign(\n        C_scaled=Cs * float(n_samples * np.sqrt(train_sizes[train_size_idx]))\n    )\n    results_scaled.plot(x=\"C_scaled\", ax=axes[1], logx=True, label=label)\n    best_C_scaled = results_scaled[\"C_scaled\"].loc[results[label].idxmax()]\n    axes[1].axvline(x=best_C_scaled, linestyle=\"--\", color=\"grey\", alpha=0.8)\naxes[1].set_title(\"Scaling C by sqrt(1 / n_samples)\")\n\nfig.suptitle(\"Effect of scaling C with L2 penalty\")\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the L2 penalty case, the reparametrization seems to have a smaller impact\non the stability of the optimal value for the regularization. The transition\nout of the overfitting region occurs over a wider range, and the accuracy does\nnot seem to degrade all the way to chance level.\n\nTry increasing the value to `n_splits=1_000` for smoother results in the L2\ncase; this is not shown here due to limitations of the documentation builder.\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }