{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Release Highlights for scikit-learn 0.24\n\n.. currentmodule:: sklearn\n\nWe are pleased to announce the release of scikit-learn 0.24! Many bug fixes\nand improvements were added, as well as some new key features. We detail\nbelow a few of the major features of this release. **For an exhaustive list of\nall the changes**, please refer to the `release notes `.\n\nTo install the latest version (with pip)::\n\n pip install --upgrade scikit-learn\n\nor with conda::\n\n conda install -c conda-forge scikit-learn\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Successive Halving estimators for tuning hyper-parameters\nSuccessive Halving, a state of the art method, is now available to\nexplore the space of the parameters and identify their best combination.\n:class:`~sklearn.model_selection.HalvingGridSearchCV` and\n:class:`~sklearn.model_selection.HalvingRandomSearchCV` can be\nused as drop-in replacement for\n:class:`~sklearn.model_selection.GridSearchCV` and\n:class:`~sklearn.model_selection.RandomizedSearchCV`.\nSuccessive Halving is an iterative selection process illustrated in the\nfigure below. The first iteration is run with a small amount of resources,\nwhere the resource typically corresponds to the number of training samples,\nbut can also be an arbitrary integer parameter such as `n_estimators` in a\nrandom forest. Only a subset of the parameter candidates are selected for the\nnext iteration, which will be run with an increasing amount of allocated\nresources. Only a subset of candidates will last until the end of the\niteration process, and the best parameter candidate is the one that has the\nhighest score on the last iteration.\n\nRead more in the `User Guide ` (note:\nthe Successive Halving estimators are still :term:`experimental\n`).\n\n.. figure:: ../model_selection/images/sphx_glr_plot_successive_halving_iterations_001.png\n :target: ../model_selection/plot_successive_halving_iterations.html\n :align: center\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nfrom scipy.stats import randint\nfrom sklearn.experimental import enable_halving_search_cv # noqa\nfrom sklearn.model_selection import HalvingRandomSearchCV\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import make_classification\n\nrng = np.random.RandomState(0)\n\nX, y = make_classification(n_samples=700, random_state=rng)\n\nclf = RandomForestClassifier(n_estimators=10, random_state=rng)\n\nparam_dist = {\n \"max_depth\": [3, None],\n \"max_features\": randint(1, 11),\n \"min_samples_split\": randint(2, 11),\n \"bootstrap\": [True, False],\n \"criterion\": [\"gini\", \"entropy\"],\n}\n\nrsh = HalvingRandomSearchCV(\n estimator=clf, param_distributions=param_dist, factor=2, random_state=rng\n)\nrsh.fit(X, y)\nrsh.best_params_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Native support for categorical features in HistGradientBoosting estimators\n:class:`~sklearn.ensemble.HistGradientBoostingClassifier` and\n:class:`~sklearn.ensemble.HistGradientBoostingRegressor` now have native\nsupport for categorical features: they can consider splits on non-ordered,\ncategorical data. Read more in the `User Guide\n`.\n\n.. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Improved performance of HistGradientBoosting estimators\nThe memory footprint of :class:`ensemble.HistGradientBoostingRegressor` and\n:class:`ensemble.HistGradientBoostingClassifier` has been significantly\nreduced during calls to `fit`. In addition, histogram initialization is now\ndone in parallel, which results in slight speed improvements.\nSee more in the [Benchmark page](https://scikit-learn.org/scikit-learn-benchmarks/).\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New self-training meta-estimator\nA new self-training implementation, based on [Yarowsky's algorithm](https://doi.org/10.3115/981658.981684), can now be used with any\nclassifier that implements :term:`predict_proba`. The sub-classifier\nwill behave as a semi-supervised classifier, allowing it to learn from\nunlabeled data.\nRead more in the `User Guide `.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nfrom sklearn import datasets\nfrom sklearn.semi_supervised import SelfTrainingClassifier\nfrom sklearn.svm import SVC\n\nrng = np.random.RandomState(42)\niris = datasets.load_iris()\n# Mark roughly 30% of the samples as unlabeled by setting their target to -1\nrandom_unlabeled_points = rng.rand(iris.target.shape[0]) < 0.3\niris.target[random_unlabeled_points] = -1\nsvc = SVC(probability=True, gamma=\"auto\")\nself_training_model = SelfTrainingClassifier(svc)\nself_training_model.fit(iris.data, iris.target)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New SequentialFeatureSelector transformer\nA new iterative transformer to select features is available:\n:class:`~sklearn.feature_selection.SequentialFeatureSelector`.\nSequential Feature Selection can add features one at a time (forward\nselection) or remove features from the list of available features\n(backward selection), based on cross-validated score maximization.\nSee the `User Guide `.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.feature_selection import SequentialFeatureSelector\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.datasets import load_iris\n\nX, y = load_iris(return_X_y=True, as_frame=True)\nfeature_names = X.columns\nknn = KNeighborsClassifier(n_neighbors=3)\nsfs = SequentialFeatureSelector(knn, n_features_to_select=2)\nsfs.fit(X, y)\nprint(\n    \"Features selected by forward sequential selection: \"\n    f\"{feature_names[sfs.get_support()].tolist()}\"\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New PolynomialCountSketch kernel approximation function\nThe new :class:`~sklearn.kernel_approximation.PolynomialCountSketch`\napproximates a polynomial expansion of a feature space when used with linear\nmodels, but uses much less memory 
than\n:class:`~sklearn.preprocessing.PolynomialFeatures`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import fetch_covtype\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.kernel_approximation import PolynomialCountSketch\nfrom sklearn.linear_model import LogisticRegression\n\nX, y = fetch_covtype(return_X_y=True)\npipe = make_pipeline(\n    MinMaxScaler(),\n    PolynomialCountSketch(degree=2, n_components=300),\n    LogisticRegression(max_iter=1000),\n)\nX_train, X_test, y_train, y_test = train_test_split(\n    X, y, train_size=5000, test_size=10000, random_state=42\n)\npipe.fit(X_train, y_train).score(X_test, y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For comparison, here is the score of a linear baseline for the same data:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "linear_baseline = make_pipeline(MinMaxScaler(), LogisticRegression(max_iter=1000))\nlinear_baseline.fit(X_train, y_train).score(X_test, y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Individual Conditional Expectation plots\nA new kind of partial dependence plot is available: the Individual\nConditional Expectation (ICE) plot. ICE plots visualize the dependence of the\nprediction on a feature for each sample separately, with one line per sample.\nSee the `User Guide `.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.ensemble import RandomForestRegressor\nfrom sklearn.datasets import fetch_california_housing\n\n# from sklearn.inspection import plot_partial_dependence\nfrom sklearn.inspection import PartialDependenceDisplay\n\nX, y = fetch_california_housing(return_X_y=True, as_frame=True)\nfeatures = [\"MedInc\", \"AveOccup\", \"HouseAge\", \"AveRooms\"]\nest = RandomForestRegressor(n_estimators=10)\nest.fit(X, y)\n\n# plot_partial_dependence has been removed in version 1.2. From 1.2, use\n# PartialDependenceDisplay instead.\n# display = plot_partial_dependence(\ndisplay = PartialDependenceDisplay.from_estimator(\n    est,\n    X,\n    features,\n    kind=\"individual\",\n    subsample=50,\n    n_jobs=3,\n    grid_resolution=20,\n    random_state=0,\n)\ndisplay.figure_.suptitle(\n    \"Partial dependence of house value on non-location features\\n\"\n    \"for the California housing dataset, with RandomForestRegressor\"\n)\ndisplay.figure_.subplots_adjust(hspace=0.3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New Poisson splitting criterion for DecisionTreeRegressor\nThe integration of Poisson regression estimation continues from version 0.23.\n:class:`~sklearn.tree.DecisionTreeRegressor` now supports a new `'poisson'`\nsplitting criterion. 
Setting `criterion=\"poisson\"` might be a good choice\nif your target is a count or a frequency.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.tree import DecisionTreeRegressor\nfrom sklearn.model_selection import train_test_split\nimport numpy as np\n\nn_samples, n_features = 1000, 20\nrng = np.random.RandomState(0)\nX = rng.randn(n_samples, n_features)\n# positive integer target correlated with X[:, 5] with many zeros:\ny = rng.poisson(lam=np.exp(X[:, 5]) / 2)\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=rng)\nregressor = DecisionTreeRegressor(criterion=\"poisson\", random_state=0)\nregressor.fit(X_train, y_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New documentation improvements\n\nNew examples and documentation pages have been added, in a continuous effort\nto improve the understanding of machine learning practices:\n\n- a new section about `common pitfalls and recommended\n practices `,\n- an example illustrating how to `statistically compare the performance of\n models `\n evaluated using :class:`~sklearn.model_selection.GridSearchCV`,\n- an example on how to `interpret coefficients of linear models\n `,\n- an `example\n `\n comparing Principal Component Regression and Partial Least Squares.\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }