{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Release Highlights for scikit-learn 1.4\n\n.. currentmodule:: sklearn\n\nWe are pleased to announce the release of scikit-learn 1.4! Many bug fixes\nand improvements were added, as well as some new key features. We detail\nbelow a few of the major features of this release. **For an exhaustive list of\nall the changes**, please refer to the `release notes `.\n\nTo install the latest version (with pip)::\n\n pip install --upgrade scikit-learn\n\nor with conda::\n\n conda install -c conda-forge scikit-learn\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## HistGradientBoosting Natively Supports Categorical DTypes in DataFrames\n:class:`ensemble.HistGradientBoostingClassifier` and\n:class:`ensemble.HistGradientBoostingRegressor` now directly supports dataframes with\ncategorical features. Here we have a dataset with a mixture of\ncategorical and numerical features:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import fetch_openml\n\nX_adult, y_adult = fetch_openml(\"adult\", version=2, return_X_y=True)\n\n# Remove redundant and non-feature columns\nX_adult = X_adult.drop([\"education-num\", \"fnlwgt\"], axis=\"columns\")\nX_adult.dtypes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By setting `categorical_features=\"from_dtype\"`, the gradient boosting classifier\ntreats the columns with categorical dtypes as categorical features in the\nalgorithm:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.ensemble import HistGradientBoostingClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import roc_auc_score\n\nX_train, X_test, y_train, y_test = train_test_split(X_adult, y_adult, random_state=0)\nhist = HistGradientBoostingClassifier(categorical_features=\"from_dtype\")\n\nhist.fit(X_train, y_train)\ny_decision = hist.decision_function(X_test)\nprint(f\"ROC AUC score is {roc_auc_score(y_test, y_decision)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Polars output in `set_output`\nscikit-learn's transformers now support polars output with the `set_output` API.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import polars as pl\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.compose import ColumnTransformer\n\ndf = pl.DataFrame(\n {\"height\": [120, 140, 150, 110, 100], \"pet\": [\"dog\", \"cat\", \"dog\", \"cat\", \"cat\"]}\n)\npreprocessor = ColumnTransformer(\n [\n (\"numerical\", StandardScaler(), [\"height\"]),\n (\"categorical\", OneHotEncoder(sparse_output=False), [\"pet\"]),\n ],\n verbose_feature_names_out=False,\n)\npreprocessor.set_output(transform=\"polars\")\n\ndf_out = preprocessor.fit_transform(df)\ndf_out" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(f\"Output type: {type(df_out)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Missing value support for Random Forest\nThe classes :class:`ensemble.RandomForestClassifier` and\n:class:`ensemble.RandomForestRegressor` now support missing values. When training\nevery individual tree, the splitter evaluates each potential threshold with the\nmissing values going to the left and right nodes. More details in the\n`User Guide `.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nfrom sklearn.ensemble import RandomForestClassifier\n\nX = np.array([0, 1, 6, np.nan]).reshape(-1, 1)\ny = [0, 0, 1, 1]\n\nforest = RandomForestClassifier(random_state=0).fit(X, y)\nforest.predict(X)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Add support for monotonic constraints in tree-based models\nWhile we added support for monotonic constraints in histogram-based gradient boosting\nin scikit-learn 0.23, we now support this feature for all other tree-based models as\ntrees, random forests, extra-trees, and exact gradient boosting. Here, we show this\nfeature for random forest on a regression problem.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nfrom sklearn.inspection import PartialDependenceDisplay\nfrom sklearn.ensemble import RandomForestRegressor\n\nn_samples = 500\nrng = np.random.RandomState(0)\nX = rng.randn(n_samples, 2)\nnoise = rng.normal(loc=0.0, scale=0.01, size=n_samples)\ny = 5 * X[:, 0] + np.sin(10 * np.pi * X[:, 0]) - noise\n\nrf_no_cst = RandomForestRegressor().fit(X, y)\nrf_cst = RandomForestRegressor(monotonic_cst=[1, 0]).fit(X, y)\n\ndisp = PartialDependenceDisplay.from_estimator(\n rf_no_cst,\n X,\n features=[0],\n feature_names=[\"feature 0\"],\n line_kw={\"linewidth\": 4, \"label\": \"unconstrained\", \"color\": \"tab:blue\"},\n)\nPartialDependenceDisplay.from_estimator(\n rf_cst,\n X,\n features=[0],\n line_kw={\"linewidth\": 4, \"label\": \"constrained\", \"color\": \"tab:orange\"},\n ax=disp.axes_,\n)\ndisp.axes_[0, 0].plot(\n X[:, 0], y, \"o\", alpha=0.5, zorder=-1, label=\"samples\", color=\"tab:green\"\n)\ndisp.axes_[0, 0].set_ylim(-3, 3)\ndisp.axes_[0, 0].set_xlim(-1, 1)\ndisp.axes_[0, 0].legend()\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Enriched estimator displays\nEstimators displays have been enriched: if we look at `forest`, defined above:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "forest" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One can access the documentation of the estimator by clicking on the icon \"?\" on\nthe top right corner of the diagram.\n\nIn addition, the display changes color, from orange to blue, when the estimator is\nfitted. You can also get this information by hovering on the icon \"i\".\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.base import clone\n\nclone(forest) # the clone is not fitted" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Metadata Routing Support\nMany meta-estimators and cross-validation routines now support metadata\nrouting, which are listed in the `user guide\n`. For instance, this is how you can do a nested\ncross-validation with sample weights and :class:`~model_selection.GroupKFold`:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import sklearn\nfrom sklearn.metrics import get_scorer\nfrom sklearn.datasets import make_regression\nfrom sklearn.linear_model import Lasso\nfrom sklearn.model_selection import GridSearchCV, cross_validate, GroupKFold\n\n# For now by default metadata routing is disabled, and need to be explicitly\n# enabled.\nsklearn.set_config(enable_metadata_routing=True)\n\nn_samples = 100\nX, y = make_regression(n_samples=n_samples, n_features=5, noise=0.5)\nrng = np.random.RandomState(7)\ngroups = rng.randint(0, 10, size=n_samples)\nsample_weights = rng.rand(n_samples)\nestimator = Lasso().set_fit_request(sample_weight=True)\nhyperparameter_grid = {\"alpha\": [0.1, 0.5, 1.0, 2.0]}\nscoring_inner_cv = get_scorer(\"neg_mean_squared_error\").set_score_request(\n sample_weight=True\n)\ninner_cv = GroupKFold(n_splits=5)\n\ngrid_search = GridSearchCV(\n estimator=estimator,\n param_grid=hyperparameter_grid,\n cv=inner_cv,\n scoring=scoring_inner_cv,\n)\n\nouter_cv = GroupKFold(n_splits=5)\nscorers = {\n \"mse\": get_scorer(\"neg_mean_squared_error\").set_score_request(sample_weight=True)\n}\nresults = cross_validate(\n grid_search,\n X,\n y,\n cv=outer_cv,\n scoring=scorers,\n return_estimator=True,\n params={\"sample_weight\": sample_weights, \"groups\": groups},\n)\nprint(\"cv error on test sets:\", results[\"test_mse\"])\n\n# Setting the flag to the default `False` to avoid interference with other\n# scripts.\nsklearn.set_config(enable_metadata_routing=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Improved memory and runtime efficiency for PCA on sparse data\nPCA is now able to handle sparse matrices natively for the `arpack`\nsolver by levaraging `scipy.sparse.linalg.LinearOperator` to avoid\nmaterializing large sparse matrices when performing the\neigenvalue decomposition of the data set covariance matrix.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.decomposition import PCA\nimport scipy.sparse as sp\nfrom time import time\n\nX_sparse = sp.random(m=1000, n=1000, random_state=0)\nX_dense = X_sparse.toarray()\n\nt0 = time()\nPCA(n_components=10, svd_solver=\"arpack\").fit(X_sparse)\ntime_sparse = time() - t0\n\nt0 = time()\nPCA(n_components=10, svd_solver=\"arpack\").fit(X_dense)\ntime_dense = time() - t0\n\nprint(f\"Speedup: {time_dense / time_sparse:.1f}x\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }