{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Release Highlights for scikit-learn 1.3\n\n.. currentmodule:: sklearn\n\nWe are pleased to announce the release of scikit-learn 1.3! Many bug fixes\nand improvements were added, as well as some new key features. We detail\nbelow a few of the major features of this release. **For an exhaustive list of\nall the changes**, please refer to the `release notes `.\n\nTo install the latest version (with pip)::\n\n pip install --upgrade scikit-learn\n\nor with conda::\n\n conda install -c conda-forge scikit-learn\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Metadata Routing\nWe are in the process of introducing a new way to route metadata such as\n``sample_weight`` throughout the codebase, which would affect how\nmeta-estimators such as :class:`pipeline.Pipeline` and\n:class:`model_selection.GridSearchCV` route metadata. While the\ninfrastructure for this feature is already included in this release, the work\nis ongoing and not all meta-estimators support this new feature. You can read\nmore about this feature in the `Metadata Routing User Guide\n`. Note that this feature is still under development and\nnot implemented for most meta-estimators.\n\nThird party developers can already start incorporating this into their\nmeta-estimators. For more details, see\n`metadata routing developer guide\n`.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## HDBSCAN: hierarchical density-based clustering\nOriginally hosted in the scikit-learn-contrib repository, :class:`cluster.HDBSCAN`\nhas been adpoted into scikit-learn. It's missing a few features from the original\nimplementation which will be added in future releases.\nBy performing a modified version of :class:`cluster.DBSCAN` over multiple epsilon\nvalues simultaneously, :class:`cluster.HDBSCAN` finds clusters of varying densities\nmaking it more robust to parameter selection than :class:`cluster.DBSCAN`.\nMore details in the `User Guide `.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nfrom sklearn.cluster import HDBSCAN\nfrom sklearn.datasets import load_digits\nfrom sklearn.metrics import v_measure_score\n\nX, true_labels = load_digits(return_X_y=True)\nprint(f\"number of digits: {len(np.unique(true_labels))}\")\n\nhdbscan = HDBSCAN(min_cluster_size=15).fit(X)\nnon_noisy_labels = hdbscan.labels_[hdbscan.labels_ != -1]\nprint(f\"number of clusters found: {len(np.unique(non_noisy_labels))}\")\n\nprint(v_measure_score(true_labels[hdbscan.labels_ != -1], non_noisy_labels))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## TargetEncoder: a new category encoding strategy\nWell suited for categorical features with high cardinality,\n:class:`preprocessing.TargetEncoder` encodes the categories based on a shrunk\nestimate of the average target values for observations belonging to that category.\nMore details in the `User Guide `.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nfrom sklearn.preprocessing import TargetEncoder\n\nX = np.array([[\"cat\"] * 30 + [\"dog\"] * 20 + [\"snake\"] * 38], dtype=object).T\ny = [90.3] * 30 + [20.4] * 20 + [21.2] * 38\n\nenc = TargetEncoder(random_state=0)\nX_trans = enc.fit_transform(X, y)\n\nenc.encodings_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Missing values support in decision trees\nThe classes :class:`tree.DecisionTreeClassifier` and\n:class:`tree.DecisionTreeRegressor` now support missing values. For each potential\nthreshold on the non-missing data, the splitter will evaluate the split with all the\nmissing values going to the left node or the right node.\nSee more details in the `User Guide ` or see\n`sphx_glr_auto_examples_ensemble_plot_hgbt_regression.py` for a usecase\nexample of this feature in :class:`~ensemble.HistGradientBoostingRegressor`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nfrom sklearn.tree import DecisionTreeClassifier\n\nX = np.array([0, 1, 6, np.nan]).reshape(-1, 1)\ny = [0, 0, 1, 1]\n\ntree = DecisionTreeClassifier(random_state=0).fit(X, y)\ntree.predict(X)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## New display :class:`~model_selection.ValidationCurveDisplay`\n:class:`model_selection.ValidationCurveDisplay` is now available to plot results\nfrom :func:`model_selection.validation_curve`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import make_classification\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import ValidationCurveDisplay\n\nX, y = make_classification(1000, 10, random_state=0)\n\n_ = ValidationCurveDisplay.from_estimator(\n LogisticRegression(),\n X,\n y,\n param_name=\"C\",\n param_range=np.geomspace(1e-5, 1e3, num=9),\n score_type=\"both\",\n score_name=\"Accuracy\",\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Gamma loss for gradient boosting\nThe class :class:`ensemble.HistGradientBoostingRegressor` supports the\nGamma deviance loss function via `loss=\"gamma\"`. This loss function is useful for\nmodeling strictly positive targets with a right-skewed distribution.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.datasets import make_low_rank_matrix\nfrom sklearn.ensemble import HistGradientBoostingRegressor\n\nn_samples, n_features = 500, 10\nrng = np.random.RandomState(0)\nX = make_low_rank_matrix(n_samples, n_features, random_state=rng)\ncoef = rng.uniform(low=-10, high=20, size=n_features)\ny = rng.gamma(shape=2, scale=np.exp(X @ coef) / 2)\ngbdt = HistGradientBoostingRegressor(loss=\"gamma\")\ncross_val_score(gbdt, X, y).mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Grouping infrequent categories in :class:`~preprocessing.OrdinalEncoder`\nSimilarly to :class:`preprocessing.OneHotEncoder`, the class\n:class:`preprocessing.OrdinalEncoder` now supports aggregating infrequent categories\ninto a single output for each feature. The parameters to enable the gathering of\ninfrequent categories are `min_frequency` and `max_categories`.\nSee the `User Guide ` for more details.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.preprocessing import OrdinalEncoder\nimport numpy as np\n\nX = np.array(\n [[\"dog\"] * 5 + [\"cat\"] * 20 + [\"rabbit\"] * 10 + [\"snake\"] * 3], dtype=object\n).T\nenc = OrdinalEncoder(min_frequency=6).fit(X)\nenc.infrequent_categories_" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }