{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Feature transformations with ensembles of trees\n\nTransform your features into a higher dimensional, sparse space. Then train a\nlinear model on these features.\n\nFirst fit an ensemble of trees (totally random trees, a random forest, or\ngradient boosted trees) on the training set. Then each leaf of each tree in the\nensemble is assigned a fixed arbitrary feature index in a new feature space.\nThese leaf indices are then encoded in a one-hot fashion.\n\nEach sample goes through the decisions of each tree of the ensemble and ends up\nin one leaf per tree. The sample is encoded by setting feature values for these\nleaves to 1 and the other feature values to 0.\n\nThe resulting transformer has then learned a supervised, sparse,\nhigh-dimensional categorical embedding of the data.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we will create a large dataset and split it into three sets:\n\n- a set to train the ensemble methods which are later used to as a feature\n engineering transformer;\n- a set to train the linear model;\n- a set to test the linear model.\n\nIt is important to split the data in such way to avoid overfitting by leaking\ndata.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import make_classification\nfrom sklearn.model_selection import train_test_split\n\nX, y = make_classification(n_samples=80_000, random_state=10)\n\nX_full_train, X_test, y_full_train, y_test = train_test_split(\n X, y, test_size=0.5, random_state=10\n)\nX_train_ensemble, X_train_linear, y_train_ensemble, y_train_linear = train_test_split(\n X_full_train, y_full_train, test_size=0.5, random_state=10\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For each of the ensemble methods, we will use 10 estimators and a maximum\ndepth of 3 levels.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "n_estimators = 10\nmax_depth = 3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we will start by training the random forest and gradient boosting on\nthe separated training set\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier\n\nrandom_forest = RandomForestClassifier(\n n_estimators=n_estimators, max_depth=max_depth, random_state=10\n)\nrandom_forest.fit(X_train_ensemble, y_train_ensemble)\n\ngradient_boosting = GradientBoostingClassifier(\n n_estimators=n_estimators, max_depth=max_depth, random_state=10\n)\n_ = gradient_boosting.fit(X_train_ensemble, y_train_ensemble)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that :class:`~sklearn.ensemble.HistGradientBoostingClassifier` is much\nfaster than :class:`~sklearn.ensemble.GradientBoostingClassifier` starting\nwith intermediate datasets (`n_samples >= 10_000`), which is not the case of\nthe present example.\n\nThe :class:`~sklearn.ensemble.RandomTreesEmbedding` is an unsupervised method\nand thus does not required to be trained independently.\n\n" ] }, { "cell_type": "code", "execution_count": null, 
"metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.ensemble import RandomTreesEmbedding\n\nrandom_tree_embedding = RandomTreesEmbedding(\n n_estimators=n_estimators, max_depth=max_depth, random_state=0\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we will create three pipelines that will use the above embedding as\na preprocessing stage.\n\nThe random trees embedding can be directly pipelined with the logistic\nregression because it is a standard scikit-learn transformer.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\n\nrt_model = make_pipeline(random_tree_embedding, LogisticRegression(max_iter=1000))\nrt_model.fit(X_train_linear, y_train_linear)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, we can pipeline random forest or gradient boosting with a logistic\nregression. However, the feature transformation will happen by calling the\nmethod `apply`. The pipeline in scikit-learn expects a call to `transform`.\nTherefore, we wrapped the call to `apply` within a `FunctionTransformer`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.preprocessing import FunctionTransformer, OneHotEncoder\n\n\ndef rf_apply(X, model):\n return model.apply(X)\n\n\nrf_leaves_yielder = FunctionTransformer(rf_apply, kw_args={\"model\": random_forest})\n\nrf_model = make_pipeline(\n rf_leaves_yielder,\n OneHotEncoder(handle_unknown=\"ignore\"),\n LogisticRegression(max_iter=1000),\n)\nrf_model.fit(X_train_linear, y_train_linear)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def gbdt_apply(X, model):\n return model.apply(X)[:, :, 0]\n\n\ngbdt_leaves_yielder = FunctionTransformer(\n gbdt_apply, kw_args={\"model\": gradient_boosting}\n)\n\ngbdt_model = make_pipeline(\n gbdt_leaves_yielder,\n OneHotEncoder(handle_unknown=\"ignore\"),\n LogisticRegression(max_iter=1000),\n)\ngbdt_model.fit(X_train_linear, y_train_linear)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can finally show the different ROC curves for all the models.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n\nfrom sklearn.metrics import RocCurveDisplay\n\n_, ax = plt.subplots()\n\nmodels = [\n (\"RT embedding -> LR\", rt_model),\n (\"RF\", random_forest),\n (\"RF embedding -> LR\", rf_model),\n (\"GBDT\", gradient_boosting),\n (\"GBDT embedding -> LR\", gbdt_model),\n]\n\nmodel_displays = {}\nfor name, pipeline in models:\n model_displays[name] = RocCurveDisplay.from_estimator(\n pipeline, X_test, y_test, ax=ax, name=name\n )\n_ = ax.set_title(\"ROC curve\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "_, ax = plt.subplots()\nfor name, pipeline in models:\n model_displays[name].plot(ax=ax)\n\nax.set_xlim(0, 0.2)\nax.set_ylim(0.8, 1)\n_ = ax.set_title(\"ROC curve (zoomed in at top left)\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": 
"python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }