{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Comparing Target Encoder with Other Encoders\n\n.. currentmodule:: sklearn.preprocessing\n\nThe :class:`TargetEncoder` uses the value of the target to encode each\ncategorical feature. In this example, we will compare three different approaches\nfor handling categorical features: :class:`TargetEncoder`,\n:class:`OrdinalEncoder`, :class:`OneHotEncoder` and dropping the category.\n\n

Note

`fit(X, y).transform(X)` does not equal `fit_transform(X, y)` because a\n cross fitting scheme is used in `fit_transform` for encoding. See the\n `User Guide `. for details.

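{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick illustration of the note above, the minimal sketch below (a tiny\nsynthetic dataset, not part of the original example) shows that `fit_transform`\nand `fit(X, y).transform(X)` indeed produce different encodings:\n\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n\nfrom sklearn.preprocessing import TargetEncoder\n\n# Tiny synthetic dataset: one categorical feature and a continuous target.\nX_demo = np.array([[\"a\"], [\"b\"], [\"a\"], [\"b\"], [\"a\"], [\"b\"]])\ny_demo = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])\n\nenc = TargetEncoder(target_type=\"continuous\", random_state=0)\n\n# `fit_transform` encodes each sample using encodings learned on the other\n# cross-fitting folds ...\nencodings_cross_fitted = enc.fit_transform(X_demo, y_demo)\n# ... while `transform` after `fit` uses encodings learned on all samples.\nencodings_full_fit = enc.fit(X_demo, y_demo).transform(X_demo)\n\n# The two results differ because of the cross fitting: this prints False.\nnp.allclose(encodings_cross_fitted, encodings_full_fit)" ] },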
\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading Data from OpenML\nFirst, we load the wine reviews dataset, where the target is the points given\nbe a reviewer:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import fetch_openml\n\nwine_reviews = fetch_openml(data_id=42074, as_frame=True)\n\ndf = wine_reviews.frame\ndf.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this example, we use the following subset of numerical and categorical\nfeatures in the data. The target are continuous values from 80 to 100:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "numerical_features = [\"price\"]\ncategorical_features = [\n \"country\",\n \"province\",\n \"region_1\",\n \"region_2\",\n \"variety\",\n \"winery\",\n]\ntarget_name = \"points\"\n\nX = df[numerical_features + categorical_features]\ny = df[target_name]\n\n_ = y.hist()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training and Evaluating Pipelines with Different Encoders\nIn this section, we will evaluate pipelines with\n:class:`~sklearn.ensemble.HistGradientBoostingRegressor` with different encoding\nstrategies. First, we list out the encoders we will be using to preprocess\nthe categorical features:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.compose import ColumnTransformer\nfrom sklearn.preprocessing import OneHotEncoder, OrdinalEncoder, TargetEncoder\n\ncategorical_preprocessors = [\n (\"drop\", \"drop\"),\n (\"ordinal\", OrdinalEncoder(handle_unknown=\"use_encoded_value\", unknown_value=-1)),\n (\n \"one_hot\",\n OneHotEncoder(handle_unknown=\"ignore\", max_categories=20, sparse_output=False),\n ),\n (\"target\", TargetEncoder(target_type=\"continuous\")),\n]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we evaluate the models using cross validation and record the results:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.ensemble import HistGradientBoostingRegressor\nfrom sklearn.model_selection import cross_validate\nfrom sklearn.pipeline import make_pipeline\n\nn_cv_folds = 3\nmax_iter = 20\nresults = []\n\n\ndef evaluate_model_and_store(name, pipe):\n result = cross_validate(\n pipe,\n X,\n y,\n scoring=\"neg_root_mean_squared_error\",\n cv=n_cv_folds,\n return_train_score=True,\n )\n rmse_test_score = -result[\"test_score\"]\n rmse_train_score = -result[\"train_score\"]\n results.append(\n {\n \"preprocessor\": name,\n \"rmse_test_mean\": rmse_test_score.mean(),\n \"rmse_test_std\": rmse_train_score.std(),\n \"rmse_train_mean\": rmse_train_score.mean(),\n \"rmse_train_std\": rmse_train_score.std(),\n }\n )\n\n\nfor name, categorical_preprocessor in categorical_preprocessors:\n preprocessor = ColumnTransformer(\n [\n (\"numerical\", \"passthrough\", numerical_features),\n (\"categorical\", categorical_preprocessor, categorical_features),\n ]\n )\n pipe = make_pipeline(\n preprocessor, HistGradientBoostingRegressor(random_state=0, max_iter=max_iter)\n )\n evaluate_model_and_store(name, pipe)" ] }, { 
"cell_type": "markdown", "metadata": {}, "source": [ "## Native Categorical Feature Support\nIn this section, we build and evaluate a pipeline that uses native categorical\nfeature support in :class:`~sklearn.ensemble.HistGradientBoostingRegressor`,\nwhich only supports up to 255 unique categories. In our dataset, the most of\nthe categorical features have more than 255 unique categories:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "n_unique_categories = df[categorical_features].nunique().sort_values(ascending=False)\nn_unique_categories" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To workaround the limitation above, we group the categorical features into\nlow cardinality and high cardinality features. The high cardinality features\nwill be target encoded and the low cardinality features will use the native\ncategorical feature in gradient boosting.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "high_cardinality_features = n_unique_categories[n_unique_categories > 255].index\nlow_cardinality_features = n_unique_categories[n_unique_categories <= 255].index\nmixed_encoded_preprocessor = ColumnTransformer(\n [\n (\"numerical\", \"passthrough\", numerical_features),\n (\n \"high_cardinality\",\n TargetEncoder(target_type=\"continuous\"),\n high_cardinality_features,\n ),\n (\n \"low_cardinality\",\n OrdinalEncoder(handle_unknown=\"use_encoded_value\", unknown_value=-1),\n low_cardinality_features,\n ),\n ],\n verbose_feature_names_out=False,\n)\n\n# The output of the of the preprocessor must be set to pandas so the\n# gradient boosting model can detect the low cardinality features.\nmixed_encoded_preprocessor.set_output(transform=\"pandas\")\nmixed_pipe = make_pipeline(\n mixed_encoded_preprocessor,\n HistGradientBoostingRegressor(\n random_state=0, max_iter=max_iter, categorical_features=low_cardinality_features\n ),\n)\nmixed_pipe" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we evaluate the pipeline using cross validation and record the results:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "evaluate_model_and_store(\"mixed_target\", mixed_pipe)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plotting the Results\nIn this section, we display the results by plotting the test and train scores:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nimport pandas as pd\n\nresults_df = (\n pd.DataFrame(results).set_index(\"preprocessor\").sort_values(\"rmse_test_mean\")\n)\n\nfig, (ax1, ax2) = plt.subplots(\n 1, 2, figsize=(12, 8), sharey=True, constrained_layout=True\n)\nxticks = range(len(results_df))\nname_to_color = dict(\n zip((r[\"preprocessor\"] for r in results), [\"C0\", \"C1\", \"C2\", \"C3\", \"C4\"])\n)\n\nfor subset, ax in zip([\"test\", \"train\"], [ax1, ax2]):\n mean, std = f\"rmse_{subset}_mean\", f\"rmse_{subset}_std\"\n data = results_df[[mean, std]].sort_values(mean)\n ax.bar(\n x=xticks,\n height=data[mean],\n yerr=data[std],\n width=0.9,\n color=[name_to_color[name] for name in data.index],\n )\n ax.set(\n title=f\"RMSE ({subset.title()})\",\n xlabel=\"Encoding Scheme\",\n xticks=xticks,\n xticklabels=data.index,\n )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When evaluating 
{ "cell_type": "markdown", "metadata": {}, "source": [ "When evaluating the predictive performance on the test set, dropping the\ncategories performs the worst and the target encoders perform the best. This\ncan be explained as follows:\n\n- Dropping the categorical features makes the pipeline less expressive and\n  causes it to underfit as a result;\n- Due to the high cardinality and to reduce the training time, the one-hot\n  encoding scheme uses `max_categories=20`, which prevents the features from\n  expanding too much; this can also result in underfitting.\n- If we had not set `max_categories=20`, the one-hot encoding scheme would\n  likely have made the pipeline overfit, as the number of features explodes with rare\n  category occurrences that are correlated with the target by chance (on the training\n  set only);\n- The ordinal encoding imposes an arbitrary order on the features, which are then\n  treated as numerical values by the\n  :class:`~sklearn.ensemble.HistGradientBoostingRegressor`. Since this\n  model groups numerical features in 256 bins per feature, many unrelated categories\n  can be grouped together and as a result the overall pipeline can underfit;\n- When using the target encoder, the same binning happens, but since the encoded\n  values are statistically ordered by marginal association with the target variable,\n  the binning used by the :class:`~sklearn.ensemble.HistGradientBoostingRegressor`\n  makes sense and leads to good results: the combination of smoothed target\n  encoding and binning works as a good regularizing strategy against\n  overfitting while not limiting the expressiveness of the pipeline too much.\n\n" ] }
], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }