{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Encoding of categorical variables\n", "\n", "In this notebook, we present some typical ways of dealing with **categorical\n", "variables** by encoding them, namely **ordinal encoding** and **one-hot\n", "encoding**." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's first load the entire adult dataset containing both numerical and\n", "categorical data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "adult_census = pd.read_csv(\"../datasets/adult-census.csv\")\n", "# drop the duplicated column `\"education-num\"` as stated in the first notebook\n", "adult_census = adult_census.drop(columns=\"education-num\")\n", "\n", "target_name = \"class\"\n", "target = adult_census[target_name]\n", "\n", "data = adult_census.drop(columns=[target_name])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## Identify categorical variables\n", "\n", "As we saw in the previous section, a numerical variable is a\n", "quantity represented by a real or integer number. These variables can be\n", "naturally handled by machine learning algorithms that are typically composed\n", "of a sequence of arithmetic instructions such as additions and\n", "multiplications.\n", "\n", "In contrast, categorical variables have discrete values, typically\n", "represented by string labels (but not only) taken from a finite list of\n", "possible choices. For instance, the variable `native-country` in our dataset\n", "is a categorical variable because it encodes the data using a finite list of\n", "possible countries (along with the `?` symbol when this information is\n", "missing):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data[\"native-country\"].value_counts().sort_index()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How can we easily recognize categorical columns among the dataset? Part of\n", "the answer lies in the columns' data type:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.dtypes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we look at the `\"native-country\"` column, we observe its data type is\n", "`object`, meaning it contains string values.\n", "\n", "## Select features based on their data type\n", "\n", "In the previous notebook, we manually defined the numerical columns. We could\n", "do a similar approach. Instead, we can use the scikit-learn helper function\n", "`make_column_selector`, which allows us to select columns based on their data\n", "type. We now illustrate how to use this helper." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.compose import make_column_selector as selector\n", "\n", "categorical_columns_selector = selector(dtype_include=object)\n", "categorical_columns = categorical_columns_selector(data)\n", "categorical_columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we created the selector by passing the data type to include; we then\n", "passed the input dataset to the selector object, which returned a list of\n", "column names that have the requested data type. 
We can now filter out the\n", "unwanted columns:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_categorical = data[categorical_columns]\n", "data_categorical.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"The dataset is composed of {data_categorical.shape[1]} features\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the remainder of this section, we present different strategies to\n", "encode categorical data into numerical data that can be used by a\n", "machine learning algorithm." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Strategies to encode categories\n", "\n", "### Encoding ordinal categories\n", "\n", "The most intuitive strategy is to encode each category with a different\n", "number. The `OrdinalEncoder` transforms the data in such a manner. We start\n", "by encoding a single column to understand how the encoding works." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.preprocessing import OrdinalEncoder\n", "\n", "education_column = data_categorical[[\"education\"]]\n", "\n", "encoder = OrdinalEncoder().set_output(transform=\"pandas\")\n", "education_encoded = encoder.fit_transform(education_column)\n", "education_encoded" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that each category in `\"education\"` has been replaced by a numeric\n", "value. We can check the mapping between the categories and the numerical\n", "values by inspecting the fitted attribute `categories_`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "encoder.categories_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we can check the encoding applied to all categorical features." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_encoded = encoder.fit_transform(data_categorical)\n", "data_encoded[:5]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"The encoded dataset contains {data_encoded.shape[1]} features\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see that the categories have been encoded for each feature (column)\n", "independently. We also note that the number of features before and after the\n", "encoding is the same.\n", "\n", "However, be careful when applying this encoding strategy:\n", "using this integer representation leads downstream predictive models\n", "to assume that the values are ordered (0 < 1 < 2 < 3... for instance).\n", "\n", "By default, `OrdinalEncoder` uses a lexicographical strategy to map string\n", "category labels to integers. This strategy is arbitrary and often\n", "meaningless. For instance, suppose the dataset has a categorical variable\n", "named `\"size\"` with categories such as \"S\", \"M\", \"L\", \"XL\". We would like the\n", "integer representation to respect the meaning of the sizes by mapping them to\n", "increasing integers such as `0, 1, 2, 3`.\n", "However, the lexicographical strategy used by default would map the labels\n", "\"S\", \"M\", \"L\", \"XL\" to 2, 1, 0, 3, following alphabetical order.\n", "\n", "The `OrdinalEncoder` class accepts a `categories` constructor argument to\n", "pass categories in the expected ordering explicitly." ] }
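, { "cell_type": "markdown", "metadata": {}, "source": [ "For instance, here is a minimal sketch for the hypothetical `\"size\"` variable\n", "discussed above (this column is not part of the adult dataset and is built\n", "only for illustration):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# hypothetical \"size\" column, used only to illustrate explicit ordering\n", "size_column = pd.DataFrame({\"size\": [\"S\", \"XL\", \"M\", \"L\", \"S\"]})\n", "\n", "# passing `categories` enforces the mapping S -> 0, M -> 1, L -> 2, XL -> 3\n", "size_encoder = OrdinalEncoder(categories=[[\"S\", \"M\", \"L\", \"XL\"]])\n", "size_encoder.fit_transform(size_column)" ] }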
, { "cell_type": "markdown", "metadata": {}, "source": [ "You can find more\n", "information in the\n", "[scikit-learn documentation](https://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features)\n", "if needed.\n", "\n", "If a categorical variable does not carry any meaningful order information,\n", "then this encoding might be misleading to downstream statistical models and\n", "you might consider using one-hot encoding instead (see below).\n", "\n", "### Encoding nominal categories (without assuming any order)\n", "\n", "`OneHotEncoder` is an alternative encoder that prevents the downstream\n", "models from making a false assumption about the ordering of categories. For a\n", "given feature, it creates as many new columns as there are possible\n", "categories. For a given sample, the value of the column corresponding to the\n", "category is set to `1` while all the columns of the other categories\n", "are set to `0`.\n", "\n", "We can encode a single feature (e.g. `\"education\"`) to illustrate how the\n", "encoding works." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.preprocessing import OneHotEncoder\n", "\n", "encoder = OneHotEncoder(sparse_output=False).set_output(transform=\"pandas\")\n", "education_encoded = encoder.fit_transform(education_column)\n", "education_encoded" ] }
, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note**\n", "\n", "`sparse_output=False` is used in the `OneHotEncoder` for didactic purposes,\n", "namely easier visualization of the data.\n", "\n", "Sparse matrices are efficient data structures when most of your matrix\n", "elements are zero. They won't be covered in detail in this course. If you\n", "want more details about them, you can have a look at the `scipy.sparse`\n", "documentation." ] }
\n", "Note
\n", "In general OneHotEncoder is the encoding strategy used when the\n", "downstream models are linear models while OrdinalEncoder is often a\n", "good strategy with tree-based models.
\n", "Tip
\n", "Be aware the OrdinalEncoder exposes a parameter also named handle_unknown.\n", "It can be set to use_encoded_value. If that option is chosen, you can define\n", "a fixed value that is assigned to all unknown categories during transform.\n", "For example, OrdinalEncoder(handle_unknown='use_encoded_value', unknown_value=-1) would set all values encountered during transform to -1\n", "which are not part of the data encountered during the fit call. You are\n", "going to use these parameters in the next exercise.
\n", "Note
\n", "Here, we need to increase the maximum number of iterations to obtain a fully\n", "converged LogisticRegression and silence a ConvergenceWarning. Contrary\n", "to the numerical features, the one-hot encoded categorical features are all\n", "on the same scale (values are 0 or 1), so they would not benefit from\n", "scaling. In this case, increasing max_iter is the right thing to do.
\n", "