{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Feature importance\n", "\n", "In this notebook, we will detail methods to investigate the importance of\n", "features used by a given model. We will look at:\n", "\n", "1. interpreting the coefficients in a linear model;\n", "2. the attribute `feature_importances_` in RandomForest;\n", "3. `permutation feature importance`, which is an inspection technique that can\n", " be used for any fitted model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 0. Presentation of the dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This dataset is a record of neighborhoods in California district, predicting\n", "the **median house value** (target) given some information about the\n", "neighborhoods, as the average number of rooms, the latitude, the longitude or\n", "the median income of people in the neighborhoods (block)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import fetch_california_housing\n", "import pandas as pd\n", "\n", "X, y = fetch_california_housing(as_frame=True, return_X_y=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To speed up the computation, we take the first 10000 samples" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X = X[:10000]\n", "y = y[:10000]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The feature reads as follow:\n", "- MedInc: median income in block\n", "- HouseAge: median house age in block\n", "- AveRooms: average number of rooms\n", "- AveBedrms: average number of bedrooms\n", "- Population: block population\n", "- AveOccup: average house occupancy\n", "- Latitude: house block latitude\n", "- Longitude: house block longitude\n", "- MedHouseVal: Median house value in 100k$ *(target)*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To assert the quality of our inspection technique, let's add some random\n", "feature that won't help the prediction (un-informative feature)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "lines_to_next_cell": 2 }, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Adding random features\n", "rng = np.random.RandomState(0)\n", "bin_var = pd.Series(rng.randint(0, 1, X.shape[0]), name=\"rnd_bin\")\n", "num_var = pd.Series(np.arange(X.shape[0]), name=\"rnd_num\")\n", "X_with_rnd_feat = pd.concat((X, bin_var, num_var), axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will split the data into training and testing for the remaining part of\n", "this notebook" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(\n", " X_with_rnd_feat, y, random_state=29\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's quickly inspect some features and the target:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import seaborn as sns\n", "\n", "train_dataset = X_train.copy()\n", "train_dataset.insert(0, \"MedHouseVal\", y_train)\n", "_ = sns.pairplot(\n", " train_dataset[\n", " [\"MedHouseVal\", \"Latitude\", \"AveRooms\", \"AveBedrms\", \"MedInc\"]\n", " ],\n", " kind=\"reg\",\n", " 
diag_kind=\"kde\",\n", " plot_kws={\"scatter_kws\": {\"alpha\": 0.1}},\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see in the upper right plot that the median income seems to be positively\n", "correlated to the median house price (the target).\n", "\n", "We can also see that the average number of rooms `AveRooms` is very correlated\n", "to the average number of bedrooms `AveBedrms`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Linear model inspection" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In linear models, the target value is modeled as a linear combination of the\n", "features.\n", "\n", "Coefficients represent the relationship between the given feature $X_i$ and\n", "the target $y$, assuming that all the other features remain constant\n", "(conditional dependence). This is different from plotting $X_i$ versus $y$ and\n", "fitting a linear relationship: in that case all possible values of the other\n", "features are taken into account in the estimation (marginal dependence)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import RidgeCV\n", "\n", "model = RidgeCV()\n", "\n", "model.fit(X_train, y_train)\n", "\n", "print(f\"model score on training data: {model.score(X_train, y_train)}\")\n", "print(f\"model score on testing data: {model.score(X_test, y_test)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our linear model obtains a $R^2$ score of .60, so it explains a significant\n", "part of the target. Its coefficient should be somehow relevant. Let's look at\n", "the coefficient learnt:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "lines_to_next_cell": 2 }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "coefs = pd.DataFrame(\n", " model.coef_, columns=[\"Coefficients\"], index=X_train.columns\n", ")\n", "\n", "coefs.plot(kind=\"barh\", figsize=(9, 7))\n", "plt.title(\"Ridge model\")\n", "plt.axvline(x=0, color=\".5\")\n", "plt.subplots_adjust(left=0.3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Sign of coefficients\n", "\n", "```{admonition} A surprising association?\n", "**Why is the coefficient associated to `AveRooms` negative?** Does the\n", "price of houses decreases with the number of rooms?\n", "```\n", "\n", "The coefficients of a linear model are a *conditional* association: they\n", "quantify the variation of a the output (the price) when the given feature is\n", "varied, **keeping all other features constant**. We should not interpret them\n", "as a *marginal* association, characterizing the link between the two\n", "quantities ignoring all the rest.\n", "\n", "The coefficient associated to `AveRooms` is negative because the number of\n", "rooms is strongly correlated with the number of bedrooms, `AveBedrms`. What we\n", "are seeing here is that for districts where the houses have the same number of\n", "bedrooms on average, when there are more rooms (hence non-bedroom rooms), the\n", "houses are worth comparatively less.\n", "\n", "### Scale of coefficients\n", "\n", "The `AveBedrms` have the higher coefficient. However, we can't compare the\n", "magnitude of these coefficients directly, since they are not scaled. 
Indeed,\n", "`Population` is an integer which can be in the thousands, while `AveBedrms`\n", "is around 4 and `Latitude` is in degrees.\n", "\n", "So the `Population` coefficient is expressed in \"$100k\\\\$$ / inhabitant\" while\n", "the `AveBedrms` coefficient is expressed in \"$100k\\\\$$ / number of bedrooms\"\n", "and the `Latitude` coefficient in \"$100k\\\\$$ / degree\".\n", "\n", "We see that changing the population by one inhabitant barely changes the\n", "outcome, while as the latitude increases (going north) the price becomes\n", "cheaper. Also, adding a bedroom (keeping all other features constant) raises\n", "the price of the house by about 80k$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So looking at the coefficient plot to gauge feature importance can be\n", "misleading as some of them vary on a small scale, while others vary a lot\n", "more, by several orders of magnitude.\n", "\n", "This becomes visible if we compare the standard deviations of our different\n", "features." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train.std(axis=0).plot(kind=\"barh\", figsize=(9, 7))\n", "plt.title(\"Features std. dev.\")\n", "plt.subplots_adjust(left=0.3)\n", "plt.xlim((0, 100))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So before any interpretation, we need to scale each column (removing the mean\n", "and scaling the variance to 1)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.pipeline import make_pipeline\n", "from sklearn.preprocessing import StandardScaler\n", "\n", "model = make_pipeline(StandardScaler(), RidgeCV())\n", "\n", "model.fit(X_train, y_train)\n", "\n", "print(f\"model score on training data: {model.score(X_train, y_train)}\")\n", "print(f\"model score on testing data: {model.score(X_test, y_test)}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "coefs = pd.DataFrame(\n", " model[1].coef_, columns=[\"Coefficients\"], index=X_train.columns\n", ")\n", "\n", "coefs.plot(kind=\"barh\", figsize=(9, 7))\n", "plt.title(\"Ridge model\")\n", "plt.axvline(x=0, color=\".5\")\n", "plt.subplots_adjust(left=0.3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that the coefficients have been scaled, we can safely compare them.\n", "\n", "The median income feature, along with the longitude and latitude, are the\n", "three variables that most influence the model.\n", "\n", "The plot above tells us about dependencies between a specific feature and the\n", "target when all other features remain constant, i.e., conditional\n", "dependencies. An increase of `HouseAge` will induce an increase of the price\n", "when all other features remain constant. On the contrary, an increase of the\n", "average number of rooms will induce a decrease of the price when all other\n", "features remain constant." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Checking the variability of the coefficients" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can check the coefficient variability through cross-validation: it is a\n", "form of data perturbation.\n", "\n", "If coefficients vary significantly when changing the input dataset, their\n", "robustness is not guaranteed, and they should probably be interpreted with\n", "caution.\n", "\n", "Below, we fit the model on 25 resampled versions of the data (5-fold\n", "cross-validation repeated 5 times with `RepeatedKFold`) and collect the\n", "coefficients from each fit."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import cross_validate\n", "from sklearn.model_selection import RepeatedKFold\n", "\n", "cv_model = cross_validate(\n", " model,\n", " X_with_rnd_feat,\n", " y,\n", " cv=RepeatedKFold(n_splits=5, n_repeats=5),\n", " return_estimator=True,\n", " n_jobs=2,\n", ")\n", "coefs = pd.DataFrame(\n", " [model[1].coef_ for model in cv_model[\"estimator\"]],\n", " columns=X_with_rnd_feat.columns,\n", ")\n", "plt.figure(figsize=(9, 7))\n", "sns.boxplot(data=coefs, orient=\"h\", color=\"cyan\", saturation=0.5)\n", "plt.axvline(x=0, color=\".5\")\n", "plt.xlabel(\"Coefficient importance\")\n", "plt.title(\"Coefficient importance and its variability\")\n", "plt.subplots_adjust(left=0.3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Every coefficient looks pretty stable, which mean that different Ridge model\n", "put almost the same weight to the same feature." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Linear models with sparse coefficients (Lasso)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In it important to keep in mind that the associations extracted depend on the\n", "model. To illustrate this point we consider a Lasso model, that performs\n", "feature selection with a L1 penalty. Let us fit a Lasso model with a strong\n", "regularization parameters `alpha`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import Lasso\n", "\n", "model = make_pipeline(StandardScaler(), Lasso(alpha=0.015))\n", "\n", "model.fit(X_train, y_train)\n", "\n", "print(f\"model score on training data: {model.score(X_train, y_train)}\")\n", "print(f\"model score on testing data: {model.score(X_test, y_test)}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "coefs = pd.DataFrame(\n", " model[1].coef_, columns=[\"Coefficients\"], index=X_train.columns\n", ")\n", "\n", "coefs.plot(kind=\"barh\", figsize=(9, 7))\n", "plt.title(\"Lasso model, strong regularization\")\n", "plt.axvline(x=0, color=\".5\")\n", "plt.subplots_adjust(left=0.3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here the model score is a bit lower, because of the strong regularization.\n", "However, it has zeroed out 3 coefficients, selecting a small number of\n", "variables to make its prediction.\n", "\n", "We can see that out of the two correlated features `AveRooms` and `AveBedrms`,\n", "the model has selected one. 
Note that this choice is partly arbitrary:\n", "choosing one does not mean that the other is not important for prediction.\n", "**Avoid over-interpreting models, as they are imperfect**.\n", "\n", "As above, we can look at the variability of the coefficients:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cv_model = cross_validate(\n", " model,\n", " X_with_rnd_feat,\n", " y,\n", " cv=RepeatedKFold(n_splits=5, n_repeats=5),\n", " return_estimator=True,\n", " n_jobs=2,\n", ")\n", "coefs = pd.DataFrame(\n", " [model[1].coef_ for model in cv_model[\"estimator\"]],\n", " columns=X_with_rnd_feat.columns,\n", ")\n", "plt.figure(figsize=(9, 7))\n", "sns.boxplot(data=coefs, orient=\"h\", color=\"cyan\", saturation=0.5)\n", "plt.axvline(x=0, color=\".5\")\n", "plt.xlabel(\"Coefficient importance\")\n", "plt.title(\"Coefficient importance and its variability\")\n", "plt.subplots_adjust(left=0.3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the coefficients associated with `AveRooms` and `AveBedrms`\n", "both have a strong variability and that they can both be non-zero. Given that\n", "they are strongly correlated, the model can pick one or the other to predict\n", "well. This choice is a bit arbitrary, and must not be over-interpreted." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Lessons learned\n", "\n", "Coefficients must be scaled to the same unit of measure before comparing them\n", "or using them to gauge feature importance.\n", "\n", "Coefficients in multivariate linear models represent the dependency between a\n", "given feature and the target, conditional on the other features.\n", "\n", "Correlated features might induce instabilities in the coefficients of linear\n", "models and their effects cannot be well teased apart.\n", "\n", "Inspecting coefficients across the folds of a cross-validation loop gives an\n", "idea of their stability." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. RandomForest `feature_importances_`\n", "\n", "Some algorithms provide feature importance measures that are inherently built\n", "into the model. This is the case for RandomForest models. Let's investigate\n", "the built-in `feature_importances_` attribute." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.ensemble import RandomForestRegressor\n", "\n", "model = RandomForestRegressor()\n", "\n", "model.fit(X_train, y_train)\n", "\n", "print(f\"model score on training data: {model.score(X_train, y_train)}\")\n", "print(f\"model score on testing data: {model.score(X_test, y_test)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Contrary to the testing set, the score on the training set is almost perfect,\n", "which means that our model is overfitting here." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "importances = model.feature_importances_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The importance of a feature is basically how much this feature is used in\n", "each tree of the forest. Formally, it is computed as the (normalized) total\n", "reduction of the criterion brought by that feature."
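] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check, note that each tree of the fitted forest exposes its\n", "own `feature_importances_`; the forest's value should be recovered, up to\n", "normalization, by averaging over the trees (a small sketch relying on this\n", "scikit-learn behavior):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# average the per-tree impurity-based importances and compare them with\n", "# the aggregated attribute of the forest\n", "per_tree_importances = np.mean(\n", " [tree.feature_importances_ for tree in model.estimators_], axis=0\n", ")\n", "print(np.allclose(per_tree_importances, importances))"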
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "indices = np.argsort(importances)\n", "\n", "fig, ax = plt.subplots()\n", "ax.barh(range(len(importances)), importances[indices])\n", "ax.set_yticks(range(len(importances)))\n", "_ = ax.set_yticklabels(np.array(X_train.columns)[indices])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Median income is still the most important feature.\n", "\n", "It also has a small bias toward high cardinality features, such as the noisy\n", "feature `rnd_num`, which are here predicted having .07 importance, more than\n", "`HouseAge` (which has low cardinality)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Feature importance by permutation\n", "\n", "We introduce here a new technique to evaluate the feature importance of any\n", "given fitted model. It basically shuffles a feature and sees how the model\n", "changes its prediction. Thus, the change in prediction will correspond to the\n", "feature importance." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Any model could be used here\n", "\n", "model = RandomForestRegressor()\n", "# model = make_pipeline(StandardScaler(),\n", "# RidgeCV())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "lines_to_next_cell": 2 }, "outputs": [], "source": [ "model.fit(X_train, y_train)\n", "\n", "print(f\"model score on training data: {model.score(X_train, y_train)}\")\n", "print(f\"model score on testing data: {model.score(X_test, y_test)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As the model gives a good prediction, it has captured well the link between X\n", "and y. Hence, it is reasonable to interpret what it has captured from the\n", "data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Feature importance" ] }, { "cell_type": "markdown", "metadata": { "lines_to_next_cell": 2 }, "source": [ "Lets compute the feature importance for a given feature, say the `MedInc`\n", "feature.\n", "\n", "For that, we will shuffle this specific feature, keeping the other feature as\n", "is, and run our same model (already fitted) to predict the outcome. The\n", "decrease of the score shall indicate how the model had used this feature to\n", "predict the target. The permutation feature importance is defined to be the\n", "decrease in a model score when a single feature value is randomly shuffled.\n", "\n", "For instance, if the feature is crucial for the model, the outcome would also\n", "be permuted (just as the feature), thus the score would be close to zero.\n", "Afterward, the feature importance is the decrease in score. So in that case,\n", "the feature importance would be close to the score.\n", "\n", "On the contrary, if the feature is not used by the model, the score shall\n", "remain the same, thus the feature importance will be close to 0." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_score_after_permutation(model, X, y, curr_feat):\n", " \"\"\"return the score of model when curr_feat is permuted\"\"\"\n", "\n", " X_permuted = X.copy()\n", " col_idx = list(X.columns).index(curr_feat)\n", " # permute one column\n", " X_permuted.iloc[:, col_idx] = np.random.permutation(\n", " X_permuted[curr_feat].values\n", " )\n", "\n", " permuted_score = model.score(X_permuted, y)\n", " return permuted_score\n", "\n", "\n", "def get_feature_importance(model, X, y, curr_feat):\n", " \"\"\"compare the score when curr_feat is permuted\"\"\"\n", "\n", " baseline_score_train = model.score(X, y)\n", " permuted_score_train = get_score_after_permutation(model, X, y, curr_feat)\n", "\n", " # feature importance is the difference between the two scores\n", " feature_importance = baseline_score_train - permuted_score_train\n", " return feature_importance\n", "\n", "\n", "curr_feat = \"MedInc\"\n", "\n", "feature_importance = get_feature_importance(model, X_train, y_train, curr_feat)\n", "print(\n", " f'feature importance of \"{curr_feat}\" on train set is '\n", " f\"{feature_importance:.3}\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since there is some randomness, it is advisable to run it multiple times and\n", "inspect the mean and the standard deviation of the feature importance." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "lines_to_next_cell": 2 }, "outputs": [], "source": [ "n_repeats = 10\n", "\n", "list_feature_importance = []\n", "for n_round in range(n_repeats):\n", " list_feature_importance.append(\n", " get_feature_importance(model, X_train, y_train, curr_feat)\n", " )\n", "\n", "print(\n", " f'feature importance of \"{curr_feat}\" on train set is '\n", " f\"{np.mean(list_feature_importance):.3} \"\n", " f\"\u00b1 {np.std(list_feature_importance):.3}\"\n", ")" ] }, { "cell_type": "markdown", "metadata": { "lines_to_next_cell": 2 }, "source": [ "0.67 over 0.98 is very relevant (note the $R^2$ score could go below 0). So we\n", "can imagine our model relies heavily on this feature to predict the class. We\n", "can now compute the feature permutation importance for all the features." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "lines_to_next_cell": 2 }, "outputs": [], "source": [ "def permutation_importance(model, X, y, n_repeats=10):\n", " \"\"\"Calculate importance score for each feature.\"\"\"\n", "\n", " importances = []\n", " for curr_feat in X.columns:\n", " list_feature_importance = []\n", " for n_round in range(n_repeats):\n", " list_feature_importance.append(\n", " get_feature_importance(model, X, y, curr_feat)\n", " )\n", "\n", " importances.append(list_feature_importance)\n", "\n", " return {\n", " \"importances_mean\": np.mean(importances, axis=1),\n", " \"importances_std\": np.std(importances, axis=1),\n", " \"importances\": importances,\n", " }\n", "\n", "\n", "# This function could directly be access from sklearn\n", "# from sklearn.inspection import permutation_importance" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_feature_importances(perm_importance_result, feat_name):\n", " \"\"\"bar plot the feature importance\"\"\"\n", "\n", " fig, ax = plt.subplots()\n", "\n", " indices = perm_importance_result[\"importances_mean\"].argsort()\n", " plt.barh(\n", " range(len(indices)),\n", " perm_importance_result[\"importances_mean\"][indices],\n", " xerr=perm_importance_result[\"importances_std\"][indices],\n", " )\n", "\n", " ax.set_yticks(range(len(indices)))\n", " _ = ax.set_yticklabels(feat_name[indices])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's compute the feature importance by permutation on the training data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "perm_importance_result_train = permutation_importance(\n", " model, X_train, y_train, n_repeats=10\n", ")\n", "\n", "plot_feature_importances(perm_importance_result_train, X_train.columns)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see again that the feature `MedInc`, `Latitude` and `Longitude` are very\n", "important for the model.\n", "\n", "We note that our random variable `rnd_num` is now very less important than\n", "latitude. Indeed, the feature importance built-in in RandomForest has bias for\n", "continuous data, such as `AveOccup` and `rnd_num`.\n", "\n", "However, the model still uses these `rnd_num` feature to compute the output.\n", "It is in line with the overfitting we had noticed between the train and test\n", "score." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### discussion\n", "\n", "1. For correlated feature, the permutation could give non realistic sample\n", " (e.g. nb of bedrooms higher than the number of rooms)\n", "2. It is unclear whether you should use training or testing data to compute\n", " the feature importance.\n", "3. Note that dropping a column and fitting a new model will not allow to\n", " analyse the feature importance for a specific model, since a *new* model\n", " will be fitted." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Take Away\n", "\n", "* One could directly interpret the coefficient in linear model (if the feature\n", " have been scaled first)\n", "* Model like RandomForest have built-in feature importance\n", "* `permutation_importance` gives feature importance by permutation for any\n", " fitted model" ] } ], "metadata": { "jupytext": { "main_language": "python" }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 5 }