{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Approach to categorical variables" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Categorical variables are the essence of many real-world tasks. Every business task you're will ever solve will include categorical variables. So it's better to have a good taste of them." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For demonstration purposes, I will use 2 models RF and Linear as they have different nature and would better highlight differences in category treating.\n", "\n", "Dataset from kaggle medium competition, where we should predict a number of claps (likes) to the article." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import warnings\n", "\n", "import feather\n", "import numpy as np\n", "import pandas as pd\n", "from sklearn.decomposition import PCA\n", "from sklearn.ensemble import RandomForestRegressor\n", "from sklearn.linear_model import Ridge\n", "from sklearn.manifold import TSNE\n", "from sklearn.metrics import mean_absolute_error\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.preprocessing import StandardScaler\n", "\n", "warnings.filterwarnings(\"ignore\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def add_date_parts(df, date_column=\"published\"):\n", " df[\"hour\"] = df[date_column].dt.hour\n", " df[\"month\"] = df[date_column].dt.month\n", " df[\"weekday\"] = df[date_column].dt.weekday\n", " df[\"year\"] = df[date_column].dt.year\n", " df[\"week\"] = df[date_column].dt.week\n", " df[\"working_day\"] = (df[\"weekday\"] < 5).astype(\"int\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "PATH_TO_DATA = \"../../data/medium/\"\n", "train_df = feather.read_dataframe(PATH_TO_DATA + \"medium_train\")\n", "train_df.set_index(\"id\", inplace=True)\n", "add_date_parts(train_df)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_df.head(1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The text is not the purpose of this tutorial, so I'll drop it" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_df = train_df[\n", " [\n", " \"author\",\n", " \"domain\",\n", " \"lang\",\n", " \"log_recommends\",\n", " \"hour\",\n", " \"month\",\n", " \"weekday\",\n", " \"year\",\n", " \"week\",\n", " \"working_day\",\n", " ]\n", "]\n", "train_df.head(1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Basic approach LE." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "LE (label encoding) is the most simple. We have some categories (country for example) ['Russia', 'USA', 'GB']. But algoritms do not work with strings, they need numbers. Ok, we can do it ['Russia', 'USA', 'GB'] -> [0, 1, 2]. Relly simple. Let's try." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "autor_to_int = dict(\n", " (zip(train_df.author.unique(), range(train_df.author.unique().shape[0])))\n", ")\n", "domain_to_int = dict(\n", " (zip(train_df.domain.unique(), range(train_df.domain.unique().shape[0])))\n", ")\n", "lang_to_int = dict(\n", " (zip(train_df.lang.unique(), range(train_df.lang.unique().shape[0])))\n", ")\n", "train_df_le = train_df.copy()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_df_le[\"author\"] = train_df_le[\"author\"].apply(lambda aut: autor_to_int[aut])\n", "train_df_le[\"domain\"] = train_df_le[\"domain\"].apply(lambda aut: domain_to_int[aut])\n", "train_df_le[\"lang\"] = train_df_le[\"lang\"].apply(lambda aut: lang_to_int[aut])\n", "train_df_le.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y = train_df_le.log_recommends\n", "X = train_df_le.drop(\"log_recommends\", axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### RF label encoded" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)\n", "rf = RandomForestRegressor()\n", "rf.fit(X_train, y_train)\n", "preds = rf.predict(X_val)\n", "mean_absolute_error(y_val, preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### LR label encoded" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Linear models like scaled input" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "scaler = StandardScaler()\n", "X = scaler.fit_transform(X)\n", "X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ridge = Ridge()\n", "ridge.fit(X_train, y_train)\n", "preds = ridge.predict(X_val)\n", "mean_absolute_error(y_val, preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It seems linear model perform worse. Yes, it is, because of their nature. Linear model tries to find weight W that would be multiplied with input X, y = W*X + b. With LE we are telling to out model (with mapping ['Russia', 'USA', 'GB'] -> [0, 1, 2]), that weight in \"Russia\" doesn't matter because X==0, and that GB two times bigger than USA.\n", "\n", "So it's not ok to use LE with linear models." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### One-hot-encoding (OHE)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can treat category as the thing on its own. ['Russia', 'USA', 'GB'] will convert to 3 features, each of which would take value 0 or 1.\n", "\n", "This way we can treat features independently, but cardinality blows up." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_df_ohe = train_df.copy()\n", "y = train_df_ohe.log_recommends\n", "X = train_df_ohe.drop(\"log_recommends\", axis=1)\n", "X[X.columns] = X[X.columns].astype(\"category\")\n", "X = pd.get_dummies(X, prefix=X.columns)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Boom! It was 9 dimensions now it's 317k dimensions. 
"(Yes, I treat the date parts like weekday, week and year as categories too.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### RF OHE" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)\n", "rf = RandomForestRegressor(n_jobs=-1)\n", "rf.fit(X_train, y_train)\n", "preds = rf.predict(X_val)\n", "mean_absolute_error(y_val, preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The score improved, but training time and memory consumption jumped drastically (it took more than 20 GB of RAM)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### LR OHE" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ridge = Ridge()\n", "ridge.fit(X_train, y_train)\n", "preds = ridge.predict(X_val)\n", "mean_absolute_error(y_val, preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Wow! A significant improvement." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Categorical embeddings" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You probably already knew everything above.\n", "\n", "Now it's time to try something new: a neural network (NN) approach to categorical variables.\n", "\n", "On Kaggle we can see that competitions with heavy use of categorical data are usually won by tree ensembles (XGBoost). Why, in the age of the rising NN, haven't neural networks conquered this area yet?
\n", "In principle a neural network can approximate any continuous function and piecewise continuous function. However, it is not suitable to approximate arbitrary non-continuous functions as it assumes a certain level of continuity in its general form. During the training phase the continuity of the data guarantees the convergence of the optimization, and during the prediction phase it ensures that slightly changing the values of the input keeps the output stable.
\n", "Trees don't have this assumption about data continuity and can divide the states of a variable as fine as necessary." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "NN is somehow close to the linear model. What have we done to linear model? We used OHE, but it blew our dimensionality. For many real-world tasks when features may have cardinality about millions it would be harder. Secondly, we've lost some information with such a transformation. In our example, we have language as a feature. When we are converting \"SPANISH\" -> [1,0,0,...,0] and when \"ENGLISH\" -> [0,1,0,...,0]. Both languages have the same distance between each other, but there is no doubts Spanish and English are more similar than English and Chinese. We want to get this inner relation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The solution to these problems is to use embeddings, which translate large sparse vectors into a lower-dimensional space that preserves semantic relationships." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How it works in NLP field:\n", "\n", "| feature | vector |\n", "|------|------|\n", "| puppy | [0.9, 1.0, 0.0] |\n", "| dog | [1.0, 0.2, 0.0]|\n", "| kitten | [0.0, 1.0, 0.9]|\n", "| cat | [0.0, 0.2, 1.0]|\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see words share some values, that we can consider as \"dogness\" or \"size\"." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To do this, all we need is the matrix of embeddings.\n", "\n", "At the start, we are applying OHE and obtaining N rows with M columns. Where m is a category value. Then we picking row that encodes our category from the embedding matrix. Further we using this vector that repsents some rich properties of our initial category. \n", "We can obtain embeddings with NN magic. We are training embedding matrix with the size of MxP where P is number which we are picking (hyperparameter). Google's heuristic says us to pick M**0.25" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.display import Image\n", "\n", "Image(url=\"https://habrastorage.org/webt/of/jy/gd/ofjygd5fmbpxwz8x6boeu2nnpk4.png\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I'll use keras, but it's not important it's just a tool." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import keras\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import pandas as pd\n", "from keras.layers import BatchNormalization, Dense, Dropout, Embedding, Input\n", "from keras.models import Model, Sequential" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class EmbeddingMapping:\n", " \"\"\"\n", " Helper class for handling categorical variables\n", " An instance of this class should be defined for each categorical variable we want to use.\n", " \"\"\"\n", "\n", " def __init__(self, series):\n", " # get a list of unique values\n", " values = series.unique().tolist()\n", "\n", " # Set a dictionary mapping from values to integer value\n", " self.embedding_dict = {\n", " value: int_value + 1 for int_value, value in enumerate(values)\n", " }\n", "\n", " # The num_values will be used as the input_dim when defining the embedding layer.\n", " # It will also be returned for unseen values\n", " self.num_values = len(values) + 1\n", "\n", " def get_mapping(self, value):\n", " # If the value was seen in the training set, return its integer mapping\n", " if value in self.embedding_dict:\n", " return self.embedding_dict[value]\n", " # Else, return the same integer for unseen values\n", " else:\n", " return self.num_values" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# converting some out features\n", "author_mapping = EmbeddingMapping(train_df[\"author\"])\n", "domain_mapping = EmbeddingMapping(train_df[\"domain\"])\n", "lang_mapping = EmbeddingMapping(train_df[\"lang\"])\n", "X_emb = X_emb.assign(author_mapping=X_emb[\"author\"].apply(author_mapping.get_mapping))\n", "X_emb = X_emb.assign(lang_mapping=X_emb[\"lang\"].apply(lang_mapping.get_mapping))\n", "X_emb = X_emb.assign(domain_mapping=X_emb[\"domain\"].apply(domain_mapping.get_mapping))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_emb.sample(1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_emb = train_df.copy()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train, X_val, y_train, y_val = train_test_split(X_emb, y, test_size=0.2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Keras functional API\n", "# Input\n", "author_input = Input(shape=(1,), dtype=\"int32\")\n", "lang_input = Input(shape=(1,), dtype=\"int32\")\n", "domain_input = Input(shape=(1,), dtype=\"int32\")\n", "\n", "# It's google's fule of thumb N_embeddings == N_originall_dim**0.25\n", "# Let’s define the embedding layer and flatten it\n", "# Originally 31331 unique authors\n", "author_embedings = Embedding(\n", " output_dim=13, input_dim=author_mapping.num_values, input_length=1\n", ")(author_input)\n", "author_embedings = keras.layers.Reshape((13,))(author_embedings)\n", "# Originally 62 unique langs\n", "lang_embedings = Embedding(\n", " output_dim=3, input_dim=lang_mapping.num_values, input_length=1\n", ")(lang_input)\n", "lang_embedings = keras.layers.Reshape((3,))(lang_embedings)\n", "# Originally 221 unique domains\n", "domain_embedings = Embedding(\n", " output_dim=4, input_dim=domain_mapping.num_values, input_length=1\n", ")(domain_input)\n", "domain_embedings = keras.layers.Reshape((4,))(domain_embedings)\n", "\n", "\n", "# Concatenate 
continuous and embeddings inputs\n", "all_input = keras.layers.concatenate(\n", " [lang_embedings, author_embedings, domain_embedings]\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Fully connected layer to train NN and learn embeddings\n", "units = 25\n", "dense1 = Dense(units=units, activation=\"relu\")(all_input)\n", "dense1 = Dropout(0.5)(dense1)\n", "dense2 = Dense(units, activation=\"relu\")(dense1)\n", "dense2 = Dropout(0.5)(dense2)\n", "predictions = Dense(1)(dense2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "epochs = 40\n", "model = Model(inputs=[lang_input, author_input, domain_input], outputs=predictions)\n", "model.compile(loss=\"mae\", optimizer=\"adagrad\")\n", "\n", "history = model.fit(\n", " [X_train[\"lang_mapping\"], X_train[\"author_mapping\"], X_train[\"domain_mapping\"]],\n", " y_train,\n", " epochs=epochs,\n", " batch_size=128,\n", " verbose=0,\n", " validation_data=(\n", " [X_val[\"lang_mapping\"], X_val[\"author_mapping\"], X_val[\"domain_mapping\"]],\n", " y_val,\n", " ),\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "At this step, we've trained a NN, but we are not going to use it. We want to get the embeddings layer.\n", "\n", "For each category, we have distinct embedding. Let's extract them and use it in our simple models." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.layers" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.layers[5].get_weights()[0].shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lang_embedding = model.layers[3].get_weights()[0]\n", "lang_emb_cols = [f\"lang_emb_{i}\" for i in range(lang_embedding.shape[1])]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "author_embedding = model.layers[4].get_weights()[0]\n", "aut_emb_cols = [f\"aut_emb_{i}\" for i in range(author_embedding.shape[1])]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "domain_embedding = model.layers[5].get_weights()[0]\n", "dom_emb_cols = [f\"dom_emb_{i}\" for i in range(domain_embedding.shape[1])]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we have embeddings, and all we need is to take a row that corresponds to our examples." 
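] }, { "cell_type": "markdown", "metadata": {}, "source": [ "(An optional sanity check, just a sketch: the layer indices above depend on the order in which the graph was built, so we can also locate the Embedding layers by type and confirm their weight shapes.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Find the Embedding layers by type instead of hard-coded indices and list\n", "# their weight shapes, (num_values, embedding_dim), one per categorical input\n", "emb_layers = [l for l in model.layers if isinstance(l, Embedding)]\n", "[(l.name, l.get_weights()[0].shape) for l in emb_layers]"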
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_author_vector(aut_num):\n", " return author_embedding[aut_num, :]\n", "\n", "\n", "def get_lang_vector(lang_num):\n", " return lang_embedding[lang_num, :]\n", "\n", "\n", "def get_domain_vector(dom_num):\n", " return domain_embedding[dom_num, :]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "get_lang_vector(4)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lang_emb = pd.DataFrame(\n", " X_emb[\"lang_mapping\"].apply(get_lang_vector).values.tolist(), columns=lang_emb_cols\n", ")\n", "lang_emb.index = X_emb.index\n", "X_emb[lang_emb_cols] = lang_emb" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "aut_emb = pd.DataFrame(\n", " X_emb[\"author_mapping\"].apply(get_author_vector).values.tolist(),\n", " columns=aut_emb_cols,\n", ")\n", "aut_emb.index = X_emb.index\n", "X_emb[aut_emb_cols] = aut_emb" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dom_emb = pd.DataFrame(\n", " X_emb[\"domain_mapping\"].apply(get_domain_vector).values.tolist(),\n", " columns=dom_emb_cols,\n", ")\n", "dom_emb.index = X_emb.index\n", "X_emb[dom_emb_cols] = dom_emb" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_emb.drop(\n", " [\n", " \"author\",\n", " \"lang\",\n", " \"domain\",\n", " \"log_recommends\",\n", " \"author_mapping\",\n", " \"lang_mapping\",\n", " \"domain_mapping\",\n", " ],\n", " axis=1,\n", " inplace=True,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_emb.columns" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train, X_val, y_train, y_val = train_test_split(X_emb, y, test_size=0.2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rf = RandomForestRegressor(n_jobs=-1)\n", "rf.fit(X_train, y_train)\n", "preds = rf.predict(X_val)\n", "mean_absolute_error(y_val, preds)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ridge = Ridge()\n", "ridge.fit(X_train, y_train)\n", "preds = ridge.predict(X_val)\n", "mean_absolute_error(y_val, preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It seems like a success." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One nice property of embeddings - our categories have some simularity(distance) from each other. Let's look at the graph." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import bokeh.models as bm\n", "import bokeh.plotting as pl\n", "from bokeh.io import output_notebook\n", "\n", "output_notebook()\n", "\n", "\n", "def draw_vectors(\n", " x,\n", " y,\n", " radius=10,\n", " alpha=0.25,\n", " color=\"blue\",\n", " width=600,\n", " height=400,\n", " show=True,\n", " **kwargs,\n", "):\n", " \"\"\" draws an interactive plot for data points with auxilirary info on hover \"\"\"\n", " if isinstance(color, str):\n", " color = [color] * len(x)\n", " data_source = bm.ColumnDataSource({\"x\": x, \"y\": y, \"color\": color, **kwargs})\n", "\n", " fig = pl.figure(active_scroll=\"wheel_zoom\", width=width, height=height)\n", " fig.scatter(\"x\", \"y\", size=radius, color=\"color\", alpha=alpha, source=data_source)\n", "\n", " fig.add_tools(bm.HoverTool(tooltips=[(key, \"@\" + key) for key in kwargs.keys()]))\n", " if show:\n", " pl.show(fig)\n", " return fig" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "langs_vectors = [get_lang_vector(l) for l in lang_mapping.embedding_dict.values()]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lang_tsne = TSNE().fit_transform(langs_vectors)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "draw_vectors(\n", " lang_tsne[:, 0], lang_tsne[:, 1], token=list(lang_mapping.embedding_dict.keys())\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "langs_vectors_pca = PCA(n_components=2).fit_transform(langs_vectors)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "draw_vectors(\n", " langs_vectors_pca[:, 0],\n", " langs_vectors_pca[:, 1],\n", " token=list(lang_mapping.embedding_dict.keys()),\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This time graphs doesn't look any meaningfull, but score speaks for itself." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Cat2Vec" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another approach came from NLP is word2Vec that was renamed to Cat2Vec. It haven't firm confirmation about it's usefulness, but there are some papers that argue that. (Links below)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Distributional semantics and John Rupert Firth says \"You shall know a word by the company it keeps\". Some words share the same context, so they are somehow similar. We can suggest, that categories may share some inner correlation by they co-occurrence. For example weather and city. Maybe city \"Philadelphia\" may be similar to weather \"always sunny\", or \"Moskow\" with \"snowy\"." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Firstly we applying Feature encoding, then we can make \"sentence\" from our row.\n", "\n", "In the example below, let's imagine we have an article at \"Monday January 2018 English_language Medium.com\" Here our sentence so maybe if English co-occurs with Medium more often then Chinese with hackernoon.com. (Poor consideration but just for example)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The only consideration is \"word\" order. Word2Vec relays on order, fro categorical \"sentence\" it doesn't matter, so it's better to shuffle sentences.\n", "\n", "Let's implement it." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_w2v = train_df.copy()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "month_int_to_name = {\n", " 1: \"jan\",\n", " 2: \"feb\",\n", " 3: \"apr\",\n", " 4: \"march\",\n", " 5: \"may\",\n", " 6: \"june\",\n", " 7: \"jul\",\n", " 8: \"aug\",\n", " 9: \"sept\",\n", " 10: \"okt\",\n", " 11: \"nov\",\n", " 12: \"dec\",\n", "}\n", "weekday_int_to_day = {\n", " 0: \"mon\",\n", " 1: \"thus\",\n", " 2: \"wen\",\n", " 3: \"thusd\",\n", " 4: \"fri\",\n", " 5: \"sut\",\n", " 6: \"sun\",\n", "}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "working_day_int_to_day = {1: \"work\", 0: \"not_work\"}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_w2v.month = X_w2v.month.apply(lambda x: month_int_to_name[x])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_w2v.weekday = X_w2v.weekday.apply(lambda x: weekday_int_to_day[x])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_w2v.working_day = X_w2v.working_day.apply(lambda x: working_day_int_to_day[x])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "all_list = list()\n", "for ind, r in X_w2v.iterrows():\n", " values_list = [str(val).replace(\" \", \"_\") for val in r.values]\n", " all_list.append(values_list)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from gensim.models import Word2Vec\n", "\n", "model = Word2Vec(\n", " all_list,\n", " size=32, # embedding vector size\n", " min_count=5, # consider words that occured at least 5 times\n", " window=5,\n", ").wv" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.most_similar(\"june\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "words = sorted(\n", " model.vocab.keys(), key=lambda word: model.vocab[word].count, reverse=True\n", ")[:1000]\n", "\n", "print(words[::100])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "word_vectors = np.array([model.get_vector(wrd) for wrd in words])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Draw a graph as usual" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "word_tsne = TSNE().fit_transform(word_vectors)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "draw_vectors(word_tsne[:, 0], word_tsne[:, 1], color=\"green\", token=words)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our categories mingled, but we can notice that years, days, languages are stays apart from authors cloud." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_phrase_embedding(phrase):\n", " \"\"\"\n", " Convert phrase to a vector by aggregating it's word embeddings. See description above.\n", " \"\"\"\n", " # 1. lowercase phrase\n", " # 2. tokenize phrase\n", " # 3. 
 "    # average word vectors over all tokens that are present in the model's vocabulary\n", "    # if every token is missing from the vocabulary, return zeros\n", "\n", "    vector = np.zeros([model.vector_size], dtype=\"float32\")\n", "    word_count = 0\n", "\n", "    for word in phrase.split():\n", "        if word in model.vocab:\n", "            vector += model.get_vector(word)\n", "            word_count += 1\n", "\n", "    if word_count:\n", "        vector /= word_count\n", "\n", "    return vector" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "new_features = list()\n", "for ph in all_list:\n", "    vector = get_phrase_embedding(\" \".join(ph))\n", "    new_features.append(vector)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "new_features = pd.DataFrame(new_features)\n", "new_features.index = X_w2v.index\n", "X_w2v = pd.concat([X_w2v, new_features], axis=1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_w2v.drop(\n", "    [\n", "        \"author\",\n", "        \"domain\",\n", "        \"lang\",\n", "        \"working_day\",\n", "        \"year\",\n", "        \"month\",\n", "        \"weekday\",\n", "        \"log_recommends\",\n", "    ],\n", "    axis=1,\n", "    inplace=True,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train, X_val, y_train, y_val = train_test_split(X_w2v, y, test_size=0.2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rf = RandomForestRegressor(n_jobs=-1)\n", "rf.fit(X_train, y_train)\n", "preds = rf.predict(X_val)\n", "mean_absolute_error(y_val, preds)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ridge = Ridge()\n", "ridge.fit(X_train, y_train)\n", "preds = ridge.predict(X_val)\n", "mean_absolute_error(y_val, preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A poor result, but I cut a lot of features that could have helped this approach to work." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Conclusions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now you know that categorical variables are a tricky beast and that we can get a lot out of them with embeddings and Cat2Vec techniques. These representations work not only for NNs but also in simpler models, so they can be used in low-latency production systems." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "https://arxiv.org/ftp/arxiv/papers/1603/1603.04259.pdf ITEM2VEC: NEURAL ITEM EMBEDDING FOR COLLABORATIVE FILTERING
\n", "https://openreview.net/pdf?id=HyNxRZ9xg CAT2VEC: LEARNING DISTRIBUTED REPRESENTATION OF MULTI-FIELD CATEGORICAL DATA
\n", "https://arxiv.org/pdf/1604.06737v1.pdf Entity Embeddings of Categorical Variables
\n", "https://developers.google.com/machine-learning/crash-course/embeddings/video-lecture Embeddings
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }