{ "cells": [ { "cell_type": "markdown", "metadata": { "toc": true }, "source": [ "

Table of Contents

\n", "
" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "" ], "text/plain": [ "" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# code for loading the format for the notebook\n", "import os\n", "\n", "# path : store the current path to convert back to it later\n", "path = os.getcwd()\n", "os.chdir(os.path.join('..', 'notebook_format'))\n", "from formats import load_style\n", "load_style(css_style = 'custom2.css', plot_style = False)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Ethen 2018-04-07 14:28:04 \n", "\n", "CPython 3.6.4\n", "IPython 6.2.1\n", "\n", "numpy 1.14.2\n", "pandas 0.22.0\n", "sklearn 0.19.1\n", "tqdm 4.19.8\n", "scipy 1.0.0\n" ] } ], "source": [ "os.chdir(path)\n", "\n", "# 1. magic to print version\n", "# 2. magic so that the notebook will reload external python modules\n", "%load_ext watermark\n", "%load_ext autoreload\n", "%autoreload 2\n", "\n", "import numpy as np\n", "import pandas as pd\n", "from math import ceil\n", "from tqdm import trange\n", "from subprocess import call\n", "from scipy.sparse import csr_matrix, dok_matrix\n", "\n", "%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn,tqdm,scipy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Collaborative Filtering for Implicit Feedback Datasets" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One common scenario in real-world recommendation system is we only have **implicit** instead of **explicit** user-item interaction data. To elaborate on this a little bit more, a user may be searching for an item on the web, or listening to songs. Unlike a rating data, where we have direct access to the user's preference towards an item, these type of actions do not **explicitly** state or quantify any preference of the user for the item, but instead gives us **implicit confidence** about the user's opinion.\n", "\n", "Even when we do have explicit data, it might still be a good idea to incorporate implicit data into the model. Consider, for example, listening to songs. When users listen to music on a streaming service, they might rarely ever rate a song that he/she like or dislike. But more often they skip a song, or listen only halfway through it if they dislike it. If the user really liked a song, they will often come back and listen to it. So, to infer a user's musical taste profile, their listens, repeat listens, skips and fraction of tracks listened to, etc. might be far more valuable signals than explicit ratings." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Formulation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall from the previous notebook that the loss function for training the recommendation model on explicit feedback data was:\n", "\n", "$$\n", "\\begin{align}\n", "L_{explicit} &= \\sum\\limits_{u,i \\in S}( r_{ui} - x_{u} y_{i}^{T} )^{2} + \\lambda \\big( \\sum\\limits_{u} \\left\\Vert x_{u} \\right\\Vert^{2} + \\sum\\limits_{i} \\left\\Vert y_{i} \\right\\Vert^{2} \\big)\n", "\\end{align}\n", "$$\n", "\n", "Where:\n", "\n", "- $r_{ui}$ is the true rating given by user $u$ to the item $i$\n", "- $x_u$ and $y_i$ are user u's and item i's latent factors, both are $1×d$ dimensional, where $d$ the number of latent factors that the user can specify\n", "- $S$ was the set of all user-item ratings\n", "- $\\lambda$ controls the regularization strength that prevents overfitting the user and item vectors\n", "\n", "To keep it concrete, let's assume we're working music data and the value of our $r_{ui}$ will consists of implicit ratings that counts the number of times a user has listened to a song (song listen count). Then new formulation becomes:\n", "\n", "$$\n", "\\begin{align}\n", "L_{implicit} &= \\sum\\limits_{u,i} c_{ui}( p_{ui} - x_{u} y_{i}^{T} )^2 + \\lambda \\big( \\sum\\limits_{u} \\left\\Vert x_{u} \\right\\Vert^{2} + \\sum\\limits_{i} \\left\\Vert y_{i} \\right\\Vert^{2} \\big)\n", "\\end{align}\n", "$$\n", "\n", "Recall that with implicit feedback, we do not have ratings anymore; rather, we have users' preferences for items. Therefore, in the new loss function, the ratings $r_{ui}$ has been replaced with a preference $p_{ui}$ indicating the preference of user $u$ to item $i$. $p_{ui}$ is a set of binary variables and is computed by binarizing $r_{ui}$.\n", "\n", "$$\n", "\\begin{align}\n", "p_{ui} &= \\begin{cases} 1 &\\mbox{if } r_{ui} > 0 \\\\ 0 & \\mbox{otherwise} \\end{cases}\n", "\\end{align}\n", "$$\n", "\n", "We make the assumption that if a user has interacted at all with an item ($r_{ui} > 0$), then we set $p_{ui} = 1$ to indicate that user $u$ has a liking/preference for item $i$. Otherwise, we set $p_{ui} = 0$. However, these assumptions comes with varying degrees of confidence. First of all, when $p_{ui} = 0$, we assume that it should be associated with a lower confidence, as there are many reasons beyond disliking the item as to why the user has not interacted with it. e.g. Unaware of it's existence. On the other hand, as the number of implicit feedback, $r_{ui}$, grows, we have a stronger indication that the user does indeed like the item (regardless of whether he/she if buying a gift for someone else). So to measure the level of confidence mentioned above, we introduce another set of variables $c_{ui}$ that measures our confidence in observing $p_{ui}$:\n", "\n", "$$\n", "\\begin{align}\n", "c_{ui} = 1 + \\alpha r_{ui}\n", "\\end{align}\n", "$$\n", "\n", "Where the 1 ensures we have some minimal confidence for every user-item pair, and as we observe more and more implicit feedback (as $r_{ui}$ gets larger and larger), our confidence in $p_{ui} = 1$ increases accordingly. The term $\\alpha$ is a parameter that we have to specify to control the rate of the increase. This formulation makes intuitive sense when we look back at the $c_{ui}( p_{ui} - x_{u} y_{i}^{T} )^2$ term in the loss function. 
, "\n", "The implementation in the later section will be based on the formula above, but note that there are many ways in which we can tune the formulation. For example, we can derive $p_{ui}$ from $r_{ui}$ differently: instead of setting the binary cutoff at 0, we can set it at another threshold that we feel is appropriate for the domain. Similarly, there are many ways to transform $r_{ui}$ into the confidence level $c_{ui}$, e.g. we can use:\n", "\n", "$$\n", "\\begin{align}\n", "c_{ui} = 1 + \\alpha \\log \\left( 1 + r_{ui} / \\epsilon \\right)\n", "\\end{align}\n", "$$\n", "\n", "Regardless of the scheme, it's important to realize that we are transforming the raw observation $r_{ui}$ into two distinct representations: the preference $p_{ui}$ and the confidence level of that preference, $c_{ui}$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Alternating Least Squares" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's assume we have $m$ users and $n$ items. Now, to solve the loss function above, we start by treating $y_i$ as constant and solving the loss function with respect to $x_u$. To do this, we rewrite and expand the first term of the loss function (excluding the regularization terms), $\\sum\\limits_{u,i} c_{ui}( p_{ui} - x_{u} y_{i}^{T} )^2$, as:\n", "\n", "$$\n", "\\begin{align}\n", "\\sum\\limits_{u,i} c_{ui}( p_{ui} - x_{u} y_{i}^{T} )^2\n", "&= \\sum_u ( p_u - Y x_u^T )^T C^u ( p_u - Y x_u^T ) \\\\\\\\\n", "&= \\sum_u p_u^T C^u p_u - 2 x_u Y^T C^u p_u + x_u Y^T C^u Y x_u^T\n", "\\end{align}\n", "$$\n", "\n", "Where: \n", "\n", "- $Y \\in \\mathbb{R}^{n \\times d}$ represents all item row vectors vertically stacked on each other\n", "- $p_u \\in \\mathbb{R}^{n \\times 1}$ contains all of the preferences of user $u$\n", "- The diagonal matrix $C^u \\in \\mathbb{R}^{n \\times n}$ contains $c_{ui}$ in row/column $i$, i.e. the user's confidence across all the items. e.g. if $u = 0$, then the matrix for user $u_0$ will look like:\n", "\n", "$$\n", "\\begin{align}\n", "{C}^{u_0} = \\begin{bmatrix} c_{u_0 1} & 0 & \\cdots & 0 \\\\\\\\ 0 & c_{u_0 2} & \\cdots & 0 \\\\\\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\\\\\ 0 & 0 & \\cdots & c_{u_0 n} \\end{bmatrix}\n", "\\end{align}\n", "$$\n", "\n", "The formula above can also be used to monitor the loss function at each iteration. If we set $A = Y^T C^u Y$ and $b = Y^T C^u$, the last two terms can be rewritten as $x_u ( A x_u^T - 2 b p_u )$. As for the first term, $p_u^T C^u p_u$, we can leverage the fact that $p_u$ is 1 for all positive items and simply sum up the corresponding entries of $C^u$.\n", "\n", "Now for the derivation of the partial derivative:\n", "\n", "$$\n", "\\begin{align}\n", "\\frac{\\partial L_{implicit}}{\\partial x_u} \n", "&= -2 Y^T C^u p_u + 2 Y^T C^u Y x_u + 2 \\lambda x_u = 0 \\\\\\\\\n", "&\\Rightarrow (Y^T C^u Y + \\lambda I) x_u = Y^T C^u p_{u} \\\\\\\\\n", "&\\Rightarrow x_u = (Y^T C^u Y + \\lambda I)^{-1} Y^T C^u p_u\n", "\\end{align}\n", "$$\n", "\n", "The main computational bottleneck in the expression above is the need to compute $Y^T C^u Y$ for every user.
A speedup can be obtained by rewriting the expression as:\n", "\n", "$$\n", "\\begin{align}\n", "{Y}^T {C}^{u} {Y} &= Y^T Y + {Y}^T \\left( C^u - I \\right) Y\n", "\\end{align}\n", "$$\n", "\n", "Now the term $Y^T Y$ is independent of the user and can be precomputed once per iteration. Next, notice that $\\left( C^u - I \\right)$ has only $n_u$ non-zero elements, where $n_u$ is the number of items for which $r_{ui} > 0$. Similarly, $C^u p_u$ contains only $n_u$ non-zero elements, since $p_u$ is a binary transformation of $r_{ui}$. Thus the final update for $x_u$ becomes:\n", "\n", "$$\n", "\\begin{align}\n", "x_u = (Y^T Y + Y^T \\left( C^u - I \\right) Y + \\lambda I)^{-1} Y^T C^u p_u\n", "\\end{align}\n", "$$\n", "\n", "After solving for $x_u$, the same procedure can be carried out to solve for $y_i$, giving a similar expression:\n", "\n", "$$\n", "\\begin{align}\n", "y_i = (X^T X + X^T \\left( C^i - I \\right) X + \\lambda I)^{-1} X^T C^i p_i\n", "\\end{align}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Implementation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll use the same movielens dataset as in the previous notebook. The movielens data is not an implicit feedback dataset, since users did provide explicit ratings, but we will use it for now to test out our implementation. The overall preprocessing procedure of loading the data and doing the train/test split is the same as in the previous notebook, but here we'll do it in a sparse matrix fashion." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "data dimension: \n", " (100000, 4)\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
user_iditem_idratingtimestamp
01962423881250949
11863023891717742
2223771878887116
3244512880606923
41663461886397596
\n", "
" ], "text/plain": [ " user_id item_id rating timestamp\n", "0 196 242 3 881250949\n", "1 186 302 3 891717742\n", "2 22 377 1 878887116\n", "3 244 51 2 880606923\n", "4 166 346 1 886397596" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "file_dir = 'ml-100k'\n", "file_path = os.path.join(file_dir, 'u.data')\n", "if not os.path.isdir(file_dir):\n", " call(['curl', '-O', 'http://files.grouplens.org/datasets/movielens/' + file_dir + '.zip'])\n", " call(['unzip', file_dir + '.zip'])\n", "\n", "names = ['user_id', 'item_id', 'rating', 'timestamp']\n", "df = pd.read_csv(file_path, sep = '\\t', names = names)\n", "print('data dimension: \\n', df.shape)\n", "df.head()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "def create_matrix(data, user_col, item_col, rating_col):\n", " \"\"\"\n", " creates the sparse user-item interaction matrix\n", " \n", " Parameters\n", " ----------\n", " data : DataFrame\n", " implicit rating data\n", "\n", " user_col : str\n", " user column name\n", "\n", " item_col : str\n", " item column name\n", " \n", " ratings_col : str\n", " implicit rating column name\n", "\n", " Returns\n", " -------\n", " ratings : scipy sparse csr_matrix [n_users, n_items]\n", " user/item ratings matrix\n", "\n", " data : DataFrame\n", " the implict rating data that retains only the positive feedback\n", " (if specified to do so)\n", " \"\"\"\n", " # map each item and user to a unique numeric value\n", " for col in (item_col, user_col):\n", " data[col] = data[col].astype('category')\n", " \n", " # create a sparse matrix of using the (rating, (rows, cols)) format\n", " rows = data[user_col].cat.codes\n", " cols = data[item_col].cat.codes\n", " rating = data[rating_col]\n", " ratings = csr_matrix((rating, (rows, cols)))\n", " ratings.eliminate_zeros()\n", " return ratings, data" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "<943x1682 sparse matrix of type ''\n", "\twith 100000 stored elements in Compressed Sparse Row format>" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "user_col = 'user_id'\n", "item_col = 'item_id'\n", "rating_col = 'rating'\n", "X, df = create_matrix(df, user_col, item_col, rating_col)\n", "X" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following train and test set split function is assuming that you're doing a train/test split using the current dataset. Though it's probably better to use time to perform the train/test split. For example, using the year 2016's data as training and the 1 first month of 2017's data as testing." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "def create_train_test(ratings, test_size = 0.2, seed = 1234):\n", " \"\"\"\n", " split the user-item interactions matrix into train and test set\n", " by removing some of the interactions from every user and pretend\n", " that we never seen them\n", " \n", " Parameters\n", " ----------\n", " ratings : scipy sparse csr_matrix\n", " The user-item interactions matrix\n", " \n", " test_size : float between 0.0 and 1.0, default 0.2\n", " Proportion of the user-item interactions for each user\n", " in the dataset to move to the test set; e.g. 
if set to 0.2\n", "        and a user has 10 interactions, then 2 will be moved to the\n", "        test set\n", "\n", "    seed : int, default 1234\n", "        Seed for reproducibly splitting the\n", "        data into train/test sets\n", "\n", "    Returns\n", "    -------\n", "    train : scipy sparse csr_matrix\n", "        Training set\n", "\n", "    test : scipy sparse csr_matrix\n", "        Test set\n", "    \"\"\"\n", "    assert test_size < 1.0 and test_size > 0.0\n", "\n", "    # Dictionary Of Keys based sparse matrix is more efficient\n", "    # for constructing sparse matrices incrementally compared with csr_matrix\n", "    train = ratings.copy().todok()\n", "    test = dok_matrix(train.shape)\n", "\n", "    # 1. for every user, assign randomly chosen interactions\n", "    # to the test set and set those interactions to zero in the training set;\n", "    # when computing the number of interactions to go into the test set,\n", "    # remember to round up (e.g. if a user has 4 ratings and the\n", "    # test_size is 0.2, then 0.8 ratings would go to the test set, thus we\n", "    # round up to ensure the test set gets at least 1 rating);\n", "    # 2. note that we could easily parallelize this for loop if we were to\n", "    # aim for a more efficient implementation\n", "    rstate = np.random.RandomState(seed)\n", "    for u in range(ratings.shape[0]):\n", "        split_index = ratings[u].indices\n", "        n_splits = ceil(test_size * split_index.shape[0])\n", "        test_index = rstate.choice(split_index, size = n_splits, replace = False)\n", "        test[u, test_index] = ratings[u, test_index]\n", "        train[u, test_index] = 0\n", "\n", "    train, test = train.tocsr(), test.tocsr()\n", "    return train, test" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "<943x1682 sparse matrix of type '<class 'numpy.int64'>'\n", "\twith 79619 stored elements in Compressed Sparse Row format>" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "seed = 1234\n", "test_size = 0.2\n", "X_train, X_test = create_train_test(X, test_size, seed)\n", "X_train" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following implementation uses a couple of tricks to speed up the procedure. First of all, when we need to solve $Ax = b$, where $A$ is an $n \\times n$ matrix, a lot of books write the solution as $x = A^{-1} b$; in practice, however, there is hardly ever a good reason to compute it that way, since solving the equation $Ax = b$ directly is faster and more numerically stable than forming $A^{-1}$.\n", "\n", "The next trick is the idea of computing the matrix product $X^T X$ as a sum of [outer products](https://docs.scipy.org/doc/numpy/reference/generated/numpy.outer.html) of its rows."
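, "\n", "Here's a minimal sketch of the first trick (the exact speed difference will vary with the size and conditioning of $A$):\n", "\n", "```python\n", "import numpy as np\n", "\n", "# solving Ax = b directly is preferred over explicitly forming the inverse\n", "rstate = np.random.RandomState(1234)\n", "A = rstate.rand(100, 100) + 100 * np.eye(100)  # a well-conditioned toy matrix\n", "b = rstate.rand(100)\n", "\n", "x1 = np.linalg.solve(A, b)    # what we'll use in the implementation below\n", "x2 = np.linalg.inv(A).dot(b)  # slower and less numerically stable\n", "np.allclose(x1, x2)           # True\n", "```"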
] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[9, 4],\n", " [3, 1],\n", " [5, 2]])" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# example matrix\n", "X = np.array([[9, 3, 5], [4, 1, 2]]).T\n", "X" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[115, 49],\n", " [ 49, 21]])" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# normal matrix product\n", "X.T.dot(X)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "row:\n", " [9 4]\n", "outer product of row:\n", " [[81 36]\n", " [36 16]]\n", "row:\n", " [3 1]\n", "outer product of row:\n", " [[9 3]\n", " [3 1]]\n", "row:\n", " [5 2]\n", "outer product of row:\n", " [[25 10]\n", " [10 4]]\n" ] }, { "data": { "text/plain": [ "array([[115., 49.],\n", " [ 49., 21.]])" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# intialize an empty array\n", "end_result = np.zeros((2, 2))\n", "\n", "# loop through each row add up the outer product\n", "for i in range(X.shape[0]):\n", " out = np.outer(X[i], X[i])\n", " end_result += out\n", " print('row:\\n', X[i])\n", " print('outer product of row:\\n', out)\n", "\n", "end_result" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The reason why this can speed things up is that the matrix product is now the sum of the outer products of the rows, where each row's computation is independent of another can be computed in the parallelized fashion then added back together!\n", "\n", "Last but not least, is exploiting the property of scipy's [Compressed Sparse Row Matrix](https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.sparse.csr_matrix.html) to access the non-zero elements. For those that are unfamiliar with it, the following link has a pretty decent quick introduction. [Blog: Empty rows in sparse arrays](http://mike.place/2015/sparse/)." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "class ALSWR:\n", " \"\"\"\n", " Alternating Least Squares with Weighted Regularization\n", " for implicit feedback\n", "\n", " Parameters\n", " ----------\n", " n_iters : int\n", " number of iterations to train the algorithm\n", "\n", " n_factors : int\n", " number/dimension of user and item latent factors\n", "\n", " alpha : int\n", " scaling factor that indicates the level of confidence in preference\n", "\n", " reg : int\n", " regularization term for the user and item latent factors\n", "\n", " seed : int\n", " seed for the randomly initialized user, item latent factors\n", "\n", " Reference\n", " ---------\n", " Y. Hu, Y. Koren, C. 
Volinsky - Collaborative Filtering for Implicit Feedback Datasets\n", "    http://yifanhu.net/PUB/cf.pdf\n", "    \"\"\"\n", "    def __init__(self, n_iters, n_factors, alpha, reg, seed):\n", "        self.reg = reg\n", "        self.seed = seed\n", "        self.alpha = alpha\n", "        self.n_iters = n_iters\n", "        self.n_factors = n_factors\n", "\n", "    def fit(self, ratings):\n", "        \"\"\"\n", "        ratings : scipy sparse csr_matrix [n_users, n_items]\n", "            sparse matrix of user-item interactions\n", "        \"\"\"\n", "        # the original confidence vector should include a + 1,\n", "        # but adding 1 directly to a sparse matrix would densify it,\n", "        # thus we account for the + 1 later in the computation\n", "        Cui = ratings.copy().tocsr()\n", "        Cui.data *= self.alpha\n", "        Ciu = Cui.T.tocsr()\n", "        self.n_users, self.n_items = Cui.shape\n", "\n", "        # initialize user latent factors and item latent factors\n", "        # randomly with a specified set seed\n", "        rstate = np.random.RandomState(self.seed)\n", "        self.user_factors = rstate.normal(size = (self.n_users, self.n_factors))\n", "        self.item_factors = rstate.normal(size = (self.n_items, self.n_factors))\n", "\n", "        for _ in trange(self.n_iters, desc = 'training progress'):\n", "            self._als_step(Cui, self.user_factors, self.item_factors)\n", "            self._als_step(Ciu, self.item_factors, self.user_factors)\n", "\n", "        return self\n", "\n", "    def _als_step(self, Cui, X, Y):\n", "        \"\"\"\n", "        when solving the user latent vectors,\n", "        the item vectors will be fixed and vice versa\n", "        \"\"\"\n", "        # the variable names follow the notation when holding\n", "        # the item vectors Y constant and solving for the user vectors X\n", "\n", "        # YtY is a d * d matrix that is computed\n", "        # independently of each user\n", "        YtY = Y.T.dot(Y)\n", "        data = Cui.data\n", "        indptr, indices = Cui.indptr, Cui.indices\n", "\n", "        # for every user build up A and b, then solve Ax = b;\n", "        # this for loop is the bottleneck and can be easily parallelized\n", "        # as each user's computation is independent of one another;\n", "        # note that Cui.shape[0] is the number of users when solving\n", "        # for the user factors and the number of items when solving\n", "        # for the item factors\n", "        for u in range(Cui.shape[0]):\n", "            # initialize a new A and b for every user\n", "            b = np.zeros(self.n_factors)\n", "            A = YtY + self.reg * np.eye(self.n_factors)\n", "\n", "            for index in range(indptr[u], indptr[u + 1]):\n", "                # indices[index] stores the non-zero positions for a given row\n", "                # and data[index] stores the corresponding confidence;\n", "                # we also add 1 to the confidence here, since we did not\n", "                # do it in the beginning (the 1 gives every user-item\n", "                # pair a minimal confidence)\n", "                i = indices[index]\n", "                confidence = data[index] + 1\n", "                factor = Y[i]\n", "\n", "                # for b, Y^T C^u p_u\n", "                # there should be a times 1 for the preference\n", "                # Pui = 1\n", "                # b += confidence * Y[i] * Pui\n", "                # but the times 1 can be dropped\n", "                b += confidence * factor\n", "\n", "                # for A, Y^T (C^u - I) Y\n", "                A += (confidence - 1) * np.outer(factor, factor)\n", "\n", "            X[u] = np.linalg.solve(A, b)\n", "\n", "        return self\n", "\n", "    def predict(self):\n", "        \"\"\"predict ratings for every user and item\"\"\"\n", "        prediction = self.user_factors.dot(self.item_factors.T)\n", "        return prediction\n", "\n", "    def _predict_user(self, user):\n", "        \"\"\"\n", "        returns the predicted ratings for the specified user,\n", "        this is mainly used in computing the evaluation metrics\n", "        \"\"\"\n", "        user_pred = self.user_factors[user].dot(self.item_factors.T)\n", "        return user_pred" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stderr",
"output_type": "stream", "text": [ "training progress: 100%|██████████| 15/15 [00:28<00:00, 1.93s/it]\n" ] }, { "data": { "text/plain": [ "<__main__.ALSWR at 0x1079186d8>" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "als = ALSWR(n_iters = 15, n_factors = 20, alpha = 15, reg = 0.01, seed = 1234)\n", "als.fit(X_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Ranking Metrics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we've built our recommendation engine, the next important thing to do is to evaluate our model's offline performance.\n", "\n", "Let's say that there are some users and some items, like movies, songs or jobs. Each user might be interested in some items. The client asks us to recommend a few items (the number is x) for each user. In other words, what we're after is the top-N recommendation for each user and after recommending these items to the user, we need a way to measure whether the recommendation is useful or not. One key thing to note is that metrics such as RMSE might not be the best at assessing the quality of recommendations because the training focused on items with the most ratings, achieving a good fit for those. The items with few ratings don't mean much in terms of their impact on the loss. As a result, predictions for them will be off. \n", "\n", "Long story short, we need metrics specifically crafted for ranking evaluation and the two most popular ranking metrics are **MAP (Mean Average Precision)** and **NDCG (Normalized Discounted Cumulative Gain)**. The main difference between the two is that MAP assumes binary relevance (an item is either of interest or not, e.g. a user clicked a link, watched a video, purchased a product), while NDCG can be used in any case where we can assign relevance score to a recommended item (binary, integer or real). The relationship is just like classification and regression." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Mean Average Precision" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this section of the content, a large portion is based on the excellent blog post at the following link. [Blog: What you wanted to know about Mean Average Precision](http://fastml.com/what-you-wanted-to-know-about-mean-average-precision/). This documentation builds on top of it by carrying out the educational implementation.\n", "\n", "Let's say that there are some users and some items, like movies, songs or jobs. Each user might be interested in some items. The client asks us to recommend a few items (the number is x) for each user. After recommending the items to the user, we need a way to measure whether the recommendation is useful or not. One way to do this is using **MAP@k (Mean Average Precision at k)** .\n", "\n", "The intuition behind this evaluation metric is that:\n", "\n", "- We can recommend at most k items for each user (this is essentially the `@k` part), but we will be penalized for bad recommendations\n", "- Order matters, so it's better to submit more certain recommendations first, followed by recommendations we are less sure about\n", "\n", "Diving a bit deeper, we will first ignore `@k` and get M out of the way. MAP is just the mean of APs, or average precision, for all users. If we have 1000 users, we sum APs for each user and divide the sum by 1000. This is MAP. 
So now, what is AP, or average precision?\n", "\n", "One way to understand average precision is this: we type something into Google and it shows us 10 results. It would be best if all of them were relevant. If only some are relevant, say five of them, then it's much better if the relevant ones are shown first. It would be bad if the first five were irrelevant and the good ones only started from the sixth, wouldn't it? The formula for computing AP is:\n", "\n", "$$\n", "\\begin{align}\n", "AP@k = \\sum_{i=1}^k \\text{(precision at } i) \\times \\text{(change in recall at } i)\n", "\\end{align}\n", "$$\n", "\n", "Where precision at $i$ is the percentage of correct items among the first $i$ recommendations. The change in recall at $i$ is $1/k$ if the item at position $i$ is correct (for every correct item), otherwise zero. Note that this is based on the assumption that the number of relevant items $r$ is greater than or equal to $k$: $r \\geq k$. If not, the change in recall is $1/r$ for each correct item instead of $1/k$.\n", "\n", "For example, if the actual items were [1 2 3 4 5] and we recommend [6 4 7 1 2], then we get 4, 1 and 2 right, but with some incorrect guesses in between. Now let's say we were to compute AP@2, so only the first two predictions matter: 6 and 4. The first is wrong, so precision@1 is 0. The second is right, so precision@2 is 0.5. The change in recall is 0 and 0.5 (that's $1/k$) respectively, so AP@2 = 0 * 0 + 0.5 * 0.5 = 0.25." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "def compute_apk(y_true, y_pred, k):\n", "    \"\"\"\n", "    average precision at k, y_pred is assumed\n", "    to be truncated to length k prior to feeding\n", "    it to the function\n", "    \"\"\"\n", "    # convert to set since membership\n", "    # testing in a set is vastly faster\n", "    actual = set(y_true)\n", "\n", "    # precision at i is the percentage of correct\n", "    # items among the first i recommendations; the\n", "    # correct count is accumulated in n_hit\n", "    n_hit = 0\n", "    precision = 0\n", "    for i, p in enumerate(y_pred, 1):\n", "        if p in actual:\n", "            n_hit += 1\n", "            precision += n_hit / i\n", "\n", "    # divide by the recall denominator at the very end\n", "    avg_precision = precision / min(len(actual), k)\n", "    return avg_precision" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.25" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# example 1\n", "\n", "# y_true is the true interactions of a user\n", "# and y_pred is the recommendations we decided\n", "# to give to the user\n", "k = 2\n", "y_true = np.array([1, 2, 3, 4, 5])\n", "y_pred = np.array([6, 4, 7, 1, 2])\n", "compute_apk(y_true, y_pred[:k], k)  # 0.25" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.325" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# example 2\n", "\n", "k = 5\n", "y_true = np.array([1, 2])\n", "y_pred = np.array([6, 4, 7, 1, 2])\n", "compute_apk(y_true, y_pred[:k], k)  # 0.325" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After computing the average precision for one individual user, we compute it for every single user and take the mean of these values; that is essentially our MAP@k, mean average precision at k. For this metric, the bigger the better."
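, "\n", "As a quick sketch of this averaging step (reusing the `compute_apk` function and the toy predictions above, but with a single fixed k):\n", "\n", "```python\n", "import numpy as np\n", "\n", "# MAP@k is simply the mean of the per-user AP@k scores\n", "k = 2\n", "y_trues = [np.array([1, 2, 3, 4, 5]), np.array([1, 2])]\n", "y_preds = [np.array([6, 4, 7, 1, 2]), np.array([6, 4, 7, 1, 2])]\n", "mapk = np.mean([compute_apk(t, p[:k], k) for t, p in zip(y_trues, y_preds)])\n", "print(mapk)  # (0.25 + 0.0) / 2 = 0.125\n", "```"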
] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "def mapk_score(model, ratings, k):\n", " \"\"\"\n", " mean average precision at rank k for the ALS model\n", "\n", " Parameters\n", " ----------\n", " model : ALSWR instance\n", " fitted ALSWR model\n", "\n", " ratings : scipy sparse csr_matrix [n_users, n_items]\n", " sparse matrix of user-item interactions\n", "\n", " k : int\n", " mean average precision at k's k\n", " \n", " Returns\n", " -------\n", " mapk : float\n", " the mean average precision at k's score\n", " \"\"\"\n", " # compare the top k predictions' index to the actual index,\n", " # the model is assumed to have the _predict_user method\n", " mapk = 0\n", " n_users = ratings.shape[0]\n", " for u in range(n_users):\n", " y_true = ratings[u].indices\n", " u_pred = model._predict_user(u)\n", " y_pred = np.argsort(u_pred)[::-1][:k]\n", " mapk += compute_apk(y_true, y_pred, k)\n", "\n", " mapk /= n_users\n", " return mapk" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "mapk training: 0.20038882997525595\n", "mapk testing: 0.05916489925768833\n" ] } ], "source": [ "k = 5\n", "mapk_train = mapk_score(als, X_train, k)\n", "mapk_test = mapk_score(als, X_test, k)\n", "print('mapk training:', mapk_train)\n", "print('mapk testing:', mapk_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that it's normal for this metric to be low. We can compare this metric with a baseline to get a sense of how well the algorithm is performing. And a nice baseline for recommendation engine is to simply recommend every user the most popular items (items that has the most user interaction)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## NDCG" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose that on a four-point scale, we give a 0 score for an irrelevant result, 1 for a partially relevant, 2 for relevant, and 3 for perfect. Suppose also that a query is judged by one of our judges, and the first four results that the search engine returns are assessed as relevant (2), irrelevant (0), perfect (3), and relevant (2) by a judge. \n", "\n", "The idea behind **NDCG** is: A recommender returns some items and we'd like to compute how good the list is. Each item has a relevance score, usually a non-negative number. That's **gain (G)**. For items we don't have user feedback for we usually set the gain to zero. Now we add up those scores, that's **cumulative gain (CG)**. So, the cumulative gain for the four results is the sum of the scores for each result: 2 + 0 + 3 + 2 = 7. In mathematical notations, the CG at a particular rank ${\\displaystyle k}$ is defined as:\n", "\n", "\\begin{align}\n", "CG_k = \\sum_{i=1}^k rel_i\n", "\\end{align}\n", "\n", "Where $rel_i$ is the graded relevance of the result at position ${\\displaystyle i}$. As we can see from the formula, the value computed with this function is unaffected by changes in the ordering of search results, thus DCG is used in place of CG for a more accurate measure about the usefulness of results' ranking.\n", "\n", "When evaluating rankings, we’d prefer to see the most relevant items at the top of the list, i.e the first result in our search results is more important than the second, the second more important than the third, and so on. Therefore before summing the scores we divide each by a growing number, which is the **discount (D)**. 
One simple way to make position one more important than two (and so on) is to divide each score by the rank. So, for example, if the third result is \"great\", its contribution is $3 / 3 = 1$ (since the score for \"great\" is 3, and the rank of the result is 3). If \"great\" were the first result, its contribution would be $3 / 1 = 3$. In practice, though, it's more common to discount using a logarithm of the item position, giving us:\n", "\n", "\\begin{align}\n", "DCG_k = \\sum_{i=1}^k \\frac{rel_i}{\\log_2{\\left(i+1\\right)}}\n", "\\end{align}" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "4.361353116146786" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def dcg_at_k(score, k = None):\n", "    \"\"\"\n", "    discounted cumulative gain (dcg)\n", "\n", "    Parameters\n", "    ----------\n", "    score : 1d nd.array\n", "        ranking/relevance score\n", "\n", "    k : int, default None\n", "        evaluate the measure for the top-k ranking score,\n", "        default None evaluates all\n", "\n", "    Returns\n", "    -------\n", "    dcg: float\n", "    \"\"\"\n", "    if k is not None:\n", "        score = score[:k]\n", "\n", "    discounts = np.log2(np.arange(2, score.size + 2))\n", "    dcg = np.sum(score / discounts)\n", "    return dcg\n", "\n", "\n", "score = np.array([2, 0, 3, 2])\n", "dcg_at_k(score)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There's an alternative formulation of DCG that places stronger emphasis on retrieving relevant documents:\n", "\n", "\\begin{align}\n", "DCG_k = \\sum_{i=1}^k \\frac{2^{rel_i} - 1}{\\log_2{\\left(i+1\\right)}}\n", "\\end{align}\n", "\n", "This formula is commonly used in industry, including major web search companies and data science competition platforms such as Kaggle. The following cell redefines `dcg_at_k` using this formulation." ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "7.79202967422018" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def dcg_at_k(score, k = None):\n", "    \"\"\"\n", "    discounted cumulative gain (dcg)\n", "\n", "    Parameters\n", "    ----------\n", "    score : 1d nd.array\n", "        ranking/relevance score\n", "\n", "    k : int, default None\n", "        evaluate the measure for the top-k ranking score,\n", "        default None evaluates all\n", "\n", "    Returns\n", "    -------\n", "    dcg: float\n", "    \"\"\"\n", "    if k is not None:\n", "        score = score[:k]\n", "\n", "    gain = 2 ** score - 1\n", "    discounts = np.log2(np.arange(2, score.size + 2))\n", "    dcg = np.sum(gain / discounts)\n", "    return dcg\n", "\n", "\n", "score = np.array([2, 0, 3, 2])\n", "dcg_at_k(score)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The final touch to this metric is the **Normalized (N)** part. It's not fair to compare DCG values across queries, because some queries are easier than others and result lists vary in length depending on the query, so we normalize them: first, figure out the best possible ranking for these results and compute its DCG; then divide the raw DCG by this ideal DCG to get NDCG@k, a number between 0 and 1. In our previous example, we had 2, then 0, 3, and 2. The best arrangement of these same results would have been 3, 2, 2, 0, that is, if the \"great\" result had been ranked first, followed by the two \"relevant\" ones, and then the \"irrelevant\" one.
So we compute the DCG score for the ranking 3, 2, 2, 0 to obtain our ideal DCG (IDCG) and simply compute:\n", "\n", "\\begin{align}\n", "NDCG_k = \\frac{DCG_k}{IDCG_k}\n", "\\end{align}\n", "\n", "to obtain our final NDCG." ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.7497534568197889" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def ndcg_at_k(score, k = None):\n", "    \"\"\"\n", "    normalized discounted cumulative gain (ndcg)\n", "\n", "    Parameters\n", "    ----------\n", "    score : 1d nd.array\n", "        ranking/relevance score\n", "\n", "    k : int, default None\n", "        evaluate the measure for the top-k ranking score,\n", "        default None evaluates all\n", "\n", "    Returns\n", "    -------\n", "    ndcg: float, 0.0 ~ 1.0\n", "    \"\"\"\n", "    actual_dcg = dcg_at_k(score, k)\n", "    sorted_score = np.sort(score)[::-1]\n", "    best_dcg = dcg_at_k(sorted_score, k)\n", "    ndcg = actual_dcg / best_dcg\n", "    return ndcg\n", "\n", "\n", "ndcg_at_k(score)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next section modifies the function API a little bit so that it becomes more suitable for evaluating the recommendation engine." ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "def ndcg_score(model, ratings, k):\n", "    \"\"\"\n", "    Normalized discounted cumulative gain (NDCG) at rank k\n", "    for the ALS model; computes the ndcg score for\n", "    each user's recommendations and takes a simple average\n", "\n", "    Parameters\n", "    ----------\n", "    model : ALSWR instance\n", "        fitted ALSWR model\n", "\n", "    ratings : scipy sparse csr_matrix [n_users, n_items]\n", "        sparse matrix of user-item interactions\n", "\n", "    k : int\n", "        the k in NDCG@k\n", "\n", "    Returns\n", "    -------\n", "    avg_ndcg : float\n", "        NDCG@k score averaged across all users\n", "    \"\"\"\n", "    ndcg = 0.0\n", "    n_users, n_items = ratings.shape\n", "    for u in range(n_users):\n", "        y_true = np.zeros(n_items)\n", "        y_true[ratings[u].indices] = 1\n", "        u_pred = model._predict_user(u)\n", "        ndcg += ndcg_at_k(y_true, u_pred, k)\n", "\n", "    avg_ndcg = ndcg / n_users\n", "    return avg_ndcg\n", "\n", "\n", "def ndcg_at_k(y_true, y_score, k = 10):\n", "    \"\"\"\n", "    Normalized discounted cumulative gain (NDCG) at rank k\n", "\n", "    Parameters\n", "    ----------\n", "    y_true : array-like, shape = [n_samples]\n", "        Ground truth (true relevance labels)\n", "\n", "    y_score : array-like, shape = [n_samples]\n", "        Predicted scores\n", "\n", "    k : int\n", "        Rank\n", "\n", "    Returns\n", "    -------\n", "    ndcg : float, 0.0 ~ 1.0\n", "    \"\"\"\n", "    actual = dcg_at_k(y_true, y_score, k)\n", "    best = dcg_at_k(y_true, y_true, k)\n", "    ndcg = actual / best\n", "    return ndcg\n", "\n", "\n", "def dcg_at_k(y_true, y_score, k = 10):\n", "    \"\"\"\n", "    Discounted cumulative gain (DCG) at rank k\n", "\n", "    Parameters\n", "    ----------\n", "    y_true : array-like, shape = [n_samples]\n", "        Ground truth (true relevance labels)\n", "\n", "    y_score : array-like, shape = [n_samples]\n", "        Predicted scores\n", "\n", "    k : int\n", "        Rank\n", "\n", "    Returns\n", "    -------\n", "    dcg : float\n", "    \"\"\"\n", "    order = np.argsort(y_score)[::-1]\n", "    y_true = np.take(y_true, order[:k])\n", "    gains = 2 ** y_true - 1\n", "    discounts = np.log2(np.arange(2, gains.size + 2))\n", "    dcg = np.sum(gains / discounts)\n", "    return dcg" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout",
"output_type": "stream", "text": [ "ndcg training: 0.2959125755797325\n", "ndcg testing: 0.11226091289209723\n" ] } ], "source": [ "k = 5\n", "ndcg_train = ndcg_score(als, X_train, k)\n", "ndcg_test = ndcg_score(als, X_test, k)\n", "print('ndcg training:', ndcg_train)\n", "print('ndcg testing:', ndcg_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Reference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- [Blog: Don’t invert that matrix](https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/)\n", "- [Blog: Empty rows in sparse arrays](http://mike.place/2015/sparse/)\n", "- [Blog: Implicit Feedback and Collaborative Filtering](http://datamusing.info/blog/2015/01/07/implicit-feedback-and-collaborative-filtering/)\n", "- [Paper: Y. Hu, Y. Koren, C. Volinsky Collaborative Filtering for Implicit Feedback Datasets](http://yifanhu.net/PUB/cf.pdf)\n", "- [StackExchange: Analytic solution for matrix factorization using alternating least squares](http://math.stackexchange.com/questions/1072451/analytic-solution-for-matrix-factorization-using-alternating-least-squares)\n", "\n", "---\n", "\n", "\n", "- [Gist: Ranking Metrics](https://gist.github.com/bwhite/3726239)\n", "- [Gist: Learning to rank metrics](https://gist.github.com/mblondel/7337391)\n", "- [Github: Average Precision](https://github.com/benhamner/Metrics/blob/master/Python/ml_metrics/average_precision.py)\n", "- [Github: Average Precision Unit Test](https://github.com/benhamner/Metrics/blob/master/Python/ml_metrics/test/test_average_precision.py)\n", "- [Wiki: Discounted cumulative gain](https://en.wikipedia.org/wiki/Discounted_cumulative_gain)\n", "- [Blog: Measuring Search Relevance](http://www.ebaytechblog.com/2010/11/10/measuring-search-relevance/)\n", "- [Blog: Evaluating recommender systems](http://fastml.com/evaluating-recommender-systems/)\n", "- [Blog: What you wanted to know about Mean Average Precision](http://fastml.com/what-you-wanted-to-know-about-mean-average-precision/)" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" }, "toc": { "nav_menu": { "height": "107px", "width": "252px" }, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": {}, "toc_section_display": "block", "toc_window_display": true }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 1 }