{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Policy Evaluation in Contextual Bandits \n", "** *\n", "\n", "This IPython notebook illustrates the usage of the [contextualbandits](https://www.github.com/david-cortes/contextualbandits) package's `evaluation` module through a simulation with public datasets.\n", "\n", "**Small note: if the TOC here is not clickable or the math symbols don't show properly, try visualizing this same notebook from nbviewer following [this link](http://nbviewer.jupyter.org/github/david-cortes/contextualbandits/blob/master/example/policy_evaluation.ipynb).**\n", "\n", "** *\n", "### Sections\n", "\n", "[1. Problem description](#p1)\n", "\n", "[2. Methods](#p2)\n", "\n", "[3. Experiments](#p3)\n", "\n", "[4. References](#p4)\n", "\n", "** *\n", "\n", "## 1. Problem description\n", "\n", "For a general description of the contextual bandits problem, see the first part of the package's guide [Online Contextual Bandits](http://nbviewer.jupyter.org/github/david-cortes/contextualbandits/blob/master/example/online_contextual_bandits.ipynb).\n", "\n", "The previous two guides [Online Contextual Bandits](http://nbviewer.jupyter.org/github/david-cortes/contextualbandits/blob/master/example/online_contextual_bandits.ipynb) and [Off-policy Learning in Contextual Bandits](http://nbviewer.jupyter.org/github/david-cortes/contextualbandits/blob/master/example/offpolicy_learning.ipynb) evaluated the performance of different policies by looking at the actions they would have chosen in a fully-labeled dataset for multi-label classification.\n", "\n", "However, in contextual bandits settings one doesn't have access to fully-labeled data, and the data that one has is usually very biased, as it is collected through some policy that aims to maximize rewards. In this situation, it is a lot more difficult to evaluate the performance of a new policy. This module deals with such problem.\n", "\n", "** *\n", "\n", "## 2. Methods\n", " \n", "\n", "This module implements three policy evaluation methods:\n", "\n", "* `evaluateRejectionSampling` (see _\"A contextual-bandit approach to personalized news article recommendation\"_), for both online and offline policies.\n", "\n", "* `evaluateDoublyRobust` (see _\"Doubly Robust Policy Evaluation and Learning\"_).\n", "\n", "* `evaluateNCIS` (see _\"Offline a/b testing for recommender systems\"_)\n", "\n", "These should ideally be based on a train-test split - that is, the policy is trained with some data and evaluated on different data.\n", "\n", "The best way to obtain a good estimate of the performance of a policy is to collect some data on which actions are chosen at random. When such data is available, one can iterate through it, let the policy choose an action for each observation, and if it matches with what was chosen, take it along with its rewards for evaluation purposes, skip it if not. This simple rejection sampling method is unbiased and lets you evaluate both online and offline algorithms. **It must be stressed that evaluating data like this only works when the actions of this test sample are chosen at random, otherwise the estimates will be biased (and likely very wrong)**.\n", "\n", "When such data is not available and there is reasonable variety of actions chosen, other options are doubly-robust estimates and inverse-propensity-adjusted estimates. 
 "\n",
 "The estimates based on inverse-propensity corrections require probabilistic distributions over the chosen arms, which **this package does not produce** as it follows different paradigms. As such, they cannot really be used properly with the kind of data shown here, and their outputs will not have the same properties as described in the references, but they can still provide an improvement in the estimations.\n",
 "\n",
 "There are different ways of building a reward estimator for the doubly-robust method. One option is to fit a (non-online) model to both the train and test sets to make reward estimates on the test set, or to fit it only on the test set (while the policy to be evaluated is fitted to the training set); or perhaps even use the score estimates from the old policy (which chose the actions on the training and test data) or from the new policy. The function `evaluateDoublyRobust` provides an API that can accommodate all these methods.\n",
 "\n",
 "** *\n",
 "\n",
 "## 3. Experiments\n",
 "\n",
 "Just like in the previous guide [Off-policy Learning in Contextual Bandits](http://nbviewer.jupyter.org/github/david-cortes/contextualbandits/blob/master/example/offpolicy_learning.ipynb), I will simulate data generated from a policy by fitting a logistic regression model to a sample of the **fully-labeled** data, then let it choose actions for some more data, and take those actions and rewards as input for a new policy, along with the estimated reward probabilities for the actions that were chosen.\n",
 "\n",
 "The new policy will then be evaluated on a test sample with actions already pre-selected, and the estimates from the methods here will be compared with the real rewards, which we can know because the data is fully labeled.\n",
 "\n",
 "The data are again the Bibtex dataset, plus the larger Mediamill dataset.\n",
 "\n",
 "** *\n",
 "\n",
 "Loading the Bibtex dataset again:"
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
 "(7395, 1836)\n",
 "(7395, 159)\n"
] } ], "source": [
 "import pandas as pd, numpy as np, re\n",
 "from sklearn.preprocessing import MultiLabelBinarizer\n",
 "from sklearn.datasets import load_svmlight_file\n",
 "\n",
 "def parse_data(filename):\n",
 "    with open(filename, \"rb\") as f:\n",
 "        infoline = f.readline()\n",
 "        infoline = re.sub(r\"^b'\", \"\", str(infoline))\n",
 "        n_features = int(re.sub(r\"^\\d+\\s(\\d+)\\s\\d+.*$\", r\"\\1\", infoline))\n",
 "        features, labels = load_svmlight_file(f, n_features=n_features, multilabel=True)\n",
 "    mlb = MultiLabelBinarizer()\n",
 "    labels = mlb.fit_transform(labels)\n",
 "    features = np.array(features.todense())\n",
 "    features = np.ascontiguousarray(features)\n",
 "    return features, labels\n",
 "\n",
 "features, y = parse_data(\"Bibtex_data.txt\")\n",
 "print(features.shape)\n",
 "print(y.shape)"
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Simulating a stationary exploration policy and a test set:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [
 "from sklearn.linear_model import LogisticRegression\n",
 "\n",
 "# the 'explorer' policy will be fit to this small sample of the rows\n",
 "st_seed = 0\n",
 "end_seed = 2000\n",
 "\n",
 "# then it will choose actions for this larger sample, which will be the input for the new policy\n",
 "st_exploration = 0\n",
 "end_exploration = 3000\n",
 "\n",
 "# the new policy will be evaluated with a separate test set\n",
 "st_test = 3000\n",
 "end_test = 7395\n",
 "\n",
 "# separating the covariates data for each case\n",
 "Xseed = features[st_seed:end_seed, :]\n",
 "Xexplore_sample = features[st_exploration:end_exploration, :]\n",
 "Xtest = features[st_test:end_test, :]\n",
 "nchoices = y.shape[1]\n",
 "\n",
 "# now constructing an exploration policy as explained above, with fully-labeled data\n",
 "explorer = LogisticRegression(C=0.1, solver=\"lbfgs\", max_iter=15000)\n",
 "explorer.fit(Xseed, np.argmax(y[st_seed:end_seed], axis=1))\n",
 "\n",
 "# letting the exploration policy choose actions for the new policy input\n",
 "actions_explore_sample = explorer.predict(Xexplore_sample)\n",
 "rewards_explore_sample = y[st_exploration:end_exploration, :]\\\n",
 "    [np.arange(end_exploration - st_exploration), actions_explore_sample]\n",
 "\n",
 "# extracting the probabilities it estimated\n",
 "ix_internal_actions = {j:i for i,j in enumerate(explorer.classes_)}\n",
 "ix_internal_actions = [ix_internal_actions[i] for i in actions_explore_sample]\n",
 "ix_internal_actions = np.array(ix_internal_actions)\n",
 "prob_actions_explore = explorer.predict_proba(Xexplore_sample)[np.arange(Xexplore_sample.shape[0]),\n",
 "                                                               ix_internal_actions]\n",
 "\n",
 "# generating a test set with random actions\n",
 "actions_test = np.random.randint(nchoices, size=end_test - st_test)\n",
 "rewards_test = y[st_test:end_test, :][np.arange(end_test - st_test), actions_test]"
] },
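 { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check (this cell is an addition to the original walkthrough), we can look at how concentrated the logged data is: the exploration policy only picks arms it considers good, which is exactly the kind of bias that the evaluation methods below have to deal with." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
 "# illustrative check only, not part of the original example\n",
 "n_distinct = np.unique(actions_explore_sample).shape[0]\n",
 "print('Distinct arms chosen by the exploration policy: ', n_distinct, ' out of ', nchoices)\n",
 "print('Mean reward in the exploration sample: ', rewards_explore_sample.mean())\n",
 "print('Mean propensity of the chosen arms: ', prob_actions_explore.mean())\n",
 "print('Mean reward in the random test sample: ', rewards_test.mean())"
] },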
 { "cell_type": "markdown", "metadata": {}, "source": [ "Rejection sampling estimate:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
 "Test set Rejection Sampling mean reward estimate (new policy)\n",
 "Estimated mean reward: 0.13043478260869565\n",
 "Sample size: 23\n",
 "----------------\n",
 "Real mean reward: 0.1447098976109215\n"
] } ], "source": [
 "from contextualbandits.online import SeparateClassifiers\n",
 "from contextualbandits.evaluation import evaluateRejectionSampling\n",
 "\n",
 "new_policy = SeparateClassifiers(LogisticRegression(C=0.1, solver=\"lbfgs\", max_iter=15000),\n",
 "                                 y.shape[1], smoothing=(1,2), beta_prior=None, random_state=123)\n",
 "new_policy.fit(Xexplore_sample, actions_explore_sample, rewards_explore_sample)\n",
 "est_r, ncases = evaluateRejectionSampling(new_policy, X=Xtest, a=actions_test, r=rewards_test,\n",
 "                                          online=False)\n",
 "real_r = np.mean(y[st_test:end_test, :][np.arange(end_test - st_test), new_policy.predict(Xtest)])\n",
 "\n",
 "print('Test set Rejection Sampling mean reward estimate (new policy)')\n",
 "print('Estimated mean reward: ', est_r)\n",
 "print('Sample size: ', ncases)\n",
 "print('----------------')\n",
 "print('Real mean reward: ', real_r)"
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also evaluate the exploration policy with the same method:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
 "Test set Rejection Sampling mean reward estimate (old policy)\n",
 "Estimated mean reward: 0.5789473684210527\n",
 "Sample size: 19\n",
 "----------------\n",
 "Real mean reward: 0.4814562002275313\n"
] } ], "source": [
 "est_r, ncases = evaluateRejectionSampling(explorer, X=Xtest, a=actions_test, r=rewards_test, online=False)\n",
 "real_r = np.mean(y[st_test:end_test, :][np.arange(end_test - st_test), explorer.predict(Xtest)])\n",
 "\n",
 "print('Test set Rejection Sampling mean reward estimate (old policy)')\n",
 "print('Estimated mean reward: ', est_r)\n",
 "print('Sample size: ', ncases)\n",
 "print('----------------')\n",
 "print('Real mean reward: ', real_r)"
] }, { "cell_type": "markdown", "metadata": {}, "source": [
 "_(Remember that the exploration policy was fit with a smaller set of fully-labeled data, so it's no surprise that it performs a lot better.)_\n",
 "\n",
 "The estimates are not exact, but they are somewhat close to the real values, as expected. They get better as more cases are accepted into the sample, and the estimation error should shrink according to the central limit theorem.\n",
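 "\n",
 "As a rough back-of-the-envelope check (a binomial approximation added here for illustration, not part of the original text): with 23 accepted cases and an estimate of about 0.13, the standard error is roughly $\\sqrt{0.13 \\cdot 0.87 / 23} \\approx 0.07$, so the real value of about 0.145 falls well within one standard error; likewise, for the old policy, $\\sqrt{0.58 \\cdot 0.42 / 19} \\approx 0.11$ comfortably covers the gap between 0.579 and 0.481.\n",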
 "** *\n",
 "\n",
 "To be stressed again, such an evaluation method only works when the data was collected by choosing actions at random. **If we evaluate with the actions chosen by the exploration policy instead, the results will be totally biased, as demonstrated here:**"
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
 "Biased Test set Rejection Sampling mean reward estimate (new policy)\n",
 "Estimated mean reward: 1.0\n",
 "Sample size: 551\n",
 "----------------\n",
 "Real mean reward: 0.1447098976109215\n",
 "(Don't try rejection sampling on a biased test set)\n"
] } ], "source": [
 "actions_test_biased = explorer.predict(Xtest)\n",
 "rewards_test_biased = y[st_test:end_test, :][np.arange(end_test - st_test), actions_test_biased]\n",
 "est_r, ncases = evaluateRejectionSampling(new_policy, X=Xtest, a=actions_test_biased,\n",
 "                                          r=rewards_test_biased, online=False)\n",
 "real_r = np.mean(y[st_test:end_test, :][np.arange(end_test - st_test), new_policy.predict(Xtest)])\n",
 "\n",
 "print('Biased Test set Rejection Sampling mean reward estimate (new policy)')\n",
 "print('Estimated mean reward: ', est_r)\n",
 "print('Sample size: ', ncases)\n",
 "print('----------------')\n",
 "print('Real mean reward: ', real_r)\n",
 "print(\"(Don't try rejection sampling on a biased test set)\")"
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also try Doubly-Robust estimates, but these work poorly for a dataset like this:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
 "Biased Test set mean reward estimates (new policy)\n",
 "DR estimate (reward estimator fit on train+test): 0.8688975703150306\n",
 "DR estimate (reward estimator fit on test only): 0.8724866012102672\n",
 "----------------\n",
 "Real mean reward: 0.1447098976109215\n"
] } ], "source": [
 "from contextualbandits.evaluation import evaluateDoublyRobust\n",
 "\n",
 "# getting estimated probabilities for the biased test sample chosen by the old policy\n",
 "ix_internal_actions = {j:i for i,j in enumerate(explorer.classes_)}\n",
 "ix_internal_actions = [ix_internal_actions[i] for i in actions_test_biased]\n",
 "ix_internal_actions = np.array(ix_internal_actions)\n",
 "prob_actions_test_biased = explorer.predict_proba(Xtest)[np.arange(Xtest.shape[0]), ix_internal_actions]\n",
 "\n",
 "# actions that the new policy will choose\n",
 "pred = new_policy.predict(Xtest)\n",
 "\n",
 "# method 1: estimating rewards by fitting another model to the whole data (train + test)\n",
 "model_fit_on_all_data = SeparateClassifiers(LogisticRegression(C=0.1, solver=\"lbfgs\", max_iter=15000),\n",
 "                                            y.shape[1], random_state=123)\n",
 "model_fit_on_all_data.fit(np.r_[Xexplore_sample, Xtest],\n",
 "                          np.r_[actions_explore_sample, actions_test_biased],\n",
 "                          np.r_[rewards_explore_sample, rewards_test_biased])\n",
 "est_r_dr_whole = evaluateDoublyRobust(pred,\n",
 "                                      X=Xtest, a=actions_test_biased, r=rewards_test_biased,\n",
 "                                      p=prob_actions_test_biased, reward_estimator=model_fit_on_all_data,\n",
 "                                      random_state=123)\n",
 "\n",
 "# method 2: estimating rewards by fitting another model to the test data only\n",
 "est_r_dr_test_only = evaluateDoublyRobust(pred, X=Xtest, a=actions_test_biased,\n",
 "                                          r=rewards_test_biased, p=prob_actions_test_biased,\n",
 "                                          reward_estimator=LogisticRegression(C=0.1, solver=\"lbfgs\", max_iter=15000),\n",
 "                                          nchoices=y.shape[1], random_state=123)\n",
 "\n",
 "print('Biased Test set mean reward estimates (new policy)')\n",
 "print('DR estimate (reward estimator fit on train+test): ', est_r_dr_whole)\n",
 "print('DR estimate (reward estimator fit on test only): ', est_r_dr_test_only)\n",
 "print('----------------')\n",
 "print('Real mean reward: ', real_r)"
] },
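 { "cell_type": "markdown", "metadata": {}, "source": [ "To see where these numbers come from, the cell below recomputes the doubly-robust estimate by hand, following the generic formula sketched in the Methods section (this is an illustrative addition; `evaluateDoublyRobust` may process the inputs somewhat differently, e.g. in how it handles extreme propensities, so the number need not match its output exactly):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [
 "# illustrative, hand-rolled doubly-robust estimate (not part of the original example)\n",
 "rhat = model_fit_on_all_data.predict_proba_separate(Xtest)        # estimated reward of each arm\n",
 "rhat_new = rhat[np.arange(Xtest.shape[0]), pred]                  # reward estimate for the new policy's actions\n",
 "rhat_old = rhat[np.arange(Xtest.shape[0]), actions_test_biased]   # reward estimate for the logged actions\n",
 "match = (pred == actions_test_biased).astype('float64')           # indicator that both policies agree\n",
 "dr_manual = np.mean(rhat_new + match * (rewards_test_biased - rhat_old) / prob_actions_test_biased)\n",
 "print('Hand-computed DR estimate: ', dr_manual)"
] },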
 { "cell_type": "markdown", "metadata": {}, "source": [
 "Both estimates are very wrong, although less wrong than rejection sampling with a non-random test set. This is, however, a rather unlucky case: varying e.g. the random seed or the regularization of the original policy can lead to far better estimates with the doubly-robust method.\n",
 "** *\n",
 "\n",
 "In this situation, the NCIS method can provide a more realistic estimate - **it should be stressed again that this is not the same as the NCIS method described in the references, as here we don't use probabilistic distributions over arms, and thus it will not enjoy the same theoretical guarantees** (see the documentation for details)."
] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
 "Approximate NCIS mean reward estimates (new policy)\n",
 "Estimated mean reward (approx. NCIS): 0.3235873655002847\n",
 "----------------\n",
 "Real mean reward: 0.1447098976109215\n"
] } ], "source": [
 "from contextualbandits.evaluation import evaluateNCIS\n",
 "\n",
 "rew_pred = new_policy.predict_proba_separate(Xtest)[np.arange(Xtest.shape[0]), actions_test_biased]\n",
 "est_r_ncis = evaluateNCIS(rew_pred, rewards_test_biased, prob_actions_test_biased)\n",
 "print('Approximate NCIS mean reward estimates (new policy)')\n",
 "print('Estimated mean reward (approx. NCIS): ', est_r_ncis)\n",
 "print('----------------')\n",
 "print('Real mean reward: ', real_r)"
] },
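 { "cell_type": "markdown", "metadata": {}, "source": [
 "Roughly speaking (in generic notation added here for illustration, glossing over any capping or clipping the implementation may apply), this approximate NCIS estimate is a self-normalized importance-sampling average of the observed rewards,\n",
 "\n",
 "$$\n",
 "\\hat{V}_{NCIS} \\approx \\frac{\\sum_{i=1}^{n} w_i \\, r_i}{\\sum_{i=1}^{n} w_i}, \\qquad w_i = \\frac{\\hat{s}_{new}(x_i, a_i)}{p_{old}(x_i, a_i)}\n",
 "$$\n",
 "\n",
 "where $\\hat{s}_{new}$ is the new policy's score for the logged action and $p_{old}$ is the old policy's estimated probability of having chosen it. Normalizing by the sum of the weights is what keeps the estimate from exploding the way a plain inverse-propensity average would, which is why it lands closer to the truth than the estimates above, even without proper propensities."
] },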
 { "cell_type": "markdown", "metadata": {}, "source": [
 "** *\n",
 "Finally, rejection sampling can also be used to evaluate online policies - in this case, be aware that the estimate only covers a limited number of rounds (as many observations as it accepts, which will be a small minority of the data), whereas online policies keep improving with time.\n",
 "\n",
 "Here I will use the Mediamill dataset instead, as it has a lot more data - be aware that it takes a long time to evaluate:"
] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [
 "Batches: 100%|██████████| 4391/4391 [08:48<00:00, 8.30it/s]\n"
] }, { "data": { "text/plain": [ "(0.45393258426966293, 445)" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [
 "from contextualbandits.online import BootstrappedUCB\n",
 "\n",
 "features, y = parse_data(\"Mediamill_data.txt\")\n",
 "nchoices = y.shape[1]\n",
 "\n",
 "np.random.seed(456)\n",
 "actions_random = np.random.randint(nchoices, size=features.shape[0])\n",
 "rewards_actions = y[np.arange(y.shape[0]), actions_random]\n",
 "\n",
 "online_policy = BootstrappedUCB(LogisticRegression(C=0.1, solver=\"lbfgs\", max_iter=15000),\n",
 "                                y.shape[1], random_state=1234)\n",
 "evaluateRejectionSampling(online_policy,\n",
 "                          X=features,\n",
 "                          a=actions_random,\n",
 "                          r=rewards_actions,\n",
 "                          online=True,\n",
 "                          start_point_online='random',\n",
 "                          batch_size=10,\n",
 "                          random_state=5678)"
] }, { "cell_type": "markdown", "metadata": {}, "source": [
 "** *\n",
 "\n",
 "## 4. References\n",
 "\n",
 "* Li, L., Chu, W., Langford, J., & Schapire, R. E. (2010). A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web (pp. 661-670). ACM.\n",
 "\n",
 "* Dudík, M., Langford, J., & Li, L. (2011). Doubly robust policy evaluation and learning. arXiv preprint arXiv:1103.4601.\n",
 "\n",
 "* Gilotte, A., Calauzènes, C., Nedelec, T., Abraham, A., & Dollé, S. (2018). Offline A/B testing for recommender systems. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. ACM."
] } ], "metadata": { "kernelspec": { "display_name": "Python (OpenBLAS)", "language": "python", "name": "py3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" } }, "nbformat": 4, "nbformat_minor": 2 }