{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Bias scan using Multi-Dimensional Subset Scan (MDSS)\n", "\n", "\"Identifying Significant Predictive Bias in Classifiers\" https://arxiv.org/abs/1611.08292\n", "\n", "The goal of bias scan is to identify a subgroup(s) that has significantly more predictive bias than would be expected from an unbiased classifier. There are $\\prod_{m=1}^{M}\\left(2^{|X_{m}|}-1\\right)$ unique subgroups from a dataset with $M$ features, with each feature having $|X_{m}|$ discretized values, where a subgroup is any $M$-dimension\n", "Cartesian set product, between subsets of feature-values from each feature --- excluding the empty set. Bias scan mitigates this computational hurdle by approximately identifing the most statistically biased subgroup in linear time (rather than exponential).\n", "\n", "\n", "We define the statistical measure of predictive bias function, $score_{bias}(S)$ as a likelihood ratio score and a function of a given subgroup $S$. The null hypothesis is that the given prediction's odds are correct for all subgroups in\n", "\n", "$\\mathcal{D}$: $H_{0}:odds(y_{i})=\\frac{\\hat{p}_{i}}{1-\\hat{p}_{i}}\\ \\forall i\\in\\mathcal{D}$.\n", "\n", "The alternative hypothesis assumes some constant multiplicative bias in the odds for some given subgroup $S$:\n", "\n", "\n", "$H_{1}:\\ odds(y_{i})=q\\frac{\\hat{p}_{i}}{1-\\hat{p}_{i}},\\ \\text{where}\\ q>1\\ \\forall i\\in S\\ \\mbox{and}\\ q=1\\ \\forall i\\notin S.$\n", "\n", "In the classification setting, each observation's likelihood is Bernoulli distributed and assumed independent. This results in the following scoring function for a subgroup $S$\n", "\n", "\\begin{align*}\n", "score_{bias}(S)= & \\max_{q}\\log\\prod_{i\\in S}\\frac{Bernoulli(\\frac{q\\hat{p}_{i}}{1-\\hat{p}_{i}+q\\hat{p}_{i}})}{Bernoulli(\\hat{p}_{i})}\\\\\n", "= & \\max_{q}\\log(q)\\sum_{i\\in S}y_{i}-\\sum_{i\\in S}\\log(1-\\hat{p}_{i}+q\\hat{p}_{i}).\n", "\\end{align*}\n", "Our bias scan is thus represented as: $S^{*}=FSS(\\mathcal{D},\\mathcal{E},F_{score})=MDSS(\\mathcal{D},\\hat{p},score_{bias})$.\n", "\n", "where $S^{*}$ is the detected most anomalous subgroup, $FSS$ is one of several subset scan algorithms for different problem settings, $\\mathcal{D}$ is a dataset with outcomes $Y$ and discretized features $\\mathcal{X}$, $\\mathcal{E}$ are a set of expectations or 'normal' values for $Y$, and $F_{score}$ is an expectation-based scoring statistic that measures the amount of anomalousness between subgroup observations and their expectations.\n", "\n", "Predictive bias emphasizes comparable predictions for a subgroup and its observations and Bias scan provides a more general method that can detect and characterize such bias, or poor classifier fit, in the larger space of all possible subgroups, without a priori specification." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import itertools\n", "\n", "from aif360.metrics import BinaryLabelDatasetMetric \n", "from aif360.metrics.mdss_classification_metric import MDSSClassificationMetric\n", "from aif360.algorithms.preprocessing.optim_preproc_helpers.data_preproc_functions import load_preproc_data_compas\n", "\n", "from IPython.display import Markdown, display\n", "import numpy as np\n", "import pandas as pd" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from aif360.metrics import BinaryLabelDatasetMetric " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll demonstrate scoring a subset and finding the most anomalous subset with bias scan using the compas dataset.\n", "\n", "We can specify subgroups to be scored or scan for the most anomalous subgroup. Bias scan allows us to decide if we aim to identify bias as `higher` than expected probabilities or `lower` than expected probabilities. Depending on the favourable label, the corresponding subgroup may be categorized as priviledged or unprivileged." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "np.random.seed(0)\n", "\n", "dataset_orig = load_preproc_data_compas()\n", "\n", "female_group = [{'sex': 1}]\n", "male_group = [{'sex': 0}]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The dataset has the categorical features one-hot encoded so we'll modify the dataset to convert them back \n", "to the categorical featues because scanning one-hot encoded features may find subgroups that are not meaningful eg. a subgroup with 2 race values. " ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "dataset_orig_df = pd.DataFrame(dataset_orig.features, columns=dataset_orig.feature_names)\n", "\n", "age_cat = np.argmax(dataset_orig_df[['age_cat=Less than 25', 'age_cat=25 to 45', \n", " 'age_cat=Greater than 45']].values, axis=1).reshape(-1, 1)\n", "priors_count = np.argmax(dataset_orig_df[['priors_count=0', 'priors_count=1 to 3', \n", " 'priors_count=More than 3']].values, axis=1).reshape(-1, 1)\n", "c_charge_degree = np.argmax(dataset_orig_df[['c_charge_degree=F', 'c_charge_degree=M']].values, axis=1).reshape(-1, 1)\n", "\n", "features = np.concatenate((dataset_orig_df[['sex', 'race']].values, age_cat, priors_count, \\\n", " c_charge_degree, dataset_orig.labels), axis=1)\n", "feature_names = ['sex', 'race', 'age_cat', 'priors_count', 'c_charge_degree']" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame(features, columns=feature_names + ['two_year_recid'])" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
sexraceage_catpriors_countc_charge_degreetwo_year_recid
00.00.01.00.00.01.0
10.00.00.02.00.01.0
20.01.01.02.00.01.0
31.01.01.00.01.00.0
40.01.01.00.00.00.0
\n", "
" ], "text/plain": [ " sex race age_cat priors_count c_charge_degree two_year_recid\n", "0 0.0 0.0 1.0 0.0 0.0 1.0\n", "1 0.0 0.0 0.0 2.0 0.0 1.0\n", "2 0.0 1.0 1.0 2.0 0.0 1.0\n", "3 1.0 1.0 1.0 0.0 1.0 0.0\n", "4 0.0 1.0 1.0 0.0 0.0 0.0" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### training\n", "We'll create a structured dataset and then train a simple classifier to predict the probability of the outcome" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "from aif360.datasets import StandardDataset\n", "dataset = StandardDataset(df, label_name='two_year_recid', favorable_classes=[0],\n", " protected_attribute_names=['sex', 'race'],\n", " privileged_classes=[[1], [1]],\n", " instance_weights_name=None)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "dataset_orig_train, dataset_orig_test = dataset.split([0.7], shuffle=True)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "#### Training Dataset shape" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "(3694, 5)\n" ] }, { "data": { "text/markdown": [ "#### Favorable and unfavorable labels" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "0.0 1.0\n" ] }, { "data": { "text/markdown": [ "#### Protected attribute names" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "['sex', 'race']\n" ] }, { "data": { "text/markdown": [ "#### Privileged and unprivileged protected attribute values" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "[array([1.]), array([1.])] [array([0.]), array([0.])]\n" ] }, { "data": { "text/markdown": [ "#### Dataset feature names" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "['sex', 'race', 'age_cat', 'priors_count', 'c_charge_degree']\n" ] } ], "source": [ "display(Markdown(\"#### Training Dataset shape\"))\n", "print(dataset_orig_train.features.shape)\n", "display(Markdown(\"#### Favorable and unfavorable labels\"))\n", "print(dataset_orig_train.favorable_label, dataset_orig_train.unfavorable_label)\n", "display(Markdown(\"#### Protected attribute names\"))\n", "print(dataset_orig_train.protected_attribute_names)\n", "display(Markdown(\"#### Privileged and unprivileged protected attribute values\"))\n", "print(dataset_orig_train.privileged_protected_attributes, \n", " dataset_orig_train.unprivileged_protected_attributes)\n", "display(Markdown(\"#### Dataset feature names\"))\n", "print(dataset_orig_train.feature_names)\n" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Train set: Difference in mean outcomes between unprivileged and privileged groups = -0.124496\n", "Test set: Difference in mean outcomes between unprivileged and privileged groups = -0.159410\n" ] } ], "source": [ "metric_train = BinaryLabelDatasetMetric(dataset_orig_train, \n", " unprivileged_groups=male_group,\n", " privileged_groups=female_group)\n", "\n", "print(\"Train set: Difference in 
{ "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Train set: Difference in mean outcomes between unprivileged and privileged groups = -0.124496\n", "Test set: Difference in mean outcomes between unprivileged and privileged groups = -0.159410\n" ] } ], "source": [ "metric_train = BinaryLabelDatasetMetric(dataset_orig_train, \n", "                                        unprivileged_groups=male_group,\n", "                                        privileged_groups=female_group)\n", "\n", "print(\"Train set: Difference in mean outcomes between unprivileged and privileged groups = %f\" % metric_train.mean_difference())\n", "metric_test = BinaryLabelDatasetMetric(dataset_orig_test, \n", "                                       unprivileged_groups=male_group,\n", "                                       privileged_groups=female_group)\n", "print(\"Test set: Difference in mean outcomes between unprivileged and privileged groups = %f\" % metric_test.mean_difference())\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "This shows that, overall, females in the dataset have a lower observed recidivism rate than males." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "If we train a classifier, the model is likely to pick up this bias in the dataset." ] },
{ "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "LogisticRegression()" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.linear_model import LogisticRegression\n", "clf = LogisticRegression(solver='lbfgs', C=1.0, penalty='l2')\n", "clf.fit(dataset_orig_train.features, dataset_orig_train.labels.flatten())" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Note that the probability scores we use are the probabilities of the favorable label, which is 0 in this case." ] },
{ "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "dataset_bias_test_prob = clf.predict_proba(dataset_orig_test.features)[:,0]" ] },
{ "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "dff = pd.DataFrame(dataset_orig_test.features, columns=dataset_orig_test.feature_names)\n", "dff['observed'] = pd.Series(dataset_orig_test.labels.flatten(), index=dff.index)\n", "dff['probabilities'] = pd.Series(dataset_bias_test_prob, index=dff.index)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We'll then create another structured dataset as the classified dataset by assigning the predicted probabilities to the scores attribute." ] },
{ "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "dataset_bias_test = dataset_orig_test.copy()\n", "dataset_bias_test.scores = dataset_bias_test_prob\n", "dataset_bias_test.labels = dataset_orig_test.labels" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### bias scoring" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "First, we observe the difference between the model's predictions and the actual observations of the favorable label, which in this case is 0. We create a new test_df for this computation. \n", "\n", "If the model's average prediction of the favorable label is higher than the average of the actual observations, then the group is said to be privileged. In the converse case, the group is said to be unprivileged.\n", "\n", "We will check whether the male and female groups are privileged or unprivileged using the MDSS bias score." ] },
{ "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "      sex  race  age_cat  priors_count  c_charge_degree  two_year_recid  \\\n", "2479  1.0   1.0      2.0           2.0              0.0             1.0   \n", "3574  1.0   0.0      1.0           0.0              0.0             0.0   \n", "513   0.0   1.0      0.0           1.0              0.0             0.0   \n", "1725  0.0   0.0      2.0           2.0              0.0             1.0   \n", "96    0.0   1.0      1.0           1.0              1.0             1.0   \n", "...   ...   ...      ...           ...              ...             ...   \n", "4931  0.0   1.0      0.0           1.0              0.0             0.0   \n", "3264  0.0   0.0      0.0           0.0              0.0             1.0   \n", "1653  0.0   0.0      1.0           1.0              0.0             0.0   \n", "2607  1.0   1.0      1.0           0.0              0.0             1.0   \n", "2732  0.0   1.0      0.0           2.0              1.0             1.0   \n", "\n", "      model_not_recid  observed_not_recid  \n", "2479         0.552945                 0.0  \n", "3574         0.740960                 1.0  \n", "513          0.374734                 1.0  \n", "1725         0.444486                 0.0  \n", "96           0.584904                 0.0  \n", "...               ...                 ...  \n", "4931         0.374734                 1.0  \n", "3264         0.535762                 0.0  \n", "1653         0.490041                 1.0  \n", "2607         0.769141                 0.0  \n", "2732         0.251724                 0.0  \n", "\n", "[1584 rows x 8 columns]" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test_df = dataset_bias_test.convert_to_dataframe()[0]\n", "test_df['model_not_recid'] = dataset_bias_test.scores.flatten()\n", "test_df['observed_not_recid'] = 1 - test_df['two_year_recid']\n", "test_df" ] },
{ "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "model_not_recid       0.617559\n", "observed_not_recid    0.657051\n", "dtype: float64" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Females actual vs predicted rates of positive label\n", "test_df[test_df.sex == 1][['model_not_recid','observed_not_recid']].mean()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Since the model's average prediction of the positive label is lower than the observed average by a substantial amount (about 4 percentage points), the female group is most likely unprivileged." ] },
{ "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "model_not_recid       0.512445\n", "observed_not_recid    0.497642\n", "dtype: float64" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Males actual vs predicted rates of positive label\n", "test_df[test_df.sex == 0][['model_not_recid','observed_not_recid']].mean()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Since the model's average prediction of the positive label is greater than the observed average by a small amount (about 1.5 percentage points), the male group could be privileged." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now, we'll create an instance of the MDSS Classification Metric and assess the a priori defined privileged and unprivileged groups: females and males, respectively. \n",
\n", "\n", "By apriori defining the male group as unprivileged, we are saying we expect that the model's predictions is systematically lower than the actual observation.\n", "\n", "By apriori defining the female group as privileged, we are saying we expect that the model's predictions is systematically higher than the actual observation.\n", "\n", "From our mini-analysis above, we know that these hypothesis are unlikely to be true " ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "mdss_classified = MDSSClassificationMetric(dataset_orig_test, dataset_bias_test,\n", " unprivileged_groups=male_group,\n", " privileged_groups=female_group)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "-0.0" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# We are asking the question:\n", "# Is there evidence that the hypothesized privileged group is actually privileged?\n", "\n", "female_privileged_score = mdss_classified.score_groups(privileged=True)\n", "female_privileged_score" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By having a score very close to zero, mdss bias score is informing us that there is no evidence from the data that our hypothesis of the female group being privileged is true." ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "-0.0" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# We are asking the question:\n", "# Is there evidence that the hypothesized unprivileged group is actually unprivileged?\n", "\n", "male_unprivileged_score = mdss_classified.score_groups(privileged=False)\n", "male_unprivileged_score" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By having a score very close zero, mdss bias score is informing us that there is no evidence from the data to support our hypothesis of the male group being unprivileged is true." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can flip our initial hypothesis and check if the male group is privileged or the female group is unprivileged." ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "mdss_classified = MDSSClassificationMetric(dataset_orig_test, dataset_bias_test,\n", " unprivileged_groups=female_group,\n", " privileged_groups=male_group)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.6301" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "male_privileged_score = mdss_classified.score_groups(privileged=True)\n", "male_privileged_score" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By having a positive score, mdss bias score is informing us that there is evidence from the data that our hypothesis of the male group being privileged is true." 
] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1.1771" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "female_unprivileged_score = mdss_classified.score_groups(privileged=False)\n", "female_unprivileged_score" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By having a positive score, mdss bias score is informing us that there is evidence from the data to support our hypothesis of the female group being unprivileged is true." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By taking into account the size of the group and the magnitude of the deviation, mdss bias core has been able to tell us the following about the male and female groups:\n", "- There is no evidence that the female group is privileged.\n", "- There is no evidence that the male group is unprivileged.\n", "- There is evidence that the male group is privileged.\n", "- There is evidence that the female is unprivileged." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### bias scan\n", "We get the bias score for the apriori defined subgroup but assuming we had no prior knowledge \n", "about the predictive bias and wanted to find the subgroups with the most bias, we can apply bias scan to identify the priviledged and unpriviledged groups. The privileged argument is not a reference to a group but the direction for which to scan for bias." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Function bias_scan is deprecated; Change to new interface - aif360.detectors.mdss_detector.bias_scan by version 0.5.0.\n", "Function bias_scan is deprecated; Change to new interface - aif360.detectors.mdss_detector.bias_scan by version 0.5.0.\n" ] } ], "source": [ "privileged_subset = mdss_classified.bias_scan(penalty=0.5, privileged=True)\n", "unprivileged_subset = mdss_classified.bias_scan(penalty=0.5, privileged=False)" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "({'race': [0.0], 'age_cat': [0.0], 'sex': [0.0]}, 3.1531)\n", "({'sex': [1.0], 'race': [0.0]}, 3.3037)\n" ] } ], "source": [ "print(privileged_subset)\n", "print(unprivileged_subset)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "assert privileged_subset[0]\n", "assert unprivileged_subset[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can observe that the bias score is higher than the score of the prior groups. These subgroups are guaranteed to be the highest scoring subgroup among the exponentially many subgroups.\n", "\n", "For the purposes of this example, the logistic regression model systematically under estimates the recidivism risk of individuals in the `Non-caucasian`, `less than 25`, `Male` subgroup whereas individuals belonging to the `Causasian`, `Female` are assigned a higher risk than is actually observed. We refer to these subgroups as the `detected privileged group` and `detected unprivileged group` respectively." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can create another srtuctured dataset using the new groups to compute other dataset metrics. 
" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "protected_attr_names = set(privileged_subset[0].keys()).union(set(unprivileged_subset[0].keys()))\n", "dataset_orig_test.protected_attribute_names = list(protected_attr_names)\n", "dataset_bias_test.protected_attribute_names = list(protected_attr_names)\n", "\n", "protected_attr = np.where(np.isin(dataset_orig_test.feature_names, list(protected_attr_names)))[0]\n", "\n", "dataset_orig_test.protected_attributes = dataset_orig_test.features[:, protected_attr]\n", "dataset_bias_test.protected_attributes = dataset_bias_test.features[:, protected_attr]" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "#### Training Dataset shape" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "(1584, 5)\n" ] }, { "data": { "text/markdown": [ "#### Favorable and unfavorable labels" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "0.0 1.0\n" ] }, { "data": { "text/markdown": [ "#### Protected attribute names" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "['sex', 'race', 'age_cat']\n" ] }, { "data": { "text/markdown": [ "#### Privileged and unprivileged protected attribute values" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "[array([1.]), array([1.])] [array([0.]), array([0.])]\n" ] }, { "data": { "text/markdown": [ "#### Dataset feature names" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "['sex', 'race', 'age_cat', 'priors_count', 'c_charge_degree']\n" ] } ], "source": [ "display(Markdown(\"#### Training Dataset shape\"))\n", "print(dataset_bias_test.features.shape)\n", "display(Markdown(\"#### Favorable and unfavorable labels\"))\n", "print(dataset_bias_test.favorable_label, dataset_orig_train.unfavorable_label)\n", "display(Markdown(\"#### Protected attribute names\"))\n", "print(dataset_bias_test.protected_attribute_names)\n", "display(Markdown(\"#### Privileged and unprivileged protected attribute values\"))\n", "print(dataset_bias_test.privileged_protected_attributes, \n", " dataset_bias_test.unprivileged_protected_attributes)\n", "display(Markdown(\"#### Dataset feature names\"))\n", "print(dataset_bias_test.feature_names)" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "# converts from dictionary of lists to list of dictionaries\n", "a = list(privileged_subset[0].values())\n", "subset_values = list(itertools.product(*a))\n", "\n", "detected_privileged_groups = []\n", "for vals in subset_values:\n", " detected_privileged_groups.append((dict(zip(privileged_subset[0].keys(), vals))))\n", " \n", "a = list(unprivileged_subset[0].values())\n", "subset_values = list(itertools.product(*a))\n", "\n", "detected_unprivileged_groups = []\n", "for vals in subset_values:\n", " detected_unprivileged_groups.append((dict(zip(unprivileged_subset[0].keys(), vals))))" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test set: Difference in mean outcomes between unprivileged and privileged groups = 0.345722\n" ] } ], 
"source": [ "metric_bias_test = BinaryLabelDatasetMetric(dataset_bias_test, \n", " unprivileged_groups=detected_unprivileged_groups,\n", " privileged_groups=detected_privileged_groups)\n", "\n", "print(\"Test set: Difference in mean outcomes between unprivileged and privileged groups = %f\" \n", " % metric_bias_test.mean_difference())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It appears the detected privileged group have a higher risk of recidivism than the unpriviledged group." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As noted in the paper, predictive bias is different from predictive fairness so there's no the emphasis in the subgroups having comparable predictions between them. \n", "We can investigate the difference in what the model predicts vs what we actually observed as well as the multiplicative difference in the odds of the subgroups." ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "to_choose = dff[privileged_subset[0].keys()].isin(privileged_subset[0]).all(axis=1)\n", "temp_df = dff.loc[to_choose]" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Our detected priviledged group has a size of 192, we observe 0.6770833333333334 as the average risk of recidivism, but our model predicts 0.5730004938240802'" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "\"Our detected priviledged group has a size of {}, we observe {} as the average risk of recidivism, but our model predicts {}\"\\\n", ".format(len(temp_df), temp_df['observed'].mean(), 1 - temp_df['probabilities'].mean())" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'This is a multiplicative increase in the odds by 2.81370969044125'" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "group_obs = temp_df['observed'].mean()\n", "group_prob = temp_df['probabilities'].mean()\n", "\n", "odds_mul = (group_obs / (1 - group_obs)) / (group_prob /(1 - group_prob))\n", "\"This is a multiplicative increase in the odds by {}\"\\\n", ".format(odds_mul)" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "assert odds_mul > 1" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [], "source": [ "to_choose = dff[unprivileged_subset[0].keys()].isin(unprivileged_subset[0]).all(axis=1)\n", "temp_df = dff.loc[to_choose]" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Our detected unpriviledged group has a size of 169, we observe 0.33136094674556216 as the average risk of recidivism, but our model predicts 0.43652313575727764'" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "\"Our detected unpriviledged group has a size of {}, we observe {} as the average risk of recidivism, but our model predicts {}\"\\\n", ".format(len(temp_df), temp_df['observed'].mean(), 1 - temp_df['probabilities'].mean())" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'This is a multiplicative decrease in the odds by 0.38392002104569445'" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "group_obs = temp_df['observed'].mean()\n", "group_prob = temp_df['probabilities'].mean()\n", "\n", "odds_mul = (group_obs 
, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "interpreter": { "hash": "a7b8e4082fc046e7b321ebd13577b0b02bbec122b09da65f91f262e840b142f2" }, "kernelspec": { "display_name": "aif360", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.12" } }, "nbformat": 4, "nbformat_minor": 4 }