{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# COMPAS Analysis using Aequitas\n", "-----" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recent work in the Machine Learning community has raised concerns about the risk of unintended bias in Algorithmic Decision-Making systems, affecting individuals unfairly. While many bias metrics and fairness definitions have been proposed in recent years, the community has not reached a consensus on which definitions and metrics should be used, and there has been very little empirical analyses of real-world problems using the proposed metrics. \n", "\n", "We present the Aequitas toolkit as an intuitive addition to the machine learning workflow, enabling users to to seamlessly test models for several bias and fairness metrics in relation to multiple population groups. We believe the tool will faciliate informed and equitable decision-making around developing and deploying predictive risk-assessment tools for both machine learnining practitioners and policymakers, allowing researchers and program managers to answer a host of questions related to machine learning models, including:\n", "\n", "- [What biases exist in my model?](#existing_biases)\n", " - [What is the distribution of groups, predicted scores, and labels across my dataset?](#xtab)\n", " - [What are bias metrics across groups?](#xtab_metrics)\n", " - [How do I interpret biases in my model?](#interpret_bias)\n", " - [How do I visualize biases in my model?](#bias_viz)\n", "\n", "- [What levels of disparity exist between population groups?](#disparities)\n", " - [How does the selected reference group affect disparity calculations?](#disparity_calc)\n", " - [How do I interpret calculated disparity ratios?](#interpret_disp)\n", " - [How do I visualize disparities in my model?](#disparity_viz) \n", "\n", "- [How do I assess model fairness??](#fairness)\n", " - [How do I interpret parities?](#interpret_fairness)\n", " - [How do I visualize bias metric parity?](#fairness_group_viz)\n", " - [How do I visualize parity between groups in my model?](#fairness_disp_viz) \n", "\n", "\n", "We apply the toolkit to the COMPAS dataset reported on by ProPublica below.\n", "\n", "### Background\n", "\n", "In 2016, ProPublica reported on racial inequality in automated criminal risk assessment algorithms. The [report](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing) is based on [this analysis](https://github.com/propublica/compas-analysis). Using a clean version of the COMPAS dataset from the ProPublica GitHub repo, we demostrate the use of the Aequitas bias reporting tool.\n", "\n", "Northpointe's COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is one of the most widesly utilized risk assessment tools/ algorithms within the criminal justice system for guiding decisions such as how to set bail. The ProPublica dataset represents two years of COMPAS predicitons from Broward County, FL." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import seaborn as sns\n", "from aequitas.group import Group\n", "from aequitas.bias import Bias\n", "from aequitas.fairness import Fairness\n", "from aequitas.plotting import Plot\n", "\n", "# import warnings; warnings.simplefilter('ignore')\n", "\n", "%matplotlib inline" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.read_csv(\"data/compas_for_aequitas.csv\")\n", "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pre-Aequitas: Exploring the COMPAS Dataset\n", "\n", "__Risk assessment by race__\n", "\n", "COMPAS produces a risk score that predicts a person's likelihood of commiting a crime in the next two years. The output is a score between 1 and 10 that maps to low, medium or high. For Aequitas, we collapse this to a binary prediction. A score of 0 indicates a prediction of \"low\" risk according to COMPAS, while a 1 indicates \"high\" or \"medium\" risk.\n", "\n", "This categorization is based on ProPublica's interpretation of Northpointe's practioner guide:\n", "\n", " \"According to Northpointe’s practitioners guide, COMPAS “scores in the medium and high range \n", " garner more interest from supervision agencies than low scores, as a low score would suggest \n", " there is little risk of general recidivism,” so we considered scores any higher than “low” to \n", " indicate a risk of recidivism.\"\n", "\n", "In the bar charts below, we see a large difference in how these scores are distributed by race, with a majority of white and Hispanic people predicted as low risk (score = 0) and a majority of black people predicted high and medium risk (score = 1). We also see that while the majority of people in age categories over 25 are predicted as low risk (score = 0), the majority of people below 25 are predicted as high and medium risk (score = 1)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "aq_palette = sns.diverging_palette(225, 35, n=2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "by_race = sns.countplot(x=\"race\", hue=\"score\", data=df[df.race.isin(['African-American', 'Caucasian', 'Hispanic'])], palette=aq_palette)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "by_sex = sns.countplot(x=\"sex\", hue=\"score\", data=df, palette=aq_palette)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "by_age = sns.countplot(x=\"age_cat\", hue=\"score\", data=df, palette=aq_palette)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Levels of recidivism__\n", "\n", "This dataset includes information about whether or not the subject recidivated, and so we can directly test the accuracy of the predictions. First, we visualize the recidivsm rates across race. \n", "\n", "Following ProPublica, we defined recidivism as a new arrest within two years. (If a person recidivates, `label_value` = 1). 
They \"based this decision on Northpointe’s practitioners guide, which says that its recidivism score is meant to predict 'a new misdemeanor or felony offense within two years of the COMPAS administration date.'\"\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "label_by_race = sns.countplot(x=\"race\", hue=\"label_value\", data=df[df.race.isin(['African-American', 'Caucasian', 'Hispanic'])], palette=aq_palette)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "label_by_age = sns.countplot(x=\"sex\", hue=\"label_value\", data=df, palette=aq_palette)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "label_by_sex = sns.countplot(x=\"age_cat\", hue=\"label_value\", data=df, palette=aq_palette)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Putting Aequitas to the task\n", "\n", "The graphs above show the base rates for recidivism are higher for black defendants compared to white defendants (.51 vs .39), though the predictions do not match the base rates. \n", "\n", "Practitioners face the challenge of determining whether or not such patterns reflect bias or not. The fact that there are multiple ways to measure bias adds complexity to the decision-making process. With Aequitas, we provide a tool that automates the reporting of various fairness metrics to aid in this process.\n", "\n", "Applying Aequitas progammatically is a three step process represented by three python classes: \n", "\n", "`Group()`: Define groups \n", "\n", "`Bias()`: Calculate disparities\n", "\n", "`Fairness()`: Assert fairness\n", "\n", "Each class builds on the previous one expanding the output DataFrame.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data Formatting\n", "\n", "Data for this example was preprocessed for compatibility with Aequitas. **The Aequitas tool always requires a `score` column and requires a binary `label_value` column for supervised metrics**, (i.e., False Discovery Rate, False Positive Rate, False Omission Rate, and False Negative Rate).\n", "\n", "Preprocessing includes but is not limited to checking for mandatory `score` and `label_value` columns as well as at least one column representing attributes specific to the data set. See [documentation](../input_data.html) for more information about input data.\n", "\n", "Note that while `entity_id` is not necessary for this example, Aequitas recognizes `entity_id` as a reserve column name and will not recognize it as an attribute column." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)\n", "\n", "\n", "## What biases exist in my model?\n", "\n", "### _Aequitas Group() Class_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "### What is the distribution of groups, predicted scores, and labels across my dataset?\n", "\n", "Aequitas's `Group()` class enables researchers to evaluate biases across all subgroups in their dataset by assembling a confusion matrix of each subgroup, calculating commonly used metrics such as false positive rate and false omission rate, as well as counts by group and group prevelance among the sample population. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "The **`get_crosstabs()`** method tabulates a confusion matrix for each subgroup and calculates commonly used metrics such as false positive rate and false omission rate. 
It also provides counts by group and group prevalences.\n", "\n", "#### Group Counts Calculated:\n", "\n", "| Count Type | Column Name |\n", "| --- | --- |\n", "| False Positive Count | 'fp' |\n", "| False Negative Count | 'fn' |\n", "| True Negative Count | 'tn' |\n", "| True Positive Count | 'tp' |\n", "| Predicted Positive Count | 'pp' |\n", "| Predicted Negative Count | 'pn' |\n", "| Count of Negative Labels in Group | 'group_label_neg' |\n", "| Count of Positive Labels in Group | 'group_label_pos' |\n", "| Group Size | 'group_size' |\n", "| Total Entities | 'total_entities' |\n", "\n", "#### Absolute Metrics Calculated:\n", "\n", "| Metric | Column Name |\n", "| --- | --- |\n", "| True Positive Rate | 'tpr' |\n", "| True Negative Rate | 'tnr' |\n", "| False Omission Rate | 'for' |\n", "| False Discovery Rate | 'fdr' |\n", "| False Positive Rate | 'fpr' |\n", "| False Negative Rate | 'fnr' |\n", "| Negative Predictive Value | 'npv' |\n", "| Precision | 'precision' |\n", "| Predicted Positive Ratio$_k$ | 'ppr' |\n", "| Predicted Positive Ratio$_g$ | 'pprev' |\n", "| Group Prevalence | 'prev' |\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note**: The **`get_crosstabs()`** method expects a dataframe with predefined columns `score` and `label_value`, and treats other columns (with a few exceptions) as attributes against which to test for disparities. In this case, we include `race`, `sex` and `age_cat`. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "g = Group()\n", "xtab, _ = g.get_crosstabs(df)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)\n", "\n", "\n", "### What are bias metrics across groups?\n", "\n", "Once you have run the `Group()` class **`get_crosstabs()`** method, you'll have a dataframe of the [group counts](#counts_description) and [group value bias metrics](#counts_description).\n", "\n", "The `Group()` class has a **`list_absolute_metrics()`** method, which you can use for faster slicing to view just counts or bias metrics." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "absolute_metrics = g.list_absolute_metrics(xtab)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### View calculated counts across sample population groups" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xtab[[col for col in xtab.columns if col not in absolute_metrics]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### View calculated absolute metrics for each sample population group" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xtab[['attribute_name', 'attribute_value'] + absolute_metrics].round(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)\n", "\n", "\n", "### How do I interpret biases in my model? \n", "In the slice of the crosstab dataframe created by the `Group()` class **`get_crosstabs()`** method directly above, we see that African-Americans have a false positive rate (`fpr`) of 45%, while Caucasians have a false positive rate of only 23%. This means that African-American people are far more likely to be falsely labeled as high-risk than white people. On the other hand, false omission rates (`for`) and false discovery rates (`fdr`) are much closer for those two groups."
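, "\n", "\n", "As a quick check, these rates can be read directly off the crosstab; the following minimal slice (using the count and metric column names listed earlier) recovers the false positive rates by race:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Read the interpretation above straight off the crosstab: fpr = fp / (fp + tn) per group.\n", "race_xtab = xtab[xtab['attribute_name'] == 'race']\n", "race_xtab[['attribute_value', 'fp', 'tn', 'fpr']].round(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The same slice works for any attribute (e.g., `'sex'` or `'age_cat'`) or metric column."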
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)\n", "\n", "\n", "## How do I visualize bias in my model?\n", "\n", "Absolute group bias metrics metrics from the crosstab dataframe created by the `Group()` class **`get_crosstabs()`** can be visualized with two methods in the Aequitas `Plot()` class. \n", "\n", "One metric can be specified with **`plot_group_metric()`**, or a list of particular metrics of interest (or `'all'` metrics) can be plotted with **`plot_group_metric_all()`**." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "aqp = Plot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing a single absolute group metric across all population groups\n", "The chart below displays group metric False Negative Rate (fnr) calculated across each attribute, colored based on number of samples in the attribute group. \n", "\n", "We can see from the longer bars that across 'age_cat', 'sex', and 'race' attributes, the groups COMPAS incorrectly predicts as 'low' or 'medium' risk most often are 25-45, Male, and African American. From the darker coloring, we can also tell that these are the three largest populations in the data set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fnr = aqp.plot_group_metric(xtab, 'fnr')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### View group metrics for only groups over a certain size threshold\n", "Extremely small group sizes increase standard error of estimates, and could be factors in prediction error such as false negatives. Use the `min_group` parameter to vizualize only those sample population groups above a user-specified percentage of the total sample size. When we remove groups below 5% of the sample size, we are left with only two of the six 'race' groups, as there are much smaller groups in that attribute category than in 'sex' or 'age_cat' (age cateogry). " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fnr = aqp.plot_group_metric(xtab, 'fnr', min_group_size=0.05)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing multiple user-specified absolute group metrics across all population groups\n", "\n", "The charts below display the all calculated group metrics across each attribute, colored based on absolute metric magnitude. The group size is included in parentheses for context.\n", "\n", "We can see that the largest 'race' group, African Americans, are predicted positive more often than any other race group (predicted positive rate `PPR` of 0.66), and are more likely to be incorrectly classified as 'high' risk (false positive rate `FPR` of 0.45) than incorrectly classified as 'low' or 'medium' risk (false negative rate `FNR` of 0.28). Note that Native Americans are predicted positive at a higher _prevalence_ `PPREV`in relation to their group size than all other 'race groups' (predicted prevalence of 0.67). 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p = aqp.plot_group_metric_all(xtab, metrics=['ppr','pprev','fnr','fpr'], ncols=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing default absolute group metrics across all population groups\n", "#### Default absolute group metrics\n", "When visualizing more than one absolute group metric, you can specify a list of metrics, specify `'all'` metrics, or use the Aequitas default metrics by not supplying an argument:\n", "- Predicted Positive Group Rate Disparity (pprev), \n", "- Predicted Positive Rate Disparity (ppr), \n", "- False Discovery Rate (fdr), \n", "- False Omission Rate (for)\n", "- False Positive Rate (fpr)\n", "- False Negative Rate (fnr)\n", "\n", "The charts below display the default group metrics calculated across each attribute, colored based on number of samples in the attribute group. \n", "\n", "Note that the 45+ age category is almost twice as likely to be incorrectly included in an intervention group (false discovery rate `FDR` of 0.46) than incorrectly excluded from intervention (false omission rate `FOR` 0.24). We can also see that the model is equally likely to predict a woman as 'high' risk as it is for a man (false positive rate `FPR` of 0.32 for both Male and Female)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a = aqp.plot_group_metric_all(xtab, ncols=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)\n", "\n", "\n", "## What levels of disparity exist between population groups?\n", "\n", "### _Aequitas Bias() Class_\n", "We use the Aequitas `Bias()` class to calculate disparities between groups based on the crosstab returned by the `Group()` class **`get_crosstabs()`** method described above. Disparities are calculated as a ratio of a metric for a group of interest compared to a base group. For example, the False Negative Rate Disparity for black defendants vis-a-vis whites is:\n", "$$Disparity_{FNR} = \\frac{FNR_{black}}{FNR_{white}}$$ \n", "\n", "Below, we use **`get_disparity_predefined_groups()`** which allows us to choose reference groups that clarify the output for the practitioner. \n", "\n", "The Aequitas `Bias()` class includes two additional get disparity functions: **`get_disparity_major_group()`** and **`get_disparity_min_metric()`**, which automate base group selection based on sample majority (across each attribute) and minimum value for each calculated bias metric, respectively. \n", "\n", "The **`get_disparity_predefined_groups()`** allows user to define a base group for each attribute, as illustrated below. 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Disparities Calculated Calcuated:\n", "\n", "| Metric | Column Name |\n", "| --- | --- |\n", "| True Positive Rate Disparity | 'tpr_disprity' |\n", "| True Negative Rate | 'tnr_disparity' |\n", "| False Omission Rate | 'for_disparity' |\n", "| False Discovery Rate | 'fdr_disparity' |\n", "| False Positive Rate | 'fpr_disparity' |\n", "| False NegativeRate | 'fnr_disparity' |\n", "| Negative Predictive Value | 'npv_disparity' |\n", "| Precision Disparity | 'precision_disparity' |\n", "| Predicted Positive Ratio$_k$ Disparity | 'ppr_disparity' |\n", "| Predicted Positive Ratio$_g$ Disparity | 'pprev_disparity' |\n", "\n", "\n", "Columns for each disparity are appended to the crosstab dataframe, along with a column indicating the reference group for each calculated metric (denoted by `[METRIC NAME]_ref_group_value`). We see a slice of the dataframe with calculated metrics in the next section." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "b = Bias()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Disparities calculated in relation to a user-specified group for each attribute" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bdf = b.get_disparity_predefined_groups(xtab, original_df=df, \n", " ref_groups_dict={'race':'Caucasian', 'sex':'Male', 'age_cat':'25 - 45'})\n", "bdf.style" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `Bias()` class includes a method to quickly return a list of calculated disparities from the dataframe returned by the **`get_disparity_`** methods." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# View disparity metrics added to dataframe\n", "bdf[['attribute_name', 'attribute_value'] +\n", " b.list_disparities(bdf)].style" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)\n", "\n", "\n", "### How do I interpret calculated disparity ratios?\n", "The calculated disparities from the dataframe returned by the `Bias()` class **`get_disparity_`** methods are in relation to a reference group, which will always have a disparity of 1.0.\n", "\n", "The differences in False Positive Rates, noted in the discussion of the `Group()` class above, are clarified using the disparity ratio (`fpr_disparity`). Black people are falsely identified as being high or medium risks 1.9 times the rate for white people. \n", "\n", "As seen above, False Discovery Rates have much less disparity (`fdr_disparity`), or fraction of false postives over predicted positive in a group. As reference groups have disparity = 1 by design in Aequitas, the lower disparity is highlighted by the `fdr_disparity` value close to 1.0 (0.906) for the race attribute group 'African-American' when disparities are calculated using predefined base group 'Caucasian'. Note that COMPAS is calibrated to balance False Positive Rate and False Discovery Rates across groups." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)\n", "\n", "\n", "### How does the selected reference group affect disparity calculations?\n", "\n", "Disparities calculated in the the Aequitas `Bias()` class based on the crosstab returned by the `Group()` class **`get_crosstabs()`** method can be derived using several different base gorups. 
In addition to using the user-specified groups illustrated above, Aequitas can automate base group selection based on dataset characteristics:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Evaluating disparities calculated in relation to a different 'race' reference group\n", "Changing even one attribute in the predefined groups will alter calculated disparities. When a different predefined group, 'Hispanic', is used, we can see that Black people are 2.1 times as likely to be falsely identified as being high or medium risk as Hispanic people (compared with 1.9 times the rate of white people), and even less likely to be falsely identified as low risk when compared to Hispanic people rather than white people." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "hbdf = b.get_disparity_predefined_groups(xtab, original_df=df, \n", " ref_groups_dict={'race':'Hispanic', 'sex':'Male', 'age_cat':'25 - 45'})" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# View disparity metrics added to dataframe\n", "hbdf[['attribute_name', 'attribute_value'] + \n", " b.list_disparities(hbdf)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Disparities calculated in relation to sample population majority group (in terms of group prevalence) for each attribute\n", "The majority population groups for each attribute ('race', 'sex', 'age_cat') in the COMPAS dataset are 'African-American', 'Male', and '25 - 45'. Using the **`get_disparity_major_group()`** method of calculation allows researchers to quickly evaluate how much more (or less) often other groups are falsely or correctly identified as high- or medium-risk in relation to the group they have the most data on." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "majority_bdf = b.get_disparity_major_group(xtab, original_df=df)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "majority_bdf[['attribute_name', 'attribute_value'] + b.list_disparities(majority_bdf)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Disparities calculated in relation to the minimum value for each metric\n", "\n", "When you do not have a pre-existing frame of reference or policy context for the dataset (e.g., Caucasians or males historically favored), you may choose to view disparities in relation to the group with the lowest value for every disparity metric; then every group's value will be at least 1.0, and relationships can be evaluated more linearly.\n", "\n", "\n", "Note that disparities are much more varied, and may have larger magnitude, when the minimum value per metric is used as a reference group versus one of the other two methods." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "min_metric_bdf = b.get_disparity_min_metric(df=xtab, original_df=df)\n", "min_metric_bdf.style" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)\n", "\n", "\n", "## How do I visualize disparities in my model?\n", "To visualize disparities in the dataframe returned by one of the `Bias()` class **`get_disparity_`** methods, use one of two methods in the Aequitas `Plot()` class:\n", "\n", "A particular disparity metric can be specified with **`plot_disparity()`**. 
To plot a single disparity, a metric and an attribute must be specified.\n", "\n", "Disparities related to a list of particular metrics of interest or `'all'` metrics can be plotted with **`plot_disparity_all()`**. At least one metric or at least one attribute must be specified when plotting multiple disparities (or the same disparity across multiple attributes). For example, to plot PPR and Precision disparity for all attributes, specify `metrics=['ppr', 'precision']` with no attribute specified, and to plot default metrics by race, specify `attributes=['race']` with no metrics specified.\n", "\n", "**Reference groups are displayed in grey, and always have a disparity = 1.** Note that disparities greater than 10x the reference group are visualized as 10x, and disparities less than 0.1x the reference group are visualized as 0.1x." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing disparities between groups in a single user-specified attribute for a single user-specified disparity metric\n", "\n", "The treemap below displays False Positive Rate disparity values calculated using a predefined group, in this case the 'Caucasian' group within the race attribute, sized based on the group size and colored based on disparity magnitude.\n", "\n", "**Note**: Groups are visualized at no less than 0.1 times the size of the reference group, and no more than 10 times the size of the reference group." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "aqp.plot_disparity(bdf, group_metric='fpr_disparity', attribute_name='race')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When another group, 'Hispanic', is the reference group, the colors change to indicate higher or lower disparity in relation to that group. Treemap square sizes may also be adjusted, as group size limits for visualization are in relation to the reference group (minimum 0.1 times reference group size and maximum 10 times the reference group size)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "aqp.plot_disparity(hbdf, group_metric='fpr_disparity', attribute_name='race')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing disparities between all groups for a single user-specified disparity metric\n", "\n", "The treemaps below display precision disparities calculated in relation to the sample majority group for each attribute ('race' attribute: African-American, 'sex' attribute: Male, 'age_cat' attribute: 25 - 45), sized based on group size, and colored based on disparity magnitude. \n", "\n", "It is clear that the majority of samples in the data are African-American, male, and 25-45 for the 'race', 'sex', and age category attributes, respectively. Based on the lighter colors in the treemaps, we see that there is precision disparity relatively close to 1 (a disparity of 1 indicates no disparity) across all attributes."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "j = aqp.plot_disparity_all(majority_bdf, metrics=['precision_disparity'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing disparities between groups in a single user-specified attribute for default metrics\n", "##### Default Metrics\n", "When visualizing more than one disparity, you can specify a list of disparity metrics, `'all'` disaprity metrics, or use the Aequitas default disparity metrics by not supplying an argument:\n", "- Predicted Positive Group Rate Disparity (pprev_disparity),\n", "- Predicted Positive Rate Disparity (ppr_disparity),\n", "- False Discovery Rate Disparity (fdr_disparity),\n", "- False Omission Rate Disparity (for_disparity)\n", "- False Positive Rate Disparity (fpr_disparity)\n", "- False Negative Rate Disparity (fnr_disparity)\n", "\n", "The treemaps below display the default disparities between 'age_cat' groups calculated based on the minimum value of each metric, colored based on disparity magnitude. We can see based on coloring that there is a lower level of false discovery rate disparity ('fdr_disparity') between age categories than predicted positive group rate disparity or ('pprev_disparity') predicted positive rate disparity ('ppr_disparity')." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "min_met = aqp.plot_disparity_all(min_metric_bdf, attributes=['age_cat'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing disparities between groups in a single user-specified attribute for all calculated disparity metrics\n", "\n", "The treemaps below display disparities between 'race' attribute groups calculated based on predefined reference groups ('race' attribute: Hispanic, 'sex' attribute: Male, 'age_cat' attribute: 25-45) for all 10 disparity metrics, colored based on disparity magnitude." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tm_capped = aqp.plot_disparity_all(hbdf, attributes=['race'], metrics = 'all')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing disparity between all groups for multiple user-specified disparity metrics\n", "\n", "The treemaps below display False Omission Rate and False Positive Rate disparities (calculated in relation to the sample majority group for each attribute) between groups acorss all three attributes, colored based on disparity magnitude.\n", "\n", "We see that several groups (Asian, Native American) have a much lower false omission rate than African Americans, with fairly close false omission rates between the sexes and the two older oldest age groups. Though there are many more men in the sample, the two groups have nearly identical false positive rates, while color tells us that there are larger false positive rate disparities between races and age categories than false omission rate disparity." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "dp = aqp.plot_disparity_all(majority_bdf, metrics=['for_disparity', 'fpr_disparity'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)\n", "\n", "\n", "## How do I assess model fairness?\n", "\n", "### _Aequitas Fairness() Class_\n", "Finally, the Aequitas `Fairness()` class provides three functions that provide a high level summary of fairness. 
This class builds on the dataframe returned from one of the three `Bias()` class **`get_disparity_`** methods. \n", "\n", "Using FPR disparity as an example and the default fairness threshold, we have:\n", "\n", "$$ 0.8 < Disparity_{FPR} = \\frac{FPR_{group}}{FPR_{base group}} < 1.25 $$ \n", "\n", "We can assess fairness at various levels of detail:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Group Level Fairness\n", "When the `label_value` column is not included in the original data set, Aequitas calculates only Statistical Parity and Impact Parity.\n", "\n", "When the `label_value` is included in the original data set, the **`get_group_value_fairness()`** method builds on the previous dataframe, giving us attribute group-level statistics for fairness determinations:\n", "\n", "#### Parities Calculated:\n", "\n", "| Parity | Column Name |\n", "| --- | --- |\n", "| True Positive Rate Parity | 'TPR Parity' |\n", "| True Negative Rate Parity | 'TNR Parity' |\n", "| False Omission Rate Parity | 'FOR Parity' |\n", "| False Discovery Rate Parity | 'FDR Parity' |\n", "| False Positive Rate Parity | 'FPR Parity' |\n", "| False Negative Rate Parity | 'FNR Parity' |\n", "| Negative Predictive Value Parity | 'NPV Parity' |\n", "| Precision Parity | 'Precision Parity' |\n", "| Predicted Positive Ratio$_k$ Parity | 'Statistical Parity' |\n", "| Predicted Positive Ratio$_g$ Parity | 'Impact Parity' |\n", "\n", "#### Also assessed:\n", "- **_Type I Parity_**: Fairness in both FDR Parity and FPR Parity\n", "- **_Type II Parity_**: Fairness in both FOR Parity and FNR Parity\n", "- **_Equalized Odds_**: Fairness in both FPR Parity and TPR Parity\n", "- **_Unsupervised Fairness_**: Fairness in both Statistical Parity and Impact Parity\n", "- **_Supervised Fairness_**: Fairness in both Type I and Type II Parity\n", "- **_Overall Fairness_**: Fairness across all parities for all attributes" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f = Fairness()\n", "fdf = f.get_group_value_fairness(bdf)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `Fairness()` class includes a method to quickly return a list of fairness determinations from the dataframe returned by the **`get_group_value_fairness()`** method." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "parity_determinations = f.list_parities(fdf)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fdf[['attribute_name', 'attribute_value'] + absolute_metrics + b.list_disparities(fdf) + parity_determinations].style" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)\n", "\n", "\n", "### How do I interpret parities?\n", "Calling the Aequitas `Fairness()` class **`get_group_value_fairness()`** method on the dataframe returned from a `Bias()` class **`get_disparity_`** method will return the dataframe with additional columns indicating parities, as seen in the slice of the `get_group_value_fairness` data frame directly above.\n", "\n", "In this case, our base groups are Caucasian for race, Male for gender, and 25-45 for age_cat. By construction, the base group has supervised fairness. (The disparity ratio is 1). 
Relative to the base groups, the COMPAS predictions only provide supervised fairness to one group, Hispanic.\n", "\n", "Above, the African-American false omission and false discovery rates are within the bounds of fairness. This result is expected because COMPAS is calibrated. (Given calibration, it is surprising that Asian and Native American rates are so low. This may be a matter of having few observations for these groups.)\n", "\n", "On the other hand, African-Americans are roughly twice as likely to have false positives and 40 percent less likely to have false negatives. In real terms, 44.8% of African-Americans who did not recidivate were marked high or medium risk (with potential for associated penalties), compared with 23.4% of Caucasian non-reoffenders. This is unfair and is marked False below.\n", "\n", "These findings mark an inherent trade-off between FPR Fairness, FNR Fairness, and calibration, which is present in any decision system where base rates are not equal. See [Chouldechova (2017)](https://www.andrew.cmu.edu/user/achoulde/files/disparate_impact.pdf). Aequitas helps bring this trade-off to the forefront with clear metrics and asks system designers to make a reasoned decision based on their use case." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Attribute Level Fairness\n", "Use the **`get_group_attribute_fairness()`** method to view only the calculated parities from the **`get_group_value_fairness()`** method at the attribute level." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gaf = f.get_group_attribute_fairness(fdf)\n", "gaf" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Overall Fairness\n", "The **`get_overall_fairness()`** method gives a quick boolean assessment of the output of **`get_group_value_fairness()`** or **`get_group_attribute_fairness()`**, returning a dictionary with a determination across all attributes for each of:\n", "- Unsupervised Fairness\n", "- Supervised Fairness\n", "- Overall Fairness" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gof = f.get_overall_fairness(fdf)\n", "gof" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)\n", "\n", "\n", "## How do I visualize bias metric parity?\n", "Once you have run the `Group()` class to retrieve a crosstab of absolute group value bias metrics, added calculated disparities via one of the `Bias()` class **`get_disparity_`** methods, and added parity determinations via the `Fairness()` class **`get_group_value_fairness()`** or **`get_group_attribute_fairness()`** method, you are ready to visualize biases and disparities in terms of fairness determination.\n", "\n", "For visualizing absolute metric fairness with the Aequitas `Plot()` class, a particular metric can be specified with **`plot_fairness_group()`**. A list of particular metrics of interest or `'all'` metrics can be plotted with **`plot_fairness_group_all()`**." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing parity of a single absolute group metric across all population groups\n", "\n", "The chart below displays the absolute group metric Predicted Positive Rate (ppr) across each attribute, colored based on fairness determination for that attribute group (green = 'True' and red = 'False'). \n", "\n", "We can see from the green color that only the 25-45 age group, Male category, and Caucasian groups have been determined to be fair. 
Sound familiar? They should! These are the groups selected as reference groups, so this model is not fair in terms of Statistical Parity for any of the other groups." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "z = aqp.plot_fairness_group(fdf, group_metric='ppr')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing all absolute group metrics across all population groups\n", "The charts below display all calculated absolute group metrics across each attribute, colored based on fairness determination for that attribute group (green = 'True' and red = 'False'). \n", "\n", "Immediately we can see that negative predictive parity status is 'True' for all population groups, and that only two groups had a 'False' determination for true negative parity. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fg = aqp.plot_fairness_group_all(fdf, ncols=5, metrics = \"all\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)\n", "\n", "\n", "## How do I visualize parity between groups in my model? \n", "To visualize disparity fairness based on the dataframe returned from the `Fairness()` class **`get_group_value_fairness()`** method, a particular disparity metric can be specified with the **`plot_fairness_disparity()`** method in the Aequitas `Plot()` class. To plot a single disparity, a metric and an attribute must be specified.\n", "\n", "Disparities related to a list of particular metrics of interest or `'all'` metrics can be plotted with **`plot_fairness_disparity_all()`**. At least one metric or at least one attribute **must** be specified when plotting multiple fairness disparities (or the same disparity across multiple attributes)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing parity between groups in a single user-specified attribute for a single user-specified disparity metric\n", "\n", "The treemap below displays False Discovery Rate disparity values between race attribute groups calculated based on a predefined reference group ('Caucasian'), colored based on fairness determination for that attribute group (green = 'True' and red = 'False'). We see very quickly that only two groups have a 'False' parity determination." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = aqp.plot_fairness_disparity(fdf, group_metric='fdr', attribute_name='race')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fpr = aqp.plot_fairness_disparity(fdf, group_metric='fpr', attribute_name='race')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Researcher Check: Could the unfairness I am seeing be related to small group sizes in my sample?\n", "\n", "Use the `min_group_size` parameter on all visualization methods to visualize parities for only those sample population groups above a user-specified percentage of the total sample size. Note that only the smallest groups had a 'False' determination for false discovery rate parity above. The parity determination is 'True' for all groups of at least 1% of the sample size."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = aqp.plot_fairness_disparity(fdf, group_metric='fdr', attribute_name='race', \n", " min_group_size=0.01, significance_alpha=0.05)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing parity between groups in a single user-specified attribute for all calculated disparity metrics\n", "\n", "The treemaps below display disparities between race attribute groups calculated based on a predefined reference group ('Caucasian') for all 10 disparity metrics, colored based on fairness determination for that attribute group (green = 'True' and red = 'False').\n", "\n", "As all treemap squares are sized and positioned based on group size, the population groups on all subplots are found in the same place across all disparity metrics, allowing for ease of comparison of fairness determinations for each 'race' group across every calculated dipsarity metric." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "a_tm = aqp.plot_fairness_disparity_all(fdf, attributes=['race'], metrics='all')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing parity between all groups for multiple user-specified disparity metrics\n", "\n", "The treemaps below display Predicted Positive Group Rate (pprev) and Predicted Positive Rate (ppr) disparities between attribute groups for all three attributes (race, sex, age category) calculated based on predefined reference groups ('race' attribute: Caucasian, 'sex' attribute: Male, 'age_cat' attribute: 25-45), colored based on fairness determination for that attribute group (green = 'True' and red = 'False'). As we want to plot for all groups, there is no need to specify any attributes. \n", "\n", "We can see that the Predicted Positive Group Rate Parity (Impact Parity) determination was 'False' for nearly every race in comparison to Caucausians, and 'False' for every other age category in comparison to the 25-45 age group, and that overall Predicted Positive Rate Parity (Statistical Parity) did not have any 'True' fairness determinations at all." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "r_tm = aqp.plot_fairness_disparity_all(fdf, metrics=['pprev_disparity', 'ppr_disparity'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing parity between groups in multiple user-specified attributes\n", "\n", "The treemaps below display disparities between attribute groups for all two attributes (sex, age category) calculated based on predefined reference groups ('sex' attribute: Male, 'age_cat' attribute: 25-45) for the six default disparity metrics, colored based on fairness determination for that attribute group (green = 'True' and red = 'False'). As we want to see only the default metrics, we do not need to set the 'metrics' parameter. \n", "\n", "Note that there is slightly more parity between the sexes (FNR, FDR, FNR, and Statistical Parity) than between age categories (FDR Parity only)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "n_tm = aqp.plot_fairness_disparity_all(fdf, attributes=['sex', 'age_cat'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Aequitas Effect\n", "\n", "By breaking down the COMPAS predictions using a variety of bias and disparity metrics calculated using different reference groups, we are able to surface the specific metrics for which the model is imposing bias on given attribute groups, and have a clearer lens when evaluating models and making recommendations for intervention. \n", "\n", "Researchers utilizing Aequitas will be able to make similar evaluations on their own data sets, and as they continue to use the tool, will begin to identify patterns in where biases exist and which models appear to produce less bias, thereby helping to reduce bias and its effects in future algorithm-based decision-making." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Back to Top](#top_cell)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" } }, "nbformat": 4, "nbformat_minor": 2 }