{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "**DoWhy example on the Twins dataset**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we study the Twins dataset analyzed by Louizos et al. We focus on same-sex twin pairs in which both twins weigh less than 2 kg. The treatment t = 1 is being born the heavier twin, and the outcome is each twin's mortality in the first year of life. The confounding variable used is 'gestat10', the number of gestational weeks prior to birth, as it is highly correlated with the outcome. The results obtained with the methods below are consistent with those reported in the paper." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import os, sys\n", "sys.path.append(os.path.abspath(\"../../../\"))\n", "import pandas as pd\n", "import numpy as np\n", "import dowhy\n", "from dowhy import CausalModel\n", "from dowhy import causal_estimators" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Load the Data**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Loading the data involves combining the covariates, treatment, and outcome, and unpacking the paired structure of the records. Since each record contains entries for both twins, their mortalities can be treated as two potential outcomes. The treatment in the original data is given in terms of the twins' weights. Therefore, to obtain a binary treatment, each child's information is placed in a separate row, instead of both children's information being condensed into a single row as in the original data source."
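] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a rough illustration of the reshaping described above (not part of the original notebook; the toy weights and outcomes are made up), each pair record is split into one row per twin, with a binary treatment flag:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Hypothetical toy pair: (lighter_weight, heavier_weight, lighter_mortality, heavier_mortality)\n", "pair = (936.0, 1006.0, 0.0, 0.0)\n", "\n", "#One row per child: [weight, treatment(=is_heavier), outcome]\n", "rows = [[pair[0], 0, pair[2]],\n", "        [pair[1], 1, pair[3]]]\n", "print(rows)  #[[936.0, 0, 0.0], [1006.0, 1, 0.0]]"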
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "#The covariates data has 46 features\n", "x = pd.read_csv(\"https://raw.githubusercontent.com/AMLab-Amsterdam/CEVAE/master/datasets/TWINS/twin_pairs_X_3years_samesex.csv\")\n", "\n", "#The outcome data contains mortality of the lighter and heavier twin\n", "y = pd.read_csv(\"https://raw.githubusercontent.com/AMLab-Amsterdam/CEVAE/master/datasets/TWINS/twin_pairs_Y_3years_samesex.csv\")\n", "\n", "#The treatment data contains weight in grams of both the twins\n", "t = pd.read_csv(\"https://raw.githubusercontent.com/AMLab-Amsterdam/CEVAE/master/datasets/TWINS/twin_pairs_T_3years_samesex.csv\")" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "#_0 denotes features specific to the lighter twin and _1 denotes features specific to the heavier twin\n", "lighter_columns = ['pldel', 'birattnd', 'brstate', 'stoccfipb', 'mager8',\n", " 'ormoth', 'mrace', 'meduc6', 'dmar', 'mplbir', 'mpre5', 'adequacy',\n", " 'orfath', 'frace', 'birmon', 'gestat10', 'csex', 'anemia', 'cardiac',\n", " 'lung', 'diabetes', 'herpes', 'hydra', 'hemo', 'chyper', 'phyper',\n", " 'eclamp', 'incervix', 'pre4000', 'preterm', 'renal', 'rh', 'uterine',\n", " 'othermr', 'tobacco', 'alcohol', 'cigar6', 'drink5', 'crace',\n", " 'data_year', 'nprevistq', 'dfageq', 'feduc6', 'infant_id_0',\n", " 'dlivord_min', 'dtotord_min', 'bord_0',\n", " 'brstate_reg', 'stoccfipb_reg', 'mplbir_reg']\n", "heavier_columns = [ 'pldel', 'birattnd', 'brstate', 'stoccfipb', 'mager8',\n", " 'ormoth', 'mrace', 'meduc6', 'dmar', 'mplbir', 'mpre5', 'adequacy',\n", " 'orfath', 'frace', 'birmon', 'gestat10', 'csex', 'anemia', 'cardiac',\n", " 'lung', 'diabetes', 'herpes', 'hydra', 'hemo', 'chyper', 'phyper',\n", " 'eclamp', 'incervix', 'pre4000', 'preterm', 'renal', 'rh', 'uterine',\n", " 'othermr', 'tobacco', 'alcohol', 'cigar6', 'drink5', 'crace',\n", " 'data_year', 'nprevistq', 'dfageq', 
'feduc6',\n", " 'infant_id_1', 'dlivord_min', 'dtotord_min', 'bord_1',\n", " 'brstate_reg', 'stoccfipb_reg', 'mplbir_reg']" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "#Since the data has a pair structure, process it into a separate row for each twin so that each child can be treated as an instance\n", "data = []\n", "\n", "for i in range(len(t.values)):\n", "\n", "    #keep the pair only if both twins weigh less than 2kg\n", "    if t.iloc[i].values[1]>=2000 or t.iloc[i].values[2]>=2000:\n", "        continue\n", "\n", "    this_instance_lighter = list(x.iloc[i][lighter_columns].values)\n", "    this_instance_heavier = list(x.iloc[i][heavier_columns].values)\n", "\n", "    #adding weight\n", "    this_instance_lighter.append(t.iloc[i].values[1])\n", "    this_instance_heavier.append(t.iloc[i].values[2])\n", "\n", "    #adding treatment, is_heavier\n", "    this_instance_lighter.append(0)\n", "    this_instance_heavier.append(1)\n", "\n", "    #adding the outcome\n", "    this_instance_lighter.append(y.iloc[i].values[1])\n", "    this_instance_heavier.append(y.iloc[i].values[2])\n", "    data.append(this_instance_lighter)\n", "    data.append(this_instance_heavier)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
<div>5 rows × 53 columns</div>
" ], "text/plain": [ " pldel birattnd brstate stoccfipb mager8 ormoth mrace meduc6 dmar \\\n", "0 1.0 1.0 1.0 1.0 3.0 0.0 1.0 3.0 1.0 \n", "1 1.0 1.0 1.0 1.0 3.0 0.0 1.0 3.0 1.0 \n", "2 1.0 1.0 1.0 1.0 3.0 0.0 1.0 2.0 0.0 \n", "3 1.0 1.0 1.0 1.0 3.0 0.0 1.0 2.0 0.0 \n", "4 1.0 1.0 1.0 1.0 3.0 0.0 1.0 3.0 1.0 \n", "\n", " mplbir ... infant_id dlivord_min dtotord_min bord brstate_reg \\\n", "0 1.0 ... 35.0 3.0 3.0 2.0 5.0 \n", "1 1.0 ... 34.0 3.0 3.0 1.0 5.0 \n", "2 1.0 ... 47.0 NaN NaN NaN 5.0 \n", "3 1.0 ... 46.0 NaN NaN NaN 5.0 \n", "4 1.0 ... 52.0 1.0 1.0 1.0 5.0 \n", "\n", " stoccfipb_reg mplbir_reg wt treatment outcome \n", "0 5.0 5.0 936.0 0 0.0 \n", "1 5.0 5.0 1006.0 1 0.0 \n", "2 5.0 5.0 737.0 0 0.0 \n", "3 5.0 5.0 850.0 1 1.0 \n", "4 5.0 5.0 1830.0 0 0.0 \n", "\n", "[5 rows x 53 columns]" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "cols = [ 'pldel', 'birattnd', 'brstate', 'stoccfipb', 'mager8',\n", " 'ormoth', 'mrace', 'meduc6', 'dmar', 'mplbir', 'mpre5', 'adequacy',\n", " 'orfath', 'frace', 'birmon', 'gestat10', 'csex', 'anemia', 'cardiac',\n", " 'lung', 'diabetes', 'herpes', 'hydra', 'hemo', 'chyper', 'phyper',\n", " 'eclamp', 'incervix', 'pre4000', 'preterm', 'renal', 'rh', 'uterine',\n", " 'othermr', 'tobacco', 'alcohol', 'cigar6', 'drink5', 'crace',\n", " 'data_year', 'nprevistq', 'dfageq', 'feduc6',\n", " 'infant_id', 'dlivord_min', 'dtotord_min', 'bord',\n", " 'brstate_reg', 'stoccfipb_reg', 'mplbir_reg','wt','treatment','outcome']\n", "df = pd.DataFrame(columns=cols,data=data)\n", "df.head()" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.16421895861148197\n", "0.1894192256341789\n", "ATE -0.025200267022696926\n" ] } ], "source": [ "df = df.astype({\"treatment\":'bool'}, copy=False) #explicitly assigning treatment column as boolean \n", "\n", "df.fillna(value=df.mean(),inplace=True) #filling the missing 
values\n", "df.fillna(value=df.mode().loc[0],inplace=True)\n", "\n", "data_1 = df[df[\"treatment\"]==1]\n", "data_0 = df[df[\"treatment\"]==0]\n", "print(np.mean(data_1[\"outcome\"]))\n", "print(np.mean(data_0[\"outcome\"]))\n", "print(\"ATE\", np.mean(data_1[\"outcome\"])- np.mean(data_0[\"outcome\"]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**1. Model**" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "scrolled": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "WARNING:dowhy.causal_model:Causal Graph not provided. DoWhy will construct a graph based on data inputs.\n", "INFO:dowhy.causal_graph:If this is observed data (not from a randomized experiment), there might always be missing confounders. Adding a node named \"Unobserved Confounders\" to reflect this.\n", "INFO:dowhy.causal_model:Model to find the causal effect of treatment ['treatment'] on outcome ['outcome']\n" ] } ], "source": [ "#The causal model has \"treatment = is_heavier\", \"outcome = mortality\" and \"gestat10 = gestational weeks before birth\"\n", "model=CausalModel(\n", " data = df,\n", " treatment='treatment',\n", " outcome='outcome',\n", " common_causes='gestat10'\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**2. Identify**" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "scrolled": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['gestat10', 'U']\n", "WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARN: Do you want to continue by ignoring any unobserved confounders? 
(use proceed_when_unidentifiable=True to disable this prompt) [y/n] y\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[]\n" ] } ], "source": [ "identified_estimand = model.identify_effect()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**3. Estimate Using Various Methods**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**3.1 Using Linear Regression**" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:dowhy.causal_estimator:INFO: Using Linear Regression Estimator\n", "INFO:dowhy.causal_estimator:b: outcome~treatment+gestat10\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "*** Causal Estimate ***\n", "\n", "## Target estimand\n", "Estimand type: nonparametric-ate\n", "### Estimand : 1\n", "Estimand name: backdoor\n", "Estimand expression:\n", " d \n", "────────────(Expectation(outcome|gestat10))\n", "d[treatment] \n", "Estimand assumption 1, Unconfoundedness: If U→{treatment} and U→outcome then P(outcome|treatment,gestat10,U) = P(outcome|treatment,gestat10)\n", "### Estimand : 2\n", "Estimand name: iv\n", "No such variable found!\n", "\n", "## Realized estimand\n", "b: outcome~treatment+gestat10\n", "## Estimate\n", "Value: -0.025200267022696315\n", "\n", "## Statistical Significance\n", "p-value: <0.001\n", "\n", "ATE -0.025200267022696926\n", "Causal Estimate is -0.025200267022696315\n" ] } ], "source": [ "estimate = model.estimate_effect(identified_estimand,\n", " method_name=\"backdoor.linear_regression\", test_significance=True\n", ")\n", "\n", "print(estimate)\n", "print(\"ATE\", np.mean(data_1[\"outcome\"])- np.mean(data_0[\"outcome\"]))\n", "print(\"Causal Estimate is \" + str(estimate.value))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**3.2 Using Propensity Score Matching**" ] }, { "cell_type": "code", 
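"execution_count": null, "metadata": {}, "outputs": [], "source": [ "#A rough manual sketch (not part of the original notebook) of the first step of\n", "#propensity score matching: model P(treatment | gestat10) and inspect the scores.\n", "#DoWhy's estimator below fits such a model and matches on it; sklearn is assumed\n", "#to be available here (it is a DoWhy dependency).\n", "from sklearn.linear_model import LogisticRegression\n", "\n", "ps_model = LogisticRegression().fit(df[[\"gestat10\"]], df[\"treatment\"])\n", "ps = ps_model.predict_proba(df[[\"gestat10\"]])[:, 1]\n", "print(ps.min(), ps.max())  #scores cluster near 0.5: each pair has one heavier twin" ] }, { "cell_type": "code",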
"execution_count": 10, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:dowhy.causal_estimator:INFO: Using Propensity Score Matching Estimator\n", "INFO:dowhy.causal_estimator:b: outcome~treatment+gestat10\n", "/home/arshia/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py:760: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n", " y = column_or_1d(y, warn=True)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Causal Estimate is -0.012600133511348465\n", "ATE -0.025200267022696926\n" ] } ], "source": [ "estimate = model.estimate_effect(identified_estimand,\n", " method_name=\"backdoor.propensity_score_matching\"\n", ")\n", "\n", "print(\"Causal Estimate is \" + str(estimate.value))\n", "\n", "print(\"ATE\", np.mean(data_1[\"outcome\"])- np.mean(data_0[\"outcome\"]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**4. Refute**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**4.1 Adding a random cause**" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:dowhy.causal_estimator:INFO: Using Propensity Score Matching Estimator\n", "INFO:dowhy.causal_estimator:b: outcome~treatment+gestat10+w_random\n", "/home/arshia/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py:760: DataConversionWarning: A column-vector y was passed when a 1d array was expected. 
Please change the shape of y to (n_samples, ), for example using ravel().\n", " y = column_or_1d(y, warn=True)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Refute: Add a Random Common Cause\n", "Estimated effect:(-0.012600133511348465,)\n", "New effect:(-0.02891355140186916,)\n", "\n" ] } ], "source": [ "refute_results=model.refute_estimate(identified_estimand, estimate,\n", " method_name=\"random_common_cause\")\n", "print(refute_results)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**4.2 Using a placebo treatment**" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:dowhy.causal_estimator:INFO: Using Propensity Score Matching Estimator\n", "INFO:dowhy.causal_estimator:b: outcome~placebo+gestat10\n", "/home/arshia/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py:760: DataConversionWarning: A column-vector y was passed when a 1d array was expected. 
Please change the shape of y to (n_samples, ), for example using ravel().\n", " y = column_or_1d(y, warn=True)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Refute: Use a Placebo Treatment\n", "Estimated effect:(-0.012600133511348465,)\n", "New effect:(-0.16384345794392524,)\n", "\n" ] } ], "source": [ "res_placebo=model.refute_estimate(identified_estimand, estimate,\n", " method_name=\"placebo_treatment_refuter\", placebo_type=\"permute\")\n", "print(res_placebo)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**4.3 Using a data subset refuter**" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:dowhy.causal_estimator:INFO: Using Propensity Score Matching Estimator\n", "INFO:dowhy.causal_estimator:b: outcome~treatment+gestat10\n", "/home/arshia/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py:760: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n", " y = column_or_1d(y, warn=True)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Refute: Use a subset of data\n", "Estimated effect:(-0.012600133511348465,)\n", "New effect:(0.15136062305873627,)\n", "\n" ] } ], "source": [ "res_subset=model.refute_estimate(identified_estimand, estimate,\n", " method_name=\"data_subset_refuter\", subset_fraction=0.9)\n", "print(res_subset)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.10" } }, "nbformat": 4, "nbformat_minor": 4 }