{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Fraud Classification on DC/OS\n", "Fraud classification is a common data science problem with many solutions. It is similar in approach to many others (e.g., click prediction or spam detection) in that it is a rare events binary classification problem. That is, there are two classes, fraud and not fraud and one case is rare.\n", "\n", "The notebooks in these examples were created on JupyterLab running on DC/OS. In this set of notebooks, we will walk through four modeling approaches: logistic regression, decision tree, random forest, and a neural network. General pros and cons will be given for each.\n", "\n", "These examples are simple and can be run outside DC/OS. They lay the foundation, however, for future posts. Future posts will demonstrate how DC/OS can make scaling with multiple nodes and GPU's easy. Instructions for installing and running JupyterLab on DC/OS can be found [here](https://mesosphere.com/blog/dcos-tensorflow-jupyter-beakerx/).\n", "## Data\n", "The data used was generated with a [payment simulator](https://github.com/EdgarLopezPhD/PaySim) based on this [fraud simulation paper](https://www.researchgate.net/profile/Stefan_Axelsson4/publication/313138956_PAYSIM_A_FINANCIAL_MOBILE_MONEY_SIMULATOR_FOR_FRAUD_DETECTION/links/5890f87e92851cda2568a295/PAYSIM-A-FINANCIAL-MOBILE-MONEY-SIMULATOR-FOR-FRAUD-DETECTION.pdf)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data Description\n", "There are a total of ~1.3 million records and 11 columns in the dataset. Because only two types are required to build the models (Transfer and Cash-out), less than 600k records are kept. There are only about 8.4k fraud cases. Columns that have no value for the analysis are also dropped, leaving 7 columns (6 independent variables and 1 dependent variable).\n", "A description of the columns follows:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "|Variable|Description|Keep|\n", "|:--- |:--- |:--- |\n", "|step|Maps a unit of time in the real world. In this case, 1 step is 1 hour of time.|Drop|\n", "|type|CASH-IN, CASH-OUT, DEBIT, PAYMENT and TRANSFER|Keep (TRANSFER and CASH-OUT)|\n", "|amount|The amount of the transaction.|Keep|\n", "|nameOrig|The customer ID for the initiator of the transaction.|Drop|\n", "|oldbalanceOrg|The initial balance before the transaction.|Keep|\n", "|newbalanceOrg|The customer's balance after the transaction.|Keep|\n", "|nameDest|The customer ID for the recipient of the transaction.|Drop|\n", "|oldbalanceDest|The initial recipient balance before the transaction.|Keep|\n", "|newbalanceDest|The recipient's balance after the transaction.|Keep|\n", "|isFraud|This identifies a fraudulent transaction (1) and non fraudulent transaction(0).|Keep|\n", "|isFlaggedFraud|This is a rule-based system that flags illegal attempts to transfer more than 200.000 in a single transaction.|Drop|" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exploration\n", "These examples are focused mostly on the models and interpretation. Most of the initial data exploration is skipped.\n", "## Measuring model performance\n", "We will use the most common metrics for assessing fraud models: Accuracy, Precision, Recall, and F1." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "| . |Predicted Not Fraud|Predicted Is Fraud|\n", "|------------------|-------------------|------------------|\n", "|Actual Not Fraud | True Negative | False Positive |\n", "|Actual Is Fraud | False Negative | True Positive |" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Accuracy - Proportion of predictions that are correct. $\\frac{True Positive + True Negative}{True Positive + True Negative + False Positive + False Negative}$\n", "* Precision - True positive over total positive actual cases. $\\frac{True Positive}{True Positive + False Positive}$\n", "* Recall - True positive over total positive predicted cases. $\\frac{True Positive}{True Positive + False Negative}$\n", "* F1 - A balance between Precision and Recall (harmonic mean of precision and recall) $\\frac{2 * Precision * Recall}{Precision + Recall}$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Models" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true }, "source": [ "### Logistic Regression\n", "In statistics, a logistic regression model is a statistical model that is for data with a binary dependent variable. A typical logistic model is a model that the log-odds of the probability of an event is a linear combination of independent or predictor variables. The two possible dependent variable values are often labelled as 1 or 0, which represent outcomes such as fraud/no fraud, click/no click, or spam/no spam. The logistic regression model can be generalized to more than two levels of the dependent variable, but that is not needed for this example.\n", "\n", "* Pros\n", " * Compuationally inexpensive\n", " * Easy to interpret\n", " * Easy to implement\n", "* Cons\n", " * Prone to overfitting when penalty cost is not large enough.\n", " * Assumes parametric.\n", " * Assumes linear relationship (in transformed space) between independent variables.\n", " * Must know interactions ahead of time.\n", "\n", "[Logistic Regression](./examples/Logistic+Regression.ipynb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Decision Tree\n", "A decision tree is among the easiest models to conceptually understand and interpret the results. There are many different algorithms, but perhaps the easiest to describe is a recursive partition approach. A dataset is split recursively. Each split is determined based on the independent variable that results in the largest possible reduction in heterogeneity of the dependent variable (there are different measures, e.g. gini or enthropy). The splits stop when they a reach predetermined stop criterion.\n", "\n", "* Pros\n", " * Compuationally inexpensive\n", " * Very easy to interpret\n", " * Is non-parametric.\n", " * Easy to implement resulting model (at least the most important rules at top of tree)\n", " * Does not assume linear relationship between independent variables.\n", " * Can discover interactions.\n", "* Cons\n", " * Difficult to tune optimally\n", " * Prone to overfitting when the cost of additional splits are not large enough.\n", " * Likely to perform poorly when cost of additional splits are too large.\n", "\n", "[Decision Tree](./examples/Tree+Classifier.ipynb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Random Forest\n", "Random Forests are essentially bootstrapped Decision Trees. A random set of features are selected. Given those features a bootstap selection (with replacement) of records are generated and a decision tree is created. This process is repeated N times (N = the number of trees created). Each iteration generates a tree and each tree gets a \"vote\" on what features are most important and the magnitude of importance.
\n", "This is a rough description with many relevant details omitted. In general. however, random forests have the following properties.\n", "\n", "* Pros\n", " * Resistant to overfitting with little tuning.\n", " * Non parametric.\n", " * Easy to determine most influential variables.\n", " * Easy to run training in parallel.\n", "* Cons\n", " * Compuationally expensive.\n", " * Sometimes difficult to implement resulting model (depends on infrastructure).\n", "\n", "[Random Forest](./examples/Random+Forest.ipynb)" ] }, { "cell_type": "markdown", "metadata": { "heading_collapsed": true }, "source": [ "### Neural Network\n", "Artificial neural networks (ANNs) are computing systems vaguely inspired by the biological neural networks that constitute animal brains. They are a relatively new type of model, but have quickly become the dominant approach to solving certain types of problems (e.g. object detection and NLP). They do well with many type of problems where huge amounts of training data is available. \n", "\n", "* Pros\n", " * Non parametric.\n", " * Performs exceptionally well (often best) for many complex problems with large training sets. Examples include:\n", " * Image Recognition\n", " * Self Driving Cars\n", " * Natural Language Processing\n", "* Cons\n", " * Compuationally expensive.\n", " * Sometimes difficult to implement resulting model (depends on infrastructure).\n", " * Prone to overfitting without significant tuning.\n", " * Difficult to interpret model.\n", " * Many parameters and hyper parameters to tune.\n", " * Typically requires large training sets.\n", "\n", "[Neural Network](./examples/Neural+Network.ipynb)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }