{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Exploratory analysis\n",
    "\n",
    "We aren't focusing on defining metrics or data cleaning in this workshop, so our first step will be exploratory analysis on a prepared dataset.  This will involve looking at the data and then examining each variable in isolation to see if we can identify any interesting patterns."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We'll start by reading in our fraud dataset and looking at the column names:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df = pd.read_parquet(\"fraud-cleaned-sample.parquet\")\n",
    "df.columns"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When dealing with an unfamiliar dataset, we first need to understand what data we've collected.  If we're lucky, our organization will publish a data catalog or data glossary that explains what each dataset contains.  If we don't have this extra metadata, we'll need to explore the dataset manually and document our observations.  Sometimes the column names will be self-explanatory, but if they aren't, we'll have to dig deeper to make sense of them.  A good second step is examining the types of each column."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.dtypes"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Sometimes, `string`-valued attributes in structured data are free-form text, but some string-valued variables are [categorical variables](https://en.wikipedia.org/wiki/Categorical_variable).  We can usually identify these by looking at unique values to identify attributes with a small number of possible values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "list(df[\"label\"].unique())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "list(df[\"trans_type\"].unique())"
   ]
  },
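  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "More generally, we can count the distinct values in every string-valued column at once; columns with only a handful of distinct values are likely candidates for categorical treatment.  (This is a quick sketch, assuming the string columns in this dataset are stored with the `object` dtype.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# count distinct values per string-valued column; a small count suggests a\n",
    "# categorical variable rather than free-form text\n",
    "df.select_dtypes(include=[\"object\"]).nunique().sort_values()"
   ]
  },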
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If we're dealing with messy data, categorical variables may have a large number of unique values, some of which are spurious and may occur only once.  In this case, we don't want to examine every unique value but we can look at the top few values to get a sense for what the range of values _should_ be.  In this next cell, we're looking at the three most common transaction types."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[[\"timestamp\", \"trans_type\"]].groupby([\"trans_type\"]).count().nlargest(3, \"timestamp\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can look at the most common transaction types for each class of transaction, as well."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[df[\"label\"] == \"fraud\"][[\"timestamp\", \"trans_type\"]].groupby(\n",
    "    [\"trans_type\"]\n",
    ").count(\n",
    ").nlargest(3, \"timestamp\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[df[\"label\"] == \"legitimate\"][[\"timestamp\", \"trans_type\"]].groupby(\n",
    "    [\"trans_type\"]\n",
    ").count(\n",
    ").nlargest(3, \"timestamp\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Transaction type distribution\n",
    "\n",
    "Let's look at the transaction type first to see what the distribution looks like for legitimate and fraud transactions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pt = pd.pivot_table(df[[\"label\", \"trans_type\", \"timestamp\"]], \n",
    "                    index=[\"label\", \"trans_type\"], aggfunc=len)\n",
    "\n",
    "pt.columns = ['count']\n",
    "gdf = pd.DataFrame(pt.to_records())\n",
    "\n",
    "gdf['total'] = gdf.groupby('label')['count'].transform(np.sum)\n",
    "gdf['percentage'] = gdf['count'] / gdf['total']\n",
    "\n",
    "gdf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import altair as alt\n",
    "\n",
    "alt.Chart(gdf).mark_bar().encode(\n",
    "    alt.Y('percentage:Q', axis=alt.Axis(format='.0%')), column='trans_type', x=\"label\", color='label'\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Foreign transaction distribution\n",
    "\n",
    "We can repeat this process for percentage of foreign vs domestic transactions for each label."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pt = pd.pivot_table(df[[\"label\", \"foreign\", \"timestamp\"]], \n",
    "                    index=[\"label\", \"foreign\"], aggfunc=len)\n",
    "\n",
    "pt.columns = ['count']\n",
    "gdf = pd.DataFrame(pt.to_records())\n",
    "\n",
    "gdf['total'] = gdf.groupby('label')['count'].transform(np.sum)\n",
    "gdf['pctage'] = gdf['count'] / gdf['total']\n",
    "\n",
    "gdf\n",
    "\n",
    "alt.Chart(gdf).mark_bar().encode(\n",
    "    alt.Y('pctage:Q', axis=alt.Axis(format='.0%')), column='foreign', x=\"label\", color='label'\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Transaction amount distribution\n",
    "\n",
    "When we're dealing with continuous variables, rather than categorical ones, we'll need to do something other than simply counting values to make sense of their distributions (we aren't interested, for example, in knowing that there were exactly 24 transactions for \\$17.35).  There are a few options for how to proceed here:\n",
    "\n",
    "1.  we could _quantize_ values by rounding to the nearest dollar (or ten dollars, or to the nearest power of _k_ dollars),\n",
    "1.  we could _discretize_ values by grouping nearby values into histogram buckets, or\n",
    "1.  we could take the _cumulative distribution_ of values, which gives us a curve for the whole distribution.\n",
    "\n",
    "Let's take a look at the latter approach."
   ]
  },
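  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is a minimal sketch of quantizing and discretizing the `amount` column.  (The rounding granularity and the bucket edges below are arbitrary choices made only for illustration; they aren't part of the prepared dataset.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# option 1:  quantize by rounding each amount to the nearest ten dollars\n",
    "rounded = (df[\"amount\"] / 10).round() * 10\n",
    "print(rounded.value_counts().sort_index().head())\n",
    "\n",
    "# option 2:  discretize amounts into a few histogram buckets\n",
    "buckets = pd.cut(df[\"amount\"], bins=[0, 10, 50, 100, 500, 1000, np.inf])\n",
    "df.groupby([\"label\", buckets]).size()"
   ]
  },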
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "qs = df[['label','amount']].groupby('label').quantile(q=[0.01,0.05,0.1,0.25,0.5,0.75,0.9,0.95,0.99])\n",
    "qs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What does the above data mean?  For each label, we have a set of rows, corresponding to the first, fifth, tenth, ..., ninety-fifth, and ninety-ninth percentiles.  The value at each row indicates the transaction amount at each percentile.  As an example, this means that 25% of legitimate transaction amounts are less than or equal to $12.43.\n",
    "\n",
    "We can plot the cumulative distribution of each transaction type to compare these distributions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "qdf = pd.DataFrame(qs.to_records())\n",
    "\n",
    "alt.Chart(qdf).mark_line(interpolate=\"monotone\").encode(\n",
    "    alt.Y(\"amount\", axis=alt.Axis(title='transaction amounts (log scale)'), scale=alt.Scale(type='log')), \n",
    "    alt.X(\"level_0\", axis=alt.Axis(title='cumulative distribution'), scale=alt.Scale(type='linear')), \n",
    "    color=\"label\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we can see, relatively more of the fraudulent transactions are for low dollar amounts."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Interarrival times\n",
    "\n",
    "Another interesting feature to look at is the distribution of interarrival times, or the time gaps between transactions.  Our intuition is that legitimate transactions will be spread apart further (and will have some large gaps), while fraudulent transactions will cluster together."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fraudsamp = df[df[\"label\"] == \"fraud\"].copy()\n",
    "legitsamp = df[df[\"label\"] == \"legitimate\"].sample(len(fraudsamp)).copy()\n",
    "\n",
    "fraudsamp['irank'] = fraudsamp['interarrival'].rank(pct=True, method=\"dense\")\n",
    "legitsamp['irank'] = legitsamp['interarrival'].rank(pct=True, method=\"dense\")\n",
    "qdf = pd.concat([fraudsamp.groupby(['label', 'interarrival', 'irank']).size(), legitsamp.groupby(['label', 'interarrival', 'irank']).size()])\n",
    "qdf = pd.DataFrame(pd.DataFrame(qdf).to_records())\n",
    "qdf = qdf[qdf['interarrival'] > 0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "alt.Chart(qdf.sample(5000)).mark_line().encode(\n",
    "    alt.Y(\"interarrival\", axis=alt.Axis(title='interarrival time'), scale=alt.Scale(type='log')), \n",
    "    alt.X(\"irank\", axis=alt.Axis(title='cumulative distribution'), scale=alt.Scale(type='linear')), \n",
    "    color=\"label\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Activity by time of day\n",
    "\n",
    "We've generated users to act during typical sleep-wake cycles at different times of day.  (To simplify things, the data generator assumes that a user with a user ID _k_ is in time zone _t_ if _k_ mod 24 == _t_.)  As we'll see, legitimate transactions are centered around waking hours, while fraudulent transactions are roughly equally likely to happen at any time of day."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[\"hour\"] = df[\"timestamp\"] % 86400 // 3600\n",
    "df[\"tz\"] = df[\"user_id\"] % 24\n",
    "\n",
    "tzpivot = pd.pivot_table(df[[\"label\", \"hour\", \"tz\", \"timestamp\"]], \n",
    "                         index=[\"label\", \"hour\", \"tz\"], aggfunc=len)\n",
    "\n",
    "tzdf = pd.DataFrame(tzpivot.to_records()).set_axis([\"label\", \"tz\", \"hour\", \"count\"], axis=\"columns\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "alt.Chart(tzdf).mark_area().encode(\n",
    "    x=\"hour\",\n",
    "    y=alt.Y(\"count\", stack=\"normalize\", axis=alt.Axis(title=\"percentage of total\")), \n",
    "    color=\"tz\",\n",
    "    column=\"label\",\n",
    "    tooltip=['label', 'tz', 'hour', 'count']\n",
    ")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}