{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "5bchGceJvcb4" }, "source": [ "# Churn prediction with Adventure Works\n", "\n", "In this notebook we'll use the staging tables created in the previous challenges and try to predict whether our customers will churn.\n", "\n", "Since the dataset doesn't contain any churn flag, we're going to start with deciding what churn means in our case and then add that information to our data so that we can train a model." ] }, { "cell_type": "markdown", "metadata": { "id": "UKKorWuguifH" }, "source": [ "## Parameters\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "gV1x0wdct25o" }, "outputs": [], "source": [ "REGION=\"us-central1\" #@param region {type:\"string\"}" ] }, { "cell_type": "markdown", "metadata": { "id": "9WYdihgyeZJh" }, "source": [ "## Getting started with `pandas`\n", "\n", "The well known `pandas` framework supports reading data from bigquery tables, and Colab comes pre-installed with all of the required libraries.\n", "\n", "> BigQuery also provides the [BigFrames](https://cloud.google.com/bigquery/docs/dataframes-quickstart) package that's designed to be compatible with `pandas` data frames and can handle large amounts of data, but since we're dealing relatively small datasets we'll stick to the familiar `pandas` data frames." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "fUsQzlgjiA_G" }, "outputs": [], "source": [ "import pandas as pd\n", "\n", "df = pd.read_gbq(f\"curated.stg_sales_order_header\")" ] }, { "cell_type": "markdown", "metadata": { "id": "vcCdOQ7HwVY5" }, "source": [ "Let's have a quick look at our data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YQguJBV6iRSI" }, "outputs": [], "source": [ "df.head(20)" ] }, { "cell_type": "markdown", "metadata": { "id": "VXZvtJiqwaJk" }, "source": [ "## Churned or not\n", "\n", "In order to decide whether a customer can be considered as _churned_ we're going to look at their last purchase date, if that's over a threshold, i.e, customer hasn't purchased anything since last _N_ days, we'll mark them as churned.\n", "\n", "> There's a number of different methods to do churn analysis, including survival analysis, time to event predictions etc. These are beyond the scope of this exercise, so we're keeping things very simple.\n", "\n", "But what's a good threshold for our dataset? Let's analyze our customer base and find out how many days have passed since the last purchase date of every customer." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "oZUA6ZlZj72O" }, "outputs": [], "source": [ "lpd = df.groupby(\"customer_id\")[\"order_date\"].max()" ] }, { "cell_type": "markdown", "metadata": { "id": "uelihyMCvatx" }, "source": [ "Now we have the last purchase date for each customer, we could subtract that from the current date, but the dataset we're using (although updated for dates) only has data for a specific period. Let's find the date that we can use as the _current date_ for this dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bbME6kpgll6J" }, "outputs": [], "source": [ "mpd = max(lpd)" ] }, { "cell_type": "markdown", "metadata": { "id": "I6AfLWJIxk-9" }, "source": [ "Okay, we're ready to calculate the number of days since last purchase." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "wX71qTuSl71_" }, "outputs": [], "source": [ "days_since_last_purchase = pd.to_datetime(mpd) - pd.to_datetime(lpd)" ] }, { "cell_type": "markdown", "metadata": { "id": "LlG55ngjxpjl" }, "source": [ "How does the distribution of this look like?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "o0sfuqnpoqEo" }, "outputs": [], "source": [ "days_since_last_purchase.dt.days.hist();" ] }, { "cell_type": "markdown", "metadata": { "id": "q4vpoYmZxty-" }, "source": [ "We see that majority of our customers have been making relatively recent purchases, although there's a few that haven't bought anything since **3** years, those have certainly churned.\n", "\n", "That's useful information, but we need more data. We need to find out how long it takes between two consecutive purchases, to determine our potential threshold." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "EGmMzN1Upxch" }, "outputs": [], "source": [ "diffs = df.sort_values([\"customer_id\", \"order_date\"]).groupby(\"customer_id\")[\"order_date\"].diff()" ] }, { "cell_type": "markdown", "metadata": { "id": "C1MOJ0obyX9p" }, "source": [ "A picture is worth thousand words, let's visualize a histogram of this data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2AZ-Lc5nhpxo" }, "outputs": [], "source": [ "diffs.dt.days.hist();" ] }, { "cell_type": "markdown", "metadata": { "id": "10WWACBZyi4C" }, "source": [ "It looks like most purchases are done within 100 days. So, to stay on the safe side of things, we're going use **180** days (almost 6 months), to be our threshold. So, if a customer hasn't done a purchase for more than 180 days, we'll consider them as churned. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "394wdUHEi9oi" }, "outputs": [], "source": [ "df[\"last_purchase_date\"] = df.groupby(\"customer_id\")[\"order_date\"].transform(\"max\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "o6IqVh5WmsYH" }, "outputs": [], "source": [ "df[\"churned\"] = (pd.to_datetime(mpd) - pd.to_datetime(df[\"last_purchase_date\"])).dt.days > 180" ] }, { "cell_type": "markdown", "metadata": { "id": "EUIYU7XWy9IJ" }, "source": [ "Now we've established our churned customer definition, let's have a look at the distribution of customers who have churned." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "X0n3PNtnm--z" }, "outputs": [], "source": [ "df.groupby(\"churned\")[\"churned\"].count().plot.bar();" ] }, { "cell_type": "markdown", "metadata": { "id": "TulzFSwczJvo" }, "source": [ "That looks pretty nice and balanced, although in real world we'd expect (or hope for) less customers churning." ] }, { "cell_type": "markdown", "metadata": { "id": "KuVdBXmi7phJ" }, "source": [ "## Training data\n", "\n", "Alright, we're almost ready to do some training. We've now established which customers have churned, next step is to combine that information with for example customer details, so that we can make predictions based on customer details." ] }, { "cell_type": "markdown", "metadata": { "id": "dpiiQr0uu4qV" }, "source": [ "### Exploration playground\n", "\n", "Data scientists typically need a separate place where they can create different types of derived tables, so let's create another dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "XK7KxJINu-Zr" }, "outputs": [], "source": [ "! bq show exploration || bq mk --location=$REGION exploration" ] }, { "cell_type": "markdown", "metadata": { "id": "YEBjPXPiy6nB" }, "source": [ "Now we have a separate dataset, let's store the dataframe that we used to determine the churn information." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zhFidjSW7x6_" }, "outputs": [], "source": [ "tdf = df.groupby(\"customer_id\", as_index=False).max(\"churned\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "coATJWD68BqP" }, "outputs": [], "source": [ "tdf[[\"customer_id\", \"churned\"]].to_gbq(\"exploration.churn_labels\", if_exists=\"replace\")" ] }, { "cell_type": "markdown", "metadata": { "id": "glO7zBvrzFO2" }, "source": [ "The training data consists of customer details joined with the churn information, we can do that using `pandas` dataframes, or since both tables are now in BigQuery, using `SQL`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "dPHTX04M89Ei" }, "outputs": [], "source": [ "%%bigquery\n", "CREATE OR REPLACE TABLE\n", " exploration.churn_training AS\n", "SELECT\n", " c.customer_id,\n", " p.*,\n", " l.churned\n", "FROM\n", " curated.stg_customer c,\n", " curated.stg_person p,\n", " exploration.churn_labels l\n", "WHERE\n", " c.person_id = p.business_entity_id AND\n", " c.customer_id = l.customer_id" ] }, { "cell_type": "markdown", "metadata": { "id": "rbokY8t6esD7" }, "source": [ "## Model training\n", "\n", "Now we have the data, we have multiple options. We can use any framework to train a new model, scikit-learn, Tensorflow, PyTorch etc. We could also use Vertex AI to do this training using a managed service on specific hardware. But since the star of this hack is BigQuery, we'll use **BQML**.\n", "\n", "> Note that we're keeping things very simple, building an end to end MLOps pipeline is beyond the scope of this hack, however if you're interested in that, we have another [gHack](https://ghacks.dev/hacks/mlops-on-gcp) specifically designed for it.\n", "\n", "Training a model with BigQuery is quite trivial, you can stick to the defaults for most of the parameters, but see the [docs](https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-create-glm) for more information. BigQuery even automatically [pre-processes the features](https://cloud.google.com/bigquery/docs/auto-preprocessing)!\n", "\n", "Go ahead and create a new _Logistic Regression_ model in the dataset `dwh` with the name `churn_model`, based on the training data created in the previous step." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HAd5wwJMi9UJ" }, "outputs": [], "source": [ "%%bigquery\n", "# TODO Challenge 6: Create or replace a new Logistic Regression model with BQML" ] }, { "cell_type": "markdown", "metadata": { "id": "LqRjc5e1phye" }, "source": [ "The training should take less than a minute as we're dealing with a small dataset that converges relatively quickly.\n", "\n", "The model is stored in BigQuery, however, it's also possible to [store it in Vertex AI Model Repository](https://cloud.google.com/bigquery/docs/create_vertex) in order to use the rest of the Vertex AI services." ] }, { "cell_type": "markdown", "metadata": { "id": "bxMTGRVzxPfK" }, "source": [ "### Evaluation\n", "\n", "Great, we have a model now, but, how good is it?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2N4cdcV2wzJq" }, "outputs": [], "source": [ "%%bigquery\n", "SELECT\n", " *\n", "FROM\n", " ML.EVALUATE(MODEL `dwh.churn_model`)" ] }, { "cell_type": "markdown", "metadata": { "id": "7UtE3xcN0ejz" }, "source": [ "That doesn't look too bad for the amount of effort that we spent on this (you should see an ROC AUC value of > 0.8)!" ] }, { "cell_type": "markdown", "metadata": { "id": "zmWA9tJUxl9j" }, "source": [ "## Conclusion\n", "\n", "This concludes our data science adventure. With this notebook we've shown how to connect to BigQuery from an interactive environment, use familiar Python libraries and train models using BQML." ] } ], "metadata": { "colab": { "cell_execution_strategy": "setup", "name": "Customer Churn Analysis", "provenance": [] }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 0 }