{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Analyze A/B Test Results\n", "\n", "Understanding the results of an A/B test run by an e-commerce website.\n", "\n", "## Table of Contents\n", "- [Introduction](#intro)\n", "- [Part I - Probability](#probability)\n", "- [Part II - A/B Test](#ab_test)\n", "- [Part III - Regression](#regression)\n", "\n", "\n", "<a id='intro'></a>\n", "### Introduction\n", "\n", "For this project, we will be working to understand the results of an A/B test run by an e-commerce website. Our goal is to work through this notebook to help the company understand if they should implement the new page, keep the old page, or perhaps run the experiment longer to make their decision.\n", "\n", "**As working through this notebook, we follow along in the classroom and answer the corresponding quiz questions associated with each question.** The labels for each classroom concept are provided for each question. This assure we are on the right track as working through the project.\n", "\n", "<a id='probability'></a>\n", "#### Part I - Probability\n", "\n", "To get started, let's import our libraries." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "import random\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "#We are setting the seed to assure you get the same answers on quizzes as we set up\n", "random.seed(42)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`1.` Now, read in the `ab_data.csv` data. Store it in `df`. **Use your dataframe to answer the questions in Quiz 1 of the classroom.**\n", "\n", "a. Read in the dataset and take a look at the top few rows here:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>user_id</th>\n", " <th>timestamp</th>\n", " <th>group</th>\n", " <th>landing_page</th>\n", " <th>converted</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>851104</td>\n", " <td>2017-01-21 22:11:48.556739</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>0</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>804228</td>\n", " <td>2017-01-12 08:01:45.159739</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>0</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>661590</td>\n", " <td>2017-01-11 16:55:06.154213</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>0</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>853541</td>\n", " <td>2017-01-08 18:28:03.143765</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>0</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>864975</td>\n", " <td>2017-01-21 01:52:26.210827</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>1</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " user_id timestamp group landing_page converted\n", "0 851104 2017-01-21 22:11:48.556739 control old_page 0\n", "1 804228 2017-01-12 08:01:45.159739 control old_page 0\n", "2 661590 2017-01-11 16:55:06.154213 treatment new_page 0\n", "3 853541 
2017-01-08 18:28:03.143765 treatment new_page 0\n", "4 864975 2017-01-21 01:52:26.210827 control old_page 1" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# read dataset and get few rows\n", "df = pd.read_csv('ab_data.csv')\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "b. Use the cell below to find the number of rows in the dataset." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(294478, 5)" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# number of rows\n", "df.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "c. The number of unique users in the dataset." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "290584" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# number of unique users\n", "df.user_id.nunique()" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "d. The proportion of users converted." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.11965919355605512" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# proportion of users converted\n", "df.query('converted == 1')['user_id'].count() / df.shape[0]" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "e. The number of times the `new_page` and `treatment` don't match." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3893" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# number of times new_page and treatment don't match\n", "df.query('landing_page == \"new_page\" and group != \"treatment\"').shape[0] + df.query('landing_page != \"new_page\" and group == \"treatment\"').shape[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "f. Do any of the rows have missing values?" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "user_id False\n", "timestamp False\n", "group False\n", "landing_page False\n", "converted False\n", "dtype: bool" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# check any missing values\n", "df.isnull().any()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`2.` For the rows where **treatment** does not match with **new_page** or **control** does not match with **old_page**, we cannot be sure if this row truly received the new or old page. Use **Quiz 2** in the classroom to figure out how we should handle these rows. \n", "\n", "a. Now use the answer to the quiz to create a new dataset that meets the specifications from the quiz. Store your new dataframe in **df2**." 
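, "\n", "*Aside:* an equivalent, more compact way to build **df2** is a single boolean filter; the cells below build it step by step instead. A sketch, assuming `df` as loaded above:\n", "\n", "```python\n", "# keep only the rows where group and landing_page agree\n", "# (treatment <-> new_page, control <-> old_page)\n", "df2 = df[(df['group'] == 'treatment') == (df['landing_page'] == 'new_page')]\n", "```"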
] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# rows where treatment does not match with new_page or control does not match with old_page are mismatch lines\n", "# We remove them !\n", "# Those represent 2 groups :\n", "\n", "# Create a dataframe for group 1: rows for which landing_page is new_page and the group is different from treatment\n", "df_mismatch_1 = df.query('landing_page == \"new_page\" and group != \"treatment\"')\n", "\n", "# Create a dataframe for group 2: rows for which landing_page is different from new_page and group is equal to treatment\n", "df_mismatch_2 = df.query('landing_page != \"new_page\" and group == \"treatment\"')\n", "\n", "# remove group 1 from df based on the indexes, get the result a temporary dataframe\n", "df_temp = df.drop(df_mismatch_1.index, axis=0)\n", "\n", "# remove group 2 from the temporary dataframe based on the indexes, to obtain df2\n", "df2 = df_temp.drop(df_mismatch_2.index, axis=0)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Double Check all of the correct rows were removed - this should be 0\n", "df2[((df2['group'] == 'treatment') == (df2['landing_page'] == 'new_page')) == False].shape[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`3.` Use **df2** and the cells below to answer questions for **Quiz3** in the classroom." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "a. How many unique **user_id**s are in **df2**?" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "290584" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# number of uniques user_ids in df2\n", "df2.user_id.nunique()" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "b. There is one **user_id** repeated in **df2**. What is it?" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>user_id</th>\n", " <th>timestamp</th>\n", " <th>group</th>\n", " <th>landing_page</th>\n", " <th>converted</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>2893</th>\n", " <td>773192</td>\n", " <td>2017-01-14 02:55:59.590927</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>0</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " user_id timestamp group landing_page converted\n", "2893 773192 2017-01-14 02:55:59.590927 treatment new_page 0" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# check duplicated user_ids\n", "df2[df2['user_id'].duplicated() == True]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "c. What is the row information for the repeat **user_id**? 
" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>user_id</th>\n", " <th>timestamp</th>\n", " <th>group</th>\n", " <th>landing_page</th>\n", " <th>converted</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>1899</th>\n", " <td>773192</td>\n", " <td>2017-01-09 05:37:58.781806</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>0</td>\n", " </tr>\n", " <tr>\n", " <th>2893</th>\n", " <td>773192</td>\n", " <td>2017-01-14 02:55:59.590927</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>0</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " user_id timestamp group landing_page converted\n", "1899 773192 2017-01-09 05:37:58.781806 treatment new_page 0\n", "2893 773192 2017-01-14 02:55:59.590927 treatment new_page 0" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# row information for the repeated user_id\n", "df2.query('user_id == 773192')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "d. Remove **one** of the rows with a duplicate **user_id**, but keep your dataframe as **df2**." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# remove one duplicated user_id line\n", "df2.drop([2893], inplace=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`4.` Use **df2** in the cells below to answer the quiz questions related to **Quiz 4** in the classroom.\n", "\n", "a. What is the probability of an individual converting regardless of the page they receive?" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.11959708724499628" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# probability of an individual converting\n", "df2.query('converted == 1').shape[0] / df2.shape[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "b. Given that an individual was in the `control` group, what is the probability they converted?" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.1203863045004612" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# probability of an individual in the \"control\" group, to be converted\n", "df_control = df2[df2['group'] == \"control\"]\n", "df_control.query('converted == 1').shape[0] / df_control.shape[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "c. Given that an individual was in the `treatment` group, what is the probability they converted?" 
] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.11880806551510564" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# probability of an individual in the \"treatment\" group to be converted\n", "df_treatment = df2[df2['group'] == \"treatment\"]\n", "df_treatment.query('converted == 1').shape[0] / df_treatment.shape[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "d. What is the probability that an individual received the new page?" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.5000619442226688" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# probability of an individual received the new page\n", "df2.query('landing_page == \"new_page\"').shape[0] / df2.shape[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "e. Consider your results from parts (a) through (d) above, and explain below whether you think there is sufficient evidence to conclude that the new treatment page leads to more conversions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Observation and response\n", ">**For an individual in the control group and for another one in the treatment group, the probability for them to be converted are too close. Consequently, there is not enough evidence to conclude that the new treatment page leads to more conversions**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='ab_test'></a>\n", "### Part II - A/B Test\n", "\n", "Notice that because of the time stamp associated with each event, you could technically run a hypothesis test continuously as each observation was observed. \n", "\n", "However, then the hard question is do you stop as soon as one page is considered significantly better than another or does it need to happen consistently for a certain amount of time? How long do you run to render a decision that neither page is better than another? \n", "\n", "These questions are the difficult parts associated with A/B tests in general. \n", "\n", "\n", "`1.` For now, consider you need to make the decision just based on all the data provided. If you want to assume that the old page is better unless the new page proves to be definitely better at a Type I error rate of 5%, what should your null and alternative hypotheses be? You can state your hypothesis in terms of words or in terms of **$p_{old}$** and **$p_{new}$**, which are the converted rates for the old and new pages." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Observation and response\n", "\n", ">Null Hypothesis : **$p_{old}$** >= **$p_{new}$** \n", "Alternative Hypothesis : **$p_{old}$** < **$p_{new}$**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`2.` Assume under the null hypothesis, $p_{new}$ and $p_{old}$ both have \"true\" success rates equal to the **converted** success rate regardless of page - that is $p_{new}$ and $p_{old}$ are equal. Furthermore, assume they are equal to the **converted** rate in **ab_data.csv** regardless of the page. <br><br>\n", "\n", "Use a sample size for each page equal to the ones in **ab_data.csv**. <br><br>\n", "\n", "Perform the sampling distribution for the difference in **converted** between the two pages over 10,000 iterations of calculating an estimate from the null. <br><br>\n", "\n", "Use the cells below to provide the necessary parts of this simulation. 
If this doesn't make complete sense right now, don't worry - you are going to work through the problems below to complete this problem. You can use **Quiz 5** in the classroom to make sure you are on the right track.<br><br>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "a. What is the **conversion rate** for $p_{new}$ under the null? " ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.11965919355605512" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# conversion rate under the null\n", "p_new = df.query('converted == 1').shape[0] / df.shape[0]\n", "p_new" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "b. What is the **conversion rate** for $p_{old}$ under the null? <br><br>" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.11965919355605512" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "p_old = df.query('converted == 1').shape[0] / df.shape[0]\n", "p_old" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "c. What is $n_{new}$, the number of individuals in the treatment group?" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "145310" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# individuals in the treatment group\n", "n_new = df2.query('group == \"treatment\"').shape[0]\n", "n_new" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "d. What is $n_{old}$, the number of individuals in the control group?" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "145274" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# individuals in the control group\n", "n_old = df2.query('group == \"control\"').shape[0]\n", "n_old" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "e. Simulate $n_{new}$ transactions with a conversion rate of $p_{new}$ under the null. Store these $n_{new}$ 1's and 0's in **new_page_converted**." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "# simulate n_new transactions with a conversion rate of p_new\n", "new_page_converted = df.sample(n_new, replace=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "f. Simulate $n_{old}$ transactions with a conversion rate of $p_{old}$ under the null. Store these $n_{old}$ 1's and 0's in **old_page_converted**." ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "# simulate n_old transactions with a conversion rate of p_old\n", "old_page_converted = df.sample(n_old, replace=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "g. Find $p_{new}$ - $p_{old}$ for your simulated values from part (e) and (f)." ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "-2.928976242783099e-05" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# compute p_new - p_old. 
This is one simulated draw under the null, not the observed statistic from the data\n", "sim_p_new_p_old = (new_page_converted.query('converted == 1').shape[0] / new_page_converted.shape[0]) - (old_page_converted.query('converted == 1').shape[0] / old_page_converted.shape[0])\n", "sim_p_new_p_old" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Observation\n", "> **It is tempting to conclude from this value that $p_{old}$ is higher than $p_{new}$, but this is a single simulated statistic under the null; one draw tells us nothing by itself. We need the full sampling distribution, built below.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "h. Create 10,000 $p_{new}$ - $p_{old}$ values using the same simulation process you used in parts (a) through (g) above. Store all 10,000 values in a NumPy array called **p_diffs**." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "# create the sampling distribution of the p_new - p_old difference\n", "p_diffs = []\n", "\n", "for _ in range(10000):\n", "    # simulate n_new and n_old transactions respectively with p_new and p_old under the null\n", "    new_page_c = df.sample(n_new, replace=True)\n", "    old_page_c = df.sample(n_old, replace=True)\n", "    \n", "    # compute p_new and p_old for the simulation\n", "    p_new_sim = new_page_c.query('converted == 1').shape[0] / new_page_c.shape[0]\n", "    p_old_sim = old_page_c.query('converted == 1').shape[0] / old_page_c.shape[0]\n", "    \n", "    # append the difference to the p_diffs list\n", "    p_diffs.append(p_new_sim - p_old_sim)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "# convert p_diffs to a NumPy array\n", "p_diffs = np.array(p_diffs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "i. Plot a histogram of the **p_diffs**. Does this plot look like what you expected? Use the matching problem in the classroom to ensure you fully understand what was computed here."
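, "\n", "*Aside on parts e. through h.:* the cells above simulate under the null by bootstrap-resampling rows of `df`. A more literal reading of \"simulate $n$ 1's and 0's with rate $p$\" uses `numpy.random.binomial`, which is also much faster. A sketch, assuming `p_new`, `p_old`, `n_new` and `n_old` as computed above (`p_diffs_alt` is a hypothetical name):\n", "\n", "```python\n", "# draw 10,000 simulated conversion counts per page, then convert counts to rates\n", "new_converted_sim = np.random.binomial(n_new, p_new, 10000) / n_new\n", "old_converted_sim = np.random.binomial(n_old, p_old, 10000) / n_old\n", "p_diffs_alt = new_converted_sim - old_converted_sim\n", "```"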
] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYAAAAD8CAYAAAB+UHOxAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAAEkxJREFUeJzt3X+s3fV93/HnqyaQbUmLCRfm2mammScV/ijJLMKU/cFKCwaqmEqLRKS1VorkSgMp0TpNpvmDLh0TaddQRUupaLHqdEkd1iSKlXijLktVVWrAJiUEQ5lvgIYbe9idCUkVjc30vT/Ox83BnHvvuT/PtT/Ph/TV+Z739/P9fj+fr6/u635/nONUFZKk/vzQpDsgSZoMA0CSOmUASFKnDABJ6pQBIEmdMgAkqVMGgCR1ygCQpE4ZAJLUqQsm3YG5XHrppbVly5ZJd0OSzilPPPHEX1fV1Hzt1nQAbNmyhcOHD0+6G5J0TknyV+O08xKQJHXKAJCkThkAktSpeQMgyVuTPJ7k60mOJPn3rX5lkseSHE3y2SQXtvpF7f10W75laFt3t/pzSW5aqUFJkuY3zhnAa8BPVtVPANcA25NcB3wMuL+qtgKvAHe09ncAr1TVPwbub+1IchVwO3A1sB34rSTrlnMwkqTxzRsANfA37e1b2lTATwJ/2Op7gdva/I72nrb8hiRp9X1V9VpVvQBMA9cuyygkSQs21j2AJOuSPAmcAA4C3wS+U1WnW5MZYGOb3wi8BNCWvwq8Y7g+Yh1J0iobKwCq6vWqugbYxOCv9h8f1ay9ZpZls9XfIMmuJIeTHD558uQ43ZMkLcKCngKqqu8AfwJcB1yc5MwHyTYBx9r8DLAZoC3/EeDUcH3EOsP7eLCqtlXVtqmpeT/IJklapHk/CZxkCvh/VfWdJH8P+CkGN3a/AvxLYB+wE/hiW2V/e//nbfn/qKpKsh/4TJKPAz8KbAUeX+bxqDNbdn95Yvt+8b5bJ7ZvaTmM81UQG4C97YmdHwIerqovJXkG2JfkPwB/ATzU2j8E/H6SaQZ/+d8OUFVHkjwMPAOcBu6sqteXdziSpHHNGwBV9RTwrhH15xnxFE9V/R/g/bNs617g3oV3U5K03PwksCR1ygCQpE4ZAJLUKQNAkjplAEhSpwwASeqUASBJnTIAJKlTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqcMAEnqlAEgSZ0yACSpUwaAJHXKAJCkThkAktQpA0CSOmUASFKnDABJ6pQBIEmdMgAkqVMGgCR1ygCQpE7NGwBJNif5SpJnkxxJ8qFW/5Uk307yZJtuGVrn7iTTSZ5LctNQfXurTSfZvTJDkiSN44Ix2pwGfqmqvpbk7cATSQ62ZfdX1X8abpzkKuB24GrgR4E/TvJP2uJPAj8NzACHkuyvqmeWYyCSpIWZNwCq6jhwvM1/L8mzwMY5VtkB7Kuq14AXkkwD17Zl01X1PECSfa2tASBJE7CgewBJtgDvAh5rpbuSPJVkT5L1rbYReGlotZlWm60uSZqAsQMgyduAzwEfrqrvAg8A7wSuYXCG8Btnmo5Yveaon72fXUkOJzl88uTJcbsnSVqgsQIgyVsY/PL/dFV9HqCqXq6q16vqb4Hf4QeXeWaAzUOrbwKOzVF/g6p6sKq2VdW2qamphY5HkjSmcZ4CCvAQ8GxVfXyovmGo2c8CT7f5/cDtSS5KciWwFXgcOARsTXJlkgsZ3CjevzzDkCQt1DhPAb0X+DngG0mebLVfBj6Q5BoGl3FeBH4RoKqOJHmYwc3d08CdVfU6QJK7gEeAdcCeqjqyjGORJC3AOE8B/Rmjr98fmGOde4F7R9QPzLWeJGn1+ElgSeqUASBJnTIAJKlTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqfG+SSwNK8tu7886S5IWiDPACSpUwaAJHXKAJCkThkAktQpA0CSOuVTQNIiTerJpxfvu3Ui+9X5xzMASeqUASBJnTIAJKlTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqcMAEnqlAEgSZ0yACSpU/MGQJLNSb6S5NkkR5J8qNUvSXIwydH2ur7Vk+QTSaaTPJXk3UPb2tnaH02yc+WGJUmazzhnAKeBX6qqHweuA+5MchWwG3i0qrYCj7b3ADcDW9u0C3gABoEB3AO8B7gWuOdMaEiSVt+8AVBVx6vqa23+e8CzwEZgB7C3NdsL3NbmdwCfqoGvAhcn2QDcBBysqlNV9QpwENi+rKORJI1tQfcAkmwB3gU8BlxeVcdhEBLAZa3ZRuClodVmWm22uiRpAsYOgCRvAz4HfLiqvjtX0xG1mqN+9n52JTmc5PDJkyfH7Z4kaYHGCoAkb2Hwy//TVfX5Vn65XdqhvZ5o9Rlg89Dqm4Bjc9TfoKoerKptVbVtampqIWORJC3AOE8BBXgIeLaqPj60aD9w5kmencAXh+o/354Gug54tV0iegS4Mcn6dvP3xlaTJE3AOP8l5HuBnwO+keTJVvtl4D7g4SR3AN8C3t+WHQBuAaaB7wMfBKiqU0l+FTjU2n20qk4tyygkSQs2bwBU1Z8x+vo9wA0j2hdw5yzb2gPsWUgHJUkrw08CS1KnDABJ6pQBIEmdMgAkqVMGgCR1ygCQpE4ZAJLUKQNAkjplAEhSpwwASeqUASBJnTIAJKlTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqcMAEnqlAEgSZ0yACSpUwaAJHXKAJCkThkAktQpA0CSOmUASFKnDABJ6tS8AZBkT5ITSZ4eqv1Kkm8nebJNtwwtuzvJdJLnktw0VN/eatNJdi//UCRJCzHOGcDvAdtH1O+vqmvadAAgyVXA7cDVbZ3fSrIuyTrgk8DNwFXAB1pbSdKEXDBfg6r60yRbxtzeDmBfVb0GvJBkGri2LZuuqucBkuxrbZ9ZcI8lSctiKfcA7kryVLtEtL7VNgIvDbWZabXZ6m+SZFeSw0kOnzx5cgndkyTNZbEB8ADwTuAa4DjwG62eEW1rjvqbi1UPVtW2qto2NTW1yO5JkuYz7yWgUarq5TPzSX4H+FJ7OwNsHmq6CTjW5merS5ImYFFnAEk2DL39WeDME0L7gduTXJTkSmAr8DhwCNia5MokFzK4Ubx/8d2WJC3VvGcASf4AuB64NMkMcA9wfZJrGFzGeRH4RYCqOpLkYQY3d08Dd1bV6207dwGPAOuAPVV1ZNlHI0ka2zhPAX1gRPmhOdrfC9w7on4AOLCg3kmSVoyfBJakThkAktQpA0CSOmUASFKnDABJ6pQBIEmdMgAkqVMGgCR1ygCQpE4ZAJLUKQNAkjplAEhSpwwASeqUASBJnTIAJKlTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqcMAEnqlAEgSZ0yACSpUwaAJHXKAJCkThkAktSpeQMgyZ4kJ5I8PVS7JMnBJ
Efb6/pWT5JPJJlO8lSSdw+ts7O1P5pk58oMR5I0rnHOAH4P2H5WbTfwaFVtBR5t7wFuBra2aRfwAAwCA7gHeA9wLXDPmdCQJE3GvAFQVX8KnDqrvAPY2+b3ArcN1T9VA18FLk6yAbgJOFhVp6rqFeAgbw4VSdIqumCR611eVccBqup4kstafSPw0lC7mVabrf4mSXYxOHvgiiuuWGT3+rVl95cn3QVJ54jlvgmcEbWao/7mYtWDVbWtqrZNTU0ta+ckST+w2AB4uV3aob2eaPUZYPNQu03AsTnqkqQJWewloP3ATuC+9vrFofpdSfYxuOH7artE9AjwH4du/N4I3L34bkv9muRlvhfvu3Vi+9bymzcAkvwBcD1waZIZBk/z3Ac8nOQO4FvA+1vzA8AtwDTwfeCDAFV1KsmvAodau49W1dk3liVJq2jeAKiqD8yy6IYRbQu4c5bt7AH2LKh3kqQV4yeBJalTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqcMAEnqlAEgSZ0yACSpUwaAJHXKAJCkThkAktQpA0CSOmUASFKnDABJ6pQBIEmdMgAkqVMGgCR1ygCQpE4ZAJLUKQNAkjplAEhSpwwASeqUASBJnTIAJKlTSwqAJC8m+UaSJ5McbrVLkhxMcrS9rm/1JPlEkukkTyV593IMQJK0OMtxBvAvquqaqtrW3u8GHq2qrcCj7T3AzcDWNu0CHliGfUuSFmklLgHtAPa2+b3AbUP1T9XAV4GLk2xYgf1Lksaw1AAo4I+SPJFkV6tdXlXHAdrrZa2+EXhpaN2ZVpMkTcAFS1z/vVV1LMllwMEkfzlH24yo1ZsaDYJkF8AVV1yxxO5JkmazpDOAqjrWXk8AXwCuBV4+c2mnvZ5ozWeAzUOrbwKOjdjmg1W1raq2TU1NLaV7kqQ5LDoAkvyDJG8/Mw/cCDwN7Ad2tmY7gS+2+f3Az7enga4DXj1zqUiStPqWcgnocuALSc5s5zNV9d+THAIeTnIH8C3g/a39AeAWYBr4PvDBJexbkrREiw6Aqnoe+IkR9f8N3DCiXsCdi92fJGl5+UlgSeqUASBJnTIAJKlTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqcMAEnqlAEgSZ0yACSpUwaAJHVqqf8hjEbYsvvLk+6CJM3LMwBJ6pQBIEmd8hKQpLFN6vLmi/fdOpH9nu88A5CkThkAktQpA0CSOmUASFKnDABJ6pQBIEmdMgAkqVMGgCR1ygCQpE4ZAJLUqVUPgCTbkzyXZDrJ7tXevyRpYFW/CyjJOuCTwE8DM8ChJPur6pmV2J9fyyxJs1vtL4O7FpiuqucBkuwDdgArEgCSzg+T/GPufP4iutUOgI3AS0PvZ4D3rHIfJGls5/M3oK52AGRErd7QINkF7Gpv/ybJc8u4/0uBv17G7Z2rPA4DHocBj8PAmjoO+diSVv9H4zRa7QCYATYPvd8EHBtuUFUPAg+uxM6THK6qbSux7XOJx2HA4zDgcRjo8Tis9lNAh4CtSa5MciFwO7B/lfsgSWKVzwCq6nSSu4BHgHXAnqo6spp9kCQNrPp/CVlVB4ADq73fZkUuLZ2DPA4DHocBj8NAd8chVTV/K0nSecevgpCkTp0XAZDkkiQHkxxtr+tnabeztTmaZOdQ/Z8m+Ub7eopPJMlZ6/3bJJXk0pUey1Ks1HFI8utJ/jLJU0m+kOTi1RrTuOb7ipEkFyX5bFv+WJItQ8vubvXnktw07jbXouU+Dkk2J/lKkmeTHEnyodUbzeKtxM9DW7YuyV8k+dLKj2IVVNU5PwG/Buxu87uBj41ocwnwfHtd3+bXt2WPA/+MwecU/htw89B6mxnctP4r4NJJj3USxwG4EbigzX9s1HYnPO51wDeBHwMuBL4OXHVWm38N/Habvx34bJu/qrW/CLiybWfdONtca9MKHYcNwLtbm7cD/7PH4zC03r8BPgN8adLjXI7pvDgDYPB1Envb/F7gthFtbgIOVtWpqnoFOAhsT7IB+OGq+vMa/At/6qz17wf+HWd9YG2NWpHjUFV/VFWn2/pfZfD5jbXk775ipKr+L3DmK0aGDR+bPwRuaGc4O4B9VfVaVb0ATLftjbPNtWbZj0NVHa+qrwFU1feAZxl8on8tW4mfB5JsAm4FfncVxrAqzpcAuLyqjgO018tGtBn1NRQb2zQzok6S9wHfrqqvr0SnV8CKHIez/AKDs4O1ZLYxjWzTwuxV4B1zrDvONtealTgOf6ddJnkX8Ngy9nklrNRx+E0Gfwz+7fJ3eTJW/THQxUryx8A/HLHoI+NuYkStZqsn+ftt2zeOuf1VsdrH4ax9fwQ4DXx6zH2tlnn7Pkeb2eqj/jha62eBK3EcBislbwM+B3y4qr676B6ujmU/Dkl+BjhRVU8kuX6J/VszzpkAqKqfmm1ZkpeTbKiq4+1SxokRzWaA64febwL+pNU3nVU/BryTwTXAr7d7oZuAryW5tqr+1xKGsiQTOA5ntr0T+BnghnaJaC2Z9ytGhtrMJLkA+BHg1DzrzrfNtWZFjkOStzD45f/pqvr8ynR9Wa3EcXgf8L4ktwBvBX44yX+pqn+1MkNYJZO+CbEcE/DrvPHm56+NaHMJ8AKDG5/r2/wlbdkh4Dp+cPPzlhHrv8javwm8IscB2M7gK7unJj3GWcZ9AYOb2Vfyg5t+V5/V5k7eeNPv4TZ/NW+86fc8g5uI825zrU0rdBzC4H7Qb056fJM8Dmetez3nyU3giXdgmf7B3wE8Chxtr2d+oW0Dfneo3S8wuKkzDXxwqL4NeJrBHf//TPuA3Fn7OBcCYEWOQ2v3EvBkm3570mMdMfZbGDyh8k3gI632UeB9bf6twH9tY3kc+LGhdT/S1nuONz4B9qZtrvVpuY8D8M8ZXBp5aujf/01/IK21aSV+HoaWnzcB4CeBJalT58tTQJKkBTIAJKlTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqcMAEnq1P8H1GGS05fYqXgAAAAASUVORK5CYII=\n", "text/plain": [ "<matplotlib.figure.Figure at 0x7f50f2892828>" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "# plot p_diffs\n", "plt.hist(p_diffs);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Observation and response\n", ">**This is a normal distribution as expected. It means that our sample here is large enough. 
From a practical reasoning the outcome is clear, but we still need to confirm the Null hypothesis using the p-value.**" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "# simulate distribution under the null hypothesis\n", "null_vals = np.random.normal(0, p_diffs.std(), p_diffs.size)" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYAAAAD8CAYAAAB+UHOxAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAAEnpJREFUeJzt3X+MXeV95/H3pyaQ3U1aTBlYaps1Tb1VzR8lWYuwyv7Bli4YiGIqbSQjbWOlSK60ICXaVpXT/EE3XSRot6GKNqWiBdXpJnXYJlEs8C51aaKqUvlhUkIwlGUCNEzsxW5NSKpo2XX2u3/cx+Fi7szc8cydO+nzfklH99zvec45z3k8ms+cH/c6VYUkqT8/NO0OSJKmwwCQpE4ZAJLUKQNAkjplAEhSpwwASeqUASBJnTIAJKlTBoAkdeqsaXdgIeeff35t3rx52t2Qxvfss4PXn/zJ6fZDXXv88cf/tqpmFmu3pgNg8+bNHDp0aNrdkMZ35ZWD1y9/eZq9UOeS/M047bwEJEmdMgAkqVMGgCR1atEASPLWJI8m+WqSw0n+Y6tfkuSRJM8l+WySs1v9nPZ+ti3fPLStj7T6s0mumdRBSZIWN84ZwGvAz1TVTwOXAduTXAHcAdxZVVuAV4CbWvubgFeq6ieAO1s7kmwFdgKXAtuB30mybiUPRpI0vkUDoAb+vr19S5sK+Bngj1t9L3BDm9/R3tOWX5Ukrb6vql6rqheAWeDyFTkKSdKSjXUPIMm6JE8Ax4CDwNeBb1XVydZkDtjQ5jcALwG05a8CPzpcH7GOJGmVjRUAVfW9qroM2Mjgr/afGtWsvWaeZfPV3yDJ7iSHkhw6fvz4ON2TJJ2BJT0FVFXfAr4MXAGcm+TUB8k2Akfa/BywCaAt/xHgxHB9xDrD+7i7qrZV1baZmUU/yCZJOkOLfhI4yQzwf6vqW0n+EfCzDG7sfgn4t8A+YBfwxbbK/vb+L9vyP6uqSrIf+EySjwM/BmwBHl3h45FWzeY9D7yptu/5vwNg54hlK+XF26+f2LbVl3G+CuIiYG97YueHgPuq6v4kTwP7kvwn4K+Ae1r7e4A/TDLL4C//nQBVdTjJfcDTwEng5qr63soejiRpXIsGQFU9CbxzRP15RjzFU1X/G3j/PNu6Dbht6d2UJK00PwksSZ0yACSpUwaAJHXKAJCkThkAktQpA0CSOmUASFKnDABJ6pQBIEmdMgAkqVMGgCR1ygCQpE4ZAJLUKQNAkjplAEhSpwwASeqUASBJnTIAJKlTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqcMAEnqlAEgSZ0yACSpU4sGQJJNSb6U5Jkkh5N8qNV/Lck3kzzRpuuG1vlIktkkzya5Zqi+vdVmk+yZzCFJksZx1hhtTgK/VFVfSfJ24PEkB9uyO6vqPw83TrIV2AlcCvwY8KdJ/nlb/Eng3wBzwGNJ9lfV0ytxIJKkpVk0AKrqKHC0zX8nyTPAhgVW2QHsq6rXgBeSzAKXt2WzVfU8QJJ9ra0BIElTMM4ZwPcl2Qy8E3gEeA9wS5IPAIcYnCW8wiAcHh5abY7XA+Ol0+rvHrGP3cBugIsvvngp3VOHNu95YNpdkH5gjX0TOMnbgM8BH66qbwN3Ae8ALmNwhvBbp5qOWL0WqL+xUHV3VW2rqm0zMzPjdk+StERjnQEkeQuDX/6frqrPA1TVy0PLfw+4v72dAzYNrb4RONLm56tLklbZOE8BBbgHeKaqPj5Uv2io2c8BT7X5/cDOJOckuQTYAjwKPAZsSXJJkrMZ3CjevzKHIUlaqnHOAN4D/DzwtSRPtNqvAjcmuYzBZZwXgV8EqKrDSe5jcHP3JHBzVX0PIMktwIPAOuDeqjq8gsciSVqCcZ4C+gtGX78/sMA6twG3jagfWGg9SdLq8ZPAktQpA0CSOmUASFKnDABJ6pQBIEmdMgAkqVMGgCR1ygCQpE4ZAJLUKQNAkjplAEhSpwwASeqUASBJnTIAJKlTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqcMAEnqlAEgSZ0yACSpUwaAJHXKAJCkThkAktQpA0CSOrVoACTZlORLSZ5JcjjJh1r9vCQHkzzXXte3epJ8IslskieTvGtoW7ta++eS7JrcYUmSFjPOGcBJ4Jeq6qeAK4Cbk2wF9gAPVdUW4KH2HuBaYEubdgN3wSAwgFuBdwOXA7eeCg1J0upbNACq6mhVfaXNfwd4BtgA7AD2tmZ7gRva/A7gUzXwMHBukouAa4CDVXWiql4BDgLbV/RoJEljW9I9gCSbgXcCjwAXVtVRGIQEcEFrtgF4aWi1uVabr376PnYnOZTk0PHjx5fSPUnSEowdAEneBnwO+HBVfXuhpiNqtUD9jYWqu6tqW1Vtm5mZGbd7kqQlGisAkryFwS//T1fV51v55XZph/Z6rNXngE1Dq28EjixQlyRNwThPAQW4B3imqj4+tGg/cOpJnl3AF4fqH2hPA10BvNouET0IXJ1kfbv5e3WrSZKm4Kwx2rwH+Hnga0meaLVfBW4H7ktyE/AN4P1t2QHgOmAW+C7wQYCqOpHk14HHWruPVdWJFTkKSdKSLRoAVfUXjL5+D3DViPYF3DzPtu4F7l1KByVJk+EngSWpU+NcApK0hmze88DU9v3i7ddPbd9aeZ4BSFKnDABJ6pQBIEmdMgAkqVMGgCR1ygCQpE4ZAJLUKQNAkjplAEhSpwwASeqUASBJnTIAJKlTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqcMAEnqlAEgSZ0yACSpUwaAJHXKAJCkThkAktQpA0CSOrVoACS5N8mxJE8N1X4tyTeTPNGm64aWfSTJbJJnk1wzVN/earNJ9qz8oUiSlmKcM4A/ALaPqN9ZVZe16QBAkq3ATuDSts7vJFmXZB3wSeBaYCtwY2srSZqSsxZrUFV/nmTzmNvbAeyrqteAF5LMApe3ZbNV9TxAkn2t7dNL7rEkaUUs5x7ALUmebJeI1rfaBuCloTZzrTZf/U2S7E5yKMmh48ePL6N7kqSFnGkA3AW8A7gMOAr8VqtnRNtaoP7mYtXdVbWtqrbNzMycYfckSYtZ9BLQKFX18
qn5JL8H3N/ezgGbhppuBI60+fnqkqQpOKMzgCQXDb39OeDUE0L7gZ1JzklyCbAFeBR4DNiS5JIkZzO4Ubz/zLstSVquRc8AkvwRcCVwfpI54FbgyiSXMbiM8yLwiwBVdTjJfQxu7p4Ebq6q77Xt3AI8CKwD7q2qwyt+NJKksY3zFNCNI8r3LND+NuC2EfUDwIEl9U6SNDF+EliSOmUASFKnDABJ6pQBIEmdMgAkqVMGgCR1ygCQpE4ZAJLUKQNAkjplAEhSp87o20Cl023e88C0uyBpiTwDkKROGQCS1CkDQJI6ZQBIUqcMAEnqlAEgSZ0yACSpUwaAJHXKAJCkThkAktQpA0CSOmUASFKnDABJ6pQBIEmdWjQAktyb5FiSp4Zq5yU5mOS59rq+1ZPkE0lmkzyZ5F1D6+xq7Z9LsmsyhyNJGtc4ZwB/AGw/rbYHeKiqtgAPtfcA1wJb2rQbuAsGgQHcCrwbuBy49VRoSJKmY9EAqKo/B06cVt4B7G3ze4EbhuqfqoGHgXOTXARcAxysqhNV9QpwkDeHiiRpFZ3pPYALq+ooQHu9oNU3AC8NtZtrtfnqkqQpWembwBlRqwXqb95AsjvJoSSHjh8/vqKdkyS97kwD4OV2aYf2eqzV54BNQ+02AkcWqL9JVd1dVduqatvMzMwZdk+StJgzDYD9wKkneXYBXxyqf6A9DXQF8Gq7RPQgcHWS9e3m79WtJkmakrMWa5Dkj4ArgfOTzDF4mud24L4kNwHfAN7fmh8ArgNmge8CHwSoqhNJfh14rLX7WFWdfmNZkrSKFg2AqrpxnkVXjWhbwM3zbOde4N4l9U6SNDF+EliSOmUASFKnDABJ6pQBIEmdMgAkqVMGgCR1ygCQpE4ZAJLUKQNAkjplAEhSpwwASeqUASBJnTIAJKlTBoAkdWrRr4OWpFM273lgKvt98fbrp7Lff+g8A5CkThkAktQpA0CSOmUASFKnDABJ6pQBIEmdMgAkqVMGgCR1ygCQpE4ZAJLUKQNAkjq1rABI8mKSryV5IsmhVjsvycEkz7XX9a2eJJ9IMpvkySTvWokDkCSdmZU4A/jXVXVZVW1r7/cAD1XVFuCh9h7gWmBLm3YDd63AviVJZ2gSl4B2AHvb/F7ghqH6p2rgYeDcJBdNYP+SpDEsNwAK+JMkjyfZ3WoXVtVRgPZ6QatvAF4aWneu1SRJU7Dc/w/gPVV1JMkFwMEkf71A24yo1ZsaDYJkN8DFF1+8zO5JkuazrDOAqjrSXo8BXwAuB14+dWmnvR5rzeeATUOrbwSOjNjm3VW1raq2zczMLKd7kqQFnHEAJPknSd5+ah64GngK2A/sas12AV9s8/uBD7Snga4AXj11qUiStPqWcwnoQuALSU5t5zNV9T+SPAbcl+Qm4BvA+1v7A8B1wCzwXeCDy9i3JGmZzjgAqup54KdH1P8OuGpEvYCbz3R/kqSV5SeBJalTBoAkdcoAkKROLfdzAFpjNu95YNpdkPQDwjMASeqUASBJnTIAJKlTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqcMAEnqlAEgSZ0yACSpUwaAJHXKAJCkThkAktQpA0CSOmUASFKn/A9hJK150/yPjl68/fqp7XvSPAOQpE4ZAJLUKQNAkjplAEhSp7wJPAHTvGElSeNa9TOAJNuTPJtkNsme1d6/JGlgVQMgyTrgk8C1wFbgxiRbV7MPkqSB1b4EdDkwW1XPAyTZB+wAnl7lfkjSWKZ1SXc1Pn+w2gGwAXhp6P0c8O5J7cxr8ZI0v9UOgIyo1RsaJLuB3e3t3yd5duK9Wr7zgb+ddifWgO7H4V+emrnjvd2PReM4DCx5HHLHsvb3z8ZptNoBMAdsGnq/ETgy3KCq7gbuXs1OLVeSQ1W1bdr9mDbH4XWOxYDjMLBWx2G1nwJ6DNiS5JIkZwM7gf2r3AdJEqt8BlBVJ5PcAjwIrAPurarDq9kHSdLAqn8QrKoOAAdWe78T9gN1yWqCHIfXORYDjsPAmhyHVNXirSRJ/+D4XUCS1CkDYAFJzktyMMlz7XX9PO12tTbPJdk1VP8XSb7WvvbiE0ly2nq/nKSSnD/pY1mOSY1Dkt9M8tdJnkzyhSTnrtYxLcViX1+S5Jwkn23LH0myeWjZR1r92STXjLvNtWilxyHJpiRfSvJMksNJPrR6R7M8k/iZaMvWJfmrJPdP/iiAqnKaZwJ+A9jT5vcAd4xocx7wfHtd3+bXt2WPMng0PMB/B64dWm8Tg5vhfwOcP+1jncY4AFcDZ7X5O0Ztd9oTg4cVvg78OHA28FVg62lt/j3wu21+J/DZNr+1tT8HuKRtZ90421xr04TG4SLgXa3N24H/udbHYVJjMbTefwA+A9y/GsfiGcDCdgB72/xe4IYRba4BDlbViap6BTgIbE9yEfDDVfWXNfiX/dRp698J/AqnfRBujZrIOFTVn1TVybb+www+F7LWfP/rS6rq/wCnvr5k2PD4/DFwVTvL2QHsq6rXquoFYLZtb5xtrjUrPg5VdbSqvgJQVd8BnmHwbQFr3SR+JkiyEbge+P1VOAbAS0CLubCqjgK01wtGtBn19RYb2jQ3ok6S9wHfrKqvTqLTEzCRcTjNLzA4O1hr5juukW1aoL0K/OgC646zzbVmEuPwfe0SyTuBR1awz5MyqbH4bQZ/FP6/le/yaN3/fwBJ/hT4pyMWfXTcTYyo1Xz1JP+4bfvqMbe/KlZ7HE7b90eBk8Cnx9zXalq0/wu0ma8+6g+vtX4mOIlxGKyUvA34HPDhqvr2Gfdw9az4WCR5L3Csqh5PcuUy+ze27gOgqn52vmVJXk5yUVUdbZcyjo1oNgdcOfR+I/DlVt94Wv0I8A4G1/6+2u6FbgS+kuTyqvpfyziUZZnCOJza9i7gvcBV7RLRWrPo15cMtZlLchbwI8CJRdZdbJtrzUTGIclbGPzy/3RVfX4yXV9xkxiL9wHvS3Id8Fbgh5P816r6d5M5hGbaN1TW8gT8Jm+8+fkbI9qcB7zA4Mbn+jZ/Xlv2GHAFr9/8vG7E+i+y9m8CT2QcgO0Mvgp8ZtrHuMCxn8XghvYlvH7D79LT2tzMG2/43dfmL+WNN/yeZ3ADcdFtrrVpQuMQBveEfnvaxzftsTht3StZpZvAUx/MtTwxuGb3EPBcez31C20b8PtD7X6Bwc2cWeCDQ/VtwFMM7vT/F9oH707bxw9CAExkHFq7l4An2vS70z7WeY7/OgZPqHwd+GirfQx4X5t/K/Df2vE8Cvz40Lofbes9yxufAnvTNtf6tNLjAPwrBpdFnhz6GXjTH0lrcZrEz8TQ8lULAD8JLEmd8ikgSeqUASBJnTIAJKlTBoAkdcoAkKROGQCS1CkDQJI6ZQBIUqf+P1zIj0UZgIR9AAAAAElFTkSuQmCC\n", "text/plain": [ "<matplotlib.figure.Figure at 0x7f50f1d83d30>" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" 
} ], "source": [ "# plot null distribution\n", "plt.hist(null_vals);\n", "\n", "# plot line for observed statistic\n", "plt.axvline(x=sim_p_new_p_old, color='red');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "j. What proportion of the **p_diffs** are greater than the actual difference observed in **ab_data.csv**?" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.51619999999999999" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# compute the p-value\n", "(null_vals > sim_p_new_p_old).mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "k. Please explain using the vocabulary you've learned in this course what you just computed in part **j.** What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Observation and response\n", ">**On part (g) the observed statistic gave the impression that p_new (converted rate for the new page) is less than p_old (converted rate for the old page), which means that the Null hypothesis might be true. \n", "Now, clearly we have a high p-value. So we choose the Null hypothesis. This choice is also because the p-value is also higher that the 5% error rate**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "l. We could also use a built-in to achieve similar results. Though using the built-in might be easier to code, the above portions are a walkthrough of the ideas that are critical to correctly thinking about statistical significance. Fill in the below to calculate the number of conversions for each page, as well as the number of individuals who received each page. Let `n_old` and `n_new` refer the the number of rows associated with the old page and new pages, respectively." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/opt/conda/lib/python3.6/site-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.\n", " from pandas.core import datetools\n" ] } ], "source": [ "import statsmodels.api as sm\n", "\n", "# number of conversions for old page\n", "convert_old = (df2.query('landing_page == \"old_page\" and converted == 1')).shape[0]\n", "\n", "# number of conversions for new page\n", "convert_new = (df2.query('landing_page == \"new_page\" and converted == 1')).shape[0]\n", "\n", "# number of users who landed to the old page\n", "n_old = (df2.query('landing_page == \"old_page\"')).shape[0]\n", "\n", "# number of users who landed to the new page\n", "n_new = (df2.query('landing_page == \"new_page\"')).shape[0] " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "m. Now use `stats.proportions_ztest` to compute your test statistic and p-value. [Here](http://knowledgetack.com/python/statsmodels/proportions_ztest/) is a helpful link on using the built in." 
] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(1.3109241984234394, 0.18988337448195103)" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from statsmodels.stats.proportion import proportions_ztest\n", "\n", "# count: number of success of each sample, here number of conversions for old and new page\n", "counts = np.array([convert_old, convert_new])\n", "\n", "# nobs: number of observations, here number of users landed to old and new page\n", "nobs =np.array([n_old, n_new])\n", "\n", "# compute the statistic and the p-value\n", "stat, pval = proportions_ztest(counts, nobs)\n", "\n", "# print the p-value\n", "# print('{0:0.3f}'.format(pval))\n", "(stat, pval)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts **j.** and **k.**?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Observation and response\n", ">**The p-value is higher than the alpha 0.05, so we fail to reject the Null hypothesis which means the converted rate for the old page is higher or equal to the one for the new page. We reach the same conclusion as previously in (j) and (k). Those results might be because our sample is large enough.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='regression'></a>\n", "### Part III - A regression approach\n", "\n", "`1.` In this final part, you will see that the result you achieved in the A/B test in Part II above can also be achieved by performing regression.<br><br> \n", "\n", "a. Since each row is either a conversion or no conversion, what type of regression should you be performing in this case?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Observation and response\n", ">**I should be performing Logistic regression because we want to predict only 2 possibles outcomes.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "b. The goal is to use **statsmodels** to fit the regression model you specified in part **a.** to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create in df2 a column for the intercept, and create a dummy variable column for which page each user received. Add an **intercept** column, as well as an **ab_page** column, which is 1 when an individual receives the **treatment** and 0 if **control**." 
] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>user_id</th>\n", " <th>timestamp</th>\n", " <th>group</th>\n", " <th>landing_page</th>\n", " <th>converted</th>\n", " <th>intercept</th>\n", " <th>ab_page</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>851104</td>\n", " <td>2017-01-21 22:11:48.556739</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>0</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>804228</td>\n", " <td>2017-01-12 08:01:45.159739</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>0</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>661590</td>\n", " <td>2017-01-11 16:55:06.154213</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>853541</td>\n", " <td>2017-01-08 18:28:03.143765</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>864975</td>\n", " <td>2017-01-21 01:52:26.210827</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>1</td>\n", " <td>1</td>\n", " <td>0</td>\n", " </tr>\n", " <tr>\n", " <th>5</th>\n", " <td>936923</td>\n", " <td>2017-01-10 15:20:49.083499</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>0</td>\n", " </tr>\n", " <tr>\n", " <th>6</th>\n", " <td>679687</td>\n", " <td>2017-01-19 03:26:46.940749</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>1</td>\n", " <td>1</td>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>7</th>\n", " <td>719014</td>\n", " <td>2017-01-17 01:48:29.539573</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>0</td>\n", " </tr>\n", " <tr>\n", " <th>8</th>\n", " <td>817355</td>\n", " <td>2017-01-04 17:58:08.979471</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>1</td>\n", " <td>1</td>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>9</th>\n", " <td>839785</td>\n", " <td>2017-01-15 18:11:06.610965</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>1</td>\n", " <td>1</td>\n", " <td>1</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " user_id timestamp group landing_page converted \\\n", "0 851104 2017-01-21 22:11:48.556739 control old_page 0 \n", "1 804228 2017-01-12 08:01:45.159739 control old_page 0 \n", "2 661590 2017-01-11 16:55:06.154213 treatment new_page 0 \n", "3 853541 2017-01-08 18:28:03.143765 treatment new_page 0 \n", "4 864975 2017-01-21 01:52:26.210827 control old_page 1 \n", "5 936923 2017-01-10 15:20:49.083499 control old_page 0 \n", "6 679687 2017-01-19 03:26:46.940749 treatment new_page 1 \n", "7 719014 2017-01-17 01:48:29.539573 control old_page 0 \n", "8 817355 2017-01-04 17:58:08.979471 treatment new_page 1 \n", "9 839785 2017-01-15 18:11:06.610965 treatment new_page 1 \n", "\n", " intercept ab_page \n", "0 1 0 \n", "1 1 0 \n", "2 1 1 \n", "3 1 1 \n", "4 1 0 \n", 
"5 1 0 \n", "6 1 1 \n", "7 1 0 \n", "8 1 1 \n", "9 1 1 " ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Add an \"intercept\" column to df2\n", "df2['intercept'] = 1\n", "\n", "# Add 2 columns as dummies for \"landing_page\" column\n", "# in \"ab_page\" we have 1 when an individual receives the treatment and 0 otherwise\n", "df2[['ab_page', 'old_page']] = pd.get_dummies(df2['landing_page'])\n", "\n", "# Remove the column \"old_page\" from df2 dataframe\n", "df2 = df2.drop('old_page', axis=1)\n", "\n", "# Get a view of the new df2 dataframe\n", "df2.head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "c. Use **statsmodels** to instantiate your regression model on the two columns you created in part b., then fit the model using the two columns you created in part **b.** to predict whether or not an individual converts. " ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Optimization terminated successfully.\n", " Current function value: 0.366118\n", " Iterations 6\n" ] } ], "source": [ "# Instantiate regression model on \"intercept\" and \"ab_page\"\n", "log_reg = sm.Logit(df2['converted'], df2[['intercept', 'ab_page']])\n", "\n", "# fit the model using the 2 previous columns\n", "results = log_reg.fit()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "d. Provide the summary of your model below, and use it as necessary to answer the following questions." ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<table class=\"simpletable\">\n", "<caption>Logit Regression Results</caption>\n", "<tr>\n", " <th>Dep. Variable:</th> <td>converted</td> <th> No. Observations: </th> <td>290584</td> \n", "</tr>\n", "<tr>\n", " <th>Model:</th> <td>Logit</td> <th> Df Residuals: </th> <td>290582</td> \n", "</tr>\n", "<tr>\n", " <th>Method:</th> <td>MLE</td> <th> Df Model: </th> <td> 1</td> \n", "</tr>\n", "<tr>\n", " <th>Date:</th> <td>Sat, 23 Mar 2019</td> <th> Pseudo R-squ.: </th> <td>8.077e-06</td> \n", "</tr>\n", "<tr>\n", " <th>Time:</th> <td>13:15:54</td> <th> Log-Likelihood: </th> <td>-1.0639e+05</td>\n", "</tr>\n", "<tr>\n", " <th>converged:</th> <td>True</td> <th> LL-Null: </th> <td>-1.0639e+05</td>\n", "</tr>\n", "<tr>\n", " <th> </th> <td> </td> <th> LLR p-value: </th> <td>0.1899</td> \n", "</tr>\n", "</table>\n", "<table class=\"simpletable\">\n", "<tr>\n", " <td></td> <th>coef</th> <th>std err</th> <th>z</th> <th>P>|z|</th> <th>[0.025</th> <th>0.975]</th> \n", "</tr>\n", "<tr>\n", " <th>intercept</th> <td> -1.9888</td> <td> 0.008</td> <td> -246.669</td> <td> 0.000</td> <td> -2.005</td> <td> -1.973</td>\n", "</tr>\n", "<tr>\n", " <th>ab_page</th> <td> -0.0150</td> <td> 0.011</td> <td> -1.311</td> <td> 0.190</td> <td> -0.037</td> <td> 0.007</td>\n", "</tr>\n", "</table>" ], "text/plain": [ "<class 'statsmodels.iolib.summary.Summary'>\n", "\"\"\"\n", " Logit Regression Results \n", "==============================================================================\n", "Dep. Variable: converted No. 
Observations: 290584\n", "Model: Logit Df Residuals: 290582\n", "Method: MLE Df Model: 1\n", "Date: Sat, 23 Mar 2019 Pseudo R-squ.: 8.077e-06\n", "Time: 13:15:54 Log-Likelihood: -1.0639e+05\n", "converged: True LL-Null: -1.0639e+05\n", " LLR p-value: 0.1899\n", "==============================================================================\n", " coef std err z P>|z| [0.025 0.975]\n", "------------------------------------------------------------------------------\n", "intercept -1.9888 0.008 -246.669 0.000 -2.005 -1.973\n", "ab_page -0.0150 0.011 -1.311 0.190 -0.037 0.007\n", "==============================================================================\n", "\"\"\"" ] }, "execution_count": 53, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Get the summary of the model\n", "results.summary()" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "intercept 7.306593\n", "ab_page 1.015102\n", "dtype: float64" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# interpret the coefficients: 1/exp(coef) flips the sign, so the ab_page value (1.015)\n", "# is the multiplicative change in conversion odds in favor of the control page\n", "1/np.exp(results.params)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "e. What is the p-value associated with **ab_page**? Why does it differ from the value you found in **Part II**?<br><br> **Hint**: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in **Part II**?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Observation and response\n", "> **The p-value associated with ab_page is 0.190. It differs from the one found in Part II because the regression tests whether there is any relationship between the page and conversion: the null is $p_{new} = p_{old}$ and the alternative is $p_{new} \\\\neq p_{old}$, a two-sided test. Part II was one-sided ($H_0$: $p_{old} \\\\geq p_{new}$ vs. $H_1$: $p_{old} < p_{new}$). Note that 0.190 matches the two-sided z-test p-value from part m. In both cases the p-value is far above 0.05, so we again fail to reject the null.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "f. Now, you are considering other things that might influence whether or not an individual converts. Discuss why it is a good idea to consider other factors to add into your regression model. Are there any disadvantages to adding additional terms into your regression model?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Observation and response\n", ">**Based on the p-value of 0.19, \"ab_page\" does not have a statistically significant relationship with conversion, so it is a good idea to consider other factors in the regression model. There are also disadvantages: added terms can be correlated with existing ones (multicollinearity), so a term such as \"ab_page\" that is not significant on its own might become significant once a new term is included, or vice versa, which complicates interpretation. One way to manage that risk is to study the impact of each new term on \"ab_page\", for example.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "g. Now along with testing if the conversion rate changes for different pages, also add an effect based on which country a user lives in. 
You will need to read in the **countries.csv** dataset and merge together your datasets on the appropriate rows. [Here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html) are the docs for joining tables. \n", "\n", "Does it appear that country had an impact on conversion? Don't forget to create dummy variables for these country columns - **Hint: You will need two columns for the three dummy variables.** Provide the statistical output as well as a written response to answer this question." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>user_id</th>\n", " <th>country</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>834778</td>\n", " <td>UK</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>928468</td>\n", " <td>US</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>822059</td>\n", " <td>UK</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>711597</td>\n", " <td>UK</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>710616</td>\n", " <td>UK</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " user_id country\n", "0 834778 UK\n", "1 928468 US\n", "2 822059 UK\n", "3 711597 UK\n", "4 710616 UK" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# load countries file : countries.csv\n", "df_countries = pd.read_csv('countries.csv')\n", "\n", "# Get a view on the content\n", "df_countries.head()" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>user_id</th>\n", " <th>timestamp</th>\n", " <th>group</th>\n", " <th>landing_page</th>\n", " <th>converted</th>\n", " <th>intercept</th>\n", " <th>ab_page</th>\n", " <th>country</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>851104</td>\n", " <td>2017-01-21 22:11:48.556739</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>0</td>\n", " <td>US</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>804228</td>\n", " <td>2017-01-12 08:01:45.159739</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>0</td>\n", " <td>US</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>661590</td>\n", " <td>2017-01-11 16:55:06.154213</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>1</td>\n", " <td>US</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>853541</td>\n", " <td>2017-01-08 18:28:03.143765</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>1</td>\n", " <td>US</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " 
<td>864975</td>\n", " <td>2017-01-21 01:52:26.210827</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>1</td>\n", " <td>1</td>\n", " <td>0</td>\n", " <td>US</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " user_id timestamp group landing_page converted \\\n", "0 851104 2017-01-21 22:11:48.556739 control old_page 0 \n", "1 804228 2017-01-12 08:01:45.159739 control old_page 0 \n", "2 661590 2017-01-11 16:55:06.154213 treatment new_page 0 \n", "3 853541 2017-01-08 18:28:03.143765 treatment new_page 0 \n", "4 864975 2017-01-21 01:52:26.210827 control old_page 1 \n", "\n", " intercept ab_page country \n", "0 1 0 US \n", "1 1 0 US \n", "2 1 1 US \n", "3 1 1 US \n", "4 1 0 US " ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# merge countries and df2 together, each time on the user_id\n", "df_all = df2.merge(df_countries, on='user_id', how='inner')\n", "\n", "# view on the new dataframe\n", "df_all.head()" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array(['US', 'CA', 'UK'], dtype=object)" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# how many different countries do we have ?\n", "df_all.country.unique()" ] }, { "cell_type": "code", "execution_count": 52, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>user_id</th>\n", " <th>timestamp</th>\n", " <th>group</th>\n", " <th>landing_page</th>\n", " <th>converted</th>\n", " <th>intercept</th>\n", " <th>ab_page</th>\n", " <th>country</th>\n", " <th>UK</th>\n", " <th>US</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>851104</td>\n", " <td>2017-01-21 22:11:48.556739</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>0</td>\n", " <td>US</td>\n", " <td>0</td>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>804228</td>\n", " <td>2017-01-12 08:01:45.159739</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>0</td>\n", " <td>US</td>\n", " <td>0</td>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>661590</td>\n", " <td>2017-01-11 16:55:06.154213</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>1</td>\n", " <td>US</td>\n", " <td>0</td>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>853541</td>\n", " <td>2017-01-08 18:28:03.143765</td>\n", " <td>treatment</td>\n", " <td>new_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>1</td>\n", " <td>US</td>\n", " <td>0</td>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>864975</td>\n", " <td>2017-01-21 01:52:26.210827</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>1</td>\n", " <td>1</td>\n", " <td>0</td>\n", " <td>US</td>\n", " <td>0</td>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>5</th>\n", " <td>936923</td>\n", " <td>2017-01-10 15:20:49.083499</td>\n", " <td>control</td>\n", " <td>old_page</td>\n", " <td>0</td>\n", " <td>1</td>\n", " 
"      <td>0</td>\n", "      <td>US</td>\n", "      <td>0</td>\n", "      <td>1</td>\n", "    </tr>\n", "    <tr>\n", "      <th>6</th>\n", "      <td>679687</td>\n", "      <td>2017-01-19 03:26:46.940749</td>\n", "      <td>treatment</td>\n", "      <td>new_page</td>\n", "      <td>1</td>\n", "      <td>1</td>\n", "      <td>1</td>\n", "      <td>CA</td>\n", "      <td>0</td>\n", "      <td>0</td>\n", "    </tr>\n", "    <tr>\n", "      <th>7</th>\n", "      <td>719014</td>\n", "      <td>2017-01-17 01:48:29.539573</td>\n", "      <td>control</td>\n", "      <td>old_page</td>\n", "      <td>0</td>\n", "      <td>1</td>\n", "      <td>0</td>\n", "      <td>US</td>\n", "      <td>0</td>\n", "      <td>1</td>\n", "    </tr>\n", "    <tr>\n", "      <th>8</th>\n", "      <td>817355</td>\n", "      <td>2017-01-04 17:58:08.979471</td>\n", "      <td>treatment</td>\n", "      <td>new_page</td>\n", "      <td>1</td>\n", "      <td>1</td>\n", "      <td>1</td>\n", "      <td>UK</td>\n", "      <td>1</td>\n", "      <td>0</td>\n", "    </tr>\n", "    <tr>\n", "      <th>9</th>\n", "      <td>839785</td>\n", "      <td>2017-01-15 18:11:06.610965</td>\n", "      <td>treatment</td>\n", "      <td>new_page</td>\n", "      <td>1</td>\n", "      <td>1</td>\n", "      <td>1</td>\n", "      <td>CA</td>\n", "      <td>0</td>\n", "      <td>0</td>\n", "    </tr>\n", "  </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ "   user_id                   timestamp      group landing_page  converted  \\\n", "0   851104  2017-01-21 22:11:48.556739    control     old_page          0   \n", "1   804228  2017-01-12 08:01:45.159739    control     old_page          0   \n", "2   661590  2017-01-11 16:55:06.154213  treatment     new_page          0   \n", "3   853541  2017-01-08 18:28:03.143765  treatment     new_page          0   \n", "4   864975  2017-01-21 01:52:26.210827    control     old_page          1   \n", "5   936923  2017-01-10 15:20:49.083499    control     old_page          0   \n", "6   679687  2017-01-19 03:26:46.940749  treatment     new_page          1   \n", "7   719014  2017-01-17 01:48:29.539573    control     old_page          0   \n", "8   817355  2017-01-04 17:58:08.979471  treatment     new_page          1   \n", "9   839785  2017-01-15 18:11:06.610965  treatment     new_page          1   \n", "\n", "   intercept  ab_page country  UK  US  \n", "0          1        0      US   0   1  \n", "1          1        0      US   0   1  \n", "2          1        1      US   0   1  \n", "3          1        1      US   0   1  \n", "4          1        0      US   0   1  \n", "5          1        0      US   0   1  \n", "6          1        1      CA   0   0  \n", "7          1        0      US   0   1  \n", "8          1        1      UK   1   0  \n", "9          1        1      CA   0   0  " ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Add an \"intercept\" column to df_all\n", "df_all['intercept'] = 1\n", "\n", "# Create dummy columns for the three countries\n", "df_all[['CA', 'UK', 'US']] = pd.get_dummies(df_all['country'])\n", "\n", "# Drop the \"CA\" dummy: three countries need only two dummy columns, and CA becomes the baseline\n", "df_all = df_all.drop('CA', axis=1)\n", "\n", "# Get a view of the new df_all dataframe\n", "df_all.head(10)" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Optimization terminated successfully.\n", "         Current function value: 0.366116\n", "         Iterations 6\n" ] } ], "source": [ "# Looking for the impact of country on conversion\n", "# Instantiate the regression model on \"intercept\", \"UK\" and \"US\"\n", "log_reg_1 = sm.Logit(df_all['converted'], df_all[['intercept', 'UK', 'US']])\n", "\n", "# Fit the model\n", "results_1 = log_reg_1.fit()" ] }, { "cell_type": "code", "execution_count": 59, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "<table class=\"simpletable\">\n", "<caption>Logit Regression Results</caption>\n", "<tr>\n", "  <th>Dep. Variable:</th> <td>converted</td> <th>  No. Observations:  </th> <td>290584</td> \n",
"</tr>\n", "<tr>\n", "  <th>Model:</th> <td>Logit</td> <th>  Df Residuals:  </th> <td>290581</td> \n", "</tr>\n", "<tr>\n", "  <th>Method:</th> <td>MLE</td> <th>  Df Model:  </th> <td>  2</td> \n", "</tr>\n", "<tr>\n", "  <th>Date:</th> <td>Sat, 23 Mar 2019</td> <th>  Pseudo R-squ.:  </th> <td>1.521e-05</td> \n", "</tr>\n", "<tr>\n", "  <th>Time:</th> <td>20:39:44</td> <th>  Log-Likelihood:  </th> <td>-1.0639e+05</td>\n", "</tr>\n", "<tr>\n", "  <th>converged:</th> <td>True</td> <th>  LL-Null:  </th> <td>-1.0639e+05</td>\n", "</tr>\n", "<tr>\n", "  <th> </th> <td> </td> <th>  LLR p-value:  </th> <td>0.1984</td> \n", "</tr>\n", "</table>\n", "<table class=\"simpletable\">\n", "<tr>\n", "  <td></td> <th>coef</th> <th>std err</th> <th>z</th> <th>P>|z|</th> <th>[0.025</th> <th>0.975]</th> \n", "</tr>\n", "<tr>\n", "  <th>intercept</th> <td> -2.0375</td> <td> 0.026</td> <td> -78.364</td> <td> 0.000</td> <td> -2.088</td> <td> -1.987</td>\n", "</tr>\n", "<tr>\n", "  <th>UK</th> <td> 0.0507</td> <td> 0.028</td> <td> 1.786</td> <td> 0.074</td> <td> -0.005</td> <td> 0.106</td>\n", "</tr>\n", "<tr>\n", "  <th>US</th> <td> 0.0408</td> <td> 0.027</td> <td> 1.518</td> <td> 0.129</td> <td> -0.012</td> <td> 0.093</td>\n", "</tr>\n", "</table>" ], "text/plain": [ "<class 'statsmodels.iolib.summary.Summary'>\n", "\"\"\"\n", " Logit Regression Results \n", "==============================================================================\n", "Dep. Variable: converted No. Observations: 290584\n", "Model: Logit Df Residuals: 290581\n", "Method: MLE Df Model: 2\n", "Date: Sat, 23 Mar 2019 Pseudo R-squ.: 1.521e-05\n", "Time: 20:39:44 Log-Likelihood: -1.0639e+05\n", "converged: True LL-Null: -1.0639e+05\n", " LLR p-value: 0.1984\n", "==============================================================================\n", " coef std err z P>|z| [0.025 0.975]\n", "------------------------------------------------------------------------------\n", "intercept -2.0375 0.026 -78.364 0.000 -2.088 -1.987\n", "UK 0.0507 0.028 1.786 0.074 -0.005 0.106\n", "US 0.0408 0.027 1.518 0.129 -0.012 0.093\n", "==============================================================================\n", "\"\"\"" ] }, "execution_count": 59, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Summary of the model: impact of country on conversion\n", "results_1.summary()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Observation and response\n", ">**Relative to the CA baseline, the p-value for UK is 0.074 and for US it is 0.13. Both are above 0.05, so we cannot conclude that country has a significant effect on conversion.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there are significant effects on conversion. Create the necessary additional columns, and fit the new model. \n", "\n", "Provide the summary results, and your conclusions based on the results."
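, "\n", "\n", "*(The additive model from the original analysis comes first below; a hedged sketch that adds explicit page-by-country interaction columns follows it.)*"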
] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Optimization terminated successfully.\n", " Current function value: 0.366113\n", " Iterations 6\n" ] } ], "source": [ "# Instantiate regression model on \"intercept\", \"ab_page\", \"UK\" and \"US\"\n", "log_reg_2 = sm.Logit(df_all['converted'], df_all[['intercept', 'ab_page', 'UK', 'US']])\n", "\n", "# fit the model using the 2 previous columns\n", "results_2 = log_reg_2.fit()" ] }, { "cell_type": "code", "execution_count": 55, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "<table class=\"simpletable\">\n", "<caption>Logit Regression Results</caption>\n", "<tr>\n", " <th>Dep. Variable:</th> <td>converted</td> <th> No. Observations: </th> <td>290584</td> \n", "</tr>\n", "<tr>\n", " <th>Model:</th> <td>Logit</td> <th> Df Residuals: </th> <td>290580</td> \n", "</tr>\n", "<tr>\n", " <th>Method:</th> <td>MLE</td> <th> Df Model: </th> <td> 3</td> \n", "</tr>\n", "<tr>\n", " <th>Date:</th> <td>Sat, 23 Mar 2019</td> <th> Pseudo R-squ.: </th> <td>2.323e-05</td> \n", "</tr>\n", "<tr>\n", " <th>Time:</th> <td>18:38:47</td> <th> Log-Likelihood: </th> <td>-1.0639e+05</td>\n", "</tr>\n", "<tr>\n", " <th>converged:</th> <td>True</td> <th> LL-Null: </th> <td>-1.0639e+05</td>\n", "</tr>\n", "<tr>\n", " <th> </th> <td> </td> <th> LLR p-value: </th> <td>0.1760</td> \n", "</tr>\n", "</table>\n", "<table class=\"simpletable\">\n", "<tr>\n", " <td></td> <th>coef</th> <th>std err</th> <th>z</th> <th>P>|z|</th> <th>[0.025</th> <th>0.975]</th> \n", "</tr>\n", "<tr>\n", " <th>intercept</th> <td> -2.0300</td> <td> 0.027</td> <td> -76.249</td> <td> 0.000</td> <td> -2.082</td> <td> -1.978</td>\n", "</tr>\n", "<tr>\n", " <th>ab_page</th> <td> -0.0149</td> <td> 0.011</td> <td> -1.307</td> <td> 0.191</td> <td> -0.037</td> <td> 0.007</td>\n", "</tr>\n", "<tr>\n", " <th>UK</th> <td> 0.0506</td> <td> 0.028</td> <td> 1.784</td> <td> 0.074</td> <td> -0.005</td> <td> 0.106</td>\n", "</tr>\n", "<tr>\n", " <th>US</th> <td> 0.0408</td> <td> 0.027</td> <td> 1.516</td> <td> 0.130</td> <td> -0.012</td> <td> 0.093</td>\n", "</tr>\n", "</table>" ], "text/plain": [ "<class 'statsmodels.iolib.summary.Summary'>\n", "\"\"\"\n", " Logit Regression Results \n", "==============================================================================\n", "Dep. Variable: converted No. Observations: 290584\n", "Model: Logit Df Residuals: 290580\n", "Method: MLE Df Model: 3\n", "Date: Sat, 23 Mar 2019 Pseudo R-squ.: 2.323e-05\n", "Time: 18:38:47 Log-Likelihood: -1.0639e+05\n", "converged: True LL-Null: -1.0639e+05\n", " LLR p-value: 0.1760\n", "==============================================================================\n", " coef std err z P>|z| [0.025 0.975]\n", "------------------------------------------------------------------------------\n", "intercept -2.0300 0.027 -76.249 0.000 -2.082 -1.978\n", "ab_page -0.0149 0.011 -1.307 0.191 -0.037 0.007\n", "UK 0.0506 0.028 1.784 0.074 -0.005 0.106\n", "US 0.0408 0.027 1.516 0.130 -0.012 0.093\n", "==============================================================================\n", "\"\"\"" ] }, "execution_count": 55, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Get the summary of the model\n", "results_2.summary()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Observation and response\n", ">**The p-value for UK is 0.074. The one for US is 0.13. The one for ab_page is 0.19. 
\n", "Consequently, page and country have no significant effects on conversion.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<a id='conclusions'></a>\n", "## Conclusions\n", "\n", "> About the probabilities: for the control group and for the treatment group, we found out that the probabilities for individuals to be converted are too close. So we could not conclude from that.\n", "\n", "> Simulating what we believe to be possible under the Null hypothesis: We failed to reject the Null hypothesis, meaning that the conversion rate for the old page is higher or equal to the one for the new page.\n", "\n", "> Using logistic regression approach: It appeared as a confirmation of the previous simulation under the Null. In addition, it led to assess which of the variables - amongst country and page - has a significant effect on the conversion. We found out that none of them has a significant effect to the conversion. This makes sens regarding the failure to reject the Null hypothesis.\n", "\n", "> So regarding the approach, for A/B testing result analysis, the stidy here shows that the simulation under the Null hypothesis is the best approach. Starting from that, using a logistic regression appears relevant to assess the significance of some of the involved variables to get more deeper insights.\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from subprocess import call\n", "call(['python', '-m', 'nbconvert', 'Analyze_ab_test_results_notebook.ipynb'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.3" } }, "nbformat": 4, "nbformat_minor": 2 }