{ "cells": [ { "cell_type": "markdown", "metadata": { "toc": true }, "source": [ "

Table of Contents

\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> All content here is under a Creative Commons Attribution [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and all source code is released under a [BSD-2 clause license](https://en.wikipedia.org/wiki/BSD_licenses). \n", ">\n", ">Please reuse, remix, revise, and [reshare this content](https://github.com/kgdunn/python-basic-notebooks) in any way, keeping this notice." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Run this cell once, at the start, to load the notebook's style sheet.\n", "from IPython.core.display import HTML\n", "css_file = './images/style.css'\n", "HTML(open(css_file, \"r\").read())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Module 9: Overview \n", "\n", "In the prior [module 8](https://yint.org/pybasic08) you got more exposure to Pandas data frames.\n", "\n", "In this module we use these data frames for getting a brief exposure to **statistics** and **plotting**. We can look at each topic separately, but they go hand-in-hand. You've probably heard: \"*always start your data analysis by plotting your data*\". There's a good reason for that: the type of statistical analysis is certainly guided by what is in that data. Plotting the data is one of the most effective ways to figure that out.\n", "\n", "\n", "### Preparing for this module###\n", "\n", "You should have read [Chapter 1](https://learnche.org/pid/data-visualization/) of the book \"Process Improvement using Data\".\n", "\n", "### Cloning this notebook for yourself\n", "\n", "If you are seeing a read-only version of this notebook, it is best you clone this notebook to your hard drive and work in it directly. Check out this repo using Git. Use your favourite Git user-interface, or at the command line:\n", "\n", ">```\n", ">git clone git@github.com:kgdunn/python-basic-notebooks.git\n", ">\n", "># If you already have the repo cloned:\n", ">git pull\n", ">```\n", "\n", "to update it to the later version.\n", "\n", "### Summarizing data visually and numerically (statistics)\n", "\n", "In [this notebook](https://yint.org/pybasic09):\n", " \n", "1. Box plots\n", "2. Bar plots (bar charts) \n", "3. Histograms\n", "
\n", "\n", "In the [next notebook](https://yint.org/pybasic10):\n", "4. Data tables\n", "5. Time-series, or a sequence plot\n", "6. Scatter plot\n", "\n", "
\n", "Statistical concepts are indicated with this icon.
In between, throughout the notes, we will also introduce statistical and data science concepts. This way you will learn how to interpret the plots and also communicate your results with the correct language.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A general work flow for any project where you deal with data\n", "\n", "***After years of experience working with data, you will find your own approach.***\n", "\n", "Here is my 6-step approach (not linear, but iterative): **Define**, **Get**, **Explore**, **Clean**, **Manipulate**, **Communicate**\n", "\n", "1. **Define**/clarify the *objective*. Write down exactly what you need to deliver to have the project/assignment considered complete.\n", "\n", " Then your next steps become clear.\n", " \n", "2. Look for and **get** your data (or it will be given to you by a colleague). Since you have clarified your objective, it is clearer now which data, and how much data, you need.\n", "\n", "3. Then start looking at the data. Are the data what we expect? This is the **explore** step. Use plots and table summaries.\n", "\n", "4. **Clean** up your data. This step and the prior step are iterative. As you explore your data you notice problems and bad data, you ask questions, and you gain a bit of insight into the data. You clean, and re-explore, but always with the goal(s) in mind. Or perhaps you realize this isn't the right data to reach your objective: you need other data, so you iterate.\n", "\n", "5. **Manipulate** the data: modify it and make calculations from it. This step is also called modeling, if you are building models, but sometimes you are simply summarizing your data to get the objective solved.\n", "\n", "6. From the data models, summaries and plots you start extracting the insights and conclusions you were looking for. Again, you can go back to any of the prior steps if you realize you need that to better achieve your goal(s). You **communicate** clear visualizations to your colleagues, with crisp, short text explanations that meet the objectives.\n", "\n", "___\n", "\n", "The above work flow (also called a '*pipeline*') is not new or unique to this course. Other people have written about similar approaches:\n", "\n", "* Garrett Grolemund and Hadley Wickham in their book on R for Data Science have this diagram (from this part of their book). It matches the above, with slightly different names for the steps. It misses, in my opinion, the most important step of ***defining your goal*** first.\n", "\n", "\n", "___\n", "* Hilary Mason and Chris Wiggins in their article on A Taxonomy of Data Science describe their 5 steps in detail:\n", " 1. **Obtain**: pointing and clicking does not scale. In other words, pointing and clicking in Excel, Minitab, or similar software is OK for small data/quick analysis, but does not scale to large data, nor to repeated data analysis.\n", " 2. **Scrub**: the world is a messy place\n", " 3. **Explore**: you can see a lot by looking\n", " 4. **Models**: always bad, sometimes ugly\n", " 5. **Interpret**: \"the purpose of computing is insight, not numbers.\"\n", " \n", " You can read their article, as well as this view on it, which is a bit more lighthearted.\n", " \n", "___\n", "\n", "What has been your approach so far?"
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Box plots: using the Ammonia case study\n", "\n", "We will implement the 6-step workflow suggested above.\n", "\n", "\n", "### Defining the problem (step 1)\n", "Our end (1) **objective** is to describe what time-based trends we see in the ammonia concentration of a wastewater stream. We have a single measurement, taken every six hours. We will first see how we can summarize the data.\n", "\n", "### Getting the data (step 2)\n", "\n", "The next step is to (2) **get** the data. We have a data file from [this website](https://openmv.net/info/ammonia) where there is 1 column of numbers and several rows of ammonia measurements.\n", "\n", "### Overview of remaining steps\n", "\n", "Step 3 and 4 of exploring the data are often iterative and can happen interchangeably. We will (3) **explore** the data and see if our knowledge that ammonia concentrations should be in the range of 15 to 50 mmol/L is true. We might have to sometimes (4) **clean** up the data if there are problems.\n", "\n", "We will also summarize the data by doing various calculations, also called (5) **manipulations**, and we will (6) **communicate** what we see with plots." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's get started. There are 3 ways to **get** the data:\n", "1. Download the file to your computer\n", "2. Read the file directly from the website (no proxy server)\n", "3. Read the file directly from the website (you are behind a proxy server)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Loading the data from a local file\n", "import os\n", "import pandas as pd\n", "\n", "# If the file is on your computer:\n", "directory = r'C:\\location\\of\\file'\n", "data_file = 'ammonia.csv' \n", "full_filename = os.path.join(directory, data_file)\n", "ammonia = pd.read_csv(full_filename)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Read the CSV file directly from a web server:\n", "import pandas as pd\n", "ammonia = pd.read_csv('http://openmv.net/file/ammonia.csv')\n", "\n", "# If you are on a work computer behind a proxy server, you\n", "# have to take a few more steps. Add these 6 lines of code.\n", "import io\n", "import requests\n", "proxyDict = {\"http\" : \"http://replace.with.proxy.address:port\"}\n", "url = \"http://openmv.net/file/ammonia.csv\"\n", "s = requests.get(url, proxies=proxyDict).content\n", "web_dataset = io.StringIO(s.decode('utf-8'))\n", "\n", "# Convert the file fetched from the web to a Pandas dataframe\n", "ammonia = pd.read_csv(web_dataset)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Show only the first few lines of the data table (by default it will show 5 lines)\n", "print(ammonia.head())\n", "\n", "# And the last 10:\n", "print(ammonia.tail(n=10))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exploration (step 3)\n", "\n", "Once we have opened the data we check with the ``.head(...)`` command if our data are within the expected range. At least the first few values. Similar for the ``.tail(...)`` values.\n", "\n", "Those two commands are always good to check first.\n", "\n", "Now we are ready to move on, to explore further with the ``.describe(...)`` command." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Run this single line of code, and answer the questions below\n", "ammonia.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Check your knowledge\n", "\n", "1. There are \\_\\_\\_\\_\\_\\_ rows of data. Measured at 6 hours apart, this represents \\_\\_\\_\\_\\_\\_ days of sensor readings.\n", "2. We expected ammonia concentrations to typically be in the range of 15 to 50 mmol/L. Is that the case from the description?\n", "3. What is the average ammonia concentration?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The median, or the 50th percentile\n", "\n", "There are 1440 rows, or data points. If we sort these from low to high we will find the minimum as the first entry, and the maximum in the last position of the vector.\n", "\n", " What value will we find halfway? It is called the **median**, the middle value, the one that separates your data set in half. If there are an even number of data values, you take the average between the two middle values. \n", "\n", "\n", "Try find the median value manually:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sort according to the values. In Pandas we have to be \n", "# explicit what to sort by (it could have been the index!)\n", "ammonia_sorted = ammonia.sort_values(by='Ammonia')\n", "\n", "# Verify that sorting happened\n", "print(ammonia_sorted.head())\n", "print(ammonia_sorted.tail())\n", "\n", "# Notice the indexes are maintained. So you can see, for example, sample 811 and 812 (0-based) \n", "# were the lowest recorded ammonia values.\n", "\n", "# Find the middle two values: 719 and 720, and calculate the average:\n", "ammonia_sorted[719:721] # gets entry 719 and 720, which are the middle two values of the 1440 numbers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So there is the median: 36.18 mmol/L. And compared to the average, the mean of 36.094, they agree closely.\n", "\n", "```python\n", "# There is a quicker way to find the median. Does it match the manual value above?\n", "ammonia.median()\n", "```\n", "\n", "Now, with that knowledge can you interpret what the row **\"50%\"** means in the above ``.describe()`` command?\n", "\n", " The \"50%\" row in that description is called the 50th *percentile*. \n", "![alt=Pandas \"describe\" output](images/summarizing-data/ammonia-description.png)\n", "\n", "It is the value in the dataset above which 50% of the values are found, and below which 50% of the values are found. A shortcut name that we use for the 50th percentile is **median**. It is the only percentile which has a special name. All the others we just call by their number, e.g. we say \"*the 75th percentile is 42.37*\" for the Ammonia column.\n", "\n", "\n", "##### Check your knowledge\n", "\n", "1. What does the 25th percentile mean? Below the 25th percentile value we will find \\_\\_\\_\\_% of the values, and above the 25th percentile we find \\_\\_\\_\\_% of the values. In this case that means the 25th percentile will be close to value of the 360th entry in the sorted vector of data. Try it:\n", "\n", " ``ammonia_sorted[358:362]``\n", "\n", "2. What does the 75th percentile mean? Below the 75th percentile value we will find \\_\\_\\_\\_% of the values, and above the 75th percentile we find \\_\\_\\_\\_% of the values. 
In this case that means the 75th percentile will be close to the value of the 1080th entry in the sorted vector of data. Try it:\n", "\n", " ``ammonia_sorted[1078:1082]``\n", "\n", "3. So therefore: between the 25th percentile and the 75th percentile, we will find \\_\\_\\_\\_% of the values in our vector. \n", "\n", "4. Given this knowledge, does this match the expectation we have that our Ammonia concentration values should lie between 15 and 50 mmol/L?\n", "\n", "And that is the key reason why you are given the 25th and 75th percentile values. Half of the data in the sorted data vector lie between these two values: 25% of the data lie below the 25th percentile, the other 25% lie above the 75th percentile, and the bulk of the data lie between these two values." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Introducing the box plot\n", "\n", "We have looked at the extremes with ``.head()`` and ``.tail()``, and we have learned about the mean and the median. \n", "\n", "What about the **typical** values? What do we even mean by _typical_ or _usual_ or _common_ values? Could we use the 25th and 75th percentiles to help guide us?\n", "\n", "One way to get a feel for that is to plot these numbers: the 25th, 50th and 75th percentiles. Let's see how, by using a **boxplot**." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from matplotlib import pyplot\n", "%matplotlib inline\n", "\n", "# The plotting library needs access to the raw data values. Access those\n", "# using the ``.values`` method\n", "raw_values = ammonia.values\n", "pyplot.boxplot(raw_values);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The boxplot gives you an idea of the distribution, the spread, of the data.\n", "\n", "The key point is the orange center line, the line that splits the centre square (actually it is a rectangle, but it looks squarish). That horizontal line is the median.\n", "\n", "It may be surprising to see that the middle chunk, the middle 50% of the sorted data values, falls in such a narrow range of the rectangle.\n", "![alt=\"Boxplot for the percentiles](images/summarizing-data/percentiles-ammonia.png)\n", "\n", " The bottom 25% of the data falls below the box, and the top 25% of the data falls above the box. That is indicated to some extent by the whiskers, the lines leaving the middle square/rectangle shape. The whiskers tell how much spread there is in our data. We see 2 single circles below the bottom whisker. These are likely *outliers*: data which are unusual, given the context of the rest of the data. More about *outliers* later.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Run this code to see a basic histogram.\n", "# 1. Adjust the number of bins, and see how the histogram changes. The default number is 10.\n", "# 2. Adjust the colour of the bin edges (borders). Try 'red', or 'black' or 'xkcd:pea soup'\n", "pyplot.hist(raw_values, bins=30, edgecolor='white')\n", "pyplot.xlabel('Ammonia concentration [mmol/L]');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Don't worry about the interpretation of this plot just yet. We have a separate section later which is all about histograms. You should see something like this:\n", "\n", "*(figure: histogram of the ammonia concentrations)*\n", "\n", "The key point is to get an idea of what the percentiles are. We will add these now on top of the histogram."
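, "\n", "As an aside: you can also compute several percentiles in one call, rather than one at a time. A minimal sketch, assuming the ``ammonia`` dataframe from above:\n", "\n", "```python\n", "# One call returns the 25th, 50th and 75th percentiles together\n", "ammonia.quantile([0.25, 0.5, 0.75])\n", "```"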
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# It is helpful to visualize the 25th and 75th percentiles on the histogram.\n", "\n", "# Run the following code:\n", "pyplot.hist(raw_values, bins=20, edgecolor='white');\n", "\n", "# Now superimpose on the histogram the 25th and the 75th percentiles (a type of quantile)\n", "# as vertical lines (vlines) on the histogram\n", "pyplot.vlines(x=ammonia.quantile(0.25), ymin=0, ymax=250, color=\"red\")\n", "pyplot.vlines(x=ammonia.quantile(0.50), ymin=0, ymax=250, color=\"orange\")\n", "pyplot.vlines(x=ammonia.quantile(0.75), ymin=0, ymax=250, color=\"red\");\n", "\n", "# NOTE: the 0.5 quantile, is the same as the 50th percentile, is the same as the median.\n", "print('The 50th percentile is at: {}'.format(ammonia.quantile(0.5))) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should get something like this:\n", "\n", "\"Ammonia\n", "\n", "It might not appear like it is the case, but \n", "* 25% of the histogram area is to the left of the first red line\n", "* 25% of the histogram area is between the red and the orange line\n", "* 25% of the histogram area is between the orange line and the next red line to the right\n", "* 25% of the histogram area is to the right of the second red line\n", "\n", "All of that you can get from this single table which you can create with ``.describe()``:\n", "![alt=Pandas \"describe\" output](images/summarizing-data/ammonia-description.png)\n", "\n", "Which brings us to two important points:\n", "1. Tables **are** (despite what some people might say), a very effective form of summarizing data\n", "2. Start your data analysis with the ``.describe()`` function to get a (tabular) feel for your data.\n", "\n", "\n", "### Looking ahead\n", "\n", "We have not solved our complete objective yet. Scroll up, and recall what we needed to do: \"*describe what **time-based** trends we see in the ammonia concentration of a wastewater stream*\". We will look at that in the [next notebook](https://yint.org/pybasic10).\n", "\n", "### Summary\n", "\n", "We have learned quite a bit in this section. See if you can explain these concepts to a friend/colleague:\n", "\n", "* head and tail of a data set\n", "* median\n", "* spread in the data\n", "* distribution of a data column\n", "* box plot\n", "* percentile" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ➜ Challenge yourself: box plots for thickness of plastic sheets\n", "\n", "Box plots on a single column are interesting, but they are even more interesting if you have several variables to ***compare***. That is the key point: they are useful for comparisons.\n", "\n", "In this case we put the box plots side-by-side, from left-to-right. The variable on the y-axis is usually the same for all box plots. It doesn't make sense if the box plots being compared are of different measurements. For example: compare several temperature values, but it does not make sense if one box plot is temperature and the other is pressure.\n", "\n", "In the data set at http://openmv.net/info/film-thickness we measure the thickness of a plastic sheet, also called a film. It is rectangular film, and measured at 4 positions. The data are from a confidential source, but are from a true process.\n", "\n", "##### Answer these questions\n", "\n", "1. At which position on the film seems to have the most outliers?\n", "2. At which position do we have the least variability in the measurements?\n", "3. 
At which position is the average thickness the lowest? [Use the median and the mean to make your judgment]." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Imports for reading the file and to plot it\n", "import io\n", "import requests\n", "import pandas as pd\n", "from matplotlib import pyplot\n", "%matplotlib inline\n", "\n", "url = 'http://openmv.net/file/film-thickness.csv'\n", "# Remove the proxies= argument below if you are not behind a proxy server\n", "proxyDict = {\"http\" : \"http://replace.with.proxy.address:port\"}\n", "s = requests.get(url, proxies=proxyDict).content\n", "web_dataset = io.StringIO(s.decode('utf-8'))\n", "\n", "# Convert the file fetched from the web to a Pandas dataframe\n", "data = pd.read_csv(web_dataset)\n", "data = data.set_index('Number')\n", "print(data.describe())\n", "data.boxplot();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ➜ Challenge yourself: Judging the Judges\n", "\n", "Many companies making food products have taste panels. In these panels a number of people judge the product based on different attributes. \n", "\n", "In the data set at https://openmv.net/info/peas we have multiple columns, but only six are scored by judges: flavour, sweetness, fruity flavour, off-flavour, mealiness and hardness. \n", "\n", "Remember in Pandas you can select columns using: ``df.loc[:, 'Flavour': 'Hardness']``, which will select all columns from `Flavour` up to, and including, `Hardness`. \n", "\n", "##### Answer these questions:\n", "\n", "* Create a box plot of the 6 attributes, so they are shown side-by-side in one figure.\n", "* What scale was used for the 6 attributes? Can we actually compare the values from the 6 attributes with each other?\n", "* Which of the 6 attributes has the lowest variability?\n", "* Which attribute has the most outliers?\n", "* For which of the 6 attributes is the median most imbalanced (not half-way between the 25th and 75th percentiles)?\n", "* For that attribute, is the distribution shifted to the left, or to the right?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bar plots\n", "\n", "Bar plots are a simple (though inefficient) way to visualize information. You don't need to explain them: almost everyone has seen one and knows how to read it. \n", "\n", "We will use the case study below to introduce the topic and point out some issues to be aware of.\n", "\n", "### Website case study\n", "\n", "We will follow the 6 steps from the [general data science workflow](#A-general-work-flow-for-any-project-where-you-deal-with-data) described above.\n", "\n", "**Step 1** is to ***define*** your objective: we have recorded visits to a small website. Which day of the week is the most popular, and which is the least popular? \n", "\n", "**Step 2** is to get your data.\n", "\n", "**Step 3** asks you to explore your data: look at it, make summaries, and get a feeling for what you have.\n", "\n", "**Step 4** is to clean up your data. Thankfully this has been done already.\n", "\n", "**Step 5** is to use the data to solve your goal/objective, to manipulate the data.\n", "\n", "**Step 6** is to communicate your results, which is the main task here, using a bar plot.\n", "\n", "\n", "### Step 1: Define your objective\n", "\n", "We have a small website, and we record the number of visitors each day. Our ***objective*** is to find which day of the week is the most popular, and which is the least popular. \n", "\n", "Why? 
If we absolutely need to take the website off-line, we can pick a day which has minimal disruption for our visitors.\n", "\n", "### Step 2: Get the data\n", "\n", "The data has been assembled for you already. You can read more about the data, and download it, from here: http://openmv.net/info/website-traffic \n", "\n", "Refer back to the module on [loading data from a CSV file](https://yint.org/pybasic07#Reading-a-CSV-file-with-Pandas), if needed. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "website = pd.read_csv('http://openmv.net/file/website-traffic.csv')\n", "print(website.head())\n", "print(website.tail())\n", "website.describe()\n", "\n", "# You will need to modify the above code if you are behind a proxy server." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 3: explore your data\n", "\n", "We have done a little bit of this step already, above, when we used the ``.describe()`` function.\n", "\n", "So it seems we have data from 1 June 2009 till 31 December 2009 here, sorted in order.\n", "\n", "If you are paying attention, you will notice that the ``.head()`` command gives information about more columns than ``.describe()``. \n", "\n", "That is because, by default, ``.describe()`` will only describe numeric columns. To see a summary of all columns, use the following:\n", "\n", "```python\n", "website.describe(include='all')\n", "```\n", "\n", "and you should get this type of output:\n", "\n", "*(table: output of ``website.describe(include='all')``)*\n", "\n", "We see that `DayOfWeek` has 7 unique values, which is expected.\n", "\n", "`DayOfWeek` is not a quantitative (numeric) column. So we cannot calculate the average, the minimum, the maximum, etc., which is why those rows in the summary table are `NaN` (not a number).\n", "\n", "Columns `Year` and ``Visits`` are however quantitative, so those averages, minimums, maximums, etc. can be calculated.\n", "\n", " In contrast, `DayOfWeek` can be collected in groups (categories), and then we can count the number of data items in that group. For example, we could ask how many rows (in our dataset) are on \"Monday\". When we can group rows into categories, we call it ``Categorical`` data.\n", "\n", "\n", "Plenty of data you will work with are categorical. Some examples:\n", "* *type of operating system*: Linux, Mac, Windows, ...\n", "* *eye colour*: grey, blue, green, brown, ...\n", "* *shape of object*: square, circular, rectangular, ...\n", "\n", "Categories are natural groupers: it is logical, for example, to see box plots grouped by ``operating system``, or number of visits grouped by ``Day`` or ``Month``. Understanding and using categories is essential.\n", "\n", "##### Self-check:\n", "\n", "* Name/describe some other examples of categorical data you have worked with recently.\n", "* Which of these are categorical, and which are quantitative?\n", "\n", " * Number of years of education since high-school\n", " * Highest level of education achieved\n", " * 1st year student, 2nd year student, 3rd year student, ...\n", " * Relationship status\n", " * Fuel type used in cars\n", " * Octane number\n", " * Type of sweetener used: sugar, honey, stevia, maple syrup, ...\n", " \n", "### Step 4: Clean up your data\n", "\n", "We don't see any issues in the data yet. It actually was in good condition already. In the next notebook we will show how you can plot the number of visits against time, as a time-series plot. Perhaps there are issues that you will see then. 
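\n", "\n", "If you want a quick programmatic check, counting the missing values per column is a good start. A minimal sketch, assuming the ``website`` dataframe loaded above:\n", "\n", "```python\n", "# Number of missing entries in each column; expect zeros for clean data\n", "website.isna().sum()\n", "```\n", "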
\n", "\n", "For now we will assume the data are clean and that we can start to manipulate it.\n", "\n", "### Step 5: manipulate your data, making calculations based on it \n", "\n", "To answer our question from step 1, we would like to summarize the average number of website visits, grouped per day.\n", "\n", "In step 2 we saw that there is a column called `DayOfWeek`. In other words, we want to collect all visits from the same day together and calculate the average number of visits on that day.\n", "\n", "If this were a table of results, we would want one column with 7 rows, one for each day of the week. In a second column we would want the average number of visitors on that day.\n", "\n", "Luckily Pandas provides a function that does that for us: ``.groupby(...)``. It will group the data by a given categorical column.\n", "\n", "```python\n", "website.groupby(by='DayOfWeek')```\n", "\n", "But once the rows have been grouped, you need to indicate what you want to do within those groups. Here are some examples:\n", "\n", "```python\n", "website.groupby(by='DayOfWeek').mean() # calculate the average per group for the other columns\n", "website.groupby(by='DayOfWeek').count()\n", "website.groupby(by='DayOfWeek').max() # once grouped, calculate the maximum per group\n", "website.groupby(by='DayOfWeek').min()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.display import display\n", "\n", "# Now we are ready to manipulate the data:\n", "average_visits_per_day = website.groupby('DayOfWeek').mean() \n", "display(average_visits_per_day)\n", "\n", "# The 'Year' column is not needed, and will cause problems \n", "# with our visualization. Since it is \"2009\" for all rows, \n", "# it also provides little value.\n", "website = website.drop(columns='Year')\n", "average_visits_per_day = average_visits_per_day.drop(columns='Year')\n", "print('After removing the \"Year\" column there is only 1 column of data:')\n", "display(average_visits_per_day)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 6: communicate your message clearly\n", "\n", "A bar plot can be used to show these results graphically. Though, a table, as above, is perfectly valid, and actually meets the goals we set in step 1. We will come back to this point later.\n", "\n", "```python\n", "# Plot the data in a horizontal bar (barh)\n", "average_visits_per_day.plot.barh(figsize=(15, 4));\n", "```\n", "\n", "The results are more clearly communicated with horizontal bars (use the ``barh`` command), than with vertical bars. Try using vertical bars, by modifying the above code and simply use ``.bar(...)``. Why is the ``barh`` command preferred?\n", "\n", "##### Final checks\n", "1. The most visits, *on average*, occur on a \\_\\_\\_\\_day.\n", "2. If the website should go offline for an entire day for maintenance, the best day to pick would be a \\_\\_\\_\\_day.\n", "3. Is the bar plot strictly necessary in this case study when compared to the data table? *In other words*, what value does the bar plot provide, if any, that is not provided by the table?\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Bar plot theory (can be skipped initially)\n", "\n", "\n", "![alt=\"Bar plot terminology\"](images/summarizing-data/barplot-example-expenses.png)\n", "\n", "* A bar plot should be used when there are many categories.\n", "* The category axis can be shown on the vertical axis. 
In this case we call it a horizontal bar chart (`barh`), since the bars are horizontal. It makes the chart labels more readable; but a \"regular\" bar plot with vertical bars (`bar`) and labels on the horizontal axis is also possible.\n", "* The text can sometimes be added *inside the bar* if there is restricted space.\n", "* An interesting characteristic of a bar plot is that **the *interpretation* of a bar plot does not differ if the category axis is reordered.** It can be easier to interpret the plot with a particular ordering; however, the interpretation won't be *different* if using a different ordering. The example below demonstrates this: the interpretation has not changed, but the visualization is far more effective.\n", "\n", "### Definition of a bar plot\n", "\n", "It seems strange to end this section with a definition of a bar plot. But perhaps it isn't: you see these types of plots everywhere, especially in the media, yet it is hard to describe what they actually are. Here's one definition:\n", "\n", "> The bar plot is a univariate plot on a two-dimensional axis. The axes are not called x- or y-axes. Instead, one axis is called the ***category axis***, showing the category name, while the other, the ***value axis***, shows the value of each category as a bar.\n", "\n", "\n", "### Enrichment\n", "\n", "\n", "\n", "Bar plots are notorious for their use of excessive 'ink': using many pixels to show a small amount of 'data'. We should aim to maximize the data:ink ratio, meaning that high quantities of data are represented with as few pixels as possible. Bar plots do not do that, and so are not always a suitable plot. \n", "\n", "Read more [about barplots here](https://learnche.org/pid/data-visualization/bar-plots).\n", "\n", "\n", "### Ordering the bars in the bar plot\n", "\n", "The categories used in a bar plot can often be rearranged without 'breaking' the message; an example is shown below.\n", "\n", "This happens because each bar is independent of the others. If you re-order them, the information shown - based on the length of the bars on the value axis - is still the same.\n", "\n", "This does not mean you should show the bar plot in a random order. By ordering the information you make the plot easier to read, and in an underhanded way you subtly alter how the user reads the message. You can use this power to your advantage to make the message clearer, but you can also use it to frustrate your reader. Rather do the former, and not the latter.\n", "\n", "A terrible use of bar plots is to show time-ordered data. For example, the profit of a business is shown for each year, where the profit is the bar's height. You can of course reorder the bars, but then you break the time-based message. 
***Simple rule***: if you can reorder the bars, and the ultimate message is still there, then a bar plot is OK.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from matplotlib import pyplot\n", "%matplotlib inline\n", "\n", "fig = pyplot.figure(figsize=(15, 4));\n", "pyplot.subplots_adjust(top=0.8, bottom=0.1, left=0.1, right=0.9, hspace=0.5, wspace=0.4);\n", "\n", "# Left plot: subplot(1,2,1) means: create 1 row, with 2 columns, and draw in the 1st box\n", "average_visits_per_day.plot.barh(ax=pyplot.subplot(1, 2, 1));\n", "\n", "# Right plot: subplot(1,2,2) means: create 1 row, with 2 columns, and draw in the 2nd box\n", "# Take the same grouped data from before, except sort it now:\n", "sorted_data = average_visits_per_day.sort_values('Visits', ascending=False) \n", "sorted_data.plot.barh(ax=pyplot.subplot(1, 2, 2));\n", "\n", "pyplot.suptitle(('Showing a bar plot with no ordering (left) and '\n", " 'with ordering (right).\\n The message is clearer; '\n", " 'and our objective is reached.'), fontsize=16);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![alt=\"Bar plot explanation\"](images/summarizing-data/barplot-explanation.png)\n", "\n", "\n", "The plot on the right helps make our objective clearer. Recall, it was \"to find which day of the week is the most popular, and which is the least popular\" for our website. \n", "\n", "The plot on the left can answer those questions, but the plot on the right is far more effective, and easier to read." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ➜ Challenge yourself: PCA plots\n", "\n", "PCA (principal component analysis) models can be used to reduce multiple columns down to a smaller number. For example, summarize 10 columns of data with 2 columns. In this situation we say we have 2 components.\n", "\n", "The purpose of this exercise is not to explain PCA, but rather to visualize how much variation is explained by component 1, component 2, *etc*.\n", "\n", "Use this template code:\n", "```python\n", "from matplotlib import pyplot\n", "%matplotlib inline\n", "import pandas as pd\n", "from sklearn.preprocessing import StandardScaler\n", "from sklearn.decomposition import PCA\n", "\n", "# Get the subset of raw data\n", "peas = pd.read_csv('https://openmv.net/file/peas.csv')\n", "judges = peas.loc[:, 'Flavour': 'Hardness']\n", "\n", "# Preprocess the raw data: center and scale it\n", "scalar = StandardScaler(copy=True, with_mean=True, with_std=True)\n", "scalar.fit(judges)\n", "judges_centered_scaled = scalar.transform(judges)\n", "\n", "# Fit a PCA model to the data, using 4 components\n", "A = 4\n", "pca = PCA(n_components = A)\n", "pca.fit(judges_centered_scaled) \n", "\n", "# Prepare the plot. Note: pyplot.bar takes the bar heights via its 'height' argument.\n", "x = components = list(range(1, A+1))\n", "y = pca.explained_variance_ratio_\n", "tick_labels = [int(i) for i in x]\n", "pyplot.bar(x = ___, height = ___, tick_label = ___)\n", "```\n", "\n", "##### Answer these questions:\n", "\n", "1. Complete the code; how much variation is explained with 1 component, approximately, from the plot?\n", "2. The variation explained is shown per component. Change the code to show the *cumulative* variation explained by the components.\n", "3. Again modify the code to skip the [preprocessing step](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html). In other words, just fit the PCA model to the 6 columns without centering and scaling. What variation is now explained? 
***Note***: what you see here is exceptional - normally preprocessing has a substantial effect on the fitting." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Histograms\n", "\n", "\n", "\n", "In this section we see how to create histograms, which are just another form of [bar plot](#Bar-plots) where instead of a categorical axis, we have a continuous numerical axis.\n", "\n", "Like bar plots, histograms are fairly simple to understand, and you don't need to explain how to interpret them. \n", "\n", "Again, we will use a case study to introduce the topic." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Following the 6 data science steps of **Define**, **Get**, **Explore**, **Clean**, **Manipulate**, **Communicate**,\n", "we want to look at a data set where students were allowed unlimited time to write an exam. \n", "\n", "In the [next notebook](https://yint.org/pybasic10) we will look at the goal of determining whether students who took a longer time to finish actually scored a higher grade. For now, our objective is quite simple: visualize the distribution (spread) of the two variables:\n", "1. the time to write the exam\n", "2. the grade (out of 100) achieved on the exam\n", "\n", "So the above is our (1) definition, and (2) we get the data from a website where this dataset has already been prepared for us (https://openmv.net/info/unlimited-time-test). We will (3) explore the data, and notice we don't really need to (4) clean it, since it has been done for us already. We will (5) manipulate the data into a histogram and visualize that to (6) communicate our goal: what the spread of the data looks like.\n", "\n", "You can [read the paper/presentation](https://yint.org/static/unlimited-time/index.html) that was generated from this data. This is one of the 8 data sets used to show that unlimited time to write an exam does not necessarily lead to a higher grade." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 1, 2 and 3: Define, Get and Explore\n", "\n", "If you visit [the website page](https://openmv.net/info/unlimited-time-test), you can right-click and download the CSV file to your computer. But you can also directly import the file from the URL.\n", "\n", "Use the `.describe()` function once you have loaded the data to get a summary description." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Standard imports required to show plots and tables \n", "from matplotlib import pyplot\n", "from IPython.display import display\n", "%matplotlib inline\n", "import pandas as pd\n", "data = pd.read_csv('https://openmv.net/file/unlimited-time-test.csv')\n", "\n", "# Add a single line of code below, using the .describe function\n", "# to get a summary of the data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should get a summary that looks like this:\n", "\n", "*(table: output of ``data.describe()``)*\n", "\n", "Note that it matches the data source, showing there are 80 rows (each row corresponds to a student). Answer these questions:\n", "\n", "1. What was the average grade (out of 100) for this exam?\n", "2. What was the shortest duration a student spent on the exam?\n", "3. The student who wrote for the longest time wrote for ____ minutes.\n", "4. Did the student with the lowest grade pass or fail the exam?\n", "5. Does the median grade correspond closely with the average grade?\n", "6. 
What else can you see in the description table that is interesting (*hint*: look at the 25th and 75th percentiles)?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 4 and 5: Clean and Manipulate\n", "\n", "Step 4 is not needed (cleaning the data), but we will look at step 5 next, which is to manipulate the data to draw a histogram.\n", "\n", "A histogram can be drawn directly in Matplotlib with the ``pyplot.hist(...)`` function, where the only input that is needed is a vector of data.\n", "\n", "We want to get 2 histograms: one for the `Grade` values and another for the `Time` taken to write the exam.\n", "\n", "You can extract each vector using Pandas' ability to access a column from the table: ``data['Grade']`` and ``data['Time']`` will each return their respective columns." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Will plot both histograms in the same \"axis\" (graph)\n", "pyplot.hist(data['Grade']);\n", "pyplot.hist(data['Time']);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above is very basic, but actually confusing. There is no x-axis label, and we are plotting two different things (grades and time) on the x-axis.\n", "\n", "Let's clean this up and create two separate plots, side-by-side. Below you see the creation of what is called \"subplots\": multiple plots within the same figure.\n", "\n", "The `pyplot.subplot(nrows, ncols, index)` command will create ``nrows`` rows of plots and ``ncols`` columns, and will draw in the ``index`` space. For example ``pyplot.subplot(2, 3, 1)`` will create 2 rows and 3 columns (thus 6 subplots), and draw in the 1st subplot. Once you are done with that plot, you write ``pyplot.subplot(2, 3, 2)`` and it will go to the next subplot. \n", "\n", "Let's try this out:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We want our histograms in separate axes:\n", "pyplot.figure(figsize=(15, 5)) # (width, height) = (15, 5)\n", "\n", "# Start with the 1st plot:\n", "pyplot.subplot(1, 2, 1)\n", "pyplot.hist(data['Grade']);\n", "pyplot.title('Histogram of the student grades')\n", "pyplot.xlabel('Grade (percentage)')\n", "\n", "# Then draw the second one\n", "pyplot.subplot(1, 2, 2)\n", "pyplot.hist(data['Time'])\n", "pyplot.title('Histogram of the time taken')\n", "pyplot.xlabel('Time (minutes)');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you see, you get the same outputs as the graphs above, but now in two separate axes (1 \"row\" and 2 \"columns\" of graphs), each with their appropriate labels.\n", "\n", "The histogram is calculated by dividing the range of the data (from low to high) into a certain number of bins, or sub-ranges. The bin ranges are equally spaced. For each bin we count the number of data points that lie in that sub-range. For example, there seem to be 3 students that spent between 100 and 125 minutes to write the exam.\n", "\n", "We can change the number of bins, and this alters the shape of the histogram. We also add another parameter, ``edgecolor``, which improves the readability."
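, "\n", "If you want those bin counts as numbers rather than as a picture, NumPy computes the same thing. A minimal sketch, assuming the ``data`` dataframe from above:\n", "\n", "```python\n", "import numpy as np\n", "\n", "# Count how many values fall in each of 10 equally spaced bins\n", "counts, edges = np.histogram(data['Time'], bins=10)\n", "print(counts)  # the number of observations per bin\n", "print(edges)   # the 11 bin boundaries\n", "```"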
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We want our histograms in separate axes:\n", "pyplot.figure(figsize=(15, 5)) # (width, height) = (15, 5)\n", "\n", "# Create 3 subplots, side-by-side\n", "pyplot.subplot(1, 3, 1)\n", "pyplot.hist(data['Grade'], bins=10, edgecolor='white');\n", "pyplot.title('Student grades: 10 bins')\n", "pyplot.xlabel('Grade (percentage)')\n", "\n", "pyplot.subplot(1, 3, 2)\n", "pyplot.hist(data['Grade'], bins=20, edgecolor='white');\n", "pyplot.title('Student grades: 20 bins')\n", "pyplot.xlabel('Grade (percentage)')\n", "\n", "pyplot.subplot(1, 3, 3)\n", "pyplot.hist(data['Grade'], bins=30, edgecolor='white');\n", "pyplot.title('Student grades: 30 bins')\n", "pyplot.xlabel('Grade (percentage)');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Some things to note:\n", "\n", "* The `subplot` command with 1 row and 3 columns is used to visualize the differences side-by-side. Our eyes are very good at picking out differences this way. When you want to communicate differences and contrasts, this is an effective way to do so.\n", "* Compare the 1st plot here (with the white edges) to the one further up the page. The white edges help highlight the bins and improve readability. It also emphasizes the point just made. It is hard to scroll up-and-down the page to make comparisons.\n", "* As you add more bins there might be some sub-ranges where there are no data. These lead to the appearance of gaps in the histogram.\n", "\n", "\n", "\n", "* Notice how the shape of the histogram can change quite dramatically: the 10 and 20-bin histogram still look similar, but the one with 30 bins does not. You can use the number of bins to - unjustifiably - alter your message about the distribution of the data. Use it to \"tune\" your message, but be careful of using an excessive number of bins, leading to a sparse histogram with many gaps.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Enrichment\n", " \n", "1. Try entering this command below: \n", "```python\n", "counts, bins, patches = pyplot.hist(data['Grade'], bins=20, edgecolor='white')\n", "```\n", "and inspect the value of ``counts`` and ``bins`` that you get as output. What do you think these are? Compare them to the plots above.\n", "\n", "2. Read this article of [5 important data distributions](https://www.kdnuggets.com/2019/07/5-probability-distributions-every-data-scientist-should-know.html) you should be aware of." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Try the above code here, and interpret the output\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 6: Communicating your results\n", "\n", "We need to wrap up our workflow with the final step of communicating our goal: what does the spread of the data looks like?\n", "\n", "We won't provide a definitive answer, rather, we will give some phrases and sentences below that you might use if you had to write this in a report. 
***Which of these are correct***, and ***which are incorrect***?\n", "\n", "* The grades of the students are uniformly distributed (spread).\n", "* The time taken is approximately symmetrically spread, with a center of 190 minutes.\n", "* There is a long tail observed in the student grades, with a tail (skew) to the right.\n", "* There is a long tail observed in the student grades, with a tail (skew) to the left.\n", "* There appear to be outliers in the `Time` taken variable, with some students taking an exceptionally short time.\n", "* The median grade seems to be between 70 and 75%, and matches the value in the ``describe()`` table.\n", "\n", "### Summary\n", "\n", "You should, from the above exercise, see the value of a histogram. New terminology is ***highlighted***.\n", "\n", "\n", "\n", "* Histograms are a graphic summary of the spread of a single variable. We sometimes use the word \"***distribution***\" instead of \"spread\". The word ***scale*** is also used by statisticians for this concept.\n", "* We get a good idea of the center of the data, also called the ***location***.\n", "* We can, depending on the number of bins, detect if there are outliers.\n", "* It indicates if there is skew in the data; does the histogram have a tail to one side?\n", "* We also see how many 'humps' there are in the data. Is everything collected in one hump, or are there two humps (peaks)? This webpage shows an example of a distribution with two peaks." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ➜ Challenge yourself: Random walks\n", "\n", "Imagine a person walking. Every step forward also includes a small random amount to the left (negative values) or right (positive values).\n", "\n", "We can model these values with numbers from a normal distribution centered at zero: if the person were walking perfectly straight ahead, then, viewed from the back, their position would stay at zero.\n", "\n", "If they have had too much to drink, their steps might deviate a bit more. We can increase the standard deviation of the normal distribution to make the distribution wider.\n", "\n", "##### Note \n", "The study of [random walks](https://en.wikipedia.org/wiki/Random_walk) helps explain many observed phenomena in fields as diverse as ecology, psychology, computer science, physics, chemistry, biology, as well as economics. There are some interesting visualizations on that Wikipedia page.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import norm\n", "\n", "# 20 steps for a regular person, showing the deviation to the \n", "# left (negative) or to the right (positive) when they are \n", "# walking straight. Values are in centimeters.\n", "regular_steps = norm.rvs(loc=0, scale=5, size = 20)\n", "print('Regular walking: \\n{}'.format(regular_steps))\n", "\n", "# Consumed too much? Standard deviation (scale) is larger:\n", "deviating_steps = norm.rvs(loc=0, scale=12, size = 20)\n", "print('Someone who has consumed too much: \\n{}'.format(deviating_steps))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Questions\n", "\n", "1. Visualize the histogram of 1000 steps of someone who is walking *normally* 😃.\n", "2. Visualize, in a subplot, side-by-side, the histogram of someone who has consumed too much.\n", "3. 
Looking ahead: in the [next module](https://yint.org/pybasic10) we will show what this random walk looks like in a time-series plot.\n", "\n", "Both histograms should be centered at zero. Give each histogram a title, and a label on the x-axis, including units of centimeters.\n", "\n", "***Hint***: To create a Pandas series of the values, remember [from worksheet 7](https://yint.org/pybasic07) that you can do that as follows:\n", "```python\n", "import pandas as pd\n", "steps = pd.Series(data = ...)\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Put your code here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A person walking in a random way accumulates these deviations. If they have deviated 30 cm to the left, they are at `-30`; if their next step sways 10 cm to the right, they will be at `-20`. \n", "\n", "Modify your code to show the histogram of the cumulative sum of the deviations! You only need to make a very small modification to do this - thank you Pandas!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ➜ Challenge yourself: Understanding the normal distribution\n", "\n", "Use these functions from Pandas (we introduced them in [worksheet 7](https://yint.org/pybasic07)).\n", "\n", "In general, the same things you can calculate in NumPy, you can repeat in Pandas:\n", "\n", "| NumPy | Pandas | Description |\n", "|-----------|-----------|-------------------------|\n", "| np.mean | df.mean | Arithmetic average |\n", "| np.median | df.median | Median value |\n", "| np.std | df.std | Standard deviation |\n", "| - | df.mad | Mean absolute deviation |\n", "| np.var | df.var | Variance |\n", "| np.min | df.min | Minimum value |\n", "| np.max | df.max | Maximum value |\n", "
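\n", "\n", "For example, a minimal check (with a small, made-up series) that the NumPy and Pandas versions agree:\n", "\n", "```python\n", "import numpy as np\n", "import pandas as pd\n", "\n", "s = pd.Series([1.0, 2.0, 3.0, 4.0])\n", "print(np.mean(s), s.mean())      # the same arithmetic average\n", "print(np.median(s), s.median())  # the same median\n", "```\n", "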
\n", "\n", "\n", "Create a sample of $n=10000$ values from the normal distribution. \n", "```python\n", "from scipy.stats import norm\n", "import pandas as pd\n", "n = 10000\n", "normal_values = pd.Series(norm.rvs(loc=7.5, scale=1.0, size = n))\n", "```\n", "\n", "1. What is the mean of the above values? Run the code several times, since everytime you will get a different sample of values.\n", "2. Does the standard deviation match with the ``scale`` input provided?\n", "3. What is the median value?\n", "4. Use the `.describe()` function and see what the 25th percentile is. Calculate this value yourself. In Pandas the function to do so is called `quantile(q=0.25)`.\n", "5. Repeat this for the 75th percentile. Therefore, between which two values do 50% of the data from a normal distribution lie?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import norm\n", "import pandas as pd\n", "\n", "n = 10000\n", "normal_values = pd.Series(norm.rvs(loc=0, scale=1, size = n))\n", "normal_values.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ➜ Challenge yourself: Can you see normal distributions using a histogram?\n", "\n", "Many statistical tools require the data to be normally distributed. \n", "\n", "Novices fall in the trap of plotting the histogram, and saying that it looks to be normally distributed, and then keep going with their next steps. There is a better way to test this,which is shown in [the next module](https://yint.org/pybasic10).\n", "\n", "But for now, try to run this code, to convince yourself that histograms are not a great tool to visualize if data are normally distributed. \n", "\n", "***Run the code several times***. Sometimes the normal distribution does not look bell-curve shaped, and the others do; but also the other way around." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import f, norm, t\n", "import pandas as pd\n", "from matplotlib import pyplot\n", "%matplotlib inline\n", "\n", "N = 100\n", "bins = 10\n", "\n", "true_normal = norm.rvs(size=N)\n", "f_distributed = f.rvs(dfn=50, dfd=30, size=N)\n", "t_distributed = t.rvs(df=5, size=N)\n", "\n", "norm_values = pd.Series(data=true_normal)\n", "f_values = pd.Series(data=f_distributed)\n", "t_values = pd.Series(data=t_distributed)\n", "\n", "# Plotting\n", "# We want our histograms in separate axes:\n", "pyplot.figure(figsize=(15, 5)) # (width, height) = (15, 5)\n", "\n", "# Create 3 subplots, side-by-side\n", "pyplot.subplot(1, 3, 1)\n", "pyplot.hist(norm_values, bins=bins, edgecolor='white');\n", "pyplot.title('Values from a normal distribution')\n", "\n", "pyplot.subplot(1, 3, 2)\n", "pyplot.hist(f_values, bins=bins, edgecolor='white');\n", "pyplot.title('Values from an f-distribution')\n", "\n", "pyplot.subplot(1, 3, 3)\n", "pyplot.hist(t_values, bins=bins, edgecolor='white');\n", "pyplot.title('Values from a t-distribution');\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ➜ Challenge yourself: Dice averages are normally distributed\n", "\n", "A common concept in a first statistics course is to talk about the central limit theorem. 
As you take averages of more and more independent data values, the average becomes (approximately) normally distributed, even if the original values are not themselves normally distributed.\n", "\n", "We know from the above challenge we should not judge normality by plots, but here we will show how the data ***appear*** to become more normal. We are not testing, or saying, they are normal.\n", "\n", "\n", "\n", "If you throw a dice, each throw is ***independent*** of the next one: the value you throw now does not depend on the value before, and it cannot influence the value after. This is another important statistical concept which is essential for many methods (e.g. for t-tests) to work successfully. It is often ignored in practice, leading to misleading results.\n", "\n", "Each throw of the dice comes from the [***uniform*** distribution](https://learnche.org/pid/univariate-review/uniform-distribution). Each value of 1, 2, 3, 4, 5 or 6 has a uniform (or equal) chance of appearing.\n", "```python\n", "import numpy as np\n", "\n", "# Throw a dice 100 times. Values are between 1 (inclusive) and 7 (exclusive)\n", "np.random.randint(1, 7, size=100)\n", "```\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Try throwing the dice 60000 times.\n", "dice_throws = ___\n", "\n", "# Count, to ensure there are approximately\n", "# 10000 values of each number present:\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will take the average of 2 throws, and we will do this 10000 times:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "\n", "N = 10000\n", "throws = 2\n", "\n", "# One row per repetition; one column per throw included in the average\n", "dice_throws = np.random.randint(1, 7, size=N * throws)\n", "dice_throws = dice_throws.reshape((N, throws))\n", "average_throws = pd.Series(dice_throws.mean(axis=1))\n", "ax = average_throws.hist()\n", "ax.set_xlabel(\"Average of {} throws\".format(throws));\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "Repeat the above code to show, in a sequence of subplots, how the average becomes normally distributed as you add more and more values together in the average. \n", "\n", "At the end, you would like to get something like this, which [was generated in R](https://learnche.org/pid/univariate-review/normal-distribution-and-checking-for-normality).\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Matplotlib for visualization\n", "\n", "Behind Pandas' plotting functions lies the well-known Python library called `matplotlib`. It is a core library you must eventually understand, if only the concepts behind it. MATLAB users will have no problem with it: it uses the same principle of adding successive layers to a base figure, and breaking down your plot in terms of a figure, axes and other layers of data points and annotations.\n", "\n", "1. An excellent tutorial - one of the best I have found - to understand these concepts: https://www.datacamp.com/community/tutorials/matplotlib-tutorial-python\n", "2. 
A quicker introduction, if you need to find the right command, is in this 4-part notebook:\n", " * [Basic figures](https://nbviewer.jupyter.org/gist/manujeevanprakash/138c66c44533391a5af1)\n", " * [Style and colour](https://nbviewer.jupyter.org/gist/manujeevanprakash/7dc56e7906ee83e0bbe6), markers, line thickness and patterns\n", " * [Annotation, axis range, and tick marks](https://nbviewer.jupyter.org/gist/manujeevanprakash/7cdf7d659cd69d0c22b2)\n", " * [Log axes, aspect ratio](https://nbviewer.jupyter.org/gist/manujeevanprakash/7d8a9860f8e43f6237cc)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ">***Feedback and comments about this worksheet?***\n", "> Please provide any anonymous [comments, feedback and tips](https://docs.google.com/forms/d/1Fpo0q7uGLcM6xcLRyp4qw1mZ0_igSUEnJV6ZGbpG4C4/viewform)." ] } ], "metadata": { "hide_input": false, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": true, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "221.984px" }, "toc_section_display": true, "toc_window_display": true }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }