{ "cells": [ { "cell_type": "markdown", "metadata": { "toc": true }, "source": [ "

Table of Contents

\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> All content here is under a Creative Commons Attribution [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and all source code is released under a [BSD-2 clause license](https://en.wikipedia.org/wiki/BSD_licenses).\n", ">\n", ">Please reuse, remix, revise, and [reshare this content](https://github.com/kgdunn/python-basic-notebooks) in any way, keeping this notice." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Course overview\n", "\n", "This is the third module of several (11, 12, 13, 14, 15 and 16), which refocuses the course material in the [prior 10 modules](https://github.com/kgdunn/python-basic-notebooks) in a slightly different way. It places more emphasis on\n", "\n", "* dealing with data: importing, merging, filtering;\n", "* calculations from the data;\n", "* visualization of it.\n", "\n", "In short: ***how to extract value from your data***.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Module 13 Overview\n", "\n", "This is the third of 6 modules. In this module we will cover\n", "\n", "* Becoming more comfortable with Pandas data processing\n", "* Basic plotting with Pandas\n", "\n", "**Requirements before starting**\n", "\n", "* Have your Python installation working as you had for modules 11 and 12, and also the Pandas and Plotly libraries installed." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A general work flow for any project where you deal with data\n", "\n", "***After years of experience, and working with data you will find your own approach.***\n", "\n", "Here is my 6-step approach (not linear, but iterative): **Define**, **Get**, **Explore**, **Clean**, **Manipulate**, **Communicate**\n", "\n", "1. **Define**/clarify the *objective*. Write down exactly what you need to deliver to have the project/assignment considered as completed.\n", "\n", " Then your next steps become clear.\n", " \n", "2. 
Look for and **get** your data (or it will be given to you by a colleague). Since you have your objective clarified, it is now clearer which data, and how much data, you need.\n", "\n", "3. Then start looking at the data. Are the data what we expect? This is the **explore** step. Use plots and table summaries.\n", "\n", "4. **Clean** up your data. This step and the prior step are iterative. As you explore your data you notice problems and bad data, you ask questions, and you gain a bit of insight into the data. You clean, and re-explore, but always with the goal(s) in mind. Or perhaps you realize this isn't the right data to reach your objective. You need other data, so you iterate.\n", "\n", "5. Modify, make calculations from, and **manipulate** the data. This step is also called modeling if you are building models, but sometimes you are simply summarizing your data to meet the objective.\n", "\n", "6. From the data models and summaries and plots you start extracting the insights and conclusions you were looking for. Again, you can go back to any of the prior steps if you realize you need that to better achieve your goal(s). You **communicate** clear visualizations to your colleagues, with crisp, short text explanations that meet the objectives.\n", "\n", "___\n", "\n", "The above work flow (also called a '*pipeline*') is not new or unique to this course. Other people have written about similar approaches:\n", "\n", "* Garrett Grolemund and Hadley Wickham in their book on R for Data Science have this diagram (from this part of their book). It matches the above, with slightly different names for the steps. It misses, in my opinion, the most important step of ***defining your goal*** first.\n", "\n", "\n", "___\n", "* Hilary Mason and Chris Wiggins in their article on A Taxonomy of Data Science describe their 5 steps in detail:\n", " 1. **Obtain**: pointing and clicking does not scale. 
In other words, pointing and clicking in Excel, Minitab, or similar software is OK for small data and quick analyses, but does not scale to large data, nor to repeated data analysis.\n", " 2. **Scrub**: the world is a messy place\n", " 3. **Explore**: you can see a lot by looking\n", " 4. **Models**: always bad, sometimes ugly\n", " 5. **Interpret**: \"the purpose of computing is insight, not numbers.\"\n", " \n", " You can read their article, as well as this view on it, which is a bit more lighthearted.\n", " \n", "___\n", "\n", "What has been your approach so far?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Visualization with Pandas\n", "\n", "In this module we want to show how you can quickly create visualizations of your data with Pandas.\n", "\n", "But first, you should check if you are using the appropriate visualization tool. This website helps you select: https://www.data-to-viz.com\n", "\n", "In this module we will consider:\n", "\n", "\n", "* box plots\n", "* bar plots (histograms only; general bar plots will come later)\n", "* time-series (sequence) plots\n", "* scatter plots (plot one column against another column)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Box plots (and histograms): using the Ammonia case study\n", "\n", "We will implement the 6-step workflow suggested above.\n", "\n", "\n", "### Defining the problem (step 1)\n", "Our (1) **objective** is to \n", "\n", ">Describe what time-based trends we see in the ammonia concentration of a wastewater stream. We have a single measurement, taken every six hours. \n", "\n", "We will first see how we can summarize the data.\n", "\n", "### Getting the data (step 2)\n", "\n", "The next step is to (2) **get** the data. 
We have a data file from [this website](https://openmv.net/info/ammonia) where there is 1 column of numbers and many rows of ammonia measurements.\n", "\n", "### Overview of remaining steps\n", "\n", "Steps 3 and 4 of exploring the data are often iterative and can happen interchangeably. We will (3) **explore** the data and check whether our expectation that ammonia concentrations should be in the range of 15 to 50 mmol/L holds. We might sometimes have to (4) **clean** up the data if there are problems.\n", "\n", "We will also summarize the data by doing various calculations, also called (5) **manipulations**, and we will (6) **communicate** what we see with plots." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's get started. There are 3 ways to **get** the data:\n", "1. Download the file to your computer\n", "2. Read the file directly from the website (no proxy server)\n", "3. Read the file directly from the website (you are behind a proxy server)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Import the plotly library\n", "import plotly.graph_objects as go\n", "import plotly.io as pio\n", "pio.renderers.default = \"iframe\" # \"notebook\" # jupyterlab" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "pd.options.plotting.backend = \"plotly\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Loading the data from a local file, if you have it saved to your own computer\n", "data_file = r'C:\\location\\of\\file\\ammonia.csv'\n", "waste = pd.read_csv(data_file)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Read the CSV file directly from a web server:\n", "import pandas as pd\n", "waste = pd.read_csv('https://openmv.net/file/ammonia.csv')\n", "\n", "# If you are on a work computer behind a proxy server, you\n", "# have to take a 
few more steps. Uncomment these lines of code.\n", "#\n", "# import io\n", "# import requests\n", "# proxyDict = {\"http\" : \"http://replace.with.proxy.address:port\"}\n", "# url = \"http://openmv.net/file/ammonia.csv\"\n", "# s = requests.get(url, proxies=proxyDict).content\n", "# web_dataset = io.StringIO(s.decode('utf-8'))\n", "# waste = pd.read_csv(web_dataset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Show only the first few lines of the data table (by default it will show 5 lines)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "waste.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Print the last 10 rows of the data to the screen:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exploration (step 3)\n", "\n", "Once we have opened the data we check with the ``.head(...)`` command if our data are within the expected range. At least the first few values. Similarly for the ``.tail(...)`` values.\n", "\n", "Those two commands are always good to check first.\n", "\n", "Now we are ready to move on, to explore further with the ``.describe(...)`` command." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Run this single line of code, and answer the questions below\n", "waste.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Check your knowledge\n", "\n", "1. There are \\_\\_\\_\\_\\_\\_ rows of data. Measured 6 hours apart, this represents \\_\\_\\_\\_\\_\\_ days of sensor readings.\n", "2. We expected ammonia concentrations to typically be in the range of 15 to 50 mmol/L. Is that the case from the description?\n", "3. What is the average ammonia concentration?\n", "4. Sort the ammonia values from low to high, and store the result in a new variable called ``ammonia_sorted``.\n", "\n", "5. 
What does the 25th percentile mean? Below the 25th percentile value we will find \_\_\_\_% of the values, and above the 25th percentile we find \_\_\_\_% of the values. In this case that means the 25th percentile will be close to the value of the 360th entry in the sorted vector of data. Try it:\n", "\n", " ``ammonia_sorted[358:362]``\n", "\n", "6. What does the 75th percentile mean? Below the 75th percentile value we will find \_\_\_\_% of the values, and above the 75th percentile we find \_\_\_\_% of the values. In this case that means the 75th percentile will be close to the value of the 1080th entry in the sorted vector of data. Try it:\n", "\n", " ``ammonia_sorted[1078:1082]``\n", "\n", "7. So therefore: between the 25th percentile and the 75th percentile, we will find \_\_\_\_% of the values in our vector. \n", "\n", "8. Given this knowledge, does this match the expectation that our ammonia concentration values should lie between 15 and 50 mmol/L?\n", "\n", "And that is the key reason why you are given the 25th and 75th percentile values. Half of the data in the sorted data vector lie between these two values: 25% of the data lie below the 25th percentile, the other 25% lie above the 75th percentile, and the bulk of the data lie between these two values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Add your code here to answer the above questions.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Introducing the box plot\n", "\n", "We have looked at the extremes with ``.head()`` and ``.tail()``, and we have learned about the mean and the median. \n", "\n", "What about the **typical** values? What do we even mean by _typical_ or _usual_ or _common_ values? Could we use the 25th and 75th percentiles to help guide us?\n", "\n", "One way to get a feel for that is to plot these numbers: the 25th, 50th and 75th percentiles. Let's see how, by using a **boxplot**."
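Before moving to the plot, the percentile questions above can be checked by hand on a tiny example. A minimal sketch, using a small made-up series (the real exercise uses the `Ammonia` column instead):

```python
import pandas as pd

# A small made-up series, easy to sort and reason about by eye
values = pd.Series([10, 20, 30, 40, 50, 60, 70, 80])
ammonia_sorted = values.sort_values().reset_index(drop=True)

# Pandas interpolates between the sorted entries, so the 25th and
# 75th percentiles land between neighbouring values
q25 = values.quantile(0.25)
q75 = values.quantile(0.75)
print(q25, q75)  # 27.5 62.5

# Half (50%) of the values lie between these two numbers
middle = ammonia_sorted.between(q25, q75).mean()
print(middle)  # 0.5
```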
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "waste.boxplot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In general, it is worth taking a look at the documentation for the function you are using here. This is available on the Pandas website:\n", "\n", "https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.boxplot.html\n", "\n", "which you can quickly find by searching for `pandas boxplot`, and the first link as result will likely be the one above." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The boxplot gives you an idea of the distribution, the spread, of the data.\n", "\n", "The key point is the orange center line, the line that splits the centre square (actually it is a rectangle, but it looks squarish). That horizontal line is the median.\n", "\n", "It is surprising to see that middle chunk, that middle 50% of the sorted data values fall in such a narrow range of the rectangle.\n", "![alt=\"Boxplot for the percentiles](https://raw.githubusercontent.com/kgdunn/python-basic-notebooks/master/images/summarizing-data/percentiles-ammonia.png)\n", "\n", " The bottom 25% of the data falls below the box, and the top 25% of the data falls above the box. That is indicated to some extent by the whiskers, the lines leaving the middle square/rectangle shape. The whiskers tell how much spread there is in our data. We we see 2 single circles below the bottom whisker. These are likely *outliers*, data which are unusual, given the context of the rest of the data. More about *outliers* later.\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let us try plotting a histogram of these same data.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "waste.hist()\n", "\n", "# Search for the documentation for this function. 
Adjust the number of bins to 30.\n", "# Add code here for a histogram with 30 bins.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Make the figure less wide and less high:\n", "waste.hist(width=400, height=400)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# AIM: create a histogram, with extra annotations\n", "# `fig` is a Plotly figure:\n", "fig = waste.hist(nbins=30)\n", "\n", "# With this variable you can further manipulate the plot. Use it, for example, to add an x-axis label:\n", "fig.update_layout(xaxis_title_text='Ammonia concentration [mmol/L]')\n", "# \n", "# Update the y-axis label here:\n", "\n", "\n", "\n", "# Superimpose the 25th and the 75th percentiles as vertical lines (vlines) on the histogram\n", "fig.add_vline(x=waste['Ammonia'].quantile(0.25), line_color=\"purple\", line_width=1, line_dash='solid')\n", "fig.add_vline(x=waste['Ammonia'].quantile(0.75), line_color=\"purple\", line_width=1, line_dash='solid' )\n", "fig.add_hline(y= 80, line_dash='longdash', annotation_text=\"Cutoff\")\n", "fig.add_vline(\n", " x=waste['Ammonia'].quantile(0.50), \n", " line_color=\"orange\", \n", " line_width=3, \n", " annotation_text=\"Median\", \n", " line_dash='longdash') # 'dot', 'dash', 'longdash', 'dashdot', 'longdashdot'\n", "\n", "\n", "# Make the range bigger (e.g. 
to match requirements in a technical report)\n", "fig.update_layout(xaxis_range=[0, 70],\n", " yaxis_range=[0, 200])\n", "display(fig)\n", "\n", "# NOTE: the 0.5 quantile is the same as the 50th percentile, which is the same as the median.\n", "print(f'The 50th percentile (also called the median) is at: {waste[\"Ammonia\"].quantile(0.5)}') " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All of this you can get from this single table which you can create with ``.describe()``:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Which brings us to two important points:\n", "1. Tables **are** (despite what some people might say) a very effective form of summarizing data\n", "2. Start your data analysis with the ``.describe()`` function to get a (tabular) feel for your data.\n", "\n", "\n", "### Looking ahead\n", "\n", "We have not solved our complete objective yet. Scroll up, and recall what we needed to do: \"*describe what **time-based** trends we see in the ammonia concentration of a wastewater stream*\". We will look at that next.\n", "\n", "### Summary\n", "\n", "We have learned quite a bit in this section so far:\n", "\n", "* head and tail of a data set\n", "* median\n", "* spread in the data\n", "* distribution of a data column\n", "* box plot\n", "* percentile" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Time-series, or a sequence plot\n", "\n", "\n", "If you have a single column of data, you may see interesting trends in the sequence of numbers when plotting it. These trends are not always visible when just looking at the numbers, and they definitely cannot be seen in a box plot.\n", "\n", "An effective way of plotting these columns is horizontally, as a series plot, or a trace. We also call them time-series plots, if there is a second column of information indicating the corresponding time of each data point." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below we import the data. 
\n", "\n", "* The dataset had no time-based column, so Pandas provides a simple function for creating your tie column: `pd.date_range(...)`.\n", "* We were told the data were collected every 6 hours. \n", "* Set that time-based column to be our ***index***. Do you [recall that term](http://yint.org/pybasic12) about a Pandas data frame?\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "waste.shape[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "waste = pd.read_csv('http://openmv.net/file/ammonia.csv')\n", "datetimes = pd.date_range('1/1/2020', periods=waste.shape[0], freq='6H')\n", "\n", "print(datetimes)\n", "# What is this \"datetimes\" variable we have just created?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "waste.set_index(datetimes, inplace=True)\n", "# Why \"inplace\" ?" 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The code to plot the data as a time-series sequence:\n", "fig = waste.plot.line() # you can also say: waste.plot()\n", "fig" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "waste.plot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Make the plot look a bit different:\n", "fig = waste.plot.line(\n", " markers=True, \n", " log_y=True, \n", " range_y=[5, 85], \n", " color_discrete_sequence=[\"orange\"],\n", " width=800,\n", " height=400,\n", ")\n", "fig.update_layout(showlegend=False)\n", "fig.update_layout(plot_bgcolor=\"black\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig = go.Figure()\n", "fig.add_trace(\n", " go.Scatter(\n", " x=waste.index,\n", " y=waste['Ammonia'],\n", " name=\"Ammonia\",\n", " )\n", ")\n", "fig.update_layout(showlegend=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The above code is the same as \n", "waste.plot.line()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Now calculation the the 5-day moving average:\n", "waste['Ammonia'].rolling('5D').mean()\n", "\n", "# OK, now plot that moving average:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Add a rolling average on top of the raw data, to help see the signal through the noise.\n", "\n", "# First plot the original data\n", "fig = go.Figure()\n", "fig.add_trace(\n", " go.Scatter(\n", " x=waste.index,\n", " y=waste['Ammonia'],\n", " name=\"Ammonia concentration\",\n", " line_color=\"rosybrown\",\n", " )\n", ")\n", "\n", "# Then add the moving average on top\n", "fig.add_trace(\n", " 
go.Scatter(\n", " x=waste.index,\n", " y=waste['Ammonia'].rolling('5D', center=True).mean(),\n", " name=\"5 day moving average\",\n", " line_color=\"black\",\n", " )\n", ")\n", "fig.update_layout(showlegend=True)\n", "fig.show()\n", "\n", "# Later on this code will make more sense; for now, hopefully it is useful in your daily work." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Try different rolling window sizes: `'12H'` (12 hours), `'2D'` (2 days), `'30D'`, etc.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DataFrame operations\n", "\n", "Last time we looked at some basic data frame operations. Let's recap some important ones.\n", "\n", "* adding rows, \n", "* deleting rows,\n", "* merging data frames.\n", "\n", "We will use this made-up data set, showing how much food is used by each country. You can replace these data with numbers and columns and rows which make sense to your application.\n", "\n", ">```python\n", ">import pandas as pd\n", ">data = {'Herring': [27, 13, 52, 54, 5, 19], \n", "> 'Coffee': [90, 94, 96, 97, 30, 73],\n", "> 'Tea': [88, 48, 98, 93, 99, 88]}\n", ">countries = ['Germany', 'Belgium', 'Netherlands', 'Sweden', 'Ireland', 'Switzerland']\n", ">food_consumed = pd.DataFrame(data, index=countries)\n", ">\n", ">print(data)\n", ">print(countries)\n", ">print(type(data))\n", ">print(type(countries))\n", ">print(type(food_consumed))\n", ">food_consumed\n", ">```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "data = {'Herring': [27, 13, 52, 54, 5, 19], \n", " 'Coffee': [90, 94, 96, 97, 30, 73],\n", " 'Tea': [88, 48, 98, 93, 99, 88]}\n", "countries = ['Germany', 'Belgium', 'Netherlands', 'Sweden', 'Ireland', 'Switzerland']\n", "food_consumed = pd.DataFrame(data, index=countries)\n", "\n", "print(data)\n", "print(countries)\n", "print(type(data))\n", "print(type(countries))\n", "print(type(food_consumed))\n", "food_consumed" ] }, { 
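Before recapping the operations, a quick sketch of column-wise versus row-wise summaries on this same frame (the numbers match the `food_consumed` cell above; `axis` is the only new idea here):

```python
import pandas as pd

data = {'Herring': [27, 13, 52, 54, 5, 19],
        'Coffee': [90, 94, 96, 97, 30, 73],
        'Tea': [88, 48, 98, 93, 99, 88]}
countries = ['Germany', 'Belgium', 'Netherlands', 'Sweden', 'Ireland', 'Switzerland']
food_consumed = pd.DataFrame(data, index=countries)

# axis=0 (the default): one mean per column, i.e. per food
print(food_consumed.mean())

# axis=1: one mean per row, i.e. per country
print(food_consumed.mean(axis=1))
```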
"cell_type": "markdown", "metadata": {}, "source": [ "#### Getting an idea about your data first: you are now very comfortable with these\n", "\n", "```python\n", "# The first rows:\n", "food_consumed.head()\n", "\n", "# The last rows:\n", "food_consumed.tail()\n", "\n", "# Some basic statistics\n", "food_consumed.describe()\n", "\n", "# Some information about the data structure: missing values, memory usage, etc\n", "food_consumed.info()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 1. Shape of a data frame\n", "\n", "```python\n", "# There were 6 countries, and 3 food types. Verify:\n", "food_consumed.shape\n", "\n", "# Transposed and then shape:\n", "food_consumed.T.shape\n", "\n", "# Interesting: what shapes do summary vectors have?\n", "food_consumed.mean().shape\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "food_consumed" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 2. Unique entries\n", "```python\n", "food_consumed['Tea'].unique()\n", "\n", "# Unique names of the rows: (not so useful in this example, because they are already unique)\n", "food_consumed.index.unique()\n", "\n", "# Get counts (n) of the unique entries:\n", "food_consumed.nunique() # in each column \n", "food_consumed.nunique(axis=1) # in each row\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3. 
Add a new column\n", "```python\n", "# Works just like a dictionary!\n", "# If the data are in the same row order\n", "food_consumed['Yoghurt'] = [30, 20, 53, 2, 3, 48]\n", "print(food_consumed)\n", "display(food_consumed)\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4. Joining dataframes \n", "```python\n", "# Note the row order is different this time:\n", "more_foods = pd.DataFrame(index=['Belgium', 'Germany', 'Ireland', 'Netherlands', 'Sweden', 'Switzerland'],\n", " data={'Garlic': [29, 22, 5, 15, 9, 64]})\n", "print(food_consumed)\n", "print(more_foods)\n", "# Merge 'more_foods' into the 'food_consumed' data frame. Merging works, even if row order is not the same!\n", "food_consumed = food_consumed.join(more_foods)\n", "food_consumed\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 5. Adding a new row\n", "```python\n", "# Collect the new data in a Series. Note that 'Tea' is (intentionally) missing!\n", "portugal = pd.Series({'Coffee': 72, 'Herring': 20, 'Yoghurt': 6, 'Garlic': 89},\n", " name = 'Portugal')\n", "\n", "# DataFrame.append was removed in pandas 2.0, so use pd.concat instead:\n", "food_consumed = pd.concat([food_consumed, portugal.to_frame().T])\n", "# See the missing value created?\n", "print(food_consumed)\n", "\n", "# What happens if you run the above commands more than once?\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 6. 
Delete or drop a row/column\n", "```python\n", "# Drops a column and returns its values to you\n", "coffee_column = food_consumed.pop('Coffee')\n", "print(coffee_column)\n", "print(food_consumed)\n", "\n", "# Leaves the original data untouched; returns only \n", "# a copy, with those columns removed\n", "food_consumed.drop(['Garlic', 'Yoghurt'], axis=1)\n", "print(food_consumed)\n", "\n", "# Leaves the original data untouched; returns only \n", "# a copy, with those rows removed. \n", "non_EU_consumption = food_consumed.drop(['Switzerland', ], axis=0)\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 7. Remove rows with missing values\n", "```python\n", "# Returns a COPY of the data frame, with no missing values:\n", "cleaned_data = food_consumed.dropna() \n", "\n", "# Makes the deletion inplace; you do not have to assign the output to a new variable.\n", "# Inplace is not always faster!\n", "food_consumed.dropna(inplace=True) \n", "\n", "# Remove only rows where all values are missing:\n", "food_consumed.dropna(how='all')\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 8. Sort the data\n", "\n", "```python\n", "food_consumed.sort_values(by=\"Garlic\")\n", "food_consumed.sort_values(by=\"Garlic\", inplace=True)\n", "food_consumed.sort_values(by=\"Garlic\", inplace=True, ascending=False)\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Scatter plots\n", "\n", "\n", "Scatter plots are widely used and easy to understand. 
***When should you use a scatter plot?*** When your goal is to draw the reader's attention to the relationship between 2 (or more) variables.\n", "\n", "* Data tables also show relationships between two or more variables, but the trends are sometimes harder to see.\n", "* A time-series plot shows the relationship between time and another variable. So also two variables, one of which is time. \n", "\n", "In a scatter plot we use 2 sets of axes, at 90 degrees to each other. We place a marker at the intersection of the values shown on the horizontal (x) axis and vertical (y) axis. \n", "\n", "\n", "* Most often **variables 1 and 2** (also called the dimensions) will be continuous variables, or at least [***ordinal variables***](https://en.wikipedia.org/wiki/Ordinal_data). You will seldom use categorical data on the $x$ and $y$ axes.\n", "\n", "* You can add a **3rd dimension**: the marker's size indicates the value of a 3rd variable. It makes sense to use a numeric variable here, not a categorical variable.\n", "\n", "* You can add a **4th dimension**: the marker's colour indicates the value of a 4th variable: usually this will be a categorical variable. E.g. red = category 1, blue = category 2, green = category 3. Continuous numeric transitions are hard to map onto colour. However, it is possible to use transitions, e.g. values from low to high shown on a sliding gray scale.\n", "\n", "* You can add a **5th dimension**: the marker's shape can indicate the discrete values of a 5th categorical variable. E.g. circles = category 1, squares = category 2, triangles = category 3, etc.\n", "\n", "In summary:\n", "\n", "* marker's size = numeric variable\n", "* marker's colour = categorical, maybe numeric, especially with a gray scale\n", "* marker's shape = can only be categorical\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's use the Bioreactor yields data set. 
There is information about it here:\n", "\n", "http://openmv.net/info/bioreactor-yields\n", "\n", "\n", "Read the data into a Pandas data frame, and use the `.describe` function to check it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Standard imports required to show plots and tables \n", "import pandas as pd\n", "\n", "yields = pd.read_csv('http://openmv.net/file/bioreactor-yields.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Are all 5 columns shown in the summary? Modify the `.describe` function call to show information on all 5 columns." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "yields" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now plot the data as a scatter plot, using this code as a guide. We want to see if there is a relationship between temperature and yield." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig = yields.plot.scatter(\n", " x='temperature', \n", " y='yield',\n", " width=500,\n", " height=400,\n", " title='Yield [%] as a function of temperature [°C]',\n", ")\n", "fig.update_layout(xaxis_title_text='Temperature [°C]')\n", "fig.update_layout(yaxis_title_text='Yield [%]')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The objective of this data file was to check if there is a relationship between `Temperature` and `Yield`. Visually that is confirmed.\n", "\n", "Let us also quantify it with the correlation value. 
Calculate the correlation with this code:\n", "\n", "```python\n", "# numeric_only=True skips non-numeric columns, such as the text column 'baffles'\n", "display(yields.corr(numeric_only=True))\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "display(yields.corr(numeric_only=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The correlation value is $r=-0.746$, essentially negative 75%.\n", "\n", "* The correlation value is ***symmetrical***: the correlation between temperature and yield is the same as between yield and temperature.\n", "* Interesting tip: the $R^2$ value from a regression model is that same value squared: in other words, $R^2 = (-0.746356)^2 = 0.5570$, or roughly 55.7%.\n", "\n", "> Think of the implication of that: you can calculate the $R^2$ value - *the* value often used to judge how good a linear regression is - without calculating the linear regression model!! Further, it shows that for linear regression it does not matter which variable is on your $x$-axis, or your $y$-axis: the $R^2$ value is the same.\n", "\n", "> If you understand these 2 points, you will understand why $R^2$ is not a great number at all to judge a linear regression model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding more dimensions to your scatter plots\n", "\n", "We saw that we can alter the size, colour, and shape of the marker to indicate a 3rd, 4th or 5th dimension.\n", "\n", "We consider changing the markers' colour and shape in the next piece of code.\n", "\n", "Colour and shape are perfect for categorical variables, but unfortunately in this data set we only have 1 categorical variable. So we use it for both the marker's shape (symbol) and colour." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "yields['baffles'].unique()\n", "\n", "# So we see the \"baffles\" column is actually text: \"Yes\" or \"No\". \n", "# We call this a categorical variable."
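A categorical column like this can also be summarized with `value_counts()`. A minimal sketch, using a made-up stand-in for the `baffles` column (the real values come from the bioreactor file on the website):

```python
import pandas as pd

# Made-up stand-in for the 'baffles' column
baffles = pd.Series(["No", "Yes", "No", "No", "Yes", "No"])

print(baffles.unique())        # the distinct categories: ['No' 'Yes']
print(baffles.value_counts())  # how often each category occurs
print(baffles.nunique())       # number of distinct categories: 2
```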
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Use the \"baffles\" column to pick colours and marker symbols\n", "fig = yields.plot.scatter(\n", " x='temperature', \n", " y='yield',\n", " width=500,\n", " height=400,\n", " title='Yield [%] as a function of temperature [°C]; colours indicate presence/absence of baffles',\n", " color=\"baffles\",\n", " symbol=\"baffles\",\n", ")\n", "fig.update_layout(xaxis_title_text='Temperature [°C]')\n", "fig.update_layout(yaxis_title_text='Yield [%]')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the code below we want to make the marker size proportional to the speed of the impeller.\n", "\n", "The `size` input of the `.scatter` function can be given the name of a column; the values in that column determine the marker sizes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Modify the speed column to be more \"spread out\" in size\n", "yields['Speed(plot)'] = (yields['speed']-3200).pow(2) / 1000\n", "yields" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig = yields.plot.scatter(\n", " x='temperature', \n", " y='yield',\n", " width=500,\n", " height=400,\n", " title='Yield [%] as a function of temperature [°C]; colours for baffles; size related to impeller speed',\n", " color=\"baffles\", \n", " size='Speed(plot)',\n", ")\n", "fig.update_layout(xaxis_title_text='Temperature [°C]')\n", "fig.update_layout(yaxis_title_text='Yield [%]')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Saving your plots\n", "\n", "Once you have created your plot you can of course include it in a document. Hover over the plot and click the \"camera\" icon in the toolbar at the top right to download it as an image."
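, "\n", "\n", "You can also save a figure from code. This is a sketch: it assumes `fig` still refers to the Plotly figure created above, and the file name is just an example.\n", "\n", "```python\n", "# Save an interactive, self-contained copy of the plot as an HTML file\n", "fig.write_html('yield_vs_temperature.html')\n", "```\n", "\n", "(`fig.write_image('plot.png')` can save a static image instead, but it needs the extra `kaleido` package to be installed.)"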
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Demonstration time\n", "\n", "Some examples will be shown of what you can do with data frames:\n", "\n", "* Dashboards\n", "* Calculations" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Diversion: how is time represented in Python?\n", "\n", "Try the following in the space below:\n", "```python\n", "from datetime import datetime\n", "now = datetime.now()\n", "\n", "# Do some things with `now`:\n", "print(now)\n", "print(now.year)\n", "print(f\"Which weekday is it today? It is day {now.isoweekday()} of the week\")\n", "print(now.second)\n", "print(now.seconds) # this line raises an error: the attribute is the singular 'second'\n", "```\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After trying the above, try these lines below. Comment out the lines that cause errors.\n", "\n", "```python\n", "later = datetime.now()\n", "print(later)\n", "print(type(later))\n", "print(later - now)\n", "print(now - later)\n", "print(now + later)\n", "\n", "delta = later - now\n", "print(delta)\n", "print(type(delta))\n", "print(f\"There were this many seconds between 'now' and 'later': {delta.total_seconds()}\")\n", "print(later + delta)\n", "\n", "sometime_in_the_future = later + delta*1000\n", "print(sometime_in_the_future)\n", "print(sometime_in_the_future - now)\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "hide_input": false, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", 
"pygments_lexer": "ipython3", "version": "3.7.10" }, "toc": { "base_numbering": "1", "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": true, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "348.969px" }, "toc_section_display": true, "toc_window_display": true }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 4 }