{ "cells": [ { "cell_type": "markdown", "metadata": { "toc": true }, "source": [ "

Table of Contents

\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> All content here is under a Creative Commons Attribution [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and all source code is released under a [BSD-2 clause license](https://en.wikipedia.org/wiki/BSD_licenses). \n", ">\n", ">Please reuse, remix, revise, and [reshare this content](https://github.com/kgdunn/python-basic-notebooks) in any way, keeping this notice." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Run this cell once, at the start, to load the notebook's style sheet.\n", "from IPython.core.display import HTML\n", "css_file = './images/style.css'\n", "HTML(open(css_file, \"r\").read())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Module 10: Overview \n", "\n", "In the prior [module 9](https://yint.org/pybasic09) you learned an approach to follow for any data analysis project, as well as some basic plots and statistics. In this module you will learn about what 5 objectives we see for data analysis, as well as some further plots and statistical concepts.\n", "\n", "
\n", " Check out this repo using Git. Use your favourite Git user-interface, or at the command line:\n", "\n", ">```\n", ">git clone git@github.com:kgdunn/python-basic-notebooks.git\n", ">\n", "># If you already have the repo cloned:\n", ">git pull\n", ">```\n", "\n", "to update it to the later version.\n", "\n", "\n", "### Preparing for this module###\n", "\n", "You should have:\n", "* understood the core plotting library in Python, `matplotlib`, with [this tutorial](https://www.datacamp.com/community/tutorials/matplotlib-tutorial-python)\n", "* read [this post](https://towardsdatascience.com/matplotlib-tutorial-learn-basics-of-pythons-powerful-plotting-library-b5d1b8f67596) for more details about `matplotlib`\n", "* ensured you understand [scatter plots](https://learnche.org/pid/data-visualization/relational-graphs-scatter-plots) and how you can plot 5 dimensions on a 2-dimensional plot; maybe 6 dimensions if you have mixed reality smartglasses.\n", "\n", "\n", "### Summarizing data visually and numerically (statistics)\n", "\n", "In the [prior module](https://yint.org/pybasic09) we covered:\n", "1. Box plots\n", "2. Bar plots (bar charts) \n", "3. Histograms\n", "\n", "while in this module we will cover:\n", "4. Data tables\n", "5. Time-series, or sequence plots\n", "6. Scatter plots\n", "7. Creating better box plots\n", "\n", "In between, throughout the notes, we will also introduce statistical and data science concepts. This way you will learn how to interpret the plots and also communicate your results with the correct language." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Five main goals with data science\n", "\n", "\n", "In the [prior module](https://yint.org/pybasic09) I described my approach for any data analysis project. The first step is to **define the goals**. When I take a look at various projects I have worked on, the goals always fall into one or more of these categories, or 'application domains'.\n", "\n", "1. 
Learning more about our system\n", "2. Troubleshooting a problem that is occurring\n", "3. Making predictions using (some) data from the system\n", "4. Monitoring that system in real-time, or near real-time\n", "5. Optimizing the system\n", "\n", "I will describe these goals shortly. But why look at them at all? The reason is that certain goals can be solved with a subset of tools. The number of tools available to you is large, so knowing which one to use for which type of goal helps you reach your goal faster.\n", "\n", "
\n", "Goals 1 and 2 take place off-line, using data that has been collected already.\n", "\n", "Goals 3, making predictions from the system, e.g. predicting what quality is being produced by the system; or how much longer a batch should be run before it is completed. The prediction is typically required to support other decisions, or to apply real-time control on the system. \n", "\n", "Goal 4 also can take place on-line, and is used to ensure the system is operating in a stable manner, and if not, using the data to figure what is going wrong, or about to go wrong.\n", "\n", "Goal 5 is typically off-line, and here we use the data to make longer term improvements. For example, we try to move the system to a different state of operation that is more optimal/profitable. This can also be done in real-time, where systems are continuously shifted around to track an optimum target.\n", "\n", "
\n", "\n", "This is just one way to to categorize data science problems. There are of course other ways to do this: such as if you are dealing with one variable (vector) or many variables (matrices). Or which type of technique you are using: ***supervised*** or ***unsupervised***.\n", "\n", "We will encounter these terms along the way. But for now, you should be able to see any problem where you have used data as fitting into one of these five categories above. \n", "\n", "\n", "### Examples of using this categorization\n", "\n", "For example: your manager asks you to use data (whatever is available) to discover why we are seeing increased number of customers returning our most profitable product to the store. Your objective: Find reason(s) for increased returns of product.\n", "\n", "Which of the 5 goals above are used?: Number 2 \"Troubleshoot a problem that is occurring\" is the most direct. But along the way to achieving that goal, you will almost certainly apply number 1: \"Learn more about your system\".\n", "\n", "Following up: in the future, after you have found the reasons for returned product, you might have done number 5: \"optimizing the system\" to find settings for the machines, so that fewer low-quality products are produced. Then, in a different data science project, based on number 4: you \"monitor the system in real-time\" to prevent producing bad quality products\". This might be done by applying number 3: \"making predictions of the product quality\" in real-time, while the system is operating.\n", "\n", "\n", "As you can see, these 5 goals are generally very broad. Why do we mention them?\n", "\n", "You might learn, in other courses and later in your career, about different tools to implement. Then you can interchange the tools in your toolbox. For example, linear regression is one type of prediction tool to achieve goal 3, but so is a neural network. 
If one tool does not work so well, you can swap it for another one in your pipeline.\n", "\n", "### Try it yourself\n", "\n", "Try breaking down the existing data-based project you are currently working on. Check which one or more of the five apply.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## Data tables\n", "\n", "Data tables are an effective form of data visualization. Some tips:\n", "\n", "* align the numbers in each column at the decimal point, so trends can be scanned when reading from top to bottom\n", "* alternate the background shading of each row\n", "* sort the table by a particular variable, to emphasize a particular message.\n", "\n", "Here's an example of the [Blender Efficiency](http://openmv.net/info/blender-efficiency) data set. It was a designed experiment to see how the blending efficiency can be improved, using 18 experiments." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": false }, "outputs": [], "source": [ "import pandas as pd\n", "blender = pd.read_csv('http://openmv.net/file/blender-efficiency.csv')\n", "blender.sort_values('BlendingEfficiency', inplace=True)\n", "blender" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Click on the column header for ``BlendingEfficiency`` and you can sort from low-to-high, or high-to-low. You can now instantly see that ``ParticleSize`` has the greatest effect on blending efficiency. No plotting required. \n", "\n", "In terms of the 5 goals above - here we have used the table to **learn more** about our process: what direction is the ***correlation*** between particle size and blending efficiency? *Positive* or *negative* correlation?\n", "\n", "##### To try yourself\n", "\n", "Create a box plot of blending efficiency against particle size. This will achieve the goal of learning more about our system even further, because then we can quantify the negative correlation. 
``blender.boxplot(column='BlendingEfficiency', by='ParticleSize')``\n", "\n", "### Setting the level of precision\n", "\n", "In Pandas data tables, especially for calculated variables, you might see too many decimals (the default is 6). If you want to adjust that, run this command: ``pd.set_option('display.precision', 2)`` for 2 decimals. See the code in the next section for an example.\n", "\n", "### Pie charts, when tables will do\n", "\n", "Run the code below to convince yourself that pie charts should not be used instead of a table. If you are pressured to use a pie chart instead of a table, use the example below (and some of the links) to help argue your case." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from matplotlib import pyplot\n", "%matplotlib inline\n", "import pandas as pd\n", "\n", "website = pd.read_csv('http://openmv.net/file/website-traffic.csv')\n", "website.drop(columns=['MonthDay', 'Year'], inplace=True)\n", "average_visits_per_day = website.groupby('DayOfWeek').mean() \n", "percentage = average_visits_per_day / average_visits_per_day.sum() * 100\n", "\n", "fig = pyplot.figure(figsize=(15, 4));\n", "axes = pyplot.subplot(1, 2, 1)\n", "percentage.plot.pie(y='Visits', ax=axes, legend=False)\n", "axes.set_aspect('equal')\n", "\n", "# Right plot: subplot(1,2,2) means: create 1 row, with 2 columns, and draw in the 2nd box\n", "# Take the same grouped data from before, except sort it now:\n", "percentage.sort_values('Visits', ascending=True, inplace=True) \n", "percentage.plot.barh(ax=pyplot.subplot(1, 2, 2), legend=False)\n", "\n", "pd.set_option('display.precision', 2)\n", "percentage" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Answer these questions, based on the above\n", "\n", "1. From the pie chart alone: which day has the second highest number of visits? Note how long it takes you to discover that.\n", "2. From the pie chart only: what percentage of visits occur on a Saturday? 
How accurate is your guess? Check it afterwards against the bar chart and the values in the table.\n", "\n", " The superiority of tables is not surprising here. The human eye excels at finding differences in 2 dimensions with respect to length and location. But it is not good at estimating area and angles, yet a pie chart encodes its information only in terms of area and angles.\n", "\n", "3. Compare the bar plot with the table now: you get the same information from both, but in terms of the [data:ink ratio concept](https://infovis-wiki.net/wiki/Data-Ink_Ratio), which is better?\n", "\n", "Need more convincing evidence?\n", "* From pie to bar: https://www.darkhorseanalytics.com/portfolio/2016/1/7/data-looks-better-naked-pie-charts\n", "* Chartjunk in bar plots: https://www.darkhorseanalytics.com/blog/data-looks-better-naked\n", "* And the full essay on the lack of utility of pie charts: [Save the Pies for Dessert](https://www.perceptualedge.com/articles/visual_business_intelligence/save_the_pies_for_dessert.pdf)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Colour-coded tables for heatmaps\n", "\n", "\n", "Related to data tables is the concept of colour-coding the entries in the data table according to their values. High values get a specific colour (e.g. red), and low values another colour (e.g. blue) and then the in-between values are shaded in a transition. 
This is also related to a colour map: each value is mapped to a certain colour (more on that below, in the section on [scatter plots](#Scatter-plots)).\n", "\n", "This is helpful for emphasizing trends in the data which are not easy to pick up with the numbers alone.\n", "\n", "These colour-coded tables are called heatmaps.\n", "\n", "##### Example: Peas\n", "\n", "In the [prior module](https://yint.org/pybasic09#%E2%9E%9C-Challenge-yourself:-Judging-the-Judges) we created box plots for the taste ratings given to various samples of Peas, based on their flavour attributes: flavour, sweetness, fruity flavour, off-flavour, mealiness and hardness.\n", "\n", "The judges give scores on a scale of 1 to 10." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load the data\n", "import pandas as pd\n", "peas = pd.read_csv('https://openmv.net/file/peas.csv')\n", "judges = peas.loc[:, 'Flavour': 'Hardness']\n", "judges.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Now visualize the table trends:\n", "import seaborn as sns\n", "%matplotlib inline\n", "\n", "# Change the default figure size\n", "sns.set(rc={'figure.figsize':(15, 5)})\n", "\n", "# Plot the transpose, so each flavour attribute is a row\n", "sns.heatmap(judges.T);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That visualization is somewhat helpful, because we already get an idea that some of the attributes move together: notice that when `Flavour`, `Sweet` and `Fruity` are low (dark colours), they are jointly low, and that this is opposite to the other 3 flavour characteristics.\n", "\n", "Now let's sort the data set and try again:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "judges.sort_values(by='Hardness', inplace=True)\n", "sns.heatmap(judges.T);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What a difference! 
Now the visualization is greatly improved, and actually tells a story. That's the purpose of any visualization.\n", "\n", "Now we quickly see the opposite trends occurring which took us much longer to realize in the prior plot. ***How would you describe the trends to someone***?\n", "\n", "Note also that you could not have seen these trends from a box plot! \n", "\n", "Next we can calculate the [***correlation*** value](https://learnche.org/pid/least-squares-modelling/covariance-and-correlation#correlation), which is a number between $-1$ and $+1$ that shows how strongly variables are related. A value of 0 means no correlation. A value of $-1$ is a perfect negative relationship, and $+1$ is a perfect positive relationship.\n", "\n", "We will visualize what a strong or a weak correlation looks like in the next section on scatter plots. Here we already see how the columns are correlated to each other: both in a table, and a heat map. Heat maps are a great way to visualize correlations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.display import display\n", "import numpy as np\n", "import pandas as pd\n", "pd.set_option('display.precision', 3)\n", "\n", "display(judges.corr())\n", "corr = judges.corr()\n", "\n", "# Create a mask for the upper triangle\n", "# (note: use the built-in bool; np.bool was removed from NumPy)\n", "mask = np.zeros_like(corr, dtype=bool)\n", "mask[np.triu_indices_from(mask)] = True\n", "\n", "# Generate a colormap for the correlations\n", "cmap = sns.diverging_palette(220, 10, as_cmap=True)\n", "\n", "# Draw the heatmap with the mask and correct aspect ratio\n", "sns.heatmap(corr, mask=mask, cmap=cmap, \n", " square=True, linewidths=.2, \n", " cbar_kws={\"shrink\": 0.5});" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Instantly you can confirm your expectation of the trends in that data set:\n", "* `Flavour`, `Sweet` and `Fruity` attributes are correlated together. 
As one goes up, the other also goes up.\n", "* `Off-flavour`, `Mealiness` and `Hardness` attributes are also correlated together. As one goes up, the other also goes up.\n", "* The first 3 attributes are negatively correlated to the other 3 attributes.\n", "* It would be very unusual to find a pea that had high values in all 6 attributes. Think about that: it makes sense! Flavour and off-flavour cannot be simultaneously high.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ➜ Challenge yourself: correlation plot for cheese taste!\n", "\n", "The **goal** of this challenge is to discover how the columns in the [cheese taste data set](http://openmv.net/info/cheddar-cheese) are related to each other. In this data set the concentrations of:\n", "\n", "1. acetic acid,\n", "2. $\text{H}_2\text{S}$, and \n", "3. lactic acid are given for 30 samples of mature cheddar cheese. \n", "\n", "A subjective taste value is also provided as the 4th column.\n", "\n", "* Display the correlation matrix of every variable with every other variable. There are 4 variables, so there are 6 pairs of correlation values. \n", "* Visualize these correlations in a heat map.\n", "* Describe the correlations.\n", "\n", "***If you want to cheat*** scroll down to see a partial solution." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Time-series, or a sequence plot\n", "\n", "\n", "If you have a single column of data, you may see interesting trends in the sequence of numbers when plotting it. These trends are not always visible when just looking at the numbers, and they definitely cannot be seen in a box plot.\n", "\n", "An effective way of plotting these columns is horizontally, as a series plot, or a trace. We also call them time-series plots, if there is a second column of information indicating the corresponding time of each data point." 
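] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a minimal sketch (with synthetic, made-up data, since any single numeric column works), a sequence plot is just a plot of the values in the order they were recorded:

```python
import numpy as np
import pandas as pd

# Hypothetical column of 200 measurements, in the order they were taken
rng = np.random.default_rng(0)
values = np.sin(np.linspace(0, 10, 200)) + rng.normal(scale=0.2, size=200)
series = pd.Series(values)

# Plotting a Series without a time index gives a sequence plot:
# the x-axis is simply the observation number
ax = series.plot(figsize=(15, 5))
ax.set_xlabel('Observation number')
ax.set_ylabel('Measured value');
```
"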
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As promised in the [prior notebook](https://yint.org/pybasic09), we will now look at the time-based trends of the [website visits data set](http://openmv.net/info/website-traffic).\n", "\n", "Below we import the data. \n", "* Modify the code, if necessary, if you are behind a proxy server.\n", "* Note how we can force a particular column to be a time-based variable, if Pandas does not import it as time.\n", "* Lastly, we can set that time-based column to be our ***index***. Do you [recall that term](http://yint.org/pybasic07) about a Pandas series?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "website = pd.read_csv('http://openmv.net/file/website-traffic.csv')\n", "dates = pd.to_datetime(website['MonthDay'], format='%B %d')\n", "website.set_index(dates, inplace=True, drop=True)\n", "website.plot(y='Visits', figsize=(15,5))\n", "\n", "# Smooth it a bit, with a rolling mean\n", "website['Visits'].rolling(5).mean().plot(linewidth=5);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice the common problem with smoothed rolling average data: it introduces a 'delay' into the time-series. The smoothed peaks are shifted to the right in time." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Try it yourself\n", "\n", "Copy and paste the above code, and try this again for the [Ammonia dataset](http://openmv.net/info/ammonia). Note in the code below:\n", "\n", "* The dataset had no time-based column, so Pandas provides a simple function for doing that (`pd.date_range(...)`). We were told the data were collected every 6 hours. \n", "* Note how the plot's colours can be altered, and the line thickness.\n", "\n", "Modify the code below:\n", "\n", "1. Try different rolling window sizes: `'12H'` (12 hours), `'2D'` (2 days), `'30D'`, etc.\n", "2. Which smoother shows the trends clearly?\n", "3. 
How would you describe these time-trends to a colleague?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ammonia = pd.read_csv('http://openmv.net/file/ammonia.csv')\n", "datetimes = pd.date_range('1/1/2020', periods=ammonia.shape[0], freq='6H')\n", "ammonia.set_index(datetimes, inplace=True)\n", "ammonia['Ammonia'].plot(figsize=(15,5), color='lightblue')\n", "ammonia['Ammonia'].rolling('2D').mean().plot(color='black', linewidth=3);\n" ] }, { "cell_type": "markdown", "metadata": { "hide_input": true }, "source": [ "#### ➜ Challenge yourself: random walks again\n", "\n", "The **goal** of this challenge is to understand what a random walk looks like, visually, as a time-series.\n", "\n", "In the [prior module](https://yint.org/pybasic09) you created the numbers that represent a random walk. Then you looked only at the distribution. Here's the prior code:\n", "\n", "\n", "```python\n", "from scipy.stats import norm\n", "\n", "# 20 steps for a regular person, showing the deviation to the \n", "# left (negative) or to the right (positive) when they are \n", "# walking straight. Values are in centimeters.\n", "regular_steps = norm.rvs(loc=0, scale=5, size = 20)\n", "print('Regular walking: \\n{}'.format(regular_steps))\n", "\n", "# Consumed too much? Standard deviation (scale) is larger:\n", "deviating_steps = norm.rvs(loc=0, scale=12, size = 20)\n", "print('Someone who has consumed too much: \\n{}'.format(deviating_steps))\n", "```\n", "\n", "In the space below, start with the code given above, then modify it to:\n", "* create a series for `size=400` steps\n", "* convert this to a Pandas series, using a frequency of 1 second\n", "* plot the random walks for 2 people: one regular, and one with deviating steps. \n", "* Remember: plot the **cumulative sum** of their steps, not the step changes. \n", "* You can add horizontal lines to an existing axis: \n", "```python\n", "ax = df.plot(...) 
# the output of the plot function is an axis\n", "ax.axhline(y = 0, color='k')\n", "```\n", "* You can also use the axis ``ax`` to set labels: ``ax.set_xlabel(...)`` or ``ax.set_ylabel(...)``\n", "\n", "\n", "Here's how my plot looked. Run your code several times to see how different the random walks appear." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "from scipy.stats import norm\n", "import pandas as pd\n", "from matplotlib import style\n", "# print(style.available)\n", "style.use('ggplot') \n", "\n", "N = 400\n", "regular_steps = norm.rvs(loc=0, scale=5, size = N)\n", "deviating_steps = norm.rvs(loc=0, scale=12, size = N)\n", "\n", "datetimes = pd.date_range('1/1/2020', periods=N, freq='1S')\n", "regular = pd.Series(regular_steps.cumsum(), index = datetimes)\n", "regular.plot(figsize=(15,5))\n", "\n", "deviating = pd.Series(deviating_steps.cumsum(), index = datetimes)\n", "ax = deviating.plot()\n", "ax.axhline(y=0, color='k', linestyle='-', linewidth=2)\n", "ax.set_ylabel('Deviation from the starting point [cm]')\n", "ax.legend(['Regular steps', 'Deviating steps']);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ➜ Challenge yourself: growth rate of bacteria\n", "\n", "The **goal** of this challenge is to understand the growth rate (reaction kinetics) of bacteria. You will see what the growth looks like visually, and also discover when the growth rate is the fastest, i.e. the most productive.\n", "\n", "Back in [module 3](https://yint.org/pybasic03#Challenge-4) we integrated an equation for bacteria growing on a plate:\n", "\n", "$$ \dfrac{dP}{dt} = rP $$\n", "\n", "where $P$ is the number of bacteria in the population, and $r$ is their exponential rate of growth [1/minute]. This is not realistic. Eventually the bacteria will run out of space and their food source. 
So the equation is modified:\n", "\n", "$$ \dfrac{dP}{dt} = rP - aP^2$$\n", "where the growth is limited by the factor $a$ in the equation.\n", "\n", "Using Euler's method, approximating $\dfrac{dP}{dt} \approx \dfrac{P_{i+1} - P_i}{\delta t}$, the differential equation can be re-written as: \n", "$$P_{i+1} = P_i + \left[\,rP_i -a\,P_i^2\,\right]\delta t$$ \n", "\n", "which shows how the population at time point $i+1$ (one step in the future) is related to the population size now, at time $i$ over a short interval of time $\delta t$ minutes. \n", "\n", "Starting from 500 cells initially with a rate $r=0.032$ and the coefficient $a = 1.6 \times 10^{-7}$ we can generate the growth curves and plot them." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": false }, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "from IPython.display import display\n", "\n", "p_initial = 500\n", "r = 0.032\n", "a = 1.6E-7\n", "delta_t = 1 # minutes\n", "time_final = 8*60\n", "\n", "# Create the two outputs of interest\n", "time = np.arange(start=0.0, stop=time_final, step=delta_t)\n", "population = np.zeros(time.shape)\n", "population[0] = p_initial\n", "\n", "for idx, t_value in enumerate(time[1:]):\n", " population[idx + 1] = population[idx] + (r*population[idx] - a * population[idx]**2) * delta_t\n", "\n", "# Now plot the data\n", "bugs = pd.DataFrame(data = {'bacteria': population}, index=time)\n", "display(bugs.head())\n", "ax = bugs.plot(figsize=(15,5)) \n", "ax.grid(True)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Answer these questions:\n", "\n", "1. Add an x-axis label, y-axis label and title to your figure. Hide the legend.\n", "2. Add another time-series plot (as a subplot) to discover at which point in time the growth rate is the steepest. Estimate it from the plot, and also find it in the data frame. Look back at the equation: the growth rate is $ \dfrac{dP}{dt} = rP - aP^2$. So plot this value.\n", "3. What steady-state population is reached?\n", "4. 
If you start with 1000 bacteria, do you end up with a different final colony size?\n", "5. Perhaps not necessary for these values on the plot, but usually with bacterial growth we use log scale plots on the y-axis. Change only 1 line in the above code to make this y-axis a log scale." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Scatter plots\n", "\n", "\n", "Scatter plots are widely used and easy to understand. ***When should you use a scatter plot?*** When your goal is to draw the reader's attention to the relationship between 2 (or more) variables.\n", "\n", "* Data tables also show relationships between two or more variables, but the trends are sometimes harder to see.\n", "* A time-series plot shows the relationship between time and another variable. So also two variables, but one of which is time. \n", "\n", "In a scatter plot we use 2 sets of axes, at 90 degrees to each other. We place a marker at the intersection of the values shown on the horizontal (x) axis and vertical (y) axis. \n", "\n", "\n", "* Most often **variable 1 and 2** (also called the dimensions) will be continuous variables. Or at least [***ordinal variables***](https://en.wikipedia.org/wiki/Ordinal_data). You will seldom use categorical data on the $x$ and $y$ axes.\n", "\n", "* You can add a **3rd dimension**: the marker's size indicates the value of a 3rd variable. It makes sense to use a numeric variable here, not a categorical variable.\n", "\n", "* You can add a **4th dimension**: the marker's colour indicates the value of a 4th variable: usually this will be a categorical variable. E.g. red = category 1, blue = category 2, green = category 3. Continuous numeric transitions are hard to map onto colour. However it is possible to use transitions, e.g. values from low to high are shown on a sliding gray scale.\n", "\n", "* You can add a **5th dimension**: the marker's shape can indicate the discrete values of a 5th categorical variable. E.g. 
circles = category 1, squares = category 2, triangles = category 3, etc.\n", "\n", "In summary:\n", "\n", "* marker's size = numeric variable\n", "* marker's colour = categorical, maybe numeric, especially with a gray-scale\n", "* marker's shape = can only be categorical\n", "\n", "\n", "Let's get started with some examples. We will start off with the example from the [prior module](https://yint.org/pybasic09#Histograms) where we considered the grades of students, and how long it took to write the exam." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Standard imports required to show plots and tables \n", "from matplotlib import pyplot\n", "from IPython.display import display\n", "%matplotlib inline\n", "import pandas as pd\n", "\n", "# Modify the code if you are behind a proxy server\n", "grades = pd.read_csv('https://openmv.net/file/unlimited-time-test.csv')\n", "\n", "ax = grades.plot.scatter(x = 'Time', y = 'Grade', \n", " figsize = (8, 8),\n", " \n", " # These remaining inputs are optional, but\n", " # specified below so you can explicitly see them\n", " \n", " # Size of the dots: change this to get a feeling \n", " # for the range of values you should use\n", " s = 50, \n", " \n", " # Specify the colour\n", " c = 'darkgreen',\n", " \n", " # The shape of the marker\n", " # See https://matplotlib.org/3.1.1/api/markers_api.html\n", " marker = 'D'\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Remember our objective from the [prior notebook](https://yint.org/pybasic09#Histograms)? Do students score a higher `Grade` if they have a longer `Time` to finish the exam? The idea was that students will have less stress with unlimited time, because they had all their books and notes with them. In theory these are fairly ideal exam conditions.\n", "\n", "The scatter plot however shows there isn't anything conclusive in the data to believe that there is a relationship. 
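As an aside, the correlation value we are about to use can be computed directly from its definition, the covariance scaled by both standard deviations (a sketch on synthetic, made-up data, not the grades):

```python
import numpy as np

# Two synthetic variables with a built-in positive relationship
rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)

# Correlation = covariance / (std of x * std of y); ddof=1 matches np.cov
r_manual = np.cov(x, y)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))
r_numpy = np.corrcoef(x, y)[0, 1]   # the same value, from NumPy directly
```
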
Let us also quantify it with the correlation value we introduced above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "display(grades.corr())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The correlation value is $r=-0.044$, essentially zero. So now you get an idea of what a zero correlation means.\n", "\n", "* The correlation value is ***symmetrical***: a value of -0.044 is the correlation between time and grades, and also the correlation between grades and time.\n", "* Interesting tip: the $R^2$ value from a regression model is that value squared: in other words, $R^2 = (-0.044229)^2 = 0.001956$.\n", "\n", "Think of the implication of that: you can calculate the $R^2$ value - *the* value often used to judge how good a linear regression is - without calculating the linear regression model!! Further, it shows that for linear regression it does not matter which variable is on your $x$-axis, or your $y$-axis: the $R^2$ value is the same.\n", "\n", "If you understand these 2 points, you will understand why $R^2$ is not a great number at all to judge a linear regression model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at some other correlations. If you completed the Cheese Challenge above, you have already seen what the correlation values are for that dataset.\n", "\n", "* Strong relationship between `Taste` and the amount of `H2S` present (correlation of 0.756), while \n", "* the amount of `Lactic` acid present is also quite strongly correlated with the amount of `H2S` (0.644)." 
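] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A single pairwise correlation can also be pulled out on its own, either with `Series.corr` or by indexing into the full correlation matrix (a sketch on a hypothetical stand-in table with made-up numbers; the real cheese file is loaded in the next cell):

```python
import pandas as pd

# Made-up numbers standing in for three of the cheese columns
demo = pd.DataFrame({'Acetic': [4.5, 5.2, 5.8, 6.5, 7.0],
                     'H2S':    [3.1, 4.0, 5.5, 7.2, 8.9],
                     'Taste':  [12.0, 21.0, 39.0, 47.0, 56.0]})

corr_matrix = demo.corr()                 # every column against every other
r1 = demo['Taste'].corr(demo['H2S'])      # one pair, directly
r2 = corr_matrix.loc['Taste', 'H2S']      # the same value, from the matrix
```
"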
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cheese = pd.read_csv('http://openmv.net/file/cheddar-cheese.csv')\n", "cheese.set_index('Case', inplace=True)\n", "pd.set_option('precision', 3)\n", "from IPython.display import display\n", "display(cheese.corr())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will like to visualize these pairwise relationships. We can draw 6 scatter plots to show all the pairwise combinations of `Acetic`, `H2S`, `Lactic` and `Taste`.\n", "\n", "The [Seaborn library](https://seaborn.pydata.org/), based on matplotlib, does this in a single line of code, using their ``sns.pairplot(...)`` function. \n", "\n", "##### Confirm your knowledge\n", "\n", "Visually relate the scatter plots below, with the numeric correlations in the table above. Get a feeling for what a correlation of $r=0.6$ or in other words $R^2 = 0.36$ is. It is fairly strong! You can see trends and relationships.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.set(rc={'figure.figsize':(15, 5)})\n", "sns.pairplot(cheese);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding more dimensions to your scatter plots\n", "\n", "We saw that we can alter the size (`s = ...`), colour (`c = ...`) and shape (`marker = ...`) of the marker to indicate a 3rd, 4th or 5th dimension.\n", "\n", "In the plots above you saw you to specify `s`, `c` and `marker` if the all the values are the same. Below you see how to do that if they are different. You specify a vector for `s` and `c`, the same length as the data.\n", "\n", "The vector for the size, `s`, is often a function of the variable being plotted. Remember that a doubling of the circle's area is related to the square root of the radius.\n", "\n", "The colour, `c` is often a categorical variable. 
In the example below we use red for \"Yes\" (baffles are present) and black for \"No\". \n", "\n", "We consider changing the markers' shape in the next piece of code." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from matplotlib import pyplot\n", "import numpy as np  # pd.np is deprecated; use numpy directly\n", "yields = pd.read_csv('http://openmv.net/file/bioreactor-yields.csv')\n", "baffles = yields['baffles'].values\n", "\n", "# Idea: [f(x) if condition else g(x) for x in sequence]\n", "colour = ['red' if b == 'Yes' else 'black' for b in baffles]\n", "size = (np.sqrt(yields['speed'] - 3200) - 4) * 10\n", "ax = yields.plot.scatter(x='temperature', y='yield', figsize=(10,8), \n", " s = size,\n", " c = colour)\n", "ax.set_xlabel('Temperature [°C]')\n", "ax.set_ylabel('Yield [%]');\n", "ax.set_title('Yield as a function of temperature [location], baffles [marker colour] and speed [marker size]');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From the above visualization we quickly see that the red points (baffles are present in the reactor) are associated with a lower yield. The yield also drops off with temperature.\n", "\n", "What can you say about the marker size, which represents the speed of the impeller in the bioreactor?\n", "\n", "We don't actually have a 5th dimension in this data set to visualize by also changing the marker shape. Marker shapes must be associated with a categorical variable. We will show how you could do it, based on the `baffles` column. The idea is to iterate over each unique category, taking the colour and shape from a dictionary."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "markers = {'No': 's', # square\n", " 'Yes': 'o'} # circle\n", "\n", "colours = {'No': 'black', \n", " 'Yes': 'red'}\n", "\n", "# Create an empty axis to plot in\n", "ax = pyplot.subplot(1,1,1)\n", "for baffle_type in yields['baffles'].unique():\n", " subset = yields[yields['baffles'] == baffle_type]\n", " subset.plot.scatter(ax = ax,\n", " figsize = (10,8),\n", " x = 'temperature', y='yield',\n", " s = (np.sqrt(subset['speed'] - 3200) - 4) * 10,\n", " c = colours[baffle_type],\n", " marker = markers[baffle_type]\n", " )\n", "ax.set_xlabel('Temperature [°C]')\n", "ax.set_ylabel('Yield [%]');\n", "ax.set_title('Yield as a function of temperature [location], baffles [colour and marker shape] and speed [marker size]');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### To investigate yourself\n", "\n", "If you have a sliding scale for colour, then you need to use a colour map. See the [matplotlib colormap reference](https://matplotlib.org/3.1.1/api/cm_api.html).\n", "\n", "A colour map takes a normalized input value, between 0 and 1, and relates it to a particular colour. On that webpage you see various colour maps." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ➜ Challenge yourself: pairplots and the peas\n", "\n", "The **goal** of this challenge is to see how the judges' values for the 6 taste attributes of the peas are (co)related.\n", "\n", "Generate a pairplot set of scatter plots for all 15 pairwise combinations of the 6 attributes.\n", "\n", "##### Compare and contrast\n", "\n", "1. You have seen in the above plots a correlation of nearly zero (grades vs time for the exam).\n", "2. In the cheddar cheese data you saw correlations of around 50 to 60%.\n", "3. 
In this data set (the peas) you see correlations of 90 to 95% and above.\n", "\n", "This was done intentionally, so that you get a visual idea of what the correlation coefficient $(r)$ means, as well as $R^2$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide_input": true }, "outputs": [], "source": [ "import pandas as pd\n", "import seaborn as sns\n", "peas = pd.read_csv('https://openmv.net/file/peas.csv')\n", "judges = peas.loc[:, 'Flavour': 'Hardness']\n", "sns.pairplot(judges);\n", "#judges.corr()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating better box plots\n", "\n", "Here we will consider alternatives, or additions, to the box plot, which we saw in the prior module. \n", "\n", "The additions, in the order of progressively adding more information, are:\n", "\n", "* violin plot: shows the distribution\n", "* swarm plot: shows the raw data, and how it is distributed\n", "* raincloud plot: combines elements of both of the above\n", "\n", "All 3 options improve on the box plot by showing the distribution of the underlying data and the raw values from the column being visualized."
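, "\n", "\n", "As a reminder of what the box itself encodes: the five-number summary of the data. Below is a minimal sketch of how those numbers are computed, using made-up values (this follows the common 1.5 IQR whisker convention, which is also matplotlib's default):\n", "\n", "```python\n", "import numpy as np\n", "\n", "data = np.array([12, 15, 14, 10, 38, 15, 13, 16, 14, 11], dtype=float)\n", "\n", "# The box spans q1 to q3, with a line at the median\n", "q1, median, q3 = np.percentile(data, [25, 50, 75])\n", "iqr = q3 - q1\n", "\n", "# Whiskers: the most extreme points still within 1.5*IQR of the box\n", "upper_whisker = data[data <= q3 + 1.5 * iqr].max()\n", "lower_whisker = data[data >= q1 - 1.5 * iqr].min()\n", "\n", "# Points beyond the whiskers (here only 38) are drawn as fliers\n", "outliers = data[(data > q3 + 1.5 * iqr) | (data < q1 - 1.5 * iqr)]\n", "```"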
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Get the data\n", "\n", "import pandas as pd\n", "ammonia = pd.read_csv('http://openmv.net/file/ammonia.csv')\n", "# You might need the proxy server settings" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from matplotlib import pyplot\n", "%matplotlib inline\n", "import seaborn as sns\n", "\n", "# Change the default figure size\n", "#sns.set(rc={'figure.figsize':(15, 5)})\n", "\n", "fig = pyplot.figure(figsize=(15, 5));\n", "axis1 = pyplot.subplot(1, 2, 1)\n", "axis2 = pyplot.subplot(1, 2, 2)\n", "\n", "sns.boxplot(data=ammonia, ax = axis1)\n", "axis1.set_ylim(0, 70)\n", "axis1.grid(True)\n", "sns.violinplot(y='Ammonia', data=ammonia, ax=axis2,\n", " \n", " # Play with these settings\n", " inner = \"box\", # the default\n", " # inner = \"quartile\"\n", " \n", " linewidth=3)\n", "axis2.set_ylim(0, 70)\n", "axis2.grid(True);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Swarm plots can complement a violin plot, as they show all the raw underlying data, not just the distribution."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from matplotlib import pyplot\n", "%matplotlib inline\n", "import seaborn as sns\n", "\n", "# Change the default figure size\n", "#sns.set(rc={'figure.figsize':(15, 5)})\n", "\n", "fig = pyplot.figure(figsize=(15, 5));\n", "axis1 = pyplot.subplot(1, 2, 1)\n", "axis2 = pyplot.subplot(1, 2, 2)\n", "\n", "sns.boxplot(data=ammonia, ax = axis1)\n", "axis1.set_ylim(0, 70)\n", "axis1.grid(True)\n", "sns.swarmplot(y='Ammonia', data=ammonia, ax=axis2,\n", " \n", " # Play with these settings\n", " # size = 5, # marker size\n", " # color = 'black'\n", " )\n", "axis2.set_ylim(0, 70);\n", "axis2.grid(True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import ptitprince as pt\n", "%matplotlib inline\n", "\n", "fig = pyplot.figure(figsize=(15, 5));\n", "axis1 = pyplot.subplot(1, 2, 1)\n", "axis2 = pyplot.subplot(1, 2, 2)\n", "\n", "sns.boxplot(data=ammonia, ax = axis1, orient='h')\n", "axis1.set_xlim(0, 70)\n", "axis1.grid(True)\n", "\n", "pt.RainCloud(y = 'Ammonia', \n", " ax=axis2,\n", " data = ammonia, \n", " width_viol = .8,\n", " width_box = .4,\n", " figsize = (12, 8), orient = 'h',\n", " move = .0)\n", "axis2.set_xlim(0, 70)\n", "axis2.grid(True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But where a raincloud plot really shines is in comparing multiple variables. Let's go back to an [earlier worksheet case study](https://yint.org/pybasic09#%E2%9E%9C-Challenge-yourself:-box-plots-for-thickness-of-plastic-sheets), where we compared the thickness of plastic film.\n", "\n", "The thickness was measured at the 4 corners."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import ptitprince as pt\n", "%matplotlib inline\n", "\n", "films = pd.read_csv('http://openmv.net/file/film-thickness.csv')\n", "films.set_index('Number', inplace=True)\n", "ax = pt.RainCloud(data = films, \n", " width_viol = .8,\n", " width_box = .4,\n", " figsize = (12, 8), orient = 'h',\n", " move = .0)\n", "\n", "ax.grid(True)\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Extended challenges\n", "\n", "Below we give some challenges that go beyond, but build on, the topics covered in this worksheet.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ➜ Challenge yourself: Seaborn library\n", "\n", "The Seaborn library wraps up Matplotlib, and provides easy-to-use functions for common data visualization steps. The **goal** of this challenge is to get more familiar with one of the most useful visualization libraries in Python.\n", "\n", "* Take a look at the [Seaborn Gallery](https://seaborn.pydata.org/examples/index.html) to see some examples. Which ones can you use in your next project?\n", "\n", "* A great blog post about [9 different visualizations](https://www.marsja.se/python-data-visualization-techniques-you-should-learn-seaborn/). We have looked at all of these, but it is nice to see them all on one page.\n", "\n", "* [Seaborn tutorial](https://www.datacamp.com/community/tutorials/seaborn-python-tutorial): a structured tutorial is always worthwhile. 
Look at this to understand topics we have not covered in detail: \n", "\n", " * colour maps, \n", " * adding and rotating text, \n", " * 'contexts': adjusting the plot settings for different use cases: talks, posters, on paper or in notebooks\n", " * changing the plot style\n", " * adding a title\n", " \n", "* [Interactively create Seaborn visualizations](https://engmrk.com/module7-introduction-to-seaborn/) within this webpage (if you can handle all the advertising!)\n", "\n", "* Though not related to Seaborn, it is worth giving a link to the [Visualizations from the Pandas library](https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ➜ Challenge yourself: plotting data in real-time\n", "\n", "The **goal** of this challenge is to visualize data that is coming from a real-time live stream. The plots above are all static. But what if you want to monitor your process in real-time? See goal number 4 above.\n", "\n", "Let's give it a try. We will monitor the CPU usage of your computer. You can install a small Python package to get the CPU percentage used. You will need the non-built-in package called ``psutil``. Install it with ``python3 -m pip install psutil``, or with your package manager (e.g. Anaconda).\n", "\n", "```python\n", "import psutil\n", "# Measure the CPU used in a 0.9 second interval\n", "psutil.cpu_percent(interval=0.9)\n", "```\n", "\n", "Run that code multiple times and check that the values change.\n", "\n", "Now you would like to watch that value on a graph, changing in real-time. See these pages for inspiration on how to visualize that with Python:\n", "* https://makersportal.com/blog/2018/8/14/real-time-graphing-in-python\n", "* https://learn.sparkfun.com/tutorials/graph-sensor-data-with-python-and-matplotlib/all (scroll to the end of the page)\n", "\n", "You can apply this challenge directly for acquiring and plotting data from your own sensors. 
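\n\nAs a starting point, here is a minimal sketch of such a live plot of the CPU readings (this assumes ``psutil`` is installed; the number of samples and the sampling interval are arbitrary choices):\n\n```python\nimport psutil\nimport matplotlib.pyplot as plt\n\nplt.ion()  # interactive mode: draw without blocking\nfig, ax = plt.subplots()\nreadings = []\nfor _ in range(10):  # loop for as long as you like\n    readings.append(psutil.cpu_percent(interval=0.2))\n    ax.clear()\n    ax.plot(readings)\n    ax.set_xlabel('Sample number')\n    ax.set_ylabel('CPU usage [%]')\n    plt.pause(0.01)  # give the GUI event loop a chance to redraw\n```\n\n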
For example, you can inexpensively buy a Raspberry Pi board, add some sensors and create a home monitoring system for temperature, humidity and noise." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ">***Feedback and comments about this worksheet?***\n", "> Please provide any anonymous [comments, feedback and tips](https://docs.google.com/forms/d/1Fpo0q7uGLcM6xcLRyp4qw1mZ0_igSUEnJV6ZGbpG4C4/viewform)." ] } ], "metadata": { "gist": { "data": { "description": "Module-10-interactive.ipynb", "public": true }, "id": "" }, "hide_input": false, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": true, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "221.974px" }, "toc_section_display": true, "toc_window_display": true }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }