{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Software Engineering for Molecular Data Scientists\n", "\n", "## *Introduction to Pandas*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Today's Objectives\n", "\n", "#### 0. Review of *flow control* in Python\n", "\n", "#### 1. Loading data with ``pandas``\n", "\n", "#### 2. Cleaning and Manipulating data with ``pandas``" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 0. Review of *flow control* in Python\n", "\n", "In our last tutorial, we discussed **_lists_**, **_dictionaries_**, and **_flow control_**.\n", "\n", "**_Lists_** are **_ordered collections_** of data that can be used to hold multiple pieces of information while preserving their order. We use `[` and `]` to access elements by their indices, which start at `0`. Operations on **_lists_**, such as slices, use an inclusive lower bound and an exclusive upper bound. So, the following gets elements from the **_list_** `my_list` with index values of `0`, `1`, and `2`, but **not** `3`!\n", "\n", "```\n", "my_list[0:3]\n", "```\n", "\n", "It is equivalent to what other way of writing the same statement using **_slicing_**? Hint: think about leaving out one of the numbers in the slice!\n", "\n", "**_Dictionaries_** are **_named_** **_collections_** of data that can be used to hold multiple pieces of information as **_values_** that are addressed by **_keys_**, resulting in a **_key_** to **_value_** data structure. They are accessed with `[` and `]` but initialized with `{` and `}`. E.g.\n", "\n", "```\n", "my_dict = { 'cake' : 'Tasty!', 'toenails' : 'Gross!' }\n", "my_dict['cake']\n", "```\n", "\n", "Finally, we talked about **_flow control_** and using the concept of **_conditional execution_** to decide which code statements were executed. 
Remember this figure?\n", "\n", "\n", "Flow control figure\n", "\n", "What are the **_if_** statements?\n", "\n", "Where do **_for_** loops fit in?\n", "\n", "What was the overarching concept of a **_function_**?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Loading data with ``pandas``\n", "\n", "With this simple Python computation experience under our belt, we can now move to doing some more interesting analysis." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1.1 Python's Data Science Ecosystem\n", "\n", "In addition to Python's built-in modules like the ``math`` module we explored previously, there are also many often-used third-party modules that are core tools for doing data science with Python.\n", "Some of the most important ones are:\n", "\n", "#### [``numpy``](http://numpy.org/): Numerical Python\n", "\n", "Numpy is short for \"Numerical Python\", and contains tools for efficient manipulation of arrays of data.\n", "If you have used other computational tools like IDL or MATLAB, Numpy should feel very familiar.\n", "\n", "#### [``scipy``](http://scipy.org/): Scientific Python\n", "\n", "Scipy is short for \"Scientific Python\", and contains a wide range of functionality for accomplishing common scientific tasks, such as optimization/minimization, numerical integration, interpolation, and much more.\n", "We will not look closely at Scipy today, but we will use its functionality later in the course.\n", "\n", "#### [``pandas``](http://pandas.pydata.org/): Labeled Data Manipulation in Python\n", "\n", "Pandas is short for \"Panel Data\", and contains tools for doing more advanced manipulation of labeled data in Python, in particular with a columnar data structure called a *Data Frame*.\n", "If you've used the [R](http://rstats.org) statistical language (and in particular the so-called \"Hadley Stack\"), much of the functionality in Pandas should feel very familiar.\n", "\n", "#### [``matplotlib``](http://matplotlib.org): 
Visualization in Python\n", "\n", "Matplotlib started out as a MATLAB plotting clone in Python, and has grown from there in the 15 years since its creation. It is the most popular data visualization tool currently in the Python data world (though other recent packages are starting to encroach on its monopoly)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1.2 Installing Pandas & friends\n", "\n", "Because the above packages are not included in Python itself, you need to install them separately. While it is possible to install these from source (compiling the C and/or Fortran code that does the heavy lifting under the hood), it is much easier to use a package manager like ``conda``. All it takes is to run\n", "\n", "```\n", "$ conda install numpy scipy pandas matplotlib\n", "```\n", "\n", "and (so long as your conda setup is working) the packages will be downloaded and installed on your system." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1.3 Importing Pandas to use in a notebook\n", "\n", "We begin by loading the pandas package. Packages are collections of functions that share a common utility. We've seen `import` before. Let's use it to import pandas and all the richness it provides." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "import pandas\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "df = pandas.DataFrame()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pandas.DataFrame()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because we'll use it so much, we often import under a shortened name using the ``import ... 
as ...`` pattern:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "import pandas as pd\n", "import numpy as np\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create an empty _data frame_ and put the result into a variable called `df`. This is a popular choice for a _data frame_ variable name." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "df = pd.DataFrame()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "Let's create some random data as a pandas data frame. Before we get to the dataframe, let's briefly talk about numpy's `random` function. If we look at the [`random`](https://numpy.org/doc/stable/reference/random/generated/numpy.random.random.html) documentation, we can see it takes a size argument. This should be an `int` or a `tuple` of `int`s that says what the \"height\" and \"width\" of the generated data will be. 
In our case, we will get 10 rows of data in three columns with the following:\n", "\n", "```\n", "np.random.random((10,3))\n", "```\n", "\n", "\n", "Notice we change the value of the `df` variable to point to a new data frame.\n", "\n", "```\n", "df = pd.DataFrame(data=np.random.random((10,3)), columns=['v1', 'v2', 'v3'])\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame(data=np.random.random((10,3)), columns=['v1', 'v2', 'v3'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Note: strings in Python can be defined either with double quotes or single quotes*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The ``head()`` and ``tail()`` methods show us the first and last rows of the data.\n", "\n", "```\n", "df.head()\n", "df.tail()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.tail()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1.4 Loading Data with Pandas" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can use the ``read_csv`` command to read the comma-separated-value data. This command is pretty sophisticated. It can read data via a URL (Uniform Resource Locator, see Lecture 2). Not only that, it can load data from a `.zip` file by decompressing it on the fly and opening the first `.csv` it finds. You can open different `.csv` files in the `.zip` file with additional arguments. See the [docs](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html) for more information."
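,
"\n",
"As a sketch of that idea (the archive and member names below are made up for illustration), the standard-library `zipfile` module lets you list an archive's members and hand a chosen one to `read_csv`:\n",
"\n",
"```\n",
"import io\n",
"import zipfile\n",
"\n",
"import pandas as pd\n",
"\n",
"# Build a tiny in-memory archive with two members (illustrative data only)\n",
"buf = io.BytesIO()\n",
"with zipfile.ZipFile(buf, 'w') as zf:\n",
"    zf.writestr('first.csv', 'a,b\\n1,2\\n')\n",
"    zf.writestr('second.csv', 'x,y\\n3,4\\n')\n",
"\n",
"# List the members, then read the one we want by name\n",
"with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:\n",
"    print(zf.namelist())\n",
"    df = pd.read_csv(zf.open('second.csv'))\n",
"```\n",
"\n",
"The same `zf.open(...)` pattern works on an archive fetched from a URL."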
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pd.read_csv('http://faculty.washington.edu/dacb/HCEPDB_moldata.zip')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1.5 Viewing Pandas Dataframes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The ``head()`` and ``tail()`` methods show us the first and last rows of the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.tail()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The ``shape`` attribute shows us the number of rows and columns:\n", "\n", "```\n", "data.shape\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The ``columns`` attribute gives us the column names:\n", "\n", "```\n", "data.columns\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1.6 Indexes** or Indices** in Pandas\n", "\n", "The ``index`` attribute gives us the index labels:\n", "\n", "```\n", "data.index\n", "```\n", "\n", "** Index is one of a few words with multiple acceptable plural variants" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.index" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's make our ``id`` column the ``index``.\n", "\n", "```\n", "data.set_index('id', inplace=True)\n", "```\n", "\n", "*Note:* the use of `inplace=True`. This causes the original data frame to be modified *in place* instead of creating a new data frame and returning the result to be stored in a new variable."
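,
"\n",
"If you prefer not to modify in place, the equivalent pattern returns a new frame and rebinds the name. A minimal sketch with a toy frame (the values here are made up for illustration):\n",
"\n",
"```\n",
"import pandas as pd\n",
"\n",
"# Toy frame with made-up values, standing in for our data\n",
"toy = pd.DataFrame({'id': ['m1', 'm2'], 'mass': [120.0, 98.5]})\n",
"toy = toy.set_index('id')  # returns a new frame; rebind the name\n",
"print(toy.index.name)\n",
"```"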
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# note inplace!\n", "data.set_index('id', inplace=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's revisit the ``data.index``\n", "\n", "```\n", "data.index\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.index" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "View it with head again:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "data.head()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "data.tail()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.tail()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The ``dtypes`` attribute gives the data types of each column:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "data.dtypes\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.dtypes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. 
Manipulating data with ``pandas``\n", "\n", "Here we'll cover some key features of manipulating data with pandas." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Access columns by name using square-bracket indexing:\n", "\n", "```\n", "data['mass']\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data['mass']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Mathematical operations on columns happen *element-wise* (note: 18.01528 g/mol is the molar mass of H2O):\n", "\n", "```\n", "data['mass'] / 18.01528\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data['mass'] / 18.01528" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Columns can be created (or overwritten) with the assignment operator.\n", "Let's create a *mass_ratio_H2O* column with the mass ratio of each molecule to H2O:\n", "\n", "```\n", "data['mass_ratio_H2O'] = data['mass'] / 18.01528\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data['mass_ratio_H2O'] = data['mass'] / 18.01528" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's view our dataframe including the new column.\n", "\n", "```\n", "data.head()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In preparation for grouping the data, let's bin the molecules by their molecular mass. For that, we'll use ``pd.cut``; see the [documentation for cut](https://pandas.pydata.org/docs/reference/api/pandas.cut.html). Cut is used when you want to bin numeric values into discrete intervals. 
This is useful for discretizing continuous data and for making histograms.\n", "\n", "```\n", "data['mass_group'] = pd.cut(data['mass'], 10)\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data['mass_group'] = pd.cut(data['mass'], 10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see the new `mass_group` column using head again.\n", "\n", "```\n", "data.head()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What are the data types of the new columns we have created?\n", "\n", "```\n", "data.dtypes\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.dtypes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2.1 Simple Grouping of Data\n", "\n", "The real power of Pandas comes in its tools for grouping and aggregating data. Here we'll look at *value counts* and the basics of *group-by* operations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Value Counts" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Pandas includes a wide array of useful tools for manipulating and analyzing tabular data.\n", "We'll take a look at two of them here."
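,
"\n",
"(A quick aside with a made-up series: newer pandas versions prefer the `Series.value_counts()` method over the top-level `pd.value_counts` used below; both count unique values the same way.)\n",
"\n",
"```\n",
"import pandas as pd\n",
"\n",
"# Made-up categorical series for illustration\n",
"s = pd.Series(['light', 'heavy', 'light', 'light'])\n",
"print(s.value_counts())\n",
"```"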
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The ``pandas.value_counts`` function returns the counts of the unique values in a column.\n", "\n", "We can use it, for example, to break down the molecules by the mass group we just created:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "pd.value_counts(data['mass_group'])\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.value_counts(data['mass_group'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What happens if we try this on a continuous-valued variable?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "pd.value_counts(data['mass'])\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.value_counts(data['mass'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can do a little data exploration with this to look for 0s in columns. Here, let's look at the power conversion efficiency (``pce``)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "pd.value_counts(data['pce'])\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.value_counts(data['pce'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2.1.1. 
The Group-by Operation\n", "\n", "One of the killer features of the Pandas dataframe is the ability to do group-by operations.\n", "You can visualize the group-by like this (image from the Python Data Science Handbook).\n", "\n", "![image](https://github.com/UWDIRECT/UWDIRECT.github.io/raw/master/Wi21_content/SEDS/split_apply_combine.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's take this in smaller steps.\n", "Recall our ``mass_group`` column.\n", "\n", "```\n", "pd.value_counts(data['mass_group'])\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.value_counts(data['mass_group'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Combining `groupby` with `count()` shows the number of non-null values in each column for each group." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "data.groupby(['mass_group']).count()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.groupby(['mass_group']).count()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's find the mean of each of the columns for each ``mass_group``. *Notice* what happens to the non-numeric columns." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "data.groupby(['mass_group']).mean()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.groupby(['mass_group']).mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can specify a groupby using the names of table columns and compute other functions, such as the ``sum``, ``count``, ``std``, and ``describe``."
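,
"\n",
"For instance, the split-apply-combine pattern on a toy frame (the numbers are made up for illustration) looks like:\n",
"\n",
"```\n",
"import pandas as pd\n",
"\n",
"# Toy frame with made-up numbers\n",
"toy = pd.DataFrame({'group': ['a', 'a', 'b'], 'val': [1.0, 3.0, 10.0]})\n",
"print(toy.groupby('group')['val'].sum())\n",
"```"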
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try describe now...\n", "\n", "```\n", "data.groupby(['mass_group'])['pce'].describe()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.groupby(['mass_group'])['pce'].describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.):\n", "\n", "```\n", "data.groupby(<columns>).<aggregation>()\n", "```\n", "\n", "You can even group by multiple values: for example, we can look at the LUMO-HOMO gap grouped by the ``mass_group`` and ``pce``." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#grouped = data.groupby(['mass_group', 'pce'])['e_gap_alpha'].mean()\n", "#grouped" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's take a moment to try some of the other aggregation functions such as `sum`, `count` and `mean`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Visualizing data with ``pandas``\n", "\n", "Of course, looking at tables of data is not very intuitive.\n", "Fortunately, Pandas has many useful plotting functions built-in, all of which make use of the ``matplotlib`` library to generate plots.\n", "\n", "Whenever you do plotting in the Jupyter notebook, you will want to first run this *magic command*, which configures the notebook to work well with plots.\n", "\n", "Note: a *magic command* is any command that starts with `%` in a Jupyter notebook. 
More documentation can be found [here](https://ipython.readthedocs.io/en/stable/interactive/magics.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "import matplotlib\n", "%matplotlib inline\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can simply call the ``plot()`` method of any series or dataframe to get a reasonable view of the data:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "data.groupby(['mass_group'])['pce'].mean().plot()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.groupby(['mass_group'])['pce'].mean().plot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**_Questions_**:\n", "* What do you think of this plot?\n", "* What would you change if you could?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.1. Other plot types\n", "\n", "Pandas supports a range of other plotting types; you can find these by using the autocomplete on the ``plot`` method or looking at the documentation, which is [here](https://pandas.pydata.org/docs/user_guide/visualization.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.groupby(['mass_group'])['mass'].count().plot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.hist('pce')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Breakout for Functions and Pandas\n", "\n", "Write a function that takes a column in Pandas and computes the [arithmetic mean](https://en.wikipedia.org/wiki/Arithmetic_mean) value of the data in it without using Pandas **_aggregate_** functions.\n", "\n", "Compare that result to the one from the Pandas **_aggregate_** function `.mean()`. 
How did your values compare? Were they exactly equal? Did you expect them to be, given what you know about **_floating point_** numbers?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12" } }, "nbformat": 4, "nbformat_minor": 1 }