{ "cells": [ { "cell_type": "markdown", "source": [ "

Table of Contents\n", "
" ], "metadata": { "toc": true } }, { "cell_type": "markdown", "source": [ "> All content here is under a Creative Commons Attribution [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and all source code is released under a [BSD-2 clause license](https://en.wikipedia.org/wiki/BSD_licenses).\n", ">\n", ">Please reuse, remix, revise, and [reshare this content](https://github.com/kgdunn/python-basic-notebooks) in any way, keeping this notice." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "# Course overview\n", "\n", "This is the second module of several (11, 12, 13, 14, 15 and 16), which refocuses the course material in the [prior 10 modules](https://github.com/kgdunn/python-basic-notebooks) in a slightly different way. It places more emphasis on\n", "\n", "* dealing with data: importing, merging, filtering;\n", "* calculations from the data;\n", "* visualization of it.\n", "\n", "In short: ***how to extract value from your data***.\n" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "# Module 12 Overview\n", "\n", "This is the second of 6 modules. In this module we will cover\n", "\n", "* Using and understanding the Pandas library\n", "* Creating a Pandas data frame\n", "* Reading in Excel files as an alternative to create a data frame\n", "* Basic calculations with data frames\n", "\n", "**Requirements before starting**\n", "\n", "* Have your Python installation working as for module 11, and also the Pandas library installed." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Introduction to Pandas\n", "\n", "\n", "Why use ``pandas`` if you already can use tools like MATLAB and Excel?\n", "\n", "* In MATLAB you have arrays (matrix) of data. Pandas adds column headings and row labels (indexes) and calls the result a ``DataFrame``. Think of a spreadsheet.\n", "* In Pandas we often use the variable name ``df`` to refer to the data frame.\n", "* The advantage of using column heading is that you can then write code like this:\n", "\n", " ``df[\"TemperatureC\"] = (df[\"TemperatureF\"] - 32) * 5 / 9``\n", "\n", " to convert Fahrenheit to Celsius for the entire column. \n", " \n", " You do *not* need to know the column number, like in MATLAB, where you have to write ``X(:, 5) = (X(:, 2) - 32) * 5 / 9``, for example. In a spreadsheet you typically also write your formulas using the column labels, like ``= (B:B - 32) * 5 / 9``, if column B is the temperature in Fahrenheit.\n", "* Apart from referring to columns (or rows) by name, you can also merge two data frames together: for example to merge data from the lab, and data from the process. You have to specify which column is the common column. In Excel you can use `VLOOKUP` to do this, but it is messy, and with MATLAB you have to write code yourself to merge two data sets.\n", "* With Pandas, if your row names are time-based, then you can take advantage of that: e.g. you can, with 1 line of code, calculate the average over a week, or a month. In other languages you have to manually program that averaging, including taking into account that months sometimes have 28, 29, 30 or 31 days.\n", "* Data which are not time-based are equally well handled by Pandas.\n", "* Pandas also has multi-level indexing, or hierarchical indexing. More on that later.\n", "* If you do something on a data frame, like calculate an average over all rows, then the output result also gets those labels, the column headings in this case, kept in place.\n", "* Pandas takes care of missing data handling. 
if you calculate the average, missing values are ignored by default, unlike in MATLAB, where you would get `nan` as the result.\n", "* With Pandas you can quickly visualize your data, often with 1 line of code: \n", "\n", " * ``df[\"TemperatureC\"].plot()``\n", " * ``df.boxplot(column='activity', by='reactor')`` will create a boxplot of the values in the `activity` column, for every `reactor` \n", " " ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Getting started with the Pandas library" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "A library is a collection of someone else's Python code. It saves time to use existing, good-quality libraries, so you can focus on your own work: e.g. more on interpreting the data, and less on how to manipulate/process the data.\n", "\n", "You can load the Pandas library with this command:\n", "\n", "```python\n", "import pandas as pd\n", "pd.__version__ # ensure you have a version >= 1.3\n", "```\n", "\n", "Try it below:" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "There are 2 types of objects in Pandas we will use: a ``Series`` and a ``DataFrame``. \n", "\n", "* A ``Series`` is roughly the equivalent of a vector, or a column/row in a spreadsheet.\n", "* A ``DataFrame`` is a collection of ``Series`` objects, next to each other, to create a matrix of data." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Working with Pandas ``Series`` objects\n", "\n", "### What is a ``Series`` object?\n", "Let's see some characteristics of a ``Series``:\n", "```python\n", "# Create a Series from a list. \n", "s = pd.Series([ ... ]) \n", "print(s)\n", "```\n", "\n", "Put your own numbers inside the list in the space below. You learned [about lists in the prior module](https://yint.org/pybasic11)." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Notice the index (the column to the left of your numbers)? Let's look at another example:\n", "```python\n", ">>> s = pd.Series([ 5, 9, 1, -4, float('nan'), 5 ])\n", ">>> print(s) \n", "0    5.0\n", "1    9.0\n", "2    1.0\n", "3   -4.0\n", "4    NaN\n", "5    5.0\n", "dtype: float64\n", "```\n", "If you do not provide any labels for the rows, then these will be automatically generated for you, starting from 0." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "What if you have your own labels already?\n", "```python\n", "# You call the function with two inputs. One input is \n", "# mandatory (the first one), the other is optional.\n", "s = pd.Series(\n", " data = [5, 9, 1, -4, float('nan'), 5 ], \n", " index = ['a', 'b', 'c', 'd', 'e', 'f']\n", ")\n", "print(s)\n", "print(s.values)\n", "type(s.values)\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Ah ha! See what you get there in the output from ``s.values``? Pandas is built on top of another library, called NumPy. The underlying data are NumPy arrays, and Pandas adds extra functionality on top of that. We will refer back to NumPy later, and you will see it commonly referenced on Python websites that deal with data processing. So it is good to know about it."
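, "\n", "\n", "As a quick, minimal check (assuming the series ``s`` from the cell above), you can confirm that the underlying data really is a NumPy array, and that NumPy functions work directly on a Series:\n", "\n", "```python\n", "import numpy as np\n", "\n", "print(type(s.values))                     # <class 'numpy.ndarray'>\n", "print(isinstance(s.values, np.ndarray))   # True\n", "\n", "# NumPy functions accept a Series and return a Series, with the index kept in place:\n", "print(np.sqrt(s.abs()))\n", "```"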
], "metadata": {} }, { "cell_type": "markdown", "source": [ "Lastly, give your series a nice name:\n", "```python\n", "s.name = 'Random values'\n", "print(s)\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Mathematical calculations\n", "\n", "The series you created above, can be used in calculations. Notice how missing data are handled seamlessly.\n", "\n", "```python\n", "s = pd.Series(\n", " data = [5, 9, 1, -4, float('nan'), 5 ], \n", " index = ['a', 'b', 'c', 'd', 'e', 'f'],\n", " name = 'Calculations'\n", ")\n", "print(s * 5 + 2)\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "What type is a series object? Hint, use the ``type(...)`` function." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Calculate the square root of this column `s`. Remember in the [prior module](https://yint.org/pybasic11) how we calculated the square root by raising the number to the power of 0.5? \n", "\n", "Since the square root is not defined for negative numbers, such as the $-4$ in row `d`, what do you expect as an answer? Check it out in the space below." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Logical operations are possible too. Try some of these out:\n", "```python\n", "s > 4\n", "s.isna()\n", "s.notna()\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Accessing entries\n", "\n", "Like with lists, you can access the data entries using the square bracket notation. In Pandas:\n", "```python\n", "s[2]\n", "s['e']\n", "```\n", "\n", "Notice the second example above: you can access entries in the Series by their name!" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Selected subsets from the series can be accessed too, again using square brackets:\n", "```python\n", "s[[2, 4, 0]]\n", "s[['f', 'd', 'b']]\n", "\n", "# Selection based on logic: I want only values greater than 4. This is called filtering.\n", "s[s > 4]\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "You can also access a ``range`` of entries:\n", "```python\n", "s[0:2]\n", "s['a':'c']\n", "```\n", "Take a careful look at that output. You might have expected them to be the same length, but they are not! When accessing with the index **names**, you get the range _inclusive_ of the last entry. When accessing by index **number**, it behaves consistent with Python lists.\n", "\n", "That makes sense. Names of the rows, the index, do not necessarily have to be sequential, like ``['a', 'b', ... 'f']`` as in this example. Often the index is unordered. \n", "\n", "For example, if you had a series related to different Canadian cities: \n", "\n", "`['Toronto', 'Vancouver', 'Ottawa', 'Montréal', 'Halifax']`\n", "\n", "then with `['Vancouver':'Montréal']` you expect to see the middle 3 entries, inclusive of `Montréal`." 
], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Working with Pandas ``DataFrame`` objects\n", "\n", "\n", "Imagine you have 5 temperature measurements (rows) for 4 cities (columns). In actual data the columns would be the temperature measurement from a different part of the process. For this example, each column is a city.\n", "\n", "We can create a ``DataFrame`` using a ***list-of-lists***:\n", "```python\n", "import pandas as pd\n", "rawdata = [[17, 19, 22, 20], \n", " [11, 14, 15, 12], \n", " [ 7, 11, 8, 7], \n", " [ 8, 9, 8, 8], \n", " [ 7, 9, 8, 6]]\n", "df = pd.DataFrame(\n", " data=rawdata, \n", " columns = ['Johannesburg', 'Cape Town', 'Pretoria', 'Durban']\n", ")\n", "print(df)\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Tip: Pandas can handle column names with a space in them.😊 This is why when you want to see one column, you can refer to it as follows:\n", "\n", "* ``df[\"Cape Town\"]``\n", "* ``df['Johannesburg']``\n", "* What type is each column inside ``df``? Try finding out: ``type(df[\"Cape Town\"])``" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Try some calculations now:\n", "```python\n", "df.max() \n", "df.max(axis=0) \n", "df.max(axis=1) \n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Now try some other types of calculations on all the columns: \n", "\n", "* ``df.sum``\n", "* ``df.mean``\n", "* ``df.median``\n", "* ``df.std # Standard deviation``\n", "* ``df.min ``\n", "* ``df.idxmin``\n", "* ``df.diff``\n", "\n", "Notice that these calculations take place on the columns, by default. What if you wanted to do them on the rows?\n" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Try the following to expand your knowledge.\n", "\n", "\n", "* Calculations on certain columns. The beauty of Pandas is how easy it is to write equations, based on the columns:\n", "```python\n", "df['Johannesburg'] * 4 - df['Durban']\n", "```\n", "The above does exactly what you think it should.\n", "\n", "* What does this do? \n", "\n", "```python\n", ">>> df.diff().abs().max()\n", "\n", "# and this? \n", ">>> df.diff().abs().max().idxmax()\n", "```\n", "\n", "* What is the interpretation of that long command?" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "You can stack up your sequential operations quite compactly in Pandas. It works because the output from one function is the input for the next one to the right.\n", "\n", "**A tip on style**\n", "\n", "You can also use ``df.Johannesburg`` to access a column, but this is not good Pandas style, so don't do this. It cannot handle column names with spaces, and if you have a column name that is also a built-in operation, like ``max``, for example, it is confusing." 
], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Time for a diversion: Dictionaries!\n", "\n", "A dictionary is a Python ***object*** that is a flexible data container for other objects. It contains these objects using what are called ***key*** - ***value*** pairs. You create a dictionary like this:\n", "\n", "```python\n", "random_objects = {\n", " 'my integer': 45,\n", " 'a float': 12.34,\n", " 'short_list': [1, 4, 7],\n", " 'longer list': [2, 4, 6, 9, 12, 16, 20, 25, 30, 36, 42],\n", " 'website': \"https://learnche.org\",\n", " 'a tuple': (1, 2.0, 33, 444, '5555', 'etc'),\n", "}\n", "print(random_objects)\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "In older Python versions, the dictionary print out will be a random order. Newer versions of Python ***maintain the order*** of the container. \n", "\n", "The dictionary has what are called ***keys*** and ***values***:\n", "```python\n", "\n", "# These both return a list:\n", "random_objects.keys()\n", "random_objects.values()\n", "\n", "# What is the \"type\" of this dictionary?\n", "print(f'The object is of: {type(random_objects)}')\n", "\n", "# You can access individual values from the dictionary by using the key:\n", "random_objects['short_list']\n", "\n", "# What happens when you use a non-existent key?\n", "random_objects['mystery']\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "In the above example, the keys were all ***string*** objects. But that is not required. You can use integers, floating point values, strings, tuples, or a mixture of them. There are other options too, but these are comprehensive enough.\n", "\n", "Dictionary values may be any ***objects***, even other dictionaries. Yes, so a dictionary within a dictionary is possible. \n", "\n", "Dictionary objects are excellent ***containers***. If you need to return several objects from a function, collect them in a dictionary, and return them in that single object. It is not required, but it can make your code neater, and more logical.\n", "\n", "### Try it\n", "\n", "Create a dictionary for yourself with 4 `key`-`value` pairs, which summarizes a regression model. The `key` is the first item below, followed by a description of what you should create as the `value`:\n", "1. `intercept`: make up a floating-point value which is the intercept of your linear model\n", "2. `slope`: pick any floating-point value as the slope\n", "3. `R2`: the $R^2$ value of the regression model\n", "5. `residuals`: a list (vector) of residuals. You can use a Pandas Series here also!\n", "\n", "You can create the above dictionary in a single line of code. " ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "regression_model = ___\n" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Add a new item to the dictionary\n", "\n", "```python\n", "regression_model = { ... } # create your dictionary\n", "regression_model['new key'] = 'additional value'\n", "```\n", "\n", "And you can overwrite/update an existing key-value pair in the same way:\n", "```python\n", "random_objects['my integer'] = 42\n", "```\n", "This implies you can never have 2 keys which are the same. 
], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Creating a Series from a dictionary\n", "\n", "Now we can combine two new concepts you have just learned: dictionaries and Pandas.\n", "\n", "```python\n", "raw_data = {\n", " 'Germany': 27, \n", " 'Belgium': 13, \n", " 'Netherlands': 52, \n", " 'Sweden': 54, \n", " 'Ireland': 5\n", "}\n", "tons_herring_eaten = pd.Series(raw_data)\n", "print(tons_herring_eaten)\n", "```\n", "\n", "The row names (index) are taken from the dictionary keys, and each is associated with its value. \n", "\n", "1. Write the Pandas command to determine which country eats the most herring. It is **not** the ``tons_herring_eaten.max()`` command!\n", "2. And the least herring?\n", "3. What does this do? ``tons_herring_eaten.sort_values()``. Print the variable afterwards. \n", "4. And what does this do then? ``tons_herring_eaten.sort_index()``" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## DataFrame operations\n", "\n", "Now you will use the knowledge of dictionaries you just developed.\n", "\n", "We will show code for some commonly-used Pandas operations:\n", "* the shape of a data frame, \n", "* finding the unique entries, \n", "* adding and merging columns, \n", "* adding rows, \n", "* deleting rows,\n", "* removing missing values.\n", "\n", "We will use this made-up data set, showing how much of each food is consumed in each country. You can replace these data with numbers, columns and rows which make sense for your application.\n", "\n", ">```python\n", ">import pandas as pd\n", ">data = {\n", "> 'Herring': [27, 13, 52, 54, 5, 19], \n", "> 'Coffee': [90, 94, 96, 97, 30, 73],\n", "> 'Tea': [88, 48, 98, 93, 99, 88]\n", ">}\n", ">countries = ['Germany', 'Belgium', 'Netherlands', 'Sweden', 'Ireland', 'Switzerland']\n", ">food_consumed = pd.DataFrame(data, index=countries)\n", ">\n", ">print(data)\n", ">print(countries)\n", ">print(type(data))\n", ">print(type(countries))\n", ">print(type(food_consumed))\n", ">food_consumed\n", ">```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### 0. Getting an idea about your data first\n", "\n", "```python\n", "# The first rows:\n", "food_consumed.head()\n", "\n", "# The last rows:\n", "food_consumed.tail()\n", "\n", "# Some basic statistics\n", "food_consumed.describe()\n", "\n", "# Some information about the data structure: missing values, memory usage, etc.\n", "food_consumed.info()\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### 1. Shape of a data frame\n", "\n", "```python\n", "# There were 6 countries, and 3 food types. Verify:\n", "food_consumed.shape\n", "\n", "# Transposed and then the shape:\n", "food_consumed.T.shape\n", "\n", "# Interesting: what shapes do summary vectors have?\n", "food_consumed.mean().shape\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [
"#### 2. Unique entries\n", "```python\n", "food_consumed['Tea'].unique()\n", "\n", "# Unique names of the rows: (not so useful in this example, because they are already unique)\n", "food_consumed.index.unique()\n", "\n", "# Get counts (n) of the unique entries:\n", "food_consumed.nunique() # in each column \n", "food_consumed.nunique(axis=1) # in each row\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### 3. Add a new column\n", "```python\n", "# Works just like a dictionary!\n", "# If the data are in the same row order:\n", "food_consumed['Yoghurt'] = [30, 20, 53, 2, 3, 48]\n", "print(food_consumed)\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### 4. Merging data frames\n", "```python\n", "# Note the row order is different this time:\n", "more_foods = pd.DataFrame(\n", " index=['Belgium', 'Germany', 'Ireland', 'Netherlands', 'Sweden', 'Switzerland'],\n", " data={'Garlic': [29, 22, 5, 15, 9, 64]},\n", ")\n", "print(food_consumed)\n", "print(more_foods)\n", "# Merge 'more_foods' into the 'food_consumed' data frame. Merging works, even if the row order is not the same!\n", "food_consumed = food_consumed.join(more_foods)\n", "food_consumed\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### 5. Adding a new row\n", "```python\n", "# Collect the new data in a Series. Note that 'Tea' is (intentionally) missing!\n", "portugal = pd.Series(\n", " data = {\n", " 'Coffee': 72, \n", " 'Herring': 20, \n", " 'Yoghurt': 6, \n", " 'Garlic': 89,\n", " },\n", " name = 'Portugal'\n", ")\n", "\n", "# The older .append(...) method was removed in recent versions of Pandas; use pd.concat(...) instead.\n", "# The Series is first turned into a 1-row data frame, then concatenated onto the existing rows.\n", "food_consumed = pd.concat([food_consumed, portugal.to_frame().T])\n", "# See the missing value created?\n", "print(food_consumed)\n", "\n", "# What happens if you run the above commands more than once?\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### 6. Delete or drop a row/column\n", "```python\n", "# Drops the column in place, and returns its values to you\n", "coffee_column = food_consumed.pop('Coffee')\n", "print(coffee_column)\n", "print(food_consumed)\n", "\n", "# Leaves the original data untouched; returns only \n", "# a copy, with those columns removed\n", "food_consumed.drop(['Garlic', 'Yoghurt'], axis=1)\n", "print(food_consumed)\n", "\n", "# Leaves the original data untouched; returns only \n", "# a copy, with those rows removed. \n", "non_EU_consumption = food_consumed.drop(['Switzerland', ], axis=0)\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### 7. Remove rows with missing values\n", "```python\n", "# Returns a COPY of the data frame, with the rows containing missing values removed:\n", "cleaned_data = food_consumed.dropna() \n", "\n", "# Makes the deletion in place; you do not have to assign the output to a new variable.\n", "# Inplace is not always faster!\n", "food_consumed.dropna(inplace=True) \n", "\n", "# Remove only rows where all values are missing:\n", "food_consumed.dropna(how='all')\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### 8.
Sort the data\n", "\n", "```python\n", "food_consumed.sort_values(by=\"Garlic\")\n", "food_consumed.sort_values(by=\"Garlic\", inplace=True)\n", "food_consumed.sort_values(by=\"Garlic\", inplace=True, ascending=False)\n", "```" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Reading and writing Excel files with Pandas\n", "\n", "The basic command to read an Excel file is straight-forward:\n", "\n", "```python\n", "filename = r\"C:\\temp\\colour-reference.xlsx\" # use the 'r' at the start with Windows directory names\n", "\n", "# or, you can even specify the web address for the file\n", "filename = \"https://yint.org/static/colour-reference.xlsx\"\n", "colour_data = pd.read_excel(\n", " filename, \n", " sheet_name='Sheet1', \n", " skiprows=0, \n", " index_col=0,\n", ")\n", "print(colour_data)\n", "```\n", "\n", "Try it: \n", "* Download an Excel file (or use your own); here is the one used in the demo code above: https://yint.org/static/colour-reference.xlsx\n", "* Save the file somewhere on your hard drive.\n", "* Open it up to see the file structure, and to see what data you expect to see next.\n", "* Change the `filename` line in the code above.\n", "* Run the code and verify you got what you expected.\n", "* Adjust the `skiprows` and `index_col` function inputs to see what happens.\n" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "Excel files can be complex, with different layouts, so read the documentation about Pandas and Excel files: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Exercise: mass balance\n", "\n", "1. Create an Excel file of a tank reactor: each column is a measurement from the reactor, and each row is a measurement that is taken at some point in time.\n", "2. Simulate (create) some data. Save that file to your hard drive.\n", "3. Use the knowledge you learned above to read in that Excel file. Use the ``df.head()`` function to make sure you have the correct values.\n", "4. Use the mass balance principle: $$ \\text{Accumulation} = \\text{Input} - \\text{Output} + \\text{Generation} - \\text{Consumption} $$\n", "5. Collect all the columns that are needed for the right hand side of the equation. For example, consider a carbon balance (then the $\\text{Generation}$ and $\\text{Consumption}$ columns are zero). Therefore calculate the input and the output carbon, and check if there is an accumulation in the tank over time." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Exporting to Excel\n", "\n", "\n", "Similarly, for writing Excel files, it is often enough to just use:\n", "```python\n", "df = ... 
# code goes here to create/update your data frame, df\n", "df.to_excel(\"output.xlsx\", sheet_name='Summary')\n", "```\n", "and it is worth checking the documentation for further function options: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_excel.html" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} } ], "metadata": { "hide_input": false, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" }, "toc": { "base_numbering": "1", "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": true, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "349px" }, "toc_section_display": true, "toc_window_display": true }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }