{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/fonnesbeck/Bios8366/blob/master/notebooks/Section2_2-Pandas-Fundamentals.ipynb)\n", "\n", "# Pandas Fundamentals\n", "\n", "This section introduces the new user to the key functionality of Pandas that is required to use the software effectively.\n", "\n", "For some variety, we will leave our digestive tract bacteria behind and employ some baseball data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "pd.set_option('display.max_rows', 10)\n", "\n", "DATA_PATH = 'https://raw.githubusercontent.com/fonnesbeck/Bios8366/master/data/'\n", "\n", "baseball = pd.read_csv(DATA_PATH + \"baseball.csv\", index_col='id')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that we specified the `id` column as the index, since it appears to be a unique identifier. We could try to create a unique index ourselves by combining `player` and `year`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "player_id = baseball.player + baseball.year.astype(str)\n", "baseball_newind = baseball.copy()\n", "baseball_newind.index = player_id\n", "baseball_newind" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This looks okay, but let's check:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind.index.is_unique" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So, indices need not be unique. Our choice is not unique because some players change teams within years." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.Series(baseball_newind.index).value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The most important consequence of a non-unique index is that indexing by label will return multiple values for some labels:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind.loc['wickmbo012007']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will learn more about indexing below." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can create a truly unique index by combining `player`, `team` and `year`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "player_unique = baseball.player + baseball.team + baseball.year.astype(str)\n", "baseball_newind = baseball.copy()\n", "baseball_newind.index = player_unique\n", "baseball_newind.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind.index.is_unique" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can create meaningful indices more easily using a hierarchical index; for now, we will stick with the numeric `id` field as our index." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Manipulating indices\n", "\n", "**Reindexing** allows users to manipulate the data labels in a DataFrame. 
It forces a DataFrame to conform to the new index, optionally filling in missing data if requested.\n", "\n", "A simple use of `reindex` is to alter the order of the rows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.reindex(baseball.index[::-1]).head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that the `id` index is not sequential. Say we wanted to populate the table with every `id` value. We could specify an index that is a sequence from the first to the last `id` number in the database, and Pandas would fill in the missing data with `NaN` values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Add 1 to the upper bound so the last id is included, since range excludes its endpoint\n", "id_range = range(baseball.index.values.min(), baseball.index.values.max() + 1)\n", "baseball.reindex(id_range).head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Missing values can be filled as desired, either with selected values, or by rule:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.reindex(id_range, method='ffill').head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.reindex(id_range, fill_value='charliebrown', columns=['player']).head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Keep in mind that `reindex` does not work if we pass a non-unique index series." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can remove rows or columns via the `drop` method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.drop([89525, 89526])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.drop(['ibb','hbp'], axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Indexing and Selection\n", "\n", "Indexing works analogously to indexing in NumPy arrays, except we can use the labels in the `Index` object to extract values in addition to arrays of integers."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sample Series object\n", "hits = baseball_newind.h\n", "hits" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Numpy-style indexing\n", "hits[:3]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Indexing by label\n", "hits[['womacto01CHN2006','schilcu01BOS2006']]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also slice with data labels, since they have an intrinsic order within the Index:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hits['womacto01CHN2006':'gonzalu01ARI2006']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hits['womacto01CHN2006':'gonzalu01ARI2006'] = 5\n", "hits" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In a `DataFrame` we can slice along either or both axes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind[['h','ab']]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind[baseball_newind.ab>500]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For a more concise (and readable) syntax, we can use the new `query` method to perform selection on a `DataFrame`. Instead of having to type the fully-specified column, we can simply pass a string that describes what to select. The query above is then simply:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind.query('ab > 500')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `DataFrame.index` and `DataFrame.columns` are placed in the query namespace by default. If you want to refer to a variable in the current namespace, you can prefix the variable with `@`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "min_ab = 450" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind.query('ab > @min_ab')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The indexing field `loc` allows us to select subsets of rows and columns in an intuitive way:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind.loc['gonzalu01ARI2006', ['h','X2b', 'X3b', 'hr']]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind.loc[:'myersmi01NYA2006', 'hr']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to using `loc` to select rows and columns by **label**, pandas also allows indexing by **position** using the `iloc` attribute.\n", "\n", "So, we can query rows and columns by absolute position, rather than by name:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind.iloc[:5, 5:8]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "\n", "You can use the `isin` method query a DataFrame based upon a list of values as follows: \n", "\n", " data['phylum'].isin(['Firmacutes', 'Bacteroidetes'])\n", "\n", "Use `isin` to find all players that played for the Los Angeles Dodgers (LAN) or the San Francisco Giants (SFN). How many records contain these values?" 
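] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For reference (and separate from the exercise itself), `isin` returns a boolean `Series` that can be used directly as a row mask; here is a minimal sketch applied to the `year` column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch: isin builds a boolean mask that selects matching rows\n", "year_mask = baseball.year.isin([2006])\n", "baseball[year_mask].head()"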
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write your answer here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Operations\n", "\n", "`DataFrame` and `Series` objects allow for several operations to take place either on a single object, or between two or more objects.\n", "\n", "For example, we can perform arithmetic on the elements of two objects, such as combining baseball statistics across years. First, let's (artificially) construct two Series, consisting of home runs hit in years 2006 and 2007, respectively:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hr2006 = baseball.loc[baseball.year==2006, 'hr']\n", "hr2006.index = baseball.player[baseball.year==2006]\n", "\n", "hr2007 = baseball.loc[baseball.year==2007, 'hr']\n", "hr2007.index = baseball.player[baseball.year==2007]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hr2007" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's add them together, in hopes of getting 2-year home run totals:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hr_total = hr2006 + hr2007\n", "hr_total" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Pandas' data alignment places `NaN` values for labels that do not overlap in the two Series. In fact, there are only 6 players that occur in both years." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hr_total[hr_total.notnull()]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While we do want the operation to honor the data labels in this way, we probably do not want the missing values to be filled with `NaN`. 
We can use the `add` method to calculate player home run totals by using the `fill_value` argument to insert a zero for home runs where labels do not overlap:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hr2007.add(hr2006, fill_value=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Operations can also be **broadcast** between rows or columns.\n", "\n", "For example, if we subtract the maximum number of home runs hit from the `hr` column, we get how many fewer than the maximum were hit by each player:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.hr - baseball.hr.max()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or, looking at things row-wise, we can see how a particular player compares with the rest of the group with respect to important statistics" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.loc[89521, \"player\"]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stats = baseball[['h','X2b', 'X3b', 'hr']]\n", "diff = stats - stats.loc[89521]\n", "diff[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also apply functions to each column or row of a `DataFrame`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "stats.apply(np.median)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def range_calc(x):\n", " return x.max() - x.min()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stat_range = lambda x: x.max() - x.min()\n", "stats.apply(stat_range)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets use apply to calculate a meaningful baseball statistics, slugging percentage:\n", "\n", "$$SLG = \\frac{1B + (2 \\times 2B) + (3 \\times 3B) + (4 \\times HR)}{AB}$$\n", "\n", "And just for fun, we will format the resulting estimate." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def slugging(x): \n", " bases = x['h']-x['X2b']-x['X3b']-x['hr'] + 2*x['X2b'] + 3*x['X3b'] + 4*x['hr']\n", " ab = x['ab']+1e-6\n", " \n", " return bases/ab\n", "\n", "baseball.apply(slugging, axis=1).round(3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sorting and Ranking\n", "\n", "Pandas objects include methods for re-ordering data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind.sort_index().head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind.sort_index(ascending=False).head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Try sorting the **columns** instead of the rows, in ascending order:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_newind.sort_index(axis=1).head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also use `sort_values` to sort a `Series` by value, rather than by label." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.hr.sort_values()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For a `DataFrame`, we can sort according to the values of one or more columns using the `by` argument of `sort_values`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball[['player','sb','cs']].sort_values(ascending=[False,True], \n", " by=['sb', 'cs']).head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Ranking** does not re-arrange data, but instead returns an index that ranks each value relative to others in the Series." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.hr.rank()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ties are assigned the mean value of the tied ranks, which may result in decimal values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.Series([100,100]).rank()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alternatively, you can break ties via one of several methods, such as by the order in which they occur in the dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.hr.rank(method='first')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Calling the `DataFrame`'s `rank` method results in the ranks of all columns:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.rank(ascending=False).head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball[['r','h','hr']].rank(ascending=False).head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "\n", "Calculate **on base percentage** for each player, and return the ordered series of estimates.\n", "\n", "$$OBP = \\frac{H + BB + HBP}{AB + BB + HBP + SF}$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write your answer here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Hierarchical indexing\n", "\n", "In the baseball example, I was forced to combine 3 fields to obtain a unique index that was not simply an integer value. A more elegant way to have done this would be to create a hierarchical index from the three fields." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_h = baseball.set_index(['year', 'team', 'player'])\n", "baseball_h.head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This index is a `MultiIndex` object that consists of a sequence of tuples, the elements of which is some combination of the three columns used to create the index. Where there are multiple repeated values, Pandas does not print the repeats, making it easy to identify groups of values." 
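] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each element of this index is a tuple of (`year`, `team`, `player`) values, which we can confirm by inspecting the first element (a minimal illustration):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The label of the first row is a tuple drawn from the three index columns\n", "baseball_h.index[0]"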
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_h.index[:10]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_h.index.is_unique" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Try using this hierarchical index to retrieve Julio Franco (`francju01`), who played for the Atlanta Braves (`ATL`) in 2007:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball_h.loc[(2007, 'ATL', 'francju01')]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall that earlier we imported some microbiome data using two index columns. This created a 2-level hierarchical index:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mb = pd.read_csv(DATA_PATH + 'microbiome.csv', index_col=['Taxon','Patient'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mb.head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With a hierarchical index, we can select subsets of the data based on a *partial* index:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mb.loc['Proteobacteria']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Hierarchical indices can be created on either or both axes. Here is a trivial example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "frame = pd.DataFrame(np.arange(12).reshape((4, 3)),\n", "                     index=[['a', 'a', 'b', 'b'], [1, 2, 1, 2]],\n", "                     columns=[['Ohio', 'Ohio', 'Colorado'], ['Green', 'Red', 'Green']])\n", "\n", "frame" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to get fancy, both the row and column indices themselves can be given names:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "frame.index.names = ['key1', 'key2']\n", "frame.columns.names = ['state', 'color']\n", "frame" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With this, we can do all sorts of custom indexing:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "frame.loc['a', 'Ohio']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Try retrieving the value corresponding to `b2` in `Colorado`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write your answer here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Additionally, the order of the set of indices in a hierarchical `MultiIndex` can be changed by swapping them pairwise:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mb.swaplevel('Patient', 'Taxon').head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Data can also be sorted by any index level, using `sort_index` with the appropriate `level` argument:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mb.sort_index(level='Patient', ascending=False).head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Missing data\n", "\n", "The occurrence of missing data is so prevalent that it pays to use tools like Pandas, which seamlessly integrates missing data handling so that it can be dealt with easily, and in the manner required by the analysis at hand.\n", "\n", "Missing data are represented in `Series` and `DataFrame` objects by the `NaN` floating point value. However, `None` is also treated as missing, since it is commonly used as such in other contexts (*e.g.* NumPy)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "foo = pd.Series([np.nan, -3, None, 'foobar'])\n", "foo" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "foo.isnull()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Missing values may be dropped or indexed out:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_scores = pd.read_csv(DATA_PATH + 'test_scores.csv', index_col=0, nrows=50)\n", "test_scores" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_scores.dropna()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_scores.isnull()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_scores[test_scores.notnull()]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By default, `dropna` drops entire rows in which one or more values are missing." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This can be overridden by passing the `how='all'` argument, which only drops a row when every field is a missing value." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_scores.dropna(how='all')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This can be customized further by specifying how many values need to be present before a row is dropped via the `thresh` argument." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_scores.dropna(thresh=10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is typically used in time series applications, where there are repeated measurements that are incomplete for some subjects." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "\n", "Try using the `axis` argument to drop columns with missing values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write your answer here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Rather than omitting missing data from an analysis, in some cases it may be suitable to fill the missing value in, either with a default value (such as zero) or a value that is either imputed or carried forward/backward from similar data points. We can do this programmatically in Pandas with the `fillna` method." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_scores.fillna(-999)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_scores.fillna({'family_inv': 0, 'prev_disab': 1})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that `fillna` by default returns a new object with the desired filling behavior, rather than changing the `Series` or `DataFrame` in place (**in general, we like to do this, by the way!**)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can alter values in-place using `inplace=True`."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_scores.prev_disab.fillna(0, inplace=True)\n", "test_scores" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Missing values can also be filled by rule, using any one of a variety of methods, such as back-filling from the next valid observation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_scores.fillna(method='bfill')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data summarization\n", "\n", "We often wish to summarize data in `Series` or `DataFrame` objects, so that they can more easily be understood or compared with similar data. The NumPy package contains several functions that are useful here, but several summarization or reduction methods are built into Pandas data structures." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.sum()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Clearly, `sum` is more meaningful for some columns than others. For methods like `mean`, applying them to string variables is not just meaningless, but impossible, so we first filter `DataFrame`s by column type with `select_dtypes`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.select_dtypes(include='number').mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The important difference between NumPy's functions and Pandas' methods is that the latter have built-in support for handling missing data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_scores.mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Sometimes we may not want to ignore missing values, but instead allow the `NaN` to propagate." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_scores.mean(skipna=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Passing `axis=1` will summarize over rows instead of columns, which only makes sense in certain situations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "extra_bases = baseball[['X2b','X3b','hr']].sum(axis=1)\n", "extra_bases.sort_values(ascending=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A useful summarization that gives a quick snapshot of multiple statistics for a `Series` or `DataFrame` is `describe`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`describe` can detect non-numeric data and sometimes yield useful information about it."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.player.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also calculate summary statistics *across* multiple columns, for example, correlation and covariance.\n", "\n", "$$cov(x,y) = \\frac{1}{n-1} \\sum_i (x_i - \\bar{x})(y_i - \\bar{y})$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.hr.cov(baseball.X2b)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$$corr(x,y) = \\frac{cov(x,y)}{s_x s_y} = \\frac{\\sum_i (x_i - \\bar{x})(y_i - \\bar{y})}{\\sqrt{\\sum_i (x_i - \\bar{x})^2 \\sum_i (y_i - \\bar{y})^2}}$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.hr.corr(baseball.X2b)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.ab.corr(baseball.h)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Try running `corr` on the entire `baseball` DataFrame to see what is returned:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Write answer here" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we have a `DataFrame` with a hierarchical index (or indices), summary statistics can be applied with respect to any of the index levels:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mb.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mb.groupby(level='Taxon').sum()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Writing Data to Files\n", "\n", "As well as being able to read several data input formats, Pandas can also export data to a variety of storage formats. We will bring your attention to just a couple of these." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mb.to_csv(\"mb.csv\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `to_csv` method writes a `DataFrame` to a comma-separated values (csv) file. You can specify custom delimiters (via `sep` argument), how missing values are written (via `na_rep` argument), whether the index is written (via `index` argument), whether the header is included (via `header` argument), among other options." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An efficient way of storing data to disk is in binary format. Pandas supports this using Python’s built-in pickle serialization." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseball.to_pickle(\"baseball_pickle\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The complement to `to_pickle` is the `read_pickle` function, which restores the pickle to a `DataFrame` or `Series`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.read_pickle(\"baseball_pickle\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As Wes warns in his book, it is recommended that binary storage of data via pickle only be used as a temporary storage format, in situations where speed is relevant. This is because there is no guarantee that the pickle format will not change with future versions of Python."
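] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick check (included here purely as an illustration), we can confirm that the pickle round trip preserved the data exactly by comparing the restored object with the original `DataFrame`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Should evaluate to True: the restored DataFrame matches the original\n", "pd.read_pickle(\"baseball_pickle\").equals(baseball)"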
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## References\n", "\n", "[Python for Data Analysis](http://shop.oreilly.com/product/0636920023784.do) Wes McKinney" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.9" }, "latex_envs": { "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 0 }, "nav_menu": {}, "toc": { "navigate_menu": true, "number_sections": false, "sideBar": false, "threshold": "3", "toc_cell": true, "toc_section_display": "none", "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 4 }