{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
<div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: right;\"><th></th><th>pop</th><th>state</th><th>year</th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>0</th><td>1.5</td><td>OH</td><td>2000</td></tr>\n",
"    <tr><th>1</th><td>1.7</td><td>OH</td><td>2001</td></tr>\n",
"    <tr><th>2</th><td>3.6</td><td>OH</td><td>2002</td></tr>\n",
"    <tr><th>3</th><td>2.4</td><td>NV</td><td>2001</td></tr>\n",
"    <tr><th>4</th><td>2.9</td><td>NV</td><td>2002</td></tr>\n",
"  </tbody>\n",
"</table>\n",
"</div>
" ], "text/plain": [ " pop state year\n", "0 1.5 OH 2000\n", "1 1.7 OH 2001\n", "2 3.6 OH 2002\n", "3 2.4 NV 2001\n", "4 2.9 NV 2002" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "'''\n",
"Pandas is one of the most powerful contributions to Python for quick and easy data analysis. Data science is dominated by\n",
"one common data structure - the table. Python never had a great native way to manipulate tables in the ways many analysts\n",
"are used to (if you're at all familiar with spreadsheets or relational databases). The basic pandas data structure is the\n",
"DataFrame which, if you are an R user, should sound familiar.\n",
"\n",
"This module is a very high-level treatment of the basic operations one typically uses when manipulating tables in Python.\n",
"To really learn all of the details, refer to the book.\n",
"'''\n",
"\n",
"#To import\n",
"import pandas as pd #It's common to use pd as the abbreviation\n",
"from pandas import Series, DataFrame #Wes McKinney recommends importing these separately - they are used so often that they benefit from their own namespace\n",
"\n",
"'''\n",
"The Series - for this module we'll skip the Series (see the book for details), but we will define it. A Series is a one-dimensional,\n",
"array-like object that holds an array of values plus an index, which labels the entries. Once we present the DataFrame, one can\n",
"think of a Series as similar to a DataFrame with just one column.\n",
"'''\n",
"\n",
"'''\n",
"A simple example of the DataFrame - building one from a dictionary\n",
"(note that for this to work each list has to be the same length)\n",
"'''\n",
"data = {'state':['OH', 'OH', 'OH', 'NV', 'NV'], \n",
" 'year':[2000, 2001, 2002, 2001, 2002],\n",
" 'pop':[1.5, 1.7, 3.6, 2.4, 2.9]}\n",
"\n",
"frame = pd.DataFrame(data) #This turns the dict into a DataFrame. Notice that the keys become columns and a default integer index is created\n",
"\n",
"frame\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "(0 OH\n", " 1 OH\n", " 2 OH\n", " 3 NV\n", " 4 NV\n", " Name: state, dtype: object, 0 OH\n", " 1 OH\n", " 2 OH\n", " 3 NV\n", " 4 NV\n", " Name: state, dtype: object)" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#To retrieve a column, use dict-like notation or the column name as an attribute\n", "frame['state'], frame.state" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "( pop state year\n", " 1 1.7 OH 2001, pop 1.7\n", " state OH\n", " year 2001\n", " Name: 1, dtype: object)" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#To retrieve a row, slice it like a list, or select it by integer position with .iloc (the older .ix method is deprecated)\n", "frame[1:2], frame.iloc[1]" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
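" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "'''\n", "A quick sketch of the Series defined earlier (added as an illustrative example - any list-like data works). The\n", "optional index labels the entries; if omitted, a default integer index is used.\n", "'''\n", "ser = Series([1.5, 1.7, 3.6, 2.4, 2.9], index=['a', 'b', 'c', 'd', 'e'])\n", "ser['b'], ser.values" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "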
<div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: right;\"><th></th><th>pop</th><th>state</th><th>year</th><th>big_pop</th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>0</th><td>1.5</td><td>OH</td><td>2000</td><td>False</td></tr>\n",
"    <tr><th>1</th><td>1.7</td><td>OH</td><td>2001</td><td>False</td></tr>\n",
"    <tr><th>2</th><td>3.6</td><td>OH</td><td>2002</td><td>True</td></tr>\n",
"    <tr><th>3</th><td>2.4</td><td>NV</td><td>2001</td><td>False</td></tr>\n",
"    <tr><th>4</th><td>2.9</td><td>NV</td><td>2002</td><td>False</td></tr>\n",
"  </tbody>\n",
"</table>\n",
"</div>
" ], "text/plain": [ " pop state year big_pop\n", "0 1.5 OH 2000 False\n", "1 1.7 OH 2001 False\n", "2 3.6 OH 2002 True\n", "3 2.4 NV 2001 False\n", "4 2.9 NV 2002 False" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Assigning a new column is easy too\n", "frame['big_pop'] = (frame['pop']>3)\n", "frame" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
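" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "'''\n", "A quick illustrative sketch: a boolean Series (like the comparison above) can also filter rows directly\n", "'''\n", "frame[frame['pop'] > 2]" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "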
<div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: right;\"><th></th><th>Rand1</th><th>OrigOrd</th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>6</th><td>0.885661</td><td>6</td></tr>\n",
"    <tr><th>3</th><td>0.791040</td><td>3</td></tr>\n",
"    <tr><th>1</th><td>0.681814</td><td>1</td></tr>\n",
"    <tr><th>0</th><td>0.499951</td><td>0</td></tr>\n",
"    <tr><th>8</th><td>-0.346539</td><td>8</td></tr>\n",
"    <tr><th>9</th><td>-0.466018</td><td>9</td></tr>\n",
"    <tr><th>5</th><td>-0.655165</td><td>5</td></tr>\n",
"    <tr><th>4</th><td>-0.672232</td><td>4</td></tr>\n",
"    <tr><th>7</th><td>-0.981247</td><td>7</td></tr>\n",
"    <tr><th>2</th><td>-1.120429</td><td>2</td></tr>\n",
"  </tbody>\n",
"</table>\n",
"</div>
" ], "text/plain": [ " Rand1 OrigOrd\n", "6 0.885661 6\n", "3 0.791040 3\n", "1 0.681814 1\n", "0 0.499951 0\n", "8 -0.346539 8\n", "9 -0.466018 9\n", "5 -0.655165 5\n", "4 -0.672232 4\n", "7 -0.981247 7\n", "2 -1.120429 2" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "'''\n", "One operation frequent enough to highlight here is sorting\n", "'''\n", "import numpy as np\n", "\n", "df = pd.DataFrame(np.random.randn(10, 1), columns=['Rand1'])\n", "df['OrigOrd'] = df.index.values #Remember the original order so we can restore it later\n", "df = df.sort_values(by='Rand1', ascending=False) #Sorting by a particular column\n", "df" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
<div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: right;\"><th></th><th>Rand1</th><th>OrigOrd</th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>0</th><td>0.499951</td><td>0</td></tr>\n",
"    <tr><th>1</th><td>0.681814</td><td>1</td></tr>\n",
"    <tr><th>2</th><td>-1.120429</td><td>2</td></tr>\n",
"    <tr><th>3</th><td>0.791040</td><td>3</td></tr>\n",
"    <tr><th>4</th><td>-0.672232</td><td>4</td></tr>\n",
"    <tr><th>5</th><td>-0.655165</td><td>5</td></tr>\n",
"    <tr><th>6</th><td>0.885661</td><td>6</td></tr>\n",
"    <tr><th>7</th><td>-0.981247</td><td>7</td></tr>\n",
"    <tr><th>8</th><td>-0.346539</td><td>8</td></tr>\n",
"    <tr><th>9</th><td>-0.466018</td><td>9</td></tr>\n",
"  </tbody>\n",
"</table>\n",
"</div>
" ], "text/plain": [ " Rand1 OrigOrd\n", "0 0.499951 0\n", "1 0.681814 1\n", "2 -1.120429 2\n", "3 0.791040 3\n", "4 -0.672232 4\n", "5 -0.655165 5\n", "6 0.885661 6\n", "7 -0.981247 7\n", "8 -0.346539 8\n", "9 -0.466018 9" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = df.sort_index() #Now sorting back, using the index\n", "df" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
<div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: right;\"><th></th><th>key</th><th>rand_float</th><th>rand_int</th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>0</th><td>a</td><td>0.710313</td><td>4</td></tr>\n",
"    <tr><th>1</th><td>b</td><td>0.627569</td><td>1</td></tr>\n",
"    <tr><th>2</th><td>c</td><td>1.898392</td><td>1</td></tr>\n",
"    <tr><th>3</th><td>d</td><td>0.348506</td><td>4</td></tr>\n",
"    <tr><th>4</th><td>e</td><td>-1.623330</td><td>2</td></tr>\n",
"    <tr><th>5</th><td>f</td><td>1.149050</td><td>4</td></tr>\n",
"    <tr><th>6</th><td>g</td><td>-0.165141</td><td>2</td></tr>\n",
"    <tr><th>7</th><td>h</td><td>-0.874644</td><td>1</td></tr>\n",
"    <tr><th>8</th><td>i</td><td>-0.126067</td><td>2</td></tr>\n",
"    <tr><th>9</th><td>j</td><td>1.798592</td><td>0</td></tr>\n",
"  </tbody>\n",
"</table>\n",
"</div>
" ], "text/plain": [ " key rand_float rand_int\n", "0 a 0.710313 4\n", "1 b 0.627569 1\n", "2 c 1.898392 1\n", "3 d 0.348506 4\n", "4 e -1.623330 2\n", "5 f 1.149050 4\n", "6 g -0.165141 2\n", "7 h -0.874644 1\n", "8 i -0.126067 2\n", "9 j 1.798592 0" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "'''\n", "Some of the real power we are after is the ability to condense, merge, and concatenate data sets. This is where we\n", "want Python to have the same data-munging functionality we usually get from executing SQL statements on a relational\n", "database.\n", "'''\n", "\n", "alpha = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']\n", "df1 = DataFrame({'rand_float':np.random.randn(10), 'key':alpha})\n", "df2 = DataFrame({'rand_int':np.random.randint(0, 5, size=10), 'key':alpha})\n", "\n", "'''\n", "So we have two DataFrames that share a key column (in this case every key matches). We want to combine them. In SQL we\n", "would execute a join, such as SELECT * FROM table1 a JOIN table2 b ON a.key = b.key;\n", "'''\n", "df_merge = pd.merge(df1, df2, on='key')\n", "df_merge" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
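" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "'''\n", "An illustrative sketch of other join types: pd.merge defaults to an inner join, and the how parameter mirrors SQL's\n", "LEFT/RIGHT/FULL OUTER JOIN. Every key matches here, so the result equals the inner join; unmatched keys would yield NaN.\n", "'''\n", "pd.merge(df1, df2, on='key', how='left')" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "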
<div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: right;\"><th></th><th>rand_float</th></tr>\n",
"    <tr><th>rand_int</th><th></th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>0</th><td>1.798592</td></tr>\n",
"    <tr><th>1</th><td>0.550439</td></tr>\n",
"    <tr><th>2</th><td>-0.638179</td></tr>\n",
"    <tr><th>4</th><td>0.735957</td></tr>\n",
"  </tbody>\n",
"</table>\n",
"</div>
" ], "text/plain": [ " rand_float\n", "rand_int \n", "0 1.798592\n", "1 0.550439\n", "2 -0.638179\n", "4 0.735957" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "'''\n", "Now that we have this merged table, we might want to summarize it within a key grouping\n", "'''\n", "df_merge.groupby('rand_int').mean()" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
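" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "'''\n", "An illustrative sketch: grouped aggregation can also target a single column, and .size() counts the rows in each group\n", "'''\n", "df_merge.groupby('rand_int')['rand_float'].sum(), df_merge.groupby('rand_int').size()" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "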
<div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr><th></th><th colspan=\"4\" halign=\"left\">rand_float</th></tr>\n",
"    <tr style=\"text-align: right;\"><th></th><th>sum</th><th>mean</th><th>len</th><th>std</th></tr>\n",
"    <tr><th>rand_int</th><th></th><th></th><th></th><th></th></tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr><th>0</th><td>1.798592</td><td>1.798592</td><td>1.0</td><td>NaN</td></tr>\n",
"    <tr><th>1</th><td>1.651318</td><td>0.550439</td><td>3.0</td><td>1.388126</td></tr>\n",
"    <tr><th>2</th><td>-1.914538</td><td>-0.638179</td><td>3.0</td><td>0.853389</td></tr>\n",
"    <tr><th>4</th><td>2.207870</td><td>0.735957</td><td>3.0</td><td>0.400888</td></tr>\n",
"  </tbody>\n",
"</table>\n",
"</div>
" ], "text/plain": [ " rand_float \n", " sum mean len std\n", "rand_int \n", "0 1.798592 1.798592 1.0 NaN\n", "1 1.651318 0.550439 3.0 1.388126\n", "2 -1.914538 -0.638179 3.0 0.853389\n", "4 2.207870 0.735957 3.0 0.400888" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "'''\n", "You can apply several aggregation functions at once by passing a list to .agg rather than calling a single method\n", "(newer pandas versions prefer string names such as 'sum' and 'mean' here)\n", "'''\n", "df_merge.groupby('rand_int').agg([np.sum, np.mean, len, np.std])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 0 }