{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Pandas Time series" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import addutils.toc ; addutils.toc.js(ipy_notebook=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For further documentation see [Time Series / Date functionality](http://pandas.pydata.org/pandas-docs/stable/timeseries.html) on the [pandas](http://pandas.pydata.org/) documentation." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "\n", "\n" ], "text/plain": [ "" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "import pandas as pd\n", "from numpy import NaN\n", "from IPython.display import (display, HTML)\n", "from pandas.tseries.offsets import *\n", "from addutils import side_by_side2\n", "from addutils import css_notebook\n", "css_notebook()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1 Timestamps and DatetimeIndex" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`to_datetime` convertsa list of date-like objects to Time Stamps. If the list is homegeneous `infer_datetime_format=True` can give a great speed-up. Otherwise with `format` is possible to define the format of the strings a-priori." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "DatetimeIndex(['2000-11-10 15:52:13.773100+00:00', '2014-05-03 10:57:18.775400+00:00'], dtype='datetime64[ns, UTC]', freq=None)" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.to_datetime(['10-11-2000 17:52:13.7731+02:00', '3-5-2014 11:57:18.7754+01:00'], dayfirst=True, utc=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It’s also possible to convert integer or float epoch times. 
The float value is interpreted as a Unix timestamp and the default unit of measure is nanoseconds, but a different unit (D, s, ms, us, ns) can be specified:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "DatetimeIndex(['1970-01-01'], dtype='datetime64[ns]', freq=None)\n", "DatetimeIndex(['1970-01-01 00:00:00.000000001'], dtype='datetime64[ns]', freq=None)\n", "DatetimeIndex(['1970-01-02'], dtype='datetime64[ns]', freq=None)\n" ] } ], "source": [ "print (pd.to_datetime([0]))\n", "print (pd.to_datetime([1]))\n", "print (pd.to_datetime([1], unit='D'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Index of Timestamps: use `date_range` and `bdate_range` to create regular frequency timestamp indexes. These functions return **`DatetimeIndex`** objects that are **arrays of Timestamps**:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2012-01-01 15:53:25.335000+01:00\n", "2012-01-03 19:29:07.450000+01:00\n", "2012-01-05 23:04:49.565000+01:00\n", "2012-01-08 02:40:31.680000+01:00\n", "2012-01-10 06:16:13.795000+01:00\n", "2012-01-12 09:51:55.910000+01:00\n" ] } ], "source": [ "rng = pd.date_range('1/1/2012 15:53:25.335',\n", " periods=6,\n", " freq='2d3h35min42s115ms',\n", " tz='Europe/Rome')\n", "for date in rng:\n", " print (date)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "Timestamp('2012-01-01 15:53:25.335000+0100', tz='Europe/Rome', offset='185742115L')" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "rng[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This can be used as an index for Series, DataFrames and Panels:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, 
"outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
2012-01-01 15:53:25.335000+01:00794663
2012-01-03 19:29:07.450000+01:00734914
2012-01-05 23:04:49.565000+01:0046395
2012-01-08 02:40:31.680000+01:00289558
2012-01-10 06:16:13.795000+01:00318730
2012-01-12 09:51:55.910000+01:00456765
\n", "
" ], "text/plain": [ " A B C\n", "2012-01-01 15:53:25.335000+01:00 79 46 63\n", "2012-01-03 19:29:07.450000+01:00 73 49 14\n", "2012-01-05 23:04:49.565000+01:00 4 63 95\n", "2012-01-08 02:40:31.680000+01:00 28 95 58\n", "2012-01-10 06:16:13.795000+01:00 31 87 30\n", "2012-01-12 09:51:55.910000+01:00 45 67 65" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "data = np.random.randint(0,99,(len(rng),3))\n", "d1 = pd.DataFrame(data, index=rng, columns=list('ABC'))\n", "display(d1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Pandas provides a timezone conversion. Here we produce new data by converting the previous timeseries form the Rome timezone to the Eastern US timezone. When data with different timezones are combined toghether (as in `d3 = d1+d2`) the results are given in **UTC** time which can be in turn converted in any timezone. In this example the UTC timezone is shown with the notation **`+00:00`**" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
2012-01-01 15:53:25.335000+01:00794663
2012-01-03 19:29:07.450000+01:00734914
2012-01-05 23:04:49.565000+01:0046395
2012-01-08 02:40:31.680000+01:00289558
2012-01-10 06:16:13.795000+01:00318730
2012-01-12 09:51:55.910000+01:00456765
\n", "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
2012-01-01 09:53:25.335000-05:00794663
2012-01-03 13:29:07.450000-05:00734914
2012-01-05 17:04:49.565000-05:0046395
2012-01-07 20:40:31.680000-05:00289558
2012-01-10 00:16:13.795000-05:00318730
2012-01-12 03:51:55.910000-05:00456765
\n", "
" ], "text/plain": [ "" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "d2 = d1.tz_convert('US/Eastern')\n", "HTML(side_by_side2(d1, d2))" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
2012-01-01 14:53:25.335000+00:0015892126
2012-01-03 18:29:07.450000+00:001469828
2012-01-05 22:04:49.565000+00:008126190
2012-01-08 01:40:31.680000+00:0056190116
2012-01-10 05:16:13.795000+00:006217460
2012-01-12 08:51:55.910000+00:0090134130
\n", "
" ], "text/plain": [ " A B C\n", "2012-01-01 14:53:25.335000+00:00 158 92 126\n", "2012-01-03 18:29:07.450000+00:00 146 98 28\n", "2012-01-05 22:04:49.565000+00:00 8 126 190\n", "2012-01-08 01:40:31.680000+00:00 56 190 116\n", "2012-01-10 05:16:13.795000+00:00 62 174 60\n", "2012-01-12 08:51:55.910000+00:00 90 134 130" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "d3 = d1+d2\n", "d3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2 DateOffsets objects" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous example we used a time frequency string (with Offset Aliases) `freq='2d3h35min42s115ms` with `date_range` to create a `DatetimeIndex`. These frequency strings are being translated into an instance of pandas `DateOffset`, which represents a regular frequency increment. Specific offset logic like “month”, “business day”, or “one hour” is represented in its various subclasses.\n", "\n", "The key features of a `DateOffset` object are:\n", "* it can be used to shift a datetime object\n", "* it can be multiplied by an integer\n", "* it has rollforward and rollback methods for moving a date forward or backward" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from pandas.tseries.offsets import *" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2014-01-08 13:15:00.350000\n", "2014-01-31 00:00:00\n", "2014-02-01 00:00:00\n", "2014-06-02 00:00:00\n" ] } ], "source": [ "print (pd.datetime(2014, 1, 1) + Week() + Hour(13) + Minute()*15 + Milli()*350)\n", "print (pd.datetime(2014, 1, 1) + MonthEnd())\n", "print (pd.datetime(2014, 1, 1) + MonthBegin())\n", "print (pd.datetime(2014, 1, 1) + BQuarterBegin(2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Offset Aliases can be used to define to define time 
series frequencies:\n", "* B - Business Day\n", "* D - Calendar Day\n", "* H - Hour\n", "* T - Minute\n", "* (see the Pandas documentation for a complete list)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "DatetimeIndex(['2014-01-01 00:00:00', '2014-01-01 00:04:00'], dtype='datetime64[ns]', freq='4T')\n", "DatetimeIndex(['2014-01-01 00:00:00', '2014-01-01 00:04:00'], dtype='datetime64[ns]', freq='4T')\n", "DatetimeIndex(['2014-01-01 00:00:00', '2014-01-01 00:04:00'], dtype='datetime64[ns]', freq='4T')\n" ] } ], "source": [ "print (pd.date_range('2014/1/1', periods=2, freq=Minute(4)))\n", "print (pd.date_range('2014/1/1', periods=2, freq='4T'))\n", "print (pd.date_range('2014/1/1', periods=2, freq='4min'))" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2014-01-01 12:30:00\n", "2014-01-02 13:40:00\n", "2014-01-03 14:50:00\n", "2014-01-06 16:00:00\n", "2014-01-07 17:10:00\n", "2014-01-08 18:20:00\n", "2014-01-09 19:30:00\n" ] } ], "source": [ "start = pd.datetime(2014, 1, 1, 12, 30)\n", "end = pd.datetime(2014, 1, 12)\n", "idx2 = pd.date_range(start, end, freq='1B1H10T')\n", "for i in idx2:\n", " print (i)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3 Indexing with a DateTime index" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As the name suggests, DatetimeIndex objects can be used to index DataFrames. The following cells show alternative ways of selecting dates:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
2014-01-01 12:30:00366535
2014-01-02 13:40:0054812
2014-01-03 14:50:0043807
2014-01-06 16:00:00136043
2014-01-07 17:10:00297649
2014-01-08 18:20:0014113
2014-01-09 19:30:00602639
\n", "
" ], "text/plain": [ " A B C\n", "2014-01-01 12:30:00 36 65 35\n", "2014-01-02 13:40:00 54 8 12\n", "2014-01-03 14:50:00 43 80 7\n", "2014-01-06 16:00:00 13 60 43\n", "2014-01-07 17:10:00 29 76 49\n", "2014-01-08 18:20:00 1 41 13\n", "2014-01-09 19:30:00 60 26 39" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = np.random.randint(0,99,(len(idx2),3))\n", "d2 = pd.DataFrame(data, index=idx2, columns=list('ABC'))\n", "d2" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
AC
2014-01-02 13:40:005412
2014-01-03 14:50:00437
2014-01-06 16:00:001343
2014-01-07 17:10:002949
2014-01-08 18:20:00113
\n", "
" ], "text/plain": [ " A C\n", "2014-01-02 13:40:00 54 12\n", "2014-01-03 14:50:00 43 7\n", "2014-01-06 16:00:00 13 43\n", "2014-01-07 17:10:00 29 49\n", "2014-01-08 18:20:00 1 13" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Remember that in Pandas endpoints are included\n", "d2.ix['2014/01/02':'2014/01/08', ['A', 'C']]" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
AC
2014-01-06 16:00:001343
2014-01-07 17:10:002949
\n", "
" ], "text/plain": [ " A C\n", "2014-01-06 16:00:00 13 43\n", "2014-01-07 17:10:00 29 49" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "d2.ix['2014/01/06 12:00':'2014/01/08 12:00', ['A', 'C']]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is possible to mangle with indexes using offset objects. BDay means business day (be aware that xmas is considered a business day)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4 Frequency conversion" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can convert all TimeSeries to specified frequency using DateOffset objects. Optionally we can provide a fill method to handle missing values." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
2014-01-01 12:30:00366535
2014-01-02 13:40:0054812
2014-01-03 14:50:0043807
2014-01-06 16:00:00136043
2014-01-07 17:10:00297649
2014-01-08 18:20:0014113
2014-01-09 19:30:00602639
\n", "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
2014-01-01 12:30:00366535
2014-01-02 12:30:00366535
2014-01-03 12:30:0054812
2014-01-04 12:30:0043807
2014-01-05 12:30:0043807
2014-01-06 12:30:0043807
2014-01-07 12:30:00136043
2014-01-08 12:30:00297649
2014-01-09 12:30:0014113
\n", "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
2014-01-01 12:30:00366535
2014-01-02 12:30:00NaNNaNNaN
2014-01-03 12:30:00NaNNaNNaN
2014-01-04 12:30:00NaNNaNNaN
2014-01-05 12:30:00NaNNaNNaN
2014-01-06 12:30:00NaNNaNNaN
2014-01-07 12:30:00NaNNaNNaN
2014-01-08 12:30:00NaNNaNNaN
2014-01-09 12:30:00NaNNaNNaN
\n", "
" ], "text/plain": [ "" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# TODO: Fix\n", "HTML(side_by_side2(d2,\n", " d2.asfreq(Day(), method='ffill'),\n", " d2.asfreq(Day(), method=None)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5 Filling gaps" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are going to see some ways to let pandas fill `NaN` values on a dataframe.\n", "\n", "The first method is called forward filling and consists on using the first element above that isn't `NaN`." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
2014-01-01 12:30:0036NaN35
2014-01-02 13:40:0054NaN12
2014-01-03 14:50:00NaNNaN7
2014-01-06 16:00:00NaNNaN43
2014-01-07 17:10:00NaN76NaN
2014-01-08 18:20:00141NaN
2014-01-09 19:30:006026NaN
\n", "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
2014-01-01 12:30:0036NaN35
2014-01-02 13:40:0054NaN12
2014-01-03 14:50:0054NaN7
2014-01-06 16:00:0054NaN43
2014-01-07 17:10:00547643
2014-01-08 18:20:0014143
2014-01-09 19:30:00602643
\n", "
" ], "text/plain": [ "" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "d3 = d2.copy()\n", "d3.iloc[2:5, 0] = np.nan\n", "d3.iloc[0:4, 1] = np.nan\n", "d3.iloc[4:, 2] = np.nan\n", "cols = ['A', 'B', 'C']\n", "HTML(side_by_side2(d3, d3[cols].fillna(method='ffill')))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that on column 'B' the `NaN` values are at the beginning. So the method hasn't been able to fill those holes. The backward filling methods is complementary to the one above. It fills gaps using the first non `NaN` value below the cell." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
2014-01-01 12:30:0036NaN35
2014-01-02 13:40:0054NaN12
2014-01-03 14:50:00NaNNaN7
2014-01-06 16:00:00NaNNaN43
2014-01-07 17:10:00NaN76NaN
2014-01-08 18:20:00141NaN
2014-01-09 19:30:006026NaN
\n", "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
2014-01-01 12:30:00367635
2014-01-02 13:40:00547612
2014-01-03 14:50:001767
2014-01-06 16:00:0017643
2014-01-07 17:10:00176NaN
2014-01-08 18:20:00141NaN
2014-01-09 19:30:006026NaN
\n", "
" ], "text/plain": [ "" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "HTML(side_by_side2(d3, d3[cols].fillna(method='bfill')))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "Visit [www.add-for.com]() for more tutorials and updates.\n", "\n", "This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.4.3" } }, "nbformat": 4, "nbformat_minor": 0 }