{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Handling Data over time\n", "\n", "There's a widespread trend in solar physics at the moment for correlation over actual science, so being able to handle data over time spans is a skill we all need to have. Python has ample support for this so lets have a look at what we can use." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "
\n", "
\n", "

Learning Objectives

\n", "
\n", "\n", "\n", "
\n", "\n", "\n", "\n", "
\n", "\n", "
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sunpy Time Series\n", "\n", "SunPy provides a timeseries object to handle this type of time series data. The module has a number of instruments associated with it, including:\n", "\n", "* GOES XRS TimeSeries\n", "* SDO EVE TimeSeries for level 0CS data\n", "* Proba-2 LYRA TimeSeries\n", "* NOAA Solar Cycle monthly indices.\n", "* Nobeyama Radioheliograph Correlation TimeSeries.\n", "* RHESSI X-ray Summary TimeSeries.\n", "\n", "We're going to examine the data created by a solar flare on June 7th 2011.\n", "\n", "Lets begin with the import statements:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now lets look at some test series data, in this case we can utilitse the sunpy sample data. Do this with `import sunpy.data.sample`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now goes data is a sunpy time seris object so we can inspect the object" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "NB: not all sources provide meta data so this may be empty.\n", "\n", "The actual data is accessible tthrough the attributes of the timeseries object. Part of the advantage of using these inbuilt functions we can get a quicklook at our data using short commands:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Accessing and using the data\n", "\n", "More custom plots can be made easily by accessing the data in the timeseries functionality. Both the time information and the data are contained within the timeseries.data code, which is a pandas dataframe. We can see what data is contained in the dataframe by finding which columns it contains and also asking what's in the meta data: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "
\n", "
\n", "

On Dictionaries

\n", "
\n", "\n", "\n", "
\n", "\n", "

We can create keyword-data pairs to form a dictionary (shock horror) of values. In this case we have defined some strings and number to represent temperatures across europe

\n", "
temps = {'Brussles': 9, 'London': 3, 'Barcelona': 13, 'Rome': 16}\n",
    "temps['Rome']\n",
    "16\n",
    "
\n", "\n", "\n", "

We can also find out what keywords are associated with a given dictionary, In this case:

\n", "
temps.keys()\n",
    "dict_keys(['London', 'Barcelona', 'Rome', 'Brussles'])\n",
    "
\n", "\n", "\n", "

Dictionaries will crop up more and more often, typically as a part of differnt file structure such as ynl and json.

\n", "\n", "
\n", "\n", "
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pandas\n", "\n", "In its own words Pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. Pandas has two forms of structures, 1D series and 2D dataframe. It also has its own functions associated with it.\n", "\n", "It is also amazing.\n", "\n", "Timeseries uses these in built Pandas functions, so we can find out things like the maximum of curves:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So lets plot them on the graph" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reading in Tablulated data\n", "\n", "Now we have seen a little of what Pandas can do, lets read in some of our own data. In this case we are going to use data from Bennett et al. 2015, ApJ, a truly ground breaking work. Now the data we are reading in here is a structured Array.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, the above line imports information on some solar features over a sample time period. Specifically we have, maximum length, lifetime and time at which they occured. Now if we type `data[0]` what will happen?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is the first row of the array, containing the first element of our three properties. This particular example is a stuctured array, so the columns and rows can have properties and assign properties to the header. We can ask what the title of these columns is by using a `dtype` command:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Unhelpful, so lets give them something more recognisable. We can use the docs to look up syntax and change the names of the column lables." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "
\n", "
\n", "

Google your troubles away

\n", "
\n", "\n", "\n", "
\n", "\n", "

So the docs are here. Find the syntax to change to names to better to represent maximum length, lifetime and point in time which they occured.

\n", "\n", "
\n", "\n", "
\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DataFrame\n", "\n", "Now a pandas DataFrame takes two arguments as a minimum, index and data. In this case the index will be our time within the sample and the maximum length and lifetime will be our data. So lets import pandas and use the dataframe:\n", "\n", "Pandas reads a dictionary when we want to input multiple data columns. Therefore we need to make a dictionary of our data and read that into a pandas data frame. First we need to import pandas." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "
\n", "
\n", "

Dictionaries

\n", "
\n", "\n", "\n", "
\n", "\n", "

So we covered dictionaries earlier. We can create keyword data pairs to form a dictionary (shock horror) of values. In this case

\n", "
temps = {'Brussles': 9, 'London': 3, 'Barcelona': 13, 'Rome': 16}\n",
    "temps['Rome']\n",
    "16\n",
    "
\n", "\n", "\n", "

We can also find out what keywords are associated with a given dictionary, In this case:

\n", "
temps.keys()\n",
    "dict_keys(['London', 'Barcelona', 'Rome', 'Brussles'])\n",
    "
\n", "\n", "
\n", "\n", "
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's import Pandas:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Datetime Objects\n", "\n", "Notice that the time for the sample is in a strange format. It is a string containing the date in YYYY-MM-DD and time in HH-MM-SS-mmmmmm. These datetime objects have their own set of methods associated with them. Python appreciates that these are built this way and can use them for the indexing easily. \n", "\n", "We can use this module to create date objects (representing just year, month, day). We can also get information about universal time, such as the time and date today.\n", "\n", "NOTE: Datetime objects are NOT strings. They are objects which print out as strings.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looking back at when we discussed the first element of data, and the format of the time index was awkward to use so lets do something about that. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So this is a byte rather than a string so we'll need to convert that using the string handling functionality in pandas" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a string and python will just treat it as such. We need to use datetime to pick this string appart and change it into an oject we can use.\n", "\n", "[To the Docs!](https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior)\n", "\n", "So we use the formatting commands to match up with the string we have." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now get attributes from this such as the hour, month, second and so on" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now the next logical step would be to make a for loop and iterate over the index and reassign it.\n", "\n", "*HOWEVER* there is almost always a better way. And Pandas has a `to_dateime()` method that we can feed the time columns:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is also one of the most powerful featues of python, Apply. \n", "Apply will take a function and apply it to all rows in a dataframe or column. The easiest way to do this is with a lambda function" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Both these are much cleaner and faster due to pandas' optimisation. 
We can now set one of these as the index" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that there are official datetime objects on the index we can start operating based on the time of the frame" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we have used the groupby command to take the `'max_len'` column, called as a dictionary key, and create bins for our data to sit in according to year and then month. \n", "\n", "The object `l_bins` has `mean`, `max`, `std` etc. attributes in the same way as the numpy arrays we handled the other day." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we have all this data we can build a lovely bargraph with error bars and wonderful things like that.\n", "\n", "Remember, these pandas objects have functions associated with them, and one of them is a plot command." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the date on the x-axis is a little messed up we can fix with `fig.autofmt_xdate()`\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "
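, { "cell_type": "markdown", "metadata": {}, "source": [ "As a hedged recap of that whole pipeline in one place (`data` and `df` carry on from the earlier sketches, and the exact `format` string is an assumption based on the description of the time strings above):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "\n", "# Decode the byte strings, then parse them into real datetime objects\n", "# (an apply-based alternative: times.apply(lambda x: datetime.strptime(x, fmt)))\n", "times = pd.Series(data['time']).str.decode('utf-8')\n", "df.index = pd.to_datetime(times, format='%Y-%m-%d %H:%M:%S.%f')\n", "\n", "# Bin the maximum lengths by year and then month\n", "l_bins = df['max_len'].groupby([df.index.year, df.index.month])\n", "\n", "# Bar graph of the monthly means, with standard deviations as error bars\n", "fig, ax = plt.subplots()\n", "l_bins.mean().plot(kind='bar', yerr=l_bins.std(), ax=ax)\n", "fig.autofmt_xdate()\n", "plt.show()" ] }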
\n", "
\n", "

How do the lifetimes change?

\n", "
\n", "\n", "\n", "
\n", "\n", "

Now that we have the plot for the maximum length, now make a bar graph of the lifetimes of the features.

\n", "\n", "
\n", "\n", "
\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "
\n", "
\n", "

Exoplanet Data

\n", "
\n", "\n", "\n", "
\n", "\n", "

Now, to all the astronomers out there, let us process some real data. We have some txt files containing the timeseries data from a recent paper. Can you process the data and show us the planet?

\n", "

HINT: You'll need to treat this data slightly differently. The date here is in Julian Day so you will need to use these docs to convert it to a sensible datetime object, before you make the DataFrame.

\n", "\n", "
\n", "\n", "
\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 1 }