{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Tutorial 4. Basic Plotting" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Previous sections have focused on putting various simple types of data together in notebooks and deployed servers, but most people will want to include plots as well. In this section, we'll focus on one of the simplest (but still powerful) ways to get a plot.\n", "\n", "If you have tried to visualize a `pandas.DataFrame` before, then you have likely encountered the [Pandas .plot() API](https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html). This basic plotting interface uses [Matplotlib](http://matplotlib.org) to render static PNGs or SVGs in a Jupyter notebook using the `inline` backend (or interactive figures via `%matplotlib notebook` or `%matplotlib widget`) and for exporting from Python, with a command that can be as simple as `df.plot()` for a DataFrame with one or two columns.\n", "\n", "The Pandas .plot() API has emerged as a de facto standard for high-level plotting APIs in Python, and it is now supported by many different libraries that use other underlying plotting engines to provide additional power and flexibility. Learning this API therefore gives you access to capabilities provided by a wide variety of underlying tools, with relatively little additional effort. The libraries currently supporting this API include:\n", "\n", "- [Pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html) -- Matplotlib-based API included with Pandas. Static or interactive output in Jupyter notebooks.\n", "- [xarray](https://xarray.pydata.org/en/stable/plotting.html) -- Matplotlib-based API included with xarray, based on the Pandas .plot API. Static or interactive output in Jupyter notebooks.\n", "- [hvPlot](https://hvplot.pyviz.org) -- HoloViews- and Bokeh-based interactive plots for Pandas, GeoPandas, xarray, Dask, Intake, and Streamz data.\n", "- [Pandas Bokeh](https://github.com/PatrikHlobil/Pandas-Bokeh) -- Bokeh-based interactive plots for Pandas, GeoPandas, and PySpark data.\n", "- [Cufflinks](https://github.com/santosjorge/cufflinks) -- Plotly-based interactive plots for Pandas data.\n", "- [Plotly Express](https://plotly.com/python/pandas-backend) -- Plotly-Express-based interactive plots for Pandas data; only partial support for the .plot API keywords.\n", "- [PdVega](https://altair-viz.github.io/pdvega) -- Vega-lite-based, JSON-encoded interactive plots for Pandas data.\n", "\n", "In this notebook we'll explore what is possible with the default `.plot` API and demonstrate the additional capabilities of `.hvplot`, using the same tabular dataset of earthquakes and other seismological events queried from the [USGS Earthquake Catalog](https://earthquake.usgs.gov/earthquakes/search) using its [API](https://github.com/pyviz/holoviz/wiki/Creating-the-USGS-Earthquake-dataset), as in previous sections. Of course, this particular dataset is just an example; the same approach can be used with just about any tabular dataset." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Read in the data\n", "\n", "Here we'll read in the data using Dask, which works well with a relatively large dataset like this (2.1 million rows). We'll use `.persist()` to bring the whole dataset into main memory (which should be feasible on any recent machine) for higher performance:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import dask.dataframe as dd" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = dd.read_parquet('../data/earthquakes.parq').persist()\n", "df['time'] = df['time'].astype('datetime64[ns]')  # store timestamps as proper datetimes\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using Pandas `.plot`\n", "\n", "The first thing that we'd like to do with this data is visualize the locations of every earthquake, so we would like to make a scatter or points plot where `x='longitude'` and `y='latitude'`.\n", "\n", "If you are familiar with the `pandas.plot` API, you might expect to execute `df.plot.scatter(x='longitude', y='latitude')`. Feel free to try this out in a new cell, but it will throw an error: `AttributeError: 'DataFrame' object has no attribute 'plot'`. Since we have a Dask DataFrame rather than a Pandas DataFrame, we first need to convert it to Pandas to use `.plot`. To make the data more manageable for now, we'll briefly use just a fraction (1%) of it and call that `small_df`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "small_df = df.sample(frac=0.01).compute()\n", "small_df.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we have a smaller dataset of about 21k earthquakes. We can use that to test out our visualizations before ramping back up to the full dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "small_df.plot.scatter(x='longitude', y='latitude');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using `.hvplot`\n", "\n", "As you can see above, the Pandas API gives you a usable plot very easily, in which you can start to see the structure of the edges of the tectonic plates (which in some cases correspond with the edges of the continents and in others lie between two continents). You can make a very similar plot with the same arguments using hvPlot." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import hvplot.pandas  # noqa: adds hvplot method to pandas objects" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "small_df.hvplot.scatter(x='longitude', y='latitude')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, unlike in the Pandas `.plot()`, there is a default hover action on the data points to show the location values, and you can also pan and zoom to focus on any particular region of the data of interest.\n", "\n", "You might have noticed that many of the dots in the scatter plot we've just created lie on top of one another. This is called [\"overplotting\"](http://datashader.org/user_guide/1_Plotting_Pitfalls.html#1.-Overplotting) and can be avoided in a variety of ways, such as by making the dots slightly transparent or by binning the data. These approaches have the downside of introducing bias, because you need to choose the alpha or the edges of the bins, and to do that you have to make assumptions about the data. For an initial exploration of a new dataset, it's much safer if you can just ***see*** the data, before you impose any assumptions about its form or structure." ] },
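{ "cell_type": "markdown", "metadata": {}, "source": [ "For reference, the two workarounds just mentioned look roughly like this; treat it as a sketch only, since the exercises below ask you to experiment with these options yourself:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Transparency: overlapping points darken each other, but the alpha value is a choice we impose\n", "small_df.hvplot.scatter(x='longitude', y='latitude', alpha=0.3)\n", "# Binning: aggregate points into hexagonal bins (the bin granularity is likewise a choice)\n", "# small_df.hvplot.hexbin(x='longitude', y='latitude')" ] },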
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise\n", "\n", "Try changing the alpha (try 0.1) on the plot above to see the effect of this approach." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Try creating a `hexbin` plot." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Datashader\n", "\n", "To avoid some of the problems of traditional scatter/point plots, we can use [Datashader](https://datashader.org), which aggregates data into each pixel without any arbitrary parameter settings. In `hvplot` we can activate this capability by setting `datashade=True`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "small_df.hvplot.scatter(x='longitude', y='latitude', datashade=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can already see a lot more detail, but remember that we are still only plotting 1% of the data (about 21k earthquakes). With Datashader, we can easily plot the full, original dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import hvplot.dask  # noqa: adds hvplot method to dask objects" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.hvplot.scatter(x='longitude', y='latitude', datashade=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here you can see all of the rich detail in this set of 2.1 million earthquake event locations. If you have a live Python process running, you can zoom in and see additional detail at each zoom level, without tuning any parameters or making any assumptions about the form or structure of the data. We'll come back to Datashader later, but for now the important thing to know about it is that it lets us conveniently work with arbitrarily large datasets in a web browser.\n", "\n", "Note that the `.hvplot()` API works here even though `df` is a Dask rather than a Pandas object, because unlike the other `.plot` libraries, hvPlot doesn't just target Pandas objects. Instead, hvPlot can be used with:\n", " - Pandas : DataFrame, Series (columnar/tabular data)\n", " - xarray : Dataset, DataArray (labelled multidimensional arrays)\n", " - Dask : DataFrame, Series (distributed/out-of-core arrays and columnar data)\n", " - Streamz : DataFrame(s), Series(s) (streaming columnar data)\n", " - Intake : DataSource (data catalogues)\n", " - GeoPandas : GeoDataFrame (geometry data)\n", " - NetworkX : Graph (network graphs)\n", "\n", "A brief xarray illustration follows the exercise below." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise\n", "\n", "If you are brave and don't mind refreshing your browser tab if it dies, create a scatter plot of the full dataset without setting `datashade=True`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] },
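{ "cell_type": "markdown", "metadata": {}, "source": [ "To show that the same `.hvplot` API carries across the data libraries listed above, here is a sketch using xarray. It is illustrative only, and assumes a network connection, since `xr.tutorial.open_dataset` downloads a small sample dataset on first use:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import xarray as xr\n", "import hvplot.xarray  # noqa: adds hvplot method to xarray objects\n", "\n", "# A small gridded sample dataset of air temperatures (downloads on first use)\n", "air = xr.tutorial.open_dataset('air_temperature').air\n", "air.isel(time=0).hvplot.image(x='lon', y='lat')  # same .hvplot API, now on a DataArray" ] },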
{ "cell_type": "markdown", "metadata": {}, "source": [ "### A Note on `points`\n", "\n", "As a final note, we should really use `hvplot.points` instead of `hvplot.scatter` in this instance. The former does not exist in the standard Pandas `.plot` API, which is why we have used `hvplot.scatter` up until now.\n", "\n", "The reason scatter is inappropriate is that it implies that the y-axis (latitude) is a *dependent variable* with respect to the x-axis (longitude). In reality this is not the case, as earthquakes can happen anywhere on the Earth's two-dimensional surface. For this reason, it is best to use `hvplot.points` for earthquake locations, as will be explained further in the next notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.hvplot.points(x='longitude', y='latitude', datashade=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Statistical Plots\n", "\n", "Let's dive into some of the other capabilities of `.plot()` and `.hvplot()`, starting with the frequency of different magnitude earthquakes.\n", "\n", "| Magnitude | Earthquake Effect | Estimated Number Each Year |\n", "|---------------|-------------------|----------------------------|\n", "| 2.5 or less | Usually not felt, but can be recorded by seismograph. | 900,000 |\n", "| 2.5 to 5.4 | Often felt, but only causes minor damage. | 30,000 |\n", "| 5.5 to 6.0 | Slight damage to buildings and other structures. | 500 |\n", "| 6.1 to 6.9 | May cause a lot of damage in very populated areas. | 100 |\n", "| 7.0 to 7.9 | Major earthquake. Serious damage. | 20 |\n", "| 8.0 or greater| Great earthquake. Can totally destroy communities near the epicenter. | One every 5 to 10 years |\n", "\n", "As a first pass, we'll use a histogram, first with `.plot.hist` on the small data, then with `.hvplot.hist` on the full dataset. Before plotting, we'll clean the data by setting any non-positive magnitudes to NaN." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cleaned_df = df.copy()\n", "cleaned_df['mag'] = df.mag.where(df.mag > 0)  # non-positive magnitudes become NaN\n", "cleaned_small_df = cleaned_df.sample(frac=0.01).compute()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cleaned_small_df.plot.hist(y='mag');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Similarly, we can create a histogram of the whole dataset using hvPlot." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cleaned_df.hvplot.hist(y='mag', bin_range=(0, 10), bins=50)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise\n", "\n", "Create a kernel density estimate (kde) plot of magnitude." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Categorical variables\n", "\n", "Next we'll categorize the earthquakes based on depth. You can read about all the different variables available in this dataset [here](https://earthquake.usgs.gov/data/comcat/data-eventterms.php). In the interest of time, we'll use the small dataset and assume that it is representative of all the earthquakes. According to the [USGS page on earthquake depths](https://earthquake.usgs.gov/learn/topics/determining_depth.php):\n", "\n", "> Shallow earthquakes are between 0 and 70 km deep; intermediate earthquakes, 70 - 300 km deep; and deep earthquakes, 300 - 700 km deep. In general, the term \"deep-focus earthquakes\" is applied to earthquakes deeper than 70 km. 
\n", "All earthquakes deeper than 70 km are localized within great slabs of lithosphere that are sinking into the Earth's mantle.\n", "\n", "First we'll use `pd.cut` to split the small dataset into depth categories." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "depth_bins = [-np.inf, 70, 300, np.inf]  # bin edges in km\n", "depth_names = ['Shallow', 'Intermediate', 'Deep']\n", "depth_class_column = pd.cut(cleaned_small_df['depth'], depth_bins, labels=depth_names)\n", "\n", "cleaned_small_df.insert(1, 'depth_class', depth_class_column)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now use this new categorical variable to group our data. First we will overlay all our groups on the same plot using the `by` option:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cleaned_small_df.hvplot.hist(y='mag', by='depth_class')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**NOTE:** Click on the legend entries to turn off certain categories and see what is behind them." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise\n", "\n", "Add `subplots=True` and `width=300` to see the different classes side-by-side. The y-axis will be linked, so try zooming." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Grouping\n", "\n", "To use a widget to toggle between classes, use the `groupby` option, here in a bivariate plot:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cleaned_small_df.hvplot.bivariate(x='mag', y='depth', groupby='depth_class')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to classifying by depth, we can classify by magnitude.\n", "\n", "| Class | Magnitude |\n", "|----------|-----------|\n", "| Great | 8 or more |\n", "| Major | 7 - 7.9 |\n", "| Strong | 6 - 6.9 |\n", "| Moderate | 5 - 5.9 |\n", "| Light | 4 - 4.9 |\n", "| Minor | 3 - 3.9 |" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "classified_df = df[df.mag >= 3].compute()\n", "\n", "depth_class_column = pd.cut(classified_df['depth'], depth_bins, labels=depth_names)\n", "classified_df.insert(1, 'depth_class', depth_class_column)\n", "\n", "mag_bins = [2.9, 3.9, 4.9, 5.9, 6.9, 7.9, 10]\n", "mag_names = ['Minor', 'Light', 'Moderate', 'Strong', 'Major', 'Great']\n", "mag_class_column = pd.cut(classified_df['mag'], mag_bins, labels=mag_names)\n", "classified_df.insert(11, 'mag_class', mag_class_column)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "classified_df.hvplot.heatmap(x='mag_class', y='depth_class', C='id', reduce_function=np.count_nonzero)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here it is clear that the most commonly detected events are light, and typically shallow.\n", "\n", "### Exploring further\n", "\n", "These visualizations just touch the surface of what is available from hvPlot. To see many more examples, study the [hvPlot website](https://hvplot.pyviz.org). The following section will focus on how to put these plots together once you have them, linking them to understand and show their relationships."
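] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a small taste of that, hvPlot calls return HoloViews objects, and two of them can already be combined into a side-by-side layout with the `+` operator. Here is a quick sketch using the cleaned sample from above (layout only, not the full linking machinery covered next):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Two views of the magnitude distribution, composed into one layout with `+`\n", "cleaned_small_df.hvplot.hist(y='mag', width=340) + cleaned_small_df.hvplot.kde(y='mag', width=340)"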
] } ], "metadata": { "language_info": { "name": "python", "pygments_lexer": "ipython3" } }, "nbformat": 4, "nbformat_minor": 2 }