{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to points, lines, areas, rasters, and trimeshes, Datashader can quickly render large collections of polygons (filled polylines). Datashader's polygon support depends on data structures provided by the separate [spatialpandas](https://spatialpandas.holoviz.org) library, which extends Pandas and Parquet to support efficient storage and manipulation of \"ragged\" (variable length) data like polygons. \n", "\n", "Before running these examples, you will need spatialpandas installed with pip:\n", "\n", "```\n", "$ pip install spatialpandas\n", "```\n", "\n", "or conda:\n", "```\n", "$ conda install -c pyviz spatialpandas\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "import dask.dataframe as dd\n", "import colorcet as cc\n", "import datashader as ds\n", "import datashader.transfer_functions as tf\n", "import spatialpandas as sp\n", "import spatialpandas.geometry\n", "import spatialpandas.dask" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Pandas supports custom column types using an \"ExtensionArray\" interface. Spatialpandas provides two Pandas ExtensionArrays that support polygons:\n", "\n", "- `spatialpandas.geometry.PolygonArray`: Each row in the column is a single `Polygon` instance. As with shapely and geopandas, each Polygon may contain zero or more holes. \n", " \n", "- `spatialpandas.geometry.MultiPolygonArray`: Each row in the column is a `MultiPolygon` instance, each of which can store one or more polygons, with each polygon containing zero or more holes.\n", "\n", "Datashader assumes that the vertices of the outer filled polygon will be listed as x1, y1, x2, y2, etc. in counter clockwise (CCW) order around the polygon edge, while the holes will be in clockwise (CW) order. All polygons (both filled and holes) must be \"closed\", with the first vertex of each polygon repeated as the last vertex." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Simple Example\n", "Here is a simple example of a two-element `MultiPolygonArray`. The first element specifies two filled polygons, the first with two holes. The second element contains one filled polygon with one hole." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "polygons = sp.geometry.MultiPolygonArray([\n", " # First Element\n", " [[[0, 0, 1, 0, 2, 2, -1, 4, 0, 0], # Filled quadrilateral (CCW order)\n", " [0.5, 1, 1, 2, 1.5, 1.5, 0.5, 1], # Triangular hole (CW order)\n", " [0, 2, 0, 2.5, 0.5, 2.5, 0.5, 2, 0, 2]], # Rectangular hole (CW order)\n", "\n", " [[-0.5, 3, 1.5, 3, 1.5, 4, -0.5, 3]],], # Filled triangle\n", "\n", " # Second Element\n", " [[[1.25, 0, 1.25, 2, 4, 2, 4, 0, 1.25, 0], # Filled rectangle (CCW order)\n", " [1.5, 0.25, 3.75, 0.25, 3.75, 1.75, 1.5, 1.75, 1.5, 0.25]],]]) # Rectangular hole (CW order)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The filled quadrilateral starts at (x,y) (0,0) and goes to (1,0), then (2,2), then (-1,4), and back to (0,0); others similarly go left to right (but drawing in either CCW or CW order around the edge of the polygon depending on whether they are filled or holes).\n", "\n", "Since a `MultiPolygonArray` is a pandas/dask extension array, it can be added as a column to a `DataFrame`. 
For convenience, you can define your DataFrame as `sp.GeoDataFrame` instead of `pd.DataFrame`, which will automatically include support for polygon columns:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = sp.GeoDataFrame({'polygons': polygons, 'v': range(1, len(polygons)+1)})\n", "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Polygons are rasterized to a `Canvas` by Datashader using the `Canvas.polygons` method. This method works like the existing glyph methods (`.points`, `.line`, etc.), except it does not have `x` and `y` arguments. Instead, it has a single `geometry` argument that should be passed the name of a `PolygonArray` or `MultiPolygonArray` column in the supplied `DataFrame`. For comparison, we'll also show the polygon outlines rendered with `Canvas.line` (which also supports geometry columns):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "cvs = ds.Canvas()\n", "agg = cvs.polygons(df, geometry='polygons', agg=ds.sum('v'))\n", "filled = tf.shade(agg)\n", "float(agg.min()), float(agg.max())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "cvs = ds.Canvas()\n", "agg = cvs.line(df, geometry='polygons', agg=ds.sum('v'), line_width=4)\n", "unfilled = tf.shade(agg)\n", "float(agg.min()), float(agg.max())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tf.Images(filled, unfilled)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, each polygon is filled or outlined using the corresponding value from the `v` column. The `sum` aggregator for the filled polygons specifies that the rendered colors should indicate the sum of the `v` values of polygons that overlap that pixel, and so the first element (`v=1`) has an overlapping area with value 2, and the second element has a value of 2 except in overlap areas where it gets a value of 3. Each plot is normalized separately, so the filled plot uses the colormap for the range 1 (light blue) to 3 (dark blue), while the outlined plot ranges from 1 to 2.\n", "\n", "You can also work with polygons in interactive plotting programs with axes and hover support for inspecting the underlying values, which helps when comparing Datashader's aggregation-based approach (right) to standard polygon rendering (left):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import holoviews as hv\n", "from holoviews.operation.datashader import rasterize\n", "from holoviews.streams import PlotSize\n", "PlotSize.scale=2 # Sharper plots on Retina displays\n", "hv.extension(\"bokeh\")\n", "\n", "hvpolys = hv.Polygons(df, vdims=['v']).opts(color='v', tools=['hover'])\n", "hvpolys + rasterize(hvpolys, aggregator=ds.sum('v')).opts(tools=['hover'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Realistic Example\n", "Here is a more realistic example, plotting the unemployment rate of the counties in Texas. 
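\n", "\n", "The cell below builds each county boundary as a flat, interleaved `[x1, y1, x2, y2, ...]` array in CCW order, closed by repeating the first vertex. Here is a minimal sketch of that convention, using made-up coordinates:\n", "\n", "```python\n", "lons, lats = [0, 2, 1], [0, 0, 2]  # open triangle, already in CCW order\n", "ring = np.concatenate(list(zip(lons + lons[:1], lats + lats[:1])))\n", "# -> array([0, 0, 2, 0, 1, 2, 0, 0]): closed and interleaved x/y\n", "```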
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import bokeh.sampledata\n", "try:\n", " from bokeh.sampledata.us_counties import data # noqa\n", "except:\n", " bokeh.sampledata.download()\n", "from bokeh.sampledata.us_counties import data as counties\n", "from bokeh.sampledata.unemployment import data as unemployment\n", "\n", "counties = { code: county for code, county in counties.items()\n", " if county[\"state\"] in [\"tx\"] }\n", "\n", "county_boundaries = [[[*zip(county[\"lons\"] + county[\"lons\"][:1],\n", " county[\"lats\"] + county[\"lats\"][:1])]\n", " for county in counties.values()]]\n", "\n", "county_rates = [unemployment[county_id] for county_id in counties]\n", "\n", "boundary_coords = [[np.concatenate(list(\n", " zip(county[\"lons\"][::-1] + county[\"lons\"][-1:],\n", " county[\"lats\"][::-1] + county[\"lats\"][-1:])\n", "))] for county in counties.values()]\n", "\n", "boundaries = sp.geometry.PolygonArray(boundary_coords)\n", "\n", "county_info = sp.GeoDataFrame({'boundary': boundaries,\n", " 'unemployment': county_rates})" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Discard the output from one aggregation of each type, to avoid measuring Numba compilation times\n", "tf.shade(cvs.polygons(county_info, geometry='boundary', agg=ds.mean('unemployment')));\n", "tf.shade(cvs.line(county_info, geometry='boundary', agg=ds.any()));\n", "tf.shade(cvs.line(county_info, geometry='boundary', agg=ds.any(), line_width=1.5));" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "cvs = ds.Canvas(plot_width=600, plot_height=600)\n", "agg = cvs.polygons(county_info, geometry='boundary', agg=ds.mean('unemployment'))\n", "filled = tf.shade(agg)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "agg = cvs.line(county_info, geometry='boundary', agg=ds.any())\n", "unfilled = tf.shade(agg)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "agg = cvs.line(county_info, geometry='boundary', agg=ds.any(), line_width=3.5)\n", "antialiased = tf.shade(agg)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tf.Images(filled, unfilled, antialiased)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Geopandas import\n", "The `.from_geopandas` static method on each `spatialpandas` ExtensionArray can be used to import a geopandas `GeoSeries` of `Polygon`/`MultiPolygon` objects:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import geopandas\n", "\n", "world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))\n", "world = world.to_crs(epsg=4087) # simple cylindrical projection\n", "world['boundary'] = world.geometry.boundary\n", "world['centroid'] = world.geometry.centroid\n", "world.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Convert the geopandas GeoDataFrame to a spatialpandas GeoDataFrame for Datashader to use:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "df_world = sp.GeoDataFrame(world)\n", "df_world.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since version 0.16, Datashader supports direct use of `geopandas` `GeoDataFrame`s without having to convert them to `spatialpandas`. 
See [GeoPandas](13_Geopandas.ipynb)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Geopandas/shapely export\n", "A `MultiPolygonArray` can be converted to a geopandas `GeometryArray` using the `to_geopandas` method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "pd.Series(df_world.boundary.array.to_geopandas())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Individual elements of a `MultiPolygonArray` can be converted into `shapely` shapes using the `to_shapely` method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_world.geometry.array[3].to_shapely()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plotting as filled polygons" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Discard the output to avoid measuring Numba compilation times\n", "tf.shade(cvs.polygons(df_world, geometry='geometry', agg=ds.mean('pop_est')));" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "cvs = ds.Canvas(plot_width=650, plot_height=400)\n", "agg = cvs.polygons(df_world, geometry='geometry', agg=ds.mean('pop_est'))\n", "tf.shade(agg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plotting as centroid points" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Discard the output to avoid measuring Numba compilation times\n", "cvs.points(df_world, geometry='centroid', agg=ds.mean('pop_est'));" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "agg = cvs.points(df_world, geometry='centroid', agg=ds.mean('pop_est'))\n", "tf.spread(tf.shade(agg), 2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Polygon Perimeter/Area calculation\n", "\n", "The spatialpandas library provides highly optimized versions of some of the geometric operations supported by a geopandas `GeoSeries`, including parallelized [Numba](https://numba.pydata.org) implementations of the `length` and `area` properties:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "{\"MultiPolygonArray length\": df_world.geometry.array.length[:4],\n", " \"GeoPandas length\": world.geometry.array.length[:4],\n", " \"MultiPolygonArray area\": df_world.geometry.array.area[:4],\n", " \"GeoPandas area\": world.geometry.array.area[:4],}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Speed differences:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Duplicate world 1000 times\n", "df_world_large = pd.concat([df_world.geometry] * 1000)\n", "world_large = pd.concat([world.geometry] * 1000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "length_gp = %timeit -o world_large.array.length\n", "length_sp = %timeit -o df_world_large.array.length\n", "print(\"\\nMultiPolygonArray.length speedup: %.2f\" % (length_gp.average / length_sp.average))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "area_gp = %timeit -o world_large.array.area\n", "area_sp = %timeit -o df_world_large.array.area\n", "print(\"\\nMultiPolygonArray.area speedup: %.2f\" % (area_gp.average / area_sp.average))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Other GeoPandas 
operations could be sped up similarly by [adding them to spatialpandas](https://github.com/holoviz/spatialpandas/issues/1)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Parquet support\n", "\n", "spatialpandas geometry arrays can be stored in Parquet files, which support efficient chunked columnar access that is particularly important when working with Dask for large files. To create such a file, use `.to_parquet`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_world.to_parquet('df_world.parq')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A Parquet file containing geometry arrays should be read using the `spatialpandas.io.read_parquet` function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from spatialpandas.io import read_parquet\n", "read_parquet('df_world.parq').head(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dask support\n", "\n", "For large collections of polygons, you can use [Dask](https://dask.org) to parallelize the rendering. If you are starting with a Pandas dataframe with a geometry column, just use the standard `dask.dataframe.from_pandas` method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ddf = dd.from_pandas(df_world, npartitions=2).pack_partitions(npartitions=100).persist()\n", "\n", "tf.shade(cvs.polygons(ddf, geometry='geometry', agg=ds.mean('gdp_md_est')), cmap=cc.kg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we've used `pack_partitions` to re-sort and re-partition the dataframe such that each partition contains geometry objects that are relatively close together in space. This partitioning makes it faster for Datashader to identify which partitions are needed in order to render a particular view, which is useful when zooming into local regions of large datasets. (This particular dataset is quite small, and unlikely to benefit, of course!)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Interactive example using HoloViews\n", "\n", "As you can see above, HoloViews can easily invoke Datashader on polygons using `rasterize`, with full interactive redrawing at each new zoom level as long as you have a live Python process running. The code for the world population example would be:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "out = rasterize(hv.Polygons(ddf, vdims=['pop_est']), aggregator=ds.sum('pop_est'))\n", "out.opts(width=700, height=500, tools=[\"hover\"]);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, we've used a semicolon to suppress the output, because we'll actually use a more complex example with a custom callback function `compute_partitions` to update the plot title whenever you zoom in. 
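The callback relies on the spatialpandas `cx_partitions` indexer, which (like the geopandas `cx` indexer) selects by bounding box, but returns whole partitions rather than individual rows; a minimal sketch, with made-up bounds in the projected coordinates used above:\n", "\n", "```python\n", "# Number of packed partitions intersecting the given x/y ranges\n", "ddf.cx_partitions[-2e6:2e6, 0:6e6].npartitions\n", "```\n", "\n", "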
That way, you'll be able to see the number of partitions that the spatial index has determined are needed to cover the current viewport:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def compute_partitions(el):\n", " # Count the partitions whose bounds intersect the current viewport\n", " n = ddf.cx_partitions[slice(*el.range('x')), slice(*el.range('y'))].npartitions\n", " return el.opts(title=f'Population by country (npartitions: {n})')\n", "\n", "out.apply(compute_partitions).opts(width=700, height=500, tools=[\"hover\"], clim=(0, 1.3e9))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, with this page backed by a live Python process, you'll not only see the plot redraw at full resolution whenever you zoom in, but you'll also see how many partitions of the dataset are needed to render it, ranging from 100 for the full map down to 1 when zoomed into a small area.\n", "\n", "A larger example with polygons for the footprints of a million buildings in the New York City area can be run online at [PyViz examples](https://examples.pyviz.org/nyc_buildings).\n", "\n", "\n" ] } ], "metadata": { "language_info": { "name": "python", "pygments_lexer": "ipython3" } }, "nbformat": 4, "nbformat_minor": 2 }