{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "

Introduction to geospatial vector data in Python

" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "import pandas as pd\n", "import geopandas" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Importing geospatial data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Geospatial data is often available from specific GIS file formats or data stores, like ESRI shapefiles, GeoJSON files, geopackage files, PostGIS (PostgreSQL) database, ...\n", "\n", "We can use the GeoPandas library to read many of those GIS file formats (relying on the `fiona` library under the hood, which is an interface to GDAL/OGR), using the `geopandas.read_file` function.\n", "\n", "For example, let's start by reading a shapefile with all the countries of the world (adapted from http://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-admin-0-countries/, zip file is available in the `/data` directory), and inspect the data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "countries = geopandas.read_file(\"data/ne_110m_admin_0_countries.zip\")\n", "# or if the archive is unpacked:\n", "# countries = geopandas.read_file(\"data/ne_110m_admin_0_countries/ne_110m_admin_0_countries.shp\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "countries.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "countries.plot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "countries.explore()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What do we observe:\n", "\n", "- Using `.head()` we can see the first rows of the dataset, just like we can do with Pandas.\n", "- There is a `geometry` column and the different countries are represented as polygons\n", "- 
We can use the `.plot()` (matplotlib) or `explore()` (Folium / Leaflet.js) method to quickly get a *basic* visualization of the data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What's a GeoDataFrame?\n", "\n", "We used the GeoPandas library to read in the geospatial data, and this returned a `GeoDataFrame`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "type(countries)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A GeoDataFrame contains a tabular, geospatial dataset:\n", "\n", "* It has a **'geometry' column** that holds the geometry information (or features in GeoJSON).\n", "* The other columns are the **attributes** (or properties in GeoJSON) that describe each of the geometries.\n", "\n", "Such a `GeoDataFrame` is just like a pandas `DataFrame`, but with some additional functionality for working with geospatial data:\n", "\n", "* A `.geometry` attribute that always returns the column with the geometry information (returning a GeoSeries). 
The column name itself does not necessarily need to be 'geometry', but it will always be accessible as the `.geometry` attribute.\n", "* It has some extra methods for working with spatial data (area, distance, buffer, intersection, ...), which we will learn about in later notebooks." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "countries.geometry" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "type(countries.geometry)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "countries.geometry.area" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**It's still a DataFrame**, so we have all the Pandas functionality available to use on the geospatial dataset, and to do data manipulations with the attributes and geometry information together.\n", "\n", "For example, we can calculate the average population over all countries (by accessing the 'pop_est' column, and calling the `mean` method on it):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "countries['pop_est'].mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or, we can use boolean filtering to select a subset of the dataframe based on a condition:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "africa = countries[countries['continent'] == 'Africa']" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "africa.plot();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "The rest of the tutorial is going to assume you already know some pandas basics, but we will try to give hints for that 
part for those who are not familiar.\n", "A few resources in case you want to learn more about pandas:\n", "\n", "- Pandas docs: https://pandas.pydata.org/pandas-docs/stable/10min.html\n", "- Other tutorials: the pandas chapter in https://jakevdp.github.io/PythonDataScienceHandbook/, https://github.com/jorisvandenbossche/pandas-tutorial, https://github.com/TomAugspurger/pandas-head-to-tail, ..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "**REMEMBER:**
\n", "\n", "* A `GeoDataFrame` allows to perform typical tabular data analysis together with spatial operations\n", "* A `GeoDataFrame` (or *Feature Collection*) consists of:\n", " * **Geometries** or **features**: the spatial objects\n", " * **Attributes** or **properties**: columns with information about each spatial object\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Geometries: Points, Linestrings and Polygons\n", "\n", "Spatial **vector** data can consist of different types, and the 3 fundamental types are:\n", "\n", "![](img/simple_features_3_text.svg)\n", "\n", "* **Point** data: represents a single point in space.\n", "* **Line** data (\"LineString\"): represents a sequence of points that form a line.\n", "* **Polygon** data: represents a filled area.\n", "\n", "And each of them can also be combined in multi-part geometries (See https://shapely.readthedocs.io/en/stable/manual.html#geometric-objects for extensive overview)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the example we have seen up to now, the individual geometry objects are Polygons:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(countries.geometry[2])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's import some other datasets with different types of geometry objects.\n", "\n", "A dateset about cities in the world (adapted from http://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-populated-places/, zip file is available in the `/data` directory), consisting of Point data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cities = geopandas.read_file(\"data/ne_110m_populated_places.zip\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(cities.geometry[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And a dataset of rivers in the world (from http://www.naturalearthdata.com/downloads/50m-physical-vectors/50m-rivers-lake-centerlines/, zip file is available in the `/data` directory) where each river is a (multi-)line:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, 
"outputs": [], "source": [ "rivers = geopandas.read_file(\"data/ne_50m_rivers_lake_centerlines.zip\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(rivers.geometry[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The `shapely` library\n", "\n", "The individual geometry objects are provided by the [`shapely`](https://shapely.readthedocs.io/en/stable/) library" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "type(countries.geometry[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To construct one ourselves:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from shapely.geometry import Point, Polygon, LineString" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p = Point(0, 0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(p)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "polygon = Polygon([(1, 1), (2,2), (2, 1)])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "polygon.area" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "polygon.distance(p)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "**REMEMBER**:
\n", "\n", "Single geometries are represented by `shapely` objects:\n", "\n", "* If you access a single geometry of a GeoDataFrame, you get a shapely geometry object\n", "* Those objects have similar functionality as geopandas objects (GeoDataFrame/GeoSeries). For example:\n", " * `single_shapely_object.distance(other_point)` -> distance between two points\n", " * `geodataframe.distance(other_point)` -> distance for each point in the geodataframe to the other point\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plotting our different layers together" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# fig, ax = plt.subplots(figsize=(15, 10))\n", "ax = countries.plot(edgecolor='k', facecolor='none', figsize=(15, 10))\n", "rivers.plot(ax=ax)\n", "cities.plot(ax=ax, color='red')\n", "ax.set(xlim=(-20, 60), ylim=(-40, 40))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See the [05-more-on-visualization.ipynb](05-more-on-visualization.ipynb) notebook for more details on visualizing geospatial datasets." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Let's practice!\n", "\n", "Throughout the exercises in this course, we will work with several datasets about the city of Paris.\n", "\n", "Here, we start with the following datasets:\n", "\n", "- The administrative districts of Paris (https://opendata.paris.fr/explore/dataset/quartier_paris/): `paris_districts_utm.geojson`\n", "- Real-time (at the moment I downloaded them ..) information about the public bicycle sharing system in Paris (vélib, https://opendata.paris.fr/explore/dataset/stations-velib-disponibilites-en-temps-reel/information/): `data/paris_bike_stations_mercator.gpkg`\n", "\n", "Both datasets are provided as spatial datasets using a GIS file format.\n", "\n", "Let's explore further those datasets, now using the spatial aspect as well." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "**EXERCISE 1**:\n", "\n", "We will start with exploring the bicycle station dataset (available as a GeoPackage file: `data/paris_bike_stations_mercator.gpkg`)\n", " \n", "* Read the stations datasets into a GeoDataFrame called `stations`.\n", "* Check the type of the returned object\n", "* Check the first rows of the dataframes. What kind of geometries does this datasets contain?\n", "* How many features are there in the dataset? \n", " \n", "
**Hints**\n", "\n", "* Use `type(..)` to check the type of any Python object\n", "* The `geopandas.read_file()` function can read different geospatial file formats. You pass the file name as the first argument.\n", "* Use the `.shape` attribute to get the number of features\n", "\n", "
\n", " \n", " \n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data1.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data2.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data3.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data4.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "**EXERCISE 2**:\n", "\n", "* Make a quick plot of the `stations` dataset.\n", "* Make the plot a bit larger by setting the figure size to (12, 6) (hint: the `plot` method accepts a `figsize` keyword).\n", " \n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data5.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A plot with just some points can be hard to interpret without any spatial context. We have seen that we can use the `explore()` method to easily get an interactive figure that by default includes a background map. But also for the static matplotlib-based plot, it can be useful to add such a base map, and that's what we will learn in the next excercise. \n", "\n", "We are going to make use of the [contextily](https://github.com/darribas/contextily) package. The `add_basemap()` function of this package makes it easy to add a background web map to our plot. We begin by plotting our data first, and then pass the matplotlib axes object (returned by dataframe's `plot()` method) to the `add_basemap()` function. `contextily` will then download the web tiles needed for the geographical extent of your plot.\n", "\n", "\n", "
\n", "\n", "**EXERCISE 3**:\n", "\n", "* Import `contextily`.\n", "* Re-do the figure of the previous exercise: make a plot of all the points in `stations`, but assign the result to an `ax` variable.\n", "* Set the marker size equal to 5 to reduce the size of the points (use the `markersize` keyword of the `plot()` method for this).\n", "* Use the `add_basemap()` function of `contextily` to add a background map: the first argument is the matplotlib axes object `ax`.\n", "\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data6.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data7.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "**EXERCISE 4**:\n", "\n", "* Make a histogram showing the distribution of the number of bike stands in the stations.\n", "\n", "
\n", " Hints\n", "\n", "* Selecting a column can be done with the square brackets: `df['col_name']`\n", "* Single columns have a `hist()` method to plot a histogram of its values.\n", " \n", "
\n", " \n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data8.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "**EXERCISE 5**:\n", "\n", "Let's now visualize where the available bikes are actually stationed:\n", " \n", "* Make a plot of the `stations` dataset (also with a (12, 6) figsize).\n", "* Use the `'available_bikes'` columns to determine the color of the points. For this, use the `column=` keyword.\n", "* Use the `legend=True` keyword to show a color bar.\n", " \n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data9.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "**EXERCISE 6**:\n", "\n", "Next, we will explore the dataset on the administrative districts of Paris (available as a GeoJSON file: \"data/paris_districts_utm.geojson\")\n", "\n", "* Read the dataset into a GeoDataFrame called `districts`.\n", "* Check the first rows of the dataframe. What kind of geometries does this dataset contain?\n", "* How many features are there in the dataset? (hint: use the `.shape` attribute)\n", "* Make a quick plot of the `districts` dataset (set the figure size to (12, 6)).\n", " \n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data10.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data11.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data12.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data13.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "**EXERCISE 7**:\n", " \n", "What are the largest districts (biggest area)?\n", "\n", "* Calculate the area of each district.\n", "* Add this area as a new column to the `districts` dataframe.\n", "* Sort the dataframe by this area column for largest to smallest values (descending).\n", "\n", "
**Hints**\n", "\n", "* Adding a column can be done by assigning values to a column using the same square brackets syntax: `df['new_col'] = values`\n", "* To sort the rows of a DataFrame, use the `sort_values()` method, specifying the column to sort on with the `by='col_name'` keyword. Check the help of this method to see how to sort ascending or descending.\n", "\n", "
\n", "\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data14.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data15.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data16.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "**EXERCISE 8**:\n", "\n", "* Add a column `'population_density'` representing the number of inhabitants per squared kilometer (Note: The area is given in squared meter, so you will need to multiply the result with `10**6`).\n", "* Plot the districts using the `'population_density'` to color the polygons. For this, use the `column=` keyword.\n", "* Use the `legend=True` keyword to show a color bar.\n", "\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data17.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data18.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false }, "tags": [ "nbtutor-solution" ] }, "outputs": [], "source": [ "# %load _solved/solutions/01-introduction-geospatial-data19.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "## For the curious: A bit more on importing and creating GeoDataFrames" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Note on `fiona`\n", "\n", "Under the hood, GeoPandas uses the [Fiona library](http://toblerity.org/fiona/) (pythonic interface to GDAL/OGR) to read and write data. GeoPandas provides a more user-friendly wrapper, which is sufficient for most use cases. But sometimes you want more control, and in that case, to read a file with fiona you can do the following:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import fiona\n", "from shapely.geometry import shape\n", "\n", "with fiona.Env():\n", " with fiona.open(\"zip://./data/ne_110m_admin_0_countries.zip\") as collection:\n", " for feature in collection:\n", " # ... do something with geometry\n", " geom = shape(feature['geometry'])\n", " # ... 
do something with properties\n", " print(feature['properties']['name'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Constructing a GeoDataFrame manually" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "geopandas.GeoDataFrame({\n", " 'geometry': [Point(1, 1), Point(2, 2)],\n", " 'attribute1': [1, 2],\n", " 'attribute2': [0.1, 0.2]})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating a GeoDataFrame from an existing dataframe\n", "\n", "For example, if you have lat/lon coordinates in two columns:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame(\n", " {'City': ['Buenos Aires', 'Brasilia', 'Santiago', 'Bogota', 'Caracas'],\n", " 'Country': ['Argentina', 'Brazil', 'Chile', 'Colombia', 'Venezuela'],\n", " 'Latitude': [-34.58, -15.78, -33.45, 4.60, 10.48],\n", " 'Longitude': [-58.66, -47.91, -70.66, -74.08, -66.86]})" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gdf = geopandas.GeoDataFrame(\n", " df, geometry=geopandas.points_from_xy(df.Longitude, df.Latitude))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gdf" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See https://geopandas.org/en/latest/gallery/create_geopandas_from_pandas.html for full example" ] } ], "metadata": { "celltoolbar": "Nbtutor - export exercises", "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": {}, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 4 }