{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Data Ingestion" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Machine learning tasks are typically data heavy, requiring labelled data for supervised learning or unlabelled data for unsupervised learning. In Python, data is typically stored in memory as NumPy arrays at some level, but in most cases you can use higher-level containers built on top of NumPy that are more convenient for tabular data ([Pandas](http://pandas.pydata.org)), multidimensional gridded data ([xarray](http://xarray.pydata.org)), or out-of-core and distributed data ([Dask](http://dask.pydata.org)). \n", "\n", "Each of these libraries allows reading local data in a variety of formats. In many cases the required datasets are large and stored on remote servers, so we will show how to use the [Intake](https://intake.readthedocs.io) library to fetch remote datasets efficiently, including built-in caching to avoid unncessary downloads when the files are available locally.\n", "\n", "To ensure that you understand the properties of your data and how it gets transformed at each step in the workflow, we will use exploratory visualization tools as soon as the data is available and at every subsequent step.\n", "\n", "Once you have loaded your data, you will typically need to reshape it appropriately before it can be fed into a machine learning pipeline. Those steps will be detailed in the next tutorial: [Alignment and Preprocessing](03_Alignment_and_Preprocessing.ipynb). " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Inline loading\n", "\n", "We'll start with the simple case of loading small local datasets, such as a .csv file for Pandas:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "training_df = pd.read_csv('../data/landsat5_training.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can inspect the first several lines of the file using ``.head``, or a random set of rows using ``.sample(n)``" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training_df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To get a better sense of how this dataframe is set up, we can look at ``.info()``" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training_df.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To use methods like `pd.read_csv`, the data all needs to be on the local filesystem (or on one of the limited remote specification formats supported by Pandas, such as S3). We could of course put in various commands here to fetch a file explicitly from a remote server, but the notebook would then very quickly get complex and unreadable.\n", "\n", "Instead, for larger datasets, we can automate those steps using intake so that remote and local data can be treated similarly. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import intake\n", "\n", "training = intake.open_csv('../data/landsat5_training.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To get better insight into the data without loading it all in just yet, we can inspect the data using ``.to_dask()``" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training_dd = training.to_dask()\n", "training_dd.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training_dd.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To get a full pandas.DataFrame object, use ``.read()`` to load in all the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training_df = training.read()\n", "training_df.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**NOTE:** There are different items in these two info views which reflect what is knowable before and after we read all the data. For instance, it is not possible to know the ``shape`` of the whole dataset before it is loaded." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading multiple files\n", "\n", "In addition to allowing partitioned reading of files, intake lets the user load and concatenate data across multiple files in one command" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training = intake.open_csv(['../data/landsat5_training.csv', '../data/landsat8_training.csv'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training_df = training.read()\n", "training_df.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**NOTE:** The length of the dataframe has increased now that we are loading multiple sets of training data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This can be more simply expressed as:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training = intake.open_csv('../data/landsat*_training.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Sometimes, there is data encoded in a file name or path that causes concatenated data to lose some important context. In this example, we lose the information about which version of landsat the training was done on. To keep track of that information, we can use a python format string to specify our path and declare a new field on our data. That field will get populated based on its value in the path. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training = intake.open_csv('../data/landsat{version:d}_training.csv')\n", "training_df = training.read()\n", "training_df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Exercise: Try looking at the tail of the data using training_df.tail(), or a random sample using training_df.sample(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using Catalogs\n", "\n", "For more complicated setups, we use the file catalog.yml to declare how the data should be loaded. The catalog lays out how the data should be loaded, defines some metadata, and specifies any patterns in the file path that should be included in the data. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Using Catalogs\n", "\n", "For more complicated setups, we can use a catalog file (catalog.yml) to declare how the data should be loaded. The catalog defines each data source, sets some metadata, and specifies any fields in the file path that should be parsed into the data. Here is an example of a catalog entry:\n", "\n", "```\n", "sources:\n", "  landsat_5_small:\n", "    description: Small version of Landsat 5 Surface Reflectance Level-2 Science Product.\n", "    driver: rasterio\n", "    cache:\n", "      - argkey: urlpath\n", "        regex: 'earth-data/landsat'\n", "        type: file\n", "    args:\n", "      urlpath: 's3://earth-data/landsat/small/LT05_L1TP_042033_19881022_20161001_01_T1_sr_band{band:d}.tif'\n", "      chunks:\n", "        band: 1\n", "        x: 50\n", "        y: 50\n", "      concat_dim: band\n", "      storage_options: {'anon': True}\n", "```" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The ``urlpath`` can be a path to a file, a list of files, or a path with glob notation. Alternatively, the path can be written as a Python-style [format string](https://docs.python.org/3.6/library/string.html#format-string-syntax); in that case, the fields specified in the string will be parsed from the filenames and returned in the data." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cat = intake.open_catalog('../catalog.yml')\n", "list(cat)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Exercise: Read the description of the landsat_5_small data source using cat.landsat_5_small.description" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**NOTE:** If you don't have the data cached yet, the next cell will take a few seconds." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "landsat_5 = cat.landsat_5_small\n", "landsat_5.to_dask()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The data values have not yet been loaded, but we already have access to the coordinates and metadata." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Visualizing the data\n", "\n", "To get a quick sense of the data, we can plot it using [hvPlot](https://hvplot.pyviz.org/), which provides interactive plotting commands for Intake, Pandas, xarray, Dask, and GeoPandas. We'll look more closely at hvPlot and its options in later tutorials." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import hvplot.intake\n", "intake.output_notebook()\n", "\n", "import holoviews as hv\n", "hv.extension('bokeh')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We can quickly generate a plot of each of the Landsat bands using the overview plot declared in the catalog. Here is the relevant part of `catalog.yml`:\n", "\n", "```\n", "metadata:\n", "  plots:\n", "    band_image:\n", "      kind: 'image'\n", "      x: 'x'\n", "      y: 'y'\n", "      groupby: 'band'\n", "      rasterize: True\n", "      width: 400\n", "      dynamic: False\n", "```" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "landsat_5.hvplot.band_image()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Accessing the data\n", "So far we have been working with the Intake data entry object `landsat_5`; to access the underlying values, we need to read the data. If the data are big, we can use the `.to_dask()` method to create a dask-backed `xarray.DataArray`. If the data are small, we can use the `.read()` method to read everything straight into a regular `xarray.DataArray`. Once in an `xarray` object, the data can be more easily manipulated and visualized." ] },
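{ "cell_type": "markdown", "metadata": {}, "source": [ "For example, here is a minimal sketch (using the small Landsat data opened above) of how a dask-backed DataArray defers work until a result is actually needed:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A minimal sketch: operations on a dask-backed DataArray are lazy\n", "landsat_5_lazy = landsat_5.to_dask()\n", "band_means = landsat_5_lazy.mean(dim=['x', 'y'])  # no data read yet\n", "band_means.compute()  # triggers the actual read and computation" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "First, let's check what type of object the catalog entry itself is:" ] },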
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "type(landsat_5)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Xarray DataArray\n", "To get an `xarray` object, we'll use the `.read()` method." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "landsat_5_xda = landsat_5.read()\n", "type(landsat_5_xda)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We can use tab completion to explore the attributes and methods available on our xarray.DataArray object." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Exercise: Try typing landsat_5_xda. and press [tab] - don't forget the trailing dot!" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### NumPy Array\n", "Machine learning pipelines such as scikit-learn accept NumPy arrays as input. These arrays are accessible via the `values` attribute of a DataArray object." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "landsat_5_npa = landsat_5_xda.values\n", "type(landsat_5_npa)" ] },
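{ "cell_type": "markdown", "metadata": {}, "source": [ "As a small preview of the reshaping covered in the next tutorial (a minimal sketch, assuming the band dimension comes first, as loaded above), scikit-learn expects a 2D array of shape (n_samples, n_features), so the (band, y, x) cube can be flattened to one row per pixel and one column per band:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A minimal sketch: flatten the (band, y, x) cube to (n_pixels, n_bands)\n", "n_bands = landsat_5_npa.shape[0]\n", "X = landsat_5_npa.reshape(n_bands, -1).T\n", "X.shape" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Next:\n", "\n", "Now that you have loaded your data, you will typically need to reshape it appropriately before it can be fed into a machine-learning pipeline. These steps are detailed in the next tutorial: [Alignment and Preprocessing](03_Alignment_and_Preprocessing.ipynb)." ] }
], "metadata": { "language_info": { "name": "python", "pygments_lexer": "ipython3" } }, "nbformat": 4, "nbformat_minor": 2 }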