{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Machine Learning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the data preparation complete, this step will demonstrate how you can configure a [scikit-learn](http://scikit-learn.org/stable/index.html) or [dask_ml](http://ml.dask.org/) pipeline, but any library, algorithm, or simulator could be used at this stage if it can accept array data. In the next step of the tutorial, [Data Visualization](./05_Data_Visualization.ipynb) you will learn how to visualize the output of this pipeline and diagnose as well as ensure that the inputs to the pipeline have the expected structure." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import intake\n", "import numpy as np\n", "import xarray as xr\n", "\n", "import holoviews as hv\n", "\n", "import cartopy.crs as ccrs\n", "import geoviews as gv\n", "\n", "import hvplot.xarray\n", "\n", "hv.extension('bokeh', width=80)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Recap: Loading data\n", "\n", "Note in this tutorial we will use the small version of the landsat data for time constraints. If you prefer to work with full-scale data, use `cat.landsat_4.read_chunked()` instead." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cat = intake.open_catalog('../catalog.yml')\n", "landsat_5_da = cat.landsat_5_small.read_chunked()\n", "landsat_5_da.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reshaping Data\n", "\n", "We'll need to reshape the image to be how dask-ml / scikit-learn expect it: `(n_samples, n_features)` where n_features is the number of bands and n_samples is the total number of pixels in each band. Essentially, we'll be creating a bag of pixels out of each image, where each pixel has multiple features (bands), but the ordering of the pixels is no longer relevant. 
In this case we start with an array that is n_bands by n_y by n_x (6, 300, 300), and we need to reshape it to an array that is `(n_samples, n_features)`, i.e. (90000, 6). We'll first look at using NumPy, then Xarray." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### NumPy\n", "\n", "Data can be reshaped at the lowest level using NumPy, by getting the underlying values from the `xarray.DataArray` and using flatten and transpose to get the right shape." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "arr = landsat_5_da.values\n", "arr.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we want to flatten along the x and y axes but not along the band axis, we need to iterate over each band and flatten the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flattened_npa = np.array([arr[i].flatten() for i in range(arr.shape[0])])\n", "flattened_npa" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flattened_npa.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To get our flattened array into the shape `(n_samples, n_features)`, we'll reorder the dimensions using `.transpose`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flattened_t_npa = flattened_npa.transpose()\n", "flattened_t_npa.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since `numpy.array`s are not labeled data, the semantics of the data are lost over the course of these operations, as the necessary metadata does not exist at the NumPy level." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Xarray\n", "\n", "By using `xarray` methods to flatten the data, we can keep track of the coordinate labels ('x' and 'y') along the way. This means that we have the ability to reshape back to our original array at any time with no information loss."
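, "\n", "As a minimal sketch of that round trip (using a small, hypothetical `DataArray` rather than the Landsat data), stacking followed by unstacking recovers the original array:\n", "\n", "```python\n", "import numpy as np\n", "import xarray as xr\n", "\n", "da = xr.DataArray(np.arange(12).reshape(3, 2, 2),\n", "                  dims=('band', 'y', 'x'),\n", "                  coords={'band': [1, 2, 3], 'y': [10, 20], 'x': [30, 40]})\n", "stacked = da.stack(z=('x', 'y'))                       # dims: ('band', 'z')\n", "roundtrip = stacked.unstack('z').transpose('band', 'y', 'x')\n", "assert roundtrip.equals(da)                            # no information lost\n", "```"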
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flattened_xda = landsat_5_da.stack(z=('x','y'))\n", "flattened_xda" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can reorder the dimensions using `DataArray.transpose`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flattened_t_xda = flattened_xda.transpose('z', 'band')\n", "flattened_t_xda.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Rescaling Data\n", "\n", "Rescale (standardize) the data before passing it to the algorithm, since the ML pipeline that we have selected expects input values to be small.\n", "Here we'll demonstrate doing this in `numpy` and in `xarray`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(flattened_t_npa - flattened_t_npa.mean()) / flattened_t_npa.std()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rescaled = (flattened_t_xda - flattened_t_xda.mean()) / flattened_t_xda.std()\n", "rescaled.compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**NOTE:** Since the xarray object is backed by Dask, the actual computation isn't performed until `.compute()` is called." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Exercise: Inspect the numpy array at rescaled.values to check that it matches the numpy array above. You could use == for this, together with .all" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Side-note: Other preprocessing\n", "\n", "Although this isn't the case in this instance, sometimes to get the data into the right shape you need to add or remove axes. Here is an example of adding an axis with `numpy` and with `xarray`."
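, "\n", "For instance (a quick sketch on a small, hypothetical array rather than the tutorial data), `np.expand_dims` inserts a new axis of length 1 at the given position:\n", "\n", "```python\n", "import numpy as np\n", "\n", "a = np.zeros((9, 6))\n", "b = np.expand_dims(a, axis=2)   # new trailing axis of length 1\n", "assert b.shape == (9, 6, 1)\n", "```"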
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.expand_dims(flattened_t_npa, axis=2).shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flattened_t_xda.expand_dims(dim='e', axis=2).shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Exercise: Try removing the extra axis using np.squeeze or .squeeze on the xarray object" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## ML pipeline\n", "\n", "The Machine Learning pipeline shown below is just for the purpose of understanding the shaping/reshaping of the data. In practice you will likely be using a more sophisticated pipeline. Here we will use `SpectralClustering` from `dask_ml`, a scalable equivalent of the scikit-learn implementation, to cluster pixels based on their similarity across all bands (which makes it spectral clustering by spectra!)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from dask_ml.cluster import SpectralClustering\n", "from dask.distributed import Client" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "client = Client(processes=False)\n", "client" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will compute and persist the rescaled data to feed into the ML pipeline. Notice that `X` has the shape `(n_samples, n_features)`, as discussed above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X = client.persist(rescaled)\n", "X.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First we will set up the model with the number of clusters and other options."
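, "\n", "Because `dask_ml` follows the scikit-learn estimator API, the equivalent in-memory setup would look much the same (a sketch only; `small_X` is a hypothetical in-memory `(n_samples, n_features)` array, and this assumes scikit-learn is installed):\n", "\n", "```python\n", "from sklearn.cluster import SpectralClustering\n", "\n", "sk_clf = SpectralClustering(n_clusters=4, random_state=0)\n", "# labels = sk_clf.fit_predict(small_X)  # small_X: hypothetical in-memory array\n", "```"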
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "clf = SpectralClustering(n_clusters=4, random_state=0, gamma=None,\n", "                         kmeans_params={'init_max_iter': 5},\n", "                         persist_embedding=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**This is the slow part.** Next we'll fit the model to our data `X`. This is the part that will take a noticeable amount of time: something like 1 minute for the data in this tutorial, or 9 minutes for a full-size Landsat image." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%time clf.fit(X)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Exercise: Open the dask status dashboard and watch the workers in progress." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "labels = clf.assign_labels_.labels_.compute()\n", "labels.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Un-flattening\n", "\n", "Once the computation is done, the output can be used to create a new array with the same structure as the input array. This new output array will have the coordinates needed to be unstacked similarly to how they were stacked. One of the main benefits of using `xarray` for this stacking and unstacking is that it allows `xarray` to keep track of the coordinate information for us." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since the original array is n_samples by n_features (90_000, 6) and the output only contains one feature (90_000,), the template structure for this data needs to have the shape (n_samples,). We achieve this by just taking one of the bands."
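, "\n", "In miniature (a hypothetical 1-D `DataArray`, not the tutorial data), `copy(data=...)` keeps the coordinates of the template while swapping in new values:\n", "\n", "```python\n", "import numpy as np\n", "import xarray as xr\n", "\n", "template = xr.DataArray(np.zeros(4), dims='z', coords={'z': [0, 1, 2, 3]})\n", "out = template.copy(data=np.array([1, 1, 0, 0]))  # same coords, new values\n", "assert out.dims == ('z',)\n", "```"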
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "template = flattened_t_xda[:, 0]\n", "output_array = template.copy(data=labels)\n", "output_array" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With this new output array in hand, we can unstack back to the original dimensions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "unstacked = output_array.unstack()\n", "unstacked" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "landsat_5_da.sel(band=4).hvplot(x='x', y='y', width=400, height=400, datashade=True, cmap='greys').relabel('Image') + \\\n", " unstacked.hvplot(x='x', y='y', width=400, height=400, cmap='Category10', colorbar=False).relabel('Clustered')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Geographic plot\n", "\n", "The plot above is useful and quick to generate, but it isn't referenced against the underlying geographic coordinates, which is crucial if we want to overlay the data on any other geographic data sources. Adding the coordinate reference system in the `hvplot` method ensures that the data is properly positioned in space. This geo-referencing is made very straightforward because of the way `xarray` persists metadata. We can even add tiles underneath." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gv.tile_sources.EsriImagery * unstacked.hvplot(x='x', y='y', geo=True, height=500, cmap='Category10', alpha=0.7)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Exercise: Try adding a different set of map tiles. Use tab completion to find others."
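, "\n", "# For example (one possible choice; other tile sources exist under gv.tile_sources):\n", "# gv.tile_sources.OSM * unstacked.hvplot(x='x', y='y', geo=True, height=500, cmap='Category10', alpha=0.7)"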
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Next:\n", "\n", "Now that your analysis is complete, you are ready to move on to [Data Visualization](./05_Data_Visualization.ipynb), where you will learn how to visualize the output of this pipeline and check that the inputs to the pipeline have the expected structure." ] } ], "metadata": { "language_info": { "name": "python", "pygments_lexer": "ipython3" } }, "nbformat": 4, "nbformat_minor": 2 }