{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Carbon Monitoring Project" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[FluxNet](http://fluxnet.fluxdata.org/) is a worldwide collection of sensor stations that record a number of local variables relating to atmospheric conditions, solar flux and soil moisture. This notebook visualizes the data used in the NASA Goddard/University of Alabama carbon monitoring project [NEE Data Fusion](https://github.com/greyNearing/nee_data_fusion/) (Grey Nearing et al., 2018), but using Python tools rather than Matlab.\n", "\n", "The scientific goals of this notebook are to:\n", "\n", "* examine the carbon flux measurements from each site (net C02 ecosystem exchange, or NEE)\n", "* determine the feasibility of using a model to predict the carbon flux at one site from every other site.\n", "* generate and explain model\n", "\n", "The \"meta\" goal is to show how Python tools let you solve the scientific goals, so that you can apply these tools to your own problems." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sys\n", "import dask\n", "import numpy as np\n", "import pandas as pd\n", "\n", "import holoviews as hv\n", "\n", "import hvplot.pandas\n", "import geoviews.tile_sources as gts\n", "\n", "pd.options.display.max_columns = 10\n", "hv.extension('bokeh', width=80)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Open the `intake` catalog\n", "This notebook uses [`intake`](https://intake.readthedocs.io/) to set up a data catalog with instructions for loading data for various projects. Before we read in any data, we'll open that catalog file and inspect the various data sources:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import intake\n", "\n", "cat = intake.open_catalog('./catalog.yml')\n", "list(cat)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load metadata\n", "First we will load in the fluxnet_metadata containing some site information for each of the fluxnet sites. Included in these data are the lat and lon of each site and the vegetation encoding (more on this below). In the next cell we will read in these data and take a look at a random few lines:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "metadata = cat.fluxnet_metadata().read()\n", "metadata.sample(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The vegetation type is classified according to the categories set out in the International Geosphere–Biosphere Programme (**igbd**) with several additional categories defined on the [fluxdata website](http://www.fluxdata.org/DataInfo/Dataset%20Doc%20Lib/VegTypeIGBP.aspx)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "igbp_vegetation = {\n", " 'WAT': '00 - Water',\n", " 'ENF': '01 - Evergreen Needleleaf Forest',\n", " 'EBF': '02 - Evergreen Broadleaf Forest',\n", " 'DNF': '03 - Deciduous Needleleaf Forest',\n", " 'DBF': '04 - Deciduous Broadleaf Forest',\n", " 'MF' : '05 - Mixed Forest',\n", " 'CSH': '06 - Closed Shrublands',\n", " 'OSH': '07 - Open shrublands',\n", " 'WSA': '08 - Woody Savannas',\n", " 'SAV': '09 - Savannas',\n", " 'GRA': '10 - Grasslands',\n", " 'WET': '11 - Permanent Wetlands',\n", " 'CRO': '12 - Croplands',\n", " 'URB': '13 - Urban and Built-up',\n", " 'CNV': '14 - Cropland/Nartural Vegetation Mosaics',\n", " 'SNO': '15 - Snow and Ice',\n", " 'BSV': '16 - Baren or Sparsely Vegetated'\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use the dictionary above to map from igbp codes to longer labels - creating a new column on our metadata. We will make this column an ordered categorical to improve visualizations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pandas.api.types import CategoricalDtype\n", "\n", "dtype = CategoricalDtype(ordered=True, categories=sorted(igbp_vegetation.values()))\n", "metadata['vegetation'] = (metadata['igbp']\n", " .apply(lambda x: igbp_vegetation[x])\n", " .astype(dtype))\n", "metadata.sample(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualize the fluxdata sites\n", "\n", "The PyViz ecosystem strives to make it always straightforward to visualize your data, to encourage you to be aware of it and understand it at each stage of a workflow. Here we will use Open Street Map tiles from `geoviews` \n", "to make a quick map of where the different sites are located and the vegetation at each site. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "metadata.hvplot.points('lon', 'lat', geo=True, color='vegetation',\n", " height=420, width=800, cmap='Category20') * gts.OSM" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading FluxNet data\n", "\n", "The data in the [nee_data_fusion](https://github.com/greyNearing/nee_data_fusion/) repository is expressed as a collection of CSV files where the site names are expressed in the filenames.\n", "\n", "This cell defines a function to:\n", "\n", "* read in the data from all sites\n", "* discard columns that we don't need\n", "* calculate day of year\n", "* caculate the season (spring, summer, fall, winter)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_columns = ['P_ERA', 'TA_ERA', 'PA_ERA', 'SW_IN_ERA', 'LW_IN_ERA', 'WS_ERA',\n", " 'VPD_ERA', 'TIMESTAMP', 'site', 'NEE_CUT_USTAR50']\n", "soil_data_columns = ['SWC_F_MDS_1', 'SWC_F_MDS_2', 'SWC_F_MDS_3',\n", " 'TS_F_MDS_1', 'TS_F_MDS_2', 'TS_F_MDS_3']\n", "\n", "keep_from_csv = data_columns + soil_data_columns\n", "\n", "y_variable = 'NEE_CUT_USTAR50'\n", "\n", "def season(df, metadata):\n", " \"\"\"Add season column based on lat and month\n", " \"\"\"\n", " site = df['site'].cat.categories.item()\n", " lat = metadata[metadata['site'] == site]['lat'].item()\n", " if lat > 0:\n", " seasons = {3: 'spring', 4: 'spring', 5: 'spring',\n", " 6: 'summer', 7: 'summer', 8: 'summer',\n", " 9: 'fall', 10: 'fall', 11: 'fall',\n", " 12: 'winter', 1: 'winter', 2: 'winter'}\n", " else:\n", " seasons = {3: 'fall', 4: 'fall', 5: 'fall',\n", " 6: 'winter', 7: 'winter', 8: 'winter',\n", " 9: 'spring', 10: 'spring', 11: 'spring',\n", " 12: 'summer', 1: 'summer', 2: 'summer'}\n", " return df.assign(season=df.TIMESTAMP.dt.month.map(seasons))\n", "\n", "def clean_data(df):\n", " \"\"\"\n", " Clean data columns:\n", " \n", " * add NaN col for missing columns\n", " * throw away un-needed columns\n", " * add day of year\n", " \"\"\"\n", " df = df.assign(**{col: np.nan for col in keep_from_csv if col not in df.columns})\n", " df = df[keep_from_csv]\n", " \n", " df = df.assign(DOY=df.TIMESTAMP.dt.dayofyear)\n", " df = df.assign(year=df.TIMESTAMP.dt.year)\n", " df = season(df, metadata)\n", " \n", " return df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Read and clean data\n", "\n", "This will take a few minutes if the data is not cached yet. First we will get a list of all the files on the S3 bucket, then we will iterate over those files and cache, read, and munge the data in each one. This is necessary since the columns in each file don't necessarily match the columns in the other files. Before we concatenate across sites, we need to do some cleaning. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from s3fs import S3FileSystem" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s3 = S3FileSystem(anon=True)\n", "s3_paths = s3.glob('earth-data/carbon_flux/nee_data_fusion/FLX*')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "datasets = []\n", "skipped = []\n", "used = []\n", "\n", "for i, s3_path in enumerate(s3_paths):\n", " sys.stdout.write('\\r{}/{}'.format(i+1, len(s3_paths)))\n", " \n", " dd = cat.fluxnet_daily(s3_path=s3_path).to_dask()\n", " site = dd['site'].cat.categories.item()\n", " \n", " if not set(dd.columns) >= set(data_columns):\n", " skipped.append(site)\n", " continue\n", "\n", " datasets.append(clean_data(dd))\n", " used.append(site)\n", "\n", "print()\n", "print('Found {} fluxnet sites with enough data to use - skipped {}'.format(len(used), len(skipped)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have a list of datasets, we will concatenate across all rows. Since the data is loaded lazily - using `dask` - we need to explicitly call `compute` to get the data in memory. To learn more about this look at the [Data Ingestion](../tutorial/01_Data_Ingestion.ipynb) tutorial." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = dask.dataframe.concat(datasets).compute()\n", "data.columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll also set the data type of `'site'` to `'category'`. This will come in handy later." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data['site'] = data['site'].astype('category')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing Data Available at Sites\n", "\n", "We can look at the sites for which we have data. We'll plot the sites on a world map again - this time using a custom colormap to denote sites with valid data, sites where data exist but were not loaded because too many fields were missing, and sites where no data was available. In addition to this map we'll get the count of different vegetation types at the sites." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def mapper(x):\n", " if x in used:\n", " return 'valid'\n", " elif x in skipped:\n", " return 'skipped'\n", " else:\n", " return 'no data'\n", " \n", "cmap = {'valid': 'green', 'skipped': 'red', 'no data': 'darkgray'}\n", "\n", "QA = metadata.copy()\n", "QA['quality'] = QA['site'].map(mapper)\n", "\n", "all_points = QA.hvplot.points('lon', 'lat', geo=True, color='quality', \n", " cmap=cmap, hover_cols=['site', 'vegetation'],\n", " height=420, width=600).options(tools=['hover', 'tap'], \n", " legend_position='top')\n", "\n", "def veg_count(data):\n", " veg_count = data['vegetation'].value_counts().sort_index(ascending=False)\n", " return veg_count.hvplot.barh(height=420, width=500)\n", "\n", "hist = veg_count(QA[QA.quality=='valid']).relabel('Vegetation counts for valid sites')\n", "\n", "all_points * gts.OSM + hist" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll make a couple of functions that generate plots on the full set of data or a subset of the data. We will use these in a dashboard below." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def site_timeseries(data):\n", " \"\"\"Timeseries plot showing the mean carbon flux at each DOY as well as the min and max\"\"\"\n", " \n", " tseries = hv.Overlay([\n", " (data.groupby(['DOY', 'year'])[y_variable]\n", " .mean().groupby('DOY').agg([np.min, np.max])\n", " .hvplot.area('DOY', 'amin', 'amax', alpha=0.2, fields={'amin': y_variable})),\n", " data.groupby('DOY')[y_variable].mean().hvplot()])\n", " \n", " return tseries.options(width=800, height=400)\n", "\n", "def site_count_plot(data):\n", " \"\"\"Plot of the number of observations of each of the non-mandatory variables.\"\"\"\n", " return data[soil_data_columns + ['site']].count().hvplot.bar(rot=90, width=300, height=400)\n", "\n", "timeseries = site_timeseries(data)\n", "count_plot = site_count_plot(data)\n", "timeseries + count_plot" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dashboard\n", "\n", "Using the plots and functions defined above, we can make a [Panel](http://panel.pyviz.org) dashboard of sites where by clicking on a site, you get the timeseries and variable count for that particular site." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from holoviews.streams import Selection1D\n", "import panel as pn" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stream = Selection1D(source=all_points)\n", "empty = timeseries.relabel('No selection') + count_plot.relabel('No selection')\n", "\n", "def site_selection(index):\n", " if not index:\n", " return empty\n", " i = index[0]\n", " if i in QA[QA.quality=='valid'].index:\n", " site = QA.iloc[i].site\n", " ts = site_timeseries(data[data.site == site]).relabel(site)\n", " ct = site_count_plot(data[data.site == site]).relabel(site)\n", " return ts + ct\n", " else:\n", " return empty\n", "\n", "one_site = hv.DynamicMap(site_selection, streams=[stream])\n", "\n", "pn.Column(pn.Row(all_points * gts.OSM, hist), pn.Row(one_site))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Merge data\n", "\n", "Now that the data are loaded in we can merge the daily data with the metadata from before.\n", "\n", "In order to use the categorical `igbp` field with machine-learning tools, we will create a one-hot encoding where each column corresponds to one of the `igbp` types, the rows correspond to observations and all the cells are filled with 0 or 1. This can be done use the method `pd.get_dummies`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "onehot_metadata = pd.get_dummies(metadata, columns=['igbp'])\n", "onehot_metadata.sample(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll do the same for season - keeping season as a column. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pd.get_dummies(data, columns=['season']).assign(season=data['season'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll merge the metadata with all our daily observations - creating a tidy dataframe. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.merge(data, onehot_metadata, on='site')\n", "df.sample(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualizing Soil Data Availability at Sites\n", "Now that all of our observations are merged with the site metadata, we can take a look at which sites have soil data. Some sites have soil moisture and temperature data at one depths and others have the data at all 3 depths. We'll look at the distribution of availability across sites." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "partial_soil_data = df[df[soil_data_columns].notnull().any(1)]\n", "partial_soil_data_sites = metadata[metadata.site.isin(partial_soil_data.site.unique())]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "full_soil_data = df[df[soil_data_columns].notnull().all(1)]\n", "full_soil_data_sites = metadata[metadata.site.isin(full_soil_data.site.unique())]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "args = dict(geo=True, hover_cols=['site', 'vegetation'], height=420, width=600)\n", "\n", "partial = partial_soil_data_sites.hvplot.points('lon', 'lat', **args).relabel('partial soil data')\n", "full = full_soil_data_sites.hvplot.points('lon', 'lat', **args).relabel('full soil data')\n", "\n", "(partial * full * gts.OSM).options(legend_position='top') + veg_count(partial_soil_data_sites) * veg_count(full_soil_data_sites)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since there seems to be a strong geographic pattern in the availablity of soil moisture and soil temperature data, we won't use those columns in our model. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = df.drop(columns=soil_data_columns)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will set data to only the rows where there are no null values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = df[df.notnull().all(1)].reset_index(drop=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df['site'] = df['site'].astype('category')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Assigning roles to variables\n", "\n", "Before we train a model to predict carbon flux globally we need to choose which variables will be included in the input to the model. For those we should only use variables that we expect to have some relationship with the variable that we are trying to predict. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "explanatory_cols = ['lat']\n", "data_cols = ['P_ERA', 'TA_ERA', 'PA_ERA', 'SW_IN_ERA', 'LW_IN_ERA', 'WS_ERA', 'VPD_ERA']\n", "season_cols = [col for col in df.columns if col.startswith('season_')]\n", "igbp_cols = [col for col in df.columns if col.startswith('igbp_')]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = df[data_cols + igbp_cols + explanatory_cols + season_cols].values\n", "y = df[y_variable].values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Scaling the Data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.preprocessing import StandardScaler\n", "\n", "# transform data matrix so 0 mean, unit variance for each feature\n", "X = StandardScaler().fit_transform(x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we are ready to train a model to predict carbon flux globally. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training and Testing\n", "\n", "We'll shuffle the sites and select 10% of them to be used as a test set. The rest we will use for training. Note that you might get better results using leave-one-out, but since we have a large amount of data, classical validation will be much faster." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import GroupShuffleSplit\n", "\n", "sep = GroupShuffleSplit(train_size=0.9, test_size=0.1)\n", "train_idx, test_idx = next(sep.split(X, y, df.site.cat.codes.values))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_sites = df.site.iloc[train_idx].unique()\n", "test_sites = df.site.iloc[test_idx].unique()\n", "\n", "train_site_metadata = metadata[metadata.site.isin(train_sites)]\n", "test_site_metadata = metadata[metadata.site.isin(test_sites)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's make a world map showing the sites that will be used as in training and those that will be used in testing:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train = train_site_metadata.hvplot.points('lon', 'lat', **args).relabel('training sites')\n", "test = test_site_metadata.hvplot.points( 'lon', 'lat', **args).relabel('testing sites') \n", "\n", "(train * test * gts.OSM).options(legend_position='top') + veg_count(train_site_metadata) * veg_count(test_site_metadata)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This distribution seems reasonably uniform and unbiased, though a different random sampling might have allowed testing for each continent and all vegetation types." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training the Regression Model\n", "\n", "We'll construct a linear regression model using our randomly selected training sites and test sites." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LinearRegression" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = LinearRegression()\n", "model.fit(X[train_idx], y[train_idx]);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll create a little function to look at observed vs predicted values" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from holoviews.operation.datashader import datashade\n", "\n", "def result_plot(predicted, observed, title, corr=None, res=0.1):\n", " \"\"\"Plot datashaded observed vs predicted\"\"\"\n", " \n", " corr = corr if corr is not None else np.corrcoef(predicted, observed)[0][1]\n", " title = '{} (correlation: {:.02f})'.format(title, corr)\n", " scatter = hv.Scatter((predicted, observed), 'predicted', 'observed')\\\n", " .redim.range(predicted=(observed.min(), observed.max()))\n", " \n", " return datashade(scatter, y_sampling=res, x_sampling=res).relabel(title)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(result_plot(model.predict(X[train_idx]), y[train_idx], 'Training') + \\\n", " result_plot(model.predict(X[test_idx ]), y[test_idx], 'Testing')).options('RGB', axiswise=True, width=500)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Prediction at test sites\n", "\n", "We can see how well the prediction does at each of our testing sites by making another dashboard. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results = []\n", "\n", "for site in test_sites:\n", " site_test_idx = df[df.site == site].index\n", " y_hat_test = model.predict(X[site_test_idx])\n", " corr = np.corrcoef(y_hat_test, y[site_test_idx])[0][1]\n", " \n", " results.append({'site': site,\n", " 'observed': y[site_test_idx], \n", " 'predicted': y_hat_test, \n", " 'corr': corr})" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_site_results = pd.merge(test_site_metadata, pd.DataFrame(results), \n", " on='site').set_index('site', drop=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can set up another dashboard with just the test sites, where tapping on a given site produces a plot of the predicted vs. observed carbon flux.\n", "\n", "First we'll set up a timeseries function." 
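] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before writing it, a quick look at the per-site correlation coefficients (computed above) gives a sense of where the model does well and where it struggles:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# optional: rank the test sites by how well the model predicts them\n", "test_site_results['corr'].sort_values(ascending=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now the timeseries function itself:"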
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def timeseries_observed_vs_predicted(site=None):\n", " \"\"\"\n", " Make a timeseries plot showing the predicted/observed \n", " mean carbon flux at each DOY as well as the min and max\n", " \"\"\"\n", " if site:\n", " data = df[df.site == site].assign(predicted=test_site_results.loc[site, 'predicted'])\n", " corr = test_site_results.loc[site, 'corr']\n", " title = 'Site: {}, correlation coefficient: {:.02f}'.format(site, corr)\n", " else:\n", " data = df.assign(predicted=np.nan)\n", " title = 'No Selection'\n", "\n", " spread = data.groupby(['DOY', 'year'])[y_variable].mean().groupby('DOY').agg([np.min, np.max]) \\\n", " .hvplot.area('DOY', 'amin', 'amax', alpha=0.2, fields={'amin': 'observed'})\n", " observed = data.groupby('DOY')[y_variable ].mean().hvplot().relabel('observed')\n", " predicted = data.groupby('DOY')['predicted'].mean().hvplot().relabel('predicted')\n", " \n", " return (spread * observed * predicted).options(width=800).relabel(title)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "timeseries_observed_vs_predicted(test_sites[1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we'll set up the points colored by correlation coefficient." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_points = test_site_results.hvplot.points('lon', 'lat', geo=True, c='corr', legend=False,\n", " cmap='coolwarm_r', s=150, height=420, width=800, \n", " hover_cols=['vegetation', 'site']).options(\n", " tools=['tap', 'hover'], line_color='black')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And put it together into a dashboard. This will look very similar to the one above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_stream = Selection1D(source=test_points)\n", "\n", "def test_site_selection(index):\n", " site = None if not index else test_sites[index[0]]\n", " return timeseries_observed_vs_predicted(site)\n", "\n", "one_test_site = hv.DynamicMap(test_site_selection, streams=[test_stream])\n", "title = 'Test sites colored by correlation: tap on site to plot long-term-mean timeseries'\n", "\n", "dash = pn.Column((test_points * gts.OSM).relabel(title), one_test_site)\n", "dash.servable()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Optional: Seasonal Prediction\n", "\n", "Clicking on some of the sites above suggests that prediction often works well for some months and not for others. Perhaps different variables are important for prediction, depending on the season? We might be able to achieve better results if we generate separate models for each season. First we'll set up a function that computes prediction stats for a given training index, test index, array of X, array of y and array of seasons." 
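] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before that, it's worth checking that each season has a reasonable number of observations to fit on (optional):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# optional: number of daily observations per season\n", "df['season'].value_counts()"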
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "seasons = ['summer', 'fall', 'spring', 'winter']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def prediction_stats(train_idx, test_idx, X, y, season):\n", " \"\"\"\n", " Compute prediction stats for equal length arrays X, y, and season\n", " split into train_idx and test_idx\n", " \"\"\"\n", " pred = {}\n", "\n", " for s in seasons:\n", " season_idx = np.where(season==s)\n", " season_train_idx = np.intersect1d(season_idx, train_idx, assume_unique=True)\n", " season_test_idx = np.intersect1d(season_idx, test_idx, assume_unique=True)\n", " \n", " model = LinearRegression()\n", " model.fit(X[season_train_idx], y[season_train_idx])\n", " \n", " y_hat = model.predict(X[season_test_idx])\n", " y_test = y[season_test_idx]\n", " pred[s] = {'predicted': y_hat,\n", " 'observed': y_test,\n", " 'corrcoef': np.corrcoef(y_hat, y_test)[0][1],\n", " 'test_index': test_idx}\n", " return pred" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Set up Dask\n", "With dask, we can distribute tasks over cores and do parallel computation. For more information, see https://dask.org/" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from distributed import Client\n", "\n", "client = Client()\n", "client" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we'll scatter our data using `dask` and make a bunch of different splits. For each split we'll compute the prediction stats for each season." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "futures = []\n", "sep = GroupShuffleSplit(n_splits=50, train_size=0.9, test_size=0.1)\n", "\n", "X_future = client.scatter(X)\n", "y_future = client.scatter(y)\n", "season_future = client.scatter(df['season'].values)\n", "\n", "for i, (train_index, test_index) in enumerate(sep.split(X, y, df.site.cat.codes.values)):\n", " train_future = client.scatter(train_index)\n", " test_future = client.scatter(test_index)\n", " futures += [client.submit(prediction_stats, train_future, test_future,\n", " X_future, y_future, season_future)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have our computations set up in dask, we can gather the results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results = client.gather(futures)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And consolidate the results for each season."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "output = {\n", " s: {\n", " 'predicted': np.concatenate([i[s]['predicted'] for i in results]),\n", " 'observed': np.concatenate([i[s]['observed'] for i in results]),\n", " 'test_index': np.concatenate([i[s]['test_index'] for i in results]),\n", " 'corrcoef': np.array([i[s]['corrcoef'] for i in results])\n", " } for s in seasons}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hv.Layout([\n", " result_plot(output[s]['predicted'], output[s]['observed'], s, output[s]['corrcoef'].mean())\n", " for s in seasons]).cols(2).options('RGB', axiswise=True, width=400)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def helper(s):\n", " corr = output[s]['corrcoef']\n", " return pd.DataFrame([corr, [s] * len(corr)], index=['corr', 'season']).T\n", "\n", "corr = pd.concat(map(helper, seasons)).reset_index(drop=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "corr.hvplot.hist(y='corr', groupby='season', bins=np.arange(0, .9, .05).tolist(), dynamic=False, width=500)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "corr.mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Suggested Next Steps\n", "\n", " - Can we predict certain vegetation types better than others?\n", " - Calculate fraction of explained variance.\n", " - Replace each FluxNet input variable with a remotely sensed (satellite-imaged) quantity to predict carbon flux globally." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.10" } }, "nbformat": 4, "nbformat_minor": 2 }