{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## [02_Probabilistic.ipynb](https://github.com/raybellwaves/xskillscore-tutorial/blob/master/02_Probabilistic.ipynb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook shows how to use probabilistic metrics in a typical data science task where the data is a pandas.DataFrame.\n", "\n", "The metric Continuous Ranked Probability Score (CRPS) is used to verify multiple forecasts for the same target." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import xarray as xr\n", "import pandas as pd\n", "import numpy as np\n", "import xskillscore as xs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use the same data as in [01_Deterministic.ipynb](https://github.com/raybellwaves/xskillscore-tutorial/blob/master/01_Determinisitic.ipynb)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " y\n", "DATE STORE SKU \n", "2020-01-01 0 0 6\n", " 1 9\n", " 2 2\n", " 1 0 6\n", " 1 8" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "stores = np.arange(4)\n", "skus = np.arange(3)\n", "dates = pd.date_range(\"1/1/2020\", \"1/5/2020\", freq=\"D\")\n", "\n", "rows = []\n", "for _, date in enumerate(dates):\n", " for _, store in enumerate(stores):\n", " for _, sku in enumerate(skus):\n", " rows.append(\n", " dict(\n", " {\n", " \"DATE\": date,\n", " \"STORE\": store,\n", " \"SKU\": sku,\n", " \"QUANTITY_SOLD\": np.random.randint(9) + 1,\n", " }\n", " )\n", " )\n", "df = pd.DataFrame(rows)\n", "df.rename(columns={\"QUANTITY_SOLD\": \"y\"}, inplace=True)\n", "df.set_index(['DATE', 'STORE', 'SKU'], inplace=True)\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Instread of making a single prediction as in [01_Deterministic.ipynb](https://github.com/raybellwaves/xskillscore-tutorial/blob/master/01_Determinisitic.ipynb) we will make multiple forecasts (ensemble forecast). This is akin to more complex methods such as **bagging**, **boosting** and **stacking**. \n", "\n", "Do 6 forecasts and append them to the `pandas.DataFrame` using an extra field called `member`. This will be saved in a new `pandas.DataFrame` called `df_yhat`:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " y member yhat\n", "DATE STORE SKU \n", "2020-01-01 0 0 6 1 4\n", " 1 9 1 7\n", " 2 2 1 1\n", " 1 0 6 1 1\n", " 1 8 1 5\n", "... .. ... ...\n", "2020-01-05 2 1 3 6 0\n", " 2 1 6 1\n", " 3 0 9 6 7\n", " 1 3 6 3\n", " 2 2 6 0\n", "\n", "[360 rows x 3 columns]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tmp = df.copy()\n", "for i in range(1, 7):\n", " tmp['member'] = i\n", " noise = np.random.uniform(-1, 1, size=len(df['y']))\n", " tmp['yhat'] = (df['y'] + (df['y'] * noise)).astype(int)\n", " if i == 1:\n", " df_yhat = tmp.copy()\n", " else:\n", " df_yhat = df_yhat.append(tmp)\n", "df_yhat" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Drop the `y` column from `df_yhat` and add `member` to the MultiIndex:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " yhat\n", "DATE STORE SKU member \n", "2020-01-01 0 0 1 4\n", " 1 1 7\n", " 2 1 1\n", " 1 0 1 1\n", " 1 1 5\n", "... ...\n", "2020-01-05 2 1 6 0\n", " 2 6 1\n", " 3 0 6 7\n", " 1 6 3\n", " 2 6 0\n", "\n", "[360 rows x 1 columns]" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_yhat.drop('y', axis=1, inplace=True)\n", "df_yhat.set_index(['member'], append=True, inplace=True)\n", "df_yhat" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Convert the target `pandas.DataFrame` (`df`) to an `xarray.Dataset`:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "\n", "Dimensions: (DATE: 5, SKU: 3, STORE: 4)\n", "Coordinates:\n", " * DATE (DATE) datetime64[ns] 2020-01-01 2020-01-02 ... 2020-01-05\n", " * STORE (STORE) int64 0 1 2 3\n", " * SKU (SKU) int64 0 1 2\n", "Data variables:\n", " y (DATE, STORE, SKU) int64 6 9 2 6 8 8 2 3 8 8 ... 9 2 9 7 3 1 9 3 2" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ds = df.to_xarray()\n", "ds" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now add the predicted `pandas.DataFrame` (`df`) as an `xarray.DataArray` called `yhat` to the `xarray.Dataset`:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "\n", "Dimensions: (DATE: 5, SKU: 3, STORE: 4, member: 6)\n", "Coordinates:\n", " * DATE (DATE) datetime64[ns] 2020-01-01 2020-01-02 ... 2020-01-05\n", " * STORE (STORE) int64 0 1 2 3\n", " * SKU (SKU) int64 0 1 2\n", " * member (member) int64 1 2 3 4 5 6\n", "Data variables:\n", " y (DATE, STORE, SKU) int64 6 9 2 6 8 8 2 3 8 8 ... 9 2 9 7 3 1 9 3 2\n", " yhat (DATE, STORE, SKU, member) int64 4 3 3 5 2 9 7 16 ... 3 1 1 3 2 2 0" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ds['yhat'] = df_yhat.to_xarray()['yhat']\n", "ds" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice how an `xarray.Dataset` can handle Data variables which have different shape but share some dimenstions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Using xskillscore - CRPS" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Continuous Ranked Probability Score (CRPS) can also be considered as the probabilistic Mean Absolute Error. It compares the empirical distribution of an ensemble forecast to a scalar observation it is given as:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\\begin{align}\n", "CRPS = \\int_{-\\infty}^{\\infty} (F(f) - H(f - o))^{2} df\n", "\\end{align}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "where where `F(f)` is the cumulative distribution function (CDF) of the forecast and `H()` is the Heaviside step function where the value is 1 if the argument is positive (the prediction overestimates the target or 0 (the prediction is equal to or lower than the target data).\n", "\n", "See https://climpred.readthedocs.io/en/stable/metrics.html#continuous-ranked-probability-score-crps for further documentation." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is not a common verification metric and in most cases the predictions are averaged then verified using deterministic metrics.\n", "\n", "For example, you can see averaging on the `member` dimension gives a better prediction than any indivdual prediction:\n", "\n", "Note: for this we will use the function itself insead of the Accessor method:" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "avg_member_rmse: 1.5715939066461817\n", "member 1:\n", "ind_member_rmse: 2.972092416687835\n", "member 2:\n", "ind_member_rmse: 3.1754264805429417\n", "member 3:\n", "ind_member_rmse: 3.3216461782274966\n", "member 4:\n", "ind_member_rmse: 2.851899951494325\n", "member 5:\n", "ind_member_rmse: 3.286335345030997\n", "member 6:\n", "ind_member_rmse: 3.361547262794322\n" ] } ], "source": [ "avg_member_rmse = xs.rmse(\n", " ds[\"y\"], ds[\"yhat\"].mean(dim=\"member\"), [\"DATE\", \"STORE\", \"SKU\"]\n", ")\n", "print(\"avg_member_rmse: \", avg_member_rmse.values)\n", "for i in range(len(ds.coords[\"member\"])):\n", " print(f\"member {i + 1}:\")\n", " ind_member_rmse = xs.rmse(\n", " ds[\"y\"], ds[\"yhat\"].sel(member=i + 1), [\"DATE\", \"STORE\", \"SKU\"]\n", " )\n", " print(\"ind_member_rmse: \", ind_member_rmse.values)\n", " \n", " assert avg_member_rmse < ind_member_rmse" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, you will see it appear in some kaggle compeitions such as the [NFL Big Data Bowl](https://www.kaggle.com/c/nfl-big-data-bowl-2020/overview/evaluation) and the [Second Annual Data Science Bowl](https://www.kaggle.com/c/second-annual-data-science-bowl/overview/evaluation) so it's good to have in your arsenal." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The CRPS is only valid over the `member` dimension and therefore only takes 2 arguments:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "\n", "array([[[1.5 , 2.58333333, 0.83333333],\n", " [1.27777778, 2.19444444, 2.33333333],\n", " [0.30555556, 0.94444444, 1.80555556],\n", " [1.33333333, 0.22222222, 1. ]],\n", "\n", " [[0.83333333, 0.86111111, 1. ],\n", " [0.38888889, 0.83333333, 2.69444444],\n", " [1.44444444, 0.58333333, 0.16666667],\n", " [0.61111111, 1.69444444, 0.02777778]],\n", "\n", " [[1.58333333, 1.69444444, 0.94444444],\n", " [1.16666667, 0.25 , 2.19444444],\n", " [1.52777778, 0.44444444, 1.41666667],\n", " [0.44444444, 1.55555556, 1.30555556]],\n", "\n", " [[0.88888889, 0.55555556, 1.83333333],\n", " [0.44444444, 0.44444444, 0.72222222],\n", " [3.47222222, 0.80555556, 1.5 ],\n", " [0.52777778, 2.5 , 0.75 ]],\n", "\n", " [[0.44444444, 0.5 , 0.91666667],\n", " [2.58333333, 0.91666667, 2.44444444],\n", " [1.47222222, 0.69444444, 0.11111111],\n", " [1.52777778, 0.33333333, 0.30555556]]])\n", "Coordinates:\n", " * DATE (DATE) datetime64[ns] 2020-01-01 2020-01-02 ... 2020-01-05\n", " * STORE (STORE) int64 0 1 2 3\n", " * SKU (SKU) int64 0 1 2" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ds.xs.crps_ensemble('y', 'yhat')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To return an overal CRPS it is recommened averaging over all dimensions before using `crps`:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
    " ], "text/plain": [ "\n", "array(0.80694444)" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "y = ds['y'].mean(dim=['DATE', 'STORE', 'SKU'])\n", "yhat = ds['yhat'].mean(dim=['DATE', 'STORE', 'SKU'])\n", "xs.crps_ensemble(y, yhat)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.2" } }, "nbformat": 4, "nbformat_minor": 4 }