{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# The FIxed Risk Multicategorical (FIRM) Score\n", "[Taggart, R., Loveday, N. and Griffiths, D., 2022. A scoring framework for tiered warnings and multicategorical forecasts based on fixed risk measures. Quarterly Journal of the Royal Meteorological Society, 148(744), pp.1389-1406.](https://rmets.onlinelibrary.wiley.com/doi/full/10.1002/qj.4266)\n", "\n", "The FIRM score is a scoring/verification framework for multicategorical forecast and warnings. The framework is tied to a risk threshold (or probabilistic decision threshold). This is in contrast to most other verification scores for multicategorical forecasts, where the optimal probability/risk threshold depends on the sample climatological frequency. Many of these other verification scores for multicategorical forecasts would encourage forecasters to over-warn/over-forecast more extreme, rarer events, causing many false alarms, particularly for longer lead days, which would erode the confidence that users have in forecasts.\n", "\n", "Lower FIRM scores are better (zero is best), similar to how lower MSE scores are better.\n", "\n", "## How does the FIRM framework work?\n", "A user needs to specify the following:\n", "\n", "1. An increasing sequence of category thresholds.\n", "2. The weights for the relative importance of each decision threshold. \n", "3. The risk threshold. This is a value between 0 and 1 and indicates the cost of a miss relative to the cost of a false alarm. \n", "4. Optional: A discount distance parameter. There is the opportunity to set the discount the penalty for near misses and near false alarms. \n", "\n", "## Example\n", "Let's say that we have a wind warning service that has three categories; no warning, gale warning and storm warning. This means that our category thresholds are `[34, 48]` where values correspond to wind magnitude forecasts (knots). It's twice as important to get the warning vs no warning decision threshold correct rather than the gale vs storm warning decision correct, so our weights are `[2, 1]`. The risk threshold of the service is that the cost of a miss to the cost of a false alarm is equal to 0.7. To optimise the expected FIRM score, one must forecast the highest category that has at least a 30% chance of occuring. This is calculated by $P (event) \\ge 1 - risk \\: threshold$. In this wind warning service we don't discount the penalty for near misses or near fale alarms.\n", "\n", "To summarise our FIRM framework parameters:\n", "\n", "1. Our category thresholds values are `[34, 48]`\n", "2. Our threshold weights are `[2, 1]`\n", "3. Our risk threshold is `0.7`\n", "\n", "\n", "Our scoring matrix based on these parameters is\n", "\n", "| | None | Gale | Storm |\n", "|-------|------|------|-------|\n", "| None | 0 | 1.4 | 2.3 |\n", "| Gale | 0.6 | 0 | 0.7 |\n", "| Storm | 0.9 | 0.3 | 0 |\n", "\n", "where the column headers correspond to the observed category and the row headers correspond to the forecast category. We can see that:\n", "\n", "- There is no penalty along the diagonal if you forecast the correct category.\n", "- You get a bigger penalty for missing an event than a false alarm. This is due to the risk threshold being set at `0.7`.\n", "- You get twice the penalty for forecasting none and observing gales compared to forecasting gales and observing storm force conditions due to the threshold weights. These two penalties sum together when being out by two categories (i.e., forecasting None and observing storm force conditions).\n", "\n", "For a detailed explanation of the calculations and various extensions, please read the [paper](https://rmets.onlinelibrary.wiley.com/doi/full/10.1002/qj.4266)." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Python example\n", "\n", "Let's verify two synthetic wind warning services based on the hypothetical service listed above. Forecast A's wind forecasts are unbiased compared to observations. Forecast B's wind forecasts have been biased up by 5%. This is to see if we can improve our forecast score by biasing up our forecasts slightly since the forecast service directive is to forecast the highest category that has at least a 30% chance of occuring.\n", "\n", "We assume that our categorical warning service is derived by converting continuous obs/forecasts into our 3 categories. This is handled within the implementation of `FIRM` within `scores`." ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "import xarray as xr\n", "import numpy as np\n", "import pandas as pd\n", "\n", "from scores.categorical import firm\n", "from scipy.stats import norm" ] }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [], "source": [ "# Multiplicative bias factor\n", "BIAS_FACTOR = 1.05" ] }, { "cell_type": "code", "execution_count": 83, "metadata": {}, "outputs": [], "source": [ "# Create observations for 100 dates\n", "obs = 50 * np.random.random((100, 100))\n", "obs = xr.DataArray(\n", " data=obs, \n", " dims=[\"time\", \"x\"],\n", " coords={\"time\": pd.date_range(\"2023-01-01\", \"2023-04-10\"), \"x\": np.arange(0, 100)}\n", ")\n", "\n", "# Create forecasts for 7 lead days\n", "fcst = xr.DataArray(data=[1]*7, dims=\"lead_day\", coords={\"lead_day\": np.arange(1, 8)})\n", "fcst = fcst * obs\n", "\n", "# Create two forecasts. Forecast A has no bias compared to the observations, but \n", "# Forecast B is biased upwards to align better with forecast directive of forecast \n", "# the highest category that has at least a 30% chance of occuring\n", "noise = 4 * norm.rvs(size=(7, 100, 100))\n", "fcst_a = fcst + noise\n", "fcst_b = BIAS_FACTOR * fcst + noise" ] }, { "cell_type": "code", "execution_count": 75, "metadata": {}, "outputs": [], "source": [ "# Calculate FIRM scores for both forecasts\n", "fcst_a_firm = firm(fcst_a, obs, 0.7, [34, 48], [2, 1], preserve_dims=\"lead_day\")\n", "fcst_b_firm = firm(fcst_b, obs, 0.7, [34, 48], [2, 1], preserve_dims=\"lead_day\")" ] }, { "cell_type": "code", "execution_count": 80, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
<xarray.Dataset>\n",
       "Dimensions:                ()\n",
       "Coordinates:\n",
       "    lead_day               int64 1\n",
       "Data variables:\n",
       "    firm_score             float64 0.08464\n",
       "    overforecast_penalty   float64 0.02808\n",
       "    underforecast_penalty  float64 0.05656
" ], "text/plain": [ "\n", "Dimensions: ()\n", "Coordinates:\n", " lead_day int64 1\n", "Data variables:\n", " firm_score float64 0.08464\n", " overforecast_penalty float64 0.02808\n", " underforecast_penalty float64 0.05656" ] }, "execution_count": 80, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# View results for lead day 1\n", "fcst_a_firm.sel(lead_day=1)" ] }, { "cell_type": "code", "execution_count": 81, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
<xarray.Dataset>\n",
       "Dimensions:                ()\n",
       "Coordinates:\n",
       "    lead_day               int64 1\n",
       "Data variables:\n",
       "    firm_score             float64 0.07648\n",
       "    overforecast_penalty   float64 0.04722\n",
       "    underforecast_penalty  float64 0.02926
" ], "text/plain": [ "\n", "Dimensions: ()\n", "Coordinates:\n", " lead_day int64 1\n", "Data variables:\n", " firm_score float64 0.07648\n", " overforecast_penalty float64 0.04722\n", " underforecast_penalty float64 0.02926" ] }, "execution_count": 81, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fcst_b_firm.sel(lead_day=1)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the FIRM score from Forecast B was lower (better) than Forecast A. This is because we biased our forecast up slightly to align better with the service directive of \"forecast the highest category that has at least a 30% chance of occuring\".\n", "\n", "We can also see the overforecast and underforecast penalties in the output. This allows ups to see the relative contribution to the overall scores due to over/under forecasting.\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Further extensions\n", "- Test the impact of varying the risk threshold\n", "- Solve for the optimal way to bias the forecasts to improve the verification score\n", "- Test out the impact of the discount distance parameter\n", "- Explore some of the extensions in the [FIRM paper](https://rmets.onlinelibrary.wiley.com/doi/full/10.1002/qj.4266)" ] } ], "metadata": { "kernelspec": { "display_name": "scoresenv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" }, "orig_nbformat": 4 }, "nbformat": 4, "nbformat_minor": 2 }