{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Tweedie regression on insurance claims\n\nThis example illustrates the use of Poisson, Gamma and Tweedie regression on\nthe [French Motor Third-Party Liability Claims dataset](https://www.openml.org/d/41214), and is inspired by an R tutorial [1]_.\n\nIn this dataset, each sample corresponds to an insurance policy, i.e. a\ncontract within an insurance company and an individual (policyholder).\nAvailable features include driver age, vehicle age, vehicle power, etc.\n\nA few definitions: a *claim* is the request made by a policyholder to the\ninsurer to compensate for a loss covered by the insurance. The *claim amount*\nis the amount of money that the insurer must pay. The *exposure* is the\nduration of the insurance coverage of a given policy, in years.\n\nHere our goal is to predict the expected\nvalue, i.e. the mean, of the total claim amount per exposure unit also\nreferred to as the pure premium.\n\nThere are several possibilities to do that, two of which are:\n\n1. Model the number of claims with a Poisson distribution, and the average\n claim amount per claim, also known as severity, as a Gamma distribution\n and multiply the predictions of both in order to get the total claim\n amount.\n2. Model the total claim amount per exposure directly, typically with a Tweedie\n distribution of Tweedie power $p \\in (1, 2)$.\n\nIn this example we will illustrate both approaches. We start by defining a few\nhelper functions for loading the data and visualizing results.\n\n.. [1] A. Noll, R. Salzmann and M.V. Wuthrich, Case Study: French Motor\n Third-Party Liability Claims (November 8, 2018). [doi:10.2139/ssrn.3164764](https://doi.org/10.2139/ssrn.3164764)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from functools import partial\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfrom sklearn.datasets import fetch_openml\nfrom sklearn.metrics import (\n mean_absolute_error,\n mean_squared_error,\n mean_tweedie_deviance,\n)\n\n\ndef load_mtpl2(n_samples=None):\n \"\"\"Fetch the French Motor Third-Party Liability Claims dataset.\n\n Parameters\n ----------\n n_samples: int, default=None\n number of samples to select (for faster run time). 
Full dataset has\n        678013 samples.\n    \"\"\"\n    # freMTPL2freq dataset from https://www.openml.org/d/41214\n    df_freq = fetch_openml(data_id=41214, as_frame=True).data\n    df_freq[\"IDpol\"] = df_freq[\"IDpol\"].astype(int)\n    df_freq.set_index(\"IDpol\", inplace=True)\n\n    # freMTPL2sev dataset from https://www.openml.org/d/41215\n    df_sev = fetch_openml(data_id=41215, as_frame=True).data\n\n    # sum ClaimAmount over identical IDs\n    df_sev = df_sev.groupby(\"IDpol\").sum()\n\n    df = df_freq.join(df_sev, how=\"left\")\n    df[\"ClaimAmount\"] = df[\"ClaimAmount\"].fillna(0)\n\n    # unquote string fields\n    for column_name in df.columns[[t is object for t in df.dtypes.values]]:\n        df[column_name] = df[column_name].str.strip(\"'\")\n    return df.iloc[:n_samples]\n\n\ndef plot_obs_pred(\n    df,\n    feature,\n    weight,\n    observed,\n    predicted,\n    y_label=None,\n    title=None,\n    ax=None,\n    fill_legend=False,\n):\n    \"\"\"Plot observed and predicted - aggregated per feature level.\n\n    Parameters\n    ----------\n    df : DataFrame\n        input data\n    feature: str\n        a column name of df for the feature to be plotted\n    weight : str\n        column name of df with the values of weights or exposure\n    observed : str\n        a column name of df with the observed target\n    predicted : DataFrame\n        a dataframe, with the same index as df, with the predicted target\n    fill_legend : bool, default=False\n        whether to show fill_between legend\n    \"\"\"\n    # aggregate observed and predicted variables by feature level\n    df_ = df.loc[:, [feature, weight]].copy()\n    df_[\"observed\"] = df[observed] * df[weight]\n    df_[\"predicted\"] = predicted * df[weight]\n    df_ = (\n        df_.groupby([feature])[[weight, \"observed\", \"predicted\"]]\n        .sum()\n        .assign(observed=lambda x: x[\"observed\"] / x[weight])\n        .assign(predicted=lambda x: x[\"predicted\"] / x[weight])\n    )\n\n    ax = df_.loc[:, [\"observed\", \"predicted\"]].plot(style=\".\", ax=ax)\n    y_max = df_.loc[:, [\"observed\", \"predicted\"]].values.max() * 0.8\n    p2 = ax.fill_between(\n        df_.index,\n        0,\n        y_max * df_[weight] / df_[weight].values.max(),\n        color=\"g\",\n        alpha=0.1,\n    )\n    if fill_legend:\n        ax.legend([p2], [\"{} distribution\".format(feature)])\n    ax.set(\n        ylabel=y_label if y_label is not None else None,\n        title=title if title is not None else \"Train: Observed vs Predicted\",\n    )\n\n\ndef score_estimator(\n    estimator,\n    X_train,\n    X_test,\n    df_train,\n    df_test,\n    target,\n    weights,\n    tweedie_powers=None,\n):\n    \"\"\"Evaluate an estimator on train and test sets with different metrics\"\"\"\n\n    metrics = [\n        (\"D\u00b2 explained\", None),  # Use default scorer if it exists\n        (\"mean abs. 
error\", mean_absolute_error),\n (\"mean squared error\", mean_squared_error),\n ]\n if tweedie_powers:\n metrics += [\n (\n \"mean Tweedie dev p={:.4f}\".format(power),\n partial(mean_tweedie_deviance, power=power),\n )\n for power in tweedie_powers\n ]\n\n res = []\n for subset_label, X, df in [\n (\"train\", X_train, df_train),\n (\"test\", X_test, df_test),\n ]:\n y, _weights = df[target], df[weights]\n for score_label, metric in metrics:\n if isinstance(estimator, tuple) and len(estimator) == 2:\n # Score the model consisting of the product of frequency and\n # severity models.\n est_freq, est_sev = estimator\n y_pred = est_freq.predict(X) * est_sev.predict(X)\n else:\n y_pred = estimator.predict(X)\n\n if metric is None:\n if not hasattr(estimator, \"score\"):\n continue\n score = estimator.score(X, y, sample_weight=_weights)\n else:\n score = metric(y, y_pred, sample_weight=_weights)\n\n res.append({\"subset\": subset_label, \"metric\": score_label, \"score\": score})\n\n res = (\n pd.DataFrame(res)\n .set_index([\"metric\", \"subset\"])\n .score.unstack(-1)\n .round(4)\n .loc[:, [\"train\", \"test\"]]\n )\n return res" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading datasets, basic feature extraction and target definitions\n\nWe construct the freMTPL2 dataset by joining the freMTPL2freq table,\ncontaining the number of claims (``ClaimNb``), with the freMTPL2sev table,\ncontaining the claim amount (``ClaimAmount``) for the same policy ids\n(``IDpol``).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.compose import ColumnTransformer\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import (\n FunctionTransformer,\n KBinsDiscretizer,\n OneHotEncoder,\n StandardScaler,\n)\n\ndf = load_mtpl2()\n\n\n# Correct for unreasonable observations (that might be data error)\n# and a few exceptionally large claim amounts\ndf[\"ClaimNb\"] = df[\"ClaimNb\"].clip(upper=4)\ndf[\"Exposure\"] = df[\"Exposure\"].clip(upper=1)\ndf[\"ClaimAmount\"] = df[\"ClaimAmount\"].clip(upper=200000)\n# If the claim amount is 0, then we do not count it as a claim. The loss function\n# used by the severity model needs strictly positive claim amounts. 
This way\n# frequency and severity are more consistent with each other.\ndf.loc[(df[\"ClaimAmount\"] == 0) & (df[\"ClaimNb\"] >= 1), \"ClaimNb\"] = 0\n\nlog_scale_transformer = make_pipeline(\n    FunctionTransformer(func=np.log), StandardScaler()\n)\n\ncolumn_trans = ColumnTransformer(\n    [\n        (\n            \"binned_numeric\",\n            KBinsDiscretizer(n_bins=10, random_state=0),\n            [\"VehAge\", \"DrivAge\"],\n        ),\n        (\n            \"onehot_categorical\",\n            OneHotEncoder(),\n            [\"VehBrand\", \"VehPower\", \"VehGas\", \"Region\", \"Area\"],\n        ),\n        (\"passthrough_numeric\", \"passthrough\", [\"BonusMalus\"]),\n        (\"log_scaled_numeric\", log_scale_transformer, [\"Density\"]),\n    ],\n    remainder=\"drop\",\n)\nX = column_trans.fit_transform(df)\n\n# Insurance companies are interested in modeling the Pure Premium, that is\n# the expected total claim amount per unit of exposure for each policyholder\n# in their portfolio:\ndf[\"PurePremium\"] = df[\"ClaimAmount\"] / df[\"Exposure\"]\n\n# This can be indirectly approximated by a two-step modeling approach: the product\n# of the Frequency times the average claim amount per claim:\ndf[\"Frequency\"] = df[\"ClaimNb\"] / df[\"Exposure\"]\ndf[\"AvgClaimAmount\"] = df[\"ClaimAmount\"] / np.fmax(df[\"ClaimNb\"], 1)\n\nwith pd.option_context(\"display.max_columns\", 15):\n    print(df[df.ClaimAmount > 0].head())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Frequency model -- Poisson distribution\n\nThe number of claims (``ClaimNb``) is a non-negative integer (0 included).\nThus, this target can be modelled by a Poisson distribution.\nIt is then assumed to be the number of discrete events occurring with a\nconstant rate in a given time interval (``Exposure``, in units of years).\nHere we model the frequency ``y = ClaimNb / Exposure``, which is still a\n(scaled) Poisson distribution, and use ``Exposure`` as `sample_weight`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.linear_model import PoissonRegressor\nfrom sklearn.model_selection import train_test_split\n\ndf_train, df_test, X_train, X_test = train_test_split(df, X, random_state=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us keep in mind that despite the seemingly large number of data points in\nthis dataset, the number of evaluation points where the claim amount is\nnon-zero is quite small:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "len(df_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "len(df_test[df_test[\"ClaimAmount\"] > 0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a consequence, we expect a significant variability in our\nevaluation upon random resampling of the train-test split.\n\nThe parameters of the model are estimated by minimizing the Poisson deviance\non the training set via a Newton solver. Some of the features are collinear\n(e.g. 
because we did not drop any categorical level in the `OneHotEncoder`), so\nwe use a weak L2 penalization to avoid numerical issues.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "glm_freq = PoissonRegressor(alpha=1e-4, solver=\"newton-cholesky\")\nglm_freq.fit(X_train, df_train[\"Frequency\"], sample_weight=df_train[\"Exposure\"])\n\nscores = score_estimator(\n    glm_freq,\n    X_train,\n    X_test,\n    df_train,\n    df_test,\n    target=\"Frequency\",\n    weights=\"Exposure\",\n)\nprint(\"Evaluation of PoissonRegressor on target Frequency\")\nprint(scores)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the score measured on the test set is surprisingly better than on\nthe training set. This might be specific to this random train-test split.\nProper cross-validation could help us to assess the sampling variability of\nthese results.\n\nWe can visually compare observed and predicted values, aggregated by the\ndrivers age (``DrivAge``), vehicle age (``VehAge``) and the insurance\nbonus/malus (``BonusMalus``).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, ax = plt.subplots(ncols=2, nrows=2, figsize=(16, 8))\nfig.subplots_adjust(hspace=0.3, wspace=0.2)\n\nplot_obs_pred(\n    df=df_train,\n    feature=\"DrivAge\",\n    weight=\"Exposure\",\n    observed=\"Frequency\",\n    predicted=glm_freq.predict(X_train),\n    y_label=\"Claim Frequency\",\n    title=\"train data\",\n    ax=ax[0, 0],\n)\n\nplot_obs_pred(\n    df=df_test,\n    feature=\"DrivAge\",\n    weight=\"Exposure\",\n    observed=\"Frequency\",\n    predicted=glm_freq.predict(X_test),\n    y_label=\"Claim Frequency\",\n    title=\"test data\",\n    ax=ax[0, 1],\n    fill_legend=True,\n)\n\nplot_obs_pred(\n    df=df_test,\n    feature=\"VehAge\",\n    weight=\"Exposure\",\n    observed=\"Frequency\",\n    predicted=glm_freq.predict(X_test),\n    y_label=\"Claim Frequency\",\n    title=\"test data\",\n    ax=ax[1, 0],\n    fill_legend=True,\n)\n\nplot_obs_pred(\n    df=df_test,\n    feature=\"BonusMalus\",\n    weight=\"Exposure\",\n    observed=\"Frequency\",\n    predicted=glm_freq.predict(X_test),\n    y_label=\"Claim Frequency\",\n    title=\"test data\",\n    ax=ax[1, 1],\n    fill_legend=True,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "According to the observed data, the frequency of accidents is higher for\ndrivers younger than 30 years old, and is positively correlated with the\n`BonusMalus` variable. Our model captures this behaviour mostly correctly.\n\n## Severity Model - Gamma distribution\n\nThe mean claim amount or severity (`AvgClaimAmount`) can be empirically\nshown to follow approximately a Gamma distribution. 
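For instance, one could overlay a Gamma density fitted with ``scipy.stats`` on a histogram of the observed average claim amounts (a minimal illustrative sketch, not executed as part of this example; it assumes ``scipy`` is available and uses the ``df_train`` split defined above):\n\n```python\n# Illustrative check: compare observed severities (claims only) with a Gamma\n# density fitted by maximum likelihood, with the location fixed at 0.\nfrom scipy import stats\n\nclaims = df_train.loc[df_train[\"ClaimAmount\"] > 0, \"AvgClaimAmount\"]\nshape, loc, scale = stats.gamma.fit(claims, floc=0)\nx = np.linspace(1, claims.quantile(0.95), 200)\nplt.hist(claims, bins=100, range=(0, claims.quantile(0.95)), density=True, alpha=0.5)\nplt.plot(x, stats.gamma.pdf(x, shape, loc=loc, scale=scale))\nplt.xlabel(\"AvgClaimAmount\")\nplt.title(\"Observed severities vs fitted Gamma density\")\nplt.show()\n```\n\n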
We fit a GLM for\nthe severity with the same features as the frequency model.\n\nNote:\n\n- We filter out ``ClaimAmount == 0`` as the Gamma distribution has support\n  on $(0, \\infty)$, not $[0, \\infty)$.\n- We use ``ClaimNb`` as `sample_weight` to account for policies that contain\n  more than one claim.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.linear_model import GammaRegressor\n\nmask_train = df_train[\"ClaimAmount\"] > 0\nmask_test = df_test[\"ClaimAmount\"] > 0\n\nglm_sev = GammaRegressor(alpha=10.0, solver=\"newton-cholesky\")\n\nglm_sev.fit(\n    X_train[mask_train.values],\n    df_train.loc[mask_train, \"AvgClaimAmount\"],\n    sample_weight=df_train.loc[mask_train, \"ClaimNb\"],\n)\n\nscores = score_estimator(\n    glm_sev,\n    X_train[mask_train.values],\n    X_test[mask_test.values],\n    df_train[mask_train],\n    df_test[mask_test],\n    target=\"AvgClaimAmount\",\n    weights=\"ClaimNb\",\n)\nprint(\"Evaluation of GammaRegressor on target AvgClaimAmount\")\nprint(scores)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Those values of the metrics are not necessarily easy to interpret. It can be\ninsightful to compare them with a model that does not use any input\nfeatures and always predicts a constant value, i.e. the average claim\namount, in the same setting:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.dummy import DummyRegressor\n\ndummy_sev = DummyRegressor(strategy=\"mean\")\ndummy_sev.fit(\n    X_train[mask_train.values],\n    df_train.loc[mask_train, \"AvgClaimAmount\"],\n    sample_weight=df_train.loc[mask_train, \"ClaimNb\"],\n)\n\nscores = score_estimator(\n    dummy_sev,\n    X_train[mask_train.values],\n    X_test[mask_test.values],\n    df_train[mask_train],\n    df_test[mask_test],\n    target=\"AvgClaimAmount\",\n    weights=\"ClaimNb\",\n)\nprint(\"Evaluation of a mean predictor on target AvgClaimAmount\")\nprint(scores)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We conclude that the claim amount is very challenging to predict. Still, the\n:class:`~sklearn.linear_model.GammaRegressor` is able to leverage some\ninformation from the input features to slightly improve upon the mean\nbaseline in terms of D\u00b2.\n\nNote that the resulting model predicts the average claim amount per claim. As such,\nit is conditional on having at least one claim, and cannot be used to predict\nthe average claim amount per policy. 
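(Under the usual assumption that claim counts and claim sizes are independent, the expected pure premium factorizes as $\\mathrm{E}[\\mathrm{PurePremium}] = \\mathrm{E}[\\mathrm{Frequency}] \\times \\mathrm{E}[\\mathrm{AvgClaimAmount}]$, which is what the product model evaluated later relies on.) 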
For this, it needs to be combined with\na claims frequency model.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(\n    \"Mean AvgClaim Amount per policy: %.2f \"\n    % df_train[\"AvgClaimAmount\"].mean()\n)\nprint(\n    \"Mean AvgClaim Amount | NbClaim > 0: %.2f\"\n    % df_train[\"AvgClaimAmount\"][df_train[\"AvgClaimAmount\"] > 0].mean()\n)\nprint(\n    \"Predicted Mean AvgClaim Amount | NbClaim > 0: %.2f\"\n    % glm_sev.predict(X_train).mean()\n)\nprint(\n    \"Predicted Mean AvgClaim Amount (dummy) | NbClaim > 0: %.2f\"\n    % dummy_sev.predict(X_train).mean()\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can visually compare observed and predicted values, aggregated by\nthe drivers age (``DrivAge``).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, ax = plt.subplots(ncols=1, nrows=2, figsize=(16, 6))\n\nplot_obs_pred(\n    df=df_train.loc[mask_train],\n    feature=\"DrivAge\",\n    weight=\"Exposure\",\n    observed=\"AvgClaimAmount\",\n    predicted=glm_sev.predict(X_train[mask_train.values]),\n    y_label=\"Average Claim Severity\",\n    title=\"train data\",\n    ax=ax[0],\n)\n\nplot_obs_pred(\n    df=df_test.loc[mask_test],\n    feature=\"DrivAge\",\n    weight=\"Exposure\",\n    observed=\"AvgClaimAmount\",\n    predicted=glm_sev.predict(X_test[mask_test.values]),\n    y_label=\"Average Claim Severity\",\n    title=\"test data\",\n    ax=ax[1],\n    fill_legend=True,\n)\nplt.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Overall, the drivers age (``DrivAge``) has a weak impact on the claim\nseverity, both in observed and predicted data.\n\n## Pure Premium Modeling via a Product Model vs single TweedieRegressor\n\nAs mentioned in the introduction, the total claim amount per unit of\nexposure can be modeled as the product of the prediction of the\nfrequency model and the prediction of the severity model.\n\nAlternatively, one can directly model the total loss with a unique\nCompound Poisson Gamma generalized linear model (with a log link function).\nThis model is a special case of the Tweedie GLM with a \"power\" parameter\n$p \\in (1, 2)$. Here, we fix a priori the `power` parameter of the\nTweedie model to some arbitrary value (1.9) in the valid range. Ideally, one\nwould select this value via grid-search by minimizing the negative\nlog-likelihood of the Tweedie model, but unfortunately the current\nimplementation does not allow for this (yet).\n\nWe will compare the performance of both approaches.\nTo quantify the performance of both models, one can compute\nthe mean deviance of the train and test data assuming a Compound\nPoisson-Gamma distribution of the total claim amount. This is equivalent to\na Tweedie distribution with a `power` parameter between 1 and 2.\n\nThe :func:`sklearn.metrics.mean_tweedie_deviance` depends on a `power`\nparameter. As we do not know the true value of the `power` parameter, we here\ncompute the mean deviances for a grid of possible values, and compare the\nmodels side by side, i.e. 
we compare them at identical values of `power`.\nIdeally, we hope that one model will be consistently better than the other,\nregardless of `power`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.linear_model import TweedieRegressor\n\nglm_pure_premium = TweedieRegressor(power=1.9, alpha=0.1, solver=\"newton-cholesky\")\nglm_pure_premium.fit(\n    X_train, df_train[\"PurePremium\"], sample_weight=df_train[\"Exposure\"]\n)\n\ntweedie_powers = [1.5, 1.7, 1.8, 1.9, 1.99, 1.999, 1.9999]\n\nscores_product_model = score_estimator(\n    (glm_freq, glm_sev),\n    X_train,\n    X_test,\n    df_train,\n    df_test,\n    target=\"PurePremium\",\n    weights=\"Exposure\",\n    tweedie_powers=tweedie_powers,\n)\n\nscores_glm_pure_premium = score_estimator(\n    glm_pure_premium,\n    X_train,\n    X_test,\n    df_train,\n    df_test,\n    target=\"PurePremium\",\n    weights=\"Exposure\",\n    tweedie_powers=tweedie_powers,\n)\n\nscores = pd.concat(\n    [scores_product_model, scores_glm_pure_premium],\n    axis=1,\n    sort=True,\n    keys=(\"Product Model\", \"TweedieRegressor\"),\n)\nprint(\"Evaluation of the Product Model and the Tweedie Regressor on target PurePremium\")\nwith pd.option_context(\"display.expand_frame_repr\", False):\n    print(scores)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, both modeling approaches yield comparable performance\nmetrics. For implementation reasons, the percentage of explained variance\n$D^2$ is not available for the product model.\n\nWe can additionally validate these models by comparing observed and\npredicted total claim amount over the test and train subsets. We see that,\non average, both models tend to underestimate the total claim (but this\nbehavior depends on the amount of regularization).\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "res = []\nfor subset_label, X, df in [\n    (\"train\", X_train, df_train),\n    (\"test\", X_test, df_test),\n]:\n    exposure = df[\"Exposure\"].values\n    res.append(\n        {\n            \"subset\": subset_label,\n            \"observed\": df[\"ClaimAmount\"].values.sum(),\n            \"predicted, frequency*severity model\": np.sum(\n                exposure * glm_freq.predict(X) * glm_sev.predict(X)\n            ),\n            \"predicted, tweedie, power=%.2f\"\n            % glm_pure_premium.power: np.sum(exposure * glm_pure_premium.predict(X)),\n        }\n    )\n\nprint(pd.DataFrame(res).set_index(\"subset\").T)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we can compare the two models using a plot of cumulative claims: for\neach model, the policyholders are ranked from safest to riskiest based on the\nmodel predictions and the cumulative proportion of claim amounts is plotted\nagainst the cumulative proportion of exposure. This plot is often called\nthe ordered Lorenz curve of the model.\n\nThe Gini coefficient (based on the area between the curve and the diagonal)\ncan be used as a model selection metric to quantify the ability of the model\nto rank policyholders. Note that this metric does not reflect the ability of\nthe models to make accurate predictions in terms of absolute value of total\nclaim amounts but only in terms of relative amounts as a ranking metric. 
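Concretely, in the code below the Gini index is computed from the area under the ordered Lorenz curve as $G = 1 - 2 \\, \\mathrm{AUC}$, so a curve close to the diagonal (random ranking) yields a Gini index close to 0. 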
The\nGini coefficient is upper bounded by 1.0 but even an oracle model that ranks\nthe policyholders by the observed claim amounts cannot reach a score of 1.0.\n\nWe observe that both models are able to rank policyholders by riskiness\nsignificantly better than chance, although they are also both far from the\noracle model due to the natural difficulty of the prediction problem from a\nfew features: most accidents are not predictable and can be caused by\nenvironmental circumstances that are not described at all by the input\nfeatures of the models.\n\nNote that the Gini index only characterizes the ranking performance of the\nmodel but not its calibration: any monotonic transformation of the predictions\nleaves the Gini index of the model unchanged.\n\nFinally, one should highlight that the Compound Poisson Gamma model that is\ndirectly fit on the pure premium is operationally simpler to develop and\nmaintain as it consists of a single scikit-learn estimator instead of a pair\nof models, each with its own set of hyperparameters.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.metrics import auc\n\n\ndef lorenz_curve(y_true, y_pred, exposure):\n    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)\n    exposure = np.asarray(exposure)\n\n    # order samples by increasing predicted risk:\n    ranking = np.argsort(y_pred)\n    ranked_exposure = exposure[ranking]\n    ranked_pure_premium = y_true[ranking]\n    cumulative_claim_amount = np.cumsum(ranked_pure_premium * ranked_exposure)\n    cumulative_claim_amount /= cumulative_claim_amount[-1]\n    cumulative_exposure = np.cumsum(ranked_exposure)\n    cumulative_exposure /= cumulative_exposure[-1]\n    return cumulative_exposure, cumulative_claim_amount\n\n\nfig, ax = plt.subplots(figsize=(8, 8))\n\ny_pred_product = glm_freq.predict(X_test) * glm_sev.predict(X_test)\ny_pred_total = glm_pure_premium.predict(X_test)\n\nfor label, y_pred in [\n    (\"Frequency * Severity model\", y_pred_product),\n    (\"Compound Poisson Gamma\", y_pred_total),\n]:\n    cum_exposure, cum_claims = lorenz_curve(\n        df_test[\"PurePremium\"], y_pred, df_test[\"Exposure\"]\n    )\n    gini = 1 - 2 * auc(cum_exposure, cum_claims)\n    label += \" (Gini index: {:.3f})\".format(gini)\n    ax.plot(cum_exposure, cum_claims, linestyle=\"-\", label=label)\n\n# Oracle model: y_pred == y_test\ncum_exposure, cum_claims = lorenz_curve(\n    df_test[\"PurePremium\"], df_test[\"PurePremium\"], df_test[\"Exposure\"]\n)\ngini = 1 - 2 * auc(cum_exposure, cum_claims)\nlabel = \"Oracle (Gini index: {:.3f})\".format(gini)\nax.plot(cum_exposure, cum_claims, linestyle=\"-.\", color=\"gray\", label=label)\n\n# Random baseline\nax.plot([0, 1], [0, 1], linestyle=\"--\", color=\"black\", label=\"Random baseline\")\nax.set(\n    title=\"Lorenz Curves\",\n    xlabel=(\n        \"Cumulative proportion of exposure\\n\"\n        \"(ordered by model from safest to riskiest)\"\n    ),\n    ylabel=\"Cumulative proportion of claim amounts\",\n)\nax.legend(loc=\"upper left\")\nplt.plot()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }