{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Comparing Linear Bayesian Regressors\n\nThis example compares two different Bayesian regressors:\n\n- an `automatic_relevance_determination`\n- a `bayesian_ridge_regression`\n\nIn the first part, we use an `ordinary_least_squares` (OLS) model as a\nbaseline for comparing the models' coefficients with respect to the true\ncoefficients. Thereafter, we show that the estimation of such models is done by\niteratively maximizing the marginal log-likelihood of the observations.\n\nIn the last section we plot predictions and uncertainties for the ARD and the\nBayesian Ridge regressions using a polynomial feature expansion to fit a\nnon-linear relationship between `X` and `y`.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Authors: The scikit-learn developers\n# SPDX-License-Identifier: BSD-3-Clause" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Models' robustness to recover the ground truth weights\n\n### Generate synthetic dataset\n\nWe generate a dataset where `X` and `y` are linearly linked: 10 of the\nfeatures of `X` will be used to generate `y`. The other features are not\nuseful for predicting `y`. In addition, we generate a dataset where `n_samples\n== n_features`. Such a setting is challenging for an OLS model and potentially\nleads to arbitrarily large weights. Having a prior on the weights and a\npenalty alleviates the problem. Finally, Gaussian noise is added.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.datasets import make_regression\n\nX, y, true_weights = make_regression(\n    n_samples=100,\n    n_features=100,\n    n_informative=10,\n    noise=8,\n    coef=True,\n    random_state=42,\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Fit the regressors\n\nWe now fit both Bayesian models and the OLS to later compare the models'\ncoefficients.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import pandas as pd\n\nfrom sklearn.linear_model import ARDRegression, BayesianRidge, LinearRegression\n\nolr = LinearRegression().fit(X, y)\nbrr = BayesianRidge(compute_score=True, max_iter=30).fit(X, y)\nard = ARDRegression(compute_score=True, max_iter=30).fit(X, y)\ndf = pd.DataFrame(\n    {\n        \"Weights of true generative process\": true_weights,\n        \"ARDRegression\": ard.coef_,\n        \"BayesianRidge\": brr.coef_,\n        \"LinearRegression\": olr.coef_,\n    }\n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Plot the true and estimated coefficients\n\nNow we compare the coefficients of each model with the weights of\nthe true generative model.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nimport seaborn as sns\nfrom matplotlib.colors import SymLogNorm\n\nplt.figure(figsize=(10, 6))\nax = sns.heatmap(\n    df.T,\n    norm=SymLogNorm(linthresh=10e-4, vmin=-80, vmax=80),\n    cbar_kws={\"label\": \"coefficients' values\"},\n    cmap=\"seismic_r\",\n)\nplt.ylabel(\"linear model\")\nplt.xlabel(\"coefficients\")\nplt.tight_layout(rect=(0, 0, 1, 0.95))\n_ = plt.title(\"Models' coefficients\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Due to the added noise, none of the models recover the true weights. Indeed,\nall models have more than 10 non-zero coefficients. Compared to the OLS\nestimator, the coefficients using a Bayesian Ridge regression are slightly\nshifted toward zero, which stabilises them. The ARD regression provides a\nsparser solution: some of the non-informative coefficients are set exactly to\nzero, while shifting others closer to zero. Some non-informative coefficients\nare still present and retain large values.\n\n" ] },
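{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick, illustrative check of the sparsity claim above, we can count how many\ncoefficients of each fitted model are effectively non-zero. This is only a minimal\nsketch: the `1e-3` threshold is an arbitrary choice made for this comparison and\nis not a parameter of the estimators themselves.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n\n# Illustrative, arbitrary threshold below which a coefficient is treated as zero\nthreshold = 1e-3\nfor name, coef in [\n    (\"LinearRegression\", olr.coef_),\n    (\"BayesianRidge\", brr.coef_),\n    (\"ARDRegression\", ard.coef_),\n]:\n    n_effective = np.sum(np.abs(coef) > threshold)\n    print(f\"{name}: {n_effective} coefficients above {threshold}\")" ] },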
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Plot the marginal log-likelihood\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n\n# scores_ stores the log marginal likelihood at each iteration; we negate it to\n# plot the quantity that is minimized during fitting.\nard_scores = -np.array(ard.scores_)\nbrr_scores = -np.array(brr.scores_)\nplt.plot(ard_scores, color=\"navy\", label=\"ARD\")\nplt.plot(brr_scores, color=\"red\", label=\"BayesianRidge\")\nplt.ylabel(\"Negative log-likelihood\")\nplt.xlabel(\"Iterations\")\nplt.xlim(1, 30)\nplt.legend()\n_ = plt.title(\"Models' negative log-likelihood\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Indeed, both models minimize the negative log marginal likelihood (that is, they\niteratively maximize the marginal log-likelihood) up to an arbitrary cutoff\ndefined by the `max_iter` parameter.\n\n## Bayesian regressions with polynomial feature expansion\n\n### Generate synthetic dataset\n\nWe create a target that is a non-linear function of the input feature.\nNoise drawn from a standard normal distribution and scaled by a factor of 1.35\nis added.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import PolynomialFeatures, StandardScaler\n\nrng = np.random.RandomState(0)\nn_samples = 110\n\n# sort the data to make plotting easier later\nX = np.sort(-10 * rng.rand(n_samples) + 10)\nnoise = rng.normal(0, 1, n_samples) * 1.35\ny = np.sqrt(X) * np.sin(X) + noise\nfull_data = pd.DataFrame({\"input_feature\": X, \"target\": y})\nX = X.reshape((-1, 1))\n\n# points beyond the training range, used to assess extrapolation\nX_plot = np.linspace(10, 10.4, 10)\ny_plot = np.sqrt(X_plot) * np.sin(X_plot)\nX_plot = np.concatenate((X, X_plot.reshape((-1, 1))))\ny_plot = np.concatenate((y - noise, y_plot))" ] },
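{ "cell_type": "markdown", "metadata": {}, "source": [ "Before fitting any model, it can help to look at the synthetic data. The cell\nbelow is a purely optional, illustrative preview that reuses `full_data`,\n`X_plot` and `y_plot` from the cell above to show the noisy samples together\nwith the noise-free ground-truth curve.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Illustrative preview of the synthetic data and the noise-free ground truth\nax = sns.scatterplot(\n    data=full_data, x=\"input_feature\", y=\"target\", color=\"black\", alpha=0.75\n)\nax.plot(X_plot, y_plot, color=\"black\", label=\"Ground Truth\")\nax.legend()\n_ = ax.set_title(\"Synthetic non-linear training data\")" ] },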
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Fit the regressors\n\nHere we try a degree 10 polynomial to potentially overfit, though the Bayesian\nlinear models regularize the size of the polynomial coefficients. As\n`fit_intercept=True` by default for\n:class:`~sklearn.linear_model.ARDRegression` and\n:class:`~sklearn.linear_model.BayesianRidge`,\n:class:`~sklearn.preprocessing.PolynomialFeatures` should not introduce an\nadditional bias feature. By setting `return_std=True` in `predict`, the Bayesian\nregressors return the standard deviation of the predictive distribution at the\nquery points.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ard_poly = make_pipeline(\n    PolynomialFeatures(degree=10, include_bias=False),\n    StandardScaler(),\n    ARDRegression(),\n).fit(X, y)\nbrr_poly = make_pipeline(\n    PolynomialFeatures(degree=10, include_bias=False),\n    StandardScaler(),\n    BayesianRidge(),\n).fit(X, y)\n\ny_ard, y_ard_std = ard_poly.predict(X_plot, return_std=True)\ny_brr, y_brr_std = brr_poly.predict(X_plot, return_std=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Plot the polynomial regressions with the standard deviation of the predictions\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ax = sns.scatterplot(\n    data=full_data, x=\"input_feature\", y=\"target\", color=\"black\", alpha=0.75\n)\nax.plot(X_plot, y_plot, color=\"black\", label=\"Ground Truth\")\nax.plot(X_plot, y_brr, color=\"red\", label=\"BayesianRidge with polynomial features\")\nax.plot(X_plot, y_ard, color=\"navy\", label=\"ARD with polynomial features\")\nax.fill_between(\n    X_plot.ravel(),\n    y_ard - y_ard_std,\n    y_ard + y_ard_std,\n    color=\"navy\",\n    alpha=0.3,\n)\nax.fill_between(\n    X_plot.ravel(),\n    y_brr - y_brr_std,\n    y_brr + y_brr_std,\n    color=\"red\",\n    alpha=0.3,\n)\nax.legend()\n_ = ax.set_title(\"Polynomial fit of a non-linear feature\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The shaded error bands represent one standard deviation of the predicted Gaussian\ndistribution at the query points. Notice that the ARD regression captures the\nground truth best when using the default parameters in both models, but\nfurther reducing the `lambda_init` hyperparameter of the Bayesian Ridge can\nreduce its bias (see example\n`sphx_glr_auto_examples_linear_model_plot_bayesian_ridge_curvefit.py`).\nFinally, due to the intrinsic limitations of a polynomial regression, both\nmodels fail when extrapolating.\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 0 }