{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "expmkveO04pw" }, "source": [ "## Maximum a Posteriori Parameter Inference\n", "\n", "In this notebook, we introduce the Maximum a Posteriori (MAP), which extends Maximum Likelihood Estimation (MLE) by inclusion of a prior $p(\\theta)$ into the cost function. To include this prior information, we construct a Bayesian Posterior with Bayesian's Theorem given as,\n", "\n", "$$\n", "P(\\theta|D) = \\frac{P(D|\\theta)P(\\theta)}{P(D)}\n", "$$\n", "\n", "where, \n", "$~$ \n", "$P(\\theta|D)$ represents the posterior and can be read as \"the probability of the parameters $(\\theta)$ given the data $(D)$\", \n", "$P(D|\\theta)$ is the probability of the data given the parameters, commonly called the likelihood, \n", "$P(\\theta)$ represents the probability of the parameters commonly called the prior, \n", "$P(D)$ is the probability of the data and is commonly called the marginal probability. \n", "\n", "However, as the marginal probability is commonly difficult to compute and represents a normalisation constant, in the case of MAP this term is forgone and the proportional posterior is optmised instead. This is given as,\n", "\n", "$$\n", "P(\\theta|D) \\propto P(D|\\theta)P(\\theta)\n", "$$\n", "\n", "### Setting up the Environment\n", "\n", "Before we begin, we need to ensure that we have all the necessary tools. We will install and import PyBOP as well as upgrade dependencies. We also fix the random seed in order to generate consistent output during development, although this does not need to be done in practice." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "X87NUGPW04py", "outputId": "0d785b07-7cff-4aeb-e60a-4ff5a669afbf" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/Users/engs2510/Documents/Git/Second_PyBOP/.nox/notebooks-overwrite/bin/python3: No module named pip\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Note: you may need to restart the kernel to use updated packages.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "/Users/engs2510/Documents/Git/Second_PyBOP/.nox/notebooks-overwrite/bin/python3: No module named pip\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install --upgrade pip ipywidgets -q\n", "%pip install pybop -q\n", "\n", "import time\n", "\n", "import numpy as np\n", "\n", "import pybop\n", "\n", "pybop.plot.PlotlyManager().pio.renderers.default = \"notebook_connected\"\n", "np.random.seed(8)" ] }, { "cell_type": "markdown", "metadata": { "id": "5XU-dMtU04p2" }, "source": [ "### Creating the model\n", "\n", "To demonstrate the MAP process, we will first need a forward model and data to run parameter inference on. As we are introducing this as a simple example, we will use the PyBOP forward model with white noise as the reference. This requires defining a parameter set and the model itself." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "parameter_set = pybop.ParameterSet.pybamm(\"Chen2020\")\n", "model = pybop.lithium_ion.SPM(parameter_set=parameter_set)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Simulating Forward Model\n", "\n", "We can now simulate the model using the `model.predict` method. This method is a light wrapper on the `Pybamm.Simulation` class and be used as such. 
"For this example, we use the default current function for the `Chen2020` parameter set (5 A) to generate the voltage data. As the goal is to investigate the MAP method, we will generate a range of observations from the forward model. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "sBasxv8U04p3" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of observations in each trajectory: [2, 4, 8, 16, 32, 64, 128, 256]\n" ] } ], "source": [ "n = 8  # Number of time-series trajectories\n", "observations = [\n", "    2**j for j in range(1, n + 1)\n", "]  # Number of observations in each trajectory\n", "values = []\n", "for i in observations:\n", "    t_eval = np.linspace(0, 10, i)\n", "    values.append(model.predict(t_eval=t_eval))  # predict and store\n", "\n", "print(f\"Number of observations in each trajectory: {observations}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding noise to synthetic voltage data\n", "\n", "To make the parameter inference more realistic, we add zero-mean Gaussian noise to the data. While this doesn't truly represent the challenge of parameter inference with experimental data, it does ensure that the curvature of the cost landscape isn't perfect. For a more realistic representation of experimental data, a different noise function could be used. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sigma = 0.005\n", "corrupt_values = values[1][\"Voltage [V]\"].data + np.random.normal(\n", "    0, sigma, len(values[1][\"Voltage [V]\"].data)\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating the PyBOP dataset\n", "\n", "The reference trajectory needs to be included in the optimisation task, which is handled within the `Dataset` class. In this case, the dataset is composed of the time, current, and noisy voltage data; however, if we were performing parameter inference from a different measured signal, such as 'Cell Temperature', that signal would need to be included instead." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zuvGHWID04p_" }, "outputs": [], "source": [ "dataset = pybop.Dataset(\n", "    {\n", "        \"Time [s]\": values[1][\"Time [s]\"].data,\n", "        \"Current function [A]\": values[1][\"Current [A]\"].data,\n", "        \"Voltage [V]\": corrupt_values,\n", "    }\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "ffS3CF_704qA" }, "source": [ "### Constructing the Parameters Class\n", "\n", "Next, we need to select the forward model parameters for inference. The PyBOP `Parameters` class composes as many individual PyBOP `Parameter` objects as the user wants (whether these parameters can actually be identified is outside the scope of this example). This class requires the parameter name, which must resolve to a parameter within the `ParameterSet` defined above. Additionally, this class can accept an `initial_value`, which will be used by the optimiser, as well as bounds. For this example, we will provide a `prior` for each parameter, which will be used later by the MAP process."
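, "\n", "As a short aside, the `pybop.Gaussian` priors used below take two arguments, interpreted here as the mean $\\mu$ and standard deviation $\\sigma$ (an assumption consistent with the values chosen), giving the density\n", "\n", "$$\n", "p(\\theta) = \\frac{1}{\\sigma\\sqrt{2\\pi}}\\exp\\left(-\\frac{(\\theta-\\mu)^2}{2\\sigma^2}\\right)\n", "$$\n", "\n", "which enters the MAP objective additively, since, up to an additive constant, $\\log P(\\theta|D) = \\log P(D|\\theta) + \\log P(\\theta)$."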
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "WPCybXIJ04qA" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Default bounds applied based on prior distribution.\n", "Default bounds applied based on prior distribution.\n" ] } ], "source": [ "parameters = pybop.Parameters(\n", "    pybop.Parameter(\n", "        \"Negative particle radius [m]\",\n", "        prior=pybop.Gaussian(4e-6, 1e-6),\n", "        true_value=parameter_set[\"Negative particle radius [m]\"],\n", "    ),\n", "    pybop.Parameter(\n", "        \"Positive particle radius [m]\",\n", "        prior=pybop.Gaussian(5e-6, 1e-6),\n", "        true_value=parameter_set[\"Positive particle radius [m]\"],\n", "    ),\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "n4OHa-aF04qA" }, "source": [ "### Setting up the Fitting Problem, Likelihood, and Posterior\n", "\n", "With the dataset and parameters defined, we can now construct the `FittingProblem`, which composes the model, parameters, and dataset, providing a single class with the required information for simulating and assessing the forward model. \n", "\n", "As described in the introduction to this notebook, the MAP method uses the non-normalised posterior for optimisation. This is defined in PyBOP as the `LogPosterior` class, which has arguments for the likelihood and prior functions. If a prior is not provided, the parameter priors are used by default. In this example, we will use a `GaussianLogLikelihoodKnownSigma` likelihood function and the default priors set above. For numerical reasons, we optimise the log of the posterior; however, this doesn't affect the final results." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "etMzRtx404qA" }, "outputs": [], "source": [ "problem = pybop.FittingProblem(model, parameters, dataset)\n", "likelihood = pybop.GaussianLogLikelihoodKnownSigma(problem, sigma0=sigma)\n", "posterior = pybop.LogPosterior(likelihood)" ] }, { "cell_type": "markdown", "metadata": { "id": "caprp-bV04qB" }, "source": [ "### Plotting the Posterior components\n", "\n", "Next, we investigate the individual components of the posterior. The `LogPosterior` class provides the prior and likelihood as attributes. To investigate the contribution of each to the posterior, we plot their landscapes across a selected parameter range." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-9OVt0EQ04qB" }, "outputs": [ { "data": { "text/html": [ " \n", " " ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "