{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "expmkveO04pw" }, "source": [ "## Different Optimisers for SPMe Parameter Estimation\n", "\n", "In this notebook, we demonstrate parameter estimation for a single-particle model for various PyBOP optimisers. PyBOP offers a variety of gradient and non-gradient based optimisers, with a table of the currently supported methods shown in the Readme. In this example, we will set up the model, problem, and cost function and investigate how the different optimisers perform under this task.\n", "\n", "### Setting up the Environment\n", "\n", "Before we begin, we need to ensure that we have all the necessary tools. We will install PyBOP and upgrade dependencies:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "X87NUGPW04py", "outputId": "0d785b07-7cff-4aeb-e60a-4ff5a669afbf" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/Users/engs2510/Documents/Git/Second_PyBOP/.nox/notebooks-overwrite/bin/python3: No module named pip\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Note: you may need to restart the kernel to use updated packages.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "/Users/engs2510/Documents/Git/Second_PyBOP/.nox/notebooks-overwrite/bin/python3: No module named pip\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install --upgrade pip ipywidgets -q\n", "%pip install pybop -q" ] }, { "cell_type": "markdown", "metadata": { "id": "jAvD5fk104p0" }, "source": [ "### Importing Libraries\n", "\n", "With the environment set up, we can now import PyBOP alongside other libraries we will need:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SQdt4brD04p1" }, "outputs": [], "source": [ "import numpy as np\n", "\n", "import pybop\n", "\n", 
"pybop.plot.PlotlyManager().pio.renderers.default = \"notebook_connected\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's fix the random seed in order to generate consistent output during development, although this does not need to be done in practice." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.random.seed(8)" ] }, { "cell_type": "markdown", "metadata": { "id": "5XU-dMtU04p2" }, "source": [ "## Generating Synthetic Data\n", "\n", "To demonstrate the parameter estimation, we first need some data. We will generate synthetic data using a PyBOP DFN forward model, which requires defining a parameter set and the model itself.\n", "\n", "### Defining Parameters and Model\n", "\n", "We start by creating an example parameter set, constructing the DFN for synthetic generation, and the model we will be fitting (SPMe)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "parameter_set = pybop.ParameterSet.pybamm(\"Chen2020\")\n", "synth_model = pybop.lithium_ion.DFN(parameter_set=parameter_set)\n", "model = pybop.lithium_ion.SPMe(parameter_set=parameter_set)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Simulating the Forward Model\n", "\n", "We can then simulate the model using the `predict` method, with a default constant current discharge to generate the voltage data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "sBasxv8U04p3" }, "outputs": [], "source": [ "t_eval = np.arange(0, 2000, 10)\n", "initial_state = {\"Initial SoC\": 1.0}\n", "values = synth_model.predict(t_eval=t_eval, initial_state=initial_state)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding Noise to Voltage Data\n", "\n", "To make the parameter estimation more realistic, we add Gaussian noise to the data." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sigma = 0.002\n", "corrupt_values = values[\"Voltage [V]\"].data + np.random.normal(0, sigma, len(t_eval))" ] }, { "cell_type": "markdown", "metadata": { "id": "X8-tubYY04p_" }, "source": [ "## Identifying the Parameters" ] }, { "cell_type": "markdown", "metadata": { "id": "PQqhvSZN04p_" }, "source": [ "We will now set up the parameter estimation process by defining the datasets for optimisation and selecting the model parameters we wish to estimate." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating a Dataset\n", "\n", "The dataset for optimisation is composed of time, current, and the noisy voltage data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zuvGHWID04p_" }, "outputs": [], "source": [ "dataset = pybop.Dataset(\n", " {\n", " \"Time [s]\": t_eval,\n", " \"Current function [A]\": values[\"Current [A]\"].data,\n", " \"Voltage [V]\": corrupt_values,\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "ffS3CF_704qA" }, "source": [ "### Defining Parameters to Estimate\n", "\n", "We select the parameters for estimation and set up their prior distributions and bounds. In this example, non-geometric parameters for each electrode's active material volume fraction are selected." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "WPCybXIJ04qA" }, "outputs": [], "source": [ "parameters = pybop.Parameters(\n", " pybop.Parameter(\n", " \"Negative electrode active material volume fraction\",\n", " prior=pybop.Gaussian(0.6, 0.02),\n", " bounds=[0.5, 0.8],\n", " ),\n", " pybop.Parameter(\n", " \"Positive electrode active material volume fraction\",\n", " prior=pybop.Gaussian(0.48, 0.02),\n", " bounds=[0.4, 0.7],\n", " ),\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Selecting the Optimisers\n", "\n", "Now, we can select the optimisers to investigate. 
The first object is a list of the gradient-based PINTS optimisers (AdamW, GradientDescent, IRPropMin). The second comprises the non-gradient-based PINTS optimisers, and the final object contains the SciPy optimisers, which include both gradient-based and gradient-free algorithms." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gradient_optimisers = [\n", "    pybop.AdamW,\n", "    pybop.GradientDescent,\n", "    pybop.IRPropMin,\n", "]\n", "\n", "non_gradient_optimisers = [\n", "    pybop.CMAES,\n", "    pybop.SNES,\n", "    pybop.PSO,\n", "    pybop.XNES,\n", "    pybop.NelderMead,\n", "    pybop.CuckooSearch,\n", "]\n", "\n", "scipy_optimisers = [\n", "    pybop.SciPyMinimize,\n", "    pybop.SciPyDifferentialEvolution,\n", "]" ] }, { "cell_type": "markdown", "metadata": { "id": "n4OHa-aF04qA" }, "source": [ "### Setting up the Optimisation Problem\n", "\n", "With the dataset, parameters, and optimisers defined, we can set up the optimisation problem and cost function. In this example, we loop through all of the above optimisers and store the results for later visualisation and analysis."
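, "\n", "The cost minimised by each optimiser is the sum of squared errors between the model prediction and the noisy data,\n", "\n", "$$\n", "J(\\theta) = \\sum_{k} \\left( V_k^{\\mathrm{model}}(\\theta) - V_k^{\\mathrm{data}} \\right)^2,\n", "$$\n", "\n", "where $\\theta$ denotes the two volume-fraction parameters defined above."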
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "etMzRtx404qA" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Running AdamW\n", "NOTE: Boundaries ignored by AdamW\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Halt: Maximum number of iterations (60) reached.\n", "OptimisationResult:\n", " Initial parameters: [0.61271354 0.47586173]\n", " Optimised parameters: [0.72077396 0.674774 ]\n", " Final cost: 0.003046118194519812\n", " Optimisation time: 3.55043888092041 seconds\n", " Number of iterations: 60\n", " SciPy result available: No\n", "Running GradientDescent\n", "NOTE: Boundaries ignored by \n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Halt: Maximum number of iterations (60) reached.\n", "OptimisationResult:\n", " Initial parameters: [0.61271354 0.47586173]\n", " Optimised parameters: [0.67939892 0.6815674 ]\n", " Final cost: 0.0038166067064497105\n", " Optimisation time: 3.2792601585388184 seconds\n", " Number of iterations: 60\n", " SciPy result available: No\n", "Running IRPropMin\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Halt: Maximum number of iterations (60) reached.\n", "OptimisationResult:\n", " Initial parameters: [0.61271354 0.47586173]\n", " Optimised parameters: [0.72167189 0.67300586]\n", " Final cost: 0.0029223029353733646\n", " Optimisation time: 3.3285961151123047 seconds\n", " Number of iterations: 60\n", " SciPy result available: No\n" ] } ], "source": [ "optims = []\n", "xs = []\n", "model.set_initial_state(initial_state)\n", "problem = pybop.FittingProblem(model, parameters, dataset)\n", "cost = pybop.SumSquaredError(problem)\n", "for optimiser in gradient_optimisers:\n", " print(f\"Running {optimiser.__name__}\")\n", " sigma0 = 0.01 if optimiser is pybop.GradientDescent else None\n", " optim = optimiser(\n", " cost, sigma0=sigma0, max_unchanged_iterations=20, max_iterations=60\n", " )\n", " results = optim.run()\n", " 
optims.append(optim)\n", " xs.append(results.x)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Running CMAES\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Halt: No significant change for 20 iterations.\n", "OptimisationResult:\n", " Initial parameters: [0.61271354 0.47586173]\n", " Optimised parameters: [0.72099642 0.673128 ]\n", " Final cost: 0.0029219755355584616\n", " Optimisation time: 2.9584238529205322 seconds\n", " Number of iterations: 42\n", " SciPy result available: No\n", "Running SNES\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Halt: No significant change for 20 iterations.\n", "OptimisationResult:\n", " Initial parameters: [0.61271354 0.47586173]\n", " Optimised parameters: [0.72125026 0.67307644]\n", " Final cost: 0.0029220074793387843\n", " Optimisation time: 2.7991578578948975 seconds\n", " Number of iterations: 52\n", " SciPy result available: No\n", "Running PSO\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Halt: No significant change for 20 iterations.\n", "OptimisationResult:\n", " Initial parameters: [0.61271354 0.47586173]\n", " Optimised parameters: [0.7335151 0.67158104]\n", " Final cost: 0.0030254785798547734\n", " Optimisation time: 2.0981178283691406 seconds\n", " Number of iterations: 43\n", " SciPy result available: No\n", "Running XNES\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Halt: Maximum number of iterations (60) reached.\n", "OptimisationResult:\n", " Initial parameters: [0.61271354 0.47586173]\n", " Optimised parameters: [0.67677389 0.68423077]\n", " Final cost: 0.004170543052960678\n", " Optimisation time: 3.208811044692993 seconds\n", " Number of iterations: 60\n", " SciPy result available: No\n", "Running NelderMead\n", "NOTE: Boundaries ignored by \n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Halt: Maximum number of iterations (60) 
reached.\n", "OptimisationResult:\n", " Initial parameters: [0.61271354 0.47586173]\n", " Optimised parameters: [0.72127038 0.67308243]\n", " Final cost: 0.002922015000486046\n", " Optimisation time: 0.572443962097168 seconds\n", " Number of iterations: 60\n", " SciPy result available: No\n", "Running CuckooSearch\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Halt: No significant change for 20 iterations.\n", "OptimisationResult:\n", " Initial parameters: [0.61271354 0.47586173]\n", " Optimised parameters: [0.71228325 0.67482605]\n", " Final cost: 0.0029570571571777425\n", " Optimisation time: 2.216571807861328 seconds\n", " Number of iterations: 40\n", " SciPy result available: No\n" ] } ], "source": [ "for optimiser in non_gradient_optimisers:\n", " print(f\"Running {optimiser.__name__}\")\n", " optim = optimiser(cost, max_unchanged_iterations=20, max_iterations=60)\n", " results = optim.run()\n", " optims.append(optim)\n", " xs.append(results.x)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Running SciPyMinimize\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "OptimisationResult:\n", " Initial parameters: [0.61271354 0.47586173]\n", " Optimised parameters: [0.62747952 0.7 ]\n", " Final cost: 0.010270031808476705\n", " Optimisation time: 0.4111368656158447 seconds\n", " Number of iterations: 23\n", " SciPy result available: Yes\n", "Running SciPyDifferentialEvolution\n", "Ignoring x0. 
Initial conditions are not used for differential_evolution.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "OptimisationResult:\n", " Initial parameters: [0.61271354 0.47586173]\n", " Optimised parameters: [0.72099808 0.67312761]\n", " Final cost: 0.0029219755546384587\n", " Optimisation time: 5.829385042190552 seconds\n", " Number of iterations: 20\n", " SciPy result available: Yes\n" ] } ], "source": [ "for optimiser in scipy_optimisers:\n", " print(f\"Running {optimiser.__name__}\")\n", " optim = optimiser(cost, max_iterations=60)\n", " results = optim.run()\n", " optims.append(optim)\n", " xs.append(results.x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we can compare the identified parameters across the optimisers. This gives us insight into how well each optimiser traversed the cost landscape. The ground-truth parameter values for the `Chen2020` parameter set are: \n", "\n", "- Negative active material volume fraction: `0.75`\n", "- Positive active material volume fraction: `0.665`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "| Optimiser: AdamW | Results: [0.72077396 0.674774 ] |\n", "| Optimiser: Gradient descent | Results: [0.67939892 0.6815674 ] |\n", "| Optimiser: iRprop- | Results: [0.72167189 0.67300586] |\n", "| Optimiser: Covariance Matrix Adaptation Evolution Strategy (CMA-ES) | Results: [0.72099642 0.673128 ] |\n", "| Optimiser: Seperable Natural Evolution Strategy (SNES) | Results: [0.72125026 0.67307644] |\n", "| Optimiser: Particle Swarm Optimisation (PSO) | Results: [0.7335151 0.67158104] |\n", "| Optimiser: Exponential Natural Evolution Strategy (xNES) | Results: [0.67677389 0.68423077] |\n", "| Optimiser: Nelder-Mead | Results: [0.72127038 0.67308243] |\n", "| Optimiser: Cuckoo Search | Results: [0.71228325 0.67482605] |\n", "| Optimiser: SciPyMinimize | Results: [0.62747952 0.7 ] |\n", "| Optimiser: 
SciPyDifferentialEvolution | Results: [0.72099808 0.67312761] |\n" ] } ], "source": [ "for optim in optims:\n", "    print(f\"| Optimiser: {optim.name()} | Results: {optim.result.x} |\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Many of the above optimisers found the correct value for the positive active material volume fraction. However, none of them found the correct value for the negative electrode. Next, we investigate whether this was a failure of the optimisers or a lack of parameter observability." ] }, { "cell_type": "markdown", "metadata": { "id": "KxKURtH704qC" }, "source": [ "## Plotting and Visualisation\n", "\n", "PyBOP provides various plotting utilities to visualise the results of the optimisation." ] }, { "cell_type": "markdown", "metadata": { "id": "-cWCOiqR04qC" }, "source": [ "### Comparing Solutions\n", "\n", "We can quickly plot the system's response using the estimated parameters for each optimiser and the target dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 467 }, "id": "tJUJ80Ve04qD", "outputId": "855fbaa2-1e09-4935-eb1a-8caf7f99eb75" }, "outputs": [ { "data": { "text/html": [ " \n", " " ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "for optim, x in zip(optims, xs):\n", " pybop.plot.quick(optim.cost.problem, problem_inputs=x, title=optim.name())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Convergence and Parameter Trajectories\n", "\n", "To assess the optimisation process, we can plot the convergence of the cost function and the trajectories of the parameters:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "N5XYkevi04qD" }, "outputs": [ { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "for optim in optims:\n", " pybop.plot.convergence(optim, title=optim.name())\n", " pybop.plot.parameters(optim)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Cost Landscape\n", "\n", "Finally, we can visualise the cost landscape and the path taken by the optimiser. This should give us additional insight into whether the negative electrode volume fraction is observable or not. For an observable parameter, the cost landscape needs to have a clear minimum with respect to the parameter in question. More clearly, the parameter value has to have an effect on the cost function." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Plot the cost landscape with optimisation path and updated bounds\n", "bounds = np.asarray([[0.5, 0.8], [0.55, 0.8]])\n", "for optim in optims:\n", " pybop.plot.surface(optim, bounds=bounds, title=optim.name())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Given the synthetic data and corresponding system excitation, the observability of the negative electrode active material fraction is quite low. As such, we would need to excite the system in a different way or observe a different signal to acquire a unique value." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Conclusion\n", "\n", "This notebook illustrates how to perform parameter estimation using PyBOP, across both gradient and non-gradient-based optimisers. " ] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 4 }