{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Plots: MCMCPlotter\n", "=====================\n", "\n", "This example illustrates how to plot visualization summarizing the results of a emcee non-linear search using\n", "a `ZeusPlotter`.\n", "\n", "__Start Here Notebook__\n", "\n", "If any code in this script is unclear, refer to the `plot/start_here.ipynb` notebook." ] }, { "cell_type": "code", "metadata": {}, "source": [ "%matplotlib inline\n", "from pyprojroot import here\n", "workspace_path = str(here())\n", "%cd $workspace_path\n", "print(f\"Working Directory has been set to `{workspace_path}`\")\n", "\n", "import matplotlib.pyplot as plt\n", "from os import path\n", "\n", "import autofit as af\n", "import autolens as al\n", "import autolens.plot as aplt" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, lets create a result via emcee by repeating the simple model-fit that is performed in \n", "the `modeling/start_here.py` example.\n", "\n", "We use a model with an initialized starting point, which is necessary for lens modeling with emcee." ] }, { "cell_type": "code", "metadata": {}, "source": [ "dataset_name = \"simple__no_lens_light\"\n", "\n", "search = af.Emcee(\n", " path_prefix=path.join(\"plot\"),\n", " name=\"MCMCPlotter\",\n", " unique_tag=dataset_name,\n", " nwalkers=30,\n", " nsteps=500,\n", ")\n", "\n", "dataset_path = path.join(\"dataset\", \"imaging\", dataset_name)\n", "\n", "dataset = al.Imaging.from_fits(\n", " data_path=path.join(dataset_path, \"data.fits\"),\n", " psf_path=path.join(dataset_path, \"psf.fits\"),\n", " noise_map_path=path.join(dataset_path, \"noise_map.fits\"),\n", " pixel_scales=0.1,\n", ")\n", "\n", "mask = al.Mask2D.circular(\n", " shape_native=dataset.shape_native, pixel_scales=dataset.pixel_scales, radius=3.0\n", ")\n", "\n", "dataset = dataset.apply_mask(mask=mask)\n", "\n", "mass = af.Model(al.mp.Isothermal)\n", "mass.centre.centre_0 = af.UniformPrior(lower_limit=-0.1, upper_limit=0.1)\n", "mass.centre.centre_1 = af.UniformPrior(lower_limit=-0.1, upper_limit=0.1)\n", "mass.ell_comps.ell_comps_0 = af.UniformPrior(lower_limit=-0.3, upper_limit=0.3)\n", "mass.ell_comps.ell_comps_1 = af.UniformPrior(lower_limit=-0.3, upper_limit=0.3)\n", "mass.einstein_radius = af.UniformPrior(lower_limit=1.0, upper_limit=2.0)\n", "\n", "shear = af.Model(al.mp.ExternalShear)\n", "shear.gamma_1 = af.UniformPrior(lower_limit=-0.1, upper_limit=0.1)\n", "shear.gamma_2 = af.UniformPrior(lower_limit=-0.1, upper_limit=0.1)\n", "\n", "bulge = af.Model(al.lp.Sersic)\n", "bulge.centre.centre_0 = af.UniformPrior(lower_limit=-0.1, upper_limit=0.1)\n", "bulge.centre.centre_1 = af.UniformPrior(lower_limit=-0.1, upper_limit=0.1)\n", "bulge.ell_comps.ell_comps_0 = af.UniformPrior(lower_limit=-0.3, upper_limit=0.3)\n", "bulge.ell_comps.ell_comps_1 = af.UniformPrior(lower_limit=-0.3, upper_limit=0.3)\n", "bulge.intensity = af.UniformPrior(lower_limit=0.1, upper_limit=0.5)\n", "bulge.effective_radius = af.UniformPrior(lower_limit=0.0, upper_limit=0.4)\n", "bulge.sersic_index = af.UniformPrior(lower_limit=0.5, upper_limit=2.0)\n", "\n", "lens = af.Model(al.Galaxy, redshift=0.5, mass=mass, shear=shear)\n", "source = af.Model(al.Galaxy, redshift=1.0, bulge=bulge)\n", "\n", "model = af.Collection(galaxies=af.Collection(lens=lens, source=source))\n", "\n", "analysis = al.AnalysisImaging(dataset=dataset)\n", "\n", "result = search.fit(model=model, analysis=analysis)" ], "outputs": [], "execution_count": null }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Notation__\n", "\n", "Plot are labeled with short hand parameter names (e.g. the `centre` parameters are plotted using an `x`). \n", "\n", "The mappings of every parameter to its shorthand symbol for plots is specified in the `config/notation.yaml` file \n", "and can be customized.\n", "\n", "Each label also has a superscript corresponding to the model component the parameter originates from. For example,\n", "Gaussians are given the superscript `g`. This can also be customized in the `config/notation.yaml` file.\n", "\n", "__Plotting__\n", "\n", "We now pass the samples to a `MCMCPlotter` which will allow us to use emcee's in-built plotting libraries to \n", "make figures.\n", "\n", "The emcee readthedocs describes fully all of the methods used below \n", "\n", " - https://emcee.readthedocs.io/en/stable/user/sampler/\n", " \n", " The plotter wraps the `corner` method of the library `corner.py` to make corner plots of the PDF:\n", "\n", "- https://corner.readthedocs.io/en/latest/index.html\n", " \n", "In all the examples below, we use the `kwargs` of this function to pass in any of the input parameters that are \n", "described in the API docs." ] }, { "cell_type": "code", "metadata": {}, "source": [ "plotter = aplt.MCMCPlotter(samples=result.samples)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `corner` method produces a triangle of 1D and 2D PDF's of every parameter in the model fit." ] }, { "cell_type": "code", "metadata": {}, "source": [ "plotter.corner_cornerpy(\n", " bins=20,\n", " range=None,\n", " color=\"k\",\n", " hist_bin_factor=1,\n", " smooth=None,\n", " smooth1d=None,\n", " label_kwargs=None,\n", " titles=None,\n", " show_titles=False,\n", " title_fmt=\".2f\",\n", " title_kwargs=None,\n", " truths=None,\n", " truth_color=\"#4682b4\",\n", " scale_hist=False,\n", " quantiles=None,\n", " verbose=False,\n", " fig=None,\n", " max_n_ticks=5,\n", " top_ticks=False,\n", " use_math_text=False,\n", " reverse=False,\n", " labelpad=0.0,\n", " hist_kwargs=None,\n", " group=\"posterior\",\n", " var_names=None,\n", " filter_vars=None,\n", " coords=None,\n", " divergences=False,\n", " divergences_kwargs=None,\n", " labeller=None,\n", ")\n" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Search Specific Visualization__\n", "\n", "The internal sampler can be used to plot the results of the non-linear search. \n", "\n", "We do this using the `search_internal` attribute which contains the sampler in its native form.\n", "\n", "The first time you run a search, the `search_internal` attribute will be available because it is passed ot the\n", "result via memory. \n", "\n", "If you rerun the fit on a completed result, it will not be available in memory, and therefore\n", "will be loaded from the `files/search_internal` folder. The `search_internal` entry of the `output.yaml` must be true \n", "for this to be possible." ] }, { "cell_type": "code", "metadata": {}, "source": [ "search_internal = result.search_internal" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "The method below shows a 2D projection of the walker trajectories." 
] }, { "cell_type": "code", "metadata": {}, "source": [ "fig, axes = plt.subplots(result.model.prior_count, figsize=(10, 7))\n", "\n", "for i in range(result.model.prior_count):\n", " for walker_index in range(search_internal.get_log_prob().shape[1]):\n", " ax = axes[i]\n", " ax.plot(\n", " search_internal.get_chain()[:, walker_index, i],\n", " search_internal.get_log_prob()[:, walker_index],\n", " alpha=0.3,\n", " )\n", "\n", " ax.set_ylabel(\"Log Likelihood\")\n", " ax.set_xlabel(result.model.parameter_labels_with_superscripts_latex[i])\n", "\n", "plt.show()" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "This method shows the likelihood as a series of steps." ] }, { "cell_type": "code", "metadata": {}, "source": [ "\n", "fig, axes = plt.subplots(1, figsize=(10, 7))\n", "\n", "for walker_index in range(search_internal.get_log_prob().shape[1]):\n", " axes.plot(search_internal.get_log_prob()[:, walker_index], alpha=0.3)\n", "\n", "axes.set_ylabel(\"Log Likelihood\")\n", "axes.set_xlabel(\"step number\")\n", "\n", "plt.show()" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "This method shows the parameter values of every walker at every step." ] }, { "cell_type": "code", "metadata": {}, "source": [ "fig, axes = plt.subplots(result.samples.model.prior_count, figsize=(10, 7), sharex=True)\n", "\n", "for i in range(result.samples.model.prior_count):\n", " ax = axes[i]\n", " ax.plot(search_internal.get_chain()[:, :, i], alpha=0.3)\n", " ax.set_ylabel(result.model.parameter_labels_with_superscripts_latex[i])\n", "\n", "axes[-1].set_xlabel(\"step number\")\n", "\n", "plt.show()\n" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finish." ] }, { "cell_type": "code", "metadata": {}, "source": [], "outputs": [], "execution_count": null } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 4 }