{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Modeling: Quantity\n", "==================\n", "\n", "Ordinary model-fits in **PyAutoLens** fit a lens model to a dataset (e.g. `Imaging`, `Interferometer`). The inferred\n", "lens model then tells us about the properties of the lens galaxy, for example its convergence, potential and\n", "deflection angles.\n", "\n", "This script instead fits a lens model directly to a quantity of lens galaxy, which could be its convergence,\n", "potential, deflection angles or another of its quantities.\n", "\n", "This fit allows us to fit a quantity of a certain mass profile (e.g. the convergence of an `NFW` mass profile) to\n", "the same quantity of a different mass profile (e.g. the convergence of a `PowerLaw`). This provides parameters\n", "describing how to translate between two mass profiles as closely as possible, and to understand how similar or\n", "different the mass profiles are.\n", "\n", "This script fits a `DatasetQuantity` dataset of a 'galaxy-scale' strong lens with a model. The `DatasetQuantity` is the\n", "convergence map of a `NFW` mass model which is fitted by a lens model where:\n", "\n", " - The lens galaxy's light is omitted (and is not present in the simulated data).\n", " - The lens galaxy's total mass distribution is an `PowerLaw`.\n", " - There is no source galaxy." ] }, { "cell_type": "code", "metadata": {}, "source": [ "%matplotlib inline\n", "from pyprojroot import here\n", "workspace_path = str(here())\n", "%cd $workspace_path\n", "print(f\"Working Directory has been set to `{workspace_path}`\")\n", "\n", "from os import path\n", "import autofit as af\n", "import autolens as al\n", "import autolens.plot as aplt" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Grid__\n", "\n", "Define the 2D grid the quantity (in this example, the convergence) is evaluated using." ] }, { "cell_type": "code", "metadata": {}, "source": [ "grid = al.Grid2D.uniform(shape_native=(100, 100), pixel_scales=0.1)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Tracer__\n", "\n", "Create a tracer which we will use to create our `DatasetQuantity`. " ] }, { "cell_type": "code", "metadata": {}, "source": [ "lens_galaxy = al.Galaxy(\n", " redshift=0.5,\n", " mass=al.mp.NFW(\n", " centre=(0.0, 0.0),\n", " ell_comps=al.convert.ell_comps_from(axis_ratio=0.9, angle=45.0),\n", " kappa_s=0.2,\n", " scale_radius=20.0,\n", " ),\n", ")\n", "\n", "tracer = al.Tracer.from_galaxies(galaxies=[lens_galaxy])" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Dataset__\n", "\n", "Use this `Tracer`'s 2D convergence to create the `DatasetQuantity`.\n", "\n", "We assume a noise-map where all values are arbritrarily 0.01." ] }, { "cell_type": "code", "metadata": {}, "source": [ "convergence_2d = tracer.convergence_2d_from(grid=grid)\n", "\n", "dataset = al.DatasetQuantity(\n", " data=convergence_2d,\n", " noise_map=al.Array2D.full(\n", " fill_value=0.01,\n", " shape_native=convergence_2d.shape_native,\n", " pixel_scales=convergence_2d.pixel_scales,\n", " ),\n", ")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Mask__\n", "\n", "The model-fit requires a `Mask2D` defining the regions of the convergence we fit, which we define and apply to the \n", "`DatasetQuantity` object." 
] }, { "cell_type": "code", "metadata": {}, "source": [ "mask = al.Mask2D.circular(\n", "    shape_native=dataset.shape_native, pixel_scales=dataset.pixel_scales, radius=3.0\n", ")\n", "\n", "dataset = dataset.apply_mask(mask=mask)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Model__\n", "\n", "We compose a lens model where:\n", "\n", " - The lens galaxy's total mass distribution is a `PowerLaw` [6 parameters].\n", "\n", "The number of free parameters and therefore the dimensionality of non-linear parameter space is N=6." ] }, { "cell_type": "code", "metadata": {}, "source": [ "lens = af.Model(al.Galaxy, redshift=0.5, mass=al.mp.PowerLaw)\n", "\n", "model = af.Collection(galaxies=af.Collection(lens=lens))" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Search__\n", "\n", "The lens model is fitted to the data using the nested sampling algorithm Nautilus (see `start_here.py` for a\n", "full description).\n", "\n", "The folders:\n", "\n", " - `autolens_workspace/*/imaging/modeling/searches`.\n", " - `autolens_workspace/*/imaging/modeling/customize`.\n", "\n", "Give overviews of the non-linear searches **PyAutoLens** supports and more details on how to customize the\n", "model-fit, including the priors on the model.\n", "\n", "The `name` and `path_prefix` below specify the path where results are stored in the output folder:\n", "\n", " `/autolens_workspace/output/misc/modeling/quantity_via_convergence_fit/unique_identifier`.\n", "\n", "__Unique Identifier__\n", "\n", "In the path above, the `unique_identifier` appears as a collection of characters, where this identifier is generated\n", "based on the model, search and dataset that are used in the fit.\n", "\n", "An identical combination of model and search generates the same identifier, meaning that rerunning the script will use\n", "the existing results to resume the model-fit. In contrast, if you change the model or search, a new unique identifier\n", "will be generated, ensuring that the model-fit results are output into a separate folder.\n", "\n", "If we were fitting different datasets with the same model and search, we would also want the unique identifier to be\n", "specific to the dataset, so that each dataset's results are output to a separate folder. This is achieved by passing\n", "a `unique_tag` (e.g. the dataset name) to the search; this example fits a single simulated quantity, so none is passed below.\n", "\n", "__Number Of Cores__\n", "\n", "We include an input `number_of_cores`, which when above 1 means that Nautilus uses parallel processing to sample multiple\n", "lens models at once on your CPU. When `number_of_cores=2` the search will run roughly two times as\n", "fast, for `number_of_cores=3` three times as fast, and so on. The downside is that more of your CPU's cores will be\n", "in use, which may hurt the general performance of your computer.\n", "\n", "You should experiment to figure out the highest value which does not give a noticeable loss in performance of your\n", "computer. If you know that your processor is a quad-core processor you should be able to use `number_of_cores=4`.\n", "\n", "Above `number_of_cores=4` the speed-up from parallelization diminishes greatly. We therefore recommend you do not\n", "use a value above this.\n", "\n", "For users on a Windows operating system, using `number_of_cores>1` may lead to an error, in which case it should be\n", "reduced back to 1 to fix it."
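, "\n", "As an aside on the `unique_tag` mentioned above, the cell below is a minimal sketch of how the search could be\n", "tagged with a dataset name so that fits to different datasets are output to separate folders. The `dataset_name`\n", "label is hypothetical for this simulated example and the cell plays no part in the fit performed below." ] }, { "cell_type": "code", "metadata": {}, "source": [ "# Hypothetical label; in scripts that load a real dataset this would be its name.\n", "dataset_name = \"nfw_convergence\"\n", "\n", "# Sketch only: tagging the search with the dataset name keeps each dataset's results in its own folder.\n", "search_with_tag = af.Nautilus(\n", "    path_prefix=path.join(\"misc\", \"modeling\"),\n", "    name=\"quantity_via_convergence_fit\",\n", "    unique_tag=dataset_name,\n", "    n_live=100,\n", "    number_of_cores=1,\n", ")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now create the search that is actually used for this example's model-fit."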
] }, { "cell_type": "code", "metadata": {}, "source": [ "search = af.Nautilus(\n", "    path_prefix=path.join(\"misc\", \"modeling\"),\n", "    name=\"quantity_via_convergence_fit\",\n", "    n_live=100,\n", "    number_of_cores=1,\n", ")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Analysis__\n", "\n", "The `AnalysisQuantity` object defines the `log_likelihood_function` used by the non-linear search to fit the model to\n", "the `DatasetQuantity` dataset.\n", "\n", "This includes a `func_str` input which defines what quantity is fitted. It corresponds to the function of the model\n", "`Tracer` object that is called to create the model quantity. For example, if `func_str=\"convergence_2d_from\"`, the\n", "convergence is computed from each model `Tracer`." ] }, { "cell_type": "code", "metadata": {}, "source": [ "analysis = al.AnalysisQuantity(dataset=dataset, func_str=\"convergence_2d_from\")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Model-Fit__\n", "\n", "We begin the model-fit by passing the model and analysis object to the non-linear search (check out the output folder\n", "for on-the-fly visualization and results)." ] }, { "cell_type": "code", "metadata": {}, "source": [ "result = search.fit(model=model, analysis=analysis)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Result__\n", "\n", "The search returns a result object, whose `info` attribute shows the result in a readable format:" ] }, { "cell_type": "code", "metadata": {}, "source": [ "print(result.info)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "We plot the maximum likelihood fit, tracer images and posteriors inferred via Nautilus.\n", "\n", "Check out `autolens_workspace/*/imaging/results` for a full description of analysing results in **PyAutoLens**." ] }, { "cell_type": "code", "metadata": {}, "source": [ "print(result.max_log_likelihood_instance)\n", "\n", "tracer_plotter = aplt.TracerPlotter(\n", "    tracer=result.max_log_likelihood_tracer, grid=result.grid\n", ")\n", "tracer_plotter.subplot_tracer()\n", "\n", "fit_quantity_plotter = aplt.FitQuantityPlotter(fit=result.max_log_likelihood_fit)\n", "fit_quantity_plotter.subplot_fit()\n", "\n", "search_plotter = aplt.NautilusPlotter(samples=result.samples)\n", "search_plotter.cornerplot()" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check out `autolens_workspace/*/imaging/modeling/results.py` for a full description of the result object." ] }, { "cell_type": "code", "metadata": {}, "source": [], "outputs": [], "execution_count": null } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 4 }