{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Tutorial 1: Search Chaining\n", "===========================\n", "\n", "In chapter 2, we learnt how to perform lens modeling using a non-linear search. In all of the tutorials, we fitted the\n", "data using just one non-linear search. In this chapter, we introduce a technique called 'non-linear search chaining',\n", "fits a lens model using a sequence of non-linear searches. The initial searches fit simpler lens models whose parameter\n", "spaces can be more accurately and efficiently sampled. The results of this search are then passed to later searches\n", "which fit lens models of gradually increasing complexity.\n", "\n", "Lets think back to tutorial 4 of chapter 2. We learnt there were three approaches one could take fitting a lens model\n", "accurately if we found that a model fit failed. These were:\n", "\n", " 1) Tuning our priors to the strong lens we're fitting.\n", " 2) Making our lens model less complex.\n", " 3) Searching non-linear parameter space for longer.\n", "\n", "However, each of the above approaches has disadvantages. The more we tune our priors, the less we can generalize our\n", "analysis to a different strong lens. The less complex we make our model, the less realistic it is. And if we rely too\n", "much on searching parameter space for longer, we could end up with search`s that take days, weeks or months to run.\n", "\n", "In this tutorial, we are going to show how search chaining combines these 3 approaches such that we can fit\n", "complex and realistic lens models in a way that that can be generalized to many different strong lenses. To do this,\n", "we'll run 2 searches, and chain the lens model inferred in the first search to the priors of the second search`s lens\n", "model.\n", "\n", "Our first search will make the same light-traces-mass assumption we made in the previous tutorial. We saw that this\n", "gives a reasonable lens model. However, we'll make a couple of extra simplifying assumptions, to really try and bring\n", "our lens model complexity down and get the non-linear search running fast.\n", "\n", "The model we infer above will therefore be a lot less realistic. But it does not matter, because in the second search\n", "we are going to relax these assumptions and fit the more realistic lens model. The beauty is that, by running the first\n", "search, we can use its results to tune the priors of our second search. For example:\n", "\n", " 1) The first search should give us a pretty good idea of the lens galaxy's light and mass profiles, for example its\n", " intensity, effective radius and einstein radius.\n", "\n", " 2) It should also give us a pretty good fit to the lensed source galaxy. This means we'll already know where in\n", " source-plane its is located and what its intensity and effective are." 
] }, { "cell_type": "code", "metadata": {}, "source": [ "%matplotlib inline\n", "from pyprojroot import here\n", "workspace_path = str(here())\n", "%cd $workspace_path\n", "print(f\"Working Directory has been set to `{workspace_path}`\")\n", "\n", "import numpy as np\n", "from os import path\n", "import autolens as al\n", "import autolens.plot as aplt\n", "import autofit as af" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Initial Setup__\n", "\n", "we'll use the same strong lensing data as the previous tutorial, where:\n", "\n", " - The lens galaxy's light is an `Sersic`.\n", " - The lens galaxy's total mass distribution is an `Isothermal` and `ExternalShear`.\n", " - The source galaxy's light is an `Exponential`." ] }, { "cell_type": "code", "metadata": {}, "source": [ "dataset_name = \"lens_sersic\"\n", "dataset_path = path.join(\"dataset\", \"imaging\", dataset_name)\n", "\n", "dataset = al.Imaging.from_fits(\n", " data_path=path.join(dataset_path, \"data.fits\"),\n", " noise_map_path=path.join(dataset_path, \"noise_map.fits\"),\n", " psf_path=path.join(dataset_path, \"psf.fits\"),\n", " pixel_scales=0.1,\n", ")\n", "\n", "mask = al.Mask2D.circular(\n", " shape_native=dataset.shape_native, pixel_scales=dataset.pixel_scales, radius=2.6\n", ")\n", "\n", "dataset = dataset.apply_mask(mask=mask)\n", "\n", "dataset_plotter = aplt.ImagingPlotter(\n", " dataset=dataset, visuals_2d=aplt.Visuals2D(mask=mask)\n", ")\n", "dataset_plotter.subplot_dataset()" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Model__\n", "\n", "As we've eluded to before, one can look at an image and immediately identify the centre of the lens galaxy. It's \n", "that bright blob of light in the middle! Given that we know we're going to make the lens model more complex in the \n", "next search, lets take a more liberal approach than before and fix the lens centre to $(y,x)$ = (0.0\", 0.0\").\n", "\n", "Now, you might be thinking, doesn`t this prevent our search from generalizing to other strong lenses? What if the \n", "centre of their lens galaxy isn't at (0.0\", 0.0\")?\n", "\n", "Well, this is true if our dataset reduction centres the lens galaxy somewhere else. But we get to choose where we \n", "centre it when we make the image. Therefore, I`d recommend you always centre the lens galaxy at the same location, \n", "and (0.0\", 0.0\") seems the best choice!" ] }, { "cell_type": "code", "metadata": {}, "source": [ "bulge = af.Model(al.lp_linear.Sersic)\n", "mass = af.Model(al.mp.Isothermal)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "You haven't actually seen a line like this one before. By setting a parameter to a number (and not a prior) it is be \n", "removed from non-linear parameter space and always fixed to that value. Pretty neat, huh?" ] }, { "cell_type": "code", "metadata": {}, "source": [ "bulge.centre_0 = 0.0\n", "bulge.centre_1 = 0.0\n", "mass.centre_0 = 0.0\n", "mass.centre_1 = 0.0" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets use the same approach of making the ellipticity of the mass trace that of the bulge." ] }, { "cell_type": "code", "metadata": {}, "source": [ "mass.ell_comps = bulge.ell_comps" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also discussed that the Sersic index of most lens galaxies is around 4. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "We also discussed that the Sersic index of most lens galaxies is around 4. Let's fix it to 4 this time." ] }, { "cell_type": "code", "metadata": {}, "source": [ "bulge.sersic_index = 4.0" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now compose the model with these components that have had their priors customized.\n", "\n", "We have not customized the source model; we use an `Exponential`, which will become the more complex\n", "`Sersic` in the second search." ] }, { "cell_type": "code", "metadata": {}, "source": [ "lens = af.Model(\n", " al.Galaxy, redshift=0.5, bulge=bulge, mass=mass, shear=al.mp.ExternalShear\n", ")\n", "\n", "source = af.Model(al.Galaxy, redshift=1.0, bulge=al.lp_linear.Exponential)\n", "\n", "model_1 = af.Collection(galaxies=af.Collection(lens=lens, source=source))" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `info` attribute shows the model in a readable format." ] }, { "cell_type": "code", "metadata": {}, "source": [ "print(model_1.info)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Search + Analysis__\n", "\n", "Now let's create the search and analysis." ] }, { "cell_type": "code", "metadata": {}, "source": [ "search_1 = af.Nautilus(\n", " path_prefix=path.join(\"howtolens\", \"chapter_3\"),\n", " name=\"tutorial_1_search_chaining_1\",\n", " unique_tag=dataset_name,\n", " n_live=100,\n", " number_of_cores=1,\n", ")\n", "\n", "analysis_1 = al.AnalysisImaging(dataset=dataset)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Run Time__\n", "\n", "It is good practice to always check the `log_likelihood_function` run time before starting the non-linear search. \n", "It will be similar to the value we saw in the previous chapter." ] }, { "cell_type": "code", "metadata": {}, "source": [ "run_time_dict, info_dict = analysis_1.profile_log_likelihood_function(\n", " instance=model_1.random_instance()\n", ")\n", "\n", "print(f\"Log Likelihood Evaluation Time (seconds) = {run_time_dict['fit_time']}\")\n", "print(\n", " \"Estimated Run Time Upper Limit (seconds) = \",\n", " (run_time_dict[\"fit_time\"] * model_1.total_free_parameters * 10000)\n", " / search_1.number_of_cores,\n", ")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's run the search, noting that our liberal approach to reducing the lens model complexity has reduced it to just \n", "11 free parameters." ] }, { "cell_type": "code", "metadata": {}, "source": [ "print(\n", " \"The non-linear search has begun running - check out the workspace/output/howtolens/chapter_3\"\n", " \" folder for live output of the results, images and lens model.\"\n", " \" This Jupyter notebook cell will progress once the search has completed - this could take some time!\"\n", ")\n", "\n", "result_1 = search_1.fit(model=model_1, analysis=analysis_1)\n", "\n", "print(\"Search has finished running - you may now continue the notebook.\")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Result__\n", "\n", "The results are summarized in the `info` attribute." ] }, { "cell_type": "code", "metadata": {}, "source": [ "print(result_1.info)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "And indeed, we get a reasonably good model and fit to the data, in a much shorter space of time!"
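] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The maximum log likelihood parameter values of this result are what we will use below to centre the priors of\n", "search 2. As a quick sketch (assuming the model composition above), we can inspect them via the result's\n", "`max_log_likelihood_instance`:" ] }, { "cell_type": "code", "metadata": {}, "source": [ "# Inspect the maximum log likelihood model of search 1; values like these inform\n", "# the GaussianPrior means chosen for search 2 below.\n", "instance = result_1.max_log_likelihood_instance\n", "\n", "print(instance.galaxies.lens.bulge.effective_radius)\n", "print(instance.galaxies.lens.mass.einstein_radius)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below, we plot the maximum log likelihood fit to confirm the quality of the fit visually."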
] }, { "cell_type": "code", "metadata": {}, "source": [ "fit_plotter = aplt.FitImagingPlotter(fit=result_1.max_log_likelihood_fit)\n", "fit_plotter.subplot_fit()" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Prior Passing__\n", "\n", "Now all we need to do is look at the results of search 1 and pass the results as priors for search 2. Lets setup \n", "a custom search that does exactly that.\n", "\n", "`GaussianPrior`'s are a nice way to pass priors. They tell the non-linear search where to look, but leave open the \n", "possibility that there might be a better solution nearby. In contrast, `UniformPrior`'s put hard limits on what values a \n", "parameter can or can`t take. It makes it more likely we will accidentally cut-out the global maxima solution." ] }, { "cell_type": "code", "metadata": {}, "source": [ "bulge = af.Model(al.lp_linear.Sersic)\n", "mass = af.Model(al.mp.Isothermal)\n", "shear = af.Model(al.mp.ExternalShear)\n", "source_bulge = af.Model(al.lp_linear.Sersic)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "What I've done below is looked at the results of search 1 and manually specified a prior for every parameter. If a \n", "parameter was fixed in the previous search, its prior is based around the previous value. Don't worry about the sigma \n", "values for now, I've chosen values that I know will ensure reasonable sampling, but we'll cover this later.\n", "\n", "__LENS LIGHT PRIORS:__" ] }, { "cell_type": "code", "metadata": {}, "source": [ "bulge.centre.centre_0 = af.GaussianPrior(\n", " mean=0.0, sigma=0.1, lower_limit=-np.inf, upper_limit=np.inf\n", ")\n", "bulge.centre.centre_1 = af.GaussianPrior(\n", " mean=0.0, sigma=0.1, lower_limit=-np.inf, upper_limit=np.inf\n", ")\n", "bulge.ell_comps.ell_comps_0 = af.GaussianPrior(\n", " mean=0.05, sigma=0.15, lower_limit=-1.0, upper_limit=1.0\n", ")\n", "bulge.ell_comps.ell_comps_1 = af.GaussianPrior(\n", " mean=0.0, sigma=0.2, lower_limit=-1.0, upper_limit=1.0\n", ")\n", "bulge.effective_radius = af.GaussianPrior(\n", " mean=0.72, sigma=0.2, lower_limit=0.0, upper_limit=np.inf\n", ")\n", "bulge.sersic_index = af.GaussianPrior(\n", " mean=4.0, sigma=2.0, lower_limit=0.0, upper_limit=np.inf\n", ")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__LENS MASS PRIORS:__" ] }, { "cell_type": "code", "metadata": {}, "source": [ "mass.centre.centre_0 = af.GaussianPrior(\n", " mean=0.0, sigma=0.1, lower_limit=-np.inf, upper_limit=np.inf\n", ")\n", "mass.centre.centre_1 = af.GaussianPrior(\n", " mean=0.0, sigma=0.1, lower_limit=-np.inf, upper_limit=np.inf\n", ")\n", "mass.ell_comps.ell_comps_0 = af.GaussianPrior(\n", " mean=0.05, sigma=0.15, lower_limit=-1.0, upper_limit=1.0\n", ")\n", "mass.ell_comps.ell_comps_1 = af.GaussianPrior(\n", " mean=0.0, sigma=0.2, lower_limit=-1.0, upper_limit=1.0\n", ")\n", "mass.einstein_radius = af.GaussianPrior(\n", " mean=1.6, sigma=0.1, lower_limit=0.0, upper_limit=np.inf\n", ")\n", "shear.gamma_1 = af.GaussianPrior(mean=0.05, sigma=0.05)\n", "shear.gamma_2 = af.GaussianPrior(mean=0.05, sigma=0.05)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__SOURCE LIGHT PRIORS:__" ] }, { "cell_type": "code", "metadata": {}, "source": [ "source_bulge.centre.centre_0 = af.GaussianPrior(\n", " mean=0.0, sigma=0.1, lower_limit=-np.inf, upper_limit=np.inf\n", ")\n", "source_bulge.centre.centre_1 = 
af.GaussianPrior(\n", " mean=0.0, sigma=0.1, lower_limit=-np.inf, upper_limit=np.inf\n", ")\n", "source_bulge.ell_comps.ell_comps_0 = af.GaussianPrior(\n", " mean=0.08, sigma=0.15, lower_limit=-1.0, upper_limit=1.0\n", ")\n", "source_bulge.ell_comps.ell_comps_1 = af.GaussianPrior(\n", " mean=-0.06, sigma=0.2, lower_limit=-1.0, upper_limit=1.0\n", ")\n", "source_bulge.effective_radius = af.GaussianPrior(\n", " mean=0.1, sigma=0.2, lower_limit=0.0, upper_limit=np.inf\n", ")\n", "source_bulge.sersic_index = af.GaussianPrior(\n", " mean=1.0, sigma=1.0, lower_limit=0.0, upper_limit=np.inf\n", ")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now compose the model with these components that have had their priors customized. " ] }, { "cell_type": "code", "metadata": {}, "source": [ "lens = af.Model(al.Galaxy, redshift=0.5, bulge=bulge, mass=mass, shear=shear)\n", "\n", "source = af.Model(al.Galaxy, redshift=1.0, bulge=source_bulge)\n", "\n", "model_2 = af.Collection(galaxies=af.Collection(lens=lens, source=source))" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `info` attribute shows the model, including the priors specified above." ] }, { "cell_type": "code", "metadata": {}, "source": [ "print(model_2.info)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets setup and run the search. As expected, it gives us the correct lens model. However, it does so significantly \n", "faster than we are used to!" ] }, { "cell_type": "code", "metadata": {}, "source": [ "search_2 = af.Nautilus(\n", " path_prefix=path.join(\"howtolens\", \"chapter_3\"),\n", " name=\"tutorial_1_search_chaining_2\",\n", " unique_tag=dataset_name,\n", " n_live=150,\n", " number_of_cores=1,\n", ")\n", "\n", "analysis_2 = al.AnalysisImaging(dataset=dataset)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Run Time__\n", "\n", "Whilst the run-time of the log likelihood function is pretty much unchanged from the first search, the overall run-time\n", "of the search should decrease.\n", "\n", "This is because via prior passing we have informed the search of where to look in parameter space, meaning it \n", "should spend far fewer than ~10000 iterations per free parameter." ] }, { "cell_type": "code", "metadata": {}, "source": [ "run_time_dict, info_dict = analysis_2.profile_log_likelihood_function(\n", " instance=model_2.random_instance()\n", ")\n", "\n", "print(f\"Log Likelihood Evaluation Time (second) = {run_time_dict['fit_time']}\")\n", "print(\n", " \"Estimated Run Time Upper Limit (seconds) = \",\n", " (run_time_dict[\"fit_time\"] * model_2.total_free_parameters * 10000)\n", " / search_2.number_of_cores,\n", ")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run the search." 
] }, { "cell_type": "code", "metadata": {}, "source": [ "print(\n", " \"The non-linear search has begun running - checkout the workspace/output/5_chaining_searches\"\n", " \" folder for live output of the results, images and lens model.\"\n", " \" This Jupyter notebook cell with progress once search has completed - this could take some time!\"\n", ")\n", "\n", "result_2 = search_2.fit(model=model_2, analysis=analysis_2)\n", "\n", "print(\"Search has finished run - you may now continue the notebook.\")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Result__\n", "\n", "We can again inspect the results via the `info` attribute." ] }, { "cell_type": "code", "metadata": {}, "source": [ "print(result_2.info)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "And a plot of the image shows we get a good model again!" ] }, { "cell_type": "code", "metadata": {}, "source": [ "fit_plotter = aplt.FitImagingPlotter(fit=result_2.max_log_likelihood_fit)\n", "fit_plotter.subplot_fit()" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Wrap Up__\n", "\n", "Chaining two searches together was a huge success. We managed to fit a complex and realistic model, but were able to \n", "begin by making simplifying assumptions that eased our search of non-linear parameter space. We could apply search 1 to \n", "pretty much any strong lens and therefore get ourselves a decent lens model with which to tune search 2`s priors.\n", "\n", "You are probably thinking though that there is one huge, giant, glaring flaw in all of this that I've not mentioned. \n", "Search 2 can`t be generalized to another lens, because its priors are tuned to the image we fitted. If we had a lot \n", "of lenses, we`d have to write a new search for every single one. This isn't ideal, is it?\n", "\n", "Fortunately, we can pass priors in **PyAutoLens** without specifying the specific values. The API for this technique,\n", "called prior passing, is the topic of the next tutorial." ] }, { "cell_type": "code", "metadata": {}, "source": [], "outputs": [], "execution_count": null } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 4 }