{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# How to gauge-optimize to a model other than the ideal targets \n", "Typically gauge optimizations are performed with respect to the set of ideal target gates and spam operations. This is convenient, since you need to specify the ideal targets as points of comparison, but not always the best approach. Particularly when you expect all or some of the gate estimates to either substantially differ from the ideal operations or differ, even by small amounts, in particular ways from the ideal operations, it can be hugely aid later interpretation to specify a non-ideal `Model` as the target for gauge-optimization. By separating the \"ideal targets\" from the \"gauge optimization targets\", you're able to tell the gauge optimizer what gates you *think* you have, including any known errors. This can result in a gauge-optimized estimate which is much more sensible and straightforward to interpet.\n", "\n", "For example, gauge transformations can slosh error between the SPAM operations and the non-unital parts of gates. If you know your gates are slightly non-unital you can include this information in the gauge-optimization-target (by specifying a `Model` which is slightly non-unital) and obtain a resulting estimate of low SPAM-error and slightly non-unital gates. If you just used the ideal (unital) target gates, the gauge-optimizer, which is often setup to care more about matching gate than SPAM ops, could have sloshed all the error into the SPAM ops, resulting in a confusing estimate that indicates perfectly unital gates and horrible SPAM operations.\n", "\n", "This example demonstrates how to separately specify the gauge-optimization-target `Model`. There are two places where you might want to do this: 1) when calling `pygsti.do_long_sequence_gst`, to direct the gauge-optimization it performs, or 2) when calling `estimate.add_gaugeoptimized` to add a gauge-optimized version of an estimate after the main GST algorithms have been run. \n", "\n", "In both cases, a dictionary of gauge-optimization \"parameters\" (really just a dictionary of arguments for `pygsti.gaugeopt_to_target`) is required, and one simply needs to set the `targetModel` argument of `pygsti.gaugeopt_to_target` by specifying `targetModel` within the parameter dictionary. We demonstrate this below.\n", "\n", "First, we'll setup a standard GST analysis as usual except we'll create a `mdl_guess` model that is meant to be an educated guess at what we expect the estimate to be. 
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, we'll set up a standard GST analysis as usual, except we'll create a `mdl_guess` model that is meant to be an educated guess at what we expect the estimate to be. We'll gauge optimize to `mdl_guess` instead of the usual `target_model`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from __future__ import print_function\n",
    "import pygsti\n",
    "from pygsti.construction import std1Q_XYI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "#Generate some fake data (all the usual stuff here)\n",
    "target_model = std1Q_XYI.target_model()\n",
    "mdl_datagen = std1Q_XYI.target_model().depolarize(op_noise=0.01, spam_noise=0.001).rotate( (0,0,0.05) )\n",
    "mdl_datagen['Gx'].depolarize(0.1) #depolarize Gx even further\n",
    "listOfExperiments = pygsti.construction.make_lsgst_experiment_list(\n",
    "    target_model, std1Q_XYI.fiducials, std1Q_XYI.fiducials, std1Q_XYI.germs, [1,2,4])\n",
    "ds = pygsti.construction.generate_fake_data(mdl_datagen, listOfExperiments, nSamples=1000,\n",
    "                                            sampleError=\"binomial\", seed=1234)\n",
    "target_model.set_all_parameterizations(\"TP\") #we'll do TP-constrained GST below"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Create a \"guess\" model that anticipates a more-depolarized Gx gate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "mdl_guess = std1Q_XYI.target_model()\n",
    "mdl_guess['Gx'].depolarize(0.1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Run GST with and without the guess model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--- Circuit Creation ---\n",
      "--- LGST ---\n",
      "--- Iterative MLGST: [##################################################] 100.0% 450 operation sequences ---\n",
      "Iterative MLGST Total Time: 1.2s\n"
     ]
    }
   ],
   "source": [
    "# GST with standard \"ideal target\" gauge optimization\n",
    "results1 = pygsti.do_long_sequence_gst(\n",
    "    ds, target_model, std1Q_XYI.fiducials, std1Q_XYI.fiducials, std1Q_XYI.germs, [1,2,4],\n",
    "    gaugeOptParams={'itemWeights': {'gates': 1, 'spam': 1}}, verbosity=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--- Circuit Creation ---\n",
      "--- LGST ---\n",
      "--- Iterative MLGST: [##################################################] 100.0% 450 operation sequences ---\n",
      "Iterative MLGST Total Time: 1.2s\n"
     ]
    }
   ],
   "source": [
    "# GST with our guess as the gauge optimization target (just add \"targetModel\" to gaugeOptParams)\n",
    "results2 = pygsti.do_long_sequence_gst(\n",
    "    ds, target_model, std1Q_XYI.fiducials, std1Q_XYI.fiducials, std1Q_XYI.germs, [1,2,4],\n",
    "    gaugeOptParams={'targetModel': mdl_guess,\n",
    "                    'itemWeights': {'gates': 1, 'spam': 1}}, verbosity=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "-- Std Practice: [##################################################] 100.0% (Target) --\n"
     ]
    }
   ],
   "source": [
    "#Note: you can also use the gaugeOptTarget argument of pygsti.do_stdpractice_gst\n",
    "results3 = pygsti.do_stdpractice_gst(\n",
    "    ds, target_model, std1Q_XYI.fiducials, std1Q_XYI.fiducials, std1Q_XYI.germs, [1,2,4],\n",
    "    gaugeOptTarget=mdl_guess, verbosity=1)"
   ]
  },
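  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Aside: calling `pygsti.gaugeopt_to_target` directly\n",
    "Since `gaugeOptParams` is just a dictionary of arguments for `pygsti.gaugeopt_to_target`, the gauge optimization performed inside `do_long_sequence_gst` can also be reproduced by hand. This is a minimal sketch, assuming the raw (not-yet-gauge-optimized) final model is stored under the estimate's `'final iteration estimate'` key; up to optimizer tolerance it should reproduce `results2`'s `'go0'` model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Manually redo the mdl_guess-targeted gauge optimization\n",
    "mdl_raw = results2.estimates['default'].models['final iteration estimate']\n",
    "mdl_manual = pygsti.gaugeopt_to_target(mdl_raw, mdl_guess,\n",
    "                                       itemWeights={'gates': 1, 'spam': 1})\n",
    "print(mdl_manual.frobeniusdist(results2.estimates['default'].models['go0'])) # should be ~0"
   ]
  },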
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Comparisons\n",
    "After running both the \"ideal-target\" and \"mdl_guess-target\" gauge optimizations, we can compare them with the ideal targets and with the data-generating gates themselves. We see that using `mdl_guess` gives a similar Frobenius distance to the ideal targets and a slightly closer estimate to the data-generating model, and that the per-operation differences reflect our expectation that the `Gx` gate is slightly worse than the other gates."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Diff between ideal and ideal-target-gauge-opt = 0.026385852729678447\n",
      "Diff between ideal and mdl_guess-gauge-opt = 0.026386339218052658\n",
      "Diff between data-gen and ideal-target-gauge-opt = 0.011688530015221483\n",
      "Diff between data-gen and mdl_guess-gauge-opt = 0.011622694618020846\n",
      "Diff between ideal-target-GO and mdl_guess-GO = 0.00015586066892434803\n",
      "\n",
      "Per-op difference between ideal and ideal-target-GO\n",
      "Model Difference:\n",
      " Preps:\n",
      "  rho0 = 0.0180446\n",
      " POVMs:\n",
      "  Mdefault: 0 = 0.0147282\n",
      "   1 = 0.0147282\n",
      " Gates:\n",
      "  Gi = 0.0795213\n",
      "  Gx = 0.184807\n",
      "  Gy = 0.0231531\n",
      "\n",
      "\n",
      "Per-op difference between ideal and mdl_guess-GO\n",
      "Model Difference:\n",
      " Preps:\n",
      "  rho0 = 0.0178975\n",
      " POVMs:\n",
      "  Mdefault: 0 = 0.0144092\n",
      "   1 = 0.0144092\n",
      " Gates:\n",
      "  Gi = 0.0795216\n",
      "  Gx = 0.184867\n",
      "  Gy = 0.0232215\n",
      "\n"
     ]
    }
   ],
   "source": [
    "mdl_1 = results1.estimates['default'].models['go0']\n",
    "mdl_2 = results2.estimates['default'].models['go0']\n",
    "print(\"Diff between ideal and ideal-target-gauge-opt = \", mdl_1.frobeniusdist(target_model))\n",
    "print(\"Diff between ideal and mdl_guess-gauge-opt = \", mdl_2.frobeniusdist(target_model))\n",
    "print(\"Diff between data-gen and ideal-target-gauge-opt = \", mdl_1.frobeniusdist(mdl_datagen))\n",
    "print(\"Diff between data-gen and mdl_guess-gauge-opt = \", mdl_2.frobeniusdist(mdl_datagen))\n",
    "print(\"Diff between ideal-target-GO and mdl_guess-GO = \", mdl_1.frobeniusdist(mdl_2))\n",
    "\n",
    "print(\"\\nPer-op difference between ideal and ideal-target-GO\")\n",
    "print(mdl_1.strdiff(target_model))\n",
    "\n",
    "print(\"\\nPer-op difference between ideal and mdl_guess-GO\")\n",
    "print(mdl_2.strdiff(target_model))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Adding a gauge optimization to existing `Results`\n",
    "We can also include our `mdl_guess` as the `targetModel` when adding a new gauge-optimized model to an existing estimate. See other examples for more info on using `add_gaugeoptimized`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "results1.estimates['default'].add_gaugeoptimized( {'targetModel': mdl_guess,\n",
    "                                                   'itemWeights': {'gates': 1, 'spam': 1}},\n",
    "                                                  label=\"using mdl_guess\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.0\n"
     ]
    }
   ],
   "source": [
    "mdl_1b = results1.estimates['default'].models['using mdl_guess']\n",
    "print(mdl_1b.frobeniusdist(mdl_2)) # mdl_1b is the same as mdl_2"
   ]
  },
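  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check, each gauge optimization is stored on the estimate under its label, so listing the `models` dictionary (the same one indexed above) should show the new `'using mdl_guess'` entry alongside `'go0'`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Each gauge optimization adds a model under its label; list everything stored on the estimate\n",
    "print(list(results1.estimates['default'].models.keys()))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}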