{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# How to add new gauge-optimizations to GST results\n", "This example demonstrates how to take a previously computed `Results` object and add new gauge-optimized versions of the estimated model to one of its estimates. First, let's \"pre-compute\" a `Results` object using `do_long_sequence_gst`, which contains a single `Estimate` called \"default\":" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from __future__ import print_function\n", "import pygsti, pickle\n", "from pygsti.construction import std1Q_XYI" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--- Circuit Creation ---\n", "--- LGST ---\n", "--- Iterative MLGST: [##################################################] 100.0% 450 operation sequences ---\n", "Iterative MLGST Total Time: 1.0s\n", "--- Re-optimizing logl after robust data scaling ---\n" ] } ], "source": [ "# Generate some fake data and run GST on it.\n", "target_model = std1Q_XYI.target_model()\n", "mdl_datagen = std1Q_XYI.target_model().depolarize(op_noise=0.1, spam_noise=0.001)\n", "listOfExperiments = pygsti.construction.make_lsgst_experiment_list(\n", "    target_model, std1Q_XYI.fiducials, std1Q_XYI.fiducials, std1Q_XYI.germs, [1,2,4])\n", "ds = pygsti.construction.generate_fake_data(mdl_datagen, listOfExperiments, nSamples=1000,\n", "    sampleError=\"binomial\", seed=1234)\n", "target_model.set_all_parameterizations(\"TP\")\n", "results = pygsti.do_long_sequence_gst(\n", "    ds, target_model, std1Q_XYI.fiducials, std1Q_XYI.fiducials, std1Q_XYI.germs, [1,2,4],\n", "    gaugeOptParams={'itemWeights': {'gates': 1, 'spam': 1}}, verbosity=1)\n", "\n", "with open(\"example_files/regaugeopt_result.pkl\",\"wb\") as f:\n", "    pickle.dump(results, f)  # pickle the results, to mimic typical workflow" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's load in the pre-computed results and use the `add_gaugeoptimized` method of the `pygsti.objects.Estimate` object to add a new gauge-optimized version of the (gauge un-fixed) model estimate stored in `my_results.estimates['default']`. The first argument of `add_gaugeoptimized` is just a dictionary of arguments to `pygsti.gaugeopt_to_target` **except** that you don't need to specify the `Model` to gauge optimize or the target `Model` (just like the `gaugeOptParams` argument of `do_long_sequence_gst`). The optional \"`label`\" argument defines the key name for the gauge-optimized `Model` and the corresponding parameter dictionary within the `Estimate`'s `.models` and `.goparameters` dictionaries, respectively."
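, "\n", "\n", "As a quick illustration of the call pattern -- a sketch only, not executed in this notebook, and with arbitrary label names -- one could add several gauge-optimized versions in a single loop, one per SPAM weight:\n", "\n", "```python\n", "# Sketch: here estimate stands for a pygsti.objects.Estimate, e.g.\n", "# my_results.estimates['default'] as loaded below.  Each pass stores one\n", "# gauge-optimized model (and its parameter dictionary) under its own label.\n", "for spam_weight in [1e-1, 1e-2, 1e-3]:\n", "    estimate.add_gaugeoptimized(\n", "        {'itemWeights': {'gates': 1, 'spam': spam_weight}},\n", "        label=\"Spam %g\" % spam_weight)\n", "```\n", "\n", "The cells below work through a single such call step by step."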
] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "my_results = pickle.load(open(\"example_files/regaugeopt_result.pkl\",\"rb\"))" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['go0', 'Spam 1e-3']\n", "0.039998560434501076\n" ] } ], "source": [ "estimate = my_results.estimates['default']\n", "estimate.add_gaugeoptimized( {'itemWeights': {'gates': 1, 'spam': 0.001}}, label=\"Spam 1e-3\" )\n", "mdl_gaugeopt = estimate.models['Spam 1e-3']\n", "\n", "print(list(estimate.goparameters.keys()))  # 'go0' is the default gauge-optimization label\n", "print(mdl_gaugeopt.frobeniusdist(estimate.models['target']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One can also perform the gauge optimization separately and specify it using the `model` argument (this is useful when you want or need to compute the gauge optimization elsewhere):" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['go0', 'Spam 1e-3', 'Spam 1e-3 custom']\n", "0.0\n" ] } ], "source": [ "mdl_unfixed = estimate.models['final iteration estimate']\n", "mdl_gaugefixed = pygsti.gaugeopt_to_target(mdl_unfixed, estimate.models['target'], {'gates': 1, 'spam': 0.001})\n", "estimate.add_gaugeoptimized( {'any': \"dictionary\",\n", "                              \"doesn't really\": \"matter\",\n", "                              \"but could be useful if you put gaugeopt params\": 'here'},\n", "                             model=mdl_gaugefixed, label=\"Spam 1e-3 custom\" )\n", "print(list(estimate.goparameters.keys()))\n", "print(estimate.models['Spam 1e-3 custom'].frobeniusdist(estimate.models['Spam 1e-3']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can look at the parameters of any stored gauge optimization via the `.goparameters` dictionary:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'_gaugeGroupEl': <pygsti.objects.gaugegroup.TPGaugeGroupElement object at 0x...>,\n", " 'itemWeights': {'gates': 1, 'spam': 0.001},\n", " 'model': <pygsti.objects.explicitmodel.ExplicitOpModel object at 0x...>,\n", " 'returnAll': True,\n", " 'targetModel': <pygsti.objects.explicitmodel.ExplicitOpModel object at 0x...>}\n" ] } ], "source": [ "import pprint\n", "pp = pprint.PrettyPrinter()\n", "pp.pprint(dict(estimate.goparameters['Spam 1e-3']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, note that if, in the original call to `do_long_sequence_gst`, you set **`gaugeOptParams=False`**, then no gauge optimizations are performed (there would be no \"`go0`\" entry) and you start with a blank slate, free to perform whatever gauge optimizations you want on your own." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.0" } }, "nbformat": 4, "nbformat_minor": 2 }