{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Report Generation Tutorial\n", "\n", "PyGSTi is able to construct polished report documents, which provide high-level summaries as well as detailed analyses of results (Gate Set Tomography (GST) and model-testing results in particular). Reports are intended to be quick and easy way of analyzing `Model`-type estimates, and pyGSTi's report generation functions are specifically designed to interact with the `Results` object (producted by several high-level algorithm functions - see, for example, the [GST overview tutorial](../algorithms/GST-Overview.ipynb) and [GST functions tutorial](../algorithms/GST-Drivers.ipynb).). The report generation functions in pyGSTi takes one or more results (often `Results`-type) objects as input and produces an HTML file as output. The HTML format allows the reports to include **interactive plots** and **switches** (see the [workspace switchboard tutorial](advanced/WorkspaceSwitchboards.ipynb), making it easy to compare different types of analysis or data sets. \n", "\n", "PyGSTi's reports are stand-alone HTML documents which cannot run Python. Thus, all the results displayed in a report must be pre-computed (in Python). If you find yourself wanting to fiddle with things and feel that these reports are too static, please consider using a `Workspace` object (see the [Workspace tutorial](Workspace.ipynb)) within a Jupyter notebook, where you can intermix report tables/plots and Python. Internally, functions like `create_standard_report` (see below) are just canned routines which use a `WorkSpace` object to generate various tables and plots and then insert them into a HTML template. \n", "\n", "**Note to veteran users:** PyGSTi has for some time now transitioned to producing HTML (rather than LaTeX/PDF) reports. The way to generate such reports is largely unchanged, with one important exception. Previously, the `Results` object had various report-generation methods included within it. We've found this is too restrictive, as we'd sometimes like to generate a report which utilizes the results from multiple runs of GST (to compare them, for instance). Thus, the `Results` class is now just a container for a `DataSet` and its related `Model`s, `CircuitStructure`s, etc. All of the report-generation capability is now housed in within separate report functions, which we now demonstrate.\n", "\n", "\n", "### Get some `Results`\n", "We start by performing GST using `do_long_sequence_gst`, as usual, to create a `Results` object (we could also have just loaded one from file). See the [GST functions tutorial](../algorithms/GST-Drivers.ipynb) for more details." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loading from cache file: ../tutorial_files/Example_Dataset.txt.cache\n", "--- Circuit Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", "--- LGST ---\n", " Singular values of I_tilde (truncating to first 4 of 6) = \n", " 4.243730350963286\n", " 1.1796261581655645\n", " 0.9627515645786063\n", " 0.9424890722054706\n", " 0.033826151547621315\n", " 0.01692336936843073\n", " \n", " Singular values of target I_tilde (truncating to first 4 of 6) = \n", " 4.242640687119286\n", " 1.414213562373096\n", " 1.4142135623730956\n", " 1.4142135623730954\n", " 2.5038933168948026e-16\n", " 2.023452063009528e-16\n", " \n", " Resulting model:\n", " \n", " rho0 = TPSPAMVec with dimension 4\n", " 0.71-0.02 0.03 0.75\n", " \n", " \n", " Mdefault = TPPOVM with effect vectors:\n", " 0: FullSPAMVec with dimension 4\n", " 0.73 0 0 0.65\n", " \n", " 1: ComplementSPAMVec with dimension 4\n", " 0.69 0 0-0.65\n", " \n", " \n", " \n", " Gi = \n", " TPDenseOp with shape (4, 4)\n", " 1.00 0 0 0\n", " 0.01 0.92-0.03 0.02\n", " 0.01-0.01 0.90 0.02\n", " -0.01 0 0 0.91\n", " \n", " \n", " Gx = \n", " TPDenseOp with shape (4, 4)\n", " 1.00 0 0 0\n", " 0 0.91-0.01 0\n", " -0.02-0.02-0.04-0.99\n", " -0.05 0.03 0.81 0\n", " \n", " \n", " Gy = \n", " TPDenseOp with shape (4, 4)\n", " 1.00 0 0 0\n", " 0.05 0 0 0.98\n", " 0.01 0 0.89-0.03\n", " -0.06-0.82 0 0\n", " \n", " \n", " \n", " \n", "--- Iterative MLGST: Iter 1 of 5 92 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 86.3537, mu=0, |J|=1010.99\n", " --- Outer Iter 1: norm_f = 49.6491, mu=79.0766, |J|=1009.86\n", " --- Outer Iter 2: norm_f = 49.5669, mu=26.3589, |J|=1008.85\n", " --- Outer Iter 3: norm_f = 49.5665, mu=8.78629, |J|=1008.87\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 49.5665 (92 data params - 31 model params = expected mean of 61; p-value = 0.85235)\n", " Completed in 0.2s\n", " 2*Delta(log(L)) = 49.6936\n", " Iteration 1 took 0.2s\n", " \n", "--- Iterative MLGST: Iter 2 of 5 168 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 150.19, mu=0, |J|=1397.23\n", " --- Outer Iter 1: norm_f = 111.389, mu=138.539, |J|=1388.05\n", " --- Outer Iter 2: norm_f = 111.209, mu=46.1798, |J|=1387.46\n", " --- Outer Iter 3: norm_f = 111.208, mu=15.3933, |J|=1387.45\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 111.208 (168 data params - 31 model params = expected mean of 137; p-value = 0.948166)\n", " Completed in 0.2s\n", " 2*Delta(log(L)) = 111.486\n", " Iteration 2 took 0.2s\n", " \n", "--- Iterative MLGST: Iter 3 of 5 450 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 498.77, mu=0, |J|=2295.79\n", " --- Outer Iter 1: norm_f = 421.84, mu=346.423, |J|=2300.79\n", " --- Outer Iter 2: norm_f = 421.713, mu=115.474, |J|=2300.65\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 421.713 (450 data params - 31 model params = expected mean of 419; p-value = 0.453619)\n", " Completed in 0.3s\n", " 2*Delta(log(L)) = 422.191\n", " Iteration 3 took 0.3s\n", " \n", "--- Iterative MLGST: Iter 4 of 5 862 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 851.493, mu=0, |J|=3309.82\n", " --- Outer Iter 1: norm_f = 806.348, mu=636.017, |J|=3286.21\n", " --- Outer Iter 2: norm_f = 806.308, mu=212.006, |J|=3286.08\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 806.308 (862 data params - 31 model params = expected mean of 831; p-value = 0.724212)\n", " Completed in 0.5s\n", " 2*Delta(log(L)) = 807.505\n", " Iteration 4 took 0.6s\n", " \n", "--- Iterative MLGST: Iter 5 of 5 1282 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 1263, mu=0, |J|=4223.66\n", " --- Outer Iter 1: norm_f = 1245.9, mu=917.211, |J|=4227.36\n", " --- Outer Iter 2: norm_f = 1245.88, mu=305.737, |J|=4228.06\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 1245.88 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.53552)\n", " Completed in 0.9s\n", " 2*Delta(log(L)) = 1247.4\n", " Iteration 5 took 1.0s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " --- Outer Iter 0: norm_f = 623.698, mu=0, |J|=2989.23\n", " --- Outer Iter 1: norm_f = 623.667, mu=458.353, |J|=2990.87\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Maximum log(L) = 623.667 below upper bound of -2.13594e+06\n", " 2*Delta(log(L)) = 1247.33 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.523935)\n", " Completed in 0.4s\n", " 2*Delta(log(L)) = 1247.33\n", " Final MLGST took 0.4s\n", " \n", "Iterative MLGST Total Time: 2.6s\n", " -- Adding Gauge Optimized (go0) --\n" ] } ], "source": [ "import pygsti\n", "from pygsti.construction import std1Q_XYI\n", "\n", "target_model = std1Q_XYI.target_model()\n", "fiducials = std1Q_XYI.fiducials\n", "germs = std1Q_XYI.germs\n", "maxLengths = [1,2,4,8,16]\n", "ds = pygsti.io.load_dataset(\"../tutorial_files/Example_Dataset.txt\", cache=True)\n", "\n", "#Run GST\n", "target_model.set_all_parameterizations(\"TP\") #TP-constrained\n", "results = pygsti.do_long_sequence_gst(ds, target_model, fiducials, fiducials, germs,\n", " maxLengths, verbosity=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Make a report\n", "Now that we have `results`, we use the `create_standard_report` method within `pygsti.report` to generate a report. \n", "`pygsti.report.create_standard_report` is the most commonly used report generation function in pyGSTi, as it is appropriate for smaller models (1- and 2-qubit) which have *operations that are or can be represeted as dense matrices and/or vectors*. \n", "\n", "If the given filename ends in \"`.pdf`\" then a PDF-format report is generated; otherwise the file name specifies a folder that will be filled with HTML pages. To open a HTML-format report, you open the `main.html` file directly inside the report's folder. Setting `auto_open=True` makes the finished report open in your web browser automatically. " ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning:\n", "\n", "Idle tomography failed:\n", "Label{layers}\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to ../tutorial_files/exampleReport directory\n", "Opening ../tutorial_files/exampleReport/main.html...\n", "*** Report Generation Complete! Total time 34.5031s ***\n", "\n", "\n", "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning:\n", "\n", "Idle tomography failed:\n", "Label{layers}\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "*** Generating plots ***\n", "*** Merging into template file ***\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/usr/local/lib/python3.7/site-packages/matplotlib/cbook/__init__.py:424: MatplotlibDeprecationWarning:\n", "\n", "\n", "Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead.\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Latex file(s) successfully generated. Attempting to compile with pdflatex...\n", "Opening ../tutorial_files/exampleReport.pdf...\n", "*** Report Generation Complete! Total time 72.7787s ***\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\n", "ERROR: pdflatex returned code 1 Check exampleReport.log to see details.\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#HTML\n", "pygsti.report.create_standard_report(results, \"../tutorial_files/exampleReport\", \n", " title=\"GST Example Report\", verbosity=1, auto_open=True)\n", "\n", "print(\"\\n\")\n", "\n", "#PDF\n", "pygsti.report.create_standard_report(results, \"../tutorial_files/exampleReport.pdf\", \n", " title=\"GST Example Report\", verbosity=1, auto_open=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are several remarks about these reports worth noting:\n", "1. The **HTML reports are the primary report type in pyGSTi**, and are much more flexible. The PDF reports are more limited (they can only display a *single* estimate and gauge optimization), and essentially contain a subset of the information and descriptive text of a HTML report. So, if you can, use the HTML reports. The PDF report's strength is its portability: PDFs are easily displayed by many devices, and they embed all that they need neatly into a single file. **If you need to generate a PDF report** from `Results` objects that have multiple estimates and/or gauge optimizations, consider using the `Results` object's `view` method to single out the estimate and gauge optimization you're after.\n", "2. It's best to use **Firefox** when opening the HTML reports. (If there's a problem with your brower's capabilities it will be shown on the screen when you try to load the report.)\n", "3. You'll need **`pdflatex`** on your system to compile PDF reports.\n", "4. To familiarize yourself with the layout of an HTML report, click on the gray **\"Help\" link** on the black sidebar." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Multiple estimates in a single report\n", "Next, let's analyze the same data two different ways: with and without the TP-constraint (i.e. whether the gates *must* be trace-preserving) and furthermore gauge optmimize each case using several different SPAM-weights. In each case we'll call `do_long_sequence_gst` with `gaugeOptParams=False`, so that no gauge optimization is done, and then perform several gauge optimizations separately and add these to the `Results` object via its `add_gaugeoptimized` function." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--- Circuit Creation ---\n", "--- LGST ---\n", "--- Iterative MLGST: [##################################################] 100.0% 1282 operation sequences ---\n", "Iterative MLGST Total Time: 2.3s\n" ] } ], "source": [ "#Case1: TP-constrained GST\n", "tpTarget = target_model.copy()\n", "tpTarget.set_all_parameterizations(\"TP\")\n", "results_tp = pygsti.do_long_sequence_gst(ds, tpTarget, fiducials, fiducials, germs,\n", " maxLengths, gaugeOptParams=False, verbosity=1)\n", "\n", "#Gauge optimize\n", "est = results_tp.estimates['default']\n", "mdlFinal = est.models['final iteration estimate']\n", "mdlTarget = est.models['target']\n", "for spamWt in [1e-4,1e-2,1.0]:\n", " mdl = pygsti.gaugeopt_to_target(mdlFinal,mdlTarget,{'gates':1, 'spam':spamWt})\n", " est.add_gaugeoptimized({'itemWeights': {'gates':1, 'spam':spamWt}}, mdl, \"Spam %g\" % spamWt)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--- Circuit Creation ---\n", "--- LGST ---\n", "--- Iterative MLGST: [##################################################] 100.0% 1282 operation sequences ---\n", "Iterative MLGST Total Time: 2.7s\n" ] } ], "source": [ "#Case2: \"Full\" GST\n", "fullTarget = target_model.copy()\n", "fullTarget.set_all_parameterizations(\"full\")\n", "results_full = pygsti.do_long_sequence_gst(ds, fullTarget, fiducials, fiducials, germs,\n", " maxLengths, gaugeOptParams=False, verbosity=1)\n", "\n", "#Gauge optimize\n", "est = results_full.estimates['default']\n", "mdlFinal = est.models['final iteration estimate']\n", "mdlTarget = est.models['target']\n", "for spamWt in [1e-4,1e-2,1.0]:\n", " mdl = pygsti.gaugeopt_to_target(mdlFinal,mdlTarget,{'gates':1, 'spam':spamWt})\n", " est.add_gaugeoptimized({'itemWeights': {'gates':1, 'spam':spamWt}}, mdl, \"Spam %g\" % spamWt)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll now call the *same* `create_standard_report` function but this time instead of passing a single `Results` object as the first argument we'll pass a *dictionary* of them. This will result in a **HTML report that includes switches** to select which case (\"TP\" or \"Full\") as well as which gauge optimization to display output quantities for. PDF reports cannot support this interactivity, and so **if you try to generate a PDF report you'll get an error**." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", " targetSpamBriefTable took 0.09227 seconds\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning:\n", "\n", "Idle tomography failed:\n", "Label{layers}\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " targetGatesBoxTable took 0.115583 seconds\n", " datasetOverviewTable took 0.667548 seconds\n", " bestGatesetSpamParametersTable took 0.001178 seconds\n", " bestGatesetSpamBriefTable took 0.235467 seconds\n", " bestGatesetSpamVsTargetTable took 0.108223 seconds\n", " bestGatesetGaugeOptParamsTable took 0.000386 seconds\n", " bestGatesetGatesBoxTable took 0.530659 seconds\n", " bestGatesetChoiEvalTable took 0.429807 seconds\n", " bestGatesetDecompTable took 0.263574 seconds\n", " bestGatesetEvalTable took 0.005017 seconds\n", " bestGermsEvalTable took 0.034672 seconds\n", " bestGatesetVsTargetTable took 0.064467 seconds\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/enielse/research/pyGSTi/packages/pygsti/extras/rb/theory.py:200: UserWarning:\n", "\n", "Output may be unreliable because the model is not approximately trace-preserving.\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " bestGatesVsTargetTable_gv took 0.335761 seconds\n", " bestGatesVsTargetTable_gvgerms took 0.130275 seconds\n", " bestGatesVsTargetTable_gi took 0.0143 seconds\n", " bestGatesVsTargetTable_gigerms took 0.053035 seconds\n", " bestGatesVsTargetTable_sum took 0.283558 seconds\n", " bestGatesetErrGenBoxTable took 1.198687 seconds\n", " metadataTable took 0.001889 seconds\n", " stdoutBlock took 0.000193 seconds\n", " profilerTable took 0.000905 seconds\n", " softwareEnvTable took 0.001578 seconds\n", " exampleTable took 0.038661 seconds\n", " singleMetricTable_gv took 0.311023 seconds\n", " singleMetricTable_gi took 0.043193 seconds\n", " fiducialListTable took 0.000633 seconds\n", " prepStrListTable took 0.000252 seconds\n", " effectStrListTable took 0.000187 seconds\n", " colorBoxPlotKeyPlot took 0.046957 seconds\n", " germList2ColTable took 0.00034 seconds\n", " progressTable took 3.55919 seconds\n", "*** Generating plots ***\n", " gramBarPlot took 0.061222 seconds\n", " progressBarPlot took 0.081087 seconds\n", " progressBarPlot_sum took 0.000544 seconds\n", " finalFitComparePlot took 0.475071 seconds\n", " bestEstimateColorBoxPlot took 13.585453 seconds\n", " bestEstimateTVDColorBoxPlot took 13.131424 seconds\n", " bestEstimateColorScatterPlot took 15.735445 seconds\n", " bestEstimateColorHistogram took 13.254422 seconds\n", " progressTable_scl took 9.2e-05 seconds\n", " progressBarPlot_scl took 6.1e-05 seconds\n", " bestEstimateColorBoxPlot_scl took 0.000166 seconds\n", " bestEstimateColorScatterPlot_scl took 0.000157 seconds\n", " bestEstimateColorHistogram_scl took 0.000144 seconds\n", " dataScalingColorBoxPlot took 5.9e-05 seconds\n", "Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance.\n", "Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance.\n", "Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance.\n", "Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance.\n", " dsComparisonSummary took 0.118111 seconds\n", " dsComparisonHistogram took 0.380056 seconds\n", " dsComparisonBoxPlot took 0.414922 seconds\n", "*** Merging into template file ***\n", " Rendering topSwitchboard took 0.000101 seconds\n", " Rendering maxLSwitchboard1 took 7.8e-05 seconds\n", " Rendering targetSpamBriefTable took 0.063934 seconds\n", " Rendering targetGatesBoxTable took 0.061765 seconds\n", " Rendering datasetOverviewTable took 0.000923 seconds\n", " Rendering bestGatesetSpamParametersTable took 0.002088 seconds\n", " Rendering bestGatesetSpamBriefTable took 0.247024 seconds\n", " Rendering bestGatesetSpamVsTargetTable took 0.002611 seconds\n", " Rendering bestGatesetGaugeOptParamsTable took 0.002972 seconds\n", " Rendering bestGatesetGatesBoxTable took 0.220525 seconds\n", " Rendering bestGatesetChoiEvalTable took 0.516746 seconds\n", " Rendering bestGatesetDecompTable took 0.129993 seconds\n", " Rendering bestGatesetEvalTable took 0.024324 seconds\n", " Rendering bestGermsEvalTable took 0.092785 seconds\n", " Rendering bestGatesetVsTargetTable took 0.001486 seconds\n", " Rendering bestGatesVsTargetTable_gv took 0.004191 seconds\n", " Rendering bestGatesVsTargetTable_gvgerms took 0.007059 seconds\n", " Rendering bestGatesVsTargetTable_gi took 0.004562 seconds\n", " Rendering bestGatesVsTargetTable_gigerms took 0.004776 seconds\n", " Rendering bestGatesVsTargetTable_sum took 0.003621 seconds\n", " Rendering bestGatesetErrGenBoxTable took 0.487722 seconds\n", " Rendering metadataTable took 0.012294 seconds\n", " Rendering stdoutBlock took 0.001058 seconds\n", " Rendering profilerTable took 0.002418 seconds\n", " Rendering softwareEnvTable took 0.002448 seconds\n", " Rendering exampleTable took 0.020387 seconds\n", " Rendering metricSwitchboard_gv took 5.4e-05 seconds\n", " Rendering metricSwitchboard_gi took 3.8e-05 seconds\n", " Rendering singleMetricTable_gv took 0.016138 seconds\n", " Rendering singleMetricTable_gi took 0.024912 seconds\n", " Rendering fiducialListTable took 0.00485 seconds\n", " Rendering prepStrListTable took 0.003289 seconds\n", " Rendering effectStrListTable took 0.003453 seconds\n", " Rendering colorBoxPlotKeyPlot took 0.023572 seconds\n", " Rendering germList2ColTable took 0.006918 seconds\n", " Rendering progressTable took 0.006477 seconds\n", " Rendering gramBarPlot took 0.021127 seconds\n", " Rendering progressBarPlot took 0.036267 seconds\n", " Rendering progressBarPlot_sum took 0.038944 seconds\n", " Rendering finalFitComparePlot took 0.019729 seconds\n", " Rendering bestEstimateColorBoxPlot took 0.274103 seconds\n", " Rendering bestEstimateTVDColorBoxPlot took 0.2693 seconds\n", " Rendering bestEstimateColorScatterPlot took 0.443059 seconds\n", " Rendering bestEstimateColorHistogram took 0.291644 seconds\n", " Rendering progressTable_scl took 0.000594 seconds\n", " Rendering progressBarPlot_scl took 0.000808 seconds\n", " Rendering bestEstimateColorBoxPlot_scl took 0.001029 seconds\n", " Rendering bestEstimateColorScatterPlot_scl took 0.00066 seconds\n", " Rendering bestEstimateColorHistogram_scl took 0.000538 seconds\n", " Rendering dataScalingColorBoxPlot took 0.00077 seconds\n", " Rendering dscmpSwitchboard took 4.1e-05 seconds\n", " Rendering dsComparisonSummary took 0.027765 seconds\n", " Rendering dsComparisonHistogram took 0.139076 seconds\n", " Rendering dsComparisonBoxPlot took 0.125621 seconds\n", "Output written to ../tutorial_files/exampleMultiEstimateReport directory\n", "Opening ../tutorial_files/exampleMultiEstimateReport/main.html...\n", "*** Report Generation Complete! Total time 77.945s ***\n" ] } ], "source": [ "ws = pygsti.report.create_standard_report({'TP': results_tp, \"Full\": results_full},\n", " \"../tutorial_files/exampleMultiEstimateReport\",\n", " title=\"Example Multi-Estimate Report\", \n", " verbosity=2, auto_open=True)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "In the above call we capture the return value in the variable `ws` - a `Workspace` object. PyGSTi's `Workspace` objects function as both a factory for figures and tables as well as a smart cache for computed values. Within `create_standard_report` a `Workspace` object is created and used to create all the figures in the report. As an intended side effect, each of these figures is cached, along with some of the intermediate results used to create it. As we'll see below, a `Workspace` can also be specified as input to `create_standard_report`, allowing it to utilize previously cached quantities.\n", "\n", "**Another way**: Because both `results_tp` and `results_full` above used the same dataset and operation sequences, we could have combined them as two estimates in a single `Results` object (see the previous tutorial on pyGSTi's `Results` object). This can be done by renaming at least one of the `\"default\"`-named estimates in `results_tp` or `results_full` (below we rename both) and then adding the estimate within `results_full` to the estimates already contained in `results_tp`: " ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "results_tp.rename_estimate('default','TP')\n", "results_full.rename_estimate('default','Full')\n", "results_both = results_tp.copy() #copy just for neatness\n", "results_both.add_estimates(results_full, estimatesToAdd=['Full'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Creating a report using `results_both` will result in the same report we just generated. We'll demonstrate this anyway, but in addition we'll supply `create_standard_report` a `ws` argument, which tells it to use any cached values contained in a given *input* `Workspace` to expedite report generation. Since our workspace object has the exact quantities we need cached in it, you'll notice a significant speedup. Finally, note that even though there's just a single `Results` object, you **still can't generate a PDF report** from it because it contains multiple estimates." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", " targetSpamBriefTable took 0.000417 seconds\n", " targetGatesBoxTable took 0.000349 seconds\n", " datasetOverviewTable took 0.000245 seconds\n", " bestGatesetSpamParametersTable took 0.000965 seconds\n", " bestGatesetSpamBriefTable took 0.001204 seconds\n", " bestGatesetSpamVsTargetTable took 0.000777 seconds\n", " bestGatesetGaugeOptParamsTable took 0.000713 seconds\n", " bestGatesetGatesBoxTable took 0.001171 seconds\n", " bestGatesetChoiEvalTable took 0.000702 seconds\n", " bestGatesetDecompTable took 0.000817 seconds\n", " bestGatesetEvalTable took 0.000323 seconds\n", " bestGermsEvalTable took 0.000661 seconds\n", " bestGatesetVsTargetTable took 0.010797 seconds\n", " bestGatesVsTargetTable_gv took 0.001265 seconds\n", " bestGatesVsTargetTable_gvgerms took 0.002182 seconds\n", " bestGatesVsTargetTable_gi took 0.00037 seconds\n", " bestGatesVsTargetTable_gigerms took 0.000732 seconds\n", " bestGatesVsTargetTable_sum took 0.001124 seconds\n", " bestGatesetErrGenBoxTable took 0.001084 seconds\n", " metadataTable took 0.002658 seconds\n", " stdoutBlock took 0.000156 seconds\n", " profilerTable took 0.00112 seconds\n", " softwareEnvTable took 0.000156 seconds\n", " exampleTable took 0.000157 seconds\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning:\n", "\n", "Idle tomography failed:\n", "Label{layers}\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " singleMetricTable_gv took 1.00074 seconds\n", " singleMetricTable_gi took 0.043727 seconds\n", " fiducialListTable took 0.000187 seconds\n", " prepStrListTable took 0.000124 seconds\n", " effectStrListTable took 0.000166 seconds\n", " colorBoxPlotKeyPlot took 0.000375 seconds\n", " germList2ColTable took 0.000265 seconds\n", " progressTable took 0.494041 seconds\n", "*** Generating plots ***\n", " gramBarPlot took 0.000445 seconds\n", " progressBarPlot took 0.047545 seconds\n", " progressBarPlot_sum took 0.00063 seconds\n", " finalFitComparePlot took 0.039485 seconds\n", " bestEstimateColorBoxPlot took 6.670777 seconds\n", " bestEstimateTVDColorBoxPlot took 6.586583 seconds\n", " bestEstimateColorScatterPlot took 7.28725 seconds\n", " bestEstimateColorHistogram took 6.426439 seconds\n", " progressTable_scl took 8.8e-05 seconds\n", " progressBarPlot_scl took 6e-05 seconds\n", " bestEstimateColorBoxPlot_scl took 0.00016 seconds\n", " bestEstimateColorScatterPlot_scl took 0.000153 seconds\n", " bestEstimateColorHistogram_scl took 0.00014 seconds\n", " dataScalingColorBoxPlot took 5.7e-05 seconds\n", "*** Merging into template file ***\n", " Rendering topSwitchboard took 0.000109 seconds\n", " Rendering maxLSwitchboard1 took 8e-05 seconds\n", " Rendering targetSpamBriefTable took 0.068026 seconds\n", " Rendering targetGatesBoxTable took 0.058047 seconds\n", " Rendering datasetOverviewTable took 0.001053 seconds\n", " Rendering bestGatesetSpamParametersTable took 0.002395 seconds\n", " Rendering bestGatesetSpamBriefTable took 0.253642 seconds\n", " Rendering bestGatesetSpamVsTargetTable took 0.003003 seconds\n", " Rendering bestGatesetGaugeOptParamsTable took 0.003293 seconds\n", " Rendering bestGatesetGatesBoxTable took 0.223867 seconds\n", " Rendering bestGatesetChoiEvalTable took 0.213146 seconds\n", " Rendering bestGatesetDecompTable took 0.131016 seconds\n", " Rendering bestGatesetEvalTable took 0.024017 seconds\n", " Rendering bestGermsEvalTable took 0.08801 seconds\n", " Rendering bestGatesetVsTargetTable took 0.001673 seconds\n", " Rendering bestGatesVsTargetTable_gv took 0.004549 seconds\n", " Rendering bestGatesVsTargetTable_gvgerms took 0.006734 seconds\n", " Rendering bestGatesVsTargetTable_gi took 0.004355 seconds\n", " Rendering bestGatesVsTargetTable_gigerms took 0.004779 seconds\n", " Rendering bestGatesVsTargetTable_sum took 0.003868 seconds\n", " Rendering bestGatesetErrGenBoxTable took 0.478271 seconds\n", " Rendering metadataTable took 0.011528 seconds\n", " Rendering stdoutBlock took 0.001166 seconds\n", " Rendering profilerTable took 0.002611 seconds\n", " Rendering softwareEnvTable took 0.002303 seconds\n", " Rendering exampleTable took 0.02164 seconds\n", " Rendering metricSwitchboard_gv took 3.8e-05 seconds\n", " Rendering metricSwitchboard_gi took 3.1e-05 seconds\n", " Rendering singleMetricTable_gv took 0.016514 seconds\n", " Rendering singleMetricTable_gi took 0.014514 seconds\n", " Rendering fiducialListTable took 0.002621 seconds\n", " Rendering prepStrListTable took 0.002107 seconds\n", " Rendering effectStrListTable took 0.002571 seconds\n", " Rendering colorBoxPlotKeyPlot took 0.022725 seconds\n", " Rendering germList2ColTable took 0.003585 seconds\n", " Rendering progressTable took 0.006995 seconds\n", " Rendering gramBarPlot took 0.022479 seconds\n", " Rendering progressBarPlot took 0.035332 seconds\n", " Rendering progressBarPlot_sum took 0.036974 seconds\n", " Rendering finalFitComparePlot took 0.019669 seconds\n", " Rendering bestEstimateColorBoxPlot took 0.27344 seconds\n", " Rendering bestEstimateTVDColorBoxPlot took 0.269551 seconds\n", " Rendering bestEstimateColorScatterPlot took 0.439941 seconds\n", " Rendering bestEstimateColorHistogram took 0.299781 seconds\n", " Rendering progressTable_scl took 0.00091 seconds\n", " Rendering progressBarPlot_scl took 0.000923 seconds\n", " Rendering bestEstimateColorBoxPlot_scl took 0.000987 seconds\n", " Rendering bestEstimateColorScatterPlot_scl took 0.000771 seconds\n", " Rendering bestEstimateColorHistogram_scl took 0.000743 seconds\n", " Rendering dataScalingColorBoxPlot took 0.000715 seconds\n", "Output written to ../tutorial_files/exampleMultiEstimateReport2 directory\n", "Opening ../tutorial_files/exampleMultiEstimateReport2/main.html...\n", "*** Report Generation Complete! Total time 31.9865s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pygsti.report.create_standard_report(results_both,\n", " \"../tutorial_files/exampleMultiEstimateReport2\",\n", " title=\"Example Multi-Estimate Report (v2)\", \n", " verbosity=2, auto_open=True, ws=ws)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Multiple estimates and `do_stdpractice_gst`\n", "It's no coincidence that a `Results` object containing multiple estimates using the same data is precisely what's returned from `do_stdpractice_gst` (see docstring for information on its arguments, and see the [GST functions tutorial](../algorithms/GST-Drivers.ipynb)). This allows one to run GST multiple times, creating several different \"standard\" estimates and gauge optimizations, and plot them all in a single (HTML) report. " ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-- Std Practice: Iter 1 of 3 (TP) --: \n", " --- Circuit Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " --- LGST ---\n", " Singular values of I_tilde (truncating to first 4 of 6) = \n", " 4.243730350963286\n", " 1.1796261581655645\n", " 0.9627515645786063\n", " 0.9424890722054706\n", " 0.033826151547621315\n", " 0.01692336936843073\n", " \n", " Singular values of target I_tilde (truncating to first 4 of 6) = \n", " 4.242640687119286\n", " 1.414213562373096\n", " 1.4142135623730956\n", " 1.4142135623730954\n", " 2.5038933168948026e-16\n", " 2.023452063009528e-16\n", " \n", " Resulting model:\n", " \n", " rho0 = TPSPAMVec with dimension 4\n", " 0.71-0.02 0.03 0.75\n", " \n", " \n", " Mdefault = TPPOVM with effect vectors:\n", " 0: FullSPAMVec with dimension 4\n", " 0.73 0 0 0.65\n", " \n", " 1: ComplementSPAMVec with dimension 4\n", " 0.69 0 0-0.65\n", " \n", " \n", " \n", " Gi = \n", " TPDenseOp with shape (4, 4)\n", " 1.00 0 0 0\n", " 0.01 0.92-0.03 0.02\n", " 0.01-0.01 0.90 0.02\n", " -0.01 0 0 0.91\n", " \n", " \n", " Gx = \n", " TPDenseOp with shape (4, 4)\n", " 1.00 0 0 0\n", " 0 0.91-0.01 0\n", " -0.02-0.02-0.04-0.99\n", " -0.05 0.03 0.81 0\n", " \n", " \n", " Gy = \n", " TPDenseOp with shape (4, 4)\n", " 1.00 0 0 0\n", " 0.05 0 0 0.98\n", " 0.01 0 0.89-0.03\n", " -0.06-0.82 0 0\n", " \n", " \n", " \n", " \n", " --- Iterative MLGST: Iter 1 of 5 92 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 86.3537, mu=0, |J|=1010.99\n", " --- Outer Iter 1: norm_f = 49.6491, mu=79.0766, |J|=1009.86\n", " --- Outer Iter 2: norm_f = 49.5669, mu=26.3589, |J|=1008.85\n", " --- Outer Iter 3: norm_f = 49.5665, mu=8.78629, |J|=1008.87\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 49.5665 (92 data params - 31 model params = expected mean of 61; p-value = 0.85235)\n", " Completed in 0.2s\n", " 2*Delta(log(L)) = 49.6936\n", " Iteration 1 took 0.2s\n", " \n", " --- Iterative MLGST: Iter 2 of 5 168 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 150.19, mu=0, |J|=1397.23\n", " --- Outer Iter 1: norm_f = 111.389, mu=138.539, |J|=1388.05\n", " --- Outer Iter 2: norm_f = 111.209, mu=46.1798, |J|=1387.46\n", " --- Outer Iter 3: norm_f = 111.208, mu=15.3933, |J|=1387.45\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 111.208 (168 data params - 31 model params = expected mean of 137; p-value = 0.948166)\n", " Completed in 0.2s\n", " 2*Delta(log(L)) = 111.486\n", " Iteration 2 took 0.2s\n", " \n", " --- Iterative MLGST: Iter 3 of 5 450 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 498.77, mu=0, |J|=2295.79\n", " --- Outer Iter 1: norm_f = 421.84, mu=346.423, |J|=2300.79\n", " --- Outer Iter 2: norm_f = 421.713, mu=115.474, |J|=2300.65\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 421.713 (450 data params - 31 model params = expected mean of 419; p-value = 0.453619)\n", " Completed in 0.3s\n", " 2*Delta(log(L)) = 422.191\n", " Iteration 3 took 0.3s\n", " \n", " --- Iterative MLGST: Iter 4 of 5 862 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 851.493, mu=0, |J|=3309.82\n", " --- Outer Iter 1: norm_f = 806.348, mu=636.017, |J|=3286.21\n", " --- Outer Iter 2: norm_f = 806.308, mu=212.006, |J|=3286.08\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 806.308 (862 data params - 31 model params = expected mean of 831; p-value = 0.724212)\n", " Completed in 0.6s\n", " 2*Delta(log(L)) = 807.505\n", " Iteration 4 took 0.6s\n", " \n", " --- Iterative MLGST: Iter 5 of 5 1282 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 1263, mu=0, |J|=4223.66\n", " --- Outer Iter 1: norm_f = 1245.9, mu=917.211, |J|=4227.36\n", " --- Outer Iter 2: norm_f = 1245.88, mu=305.737, |J|=4228.06\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 1245.88 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.53552)\n", " Completed in 0.9s\n", " 2*Delta(log(L)) = 1247.4\n", " Iteration 5 took 0.9s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " --- Outer Iter 0: norm_f = 623.698, mu=0, |J|=2989.23\n", " --- Outer Iter 1: norm_f = 623.667, mu=458.353, |J|=2990.87\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Maximum log(L) = 623.667 below upper bound of -2.13594e+06\n", " 2*Delta(log(L)) = 1247.33 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.523935)\n", " Completed in 0.4s\n", " 2*Delta(log(L)) = 1247.33\n", " Final MLGST took 0.4s\n", " \n", " Iterative MLGST Total Time: 2.6s\n", " -- Performing 'single' gauge optimization on TP estimate --\n", " -- Adding Gauge Optimized (single) --\n", " -- Performing 'Spam 0.001' gauge optimization on TP estimate --\n", " -- Adding Gauge Optimized (Spam 0.001) --\n", " -- Performing 'Spam 0.001+v' gauge optimization on TP estimate --\n", " -- Adding Gauge Optimized (Spam 0.001+v) --\n", "-- Std Practice: Iter 2 of 3 (CPTP) --: \n", " --- Circuit Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " --- Iterative MLGST: Iter 1 of 5 92 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params).\n", " --- Outer Iter 0: norm_f = 1.10824e+07, mu=0, |J|=1098.32\n", " --- Outer Iter 1: norm_f = 525198, mu=152.044, |J|=23245\n", " --- Outer Iter 2: norm_f = 105604, mu=119.002, |J|=4968.62\n", " --- Outer Iter 3: norm_f = 17775.9, mu=81.851, |J|=1539.41\n", " --- Outer Iter 4: norm_f = 2118.67, mu=39.4495, |J|=988.314\n", " --- Outer Iter 5: norm_f = 91.4772, mu=13.1498, |J|=781.246\n", " --- Outer Iter 6: norm_f = 66.6366, mu=10.0856, |J|=746.609\n", " --- Outer Iter 7: norm_f = 59.8988, mu=16.5022, |J|=744.779\n", " --- Outer Iter 8: norm_f = 55.2767, mu=32.1916, |J|=740.96\n", " --- Outer Iter 9: norm_f = 50.7549, mu=24.8843, |J|=744.012\n", " --- Outer Iter 10: norm_f = 49.7522, mu=8.29476, |J|=747.925\n", " --- Outer Iter 11: norm_f = 49.7405, mu=48.5801, |J|=748.36\n", " --- Outer Iter 12: norm_f = 49.7394, mu=44.6982, |J|=748.432\n", " --- Outer Iter 13: norm_f = 49.739, mu=34.6233, |J|=748.465\n", " --- Outer Iter 14: norm_f = 49.7386, mu=29.4361, |J|=748.5\n", " --- Outer Iter 15: norm_f = 49.7383, mu=29.4353, |J|=748.54\n", " --- Outer Iter 16: norm_f = 49.7383, mu=46.4292, |J|=748.574\n", " --- Outer Iter 17: norm_f = 49.7379, mu=53.6274, |J|=748.601\n", " --- Outer Iter 18: norm_f = 49.7376, mu=54.0897, |J|=748.637\n", " --- Outer Iter 19: norm_f = 49.7374, mu=53.7448, |J|=748.669\n", " --- Outer Iter 20: norm_f = 49.7372, mu=41.3964, |J|=748.695\n", " --- Outer Iter 21: norm_f = 49.737, mu=19.6625, |J|=748.727\n", " --- Outer Iter 22: norm_f = 49.7367, mu=15.9398, |J|=748.787\n", " --- Outer Iter 23: norm_f = 49.7365, mu=33.1175, |J|=748.829\n", " --- Outer Iter 24: norm_f = 49.7365, mu=52.8695, |J|=748.861\n", " --- Outer Iter 25: norm_f = 49.7362, mu=54.9158, |J|=748.889\n", " --- Outer Iter 26: norm_f = 49.736, mu=54.9155, |J|=748.92\n", " --- Outer Iter 27: norm_f = 49.7359, mu=48.065, |J|=748.945\n", " --- Outer Iter 28: norm_f = 49.7357, mu=23.5141, |J|=748.971\n", " --- Outer Iter 29: norm_f = 49.7355, mu=13.3491, |J|=749.021\n", " --- Outer Iter 30: norm_f = 49.7355, mu=22.8901, |J|=749.097\n", " --- Outer Iter 31: norm_f = 49.7352, mu=58.1488, |J|=749.123\n", " --- Outer Iter 32: norm_f = 49.735, mu=58.7563, |J|=749.156\n", " --- Outer Iter 33: norm_f = 49.7349, mu=58.067, |J|=749.182\n", " --- Outer Iter 34: norm_f = 49.7348, mu=32.8341, |J|=749.203\n", " --- Outer Iter 35: norm_f = 49.7347, mu=10.9447, |J|=749.237\n", " --- Outer Iter 36: norm_f = 49.7344, mu=10.5674, |J|=749.327\n", " --- Outer Iter 37: norm_f = 49.7342, mu=82.2963, |J|=749.355\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 49.7342 (92 data params - 31 model params = expected mean of 61; p-value = 0.848291)\n", " Completed in 2.5s\n", " 2*Delta(log(L)) = 49.8652\n", " Iteration 1 took 2.5s\n", " \n", " --- Iterative MLGST: Iter 2 of 5 168 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params).\n", " --- Outer Iter 0: norm_f = 151.528, mu=0, |J|=1014.34\n", " --- Outer Iter 1: norm_f = 122.487, mu=173.839, |J|=987.258\n", " --- Outer Iter 2: norm_f = 112.801, mu=57.9464, |J|=996.402\n", " --- Outer Iter 3: norm_f = 111.668, mu=31.1358, |J|=999.747\n", " --- Outer Iter 4: norm_f = 111.484, mu=10.3786, |J|=1002.08\n", " --- Outer Iter 5: norm_f = 111.476, mu=4.36119, |J|=1002.23\n", " --- Outer Iter 6: norm_f = 111.475, mu=93.0386, |J|=1002.33\n", " --- Outer Iter 7: norm_f = 111.475, mu=88.9496, |J|=1002.32\n", " --- Outer Iter 8: norm_f = 111.475, mu=58.1522, |J|=1002.3\n", " --- Outer Iter 9: norm_f = 111.474, mu=28.2497, |J|=1002.26\n", " --- Outer Iter 10: norm_f = 111.474, mu=27.9109, |J|=1002.16\n", " --- Outer Iter 11: norm_f = 111.474, mu=91.7847, |J|=1002.11\n", " --- Outer Iter 12: norm_f = 111.474, mu=94.8186, |J|=1002.09\n", " --- Outer Iter 13: norm_f = 111.474, mu=94.8078, |J|=1002.07\n", " --- Outer Iter 14: norm_f = 111.474, mu=78.1253, |J|=1002.05\n", " --- Outer Iter 15: norm_f = 111.474, mu=32.956, |J|=1002.02\n", " --- Outer Iter 16: norm_f = 111.473, mu=20.4206, |J|=1001.95\n", " --- Outer Iter 17: norm_f = 111.473, mu=42.907, |J|=1001.9\n", " --- Outer Iter 18: norm_f = 111.473, mu=88.1256, |J|=1001.88\n", " --- Outer Iter 19: norm_f = 111.473, mu=88.1234, |J|=1001.86\n", " --- Outer Iter 20: norm_f = 111.473, mu=80.7846, |J|=1001.84\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 111.473 (168 data params - 31 model params = expected mean of 137; p-value = 0.946188)\n", " Completed in 1.3s\n", " 2*Delta(log(L)) = 111.765\n", " Iteration 2 took 1.3s\n", " \n", " --- Iterative MLGST: Iter 3 of 5 450 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params).\n", " --- Outer Iter 0: norm_f = 496.83, mu=0, |J|=1622.21\n", " --- Outer Iter 1: norm_f = 425.12, mu=172.635, |J|=1614.05\n", " --- Outer Iter 2: norm_f = 422.084, mu=57.545, |J|=1622.3\n", " --- Outer Iter 3: norm_f = 422.023, mu=19.1817, |J|=1623.99\n", " --- Outer Iter 4: norm_f = 422.007, mu=19.202, |J|=1622.74\n", " --- Outer Iter 5: norm_f = 421.888, mu=19.3361, |J|=1622.9\n", " --- Outer Iter 6: norm_f = 421.713, mu=6.44536, |J|=1625.17\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 421.713 (450 data params - 31 model params = expected mean of 419; p-value = 0.45362)\n", " Completed in 0.8s\n", " 2*Delta(log(L)) = 422.195\n", " Iteration 3 took 0.8s\n", " \n", " --- Iterative MLGST: Iter 4 of 5 862 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params).\n", " --- Outer Iter 0: norm_f = 851.552, mu=0, |J|=2237.29\n", " --- Outer Iter 1: norm_f = 813.414, mu=295.355, |J|=2217.99\n", " --- Outer Iter 2: norm_f = 811.822, mu=324.854, |J|=2226.8\n", " --- Outer Iter 3: norm_f = 807.477, mu=108.285, |J|=2242.71\n", " --- Outer Iter 4: norm_f = 807.406, mu=110.4, |J|=2245.32\n", " --- Outer Iter 5: norm_f = 807.331, mu=685.514, |J|=2246.43\n", " --- Outer Iter 6: norm_f = 807.319, mu=654.52, |J|=2246.59\n", " --- Outer Iter 7: norm_f = 807.317, mu=520.906, |J|=2246.64\n", " --- Outer Iter 8: norm_f = 807.316, mu=173.635, |J|=2246.59\n", " --- Outer Iter 9: norm_f = 807.313, mu=57.8784, |J|=2246.3\n", " --- Outer Iter 10: norm_f = 807.305, mu=33.5976, |J|=2245.2\n", " --- Outer Iter 11: norm_f = 807.302, mu=42.516, |J|=2243.22\n", " --- Outer Iter 12: norm_f = 807.289, mu=339.848, |J|=2243.46\n", " --- Outer Iter 13: norm_f = 807.287, mu=339.599, |J|=2243.35\n", " --- Outer Iter 14: norm_f = 807.286, mu=256.378, |J|=2243.19\n", " --- Outer Iter 15: norm_f = 807.284, mu=85.4593, |J|=2242.94\n", " --- Outer Iter 16: norm_f = 807.279, mu=59.0419, |J|=2242.14\n", " --- Outer Iter 17: norm_f = 807.277, mu=417.602, |J|=2242.06\n", " --- Outer Iter 18: norm_f = 807.276, mu=139.201, |J|=2241.9\n", " --- Outer Iter 19: norm_f = 807.273, mu=46.4002, |J|=2241.4\n", " --- Outer Iter 20: norm_f = 807.264, mu=28.3539, |J|=2239.71\n", " --- Outer Iter 21: norm_f = 807.259, mu=31.9167, |J|=2236.66\n", " --- Outer Iter 22: norm_f = 807.244, mu=200.188, |J|=2236.94\n", " --- Outer Iter 23: norm_f = 807.242, mu=214.672, |J|=2236.5\n", " --- Outer Iter 24: norm_f = 807.24, mu=266.844, |J|=2236.08\n", " --- Outer Iter 25: norm_f = 807.237, mu=285.22, |J|=2235.76\n", " --- Outer Iter 26: norm_f = 807.235, mu=286.131, |J|=2235.48\n", " --- Outer Iter 27: norm_f = 807.232, mu=284.409, |J|=2235.19\n", " --- Outer Iter 28: norm_f = 807.23, mu=246.399, |J|=2234.88\n", " --- Outer Iter 29: norm_f = 807.228, mu=160.075, |J|=2234.51\n", " --- Outer Iter 30: norm_f = 807.225, mu=125.143, |J|=2233.93\n", " --- Outer Iter 31: norm_f = 807.223, mu=132.762, |J|=2233.16\n", " --- Outer Iter 32: norm_f = 807.22, mu=285.635, |J|=2232.82\n", " --- Outer Iter 33: norm_f = 807.217, mu=286.703, |J|=2232.53\n", " --- Outer Iter 34: norm_f = 807.214, mu=284.888, |J|=2232.23\n", " --- Outer Iter 35: norm_f = 807.212, mu=242.968, |J|=2231.91\n", " --- Outer Iter 36: norm_f = 807.21, mu=151.601, |J|=2231.53\n", " --- Outer Iter 37: norm_f = 807.207, mu=119.477, |J|=2230.89\n", " --- Outer Iter 38: norm_f = 807.205, mu=137.491, |J|=2230.07\n", " --- Outer Iter 39: norm_f = 807.201, mu=293.21, |J|=2229.75\n", " --- Outer Iter 40: norm_f = 807.198, mu=293.562, |J|=2229.48\n", " --- Outer Iter 41: norm_f = 807.196, mu=286.699, |J|=2229.19\n", " --- Outer Iter 42: norm_f = 807.194, mu=210.881, |J|=2228.88\n", " --- Outer Iter 43: norm_f = 807.192, mu=112.754, |J|=2228.45\n", " --- Outer Iter 44: norm_f = 807.188, mu=106.226, |J|=2227.62\n", " --- Outer Iter 45: norm_f = 807.186, mu=217.92, |J|=2227.23\n", " --- Outer Iter 46: norm_f = 807.184, mu=236.72, |J|=2226.83\n", " --- Outer Iter 47: norm_f = 807.181, mu=253.411, |J|=2226.47\n", " --- Outer Iter 48: norm_f = 807.179, mu=258.714, |J|=2226.16\n", " --- Outer Iter 49: norm_f = 807.177, mu=258.905, |J|=2225.85\n", " --- Outer Iter 50: norm_f = 807.175, mu=258.416, |J|=2225.55\n", " --- Outer Iter 51: norm_f = 807.173, mu=248.597, |J|=2225.24\n", " --- Outer Iter 52: norm_f = 807.171, mu=216.922, |J|=2224.93\n", " --- Outer Iter 53: norm_f = 807.17, mu=180.949, |J|=2224.57\n", " --- Outer Iter 54: norm_f = 807.167, mu=172.91, |J|=2224.15\n", " --- Outer Iter 55: norm_f = 807.166, mu=174, |J|=2223.71\n", " --- Outer Iter 56: norm_f = 807.165, mu=270.951, |J|=2223.28\n", " --- Outer Iter 57: norm_f = 807.163, mu=293.556, |J|=2223.03\n", " --- Outer Iter 58: norm_f = 807.161, mu=294.205, |J|=2222.84\n", " --- Outer Iter 59: norm_f = 807.159, mu=287.827, |J|=2222.63\n", " --- Outer Iter 60: norm_f = 807.158, mu=206.067, |J|=2222.41\n", " --- Outer Iter 61: norm_f = 807.156, mu=103.898, |J|=2222.09\n", " --- Outer Iter 62: norm_f = 807.154, mu=97.6646, |J|=2221.47\n", " --- Outer Iter 63: norm_f = 807.153, mu=213.583, |J|=2221.19\n", " --- Outer Iter 64: norm_f = 807.152, mu=262.549, |J|=2220.92\n", " --- Outer Iter 65: norm_f = 807.15, mu=277.118, |J|=2220.73\n", " --- Outer Iter 66: norm_f = 807.149, mu=277.588, |J|=2220.56\n", " --- Outer Iter 67: norm_f = 807.148, mu=274.669, |J|=2220.38\n", " --- Outer Iter 68: norm_f = 807.147, mu=231.101, |J|=2220.2\n", " --- Outer Iter 69: norm_f = 807.146, mu=148.993, |J|=2219.99\n", " --- Outer Iter 70: norm_f = 807.145, mu=123.155, |J|=2219.67\n", " --- Outer Iter 71: norm_f = 807.145, mu=140.655, |J|=2219.3\n", " --- Outer Iter 72: norm_f = 807.143, mu=292.778, |J|=2219.16\n", " --- Outer Iter 73: norm_f = 807.143, mu=292.782, |J|=2219.04\n", " --- Outer Iter 74: norm_f = 807.142, mu=272.276, |J|=2218.91\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 807.142 (862 data params - 31 model params = expected mean of 831; p-value = 0.717193)\n", " Completed in 11.3s\n", " 2*Delta(log(L)) = 808.459\n", " Iteration 4 took 11.3s\n", " \n", " --- Iterative MLGST: Iter 5 of 5 1282 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 60 params (taken as 1 param groups of ~60 params).\n", " --- Outer Iter 0: norm_f = 1264.71, mu=0, |J|=2587.75\n", " --- Outer Iter 1: norm_f = 1247.44, mu=339.21, |J|=2580.77\n", " --- Outer Iter 2: norm_f = 1247.19, mu=113.07, |J|=2583.49\n", " --- Outer Iter 3: norm_f = 1247.18, mu=51.8848, |J|=2582.38\n", " --- Outer Iter 4: norm_f = 1247.17, mu=102.682, |J|=2581.25\n", " --- Outer Iter 5: norm_f = 1247.17, mu=740.295, |J|=2581.29\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 1247.17 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.525249)\n", " Completed in 1.8s\n", " 2*Delta(log(L)) = 1248.89\n", " Iteration 5 took 1.8s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " --- Outer Iter 0: norm_f = 624.444, mu=0, |J|=1825.16\n", " --- Outer Iter 1: norm_f = 624.418, mu=169.838, |J|=1826.36\n", " --- Outer Iter 2: norm_f = 624.417, mu=56.6127, |J|=1826.14\n", " --- Outer Iter 3: norm_f = 624.414, mu=36.7733, |J|=1825.47\n", " --- Outer Iter 4: norm_f = 624.414, mu=115.665, |J|=1825.09\n", " --- Outer Iter 5: norm_f = 624.412, mu=241.82, |J|=1825\n", " --- Outer Iter 6: norm_f = 624.411, mu=241.831, |J|=1824.93\n", " --- Outer Iter 7: norm_f = 624.41, mu=213.348, |J|=1824.84\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Maximum log(L) = 624.41 below upper bound of -2.13594e+06\n", " 2*Delta(log(L)) = 1248.82 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.512074)\n", " Completed in 1.7s\n", " 2*Delta(log(L)) = 1248.82\n", " Final MLGST took 1.7s\n", " \n", " Iterative MLGST Total Time: 19.6s\n", " -- Performing 'single' gauge optimization on CPTP estimate --\n", " -- Adding Gauge Optimized (single) --\n", " -- Performing 'Spam 0.001' gauge optimization on CPTP estimate --\n", " -- Adding Gauge Optimized (Spam 0.001) --\n", " -- Performing 'Spam 0.001+v' gauge optimization on CPTP estimate --\n", " -- Adding Gauge Optimized (Spam 0.001+v) --\n", "-- Std Practice: Iter 3 of 3 (Target) --: \n", " --- Circuit Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " -- Performing 'single' gauge optimization on Target estimate --\n", " -- Adding Gauge Optimized (single) --\n", " -- Performing 'Spam 0.001' gauge optimization on Target estimate --\n", " -- Adding Gauge Optimized (Spam 0.001) --\n", " -- Performing 'Spam 0.001+v' gauge optimization on Target estimate --\n", " -- Adding Gauge Optimized (Spam 0.001+v) --\n", "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning:\n", "\n", "Idle tomography failed:\n", "Label{layers}\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to ../tutorial_files/exampleStdReport directory\n", "Opening ../tutorial_files/exampleStdReport/main.html...\n", "*** Report Generation Complete! Total time 70.4261s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "results_std = pygsti.do_stdpractice_gst(ds, target_model, fiducials, fiducials, germs,\n", " maxLengths, verbosity=4, modes=\"TP,CPTP,Target\",\n", " gaugeOptSuite=('single','toggleValidSpam'))\n", "\n", "# Generate a report with \"TP\", \"CPTP\", and \"Target\" estimates\n", "pygsti.report.create_standard_report(results_std, \"../tutorial_files/exampleStdReport\", \n", " title=\"Post StdPractice Report\", auto_open=True,\n", " verbosity=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reports with confidence regions\n", "To display confidence intervals for reported quantities, you must do two things:\n", "\n", "1. you must specify the `confidenceLevel` argument to `create_standard_report`.\n", "2. the estimate(s) being reported must have a valid confidence-region-factory.\n", "\n", "Constructing a factory often means computing a Hessian, which can be time consuming, and so this is *not* done automatically. Here we demonstrate how to construct a valid factory for the \"Spam 0.001\" gauge-optimization of the \"CPTP\" estimate by computing and then projecting the Hessian of the likelihood function. " ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " \n", "--- Hessian Projector Optimization from separate SPAM and Gate weighting ---\n", " Resulting intrinsic errors: 0.0083633 (gates), 0.0048806 (spam)\n", " Resulting sqrt(mean(operationCIs**2)): 0.0164815\n", " Resulting sqrt(mean(spamCIs**2)): 0.0132789\n", "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning:\n", "\n", "Idle tomography failed:\n", "Label{layers}\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to ../tutorial_files/exampleStdReport2 directory\n", "Opening ../tutorial_files/exampleStdReport2/main.html...\n", "*** Report Generation Complete! Total time 89.6974s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Construct and initialize a \"confidence region factory\" for the CPTP estimate\n", "crfact = results_std.estimates[\"CPTP\"].add_confidence_region_factory('Spam 0.001', 'final')\n", "crfact.compute_hessian(comm=None) #we could use more processors\n", "crfact.project_hessian('intrinsic error')\n", "\n", "pygsti.report.create_standard_report(results_std, \"../tutorial_files/exampleStdReport2\", \n", " title=\"Post StdPractice Report (w/CIs on CPTP)\",\n", " confidenceLevel=95, auto_open=True, verbosity=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reports with multiple *different* data sets\n", "We've already seen above that `create_standard_report` can be given a dictionary of `Results` objects instead of a single one. This allows the creation of reports containing estimates for different `DataSet`s (each `Results` object only holds estimates for a single `DataSet`). Furthermore, when the data sets have the same operation sequences, they will be compared within a tab of the HTML report.\n", "\n", "Below, we generate a new data set with the same sequences as the one loaded at the beginning of this tutorial, proceed to run standard-practice GST on that dataset, and create a report of the results along with those of the original dataset. Look at the **\"Data Comparison\" tab** within the gauge-invariant error metrics category." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-- Std Practice: Iter 1 of 3 (TP) --: \n", " --- Circuit Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " --- LGST ---\n", " Singular values of I_tilde (truncating to first 4 of 6) = \n", " 4.244829997162508\n", " 1.1936677889884049\n", " 0.9868539533169907\n", " 0.932197724091589\n", " 0.04714742318656945\n", " 0.012700520808584604\n", " \n", " Singular values of target I_tilde (truncating to first 4 of 6) = \n", " 4.242640687119286\n", " 1.414213562373096\n", " 1.4142135623730956\n", " 1.4142135623730954\n", " 2.5038933168948026e-16\n", " 2.023452063009528e-16\n", " \n", " --- Iterative MLGST: Iter 1 of 5 92 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 47.848 (92 data params - 31 model params = expected mean of 61; p-value = 0.89017)\n", " Completed in 0.2s\n", " 2*Delta(log(L)) = 47.897\n", " Iteration 1 took 0.2s\n", " \n", " --- Iterative MLGST: Iter 2 of 5 168 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 112.296 (168 data params - 31 model params = expected mean of 137; p-value = 0.939668)\n", " Completed in 0.1s\n", " 2*Delta(log(L)) = 112.295\n", " Iteration 2 took 0.2s\n", " \n", " --- Iterative MLGST: Iter 3 of 5 450 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 409.638 (450 data params - 31 model params = expected mean of 419; p-value = 0.618972)\n", " Completed in 0.4s\n", " 2*Delta(log(L)) = 409.806\n", " Iteration 3 took 0.4s\n", " \n", " --- Iterative MLGST: Iter 4 of 5 862 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 833.614 (862 data params - 31 model params = expected mean of 831; p-value = 0.467957)\n", " Completed in 0.5s\n", " 2*Delta(log(L)) = 833.943\n", " Iteration 4 took 0.6s\n", " \n", " --- Iterative MLGST: Iter 5 of 5 1282 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 1262.33 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.405531)\n", " Completed in 0.8s\n", " 2*Delta(log(L)) = 1262.98\n", " Iteration 5 took 0.9s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " Maximum log(L) = 631.455 below upper bound of -2.13633e+06\n", " 2*Delta(log(L)) = 1262.91 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.401035)\n", " Completed in 0.3s\n", " 2*Delta(log(L)) = 1262.91\n", " Final MLGST took 0.3s\n", " \n", " Iterative MLGST Total Time: 2.6s\n", " -- Performing 'single' gauge optimization on TP estimate --\n", " -- Performing 'Spam 0.001' gauge optimization on TP estimate --\n", " -- Performing 'Spam 0.001+v' gauge optimization on TP estimate --\n", "-- Std Practice: Iter 2 of 3 (CPTP) --: \n", " --- Circuit Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " --- Iterative MLGST: Iter 1 of 5 92 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 50.2614 (92 data params - 31 model params = expected mean of 61; p-value = 0.835129)\n", " Completed in 2.3s\n", " 2*Delta(log(L)) = 50.3385\n", " Iteration 1 took 2.3s\n", " \n", " --- Iterative MLGST: Iter 2 of 5 168 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 112.857 (168 data params - 31 model params = expected mean of 137; p-value = 0.934907)\n", " Completed in 1.3s\n", " 2*Delta(log(L)) = 112.882\n", " Iteration 2 took 1.4s\n", " \n", " --- Iterative MLGST: Iter 3 of 5 450 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 409.841 (450 data params - 31 model params = expected mean of 419; p-value = 0.616256)\n", " Completed in 2.1s\n", " 2*Delta(log(L)) = 410.036\n", " Iteration 3 took 2.1s\n", " \n", " --- Iterative MLGST: Iter 4 of 5 862 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 833.614 (862 data params - 31 model params = expected mean of 831; p-value = 0.467957)\n", " Completed in 1.5s\n", " 2*Delta(log(L)) = 833.943\n", " Iteration 4 took 1.5s\n", " \n", " --- Iterative MLGST: Iter 5 of 5 1282 operation sequences ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 1262.33 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.405528)\n", " Completed in 1.3s\n", " 2*Delta(log(L)) = 1262.98\n", " Iteration 5 took 1.3s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " Maximum log(L) = 631.455 below upper bound of -2.13633e+06\n", " 2*Delta(log(L)) = 1262.91 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.40103)\n", " Completed in 0.9s\n", " 2*Delta(log(L)) = 1262.91\n", " Final MLGST took 0.9s\n", " \n", " Iterative MLGST Total Time: 9.4s\n", " -- Performing 'single' gauge optimization on CPTP estimate --\n", " -- Performing 'Spam 0.001' gauge optimization on CPTP estimate --\n", " -- Performing 'Spam 0.001+v' gauge optimization on CPTP estimate --\n", "-- Std Practice: Iter 3 of 3 (Target) --: \n", " --- Circuit Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " -- Performing 'single' gauge optimization on Target estimate --\n", " -- Performing 'Spam 0.001' gauge optimization on Target estimate --\n", " -- Performing 'Spam 0.001+v' gauge optimization on Target estimate --\n", "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning:\n", "\n", "Idle tomography failed:\n", "Label{layers}\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "*** Generating plots ***\n", "Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance.\n", "The datasets are INCONSISTENT at 5.00% significance.\n", " - Details:\n", " - The aggregate log-likelihood ratio test is significant at 20.30 standard deviations.\n", " - The aggregate log-likelihood ratio test standard deviations signficance threshold is 1.98\n", " - The number of sequences with data that is inconsistent is 14\n", " - The maximum SSTVD over all sequences is 0.15\n", " - The maximum SSTVD was observed for Qubit * ---|Gx|-|Gi|-|Gi|-|Gi|-|Gi|---\n", "\n", "The datasets are INCONSISTENT at 5.00% significance.\n", " - Details:\n", " - The aggregate log-likelihood ratio test is significant at 20.30 standard deviations.\n", " - The aggregate log-likelihood ratio test standard deviations signficance threshold is 1.98\n", " - The number of sequences with data that is inconsistent is 14\n", " - The maximum SSTVD over all sequences is 0.15\n", " - The maximum SSTVD was observed for Qubit * ---|Gx|-|Gi|-|Gi|-|Gi|-|Gi|---\n", "\n", "Statistical hypothesis tests did NOT find inconsistency between the datasets at 5.00% significance.\n", "*** Merging into template file ***\n", "Output written to ../tutorial_files/exampleMultiDataSetReport directory\n", "Opening ../tutorial_files/exampleMultiDataSetReport/main.html...\n", "*** Report Generation Complete! Total time 142.596s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Make another dataset & estimates\n", "depol_gateset = target_model.depolarize(op_noise=0.1)\n", "datagen_gateset = depol_gateset.rotate((0.05,0,0.03))\n", "\n", "#Compute the sequences needed to perform Long Sequence GST on \n", "# this Model with sequences up to lenth 512\n", "circuit_list = pygsti.construction.make_lsgst_experiment_list(\n", " std1Q_XYI.target_model(), std1Q_XYI.prepStrs, std1Q_XYI.effectStrs,\n", " std1Q_XYI.germs, [1,2,4,8,16,32,64,128,256,512])\n", "ds2 = pygsti.construction.generate_fake_data(datagen_gateset, circuit_list, nSamples=1000,\n", " sampleError='binomial', seed=2018)\n", "results_std2 = pygsti.do_stdpractice_gst(ds2, target_model, fiducials, fiducials, germs,\n", " maxLengths, verbosity=3, modes=\"TP,CPTP,Target\",\n", " gaugeOptSuite=('single','toggleValidSpam'))\n", "\n", "pygsti.report.create_standard_report({'DS1': results_std, 'DS2': results_std2},\n", " \"../tutorial_files/exampleMultiDataSetReport\", \n", " title=\"Example Multi-Dataset Report\", \n", " auto_open=True, verbosity=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Other cool `create_standard_report` options\n", "Finally, let us highlight a few of the additional arguments one can supply to `create_standard_report` that allows further control over what gets reported.\n", "\n", "- Setting the `link_to` argument to a tuple of `'pkl'`, `'tex'`, and/or `'pdf'` will create hyperlinks within the plots or below the tables of the HTML linking to Python pickle, LaTeX source, and PDF versions of the content, respectively. The Python pickle files for tables contain pickled pandas `DataFrame` objects, wheras those of plots contain ordinary Python dictionaries of the data that is plotted. Applies to HTML reports only.\n", "\n", "- Setting the `brevity` argument to an integer higher than $0$ (the default) will reduce the amount of information included in the report (for details on what is included for each value, see the doc string). Using `brevity > 0` will reduce the time required to create, and later load, the report, as well as the output file/folder size. This applies to both HTML and PDF reports.\n", "\n", "Below, we demonstrate both of these options in very brief (`brevity=4`) report with links to pickle and PDF files. Note that to generate the PDF files you must have `pdflatex` installed." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/enielse/research/pyGSTi/packages/pygsti/report/factory.py:785: UserWarning:\n", "\n", "Idle tomography failed:\n", "Label{layers}\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to ../tutorial_files/exampleBriefReport directory\n", "Opening ../tutorial_files/exampleBriefReport/main.html...\n", "*** Report Generation Complete! Total time 60.318s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pygsti.report.create_standard_report(results_std,\n", " \"../tutorial_files/exampleBriefReport\", \n", " title=\"Example Brief Report\", \n", " auto_open=True, verbosity=1,\n", " brevity=4, link_to=('pkl','pdf'))" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Advanced Reports: `create_report_notebook`\n", "In addition to the standard HTML-page reports demonstrated above, pyGSTi is able to generate a Jupyter notebook containing the Python commands to create the figures and tables within a general report. This is facilitated\n", "by `Workspace` objects, which are factories for figures and tables (see previous tutorials). By calling `create_report_notebook`, all of the relevant `Workspace` initialization and calls are dumped to a new notebook file, which can be run (either fully or partially) by the user at their convenience. Creating such \"report notebooks\" has the advantage that the user may insert Python code amidst the figure and table generation calls to inspect or modify what is display in a highly customizable fashion. The chief disadvantages of report notebooks is that they require the user to 1) have a Jupyter server up and running and 2) to run the notebook before any figures are displayed.\n", "\n", "The line below demonstrates how to create a report notebook using `create_report_notebook`. Note that the argument list is very similar to `create_general_report`." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Report Notebook created as ../tutorial_files/exampleReport.ipynb\n" ] } ], "source": [ "pygsti.report.create_report_notebook(results, \"../tutorial_files/exampleReport.ipynb\", \n", " title=\"GST Example Report Notebook\", confidenceLevel=None,\n", " auto_open=True, connected=False, verbosity=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multi-qubit reports\n", "The dimension of the density matrix space with with more than 2 qubits starts to become quite large, and Models for 3+ qubits rarely allow every element of the operation process matrices to vary independently. As such, many of the figures generated by `create_standard_report` are both too unwieldy (displaying a $64 \\times 64$ grid of colored boxes for each operation) and not very helpful (you don't often care about what each element of an operation matrix is). For this purpose, we are developing a report that doesn't just dump out and analyze operation matrices as a whole, but looks at a `Model`'s structure to determine how best to report quantities. This \"n-qubit report\" is invoked using `pygsti.report.create_nqnoise_report`, and has similar arguments to `create_standard_report`. It is, however still under development, and while you're welcome to try it out, it may crash or not work in other weird ways." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.0" } }, "nbformat": 4, "nbformat_minor": 2 }