{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "# Report Generation Tutorial\n", "\n", "PyGSTi is able to construct polished report documents, which provide high-level summaries as well as detailed analyses of GST results. Reports are intended to be quick and easy way of analyzing a GST estimate, and pyGSTi's report generation functions are specifically designed to interact with its high-level driver functions (see the high-level algorithms tutorial). Currently there is only a single report generation function, `pygsti.report.create_general_report`, which takes one or more `Results` objects as input and produces an HTML file as output. The HTML format allows the reports to include **interactive plots** and **switches**, making it easy to compare different types of analysis or data sets. \n", "\n", "PyGSTi's \"general\" report creates a stand-alone HTML document which cannot run Python. Thus, all the results displayed in the report must be pre-computed (in Python). If you find yourself wanting to fiddle with things and feel that the general report is too static, please consider using a `Workspace` object (see following tutorials) within a Jupyter notebook, where you can intermix report tables/plots and Python. Internally, `create_general_report` is just a canned routine which uses a `WorkSpace` object to generate various tables and plots and then inserts them into a HTML template. \n", "\n", "**Note to veteran users:** PyGSTi has recently transitioned to producing HTML (rather than LaTeX/PDF) reports. The way to generate such report is largely unchanged, with one important exception. Previously, the `Results` object had various report-generation methods included within it. We've found this is too restrictive, as we'd sometimes like to generate a report which utilizes the results from multiple runs of GST (to compare them, for instance). Thus, the `Results` class is now just a container for a `DataSet` and its related `GateSet`s, `GatestringStructure`s, etc. All of the report-generation capability is now housed in within separate report functions, which we now demonstrate.\n", "\n", "\n", "### Get some `Results`\n", "We start by performing GST using `do_long_sequence_gst`, as usual, to create a `Results` object (we could also have just loaded one from file)." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loading from cache file: tutorial_files/Example_Dataset.txt.cache\n", "--- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", "--- LGST ---\n", " Singular values of I_tilde (truncating to first 4 of 6) = \n", " 4.243708948119547\n", " 1.1796799227093195\n", " 0.9631622821716207\n", " 0.9415276081969872\n", " 0.04762148814438355\n", " 0.015427853978163174\n", " \n", " Singular values of target I_tilde (truncating to first 4 of 6) = \n", " 4.242640687119285\n", " 1.4142135623730954\n", " 1.4142135623730947\n", " 1.4142135623730945\n", " 3.1723744950054595e-16\n", " 1.0852733691121267e-16\n", " \n", " Resulting gate set:\n", " \n", " rho0 = TPParameterizedSPAMVec with dimension 4\n", " 0.71 0 0.03 0.75\n", " \n", " \n", " Mdefault = TPPOVM with effect vectors:\n", " 0: FullyParameterizedSPAMVec with dimension 4\n", " 0.73 0 0 0.65\n", " \n", " 1: ComplementSPAMVec with dimension 4\n", " 0.69 0 0-0.65\n", " \n", " \n", " \n", " Gi = \n", " TPParameterizedGate with shape (4, 4)\n", " 1.00 0 0 0\n", " 0 0.93-0.05 0.02\n", " 0 0.01 0.90 0.02\n", " 0 0.01 0 0.91\n", " \n", " \n", " Gx = \n", " TPParameterizedGate with shape (4, 4)\n", " 1.00 0 0 0\n", " 0 0.91 0 0\n", " -0.02 0-0.04-1.00\n", " -0.05 0.04 0.81 0\n", " \n", " \n", " Gy = \n", " TPParameterizedGate with shape (4, 4)\n", " 1.00 0 0 0\n", " 0.03-0.02 0 0.98\n", " 0-0.01 0.89-0.03\n", " -0.06-0.81 0 0.02\n", " \n", " \n", " \n", " \n", "--- Iterative MLGST: Iter 1 of 5 92 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 82.3697, mu=0, |J|=1008.64\n", " --- Outer Iter 1: norm_f = 53.5029, mu=78.8972, |J|=1008.97\n", " --- Outer Iter 2: norm_f = 53.4377, mu=26.2991, |J|=1008.95\n", " --- Outer Iter 3: norm_f = 53.4374, mu=8.76636, |J|=1008.96\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 53.4374 (91 data params - 31 model params = expected mean of 60; p-value = 0.712585)\n", " Completed in 0.2s\n", " 2*Delta(log(L)) = 53.6041\n", " Iteration 1 took 0.2s\n", " \n", "--- Iterative MLGST: Iter 2 of 5 168 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 161.901, mu=0, |J|=1397.55\n", " --- Outer Iter 1: norm_f = 122.269, mu=138.541, |J|=1387.32\n", " --- Outer Iter 2: norm_f = 122.044, mu=46.1804, |J|=1387.25\n", " --- Outer Iter 3: norm_f = 122.043, mu=15.3935, |J|=1387.25\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 122.043 (167 data params - 31 model params = expected mean of 136; p-value = 0.798502)\n", " Completed in 0.5s\n", " 2*Delta(log(L)) = 122.364\n", " Iteration 2 took 0.5s\n", " \n", "--- Iterative MLGST: Iter 3 of 5 450 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 494.492, mu=0, |J|=2295.1\n", " --- Outer Iter 1: norm_f = 420.494, mu=346.299, |J|=2300.53\n", " --- Outer Iter 2: norm_f = 420.362, mu=115.433, |J|=2300.53\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 420.362 (449 data params - 31 model params = expected mean of 418; p-value = 0.45835)\n", " Completed in 0.7s\n", " 2*Delta(log(L)) = 420.91\n", " Iteration 3 took 0.7s\n", " \n", "--- Iterative MLGST: Iter 4 of 5 862 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 834.305, mu=0, |J|=3309.45\n", " --- Outer Iter 1: norm_f = 803.557, mu=636.004, |J|=3285.78\n", " --- Outer Iter 2: norm_f = 803.524, mu=212.001, |J|=3285.44\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 803.524 (861 data params - 31 model params = expected mean of 830; p-value = 0.739067)\n", " Completed in 1.2s\n", " 2*Delta(log(L)) = 804.772\n", " Iteration 4 took 1.2s\n", " \n", "--- Iterative MLGST: Iter 5 of 5 1282 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 1254.77, mu=0, |J|=4219.25\n", " --- Outer Iter 1: norm_f = 1245.46, mu=917.165, |J|=4221.1\n", " --- Outer Iter 2: norm_f = 1245.45, mu=305.722, |J|=4221.48\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 1245.45 (1281 data params - 31 model params = expected mean of 1250; p-value = 0.53101)\n", " Completed in 2.1s\n", " 2*Delta(log(L)) = 1247.04\n", " Iteration 5 took 2.2s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " --- Outer Iter 0: norm_f = 623.519, mu=0, |J|=2984.56\n", " --- Outer Iter 1: norm_f = 623.489, mu=458.349, |J|=2986.18\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Maximum log(L) = 623.489 below upper bound of -2.13594e+06\n", " 2*Delta(log(L)) = 1246.98 (1281 data params - 31 model params = expected mean of 1250; p-value = 0.518811)\n", " Completed in 0.8s\n", " 2*Delta(log(L)) = 1246.98\n", " Final MLGST took 0.8s\n", " \n", "Iterative MLGST Total Time: 5.7s\n", " -- Adding Gauge Optimized (go0) --\n" ] } ], "source": [ "import pygsti\n", "from pygsti.construction import std1Q_XYI\n", "\n", "gs_target = std1Q_XYI.gs_target\n", "fiducials = std1Q_XYI.fiducials\n", "germs = std1Q_XYI.germs\n", "maxLengths = [1,2,4,8,16]\n", "ds = pygsti.io.load_dataset(\"tutorial_files/Example_Dataset.txt\", cache=True)\n", "\n", "#Run GST\n", "gs_target.set_all_parameterizations(\"TP\") #TP-constrained\n", "results = pygsti.do_long_sequence_gst(ds, gs_target, fiducials, fiducials, germs,\n", " maxLengths, verbosity=3)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Make a report\n", "Now that we 
have `results`, we use the `create_standard_report` function within `pygsti.report.factory` to generate a report. If the given filename ends in \"`.pdf`\" then a PDF-format report is generated; otherwise the file name specifies a folder that will be filled with HTML pages. To open an HTML-format report, you open the `main.html` file directly inside the report's folder. Setting `auto_open=True` makes the finished report open in your web browser automatically. " ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to tutorial_files/exampleReport directory\n", "Opening tutorial_files/exampleReport/main.html...\n", "*** Report Generation Complete! Total time 101.352s ***\n", "\n", "\n", "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Latex file(s) successfully generated. Attempting to compile with pdflatex...\n", "Initial output PDF tutorial_files/exampleReport.pdf successfully generated.\n", "Final output PDF tutorial_files/exampleReport.pdf successfully generated. Cleaning up .aux and .log files.\n", "Opening tutorial_files/exampleReport.pdf...\n", "*** Report Generation Complete! Total time 142.178s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#HTML\n", "pygsti.report.create_standard_report(results, \"tutorial_files/exampleReport\", \n", " title=\"GST Example Report\", verbosity=1, auto_open=True)\n", "\n", "print(\"\\n\")\n", "\n", "#PDF\n", "pygsti.report.create_standard_report(results, \"tutorial_files/exampleReport.pdf\", \n", " title=\"GST Example Report\", verbosity=1, auto_open=True)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "There are several remarks about these reports worth noting:\n", "1. The **HTML reports are the primary report type in pyGSTi**, and are much more flexible. The PDF reports are more limited (they can only display a *single* estimate and gauge optimization), and essentially contain a subset of the information and descriptive text of an HTML report. So, if you can, use the HTML reports. The PDF report's strength is its portability: PDFs are easily displayed by many devices, and they embed all that they need neatly into a single file. **If you need to generate a PDF report** from `Results` objects that have multiple estimates and/or gauge optimizations, consider using the `Results` object's `view` method to single out the estimate and gauge optimization you're after, as sketched below.\n", "2. It's best to use **Firefox** when opening the HTML reports. (If there's a problem with your browser's capabilities it will be shown on the screen when you try to load the report.)\n", "3. You'll need **`pdflatex`** on your system to compile PDF reports.\n", "4. To familiarize yourself with the layout of an HTML report, click on the gray **\"Help\" link** on the black sidebar."
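,
"\n",
"As a rough sketch of the `view` approach mentioned in remark 1 (the `multi_results` variable and the estimate/gauge-optimization labels below are placeholders, and the exact `view` arguments should be checked against its docstring):\n",
"\n",
"```python\n",
"# Suppose multi_results holds several estimates and gauge optimizations.\n",
"# Keep just one of each, then generate a PDF report from the single-estimate view.\n",
"single_view = multi_results.view(['TP'], 'go0')\n",
"pygsti.report.create_standard_report(single_view, \"tutorial_files/exampleSingleEstimateReport.pdf\",\n",
"                                     title=\"Single-Estimate PDF Report\", auto_open=True)\n",
"```"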
] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Multiple estimates in a single report\n", "Next, let's analyze the same data two different ways: with and without the TP-constraint (i.e. whether the gates *must* be trace-preserving) and furthermore gauge optmimize each case using several different SPAM-weights. In each case we'll call `do_long_sequence_gst` with `gaugeOptParams=False`, so that no gauge optimization is done, and then perform several gauge optimizations separately and add these to the `Results` object via its `add_gaugeoptimized` function." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--- Gate Sequence Creation ---\n", "--- LGST ---\n", "--- Iterative MLGST: [##################################################] 100.0% 1282 gate strings ---\n", "Iterative MLGST Total Time: 3.9s\n" ] } ], "source": [ "#Case1: TP-constrained GST\n", "tpTarget = gs_target.copy()\n", "tpTarget.set_all_parameterizations(\"TP\")\n", "results_tp = pygsti.do_long_sequence_gst(ds, tpTarget, fiducials, fiducials, germs,\n", " maxLengths, gaugeOptParams=False, verbosity=1)\n", "\n", "#Gauge optimize\n", "est = results_tp.estimates['default']\n", "gsFinal = est.gatesets['final iteration estimate']\n", "gsTarget = est.gatesets['target']\n", "for spamWt in [1e-4,1e-2,1.0]:\n", " gs = pygsti.gaugeopt_to_target(gsFinal,gsTarget,{'gates':1, 'spam':spamWt})\n", " est.add_gaugeoptimized({'itemWeights': {'gates':1, 'spam':spamWt}}, gs, \"Spam %g\" % spamWt)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--- Gate Sequence Creation ---\n", "--- LGST ---\n", "--- Iterative MLGST: [##################################################] 100.0% 1282 gate strings ---\n", "Iterative MLGST Total Time: 4.9s\n" ] } ], "source": [ "#Case2: \"Full\" GST\n", "fullTarget = gs_target.copy()\n", "fullTarget.set_all_parameterizations(\"full\")\n", "results_full = pygsti.do_long_sequence_gst(ds, fullTarget, fiducials, fiducials, germs,\n", " maxLengths, gaugeOptParams=False, verbosity=1)\n", "\n", "#Gauge optimize\n", "est = results_full.estimates['default']\n", "gsFinal = est.gatesets['final iteration estimate']\n", "gsTarget = est.gatesets['target']\n", "for spamWt in [1e-4,1e-2,1.0]:\n", " gs = pygsti.gaugeopt_to_target(gsFinal,gsTarget,{'gates':1, 'spam':spamWt})\n", " est.add_gaugeoptimized({'itemWeights': {'gates':1, 'spam':spamWt}}, gs, \"Spam %g\" % spamWt)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "We'll now call the *same* `create_standard_report` function but this time instead of passing a single `Results` object as the first argument we'll pass a *dictionary* of them. This will result in a **HTML report that includes switches** to select which case (\"TP\" or \"Full\") as well as which gauge optimization to display output quantities for. PDF reports cannot support this interactivity, and so **if you try to generate a PDF report you'll get an error**." 
] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "deletable": true, "editable": true, "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", " targetSpamBriefTable took 0.312973 seconds\n", " targetGatesBoxTable took 1.28042 seconds\n", " datasetOverviewTable took 2.30963 seconds\n", " bestGatesetSpamParametersTable took 0.005752 seconds\n", " bestGatesetSpamBriefTable took 2.579674 seconds\n", " bestGatesetSpamVsTargetTable took 0.542768 seconds\n", " bestGatesetGaugeOptParamsTable took 0.0014 seconds\n", " bestGatesetGatesBoxTable took 2.949121 seconds\n", " bestGatesetChoiEvalTable took 4.359151 seconds\n", " bestGatesetDecompTable took 2.126372 seconds\n", " bestGatesetEvalTable took 0.008888 seconds\n", " bestGermsEvalTable took 0.041023 seconds\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/enielse/research/pyGSTi/packages/pygsti/extras/rb/theory.py:200: UserWarning:\n", "\n", "Output may be unreliable because the gateset is not approximately trace-preserving.\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " bestGatesetVsTargetTable took 0.473496 seconds\n", " bestGatesVsTargetTable_gv took 1.461507 seconds\n", " bestGatesVsTargetTable_gvgerms took 0.905775 seconds\n", " bestGatesVsTargetTable_gi took 0.032181 seconds\n", " bestGatesVsTargetTable_gigerms took 0.082171 seconds\n", " bestGatesVsTargetTable_sum took 1.550003 seconds\n", " bestGatesetErrGenBoxTable took 10.681632 seconds\n", " metadataTable took 0.005285 seconds\n", " stdoutBlock took 0.00044 seconds\n", " profilerTable took 0.004826 seconds\n", " softwareEnvTable took 0.001049 seconds\n", " exampleTable took 0.097128 seconds\n", " singleMetricTable_gv took 1.46908 seconds\n", " singleMetricTable_gi took 0.068071 seconds\n", " fiducialListTable took 0.001376 seconds\n", " prepStrListTable took 0.000579 seconds\n", " effectStrListTable took 0.00061 seconds\n", " colorBoxPlotKeyPlot took 0.119705 seconds\n", " germList2ColTable took 0.000893 seconds\n", " progressTable took 5.499939 seconds\n", "*** Generating plots ***\n", " gramBarPlot took 0.257305 seconds\n", " progressBarPlot took 4.839213 seconds\n", " progressBarPlot_sum took 0.001218 seconds\n", " finalFitComparePlot took 3.0564 seconds\n", " bestEstimateColorBoxPlot took 16.599898 seconds\n", " bestEstimateTVDColorBoxPlot took 14.595608 seconds\n", " bestEstimateColorScatterPlot took 17.926864 seconds\n", " bestEstimateColorHistogram took 13.630301 seconds\n", " progressTable_scl took 0.000103 seconds\n", " progressBarPlot_scl took 0.000153 seconds\n", " bestEstimateColorBoxPlot_scl took 0.000429 seconds\n", " bestEstimateColorScatterPlot_scl took 0.000363 seconds\n", " bestEstimateColorHistogram_scl took 0.000266 seconds\n", " dataScalingColorBoxPlot took 0.000141 seconds\n", " dsComparisonSummary took 0.718253 seconds\n", " dsComparisonHistogram took 1.845246 seconds\n", " dsComparisonBoxPlot took 0.789799 seconds\n", "*** Merging into template file ***\n", " Rendering bestEstimateColorScatterPlot_scl took 0.00122 seconds\n", " Rendering bestGatesetChoiEvalTable took 0.073341 seconds\n", " Rendering bestGatesetDecompTable took 0.065171 seconds\n", " Rendering dsComparisonBoxPlot took 0.055849 seconds\n", " Rendering fiducialListTable took 
0.007464 seconds\n", " Rendering bestGatesVsTargetTable_gigerms took 0.007171 seconds\n", " Rendering profilerTable took 0.003117 seconds\n", " Rendering exampleTable took 0.003778 seconds\n", " Rendering progressTable_scl took 0.000734 seconds\n", " Rendering bestGatesetSpamVsTargetTable took 0.012409 seconds\n", " Rendering stdoutBlock took 0.001215 seconds\n", " Rendering gramBarPlot took 0.00716 seconds\n", " Rendering bestGatesetVsTargetTable took 0.006127 seconds\n", " Rendering metricSwitchboard_gi took 8.6e-05 seconds\n", " Rendering bestGatesetEvalTable took 0.031753 seconds\n", " Rendering bestGatesVsTargetTable_sum took 0.025222 seconds\n", " Rendering maxLSwitchboard1 took 0.000129 seconds\n", " Rendering singleMetricTable_gi took 0.038006 seconds\n", " Rendering bestGatesetSpamParametersTable took 0.007875 seconds\n", " Rendering progressBarPlot_sum took 0.004387 seconds\n", " Rendering metricSwitchboard_gv took 6e-05 seconds\n", " Rendering finalFitComparePlot took 0.003422 seconds\n", " Rendering bestGatesetGatesBoxTable took 0.089595 seconds\n", " Rendering bestGatesetErrGenBoxTable took 0.15915 seconds\n", " Rendering topSwitchboard took 0.00012 seconds\n", " Rendering prepStrListTable took 0.003802 seconds\n", " Rendering bestGermsEvalTable took 0.125183 seconds\n", " Rendering dsComparisonHistogram took 0.047215 seconds\n", " Rendering effectStrListTable took 0.003593 seconds\n", " Rendering bestEstimateColorHistogram_scl took 0.00065 seconds\n", " Rendering germList2ColTable took 0.006534 seconds\n", " Rendering bestEstimateColorScatterPlot took 0.123247 seconds\n", " Rendering bestEstimateTVDColorBoxPlot took 0.121311 seconds\n", " Rendering bestGatesVsTargetTable_gvgerms took 0.052309 seconds\n", " Rendering progressBarPlot took 0.004596 seconds\n", " Rendering targetSpamBriefTable took 0.019525 seconds\n", " Rendering dsComparisonSummary took 0.004462 seconds\n", " Rendering metadataTable took 0.020159 seconds\n", " Rendering bestEstimateColorBoxPlot_scl took 0.000685 seconds\n", " Rendering targetGatesBoxTable took 0.01523 seconds\n", " Rendering softwareEnvTable took 0.00352 seconds\n", " Rendering bestGatesetSpamBriefTable took 0.098314 seconds\n", " Rendering dscmpSwitchboard took 5.3e-05 seconds\n", " Rendering bestEstimateColorHistogram took 0.081824 seconds\n", " Rendering datasetOverviewTable took 0.000798 seconds\n", " Rendering bestGatesetGaugeOptParamsTable took 0.00568 seconds\n", " Rendering bestGatesVsTargetTable_gi took 0.009045 seconds\n", " Rendering singleMetricTable_gv took 0.060615 seconds\n", " Rendering bestGatesVsTargetTable_gv took 0.030262 seconds\n", " Rendering progressBarPlot_scl took 0.000839 seconds\n", " Rendering colorBoxPlotKeyPlot took 0.004212 seconds\n", " Rendering bestEstimateColorBoxPlot took 0.103172 seconds\n", " Rendering progressTable took 0.009069 seconds\n", " Rendering dataScalingColorBoxPlot took 0.000695 seconds\n", "Output written to tutorial_files/exampleMultiEstimateReport directory\n", "Opening tutorial_files/exampleMultiEstimateReport/main.html...\n", "*** Report Generation Complete! 
Total time 119.93s ***\n" ] } ], "source": [ "ws = pygsti.report.create_standard_report({'TP': results_tp, \"Full\": results_full},\n", " \"tutorial_files/exampleMultiEstimateReport\",\n", " title=\"Example Multi-Estimate Report\", \n", " verbosity=2, auto_open=True)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true, "deletable": true, "editable": true }, "source": [ "In the above call we capture the return value in the variable `ws`, a `Workspace` object. PyGSTi's `Workspace` objects function both as a factory for figures and tables and as a smart cache for computed values. Within `create_standard_report` a `Workspace` object is created and used to create all the figures in the report. As an intended side effect, each of these figures is cached, along with some of the intermediate results used to create it. As we'll see below, a `Workspace` can also be specified as input to `create_standard_report`, allowing it to utilize previously cached quantities.\n", "\n", "**Note to veteran users:** Other report formats, such as **`beamer`-class PDF and PowerPoint presentations, have been dropped from pyGSTi**. These presentation formats were rarely used, and moreover we feel that the HTML format provides all of the functionality that was present in these discontinued formats.\n", "\n", "**Another way**: Because both `results_tp` and `results_full` above used the same dataset and gate sequences, we could have combined them as two estimates in a single `Results` object (see the previous tutorial on pyGSTi's `Results` object). This can be done by renaming at least one of the `\"default\"`-named estimates in `results_tp` or `results_full` (below we rename both) and then adding the estimate within `results_full` to the estimates already contained in `results_tp`: " ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "results_tp.rename_estimate('default','TP')\n", "results_full.rename_estimate('default','Full')\n", "results_both = results_tp.copy() #copy just for neatness\n", "results_both.add_estimates(results_full, estimatesToAdd=['Full'])" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Creating a report using `results_both` will result in the same report we just generated. We'll demonstrate this anyway, but in addition we'll supply `create_standard_report` with a `ws` argument, which tells it to use any cached values contained in a given *input* `Workspace` to expedite report generation. Since our workspace object has the exact quantities we need cached in it, you'll notice a significant speedup. Finally, note that even though there's just a single `Results` object, you **still can't generate a PDF report** from it because it contains multiple estimates."
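,
"\n",
"As a quick, purely illustrative sanity check before generating the report, you can confirm what the combined object now contains:\n",
"\n",
"```python\n",
"# The combined Results object should now hold both estimates...\n",
"print(list(results_both.estimates.keys()))   # e.g. ['TP', 'Full']\n",
"\n",
"# ...and each estimate keeps the gauge-optimized gate sets added earlier via add_gaugeoptimized.\n",
"print(list(results_both.estimates['TP'].gatesets.keys()))\n",
"```"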
] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", " targetSpamBriefTable took 0.000777 seconds\n", " targetGatesBoxTable took 0.000484 seconds\n", " datasetOverviewTable took 0.00037 seconds\n", " bestGatesetSpamParametersTable took 0.001608 seconds\n", " bestGatesetSpamBriefTable took 0.001055 seconds\n", " bestGatesetSpamVsTargetTable took 0.001732 seconds\n", " bestGatesetGaugeOptParamsTable took 0.000828 seconds\n", " bestGatesetGatesBoxTable took 0.001761 seconds\n", " bestGatesetChoiEvalTable took 0.001175 seconds\n", " bestGatesetDecompTable took 0.001198 seconds\n", " bestGatesetEvalTable took 0.000639 seconds\n", " bestGermsEvalTable took 0.041371 seconds\n", " bestGatesetVsTargetTable took 0.010018 seconds\n", " bestGatesVsTargetTable_gv took 0.001545 seconds\n", " bestGatesVsTargetTable_gvgerms took 0.002953 seconds\n", " bestGatesVsTargetTable_gi took 0.000428 seconds\n", " bestGatesVsTargetTable_gigerms took 0.056629 seconds\n", " bestGatesVsTargetTable_sum took 0.001403 seconds\n", " bestGatesetErrGenBoxTable took 0.001185 seconds\n", " metadataTable took 0.004146 seconds\n", " stdoutBlock took 0.000214 seconds\n", " profilerTable took 0.001093 seconds\n", " softwareEnvTable took 0.00019 seconds\n", " exampleTable took 0.000362 seconds\n", " singleMetricTable_gv took 1.287142 seconds\n", " singleMetricTable_gi took 0.079235 seconds\n", " fiducialListTable took 0.00044 seconds\n", " prepStrListTable took 0.000402 seconds\n", " effectStrListTable took 0.000306 seconds\n", " colorBoxPlotKeyPlot took 0.000513 seconds\n", " germList2ColTable took 0.000376 seconds\n", " progressTable took 4.966169 seconds\n", "*** Generating plots ***\n", " gramBarPlot took 0.000858 seconds\n", " progressBarPlot took 4.956245 seconds\n", " progressBarPlot_sum took 0.001129 seconds\n", " finalFitComparePlot took 1.708799 seconds\n", " bestEstimateColorBoxPlot took 11.38969 seconds\n", " bestEstimateTVDColorBoxPlot took 11.136312 seconds\n", " bestEstimateColorScatterPlot took 15.227864 seconds\n", " bestEstimateColorHistogram took 11.324483 seconds\n", " progressTable_scl took 0.000114 seconds\n", " progressBarPlot_scl took 0.000154 seconds\n", " bestEstimateColorBoxPlot_scl took 0.000295 seconds\n", " bestEstimateColorScatterPlot_scl took 0.000402 seconds\n", " bestEstimateColorHistogram_scl took 0.000389 seconds\n", " dataScalingColorBoxPlot took 0.000168 seconds\n", "*** Merging into template file ***\n", " Rendering bestEstimateColorScatterPlot_scl took 0.001085 seconds\n", " Rendering bestGatesetChoiEvalTable took 0.06947 seconds\n", " Rendering bestGatesetDecompTable took 0.065558 seconds\n", " Rendering fiducialListTable took 0.002655 seconds\n", " Rendering bestGatesVsTargetTable_gigerms took 0.008593 seconds\n", " Rendering profilerTable took 0.003759 seconds\n", " Rendering exampleTable took 0.004476 seconds\n", " Rendering progressTable_scl took 0.001497 seconds\n", " Rendering bestGatesetSpamVsTargetTable took 0.015135 seconds\n", " Rendering stdoutBlock took 0.001826 seconds\n", " Rendering gramBarPlot took 0.006848 seconds\n", " Rendering bestGatesetVsTargetTable took 0.007588 seconds\n", " Rendering metricSwitchboard_gi took 6.6e-05 
seconds\n", " Rendering bestGatesetEvalTable took 0.049971 seconds\n", " Rendering bestGatesVsTargetTable_sum took 0.039591 seconds\n", " Rendering maxLSwitchboard1 took 0.000223 seconds\n", " Rendering singleMetricTable_gi took 0.024445 seconds\n", " Rendering bestGatesetSpamParametersTable took 0.010423 seconds\n", " Rendering progressBarPlot_sum took 0.005552 seconds\n", " Rendering metricSwitchboard_gv took 6.4e-05 seconds\n", " Rendering finalFitComparePlot took 0.004044 seconds\n", " Rendering bestGatesetGatesBoxTable took 0.106337 seconds\n", " Rendering bestGatesetErrGenBoxTable took 0.162506 seconds\n", " Rendering topSwitchboard took 0.00012 seconds\n", " Rendering prepStrListTable took 0.003244 seconds\n", " Rendering bestGermsEvalTable took 0.117191 seconds\n", " Rendering effectStrListTable took 0.002013 seconds\n", " Rendering bestEstimateColorHistogram_scl took 0.001244 seconds\n", " Rendering germList2ColTable took 0.005174 seconds\n", " Rendering bestEstimateColorScatterPlot took 0.124914 seconds\n", " Rendering bestEstimateTVDColorBoxPlot took 0.140803 seconds\n", " Rendering bestGatesVsTargetTable_gvgerms took 0.052809 seconds\n", " Rendering progressBarPlot took 0.005496 seconds\n", " Rendering targetSpamBriefTable took 0.020061 seconds\n", " Rendering metadataTable took 0.019916 seconds\n", " Rendering bestEstimateColorBoxPlot_scl took 0.00112 seconds\n", " Rendering targetGatesBoxTable took 0.016745 seconds\n", " Rendering softwareEnvTable took 0.003139 seconds\n", " Rendering bestGatesetSpamBriefTable took 0.101722 seconds\n", " Rendering bestEstimateColorHistogram took 0.084224 seconds\n", " Rendering datasetOverviewTable took 0.000857 seconds\n", " Rendering bestGatesetGaugeOptParamsTable took 0.005323 seconds\n", " Rendering bestGatesVsTargetTable_gi took 0.006649 seconds\n", " Rendering singleMetricTable_gv took 0.030942 seconds\n", " Rendering bestGatesVsTargetTable_gv took 0.023581 seconds\n", " Rendering progressBarPlot_scl took 0.000684 seconds\n", " Rendering colorBoxPlotKeyPlot took 0.005168 seconds\n", " Rendering bestEstimateColorBoxPlot took 0.105023 seconds\n", " Rendering progressTable took 0.009055 seconds\n", " Rendering dataScalingColorBoxPlot took 0.001103 seconds\n", "Output written to tutorial_files/exampleMultiEstimateReport2 directory\n", "Opening tutorial_files/exampleMultiEstimateReport2/main.html...\n", "*** Report Generation Complete! Total time 64.1567s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pygsti.report.create_standard_report(results_both,\n", " \"tutorial_files/exampleMultiEstimateReport2\",\n", " title=\"Example Multi-Estimate Report (v2)\", \n", " verbosity=2, auto_open=True, ws=ws)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Multiple estimates and `do_stdpractice_gst`\n", "It's no coincidence that a `Results` object containing multiple estimates using the same data is precisely what's returned from `do_stdpractice_gst` (see docstring for information on its arguments). This allows one to run GST multiple times, creating several different \"standard\" estimates and gauge optimizations, and plot them all in a single (HTML) report. 
" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-- Std Practice: Iter 1 of 3 (TP) --: \n", " --- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " --- LGST ---\n", " Singular values of I_tilde (truncating to first 4 of 6) = \n", " 4.243708948119547\n", " 1.1796799227093195\n", " 0.9631622821716207\n", " 0.9415276081969872\n", " 0.04762148814438355\n", " 0.015427853978163174\n", " \n", " Singular values of target I_tilde (truncating to first 4 of 6) = \n", " 4.242640687119285\n", " 1.4142135623730954\n", " 1.4142135623730947\n", " 1.4142135623730945\n", " 3.1723744950054595e-16\n", " 1.0852733691121267e-16\n", " \n", " Resulting gate set:\n", " \n", " rho0 = TPParameterizedSPAMVec with dimension 4\n", " 0.71 0 0.03 0.75\n", " \n", " \n", " Mdefault = TPPOVM with effect vectors:\n", " 0: FullyParameterizedSPAMVec with dimension 4\n", " 0.73 0 0 0.65\n", " \n", " 1: ComplementSPAMVec with dimension 4\n", " 0.69 0 0-0.65\n", " \n", " \n", " \n", " Gi = \n", " TPParameterizedGate with shape (4, 4)\n", " 1.00 0 0 0\n", " 0 0.93-0.05 0.02\n", " 0 0.01 0.90 0.02\n", " 0 0.01 0 0.91\n", " \n", " \n", " Gx = \n", " TPParameterizedGate with shape (4, 4)\n", " 1.00 0 0 0\n", " 0 0.91 0 0\n", " -0.02 0-0.04-1.00\n", " -0.05 0.04 0.81 0\n", " \n", " \n", " Gy = \n", " TPParameterizedGate with shape (4, 4)\n", " 1.00 0 0 0\n", " 0.03-0.02 0 0.98\n", " 0-0.01 0.89-0.03\n", " -0.06-0.81 0 0.02\n", " \n", " \n", " \n", " \n", " --- Iterative MLGST: Iter 1 of 5 92 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 82.3697, mu=0, |J|=1008.64\n", " --- Outer Iter 1: norm_f = 53.5029, mu=78.8972, |J|=1008.97\n", " --- Outer Iter 2: norm_f = 53.4377, mu=26.2991, |J|=1008.95\n", " --- Outer Iter 3: norm_f = 53.4374, mu=8.76636, |J|=1008.96\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 53.4374 (91 data params - 31 model params = expected mean of 60; p-value = 0.712585)\n", " Completed in 0.1s\n", " 2*Delta(log(L)) = 53.6041\n", " Iteration 1 took 0.1s\n", " \n", " --- Iterative MLGST: Iter 2 of 5 168 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 161.901, mu=0, |J|=1397.55\n", " --- Outer Iter 1: norm_f = 122.269, mu=138.541, |J|=1387.32\n", " --- Outer Iter 2: norm_f = 122.044, mu=46.1804, |J|=1387.25\n", " --- Outer Iter 3: norm_f = 122.043, mu=15.3935, |J|=1387.25\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 122.043 (167 data params - 31 model params = expected mean of 136; p-value = 0.798502)\n", " Completed in 0.2s\n", " 2*Delta(log(L)) = 122.364\n", " Iteration 2 took 0.2s\n", " \n", " --- Iterative MLGST: Iter 3 of 5 450 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 494.492, mu=0, |J|=2295.1\n", " --- Outer Iter 1: norm_f = 420.494, mu=346.299, |J|=2300.53\n", " --- Outer Iter 2: norm_f = 420.362, mu=115.433, |J|=2300.53\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 420.362 (449 data params - 31 model params = expected mean of 418; p-value = 0.45835)\n", " Completed in 0.4s\n", " 2*Delta(log(L)) = 420.91\n", " Iteration 3 took 0.5s\n", " \n", " --- Iterative MLGST: Iter 4 of 5 862 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 834.305, mu=0, |J|=3309.45\n", " --- Outer Iter 1: norm_f = 803.557, mu=636.004, |J|=3285.78\n", " --- Outer Iter 2: norm_f = 803.524, mu=212.001, |J|=3285.44\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 803.524 (861 data params - 31 model params = expected mean of 830; p-value = 0.739067)\n", " Completed in 0.9s\n", " 2*Delta(log(L)) = 804.772\n", " Iteration 4 took 1.0s\n", " \n", " --- Iterative MLGST: Iter 5 of 5 1282 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 1254.77, mu=0, |J|=4219.25\n", " --- Outer Iter 1: norm_f = 1245.46, mu=917.165, |J|=4221.1\n", " --- Outer Iter 2: norm_f = 1245.45, mu=305.722, |J|=4221.48\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 1245.45 (1281 data params - 31 model params = expected mean of 1250; p-value = 0.53101)\n", " Completed in 1.1s\n", " 2*Delta(log(L)) = 1247.04\n", " Iteration 5 took 1.1s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " --- Outer Iter 0: norm_f = 623.519, mu=0, |J|=2984.56\n", " --- Outer Iter 1: norm_f = 623.489, mu=458.349, |J|=2986.18\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Maximum log(L) = 623.489 below upper bound of -2.13594e+06\n", " 2*Delta(log(L)) = 1246.98 (1281 data params - 31 model params = expected mean of 1250; p-value = 0.518811)\n", " Completed in 0.4s\n", " 2*Delta(log(L)) = 1246.98\n", " Final MLGST took 0.5s\n", " \n", " Iterative MLGST Total Time: 3.4s\n", " -- Performing 'single' gauge optimization on TP estimate --\n", " -- Adding Gauge Optimized (single) --\n", " -- Performing 'Spam 0.001' gauge optimization on TP estimate --\n", " -- Adding Gauge Optimized (Spam 0.001) --\n", " -- Performing 'Spam 0.001+v' gauge optimization on TP estimate --\n", " -- Adding Gauge Optimized (Spam 0.001+v) --\n", "-- Std Practice: Iter 2 of 3 (CPTP) --: \n", " --- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " --- Iterative MLGST: Iter 1 of 5 92 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 1.10824e+07, mu=0, |J|=21245.6\n", " --- Outer Iter 1: norm_f = 827.29, mu=50045.8, |J|=996.236\n", " --- Outer Iter 2: norm_f = 720.478, mu=16681.9, |J|=947.523\n", " --- Outer Iter 3: norm_f = 708.52, mu=5560.65, |J|=933.264\n", " --- Outer Iter 4: norm_f = 705.237, mu=1853.55, |J|=931.906\n", " --- Outer Iter 5: norm_f = 704.334, mu=617.85, |J|=932.043\n", " --- Outer Iter 6: norm_f = 704.252, mu=205.95, |J|=932.112\n", " --- Outer Iter 7: norm_f = 704.063, mu=68.65, |J|=932.083\n", " --- Outer Iter 8: norm_f = 680.677, mu=183.067, |J|=926.823\n", " --- Outer Iter 9: norm_f = 588.025, mu=488.178, |J|=914.004\n", " --- Outer Iter 10: norm_f = 418.923, mu=1301.81, |J|=917.108\n", " --- Outer Iter 11: norm_f = 213.173, mu=1144.16, |J|=891.694\n", " --- Outer Iter 12: norm_f = 105.297, mu=381.386, |J|=979.128\n", " --- Outer Iter 13: norm_f = 101.939, mu=127.129, |J|=964.231\n", " --- Outer Iter 14: norm_f = 99.843, mu=127.128, |J|=949.404\n", " --- Outer Iter 15: norm_f = 94.52, mu=81.609, |J|=951.231\n", " --- Outer Iter 16: norm_f = 92.8077, mu=27.203, |J|=953.238\n", " --- Outer Iter 17: norm_f = 85.0301, mu=18.1353, |J|=941.494\n", " --- Outer Iter 18: norm_f = 78.421, mu=36.2706, |J|=923.039\n", " --- Outer Iter 19: norm_f = 60.1288, mu=26.2589, |J|=933.945\n", " --- Outer Iter 20: norm_f = 53.8824, mu=8.75297, |J|=947.079\n", " --- Outer Iter 21: norm_f = 53.7107, mu=6.57963, |J|=947.149\n", " --- Outer Iter 22: norm_f = 53.6724, mu=8.51865, |J|=946.496\n", " --- Outer Iter 23: norm_f = 53.4385, mu=2.83955, |J|=948.209\n", " --- Outer Iter 24: norm_f = 53.4374, mu=0.946516, |J|=948.325\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 53.4374 (91 data params - 31 model params = expected mean of 60; p-value = 0.712585)\n", " Completed in 1.2s\n", " 2*Delta(log(L)) = 53.6041\n", " Iteration 1 took 1.2s\n", " \n", " --- Iterative MLGST: Iter 2 of 5 168 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 161.922, mu=0, |J|=1266.96\n", " --- Outer Iter 1: norm_f = 129.895, mu=138.542, |J|=1251.74\n", " --- Outer Iter 2: norm_f = 124.539, mu=120.128, |J|=1250.84\n", " --- Outer Iter 3: norm_f = 122.184, mu=40.0425, |J|=1258.26\n", " --- Outer Iter 4: norm_f = 122.062, mu=15.0297, |J|=1259.06\n", " --- Outer Iter 5: norm_f = 122.043, mu=5.00989, |J|=1259.68\n", " --- Outer Iter 6: norm_f = 122.043, mu=1.66996, |J|=1259.77\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 122.043 (167 data params - 31 model params = expected mean of 136; p-value = 0.798502)\n", " Completed in 0.5s\n", " 2*Delta(log(L)) = 122.364\n", " Iteration 2 took 0.5s\n", " \n", " --- Iterative MLGST: Iter 3 of 5 450 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 494.48, mu=0, |J|=1997.27\n", " --- Outer Iter 1: norm_f = 422.919, mu=346.298, |J|=1993.51\n", " --- Outer Iter 2: norm_f = 420.396, mu=115.433, |J|=1998.94\n", " --- Outer Iter 3: norm_f = 420.362, mu=38.4776, |J|=1999.49\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 420.362 (449 data params - 31 model params = expected mean of 418; p-value = 0.458351)\n", " Completed in 0.5s\n", " 2*Delta(log(L)) = 420.91\n", " Iteration 3 took 0.5s\n", " \n", " --- Iterative MLGST: Iter 4 of 5 862 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 834.332, mu=0, |J|=2714.95\n", " --- Outer Iter 1: norm_f = 805.156, mu=636.001, |J|=2696.74\n", " --- Outer Iter 2: norm_f = 803.666, mu=212, |J|=2699.35\n", " --- Outer Iter 3: norm_f = 803.535, mu=70.6667, |J|=2699.64\n", " --- Outer Iter 4: norm_f = 803.524, mu=23.5556, |J|=2699.75\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 803.524 (861 data params - 31 model params = expected mean of 830; p-value = 0.739066)\n", " Completed in 1.7s\n", " 2*Delta(log(L)) = 804.775\n", " Iteration 4 took 1.7s\n", " \n", " --- Iterative MLGST: Iter 5 of 5 1282 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 1254.76, mu=0, |J|=3159.02\n", " --- Outer Iter 1: norm_f = 1245.53, mu=917.158, |J|=3158.53\n", " --- Outer Iter 2: norm_f = 1245.45, mu=305.719, |J|=3158.97\n", " --- Outer Iter 3: norm_f = 1245.45, mu=101.906, |J|=3158.95\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 1245.45 (1281 data params - 31 model params = expected mean of 1250; p-value = 0.531009)\n", " Completed in 1.5s\n", " 2*Delta(log(L)) = 1247.04\n", " Iteration 5 took 1.5s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " --- Outer Iter 0: norm_f = 623.519, mu=0, |J|=2233.21\n", " --- Outer Iter 1: norm_f = 623.489, mu=458.349, |J|=2234.51\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Maximum log(L) = 623.489 below upper bound of -2.13594e+06\n", " 2*Delta(log(L)) = 1246.98 (1281 data params - 31 model params = expected mean of 1250; p-value = 0.51881)\n", " Completed in 0.6s\n", " 2*Delta(log(L)) = 1246.98\n", " Final MLGST took 0.6s\n", " \n", " Iterative MLGST Total Time: 6.1s\n", " -- Performing 'single' gauge optimization on CPTP estimate --\n", " -- Adding Gauge Optimized (single) --\n", " -- Performing 'Spam 0.001' gauge optimization on CPTP estimate --\n", " -- Adding Gauge Optimized (Spam 0.001) --\n", " -- Performing 'Spam 0.001+v' gauge optimization on CPTP estimate --\n", " -- Adding Gauge Optimized (Spam 0.001+v) --\n", "-- Std Practice: Iter 3 of 3 (Target) --: \n", " --- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " -- Performing 'single' gauge optimization on Target estimate --\n", " -- Adding Gauge Optimized (single) --\n", " -- Performing 'Spam 0.001' gauge optimization on Target estimate --\n", " -- Adding Gauge Optimized (Spam 0.001) --\n", " -- Performing 'Spam 0.001+v' gauge optimization on Target estimate --\n", " -- Adding Gauge Optimized (Spam 0.001+v) --\n", "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to tutorial_files/exampleStdReport directory\n", "Opening tutorial_files/exampleStdReport/main.html...\n", "*** Report Generation Complete! 
Total time 107.956s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "results_std = pygsti.do_stdpractice_gst(ds, gs_target, fiducials, fiducials, germs,\n", " maxLengths, verbosity=4, modes=\"TP,CPTP,Target\",\n", " gaugeOptSuite=('single','toggleValidSpam'))\n", "\n", "# Generate a report with \"TP\", \"CPTP\", and \"Target\" estimates\n", "pygsti.report.create_standard_report(results_std, \"tutorial_files/exampleStdReport\", \n", " title=\"Post StdPractice Report\", auto_open=True,\n", " verbosity=1)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Reports with confidence regions\n", "To display confidence intervals for reported quantities, you must do two things:\n", "\n", "1. you must specify the `confidenceLevel` argument to `create_standard_report`.\n", "2. the estimate(s) being reported must have a valid confidence-region-factory.\n", "\n", "Constructing a factory often means computing a Hessian, which can be time consuming, and so this is *not* done automatically. Here we demonstrate how to construct a valid factory for the \"Spam 0.001\" gauge-optimization of the \"CPTP\" estimate by computing and then projecting the Hessian of the likelihood function. " ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " \n", "--- Hessian Projector Optimization from separate SPAM and Gate weighting ---\n", " Resulting intrinsic errors: 0.0120587 (gates), 0.00212761 (spam)\n", " Resulting sqrt(mean(gateCIs**2)): 0.017386\n", " Resulting sqrt(mean(spamCIs**2)): 0.00498623\n", "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to tutorial_files/exampleStdReport2 directory\n", "Opening tutorial_files/exampleStdReport2/main.html...\n", "*** Report Generation Complete! Total time 105.186s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Construct and initialize a \"confidence region factory\" for the CPTP estimate\n", "crfact = results_std.estimates[\"CPTP\"].add_confidence_region_factory('Spam 0.001', 'final')\n", "crfact.compute_hessian(comm=None) #we could use more processors\n", "crfact.project_hessian('intrinsic error')\n", "\n", "pygsti.report.create_standard_report(results_std, \"tutorial_files/exampleStdReport2\", \n", " title=\"Post StdPractice Report (w/CIs on CPTP)\",\n", " confidenceLevel=95, auto_open=True, verbosity=1)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Reports with multiple *different* data sets\n", "We've already seen above that `create_standard_report` can be given a dictionary of `Results` objects instead of a single one. This allows the creation of reports containing estimates for different `DataSet`s (each `Results` object only holds estimates for a single `DataSet`). 
Furthermore, when the data sets have the same gate sequences, they will be compared within a tab of the HTML report.\n", "\n", "Below, we generate a new data set with the same sequences as the one loaded at the beginning of this tutorial, proceed to run standard-practice GST on that dataset, and create a report of the results along with those of the original dataset. Look at the **\"Data Comparison\" tab** within the gauge-invariant error metrics category." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-- Std Practice: Iter 1 of 3 (TP) --: \n", " --- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " --- LGST ---\n", " Singular values of I_tilde (truncating to first 4 of 6) = \n", " 4.244829997162508\n", " 1.1936677889884049\n", " 0.9868539533169902\n", " 0.9321977240915887\n", " 0.04714742318656941\n", " 0.0127005208085848\n", " \n", " Singular values of target I_tilde (truncating to first 4 of 6) = \n", " 4.242640687119285\n", " 1.4142135623730954\n", " 1.4142135623730947\n", " 1.4142135623730945\n", " 3.1723744950054595e-16\n", " 1.0852733691121267e-16\n", " \n", " --- Iterative MLGST: Iter 1 of 5 92 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 47.848 (91 data params - 31 model params = expected mean of 60; p-value = 0.871295)\n", " Completed in 0.1s\n", " 2*Delta(log(L)) = 47.897\n", " Iteration 1 took 0.1s\n", " \n", " --- Iterative MLGST: Iter 2 of 5 168 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 112.296 (167 data params - 31 model params = expected mean of 136; p-value = 0.931805)\n", " Completed in 0.1s\n", " 2*Delta(log(L)) = 112.295\n", " Iteration 2 took 0.1s\n", " \n", " --- Iterative MLGST: Iter 3 of 5 450 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 409.638 (449 data params - 31 model params = expected mean of 418; p-value = 0.605678)\n", " Completed in 0.3s\n", " 2*Delta(log(L)) = 409.806\n", " Iteration 3 took 0.3s\n", " \n", " --- Iterative MLGST: Iter 4 of 5 862 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 833.614 (861 data params - 31 model params = expected mean of 830; p-value = 0.458213)\n", " Completed in 0.5s\n", " 2*Delta(log(L)) = 833.943\n", " Iteration 4 took 0.5s\n", " \n", " --- Iterative MLGST: Iter 5 of 5 1282 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 1262.33 (1281 data params - 31 model params = expected mean of 1250; p-value = 0.397808)\n", " Completed in 0.7s\n", " 2*Delta(log(L)) = 1262.98\n", " Iteration 5 took 0.8s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " Maximum log(L) = 631.455 below upper bound of -2.13633e+06\n", " 2*Delta(log(L)) = 1262.91 (1281 data params - 31 model params = expected mean of 1250; p-value = 0.393335)\n", " Completed in 0.3s\n", " 2*Delta(log(L)) = 1262.91\n", " Final MLGST took 0.3s\n", " \n", " Iterative MLGST Total Time: 2.2s\n", " -- Performing 'single' gauge optimization on TP estimate --\n", " -- Performing 'Spam 0.001' gauge optimization on TP estimate --\n", " -- Performing 'Spam 0.001+v' gauge optimization on TP estimate --\n", "-- Std Practice: Iter 2 of 3 (CPTP) --: \n", " --- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were 
missing\n", " --- Iterative MLGST: Iter 1 of 5 92 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 47.848 (91 data params - 31 model params = expected mean of 60; p-value = 0.871296)\n", " Completed in 1.1s\n", " 2*Delta(log(L)) = 47.8971\n", " Iteration 1 took 1.1s\n", " \n", " --- Iterative MLGST: Iter 2 of 5 168 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 112.296 (167 data params - 31 model params = expected mean of 136; p-value = 0.931805)\n", " Completed in 0.3s\n", " 2*Delta(log(L)) = 112.296\n", " Iteration 2 took 0.4s\n", " \n", " --- Iterative MLGST: Iter 3 of 5 450 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 409.638 (449 data params - 31 model params = expected mean of 418; p-value = 0.605673)\n", " Completed in 0.5s\n", " 2*Delta(log(L)) = 409.812\n", " Iteration 3 took 0.5s\n", " \n", " --- Iterative MLGST: Iter 4 of 5 862 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 833.614 (861 data params - 31 model params = expected mean of 830; p-value = 0.458214)\n", " Completed in 0.7s\n", " 2*Delta(log(L)) = 833.943\n", " Iteration 4 took 0.8s\n", " \n", " --- Iterative MLGST: Iter 5 of 5 1282 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 1262.33 (1281 data params - 31 model params = expected mean of 1250; p-value = 0.397806)\n", " Completed in 0.9s\n", " 2*Delta(log(L)) = 1262.98\n", " Iteration 5 took 0.9s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " Maximum log(L) = 631.455 below upper bound of -2.13633e+06\n", " 2*Delta(log(L)) = 1262.91 (1281 data params - 31 model params = expected mean of 1250; p-value = 0.393334)\n", " Completed in 0.4s\n", " 2*Delta(log(L)) = 1262.91\n", " Final MLGST took 0.4s\n", " \n", " Iterative MLGST Total Time: 4.0s\n", " -- Performing 'single' gauge optimization on CPTP estimate --\n", " -- Performing 'Spam 0.001' gauge optimization on CPTP estimate --\n", " -- Performing 'Spam 0.001+v' gauge optimization on CPTP estimate --\n", "-- Std Practice: Iter 3 of 3 (Target) --: \n", " --- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " -- Performing 'single' gauge optimization on Target estimate --\n", " -- Performing 'Spam 0.001' gauge optimization on Target estimate --\n", " -- Performing 'Spam 0.001+v' gauge optimization on Target estimate --\n", "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to tutorial_files/exampleMultiDataSetReport directory\n", "Opening tutorial_files/exampleMultiDataSetReport/main.html...\n", "*** Report Generation Complete! 
Total time 186.726s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Make another dataset & estimates\n", "depol_gateset = gs_target.depolarize(gate_noise=0.1)\n", "datagen_gateset = depol_gateset.rotate((0.05,0,0.03))\n", "\n", "#Compute the sequences needed to perform Long Sequence GST on \n", "# this GateSet with sequences up to length 512\n", "gatestring_list = pygsti.construction.make_lsgst_experiment_list(\n", " std1Q_XYI.gs_target, std1Q_XYI.prepStrs, std1Q_XYI.effectStrs,\n", " std1Q_XYI.germs, [1,2,4,8,16,32,64,128,256,512])\n", "ds2 = pygsti.construction.generate_fake_data(datagen_gateset, gatestring_list, nSamples=1000,\n", " sampleError='binomial', seed=2018)\n", "results_std2 = pygsti.do_stdpractice_gst(ds2, gs_target, fiducials, fiducials, germs,\n", " maxLengths, verbosity=3, modes=\"TP,CPTP,Target\",\n", " gaugeOptSuite=('single','toggleValidSpam'))\n", "\n", "pygsti.report.create_standard_report({'DS1': results_std, 'DS2': results_std2},\n", " \"tutorial_files/exampleMultiDataSetReport\", \n", " title=\"Example Multi-Dataset Report\", \n", " auto_open=True, verbosity=1)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Other cool `create_standard_report` options\n", "Finally, let us highlight a few of the additional arguments one can supply to `create_standard_report` that allow further control over what gets reported.\n", "\n", "- Setting the `link_to` argument to a tuple of `'pkl'`, `'tex'`, and/or `'pdf'` will create hyperlinks within the plots or below the tables of the HTML report, linking to Python pickle, LaTeX source, and PDF versions of the content, respectively. The Python pickle files for tables contain pickled pandas `DataFrame` objects, whereas those of plots contain ordinary Python dictionaries of the data that is plotted. This applies to HTML reports only.\n", "\n", "- Setting the `brevity` argument to an integer higher than $0$ (the default) will reduce the amount of information included in the report (for details on what is included for each value, see the doc string). Using `brevity > 0` will reduce the time required to create, and later load, the report, as well as the output file/folder size. This applies to both HTML and PDF reports.\n", "\n", "Below, we demonstrate both of these options in a very brief (`brevity=4`) report with links to pickle and PDF files. Note that to generate the PDF files you must have `pdflatex` installed." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to tutorial_files/exampleBriefReport directory\n", "Opening tutorial_files/exampleBriefReport/main.html...\n", "*** Report Generation Complete! 
Total time 100.195s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pygsti.report.create_standard_report(results_std,\n", " \"tutorial_files/exampleBriefReport\", \n", " title=\"Example Brief Report\", \n", " auto_open=True, verbosity=1,\n", " brevity=4, link_to=('pkl','pdf'))" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true, "deletable": true, "editable": true }, "source": [ "## Advanced Reports: `create_report_notebook`\n", "In addition to the standard HTML-page reports demonstrated above, pyGSTi is able to generate a Jupyter notebook containing the Python commands to create the figures and tables within a report. This is facilitated by `Workspace` objects, which are factories for figures and tables (see previous tutorials). By calling `create_report_notebook`, all of the relevant `Workspace` initialization and calls are dumped to a new notebook file, which can be run (either fully or partially) by the user at their convenience. Creating such \"report notebooks\" has the advantage that the user may insert Python code amidst the figure and table generation calls to inspect or modify what is displayed in a highly customizable fashion. The chief disadvantages of report notebooks are that they require the user to 1) have a Jupyter server up and running, and 2) run the notebook before any figures are displayed.\n", "\n", "The call below demonstrates how to create a report notebook using `create_report_notebook`. Note that the argument list is very similar to that of `create_standard_report`." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Report Notebook created as tutorial_files/exampleReport.ipynb\n" ] } ], "source": [ "pygsti.report.create_report_notebook(results, \"tutorial_files/exampleReport.ipynb\", \n", " title=\"GST Example Report Notebook\", confidenceLevel=None,\n", " auto_open=True, connected=False, verbosity=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 2 }