{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "# Report Generation Tutorial\n", "\n", "PyGSTi is able to construct polished report documents, which provide high-level summaries as well as detailed analyses of GST results. Reports are intended to be quick and easy way of analyzing a GST estimate, and pyGSTi's report generation functions are specifically designed to interact with its high-level driver functions (see the high-level algorithms tutorial). Currently there is only a single report generation function, `pygsti.report.create_general_report`, which takes one or more `Results` objects as input and produces an HTML file as output. The HTML format allows the reports to include **interactive plots** and **switches**, making it easy to compare different types of analysis or data sets. \n", "\n", "PyGSTi's \"general\" report creates a stand-alone HTML document which cannot run Python. Thus, all the results displayed in the report must be pre-computed (in Python). If you find yourself wanting to fiddle with things and feel that the general report is too static, please consider using a `Workspace` object (see following tutorials) within a Jupyter notebook, where you can intermix report tables/plots and Python. Internally, `create_general_report` is just a canned routine which uses a `WorkSpace` object to generate various tables and plots and then inserts them into a HTML template. \n", "\n", "**Note to veteran users:** PyGSTi has recently transitioned to producing HTML (rather than LaTeX/PDF) reports. The way to generate such report is largely unchanged, with one important exception. Previously, the `Results` object had various report-generation methods included within it. We've found this is too restrictive, as we'd sometimes like to generate a report which utilizes the results from multiple runs of GST (to compare the, for instance). Thus, the `Results` class is now just a container for a `DataSet` and its related `GateSet`s, `GatestringStructure`s, etc. All of the report-generation capability is now housed in within separate report functions, which we now demonstrate.\n", "\n", "\n", "### Get some `Results`\n", "We start by performing GST using `do_long_sequence_gst`, as usual, to create a `Results` object (we could also have just loaded one from file)." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loading from cache file: tutorial_files/Example_Dataset.txt.cache\n", "--- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", "--- LGST ---\n", " Singular values of I_tilde (truncating to first 4 of 6) = \n", " 4.245030583357433\n", " 1.1797105733752997\n", " 0.956497891831113\n", " 0.9423535266759971\n", " 0.04708902142849769\n", " 0.015314932955168444\n", " \n", " Singular values of target I_tilde (truncating to first 4 of 6) = \n", " 4.242640687119285\n", " 1.4142135623730954\n", " 1.4142135623730947\n", " 1.4142135623730945\n", " 3.1723744950054595e-16\n", " 1.0852733691121267e-16\n", " \n", " Resulting gate set:\n", " \n", " rho0 = 0.7071 -0.0302 0.0396 0.7480\n", " \n", " \n", " Mdefault = TP-POVM with effect vectors:\n", " 0:\n", " 0.73\n", " 0\n", " 0\n", " 0.65\n", " \n", " 1:\n", " 0.69\n", " 0\n", " 0\n", " -0.65\n", " \n", " \n", " \n", " Gi = \n", " 1.0000 0 0 0\n", " 0.0094 0.9238 0.0542 -0.0155\n", " 0.0285 -0.0149 0.9021 0.0200\n", " -0.0142 0.0280 0.0009 0.9057\n", " \n", " \n", " Gx = \n", " 1.0000 0 0 0\n", " 0.0064 0.9053 0.0281 -0.0044\n", " -0.0006 0.0215 -0.0471 -0.9983\n", " -0.0692 -0.0056 0.8095 0.0090\n", " \n", " \n", " Gy = \n", " 1.0000 0 0 0\n", " -0.0152 -0.0245 0.0379 0.9906\n", " 0.0076 -0.0126 0.8876 -0.0257\n", " -0.0771 -0.8084 -0.0476 0.0210\n", " \n", " \n", " \n", " \n", "--- Iterative MLGST: Iter 1 of 5 92 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 62.7837, mu=0, |J|=1362.27\n", " --- Outer Iter 1: norm_f = 42.2737, mu=172.193, |J|=4003.22\n", " --- Outer Iter 2: norm_f = 41.0782, mu=57.3976, |J|=4002.28\n", " --- Outer Iter 3: norm_f = 41.0771, mu=19.1325, |J|=4002.28\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 41.0771 (92 data params - 31 model params = expected mean of 61; p-value = 0.976519)\n", " Completed in 0.2s\n", " 2*Delta(log(L)) = 41.2329\n", " Iteration 1 took 0.2s\n", " \n", "--- Iterative MLGST: Iter 2 of 5 168 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 175.573, mu=0, |J|=4118.16\n", " --- Outer Iter 1: norm_f = 124.232, mu=1805.46, |J|=4115.29\n", " --- Outer Iter 2: norm_f = 119.708, mu=601.819, |J|=4114.91\n", " --- Outer Iter 3: norm_f = 119.299, mu=200.606, |J|=4114.85\n", " --- Outer Iter 4: norm_f = 119.288, mu=66.8688, |J|=4114.84\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 119.288 (168 data params - 31 model params = expected mean of 137; p-value = 0.859758)\n", " Completed in 0.3s\n", " 2*Delta(log(L)) = 119.601\n", " Iteration 2 took 0.4s\n", " \n", "--- Iterative MLGST: Iter 3 of 5 450 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 498.161, mu=0, |J|=4505.2\n", " --- Outer Iter 1: norm_f = 416.334, mu=2013.07, |J|=4507.21\n", " --- Outer Iter 2: norm_f = 415.465, mu=671.024, |J|=4506.99\n", " --- Outer Iter 3: norm_f = 415.46, mu=223.675, |J|=4506.99\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 415.46 (450 data params - 31 model params = expected mean of 419; p-value = 0.539658)\n", " Completed in 0.6s\n", " 2*Delta(log(L)) = 415.96\n", " Iteration 3 took 1.0s\n", " \n", "--- Iterative MLGST: Iter 4 of 5 862 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 860.476, mu=0, |J|=5095.16\n", " --- Outer Iter 1: norm_f = 814.571, mu=2303.12, |J|=5078.86\n", " --- Outer Iter 2: norm_f = 814.34, mu=767.708, |J|=5078.57\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 814.34 (862 data params - 31 model params = expected mean of 831; p-value = 0.65359)\n", " Completed in 1.0s\n", " 2*Delta(log(L)) = 815.742\n", " Iteration 4 took 1.9s\n", " \n", "--- Iterative MLGST: Iter 5 of 5 1282 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 1265.58, mu=0, |J|=5735.19\n", " --- Outer Iter 1: norm_f = 1252.76, mu=2582.97, |J|=5736.35\n", " --- Outer Iter 2: norm_f = 1252.69, mu=860.99, |J|=5736.93\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 1252.69 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.481202)\n", " Completed in 1.4s\n", " 2*Delta(log(L)) = 1254.49\n", " Iteration 5 took 2.9s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 627.245, mu=0, |J|=3702.41\n", " --- Outer Iter 1: norm_f = 627.238, mu=3782.07, |J|=3378.33\n", " --- Outer Iter 2: norm_f = 627.234, mu=4.13103e+07, |J|=3299.12\n", " --- Outer Iter 3: norm_f = 627.225, mu=2.26458e+07, |J|=3086.64\n", " --- Outer Iter 4: norm_f = 627.218, mu=2.09848e+07, |J|=3021.71\n", " --- Outer Iter 5: norm_f = 627.217, mu=2.05917e+07, |J|=3128.49\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Maximum log(L) = 627.217 below upper bound of -2.13594e+06\n", " 2*Delta(log(L)) = 1254.43 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.467364)\n", " Completed in 3.2s\n", " 2*Delta(log(L)) = 1254.43\n", " Final MLGST took 3.2s\n", " \n", "Iterative MLGST Total Time: 9.6s\n", " -- Adding Gauge Optimized (go0) --\n" ] } ], "source": [ "import pygsti\n", "from pygsti.construction import std1Q_XYI\n", "\n", "gs_target = std1Q_XYI.gs_target\n", "fiducials = std1Q_XYI.fiducials\n", "germs = std1Q_XYI.germs\n", "maxLengths = [1,2,4,8,16]\n", "ds = pygsti.io.load_dataset(\"tutorial_files/Example_Dataset.txt\", cache=True)\n", "\n", "#Run GST\n", "gs_target.set_all_parameterizations(\"TP\") #TP-constrained\n", "results = pygsti.do_long_sequence_gst(ds, gs_target, fiducials, fiducials, germs,\n", " maxLengths, verbosity=3)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Make a report\n", "Now that we have `results`, we use the `create_standard_report` method within `pygsti.report.factory` to generate a report. If the given filename ends in \"`.pdf`\" then a PDF-format report is generated; otherwise the file name specifies a folder that will be filled with HTML pages. To open a HTML-format report, you open the `main.html` file directly inside the report's folder. Setting `auto_open=True` makes the finished report open in your web browser automatically. " ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to tutorial_files/exampleReport directory\n", "Opening tutorial_files/exampleReport/main.html...\n", "*** Report Generation Complete! Total time 74.8747s ***\n", "\n", "\n", "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Latex file(s) successfully generated. Attempting to compile with pdflatex...\n", "Initial output PDF tutorial_files/exampleReport.pdf successfully generated.\n", "Final output PDF tutorial_files/exampleReport.pdf successfully generated. Cleaning up .aux and .log files.\n", "Opening tutorial_files/exampleReport.pdf...\n", "*** Report Generation Complete! 
Total time 92.6683s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#HTML\n", "pygsti.report.create_standard_report(results, \"tutorial_files/exampleReport\", \n", " title=\"GST Example Report\", verbosity=1, auto_open=True)\n", "\n", "print(\"\\n\")\n", "\n", "#PDF\n", "pygsti.report.create_standard_report(results, \"tutorial_files/exampleReport.pdf\", \n", " title=\"GST Example Report\", verbosity=1, auto_open=True)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "There are several remarks about these reports worth noting:\n", "1. The **HTML reports are the primary report type in pyGSTi**, and are much more flexible. The PDF reports are more limited (they can only display a *single* estimate and gauge optimization), and essentially contain a subset of the information and descriptive text of an HTML report. So, if you can, use the HTML reports. The PDF report's strength is its portability: PDFs are easily displayed by many devices, and they embed all that they need neatly into a single file. **If you need to generate a PDF report** from `Results` objects that have multiple estimates and/or gauge optimizations, consider using the `Results` object's `view` method to single out the estimate and gauge optimization you're after.\n", "2. It's best to use **Firefox** when opening the HTML reports. (If there's a problem with your browser's capabilities it will be shown on the screen when you try to load the report.)\n", "3. You'll need **`pdflatex`** on your system to compile PDF reports.\n", "4. To familiarize yourself with the layout of an HTML report, click on the gray **\"Help\" link** on the black sidebar." ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Multiple estimates in a single report\n", "Next, let's analyze the same data two different ways: with and without the TP-constraint (i.e. whether the gates *must* be trace-preserving) and furthermore gauge optimize each case using several different SPAM-weights. In each case we'll call `do_long_sequence_gst` with `gaugeOptParams=False`, so that no gauge optimization is done, and then perform several gauge optimizations separately and add these to the `Results` object via its `add_gaugeoptimized` function."
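, "\n", "\n",
"(For reference, a hedged sketch of the alternative: passing an explicit gauge-optimization dictionary instead of `False`. The argument structure is assumed to mirror the `itemWeights` dictionaries used with `add_gaugeoptimized` below.)\n", "\n",
"```python\n",
"# Sketch only: let do_long_sequence_gst perform a single gauge optimization itself,\n",
"# weighting gate elements by 1.0 and SPAM elements by 0.001.\n",
"results_auto_go = pygsti.do_long_sequence_gst(\n",
"    ds, gs_target, fiducials, fiducials, germs, maxLengths,\n",
"    gaugeOptParams={'itemWeights': {'gates': 1.0, 'spam': 1e-3}}, verbosity=1)\n",
"```"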
] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--- Gate Sequence Creation ---\n", "--- LGST ---\n", "--- Iterative MLGST: [##################################################] 100.0% 1282 gate strings ---\n", "Iterative MLGST Total Time: 4.8s\n" ] } ], "source": [ "#Case1: TP-constrained GST\n", "tpTarget = gs_target.copy()\n", "tpTarget.set_all_parameterizations(\"TP\")\n", "results_tp = pygsti.do_long_sequence_gst(ds, tpTarget, fiducials, fiducials, germs,\n", " maxLengths, gaugeOptParams=False, verbosity=1)\n", "\n", "#Gauge optimize\n", "est = results_tp.estimates['default']\n", "gsFinal = est.gatesets['final iteration estimate']\n", "gsTarget = est.gatesets['target']\n", "for spamWt in [1e-4,1e-2,1.0]:\n", " gs = pygsti.gaugeopt_to_target(gsFinal,gsTarget,{'gates':1, 'spam':spamWt})\n", " est.add_gaugeoptimized({'itemWeights': {'gates':1, 'spam':spamWt}}, gs, \"Spam %g\" % spamWt)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--- Gate Sequence Creation ---\n", "--- LGST ---\n", "--- Iterative MLGST: [##################################################] 100.0% 1282 gate strings ---\n", "Iterative MLGST Total Time: 4.9s\n" ] } ], "source": [ "#Case2: \"Full\" GST\n", "fullTarget = gs_target.copy()\n", "fullTarget.set_all_parameterizations(\"full\")\n", "results_full = pygsti.do_long_sequence_gst(ds, fullTarget, fiducials, fiducials, germs,\n", " maxLengths, gaugeOptParams=False, verbosity=1)\n", "\n", "#Gauge optimize\n", "est = results_full.estimates['default']\n", "gsFinal = est.gatesets['final iteration estimate']\n", "gsTarget = est.gatesets['target']\n", "for spamWt in [1e-4,1e-2,1.0]:\n", " gs = pygsti.gaugeopt_to_target(gsFinal,gsTarget,{'gates':1, 'spam':spamWt})\n", " est.add_gaugeoptimized({'itemWeights': {'gates':1, 'spam':spamWt}}, gs, \"Spam %g\" % spamWt)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "We'll now call the *same* `create_standard_report` function but this time instead of passing a single `Results` object as the first argument we'll pass a *dictionary* of them. This will result in a **HTML report that includes switches** to select which case (\"TP\" or \"Full\") as well as which gauge optimization to display output quantities for. PDF reports cannot support this interactivity, and so **if you try to generate a PDF report you'll get an error**." 
] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "deletable": true, "editable": true, "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", " targetSpamBriefTable took 0.03205 seconds\n", " targetGatesBoxTable took 0.068427 seconds\n", " datasetOverviewTable took 0.292969 seconds\n", " bestGatesetSpamParametersTable took 0.003658 seconds\n", " bestGatesetSpamBriefTable took 0.206302 seconds\n", " bestGatesetSpamVsTargetTable took 0.26785 seconds\n", " bestGatesetGaugeOptParamsTable took 0.000964 seconds\n", " bestGatesetGatesBoxTable took 0.194 seconds\n", " bestGatesetChoiEvalTable took 0.249394 seconds\n", " bestGatesetDecompTable took 0.223806 seconds\n", " bestGatesetEvalTable took 0.006698 seconds\n", " bestGermsEvalTable took 0.028191 seconds\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/enielse/research/pyGSTi/packages/pygsti/extras/rb/rbutils.py:382: UserWarning:\n", "\n", "Predicted RB decay parameter / error rate may be unreliable:\n", "Gateset is not (approximately) trace-preserving.\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " bestGatesetVsTargetTable took 0.39517 seconds\n", " bestGatesVsTargetTable_gv took 0.715082 seconds\n", " bestGatesVsTargetTable_gvgerms took 0.335222 seconds\n", " bestGatesVsTargetTable_gi took 0.01881 seconds\n", " bestGatesVsTargetTable_gigerms took 0.0341 seconds\n", " bestGatesVsTargetTable_sum took 0.695501 seconds\n", " bestGatesetErrGenBoxTable took 0.851781 seconds\n", " metadataTable took 0.004229 seconds\n", " stdoutBlock took 0.00039 seconds\n", " profilerTable took 0.001395 seconds\n", " softwareEnvTable took 0.000512 seconds\n", " exampleTable took 0.009101 seconds\n", " singleMetricTable_gv took 0.754915 seconds\n", " singleMetricTable_gi took 0.053593 seconds\n", " fiducialListTable took 0.000675 seconds\n", " prepStrListTable took 0.000268 seconds\n", " effectStrListTable took 0.000497 seconds\n", " colorBoxPlotKeyPlot took 0.016362 seconds\n", " germList2ColTable took 0.000374 seconds\n", " progressTable took 2.606624 seconds\n", "*** Generating plots ***\n", " gramBarPlot took 0.054486 seconds\n", " progressBarPlot took 0.275432 seconds\n", " progressBarPlot_sum took 0.001061 seconds\n", " finalFitComparePlot took 0.152279 seconds\n", " bestEstimateColorBoxPlot took 16.84382 seconds\n", " bestEstimateTVDColorBoxPlot took 17.16911 seconds\n", " bestEstimateColorScatterPlot took 18.453839 seconds\n", " bestEstimateColorHistogram took 16.402786 seconds\n", " progressTable_scl took 0.000139 seconds\n", " progressBarPlot_scl took 0.00013 seconds\n", " bestEstimateColorBoxPlot_scl took 0.000385 seconds\n", " bestEstimateColorScatterPlot_scl took 0.000265 seconds\n", " bestEstimateColorHistogram_scl took 0.000217 seconds\n", " dataScalingColorBoxPlot took 0.000134 seconds\n", " dsComparisonSummary took 0.018569 seconds\n", " dsComparisonHistogram took 1.054446 seconds\n", " dsComparisonBoxPlot took 0.45789 seconds\n", "*** Merging into template file ***\n", " Rendering dsComparisonBoxPlot took 0.031074 seconds\n", " Rendering dsComparisonSummary took 0.008894 seconds\n", " Rendering bestGatesVsTargetTable_gvgerms took 0.046067 seconds\n", " Rendering bestGatesetDecompTable took 0.112869 seconds\n", " Rendering 
dataScalingColorBoxPlot took 0.000607 seconds\n", " Rendering bestEstimateColorBoxPlot took 0.077907 seconds\n", " Rendering bestGatesVsTargetTable_gigerms took 0.005866 seconds\n", " Rendering targetSpamBriefTable took 0.030136 seconds\n", " Rendering profilerTable took 0.004052 seconds\n", " Rendering bestGermsEvalTable took 0.117593 seconds\n", " Rendering targetGatesBoxTable took 0.030416 seconds\n", " Rendering progressBarPlot took 0.004698 seconds\n", " Rendering bestGatesetGatesBoxTable took 0.162742 seconds\n", " Rendering bestGatesetChoiEvalTable took 0.113938 seconds\n", " Rendering dsComparisonHistogram took 0.02146 seconds\n", " Rendering bestEstimateColorHistogram_scl took 0.00056 seconds\n", " Rendering bestGatesetSpamBriefTable took 0.211345 seconds\n", " Rendering bestEstimateColorHistogram took 0.062521 seconds\n", " Rendering progressTable_scl took 0.000661 seconds\n", " Rendering bestGatesVsTargetTable_sum took 0.02318 seconds\n", " Rendering colorBoxPlotKeyPlot took 0.006611 seconds\n", " Rendering bestGatesetVsTargetTable took 0.003911 seconds\n", " Rendering progressTable took 0.008651 seconds\n", " Rendering softwareEnvTable took 0.003926 seconds\n", " Rendering effectStrListTable took 0.004723 seconds\n", " Rendering bestGatesetSpamVsTargetTable took 0.014124 seconds\n", " Rendering bestGatesVsTargetTable_gi took 0.006171 seconds\n", " Rendering finalFitComparePlot took 0.004308 seconds\n", " Rendering bestGatesetGaugeOptParamsTable took 0.003374 seconds\n", " Rendering fiducialListTable took 0.004536 seconds\n", " Rendering bestGatesetEvalTable took 0.034756 seconds\n", " Rendering metadataTable took 0.025714 seconds\n", " Rendering bestEstimateColorBoxPlot_scl took 0.000725 seconds\n", " Rendering singleMetricTable_gi took 0.034549 seconds\n", " Rendering metricSwitchboard_gi took 0.000151 seconds\n", " Rendering maxLSwitchboard1 took 0.000221 seconds\n", " Rendering bestGatesetSpamParametersTable took 0.010978 seconds\n", " Rendering dscmpSwitchboard took 9.7e-05 seconds\n", " Rendering bestEstimateColorScatterPlot_scl took 0.000836 seconds\n", " Rendering bestGatesVsTargetTable_gv took 0.022777 seconds\n", " Rendering progressBarPlot_scl took 0.000569 seconds\n", " Rendering exampleTable took 0.006003 seconds\n", " Rendering progressBarPlot_sum took 0.006421 seconds\n", " Rendering bestEstimateTVDColorBoxPlot took 0.083743 seconds\n", " Rendering topSwitchboard took 0.000185 seconds\n", " Rendering germList2ColTable took 0.009168 seconds\n", " Rendering bestEstimateColorScatterPlot took 0.061822 seconds\n", " Rendering bestGatesetErrGenBoxTable took 0.328783 seconds\n", " Rendering prepStrListTable took 0.004254 seconds\n", " Rendering singleMetricTable_gv took 0.044535 seconds\n", " Rendering stdoutBlock took 0.001628 seconds\n", " Rendering metricSwitchboard_gv took 0.000128 seconds\n", " Rendering datasetOverviewTable took 0.000871 seconds\n", " Rendering gramBarPlot took 0.006537 seconds\n", "Output written to tutorial_files/exampleMultiEstimateReport directory\n", "Opening tutorial_files/exampleMultiEstimateReport/main.html...\n", "*** Report Generation Complete! 
Total time 84.6716s ***\n" ] } ], "source": [ "ws = pygsti.report.create_standard_report({'TP': results_tp, \"Full\": results_full},\n", " \"tutorial_files/exampleMultiEstimateReport\",\n", " title=\"Example Multi-Estimate Report\", \n", " verbosity=2, auto_open=True)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true, "deletable": true, "editable": true }, "source": [ "In the above call we capture the return value in the variable `ws`, a `Workspace` object. PyGSTi's `Workspace` objects function both as a factory for figures and tables and as a smart cache for computed values. Within `create_standard_report` a `Workspace` object is created and used to create all the figures in the report. As an intended side effect, each of these figures is cached, along with some of the intermediate results used to create it. As we'll see below, a `Workspace` can also be specified as input to `create_standard_report`, allowing it to utilize previously cached quantities.\n", "\n", "**Note to veteran users:** Other report formats such as **`beamer`-class PDF presentations and PowerPoint presentations have been dropped from pyGSTi**. These presentation formats were rarely used, and moreover we feel that the HTML format is able to provide all of the functionality that was present in these discontinued formats.\n", "\n", "**Another way**: Because both `results_tp` and `results_full` above used the same dataset and gate sequences, we could have combined them as two estimates in a single `Results` object (see the previous tutorial on pyGSTi's `Results` object). This can be done by renaming at least one of the `\"default\"`-named estimates in `results_tp` or `results_full` (below we rename both) and then adding the estimate within `results_full` to the estimates already contained in `results_tp`: " ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "results_tp.rename_estimate('default','TP')\n", "results_full.rename_estimate('default','Full')\n", "results_both = results_tp.copy() #copy just for neatness\n", "results_both.add_estimates(results_full, estimatesToAdd=['Full'])" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Creating a report using `results_both` will result in the same report we just generated. We'll demonstrate this anyway, but in addition we'll supply `create_standard_report` with a `ws` argument, which tells it to use any cached values contained in a given *input* `Workspace` to expedite report generation. Since our workspace object has the exact quantities we need cached in it, you'll notice a significant speedup. Finally, note that even though there's just a single `Results` object, you **still can't generate a PDF report** from it because it contains multiple estimates."
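, "\n", "\n",
"(As an aside, a `Workspace` need not come from `create_standard_report` at all; one can be created directly and used interactively, as the `Workspace` tutorials that follow show. A hedged sketch, in which the `init_notebook_mode` call and the commented-out table factory are assumed from the `Workspace` API:)\n", "\n",
"```python\n",
"# Sketch only: build a Workspace for interactive, in-notebook use.\n",
"ws2 = pygsti.report.Workspace()\n",
"ws2.init_notebook_mode(connected=False)  # set up in-notebook rendering\n",
"# e.g. a single table comparing an estimate's gates to the target gates:\n",
"# ws2.GatesVsTargetTable(gsFinal, gsTarget)\n",
"```"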
] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", " targetSpamBriefTable took 0.000495 seconds\n", " targetGatesBoxTable took 0.000417 seconds\n", " datasetOverviewTable took 0.000329 seconds\n", " bestGatesetSpamParametersTable took 0.001463 seconds\n", " bestGatesetSpamBriefTable took 0.00168 seconds\n", " bestGatesetSpamVsTargetTable took 0.000995 seconds\n", " bestGatesetGaugeOptParamsTable took 0.001204 seconds\n", " bestGatesetGatesBoxTable took 0.001613 seconds\n", " bestGatesetChoiEvalTable took 0.0012 seconds\n", " bestGatesetDecompTable took 0.00097 seconds\n", " bestGatesetEvalTable took 0.00047 seconds\n", " bestGermsEvalTable took 0.032373 seconds\n", " bestGatesetVsTargetTable took 0.011474 seconds\n", " bestGatesVsTargetTable_gv took 0.001043 seconds\n", " bestGatesVsTargetTable_gvgerms took 0.001883 seconds\n", " bestGatesVsTargetTable_gi took 0.000433 seconds\n", " bestGatesVsTargetTable_gigerms took 0.045929 seconds\n", " bestGatesVsTargetTable_sum took 0.001143 seconds\n", " bestGatesetErrGenBoxTable took 0.001385 seconds\n", " metadataTable took 0.005651 seconds\n", " stdoutBlock took 0.000152 seconds\n", " profilerTable took 0.001333 seconds\n", " softwareEnvTable took 0.000164 seconds\n", " exampleTable took 0.000161 seconds\n", " singleMetricTable_gv took 1.148238 seconds\n", " singleMetricTable_gi took 0.077286 seconds\n", " fiducialListTable took 0.000492 seconds\n", " prepStrListTable took 0.000248 seconds\n", " effectStrListTable took 0.000327 seconds\n", " colorBoxPlotKeyPlot took 0.000547 seconds\n", " germList2ColTable took 0.000403 seconds\n", " progressTable took 0.448984 seconds\n", "*** Generating plots ***\n", " gramBarPlot took 0.000967 seconds\n", " progressBarPlot took 0.35197 seconds\n", " progressBarPlot_sum took 0.000654 seconds\n", " finalFitComparePlot took 0.129702 seconds\n", " bestEstimateColorBoxPlot took 14.282138 seconds\n", " bestEstimateTVDColorBoxPlot took 12.862145 seconds\n", " bestEstimateColorScatterPlot took 14.794409 seconds\n", " bestEstimateColorHistogram took 13.193801 seconds\n", " progressTable_scl took 9.3e-05 seconds\n", " progressBarPlot_scl took 6.9e-05 seconds\n", " bestEstimateColorBoxPlot_scl took 0.000192 seconds\n", " bestEstimateColorScatterPlot_scl took 0.000197 seconds\n", " bestEstimateColorHistogram_scl took 0.000244 seconds\n", " dataScalingColorBoxPlot took 0.000126 seconds\n", "*** Merging into template file ***\n", " Rendering bestGatesVsTargetTable_gvgerms took 0.036451 seconds\n", " Rendering bestGatesetDecompTable took 0.080957 seconds\n", " Rendering dataScalingColorBoxPlot took 0.000631 seconds\n", " Rendering bestEstimateColorBoxPlot took 0.057368 seconds\n", " Rendering bestGatesVsTargetTable_gigerms took 0.005071 seconds\n", " Rendering targetSpamBriefTable took 0.024192 seconds\n", " Rendering profilerTable took 0.00289 seconds\n", " Rendering bestGermsEvalTable took 0.093425 seconds\n", " Rendering targetGatesBoxTable took 0.021389 seconds\n", " Rendering progressBarPlot took 0.004094 seconds\n", " Rendering bestGatesetGatesBoxTable took 0.124966 seconds\n", " Rendering bestGatesetChoiEvalTable took 0.100466 seconds\n", " Rendering 
bestEstimateColorHistogram_scl took 0.000574 seconds\n", " Rendering bestGatesetSpamBriefTable took 0.137343 seconds\n", " Rendering bestEstimateColorHistogram took 0.041018 seconds\n", " Rendering progressTable_scl took 0.000507 seconds\n", " Rendering bestGatesVsTargetTable_sum took 0.014985 seconds\n", " Rendering colorBoxPlotKeyPlot took 0.005478 seconds\n", " Rendering bestGatesetVsTargetTable took 0.003839 seconds\n", " Rendering progressTable took 0.007396 seconds\n", " Rendering softwareEnvTable took 0.002525 seconds\n", " Rendering effectStrListTable took 0.001735 seconds\n", " Rendering bestGatesetSpamVsTargetTable took 0.009722 seconds\n", " Rendering bestGatesVsTargetTable_gi took 0.004923 seconds\n", " Rendering finalFitComparePlot took 0.002155 seconds\n", " Rendering bestGatesetGaugeOptParamsTable took 0.003325 seconds\n", " Rendering fiducialListTable took 0.002214 seconds\n", " Rendering bestGatesetEvalTable took 0.025346 seconds\n", " Rendering metadataTable took 0.015707 seconds\n", " Rendering bestEstimateColorBoxPlot_scl took 0.000748 seconds\n", " Rendering singleMetricTable_gi took 0.016926 seconds\n", " Rendering metricSwitchboard_gi took 7.1e-05 seconds\n", " Rendering maxLSwitchboard1 took 0.000115 seconds\n", " Rendering bestGatesetSpamParametersTable took 0.007512 seconds\n", " Rendering bestEstimateColorScatterPlot_scl took 0.000793 seconds\n", " Rendering bestGatesVsTargetTable_gv took 0.016135 seconds\n", " Rendering progressBarPlot_scl took 0.000688 seconds\n", " Rendering exampleTable took 0.004642 seconds\n", " Rendering progressBarPlot_sum took 0.003849 seconds\n", " Rendering bestEstimateTVDColorBoxPlot took 0.056864 seconds\n", " Rendering topSwitchboard took 0.00013 seconds\n", " Rendering germList2ColTable took 0.002841 seconds\n", " Rendering bestEstimateColorScatterPlot took 0.043878 seconds\n", " Rendering bestGatesetErrGenBoxTable took 0.247469 seconds\n", " Rendering prepStrListTable took 0.001887 seconds\n", " Rendering singleMetricTable_gv took 0.02074 seconds\n", " Rendering stdoutBlock took 0.001358 seconds\n", " Rendering metricSwitchboard_gv took 7.3e-05 seconds\n", " Rendering datasetOverviewTable took 0.001012 seconds\n", " Rendering gramBarPlot took 0.005049 seconds\n", "Output written to tutorial_files/exampleMultiEstimateReport2 directory\n", "Opening tutorial_files/exampleMultiEstimateReport2/main.html...\n", "*** Report Generation Complete! Total time 59.0804s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pygsti.report.create_standard_report(results_both,\n", " \"tutorial_files/exampleMultiEstimateReport2\",\n", " title=\"Example Multi-Estimate Report (v2)\", \n", " verbosity=2, auto_open=True, ws=ws)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Multiple estimates and `do_stdpractice_gst`\n", "It's no coincidence that a `Results` object containing multiple estimates using the same data is precisely what's returned from `do_stdpractice_gst` (see docstring for information on its arguments). This allows one to run GST multiple times, creating several different \"standard\" estimates and gauge optimizations, and plot them all in a single (HTML) report. 
" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-- Std Practice: Iter 1 of 3 (TP) --: \n", " --- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " --- LGST ---\n", " Singular values of I_tilde (truncating to first 4 of 6) = \n", " 4.245030583357433\n", " 1.1797105733752997\n", " 0.956497891831113\n", " 0.9423535266759971\n", " 0.04708902142849769\n", " 0.015314932955168444\n", " \n", " Singular values of target I_tilde (truncating to first 4 of 6) = \n", " 4.242640687119285\n", " 1.4142135623730954\n", " 1.4142135623730947\n", " 1.4142135623730945\n", " 3.1723744950054595e-16\n", " 1.0852733691121267e-16\n", " \n", " Resulting gate set:\n", " \n", " rho0 = 0.7071 -0.0302 0.0396 0.7480\n", " \n", " \n", " Mdefault = TP-POVM with effect vectors:\n", " 0:\n", " 0.73\n", " 0\n", " 0\n", " 0.65\n", " \n", " 1:\n", " 0.69\n", " 0\n", " 0\n", " -0.65\n", " \n", " \n", " \n", " Gi = \n", " 1.0000 0 0 0\n", " 0.0094 0.9238 0.0542 -0.0155\n", " 0.0285 -0.0149 0.9021 0.0200\n", " -0.0142 0.0280 0.0009 0.9057\n", " \n", " \n", " Gx = \n", " 1.0000 0 0 0\n", " 0.0064 0.9053 0.0281 -0.0044\n", " -0.0006 0.0215 -0.0471 -0.9983\n", " -0.0692 -0.0056 0.8095 0.0090\n", " \n", " \n", " Gy = \n", " 1.0000 0 0 0\n", " -0.0152 -0.0245 0.0379 0.9906\n", " 0.0076 -0.0126 0.8876 -0.0257\n", " -0.0771 -0.8084 -0.0476 0.0210\n", " \n", " \n", " \n", " \n", " --- Iterative MLGST: Iter 1 of 5 92 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 62.7837, mu=0, |J|=1362.27\n", " --- Outer Iter 1: norm_f = 42.2737, mu=172.193, |J|=4003.22\n", " --- Outer Iter 2: norm_f = 41.0782, mu=57.3976, |J|=4002.28\n", " --- Outer Iter 3: norm_f = 41.0771, mu=19.1325, |J|=4002.28\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 41.0771 (92 data params - 31 model params = expected mean of 61; p-value = 0.976519)\n", " Completed in 0.1s\n", " 2*Delta(log(L)) = 41.2329\n", " Iteration 1 took 0.1s\n", " \n", " --- Iterative MLGST: Iter 2 of 5 168 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 175.573, mu=0, |J|=4118.16\n", " --- Outer Iter 1: norm_f = 124.232, mu=1805.46, |J|=4115.29\n", " --- Outer Iter 2: norm_f = 119.708, mu=601.819, |J|=4114.91\n", " --- Outer Iter 3: norm_f = 119.299, mu=200.606, |J|=4114.85\n", " --- Outer Iter 4: norm_f = 119.288, mu=66.8688, |J|=4114.84\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 119.288 (168 data params - 31 model params = expected mean of 137; p-value = 0.859758)\n", " Completed in 0.1s\n", " 2*Delta(log(L)) = 119.601\n", " Iteration 2 took 0.2s\n", " \n", " --- Iterative MLGST: Iter 3 of 5 450 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 498.161, mu=0, |J|=4505.2\n", " --- Outer Iter 1: norm_f = 416.334, mu=2013.07, |J|=4507.21\n", " --- Outer Iter 2: norm_f = 415.465, mu=671.024, |J|=4506.99\n", " --- Outer Iter 3: norm_f = 415.46, mu=223.675, |J|=4506.99\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 415.46 (450 data params - 31 model params = expected mean of 419; p-value = 0.539658)\n", " Completed in 0.3s\n", " 2*Delta(log(L)) = 415.96\n", " Iteration 3 took 0.5s\n", " \n", " --- Iterative MLGST: Iter 4 of 5 862 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 860.476, mu=0, |J|=5095.16\n", " --- Outer Iter 1: norm_f = 814.571, mu=2303.12, |J|=5078.86\n", " --- Outer Iter 2: norm_f = 814.34, mu=767.708, |J|=5078.57\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 814.34 (862 data params - 31 model params = expected mean of 831; p-value = 0.65359)\n", " Completed in 0.6s\n", " 2*Delta(log(L)) = 815.742\n", " Iteration 4 took 1.1s\n", " \n", " --- Iterative MLGST: Iter 5 of 5 1282 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 1265.58, mu=0, |J|=5735.19\n", " --- Outer Iter 1: norm_f = 1252.76, mu=2582.97, |J|=5736.35\n", " --- Outer Iter 2: norm_f = 1252.69, mu=860.99, |J|=5736.93\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 1252.69 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.481202)\n", " Completed in 0.7s\n", " 2*Delta(log(L)) = 1254.49\n", " Iteration 5 took 1.2s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 627.245, mu=0, |J|=3702.41\n", " --- Outer Iter 1: norm_f = 627.238, mu=3782.07, |J|=3378.33\n", " --- Outer Iter 2: norm_f = 627.234, mu=4.13103e+07, |J|=3299.12\n", " --- Outer Iter 3: norm_f = 627.225, mu=2.26458e+07, |J|=3086.64\n", " --- Outer Iter 4: norm_f = 627.218, mu=2.09848e+07, |J|=3021.71\n", " --- Outer Iter 5: norm_f = 627.217, mu=2.05917e+07, |J|=3128.49\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Maximum log(L) = 627.217 below upper bound of -2.13594e+06\n", " 2*Delta(log(L)) = 1254.43 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.467364)\n", " Completed in 1.2s\n", " 2*Delta(log(L)) = 1254.43\n", " Final MLGST took 1.2s\n", " \n", " Iterative MLGST Total Time: 4.2s\n", " -- Performing 'single' gauge optimization on TP estimate --\n", " -- Adding Gauge Optimized (single) --\n", " -- Performing 'Spam 0.001' gauge optimization on TP estimate --\n", " -- Adding Gauge Optimized (Spam 0.001) --\n", " -- Performing 'Spam 0.001+v' gauge optimization on TP estimate --\n", " -- Adding Gauge Optimized (Spam 0.001+v) --\n", "-- Std Practice: Iter 2 of 3 (CPTP) --: \n", " --- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " --- Iterative MLGST: Iter 1 of 5 92 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 1.10824e+07, mu=0, |J|=21595.7\n", " --- Outer Iter 1: norm_f = 1055.69, mu=51712.5, |J|=1014.07\n", " --- Outer Iter 2: norm_f = 906.343, mu=17237.5, |J|=958.022\n", " --- Outer Iter 3: norm_f = 890.808, mu=5745.83, |J|=941.716\n", " --- Outer Iter 4: norm_f = 887.155, mu=1915.28, |J|=940.07\n", " --- Outer Iter 5: norm_f = 886.12, mu=638.426, |J|=940.129\n", " --- Outer Iter 6: norm_f = 886.031, mu=212.809, |J|=940.16\n", " --- Outer Iter 7: norm_f = 885.532, mu=70.9362, |J|=940.053\n", " --- Outer Iter 8: norm_f = 829.351, mu=189.163, |J|=926.197\n", " --- Outer Iter 9: norm_f = 639.664, mu=504.435, |J|=905.287\n", " --- Outer Iter 10: norm_f = 468.911, mu=1020.24, |J|=863.757\n", " --- Outer Iter 11: norm_f = 144.284, mu=340.081, |J|=1014.76\n", " --- Outer Iter 12: norm_f = 134.688, mu=113.36, |J|=982.031\n", " --- Outer Iter 13: norm_f = 133.055, mu=113.81, |J|=968.439\n", " --- Outer Iter 14: norm_f = 127.127, mu=95.4437, |J|=966.489\n", " --- Outer Iter 15: norm_f = 101.018, mu=63.6291, |J|=951.557\n", " --- Outer Iter 16: norm_f = 79.1423, mu=57.8456, |J|=949.601\n", " --- Outer Iter 17: norm_f = 55.2669, mu=19744.6, |J|=3971.7\n", " --- Outer Iter 18: norm_f = 53.4873, mu=8123.49, |J|=3974.72\n", " --- Outer Iter 19: norm_f = 52.6336, mu=5158.47, |J|=3975.96\n", " --- Outer Iter 20: norm_f = 51.6616, mu=2895.56, |J|=3977.28\n", " --- Outer Iter 21: norm_f = 50.1915, mu=965.188, |J|=3979\n", " --- Outer Iter 22: norm_f = 47.4829, mu=321.729, |J|=3981.8\n", " --- Outer Iter 23: norm_f = 44.9263, mu=107.243, |J|=3985.99\n", " --- Outer Iter 24: norm_f = 43.5176, mu=35.7477, |J|=3992.24\n", " --- Outer Iter 25: norm_f = 43.1156, mu=31.473, |J|=1802.81\n", " --- Outer Iter 
26: norm_f = 41.9408, mu=10.491, |J|=4017.9\n", " --- Outer Iter 27: norm_f = 41.2585, mu=8.25944, |J|=4022.01\n", " --- Outer Iter 28: norm_f = 41.0862, mu=2.75315, |J|=4023.48\n", " --- Outer Iter 29: norm_f = 41.0859, mu=4.98069, |J|=4025.25\n", " --- Outer Iter 30: norm_f = 41.0858, mu=9.8871, |J|=4027.21\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 41.1114 (92 data params - 31 model params = expected mean of 61; p-value = 0.976293)\n", " Completed in 1.0s\n", " 2*Delta(log(L)) = 44.6096\n", " Iteration 1 took 1.1s\n", " \n", " --- Iterative MLGST: Iter 2 of 5 168 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 179.784, mu=0, |J|=4102.96\n", " --- Outer Iter 1: norm_f = 138.536, mu=2285.72, |J|=4101.71\n", " --- Outer Iter 2: norm_f = 125.246, mu=761.907, |J|=4104.28\n", " --- Outer Iter 3: norm_f = 120.441, mu=253.969, |J|=4105.63\n", " --- Outer Iter 4: norm_f = 119.465, mu=84.6563, |J|=4106.02\n", " --- Outer Iter 5: norm_f = 119.322, mu=62.5071, |J|=4106.74\n", " --- Outer Iter 6: norm_f = 119.317, mu=81.3216, |J|=4107.73\n", " --- Outer Iter 7: norm_f = 119.315, mu=139.502, |J|=4108.97\n", " --- Outer Iter 8: norm_f = 119.314, mu=242.128, |J|=4110.48\n", " --- Outer Iter 9: norm_f = 119.313, mu=426.252, |J|=4112.26\n", " --- Outer Iter 10: norm_f = 119.312, mu=771.941, |J|=4114.3\n", " --- Outer Iter 11: norm_f = 119.312, mu=1478.24, |J|=4116.6\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 119.375 (168 data params - 31 model params = expected mean of 137; p-value = 0.858461)\n", " Completed in 0.5s\n", " 2*Delta(log(L)) = 122.407\n", " Iteration 2 took 0.6s\n", " \n", " --- Iterative MLGST: Iter 3 of 5 450 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 520.731, mu=0, |J|=4388.01\n", " --- Outer Iter 1: norm_f = 425.747, mu=2348.99, |J|=4387.99\n", " --- Outer Iter 2: norm_f = 418.289, mu=782.998, |J|=4387.4\n", " --- Outer Iter 3: norm_f = 416.05, mu=365.617, |J|=4387.75\n", " --- Outer Iter 4: norm_f = 415.687, mu=355.573, |J|=4389.64\n", " --- Outer Iter 5: norm_f = 415.658, mu=503.681, |J|=4392.13\n", " --- Outer Iter 6: norm_f = 415.643, mu=802.345, |J|=4395.23\n", " --- Outer Iter 7: norm_f = 415.629, mu=1291.1, |J|=4399.08\n", " --- Outer Iter 8: norm_f = 415.617, mu=2101.57, |J|=4403.66\n", " --- Outer Iter 9: norm_f = 415.607, mu=3490.92, |J|=4408.96\n", " --- Outer Iter 10: norm_f = 415.599, mu=6019.34, |J|=4414.94\n", " --- Outer Iter 11: norm_f = 415.596, mu=11138.8, |J|=4421.59\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 415.944 (450 data params - 31 model params = expected mean of 419; p-value = 0.532983)\n", " Completed in 0.9s\n", " 2*Delta(log(L)) = 431.785\n", " Iteration 3 took 1.0s\n", " \n", " --- Iterative MLGST: Iter 4 of 5 862 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 893.164, mu=0, |J|=4750.25\n", " --- Outer Iter 1: norm_f = 815.33, mu=2372.71, |J|=4738.51\n", " --- Outer Iter 2: norm_f = 814.675, mu=2184.19, |J|=4739.75\n", " --- Outer Iter 3: norm_f = 814.612, mu=2654.57, |J|=4742.24\n", " --- Outer Iter 4: norm_f = 814.583, mu=3911.8, |J|=4746.52\n", " --- Outer Iter 5: norm_f = 814.565, mu=6296.89, |J|=4752.54\n", " --- Outer Iter 6: norm_f = 814.552, mu=10677.9, |J|=4760.2\n", " --- Outer Iter 7: norm_f = 814.545, mu=19253, |J|=4769.42\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 815.074 (862 data params - 31 model params = expected mean of 831; p-value = 0.646841)\n", " Completed in 1.3s\n", " 2*Delta(log(L)) = 823.722\n", " Iteration 4 took 1.8s\n", " \n", " --- Iterative MLGST: Iter 5 of 5 1282 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Created evaluation tree with 1 subtrees. Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 1289.5, mu=0, |J|=5010.07\n", " --- Outer Iter 1: norm_f = 1254.01, mu=2582.86, |J|=5006.93\n", " --- Outer Iter 2: norm_f = 1253.18, mu=2427.07, |J|=5010.14\n", " --- Outer Iter 3: norm_f = 1253.05, mu=2664.24, |J|=5016.29\n", " --- Outer Iter 4: norm_f = 1253.01, mu=3781.49, |J|=5025.33\n", " --- Outer Iter 5: norm_f = 1252.98, mu=5981.45, |J|=5037.22\n", " --- Outer Iter 6: norm_f = 1252.96, mu=9875.24, |J|=5051.82\n", " --- Outer Iter 7: norm_f = 1252.95, mu=16863.4, |J|=5068.99\n", " --- Outer Iter 8: norm_f = 1252.94, mu=30266.9, |J|=5088.58\n", " --- Outer Iter 9: norm_f = 1252.94, mu=59685.2, |J|=5110.48\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Sum of Chi^2 = 1253.54 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.47442)\n", " Completed in 1.6s\n", " 2*Delta(log(L)) = 1270.99\n", " Iteration 5 took 2.1s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " Created evaluation tree with 1 subtrees. 
Will divide 1 procs into 1 (subtree-processing)\n", " groups of ~1 procs each, to distribute over 43 params (taken as 1 param groups of ~43 params).\n", " --- Outer Iter 0: norm_f = 635.496, mu=0, |J|=3112.36\n", " --- Outer Iter 1: norm_f = 627.361, mu=979.324, |J|=2703.11\n", " Least squares message = Both actual and predicted relative reductions in the sum of squares are at most 1e-06\n", " Maximum log(L) = 627.761 below upper bound of -2.13594e+06\n", " 2*Delta(log(L)) = 1255.52 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.458742)\n", " Completed in 0.9s\n", " 2*Delta(log(L)) = 1255.52\n", " Final MLGST took 0.9s\n", " \n", " Iterative MLGST Total Time: 7.4s\n", " -- Performing 'single' gauge optimization on CPTP estimate --\n", " -- Adding Gauge Optimized (single) --\n", " -- Performing 'Spam 0.001' gauge optimization on CPTP estimate --\n", " -- Adding Gauge Optimized (Spam 0.001) --\n", " -- Performing 'Spam 0.001+v' gauge optimization on CPTP estimate --\n", " -- Adding Gauge Optimized (Spam 0.001+v) --\n", "-- Std Practice: Iter 3 of 3 (Target) --: \n", " --- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " -- Performing 'single' gauge optimization on Target estimate --\n", " -- Adding Gauge Optimized (single) --\n", " -- Performing 'Spam 0.001' gauge optimization on Target estimate --\n", " -- Adding Gauge Optimized (Spam 0.001) --\n", " -- Performing 'Spam 0.001+v' gauge optimization on Target estimate --\n", " -- Adding Gauge Optimized (Spam 0.001+v) --\n", "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to tutorial_files/exampleStdReport directory\n", "Opening tutorial_files/exampleStdReport/main.html...\n", "*** Report Generation Complete! Total time 91.7888s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "results_std = pygsti.do_stdpractice_gst(ds, gs_target, fiducials, fiducials, germs,\n", " maxLengths, verbosity=4, modes=\"TP,CPTP,Target\",\n", " gaugeOptSuite=('single','toggleValidSpam'))\n", "\n", "# Generate a report with \"TP\", \"CPTP\", and \"Target\" estimates\n", "pygsti.report.create_standard_report(results_std, \"tutorial_files/exampleStdReport\", \n", " title=\"Post StdPractice Report\", auto_open=True,\n", " verbosity=1)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Reports with confidence regions\n", "To display confidence intervals for reported quantities, you must do two things:\n", "\n", "1. you must specify the `confidenceLevel` argument to `create_standard_report`.\n", "2. the estimate(s) being reported must have a valid confidence-region-factory.\n", "\n", "Constructing a factory often means computing a Hessian, which can be time consuming, and so this is *not* done automatically. Here we demonstrate how to construct a valid factory for the \"Spam 0.001\" gauge-optimization of the \"CPTP\" estimate by computing and then projecting the Hessian of the likelihood function. 
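\n\nSince computing the Hessian is the expensive step, it can also be distributed over multiple processors via the `comm` argument used in the next cell. A minimal, hedged sketch with `mpi4py` (assuming an MPI environment is available; the script would be run under `mpiexec`):\n\n```python\nfrom mpi4py import MPI\n\ncrfact = results_std.estimates[\"CPTP\"].add_confidence_region_factory('Spam 0.001', 'final')\ncrfact.compute_hessian(comm=MPI.COMM_WORLD)  # distribute the computation over MPI processes\ncrfact.project_hessian('intrinsic error')\n```\n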
" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " \n", "--- Hessian Projector Optimization from separate SPAM and Gate weighting ---\n", " Resulting intrinsic errors: 0.0119628 (gates), 0.000159745 (spam)\n", " Resulting sqrt(mean(gateCIs**2)): 0.0144373\n", " Resulting sqrt(mean(spamCIs**2)): 0.00181812\n", "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to tutorial_files/exampleStdReport2 directory\n", "Opening tutorial_files/exampleStdReport2/main.html...\n", "*** Report Generation Complete! Total time 136.359s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Construct and initialize a \"confidence region factory\" for the CPTP estimate\n", "crfact = results_std.estimates[\"CPTP\"].add_confidence_region_factory('Spam 0.001', 'final')\n", "crfact.compute_hessian(comm=None) #we could use more processors\n", "crfact.project_hessian('intrinsic error')\n", "\n", "pygsti.report.create_standard_report(results_std, \"tutorial_files/exampleStdReport2\", \n", " title=\"Post StdPractice Report (w/CIs on CPTP)\",\n", " confidenceLevel=95, auto_open=True, verbosity=1)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Reports with multiple *different* data sets\n", "We've already seen above that `create_standard_report` can be given a dictionary of `Results` objects instead of a single one. This allows the creation of reports containing estimates for different `DataSet`s (each `Results` object only holds estimates for a single `DataSet`). Furthermore, when the data sets have the same gate sequences, they will be compared within a tab of the HTML report.\n", "\n", "Below, we generate a new data set with the same sequences as the one loaded at the beginning of this tutorial, proceed to run standard-practice GST on that dataset, and create a report of the results along with those of the original dataset. Look at the **\"Data Comparison\" tab** within the gauge-invariant error metrics category." 
] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-- Std Practice: Iter 1 of 3 (TP) --: \n", " --- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " --- LGST ---\n", " Singular values of I_tilde (truncating to first 4 of 6) = \n", " 4.244829997162508\n", " 1.1936677889884049\n", " 0.9868539533169902\n", " 0.9321977240915887\n", " 0.04714742318656941\n", " 0.0127005208085848\n", " \n", " Singular values of target I_tilde (truncating to first 4 of 6) = \n", " 4.242640687119285\n", " 1.4142135623730954\n", " 1.4142135623730947\n", " 1.4142135623730945\n", " 3.1723744950054595e-16\n", " 1.0852733691121267e-16\n", " \n", " --- Iterative MLGST: Iter 1 of 5 92 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 50.2568 (92 data params - 31 model params = expected mean of 61; p-value = 0.835246)\n", " Completed in 0.1s\n", " 2*Delta(log(L)) = 50.4026\n", " Iteration 1 took 0.2s\n", " \n", " --- Iterative MLGST: Iter 2 of 5 168 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 112.85 (168 data params - 31 model params = expected mean of 137; p-value = 0.934965)\n", " Completed in 0.2s\n", " 2*Delta(log(L)) = 112.943\n", " Iteration 2 took 0.3s\n", " \n", " --- Iterative MLGST: Iter 3 of 5 450 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 409.836 (450 data params - 31 model params = expected mean of 419; p-value = 0.616314)\n", " Completed in 0.4s\n", " 2*Delta(log(L)) = 410.099\n", " Iteration 3 took 0.9s\n", " \n", " --- Iterative MLGST: Iter 4 of 5 862 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 833.69 (862 data params - 31 model params = expected mean of 831; p-value = 0.467224)\n", " Completed in 0.6s\n", " 2*Delta(log(L)) = 834.058\n", " Iteration 4 took 1.0s\n", " \n", " --- Iterative MLGST: Iter 5 of 5 1282 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 1262.38 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.405135)\n", " Completed in 0.9s\n", " 2*Delta(log(L)) = 1263.06\n", " Iteration 5 took 1.8s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " Maximum log(L) = 631.509 below upper bound of -2.13633e+06\n", " 2*Delta(log(L)) = 1263.02 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.400201)\n", " Completed in 2.3s\n", " 2*Delta(log(L)) = 1263.02\n", " Final MLGST took 2.3s\n", " \n", " Iterative MLGST Total Time: 6.5s\n", " -- Performing 'single' gauge optimization on TP estimate --\n", " -- Performing 'Spam 0.001' gauge optimization on TP estimate --\n", " -- Performing 'Spam 0.001+v' gauge optimization on TP estimate --\n", "-- Std Practice: Iter 2 of 3 (CPTP) --: \n", " --- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " --- Iterative MLGST: Iter 1 of 5 92 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 110.747 (92 data params - 31 model params = expected mean of 61; p-value = 0.00010228)\n", " Completed in 1.0s\n", " 2*Delta(log(L)) = 118.179\n", " Iteration 1 took 1.1s\n", " \n", " --- Iterative MLGST: Iter 2 of 5 168 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 141.775 (168 data params - 31 model params = 
expected mean of 137; p-value = 0.372448)\n", " Completed in 0.6s\n", " 2*Delta(log(L)) = 177.975\n", " Iteration 2 took 0.7s\n", " \n", " --- Iterative MLGST: Iter 3 of 5 450 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 520.025 (450 data params - 31 model params = expected mean of 419; p-value = 0.00054571)\n", " Completed in 1.0s\n", " 2*Delta(log(L)) = 621.726\n", " Iteration 3 took 1.2s\n", " \n", " --- Iterative MLGST: Iter 4 of 5 862 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 1095.72 (862 data params - 31 model params = expected mean of 831; p-value = 1.56063e-09)\n", " Completed in 1.4s\n", " 2*Delta(log(L)) = 1474.75\n", " Iteration 4 took 1.9s\n", " \n", " --- Iterative MLGST: Iter 5 of 5 1282 gate strings ---: \n", " --- Minimum Chi^2 GST ---\n", " Sum of Chi^2 = 1667.03 (1282 data params - 31 model params = expected mean of 1251; p-value = 2.10942e-14)\n", " Completed in 1.1s\n", " 2*Delta(log(L)) = 1577.75\n", " Iteration 5 took 1.9s\n", " \n", " Switching to ML objective (last iteration)\n", " --- MLGST ---\n", " Maximum log(L) = 800.58 below upper bound of -2.13633e+06\n", " 2*Delta(log(L)) = 1601.16 (1282 data params - 31 model params = expected mean of 1251; p-value = 5.60004e-11)\n", " Completed in 1.1s\n", " 2*Delta(log(L)) = 1601.16\n", " Final MLGST took 1.1s\n", " \n", " Iterative MLGST Total Time: 8.0s\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\n", "WARNING: MLGST failed to improve logl: retaining chi2-objective estimate\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " --- Re-optimizing logl after robust data scaling ---\n", " --- MLGST ---\n", " Maximum log(L) = 647.631 below upper bound of -2.13633e+06\n", " 2*Delta(log(L)) = 1295.26 (1282 data params - 31 model params = expected mean of 1251; p-value = 0.187279)\n", " Completed in 1.3s\n", " -- Performing 'single' gauge optimization on CPTP estimate --\n", " -- Performing 'single' gauge optimization on CPTP.Robust+ estimate --\n", " -- Performing 'Spam 0.001' gauge optimization on CPTP estimate --\n", " -- Performing 'Spam 0.001' gauge optimization on CPTP.Robust+ estimate --\n", " -- Performing 'Spam 0.001+v' gauge optimization on CPTP estimate --\n", " -- Performing 'Spam 0.001+v' gauge optimization on CPTP.Robust+ estimate --\n", "-- Std Practice: Iter 3 of 3 (Target) --: \n", " --- Gate Sequence Creation ---\n", " 1282 sequences created\n", " Dataset has 3382 entries: 1282 utilized, 0 requested sequences were missing\n", " -- Performing 'single' gauge optimization on Target estimate --\n", " -- Performing 'Spam 0.001' gauge optimization on Target estimate --\n", " -- Performing 'Spam 0.001+v' gauge optimization on Target estimate --\n", "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to tutorial_files/exampleMultiDataSetReport directory\n", "Opening tutorial_files/exampleMultiDataSetReport/main.html...\n", "*** Report Generation Complete! 
Total time 305.06s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Make another dataset & estimates\n", "depol_gateset = gs_target.depolarize(gate_noise=0.1)\n", "datagen_gateset = depol_gateset.rotate((0.05,0,0.03))\n", "\n", "#Compute the sequences needed to perform Long Sequence GST on \n", "# this GateSet with sequences up to length 512\n", "gatestring_list = pygsti.construction.make_lsgst_experiment_list(\n", " std1Q_XYI.gs_target, std1Q_XYI.prepStrs, std1Q_XYI.effectStrs,\n", " std1Q_XYI.germs, [1,2,4,8,16,32,64,128,256,512])\n", "ds2 = pygsti.construction.generate_fake_data(datagen_gateset, gatestring_list, nSamples=1000,\n", " sampleError='binomial', seed=2018)\n", "results_std2 = pygsti.do_stdpractice_gst(ds2, gs_target, fiducials, fiducials, germs,\n", " maxLengths, verbosity=3, modes=\"TP,CPTP,Target\",\n", " gaugeOptSuite=('single','toggleValidSpam'))\n", "\n", "pygsti.report.create_standard_report({'DS1': results_std, 'DS2': results_std2},\n", " \"tutorial_files/exampleMultiDataSetReport\", \n", " title=\"Example Multi-Dataset Report\", \n", " auto_open=True, verbosity=1)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Other cool `create_standard_report` options\n", "Finally, let us highlight a few of the additional arguments one can supply to `create_standard_report` that allow further control over what gets reported.\n", "\n", "- Setting the `link_to` argument to a tuple of `'pkl'`, `'tex'`, and/or `'pdf'` will create hyperlinks within the plots or below the tables of the HTML report, linking to Python pickle, LaTeX source, and PDF versions of the content, respectively. The Python pickle files for tables contain pickled pandas `DataFrame` objects, whereas those of plots contain ordinary Python dictionaries of the data that is plotted. This applies to HTML reports only.\n", "\n", "- Setting the `brevity` argument to an integer higher than $0$ (the default) will reduce the amount of information included in the report (for details on what is included for each value, see the doc string). Using `brevity > 0` will reduce the time required to create, and later load, the report, as well as the output file/folder size. This applies to both HTML and PDF reports.\n", "\n", "Below, we demonstrate both of these options in a very brief (`brevity=4`) report with links to pickle and PDF files. Note that to generate the PDF files you must have `pdflatex` installed." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "*** Creating workspace ***\n", "*** Generating switchboard ***\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "Found standard clifford compilation from std1Q_XYI\n", "*** Generating tables ***\n", "*** Generating plots ***\n", "*** Merging into template file ***\n", "Output written to tutorial_files/exampleBriefReport directory\n", "Opening tutorial_files/exampleBriefReport/main.html...\n", "*** Report Generation Complete! 
Total time 50.5172s ***\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pygsti.report.create_standard_report(results_std,\n", " \"tutorial_files/exampleBriefReport\", \n", " title=\"Example Brief Report\", \n", " auto_open=True, verbosity=1,\n", " brevity=4, link_to=('pkl','pdf'))" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true, "deletable": true, "editable": true }, "source": [ "## Advanced Reports: `create_report_notebook`\n", "In addition to the standard HTML-page reports demonstrated above, pyGSTi is able to generate a Jupyter notebook containing the Python commands to create the figures and tables within a general report. This is facilitated\n", "by `Workspace` objects, which are factories for figures and tables (see previous tutorials). By calling `create_report_notebook`, all of the relevant `Workspace` initialization and calls are dumped to a new notebook file, which can be run (either fully or partially) by the user at their convenience. Creating such \"report notebooks\" has the advantage that the user may insert Python code amidst the figure and table generation calls to inspect or modify what is displayed in a highly customizable fashion. The chief disadvantages of report notebooks are that they require the user to 1) have a Jupyter server up and running and 2) run the notebook before any figures are displayed.\n", "\n", "The line below demonstrates how to create a report notebook using `create_report_notebook`. Note that the argument list is very similar to `create_general_report`." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Report Notebook created as tutorial_files/exampleReport.ipynb\n" ] } ], "source": [ "pygsti.report.create_report_notebook(results, \"tutorial_files/exampleReport.ipynb\", \n", " title=\"GST Example Report Notebook\", confidenceLevel=None,\n", " auto_open=True, connected=False, verbosity=3)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 2 }