{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Gate String Reduction Tutorial\n", "The gate sequences used in standard Long Sequence GST are more than what are needed to amplify every possible gate error. (Technically, this is due to the fact that the informationaly complete fiducial sub-sequences allow extraction of each germ's *entire* process matrix, when all that is needed is the part describing the amplified directions in gate set space.) Because of this over-completeness, fewer sequences, i.e. experiments, may be used whilst retaining the desired Heisenberg-like scaling ($\\sim 1/L$, where $L$ is the maximum length sequence). The over-completeness can still be desirable, however, as it makes the GST optimization more robust to model violation and so can serve to stabilize the GST parameter optimization in the presence of significant non-Markovian noise. Recall that the form of a GST gate sequence is\n", "\n", "$$S = F_i (g_k)^n F_j $$\n", "\n", "where $F_i$ is a \"preparation fiducial\" sequence, $F_j$ is a \"measurement fiducial\" sequence, and \"g_k\" is a \"germ\" sequence. The repeated germ sequence $(g_k)^n$ we refer to as a \"germ-power\". There are currently three different ways to reduce a standard set of GST gate sequences within pyGSTi, each of which removes certain $(F_i,F_j)$ fiducial pairs for certain germ-powers.\n", "\n", "- **Global fiducial pair reduction (GFPR)** removes the same intelligently-selected set of fiducial pairs for all germ-powers. This is a conceptually simple method of reducing the gate sequences, but it is the most computationally intensive since it repeatedly evaluates the number of amplified parameters for en *entire germ set*. In practice, while it can give very large sequence reductions, its long run can make it prohibitive, and the \"per-germ\" reduction discussed next is used instead.\n", "- **Per-germ fiducial pair reduction (PFPR)** removes the same intelligently-selected set of fiducial pairs for all powers of a given germ, but different sets are removed for different germs. Since different germs amplify different directions in gate set space, it makes intuitive sense to specify different fiducial pair sets for different germs. Because this method only considers one germ at a time, it is less computationally intensive than GFPR, and thus more practical. Note, however, that PFPR usually results in less of a reduction of the gate sequences, since it does not (currently) take advantage overlaps in the amplified directions of different germs (i.e. if $g_1$ and $g_3$ both amplify two of the same directions, then GST doesn't need to know about these from both germs).\n", "- **Random fiducial pair reduction (RFPR)** randomly chooses a different set of fiducial pairs to remove for each germ-power. It is extremly fast to perform, as pairs are just randomly selected for removal, and in practice works well (i.e. does not impair Heisenberg-scaling) up until some critical fraction of the pairs are removed. This reflects the fact that the direction detected by a fiducial pairs usually has some non-negligible overlap with each of the directions amplified by a germ, and it is the exceptional case that an amplified direction escapes undetected. As such, the \"critical fraction\" which can usually be safely removed equals the ratio of amplified-parameters to germ-process-matrix-elements (typically $\\approx 1/d^2$ where $d$ is the Hilbert space dimension, so $1/4 = 25\\%$ for 1 qubit and $1/16 = 6.25\\%$ for 2 qubits). 
, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preliminaries\n", "\n", "We now demonstrate how to invoke each of these methods within pyGSTi for the case of a single qubit, using our standard $X(\\pi/2)$, $Y(\\pi/2)$, $I$ gateset. First, we retrieve a target `GateSet` as usual, along with corresponding sets of fiducial and germ sequences. We set the maximum length to be 32, roughly consistent with our data-generating gate set having gates depolarized by 10%." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Gate labels = ['Gi', 'Gx', 'Gy']\n" ] } ], "source": [ "import matplotlib\n", "matplotlib.use(\"Agg\") #so report creation works within ipython notebooks\n", "\n", "#Import pyGSTi and the \"standard 1-qubit quantities for a gateset with X(pi/2), Y(pi/2), and idle gates\"\n", "import pygsti\n", "import pygsti.construction as pc\n", "from pygsti.construction import std1Q_XYI\n", "\n", "#Collect a target gate set, germ and fiducial strings, and set \n", "# a list of maximum lengths.\n", "gs_target = std1Q_XYI.gs_target\n", "prep_fiducials = std1Q_XYI.fiducials\n", "meas_fiducials = std1Q_XYI.fiducials\n", "germs = std1Q_XYI.germs\n", "maxLengths = [0,1,2,4,8,16,32]\n", "\n", "gateLabels = list(gs_target.gates.keys())\n", "print(\"Gate labels = \", gateLabels)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sequence Reduction\n", "\n", "Now let's generate a list of all the gate sequences for each maximum length - so a list of lists. We'll generate the full lists (without any reduction) and the lists for each of the three reduction types listed above. 
In the random reduction case, we'll keep 30% of the fiducial pairs, removing 70% of them.\n", "\n", "### No Reduction (\"standard\" GST)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "** Without any reduction ** \n", "L=0: 92 gate sequences\n", "L=1: 92 gate sequences\n", "L=2: 168 gate sequences\n", "L=4: 441 gate sequences\n", "L=8: 817 gate sequences\n", "L=16: 1201 gate sequences\n", "L=32: 1585 gate sequences\n", "\n", "1585 experiments to run GST.\n" ] } ], "source": [ "#Make list-of-lists of GST gate sequences\n", "fullLists = pc.make_lsgst_lists(\n", " gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths)\n", "\n", "#Print the number of gate sequences for each maximum length\n", "print(\"** Without any reduction ** \")\n", "for L,lst in zip(maxLengths,fullLists):\n", " print(\"L=%d: %d gate sequences\" % (L,len(lst)))\n", " \n", "#Make a (single) list of all the GST sequences ever needed,\n", "# that is, the list of all the experiments needed to perform GST.\n", "fullExperiments = pc.make_lsgst_experiment_list(\n", " gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths)\n", "print(\"\\n%d experiments to run GST.\" % len(fullExperiments))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Global Fiducial Pair Reduction (GFPR)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------ Fiducial Pair Reduction --------\n", "maximum number of amplified parameters = 34\n", "Beginning search for a good set of 2 pairs (630 pair lists to test)\n", "Beginning search for a good set of 3 pairs (7140 pair lists to test)\n", "Global FPR says we only need to keep the 3 pairs:\n", " [(2, 3), (5, 1), (5, 2)]\n", "\n", "Global FPR reduction\n", "L=0: 92 gate sequences\n", "L=1: 92 gate sequences\n", "L=2: 100 gate sequences\n", "L=4: 130 gate sequences\n", "L=8: 163 gate sequences\n", "L=16: 196 gate sequences\n", "L=32: 229 gate sequences\n", "\n", "229 experiments to run GST.\n" ] } ], "source": [ "fidPairs = pygsti.alg.find_sufficient_fiducial_pairs(\n", " gs_target, prep_fiducials, meas_fiducials, germs,\n", " searchMode=\"random\", nRandom=100, seed=1234,\n", " verbosity=1, memLimit=int(2*(1024)**3), minimumPairs=2)\n", "\n", "# fidPairs is a list of (prepIndex,measIndex) 2-tuples, where\n", "# prepIndex indexes prep_fiducials and measIndex indexes meas_fiducials\n", "print(\"Global FPR says we only need to keep the %d pairs:\\n %s\\n\"\n", " % (len(fidPairs),fidPairs))\n", "\n", "gfprLists = pc.make_lsgst_lists(\n", " gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths,\n", " fidPairs=fidPairs)\n", "\n", "print(\"Global FPR reduction\")\n", "for L,lst in zip(maxLengths,gfprLists):\n", " print(\"L=%d: %d gate sequences\" % (L,len(lst)))\n", " \n", "gfprExperiments = pc.make_lsgst_experiment_list(\n", " gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths,\n", " fidPairs=fidPairs)\n", "print(\"\\n%d experiments to run GST.\" % len(gfprExperiments))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Per-germ Fiducial Pair Reduction (PFPR)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------ Individual Fiducial Pair Reduction --------\n", "Progress: [##################################################] 100.0% -- 
GxGxGyGxGyGy germ (4 params)\n", "\n", "Per-germ FPR to keep the pairs:\n", "GxGiGy: [(1, 3), (1, 4), (4, 0), (5, 0)]\n", "Gi: [(0, 0), (1, 1), (5, 1), (5, 2)]\n", "GxGyGi: [(1, 3), (1, 4), (4, 0), (5, 0)]\n", "GxGxGyGxGyGy: [(1, 3), (1, 4), (4, 0), (5, 0)]\n", "GxGyGyGi: [(0, 2), (1, 3), (1, 4), (4, 4), (5, 0), (5, 2)]\n", "GyGiGi: [(0, 0), (0, 5), (1, 1), (4, 4)]\n", "Gx: [(0, 0), (0, 4), (3, 3), (5, 2)]\n", "Gy: [(0, 0), (0, 5), (1, 1), (4, 4)]\n", "GxGy: [(1, 3), (1, 4), (4, 0), (5, 0)]\n", "GxGiGi: [(0, 0), (0, 4), (3, 3), (5, 2)]\n", "GxGxGiGy: [(0, 2), (0, 4), (1, 3), (2, 5), (3, 2), (4, 4)]\n", "\n", "Per-germ FPR reduction\n", "L=0: 92 gate sequences\n", "L=1: 92 gate sequences\n", "L=2: 99 gate sequences\n", "L=4: 138 gate sequences\n", "L=8: 185 gate sequences\n", "L=16: 233 gate sequences\n", "L=32: 281 gate sequences\n", "\n", "281 experiments to run GST.\n" ] } ], "source": [ "fidPairsDict = pygsti.alg.find_sufficient_fiducial_pairs_per_germ(\n", " gs_target, prep_fiducials, meas_fiducials, germs,\n", " searchMode=\"random\", constrainToTP=True,\n", " nRandom=100, seed=1234, verbosity=1,\n", " memLimit=int(2*(1024)**3))\n", "print(\"\\nPer-germ FPR to keep the pairs:\")\n", "for germ,pairsToKeep in fidPairsDict.items():\n", " print(\"%s: %s\" % (str(germ),pairsToKeep))\n", "\n", "pfprLists = pc.make_lsgst_lists(\n", " gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths,\n", " fidPairs=fidPairsDict) #note: fidPairs arg can be a dict too!\n", "\n", "print(\"\\nPer-germ FPR reduction\")\n", "for L,lst in zip(maxLengths,pfprLists):\n", " print(\"L=%d: %d gate sequences\" % (L,len(lst)))\n", "\n", "pfprExperiments = pc.make_lsgst_experiment_list(\n", " gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths,\n", " fidPairs=fidPairsDict)\n", "print(\"\\n%d experiments to run GST.\" % len(pfprExperiments))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Random Fiducial Pair Reduction (RFPR)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Random FPR reduction\n", "L=0: 92 gate sequences\n", "L=1: 92 gate sequences\n", "L=2: 113 gate sequences\n", "L=4: 207 gate sequences\n", "L=8: 324 gate sequences\n", "L=16: 442 gate sequences\n", "L=32: 561 gate sequences\n", "\n", "561 experiments to run GST.\n" ] } ], "source": [ "#keep only 30% of the pairs\n", "rfprLists = pc.make_lsgst_lists(\n", " gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths,\n", " keepFraction=0.30, keepSeed=1234)\n", "\n", "print(\"Random FPR reduction\")\n", "for L,lst in zip(maxLengths,rfprLists):\n", " print(\"L=%d: %d gate sequences\" % (L,len(lst)))\n", " \n", "rfprExperiments = pc.make_lsgst_experiment_list(\n", " gateLabels, prep_fiducials, meas_fiducials, germs, maxLengths,\n", " keepFraction=0.30, keepSeed=1234)\n", "print(\"\\n%d experiments to run GST.\" % len(rfprExperiments))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Running GST\n", "In each case above, we constructed (1) a list-of-lists giving the GST gate sequences for each maximum-length stage, and (2) a list of the experiments. In what follows, we'll use the experiment list to generate some simulated (\"fake\") data for each case, and then run GST on it. Since this is done in exactly the same way for all three cases, we'll put all of the logic in a function." 
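, "\n", "Before running GST, it can be helpful to compare the sizes of the four experiment lists built above. The quick check below is only a convenience; it uses nothing beyond the lists already constructed in this notebook:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#Compare the sizes of the four experiment lists constructed above\n", "for name, expts in [(\"full\", fullExperiments), (\"GFPR\", gfprExperiments),\n", "                    (\"PFPR\", pfprExperiments), (\"RFPR\", rfprExperiments)]:\n", "    frac = 100.0 * len(expts) / len(fullExperiments)\n", "    print(\"%5s: %4d experiments (%5.1f%% of full)\" % (name, len(expts), frac))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we generate data and run GST for each case:" ] }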
] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "------ GST with standard (full) sequences ------\n", "--- LGST ---\n", "--- Iterative MLGST: [##################################################] 100.0% 1585 gate strings ---\n", "Iterative MLGST Total Time: 11.0s\n", "\n", "------ GST with GFPR sequences ------\n", "--- LGST ---\n", "--- Iterative MLGST: [##################################################] 100.0% 229 gate strings ---\n", "Iterative MLGST Total Time: 3.4s\n", "\n", "------ GST with PFPR sequences ------\n", "--- LGST ---\n", "--- Iterative MLGST: [##################################################] 100.0% 281 gate strings ---\n", "Iterative MLGST Total Time: 7.0s\n", "\n", "------ GST with RFPR sequences ------\n", "--- LGST ---\n", "--- Iterative MLGST: [##################################################] 100.0% 561 gate strings ---\n", "Iterative MLGST Total Time: 11.8s\n" ] } ], "source": [ "#use a depolarized version of the target gates to generate the data\n", "gs_datagen = gs_target.depolarize(gate_noise=0.1, spam_noise=0.001)\n", "\n", "def runGST(gstLists, exptList):\n", " #Use list of experiments, expList, to generate some data\n", " ds = pc.generate_fake_data(gs_datagen, exptList,\n", " nSamples=1000,sampleError=\"binomial\", seed=1234)\n", " \n", " #Pass in list-of-lists of GST sequences as 'lsgstLists' argument\n", " return pygsti.do_long_sequence_gst(\n", " ds, gs_target, prep_fiducials, meas_fiducials,\n", " germs, maxLengths,lsgstLists=gstLists, verbosity=1)\n", "\n", "print(\"\\n------ GST with standard (full) sequences ------\")\n", "full_results = runGST(fullLists, fullExperiments)\n", "\n", "print(\"\\n------ GST with GFPR sequences ------\")\n", "gfpr_results = runGST(gfprLists, gfprExperiments)\n", "\n", "print(\"\\n------ GST with PFPR sequences ------\")\n", "pfpr_results = runGST(pfprLists, pfprExperiments)\n", "\n", "print(\"\\n------ GST with RFPR sequences ------\")\n", "rfpr_results = runGST(rfprLists, rfprExperiments)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, one can generate reports using GST with reduced-sequences:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [], "source": [ "full_results.create_full_report_pdf(\n", " filename=\"tutorial_files/example_stdstrs_report.pdf\")\n", "gfpr_results.create_full_report_pdf(\n", " filename=\"tutorial_files/example_gfpr_report.pdf\")\n", "pfpr_results.create_full_report_pdf(\n", " filename=\"tutorial_files/example_pfpr_report.pdf\")\n", "rfpr_results.create_full_report_pdf(\n", " filename=\"tutorial_files/example_rfpr_report.pdf\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If all has gone well, the [Standard GST](tutorial_files/example_stdstrs_report.pdf),\n", "[GFPR](tutorial_files/example_gfpr_report.pdf),\n", "[PFPR](tutorial_files/example_pfpr_report.pdf), and\n", "[RFPR](tutorial_files/example_rfpr_report.pdf),\n", "reports may now be viewed.\n", "The only notable difference in the output are \"gaps\" in the color box plots which plot quantities such as the log-likelihood across all gate sequences, organized by germ and fiducials (see figure 2 in each report). 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 1 }