{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "# Randomized Benchmarking: Basic Tutorial" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "This tutorial demonstrates how to perform randomized benchmarking (RB) using `pygsti`. While RB is a very distinct protocol from Gate Set Tomography (GST), `pygsti` includes basic support for RB because of its prevalence in the community, its simplicity, and its considerable use of GST-related concepts and data structures. The core protocol is standard Clifford randomized benchmarking defined in [\"Scalable and Robust Benchmarking of Quantum Processes\"](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.106.180504). Much of the notation is consistent with Wallman and Flammia's [\"Randomized benchmarking with confidence\"](http://iopscience.iop.org/article/10.1088/1367-2630/16/10/103032).\n", "\n", "This tutorial will show the following, all in the context of benchmarking a single qubit:\n", "- How to create a list of random RB sequences (experiments). These are just a list of pyGSTi `GateString` objects.\n", "- How to write a template data file from this list.\n", "- How to compute RB fit parameters from a pyGSTi `DataSet` filled with RB sequence data.\n", "- How to compute error bars on the various RB parameters and derived quantities.\n", "\n", "We'll begin by importing relevant modules:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/Users/enielse/research/pyGSTi/packages/pygsti/tools/matrixtools.py:23: UserWarning: Could not import Cython extension - falling back to slower pure-python routines\n", " _warnings.warn(\"Could not import Cython extension - falling back to slower pure-python routines\")\n" ] } ], "source": [ "from __future__ import print_function #python 2 & 3 compatibility\n", "\n", "import pygsti\n", "from pygsti.extras import rb\n", "from pygsti.construction import std1Q_XYI" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Primitive gates, and how they map to Cliffords\n", "First, let's choose a \"target\" gateset. This is the set of physically-implemented, or \"primitive\" gates. For this tutorial, we'll just use the standard $I$, $X(\\pi/2)$, $Y(\\pi/2)$ set. The target gateset should generate the Clifford group (or some other unitary 2-design)." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Primitive gates = [u'Gi', u'Gx', u'Gy']\n" ] } ], "source": [ "gs_target = std1Q_XYI.gs_target\n", "print(\"Primitive gates = \", gs_target.gates.keys())" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "To generate appropriately random RB sequences, we'll need to know how the set of all the Clifford gates map onto the given primitive set (since RB requires sequences to be random sequences of *Cliffords*, not of primitive gates). PyGSTi already contains the group of 1-qubit Cliffords. Benchmarking of a different group, or the $n>1$ qubit Clifford group requires the user to define this group.\n", "\n", "PyGSTi contains a standard compilation of each 1-qubit Clifford into the gates $\\{I,X(\\pi/2),Y(\\pi/2)\\}$, which we will use here." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "clifford_to_primitive = std1Q_XYI.clifford_compilation\n", "\n", "# get the 1Q Clifford group: the canonical set of superoperator matrices representing the Clifford group, used later.\n", "clifford_group = rb.std1Q.clifford_group" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Generating RB sequences\n", "Now let's decide what random Clifford sequences to generate. We use $m$ to denote the length of a Clifford sequence, in Clifford gates and *not* including the inversion Clifford at the end of each sequence. $K_m$ denotes the number of different random sequences of length $m$ to use. Note: `K_m_sched` need not be $m$-independent, and can be a dictionary, with $(m,K_m)$ key-value pairs." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "m_list = [1,101,201,301,401,501,601,701,801,801,1001]\n", "K_m = 10" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Now we generate the list of random RB Clifford sequences to run. The `write_empty_rb_files` function handles this job, and does a lot. Here's what this one function call does:\n", "\n", "- It creates lists of random RB gate sequences, one list for each $m$, according to the schedule given by $m_{min}$, $m_{max}$, $\\delta_m$, and $K_m$. These sequences are expressed as strings of Clifford gate labels and translated using any of the supplied maps (in this case, the string are translated to \"primitive\" labels also). These lists-of-lists are returned as a dictionary whose keys are \"clifford\" (always present) and \"primitive\" (b/c it's a key of the dict passed as `alias_maps`).\n", "- The lists for each set of gate labels (the Cliffords and primitives in this case) is aggregated across all $m$ values (so there's just a single list of all the RB sequences) and saved to a file beginning with the given base filename.\n", "- An empty `DataSet` is saved in text format using the RB sequences expressed in terms of Clifford gates.\n" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "filename_base = 'tutorial_files/rb_template'\n", "rb_sequences = rb.write_empty_rb_files(filename_base, m_list, K_m, clifford_group,\n", " {'primitive': clifford_to_primitive},\n", " seed=0)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true, "deletable": true, "editable": true }, "source": [ "There is now an empty template file [tutorial_files/rb_template.txt](tutorial_files/rb_template.txt). For actual physical experiments, this file should be filled with experimental data and read in using `pygsti.io.load_dataset`. In this tutorial, we will generate fake data instead and just use the resulting dataset object.\n", "\n", "The files [tutorial_files/rb_template_clifford.txt](tutorial_files/rb_template_clifford.txt) and [tutorial_files/rb_template_primitive.txt](tutorial_files/rb_template_primitive.txt) are text files listing all the RB sequences, expressed in terms of Cliffords and primitives respectively." ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Generating fake data" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "To generate a dataset, we first need to make a gateset. Here we assume a gate set that is perfect except for some small amount of depolarizing noise on each primitive gate." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "depol_strength = 1e-3\n", "gs_experimental = std1Q_XYI.gs_target\n", "gs_experimental = gs_experimental.depolarize(gate_noise=depol_strength)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Now we choose the number of clicks per experiment and simulate our data. More information on simulating RB can be found in the following tutorial." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "all_rb_sequences = [] #construct an aggregate list of Clifford sequences\n", "for seqs_for_single_cliff_len in rb_sequences:\n", " all_rb_sequences.extend(seqs_for_single_cliff_len)\n", " \n", "N=100 # number of samples\n", "rb_data = pygsti.construction.generate_fake_data(\n", " gs_experimental,all_rb_sequences,N,'binomial',seed=1,\n", " aliasDict=clifford_to_primitive, collisionAction=\"keepseparate\")" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Running the RB analysis\n", "Now that we have data, it's time to perform the RB analysis. The \n", "function `do_randomized_benchmarking` returns an `RBResults` object which holds all the relevant input and output RB quantities. This object can be used to generate error bars on the computed RB quanties.\n", "\n", "Some important arguments are:\n", "- success_spamlabel : the spam label corresponding to the *expected* outcome when preparing and immediately measuring.\n", "- dim : the Hilbert space dimension. This defaults to 2 (the 1-qubit case) and so can usually be left out." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "rb_results = rb.do_randomized_benchmarking(rb_data, all_rb_sequences,fit='first order',success_outcomelabel='0', dim=2)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Examining the output" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Okay, so we've done RB! Now let's examine how we can use the returned `RBResults` object to visualize and inspect the results. First let's plot the averaged RB data (i.e., averaged over sequences at each length) and the decay curve that has been fit to the data.\n", "\n", "Some useful optional arguments are: xlim, ylim, save_fig_path, loc, which all perform the standard matploblib functions, and also legend (true or false), title (true or false)." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
Loading...
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "#Create a workspace to show plots\n", "w = pygsti.report.Workspace()\n", "w.init_notebook_mode(connected=False, autodisplay=True) " ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "data": { "text/html": [ "
\n", "
\n", "
\n", "
\n", "\n", "
\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [ "" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "w.RandomizedBenchmarkingPlot(rb_results)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Let's look at the RB fit results. The parameters are defined as follows,\n", "following the Wallman and Flammia article cited above.\n", "- `A`,`B`,`f` are fit parameters to the standard RB fitting function $P_m = A+B\\,f^m$, where $P_m$ is the average \"survival probability\" for sequences of length $m$.\n", "- `r` $= (1-p)(d-1)/d$. This is the the \"RB number\".\n", "- To express the RB result as a \"fidelity\"-like quantity, rather than an error rate, we can consider $1-r$." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "RB results\n", "\n", " - Fitting to the first order fitting function: A + (B+Cm)*f^m.\n", "\n", "A = 0.5158046259207254\n", "B = 0.48752902145336097\n", "C = -8.702270289706322e-09\n", "f = 0.996307105067199\n", "r = 0.0018464474664005026\n", "\n" ] } ], "source": [ "rb_results.print_results()" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Error Bars\n", "Lastly, let's put some error bars on the estimates. There are two methods for computing the error bars: analytic error bars using the method of Wallman and Flammia in [\"Randomized benchmarking with confidence\"](http://iopscience.iop.org/article/10.1088/1367-2630/16/10/103032), or bootstrapped error bars. Error bars here are 1-sigma confidence intervals. The Wallman and Flammia method is only possible with a particular $K_m$ schedule, and they cannot be calculated with a constant $K_m$, as here. So, we compute bootstrapped error bars:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false, "deletable": true, "editable": true, "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Generating non-parametric dataset.\n", "Bootstrapped error bars computed. Use print methods to access.\n" ] } ], "source": [ "rb_results.compute_bootstrap_error_bars(seed=0)" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Now that we've generated (bootstrapped) error bars, we can print them using the same print methods as before:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "RB results\n", "\n", " - Fitting to the first order fitting function: A + (B+Cm)*f^m.\n", " - Boostrapped-derived error bars (1 sigma).\n", "\n", "A = 0.5158046259207254 +/- 0.0\n", "B = 0.48752902145336097 +/- 5.579080615598709e-17\n", "C = -8.702270289706322e-09 +/- 0.0\n", "f = 0.996307105067199 +/- 1.1158161231197418e-16\n", "r = 0.0018464474664005026 +/- 5.579080615598709e-17\n", "\n" ] } ], "source": [ "rb_results.print_results()" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "We can also manually extract the error bars, and other results parameters. For example:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.0018464474664005026\n", "5.579080615598709e-17\n" ] } ], "source": [ "print(rb_results.results['r'])\n", "print(rb_results.results['r_error_BS'])" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.10" } }, "nbformat": 4, "nbformat_minor": 0 }