{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Functions for Model Testing\n", "This tutorial covers different methods of **comparing data to given (fixed) QIP models**. This is distinct from model-based *tomography*, which finds the best-fitting model for a data set within a space of models set by a `Model` object's parameterization. You might use this as a tool alongside or separate from GST. Perhaps you suspect that a given noisy QIP model is compatible with your data - model *testing* is the way to find out. Because there is no optimization involved, model testing requires much less time than GST does, and doens't place any requirements on which circuits are used in performing the test (though some circuits will give a more precise result).\n", "\n", "## Setup\n", "First, after some usual imports, we'll create some test data based on a depolarized and rotated version of a standard 1-qubit model consisting of $I$ (the identity), $X(\\pi/2)$ and $Y(\\pi/2)$ gates." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import pygsti\n", "import numpy as np\n", "import scipy\n", "from scipy import stats\n", "from pygsti.modelpacks import smq1Q_XYI" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "datagen_model = smq1Q_XYI.target_model().depolarize(op_noise=0.05, spam_noise=0.1).rotate((0.05,0,0.03))\n", "exp_list = pygsti.construction.make_lsgst_experiment_list(\n", " smq1Q_XYI.target_model(), smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials(),\n", " smq1Q_XYI.germs(), [1,2,4,8,16,32,64])\n", "ds = pygsti.construction.generate_fake_data(datagen_model, exp_list, nSamples=1000,\n", " sampleError='binomial', seed=100)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 1: Construct a test model\n", "After we have some data, the first step is creating a model or models that we want to test. This just means creating a `Model` object containing the operations (including SPAM) found in the data set. We'll create several models that are meant to look like guesses (some including more types of noise) of the true underlying model." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "target_model = smq1Q_XYI.target_model()\n", "test_model1 = target_model.copy()\n", "test_model2 = target_model.depolarize(op_noise=0.07, spam_noise=0.07)\n", "test_model3 = target_model.depolarize(op_noise=0.07, spam_noise=0.07).rotate( (0.02,0.02,0.02) )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 2: Test it!\n", "There are three different ways to test a model. Note that in each case the default behavior (and the only behavior demonstrated here) is to **never gauge-optimize the test `Model`**. (Whenever gauge-optimized versions of an `Estimate` are useful for comparisons with other estimates, *copies* of the test `Model` are used *without* actually performing any modification of the original `Model`.)\n", "\n", "### Method1: `do_model_test`\n", "First, you can do it \"from scratch\" by calling `do_model_test`, which has a similar signature as `do_long_sequence_gst` and folows its pattern of returning a `Results` object. The \"estimateLabel\" advanced option, which names the `Estimate` within the returned `Results` object, can be particularly useful. 
" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# creates a Results object with a \"default\" estimate\n", "results = pygsti.do_model_test(test_model1, ds, target_model, \n", " smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials(), smq1Q_XYI.germs(),\n", " [1,2,4,8,16,32,64]) \n", "\n", "# creates a Results object with a \"default2\" estimate\n", "results2 = pygsti.do_model_test(test_model2, ds, target_model, \n", " smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials(), smq1Q_XYI.germs(),\n", " [1,2,4,8,16,32,64], advancedOptions={'estimateLabel': 'default2'}) \n", "\n", "# creates a Results object with a \"default3\" estimate\n", "results3 = pygsti.do_model_test(test_model3, ds, target_model, \n", " smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials(), smq1Q_XYI.germs(),\n", " [1,2,4,8,16,32,64], advancedOptions={'estimateLabel': 'default3'})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Like any other set of `Results` objects which share the same `DataSet` and operation sequences, we can collect all of these estimates into a single `Results` object and easily make a report containing all three." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "results.add_estimates(results2)\n", "results.add_estimates(results3)\n", "\n", "pygsti.report.construct_standard_report(\n", " results, title=\"Model Test Example Report\", verbosity=1\n", ").write_html(\"../tutorial_files/modeltest_report\", auto_open=True, verbosity=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Method 2: `add_model_test`\n", "Alternatively, you can add a model-to-test to an existing `Results` object. This is convenient when running GST via `do_long_sequence_gst` or `do_stdpractice_gst` has left you with a `Results` object and you also want to see how well a hand-picked model fares. Since the `Results` object already contains a `DataSet` and list of sequences, all you need to do is provide a `Model`. This is accomplished using the `add_model_test` method of a `Results` object." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "#Create some GST results using do_stdpractice_gst\n", "gst_results = pygsti.do_stdpractice_gst(ds, target_model, \n", " smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials(), smq1Q_XYI.germs(),\n", " [1,2,4,8,16,32,64])\n", "\n", "#Add a model to test\n", "gst_results.add_model_test(target_model, test_model3, estimate_key='MyModel3')\n", "\n", "#Create a report to see that we've added an estimate labeled \"MyModel3\"\n", "pygsti.report.construct_standard_report(\n", " gst_results, title=\"GST with Model Test Example Report 1\", verbosity=1\n", ").write_html(\"../tutorial_files/gstwithtest_report1\", auto_open=True, verbosity=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Method 3: `modelToTest` argument\n", "Finally, yet another way to perform model testing alongside GST is by using the `modelsToTest` argument of `do_stdpractice_gst`. This essentially combines calls to `do_stdpractice_gst` and `Results.add_model_test` (demonstrated above) with the added control of being able to specify the ordering of the estimates via the `modes` argument. To important remarks are in order:\n", "\n", "1. You *must* specify the names (keys of the `modelsToTest` argument) of your test models in the comma-delimited string that is the `modes` argument. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "### Method 3: `modelsToTest` argument\n", "Finally, yet another way to perform model testing alongside GST is by using the `modelsToTest` argument of `do_stdpractice_gst`. This essentially combines calls to `do_stdpractice_gst` and `Results.add_model_test` (demonstrated above) with the added control of being able to specify the ordering of the estimates via the `modes` argument. Two important remarks are in order:\n", "\n", "1. You *must* specify the names (keys of the `modelsToTest` argument) of your test models in the comma-delimited string that is the `modes` argument. Just giving a dictionary of `Model`s as `modelsToTest` will not automatically test those models in the returned `Results` object.\n", "\n", "2. You don't actually need to run any GST modes at all: used this way, a single `do_stdpractice_gst` call creates one `Results` object containing multiple model tests, with estimate names that you specify. Thus `do_stdpractice_gst` can replace the multiple `do_model_test` calls (each with an \"estimateLabel\" advanced option) and the subsequent collection of estimates via `Results.add_estimates` demonstrated under \"Method 1\" above." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "gst_results = pygsti.do_stdpractice_gst(ds, target_model, smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials(), smq1Q_XYI.germs(),\n", "    [1,2,4,8,16,32,64], modes=\"TP,Test2,Test3,Target\",  # You MUST name the test models in `modes`\n", "    modelsToTest={'Test2': test_model2, 'Test3': test_model3})\n", "\n", "pygsti.report.construct_standard_report(\n", "    gst_results, title=\"GST with Model Test Example Report 2\", verbosity=1\n", ").write_html(\"../tutorial_files/gstwithtest_report2\", auto_open=True, verbosity=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's it! Now that you know more about model testing, you may want to go back to the [overview of pyGSTi applications](../02-Using-Essential-Objects.ipynb)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.0" } }, "nbformat": 4, "nbformat_minor": 2 }