{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Introduction\n", "============\n", "This notebook is the computational appendix of [arXiv:1609.06139](http://arxiv.org/abs/1609.06139). We demonstrate how to use some convenience functions to decide whether a qubit or qutrit POVM is simulable and how much noise is needed to make it simulable. We also give the details of how to reproduce the numerical results shown in the paper. Furthermore, we show how to decompose a simulable POVM into a convex combination of projective measurements.\n", "\n", "To improve the readability of this notebook, we placed the supporting functions in a [separate file](https://github.com/peterwittek/ipython-notebooks/blob/master/povm_tools.py); please download it to the same folder as the notebook if you would like to evaluate it. The following dependencies must also be available: the Python interface for conic optimization software [Picos](http://picos.zib.de/) and its dependency [cvxopt](http://cvxopt.org/), at least one SDP solver ([SDPA](http://sdpa.sourceforge.net/) as an executable in the path or [Mosek](http://mosek.com/) with its Python interface installed; cvxopt as a solver is not recommended), and a vertex enumerator ([cdd](https://www.inf.ethz.ch/personal/fukudak/cdd_home/) with its [Python interface](https://pypi.python.org/pypi/pycddlib) or [lrs/plrs](http://cgm.cs.mcgill.ca/~avis/C/lrs.html) as an executable in the path).\n", "\n", "First, we import everything we will need:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from __future__ import print_function, division\n", "from fractions import Fraction\n", "import numpy as np\n", "import numpy.linalg\n", "import random\n", "import time\n", "from povm_tools import basis, check_ranks, complex_cross_product, dag, decomposePovmToProjective, \\\n", "                       enumerate_vertices, find_best_shrinking_factor, get_random_qutrit, \\\n", "                       get_visibility, Pauli, 
truncatedicosahedron" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Checking critical visibility\n", "=============================\n", "The question of whether a qubit or qutrit POVM is simulable by projective measurements boils down to an SDP feasibility problem. Few SDP solvers can handle feasibility problems, so from a computational point of view, it is easier to phrase the question as an SDP optimization that returns the critical visibility, that is, the threshold below which the depolarized POVM becomes simulable. Recall Eq. (8) from the paper, which defines the noisy POVM we obtain by subjecting a POVM $\\mathbf{M}$ to a depolarizing channel $\\Phi_t$:\n", "\n", "$\\left[\\Phi_t\\left(\\mathbf{M}\\right)\\right]_i := t M_i + (1-t)\\frac{\\mathrm{tr}(M_i)}{d} \\mathbb{1}$.\n", "\n", "If the critical visibility $t\\in[0,1]$ equals one, the POVM $\\mathbf{M}$ is simulable.\n", "\n", "Qubit example\n", "-------------\n", "As an example, we study the tetrahedron measurement (see Appendix B in [arXiv:quant-ph/0702021](https://arxiv.org/abs/quant-ph/0702021)):" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def dp(v):\n", "    result = np.eye(2, dtype=np.complex128)\n", "    for i in range(3):\n", "        result += v[i]*Pauli[i]\n", "    return result\n", "\n", "b = [np.array([ 1,  1,  1])/np.sqrt(3),\n", "     np.array([-1, -1,  1])/np.sqrt(3),\n", "     np.array([-1,  1, -1])/np.sqrt(3),\n", "     np.array([ 1, -1, -1])/np.sqrt(3)]\n", "M = [dp(bj)/4 for bj in b]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we check with an SDP whether it is simulable by projective measurements. 
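" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before setting up the SDP, as a quick illustration of Eq. (8), we can apply the depolarizing map to the tetrahedron POVM and verify that the noisy effects still form a valid POVM. The helper `depolarize` below is a sketch of our own and is not part of `povm_tools`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def depolarize(M, t):\n", "    # [Phi_t(M)]_i = t*M_i + (1-t)*tr(M_i)/d * identity\n", "    d = M[0].shape[0]\n", "    return [t*Mi + (1-t)*np.trace(Mi)/d*np.eye(d) for Mi in M]\n", "\n", "noisy = depolarize(M, 0.5)\n", "# the noisy effects remain positive semidefinite and sum to the identity\n", "assert all(min(numpy.linalg.eigvalsh(Ni)) > -1e-9 for Ni in noisy)\n", "assert np.allclose(sum(noisy), np.eye(2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "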
A four-outcome qubit POVM $\\mathbf{M} \\in\\mathcal{P}(2,4)$ is simulable if and only if \n", "\n", "$M_{1}=N_{12}^{+}+N_{13}^{+}+N_{14}^{+},$\n", "\n", "$M_{2}=N_{12}^{-}+N_{23}^{+}+N_{24}^{+},$\n", "\n", "$M_{3}=N_{13}^{-}+N_{23}^{-}+N_{34}^{+},$\n", "\n", "$M_{4}=N_{14}^{-}+N_{24}^{-}+N_{34}^{-},$\n", "\n", "where the Hermitian operators $N_{ij}^{\\pm}$ satisfy $N_{ij}^{\\pm}\\geq0$ and $N_{ij}^{+}+N_{ij}^{-}=p_{ij}\\mathbb{1}$, where $i < j$ and the coefficients $p_{ij}\\geq0$ form a probability distribution, $\\sum_{i<j}p_{ij}=1$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# corresponding to the case where rx, ry = 0 above, around the origin\n", "yz = []\n", "xz = []\n", "xy = []\n", "for r in range(1, m1+1):\n", "    # P = [0, 0, -1], x = 0, y \\in [0, 1]\n", "    yz.append([0,\n", "               Fraction(2*(r*m1), (r*m1)**2 + 1),\n", "               1 - Fraction(2, (r*m1)**2 + 1)])\n", "    # P = [0, 0, -1], y = 0, x \\in [0, 1]\n", "    xz.append([Fraction(2*(r*m1), (r*m1)**2 + 1),\n", "               0,\n", "               1 - Fraction(2, (r*m1)**2 + 1)])\n", "    # P = [0, -1, 0], z = 0, x \\in [0, 1]\n", "    xy.append([Fraction(2*(r*m1), (r*m1)**2 + 1),\n", "               1 - Fraction(2, (r*m1)**2 + 1),\n", "               0])\n", "\n", "yz = np.array(yz)\n", "xz = np.array(xz)\n", "xy = np.array(xy)\n", "\n", "yz1 = np.column_stack((yz[:, 0], -yz[:, 1], yz[:, 2]))\n", "yz2 = np.column_stack((yz[:, 0], yz[:, 1], -yz[:, 2]))\n", "yz3 = np.column_stack((yz[:, 0], -yz[:, 1], -yz[:, 2]))\n", "yz = np.row_stack((yz, yz1, yz2, yz3))\n", "\n", "xz1 = np.column_stack((-xz[:, 0], xz[:, 1], xz[:, 2]))\n", "xz2 = np.column_stack((xz[:, 0], xz[:, 1], -xz[:, 2]))\n", "xz3 = np.column_stack((-xz[:, 0], xz[:, 1], -xz[:, 2]))\n", "xz = np.row_stack((xz, xz1, xz2, xz3))\n", "\n", "xy1 = np.column_stack((-xy[:, 0], xy[:, 1], xy[:, 2]))\n", "xy2 = np.column_stack((xy[:, 0], -xy[:, 1], xy[:, 2]))\n", "xy3 = np.column_stack((-xy[:, 0], -xy[:, 1], xy[:, 2]))\n", "xy = np.row_stack((xy, xy1, xy2, xy3))\n", "\n", "v2 = np.row_stack((yz, xz, xy))\n", "\n", "v = np.row_stack((v1, v2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following constraints ensure that the third operator $M_3$ of 
the measurement is a quasi-effect, with the approximation given by $v$:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# object dtype lets NumPy store the exact Fraction entries\n", "W2 = np.zeros((v.shape[0], 9), dtype=object)\n", "for i in range(v.shape[0]):\n", "    W2[i, 5:] = np.array([1, -v[i, 0], -v[i, 1], -v[i, 2]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next set of constraints ensures the same condition for the last effect, which we express by $\\mathbb{1} - M_1 - M_2 - M_3$:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": true }, "outputs": [], "source": [ "W3 = np.zeros((v.shape[0], 9))\n", "for i in range(v.shape[0]):\n", "    W3[i] = [1, -1+v[i, 0], -1, v[i, 0], v[i, 1], -1,\n", "             v[i, 0], v[i, 1], v[i, 2]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We need that $\\alpha_0, \\alpha_1, \\alpha_2 \\geq 0$:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true }, "outputs": [], "source": [ "W4 = np.array([[0, 1, 0, 0, 0, 0, 0, 0, 0],\n", "               [0, 0, 1, 0, 0, 0, 0, 0, 0],\n", "               [0, 0, 0, 0, 0, 1, 0, 0, 0]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also require that $\\alpha_0+\\alpha_1+\\alpha_2 \\leq 1$, which corresponds to expressing the previous constraint for the last effect:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": true }, "outputs": [], "source": [ "W5 = np.array([[1, -1, -1, 0, 0, -1, 0, 0, 0]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we need that $\\alpha_0 \\geq \\alpha_1 \\geq \\alpha_2 \\geq 1-\\alpha_0-\\alpha_1-\\alpha_2$, a condition that we can impose without loss of generality by relabeling the outcomes. Once we have these last constraints, we stack the vectors in a single array." 
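] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before stacking, a brief aside on the discretization above: the rational points come from the classical parametrization $t \\mapsto (2t/(t^2+1), (t^2-1)/(t^2+1))$ of the unit circle, with $(t^2-1)/(t^2+1)$ written above as $1 - 2/(t^2+1)$. The following self-contained check (our addition, purely illustrative) confirms that such points lie exactly on the unit circle in `Fraction` arithmetic:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# exact rational points on the unit circle\n", "def circle_point(t):\n", "    return Fraction(2*t, t**2 + 1), Fraction(t**2 - 1, t**2 + 1)\n", "\n", "for t in range(1, 20):\n", "    x, y = circle_point(t)\n", "    assert x*x + y*y == 1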
] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [], "source": [ "W6 = np.array([[0, 1, -1, 0, 0, 0, 0, 0, 0],\n", "               [0, 0, 1, 0, 0, -1, 0, 0, 0],\n", "               [-1, 1, 1, 0, 0, 2, 0, 0, 0]])\n", "hull = np.row_stack((W1, W2, W3, W4, W5, W6))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We enumerate the vertices, which is a time-consuming operation:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Vertex enumeration in 1219 seconds\n" ] } ], "source": [ "time0 = time.time()\n", "ext = enumerate_vertices(hull, method=\"plrs\", verbose=1)\n", "print(\"Vertex enumeration in %d seconds\" % (time.time()-time0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As the last step, we iterate the SDP optimization described in the section \"Checking critical visibility\" over all vertices to get the best shrinking factor. This takes several hours to complete. Parallel computations do not work from a notebook, but they do when the script is executed in a console. For this reason, here we disable parallel computations." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[KCurrent alpha: 0.815743 (done: 7.24%) " ] } ], "source": [ "time0 = time.time()\n", "alphas = find_best_shrinking_factor(ext, 2, solver=\"mosek\", parallel=False)\n", "print(\"\\n Found in %d seconds\" % (time.time()-time0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "External polytopes approximating $\\mathcal{P}_{\\mathrm{cov}}(3,9)$\n", "=====================================\n", "\n", "Following the same reasoning we applied to approximate $\\mathcal{P}(2,4)$, we now approximate the set of qutrit POVMs $\\mathcal{P}_{\\mathrm{cov}}(3,9)$ covariant with respect to the discrete Heisenberg group. 
This task is relatively simple, since we only need to consider a quasi-positive seed $M$ and derive the other effects from it by conjugation with the unitaries $D_{jk}$, which rotate the space along symmetric directions. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "w = np.cos(2*np.pi/3) + 1j*np.sin(2*np.pi/3)\n", "x = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])\n", "D = [[], [], []]\n", "for j in range(3):\n", "    for k in range(3):\n", "        D[j].append(np.matrix((w**(j*k/2))*(\n", "            sum(w**(j*m)*x[:, np.mod(k + m, 3)]*dag(x[:, m]) for m in\n", "                range(3)))))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use the approximation of the set of positive semidefinite operators given by $\\mathrm{tr}(M\\Psi_i)\\geq0$ for some finite set of rank-one projectors $\\{\\Psi_i\\}$. We generate random projectors and rotate them by the unitaries $D_{jk}$ above in order to obtain a more regular distribution in the space of projectors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import qutip\n", "\n", "# Discretization of the set of PSD matrices with 9*N elements\n", "N = 2\n", "disc = []\n", "for _ in range(N):\n", "    psi = np.matrix(qutip.Qobj.full(qutip.rand_ket(3)))\n", "    for j in range(3):\n", "        for k in range(3):\n", "            disc.append(D[j][k]*(psi*dag(psi))*dag(D[j][k]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now translate each of the trace constraints above by writing each $M = (m_{ab})$ as an 8-dimensional real vector $(m_{00}, re(m_{01}), im(m_{01}), re(m_{02}), im(m_{02}), m_{11}, re(m_{12}), im(m_{12}))$, where $m_{22} = 1/3 - m_{00} - m_{11}$, since the trace is fixed." 
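] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make this vectorization concrete, the next cell (our illustration; `seed_from_vector` is a hypothetical helper, not part of `povm_tools`) rebuilds $M$ from an 8-dimensional real vector and checks that $\\mathrm{tr}(M\\Psi)$ is an affine function of that vector, which is exactly what the inequalities below encode:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def seed_from_vector(m):\n", "    # m = (m00, re m01, im m01, re m02, im m02, m11, re m12, im m12)\n", "    return np.array([[m[0], m[1] + 1j*m[2], m[3] + 1j*m[4]],\n", "                     [m[1] - 1j*m[2], m[5], m[6] + 1j*m[7]],\n", "                     [m[3] - 1j*m[4], m[6] - 1j*m[7], 1/3 - m[0] - m[5]]])\n", "\n", "m = np.random.rand(8)\n", "M_test = seed_from_vector(m)\n", "assert np.isclose(np.trace(M_test), 1/3)\n", "assert np.allclose(M_test, dag(M_test))\n", "\n", "# tr(M_test*P) agrees with the affine expression used for the hull rows\n", "psi = np.random.randn(3) + 1j*np.random.randn(3)\n", "psi /= np.linalg.norm(psi)\n", "P = np.outer(psi, psi.conj())\n", "row = [np.real(P[2, 2])/3,\n", "       np.real(P[0, 0]) - np.real(P[2, 2]),\n", "       2*np.real(P[1, 0]), -2*np.imag(P[1, 0]),\n", "       2*np.real(P[2, 0]), -2*np.imag(P[2, 0]),\n", "       np.real(P[1, 1]) - np.real(P[2, 2]),\n", "       2*np.real(P[2, 1]), -2*np.imag(P[2, 1])]\n", "assert np.isclose(np.dot(row, np.concatenate(([1], m))),\n", "                  np.real(np.trace(np.dot(M_test, P))))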
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "hull = []\n", "for i in range(9*N):\n", "    # each row of hull ensures tr(M*disc[i]) >= 0\n", "    hull.append([np.real(disc[i][2, 2])/3,\n", "                 np.real(disc[i][0, 0]) - np.real(disc[i][2, 2]),\n", "                 2*np.real(disc[i][1, 0]), -2*np.imag(disc[i][1, 0]),\n", "                 2*np.real(disc[i][2, 0]), -2*np.imag(disc[i][2, 0]),\n", "                 np.real(disc[i][1, 1]) - np.real(disc[i][2, 2]),\n", "                 2*np.real(disc[i][2, 1]), -2*np.imag(disc[i][2, 1])])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We then construct the polytope generated by these inequalities." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "cov_ext = enumerate_vertices(np.array(hull), method=\"plrs\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To get an idea of how good the approximation is, we can translate each vertex obtained into a quasi-POVM and check how negative its eigenvalues are." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Converting vectors into covariant POVMs\n", "povms = []\n", "for i in range(cov_ext.shape[0]):\n", "    eff = np.matrix([[cov_ext[i, 1],\n", "                      cov_ext[i, 2] + cov_ext[i, 3]*1j,\n", "                      cov_ext[i, 4] + cov_ext[i, 5]*1j],\n", "                     [cov_ext[i, 2] - cov_ext[i, 3]*1j,\n", "                      cov_ext[i, 6],\n", "                      cov_ext[i, 7] + cov_ext[i, 8]*1j],\n", "                     [cov_ext[i, 4] - cov_ext[i, 5]*1j,\n", "                      cov_ext[i, 7] - cov_ext[i, 8]*1j,\n", "                      1/3 - cov_ext[i, 1] - cov_ext[i, 6]]])\n", "    M = []\n", "    for j in range(3):\n", "        for k in range(3):\n", "            M.append(D[j][k]*eff*dag(D[j][k]))\n", "    povms.append(M)\n", "\n", "# Finding the least eigenvalues\n", "A = np.zeros((cov_ext.shape[0]))\n", "for i in range(cov_ext.shape[0]):\n", "    A[i] = min(numpy.linalg.eigvalsh(povms[i][0]))\n", "a = min(A)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We then optimise over the extremal points of the polytope." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "alphas = find_best_shrinking_factor(cov_ext, 3, parallel=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Decomposition of qutrit three-outcome, trace-one POVMs into projective measurements\n", "=====================================\n", "\n", "\n", "We implemented the constructive strategy, described in Appendix D, to find a projective decomposition of trace-one qutrit measurements $\\mathbf{M}\\in\\mathcal{P}_1(3,3)$.\n", "\n", "First we define a function to generate a random trace-1 POVM. This is the only step that requires an additional dependency beyond the ones we loaded at the beginning of the notebook. The dependency is [QuTiP](http://qutip.org/)." 
] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from qutip import rand_unitary\n", "\n", "def get_random_trace_one_povm(dim=3):\n", "    U = rand_unitary(dim)\n", "    M = [U[:, i]*dag(U[:, i]) for i in range(dim)]\n", "    for _ in range(dim-1):\n", "        U = rand_unitary(dim)\n", "        r = random.random()\n", "        for i in range(dim):\n", "            M[i] = r*M[i] + (1-r)*U[:, i]*dag(U[:, i])\n", "    return M" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we decompose a random POVM following the cascade of \"rank reductions\" described in Appendix D, and check the ranks:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank of POVM: [3, 2, 2]\n" ] } ], "source": [ "M = get_random_trace_one_povm()\n", "print(\"Rank of POVM: \", check_ranks(M))\n", "coefficients, projective_measurements = decomposePovmToProjective(M)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a sanity check, we look at the ranks of the effects of the individual projective measurements. We must point out that the numerical calculations occasionally fail; for this reason, we set the tolerance in the rank calculations high." 
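] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Independently of the ranks, we can first verify that the generated $\\mathbf{M}$ is a genuine trace-one POVM; this check is our addition and is not part of the decomposition procedure:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# completeness, unit traces and positivity of the effects\n", "assert np.allclose(sum(np.asarray(Mi) for Mi in M), np.eye(3))\n", "assert np.allclose([np.real(np.asarray(Mi).trace()) for Mi in M], 1)\n", "assert all(min(numpy.linalg.eigvalsh(np.asarray(Mi))) > -1e-9 for Mi in M)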
] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Ranks of projective measurements: \n", "[0, 0, 0]\n", "[0, 0, 0]\n", "[1, 1, 1]\n", "[1, 1, 1]\n", "[1, 1, 1]\n", "[1, 1, 1]\n" ] } ], "source": [ "print(\"Ranks of projective measurements: \")\n", "for measurement in projective_measurements:\n", "    print(check_ranks(measurement, tolerance=0.01))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We show that the projective measurements indeed return the POVM:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "N = coefficients[0]*projective_measurements[0] + \\\n", "    coefficients[1]*(coefficients[2]*projective_measurements[1] +\n", "                     coefficients[3]*(coefficients[4]*(coefficients[6]*projective_measurements[2] +\n", "                                                       coefficients[7]*projective_measurements[3]) +\n", "                                      coefficients[5]*(coefficients[8]*projective_measurements[4] +\n", "                                                       coefficients[9]*projective_measurements[5])))\n", "not np.any(M - N > 10e-10)" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [default]", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 0 }