{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# ngl\\_resum: a package to resum non-global logarithms at leading logarithmic accuracy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you use the package ngl\\_resum, please cite [doi:10.1007/JHEP09(2020)029](https://inspirehep.net/literature/1798660)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this documentation we show some features of ngl\\_resum. In particular, we want to visit each of the classes defined in the module and explain their main purposes. We suggest this notebook to be used in Binder:\n", "[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/MarcelBalsiger/ngl_resum/master?filepath=%2Fdocs%2Fnglresum.ipynb) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To have this example working as a jupyter notebook, one needs to have the packages numpy, physt and - obviously - ngl\\_resum installed. The easiest way to do this is to use pip install ngl_resum. Details may be found here: [https://packaging.python.org/tutorials/installing-packages/#use-pip-for-installing](https://packaging.python.org/tutorials/installing-packages/#use-pip-for-installing)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Imports" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We start by importing the package ngl\\_resum and numpy:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import ngl_resum as ngl\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## FourVector" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We start with the FourVector class. As its name suggests, this class is used to describe fourvectors and contains some information specifically used in collider physics. To instantiate a FourVector, we have to feed all four components of it. 
Let us define fvA with energy $e$ given by energyFvA and momenta $p_i$ given by xMomFvA, yMomFvA and zMomFvA:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[6.9,4.2,3.5,1.2]\n" ] } ], "source": [ "energyFvA=6.9\n", "xMomFvA=4.2\n", "yMomFvA=3.5\n", "zMomFvA=1.2\n", "fvA=ngl.FourVector(energyFvA,xMomFvA,yMomFvA,zMomFvA)\n", "print(fvA)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Attributes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of course, we can access the four individual coordinates:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "energy: 6.9 \n", "x-mom.: 4.2 \n", "y-mom.: 3.5 \n", "z-mom.: 1.2\n" ] } ], "source": [ "print(\"energy: \",fvA.e,\"\\nx-mom.: \",fvA.px,\\\n", " \"\\ny-mom.: \",fvA.py,\"\\nz-mom.: \",fvA.pz)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In some cases, it may be useful to have the fourvector available as a numpy array:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "numpy-array: [6.9 4.2 3.5 1.2]\n" ] } ], "source": [ "print(\"numpy-array: \",fvA.vec)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The two angles $\\theta$ (the angle with respect to the z-axis, i.e. the beam axis) and $\\phi$ (the azimuthal angle in the x-y-plane) of the three-vector are attributes: " ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "theta: 1.3547308176908472 \n", "phi: 0.6947382761967031\n" ] } ], "source": [ "print(\"theta: \",fvA.theta,\"\\nphi: \",fvA.phi)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also access the mass and the velocity of the particle:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "mass: 4.034848200366403 \n", "velocity: 0.811205911255451\n" ] } ], "source": [ "print(\"mass: \",fvA.m,\"\\nvelocity: \",fvA.beta)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The length of the spatial vector $\\sqrt{p_x^2+p_y^2+p_z^2}$ can also be accessed:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "np.sqrt(px*px+py*py+pz*pz): 5.597320787662612\n" ] } ], "source": [ "print(\"np.sqrt(px*px+py*py+pz*pz): \",fvA.absSpace)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Collider-physics specific attributes are the transverse energy $E_T$, the transverse momentum $p_T$, the rapidity $y=\\frac{1}{2}\\ln\\frac{e+p_z}{e-p_z}$ and the pseudorapidity $\\eta=-\\ln\\left(\\tan\\frac{\\theta}{2}\\right)$:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "transverse energy: 6.7395647606579505 \n", "transverse momentum: 5.4671747731346585 \n", "rapidity: 0.1756989434189443 \n", "pseudorapidity: 0.2177665445807071\n" ] } ], "source": [ "print(\"transverse energy: \",fvA.eT,\"\\ntransverse momentum: \",fvA.pT,\\\n", " \"\\nrapidity: \",fvA.rap,\"\\npseudorapidity: \",fvA.pseudorap)" ] }
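, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick cross-check of the two formulas above, we can recompute the rapidity and pseudorapidity by hand (a minimal sketch using only the attributes introduced so far):\n", "```python\n", "# recompute rapidity and pseudorapidity of fvA from its components\n", "rap=0.5*np.log((fvA.e+fvA.pz)/(fvA.e-fvA.pz))\n", "pseudorap=-np.log(np.tan(fvA.theta/2))\n", "print(rap-fvA.rap,pseudorap-fvA.pseudorap) # both vanish up to rounding\n", "```" ] }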
"outputs": [], "source": [ "fvB=ngl.FourVector(5,4,3,0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now add and subtract the fourvectors with the usual operators:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "fvA+fvB: [11.9,8.2,6.5,1.2] \n", "fvA-fvB: [1.9000000000000004,0.20000000000000018,0.5,1.2]\n" ] } ], "source": [ "print(\"fvA+fvB:\",fvA+fvB,\"\\nfvA-fvB: \",fvA-fvB)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The multiplication operator $*$ can be used to give a scalar product of two fourvectors or as the multiplication of a scalar. Note that we use te mostly-minus metric (+ - - -):" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "fvA*fvB: 7.199999999999999 \n", "10*fvA: [69.0,42.0,35.0,12.0]\n" ] } ], "source": [ "print(\"fvA*fvB:\",fvA*fvB,\"\\n10*fvA: \",10*fvA)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The division by a scalar does work, too:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "fvA/10: [0.6900000000000001,0.42000000000000004,0.35,0.12]\n" ] } ], "source": [ "print(\"fvA/10:\",fvA/10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To measure the squared angular distance $\\Delta R^2=\\Delta\\phi^2+\\Delta\\eta^2=|\\phi_A-\\phi_B|^2+|\\eta_A-\\eta_B|^2$ between two fourvectors, we can use " ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "deltaR^2: 0.050047515262147006 or 0.050047515262147006\n" ] } ], "source": [ "print(\"deltaR^2:\",fvA.R2(fvB),\" or \",fvB.R2(fvA))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "while the cosine of the spatial angle between two fourvectors can be accessed by" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "cosTheta: 0.9754666932856004 or 0.9754666932856004\n" ] } ], "source": [ "print(\"cosTheta:\",fvA.costheta(fvB),\" or \",fvB.costheta(fvA))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can check, whether the FourVector is massive or massless. 
A FourVector a is treated as massive (or time-like) if a*a is larger than $10^{-7}$; otherwise it is treated as massless (or light-like):" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fvA.isMassive()" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "False" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fvA.isMassless()" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "False" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fvB.isMassive()" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fvB.isMassless()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can check whether two FourVectors are the same. We consider two FourVectors a and b to be the same if (a-b).e^2+(a-b).px^2+(a-b).py^2+(a-b).pz^2 is smaller than $10^{-10}$, to account for rounding errors. To apply this check, we can use" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "False" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fvA.isSame(fvB)" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fvA.isSame(fvA+ngl.FourVector(0.000000001,0.000000001,0.000000001,0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The last method of the FourVector class is the tensor product fvA$_\\mu$fvB$_\\nu$ (which is probably not used that often), given by " ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "fvA.metric.fvB:\n", " [[ 34.5 -27.6 -20.7 -0. ]\n", " [ 21. -16.8 -12.6 -0. ]\n", " [ 17.5 -14. -10.5 -0. ]\n", " [ 6. -4.8 -3.6 -0. ]]\n" ] } ], "source": [ "print(\"fvA.metric.fvB:\\n\",fvA.tensorProd(fvB))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Boost" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Boost class takes care of the boosting procedure as described in Section 3.1 of [doi:10.1007/JHEP09(2020)029](https://inspirehep.net/literature/1798660). It takes two arbitrary momentum vectors from the lab frame and creates the boost from the lab frame to the frame where these two fourvectors are back-to-back along the z-axis. 
For transparency we take the first two vectors of the event from Appendix A of the above article, as given in (A.3):" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "p1=ngl.FourVector(504.7,125.6,82.44,-450.4)\n", "u1=p1/p1.e\n", "u2=ngl.FourVector(1,0,0,-1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To construct the boost from the lab frame to the frame where u1 is along the positive z-axis and u2 along the negative z-axis, we just have to instantiate a Boost with the two fourvectors as arguments:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "bst=ngl.Boost(u1,u2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Attributes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have access to each single transformation $X$, $B$ and $Z$ as defined in Section 3.1 and thoroughly explained with an example in the pages after (A.3) of [doi:10.1007/JHEP09(2020)029](https://inspirehep.net/literature/1798660). Note that each of these matrices is a 4x4 numpy array." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We start with the rotation $X$, which puts the sum of the two initial fourvectors along the x-axis:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([2.00000000e+00, 1.91568102e+00, 0.00000000e+00, 2.18575158e-16])" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "np.dot(bst.X,(u1+u2).vec)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we apply the boost $B$, which removes the spatial component of the above fourvector:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 5.74600946e-01, -8.88178420e-16, 0.00000000e+00, 2.18575158e-16])" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "np.dot(bst.B,np.dot(bst.X,(u1+u2).vec))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we apply a second rotation $Z$ that puts the two initial vectors along the z-axis, with u1 in the positive direction:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 3.87360275e-01, -3.53883589e-16, -8.32667268e-17, 1.87240671e-01])" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "np.dot(bst.Z,np.dot(bst.B,np.dot(bst.X,u1.vec)))" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 1.87240671e-01, -4.68375339e-16, 0.00000000e+00, -1.87240671e-01])" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "np.dot(bst.Z,np.dot(bst.B,np.dot(bst.X,u2.vec)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, the two fourvectors are now nicely aligned back-to-back. 
The full boost is also stored in an attribute:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([3.87360275e-01, 0.00000000e+00, 1.11022302e-16, 1.87240671e-01])" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "u1prime=np.dot(bst.LABtoCMS,u1.vec)\n", "u1prime" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The same goes for the inverse boost:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 1. , 0.24886071, 0.16334456, -0.89241133])" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "u1BoostedBack=np.dot(bst.CMStoLAB,u1prime)\n", "u1BoostedBack" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 4.44089210e-16, -1.11022302e-16, 2.77555756e-17, -2.22044605e-16])" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "u1BoostedBack-u1.vec" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of course, the numpy arrays are not easy to handle, as we have to keep track of which variable is a FourVector and which one is a numpy array. We have a shortcut to avoid this problem: we can apply the boost to any FourVector and get back the FourVector in the new frame as follows:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[0.38736027472048606,0.0,1.1102230246251565e-16,0.18724067086957508]" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fvu1prime=bst.boostLABtoCMS(u1)\n", "fvu1prime" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[1.0000000000000004,0.24886070933227647,0.16334456112542106,-0.8924113334654252]" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fvu1BoostedBack=bst.boostCMStoLAB(fvu1prime)\n", "fvu1BoostedBack" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fvu1BoostedBack.isSame(u1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Hist" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Hist class acts as an adapter to the physt package and is heavily based on it. It represents the histogram $R(t)$ that is the result of the resummation (see (4.3) of [doi:10.1007/JHEP09(2020)029](https://inspirehep.net/literature/1798660))."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When initializing a Hist, we at least need to provide a number of bins nbins and a maximal value for $t$, tmax:" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "nbins=10\n", "tmax=0.1\n", "hst=ngl.Hist(nbins, tmax)" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " t | entries \n", "--------|----------\n", " 0.0050 | 0.000000\n", " 0.0150 | 0.000000\n", " 0.0250 | 0.000000\n", " 0.0350 | 0.000000\n", " 0.0450 | 0.000000\n", " 0.0550 | 0.000000\n", " 0.0650 | 0.000000\n", " 0.0750 | 0.000000\n", " 0.0850 | 0.000000\n", " 0.0950 | 0.000000" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hst" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another possibility is to also calculate an error estimate of each bin:" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "hstErr=ngl.Hist(nbins,tmax,errorHistCalc=True)" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " t | entries | error \n", "--------|----------|---------\n", " 0.0050 | 0.000000 | 0.000000\n", " 0.0150 | 0.000000 | 0.000000\n", " 0.0250 | 0.000000 | 0.000000\n", " 0.0350 | 0.000000 | 0.000000\n", " 0.0450 | 0.000000 | 0.000000\n", " 0.0550 | 0.000000 | 0.000000\n", " 0.0650 | 0.000000 | 0.000000\n", " 0.0750 | 0.000000 | 0.000000\n", " 0.0850 | 0.000000 | 0.000000\n", " 0.0950 | 0.000000 | 0.000000" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hstErr" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compared to the rest of the classes we switch the order and postpone the discussion of the attributes to after the methods and operations. This is due to the fact that the methods are mainly used to populate the histograms to avoid looking at a wall of zeroes in the discussion of the attributes." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the sake of streamlinedness, we will stop discussing the more involved case of the error estimation at this point. The discussion thereof is quite involved and provides little insight. We will look at the extraction of the error from the histogram later." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have two functions that can be used to set a whole histogram to zero or to one:" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " t | entries \n", "--------|----------\n", " 0.0050 | 1.000000\n", " 0.0150 | 1.000000\n", " 0.0250 | 1.000000\n", " 0.0350 | 1.000000\n", " 0.0450 | 1.000000\n", " 0.0550 | 1.000000\n", " 0.0650 | 1.000000\n", " 0.0750 | 1.000000\n", " 0.0850 | 1.000000\n", " 0.0950 | 1.000000" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hst.setOne()\n", "hst" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " t | entries \n", "--------|----------\n", " 0.0050 | 0.000000\n", " 0.0150 | 0.000000\n", " 0.0250 | 0.000000\n", " 0.0350 | 0.000000\n", " 0.0450 | 0.000000\n", " 0.0550 | 0.000000\n", " 0.0650 | 0.000000\n", " 0.0750 | 0.000000\n", " 0.0850 | 0.000000\n", " 0.0950 | 0.000000" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hst.setZero()\n", "hst" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To populate the histogram, we can use addToBin by specifying the value of t at which we add a weight w:" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " t | entries \n", "--------|----------\n", " 0.0050 | 0.000000\n", " 0.0150 | 0.000000\n", " 0.0250 | 0.000000\n", " 0.0350 | 0.000000\n", " 0.0450 | 0.000000\n", " 0.0550 | 0.696969\n", " 0.0650 | 0.000000\n", " 0.0750 | 0.000000\n", " 0.0850 | 0.000000\n", " 0.0950 | 0.000000" ] }, "execution_count": 40, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tVal=0.053\n", "w=0.696969\n", "hst.addToBin(tVal,w)\n", "hst" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Operators" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us start with a disclaimer - we assume the two histograms to be initialized with the same number of bin nbins and the same maximal $t$ value tmax. If this is not the case, anything might happen." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To show the use of some operators, we will populate two histograms with some random numbers:" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [], "source": [ "hst1=ngl.Hist(nbins, tmax)\n", "hst2=ngl.Hist(nbins, tmax)\n", "for i in range(0,50):\n", " hst1.addToBin(tmax*np.random.random_sample(),np.random.random_sample())\n", " hst2.addToBin(tmax*np.random.random_sample(),np.random.random_sample())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us have a look at how these historgams are populated:" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " t | entries \n", "--------|----------\n", " 0.0050 | 2.102598\n", " 0.0150 | 5.474357\n", " 0.0250 | 3.765161\n", " 0.0350 | 1.790246\n", " 0.0450 | 0.792018\n", " 0.0550 | 2.373341\n", " 0.0650 | 1.990888\n", " 0.0750 | 1.367947\n", " 0.0850 | 4.934581\n", " 0.0950 | 1.853173" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hst1" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " t | entries \n", "--------|----------\n", " 0.0050 | 2.918347\n", " 0.0150 | 3.740859\n", " 0.0250 | 3.559869\n", " 0.0350 | 1.760655\n", " 0.0450 | 2.113326\n", " 0.0550 | 4.627729\n", " 0.0650 | 2.237888\n", " 0.0750 | 0.591951\n", " 0.0850 | 4.630542\n", " 0.0950 | 1.203527" ] }, "execution_count": 43, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hst2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can add and subtract the histograms. This sums (or subtracts) the entry of each bin:" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " t | entries \n", "--------|----------\n", " 0.0050 | 5.020945\n", " 0.0150 | 9.215216\n", " 0.0250 | 7.325029\n", " 0.0350 | 3.550901\n", " 0.0450 | 2.905344\n", " 0.0550 | 7.001070\n", " 0.0650 | 4.228776\n", " 0.0750 | 1.959898\n", " 0.0850 | 9.565123\n", " 0.0950 | 3.056700" ] }, "execution_count": 44, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hst1+hst2" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " t | entries \n", "--------|----------\n", " 0.0050 | -0.815749\n", " 0.0150 | 1.733499\n", " 0.0250 | 0.205292\n", " 0.0350 | 0.029590\n", " 0.0450 | -1.321308\n", " 0.0550 | -2.254388\n", " 0.0650 | -0.247000\n", " 0.0750 | 0.775997\n", " 0.0850 | 0.304039\n", " 0.0950 | 0.649647" ] }, "execution_count": 45, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hst1-hst2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The multiplication of two Hists multiplies the entry of each bin, and we can also multiply with a scalar (which multiplicates each entry by the scalar):" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " t | entries \n", "--------|----------\n", " 0.0050 | 6.136110\n", " 0.0150 | 20.478796\n", " 0.0250 | 13.403477\n", " 0.0350 | 3.152005\n", " 0.0450 | 1.673791\n", " 0.0550 | 10.983180\n", " 0.0650 | 4.455385\n", " 0.0750 | 0.809758\n", " 0.0850 | 22.849784\n", " 0.0950 | 2.230344" ] }, "execution_count": 46, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hst1*hst2" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "data": { "text/plain": [ 
" t | entries \n", "--------|----------\n", " 0.0050 | 21.025980\n", " 0.0150 | 54.743571\n", " 0.0250 | 37.651608\n", " 0.0350 | 17.902456\n", " 0.0450 | 7.920177\n", " 0.0550 | 23.733412\n", " 0.0650 | 19.908881\n", " 0.0750 | 13.679475\n", " 0.0850 | 49.345810\n", " 0.0950 | 18.531734" ] }, "execution_count": 47, "metadata": {}, "output_type": "execute_result" } ], "source": [ "10*hst1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Division by a scalar is possible as well:" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " t | entries \n", "--------|----------\n", " 0.0050 | 0.210260\n", " 0.0150 | 0.547436\n", " 0.0250 | 0.376516\n", " 0.0350 | 0.179025\n", " 0.0450 | 0.079202\n", " 0.0550 | 0.237334\n", " 0.0650 | 0.199089\n", " 0.0750 | 0.136795\n", " 0.0850 | 0.493458\n", " 0.0950 | 0.185317" ] }, "execution_count": 48, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hst1/10" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Attributes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us finally look at how to access the data in Hist. The Histograms certainly knows about the number of bins and maximal $t$:" ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "nbins: 10 \n", "tmax: 0.1\n" ] } ], "source": [ "print(\"nbins: \",hst1.nbins,\"\\ntmax: \",hst1.tmax)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To access the entries of the histogram, we can do so:" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([2.10259796, 5.47435712, 3.76516075, 1.79024562, 0.7920177 ,\n", " 2.37334117, 1.99088811, 1.36794745, 4.93458098, 1.85317337])" ] }, "execution_count": 50, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hst1.entries" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The bin values can be read out as well. As it is sometimes useful to have the lower or upper bin boundary or the central value, we have created access to all of them:" ] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "central bin values: [0.005, 0.015, 0.025, 0.035, 0.045, 0.055, 0.065, 0.07500000000000001, 0.08499999999999999, 0.095]\n", "lower bin boundary: [0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09]\n", "upper bin boundary: [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1]\n" ] } ], "source": [ "print(\"central bin values: \",hst1.centerBinValue)\n", "print(\"lower bin boundary: \",hst1.lowerBinBoundary)\n", "print(\"upper bin boundary: \",hst1.upperBinBoundary)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let us have a quick glance at the error estimations of the bins. Note, that while the representation of the histogram comes with the error itself, due to intricacies of the error computation when multiplying histograms (which is used in the showering procedure), we are only able to access the squared of the error estimate. 
To illustrate this, let us first populate hstErr from above and show its representation:" ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " t | entries | error \n", "--------|----------|---------\n", " 0.0050 | 1.295799 | 0.997647\n", " 0.0150 | 0.542311 | 0.527699\n", " 0.0250 | 1.156891 | 0.837240\n", " 0.0350 | 3.134448 | 1.461181\n", " 0.0450 | 1.987160 | 1.212088\n", " 0.0550 | 1.799335 | 1.103540\n", " 0.0650 | 3.308246 | 1.490670\n", " 0.0750 | 4.888158 | 1.768580\n", " 0.0850 | 5.766296 | 2.122759\n", " 0.0950 | 0.334684 | 0.296820" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "for i in range(0,50):\n", " hstErr.addToBin(tmax*np.random.random_sample(),np.random.random_sample())\n", "hstErr" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will access the squared error of the bins:" ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([0.99530047, 0.27846598, 0.70097064, 2.1350506 , 1.46915656,\n", " 1.21780011, 2.2220969 , 3.127874 , 4.50610368, 0.08810237])" ] }, "execution_count": 53, "metadata": {}, "output_type": "execute_result" } ], "source": [ "hstErr.squaredError" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This might seem a little bit odd. For more insight into the error handling we refer to the documentation of the example codes. " ] }
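, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the error column displayed in the representation above is simply the square root of these values, so it can be recovered via\n", "```python\n", "# square roots of the squared bin errors reproduce the error column above\n", "np.sqrt(hstErr.squaredError)\n", "```" ] }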
, { "cell_type": "markdown", "metadata": {}, "source": [ "## Event" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we get to the core of the package. An instance of Event contains all the relevant information to start one showering. We will not go too deep into details here, but mainly refer to the two example codes. To instantiate an Event, one can either feed it \n", "* a dipole (using the feedDipole parameter), or\n", "* a pylhe.LHEEvent read in via pylhe.readLHE (using the eventFromFile parameter)\n", "\n", "For each of those we have an example code. While we will not explain every attribute and method of this class in detail, we still want to give an overview of some intricacies. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First of all, we want to discuss the feedDipole feature. Via Event(feedDipole=dipole) we can set up the showering of one single dipole, consisting of two FourVectors in an array: " ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [], "source": [ "leg1=ngl.FourVector(1,0,0,0.5)\n", "leg2=ngl.FourVector(1,0,0,-0.5)\n", "dipole=[leg1,leg2]\n", "evDip=ngl.Event(feedDipole=dipole)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This feature is straightforward enough. We have set up this dipole for showering." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The other feature of Event is the more intricate showering of an event read in from a .lhe-file. After feeding the pylhe.LHEEvent into the parameter eventFromFile we have two additional options, namely\n", "* whether we want to form the color-connected dipoles between the incoming and outgoing particles of the event or between the incoming and intermediate ones, and\n", "* whether we also want to account for the decay dipoles between the intermediate and the outgoing particles.\n", "\n", "An example where we shower both the dipoles formed by the incoming-intermediate particles and the intermediate-outgoing particles is given in Section 5 of [doi:10.1007/JHEP09(2020)029](https://inspirehep.net/literature/1798660)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To keep this documentation simple, we will not actually read in a .lhe-file, and therefore cannot go hands-on here. If you want to play around with this feature, we suggest you move to the example code." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To instantiate an Event from a pylhe.LHEEvent, we have to use\n", "\n", "`evLHE=ngl.Event(eventFromFile=pylhe.LHEEvent)`\n", "\n", "This sets up the Event with the default case of color-sorting the incoming and outgoing particles. To set up the showering of the incoming-intermediate particle dipoles, we have to use\n", "\n", "`evLHE=ngl.Event(eventFromFile=pylhe.LHEEvent, productionDipoles='intermediate')`\n", "\n", "and if we not only want to have the production dipoles showered, but also the dipoles associated with the decay, we have to use\n", "\n", "`evLHE=ngl.Event(eventFromFile=pylhe.LHEEvent, productionDipoles='intermediate', decayDipoles=True)`\n", "\n", "Note that you will probably seldom use these additional features and will most often only need `evLHE=ngl.Event(eventFromFile=pylhe.LHEEvent)`, except if you work with top quarks." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that if you instantiate an Event using a pylhe.LHEEvent, you have access to the weight of the event as well as an array of the FourVectors of each kind of particle in the form of attributes. You can access the weight of the event via\n", "\n", "`ev=ngl.Event(eventFromFile=pylhe.LHEEvent)`\n", "\n", "`ev.weight`\n", "\n", "If you want the fourvectors of all incoming up-type quarks and antiquarks, you can access them via\n", "\n", "`ev.incomingUp`\n", "\n", "In the same way, you can access ev.statusType, with status being incoming, intermediate or outgoing, and Type being Down, Up, Strange, Charm, Bottom, Top, Electron, ENeutrino, Muon, MNeutrino, Tau, TNeutrino, Gluon, Photon, ZBoson, WBoson or Higgs." ] }
\n", "In the same way, you can access ev.statusType, with status being incoming,intermediate or outgoing, and Type being Down, Up, Strange, Charm, Bottom, Top, Electron, ENeutrino, Muon, MNeutrino, Tau, TNeutrino, Gluon, Photon, ZBoson, WBoson or Higgs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## OutsideRegion" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Just as the Event class, OutsideRegion is very specific to the observable you are considering. Its whole purpose is to tell the Shower, whether a FourVector is pointing into the region where it gets vetoed. The nomenclature of **outside** comes from the textbook example of the interjet energy flow, where radiation that is not inside the jets gets vetoed. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can initiate the OutsideRegion with or without an Event. Whether you should feed it an Event or not depends on whether the region where you want to veto radiation depends on the distribution of the outgoing particles. We have one example code each for the usage with and without an Event." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An instance of OutsideRegion doesn't do anything. It containes the stub of a method outside(self,v) which needs to be implemented by you. To do so, you need to write a method" ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [], "source": [ "def _outside(self,v):\n", " #\n", " # Code that checks whether v is a FourVector\n", " # landing outside. \n", " #\n", " retVal=True # if v outside\n", " retVal=False # if v not outside\n", " return (retVal)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "and - after creating an instance of OutsideRegion" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [], "source": [ "outsideRegion=ngl.OutsideRegion()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "exchange the stub of outside(self,v) of your instance outsideRegion to your method by invoking" ] }, { "cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [], "source": [ "outsideRegion.outside = _outside.__get__(outsideRegion,ngl.OutsideRegion)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For more details we refer to the two example codes which show the handling of the OutsideRegion class." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Shower" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us now get to the core of our resummation precedure, the showering of an Event. To instantiate a Shower, we feed it the following parameters (most of which come with a default choice):" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [], "source": [ "# event: Event,\n", "# outsideRegion: OutsideRegion,\n", "# nsh: int=50, \n", "# nbins: int=100, \n", "# tmax: float=0.1, \n", "# cut: float=5.0, \n", "# fixedOrderExpansion: bool=True,\n", "# virtualSubtracted: bool=False" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of these parameters, event and outsideRegion unsurprisingly contain the Event to shower with the respective OutsideRegion under consideration. The number of showerings you want to apply on the event is fed in via nsh (50 by default). To create the Hist which will eventually be the result of the resummation, we can change the nbins (100 by default) and tmax (0.1 by default). 
, { "cell_type": "markdown", "metadata": {}, "source": [ "For more details we refer to the two example codes which show the handling of the OutsideRegion class." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Shower" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us now get to the core of our resummation procedure, the showering of an Event. To instantiate a Shower, we feed it the following parameters (most of which come with a default choice):" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [], "source": [ "# event: Event,\n", "# outsideRegion: OutsideRegion,\n", "# nsh: int=50, \n", "# nbins: int=100, \n", "# tmax: float=0.1, \n", "# cut: float=5.0, \n", "# fixedOrderExpansion: bool=True,\n", "# virtualSubtracted: bool=False" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of these parameters, event and outsideRegion unsurprisingly contain the Event to shower and the respective OutsideRegion under consideration. The number of showerings you want to apply to the event is fed in via nsh (50 by default). To create the Hist which will eventually be the result of the resummation, we can change nbins (100 by default) and tmax (0.1 by default). We can also change the collinear cutoff cut (5.0 by default), which corresponds to $\\eta_{max}$ as discussed in (A.14) of [doi:10.1007/JHEP09(2020)029](https://inspirehep.net/literature/1798660). Furthermore, we can decide via fixedOrderExpansion (True by default) whether or not we want to calculate the first two terms of the expansion of the resummation as given in (4.3) of [doi:10.1007/JHEP09(2020)029](https://inspirehep.net/literature/1798660). The last option, virtualSubtracted, will most likely stay turned off (as is the default); it is used to subtract the global one-loop part from the soft anomalous dimension as discussed in (3.5) of [doi:10.1007/JHEP04(2019)020](https://inspirehep.net/literature/1717208)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Instead of going through the details of the showering procedure we refer to Appendix A of [doi:10.1007/JHEP09(2020)029](https://inspirehep.net/literature/1798660), where this is explained in a very detailed fashion." ] }
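, { "cell_type": "markdown", "metadata": {}, "source": [ "To wrap up, here is a minimal sketch of how a Shower could be set up from the objects defined above, using the parameters just discussed (the choice nsh=100 is purely illustrative, and the outside method of outsideRegion is still the stub from the previous section):\n", "```python\n", "# set up a Shower of the dipole event evDip with 100 showerings,\n", "# keeping the default binning, tmax and collinear cutoff\n", "shower=ngl.Shower(evDip,outsideRegion,nsh=100)\n", "```\n", "For running the shower and extracting the resulting Hist we once more refer to the example codes." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.11" } }, "nbformat": 4, "nbformat_minor": 4 }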