{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# **Statistics lecture 2 Hands-on session : exercises notebook**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is the companion notebook to lecture 2 in the statistical course series, covering the following topics:\n", "1. Maximum likelihood basics\n", "2. Maximum likelihood for binned data\n", "3. Maximum likelihood in the Gaussian case and chi2\n", "4. Hypothesis testing basics\n", "\n", "First perform the usual imports:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import scipy.stats\n", "from matplotlib import pyplot as plt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Maximum likelihood basics\n", "\n", "Maximum likelihood is a powerful tool to *estimate* (=compute the value of, in stats parlance) unknown parameters. The principle is as follows: what we know is\n", "* **The PDF** $P(x;\\theta)$, in terms of the observable $x$ (a.k.a. the random variable) and one or more unknown parameters $\\theta$.\n", "* **A dataset**, i.e. a value $x_{\\text{obs}}$ of the observable.\n", "\n", "The natural usage of the PDF is to draw values of $x$, for a given value of $\\theta$. What we need here is the reverse: we already have the observed $x_{\\text{obs}}$, and we'd like to use it to infer the value of $\\theta$.\n", "\n", "Maximum likelihood (ML) is the procedure of doing this by maximizing the PDF as a function of $\\theta$ and with the observable fixed to to $x = x_{\\text{obs}}$. This presentation of the PDF is called the *likelihood*, and it's really still just the PDF:\n", "$$\n", "L(\\theta) = P(x_{\\text{obs}}; \\theta).\n", "$$\n", "The ML estimator for $\\theta$ is then\n", "$$\n", "\\hat{\\theta} = \\text{arg max}_{\\theta} \\, L(\\theta),\n", "$$\n", "the value of $\\theta$ that maximizes $L$.\n", "\n", "We start with a simple example: a counting experiment where we observe $n=5$ with an expected background of $b=3$. The PDF is a Poisson, with a parameter given as $s+b$, and we'd like to estimate $s$.\n", "\n", "The ML procedure is performed as" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# Set the inputs\n", "n_obs = 5\n", "b = 3\n", "\n", "# We're going to scan values of s ranging from 0 to 5, in steps of 0.1\n", "s_values = np.arange(0, 5, 0.1)\n", "\n", "# ==> Compute the likelihood for each value of s. As a reminder, the Poisson PDF P(n,s+b) is provided as scipy.stats.poisson.pmf(n, s+b)\n", "# ==> Plot the results and find the value of s (\"s-hat\") that maximizes the likelihood" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If all goes well, you should get a maximum at $\\hat{s} = 2$ as one could naively expect.\n", "\n", "Note that while the position of the maximum is meaningful, the likelihood value isn't: as we'll see later, likelihoods only need to be defined up to a multiplicative factor since we only use likelihood ratios in measurements, so their absolute value is arbitrary.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Maximum likelihood for binned data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make things a bit more interesting, we move to the analysis of binned data. 
We take a model similar to the one used for the $\\chi^2$ exercise in the previous lecture:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Figure: expected yields ('s+b' in orange, 'b only' in blue) with the generated data (black points) overlaid, vs. bin number
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "# Define the binning\n", "nbins = 10\n", "x = np.linspace(0.5, nbins - 0.5, nbins)\n", "\n", "# The background follows a linear shape\n", "b_yields = np.array([ (1 - i/2/nbins) for i in range(0, nbins) ])\n", "b_fracs = b_yields/np.sum(b_yields)\n", "\n", "# The signal shape is a peak\n", "s_fracs = np.zeros(nbins)\n", "s_fracs[4:7] = [ 0.1, 0.8, 0.1 ]\n", "\n", "# Define the signal and background\n", "s = 3\n", "b = 50\n", "s_and_b = s*s_fracs + b*b_fracs\n", "b_only = b*b_fracs\n", "\n", "# Now generate some data for the s+b model\n", "np.random.seed(14) # Set the random seed to make sure we generate the same data every time\n", "data = [ np.random.poisson(s*s_frac + b*b_frac) for s_frac, b_frac in zip(s_fracs, b_fracs) ]\n", "\n", "# Make a plot of the result\n", "plt.bar(x, s_and_b, color='orange', yerr=np.sqrt(s_and_b), label='s+b')\n", "plt.bar(x, b_only, color='b', label='b only')\n", "plt.scatter(x, data, zorder=10, color='k', label='data')\n", "plt.xlabel('bin number')\n", "plt.ylabel('Total expected yield')\n", "plt.legend();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The plot above shows the typical case of a binned analysis: we have 2 *templates*, i.e. reference shapes for signal and background. These are given by bin fractions $f_{S,i}$ and $f_{B,i}$ which are normalized to 1. They are then scaled by the yields $s$ and $b$.\n", "\n", "The ML estimate $\\hat{s}$ corresponds to the \"best fit\" of the templates to the data, so one can more or less read it off the plot: naively, one expects something a bit larger than the $s=3$ value that is shown in the plot, perhaps $\\hat{s} \\approx 5$ or so.\n", "\n", "We now check this by performing the ML estimation. A small change compared to the previous case is that instead of $L(s)$, we'll be using the quantity $\\lambda = -2 \\log L(s)$. The reason is as follows: first, the total PDF for all the bins together is a product of Poisson terms, one for each bin:\n", "$$\n", "L(s) = \\prod_{i=1}^{n_{\\text{bins}}} P(n_i, s f_{s,i} + b f_{b,i})\n", "$$\n", "Technically it's easier to deal with sums than products, and the log does this for us. As we'll see in a moment, this is also useful to deal with Gaussian PDFs, since in this case the log just extracts the squared term in the exponent. This is also the reason for the $-2$ term in the formula.\n", "The only consequence in practical terms is that instead of $L(s)$ we compute $\\lambda(s)$, and that we now want to minimize it instead of maximizing. But the ML estimate $\\hat{s}$ remains the same, since the value of $s$ that maximizes $L(s)$ also minimizes $\\lambda(s)$.\n", "\n", "Let's not compute the ML estimate (best fit) $\\hat{s}$ using $\\lambda(s)$ :" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# Define the s scan\n", "s_values = np.arange(-2, 15, 0.1)\n", "\n", "# ==> Compute lambda for each value of s, plot the result and find the best-fit value among the values tested;\n", "# ==> For later convenience, you may want to define a function lambda_s(s) that returns lambda as a function of s :\n", "\n", "def lambda_s(s) :\n", " pass # replace this with the appropriate code" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If all goes well, you should find a best-fit $\\hat{s}$ of about 5. 
Of course this value is a bit imprecise due to the step size of 0.1 in the scan, but we can do better:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "from scipy.optimize import minimize_scalar\n", "# ==> Use the minimize_scalar function to find the minimum of lambda_s.\n", "# The syntax is minimize_scalar(func, (min, max)).x (the .x returns the position of the minimum)\n", "# and 'func' should be the lambda_s function you defined above" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should find a best-fit value of about $\\hat{s} = 5.06$. \n", "\n", "Note that since $L(s)$ was defined only up to a multiplicative factor, $\\lambda(s)$ is defined up to an additive constant, so the value $\\lambda(\\hat{s})$ at the minimum isn't meaningful. On the other hand, the difference between values of $\\lambda(s)$ at different $s$ is meaningful, and will be used later in hypothesis testing." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Maximum likelihood in the Gaussian case" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We've seen that the Poisson distribution can often be well approximated by a Gaussian (as can many other distributions, through the central-limit theorem). If we write the expected number of events as\n", "$$\n", "N_i = s f_{s,i} + b f_{b,i}\n", "$$\n", "then the likelihood is\n", "$$\n", "L(s) = \\prod_{i=1}^{n_{\\text{bins}}} P(n_i, N_i) \\approx \\prod_{i=1}^{n_{\\text{bins}}} \\exp\\left(-\\frac{1}{2} \\frac{(n_i - N_i)^2}{\\sigma_i^2} \\right)\n", "$$\n", "\n", "and the $-2 \\log$ likelihood is then\n", "\n", "$$\n", "\\lambda(s) = -2 \\log L(s) = \\sum_{i=1}^{n_{\\text{bins}}}\\left(\\frac{n_i - N_i}{\\sigma_i}\\right)^2,\n", "$$\n", "which is of course very familiar: it's just the usual $\\chi^2$. This is an important motivation for using $\\lambda(s)$: it can be defined for any likelihood, but in the Gaussian case it reduces to the $\\chi^2$. (Note that we've dropped the normalization factor of the Gaussian PDF along the way, but again, we're always free to drop constants when computing likelihoods.)\n", "\n", "The upshot is that for the Gaussian case, ML is the same as $\\chi^2$ minimization -- this is why we're using \"ML\" and \"best fit\" interchangeably in this notebook. To illustrate this, we can repeat the example above in the Gaussian case, now estimating $\\hat{s}$ using the $\\chi^2$.\n", "\n", "For the standard $\\chi^2$ (also called the *Pearson* $\\chi^2$), we take $\\sigma_i^2$ from the prediction. Recall that for a Poisson distribution the variance is equal to the mean, so we have $\\sigma_i^2 = N_i$.\n" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# ==> Define a function to compute the chi2 between the model and the data as a function of s\n", "def chi2_Pearson(s) :\n", "    pass # your code here instead\n", "\n", "# ==> Plot the chi2_Pearson(s) and lambda(s) values on the same plot to see if they are close to each other as expected. You will\n", "# need to add a vertical offset to one of the curves to get good agreement, which one can always do as discussed above.\n", "# ==> Use minimize_scalar to find the value of s which minimizes the chi2, and compare to what was found for lambda(s) above." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If all goes well, you should see that the $\\chi^2$ is quite close to the Poisson case above. 
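\n", "\n", "(For reference, a minimal sketch of the Pearson $\\chi^2$, under the same assumptions as the `lambda_s` sketch above:\n", "\n", "```python\n", "def chi2_Pearson(s) :\n", "    expected = s*s_fracs + b*b_fracs              # predicted yields N_i\n", "    return np.sum((data - expected)**2/expected)  # sigma_i^2 = N_i\n", "```\n", "\n", "When comparing it to $\\lambda(s)$, remember that the two curves can differ by an arbitrary additive offset.)\n", "\n", "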
The difference comes from the fact that the event counts are not quite large enough for the Poisson distributions to be fully Gaussian, and the residual non-Gaussianity leads to a small discrepancy between the exact Poisson model and its $\\chi^2$ approximation.\n", "\n", "One can get an even less Gaussian situation by setting $s$ and $b$ to smaller values (say $s=2$ and $b=1$), so that the number of events is not large enough to trigger the central-limit theorem. In this case one should get an asymmetric shape for the Poisson $\\lambda$ and sizable differences with the $\\chi^2$ approximation. \n", "\n", "**The rest of this section is optional -- feel free to skip it if you are short on time (you should be about half way through the session at this point)**" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "# ==> (optional) Repeat the previous study for s=2 and b=1 to check for larger non-Gaussian effects.\n", "# ==> (optional) One can also repeat it again for larger counts such as b=1000, s=20 and check that the chi2 is now an excellent approximation to the Poisson." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One can also define the alternate *Neyman* $\\chi^2$, where the uncertainty is taken from the observation rather than the expectation, so $\\sigma_i^2 = n_i$. This has some advantages, but typically agrees less well with the Poisson limit:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "# ==> Repeat the previous study (going back to the original event yields), and add the Neyman chi2 curve to the plot with lambda(s)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Hypothesis testing basics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So far we've focused on estimating parameters, but to get realistic results (including setting uncertainties on the best-fit values), we need an additional ingredient, *hypothesis testing*. The procedure is as follows:\n", "* **Define the hypotheses to test**. We need two of them: a baseline *null* hypothesis, and an *alternate* which is tested against it.\n", "* **Define the *test* to perform** -- meaning the observables to use, and the procedure to decide which hypothesis to accept.\n", "* **Compute the result of the test for the observed data**, in particular the *p-value* of the test, and decide whether to accept the null or the alternate hypothesis.\n", "\n", "To start with, we consider a simple Poisson counting process:\n", "* Define two different signal hypotheses ($s=0$ and $s=5$).\n", "* Define the test in terms of the observed number of events: if it is less than some threshold, accept the null $s=0$, otherwise accept $s=5$.\n", "* As for any test, there are two ways to go wrong:\n", "  * Rejecting the null although it is true (false positive): the fraction of such cases is the *Type-I error rate* or *p-value*.\n", "  * Accepting the null although it is false (false negative): the fraction of such cases is the *Type-II error rate*, also written as (1 - power).\n", "\n", "A more stringent test reduces the Type-I error rate, but at the expense of an increased Type-II error rate, and vice versa. This is illustrated by the *ROC curve*, which shows each error rate as a function of the other. The threshold for the test should be chosen depending on the desired balance between these error rates.\n", "\n", "For the study below, we consider a single-bin counting experiment. 
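\n", "\n", "(As a reference for the ROC curve mentioned above, a minimal sketch, assuming the hypotheses and `thresholds` array defined in the cell below:\n", "\n", "```python\n", "type_I  = scipy.stats.poisson.sf(thresholds, s_null + b)   # p-value: P(n > threshold | null)\n", "type_II = scipy.stats.poisson.cdf(thresholds, s_alt + b)   # P(n <= threshold | alt)\n", "plt.plot(type_I, type_II)\n", "plt.xlabel('Type-I error rate'); plt.ylabel('Type-II error rate')\n", "```\n", "\n", "so that each threshold gives one point on the curve.)\n", "\n", "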
For this single-bin setup, the discriminant is just the observed count $n$." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "# Define the hypotheses\n", "b = 10 # background of 10 events\n", "s_null = 0 # the null hypothesis: no signal, only background\n", "s_alt = 5 # the alternate hypothesis: a signal of 5 events on top of the background\n", "\n", "# consider various values of the observed n (our discriminant)\n", "ns = np.arange(1, 30, 1)\n", "\n", "# ==> Compute and plot the PDFs for the two hypotheses. These are Poisson PDFs with parameters s_null + b and s_alt + b,\n", "# and should be evaluated at the n values given in the array above.\n", "# Reminder: the Poisson PDF is given by scipy.stats.poisson.pmf(n, s+b)\n", "\n", "# Now we draw a couple of plots illustrating different test configurations\n", "\n", "# First a loose test with a threshold at n=12 (i.e. s=2)\n", "threshold1 = 12\n", "\n", "# Then a more stringent test with a threshold at n=16 (i.e. s=6)\n", "threshold2 = 16\n", "\n", "# ==> Compute the Type-I error probability (p-value) and the Type-II error probability for the two tests above\n", "# Recall that the p-value is the integral of the null hypothesis PDF above the threshold, so given by scipy.stats.poisson.sf(threshold, s_null+b)\n", "# and the Type-II error is the integral of the alt. hypothesis PDF below the threshold, so given by scipy.stats.poisson.cdf(threshold, s_alt+b)\n", "# ==> Also optionally plot the results using the fill_between command of matplotlib, as done in lecture 1\n", "\n", "thresholds = np.arange(1, 30, 1)\n", "# ==> Compute the Type-I and Type-II error probabilities for each threshold in the list above, and plot the ROC curve\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The threshold for discovery in physics is often set at $5\\sigma$, which corresponds to a p-value of about $3 \\cdot 10^{-7}$. What threshold would one use to get so small a Type-I error rate?" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "# ==> Compute the value of the threshold on n which would give us a 5-sigma discovery\n", "# Note that the exact value of the p-value corresponding to \"5 sigmas\" is given by scipy.stats.norm.sf(5) -- the integral of a normal PDF above 5\n", "\n", "# ==> also compute the power of this test, i.e. 1 - (Type-II error rate), for this threshold" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should find a threshold of 29, i.e. we can claim a discovery if we observe $n_{\\text{obs}} > 29$. If we do observe such a large $n$, then we can indeed reject the null with a p-value corresponding to the $5\\sigma$ threshold. This corresponds to testing for discovery: we reject the no-signal hypothesis in favor of a hypothesis with non-zero signal.\n", "\n", "However the power of the test is also very low (you should have found about $4 \\cdot 10^{-4}$ above), which means that even if the $s=5$ signal is there, we are very unlikely to observe an event count above the threshold. The underlying problem is that the difference between the alternate hypothesis $s=5$ and the null $s=0$ is quite small on the scale of the measurement uncertainties, so the two hypotheses are difficult to separate.\n", "\n", "If the alternate hypothesis were larger, say $20$ signal events, then things would be easier. 
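\n", "\n", "For reference, a sketch of the previous exercise ($5\\sigma$ threshold and power), assuming the `thresholds`, `b`, `s_null` and `s_alt` defined above:\n", "\n", "```python\n", "p5 = scipy.stats.norm.sf(5)  # one-sided 5-sigma p-value, about 2.9e-7\n", "# smallest threshold whose p-value drops below p5 -- should give 29\n", "n_thr = np.min(thresholds[scipy.stats.poisson.sf(thresholds, s_null + b) < p5])\n", "power = scipy.stats.poisson.sf(n_thr, s_alt + b)  # about 4e-4\n", "```\n", "\n", "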
The threshold for discovery would still be 29 events (it only depends on the null hypothesis, not the alternate), but now the power would be:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "s_alt2 = 20\n", "# ==> Compute the power of the test for this larger signal" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should find that the power is about 50%: if $s_{\\text{alt}} = 20$ is indeed true, then we have an approximately 50% chance of obtaining $n > 29$ and therefore of making the discovery." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 }