{
"cells": [
{
"cell_type": "markdown",
"id": "4818d2aa-fa6c-4687-bb7d-65c52856e849",
"metadata": {},
"source": [
"# Exercise 6.2: The Olkin-Petkau-Zidek example of MLE fragility\n",
"\n",
"\n",
"In 1981, [Olkin, Petkau, and Zidek](https://doi.org/10.1080/01621459.1981.10477697) presented an example in which maximum likelihood estimates can vary wildly with only small changes in the data. We will work through their example in this problem.\n",
"\n",
"**a)** Say you are measuring the outcomes of $N$ Bernoulli trials, but you can only measure a positive result; negative results are not detected in your experiment. You do know, however, that $N$, while unknown, is the same for all experiments. The numbers of positive results you get from a set of measurements (sorted for convenience) are *n* = 16, 18, 22, 25, 27. Modeling the generative process with a Binomial distribution, $n_i \\sim \\text{Binom}(\\theta, N)\\;\\;\\forall i$, obtain maximum likelihood estimates for $\\theta$ and $N$. *Hint:* You can work out an analytical expression for the MLE of $\\theta$ in terms of $N$, and you can then find the MLE of $N$ by enumerating candidate values of $N$.\n",
"\n",
"**b)** Now, let's say that the final measurement has 28 positive results instead of 27. Repeat your MLE calculation. How do the estimates change?"
]
},
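{
"cell_type": "markdown",
"id": "opz-profile-likelihood-sketch-md",
"metadata": {},
"source": [
"As a minimal sketch of the enumeration strategy in the hint: for fixed $N$, setting the derivative of the log-likelihood with respect to $\\theta$ to zero gives $\\hat{\\theta} = \\bar{n}/N$, where $\\bar{n}$ is the sample mean of the counts. Substituting this back in leaves a profile log-likelihood that depends only on $N$, which can be maximized by enumerating candidate values of $N$. The function names and the search cap `N_max` below are illustrative choices, not part of the problem statement."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "opz-profile-likelihood-sketch-code",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import scipy.special\n",
"\n",
"\n",
"def profile_log_like(N, n):\n",
"    \"\"\"Binomial log-likelihood of counts n, evaluated at theta-hat = mean(n) / N.\"\"\"\n",
"    n = np.asarray(n)\n",
"    theta = n.mean() / N\n",
"    # log of the binomial coefficient, written with log-gamma functions\n",
"    log_binom_coef = (\n",
"        scipy.special.gammaln(N + 1)\n",
"        - scipy.special.gammaln(n + 1)\n",
"        - scipy.special.gammaln(N - n + 1)\n",
"    )\n",
"    return np.sum(log_binom_coef + n * np.log(theta) + (N - n) * np.log(1 - theta))\n",
"\n",
"\n",
"def mle_N(n, N_max=2000):\n",
"    \"\"\"Return the N maximizing the profile log-likelihood over max(n) <= N <= N_max.\"\"\"\n",
"    N_vals = np.arange(max(n), N_max + 1)\n",
"    log_likes = [profile_log_like(N, n) for N in N_vals]\n",
"    return N_vals[np.argmax(log_likes)]"
]
},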
{
"cell_type": "markdown",
"id": "fc00722d-ff25-440c-b525-6b04664f32f7",
"metadata": {},
"source": [
" "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}