{
"cells": [
{
"cell_type": "markdown",
"id": "20994f29",
"metadata": {},
"source": [
"# Monte Carlo Methods"
]
},
{
"cell_type": "markdown",
"id": "f299ea9e",
"metadata": {},
"source": [
"The basic idea of Monte Carlo Methods is to use random numbers and statistics to solve very complicated problems that are hard to solve in any other way. Typically, this is achieved through simulations, such as Markov chains. However, the most straightforward application is in computing high-dimensional integrals by throwing points into a volume and estimating the result. There are numerous other applications, such as finding global minimums through simulated annealing, drawing inspiration from statistical physics, and so on.\n",
"\n",
"Here we will cover a very small fraction of `Classical Monte Carlo` algorithms. There is a huge set of quantum Monte Carlo algorithms that we will not discuss here.\n"
]
},
{
"cell_type": "markdown",
"id": "534d7c6c",
"metadata": {},
"source": [
"First we need to understand what are random numbers on a comuter."
]
},
{
"cell_type": "markdown",
"id": "70a893dc",
"metadata": {},
"source": [
"## Random Numbers"
]
},
{
"cell_type": "markdown",
"id": "9ec03e27",
"metadata": {},
"source": [
"It is very hard to implement a good random number generator because a sequence of trully random numbers can not be generated by deterministic computers. Only pseudo-random number generators can be coded. There are several excellent pseudo random number generators available in various libraries, which give very satisfactory results in combination\n",
"with Monte Carlo methods or multidimensional integrations. For every Monte Carlo application, it is crucial to use high quality random number generator. In practice, it is best to select a few random number generators with a good\n",
"reputation, and make sure that results do not depend on the choice of a random number\n",
"generator. \n",
"\n",
"Numpy default generators are now from `PCG` library, which has very good reputation, and is very new. Older random number generations with good reputation are `Mersenne Twister` generators, which usually go by the name `rng mt19937`. For example gnu scientific library is implementing them as `gsl_rng_mt19937` and intel `mkl` library is calling them `VSL_BRNG_MT19937` and `VSL_BRNG_SFMT19937`.\n",
"\n",
"\n",
"**How do random number generators work?**\n",
"\n",
"The simplest and fastest random number generators, which are unfortunately not of \n",
"high quality, are linear congruential generators:\n",
"$$I_{j+1}=(a I_j + c)\\; \\textrm{mod}\\; m$$\n",
"for example, $a=16843009$, $c=3014898611$, and $m=2^{32}$ in `cc65` generator. \n",
"\n",
"Modern generators, like `PCG` are of course much better, but still one needs to understand that these are pseudo-random numbers. Before you spent million of CPU hours, you should test that results don't depend on random number generator within stastistical error."
]
},
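{
"cell_type": "markdown",
"id": "3fa2b1c0",
"metadata": {},
"source": [
"As a minimal sketch (the seed, sample size, and the test observable $\\langle x^2\\rangle=1/3$ are illustrative choices), the congruential recipe above can be coded directly, and NumPy's `PCG64` and `MT19937` bit generators can be used to cross-check a simple Monte Carlo average:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5c7d9e21",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from numpy.random import Generator, PCG64, MT19937\n",
"\n",
"# Linear congruential generator I_{j+1} = (a*I_j + c) mod m\n",
"# with the cc65 constants quoted in the text.\n",
"a, c, m = 16843009, 3014898611, 2**32\n",
"I, lcg = 1, []\n",
"for _ in range(5):\n",
"    I = (a * I + c) % m\n",
"    lcg.append(I / m)              # map to [0,1)\n",
"print(\"LCG sequence:\", lcg)\n",
"\n",
"# Cross-check a simple MC average, <x^2> = 1/3 for uniform x,\n",
"# with two unrelated high-quality generator families; results\n",
"# should agree within the ~1/sqrt(N) statistical error.\n",
"N = 100000\n",
"for bitgen in (PCG64(42), MT19937(42)):\n",
"    rng = Generator(bitgen)\n",
"    x = rng.random(N)\n",
"    print(type(bitgen).__name__, np.mean(x**2), \"+-\", np.std(x**2) / np.sqrt(N))"
]
},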
{
"cell_type": "markdown",
"id": "1fe5a975",
"metadata": {},
"source": [
"## Multidimensional integration"
]
},
{
"cell_type": "markdown",
"id": "c48bd3c5",
"metadata": {},
"source": [
"Multidimensional numeric integration in more than 4 dimensions is more\n",
"appropriate for Monte-Carlo than one-dimensional quadratures. If the\n",
"function is smooth enough, or we know how to transform integral to\n",
"make it smooth, the integration can be performed with MC. The reason\n",
"for MC success is that it's error, according to central limit theorem,\n",
"is always proportional to $1/\\sqrt{N}$ independent of dimension.\n",
"\n",
"The error of one dimensional quadratures can be estimated: If the number of\n",
"points used in each dimension is $N_1$, the number of all points used\n",
"in $d$ dimensions is $N=(N_1)^d$. The error for trapezoid rule was\n",
"estimated to $1/(N_1)^2$ therefore the error in $d$ dimensions is\n",
"$1/N^{2/d}$.\n",
"It is therefore clear than for $d=4$ the Monte-Carlo error and the\n",
"trapezoid-rule error are equal.\n",
"\n",
"\n",
"**Straighforward MC**\n",
"\n",
"For more or less flat functions, the integration is straightforward\n",
"\\begin{eqnarray}\n",
" \\int f dV \\approx V \\langle f\\rangle \\pm V \\sqrt{\\frac{\\langle\n",
" f^2\\rangle-\\langle f\\rangle^2}{N}}\n",
"\\end{eqnarray}\n",
"here\n",
"\\begin{eqnarray}\n",
" \\langle f\\rangle\\equiv \\frac{1}{N}\\sum_{i=0}^{N-1} f(x_i)\\\\\n",
" \\langle f^2\\rangle\\equiv \\frac{1}{N}\\sum_{i=0}^{N-1} f^2(x_i)\n",
"\\end{eqnarray}\n",
"\n",
"If the function $f$ is rapidly varying, the variance is going to be\n",
"large and precision of the integral vary bad. The scaling $1/\\sqrt{N}$ is\n",
"very bad! If one has a lot of computer time and patience, one might\n",
"still try to use this method, because it is so straighforward to implement.\n",
"\n",
"If the region $V$ of the integral is complicated and is hard to\n",
"generate distribution of points in volume $V$, one can just find a larger and simpler volume\n",
"$W$ which contains volume $V$. Then one samples over $W$ and defines\n",
"the function $f$ to be zero outside $V$. Of course the error will increase because\n",
"number of \"good\" points is smaller."
]
},
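{
"cell_type": "markdown",
"id": "7b1e4a88",
"metadata": {},
"source": [
"A short sketch of the straightforward estimator above, combined with the enclosing-volume trick: the volume of the unit ball in $d=4$ dimensions (exactly $\\pi^2/2$) is obtained by throwing uniform points into the enclosing cube $W=[-1,1]^4$ and setting $f=0$ outside the ball. The sample size is an illustrative choice:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d3f6c02",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"N, d = 200000, 4\n",
"\n",
"# int f dV ~ V*<f> +- V*sqrt((<f^2>-<f>^2)/N), where V is the\n",
"# enclosing cube [-1,1]^d and f = 1 inside the unit d-ball.\n",
"x = rng.uniform(-1.0, 1.0, size=(N, d))\n",
"f = (np.sum(x**2, axis=1) <= 1.0).astype(float)\n",
"V = 2.0**d\n",
"integral = V * np.mean(f)\n",
"error = V * np.sqrt((np.mean(f**2) - np.mean(f)**2) / N)\n",
"print(integral, \"+-\", error, \" exact:\", np.pi**2 / 2)"
]
},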
{
"cell_type": "markdown",
"id": "10103b8e",
"metadata": {},
"source": [
"**Importance sampling**\n",
"\n",
"Usefulness of Monte Carlo becomes more appealing when importance\n",
"sampling strategy is implemented. Of course one needs to know something\n",
"about the function to implement the strategy, but one is rewarded with\n",
"much higher accuracy.\n",
"\n",
"It is simplest to ilustrate the idea in 1D. If one knows that function\n",
"$f(x)$ to be integrated is mostly proportional to another function\n",
"$w(x)$ in the region where the integral contains most of the weight,\n",
"we might want to rewrite the integral\n",
"\\begin{eqnarray}\n",
" \\int f(x)dx = \\int \\frac{f(x)}{w(x)} w(x) dx\n",
"\\end{eqnarray}\n",
"If weight function $w(x)$ is a simple analytic function, which can be\n",
"integrated analytically, and obeys the following constrains\n",
"- $w(x)>0$ for every $x$ \n",
"- $\\int w(x)dx = 1$\n",
"we can define\n",
"\\begin{eqnarray}\n",
" W(x) = \\int^x w(t)dt \\qquad \\rightarrow\\qquad dW(x) = w(x)dx\n",
"\\end{eqnarray}\n",
"and rewrite\n",
"\\begin{eqnarray}\n",
" \\int f(x)dx = \\int \\frac{f(x)}{w(x)} dW(x) = \\int\n",
" \\frac{f(x(W))}{w(x(W))} dW\\rightarrow\n",
" \\left\\langle \\frac{f(x(W))}{w(x(W))}\\right\\rangle_{W\\; uniform\\;\\in[0,1]}\n",
"\\nonumber\n",
"\\end{eqnarray}\n",
"\n",
"If the function $f/w$ on mesh $W$ is reasonably flat, it can be\n",
"efficiently integrated by MC. The error is now proportional to\n",
"$$\\sqrt{\\frac{\\langle(f/w)^2\\rangle-\\langle f/w\\rangle^2}{N}}$$ and is\n",
"therefore greatly reduced.\n",
"\n",
"To implement the algorith, we generate uniform random numbers $r$\n",
"in the interval $r\\in[0,1]$ which correspond to variable $W$. We can\n",
"solve the equaton for $x=W^{-1}(r)$ to get $x$ and use it to evaluate\n",
"$f(x)/w(x)$. The random numbers are therefore uniformly distributed on\n",
"mesh $W$ while they are non-uniformly distributed on mesh $x$.\n",
"\n",
"This can also be writtens as\n",
"\\begin{eqnarray}\n",
" \\left\\langle \\frac{f(x)}{w(x)}\\right\\rangle_{\\frac{P(x)}{dx}=w(x)}\n",
"\\nonumber\n",
"\\end{eqnarray}\n",
"because the distribution of points $x$ is\n",
"$\\frac{dP}{dx}=\\frac{dP}{dW}\\frac{dW}{dx}$ and since distribution\n",
"$\\frac{dP}{dW}$ is uniform, and $\\frac{dW}{dx}=w(x)$, we have $\\frac{dP}{dx}=w(x)$.\n",
"\n",
"The archaic example is the **exponential** weight function\n",
"\\begin{eqnarray}\n",
" w(x) = \\frac{1}{\\lambda}e^{-x/\\lambda}\\qquad for\\; x>0\n",
"\\end{eqnarray}\n",
"This is equivalent to our exponentially distributed mesh points. Most\n",
"of them are going to be close to $0$ and only few at large $x$.\n",
"\n",
"The integral is $W(x)=1-e^{-x/\\lambda}$ which gives for the inverse\n",
"$x=-\\lambda\\ln(1-W)$. The integral\n",
"\\begin{equation}\n",
" \\int_0^\\infty f(x)dx=\\int_0^1\n",
" \\frac{f(-\\lambda\\ln(1-W))}{w(-\\lambda\\ln(1-W))} dW =\n",
" \\int_0^1 f(-\\lambda\\ln(1-W))\\frac{\\lambda dW}{1-W}\n",
"\\end{equation}\n",
"is easily evaluated with MC if $f(x)$ is exponentially falling function.\n",
"\\begin{eqnarray}\n",
" \\int_0^\\infty f(x)dx\\rightarrow\n",
" \\langle \\lambda \\frac{f(-\\lambda\\ln(1-W))}{1-W}\\rangle_{W\\;uniform\\in[0,1]}\n",
"\\end{eqnarray}\n",
"\n",
"Since $\\frac{dP}{dW}=1$ ($W$ is uniformly distributed), the probability\n",
"for $x$ is $\\frac{dP}{dx}=w(x)$, therefore $x$ is exponentially distributed random\n",
"number.\n",
"\n",
"We could also write\n",
"\\begin{eqnarray}\n",
" \\int_0^\\infty f(x)dx\\rightarrow\n",
" \\left\\langle\\frac{f(x)}{\\frac{e^{-x/\\lambda}}{\\lambda}}\\right\\rangle_{\\frac{dP}{dx}=e^{-x/\\lambda}/\\lambda}\n",
"\\end{eqnarray}\n",
"\n",
"\n",
"Another very usefull weight function is **Gaussian distribution**\n",
"\\begin{eqnarray}\n",
" w(x) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}e^{\\frac{(x-x_0)^2}{2\\sigma^2}}\n",
"\\end{eqnarray}\n",
"How do we get random number $x$ to be distributed according to the\n",
"above distribution? The integral gives $\\mathrm{erf}$ function and its\n",
"inverse is not simple to evaluate.\n",
"\n",
"The trick is to use two random numbers to get\n",
"**Gaussian distrbution**. Consider the following algorithm\n",
"\\begin{eqnarray}\n",
" x_1 = x_0 + \\sqrt{-2\\sigma^2\\ln r_1}\\cos(2\\pi r_2)\\\\\n",
" x_2 = x_0 + \\sqrt{-2\\sigma^2\\ln r_1}\\sin(2\\pi r_2)\n",
"\\end{eqnarray}\n",
"The distribution of $r_1$ and $r_2$ is uniform in the interval\n",
"$[0,1]$. The distribution of $x_1$ and $x_2$ is\n",
"\\begin{eqnarray}\n",
" \\frac{d^2 P}{dx_1 dx_2} = \\frac{d^2 P}{dr_1 dr_2}\n",
" \\left|\\frac{\\partial(r_1,r_2)}{\\partial(x_1,x_2)}\\right|=\n",
" \\frac{1}{\\sqrt{2\\pi}}e^{-(x_1-x_0)^2/2} \\frac{1}{\\sqrt{2\\pi}}e^{-(x_2-x_0)^2/2} \n",
"\\end{eqnarray}\n",
"therefore Gaussian. In this way, we can obtaine $x$ to be distributed\n",
"gaussian, and we can evaluate\n",
"\\begin{eqnarray}\n",
" \\int f(x)dx = \\left\\langle \\frac{f(x)}{\\frac{1}{\\sqrt{2\\pi\\sigma^2}}e^{-(x-x_0)^2/(2\\sigma^2)}}\n",
" \\right\\rangle_{\\frac{dP}{dx}=\\frac{1}{\\sqrt{2\\pi\\sigma^2}}e^{\\frac{(x-x_0)^2}{2\\sigma^2}}}\n",
"\\end{eqnarray}\n",
"\n"
]
},
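{
"cell_type": "markdown",
"id": "c4a7f913",
"metadata": {},
"source": [
"A minimal sketch of both recipes above: the exponential weight via the inverse transform $x=-\\lambda\\ln(1-W)$, and Gaussian random numbers via the Box-Muller formulas. The test integrand $f(x)=x\\,e^{-x}$, whose integral over $[0,\\infty)$ is exactly $1$, and the parameter values are illustrative assumptions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e8b2d574",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(1)\n",
"N, lam = 100000, 1.0\n",
"\n",
"# Importance sampling with w(x) = exp(-x/lam)/lam: draw W uniform\n",
"# in [0,1), invert to x = -lam*ln(1-W), then average f(x)/w(x).\n",
"W = rng.random(N)\n",
"x = -lam * np.log(1.0 - W)\n",
"ratio = (x * np.exp(-x)) / (np.exp(-x / lam) / lam)\n",
"print(\"integral:\", np.mean(ratio), \"+-\", np.std(ratio) / np.sqrt(N))\n",
"\n",
"# Box-Muller: two uniforms give two independent Gaussian numbers\n",
"# with mean x0 and width sigma (1-r1 avoids log(0) since r1<1).\n",
"x0, sigma = 0.0, 1.0\n",
"r1, r2 = rng.random(N), rng.random(N)\n",
"g1 = x0 + np.sqrt(-2.0 * sigma**2 * np.log(1.0 - r1)) * np.cos(2 * np.pi * r2)\n",
"g2 = x0 + np.sqrt(-2.0 * sigma**2 * np.log(1.0 - r1)) * np.sin(2 * np.pi * r2)\n",
"print(\"mean:\", np.mean(g1), \" std:\", np.std(g1))"
]
},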
{
"cell_type": "markdown",
"id": "48c0d858",
"metadata": {},
"source": [
"**What is the best choice for weight function $w$?**\n",
"\n",
"If function $f$ is positive, clearly best $w$ is just proportional to\n",
"$f$. What if $f$ is not positive everywhere? It turns out that the\n",
"best choice is absolute value of $f$, i.e.,\n",
"\\begin{eqnarray}\n",
" w=\\frac{|f|}{\\int|f|dV}\n",
"\\end{eqnarray}\n",
"The proof is simple. The MC importance sampling evaluates\n",
"\\begin{eqnarray}\n",
" \\int f dV = \\int \\frac{f}{w}w dV\\approx\n",
" \\left\\langle\\frac{f}{w}\\right\\rangle\\pm\n",
" \\sqrt{\\frac{\\langle(\\frac{f}{w})^2\\rangle-\\langle\\frac{f}{w}\\rangle^2}{N}}\n",
"\\end{eqnarray}\n",
"and the error is minimal when\n",
"\\begin{eqnarray}\n",
"\\delta\\left(\\langle(\\frac{f}{w})^2\\rangle-\\langle\\frac{f}{w}\\rangle^2\n",
"+\\lambda(\\int wdV -1)\\right)=0\\\\\n",
"\\delta\\left(\n",
"\\int \\frac{f^2}{w^2} w dV -\n",
"\\left(\n",
"\\int \\frac{f}{w}w dV\n",
"\\right)^2+\\lambda(\\int wdV -1)\n",
"\\right)=0\n",
"\\end{eqnarray}\n",
"\\begin{eqnarray}\n",
" \\int \\left(\\frac{f^2}{w^2}-\\lambda\\right)dV=0\\qquad\\rightarrow\\qquad w\\propto|f|\n",
"\\end{eqnarray}\n",
"\n",
"If we know a good approximation for function $f$, we can use this\n",
"information to sample the same function $f$ to higher accuracy with\n",
"importance sampling. The solution can thus be improved\n",
"**iteratively**. This idea is implemented in **Vegas**\n",
"algorithm, which is pedagogically implemented in 509 course.\n",
"\n",
"\n",
"There is another set of algorithms to improve precision of Monte Carlo\n",
"sampling. The idea is to divide the volume into smaller\n",
"\\DarkGreen{subregions} and check in each subregion how rapidly is the\n",
"function $f$ varying in each subregion. The quantitative estimation\n",
"can be the variance of the function is each subregion $\\sqrt{\\langle\n",
"f^2\\rangle-\\langle f\\rangle^2}$. The idea is to increase the number of\n",
"points in those regions where variance is big. The algorithm is called\n",
"\\DarkGreen{Stratified Sampling} and is used in \\DarkGreen{Miser}\n",
"integration routine. The idea seems simple and powerful, but is not\n",
"very usefull for high-dimensional integration because the number of\n",
"subregions grows exponentially with the number of dimensions therefore\n",
"it is usefull only if we have some idea how to constract small number\n",
"of subregions where variance of $f$ is large. This last trick is also\n",
"used in Vegas algorithm which is probabbly the best algorithm\n",
"available at the moment.\n"
]
},
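{
"cell_type": "markdown",
"id": "f1c8e306",
"metadata": {},
"source": [
"To see the variance reduction from choosing $w\\propto|f|$ numerically, the following sketch integrates $\\int_0^1 4x^3 dx=1$ (an integrand made up for this example), once with uniform sampling and once with the nearby weight $w(x)=3x^2$, for which $W(x)=x^3$ and hence $x=W^{1/3}$:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2a9b5d47",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(2)\n",
"N = 100000\n",
"\n",
"# Plain MC for int_0^1 4x^3 dx = 1 with uniform points.\n",
"x = rng.random(N)\n",
"plain = 4.0 * x**3\n",
"print(\"plain  :\", np.mean(plain), \"+-\", np.std(plain) / np.sqrt(N))\n",
"\n",
"# Importance sampling with w(x) = 3x^2, which is close to |f|:\n",
"# W(x) = x^3, hence x = W^(1/3), and we average f/w = 4x/3.\n",
"x = rng.random(N) ** (1.0 / 3.0)\n",
"ratio = 4.0 * x / 3.0\n",
"print(\"w ~ |f|:\", np.mean(ratio), \"+-\", np.std(ratio) / np.sqrt(N))"
]
},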
{
"cell_type": "markdown",
"id": "0083adb6",
"metadata": {},
"source": [
"## Monte Carlo Importance Sampling\n",
"\n",
"\n",
"Let us introduce the concept of importance sampling method by\n",
"application to classical many-particle system (like Ising model or\n",
"classical gas).\n",
"\n",
"**The basic idea of Monte Carlo Simulation:** \n",
"The simulation is performed by random walk through *very large* configuration\n",
"space. The probability to make a move has to be such that the system\n",
"gets to thermal equilibrium (and remains in thermal equilibrium) at\n",
"certain temperature $T$ (is usually a parameter) after a lot of Monte\n",
"Carlo steps.\n",
"\n",
"**The basic idea of simulated annealing:**\n",
"Slowly decreasing the temperature of the system leads to the ground state\n",
"of the system. When using this type of cooling down by Monte Carlo,\n",
"one can find a global minimum of a general minimization problem. This\n",
"is called simulated annealing.\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "db28abb5",
"metadata": {},
"source": [
"### MC Importance Sampling by Markov chain\n",
"\n",
"If a configuration in phase space is denoted by $X$, the probability\n",
"for configuration according to Boltzman is\n",
"\\begin{equation}\n",
" \\rho(X)\\propto e^{-\\beta E(X)}\\qquad \\beta=\\frac{1}{T}\n",
"\\end{equation}\n",
"\n",
"\n",
"How to sample over the whole phase space for a general problem? How to\n",
"generate configurations?\n",
"\n",
"- **Brute force**: generate a truly random configuration X and accept it with probability $e^{-\\beta E(X)}$ where all $E>0$. Successive $X$ are **statistically independent**.\n",
"\n",
" **VERY INEFFICIENT**\n",
" \n",
"- **Markov chain**: Successive configurations $X_i$, $X_{i+1}$ are **NOT statistically independent** but are distributed according to the choosen disribution (such as Boltzman distribution).\n",
"\n",
" **Can be made VERY EFFICIENT**\n",
" \n",
" \n",
"\n",
"What is the difference between Markov chain and uncorrelated sequence?\n",
"\\begin{itemize}\n",
"- *Truly random* or *uncorrelated* sequence of configurations satisfies the identity\n",
"$$P(X_1,X_2,\\cdots,P_{X_N})=P_1(X_1)P_1(X_2)\\cdots P_1(X_{N})$$\n",
"- *Markov chain* satisfies the equation \n",
"$$P(X_1,X_2,\\cdots,P_{X_N})=P_1(X_1)T(X_1\\rightarrow X_2) T(X_2\\rightarrow X_3)\\cdots T(X_{N-1}\\rightarrow X_N)$$\n",
" where the transition probabilities \n",
" $T(X\\rightarrow X')$ are normalized\n",
" $$\\sum_{X'} T(X \\rightarrow X')=1$$\n",
"\n",
"\n",
"We want to generate Markov chain where distribution of states is\n",
"proportional to the choosen distribution, for example $$e^{-\\beta E(X)},$$ and the\n",
"distribution of states is independent of the position within the\n",
"chain and independent of the initial configuration.\n",
"\n",
"- **1) Connectedness**\n",
"\n",
"The necessary conditions for generating such Markov chain is that every configuration in the phase space should be accesible from any other configuration within finite number of steps (*connectedness* or *irreducibility*) - (Be careful to check this condition when choosing Monte Carlo step!)\n",
"\n",
"- **2) Detail balance**\n",
"\n",
"We need to find transition probability $T(X\\rightarrow X')$ which leads to a given **stationary** distribution $\\rho(X)$ (in this case $\\rho(X)\\propto e^{-\\beta E(X)}$).\n"
]
},
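{
"cell_type": "markdown",
"id": "6e4d2b19",
"metadata": {},
"source": [
"As a tiny numerical illustration of these definitions (the three-state chain below is made up for the example), one can write down a normalized transition matrix $T$, iterate to its stationary distribution, and verify the detailed balance property $\\rho(X)T(X\\rightarrow X')=\\rho(X')T(X'\\rightarrow X)$ derived next:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8f0a3c65",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Three-state Markov chain; each row is normalized so that\n",
"# sum_X' T(X -> X') = 1.  T is chosen symmetric here, so the\n",
"# stationary distribution is uniform.\n",
"T = np.array([[0.5, 0.3, 0.2],\n",
"              [0.3, 0.4, 0.3],\n",
"              [0.2, 0.3, 0.5]])\n",
"\n",
"# Iterating rho_{t+1} = rho_t T converges to the stationary\n",
"# distribution, independent of the starting configuration.\n",
"rho = np.array([1.0, 0.0, 0.0])\n",
"for _ in range(200):\n",
"    rho = rho @ T\n",
"print(\"stationary rho:\", rho)\n",
"\n",
"# Detailed balance: rho(X)T(X->X') == rho(X')T(X'->X), i.e. the\n",
"# matrix rho_i T_ij is symmetric.\n",
"F = rho[:, None] * T\n",
"print(\"detailed balance:\", np.allclose(F, F.T))"
]
},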
{
"cell_type": "markdown",
"id": "b9d0204e",
"metadata": {},
"source": [
"The probability for $X$ decreases, if system goes from $X$\n",
"to any other $X'$: $\\Delta \\rho(X)=-\\sum_{X'}\\rho(X)T(X\\rightarrow X')$ and increases\n",
"if $X$ configuration is visited from any other state $X'$:\n",
"$\\Delta \\rho(X) = \\sum_{X'}\\rho(X')T(X'\\rightarrow X)$. The\n",
"step (time) difference of the probability $X$ is therefore\n",
"\\begin{equation}\n",
" \\rho(X,t+1)-\\rho(X,t) = -\\sum_{X'}\\rho(X)T(X\\rightarrow X')+\\sum_{X'}\\rho(X')T(X'\\rightarrow X)\n",
"\\end{equation}\n",
"We look for stationary solution, i.e., $\\rho(X,t+1)-\\rho(X,t)=0$ and\n",
"therefore\n",
"\\begin{equation}\n",
" \\sum_{X'}\\rho(X)T(X\\rightarrow X')=\\sum_{X'}\\rho(X')T(X'\\rightarrow X)\n",
"\\end{equation}\n",
"General solution of this equation is not accesible, but a particular\n",
"solution is obvious\n",
"\\begin{equation}\n",
" \\rho(X)T(X\\rightarrow X')=\\rho(X')T(X'\\rightarrow X)\n",
"\\end{equation}\n",
"This solution is called **DETAIL BALANCE** solution.\n",
"\n",
"\n",
"To construct algorithm, we divide transition prob. \n",
"$T(X\\rightarrow X')=\\omega_{X\\rightarrow X'}A_{X\\rightarrow X'}$:\n",
"\n",
"\n",
"- *trial step probability* $\\omega_{X\\rightarrow X'}$.\n",
"\n",
" Many times it is symmetric, i.e., $\\omega_{X\\rightarrow X'}=\\omega_{X'\\rightarrow X}$. \n",
" (for example spin flip in ising: $\\omega_{XX'}$ is $1/L^2$ if $X$ and $X'$ differ for a single spin flip and zero otherwise ).\n",
"\n",
"- *acceptance probability* $A_{X\\rightarrow X'}$\n",
"\n",
" (for example accepting or rejecting new configuration with probability proportional to $min(1,\\exp(-\\beta(E(X')-E(X))))$).\n",
"\n",
"\n",
"Detail balance condition becomes\n",
"$$\\frac{A_{X\\rightarrow X'}}{A_{X'\\rightarrow X}}=\\frac{\\omega_{X' \\rightarrow X}\\;\\rho(X')}\n",
"{\\omega_{X\\rightarrow X'}\\;\\rho(X)}$$\n",
"\n",
"\n",
"Metropolis chooses\n",
"\\begin{eqnarray}\n",
"\\begin{array}{lc}\n",
"A_{X\\rightarrow X'}=1 & \\;\\mathrm{if}\\;\\; \\omega_{X'\\rightarrow X} \\rho(X')> \\omega_{X\\rightarrow X'} \\rho(X)\\\\\n",
"A_{X\\rightarrow X'}=\n",
"\\frac{\\omega_{X'\\rightarrow X}\\,\\rho(X')}{\\omega_{X\\rightarrow X'}\\,\\rho(X)} & \n",
"\\;\\mathrm{if}\\;\\;\\omega_{X'\\rightarrow X}\\rho(X')<\\omega_{X\\rightarrow X'}\\;\\rho(X).\n",
"\\end{array}\n",
"\\end{eqnarray}\n",
"Obviously, this acceptance probability satisfies detail balance\n",
"condition and therefore leads to desired Markov chain with stationary\n",
"probability for any configuration $X\\propto \\rho(X)$ for long times.\n",
"\n"
]
},
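{
"cell_type": "markdown",
"id": "4d8c1f72",
"metadata": {},
"source": [
"A compact sketch of the Metropolis recipe for a single continuous degree of freedom with $E(x)=x^2$, for which $\\langle x^2\\rangle=1/(2\\beta)$ exactly. The step size, $\\beta$, and warm-up length $n_0$ are illustrative choices; the binned error estimate anticipates the discussion below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b5e29a40",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(3)\n",
"beta, step = 1.0, 1.0            # beta = 1/T and trial step size\n",
"n, n0 = 200000, 10000            # total steps and warm-up n0\n",
"\n",
"def E(x):                        # energy of the \"configuration\" x\n",
"    return x * x\n",
"\n",
"x, samples = 0.0, np.empty(n)\n",
"for i in range(n):\n",
"    xp = x + step * rng.uniform(-1.0, 1.0)     # symmetric trial move\n",
"    # Metropolis acceptance A = min(1, rho(X')/rho(X)):\n",
"    if rng.random() < np.exp(-beta * (E(xp) - E(x))):\n",
"        x = xp\n",
"    samples[i] = x * x           # measure A(X) = x^2\n",
"\n",
"A = samples[n0:]\n",
"print(\"<x^2> =\", np.mean(A), \" exact:\", 1.0 / (2.0 * beta))\n",
"\n",
"# Naive error (wrong for correlated data) versus binned error:\n",
"naive = np.std(A) / np.sqrt(len(A))\n",
"bins = A[: len(A) // 100 * 100].reshape(-1, 100).mean(axis=1)\n",
"binned = np.std(bins) / np.sqrt(len(bins))\n",
"print(\"naive error:\", naive, \" binned error:\", binned)"
]
},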
{
"cell_type": "markdown",
"id": "db275f2c",
"metadata": {},
"source": [
"To summarize Metropolis algorithm\n",
"\n",
"- $T(X\\rightarrow X')=\\omega_{X\\rightarrow X'}A_{X\\rightarrow X'}$ (we have to consider trial step probability)\n",
"- $\\sum_{X'}\\omega_{X\\rightarrow X'}=1$ and $\\omega_{X\\rightarrow X'}>0$ for all $X, X'$ after finite number of steps (ergodicity)\n",
"- $A_{X\\rightarrow X'}=min(1,\\frac{\\rho(X')\\omega_{X'\\rightarrow\n",
" X}}{\\rho(X)\\omega_{X\\rightarrow X'}})$ (acceptance step probability for Metropolis)\n",
"\n",
"\n",
"How to accept a step with probability $A_{XX'}<1$? One can generate\n",
"a random number $r\\in[0,1]$ and accept the step if $rn_0}^n A_i$$ where $n_0$ steps\n",
"are used to \"warm-up\". This is because the configurations in the\n",
"Markov's chain are distributed according to $\\rho(X)$.\n",
"\n",
"We can also try compute the following quantity\n",
"$$\\sum_X \\rho(X)(A(X)-\\overline{A})^2 \\rightarrow^?\n",
"\\frac{1}{n-n_0}\\sum_{i>n_0}^n (A_i-\\overline{A})^2.$$\n",
"This is A WRONG ESTIMATION OF THE ERROR OF MC. The error is much\n",
"bigger than this estimation, because configurations $X$ are\n",
"correlated! Imagine the limit of large correlations when almost all\n",
"values $A_i$ are the same (very slowly changing configurations). We\n",
"would estimate that standard deviation is zero regardless of the\n",
"actual error!\n",
"\n",
"To compute standard deviation, we need to group meassurements within\n",
"the correlation time into bins and than estimate the standard\n",
"deviation of the bins:\n",
"\\begin{eqnarray}\n",
" B_l = \\frac{1}{N_0}\\sum_{i=N_l}^{i