\input fontmac \input mathmac \def\Seq{\op{\tensc Seq}} \def\Set{\op{\tensc Set}} \def\Cyc{\op{\tensc Cyc}} \def\Res{\op{\rm Res}} \def\A{{\cal A}} \def\B{{\cal B}} \def\C{{\cal C}} \def\D{{\cal D}} \def\E{{\cal E}} \def\F{{\cal F}} \def\I{{\cal I}} \def\M{{\cal M}} \def\P{{\cal P}} \def\T{{\cal T}} \def\Z{{\cal Z}} \def\bul{\bullet} \def\d{\,d} \def\CC{{\bf C}} \def\RR{{\bf R}} \def\eps{\epsilon} \def\eval{\big|} \def\beginref{\noindent} \def\endref{\smallskip} \input cyracc.def \font\tencyr=wncyr10 \font\tencyri=wncyi10 \def\cyr{\tencyr\cyracc} \def\cyri{\tencyri\cyracc} \maketitle{A crash course on generating functions}{by}{Marcel K. Goh}{17 July 2020} \bigskip {\obeylines\eightssi \hfill A generating function is a clothesline \hfill on which we hang up a sequence for display. \eightss \smallskip \hfill --- HERBERT S. WILF, {\eightssi generatingfunctionology} (1989)} \bigskip \section\the\sectcount. Introduction When counting discrete structures, we often run into sequences that are recursively defined. For example, suppose we have the recurrence $$R_n = \cases{1,&if $n=0$; \cr 2R_{n-1}+3,&if $n\geq 1$.}$$ We might want to know how the sequence $(R_n)$ grows as $n$ gets large. Now, this particular recurrence is easy enough to solve by the substitution method. But often, it is easier to work with the {\it generating function} of a sequence. For a sequence $(A_n)$, this is the formal power series $$A(z) = \sum_{n\geq 0} A_nz^n.$$ H. S. Wilf famously described a generating function as a clothesline, and that is exactly how we should treat the function $A(z)$ for now; in later sections, we will discuss how this series can be treated as an analytic object. Let us try to find the generating function for the sequence $(R_n)$ defined above.
Right off the bat, we have $$R(z) = \sum_{n\geq 0} R_nz^n = R_0 + \sum_{n\geq 1} R_nz^n.$$ Plugging in the recursive definition of $R_n$, we get \newcount\cgenfcteq \cgenfcteq=\eqcount \eqalign{ R(z) &= 1 + \sum_{n\geq 1} (2R_{n-1}+3)z^n \cr &= 1 + 2\sum_{n\geq 1} R_{n-1}z^n + 3\sum_{n\geq 1} z^n \cr &= 1 + 2z\sum_{n\geq 0} R_nz^n + 3\Big(\sum_{n\geq 0} z^n - 1\Big) \cr }\adveq The first summation on the right-hand side is $R(z)$ and the second one is well-known, so we derive the generating function \eqalign{ R(z) &= 1+ 2zR(z) + 3\bigg({1\over 1-z} - 1\bigg) \cr (1-2z)R(z) &= {1+2z\over 1-z} \cr R(z) &= {1+2z \over (1-z)(1-2z)}. \cr } Here we can use a partial fraction expansion to get \eqalign{ R(z) &= {1+2z\over (1-z)(1-2z)} = {4\over 1-2z} - {3\over 1-z} \cr &= 4\sum_{n\geq 0} (2z)^n - 3\sum_{n\geq 0} z^n \cr &= \sum_{n\geq 0}(2^{n+2} - 3)z^n, \cr }\adveq implying that $R_n = 2^{n+2} - 3$. This could have been found by easier methods, but this example demonstrates how the generating function of a sequence provides a bridge from the recurrence to a closed form. In later sections, we will see that if we only want asymptotic estimates, then we can actually stop at the generating function equation found in \refeq{\the\cgenfcteq}. \advance\sectcount by 1 \section\the\sectcount. The Symbolic Method and Ordinary Generating Functions Deriving generating functions is a rather drawn-out process, and it would not be very enjoyable to have to work from scratch every time. Luckily, generating functions are often made up of simpler ones, so we can use the {\it symbolic method} to arrive directly at a generating function equation. This is the subject of the monolithic textbook {\sl Analytic Combinatorics}, by P. Flajolet and R. Sedgewick. In these notes, we will often skip some details and proofs; anything we gloss over here can certainly be found somewhere in {\sl Analytic Combinatorics}. We will refer to this book from time to time and abbreviate it by ``AC''.
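Expansions like the one in the previous section are easy to check by machine. The following Python sketch (the helper `series_coeffs` is our own, not part of these notes) expands a rational generating function by formal long division and recovers $R_n = 2^{n+2}-3$ from $R(z) = (1+2z)/\big((1-z)(1-2z)\big) = (1+2z)/(1-3z+2z^2)$.

```python
from fractions import Fraction

def series_coeffs(num, den, n_terms):
    # First n_terms power-series coefficients of num(z)/den(z), by formal
    # long division.  num and den are coefficient lists, constant term
    # first; den[0] must be nonzero.
    work = [Fraction(c) for c in num] + [Fraction(0)] * (n_terms + len(den))
    out = []
    for n in range(n_terms):
        c = work[n] / den[0]
        out.append(c)
        for i, d in enumerate(den):
            work[n + i] -= c * d
    return out

# R(z) = (1+2z)/(1-3z+2z^2); the coefficients should be 2^(n+2) - 3.
coeffs = series_coeffs([1, 2], [1, -3, 2], 8)
assert coeffs == [2 ** (n + 2) - 3 for n in range(8)]
print([int(c) for c in coeffs])  # [1, 5, 13, 29, 61, 125, 253, 509]
```

The same routine expands any rational generating function that appears later in these notes.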
We define a {\it combinatorial class} $\A$ to be a countable set with a {\it size function} $|\cdot|:\A\to \N_0$ for which the number of elements of any given size is finite. Here are examples of combinatorial classes: \medskip \item{i)} The set of all strings of 0 and 1, with size given by string length. \smallskip \item{ii)} The set of all permutations. A permutation of size $n$ is a one-to-one correspondence from $[1\twodots n]$ to itself. \smallskip \item{iii)} The set of all binary trees, with size given by the number of nodes. \medskip \boldlabel Ordinary generating functions. The {\it ordinary generating function}, or OGF, of a combinatorial class $\A$ is defined to be the formal power series $$A(z) = \sum_{a\in \A} z^{|a|} = \sum_{n\geq 0} A_nz^n,$$ where $A_n$ is the number of elements in $\A$ of size $n$. It is easy to see that the two summations are equivalent characterisations of $A(z)$, and $A_n = [z^n]A(z)$ is the coefficient of $z^n$ in $A(z)$. We now define the {\it neutral class} $\E$ containing only one element, of size 0. Thus the generating function of $\E$ is 1. Likewise, we define the {\it atomic class} $\Z$, which contains a single element of size 1. The generating function of $\Z$ is $z$. When classes $\A$ and $\B$ are disjoint, we will denote their {\it union} by $$\A + \B = \A\cup\B.$$ It is easy to see that the generating function of $\A + \B$ is $A(z) + B(z)$. In a similar vein, let the {\it product} of two combinatorial classes be given by $$\A\times\B = \{(a,b) : a\in \A,\; b\in \B\};$$ the generating function of $\A\times\B$ is $A(z)\cdot B(z)$. We will denote the product of $n$ copies of $\A$ by $\A^n$ (this is $\E$ if $n=0$), which has the generating function $A(z)^n$. A more complex construction is the {\it sequence class}.
For a class $\A$, this is $$\Seq(\A) = \E + \A + (\A\times\A) + (\A\times\A\times\A) + \cdots.$$ It follows that the generating function of $\Seq(\A)$ is $$S(z) = 1 + A(z) + A(z)^2 + A(z)^3 + \cdots = {1\over 1-A(z)}.$$ More generally, if $\Omega$ is a subset of $\N_0$, we define $\Seq_\Omega(\A)$ as the class $\sum_{n\in \Omega} \A^n$, with generating function $\sum_{n\in\Omega} A(z)^n$. A list of further constructions can be found in Fig.~I.18 of AC. \medskip \boldlabel Integers and compositions. The set of all positive integers is a combinatorial class. To get its generating function, we note that the positive integers can be associated with nonempty sequences of unlabelled atoms: $$\{1,2,3,\ldots\} \cong \{\bul,\bul\;\bul,\bul\;\bul\;\bul,\ldots\}$$ This gives the construction ${\cal I} = \Seq_{\geq 1}(\Z)$, which immediately tells us that $I(z) = z/(1-z)$ and $I_n = 1$ for all $n\geq 1$. A {\it composition} of an integer $n$ is an expression of $n$ as a sum of a sequence of positive integers (so the order of the terms matters). For example, there are four compositions of the number 3: \newcount\threecomp \threecomp=\eqcount $$1 + 1 + 1\qquad1 + 2\qquad2 + 1\qquad3\adveq$$ Let $\C$ be the set of all integer compositions. We can describe a composition as a sequence of positive integers, so $$\C = \Seq(\I).$$ From this, we find that $$C(z) = {1\over 1 - z/(1-z)} = {1-z\over 1-2z}.\adveq$$ Reexpressing this as the series $$C(z) = {1\over 1-2z}-{z\over 1-2z} =\sum_{n\geq 0} (2z)^n - \sum_{n\geq 0} 2^nz^{n+1}= 1+\sum_{n\geq 1} 2^{n-1}z^n,$$ we find that $C_0 = 1$ and $C_n = 2^{n-1}$ for $n\geq 1$. Ignoring $C_0$, this makes sense because there is a one-to-one correspondence between compositions of $n$ and subsets of the $n-1$ possible bars between $n$ balls: the compositions in \refeq{\the\threecomp} correspond to $$\bul\mid\bul\mid\bul\qquad \bul\mid\bul\;\bul\qquad \bul\;\bul\mid\bul\qquad \bul\;\bul\;\bul.$$ \medskip \boldlabel Fibonacci numbers.
Let $\I^*$ denote the combinatorial class $\{1,2\}$, which clearly has the generating function $I^*(z) = z + z^2$. Let $\T$ be the set of compositions using only the parts 1 and 2 (for simplicity's sake, we also include the neutral object, i.e.\ $T_0=1$). We have \eqalign{ 1 &= 1, \cr 2 &= 1+1 = 2,\cr 3 &= 1 + 1 + 1 = 1 + 2 = 2+1, \cr 4 &= 1 + 1 + 1 + 1 = 1 + 1 + 2 = 1 + 2 + 1 = 2 + 1 + 1 = 2 + 2,\cr } and so on. After computing a few more terms, we notice the pattern $$(T_n)_{n\geq 0} = (1,1,2,3,5,8,13,\ldots).$$ This is almost the famous Fibonacci sequence; we have $T_n = F_{n+1}$. Hence $T(z)$ must be the generating function of the Fibonacci numbers, divided by $z$. We can easily derive this function with the symbolic method, since a composition using only 1s and 2s is simply a sequence of elements of $\I^*$. We conclude that $$\T = \Seq(\I^*)\qquad\hbox{and}\qquad T(z) = {1\over 1-I^*(z)} = {1\over 1-z-z^2},\adveq$$ and the generating function of the Fibonacci numbers is \newcount\fibeq \fibeq=\eqcount $$F(z) = {z\over 1-z-z^2}.\adveq$$ \bigskip {\obeylines\eightssi \hfill ``What's one and one and one and one and one \hfill and one and one and one and one and one?'' \hfill ``I don't know,'' said Alice. ``I lost count.'' \hfill ``She can't do Addition,'' the Red Queen interrupted. \eightss \smallskip \hfill --- LEWIS CARROLL, {\eightssi Through the Looking Glass} (1871)} \bigskip \boldlabel Catalan numbers. Let $\B$ be the class of all binary trees. This combinatorial class can be described recursively as follows: ``A binary tree is either empty, or else it consists of a root node adjoined to two more binary trees.'' An empty binary tree has size 0, so it is represented by $\E$; a node has size $1$, so we represent it by $\Z$. We arrive at the symbolic description $$\B = \E + \Z\times\B\times\B,$$ whence we derive a functional equation $$B(z) = 1 + zB(z)^2$$ that the generating function must satisfy.
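Although we have not yet solved this functional equation, it already determines every coefficient of $B(z)$: iterating $B \mapsto 1 + zB^2$ as a fixed point pins down one further coefficient per pass. A short Python sketch (the function name is ours, for illustration) makes this concrete.

```python
# Extract coefficients of B(z) directly from B(z) = 1 + z B(z)^2 by
# fixed-point iteration on truncated power series (coefficient lists).
def catalan_coeffs(n_terms):
    B = [0] * n_terms
    for _ in range(n_terms):
        sq = [0] * n_terms              # B(z)^2, truncated to n_terms
        for i in range(n_terms):
            for j in range(n_terms - i):
                sq[i + j] += B[i] * B[j]
        B = [1] + sq[:n_terms - 1]      # 1 + z * B(z)^2
    return B

print(catalan_coeffs(8))  # [1, 1, 2, 5, 14, 42, 132, 429]
```

Each pass through the loop fixes one more coefficient, so after `n_terms` passes the whole truncated series is correct.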
Treating $B(z)$ as a variable and employing the quadratic formula, we find that \newcount\catalaneq \catalaneq=\eqcount $$B(z) = {1 - \sqrt{1-4z}\over 2z}.\adveq$$ (Since we know that $B(0) = B_0 = 1$, the positive branch of the square root is invalid. On the other hand, l'Hospital's rule can be applied to the negative version to get a limit of 1.) From here, one could use Newton's generalisation of the binomial theorem to get that \newcount\catalanformula \catalanformula=\eqcount $$B_n = {1\over n+1}{2n\choose n},\adveq$$ and indeed, $B(z)$ is the generating function of the Catalan numbers. Stirling's approximation can then be applied to give asymptotics, but the techniques of later sections will allow us to characterise the asymptotic growth directly from the generating function \refeq{\the\catalaneq}. \medskip \boldlabel Making change. Suppose a cashier has exactly five loonies, three toonies, and four \$5 bills in her till. How many ways can she give \$23 in change? To solve this, we assign each coin/bill a size proportional to its value; so a loonie is described by $\Z$, a toonie by $\Z^2$, and a \$5 bill by $\Z^5$. Now we can describe the set of all possibilities with the combinatorial class $$\M = \Seq_{\leq 5}(\Z) \times \Seq_{\leq 3}(\Z^2) \times \Seq_{\leq 4}(\Z^5).$$ This immediately gives the generating function $$M(z) = (1 + z + z^2 + z^3 + z^4 + z^5)(1 + z^2 + z^4 + z^6)(1 + z^5 + z^{10} + z^{15} + z^{20}),$$ and the number of ways to make \$23 is exactly $[z^{23}]M(z)$. Note that if the cashier had an unlimited supply of coins and bills, we would have $$\M^* = \Seq(\Z) \times \Seq(\Z^2) \times \Seq(\Z^5),$$ and the number of ways to make $n$ dollars out of these denominations is $$[z^n]\bigg({1\over 1-z}\bigg)\bigg({1\over 1-z^2}\bigg)\bigg({1\over 1-z^5}\bigg).$$ The change-making problem was used as a primary example in G. P\'olya's 1956 paper on the applications of generating functions. \advance\sectcount by 1 \section\the\sectcount.
Labelled Structures and Exponential Generating Functions Ordinary generating functions deal with atoms that are unlabelled, meaning these atoms are indistinguishable from one another. When we want to count labelled objects, we will use an {\it exponential generating function}, or EGF. For a combinatorial class $\A$, this is the formal power series $$A(z) = \sum_{a\in \A} {z^{|a|} \over |a|!} = \sum_{n\geq 0} A_n {z^n\over n!}.$$ In particular, $A_n = [z^n/n!]A(z)$ for any EGF $A(z)$. The symbolic method applies to EGFs as well. As before, $\E$ denotes the neutral class, with a single object of size 0, and $\Z$ denotes the atomic class, with a single labelled object of size 1. If $\A$ and $\B$ are disjoint combinatorial classes with EGFs $A(z)$ and $B(z)$, then the EGF of their union $\A + \B$ is $A(z) + B(z)$. Instead of an ordinary Cartesian product, we have the notion of a {\it labelled product}, in which atoms are relabelled in all consistent ways. For example, a valid relabelling of the labelled triple $(1,3,2)$ is $(2,7,5)$, because the relative ordering of the atoms is unchanged. The set of all labelled pairs of elements from $\A$ and $\B$ is denoted $\A\star \B$ and its EGF is $A(z)\cdot B(z)$. The labelled sequence class $\Seq(\A)$ consists of sequences of elements of $\A$. It has the EGF $1/(1-A(z))$, just as with the OGF, but notice that the EGF counts a lot more things. The easiest way to see this is to consider the generating function of $\Seq(\Z)$, which is $S(z) = 1/(1-z)$. When considered as an OGF, there is only one element of each given size, since $[z^n]S(z) = 1$ for all $n$. But when taken as an EGF, we find that there are $n!$ objects of size $n$, because we have $[z^n/n!]S(z) = n!$ for all $n$. Of course, this makes sense since permuting the atoms in a sequence of length $n$ creates a new labelled object, but does not change the unlabelled one. There are two more constructions that will be useful to define on labelled classes.
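At the level of coefficients, the labelled product corresponds to binomial convolution: if $\C = \A\star\B$, then $C_n = \sum_k {n\choose k} A_kB_{n-k}$, since a labelled pair is formed by choosing which $k$ of the $n$ labels go to the first component. A quick Python check (our own example, not from the text) takes $\A = \B = \Seq(\Z)$, so that $A_n = B_n = n!$ and the product has EGF $1/(1-z)^2$, whose coefficients are $n!\,[z^n](1-z)^{-2} = (n+1)!$.

```python
from math import comb, factorial

# Labelled product = binomial convolution of EGF coefficients.
# Example: A = B = Seq(Z), so A_n = B_n = n!, and A(z)B(z) = 1/(1-z)^2,
# whose EGF coefficients are n! [z^n] (1-z)^{-2} = (n+1)!.
for n in range(10):
    conv = sum(comb(n, k) * factorial(k) * factorial(n - k)
               for k in range(n + 1))
    assert conv == factorial(n + 1)
print("binomial convolution check passed")
```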
First, we let $\Set_k(\A)$ denote the class of all sets of size $k$ with elements in $\A$. This is like a sequence of length $k$, except that the order is not important, meaning we can divide by a factor of $k!$. The generating function of $\Set_k(\A)$ is therefore $A(z)^k/k!$. Now we let $$\Set(\A) = \E + \A + \Set_2(\A) + \Set_3(\A) + \cdots$$ and derive the generating function for $\Set(\A)$: \newcount\setgenfct \setgenfct=\eqcount $$T(z) = \sum_{k\geq 0} {A(z)^k \over k!} = e^{A(z)}.\adveq$$ The last construction we will handle is the class of $k$-cycles with elements in $\A$, denoted $\Cyc_k(\A)$. There are $k$ ways to represent a cycle as a sequence of $k$ objects, so the generating function of $\Cyc_k(\A)$ is $A(z)^k/k$. Now we define the class of unrestricted cycles $$\Cyc(\A) = \Cyc_1(\A) + \Cyc_2(\A) + \Cyc_3(\A) + \cdots$$ and find that the generating function of $\Cyc(\A)$ is $$U(z) = \sum_{k\geq 1} {A(z)^k \over k} = \log {1\over 1-A(z)}.\adveq$$ There are many other interesting EGF constructions; see, for example, Fig.\ II.18 in AC. Note that a permutation is simply a set of cycles: $$\Seq(\Z) = \Set(\Cyc(\Z)).$$ Of course, the cycle decomposition of a permutation is well-known, but generating functions give us another way of seeing that this is true, since $${1\over 1-z} = \exp\bigg(\log{1\over 1-z}\bigg).$$ Now is as good a time as any to state, without proof, an important classic theorem that will be used in the next example. \parenproclaim Theorem L (Lagrange inversion theorem). If a generating function $g(z)$ satisfies the equation $z = f(g(z))$, with $f(0) = 0$ and $f'(0) \neq 0$, then for an arbitrary function $H(u)$, we have $$[z^n]H(g(z)) = {1\over n}[u^{n-1}]H'(u)\bigg({u\over f(u)}\bigg)^n.$$ In particular, for $H(u) = u$ we have $$[z^n]g(z) = {1\over n}[u^{n-1}]\bigg({u\over f(u)}\bigg)^n.$$ (This particular form of Lagrange's inversion theorem is often called the B\"urmann form.)\slug \medskip \boldlabel Cayley trees.
We are now equipped to count rooted non-plane labelled trees, sometimes called Cayley trees. Children of a given node are not ordered and nodes in these trees have unrestricted degree, so we have the specification $$\T = \Z \star \Set(\T).$$ This immediately gives the functional equation $$T(z) = ze^{T(z)}$$ and by Lagrange's inversion theorem, with $f(u) = u/e^u$ and $H(u) = u$, we obtain \newcount\cayleyexact \cayleyexact=\eqcount $$n![z^n]T(z) = n!\bigg({1\over n}[u^{n-1}]e^{un}\bigg) = n!\cdot{1\over n}\cdot{n^{n-1}\over(n-1)!} = n^{n-1}.\adveq$$ Note that there are $n^{n-2}$ {\it unrooted} Cayley trees, since there are $n$ choices for the root. This is called Cayley's formula and the standard combinatorial proof involves establishing a bijection with Pr\"ufer sequences; see Pr\"ufer (1918). \medskip \boldlabel Derangements. A derangement is a permutation in which no element is fixed. This can equally be characterised as a permutation with no cycle of length 1. We saw earlier that a permutation is simply a set of cycles, so disallowing cycles of length 1 gives the specification $$\D = \Set(\Cyc_{\geq 2}(\Z)).$$ We remove singleton cycles from the generating function by subtracting the generating function of $\Cyc_1(\Z)$ (which is simply $z$, because a singleton cycle is just an atom) from that of $\Cyc(\Z)$, so $$D(z) = \exp\bigg(\log {1\over 1-z}- z\bigg) = {e^{-z}\over 1-z}.\adveq$$ We will revisit this generating function in the context of a word problem once we have the tools to analyse its asymptotics. \advance\sectcount by 1 \section\the\sectcount. Counting Parameters with Bivariate Generating Functions Some situations may require us to do more than just count the number of objects of a certain size. In addition, we might like to know how many copies of a certain substructure are embedded in an object of a combinatorial class. We can use a {\it bivariate generating function} or BGF to accomplish this.
(BGFs are part of a larger class of {\it multivariate generating functions}, but we will keep things simple in this crash course. See Chapter III of AC for many more constructions.) Consider a two-dimensional array of numbers $(a_{n,k})$, where $n$ counts the number of objects of a certain size and $k$ counts some other combinatorial parameter. For example, we might have \medskip \item{i)} The number of $n$-bit strings of 0s and 1s that have exactly $k$ 0s. \smallskip \item{ii)} The number of $n$-letter permutations that have exactly $k$ cycles. \smallskip \item{iii)} The number of binary trees of size $n$ that have $k$ leaves. \medskip Symbolically, suppose we have a combinatorial class $\A$ with an extra scalar parameter $\chi : \A \to \N_0$. The ordinary and exponential BGFs are defined, respectively, to be the power series $$\sum_{a\in\A} z^{|a|} u^{\chi(a)} \qquad\hbox{and}\qquad \sum_{a\in\A} {z^{|a|}\over |a|!} u^{\chi(a)},$$ which can be rewritten as $$\sum_{n,k} a_{n,k} z^n u^k \qquad\hbox{and}\qquad \sum_{n,k} a_{n,k} {z^n\over n!} u^k,$$ where $a_{n,k}$ is the number of objects $a$ of size $n$ with $\chi(a) = k$. Note that substituting $u=1$ into a BGF returns the ordinary counting sequence for the class $\A$. \medskip \boldlabel Bitstrings. To warm up, we will consider a very simple example. Let $\B$ be the class of all binary strings, where for $b\in \B$, its size $n$ is given by its length and $\chi(b)$ is the number of zeroes in the string. There are two distinguishable atoms, $\Z_0$ and $\Z_1$, but we introduce a $u$ only when a zero is present, so we have the specification $$\B = \Seq(u\Z_0 + \Z_1)$$ with generating function $$B(z,u) = {1\over 1-(u+1)z}.\adveq$$ When we substitute $u=1$, we get $$B(z,1) = {1\over 1-2z},$$ which is the generating function of $2^n$, the number of binary strings with $n$ bits. The number of $n$-bit binary strings with $k$ zeroes can now be extracted.
It is equal to $$[u^k][z^n]B(z,u) = [u^k](u+1)^n = {n\choose k},$$ by an application of the binomial theorem. Hence the probability that an $n$-bit binary string has $k$ zeroes is $${[u^k][z^n] B(z,u) \over [z^n] B(z,1)} = {{n\choose k}\over 2^n}.\adveq$$ \medskip \boldlabel Probabilities and moments. The above example has an elementary result, but it illustrates how BGFs allow us to extract probabilities. For an object $a$ drawn uniformly at random from the size-$n$ objects of a class $\A$, we have $$\pr\{\chi(a) = k\mid |a| = n\} = {[u^k][z^n] A(z,u) \over [z^n] A(z,1)}.\adveq$$ We can also calculate higher moments with the BGF. Notice that if we take the partial derivative with respect to $u$, the $\chi$ values of each term pop down into the coefficient of each term: $${\partial \over\partial u} A(z,u) = {\partial \over\partial u}\sum_{a\in \A} z^{|a|} u^{\chi(a)} = \sum_{a\in \A} \chi(a) z^{|a|} u^{\chi(a) -1}.$$ Of course, the exponents of the variable $u$ have changed, but letting $A_u(z,u)$ denote the partial derivative with respect to $u$ and evaluating at $u=1$, we get the cumulated cost $$A_u(z,u)|_{u=1} = \sum_{n\geq 0} \Big(\sum_{a\in \A_n} \chi(a) \Big)z^n,\adveq$$ whence we can calculate the mean over $\A_n$, the class of all $a\in \A$ of size $n$, to be \newcount\firstmoment \firstmoment=\eqcount $$\ex_{\A_n}\{\chi\} = {[z^n] A_u(z,u)\eval_{u=1} \over [z^n]A(z,1)}.\adveq$$ These formulas are valid even when the generating functions are exponential, because the $n!$ factors cancel out. More generally, by taking repeated derivatives, we can calculate factorial moments of $\chi$. Let $A_u^r(z,u)$ denote the $r$th partial derivative of $A$ with respect to $u$.
We have $$\ex_{\A_n}\{\chi(\chi-1)\cdots(\chi-r+1)\} = {[z^n] A_u^r(z,u)\eval_{u=1} \over [z^n]A(z,1)}.\adveq$$ For example, by linearity of expectation, the second moment is easily seen to be \newcount\secondmoment \secondmoment=\eqcount \eqalign{ \ex_{\A_n}\{\chi^2\} &= \ex_{\A_n}\{\chi + \chi(\chi-1)\} \cr &= \ex_{\A_n}\{\chi\} + \ex_{\A_n}\{\chi(\chi-1)\} \cr &= {[z^n] A_u(z,u)\eval_{u=1} \over [z^n]A(z,1)} + {[z^n] A_u^2(z,u)\eval_{u=1} \over [z^n]A(z,1)}. \cr }\adveq This can be easily used to calculate the variance and standard deviation of $\chi$, from the usual formula $$\var_{\A_n}\{\chi\} = \sigma(\chi)^2 = \ex_{\A_n}\{\chi^2\} - \ex_{\A_n}\{\chi\}^2.\adveq$$ \medskip \boldlabel Cycles in a permutation. Recall that a permutation is a set of cycles. If we mark every cycle with a $u$, we get the specification $$\P = \Set(u\Cyc(\Z)),$$ which gives the bivariate exponential generating function $$P(z,u) = \exp\bigg(u\log{1\over 1-z}\bigg) = \exp\bigg(\log{1\over (1-z)^u}\bigg) = \bigg({1\over 1-z}\bigg)^u\adveq$$ for the number of $n$-letter permutations with exactly $k$ cycles. We can calculate the partial derivative $$P_u(z,u) = \bigg({1\over 1-z}\bigg)^u\log{1\over 1-z}.$$ Here the random variable $\chi$ counts the number of cycles, and we have $$\ex_{\P_n}\{\chi\} = {[z^n] P_u(z,u)\eval_{u=1} \over [z^n]P(z,1)} = 1 + {1\over 2} + \cdots + {1\over n} = H_n.\adveq$$ The harmonic numbers $H_n$ can be expanded as $\log n + \gamma + O(1/n)$, so we find that there are roughly $\log n$ cycles in a random permutation on $n$ letters. A slightly harder analysis of Taylor coefficients retrieves the second moment. From this, it can be shown that $\var_{\P_n}\{\chi\}\sim \log n$ as well. \medskip \boldlabel External nodes in a binary tree. Looking at the specification for binary trees, we can see that we are actually only counting internal nodes, since we have given $\E$, the external node, the generating function of 1 (meaning it has size 0).
If we mark it with a $u$, we arrive at $$\B = u\E + \Z\times\B\times\B\adveq$$ and the bivariate ordinary generating function satisfies $$B(z,u) = u + zB(z,u)^2.$$ Using the quadratic formula once again, we find that the generating function is $$B(z,u) = {1-\sqrt{1-4uz} \over 2z},\adveq$$ which checks out, since setting $u=1$ gives exactly \refeq{\the\catalaneq}. We have the partial derivatives $$B_u(z,u) = {1\over \sqrt{1-4uz}} \qquad\hbox{and}\qquad B_u^2(z,u) = {2z \over (1-4uz)^{3/2}}.\adveq$$ With the substitution $u=1$, these are both relatively well-known generating functions. The first is the generating function for the central binomial coefficients ${2n\choose n}$ and the second is $z {\partial\over \partial z} B_u(z,1)$, so it has coefficients $n{2n\choose n}$ (each exponent $n$ drops down into the coefficient when differentiating with respect to $z$, and multiplying by $z$ restores the correct power). We calculated in \refeq{\the\catalanformula} that the total number of binary trees with $n$ internal nodes is ${2n\choose n}/(n+1)$, so applying our method gives us $$\ex_{\B_n}\{\chi\} = {{2n\choose n}\over {1\over n+1}{2n\choose n}} = n+1\adveq$$ and $$\ex_{\B_n}\{\chi^2\} = {n{2n\choose n} \over {1\over n+1}{2n\choose n}} + n + 1 = n(n+1)+n+1 = (n+1)^2.$$ So the average number of external nodes in a tree with $n$ internal nodes is $n+1$ and since $$\var_{\B_n}\{\chi\} = \ex_{\B_n}\{\chi^2\} - \ex_{\B_n}\{\chi\}^2 = (n+1)^2 - (n+1)^2 = 0,\adveq$$ we discover that, in fact, every tree with $n$ internal nodes has exactly $n+1$ external nodes. (We have performed what is probably the most convoluted proof of this simple fact, but it is a good exercise in calculating moments using bivariate generating functions.) \bigskip {\obeylines\eightssi \hfill And if I should live to be \hfill The last leaf upon the tree \eightss \smallskip \hfill --- OLIVER WENDELL HOLMES, {\eightssi The Last Leaf} (1831)} \bigskip \advance\sectcount by 1 \section\llap{*}\the\sectcount.
Rudiments of Complex Analysis In this section, we will collect definitions and theorems that will be useful in studying the coefficient asymptotics of generating functions. We will state results without proof; the details can be found in any introductory textbook on complex analysis (e.g., Flanigan (1983)). Readers who are familiar with complex numbers may choose to skim or skip this section. The complex plane $\CC$ is the set of all numbers $x + iy$ where $x$ and $y$ are real numbers and $i$ is the square root of $-1$. If $z = x + iy$, then the {\it real part} of $z$ is $\Re z = x$ and the {\it imaginary part} of $z$ is $\Im z = y$. The complex conjugate of $z = x + iy$ is $\bar z = x-iy$, and it is easily verified, from the identity $i^2 = -1$, that $$z\cdot \bar z = x^2 + y^2.$$ Letting $|z| = \sqrt{x^2 + y^2}$ denote the {\it modulus} of $z$, we find that $${z\cdot \bar z\over |z|^2} = 1,$$ making it clear that the inverse $1/z$ of a nonzero complex number $z$ is equal to $\bar z / |z|^2$. For real $\theta$, we have the well-known formula $$e^{i\theta} = \cos \theta + i\sin \theta,\adveq$$ which gives another way to represent a complex number. The complex number $z = x+iy$ can also be written as $|z|e^{i\theta}$, where $\theta$ is the angle that $z$ makes with the positive real axis. Note that $e^{i\theta} = e^{i(\theta + 2\pi k)}$ for any integer $k$. \bigskip {\obeylines\eightssi \hfill The imaginary relish is so sweet \hfill That it enchants my sense. \eightss \smallskip \hfill --- WILLIAM SHAKESPEARE, {\eightssi Twelfth Night} (1602)} \bigskip \boldlabel Complex differentiation. A complex function $f$ defined on a domain $D$ is said to be {\it analytic} at a point $z_0\in D$ if for some open ball $\Omega$ centred at $z_0$, $f$ is representable as a power series $$f(z) = \sum_{n\geq 0} f_n(z-z_0)^n.$$ For example, $1/(1-z)$ is analytic at any $z_0$ in the ball of radius 1 about the origin.
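These identities are easy to spot-check numerically. The snippet below (a sanity check of our own, not part of the text) verifies $1/z = \bar z/|z|^2$ and Euler's formula at a few sample points.

```python
import cmath
import math

# Check 1/z = conj(z)/|z|^2 at a few sample points.
for z in [1 + 2j, -3 + 0.5j, 0.25 - 7j]:
    assert abs(1 / z - z.conjugate() / abs(z) ** 2) < 1e-12

# Check Euler's formula e^{i*theta} = cos(theta) + i*sin(theta).
for theta in [0.0, 0.7, math.pi / 2, 3.0]:
    lhs = cmath.exp(1j * theta)
    rhs = math.cos(theta) + 1j * math.sin(theta)
    assert abs(lhs - rhs) < 1e-12

print("complex identities check passed")
```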
A complex function $f$ defined on a domain $D$ is called {\it differentiable} or {\it holomorphic} at $z_0\in D$ if the limit $$f'(z_0) = \lim_{z\to z_0} {f(z) - f(z_0) \over z- z_0}$$ exists, in the sense that $f'(z_0)$ has the same value regardless of the manner in which $z$ approaches $z_0$ in the complex plane. The following theorem says that these two notions are equivalent. \parenproclaim Theorem E (Basic Equivalence Theorem). A complex function $f$ defined on a domain $D$ is analytic at a point $z_0\in D$ if and only if it is differentiable at $z_0$.\slug It is a useful fact that if a function is differentiable at $z_0$, then its derivative is also differentiable at $z_0$, meaning that it admits derivatives of all orders, and we have the expansion $$f(z) = \sum_{n\geq 0} {f^{(n)}(z_0) \over n!} (z-z_0)^n.\adveq$$ Of course, if a function is analytic in a disc around $z_0$, then we can differentiate it by taking derivatives of the terms in its power series. \medskip \boldlabel Complex integration. The Fundamental Theorem of Calculus states that for a real-valued function $f$ with antiderivative $F$, $$\int_a^b f(x)\d x = F(b) - F(a).\adveq$$ We can think of this as summing the values of $f$ over a path from $a$ to $b$. Since $\RR$ is one-dimensional, there is only one way to get from $a$ to $b$, but on the complex plane, we can integrate over any path $\gamma$ from $z_0\in \CC$ to $z_1\in \CC$. We have the following complex analogue of the Fundamental Theorem of Calculus. \parenproclaim Theorem P (Path integration). Let $f$ be a complex function, continuous in a domain $D$, and let $F$ be a complex function, analytic in $D$, that satisfies $F'(z) = f(z)$ for all $z\in D$. If $\gamma$ is any piecewise smooth path from $z_0\in D$ to $z_1\in D$, then $$\int_\gamma f(z)\d z = F(z_1) - F(z_0).\noskipslug$$ We can evaluate complex integrals over paths by parametrising the curve $\gamma$ as a function of a real variable $t$ and then performing a change of variables.
If $\gamma(t)$ is a parametrised curve and $t$ ranges from $a$ to $b$, then $$\int_\gamma f(z)\d z = \int_a^b f\big(\gamma(t)\big)\gamma'(t)\d t.\adveq$$ As an example, suppose $f(z) = z^2$ and we want to find the integral of $f$ from $z_0 = 0$ to $z_1 = 1+2i$. We can walk from 0 to $1+2i$ via a path $\gamma_1$ that goes from $0$ to $2i$ and then another path $\gamma_2$ that goes from $2i$ to $1+2i$. So $\gamma_1(t) = it$ for $0\leq t\leq 2$ and $\gamma_2(t) = 2i + t$ for $0\leq t\leq 1$ and we have the derivatives ${\gamma_1}'(t) = i$ and ${\gamma_2}'(t) = 1$. Putting this all together, the integral over the whole path is \eqalign{ \int_\gamma z^2 \d z &= \int_0^2 (it)^2\cdot i\d t + \int_0^1 (2i + t)^2\d t \cr &= \int_0^2 -it^2\d t + \int_0^1 (t^2 + 4it - 4) \d t \cr &= \bigg[-{it^3\over 3}\bigg]_{t=0}^2 + \bigg[{t^3\over 3} + 2it^2 - 4t\bigg]_{t=0}^1 \cr &= -{8i\over 3} + {1\over 3} + 2i - 4 \cr &= -{11\over 3} - {2i\over 3}. \cr }\adveq Of course, in this example, we could have just noticed that the antiderivative of $f(z) = z^2$ is $F(z) = z^3/3$ and applied Theorem P to get $$\int_\gamma f(z)\d z = F(1+2i) - F(0) = {(1+2i)^3\over 3} = -{11\over 3} - {2i\over 3}.\adveq$$ When $\gamma$ is a {\it closed Jordan curve}, i.e.\ it starts and ends at the same point and does not cross itself, we often write the integral sign with a circle around it. The rules of integration are the same; the circle is just there to remind us that our path goes around in a loop. Integrals over closed curves are important, because of the following theorem. \parenproclaim Theorem I (Cauchy Integral Theorem). Let $\gamma$ be a closed Jordan curve in a domain $D$. Suppose $f$ is a function that is analytic both on $\gamma$ and the interior of $\gamma$.
Then $$\oint_\gamma f(z)\d z = 0.\noskipslug$$ A corollary of Theorem I is that if $\gamma_1$ is a closed curve that is fully contained in the interior of another closed curve $\gamma_2$ and $f$ is analytic on both curves as well as the region between the two curves, then $$\oint_{\gamma_1} f(z)\d z = \oint_{\gamma_2} f(z)\d z.\adveq$$ \medskip \boldlabel Singularities. If functions tended to be analytic everywhere on $\CC$, then Theorem I would tell us that integration isn't very interesting. Luckily for us, most of the functions we study will fail to be analytic at certain points. Let $D$ be a domain and let $z_0\in D$ be an interior point. If $f$ is not analytic at $z_0$ but it is analytic on $D\setminus \{z_0\}$, then the point $z_0$ is said to be a {\it singularity} of $f$. There are three types of singularities: \medskip \item{i)} If $\lim_{z\to z_0} f(z) = w_0$ for some finite value $w_0$, then it is possible to continuously extend $f$ to an analytic function $g$ on all of $D$ (just set $g(z_0) = w_0$). In this case, $z_0$ is called a {\it removable singularity}. \smallskip \item{ii)} If $\lim_{z\to z_0} |f(z)| = \infty$, then $z_0$ is called a {\it pole} of $f$. \smallskip \item{iii)} If neither (i) nor (ii) hold, then $z_0$ is called an {\it essential singularity} of $f$. \medskip We will now show, with some examples, that each of these types of singularities actually pops up in the wild. First, we can create a function with a removable singularity by taking a {\it bona fide} analytic function $f$ and considering its restriction $\tilde f$ to the domain $D\setminus \{z_0\}$. Now $z_0$ is a removable singularity of the function $\tilde f$, since $\tilde f$ can be continuously extended to an analytic function on all of $D$. An easy example of a function with a pole is $f(z) = 1/z$. This function is analytic everywhere on $\CC$ except at $z_0 = 0$, and $\lim_{z\to z_0} |1/z| = \infty$. This tells us that $f(z)$ may have an interesting integral around the origin.
It does not matter which closed curve we take around 0, so the unit circle $C$ is a tempting choice. We can parametrise $C$ by $\gamma(\theta) = e^{i\theta}$ for $0\leq \theta< 2\pi$, which has the derivative $\gamma'(\theta) = ie^{i\theta}$. We now find that \newcount\twopii \twopii=\eqcount $$\oint_C {1\over z}\d z = \int_0^{2\pi} e^{-i\theta} ie^{i\theta} \d\theta= i\int_0^{2\pi} 1\d\theta = 2\pi i.\adveq$$ This integral crops up a lot in complex analysis; commit it to memory! A function with an essential singularity is the square root function $\sqrt z$. This is because, given $z = re^{i\theta}$, we can rewrite $z = re^{i(\theta + 2\pi k)}$ for any integer $k$, so every number $w = \sqrt r\,e^{i\theta/2 + \pi i k}$ with $k$ an integer satisfies the property that $w^2 = z$. To get a well-defined square root function, we set $$\sqrt z = \sqrt r\,e^{i\theta/2},\adveq$$ where $\theta\in (-\pi, \pi]$. This spells big trouble for complex numbers on the negative real axis. For example, let $z = -1 = e^{i\pi}$. Note that $\sqrt{e^{i(\pi - \eps)}} = e^{i(\pi/2 - \eps/2)}$ and $\sqrt{e^{-i(\pi -\eps)}} = e^{-i(\pi/2 - \eps/2)}$ (so both angles are in the range $(-\pi, \pi]$). As $\eps\to 0$, the inputs to the function both get arbitrarily close to $-1$ but the outputs have different limits ($i$ and $-i$). So the square root function has an infinite number of essential singularities (any negative real number); these are often called {\it algebraic singularities}. Another function with essential singularities is the natural logarithm $\log z$. Note that, given $z = re^{i\theta}$, the number $w = \log r + i\theta + 2k\pi i$ satisfies $e^w = z$ for any integer $k$. Once again, to get a well-defined function we must restrict $\theta$ to a single loop around the unit circle: $$\log z = \log r + i\theta,\adveq$$ where $\theta\in (-\pi, \pi]$. Now, just as with the square root, we have a discontinuity in the argument at every negative real number. These are the {\it logarithmic singularities}.
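Both of these phenomena are easy to check numerically. The sketch below (plain Python, not part of the original notes) approximates the contour integral of $1/z$ around the unit circle by a Riemann sum, and probes the principal square root just above and just below the negative real axis.

```python
import cmath
import math

# Approximate the contour integral of f(z) = 1/z around the unit circle
# by a Riemann sum: sum of f(gamma(theta_k)) * gamma'(theta_k) * dtheta.
N = 100000
dtheta = 2 * math.pi / N
integral = 0j
for k in range(N):
    theta = k * dtheta
    z = cmath.exp(1j * theta)         # gamma(theta)  = e^{i theta}
    dz = 1j * cmath.exp(1j * theta)   # gamma'(theta) = i e^{i theta}
    integral += (1 / z) * dz * dtheta
print(integral)  # very close to 2*pi*i

# The principal square root jumps across the negative real axis:
above = cmath.sqrt(complex(-1.0, 1e-12))   # just above the cut -> near  i
below = cmath.sqrt(complex(-1.0, -1e-12))  # just below the cut -> near -i
print(above, below)
```

The two square roots differ by a full sign even though the inputs are essentially the same point, which is exactly the discontinuity described above.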
\medskip \boldlabel Residues. If we allow a power series expansion around a point $z_0$ to have a finite number of negative powers, then it is called a {\it Laurent series}; this has the form $$f(z) = \sum_{n\geq -M} f_n (z-z_0)^n = {f_{-M} \over (z-z_0)^M} + \cdots + {f_{-1}\over (z-z_0)} + f_0 + f_1(z-z_0) + f_2(z-z_0)^2 + \cdots,$$ for some nonnegative integer $M$. Note that, integrating on a circle of radius 1 around $z_0$, we can make the substitution $z - z_0 = e^{i\theta}$ to get $dz = ie^{i\theta}\d\theta$ and, for any integer $m$, $$\oint (z-z_0)^m \d z = i\int_0^{2\pi} e^{i(m+1)\theta}\d\theta = \cases{2\pi i, & if m=-1; \cr 0, & otherwise.}\adveq$$ So the coefficient $f_{-1}$ of $(z-z_0)^{-1}$ is special; it is the only term that survives the integration of a Laurent series. We call this the {\it residue of $f$ at $z_0$} and denote it by $\Res(f,z_0)$; integrating along a closed curve around $z_0$, we have $$\Res(f,z_0) = {1\over 2\pi i} \oint f(z)\d z.\adveq$$ This is the crucial fact that allows the following theorem to extract coefficients of generating functions. \parenproclaim Theorem C (Cauchy coefficient formula). Let $f(z)$ be analytic at the origin. The coefficient of $z^n$ is given by the formula $$[z^n]f(z) = {1\over 2\pi i} \oint {f(z) \over z^{n+1}}\d z,\adveq$$ where the integral is taken on any closed curve about the origin on whose interior $f$ is analytic.\slug \advance\sectcount by 1 \section\the\sectcount. Generating Functions as Analytic Functions This section represents a turning point in our study of generating functions. Up until now, we have treated generating functions as purely formal objects. However, we can get a lot of information about the asymptotic behaviour of a power series' coefficients by treating it as a function of a complex argument. This section will give a general feel for how to do so, without getting too formal. A key theorem that allows us to work with generating functions as analytic objects is the following. \parenproclaim Theorem P (Pringsheim).
If $f(z)$ has a series expansion whose coefficients are all nonnegative and the radius of convergence of $f(z)$ is $R$, then the point $z = R$ is a singularity of $f(z)$.\slug Since counting objects usually induces a generating function with nonnegative coefficients, this theorem applies to most of the interesting cases we study. Note that we will only deal with the case where there is a {\it unique} singularity of smallest modulus, meaning that $R$ is the only singularity on the circle $|z| = R$. This assumption eliminates complications that involve periodicities in the coefficients. \medskip \boldlabel Rational functions. Treating a generating function as a complex function pays off immediately when the function is a ratio $f(z)/g(z)$ of two polynomials. Since a polynomial of degree $n$ always has $n$ complex roots (counted with multiplicity), we can factorise $g(z)$ into a product of linear factors and then use the partial fraction expansion to extract the coefficients. For example, if $f(z) = z$ and $g(z) = 18z^3 - 3z^2 - 4z + 1$, then \eqalign{ {f(z)\over g(z)} &= {z\over 18z^3 - 3z^2 - 4z + 1} = {z\over 18(z-1/3)^2(z+1/2)} \cr &= {1\over 45(z-1/3)^2} + {1\over 25(z-1/3)} - {1\over 25(z+1/2)}\cr &= {1\over 5(1-3z)^2} - {3\over 25(1-3z)} - {2\over 25(1+2z)}.\cr }\adveq This immediately tells us that \newcount\exactrat \exactrat=\eqcount \eqalign{ [z^n]{f(z)\over g(z)} &= {1\over 5}(n+1)3^{n} - {3\over 25} 3^n - {2\over 25}(-2)^n \cr &= {1\over 5}n3^n + {2\over 25} 3^n - {2\over 25}(-2)^n. \cr }\adveq Notice that only the first term is important, because $3^n$ is exponentially larger than $2^n$. We got an exponential factor of $\sim 3^n$ because the root of $g(z)$ that is closest to the origin is $1/3$. The fact that this root has multiplicity $2$ gives a factor of $\sim n$ in the first term. It is safe to ignore all the other terms because they are much smaller, as formalised in the following theorem. \parenproclaim Theorem G (Rational asymptotics).
Let $f(z)$ and $g(z)$ be polynomials that are relatively prime, with $g(0) \neq 0$. If $g(z)$ has a unique real root $\alpha$ of smallest modulus whose multiplicity is $r$, then $$[z^n]{f(z)\over g(z)} \sim C\alpha^{-n} n^{r-1} \qquad\hbox{where}\qquad C = {r(-1/\alpha)^r f(\alpha) \over g^{(r)}(\alpha)}.\noskipslug$$ This will become clear later on when we discuss a more general result. Applying this theorem to our most recent example, we have $\alpha = 1/3$, $r = 2$, and $g''(z) = 108z-6$. So we can derive $$[z^n]{z\over 18z^3 - 3z^2 - 4z + 1} \sim {2(-3)^2 (1/3) \over 108(1/3)-6} 3^n n^{2-1} = {1\over 5} n3^n,\adveq$$ which is exactly the first term in \refeq{\the\exactrat}. \medskip \boldlabel Fibonacci asymptotics. As an example, recall the generating function for the Fibonacci numbers, $z/(1-z-z^2)$. This function has poles at $$-\phi = {-1-\sqrt 5\over 2}\approx -1.618 \qquad\hbox{and}\qquad \bar\phi = {-1+\sqrt 5\over 2}\approx0.618,\adveq$$ where $\phi$ is the golden ratio. The root of smaller modulus is $\alpha = \bar\phi = 1/\phi$, and the multiplicity is $r = 1$. This gives $$F_n = [z^n]{z\over 1-z-z^2} \sim {(-\phi)(1/\phi)\over -1-2/\phi} \phi^n = {\phi \over \phi + 2}\phi^n = {1 + \sqrt 5 \over 5 + \sqrt 5}\phi^n = {\phi ^n\over \sqrt 5},\adveq$$ which says that the ratio between successive Fibonacci numbers tends towards $\phi$. \medskip \boldlabel Meromorphic functions. Rational functions are special cases of a broader class of functions called meromorphic functions. A complex function is {\it meromorphic} if it can be expressed as the ratio of two analytic functions. A function is meromorphic in a region if and only if it is analytic in that region except at a set of isolated poles. We will use the fact that any function that is meromorphic at the point $z_0$ admits a Laurent expansion $$h(z) = {h_{-M} \over (z-z_0)^M} + \cdots + {h_{-1}\over (z-z_0)} + h_0 + h_1(z-z_0) + h_2(z-z_0)^2 + \cdots,$$ where $M$ is a nonnegative integer.
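The Fibonacci estimate above is easy to test numerically: the sketch below (plain Python, purely illustrative) computes $F_n$ exactly from the recurrence and compares it with the one-term approximation $\phi^n/\sqrt 5$.

```python
import math

# Exact Fibonacci numbers from F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2}.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2  # golden ratio

# The error in the estimate phi^n / sqrt(5) decays like |phi-bar|^n,
# so the approximation is already excellent for modest n.
for n in (10, 20, 30):
    print(n, fib(n), phi**n / math.sqrt(5))
```

Since the neglected pole contributes only $O(\bar\phi^{\,n})$, the estimate is within $10^{-3}$ of the true value already at $n = 30$.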
We say that $h$ has a {\it pole of order $M$} at the point $z_0$. In this case, the function $(z-z_0)^M h(z)$ is analytic at $z_0$, because the Laurent series becomes a common-or-garden Taylor series. We have the following generalisation of Cauchy's coefficient formula for meromorphic functions. \parenproclaim Theorem R (Residue theorem). Let $h(z)$ be meromorphic in a region $D$ and let $\gamma$ be a closed loop in $D$. Suppose that $\gamma$ encloses $k$ poles of $h$; call them $\alpha_1,\ldots,\alpha_k$. Then $${1\over 2\pi i} \oint_\gamma h(z)\d z = \sum_{i=1}^k \Res(h, \alpha_i).\adveq$$ In particular, if $h$ is meromorphic in the closed ball $|z|\leq R$, analytic at 0, and the poles have orders $m_1,\ldots,m_k$, then $$[z^n]h(z) = {p_1(n)\over {\alpha_1}^n} + \cdots + {p_k(n)\over {\alpha_k}^n} + O(R^{-n}),\adveq$$ where each polynomial $p_i$ has degree $m_i - 1$.\slug As before, we can get a very good approximation by looking only at the pole $\alpha$ of smallest modulus. The residue around that pole contributes exponentially more to the coefficient asymptotics than the others, so we should analyse the Laurent expansion of $h(z)$ around $\alpha$. If $\alpha$ has order $M$, then this looks like $$h(z) = {h_{-M} \over (z-\alpha)^M} + \cdots + {h_{-1}\over (z-\alpha)} + h_0 + h_1(z-\alpha) + h_2(z-\alpha)^2 + \cdots,$$ and when $z$ is very close to $\alpha$, only the first term matters. In other words, $$h(z) \sim {h_{-M} \over (z-\alpha)^M} = {h_{-M}\over (-\alpha)^M (1-z/\alpha)^M} = (-1)^M{h_{-M}\over\alpha^M} \sum_{n\geq 0} {n+M-1\choose M-1}{z^n \over \alpha^n}.\adveq$$ The leading coefficient $h_{-M}$ can be computed as a limit, by repeated application of l'Hospital's Rule. If $h(z)$ is equal to the ratio $f(z)/g(z)$ of analytic functions, then $$h_{-M} = \lim_{z\to \alpha} {(z-\alpha)^M f(z) \over g(z)} = {M!\,f(\alpha) \over g^{(M)}(\alpha)}.\adveq$$ Putting this all together, and noting that ${n+M-1\choose M-1}\sim n^{M-1}/(M-1)!$, we have the following theorem. \parenproclaim Theorem M (Meromorphic asymptotics).
If $h(z) = f(z)/g(z)$ is a meromorphic function with a unique pole of smallest modulus $\alpha$ whose order is $M$, then $$[z^n]h(z) \sim (-1)^M {Mf(\alpha) \over \alpha^M g^{(M)}(\alpha)} \alpha^{-n} n^{M-1}.\noskipslug$$ We now see that Theorem G is a special case of this, since a multiplicity-$r$ root of a polynomial in the denominator induces a pole of order $r$. \medskip \boldlabel Asymptotics of derangements. Consider the following word problem: It is a Tuesday night and $n$ froshies leave their Residence to go to Caf\'e Campus. After attaining a sufficient state of inebriation, the $n$ froshies all make it safely back to Residence, but each of them has found a random room to sleep in; that is, they return to their rooms in a random permutation. What is the likelihood that no froshie returns to his or her own room? We can regard the $n$ froshies as $n$ labelled atoms, and since a permutation is a set of cycles, this problem asks for the proportion of all $n$-letter permutations that have no singleton cycle. These are exactly the derangements, which have the EGF $D(z) = e^{-z}/(1-z)$. We can use Theorem M to get an estimate of the number of $n$-letter derangements. We have $f(z) = e^{-z}$, $g(z) = 1-z$, and a pole of order $M=1$ at $\alpha=1$. The derivative of $g(z)$ is $-1$, so after a rumble and a tumble and a great deal of cancelling out 1s, we have $$n![z^n]D(z) \sim n! (-1)^1 {1\cdot e^{-1} \over 1^1 (-1)} 1^{-n} n^0 = {1\over e}n!.\adveq$$ There are $n!$ total permutations so the proportion of permutations that are derangements is $1/e$; we have a $1/e \approx 36.8\%$ chance that the froshies all end up in rooms not belonging to them. \advance\sectcount by 1 \section\the\sectcount. Singularity Analysis We have now dealt with generating functions that have poles. It remains to tackle functions that have essential singularities, such as those caused by square roots and logarithms. First we need a couple of facts about the gamma function.
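Before getting to the gamma function, the $1/e$ answer from the derangement problem is easy to sanity-check numerically. The sketch below (plain Python, not from the notes) computes the derangement numbers from the standard recurrence $D_n = (n-1)(D_{n-1}+D_{n-2})$ and watches $D_n/n!$ converge to $1/e$.

```python
import math

# Derangement numbers via the recurrence D_n = (n-1)(D_{n-1} + D_{n-2}),
# with initial values D_0 = 1 and D_1 = 0.
def derangements(n):
    if n == 0:
        return 1
    prev2, prev1 = 1, 0  # D_0, D_1
    for k in range(2, n + 1):
        prev2, prev1 = prev1, (k - 1) * (prev1 + prev2)
    return prev1

# The proportion D_n / n! converges to 1/e extremely fast
# (the error is on the order of 1/(n+1)!).
for n in (5, 10, 15):
    print(n, derangements(n) / math.factorial(n), 1 / math.e)
```

Already at $n=15$ the proportion agrees with $1/e$ to a dozen decimal places, which is consistent with the exponentially small contribution of the neglected terms.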
For positive integers $n$, we have $\Gamma(n) = (n-1)!$ and for all $z\in \CC$ with $\Re z > 0$, $\Gamma$ is defined by $$\Gamma(z) = \int_0^\infty x^{z-1} e^{-x}\d x.\adveq$$ The full gamma function is then defined to be the analytic continuation of this integral to a meromorphic function that is analytic everywhere except at the nonpositive integers. The nonpositive integers are poles of $\Gamma$. Just like with factorials, we have $\Gamma(z+1) = z\Gamma(z)$. The important identity $$\Gamma(1/2) = \int_0^\infty x^{-1/2} e^{-x}\d x = 2\int_0^\infty e^{-x^2}\d x = \sqrt\pi\adveq$$ should also be memorised. The {\it Hankel contour} $H$ is the path that starts at $\infty$, hovers $+\delta$ above the real line until reaching the unit circle, travels around the unit circle counterclockwise until reaching a point a distance of $\delta$ below the real line, then proceeds off to $\infty$ once again. The distance $\delta$ is called the {\it slit width}. We have the identity \newcount\hankel \hankel=\eqcount $${1\over \Gamma(z)} = {1\over 2\pi i} \int_H (-x)^{-z} e^{-x} \d x,\adveq$$ which will come in handy later. \bigskip {\obeylines\eightssi \hfill Singularity is almost invariably a clue. \eightss \smallskip \hfill --- SHERLOCK HOLMES, {\eightssi The Boscombe Valley Mystery} (1891)} \bigskip \boldlabel The standard function scale. We saw that the function $\sqrt z$ is singular when $z$ is negative and real. More generally, the function $(1-z)^{-\alpha}$ is singular when $z$ is real and $\geq 1$. This leads to the fundamental result that makes singularity analysis possible. \parenproclaim Theorem S (Standard function scale). If $\alpha\in \CC$ is not a nonpositive integer, then $$[z^n](1-z)^{-\alpha} \sim {n^{\alpha - 1}\over \Gamma(\alpha)}.\adveq$$ \proof (Sketch.) Let $H$ be a slight variant of the Hankel contour of slit width $1/n$ that circles around $z=1$ instead of the origin.
We start with the coefficient formula $$[z^n](1-z)^{-\alpha} = {1\over 2\pi i} \oint {(1-z)^{-\alpha} \over z^{n+1}}\d z,$$ where the path of integration goes in a circle around the origin. We deform this circle, blowing it up to an arbitrarily large radius while avoiding the singularities on the ray $[1,\infty)$; in the limit, the part of the path that matters is exactly the contour $H$. The change of variable $z = 1+t/n$ and $dz = dt/n$ gives $$[z^n](1-z)^{-\alpha} = {1\over 2\pi i} \int_H {(-t)^{-\alpha} \over n^{-\alpha} (1+t/n)^{n+1}}{1\over n}\d t = {n^{\alpha -1} \over 2\pi i} \int_H (-t)^{-\alpha}\bigg(1+{t\over n}\bigg)^{-n-1} \d t \sim {n^{\alpha - 1}\over \Gamma(\alpha)},\adveq$$ where the last $\sim$ comes from the approximation $$\bigg(1+{t\over n}\bigg)^{-n-1} \sim \bigg(1+{t\over n}\bigg)^{-n} \to e^{-t}$$ combined with $\refeq{\the\hankel}$.\slug A similar, though more involved, process proves the more general version of the standard function scale, which also handles logarithms. \parenproclaim Theorem S$'$ (Standard function scale, logarithms). If $\alpha\in \CC$ is not a nonpositive integer, then $$[z^n](1-z)^{-\alpha}\bigg({1\over z}\log {1\over 1-z}\bigg)^\beta \sim {n^{\alpha - 1}\over \Gamma(\alpha)}(\log n)^\beta.\noskipslug\adveq$$ If $f(z)$ is singular at $z=\rho$, then $g(z) = f(\rho z)$ is singular at $1$ and $$[z^n] f(z) = \rho^{-n} [z^n] f(\rho z) = \rho^{-n} [z^n] g(z).\adveq$$ When a generating function is not meromorphic, the approach to getting its coefficient asymptotics is as follows: \medskip \item{i)} First, we locate the dominant singularity $\rho$ (the one closest to the origin). Check that it is the only singularity on the circle of convergence. \smallskip \item{ii)} Next, we find functions $\sigma$ and $\tau$ on the standard scale such that, for $z$ near $\rho$, $$f(z) = \sigma(z/\rho) + O\big(\tau(z/\rho)\big).$$ We should have $\tau(z) = o\big(\sigma(z)\big)$ as $z\to 1$. \smallskip \item{iii)} Lastly, we apply Theorem S$'$ to $\sigma$ and $\tau$ to get an approximation for $f$.
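Theorem S itself can be tested numerically, since the exact coefficient is $[z^n](1-z)^{-\alpha} = {n+\alpha-1\choose n} = \Gamma(n+\alpha)/\big(\Gamma(\alpha)\,n!\big)$. The sketch below (plain Python, illustrative only) compares the exact coefficient with the estimate $n^{\alpha-1}/\Gamma(\alpha)$ for $\alpha = 1/2$, working with log-gamma so that large $n$ does not overflow.

```python
import math

# Ratio of the exact coefficient [z^n](1-z)^{-a} = Gamma(n+a)/(Gamma(a) n!)
# to the Theorem S estimate n^(a-1)/Gamma(a).  Computed via lgamma to
# avoid overflow; valid here since all the gamma arguments are positive.
def ratio(n, a):
    log_exact = math.lgamma(n + a) - math.lgamma(a) - math.lgamma(n + 1)
    log_estimate = (a - 1) * math.log(n) - math.lgamma(a)
    return math.exp(log_exact - log_estimate)

for n in (10, 100, 1000):
    print(n, ratio(n, 0.5))  # tends to 1 as n grows
```

The ratio approaches 1 at roughly the rate $1 + O(1/n)$, which matches the first-order correction hidden in the $\sim$ of Theorem S.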
\medskip \boldlabel Catalan asymptotics. We are now ready to derive an asymptotic estimate for the growth of the Catalan numbers. Earlier, we derived the OGF $$C(z) = {1 - \sqrt{1-4z} \over 2z},$$ which has a radius of convergence of $1/4$. By Pringsheim's theorem, we have an algebraic singularity at $\rho = 1/4$. Now we rewrite $$C(z) = {1\over 2z} - {(1-4z)^{1/2} \over 2z};$$ when $z$ is close to $1/4$ this is $$C(z) \sim 2 - 2(1-4z)^{1/2}.\adveq$$ Now we use the standard scale with $\alpha = -1/2$. From $z\Gamma(z) = \Gamma(z+1)$ we infer that \eqalign{ -{1\over 2}\Gamma(-1/2) &= \Gamma(1/2) \cr \Gamma(-1/2) &= -2\sqrt \pi, \cr }\adveq whence the famous approximation $$[z^n]C(z) \sim [z^n]\big(-2(1-4z)^{1/2}\big) \sim {-2n^{-3/2}4^n \over \Gamma(-1/2)} = {4^n \over \sqrt \pi n^{3/2}}\adveq$$ can be derived. \medskip \boldlabel Simple varieties of trees. Many classes of trees have a generating function that satisfies $f(z) = z\phi\big(f(z)\big)$. The following theorem handles the singularity analysis of all such classes, called {\it simple varieties of trees}, at once. \parenproclaim Theorem V (Simple varieties of trees). Suppose $f(z) = z\phi\big(f(z)\big)$ and furthermore, the following conditions hold: \medskip \item{i)} The function $\phi(u)$ is nonlinear, satisfies $\phi(0)\neq 0$, is analytic at 0 and has nonnegative coefficients. \smallskip \item{ii)} Within the open ball of convergence $|z| < R$ of $\phi$, there exists a positive real solution $\tau$ to the equation $$\phi(\tau) = \tau\phi'(\tau).\adveq$$ \medskip \noindent Then $\rho = \tau/\phi(\tau)$ is the radius of convergence of $f(z)$ and $$[z^n]f(z) \sim \sqrt{{\phi(\tau) \over 2\pi \phi''(\tau)}} {\rho^{-n} \over n^{3/2}}.\adveq$$ \proof The proof is beyond the scope of the notes, but the general gist is to first show that $$f(z) \sim \tau - \sqrt{2\phi(\tau) / \phi''(\tau)} \sqrt{1-z\phi'(\tau)}\adveq$$ and then use Theorem S.\slug \medskip \boldlabel Cayley trees and Stirling's formula.
We will apply Theorem V to the family of Cayley trees, whose EGF satisfies $$T(z) = ze^{T(z)}.$$ So $\phi(u) = \phi'(u) = \phi''(u) = e^u$, $\tau = 1$, $\rho = 1/e$ and $$n![z^n]T(z) \sim n!\sqrt{{e \over 2\pi e}} {e^n \over n^{3/2}} = {n!e^n\over \sqrt{2\pi} n^{3/2}}.\adveq$$ We saw earlier that $n![z^n]T(z)$ is exactly $n^{n-1}$. This can be substituted to obtain \eqalign{ {n! e^n \over \sqrt{2\pi} n^{3/2}} &\sim n^{n-1} \cr n! &\sim {\sqrt{2\pi} n^{3/2} \cdot n^{n-1} \over e^n} = \sqrt{2\pi n}\bigg({n\over e}\bigg)^n,\cr }\adveq and we have, as a byproduct of counting trees, derived Stirling's formula! \section References \frenchspacing \parskip=0pt \hyphenpenalty=-1000 \pretolerance=-1 \tolerance=1000 \doublehyphendemerits=-100000 \finalhyphendemerits=-100000 \def\beginref{ \par\begingroup\nobreak\smallskip\parindent=0pt\kern1pt\nobreak \everypar{\strut} } \def\endref{\kern1pt\endgroup\smallbreak\noindent} \beginref Philippe Flajolet and Robert Sedgewick, {\sl Analytic Combinatorics} (New York: Cambridge University Press, 2009). \endref \beginref Francis J.~Flanigan, {\sl Complex Variables: Harmonic and Analytic Functions} (New York: Dover, 1983). \endref \beginref George P\'olya, ``On picture-writing,'' {\sl American Mathematical Monthly}\/ {\bf 63} (1956), 689--697. \endref \beginref Heinz Pr\"ufer, ``Neuer Beweis eines Satzes \"uber Permutationen,'' {\sl Archiv der Mathematik und Physik}\/ {\bf 27} (1918), 142--144. \endref \beginref Herbert S.~Wilf, {\sl generatingfunctionology} (Boston: Academic Press, 1994). \endref \bye