#+TITLE: \zallman Advanced Modern Algebra #+AUTHOR: Joseph J. Rotman #+OPTIONS: tex:imagemagick #+OPTIONS: toc:2 #+LATEX_HEADER: \input{preamble.tex} #+LATEX_HEADER: \setcounter{secnumdepth}{2} #+LATEX_HEADER: \setcounter{tocdepth}{2} #+EXPORT_FILE_NAME: ../latex/AdvancedModernAlgebra/AdvancedModernAlgebra.tex * Things Past ** Some Number Theory *Least Integer Axiom* (*Well-ordering Principle*). There is a smallest integer in every nonempty subset $C$ of $\N$ ** Roots of Unity #+ATTR_LATEX: :options [Polar Decomposition] #+BEGIN_proposition Every complex number $z$ has a factorization \begin{equation*} z=r(\cos\theta+i\sin\theta) \end{equation*} where $r=\abs{z}\ge0$ and $0\le\theta<2\pi$ #+END_proposition #+ATTR_LATEX: :options [Addition Theorem] #+BEGIN_proposition If $z=\cos\theta+i\sin\theta$ and $w=\cos\psi+i\sin\psi$, then \begin{equation*} zw=\cos(\theta+\psi)+i\sin(\theta+\psi) \end{equation*} #+END_proposition #+ATTR_LATEX: :options [De Moivre] #+BEGIN_theorem $\forall x\in\R,n\in\N$ \begin{equation*} \cos(nx)+i\sin(nx)=(\cos x+i\sin x)^n \end{equation*} #+END_theorem #+ATTR_LATEX: :options [Euler] #+BEGIN_theorem $e^{ix}=\cos x+i\sin x$ #+END_theorem #+ATTR_LATEX: :options [] #+BEGIN_definition If $n\ge 1$ is an integer, an *nth root of unity* is a complex number $\xi$ with $\xi^n=1$ #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_corollary Every nth root of unity is equal to \begin{equation*} e^{2\pi ik/n}=\cos(\frac{2\pi k}{n})+i\sin(\frac{2\pi k}{n}) \end{equation*} for $k=0,1,\dots,n-1$ #+END_corollary \begin{equation*} x^n-1=\displaystyle\prod_{\xi^n=1}(x-\xi) \end{equation*} If $\xi$ is an nth root of unity and if $n$ is the smallest positive integer with $\xi^n=1$, then $\xi$ is a *primitive* *\(n\)th root of unity* #+ATTR_LATEX: :options [] #+BEGIN_definition If $d\in\N^+$, then the \(d\)th *cyclotomic polynomial* is \begin{equation*} \Phi_d(x)=\displaystyle\prod(x-\xi) \end{equation*} where $\xi$ ranges over all the /primitive dth/ roots of unity #+END_definition #+ATTR_LATEX: 
:options [] #+BEGIN_proposition For every integer $n\ge 1$ \begin{equation*} x^n-1=\displaystyle\prod_{d|n}\Phi_d(x) \end{equation*} #+END_proposition #+ATTR_LATEX: :options [] #+BEGIN_definition Define the *Euler \phi-function* as the degree of the \(n\)th cyclotomic polynomial \begin{equation*} \phi(n)=\deg(\Phi_n(x)) \end{equation*} #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_proposition If $n\ge1$ is an integer, then $\phi(n)$ is the number of integers $k$ with $1\le k\le n$ and $(k,n)=1$ #+END_proposition #+BEGIN_proof It suffices to prove that $e^{2\pi ik/n}$ is a primitive nth root of unity if and only if $k$ and $n$ are relatively prime #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary For every integer $n\ge 1$, we have \begin{equation*} n=\displaystyle\sum_{d|n}\phi(d) \end{equation*} #+END_corollary ** Some Set Theory #+ATTR_LATEX: :options [] #+BEGIN_proposition label:prop1.47 1. If \(f:X\to Y\) and \(g:Y\to X\) are functions s.t. \(g\circ f=1_X\), then $f$ is injective and $g$ is surjective 2. A function $f:X\to Y$ has an inverse \(g:Y\to X\) if and only if $f$ is a bijection #+END_proposition * Group \rom{1} ** Permutations #+ATTR_LATEX: :options [] #+BEGIN_definition A *permutation* of a set $X$ is a bijection from $X$ to itself. #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_definition The family of all the permutations of a set $X$, denoted by $S_X$, is called the *symmetric group* on $X$. When $X=\lb 1,2,\dots,n\rb$, $S_X$ is usually denoted by $S_n$ and is called the *symmetric group on* $n$ *letters* #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_definition Let $i_1,i_2,\dots,i_r$ be distinct integers in $\lb 1,2,\dots,n\rb$. If $\alpha\in S_n$ fixes the other integers and if \begin{equation*} \alpha(i_1)=i_2,\alpha(i_2)=i_3,\dots,\alpha(i_{r-1})=i_r,\alpha(i_r)=i_1 \end{equation*} then \alpha is called an *r-cycle*. 
\alpha is a cycle of *length* $r$ and is denoted by \begin{equation*} \alpha=(i_1\; i_2\;\dots\; i_r) \end{equation*} #+END_definition 2-cycles are also called *transpositions*. #+ATTR_LATEX: :options [] #+BEGIN_definition Two permutations $\alpha,\beta\in S_n$ are *disjoint* if every $i$ moved by one is fixed by the other. #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_lemma Disjoint permutations $\alpha,\beta\in S_n$ commute #+END_lemma #+ATTR_LATEX: :options [] #+BEGIN_proposition Every permutation $\alpha\in S_n$ is either a cycle or a product of disjoint cycles. #+END_proposition #+BEGIN_proof Induction on the number $k$ of points moved by $\alpha$ #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition A *complete factorization* of a permutation $\alpha$ is a factorization of $\alpha$ into disjoint cycles that contains exactly one 1-cycle $(i)$ for every $i$ fixed by $\alpha$ #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_theorem Let $\alpha\in S_n$ and let $\alpha=\beta_1\dots\beta_t$ be a complete factorization into disjoint cycles. This factorization is unique except for the order in which the cycles occur #+END_theorem #+BEGIN_proof For all $i$, if $\beta_t(i)\neq i$, then $\beta_t^k(i)\neq\beta_t^{k-1}(i)$ for any $k\ge 1$ #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_lemma If $\gamma,\alpha\in S_n$, then $\alpha\gamma\alpha^{-1}$ has the same cycle structure as $\gamma$. 
In more detail, if the complete factorization of $\gamma$ is \begin{equation*} \gamma=\beta_1\beta_2\dots(i_1\; i_2\;\dots)\dots\beta_t \end{equation*} then $\alpha\gamma\alpha^{-1}$ is the permutation obtained from $\gamma$ by applying $\alpha$ to the symbols in the cycles of $\gamma$ #+END_lemma #+ATTR_LATEX: :options [] #+BEGIN_examplle label:example2.8 Suppose \begin{gather*} \beta=(1\;2\;3)(4)(5)\\ \gamma=(5\;2\;4)(1)(3) \end{gather*} then we can easily find $\alpha$ with $\gamma=\alpha\beta\alpha^{-1}$ \begin{equation*} \alpha= \begin{pmatrix} 1&2&3&4&5\\ 5&2&4&1&3 \end{pmatrix} \end{equation*} and so \(\alpha=(1\;5\;3\;4)\). Now $\alpha\in S_5$ and $\gamma=(\alpha 1\;\alpha 2\;\alpha 3)$ #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_theorem Permutations $\gamma$ and $\sigma$ in $S_n$ have the same cycle structure if and only if there exists $\alpha\in S_n$ with $\sigma=\alpha\gamma\alpha^{-1}$ #+END_theorem #+ATTR_LATEX: :options [] #+BEGIN_proposition If $n\ge 2$ then every $\alpha\in S_n$ is a product of transpositions #+END_proposition #+BEGIN_proof $(1\;2\;\dots\; r)=(1\; r)(1\; r-1)\dots(1\; 2)$ #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_examplle The *15-puzzle* has a *starting position* that is a $4\times 4$ array of the numbers between 1 and 15 and a symbol #, which we interpret as "blank". For example, consider the following starting position #+ATTR_LATEX: :align |c|c|c|c| |----+----+----+----| | 3 | 15 | 4 | 8 | |----+----+----+----| | 10 | 11 | 1 | 9 | |----+----+----+----| | 2 | 5 | 13 | 12 | |----+----+----+----| | 6 | 7 | 14 | # | |----+----+----+----| A *simple move* interchanges the blank with a symbol adjacent to it. We win the game if after a sequence of simple moves, the starting position is transformed into the standard array $1,2,\dots,15,\#$. To analyze this game, note that the given array is really a permutation $\alpha\in S_{16}$. 
For example, the given starting position is #+ATTR_LATEX: :mode math :environment pmatrix | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | | 3 | 15 | 4 | 8 | 10 | 11 | 1 | 9 | 2 | 5 | 13 | 12 | 6 | 7 | 14 | 16 | To win the game, we need special transpositions $\tau_1,\dots,\tau_m$ so that \begin{equation*} \tau_m\dots\tau_1\alpha=(1) \end{equation*} #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_definition A permutation $\alpha\in S_n$ is *even* if it can be factored into a product of an even number of transpositions. Otherwise it is *odd* #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_definition If $\alpha\in S_n$ and $\alpha=\beta_1\dots\beta_t$ is a complete factorization, then *signum* $\alpha$ is defined by \begin{equation*} \sgn(\alpha)=(-1)^{n-t} \end{equation*} #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_theorem For all $\alpha,\beta\in S_n$ \begin{equation*} \sgn(\alpha\beta)=\sgn(\alpha)\sgn(\beta) \end{equation*} #+END_theorem #+ATTR_LATEX: :options [] #+BEGIN_theorem 1. Let $\alpha\in S_n$; if $\sgn(\alpha)=1$, then $\alpha$ is even; otherwise it is odd 2. A permutation $\alpha$ is odd if and only if it's a product of an odd number of transpositions #+END_theorem #+ATTR_LATEX: :options [] #+BEGIN_corollary Let $\alpha,\beta\in S_n$. If $\alpha$ and $\beta$ have the same parity, then $\alpha\beta$ is even, while if $\alpha$ and $\beta$ have distinct parity, $\alpha\beta$ is odd #+END_corollary #+ATTR_LATEX: :options [] #+BEGIN_examplle An analysis of the 15-puzzle shows that if $\alpha\in S_{16}$ is the starting position, then the game can be won if and only if \alpha is an even permutation that fixes 16. The blank 16 starts in position 16. Each simple move takes 16 up, down, left or right. Thus the total number $m$ of moves is $u+d+l+r$. If 16 is to return home, each one of these must be undone: $u=d$ and $l=r$. Thus the total number of moves is even: $m=2u+2l$. Hence $\alpha=\tau_1\dots\tau_m$ and so $\alpha$ is an even permutation. 
In the example above, \begin{equation*} \alpha=(1\;3\;4\;8\;9\;2\;15\;14\;7)(5\;10)(6\;11\;13)(12)(16) \end{equation*} Now $\sgn(\alpha)=(-1)^{16-5}=-1$, so $\alpha$ is odd and this game cannot be won. #+END_examplle ** Groups #+ATTR_LATEX: :options [] #+BEGIN_definition A *binary operation* on a set $G$ is a function \begin{equation*} *:G\times G\to G \end{equation*} #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_definition A *group* is a set $G$ equipped with a binary operation * s.t. 1. the *associative law* holds 2. there is an *identity* $e\in G$ with $e*x=x=x*e$ for all $x\in G$ 3. every $x\in G$ has an *inverse*: there is $x'\in G$ with $x*x'=e=x'*x$ #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_definition A group $G$ is called *abelian* if it satisfies the *commutative law* #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_lemma Let $G$ be a group 1. The *cancellation laws* hold: if either $x*a=x*b$ or $a*x=b*x$, then $a=b$ 2. $e$ is unique 3. Each $x\in G$ has a unique inverse 4. $(x^{-1})^{-1}=x$ #+END_lemma #+ATTR_LATEX: :options [] #+BEGIN_definition An expression $a_1a_2\dots a_n$ *needs no parentheses* if all the ultimate products it yields are equal #+END_definition #+ATTR_LATEX: :options [Generalized Associativity] #+BEGIN_theorem If $G$ is a group and $a_1,a_2,\dots,a_n\in G$ then the expression $a_1a_2\dots a_n$ needs no parentheses #+END_theorem #+ATTR_LATEX: :options [] #+BEGIN_definition Let $G$ be a group and let $a\in G$. If $a^k=1$ for some $k\ge 1$, then the smallest such exponent is called the *order* of $a$; if no such power exists, then one says that $a$ has *infinite order* #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_proposition If $G$ is a finite group, then every $x\in G$ has finite order #+END_proposition #+ATTR_LATEX: :options [] #+BEGIN_definition A *motion* is a distance preserving bijection $\varphi:\R^2\to\R^2$. If \pi is a polygon in the plane, then its *symmetry group* $\Sigma(\pi)$ consists of all the motions $\varphi$ for which $\varphi(\pi)=\pi$. 
The elements of $\Sigma(\pi)$ are called the *symmetries* of \pi #+END_definition Let $\pi_4$ be a square. Then the group $\Sigma(\pi_4)$ is called the *dihedral group* with 8 elements, denoted by $D_8$ #+ATTR_LATEX: :options [] #+BEGIN_definition If $\pi_n$ is a regular polygon with $n$ vertices $v_1,\dots,v_n$ and center $O$, then the symmetry group $\Sigma(\pi_n)$ is called the *dihedral* *group* with $2n$ elements, and it's denoted by $D_{2n}$ #+END_definition #+BEGIN_exercise label:ex2.26 If $G$ is a group in which \(x^2=1\) for every \(x\in G\), prove that $G$ must be abelian #+END_exercise #+BEGIN_exercise label:ex2.27 If $G$ is a group with an even number of elements, prove that the number of elements in $G$ of order 2 is odd. In particular, $G$ must contain an element of order 2. #+END_exercise #+BEGIN_proof Pair each $g\in G$ with $g^{-1}$: the elements with $g\neq g^{-1}$ come in pairs, so the number of $g$ with $g=g^{-1}$ is even. Since 1 is the only element of order 1, the number of elements of order 2 is odd. #+END_proof ** Lagrange's Theorem #+ATTR_LATEX: :options [] #+BEGIN_definition A subset $H$ of a group $G$ is a *subgroup* if 1. $1\in H$ 2. if $x,y\in H$, then $xy\in H$ 3. if $x\in H$, then $x^{-1}\in H$ #+END_definition If $H$ is a subgroup of $G$, we write $H\le G$. If $H$ is a proper subgroup, then we write $H<G$ * Commutative Rings \rom{1} ** Polynomials #+ATTR_LATEX: :options [] #+BEGIN_definition If $R$ is a commutative ring, then a *polynomial* over $R$ is a sequence $\sigma=(s_0,s_1,\dots,s_i,\dots)$ with $s_i\in R$ for all $i$, for which there is an integer $m$ with $s_i=0$ for all $i>m$; that is \begin{equation*} \sigma=(s_0,\dots,s_m,0,\dots) \end{equation*} A polynomial has only finitely many nonzero coefficients. The *zero polynomial*, denoted by $\sigma=0$, is the sequence $(0,0,0,\dots)$ #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_definition If $\sigma=(s_0,\dots,s_n,0,\dots)\neq0$ is a polynomial with $s_n\neq0$, we call $s_n$ the *leading coefficient* of $\sigma$, we call $n$ the *degree* of \sigma, and we denote $n$ by $\deg(\sigma)$ #+END_definition *Notation*. 
If $R$ is a commutative ring, then the set of all polynomials with coefficients in $R$ is denoted by $R[x]$ #+ATTR_LATEX: :options [] #+BEGIN_proposition label:prop3.14 If $R$ is a commutative ring, then $R[x]$ is a commutative ring that contains $R$ as a subring #+END_proposition #+BEGIN_proof \(\sigma=(s_0,s_1,\dots),\tau=(t_0,t_1,\dots)\) \begin{align*} &\sigma+\tau=(s_0+t_0,s_1+t_1,\dots)\\ &\sigma\tau=(c_0,c_1,\dots) \end{align*} where \(c_k=\sum_{i+j=k}s_it_j=\sum_{i=0}^ks_it_{k-i}\). #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_lemma label:lemma3.15 Let $R$ be a commutative ring and let \(\sigma,\tau\in R[x]\) be nonzero polynomials. 1. Either \(\sigma\tau=0\) or \(\deg(\sigma\tau)\le\deg(\sigma)+\deg(\tau)\) 2. If $R$ is a domain, then $\sigma\tau\neq0$ and \begin{equation*} \deg(\sigma\tau)=\deg(\sigma)+\deg(\tau) \end{equation*} 3. If $R$ is a domain, then $R[x]$ is a domain #+END_lemma #+BEGIN_proof Let \(\sigma=(s_0,s_1,\dots),\tau=(t_0,t_1,\dots)\) have degrees $m$ and $n$ respectively. 1. If $k>m+n$, then each term in \(\sum_is_it_{k-i}\) is 0 2. Each term in \(\sum_is_it_{m+n-i}\) is 0 with the possible exception of $s_mt_n$. Since $R$ is a domain, $s_m\neq 0$ and $t_n\neq0$ imply $s_mt_n\neq0$. #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition If $R$ is a commutative ring, then $R[x]$ is called the *ring of polynomials over $R$* #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_definition Define the element $x\in R[x]$ by \begin{equation*} x=(0,1,0,0,\dots) \end{equation*} #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_lemma 1. If $\sigma=(s_0,\dots)$, then \begin{equation*} x\sigma=(0,s_0,s_1,\dots) \end{equation*} 2. If $n\ge 1$, then $x^n$ is the polynomial having 0 everywhere except for 1 in the \(n\)th coordinate 3. 
If \(r\in R\), then \begin{equation*} (r,0,\dots)(s_0,s_1,\dots,s_j,\dots)=(rs_0,rs_1,\dots,rs_j,\dots) \end{equation*} #+END_lemma #+ATTR_LATEX: :options [] #+BEGIN_proposition If $\sigma=(s_0,\dots,s_n,0,\dots)$, then \begin{equation*} \sigma=s_0+s_1x+s_2x^2+\dots+s_nx^n \end{equation*} where each element \(s\in R\) is identified with the polynomial \((s,0,\dots)\) #+END_proposition As is customary, we shall write \begin{equation*} f(x)=s_0+s_1x+\dots+s_nx^n \end{equation*} instead of \sigma. $s_0$ is called its *constant term*. If $s_n=1$, then $f(x)$ is called *monic*. #+ATTR_LATEX: :options [] #+BEGIN_corollary Polynomials $f(x)=s_0+\dots+s_nx^n$ and $g(x)=t_0+\dots+t_mx^m$ are equal if and only if $n=m$ and $s_i=t_i$ for all $i$. #+END_corollary If $R$ is a commutative ring, each polynomial $f(x)=s_0+\dots+s_nx^n$ defines a *polynomial function* $f:R\to R$ by evaluation: If \(a\in R\), define \(f(a)=s_0+\dots+s_na^n\in R\). #+ATTR_LATEX: :options [] #+BEGIN_definition Let $k$ be a field. The fraction field of $k[x]$, denoted by $k(x)$, is called the *field of rational functions* over $k$ #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_proposition If $k$ is a field, then the elements of $k(x)$ have the form $f(x)/g(x)$ where $f(x),g(x)\in k[x]$ and $g(x)\neq 0$ #+END_proposition #+ATTR_LATEX: :options [] #+BEGIN_proposition If $p$ is a prime, then the field of rational functions \(\I_p(x)\) is an infinite field containing \(\I_p\) as a subfield. #+END_proposition #+BEGIN_proof By Lemma ref:lemma3.15 (3), \(\I_p[x]\) is an infinite domain, for the powers $x^n$, $n\in\N$, are distinct. Thus its fraction field \(\I_p(x)\) is an infinite field containing \(\I_p[x]\) as a subring. But \(\I_p[x]\) contains \(\I_p\) as a subring, by Proposition ref:prop3.14. #+END_proof $R[x]$ is often called the ring of all *polynomials over $R$ in one variable*. 
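The coefficientwise sum and convolution product from the proof of Proposition ref:prop3.14 are easy to experiment with. A minimal Python sketch (the function names are mine, not the text's), writing coefficient sequences lowest degree first:

```python
def poly_add(s, t):
    """(sigma + tau)_i = s_i + t_i, padding the shorter sequence with 0s."""
    n = max(len(s), len(t))
    s, t = s + [0] * (n - len(s)), t + [0] * (n - len(t))
    return [a + b for a, b in zip(s, t)]

def poly_mul(s, t):
    """(sigma * tau)_k = sum over i + j = k of s_i * t_j."""
    c = [0] * (len(s) + len(t) - 1)
    for i, a in enumerate(s):
        for j, b in enumerate(t):
            c[i + j] += a * b
    return c

# Over the domain Z the degrees add, as in Lemma 3.15(2):
# (1 + x)(1 - x + x^2) = 1 + x^3
assert poly_mul([1, 1], [1, -1, 1]) == [1, 0, 0, 1]
assert poly_add([1, 1], [1, -1, 1]) == [2, 0, 1]
```

Over a ring with zero divisors the degree of a product can drop, which is exactly the caveat of Lemma ref:lemma3.15(1).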
If we write $A=R[x]$, then $A[y]$ is called the ring of all *polynomials over $R$ in two variables $x$ and $y$*, and it is denoted by $R[x,y]$. #+BEGIN_exercise Show that if $R$ is a commutative ring, then $R[x]$ is never a field #+END_exercise #+BEGIN_proof If $R[x]$ is a field, then it is a domain and $x^{-1}\in R[x]$. However \begin{equation*} 0=\deg(1)=\deg(xx^{-1})=\deg(x)+\deg(x^{-1})\ge 1 \end{equation*} A contradiction. #+END_proof #+BEGIN_exercise label:ex3.22 Show that the polynomial function defined by $f(x)=x^p-x\in\I_p[x]$ is identically zero. #+END_exercise #+BEGIN_proof By Fermat's theorem ref:Fermat, \(a^p\equiv a\mod p\) for every \(a\in\I_p\) #+END_proof #+BEGIN_exercise label:ex3.23 If $R$ is a commutative ring and \(f(x)=\sum_{i=0}^n s_ix^i\in R[x]\) has degree \(n\ge1\), define its *derivative* \(f'(x)\in R[x]\) by \begin{equation*} f'(x)=s_1+2s_2x+3s_3x^2+\cdots+ns_nx^{n-1} \end{equation*} If \(f(x)\) is a constant polynomial, define its derivative to be the zero polynomial. Prove that the usual rules of calculus hold: \begin{align*} (f+g)'&=f'+g'\\ (rf)'&=r(f)'\quad\text{if }r\in R\\ (fg)'&=fg'+f'g\\ (f^n)'&=nf^{n-1}f'\quad\text{for all }n\ge 1 \end{align*} #+END_exercise #+BEGIN_exercise label:ex3.24 Let $R$ be a commutative ring and let \(f(x)\in R[x]\) 1. Prove that if \((x-a)^2\mid f(x)\), then \(x-a\mid f'(x)\) in $R[x]$ 2. Prove that if \(x-a\mid f(x)\) and \(x-a\mid f'(x)\), then \((x-a)^2\mid f(x)\) #+END_exercise ** Greatest Common Divisors #+ATTR_LATEX: :options [Division Algorithm] #+BEGIN_theorem Assume that $k$ is a field and that \(f(x),g(x)\in k[x]\) with $f(x)\neq 0$. Then there are unique polynomials \(q(x),r(x)\in k[x]\) with \begin{equation*} g(x)=q(x)f(x)+r(x) \end{equation*} and either $r(x)=0$ or $\deg(r)<\deg(f)$ #+END_theorem #+BEGIN_proof We first prove the existence of such $q$ and $r$. If \(f\mid g\), then \(g=qf\) for some $q$; define the remainder $r=0$. 
If \(f\nmid g\), then consider all polynomials of the form \(g-qf\) as $q$ varies over $k[x]$. The least integer axiom provides a polynomial \(r=g-qf\) having least degree among all such polynomials. Since \(g=qf+r\), it suffices to show that \(\deg(r)<\deg(f)\). Write \(f(x)=s_nx^n+\dots+s_1x+s_0\) and \(r(x)=t_mx^m+\dots+t_0\). Now $s_n\neq 0$ implies that $s_n$ is a unit because $k$ is a field and so $s_n^{-1}\in k$. If $\deg(r)\ge\deg(f)$, define \begin{equation*} h(x)=r(x)-t_ms_n^{-1}x^{m-n}f(x) \end{equation*} that is, if \(\LT(f)=s_nx^n\), where LT abbreviates *leading term*, then \begin{equation*} h=r-\frac{\LT(r)}{\LT(f)}f \end{equation*} note that $h=0$ or \(\deg(h)<\deg(r)\). If $h=0$, then \(r=[\LT(r)/\LT(f)]f\) and \begin{align*} g&=qf+r=qf+\frac{\LT(r)}{\LT(f)}f\\ &=\left[q+\frac{\LT(r)}{\LT(f)}\right]f \end{align*} contradicting \(f\nmid g\). If \(h\neq0\), then $\deg(h)<\deg(r)$ and \begin{equation*} g-qf=r=h+\frac{\LT(r)}{\LT(f)}f \end{equation*} Thus $g-[q+\LT(r)/\LT(f)]f=h$, contradicting $r$ being a polynomial of least degree having this form. Therefore $\deg(r)<\deg(f)$. To prove uniqueness of $q(x)$ and $r(x)$, assume that $g=q'f+r'$, where either $r'=0$ or $\deg(r')<\deg(f)$. Then \begin{equation*} (q-q')f=r'-r \end{equation*} If $r'\neq r$, then each side has a degree. But \(\deg((q-q')f)=\deg(q-q')+\deg(f)\ge\deg(f)\), while \(\deg(r'-r)\le\max\{\deg(r'),\deg(r)\}<\deg(f)\), a contradiction. Hence $r'=r$ and $(q-q')f=0$. 
As $k[x]$ is a domain and $f\neq 0$, it follows that $q-q'=0$ and $q=q'$ #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition If $f(x)$ and $g(x)$ are polynomials in $k[x]$, where $k$ is a field, then the polynomials $q(x)$ and $r(x)$ occurring in the division algorithm are called the *quotient* and the *remainder* after dividing $g(x)$ by $f(x)$ #+END_definition The hypothesis that $k$ is a field is much too strong: long division can be carried out in $R[x]$ for every commutative ring $R$ as long as the leading coefficient of $f(x)$ is a unit in $R$; in particular, long division is always possible when $f(x)$ is monic. #+ATTR_LATEX: :options [] #+BEGIN_corollary Let $R$ be a commutative ring and let $f(x)\in R[x]$ be a monic polynomial. If $g(x)\in R[x]$, then there exist $q(x),r(x)\in R[x]$ with \begin{equation*} g(x)=q(x)f(x)+r(x) \end{equation*} where either $r(x)=0$ or \(\deg(r)<\deg(f)\) #+END_corollary #+BEGIN_proof Note that \(\LT(r)/\LT(f)\in R\) because $f(x)$ is monic #+END_proof [[index:root]] #+ATTR_LATEX: :options [] #+BEGIN_definition If $f(x)\in k[x]$, where $k$ is a field, then a *root* of $f(x)$ *in $k$* is an element $a\in k$ with $f(a)=0$ #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_lemma Let $f(x)\in k[x]$, where $k$ is a field, and let $u\in k$. Then there is \(q(x)\in k[x]\) with \begin{equation*} f(x)=q(x)(x-u)+f(u) \end{equation*} #+END_lemma #+BEGIN_proof The division algorithm gives \begin{equation*} f(x)=q(x)(x-u)+r \end{equation*} with $r$ constant. Now evaluate \begin{equation*} f(u)=q(u)(u-u)+r \end{equation*} and so $r=f(u)$ #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_proposition label:prop3.24 If $f(x)\in k[x]$, where $k$ is a field, then $a$ is a root of $f(x)$ in $k$ if and only if $x-a$ divides $f(x)$ in $k[x]$ #+END_proposition #+BEGIN_proof If $a$ is a root of $f(x)$ in $k$, then $f(a)=0$ and the lemma gives $f(x)=q(x)(x-a)$. 
#+END_proof #+ATTR_LATEX: :options [] #+BEGIN_theorem label:thm3.25 Let $k$ be a field and let $f(x)\in k[x]$. If $f(x)$ has degree $n$, then $f(x)$ has at most $n$ roots in $k$ #+END_theorem #+BEGIN_proof We prove the statement by induction on $n\ge 0$. If $n=0$, then $f(x)$ is a nonzero constant, and so the number of its roots in $k$ is zero. Now let $n>0$. If $f(x)$ has no roots in $k$, then we are done. Otherwise we may assume that there is $a\in k$ with $a$ a root of $f(x)$; hence by Proposition ref:prop3.24 \begin{equation*} f(x)=q(x)(x-a) \end{equation*} Moreover, \(q(x)\in k[x]\) has degree $n-1$. By the inductive hypothesis, $q(x)$ has at most $n-1$ roots in $k$; since every root of $f(x)$ other than $a$ is a root of $q(x)$, it follows that $f(x)$ has at most $n$ roots in $k$. #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_examplle Theorem ref:thm3.25 is not true for polynomials with coefficients in an arbitrary commutative ring $R$. For example, if $R=\I_8$, then the quadratic polynomial $x^2-1$ has 4 roots: $[1],[3],[5],[7]$ #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_corollary Every \(n\)th root of unity in $\C$ is equal to \begin{equation*} e^{2\pi ik/n}=\cos\left(\frac{2\pi k}{n}\right)+i\sin\left(\frac{2\pi k}{n}\right) \end{equation*} where $k=0,1,\dots,n-1$ #+END_corollary #+ATTR_LATEX: :options [] #+BEGIN_corollary Let $k$ be an infinite field and let $f(x)$ and $g(x)$ be polynomials in $k[x]$. If $f(x)$ and $g(x)$ determine the same polynomial function, then $f(x)=g(x)$ #+END_corollary #+BEGIN_proof If $f(x)\neq g(x)$, then the polynomial $h(x)=f(x)-g(x)$ is nonzero, so that it has some degree, say $n$. Now every element of $k$ is a root of $h(x)$; since $k$ is infinite, $h(x)$ has more than $n$ roots, a contradiction. #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_theorem label:nthm2.46 If $k$ is a field and $G$ is a finite subgroup of the multiplicative group $k^\times$, then $G$ is cyclic. In particular, if $k$ itself is finite, then $k^\times$ is cyclic. #+END_theorem #+BEGIN_proof Let $d$ be a divisor of $\abs{G}$. If there are two subgroups of $G$ of order $d$, say $S$ and $T$, then $\abs{S\cup T}>d$. 
But each $a\in S\cup T$ satisfies $a^d=1$ and hence is a root of $x^d-1$, which has at most $d$ roots in $k$ by Theorem ref:thm3.25, a contradiction. Thus $G$ is cyclic, by Theorem ref:thm2.86. #+END_proof [[index:primitive element]] #+ATTR_LATEX: :options [] #+BEGIN_definition If $k$ is a finite field, a generator of the cyclic group $k^\times$ is called a *primitive element* of $k$ #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_definition If $f(x)$ and $g(x)$ are polynomials in $k[x]$, where $k$ is a field, then a *common divisor* is a polynomial \(c(x)\in k[x]\) with \(c(x)\mid f(x)\) and \(c(x)\mid g(x)\). If $f(x)$ and $g(x)$ in $k[x]$ are not both 0, define their *greatest common divisor*, abbreviated gcd, to be the monic common divisor having largest degree. If $f(x)=0=g(x)$, define their $\gcd=0$. The gcd of $f(x)$ and $g(x)$ is often denoted by $(f,g)$ #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_theorem label:thm3.31 If $k$ is a field and $f(x),g(x)\in k[x]$, then their \(\gcd d(x)\) is a linear combination of $f(x)$ and $g(x)$; that is, there are \(s(x),t(x)\in k[x]\) with \begin{equation*} d(x)=s(x)f(x)+t(x)g(x) \end{equation*} #+END_theorem #+ATTR_LATEX: :options [] #+BEGIN_corollary Let $k$ be a field and let \(f(x),g(x)\in k[x]\). A monic common divisor $d(x)$ is the gcd if and only if $d(x)$ is divisible by every common divisor #+END_corollary [[index:irreducible]] #+ATTR_LATEX: :options [] #+BEGIN_definition An element $p$ in a domain $R$ is *irreducible* if $p$ is neither 0 nor a unit and in any factorization $p=uv$ in $R$, either $u$ or $v$ is a unit. 
Elements \(a,b\in R\) are *associates* if there is a unit \(u\in R\) with $b=ua$ #+END_definition For example, a prime $p$ is irreducible in $\Z$ #+ATTR_LATEX: :options [] #+BEGIN_proposition If $k$ is a field, then a polynomial \(p(x)\in k[x]\) is irreducible if and only if \(\deg(p)=n\ge 1\) and there is no factorization in $k[x]$ of the form $p(x)=g(x)h(x)$ in which both factors have degree smaller than $n$ #+END_proposition #+BEGIN_proof We show first that $h(x)\in k[x]$ is a unit if and only if \(\deg(h)=0\). If $h(x)u(x)=1$, then $\deg(h)+\deg(u)=\deg(1)=0$, so \(\deg(h)=0\). Conversely, if $\deg(h)=0$, then $h(x)$ is a nonzero constant; that is, $h\in k$; since $k$ is a field, $h$ has an inverse. If $p(x)$ is irreducible, then its only factorizations are of the form $p(x)=g(x)h(x)$ where $g(x)$ or $h(x)$ is a unit; that is, either $\deg(g)=0$ or $\deg(h)=0$. Conversely, if $p(x)$ is reducible, then it has a factorization $p(x)=g(x)h(x)$ where neither $g(x)$ nor $h(x)$ is a unit #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary label:cor3.34 Let $k$ be a field and let $f(x)\in k[x]$ be a quadratic or cubic polynomial. Then $f(x)$ is irreducible in $k[x]$ if and only if $f(x)$ does not have a root in $k$ #+END_corollary #+BEGIN_proof If $f(x)=g(x)h(x)$, then $\deg(f)=\deg(g)+\deg(h)$ #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_examplle label:example3.35 1. We determine the irreducible polynomials in $\I_2[x]$ of small degree. As always, the linear polynomials $x$ and $x+1$ are irreducible. There are four quadratics: $x^2,x^2+x,x^2+1,x^2+x+1$. Since each of the first three has a root in \(\I_2\), there is only one irreducible quadratic. There are eight cubics, of which four are reducible because their constant term is 0. The remaining polynomials are \begin{equation*} x^3+1;\quad x^3+x+1;\quad x^3+x^2+1;\quad x^3+x^2+x+1 \end{equation*} Since 1 is a root of the first and fourth, the middle two are the only irreducible cubics. 
#+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_lemma label:lemma3.36 Let $k$ be a field, let $p(x),f(x)\in k[x]$, and let $d(x)=(p,f)$. If $p(x)$ is a monic irreducible polynomial, then \begin{equation*} d(x)= \begin{cases} 1&\text{if }p(x)\nmid f(x)\\ p(x)&\text{if }p(x)\mid f(x) \end{cases} \end{equation*} #+END_lemma #+ATTR_LATEX: :options [Euclid's Lemma] #+BEGIN_theorem Let $k$ be a field and let \(f(x),g(x)\in k[x]\). If $p(x)$ is an irreducible polynomial in $k[x]$, and \(p(x)\mid f(x)g(x)\), then either \begin{equation*} p(x)\mid f(x)\hspace{0.5cm}\text{or}\hspace{0.5cm}p(x)\mid g(x) \end{equation*} More generally, if \(p(x)\mid f_1(x)\dots f_n(x)\), then \(p(x)\mid f_i(x)\) for some $i$ #+END_theorem #+BEGIN_proof Assume $p\mid fg$ but that \(p\nmid f\). Since $p$ is irreducible, $(p,f)=1$, and so $1=sp+tf$ for some polynomials $s$ and $t$. Therefore \begin{equation*} g=spg+tfg \end{equation*} and so \(p\mid g\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition Two polynomials \(f(x),g(x)\in k[x]\), where $k$ is a field, are called *relatively prime* if their gcd is 1 #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_corollary Let \(f(x),g(x),h(x)\in k[x]\), where $k$ is a field, and let $h(x)$ and $f(x)$ be relatively prime. 
If \(h(x)\mid f(x)g(x)\), then \(h(x)\mid g(x)\) #+END_corollary #+ATTR_LATEX: :options [] #+BEGIN_definition If $k$ is a field, then a rational function \(f(x)/g(x)\in k(x)\) is in *lowest terms* if $f(x)$ and $g(x)$ are relatively prime #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_proposition If $k$ is a field, every nonzero \(f(x)/g(x)\in k(x)\) can be put in lowest terms #+END_proposition #+ATTR_LATEX: :options [Euclidean Algorithm] #+BEGIN_theorem If $k$ is a field and \(f(x),g(x)\in k[x]\), then there are algorithms for computing \(\gcd(f,g)\) as well as for finding a pair of polynomials $s(x)$ and $t(x)$ with \begin{equation*} (f,g)=s(x)f(x)+t(x)g(x) \end{equation*} #+END_theorem #+BEGIN_proof \begin{gather*} g=q_1f+r_1\\ f=q_2r_1+r_2\\ r_1=q_3r_2+r_3\\ \vdots\\ r_{n-4}=q_{n-2}r_{n-3}+r_{n-2}\\ r_{n-3}=q_{n-1}r_{n-2}+r_{n-1}\\ r_{n-2}=q_nr_{n-1}+r_n\\ r_{n-1}=q_{n+1}r_n \end{gather*} Since the degrees of the remainders are strictly decreasing, this procedure must stop after a finite number of steps. The claim is that $d=r_n$ is the gcd. Working upward from the last equation, $r_n$ divides $r_{n-1}$, hence $r_n$ divides $r_{n-2}$, and so on; thus \(r_n\mid f\) and \(r_n\mid g\). If $c$ is any common divisor of $f$ and $g$, then $c\mid r_i$ for every $i$. Also \begin{align*} r_n&=r_{n-2}-q_nr_{n-1}\\ &=r_{n-2}-q_n(r_{n-3}-q_{n-1}r_{n-2})\\ &=(1+q_nq_{n-1})r_{n-2}-q_nr_{n-3}\\ &=(1+q_nq_{n-1})(r_{n-4}-q_{n-2}r_{n-3})-q_nr_{n-3}\\ &=(1+q_nq_{n-1})r_{n-4}-[(1+q_nq_{n-1})q_{n-2}+q_n]r_{n-3}\\ &\vdots\\ &=sf+tg \end{align*} #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary Let $k$ be a subfield of a field $K$, so that $k[x]$ is a subring of $K[x]$. If \(f(x),g(x)\in k[x]\), then their gcd in $k[x]$ is equal to their gcd in $K[x]$ #+END_corollary #+BEGIN_proof The division algorithm in $K[x]$ gives \begin{equation*} g(x)=Q(x)f(x)+R(x) \end{equation*} while the division algorithm in $k[x]$ gives \begin{equation*} g(x)=q(x)f(x)+r(x) \end{equation*} and this also holds in $K[x]$, so that uniqueness of quotient and remainder gives $Q(x)=q(x),R(x)=r(x)$. 
#+END_proof #+ATTR_LATEX: :options [Unique Factorization] #+BEGIN_theorem If $k$ is a field, then every polynomial \(f(x)\in k[x]\) of degree $\ge1$ is a product of a nonzero constant and monic irreducibles. Moreover, if $f(x)$ has two such factorizations \begin{equation*} f(x)=ap_1(x)\dots p_m(x)\hspace{0.5cm}\text{and}\hspace{0.5cm} f(x)=bq_1(x)\dots q_n(x) \end{equation*} then \(a=b,m=n\) and the \(q\)'s may be reindexed so that \(q_i=p_i\) for all $i$ #+END_theorem #+BEGIN_proof We prove the existence of a factorization for a polynomial $f(x)$ by induction on $\deg(f)\ge1$. If \(\deg(f)=1\), then $f(x)=ax+c=a(x+a^{-1}c)$. Like every linear polynomial, $x+a^{-1}c$ is irreducible. Assume now that $\deg(f)>1$. If $f(x)$ is irreducible and its leading coefficient is $a$, write $f(x)=a(a^{-1}f(x))$; we are done. If $f(x)$ is not irreducible, then $f(x)=g(x)h(x)$, where \(\deg(g)<\deg(f)\) and \(\deg(h)<\deg(f)\). By the inductive hypothesis, \(g(x)=bp_1(x)\dots p_m(x)\) and \(h(x)=cq_1(x)\dots q_n(x)\). It follows that \begin{equation*} f(x)=(bc)p_1(x)\dots p_m(x)q_1(x)\dots q_n(x) \end{equation*} We now prove, by induction on \(M=\max\{m,n\}\ge1\), that if there is an equation \begin{equation*} ap_1(x)\dots p_m(x)=bq_1(x)\dots q_n(x) \end{equation*} where $a$ and $b$ are nonzero constants and the \(p\)'s and \(q\)'s are monic irreducibles, then $a=b$, $m=n$, and the \(q\)'s may be reindexed so that $q_i=p_i$ for all $i$. For the inductive step, \(p_m(x)\mid q_1(x)\dots q_n(x)\). By Euclid's lemma, there is $i$ with \(p_m(x)\mid q_i(x)\). But $p_m(x)$ and $q_i(x)$ are both monic irreducible, so that $q_i(x)=p_m(x)$. Canceling this factor, the inductive hypothesis applies #+END_proof Let $k$ be a field and assume that there are $a,r_1,\dots,r_n\in k$ with \begin{equation*} f(x)=a \displaystyle\prod_{i=1}^n(x-r_i) \end{equation*} If \(r_1,\dots,r_s\), where \(s\le n\), are the distinct roots of $f(x)$, then collecting terms gives \begin{equation*} f(x)=a(x-r_1)^{e_1}\dots (x-r_s)^{e_s} \end{equation*} where $r_j$ are distinct and $e_j\ge1$. 
We call $e_j$ the *multiplicity* of the root $r_j$. #+ATTR_LATEX: :options [] #+BEGIN_theorem label:thm3.43 Let \(f(x)=a_0+a_1x+\dots+a_nx^n\in\Z[x]\subseteq\Q[x]\). Every rational root $r$ of $f(x)$ has the form $b/c$, where \(b\mid a_0\) and \(c\mid a_n\) #+END_theorem #+BEGIN_proof We may assume that $r=b/c$ is in lowest terms. \begin{gather*} 0=f(b/c)=a_0+a_1(b/c)+\dots+a_n(b/c)^n\\ 0=a_0c^n+a_1bc^{n-1}+\dots+a_nb^n \end{gather*} Hence \(a_0c^n=b(-a_1c^{n-1}-\dots-a_nb^{n-1})\), that is \(b\mid a_0c^n\). Since \((b,c)=1\), it follows that \(b\mid a_0\); a similar argument gives \(c\mid a_n\). #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition A complex number \alpha is called an *algebraic integer* if \alpha is a root of a monic \(f(x)\in\Z[x]\) #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_corollary label:cor3.44 A rational number $z$ that is an algebraic integer must lie in $\Z$. More precisely, if \(f(x)\in\Z[x]\subseteq\Q[x]\) is a monic polynomial, then every rational root of $f(x)$ is an integer that divides the constant term #+END_corollary #+BEGIN_proof $a_n=1$ in Theorem ref:thm3.43 #+END_proof For example, consider \(f(x)=x^3+4x^2-2x-1\in\Q[x]\). By Corollary ref:cor3.34, this cubic is irreducible if and only if it has no rational root. As $f(x)$ is monic, the candidates for rational roots are \(\pm 1\), for these are the only divisors of $-1$ in $\Z$. Since \(f(1)=2\) and \(f(-1)=4\), $f(x)$ has no roots in $\Q$ and hence $f(x)$ is irreducible in $\Q[x]$ #+BEGIN_exercise label:ex3.37 1. Let \(f(x)=(x-a_1)\cdots(x-a_n)\in k[x]\) where $k$ is a field. Show that \(f(x)\) has *no repeated roots* if and only if \(\gcd(f,f')=1\) where \(f'(x)\) is the derivative of $f$ 2. Prove that if \(p(x)\in\Q[x]\) is an irreducible polynomial, then \(p(x)\) has no repeated roots in \(\C\) #+END_exercise #+BEGIN_proof 1.
\(f'(x)=\sum_{i=1}^n(x-a_1)\cdots(x-a_{i-1})(x-a_{i+1})\cdots(x-a_n)\), so \(f'(a_j)=\prod_{i\neq j}(a_j-a_i)\). If the roots are distinct, then every \(f'(a_j)\neq0\), so $f$ and $f'$ have no common root and \(\gcd(f,f')=1\); if some root is repeated, it is a common root of $f$ and $f'$ #+END_proof ** Homomorphisms [[index:ring homomorphism]] #+ATTR_LATEX: :options [] #+BEGIN_definition If $A$ and $R$ are (commutative) rings, a *(ring) homomorphism* is a function \(f:A\to R\) s.t. 1. \(f(1)=1\) 2. \(f(a+a')=f(a)+f(a')\) 3. \(f(aa')=f(a)f(a')\) #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_examplle 1. Let $R$ be a domain and let \(F=\Frac(R)\). If \(R'=\{[a,1]:a\in R\}\subseteq F\), then the function \(f:R\to R'\) given by \(f(a)=[a,1]\) is an isomorphism 2. Complex conjugation \(z=a+ib\mapsto\overline{z}=a-ib\) is an isomorphism \(\C\to \C\). 3. Let $R$ be a commutative ring, and let \(a\in R\). Define the *evaluation homomorphism* \(e_a:R[x]\to R\) by \(e_a(f(x))=f(a)\). #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_lemma If \(f:A\to R\) is a ring homomorphism, then for all \(a\in A\) 1. \(f(a^n)=f(a)^n\) 2. if \(a\) is a unit, then \(f(a)\) is a unit and \(f(a^{-1})=f(a)^{-1}\) 3. if \(f:A\to R\) is a ring homomorphism, then \begin{equation*} f(U(A))\le U(R) \end{equation*} where \(U(A)\) is the group of units of $A$; if $f$ is an isomorphism, then \begin{equation*} U(A)\cong U(R) \end{equation*} #+END_lemma #+ATTR_LATEX: :options [] #+BEGIN_proposition label:prop3.48 If $R$ and $S$ are commutative rings and \(\varphi:R\to S\) is a ring homomorphism, then there is a ring homomorphism \(\varphi^*:R[x]\to S[x]\) given by \begin{equation*} \varphi^*:r_0+r_1x+r_2x^2+\dots\mapsto\varphi(r_0)+\varphi(r_1)x+ \varphi(r_2)x^2+\dots \end{equation*} #+END_proposition #+ATTR_LATEX: :options [] #+BEGIN_definition If \(f:A\to R\) is a ring homomorphism, then its *kernel* is \begin{equation*} \ker f=\{a\in A:f(a)=0\} \end{equation*} and its *image* is \begin{equation*} \im f=\{r\in R:\exists a\in A\e r=f(a)\} \end{equation*} #+END_definition The kernel of a group homomorphism is not merely a subgroup; it is a *normal* subgroup.
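The evaluation homomorphism \(e_a\) of the example above is easy to experiment with. The following Python sketch (function names are my own) checks the multiplicative property at a sample point and illustrates that \(\ker e_a\) consists of the polynomials divisible by \(x-a\), since dividing \(f\) by \(x-a\) leaves remainder \(f(a)\):

```python
def evaluate(coeffs, a):
    """e_a : R[x] -> R, f(x) |-> f(a); coeffs = [a_0, a_1, ...] (Horner's rule)."""
    result = 0
    for c in reversed(coeffs):
        result = result * a + c
    return result

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def divmod_linear(coeffs, a):
    """Synthetic division: f(x) = q(x)(x - a) + r, and the remainder r equals f(a)."""
    q, carry = [], 0
    for c in reversed(coeffs):
        carry = carry * a + c
        q.append(carry)
    r = q.pop()
    q.reverse()
    return q, r

f, g, a = [1, 2, 3], [4, 5], 7
# e_a preserves products: e_a(f g) = e_a(f) e_a(g)
assert evaluate(poly_mul(f, g), a) == evaluate(f, a) * evaluate(g, a)
# f - f(a) lies in ker(e_a): the remainder on division by (x - a) is f(a)
q, r = divmod_linear(f, a)
assert r == evaluate(f, a)
```

That the remainder is \(f(a)\) is exactly why \(\ker e_a=(x-a)\), an instance of the ideals defined next.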
Similarly, the kernel of a ring homomorphism is almost a subring: it is closed under addition and multiplication, but \(1\not\in\ker f\) when $R$ is not the zero ring. [[index:ideal]] #+ATTR_LATEX: :options [] #+BEGIN_definition An *ideal* in a commutative ring $R$ is a subset $I$ of $R$ s.t. 1. \(0\in I\) 2. if \(a,b\in I\), then \(a+b\in I\) 3. if \(a\in I\) and \(r\in R\), then \(ra\in I\) #+END_definition An ideal \(I\neq R\) is called a *proper ideal* #+ATTR_LATEX: :options [] #+BEGIN_examplle If \(b_1,\dots,b_n\in R\), then the set of all linear combinations \begin{equation*} I=\{r_1b_1+\dots+r_nb_n:r_i\in R\} \end{equation*} is an ideal in $R$. We write \(I=(b_1,\dots,b_n)\) in this case and we call $I$ the *ideal generated by* \(b_1,\dots,b_n\). In particular, if $n=1$, then \begin{equation*} I=(b)=\{rb:r\in R\} \end{equation*} is an ideal in $R$; $(b)$ consists of all the multiples of $b$ and it is called the *principal ideal* generated by $b$. Notice that $R$ and $\{0\}$ are always principal ideals: \(R=(1),\{0\}=(0)\) #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_proposition If \(f:A\to R\) is a ring homomorphism, then \(\ker f\) is an ideal in $A$ and \(\im f\) is a subring of $R$. Moreover, if $A$ and $R$ are not zero rings, then \(\ker f\) is a proper ideal. #+END_proposition #+ATTR_LATEX: :options [] #+BEGIN_examplle 1. If an ideal $I$ in a commutative ring $R$ contains 1, then $I=R$ 2.
It follows from 1 that if $R$ is a field, then the only ideals are $\{0\}$ and $R$ #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_proposition A ring homomorphism \(f:A \to R\) is an injection if and only if \(\ker f=\{0\}\) #+END_proposition #+ATTR_LATEX: :options [] #+BEGIN_corollary label:ncor2.32 If \(f:k\to R\) is a ring homomorphism, where $k$ is a field and $R$ is not the zero ring, then $f$ is an injection #+END_corollary #+BEGIN_proof The only proper ideal in $k$ is $\{0\}$, so \(\ker f=\{0\}\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_theorem label:thm3.54 If $k$ is a field, then every ideal $I$ in $k[x]$ is a principal ideal. Moreover, if \(I\neq\{0\}\), there is a monic polynomial that generates $I$ #+END_theorem #+BEGIN_proof If $k$ is a field, then $k[x]$ is a *euclidean ring*, and so the result follows from Theorem ref:thm3.60 #+END_proof [[index:principal ideal domain]] #+ATTR_LATEX: :options [] #+BEGIN_definition A domain $R$ is a *principal ideal domain* (PID) if every ideal in $R$ is a principal ideal. #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_examplle 1. The ring of integers is a PID 2. Every field is a PID 3. If $k$ is a field, then the polynomial ring $k[x]$ is a PID 4. There are rings other than \(\Z\) and \(k[x]\), where $k$ is a field, that have a division algorithm; they are called *euclidean rings*. #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_examplle Let \(R=\Z[x]\) and let $I$ be the set of all polynomials with even constant term; $I$ is an ideal in \(\Z[x]\). We show that \(I\) is not a principal ideal. Suppose there is \(d(x)\in\Z[x]\) with \(I=(d(x))\). The constant \(2\in I\), so that there is \(f(x)\in\Z[x]\) with \(2=d(x)f(x)\). We have \(0=\deg(2)=\deg(d)+\deg(f)\). The candidates for $d(x)$ are $\pm 1$ and $\pm2$. Suppose $d(x)=\pm2$; since \(x\in I\), there is \(g(x)\in\Z[x]\) with \(x=d(x)g(x)=\pm2g(x)\). But every coefficient on the right side is even. This contradiction gives \(d(x)=\pm1\). Hence $I=\Z[x]$, another contradiction.
Therefore $I$ is not a principal ideal. #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_definition An element \delta in a commutative ring $R$ is a *greatest common divisor*, gcd, of elements \(\alpha,\beta \in R\) if 1. \delta is a common divisor of \alpha and \beta 2. if \gamma is any common divisor of \alpha and \beta, then \(\gamma\mid\delta\) #+END_definition #+BEGIN_remark Let $R$ be a PID and let \(\pi,\alpha\in R\) with \pi irreducible. A gcd \delta of \pi and \alpha is a divisor of \pi. Hence \(\pi=\delta\epsilon\). And irreducibility of \pi forces either \delta or \epsilon to be a unit. Now \(\delta\mid\alpha\), say \(\alpha=\delta\beta\). If \delta is not a unit, then \epsilon is a unit and so \begin{equation*} \alpha=\delta\beta=\pi\epsilon^{-1}\beta \end{equation*} that is, \(\pi\mid\alpha\). We conclude that if \(\pi\nmid\alpha\) then \delta is a unit; that is, 1 is a gcd of \pi and \alpha #+END_remark #+ATTR_LATEX: :options [] #+BEGIN_theorem label:thm3.57 Let $R$ be a PID 1. Every \(\alpha,\beta\in R\) has a gcd, \delta, which is a linear combination of \alpha and \beta \begin{equation*} \delta=\sigma\alpha+\tau\beta \end{equation*} 2. If an irreducible element \(\pi\in R\) divides a product \(\alpha\beta\), then either \(\pi\mid\alpha\) or \(\pi\mid\beta\) #+END_theorem #+BEGIN_proof 1. We may assume that at least one of \alpha and \beta is not zero. Consider the set $I$ of all the linear combinations \begin{equation*} I=\{\sigma\alpha+\tau\beta:\sigma,\tau\in R\} \end{equation*} $I$ is an ideal and so there is \(\delta\in I\) with \(I=(\delta)\); we claim that \delta is a gcd of \alpha and \beta. Note that \(\alpha,\beta\in I=(\delta)\), so \delta is a common divisor of \alpha and \beta; and since \(\delta=\sigma\alpha+\tau\beta\) for some \(\sigma,\tau\in R\), every common divisor of \alpha and \beta divides \delta 2. If \(\pi\nmid\alpha\), then the remark says that 1 is a gcd of \pi and \alpha.
Thus \(1=\sigma\pi+\tau\alpha\) and so \begin{equation*} \beta=\sigma\pi\beta+\tau\alpha\beta \end{equation*} Since \(\pi\mid\alpha\beta\), it follows that \(\pi\mid\beta\) #+END_proof [[index:least common multiple]] #+ATTR_LATEX: :options [] #+BEGIN_definition If $f$ and $g$ are elements in a commutative ring $R$, then a *common multiple* is an element \(m\in R\) with \(f\mid m\) and \(g\mid m\). If $f$ and $g$ in $R$ are not both 0, define their *least common multiple*, abbreviated lcm, to be a common multiple $c$ of $f$ and $g$ with \(c\mid m\) for every common multiple $m$. #+END_definition #+BEGIN_exercise label:ex3.34 If $k$ is a field, prove that \(\sqrt{1-x^2}\not\in k(x)\), where \(k(x)\) is the field of rational functions #+END_exercise #+BEGIN_proof If \(\sqrt{1-x^2}\in k(x)\), then \(1-x^2=p^2(x)/q^2(x)\) with \(p/q\) in lowest terms, and \(q^2(x)=p^2(x)+x^2q^2(x)\). However \(\deg(x^2q^2(x))>\deg(q^2(x))\) #+END_proof #+BEGIN_exercise label:ex3.45 1. Show that every element \(a\in\I_p\) has a \(p\)th root 2. Let $k$ be a field that contains \(\I_p\) as a subfield. For every positive integer $n$, show that the function \(\varphi_n:k\to k\), given by \(\varphi_n(a)=a^{p^n}\), is a ring homomorphism #+END_exercise #+BEGIN_proof 1. \(a^p\equiv a\mod p\), so $a$ is a \(p\)th root of itself 2. [[https://math.stackexchange.com/questions/565135/finite-fields-existence-of-field-of-order-pn-proof-help][stackexchange]] #+END_proof #+BEGIN_exercise label:ex3.47 1. If $A$ and $R$ are domains and \(\varphi:A\to R\) is a ring homomorphism, prove that \begin{equation*} [a,b]\mapsto [\varphi(a),\varphi(b)] \end{equation*} is a ring homomorphism \(\Frac(A)\to\Frac(R)\) 2. Prove that if a field $k$ contains an isomorphic copy of \(\Z\) as a subring, then $k$ must contain an isomorphic copy of \(\Q\) 3. Let $R$ be a domain and let \(\varphi:R\to k\) be an injective ring homomorphism, where $k$ is a field. Prove that there exists a unique ring homomorphism \(\Phi:\Frac(R)\to k\) extending \varphi ; that is, \(\Phi|R=\varphi\) #+END_exercise #+BEGIN_proof 1.
\begin{align*} f([1,1])&=[1,1]\\ f([a,b]+[c,d])&=f([ad+bc,bd])=[\varphi(ad+bc),\varphi(bd)]\\ &=[\varphi(a)\varphi(d)+\varphi(b)\varphi(c),\varphi(b)\varphi(d)]\\ &=[\varphi(a),\varphi(b)]+[\varphi(c),\varphi(d)]\\ &=f([a,b])+f([c,d])\\ f([a,b][c,d])&=f([ac,bd])=[\varphi(ac),\varphi(bd)]=[\varphi(a)\varphi(c), \varphi(b)\varphi(d)]\\&=f([a,b])f([c,d]) \end{align*} 2. Suppose \(k'\le k\) with \(k'\cong\Z\). The inclusion \(k'\to k\) is an injective homomorphism of domains, so part 1 gives a ring homomorphism \(\Q=\Frac(\Z)\cong\Frac(k')\to\Frac(k)=k\), which is injective by Corollary ref:ncor2.32. 3. Define \(\Phi([a,b])=\varphi(a)\varphi(b)^{-1}\); this makes sense because \varphi is injective, so \(b\neq0\) gives \(\varphi(b)\neq0\), which is invertible in the field $k$. \Phi is a ring homomorphism extending \varphi, and it is unique because any extension must send \([a,b]\) to \(\varphi(a)\varphi(b)^{-1}\). #+END_proof ** Euclidean Rings [[index:degree function]] [[index:euclidean ring]] #+ATTR_LATEX: :options [] #+BEGIN_definition A *euclidean ring* is a domain that is equipped with a function \begin{equation*} \partial:R-\{0\}\to\N \end{equation*} called a *degree function*, s.t. 1. \(\partial(f)\le\partial(fg)\) for all \(f,g\in R\) with \(f,g\neq0\) 2. for all \(f,g\in R\) with \(f\neq0\), there exists \(q,r\in R\) with \begin{equation*} g=qf+r \end{equation*} where either \(r=0\) or \(\partial(r)<\partial(f)\) #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_examplle 1. The integers \(\Z\) form a euclidean ring with the degree function \(\partial(m)=\abs{m}\). In \(\Z\) we have \begin{equation*} \partial(mn)=\abs{mn}=\abs{m}\abs{n}=\partial(m)\partial(n) \end{equation*} 2. When $k$ is a field, the domain $k[x]$ is a euclidean ring with degree function the usual degree of a nonzero polynomial. In $k[x]$, we have \begin{align*} \partial(fg)=\deg(fg)=\deg(f)+\deg(g)=\partial(f)+\partial(g) \end{align*} If a degree function is multiplicative, then \partial is called a *norm* 3. The Gaussian integers \(\Z[i]\) form a euclidean ring whose degree function \begin{equation*} \partial(a+bi)=a^2+b^2 \end{equation*} is a norm. One reason for showing that \(\Z[i]\) is a euclidean ring is that it is then a PID, and hence has unique factorization of its elements into products of irreducibles.
\(\partial\) is a multiplicative degree function, for \begin{equation*} \partial(\alpha\beta)=\alpha\beta\overline{\alpha\beta}=\alpha\beta\overline{\alpha} \overline{\beta}=\alpha\overline{\alpha}\beta\overline{\beta}= \partial(\alpha)\partial(\beta) \end{equation*} Let us show that \partial satisfies the second desired property. Given \(\alpha,\beta\in\Z[i]\) with \(\beta\neq0\), regard \(\alpha/\beta\) as an element of \(\C\). Rationalizing the denominator gives \(\alpha/\beta=\alpha\overline{\beta}/\beta\overline{\beta}=\alpha\overline{\beta}/\partial(\beta)\), so that \begin{equation*} \alpha/\beta=x+yi \end{equation*} where \(x,y\in\Q\). Write \(x=a+u\) and \(y=b+v\), where \(a,b\in\Z\) are integers closest to $x$ and $y$, respectively; thus \(\abs{u},\abs{v}\le1/2\). It follows that \begin{equation*} \alpha=\beta(a+bi)+\beta(u+vi) \end{equation*} Notice that \(\beta(u+vi)=\alpha-\beta(a+bi)\in\Z[i]\). Finally, since \(u^2+v^2\le\frac14+\frac14<1\), we have \begin{equation*} \partial(\beta(u+vi))=\partial(\beta)(u^2+v^2)<\partial(\beta) \end{equation*} And so \(\Z[i]\) is a euclidean ring whose degree function is a norm. Note that quotients and remainders are not unique because of the choice of the nearest integers $a$ and $b$ #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_theorem label:thm3.60 Every euclidean ring $R$ is a PID #+END_theorem #+BEGIN_proof Let $I$ be an ideal in $R$. If $I\neq\{0\}$, by the least integer axiom, the set of all degrees of nonzero elements in $I$ has a smallest element, say $n$; choose $d\in I$ with \(\partial(d)=n\). Clearly \((d)\subseteq I\). For any \(a\in I\), there are \(q,r\in R\) with \(a=qd+r\), where either $r=0$ or \(\partial(r)<\partial(d)\). But \(r=a-qd\in I\) and so $d$ having the least degree implies that $r=0$. Hence $a=qd\in(d)$.
#+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary The ring of Gaussian integers $\Z[i]$ is a PID #+END_corollary #+ATTR_LATEX: :options [] #+BEGIN_definition An element $u$ in a domain $R$ is a *universal side divisor* if $u$ is not a unit and for every $x\in R$, either \(u\mid x\) or there is a unit \(z\in R\) with \(u\mid(x+z)\) #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_proposition If $R$ is a euclidean ring but not a field, then $R$ has a universal side divisor #+END_proposition #+BEGIN_proof Define \begin{equation*} S=\{\partial(v):v\neq 0\text{ and }v\text{ is not a unit}\} \end{equation*} where \partial is the degree function on $R$. Since $R$ is not a field, $S$ is a nonempty subset of the natural numbers. By the least integer axiom, $S$ has a smallest element, say, $\partial(u)$. We claim that $u$ is a universal side divisor. If \(x\in R\), then there are \(q,r\) with \(x=qu+r\), where either $r=0$ or \(\partial(r)<\partial(u)\). If $r=0$, then \(u\mid x\); otherwise, minimality of \(\partial(u)\) forces $r$ to be a unit, and \(u\mid(x-r)\) with \(-r\) a unit. #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_proposition label:prop3.64 1. Let $R$ be a euclidean ring that is not a field. If the degree function \partial is a norm, then \alpha is a unit if and only if \(\partial(\alpha)=1\) 2. Let $R$ be a euclidean ring that is not a field. If the degree function \partial is a norm and if \(\partial(\alpha)=p\), where $p$ is a prime, then \alpha is irreducible 3. The only units in the ring \(\Z[i]\) of Gaussian integers are \(\pm1\) and \(\pm i\) #+END_proposition #+BEGIN_proof 1. Since \(1^2=1\), we have \(\partial(1)^2=\partial(1)\), so that \(\partial(1)=0\) or \(\partial(1)=1\). If \(\partial(1)=0\), then \(\partial(a)=\partial(1)\partial(a)=0\) for every \(a\neq0\). But $R$ is not a field, and so \partial is not identically zero. We conclude that \(\partial(1)=1\) If \(\alpha\in R\) is a unit, then there is \(\beta\in R\) with \(\alpha\beta=1\). Therefore \(\partial(\alpha)\partial(\beta)=1\) and hence \(\partial(\alpha)=1\) For the converse, we begin by showing that there is no element \(\beta\in R\) with \(\partial(\beta)=0\).
If such an element existed, the division algorithm would give \(1=q\beta+r\), where \(r=0\) or \(\partial(r)<\partial(\beta)=0\); the latter is impossible, so \(r=0\) and \beta is a unit. But then \(\partial(\beta)=1\), a contradiction Assume now that \(\partial(\alpha)=1\). The division algorithm gives \begin{equation*} \alpha=q\alpha^2+r \end{equation*} As \(\partial(\alpha^2)=\partial(\alpha)^2=1\), either \(r=0\) or \(\partial(r)<1\), i.e., \(\partial(r)=0\), which cannot occur. Hence $r=0$ and \(\alpha=q\alpha^2\). It follows that \(1=q\alpha\), and so \alpha is a unit 2. If, on the contrary, \(\alpha=\beta\gamma\), where neither \(\beta\) nor \gamma is a unit, then \(p=\partial(\alpha)=\partial(\beta)\partial(\gamma)\). By part 1, \(\partial(\beta)>1\) and \(\partial(\gamma)>1\), contradicting the primality of $p$. 3. If \(\alpha=a+bi\in\Z[i]\) is a unit, then \(1=\partial(\alpha)=a^2+b^2\), so that \(\alpha\in\{\pm1,\pm i\}\); conversely, each of these is a unit. #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_lemma If $p$ is a prime and \(p\equiv 1\mod 4\), then there is an integer $m$ with \begin{equation*} m^2\equiv -1\mod p \end{equation*} #+END_lemma #+BEGIN_proof If \(G=(\I_p)^\times\) is the multiplicative group of nonzero elements in \(\I_p\), then \(\abs{G}=p-1\equiv 0\mod 4\). By Proposition ref:prop2.78, $G$ contains a subgroup $S$ of order $4$. By Exercise ref:ex2.36 either $S$ is cyclic or \(a^2=1\) for all \(a\in S\). Since \(\I_p\) is a field, however, it cannot contain four roots of the quadratic \(x^2-1\). Therefore, $S$ is cyclic, say \(S=\la[m]\ra\) where \([m]\) is the congruence class of $m$ mod $p$. Since \([m]\) has order 4, we have \([m^4]=[1]\) and \([m^2]\neq[1]\), and so \([m^2]=[-1]\), for \([-1]\) is the unique element in \(S\) of order 2. Therefore, \(m^2\equiv -1\mod p\) #+END_proof #+ATTR_LATEX: :options [Fermat's Two-Squares Theorem] #+BEGIN_theorem label:thm3.66 An odd prime $p$ is a sum of two squares, \begin{equation*} p=a^2+b^2 \end{equation*} where $a$ and $b$ are integers, if and only if \(p\equiv 1\mod 4\) #+END_theorem #+BEGIN_proof Assume that \(p=a^2+b^2\). Since $p$ is odd, $a$ and $b$ have different parity; say, $a$ is even and $b$ is odd.
Hence \(a=2m\) and \(b=2n+1\) and \begin{equation*} p=a^2+b^2=4m^2+4n^2+4n+1\equiv1\mod 4 \end{equation*} Conversely, assume that \(p\equiv1\mod4\). By the lemma, there is an integer $m$ s.t. \begin{equation*} p\mid(m^2+1) \end{equation*} In \(\Z[i]\), there is a factorization \(m^2+1=(m+i)(m-i)\) and so \begin{equation*} p\mid(m+i)(m-i) \text{ in }\Z[i] \end{equation*} If \(p\mid(m\pm i)\) in \(\Z[i]\), then there are integers $u$ and $v$ with \(m\pm i=p(u+iv)\). Comparing the imaginary parts gives \(pv=\pm1\), a contradiction. We conclude that $p$ does not satisfy the analog of Euclid's lemma in Theorem ref:thm3.57; it follows from Exercise ref:ex3.62 that $p$ is not irreducible. Hence there is a factorization \begin{equation*} p=\alpha\beta\quad\text{in }\Z[i] \end{equation*} in which neither \(\alpha=a+bi\) nor \(\beta=c+di\) is a unit. Taking norms gives an equation in \(\Z\) \begin{align*} p^2&=\partial(p)=\partial(\alpha\beta)\\ &=\partial(\alpha)\partial(\beta)=(a^2+b^2)(c^2+d^2) \end{align*} By Proposition ref:prop3.64, the only units in \(\Z[i]\) are \(\pm1\) and \(\pm i\), so that any nonzero Gaussian integer that is not a unit has norm $>1$; therefore \(a^2+b^2\neq1\) and \(c^2+d^2\neq1\). Since both factors of \(p^2\) exceed 1, the fundamental theorem of arithmetic gives \(a^2+b^2=p=c^2+d^2\). #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_lemma label:lemma3.67 If \(\alpha\in\Z[i]\) is irreducible, then there is a unique prime number $p$ with \(\alpha\mid p\) in \(\Z[i]\) #+END_lemma #+BEGIN_proof Since \(\partial(\alpha)=\alpha\overline{\alpha}\), we have \(\alpha\mid\partial(\alpha)\). Now factor \(\partial(\alpha)=p_1\dots p_n\) into primes; since \alpha is irreducible, Exercise ref:ex3.62 gives \(\alpha\mid p_i\) for some $i$. If \(\alpha\mid q\) for some prime \(q\neq p_i\), then, writing \(1=(q,p_i)=\sigma q+\tau p_i\), we get \(\alpha\mid1\), forcing \alpha to be a unit, a contradiction #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_proposition Let \(\alpha=a+bi\in\Z[i]\) be neither 0 nor a unit. Then \alpha is irreducible if and only if 1. \alpha is an associate of a prime $p$ in \(\Z\) of the form \(p=4m+3\); or 2.
\alpha is an associate of $1+i$ or its conjugate; or 3. \(\partial(\alpha)=a^2+b^2\) is a prime in \(\Z\) of the form \(4m+1\) #+END_proposition #+BEGIN_proof By Lemma ref:lemma3.67, there is a unique prime number $p$ with \(\alpha\mid p\) in \(\Z[i]\). Since \(\alpha\mid p\), we have \(\partial(\alpha)\mid\partial(p)=p^2\) in \(\Z\), so that \(\partial(\alpha)=p\) or \(\partial(\alpha)=p^2\). 1. \(p\equiv 3\mod4\): By Theorem ref:thm3.66, $p$ is not a sum of two squares, so \(\partial(\alpha)=a^2+b^2\neq p\); hence \(\partial(\alpha)=p^2\). Write \(p=\alpha\beta\); then \(\partial(\alpha)\partial(\beta)=\partial(p)=p^2\), so \(p^2\partial(\beta)=p^2\) and \(\partial(\beta)=1\). Thus \beta is a unit by Proposition ref:prop3.64, and \alpha is an associate of $p$. 2. \(p=2\): Here \(\partial(\alpha)=2\) or \(4\). Since \(2=-i(1+i)^2\) is not irreducible, \(\partial(\alpha)\neq4\) (else \alpha would be an associate of 2, as in case 1); thus \(a^2+b^2=2\), so \(\alpha=\pm1\pm i\), an associate of \(1+i\) or of its conjugate. 3. \(p\equiv1\mod4\): If \(\partial(\alpha)=p^2\), then, as in case 1, \alpha would be an associate of $p$; but Theorem ref:thm3.66 gives \(p=c^2+d^2=(c+di)(c-di)\) with neither factor a unit, contradicting the irreducibility of \alpha. Hence \(\partial(\alpha)=a^2+b^2=p\) #+END_proof #+BEGIN_exercise label:ex3.62 If $R$ is a euclidean ring and \(\pi\in R\) is irreducible, prove that \(\pi\mid\alpha\beta\) implies \(\pi\mid\alpha\) or \(\pi\mid\beta\) #+END_exercise #+BEGIN_proof $R$ is a PID by Theorem ref:thm3.60; now apply Theorem ref:thm3.57. #+END_proof ** Linear Algebra *** Vector Spaces [[index:vector space]] [[index:vector]] #+ATTR_LATEX: :options [] #+BEGIN_definition If $k$ is a field, then a *vector space over* $k$ is an (additive) abelian group $V$ equipped with a *scalar multiplication*; there is a function \(k\times V\to V\), denoted by \((a,v)\mapsto av\), s.t. for all \(a,b\in k\) and all \(u,v\in V\) 1. \(a(u+v)=au+av\) 2. \((a+b)v=av+bv\) 3. \((ab)v=a(bv)\) 4. \(1v=v\) The elements of \(V\) are called *vectors* and the elements of $k$ are called *scalars* #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_examplle 1. Euclidean space \(V=\R^n\) is a vector space over $\R$ 2.
If $R$ is a commutative ring and $k$ is a subring that is a field, then $R$ is a vector space over $k$. For example, if $k$ is a field, then the polynomial ring \(R=k[x]\) is a vector space over $k$. #+END_examplle [[index:subspace]] #+ATTR_LATEX: :options [] #+BEGIN_definition If $V$ is a vector space over a field $k$, then a *subspace* of $V$ is a subset $U$ of $V$ s.t. 1. \(0\in U\) 2. \(u,u'\in U\) imply \(u+u'\in U\) 3. \(u\in U\) and \(a\in k\) imply \(au\in U\) #+END_definition [[index:k-linear combination]] [[index:span]] #+ATTR_LATEX: :options [] #+BEGIN_definition Let $V$ be a vector space over a field $k$. A *\(k\)-linear combination* of a list \(v_1,\dots,v_n\) in $V$ is a vector $v$ of the form \begin{equation*} v=a_1v_1+\dots+a_nv_n \end{equation*} where \(a_i\in k\) for all \(i\) #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_definition If \(X=v_1,\dots,v_m\) is a list in a vector space $V$, then \begin{equation*} \la v_1,\dots,v_m\ra \end{equation*} the set of all the \(k\)-linear combinations of \(v_1,\dots,v_m\), is called the *subspace spanned by \(X\)*. We also say that \(v_1,\dots,v_m\) *spans* \(\la v_1,\dots,v_m\ra\) #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_lemma Let $V$ be a vector space over a field $k$ 1. Every intersection of subspaces of $V$ is itself a subspace 2. If \(X=v_1,\dots,v_m\) is a list in $V$, then the intersection of all the subspaces of $V$ containing $X$ is \(\la v_1,\dots,v_m\ra\), and so \(\la v_1,\dots,v_m\ra\) is the *smallest subspace* of $V$ containing $X$ #+END_lemma #+ATTR_LATEX: :options [] #+BEGIN_examplle Let \(V=\R^2\), let \(e_1=(1,0)\), and let \(e_2=(0,1)\). Then \(V=\la e_1,e_2\ra\) #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_definition A vector space $V$ is called *finite-dimensional* if it is spanned by a finite list; otherwise $V$ is called *infinite-dimensional* #+END_definition *Notation*.
If \(v_1,\dots,v_m\) is a list, then \(v_1,\dots,\hatv_i,\dots,v_m\) is the shorter list with \(v_i\) deleted #+ATTR_LATEX: :options [] #+BEGIN_proposition label:prop3.73 If $V$ is a vector space, then the following conditions on a list \(X=v_1,\dots,v_m\) spanning $V$ are equivalent 1. $X$ is not a shortest spanning list 2. some \(v_i\) is in the subspace spanned by the others; that is \begin{equation*} v_i\in\la v_1,\dots,\hatv_i,\dots,v_m\ra \end{equation*} 3. there are scalars \(a_1,\dots,a_m\) not all zero with \begin{equation*} \displaystyle\sum_{l=1}^ma_lv_l=0 \end{equation*} #+END_proposition [[index:linearly dependent]] #+ATTR_LATEX: :options [] #+BEGIN_definition A list \(X=v_1,\dots,v_m\) in a vector space $V$ is *linearly dependent* if there are scalars \(a_1,\dots,a_m\) not all zero, with \(\sum_{l=1}^ma_lv_l=0\); otherwise $X$ is called *linearly independent* #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_corollary If \(X=v_1,\dots,v_m\) is a list spanning a vector space $V$, then $X$ is a shortest spanning list if and only if $X$ is linearly independent #+END_corollary [[index:basis]] #+ATTR_LATEX: :options [] #+BEGIN_definition A *basis* of a vector space $V$ is a linearly independent list that spans $V$ #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_proposition Let \(X=v_1,\dots,v_n\) be a list in a vector space $V$ over a field $k$. Then $X$ is a basis if and only if each vector in $V$ has a unique expression as a \(k\)-linear combination of vectors in $X$ #+END_proposition #+BEGIN_proof If $X$ is a basis and a vector \(v=\sum a_iv_i=\sum b_iv_i\), then \(\sum(a_i-b_i)v_i=0\), and linear independence gives \(a_i=b_i\) for all $i$. Conversely, if expressions are unique, then $X$ spans $V$, and \(\sum a_iv_i=0=\sum 0v_i\) forces every \(a_i=0\), so $X$ is linearly independent #+END_proof [[index:coordinate set]] #+ATTR_LATEX: :options [] #+BEGIN_definition If \(X=v_1,\dots,v_n\) is a basis of a vector space $V$ and if \(v\in V\), then there are unique scalars \(a_1,\dots,a_n\) with \(v=\sum_{i=1}^na_iv_i\).
The \(n\)-tuple \((a_1,\dots,a_n)\) is called the *coordinate set* of a vector \(v\in V\) relative to the basis $X$ #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_theorem Every finite-dimensional vector space $V$ has a basis #+END_theorem #+BEGIN_proof A finite spanning list \(X\) exists, since $V$ is finite-dimensional. If it is linearly independent, it is a basis; if not, $X$ can be shortened to a spanning list \(X'\) by Proposition ref:prop3.73, and iterating this must end with a linearly independent spanning list #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_lemma label:lemma3.74 Let \(u_1,\dots,u_n\) be elements in a vector space $V$, and let \(v_1,\dots,v_m\in\la u_1,\dots,u_n\ra\). If \(m>n\), then \(v_1,\dots,v_m\) is a linearly dependent list #+END_lemma #+BEGIN_proof Induction on \(n\ge 1\) /Base step/. If \(n=1\), then \(v_i=a_iu_1\) for all $i$; if \(a_1=0\), then \(v_1=0\) and the list is dependent, while if \(a_1\neq0\), then \(a_2v_1-a_1v_2=0\) is a nontrivial dependency /Inductive step/. For \(i=1,\dots,m\) \begin{equation*} v_i=a_{i1}u_1+\dots+a_{in}u_n \end{equation*} We may assume that some \(a_{i1}\neq0\), otherwise \(v_1,\dots,v_m\in\la u_2,\dots,u_n\ra\) and the inductive hypothesis applies. Changing notation if necessary, we may assume \(a_{11}\neq 0\). For each \(i\ge 2\), define \begin{equation*} v_i'=v_i-a_{i1}a_{11}^{-1}v_1\in\la u_2,\dots,u_n\ra \end{equation*} Since \(m-1>n-1\), the inductive hypothesis gives a nontrivial dependency \(\sum_{i\ge2}b_iv_i'=0\); substituting \(v_i'=v_i-a_{i1}a_{11}^{-1}v_1\) yields a nontrivial dependency among \(v_1,\dots,v_m\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary A homogeneous system of linear equations, over a field $k$, with more unknowns than equations has a nontrivial solution. #+END_corollary #+BEGIN_proof An \(n\)-tuple \((\beta_1,\dots,\beta_n)\) is a solution of a system \begin{gather*} \alpha_{11}x_1+\dots+\alpha_{1n}x_n=0\\ \vdots\quad\vdots\quad\vdots\\ \alpha_{m1}x_1+\dots+\alpha_{mn}x_n=0 \end{gather*} if \(\alpha_{i1}\beta_1+\dots+\alpha_{in}\beta_n=0\) for all $i$. In other words, if \(c_1,\dots,c_n\) are the columns of the \(m\times n\) coefficient matrix \(A=[\alpha_{ij}]\), then \begin{equation*} \beta_1c_1+\dots+\beta_nc_n=0 \end{equation*} Note that \(c_i\in k^m\). Now \(k^m\) can be spanned by \(m\) vectors.
Since \(n>m\), \(c_1,\dots,c_n\) is linearly dependent #+END_proof #+ATTR_LATEX: :options [Invariance of Dimension] #+BEGIN_theorem If \(X=x_1,\dots,x_n\) and \(Y=y_1,\dots,y_m\) are bases of a vector space $V$, then \(m=n\) #+END_theorem #+BEGIN_proof Each \(x_i\) lies in \(\la y_1,\dots,y_m\ra\) and each \(y_j\) lies in \(\la x_1,\dots,x_n\ra\), so Lemma ref:lemma3.74 gives \(n\le m\) and \(m\le n\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition A *longest* (or a *maximal*) linearly independent list \(u_1,\dots,u_m\) is a linearly independent list for which there is no vector \(v\in V\) s.t. \(u_1,\dots,u_m,v\) is linearly independent #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_lemma If $V$ is a finite-dimensional vector space, then a longest linearly independent list \(v_1,\dots,v_n\) is a basis of $V$ #+END_lemma #+ATTR_LATEX: :options [] #+BEGIN_proposition Let \(Z=u_1,\dots,u_m\) be a linearly independent list in an \(n\)-dimensional vector space $V$. Then \(Z\) can be extended to a basis #+END_proposition #+ATTR_LATEX: :options [] #+BEGIN_corollary If \(\dim(V)=n\), then any list of \(n+1\) or more vectors is linearly dependent #+END_corollary #+ATTR_LATEX: :options [] #+BEGIN_corollary Let $V$ be a vector space with \(\dim(V)=n\) 1. A list of \(n\) vectors that spans $V$ must be linearly independent 2. Any linearly independent list of \(n\) vectors must span $V$ #+END_corollary #+ATTR_LATEX: :options [] #+BEGIN_corollary Let $U$ be a subspace of a vector space $V$ of dimension $n$ 1. $U$ is finite-dimensional and \(\dim(U)\le\dim(V)\) 2.
If \(\dim(U)=\dim(V)\), then \(U=V\) #+END_corollary #+BEGIN_exercise label:nex2.77 If \(V\) is a finite-dimensional vector space and \(U\) is a subspace, prove that \begin{equation*} \dim(U)+\dim(V/U)=\dim(V) \end{equation*} #+END_exercise #+BEGIN_proof If \(v_1+U,\dots,v_r+U\) is a basis of \(V/U\) and \(u_1,\dots,u_s\) is a basis of \(U\), then \(u_1,\dots,u_s,v_1,\dots,v_r\) is a basis of \(V\): it spans, and any dependency would project to a dependency among the \(v_i+U\). #+END_proof *** Linear Transformations [[index:linear transformation]] #+ATTR_LATEX: :options [] #+BEGIN_definition If \(V\) and $W$ are vector spaces over a field $k$, then a function \(T:V\to W\) is a *linear transformation* if for all vectors \(u,v\in V\), and all scalars \(a\in k\) 1. \(T(u+v)=T(u)+T(v)\) 2. \(T(av)=aT(v)\) #+END_definition [[index:singular]] We say that a linear transformation $T$ is *nonsingular* (or is an *isomorphism*) if $T$ is a bijection. #+ATTR_LATEX: :options [] #+BEGIN_examplle 1. If \theta is an angle, then the rotation about the origin by \theta is a linear transformation \(R_\theta:\R^2\to \R^2\) 2. If \(V\) and $W$ are vector spaces over a field $k$, write \(\Hom_k(V,W)\) for the set of all linear transformations \(V\to W\). It is itself a vector space over $k$ #+END_examplle [[index:general linear group]] #+ATTR_LATEX: :options [] #+BEGIN_definition If $V$ is a vector space over a field $k$, then the *general linear group*, denoted by \(\GL(V)\), is the set of all nonsingular linear transformations \(V\to V\) #+END_definition A composite $ST$ of linear transformations $S$ and $T$ is again a linear transformation #+ATTR_LATEX: :options [] #+BEGIN_theorem label:thm3.92 Let \(v_1,\dots,v_n\) be a basis of a vector space $V$ over a field $k$.
If $W$ is a vector space over $k$ and \(u_1,\dots,u_n\) is a list in $W$, then there exists a unique linear transformation \(T:V\to W\) with \(T(v_i)=u_i\) for all $i$ #+END_theorem #+BEGIN_proof Each \(v\in V\) has a unique expression of the form \(v=\sum_ia_iv_i\), and so \(T:V\to W\) given by \(T(v)=\sum a_iu_i\) is a well-defined function; it is routine to check that $T$ is linear. To prove the uniqueness of $T$, assume that \(S:V\to W\) is a linear transformation with \begin{equation*} S(v_i)=u_i=T(v_i) \end{equation*} Then \begin{align*} S(v)&=S(\sum a_iv_i)=\sum S(a_iv_i)\\ &=\sum a_iS(v_i)=\sum a_iT(v_i)=T(v) \end{align*} #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary If two linear transformations \(S,T:V\to W\) agree on a basis, then \(S=T\) #+END_corollary #+ATTR_LATEX: :options [] #+BEGIN_proposition If \(T:k^n\to k^m\) is a linear transformation, then there exists an \(m\times n\) matrix $A$ s.t. \begin{equation*} T(y)=Ay \end{equation*} for all \(y\in k^n\) (here \(y\) is an \(n\times 1\) column matrix) #+END_proposition #+BEGIN_proof If \(e_1,\dots,e_n\) is the standard basis of \(k^n\) and \(e_1',\dots,e_m'\) is the standard basis of \(k^m\), define \(A=[a_{ij}]\) to be the matrix whose \(j\)th column is the coordinate set of \(T(e_j)\). If \(S:k^n\to k^m\) is defined by \(S(y)=Ay\), then \(S=T\) since they agree on a basis: \(T(e_j)=\sum_ia_{ij}e_i'=Ae_j\) #+END_proof [[index:matrix]] #+ATTR_LATEX: :options [] #+BEGIN_definition Let \(X=v_1,\dots,v_n\) be a basis of $V$ and let \(Y=w_1,\dots,w_m\) be a basis of $W$. If \(T:V\to W\) is a linear transformation, then the *matrix of $T$* is the \(m\times n\) matrix \(A=[a_{ij}]\), whose \(j\)th column \(a_{1j},a_{2j},\dots,a_{mj}\) is the coordinate set of \(T(v_j)\) determined by the \(w\)'s: \(T(v_j)=\sum_{i=1}^ma_{ij}w_i\). The matrix $A$ does depend on the choice of bases $X$ and $Y$: we will write \begin{equation*} A={}_Y[T]_X \end{equation*} In case \(V=W\), we often let the bases \(X=v_1,\dots,v_n\) and \(Y=w_1,\dots,w_m\) coincide.
If \(1_V:V\to V\), given by \(v\mapsto v\), is the identity linear transformation, then \({}_X[1_V]_X\) is the \(n\times n\) *identity matrix $I_n$*, defined by \begin{equation*} I=[\delta_{ij}] \end{equation*} where \(\delta_{ij}\) is the Kronecker delta. A matrix is *nonsingular* if it has an inverse. #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_examplle Let \(T:V\to W\) be a linear transformation, and let \(X=v_1,\dots,v_n\) and \(Y=w_1,\dots,w_m\) be bases of $V$ and $W$, respectively. The matrix for $T$ is set up from the equation \begin{equation*} T(v_j)=a_{1j}w_1+\dots+a_{mj}w_m \end{equation*} #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_examplle 1. Let \(T:\R^2\to \R^2\) be rotation by \(\ang{90}\). The matrix of $T$ relative to the standard basis \(X=(1,0),(0,1)\) is
#+ATTR_LATEX: :mode math :environment bmatrix :math-prefix {}_X[T]_X=
| 0 | -1 |
| 1 | 0 |
However if \(Y=(0,1),(1,0)\), then
#+ATTR_LATEX: :mode math :environment bmatrix :math-prefix {}_Y[T]_Y=
| 0 | 1 |
| -1 | 0 |
2. Let $k$ be a field, let \(T:V\to V\) be a linear transformation on a two-dimensional vector space, and assume that there is some vector \(v\in V\) with \(T(v)\) not a scalar multiple of \(v\). The assumption on $v$ says that the list \(X=v,T(v)\) is linearly independent, and hence it's a basis of $V$. Write \(v_1=v,v_2=Tv\). We compute \({}_X[T]_X\): \begin{equation*} T(v_1)=v_2\quad\text{ and }\quad T(v_2)=av_1+bv_2 \end{equation*} for some \(a,b\in k\). We conclude that
#+ATTR_LATEX: :mode math :environment bmatrix :math-prefix {}_X[T]_X=
| 0 | a |
| 1 | b |
#+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_proposition label:prop3.97 Let \(V\) and \(W\) be vector spaces over a field $k$, and let \(X=v_1,\dots,v_n\) and \(Y=w_1,\dots,w_m\) be bases of \(V\) and \(W\), respectively.
If \(\Hom_k(V,W)\) denotes the set of all linear transformations \(T:V\to W\) and \(\Mat_{m\times n}(k)\) denotes the set of all \(m\times n\) matrices with entries in $k$, then the function \(T\mapsto{}_Y[T]_X\) is a bijection \(\Hom_k(V,W)\to\Mat_{m\times n}(k)\) #+END_proposition #+BEGIN_proof Given a matrix \(A\), its columns define vectors in \(W\); in more detail, if the \(j\)th column of $A$ is \(a_{1j},\dots,a_{mj}\), define \(z_j=\sum_{i=1}^ma_{ij}w_i\). By Theorem ref:thm3.92, there exists a linear transformation \(T:V\to W\) with \(T(v_j)=z_j\) and \({}_Y[T]_X=A\). #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_proposition label:prop3.98 Let \(T:V\to W\) and \(S:W\to U\) be linear transformations. Choose bases \(X=x_1,\dots,x_n\) of \(V\), \(Y=y_1,\dots,y_m\) of \(W\), and \(Z=z_1,\dots,z_l\) of $U$. Then \begin{equation*} {}_Z[S\circ T]_X=({}_Z[S]_Y)({}_Y[T]_X) \end{equation*} #+END_proposition #+BEGIN_proof Let \({}_Y[T]_X=[a_{ij}]\), so that \(T(x_j)=\sum_pa_{pj}y_p\), and let \({}_Z[S]_Y=[b_{qp}]\), so that \(S(y_p)=\sum_qb_{qp}z_q\). Then \begin{align*} ST(x_j)=S(T(x_j))&=S(\displaystyle\sum_{p}a_{pj}y_p)\\ &=\displaystyle\sum_{p}a_{pj}S(y_p)=\displaystyle\sum_{p} \displaystyle\sum_qa_{pj}b_{qp}z_q=\displaystyle\sum_{q}c_{qj}z_q \end{align*} where \(c_{qj}=\sum_{p}b_{qp}a_{pj}\). Therefore \begin{equation*} {}_Z[ST]_X=[c_{qj}]={}_Z[S]_Y{}_Y[T]_X \end{equation*} #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary Matrix multiplication is associative #+END_corollary #+BEGIN_proof Let $A$ be an \(m\times n\) matrix, let \(B\) be an \(n\times p\) matrix, and let \(C\) be a \(p\times q\) matrix.
By Theorem ref:thm3.92, there are linear transformations \begin{equation*} k^q\xrightarrow{T}k^p\xrightarrow{S}k^n\xrightarrow{R}k^m \end{equation*} with \(C=[T],B=[S],A=[R]\). Then \begin{equation*} [R\circ(S\circ T)]=[R][S\circ T]=[R]([S][T])=A(BC) \end{equation*} On the other hand \begin{equation*} [(R\circ S)\circ T]=[R\circ S][T]=([R][S])[T]=(AB)C \end{equation*} #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary label:cor3.100 Let \(T:V\to W\) be a linear transformation of vector spaces \(V\) and \(W\) over a field $k$, and let \(X\) and \(Y\) be bases of \(V\) and \(W\), respectively. If \(T\) is nonsingular, then the matrix of \(T^{-1}\) is the inverse of the matrix of $T$ \begin{equation*} {}_X[T^{-1}]_Y=({}_Y[T]_X)^{-1} \end{equation*} #+END_corollary #+BEGIN_proof \(I={}_Y[1_W]_Y={}_Y[T]_X\,{}_X[T^{-1}]_Y\) and \(I={}_X[1_V]_X={}_X[T^{-1}]_Y\,{}_Y[T]_X\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary label:cor3.101 Let \(T:V\to V\) be a linear transformation on a vector space $V$ over a field $k$. If \(X\) and \(Y\) are bases of $V$, then there is a nonsingular matrix $P$ with entries in $k$ so that \begin{equation*} {}_Y[T]_Y=P({}_X[T]_X)P^{-1} \end{equation*} Conversely, if \(B=PAP^{-1}\), where \(B,A,P\) are \(n\times n\) matrices with entries in $k$ and $P$ is nonsingular, then there is a linear transformation \(T:k^n\to k^n\) and bases $X$ and $Y$ of \(k^n\) s.t. \(B={}_Y[T]_Y\) and \(A={}_X[T]_X\) #+END_corollary #+BEGIN_proof The first statement follows from Proposition ref:prop3.98 and associativity \begin{equation*} {}_Y[T]_Y={}_Y[1_VT1_V]_Y=({}_Y[1_V]_X)({}_X[T]_X)({}_X[1_V]_Y) \end{equation*} Set \(P={}_Y[1_V]_X\). For the converse, let \(E=e_1,\dots,e_n\) be the standard basis of \(k^n\), and define \(T:k^n\to k^n\) by \(T(e_j)=Ae_j\). It follows that \(A={}_E[T]_E\). Now define a basis \(Y=y_1,\dots,y_n\) by \(y_j=P^{-1}e_j\). $Y$ is a basis because $P^{-1}$ is nonsingular.
It suffices to prove that \(B={}_Y[T]_Y\); that is, \(T(y_j)=\sum_ib_{ij}y_i\), where \(B=[b_{ij}]\) \begin{align*} T(y_j)&=Ay_j=AP^{-1}e_j=P^{-1}Be_j\\ &=P^{-1}\displaystyle\sum_{i}b_{ij}e_i =\displaystyle\sum_{i}b_{ij}P^{-1}e_i\\ &=\displaystyle\sum_{i}b_{ij}y_i \end{align*} #+END_proof [[index:similar]] #+ATTR_LATEX: :options [] #+BEGIN_definition Two \(n\times n\) matrices $B$ and \(A\) with entries in a field $k$ are *similar* if there is a nonsingular matrix \(P\) with entries in $k$ with \(B=PAP^{-1}\) #+END_definition Corollary ref:cor3.101 says that two matrices arise from the same linear transformation on a vector space $V$ if and only if they are similar #+ATTR_LATEX: :options [] #+BEGIN_definition If \(T:V\to W\) is a linear transformation, then the *kernel* (or the *null space*) of $T$ is \begin{equation*} \ker T=\{v\in V:T(v)=0\} \end{equation*} and the *image* of $T$ is \begin{equation*} \im T=\{w\in W:w=T(v)\text{ for some }v\in V\} \end{equation*} #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_proposition Let \(T:V\to W\) be a linear transformation 1. \(\ker T\) is a subspace of $V$ and \(\im T\) is a subspace of $W$ 2. $T$ is injective if and only if \(\ker T=\{0\}\) #+END_proposition #+ATTR_LATEX: :options [] #+BEGIN_lemma label:lemma3.103 Let \(T:V\to W\) be a linear transformation 1. If $T$ is nonsingular, then for every basis \(X=v_1,\dots,v_n\) of $V$, we have \(T(X)=T(v_1),\dots,T(v_n)\) a basis of $W$ 2. Conversely, if there exists some basis \(X=v_1,\dots,v_n\) of $V$ for which \(T(X)\) is a basis of $W$, then $T$ is nonsingular #+END_lemma #+BEGIN_proof 1. If \(\sum c_iT(v_i)=0\), then \(T(\sum c_iv_i)=0\) and so \(\sum c_iv_i\in\ker T=\{0\}\). Hence each \(c_i=0\) because $X$ is linearly independent. If \(w\in W\), then the surjectivity of $T$ provides \(v\in V\) with \(w=T(v)\). But \(v=\sum a_iv_i\), and so \(w=T(v)=T(\sum a_iv_i)=\sum a_iT(v_i)\). Therefore $T(X)$ is a basis of $W$ 2. Let \(w\in W\).
Since \(T(X)\) is a basis of $W$, we have \(w=\sum c_iT(v_i)=T(\sum c_iv_i)\). And so $T$ is surjective. If \(\sum c_iv_i\in\ker T\), then \(\sum c_iT(v_i)=0\) and so linear independence gives all \(c_i=0\); hence \(\ker T=\{0\}\). Therefore $T$ is nonsingular #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_theorem If $V$ is an \(n\)-dimensional vector space over a field $k$, then \(V\) is isomorphic to \(k^n\) #+END_theorem #+BEGIN_proof Choose a basis \(v_1,\dots,v_n\) of $V$. If \(e_1,\dots,e_n\) is the standard basis of \(k^n\), then Theorem ref:thm3.92 says that there is a linear transformation \(T:V\to k^n\) with \(T(v_i)=e_i\); by Lemma ref:lemma3.103, $T$ is nonsingular #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary Two finite-dimensional vector spaces $V$ and $W$ over a field $k$ are isomorphic if and only if \(\dim(V)=\dim(W)\) #+END_corollary #+ATTR_LATEX: :options [] #+BEGIN_proposition label:prop3.106 Let $V$ be a finite-dimensional vector space with \(\dim(V)=n\), and let \(T:V\to V\) be a linear transformation. The following statements are equivalent 1. $T$ is an isomorphism 2. $T$ is surjective 3. $T$ is injective #+END_proposition #+BEGIN_proof \(2\to 3\). Let \(v_1,\dots,v_n\) be a basis of $V$. Since $T$ is surjective, there are vectors \(u_1,\dots,u_n\) with \(Tu_i=v_i\). We claim that \(u_1,\dots,u_n\) are linearly independent: if \(\sum c_iu_i=0\), then \(0=T(\sum c_iu_i)=\sum c_iv_i\), and so each \(c_i=0\); hence \(u_1,\dots,u_n\) is a basis of $V$. To show that $T$ is injective, it suffices to show that \(\ker T=\{0\}\): if \(T(v)=0\), write \(v=\sum c_iu_i\); then \(0=T(v)=\sum c_iv_i\), so that each \(c_i=0\) and \(v=0\). \(3\to 1\). Let \(v_1,\dots,v_n\) be a basis of $V$. If \(c_1,\dots,c_n\) are scalars not all 0, then \(\sum c_iv_i\neq0\). Since $T$ is injective, it follows that \(\sum c_iT(v_i)\neq0\) and so \(Tv_1,\dots,Tv_n\) are linearly independent; being $n$ linearly independent vectors in an \(n\)-dimensional space, they form a basis of $V$. Therefore Lemma ref:lemma3.103 shows that $T$ is an isomorphism #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary If \(A\) and $B$ are \(n\times n\) matrices with \(AB=I\), then \(BA=I\).
Therefore $A$ is nonsingular with inverse $B$ #+END_corollary #+BEGIN_proof There are linear transformations \(T,S:k^n\to k^n\) with \([T]=A,[S]=B\), and \(AB=I\) gives \begin{equation*} [TS]=[T][S]=[1_{k^n}] \end{equation*} Since \(T\mapsto[T]\) is a bijection, by Proposition ref:prop3.97, it follows that \(TS=1_{k^n}\). Hence $T$ is a surjection and \(S\) is an injection by Proposition ref:prop1.47. But Proposition ref:prop3.106 says that $T,S$ are both isomorphisms, so that \(S=T^{-1}\) and \(TS=1_{k^n}=ST\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition The set of all nonsingular \(n\times n\) matrices with entries in $k$ is denoted by \(\GL(n,k)\) #+END_definition It's easy to prove that \(\GL(n,k)\) is a group #+ATTR_LATEX: :options [] #+BEGIN_proposition Let $V$ be an \(n\)-dimensional vector space over a field $k$, and let \(X=v_1,\dots,v_n\) be a basis of $V$. Then \(\mu:\GL(V)\to\GL(n,k)\) defined by \(T\mapsto[T]={}_X[T]_X\) is an isomorphism #+END_proposition #+BEGIN_proof By Proposition ref:prop3.97, the function \(\mu':T\mapsto[T]\) is a bijection \begin{equation*} \Hom_k(V,V)\to\Mat_n(k) \end{equation*} If \(T\in\GL(V)\), then \([T]\) is a nonsingular matrix by Corollary ref:cor3.100; that is, if \mu is the restriction of \(\mu'\), then \(\mu:\GL(V)\to\GL(n,k)\) is an injective homomorphism. If \(A\in\GL(n,k)\), then \(A=[T]\) for some \(T:V\to V\). It suffices to show that \(T\) is an isomorphism; that is, \(T\in\GL(V)\). Since \([T]\) is a nonsingular matrix, there is a matrix $B$ with \([T]B=I\). Now \(B=[S]\) for some \(S:V\to V\) and \begin{equation*} [TS]=[T][S]=I=[1_V] \end{equation*} Hence \(TS=1_V\), so that $T$ is surjective, and Proposition ref:prop3.106 shows that $T$ is an isomorphism #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition A linear transformation $T:V\to V$ is a *scalar transformation* if there is \(c\in k\) with \(T(v)=cv\) for all \(v\in V\); that is \(T=c1_V\). A *scalar matrix* is a matrix of the form \(cI\) #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_corollary 1.
The center of the group \(\GL(V)\) consists of all the nonsingular scalar transformations 2. The center of the group \(\GL(n,k)\) consists of all the nonsingular scalar matrices #+END_corollary ** Quotient Rings and Finite Fields #+ATTR_LATEX: :options [] #+BEGIN_theorem label:thm3.110 If $I$ is an ideal in a commutative ring $R$, then the additive abelian group \(R/I\) can be made into a commutative ring in such a way that the natural map \(\pi:R\to R/I\) is a surjective ring homomorphism #+END_theorem #+BEGIN_proof Define multiplication on the additive abelian group \(R/I\) by \begin{equation*} (a+I)(b+I)=ab+I \end{equation*} #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition The commutative ring \(R/I\) constructed in Theorem ref:thm3.110 is called the *quotient ring* of $R$ modulo $I$ #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_corollary If $I$ is an ideal in a commutative ring $R$, then there are a commutative ring $A$ and a ring homomorphism \(\pi:R\to A\) with \(I=\ker\pi\) #+END_corollary #+BEGIN_proof Natural map \(\pi:R\to R/I\) #+END_proof #+ATTR_LATEX: :options [First Isomorphism Theorem] #+BEGIN_theorem If \(f:R\to A\) is a homomorphism of rings, then \(\ker f\) is an ideal in $R$, \(\im f\) is a subring of $A$, and \begin{equation*} R/\ker f\cong\im f \end{equation*} #+END_theorem [[index:prime field]] #+ATTR_LATEX: :options [] #+BEGIN_definition If $k$ is a field, the intersection of all the subfields of $k$ is called the *prime field* of $k$ #+END_definition Every subfield of \(\C\) contains $\Q$ and so the prime field of \(\C\) and of $\R$ is $\Q$. *Notation*. From now on, we will denote \(\I_p\) by \(\F_p\) when we are regarding it as a field. 
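A quick computational sanity check on this notation: the following Python sketch (the helper names `inverse_mod` and `is_field` are ours, not the text's) verifies that every nonzero residue mod a prime is invertible, so that \(\I_p\) really is a field, while a composite modulus fails.

```python
# Sketch: check that Z/mZ is a field exactly when every nonzero
# residue is invertible.  inverse_mod uses the extended Euclidean
# algorithm; both helper names are illustrative only.

def inverse_mod(a, m):
    """Return b with a*b = 1 (mod m), or raise if no inverse exists."""
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError(f"{a} is not invertible mod {m}")
    return old_s % m

def is_field(m):
    """True iff every residue 1, ..., m-1 is invertible mod m."""
    try:
        return all(a * inverse_mod(a, m) % m == 1 for a in range(1, m))
    except ValueError:
        return False

print(is_field(7), is_field(6))  # True False: F_7 is a field, Z/6Z is not
```

(For a prime modulus one could equally use Python's built-in `pow(a, -1, p)`.)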
#+ATTR_LATEX: :options [] #+BEGIN_proposition If $k$ is a field, then its prime field is isomorphic to $\Q$ or to $\F_p$ for some prime $p$ #+END_proposition #+BEGIN_proof Consider the ring homomorphism \(\chi:\Z\to k\) defined by \(\chi(n)=n\epsilon\), where we denote the *one* in $k$ by \epsilon. Since every ideal in \(\Z\) is principal, there is an integer $m$ with \(\ker\chi=(m)\). If \(m=0\), then \chi is an injection, and so there is an isomorphic copy of \(\Z\) that is a subring of $k$. By Exercise ref:ex3.47, there is a field \(Q\cong\Frac(\Z)=\Q\) with \(\im\chi\subseteq Q\subseteq k\). Now \(Q\) is the prime field of $k$, for it is the subfield generated by \epsilon. If \(m\neq0\), the first isomorphism theorem gives \(\I_m=\Z/(m)\cong\im\chi\subseteq k\). Since $k$ is a field, \(\im\chi\) is a domain, and so Proposition ref:prop3.6 gives $m$ prime. If we now write $p$ instead of $m$, then \(\im\chi=\{0,\epsilon,2\epsilon,\dots,(p-1)\epsilon\}\) is a subfield of $k$ isomorphic to \(\F_p\) #+END_proof [[index:characteristic]] #+ATTR_LATEX: :options [] #+BEGIN_definition A field $k$ has *characteristic 0* if its prime field is isomorphic to \(\Q\); a field $k$ has *characteristic $p$* if its prime field is isomorphic to \(\F_p\) for some prime $p$ #+END_definition The fields \(\Q,\R,\C\) have characteristic 0 #+ATTR_LATEX: :options [] #+BEGIN_proposition If $k$ is a field of characteristic \(p>0\), then \(pa=0\) for all \(a\in k\) #+END_proposition #+BEGIN_proof \(p\cdot 1=0\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_proposition If \(k\) is a finite field, then \(\abs{k}=p^n\) for some prime $p$ and some \(n\ge1\) #+END_proposition #+BEGIN_proof The prime field \(P\) of $k$ cannot be the infinite field $\Q$, and so \(P\cong\F_p\) for some $p$. Now $k$ is a vector space over $P$, and so it is a vector space over \(\F_p\).
Clearly, $k$ is finite-dimensional, and if \(\dim_{\F_p}(k)=n\), then \(\abs{k}=p^n\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_proposition label:prop3.116 If $k$ is a field and \(I=(p(x))\), where \(p(x)\) is a nonzero polynomial in $k[x]$, then the following are equivalent: \(p(x)\) is irreducible; \(k[x]/I\) is a field; \(k[x]/I\) is a domain #+END_proposition #+BEGIN_proof Assume \(p(x)\) is irreducible. Note that \(I=(p(x))\) is a proper ideal so that the /one/ in \(k[x]/I\), namely, \(1+I\), is not zero. If \(f(x)+I\in k[x]/I\) is nonzero, then \(f(x)\not\in I\). Since \(p(x)\) is irreducible, by Lemma ref:lemma3.36, $p$ and $f$ are relatively prime. By Theorem ref:thm3.31, there are polynomials \(s\) and $t$ s.t. \(1=sp+tf\). Thus \(tf-1\in I\) and so \(1+I=tf+I=(t+I)(f+I)\). Therefore every nonzero element of \(k[x]/I\) has an inverse and so \(k[x]/I\) is a field. Every field is a domain. Finally, suppose \(k[x]/I\) is a domain and \(p(x)\) is not irreducible. Then \(p(x)=g(x)f(x)\) with \(\deg(p)> \deg(g)\), \(\deg(p)>\deg(f)\). It follows that \(f+I\) and \(g+I\) are nonzero but \((f+I)(g+I)=p+I\) is zero, contradicting the fact that \(k[x]/I\) is a domain. #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_proposition label:prop3.117 label:nprop2.141 Let $k$ be a field, let \(p(x)\in k[x]\) be a monic irreducible polynomial of degree $d$, let \(K=k[x]/I\), where \(I=(p(x))\), and let \(\beta=x+I\in K\) 1. $K$ is a field and \(k'=\{a+I:a\in k\}\) is a subfield of $K$ isomorphic to $k$ 2. \beta is a root of \(p(x)\) in $K$ 3. if \(g(x)\in k[x]\) and \beta is a root of $g(x)$, then \(p(x)\mid g(x)\) in $k[x]$ 4. \(p(x)\) is the unique monic irreducible polynomial in $k[x]$ having \beta as a root 5. The list \(1,\beta,\beta^2,\dots,\beta^{d-1}\) is a basis of $K$ as a vector space over $k$, and so \(\dim_k(K)=d\) #+END_proposition #+BEGIN_proof 2.
[@2] To see that \beta is a root of \(p(x)\) in $K$, write \(p(x)=a_0+a_1x+\dots+a_{d-1}x^{d-1}+x^d\); then \begin{align*} p(\beta)&=(a_0+I)+(a_1+I)\beta+\dots+(a_{d-1}+I)\beta^{d-1}+(1+I)\beta^{d}\\ &=(a_0+I)+(a_1+I)(x+I)+\dots+(a_{d-1}+I)(x+I)^{d-1}+(1+I)(x+I)^d\\ &=a_0+a_1x+\dots+x^d+I=p(x)+I=I \end{align*} 3. If \(p(x)\nmid g(x)\), then \(p(x)\) and \(g(x)\) are relatively prime since $p(x)$ is irreducible. Hence \(1=p(x)s(x)+g(x)t(x)\). Since \(k[x]\subseteq K[x]\), we may regard this as an equation in $K[x]$; evaluating at \beta gives \(1=p(\beta)s(\beta)+g(\beta)t(\beta)=0\), a contradiction 5. [@5] Every element of $K$ has the form \(f(x)+I\), where \(f(x)\in k[x]\). By the division algorithm, there are polynomials \(g(x),r(x)\in k[x]\) with \(f(x)=g(x)p(x)+r(x)\) and either \(r(x)=0\) or \(\deg(r)< d\). Hence \(f(x)+I=r(x)+I=r(\beta)\), and so \(1,\beta,\dots,\beta^{d-1}\) spans $K$. To prove uniqueness, suppose \begin{equation*} b_0+b_1\beta+\dots+b_{d-1}\beta^{d-1}=c_0+c_1\beta+\dots+c_{d-1}\beta^{d-1} \end{equation*} Define \(g(x)=\sum_{i=0}^{d-1}(c_i-b_i)x^i\); if \(g(x)=0\) we are done. Otherwise \(\deg(g)\) is defined and \(\deg(g)<d\). But \beta is a root of \(g(x)\), so that part 3 gives \(p(x)\mid g(x)\) and \(d\le\deg(g)\), a contradiction #+END_proof #+ATTR_LATEX: :options [Kronecker] #+BEGIN_theorem If $k$ is a field and \(f(x)\in k[x]\) is nonconstant, then there exists a field $K$ containing $k$ as a subfield over which \(f(x)\) is a product of linear factors #+END_theorem #+BEGIN_proof By induction on \(\deg(f)\). If \(\deg(f)=1\), then \(f(x)\) is already linear and we may take \(K=k\). If \(\deg(f)>1\), write \(f(x)=p(x)g(x)\) where \(p(x)\) is irreducible. Now Proposition ref:prop3.117 provides a field $F$ containing $k$ and a root \(z\) of \(p(x)\). Hence in \(F[x]\), we have \(p(x)=(x-z)h(x)\) and \(f(x)=(x-z)h(x)g(x)\). By induction, there is a field $K$ containing $F$ so that \(h(x)g(x)\), and hence \(f(x)\), is a product of linear factors in \(K[x]\) #+END_proof [[index:split]] #+ATTR_LATEX: :options [] #+BEGIN_definition Let $k$ be a subfield of a field $K$, and let \(f(x)\in k[x]\). We say that \(f(x)\) *splits over $K$* if \begin{equation*} f(x)=a(x-z_1)\cdots(x-z_n) \end{equation*} where \(z_1,\dots,z_n\in K\) and \(a\in k\) is nonzero If \(f(x)\in k[x]\) is a polynomial, then a field extension \(E/k\) is called a *splitting field* of \(f(x)\) *over* $k$ if \(f(x)\) splits over $E$, but \(f(x)\) does not split over any proper subfield of $E$ #+END_definition \(f(x)=x^2+1\in\Q[x]\). \(f(x)\) splits over \(\C\).
\(\Q(i)\) is a splitting field of \(f(x)\) over \(\Q\). \(\R(i)=\C\) is a splitting field of \(f(x)\) over \(\R\) #+ATTR_LATEX: :options [] #+BEGIN_corollary Let $k$ be a field, and let \(f(x)\in k[x]\). Then a splitting field of $f(x)$ over $k$ exists. #+END_corollary #+BEGIN_proof By Kronecker's theorem, there is a field \(K/k\) over which \(f(x)\) splits; the subfield of $K$ obtained by adjoining all the roots of \(f(x)\) to $k$ is a splitting field of \(f(x)\) over $k$ #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_examplle label:nexample2.151 Let \(f(x)=x^n-1\in k[x]\) for some field $k$, and let $E/k$ be a splitting field. In Theorem ref:nthm2.46, we saw that the group \(\Gamma_n\) of all \(n\)th roots of unity in $E$ is a cyclic group: \(\Gamma_n=\la\omega\ra\), where \omega is a primitive \(n\)th root of unity. #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_proposition label:prop3.126 Let $p$ be a prime, and let $k$ be a field. If \(f(x)=x^p-c\in k[x]\) and \alpha is a \(p\)th root of $c$ (in some splitting field), then either \(f(x)\) is irreducible in \(k[x]\) or $c$ has a \(p\)th root in $k$. In either case, if $k$ contains the \(p\)th roots of unity, then \(k(\alpha)\) is a splitting field of $f(x)$ #+END_proposition #+BEGIN_proof By Kronecker's theorem, there exists a field extension \(K/k\) that contains all the roots of \(f(x)\); that is, $K$ contains all the \(p\)th roots of $c$. If \(\alpha^p=c\), then every such root has the form \(\omega\alpha\), where \omega is a \(p\)th root of unity. Suppose that \(f(x)\) is not irreducible in \(k[x]\); then there is a factorization \(f(x)=g(x)h(x)\) in \(k[x]\) with \(0<d=\deg(g)<p\). Now the constant term $b$ of $g(x)$ is, up to sign, the product of $d$ of the roots of $f(x)$: \begin{equation*} \pm b=\alpha^d\omega \end{equation*} where \omega, which is a product of $d$ \(p\)th roots of unity, is itself a \(p\)th root of unity.
It follows that \begin{equation*} (\pm b)^p=\alpha^{dp}=c^d \end{equation*} But $p$ being prime and \(d<p\) give \((d,p)=1\), so there are integers $s$ and $t$ with \(1=sd+tp\). Hence \begin{equation*} c=c^{sd+tp}=(c^d)^s(c^t)^p=((\pm b)^sc^t)^p \end{equation*} and so $c$ has a \(p\)th root in $k$. Finally, if $k$ contains the \(p\)th roots of unity, then all the roots \(\omega\alpha\) of \(f(x)\) lie in \(k(\alpha)\), and so \(k(\alpha)\) is a splitting field of \(f(x)\) #+END_proof Define the *Möbius function* \mu by \begin{equation*} \mu(n)=\begin{cases} 1&\text{if }n=1\\ 0&\text{if }n=p_1^{e_1}\cdots p_n^{e_n}\text{ and some }e_i>1\\ (-1)^n&\text{if }1=e_1=e_2=\dots=e_n \end{cases} \end{equation*} If \(N_n\) is the number of irreducible polynomials in \(\F_p[x]\) of degree $n$, then \begin{equation*} N_n=\frac{1}{n}\sum_{d\mid n}\mu(d)p^{n/d} \end{equation*} #+ATTR_LATEX: :options [] #+BEGIN_examplle 1. Consider a field with four elements: \begin{equation*} \F_4=\left\{\begin{bsmallmatrix}a&b\\b&a+b\end{bsmallmatrix} :a,b\in\F_2 \right\} \end{equation*} On the other hand, we may construct a field of order 4 as the quotient \(F=\F_2[x]/(q(x))\), where \(q(x)\in\F_2[x]\) is the irreducible polynomial \(x^2+x+1\). By Proposition ref:nprop2.141, $F$ is a field consisting of all \(a+b\beta\), where \(\beta=x+(q(x))\) is a root of \(q(x)\) and \(a,b\in\F_2\). Since \(\beta^2+\beta+1=0\), we have \(\beta^2=-\beta-1=\beta+1\); moreover, \(\beta^3=\beta\beta^2=\beta(\beta+1)=\beta^2+\beta=1\). There is a ring isomorphism \(\varphi:\F_4\to F\) with \(\varphi(\begin{bsmallmatrix}a&b\\b&a+b\end{bsmallmatrix})=a+b\beta\) 2. There are three monic irreducible quadratics in \(\F_3[x]\), namely, \begin{equation*} p(x)=x^2+1,\quad q(x)=x^2+x-1,\quad r(x)=x^2-x-1 \end{equation*} each of which gives rise to a field with \(9=3^2\) elements. Let us look at the first two in more detail. Proposition ref:nprop2.141 says that \(E=\F_3[x]/(p(x))\) is given by \begin{equation*} E=\{a+b\alpha:\text{where }\alpha^2+1=0\} \end{equation*} Similarly, if \(F=\F_3[x]/(q(x))\), then \begin{equation*} F=\{a+b\beta:\text{where }\beta^2+\beta-1=0\} \end{equation*} These two fields are isomorphic.
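The claim can also be checked mechanically. Here is a minimal Python sketch (the pair representation and the helper name `mul` are ours) that multiplies in \(F=\F_3[x]/(q(x))\) and confirms that \(1-\beta\) squares to \(-1\), the defining relation of \(E\):

```python
# Sketch: arithmetic in F = F_3[x]/(x^2 + x - 1).  An element a + b*beta
# is stored as the pair (a, b); the relation beta^2 + beta - 1 = 0 gives
# beta^2 = 1 - beta, which is used to reduce products.

P = 3  # the characteristic

def mul(u, v):
    """Multiply a + b*beta by c + d*beta, reducing with beta^2 = 1 - beta."""
    a, b = u
    c, d = v
    # (a + b*beta)(c + d*beta) = ac + (ad + bc)*beta + bd*(1 - beta)
    return ((a * c + b * d) % P, (a * d + b * c - b * d) % P)

one_minus_beta = (1, -1 % P)                # the element 1 - beta
print(mul(one_minus_beta, one_minus_beta))  # (2, 0), i.e. -1 in F_3
```

Since \((1-\beta)^2=-1\), sending \(\alpha\mapsto1-\beta\) respects the relation \(\alpha^2+1=0\), which is the content of the isomorphism described next.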
The map \(\varphi:E\to F\), defined by \(\varphi(a+b\alpha)=a+b(1-\beta)\), is an isomorphism, for \((1-\beta)^2=1-2\beta+\beta^2=1-2\beta+(1-\beta)=2-3\beta=-1\) in $F$ #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_lemma label:nlemma2.156 Let \(\varphi:k\to k'\) be an isomorphism of fields, and let \(\varphi^*:k[x]\to k'[x]\) be the ring isomorphism: \(\varphi^*:g(x)=a_0+a_1x+\dots+a_nx^n\mapsto g^*(x)=\varphi(a_0)+\dots+\varphi(a_n)x^n\). Let \(f(x)\in k[x]\) and \(f^*(x)=\varphi^*(f(x))\in k'[x]\). If $E$ is a splitting field of \(f(x)\) over $k$ and \(E'\) is a splitting field of \(f^*(x)\) over $k'$, then there is an isomorphism \(\Phi:E\to E'\) extending \varphi: \begin{tikzcd} E\arrow[r,dashed,"\Phi"]\arrow[d,no head]&E'\\ k\arrow[r,"\varphi"]&k'\arrow[u,no head] \end{tikzcd} #+END_lemma #+BEGIN_proof Induction on \(d=[E:k]\). If \(d=1\), then \(f(x)\) is a product of linear polynomials in \(k[x]\), and we may take \(\Phi=\varphi\). If \(d>1\), choose a root \(z\) of \(f(x)\) in $E$ that is not in $k$, and let \(p(x)=\irr(z,k)\). Now \(\deg(p)>1\) and \([k(z):k]=\deg(p)\). Let \(p^*(x)=\varphi^*(p(x))\) be the corresponding monic irreducible polynomial in \(k'[x]\), and let \(z'\) be a root of \(p^*(x)\) in \(E'\). By a generalization of Proposition ref:nthm2.144, there is an isomorphism \(\widetilde{\varphi}:k(z)\to k'(z')\) extending \varphi with \(\widetilde{\varphi}:z\mapsto z'\). We may regard \(f(x)\) as a polynomial with coefficients in \(k(z)\), for \(k\subseteq k(z)\) implies \(k[x]\subseteq k(z)[x]\). We claim that \(E\) is a splitting field of \(f(x)\) over \(k(z)\); that is \begin{equation*} E=k(z)(z_1,\dots,z_n) \end{equation*} where \(z_1,\dots,z_n\) are the roots of \(f(x)/(x-z)\). Similarly, \(E'\) is a splitting field of \(f^*(x)\) over \(k'(z')\).
But \([E:k(z)]<[E:k]\), so that the inductive hypothesis gives an isomorphism \(\Phi:E\to E'\) that extends \(\widetilde{\varphi}\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_theorem If $k$ is a field and \(f(x)\in k[x]\), then any two splitting fields of \(f(x)\) over $k$ are isomorphic via an isomorphism that fixes \(k\) pointwise #+END_theorem #+ATTR_LATEX: :options [Moore] #+BEGIN_corollary Any two finite fields having exactly \(p^n\) elements are isomorphic #+END_corollary #+BEGIN_proof If $E$ is a field with \(q=p^n\) elements, then Lagrange's Theorem applied to the multiplicative group \(E^\times\) shows that \(a^{q-1}=1\) for every \(a\in E^\times\). It follows that every element of $E$ is a root of \(f(x)=x^q-x\in\F_p[x]\), and so $E$ is a splitting field of \(f(x)\) over \(\F_p\); by the preceding theorem, any two such fields are isomorphic #+END_proof Finite fields are often called *Galois fields*. We speak of the field with \(q\) elements, where \(q=p^n\) is a power of a prime $p$, and we denote it by \begin{equation*} \F_q \end{equation*} #+BEGIN_exercise label:ex3.83 For every commutative ring $R$, prove that \(R[x]/(x)\cong R\) #+END_exercise * Fields :PROPERTIES: :PP: 212 :END: ** Insolvability of the Quintic :PROPERTIES: :PP: 217 :END: By Kronecker's theorem, for each monic \(f(x)\in k[x]\), where $k$ is a field, there is an extension field \(K/k\) and roots \(z_1,\dots,z_n\in K\) with \begin{equation*} f(x)=x^n+a_{n-1}x^{n-1}+\dots+a_1x+a_0=(x-z_1)\dots(x-z_n) \end{equation*} Hence we have \begin{equation*} \begin{cases} a_{n-1}=-\displaystyle\sum_{i}z_i\\ a_{n-2}=\displaystyle\sum_{i<j}z_iz_j\\ \quad\vdots\\ a_0=(-1)^nz_1\cdots z_n \end{cases} \end{equation*} [[index:separable]] [[index:Galois group]] A polynomial \(f(x)\in k[x]\) is *separable* if it has no repeated roots in a splitting field. If \(E/k\) is a field extension, the *Galois group* \(\Gal(E/k)\) is the group of all automorphisms of $E$ that fix $k$ pointwise. #+ATTR_LATEX: :options [] #+BEGIN_theorem label:nthm3.7 Let \(\varphi:k\to k'\) be an isomorphism of fields, let \(f(x)\in k[x]\) be separable, and let \(f^*(x)=\varphi^*(f(x))\in k'[x]\). If $E$ is a splitting field of \(f(x)\) over $k$ and \(E^*\) is a splitting field of \(f^*(x)\) over \(k'\), then 1. there are exactly \([E:k]\) isomorphisms \(\Phi:E\to E^*\) extending \varphi 2. \(\abs{\Gal(E/k)}=[E:k]\) #+END_theorem #+BEGIN_proof 1. We proceed by induction on \([E:k]\). If \([E:k]=1\), then \(f(x)\) splits over $k$ and \(\Phi=\varphi\) is the unique extension. If \([E:k]>1\), let \(f(x)=p(x)g(x)\), where \(p(x)\) is an irreducible factor of largest degree, say $d$. We may assume \(d>1\), otherwise \(f(x)\) splits over $k$ and \([E:k]=1\). Choose a root \alpha of \(p(x)\). If \(\widetilde{\varphi}:k(\alpha)\to E^*\) is any extension of \varphi, then \(\widetilde{\varphi}(\alpha)\) is a root \(\alpha'\) of \(p^*(x)\), by Proposition ref:prop4.1.
Since \(f^*(x)\) is separable, \(p^*(x)\) has exactly $d$ roots \(\alpha'\in E^*\); by Lemma ref:lemma4.2 and Theorem ref:thm3.120, there are exactly $d$ isomorphisms \(\widetilde{\varphi}:k(\alpha)\to k'(\alpha')\) extending \varphi, one for each \(\alpha'\). Now $E$ is also a splitting field of \(f(x)\) over $k(\alpha)$, because $E$ arises from \(k(\alpha)\) by adjoining all the roots of \(f(x)\); similarly, \(E^*\) is a splitting field of \(f^*(x)\) over \(k'(\alpha')\). Now \([E:k(\alpha)]<[E:k]\) because \([E:k(\alpha)]=[E:k]/d\) (\(1,\alpha,\dots,\alpha^{d-1}\) is a basis of \(k(\alpha)\) over $k$, by Proposition ref:prop3.117), so that induction shows that each of the $d$ isomorphisms \(\widetilde{\varphi}\) has exactly \([E:k]/d\) extensions \(\Phi:E\to E^*\). Thus we have constructed \([E:k]\) isomorphisms extending \varphi. But there are no others, because every \tau extending \varphi has \(\tau|k(\alpha)=\widetilde{\varphi}\) for some \(\widetilde{\varphi}:k(\alpha)\to k'(\alpha')\) 2. Take \(k=k'\), \(E=E^*\), and \(\varphi=1_k\) in part 1 #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary label:ncor3.9 Let \(E/k\) be a splitting field of a separable polynomial \(f(x)\in k[x]\) of degree $n$. If \(f(x)\) is irreducible, then \(n\mid\abs{\Gal(E/k)}\) #+END_corollary #+BEGIN_proof By Theorem ref:nthm3.7, \(\abs{\Gal(E/k)}=[E:k]\). Let \(\alpha\in E\) be a root of \(f(x)\). Since \(f(x)\) is irreducible, \([k(\alpha):k]=n\), and \begin{equation*} [E:k]=[E:k(\alpha)][k(\alpha):k]=n[E:k(\alpha)] \end{equation*} #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_proposition The polynomial \(f(x)=x^4+1\) is irreducible in \(\Q[x]\), yet it factors in \(\F_p[x]\) for every prime $p$ #+END_proposition #+BEGIN_proof To see that \(f(x)\) is irreducible, it suffices to show that \(f(x-1)\) is irreducible by Lemma ref:nlemma2.74. But \(f(x-1)=x^4-4x^3+6x^2-4x+2\) is irreducible, by Eisenstein's Criterion. We now show, for every prime $p$, that \(x^4+1\) factors in \(\F_p[x]\).
If \(p=2\), then \(x^4+1=(x+1)^4\) and so we may assume that $p$ is an odd prime. It is easy to check that every square \(n^2\) is congruent to \(0,1\) or \(4\mod 8\); since $p$ is odd, we must have \(p^2\equiv1\mod8\). Therefore, \(\abs{(\F_{p^2})^\times}=p^2-1\) is divisible by 8. But \((\F_{p^2})^\times\) is a cyclic group, by Theorem ref:nthm2.46, and so it has a (cyclic) subgroup of order 8, by Lemma ref:nlemma1.92. It follows that \(\F_{p^2}\) contains all the 8th roots of unity; in particular, \(\F_{p^2}\) contains all the roots of \(x^4+1\). Hence, the splitting field \(E_p\) of \(x^4+1\) over \(\F_{p}\) is contained in \(\F_{p^2}\), and so \([E_p:\F_p]\le2\). If \(x^4+1\) were irreducible in \(\F_p[x]\), then \(4\mid[E_p:\F_p]\), by Corollary ref:ncor3.9, a contradiction. Therefore, \(x^4+1\) factors in \(\F_p[x]\) for every prime $p$. #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_examplle 1. Let \(f(x)=x^3-1\in\Q[x]\). Now \(f(x)=(x-1)(x^2+x+1)\). The roots of \(x^2+x+1\) are \(\omega\) and \(\bar{\omega}\). The splitting field of \(f(x)\) is \(\Q(\omega)\), for \(\omega^2=\bar{\omega}\), and so \([\Q(\omega):\Q]=2\). Therefore \(\abs{\Gal(\Q(\omega)/\Q)}=2\). Its nontrivial element is complex conjugation. 2. Let \(f(x)=x^2-2\in\Q[x]\). Now \(f(x)\) is irreducible with roots \(\pm\sqrt{2}\), so that \(E=\Q(\sqrt{2})\) is a splitting field. \(\abs{\Gal(E/\Q)}=2\). Now every element of $E$ has a unique expression of the form \(a+b\sqrt{2}\). 3. Let \(g(x)=x^3-2\in\Q[x]\). The roots of \(g(x)\) are \(\beta,\omega\beta,\omega^2\beta\), where \(\beta=\sqrt[3]{2}\) and \omega is a primitive cube root of unity. The splitting field of \(g(x)\) is \(E=\Q(\beta,\omega)\). Note that \begin{equation*} [E:\Q]=[E:\Q(\beta)][\Q(\beta):\Q]=3[E:\Q(\beta)] \end{equation*} since \(g(x)\) is irreducible over $\Q$. Now \(E\neq\Q(\beta)\), for every element of \(\Q(\beta)\) is real. Therefore \([E:\Q]=\abs{\Gal(E/\Q)}>3\).
On the other hand, we know that \(\Gal(E/\Q)\) is isomorphic to a subgroup of \(S_3\), and so we must have \(\Gal(E/\Q)\cong S_3\) 4. We examined \(f(x)=x^4-10x^2+1\in\Q[x]\), which is irreducible; in fact, \(f(x)=\irr(\beta,\Q)\), where \(\beta=\sqrt{2}+\sqrt{3}\). If \(E=\Q(\beta)\), then \([E:\Q]=4\); moreover, $E$ is a splitting field of $f(x)$. It follows that \(\abs{G}=\abs{\Gal(E/\Q)}=4\); hence either \(G\cong\I_4\) or \(G\cong\bV\). Actually \(G\cong\bV\) #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_proposition label:nprop3.12 If $m$ is a positive integer, $k$ is a field, and $E$ is a splitting field of \(x^m-1\) over $k$, then \(\Gal(E/k)\) is abelian; in fact, \(\Gal(E/k)\) is isomorphic to a subgroup of the group of units \(U(\I_m)=\{[i]\in\I_m:(i,m)=1\}\) #+END_proposition #+BEGIN_proof By Example ref:nexample2.151, \(E=k(\omega)\), where \omega is a primitive \(m\)th root of unity. The group \(\Gamma_m\) of all roots of \(x^m-1\) in $E$ is cyclic, and if \(\sigma\in\Gal(E/k)\), then its restriction to \(\Gamma_m\) is an automorphism of \(\Gamma_m\). Hence \(\sigma(\omega)=\omega^i\), where \(\omega^i\) must also be a generator of \(\Gamma_m\); that is, \((i,m)=1\), by Theorem ref:nthm1.39. $i$ is uniquely determined mod $m$, so that the function \(\theta:\Gal(k(\omega)/k)\to U(\I_m)\), given by \(\theta:\sigma\mapsto[i]\), is well-defined. \theta is a homomorphism. Therefore, Lemma ref:nlemma3.2 shows that \theta is injective #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_theorem If $p$ is prime, then \begin{equation*} \Gal(\F_{p^n}/\F_p)\cong\I_n \end{equation*} and a generator is the *Frobenius automorphism* \(\Fr:u\mapsto u^p\) #+END_theorem #+BEGIN_proof Let \(q=p^n\) and \(G=\Gal(\F_q/\F_p)\). Since \(\F_q\) has characteristic $p$, we have \((a+b)^p=a^p+b^p\), and so the Frobenius Fr is a homomorphism of fields.
Like any homomorphism of fields, Fr is an injection; as \(\F_q\) is finite, Fr must be an automorphism, by the Pigeonhole principle; that is, \(\Fr\in G\) (Fr fixes \(\F_p\), by Fermat's Theorem). If \(\pi\in\F_q\) is a primitive element, then \(d(x)=\irr(\pi,\F_p)\) has degree $n$, by Corollary ref:ncor2.154. It suffices to prove that the order \(j\) of \(\Fr\) is not less than $n$. But if \(\Fr^j=1_{\F_q}\) for some \(j<n\), then \(u^{p^j}=u\) for all \(u\in\F_q\); that is, the polynomial \(x^{p^j}-x\), of degree \(p^j<p^n=q\), has $q$ roots in \(\F_q\), a contradiction. Therefore \(G=\la\Fr\ra\cong\I_n\) #+END_proof * Commutative Rings \rom{2} ** Unique Factorization Domains #+ATTR_LATEX: :options [] #+BEGIN_theorem If $R$ is a UFD, then \(R[x]\) is a UFD #+END_theorem #+BEGIN_proof We first show, by induction on \(\deg(f)\), that every \(f(x)\in R[x]\) that is neither zero nor a unit is a product of irreducibles. If \(\deg(f)=0\), then \(f\) is a nonzero, nonunit constant, and so it is a product of irreducibles because $R$ is a UFD. If \(\deg(f)>0\), then \(f(x)=c(f)f^*(x)\) where \(c(f)\in R\) and \(f^*(x)\) is primitive. If \(f^*(x)\) is irreducible, we are done. Otherwise \(f^*(x)=g(x)h(x)\), where neither $g$ nor $h$ is a unit, and so each is a product of irreducibles, by the inductive hypothesis. Now Proposition ref:prop6.17 applies: \(R[x]\) is a UFD if \((p(x))\) is a prime ideal for every irreducible \(p(x)\in R[x]\). Let us assume \(p\mid fg\) and \(p\nmid f\); we must show \(p\mid g\). Case (i). Suppose that \(\deg(p)=0\). Write \begin{equation*} f(x)=c(f)f^*(x),\quad g(x)=c(g)g^*(x) \end{equation*} Now \(p\mid fg\), so that \begin{equation*} p\mid c(f)c(g)f^*(x)g^*(x) \end{equation*} Since \(f^*(x)g^*(x)\) is primitive, Lemma ref:lemma6.24 says that \(c(f)c(g)\) is an associate of \(c(fg)\). However, if \(p\mid fg\), then $p$ divides each coefficient of \(fg\); that is, $p$ is a common divisor of all the coefficients of $fg$, and hence, in $R$, which is a UFD, $p$ divides the associates \(c(fg)\) and \(c(f)c(g)\). But Proposition ref:prop6.17 says that \((p)\) is a prime ideal, and so \(p\mid c(f)\) or \(p\mid c(g)\). Since \(p\nmid f\), we have \(p\nmid c(f)\); therefore \(p\mid c(g)\), and so \(p\mid g\). Case (ii). Suppose that \(\deg(p)>0\). Let \begin{equation*} (p,f)=\{s(x)p(x)+t(x)f(x):s(x),t(x)\in R[x]\} \end{equation*} Choose \(m(x)\in(p,f)\) of minimal degree. If \(Q=\Frac(R)\) is the fraction field of $R$, then the division algorithm in \(Q[x]\) gives polynomials \(q'(x),r'(x)\in Q[x]\) with \begin{equation*} f(x)=m(x)q'(x)+r'(x) \end{equation*} where either \(r'(x)=0\) or \(\deg(r')<\deg(m)\).
Clearing denominators, there are polynomials \(q(x),r(x)\in R[x]\) and a constant \(b\in R\) with \begin{equation*} bf(x)=q(x)m(x)+r(x) \end{equation*} Since \(m\in(p,f)\), there are polynomials \(s(x),t(x)\in R[x]\) with \(m=sp+tf\); hence \(r=bf-qm\in(p,f)\). Since \(m\) has minimal degree, we must have \(r=0\); that is, \(bf(x)=q(x)m(x)\), and so \(bf(x)=c(m)m^*(x)q(x)\), so that \(m^*(x)\mid f(x)\), by Lemma ref:lemma6.24. A similar argument, replacing \(f(x)\) by \(p(x)\), gives \(m^*(x)\mid p(x)\). Since \(p(x)\) is irreducible, \(m^*(x)\) is either a unit or an associate of \(p(x)\); were \(m^*(x)\) an associate of \(p(x)\), then \(p(x)\mid f(x)\), contrary to hypothesis. Hence \(m^*(x)\) is a unit; that is, \(m(x)=c(m)\in R\), and so \((p,f)\) contains the nonzero constant \(c(m)\). Now \(c(m)=sp+tf\), and so \begin{equation*} c(m)g(x)=s(x)p(x)g(x)+t(x)f(x)g(x) \end{equation*} Since \(p\mid fg\), the polynomial \(p(x)\) divides the right side, and so \(p(x)\mid c(m)g(x)\). Since \(p(x)\) is primitive, Lemma ref:lemma6.24 gives \(p(x)\mid g(x)\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary If $k$ is a field, then \(k[x_1,\dots,x_n]\) is a UFD #+END_corollary #+ATTR_LATEX: :options [Gauss] #+BEGIN_corollary label:ncor5.28 Let $R$ be a UFD, let \(Q=\Frac(R)\), and let \(f(x)\in R[x]\). If \begin{equation*} f(x)=G(x)H(x)\in Q[x] \end{equation*} then there is a factorization \begin{equation*} f(x)=g(x)h(x)\in R[x] \end{equation*} where \(\deg(g)=\deg(G)\) and \(\deg(h)=\deg(H)\); in fact, \(G(x)\) is a constant multiple of \(g(x)\) and \(H(x)\) is a constant multiple of \(h(x)\). Therefore, if \(f(x)\) does not factor into polynomials of smaller degree in \(R[x]\), then \(f(x)\) is irreducible in \(Q[x]\) #+END_corollary #+ATTR_LATEX: :options [] #+BEGIN_examplle We claim that \(f(x,y)=x^2+y^2-1\in k[x,y]\) is irreducible, where $k$ is a field. Write \(Q=k(y)=\Frac(k[y])\), and view \(f(x,y)\in Q[x]\).
Now the quadratic \(g(x)=x^2+(y^2-1)\) is irreducible in \(Q[x]\) iff it has no roots in \(Q=k(y)\), and this is so by Exercise ref:ex3.34. It follows from Proposition ref:prop6.17 that \((x^2+y^2-1)\) is a prime ideal because it is generated by an irreducible polynomial #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_proposition label:nprop5.30 Let $k$ be a field, and view \(f(x_1,\dots,x_n)\in k[x_1,\dots,x_n]\) as a polynomial in \(R[x_n]\), where \(R=k[x_1,\dots,x_{n-1}]\): \begin{equation*} f(x_n)=a_0(x_1,\dots,x_{n-1})+\dots+a_m(x_1,\dots,x_{n-1})x^m_n \end{equation*} If \(f(x_n)\) is primitive and cannot be factored into two polynomials of lower degree in \(R[x_n]\), then \(f(x_1,\dots,x_n)\) is irreducible in \(k[x_1,\dots,x_n]\) #+END_proposition #+BEGIN_proof Suppose that \(f(x_n)=g(x_n)h(x_n)\) in \(R[x_n]\); by hypothesis, the degrees of \(g\) and $h$ in $x_n$ cannot both be less than \(\deg(f)\); say, \(\deg(g)=0\). It follows, because $f$ is primitive, that $g$ is a unit in \(k[x_1,\dots,x_{n-1}]\). Therefore, \(f(x_1,\dots,x_n)\) is irreducible in \(R[x_n]=k[x_1,\dots,x_n]\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary label:ncor5.31 If $k$ is a field and \(g(x_1,\dots,x_n),h(x_1,\dots,x_n)\in k[x_1,\dots,x_n]\) are relatively prime, then \(f(x_1,\dots,x_n,y)=yg(x_1,\dots,x_n)+h(x_1,\dots,x_n)\) is irreducible in \(k[x_1,\dots,x_n,y]\) #+END_corollary #+BEGIN_proof Let \(R=k[x_1,\dots,x_n]\). Note that $f$ is primitive in $R[y]$, because \((g,h)=1\) forces any common divisor of its coefficients \(g\) and \(h\) to be a unit.
Since $f$ is linear in $y$, it is not the product of two polynomials in $R[y]$ of smaller degree, and hence Proposition ref:nprop5.30 shows that $f$ is irreducible in \(R[y]=k[x_1,\dots,x_n,y]\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_examplle The polynomials $x$ and \(y^2+z^2-1\) are relatively prime in \(\R[x,y,z]\), so that \(f(x,y,z)=x^2+y^2+z^2-1\) is irreducible #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_corollary If \alpha is an algebraic integer, then \(\irr(\alpha,\Q)\) lies in \(\Z[x]\) #+END_corollary #+ATTR_LATEX: :options [] #+BEGIN_definition If \alpha is an algebraic integer, then its *minimal polynomial* is the monic polynomial in \(\Z[x]\) of least degree having \alpha as a root. #+END_definition #+BEGIN_remark We define the (algebraic) *conjugates* of \alpha to be the roots of \(\irr(\alpha,\Q)\), and we define the *norm* of \alpha to be the absolute value of the product of the conjugates of \alpha #+END_remark #+ATTR_LATEX: :options [] #+BEGIN_theorem Let \(f(x)=a_0+a_1x+\dots+x^n\in\Z[x]\) be monic, and let $p$ be a prime. If \(f(x)\) is irreducible \(\bmod p\), that is, if \begin{equation*} \widetilde{f}(x)=[a_0]+[a_1]x+\dots+x^n\in\F_p[x] \end{equation*} is irreducible, then \(f(x)\) is irreducible in \(\Q[x]\) #+END_theorem #+BEGIN_proof By Proposition ref:prop3.48, the natural map \(\varphi:\Z\to\F_p\) defines a homomorphism \(\widetilde{\varphi}:\Z[x]\to\F_p[x]\). If \(g(x)\in\Z[x]\), denote its image \(\widetilde{\varphi}(g(x))\in\F_p[x]\) by \(\widetilde{g}(x)\). It suffices to prove that \(f(x)\) is irreducible in \(\Z[x]\), for then Gauss's theorem gives \(f(x)\) irreducible in \(\Q[x]\). If \(f(x)=g(x)h(x)\) in \(\Z[x]\) with \(\deg(g),\deg(h)<\deg(f)\), then \(\widetilde{f}(x)=\widetilde{g}(x)\widetilde{h}(x)\) in \(\F_p[x]\); since \(f(x)\) is monic, \(\deg(\widetilde{g})=\deg(g)\) and \(\deg(\widetilde{h})=\deg(h)\), contradicting the irreducibility of \(\widetilde{f}(x)\) #+END_proof #+BEGIN_exercise label:ex6.17 Let $R$ be a UFD and let \(Q=\Frac(R)\) be its fraction field. Prove that each nonzero \(a/b\in Q\) has an expression in lowest terms; that is, \(a\) and \(b\) are relatively prime. #+END_exercise #+BEGIN_proof Gcd's exist in a UFD, so we may divide \(a\) and \(b\) by \(\gcd(a,b)\) #+END_proof #+BEGIN_exercise label:ex6.18 Let $R$ be a UFD 1.
If \(a,b,c\in R\) and $a$ and $b$ are relatively prime, prove that \(a\mid bc\) implies \(a\mid c\) 2. If \(a,c_1,\dots,c_n\in R\) and \(c_i\mid a\) for all \(i\), prove that \(c\mid a\), where \(c=\lcm\{c_1,\dots,c_n\}\) #+END_exercise #+ATTR_LATEX: :options [] #+BEGIN_examplle 1. We show that \(f(x)=x^4-5x^3+2x+3\) is an irreducible polynomial in \(\Q[x]\). The only candidates for rational roots are \(1,-1,3,-3\), and none of these is a root. Since \(\widetilde{f}(x)=x^4+x^3+1\in\F_2[x]\) is irreducible by Example ref:example3.35, it follows that \(f(x)\) is irreducible in \(\Q[x]\). 2. Let \(\Phi_5(x)=x^4+x^3+x^2+x+1\in\Q[x]\). Then \(\widetilde{\Phi}_5(x)\) is irreducible in \(\F_2[x]\), and so \(\Phi_5(x)\) is irreducible in \(\Q[x]\) #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_lemma label:nlemma2.74 Let \(g(x)\in\Z[x]\). If there is \(c\in\Z\) with \(g(x+c)\) irreducible in \(\Z[x]\), then \(g(x)\) is irreducible in \(\Q[x]\) #+END_lemma #+BEGIN_proof \(\varphi:\Z[x]\to\Z[x]\), given by \(f(x)\mapsto f(x+c)\), is an isomorphism. If \(g(x)=s(x)t(x)\) were a forbidden factorization, then \(g(x+c)=\varphi(g(x))=\varphi(s)\varphi(t)\) would be one as well; hence \(g(x)\) is irreducible in \(\Z[x]\), and so, by Gauss's theorem, \(g(x)\) is irreducible in \(\Q[x]\). #+END_proof #+ATTR_LATEX: :options [Eisenstein Criterion] #+BEGIN_theorem Let $R$ be a UFD with \(Q=\Frac(R)\), and let \(f(x)=a_0+a_1x+\dots+a_nx^n\in R[x]\). If there is an irreducible element \(p\in R\) with \(p\mid a_i\) for all \(i<n\) but \(p\nmid a_n\) and \(p^2\nmid a_0\), then \(f(x)\) is irreducible in \(Q[x]\) #+END_theorem There are more than \(\mathfrak{c}\) functions \(f:\R\to\R\) satisfying the functional equation \(f(x+y)=f(x)+f(y)\), for every \(\Q\)-linear transformation \(\R\to\R\) is additive. On the other hand, every continuous function on $\R$ is determined by its values on \(\Q\), which is countable. Let \(x\in\R\); then there is a sequence of rational numbers \((q_n)_{n=1}^\infty\) that converges to $x$. Continuity of $f$ means that \begin{equation*} \lim_{n\to\infty}f(q_n)=f(\lim_{n\to\infty}q_n)=f(x) \end{equation*} This means that the values of $f$ at rational numbers already determine $f$.
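The claim that a continuous function is determined by its values on \(\Q\) can be illustrated numerically; the function and the Newton iteration producing rational approximations are my choices for illustration:

```python
import math
from fractions import Fraction

# A continuous f is determined by its values on Q: approximate an
# irrational point x by rationals q_n and watch f(q_n) approach f(x).
# Any continuous function would do here.
f = lambda t: t * t - 2.0
x = math.sqrt(2.0)

q = Fraction(1)
for _ in range(6):            # Newton's iteration: q -> sqrt(2) within Q
    q = (q + 2 / q) / 2
assert abs(float(q) - x) < 1e-9
assert abs(f(float(q)) - f(x)) < 1e-9
```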
In other words, the mapping \(\Phi:C(\R,\R)\to\R^\Q\), defined by \(\Phi(f)=f|_{\Q}\), where \(f|_\Q:\Q\to\R\) is the restriction of $f$ to \(\Q\), is an injection. Here \(C(\R,\R)\) denotes the set of all continuous functions from \(\R\) to \(\R\). #+ATTR_LATEX: :options [] #+BEGIN_examplle An *inner product* on a vector space $V$ over a field $k$ is a function \begin{equation*} V\times V\to k \end{equation*} whose values are denoted by \((v,w)\), s.t. 1. \((v+v',w)=(v,w)+(v',w)\) for all \(v,v',w\in V\) 2. \((\alpha v,w)=\alpha(v,w)\) for all \(v,w\in V\) and \(\alpha\in k\) 3. \((v,w)=(w,v)\) for all \(v,w\in V\) We say that the inner product is *definite* if \((v,v)\neq 0\) whenever \(v\neq 0\) Regard \(\R\) as a vector space over \(\Q\), and let $Y$ be a basis. Using 0 coefficients if necessary, for each \(v,w\in\R\), there are \(y_i\in Y\) and rational \(a_i\) and \(b_i\) with \(v=\sum a_iy_i\) and \(w=\sum b_iy_i\). Define \begin{equation*} (v,w)=\sum a_ib_i \end{equation*} #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_lemma label:fact1 Let \(X\) and \(Y\) be sets, and let \(f:X\to Y\) be a function. If \(f^{-1}(y)\) is finite for every \(y\in Y\), then \(\abs{X}\le\aleph_0\abs{Y}\); hence if \(Y\) is infinite, then \(\abs{X}\le\abs{Y}\) #+END_lemma #+ATTR_LATEX: :options [] #+BEGIN_lemma label:fact2 If $X$ is an infinite set and \(\Fin(X)\) is the family of all its finite subsets, then \(\abs{\Fin(X)}=\abs{X}\) #+END_lemma #+ATTR_LATEX: :options [] #+BEGIN_lemma label:fact3 If \(X\) and \(Y\) are sets with \(\abs{X}\le\abs{Y}\) and \(\abs{Y}\le\abs{X}\), then \(\abs{X}=\abs{Y}\) #+END_lemma #+ATTR_LATEX: :options [] #+BEGIN_theorem Let $k$ be a field and let \(V\) be a vector space over $k$ 1. Any two bases of $V$ have the same number of elements; this cardinal is called the *dimension* of $V$ and is denoted by \(\dim(V)\) 2. Vector spaces \(V\) and \(V'\) over $k$ are isomorphic if and only if \(\dim(V)=\dim(V')\) #+END_theorem #+BEGIN_proof 1.
Let $B$ and $B'$ be bases of $V$. We may assume $B$ and $B'$ are infinite. Each \(v\in V\) has a unique expression of the form \(v=\sum_{b\in B}\alpha_bb\), where \(\alpha_b\in k\) and almost all \(\alpha_b=0\) (only finitely many \(\alpha_b\) are nonzero, for only finite sums are defined). Define the *support* of $v$ (w.r.t. $B$) by \begin{equation*} \supp(v)=\{b\in B:\alpha_b\neq0\} \end{equation*} thus \(\supp(v)\) is a finite subset of $B$ for every \(v\in V\). Define \(f:B'\to\Fin(B)\) by \(f(b')=\supp(b')\). Note that if \(\supp(b')=\{b_1,\dots,b_n\}\), then \(b'\in\la b_1,\dots,b_n\ra=\la\supp(b')\ra\). Since \(\la\supp(b')\ra\) has dimension $n$, it contains at most $n$ elements of $B'$, because $B'$ is independent. Therefore \(f^{-1}(T)\) is finite for every finite subset \(T\) of $B$. By Lemma ref:fact1, we have \(\abs{B'}\le\abs{\Fin(B)}\), and by Lemma ref:fact2, we have \(\abs{B'}\le\abs{B}\). Interchanging the roles of $B$ and $B'$ gives the reverse inequality, and so Lemma ref:fact3 gives \(\abs{B}=\abs{B'}\). #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_lemma Let $R$ be a commutative ring and let \(\calf\) be the family of all those ideals in $R$ that are not finitely generated. If \(\calf\neq\emptyset\), then \(\calf\) has a maximal element #+END_lemma #+ATTR_LATEX: :options [I. S. Cohen] #+BEGIN_theorem A commutative ring $R$ is noetherian if and only if every prime ideal in $R$ is finitely generated #+END_theorem #+BEGIN_proof Let \(\calf\) be the family of all ideals in $R$ that are not finitely generated. If \(\calf\neq\emptyset\), then the lemma provides an ideal $I$ that is not finitely generated and is maximal among such ideals. Suppose \(ab\in I\) but \(a\not\in I\) and \(b\not\in I\). Now \(I\subsetneq I+Ra\), so that \(I+Ra\) is finitely generated, by the maximality of $I$ in \(\calf\); we may assume that \begin{equation*} I+Ra=(i_1+r_1a,\dots,i_n+r_na) \end{equation*} where \(i_k\in I\) and \(r_k\in R\). Consider \(J=(I:a)=\{x\in R:xa\in I\}\). Now \(I\subsetneq I+Rb\subseteq J\), so that \(J\not\in\calf\), and hence $J$ is finitely generated.
We claim that \(I=(i_1,\dots,i_n,Ja)\). Clearly \((i_1,\dots,i_n,Ja)\subseteq I\). If \(z\in I\subseteq I+Ra\), there are \(u_k\in R\) with \(z=\sum_ku_k(i_k+r_ka)\). Then \((\sum_ku_kr_k)a=z-\sum_ku_ki_k\in I\), so that \(\sum_ku_kr_k\in J\). Hence \(z\in(i_1,\dots,i_n,Ja)\). It follows that $I$ is finitely generated (for $J$ is), a contradiction. Thus no such \(a,b\) exist, and $I$ is prime; but then $I$ is finitely generated, by hypothesis, another contradiction. Therefore \(\calf=\emptyset\); that is, $R$ is noetherian. #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_proposition Let $K/k$ be an extension 1. If \(z\in K\), then \(z\) is algebraic over $k$ iff \(k(z)/k\) is finite 2. If \(z_1,\dots,z_n\in K\) are algebraic over $k$, then \(k(z_1,\dots,z_n)/k\) is a finite extension 3. If \(y,z\in K\) are algebraic over $k$, then \(y+z,yz\) and \(y^{-1}\) (for \(y\neq0\)) are also algebraic 4. Define \begin{equation*} K_{\alg}=\{z\in K:z\text{ is algebraic over }k\} \end{equation*} Then $K_{\alg}$ is a subfield of $K$ #+END_proposition #+BEGIN_proof 1. If \(k(z)/k\) is finite, then Proposition ref:prop3.117 shows that \(z\) is algebraic over $k$; conversely, if \(z\) is algebraic over $k$, then \(k(z)/k\) is finite, by Theorem ref:nthm2.144 2. We prove this by induction on \(n\ge1\); the base step is part (1). For the inductive step, there is a tower of fields \begin{equation*} k\subseteq k(z_1)\subseteq \cdots\subseteq k(z_1,\dots,z_n) \subseteq k(z_1,\dots,z_{n+1}) \end{equation*} Now \([k(z_{n+1}):k]\) is finite by Theorem ref:nthm2.144; say, \([k(z_{n+1}):k]=d\), where $d$ is the degree of the monic irreducible polynomial in \(k[x]\) having \(z_{n+1}\) as a root. Since \(z_{n+1}\) satisfies a polynomial of degree $d$ over $k$, it satisfies a polynomial of degree \(d'\le d\) over the larger field $F=k(z_1,\dots,z_n)$, and so \([k(z_1,\dots,z_{n+1}):k]=[F(z_{n+1}):F][F:k]\) is finite 3. \(k(y+z)\subseteq k(y,z)\) and \(k(yz)\subseteq k(y,z)\), and \(k(y,z)/k\) is finite, by part (2); now part (1) shows that \(y+z\), \(yz\), and \(y^{-1}\) are algebraic over $k$ #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition Given the extension \(\C/\Q\), define the *algebraic numbers* by \begin{equation*} \A=(\C/\Q)_{\alg} \end{equation*} #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_examplle We claim that \(\A/\Q\) is an algebraic extension that is not finite. Suppose \([\A:\Q]=n\) for some integer $n$.
There exist irreducible polynomials in \(\Q[x]\) of degree \(n+1\); for example, \(p(x)=x^{n+1}-2\). If \alpha is a root of \(p(x)\), then \(\alpha\in\A\) and so \(\Q(\alpha)\subseteq\A\). Thus \begin{equation*} n=[\A:\Q]=[\A:\Q(\alpha)][\Q(\alpha):\Q]\ge n+1 \end{equation*} a contradiction #+END_examplle #+BEGIN_lemma label:nlemma5.55 1. If \(k\subseteq K\subseteq E\) is a tower of fields with \(E/K\) and \(K/k\) algebraic, then \(E/k\) is also algebraic 2. Let \begin{equation*} K_0\subseteq K_1\subseteq\cdots\subseteq K_n\subseteq K_{n+1}\subseteq\cdots \end{equation*} be an ascending tower of fields. If \(K_{n+1}/K_n\) is algebraic for all \(n\ge0\), then \(K^*=\bigcup_{n\ge0}K_n\) is a field algebraic over $K_0$ 3. Let \(K=k(A)\). If each element \(a\in A\) is algebraic over $k$, then \(K/k\) is an algebraic extension #+END_lemma [[index:algebraic closure]] #+ATTR_LATEX: :options [] #+BEGIN_definition A field $K$ is *algebraically closed* if every nonconstant \(f(x)\in K[x]\) has a root in $K$. An *algebraic closure* of a field $k$ is an algebraic extension \(\bar{k}\) of $k$ that is algebraically closed #+END_definition \(\bar{\Q}=\A\) #+ATTR_LATEX: :options [] #+BEGIN_lemma label:nlemma5.56 Let $k$ be a field, and let \(k[T]\) be the polynomial ring in a set $T$ of indeterminates. If \(t_1,\dots,t_n\in T\) are distinct, where \(n\ge2\), and \(f_i(t_i)\in k[t_i]\subseteq k[T]\) are nonconstant polynomials, then the ideal \(I=(f_1(t_1),\dots,f_n(t_n))\) in \(k[T]\) is a proper ideal #+END_lemma #+BEGIN_proof If \(I\) were not a proper ideal, then there would exist \(h_i(T)\in k[T]\) with \begin{equation*} 1=h_1(T)f_1(t_1)+\cdots+h_n(T)f_n(t_n) \end{equation*} Consider the field extension \(k(\alpha_1,\dots,\alpha_n)\), where \(\alpha_i\) is a root of \(f_i(t_i)\) for \(i=1,\dots,n\). Denote the variables involved in the \(h_i(T)\) other than \(t_1,\dots,t_n\), if any, by \(t_{n+1},\dots,t_m\).
Evaluating at \(t_i=\alpha_i\) for \(i\le n\) and \(t_i=0\) for \(i\ge n+1\) (evaluation is a homomorphism), the right side becomes 0 while the left side is 1, a contradiction #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_theorem Given a field $k$, there exists an algebraic closure \(\bar{k}\) of $k$ #+END_theorem #+BEGIN_proof Let $T$ be a set in bijective correspondence with the family of nonconstant polynomials in \(k[x]\). Let \(R=k[T]\) be the big polynomial ring, and let $I$ be the ideal in $R$ generated by all elements of the form \(f(t_f)\), where \(t_f\in T\) We claim that the ideal $I$ is proper; if not, \(1\in I\), and there are distinct \(t_1,\dots,t_n\in T\) and polynomials \(h_1(T),\dots,h_n(T)\in k[T]\) with \(1=h_1(T)f_1(t_1)+\dots+h_n(T)f_n(t_n)\), contradicting Lemma ref:nlemma5.56. Therefore there is a maximal ideal $M$ in $R$ containing $I$, by Theorem ref:nthm5.43. Define \(K=R/M\). The proof is now completed in a series of steps 1. \(K/k\) is a field extension Because $M$ is maximal, \(R=(a)+M\) for any \(a\not\in M\). So \(1=ra+m\) for some \(r\in R\) and \(m\in M\), or equivalently, \(1+M=(r+M)(a+M)\). So \(K=R/M\) is a field. Let \(i:k\to k[T]\) be the ring map taking \(a\in k\) to the constant polynomial $a$, and let \theta be the composite \(k\xrightarrow{i}k[T]=R\xrightarrow{\text{nat}}R/M=K\). Now \theta is injective by Corollary ref:ncor2.32. We identify $k$ with \(\im\theta\subseteq K\) 2. Every nonconstant \(f(x)\in k[x]\) splits in \(K[x]\) By definition, there is \(t_f\in T\) with \(f(t_f)\in I\subseteq M\), and the coset \(t_f+M\in R/M=K\) is a root of \(f(x)\). It now follows by induction on degree that \(f(x)\) splits over $K$ 3. The extension \(K/k\) is algebraic By Lemma ref:nlemma5.55, it suffices to show that each \(t_f+M\) is algebraic over $k$ (for $K=k(\text{all }t_f+M)$ follows from ref:prop3.117) Let \(k_1=K\), and construct \(k_{n+1}\) from \(k_n\) in the same way $K$ is constructed from $k$.
There is a tower of fields \(k=k_0\subseteq k_1\subseteq \dots\subseteq k_n\subseteq\dots\) with each extension \(k_{n+1}/k_n\) algebraic and with every nonconstant polynomial in \(k_n[x]\) having a root in \(k_{n+1}\). By Lemma ref:nlemma5.55, \(E=\bigcup_nk_n\) is an algebraic extension of $k$. We claim that $E$ is algebraically closed: if \(g(x)\in E[x]\) is nonconstant, then its finitely many coefficients all lie in some \(k_n\), and so \(g(x)\) has a root in \(k_{n+1}\subseteq E\). #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary If $k$ is a countable field, then it has a countable algebraic closure. In particular, the algebraic closures of the prime fields \(\Q\) and \(\F_p\) are countable #+END_corollary #+BEGIN_proof If $k$ is countable, then the set $T$ of all nonconstant polynomials is countable, say, \(T=\{t_1,t_2,\dots\}\), because \(k[x]\) is countable. Hence \(k[T]=\bigcup_{l\ge1}k[t_1,\dots,t_l]\) is countable, as is its quotient \(k_1\). It follows, by induction on \(n\ge1\), that every \(k_n\) is countable, and so \(E=\bigcup_nk_n\) is countable #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition If \(F/k\) and \(K/k\) are field extensions, then a *\(k\)-map* is a ring homomorphism \(\varphi:F\to K\) that fixes $k$ pointwise #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_lemma label:nlemma5.59 If $K/k$ is an algebraic extension, then every \(k\)-map \(\varphi:K\to K\) is an automorphism of $K$ #+END_lemma #+BEGIN_proof By Corollary ref:ncor2.32, the \(k\)-map \varphi is injective. Let \(a\in K\); there is an irreducible \(p(x)\in k[x]\) having \(a\) as a root. Since \varphi fixes $k$, it permutes the finite set of roots of \(p(x)\) lying in $K$, and so \(a\in\im\varphi\); hence \varphi is surjective #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_lemma label:nlemma5.60 Let $k$ be a field and let \(\bar{k}/k\) be an algebraic closure. If \(F/k\) is an algebraic extension, then there is an injective \(k\)-map \(\psi:F\to\bar{k}\) #+END_lemma #+BEGIN_proof If $E$ is an intermediate field, \(k\subseteq E\subseteq F\), let us call an ordered pair \((E,f)\) an *approximation* if \(f:E\to\bar{k}\) is a \(k\)-map.
In the following diagram, all arrows other than $f$ are inclusions \begin{center} \begin{tikzcd} \bar{k}\\ k \arrow[u,"i"]\arrow[r]&E\arrow[ul,"f"']\arrow[r]&F \end{tikzcd} \end{center} Define \(X=\{\text{approximations }(E,f):k\subseteq E\subseteq F\}\). Note that \(X\neq\emptyset\) because \((k,i)\in X\). Partially order $X$ by \begin{equation*} (E,f)\preceq(E',f')\text{ if }E\subseteq E'\text{ and }f'|E=f \end{equation*} Every chain \begin{equation*} \cals=\{(E_j,f_j):j\in J\} \end{equation*} has an upper bound, namely \((\bigcup E_j,\Phi)\), where \(\Phi=\bigcup f_j\); \Phi is a \(k\)-map By Zorn's Lemma, there exists a maximal element \((E_0,f_0)\) in $X$. We claim that \(E_0=F\), and this will complete the proof (take \(\psi=f_0\)). If \(E_0\subsetneq F\), then there is \(a\in F\) with \(a\not\in E_0\). Since \(F/k\) is algebraic, we have \(F/E_0\) algebraic, and there is an irreducible \(p(x)\in E_0[x]\) having $a$ as a root. Since \(\bar{k}/k\) is algebraic and \(\bar{k}\) is algebraically closed, we have a factorization in \(\bar{k}[x]\): \begin{equation*} f_0^*(p(x))=\displaystyle\prod_{i=1}^n(x-b_i) \end{equation*} where \(f_0^*:E_0[x]\to\bar{k}[x]\) is the map \(f_0^*:e_0+\dots+e_nx^n\mapsto f_0(e_0)+\dots+f_0(e_n)x^n\). If all the \(b_i\) lie in \(f_0(E_0)\subseteq\bar{k}\), then \(f_0^{-1}(b_i)\in E_0\subseteq F\) for all $i$, and there is a factorization of \(p(x)\) in \(F[x]\), namely, \(p(x)=\prod_{i=1}^n[x-f_0^{-1}(b_i)]\). But \(a\not\in E_0\) implies \(a\neq f_0^{-1}(b_i)\) for all $i$. Thus \(x-a\) is another factor of \(p(x)\) in \(F[x]\), _contrary to unique factorization_. We conclude that there is some \(b_i\not\in\im f_0\). By Theorem ref:nthm2.144, we may define \(f_1:E_0(a)\to\bar{k}\) by \begin{equation*} c_0+c_1a+c_2a^2+\cdots\mapsto f_0(c_0)+f_0(c_1)b_i+\dots \end{equation*} A straightforward check shows that \(f_1\) is a \(k\)-map extending \(f_0\). Hence \((E_0,f_0)\prec(E_0(a),f_1)\), contradicting the maximality of \((E_0,f_0)\).
#+END_proof #+ATTR_LATEX: :options [] #+BEGIN_theorem Any two algebraic closures of a field $k$ are isomorphic via a \(k\)-map #+END_theorem #+BEGIN_proof Let $K$ and $L$ be two algebraic closures of a field $k$. By Lemma ref:nlemma5.60, there are \(k\)-maps \(\psi:K\to L\) and \(\theta:L\to K\). By Lemma ref:nlemma5.59, both composites \(\theta\psi\) and \(\psi\theta\) are automorphisms. It follows that \psi (and \theta) is a \(k\)-isomorphism #+END_proof [[index:degree]] #+ATTR_LATEX: :options [] #+BEGIN_definition If \(\varphi\in k(x)\), then there are polynomials \(g(x),h(x)\in k[x]\) with \((g,h)=1\) and \(\varphi=g(x)/h(x)\). Define the *degree* of \varphi by \begin{equation*} \degree(\varphi)=\max\{\deg(g),\deg(h)\} \end{equation*} A rational function \(\varphi\in k(x)\) is called a *linear fractional transformation* if \begin{equation*} \varphi=\frac{ax+b}{cx+d} \end{equation*} where \(a,b,c,d\in k\) and \(ad-bc\neq 0\). Let \begin{equation*} \LF(k) \end{equation*} denote the group of all linear fractional transformations in \(k(x)\) with binary operation composition: if \(\varphi:x\mapsto(ax+b)/(cx+d)\) and \(\psi:x\mapsto(rx+s)/(tx+u)\), then \begin{equation*} \psi\varphi:x\mapsto\frac{r\varphi(x)+s}{t\varphi(x)+u}= \frac{(ra+sc)x+(rb+sd)}{(ta+uc)x+(tb+ud)} \end{equation*} #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_proposition label:nprop5.62 If \(\varphi\in k(x)\) is nonconstant, then \varphi is transcendental over $k$ and \(k(x)\) is a finite extension of \(k(\varphi)\) with \begin{equation*} [k(x):k(\varphi)]=\degree(\varphi) \end{equation*} Moreover, if \(\varphi=g(x)/h(x)\) and \((g,h)=1\), then \begin{equation*} \irr(x,k(\varphi))=g(y)-\varphi h(y) \end{equation*} where \(\varphi h(y)\) denotes the product of \varphi and \(h(y)\) in \(k(\varphi)[y]\) #+END_proposition #+BEGIN_proof Let \(g(x)=\sum a_ix^i\) and \(h(x)=\sum b_ix^i\in k[x]\).
Define \begin{equation*} \theta(y)=g(y)-\varphi h(y) \end{equation*} Now \(\theta(y)\) is a polynomial in \(k(\varphi)[y]\): \(\theta(y)=\sum a_iy^i-\varphi\sum b_iy^i=\sum(a_i-\varphi b_i)y^i\). If \(\theta(y)\) were the zero polynomial, then all its coefficients would be 0. But if $b_i$ is a nonzero coefficient of \(h(y)\), then \(a_i-\varphi b_i=0\) gives \(\varphi=a_i/b_i\in k\), contradicting the hypothesis that \varphi is nonconstant. Hence \(\theta(y)\neq0\), and \begin{equation*} \deg(\theta(y))=\deg(g(y)-\varphi h(y))=\max\{\deg(g),\deg(h)\}=\degree(\varphi) \end{equation*} Now \(x\) is a root of \(\theta(y)\), so that \(x\) is algebraic over \(k(\varphi)\). Were \varphi algebraic over $k$, then \(k(\varphi)/k\) would be finite, giving \([k(x):k]=[k(x):k(\varphi)][k(\varphi):k]\) finite, a contradiction. Therefore \varphi is transcendental over $k$. We claim that \(\theta(y)\) is an irreducible polynomial in \(k(\varphi)[y]\). If not, then \(\theta(y)\) factors in \(k[\varphi][y]\), by Gauss's Corollary ref:ncor5.28. But \(\theta(y)=g(y)-\varphi h(y)\) is linear in \varphi, and so Corollary ref:ncor5.31 shows that \(\theta(y)\) is irreducible. Finally, since \(\deg(\theta)=\degree(\varphi)\), we have \([k(x):k(\varphi)]=\degree(\varphi)\) #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_corollary label:ncor5.63 Let \(\varphi\in k(x)\), where \(k(x)\) is the field of rational functions over a field $k$. Then \(k(\varphi)=k(x)\) if and only if \varphi is a linear fractional transformation #+END_corollary #+BEGIN_proof By Proposition ref:nprop5.62, \(k(\varphi)=k(x)\) if and only if \(\degree(\varphi)=1\); that is, \varphi is a linear fractional transformation #+END_proof Define a map \(\zeta:\GL(2,k)\to\LF(k)\) by \(\begin{psmallmatrix}a&b\\c&d\end{psmallmatrix}\mapsto(ax+b)/(cx+d)\). By Exercise ref:nex5.45, \(\ker\zeta=Z(2,k)\), the center of \(\GL(2,k)\) consisting of all nonzero \(2\times 2\) scalar matrices.
Hence if \begin{equation*} \PGL(2,k)=\GL(2,k)/Z(2,k) \end{equation*} then \(\LF(k)\cong\PGL(2,k)\) #+ATTR_LATEX: :options [] #+BEGIN_corollary If \(k(x)\) is the field of rational functions over a field $k$, then \begin{equation*} \Gal(k(x)/k)\cong\LF(k)\cong\PGL(2,k) \end{equation*} #+END_corollary #+BEGIN_proof Let \(\sigma:k(x)\to k(x)\) be an automorphism of \(k(x)\) fixing $k$. Since \(k(\sigma(x))=k(x)\), Corollary ref:ncor5.63 says that \(\sigma(x)\) is a linear fractional transformation. Define \(\gamma:\Gal(k(x)/k)\to\LF(k)\) by \(\gamma:\sigma\mapsto\sigma(x)\). Now \gamma is a homomorphism: \(\gamma(\sigma\tau)=\gamma(\sigma)\gamma(\tau)\). Finally, \gamma is an isomorphism #+END_proof #+ATTR_LATEX: :options [Lüroth's theorem] #+BEGIN_theorem If \(k(x)\) is a simple transcendental extension, then every intermediate field $B$ with \(k\subsetneq B\subseteq k(x)\) is also a simple transcendental extension of $k$: there is \(\varphi\in B\) with \(B=k(\varphi)\) #+END_theorem [[index:purely transcendental]] [[index:algebraically dependent]] #+ATTR_LATEX: :options [] #+BEGIN_definition Let \(E/k\) be a field extension. A subset \(U\) of $E$ is *algebraically dependent* over $k$ if there exists a finite subset \(\{u_1,\dots,u_n\}\subseteq U\) and a nonzero polynomial \(f(x_1,\dots,x_n)\in k[x_1,\dots,x_n]\) with \(f(u_1,\dots,u_n)=0\). A field extension \(E/k\) is *purely transcendental* if either \(E=k\) or $E$ contains an algebraically independent subset $B$ with \(E=k(B)\) #+END_definition Let \(E/k\) be a field extension, let \(u_1,\dots,u_n\in E\), and let \(\varphi:k[x_1,\dots,x_n]\to E\) be the evaluation map \(f(x_1,\dots,x_n)\mapsto f(u_1,\dots,u_n)\). Now \(\{u_1,\dots,u_n\}\) is algebraically dependent if and only if \(\ker\varphi\neq\{0\}\).
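The evaluation-map criterion can be illustrated numerically: exhibiting a nonzero polynomial in \(\ker\varphi\) witnesses algebraic dependence. Both witnesses below are my choices for illustration:

```python
import math

# Dependence through the evaluation map: a nonzero f in Q[x1, x2] with
# f(u1, u2) = 0 shows {u1, u2} is algebraically dependent over Q.
u1, u2 = math.sqrt(2), math.sqrt(8)      # u2 = 2*u1
f = lambda a, b: b * b - 4 * a * a       # nonzero polynomial, f(u1, u2) = 0
assert abs(f(u1, u2)) < 1e-9

# Another witness: v = sqrt(2) + sqrt(3) satisfies (y - x)^2 - 3 at (u1, v).
v = math.sqrt(2) + math.sqrt(3)
g = lambda a, b: (b - a) ** 2 - 3
assert abs(g(u1, v)) < 1e-9
```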
If \(\{u_1,\dots,u_n\}\) is algebraically independent, then \varphi extends to an isomorphism \(\widetilde{\varphi}:k(x_1,\dots,x_n)\to k(u_1,\dots,u_n)\subseteq E\), where \(k(x_1,\dots,x_n)\) is the field of rational functions \begin{center} \begin{tikzcd} k(x_1,\dots,x_n)\arrow[r,"\widetilde{\varphi}"]& \Frac(E)=E\\ k[x_1,\dots,x_n]\arrow[u]\arrow[r,"\varphi"]& E\arrow[u] \end{tikzcd} \end{center} In particular, if \(\{u_1,\dots,u_n\}\) is algebraically independent and \(E=k(u_1,\dots,u_n)\), then \(\widetilde{\varphi}\) is an isomorphism \(k(x_1,\dots,x_n)\to k(u_1,\dots,u_n)\) with \(x_i\mapsto u_i\). Therefore, a purely transcendental extension \(k(u_1,\dots,u_n)/k\) is isomorphic to the *function field in $n$ variables* \(k(x_1,\dots,x_n)\) #+ATTR_LATEX: :options [] #+BEGIN_proposition label:nprop5.66 Let \(E/k\) be a field extension. Then \(U\subseteq E\) is algebraically dependent over $k$ if and only if there is \(v\in U\) with $v$ algebraic over \(k(U-\{v\})\) #+END_proposition #+BEGIN_proof If $U$ is algebraically dependent over $k$, then there is a finite algebraically dependent subset \(\{u_1,\dots,u_n\}\subseteq U\); thus, we may assume that \(U\) is finite. We prove, by induction on \(n\ge1\), that some \(u_i\) is algebraic over \(k(U-\{u_i\})\). If \(n=1\), this is obvious. Let \(U=\{u_1,\dots,u_{n+1}\}\) be algebraically dependent. We may assume that \(\{u_1,\dots,u_n\}\) is algebraically independent (otherwise the inductive hypothesis applies). Since $U$ is algebraically dependent, there is a nonzero \(f(X,y)\in k[x_1,\dots,x_n,y]\) with \(f(u_1,\dots,u_n,u_{n+1})=0\), where \(X=\{x_1,\dots,x_n\}\) and $y$ is a new variable. We may write \(f(X,y)=\sum_ig_i(X)y^i\), where \(g_i(X)\in k[X]\). Since \(f(X,y)\neq0\), some \(g_i(X)\neq0\), and it follows from algebraic independence of \(\{u_1,\dots,u_n\}\) that \(g_i(u_1,\dots,u_n)\neq0\). Therefore, \(h(y)=\sum_ig_i(u_1,\dots,u_n)y^i\in k(u_1,\dots,u_n)[y]\) is nonzero.
But \(0=h(u_{n+1})\), so that \(u_{n+1}\) is algebraic over \(k(u_1,\dots,u_n)\) For the converse, assume that $v$ is algebraic over \(k(U-\{v\})\). We may assume that \(U-\{v\}\) is finite, say, \(U-\{v\}=\{u_1,\dots,u_n\}\), where \(n\ge0\); write \(v=u_{n+1}\). We prove, by induction on \(n\ge0\), that $U$ is algebraically dependent; the base step \(n=0\) holds, for $v$ algebraic over $k$ says that \(\{v\}\) is algebraically dependent. For the inductive step, let \(U-\{u_{n+1}\}=\{u_1,\dots,u_n\}\) and assume it is algebraically independent. By hypothesis, there is a nonzero polynomial \(f(y)=\sum_ic_iy^i\in k(u_1,\dots,u_n)[y]\) with \(f(u_{n+1})=0\). As \(f(y)\neq0\), at least one of its coefficients is nonzero. For all $i$, the coefficient \(c_i\in k(u_1,\dots,u_n)\), so there are rational functions \(c_i(x_1,\dots,x_n)\) with \(c_i(u_1,\dots,u_n)=c_i\) [because \(k(u_1,\dots,u_n)\cong k(x_1,\dots,x_n)\)] #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition A *dependency relation on a set* \Omega is a relation \(\preceq\) from \Omega to \(2^\Omega\), pronounced "is dependent on", satisfying the following *Dependency Axioms*: 1. if \(S\subseteq\Omega\) and \(u\in S\), then \(u\preceq S\) 2. if \(u\preceq S\), then there exists a finite subset \(S'\subseteq S\) with \(u\preceq S'\) 3. (*Transitivity*) if \(u\preceq S\) and, for some \(T\subseteq\Omega\), we have \(s\preceq T\) for every \(s\in S\), then \(u\preceq T\) 4. (*Exchange Axiom*) if \(u\preceq S\) and \(u\not\preceq S-\{v\}\), then \(v\preceq (S-\{v\})\cup\{u\}\) #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_examplle If \Omega is a vector space, define \(u\preceq S\) to mean \(u\in\la S\ra\), the subspace spanned by \(S\). We claim that \(\preceq\) is a dependency relation. #+END_examplle #+ATTR_LATEX: :options [] #+BEGIN_lemma If \(E/k\) is a field extension, then \(u\preceq S\), defined by "$u$ is algebraic over \(k(S)\)", is a dependency relation on $E$ #+END_lemma #+BEGIN_proof If \(u\preceq S\), then $u$ is algebraic over \(k(S)\); that is, \(u\in(E/k(S))_{\alg}=\{e\in E:e\text{ is algebraic over }k(S)\}\).
Suppose there is some \(T\subseteq E\) with \(s\preceq T\) for every \(s\in S\); that is, \(S\subseteq(E/k(T))_{\alg}\). It follows from Lemma ref:nlemma5.55 that \((E/k(S))_{\alg}\subseteq(E/k(T))_{\alg}\), and so \(u\preceq T\) For the Exchange Axiom, assume that \(u\preceq S\) and \(u\not\preceq S-\{v\}\); that is, $u$ is transcendental over \(k(S-\{v\})\). Note that \(v\in S\) and \(u\not\in S\). Let us apply Proposition ref:nprop5.66 to the subsets \(U'=\{u,v\}\) and \(S'=S-\{v\}\) of $E$ and the subfield \(k'=k(S')\). With this notation, \(k'(U'-\{u\})=k'(v)=k(S',v)=k(S)\), so that "$u$ is algebraic over \(k(S)\)" can be restated as "\(u\) is algebraic over \(k'(U'-\{u\})\)". Thus Proposition ref:nprop5.66 says that \(U'=\{u,v\}\) is algebraically dependent over \(k'=k(S')\): there is a nonzero polynomial \(f(x,y)\in k(S')[x,y]\) with \(f(u,v)=0\). Since $u$ is transcendental over \(k'\), the set \(\{u\}\) is algebraically independent over \(k'\), and so, as in the proof of Proposition ref:nprop5.66, \(v\) is algebraic over \(k'(u)=k((S-\{v\})\cup\{u\})\); that is, \(v\preceq(S-\{v\})\cup\{u\}\). #+END_proof #+ATTR_LATEX: :options [] #+BEGIN_definition Let \(\preceq\) be a dependency relation on a set \Omega. Call a subset \(S\subseteq\Omega\) *dependent* if there exists \(s\in S\) with \(s\preceq S-\{s\}\); call $S$ *independent* if it is not dependent #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_definition Let \(\preceq\) be a dependency relation on a set \Omega. We say that a subset $S$ *generates* \Omega if \(x\preceq S\) for all \(x\in\Omega\). A *basis* of \Omega is an independent subset that generates \Omega #+END_definition #+ATTR_LATEX: :options [] #+BEGIN_lemma label:nlemma5.69 1. Let \(\preceq\) be a dependency relation on a set $\Omega$. If \(T\subseteq\Omega\) is independent and \(z\not\preceq T\) for some \(z\in\Omega\), then \(T\cup\{z\}\supsetneq T\) is a strictly larger independent set. 2. Let \(E/k\) be a field extension. If \(T\subseteq E\) is algebraically independent over $k$ and \(z\in E\) is transcendental over \(k(T)\), then \(T\cup\{z\}\) is algebraically independent #+END_lemma #+BEGIN_proof 1. Not hard 2.
By Proposition ref:nprop5.66, this is a special case of (1)
#+END_proof
#+ATTR_LATEX: :options []
#+BEGIN_definition
If \(E/k\) is a field extension, then a *transcendence basis* is a maximal algebraically independent subset of $E$ over $k$
#+END_definition
#+ATTR_LATEX: :options []
#+BEGIN_theorem
label:nthm5.70
1. If \(\preceq\) is a dependency relation on a set \Omega, then \Omega has a basis. In fact, every independent subset $B$ of \Omega is part of a basis
2. If \(E/k\) is a field extension, then $E$ has a transcendence basis. In fact, every algebraically independent subset is part of a transcendence basis
#+END_theorem
#+BEGIN_proof
1. Since the empty set \(\emptyset\) is independent, the second statement implies the first. We use Zorn's Lemma to prove the existence of maximal independent subsets of \Omega containing $B$. Let $X$ be the family of all independent subsets of \Omega containing $B$; note that $X$ is nonempty, for \(B\in X\). Suppose that \(\calb=(B_j)_{j\in J}\) is a chain in $X$. It is clear that \(B^*=\bigcup_{j\in J}B_j\) is an upper bound of $\calb$ if it lies in $X$. If \(B^*\) were dependent, there would be \(y\in B^*\) with \(y\preceq B^*-\{y\}\). By _Dependency Axiom (2)_, there is a finite subset \(\{x_1,\dots,x_n\}\subseteq B^*-\{y\}\) with \(y\preceq\{x_1,\dots,x_n\}\). Since \(\calb\) is a chain, there is some \(B'\in\calb\) with \(\{x_1,\dots,x_n,y\}\subseteq B'\); but then \(y\preceq B'-\{y\}\), so that \(B'\) is dependent, a contradiction. Hence \(B^*\in X\), and Zorn's Lemma gives a maximal element \(M\in X\). By Lemma ref:nlemma5.69, \(M\) is a basis: if some \(z\in\Omega\) had \(z\not\preceq M\), then \(M\cup\{z\}\) would be a strictly larger independent set, contradicting maximality
#+END_proof
#+ATTR_LATEX: :options []
#+BEGIN_theorem
If $B$ is a transcendence basis, then \(k(B)/k\) is purely transcendental and \(E/k(B)\) is algebraic
#+END_theorem
#+BEGIN_proof
Since $B$ is algebraically independent, \(k(B)/k\) is purely transcendental, so it suffices to show that \(E/k(B)\) is algebraic. If some \(z\in E\) were transcendental over \(k(B)\), then Lemma ref:nlemma5.69 would make \(B\cup\{z\}\) algebraically independent, contradicting the maximality of $B$
#+END_proof
#+ATTR_LATEX: :options []
#+BEGIN_theorem
1. If \Omega is a set with a dependency relation \(\preceq\), then any two bases $B$ and $C$ have the same cardinality
2.
If $B$ and $C$ are transcendence bases of a field extension \(E/k\), then \(\abs{B}=\abs{C}\)
#+END_theorem
#+BEGIN_proof
If \(B=\emptyset\), we claim that \(C=\emptyset\). Otherwise there exists \(y\in C\), and since \(C\) is independent, \(y\not\preceq C-\{y\}\). But \(y\preceq B=\emptyset\) and \(\emptyset\subseteq C-\{y\}\), so _Dependency Axiom (3)_ gives \(y\preceq C-\{y\}\), a contradiction.

Now assume \(B\) is finite; say, \(B=\{x_1,\dots,x_n\}\). We prove, by induction on \(k\ge0\), that there exists \(\{y_1,\dots,y_{k-1}\}\subseteq C\) with
\begin{equation*}
B_k=\{y_1,\dots,y_{k-1},x_k,\dots,x_n\}
\end{equation*}
a basis; that is, the elements \(x_1,\dots,x_{k-1}\) can be exchanged with elements \(y_1,\dots,y_{k-1}\in C\) so that \(B_k\) is a basis. Assume that \(B_k=\{y_1,\dots,y_{k-1},x_k,\dots,x_n\}\) is a basis; we want to show that \(B_{k+1}\) is a basis. We claim that there is \(y_k\in C\) with \(y_k\not\preceq B_k-\{x_k\}\). Otherwise, \(y\preceq B_k-\{x_k\}\) for all \(y\in C\). But \(x_k\preceq C\), and so _Dependency Axiom (3)_ gives \(x_k\preceq B_k-\{x_k\}\), contradicting the independence of \(B_k\). By Lemma ref:nlemma5.69, \(B_{k+1}=(B_k-\{x_k\})\cup\{y_k\}\) is independent. To see that \(B_{k+1}\) is a basis, it suffices to show that it generates \Omega. Now \(y_k\preceq B_k\) and \(y_k\not\preceq B_k-\{x_k\}\); the Exchange Axiom gives \(x_k\preceq (B_k-\{x_k\})\cup\{y_k\}=B_{k+1}\). Thus every element of \(B_k\) is dependent on \(B_{k+1}\), and since \(B_k\) generates \Omega, _Dependency Axiom (3)_ shows that \(B_{k+1}\) generates \Omega.

After $n$ steps, \(B_{n+1}=\{y_1,\dots,y_n\}\subseteq C\) is a basis. If \(\abs{C}>n=\abs{B}\), then \(B_{n+1}\) is a proper subset of $C$ that generates \Omega, contradicting the independence of $C$
#+END_proof
[[index:transcendence degree]]
#+ATTR_LATEX: :options []
#+BEGIN_definition
The *transcendence degree* of \(E/k\) is defined by
\begin{equation*}
\trdeg(E/k)=\abs{B}
\end{equation*}
where $B$ is a transcendence basis of \(E/k\)
#+END_definition
#+ATTR_LATEX: :options []
#+BEGIN_examplle
1. If \(E/k\) is a field extension, then \(\trdeg(E/k)=0\) if and only if \(E/k\) is algebraic
2.
If \(E=k(x_1,\dots,x_n)\) is the function field in $n$ variables over a field $k$, then \(\trdeg(E/k)=n\), because \(\{x_1,\dots,x_n\}\) is a transcendence basis of $E$
#+END_examplle
#+ATTR_LATEX: :options []
#+BEGIN_proposition
There are nonisomorphic fields each of which is isomorphic to a subfield of the other
#+END_proposition
#+BEGIN_proof
Clearly \(\C\) is isomorphic to a subfield of \(\C(x)\). However, we claim that \(\C(x)\) is isomorphic to a subfield of $\C$. Let $B$ be a transcendence basis of \(\C\) over \(\Q\), and discard one of its elements, say, $b$. The algebraic closure \(F\) of \(\Q(B-\{b\})\) in \(\C\) is a proper subfield of \(\C\). Since $B$ is infinite, \(\abs{B-\{b\}}=\abs{B}\), so that \(F\cong\C\); moreover, $b$ is transcendental over $F$, so the subfield \(F(b)\subseteq\C\) satisfies \(F(b)\cong F(x)\cong\C(x)\). Finally, \(\C\not\cong\C(x)\), for \(\C\) is algebraically closed while \(\C(x)\) is not
#+END_proof
*** Exercise
#+BEGIN_exercise
label:nex5.44
Prove that \(\varphi\in k(x)\) has degree 1 if and only if \varphi is a linear fractional transformation
#+END_exercise
#+BEGIN_exercise
label:nex5.45
For any field $k$, define a map \(\zeta:\GL(2,k)\to LF(k)\) by
\begin{equation*}
\zeta:\begin{psmallmatrix}a&b\\c&d\end{psmallmatrix}\mapsto (ax+b)/(cx+d)
\end{equation*}
1. Prove that \zeta is a surjective group homomorphism
2.
Prove that \(\ker\zeta=Z(2,k)\), the subgroup of \(\GL(2,k)\) consisting of all nonzero scalar matrices
#+END_exercise
#+BEGIN_exercise
label:nex5.46
Prove that the set $\A$ of all algebraic numbers is an algebraic closure of \(\Q\)
#+END_exercise
** Varieties
*** Varieties and Ideals
Let \(k\) be a field and let \(k^n\) denote the set of all \(n\)-tuples:
\begin{equation*}
k^n=\{a=(a_1,\dots,a_n):a_i\in k\text{ for all }i\}
\end{equation*}
We use the abbreviation
\begin{equation*}
X=(x_1,\dots,x_n)
\end{equation*}
so that the polynomial ring \(k[x_1,\dots,x_n]\) in several variables may be denoted by \(k[X]\) and a polynomial \(f(x_1,\dots,x_n)\) in \(k[X]\) may be abbreviated by \(f(X)\)
#+ATTR_LATEX: :options []
#+BEGIN_definition
If \(f(X)\in k[X]\), its *polynomial function* \(f^{\flat}:k^n\to k\) is defined by evaluation
\begin{equation*}
f^\flat:(a_1,\dots,a_n)\mapsto f(a_1,\dots,a_n)
\end{equation*}
#+END_definition
*For the remainder of this section, we assume that all fields are infinite*
#+ATTR_LATEX: :options []
#+BEGIN_definition
If \(f(X)\in k[X]=k[x_1,\dots,x_n]\) and \(f(a)=0\), where \(a\in k^n\), then \(a\) is called a *zero* of \(f(X)\).
#+END_definition
#+ATTR_LATEX: :options []
#+BEGIN_proposition
If \(k\) is an algebraically closed field and \(f(X)\in k[X]\) is not a constant, then \(f(X)\) has a zero
#+END_proposition
#+BEGIN_proof
We prove the result by induction on \(n\ge1\), where \(X=(x_1,\dots,x_n)\); the base step \(n=1\) is the definition of algebraically closed. For the inductive step, regard $f$ as a polynomial in a last variable $y$ and write
\begin{equation*}
f(X,y)=\displaystyle\sum_{i}g_i(X)y^i
\end{equation*}
For each \(a\in k^n\), define \(f_a(y)=\sum_ig_i(a)y^i\). If \(f(X,y)\) has no zeros, then each \(f_a(y)\in k[y]\) has no zeros; hence, by the base step, each \(f_a(y)\) is a nonzero constant. Thus \(g_i(a)=0\) for all \(i\ge1\) and all \(a\in k^n\); since $k$ is infinite, \(g_i=0\) for all \(i\ge1\), and so \(f=g_0(X)\). But \(g_0(X)\) is not a constant, so the inductive hypothesis provides a zero of $f$, a contradiction
#+END_proof
* Rings
** Modules
#+ATTR_LATEX: :options []
#+BEGIN_examplle
1. If \(k\) is any nonzero commutative ring, then \(\Mat_n(k)\), all \(n\times n\) matrices with entries in \(k\), is a ring under matrix multiplication and matrix addition; it is commutative if and only if \(n=1\). Let \(k\) be any ring.
If \(A=[a_{ip}]\) is an \(n\times l\) matrix and \(B=[b_{pj}]\) is an \(l\times m\) matrix, then their product \(AB\) is defined to be the \(n\times m\) matrix whose \(ij\) entry is given by the usual formula \((AB)_{ij}=\sum_pa_{ip}b_{pj}\). Thus \(\Mat_n(k)\) is a ring, even if \(k\) is not commutative
2. If $G$ is a group, we define the *group ring* \(\Z G\) as follows. Its additive group is the free abelian group having as a basis the elements of $G$, and multiplication is determined by extending the multiplication in $G$ by the distributive laws:
\begin{equation*}
\Big(\sum_{g\in G}a_gg\Big)\Big(\sum_{h\in G}b_hh\Big)=\sum_{g,h}a_gb_h(gh)
\end{equation*}
#+END_examplle
* Index
\renewcommand{\indexname}{}
printindex:nil
* TODO Some statements need to be verified ref:nlemma5.60