{ "metadata": { "name": "", "signature": "sha256:c1fd41d62e09e606fb575de755ada5bce9d5ea50a96e4b956094fe857e2b4193" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Ortogonalidade e MMQ\n", "A introdu\u00e7\u00e3o de um produto escalar $\\langle \\cdot, \\cdot \\rangle$ em um espa\u00e7o vetorial $V$ de dimens\u00e3o finita $n$, al\u00e9m de determinar uma dist\u00e2ncia entre os vetores, tamb\u00e9m define o \u00e2ngulo. Lembremos que:\n", "$$ \\| x-y \\| = \\sqrt{\\langle x-y,x-y \\rangle}.$$\n", "Para o conceito de \u00e2ngulo, vamos come\u00e7ar s\u00f3 com ortoganalidade. Dizemos que dois vetores $x$ e $y$ s\u00e3o **ortogonais** quando o produto interno entre eles for zero, $\\langle x, y\\rangle = 0$.\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Usamos o numpy para a o produto escalar usual\n", "import numpy as np\n", "x=np.array([1,2,0,4,0])\n", "y=np.array([2,-1,4,0,2])\n", "# np.dot() \u00e9 a fun\u00e7\u00e3o de produto escalar para vetores ver help(np.dot)para o caso geral\n", "np.dot(x,y)\n" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 1, "text": [ "0" ] } ], "prompt_number": 1 }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Processo de ortogonaliza\u00e7\u00e3o de Gram-Schmidt\n", "A partir de uma base qualquer $\\{\\mathbf{e}_1,\\dots,\\mathbf{e}_n\\}$, podemos construir uma base ortogonal (os elementos s\u00e3o, dois a dois, ortogonais) $\\{\\mathbf{f}_1,\\dots,\\mathbf{f}_n\\}$, num processo iterativo chamado *m\u00e9todo de Gram-Schmidt*:\n", "$$ \\begin{eqnarray}\n", "\\mathbf{f}_1 &= &\\mathbf{e}_1\\\\\n", "\\mathbf{f}_k & = & \\mathbf{e}_k - \\sum_{i=1}^{k-1}\\frac{\\langle \\mathbf{e}_k,\\mathbf{f}_i\\rangle}{\\langle \\mathbf{f}_i,\\mathbf{f}_i\\rangle}\\mathbf{f}_i\n", "\\end{eqnarray}\n", "$$" ] }, { "cell_type": "code", "collapsed": false, "input": [ "def GramSchimdt(E):\n", " '''Aplica o processo de Gram-Schmidt para uma array de vetores '''\n", " n,k = np.shape(E)\n", " F = E.copy()\n", " for i in range(1,k):\n", " F[:,i]=F[:,i]-sum(np.dot(E[:,i],F[:,j])/np.dot(F[:,j],F[:,j])*F[:,j] for j in range(i))\n", " return F " ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 2 }, { "cell_type": "code", "collapsed": false, "input": [ "# Teste da fun\u00e7\u00e3o:\n", "# Exemplo 1\n", "E = np.array([[2,1],[1,0],[0,2]], dtype=float)\n", "print(E)\n", " " ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "[[ 2. 1.]\n", " [ 1. 0.]\n", " [ 0. 2.]]\n" ] } ], "prompt_number": 3 }, { "cell_type": "code", "collapsed": false, "input": [ "F=GramSchimdt(E)\n", "print(F)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "[[ 2. 0.2]\n", " [ 1. -0.4]\n", " [ 0. 2. ]]\n" ] } ], "prompt_number": 4 }, { "cell_type": "code", "collapsed": false, "input": [ "# Exemplo 2\n", "E1 = np.array([[2,1,0],[1,-1,3],[0,2,1]], dtype=float)\n", "print(E1)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "[[ 2. 1. 0.]\n", " [ 1. -1. 3.]\n", " [ 0. 2. 1.]]\n" ] } ], "prompt_number": 5 }, { "cell_type": "code", "collapsed": false, "input": [ "F1=GramSchimdt(E1)\n", "print(F1)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "[[ 2. 0.6 -1.03448276]\n", " [ 1. 
## Orthogonal complement
If $W \subset V$ is a vector subspace of dimension $k$, then the set $W^\bot = \{ y \in V : \langle x, y \rangle = 0 \;\; \forall x \in W\}$ is the orthogonal complement of $W$. It is a subspace of dimension $n-k$, and $V = W \oplus W^\bot$.

* Indeed: take any orthogonal basis $\{a_1,\dots,a_k\}$ of $W$.
* For each $v \in V$, define $\hat{v} = \sum_{i=1}^k \frac{\langle v, a_i \rangle}{\langle a_i, a_i \rangle} a_i$, called the orthogonal projection of $v$ onto $W$.
* The linear operator $T(v) = v - \hat{v}$ has kernel $\ker{T} = W$ and image $\text{Im}(T) = W^\bot$.

## Minimization property of the orthogonal projection
Let $W$ be a subspace of $V$ of dimension $k$. For each $v \in V$, denote by $\hat{v}$ the orthogonal projection of $v$ onto $W$. We saw in the previous paragraph that $(v-\hat{v}) \in W^\bot$. Any other $w \in W$ can be written as $w = \hat{v} + w_1$ with $w_1 \in W$. Then:
$$ \|v-w\|^2 = \|v-\hat{v} - w_1\|^2 = \langle v-\hat{v} - w_1,\, v-\hat{v} - w_1\rangle = \|v-\hat{v}\|^2 + \|w_1\|^2, $$
where the cross terms $\langle v-\hat{v}, w_1 \rangle$ vanish because $v-\hat{v} \in W^\bot$ and $w_1 \in W$. Therefore the orthogonal projection $\hat{v}$ is the vector of $W$ closest to $v$ in the Euclidean norm.
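This minimization property is precisely what underlies least squares (the MMQ of the title): the least-squares solution of an overdetermined system $Ec = v$ chooses $c$ so that $Ec$ is the orthogonal projection of $v$ onto the column span of $E$. A minimal sketch of that connection, assuming `numpy` and the `GramSchmidt` function above are in scope; the helper `proj` is ours, not part of the original notebook:

```python
def proj(v, F):
    '''Orthogonal projection of v onto the column span of F,
    assuming the columns of F are pairwise orthogonal (e.g. Gram-Schmidt output).'''
    return sum(np.dot(v, F[:, j]) / np.dot(F[:, j], F[:, j]) * F[:, j]
               for j in range(F.shape[1]))

E = np.array([[2, 1], [1, 0], [0, 2]], dtype=float)
F = GramSchmidt(E)               # orthogonal basis of W = column span of E
v = np.array([1.0, 1.0, 1.0])
v_hat = proj(v, F)               # the point of W closest to v

# v - v_hat lies in the orthogonal complement of W ...
print(np.dot(v - v_hat, E[:, 0]), np.dot(v - v_hat, E[:, 1]))  # both ~ 0

# ... and least squares recovers the same point: c minimizes ||E c - v||
c = np.linalg.lstsq(E, v, rcond=None)[0]
print(np.allclose(E @ c, v_hat))                               # True
```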