{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "假设输入空间$\\mathcal{X} \\subseteq R^{n}$,输出空间$\\mathcal{Y} = \\left\\{+1, -1 \\right\\}$。输入$x \\in \\mathcal{X}$表示实例的特征向量,对应于输入空间的点;输出$y \\in \\mathcal{Y}$表示实例的类别。由输入空间到输出空间的函数\n", "\\begin{align*} \\\\& f \\left( x \\right) = sign \\left( w \\cdot x + b \\right) \\end{align*} \n", "称为感知机。其中,$w$和$b$为感知机模型参数,$w \\in R^{n}$叫做权值或权值向量,$b \\in R$叫偏置,$w \\cdot x$表示$w$和$x$的内积。$sign$是符号函数,即 \n", "\\begin{align*} sign \\left( x \\right) = \\left\\{\n", "\\begin{aligned} \n", "\\ & +1, x \\geq 0\n", "\\\\ & -1, x<0\n", "\\end{aligned}\n", "\\right.\\end{align*} " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "感知机是一种线性分类模型,属于判别模型。感知机模型的假设空间是定义在特征空间中的所有线性分类模型或线性分类器,即函数集合$\\left\\{ f | f \\left( x \\right) = w \\cdot x + b \\right\\}$。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "线性方程\n", "\\begin{align*} \\\\& w \\cdot x + b = 0 \\end{align*} \n", "对应于特征空间$R^{n}$中的一个超平面$S$,其中$w$是超平面的法向量,$b$是超平面的截距。超平面$S$将特征空间划分为两部分,位于其中的点被分为正、负两类,超平面$S$称为分离超平面。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "给定数据集\n", "\\begin{align*} \\\\& T = \\left\\{ \\left( x_{1}, y_{1} \\right), \\left( x_{2}, y_{2} \\right), \\cdots, \\left( x_{N}, y_{N} \\right) \\right\\} \\end{align*} \n", "其中,$x_{i} \\in \\mathcal{X} = R^{n}, y_{i} \\in \\mathcal{Y} = \\left\\{ +1, -1 \\right\\}, i = 1, 2, \\cdots, N$,如果存在某个超平面$S$\n", "\\begin{align*} \\\\& w \\cdot x + b = 0 \\end{align*} \n", "能够将数据集的正实例和负实例完全正确地划分到超平面的两侧,即对所有$y_{i}=+1$的实例$x_{i}$,有$w \\cdot x_{i} + b > 0$,对所有$y_{i}=-1$的实例$x_{i}$,有$w \\cdot x_{i} + b < 0$,则称数据集$T$为线性可分数据集;否则,称数据集$T$线性不可分。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "输入空间$R^{n}$中的任一点$x_{0}$到超平面$S$的距离:\n", "\\begin{align*} \\\\& \\dfrac{1}{\\| w \\|} \\left| w \\cdot x_{0} + b \\right| \\end{align*} \n", "其中$\\| w \\|$是$w$的$L_{2}$范数。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "对于误分类数据$\\left( x_{i}, y_{i} \\right)$,当$w \\cdot x + b > 0$时,$y_{i}=-1$,当$w \\cdot x + b < 0$时,$y_{i}=+1$,有\n", "\\begin{align*} \\\\& -y_{i} \\left( w \\cdot x_{i} + b \\right) > 0 \\end{align*} \n", "误分类点$x_{i}$到分离超平面的距离:\n", "\\begin{align*} \\\\& -\\dfrac{1}{\\| w \\|} y_{i}\\left( w \\cdot x_{i} + b \\right) \\end{align*}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "假设超平面$S$的误分类点集合为$M$,则所有误分类点到超平面$S$的总距离:\n", "\\begin{align*} \\\\& -\\dfrac{1}{\\| w \\|} \\sum_{x_{i} \\in M} y_{i} \\left( w \\cdot x_{i} + b \\right) \\end{align*}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "给定训练数据集\n", "\\begin{align*} \\\\& T = \\left\\{ \\left( x_{1}, y_{1} \\right), \\left( x_{2}, y_{2} \\right), \\cdots, \\left( x_{N}, y_{N} \\right) \\right\\} \\end{align*} \n", "其中,$x_{i} \\in \\mathcal{X} = R^{n}, y_{i} \\in \\mathcal{Y} = \\left\\{ +1, -1 \\right\\}, i = 1, 2, \\cdots, N$。感知机$sign \\left( w \\cdot x + b \\right)$的损失函数定义为\n", "\\begin{align*} \\\\& L \\left( w, b \\right) = -\\sum_{x_{i} \\in M} y_{i} \\left( w \\cdot x_{i} + b \\right) \\end{align*} \n", "其中,$M$为误分类点的集合。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "给定训练数据集\n", "\\begin{align*} \\\\& T = \\left\\{ \\left( x_{1}, y_{1} \\right), \\left( x_{2}, y_{2} \\right), \\cdots, \\left( x_{N}, y_{N} \\right) \\right\\} \\end{align*} \n", "其中,$x_{i} \\in \\mathcal{X} = R^{n}, y_{i} \\in \\mathcal{Y} = \\left\\{ +1, -1 \\right\\}, i = 1, 2, \\cdots, N$。求参数$w$和$b$,使其为以下损失函数极小化问题的解\n", "\\begin{align*} \\\\& \\min_{w,b} L \\left( w, b \\right) = -\\sum_{x_{i} \\in M} y_{i} 
\\left( w \\cdot x_{i} + b \\right) \\end{align*} \n", "其中,$M$为误分类点的集合。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "假设误分类点集合$M$是固定的,则损失函数$L \\left( w, b \\right)$的梯度\n", "\\begin{align*} \\\\& \\nabla _{w} L \\left( w, b \\right) = -\\sum_{x_{i} \\in M} y_{i} x_{i} \n", "\\\\ & \\nabla _{b} L \\left( w, b \\right) = -\\sum_{x_{i} \\in M} y_{i} \\end{align*} " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "随机选取一个误分类点$\\left( x_{i}, y_{i} \\right)$,对$w, b$进行更新:\n", "\\begin{align*} \\\\& w \\leftarrow w + \\eta y_{i} x_{i} \n", "\\\\ & b \\leftarrow b + \\eta y_{i} \\end{align*} \n", "其中,$\\eta \\left( 0 < \\eta \\leq 1 \\right)$是步长,称为学习率。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "感知机算法(原始形式): \n", "输入:训练数据集$T = \\left\\{ \\left( x_{1}, y_{1} \\right), \\left( x_{2}, y_{2} \\right), \\cdots, \\left( x_{N}, y_{N} \\right) \\right\\}$,其中$x_{i} \\in \\mathcal{X} = R^{n}, y_{i} \\in \\mathcal{Y} = \\left\\{ +1, -1 \\right\\}, i = 1, 2, \\cdots, N $;学习率$\\eta \\left( 0 < \\eta \\leq 1 \\right)$。 \n", "输出:$w,b$;感知机模型$f \\left( x \\right) = sign \\left( w \\cdot x + b \\right)$\n", "1. 选取初值$w_{0},b_{0}$ \n", "2. 在训练集中选取数据$\\left( x_{i}, y_{i} \\right)$ \n", "3. 如果$y_{i} \\left( w \\cdot x_{i} + b \\right) \\leq 0$ \n", "\\begin{align*} \\\\& w \\leftarrow w + \\eta y_{i} x_{i} \n", "\\\\ & b \\leftarrow b + \\eta y_{i} \\end{align*} \n", "4. 转至2,直至训练集中没有误分类点。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "设$w,b$修改n次,则$w,b$关于$\\left( x_{i}, y_{i} \\right)$的增量分别是$\\alpha_{i} y_{i} x_{i}$和$\\alpha_{i} y_{i}$,其中$\\alpha_{i} = n_{i} \\eta$。$w,b$可表示为\n", "\\begin{align*} \\\\& w = \\sum_{i=1}^{N} \\alpha_{i} y_{i} x_{i} \n", "\\\\ & b = \\sum_{i=1}^{N} \\alpha_{i} y_{i} \\end{align*} \n", "其中,$\\alpha_{i} \\geq 0, i=1,2, \\cdots, N$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "感知机算法(对偶形式): \n", "输入:训练数据集$T = \\left\\{ \\left( x_{1}, y_{1} \\right), \\left( x_{2}, y_{2} \\right), \\cdots, \\left( x_{N}, y_{N} \\right) \\right\\}$,其中$x_{i} \\in \\mathcal{X} = R^{n}, y_{i} \\in \\mathcal{Y} = \\left\\{ +1, -1 \\right\\}, i = 1, 2, \\cdots, N $;学习率$\\eta \\left( 0 < \\eta \\leq 1 \\right)$。 \n", "输出:$\\alpha,b$;感知机模型$f \\left( x \\right) = sign \\left( \\sum_{j=1}^{N} \\alpha_{j} y_{j} x_{j} \\cdot x + b \\right)$ ,其中$\\alpha = \\left( \\alpha_{1}, \\alpha_{2}, \\cdots, \\alpha_{N} \\right)^{T}$\n", "1. $\\alpha \\leftarrow 0, b \\leftarrow 0$ \n", "2. 在训练集中选取数据$\\left( x_{i}, y_{i} \\right)$ \n", "3. 如果$y_{i} \\left( \\sum_{j=1}^{N} \\alpha_{j} y_{j} x_{j} \\cdot x_{i} + b \\right) \\leq 0$ \n", "\\begin{align*} \\\\& \\alpha_{i} \\leftarrow \\alpha_{i} + \\eta\n", "\\\\ & b \\leftarrow b + \\eta y_{i} \\end{align*} \n", "4. 转至2,直至训练集中没有误分类点。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "对偶形式中训练实例仅以内积形式出现,可预先计算$Gram$矩阵存储实例间内积\n", "\\begin{align*} \\\\& G = \\left[ x_{i} \\cdot x_{j} \\right]_{N \\times N} \\end{align*} " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.11" } }, "nbformat": 4, "nbformat_minor": 0 }