{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "%load_ext watermark\n", "%watermark -d -u -a 'Andreas Mueller, Kyle Kastner, Sebastian Raschka' -v -p numpy,scipy,matplotlib,scikit-learn" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 无监督学习(Unsupervised Learning) Part 1 -- 变换(Transformation)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "很多无监督学习的实例,如降维、流形(manifold)学习和特征提取,可以找到输入数据一种新的表示方式,而不需要任何额外的输入。(与监督学习相比,无监督算法不需要或参考目标变量,像前面的分类和回归例子一样)。\n", "\n", "\n", "\n", "一个非常基本的例子,是数据的尺度缩放,也是许多机器学习算法所需要的——尺度缩放属于数据预处理范畴,几乎不能被称为*学习*。有许多不同的尺度缩放技术,在下面的例子中,是一个特殊的方法,通常被称为“标准化”。在这里,我们将重新缩放数据,以使每个特征都以0为中心(均值=0),具有单位方差(标准差=1)。\n", "\n", "例如,一个具有值[1,2,3,4,5]的一维数据集,标准化以后的值为\n", "\n", "- 1 -> -1.41\n", "- 2 -> -0.71\n", "- 3 -> 0.0\n", "- 4 -> 0.71\n", "- 5 -> 1.41\n", "\n", "用公式 $x_{standardized} = \\frac{x - \\mu_x}{\\sigma_x}$ 进行计算,\n", "\n", "其中$\\mu$是样本均值,$\\sigma$是标准差。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "ary = np.array([1, 2, 3, 4, 5])\n", "ary_standardized = (ary - ary.mean()) / ary.std()\n", "ary_standardized" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "尽管标准化是最基本的预处理过程——正如我们在上面的代码片段中看到的那样——scikit-learn为这种计算实现了一个`StandardScaler`类。在后面的部分中,将看到scikit-learn接口为什么以及什么时候在上面执行的代码片段中派上用场。\n", "\n", "应用这样的预处理,有一个和监督学习算法非常类似的接口。为了练习使用scikit-learn的\"Transformer\"接口,从加载Iris数据集开始,对其进行缩放:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "from sklearn.datasets import load_iris\n", "from sklearn.model_selection import train_test_split\n", "\n", "iris = load_iris()\n", "X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=0)\n", "print(X_train.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "iris数据集不是“居中的”,也就是说,它具有非零的均值,每个分量的标准差也不同:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(\"mean : %s \" % X_train.mean(axis=0))\n", "print(\"standard deviation : %s \" % X_train.std(axis=0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "为了使用预处理方法,首先导入估计器,在这里用StandardScaler,将其实例化:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.preprocessing import StandardScaler\n", "scaler = StandardScaler()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "与分类和回归算法一样,我们称之为``fit``,从数据中学习模型。既然是无监督模型,就只传递``X``,而不用传``y``。其实这样做也只是估计出了均值和标准差。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "scaler.fit(X_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "现在,可以通过应用``transform``(不是``predict``)方法来重新缩放数据:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train_scaled = scaler.transform(X_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ 
"``X_train_scaled``具有相同数量的样本和特征,但是减去了均值,并且所有特征都被缩放为具有单位标准差:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(X_train_scaled.shape)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(\"mean : %s \" % X_train_scaled.mean(axis=0))\n", "print(\"standard deviation : %s \" % X_train_scaled.std(axis=0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "总结:通过`fit`方法,估计器拟合提供给它的数据。在这一步中,估计器从数据中估计出参数(在这里是:平均值和标准差)。然后,再`transform`数据,将这些参数用于数据的变换。(注意,变换方法不会更新这些参数)。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "需要注意的是,要对训练集和测试集是假同样的变换。其结果是,在缩放后,测试数据的均值通常不为零:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "X_test_scaled = scaler.transform(X_test)\n", "print(\"mean test data: %s\" % X_test_scaled.mean(axis=0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "用完全相同的方式对训练数据和测试数据进行变换,对后续处理步骤理解数据来说非常重要,如下图所示:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "from figures import plot_relative_scaling\n", "plot_relative_scaling()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "缩放数据有几种常见的方法。最常见的是刚刚介绍的``StandardScaler``,但是用``MinMaxScaler``将数据重新缩放到一个固定最小值和最大值(通常在0和1之间),或用更鲁棒的统计学指标,如中值和分位数,而不是均值和标准差(用``RobustScaler``)也很有用。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "from figures import plot_scaling\n", "plot_scaling()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "主成分分析(Principal Component Analysis)\n", "============================" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "无监督变换更有意思的一个例子,是主成分分析(PCA)。这是一种通过创建线性投影来降低数据维度的技术。也就是说,找到新的特征来表示数据,这些新特征是旧数据的线性组合。因此,我们可以把PCA视为将数据投影到一个*新的*特征空间上。\n", "\n", "PCA找到这些新(投影)方向的方法是寻找最大方差的方向。通常只有少数几个组成成分,能解释数据中的大部分方差。前提是在捕获数据集大部分信息的同时减少数据集的大小(维数)。有很多原因可以解释为什么降维很有用: 它可以减少运行学习算法时的计算成本,减少存储空间,并且可能有助于所谓的“维数灾难”,在后面会更详细地讨论。\n", "\n", "为了说明变换看起来是什么样子,首先在二维数据上显示它,并保留两个主成分。图示如下:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "from figures import plot_pca_illustration\n", "plot_pca_illustration()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "现在让我们更详细地介绍所有步骤:\n", "\n", "先创建一个Gaussian blob:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "rnd = np.random.RandomState(5)\n", "X_ = rnd.normal(size=(300, 2))\n", "X_blob = np.dot(X_, rnd.normal(size=(2, 2))) + rnd.normal(size=2)\n", "y = X_[:, 0] > 0\n", "plt.scatter(X_blob[:, 0], X_blob[:, 1], c=y, linewidths=0, s=30)\n", "plt.xlabel(\"feature 1\")\n", "plt.ylabel(\"feature 2\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "实例化主成分分析模型。默认情况下,所有方向都保留。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "from sklearn.decomposition import PCA\n", "pca = PCA()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "然后用我数据拟合主成分分析模型。由于主成分分析是无监督算法,没有输出``y``。" ] 
}, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "pca.fit(X_blob)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "然后变换数据,投影到主成分上:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "X_pca = pca.transform(X_blob)\n", "\n", "plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, linewidths=0, s=30)\n", "plt.xlabel(\"first principal component\")\n", "plt.ylabel(\"second principal component\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "主成分分析发现的第一个成分沿对角线,第二个成分垂直于对角线。当主成分分析发现旋转变换时,主成分总是彼此“正交”。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "基于主成分分析(PCA)的降维可视化\n", "-------------------------------------------------------------\n", "考虑digits数据集。它无法在一个单一的二维图中可视化,因为它有64个特征。我们将用sklearn示例中的例子 [here](http://scikit-learn.org/stable/auto_examples/manifold/plot_lle_digits.html) 提取2个维度来进行可视化" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "from figures import digits_plot\n", "\n", "digits_plot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "注意,这个投影是在*没有*任何关于标签(由颜色表示)的信息的情况下完成的:这也正是**无监督**学习的意义所在。我们看到这个投影给了我们洞察参数空间中不同数字分布的另一种角度。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 练习\n", "\n", "使用前两个主成分可视化Iris数据集,用两个原始特征压缩该可视化。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# %load solutions/07A_iris-pca.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.5" } }, "nbformat": 4, "nbformat_minor": 4 }