{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Sebastian Raschka, 2015 \n", "`mlxtend`, a library of extension and helper modules for Python's data analysis and machine learning libraries\n", "\n", "- GitHub repository: https://github.com/rasbt/mlxtend\n", "- Documentation: http://rasbt.github.io/mlxtend/\n", "\n", "View this page in [jupyter nbviewer](http://nbviewer.ipython.org/github/rasbt/mlxtend/blob/master/docs/sources/_ipynb_templates/_template.ipynb)" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sebastian Raschka \n", "last updated: 2016-01-30 \n", "\n", "CPython 3.5.1\n", "IPython 4.0.3\n", "\n", "matplotlib 1.5.1\n", "numpy 1.10.2\n", "scipy 0.16.1\n", "mlxtend 0.3.0\n" ] } ], "source": [ "%load_ext watermark\n", "%watermark -a 'Sebastian Raschka' -u -d -v -p matplotlib,numpy,scipy,mlxtend" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# MinMax Scaling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A function for min-max scaling of pandas DataFrames or NumPy arrays." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> from mlxtend.preprocessing import MinMaxScaling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An alternative approach to Z-score normalization (or standardization) is the so-called Min-Max scaling (often also simply called \"normalization\" - a common cause for ambiguities).\n", "In this approach, the data is scaled to a fixed range - usually 0 to 1.\n", "The cost of having this bounded range - in contrast to standardization - is that we will end up with smaller standard deviations, which can suppress the effect of outliers.\n", "\n", "A Min-Max scaling is typically done via the following equation:\n", "\n", "$$X_{sc} = \\frac{X - X_{min}}{X_{max} - X_{min}}.$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One family of algorithms that is scale-invariant encompasses tree-based learning algorithms. Let's take the general CART decision tree algorithm. Without going into much depth regarding information gain and impurity measures, we can think of the decision as \"is feature x_i >= some_val?\" Intuitively, we can see that it really doesn't matter on which scale this feature is (centimeters, Fahrenheit, a standardized scale -- it really doesn't matter).\n", "\n", "\n", "Some examples of algorithms where feature scaling matters are:\n", "\n", "\n", "- k-nearest neighbors with an Euclidean distance measure if want all features to contribute equally\n", "- k-means (see k-nearest neighbors)\n", "- logistic regression, SVMs, perceptrons, neural networks etc. if you are using gradient descent/ascent-based optimization, otherwise some weights will update much faster than others\n", "- linear discriminant analysis, principal component analysis, kernel principal component analysis since you want to find directions of maximizing the variance (under the constraints that those directions/eigenvectors/principal components are orthogonal); you want to have features on the same scale since you'd emphasize variables on \"larger measurement scales\" more.\n", "\n", "\n", "There are many more cases than I can possibly list here ... I always recommend you to think about the algorithm and what it's doing, and then it typically becomes obvious whether we want to scale your features or not.\n", "\n", "\n", "In addition, we'd also want to think about whether we want to \"standardize\" or \"normalize\" (here: scaling to [0, 1] range) our data. Some algorithms assume that our data is centered at 0. For example, if we initialize the weights of a small multi-layer perceptron with tanh activation units to 0 or small random values centered around zero, we want to update the model weights \"equally.\"\n", "As a rule of thumb I'd say: When in doubt, just standardize the data, it shouldn't hurt. \n", "\n", "\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Examples" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example 1 - Scaling a Pandas DataFrame" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
s1s2
0110
129
238
347
456
565
\n", "
" ], "text/plain": [ " s1 s2\n", "0 1 10\n", "1 2 9\n", "2 3 8\n", "3 4 7\n", "4 5 6\n", "5 6 5" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import pandas as pd\n", "\n", "s1 = pd.Series([1, 2, 3, 4, 5, 6], index=(range(6)))\n", "s2 = pd.Series([10, 9, 8, 7, 6, 5], index=(range(6)))\n", "df = pd.DataFrame(s1, columns=['s1'])\n", "df['s2'] = s2\n", "df" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
s1s2
00.01.0
10.20.8
20.40.6
30.60.4
40.80.2
51.00.0
\n", "
" ], "text/plain": [ " s1 s2\n", "0 0.0 1.0\n", "1 0.2 0.8\n", "2 0.4 0.6\n", "3 0.6 0.4\n", "4 0.8 0.2\n", "5 1.0 0.0" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from mlxtend.preprocessing import minmax_scaling\n", "minmax_scaling(df, columns=['s1', 's2'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example 2 - Scaling a NumPy Array" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "array([[ 1, 10],\n", " [ 2, 9],\n", " [ 3, 8],\n", " [ 4, 7],\n", " [ 5, 6],\n", " [ 6, 5]])" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "\n", "X = np.array([[1, 10], [2, 9], [3, 8], [4, 7], [5, 6], [6, 5]])\n", "X" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "array([[ 0. , 1. ],\n", " [ 0.2, 0.8],\n", " [ 0.4, 0.6],\n", " [ 0.6, 0.4],\n", " [ 0.8, 0.2],\n", " [ 1. , 0. ]])" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from mlxtend.preprocessing import minmax_scaling\n", "minmax_scaling(X, columns=[0, 1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# API" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "## minmax_scaling\n", "\n", "*minmax_scaling(array, columns, min_val=0, max_val=1)*\n", "\n", "Min max scaling of pandas' DataFrames.\n", "\n", "**Parameters**\n", "\n", "- `array` : pandas DataFrame or NumPy ndarray, shape = [n_rows, n_columns].\n", "\n", "\n", "- `columns` : array-like, shape = [n_columns]\n", "\n", " Array-like with column names, e.g., ['col1', 'col2', ...]\n", " or column indices [0, 2, 4, ...]\n", "\n", "- `min_val` : `int` or `float`, optional (default=`0`)\n", "\n", " minimum value after rescaling.\n", "\n", "- `min_val` : `int` or `float`, optional (default=`1`)\n", "\n", " maximum value after rescaling.\n", "\n", "**Returns**\n", "\n", "- `df_new` : pandas DataFrame object.\n", "\n", " Copy of the array or DataFrame with rescaled columns.\n", "\n", "\n" ] } ], "source": [ "with open('../../api_modules/mlxtend.preprocessing/minmax_scaling.md', 'r') as f:\n", " print(f.read())" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.1" } }, "nbformat": 4, "nbformat_minor": 0 }