{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# High-Performance Pandas: eval and query" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "As we've already seen in previous chapters, the power of the PyData stack is built upon the ability of NumPy and Pandas to push basic operations into lower-level compiled code via an intuitive higher-level syntax: examples are vectorized/broadcasted operations in NumPy, and grouping-type operations in Pandas.\n", "While these abstractions are efficient and effective for many common use cases, they often rely on the creation of temporary intermediate objects, which can cause undue overhead in computational time and memory use.\n", "\n", "To address this, Pandas includes some methods that allow you to directly access C-speed operations without costly allocation of intermediate arrays: `eval` and `query`, which rely on the [NumExpr package](https://github.com/pydata/numexpr).\n", "In this chapter I will walk you through their use and give some rules of thumb about when you might think about using them." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Motivating query and eval: Compound Expressions\n", "\n", "We've seen previously that NumPy and Pandas support fast vectorized operations; for example, when adding the elements of two arrays:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2.21 ms ± 142 µs per loop (mean ± std. dev. 
of 7 runs, 100 loops each)\n" ] } ], "source": [ "import numpy as np\n", "rng = np.random.default_rng(42)\n", "x = rng.random(1000000)\n", "y = rng.random(1000000)\n", "%timeit x + y" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As discussed in [Computation on NumPy Arrays: Universal Functions](02.03-Computation-on-arrays-ufuncs.ipynb), this is much faster than doing the addition via a Python loop or comprehension:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "263 ms ± 43.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n" ] } ], "source": [ "%timeit np.fromiter((xi + yi for xi, yi in zip(x, y)),\n", " dtype=x.dtype, count=len(x))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But this abstraction can become less efficient when computing compound expressions.\n", "For example, consider the following expression:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "tags": [] }, "outputs": [], "source": [ "mask = (x > 0.5) & (y < 0.5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because NumPy evaluates each subexpression, this is roughly equivalent to the following:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "tags": [] }, "outputs": [], "source": [ "tmp1 = (x > 0.5)\n", "tmp2 = (y < 0.5)\n", "mask = tmp1 & tmp2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In other words, *every intermediate step is explicitly allocated in memory*. 
If the `x` and `y` arrays are very large, this can lead to significant memory and computational overhead.\n", "The NumExpr library gives you the ability to compute this type of compound expression element by element, without the need to allocate full intermediate arrays.\n", "The [NumExpr documentation](https://github.com/pydata/numexpr) has more details, but for the time being it is sufficient to say that the library accepts a *string* giving the NumPy-style expression you'd like to compute:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numexpr\n", "mask_numexpr = numexpr.evaluate('(x > 0.5) & (y < 0.5)')\n", "np.all(mask == mask_numexpr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The benefit here is that NumExpr evaluates the expression in a way that avoids temporary arrays where possible, and thus can be much more efficient than NumPy, especially for long sequences of computations on large arrays.\n", "The Pandas `eval` and `query` tools that we will discuss here are conceptually similar, and are essentially Pandas-specific wrappers of NumExpr functionality." 
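] },
{ "cell_type": "markdown", "metadata": {}, "source": [
 "To build some intuition for what this buys you, here is a simplified pure-NumPy sketch of chunked evaluation: the compound expression is computed one fixed-size block at a time, so any temporaries are only block-sized rather than full-sized. (This is only an illustration of the idea; the `chunked_mask` helper is invented for this example and is not NumExpr's actual implementation.)\n",
 "\n",
 "```python\n",
 "import numpy as np\n",
 "\n",
 "def chunked_mask(x, y, chunk=10_000):\n",
 "    # Evaluate (x > 0.5) & (y < 0.5) block by block, so the\n",
 "    # temporary boolean arrays never exceed chunk elements.\n",
 "    out = np.empty(len(x), dtype=bool)\n",
 "    for i in range(0, len(x), chunk):\n",
 "        sl = slice(i, i + chunk)\n",
 "        out[sl] = (x[sl] > 0.5) & (y[sl] < 0.5)\n",
 "    return out\n",
 "\n",
 "rng = np.random.default_rng(0)\n",
 "x = rng.random(100_000)\n",
 "y = rng.random(100_000)\n",
 "assert np.array_equal(chunked_mask(x, y), (x > 0.5) & (y < 0.5))\n",
 "```\n",
 "\n",
 "NumExpr applies essentially this blocking strategy internally, choosing block sizes small enough to fit in CPU cache."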
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## pandas.eval for Efficient Operations\n", "\n", "The `eval` function in Pandas uses string expressions to efficiently compute operations on `DataFrame` objects.\n", "For example, consider the following data:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "import pandas as pd\n", "nrows, ncols = 100000, 100\n", "df1, df2, df3, df4 = (pd.DataFrame(rng.random((nrows, ncols)))\n", " for i in range(4))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To compute the sum of all four ``DataFrame``s using the typical Pandas approach, we can just write the sum:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "73.2 ms ± 6.72 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n" ] } ], "source": [ "%timeit df1 + df2 + df3 + df4" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The same result can be computed via ``pd.eval`` by constructing the expression as a string:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "34 ms ± 4.2 ms per loop (mean ± std. dev. 
of 7 runs, 10 loops each)\n" ] } ], "source": [ "%timeit pd.eval('df1 + df2 + df3 + df4')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `eval` version of this expression is about twice as fast (and uses much less memory), while giving the same result:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "np.allclose(df1 + df2 + df3 + df4,\n", "            pd.eval('df1 + df2 + df3 + df4'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`pd.eval` supports a wide range of operations.\n", "To demonstrate these, we'll use the following integer data:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "tags": [] }, "outputs": [], "source": [ "df1, df2, df3, df4, df5 = (pd.DataFrame(rng.integers(0, 1000, (100, 3)))\n", "                           for i in range(5))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Arithmetic operators\n", "`pd.eval` supports all arithmetic operators. 
For example:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result1 = -df1 * df2 / (df3 + df4) - df5\n", "result2 = pd.eval('-df1 * df2 / (df3 + df4) - df5')\n", "np.allclose(result1, result2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Comparison operators\n", "`pd.eval` supports all comparison operators, including chained expressions:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result1 = (df1 < df2) & (df2 <= df3) & (df3 != df4)\n", "result2 = pd.eval('df1 < df2 <= df3 != df4')\n", "np.allclose(result1, result2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Bitwise operators\n", "`pd.eval` supports the `&` and `|` bitwise operators:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result1 = (df1 < 0.5) & (df2 < 0.5) | (df3 < df4)\n", "result2 = pd.eval('(df1 < 0.5) & (df2 < 0.5) | (df3 < df4)')\n", "np.allclose(result1, result2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition, it supports the use of the literal `and` and `or` in Boolean expressions:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result3 = pd.eval('(df1 
< 0.5) and (df2 < 0.5) or (df3 < df4)')\n", "np.allclose(result1, result3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Object attributes and indices\n", "\n", "`pd.eval` supports access to object attributes via the `obj.attr` syntax and indexes via the `obj[index]` syntax:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result1 = df2.T[0] + df3.iloc[1]\n", "result2 = pd.eval('df2.T[0] + df3.iloc[1]')\n", "np.allclose(result1, result2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Other operations\n", "\n", "Other operations, such as function calls, conditional statements, loops, and other more involved constructs are currently *not* implemented in `pd.eval`.\n", "If you'd like to execute these more complicated types of expressions, you can use the NumExpr library itself." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DataFrame.eval for Column-Wise Operations\n", "\n", "Just as Pandas has a top-level `pd.eval` function, `DataFrame` objects have an `eval` method that works in similar ways.\n", "The benefit of the `eval` method is that columns can be referred to by name.\n", "We'll use this labeled array as an example:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
00.8508880.9667090.958690
10.8201260.3856860.061402
20.0597290.8317680.652259
30.2447740.1403220.041711
40.8182050.7533840.578851
\n", "
" ], "text/plain": [ " A B C\n", "0 0.850888 0.966709 0.958690\n", "1 0.820126 0.385686 0.061402\n", "2 0.059729 0.831768 0.652259\n", "3 0.244774 0.140322 0.041711\n", "4 0.818205 0.753384 0.578851" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.DataFrame(rng.random((1000, 3)), columns=['A', 'B', 'C'])\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using `pd.eval` as in the previous section, we can compute expressions with the three columns like this:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result1 = (df['A'] + df['B']) / (df['C'] - 1)\n", "result2 = pd.eval(\"(df.A + df.B) / (df.C - 1)\")\n", "np.allclose(result1, result2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `DataFrame.eval` method allows much more succinct evaluation of expressions with the columns:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result3 = df.eval('(A + B) / (C - 1)')\n", "np.allclose(result1, result3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice here that we treat *column names as variables* within the evaluated expression, and the result is what we would wish." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Assignment in DataFrame.eval\n", "\n", "In addition to the options just discussed, `DataFrame.eval` also allows assignment to any column.\n", "Let's use the `DataFrame` from before, which has columns `'A'`, `'B'`, and `'C'`:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABC
00.8508880.9667090.958690
10.8201260.3856860.061402
20.0597290.8317680.652259
30.2447740.1403220.041711
40.8182050.7533840.578851
\n", "
" ], "text/plain": [ " A B C\n", "0 0.850888 0.966709 0.958690\n", "1 0.820126 0.385686 0.061402\n", "2 0.059729 0.831768 0.652259\n", "3 0.244774 0.140322 0.041711\n", "4 0.818205 0.753384 0.578851" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use `df.eval` to create a new column `'D'` and assign to it a value computed from the other columns:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABCD
00.8508880.9667090.9586901.895916
10.8201260.3856860.06140219.638139
20.0597290.8317680.6522591.366782
30.2447740.1403220.0417119.232370
40.8182050.7533840.5788512.715013
\n", "
" ], "text/plain": [ " A B C D\n", "0 0.850888 0.966709 0.958690 1.895916\n", "1 0.820126 0.385686 0.061402 19.638139\n", "2 0.059729 0.831768 0.652259 1.366782\n", "3 0.244774 0.140322 0.041711 9.232370\n", "4 0.818205 0.753384 0.578851 2.715013" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.eval('D = (A + B) / C', inplace=True)\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the same way, any existing column can be modified:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ABCD
00.8508880.9667090.958690-0.120812
10.8201260.3856860.0614027.075399
20.0597290.8317680.652259-1.183638
30.2447740.1403220.0417112.504142
40.8182050.7533840.5788510.111982
\n", "
" ], "text/plain": [ " A B C D\n", "0 0.850888 0.966709 0.958690 -0.120812\n", "1 0.820126 0.385686 0.061402 7.075399\n", "2 0.059729 0.831768 0.652259 -1.183638\n", "3 0.244774 0.140322 0.041711 2.504142\n", "4 0.818205 0.753384 0.578851 0.111982" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.eval('D = (A - B) / C', inplace=True)\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Local Variables in DataFrame.eval\n", "\n", "The `DataFrame.eval` method supports an additional syntax that lets it work with local Python variables.\n", "Consider the following:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "column_mean = df.mean(1)\n", "result1 = df['A'] + column_mean\n", "result2 = df.eval('A + @column_mean')\n", "np.allclose(result1, result2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `@` character here marks a *variable name* rather than a *column name*, and lets you efficiently evaluate expressions involving the two \"namespaces\": the namespace of columns, and the namespace of Python objects.\n", "Notice that this `@` character is only supported by the `DataFrame.eval` *method*, not by the `pandas.eval` *function*, because the `pandas.eval` function only has access to the one (Python) namespace." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The DataFrame.query Method\n", "\n", "The `DataFrame` has another method based on evaluated strings, called `query`.\n", "Consider the following:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result1 = df[(df.A < 0.5) & (df.B < 0.5)]\n", "result2 = pd.eval('df[(df.A < 0.5) & (df.B < 0.5)]')\n", "np.allclose(result1, result2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As with the example used in our discussion of `DataFrame.eval`, this is an expression involving columns of the `DataFrame`.\n", "However, it cannot be expressed using the `DataFrame.eval` syntax!\n", "Instead, for this type of filtering operation, you can use the `query` method:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result2 = df.query('A < 0.5 and B < 0.5')\n", "np.allclose(result1, result2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to being a more efficient computation, compared to the masking expression this is much easier to read and understand.\n", "Note that the `query` method also accepts the `@` flag to mark local variables:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Cmean = df['C'].mean()\n", "result1 = df[(df.A < Cmean) & (df.B < Cmean)]\n", "result2 = df.query('A < @Cmean and B < @Cmean')\n", 
"np.allclose(result1, result2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Performance: When to Use These Functions\n", "\n", "When considering whether to use `eval` and `query`, there are two considerations: *computation time* and *memory use*.\n", "Memory use is the most predictable aspect. As already mentioned, every compound expression involving NumPy arrays or Pandas ``DataFrame``s will result in implicit creation of temporary arrays. For example, this:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "tags": [] }, "outputs": [], "source": [ "x = df[(df.A < 0.5) & (df.B < 0.5)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "is roughly equivalent to this:" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "tmp1 = df.A < 0.5\n", "tmp2 = df.B < 0.5\n", "tmp3 = tmp1 & tmp2\n", "x = df[tmp3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If the size of the temporary ``DataFrame``s is significant compared to your available system memory (typically several gigabytes), then it's a good idea to use an `eval` or `query` expression.\n", "You can check the approximate size of your array in bytes using this:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "32000" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.values.nbytes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the performance side, `eval` can be faster even when you are not maxing out your system memory.\n", "The issue is how your temporary objects compare to the size of the L1 or L2 CPU cache on your system (typically a few megabytes); if they are much bigger, then `eval` can avoid some potentially slow movement of values between the different memory 
caches.\n", "In practice, I find that the difference in computation time between the traditional methods and the `eval`/`query` method is usually not significant—if anything, the traditional method is faster for smaller arrays!\n", "The benefit of `eval`/`query` is mainly in the saved memory, and the sometimes cleaner syntax they offer.\n", "\n", "We've covered most of the details of `eval` and `query` here; for more information on these, you can refer to the Pandas documentation.\n", "In particular, different parsers and engines can be specified for running these queries; for details on this, see the discussion within the [\"Enhancing Performance\" section](https://pandas.pydata.org/pandas-docs/dev/user_guide/enhancingperf.html) of the documentation." ] } ], "metadata": { "anaconda-cloud": {}, "jupytext": { "formats": "ipynb,md" }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.2" } }, "nbformat": 4, "nbformat_minor": 4 }