{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Handling Missing Data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The difference between data found in many tutorials and data in the real world is that real-world data is rarely clean and homogeneous.\n", "In particular, many interesting datasets will have some amount of data missing.\n", "To make matters even more complicated, different data sources may indicate missing data in different ways.\n", "\n", "In this chapter, we will discuss some general considerations for missing data, look at how Pandas chooses to represent it, and explore some built-in Pandas tools for handling missing data in Python.\n", "Here and throughout the book, I will refer to missing data in general as *null*, *NaN*, or *NA* values." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Trade-offs in Missing Data Conventions\n", "\n", "A number of approaches have been developed to track the presence of missing data in a table or `DataFrame`.\n", "Generally, they revolve around one of two strategies: using a *mask* that globally indicates missing values, or choosing a *sentinel value* that indicates a missing entry.\n", "\n", "In the masking approach, the mask might be an entirely separate Boolean array, or it might involve appropriation of one bit in the data representation to locally indicate the null status of a value.\n", "\n", "In the sentinel approach, the sentinel value could be some data-specific convention, such as indicating a missing integer value with –9999 or some rare bit pattern, or it could be a more global convention, such as indicating a missing floating-point value with `NaN` (Not a Number), a special value that is part of the IEEE floating-point specification.\n", "\n", "Neither of these approaches is without trade-offs. Use of a separate mask array requires allocation of an additional Boolean array, which adds overhead in both storage and computation. A sentinel value reduces the range of valid values that can be represented, and may require extra (often nonoptimized) logic in CPU and GPU arithmetic, because common special values like `NaN` are not available for all data types.\n", "\n", "As in most cases where no universally optimal choice exists, different languages and systems use different conventions.\n", "For example, the R language uses reserved bit patterns within each data type as sentinel values indicating missing data, while the SciDB system uses an extra byte attached to every cell to indicate an NA state." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Missing Data in Pandas\n", "\n", "The way in which Pandas handles missing values is constrained by its reliance on the NumPy package, which does not have a built-in notion of NA values for non-floating-point data types.\n", "\n", "Perhaps Pandas could have followed R's lead in specifying bit patterns for each individual data type to indicate nullness, but this approach turns out to be rather unwieldy.\n", "While R has just 4 main data types, NumPy supports *far* more than this: for example, while R has a single integer type, NumPy supports 14 basic integer types once you account for available bit widths, signedness, and endianness of the encoding.\n", "Reserving a specific bit pattern in all available NumPy types would lead to an unwieldy amount of overhead in special-casing various operations for various types, likely even requiring a new fork of the NumPy package. 
Further, for the smaller data types (such as 8-bit integers), sacrificing a bit to use as a mask would significantly reduce the range of values it can represent.\n", "\n", "Because of these constraints and trade-offs, Pandas has two \"modes\" of storing and manipulating null values:\n", "\n", "- The default mode is to use a sentinel-based missing data scheme, with sentinel values `NaN` or `None` depending on the type of the data.\n", "- Alternatively, you can opt in to using the nullable data types (dtypes) Pandas provides (discussed later in this chapter), which results in the creation of an accompanying mask array to track missing entries. These missing entries are then presented to the user as the special `pd.NA` value.\n", "\n", "In either case, the data operations and manipulations provided by the Pandas API will handle and propagate those missing entries in a predictable manner. But to develop some intuition into *why* these choices are made, let's dive quickly into the trade-offs inherent in `None`, `NaN`, and `NA`. As usual, we'll start by importing NumPy and Pandas:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### None as a Sentinel Value\n", "\n", "For some data types, Pandas uses `None` as a sentinel value. `None` is a Python object, which means that any array containing `None` must have `dtype=object`—that is, it must be a sequence of Python objects.\n", "\n", "For example, observe what happens if you pass `None` to a NumPy array:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([1, None, 2, 3], dtype=object)" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vals1 = np.array([1, None, 2, 3])\n", "vals1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This `dtype=object` means that the best common type representation NumPy could infer for the contents of the array is that they are Python objects.\n", "The downside of using `None` in this way is that operations on the data will be done at the Python level, with much more overhead than the typically fast operations seen for arrays with native types:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2.73 ms ± 288 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n" ] } ], "source": [ "%timeit np.arange(1E6, dtype=int).sum()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "92.1 ms ± 3.42 ms per loop (mean ± std. dev. 
of 7 runs, 10 loops each)\n" ] } ], "source": [ "%timeit np.arange(1E6, dtype=object).sum()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Further, because Python does not support arithmetic operations with `None`, aggregations like `sum` or `min` will generally lead to an error:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "ename": "TypeError", "evalue": "unsupported operand type(s) for +: 'int' and 'NoneType'", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m/var/folders/xc/sptt9bk14s34rgxt7453p03r0000gp/T/ipykernel_91333/1181914653.py\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mvals1\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msum\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;32m~/.local/share/virtualenvs/python-data-science-handbook-2e-u_kwqDTB/lib/python3.9/site-packages/numpy/core/_methods.py\u001b[0m in \u001b[0;36m_sum\u001b[0;34m(a, axis, dtype, out, keepdims, initial, where)\u001b[0m\n\u001b[1;32m 46\u001b[0m def _sum(a, axis=None, dtype=None, out=None, keepdims=False,\n\u001b[1;32m 47\u001b[0m initial=_NoValue, where=True):\n\u001b[0;32m---> 48\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mumr_sum\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0ma\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0maxis\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mout\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkeepdims\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minitial\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mwhere\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 49\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 50\u001b[0m def _prod(a, axis=None, dtype=None, out=None, keepdims=False,\n", "\u001b[0;31mTypeError\u001b[0m: unsupported operand type(s) for +: 'int' and 'NoneType'" ] } ], "source": [ "vals1.sum()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this reason, Pandas does not use `None` as a sentinel in its numerical arrays." 
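, "\n", "\n", "If you do find yourself with an object array containing `None`, one workaround (a sketch using `pd.isna`, which accepts NumPy arrays as well as Pandas objects) is to mask out the missing entries before aggregating:\n", "\n", "```python\n", "mask = pd.isna(vals1)   # array([False,  True, False, False])\n", "vals1[~mask].sum()      # 6: the aggregation succeeds once None is removed\n", "```"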
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### NaN: Missing Numerical Data\n", "\n", "The other missing data sentinel, `NaN` is different; it is a special floating-point value recognized by all systems that use the standard IEEE floating-point representation:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "array([ 1., nan, 3., 4.])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vals2 = np.array([1, np.nan, 3, 4]) \n", "vals2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that NumPy chose a native floating-point type for this array: this means that unlike the object array from before, this array supports fast operations pushed into compiled code.\n", "Keep in mind that `NaN` is a bit like a data virus—it infects any other object it touches.\n", "Regardless of the operation, the result of arithmetic with `NaN` will be another `NaN`:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "nan" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "1 + np.nan" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "nan" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "0 * np.nan" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This means that aggregates over the values are well defined (i.e., they don't result in an error) but not always useful:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "(nan, nan, nan)" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vals2.sum(), vals2.min(), vals2.max()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That said, NumPy does provide ``NaN``-aware versions of aggregations that will ignore these missing values:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "(8.0, 1.0, 4.0)" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "np.nansum(vals2), np.nanmin(vals2), np.nanmax(vals2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The main downside of `NaN` is that it is specifically a floating-point value; there is no equivalent `NaN` value for integers, strings, or other types." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### NaN and None in Pandas\n", "\n", "`NaN` and `None` both have their place, and Pandas is built to handle the two of them nearly interchangeably, converting between them where appropriate:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "0 1.0\n", "1 NaN\n", "2 2.0\n", "3 NaN\n", "dtype: float64" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.Series([1, np.nan, 2, None])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For types that don't have an available sentinel value, Pandas automatically typecasts when NA values are present.\n", "For example, if we set a value in an integer array to ``np.nan``, it will automatically be upcast to a floating-point type to accommodate the NA:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "0 0\n", "1 1\n", "dtype: int64" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = pd.Series(range(2), dtype=int)\n", "x" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "0 NaN\n", "1 1.0\n", "dtype: float64" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x[0] = None\n", "x" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that in addition to casting the integer array to floating point, Pandas automatically converts the ``None`` to a ``NaN`` value.\n", "\n", "While this type of magic may feel a bit hackish compared to the more unified approach to NA values in domain-specific languages like R, the Pandas sentinel/casting approach works quite well in practice and in my experience only rarely causes issues.\n", "\n", "The following table lists the upcasting conventions in Pandas when NA values are introduced:\n", "\n", "|Typeclass | Conversion when storing NAs | NA sentinel value |\n", "|--------------|-----------------------------|------------------------|\n", "| ``floating`` | No change | ``np.nan`` |\n", "| ``object`` | No change | ``None`` or ``np.nan`` |\n", "| ``integer`` | Cast to ``float64`` | ``np.nan`` |\n", "| ``boolean`` | Cast to ``object`` | ``None`` or ``np.nan`` |\n", "\n", "Keep in mind that in Pandas, string data is always stored with an ``object`` dtype." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pandas Nullable Dtypes" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "In early versions of Pandas, `NaN` and `None` as sentinel values were the only missing data representations available. The primary difficulty this introduced was with regard to the implicit type casting: for example, there was no way to represent a true integer array with missing data.\n", "\n", "To address this difficulty, Pandas later added *nullable dtypes*, which are distinguished from regular dtypes by capitalization of their names (e.g., `pd.Int32` versus `np.int32`). 
For backward compatibility, these nullable dtypes are only used if specifically requested.\n", "\n", "For example, here is a `Series` of integers with missing data, created from a list containing all three available markers of missing data:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 1\n", "1 <NA>\n", "2 2\n", "3 <NA>\n", "4 <NA>\n", "dtype: Int32" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.Series([1, np.nan, 2, None, pd.NA], dtype='Int32')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This representation can be used interchangeably with the others in all the operations explored through the rest of this chapter." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Operating on Null Values\n", "\n", "As we have seen, Pandas treats `None`, `NaN`, and `NA` as essentially interchangeable for indicating missing or null values.\n", "To facilitate this convention, Pandas provides several methods for detecting, removing, and replacing null values in Pandas data structures.\n", "They are:\n", "\n", "- ``isnull``: Generates a Boolean mask indicating missing values\n", "- ``notnull``: Opposite of ``isnull``\n", "- ``dropna``: Returns a filtered version of the data\n", "- ``fillna``: Returns a copy of the data with missing values filled or imputed\n", "\n", "We will conclude this chapter with a brief exploration and demonstration of these routines." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Detecting Null Values\n", "Pandas data structures have two useful methods for detecting null data: `isnull` and `notnull`.\n", "Either one will return a Boolean mask over the data. For example:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "tags": [] }, "outputs": [], "source": [ "data = pd.Series([1, np.nan, 'hello', None])" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "0 False\n", "1 True\n", "2 False\n", "3 True\n", "dtype: bool" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data.isnull()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As mentioned in [Data Indexing and Selection](03.02-Data-Indexing-and-Selection.ipynb), Boolean masks can be used directly as a `Series` or `DataFrame` index:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "0 1\n", "2 hello\n", "dtype: object" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data[data.notnull()]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `isnull()` and `notnull()` methods produce similar Boolean results for ``DataFrame`` objects." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dropping Null Values\n", "\n", "In addition to these masking methods, there are the convenience methods `dropna`\n", "(which removes NA values) and `fillna` (which fills in NA values). 
For a `Series`,\n", "the result is straightforward:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "0 1\n", "2 hello\n", "dtype: object" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data.dropna()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For a ``DataFrame``, there are more options.\n", "Consider the following ``DataFrame``:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
012
01.0NaN2
12.03.05
2NaN4.06
\n", "
" ], "text/plain": [ " 0 1 2\n", "0 1.0 NaN 2\n", "1 2.0 3.0 5\n", "2 NaN 4.0 6" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.DataFrame([[1, np.nan, 2],\n", " [2, 3, 5],\n", " [np.nan, 4, 6]])\n", "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We cannot drop single values from a `DataFrame`; we can only drop entire rows or columns.\n", "Depending on the application, you might want one or the other, so `dropna` includes a number of options for a `DataFrame`.\n", "\n", "By default, `dropna` will drop all rows in which *any* null value is present:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
012
12.03.05
\n", "
" ], "text/plain": [ " 0 1 2\n", "1 2.0 3.0 5" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.dropna()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alternatively, you can drop NA values along a different axis. Using `axis=1` or `axis='columns'` drops all columns containing a null value:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
2
02
15
26
\n", "
" ], "text/plain": [ " 2\n", "0 2\n", "1 5\n", "2 6" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.dropna(axis='columns')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But this drops some good data as well; you might rather be interested in dropping rows or columns with *all* NA values, or a majority of NA values.\n", "This can be specified through the `how` or `thresh` parameters, which allow fine control of the number of nulls to allow through.\n", "\n", "The default is `how='any'`, such that any row or column containing a null value will be dropped.\n", "You can also specify `how='all'`, which will only drop rows/columns that contain *all* null values:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0123
01.0NaN2NaN
12.03.05NaN
2NaN4.06NaN
\n", "
" ], "text/plain": [ " 0 1 2 3\n", "0 1.0 NaN 2 NaN\n", "1 2.0 3.0 5 NaN\n", "2 NaN 4.0 6 NaN" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df[3] = np.nan\n", "df" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
012
01.0NaN2
12.03.05
2NaN4.06
\n", "
" ], "text/plain": [ " 0 1 2\n", "0 1.0 NaN 2\n", "1 2.0 3.0 5\n", "2 NaN 4.0 6" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.dropna(axis='columns', how='all')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For finer-grained control, the `thresh` parameter lets you specify a minimum number of non-null values for the row/column to be kept:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0123
12.03.05NaN
\n", "
" ], "text/plain": [ " 0 1 2 3\n", "1 2.0 3.0 5 NaN" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.dropna(axis='rows', thresh=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, the first and last rows have been dropped because they each contain only two non-null values." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Filling Null Values\n", "\n", "Sometimes rather than dropping NA values, you'd like to replace them with a valid value.\n", "This value might be a single number like zero, or it might be some sort of imputation or interpolation from the good values.\n", "You could do this in-place using the `isnull` method as a mask, but because it is such a common operation Pandas provides the `fillna` method, which returns a copy of the array with the null values replaced.\n", "\n", "Consider the following `Series`:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "a 1\n", "b \n", "c 2\n", "d \n", "e 3\n", "dtype: Int32" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'), dtype='Int32')\n", "data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can fill NA entries with a single value, such as zero:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "a 1\n", "b 0\n", "c 2\n", "d 0\n", "e 3\n", "dtype: Int32" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data.fillna(0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can specify a forward fill to propagate the previous value forward:" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "a 1\n", "b 1\n", "c 2\n", "d 2\n", "e 3\n", "dtype: Int32" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# forward fill\n", "data.fillna(method='ffill')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or we can specify a backward fill to propagate the next values backward:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "a 1\n", "b 2\n", "c 2\n", "d 3\n", "e 3\n", "dtype: Int32" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# back fill\n", "data.fillna(method='bfill')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the case of a `DataFrame`, the options are similar, but we can also specify an `axis` along which the fills should take place:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0123
01.0NaN2NaN
12.03.05NaN
2NaN4.06NaN
\n", "
" ], "text/plain": [ " 0 1 2 3\n", "0 1.0 NaN 2 NaN\n", "1 2.0 3.0 5 NaN\n", "2 NaN 4.0 6 NaN" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0123
01.01.02.02.0
12.03.05.05.0
2NaN4.06.06.0
\n", "
" ], "text/plain": [ " 0 1 2 3\n", "0 1.0 1.0 2.0 2.0\n", "1 2.0 3.0 5.0 5.0\n", "2 NaN 4.0 6.0 6.0" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.fillna(method='ffill', axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that if a previous value is not available during a forward fill, the NA value remains." ] } ], "metadata": { "anaconda-cloud": {}, "jupytext": { "formats": "ipynb,md" }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.2" } }, "nbformat": 4, "nbformat_minor": 4 }