{ "cells": [
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "DweYe9FcbMK_" }, "source": [ "##### Copyright 2019 The TensorFlow Authors.\n" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "cellView": "form", "colab": {}, "colab_type": "code", "id": "AVV2e0XKbJeX" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "sUtoed20cRJJ" }, "source": [ "# Load CSV data" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "C-3Xbt0FfGfs" }, "source": [ "This tutorial provides an example of how to load CSV data from a file into a `tf.data.Dataset`.\n", "\n", "The data used in this tutorial are taken from the Titanic passenger list. The model will predict the likelihood a passenger survived based on characteristics like age, gender, ticket class, and whether the person was traveling alone." ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "fgZ9gjmPfSnK" }, "source": [ "## Setup" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "baYFZMW_bJHh" }, "outputs": [], "source": [ "import functools\n", "\n", "import numpy as np\n", "import tensorflow as tf" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "Ncf5t6tgL5ZI" }, "outputs": [], "source": [ "TRAIN_DATA_URL = \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\"\n", "TEST_DATA_URL = \"https://storage.googleapis.com/tf-datasets/titanic/eval.csv\"\n", "\n", "train_file_path = tf.keras.utils.get_file(\"train.csv\", TRAIN_DATA_URL)\n", "test_file_path = tf.keras.utils.get_file(\"eval.csv\", TEST_DATA_URL)" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "4ONE94qulk6S" }, "outputs": [], "source": [ "# Make numpy values easier to read.\n", "np.set_printoptions(precision=3, suppress=True)" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Wuqj601Qw0Ml" }, "source": [ "## Load data\n", "\n", "To start, let's look at the top of the CSV file to see how it is formatted." ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "54Dv7mCrf9Yw" }, "outputs": [], "source": [ "!head {train_file_path}" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "jC9lRhV-q_R3" }, "source": [ "You can [load this using pandas](pandas_dataframe.ipynb) and pass the NumPy arrays to TensorFlow. If you need to scale up to a large set of files, or need a loader that integrates with [TensorFlow and tf.data](../../guide/data.ipynb), then use the `tf.data.experimental.make_csv_dataset` function:" ] },
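{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "pandas-sketch-md" }, "source": [ "For example, here is a minimal sketch of the pandas route, using the `train_file_path` downloaded above. The `titanic_df`, `titanic_labels`, and `titanic_ds` names are only illustrative and are not used in the rest of this tutorial:" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "pandas-sketch-code" }, "outputs": [], "source": [ "import pandas as pd\n", "\n", "titanic_df = pd.read_csv(train_file_path)\n", "titanic_labels = titanic_df.pop('survived')\n", "\n", "# A dict of columns keeps the mixed dtypes (strings and numbers) intact.\n", "titanic_ds = tf.data.Dataset.from_tensor_slices((dict(titanic_df), titanic_labels.values))" ] },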
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "67mfwr4v-mN_" }, "source": [ "The only column you need to identify explicitly is the one with the value that the model is intended to predict." ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "iXROZm5f3V4E" }, "outputs": [], "source": [ "LABEL_COLUMN = 'survived'\n", "LABELS = [0, 1]" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "t4N-plO4tDXd" }, "source": [ "Now read the CSV data from the file and create a dataset.\n", "\n", "(For the full documentation, see `tf.data.experimental.make_csv_dataset`.)\n" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "yIbUscB9sqha" }, "outputs": [], "source": [ "def get_dataset(file_path, **kwargs):\n", "  dataset = tf.data.experimental.make_csv_dataset(\n", "      file_path,\n", "      batch_size=5, # Artificially small to make examples easier to show.\n", "      label_name=LABEL_COLUMN,\n", "      na_value=\"?\",\n", "      num_epochs=1,\n", "      ignore_errors=True,\n", "      **kwargs)\n", "  return dataset\n", "\n", "raw_train_data = get_dataset(train_file_path)\n", "raw_test_data = get_dataset(test_file_path)" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "v4oMO9MIxgTG" }, "outputs": [], "source": [ "def show_batch(dataset):\n", "  for batch, label in dataset.take(1):\n", "    for key, value in batch.items():\n", "      print(\"{:20s}: {}\".format(key, value.numpy()))" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "vHUQFKoQI6G7" }, "source": [ "Each item in the dataset is a batch, represented as a tuple of (*many examples*, *many labels*). The data from the examples is organized in column-based tensors (rather than row-based tensors), each with as many elements as the batch size (5 in this case).\n", "\n", "It might help to see this for yourself." ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "HjrkJROoxoll" }, "outputs": [], "source": [ "show_batch(raw_train_data)" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "YOYKQKmMj3D6" }, "source": [ "As you can see, the columns in the CSV are named. The dataset constructor will pick these names up automatically. If the file you are working with does not contain the column names in the first line, pass them as a list of strings to the `column_names` argument of the `make_csv_dataset` function." ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "2Av8_9L3tUg1" }, "outputs": [], "source": [ "CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone']\n", "\n", "temp_dataset = get_dataset(train_file_path, column_names=CSV_COLUMNS)\n", "\n", "show_batch(temp_dataset)" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "gZfhoX7bR9u4" }, "source": [ "This example is going to use all the available columns. If you need to omit some columns from the dataset, create a list of just the columns you plan to use, and pass it into the (optional) `select_columns` argument of the constructor.\n" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "S1TzSkUKwsNP" }, "outputs": [], "source": [ "SELECT_COLUMNS = ['survived', 'age', 'n_siblings_spouses', 'class', 'deck', 'alone']\n", "\n", "temp_dataset = get_dataset(train_file_path, select_columns=SELECT_COLUMNS)\n", "\n", "show_batch(temp_dataset)" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "9cryz31lxs3e" }, "source": [ "## Data preprocessing\n", "\n", "A CSV file can contain a variety of data types. Typically you want to convert from those mixed types to a fixed-length vector before feeding the data into your model.\n", "\n", "TensorFlow has a built-in system for describing common input conversions: `tf.feature_column`. See [this tutorial](../keras/feature_columns) for details.\n", "\n", "You can preprocess your data using any tool you like (such as [nltk](https://www.nltk.org/) or [sklearn](https://scikit-learn.org/stable/)) and just pass the processed output to TensorFlow.\n", "\n", "The primary advantage of doing the preprocessing inside your model is that when you export the model, it includes the preprocessing. This way you can pass the raw data directly to your model." ] },
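{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "sklearn-sketch-md" }, "source": [ "For instance, here is a minimal sketch of the external route, standardizing the numeric columns with sklearn before handing the result to `tf.data`. This assumes `sklearn` is installed; the `raw_df` and `external_ds` names are illustrative and unused below:" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "sklearn-sketch-code" }, "outputs": [], "source": [ "import pandas as pd\n", "from sklearn.preprocessing import StandardScaler\n", "\n", "raw_df = pd.read_csv(train_file_path)\n", "numeric_cols = ['age', 'n_siblings_spouses', 'parch', 'fare']\n", "\n", "# Standardize outside the model; this logic will NOT travel with an exported model.\n", "raw_df[numeric_cols] = StandardScaler().fit_transform(raw_df[numeric_cols])\n", "\n", "labels = raw_df.pop('survived')\n", "external_ds = tf.data.Dataset.from_tensor_slices(\n", "    (dict(raw_df[numeric_cols]), labels.values)).batch(5)" ] },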
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "9AsbaFmCeJtF" }, "source": [ "### Continuous data" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Xl0Q0DcfA_rt" }, "source": [ "If your data is already in an appropriate numeric format, you can pack the data into a vector before passing it off to the model:" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "4Yfji3J5BMxz" }, "outputs": [], "source": [ "SELECT_COLUMNS = ['survived', 'age', 'n_siblings_spouses', 'parch', 'fare']\n", "DEFAULTS = [0, 0.0, 0.0, 0.0, 0.0]\n", "temp_dataset = get_dataset(train_file_path,\n", "                           select_columns=SELECT_COLUMNS,\n", "                           column_defaults=DEFAULTS)\n", "\n", "show_batch(temp_dataset)" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "zEUhI8kZCfq8" }, "outputs": [], "source": [ "example_batch, labels_batch = next(iter(temp_dataset))" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "IP45_2FbEKzn" }, "source": [ "Here's a simple function that will pack together all the columns:" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "JQ0hNSL8CC3a" }, "outputs": [], "source": [ "def pack(features, label):\n", "  return tf.stack(list(features.values()), axis=-1), label" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "75LA9DisEIoE" }, "source": [ "Apply this to each element of the dataset:" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "VnP2Z2lwCTRl" }, "outputs": [], "source": [ "packed_dataset = temp_dataset.map(pack)\n", "\n", "for features, labels in packed_dataset.take(1):\n", "  print(features.numpy())\n", "  print()\n", "  print(labels.numpy())" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "1VBvmaFrFU6J" }, "source": [ "If you have mixed data types, you may want to separate out these simple numeric fields. The `tf.feature_column` API can handle them, but this incurs some overhead and should be avoided unless really necessary. Switch back to the mixed dataset:" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "ad-IQ_JPFQge" }, "outputs": [], "source": [ "show_batch(raw_train_data)" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "HSrYNKKcIdav" }, "outputs": [], "source": [ "example_batch, labels_batch = next(iter(temp_dataset))" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "p5VtThKfGPaQ" }, "source": [ "So define a more general preprocessor that selects a list of numeric features and packs them into a single column:" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "5DRishYYGS-m" }, "outputs": [], "source": [ "class PackNumericFeatures(object):\n", "  def __init__(self, names):\n", "    self.names = names\n", "\n", "  def __call__(self, features, labels):\n", "    numeric_features = [features.pop(name) for name in self.names]\n", "    numeric_features = [tf.cast(feat, tf.float32) for feat in numeric_features]\n", "    numeric_features = tf.stack(numeric_features, axis=-1)\n", "    features['numeric'] = numeric_features\n", "\n", "    return features, labels" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "1SeZka9AHfqD" }, "outputs": [], "source": [ "NUMERIC_FEATURES = ['age', 'n_siblings_spouses', 'parch', 'fare']\n", "\n", "packed_train_data = raw_train_data.map(\n", "    PackNumericFeatures(NUMERIC_FEATURES))\n", "\n", "packed_test_data = raw_test_data.map(\n", "    PackNumericFeatures(NUMERIC_FEATURES))" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "wFrw0YobIbUB" }, "outputs": [], "source": [ "show_batch(packed_train_data)" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "_EPUS8fPLUb1" }, "outputs": [], "source": [ "example_batch, labels_batch = next(iter(packed_train_data))" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "o2maE8d2ijsq" }, "source": [ "#### Data normalization\n", "\n", "Continuous data should usually be normalized." ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "WKT1ASWpwH46" }, "outputs": [], "source": [ "import pandas as pd\n", "desc = pd.read_csv(train_file_path)[NUMERIC_FEATURES].describe()\n", "desc" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "cHHstcKPsMXM" }, "outputs": [], "source": [ "MEAN = np.array(desc.T['mean'])\n", "STD = np.array(desc.T['std'])" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "REKqO_xHPNx0" }, "outputs": [], "source": [ "def normalize_numeric_data(data, mean, std):\n", "  # Center and scale the data.\n", "  return (data - mean) / std\n" ] },
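{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "normalize-check-md" }, "source": [ "As an optional sanity check, you can apply this function by hand to the `'numeric'` column of the batch you grabbed above:" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "normalize-check-code" }, "outputs": [], "source": [ "# Apply the normalizer directly to one packed batch of numeric data.\n", "normalize_numeric_data(example_batch['numeric'], MEAN, STD)" ] },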
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "VPsoMUgRCpUM" }, "source": [ "Now create a numeric column. The `tf.feature_column.numeric_column` API accepts a `normalizer_fn` argument, which will be run on each batch.\n", "\n", "Bind the `MEAN` and `STD` to the normalizer function using [`functools.partial`](https://docs.python.org/3/library/functools.html#functools.partial)." ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "Bw0I35xRS57V" }, "outputs": [], "source": [ "normalizer = functools.partial(normalize_numeric_data, mean=MEAN, std=STD)\n", "\n", "numeric_column = tf.feature_column.numeric_column('numeric', normalizer_fn=normalizer, shape=[len(NUMERIC_FEATURES)])\n", "numeric_columns = [numeric_column]\n", "\n", "# See what you just created.\n", "numeric_column" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "HZxcHXc6LCa7" }, "source": [ "When you train the model, include this feature column to select and center this block of numeric data:" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "b61NM76Ot_kb" }, "outputs": [], "source": [ "example_batch['numeric']" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "j-r_4EAJAZoI" }, "outputs": [], "source": [ "numeric_layer = tf.keras.layers.DenseFeatures(numeric_columns)\n", "numeric_layer(example_batch).numpy()" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "M37oD2VcCO4R" }, "source": [ "The mean-based normalization used here requires knowing the mean of each column ahead of time." ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "tSyrkSQwYHKi" }, "source": [ "### Categorical data\n", "\n", "Some of the columns in the CSV data are categorical columns. That is, the content should be one of a limited set of options.\n", "\n", "Use the `tf.feature_column` API to create a collection with a `tf.feature_column.indicator_column` for each categorical column.\n" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "mWDniduKMw-C" }, "outputs": [], "source": [ "CATEGORIES = {\n", "    'sex': ['male', 'female'],\n", "    'class': ['First', 'Second', 'Third'],\n", "    'deck': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],\n", "    'embark_town': ['Cherbourg', 'Southampton', 'Queenstown'],\n", "    'alone': ['y', 'n']\n", "}\n" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "kkxLdrsLwHPT" }, "outputs": [], "source": [ "categorical_columns = []\n", "for feature, vocab in CATEGORIES.items():\n", "  cat_col = tf.feature_column.categorical_column_with_vocabulary_list(\n", "      key=feature, vocabulary_list=vocab)\n", "  categorical_columns.append(tf.feature_column.indicator_column(cat_col))" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "H18CxpHY_Nma" }, "outputs": [], "source": [ "# See what you just created.\n", "categorical_columns" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "p7mACuOsArUH" }, "outputs": [], "source": [ "categorical_layer = tf.keras.layers.DenseFeatures(categorical_columns)\n", "print(categorical_layer(example_batch).numpy()[0])" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "R7-1QG99_1sN" }, "source": [ "This will become part of a data processing input later when you build the model." ] },
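{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "allclose-check-md" }, "source": [ "Before combining the columns, you can optionally confirm that the numeric layer reproduces the manual computation (a small check using the objects defined above):" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "allclose-check-code" }, "outputs": [], "source": [ "# The DenseFeatures output should match the manual (x - MEAN) / STD computation.\n", "manual = (example_batch['numeric'].numpy() - MEAN) / STD\n", "print(np.allclose(manual, numeric_layer(example_batch).numpy()))" ] },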
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "kPWkC4_1l3IG" }, "source": [ "### Combined preprocessing layer" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "R3QAjo1qD4p9" }, "source": [ "Add the two feature column collections and pass them to a `tf.keras.layers.DenseFeatures` layer to create an input layer that will extract and preprocess both input types:" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "3-OYK7GnaH0r" }, "outputs": [], "source": [ "preprocessing_layer = tf.keras.layers.DenseFeatures(categorical_columns + numeric_columns)" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "m7_U_K0UMSVS" }, "outputs": [], "source": [ "print(preprocessing_layer(example_batch).numpy()[0])" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "DlF_omQqtnOP" }, "source": [ "## Build the model" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "lQoFh16LxtT_" }, "source": [ "Build a `tf.keras.Sequential`, starting with the `preprocessing_layer`." ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "3mSGsHTFPvFo" }, "outputs": [], "source": [ "model = tf.keras.Sequential([\n", "  preprocessing_layer,\n", "  tf.keras.layers.Dense(128, activation='relu'),\n", "  tf.keras.layers.Dense(128, activation='relu'),\n", "  tf.keras.layers.Dense(1),\n", "])\n", "\n", "model.compile(\n", "    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n", "    optimizer='adam',\n", "    metrics=['accuracy'])" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "hPdtI2ie0lEZ" }, "source": [ "## Train, evaluate, and predict" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "8gvw1RE9zXkD" }, "source": [ "Now the model can be trained." ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "sW-4XlLeEQ2B" }, "outputs": [], "source": [ "train_data = packed_train_data.shuffle(500)\n", "test_data = packed_test_data" ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "Q_nm28IzNDTO" }, "outputs": [], "source": [ "model.fit(train_data, epochs=20)" ] },
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "QyDMgBurzqQo" }, "source": [ "Once the model is trained, you can check its accuracy on the `test_data` set." ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "eB3R3ViVONOp" }, "outputs": [], "source": [ "test_loss, test_accuracy = model.evaluate(test_data)\n", "\n", "print('\\n\\nTest Loss {}, Test Accuracy {}'.format(test_loss, test_accuracy))" ] },
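{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "export-sketch-md" }, "source": [ "Because the preprocessing layer is part of the model, exporting the model carries the preprocessing with it. The cell below is only a sketch, not a guaranteed recipe: it assumes your TensorFlow version can serialize the feature-column layers, and the `'titanic_model'` path is illustrative." ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "export-sketch-code" }, "outputs": [], "source": [ "# Export the trained model; the preprocessing layer is saved as part of it.\n", "model.save('titanic_model', save_format='tf')\n", "\n", "# Reload and run on the raw packed batches; no external normalization is needed.\n", "reloaded = tf.keras.models.load_model('titanic_model')\n", "reloaded.predict(test_data)[:5]" ] },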
{ "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "sTrn_pD90gdJ" }, "source": [ "Use `tf.keras.Model.predict` to infer labels on a batch or a dataset of batches." ] },
{ "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "Qwcx74F3ojqe" }, "outputs": [], "source": [ "predictions = model.predict(test_data)\n", "\n", "# Show some results\n", "for prediction, survived in zip(predictions[:10], list(test_data)[0][1][:10]):\n", "  prediction = tf.sigmoid(prediction).numpy()\n", "  print(\"Predicted survival: {:.2%}\".format(prediction[0]),\n", "        \" | Actual outcome: \",\n", "        (\"SURVIVED\" if bool(survived) else \"DIED\"))\n" ] }
], "metadata": { "colab": { "collapsed_sections": [], "name": "csv.ipynb", "private_outputs": true, "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }